[
{
"msg_contents": "I'm working on patches to implement the (slightly brain damaged) SQL9x\ntime zone spec. This allows one to specify a numeric time offset for the\ntime zone.\n\nThe changes do not affect system catalogs, touching the date/time\nroutines in a few places and affecting gram.y and variable.c. To get the\nrange of choices to include those specified by the standard I am\nchanging the interface to SetPGVariable() to accept parser nodes rather\nthan just a string as an argument. This should make some of the other\nSET variables easier to support too.\n\nThese changes are not quite ready to go (since I need to go back and fix\nup the other parameters supported by SetPGVariable()) but I believe that\nthey are very low risk since the default behavior would stay the same.\n\nI expect to have this ready at the beginning of next week. One might\nconsider this a bug fix...\n\nComments?\n\n - Thomas\n",
"msg_date": "Fri, 12 Oct 2001 16:47:41 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "SQL99 time zones"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> I'm working on patches to implement the (slightly brain damaged) SQL9x\n> time zone spec. This allows one to specify a numeric time offset for the\n> time zone. [ ... ]\n> Comments?\n\nWhile this is doubtless a good thing, I'm starting to feel very itchy\nabout the fact that we've slipped beta a couple of weeks now while you\nhack up \"one more\" datetime improvement. We've got to have some closure\non this process.\n\nAt this point I guess the reasonable thing to do is say \"Beta on Monday,\nthis time for sure!\". If you haven't committed these changes by Monday,\nI think you should hold them over for 7.3.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 12 Oct 2001 14:27:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SQL99 time zones "
}
] |
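The SQL9x feature Thomas describes boils down to time zones given as fixed numeric offsets from UTC rather than by name. A minimal sketch of those semantics in Python (illustrative only, not PostgreSQL code):

```python
from datetime import datetime, timedelta, timezone

# A SQL99-style numeric time zone is just a fixed offset from UTC, e.g. -08:00.
numeric_zone = timezone(timedelta(hours=-8))

# Converting a UTC timestamp into that zone shifts it by the fixed offset;
# there are no DST rules, which is part of why the spec is "slightly brain damaged".
utc_time = datetime(2001, 10, 12, 16, 47, 41, tzinfo=timezone.utc)
local_time = utc_time.astimezone(numeric_zone)
print(local_time.isoformat())  # 2001-10-12T08:47:41-08:00
```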
[
{
"msg_contents": "Looks like Monday is our next beta target date. My mailbox is empty of\noutstanding patches except for an ecpg one I will apply tomorrow unless\nsomeone objects to it.\n\nThere are some patches still being worked on, but there always will be.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 13 Oct 2001 01:16:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Monday beta?"
}
] |
[
{
"msg_contents": "\n>What do folks think?\n>Take care,\n>Bill\n\nHello Bill,\n\nThe community have been waiting for packages for a long time. I don't \nbelieve you did it!!!\n\nIMHO most applications do not fully benefit from the power of PostgreSQL \nbecause transactions are performed at application lever \n(PHP/asp/Java/Application server). Sometimes, libraries are mapped to \ndatabase structure, which is nonsense when a simple view with left joins \ncan solve a problem.\n\nMost applications should be developed/ported at PostgreSQL level using the \nfull range of available tools (transactions, triggers, views, foreign keys, \nrules and off course PL/pgSQL). This is much easier and powerful. Then, all \nyou need is to display information using a good object-oriented language \n(Java/PHP).\n\nWith the help of packages, a lot of developers will probably release GPL \nlibraries and PostgreSQL will become the #1 database in the world.\n\nAt pgAdmin team, we were thinking of developing packages at client level. \nThis is nonsense when reading your paper. The ability of defining context \nlevels is a great feature. Question: how do you map package to PostgreSQL \nobjects (tables, views, triggers)? Is there any possibility of defining \ntemplates? Can this be added to packages in the future with little impact \non PostgreSQL internals?\n\nNow, we can only thank you for bringing Packages to PostgreSQL.\n\nBest regards,\nJean-Michel POURE\npgAdmin Team\n",
"msg_date": "Sat, 13 Oct 2001 10:11:39 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Package support for Postgres"
},
{
"msg_contents": "On Sat, 13 Oct 2001, Jean-Michel POURE wrote:\n\n> >What do folks think?\n> >Take care,\n> >Bill\n>\n> Hello Bill,\n>\n> The community have been waiting for packages for a long time. I don't\n> believe you did it!!!\n>\n> IMHO most applications do not fully benefit from the power of PostgreSQL\n> because transactions are performed at application lever\n> (PHP/asp/Java/Application server). Sometimes, libraries are mapped to\n> database structure, which is nonsense when a simple view with left joins\n> can solve a problem.\n>\n> Most applications should be developed/ported at PostgreSQL level using the\n> full range of available tools (transactions, triggers, views, foreign keys,\n> rules and off course PL/pgSQL). This is much easier and powerful. Then, all\n> you need is to display information using a good object-oriented language\n> (Java/PHP).\n>\n> With the help of packages, a lot of developers will probably release GPL\n> libraries and PostgreSQL will become the #1 database in the world.\n\nYep. PostgreSQL is within reach of really challenging the commercial\ndatabases. I think the core developers are working on the changes needed\nto challenge the commercial db's in terms of speed and performance for big\ndatastores (WAL, working to prevent OID rollover, etc.). Packages address\na different side of what will be needed to challenge the big boys - better\nstored procedure support. :-)\n\n> At pgAdmin team, we were thinking of developing packages at client level.\n> This is nonsense when reading your paper. The ability of defining context\n> levels is a great feature. Question: how do you map package to PostgreSQL\n> objects (tables, views, triggers)? Is there any possibility of defining\n> templates? Can this be added to packages in the future with little impact\n> on PostgreSQL internals?\n\nPackages don't really map to DB objects (tables, views, triggers) at the\nmoment. Have you used Oracle much? 
These packages are a direct translation\nof Oracle packages, with a few PostgreSQL extensions thrown in (Oracle\ndoesn't have PostgreSQL's ability to add aggregates, operators, and system\ntypes AFAIK, so their packages likewise don't, and types in packages AFAIK\nare package-specific).\n\nI forget who said it, but operators (and aggregates) are basically just\nsugar wrapped around functions; these packages are another form of sugar\nwrapped around functions. To start adding views and tables and triggers\nmakes packages more than just special sugar around functions.\n\nAlso, my big concern is that if we start adding tables and views and\ntriggers to packages, pg_dump becomes a nightmare.\n\n> Now, we can only thank you for bringing Packages to PostgreSQL.\n\nYou're welcome.\n\nTake care,\n\nBill\n\n",
"msg_date": "Sun, 14 Oct 2001 04:42:43 -0700 (PDT)",
"msg_from": "Bill Studenmund <wrstuden@netbsd.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Package support for Postgres"
}
] |
[
{
"msg_contents": "Do we still need code to warn during VACUUM when you get near to OID\nwraparound? I know Tom has handled XID wraparound and has OID usage\ndecreased.\n\nI have a patch to warn about OID wraparound but don't know if it is\nstill desired.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 13 Oct 2001 13:16:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Warning of OID wraparound"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Do we still need code to warn during VACUUM when you get near to OID\n> wraparound?\n\nI don't think so.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 13 Oct 2001 13:55:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Warning of OID wraparound "
}
] |
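For background: OIDs, like transaction IDs, are unsigned 32-bit counters, so "wraparound" means the counter passing 2^32 and starting over from the bottom. A toy sketch of the arithmetic (illustrative only; the real backend's counter handling differs in details, such as skipping reserved values):

```python
# OIDs are unsigned 32-bit integers, so the counter wraps at 2**32.
OID_MASK = 2**32 - 1

def next_oid(oid):
    # Advance the counter, wrapping back to zero past the 32-bit limit.
    return (oid + 1) & OID_MASK

print(next_oid(5))          # 6
print(next_oid(2**32 - 1))  # 0  <- the wraparound the warning would flag
```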
[
{
"msg_contents": "While looking to implement the ODBC replace() function (replace occurences\nof $2 in $1 by $3), I found that it could be expressed as:\n\nCREATE FUNCTION replace(text, text, text) RETURNS text AS '\n select\n case when position($2 in $1) = 0 or char_length($2) = 0\n then $1\n else substring($1 from 1 for position($2 in $1) - 1)\n || $3\n || replace(substring($1 from position($2 in $1) + char_length($2)), $2, $3)\n end;\n' LANGUAGE SQL WITH (isstrict);\n\nNow this command doesn't actually work because it requires the replace()\nfunction to exist already. But it does work if one first creates a stub\nreplace() function and then uses CREATE OR REPLACE.\n\n(So much about the claim that procedural languages are a security hole\nbecause they allow infinite loops.)\n\nI was wondering whether, as a future project, we could make this more\nconvenient by parsing the body of the function with the binding of the\nfunction already in effect.\n\nComments?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 13 Oct 2001 20:15:16 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Recursive SQL functions"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I was wondering whether, as a future project, we could make this more\n> convenient by parsing the body of the function with the binding of the\n> function already in effect.\n\nSeems like a simple rearrangement of the code. First insert the pg_proc\nentry, then CommandCounterIncrement, then do the parsing/checking of the\nfunction body. Given the CCI, the new entry will be visible for the\nchecking --- and if we error out, it rolls back just fine anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 13 Oct 2001 19:39:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Recursive SQL functions "
}
] |
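Peter's SQL function is a textbook linear recursion, and the same logic reads naturally in Python. This is a sketch of the semantics only; note that Python's find() is 0-based and returns -1 on no match, where SQL's position() is 1-based and returns 0:

```python
def replace(s, old, new):
    # case when position(old in s) = 0 or char_length(old) = 0 then s
    pos = s.find(old)
    if pos == -1 or len(old) == 0:
        return s
    # prefix before the match || replacement || recurse on the remaining tail
    return s[:pos] + new + replace(s[pos + len(old):], old, new)

print(replace("banana", "an", "*"))  # b**a
```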
[
{
"msg_contents": "I'm not sure what the answer to your problem is, but I'm sure you have the\nwrong approach.\n\nFor all practical purposes, client/server database programming is a\nmultiprocessing problem set. What you are trying to implement is a mutex. A\nmutex is a mutual exclusion tool. You can't reliably do what you think you are\ndoing. \n\nIf one process asks if something is locked, and the answer is no, in the\ninterim time another process can do the same thing. You will still have a\ndeadlock situation because [n] processes can read something as unlocked, and\nthen set themselves on a course of action in which all will attempt to lock.\nYou may reduce the probability, but you can not eliminate it.\n\nYou will need to come up with a mutex protocol. i.e. \n\n<?php\n\t$res = pg_exec($conn, \"select mylock()\");\n\n\tif(pg_Result($res, \"mylock\") == \"yes\")){\n\t\t(***)\n\t\tpg_exec($conn, \"select myunlock()\");\n\t}\n\telse\n\t\t// is locked do something else\n?>\n\nThe \"mylock\" and the \"myunlock\" have to work across the multiple PostgreSQL\nprocesses and use SYSV semaphore or something to manage the lock.\n\nSo, what end result are you trying to have? Are you saying you want one user to\nbe able to lock a table for a series of transactions, while another can use it\nin a readonly fashion? But it only gets read-only access if something is\nalready locked? What if it needs to update? Since you mention PHP, I assume\nthis is a web site or something. Since you mention login, I assume you are\nwriting some sort of session manager.\n\nAFAIK SQL does not have the concept of a testable Mutex, you will have to write\nyour own. 
But if you are doing a session manager in PHP, email me directly, I\nhave a number of suggestions.\n\n\n\n\nMaurizio Ortolan wrote:\n> \n> Hello to everybody!\n> \n> I've a little problem with LOCK-ing a\n> certain row in a table using PHP and\n> PostgreSQL on LINUX.\n> \n> >> In a few words, I'd like to undertand\n> >> how find out if a certain row is locked,\n> >> in order to prevent a kind of deadlock.\n> \n> Which is the (system) table where all\n> locked row or tables are 'saved' ?\n> Is there any flag?\n> \n> // ############################\n> Example 1:\n> \n> User A:\n> \n> BEGIN WORK;\n> select login from people where userid='1' for update;\n> [ ... ]\n> COMMIT WORK;\n> \n> User B:\n> BEGIN WORK;\n> (***)\n> select login from people where userid='1' for update;\n> \n> [ WAIT UNTIL 'COMMIT WORK' of user A ! :( ]\n> \n> COMMIT WORK;\n> \n> Solution:\n> I'd like to put in (***) a quick check in order to\n> know if the row with userid='1' is already locked or not.\n> \n> In this way, if it's already locked, I'll use\n> select login from people where userid='1';\n> [ ONLY READ ]\n> instead of\n> select login from people where userid='1' for update;\n> [READ & WRITE]\n> \n> // ############################\n> Example 2:\n> \n> BEGIN WORK;\n> LOCK TABLE utenti IN SHARE ROW EXCLUSIVE MODE;\n> select login from people where userid='1';\n> COMMIT WORK;\n> \n> // ############################\n> \n> Many thanks to everybody!\n> Ciao!\n> MaURIZIO\n> \n> crix98@____tin.it\n> \n> It's sure that\n> a small example in PHP will very very appreciated!! :))\n> \n> PS: it's possible to setup a timeout for a locked table,\n> in order to exec an aoutomatic ROLLBACK ??\n> (for examples if the user goes away?\n> \n> *******************************************\n> ** Happy surfing on THE NET !! 
**\n> ** Ciao by **\n> ** C R I X 98 **\n> *******************************************\n> AntiSpam: rimuovere il trattino basso\n> dall'indirizzo per scrivermi...\n> (delete the underscore from the e-mail address to reply)\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Sun, 14 Oct 2001 08:51:18 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: php-psql lock problem. Thanks!"
},
{
"msg_contents": "Hello to everybody!\n\nI've a little problem with LOCK-ing a\ncertain row in a table using PHP and\nPostgreSQL on LINUX.\n\n >> In a few words, I'd like to undertand\n >> how find out if a certain row is locked,\n >> in order to prevent a kind of deadlock.\n\nWhich is the (system) table where all\nlocked row or tables are 'saved' ?\nIs there any flag?\n\n// ############################\nExample 1:\n\nUser A:\n\nBEGIN WORK;\nselect login from people where userid='1' for update;\n[ ... ]\nCOMMIT WORK;\n\nUser B:\nBEGIN WORK;\n(***)\nselect login from people where userid='1' for update;\n\n[ WAIT UNTIL 'COMMIT WORK' of user A ! :( ]\n\nCOMMIT WORK;\n\nSolution:\nI'd like to put in (***) a quick check in order to\nknow if the row with userid='1' is already locked or not.\n\nIn this way, if it's already locked, I'll use\n select login from people where userid='1';\n [ ONLY READ ]\ninstead of\n select login from people where userid='1' for update;\n [READ & WRITE]\n\n// ############################\nExample 2:\n\nBEGIN WORK;\nLOCK TABLE utenti IN SHARE ROW EXCLUSIVE MODE;\nselect login from people where userid='1';\nCOMMIT WORK;\n\n// ############################\n\n\nMany thanks to everybody!\nCiao!\nMaURIZIO\n\ncrix98@____tin.it\n\n\nIt's sure that\n a small example in PHP will very very appreciated!! :))\n\n\nPS: it's possible to setup a timeout for a locked table,\n in order to exec an aoutomatic ROLLBACK ??\n (for examples if the user goes away?\n\n*******************************************\n** Happy surfing on THE NET !! **\n** Ciao by **\n** C R I X 98 **\n*******************************************\nAntiSpam: rimuovere il trattino basso\n dall'indirizzo per scrivermi...\n(delete the underscore from the e-mail address to reply)\n\n",
"msg_date": "Sun, 14 Oct 2001 12:24:47 -0700",
"msg_from": "Maurizio Ortolan <crix98@tin.it>",
"msg_from_op": false,
"msg_subject": "php-psql lock problem. Thanks!"
}
] |
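The race mlw describes (check whether it is locked, then lock it, leaving a window for another process to slip in between) is avoided by making the test and the acquisition a single atomic step. That pattern can be sketched with an ordinary in-process lock; Python here is purely for illustration, and the mylock()/myunlock() names in the thread are hypothetical server-side functions, not PostgreSQL built-ins:

```python
import threading

row_lock = threading.Lock()

def access_row():
    # Atomic test-and-acquire: either we get the lock now, or we don't.
    # There is no window between "is it locked?" and "lock it" for another
    # process to slip through, which is the race described in the thread.
    if row_lock.acquire(blocking=False):
        try:
            return "read-write"   # analogous to SELECT ... FOR UPDATE
        finally:
            row_lock.release()
    return "read-only"            # analogous to falling back to a plain SELECT

print(access_row())  # read-write (the lock was free)
```

The same reasoning is why the protocol has to live on the server side: a lock tested and taken in two separate client round-trips can always be stolen in between.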
[
{
"msg_contents": "Hi all,\n\nIt would be very nice if PL/PgSQL could return a record set (ie, set of\ntuples). This could be done in two ways as far as I can imagine: either\nPL/PgSQL just returns the rows as a normal query would or it could return\na cursor. The prior would be very useful, the latter easier to implement\n(especially if INOUT arguments get implemented =)).\n\nCurrently, this seems to go against the grain of PL/PgSQL - am I missing\nsomething?\n\nGavin\n\n",
"msg_date": "Sun, 14 Oct 2001 23:05:15 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": true,
"msg_subject": "Feature Request - PL/PgSQL"
},
{
"msg_contents": "You already can return a cursor.\n\nSupport for returning a record set is being worked on.\n\n-alex\nOn Sun, 14 Oct 2001, Gavin Sherry wrote:\n\n> Hi all,\n> \n> It would be very nice if PL/PgSQL could return a record set (ie, set of\n> tuples). This could be done in two ways as far as I can imagine: either\n> PL/PgSQL just returns the rows as a normal query would or it could return\n> a cursor. The prior would be very useful, the latter easier to implement\n> (especially if INOUT arguments get implemented =)).\n> \n> Currently, this seems to go against the grain of PL/PgSQL - am I missing\n> something?\n> \n> Gavin\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> \n\n",
"msg_date": "Sun, 14 Oct 2001 11:08:30 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request - PL/PgSQL"
}
] |
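The two shapes Gavin asks about, returning the rows directly versus returning a cursor the caller fetches from, map loosely onto yielding values versus handing back an iterator. A rough Python analogy (illustrative only, with made-up data):

```python
def rows_directly():
    # "returns the rows as a normal query would": the caller just iterates.
    yield ("alice", 1)
    yield ("bob", 2)

def rows_via_cursor():
    # "returns a cursor": hand back a handle the caller fetches from later,
    # one row at a time, rather than the whole result set at once.
    return iter(rows_directly())

print(list(rows_directly()))  # [('alice', 1), ('bob', 2)]
cur = rows_via_cursor()
print(next(cur))              # ('alice', 1)
```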
[
{
"msg_contents": "\nThe http link to the snapshots no longer works on\nwww.ca.postgresql.org/ftpsite.\nIf you could correct that I would greatly appreciate it :-)\nAlso this link only works on some of the mirror sites, \nit would be great if all of them would have it (Germany works, Austria\ndoes not).\n\nI think a working http download would greatly reduce bandwidth on the\nservers.\n\nThanx in advance\nAndreas\n",
"msg_date": "Mon, 15 Oct 2001 11:43:57 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "http link to ftp download area broken"
},
{
"msg_contents": "On Mon, 15 Oct 2001, Zeugswetter Andreas SB SD wrote:\n\n>\n> The http link to the snapshots no longer works on\n> www.ca.postgresql.org/ftpsite.\n> If you could correct that I would greatly appreciate it :-)\n> Also this link only works on some of the mirror sites,\n> it would be great if all of them would have it (Germany works, Austria\n> does not).\n\nIt's fixed, but they all point to the same place so I don't see how\none would work and not the other. The place they point to was broken.\n\n> I think a working http download would greatly reduce bandwidth on the\n> servers.\n\nHow's that?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Mon, 15 Oct 2001 06:39:57 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: http link to ftp download area broken"
},
{
"msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> On Mon, 15 Oct 2001, Zeugswetter Andreas SB SD wrote:\n>> The http link to the snapshots no longer works on\n>> www.ca.postgresql.org/ftpsite.\n\n> It's fixed, but they all point to the same place so I don't see how\n> one would work and not the other. The place they point to was broken.\n\nNote however that the snapshots themselves are still broken, unless\nMarc has done something about it in the past day or so. Last I checked,\nthe snapshots have failed to update since the CVSROOT move.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 15 Oct 2001 10:40:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: http link to ftp download area broken "
}
] |
[
{
"msg_contents": "\nRight now, from what I can tell, the snapshot looks great to me:\n\npostgresql# ls -lt\ntotal 486\ndrwxrwxrwx 15 pgsql pgsql 512 Oct 15 04:04 src\ndrwxrwxrwx 43 pgsql pgsql 1024 Oct 15 04:04 contrib\ndrwxrwxrwx 4 pgsql pgsql 512 Oct 15 04:04 doc\ndrwxrwxrwx 2 pgsql pgsql 512 Oct 15 04:04 config\n-rwxr-xr-x 1 pgsql pgsql 249153 Oct 14 04:00 configure\n-rw-r--r-- 1 pgsql pgsql 35886 Oct 13 04:01 configure.in\n-rw-r--r-- 1 pgsql pgsql 132689 Oct 13 04:01 HISTORY\n-rw-r--r-- 1 pgsql pgsql 700 Oct 2 10:21 register.txt\n-rw-r--r-- 1 pgsql pgsql 34643 Oct 1 13:46 INSTALL\n-rw-r--r-- 1 pgsql pgsql 3464 Sep 17 19:00 GNUmakefile.in\n-rw-r--r-- 1 pgsql pgsql 566 Aug 26 18:28 aclocal.m4\n-rw-r--r-- 1 pgsql pgsql 1928 May 10 21:46 README\n-rw-r--r-- 1 pgsql pgsql 1432 Feb 9 2001 Makefile\n-rw-r--r-- 1 pgsql pgsql 1189 Jan 24 2001 COPYRIGHT\n\nAll fresh dates ...\n\n\n\n",
"msg_date": "Mon, 15 Oct 2001 08:41:50 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Snaptshot appears fine to me ..."
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> Right now, from what I can tell, the snapshot looks great to me:\n\nThat looks up-to-date to me too, but where did you get it from?\nThe copy I pulled just now from\nftp://ftp.us.postgresql.org/dev/postgresql-snapshot.tar.gz\nhas still got the problem:\n\n$ ls -l postgresql-snapshot\ntotal 974\n-rw-r--r-- 1 tgl users 1189 Jan 25 2001 COPYRIGHT\ndrwxr-xr-x 2 tgl users 1024 Oct 14 04:00 ChangeLogs/\n-rw-r--r-- 1 tgl users 3567 Apr 9 2001 GNUmakefile.in\n-rw-r--r-- 1 tgl users 132330 Sep 16 04:00 HISTORY\n-rw-r--r-- 1 tgl users 34643 Apr 7 2001 INSTALL\n-rw-r--r-- 1 tgl users 1432 Feb 10 2001 Makefile\n-rw-r--r-- 1 tgl users 1928 May 11 04:00 README\n-rw-r--r-- 1 tgl users 586 Aug 27 04:00 aclocal.m4\ndrwxr-xr-x 2 tgl users 1024 Oct 14 04:00 config/\n-rwxr-xr-x 1 tgl users 249316 Sep 15 04:00 configure*\n-rw-r--r-- 1 tgl users 35852 Sep 15 04:00 configure.in\ndrwxr-xr-x 41 tgl users 1024 Oct 14 04:00 contrib/\ndrwxr-xr-x 4 tgl users 1024 Oct 14 04:00 doc/\n-rw-r--r-- 1 tgl users 738 May 11 04:00 register.txt\ndrwxr-xr-x 15 tgl users 1024 Oct 14 04:00 src/\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 15 Oct 2001 11:28:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Snaptshot appears fine to me ... "
},
{
"msg_contents": "> \n> Right now, from what I can tell, the snapshot looks great to me:\n> \n> postgresql# ls -lt\n> total 486\n> drwxrwxrwx 15 pgsql pgsql 512 Oct 15 04:04 src\n> drwxrwxrwx 43 pgsql pgsql 1024 Oct 15 04:04 contrib\n> drwxrwxrwx 4 pgsql pgsql 512 Oct 15 04:04 doc\n> drwxrwxrwx 2 pgsql pgsql 512 Oct 15 04:04 config\n> -rwxr-xr-x 1 pgsql pgsql 249153 Oct 14 04:00 configure\n> -rw-r--r-- 1 pgsql pgsql 35886 Oct 13 04:01 configure.in\n> -rw-r--r-- 1 pgsql pgsql 132689 Oct 13 04:01 HISTORY\n> -rw-r--r-- 1 pgsql pgsql 700 Oct 2 10:21 register.txt\n> -rw-r--r-- 1 pgsql pgsql 34643 Oct 1 13:46 INSTALL\n> -rw-r--r-- 1 pgsql pgsql 3464 Sep 17 19:00 GNUmakefile.in\n> -rw-r--r-- 1 pgsql pgsql 566 Aug 26 18:28 aclocal.m4\n> -rw-r--r-- 1 pgsql pgsql 1928 May 10 21:46 README\n> -rw-r--r-- 1 pgsql pgsql 1432 Feb 9 2001 Makefile\n> -rw-r--r-- 1 pgsql pgsql 1189 Jan 24 2001 COPYRIGHT\n> \n> All fresh dates ...\n\nIt is not the dates on the files. What does doc/TODO show. I ftp'ed\nfrom ftp.us.postgresql.org and got a September 13th date in the TODO\nfile.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 15 Oct 2001 11:48:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Snaptshot appears fine to me ..."
},
{
"msg_contents": "\ntry ftp2.us.postgresql.org ... this is what is holding up beta right now,\nwe have to get the mirrors fixed, which Vince is working on ...\nftp2.us.postgresql.org is the only 'operational' mirror we have right now,\nbut, since Vince isn't ready yet, we haven't \"broken\" the others yet ...\nonce Vince is ready, we'll break the other ones and get them fixed up ...\n\n\nOn Mon, 15 Oct 2001, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > Right now, from what I can tell, the snapshot looks great to me:\n>\n> That looks up-to-date to me too, but where did you get it from?\n> The copy I pulled just now from\n> ftp://ftp.us.postgresql.org/dev/postgresql-snapshot.tar.gz\n> has still got the problem:\n>\n> $ ls -l postgresql-snapshot\n> total 974\n> -rw-r--r-- 1 tgl users 1189 Jan 25 2001 COPYRIGHT\n> drwxr-xr-x 2 tgl users 1024 Oct 14 04:00 ChangeLogs/\n> -rw-r--r-- 1 tgl users 3567 Apr 9 2001 GNUmakefile.in\n> -rw-r--r-- 1 tgl users 132330 Sep 16 04:00 HISTORY\n> -rw-r--r-- 1 tgl users 34643 Apr 7 2001 INSTALL\n> -rw-r--r-- 1 tgl users 1432 Feb 10 2001 Makefile\n> -rw-r--r-- 1 tgl users 1928 May 11 04:00 README\n> -rw-r--r-- 1 tgl users 586 Aug 27 04:00 aclocal.m4\n> drwxr-xr-x 2 tgl users 1024 Oct 14 04:00 config/\n> -rwxr-xr-x 1 tgl users 249316 Sep 15 04:00 configure*\n> -rw-r--r-- 1 tgl users 35852 Sep 15 04:00 configure.in\n> drwxr-xr-x 41 tgl users 1024 Oct 14 04:00 contrib/\n> drwxr-xr-x 4 tgl users 1024 Oct 14 04:00 doc/\n> -rw-r--r-- 1 tgl users 738 May 11 04:00 register.txt\n> drwxr-xr-x 15 tgl users 1024 Oct 14 04:00 src/\n>\n> \t\t\tregards, tom lane\n>\n\n",
"msg_date": "Mon, 15 Oct 2001 12:11:35 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Snaptshot appears fine to me ... "
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> try ftp2.us.postgresql.org ...\n\nAh, that one looks much better, except for one stray file:\n\n\t postgresql-snapshot/config/#cvs.cvsup-64286.15\n\n> this is what is holding up beta right now,\n> we have to get the mirrors fixed, which Vince is working on ...\n\nAny ETA on that?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 15 Oct 2001 14:03:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Snaptshot appears fine to me ... "
},
{
"msg_contents": "On Mon, 15 Oct 2001, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > try ftp2.us.postgresql.org ...\n>\n> Ah, that one looks much better, except for one stray file:\n>\n> \t postgresql-snapshot/config/#cvs.cvsup-64286.15\n>\n> > this is what is holding up beta right now,\n> > we have to get the mirrors fixed, which Vince is working on ...\n>\n> Any ETA on that?\n\nVince is hoping to have that done by tomorrow ...\n\n\n>\n> \t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 15 Oct 2001 15:23:47 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Snaptshot appears fine to me ... "
},
{
"msg_contents": "Marc G. Fournier writes:\n\n> Right now, from what I can tell, the snapshot looks great to me:\n>\n> postgresql# ls -lt\n> total 486\n> drwxrwxrwx 15 pgsql pgsql 512 Oct 15 04:04 src\n> drwxrwxrwx 43 pgsql pgsql 1024 Oct 15 04:04 contrib\n> drwxrwxrwx 4 pgsql pgsql 512 Oct 15 04:04 doc\n> drwxrwxrwx 2 pgsql pgsql 512 Oct 15 04:04 config\n> -rwxr-xr-x 1 pgsql pgsql 249153 Oct 14 04:00 configure\n> -rw-r--r-- 1 pgsql pgsql 35886 Oct 13 04:01 configure.in\n> -rw-r--r-- 1 pgsql pgsql 132689 Oct 13 04:01 HISTORY\n> -rw-r--r-- 1 pgsql pgsql 700 Oct 2 10:21 register.txt\n> -rw-r--r-- 1 pgsql pgsql 34643 Oct 1 13:46 INSTALL\n> -rw-r--r-- 1 pgsql pgsql 3464 Sep 17 19:00 GNUmakefile.in\n> -rw-r--r-- 1 pgsql pgsql 566 Aug 26 18:28 aclocal.m4\n> -rw-r--r-- 1 pgsql pgsql 1928 May 10 21:46 README\n> -rw-r--r-- 1 pgsql pgsql 1432 Feb 9 2001 Makefile\n> -rw-r--r-- 1 pgsql pgsql 1189 Jan 24 2001 COPYRIGHT\n>\n> All fresh dates ...\n\nThe problem was that some of the (sub-)tarballs had old dates, not what\nhappened to the files within.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 15 Oct 2001 21:27:27 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Snaptshot appears fine to me ..."
},
{
"msg_contents": "On Mon, 15 Oct 2001, Marc G. Fournier wrote:\n\n>\n> try ftp2.us.postgresql.org ... this is what is holding up beta right now,\n> we have to get the mirrors fixed, which Vince is working on ...\n> ftp2.us.postgresql.org is the only 'operational' mirror we have right now,\n> but, since Vince isn't ready yet, we haven't \"broken\" the others yet ...\n> once Vince is ready, we'll break the other ones and get them fixed up ...\n\nThere can be no mirror breaking/fixing until server1's reliable. In it's\ncurrent state it's useless, I can't even finish testing/debugging the new\nmirror stuff right now. I've been getting database unavailable errors of\none form or another for over two hours now and I know it's available cuze\nI'm connected to it!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 16 Oct 2001 10:00:11 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Snaptshot appears fine to me ... "
},
{
"msg_contents": "On Mon, 15 Oct 2001, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > Right now, from what I can tell, the snapshot looks great to me:\n>\n> That looks up-to-date to me too, but where did you get it from?\n> The copy I pulled just now from\n> ftp://ftp.us.postgresql.org/dev/postgresql-snapshot.tar.gz\n> has still got the problem:\n\nThat should be ftp2.us.postgresql.org. But it should be irrelevant\nvery soon as I think we're ready to make the switchover now.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Fri, 19 Oct 2001 06:43:28 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Snaptshot appears fine to me ... "
}
] |
[
{
"msg_contents": "I can confirm that the nightly snapshots are still not pulling from\ncurrent CVS. The TODO file in the snapshot of October 14th shows a\n\"Last Updated\" of September 13th.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 15 Oct 2001 11:42:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Snapshot not using current CVS"
}
] |
[
{
"msg_contents": "Are we ready to start beta on 7.2?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 15 Oct 2001 12:42:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Ready for Beta?"
},
{
"msg_contents": "> Are we ready to start beta on 7.2?\n\nFor me, the only remaining issue is follwing. Since it seems there's\nno objection, I will commit the changes in a few hours (I'm getting\nride on a train for a business trip. The train is coming...)\n--\nTatsuo Ishii\n\n>Subject: Re: [HACKERS] pg_client_encoding\n>From: Tatsuo Ishii <t-ishii@sra.co.jp>\n>To: phede-ml@islande.org\n>Cc: pgsql-hackers@postgresql.org\n>Date: Mon, 15 Oct 2001 10:05:20 +0900\n>X-Mailer: Mew version 1.94.2 on Emacs 20.7 / Mule 4.1 (葵)\n>\n>> * Tatsuo Ishii <t-ishii@sra.co.jp> [011014 16:05]:\n>> > > > ASCII\t\tSQL_ASCII\n>> > > > UTF-8\t\tUNICODE\t\t\t\tUTF_8\n>> > > > MULE-INTERNAL\tMULE_INTERNAL\n>> > > > ISO-8859-1\tLATIN1\t\t\t\tISO_8859_1\n>> > > > ISO-8859-2\tLATIN2\t\t\t\tISO_8859_2\n>> > > > ISO-8859-3\tLATIN3\t\t\t\tISO_8859_3\n>> > > > ISO-8859-4\tLATIN4\t\t\t\tISO_8859_4\n>> > > > ISO-8859-5\tISO_8859_5\n>> > > > ISO-8859-6\tISO_8859_6\n>> > > > ISO-8859-7\tISO_8859_7\n>> > > > ISO-8859-8\tISO_8859_8\n>> > > > ISO-8859-9\tLATIN5\t\t\t\tISO_8859_9\n>> > > > ISO-8859-10\tISO_8859_10\t\t\tLATIN6\n>> > > > ISO-8859-13\tISO_8859_13\t\t\tLATIN7\n>> > > > ISO-8859-14\tISO_8859_14\t\t\tLATIN8\n>> > > > ISO-8859-15\tISO_8859_15\t\t\tLATIN9\n>> > > > ISO-8859-16\tISO_8859_16\n>> > > \n>> > > Why aren't you using LATINx for (some of) these as well?\n>> > \n>> > If LATIN6 to 9 are well defined in the SQL or some other standards, I\n>> > would not object using them. I just don't have enough confidence.\n>> > For ISO-8859-5 to 8, and 16, I don't see well defined standards.\n>> \n>> ISO-8859-16 *is* LATIN10, I just don't have the reference to prove it\n>> (I can look for it, if you want to).\n>> \n>> ISO-8859-5 to 8 aren't latin scripts. From memory, 5 is cyrillic, 6 is\n>> arabic, 7 is greek, 8 is ??? 
(hebrew ?)...\n>> \n>> So it would make sense to add LATIN10, still :)\n>\n>If you were sure ISO-8859-16 == LATIN10, I could add it.\n>\n>Ok, here is the modified encoding table (column1 is the standard name,\n>2 is our \"official\" name, and 3 is alias). If there's no objection, I\n>will change them.\n>\n>ASCII\t\tSQL_ASCII\n>UTF-8\t\tUNICODE\t\tUTF_8\n>MULE-INTERNAL\tMULE_INTERNAL\n>ISO-8859-1\tLATIN1\t\tISO_8859_1\n>ISO-8859-2\tLATIN2\t\tISO_8859_2\n>ISO-8859-3\tLATIN3\t\tISO_8859_3\n>ISO-8859-4\tLATIN4\t\tISO_8859_4\n>ISO-8859-5\tISO_8859_5\n>ISO-8859-6\tISO_8859_6\n>ISO-8859-7\tISO_8859_7\n>ISO-8859-8\tISO_8859_8\n>ISO-8859-9\tLATIN5\t\tISO_8859_9\n>ISO-8859-10\tLATIN6\t\tISO_8859_10\n>ISO-8859-13\tLATIN7\t\tISO_8859_13\n>ISO-8859-14\tLATIN8\t\tISO_8859_14\n>ISO-8859-15\tLATIN9\t\tISO_8859_15\n>ISO-8859-16\tLATIN10\t\tISO_8859_16\n\n",
"msg_date": "Tue, 16 Oct 2001 13:16:20 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Ready for Beta?"
}
] |
[
{
"msg_contents": "I've been watching for this for some time. First it was 7.0, then \n7.1. Does anyone have any idea on when the row re-use code will be \nready? \n\nCurrently I'm running into trouble with an OLTP database. It grows \nlike crazy, has only 3,000,000 rows and vacuum takes a good 1/2 hour. \nGiven trouble with Great Bridge is there any info out there on when \n7.2 might hit the streets?\n\n-Michael\n_________________________________________________________________\n http://fastmail.ca/ - Fast Free Web Email for Canadians\n>From pgsql-sql-owner@postgresql.org Tue Oct 16 01:07:12 2001\nReceived: from sss.pgh.pa.us ([192.204.191.242])\n\tby postgresql.org (8.11.3/8.11.4) with ESMTP id f9FEl8r27466\n\tfor <pgsql-sql@postgresql.org>; Mon, 15 Oct 2001 10:47:08 -0400 (EDT)\n\t(envelope-from tgl@sss.pgh.pa.us)\nReceived: from sss2.sss.pgh.pa.us (tgl@localhost [127.0.0.1])\n\tby sss.pgh.pa.us (8.11.4/8.11.4) with ESMTP id f9FEkIc14278;\n\tMon, 15 Oct 2001 10:46:18 -0400 (EDT)\nTo: \"Aasmund Midttun Godal\" <aasmund@godal.com>\ncc: pgsql-sql@postgresql.org\nSubject: Re: Restricting access to Large objects \nIn-reply-to: <20011010201859.23555.qmail@ns.krot.org> \nReferences: <20011010201859.23555.qmail@ns.krot.org>\nComments: In-reply-to \"Aasmund Midttun Godal\" <aasmund@godal.com>\n\tmessage dated \"Wed, 10 Oct 2001 20:18:59 +0000\"\nDate: Mon, 15 Oct 2001 10:46:18 -0400\nMessage-ID: <14275.1003157178@sss.pgh.pa.us>\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nX-Archive-Number: 200110/194\nX-Sequence-Number: 5149\n\n\"Aasmund Midttun Godal\" <aasmund@godal.com> writes:\n> How can I restrict access to large objects.\n\nYou can't. This is one of the many deficiencies of large objects.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 15 Oct 2001 18:29:51 -0400 (EDT)",
"msg_from": "\"Michael Richards\" <michael@fastmail.ca>",
"msg_from_op": true,
"msg_subject": "When will vacuum go away?"
},
{
"msg_contents": "\"Michael Richards\" <michael@fastmail.ca> writes:\n> I've been watching for this for some time. First it was 7.0, then \n> 7.1. Does anyone have any idea on when the row re-use code will be \n> ready? \n\nVACUUM isn't disappearing any time soon, but 7.2's version of vacuum\nruns in parallel with normal transactions, so it's not so painful to\nrun it frequently. See discussion in development docs,\nhttp://candle.pha.pa.us/main/writings/pgsql/sgml/maintenance.html\n\n> Given trouble with Great Bridge is there any info out there on when \n> 7.2 might hit the streets?\n\nThe last several postponements of 7.2 beta have *not* been the fault\nof the ex-GreatBridge folks around here.\n\nYou can find a snapshot that should be pretty durn close to 7.2beta1\nat ftp://ftp2.us.postgresql.org/pub/dev/postgresql-snapshot.tar.gz\n(note that at last word, other mirrors were not up to date --- if\nthe doc/TODO file doesn't contain a date in October, it's stale).\nI think the only thing we're still waiting on is some datetime fixes\nfrom Tom Lockhart...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Oct 2001 00:57:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: When will vacuum go away? "
},
{
"msg_contents": "> You can find a snapshot that should be pretty durn close to 7.2beta1\n> at ftp://ftp2.us.postgresql.org/pub/dev/postgresql-snapshot.tar.gz\n> (note that at last word, other mirrors were not up to date --- if\n> the doc/TODO file doesn't contain a date in October, it's stale).\n> I think the only thing we're still waiting on is some datetime fixes\n> from Tom Lockhart...\n\nI'm a bit confused. Are you implying that the rest of the mirrors are\nbroken or that ftp2 just has info that hasn't been put out for the rest of\nthe mirrors yet.. I mirror every 4 hours and get:\n\n# ./rsync-postgres-ftp\nreceiving file list ... done\nwrote 110 bytes read 19042 bytes 7660.80 bytes/sec\ntotal size is 432138525 speedup is 22563.62\n\nwhen connecting to hub.org and:\n\n@ERROR: Unknown module 'postgresql-ftp'\n\nwhen connecting to rsync.postgresql.org.\n\nDid I miss something here?\n\n- Brandon\n\n\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n\n",
"msg_date": "Tue, 16 Oct 2001 07:42:07 -0400 (EDT)",
"msg_from": "bpalmer <bpalmer@crimelabs.net>",
"msg_from_op": false,
"msg_subject": "Why are ftp mirrors out of sync?"
},
{
"msg_contents": "bpalmer <bpalmer@crimelabs.net> writes:\n> Did I miss something here?\n\nNo, but Marc said yesterday that he and Vince were in process of\nchanging something about mirror configuration. You'll have to\nask them if mirror admins need to do anything...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Oct 2001 10:03:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why are ftp mirrors out of sync? "
},
{
"msg_contents": "On Tue, 16 Oct 2001, Tom Lane wrote:\n\n> bpalmer <bpalmer@crimelabs.net> writes:\n> > Did I miss something here?\n>\n> No, but Marc said yesterday that he and Vince were in process of\n> changing something about mirror configuration. You'll have to\n> ask them if mirror admins need to do anything...\n\nThey will. It'll involve filling out a form and changing the virtual\nserver's name.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 16 Oct 2001 10:07:07 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Why are ftp mirrors out of sync? "
},
{
"msg_contents": "\nIn a few hours, vince and I will be shutting down the rsync mirror on\nhub.org ... we've been spending the past couple of weeks re-doing and\nre-writing *alot* of the sites ...\n\nfor instance, if you mirror www.postgresql.org, it no longer includes the\nextensive mailing list archives ... that is a seperate web site/mirror ...\nso you can omit that if you want to just deal with stuff like the docs ...\n\nalso, all sites are going to be 'advertised' the same as the FreeBSD\nproject does it:\n\nftp.us.postgresql.org\nftp2.us.postgresql.org\netc ...\n\nno more individual domains ... the one problem we've been noticing over\nthe years is that search engines are picking up old mirrors, indexing them\nand presenting them to ppl ... by moving to the FreeBSD style, when a\nmirror goes offline, we can easily redirect that 'name' to a live IP, so\nthat ppl don't get pointers to stale, or non-existent sites ...\n\n\n\nOn Tue, 16 Oct 2001, bpalmer wrote:\n\n> > You can find a snapshot that should be pretty durn close to 7.2beta1\n> > at ftp://ftp2.us.postgresql.org/pub/dev/postgresql-snapshot.tar.gz\n> > (note that at last word, other mirrors were not up to date --- if\n> > the doc/TODO file doesn't contain a date in October, it's stale).\n> > I think the only thing we're still waiting on is some datetime fixes\n> > from Tom Lockhart...\n>\n> I'm a bit confused. Are you implying that the rest of the mirrors are\n> broken or that ftp2 just has info that hasn't been put out for the rest of\n> the mirrors yet.. I mirror every 4 hours and get:\n>\n> # ./rsync-postgres-ftp\n> receiving file list ... 
done\n> wrote 110 bytes read 19042 bytes 7660.80 bytes/sec\n> total size is 432138525 speedup is 22563.62\n>\n> when connecting to hub.org and:\n>\n> @ERROR: Unknown module 'postgresql-ftp'\n>\n> when connecting to rsync.postgresql.org.\n>\n> Did I miss something here?\n>\n> - Brandon\n>\n>\n>\n> ----------------------------------------------------------------------------\n> c: 646-456-5455 h: 201-798-4983\n> b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n",
"msg_date": "Tue, 16 Oct 2001 20:27:19 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Why are ftp mirrors out of sync?"
},
{
"msg_contents": "BTW will there be a 7.1.4 release before 7.2 comes out so we can dump our databases to \nupgrade to 7.2 w/o there being 60 in the seconds field?\n\nTom Lane wrote:\n\n> \"Michael Richards\" <michael@fastmail.ca> writes:\n> \n>>I've been watching for this for some time. First it was 7.0, then \n>>7.1. Does anyone have any idea on when the row re-use code will be \n>>ready? \n>>\n> \n> VACUUM isn't disappearing any time soon, but 7.2's version of vacuum\n> runs in parallel with normal transactions, so it's not so painful to\n> run it frequently. See discussion in development docs,\n> http://candle.pha.pa.us/main/writings/pgsql/sgml/maintenance.html\n> \n> \n>>Given trouble with Great Bridge is there any info out there on when \n>>7.2 might hit the streets?\n>>\n> \n> The last several postponements of 7.2 beta have *not* been the fault\n> of the ex-GreatBridge folks around here.\n> \n> You can find a snapshot that should be pretty durn close to 7.2beta1\n> at ftp://ftp2.us.postgresql.org/pub/dev/postgresql-snapshot.tar.gz\n> (note that at last word, other mirrors were not up to date --- if\n> the doc/TODO file doesn't contain a date in October, it's stale).\n> I think the only thing we're still waiting on is some datetime fixes\n> from Tom Lockhart...\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n\n-- \nJoseph Shraibman\njks@selectacast.net\nIncrease signal to noise ratio. http://www.targabot.com\n\n",
"msg_date": "Thu, 18 Oct 2001 21:25:19 -0400",
"msg_from": "Joseph Shraibman <jks@selectacast.net>",
"msg_from_op": false,
"msg_subject": "Re: When will vacuum go away?"
},
{
"msg_contents": "Joseph Shraibman <jks@selectacast.net> writes:\n> BTW will there be a 7.1.4 release before 7.2 comes out so we can dump\n> our databases to upgrade to 7.2 w/o there being 60 in the seconds\n> field?\n\nI doubt it. We're having enough trouble trying to get everyone lined\nup to produce a 7.2 beta :-(. Producing another 7.1 patch release\nisn't in the cards.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 18 Oct 2001 21:30:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: When will vacuum go away? "
}
] |
[
{
"msg_contents": "Hello everybody.\n\nI'm Daniel Varela, the developer of DBBalancer, a load balancing connection \npool for PostgreSQL (http://www.sourceforge.net/projects/dbbalancer), and \nI've recently been faced to a problem with protocol versions. DBBalancer \nspeaks itself Postgres protocol to open and close connections, while \nforwarding all other messages. Currently implements protocol version 2.0, as \ndefined in the documentation.\n\nThe problem appeared when using DBBalancer with PostgreSQL and PHP installed \nfrom Debian packages, as a tcpdump capture identified a StartUp packet like \nthis:\n\n00 00 01 28 04 d2 16 2f ...\n\nAccording to the protocol specification the first four bytes are the length \nof the packet (ok), but the next four should be the protocol version....\n\nWhat I would like to know, since the lack of documentation (at least I \ncouldn't find any) of previous versions of the protocol is:\n\na) Is this really a previous version of the protocol?\nb) Is it worth to implement it in DBBalancer?\nc) If it is, where could I find documentation about it? I know that the \nsource code \"somewhat\" documents it, but it would be very very time saving to \nhave access to some docs.\n\nBest regards and thanks in advance.\n\n\n\n-- \n\n----------------------------------\nRegards from Spain. Daniel Varela\n----------------------------------\n\nIf you think education is expensive, try ignorance.\n -Derek Bok (Former Harvard President)\n",
"msg_date": "Tue, 16 Oct 2001 00:29:56 +0200",
"msg_from": "Daniel Varela Santoalla <dvs@arrakis.es>",
"msg_from_op": true,
"msg_subject": "Old backend/frontend protocol versions"
},
{
"msg_contents": "Daniel Varela Santoalla <dvs@arrakis.es> writes:\n> The problem appeared when using DBBalancer with PostgreSQL and PHP installed \n> from Debian packages, as a tcpdump capture identified a StartUp packet like \n> this:\n> 00 00 01 28 04 d2 16 2f ...\n> According to the protocol specification the first four bytes are the length \n> of the packet (ok), but the next four should be the protocol version....\n\nThis is an SSL negotiation request --- look for NEGOTIATE_SSL_CODE\nin the sources.\n\n> What I would like to know, since the lack of documentation (at least I \n> couldn't find any) of previous versions of the protocol is:\n\nThe people who did the SSL feature did a spectacularly poor job of\ndocumenting it. AFAICT, the rest of that packet is a wasted bunch of\nzeroes, and the next thing that happens is that the postmaster sends\nback a one-byte OK-to-use-SSL-or-not response; if OK, the next step\nis to engage in an SSL connection dialog, then send the real StartUp\npacket under protection of SSL. But no, there's not a word of\ndocumentation about it except some comments in the source code.\n\nUnless your balancer can cope with a stream of data that it cannot make\nany sense of, my guess is that you'll have a hard time doing anything\nuseful with SSL-encrypted connections. You might be best off to reject\nthem out of hand (send back a 1-byte response 'N', then wait for the\nnon-encrypted StartUpPacket). Or perhaps run separate SSL sessions on\nyour incoming and outgoing datastreams.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 29 Oct 2001 12:59:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Old backend/frontend protocol versions "
}
] |
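Tom Lane's answer can be checked directly against the captured bytes: `04 d2 16 2f` decodes to the "magic" protocol version 1234.5679 that the backend sources use as the SSL negotiation request code, not a real protocol version. A minimal sketch of how a proxy like DBBalancer might tell the two apart (illustrative only, not DBBalancer's actual code):

```python
import struct

# Special "version" constants from the v2-era startup handshake.
NEGOTIATE_SSL_CODE = (1234 << 16) | 5679   # 0x04d2162f
PROTOCOL_2_0       = (2 << 16) | 0

def classify_startup(packet: bytes) -> str:
    """Inspect the first 8 bytes of a v2-style startup packet:
    a 4-byte length, then a 4-byte protocol-version field,
    both in network byte order."""
    length, code = struct.unpack("!II", packet[:8])
    if code == NEGOTIATE_SSL_CODE:
        return "ssl-request"
    major, minor = code >> 16, code & 0xFFFF
    return f"startup v{major}.{minor}"

# The bytes from the tcpdump capture quoted in the thread:
capture = bytes.fromhex("00000128" "04d2162f")
print(classify_startup(capture))                                 # -> ssl-request
print(classify_startup(struct.pack("!II", 296, PROTOCOL_2_0)))   # -> startup v2.0
```

On seeing the SSL request, the proxy can follow Tom's suggestion and answer with a single `'N'` byte, after which the client falls back to sending a normal unencrypted StartUp packet.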
[
{
"msg_contents": "I'm sure this is not the correct place to ask this, but:\n\n\nI've been looking for documents and other info on replication efforts in \nPostgreSQL. If anyone here can point me to places where I can find \nthese, it would be really appreciated.\n\nAlso, if any of you have any comments and warnings about current \nimplementations (if there are any) I'd like to hear those too.\n\n\n\nMany thanks\n\n\n\nNathan\n\n",
"msg_date": "Tue, 16 Oct 2001 03:30:38 GMT",
"msg_from": "Nathan Reilly <nreilly@bigpond.net.au>",
"msg_from_op": true,
"msg_subject": "Replication"
},
{
"msg_contents": "> \n> I've been looking for documents and other info on replication efforts \n> in PostgreSQL. If anyone here can point me to places where I can \n> find these, it would be really appreciated. \n\nHere is some research work that was conducted a few months ago.\n\nhttp://gborg.postgresql.org/genpage?replication_research\n\nThe project page for this research is here. \n\nhttp://gborg.postgresql.org/project/pgreplication/projdisplay.php\n\n> \n> \n> Also, if any of you have any comments and warnings about current \n> implementations (if there are any) I'd like to hear those too.\n> \nIt really depends on what type of replication your looking for. \nSynchronous or Asynchronous. Master/Slave or\nMulti-Master. \n\n\nDarren\n\n",
"msg_date": "Tue, 16 Oct 2001 15:59:31 -0400",
"msg_from": "Darren Johnson <darren.johnson@home.com>",
"msg_from_op": false,
"msg_subject": "Re: Replication"
},
{
"msg_contents": "> I've been looking for documents and other info on replication efforts in\n> PostgreSQL. If anyone here can point me to places where I can find\n> these, it would be really appreciated.\n\nPostgreSQL Inc offers asynchronous one-way replication, used and proven\nin high throughput environments. The original prototype implementation\nis available as a contrib package in the PostgreSQL source tree. \n\nI think that there are other efforts at different styles of replication\nbut do not know the current status.\n\nHTH\n\n - Thomas\n",
"msg_date": "Tue, 16 Oct 2001 23:45:31 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Replication"
},
{
"msg_contents": "> I've been looking for documents and other info on replication efforts in\n> PostgreSQL. If anyone here can point me to places where I can find\n> these, it would be really appreciated.\n\nIt really depends on your replication needs. I would suggest taking a\nlook at gborg.postgresql.org for any of the replication projects there.\nThere are some that work, some that don't and some that are still in the\nworks.\n\n> Also, if any of you have any comments and warnings about current\n> implementations (if there are any) I'd like to hear those too.\n\nAlso in the works is adding the ability to postgresql native to do\nreplication. That is, however, at least as far away as the next version\n(7.3, ~6-8 months).\n\n- Brandon\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n",
"msg_date": "Tue, 16 Oct 2001 19:56:22 -0400 (EDT)",
"msg_from": "bpalmer <bpalmer@crimelabs.net>",
"msg_from_op": false,
"msg_subject": "Re: Replication"
},
{
"msg_contents": "Try http://techdocs.postgresql.org/\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Nathan Reilly\n> Sent: Tuesday, 16 October 2001 11:31 AM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] Replication\n> \n> \n> I'm sure this is not the correct place to ask this, but:\n> \n> \n> I've been looking for documents and other info on replication efforts in \n> PostgreSQL. If anyone here can point me to places where I can find \n> these, it would be really appreciated.\n> \n> Also, if any of you have any comments and warnings about current \n> implementations (if there are any) I'd like to hear those too.\n> \n> \n> \n> Many thanks\n> \n> \n> \n> Nathan\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n",
"msg_date": "Wed, 17 Oct 2001 11:28:17 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Replication"
}
] |
[
{
"msg_contents": "Hi!\n\nWithout this patch I couldn't compile PostgreSQL on Solaris 8 x86 using\nSun's compiler. May be it will be usefull for someone else?\n\nRegards\nDenis Ustimenko\nOldham\n\n---------------------------------------------------------------\ndenis@tracer$ diff configure.orig configure\n744c744\n< i?86-*-solaris) need_tas=yes; tas_file=solaris_i386.s ;;\n---\n> i?86-*-solaris*) need_tas=yes; tas_file=solaris_i386.s ;;\n\n\n",
"msg_date": "Tue, 16 Oct 2001 17:17:35 +0700 (NOVST)",
"msg_from": "Denis A Ustimenko <denis@oldham.ru>",
"msg_from_op": true,
"msg_subject": "compiling on Solaris 8 x86"
},
{
"msg_contents": "Patch to be applied to proper config* file.\n\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\n> Hi!\n> \n> Without this patch I couldn't compile PostgreSQL on Solaris 8 x86 using\n> Sun's compiler. May be it will be usefull for someone else?\n> \n> Regards\n> Denis Ustimenko\n> Oldham\n> \n> ---------------------------------------------------------------\n> denis@tracer$ diff configure.orig configure\n> 744c744\n> < i?86-*-solaris) need_tas=yes; tas_file=solaris_i386.s ;;\n> ---\n> > i?86-*-solaris*) need_tas=yes; tas_file=solaris_i386.s ;;\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 16 Oct 2001 13:55:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: compiling on Solaris 8 x86"
},
{
"msg_contents": "Patch applied. Thanks. Patch attached. autoconf run.\n\n---------------------------------------------------------------------------\n\n\n> Hi!\n> \n> Without this patch I couldn't compile PostgreSQL on Solaris 8 x86 using\n> Sun's compiler. May be it will be usefull for someone else?\n> \n> Regards\n> Denis Ustimenko\n> Oldham\n> \n> ---------------------------------------------------------------\n> denis@tracer$ diff configure.orig configure\n> 744c744\n> < i?86-*-solaris) need_tas=yes; tas_file=solaris_i386.s ;;\n> ---\n> > i?86-*-solaris*) need_tas=yes; tas_file=solaris_i386.s ;;\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: configure.in\n===================================================================\nRCS file: /cvsroot/pgsql/configure.in,v\nretrieving revision 1.145\ndiff -c -r1.145 configure.in\n*** configure.in\t2001/10/13 04:23:50\t1.145\n--- configure.in\t2001/10/19 15:03:22\n***************\n*** 116,122 ****\n case $host in\n *-*-hpux*) need_tas=yes; tas_file=hpux.s ;;\n sparc-*-solaris*) need_tas=yes; tas_file=solaris_sparc.s ;;\n! i?86-*-solaris) need_tas=yes; tas_file=solaris_i386.s ;;\n *) need_tas=no; tas_file=dummy.s ;;\n esac\n AC_LINK_FILES([src/backend/port/tas/${tas_file}], [src/backend/port/tas.s])\n--- 116,122 ----\n case $host in\n *-*-hpux*) need_tas=yes; tas_file=hpux.s ;;\n sparc-*-solaris*) need_tas=yes; tas_file=solaris_sparc.s ;;\n! i?86-*-solaris*) need_tas=yes; tas_file=solaris_i386.s ;;\n *) need_tas=no; tas_file=dummy.s ;;\n esac\n AC_LINK_FILES([src/backend/port/tas/${tas_file}], [src/backend/port/tas.s])",
"msg_date": "Fri, 19 Oct 2001 11:04:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: compiling on Solaris 8 x86"
}
] |
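The reason this one-character patch matters: `configure`'s `case $host` statement matches with shell globs, and on Solaris 8 the host triplet carries a version suffix (e.g. `i386-pc-solaris2.8` — an assumed example value here), so the unanchored pattern without the trailing `*` never matches and the tas file is never selected. This can be mimicked with Python's `fnmatch`, whose globs behave like the shell's for `?` and `*`:

```python
from fnmatch import fnmatchcase

# A typical config.guess triplet on Solaris 8 x86 (illustrative value):
host = "i386-pc-solaris2.8"

old_pattern = "i?86-*-solaris"    # configure before the patch
new_pattern = "i?86-*-solaris*"   # patched: also matches the "2.8" suffix

print(fnmatchcase(host, old_pattern))   # False -> solaris_i386.s never chosen
print(fnmatchcase(host, new_pattern))   # True  -> tas file selected as intended
```

Note the already-correct `sparc-*-solaris*` branch in the same `case` has the trailing `*`, which is presumably why only the x86 branch was broken.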
[
{
"msg_contents": "In moving from 7.1.3 to 7.2devel (for bug fixes) we've encountered a\nproblem with a, previously valid, column name: time. In 7.1.3 the\nfollowing worked:\n\n CREATE TABLE test(time INTEGER);\n\nwhile in 7.2devel it results in a parse error:\n\n ERROR: parser: parse error at or near \"time\"\n\nLooking at the source I see this is a result of 'time' being added to\nColLabel in backend/parser/gram.y earlier this month.\n\nThis effects interface code and database migration using pg_dump.\n\nObviously a new column name will have to be used, however is there a\ndefinitive list of keywords to avoid so such an occurance wouldn't\nhappen in a production system? Currently these include:\n\n abort\n all\n analyse\n analyze\n and\n any\n asc\n between\n binary\n bit\n both\n case\n cast\n char\n character\n check\n cluster\n coalesce\n collate\n column\n constraint\n copy\n cross\n current_date\n current_time\n current_timestamp\n current_user\n dec\n decimal\n default\n deferrable\n desc\n distinct\n do\n else\n end\n except\n exists\n explain\n extract\n false\n float\n for\n foreign\n freeze\n from\n full\n global\n group\n having\n ilike\n initially\n in\n inner\n intersect\n into\n inout\n is\n isnull\n join\n leading\n left\n like\n limit\n listen\n load\n local\n lock\n move\n natural\n nchar\n new\n not\n notnull\n nullif\n null\n numeric\n off\n offset\n old\n on\n only\n or\n order\n out\n outer\n overlaps\n position\n precision\n primary\n public\n references\n reset\n right\n select\n session_user\n setof\n show\n some\n substring\n table\n then\n time\n timestamp\n to\n trailing\n transaction\n trim\n true\n union\n unique\n unknown\n user\n using\n vacuum\n varchar\n verbose\n when\n where\n\nBest Regards, Lee Kindness..\n",
"msg_date": "Tue, 16 Oct 2001 12:25:51 +0100 (BST)",
"msg_from": "Lee Kindness <lkindness@csl.co.uk>",
"msg_from_op": true,
"msg_subject": "Column names - time"
},
{
"msg_contents": "Lee Kindness <lkindness@csl.co.uk> writes:\n> Obviously a new column name will have to be used, however is there a\n> definitive list of keywords to avoid so such an occurance wouldn't\n> happen in a production system?\n\nThere is an up-to-date list of keywords in the documentation:\nhttp://www.ca.postgresql.org/users-lounge/docs/7.1/postgres/sql-keywords-appendix.html\n\nAs for predicting what keywords might become reserved in future PG\nreleases, my crystal ball is down at the moment ... but words that are\nreserved in SQL99 would be good things to avoid.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Oct 2001 17:19:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Column names - time "
},
{
"msg_contents": "Tom Lane writes:\n > There is an up-to-date list of keywords in the documentation:\n > http://www.ca.postgresql.org/users-lounge/docs/7.1/postgres/sql-keywords-appendix.html\n\nThanks for the info. Would I be right in saying that the status of\ntime (unreserved for PostgreSQL) for 7.2 needs to be changed to\nreserved since it can no-longer be used as an unquoted column\nidentifier?\n\nPerhaps the other time related identifiers too.\n\nRegards, Lee Kindness.\n",
"msg_date": "Wed, 17 Oct 2001 09:36:35 +0100 (BST)",
"msg_from": "Lee Kindness <lkindness@csl.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: Column names - time "
},
{
"msg_contents": "Lee Kindness <lkindness@csl.co.uk> writes:\n> Tom Lane writes:\n>>> There is an up-to-date list of keywords in the documentation:\n>>> http://www.ca.postgresql.org/users-lounge/docs/7.1/postgres/sql-keywords-appendix.html\n\n> Thanks for the info. Would I be right in saying that the status of\n> time (unreserved for PostgreSQL) for 7.2 needs to be changed\n\nProbably. Peter has a script that generates that table directly from\ngram.y, and I assume he'll run it sometime before 7.2 release...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 17 Oct 2001 10:21:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Column names - time "
},
{
"msg_contents": "Tom Lane writes:\n\n> Probably. Peter has a script that generates that table directly from\n> gram.y, and I assume he'll run it sometime before 7.2 release...\n\nAfter beta has started.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 17 Oct 2001 22:37:48 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Column names - time "
}
] |
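The practical workaround for interface code, until column names can be renamed, is to emit delimited identifiers: `CREATE TABLE test("time" INTEGER)` parses even when `time` is reserved. A sketch of how generated DDL might guard against such collisions (the reserved set below is a tiny illustrative subset of the full list above):

```python
# A small illustrative subset of the reserved words listed in the thread:
RESERVED = {"time", "timestamp", "user", "select", "table"}

def quote_ident(name: str) -> str:
    """Double-quote an identifier if it collides with a reserved word
    (or isn't a plain identifier at all). Embedded double quotes are
    doubled, per the SQL rules for delimited identifiers."""
    if name.lower() in RESERVED or not name.isidentifier():
        return '"' + name.replace('"', '""') + '"'
    return name

cols = {"time": "INTEGER", "value": "FLOAT8"}
ddl = "CREATE TABLE test(%s)" % ", ".join(
    f"{quote_ident(c)} {t}" for c, t in cols.items())
print(ddl)   # CREATE TABLE test("time" INTEGER, value FLOAT8)
```

Quoting unconditionally would also work, at the cost of making every identifier case-sensitive, which is why the sketch only quotes when necessary.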
[
{
"msg_contents": "I saw over 7 hours delay between postgresql.org and sever1.pgsql.org.\nDoes anynone know what's happening here?\n\nReceived: from postgresql.org (webmail.postgresql.org [216.126.85.28])\n\tby server1.pgsql.org (8.11.6/8.11.6) with ESMTP id f9GBurU27235\n\tfor <t-ishii@sra.co.jp>; Tue, 16 Oct 2001 06:57:11 -0500 (CDT)\n\t(envelope-from pgsql-hackers-owner+M14299=sra.co.jp=t+2Dishii@postgresql.org)\nReceived: from sraigw.sra.co.jp (sraigw.sra.co.jp [202.32.10.2])\n\tby postgresql.org (8.11.3/8.11.4) with ESMTP id f9G4GcP30712\n\tfor <pgsql-hackers@postgresql.org>; Tue, 16 Oct 2001 00:16:39 -0400 (EDT)\n\t(envelope-from t-ishii@sra.co.jp)\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 16 Oct 2001 21:13:12 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "delayed mail?"
},
{
"msg_contents": "Tatsuo Ishii writes:\n > I saw over 7 hours delay between postgresql.org and sever1.pgsql.org.\n > Does anynone know what's happening here?\n\nI've seen massive delays too:\n\nReceived: from mail.csl.co.uk by euphrates.csl.co.uk (8.9.3/ConceptI 2.4)\n\tid EAA08864; Tue, 16 Oct 2001 04:24:41 +0100 (BST)\nReceived: from server1.pgsql.org by mail.csl.co.uk (8.11.1/ConceptO 2.3)\n\tid f9G3Oh219345; Tue, 16 Oct 2001 04:24:44 +0100\nReceived: from postgresql.org (webmail.postgresql.org [216.126.85.28])\n\tby server1.pgsql.org (8.11.6/8.11.6) with ESMTP id f9G3Ljq66512\n\tfor <lkindness@csl.co.uk>; Mon, 15 Oct 2001 22:23:36 -0500 (CDT)\n\t(envelope-from pgsql-hackers-owner+M14283=csl.co.uk=lkindness@postgresql.org)\nReceived: from mail1.hub.org (webmail.hub.org [216.126.85.1])\n\tby postgresql.org (8.11.3/8.11.4) with ESMTP id f9FNaKP72316\n\tfor <pgsql-hackers@postgreSQL.org>; Mon, 15 Oct 2001 19:36:20 -0400 (EDT)\n\t(envelope-from pgman@candle.pha.pa.us)\nReceived: from candle.pha.pa.us (candle.navpoint.com [162.33.245.46])\n\tby mail1.hub.org (8.11.3/8.11.4) with ESMTP id f9FGg8X57598\n\tfor <pgsql-hackers@postgreSQL.org>; Mon, 15 Oct 2001 12:42:08 -0400 (EDT)\n\t(envelope-from pgman@candle.pha.pa.us)\nReceived: (from pgman@localhost)\n\tby candle.pha.pa.us (8.11.6/8.10.1) id f9FGg6g02963\n\tfor pgsql-hackers@postgreSQL.org; Mon, 15 Oct 2001 12:42:06 -0400 (EDT)\nMessage-Id: <200110151642.f9FGg6g02963@candle.pha.pa.us>\n\nand this one's days:\n\nReceived: from mail.csl.co.uk by euphrates.csl.co.uk (8.9.3/ConceptI 2.4)\n\tid KAA11316; Tue, 16 Oct 2001 10:34:03 +0100 (BST)\nReceived: from postgresql.org by mail.csl.co.uk (8.11.1/ConceptO 2.3)\n\tid f9G9Y3220747; Tue, 16 Oct 2001 10:34:03 +0100\nReceived: from postgresql.org (webmail.postgresql.org [216.126.85.28])\n\tby postgresql.org (8.11.3/8.11.4) with SMTP id f9G0GHP83130;\n\tMon, 15 Oct 2001 20:16:27 -0400 (EDT)\n\t(envelope-from pgsql-announce-owner+M120@postgresql.org)\nReceived: from 
candle.pha.pa.us (candle.navpoint.com [162.33.245.46])\n\tby postgresql.org (8.11.3/8.11.4) with ESMTP id f9D4our63609;\n\tSat, 13 Oct 2001 00:50:56 -0400 (EDT)\n\t(envelope-from pgman@candle.pha.pa.us)\nReceived: (from pgman@localhost)\n\tby candle.pha.pa.us (8.11.6/8.10.1) id f9D4oqo09577;\n\tSat, 13 Oct 2001 00:50:52 -0400 (EDT)\nMessage-Id: <200110130450.f9D4oqo09577@candle.pha.pa.us>\n\nHardly any (yours among the excepted, headers below) messages are\ncoming through in a timely fashion:\n\nReceived: from mail.csl.co.uk by euphrates.csl.co.uk (8.9.3/ConceptI 2.4)\n\tid PAA14131; Tue, 16 Oct 2001 15:17:46 +0100 (BST)\nReceived: from server1.pgsql.org by mail.csl.co.uk (8.11.1/ConceptO 2.3)\n\tid f9GEHl222496; Tue, 16 Oct 2001 15:17:47 +0100\nReceived: from postgresql.org (webmail.postgresql.org [216.126.85.28])\n\tby server1.pgsql.org (8.11.6/8.11.6) with ESMTP id f9GEEEs48462\n\tfor <lkindness@csl.co.uk>; Tue, 16 Oct 2001 09:16:31 -0500 (CDT)\n\t(envelope-from pgsql-hackers-owner+M14305=csl.co.uk=lkindness@postgresql.org)\nReceived: from sraigw.sra.co.jp (sraigw.sra.co.jp [202.32.10.2])\n\tby postgresql.org (8.11.3/8.11.4) with ESMTP id f9GCDVP15349\n\tfor <pgsql-hackers@postgresql.org>; Tue, 16 Oct 2001 08:13:32 -0400 (EDT)\n\t(envelope-from t-ishii@sra.co.jp)\nReceived: from sranhm.sra.co.jp (sranhm [133.137.13.152])\n\tby sraigw.sra.co.jp (8.9.3/3.7W-sraigw) with ESMTP id VAA16107\n\tfor <pgsql-hackers@postgresql.org>; Tue, 16 Oct 2001 21:13:28 +0900 (JST)\nReceived: from localhost (IDENT:t-ishii@portsv3-24.sra.co.jp [133.137.84.24])\n\tby sranhm.sra.co.jp (8.9.3+3.2W/3.7W-srambox) with ESMTP id VAA10335\n\tfor <pgsql-hackers@postgresql.org>; Tue, 16 Oct 2001 21:13:26 +0900\nMessage-Id: <20011016211312V.t-ishii@sra.co.jp>\nLee.\n",
"msg_date": "Tue, 16 Oct 2001 15:25:33 +0100 (BST)",
"msg_from": "Lee Kindness <lkindness@csl.co.uk>",
"msg_from_op": false,
"msg_subject": "delayed mail?"
},
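The queueing Lee describes can be measured mechanically from the Received stamps. A minimal sketch, using only the Python standard library; the two timestamps are copied from the first set of headers quoted above, where postgresql.org accepted the message Monday evening but server1.pgsql.org only saw it hours later:

```python
from email.utils import parsedate_to_datetime

def hop_delay(earlier: str, later: str):
    """Delay between two RFC 2822 Received timestamps (timezone-aware)."""
    return parsedate_to_datetime(later) - parsedate_to_datetime(earlier)

# Stamps taken from the first delayed message quoted above.
accepted_by_postgresql_org = "Mon, 15 Oct 2001 19:36:20 -0400"
seen_by_server1 = "Mon, 15 Oct 2001 22:23:36 -0500"  # i.e. 03:23:36 UTC Tuesday

delay = hop_delay(accepted_by_postgresql_org, seen_by_server1)
print(delay)  # → 3:47:16 queued between postgresql.org and server1.pgsql.org
```

Running the same subtraction over each adjacent pair of Received lines localizes which hop is slow.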
{
"msg_contents": "\nknown problems, should be fixed and catching up now ...\n\nOn Tue, 16 Oct 2001, Tatsuo Ishii wrote:\n\n> I saw over 7 hours delay between postgresql.org and sever1.pgsql.org.\n> Does anynone know what's happening here?\n>\n> Received: from postgresql.org (webmail.postgresql.org [216.126.85.28])\n> \tby server1.pgsql.org (8.11.6/8.11.6) with ESMTP id f9GBurU27235\n> \tfor <t-ishii@sra.co.jp>; Tue, 16 Oct 2001 06:57:11 -0500 (CDT)\n> \t(envelope-from pgsql-hackers-owner+M14299=sra.co.jp=t+2Dishii@postgresql.org)\n> Received: from sraigw.sra.co.jp (sraigw.sra.co.jp [202.32.10.2])\n> \tby postgresql.org (8.11.3/8.11.4) with ESMTP id f9G4GcP30712\n> \tfor <pgsql-hackers@postgresql.org>; Tue, 16 Oct 2001 00:16:39 -0400 (EDT)\n> \t(envelope-from t-ishii@sra.co.jp)\n> --\n> Tatsuo Ishii\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n",
"msg_date": "Tue, 16 Oct 2001 17:10:58 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: delayed mail?"
}
] |
[
{
"msg_contents": "Break the SQL code that has been implemented for prior versions??\n Bummer ;((.\n",
"msg_date": "16 Oct 2001 08:24:12 -0700",
"msg_from": "huongch@bigfoot.com (Flancer)",
"msg_from_op": true,
"msg_subject": "To Postgres Devs : Wouldn't changing the select limit syntax ...."
},
{
"msg_contents": "> Break the SQL code that has been implemented for prior versions??\n> Bummer ;((.\n\nYes, but we don't follow the MySQL behavior, which we copied when we\nadded LIMIT. Seems we should agree with their implementation.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 17 Oct 2001 12:11:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> > Break the SQL code that has been implemented for prior versions??\n> > Bummer ;((.\n>\n> Yes, but we don't follow the MySQL behavior, which we copied when we\n> added LIMIT. Seems we should agree with their implementation.\n\nIsn't it much worse to not follow PostgreSQL behaviour than to not follow\nMySQL behaviour?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n\n",
"msg_date": "Wed, 17 Oct 2001 22:36:44 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > > Break the SQL code that has been implemented for prior versions??\n> > > Bummer ;((.\n> >\n> > Yes, but we don't follow the MySQL behavior, which we copied when we\n> > added LIMIT. Seems we should agree with their implementation.\n> \n> Isn't it much worse to not follow PostgreSQL behaviour than to not follow\n> MySQL behaviour?\n\nWell, it was on the TODO list and people complained while porting their\nMySQL applications. We clearly made a mistake in the initial\nimplementation.\n\nThe question is do we fix it or continue with a different\nimplementation. Because we have the separate LIMIT and OFFSET we can\nfix it while giving people a solution that will work for all versions. \nIf we don't fix it, all MySQL queries that are ported will be broken.\n\nI assume it got on the TODO list because fixing it was the accepted\nsolution. We can, of course, change our minds.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 17 Oct 2001 16:46:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > > Break the SQL code that has been implemented for prior versions??\n> > > Bummer ;((.\n> >\n> > Yes, but we don't follow the MySQL behavior, which we copied when we\n> > added LIMIT. Seems we should agree with their implementation.\n> \n> Isn't it much worse to not follow PostgreSQL behavior than to not follow\n> MySQL behavior?\n\nAnother idea: because our historical Limit #,# differs from MySQL, one\nidea is to disable LIMIT #,# completely and instead print an error\nstating they have to use LIMIT # OFFSET #. Although that would break\nboth MySQL and old PostgreSQL queries, it would not generate incorrect\nresults.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 17 Oct 2001 18:34:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian writes:\n> >\n> > > > Break the SQL code that has been implemented for prior versions??\n> > > > Bummer ;((.\n> > >\n> > > Yes, but we don't follow the MySQL behavior, which we copied when we\n> > > added LIMIT. Seems we should agree with their implementation.\n> >\n> > Isn't it much worse to not follow PostgreSQL behaviour than to not follow\n> > MySQL behaviour?\n> \n> Well, it was on the TODO list and people complained while porting their\n> MySQL applications. We clearly made a mistake in the initial\n> implementation.\n> \n> The question is do we fix it or continue with a different\n> implementation. Because we have the separate LIMIT and OFFSET we can\n> fix it while giving people a solution that will work for all versions.\n> If we don't fix it, all MySQL queries that are ported will be broken.\n\nBut it seems absurd to trouble existent PG users instead.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 18 Oct 2001 15:21:08 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n>>Bruce Momjian writes:\n>>\n>>\n>>>>Break the SQL code that has been implemented for prior versions??\n>>>> Bummer ;((.\n>>>>\n>>>Yes, but we don't follow the MySQL behavior, which we copied when we\n>>>added LIMIT. Seems we should agree with their implementation.\n>>>\n>>Isn't it much worse to not follow PostgreSQL behavior than to not follow\n>>MySQL behavior?\n>>\n> \n> Another idea: because our historical Limit #,# differs from MySQL, one\n> idea is to disable LIMIT #,# completely and instead print an error\n> stating they have to use LIMIT # OFFSET #. Although that would break\n> both MySQl and old PostgreSQL queries, it would not generate incorrect\n> results.\n\n\nI would say the relevant behaviour is neither the one that MySQL \nhistorically uses nor the one that PostgreSQL historically uses, but the \none that is specified in the relevant standards. Since nobody brought \nthis up yet I presume these standards leave the implementation of LIMIT \nopen (I tried to google myself, but I couldn't exactly find it).\nIs that correct or does (any of the) the SQL standards specify a behaviour?\n\nJochem\n\n",
"msg_date": "Thu, 18 Oct 2001 09:31:58 +0200",
"msg_from": "Jochem van Dieten <jochemd@oli.tudelft.nl>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "Greetings, Bruce!\n\nAt 18.10.2001, 02:34, you wrote:\n\n>> Isn't it much worse to not follow PostgreSQL behavior than to not follow\n>> MySQL behavior?\n\nBM> Another idea: because our historical Limit #,# differs from MySQL, one\nBM> idea is to disable LIMIT #,# completely and instead print an error\nBM> stating they have to use LIMIT # OFFSET #. Although that would break\nBM> both MySQl and old PostgreSQL queries, it would not generate incorrect\nBM> results.\n\n It doesn't seem like a good idea. The best solution, IMHO, would\n be to introduce optional \"MySQL-compatibility mode\" for LIMIT in 7.2\n Later LIMIT #,# can be marked deprecated in favour of LIMIT #,\n OFFSET #\n But please, don't *break* things; while this change may make life\n easier for some people migrating from MySQL far more people would\n be pissed off...\n\n-- \nYours, Alexey V. Borzov, Webmaster of RDW.ru\n\n\n",
"msg_date": "Thu, 18 Oct 2001 12:04:06 +0400",
"msg_from": "Alexey Borzov <borz_off@rdw.ru>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "> Greetings, Bruce!\n> \n> At 18.10.2001, 02:34, you wrote:\n> \n> >> Isn't it much worse to not follow PostgreSQL behavior than to not follow\n> >> MySQL behavior?\n> \n> BM> Another idea: because our historical Limit #,# differs from MySQL, one\n> BM> idea is to disable LIMIT #,# completely and instead print an error\n> BM> stating they have to use LIMIT # OFFSET #. Although that would break\n> BM> both MySQl and old PostgreSQL queries, it would not generate incorrect\n> BM> results.\n> \n> It doesn't seem like a good idea. The best solution, IMHO, would\n> be to introduce optional \"MySQL-compatibility mode\" for LIMIT in 7.2\n> Later LIMIT #,# can be marked deprecated in favour of LIMIT #,\n> OFFSET #\n> But please, don't *break* things; while this change may make life\n> easier for some people migrating from MySQL far more people would\n> be pissed off...\n\nOK, it seems enough people don't want this change that we have to do\nsomething. What do people suggest? Can we throw an elog(NOTICE)\nmessage in 7.2 stating that LIMIT #,# will disappear in the next release\nand to start using LIMIT/OFFSET. That way, people can migrate their\ncode to LIMIT/OFFSET during 7.2 and it can disappear in 7.3?\n\nI frankly think the LIMIT #,# is way too confusing anyway and would be\nglad to have it removed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 18 Oct 2001 10:55:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "On Wed, 17 Oct 2001, Bruce Momjian wrote:\n\n> > Bruce Momjian writes:\n> >\n> > > > Break the SQL code that has been implemented for prior versions??\n> > > > Bummer ;((.\n> > >\n> > > Yes, but we don't follow the MySQL behavior, which we copied when we\n> > > added LIMIT. Seems we should agree with their implementation.\n> >\n> > Isn't it much worse to not follow PostgreSQL behaviour than to not follow\n> > MySQL behaviour?\n>\n> Well, it was on the TODO list and people complained while porting their\n> MySQL applications. We clearly made a mistake in the initial\n> implementation.\n>\n> The question is do we fix it or continue with a different\n> implementation. Because we have the separate LIMIT and OFFSET we can\n> fix it while giving people a solution that will work for all versions.\n> If we don't fix it, all MySQL queries that are ported will be broken.\n>\n> I assume it got on the TODO list because fixing it was the accepted\n> solution. We can, of course, change our minds.\n\nChanging PG to match MySQL may rankle loyalists' feathers a bit,\nbut if we can relatively painlessly make it easy to port from MySQL\nto PG, it's a win.\n\n-- \n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nIndependent Knowledge Management Consultant\n\n",
"msg_date": "Thu, 18 Oct 2001 13:44:57 -0400 (EDT)",
"msg_from": "Joel Burton <joel@joelburton.com>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "Jochem van Dieten <jochemd@oli.tudelft.nl> writes:\n> I would say the relevant behaviour is neither the one that MySQL \n> historically uses nor the one that PostgreSQL historically uses, but the \n> one that is specified in the relevant standards.\n\nThere aren't any: SQL92 and SQL99 have no such feature. (Although I\nnotice that they list LIMIT as a word likely to become reserved in\nfuture versions.)\n\nAFAIK we copied the idea and the syntax from MySQL ... but we got the\norder of the parameters wrong.\n\nIMHO \"LIMIT n OFFSET n\" is far more readable than \"LIMIT m,n\" anyway.\n(Quick: which number is first in the comma version? By what reasoning\ncould you deduce that if you'd forgotten?) So I think we should\ndeprecate and eventually eliminate the comma version, if we're not\ngoing to conform to the de facto standard for it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 18 Oct 2001 14:18:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit "
},
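Tom's point about the ambiguity of the comma form is easy to demonstrate. SQLite also adopted the MySQL comma syntax, where the first number is the offset, so a quick sketch against an in-memory database (table and data invented for illustration) shows what LIMIT 2, 3 silently means:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])

# MySQL-style comma form: nothing in the syntax says which number is
# the offset -- in MySQL (and SQLite) it is the *first* one.
comma = conn.execute("SELECT n FROM t ORDER BY n LIMIT 2, 3").fetchall()

# The explicit form reads unambiguously: return 3 rows, skipping 2.
explicit = conn.execute("SELECT n FROM t ORDER BY n LIMIT 3 OFFSET 2").fetchall()

print(comma)  # [(2,), (3,), (4,)]
assert comma == explicit
```

Historical PostgreSQL read the two numbers in the opposite order, which is exactly why the same query string could silently return different rows on the two systems.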
{
"msg_contents": "I know the differences between VACUUM and VACUUM ANALYZE have been discussed\nbefore, but I'd like to know how you schedule your cleaning jobs. Right now\nI do a\n\nVACUUM\nVACUUM ANALYZE\n\nevery hour... it takes about 3 minutes to run both. Should I run ANALYZE\nless often?\n\nMark\nEpilogue.net\n\n",
"msg_date": "Thu, 18 Oct 2001 14:33:06 -0400",
"msg_from": "\"Mark Coffman\" <mark@epilogue.net>",
"msg_from_op": false,
"msg_subject": "VACUUM vs VACUUM ANALYZE"
},
{
"msg_contents": "I think that's a grand idea. Mysql does a lot of things in an 'odd' way \nand I prefer the unambiguous LIMIT .. OFFSET form, it follows the design \nof SQL in general.\n\n-d\n\nBruce Momjian wrote:\n\n>OK, it seems enough people don't want this change that we have to do\n>something. What do people suggest? Can we throw an elog(NOTICE)\n>message in 7.2 stating that LIMIT #,# will disappear in the next release\n>and to start using LIMIT/OFFSET. That way, people can migrate their\n>code to LIMIT/OFFSET during 7.2 and it can disappear in 7.3?\n>\n>I frankly think the LIMIT #,# is way too confusing anyway and would be\n>glad to have it removed.\n>\n\n\n",
"msg_date": "Thu, 18 Oct 2001 16:23:04 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "As a user of both MySQL and PostgreSQL I can say that I would *love* it if\nyou went with \"LIMIT n OFFSET m\" instead of \"LIMIT m,n\". *every* time I\nuse the offset feature I have to look it up in the manual or some other\ncode snippet that has it (and where it's clear).\n\nEven it broke some script I'd written it's pretty easy to find and fix\nit...\n\njust my 2 cents...\n\nOn Thu, 18 Oct 2001, Tom Lane wrote:\n\n> Jochem van Dieten <jochemd@oli.tudelft.nl> writes:\n> > I would say the relevant behaviour is neither the one that MySQL\n> > historically uses nor the one that PostgreSQL historically uses, but the\n> > one that is specified in the relevant standards.\n>\n> There aren't any: SQL92 and SQL99 have no such feature. (Although I\n> notice that they list LIMIT as a word likely to become reserved in\n> future versions.)\n>\n> AFAIK we copied the idea and the syntax from MySQL ... but we got the\n> order of the parameters wrong.\n>\n> IMHO \"LIMIT n OFFSET n\" is far more readable than \"LIMIT m,n\" anyway.\n> (Quick: which number is first in the comma version? By what reasoning\n> could you deduce that if you'd forgotten?) So I think we should\n> deprecate and eventually eliminate the comma version, if we're not\n> going to conform to the de facto standard for it.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n",
"msg_date": "Thu, 18 Oct 2001 14:02:22 -0700 (PDT)",
"msg_from": "Philip Hallstrom <philip@adhesivemedia.com>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Jochem van Dieten <jochemd@oli.tudelft.nl> writes:\n> \n>>I would say the relevant behaviour is neither the one that MySQL \n>>historically uses nor the one that PostgreSQL historically uses, but the \n>>one that is specified in the relevant standards.\n>>\n> \n> There aren't any: SQL92 and SQL99 have no such feature. (Although I\n> notice that they list LIMIT as a word likely to become reserved in\n> future versions.)\n\n\nBut according to the list in the PostgreSQL docs OFFSET is not a \nreserved word. Is it one of the 'likely to become reserved' words?\n\n\n> IMHO \"LIMIT n OFFSET n\" is far more readable than \"LIMIT m,n\" anyway.\n> (Quick: which number is first in the comma version? By what reasoning\n> could you deduce that if you'd forgotten?) So I think we should\n> deprecate and eventually eliminate the comma version, if we're not\n> going to conform to the de facto standard for it.\n\n\nI agree that LIMIT n OFFSET n is by far the most readable format, and is \ntherefore the desirable format. But I am not sure about deprecating and \neliminating the other syntax. Above all it should be avoided that it is \nnow deprecated but is included in the next SQL standard and has to be \nadded again.\n\nFor now, I abstain.\n\nJochem\n\n\n\n",
"msg_date": "Thu, 18 Oct 2001 23:04:25 +0200",
"msg_from": "Jochem van Dieten <jochemd@oli.tudelft.nl>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "\nOK, I see several votes that say remove LIMIT #,# now, in 7.2 and throw\nan error telling them to use LIMIT # OFFSET #.\n\nThe only other option is to throw a NOTICE that LIMIT #,# will go away\nin 7.3.\n\nUnless I hear otherwise, I will assume people prefer the first option.\n\n\n---------------------------------------------------------------------------\n> As a user of both MySQL and PostgreSQL I can say that I would *love* it if\n> you went with \"LIMIT n OFFSET m\" instead of \"LIMIT m,n\". *every* time I\n> use the offset feature I have to look it up in the manual or some other\n> code snippet that has it (and where it's clear).\n> \n> Even it broke some script I'd written it's pretty easy to find and fix\n> it...\n> \n> just my 2 cents...\n> \n> On Thu, 18 Oct 2001, Tom Lane wrote:\n> \n> > Jochem van Dieten <jochemd@oli.tudelft.nl> writes:\n> > > I would say the relevant behaviour is neither the one that MySQL\n> > > historically uses nor the one that PostgreSQL historically uses, but the\n> > > one that is specified in the relevant standards.\n> >\n> > There aren't any: SQL92 and SQL99 have no such feature. (Although I\n> > notice that they list LIMIT as a word likely to become reserved in\n> > future versions.)\n> >\n> > AFAIK we copied the idea and the syntax from MySQL ... but we got the\n> > order of the parameters wrong.\n> >\n> > IMHO \"LIMIT n OFFSET n\" is far more readable than \"LIMIT m,n\" anyway.\n> > (Quick: which number is first in the comma version? By what reasoning\n> > could you deduce that if you'd forgotten?) 
So I think we should\n> > deprecate and eventually eliminate the comma version, if we're not\n> > going to conform to the de facto standard for it.\n> >\n> > \t\t\tregards, tom lane\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 18 Oct 2001 17:31:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": " LIMIT m OFFSET m *is* there now..\n\n There is a LIMIT m,n syntax too I guess, though it appears that it's\nbackwards from MySQL..\n\n I don't see much point in having two different ways of doing the same\nthing unless you wanted to maintain compatibility with another RDBMS - but\nthat doesn't appear to be the case here (isn't that reversed from the MySQL\nimplementation?).. However, removing it now is going to break people's SQL..\nI didn't know you could LIMIT m,n until today so I wouldn't have a clue as\nto how many people actually use that syntax. Perhaps the idea of tossing a\nnotice up that that syntax is going away in the next release would be a\nbetter idea than just yanking it out right away - then we can see how many\npeople complain :-)\n\n-Mitch\n\n> As a user of both MySQL and PostgreSQL I can say that I would *love* it if\n> you went with \"LIMIT n OFFSET m\" instead of \"LIMIT m,n\". *every* time I\n> use the offset feature I have to look it up in the manual or some other\n> code snippet that has it (and where it's clear).\n>\n> Even it broke some script I'd written it's pretty easy to find and fix\n> it...\n\n\n",
"msg_date": "Thu, 18 Oct 2001 19:35:39 -0400",
"msg_from": "\"Mitch Vincent\" <mvincent@cablespeed.com>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit "
},
{
"msg_contents": "On Thu, Oct 18, 2001 at 02:33:06PM -0400, Mark Coffman wrote:\n> I know the differences between VACUUM and VACUUM ANALYZE have been discussed\n> before, but I'd like to know how you schedule your cleaning jobs. Right now\n> I do a\n> \n> VACUUM\n> VACUUM ANALYZE\n\nvacuum analyze does a vacuum anyway, so you don't need both.\n\n> every hour... it takes about 3 minutes to run both. Should I run ANALYZE\n> less often?\n\nHere we do it once per day, though after a major set of updates i run it\nmanually. We're not under heavy load though.\n-- \nMartijn van Oosterhout <kleptog@svana.org>\nhttp://svana.org/kleptog/\n> Magnetism, electricity and motion are like a three-for-two special offer:\n> if you have two of them, the third one comes free.\n",
"msg_date": "Fri, 19 Oct 2001 09:49:15 +1000",
"msg_from": "Martijn van Oosterhout <kleptog@svana.org>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM vs VACUUM ANALYZE"
},
{
"msg_contents": "\"Mark Coffman\" <mark@epilogue.net> writes:\n\n> I know the differences between VACUUM and VACUUM ANALYZE have been discussed\n> before, but I'd like to know how you schedule your cleaning jobs. Right now\n> I do a\n> \n> VACUUM\n> VACUUM ANALYZE\n> \n> every hour... it takes about 3 minutes to run both. Should I run ANALYZE\n> less often?\n\nANALYZE includes regular VACUUM functionality, so you don't have to do \nboth. So you're probably down to 2 minutes now. ;)\n\nThere's nothing wrong with running every hour--it depends on the\nsize and activity level of your DB.\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n",
"msg_date": "18 Oct 2001 20:06:11 -0400",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM vs VACUUM ANALYZE"
},
{
"msg_contents": "Jochem van Dieten <jochemd@oli.tudelft.nl> writes:\n> Tom Lane wrote:\n>> There aren't any: SQL92 and SQL99 have no such feature. (Although I\n>> notice that they list LIMIT as a word likely to become reserved in\n>> future versions.)\n\n> But according to the list in the PostgreSQL docs OFFSET is not a \n> reserved word. Is it one of the 'likely to become reserved' words?\n\nNope, it's not listed. There's no guarantee that their intended use\nis the same as ours, anyway, so I don't put any stock in this as a\nreason to make a decision now. It was just an observation in passing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 18 Oct 2001 21:12:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit "
},
{
"msg_contents": "> But according to the list in the PostgreSQL docs OFFSET is not a \n> reserved word. Is it one of the 'likely to become reserved' words?\n> \n> \n> > IMHO \"LIMIT n OFFSET n\" is far more readable than \"LIMIT m,n\" anyway.\n> > (Quick: which number is first in the comma version? By what reasoning\n> > could you deduce that if you'd forgotten?) So I think we should\n> > deprecate and eventually eliminate the comma version, if we're not\n> > going to conform to the de facto standard for it.\n> \n> \n> I agree that LIMIT n OFFSET n is by far the most readable format, and is \n> therefore the desirable format. But I am not sure about deprecating and \n> eliminating the other syntax. Above all it should be avoided that it is \n> now deprecated but is included in the next SQL standard and has to be \n> added again.\n\nI am confused. While LIMIT and OFFSET may be potential SQL standard\nreserved words, I don't see how LIMIT #,# would ever be a standard\nspecification. Do you see this somewhere I am missing? Again, LIMIT\n#,# is the only syntax we are removing.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 18 Oct 2001 21:26:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "\"Mark Coffman\" <mark@epilogue.net> writes:\n> I do a\n\n> VACUUM\n> VACUUM ANALYZE\n\n> every hour... it takes about 3 minutes to run both. Should I run ANALYZE\n> less often?\n\nVACUUM ANALYZE is a superset of VACUUM; there's certainly no reason to\ndo both one after the other.\n\nAs to whether you should do plain VACUUM some hours and VACUUM ANALYZE\nothers, that depends --- how fast are the statistics of your data\nchanging? If the stats (such as column minimum and maximum values) are\nrelatively stable, you could get away with fewer ANALYZEs. Maybe do one\nANALYZE every night at an off-peak time, and just plain VACUUM the rest\nof the day.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 18 Oct 2001 21:34:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM vs VACUUM ANALYZE "
},
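Tom's suggested schedule maps directly onto two cron entries. This is only a sketch: the database name "mydb" and the 3am off-peak slot are placeholders, and because VACUUM ANALYZE subsumes plain VACUUM, the hourly job simply skips that hour:

```
# Hypothetical crontab for the schedule described above ("mydb" is a placeholder).
# Plain VACUUM every hour except 3am; full VACUUM ANALYZE once, off-peak.
0 0-2,4-23 * * *  psql -d mydb -c 'VACUUM'          >/dev/null
0 3        * * *  psql -d mydb -c 'VACUUM ANALYZE'  >/dev/null
```

How often ANALYZE is actually needed depends on how fast the column statistics drift, per Tom's note about minimum and maximum values.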
{
"msg_contents": "Bruce Momjian wrote:\n\n>>\n>>>IMHO \"LIMIT n OFFSET n\" is far more readable than \"LIMIT m,n\" anyway.\n>>>(Quick: which number is first in the comma version? By what reasoning\n>>>could you deduce that if you'd forgotten?) So I think we should\n>>>deprecate and eventually eliminate the comma version, if we're not\n>>>going to conform to the de facto standard for it.\n>>\n>>I agree that LIMIT n OFFSET n is by far the most readable format, and is \n>>therefore the desirable format. But I am not sure about deprecating and \n>>eliminating the other syntax. Above all it should be avoided that it is \n>>now deprecated but is included in the next SQL standard and has to be \n>>added again.\n> \n> I am confused. While LIMIT and OFFSET may are potential SQL standard\n> reserved words, I don't see how LIMIT #,# would ever be a standard\n> specification. Do you see this somewhere I am missing. Again, LIMIT\n> #,# is the only syntax we are removing.\n\n\nIf you are confident that LIMIT #,# would never be an official SQL \nstandard who am I to second guess that ;) I don't see that possibility \nanywhere either, but I just wanted to make sure. The possibility that it \nmight become an official standard is the only objection I had against \ndeprecating and eventual elimination of that syntax.\n\nLIMIT # OFFSET # has my vote.\n\nJochem\n\n",
"msg_date": "Fri, 19 Oct 2001 13:26:27 +0200",
"msg_from": "Jochem van Dieten <jochemd@oli.tudelft.nl>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> \n> >>\n> >>>IMHO \"LIMIT n OFFSET n\" is far more readable than \"LIMIT m,n\" anyway.\n> >>>(Quick: which number is first in the comma version? By what reasoning\n> >>>could you deduce that if you'd forgotten?) So I think we should\n> >>>deprecate and eventually eliminate the comma version, if we're not\n> >>>going to conform to the de facto standard for it.\n> >>\n> >>I agree that LIMIT n OFFSET n is by far the most readable format, and is \n> >>therefore the desirable format. But I am not sure about deprecating and \n> >>eliminating the other syntax. Above all it should be avoided that it is \n> >>now deprecated but is included in the next SQL standard and has to be \n> >>added again.\n> > \n> > I am confused. While LIMIT and OFFSET may are potential SQL standard\n> > reserved words, I don't see how LIMIT #,# would ever be a standard\n> > specification. Do you see this somewhere I am missing. Again, LIMIT\n> > #,# is the only syntax we are removing.\n> \n> \n> If you are confident that LIMIT #,# would never be an official SQL \n> standard who am I to second guess that ;) I don't see that possibility \n> anywhere either, but I just wanted to make sure. The possibility that it \n> might become an official standard is the only objection I had against \n> deprecating and eventual elimination of that syntax.\n> \n> LIMIT # OFFSET # has my vote.\n\nOK, we have received only one vote to keep LIMIT #,# working for one\nmore release, and several to remove it so I am committing a patch now to\nremove LIMIT #,# and instead have them use LIMIT # OFFSET #:\n\n test=> select * from pg_class LIMIT 1,1;\n ERROR: LIMIT #,# syntax no longer supported. 
Use LIMIT # OFFSET #.\n\nThis message will not be removed in later releases because people\nporting from MySQL will need to have it there even after our users have\nported their queries.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 19 Oct 2001 22:42:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "> If you are confident that LIMIT #,# would never be an official SQL \n> standard who am I to second guess that ;) I don't see that possibility \n> anywhere either, but I just wanted to make sure. The possibility that it \n> might become an official standard is the only objection I had against \n> deprecating and eventual elimination of that syntax.\n> \n> LIMIT # OFFSET # has my vote.\n\nOne more thing. I have added the code to suggest alternate syntax for\nLIMIT #,#:\n\n test=> select * from pg_class LIMIT 1,1;\n ERROR: LIMIT #,# syntax no longer supported. Use LIMIT # OFFSET #.\n\nIf there are other queries that use syntax that frequently fails, I\nwould like to hear about it so we can generate a helpful error message\nrather than just a generic syntax error.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 19 Oct 2001 22:51:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> One more thing. I have added the code to suggest alternate syntax for\n> LIMIT #,#:\n\n> test=> select * from pg_class LIMIT 1,1;\n> ERROR: LIMIT #,# syntax no longer supported. Use LIMIT # OFFSET #.\n\nIf you're going to do that, *please* suggest the *correct* substitution.\nAFAICT, our version of LIMIT m,n transposes to OFFSET m LIMIT n; but\nyour message suggests the opposite.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 19 Oct 2001 23:44:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > One more thing. I have added the code to suggest alternate syntax for\n> > LIMIT #,#:\n> \n> > test=> select * from pg_class LIMIT 1,1;\n> > ERROR: LIMIT #,# syntax no longer supported. Use LIMIT # OFFSET #.\n> \n> If you're going to do that, *please* suggest the *correct* substitution.\n> AFAICT, our version of LIMIT m,n transposes to OFFSET m LIMIT n; but\n> your message suggests the opposite.\n\nRemember, the 7.1 code was:\n\n! select_limit: LIMIT select_limit_value ',' select_offset_value\n! { $$ = makeList2($4, $2); }\n\nThis was changed a few weeks ago to match MySQL, and only today removed.\n\nHowever, our new message suggests the old PostgreSQL syntax, not the\nMySQL syntax. Optimally we should ship with this ordering for 7.2 and\nreverse it for 7.3 or 7.4.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 20 Oct 2001 00:54:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> If you're going to do that, *please* suggest the *correct* substitution.\n>> AFAICT, our version of LIMIT m,n transposes to OFFSET m LIMIT n; but\n>> your message suggests the opposite.\n\n> Remember, the 7.1 code was:\n\n> ! select_limit: LIMIT select_limit_value ',' select_offset_value\n> ! { $$ = makeList2($4, $2); }\n\n> This was changed a few weeks ago to match MySQL, and only today removed.\n\nWups, you're right, I was looking at the cvs-tip code not 7.1.\nWhat was that about the order not being easy to remember? :-(\n\n> However, our new message suggests the old PostgreSQL syntax, not the\n> MySQL syntax. Optimally we should ship with this ordering for 7.2 and\n> reverse it for 7.3 or 7.4.\n\nActually, it seems that the message should point out *both* the\nold-Postgres and the MySQL translations. One camp or the other\nis going to get burnt otherwise. Maybe:\n\nERROR: LIMIT #,# syntax no longer supported. Use LIMIT # OFFSET #.\n\tIf translating pre-7.2 Postgres: LIMIT m,n => LIMIT m OFFSET n\n\tIf translating MySQL: LIMIT m,n => OFFSET m LIMIT n\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 20 Oct 2001 01:01:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit "
},
{
"msg_contents": "I think it is a better idea to yank it out now then rather later on..\ncos either way our SQL codes gonna get broken.. sooner or later.. it\nwon't make much difference now or later.\n",
"msg_date": "20 Oct 2001 00:27:02 -0700",
"msg_from": "huongch@bigfoot.com (Flancer)",
"msg_from_op": true,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "Tom Lane writes:\n\n> ERROR: LIMIT #,# syntax no longer supported. Use LIMIT # OFFSET #.\n> \tIf translating pre-7.2 Postgres: LIMIT m,n => LIMIT m OFFSET n\n> \tIf translating MySQL: LIMIT m,n => OFFSET m LIMIT n\n\nI think someone is panicking here for no reason, but I'm not sure who it\nis. If we think that LIMIT x,y should be phased out, then let's add that\nto the documentation and remove it in a later release, so people have a\nchance to prepare. But you're removing a perfectly fine feature that has\nreceived no attention in the last two years on the last day before beta\nbecause of a mysterious crowd of people porting from MySQL. Let me tell\nyou: People porting from MySQL are going to have a lot of other problems\nbefore they find out that LIMIT works differently.\n\nIn addition I want to repeat my object to notices and errors that are\nteaching syntax or trying to give smart tips. If a command is not\nsyntactically correct then it's a syntax error. If the command used to be\ncorrect, might be correct in the future, or is correct in some other life\nthen it's still a syntax error. If we want to have a \"tip mode\" then\nlet's have one, but until that happens the documentation is the place to\nexplain error messages or give advice how to avoid them.\n\nNow, of course this whole situation is a bit unfortunate because of the\nsyntax mixup. But let's remember that most people that are going to use\nPostgreSQL 7.2 are the people that are using PostgreSQL 7.1 now, and\nthey're going to be a lot happier the less they're going to be annoyed by\ngratuitous breaks in compatibility that had no prior notice at all.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 20 Oct 2001 13:46:15 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> If you're going to do that, *please* suggest the *correct* substitution.\n> >> AFAICT, our version of LIMIT m,n transposes to OFFSET m LIMIT n; but\n> >> your message suggests the opposite.\n> \n> > Remember, the 7.1 code was:\n> \n> > ! select_limit: LIMIT select_limit_value ',' select_offset_value\n> > ! { $$ = makeList2($4, $2); }\n> \n> > This was changed a few weeks ago to match MySQL, and only today removed.\n> \n> Wups, you're right, I was looking at the cvs-tip code not 7.1.\n> What was that about the order not being easy to remember? :-(\n\nConfusing syntax proven!\n\n> > However, our new message suggests the old PostgreSQL syntax, not the\n> > MySQL syntax. Optimally we should ship with this ordering for 7.2 and\n> > reverse it for 7.3 or 7.4.\n> \n> Actually, it seems that the message should point out *both* the\n> old-Postgres and the MySQL translations. One camp or the other\n> is going to get burnt otherwise. Maybe:\n> \n> ERROR: LIMIT #,# syntax no longer supported. Use LIMIT # OFFSET #.\n> \tIf translating pre-7.2 Postgres: LIMIT m,n => LIMIT m OFFSET n\n> \tIf translating MySQL: LIMIT m,n => OFFSET m LIMIT n\n\nI opted for a more generic message which makes clear the person it is\nnot a cut-and-past error message:\n\t\n\ttest=> select * from pg_class LIMIT 1,1;\n\tERROR: LIMIT #,# syntax no longer supported.\n\t Use separate LIMIT and OFFSET clauses.\n\nThat should take care of it in a flexible way.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 20 Oct 2001 12:42:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "> Tom Lane writes:\n> \n> > ERROR: LIMIT #,# syntax no longer supported. Use LIMIT # OFFSET #.\n> > \tIf translating pre-7.2 Postgres: LIMIT m,n => LIMIT m OFFSET n\n> > \tIf translating MySQL: LIMIT m,n => OFFSET m LIMIT n\n> \n> I think someone is panicking here for no reason, but I'm not sure who it\n> is. If we think that LIMIT x,y should be phased out, then let's add that\n> to the documentation and remove it in a later release, so people have a\n\nWe took a vote on 'general' and had only one person who wanted it kept\nfor an additional release and several who wanted it removed right now.\n\n> chance to prepare. But you're removing a perfectly fine feature that has\n\nWe are not removing the feature so much as forcing a syntax change on\nuser queries.\n\n> received no attention in the last two years on the last day before beta\n\nWe have been _near_ beta for over one month now. It doesn't seem wise\nto let this time just go to waste so I am trying to do what I can to\nmove PostgreSQL forward during this period.\n\nThe LIMIT #,# was actuall changed a almost a month ago in gram.y:\n\t\n\trevision 2.253\n\tdate: 2001/09/23 03:39:01; author: momjian; state: Exp; lines: +3 -3\n\tImplement TODO item:\n\t\n\t * Change LIMIT val,val to offset,limit to match MySQL\n\nThis new activity is because someone asked about why the change was made\nand the discussion on general led to this solution.\n\n> because of a mysterious crowd of people porting from MySQL. Let me tell\n> you: People porting from MySQL are going to have a lot of other problems\n> before they find out that LIMIT works differently.\n\nTrue.\n\n> In addition I want to repeat my object to notices and errors that are\n> teaching syntax or trying to give smart tips. If a command is not\n> syntactically correct then it's a syntax error. If the command used to be\n> correct, might be correct in the future, or is correct in some other life\n> then it's still a syntax error. 
If we want to have a \"tip mode\" then\n> let's have one, but until that happens the documentation is the place to\n> explain error messages or give advice how to avoid them.\n\nI disagree. If the 'tip' is localized to a few lines, usually in\ngram.y, I don't see a reason not to help people find the right answer. \nIt helps them and reduces redundant bug repots. I can't imagine a\nreason not to do it unless it starts to make our code more complex.\n\nI don't want to jump through hoops to give people tips, but if it is\neasy, let's do it.\n\n> Now, of course this whole situation is a bit unfortunate because of the\n> syntax mixup. But let's remember that most people that are going to use\n> PostgreSQL 7.2 are the people that are using PostgreSQL 7.1 now, and\n> they're going to be a lot happier the less they're going to be annoyed by\n> gratuitous breaks in compatibility that had no prior notice at all.\n\nAgain, we took a vote on general. If there are people who want this\nkept around for another release, we can do it. Let's hear from you.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 20 Oct 2001 12:50:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "> > > I am confused. While LIMIT and OFFSET may are potential SQL standard\n> > > reserved words, I don't see how LIMIT #,# would ever be a standard\n> > > specification. Do you see this somewhere I am missing. Again, LIMIT\n> > > #,# is the only syntax we are removing.\n> > If you are confident that LIMIT #,# would never be an official SQL\n> > standard who am I to second guess that ;) I don't see that possibility\n> > anywhere either, but I just wanted to make sure. The possibility that it\n> > might become an official standard is the only objection I had against\n> > deprecating and eventual elimination of that syntax.\n> > LIMIT # OFFSET # has my vote.\n> OK, we have received only one vote to keep LIMIT #,# working for one\n> more release, and several to remove it so I am committing a patch now to\n> remove LIMIT #,# and instead have them use LIMIT # OFFSET #:\n\nI've cc'd this to the hackers list. I know the discussion started on\ngeneral, but if I hadn't been subscribed to *that* list I'd have never\nknown about any of this. And noone else on hackers would either.\n\n - Thomas\n",
"msg_date": "Mon, 22 Oct 2001 05:51:42 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "(switched thread to hackers)\n\n> ... If the 'tip' is localized to a few lines, usually in\n> gram.y, I don't see a reason not to help people find the right answer.\n> It helps them and reduces redundant bug repots. I can't imagine a\n> reason not to do it unless it starts to make our code more complex.\n\nI'm with Peter on this one. I'd like to *not* clutter up the code and\nerror reporting with hints and suggestions which may or may not be to\nthe point.\n\nWe *should* have docs which list error messages and possible solutions,\nand throwing that info into code is a poor second choice imho.\n\n - Thomas\n",
"msg_date": "Mon, 22 Oct 2001 05:56:54 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "> > > > I am confused. While LIMIT and OFFSET may are potential SQL standard\n> > > > reserved words, I don't see how LIMIT #,# would ever be a standard\n> > > > specification. Do you see this somewhere I am missing. Again, LIMIT\n> > > > #,# is the only syntax we are removing.\n> > > If you are confident that LIMIT #,# would never be an official SQL\n> > > standard who am I to second guess that ;) I don't see that possibility\n> > > anywhere either, but I just wanted to make sure. The possibility that it\n> > > might become an official standard is the only objection I had against\n> > > deprecating and eventual elimination of that syntax.\n> > > LIMIT # OFFSET # has my vote.\n> > OK, we have received only one vote to keep LIMIT #,# working for one\n> > more release, and several to remove it so I am committing a patch now to\n> > remove LIMIT #,# and instead have them use LIMIT # OFFSET #:\n> \n> I've cc'd this to the hackers list. I know the discussion started on\n> general, but if I hadn't been subscribed to *that* list I'd have never\n> known about any of this. And noone else on hackers would either.\n\nThe discussion has moved from patches to general so our general users\ncould comment on this.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Oct 2001 13:00:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "> (switched thread to hackers)\n> \n> > ... If the 'tip' is localized to a few lines, usually in\n> > gram.y, I don't see a reason not to help people find the right answer.\n> > It helps them and reduces redundant bug repots. I can't imagine a\n> > reason not to do it unless it starts to make our code more complex.\n> \n> I'm with Peter on this one. I'd like to *not* clutter up the code and\n> error reporting with hints and suggestions which may or may not be to\n> the point.\n> \n> We *should* have docs which list error messages and possible solutions,\n> and throwing that info into code is a poor second choice imho.\n\nIs it really clutter to add a clause and elog(). I am not advocating\nadding stuff like crazy, but when we see people having the same problem,\nit seems worth adding it. Our docs are pretty big and most people who\nhave this type of problem are not going to know where to look in the\ndocs. If the elog pointed them to the proper section in the docs, that\nwould be even better, but then again, you are doing the elog at that\npoint.\n\nWhat do others think? It would be good to have a specific example to\ndiscuss.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Oct 2001 13:22:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > (switched thread to hackers)\n> >\n> > > ... If the 'tip' is localized to a few lines, usually in\n> > > gram.y, I don't see a reason not to help people find the right answer.\n> > > It helps them and reduces redundant bug repots. I can't imagine a\n> > > reason not to do it unless it starts to make our code more complex.\n> >\n> > I'm with Peter on this one. I'd like to *not* clutter up the code and\n> > error reporting with hints and suggestions which may or may not be to\n> > the point.\n> >\n> > We *should* have docs which list error messages and possible solutions,\n> > and throwing that info into code is a poor second choice imho.\n> \n> Is it really clutter to add a clause and elog(). I am not advocating\n> adding stuff like crazy, but when we see people having the same problem,\n> it seems worth adding it. Our docs are pretty big and most people who\n> have this type of problem are not going to know where to look in the\n> docs. If the elog pointed them to the proper section in the docs, that\n> would be even better, but then again, you are doing the elog at that\n> point.\n> \n> What do others think? It would be good to have a specific example to\n> discuss.\n\nFWIW, Oracle has its \"oerr\" utility which takes the arguments:\n\noerr facility error-code\n\nSo the RDBMS generates an error code with a single line message less\nthan or equal to 76 characters in length, prefixed by the facility\nand error code:\n\nORA-01034: ORACLE not available\n\nThe user can then get detailed information through the oerr utility.\nIt would be nice, when we have error codes (are they apart of the\nnew NLS support?), we have a \"pgerr\" utility to serve the same\npurpose. 
And of course the message files shipped with Oracle contain\nlocalized messages.\n\nExample output:\n\n$oerr ora 12203\n\n12203, 00000, \"TNS:unable to connect to destination\"\n// *Cause: Invalid TNS address supplied or destination is not\nlistening.\n// This error can also occur because of underlying network transport\n// problems.\n// *Action: Verify that the service name you entered on the command\nline\n// was correct. Ensure that the listener is running at the remote\nnode and\n// that the ADDRESS parameters specified in TNSNAMES.ORA are\ncorrect.\n// Finally, check that all Interchanges needed to make the\nconnection are\n// up and running.\n\nIt would then be nice to have both a command-line version of the\nPostgreSQL equivalent and a web-based version on postgresql.org for\nusers to use. \n\nJust my 2 cents, of course,\n\nMike Mascari\nmascarm@mascari.com\n",
"msg_date": "Mon, 22 Oct 2001 16:00:03 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the select "
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> I'm with Peter on this one. I'd like to *not* clutter up the code and\n> error reporting with hints and suggestions which may or may not be to\n> the point.\n> We *should* have docs which list error messages and possible solutions,\n> and throwing that info into code is a poor second choice imho.\n\nWhile you have a point in the abstract, a big difficulty is that the\ndocs never track the code with any accuracy. Look at the \"Outputs\"\nportions of our existing reference pages. To the extent that they\ndescribe possible errors at all, the information is a sad joke: out of\ndate in most cases, certainly incomplete in every case. Just last week\nI was thinking that we should rip all that stuff out, rather than\npretend it is or ever will be accurate.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 Oct 2001 16:07:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the select limit "
},
{
"msg_contents": "> Thomas Lockhart <lockhart@fourpalms.org> writes:\n> > I'm with Peter on this one. I'd like to *not* clutter up the code and\n> > error reporting with hints and suggestions which may or may not be to\n> > the point.\n> > We *should* have docs which list error messages and possible solutions,\n> > and throwing that info into code is a poor second choice imho.\n> \n> While you have a point in the abstract, a big difficulty is that the\n> docs never track the code with any accuracy. Look at the \"Outputs\"\n> portions of our existing reference pages. To the extent that they\n> describe possible errors at all, the information is a sad joke: out of\n> date in most cases, certainly incomplete in every case. Just last week\n> I was thinking that we should rip all that stuff out, rather than\n> pretend it is or ever will be accurate.\n\nI recommend tips when they are one line in length, have a high\nprobability of being accurate, and are common mistakes. Anything longer\nand we should point to a specific section in the docs.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Oct 2001 16:54:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n\n[snip]\n\n> \n> What do others think? \n\nPlease reverse your change and go into beta quickly.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Tue, 23 Oct 2001 09:32:29 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the select "
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> \n> [snip]\n> \n> > \n> > What do others think? \n> \n> Please reverse your change and go into beta quickly.\n\nI need more information. What do you want reversed, and are there\nenough votes to reverse those votes already made?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Oct 2001 20:57:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the select"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian wrote:\n> > >\n> >\n> > [snip]\n> >\n> > >\n> > > What do others think?\n> >\n> > Please reverse your change and go into beta quickly.\n> \n> I need more information. What do you want reversed,\n\nrevision 2.253\ndate: 2001/09/23 03:39:01; author: momjian; state: Exp; lines: +3 -3\n Implement TODO item:\n \n * Change LIMIT val,val to offset,limit to match MySQL\n\nand the related description in HISTORY(Migration to 7.2).\n\n> and are there\n> enough votes to reverse those votes already made?\n\nI don't think that enough votes are needed to reverse \nthe change. You broke the discussion fisrt rule.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Tue, 23 Oct 2001 10:14:18 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the "
},
{
"msg_contents": "> > I need more information. What do you want reversed,\n> \n> revision 2.253\n> date: 2001/09/23 03:39:01; author: momjian; state: Exp; lines: +3 -3\n> Implement TODO item:\n> \n> * Change LIMIT val,val to offset,limit to match MySQL\n> \n> and the related description in HISTORY(Migration to 7.2).\n\n\n> \n> > and are there\n> > enough votes to reverse those votes already made?\n> \n> I don't think that enough votes are needed to reverse \n> the change. You broke the discussion fisrt rule.\n\nIt was on the TODO list, and I did exactly what was listed there. What\nwe have now is a discussion that the TODO item was wrong.\n\nI also have very few votes to just put it back to how it was in 7.1. We\nhave votes for throwing a NOTICE that this syntax is going away, and\nvotes to remove it completely in 7.2. We also have few votes to merely\nreverse the meaning of the numbers.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Oct 2001 21:28:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the selectlimit"
},
{
"msg_contents": "Good day,\n\nMy name is John Worsley, I'm one of the authors of the new O'Reilly\nPostgrSQL book. We're wrapping up the PL/pgSQL chapter's technical edit\nright now, but there are a couple of concerns that I was hoping someone\nmight be able to help with.\n\nMainly, the existing documentation on the RENAME statement seems\ninaccurate; it states that you can re-name variables, records, or\nrowtypes. However, in practice, our tests show that attempting to RENAME\nvalid variables with:\n\n RENAME varname TO newname;\n\n...yeilds a PL/pgSQL parse error, inexplicably. If I try the same syntax\non a non-declared variable, it actually says \"there is no variable\" with\nthat name in the current block, so...I think something odd is happening. :)\n\nI believe we have only gotten RENAME to work with either the NEW or OLD\nrecord variables when using PL/pgSQL with triggers, but the documentation\nsuggests that this should be a general-purpose statement.\n\nAny assistance would be greatly appreciated. :)\n\nThe RENAME statement seems kind of odd, since it seems that you could just\nas easily declare a general variable with the right name to begin with,\nand maybe that's why this isn't apparently documented anywhere else? I\njust want to make sure the documentation is both accurate and complete.\n\n\nKind Regards,\nJw.\n-- \nJohn Worsley, Command Prompt, Inc.\njlx@commandprompt.com by way of pgsql-hackers@commandprompt.com\n\n",
"msg_date": "Mon, 22 Oct 2001 18:36:31 -0700 (PDT)",
"msg_from": "\"Command Prompt, Inc.\" <pgsql-hackers@commandprompt.com>",
"msg_from_op": false,
"msg_subject": "PL/pgSQL RENAME bug?"
},
{
"msg_contents": "\"Command Prompt, Inc.\" <pgsql-hackers@commandprompt.com> writes:\n> Mainly, the existing documentation on the RENAME statement seems\n> inaccurate; it states that you can re-name variables, records, or\n> rowtypes. However, in practice, our tests show that attempting to RENAME\n> valid variables with:\n> RENAME varname TO newname;\n> ...yeilds a PL/pgSQL parse error, inexplicably. If I try the same syntax\n> on a non-declared variable, it actually says \"there is no variable\" with\n> that name in the current block, so...I think something odd is happening. :)\n\nYup, this is a bug. The plpgsql grammar expects varname to be a T_WORD,\nbut in fact the scanner will only return T_WORD for a name that is not\nany known variable name. Thus RENAME cannot possibly work, and probably\nnever has worked.\n\nLooks like it should accept T_VARIABLE, T_RECORD, T_ROW (at least).\nT_WORD ought to draw \"no such variable\". Jan, I think this is your turf...\n\n> The RENAME statement seems kind of odd, since it seems that you could just\n> as easily declare a general variable with the right name to begin with,\n\nIt seems pretty useless to me too. Perhaps it's there because Oracle\nhas one?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 23 Oct 2001 15:31:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL RENAME bug? "
},
{
"msg_contents": "Bruce Momjian writes:\n\n> It was on the TODO list, and I did exactly what was listed there. What\n> we have now is a discussion that the TODO item was wrong.\n\nI don't consider the items on the TODO list to be past the \"adequately\ndiscussed\" stage.\n\nTo the topic at hand: I find reversing the argument order is going to\nsilently break a lot of applications. Removing the syntax altogether\ncould be a reasonable choice, but since it doesn't hurt anyone right now\nI'd prefer an advance notice for one release.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 23 Oct 2001 22:41:54 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the "
},
{
"msg_contents": "Bruce Momjian writes:\n\n> I recommend tips when they are one line in length, have a high\n> probability of being accurate, and are common mistakes. Anything longer\n> and we should point to a specific section in the docs.\n\nI would put \"when porting from MySQL\" into that category.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 23 Oct 2001 22:42:11 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > I recommend tips when they are one line in length, have a high\n> > probability of being accurate, and are common mistakes. Anything longer\n> > and we should point to a specific section in the docs.\n> \n> I would put \"when porting from MySQL\" into that category.\n\nI would too except when we implement the feature backwards and then\nremove it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 Oct 2001 17:25:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the select limit"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > It was on the TODO list, and I did exactly what was listed there. What\n> > we have now is a discussion that the TODO item was wrong.\n> \n> I don't consider the items on the TODO list to be past the \"adequately\n> discussed\" stage.\n> \n> To the topic at hand: I find reversing the argument order is going to\n> silently break a lot of applications. Removing the syntax altogether\n> could be a reasonable choice, but since it doesn't hurt anyone right now\n> I'd prefer an advance notice for one release.\n> \n\nWhich is what we are doing now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 Oct 2001 17:25:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the selectlimit"
},
{
"msg_contents": "\nHas this been addressed?\n\n\n---------------------------------------------------------------------------\n\n> \"Command Prompt, Inc.\" <pgsql-hackers@commandprompt.com> writes:\n> > Mainly, the existing documentation on the RENAME statement seems\n> > inaccurate; it states that you can re-name variables, records, or\n> > rowtypes. However, in practice, our tests show that attempting to RENAME\n> > valid variables with:\n> > RENAME varname TO newname;\n> > ...yeilds a PL/pgSQL parse error, inexplicably. If I try the same syntax\n> > on a non-declared variable, it actually says \"there is no variable\" with\n> > that name in the current block, so...I think something odd is happening. :)\n> \n> Yup, this is a bug. The plpgsql grammar expects varname to be a T_WORD,\n> but in fact the scanner will only return T_WORD for a name that is not\n> any known variable name. Thus RENAME cannot possibly work, and probably\n> never has worked.\n> \n> Looks like it should accept T_VARIABLE, T_RECORD, T_ROW (at least).\n> T_WORD ought to draw \"no such variable\". Jan, I think this is your turf...\n> \n> > The RENAME statement seems kind of odd, since it seems that you could just\n> > as easily declare a general variable with the right name to begin with,\n> \n> It seems pretty useless to me too. Perhaps it's there because Oracle\n> has one?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Nov 2001 21:21:03 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL RENAME bug?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Has this been addressed?\n\nNo ... I punted in Jan's direction ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 06 Nov 2001 23:44:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL RENAME bug? "
},
{
"msg_contents": "\nIs this completed?\n\n\n---------------------------------------------------------------------------\n\n> \"Command Prompt, Inc.\" <pgsql-hackers@commandprompt.com> writes:\n> > Mainly, the existing documentation on the RENAME statement seems\n> > inaccurate; it states that you can re-name variables, records, or\n> > rowtypes. However, in practice, our tests show that attempting to RENAME\n> > valid variables with:\n> > RENAME varname TO newname;\n> > ...yeilds a PL/pgSQL parse error, inexplicably. If I try the same syntax\n> > on a non-declared variable, it actually says \"there is no variable\" with\n> > that name in the current block, so...I think something odd is happening. :)\n> \n> Yup, this is a bug. The plpgsql grammar expects varname to be a T_WORD,\n> but in fact the scanner will only return T_WORD for a name that is not\n> any known variable name. Thus RENAME cannot possibly work, and probably\n> never has worked.\n> \n> Looks like it should accept T_VARIABLE, T_RECORD, T_ROW (at least).\n> T_WORD ought to draw \"no such variable\". Jan, I think this is your turf...\n> \n> > The RENAME statement seems kind of odd, since it seems that you could just\n> > as easily declare a general variable with the right name to begin with,\n> \n> It seems pretty useless to me too. Perhaps it's there because Oracle\n> has one?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 12 Nov 2001 00:08:40 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL RENAME bug?"
},
{
"msg_contents": "Tom Lane wrote:\n> \"Command Prompt, Inc.\" <pgsql-hackers@commandprompt.com> writes:\n> > Mainly, the existing documentation on the RENAME statement seems\n> > inaccurate; it states that you can re-name variables, records, or\n> > rowtypes. However, in practice, our tests show that attempting to RENAME\n> > valid variables with:\n> > RENAME varname TO newname;\n> > ...yeilds a PL/pgSQL parse error, inexplicably. If I try the same syntax\n> > on a non-declared variable, it actually says \"there is no variable\" with\n> > that name in the current block, so...I think something odd is happening. :)\n>\n> Yup, this is a bug. The plpgsql grammar expects varname to be a T_WORD,\n> but in fact the scanner will only return T_WORD for a name that is not\n> any known variable name. Thus RENAME cannot possibly work, and probably\n> never has worked.\n>\n> Looks like it should accept T_VARIABLE, T_RECORD, T_ROW (at least).\n> T_WORD ought to draw \"no such variable\". Jan, I think this is your turf...\n\n Sounds pretty much like that. Will take a look.\n\n>\n> > The RENAME statement seems kind of odd, since it seems that you could just\n> > as easily declare a general variable with the right name to begin with,\n>\n> It seems pretty useless to me too. Perhaps it's there because Oracle\n> has one?\n\n And I don't even remember why I've put it in. Maybe because\n it's an Oracle thing. This would be a cool fix, removing the\n damned thing completely. I like that solution :-)\n\n Anyone against removal?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 20 Feb 2002 13:39:41 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL RENAME bug?"
}
] |
[
{
"msg_contents": "Hi All,\n\nJust wondering if someone could give me an indication as to how to create a\nnew regression test, for the ADD UNIQUE stuff I did?\n\nI intend to do a regression test during the beta, and try to produce a patch\nthat addresses some of the concerns Tom had with the code.\n\nCheers,\n\nChris\n\n",
"msg_date": "Wed, 17 Oct 2001 09:41:05 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Making regression tests"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Just wondering if someone could give me an indication as to how to create a\n> new regression test, for the ADD UNIQUE stuff I did?\n\nIt's not hard. Make a file in src/test/regress/sql containing test\nqueries, and one in src/test/regress/sql/expected containing the\nexpected output (which is just the *actual* output, after you've\nchecked it over). Add the test name to src/test/regress/serial_schedule\nand src/test/regress/parallel_schedule.\n\nIf your test needs to cater for platform-dependent source or output\nthen things are a little harder, but I don't see why an ADD UNIQUE\ntest would need that.\n\nActually, I don't see why you wouldn't just add some more test queries\nto src/test/regress/sql/alter_table.sql and corresponding output to\nsrc/test/regress/expected/alter_table.out. But if you really want\na separate test file, the two _schedule files are the places that\nneed to know about it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 17 Oct 2001 01:25:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Making regression tests "
}
] |
[
{
"msg_contents": "Here is my patches for libpq++ compiling by Sun C++:\n--------------------------------------------------------------------------\n*** ../postgresql-7.1.3.orig//src/Makefile.shlib О©╫О©╫ О©╫О©╫О©╫ 15 10:25:07\n2001\n--- .//src/Makefile.shlib О©╫О©╫ О©╫О©╫О©╫ 17 12:23:41 2001\n***************\n*** 179,185 ****\n ifeq ($(with_gnu_ld), yes)\n LINK.shared += -Wl,-soname,$(soname)\n else\n! LINK.shared += -Wl,-h,$(soname)\n endif\n SHLIB_LINK += -lm -lc\n endif\n--- 179,185 ----\n ifeq ($(with_gnu_ld), yes)\n LINK.shared += -Wl,-soname,$(soname)\n else\n! LINK.shared += -h,$(soname)\n endif\n SHLIB_LINK += -lm -lc\n endif\n*** ../postgresql-7.1.3.orig//src/makefiles/Makefile.solaris О©╫О©╫ О©╫О©╫О©╫ 17\n00:14:25 2000\n--- .//src/makefiles/Makefile.solaris О©╫О©╫ О©╫О©╫О©╫ 17 11:59:33 2001\n***************\n*** 5,12 ****\n ifeq ($(with_gnu_ld), yes)\n export_dynamic = -Wl,-E\n rpath = -Wl,-rpath,$(libdir)\n- else\n- rpath = -Wl,-R$(libdir)\n endif\n shlib_symbolic = -Wl,-Bsymbolic\n--------------------------------------------------------------------------\n\nRegards\nDenis Ustimenko\n\n",
"msg_date": "Wed, 17 Oct 2001 12:55:05 +0700 (NOVST)",
"msg_from": "Denis A Ustimenko <denis@oldham.ru>",
"msg_from_op": true,
"msg_subject": "compiling libpq++ on Solaris with Sun SPRO6U2"
}
] |
[
{
"msg_contents": "Hello,\n\n I am trying to install and run DBBalancer-0.3.0.tar .gz file on\nRedHat Linux 7.0 . I am using PostgreSQL 7.0 which comes with RedHat\nLinux 7.0 distribution. As per the INSTALL file I have installed\nACE-5.2.tar.gz file sucessfully under /usr/local/src. But when I am\nissuing the configure command from src directory , I am getting the\nfollowing error\n\n./configure --with-ACE=/usr/local/src/ACE_wrappers/ace\n\nError: libACE.so wasn't found. If it's installed in your system please\nuse --with-ACE parameter to specify the base directory of the\ninstallation. Compilation will likely to fail in any other case.\n\nRegards\n\nK.PRAMOD KUMAR REDDY\nExaband (India) Private Limited,\n24,Shantiniketan,\nEast Marredpally,\nSecunderabad-26.\n\n*********************************************************************************\nThis email and any files transmitted along with it are confidential and\nintended\nsolely for the use of the individual or entity to whom they are\naddressed.\n\nMail recipients and senders are bound by the\nEXABAND (INDIA) PRIVATE LIMITED Non-Disclosure Policy.\n\nIf you have not signed the Non-Disclosure Policy or if you feel you have\nreceived this mail in error, please notify security@uulogic.com\n*********************************************************************************\n\n\n\n",
"msg_date": "Wed, 17 Oct 2001 13:28:04 +0530",
"msg_from": "Pramod Reddy <pramod.reddy@uulogic.com>",
"msg_from_op": true,
"msg_subject": "DBBalancer bugs"
}
] |
[
{
"msg_contents": "Sorry, previous patch was wrong.\n\nDenis Ustimenko\n--------------------------------------\n*** orig/postgresql-7.1.3//src/makefiles/Makefile.solaris О©╫О©╫ О©╫О©╫О©╫ 17\n00:14:25 2000\n--- postgresql-7.1.3//src/makefiles/Makefile.solaris О©╫О©╫ О©╫О©╫О©╫ 17 14:33:11\n2001\n***************\n*** 6,12 ****\n export_dynamic = -Wl,-E\n rpath = -Wl,-rpath,$(libdir)\n else\n! rpath = -Wl,-R$(libdir)\n endif\n shlib_symbolic = -Wl,-Bsymbolic\n\n--- 6,12 ----\n export_dynamic = -Wl,-E\n rpath = -Wl,-rpath,$(libdir)\n else\n! rpath = -R$(libdir)\n endif\n shlib_symbolic = -Wl,-Bsymbolic\n\n*** orig/postgresql-7.1.3//src/Makefile.shlib О©╫О©╫ О©╫О©╫О©╫ 15 10:25:07 2001\n--- postgresql-7.1.3//src/Makefile.shlib О©╫О©╫ О©╫О©╫О©╫ 17 13:00:29 2001\n***************\n*** 179,185 ****\n ifeq ($(with_gnu_ld), yes)\n LINK.shared += -Wl,-soname,$(soname)\n else\n! LINK.shared += -Wl,-h,$(soname)\n endif\n SHLIB_LINK += -lm -lc\n endif\n--- 179,185 ----\n ifeq ($(with_gnu_ld), yes)\n LINK.shared += -Wl,-soname,$(soname)\n else\n! LINK.shared += -h $(soname)\n endif\n SHLIB_LINK += -lm -lc\n endif\n\n\n",
"msg_date": "Wed, 17 Oct 2001 15:19:20 +0700 (NOVST)",
"msg_from": "Denis A Ustimenko <denis@oldham.ru>",
"msg_from_op": true,
"msg_subject": "compiling libpq++ on Solaris with Sun SPRO6U2 (fixed & tested)"
},
{
"msg_contents": "Denis A Ustimenko writes:\n\n[change -Wl,-R to -R and -Wl,-h to -h]\n\nI'm having a difficult time understanding this. Both -R and -h are linker\noptions, not compiler options. So while the compiler driver might be nice\nenough to recognize them as the former and pass them through, this change\njust pushes these chances, and it doesn't add any theoretical change of\nfunctionality.\n\nSo, if you want this to be fixed, you're going to have to start with\nexplaining your problem, and then we can start looking for solutions.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 17 Oct 2001 22:34:47 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: compiling libpq++ on Solaris with Sun SPRO6U2 (fixed"
},
{
"msg_contents": "\n\nwhats wrong with kill -9' the postmaster\nworks fine for me hahahaa.\n\n\n\n> Date: Wed, 17 Oct 2001 22:34:47 +0200 (CEST)\n> From: Peter Eisentraut <peter_e@gmx.net>\n> To: Denis A Ustimenko <denis@oldham.ru>\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] compiling libpq++ on Solaris with Sun SPRO6U2\n> (fixed\n>\n> Denis A Ustimenko writes:\n>\n> [change -Wl,-R to -R and -Wl,-h to -h]\n>\n> I'm having a difficult time understanding this. Both -R and -h are linker\n> options, not compiler options. So while the compiler driver might be nice\n> enough to recognize them as the former and pass them through, this change\n> just pushes these chances, and it doesn't add any theoretical change of\n> functionality.\n>\n> So, if you want this to be fixed, you're going to have to start with\n> explaining your problem, and then we can start looking for solutions.\n>\n> --\n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Wed, 17 Oct 2001 15:19:11 -0700 (PDT)",
"msg_from": "Dan <dphoenix@bravenet.com>",
"msg_from_op": false,
"msg_subject": "Re: compiling libpq++ on Solaris with Sun SPRO6U2 (fixed"
},
{
"msg_contents": "\n\nOn Wed, 17 Oct 2001, Peter Eisentraut wrote:\n\n> Denis A Ustimenko writes:\n>\n> [change -Wl,-R to -R and -Wl,-h to -h]\n>\n> I'm having a difficult time understanding this. Both -R and -h are linker\n> options, not compiler options. So while the compiler driver might be nice\n\nOh, no Peter:\n\ndenis@tracer$ CC -help|grep \"\\-h\"\n-h<name> Assign <name> to generated dynamic shared library\n-help Same as -xhelp=flags\ndenis@tracer$ CC -help|grep \"\\-R\"\n-R<p>[:<p>...] Build runtime search path list into executable\ndenis@tracer$ cc -flags|grep \"\\-h\"\n-h <name> Assign <name> to generated dynamic shared library\ndenis@tracer$ cc -flags|grep \"\\-R\"\n-R<dir[:dir]> Build runtime search path list into executable\n\n> enough to recognize them as the former and pass them through, this change\n> just pushes these chances, and it doesn't add any theoretical change of\n> functionality.\n>\n> So, if you want this to be fixed, you're going to have to start with\n> explaining your problem, and then we can start looking for solutions.\n\nThe problem is simple. I can't compile libpq++.so with latest Sun's\ncompiler:\n$ make\n.........\nCC -KPIC -G -Wl,-h,libpq++.so.3 pgconnection.o pgdatabase.o pgtransdb.o\npgcursordb.o pglobject.o -L../../../src/interfaces/libpq -lpq -lm -lc\n-Wl,-R/usr/local/pgsql/lib -o libpq++.so.3.1\nCC: Warning: Option -Wl,-h,libpq++.so.3 passed to ld, if ld is invoked,\nignored otherwise\nCC: Warning: Option -Wl,-R/usr/local/pgsql/lib passed to ld, if ld is\ninvoked, ignored otherwise\n/usr/ccs/bin/ld: illegal option -- W\n/usr/ccs/bin/ld: illegal option -- W\nusage: ld [-6:abc:d:e:f:h:il:mo:p:rstu:z:B:D:F:GI:L:M:N:P:Q:R:S:VY:?]\nfile(s)\n [-64] enforce a 64-bit link-edit\n.........\n\nBut \"CC -KPIC -G -h libpq++.so.3 ... -R/usr/local/pgsql/lib ...\"\nworks fine for me.\n\nRegards\nDenis Ustimenko\n\n",
"msg_date": "Thu, 18 Oct 2001 10:07:42 +0700 (NOVST)",
"msg_from": "Denis A Ustimenko <denis@oldham.ru>",
"msg_from_op": true,
"msg_subject": "Re: compiling libpq++ on Solaris with Sun SPRO6U2 (fixed"
},
{
"msg_contents": "It's really quite simple, the Sun C compiler (acc) does not understand\nthe -Wl flag, rather it passes the -R and -h options onto the linker\nverbatim.\n\nGiven the only two (realistic) compiler choices under Solaris are gcc\nand acc it makes sense to support then both 'out of the box'.\n\nI keep a similar patch to Denis's (without the wierd hi-ascii\ncharacters in his) lying around for when i build a Solaris version.\n\nRegards, Lee Kindness.\n\nPeter Eisentraut writes:\n > Denis A Ustimenko writes:\n > \n > [change -Wl,-R to -R and -Wl,-h to -h]\n > \n > I'm having a difficult time understanding this. Both -R and -h are linker\n > options, not compiler options. So while the compiler driver might be nice\n > enough to recognize them as the former and pass them through, this change\n > just pushes these chances, and it doesn't add any theoretical change of\n > functionality.\n > \n > So, if you want this to be fixed, you're going to have to start with\n > explaining your problem, and then we can start looking for solutions.\n > \n > -- \n > Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n > \n > \n > ---------------------------(end of broadcast)---------------------------\n > TIP 4: Don't 'kill -9' the postmaster\n",
"msg_date": "Thu, 18 Oct 2001 09:17:55 +0100 (BST)",
"msg_from": "Lee Kindness <lkindness@csl.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: compiling libpq++ on Solaris with Sun SPRO6U2 (fixed"
},
{
"msg_contents": "For your information I've attached the man page for the Sun C\ncompiler, which explicitly lists the -h and -R flags.\n\nRegards, Lee Kindness.\n\nLee Kindness writes:\n > It's really quite simple, the Sun C compiler (acc) does not understand\n > the -Wl flag, rather it passes the -R and -h options onto the linker\n > verbatim.\n > Peter Eisentraut writes:\n > > I'm having a difficult time understanding this. Both -R and -h are linker\n > > options, not compiler options. So while the compiler driver might be nice\n > > enough to recognize them as the former and pass them through, this change\n > > just pushes these chances, and it doesn't add any theoretical change of\n > > functionality.\n\n\nUser Commands acc(1)\n\nNAME\n acc - C compiler\n\nSYNOPSIS\n acc [ -Aname [(tokens) ] ] [ -a ] [ -B [static|dynamic] ]\n [ -C ] [ -c ] [ -cg89 ] [ -cg92 ] [ -Dname [=token ] ]\n [ -dalign ] [ -d [y|n] ] [ -dryrun ] [ -E ] [ -fast ]\n [ -fd ] [ -flags ] [ -fnonstd ] [ -fns ] [ -fround=r ]\n [ -fsimple[=n] ] [ -fsingle ] [ -ftrap=t ] [ -G ]\n [ -g ] [ -H ] [ -help ] [ -hname ] [ -Idir ]\n [ -inline=[f1,...,fn] ] [ -KPIC ] [ -Kpic ] [ -keeptmp ]\n [ -Ldir ] [ -lname ] [ -libmieee ] [ -libmil ] [ -M ]\n [ -misalign ] [ -misalign2 ] [ -mt ] [ -native ] [ -nolib ]\n [ -nolibmil ] [ -noqueue ] [ -O[1|2|3|4|5] ]\n [ -o outputfile ] [ -P ] [ -p ] [ -pg ] [ -PIC ] [ -pic ]\n [ -Qdir dir ] [ -Qoption c arg ] [ -Qpath dir ]\n [ -Qproduce srctype ] [ -qdir dir ] [ -qoption c arg ]\n [ -qpath dir ] [ -qproduce srctype ]\n [ -R dir[:dir] ] [ -S ] [ -s ] [ -sb ] [ -sbfast ]\n [ -strconst ] [ -temp=dir ] [ -time ] [ -Uname ]\n [ -unroll =n ] [ -V ] [ -v ] [ -vc ] [ -w ]\n [ -X [a |c |s |t ]] [ -xa ] [ -xarch=a ] [ -xautopar ]\n [ -xcache=c ] [ -xCC ] [ -xcg89 ] [ -xcg92 ] [ -xchip=c ]\n [ -xdepend ] [ -xe ] [ -xexplicitpar ]\n [ -xF ] [ -xhelp=f ] [ -xildoff ]\n [ -xildon ] [ -xinline=[f1,...,fn] ] [ -xlibmieee ]\n [ -xlibmil ] [ -xlicinfo ] [ -xloopinfo ] [ -xM ] [ -xM1 ]\n 
[ -xMerge ] [ -xnolib ] [ -xnolibmil ] [ -xO[1|2|3|4|5] ]\n [ -xP ] [ -xparallel ] [ -xpg ] [ -xprofile=p ]\n [ -xreduction ] [ -xregs=r ]\n [ -xrestrict=f ] [ -xs ] [ -xsafe=mem ] [ -xsb ]\n [ -xsbfast ] [ -xsfpconst ] [ -xspace ]\n [ -xstrconst ] [ -xtarget=t ] [ -xtemp=dir ] [ -xtime ]\n [ -xtransition ] [ -xunroll=n ] [ -xvpara ] [ -Y,dir ]\n [ -Zll ] [ -Zlp ] [ -Ztha ]\n\nDESCRIPTION\n acc (SPARC only) is not intended to be used directly on\n Solaris 2.x. The sole purpose for making it available on\n Solaris 2.x is to enable /usr/ucb/cc. The package SUNWscpu\n must be installed to use this. The options for /usr/ucb/cc\n are the same as for acc and are described here.\n\n acc is the C compiler. It translates programs written in the\n C programming language into executable load modules, or into\n relocatable binary programs for subsequent loading with the\n ld(1) link editor.\n\n In addition to the many options, acc accepts several types\n of filename arguments. For instance, files with names end-\n ing in .c are taken to be C source programs. They are com-\n piled, and each resulting object program is placed in the\n current directory. The object file is named after its\n source file - the suffix .o replacing .c in the name of the\n object. In the same way, files whose names end with .s are\n taken to be assembly source programs. They are assembled,\n and produce .o files. Filenames ending in .il are taken to\n be inline expansion code template files; these are used to\n expand calls to selected routines in-line when code optimi-\n zation is enabled. See FILES, below for a complete list of\n compiler-related filename suffixes.\n\n Other arguments refer to assembler or loader options, object\n programs, or object libraries. Unless -c, -S, -E -P or\n -Qproduce is specified, these programs and libraries,\n together with the results of any specified compilations or\n assemblies, are linked (in the order given) to produce an\n output file named a.out. 
You can specify a name for the\n executable by using the -o option.\n\n If a single file is compiled and linked all at once, the\n intermediate files are deleted.\n\nOPTIONS\n When debugging or profiling objects are compiled using the\n -g or -pg options, respectively, the ld command for linking\n them should also contain the appropriate option.\n\n See ld(1) for link-time options.\n\n -a Insert code to count how many times each basic block is\n executed. This is the old style of basic block profil-\n ing for tcov. See -xprofile=tcov for information on the\n new style of profiling and the tcov(1) man page for\n more details.\n\n Invokes a runtime recording mechanism that creates a .d\n file for every .c file (at normal termination). The .d\n file accumulates execution data for the corresponding\n source file. The tcov(1) utility can then be run on\n the source file to generate statistics about the pro-\n gram. This option is incompatible with -g .\n\n -Aname[(tokens)]\n Associate name as a predicate with the specified tokens\n as if by a #assert preprocessing directive.\n Preassertions:\n system(unix)\n cpu(sparc)\n machine(sparc)\n\n The above are not predefined in -Xc mode.\n\n If -A is followed by a dash (-) only, it causes all\n predefined macros (other than those that begin with __)\n and predefined assertions to be forgotten.\n\n -B [static|dynamic]\n -B dynamic causes the link editor to look for files\n named libx.so and then for files named libx.a when\n given the -lx option. -B static causes the link editor\n to look only for files named libx.a. This option may\n be specified multiple times on the command line as a\n toggle. This option and its argument are passed to ld.\n\n -C Cause the preprocessor to pass along all comments other\n than those on preprocessing directive lines.\n\n -c Suppress linking with ld(1) and produce a .o file for\n each source file. 
A single object file can be named\n explicitly using the -o option.\n\n -cg89\n This option is a macro for:\n =xarch=v7 -xchip=old -xcache=64/32/1.\n\n -cg92\n This option is a macro for:\n =xarch=v8 -xchip=super -xcache=16/64/4:1024/64/1.\n\n -Dname[=token]\n Associates name with the specified token as if by a\n #define preprocessing directive. If no =token is\n specified, the token 1 is supplied.\n Predefinitions:\n sparc\n sun\n unix\n\n The above are not predefined in -Xc mode.\n These predefinitions are valid in all modes:\n __sparc,\n __unix,\n __sun,\n __BUILTIN_VA_ARG_INCR\n __SUNPRO_C=0x400\n __SVR4\n __`uname -s` `uname -r`\n\n -dalign\n Generate double-word load/store instructions whenever\n possible for improved performance. Assumes that all\n double and long long type data are double-word aligned,\n and should not be used when correct alignment is not\n assured.\n\n -d [y|n]\n -dy specifies dynamic linking, which is the default, in\n the link editor. -dn specifies static linking in the\n link editor. This option and its argument are passed\n to ld.\n\n -dryrun\n Show but do not execute the commands constructed by the\n compilation driver.\n\n -E Preprocess only the named C files and send the result\n to the standard output. The output will contain\n preprocessing directives for use by the next pass of\n the compilation system.\n\n -fast\n Select the optimum combination of compilation options\n for speed. This should provide close to the maximum\n performance for most realistic applications. Modules\n compiled with -fast , must also be linked with -fast .\n\n It is a convenience option, and it chooses the fastest\n code generation option available on the compile-time\n hardware, the optimization level -O2, a set of inline\n expansion templates, the -fns option, the -ftrap=%none\n option, and the -dalign option.\n\n If you combine -fast with other options, the last\n specification applies. 
The code generation option, the\n optimization level and using inline template files can\n be overridden by subsequent switches. For example,\n although the optimization part of -fast is -O2 , the\n optimization part of -fast -O1 is -O1 .\n\n Do not use this option for programs that depend on IEEE\n standard exception handling; you can get different\n numerical results, premature program termination, or\n unexpected SIGFPE signals.\n\n -fd Report old-style function definitions and declarations.\n\n -flags\n Print a summary of each compiler option.\n\n -fnonstd\n This option is a macro for -fns and -ftrap=common.\n\n -fns Turn on the SPARC nonstandard floating-point mode.\n\n The default is the SPARC standard floating-point mode.\n\n If you compile one routine with -fns, then compile all\n the program routines with the -fns option; otherwise,\n you can get unexpected results.\n\n -fround=r\n Set the IEEE 754 rounding mode that is established at\n runtime during the program initialization.\n\n r must be one of: nearest, tozero, negative, positive.\n\n The default is -fround=nearest.\n\n The meanings are the same as those for the ieee_flags\n subroutine.\n\n If you compile one routine with -fround=r, compile all\n the program routines with the same -fround=r option;\n otherwise, you can get unexpected results.\n\n -fsimple[=n]\n Allow the optimizer to make simplifying assumptions\n concerning floating-point arithmetic. If n is present,\n it must be 0, 1, or 2.\n\n The defaults are:\n o With no -fsimple[=n], the compiler uses -fsimple=0.\n o With only -fsimple, no =n, the compiler uses -fsim-\n ple=1.\n\n -fsimple=0\n Permit no simplifying assumptions. Preserve strict IEEE\n 754 conformance.\n\n -fsimple=1\n Allow conservative simplifications. 
The resulting code\n does not strictly conform to IEEE 754, but numeric\n results of most programs are unchanged.\n\n With -fsimple=1, the optimizer can assume the follow-\n ing:\n o The IEEE 754 default rounding/trapping modes do not\n change after process initialization.\n o Computations producing no visible result other than\n potential floating- point exceptions may be deleted.\n o Computations with Infinity or NaNs as operands need\n not propagate NaNs to their results. For example, x*0\n may be replaced by 0.\n o Computations do not depend on sign of zero.\n\n With -fsimple=1, the optimizer is not allowed to optim-\n ize completely without regard to roundoff or excep-\n tions. In particular, a floating-point computation can-\n not be replaced by one that produces different results\n with rounding modes held constant at run time. -fast\n implies -fsimple=1.\n\n -fsimple=2\n Permit aggressive floating point optimizations that may\n cause many programs to produce different numeric\n results due to changes in rounding. For example, permit\n the optimizer to replace all computations of x/y in a\n given loop with x*z, where x/y is guaranteed to be\n evaluated at least once in the loop, z=1/y, and the\n values of y and z are known to have constant values\n during execution of the loop.\n\n Even with -fsimple=2, the optimizer still is not per-\n mitted to introduce a floating point exception in a\n program that otherwise produces none.\n\n -fsingle\n ( -Xt and -Xs modes only). Causes the compiler to\n evaluate float expressions as single precision rather\n than double precision. 
(This option has no effect if\n the compiler is used in either -Xa or -Xc modes, as\n float expressions are already evaluated as single pre-\n cision.)\n\n -ftrap=t\n Set the IEEE 754 trapping mode in effect at startup.\n\n t is a comma-separated list that consists of one or\n more of the following: %all, %none, common,\n [no%]invalid, [no%]overflow, [no%]underflow,\n [no%]division, [no%]inexact.\n\n The default is -ftrap=%none.\n\n This option sets the IEEE 754 trapping modes that are\n established at program initialization. Processing is\n left-to-right. The common exceptions, by definition,\n are invalid, division by zero, and overflow.\n\n Example: -ftrap=%all,no%inexact means set all traps,\n except inexact.\n\n The meanings are the same as for the ieee_flags subrou-\n tine, except that:\n o %all turns on all the trapping modes.\n o %none, the default, turns off all trapping modes.\n o A no% prefix turns off that specific trapping mode.\n\n If you compile one routine with -ftrap=t, compile all\n routines of the program with the same -ftrap=t option;\n otherwise, you can get unexpected results.\n -G Direct the link editor to produce a shared object\n rather than a dynamically linked executable. This\n option is passed to ld. It cannot be used with the -dn\n option.\n\n -g Produce additional symbol table information for dbx(1).\n\n The -g option makes -xildon the default incremental\n linker option. See -xildon. Invoke ild in place of ld\n unless any of the following are true: The -G option is\n present, the -xildoff option is present, any source\n files are named on the command line.\n\n When used with the -O option, a limited amount of\n debugging is available. 
The combination, -xO4 -g, turns\n off the inlining that you usually get with -xO4.\n\n -H Print, one per line, the path name of each file\n included during the current compilation on the standard\n error output.\n\n -help\n Display a one-line summary of compiler options.\n\n -hname\n Names a shared dynamic library. The -hname option\n assigns a name to a shared dynamic library. This pro-\n vides versions of a shared dynamic library. In general,\n the name after -h should be exactly what you have\n after the -o. You may insert a space between -h and\n name. This option is passed to ld.\n\n -Idir\n Add dir to the list of directories in which to search\n for #include files with relative filenames (not begin-\n ning with slash /). The preprocessor first searches\n for #include files in the directory containing source-\n file, then in directories named with -I options (if\n any), and finally, in /usr/include.\n\n -inline=[f1,...,fn]\n For user-written routines, try to inline only those\n named in the list f1 to fn. It tries routines only in\n the file being compiled. The list is a comma-separated\n list of functions and subroutines.\n\n If compiling with -O3, this can increase optimization\n by inlining some routines. The -O3 option inlines none\n by itself.\n\n If compiling with -O4, this can decrease optimization\n by restricting inlining to only those routines in the\n list. With -O4, the compiler normally tries to inline\n all user-written subroutines and functions. 
When xin-\n line= is specified with an empty rlist, it indicates\n that none of the routines in the source file are to be\n inlined.\n\n A routine is not inlined if any of the following apply\n (no warning):\n o Optimization is less than -O3\n o The routine cannot be found\n o Inlining the routine does not look profitable or\n safe to iropt\n o The source for the routine is not in the file being\n compiled\n\n -KPIC\n Like -Kpic, but allows the global offset table to span\n the range of 32-bit addresses in those rare cases where\n there are too many global data objects for -Kpic.\n\n -Kpic\n Produce position-independent code. Each reference to a\n global datum is generated as a dereference of a pointer\n in the global offset table. Each function call is gen-\n erated in pc-relative addressing mode through a pro-\n cedure linkage table. The size of the global offset\n table is 8K on SPARC processors.\n\n -keeptmp\n Retains files created during compilation, rather than\n automatically deleting them.\n\n -Ldir\n Add dir to the list of directories containing object-\n library routines (for linking using ld(1).\n\n -lname\n Link with object library name (for ld(1)). This option\n must follow the sourcefile arguments.\n\n -libmieee\n Force IEEE 754 style return values for math routines in\n exceptional cases. In such cases, no exeception mes-\n sage will be printed, and errno should not be relied\n on.\n\n -libmil\n Inlines some library routines for faster execution.\n\n -M Run only the macro preprocessor (cpp) on the named C\n programs, requesting that it generate makefile depen-\n dencies and send the result to the standard output (see\n make(1) for details about makefiles and dependencies).\n\n -misalign\n -misalign assumes that data is not properly aligned and\n thus very conservative loads and stores must be used\n for data, that is, one byte at a time. 
Using this\n option can cause significant performance degradation\n when running the program.\n\n -misalign2\n -misalign2, like -misalign, assumes that data is not\n properly aligned, but that data is at least half-word\n aligned. Though conservative uses of loads and stores\n must be used for data, the performance degradation when\n running a program should be less than for -misalign.\n\n -mt Passes D_REENTRANT to preprocessor. Appends -l thread.\n If you are doing your own multithread coding, you must\n use this option in the compile and link steps. To\n obtain faster execution, this option requires a mul-\n tiprocessor system. On a single-processor system, the\n resulting executable usually runs more slowly with this\n option.\n\n -native\n Ascertain which code-generation options are available\n on the machine running the compiler, and direct the\n compiler to generate code targeted for that machine.\n\n This option is a synonym for -xtarget=native .\n\n The -fast macro includes -native in its expansion.\n\n -nolib\n Does not link any libraries by default; that is, no -l\n options are passed to ld . Normally, the acc driver\n passes -lm -lansi -lc to ld .\n\n When you use -nolib , you have to pass all -l options\n yourself. For example:\n acc test.c -nolib -lansi -Bstatic -lm -Bdynamic\n -lc\n links libm statically and the other libraries dynami-\n cally.\n\n -nolibmil\n Reset -fast so that it does not include inline tem-\n plates. Use this after the -fast option: cc fast\n nolibmil ...\n\n -noqueue\n Tells the compiler not to queue this compile request if\n a license is not available.\n\n -O[1|2|3|4|5]\n Optimize the object code. May be used with -g, but not\n with -xa. Specifying -O is equivalent to specifying\n -O2. Level is one of:\n\n 1 Do basic local optimization (peephole).\n\n 2 Do basic local and global optimization. 
This\n is induction variable elimination, local and\n global common subexpression elimination,\n algebraic simplification, copy propagation,\n constant propagation, loop-invariant optimi-\n zation, register allocation, basic block\n merging, tail recursion elimination, dead\n code elimination, tail call elimination and\n complex expression expansion.\n\n The -O2 level does not assign global, exter-\n nal, or indirect references to registers. It\n treats these references and definitions as if\n they were declared \"volatile.\" In general,\n the -O2 level results in minimum code size.\n\n 3 Beside what -O2 does, this also optimizes\n references and definitions for external vari-\n ables. Loop unrolling and software pipelin-\n ing are also performed. The -O3 level does\n not trace the effects of pointer assignments.\n When compiling either device drivers, or pro-\n grams that modify external variables from\n within signal handlers, you may need to use\n the volatile type qualifier to protect the\n object from optimization. In general, the -O3\n level results in increased code size.\n\n 4 Besides what -O3 does, this also does\n automatic inlining of functions contained in\n the same file; this usually improves execu-\n tion speed. The -O4 level does trace the\n effects of pointer assignments. 
In general,\n the -O4 level results in increased code size.\n\n 5 Generate the highest level of optimization.\n Use optimization algorithms that take more\n compilation time or that do not have as high\n a certainty of improving execution time.\n Optimization at this level is more likely to\n improve performance if it is done with pro-\n file feedback.\n\n If the optimizer runs out of memory, it tries to\n recover by retrying the current procedure at a\n lower level of optimization and resumes subsequent\n procedures at the original level specified in the\n command-line option.\n\n If you optimize at -O3 or -O4 with very large pro-\n cedures (thousands of lines of code in the same\n procedure), the optimizer may require a large\n amount of virtual memory. In such cases, machine\n performance may degrade.\n\n -o outputfile\n Name the output file outputfile. outputfile must have\n the appropriate suffix for the type of file to be pro-\n duced by the compilation (see FILES, below). outputfile\n cannot be the same as sourcefile (the compiler will not\n overwrite the source file).\n\n -P Preprocess only. Puts the output in a file with a .i\n suffix. The output will not contain any preprocessor\n line directives, unlike the -E option.\n\n -p Prepare the object code to collect data for profiling\n with prof(1). Invokes a run-time recording mechanism\n that produces a mon.out file (at normal termination).\n\n -pg Prepare the object code to collect data for profiling\n with gprof(1). Invokes a run-time recording mechanism\n that produces a gmon.out file (at normal termination).\n\n -PIC Same as -KPIC.\n\n -pic Same as -Kpic.\n\n -Qdir dir\n Look for compiler components in directory dir.\n\n -Qoption c arg\n Pass the option arg to the component c. The option\n must be appropriate to that component and may begin\n with a minus sign. 
c can be one of: acomp, fbe\n (Solaris 2.x only) or as (Solaris 1.x only) cg, iropt,\n or ld.\n\n -Qpath dir\n Insert directory dir into the compilation search path.\n The path will be searched for alternate versions of the\n compilation programs, such as acomp(1), and ld(1).\n This path will also be searched first for certain relo-\n catable object files that are implicitly referenced by\n the compiler driver, for example *crt*.o and bb_link.o.\n -Qproduce srctype\n Produce source code of the type sourcetype. sourcetype\n can be one of:\n .i Preprocessed C source.\n .o Object file.\n .s Assembler source.\n\n -qdir dir\n Same as -Qdir dir.\n\n -qoption c arg\n Same as -Qoption c arg.\n\n -qpath dir\n Same as -Qpath dir.\n\n -qproduce srctype\n Same as -Qproduce srctype.\n\n -R dir[:dir]\n A colon-separated list of directories used to specify\n library search directories to the runtime linker. If\n present and not null, it is recorded in the output\n object file and passed to the runtime linker.\n\n If both LD_RUN_PATH and the -R option are specified,\n the -R option takes precedence.\n\n -S Do not assemble the program but produce an assembly\n source file.\n\n -s Remove all symbolic debugging information from the out-\n put object file. Passed to ld(1).\n\n -sb Generate extra symbol table information for the Sun\n Source Code Browser.\n\n -sbfast\n Create the database for the Sun Source Code Browser,\n but do not actually compile.\n\n -strconst\n Insert string literals into the read-only data section\n of the text segment instead of the data segment.\n\n -temp=dir\n Set directory for temporary files to be dir.\n\n -time\n Report execution times for the various compilation\n passes.\n\n -Uname\n Cause any definition of name to be undefined, as if by\n a #undef preprocessing directive. 
If the same name is\n specified for both -D and -U, name is not defined,\n regardless of the order of the options.\n\n -unroll=n\n Specifies whether or not the compiler optimizes\n (unrolls) loops. n is a positive integer. When n is\n 1, it is a command and the compiler unrolls no loops.\n When n is greater than 1, the -unroll=n merely suggests\n to the compiler that unrolled loops be unrolled n\n times.\n\n -V Print the name and version ID of each pass as the com-\n piler executes.\n\n -v Verbose. Print the version number of the compiler and\n the name of each program it executes.\n\n -vc Directs the compiler to perform stricter semantic\n checks and enable other lint-like checks.\n\n -w Do not print warnings.\n\n -X[a|c|s|t]\n Specify the degree of conformance to the ANSI C stan-\n dard. The degree of conformance can be one of the fol-\n lowing:\n\n a (ANSI)\n ANSI C plus Sun C compatibility extensions, with\n semantic changes required by ANSI C. Where Sun C\n and ANSI C specify different semantics for the\n same construct, the compiler will issue warnings\n about the conflict and use the ANSI C interpreta-\n tion. This is the default mode.\n\n c (conformance)\n Maximally conformant ANSI C, without Sun C compa-\n tibility extensions. The compiler will issue\n errors and warnings for programs that use non-ANSI\n C constructs.\n\n s (Sun C)\n The compiled language includes all features compa-\n tible with (pre-ANSI) Sun C. The compiler tries\n to warn about all language constructs that have\n differing behavior between Sun ANSI C and the old\n Sun C. Invokes cpp for processing. 
__STDC__ is not\n defined in this mode.\n\n t (transition)\n ANSI C plus Sun C compatibility extensions,\n without semantic changes required by ANSI C.\n Where Sun C and ANSI C specify different semantics\n for the same construct, the compiler will issue\n warnings about the conflict and use the Sun C\n interpretation.\n\n The predefined macro __STDC__ has the value 0 for -Xt\n and -Xa, and 1 for -Xc. (It is not defined for -Xs.)\n All warning messages about differing behavior can be\n eliminated through appropriate coding; for example, use\n of casts can eliminate the integral promotion change\n warnings.\n\n -xa Same as -a.\n\n -xarch=a\n Limit the set of instructions the compiler may use.\n\n a must be one of: generic, v7, v8a, v8, v8plus,\n v8plusa.\n\n Although this option can be used alone, it is part of\n the expansion of the -xtarget option; its primary use\n is to override a value supplied by the -xtarget option.\n\n This option limits the instructions generated to those\n of the specified architecture, and allows the specified\n set of instructions. The option does not guarantee the\n specified set is used; however, under optimization, the\n set is usually used.\n\n If this option is used with optimization, the appropri-\n ate choice can provide good performance of the execut-\n able on the specified architecture. An inappropriate\n choice can result in serious degradation of perfor-\n mance.\n\n v7, v8, and v8a are all binary compatible. v8plus and\n v8plusa are binary compatible with each other and for-\n ward, but not backward. For any particular choice, the\n generated executable can run much more slowly on ear-\n lier architectures (to the left in the above list).\n See the C 4.0 User's Guide for details.\n\n The -xarch values are:\n\n generic\n Get good performance on most SPARCs, and major\n degradation on none. 
This is the default.\n\n v7 Limit the instruction set to V7 architecture.\n\n v8a Limit the instruction set to the V8a version of\n the V8 architecture.\n\n v8 Limit the instruction set to V8 architecture.\n\n v8plus\n Limit the instruction set to the V8plus version\n of the V9 architecture.\n\n v8plusa\n Limit the instruction set to the V8plusa version\n of the V9 architecture.\n\n -xautopar\n Turn on automatic parallelization for multiple proces-\n sors. Does dependence analysis (analyze loops for\n inter- iteration data dependence) and loop restructur-\n ing. If optimization is not at -xO3 or higher, optimi-\n zation is raised to -xO3 and a warning is emitted.\n\n -xcache=c\n Define the cache properties for use by the optimizer.\n\n c must be one of the following:\n\n o generic\n\n o s1/l1/a1\n\n o s1/l1/a1:s2/l2/a2\n\n o s1/l1/a1:s2/l2/a2:s3/l3/a3\n\n The si/li/ai are defined as follows:\n\n si\n The size of the data cache at level i, in kilobytes\n\n li\n The line size of the data cache at level i, in bytes\n\n ai\n The associativity of the data cache at level i\n\n Although this option can be used alone, it is part of\n the expansion of the -xtarget option; its primary use\n is to override a value supplied by the -xtarget option.\n\n This option specifies the cache properties that the\n optimizer can use. It does not guarantee that any\n particular cache property is used.\n\n The -xcache values are:\n\n generic\n Define the cache properties for good performance\n on most SPARCs. 
This is the default.\n\n s1/l1/a1\n Define level 1 cache properties.\n\n s1/l1/a1:s2/l2/a2\n Define levels 1 and 2 cache properties.\n\n s1/l1/a1:s2/l2/a2:s3/l3/a3\n Define levels 1, 2, and 3 cache properties.\n\n -xCC Accept C++-style comments.\n\n -xcg89\n Same as -cg89.\n\n -xcg92\n Same as -cg92.\n\n -xchip=c\n Specify the target processor for use by the optimizer.\n\n c must be one of: generic, old, super, super2, micro,\n micro2, hyper, hyper2, powerup, ultra.\n\n Although this option can be used alone, it is part of\n the expansion of the -xtarget option; its primary use\n is to override a value supplied by the -xtarget option.\n\n This option specifies timing properties by specifying\n the target processor.\n\n Some effects are:\n\n o The ordering of instructions, that is, scheduling\n\n o The way the compiler uses branches\n\n o The instructions to use in cases where semantically\n equivalent alternatives are available\n\n The -xchip values are:\n\n generic\n Use timing properties for good performance on\n most SPARCs.\n old Use timing properties of pre-SuperSPARC proces-\n sors.\n\n super Use timing properties of the SuperSPARC chip.\n\n super2 Use timing properties of the SuperSPARC II chip.\n\n micro Use timing properties of the microSPARC chip.\n\n micro2 Use timing properties of the microSPARC II chip.\n\n hyper Use timing properties of the hyperSPARC chip.\n\n hyper2 Use timing properties of the hyperSPARC II chip.\n\n powerup\n Use timing properties of the Weitek PowerUp\n chip.\n\n ultra Use timing properties of the UltraSPARC chip.\n\n -xdepend\n Analyze loops for inter-iteration data dependencies and\n do loop restructuring. Dependence analysis is included\n in -xautopar. The dependency analysis is done at com-\n pile time. 
The -xdepend option is ignored unless\n either -xO3 or -xO4 is on, explicitly, or by another\n option.\n\n -xe Performs only syntax and semantic checking on the\n source files, but does not produce any object or exe-\n cutable code.\n\n -xexplicitpar\n Parallelize the loops that are specified. You do the\n dependency analysis: analyze and specify loops for\n inter-iteration and data dependencies. The software\n parallelizes the specified loops. If optimization is\n not at -xO3 or higher, then it is raised to -xO3.\n\n Avoid -xexplicitpar if you do your own thread manage-\n ment.\n\n The -xexplicitpar option requires the iMPact C mul-\n tiprocessor enhancement package. To get faster code,\n use this option on a multiprocessor system. On a\n single-processor system, the generated code usually\n runs slower.\n\n If you identify a loop for parallelization, and the\n loop has dependencies, you can get incorrect results,\n possibly different ones with each run, and with no\n warnings. Do not apply an explicit parallel pragma to\n a reduction loop. The explicit parallelization is\n done, but the reduction aspect of the loop is not done,\n and the results can be incorrect.\n\n If you use -xexplicitpar and compile and link in one\n step, then linking automatically includes the micro-\n tasking library and the threads-safe C runtime library.\n If you use -xexplicitpar and compile and link in\n separate steps, then you must also link with -xexpli-\n citpar.\n\n -xF Enables performance analysis of the executable using\n the Analyzer and Debugger. (See analyzer(1) and\n debugger(1) man pages.) Produces code that can be\n reordered at the function level. Each function in the\n file is placed in a separate section; for example,\n functions foo() and bar() will be placed in the sec-\n tions .text%foo and .text%bar , respectively. 
Function\n ordering in the executable can be controlled by using\n -xF in conjunction with the -M option to ld (see\n ld(1)).\n\n -xhelp=f\n Display on-line help information.\n\n -xhelp=flags displays a summary of the compiler\n options; -xhelp=readme displays the readme file;\n -xhelp=errors displays the Error and Warning Messages\n file.\n\n -xildoff\n Turn off the incremental linker and force the use of\n ld. This option is the default if you do not use the\n -g option, or you do use the -G option, or any source\n files are present on the command line. Override this\n default by using the -xildon option.\n\n -xildon\n Turn on the incremental linker and force the use of ild\n in incremental mode. This option is the default if you\n use the -g option, and you do not use the -G option,\n and there are no source files present on the command\n line. Override this default by using the -xildoff\n option.\n\n -xinline=[f1,...,fn]\n Same as -inline.\n\n -xlibmieee\n Same as -libmieee.\n\n -xlibmil\n Same as -libmil.\n\n -xlicinfo\n Returns information about the licensing system. In\n particular, it returns the name of the license server\n and the userids of users who have licenses checked out.\n When you use this option, the compiler is not invoked\n and a license is not checked out.\n\n -xloopinfo\n Show which loops are parallelized and which are not.\n This option is normally for use with the -xautopar and\n -xexplicitpar options. It requires the iMPact C mul-\n tiprocessor enhancement package.\n\n -xM Generate makefile dependencies.\n\n -xM1 Generate makefile dependencies.\n\n -xMerge\n Directs acc to merge the data segment with the text\n segment for assembler. 
Data initialized in the object\n file produced by this compilation is read-only and\n (unless linked with ld-N) is shared between processes.\n\n -xnolib\n Same as -nolib.\n\n -xnolibmil\n Same as -nolibmil.\n\n -xO[1|2|3|4|5]\n Same as -O[1|2|3|4|5].\n\n -xP Print prototypes for K&R C function definitions.\n\n -xparallel\n Parallelize both automatic and explicit loops. This\n option invokes -xautopar, -xdepend, and -xexplicitpar.\n There is a risk of producing incorrect results.\n\n Avoid -xparallel if you do your own thread management.\n\n This option requires the iMPact C multiprocessor\n enhancement package. To get faster code, use this\n option on a multiprocessor SPARC system. On a single-\n processor system, the generated code usually runs more\n slowly.\n\n The -xautopar option (and therefore the -xparallel\n option) includes dependency analysis; that is, if you\n try a -xautopar both with and without -xdepend, there\n is no noticeable difference.\n\n If you compile and link in one step, -xparallel links\n with the microtasking library and the threads-safe C\n runtime library. If you compile and link in separate\n steps, and you compile with -xparallel, then link with\n -xparallel.\n\n -xpg Same as -pg.\n\n -xprofile=p\n Collect data for a profile or use a profile to optim-\n ize.\n\n p must be collect, use[:name], or tcov.\n\n This option causes execution frequency data to be col-\n lected and saved during execution, then the data can be\n used in subsequent runs to improve performance. This\n option is only valid when a level of optimization is\n specified.\n\n collect\n Collect and save execution frequency for later use\n by the optimizer.\n\n use[:name]\n Use execution frequency data saved by the com-\n piler. The name is the name of the executable that\n is being analyzed. This name is optional. 
If name\n is not specified, a.out is assumed to be the name\n of the executable.\n\n tcov Correctly collects data for programs that have\n source code in header files or that make use of\n C++ templates. See also -xa.\n\n -xreduction\n Analyze loops for reduction in automatic paralleliza-\n tion. To enable parallelization of reduction loops,\n specify both -xreduction and -xautopar.\n\n If you specify -xreduction without -xautopar, the com-\n piler issues a warning.\n\n This option requires the iMPact C multiprocessor\n enhancement package. To get faster code, this option\n also requires a multiprocessor system. On a single-\n processor system, the generated code usually runs more\n slowly.\n\n There is always potential for roundoff error with\n reduction.\n\n If you have a reduction loop to be parallelized, then\n use -xreduction with -xautopar. Do not use an explicit\n pragma, because the explicit pragma prevents reduction\n for that loop, resulting in wrong answers.\n\n -xregs=r\n Specify the usage of registers for the generated code.\n\n r is a comma-separated list that consists of one or\n more of the following: [no%]appl, [no%]float.\n\n Example: -xregs=appl,no%float\n\n The -xregs= values are:\n\n appl Allow using the registers g2, g3, and g4.\n\n no%appl\n Do not use the appl registers.\n\n float Allow using the floating-point registers as\n specified in the SPARC ABI.\n\n no%float\n Do not use the floating-point registers.\n\n The default is: -regs=appl,float.\n\n -xrestrict=f\n Treat pointer-valued function parameters as restricted\n pointers. f is a comma-separated list that consists of\n one or more function parameters, %all, %none. This\n command-line option can be used on its own, but is best\n used with optimization of -xO3 or greater.\n\n The default is %none. Specifying -xrestrict is\n equivalent to specifying -xrestrict=%all.\n\n -xs Disable Auto-Read for dbx. Use this option in case you\n cannot keep the .o files around. 
It passes the -s\n option to the assembler.\n\n No Auto-Read is the older way of loading symbol tables.\n It places all symbol tables for dbx in the executable\n file. The linker links more slowly and dbx initializes\n more slowly.\n\n Auto-Read is the newer and default way of loading sym-\n bol tables. With Auto-Read, the information is\n distributed in the .o files, so that dbx loads the sym-\n bol table information only if and when it is needed.\n Hence, the linker links faster, and dbx initializes\n faster.\n\n With -xs, if you move the executables to another direc-\n tory, then to use dbx, you can ignore the object (.o)\n files.\n\n Without -xs, if you move the executables, you must move\n both the sources files and the object (.o) files, or\n set the path with the dbx pathmap or use command.\n\n -xsafe=mem\n Allow the compiler to assume no memory-based traps\n occur.\n\n This option grants permission to use the speculative\n load instruction on V9 machines.\n\n -xsb Same as -sb.\n\n -xsbfast\n Same as -sbfast.\n\n -xsfpconst\n Represent unsuffixed floating-point constants as single\n precision, instead of the default mode of double preci-\n sion. Not valid with -Xc.\n\n -xspace\n Do no optimizations that increase code size. Example:\n Do not unroll loops.\n\n -xstrconst\n Same as -strconst.\n\n -xtarget=t\n Specify the target system for the instruction set and\n optimization.\n\n t must be one of: native, generic, system-name.\n\n The -xtarget option permits a quick and easy specifica-\n tion of the -xarch, -xchip, and -xcache combinations\n that occur on real systems. The only meaning of -xtar-\n get is in its expansion.\n\n The -xtarget values are:\n\n native Get the best performance on the host system.\n\n generic Get the best performance for generic architec-\n ture, chip, and cache. 
This is the default.\n\n system-name\n Get the best performance for the specified\n system.\n Valid system names are: sun4/15, sun4/20,\n sun4/25, sun4/30, sun4/40, sun4/50, sun4/60,\n sun4/65, sun4/75, sun4/110, sun4/150,\n sun4/260, sun4/280, sun4/330, sun4/370,\n sun4/390, sun4/470, sun4/490, sun4/630,\n sun4/670, sun4/690, sselc, ssipc, ssipx, sslc,\n sslt, sslx, sslx2, ssslc, ss1, ss1plus, ss2,\n ss2p, ss4, ss5, ssvyger, ss10, ss10/hs11,\n ss10/hs12, ss10/hs14, ss10/20, ss10/hs21,\n ss10/hs22, ss10/30, ss10/40, ss10/41, ss10/50,\n ss10/51, ss10/61, ss10/71, ss10/402, ss10/412,\n ss10/512, ss10/514, ss10/612, ss10/712,\n ss20/hs11, ss20/hs12, ss20/hs14, ss20/hs21,\n ss20/hs22, ss20/51, ss20/61, ss20/71,\n ss20/502, ss20/512, ss20/514, ss20/612,\n ss20/712, ss600/41, ss600/51, ss600/61,\n ss600/120, ss600/140, ss600/412, ss600/512,\n ss600/514, ss600/612, ss1000, sc2000, cs6400,\n solb5, solb6, ultra, ultra1/140, ultra1/170,\n ultra1/1170, ultra1/2170, ultra1/2200.\n See the section on -xtarget=t in the C 4.0\n User's Guide for the -xtarget expansions that\n show the mneumonic encodings of the actual\n system names and numbers.\n\n This option is a macro. Each specific value for\n -xtarget expands into a specific set of values for the\n -xarch, -xchip, and -xcache options. For example:\n -xtarget=sun4/15 is equivalent to:\n -xarch=v8a -xchip=micro -xcache=2/16/1\n\n -xtemp=dir\n Set directory for temporary files to dir.\n\n -xtime\n Same as -time.\n\n -xtransition\n Issue warnings for differences between K&R C and ANSI\n C.\n\n -xunroll=n\n Same as -unroll.\n\n -xvpara\n Issue warnings for loops that may not be safe to\n parallelize. As the compiler detects each explicitly\n parallelized loop that has dependencies, it issues a\n warning message but the loop is parallelized. 
Use with\n the -xexplicitpar option and the #pragma MP directive.\n\n This option requires the iMPact C multiprocessor\n enhancement package.\n\n -Y,dir\n Change default directories for finding libraries files.\n\n -Zll Create the lock_lint database files (.ll files), one\n per each .c file compiled for the lock_lint(1) program,\n which is included in the iMPact product. Do not actu-\n ally compile.\n\n -Zlp Prepare object files for the loop profiler, looptool.\n The looptool(1) utility can then be run to generate\n loop statistics about the program. Use this option with\n -xdepend; if -xdepend is not explicitly or implicitly\n specified, turns on -xdepend and issues a warning. If\n optimization is not at -O3 or higher, optimization is\n raised to -O3 and a warning is issued.\n\n The -Zlp option requires the iMPact C multiprocessor\n enhancement package.\n\n -Ztha\n Prepare code for analysis by the thread analyzer, the\n performance analysis tool for multithreaded code.\n\n acc recognizes -r, -u, -YP,dir, and -z, and passes these\n options and their arguments to ld. 
acc also passes any\n unrecognized options to ld with a warning.\n\n PRAGMAS\n The following #pragmas are recognized by the compilation\n system:\n #pragma align, #pragma fini, #pragma init, #pragma ident,\n #pragma int_to_unsigned, #pragma MP serial_loop, #pragma MP\n serial_loop_nested, #pragma MP taskloop, #pragma\n nomemorydepend, #pragma no_side_effect, #pragma pack, #pragma\n pipeloop, #pragma unknown_control_flow, #pragma unroll,\n #pragma weak.\n Refer to the C 4.0 User's Guide for more information on\n these pragmas.\n\nFILES\n a.out executable output file\n file.a library of object files\n file.c C source file\n file.d tcov(1) test coverage input\n file\n file.i C source file after prepro-\n cessing\n file.il inline expansion file\n file.o object file\n file.s assembler source file\n file.tcov output from tcov(1)\n acc compiler command line driver\n acomp compiler front end\n cg code generator\n crt1.o runtime startup code\n crti.o runtime startup code\n crtn.o runtime startup code\n fbe assembler\n gcrt1.o startup for profiling with\n gprof(1)\n gmon.out default profile file for -pg\n iropt global optimizer\n mcrt1.o start-up for profiling with\n prof(1) and intro(3)\n mon.out default profile file for -p\n .sb The directory used to store\n sbrowser(1) data when the -xsb\n or -xsbfast flag is used.\n .sbinit A file containing commands\n which can be used to specify\n the location of the .sb direc-\n tory and to control the execu-\n tion of sbcleanup\n sbcleanup deletes obsolete files in the\n .sb directory and creates an\n up-to-date .sb/Index file\n\nSEE ALSO\n ar(1), as(1), cflow(1), ctags(1), cxref(1), dbx(1),\n gprof(1), ld(1), lint(1), m4(1), make(1S), prof(1), tcov(1)\n\n C 4.0 User's Guide\n\n B. W. Kernighan and D. M. Ritchie, The C Programming\n Language, Prentice-Hall, 1978\n\nDIAGNOSTICS\n The diagnostics produced by the C compiler are intended to\n be self-explanatory. 
Occasional obscure messages may be\n produced by the preprocessor, assembler, or loader.\n\n Last change: 5 September 1995",
"msg_date": "Thu, 18 Oct 2001 09:28:19 +0100 (BST)",
"msg_from": "Lee Kindness <lkindness@csl.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: compiling libpq++ on Solaris with Sun SPRO6U2 (fixed"
},
{
"msg_contents": "Lee Kindness writes:\n\n> For your information I've attached the man page for the Sun C\n> compiler, which explicitly lists the -h and -R flags.\n\nI didn't read much farther than\n\n acc (SPARC only) is not intended to be used directly on\n Solaris 2.x.\n\n;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Thu, 18 Oct 2001 23:03:41 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: compiling libpq++ on Solaris with Sun SPRO6U2 (fixed"
},
{
"msg_contents": "Peter Eisentraut writes:\n > Lee Kindness writes:\n > > For your information I've attached the man page for the Sun C\n > > compiler, which explicitly lists the -h and -R flags.\n > I didn't read much farther than\n > acc (SPARC only) is not intended to be used directly on\n > Solaris 2.x. ;-)\n\nTouché, but the man page for the front-end (plain old cc) doesn't list\noptions and only refers to the acc man page ;)\n\nOnto another Solaris compilation issue...\n\nAfter a simple './configure' on a stock Solaris 2.6 box the\ncompilation of interfaces/ecpg/lib/execute.c fails due to the macro\ndefinition of 'gettext' to ''. This macro is invoked on the prototype\nof gettext() in libintl.h (included via locale.h).\n\nA './configure --enable-nls' is needed.\n\nTo properly fix the problem either:\n\n 1. Don't include or use locale functions in execute.c unless\n --enable-locale has been specified.\n\n 2. In execute.c the include for locale.h should be moved above that\n of postgres_fe.h\n\n 3. Replace '#define gettext' in c.h with something more unique\n (PG_gettext perhaps?)\n\nRegards, Lee Kindness.\n",
"msg_date": "Fri, 19 Oct 2001 09:40:53 +0100 (BST)",
"msg_from": "Lee Kindness <lkindness@csl.co.uk>",
"msg_from_op": false,
"msg_subject": "Compiling on Solaris with Sun compiler"
},
{
"msg_contents": "Lee Kindness writes:\n\n> Touche, but the man page for the front-end (plain old cc) doesn't list\n> options and only refers to the acc man page ;)\n\nWell, I'm stumped. All the Solaris compilers I've ever seen did support\nand document the -Wl option.\n\n> After a simple './configure' on a stock Solaris 2.6 box the\n> compilation of interfaces/ecpg/lib/execute.c fails due to the macro\n> definition of 'gettext' to ''. This macro is invoked on the prototype\n> of gettext() in libintl.h (included via locale.h).\n\nFail how and why?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 20 Oct 2001 11:27:52 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Compiling on Solaris with Sun compiler"
},
{
"msg_contents": "Peter Eisentraut writes:\n > Lee Kindness writes:\n > > Touche, but the man page for the front-end (plain old cc) doesn't list\n > > options and only refers to the acc man page ;)\n > Well, I'm stumped. All the Solaris compilers I've ever seen did support\n > and document the -Wl option.\n\nWell I never submitted my patch for building using the Sun compilers\nsince I thought that the newer versions did support the -Wl\noption - I'm using an old (version 4) Sun compiler. However it seems\nthat Denis is using revision 2 of the latest version 6 compiler!\n\n > > After a simple './configure' on a stock Solaris 2.6 box the\n > > compilation of interfaces/ecpg/lib/execute.c fails due to the macro\n > > definition of 'gettext' to ''. This macro is invoked on the prototype\n > > of gettext() in libintl.h (included via locale.h).\n > Fail how and why?\n\nWell in c.h there is the following define:\n\n #ifdef ENABLE_NLS\n #include <libintl.h>\n #else\n #define gettext(x) (x)\n #endif\n #define gettext_noop(x) (x)\n\nso gettext() simply is the supplied parameter if --enable-nls is not\nsupplied. However ecpg/execute.c has the following includes:\n\n #include \"postgres_fe.h\"\n #include <stdio.h>\n #include <locale.h>\n\nVia postgres_fe.h gettext() gets defined as above. However locale.h\nalso pulls in the system's libintl.h which has the following prototype:\n\n extern char *gettext();\n\nwhich the preprocessor changes to:\n\n extern char *();\n\ndue to the gettext define in c.h. Naturally this makes the build\nfail.\n\nConfiguring with --enable-nls gets round this but I don't require that\nfunctionality.\n\nRegards, Lee Kindness.\n",
"msg_date": "Tue, 23 Oct 2001 10:28:57 +0100 (BST)",
"msg_from": "Lee Kindness <lkindness@csl.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Compiling on Solaris with Sun compiler"
},
{
"msg_contents": "Lee Kindness writes:\n\n> After a simple './configure' on a stock Solaris 2.6 box the\n> compilation of interfaces/ecpg/lib/execute.c fails due to the macro\n> definition of 'gettext' to ''. This macro is invoked on the prototype\n> of gettext() in libintl.h (included via locale.h).\n\nThis should be fixed now.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 24 Oct 2001 23:54:33 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Compiling on Solaris with Sun compiler"
},
{
"msg_contents": "Lee Kindness writes:\n > Peter Eisentraut writes:\n > > Lee Kindness writes:\n > > > Touche, but the man page for the front-end (plain old cc) doesn't list\n > > > options and only refers to the acc man page ;)\n > > Well, I'm stumped. All the Solaris compilers I've ever seen did support\n > > and document the -Wl option.\n > Well I never submitted my patch for building using the Sun compilers\n > since I thought that the newer versions did support the -Wl\n > option - I'm using an old (version 4) Sun compiler. However it seems\n > that Denis it using revision 2 of the latest version 6 compiler!\n\nDoes the patch break Sun compilers which do accept '-Wl'? If it\ndoesn't it might be worthwhile to apply it so things compile out of\nthe box for more Sun compilers...\n\nOtherwise it'd be useful to include/mention the patch in the Solaris\nFAQ.\n\nRegards, Lee Kindness.\n",
"msg_date": "Tue, 30 Oct 2001 10:46:36 +0000 (GMT)",
"msg_from": "Lee Kindness <lkindness@csl.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Compiling on Solaris with Sun compiler"
},
{
"msg_contents": "\nCan I get a status on this? I see Peter stated:\n\t\n\tI didn't read much farther than\n\t\n\t acc (SPARC only) is not intended to be used directly on\n\t Solaris 2.x.\n\t\n\n---------------------------------------------------------------------------\n\n> Sorry, previous patch was wrong.\n> \n> Denis Ustimenko\n> --------------------------------------\n> *** orig/postgresql-7.1.3//src/makefiles/Makefile.solaris ?? ??? 17\n> 00:14:25 2000\n> --- postgresql-7.1.3//src/makefiles/Makefile.solaris ?? ??? 17 14:33:11\n> 2001\n> ***************\n> *** 6,12 ****\n> export_dynamic = -Wl,-E\n> rpath = -Wl,-rpath,$(libdir)\n> else\n> ! rpath = -Wl,-R$(libdir)\n> endif\n> shlib_symbolic = -Wl,-Bsymbolic\n> \n> --- 6,12 ----\n> export_dynamic = -Wl,-E\n> rpath = -Wl,-rpath,$(libdir)\n> else\n> ! rpath = -R$(libdir)\n> endif\n> shlib_symbolic = -Wl,-Bsymbolic\n> \n> *** orig/postgresql-7.1.3//src/Makefile.shlib ?? ??? 15 10:25:07 2001\n> --- postgresql-7.1.3//src/Makefile.shlib ?? ??? 17 13:00:29 2001\n> ***************\n> *** 179,185 ****\n> ifeq ($(with_gnu_ld), yes)\n> LINK.shared += -Wl,-soname,$(soname)\n> else\n> ! LINK.shared += -Wl,-h,$(soname)\n> endif\n> SHLIB_LINK += -lm -lc\n> endif\n> --- 179,185 ----\n> ifeq ($(with_gnu_ld), yes)\n> LINK.shared += -Wl,-soname,$(soname)\n> else\n> ! LINK.shared += -h $(soname)\n> endif\n> SHLIB_LINK += -lm -lc\n> endif\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Nov 2001 17:40:42 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: compiling libpq++ on Solaris with Sun SPRO6U2 (fixed &"
},
{
"msg_contents": "Bruce Momjian writes:\n > Can I get a status on this? I see Peter stated:\n > \tI didn't read much farther than\n > \t acc (SPARC only) is not intended to be used directly on\n > \t Solaris 2.x.\n\nBruce, see attached message, I think to sum things up:\n\n 1. The patch will not harm Sun compilers which do support -Wl\n 2. It will help those who do not\n 3. We need to investigate which versions do/don't support the option,\n it seems odd that (my) V4 compiler does not and Denis's V6.2\n doesn't but I imagine Peter Eisentraut's does...\n\nLooking thru the archives for 'Compiling on Solaris with Sun compiler'\nshould give more background.\n\nLee.\n\n\nLee Kindness writes:\n > Peter Eisentraut writes:\n > > Lee Kindness writes:\n > > > Touche, but the man page for the front-end (plain old cc) doesn't list\n > > > options and only refers to the acc man page ;)\n > > Well, I'm stumped. All the Solaris compilers I've ever seen did support\n > > and document the -Wl option.\n > Well I never submitted my patch for building using the Sun compilers\n > since I thought that the newer versions did support the -Wl\n > option - I'm using an old (version 4) Sun compiler. However it seems\n > that Denis it using revision 2 of the latest version 6 compiler!\n\nDoes the patch break Sun compilers which do accept '-Wl'? If it\ndoesn't it might be worthwhile to apply it so things compile out of\nthe box for more Sun compilers...\n\nOtherwise it'd be useful to included/mention the patch in the Solaris\nFAQ.\n\nRegards, Lee Kindness.",
"msg_date": "Wed, 7 Nov 2001 10:08:47 +0000 (GMT)",
"msg_from": "Lee Kindness <lkindness@csl.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: compiling libpq++ on Solaris with Sun SPRO6U2 (fixed &"
},
{
"msg_contents": "Lee Kindness writes:\n\n> 1. The patch will not harm Sun compilers which do support -Wl\n> 2. It will help those who do not\n> 3. We need to investigate which versions do/don't support the option,\n> it seems odd that (my) V4 compiler does not and Denis's V6.2\n> doesn't but I imagine Peter Eisentraut's does...\n\nOkay, I figured it out:\n\nThe Solaris C compiler uses -Wl to pass options to the linker (like\neveryone else). But the Solaris C++ compiler uses '-Qoption ld' to pass\noptions to the linker. Words are not sufficient...\n\nSo I guess that this patch is okay if you can assert that #1 above is\ntrue. If it isn't we'll find out soon enough.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 8 Nov 2001 01:26:14 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: compiling libpq++ on Solaris with Sun SPRO6U2 (fixed"
},
{
"msg_contents": "\nWas this resolved and applied?\n\n---------------------------------------------------------------------------\n\n> Lee Kindness writes:\n> \n> > 1. The patch will not harm Sun compilers which do support -Wl\n> > 2. It will help those who do not\n> > 3. We need to investigate which versions do/don't support the option,\n> > it seems odd that (my) V4 compiler does not and Denis's V6.2\n> > doesn't but I imagine Peter Eisentraut's does...\n> \n> Okay, I figured it out:\n> \n> The Solaris C compiler uses -Wl to pass options to the linker (like\n> everyone else). But the Solaris C++ compiler uses '-Qoption ld' to pass\n> options to the linker. Words are not sufficient...\n> \n> So I guess that this patch is okay if you can assert that #1 above is\n> true. If it isn't we'll find out soon enough.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 10 Nov 2001 21:58:22 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: compiling libpq++ on Solaris with Sun SPRO6U2 (fixed"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Was this resolved and applied?\n\nNo. The proposed patch should still be applied.\n\n>\n> ---------------------------------------------------------------------------\n>\n> > Lee Kindness writes:\n> >\n> > > 1. The patch will not harm Sun compilers which do support -Wl\n> > > 2. It will help those who do not\n> > > 3. We need to investigate which versions do/don't support the option,\n> > > it seems odd that (my) V4 compiler does not and Denis's V6.2\n> > > doesn't but I imagine Peter Eisentraut's does...\n> >\n> > Okay, I figured it out:\n> >\n> > The Solaris C compiler uses -Wl to pass options to the linker (like\n> > everyone else). But the Solaris C++ compiler uses '-Qoption ld' to pass\n> > options to the linker. Words are not sufficient...\n> >\n> > So I guess that this patch is okay if you can assert that #1 above is\n> > true. If it isn't we'll find out soon enough.\n> >\n> > --\n> > Peter Eisentraut peter_e@gmx.net\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n> >\n>\n>\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 11 Nov 2001 16:41:18 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: compiling libpq++ on Solaris with Sun SPRO6U2 (fixed"
},
{
"msg_contents": "\nPatch applied. Sorry for the delay. I got confused in the resulting\ndiscussion. This will appear in 7.2 final.\n\n---------------------------------------------------------------------------\n\n> Sorry, previous patch was wrong.\n> \n> Denis Ustimenko\n> --------------------------------------\n> *** orig/postgresql-7.1.3//src/makefiles/Makefile.solaris ?? ??? 17\n> 00:14:25 2000\n> --- postgresql-7.1.3//src/makefiles/Makefile.solaris ?? ??? 17 14:33:11\n> 2001\n> ***************\n> *** 6,12 ****\n> export_dynamic = -Wl,-E\n> rpath = -Wl,-rpath,$(libdir)\n> else\n> ! rpath = -Wl,-R$(libdir)\n> endif\n> shlib_symbolic = -Wl,-Bsymbolic\n> \n> --- 6,12 ----\n> export_dynamic = -Wl,-E\n> rpath = -Wl,-rpath,$(libdir)\n> else\n> ! rpath = -R$(libdir)\n> endif\n> shlib_symbolic = -Wl,-Bsymbolic\n> \n> *** orig/postgresql-7.1.3//src/Makefile.shlib ?? ??? 15 10:25:07 2001\n> --- postgresql-7.1.3//src/Makefile.shlib ?? ??? 17 13:00:29 2001\n> ***************\n> *** 179,185 ****\n> ifeq ($(with_gnu_ld), yes)\n> LINK.shared += -Wl,-soname,$(soname)\n> else\n> ! LINK.shared += -Wl,-h,$(soname)\n> endif\n> SHLIB_LINK += -lm -lc\n> endif\n> --- 179,185 ----\n> ifeq ($(with_gnu_ld), yes)\n> LINK.shared += -Wl,-soname,$(soname)\n> else\n> ! LINK.shared += -h $(soname)\n> endif\n> SHLIB_LINK += -lm -lc\n> endif\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 11 Nov 2001 14:21:21 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: compiling libpq++ on Solaris with Sun SPRO6U2 (fixed &"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > Was this resolved and applied?\n> \n> No. The proposed patch should still be applied.\n\nThanks. Applied.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 11 Nov 2001 14:21:35 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: compiling libpq++ on Solaris with Sun SPRO6U2 (fixed"
}
]
[
{
"msg_contents": "All,\n\nHow do I get a list of DB's or Tables with a postgres SQL statement?\nIt needs to be an SQL statement otherwise perl/DBI/prepare won't parse it.\nI know there are some functions in psql:\n\n\\l (show databases)\n\\d (show tables)\n\nwhich work fine from psql, but these\nstatements won't be parsed by perl, DBI\n\nI also know there is an object:\n\n@names = $dbh->tables;\n\nbut what about databases?\n\nPlease help.\n\nRon de Jong\nthe Netherlands\n\n\n\n\n\n",
"msg_date": "Wed, 17 Oct 2001 13:45:22 +0200",
"msg_from": "\"Ron de Jong\" <radejong@planet.nl>",
"msg_from_op": true,
"msg_subject": "How do I get a list of DB's or Tables with a postgres SQL statement?"
},
{
"msg_contents": "Ron de Jong wrote:\n\n> I know there is some funtions in psql:\n> \n> \\l (show databases)\n> \\d (show tables)\n\nIf you start psql with the -E option, it will show you which queries it\nmakes to the internal tables to retrieve the backslash result. Then\ncopying and customizing your own query is very simple.\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n",
"msg_date": "Thu, 18 Oct 2001 13:10:03 +0300",
"msg_from": "Alessio Bragadini <alessio@albourne.com>",
"msg_from_op": false,
"msg_subject": "Re: How do I get a list of DB's or Tables with a postgres SQL\n\tstatement?"
},
{
"msg_contents": "See the pg_database table:\n\n http://www.postgresql.org/idocs/index.php?catalog-pg-database.html\n\nRegards, Lee.\n\nRon de Jong writes:\n > All,\n > \n > How do I get a list of DB's or Tables with a postgres SQL statement?\n > It needs to be an SQL statement otherwise perl/DBI/prepare won't parse it.\n > I know there is some funtions in psql:\n > \n > \\l (show databases)\n > \\d (show tables)\n > \n > which work fine from psql, but these\n > statements won't be parsed by perl, DBI\n > \n > I also know there is an object:\n > \n > @names = $dbh->tables;\n > \n > but what about databases?\n > \n > Please help.\n > \n > Ron de Jong\n > the Netherlands\n > \n > \n > \n > \n > \n > \n > ---------------------------(end of broadcast)---------------------------\n > TIP 5: Have you checked our extensive FAQ?\n > \n > http://www.postgresql.org/users-lounge/docs/faq.html\n",
"msg_date": "Thu, 18 Oct 2001 15:09:30 +0100 (BST)",
"msg_from": "Lee Kindness <lkindness@csl.co.uk>",
"msg_from_op": false,
"msg_subject": "How do I get a list of DB's or Tables with a postgres SQL statement?"
}
]
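The answers in the thread above can be made concrete. A sketch of the statements involved — the pg_database query behind psql's \l and a rough equivalent of \d's table list (pg_database and pg_class are the real catalogs, but the exact queries psql -E reveals vary by version) — both run through Perl DBI's prepare/execute like any other SQL:

```sql
-- List databases (what \l draws from):
SELECT datname FROM pg_database ORDER BY datname;

-- List user tables (roughly what \d walks through; the pg_% filter
-- skips the system catalogs):
SELECT relname FROM pg_class
WHERE relkind = 'r' AND relname NOT LIKE 'pg_%'
ORDER BY relname;
```

Note that listing databases requires no special object in DBI; a plain SELECT against pg_database is enough.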
[
{
"msg_contents": "I have tons of old files with names like base/db/pg_sorttemp####.##. I\nassume that they are temporary sorting files but somehow they never got\ncleared out. Is it safe to delete these from a running system? The files\nare months old.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Wed, 17 Oct 2001 15:46:17 -0400 (EDT)",
"msg_from": "darcy@druid.net (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "pg_sorttemp files"
},
{
"msg_contents": "My guess is probably yes it's ok - just shut down the server before deleting\nthem!\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of D'Arcy J.M. Cain\n> Sent: Thursday, 18 October 2001 3:46 AM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] pg_sorttemp files\n>\n>\n> I have tons of old files with names like base/db/pg_sorttemp####.##. I\n> assume that they are temporary sorting files but somehow they never got\n> cleared out. Is it safe to delete these from a running system. The files\n> are months old.\n>\n> --\n> D'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> http://www.druid.net/darcy/ | and a sheep voting on\n> +1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Thu, 18 Oct 2001 09:52:21 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pg_sorttemp files"
},
{
"msg_contents": "darcy@druid.net (D'Arcy J.M. Cain) writes:\n> I have tons of old files with names like base/db/pg_sorttemp####.##. I\n> assume that they are temporary sorting files but somehow they never got\n> cleared out. Is it safe to delete these from a running system. The files\n> are months old.\n\nThe first #### is the PID of the backend that made them. If there is no\nsuch backend anymore according to ps, it's safe to zap 'em. I'd rely on\nthat much more than the mod date.\n\nBTW, if you are seeing unreclaimed sorttemp files in a recent release\n(7.0 or later), I'd like to know about it. That shouldn't happen,\nshort of a backend crash anyway...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 18 Oct 2001 13:45:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_sorttemp files "
},
{
"msg_contents": "Thus spake Tom Lane\n> darcy@druid.net (D'Arcy J.M. Cain) writes:\n> > I have tons of old files with names like base/db/pg_sorttemp####.##. I\n> > assume that they are temporary sorting files but somehow they never got\n> > cleared out. Is it safe to delete these from a running system. The files\n> > are months old.\n> \n> The first #### is the PID of the backend that made them. If there is no\n> such backend anymore according to ps, it's safe to zap 'em. I'd rely on\n> that much more than the mod date.\n\nThanks. I wasn't sure about that PID thing but I have now run a script\nthat got rid of them all.\n\n> BTW, if you are seeing unreclaimed sorttemp files in a recent release\n> (7.0 or later), I'd like to know about it. That shouldn't happen,\n> short of a backend crash anyway...\n\nWell, I had over 6,000 of these files. This database is about a year old.\nI haven't seen all that many backend crashes in that time. I guess I better\nkeep a close eye on them.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Fri, 19 Oct 2001 07:52:05 -0400 (EDT)",
"msg_from": "darcy@druid.net (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "Re: pg_sorttemp files"
},
{
"msg_contents": "darcy@druid.net (D'Arcy J.M. Cain) writes:\n>> BTW, if you are seeing unreclaimed sorttemp files in a recent release\n>> (7.0 or later), I'd like to know about it. That shouldn't happen,\n>> short of a backend crash anyway...\n\n> Well, I had over 6,000 of these files. This database is about a year old.\n> I haven't seen all that many backend crashes in that time. I guess I better\n> keep a close eye on them.\n\nWow! Did you happen to note how many distinct PIDs were accounted for?\nThat would give us some idea of how many lossage events there were.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 19 Oct 2001 10:09:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_sorttemp files "
},
{
"msg_contents": "Thus spake Tom Lane\n> darcy@druid.net (D'Arcy J.M. Cain) writes:\n> >> BTW, if you are seeing unreclaimed sorttemp files in a recent release\n> >> (7.0 or later), I'd like to know about it. That shouldn't happen,\n> >> short of a backend crash anyway...\n> \n> > Well, I had over 6,000 of these files. This database is about a year old.\n> > I haven't seen all that many backend crashes in that time. I guess I better\n> > keep a close eye on them.\n> \n> Wow! Did you happen to note how many distinct PIDs were accounted for?\n> That would give us some idea of how many lossage events there were.\n\nI think there were about 25 or so.\n\nPerhaps these files could be cleaned up on startup.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sat, 20 Oct 2001 07:44:20 -0400 (EDT)",
"msg_from": "darcy@druid.net (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "Re: pg_sorttemp files"
},
{
"msg_contents": "> Thus spake Tom Lane\n> > darcy@druid.net (D'Arcy J.M. Cain) writes:\n> > >> BTW, if you are seeing unreclaimed sorttemp files in a recent release\n> > >> (7.0 or later), I'd like to know about it. That shouldn't happen,\n> > >> short of a backend crash anyway...\n> > \n> > > Well, I had over 6,000 of these files. This database is about a year old.\n> > > I haven't seen all that many backend crashes in that time. I guess I better\n> > > keep a close eye on them.\n> > \n> > Wow! Did you happen to note how many distinct PIDs were accounted for?\n> > That would give us some idea of how many lossage events there were.\n> \n> I think there were about 25 or so.\n> \n> Perhaps these files could be cleaned up on startup.\n\nThey will be cleaned up on postmaster startup in 7.2.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 20 Oct 2001 12:38:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_sorttemp files"
}
]
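The PID rule Tom describes in the thread above lends itself to a small check. A hypothetical sketch (file names are examples; signal 0 is the standard "does this process exist" probe, and a permission error still means the process is alive):

```python
# Sketch: the first number in pg_sorttemp<PID>.<n> is the PID of the
# backend that created the file, so the file is stale once that process
# is gone.  File names below are hypothetical examples.
import os
import re

def sorttemp_pid(name):
    """Extract the creating backend's PID from a pg_sorttemp file name."""
    m = re.match(r"pg_sorttemp(\d+)\.\d+$", name)
    return int(m.group(1)) if m else None

def is_stale(name):
    """True if the file names a backend PID that is no longer running."""
    pid = sorttemp_pid(name)
    if pid is None:
        return False               # not a sorttemp file: leave it alone
    try:
        os.kill(pid, 0)            # signal 0: existence check, sends nothing
    except ProcessLookupError:
        return True                # no such process: safe to remove
    except PermissionError:
        return False               # process exists but belongs to someone else
    return False

print(sorttemp_pid("pg_sorttemp12345.7"))   # → 12345
```

Checking by PID, as Tom suggests, beats trusting the modification date; the one caveat is that PIDs wrap, so a very old file could in principle match an unrelated live process.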
[
{
"msg_contents": "Greetings,\n\nPostgreSQL 7.1.3, FreeBSD-4.3-RELEASE, gcc 2.95.3\n\nI'm trying to detect a failed backend connection, but a call to \nPQstatus() always returns the state of the backend when the call was \nmade. For example, take this test code:\n\n\tPGconn *pgConn;\n\tPGresult *pgRes;\n\tint fdPGconn;\n\n\tint i = 0;\n\tint iNewState = 0;\n\tint iOldState = 60;\n\n\tpgConn = PQconnectdb(\"dbname=pglogd user=postgres\");\n\n\twhile ( i == 0 )\n\t{\n\t\tiNewState = PQstatus(pgConn);\n\n\t\tif ( iNewState != iOldState )\n\t\t{\n\t\t\tiOldState = iNewState;\n\t\t\tprintf(\"Connection State [%d]\\n\", iNewState);\n\n\t\t\tfdPGconn = PQsocket(pgConn);\n\t\t\tprintf(\"Connection Socket [%d]\\n\", fdPGconn);\n\t\t}\n\n\t\tsleep(1);\n\t}\n\n\tPQfinish(pgConn);\n\nIf you start this with the backend running, the status is CONNECTION_OK, \nthen pull the plug on the backend, the call to PQstatus() will still return \nCONNECTION_OK, even though the backend is not running. Start this program \nwith the backend not running, then start the backend, PQstatus() never sees \nthe backend come to life...\n\nAm I reading PQstatus() wrong? Is there any way to detect when the backend \ngoes down or comes back up?\n\nThanks,\nMatthew\n\n",
"msg_date": "Wed, 17 Oct 2001 20:47:09 -0400",
"msg_from": "Matthew Hagerty <mhagerty@voyager.net>",
"msg_from_op": true,
"msg_subject": "PQstatus() detect change in connection..."
},
{
"msg_contents": "I presume you are trying to re-establish a connection automatically...if\nthat doesn't apply, ignore the rest of this email :)\n\nThe way I interpreted the docs was that you can use the return codes from\nPQexec() to establish whether the command was sent to the backend correctly.\nPQresultStatus() returns whether the command was syntactically\ncorrect/executed OK.\n\nI've attached a chunk of code from a back-end independent DB driver\n(supports Oracle, PgSQL, MySQL through the same front end API), which\nimplements this auto-reconnect. Take a look at the sqlExec() method.\n\nThis code successfully recovers when used in a client connection pool in the\nfollowing sequence:\n\n1) start postmaster\n2) connect through pool/driver\n3) issue SQL statements\n4) kill postmaster\n5) start postmaster\n6) issue SQL statements\n7) driver detects connection invalid, reconnects and re-issues\nautomatically.\n\nPerhaps those infinitely more knowledgeable on the list have a better/more\ncorrect way of doing things?\n\nCheers,\n\nMark Pritchard\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Matthew Hagerty\n> Sent: Thursday, 18 October 2001 10:47 AM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] PQstatus() detect change in connection...\n>\n>\n> Greetings,\n>\n> PostgreSQL 7.1.3, FreeBSD-4.3-RELEASE, gcc 2.95.3\n>\n> I'm trying to attempt to detect a failed backend connection, but\n> a call to\n> PQstatus() always returns the state of the backend when the call was\n> made. For example, take this test code:\n>\n> \tPGconn *pgConn;\n> \tPGresult *pgRes;\n> \tint fdPGconn;\n>\n> \tint i = 0;\n> \tint iNewState = 0;\n> \tint iOldState = 60;\n>\n> \tpgConn = PQconnectdb(\"dbname=pglogd user=postgres\");\n>\n> \twhile ( i == 0 )\n> \t{\n> \t\tiNewState = PQstatus(pgConn);\n>\n> \t\tif ( iNewState != iOldState )\n> \t\t{\n> \t\t\tiOldState = iNewState;\n> \t\t\tprintf(\"Connection State [%d]\\n\", iNewState);\n>\n> \t\t\tfdPGconn = PQsocket(pgConn);\n> \t\t\tprintf(\"Connection Socket [%d]\\n\", fdPGconn);\n> \t\t}\n>\n> \t\tsleep(1);\n> \t}\n>\n> \tPQfinish(pgConn);\n>\n> If you start this with the backend running, the status is CONNECTION_OK,\n> then pull the plug on the backend, the call to PQstatus() will\n> still return\n> CONNECTION_OK, even though the backend is not running. Start\n> this program\n> with the backend not running, then start the backend, PQstatus()\n> never sees\n> the backend come to life...\n>\n> Am I reading PQstatus() wrong? Is there any way to detect when\n> the backend\n> goes down or comes back up?\n>\n> Thanks,\n> Matthew\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>",
"msg_date": "Thu, 18 Oct 2001 11:51:17 +1000",
"msg_from": "\"Mark Pritchard\" <mark@tangent.net.au>",
"msg_from_op": false,
"msg_subject": "Re: PQstatus() detect change in connection..."
},
{
"msg_contents": "I am trying to re-establish a connection, however, I cannot afford to issue \na query to determine if the connection still exists. I'm writing a server \nthat uses the asynchronous query processing functions and speed is an \nissue. Queries are slow compared to what the server does and it cannot \nwait around for a query to finish just to see if another query *should* be \nattempted based on the connection status.\n\nI've been digging into the libpq code to see what is going on, maybe I can \nglean a little hint or two there... Anyone know a good *fast* way to test \nif a socket is still valid?\n\nThanks,\nMatthew\n\nAt 11:51 AM 10/18/2001 +1000, Mark Pritchard wrote:\n>I presume you are trying to re-establish a connection automatically...if\n>that doesn't apply, ignore the rest of this email :)\n>\n>The way I interpreted the docs was that you can use the return codes from\n>PQexec() to establish whether the command was sent to the backend correctly.\n>PQresultStatus() returns whether the command was syntactically\n>correct/executed OK.\n>\n>I've attached a chunk of code from a back-end independent DB driver\n>(supports Oracle, PgSQL, MySQL through the same front end API), which\n>implements this auto-reconnect. Take a look at the sqlExec() method.\n>\n>This code successfully recovers when used in a client connection pool in the\n>following sequence:\n>\n>1) start postmaster\n>2) connect through pool/driver\n>3) issue SQL statements\n>4) kill postmaster\n>5) start postmaster\n>6) issue SQL statements\n>7) driver detects connection invalid, reconnects and re-issues\n>automatically.\n>\n>Perhaps those infinitely more knowledgeable on the list have a better/more\n>correct way of doing things?\n>\n>Cheers,\n>\n>Mark Pritchard\n>\n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Matthew Hagerty\n> > Sent: Thursday, 18 October 2001 10:47 AM\n> > To: pgsql-hackers@postgresql.org\n> > Subject: [HACKERS] PQstatus() detect change in connection...\n> >\n> >\n> > Greetings,\n> >\n> > PostgreSQL 7.1.3, FreeBSD-4.3-RELEASE, gcc 2.95.3\n> >\n> > I'm trying to attempt to detect a failed backend connection, but\n> > a call to\n> > PQstatus() always returns the state of the backend when the call was\n> > made. For example, take this test code:\n> >\n> > PGconn *pgConn;\n> > PGresult *pgRes;\n> > int fdPGconn;\n> >\n> > int i = 0;\n> > int iNewState = 0;\n> > int iOldState = 60;\n> >\n> > pgConn = PQconnectdb(\"dbname=pglogd user=postgres\");\n> >\n> > while ( i == 0 )\n> > {\n> > iNewState = PQstatus(pgConn);\n> >\n> > if ( iNewState != iOldState )\n> > {\n> > iOldState = iNewState;\n> > printf(\"Connection State [%d]\\n\", iNewState);\n> >\n> > fdPGconn = PQsocket(pgConn);\n> > printf(\"Connection Socket [%d]\\n\", fdPGconn);\n> > }\n> >\n> > sleep(1);\n> > }\n> >\n> > PQfinish(pgConn);\n> >\n> > If you start this with the backend running, the status is CONNECTION_OK,\n> > then pull the plug on the backend, the call to PQstatus() will\n> > still return\n> > CONNECTION_OK, even though the backend is not running. Start\n> > this program\n> > with the backend not running, then start the backend, PQstatus()\n> > never sees\n> > the backend come to life...\n> >\n> > Am I reading PQstatus() wrong? Is there any way to detect when\n> > the backend\n> > goes down or comes back up?\n> >\n> > Thanks,\n> > Matthew\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >\n>\n\n",
"msg_date": "Thu, 18 Oct 2001 00:19:05 -0400",
"msg_from": "Matthew Hagerty <mhagerty@voyager.net>",
"msg_from_op": true,
"msg_subject": "Re: PQstatus() detect change in connection..."
},
{
"msg_contents": "Matthew Hagerty <mhagerty@voyager.net> writes:\n> Anyone know a good *fast* way to test \n> if a socket is still valid?\n\nWhat exactly are you trying to defend against?\n\nIn general, I don't believe that there is any way of discovering whether\nthe server is still up, other than to send it a query. (FWIW, an empty\nquery string bounces back very quickly, with little processing.)\n\nFor particular scenarios it's possible that some notification has been\ndelivered to the client, but if you have had (say) a loss of network\nconnectivity then there just is no other alternative. Your end isn't\ngoing to discover the connectivity loss until it tries to send a\nmessage.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 18 Oct 2001 14:10:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PQstatus() detect change in connection... "
},
{
"msg_contents": "At 02:10 PM 10/18/2001 -0400, Tom Lane wrote:\n>Matthew Hagerty <mhagerty@voyager.net> writes:\n> > Anyone know a good *fast* way to test\n> > if a socket is still valid?\n>\n>What exactly are you trying to defend against?\n>\n>In general, I don't believe that there is any way of discovering whether\n>the server is still up, other than to send it a query. (FWIW, an empty\n>query string bounces back very quickly, with little processing.)\n>\n>For particular scenarios it's possible that some notification has been\n>delivered to the client, but if you have had (say) a loss of network\n>connectivity then there just is no other alternative. Your end isn't\n>going to discover the connectivity loss until it tries to send a\n>message.\n>\n> regards, tom lane\n\n\nI was using PQstatus() under the assumption that it actually *checked* the \nconnection, however I have since discovered that is simply returns the \nvalue in a structure, and that value only gets updated in pqReadData() or \npqReadReady() (both of which are internal function calls.)\n\nWhat I'm doing is using the asynchronous processing to write a server that \ndoes not have to wait around for a query to finish (which is a slow process \ncompared to what the rest of the server does.) So, using a query to test \nif the connection is up seems rather redundant and slow... I was hoping to \ncome up with a faster more simple solution. If the connection is down I \nneed to write - what would have been a query - to a temporary place and \nattempt a reconnect, all while going off and doing other things.\n\nThis all came about when my main select() bailed because the backend went \ndown and the socket's file-descriptor became invalid. I could probably \ncatch the error in that loop, but I also want to check the connection \n*before* submitting a query... Basically, I hope to avoid a huge rewrite \nbased on my assumption of how PQstatus() was actually working. 
;-)\n\nCurrently I'm looking at fcntl() or a dedicated select() call (similar to \nwhat pqReadReady() does), but I'm not sure of the OS overhead of these \nsolutions compared to each other or an empty query. Any insight would be \ngreatly appreciated.\n\nThanks,\nMatthew\n\n",
"msg_date": "Thu, 18 Oct 2001 16:54:57 -0400",
"msg_from": "Matthew Hagerty <mhagerty@voyager.net>",
"msg_from_op": true,
"msg_subject": "Re: PQstatus() detect change in connection... "
},
{
"msg_contents": "Matthew Hagerty writes:\n\n> I am trying to re-establish a connection, however, I cannot afford to issue\n> a query to determine if the connection still exists.\n\nBut requesting that the server do something *is* the only way to know\nwhether it's still alive. Another question to ask, of course, would be,\nwhy is your server always going down?\n\n> I've been digging into the libpq code to see what is going on, maybe I can\n> gleam a little hint or two there... Anyone know a good *fast* way to test\n> if a socket is still valid?\n\nTry to send or receive something.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Thu, 18 Oct 2001 23:03:23 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: PQstatus() detect change in connection..."
},
{
"msg_contents": "Matthew Hagerty <mhagerty@voyager.net> writes:\n> but I also want to check the connection \n> *before* submitting a query...\n\nThis strikes me as utterly pointless. You'll need to be able to recover\nfrom query failure anyway, so what's the value of testing beforehand?\nSend the query and see if it works or not.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 18 Oct 2001 20:51:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PQstatus() detect change in connection... "
}
] |
[
{
"msg_contents": "Hello\nTrying to dump all databases \n> ./pg_dumpall -h baze.zenon.net -p 63010 -u > $HOME/pg_dumpall\n\nGet error :\npsql: connectDBStart() -- connect() failed: No such file or directory\n Is the postmaster running at 'localhost'\n and accepting connections on Unix socket '5432'?\npsql: connectDBStart() -- connect() failed: No such file or directory\n Is the postmaster running at 'localhost'\n and accepting connections on Unix socket '5432'?\npsql: connectDBStart() -- connect() failed: No such file or directory\n Is the postmaster running at 'localhost'\n and accepting connections on Unix socket '5432'?\npsql: connectDBStart() -- connect() failed: No such file or directory\n Is the postmaster running at 'localhost'\n and accepting connections on Unix socket '5432'?\n> \n\n\nQuestion - how to create full dump for my base ? \n\nregarrds\nkorshunov\n\n\n\n\n\n\n\nHello\nTrying to dump all databases \n> ./pg_dumpall -h baze.zenon.net -p 63010 \n-u > $HOME/pg_dumpall\n \nGet error :psql: connectDBStart() -- \nconnect() failed: No such file or \ndirectory Is the postmaster \nrunning at 'localhost' and \naccepting connections on Unix socket '5432'?psql: connectDBStart() -- \nconnect() failed: No such file or \ndirectory Is the postmaster \nrunning at 'localhost' and \naccepting connections on Unix socket '5432'?psql: connectDBStart() -- \nconnect() failed: No such file or \ndirectory Is the postmaster \nrunning at 'localhost' and \naccepting connections on Unix socket '5432'?psql: connectDBStart() -- \nconnect() failed: No such file or \ndirectory Is the postmaster \nrunning at 'localhost' and \naccepting connections on Unix socket '5432'?> \n \nQuestion - how to create full \n dump for my base ? \n\n \nregarrds\nkorshunov",
"msg_date": "Thu, 18 Oct 2001 04:48:58 +0400",
"msg_from": "\"Korshunov Ilya\" <kosha@kp.ru>",
"msg_from_op": true,
"msg_subject": "may be bug in pg_dumpall in 7.0.3 "
},
{
"msg_contents": "\"Korshunov Ilya\" <kosha@kp.ru> writes:\n>> ./pg_dumpall -h baze.zenon.net -p 63010 -u > $HOME/pg_dumpall\n\nI think you'll need to set PGHOST and PGPORT to get that old version\nof pg_dumpall to work. Current sources seem to do this better.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 18 Oct 2001 14:43:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: may be bug in pg_dumpall in 7.0.3 "
}
] |
[
{
"msg_contents": "I'm working on making some changes to the top level configure.in and m4\nhas now been running for 17 minutes on a 1.4G tbird. Am I missing\nsomething or is this know to take forever?\n\nThanks guys,\n- Brandon\n\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n",
"msg_date": "Wed, 17 Oct 2001 22:12:57 -0400 (EDT)",
"msg_from": "bpalmer <bpalmer@crimelabs.net>",
"msg_from_op": true,
"msg_subject": "autoconf taking forever?"
},
{
"msg_contents": "bpalmer <bpalmer@crimelabs.net> writes:\n> I'm working on making some changes to the top level configure.in and m4\n> has now been running for 17 minutes on a 1.4G tbird. Am I missing\n> something or is this know to take forever?\n\nSomething's broken. autoconf executes in about 3 seconds on my machine,\nwhich is doubtless a lot slower than yours.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 18 Oct 2001 13:58:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: autoconf taking forever? "
},
{
"msg_contents": "bpalmer writes:\n\n> I'm working on making some changes to the top level configure.in and m4\n> has now been running for 17 minutes on a 1.4G tbird. Am I missing\n> something or is this know to take forever?\n\nFor me, the autoconf run is \"instantaneous\". Make sure you're using\nAutoconf 2.13, and you don't have actual infinite loops in your code. Or\nperhaps the problem is in OpenBSD's m4 (which I suspect you are using)?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Thu, 18 Oct 2001 23:03:03 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: autoconf taking forever?"
},
{
"msg_contents": "\n\nI always found with new machines and configure scripts is if gethostname\ndoes not resolve then the autoconfig will hang.\nI would make sure your /etc/resolve.conf /etc/hosts , hostname domainname\nare setup right and resolve from the command line.....man gethostbyname\n\n\n\nOn Thu, 18 Oct 2001, Tom Lane wrote:\n\n> Date: Thu, 18 Oct 2001 13:58:30 -0400\n> From: Tom Lane <tgl@sss.pgh.pa.us>\n> To: bpalmer <bpalmer@crimelabs.net>\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] autoconf taking forever?\n>\n> bpalmer <bpalmer@crimelabs.net> writes:\n> > I'm working on making some changes to the top level configure.in and m4\n> > has now been running for 17 minutes on a 1.4G tbird. Am I missing\n> > something or is this know to take forever?\n>\n> Something's broken. autoconf executes in about 3 seconds on my machine,\n> which is doubtless a lot slower than yours.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n",
"msg_date": "Thu, 18 Oct 2001 14:24:54 -0700 (PDT)",
"msg_from": "Dan <dphoenix@bravenet.com>",
"msg_from_op": false,
"msg_subject": "Re: autoconf taking forever? "
},
{
"msg_contents": "> > I'm working on making some changes to the top level configure.in and m4\n> > has now been running for 17 minutes on a 1.4G tbird. Am I missing\n> > something or is this know to take forever?\n>\n> For me, the autoconf run is \"instantaneous\". Make sure you're using\n> Autoconf 2.13, and you don't have actual infinite loops in your code. Or\n> perhaps the problem is in OpenBSD's m4 (which I suspect you are using)?\n\nI am running 2.13 (even on a clean checkout of 7.1.3) and the autoconf\ntakes forever. However, m4 is the process that's running forever, so I\nhave no doubs that the problem is there. What version do you use that\nworks? I'll try getting a new version... Any ideas where to look for a\nsolution though?\n\n- Brandon\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n",
"msg_date": "Tue, 23 Oct 2001 19:03:46 -0400 (EDT)",
"msg_from": "bpalmer <bpalmer@crimelabs.net>",
"msg_from_op": true,
"msg_subject": "Re: autoconf taking forever?"
},
{
"msg_contents": "bpalmer <bpalmer@crimelabs.net> writes:\n> I am running 2.13 (even on a clean checkout of 7.1.3) and the autoconf\n> takes forever. However, m4 is the process that's running forever, so I\n> have no doubs that the problem is there. What version do you use that\n> works?\n\nGNU m4 ... the version I have here is \n\n$ m4 --version\nGNU m4 1.4\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Oct 2001 13:57:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: autoconf taking forever? "
}
] |
[
{
"msg_contents": "Apologies if you think this mail is a little long-winded but I want to be as\nclear as possible on this.\n\nPostgreSQL - 7.1.3 (installed on Linux 2.4.2-2)\nPSQLODBC.DLL - 07.01.0007\nVisual C++ - 6.0\n\nI have a C++ app running on WINDOWS2000 and I am trying to use\nSQLBindParamater with a unicode (wchar_t) variable.\n\nI installed postgreSQL using the following arguments:\n\n\n\n\n\n./configure --enable-multibyte=UNICODE --enable-unicode-conversion --enable-\nodbc\n\nI have tested my app against SQL SERVER and DB2 and it works fine. (You can\nrun my program against SQL SERVER, DB2 and PostgreSQL by simply setting one\nof the global variables DBP_SQLSERVER, DBP_DB2 or DBP_POSTGRES to 1)\n\n\n\nThe SQL to generate the database table and test data that my test program\nuses is as follows:\n\n--SQL Server\ndrop table testtable\ngo\ncreate table testtable\n(\ncol1 NVARCHAR(20) NOT NULL,\ncol2 NVARCHAR(20) NOT NULL,\ncol3 CHAR(20) NOT NULL,\ncol4 INTEGER NOT NULL\n)\ngo\ninsert into testtable values (N'record one', N'record one data a', 'record\none data b', 1)\ngo\ninsert into testtable values (N'record two', N'record two data a', 'record\ntwo data b', 2)\ngo\ninsert into testtable values (N'record three', N'record three data a',\n'record three data b', 3)\ngo\nselect * from testtable\n----------------------------------------------------------------------------\n----\n--DB2\ndrop table testtable\ngo\ncreate table testtable\n(\ncol1 VARGRAPHIC(20) NOT NULL,\ncol2 VARGRAPHIC(20) NOT NULL,\ncol3 CHAR(20) NOT NULL,\ncol4 INTEGER NOT NULL\n)\ngo\ninsert into testtable values ('record one', 'record one data a', 'record one\ndata b', 1)\ngo\ninsert into testtable values ('record two', 'record two data a', 'record two\ndata b', 2)\ngo\ninsert into testtable values ('record three', 'record three data a', 'record\nthree data b', 3)\ngo\nselect * from testtable\n----------------------------------------------------------------------------\n----\n--Postgres\ndrop 
table testtable\ngo\ncreate table testtable\n(\ncol1 NCHAR VARYING(20) NOT NULL,\ncol2 NCHAR VARYING(20) NOT NULL,\ncol3 CHAR(20) NOT NULL,\ncol4 INTEGER NOT NULL\n)\ngo\ninsert into testtable values ('record one', 'record one data a', 'record one\ndata b', 1)\ngo\ninsert into testtable values ('record two', 'record two data a', 'record two\ndata b', 2)\ngo\ninsert into testtable values ('record three', 'record three data a', 'record\nthree data b', 3)\ngo\nselect * from testtable\n\n\n\nHere is my test program in full:\n\n//--- BEGIN PROGRAM SOURCE\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <windows.h>\n#include <sqlext.h>\n\nint DBP_SQLSERVER = 1;\nint DBP_DB2 = 0;\nint DBP_POSTGRES = 0;\n\n#define ENV 1\n#define DBC 2\n#define STMT 3\n#define SETCODE 1\n\n#define SQLNOTFOUND 100\n\nvoid OpenConnecton(void);\nvoid CloseConnection(void);\nvoid SelectSQL(void);\nvoid odbc_checkerr(wchar_t *, int, int);\nlong set_native_sql(wchar_t *, int);\nvoid myexit(int);\nvoid ChangeSession(wchar_t *);\n\nSQLWCHAR out_connect_str[1024] = {0};\nSQLWCHAR in_connect_str[1026] = {0};\nSQLSMALLINT in_connect_str_len = 1024;\nSQLSMALLINT out_connect_str_len = 1024;\nSQLSMALLINT stringlen = 0;\n\nHENV henv;\nHDBC hdbc;\nHSTMT hstmt;\nlong odbc_rc;\nlong Native_sql_code;\n#define ATEND (Native_sql_code == 100)\nwchar_t Msg[SQL_MAX_MESSAGE_LENGTH];\n\nwchar_t strSQL[513] = {0};\nlong lngCBInd = 0;\n\nvoid wmain(int argc, wchar_t **argv)\n{\n OpenConnecton();\n SelectSQL();\n CloseConnection();\n\n myexit(0);\n}\n//**************************************************************************\n******************\nvoid OpenConnecton()\n{\n // CREATE THE ENVIRONMENT HANDLE\n odbc_rc = SQLAllocEnv(&henv);\n odbc_checkerr(L\"OpenConnecton: SQLAllocEnv\", ENV, SETCODE);\n\n // CREATE THE CONNECTION HANDLE\n odbc_rc = SQLAllocConnect(henv, &hdbc);\n odbc_checkerr(L\"OpenConnecton: SQLAllocConnect\", DBC, SETCODE);\n\n // BUILD CONNECTION STRING\n if (DBP_SQLSERVER) {\n swprintf((wchar_t 
*)in_connect_str,\n L\"Driver={SQL Server};\"\n L\"SERVER=MYSEQUELSERVER;\"\n L\"DATABASE=mydatabase;\"\n L\"UID=me;\"\n L\"PWD=me;\"\n L\"UseProcForPrepare=0\");\n }\n else if (DBP_DB2) {\n swprintf((wchar_t *)in_connect_str,\n L\"DRIVER={IBM DB2 ODBC Driver};\"\n L\"UID=me;\"\n L\"PWD=me;\"\n L\"GRAPHIC=3;\"\n L\"DBALIAS=MYALIAS;\");\n }\n else { // PostgreSQL\n swprintf((wchar_t *)in_connect_str,\n L\"DRIVER={PostgreSQL};\"\n L\"UID=me;\"\n L\"PWD=me;\"\n L\"SERVER=MYPOSTSERVER;\"\n L\"DATABASE=mydatabase;\");\n }\n\n // CONNECT TO SERVER\nwprintf(L\"CONNECTION STRING <%s>\\n\", in_connect_str);\n odbc_rc = SQLDriverConnect(hdbc,\n (SQLHWND)0,\n (SQLWCHAR *)in_connect_str,\n (SQLSMALLINT)in_connect_str_len,\n (SQLWCHAR *)out_connect_str,\n (SQLSMALLINT)out_connect_str_len,\n &stringlen,\n SQL_DRIVER_NOPROMPT\n );\n odbc_checkerr(L\"OpenConnecton: SQLDriverConnect\", DBC, SETCODE);\n\n if (DBP_DB2) ChangeSession(L\"efacdb\");\n}\n//**************************************************************************\n******************\nvoid CloseConnection(void)\n{\nwprintf(L\"CLOSING CONNECTION\\n\");\n\n odbc_rc = SQLDisconnect(hdbc);\n odbc_checkerr(L\"CloseConnection: SQLDisconnect\", DBC, SETCODE);\n\n odbc_rc = SQLFreeHandle(SQL_HANDLE_DBC, hdbc);\n odbc_checkerr(L\"CloseConnection: SQLFreeHandle\", DBC, SETCODE);\n\n odbc_rc = SQLFreeHandle(SQL_HANDLE_ENV, henv);\n odbc_checkerr(L\"CloseConnection: SQLFreeHandle\", ENV, SETCODE);\n}\n//**************************************************************************\n******************\nvoid SelectSQL(void)\n{\n long lngValue = 0;\n long rows = 0;\n wchar_t strBindInUni[21] = {0};\n wchar_t strBindOut[21] = {0};\n char strBindInAsc[21] = {0};\n\n//******************************\n// SELECT 1 (bind using INTEGER)\n//******************************\n odbc_rc = SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt);\n odbc_checkerr(L\"SELECT 1: SQLAllocHandle\", STMT, SETCODE);\n\n lngValue = 1;\n\n odbc_rc = SQLBindParameter(hstmt, 1, 
SQL_PARAM_INPUT, SQL_C_LONG,\nSQL_INTEGER, 0, 0, &lngValue, 0, NULL);\n odbc_checkerr(L\"SELECT 1: SQLBindParameter\", STMT, SETCODE);\n\n odbc_rc = SQLExecDirect(hstmt, (SQLWCHAR *)L\"select col2 from testtable\nwhere col4 = ?\", SQL_NTS);\n odbc_checkerr(L\"SELECT 1: SQLExecDirect\", STMT, SETCODE);\n\n odbc_rc = SQLBindCol(hstmt, 1, SQL_C_WCHAR, strBindOut, sizeof(strBindOut),\n&lngCBInd);\n odbc_checkerr(L\"SELECT 1: SQLBindCol\", STMT, SETCODE);\n\n odbc_rc = SQLFetch(hstmt);\n set_native_sql(L\"SELECT 1: SQLFetch\", STMT);\n\n if (ATEND) {\nwprintf(L\"SELECT 1: SQLFetch = ATEND\\n\");\n\n SQLFreeStmt(hstmt, SQL_DROP);\n CloseConnection();\n myexit(0);\n }\n odbc_checkerr(L\"SELECT 1: SQLFetch\", STMT, 0);\n\nwprintf(L\"SELECT 1: DATA FETCHED: strBindOut = <%s>\\n\", strBindOut);\n\n SQLFreeStmt(hstmt, SQL_DROP);\n\n//***********************************\n// SELECT 2 (bind using ASCII STRING)\n//***********************************\n odbc_rc = SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt);\n odbc_checkerr(L\"SELECT 2: SQLAllocHandle\", STMT, SETCODE);\n\n strcpy(strBindInAsc, \"record two data b\");\n\n odbc_rc = SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_CHAR,\nsizeof(strBindInAsc), 0, strBindInAsc, 0, NULL);\n odbc_checkerr(L\"SELECT 2: SQLBindParameter\", STMT, SETCODE);\n\n odbc_rc = SQLExecDirect(hstmt, (SQLWCHAR *)L\"select col2 from testtable\nwhere col3 = ?\", SQL_NTS);\n odbc_checkerr(L\"SELECT 2: SQLExecDirect\", STMT, SETCODE);\n\n odbc_rc = SQLBindCol(hstmt, 1, SQL_C_WCHAR, strBindOut, sizeof(strBindOut),\n&lngCBInd);\n odbc_checkerr(L\"SELECT 2: SQLBindCol\", STMT, SETCODE);\n\n odbc_rc = SQLFetch(hstmt);\n set_native_sql(L\"SELECT 2: SQLFetch\", STMT);\n\n if (ATEND) {\nwprintf(L\"SELECT 2: SQLFetch = ATEND\\n\");\n\n SQLFreeStmt(hstmt, SQL_DROP);\n CloseConnection();\n myexit(0);\n }\n odbc_checkerr(L\"SELECT 2: SQLFetch\", STMT, 0);\n\nwprintf(L\"SELECT 2: DATA FETCHED: strBindOut = <%s>\\n\", strBindOut);\n\n SQLFreeStmt(hstmt, 
SQL_DROP);\n\n//*************************************\n// SELECT 3 (bind using UNICODE STRING)\n//*************************************\n odbc_rc = SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt);\n odbc_checkerr(L\"SELECT 3: SQLAllocHandle\", STMT, SETCODE);\n\n wcscpy(strBindInUni, L\"record three\");\n\n odbc_rc = SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_WCHAR,\nSQL_WVARCHAR, sizeof(strBindInUni), 0, strBindInUni, 0, NULL);\n odbc_checkerr(L\"SELECT 3: SQLBindParameter\", STMT, SETCODE);\n\n odbc_rc = SQLExecDirect(hstmt, (SQLWCHAR *)L\"select col2 from testtable\nwhere col1 = ?\", SQL_NTS);\n odbc_checkerr(L\"SELECT 3: SQLExecDirect\", STMT, SETCODE);\n\n odbc_rc = SQLBindCol(hstmt, 1, SQL_C_WCHAR, strBindOut, sizeof(strBindOut),\n&lngCBInd);\n odbc_checkerr(L\"SELECT 3: SQLBindCol\", STMT, SETCODE);\n\n odbc_rc = SQLFetch(hstmt);\n set_native_sql(L\"SELECT 3: SQLFetch\", STMT);\n\n if (ATEND) {\nwprintf(L\"SELECT 3: SQLFetch = ATEND\\n\");\n\n SQLFreeStmt(hstmt, SQL_DROP);\n CloseConnection();\n myexit(0);\n }\n odbc_checkerr(L\"SELECT 3: SQLFetch\", STMT, 0);\n\nwprintf(L\"SELECT 3: DATA FETCHED: strBindOut = <%s>\\n\", strBindOut);\n\n SQLFreeStmt(hstmt, SQL_DROP);\n}\n//**************************************************************************\n******************\nvoid ChangeSession(wchar_t *session)\n{\n wchar_t strSQL[256] = {0};\n\n swprintf(strSQL, L\"SET SCHEMA = %s\", session);\n\nwprintf(L\"ChangeSession: session <%s>\\n\", session);\n\n odbc_rc = SQLAllocStmt(hdbc, &hstmt);\n odbc_checkerr(L\"ChangeSession: SQLAllocStmt\", STMT, SETCODE);\n\n odbc_rc = SQLExecDirect(hstmt, (SQLWCHAR *)strSQL, SQL_NTS);\n odbc_checkerr(L\"ChangeSession: SQLExecDirect\", STMT, SETCODE);\n\n SQLFreeStmt(hstmt, SQL_DROP);\n}\n//**************************************************************************\n******************\nvoid myexit(int num)\n{\n wchar_t s[2] = {0};\n _getws(s);\n\n 
exit(num);\n}\n//**************************************************************************\n******************\nlong set_native_sql(wchar_t *str, int handle_type)\n{\n wchar_t SqlState[6];\n SWORD MsgLen;\n\n//wprintf(L\"set_native_sql: IN: odbc_rc = %ld, Native_sql_code = %ld, Msg\n<%s>\\n\", odbc_rc, Native_sql_code, Msg);\n if (odbc_rc == SQL_SUCCESS || (DBP_SQLSERVER && odbc_rc ==\nSQL_SUCCESS_WITH_INFO))\n return Native_sql_code = SQL_SUCCESS;\n\n if (handle_type == STMT) {\n\n if (odbc_rc == SQLNOTFOUND)\n return Native_sql_code = SQLNOTFOUND;\n else {\n if (SQLGetDiagRec(\n SQL_HANDLE_STMT,\n hstmt,\n 1,\n SqlState,\n &Native_sql_code,\n Msg,\n SQL_MAX_MESSAGE_LENGTH - 1,\n &MsgLen) != SQL_SUCCESS) {\n // Should never occur...?\n wprintf(L\"STMT: (%s): ODBC produced an error but no error code could be\nfound (%s)\\n\", str, Msg);\n myexit(0);\n }\n }\n }\n else if (handle_type == DBC) {\n if (SQLGetDiagRec(\n SQL_HANDLE_DBC,\n hdbc,\n 1,\n SqlState,\n &Native_sql_code,\n Msg,\n SQL_MAX_MESSAGE_LENGTH - 1,\n &MsgLen) != SQL_SUCCESS) {\n // Should never occur...?\n wprintf(L\"DBC: (%s): ODBC produced an error but no error code could be\nfound.\", str);\n myexit(0);\n }\n }\n else {\n if (SQLGetDiagRec(\n SQL_HANDLE_ENV,\n henv,\n 1,\n SqlState,\n &Native_sql_code,\n Msg,\n SQL_MAX_MESSAGE_LENGTH - 1,\n &MsgLen) != SQL_SUCCESS) {\n // Should never occur...?\n wprintf(L\"ENV: (%s): ODBC produced an error but no error code could be\nfound.\", str);\n myexit(0);\n }\n }\n\n if (Native_sql_code == 0) {\n // We have an error but their is no\n // native sql code, so set to 1000.\n Native_sql_code = 1000;\n }\n\n Native_sql_code = -Native_sql_code;\n\n return Native_sql_code;\n}\n//**************************************************************************\n******************\nvoid odbc_checkerr(wchar_t *str, int stattype, int checktype)\n{\n//wprintf(L\"odbc_checkerr: odbc_rc = %ld\\n\", odbc_rc);\n\n if (odbc_rc == SQL_SUCCESS || ((DBP_SQLSERVER || DBP_DB2) && 
odbc_rc ==\nSQL_SUCCESS_WITH_INFO)) {\n Native_sql_code = SQL_SUCCESS;\n return;\n }\n\n if (checktype == SETCODE)\n set_native_sql(str, stattype);\n\n if (Native_sql_code == SQL_SUCCESS) return;\n\n wprintf(L\"ODBC ERROR:(%s) %ld (%s).\", str, Native_sql_code, Msg);\n\n //CloseConnection();\n\n myexit(0);\n}\n\n//--- ENDPROGRAM SOURCE\n\n\nAnd here is the output generated from my program running against the 3\ndatabases:\n\nSQL SERVER:\n\nCONNECTION STRING <Driver={SQL\nServer};SERVER=MYSEQUELSERVER;DATABASE=mydatabase;UID=me;PWD=me;UseProcForPr\nepare=0>\nSELECT 1: DATA FETCHED: strBindOut = <record one data a>\nSELECT 2: DATA FETCHED: strBindOut = <record two data a>\nSELECT 3: DATA FETCHED: strBindOut = <record three data a>\nCLOSING CONNECTION\n\nDB2:\n\nCONNECTION STRING <DRIVER={IBM DB2 ODBC\nDriver};UID=me;PWD=me;GRAPHIC=3;DBALIAS=MYALIAS;>\nChangeSession: session <efacdb>\nSELECT 1: DATA FETCHED: strBindOut = <record one data a>\nSELECT 2: DATA FETCHED: strBindOut = <record two data a>\nSELECT 3: DATA FETCHED: strBindOut = <record three data a>\nCLOSING CONNECTION\n\nPostgreSQL:\n\nCONNECTION STRING\n<DRIVER={PostgreSQL};UID=me;PWD=me;SERVER=MYPOSTSERVER;DATABASE=mydatabase;>\nSELECT 1: DATA FETCHED: strBindOut = <record one data a>\nSELECT 2: DATA FETCHED: strBindOut = <record two data a>\nset_native_sql: 01: Native_sql_code = 0, Msg <[Microsoft][ODBC Driver\nManager] SQL data type out of range>\nODBC ERROR:(SELECT 3: SQLBindParameter) -1000 ([Microsoft][ODBC Driver\nManager]\nSQL data type out of range).\n\n\nAs you can see I can succesfully use an ASCII character string for an INPUT\nparameter when binding but not a UNICODE character string.\nSurely PostgreSQL supports binding of UNICODE character strings ?\n\n\nThanks for any help on this.\nAndy\nahm@exel.co.uk\n\n\n",
"msg_date": "Thu, 18 Oct 2001 12:32:54 +0100",
"msg_from": "\"Andy Hallam\" <ahm@exel.co.uk>",
"msg_from_op": true,
"msg_subject": "ODBC SQLBindParameter and UNICODE strings"
},
{
"msg_contents": "Andy Hallam wrote:\n> \n> Apologies if you think this mail is a little long-winded but I want to be as\n> clear as possible on this.\n> \n> PostgreSQL - 7.1.3 (installed on Linux 2.4.2-2)\n> PSQLODBC.DLL - 07.01.0007\n> Visual C++ - 6.0\n> \n> I have a C++ app running on WINDOWS2000 and I am trying to use\n> SQLBindParamater with a unicode (wchar_t) variable.\n> \n> I installed postgreSQL using the following arguments:\n> \n> ./configure --enable-multibyte=UNICODE --enable-unicode-conversion --enable-\n> odbc\n> \n\n[snip]\n\n> \n> As you can see I can succesfully use an ASCII character string for an INPUT\n> parameter when binding but not a UNICODE character string.\n> Surely PostgreSQL supports binding of UNICODE character strings ?\n\nUnfortunately no. Psqlodbc driver doesn't support UNICODE(UCS-2)\nbinding currently. --enable-multibyte=UNICODE means the sever side\nsupport of UTF-8(not UCS-2) encoding. \n\nregards,\nHiroshi Inoue\n",
"msg_date": "Fri, 19 Oct 2001 12:14:09 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: ODBC SQLBindParameter and UNICODE strings"
},
{
"msg_contents": "Thanks for that.\n\nI'll have to work around this by extracting all character variable data and\n'hard coding' this into the SQL statement before I SQLExecute() the\nstatement. I had to do the same for ... Oracle (sorry for swearing).\n\nDo you (or anyone else for that matter) know if/when UNICODE binding will be\nimplemented in the Psqlodbc driver?\n\nAndy.\n\nbuilding the character data into the SQL statement for\n\"Hiroshi Inoue\" <Inoue@tpf.co.jp> wrote in message\nnews:3BCF9A80.F6DEF77C@tpf.co.jp...\n> Andy Hallam wrote:\n> >\n> > Apologies if you think this mail is a little long-winded but I want to\nbe as\n> > clear as possible on this.\n> >\n> > PostgreSQL - 7.1.3 (installed on Linux 2.4.2-2)\n> > PSQLODBC.DLL - 07.01.0007\n> > Visual C++ - 6.0\n> >\n> > I have a C++ app running on WINDOWS2000 and I am trying to use\n> > SQLBindParamater with a unicode (wchar_t) variable.\n> >\n> > I installed postgreSQL using the following arguments:\n> >\n> >\n./configure --enable-multibyte=UNICODE --enable-unicode-conversion --enable-\n> > odbc\n> >\n>\n> [snip]\n>\n> >\n> > As you can see I can succesfully use an ASCII character string for an\nINPUT\n> > parameter when binding but not a UNICODE character string.\n> > Surely PostgreSQL supports binding of UNICODE character strings ?\n>\n> Unfortunately no. Psqlodbc driver doesn't support UNICODE(UCS-2)\n> binding currently. --enable-multibyte=UNICODE means the sever side\n> support of UTF-8(not UCS-2) encoding.\n>\n> regards,\n> Hiroshi Inoue\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n\n",
"msg_date": "Fri, 19 Oct 2001 09:02:01 +0100",
"msg_from": "\"Andy Hallam\" <ahm@exel.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: ODBC SQLBindParameter and UNICODE strings"
}
] |
[
{
"msg_contents": "Accept an INTERVAL argument for SET TIME ZONE per SQL99.\n Modified the parser and the SET handlers to use full Node structures\n rather than simply a character string argument.\nI've implemented and committed changes to improve the feature set for\nINTERVAL, as well as making other bug fixes and improvements to other\ndate/time types. There is one more issue to address as a bug fix\nregarding precision and rounding for INTERVAL values for which precision\nhas been specified (it is possible to pick up some extraneous cruft in\nthe lsb of the floating point number when rounding).\n\nThe CVS log entry is below. initdb required (sorry!) and non-reference\ndate/time regression tests probably will need to be updated.\n\nAll regression tests pass.\n\n - Thomas\n\nImplement INTERVAL() YEAR TO MONTH (etc) syntax per SQL99.\n Does not yet accept the goofy string format that goes along with, but\n that should be easy to add as a bug fix now or feature improvement\n later.\nAccept an INTERVAL argument for SET TIME ZONE per SQL99.\n Modified the parser and the SET handlers to use full Node structures\n rather than simply a character string argument.\nImplement INTERVAL() YEAR TO MONTH (etc) syntax per SQL99.\n Does not yet accept the goofy string format that goes along with, but\n this should be fairly straight forward to fix now as a bug or later\n as a feature.\nImplement precision for the INTERVAL() type.\n Use the typmod mechanism for both of INTERVAL features.\nFix the INTERVAL syntax in the parser:\n opt_interval was in the wrong place.\nINTERVAL is now a reserved word, otherwise we get reduce/reduce errors.\nImplement an explicit date_part() function for TIMETZ.\n Should fix coersion problem with INTERVAL reported by Peter E.\nFix up some error messages for date/time types.\n Use all caps for type names within message.\nFix recently introduced side-effect bug disabling 'epoch' as a\nrecognized\n field for date_part() etc. Reported by Peter E. 
(??)\nBump catalog version number.\nRename \"microseconds\" current transaction time field\n from ...Msec to ...Usec. Duh!\ndate/time regression tests updated for reference platform, but a few\n changes will be necessary for others.\n",
"msg_date": "Thu, 18 Oct 2001 17:40:18 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "date/time improvements for 7.2"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> Implement precision for the INTERVAL() type.\n> Use the typmod mechanism for both of INTERVAL features.\n\nIf I could figure out what the typmod of an interval type is defined\nto be, I'd fix format_type() to display the type name properly so that\npg_dump would do the right thing. But it doesn't seem very well\ndocumented as to what the valid values are...\n\nAlso:\n\nregression=# create table foo(f1 interval(6));\nCREATE\nregression=# insert into foo values ('1 hour');\nERROR: AdjustIntervalForTypmod(): internal coding error\n\nwhich I think is because\n\n\t\tif (range == MASK(YEAR))\n\nshould be\n\n\t\telse if (range == MASK(YEAR))\n\nat line 384 of timestamp.c.\n\nAlso, you're going to have some problems with your plan to make\n0xFFFF in the high bits mean \"no range, but maybe a precision\",\nbecause there are a number of places that think that any typmod < 0\nis a dummy. I would strongly suggest that you arrange the coding\nof interval's typmod to follow that convention, rather than assume\nyou can ignore it. Perhaps use 0x7FFF (or zero...) to mean \"no range\",\nand make sure none of the bits that are used are the sign bit?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 18 Oct 2001 23:13:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: date/time improvements for 7.2 "
},
{
"msg_contents": "> > Implement precision for the INTERVAL() type.\n> > Use the typmod mechanism for both of INTERVAL features.\n> If I could figure out what the typmod of an interval type is defined\n> to be, I'd fix format_type() to display the type name properly so that\n> pg_dump would do the right thing. But it doesn't seem very well\n> documented as to what the valid values are...\n\nI tried to follow what seemed to be the conventions of the numeric data\ntype in putting the \"precision\" in the low 16 bits. 0xFFFF implies\n\"unspecified precision\". I reused some existing mask definitions for the\nfields within an interval, and plopped those into the high 16 bits, with\n0xFFFF << 16 implying that all fields are allowed. So \"typmod = -1\"\nimplies behavior compatible with the existing/former feature set.\n\nNot sure *where* this should be documented, since it is used in more\nthan one place. Suggestions?\n\n> ERROR: AdjustIntervalForTypmod(): internal coding error\n\nOops. You found the problem spot; I've got patches...\n\n> Also, you're going to have some problems with your plan to make\n> 0xFFFF in the high bits mean \"no range, but maybe a precision\",\n> because there are a number of places that think that any typmod < 0\n> is a dummy. I would strongly suggest that you arrange the coding\n> of interval's typmod to follow that convention, rather than assume\n> you can ignore it. Perhaps use 0x7FFF (or zero...) to mean \"no range\",\n> and make sure none of the bits that are used are the sign bit?\n\nWhat exactly does \"is a dummy\" mean? (outside of possible personal\nopinions ;) Are there places which decline to call a \"normalization\nroutine\" if typmod is less than zero, rather than equal to -1? I didn't\nnotice an effect such as that in my (limited) testing.\n\nbtw, in changing the convention to use 0x7FFF rather than 0xFFFF, I\nfound another bug, where I transposed the two subfields for one case in\ngram.y. Will also be fixed.\n\n - Thomas\n",
"msg_date": "Fri, 19 Oct 2001 03:45:03 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Re: date/time improvements for 7.2"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> Are there places which decline to call a \"normalization\n> routine\" if typmod is less than zero, rather than equal to -1?\n\nThe format_type routines think that typmod < 0 means \"no typmod\nspecified\". I am not sure where else this may be true, but I'm\npretty sure that that behavior was copied from elsewhere. We could\ntry to tighten up the convention to be that only exactly -1 means\n\"unspecified\", but I'm worried about what code we might miss.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 19 Oct 2001 01:20:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: date/time improvements for 7.2 "
}
] |
[
{
"msg_contents": "peter=# drop function test();\nDROP\n\npeter=# create or replace function test() returns int as 'return 1;' language plperl;\nCREATE\npeter=# select test();\n test\n------\n 1\n(1 row)\n\npeter=# create or replace function test() returns int as 'return 2;' language plperl;\nCREATE\npeter=# select test();\n test\n------\n 1\n(1 row)\n\nThe same can be observed with PL/Tcl and PL/Python, but not with PL/pgSQL\nand plain SQL. Obviously, there is some caching going on, and a session\nrestart fixes everything, but the failure with this plain and simple test\ncase makes me wonder about this new feature...\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Thu, 18 Oct 2001 19:41:45 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Create or replace function doesn't work so well"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> The same can be observed with PL/Tcl and PL/Python, but not with PL/pgSQL\n> and plain SQL. Obviously, there is some caching going on, and a session\n> restart fixes everything, but the failure with this plain and simple test\n> case makes me wonder about this new feature...\n\nHmm. I fixed plplgsql a few days ago, but I was unaware that the other\nPLs cached anything. Will look.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 18 Oct 2001 16:15:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Create or replace function doesn't work so well "
},
{
"msg_contents": "Peter,\n\nOn Thu, 18 Oct 2001, Peter Eisentraut wrote:\n\n> peter=# drop function test();\n> DROP\n> \n\n[snip]\n\n> The same can be observed with PL/Tcl and PL/Python, but not with PL/pgSQL\n> and plain SQL. Obviously, there is some caching going on, and a session\n> restart fixes everything, but the failure with this plain and simple test\n> case makes me wonder about this new feature...\n> \n\nI cannot recreate this on my devel system with plain SQL\n\ntemplate1=# drop function test();\nDROP\ntemplate1=# create or replace function test() returns int as 'select 1;'\nlanguage 'sql';\nCREATE\ntemplate1=# select test();\n test\n------\n 1\n(1 row)\n\ntemplate1=# create or replace function test() returns int as 'select 2;'\nlanguage 'sql';\nCREATE\ntemplate1=# select test();\n test\n------\n 2\n(1 row)\n\n\nHowever,\n\ntemplate1=# create or replace function test() returns int as 'begin\ntemplate1'# return ''1'';\ntemplate1'# end;\ntemplate1'# ' language 'plpgsql';\nCREATE\ntemplate1=# select test();\n test\n------\n 1\n(1 row)\n\n\ntemplate1=# create or replace function test() returns int as 'begin\ntemplate1'# return ''2'';\ntemplate1'# end;\ntemplate1'# ' language 'plpgsql';\nCREATE\ntemplate1=# select test();\n test\n------\n 1\n(1 row)\n\n\nYet,\n\ntemplate1=# create or replace function test() returns int as 'select 3'\nlanguage 'sql';\nCREATE\ntemplate1=# select test();\n test\n------\n 3\n(1 row)\n\nSo, it must be caching at of procedural (C??) functions. Apologies for not\ntesting this on all languages -- I presumed what was good for SQL would be\ngood for PLpgSQL ;).\n\nI'll look into further but.\n\nGavin\n\n",
"msg_date": "Fri, 19 Oct 2001 08:00:29 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Create or replace function doesn't work so well"
},
{
"msg_contents": "\nHas this been resolved?\n\n\n---------------------------------------------------------------------------\n\n> Peter,\n> \n> On Thu, 18 Oct 2001, Peter Eisentraut wrote:\n> \n> > peter=# drop function test();\n> > DROP\n> > \n> \n> [snip]\n> \n> > The same can be observed with PL/Tcl and PL/Python, but not with PL/pgSQL\n> > and plain SQL. Obviously, there is some caching going on, and a session\n> > restart fixes everything, but the failure with this plain and simple test\n> > case makes me wonder about this new feature...\n> > \n> \n> I cannot recreate this on my devel system with plain SQL\n> \n> template1=# drop function test();\n> DROP\n> template1=# create or replace function test() returns int as 'select 1;'\n> language 'sql';\n> CREATE\n> template1=# select test();\n> test\n> ------\n> 1\n> (1 row)\n> \n> template1=# create or replace function test() returns int as 'select 2;'\n> language 'sql';\n> CREATE\n> template1=# select test();\n> test\n> ------\n> 2\n> (1 row)\n> \n> \n> However,\n> \n> template1=# create or replace function test() returns int as 'begin\n> template1'# return ''1'';\n> template1'# end;\n> template1'# ' language 'plpgsql';\n> CREATE\n> template1=# select test();\n> test\n> ------\n> 1\n> (1 row)\n> \n> \n> template1=# create or replace function test() returns int as 'begin\n> template1'# return ''2'';\n> template1'# end;\n> template1'# ' language 'plpgsql';\n> CREATE\n> template1=# select test();\n> test\n> ------\n> 1\n> (1 row)\n> \n> \n> Yet,\n> \n> template1=# create or replace function test() returns int as 'select 3'\n> language 'sql';\n> CREATE\n> template1=# select test();\n> test\n> ------\n> 3\n> (1 row)\n> \n> So, it must be caching at of procedural (C??) functions. Apologies for not\n> testing this on all languages -- I presumed what was good for SQL would be\n> good for PLpgSQL ;).\n> \n> I'll look into further but.\n> \n> Gavin\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Nov 2001 17:41:30 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Create or replace function doesn't work so well"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Has this been resolved?\n\nYes.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 06 Nov 2001 17:59:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Create or replace function doesn't work so well "
}
] |
[
{
"msg_contents": "... work for you with code built from the cvs tip? I did an update and\nbuild tonight and see\n\nmyst$ postmaster -i\npostgres: invalid option -- r\nUsage:\n postgres -boot [-d] [-D datadir] [-F] [-o file] [-x num] dbname\n -d debug mode\n -D datadir data directory\n -F turn off fsync\n -o file send debug output to file\n -x num internal use\nDEBUG: startup process 6818 exited with status 1; aborting startup\n\nwhereas omitting the \"-i\" seems to work better:\n\nmyst$ postmaster\nDEBUG: database system was shut down at 2001-10-19 06:13:38 UTC\nDEBUG: checkpoint record is at 0/112F54\nDEBUG: redo record is at 0/112F54; undo record is at 0/0; shutdown TRUE\nDEBUG: next transaction id: 89; next oid: 16556\nDEBUG: database system is ready\n\n\nI didn't see this symptom this morning afaict. Any ideas??\n\n - Thomas\n",
"msg_date": "Fri, 19 Oct 2001 06:19:43 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Does \"postmaster -i\"..."
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> ... work for you with code built from the cvs tip? I did an update and\n> build tonight and see\n\n> myst$ postmaster -i\n> postgres: invalid option -- r\n\nHmm. I was fooling with postmaster.c & postgres.c last night.\nI didn't think I touched parameter parsing --- and my test setup\ndoes use -i --- but I'll take another look :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 19 Oct 2001 10:21:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Does \"postmaster -i\"... "
},
{
"msg_contents": "> ... work for you with code built from the cvs tip? I did an update and\n> build tonight and see\n\nA bit more information: an unadorned \"-i\" fails:\n\nmyst$ postmaster -i\npostgres: invalid option -- r\nUsage:\n postgres -boot [-d] [-D datadir] [-F] [-o file] [-x num] dbname\n -d debug mode\n -D datadir data directory\n -F turn off fsync\n -o file send debug output to file\n -x num internal use\nDEBUG: startup process 7172 exited with status 1; aborting startup\n\n\nBut no arguments succeeds:\n\nmyst$ postmaster \nDEBUG: database system was shut down at 2001-10-19 13:38:20 UTC\nDEBUG: checkpoint record is at 0/1191A4\nDEBUG: redo record is at 0/1191A4; undo record is at 0/0; shutdown TRUE\nDEBUG: next transaction id: 98; next oid: 16557\nDEBUG: database system is ready\nDEBUG: fast shutdown request\nDEBUG: shutting down\nDEBUG: database system is shut down\n\n\nAnd multiple arguments succeeds (without damaging the other arguments):\n\nmyst$ postmaster -i -p 12000\nDEBUG: database system was shut down at 2001-10-19 13:39:08 UTC\nDEBUG: checkpoint record is at 0/1191E4\nDEBUG: redo record is at 0/1191E4; undo record is at 0/0; shutdown TRUE\nDEBUG: next transaction id: 98; next oid: 16557\nDEBUG: database system is ready\nDEBUG: fast shutdown request\nDEBUG: shutting down\nDEBUG: database system is shut down\n\n\nI've done a \"make clean all install\", and did not see this symptom\nearlier (I've been building and running quite often the last few days\nwith updated cvs sources).\n\n - Thomas\n",
"msg_date": "Fri, 19 Oct 2001 14:24:23 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Re: Does \"postmaster -i\"..."
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> ... work for you with code built from the cvs tip? I did an update and\n> build tonight and see\n\n> myst$ postmaster -i\n> postgres: invalid option -- r\n\nI just rebuilt from cvs tip, and I don't see any such problem...\nanyone else?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 19 Oct 2001 11:50:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Does \"postmaster -i\"... "
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> A bit more information: an unadorned \"-i\" fails:\n> myst$ postmaster -i\n> postgres: invalid option -- r\n> But no arguments succeeds:\n> myst$ postmaster \n> And multiple arguments succeeds (without damaging the other arguments):\n> myst$ postmaster -i -p 12000\n\nAll three of these cases work just fine for me. Maybe some platform\ndependency has snuck in? Hard to see how though. It looks like the\nfailure is occurring when the postmaster launches the xlog startup\nsubprocess. The building of the argument list for that subprocess is\nfixed and not dependent on what you give to the postmaster (see\nSSDataBase in postmaster.c).\n\nHmm... I wonder if the argument list itself is good, and the parsing is\nwhat's broken. We're using getopt() for that, and there's an ugliness\nin that getopt has static state that has to be reset (since it's already\nbeen used once to parse the postmaster's arglist). We do \"optind = 1\"\nin SSDataBase, but maybe on your platform, we need to do more than that\nto point getopt at the correct arglist. Any ideas?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 19 Oct 2001 13:42:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Does \"postmaster -i\"... "
},
{
"msg_contents": "> We do \"optind = 1\"\n> in SSDataBase, but maybe on your platform, we need to do more than that\n> to point getopt at the correct arglist. Any ideas?\n\nAh ... I betcha your platform needs optreset = 1. Fix coming ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 19 Oct 2001 13:52:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Does \"postmaster -i\"... "
},
{
"msg_contents": "...\n> All three of these cases work just fine for me. Maybe some platform\n> dependency has snuck in? Hard to see how though. It looks like the\n> failure is occurring when the postmaster launches the xlog startup\n> subprocess. The building of the argument list for that subprocess is\n> fixed and not dependent on what you give to the postmaster (see\n> SSDataBase in postmaster.c).\n\nNo ideas at all; this stuff has always worked for me (and everyone\nelse). Let's wait and see if anyone else notices a problem; in the\nmeantime I have a workaround by giving it more than one argument per one\nof the examples.\n\nI *haven't* blown away my tree and done a build from scratch, but istm\nthat is not something that would magically get things working.\n\nHmm. I'll try a distclean and see if that helps; maybe configure has\nchanged a bit and the cache is messing me up??\n\n - Thomas\n",
"msg_date": "Fri, 19 Oct 2001 17:58:23 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Re: Does \"postmaster -i\"..."
},
{
"msg_contents": "> Ah ... I betcha your platform needs optreset = 1. Fix coming ...\n\nI've just committed this. Please update and let me know if it helps.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 19 Oct 2001 14:20:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Does \"postmaster -i\"... "
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> A bit more information: an unadorned \"-i\" fails:\n\nI believe this is fixed now.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 Oct 2001 19:13:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Does \"postmaster -i\"... "
},
{
"msg_contents": "> > A bit more information: an unadorned \"-i\" fails:\n> I believe this is fixed now.\n\nSeems to be, on my Linux box. Thanks for tracking it down...\n\n - Thomas\n",
"msg_date": "Tue, 23 Oct 2001 01:34:00 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Re: Does \"postmaster -i\"..."
}
] |
[
{
"msg_contents": "\n> Matthew Hagerty <mhagerty@voyager.net> writes:\n> > but I also want to check the connection *before* submitting a\nquery...\n\nIf you mean directly before the query, then forget it, as Tom already\nsaid :-)\n\n> This strikes me as utterly pointless. You'll need to be able to\nrecover\n> from query failure anyway, so what's the value of testing beforehand?\n> Send the query and see if it works or not.\n\nI see a value in checking connection status before you start doing\nloads of local work after a long idle time, that results in a query.\nIn this situation I guess it is good enough to send an empty query\neven if it takes a little.\n\nIn our projects we recv 0 bytes from the socket every x seconds\nduring long idle periods to detect connection problems early.\nWhile it is not 100% reliable (since it does not transfer\nanything over the network) it does detect some common error situations.\n\nI am not 100% sure, but I think PQstatus could be patched to do that.\n\nAndreas\n",
"msg_date": "Fri, 19 Oct 2001 12:04:39 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: PQstatus() detect change in connection... "
}
] |
[
{
"msg_contents": "Any idea to get a human readable list with column descriptions like\ntype,size,key,default,null.\nIt would be nice if it would look simular to the mysql variant:\n\nmysql> describe employee;\n+-----------+----------+------+-----+---------+----------------+\n| Field | Type | Null | Key | Default | Extra |\n+-----------+----------+------+-----+---------+----------------+\n| Id | int(11) | | PRI | NULL | auto_increment |\n| FirstName | char(30) | | MUL | | |\n| LastName | char(30) | | | | |\n| Infix | char(10) | YES | | NULL | |\n| Address1 | char(30) | YES | | NULL | |\n| PostCode | char(10) | YES | | NULL | |\n| Town | int(11) | YES | | NULL | |\n+-----------+----------+------+-----+---------+----------------+\n\nCheers, Ron.\n",
"msg_date": "Fri, 19 Oct 2001 13:22:38 +0200",
"msg_from": "\"Ron de Jong\" <radejong@planet.nl>",
"msg_from_op": true,
"msg_subject": "Is there no \"DESCRIBE <TABLE>;\" on PGSQL? help!!!"
},
{
"msg_contents": "Hello\n\n\npsql <dbname>\n\n\\dt employee\n\nShould do the trick\n\n\n\n\n\n\"Ron de Jong\" <radejong@planet.nl> wrote in\nnews:9qp2et$i7q$1@reader05.wxs.nl: \n\n> Any idea to get a human readable list with column descriptions like\n> type,size,key,default,null.\n> It would be nice if it would look simular to the mysql variant:\n> \n> mysql> describe employee;\n> +-----------+----------+------+-----+---------+----------------+\n>| Field | Type | Null | Key | Default | Extra |\n>| +-----------+----------+------+-----+---------+----------------+ Id \n>| | int(11) | | PRI | NULL | auto_increment | FirstName |\n>| char(30) | | MUL | | | LastName |\n>| char(30) | | | | | Infix |\n>| char(10) | YES | | NULL | | Address1 |\n>| char(30) | YES | | NULL | | PostCode |\n>| char(10) | YES | | NULL | | Town | int(11)\n>| | YES | | NULL | |\n>| +-----------+----------+------+-----+---------+----------------+ \n> \n> Cheers, Ron.\n> \n\n\n Posted Via Usenet.com Premium Usenet Newsgroup Services\n----------------------------------------------------------\n ** SPEED ** RETENTION ** COMPLETION ** ANONYMITY **\n---------------------------------------------------------- \n http://www.usenet.com\n",
"msg_date": "19 Oct 2001 07:54:15 -0500",
"msg_from": "None <None@news.tht.net>",
"msg_from_op": false,
"msg_subject": "Re: Is there no \"DESCRIBE <TABLE>;\" on PGSQL? help!!!"
},
{
"msg_contents": "\npsql \\d command. \n\n> Any idea to get a human readable list with column descriptions like\n> type,size,key,default,null.\n> It would be nice if it would look simular to the mysql variant:\n> \n> mysql> describe employee;\n> +-----------+----------+------+-----+---------+----------------+\n> | Field | Type | Null | Key | Default | Extra |\n> +-----------+----------+------+-----+---------+----------------+\n> | Id | int(11) | | PRI | NULL | auto_increment |\n> | FirstName | char(30) | | MUL | | |\n> | LastName | char(30) | | | | |\n> | Infix | char(10) | YES | | NULL | |\n> | Address1 | char(30) | YES | | NULL | |\n> | PostCode | char(10) | YES | | NULL | |\n> | Town | int(11) | YES | | NULL | |\n> +-----------+----------+------+-----+---------+----------------+\n> \n> Cheers, Ron.\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 19 Oct 2001 10:57:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Is there no \"DESCRIBE <TABLE>;\" on PGSQL? help!!!"
},
{
"msg_contents": "On Fri, 19 Oct 2001, Ron de Jong wrote:\n\n> Any idea to get a human readable list with column descriptions like\n> type,size,key,default,null.\n> It would be nice if it would look simular to the mysql variant:\n>\n> mysql> describe employee;\n> +-----------+----------+------+-----+---------+----------------+\n> | Field | Type | Null | Key | Default | Extra |\n> +-----------+----------+------+-----+---------+----------------+\n> | Id | int(11) | | PRI | NULL | auto_increment |\n> | FirstName | char(30) | | MUL | | |\n> | LastName | char(30) | | | | |\n> | Infix | char(10) | YES | | NULL | |\n> | Address1 | char(30) | YES | | NULL | |\n> | PostCode | char(10) | YES | | NULL | |\n> | Town | int(11) | YES | | NULL | |\n> +-----------+----------+------+-----+---------+----------------+\n\nEasily done -- look at the \\d commands in psql or \\h to get help\nin psql. This is a FAQ -- STFW.\n\nBTW, the -hackers list is for tricky questions requiring experienced\ndeveloper help, or for discussion among the gurus. Please post\ngeneral questions to pgsql-general or pgsql-novice and re-post\nto pgsql-hackers only if you get no response w/in a week.\n\nHTH,\n-- \n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nIndependent Knowledge Management Consultant\n\n",
"msg_date": "Fri, 19 Oct 2001 11:37:37 -0400 (EDT)",
"msg_from": "Joel Burton <joel@joelburton.com>",
"msg_from_op": false,
"msg_subject": "Re: Is there no \"DESCRIBE <TABLE>;\" on PGSQL? help!!!"
},
{
"msg_contents": "Not even close!\n\n<None> wrote in message news:3bd02277$1_4@Usenet.com...\n> Hello\n>\n>\n> psql <dbname>\n>\n> \\dt employee\n>\n> Should do the trick\n>\n>\n>\n>\n>\n> \"Ron de Jong\" <radejong@planet.nl> wrote in\n> news:9qp2et$i7q$1@reader05.wxs.nl:\n>\n> > Any idea to get a human readable list with column descriptions like\n> > type,size,key,default,null.\n> > It would be nice if it would look simular to the mysql variant:\n> >\n> > mysql> describe employee;\n> > +-----------+----------+------+-----+---------+----------------+\n> >| Field | Type | Null | Key | Default | Extra |\n> >| +-----------+----------+------+-----+---------+----------------+ Id\n> >| | int(11) | | PRI | NULL | auto_increment | FirstName |\n> >| char(30) | | MUL | | | LastName |\n> >| char(30) | | | | | Infix |\n> >| char(10) | YES | | NULL | | Address1 |\n> >| char(30) | YES | | NULL | | PostCode |\n> >| char(10) | YES | | NULL | | Town | int(11)\n> >| | YES | | NULL | |\n> >| +-----------+----------+------+-----+---------+----------------+\n> >\n> > Cheers, Ron.\n> >\n>\n>\n> Posted Via Usenet.com Premium Usenet Newsgroup Services\n> ----------------------------------------------------------\n> ** SPEED ** RETENTION ** COMPLETION ** ANONYMITY **\n> ----------------------------------------------------------\n> http://www.usenet.com\n\n\n",
"msg_date": "Sat, 20 Oct 2001 17:24:39 +0200",
"msg_from": "\"Ron de Jong\" <radejong@planet.nl>",
"msg_from_op": true,
"msg_subject": "Re: Is there no \"DESCRIBE <TABLE>;\" on PGSQL? help!!!"
},
{
"msg_contents": "On Sat, Oct 20, 2001 at 05:24:39PM +0200, Ron de Jong wrote:\n> Not even close!\n> \n\nOh? What's it missing? the \\dt display in psql has all the information\nin your mythical table versionbelow, just organized a little differently,\ndoesn't it?\n\nParticularly on the hackers list, if there's a feature you think is \nlacking on PostgreSQL, then _describe_ it (pun intended)\n\nRoss\n\n> <None> wrote in message news:3bd02277$1_4@Usenet.com...\n> > Hello\n> >\n> > psql <dbname>\n> >\n> > \\dt employee\n> >\n> > Should do the trick\n> >\n> >\n> > \"Ron de Jong\" <radejong@planet.nl> wrote in\n> > news:9qp2et$i7q$1@reader05.wxs.nl:\n> >\n> > > Any idea to get a human readable list with column descriptions like\n> > > type,size,key,default,null.\n> > > It would be nice if it would look simular to the mysql variant:\n> > >\n> > > mysql> describe employee;\n> > > +-----------+----------+------+-----+---------+----------------+\n> > >| Field | Type | Null | Key | Default | Extra |\n> > >| +-----------+----------+------+-----+---------+----------------+ Id\n> > >| | int(11) | | PRI | NULL | auto_increment | FirstName |\n> > >| char(30) | | MUL | | | LastName |\n> > >| char(30) | | | | | Infix |\n> > >| char(10) | YES | | NULL | | Address1 |\n> > >| char(30) | YES | | NULL | | PostCode |\n> > >| char(10) | YES | | NULL | | Town | int(11)\n> > >| | YES | | NULL | |\n> > >| +-----------+----------+------+-----+---------+----------------+\n> > >\n> > > Cheers, Ron.\n> > >\n> >\n> >\n> > Posted Via Usenet.com Premium Usenet Newsgroup Services\n> > ----------------------------------------------------------\n> > ** SPEED ** RETENTION ** COMPLETION ** ANONYMITY **\n> > ----------------------------------------------------------\n> > http://www.usenet.com\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \nRoss Reedstrom, Ph.D. reedstrm@rice.edu\nExecutive Director phone: 713-348-6166\nGulf Coast Consortium for Bioinformatics fax: 713-348-6182\nRice University MS-39\nHouston, TX 77005\n",
"msg_date": "Tue, 23 Oct 2001 13:25:33 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: Is there no \"DESCRIBE <TABLE>;\" on PGSQL? help!!!"
},
{
"msg_contents": ">>>>> \"Ron\" == Ron de Jong <radejong@planet.nl> writes:\n\n Ron> Any idea to get a human readable list with column\n Ron> descriptions like type,size,key,default,null.\n\n Ron> It would be nice if it would look simular to the mysql\n Ron> variant:\n\nYou'll need to write your own query to get it to look like mysql.\n>From psql, you can do\n\n \\d employee\n\nroland\n-- \n\t\t PGP Key ID: 66 BC 3B CD\nRoland B. Roberts, PhD RL Enterprises\nroland@rlenter.com 76-15 113th Street, Apt 3B\nroland@astrofoto.org Forest Hills, NY 11375\n",
"msg_date": "23 Oct 2001 16:22:13 -0400",
"msg_from": "Roland Roberts <roland@astrofoto.org>",
"msg_from_op": false,
"msg_subject": "Re: Is there no \"DESCRIBE <TABLE>;\" on PGSQL? help!!!"
},
{
"msg_contents": "Reply.\n> On Sat, Oct 20, 2001 at 05:24:39PM +0200, Ron de Jong wrote:\n> > Not even close!\n> > \n> \n> Oh? What's it missing? the \\dt display in psql has all the information\n> in your mythical table versionbelow, just organized a little differently,\n> doesn't it?\n> \n> Particularly on the hackers list, if there's a feature you think is \n> lacking on PostgreSQL, then _describe_ it (pun intended)\n> \n\nagreed.\nbesides, what he want can be obtained by making queries to the system tables.\njust have to look through them a bit.\n\n> Ross\n> \n> > <None> wrote in message news:3bd02277$1_4@Usenet.com...\n> > > Hello\n> > >\n> > > psql <dbname>\n> > >\n> > > \\dt employee\n> > >\n> > > Should do the trick\n> > >\n> > >\n> > > \"Ron de Jong\" <radejong@planet.nl> wrote in\n> > > news:9qp2et$i7q$1@reader05.wxs.nl:\n> > >\n> > > > Any idea to get a human readable list with column descriptions like\n> > > > type,size,key,default,null.\n> > > > It would be nice if it would look simular to the mysql variant:\n> > > >\n> > > > mysql> describe employee;\n> > > > +-----------+----------+------+-----+---------+----------------+\n> > > >| Field | Type | Null | Key | Default | Extra |\n> > > >| +-----------+----------+------+-----+---------+----------------+ Id\n> > > >| | int(11) | | PRI | NULL | auto_increment | FirstName |\n> > > >| char(30) | | MUL | | | LastName |\n> > > >| char(30) | | | | | Infix |\n> > > >| char(10) | YES | | NULL | | Address1 |\n> > > >| char(30) | YES | | NULL | | PostCode |\n> > > >| char(10) | YES | | NULL | | Town | int(11)\n> > > >| | YES | | NULL | |\n> > > >| +-----------+----------+------+-----+---------+----------------+\n> > > >\n> > > > Cheers, Ron.\n> > > >\n> > >\n> > >\n> > > Posted Via Usenet.com Premium Usenet Newsgroup Services\n> > > ----------------------------------------------------------\n> > > ** SPEED ** RETENTION ** COMPLETION ** ANONYMITY **\n> > > ----------------------------------------------------------\n> > > http://www.usenet.com\n> > \n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n\n\n-- \nICQ: 15605359 Bicho\n =^..^=\nFirst, they ignore you. Then they laugh at you. Then they fight you. Then you win. Mahatma Gandhi.\n........Por que no pensaran los hombres como los animales? Pink Panther........\n-------------------------------気検体の一致------------------------------------\n暑さ寒さも彼岸まで。\nアン アン アン とっても大好き\n\n",
"msg_date": "Tue, 23 Oct 2001 19:36:50 -0600",
"msg_from": "David Eduardo Gomez Noguera <davidgn@servidor.unam.mx>",
"msg_from_op": false,
"msg_subject": "Re: Is there no \"DESCRIBE <TABLE>;\" on PGSQL? help!!!"
},
{
"msg_contents": "Hello \n\nAs the person who gave the original suggestion, I'd like to say that if you \ngave some more information, we may be able to help. What do you see when you \ndo \\dt ?\n\nWhen I do it, I see more or less exactly what you report in mysql.\n\n\n\"Ron de Jong\" <radejong@planet.nl> wrote in\nnews:9qs50m$pdf$1@reader07.wxs.nl: \n\n> Not even close!\n> \n> <None> wrote in message news:3bd02277$1_4@Usenet.com...\n>> Hello\n>>\n>>\n>> psql <dbname>\n>>\n>> \\dt employee\n>>\n>> Should do the trick\n>>\n>>\n>>\n>>\n>>\n>> \"Ron de Jong\" <radejong@planet.nl> wrote in\n>> news:9qp2et$i7q$1@reader05.wxs.nl: \n>>\n>> > Any idea to get a human readable list with column descriptions like\n>> > type,size,key,default,null. It would be nice if it would look\n>> > simular to the mysql variant: \n>> >\n>> > mysql> describe employee;\n>> > +-----------+----------+------+-----+---------+----------------+\n>> >| Field | Type | Null | Key | Default | Extra |\n>> >| +-----------+----------+------+-----+---------+----------------+ Id\n>> >| | int(11) | | PRI | NULL | auto_increment | FirstName\n>> >| | | \n>> >| char(30) | | MUL | | | LastName |\n>> >| char(30) | | | | | Infix |\n>> >| char(10) | YES | | NULL | | Address1 |\n>> >| char(30) | YES | | NULL | | PostCode |\n>> >| char(10) \n>> >| | YES | | NULL | | Town | int(11) \n>> >| | YES | | NULL | |\n>> >| | +-----------+----------+------+-----+---------+----------------+\n>> >\n>> > Cheers, Ron.\n>> >\n\n> \n> \n\n\n Posted Via Usenet.com Premium Usenet Newsgroup Services\n----------------------------------------------------------\n ** SPEED ** RETENTION ** COMPLETION ** ANONYMITY **\n---------------------------------------------------------- \n http://www.usenet.com\n",
"msg_date": "26 Oct 2001 04:22:08 -0500",
"msg_from": "tweekie <None@news.tht.net>",
"msg_from_op": false,
"msg_subject": "Re: Is there no \"DESCRIBE <TABLE>;\" on PGSQL? help!!!"
}
] |
[
{
"msg_contents": "---------- Forwarded message ----------\nDate: Fri, 19 Oct 2001 08:22:46 -0600\nFrom: David Eduardo Gomez Noguera <davidgn@servidor.unam.mx>\nTo: \"Ron de Jong\" <Ron.Antispam@news.tht.net>\nCc: davidgn@servidor.unam.mx\nSubject: Re: [HACKERS] Is there no \"DESCRIBE <TABLE>;\" on PGSQL? help!!!\n\non psql, do \\? there are a lot of commands that let you do it:\n\n\\l (this list databases)\ndabicho=# \\l\n List of databases\n Database | Owner | Encoding \n-------------+----------+-----------\n agenda | dabicho | SQL_ASCII\n cele | dabicho | SQL_ASCII\n dabicho | dabicho | SQL_ASCII\n diccionario | dabicho | SQL_ASCII\n imagenes | dabicho | SQL_ASCII\n libros | dabicho | SQL_ASCII\n mp3 | dabicho | SQL_ASCII\n postgres | postgres | SQL_ASCII\n template0 | postgres | SQL_ASCII\n template1 | postgres | SQL_ASCII\n(10 rows)\n\nmp3=# \\d (this list tables on the current db)\n List of relations\n Name | Type | Owner \n----------------+----------+---------\n album | table | dabicho\n album_id_seq | sequence | dabicho\n artista | table | dabicho\n artista_id_seq | sequence | dabicho\n dirpath | table | dabicho\n dirpath_id_seq | sequence | dabicho\n genero | table | dabicho\n genero_id_seq | sequence | dabicho\n mp3 | table | dabicho\n mp3_id_seq | sequence | dabicho\n pga_forms | table | dabicho\n pga_layout | table | dabicho\n pga_queries | table | dabicho\n pga_reports | table | dabicho\n pga_schema | table | dabicho\n pga_scripts | table | dabicho\n\n\nmp3=# \\d mp3 (this describes a table (mp3)\n Table \"mp3\"\n Attribute | Type | Modifier \n---------------+-------------------+------------------------------------------------\n id | integer | not null default nextval('\"mp3_id_seq\"'::text)\n fk_dirpath_id | integer | not null\n filename | character varying | not null\n titulo | text | not null default 'unknown'\n fk_artista_id | integer | not null default 1\n fk_album_id | integer | not null default 1\n comentario | text | not null default 'none'\n 
year | integer | default 2001\n genero | smallint | not null default 1\nIndices: mp3_fk_dirpath_id_key,\n mp3_pkey\n\nwith \\pset you can set output format.\n\nmp3=# \\pset expanded\nExpanded display is on.\nmp3=# \\d mp3\nTable \"mp3\"\n-[ RECORD 1 ]---------------------------------------------\nAttribute | id\nType | integer\nModifier | not null default nextval('\"mp3_id_seq\"'::text)\n-[ RECORD 2 ]---------------------------------------------\nAttribute | fk_dirpath_id\nType | integer\nModifier | not null\n-[ RECORD 3 ]---------------------------------------------\n....\nthere are many combinations\nmp3=# \\pset border 2\nBorder style is 2.\nmp3=# \\d mp3\n Table \"mp3\"\n+---------------+-------------------+------------------------------------------------+\n| Attribute | Type | Modifier |\n+---------------+-------------------+------------------------------------------------+\n| id | integer | not null default nextval('\"mp3_id_seq\"'::text) |\n| fk_dirpath_id | integer | not null |\n| filename | character varying | not null |\n| titulo | text | not null default 'unknown' |\n| fk_artista_id | integer | not null default 1 |\n| fk_album_id | integer | not null default 1 |\n| comentario | text | not null default 'none' |\n| year | integer | default 2001 |\n| genero | smallint | not null default 1 |\n+---------------+-------------------+------------------------------------------------+\nIndices: mp3_fk_dirpath_id_key,\n mp3_pkey\n\npretty much the same, and fairly human readable to me. 
(although not everything sorted in columns, i guess you could do querys to the system tables to get that, or use awk to get the bits you want =) )\nI just this the postgres team has don an excelent work so far.\n\nReply.\n> Any idea to get a human readable list with column descriptions like\n> type,size,key,default,null.\n> It would be nice if it would look simular to the mysql variant:\n> \n> mysql> describe employee;\n> +-----------+----------+------+-----+---------+----------------+\n> | Field | Type | Null | Key | Default | Extra |\n> +-----------+----------+------+-----+---------+----------------+\n> | Id | int(11) | | PRI | NULL | auto_increment |\n> | FirstName | char(30) | | MUL | | |\n> | LastName | char(30) | | | | |\n> | Infix | char(10) | YES | | NULL | |\n> | Address1 | char(30) | YES | | NULL | |\n> | PostCode | char(10) | YES | | NULL | |\n> | Town | int(11) | YES | | NULL | |\n> +-----------+----------+------+-----+---------+----------------+\n> \n> Cheers, Ron.\n> \n> \n> \n\n\n\n-- \nICQ: 15605359 Bicho\n =^..^=\nFirst, they ignore you. Then they laugh at you. Then they fight you. Then you win. Mahatma Gandhi.\n........Por que no pensaran los hombres como los animales? Pink Panther........\n-------------------------------気検体の一致------------------------------------\n暑さ寒さも彼岸まで。\nアン アン アン とっても大好き\n\n> on psql, do \\? 
there are a lot of commands that let you do it:\n> \n> \\l (this list databases)\n> dabicho=# \\l\n> List of databases\n> Database | Owner | Encoding \n> -------------+----------+-----------\n> agenda | dabicho | SQL_ASCII\n> cele | dabicho | SQL_ASCII\n> dabicho | dabicho | SQL_ASCII\n> diccionario | dabicho | SQL_ASCII\n> imagenes | dabicho | SQL_ASCII\n> libros | dabicho | SQL_ASCII\n> mp3 | dabicho | SQL_ASCII\n> postgres | postgres | SQL_ASCII\n> template0 | postgres | SQL_ASCII\n> template1 | postgres | SQL_ASCII\n> (10 rows)\n> \n> mp3=# \\d (this list tables on the current db)\n> List of relations\n> Name | Type | Owner \n> ----------------+----------+---------\n> album | table | dabicho\n> album_id_seq | sequence | dabicho\n> artista | table | dabicho\n> artista_id_seq | sequence | dabicho\n> dirpath | table | dabicho\n> dirpath_id_seq | sequence | dabicho\n> genero | table | dabicho\n> genero_id_seq | sequence | dabicho\n> mp3 | table | dabicho\n> mp3_id_seq | sequence | dabicho\n> pga_forms | table | dabicho\n> pga_layout | table | dabicho\n> pga_queries | table | dabicho\n> pga_reports | table | dabicho\n> pga_schema | table | dabicho\n> pga_scripts | table | dabicho\n> \n> \n> mp3=# \\d mp3 (this describes a table (mp3)\n> Table \"mp3\"\n> Attribute | Type | Modifier \n> ---------------+-------------------+------------------------------------------------\n> id | integer | not null default nextval('\"mp3_id_seq\"'::text)\n> fk_dirpath_id | integer | not null\n> filename | character varying | not null\n> titulo | text | not null default 'unknown'\n> fk_artista_id | integer | not null default 1\n> fk_album_id | integer | not null default 1\n> comentario | text | not null default 'none'\n> year | integer | default 2001\n> genero | smallint | not null default 1\n> Indices: mp3_fk_dirpath_id_key,\n> mp3_pkey\n> \n> with \\pset you can set output format.\n> \n> mp3=# \\pset expanded\n> Expanded display is on.\n> mp3=# \\d mp3\n> Table \"mp3\"\n> -[ RECORD 1 
]---------------------------------------------\n> Attribute | id\n> Type | integer\n> Modifier | not null default nextval('\"mp3_id_seq\"'::text)\n> -[ RECORD 2 ]---------------------------------------------\n> Attribute | fk_dirpath_id\n> Type | integer\n> Modifier | not null\n> -[ RECORD 3 ]---------------------------------------------\n> ....\n> there are many combinations\n> mp3=# \\pset border 2\n> Border style is 2.\n> mp3=# \\d mp3\n> Table \"mp3\"\n> +---------------+-------------------+------------------------------------------------+\n> | Attribute | Type | Modifier |\n> +---------------+-------------------+------------------------------------------------+\n> | id | integer | not null default nextval('\"mp3_id_seq\"'::text) |\n> | fk_dirpath_id | integer | not null |\n> | filename | character varying | not null |\n> | titulo | text | not null default 'unknown' |\n> | fk_artista_id | integer | not null default 1 |\n> | fk_album_id | integer | not null default 1 |\n> | comentario | text | not null default 'none' |\n> | year | integer | default 2001 |\n> | genero | smallint | not null default 1 |\n> +---------------+-------------------+------------------------------------------------+\n> Indices: mp3_fk_dirpath_id_key,\n> mp3_pkey\n> \n> pretty much the same, and fairly human readable to me. 
(although not everything sorted in columns, i guess you could do querys to the system tables to get that, or use awk to get the bits you want =) )\n> I just this the postgres team has don an excelent work so far.\n> \n> Reply.\n> > Any idea to get a human readable list with column descriptions like\n> > type,size,key,default,null.\n> > It would be nice if it would look simular to the mysql variant:\n> > \n> > mysql> describe employee;\n> > +-----------+----------+------+-----+---------+----------------+\n> > | Field | Type | Null | Key | Default | Extra |\n> > +-----------+----------+------+-----+---------+----------------+\n> > | Id | int(11) | | PRI | NULL | auto_increment |\n> > | FirstName | char(30) | | MUL | | |\n> > | LastName | char(30) | | | | |\n> > | Infix | char(10) | YES | | NULL | |\n> > | Address1 | char(30) | YES | | NULL | |\n> > | PostCode | char(10) | YES | | NULL | |\n> > | Town | int(11) | YES | | NULL | |\n> > +-----------+----------+------+-----+---------+----------------+\n> > \n> > Cheers, Ron.\n> > \n> > \n> > \n> \n> \n> \n> -- \n> ICQ: 15605359 Bicho\n> =^..^=\n> First, they ignore you. Then they laugh at you. Then they fight you. Then you win. Mahatma Gandhi.\n> ........Por que no pensaran los hombres como los animales? Pink Panther........\n> -------------------------------気検体の一致------------------------------------\n> 暑さ寒さも彼岸まで。\n> アン アン アン とっても大好き\n\n\n-- \nICQ: 15605359 Bicho\n =^..^=\nFirst, they ignore you. Then they laugh at you. Then they fight you. Then you win. Mahatma Gandhi.\n........Por que no pensaran los hombres como los animales? Pink Panther........\n-------------------------------気検体の一致------------------------------------\n暑さ寒さも彼岸まで。\nアン アン アン とっても大好き\n\n",
"msg_date": "Fri, 19 Oct 2001 08:28:12 -0600",
"msg_from": "David Eduardo Gomez Noguera <davidgn@servidor.unam.mx>",
"msg_from_op": true,
"msg_subject": "Fw: Re: Is there no \"DESCRIBE <TABLE>;\" on PGSQL? help!!!"
},
{
"msg_contents": "Sorry about that message. My mailer had a bad reply format.\n\n\n-- \nICQ: 15605359 Bicho\n =^..^=\nFirst, they ignore you. Then they laugh at you. Then they fight you. Then you win. Mahatma Gandhi.\n........Por que no pensaran los hombres como los animales? Pink Panther........\n-------------------------------気検体の一致------------------------------------\n暑さ寒さも彼岸まで。\nアン アン アン とっても大好き\n\n",
"msg_date": "Fri, 19 Oct 2001 08:45:37 -0600",
"msg_from": "David Eduardo Gomez Noguera <davidgn@servidor.unam.mx>",
"msg_from_op": true,
"msg_subject": "Re: Fw: Re: Is there no \"DESCRIBE <TABLE>;\" on PGSQL? help!!!"
},
{
"msg_contents": "If you like the psql client then have a look at a graphical \"psql\" CGI\nprogram!!!\nIt's just one file twdba.cgi to put in your cgi-bin directory and your\nbrowser does the rest.\n\nhttp://home.planet.nl/~radejong (TWDBA Download button)\n\nYou see the \\? tricks only work with that particular psql client and the\nengine does not honor them from another client.\n\nYou'll love it!!!\n\n\n\"David Eduardo Gomez Noguera\" <davidgn@servidor.unam.mx> wrote in message\nnews:20011019082812.2bf29485.davidgn@servidor.unam.mx...\n> ---------- Forwarded message ----------\n> Date: Fri, 19 Oct 2001 08:22:46 -0600\n> From: David Eduardo Gomez Noguera <davidgn@servidor.unam.mx>\n> To: \"Ron de Jong\" <Ron.Antispam@news.tht.net>\n> Cc: davidgn@servidor.unam.mx\n> Subject: Re: [HACKERS] Is there no \"DESCRIBE <TABLE>;\" on PGSQL?\nhelp!!!\n>\n> on psql, do \\? there are a lot of commands that let you do it:\n>\n> \\l (this list databases)\n> dabicho=# \\l\n> List of databases\n> Database | Owner | Encoding\n> -------------+----------+-----------\n> agenda | dabicho | SQL_ASCII\n> cele | dabicho | SQL_ASCII\n> dabicho | dabicho | SQL_ASCII\n> diccionario | dabicho | SQL_ASCII\n> imagenes | dabicho | SQL_ASCII\n> libros | dabicho | SQL_ASCII\n> mp3 | dabicho | SQL_ASCII\n> postgres | postgres | SQL_ASCII\n> template0 | postgres | SQL_ASCII\n> template1 | postgres | SQL_ASCII\n> (10 rows)\n>\n> mp3=# \\d (this list tables on the current db)\n> List of relations\n> Name | Type | Owner\n> ----------------+----------+---------\n> album | table | dabicho\n> album_id_seq | sequence | dabicho\n> artista | table | dabicho\n> artista_id_seq | sequence | dabicho\n> dirpath | table | dabicho\n> dirpath_id_seq | sequence | dabicho\n> genero | table | dabicho\n> genero_id_seq | sequence | dabicho\n> mp3 | table | dabicho\n> mp3_id_seq | sequence | dabicho\n> pga_forms | table | dabicho\n> pga_layout | table | dabicho\n> pga_queries | table | dabicho\n> pga_reports | table | 
dabicho\n> pga_schema | table | dabicho\n> pga_scripts | table | dabicho\n>\n>\n> mp3=# \\d mp3 (this describes a table (mp3)\n> Table \"mp3\"\n> Attribute | Type | Modifier\n> ---------------+-------------------+--------------------------------------\n----------\n> id | integer | not null default\nnextval('\"mp3_id_seq\"'::text)\n> fk_dirpath_id | integer | not null\n> filename | character varying | not null\n> titulo | text | not null default 'unknown'\n> fk_artista_id | integer | not null default 1\n> fk_album_id | integer | not null default 1\n> comentario | text | not null default 'none'\n> year | integer | default 2001\n> genero | smallint | not null default 1\n> Indices: mp3_fk_dirpath_id_key,\n> mp3_pkey\n>\n> with \\pset you can set output format.\n>\n> mp3=# \\pset expanded\n> Expanded display is on.\n> mp3=# \\d mp3\n> Table \"mp3\"\n> -[ RECORD 1 ]---------------------------------------------\n> Attribute | id\n> Type | integer\n> Modifier | not null default nextval('\"mp3_id_seq\"'::text)\n> -[ RECORD 2 ]---------------------------------------------\n> Attribute | fk_dirpath_id\n> Type | integer\n> Modifier | not null\n> -[ RECORD 3 ]---------------------------------------------\n> ....\n> there are many combinations\n> mp3=# \\pset border 2\n> Border style is 2.\n> mp3=# \\d mp3\n> Table \"mp3\"\n>\n+---------------+-------------------+---------------------------------------\n---------+\n> | Attribute | Type | Modifier\n|\n>\n+---------------+-------------------+---------------------------------------\n---------+\n> | id | integer | not null default\nnextval('\"mp3_id_seq\"'::text) |\n> | fk_dirpath_id | integer | not null\n|\n> | filename | character varying | not null\n|\n> | titulo | text | not null default 'unknown'\n|\n> | fk_artista_id | integer | not null default 1\n|\n> | fk_album_id | integer | not null default 1\n|\n> | comentario | text | not null default 'none'\n|\n> | year | integer | default 2001\n|\n> | genero | smallint | not null 
default 1\n|\n>\n+---------------+-------------------+---------------------------------------\n---------+\n> Indices: mp3_fk_dirpath_id_key,\n> mp3_pkey\n>\n> pretty much the same, and fairly human readable to me. (although not\neverything sorted in columns, i guess you could do querys to the system\ntables to get that, or use awk to get the bits you want =) )\n> I just this the postgres team has don an excelent work so far.\n>\n> Reply.\n> > Any idea to get a human readable list with column descriptions like\n> > type,size,key,default,null.\n> > It would be nice if it would look simular to the mysql variant:\n> >\n> > mysql> describe employee;\n> > +-----------+----------+------+-----+---------+----------------+\n> > | Field | Type | Null | Key | Default | Extra |\n> > +-----------+----------+------+-----+---------+----------------+\n> > | Id | int(11) | | PRI | NULL | auto_increment |\n> > | FirstName | char(30) | | MUL | | |\n> > | LastName | char(30) | | | | |\n> > | Infix | char(10) | YES | | NULL | |\n> > | Address1 | char(30) | YES | | NULL | |\n> > | PostCode | char(10) | YES | | NULL | |\n> > | Town | int(11) | YES | | NULL | |\n> > +-----------+----------+------+-----+---------+----------------+\n> >\n> > Cheers, Ron.\n> >\n> >\n> >\n>\n>\n>\n> --\n> ICQ: 15605359 Bicho\n> =^..^=\n> First, they ignore you. Then they laugh at you. Then they fight you. Then\nyou win. Mahatma Gandhi.\n> ........Por que no pensaran los hombres como los animales? Pink\nPanther........\n> -------------------------------\u001b$B5$8!BN$N0lCW\u001b(B-------------------------\n-----------\n> \u001b$B=k$54($5$bH`4_$^$G!#\u001b(B\n> \u001b$B%\"%s\u001b(B \u001b$B%\"%s\u001b(B \u001b$B%\"%s\u001b(B \u001b$B$H$C$F$bBg9%$-\u001b(B\n>\n> > on psql, do \\? 
there are a lot of commands that let you do it:\n> >\n> > \\l (this list databases)\n> > dabicho=# \\l\n> > List of databases\n> > Database | Owner | Encoding\n> > -------------+----------+-----------\n> > agenda | dabicho | SQL_ASCII\n> > cele | dabicho | SQL_ASCII\n> > dabicho | dabicho | SQL_ASCII\n> > diccionario | dabicho | SQL_ASCII\n> > imagenes | dabicho | SQL_ASCII\n> > libros | dabicho | SQL_ASCII\n> > mp3 | dabicho | SQL_ASCII\n> > postgres | postgres | SQL_ASCII\n> > template0 | postgres | SQL_ASCII\n> > template1 | postgres | SQL_ASCII\n> > (10 rows)\n> >\n> > mp3=# \\d (this list tables on the current db)\n> > List of relations\n> > Name | Type | Owner\n> > ----------------+----------+---------\n> > album | table | dabicho\n> > album_id_seq | sequence | dabicho\n> > artista | table | dabicho\n> > artista_id_seq | sequence | dabicho\n> > dirpath | table | dabicho\n> > dirpath_id_seq | sequence | dabicho\n> > genero | table | dabicho\n> > genero_id_seq | sequence | dabicho\n> > mp3 | table | dabicho\n> > mp3_id_seq | sequence | dabicho\n> > pga_forms | table | dabicho\n> > pga_layout | table | dabicho\n> > pga_queries | table | dabicho\n> > pga_reports | table | dabicho\n> > pga_schema | table | dabicho\n> > pga_scripts | table | dabicho\n> >\n> >\n> > mp3=# \\d mp3 (this describes a table (mp3)\n> > Table \"mp3\"\n> > Attribute | Type | Modifier\n>\n> ---------------+-------------------+--------------------------------------\n----------\n> > id | integer | not null default\nnextval('\"mp3_id_seq\"'::text)\n> > fk_dirpath_id | integer | not null\n> > filename | character varying | not null\n> > titulo | text | not null default 'unknown'\n> > fk_artista_id | integer | not null default 1\n> > fk_album_id | integer | not null default 1\n> > comentario | text | not null default 'none'\n> > year | integer | default 2001\n> > genero | smallint | not null default 1\n> > Indices: mp3_fk_dirpath_id_key,\n> > mp3_pkey\n> >\n> > with \\pset you can set output 
format.\n> >\n> > mp3=# \\pset expanded\n> > Expanded display is on.\n> > mp3=# \\d mp3\n> > Table \"mp3\"\n> > -[ RECORD 1 ]---------------------------------------------\n> > Attribute | id\n> > Type | integer\n> > Modifier | not null default nextval('\"mp3_id_seq\"'::text)\n> > -[ RECORD 2 ]---------------------------------------------\n> > Attribute | fk_dirpath_id\n> > Type | integer\n> > Modifier | not null\n> > -[ RECORD 3 ]---------------------------------------------\n> > ....\n> > there are many combinations\n> > mp3=# \\pset border 2\n> > Border style is 2.\n> > mp3=# \\d mp3\n> > Table \"mp3\"\n> >\n+---------------+-------------------+---------------------------------------\n---------+\n> > | Attribute | Type | Modifier\n|\n> >\n+---------------+-------------------+---------------------------------------\n---------+\n> > | id | integer | not null default\nnextval('\"mp3_id_seq\"'::text) |\n> > | fk_dirpath_id | integer | not null\n|\n> > | filename | character varying | not null\n|\n> > | titulo | text | not null default 'unknown'\n|\n> > | fk_artista_id | integer | not null default 1\n|\n> > | fk_album_id | integer | not null default 1\n|\n> > | comentario | text | not null default 'none'\n|\n> > | year | integer | default 2001\n|\n> > | genero | smallint | not null default 1\n|\n> >\n+---------------+-------------------+---------------------------------------\n---------+\n> > Indices: mp3_fk_dirpath_id_key,\n> > mp3_pkey\n> >\n> > pretty much the same, and fairly human readable to me. 
(although not\neverything sorted in columns, i guess you could do querys to the system\ntables to get that, or use awk to get the bits you want =) )\n> > I just this the postgres team has don an excelent work so far.\n> >\n> > Reply.\n> > > Any idea to get a human readable list with column descriptions like\n> > > type,size,key,default,null.\n> > > It would be nice if it would look simular to the mysql variant:\n> > >\n> > > mysql> describe employee;\n> > > +-----------+----------+------+-----+---------+----------------+\n> > > | Field | Type | Null | Key | Default | Extra |\n> > > +-----------+----------+------+-----+---------+----------------+\n> > > | Id | int(11) | | PRI | NULL | auto_increment |\n> > > | FirstName | char(30) | | MUL | | |\n> > > | LastName | char(30) | | | | |\n> > > | Infix | char(10) | YES | | NULL | |\n> > > | Address1 | char(30) | YES | | NULL | |\n> > > | PostCode | char(10) | YES | | NULL | |\n> > > | Town | int(11) | YES | | NULL | |\n> > > +-----------+----------+------+-----+---------+----------------+\n> > >\n> > > Cheers, Ron.\n> > >\n> > >\n> > >\n> >\n> >\n> >\n> > --\n> > ICQ: 15605359 Bicho\n> > =^..^=\n> > First, they ignore you. Then they laugh at you. Then they fight you.\nThen you win. Mahatma Gandhi.\n> > ........Por que no pensaran los hombres como los animales? Pink\nPanther........\n>\n> -------------------------------\u001b$B5$8!BN$N0lCW\u001b(B-------------------------\n-----------\n> > \u001b$B=k$54($5$bH`4_$^$G!#\u001b(B\n> > \u001b$B%\"%s\u001b(B \u001b$B%\"%s\u001b(B \u001b$B%\"%s\u001b(B \u001b$B$H$C$F$bBg9%$-\u001b(B\n>\n>\n> --\n> ICQ: 15605359 Bicho\n> =^..^=\n> First, they ignore you. Then they laugh at you. Then they fight you. Then\nyou win. Mahatma Gandhi.\n> ........Por que no pensaran los hombres como los animales? 
Pink\nPanther........\n> -------------------------------気検体の一致-------------------------\n-----------\n> 暑さ寒さも彼岸まで。\n> アン アン アン とっても大好き\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Sat, 20 Oct 2001 18:02:31 +0200",
"msg_from": "\"Ron de Jong\" <radejong@planet.nl>",
"msg_from_op": false,
"msg_subject": "Re: Is there no \"DESCRIBE <TABLE>;\" on PGSQL? help!!!"
}
] |
[
{
"msg_contents": "Marc,\n\nI've noticed (and not only me) significant slowdown of search at\nfts.postgresql.org.\n\n From our logs:\n\n...\nSun Aug 19 03:32:00 EDT 2001\n/usr/local/mailware/CORE/bin/vacuum_analyze: /usr/local/pgsql/bin/psql: No such\nfile or directory\n...\nSun Oct 14 03:32:00 EDT 2001\n/usr/local/mailware/CORE/bin/vacuum_analyze: /usr/local/pgsql/bin/psql: Permissi\non denied\n\nPlease, fix permissions asap\n\n",
"msg_date": "Fri, 19 Oct 2001 18:27:49 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "permissions problem at hub.org !"
},
{
"msg_contents": "\nuse /usr/local/bin/psql\n\n\nVince.\n\nOn Fri, 19 Oct 2001, Oleg Bartunov wrote:\n\n> Marc,\n>\n> I've noticed (and not only me) significant slowdown of search at\n> fts.postgresql.org.\n>\n> >From our logs:\n>\n> ...\n> Sun Aug 19 03:32:00 EDT 2001\n> /usr/local/mailware/CORE/bin/vacuum_analyze: /usr/local/pgsql/bin/psql: No such\n> file or directory\n> ...\n> Sun Oct 14 03:32:00 EDT 2001\n> /usr/local/mailware/CORE/bin/vacuum_analyze: /usr/local/pgsql/bin/psql: Permissi\n> on denied\n>\n> Please, fix permissions asap\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Fri, 19 Oct 2001 12:03:36 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: permissions problem at hub.org !"
},
{
"msg_contents": "On Fri, 19 Oct 2001, Vince Vielhaber wrote:\n\n>\n> use /usr/local/bin/psql\n\nok.\n\n>\n>\n> Vince.\n>\n> On Fri, 19 Oct 2001, Oleg Bartunov wrote:\n>\n> > Marc,\n> >\n> > I've noticed (and not only me) significant slowdown of search at\n> > fts.postgresql.org.\n> >\n> > >From our logs:\n> >\n> > ...\n> > Sun Aug 19 03:32:00 EDT 2001\n> > /usr/local/mailware/CORE/bin/vacuum_analyze: /usr/local/pgsql/bin/psql: No such\n> > file or directory\n> > ...\n> > Sun Oct 14 03:32:00 EDT 2001\n> > /usr/local/mailware/CORE/bin/vacuum_analyze: /usr/local/pgsql/bin/psql: Permissi\n> > on denied\n> >\n> > Please, fix permissions asap\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> >\n>\n>\n> Vince.\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 19 Oct 2001 19:24:33 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Re: permissions problem at hub.org !"
},
{
"msg_contents": "\nhasn't been in /usr/local/pgsql/bin since Aug 18th ...\n\nhub# ls -lt `which psql`\n-rwxr-xr-x 1 root wheel 108472 Aug 16 08:55 /usr/local/bin/psql\n\nOn Fri, 19 Oct 2001, Oleg Bartunov wrote:\n\n> Marc,\n>\n> I've noticed (and not only me) significant slowdown of search at\n> fts.postgresql.org.\n>\n> From our logs:\n>\n> ...\n> Sun Aug 19 03:32:00 EDT 2001\n> /usr/local/mailware/CORE/bin/vacuum_analyze: /usr/local/pgsql/bin/psql: No such\n> file or directory\n> ...\n> Sun Oct 14 03:32:00 EDT 2001\n> /usr/local/mailware/CORE/bin/vacuum_analyze: /usr/local/pgsql/bin/psql: Permissi\n> on denied\n>\n> Please, fix permissions asap\n>\n>\n\n",
"msg_date": "Fri, 19 Oct 2001 12:40:52 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: permissions problem at hub.org !"
}
] |
[
{
"msg_contents": "I have traced down the postmaster-option-processing failure that Thomas\nreported this morning. It appears to be specific to systems running\nglibc: the problem is that resetting optind to 1 is not enough to\nput glibc's getopt() subroutine into a good state to process a fresh\nset of options. (Internally it has a \"nextchar\" pointer that is still\npointing at the old argv list, and only if the pointer points to a null\ncharacter will it wake up enough to reexamine the argv pointer you give\nit.) The reason we see this now, and didn't see it before, is that\nI rearranged startup to set the ps process title as soon as possible\nafter forking a subprocess --- and at least on Linux machines, that\n\"nextchar\" pointer is pointing into the argv array that's overwritten\nby init_ps_display.\n\nWhile I could revert that change, I don't want to. The idea was to be\nsure that a postmaster child running its authentication cycle could be\nidentified, and I still think that's an important feature. So I want to\nfind a way to make it work.\n\nLooking at the source code of glibc's getopt, it seems there are two\nways to force a reset:\n\n* set __getopt_initialized to 0. I thought this was an ideal solution\nsince configure could check for the presence of __getopt_initialized.\nUnfortunately it seems that glibc is built in such a way that that\nsymbol isn't exported :-(, even though it looks global in the source.\n\n* set optind to 0, instead of the more usual 1. This will work, but\nit requires us to know that we're dealing with glibc getopt and not\nanyone else's getopt.\n\nI have thought of two ways to detect glibc getopt: one is to assume that\nif getopt_long() is available, we should set optind=0. The other is to\ntry a runtime test in configure and see if it works to set optind=0.\nRuntime configure tests aren't very appealing, but I don't much care\nfor equating HAVE_GETOPT_LONG to how we should reset optind, either.\n\nOpinions anyone? 
Better ideas?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 19 Oct 2001 17:50:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Detecting glibc getopt?"
},
{
"msg_contents": "(I still see the symptom btw; did a make distclean and configure after\nupdating my tree)\n",
"msg_date": "Sat, 20 Oct 2001 01:06:18 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Detecting glibc getopt?"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> (I still see the symptom btw; did a make distclean and configure after\n> updating my tree)\n\nYeah, it's still busted; my first try was wrong. I have confirmed the\n\"optind = 0\" fix works on my LinuxPPC machine, but we need to decide\nhow to autoconfigure that hack.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 19 Oct 2001 23:28:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Detecting glibc getopt? "
},
{
"msg_contents": "Tom Lane writes:\n\n> The reason we see this now, and didn't see it before, is that\n> I rearranged startup to set the ps process title as soon as possible\n> after forking a subprocess --- and at least on Linux machines, that\n> \"nextchar\" pointer is pointing into the argv array that's overwritten\n> by init_ps_display.\n\nHow about copying the entire argv[] array to a new location before the\nvery first call to getopt(). Then you can use getopt() without hackery\nand can do anything you want to the \"real\" argv area. That should be a\nlot safer. (We don't know yet what other platforms might play\noptimization tricks in getopt().)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 20 Oct 2001 13:46:23 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Detecting glibc getopt?"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> How about copying the entire argv[] array to a new location before the\n> very first call to getopt(). Then you can use getopt() without hackery\n> and can do anything you want to the \"real\" argv area. That should be a\n> lot safer. (We don't know yet what other platforms might play\n> optimization tricks in getopt().)\n\nWell, mumble --- strictly speaking, there is *NO* way to use getopt\nover multiple cycles \"without hackery\". The standard for getopt\n(http://www.opengroup.org/onlinepubs/7908799/xsh/getopt.html)\ndoesn't say you're allowed to scribble on optind in the first place.\nBut you're probably right that having a read-only copy of the argv\nvector will make things safer. Will do it that way.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 20 Oct 2001 12:33:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Detecting glibc getopt? "
},
{
"msg_contents": "\nIs this resolved?\n\n---------------------------------------------------------------------------\n\n> I have traced down the postmaster-option-processing failure that Thomas\n> reported this morning. It appears to be specific to systems running\n> glibc: the problem is that resetting optind to 1 is not enough to\n> put glibc's getopt() subroutine into a good state to process a fresh\n> set of options. (Internally it has a \"nextchar\" pointer that is still\n> pointing at the old argv list, and only if the pointer points to a null\n> character will it wake up enough to reexamine the argv pointer you give\n> it.) The reason we see this now, and didn't see it before, is that\n> I rearranged startup to set the ps process title as soon as possible\n> after forking a subprocess --- and at least on Linux machines, that\n> \"nextchar\" pointer is pointing into the argv array that's overwritten\n> by init_ps_display.\n> \n> While I could revert that change, I don't want to. The idea was to be\n> sure that a postmaster child running its authentication cycle could be\n> identified, and I still think that's an important feature. So I want to\n> find a way to make it work.\n> \n> Looking at the source code of glibc's getopt, it seems there are two\n> ways to force a reset:\n> \n> * set __getopt_initialized to 0. I thought this was an ideal solution\n> since configure could check for the presence of __getopt_initialized.\n> Unfortunately it seems that glibc is built in such a way that that\n> symbol isn't exported :-(, even though it looks global in the source.\n> \n> * set optind to 0, instead of the more usual 1. This will work, but\n> it requires us to know that we're dealing with glibc getopt and not\n> anyone else's getopt.\n> \n> I have thought of two ways to detect glibc getopt: one is to assume that\n> if getopt_long() is available, we should set optind=0. 
The other is to\n> try a runtime test in configure and see if it works to set optind=0.\n> Runtime configure tests aren't very appealing, but I don't much care\n> for equating HAVE_GETOPT_LONG to how we should reset optind, either.\n> \n> Opinions anyone? Better ideas?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Nov 2001 22:25:57 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Detecting glibc getopt?"
},
{
"msg_contents": "> Is this resolved?\n\nSure. Within a day or two of the initial problem report.\n\n - Thomas\n",
"msg_date": "Wed, 07 Nov 2001 03:40:34 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Detecting glibc getopt?"
}
] |
[
{
"msg_contents": "Using current sources, the following sequence:\n\nset DateStyle TO 'Postgres';\nset TimeZone TO 'PST8PDT';\nselect '2001-09-22T18:19:20'::timestamp(2);\n\nproduces\n\n timestamptz\n------------------------------\n Sat Sep 22 11:19:20 2001 PDT\n\non my HPUX box, and evidently also on your machine because that's\nwhat's in the timestamptz expected file. However, on a LinuxPPC\nmachine I get\n\n timestamptz\n------------------------------\n Sat Sep 22 18:19:20 2001 PDT\n\nie, the value after 'T' is interpreted as local time not GMT time.\n\nQuestion 1: which behavior is correct per spec? I'd have expected\nlocal time myself, but I'm not sure where this is specified.\n\nQuestion 2: where to look for the reason for the difference in the\ncode? I'm a tad surprised that the HP box behaves more like\nyours does than the LinuxPPC box ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 19 Oct 2001 18:29:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Platform dependency in timestamp parsing"
},
{
"msg_contents": "> Using current sources, the following sequence:\n> set DateStyle TO 'Postgres';\n> set TimeZone TO 'PST8PDT';\n> select '2001-09-22T18:19:20'::timestamp(2);\n> produces... (snip) ...\n> on my HPUX box, and evidently also on your machine because that's\n> what's in the timestamptz expected file. However, on a LinuxPPC\n> machine I get ... (snip) ...\n> ie, the value after 'T' is interpreted as local time not GMT time.\n> Question 1: which behavior is correct per spec? I'd have expected\n> local time myself, but I'm not sure where this is specified.\n\nIt should be local time.\n\n> Question 2: where to look for the reason for the difference in the\n> code? I'm a tad surprised that the HP box behaves more like\n> yours does than the LinuxPPC box ...\n\nMe too :)\n\nIt is a one line fix in datetime.c, on or about line 918. It needs a\n\"tmask = 0;\" for the new DTK_ISO_TIME case so that the \"feature bitmask\"\nis not altered by the \"T\" in the string. When it is altered, it thinks\nthat a time zone was already specified, so does not try to determine\none.\n\nBefore:\n\nthomas=# select timestamp '2001-10-19T16:47'; \n------------------------\n 2001-10-19 09:47:00-07\n\nAfter:\n\nthomas=# select timestamp '2001-10-19T16:47';\n------------------------\n 2001-10-19 16:47:00-07\n\nI have patches...\n\n - Thomas\n",
"msg_date": "Sat, 20 Oct 2001 00:00:57 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Platform dependency in timestamp parsing"
},
{
"msg_contents": "I've applied patches; all regression tests pass and the\n'yyy-mm-ddThh:mm:ss' is now handled correctly afaict.\n\nThere is an ongoing issue regarding precision and rounding for cases\nwith large interval spans. I've patched the tree with a possible\nsolution involving counting significant figures before rounding, but I\ndon't think it is the right one. Especially since it involves a log10()\ncall :(\n\n - Thomas\n",
"msg_date": "Sat, 20 Oct 2001 01:10:01 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Platform dependency in timestamp parsing"
}
] |
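The one-line fix Thomas describes (\"tmask = 0;\" for the DTK_ISO_TIME case) is easier to see with a toy model of the "feature bitmask" logic. The names and helpers below are hypothetical, not the real datetime.c code: each parsed token contributes a tmask of seen-field bits, and the fall-back to the session's local time zone only fires if the zone bit is still clear — so a pure separator like the ISO 8601 'T' must contribute an empty mask.

```c
/* Toy model (hypothetical names, not the real datetime.c code) of the
 * field-seen bitmask: each token ORs its tmask into fmask, and the
 * local-timezone default applies only while the TZ bit stays clear. */
#include <stdbool.h>

#define DTK_M_TIME (1 << 0)     /* a time-of-day field was seen */
#define DTK_M_TZ   (1 << 1)     /* an explicit time zone was seen */

/* OR one token's field bits into *fmask; false means a field repeated. */
bool note_fields(int *fmask, int tmask)
{
    if (*fmask & tmask)
        return false;
    *fmask |= tmask;
    return true;
}

/* After parsing: should the session's local time zone be assumed? */
bool use_local_zone(int fmask)
{
    return (fmask & DTK_M_TZ) == 0;
}
```

With the bug, the 'T' token left a stale, non-zero tmask behind, so fmask looked as though a zone had already been supplied and the parser skipped determining one — the 09:47 vs 16:47 difference shown above.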
[
{
"msg_contents": "Hi,\n\nVersion 7.1.3, Linux 2.2.18\n\nFollowing procedure:\n\n1. pg_dump dbname > outfile\nEverything is fine.\n\n2. Recreating the database on another system (same Versions)\npsql dbname < infile\n\nI get once:\nERROR: parser: parse error at or near \",\"\nThe rest works fine.\n\nDebug -d2 shows that recreating an operator fails. There was never a problem\ncreating this operator before and it worked fine. It just fails during restore. It seem\nthe function numeric_neq, which is created later (after the second operator) is missing.\nSo pg_dump doesn't seem to dump the functions before the operators.\n\n<snip>\nDEBUG: CommitTransactionCommand\nDEBUG: StartTransactionCommand\nDEBUG: query: CREATE FUNCTION \"numeric_eq\" (numeric,double precision) RETURNS b\nDEBUG: ProcessUtility: CREATE FUNCTION \"numeric_eq\" (numeric,double precision)\nDEBUG: query: select $1 = $2::numeric;\nDEBUG: CommitTransactionCommand\nDEBUG: StartTransactionCommand\nDEBUG: query: CREATE OPERATOR <> (PROCEDURE = numeric_neq ,\n LEFTARG = numeric ,\n RIGHTARG = double precision ,\n COMMUTATOR = <> ,\n NEGATOR = ,\n RESTRICT = eqsel ,\n JOIN = eqjoinsel );\nERROR: parser: parse error at or near \",\"\nDEBUG: AbortCurrentTransaction\nDEBUG: StartTransactionCommand\nDEBUG: query: CREATE OPERATOR = (PROCEDURE = numeric_eq ,\n<snip> \n\nIt's not real problem for me. I think it happened while\nplaying with pgadmin, changing a function call in\nan operator. But still, shouldn't pg_dump look after it?\nAny ideas how to fix this? \n\nregards\n\nJohann Zuschlag\nzuschlag@online.de\n\n\n\n\n\n\n\n\n",
"msg_date": "Sat, 20 Oct 2001 00:41:11 +0200",
"msg_from": "\"Johann Zuschlag\" <zuschlag@online.de>",
"msg_from_op": true,
"msg_subject": "Error while restoring database"
},
{
"msg_contents": "\"Johann Zuschlag\" <zuschlag@online.de> writes:\n> DEBUG: query: CREATE OPERATOR <> (PROCEDURE = numeric_neq ,\n> LEFTARG = numeric ,\n> RIGHTARG = double precision ,\n> COMMUTATOR = <> ,\n> NEGATOR = ,\n> RESTRICT = eqsel ,\n> JOIN = eqjoinsel );\n> ERROR: parser: parse error at or near \",\"\n\nHmm, so what happened to the NEGATOR link?\n\nYou have not given us enough information to understand what, if\nanything, needs to be fixed ... do you still have the original\ndatabase to look at?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 19 Oct 2001 23:39:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Error while restoring database "
}
] |
[
{
"msg_contents": "\nYes, we inherited these arrays from Berkeley and haven't had any need to\nremove them. Are you trying to do things that the other interfaces like\nODBC and JDBC don't handle?\n\nThe group array is a hack but the pg_proc array would be hard to replace\nbecauseit acts as part of the unique key used for cache lookups.\n\n---------------------------------------------------------------------------\n\n> Hello all!!\n> \n> \n> I'm developer of a interface for PostgreSQL for the Borland Kylix\n> and Delphi tools (http://www.vitavoom.com). I've run into the following\n> problems with catalogs:\n> \n> - pg_group: the grolist field is an array. How can I make a query\n> that tell me the usernames of a group ?\n> - pg_proc: the proargtypes field is an array. How can I make a query\n> that will link those types to the pg_types catalog ???\n> \n> This catalog design seems a very crude hack to make the things\n> working for me. Can't those relations be separated in another table ? Or\n> maybe a function that can search for a value in array, and make a wroking\n> reference for an array\n> element in a relation (something like \"select typname from pg_type, pg_group\n> where oid\n> in grolist\").\n> I also quote the PotgreSQL user manual\n> (http://www.ca.postgresql.org/users-lounge/docs/7.1/postgres/arrays.html):\n> \n> \"Tip: Arrays are not lists; using arrays in the manner described in the\n> previous paragraph is often a sign of database misdesign. The array field\n> should generally be split off into a separate table. Tables can obviously be\n> searched easily.\"\n> \n> Best Regards,\n> Steve Howe\n> \n> \n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 19 Oct 2001 23:37:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Catalogs design question"
},
{
"msg_contents": "Hello all!!\n\n\n I'm developer of a interface for PostgreSQL for the Borland Kylix\nand Delphi tools (http://www.vitavoom.com). I've run into the following\nproblems with catalogs:\n\n - pg_group: the grolist field is an array. How can I make a query\nthat tell me the usernames of a group ?\n - pg_proc: the proargtypes field is an array. How can I make a query\nthat will link those types to the pg_types catalog ???\n\n This catalog design seems a very crude hack to make the things\nworking for me. Can't those relations be separated in another table ? Or\nmaybe a function that can search for a value in array, and make a wroking\nreference for an array\nelement in a relation (something like \"select typname from pg_type, pg_group\nwhere oid\nin grolist\").\n I also quote the PotgreSQL user manual\n(http://www.ca.postgresql.org/users-lounge/docs/7.1/postgres/arrays.html):\n\n\"Tip: Arrays are not lists; using arrays in the manner described in the\nprevious paragraph is often a sign of database misdesign. The array field\nshould generally be split off into a separate table. Tables can obviously be\nsearched easily.\"\n\nBest Regards,\nSteve Howe\n\n\n\n\n\n\n",
"msg_date": "Sat, 20 Oct 2001 00:39:59 -0300",
"msg_from": "\"Steve Howe\" <howe@carcass.dhs.org>",
"msg_from_op": false,
"msg_subject": "Catalogs design question"
},
{
"msg_contents": "On Sat, 20 Oct 2001, Steve Howe wrote:\n\n> Hello all!!\n>\n>\n> I'm developer of a interface for PostgreSQL for the Borland Kylix\n> and Delphi tools (http://www.vitavoom.com). I've run into the following\n> problems with catalogs:\n>\n> - pg_group: the grolist field is an array. How can I make a query\n> that tell me the usernames of a group ?\n> - pg_proc: the proargtypes field is an array. How can I make a query\n> that will link those types to the pg_types catalog ???\n>\n> This catalog design seems a very crude hack to make the things\n> working for me. Can't those relations be separated in another table ? Or\n> maybe a function that can search for a value in array, and make a wroking\n> reference for an array\n> element in a relation (something like \"select typname from pg_type, pg_group\n> where oid\n> in grolist\").\n> I also quote the PotgreSQL user manual\n> (http://www.ca.postgresql.org/users-lounge/docs/7.1/postgres/arrays.html):\n\nIn the contrib/ directory are procedures to search arrays for values.\nThis may help.\n\n-- \n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nIndependent Knowledge Management Consultant\n\n",
"msg_date": "Sat, 20 Oct 2001 00:22:30 -0400 (EDT)",
"msg_from": "Joel Burton <joel@joelburton.com>",
"msg_from_op": false,
"msg_subject": "Re: Catalogs design question"
},
{
"msg_contents": "> > I also quote the PotgreSQL user manual\n> >\n(http://www.ca.postgresql.org/users-lounge/docs/7.1/postgres/arrays.html):\n>\n> In the contrib/ directory are procedures to search arrays for values.\n> This may help.\n\n\nThanks for the tip, but in fact I've seen them (and they're listed on the\nsame document I pointed on the original message).\nThese are sequential (slow) searches, and can't be indexed. in resume:\nnothing but another crude hack :). I could even use it, but I can';t tell my\nusers \"oh this feature works but you must compile this contrib code inyo\nyour servers\". Many users can't do it, and many don't even know how to do it\n:(\n\nBest Regards,\nSteve Howe\n\n",
"msg_date": "Sat, 20 Oct 2001 04:55:45 -0300",
"msg_from": "\"Steve Howe\" <howe@carcass.dhs.org>",
"msg_from_op": false,
"msg_subject": "Re: Catalogs design question"
},
{
"msg_contents": "Hello Bruce!\n\n> Yes, we inherited these arrays from Berkeley and haven't had any need to\n> remove them. Are you trying to do things that the other interfaces like\n> ODBC and JDBC don't handle?\nAbout the groups: I just want to write a function that will return the users\nnames belonged by a given group. I understand I can load the arrays in\nmemory, then sequentially compare the members from pg_shadow, but doing it\ngoes against the database priciple after all.\nAbout the procs: the Borland's dbExpress specification demands a\ninput/output list of parameters for stored procedures, and I'm going to use\nfunctions as stored procedures. But I need to make a types list to be able\nlist what are those params.\n\n> The group array is a hack but the pg_proc array would be hard to replace\n> becauseit acts as part of the unique key used for cache lookups.\nThis design itself bothers me.\nWe have no other option left ? Like arrays being referenced in relations ?\nThat's far from perfect, but at least would solve those issues and others\nwhich might appear in other catalogs...\n\nBest Regards,\nSteve Howe\n\n",
"msg_date": "Sat, 20 Oct 2001 05:06:59 -0300",
"msg_from": "\"Steve Howe\" <howe@carcass.dhs.org>",
"msg_from_op": false,
"msg_subject": "Re: Catalogs design question"
},
{
"msg_contents": "Hi,\n\nI think Bruce meant contrib/intarray which provides incredibly fast\nindexed access to arrays of integers, which is your case.\nWe use it a lot, particularly in our full text search engine (OpenFTS).\n\n\tregards,\n\n\tOleg\nOn Sat, 20 Oct 2001, Steve Howe wrote:\n\n> > > I also quote the PotgreSQL user manual\n> > >\n> (http://www.ca.postgresql.org/users-lounge/docs/7.1/postgres/arrays.html):\n> >\n> > In the contrib/ directory are procedures to search arrays for values.\n> > This may help.\n>\n>\n> Thanks for the tip, but in fact I've seen them (and they're listed on the\n> same document I pointed on the original message).\n> These are sequential (slow) searches, and can't be indexed. in resume:\n> nothing but another crude hack :). I could even use it, but I can';t tell my\n> users \"oh this feature works but you must compile this contrib code inyo\n> your servers\". Many users can't do it, and many don't even know how to do it\n> :(\n>\n> Best Regards,\n> Steve Howe\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sat, 20 Oct 2001 11:26:08 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: Catalogs design question"
},
{
"msg_contents": "Steve Howe writes:\n\n> > The group array is a hack but the pg_proc array would be hard to replace\n> > becauseit acts as part of the unique key used for cache lookups.\n> This design itself bothers me.\n> We have no other option left ? Like arrays being referenced in relations ?\n> That's far from perfect, but at least would solve those issues and others\n> which might appear in other catalogs...\n\nIn general, the system catalogs are far from a perfect example (or even an\nexample at all) for pure, normalized relational database design. A more\nimportant concern in processing efficiency. For instance, currently the\nexecution of a procedure takes one catalog lookup versus (1 + nargs) in a\nmore normalized design. (This is an oversimplification, but you get the\nidea.)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 20 Oct 2001 13:45:22 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Catalogs design question"
},
{
"msg_contents": "\"Steve Howe\" <howe@carcass.dhs.org> writes:\n>> The group array is a hack but the pg_proc array would be hard to replace\n>> becauseit acts as part of the unique key used for cache lookups.\n\n> This design itself bothers me.\n> We have no other option left ? Like arrays being referenced in relations ?\n\nSure, it *could* be done another way. As far as pg_proc goes, I agree\nwith Bruce: there are far too many places that know the existing\nrepresentation for us to consider changing it. The pain involved would\nvastly outweigh any possible benefit.\n\nThe representation of groups is not so widely known, however. We could\nprobably get away with changing it, if someone wanted to propose a\nbetter catalog schema and do the legwork to make it happen.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 20 Oct 2001 12:42:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Catalogs design question "
},
{
"msg_contents": "Hi Steve, \n\nYour question about - pg_proc \nselect t.typname from pg_type t , pg_proc p\nwhere p.proname = '<your_stored_procedure>' and p.proargtypes[0] = t.oid ;\nselect t.typname from pg_type t , pg_proc p\nwhere p.proname = '<your_stored_procedure>' and p.proargtypes[1] = t.oid ;\n...\nselect t.typname from pg_type t , pg_proc p\nwhere p.proname = '<your_stored_procedure>' and p.proargtypes[7] = t.oid ;\n\nAs far as I understand the proargtypes entries 0 means no further parameter. \nThis oidvector type of proargtypes seems to have a start index of 0. \nAs long as there are at maximum 8 parameters allowed, this looks practicable. \n\n\nYour question about - pg_group \nThe pg_group column is more bulky, because the int4[] type does not have \nan upper limit. \nSo, the only solution I can see is \nget the number of array elements of the group you want to query \nselect array_dims(grolist) from pg_group where groname = '<your_group>';\n\nand then generate automatically a query like \n\nselect u.usename from pg_user u , pg_group g where \n g.grolist[1] = u.usesysid and g.groname='<your_group>' \nunion\nselect u.usename from pg_user u , pg_group g where \n g.grolist[2] = u.usesysid and g.groname='<your_group>' \nunion\n...\nselect u.usename from pg_user u , pg_group g where \n g.grolist[n] = u.usesysid and g.groname='<your_group>' ;\n\nThis looks very much like another crude hack you've already \ncomplained about. Sorry, but I can't help. \n\nTwo more items I do not understand:\nYou said, the procedures to search arrays in contrib/ are slow. \nMaybe that's true, but usually you do not have thousands of users \nin a group, don't you. \nYou said, many users cannot compile this contrib code. Yes, and they \nare not supposed to do so, because it's up to a system admin to do. \nWhat do I miss here? \n\nRegards, Christoph \n",
"msg_date": "Mon, 22 Oct 2001 17:13:57 METDST",
"msg_from": "Haller Christoph <ch@rodos.fzk.de>",
"msg_from_op": false,
"msg_subject": "Re: Catalogs design question"
},
{
"msg_contents": "Hello Haller!!\n>\n> Your question about - pg_proc\n> select t.typname from pg_type t , pg_proc p\n> where p.proname = '<your_stored_procedure>' and p.proargtypes[0] = t.oid ;\n> select t.typname from pg_type t , pg_proc p\n> where p.proname = '<your_stored_procedure>' and p.proargtypes[1] = t.oid ;\n> ...\n> select t.typname from pg_type t , pg_proc p\n> where p.proname = '<your_stored_procedure>' and p.proargtypes[7] = t.oid ;\n>\n> As far as I understand the proargtypes entries 0 means no further\nparameter.\n> This oidvector type of proargtypes seems to have a start index of 0.\n> As long as there are at maximum 8 parameters allowed, this looks\npracticable.\nThere is such a limit ? I didn't know. This makes your code a working way.\nI'll look further on this later... and even if it's not a query that I would\nsay it's beautiful, it's a way, thanks :).\n\n> Your question about - pg_group\n> The pg_group column is more bulky, because the int4[] type does not have\n> an upper limit.\n> So, the only solution I can see is\n> get the number of array elements of the group you want to query\n> select array_dims(grolist) from pg_group where groname = '<your_group>';\n>\n> and then generate automatically a query like\n>\n> select u.usename from pg_user u , pg_group g where\n> g.grolist[1] = u.usesysid and g.groname='<your_group>'\n> union\n> select u.usename from pg_user u , pg_group g where\n> g.grolist[2] = u.usesysid and g.groname='<your_group>'\n> union\n> ...\n> select u.usename from pg_user u , pg_group g where\n> g.grolist[n] = u.usesysid and g.groname='<your_group>' ;\n>\n> This looks very much like another crude hack you've already\n> complained about. Sorry, but I can't help.\nYes, it's ugly code. 
I would rather write a function, but again I can't\nassume the user has pl/perl or pl/pgsql (or any other).\n\n> Two more items I do not understand:\n> You said, the procedures to search arrays in contrib/ are slow.\n> Maybe that's true, but usually you do not have thousands of users\n> in a group, do you?\nYes. I would use it if I could.\n> You said, many users cannot compile this contrib code. Yes, and they\n> are not supposed to do so, because it's up to a system admin to do.\n> What do I miss here?\nOh, I develop an interface for PostgreSQL called\npgExpress (http://www.vitavoom.com) - it's like an ODBC driver or such. I\nmust provide the functionality I described for the driver users; it's not\nfor me. I would of course have compiled and used the contrib code. But the\ndriver must work \"out-of-the-box\", and requiring a recompile (which is often\nimpossible for users) is not a solution...\nRight now, I'm hardcoding that relation inside the driver, which is also not\nwhat I dreamed of, but I seem to have no other choice.\n\nThanks for the ideas btw :)\n\nBest Regards,\nSteve Howe\n",
"msg_date": "Sat, 27 Oct 2001 06:26:02 -0200",
"msg_from": "\"Steve Howe\" <howe@carcass.dhs.org>",
"msg_from_op": false,
"msg_subject": "Re: Catalogs design question"
},
{
"msg_contents": "Hello Haller!!!\n> Your question about - pg_proc\n> select t.typname from pg_type t , pg_proc p\n> where p.proname = '<your_stored_procedure>' and p.proargtypes[0] = t.oid ;\n> select t.typname from pg_type t , pg_proc p\n> where p.proname = '<your_stored_procedure>' and p.proargtypes[1] = t.oid ;\n> ...\n> select t.typname from pg_type t , pg_proc p\n> where p.proname = '<your_stored_procedure>' and p.proargtypes[7] = t.oid ;\n>\n> As far as I understand the proargtypes entries 0 means no further\nparameter.\n> This oidvector type of proargtypes seems to have a start index of 0.\n> As long as there are at maximum 8 parameters allowed, this looks\npracticable.\nThere is no limit on the number of arguments. An user could create a weird\nfunction like this:\n\nhowe=# CREATE FUNCTION test2(int2, int2, int2, int2, int2, int2, int2, int2,\nint2, int2, int2, int2, int2) RETURNS int4\n AS 'SELECT 1 AS RESULT' LANGUAGE 'sql';\nCREATE\n\nand it would be allowed...\n\nhowe=# select proargtypes from pg_proc where proname='test';\n proargtypes\n----------------------------------------\n 21 21 21 21 21 21 21 21 21 21 21 21 21\n(1 row)\n\nAgain, the problem is that I can't predict (nor limit) what users will try\nto do...\n\n\nBest Regards,\nSteve Howe\n\n",
"msg_date": "Sun, 28 Oct 2001 03:56:28 -0200",
"msg_from": "\"Steve Howe\" <howe@carcass.dhs.org>",
"msg_from_op": false,
"msg_subject": "Re: Catalogs design question"
},
{
"msg_contents": "\"Steve Howe\" <howe@carcass.dhs.org> writes:\n>> As long as there are at maximum 8 parameters allowed, this looks\n>> practicable.\n\n> There is no limit on the number of arguments.\n\nYou're both wrong: the limit is FUNC_MAX_ARGS, which hasn't been 8 in\nquite some time. It's presently 16 by default, and can be configured\nhigher at build time.\n\nFor the purposes of a frontend application, I think it's best to assume\nthat the specific limit is unknown --- ie, you should be able to\ninteroperate with a backend regardless of the FUNC_MAX_ARGS value it\nwas built with.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 28 Oct 2001 12:21:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Catalogs design question "
}
] |
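Tom's closing advice in this thread — a frontend should not bake any particular FUNC_MAX_ARGS into itself — suggests how a driver can consume pg_proc.proargtypes: treat the text form of the oidvector (a space-separated OID list such as "21 23 700") as variable-length. A hedged sketch; the function name is invented, and a real driver would obtain the string from a query result rather than a literal.

```c
/* Sketch of client-side parsing of pg_proc.proargtypes: read the
 * space-separated text form of an oidvector without assuming a fixed
 * argument limit.  Illustrative only; the name is hypothetical. */
#include <stddef.h>
#include <stdlib.h>

/* Parse at most max OIDs from text into oids[]; returns how many. */
size_t parse_oidvector(const char *text, unsigned long *oids, size_t max)
{
    size_t n = 0;
    char *end;

    while (n < max)
    {
        unsigned long v = strtoul(text, &end, 10);
        if (end == text)
            break;              /* no digits left */
        oids[n++] = v;
        text = end;
    }
    return n;
}
```

Each OID can then be resolved to a type name with one `SELECT typname FROM pg_type WHERE oid = ...` per slot, as in Haller's queries above, but without the hard-coded [0]..[7] indexes.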
[
{
"msg_contents": "\n>No argument here. But the proposed Oracle \"packages\" are something\n>completely different and don't solve any of the problems you list.\n\nHello,\n\nI agree packages are not designed primarily for library installation. I \nalso agree packages might need dependency checking. But packages can be \ndumped easily and have initialization functions. Therefore, IMHO, why not \nuse them as library installers?\n\nAt the moment, there is only one PostgreSQL set of libraries available: \nOpenACS. On a marvelous development tool like PostgreSQL, there should be \nmany more. Remember, MS Windows succeeded because of MS Office. What is \nPostgreSQL without a strong set of software libraries and applications?\n\nPostgreSQL is mostly designed by AND for hackers. This is cultural reality \nwhich I do not blame. I remember users posting features requests about \nALTER TABLES DROP COLUMN. Some answers we like \"OK, this can be done, but \nwhat for?\". Same as for packages as far as I can read the latest posts.\n\nCheers, Jean-Michel\n",
"msg_date": "Sat, 20 Oct 2001 13:18:15 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: Packages are needed"
}
] |
[
{
"msg_contents": "\nI just checked out postgresql from CVS, built it and\ndid a pg_dumpall of my 7.1.3 databases.\nWhen I try and load the data into 7.2 it gives\na bunch of errors like\n\\N command not found\nI guess they are nulls and it can't recognise them\nor something.\n\n",
"msg_date": "Sat, 20 Oct 2001 21:32:10 +1000",
"msg_from": "Chris Bitmead <chris@bitmead.com>",
"msg_from_op": true,
"msg_subject": "Unable to upgrade to 7.2"
},
{
"msg_contents": "Chris Bitmead <chris@bitmead.com> writes:\n> When I try and load the data into 7.2 it gives\n> a bunch of errors like\n> \\N command not found\n\nYou're going to have to be more specific if you want help fixing it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 20 Oct 2001 12:46:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unable to upgrade to 7.2 "
}
] |
[
{
"msg_contents": "I'm proud to announce you that the latest release of TWDBA now includes\nPGSQL as well.\n\nDownload it from:\n\nhttp://home.planet.nl/~radejong/\n\nFurther I would like to thank all those users for testing TWDBA and positive\nfeedback.\nListening to users is the only way to come to a good product...\n\nRegards,\n\nRon de Jong\nthe Netherlands\n(Windmill & Cloggyland)\n(in reality drugs & redlight district ;-)\n\n\n\n\n",
"msg_date": "Sat, 20 Oct 2001 18:31:49 +0200",
"msg_from": "\"Ron de Jong\" <radejong@planet.nl>",
"msg_from_op": true,
"msg_subject": "Typhoon-Web-DataBase-Administrator-1.3.0 with PostgreSQL support\n\treleased!!!"
}
] |
[
{
"msg_contents": "\"Johann Zuschlag\" <zuschlag@online.de> writes:\n> Here is an excerpt of the original dump-file.\n\nThat's what you showed us already. What I'd like to see is the\noriginal database contents, particularly\n\n\tselect * from pg_operator where oid = 280343;\n\tselect * from pg_operator where oid = 280344;\n\nso we can see why pg_dump is producing the bogus output.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 20 Oct 2001 12:36:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Error while restoring database "
}
] |
[
{
"msg_contents": "Can a rule see the where statement in a query which it has been triggered by? or is it simply ignored?? what happens?\n\ni.e.\n\nCREATE TABLE foo (\n\tid INTEGER PRIMARY KEY,\n\tname TEXT\n);\n\nCREATE VIEW bar AS SELECT * FROM foo; -- Great view?\n\nCREATE RULE bar_update AS ON UPDATE TO bar DO INSTEAD UPDATE foo SET id = NEW.id, name = NEW.name WHERE OLD.id = id;\n\nNow if I do a:\n\nUPDATE bar SET id = id + 10, WHERE id > 10;\n\nWhat really happens?\n\nDoes the update first select from bar, and pick out which rows to do the update on, and then do the update on these rows or what? \n\nI tried it, and I got an answer I cannot explain, first it works, then it doesn't:\n\nenvisity=# CREATE TABLE foo (\nenvisity(# id INTEGER PRIMARY KEY,\nenvisity(# name TEXT\nenvisity(# );\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'foo_pkey' for tabl\ne 'foo'\nCREATE\nenvisity=# \nenvisity=# CREATE VIEW bar AS SELECT * FROM foo; -- Great view?\nCREATE\nenvisity=# \nenvisity=# CREATE RULE bar_update AS ON UPDATE TO bar DO INSTEAD UPDATE foo SET \nfoo.id = NEW.id, foo.name = NEW.name WHERE OLD.id = foo.id;\nERROR: parser: parse error at or near \".\"\nenvisity=# CREATE RULE bar_update AS ON UPDATE TO bar DO INSTEAD UPDATE foo SET \nid = NEW.id, name = NEW.name WHERE OLD.id = id;\nCREATE\nenvisity=# INSERT INTO foo (1, 't');\nERROR: parser: parse error at or near \"1\"\nenvisity=# INSERT INTO foo VALUES(1, 't');\nINSERT 57054 1\nenvisity=# INSERT INTO foo VALUES(2, 'tr');\nINSERT 57055 1\nenvisity=# INSERT INTO foo VALUES(12, 'tg');\nINSERT 57056 1\nenvisity=# INSERT INTO foo VALUES(15, 'tgh');\nINSERT 57057 1\nenvisity=# INSERT INTO foo VALUES(14, 'th');\nINSERT 57058 1\nenvisity=# UPDATE bar SET id = id + 10 > \n\nenvisity=# UPDATE bar SET id = id + 10 where id > 10;\nUPDATE 3 -- Here it works\nenvisity=# select * from bar;\n id | name \n----+------\n 1 | t\n 2 | tr\n 22 | tg\n 24 | th\n 25 | tgh\n(5 rows)\n\nenvisity=# #CREATE VIEW bar AS SELECT * 
FROM foo; -- Great view?\nERROR: parser: parse error at or near \"#\"\nenvisity=# DROP VIEW bar;\nDROP\nenvisity=# CREATE VIEW bar AS SELECT id * 2 as id, name FROM foo; -- Great view\n?\nCREATE\nenvisity=# CREATE RULE bar_update AS ON UPDATE TO bar DO INSTEAD UPDATE foo SET \nid = NEW.id, name = NEW.name WHERE OLD.id = id;\nCREATE\nenvisity=# UPDATE bar SET id = id + 10 where id > 10;\nUPDATE 0\nenvisity=# select * from bar;\n id | name \n----+------\n 2 | t\n 4 | tr\n 44 | tg\n 48 | th\n 50 | tgh\n(5 rows)\n\nenvisity=# UPDATE bar SET id = id + 10 where id > 10;\nUPDATE 0\nenvisity=# select * from foo;\n id | name \n----+------\n 1 | t\n 2 | tr\n 22 | tg\n 24 | th\n 25 | tgh\n(5 rows)\n\nenvisity=# UPDATE bar SET id = id + 10 where id > 10;\nUPDATE 0 -- Here it doesn't work.\n\n\n\nAasmund Midttun Godal\n\naasmund@godal.com - http://www.godal.com/\n+47 40 45 20 46\n",
"msg_date": "Sat, 20 Oct 2001 23:57:12 GMT",
"msg_from": "\"Aasmund Midttun Godal\" <postgresql@envisity.com>",
"msg_from_op": true,
"msg_subject": "CREATE RULE ON UPDATE/DELETE"
},
{
"msg_contents": "On Sat, 20 Oct 2001, Aasmund Midttun Godal wrote:\n\n> Can a rule see the where statement in a query which it has been\n> triggered by? or is it simply ignored?? what happens?\n>\n\nLooking over your question, I wanted to clarify the problem a bit, so:\n(cleaned up example a bit from Aasmund)\n\n\n-- set up tables\n\ndrop view normal;\ndrop view dbl;\ndrop table raw;\n\nCREATE TABLE raw (id INT PRIMARY KEY, name TEXT );\nINSERT INTO raw VALUES(1, 'a');\nINSERT INTO raw VALUES(2, 'b');\nINSERT INTO raw VALUES(12, 'c');\nINSERT INTO raw VALUES(15, 'd');\nINSERT INTO raw VALUES(14, 'e');\n\n\n-- set up two views: \"normal\", a simple view,\n-- and \"dbl\", which shows id * 2\n\n-- create basic rules to allow update to both views\n\nCREATE VIEW normal AS SELECT * FROM raw;\n\nCREATE RULE normal_update AS ON UPDATE TO normal DO INSTEAD UPDATE raw SET\nid = NEW.id, name = NEW.name WHERE OLD.id = id;\n\nCREATE VIEW dbl AS SELECT id * 2 as id, name FROM raw;\n\nCREATE RULE dbl_update AS ON UPDATE TO dbl DO INSTEAD UPDATE raw SET\nid = NEW.id, name = NEW.name WHERE OLD.id = id;\n\n\n-- now test this\n\nUPDATE normal SET id = id + 10 where id > 10; -- works fine\n\nUPDATE dbl SET id = id + 10 where id > 10; -- above shows UPDATE 0\n -- even though there are ids > 10\n\nUPDATE dbl SET id = id + 10; -- UPDATE 1; shows table\nSELECT * FROM dbl; -- inconsistencies: two \"a\"s\nSELECT * FROM raw;\n\n\n\nThe issue is that there are no IDs over 10 that have another ID that is\nexactly their value, so the first update to \"dbl\" does nothing.\n\nThe second time, w/o the ID>10 restriction, it finds 1(a), and double\nthat, 2(b), and adds 10; getting confused about which record to edit.\n\nIs this the best way to interpret this? Is this a bug?\n\n\n-- \n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nIndependent Knowledge Management Consultant\n\n",
"msg_date": "Sat, 20 Oct 2001 23:31:10 -0400 (EDT)",
"msg_from": "Joel Burton <joel@joelburton.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE RULE ON UPDATE/DELETE"
},
{
"msg_contents": "\nOn Sat, 20 Oct 2001, Joel Burton wrote:\n\n> On Sat, 20 Oct 2001, Aasmund Midttun Godal wrote:\n> \n> > Can a rule see the where statement in a query which it has been\n> > triggered by? or is it simply ignored?? what happens?\n> >\n> \n> Looking over your question, I wanted to clarify the problem a bit, so:\n> (cleaned up example a bit from Aasmund)\n\n> drop view normal;\n> drop view dbl;\n> drop table raw;\n> \n> CREATE TABLE raw (id INT PRIMARY KEY, name TEXT );\n> INSERT INTO raw VALUES(1, 'a');\n> INSERT INTO raw VALUES(2, 'b');\n> INSERT INTO raw VALUES(12, 'c');\n> INSERT INTO raw VALUES(15, 'd');\n> INSERT INTO raw VALUES(14, 'e');\n> \n> \n> -- set up two views: \"normal\", a simple view,\n> -- and \"dbl\", which shows id * 2\n> \n> -- create basic rules to allow update to both views\n> \n> CREATE VIEW normal AS SELECT * FROM raw;\n> \n> CREATE RULE normal_update AS ON UPDATE TO normal DO INSTEAD UPDATE raw SET\n> id = NEW.id, name = NEW.name WHERE OLD.id = id;\n> \n> CREATE VIEW dbl AS SELECT id * 2 as id, name FROM raw;\n> \n> CREATE RULE dbl_update AS ON UPDATE TO dbl DO INSTEAD UPDATE raw SET\n> id = NEW.id, name = NEW.name WHERE OLD.id = id;\n\n> The issue is that there are no IDs over 10 that have another ID that is\n> exactly their value, so the first update to \"dbl\" does nothing.\n> \n> The second time, w/o the ID>10 restriction, it finds 1(a), and double\n> that, 2(b), and adds 10; getting confused about which record to edit.\n> \n> Is this the best way to interpret this? Is this a bug?\n\nDon't think so. I think the rule doesn't make any sense.\nNEW.id and OLD.id are probably dbl values, so saying OLD.id=id (where id\nis raw.id since that's the update table) isn't correct. It probably\nshould be OLD.id=id*2 (which seems to work for me, btw) It's editing\na different row than the one that's being selected.\n\n\n",
"msg_date": "Sun, 21 Oct 2001 00:41:29 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE RULE ON UPDATE/DELETE"
},
{
"msg_contents": "\n> Don't think so. I think the rule doesn't make any sense.\n> NEW.id and OLD.id are probably dbl values, so saying OLD.id=id (where id\n> is raw.id since that's the update table) isn't correct. It probably\n> should be OLD.id=id*2 (which seems to work for me, btw) It's editing\n> a different row than the one that's being selected.\n\nI forgot to mention in this that I needed to made an additional change in\nthe rule to make the ids come out correct at the end :(. The update set\nid=NEW.id should be id=NEW.id/2 of course... Otherwise the +10 becomes a\n+20.\n\n\n",
"msg_date": "Sun, 21 Oct 2001 01:33:59 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE RULE ON UPDATE/DELETE"
},
{
"msg_contents": "Joel Burton <joel@joelburton.com> writes:\n> CREATE VIEW dbl AS SELECT id * 2 as id, name FROM raw;\n\n> CREATE RULE dbl_update AS ON UPDATE TO dbl DO INSTEAD UPDATE raw SET\n> id = NEW.id, name = NEW.name WHERE OLD.id = id;\n\nSurely you'd need something like\n\nCREATE RULE dbl_update AS ON UPDATE TO dbl DO INSTEAD UPDATE raw SET\nid = NEW.id / 2, name = NEW.name WHERE OLD.id = id * 2;\n\n(untested...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 21 Oct 2001 12:47:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CREATE RULE ON UPDATE/DELETE "
},
{
"msg_contents": "Yes, I agree perfectly... I never thought of that! I would really like it if some more info was added to the docs regarding info on rules and triggers. The section on update rules is quite good, but some more would never hurt. One point in the trigger vs rules section which at least to me is very important is the simple fact that you cannot have a trigger on a select... Ok I understand why - but it took some time...\n\nThank you for answering my questions!\n\nregards,\n\nAasmund.\nOn Sun, 21 Oct 2001 12:47:41 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Joel Burton <joel@joelburton.com> writes:\n> \n> \n> Surely you'd need something like\n> \n> CREATE RULE dbl_update AS ON UPDATE TO dbl DO INSTEAD UPDATE raw SET\n> id = NEW.id / 2, name = NEW.name WHERE OLD.id = id * 2;\n> \n> (untested...)\n> \n> \t\t\tregards, tom lane\n\nAasmund Midttun Godal\n\naasmund@godal.com - http://www.godal.com/\n+47 40 45 20 46\n",
"msg_date": "Sun, 21 Oct 2001 22:26:03 GMT",
"msg_from": "\"Aasmund Midttun Godal\" <postgresql@envisity.com>",
"msg_from_op": true,
"msg_subject": "Re: CREATE RULE ON UPDATE/DELETE"
},
{
"msg_contents": "\nI have added the following text to the CREATE TRIGGER manual page to\naddress this issue. It often confuses people so it is good to point\nout:\n\n <para>\n <command>SELECT</command> does not modify any rows so you can not\n create <command>SELECT</command> triggers.\n </para>\n\n\n---------------------------------------------------------------------------\n\n> Yes, I agree perfectly... I never thought of that! I would really like it if some more info was added to the docs regarding info on rules and triggers. The section on update rules is quite good, but some more would never hurt. One point in the trigger vs rules section which at least to me is very important is the simple fact that you cannot have a trigger on a select... Ok I understand why - but it took some time...\n> \n> Thank you for answering my questions!\n> \n> regards,\n> \n> Aasmund.\n> On Sun, 21 Oct 2001 12:47:41 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Joel Burton <joel@joelburton.com> writes:\n> > \n> > \n> > Surely you'd need something like\n> > \n> > CREATE RULE dbl_update AS ON UPDATE TO dbl DO INSTEAD UPDATE raw SET\n> > id = NEW.id / 2, name = NEW.name WHERE OLD.id = id * 2;\n> > \n> > (untested...)\n> > \n> > \t\t\tregards, tom lane\n> \n> Aasmund Midttun Godal\n> \n> aasmund@godal.com - http://www.godal.com/\n> +47 40 45 20 46\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 29 Nov 2001 17:17:39 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CREATE RULE ON UPDATE/DELETE"
}
] |
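Combining Stephan's and Tom's corrections from the thread above: because the `dbl` view exposes `id * 2`, the rewrite rule has to convert in both directions, dividing `NEW.id` back down for the assignment and doubling the stored value for the match. A consolidated, untested sketch using the `raw`/`dbl` names from Joel's example:

```sql
-- Sketch (untested): NEW.id and OLD.id are in the view's doubled units,
-- while the plain "id" on the right-hand side is raw.id.
CREATE RULE dbl_update AS ON UPDATE TO dbl DO INSTEAD
    UPDATE raw SET id = NEW.id / 2, name = NEW.name
    WHERE OLD.id = id * 2;
```

With this rule, `UPDATE dbl SET id = id + 10 WHERE id > 10` finds the intended rows, and a +10 seen through the view translates to a +5 in `raw`.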
[
{
"msg_contents": "Shouldn't there be some form of CREATE TABLE AS / WITHOUT OIDS?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 21 Oct 2001 14:27:18 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "CREATE TABLE AS / WITHOUT OIDs?"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Shouldn't there be some form of CREATE TABLE AS / WITHOUT OIDS?\n\nI thought about that, but decided it wasn't worth cluttering the\nparsetree representation with yet another CreateAs/SelectInto hack.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 Oct 2001 17:37:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CREATE TABLE AS / WITHOUT OIDs? "
},
{
"msg_contents": "> Shouldn't there be some form of CREATE TABLE AS / WITHOUT OIDS?\n\nHave we decided we don't need this? Is it a TODO item?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Nov 2001 21:13:15 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CREATE TABLE AS / WITHOUT OIDs?"
}
] |
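Until a `CREATE TABLE AS ... WITHOUT OIDS` form exists, the same effect can be had in two steps by declaring the table explicitly and then populating it. A sketch with invented table names:

```sql
-- Workaround sketch: create the target WITHOUT OIDS up front,
-- then fill it instead of using CREATE TABLE AS.
CREATE TABLE summary (a int, b text) WITHOUT OIDS;
INSERT INTO summary SELECT a, b FROM source_table;
```

The cost is spelling out the column list, which CREATE TABLE AS would otherwise infer from the query.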
[
{
"msg_contents": "Shouldn't this work?\n\ncreate table test ( a int, unique (oid) );\nERROR: CREATE TABLE: column \"oid\" named in key does not exist\n\nBecause this works:\n\ncreate table test ( a int );\nCREATE\n\nalter table test add unique (oid);\nNOTICE: ALTER TABLE/UNIQUE will create implicit index 'test_oid_key' for table 'test'\nCREATE\n\nAnd shouldn't the last one say \"ALTER\"?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 21 Oct 2001 14:28:01 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Creating unique constraints on OID"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Shouldn't this work?\n> create table test ( a int, unique (oid) );\n> ERROR: CREATE TABLE: column \"oid\" named in key does not exist\n\nNow it does.\n\nregression=# create table test ( a int, unique (oid) );\nNOTICE: CREATE TABLE/UNIQUE will create implicit index 'test_oid_key' for table 'test'\nCREATE\nregression=# drop table test;\nDROP\nregression=# create table test ( a int, unique (oid) ) without oids;\nERROR: CREATE TABLE: column \"oid\" named in key does not exist\nregression=# create table test ( a int ) without oids;\nCREATE\nregression=# alter table test add unique (oid);\nERROR: ALTER TABLE: column \"oid\" named in key does not exist\nregression=# drop table test;\nDROP\nregression=# create table test ( a int );\nCREATE\nregression=# alter table test add unique (oid);\nNOTICE: ALTER TABLE/UNIQUE will create implicit index 'test_oid_key' for table 'test'\nCREATE\nregression=#\n\n> And shouldn't the last one say \"ALTER\"?\n\nThe reason that happens is that parser/analyze.c transforms the command\ninto an ALTER TABLE step that adds a constraint (a no-op in this case)\nplus a CREATE INDEX step. The commandTag emitted by the last step is\nwhat psql shows. This could possibly be fixed, but it seems not worth\nthe trouble.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 Oct 2001 18:53:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Creating unique constraints on OID "
},
{
"msg_contents": "So the result of all this is that the behaviour of my ADD UNIQUE code is\ncorrect in this case?\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Shouldn't this work?\n> > create table test ( a int, unique (oid) );\n> > ERROR: CREATE TABLE: column \"oid\" named in key does not exist\n>\n> Now it does.\n\nIn 7.2 you mean? Or did you just fix it then?\n\n> > And shouldn't the last one say \"ALTER\"?\n>\n> The reason that happens is that parser/analyze.c transforms the command\n> into an ALTER TABLE step that adds a constraint (a no-op in this case)\n> plus a CREATE INDEX step. The commandTag emitted by the last step is\n> what psql shows. This could possibly be fixed, but it seems not worth\n> the trouble.\n\nIf it were to be changed - I really wouldn't know where to do that...\n\nChris\n\n",
"msg_date": "Tue, 23 Oct 2001 10:09:33 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Creating unique constraints on OID "
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> So the result of all this is that the behaviour of my ADD UNIQUE code is\n> correct in this case?\n\nThe AlterTable code wasn't broken; the error was in parser/analyze.c,\nwhich was prematurely rejecting the command.\n\n>> Peter Eisentraut <peter_e@gmx.net> writes:\n> Shouldn't this work?\n> create table test ( a int, unique (oid) );\n> ERROR: CREATE TABLE: column \"oid\" named in key does not exist\n>> \n>> Now it does.\n\n> In 7.2 you mean? Or did you just fix it then?\n\nI just fixed it moments before sending that message.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 Oct 2001 22:50:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Creating unique constraints on OID "
}
] |
[
{
"msg_contents": "On Sat, 20 Oct 2001 12:36:16 -0400, Tom Lane wrote:\n\n>That's what you showed us already. What I'd like to see is the\n>original database contents, particularly\n>\n>\tselect * from pg_operator where oid = 280343;\n>\tselect * from pg_operator where oid = 280344;\n>\n>so we can see why pg_dump is producing the bogus output.\n\nI'm sorry. I'm not so deep in the internals of postgreSQL. I'm just\ndoing some psqlodbc-supports and tests.\n\nAttached you find the results of the above selects.\n\nregards\n\nJohann Zuschlag\nzuschlag@online.de",
"msg_date": "Sun, 21 Oct 2001 14:45:03 +0200",
"msg_from": "\"Johann Zuschlag\" <zuschlag@online.de>",
"msg_from_op": true,
"msg_subject": "Re: Error while restoring database"
},
{
"msg_contents": "\"Johann Zuschlag\" <zuschlag@online.de> writes:\n>> select * from pg_operator where oid = 280343;\n>> select * from pg_operator where oid = 280344;\n> Attached you find the results of the above selects.\n\nOkay ... are there any rows in pg_operator with OID 280346 or 280347 ?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 21 Oct 2001 12:42:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Error while restoring database "
},
{
"msg_contents": "On Sun, 21 Oct 2001 12:42:57 -0400, Tom Lane wrote:\n\n>>> select * from pg_operator where oid = 280343;\n>>> select * from pg_operator where oid = 280344;\n>> Attached you find the results of the above selects.\n>\n>Okay ... are there any rows in pg_operator with OID 280346 or 280347 ?\n\nYes, seems so. See the attachment. Again, the negator stuff\nnever worked for numeric.\n\nregards\n\n\nJohann Zuschlag\nzuschlag@online.de",
"msg_date": "Sun, 21 Oct 2001 19:54:03 +0200",
"msg_from": "\"Johann Zuschlag\" <zuschlag@online.de>",
"msg_from_op": true,
"msg_subject": "Re: Error while restoring database"
},
{
"msg_contents": "\"Johann Zuschlag\" <zuschlag@online.de> writes:\n>> Okay ... are there any rows in pg_operator with OID 280346 or 280347 ?\n\n> Yes, seems so. See the attachment. Again, the negator stuff\n> never worked for numeric.\n\nLooks like these are \"shell\" operator definitions left over from\ncommutator or negator forward references that were never satisfied.\npg_dump did the right thing to not dump them. I'd say that the backend\nshould never have accepted a shell operator def with an empty name,\nthough, which is what you seem to have at OID 280347.\n\nDo you happen to have the exact command that you gave to create\noperator 280343 (numeric_neq)? I think what this really boils down\nto is insufficient error checking somewhere in CREATE OPERATOR.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 21 Oct 2001 14:14:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Error while restoring database "
},
{
"msg_contents": "On Sun, 21 Oct 2001 14:14:05 -0400, Tom Lane wrote:\n\n>Looks like these are \"shell\" operator definitions left over from\n>commutator or negator forward references that were never satisfied.\n>pg_dump did the right thing to not dump them. I'd say that the backend\n>should never have accepted a shell operator def with an empty name,\n>though, which is what you seem to have at OID 280347.\n>\n>Do you happen to have the exact command that you gave to create\n>operator 280343 (numeric_neq)? I think what this really boils down\n>to is insufficient error checking somewhere in CREATE OPERATOR.\n\nFortunately I still have the scripts. I used pgAdminII. But I think some\ntime earlier I used psql for the same operator. So shouldn't make any\ndifference.\n\nI used the two scripts below. I think in that particular order. \n(Still wondering, why there is a negator '<>' in the first one :-)\n\nI just tested them again. No error message.\nAnd I've got one with an empty name!\nAlways wondered why.\n\n(1)\ncreate function numeric_eq(numeric,float8) returns bool as '\n select $1 = $2::numeric;\n' language 'sql';\n\ncreate operator = (\n leftarg=numeric,\n rightarg=float8,\n procedure=numeric_eq,\n commutator='=',\n negator='<>',\n restrict=eqsel,\n join=eqjoinsel\n );\n\n(2)\ncreate function numeric_neq(numeric,float8) returns bool as '\n select $1 = $2::numeric;\n' language 'sql';\n\ncreate operator <> (\n leftarg=numeric,\n rightarg=float8,\n procedure=numeric_neq,\n commutator='<>',\n negator='',\n restrict=eqsel,\n join=eqjoinsel\n );\n\nregards\n\n\nJohann Zuschlag\nzuschlag@online.de\n\n\n",
"msg_date": "Sun, 21 Oct 2001 22:03:52 +0200",
"msg_from": "\"Johann Zuschlag\" <zuschlag@online.de>",
"msg_from_op": true,
"msg_subject": "Re: Error while restoring database"
},
{
"msg_contents": "\"Johann Zuschlag\" <zuschlag@online.de> writes:\n> create function numeric_neq(numeric,float8) returns bool as '\n> select $1 = $2::numeric;\n> ' language 'sql';\n\n> create operator <> (\n> leftarg=numeric,\n> rightarg=float8,\n> procedure=numeric_neq,\n> commutator='<>',\n> negator='',\n ^^^^^^^^^^\n> restrict=eqsel,\n> join=eqjoinsel\n> );\n\nWell, there's your problem...\n\nFor 7.2, I have added some error checking to the system that will\nprevent it accepting invalid operator names in commutator/negator\nparameters.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 21 Oct 2001 16:18:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Error while restoring database "
}
] |
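For reference, here is a corrected version of Johann's second script as an untested sketch. The empty `negator=''` clause, which the 7.2 error checking Tom added now rejects, is simply omitted or replaced by the real negator name. Note also that the original `numeric_neq` body tested equality (`$1 = $2::numeric`) rather than inequality, so the comparison itself is fixed here too; `neqsel`/`neqjoinsel` are the usual selectivity estimators for `<>`:

```sql
-- Sketch (untested): the support function for <> must test inequality,
-- and the negator of <> is =, linking back to the operator from script (1).
CREATE FUNCTION numeric_neq(numeric, float8) RETURNS bool AS '
    SELECT $1 <> $2::numeric;
' LANGUAGE 'sql';

CREATE OPERATOR <> (
    leftarg   = numeric,
    rightarg  = float8,
    procedure = numeric_neq,
    commutator = '<>',
    negator    = '=',
    restrict   = neqsel,
    join       = neqjoinsel
);
```

With a valid (or absent) negator clause, no empty-named shell operator is left behind in pg_operator, and pg_dump has nothing bogus to emit.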
[
{
"msg_contents": "Dear all,\n \nI am trying to install PostgreSQL 7.1.3 on Win98 with APACHE\nand PHP (both installed and running), and\nam getting errors with \"make\" and \"make install\" (see below).\n \nWhat are the differences in installations for Win98, WinNT and Win2000?\nThere are so many procedures around and none is working without problems.\n \nI installed Cygwin to emulate UNIX environment and Cygwin IPC to\nsupport the linker (ld.exe). \n \nI dowloaded \"postgresql-7.1.3.tar.gz\".\n \n\"./configure\" finished properly with \"un.h\" and \"tcp.h\" installed BUT\nwithout \"endian.h\" (is this important ???)\n \nI also copied \"libpostgres.a\" into \"/usr/local/lib\".\n \nThere are some Windows Makefiles (\"../src/win32.mak\" and \"../src/makefiles/Makefile.win) - Do I need to run some and how and when. \n \nI can run \"postmaster -i&\" after IMPROPER installation BUT \"psql\" does not work.\n \nAlso, PHP commands of type \"pg_*\" are not recognised. I turned ON (I believe) PHP-Postgres in \"php.ini\" file residing in Windows dir by allowing \"extension=php_pgsql.dll\".\n \n \nWhat is WRONG?\n \nMany thanks,\n \nSteven.\n \n****************\n\"make\" and \"make install\" ERRORS:\n\n....\n\ngcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations command.o common.o help.o input.o stringutils.o mainloop.o copy.o startup.o prompt.o variables.o large_obj.o print.o describe.o tab-complete.o -L../../../src/interfaces/libpq -lpq -L/usr/local/lib -g -lz -lcrypt -lreadline -lcygipc -o psql\n\ntab-complete.o(.text+0x2a36):tab-complete.c: undefined reference to `filename_completion_function'\n\ncollect2: ld returned 1 exit status\n\nmake[3]: *** [psql] Error 1\n\nmake[3]: Leaving directory `/usr/src/postgresql-7.1.3/src/bin/psql'\n\nmake[2]: *** [all] Error 2\n\nmake[2]: Leaving directory `/usr/src/postgresql-7.1.3/src/bin'\n\nmake[1]: *** [all] Error 2\n\nmake[1]: Leaving directory `/usr/src/postgresql-7.1.3/src'\n\nmake: *** [all] Error 2\n\n 
\n****************",
"msg_date": "Mon, 22 Oct 2001 10:17:25 +0930",
"msg_from": "\"Steven Vajdic\" <Steven.Vajdic@motorola.com>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 7.1.3 installation on Windows platforms"
}
] |
[
{
"msg_contents": "Reply-To: sender\n\nHi. I was surprised to discover today that postgres's\ncharacter types don't support zero bytes. That is,\nPostgres isn't 8-bit clean. Why is that?\n\nMore to the point, I need to store about 1k bytes per row\nof varying-length 8-bit binary data. I have a few options:\n\n + BLOBs. PostgreSQL BLOBs make me nervous. I worry about\n the BLOB not being deleted when the corresponding row in\n the table is deleted. The documentation is vague.\n\n + What I really need is a binary *short* object type.\n I have heard rumors of a legendary \"bytea\" type that might\n help me, but it doesn't appear to be documented anywhere,\n so I hesitate to use it.\n\n + I can base64-encode the data and store it in a \"text\"\n field. But postgres is a great big data-storage system;\n surely it can store binary data without resorting to\n this kind of hack.\n\nWhat should I do? Please help. Thanks!\n\n-- \nJason Orendorff\n\nP.S. I would love to help improve PostgreSQL's documentation.\n Whom should I contact?\n\n",
"msg_date": "Sun, 21 Oct 2001 23:18:46 -0500",
"msg_from": "\"Jason Orendorff\" <jason@jorendorff.com>",
"msg_from_op": true,
"msg_subject": "storing binary data"
},
{
"msg_contents": "Jason,\n\nBLOBs as you have correctly inferred do not get automatically deleted. \nYou can add triggers to your tables to delete them automatically if you \nso desire.\n\nHowever 'bytea' is the datatype that is most appropriate for your needs. \n It has been around for a long time, but not well documented. I have \nbeen using it in my code since 7.0 of postgres and it works fine. In \nfact many of the internal postgres tables use it.\n\nThe problem with bytea is that many of the client interfaces don't \nsupport it well or at all. So depending on how you intend to access the \ndata you may not be able to use the bytea datatype. The situation is \nmuch improved in 7.2 with bytea documented and better support for it in \nthe client interfaces (jdbc especially).\n\nEncoding the data into a text format will certainly work, if you can't \nwork around the current limitations of the above two options. And I \nbelieve there is some contrib code to help in this area.\n\nthanks,\n--Barry\n\n\n\nJason Orendorff wrote:\n\n> Reply-To: sender\n> \n> Hi. I was surprised to discover today that postgres's\n> character types don't support zero bytes. That is,\n> Postgres isn't 8-bit clean. Why is that?\n> \n> More to the point, I need to store about 1k bytes per row\n> of varying-length 8-bit binary data. I have a few options:\n> \n> + BLOBs. PostgreSQL BLOBs make me nervous. I worry about\n> the BLOB not being deleted when the corresponding row in\n> the table is deleted. The documentation is vague.\n> \n> + What I really need is a binary *short* object type.\n> I have heard rumors of a legendary \"bytea\" type that might\n> help me, but it doesn't appear to be documented anywhere,\n> so I hesitate to use it.\n> \n> + I can base64-encode the data and store it in a \"text\"\n> field. But postgres is a great big data-storage system;\n> surely it can store binary data without resorting to\n> this kind of hack.\n> \n> What should I do? Please help. 
Thanks!\n> \n> \n\n\n",
"msg_date": "Tue, 23 Oct 2001 10:04:27 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: storing binary data"
},
{
"msg_contents": "\"Jason Orendorff\" <jason@jorendorff.com> writes:\n\n> Reply-To: sender\n\nJust to be nice, I'll do this. ;)\n\n> Hi. I was surprised to discover today that postgres's\n> character types don't support zero bytes. That is,\n> Postgres isn't 8-bit clean. Why is that?\n\nAs I understand it, the storage system itself is 8-bit clean; it's the\nparser layer that isn't (as it uses C strings everywhere). \n\n> More to the point, I need to store about 1k bytes per row\n> of varying-length 8-bit binary data. I have a few options:\n> \n> + BLOBs. PostgreSQL BLOBs make me nervous. I worry about\n> the BLOB not being deleted when the corresponding row in\n> the table is deleted. The documentation is vague.\n\nThis is an issue. There is definitely no automatic deletion of\nLOs. There is a 'vacuumlo' program in contrib/ that may be useful, or\nyou can roll your own, or you can use triggers to make sure LOs get\ndeleted.\n\nFWIW, I've been using LOBs in a couple of applications and haven't had \ntoo much trouble. \n\n> + What I really need is a binary *short* object type.\n> I have heard rumors of a legendary \"bytea\" type that might\n> help me, but it doesn't appear to be documented anywhere,\n> so I hesitate to use it.\n\nIt is in 7.1, but is more fully documented in 7.2 (which is entering\nbeta). See:\n\nhttp://candle.pha.pa.us/main/writings/pgsql/sgml/datatype-binary.html\n\n> + I can base64-encode the data and store it in a \"text\"\n> field. But postgres is a great big data-storage system;\n> surely it can store binary data without resorting to\n> this kind of hack.\n\nSince the only way to store or retrieve non-LOB data is to go through\nthe SQL parser, you always have to do some escaping. The link above\ntells you how to do it for 'bytea' without having to go the base64\nroute. \n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n",
"msg_date": "23 Oct 2001 13:05:56 -0400",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: storing binary data"
},
{
"msg_contents": "Use bytea. Search archives.\n\nOn Sun, 21 Oct 2001, Jason Orendorff wrote:\n\n> Reply-To: sender\n> \n> Hi. I was surprised to discover today that postgres's\n> character types don't support zero bytes. That is,\n> Postgres isn't 8-bit clean. Why is that?\n> \n> More to the point, I need to store about 1k bytes per row\n> of varying-length 8-bit binary data. I have a few options:\n> \n> + BLOBs. PostgreSQL BLOBs make me nervous. I worry about\n> the BLOB not being deleted when the corresponding row in\n> the table is deleted. The documentation is vague.\n> \n> + What I really need is a binary *short* object type.\n> I have heard rumors of a legendary \"bytea\" type that might\n> help me, but it doesn't appear to be documented anywhere,\n> so I hesitate to use it.\n> \n> + I can base64-encode the data and store it in a \"text\"\n> field. But postgres is a great big data-storage system;\n> surely it can store binary data without resorting to\n> this kind of hack.\n> \n> What should I do? Please help. Thanks!\n> \n> \n\n",
"msg_date": "Tue, 23 Oct 2001 13:14:29 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: storing binary data"
},
{
"msg_contents": "\"Jason Orendorff\" <jason@jorendorff.com> writes:\n> Hi. I was surprised to discover today that postgres's\n> character types don't support zero bytes. That is,\n> Postgres isn't 8-bit clean. Why is that?\n\n(a) because all our datatype I/O interfaces are based on C-style\n (null terminated) strings\n\n(b) because comparison of character datatypes is based on strcoll()\n (at least if you compiled with locale support)\n\nFixing either of these is far more pain than is justified to allow\npeople to store non-textual data in textual datatypes. I don't foresee\nit happening.\n\n> + What I really need is a binary *short* object type.\n> I have heard rumors of a legendary \"bytea\" type that might\n> help me, but it doesn't appear to be documented anywhere,\n> so I hesitate to use it.\n\nIt's real and it's not going away. It is pretty poorly documented\nand doesn't have a wide variety of functions ... but hey, you can help\nimprove that situation. This is an open source project after all ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 23 Oct 2001 13:49:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: storing binary data "
},
{
"msg_contents": ">> + What I really need is a binary *short* object type.\n>> I have heard rumors of a legendary \"bytea\" type that might\n>> help me, but it doesn't appear to be documented anywhere,\n>> so I hesitate to use it.\n>>\n> \n> It's real and it's not going away. It is pretty poorly documented\n> and doesn't have a wide variety of functions ... but hey, you can help\n> improve that situation. This is an open source project after all ;-)\n> \n> \t\t\tregards, tom lane\n\nI'll take a shot at improving the documentation for bytea. I'm hoping \ndocumentation patches are accepted during beta though ;-)\n\nAlso, FWIW, 7.2 includes bytea support for LIKE, NOT LIKE, LIKE ESCAPE, \n||, trim(), substring(), position(), length(), indexing, and various \ncomparators.\n\nJoe\n\n",
"msg_date": "Tue, 23 Oct 2001 23:53:07 -0700",
"msg_from": "Joe Conway <joseph.conway@home.com>",
"msg_from_op": false,
"msg_subject": "Re: storing binary data"
},
{
"msg_contents": "...\n> I'll take a shot at improving the documentation for bytea. I'm hoping\n> documentation patches are accepted during beta though ;-)\n\nAlways. At least up until a week or so before release, when we need to\nfirm up the docs and work on final cleanup etc. There are several\nannouncements leading up to that point, so it will not be a suprise.\n\n - Thomas\n",
"msg_date": "Wed, 24 Oct 2001 13:32:23 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: storing binary data"
},
{
"msg_contents": "Joe Conway <joseph.conway@home.com> writes:\n> I'll take a shot at improving the documentation for bytea. I'm hoping \n> documentation patches are accepted during beta though ;-)\n\nOf course. The only limitation we place during beta is \"no new features\nadded\". I plan to spend a good deal of time on the docs during beta\nmyself.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Oct 2001 12:59:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: storing binary data "
},
{
"msg_contents": "Jason Orendorff writes:\n\n> Hi. I was surprised to discover today that postgres's\n> character types don't support zero bytes. That is,\n> Postgres isn't 8-bit clean. Why is that?\n\nPostgreSQL is 8-bit clean. The character types don't support zero bytes\nbecause the character types store characters, not bytes.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 24 Oct 2001 20:57:48 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: storing binary data"
},
{
"msg_contents": "Quick question - I couldn't find this in the docs:\n\nWhat exactly is the advantage in using VIEWs? I get the impression that the\nSELECT query it is based on is cached (ie. a cached query plan).\n\nBut, is this cached between db restarts, between connections, etc. Is it\ncached upon the first use of the view for a db instance for a particular\nconnection, etc?\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tom Lane\n> Sent: Thursday, 25 October 2001 1:00 AM\n> To: Joe Conway\n> Cc: Jason Orendorff; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] storing binary data\n>\n>\n> Joe Conway <joseph.conway@home.com> writes:\n> > I'll take a shot at improving the documentation for bytea. I'm hoping\n> > documentation patches are accepted during beta though ;-)\n>\n> Of course. The only limitation we place during beta is \"no new features\n> added\". I plan to spend a good deal of time on the docs during beta\n> myself.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Thu, 25 Oct 2001 10:17:48 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: storing binary data "
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> What exactly is the advantage in using VIEWs?\n\nA level of logical indirection between the application and the physical\ndata schema. There are no performance benefits.\n\n> I get the impression that the\n> SELECT query it is based on is cached (ie. a cached query plan).\n\nNope. If there's something in the docs that makes you think so,\npoint out so I can fix it ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Oct 2001 22:21:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: storing binary data "
},
{
"msg_contents": "> > I get the impression that the\n> > SELECT query it is based on is cached (ie. a cached query plan).\n>\n> Nope. If there's something in the docs that makes you think so,\n> point out so I can fix it ;-)\n\nHmmm...I could have sworn that you mentioned in passing something about\ncached query plans and VIEWs - I must have been in dream land.\n\nChris\n\n",
"msg_date": "Thu, 25 Oct 2001 10:31:02 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: storing binary data "
},
{
"msg_contents": ">\n>I'll take a shot at improving the documentation for bytea. I'm hoping \n>documentation patches are accepted during beta though ;-)\n>\n>Also, FWIW, 7.2 includes bytea support for LIKE, NOT LIKE, LIKE ESCAPE, \n>||, trim(), substring(), position(), length(), indexing, and various \n>comparators.\n>\n\nCool!\n\nWould it be practical to use substring for retrieving chunks of binary data\nin manageable sizes? Or would the overheads be too high?\n\nCheerio,\nLink.\n\n",
"msg_date": "Thu, 25 Oct 2001 16:56:29 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: storing binary data"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n\n> What exactly is the advantage in using VIEWs? I get the impression that the\n> SELECT query it is based on is cached (ie. a cached query plan).\n\nI had the same impression but I've been told (with explanations) that\nthe query plan for a view is not cached in any way.\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n",
"msg_date": "Fri, 26 Oct 2001 11:11:44 +0300",
"msg_from": "Alessio Bragadini <alessio@albourne.com>",
"msg_from_op": false,
"msg_subject": "Re: storing binary data"
},
{
"msg_contents": "Lincoln Yeoh wrote:\n\n>>Also, FWIW, 7.2 includes bytea support for LIKE, NOT LIKE, LIKE ESCAPE, \n>>||, trim(), substring(), position(), length(), indexing, and various \n>>comparators.\n>>\n>>\n> \n> Cool!\n> \n> Would it be practical to use substring for retrieving chunks of binary data\n> in manageable sizes? Or would the overheads be too high?\n> \n> Cheerio,\n> Link.\n\nI haven't done any performance testing, but it should be no different \nthan the substring function used on TEXT fields. Try it out and let us \nknow ;-)\n\n-- Joe\n\n",
"msg_date": "Sat, 27 Oct 2001 18:02:57 -0700",
"msg_from": "Joe Conway <joseph.conway@home.com>",
"msg_from_op": false,
"msg_subject": "Re: storing binary data"
}
] |
[
{
"msg_contents": "Postgresql 7.1.3\n\nI'm having a problem with createlang.\n\nCommands:\n\n[postgres@boxy postgres]$ createdb test1\nPassword: <----- Correct\npassword\nCREATE DATABASE\n[postgres@boxy postgres]$ createlang plpgsql test1\nPassword: <----- Correct\npassword (does not say it was incorrect, the first character is upper\ncase)\nPassword: <-----\nIncorrect password \"something\"\npsql: Password authentication failed for user 'postgres'\nPassword: <-----\nIncorrect password \"something\"\npsql: Password authentication failed for user 'postgres'\ncreatelang: language installation failed\n[postgres@boxy postgres]$\n\nLogs corresponding to those commands:\n\n2001-10-22 15:15:22 [13115] DEBUG: connection: host=[local]\nuser=postgres database=template1\n2001-10-22 15:15:33 [13125] DEBUG: connection: host=[local]\nuser=postgres database=test1\nPassword authentication failed for user 'postgres'\nPassword authentication failed for user 'postgres'\n\npg_hba.conf entry:\nlocal all crypt\n\nNow again have a look at this (quite interesting):\n\n[postgres@boxy postgres]$ dropdb test1\nPassword:\nDROP DATABASE\n[postgres@boxy postgres]$ createdb test1\nPassword:\nCREATE DATABASE\n[postgres@boxy postgres]$ createlang -l test1\nPassword:\n Procedural languages\n Name | Trusted? | Compiler\n------+----------+----------\n(0 rows)\n\n[postgres@boxy postgres]$ createlang plpgsql test1\nPassword:\nPassword:\nPassword:\nPassword:\n[postgres@boxy postgres]$ createlang -l test1\nPassword:\n Procedural languages\n Name | Trusted? | Compiler\n---------+----------+----------\n plpgsql | t | PL/pgSQL\n(1 row)\n\n[postgres@boxy postgres]$\n\nI had to enter the password 4 times for it to create the language.\n\nThanks.\n\n",
"msg_date": "Mon, 22 Oct 2001 15:24:28 +1000 (EST)",
"msg_from": "speedboy <speedboy@nomicrosoft.org>",
"msg_from_op": true,
"msg_subject": "createlang difficulty."
},
{
"msg_contents": "I just tried it with current sources and got:\n\n\t#$ aspg createlang plpgsql test\n\tPassword: <- bad password\n\tFATAL 1: Password authentication failed for user \"postgres\"\n\tpsql: FATAL 1: Password authentication failed for user \"postgres\"\n\t\n\tcreatelang: external error\n\t#$ aspg createlang plpgsql test\n\tPassword: <- correct password\n\tPassword: <- correct password\n\tPassword: <- correct password\n\tPassword: <- correct password\n\nLooks OK to me.\n\n> Postgresql 7.1.3\n> \n> I'm having a problem with createlang.\n> \n> Commands:\n> \n> [postgres@boxy postgres]$ createdb test1\n> Password: <----- Correct\n> password\n> CREATE DATABASE\n> [postgres@boxy postgres]$ createlang plpgsql test1\n> Password: <----- Correct\n> password (does not say it was incorrect, the first character is upper\n> case)\n> Password: <-----\n> Incorrect password \"something\"\n> psql: Password authentication failed for user 'postgres'\n> Password: <-----\n> Incorrect password \"something\"\n> psql: Password authentication failed for user 'postgres'\n> createlang: language installation failed\n> [postgres@boxy postgres]$\n> \n> Logs corresponding to those commands:\n> \n> 2001-10-22 15:15:22 [13115] DEBUG: connection: host=[local]\n> user=postgres database=template1\n> 2001-10-22 15:15:33 [13125] DEBUG: connection: host=[local]\n> user=postgres database=test1\n> Password authentication failed for user 'postgres'\n> Password authentication failed for user 'postgres'\n> \n> pg_hba.conf entry:\n> local all crypt\n> \n> Now again have a look at this (quite interesting):\n> \n> [postgres@boxy postgres]$ dropdb test1\n> Password:\n> DROP DATABASE\n> [postgres@boxy postgres]$ createdb test1\n> Password:\n> CREATE DATABASE\n> [postgres@boxy postgres]$ createlang -l test1\n> Password:\n> Procedural languages\n> Name | Trusted? 
| Compiler\n> ------+----------+----------\n> (0 rows)\n> \n> [postgres@boxy postgres]$ createlang plpgsql test1\n> Password:\n> Password:\n> Password:\n> Password:\n> [postgres@boxy postgres]$ createlang -l test1\n> Password:\n> Procedural languages\n> Name | Trusted? | Compiler\n> ---------+----------+----------\n> plpgsql | t | PL/pgSQL\n> (1 row)\n> \n> [postgres@boxy postgres]$\n> \n> I had to enter the password 4 times for it to create the language.\n> \n> Thanks.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Oct 2001 14:33:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: createlang difficulty."
},
{
"msg_contents": "> I just tried it with current sources and got:\n> \n> \t#$ aspg createlang plpgsql test\n> \tPassword: <- bad password\n> \tFATAL 1: Password authentication failed for user \"postgres\"\n> \tpsql: FATAL 1: Password authentication failed for user \"postgres\"\n> \t\n> \tcreatelang: external error\n> \t#$ aspg createlang plpgsql test\n> \tPassword: <- correct password\n> \tPassword: <- correct password\n> \tPassword: <- correct password\n> \tPassword: <- correct password\n> \n> Looks OK to me.\n\nOk, so it connects four times. From a users perspective that might be\nconfusing. Is it possible to only prompt once for the password, just an\nidea I guess whoever created the program would want that to happen from a\neasy to use point of view. I.e. dummy proof?\n\nThankyou.\n\n",
"msg_date": "Tue, 23 Oct 2001 08:52:05 +1000 (EST)",
"msg_from": "speedboy <speedboy@nomicrosoft.org>",
"msg_from_op": true,
"msg_subject": "Re: createlang difficulty."
},
{
"msg_contents": "> > I just tried it with current sources and got:\n> > \n> > \t#$ aspg createlang plpgsql test\n> > \tPassword: <- bad password\n> > \tFATAL 1: Password authentication failed for user \"postgres\"\n> > \tpsql: FATAL 1: Password authentication failed for user \"postgres\"\n> > \t\n> > \tcreatelang: external error\n> > \t#$ aspg createlang plpgsql test\n> > \tPassword: <- correct password\n> > \tPassword: <- correct password\n> > \tPassword: <- correct password\n> > \tPassword: <- correct password\n> > \n> > Looks OK to me.\n> \n> Ok, so it connects four times. From a users perspective that might be\n> confusing. Is it possible to only prompt once for the password, just an\n> idea I guess whoever created the program would want that to happen from a\n> easy to use point of view. I.e. dummy proof?\n\nUh, yes, connecting once would be ideal. It currently runs each SQL\nquery it needs in psql and checks the exit status. Not sure how to code\nthat in one psql session.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Oct 2001 19:48:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: createlang difficulty."
},
{
"msg_contents": "speedboy <speedboy@nomicrosoft.org> writes:\n> Ok, so it connects four times. From a users perspective that might be\n> confusing. Is it possible to only prompt once for the password,\n\nThis would require replacing the createlang shell script with a\nspecialized C program. (Or, perhaps, adding conditional-execution\ncapability to psql scripts ... which would be very useful but an\nawful lot of work.)\n\nIt's unlikely to get to the top of anyone's to-do list any time soon,\nbecause the fact of the matter is that if you have Postgres configured\nto demand passwords for administrator connections, you're going to have\nlots of problems like this. createlang is not the only script that\ninvokes multiple programs --- pg_dumpall is another example that's\ngoing to be even harder to work around.\n\nThe better answer is to arrange things so that local connections don't\nneed passwords. One fairly portable approach is to run an IDENTD daemon\nand use ident auth for TCP connections through 127.0.0.1; then you just\nsay PGHOST=127.0.0.1 and you're home free.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 Oct 2001 19:54:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: createlang difficulty. "
}
] |
[
{
"msg_contents": "Hello PostgreSQl Users!\n\nPostSQL V 7.1.1:\n\nI have defined a table and the necessary indices.\nBut the index is not used in every SELECT. (Therefore, the selects are\n*very* slow, due to seq scan on\n20 million entries, which is a test setup up to now)\n\nThe definitions can be seen in the annex.\n\nDoes some body know the reason and how to circumvent the seq scan?\n\nIs the order of index creation relevant? I.e., should I create the\nindices before inserting\nentries or the other way around?\n\nShould a hashing index be used? (I tried this, but I got the known error\n\"Out of overflow pages\")\n(The docu on \"create index\" says :\n \"Notes \n\n The Postgres query optimizer will consider using a btree index\nwhenever an indexed attribute is involved in a\n comparison using one of: <, <=, =, >=, > \n\n The Postgres query optimizer will consider using an rtree index\nwhenever an indexed attribute is involved in a\n comparison using one of: <<, &<, &>, >>, @, ~=, && \n\n The Postgres query optimizer will consider using a hash index\nwhenever an indexed attribute is involved in a\n comparison using the = operator. \"\n\n\nThe table entry 'epoche' is used in two different indices. 
Should that\nbe avoided?\n\nAny suggestions are welcome.\n\nThank you in advance.\nReiner\n------------------------------\nAnnex:\n======\n\nTable:\n------\n\\d wetter\n Table \"wetter\"\n Attribute | Type | Modifier \n-----------+--------------------------+----------\n sensor_id | integer | not null\n epoche | timestamp with time zone | not null\n wert | real | not null\nIndices: wetter_epoche_idx,\n wetter_pkey\n\n \\d wetter_epoche_idx\n Index \"wetter_epoche_idx\"\n Attribute | Type \n-----------+--------------------------\n epoche | timestamp with time zone\nbtree\n\n\n\\d wetter_pkey\n Index \"wetter_pkey\"\n Attribute | Type \n-----------+--------------------------\n sensor_id | integer\n epoche | timestamp with time zone\nunique btree (primary key)\n\n\nSelect where index is used:\n============================\nexplain select * from wetter order by epoche desc; \nNOTICE: QUERY PLAN:\n\nIndex Scan Backward using wetter_epoche_idx on wetter \n(cost=0.00..3216018.59 rows=20340000 width=16)\n\nEXPLAIN\n\n\n\nSelect where the index is NOT used:\n===================================\nexplain select * from wetter where epoche between '1970-01-01' and\n'1980-01-01' order by epoche asc;\nNOTICE: QUERY PLAN:\n\nSort (cost=480705.74..480705.74 rows=203400 width=16)\n -> Seq Scan on wetter (cost=0.00..454852.00 rows=203400 width=16)\n\nEXPLAIN\n\n--\nMit freundlichen Gruessen / With best regards\n Reiner Dassing\n",
"msg_date": "Mon, 22 Oct 2001 08:42:40 +0200",
"msg_from": "Reiner Dassing <dassing@wettzell.ifag.de>",
"msg_from_op": true,
"msg_subject": "Index of a table is not used (in any case)"
},
{
"msg_contents": "Reiner Dassing <dassing@wettzell.ifag.de> writes:\n\n> Hello PostgreSQl Users!\n> \n> PostSQL V 7.1.1:\n> \n> I have defined a table and the necessary indices.\n> But the index is not used in every SELECT. (Therefore, the selects are\n> *very* slow, due to seq scan on\n> 20 million entries, which is a test setup up to now)\n\nPerennial first question: did you VACUUM ANALYZE?\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n",
"msg_date": "23 Oct 2001 00:00:24 -0400",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Index of a table is not used (in any case)"
},
{
"msg_contents": "> Hello PostgreSQl Users!\n>\n> PostSQL V 7.1.1:\n\nYou should upgrade to 7.1.3 at some point...\n\n> I have defined a table and the necessary indices.\n> But the index is not used in every SELECT. (Therefore, the selects are\n> *very* slow, due to seq scan on\n> 20 million entries, which is a test setup up to now)\n>\n> The definitions can be seen in the annex.\n>\n> Does some body know the reason and how to circumvent the seq scan?\n\nYes. You probably have not run 'VACUUM ANALYZE' on your large table.\n\n> Is the order of index creation relevant? I.e., should I create the\n> indices before inserting\n> entries or the other way around?\n\nIf you are inserting a great many entries, insert the data first and then\ncreate the indices - it will be much faster this way.\n\n> Should a hashing index be used? (I tried this, but I got the known error\n> \"Out of overflow pages\")\n\nJust do the default CREATE INDEX - btree should be fine... (probably)\n\n> The table entry 'epoche' is used in two different indices. Should that\n> be avoided?\n\nIt's not a problem, but just check your EXPLAIN output after the VACUUM to\ncheck that you have them right.\n\nChris\n\n",
"msg_date": "Tue, 23 Oct 2001 12:08:14 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Index of a table is not used (in any case)"
},
{
"msg_contents": "\nOn Mon, 22 Oct 2001, Reiner Dassing wrote:\n\n> Hello PostgreSQl Users!\n> \n> PostSQL V 7.1.1:\n> \n> I have defined a table and the necessary indices.\n> But the index is not used in every SELECT. (Therefore, the selects are\n> *very* slow, due to seq scan on\n> 20 million entries, which is a test setup up to now)\n> \n> The definitions can be seen in the annex.\n> \n> Does some body know the reason and how to circumvent the seq scan?\n> \n> Is the order of index creation relevant? I.e., should I create the\n> indices before inserting\n> entries or the other way around?\n> \n\nHave you run a vacuum analyze to update the statistics after the data was\nloaded?\n\n",
"msg_date": "Mon, 22 Oct 2001 21:27:07 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Index of a table is not used (in any case)"
},
{
"msg_contents": "Reinier,\n\nFor future notice, [SQL] is the correct list for this kind of inquiry.\nPlease do not post it to [HACKERS]. And please don't cross-post ... it\nresults in a lot of needless duplication of effort.\n\n> I have defined a table and the necessary indices.\n\n> Is the order of index creation relevant? I.e., should I create the\n> indices before inserting\n> entries or the other way around?\n\nUmmm ... not to be obvious, or anything, but did you VACCUUM ANALYZE\nafter populating your table?\n\nThere's also some special steps to take if you are regularly deleting\nlarge numbers of records.\n\n-Josh\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology josh@agliodbs.com\n and data management solutions (415) 565-7293\n for law firms, small businesses fax 621-2533\n and non-profit organizations. San Francisco\n",
"msg_date": "Mon, 22 Oct 2001 21:50:57 -0700",
"msg_from": "\"Josh Berkus\" <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: Index of a table is not used (in any case)"
},
{
"msg_contents": "Hello all!\n\nThank you for the answers I got.\n\nI would like to mention first, that I will use the [SQL] list for my\nanswers,\nregarding the notice of Josh Berkus.\n\nQ: \"did you use VACUUM ANALYZE\"?\nA: This table was a test bed, just using INSERTS without ANY deletes or\nupdates\n (See: vacuum verbose analyze wetter;\n NOTICE: --Relation wetter--\n NOTICE: Pages 149752: Changed 0, reaped 194, Empty 0, New 0; \n Tup 20340000: Vac 26169, Keep/VTL 0/0, Crash 0, UnUsed 0,\nMinLen 52, \n MaxLen 52; \n Re-using: Free/Avail. Space 1467792/1467792; \n EndEmpty/Avail. Pages 0/194. CPU 6.10s/1.78u sec.\n )\n\n\nQ: You should upgrade to 7.1.3?\nA: Can you tell me the specific the reason?\n\n\nAm afraid, that the real answer is not mentioned:\nWhy is the index used in the SELECT:\nselect * from wetter order by epoche desc;\n \n\nselect * from wetter where epoche between '1970-01-01' and '1980-01-01'\norder by epoche asc;\n\n?\n\nAny ideas?\n\n--\nMit freundlichen Gruessen / With best regards\n Reiner Dassing\n",
"msg_date": "Tue, 23 Oct 2001 09:01:04 +0200",
"msg_from": "Reiner Dassing <dassing@wettzell.ifag.de>",
"msg_from_op": true,
"msg_subject": "Re: Index of a table is not used (in any case)"
},
{
"msg_contents": "Reiner Dassing <dassing@wettzell.ifag.de> writes:\n\n> I would like to mention first, that I will use the [SQL] list for my\n> answers,\n> regarding the notice of Josh Berkus.\n> \n> Q: \"did you use VACUUM ANALYZE\"?\n> A: This table was a test bed, just using INSERTS without ANY deletes or\n> updates\n\nYou still need to run VACUUM ANALYZE. The ANALYZE part measures the\nstatistics of your data, which the planner needs in order to make\ndecision. \n\n\n> Am afraid, that the real answer is not mentioned:\n> Why is the index used in the SELECT:\n> select * from wetter order by epoche desc;\n> \n> \n> select * from wetter where epoche between '1970-01-01' and '1980-01-01'\n> order by epoche asc;\n\nIf you EXPLAIN output for these queries, someone can probably help\nyou. \n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n",
"msg_date": "23 Oct 2001 09:01:59 -0400",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: Index of a table is not used (in any case)"
},
{
"msg_contents": "In article <web-490372@davinci.ethosmedia.com>, Josh Berkus wrote:\n> Reinier,\n> \n> For future notice, [SQL] is the correct list for this kind of inquiry.\n> Please do not post it to [HACKERS]. And please don't cross-post ... it\n> results in a lot of needless duplication of effort.\n> \n>> I have defined a table and the necessary indices.\n> \n>> Is the order of index creation relevant? I.e., should I create the\n>> indices before inserting\n>> entries or the other way around?\n> \n> Ummm ... not to be obvious, or anything, but did you VACCUUM ANALYZE\n> after populating your table?\n> \n> There's also some special steps to take if you are regularly deleting\n> large numbers of records.\n\nCould you tell me what those steps are or where to find them? I have\na db that I delete about 1 million records a day from in a batch job.\nThe only special thing I do is every few days I reindex the table\ninvolved to reclame the space burned by the indexes not reclaiming\nspace on deletion of rows. What other good and useful things could I\ndo?\n\nThanks \n\nmarc\n\n\n> \n> -Josh\n> \n> ______AGLIO DATABASE SOLUTIONS___________________________\n> Josh Berkus\n> Complete information technology josh@agliodbs.com\n> and data management solutions (415) 565-7293\n> for law firms, small businesses fax 621-2533\n> and non-profit organizations. San Francisco\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n",
"msg_date": "Tue, 23 Oct 2001 13:28:47 GMT",
"msg_from": "marc@oscar.eng.cv.net (Marc Spitzer)",
"msg_from_op": false,
"msg_subject": "Re: Index of a table is not used (in any case)"
},
{
"msg_contents": "Doug McNaught wrote:\n\n> Reiner Dassing <dassing@wettzell.ifag.de> writes:\n>\n> > Hello PostgreSQl Users!\n> >\n> > PostSQL V 7.1.1:\n> >\n> > I have defined a table and the necessary indices.\n> > But the index is not used in every SELECT. (Therefore, the selects are\n> > *very* slow, due to seq scan on\n> > 20 million entries, which is a test setup up to now)\n>\n> Perennial first question: did you VACUUM ANALYZE?\n\nCan there, or could there, be a notion of \"rule based\" optimization of\nqueries in PostgreSQL? The \"not using index\" problem is probably the most\ncommon and most misunderstood problem.\n\n\n",
"msg_date": "Tue, 23 Oct 2001 10:40:00 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Index of a table is not used (in any case)"
},
{
"msg_contents": "Reiner Dassing <dassing@wettzell.ifag.de> writes:\n> explain select * from wetter order by epoche desc; \n> NOTICE: QUERY PLAN:\n\n> Index Scan Backward using wetter_epoche_idx on wetter \n> (cost=0.00..3216018.59 rows=20340000 width=16)\n\n> explain select * from wetter where epoche between '1970-01-01' and\n> '1980-01-01' order by epoche asc;\n> NOTICE: QUERY PLAN:\n\n> Sort (cost=480705.74..480705.74 rows=203400 width=16)\n> -> Seq Scan on wetter (cost=0.00..454852.00 rows=203400 width=16)\n\nIt's hard to believe that you've done a VACUUM ANALYZE on this table,\nsince you are getting a selectivity estimate of exactly 0.01, which\njust happens to be the default selectivity estimate for range queries.\nHow many rows are there really in this date range?\n\nAnyway, the reason the planner is picking a seqscan+sort is that it\nthinks that will be faster than an indexscan. It's not necessarily\nwrong. Have you compared the explain output and actual timings both\nways? (Use \"set enable_seqscan to off\" to force it to pick an indexscan\nfor testing purposes.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 23 Oct 2001 15:14:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Index of a table is not used (in any case) "
},
{
"msg_contents": "Hello Tom!\n\nTom Lane wrote:\n> \n> Reiner Dassing <dassing@wettzell.ifag.de> writes:\n> > explain select * from wetter order by epoche desc;\n> > NOTICE: QUERY PLAN:\n> \n> > Index Scan Backward using wetter_epoche_idx on wetter\n> > (cost=0.00..3216018.59 rows=20340000 width=16)\n> \n> > explain select * from wetter where epoche between '1970-01-01' and\n> > '1980-01-01' order by epoche asc;\n> > NOTICE: QUERY PLAN:\n> \n> > Sort (cost=480705.74..480705.74 rows=203400 width=16)\n> > -> Seq Scan on wetter (cost=0.00..454852.00 rows=203400 width=16)\n> \n> It's hard to believe that you've done a VACUUM ANALYZE on this table,\n> since you are getting a selectivity estimate of exactly 0.01, which\n> just happens to be the default selectivity estimate for range queries.\n> How many rows are there really in this date range?\n> \nWell, I did not claim that I made a VACUUM ANALYZE, I just set up a new\ntable\nfor testing purposes doing just INSERTs.\n\nAfter VACUUM ANALYSE the results look like:\nexplain select * from wetter where epoche between '1970-01-01' and\ntest_wetter-# '1980-01-01' order by epoche asc;\nNOTICE: QUERY PLAN:\n\nIndex Scan using wetter_epoche_idx on wetter (cost=0.00..3313780.74\nrows=20319660 width=16)\n\nEXPLAIN\n\nNow, the INDEX Scan is used and therefore, the query is very fast, as\nexpected.\n\nFor me, as a user not being involved in all the intrinsics of\nPostgreSQL, the question was\n\n\"Why is this SELECT so slow?\" (this question is asked a lot of times in\nthese mailing lists)\n\nNow, I would like to say thank you! You have explained to me and hopefully\nmany more users\nwhat is going on behind the scenes.\n\n> Anyway, the reason the planner is picking a seqscan+sort is that it\n> thinks that will be faster than an indexscan. It's not necessarily\n> wrong. Have you compared the explain output and actual timings both\n> ways? (Use \"set enable_seqscan to off\" to force it to pick an indexscan\n> for testing purposes.)\n> \n> regards, tom lane\n\n--\nMit freundlichen Gruessen / With best regards\n Reiner Dassing\n",
"msg_date": "Thu, 25 Oct 2001 14:52:48 +0200",
"msg_from": "Reiner Dassing <dassing@wettzell.ifag.de>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Index of a table is not used (in any case)"
}
] |
[
{
"msg_contents": "Hi Bruce,\n\nyou might add that I did the following useful enhancement to ECPG:\n\n- EXECUTE ... INTO ... implemented\n- multiple row descriptor support (e.g. CARDINALITY)\n\nI don't feel that my humble contribution of a few lines is important but\nthe improvement made really is important (n times performance if you use\nit).\n\nYours\n Christof\n\n\n",
"msg_date": "Mon, 22 Oct 2001 09:43:07 +0200",
"msg_from": "Christof Petig <christof@petig-baender.de>",
"msg_from_op": true,
"msg_subject": "HISTORY (ecpg enhancements not yet mentioned)"
},
{
"msg_contents": "\nAdded. I will update the HISTORY file today or tomorrow to add newer\nchanges than 2001-09-13.\n\n\n---------------------------------------------------------------------------\n\n> Hi Bruce,\n> \n> you might add that I did the following useful enhancement to ECPG:\n> \n> - EXECUTE ... INTO ... implemented\n> - multiple row descriptor support (e.g. CARDINALITY)\n> \n> I don't feel that my humble contribution of a few lines is important but\n> the improvement made really is important (n times performance if you use\n> it).\n> \n> Yours\n> Christof\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Oct 2001 14:42:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: HISTORY (ecpg enhancements not yet mentioned)"
}
] |
[
{
"msg_contents": "Your <enigma@sevensages.org> address bounced. Do you have another one?\n\nThanks,\n\nBill\n\n",
"msg_date": "Mon, 22 Oct 2001 03:59:29 -0700 (PDT)",
"msg_from": "Bill Studenmund <wrstuden@netbsd.org>",
"msg_from_op": true,
"msg_subject": "For John Havard, please ignore otherwise"
}
] |
[
{
"msg_contents": "On Tue, 23 Oct 2001, Bruce Momjian wrote:\n\n> > Dear all,\n> >\n> > Would it be possible to implement CREATE OR REPLACE VIEW / TRIGGER in\n> > PostgreSQL 7.2?\n\nProbably not, it's rather late in the cycle (isn't beta imminent?). Oh,\nI'd vote for \"OR REPLACE\" as there's already an opt_or_replace\nnon-terminal in the parser. Adding an optional \"OR DROP\" might displease\nyacc, and also follows in the same vein as what we have for CREATE\nFUNCTION.\n\n> > Alternatively, could someone implement CREATE OR DROP VIEW / TRIGGER? These\n> > features are needed for pgAdmin II (we could also provide a patch for\n> > PhpPgAdmin). If this cannot be implemented in PostgreSQL, we will go for\n> > pseudo-modification solutions (which is definitely not a good solution).\n>\n> Our current CREATE OR REPLACE FUNCTION perserves the OID of the\n> function. Is there similar functionality you need where a simple\n> DROP (ignore the error), CREATE will not work?\n\nIf possible, it's nice to not have commands whose error codes you ignore.\nThat way if you see an error, you know you need to do something about it.\n\nTake care,\n\nBill\n\n",
"msg_date": "Mon, 22 Oct 2001 11:45:04 -0700 (PDT)",
"msg_from": "Bill Studenmund <wrstuden@netbsd.org>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] CREATE OR REPLACE VIEW/TRIGGER"
},
{
"msg_contents": "On Tue, 23 Oct 2001, Bruce Momjian wrote:\n\n> > If possible, it's nice to not have commands whose error codes you ignore.\n> > That way if you see an error, you know you need to do something about it.\n>\n> Folks, is this a valid reason for adding OR REPLACE to all CREATE object\n> commands?\n\nSounds good to me. :-)\n\nTake care,\n\nBill\n\n",
"msg_date": "Tue, 23 Oct 2001 03:34:29 -0700 (PDT)",
"msg_from": "Bill Studenmund <wrstuden@netbsd.org>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] CREATE OR REPLACE VIEW/TRIGGER"
},
{
"msg_contents": "Dear all,\n\nWould it be possible to implement CREATE OR REPLACE VIEW / TRIGGER in \nPostgreSQL 7.2?\n\nAlternatively, could someone implement CREATE OR DROP VIEW / TRIGGER? These \nfeatures are needed for pgAdmin II (we could also provide a patch for \nPhpPgAdmin). If this cannot be implemented in PostgreSQL, we will go for \npseudo-modification solutions (which is definitely not a good solution).\n\nWe are also waiting for a proper ALTER table DROP column but we are day \ndreamers...\n\nThanks for your help and understanding.\nBest regards,\nJean-Michel POURE\npgAdmin team\n",
"msg_date": "Tue, 23 Oct 2001 17:16:06 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "CREATE OR REPLACE VIEW/TRIGGER"
},
{
"msg_contents": "Jean-Michel POURE <jm.poure@freesurf.fr> writes:\n> Would it be possible to implement CREATE OR REPLACE VIEW / TRIGGER in \n> PostgreSQL 7.2?\n\nWe're already vastly overdue for beta. The time for new feature\nrequests for 7.2 is past ... especially nontrivial requests.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 23 Oct 2001 14:29:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CREATE OR REPLACE VIEW/TRIGGER "
},
{
"msg_contents": "> We are also waiting for a proper ALTER table DROP column but we are day \n> dreamers...\n\nThis is a good example of bad management on our parts. We couldn't\ndecide between two possible DROP COLUMN implementations, so we now have\nthe worst result, which is no implementation at all.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 Oct 2001 19:19:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CREATE OR REPLACE VIEW/TRIGGER"
},
{
"msg_contents": "> Dear all,\n> \n> Would it be possible to implement CREATE OR REPLACE VIEW / TRIGGER in \n> PostgreSQL 7.2?\n> \n> Alternatively, could someone implement CREATE OR DROP VIEW / TRIGGER? These \n> features are needed for pgAdmin II (we could also provide a patch for \n> PhpPgAdmin). If this cannot be implemented in PostgreSQL, we will go for \n> pseudo-modification solutions (which is definitely not a good solution).\n\nOur current CREATE OR REPLACE FUNCTION preserves the OID of the\nfunction. Is there similar functionality you need where a simple\nDROP (ignore the error), CREATE will not work?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 Oct 2001 19:22:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CREATE OR REPLACE VIEW/TRIGGER"
},
{
"msg_contents": "> > > Alternatively, could someone implement CREATE OR DROP VIEW / TRIGGER? These\n> > > features are needed for pgAdmin II (we could also provide a patch for\n> > > PhpPgAdmin). If this cannot be implemented in PostgreSQL, we will go for\n> > > pseudo-modification solutions (which is definitely not a good solution).\n> >\n> > Our current CREATE OR REPLACE FUNCTION preserves the OID of the\n> > function. Is there similar functionality you need where a simple\n> > DROP (ignore the error), CREATE will not work?\n> \n> If possible, it's nice to not have commands whose error codes you ignore.\n> That way if you see an error, you know you need to do something about it.\n\nFolks, is this a valid reason for adding OR REPLACE to all CREATE object\ncommands?\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 Oct 2001 20:56:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CREATE OR REPLACE VIEW/TRIGGER"
},
{
"msg_contents": "\nI have added this to the TODO list:\n\n\t* Add OR REPLACE clauses to non-FUNCTION object creation\n\nI think there are clearly some other objects that need OR REPLACE. Not\nsure which ones yet.\n\n\n---------------------------------------------------------------------------\n\n> Dear all,\n> \n> Would it be possible to implement CREATE OR REPLACE VIEW / TRIGGER in \n> PostgreSQL 7.2?\n> \n> Alternatively, could someone implement CREATE OR DROP VIEW / TRIGGER? These \n> features are needed for pgAdmin II (we could also provide a patch for \n> PhpPgAdmin). If this cannot be implemented in PostgreSQL, we will go for \n> pseudo-modification solutions (which is definitely not a good solution).\n> \n> We are also waiting for a proper ALTER table DROP column but we are day \n> dreamers...\n> \n> Thanks for your help and comprehension.\n> Best regards,\n> Jean-Michel POURE\n> pgAdmin team\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 Oct 2001 21:01:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CREATE OR REPLACE VIEW/TRIGGER"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Our current CREATE OR REPLACE FUNCTION preserves the OID of the\n> function. Is there similar functionality you need where a simple\n> DROP (ignore the error), CREATE will not work?\n>> \n>> If possible, it's nice to not have commands whose error codes you ignore.\n>> That way if you see an error, you know you need to do something about it.\n\n> Folks, is this a valid reason for adding OR REPLACE to all CREATE object\n> commands?\n\nNot until we do the necessary legwork. I spent a good deal of time over\nthe past week making the various PL modules react to replacement of\npg_proc entries by CREATE OR REPLACE FUNCTION (cf. complaint from Peter\na week or so back). CREATE OR REPLACE VIEW implies updating cached\nquery plans, and I'm not sure what CREATE OR REPLACE TRIGGER implies.\nBut I am pretty sure it's not a trivial question.\n\nIn short: put it on the todo list, but note that there are some\nimplications...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Oct 2001 00:53:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CREATE OR REPLACE VIEW/TRIGGER "
},
{
"msg_contents": "> Not until we do the necessary legwork. I spent a good deal of time over\n> the past week making the various PL modules react to replacement of\n> pg_proc entries by CREATE OR REPLACE FUNCTION (cf. complaint from Peter\n> a week or so back). CREATE OR REPLACE VIEW implies updating cached\n> query plans, and I'm not sure what CREATE OR REPLACE TRIGGER implies.\n> But I am pretty sure it's not a trivial question.\n> \n> In short: put it on the todo list, but note that there are some\n> implications...\n\nThat's all I needed to know.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Oct 2001 00:55:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CREATE OR REPLACE VIEW/TRIGGER"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > > > Alternatively, could someone implement CREATE OR DROP VIEW / TRIGGER? These\n> > > > features are needed for pgAdmin II (we could also provide a patch for\n> > > > PhpPgAdmin). If this cannot be implemented in PostgreSQL, we will go for\n> > > > pseudo-modification solutions (which is definitely not a good solution).\n> > >\n> > > Our current CREATE OR REPLACE FUNCTION preserves the OID of the\n> > > function. Is there similar functionality you need where a simple\n> > > DROP (ignore the error), CREATE will not work?\n> >\n> > If possible, it's nice to not have commands whose error codes you ignore.\n> > That way if you see an error, you know you need to do something about it.\n> \n> Folks, is this a valid reason for adding OR REPLACE to all CREATE object\n> commands?\n\nWell, Oracle has CREATE OR REPLACE for:\n\nViews\nFunctions\nProcedures\nTriggers\nTypes\nPackages\n\nbut not for (at least 8.0.5):\n\nTables\nIndexes\nSequences\n\nAt first glance, I'm not sure why Oracle doesn't allow for the\nreplacement of the non-\"compiled\" objects. Perhaps the complexities\ninvolved in enforcing RI were too much. The *major* advantage to\nallowing a REPLACE in Oracle is to preserve permissions granted to\nvarious users and groups (aka ROLES). Oracle automatically\nrecompiles views, functions, procedures, etc. if their underlying\ndependencies change:\n\nSQL> CREATE TABLE employees (key integer, salary float);\n\nTable created.\n\nSQL> CREATE VIEW salaries AS SELECT * FROM employees WHERE salary <\n15000;\n\nView created.\n\nSQL> SELECT * FROM salaries;\n\nno rows selected\n\nSQL> DROP TABLE employees;\n\nTable dropped.\n\nSQL> SELECT * FROM salaries;\nSELECT * FROM salaries\n *\nERROR at line 1:\nORA-04063: view \"MASCARM.SALARIES\" has errors\n\n\nSQL> CREATE TABLE employees (key integer, salary float);\n\nTable created.\n\nSQL> SELECT * FROM salaries;\n\nno rows selected\n\nSo it seems to me that the major reason is to preserve GRANT/REVOKE\nprivileges issues against the object in question.\n\nFWIW,\n\nMike Mascari\nmascarm@mascari.com\n",
"msg_date": "Wed, 24 Oct 2001 01:13:43 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE OR REPLACE VIEW/TRIGGER"
},
{
"msg_contents": "Bill Studenmund writes:\n\n> > Our current CREATE OR REPLACE FUNCTION perserves the OID of the\n> > function. Is there similar functionality you need where a simple\n> > DROP (ignore the error), CREATE will not work?\n>\n> If possible, it's nice to not have commands whose error codes you ignore.\n> That way if you see an error, you know you need to do something about it.\n\nTechnically, it's not an error, it's an \"exception condition\". This might\nmake you feel better when consciously ignoring it. ;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 24 Oct 2001 23:56:01 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CREATE OR REPLACE VIEW/TRIGGER"
},
{
"msg_contents": "On Tue, 23 Oct 2001 17:16:06 +0200, you wrote:\n>CREATE OR DROP VIEW \n\nIs this for real? If I were a database server I would say to the\nclient \"please make up your mind\" :-)\n\nRegards,\nRené Pijlman <rene@lab.applinet.nl>\n",
"msg_date": "Sat, 27 Oct 2001 13:50:28 +0200",
"msg_from": "Rene Pijlman <rene@lab.applinet.nl>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CREATE OR REPLACE VIEW/TRIGGER"
},
{
"msg_contents": "\n> >CREATE OR DROP VIEW\n>Is this for real? If I were a database server I would say to the\n>client \"please make up your mind\" :-)\n\nI meant DROP IF EXISTS and then CREATE.\nThis is more simple to implement than CREATE OR REPLACE.\n\nBest regards,\nJean-Michel POURE\n",
"msg_date": "Sat, 27 Oct 2001 15:50:11 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: CREATE OR REPLACE VIEW/TRIGGER"
}
] |
[
{
"msg_contents": "I just checked:\n\n\tftp://ftp.us.postgresql.org/dev/postgresql-base-snapshot.tar.gz\n\nand the snapshot has the proper file contents, showing doc/TODO with a\ndate of October 19th.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Oct 2001 15:10:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "snapshots now working"
}
] |
[
{
"msg_contents": "Hello, I'm trying to set up a trigger on insert or update\nbut when using the predefined variable ``OLD'' I get a\nNOTICE from the trigger function about OLD not being defined yet.\n\nOf course OLD is not defined when the function is triggered on an INSERT\nevent, so I only reference it inside a conditional block\nchecking for the TG_OP variable being 'UPDATE'.\n\nFor better understanding here is some code:\n\nBEGIN\n\tIF TG_OP = 'UPDATE' THEN\n\t\tIF OLD.id <> NEW.id THEN\n\t\t\t-- do the work\n\t\tEND IF;\n\tEND IF;\nEND;\n\nEven when TG_OP != 'UPDATE' (INSERT event) I still get an error\nmessage from the pl/pgsql compiler (the first time the trigger is fired).\n\nWhat should I do then? Is it still possible to use the same function\nfor UPDATE OR INSERT events?\n\nTIA\n\n--san;\n",
"msg_date": "Mon, 22 Oct 2001 23:25:44 +0200",
"msg_from": "san@cobalt.rmnet.it",
"msg_from_op": true,
"msg_subject": "PL/pgSQL triggers ON INSERT OR UPDATE"
},
{
"msg_contents": "First, I may be wrong but I do think they would prefer if you did not cross-post (especially to hackers).\n\nSecond, I think it probably makes more sense to create two different triggers here.\n\nIf you really wanted to do it that way you might want to try using EXECUTE for that part.\n\nRegards,\n\nAasmund.\n\nOn Mon, 22 Oct 2001 23:25:44 +0200, san@cobalt.rmnet.it wrote:\n> Hello, I'm trying to set up a trigger on insert or update\n> but when using the predefined variable ``OLD'' I get a\n> NOTICE from the trigger function about OLD not being defined yet.\n> \n> Of course OLD is not defined when the function is triggered on INSERT\n> event, and I did not mention it if not inside a conditional block\n> checking for the TG_OP variable being 'UPDATE'.\n> \n> For better understanding here is some code:\n> \n> BEGIN\n> \tIF TG_OP = 'UPDATE' THEN\n> \t\tIF OLD.id <> NEW.id THEN\n> \t\t\t-- do the work\n> \t\tEND IF\n> \tEND IF;\n> END;\n> \n> Even when TG_OP != 'UPDATE' (INSERT event) I still get an error\n> message from the pl/pgsql compiler (the first time the trigger is fired).\n> \n> What should I do then ? Is it still possible to use the same function\n> for UPDATE OR INSERT events ?\n> \n> TIA\n> \n> --san;\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\nAasmund Midttun Godal\n\naasmund@godal.com - http://www.godal.com/\n+47 40 45 20 46\n",
"msg_date": "Tue, 23 Oct 2001 16:31:12 GMT",
"msg_from": "\"Aasmund Midttun Godal\" <postgresql@envisity.com>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL triggers ON INSERT OR UPDATE"
}
] |
[
{
"msg_contents": "Folks,\n\nWe have some big tables (1.2 billion records) and indexing is quite\ntime consuming. Since we have this running on a dual Athlon box, it\nwould be great to make indices in parallel.\n\nOn Postgresql 7.1.3, it seems that the table is locked after the \nfirst \"create index\" is started up. Is this right? Is there any\nway to do this in parallel?\n\n--Martin\n\n\n\n",
"msg_date": "Mon, 22 Oct 2001 17:38:23 -0400",
"msg_from": "Martin Weinberg <weinberg@osprey.astro.umass.edu>",
"msg_from_op": true,
"msg_subject": "Using an SMP machine to make multiple indices on the same table"
},
{
"msg_contents": "Martin Weinberg <weinberg@osprey.astro.umass.edu> writes:\n> On Postgresql 7.1.3, it seems that the table is locked after the \n> first \"create index\" is started up. Is this right?\n\nAFAIK it's a share lock, which only prohibits modifications to the\ntable, not reads (nor concurrent index builds). Not sure how you\nexpect the system to do better than that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 Oct 2001 23:09:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Using an SMP machine to make multiple indices on the same table "
},
{
"msg_contents": "On Mon, Oct 22, 2001 at 05:38:23PM -0400, Martin Weinberg wrote:\n> Folks,\n> \n> We have some big tables (1.2 billion records) and indexing is quite\n> time consuming. Since we have this running on dual Athlon box, it\n> would be great to make indices in parallel.\n> \n> On Postgresql 7.1.3, it seems that the table is locked after the \n> first \"create index\" is started up. Is this right? Is there any\n> way to do this in parallel?\n\nMy question is, would it help? The creation of the index should only be\nlimited by the bandwidth of the drives. I would think that creating two\nindexes at the same time would simply thrash the disk a lot and end up being\nslower.\n\nThe answers to your questions, however, are yes and no respectively.\n\nHTH,\n-- \nMartijn van Oosterhout <kleptog@svana.org>\nhttp://svana.org/kleptog/\n> Magnetism, electricity and motion are like a three-for-two special offer:\n> if you have two of them, the third one comes free.\n",
"msg_date": "Tue, 23 Oct 2001 13:52:08 +1000",
"msg_from": "Martijn van Oosterhout <kleptog@svana.org>",
"msg_from_op": false,
"msg_subject": "Re: Using an SMP machine to make multiple indices on the same table"
},
{
"msg_contents": "Tom,\n\nYes, I understand locking the table, but empirically, two index\ncreations will not run simultaneously on the same table. So if \nI start (and background) two \n\n\tpsql -c \"create index one on mytable . . .\" database\n\tpsql -c \"create index two on mytable . . .\" database\n\ncommands. The first one starts and the second one waits until the \nfirst is finished (as tracked by \"ps avx\" or \"top\").\n\n--Martin\n\nTom Lane wrote on Mon, 22 Oct 2001 23:09:26 EDT\n>Martin Weinberg <weinberg@osprey.astro.umass.edu> writes:\n>> On Postgresql 7.1.3, it seems that the table is locked after the \n>> first \"create index\" is started up. Is this right?\n>\n>AFAIK it's a share lock, which only prohibits modifications to the\n>table, not reads (nor concurrent index builds). Not sure how you\n>expect the system to do better than that.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n>http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n\n",
"msg_date": "Tue, 23 Oct 2001 13:59:32 -0400",
"msg_from": "Martin Weinberg <weinberg@osprey.astro.umass.edu>",
"msg_from_op": true,
"msg_subject": "Re: Using an SMP machine to make multiple indices on the "
},
{
"msg_contents": "Tom,\n\nI should have forwarded you the ps output; here are the relevant lines:\n\n*******************************************************************************\n 294 ttyp0 S 0:00 203 108 1991 836 0.0 psql -e -c create \nindex v3_pscat_k_m_idx on v3_pscat(k_m) wsdb\n 295 ? R 0:27 2170 1425 17122 13252 1.4 postgres: postgres \nwsdb [local] CREATE\n 296 ttyp0 S 0:00 203 108 1991 836 0.0 psql -e -c create \nindex v3_pscat_h_m_idx on v3_pscat(h_m) wsdb\n 297 ? S 0:00 190 1425 11858 2436 0.2 postgres: postgres \nwsdb [local] CREATE waiting\n 300 ttyp0 R 0:00 273 55 3016 1384 0.1 ps avx\n*******************************************************************************\n\nNote the \"CREATE waiting\" process . . .\n\n--Martin\n\nTom Lane wrote on Mon, 22 Oct 2001 23:09:26 EDT\n>Martin Weinberg <weinberg@osprey.astro.umass.edu> writes:\n>> On Postgresql 7.1.3, it seems that the table is locked after the \n>> first \"create index\" is started up. Is this right?\n>\n>AFAIK it's a share lock, which only prohibits modifications to the\n>table, not reads (nor concurrent index builds). Not sure how you\n>expect the system to do better than that.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n>http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n\n",
"msg_date": "Tue, 23 Oct 2001 14:14:30 -0400",
"msg_from": "Martin Weinberg <weinberg@osprey.astro.umass.edu>",
"msg_from_op": true,
"msg_subject": "Re: Using an SMP machine to make multiple indices on the "
},
{
"msg_contents": "Martin Weinberg <weinberg@osprey.astro.umass.edu> writes:\n> Yes, I understand locking the table, but empirically, two index\n> creations will not run simultaneously on the same table.\n\nHmm, on trying it you are right. The second index creation blocks here:\n\n#6 0x1718e0 in XactLockTableWait (xid=17334) at lmgr.c:344\n#7 0x9e530 in heap_mark4update (relation=0xc1be62f8, tuple=0x7b03b7f0,\n buffer=0x7b03b828) at heapam.c:1686\n#8 0xcb410 in LockClassinfoForUpdate (relid=387785, rtup=0x7b03b7f0,\n buffer=0x7b03b828, confirmCommitted=0 '\\000') at index.c:1131\n#9 0xcb534 in IndexesAreActive (relid=387785, confirmCommitted=1 '\\001')\n at index.c:1176\n#10 0xf0f04 in DefineIndex (heapRelationName=0x400aab20 \"tenk1\",\n indexRelationName=0x400aab00 \"anotherj\", accessMethodName=0x59f48 \"btree\",\n attributeList=0x400aab80, unique=0, primary=0, predicate=0x0,\n rangetable=0x0) at indexcmds.c:133\n#11 0x17e118 in ProcessUtility (parsetree=0x400aaba0, dest=Remote)\n at utility.c:905\n\nEssentially it's trying to do a SELECT FOR UPDATE on the pg_class tuple\nof the relation before it starts building the index.\n\nI have opined before that LockClassinfoForUpdate is a mistake that\nshouldn't exist at all, since acquiring the proper lock on the relation\nought to be sufficient. I see no need for locking the pg_class tuple,\nand certainly none for doing so at the beginning of the operation rather\nthan the end.\n\nHiroshi, I think you defended it last time; any comments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 23 Oct 2001 14:16:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Using an SMP machine to make multiple indices on the\n\tsame table"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Martin Weinberg <weinberg@osprey.astro.umass.edu> writes:\n> > Yes, I understand locking the table, but empirically, two index\n> > creations will not run simultaneously on the same table.\n> \n> Hmm, on trying it you are right. The second index creation blocks here:\n> \n> #6 0x1718e0 in XactLockTableWait (xid=17334) at lmgr.c:344\n> #7 0x9e530 in heap_mark4update (relation=0xc1be62f8, tuple=0x7b03b7f0,\n> buffer=0x7b03b828) at heapam.c:1686\n> #8 0xcb410 in LockClassinfoForUpdate (relid=387785, rtup=0x7b03b7f0,\n> buffer=0x7b03b828, confirmCommitted=0 '\\000') at index.c:1131\n> #9 0xcb534 in IndexesAreActive (relid=387785, confirmCommitted=1 '\\001')\n> at index.c:1176\n> #10 0xf0f04 in DefineIndex (heapRelationName=0x400aab20 \"tenk1\",\n> indexRelationName=0x400aab00 \"anotherj\", accessMethodName=0x59f48 \"btree\",\n> attributeList=0x400aab80, unique=0, primary=0, predicate=0x0,\n> rangetable=0x0) at indexcmds.c:133\n> #11 0x17e118 in ProcessUtility (parsetree=0x400aaba0, dest=Remote)\n> at utility.c:905\n> \n> Essentially it's trying to do a SELECT FOR UPDATE on the pg_class tuple\n> of the relation before it starts building the index.\n> \n> I have opined before that LockClassinfoForUpdate is a mistake that\n> shouldn't exist at all, since acquiring the proper lock on the relation\n> ought to be sufficient.\n\nAs I've already mentioned many times I never agree with you.\n\n> I see no need for locking the pg_class tuple,\n> and certainly none for doing so at the beginning of the operation rather\n> than the end.\n> \n> Hiroshi, I think you defended it last time; any comments?\n\nHmm the exclusive row level lock by FOR UPDATE is too strong\nin this case. OK I would change IndexesAreActive() to not\nacquire a lock on the pg_class tuple for user tables because\nreindex doesn't need to handle relhasindex for user tables\nsince 7.1.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 24 Oct 2001 14:58:42 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Using an SMP machine to make multiple indices on the\n\tsame"
},
{
"msg_contents": "> -----Original Message-----\n> From: Hiroshi Inoue\n>\n> Tom Lane wrote:\n> >\n> > Martin Weinberg <weinberg@osprey.astro.umass.edu> writes:\n> > > Yes, I understand locking the table, but empirically, two index\n> > > creations will not run simultaneously on the same table.\n> >\n> > Hmm, on trying it you are right. The second index creation blocks here:\n> >\n> > #6 0x1718e0 in XactLockTableWait (xid=17334) at lmgr.c:344\n> > #7 0x9e530 in heap_mark4update (relation=0xc1be62f8, tuple=0x7b03b7f0,\n> > buffer=0x7b03b828) at heapam.c:1686\n> > #8 0xcb410 in LockClassinfoForUpdate (relid=387785, rtup=0x7b03b7f0,\n> > buffer=0x7b03b828, confirmCommitted=0 '\\000') at index.c:1131\n> > #9 0xcb534 in IndexesAreActive (relid=387785,\n> confirmCommitted=1 '\\001')\n> > at index.c:1176\n> > #10 0xf0f04 in DefineIndex (heapRelationName=0x400aab20 \"tenk1\",\n> > indexRelationName=0x400aab00 \"anotherj\",\n> accessMethodName=0x59f48 \"btree\",\n> > attributeList=0x400aab80, unique=0, primary=0, predicate=0x0,\n> > rangetable=0x0) at indexcmds.c:133\n> > #11 0x17e118 in ProcessUtility (parsetree=0x400aaba0, dest=Remote)\n> > at utility.c:905\n> >\n> > Essentially it's trying to do a SELECT FOR UPDATE on the pg_class tuple\n> > of the relation before it starts building the index.\n> >\n> > I have opined before that LockClassinfoForUpdate is a mistake that\n> > shouldn't exist at all, since acquiring the proper lock on the relation\n> > ought to be sufficient.\n>\n> As I've already mentioned many times I never agree with you.\n>\n> > I see no need for locking the pg_class tuple,\n> > and certainly none for doing so at the beginning of the operation rather\n> > than the end.\n> >\n> > Hiroshi, I think you defended it last time; any comments?\n>\n> Hmm the exclusive row level lock by FOR UPDATE is too strong\n> in this case. OK I would change IndexesAreActive() to not\n> acquire a lock on the pg_class tuple for user tables because\n> reindex doesn't need to handle relhasindex for user tables\n> since 7.1.\n\nIn the end, I changed DefineIndex() to not call IndexesAreActive().\n\nregards,\nHiroshi Inoue\n\n",
"msg_date": "Thu, 25 Oct 2001 00:23:47 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Using an SMP machine to make multiple indices on the\n\tsame"
},
{
"msg_contents": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> In the end, I changed DefineIndex() to not call IndexesAreActive().\n\nI saw that. But is it a good solution? If someone has deactivated\nindexes on a user table (ie turned off relhasindex), then creating a\nnew index would activate them again, which would probably be bad.\n\nI have realized that this code is wrong anyway, because it doesn't\nacquire ShareLock on the relation until far too late; all the setup\nprocessing is done with no lock at all :-(. LockClassinfoForUpdate\nprovided a little bit of security against concurrent schema changes,\nthough not enough.\n\nAlso, I'm now a little worried about whether concurrent index creations\nwill actually work. Both CREATE INDEX operations will try to update\nthe pg_class tuple to set relhasindex true. Since they use\nsimple_heap_update for that, the second one is likely to fail\nbecause simple_heap_update doesn't handle concurrent updates.\n\nI think what we probably want is\n\n\t1. Acquire ShareLock at the very start.\n\n\t2. Check for indexes present but relhasindex = false,\n\t if so complain.\n\n\t3. Build the index.\n\n\t4. Update pg_class tuple, being prepared for concurrent\n\t updates (ie, do NOT use simple_heap_update here).\n\nI still don't see any value in LockClassinfoForUpdate, however.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Oct 2001 13:49:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Using an SMP machine to make multiple indices on the\n\tsame"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> > In the end, I changed DefineIndex() to not call IndexesAreActive().\n> \n> I saw that. But is it a good solution? If someone has deactivated\n> indexes on a user table (ie turned off relhasindex), then creating a\n> new index would activate them again, which would probably be bad.\n\nI apologize for neglecting to reconsider the activate/deactivate\nstuff for indexes. Probably it is no longer needed now (since 7.1).\nReindex under the postmaster for user tables has been available\nfrom the first. I intentionally didn't document it in 7.0, though\nnot documenting it in 7.1 was my neglect, sorry.\nIn 7.0 REINDEX set relhasindex to false first to tell all\nbackends that the indexes were unavailable, because we weren't\nable to recreate indexes safely in case of abort. Note\nthat relhasindex was set immediately (outside transactional\ncontrol) in 7.0, so acquiring a lock on the pg_class tuple\nwas very critical.\nSince 7.1 we are able to recreate indexes safely under the\npostmaster, and REINDEX doesn't set relhasindex to false\nfor user tables. Though REINDEX deactivates the indexes of\nsystem tables, the deactivation is done under transactional\ncontrol and other backends never see the deactivated\nrelhasindex.\n\n> \n> I have realized that this code is wrong anyway, because it doesn't\n> acquire ShareLock on the relation until far too late; all the setup\n> processing is done with no lock at all :-(. LockClassinfoForUpdate\n> provided a little bit of security against concurrent schema changes,\n> though not enough.\n> \n> Also, I'm now a little worried about whether concurrent index creations\n> will actually work. Both CREATE INDEX operations will try to update\n> the pg_class tuple to set relhasindex true.\n\nYes, but there's a big difference. It's at the end of the creation,\nnot at the beginning. 
Also note that UpdateStats() updates the pg_class\ntuple in the case of B-trees etc. before updating relhasindex. I'm\nsuspicious of whether we should update Stats under transactional control.\n\n> Since they use\n> simple_heap_update for that, the second one is likely to fail\n> because simple_heap_update doesn't handle concurrent updates.\n> \n> I think what we probably want is\n> \n> 1. Acquire ShareLock at the very start.\n> \n> 2. Check for indexes present but relhasindex = false,\n> if so complain.\n> \n> 3. Build the index.\n> \n> 4. Update pg_class tuple, being prepared for concurrent\n> updates (ie, do NOT use simple_heap_update here).\n> \n> I still don't see any value in LockClassinfoForUpdate, however.\n\nISTM relying completely on the lock for the corresponding\nrelation is a little misplaced. For example, ALTER TABLE OWNER\ndoesn't acquire any lock on the table, but it seems natural to me.\nUPDATE pg_class SET .. doesn't acquire any lock on the corresponding\nrelations of the target pg_class tuples, but it seems natural to me.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 25 Oct 2001 10:23:31 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Using an SMP machine to make multiple indices on "
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Tom Lane wrote:\n>> Also, I'm now a little worried about whether concurrent index creations\n>> will actually work. Both CREATE INDEX operations will try to update\n>> the pg_class tuple to set relhasindex true.\n\n> Yes but there's a big difference. It's at the end of the creation\n> not at the beginning. Also note that UpdateStats() updates pg_class\n> tuple in case of B-trees etc before updating relhasindex. I'm\n> suspicios if we should update Stats under the transactional control. \n\nIt would probably be good to fix things so that there's only one update\ndone for both stats and relhasindex, instead of two. But we *will* get\nfailures in simple_heap_update if we continue to use that routine.\nThe window for failure may be relatively short but it's real. It's not\nnecessarily short, either; consider multiple CREATE INDEX commands\nexecuted in a transaction block.\n\n>> I still don't see any value in LockClassinfoForUpdate, however.\n\n> ISTM to rely on completely the lock for the corresponding\n> relation is a little misplaced.\n\nSurely we *must* be able to rely on the relation lock. For example:\nhow does SELECT FOR UPDATE of the relation's pg_class tuple prevent\nwriters from adding tuples to the relation? It does not and cannot.\nOnly getting the appropriate relation lock provides a semantically\ncorrect guarantee that the relation isn't changing underneath us.\nLocking the pg_class tuple only locks the tuple itself, it has no wider\nscope of meaning.\n\n> For example ALTER TABLE OWNER\n> doesn't acquire any lock on the table but it seems natural to me.\n\nSeems like a bug to me. Consider this scenario:\n\nBackend 1\t\t\t\tBackend 2\n\nbegin;\n\nlock table1;\n\nselect from table1; -- works\n\n\t\t\t\t\talter table1 set owner ...\n\nselect from table1; -- fails, no permissions\n\nThat should not happen. 
It wouldn't happen if ALTER TABLE OWNER\nwere acquiring an appropriate lock on the relation.\n\n> UPDATE pg_class set .. doesn't acquire any lock on the correspoding\n> relations of the target pg_class tuples but it seems natural to me,\n\nWhile we allow knowledgeable users to poke at the system catalogs\ndirectly, I feel that that is very much a \"let the user beware\"\nfacility. I have no urge to try to guarantee cross-backend\ntransactional safety for changes executed that way. But CREATE INDEX,\nALTER TABLE, and so forth should have safe concurrent behavior.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 Oct 2001 17:35:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Using an SMP machine to make multiple indices on the\n\tsame"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Tom Lane wrote:\n> >> Also, I'm now a little worried about whether concurrent index creations\n> >> will actually work. Both CREATE INDEX operations will try to update\n> >> the pg_class tuple to set relhasindex true.\n> \n> > Yes but there's a big difference. It's at the end of the creation\n> > not at the beginning. Also note that UpdateStats() updates pg_class\n> > tuple in case of B-trees etc before updating relhasindex. I'm\n> > suspicios if we should update Stats under the transactional control.\n> \n> It would probably be good to fix things so that there's only one update\n> done for both stats and relhasindex, instead of two. \n\nI don't fully agree with you at this point. It's pretty painful\nto update relatively irrelevant items at the same time in some cases.\nUpdateStats() had updated both reltuples and relhasindex before 7.0.\nI changed UpdateStats() myself to not update relhasindex when\nI implemented the REINDEX command. Reindex has to set relhasindex to\ntrue after all the indexes of a table have been recreated.\n\n> But we *will* get\n> failures in simple_heap_update if we continue to use that routine.\n> The window for failure may be relatively short but it's real. It's not\n> necessarily short, either; consider multiple CREATE INDEX commands\n> executed in a transaction block.\n> \n> >> I still don't see any value in LockClassinfoForUpdate, however.\n> \n> > ISTM to rely on completely the lock for the corresponding\n> > relation is a little misplaced.\n> \n> Surely we *must* be able to rely on the relation lock. For example:\n> how does SELECT FOR UPDATE of the relation's pg_class tuple prevent\n> writers from adding tuples to the relation? 
It does not and cannot.\n\nI've never said that the relation lock is unnecessary.\nThe stuff around relhasindex is (was) an exception that keeps\na (possibly) long-term lock on the pg_class tuple apart from\nthe relevant relation lock.\nWhat I've mainly intended is to guard our (at least my) code.\nIf our (my) code acquires an AccessExclusiveLock on a relation\nand would update the corresponding pg_class tuple, I'd like to\nget the locked tuple, not the unlocked one, because I couldn't\nchange unlocked tuples without worry. That's almost all.\nIn most cases the AccessExclusiveLock on the relation would\nalready block other backends which must be blocked, as you\nsay, and so the lock on the pg_class tuple would cause few\nadditional lock conflicts. What are the disadvantages of getting\nlocked pg_class tuples?\n\n> Only getting the appropriate relation lock provides a semantically\n> correct guarantee that the relation isn't changing underneath us.\n> Locking the pg_class tuple only locks the tuple itself, it has no wider\n> scope of meaning.\n> \n> > For example ALTER TABLE OWNER\n> > doesn't acquire any lock on the table but it seems natural to me.\n> \n> Seems like a bug to me. Consider this scenario:\n> \n> Backend 1 Backend 2\n> \n> begin;\n> \n> lock table1;\n> \n> select from table1; -- works\n> \n> alter table1 set owner ...\n> \n> select from table1; -- fails, no permissions\n> \n> That should not happen. It wouldn't happen if ALTER TABLE OWNER\n> were acquiring an appropriate lock on the relation.\n\nHmm, ok, agreed. One of my intentions is to guard our (my) code\nfrom such careless(?) applications.\n\n> \n> > UPDATE pg_class set .. doesn't acquire any lock on the correspoding\n> > relations of the target pg_class tuples but it seems natural to me,\n> \n> While we allow knowledgeable users to poke at the system catalogs\n> directly, I feel that that is very much a \"let the user beware\"\n> facility.\n\nMe too. 
Again what I intend is to guard our(my) code from\nsuch knowledgeable users not guarantee them an expected(?)\nresult.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Fri, 26 Oct 2001 10:44:17 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Using an SMP machine to make multiple indices on "
}
] |
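The hazard Tom Lane describes in this thread — two CREATE INDEX commands both trying to set relhasindex on the same pg_class tuple, where a simple_heap_update-style write fails outright on a concurrent update while a concurrency-aware write re-reads and retries — can be sketched as a toy model. The Python below is purely illustrative: PostgreSQL's actual code is C, and every name here is invented for the sketch.

```python
# Toy model of the concurrent-update hazard: a tuple carries a version
# counter; a "simple" update aborts if the tuple changed after it was read,
# while a retrying update re-reads the current version and reapplies.

class ConcurrentUpdateError(Exception):
    pass

class Tuple:
    def __init__(self):
        self.version = 0
        self.relhasindex = False

def simple_update(tup, seen_version):
    """Fail outright if someone updated the tuple after we read it."""
    if tup.version != seen_version:
        raise ConcurrentUpdateError("simultaneous update")
    tup.relhasindex = True
    tup.version += 1

def retrying_update(tup):
    """Re-read the current version and apply the change on top of it."""
    while True:
        seen = tup.version
        try:
            simple_update(tup, seen)
            return
        except ConcurrentUpdateError:
            continue  # someone else won the race; re-read and retry

row = Tuple()
v = row.version            # both "backends" read version 0
simple_update(row, v)      # first CREATE INDEX commits its update
try:
    simple_update(row, v)  # second one fails: tuple moved underneath it
except ConcurrentUpdateError as e:
    second_outcome = str(e)
retrying_update(row)       # a concurrency-aware update succeeds instead
```

This mirrors step 4 of the plan above: the pg_class update at the end of index creation must be prepared for a concurrent writer rather than assuming the tuple is unchanged.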
[
{
"msg_contents": "> > I don't think that enough votes are needed to reverse \n> > the change. You broke the discussion first rule.\n\nAre you subscribed to general? We had a big discussion there and there\nwas almost universal agreement that the LIMIT #,# syntax is too\nerror-prone, and the only reason to have it was for MySQL compatibility. \nOf course, our syntax is backward, so it is not a compatibility but an\nincompatibility.\n\nEveryone thought LIMIT # OFFSET # was preferred.\n\nI don't mind reversing out all of this but I can't make everyone happy.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Oct 2001 21:40:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the selectlimit"
},
{
"msg_contents": "...\n> Are you subscribed to general?\n...\n> Everyone thought LIMIT # OFFSET # was preferred.\n\nI think Hiroshi's point is the same as mine: discussions of feature\nchanges need to happen on -hackers before being implemented.\nSubscriptions to other mailing lists should not be required to stay up\nwith mainstream development issues.\n\nI'm recently subscribed to -general, to allow me to respond to email\nthreads, but it has a lot of traffic and I may not stay subscribed for\nlong...\n\n - Thomas\n",
"msg_date": "Tue, 23 Oct 2001 01:47:50 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the "
},
{
"msg_contents": "\nOK, then why did Tom tell me to have the discussion on general? Don't we\nask the general users about user-visible feature removal? This is not an\nimplementation issue but simply, \"What do users want?\" I agree it\nwould be good on hackers too, but how do we have a discussion on both?\n\n> ...\n> > Are you subscribed to general?\n> ...\n> > Everyone thought LIMIT # OFFSET # was preferred.\n> \n> I think Hiroshi's point is the same as mine: discussions of feature\n> changes need to happen on -hackers before being implemented.\n> Subscriptions to other mailing lists should not be required to stay up\n> with mainstream development issues.\n> \n> I'm recently subscribed to -general, to allow me to respond to email\n> threads, but it has a lot of traffic and I may not stay subscribed for\n> long...\n> \n> - Thomas\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Oct 2001 21:55:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the selectlimit"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > > I don't think that enough votes are needed to reverse\n> > > the change. You broke the discussion first rule.\n> \n> Are you subscribed to general? We had a big discussion there and there\n\nI know the discussion, and I thought Peter's objection was\nsufficiently valid to reverse your change, but the discussion\nhas continued.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Tue, 23 Oct 2001 11:03:24 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the "
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> I think Hiroshi's point is the same as mine: discussions of feature\n> changes need to happen on -hackers before being implemented.\n\nWell, IIRC there *was* some discussion about this some months back, and\nno one particularly objected to changing it to be compatible with MySQL.\nThat's why Bruce felt free to execute on the TODO item despite being\nso close to beta.\n\n> Subscriptions to other mailing lists should not be required to stay up\n> with mainstream development issues.\n\nActually, the reason we have an argument now is the other way around:\nsome non-hackers people complained when the change notice went by.\nWe do have an obligation to users who don't read -hackers.\n\nGiven the amount of noise being raised on the issue now, I think the\nbetter part of valor is to revert to the 7.1 behavior and plan to\ndiscuss it again for 7.3. But it's not like Bruce did this with no\nwarning or discussion.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 Oct 2001 22:32:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the selectlimit "
},
{
"msg_contents": "> Thomas Lockhart <lockhart@fourpalms.org> writes:\n> > I think Hiroshi's point is the same as mine: discussions of feature\n> > changes need to happen on -hackers before being implemented.\n> \n> Well, IIRC there *was* some discussion about this some months back, and\n> no one particularly objected to changing it to be compatible with MySQL.\n> That's why Bruce felt free to execute on the TODO item despite being\n> so close to beta.\n> \n> > Subscriptions to other mailing lists should not be required to stay up\n> > with mainstream development issues.\n> \n> Actually, the reason we have an argument now is the other way around:\n> some non-hackers people complained when the change notice went by.\n> We do have an obligation to users who don't read -hackers.\n> \n> Given the amount of noise being raised on the issue now, I think the\n> better part of valor is to revert to the 7.1 behavior and plan to\n> discuss it again for 7.3. But it's not like Bruce did this with no\n> warning or discussion.\n\n[ BCC to general ]\n\nI agree. Let me reverse this to 7.1 behavior, and note in the HISTORY\nfile that LIMIT #,# will be removed in 7.3. That way, people know it is\ncoming and it gives them one release to fix their queries. I know Tom\nwanted it removed right away because it is so confusing but I think we\nhave enough votes to keep it around, unchanged, for another release.\n\nAs to whether we should emit a NOTICE every time LIMIT #,# is used, I\nthink not, but if people want it I can add it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Oct 2001 22:38:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] To Postgres Devs : Wouldn't changing the selectlimit"
},
{
"msg_contents": "On Monday 22 October 2001 10:32 pm, Tom Lane wrote:\n> Thomas Lockhart <lockhart@fourpalms.org> writes:\n> > I think Hiroshi's point is the same as mine: discussions of feature\n> > changes need to happen on -hackers before being implemented.\n[snip]\n> > Subscriptions to other mailing lists should not be required to stay up\n> > with mainstream development issues.\n\n> Actually, the reason we have an argument now is the other way around:\n> some non-hackers people complained when the change notice went by.\n> We do have an obligation to users who don't read -hackers.\n\nIf they want to deal with development issues, let them subscribe to hackers. \nSorry, I know that's more than a little rude. But that _is_ what the hackers \nlist is for, right? 'The developers live there' is the advertisement.....\n\nAs I'm subscribed to most of the postgresql lists, I sometimes miss which \nlist it's on -- but I'll have to say that I agree with both Thomas and Bruce: \nthe behavior needs to be fixed, AND it needs to be discussed on hackers \nbefore fixing.\n\n> Given the amount of noise being raised on the issue now, I think the\n> better part of valor is to revert to the 7.1 behavior and plan to\n> discuss it again for 7.3. But it's not like Bruce did this with no\n> warning or discussion.\n\nCommunications breakdown either way. The warning and discussion was on \ngeneral -- a bcc to hackers would have been a good thing, IMHO.\n\nBut that's past. It's mighty close to beta -- is this fix a showstopper? \nThe behavior currently is rather broken according to the results of the \ndiscussion on general. Do we really want a whole 'nother major version cycle \nto pass before this kludge is fixed? Six months to a year down the road?\n\nThe longer this behavior is in the code, the harder it's going to be to \nremove it, IMNSHO.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 22 Oct 2001 23:56:24 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the selectlimit"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> The behavior currently is rather broken according to the results of the \n> discussion on general. Do we really want a whole 'nother major version cycle\n> to pass before this kludge is fixed? Six months to a year down the road?\n> The longer this behavior is in the code, the harder it's going to be to \n> remove it, IMNSHO.\n\nI agree completely with these points, which is why I'd rather have seen\nit dealt with (one way or t'other) in 7.2. But we appear to have a lot\nof people who don't think it's been discussed adequately in\n$PREFERRED_FORUM ... and the one thing I *really* don't want is to hold\nup 7.2 beta anymore for this issue. Let's stuff this worm back in the\ncan and get on with it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 23 Oct 2001 00:11:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the selectlimit "
},
{
"msg_contents": "> But that's past. It's mighty close to beta -- is this fix a showstopper? \n> The behavior currently is rather broken according to the results of the \n> discussion on general. Do we really want a whole 'nother major version cycle \n> to pass before this kludge is fixed? Six months to a year down the road?\n> \n> The longer this behavior is in the code, the harder it's going to be to \n> remove it, IMNSHO.\n\nWe just have too many opinions here. I have put it back and noted it\nwill be removed in 7.3. If someone else wants to propose it to be\nremoved in 7.2 and have a vote, and do the work, and take the heat, go\nahead. I am not going to do it.\n\nIt is just like the grief I got over jdbc patches for 7.1. At some\npoint it is not worth having people get upset at me over it. Basically,\nyou have removed any desire I have to resolve this.\n\n\nFYI, my personal opinion is that we should keep it around for one more\nrelease because forcing people to remove it from the queries with no\nwarning is more disruptive, I think, than the fact we don't match\nMySQL's syntax. Also, LIMIT #,# is no longer documented. That change\nwill be in 7.2. Of course, that means that if someone tries MySQL's\nsyntax, they have no documentation stating that the params are\nbackwards. If they read the HISTORY file, they will know not to use\nLIMIT #,# anyway.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 Oct 2001 00:24:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the selectlimit"
},
{
"msg_contents": "Not possible to accept both forms at present and issue a notice that \nLIMIT m,n is deprecated?\n\nIf LIMIT m,n is found, internally re-write it to LIMIT m OFFSET n and \npress on.\n\nThis should appease everyone and still allow the 'proper' form to be \nimplemented right now. There isn't just the question of when it appears \nin pgsql, but when it appears in everyone else's code that depends on \npostgres. If you delay LIMIT..OFFSET, then I too am affected in my \ncode. If I use it today and my code is in beta (which it is), then when \nit goes release, I'll have to issue a change in the future for that. \n Granted it's not a big thing for me, but if I have 200,000 \ninstallations, that means eventually there will have to be 200,000 \nupgrades when they upgrade postgres.\n\nWe all know that everyone updates their software frequently and in a \ntimely manner to keep things running smoothly, right? *cough*\n\nDavid\n\nTom Lane wrote:\n\n>Given the amount of noise being raised on the issue now, I think the\n>better part of valor is to revert to the 7.1 behavior and plan to\n>discuss it again for 7.3. But it's not like Bruce did this with no\n>warning or discussion.\n>\n\n\n",
"msg_date": "Tue, 23 Oct 2001 01:07:23 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the selectlimit"
},
{
"msg_contents": "> I agree completely with these points, which is why I'd rather have seen\n> it dealt with (one way or t'other) in 7.2. But we appear to have a lot\n> of people who don't think it's been discussed adequately in\n> $PREFERRED_FORUM ... and the one thing I *really* don't want is to hold\n> up 7.2 beta anymore for this issue. Let's stuff this worm back in the\n> can and get on with it.\n\nFrankly, I'd be happy to consider this a bug fix either way. The timing\nis compatible with 7.2, and I'm happy that Bruce is bringing this to\nresolution. My point was simply that some discussion on -hackers is\nappropriate, and that others on -hackers who might have a stake in this\nshould be in on the discussion.\n\nfwiw, I don't have a strong opinion about *which* path is taken to fix\nthe problem. But the old implementation is the worst of all worlds, and\nthe replacement syntax which is already in the code is a better choice.\n\n - Thomas\n",
"msg_date": "Tue, 23 Oct 2001 13:52:18 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the "
},
{
"msg_contents": "> Not possible to accept both forms at present and issue a notice that \n> LIMIT m,n is deprecated?\n\nWe accept both now and will for <=7.2. In 7.3, it will be only LIMIT #\nOFFSET #.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 Oct 2001 12:23:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] To Postgres Devs : Wouldn't changing the selectlimit"
}
] |
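David Ford's suggestion in this thread — accept LIMIT m,n during the deprecation window and internally rewrite it to LIMIT m OFFSET n — can be sketched as a small text transformation. The rewrite below follows PostgreSQL's historical reading of LIMIT x,y (count first, then offset), which as the thread notes is the reverse of MySQL's; the function is a hypothetical illustration, not part of any actual parser.

```python
import re

# Hypothetical helper: rewrite the deprecated "LIMIT count, offset" form
# (PostgreSQL's pre-7.3 reading -- the reverse of MySQL's) into the
# "LIMIT count OFFSET offset" form that remains supported in 7.3.
_LIMIT_COMMA = re.compile(r"\bLIMIT\s+(\d+)\s*,\s*(\d+)", re.IGNORECASE)

def rewrite_limit(sql: str) -> str:
    # Queries without the comma form pass through unchanged.
    return _LIMIT_COMMA.sub(
        lambda m: f"LIMIT {m.group(1)} OFFSET {m.group(2)}", sql
    )

print(rewrite_limit("SELECT * FROM tenk1 ORDER BY unique1 LIMIT 10,20"))
# -> SELECT * FROM tenk1 ORDER BY unique1 LIMIT 10 OFFSET 20
```

A real implementation would live in the grammar rather than in string munging, and would be the natural place to emit the deprecation NOTICE discussed above.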
[
{
"msg_contents": "Dear all,\n\nI am trying to install PostgreSQL 7.1.3 on Win98 with APACHE\nand PHP (both installed and running), and\nam getting errors with \"make\" and \"make install\" (see below).\n\nWhat are the differences in installations for Win98, WinNT and Win2000?\nThere are so many procedures around and none is working without\nproblems.\n\nI installed Cygwin to emulate a UNIX environment and Cygwin IPC to\nsupport the linker (ld.exe).\n\nI downloaded \"postgresql-7.1.3.tar.gz\".\n\n\"./configure\" finished properly with \"un.h\" and \"tcp.h\" installed BUT\nwithout \"endian.h\" (is this important ???)\n\nI also copied \"libpostgres.a\" into \"/usr/local/lib\".\n\nThere are some Windows Makefiles (\"../src/win32.mak\" and\n\"../src/makefiles/Makefile.win) - Do I need to run any of them, and how\nand when?\n\nI can run \"postmaster -i&\" after IMPROPER installation BUT \"psql\" does\nnot work.\n\nAlso, PHP commands of type \"pg_*\" are not recognised. I turned ON (I\nbelieve) PHP-Postgres in \"php.ini\" file residing in Windows dir by\nallowing \"extension=php_pgsql.dll\".\n\n\nWhat is WRONG?\n\nMany thanks,\n\nSteven.\n\n****************\n\"make\" and \"make install\" ERRORS:\n\n....\n\ngcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations command.o\ncommon.o help.o input.o stringutils.o mainloop.o copy.o startup.o\nprompt.o variables.o large_obj.o\nprint.o describe.o tab-complete.o -L../../../src/interfaces/libpq -lpq\n-L/usr/local/lib -g -lz -lcrypt -lreadline -lcygipc -o psql\n\ntab-complete.o(.text+0x2a36):tab-complete.c: undefined reference to\n`filename_completion_function'\n\ncollect2: ld returned 1 exit status\n\nmake[3]: *** [psql] Error 1\n\nmake[3]: Leaving directory `/usr/src/postgresql-7.1.3/src/bin/psql'\n\nmake[2]: *** [all] Error 2\n\nmake[2]: Leaving directory `/usr/src/postgresql-7.1.3/src/bin'\n\nmake[1]: *** [all] Error 2\n\nmake[1]: Leaving directory `/usr/src/postgresql-7.1.3/src'\n\nmake: *** [all] Error 
2\n\n\n****************\n\n--\n***********************************************\n\nSteven Vajdic (BSc/Hon, MSc)\nSenior Software Engineer\nMotorola Australia Software Centre (MASC)\n2 Second Avenue, Technology Park\nAdelaide, South Australia 5095\nemail: Steven.Vajdic@motorola.com\nemail: svajdic@asc.corp.mot.com\nPh.: +61-8-8168-3543\nFax: +61-8-8168-3501\nFront Office (Ph): +61-8-8168-3500\n\n----------------------------------------\nmobile: +61 (0)419 860 903\nAFTER WORK email: steven_vajdic@ivillage.com\nHome address: 6 Allawah Av., Glen Osmond SA 5064, Australia\n----------------------------------------\n\n***********************************************\n\n\n",
"msg_date": "Tue, 23 Oct 2001 11:20:55 +0930",
"msg_from": "Steven Vajdic <svajdic@asc.corp.mot.com>",
"msg_from_op": true,
"msg_subject": "Postgres 7.1.3. installation on Windows platforms"
}
] |
[
{
"msg_contents": "[ BCC to general ]\n\nAdded to TODO:\n\n* Remove LIMIT #,# and force use LIMIT and OFFSET clauses in 7.3 (Bruce)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Oct 2001 22:51:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "LIMIT TODO item"
}
] |
[
{
"msg_contents": "Thanks, but latest Cygwin installation (using internet \"setup.exe\"),\nalthough saying it includes PostgreSQL, does NOT.\n\nThere is a \"nmake /f win32.mak\" to be run under Visual C++ explained in\nPostgreSQL Docs=Windows installation in order to build \"libpq.dll\" which\nneeds to be placed into \"windows/system\" dir under Win98/95/ME.\n\nYet, some installation procedures suggest that file \"pq.dll\" is placed into\n/usr/local/pgsql/bin. That file already exists in\n\"../src/interfaces/libpq/\".\n\nWhat is the difference between \"pq.dll\" and \"libpq.dll\"?\nI have found \"libpq.dll\" in PHP installation, copied it to\n\"windows/system\" - psql is not working (not built properly due to \"make\"\nerror).\n\nEverything in my installation is fine except \"make\" and \"make install\"\n(make[3]: *** [psql] Error 1, etc...).\nIt seems that \"psql.exe\" is not built properly.\n\n\"ipc-daemon\" is running,\n\"initdb\" finishes properly,\nBUT \"psql -l\" or \"psql template1\" says:\npg_flush: send() failed: The descriptor is a file, not a socket\npg_recvbuf: recv() failed: The descriptor is a file, not a socket\n\nAND (although \"php.ini\" file allows \"extension php_pgsql.dll\" and the file\nis copied to \"windows/system\") \"pg_connect\" as the first command to connect\nto\nPostgreSQL data base is not recognised.\n\n????\n\n----- Original Message -----\nFrom: tek1 <tek1@pobox.com>\nTo: <svajdic@asc.corp.mot.com>\nSent: Tuesday, October 23, 2001 1:32 AM\nSubject: Re: [GENERAL] Postgres 7.1.3. 
installation on Windows platforms\n\n\n> try using the postgresql version (7.1.3) that comes with cygwin to avoid\n> the complicated installation, which a lot of people have been having\n> problems with.\n>\n> also, there's a postgresql cygwin mailing list:\n>\n> pgsql-cygwin@postgresql.org\n>\n>\n>\n> At 11:20 01/10/23 +0930, you wrote:\n> >Dear all,\n> >\n> >I am trying to install PostgreSQL 7.1.3 on Win98 with APACHE\n> >and PHP (both installed and running), and\n> >am getting errors with \"make\" and \"make install\" (see below).\n> >\n> >What are the differences in installations for Win98, WinNT and Win2000?\n> >There are so many procedures around and none is working without\n> >problems.\n> >\n> >I installed Cygwin to emulate UNIX environment and Cygwin IPC to\n> >support the linker (ld.exe).\n> >\n> >I dowloaded \"postgresql-7.1.3.tar.gz\".\n> >\n> >\"./configure\" finished properly with \"un.h\" and \"tcp.h\" installed BUT\n> >without \"endian.h\" (is this important ???)\n> >\n> >I also copied \"libpostgres.a\" into \"/usr/local/lib\".\n> >\n> >There are some Windows Makefiles (\"../src/win32.mak\" and\n> >\"../src/makefiles/Makefile.win) - Do I need to run some and how and\n> >when.\n> >\n> >I can run \"postmaster -i&\" after IMPROPER installation BUT \"psql\" does\n> >not work.\n> >\n> >Also, PHP commands of type \"pg_*\" are not recognised. 
I turned ON (I\n> >believe) PHP-Postgres in \"php.ini\" file residing in Windows dir by\n> >allowing \"extension=php_pgsql.dll\".\n> >\n> >\n> >What is WRONG?\n> >\n> >Many thanks,\n> >\n> >Steven.\n> >\n> >****************\n> >\"make\" and \"make install\" ERRORS:\n> >\n> >....\n> >\n> >gcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations command.o\n> >common.o help.o input.o stringutils.o mainloop.o copy.o startup.o\n> >prompt.o variables.o large_obj.o\n> >print.o describe.o tab-complete.o -L../../../src/interfaces/libpq -lpq\n> >-L/usr/local/lib -g -lz -lcrypt -lreadline -lcygipc -o psql\n> >\n> >tab-complete.o(.text+0x2a36):tab-complete.c: undefined reference to\n> >`filename_completion_function'\n> >\n> >collect2: ld returned 1 exit status\n> >\n> >make[3]: *** [psql] Error 1\n> >\n> >make[3]: Leaving directory `/usr/src/postgresql-7.1.3/src/bin/psql'\n> >\n> >make[2]: *** [all] Error 2\n> >\n> >make[2]: Leaving directory `/usr/src/postgresql-7.1.3/src/bin'\n> >\n> >make[1]: *** [all] Error 2\n> >\n> >make[1]: Leaving directory `/usr/src/postgresql-7.1.3/src'\n> >\n> >make: *** [all] Error 2\n> >\n> >\n> >****************\n> >\n> >--\n> >***********************************************\n> >\n> >Steven Vajdic (BSc/Hon, MSc)\n> >Senior Software Engineer\n> >Motorola Australia Software Centre (MASC)\n> >2 Second Avenue, Technology Park\n> >Adelaide, South Australia 5095\n> >email: Steven.Vajdic@motorola.com\n> >email: svajdic@asc.corp.mot.com\n> >Ph.: +61-8-8168-3543\n> >Fax: +61-8-8168-3501\n> >Front Office (Ph): +61-8-8168-3500\n> >\n> >----------------------------------------\n> >mobile: +61 (0)419 860 903\n> >AFTER WORK email: steven_vajdic@ivillage.com\n> >Home address: 6 Allawah Av., Glen Osmond SA 5064, Australia\n> >----------------------------------------\n> >\n> >***********************************************\n> >\n> >\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 6: Have you searched our list 
archives?\n> >\n> >http://archives.postgresql.org\n>\n\n",
"msg_date": "Tue, 23 Oct 2001 21:52:51 +0930",
"msg_from": "\"Steven Vajdic\" <Steven.Vajdic@motorola.com>",
"msg_from_op": true,
"msg_subject": "Re: Postgres 7.1.3. installation on Windows platforms"
}
] |
[
{
"msg_contents": "I have an idea for creating a Perl script, but I just wanted to make\nsure that no one has already created something similar.\nWhen doing a full row select, it's necessary to create all the bind\nvariables, and then do a column by column select statement. Ugly. I\nwant to automagically create an include that I would be able to place\nin the DECLARE SECTION and also use in the SELECT statements.\nAll of these methods are based upon my prior experience with embedded\nsql, so if Postgres has a better method, or something new has come up,\nplease let me know.\nThanks!\nMark\n",
"msg_date": "23 Oct 2001 05:53:00 -0700",
"msg_from": "mcarthey@execpc.com (Mark)",
"msg_from_op": true,
"msg_subject": "dynamic #include's?"
}
] |
[
{
"msg_contents": "I'd like to propose a new command, CREATE OPERATOR CLASS. Its purpose is\nto create a named operator class, so that you can create new types of\nindex ops. Also, its inclusion would remove the section of the\ndocumentation where we tell people how to manually manipulate the system\ntables.\n\nSince schema support is going to change some of the details of the system\ntables in important ways, I think it's better to move away from manual\nupdates.\n\nThe command is basically an instrumentation of the documentation on how to\nadd new operator classes.\n\nHere's the syntax I'd like to propose:\n\nCREATE OPERATOR CLASS <name> [DEFAULT] FOR TYPE <typename> USING <access\nmethod> WITH <list of operators> AND <list of support functions>\n\nNew keywords are \"CLASS\" (SQL99 reserved word) and \"REPEATABLE\" (SQL99\nnon-reserved word, see below for usage).\n\n<name> is the class's name, and <typename> is the type to be indexed.\n<access method> is the assosciated access method from pg_am (btree, rtree,\nhash, gist).\n\nThe presence of [DEFAULT] indicates that this operator class shold be made\nthe default operator class for the type.\n\n<list of operators> is a comma-delimited list of operator specs. An\noperator spec is either an operator or an operator followed by the keyword\n\"REPEATABLE\". The presence of \"REPEATABLE\" indicates that amopreqcheck\nshould be set to true for this operator. Each item in this list will\ngenerate an entry in pg_amop.\n\n<list of support functions> is a comma-seperated list of functions used to\nassist the index method. Each item in this list will generate an item in\npg_amproc.\n\nI agree that I think it is rare that anything will set \"REPEATABLE\", but\nthe point of this effort is to keep folks from mucking around with the\nsystem tables manually, so we should support making any reasonable entry\nin pg_amop.\n\nHere's an example based on the programmer's guide. 
We've created the type\n\"complex\", and have comparison functions complex_abs_lt, complex_abs_le,\ncomplex_abs_eq, complex_abs_gt, complex_abs_ge. Then let us have created\noperators \"||<\", \"||<=\", \"||=\", \"||>\", \"||>=\" based on them. We also have\nthe complex_abs_cmp helper function. To create the operator class, the\ncommand would be:\n\nCREATE OPERATOR CLASS complex_abs_ops DEFAULT FOR TYPE complex USING\nbtree with ||<, ||<=, ||=, ||>=, ||> and complex_abs_cmp;\n\nAmong other things, complex_abs_ops would be the default operator class\nfor the complex type after this command.\n\n\nAn example using REPEATABLE would be:\n\nCREATE OPERATOR CLASS complex_abs_ops DEFAULT FOR TYPE complex USING btree\nwith ||< REPEATABLE, ||<=, ||=, ||>=, ||> REPEATABLE and complex_abs_cmp;\n\nNote: I don't think the above command will create a correct operator\nclass, it just shows how to add REPEATABLE.\n\nThe alternative to \"REPEATABLE\" would be something like\n\"hit_needs_recheck\" after the operator. Suggestions?\n\nThoughts?\n\nTake care,\n\nBill\n\n",
"msg_date": "Tue, 23 Oct 2001 06:41:15 -0700 (PDT)",
"msg_from": "Bill Studenmund <wrstuden@netbsd.org>",
"msg_from_op": true,
"msg_subject": "Proposed new create command, CREATE OPERATOR CLASS"
},
{
"msg_contents": "On Tue, 23 Oct 2001, Bill Studenmund wrote:\n\n> Here's the syntax I'd like to propose:\n>\n> CREATE OPERATOR CLASS <name> [DEFAULT] FOR TYPE <typename> USING <access\n> method> WITH <list of operators> AND <list of support functions>\n\nHmmm.. Teach me to read the docs. :-) There's no way to set opckeytype. So\nhwo about:\n\nCREATE OPERATOR CLASS <name> [DEFAULT] FOR TYPE <typename> [AS <stored\ntype>] USING <access method> WITH <list of operators> AND <list of support\nfunctions>\n\nWith AS <stored type> present, the opckeytype column gets set to that type\nname's oid.\n\n> New keywords are \"CLASS\" (SQL99 reserved word) and \"REPEATABLE\" (SQL99\n> non-reserved word, see below for usage).\n>\n> <name> is the class's name, and <typename> is the type to be indexed.\n> <access method> is the assosciated access method from pg_am (btree, rtree,\n> hash, gist).\n>\n> The presence of [DEFAULT] indicates that this operator class shold be made\n> the default operator class for the type.\n>\n> <list of operators> is a comma-delimited list of operator specs. An\n> operator spec is either an operator or an operator followed by the keyword\n> \"REPEATABLE\". The presence of \"REPEATABLE\" indicates that amopreqcheck\n> should be set to true for this operator. Each item in this list will\n> generate an entry in pg_amop.\n\nI decided to change that to an operator followed by \"needs_recheck\" to\nindicate a recheck is needed. \"needs_recheck\" is not handled as a keyword,\nbut as an IDENT which is examined at parse time.\n\n> <list of support functions> is a comma-seperated list of functions used to\n> assist the index method. Each item in this list will generate an item in\n> pg_amproc.\n>\n> I agree that I think it is rare that anything will set \"REPEATABLE\", but\n> the point of this effort is to keep folks from mucking around with the\n> system tables manually, so we should support making any reasonable entry\n> in pg_amop.\n\nTake care,\n\nBill\n\n",
"msg_date": "Tue, 23 Oct 2001 11:09:53 -0700 (PDT)",
"msg_from": "Bill Studenmund <wrstuden@netbsd.org>",
"msg_from_op": true,
"msg_subject": "Re: Proposed new create command, CREATE OPERATOR CLASS"
},
{
"msg_contents": "On Wed, 24 Oct 2001, Tom Lane wrote:\n\n> Bill Studenmund <wrstuden@netbsd.org> writes:\n> > I'd like to propose a new command, CREATE OPERATOR CLASS.\n>\n> Seems like a good idea.\n>\n> > operator spec is either an operator or an operator followed by the keyword\n> > \"REPEATABLE\". The presence of \"REPEATABLE\" indicates that amopreqcheck\n> > should be set to true for this operator.\n>\n> This is bogus, since REPEATABLE is a very poor description of the\n> meaning of amopreqcheck; to the extent that it matches the meaning\n> at all, it's backwards. Don't pick a keyword for this solely on the\n> basis of what you can find that's already reserved by SQL99.\n>\n> Given the restricted syntax, the keyword could be a TokenId anyway,\n> so it's not really reserved; accordingly there's no need to limit\n> ourselves to what SQL99 says we can reserve.\n>\n> Perhaps use \"RECHECK\"? That would fit the field more closely...\n\nI was writing a note saying that as this one came in. Yes, it's now a\nTokenId, and I look for the text \"needs_recheck\".\n\n> > I agree that I think it is rare that anything will set \"REPEATABLE\", but\n> > the point of this effort is to keep folks from mucking around with the\n> > system tables manually, so we should support making any reasonable entry\n> > in pg_amop.\n>\n> Then you'd better add support for specifying an opckeytype, too. BTW\n> these things are not all that rare; there are examples right now in\n> contrib.\n\nYep, I noticed that.\n\n> > CREATE OPERATOR CLASS complex_abs_ops DEFAULT FOR TYPE complex USING\n> > btree with ||<, ||<=, ||=, ||>=, ||> and complex_abs_cmp;\n>\n> This syntax is obviously insufficient to identify the procedures, since\n> it doesn't show argument lists (and we do allow overloading). Less\n\nSo then funcname(type list) [, funcname(type list)] would be the way to\ngo?\n\n> obviously, it's not sufficient to identify the operators either. 
I\n> think you're implicitly assuming that only binary operators on the\n> specified type will ever be members of index opclasses. That does not\n> seem like a good assumption to wire into the syntax. Perhaps borrow\n\nWell, the requirement of binarity is something which is explicit in our\nexample documentation, and so that's why I used it.\n\n> the syntax used for DROP OPERATOR, which is ugly but not ambiguous:\n>\n> \toperator (type, type)\n> \toperator (type, NONE)\n> \toperator (NONE, type)\n>\n> We could allow an operator without any parenthesized args to imply a\n> binary op on the specified type, which would certainly be the most\n> common case.\n\nDo any of the access methods really support using non-binary operators?\n\n> BTW, is there any need to support filling nonconsecutive amopstrategy or\n> amprocnum slots? This syntax can't do that. GiST seems to have a\n> pretty loose idea of what set of strategy numbers you can have, so\n> there might possibly be a future need for that.\n\nI can add support for skipping operators, if needed. A comma followed by a\ncomma would indicate a null name.\n\nOh gross. I just looked at contrib/intarray, and it defines two entries in\npg_amop for amopstrategy number 20. They do happen to be commutators of\neach other. Look for the @@ and ~~ operators.\n\nWait a second, how can you do that? Doesn't that violate\npg_amop_opc_strategy_index ? It's supposed to make pairs of amopclaid and\namopstrategy be unique.\n\nConfused....\n\n> Also, it might be better to use a syntax in the style of CREATE\n> OPERATOR, with a list of param = value notations, because that's\n> more easily extensible if we change the opclass stuff again.\n>\n> \tCREATE OPERATOR CLASS classname (\n> \t\tbasetype = complex,\n> \t\tdefault,\n> \t\toperator1 = ||< ,\n> \t\t...\n> \t\tproc1 = complex_abs_cmp );\n>\n> However, specifying the proc arglists in this style would be awfully\n> tedious :-(. 
I can't think of anything better than\n>\n> \t\tproc1arg1 = complex,\n> \t\tproc1arg2 = complex,\n> \t\t...\n>\n> which is mighty ugly.\n\nWhich is why I didn't use it. :-)\n\nIf we can't make the other syntax work, then we can go with a DefineStmt\ntype syntax.\n\nTake care,\n\nBill\n\n",
"msg_date": "Tue, 23 Oct 2001 12:05:32 -0700 (PDT)",
"msg_from": "Bill Studenmund <wrstuden@netbsd.org>",
"msg_from_op": true,
"msg_subject": "Re: Proposed new create command, CREATE OPERATOR CLASS "
},
{
"msg_contents": "On Thu, 25 Oct 2001, Teodor Sigaev wrote:\n\n> Make me right if I mistake.\n>\n> When we was developing operator @@, I saw that postgres don't use index in\n> select if operation has not commutator. But operator with different types in\n> argument can't be commutator with itself. So I maked operator ~~ only for\n> postgres can use index access for operator @@. There is no any difficulties to\n> adding index support for operator ~~. The same things is with contrib/tsearch\n> module.\n>\n> But I think that there is not any other necessity in presence ~~.\n\nSo only one of the two needs to go into pg_amop, correct? Then everything\nelse is fine.\n\nTake care,\n\nBill\n\n",
"msg_date": "Wed, 24 Oct 2001 03:16:22 -0700 (PDT)",
"msg_from": "Bill Studenmund <wrstuden@netbsd.org>",
"msg_from_op": true,
"msg_subject": "Re: Proposed new create command, CREATE OPERATOR CLASS"
},
{
"msg_contents": "On Wed, 24 Oct 2001, Tom Lane wrote:\n\n> Bill Studenmund <wrstuden@netbsd.org> writes:\n> > [ revised proposal for CREATE OPERATOR CLASS syntax ]\n>\n> I don't like the idea of writing a bunch of consecutive commas (and\n> having to count them correctly) for cases where we're inserting\n> noncontigous amopstrategy or amprocnum numbers. Perhaps the syntax\n> for the elements of the lists could be\n>\n> \t[ integer ] operator [ ( argtype, argtype ) ] [ RECHECK ]\n>\n> \t[ integer ] funcname ( argtypes )\n>\n> where if the integer is given, it is the strategy/procnum for this\n> entry, and if it's not given then it defaults to 1 for the first\n> item and previous-entry's-number-plus-one for later items.\n\nThat would work.\n\n> Or just require the integer all the time. That seems a lot less\n> mistake-prone, really. Concision is not a virtue in the case of\n> a command as specialized as this. Is there really anything wrong with\n>\n> CREATE OPERATOR CLASS complex_abs_ops\n> \tDEFAULT FOR TYPE complex USING btree\n> \tWITH\n> \t\t1 ||<,\n> \t\t2 ||<=,\n> \t\t3 ||=,\n> \t\t4 ||>=,\n> \t\t5 ||>\n> \tAND\n> \t\t1 complex_abs_cmp(complex, complex);\n\nNot really. Especially when there are ones which are 3, 6, 7, 8, 20\nfloating around. :-)\n\n> (One could imagine adding system catalogs that give symbolic names\n> to the strategy/procnum numbers for each access method, and then\n> allowing names instead of integers in this command. I'm not sure\n> whether GiST has sufficiently well-defined strategy numbers to make that\n> work, but even if not, I like this better than a positional approach to\n> figuring out which operator is which.)\n\nSomething like that (having a catalog of what the different operators are\nsupposed to be) would be nice. 
Especially for the support procs, so that\nCREATE OPERATOR CLASS could make sure you gave the right ones for each\nnumber.\n\n> > I decided to change that to an operator followed by \"needs_recheck\" to\n> > indicate a recheck is needed. \"needs_recheck\" is not handled as a keyword,\n> > but as an IDENT which is examined at parse time.\n>\n> Ugh. Make it a keyword. As long as it can be a TokenId there is no\n> downside to doing so, and doing it that way eliminates interesting\n> issues about case folding etc. (Did you know that case folding rules\n> are slightly different for keywords and identifiers?)\n\nOk. Will do. Yes, I know the case folding is different, though I'm not\n100% sure how so. I assume it's something like for identifiers, accents &\nsuch get folded to unaccented characters?\n\n> I still like RECHECK better than NEEDS_RECHECK, but that's a minor\n> quibble.\n\nRECHECK is one word. I'll go with it.\n\nTake care,\n\nBill\n\n",
"msg_date": "Wed, 24 Oct 2001 03:50:16 -0700 (PDT)",
"msg_from": "Bill Studenmund <wrstuden@netbsd.org>",
"msg_from_op": true,
"msg_subject": "Re: Proposed new create command, CREATE OPERATOR CLASS "
},
{
"msg_contents": "Bill Studenmund <wrstuden@netbsd.org> writes:\n> I'd like to propose a new command, CREATE OPERATOR CLASS.\n\nSeems like a good idea.\n\n> operator spec is either an operator or an operator followed by the keyword\n> \"REPEATABLE\". The presence of \"REPEATABLE\" indicates that amopreqcheck\n> should be set to true for this operator.\n\nThis is bogus, since REPEATABLE is a very poor description of the\nmeaning of amopreqcheck; to the extent that it matches the meaning\nat all, it's backwards. Don't pick a keyword for this solely on the\nbasis of what you can find that's already reserved by SQL99.\n\nGiven the restricted syntax, the keyword could be a TokenId anyway,\nso it's not really reserved; accordingly there's no need to limit\nourselves to what SQL99 says we can reserve.\n\nPerhaps use \"RECHECK\"? That would fit the field more closely...\n\n> I agree that I think it is rare that anything will set \"REPEATABLE\", but\n> the point of this effort is to keep folks from mucking around with the\n> system tables manually, so we should support making any reasonable entry\n> in pg_amop.\n\nThen you'd better add support for specifying an opckeytype, too. BTW\nthese things are not all that rare; there are examples right now in\ncontrib.\n\n> CREATE OPERATOR CLASS complex_abs_ops DEFAULT FOR TYPE complex USING\n> btree with ||<, ||<=, ||=, ||>=, ||> and complex_abs_cmp;\n\nThis syntax is obviously insufficient to identify the procedures, since\nit doesn't show argument lists (and we do allow overloading). Less\nobviously, it's not sufficient to identify the operators either. I\nthink you're implicitly assuming that only binary operators on the\nspecified type will ever be members of index opclasses. That does not\nseem like a good assumption to wire into the syntax. 
Perhaps borrow\nthe syntax used for DROP OPERATOR, which is ugly but not ambiguous:\n\n\toperator (type, type)\n\toperator (type, NONE)\n\toperator (NONE, type)\n\nWe could allow an operator without any parenthesized args to imply a\nbinary op on the specified type, which would certainly be the most\ncommon case.\n\nBTW, is there any need to support filling nonconsecutive amopstrategy or\namprocnum slots? This syntax can't do that. GiST seems to have a\npretty loose idea of what set of strategy numbers you can have, so\nthere might possibly be a future need for that.\n\nAlso, it might be better to use a syntax in the style of CREATE\nOPERATOR, with a list of param = value notations, because that's\nmore easily extensible if we change the opclass stuff again.\n\n\tCREATE OPERATOR CLASS classname (\n\t\tbasetype = complex,\n\t\tdefault,\n\t\toperator1 = ||< ,\n\t\t...\n\t\tproc1 = complex_abs_cmp );\n\nHowever, specifying the proc arglists in this style would be awfully\ntedious :-(. I can't think of anything better than\n\n\t\tproc1arg1 = complex,\n\t\tproc1arg2 = complex,\n\t\t...\n\nwhich is mighty ugly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Oct 2001 19:06:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposed new create command, CREATE OPERATOR CLASS "
},
{
"msg_contents": "Bill Studenmund <wrstuden@netbsd.org> writes:\n> Do any of the access methods really support using non-binary operators?\n\nWhether they do today is not the question. The issue is whether they\ncould --- and they certainly could.\n\n> Oh gross. I just looked at contrib/intarray, and it defines two entries in\n> pg_amop for amopstrategy number 20. They do happen to be commutators of\n> each other. Look for the @@ and ~~ operators.\n\n> Wait a second, how can you do that? Doesn't that violate\n> pg_amop_opc_strategy_index ?\n\nIt sure does, but running the script shows that the second insert\ndoesn't try to insert any rows. There's no entry in the temp table\nfor ~~ because its left and right operands are not the types the\nSELECT/INTO is looking for.\n\nThis is evidently a bug in the script. Oleg?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Oct 2001 20:37:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposed new create command, CREATE OPERATOR CLASS "
},
{
"msg_contents": "Bill Studenmund <wrstuden@netbsd.org> writes:\n> [ revised proposal for CREATE OPERATOR CLASS syntax ]\n\nI don't like the idea of writing a bunch of consecutive commas (and\nhaving to count them correctly) for cases where we're inserting\nnoncontigous amopstrategy or amprocnum numbers. Perhaps the syntax\nfor the elements of the lists could be\n\n\t[ integer ] operator [ ( argtype, argtype ) ] [ RECHECK ]\n\n\t[ integer ] funcname ( argtypes )\n\nwhere if the integer is given, it is the strategy/procnum for this\nentry, and if it's not given then it defaults to 1 for the first\nitem and previous-entry's-number-plus-one for later items.\n\nOr just require the integer all the time. That seems a lot less\nmistake-prone, really. Concision is not a virtue in the case of\na command as specialized as this. Is there really anything wrong with\n\nCREATE OPERATOR CLASS complex_abs_ops\n\tDEFAULT FOR TYPE complex USING btree\n\tWITH\n\t\t1 ||<,\n\t\t2 ||<=,\n\t\t3 ||=,\n\t\t4 ||>=,\n\t\t5 ||>\n\tAND\n\t\t1 complex_abs_cmp(complex, complex);\n\n(One could imagine adding system catalogs that give symbolic names\nto the strategy/procnum numbers for each access method, and then\nallowing names instead of integers in this command. I'm not sure\nwhether GiST has sufficiently well-defined strategy numbers to make that\nwork, but even if not, I like this better than a positional approach to\nfiguring out which operator is which.)\n\n\n> I decided to change that to an operator followed by \"needs_recheck\" to\n> indicate a recheck is needed. \"needs_recheck\" is not handled as a keyword,\n> but as an IDENT which is examined at parse time.\n\nUgh. Make it a keyword. As long as it can be a TokenId there is no\ndownside to doing so, and doing it that way eliminates interesting\nissues about case folding etc. 
(Did you know that case folding rules\nare slightly different for keywords and identifiers?)\n\nI still like RECHECK better than NEEDS_RECHECK, but that's a minor\nquibble.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Oct 2001 22:11:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposed new create command, CREATE OPERATOR CLASS "
},
{
"msg_contents": ">>Wait a second, how can you do that? Doesn't that violate\n>>pg_amop_opc_strategy_index ?\n>>\n> \n> It sure does, but running the script shows that the second insert\n> doesn't try to insert any rows. There's no entry in the temp table\n> for ~~ because its left and right operands are not the types the\n> SELECT/INTO is looking for.\n> \n> This is evidently a bug in the script. Oleg?\n> \n\n\nMake me right if I mistake.\n\nWhen we was developing operator @@, I saw that postgres don't use index in \nselect if operation has not commutator. But operator with different types in \nargument can't be commutator with itself. So I maked operator ~~ only for \npostgres can use index access for operator @@. There is no any difficulties to \nadding index support for operator ~~. The same things is with contrib/tsearch \nmodule.\n\nBut I think that there is not any other necessity in presence ~~.\n\n\n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n",
"msg_date": "Thu, 25 Oct 2001 12:53:06 +0400",
"msg_from": "Teodor Sigaev <teodor@stack.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposed new create command, CREATE OPERATOR CLASS"
},
{
"msg_contents": "On Mon, 29 Oct 2001, Oleg Bartunov wrote:\n\n> On Thu, 25 Oct 2001, Teodor Sigaev wrote:\n>\n> > >>Wait a second, how can you do that? Doesn't that violate\n> > >>pg_amop_opc_strategy_index ?\n> > >\n> > > This is evidently a bug in the script. Oleg?\n> >\n> > Make me right if I mistake.\n\nDon't add @@ to pg_amop.\n\n> > When we was developing operator @@, I saw that postgres don't use index in\n> > select if operation has not commutator. But operator with different types in\n> > argument can't be commutator with itself. So I maked operator ~~ only for\n> > postgres can use index access for operator @@. There is no any difficulties to\n> > adding index support for operator ~~. The same things is with contrib/tsearch\n> > module.\n> >\n> > But I think that there is not any other necessity in presence ~~.\n\n?? An operator with different times in the arguements most certainly can\nbe a commutator with itself.\n\nTry:\n\nselect oid, oprname as \"n\", oprkind as \"k\", oprleft, oprright, oprresult,\noprcom, oprcode from pg_operator where oprleft <> oprright and oprname =\n'+';\n\nand look at the results. There are a number of pairs of same-name\ncommutators: 552 & 553 add int2 to int4, 688 & 692 add int4 to int8, and\nso on.\n\nAlso, I was able to do this:\n\ntesting=# CREATE OPERATOR @@ (\ntesting(# LEFTARG = _int4, RIGHTARG = query_int, PROCEDURE = boolop,\ntesting(# COMMUTATOR = '@@', RESTRICT = contsel, join = contjoinsel );\nCREATE\ntesting=# CREATE OPERATOR @@ (\ntesting(# LEFTARG = query_int, RIGHTARG = _int4, PROCEDURE = rboolop,\ntesting(# COMMUTATOR = '@@', RESTRICT = contsel, join = contjoinsel );\nCREATE\ntesting=#\n\n> Tom,\n>\n> this is interesting question - do we really need commutator to get\n> postgres to use index. This is the only reason we created ~~ operator.\n\nPlease note: my concern is not with the ~~ operator, it's with trying to\ninsert that operator into pg_amop. 
Well, with trying to insert both the @@\nand ~~ operators in as strategy (amopstrategy) 20. amopclaid and\namopstrategy are part of a unique index for pg_amop. So you *can't* add\ntwo operators in the same opclass at the same sequence number.\n\nAlthough, given the above example, I think the ~~ operator should be\nrenamed the @@ operator. :-)\n\nI think you do need to have both variants of the operator around. A\nbinary, type asymmetric operator without a commutator is less useful. And\nmakes less sense.\n\nTake care,\n\nBill\n\n",
"msg_date": "Sat, 27 Oct 2001 17:46:03 -0700 (PDT)",
"msg_from": "Bill Studenmund <wrstuden@netbsd.org>",
"msg_from_op": true,
"msg_subject": "Re: Proposed new create command, CREATE OPERATOR CLASS"
},
{
"msg_contents": "On Thu, 25 Oct 2001, Teodor Sigaev wrote:\n\n> >>Wait a second, how can you do that? Doesn't that violate\n> >>pg_amop_opc_strategy_index ?\n> >>\n> >\n> > It sure does, but running the script shows that the second insert\n> > doesn't try to insert any rows. There's no entry in the temp table\n> > for ~~ because its left and right operands are not the types the\n> > SELECT/INTO is looking for.\n> >\n> > This is evidently a bug in the script. Oleg?\n> >\n>\n>\n> Make me right if I mistake.\n>\n> When we was developing operator @@, I saw that postgres don't use index in\n> select if operation has not commutator. But operator with different types in\n> argument can't be commutator with itself. So I maked operator ~~ only for\n> postgres can use index access for operator @@. There is no any difficulties to\n> adding index support for operator ~~. The same things is with contrib/tsearch\n> module.\n>\n> But I think that there is not any other necessity in presence ~~.\n\nTom,\n\nthis is interesting question - do we really need commutator to get\npostgres to use index. This is the only reason we created ~~ operator.\n\n\tRegards,\n\t\tOleg\n>\n>\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 29 Oct 2001 17:30:51 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: Proposed new create command, CREATE OPERATOR CLASS"
},
{
"msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> this is interesting question - do we really need commutator to get\n> postgres to use index. This is the only reason we created ~~ operator.\n\nAFAIR there is not a requirement to have a commutator link. However\nthe indexable operation has to be framed as \"indexedvar OP constant\".\nIf the natural way to write it is as \"constant OP indexedvar\" then\nyou won't get an indexscan unless it can be commuted to the other way.\nThe same issue arises if you think that the operator might be useful\nin joins.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 29 Oct 2001 12:18:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposed new create command, CREATE OPERATOR CLASS "
},
{
"msg_contents": "\nBill, do you have a newer version of this patch for application to 7.3?\n\n\n---------------------------------------------------------------------------\n\nBill Studenmund wrote:\n> I'd like to propose a new command, CREATE OPERATOR CLASS. Its purpose is\n> to create a named operator class, so that you can create new types of\n> index ops. Also, its inclusion would remove the section of the\n> documentation where we tell people how to manually manipulate the system\n> tables.\n> \n> Since schema support is going to change some of the details of the system\n> tables in important ways, I think it's better to move away from manual\n> updates.\n> \n> The command is basically an instrumentation of the documentation on how to\n> add new operator classes.\n> \n> Here's the syntax I'd like to propose:\n> \n> CREATE OPERATOR CLASS <name> [DEFAULT] FOR TYPE <typename> USING <access\n> method> WITH <list of operators> AND <list of support functions>\n> \n> New keywords are \"CLASS\" (SQL99 reserved word) and \"REPEATABLE\" (SQL99\n> non-reserved word, see below for usage).\n> \n> <name> is the class's name, and <typename> is the type to be indexed.\n> <access method> is the assosciated access method from pg_am (btree, rtree,\n> hash, gist).\n> \n> The presence of [DEFAULT] indicates that this operator class shold be made\n> the default operator class for the type.\n> \n> <list of operators> is a comma-delimited list of operator specs. An\n> operator spec is either an operator or an operator followed by the keyword\n> \"REPEATABLE\". The presence of \"REPEATABLE\" indicates that amopreqcheck\n> should be set to true for this operator. Each item in this list will\n> generate an entry in pg_amop.\n> \n> <list of support functions> is a comma-seperated list of functions used to\n> assist the index method. 
Each item in this list will generate an item in\n> pg_amproc.\n> \n> I agree that I think it is rare that anything will set \"REPEATABLE\", but\n> the point of this effort is to keep folks from mucking around with the\n> system tables manually, so we should support making any reasonable entry\n> in pg_amop.\n> \n> Here's an example based on the programmer's guide. We've created the type\n> \"complex\", and have comparison functions complex_abs_lt, complex_abs_le,\n> complex_abs_eq, complex_abs_gt, complex_abs_ge. Then let us have created\n> operators \"||<\", \"||<=\", \"||=\", \"||>\", \"||>=\" based on them. We also have\n> the complex_abs_cmp helper function. To create the operator class, the\n> command would be:\n> \n> CREATE OPERATOR CLASS complex_abs_ops DEFAULT FOR TYPE complex USING\n> btree with ||<, ||<=, ||=, ||>=, ||> and complex_abs_cmp;\n> \n> Among other things, complex_abs_ops would be the default operator class\n> for the complex type after this command.\n> \n> \n> An example using REPEATABLE would be:\n> \n> CREATE OPERATOR CLASS complex_abs_ops DEFAULT FOR TYPE complex USING btree\n> with ||< REPEATABLE, ||<=, ||=, ||>=, ||> REPEATABLE and complex_abs_cmp;\n> \n> Note: I don't think the above command will create a correct operator\n> class, it just shows how to add REPEATABLE.\n> \n> The alternative to \"REPEATABLE\" would be something like\n> \"hit_needs_recheck\" after the operator. Suggestions?\n> \n> Thoughts?\n> \n> Take care,\n> \n> Bill\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Feb 2002 14:43:58 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposed new create command, CREATE OPERATOR CLASS"
},
{
"msg_contents": "\nBill, is there a patch that is ready for application?\n\n---------------------------------------------------------------------------\n\nBill Studenmund wrote:\n> On Mon, 29 Oct 2001, Oleg Bartunov wrote:\n> \n> > On Thu, 25 Oct 2001, Teodor Sigaev wrote:\n> >\n> > > >>Wait a second, how can you do that? Doesn't that violate\n> > > >>pg_amop_opc_strategy_index ?\n> > > >\n> > > > This is evidently a bug in the script. Oleg?\n> > >\n> > > Make me right if I mistake.\n> \n> Don't add @@ to pg_amop.\n> \n> > > When we was developing operator @@, I saw that postgres don't use index in\n> > > select if operation has not commutator. But operator with different types in\n> > > argument can't be commutator with itself. So I maked operator ~~ only for\n> > > postgres can use index access for operator @@. There is no any difficulties to\n> > > adding index support for operator ~~. The same things is with contrib/tsearch\n> > > module.\n> > >\n> > > But I think that there is not any other necessity in presence ~~.\n> \n> ?? An operator with different times in the arguements most certainly can\n> be a commutator with itself.\n> \n> Try:\n> \n> select oid, oprname as \"n\", oprkind as \"k\", oprleft, oprright, oprresult,\n> oprcom, oprcode from pg_operator where oprleft <> oprright and oprname =\n> '+';\n> \n> and look at the results. 
There are a number of pairs of same-name\n> commutators: 552 & 553 add int2 to int4, 688 & 692 add int4 to int8, and\n> so on.\n> \n> Also, I was able to do this:\n> \n> testing=# CREATE OPERATOR @@ (\n> testing(# LEFTARG = _int4, RIGHTARG = query_int, PROCEDURE = boolop,\n> testing(# COMMUTATOR = '@@', RESTRICT = contsel, join = contjoinsel );\n> CREATE\n> testing=# CREATE OPERATOR @@ (\n> testing(# LEFTARG = query_int, RIGHTARG = _int4, PROCEDURE = rboolop,\n> testing(# COMMUTATOR = '@@', RESTRICT = contsel, join = contjoinsel );\n> CREATE\n> testing=#\n> \n> > Tom,\n> >\n> > this is interesting question - do we really need commutator to get\n> > postgres to use index. This is the only reason we created ~~ operator.\n> \n> Please note: my concern is not with the ~~ operator, it's with trying to\n> insert that operator into pg_amop. Well, with trying to insert both the @@\n> and ~~ operators in as strategy (amopstrategy) 20. amopclaid and\n> amopstrategy are part of a unique index for pg_amop. So you *can't* add\n> two operators in the same opclass as the same sequence number.\n> \n> Although, given the above example, I think the ~~ operator should be\n> renamed the @@ operator. :-)\n> \n> I think you do need to have both variants of the operator around. A\n> binary, type asymmetric operator without a commutator is less useful. And\n> makes less sense.\n> \n> Take care,\n> \n> Bill\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 6 Mar 2002 16:58:43 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposed new create command, CREATE OPERATOR CLASS"
}
] |
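The pg_operator query Bill suggests in the thread above shows that same-named cross-type operators come in pairs that point at each other as commutators (552/553 for int2+int4, 688/692 for int4+int8). That invariant can be sketched over toy catalog rows — the row layout mirrors pg_operator, but the helper name and sample data are invented for illustration:

```python
# Toy pg_operator rows: (oid, name, left_type, right_type, commutator_oid)
pg_operator = [
    (552, "+", "int2", "int4", 553),
    (553, "+", "int4", "int2", 552),
    (688, "+", "int4", "int8", 692),
    (692, "+", "int8", "int4", 688),
]

def is_commutator_pair(catalog, oid_a, oid_b):
    """A and B commute iff they share a name, take swapped argument
    types, and each names the other as its commutator."""
    rows = {row[0]: row for row in catalog}
    _, name_a, l_a, r_a, com_a = rows[oid_a]
    _, name_b, l_b, r_b, com_b = rows[oid_b]
    return (name_a == name_b
            and (l_a, r_a) == (r_b, l_b)   # argument types swapped
            and com_a == oid_b and com_b == oid_a)

print(is_commutator_pair(pg_operator, 552, 553))  # True
print(is_commutator_pair(pg_operator, 552, 692))  # False: types don't swap
```

This is why a type-asymmetric operator generally needs a same-named twin with reversed argument types: nothing stops an operator from commuting "with itself" by name, as Bill points out, as long as both catalog rows exist.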
[
{
"msg_contents": "> >> Um, Vadim? Still of the opinion that elog(STOP) is a good\n> >> idea here? That's two people now for whom that decision has\n> >> turned localized corruption into complete database failure.\n> >> I don't think it's a good tradeoff.\n> \n> > One is able to use pg_resetxlog so I don't see point in\n> > removing elog(STOP) there. What do you think?\n>\n> Well, pg_resetxlog would get around the symptom, but at the cost of\n> possibly losing updates that are further along in the xlog than the\n> update for the corrupted page. (I'm assuming that the problem here\n> is a page with a corrupt LSN.) I think it's better to treat flush\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nOn restart, entire content of all modified after last checkpoint pages\nshould be restored from WAL. In Denis case it looks like newly allocated\nfor update page was somehow corrupted before heapam.c:2235 (7.1.2 src)\nand so there was no XLOG_HEAP_INIT_PAGE flag in WAL record => page\ncontent was not initialized on restart. Denis reported system crash -\nvery likely due to memory problem.\n\n> request past end of log as a DEBUG or NOTICE condition and keep going.\n> Sure, it indicates badness somewhere, but we should try to have some\n> robustness in the face of that badness. I do not see any reason why\n> XLOG has to declare defeat and go home because of this condition.\n\nOk - what about setting some flag there on restart and abort restart\nafter all records from WAL applied? So DBA will have choice either\nto run pg_resetxlog after that and try to dump data or restore from\nold backup. I still object just NOTICE there - easy to miss it. And\nin normal processing mode I'd leave elog(STOP) there.\n\nVadim\nP.S. Further discussions will be in hackers-list, sorry.\n",
"msg_date": "Tue, 23 Oct 2001 15:52:30 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "Re: Database corruption? "
}
] |
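The recovery behavior being debated above — restore every page touched since the last checkpoint from WAL, and surface (rather than immediately STOP on) a page whose LSN points past the end of the log — can be modeled in miniature. All names and structures here are invented for the sketch; the real logic lives in xlog.c:

```python
def redo(wal, pages):
    """Toy crash recovery: rewrite modified pages from WAL in log order.

    A page left claiming an LSN beyond the end of the log (a corrupt
    LSN) is collected and reported, letting the DBA decide what to do,
    instead of aborting the whole restart as an unconditional
    elog(STOP) would."""
    end_of_log = max((lsn for lsn, _, _ in wal), default=0)
    for lsn, page, data in wal:          # replay in log order
        pages[page] = (lsn, data)
    return [p for p, (lsn, _) in sorted(pages.items()) if lsn > end_of_log]

# One stale page that WAL covers, one page with a garbage LSN that no
# WAL record ever touches:
pages = {"A": (999, "garbage"), "B": (1, "stale")}
wal = [(2, "B", "new contents"), (5, "C", "inserted row")]
suspect = redo(wal, pages)
print(suspect)       # ['A']: flagged, not fatal
print(pages["B"])    # (2, 'new contents'): restored from WAL
```

The compromise Vadim proposes maps onto this shape: finish the replay, then abort with a flag set if the suspect list is non-empty, so the problem can't be missed but the replayed data is still recoverable via pg_resetxlog or a dump.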
[
{
"msg_contents": "Hi All,\n\nIn the LOCK TABLE docs it documents the SELECT...FOR UPDATE as follows:\n\n\n----\nROW SHARE MODE\nNote: Automatically acquired by SELECT...FOR UPDATE. While it is a shared\nlock, may be upgraded later to a ROW EXCLUSIVE lock.\n\nConflicts with EXCLUSIVE and ACCESS EXCLUSIVE lock modes.\n----\n\nHowever, if I begin a transaction in one window and SELECT...FOR UPDATE a\nrow, then begin a transaction in another window and SELECT ... FOR UPDATE\nthe same row, the second SELECT..FOR UPDATE blocks until the first\ntransactions is committed or rolled back.\n\nSo, shouldn't this mean that the ROW SHARE mode should in fact be documented\nto conflict with itself??? And with this behaviour is it really a shared\nlock? I don't get it!\n\nChris\n\n\n",
"msg_date": "Wed, 24 Oct 2001 10:56:12 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "LOCK ROW SHARE MODE"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> In the LOCK TABLE docs it documents the SELECT...FOR UPDATE as follows:\n\n> ROW SHARE MODE\n> Note: Automatically acquired by SELECT...FOR UPDATE. While it is a shared\n> lock, may be upgraded later to a ROW EXCLUSIVE lock.\n> Conflicts with EXCLUSIVE and ACCESS EXCLUSIVE lock modes.\n\n> However, if I begin a transaction in one window and SELECT...FOR UPDATE a\n> row, then begin a transaction in another window and SELECT ... FOR UPDATE\n> the same row, the second SELECT..FOR UPDATE blocks until the first\n> transactions is committed or rolled back.\n\n> So, shouldn't this mean that the ROW SHARE mode should in fact be documented\n> to conflict with itself??? And with this behaviour is it really a shared\n> lock? I don't get it!\n\nROW SHARE is a table-level lock mode. SELECT FOR UPDATE grabs ROW SHARE\nlock on the table, *plus* an exclusive-write lock on the selected row(s).\nThe latter is what's conflicting for you.\n\nI think the code is okay, but the documentation could use some work...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Oct 2001 00:34:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LOCK ROW SHARE MODE "
}
] |
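Tom's distinction above — SELECT ... FOR UPDATE takes a *shared* table-level ROW SHARE lock plus an *exclusive* per-row write lock, and it is the row lock that blocks — can be sketched with a toy two-level lock table. This is an illustrative model with invented names, not PostgreSQL's actual lock manager:

```python
class ToyLockManager:
    """Toy model of SELECT ... FOR UPDATE: a shared table-level ROW
    SHARE lock plus an exclusive per-row lock."""

    def __init__(self):
        self.table_row_share = set()   # xacts holding ROW SHARE on the table
        self.row_owner = {}            # row id -> xact holding the row lock

    def select_for_update(self, xact, row):
        # ROW SHARE is shared: any number of xacts may hold it at once.
        self.table_row_share.add(xact)
        # The per-row lock is exclusive: a second xact must wait.
        owner = self.row_owner.get(row)
        if owner is not None and owner != xact:
            return "blocked"           # waits until the owner commits/aborts
        self.row_owner[xact if False else row] = xact if False else xact
        self.row_owner[row] = xact
        return "granted"

    def commit(self, xact):
        self.table_row_share.discard(xact)
        for row in [r for r, o in self.row_owner.items() if o == xact]:
            del self.row_owner[row]

lm = ToyLockManager()
print(lm.select_for_update("xact1", row=1))  # granted
print(lm.select_for_update("xact2", row=1))  # blocked: the row lock conflicts
print(lm.select_for_update("xact2", row=2))  # granted: ROW SHARE never did
```

So Chris's two sessions conflict on the row, not on the table mode — which is exactly the documentation gap Tom concedes.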
[
{
"msg_contents": "I've got a bit of a problem. I added a fast SIGALRM handler in my \nproject to do various maintenance and this broke PQconnectStart().\n\nOct 23 21:56:36 james BlueList: connectDBStart() -- connect() failed: \nInterrupted system call ^IIs the postmaster running (with -i) at \n'archives.blue-labs.org' ^Iand accepting connections on TCP/IP port 5432?\n\nPQstatus() returns CONNECTION_BAD, how can I reenter the connection \ncycle or delay, more like, how do I differentiate between an actual \nfailure to connect and an interruption by signal? My alarm timer \nhappens much too frequently for this code to make a connection and \nunfortunately I can't disable the alarm because it's used for bean \ncounting and other maintenance.\n\nThanks,\nDavid\n\nCode snippet:\n\n...\n /*\n * play some tricks now, use async connect mode to find if the server\n * is alive. once we've figured that out, disconnect and immediately\n * reconnect in blocking mode. this mitigates the annoying hangs from\n * using PQconnectdb which has no support for a timeout.\n */\n conn=PQconnectStart(cstr);\n if(!conn) {\n dlog(_LOG_debug, \"SQL conn is NULL, aborting\");\n return NULL;\n }\n \n do {\n c++;\n pgstat=PQstatus(conn);\n switch (pgstat) {\n case CONNECTION_STARTED:\n dlog(_LOG_debug, \"Connecting to SQL server...\");\n break;\n case CONNECTION_MADE:\n case CONNECTION_OK: \n dlog(_LOG_debug, \"Connected to SQL server in asynchronous \nmode...\");\n break;\n case CONNECTION_BAD:\n dlog(_LOG_debug, PQerrorMessage(conn));\n if(conn)\n PQfinish(conn);\n dlog(_LOG_warning, \"failed to connect to server\");\n return NULL;\n break;\n default:\n dlog(_LOG_debug, \"pg conx state = %i\", pgstat);\n break;\n }\n\n if(pgstat==CONNECTION_MADE||CONNECTION_OK)\n break;\n \n if(c>15) {\n if(conn)\n PQfinish(conn);\n dlog(_LOG_warning, \"failed to connect to server, timed out\");\n return NULL;\n }\n \n req.tv_sec=1;\n req.tv_nsec=0;\n sleep(&req); \n \n } while(1); \n \n /*\n * close it and 
reopen it in normal blocking mode\n */\n PQfinish(conn);\n conn=PQconnectdb(cstr);\n...\n\n\n",
"msg_date": "Wed, 24 Oct 2001 01:45:20 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": true,
"msg_subject": "PQconnectStart() and -EINTR"
},
{
"msg_contents": "David Ford <david@blue-labs.org> writes:\n\n> I've got a bit of a problem. I added a fast SIGALRM handler in my project to\n> do various maintenance and this broke PQconnectStart().\n> \n> \n> Oct 23 21:56:36 james BlueList: connectDBStart() -- connect() failed:\n> Interrupted system call ^IIs the postmaster running (with -i) at\n> 'archives.blue-labs.org' ^Iand accepting connections on TCP/IP port 5432?\n> \n> \n> PQstatus() returns CONNECTION_BAD, how can I reenter the connection cycle or\n> delay, more like, how do I differentiate between an actual failure to connect\n> and an interruption by signal? My alarm timer happens much too frequently for\n> this code to make a connection and unfortunately I can't disable the alarm\n> because it's used for bean counting and other maintenance.\n\nSounds like something in libpq needs to check for EINTR and reissue the\nconnect() call (or select()/poll() if it's a nonblocking connect()). \n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n",
"msg_date": "24 Oct 2001 12:27:37 -0400",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: PQconnectStart() and -EINTR"
},
{
"msg_contents": "David Ford <david@blue-labs.org> writes:\n> I've got a bit of a problem. I added a fast SIGALRM handler in my \n> project to do various maintenance and this broke PQconnectStart().\n\nIt'd probably be reasonable to just retry the connect() call if it\nfails with EINTR. If that works for you, send a patch...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Oct 2001 14:09:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PQconnectStart() and -EINTR "
}
] |
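The fix Tom and Doug converge on — reissue the interrupted system call instead of treating EINTR as a hard failure — has this general shape. The wrapper name and the flaky stub standing in for connect() are invented for the sketch; the actual patch would wrap the connect()/select() calls inside libpq's connectDBStart():

```python
import errno

def retry_on_eintr(func, *args):
    """Call func(*args), retrying as long as it fails with EINTR.

    Any other error is re-raised: only signal interruption is benign."""
    while True:
        try:
            return func(*args)
        except OSError as e:
            if e.errno != errno.EINTR:
                raise
            # Interrupted by a signal (e.g. a fast SIGALRM): reissue.

def flaky_connect(state):
    # Stub for connect(): fails with EINTR twice, then succeeds.
    state["calls"] += 1
    if state["calls"] <= 2:
        raise OSError(errno.EINTR, "Interrupted system call")
    return "connected"

state = {"calls": 0}
print(retry_on_eintr(flaky_connect, state))  # connected, after two retries
```

With this in place, David's frequent SIGALRM would no longer turn a slow connect into a spurious CONNECTION_BAD.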
[
{
"msg_contents": "I'm working on an application where it is necessary to make copies of large\nobjects, and now I wonder if it is safe\nto use this (symbolic, somewhat PHP like) code. Say I've a LOB with OID=1234\n\n$oid = db_exec(\"select lo_create(....)\")\ndb_exec(\"delete from pg_largeobject where loid=$oid\")\ndb_exec(\"insert into pg_largeobject select $oid, pageno, data from\npg_largeobject where loid=1234\")\n\nis this a safe way to accomplish this?\n\n\nAnd another question regarding large objects, as I see the objects are\norganized in units of 2048 bytes each. Can I somehow set this to a higher\nvalue like 8k or 32k (I use 32k pages).\n\nI'm using the latest 7.2 cvs version.\n\nHope someone of you can help me, thanks!\n\nBest regards,\n Mario Weilguni\n\n\n",
"msg_date": "Wed, 24 Oct 2001 11:11:03 +0200",
"msg_from": "\"mario\" <mw@sime.com>",
"msg_from_op": true,
"msg_subject": "Make a copy of a large object"
}
] |
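Mario's INSERT ... SELECT trick amounts to duplicating the (loid, pageno, data) rows of pg_largeobject under a new OID, with data chunked into fixed-size pages. Modeled on plain data structures — the page layout and the 2048-byte default are real, but the helper functions are invented for the sketch:

```python
LOBLKSIZE = 2048  # large-object page size (BLCKSZ / 4 with 8k blocks)

def lo_write(table, loid, data):
    # Split the byte string into LOBLKSIZE pages, as pg_largeobject does.
    for pageno in range((len(data) + LOBLKSIZE - 1) // LOBLKSIZE):
        table.append((loid, pageno,
                      data[pageno * LOBLKSIZE:(pageno + 1) * LOBLKSIZE]))

def lo_copy(table, src_loid, dst_loid):
    # Equivalent of: INSERT INTO pg_largeobject
    #   SELECT dst_loid, pageno, data FROM pg_largeobject WHERE loid = src
    table.extend((dst_loid, pageno, data)
                 for (loid, pageno, data) in table[:] if loid == src_loid)

def lo_read(table, loid):
    pages = sorted((p, d) for (l, p, d) in table if l == loid)
    return b"".join(d for _, d in pages)

pg_largeobject = []
lo_write(pg_largeobject, 1234, b"x" * 5000)   # 3 pages: 2048 + 2048 + 904
lo_copy(pg_largeobject, 1234, 5678)
print(lo_read(pg_largeobject, 5678) == lo_read(pg_largeobject, 1234))  # True
```

The approach works only because pg_largeobject's pages carry no state beyond (loid, pageno, data); whether direct DML on a system catalog is wise is a separate question the thread leaves open.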
[
{
"msg_contents": "\n> > > *very* slow, due to seq scan on\n> > > 20 million entries, which is a test setup up to now)\n> >\n> > Perennial first question: did you VACUUM ANALYZE?\n> \n> Can there, or could there, be a notion of \"rule based\" optimization of\n> queries in PostgreSQL? The \"not using index\" problem is probably the\nmost\n> common and most misunderstood problem.\n\nThere is a (sort of) rule based behavior in PostgreSQL, \nthe down side of the current implementation is, that certain \nother commands than ANALYZE (e.g. \"create index\") partly update \noptimizer statistics. This is bad behavior, since then only part \nof the statistics are accurate. Statistics always have to be seen \nin context to other table's and other index'es statistics. \n\nThus, currently the rule based optimizer only works if you create \nthe indexes on empty tables (before loading data), which obviously \nhas downsides. Else you have no choice but to ANALYZE frequently.\n\nI have tried hard to fight for this pseudo rule based behavior, \nbut was only partly successful in convincing core. My opinion is, \nthat (unless runtime statistics are kept) no other command than \nANALYZE should be allowed to touch optimizer relevant statistics \n(maybe unless explicitly told to).\n\nAndreas\n",
"msg_date": "Wed, 24 Oct 2001 12:10:30 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Index of a table is not used (in any case)"
},
{
"msg_contents": "Zeugswetter Andreas SB SD wrote:\n> \n> > > > *very* slow, due to seq scan on\n> > > > 20 million entries, which is a test setup up to now)\n> > >\n> > > Perennial first question: did you VACUUM ANALYZE?\n> >\n> > Can there, or could there, be a notion of \"rule based\" optimization of\n> > queries in PostgreSQL? The \"not using index\" problem is probably the\n> most\n> > common and most misunderstood problem.\n> \n> There is a (sort of) rule based behavior in PostgreSQL,\n> the down side of the current implementation is, that certain\n> other commands than ANALYZE (e.g. \"create index\") partly update\n> optimizer statistics. This is bad behavior, since then only part\n> of the statistics are accurate. Statistics always have to be seen\n> in context to other table's and other index'es statistics.\n> \n> Thus, currently the rule based optimizer only works if you create\n> the indexes on empty tables (before loading data), which obviously\n> has downsides. Else you have no choice but to ANALYZE frequently.\n> \n> I have tried hard to fight for this pseudo rule based behavior,\n> but was only partly successful in convincing core. My opinion is,\n> that (unless runtime statistics are kept) no other command than\n> ANALYZE should be allowed to touch optimizer relevant statistics\n> (maybe unless explicitly told to).\n\nPerhaps there could be an extension to ANALYZE, i.e. ANALYZE RULEBASED\ntablename that would restore or recalculate the state that a table would be if\nall indexes were created from scratch?\n\nThe \"not using index\" was very frustrating to understand. The stock answer,\n\"did you vacuum?\" just isn't enough. There has to be some explanation (in the\nFAQ or something) about the indexed key distribution in your data. 
Postgres'\nstatistics are pretty poor too, a relative few very populous entries in a table\nwill make it virtually impossible for the cost based optimizer (CBO) to use an\nindex.\n\nAt my site we have lots of tables that have many duplicate items in an index.\nIt is a music based site and has a huge amount of \"Various Artists\" entries. No\nmatter what we do, there is NO way to get Postgres to use the index from the\nquery alone. We have over 20 thousand artists, but 5 \"Various Artists\" or\n\"Soundtrack\" entries change the statistics so much that they exclude an index\nscan. We have to run the system with sequential scan disabled. Running with seq\ndisabled eliminates the usefulness of the CBO because when it is a justified\ntable scan, it does an index scan.\n\nI have approached this windmill before and a bit regretful at bringing it up\nagain, but it is important, very important. There needs to be a way to direct\nthe optimizer about how to optimize the query.\n\nUsing \"set foo=bar\" prior to a query is not acceptable. Web sites use\npersistent connections to the databases and since \"set\" can not be restored,\nyou override global settings for the session, or have to code, in the web page,\nthe proper default setting. The result is either that different web processes\nwill behave differently depending on the order in which they execute queries,\nor you have to have your DBA write web pages.\n\nA syntax like:\n\nselect * from table where /* enable_seqscan = false */ key = 'value';\n\nWould be great in that you could tune the optimizer as long as the settings\nwere for the clause directly following the directive, without affecting the\nstate of the session or transaction. 
For instance:\n\nselect id from t1, t2 where /* enable_seqscan = false */ t1.key = 'value' and\nt2.key = 'test' and t1.id = t2.id;\n\nThe where \"t1.key = 'value'\" condition would be prohibited from using a\nsequential scan, while the \"t2.key = 'test'\" would use it if it made sense.\n\nIs this possible?\n",
"msg_date": "Wed, 24 Oct 2001 08:59:34 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Index of a table is not used (in any case)"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> ... Postgres' statistics are pretty poor too, a relative few very\n> populous entries in a table will make it virtually impossible for the\n> cost based optimizer (CBO) to use an index.\n\nHave you looked at development sources lately?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Oct 2001 14:21:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Index of a table is not used (in any case) "
},
{
"msg_contents": "mlw writes:\n\n> The \"not using index\" was very frustrating to understand. The stock answer,\n> \"did you vacuum?\" just isn't enough. There has to be some explanation (in the\n> FAQ or something) about the indexed key distribution in your data.\n\nMost \"not using index\" questions seem to be related to a misunderstanding\nof users to the effect that \"if there is an index it must be used, no\nmatter what the query\", which is of course far from reality. Add to that\nthe (related) category of inquiries from people that think the index ought\nto be used but don't have any actual timings to show, and you have a lot of\npeople that just need to be educated.\n\nOf course the question \"did you vacuum\" (better, did you analyze) is\nannoying, just as the requirement to analyze is annoying in the first\nplace, but unless someone designs a better query planner it will have to\ndo. The reason why we always ask that question first is that people\ninvariably have not analyzed. A seasoned developer can often tell from\nthe EXPLAIN output whether ANALYZE has been done, but users cannot.\nPerhaps something can be done in this area, but I'm not exactly sure what.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 24 Oct 2001 23:55:42 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Index of a table is not used (in any case)"
}
] |
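The "Various Artists" problem mlw describes comes down to selectivity arithmetic: a few very common values inflate the estimated fraction of matching rows until an index scan never looks cheaper than a sequential scan. A rough sketch of that cost comparison, with made-up cost constants standing in for the planner's real cost model:

```python
def estimated_cost(total_rows, est_fraction,
                   seq_cost_per_row=1.0, index_cost_per_row=4.0):
    """Return (seqscan_cost, indexscan_cost) under toy cost constants.

    A seqscan always touches every row; an indexscan pays more per
    fetched row (random I/O) but only for the estimated matches."""
    seqscan = total_rows * seq_cost_per_row
    indexscan = total_rows * est_fraction * index_cost_per_row
    return seqscan, indexscan

def planner_choice(total_rows, est_fraction):
    seq, idx = estimated_cost(total_rows, est_fraction)
    return "indexscan" if idx < seq else "seqscan"

# A truly rare artist among 20,000 is highly selective...
print(planner_choice(20000, 1 / 20000))   # indexscan
# ...but if a handful of "Various Artists" rows makes the planner assume
# 40% of rows match, every lookup on that column looks like a seqscan:
print(planner_choice(20000, 0.40))        # seqscan
```

This is also why per-value statistics (the most-common-values lists Tom alludes to in the development sources) matter: with them, the rare-artist lookup and the "Various Artists" lookup get different estimates instead of one blended average.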
[
{
"msg_contents": "Hi all,\n\nI was just looking for the code which checks for the memory available on \nmachine before writing the data.\nAny related information will be appreciated.\n\nThanks,\nKKG\n\n_________________________________________________________________\nGet your FREE download of MSN Explorer at http://explorer.msn.com/intl.asp\n\n",
"msg_date": "Wed, 24 Oct 2001 15:47:42 +0530",
"msg_from": "\"Kiran Kumar Gahlot\" <kkgahlot@hotmail.com>",
"msg_from_op": true,
"msg_subject": "check for disk space"
}
] |
[
{
"msg_contents": "I'm working on an application where it is necessary to make copies of large\nobjects, and now I wonder if it is safe\nto use this (symbolic, somewhat PHP like) code. Say I've a LOB with OID=1234\n\n$oid = db_exec(\"select lo_create(....)\")\ndb_exec(\"delete from pg_largeobject where loid=$oid\")\ndb_exec(\"insert into pg_largeobject select $oid, pageno, data from\npg_largeobject where loid=1234\")\n\nis this a safe way to accomplish this?\n\n\nAnd another question regarding large objects, as I see the objects are\norganized in units of 2048 bytes each. Can I somehow set this to a higher\nvalue like 8k or 32k (I use 32k pages).\n\nI'm using the latest 7.2 cvs version.\n\nHope someone of you can help me, thanks!\n\nBest regards,\n Mario Weilguni\n\n\n\n",
"msg_date": "Wed, 24 Oct 2001 12:28:58 +0200",
"msg_from": "\"mario\" <mweilguni@sime.com>",
"msg_from_op": true,
"msg_subject": "copying a large object?"
},
{
"msg_contents": "\"mario\" <mweilguni@sime.com> writes:\n> And another question regarding large objects, as I see the objects are\n> organized in units of 2048 bytes each. Can I somehow set this to a higher\n> value like 8k or 32k (I use 32k pages).\n\nThen you've already got larger units, because the code is\n\n#define LOBLKSIZE (BLCKSZ / 4)\n\nI don't believe it'd be a good idea to try to make it larger than that,\nthough you're free to experiment...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Oct 2001 14:15:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: copying a large object? "
}
] |
[
{
"msg_contents": "\nIt seems Sybase has dropped the BETWEEN search condition. I thought\nit was part of SQL92, has it been dropped from the spec since then or\nwasn't it ever in there?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 24 Oct 2001 08:01:04 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": true,
"msg_subject": "between?"
},
{
"msg_contents": "> It seems Sybase has dropped the BETWEEN search condition. I thought\n> it was part of SQL92, has it been dropped from the spec since then or\n> wasn't it ever in there?\n\nIt is documented in every SQL book I have and I see it in our SQL99\ndocs. Are you *sure* Sybase dropped it? If so, then it presumably is\nmentioned in the release notes. What do they say about it??\n\n - Thomas\n",
"msg_date": "Wed, 24 Oct 2001 13:40:13 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: between?"
},
{
"msg_contents": "On Wed, 24 Oct 2001, Thomas Lockhart wrote:\n\n> > It seems Sybase has dropped the BETWEEN search condition. I thought\n> > it was part of SQL92, has it been dropped from the spec since then or\n> > wasn't it ever in there?\n>\n> It is documented in every SQL book I have and I see it in our SQL99\n> docs. Are you *sure* Sybase dropped it? If so, then it presumably is\n> mentioned in the release notes. What do they say about it??\n\nOne of the guys here said he saw it in the release notes, but I just\ntried it and it worked. I'm gonna have to find what he was looking\nat.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 24 Oct 2001 10:04:43 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": true,
"msg_subject": "Re: between?"
}
] |
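For reference, SQL's BETWEEN predicate is defined as an inclusive range test — `x BETWEEN lo AND hi` is exactly `x >= lo AND x <= hi` — so even if a product did drop the keyword, the rewrite would be mechanical. The semantics in miniature (function name invented):

```python
def sql_between(x, lo, hi):
    # x BETWEEN lo AND hi  ==  x >= lo AND x <= hi
    # Both bounds are inclusive, and the operands are NOT reordered
    # when lo > hi: such a range is simply empty.
    return lo <= x <= hi

print(sql_between(5, 1, 10))    # True
print(sql_between(10, 1, 10))   # True: the upper bound is inclusive
print(sql_between(5, 10, 1))    # False: BETWEEN does not swap its bounds
```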
[
{
"msg_contents": "Hi all,\n\nI was just looking for the code which checks for the memory available on\nmachine before writing the data.\nAny related information will be appreciated.\n\nThanks,\nKKG\n\n_________________________________________________________________\nGet your FREE download of MSN Explorer at http://explorer.msn.com/intl.asp\n\n",
"msg_date": "Wed, 24 Oct 2001 18:31:49 +0530",
"msg_from": "\"Kiran Kumar Gahlot\" <kkgahlot@hotmail.com>",
"msg_from_op": true,
"msg_subject": "check disk space"
}
] |
[
{
"msg_contents": "HELP\n\nleft outer join instruction working or not on POSTGRES 7\n\nZenon Karol\n\n\n\n\n",
"msg_date": "Wed, 24 Oct 2001 15:02:15 +0200",
"msg_from": "\"Zenon\" <anatol@raptor.bci.krakow.pl>",
"msg_from_op": true,
"msg_subject": "join instruction"
},
{
"msg_contents": "\"Zenon\" <anatol@raptor.bci.krakow.pl> writes:\n> left outer join intruction working or not on POSTGRES 7\n\nIt works in 7.1 or later.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Oct 2001 15:54:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: join instruction "
}
] |
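Since the question is only whether LEFT OUTER JOIN works (it does, from 7.1 on), here is what the operation computes, modeled over lists of rows: every left row appears at least once, with NULLs (None here) filling in for unmatched right-hand columns. The function name and sample data are invented:

```python
def left_outer_join(left, right, key):
    """Toy LEFT OUTER JOIN: each left row is paired with every matching
    right row, or with None when nothing on the right matches."""
    result = []
    for l in left:
        matches = [r for r in right if r[key] == l[key]]
        if matches:
            result.extend((l, r) for r in matches)
        else:
            result.append((l, None))   # unmatched left row survives
    return result

authors = [{"id": 1, "name": "ann"}, {"id": 2, "name": "bob"}]
books = [{"id": 1, "title": "sql"}]
for row in left_outer_join(authors, books, "id"):
    print(row)
# ({'id': 1, 'name': 'ann'}, {'id': 1, 'title': 'sql'})
# ({'id': 2, 'name': 'bob'}, None)
```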
[
{
"msg_contents": "\nThe minor featurette seems to have crept into current sources; it is\nprobably the cause of pg_dump being unable to reinstate disabled triggers.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 25 Oct 2001 01:14:22 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Can't cast bigint to smallint?"
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> The minor featurette seems to have crept into current sources; it is\n> probably the cause of pg_dump being unable to reinstate disabled triggers.\n\nHuh? There's never been a cast from int8 to int2. I checked 7.0 and\n7.1, they both complain as well:\n\ntest71=# select 8::int8::int2;\nERROR: Cannot cast type 'int8' to 'int2'\n\nWhere exactly is pg_dump failing?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Oct 2001 16:09:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Can't cast bigint to smallint? "
},
{
"msg_contents": "At 16:09 24/10/01 -0400, Tom Lane wrote:\n>\n>Huh? There's never been a cast from int8 to int2. I checked 7.0 and\n>7.1, they both complain as well:\n>\n\nIs this a policy decision, or just a case where noone has had a chance to\ndo it?\n\n\n>Where exactly is pg_dump failing?\n>\n\nThe problem in in the code to re-enable triggers:\n\n...reltriggers = (select Count(*)....\n\nSo perhaps this version now has Count returning a bigint rather than an int?\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 25 Oct 2001 09:33:05 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Can't cast bigint to smallint? "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At 16:09 24/10/01 -0400, Tom Lane wrote:\n>> Huh? There's never been a cast from int8 to int2. I checked 7.0 and\n>> 7.1, they both complain as well:\n\n> Is this a policy decision, or just a case where noone has had a chance to\n> do it?\n\nJust a missing feature. The code additions would be trivial --- but\nwould require an initdb to add the catalog entries. I'm loath to do it\nso close to beta.\n\n>> Where exactly is pg_dump failing?\n\n> The problem in in the code to re-enable triggers:\n> ...reltriggers = (select Count(*)....\n> So perhaps this version now has Count returning a bigint rather than an int?\n\nYes, that's what changed. Perhaps change the code to look like\n\t\t(select count(*)::integer ...\n\nOn the other hand, that's no answer for people trying to load existing\ndump files into 7.2.\n\nPerhaps we should just do another catalog update and not worry about it.\nWe just had one earlier this week, so I suppose another wouldn't make\nall that much difference. Comments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Oct 2001 19:41:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Can't cast bigint to smallint? "
},
{
"msg_contents": "At 19:41 24/10/01 -0400, Tom Lane wrote:\n>We just had one earlier this week, so I suppose another wouldn't make\n>all that much difference. Comments?\n\nMy pref would be for the initdb; the current situation may break (other)\nexisting apps.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 25 Oct 2001 09:49:06 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Can't cast bigint to smallint? "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> The problem in in the code to re-enable triggers:\n> ...reltriggers = (select Count(*)....\n> So perhaps this version now has Count returning a bigint rather than an int?\n\nOkay, I've added conversion functions for int8-to-int2 and vice versa.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 Oct 2001 10:13:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Can't cast bigint to smallint? "
}
] |
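The conversion function Tom added has to do more than truncate: a bigint-to-smallint cast must range-check, since int2 only covers -32768..32767, and silent truncation would quietly corrupt values like a large count(*). A sketch of the required behavior — the function name is made up, and the server-side version reports the error via elog rather than an exception:

```python
INT2_MIN, INT2_MAX = -32768, 32767

def int8_to_int2(value):
    """Range-checked cast from a 64-bit to a 16-bit integer.

    Silent truncation would turn e.g. 65536 into 0; an out-of-range
    input must be rejected instead."""
    if not (INT2_MIN <= value <= INT2_MAX):
        raise OverflowError("int8 value %d out of range for int2" % value)
    return value

print(int8_to_int2(4))   # 4: a trigger count small enough for reltriggers
```

pg_dump's `(select count(*) ...)::integer` workaround sidesteps the cast at the SQL level, but as Tom notes it does nothing for existing dump files, which is why adding the catalog functions was the chosen fix.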
[
{
"msg_contents": "If PostgreSQL failed to compile on your computer or you found a bug that\nis likely to be specific to one platform then please fill out this form\nand e-mail it to pgsql-ports@postgresql.org.\n\nTo report any other bug, fill out the form below and e-mail it to\npgsql-bugs@postgresql.org.\n\nIf you not only found the problem but solved it and generated a patch then\ne-mail it to pgsql-patches@postgresql.org instead. Please use the command\n\"diff -c\" to generate the patch.\n\nYou may also enter a bug report at http://www.postgresql.org/ instead of\ne-mailing this form.\n\n============================================================================\n POSTGRESQL BUG REPORT TEMPLATE\n============================================================================\n\n\nYour name : arguile\nYour email address : arguile@lucentstudios.com\n\n\nSystem Configuration\n---------------------\n Architecture (example: Intel Pentium) : Intel P3 Xeon\n\n Operating System (example: Linux 2.0.26 ELF) : Linux 2.2.19smp\n\n PostgreSQL version (example: PostgreSQL-7.1.3): PostgreSQL-7.1.3\n\n Compiler used (example: gcc 2.95.2) : gcc 2.95.3\n\n\nPlease enter a FULL description of your problem:\n------------------------------------------------\n\n\nSYNOPSIS:\n If a table field is altered to add a default, the default value is\nbypassed by pre-existing rules.\n\n\nDETAILS:\n Let's say we have an employee (overused yes, but bear with me) table and\nany changes to it are logged in a separate table. 
The changes are logged\nvia a bunch of RULEs.\n\n\tCREATE TABLE foo (id int);\n\tCREATE TABLE log (id int, date timestamp);\n\n\tCREATE RULE foo_insert AS\n\t ON INSERT TO foo DO\n\t INSERT INTO log (id) VALUES (new.id);\n\nWe insert a value and the rule is doing its job.\n\n\tINSERT INTO foo (id) VALUES (1);\n\nTracking changes is all well and good but when they occurred would be\nuseful so a timestamp field is added and is given the default of now().\n\n\tALTER TABLE log ALTER date SET DEFAULT now();\n\nWe then insert another record into the main table,\n\n\tINSERT INTO foo (id) VALUES (2);\n\nand are surprised to find out there's no timestamp in the date field. Just\nto test we insert a value directly into the log table, then another into\nour main table.\n\n\tINSERT INTO log (id) VALUES (3);\n\tINSERT INTO foo (id) VALUES (4);\n\nAt this point we'd expect the log to contain:\n\n id | date\n----+------------------------\n 1 |\n 2 | 0000-00-00 00:00:00-00\n 3 | 0000-00-00 00:00:00-00\n 4 | 0000-00-00 00:00:00-00\n\n\nInstead the INSERT in the RULE seems to somehow bypass the default value\nand we get this:\n\n id | date\n----+------------------------\n 1 |\n 2 |\n 3 | 0000-00-00 00:00:00-00\n 4 |\n\n\nIt didn't happen quite like that but you get the drift. As a side note, if\nyou add a NOT NULL constraint to the date (I know it's a SQL reserved word\nbut this is an example ;) field _that_ will be honoured and the system\nwill complain. It just seems to like ignoring defaults set after the fact.\n\nThanks for your time.\n\n\nPlease describe a way to repeat the problem. 
Please try to provide a\nconcise reproducible example, if at all possible:\n----------------------------------------------------------------------\n\n-- This doesn't work\n\nDROP TABLE foo; DROP TABLE log;\nCREATE TABLE foo (id int);\nCREATE TABLE log (id int, date timestamp);\nCREATE RULE foo_insert AS\n ON INSERT TO foo DO\n INSERT INTO log (id) VALUES (new.id);\nINSERT INTO foo (id) VALUES (1);\nALTER TABLE log ALTER date SET DEFAULT now(); -- alter after rule\nINSERT INTO foo (id) VALUES (2);\nINSERT INTO log (id) VALUES (3);\nINSERT INTO foo (id) VALUES (4);\nSELECT * FROM log;\n\n-- This does work\n\nDROP TABLE foo; DROP TABLE log;\nCREATE TABLE foo (id int);\nCREATE TABLE log (id int, date timestamp);\nALTER TABLE log ALTER date SET DEFAULT now(); -- alter before rule\nCREATE RULE foo_insert AS\n ON INSERT TO foo DO\n INSERT INTO log (id) VALUES (new.id);\nINSERT INTO foo (id) VALUES (1);\nINSERT INTO foo (id) VALUES (2);\nINSERT INTO log (id) VALUES (3);\nINSERT INTO foo (id) VALUES (4);\nSELECT * FROM log;\n\n\nIf you know how this problem might be fixed, list the solution below:\n---------------------------------------------------------------------\nI find 'em not fix 'em. :)\n\n",
"msg_date": "Wed, 24 Oct 2001 11:38:38 -0400 (EDT)",
"msg_from": "Arguile <arguile@lucentstudios.com>",
"msg_from_op": true,
"msg_subject": "New default ignored by pre-existing insert rulesets."
},
{
"msg_contents": "Arguile <arguile@lucentstudios.com> writes:\n> If a table field is altered to add a default, the default value is\n> bypassed by pre-existing rules.\n\nYeah, this problem has been known for awhile (to me at least). The\ndifficulty is that default values are added to INSERTs by the parser,\nwhich is before rule creation and expansion. So the saved info about\nthe rule already has all the defaults it's gonna get. What's worse,\nit won't track changes in existing defaults (though I'm not sure we\nsupport altering defaults, anyway). If I do\n\nregression=# create table foo (f1 int default 1, f2 int default 2);\nCREATE\nregression=# create view v1 as select * from foo;\nCREATE\nregression=# create rule v1i as on insert to v1 do instead\nregression-# insert into foo values(new.f1);\nCREATE\nregression=# select pg_get_ruledef('v1i');\n pg_get_ruledef\n\n--------------------------------------------------------------------------------------------\n CREATE RULE v1i AS ON INSERT TO v1 DO INSTEAD INSERT INTO foo (f1, f2) VALUES (new.f1, 2);\n(1 row)\n\nthen I can see that the defaults have crept into what's stored for the\nrule.\n\nI believe the best fix for this is to move default-insertion out of the\nparser and do it during planning, instead --- probably at the same\nplace that manipulates the insert's targetlist to match the column\nordering of the table. A possible objection is that default expressions\nwouldn't be subject to rule manipulation, but we don't have any feature\nthat requires that anyway.\n\nComments anyone?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Oct 2001 18:41:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New default ignored by pre-existing insert rulesets. "
},
{
"msg_contents": "I said:\n> Arguile <arguile@lucentstudios.com> writes:\n>> If a table field is altered to add a default, the default value is\n>> bypassed by pre-existing rules.\n\n> I believe the best fix for this is to move default-insertion out of the\n> parser and do it during planning, instead\n\nI have committed fixes for this into the 7.2 sources.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Nov 2001 15:42:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New default ignored by pre-existing insert rulesets. "
}
] |
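The parse-time vs. plan-time distinction Tom describes can be sketched outside the backend. The following is a hypothetical Python model, not PostgreSQL code: a rule that snapshots column defaults when it is created behaves exactly like the stored rule shown by pg_get_ruledef, while one that looks defaults up when the insert is planned picks up a later ALTER ... SET DEFAULT.

```python
# Hypothetical model (not backend code) of default-expansion timing.
defaults = {}  # column -> default expression, standing in for pg_attrdef

def create_rule(columns):
    # Parser-style: defaults are filled in NOW and stored with the rule.
    return {col: defaults.get(col) for col in columns}

def plan_insert(columns):
    # Planner-style: defaults are looked up each time the query is planned.
    return {col: defaults.get(col) for col in columns}

stored_rule = create_rule(["id", "date"])  # rule created before the ALTER
defaults["date"] = "now()"                 # ALTER TABLE log ALTER date SET DEFAULT now()

assert stored_rule["date"] is None                     # the bug: snapshot misses it
assert plan_insert(["id", "date"])["date"] == "now()"  # the fix sees it
```

The same model explains why recreating the rule after the ALTER (the "this does work" script) behaves correctly: the snapshot is simply taken after the default exists.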
[
{
"msg_contents": " Hi all,\n\nI need to perform a tree traversal on a big table (millions of rows).\nTo avoid recursive queries, one for each non-leaf node, this table has,\nin addition to its 70 columns, a VARCHAR(30000) column that is used\nexclusively to sort the rows with the required order. The actual content\nlength in that column is expected to be, on average, much less than the\ndeclared limit and the text will be composed of digits and letters only.\n\nPlease, are there any restrictions about using such a wide column to\norder a table? Can an index on that column help?\n\nI'm running PostgreSQL 7.1.2, on Linux 2.2.16, compiled with options:\n--prefix=/usr/local/pgsql --enable-locale --enable-multibyte\n\n\nRegards,\n\nAntonio Sergio\n\n",
"msg_date": "Wed, 24 Oct 2001 11:51:53 -0400",
"msg_from": "Antonio Sergio de Mello e Souza <asergioz@bol.com.br>",
"msg_from_op": true,
"msg_subject": "Index on wide column"
},
{
"msg_contents": "Antonio Sergio de Mello e Souza <asergioz@bol.com.br> writes:\n> I need to perform a tree traversal on a big table (millions of rows).\n> To avoid recursive queries, one for each non-leaf node, this table has,\n> in addition to its 70 columns, a VARCHAR(30000) column that is used\n> exclusively to sort the rows with the required order. The actual content\n> length in that column is expected to be, on average, much less than the\n> declared limit and the text will be composed of digits and letters only.\n\nAre there any entries that will actually approach 30000 chars?\n\n> Please, are there any restrictions about using such a wide column to\n> order a table?\n\nNo.\n\n> Can an index on that column help?\n\nbtree indexes can't cope with index entries wider than 1/3 page, so\nyou'd probably find that building a btree index fails, if there really\nare 30k-wide entries in the column. This limit is squishy because the\nentries can be TOAST-compressed, but you're not likely to get 12:1\ncompression. You could improve matters by increasing BLCKSZ to 32K,\nhowever; then you'd only need 3:1 compression, which might work\ndepending on how repetitive the column data is.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Oct 2001 14:29:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Index on wide column "
}
] |
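Tom's one-third-of-a-page limit is easy to check with arithmetic. A quick sketch — the 8K default page size and the 32K BLCKSZ he suggests come from the thread; the rest is just division:

```python
# Arithmetic behind the btree limit: an index entry must fit in ~BLCKSZ/3.
entry_bytes = 30000  # worst case for the VARCHAR(30000) column

for blcksz in (8192, 32768):
    limit = blcksz // 3
    compression_needed = entry_bytes / limit
    print(f"BLCKSZ={blcksz}: ~{limit} bytes per entry, "
          f"need ~{compression_needed:.1f}:1 compression")
```

With the default 8K pages this works out to roughly 11:1, which is why a full 30000-character entry is hopeless; at 32K pages the required ~2.7:1 is within reach for repetitive data — matching the 12:1 vs. 3:1 figures in the reply.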
[
{
"msg_contents": "\nI have been asked to run pgindent in preparation for beta starting\ntomorrow. In this run, I will also reformat the jdbc files as agreed to\nby the jdbc list. I don't have much time to wait before starting the\npgindent run. I hope people don't have outstanding patches sitting\naround.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Oct 2001 13:55:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "pgindent run"
}
] |
[
{
"msg_contents": "Is something amiss with the CVS server? I'm running an strace to watch \na cvs update and it's forbiddingly slow. It zooms along until it slams \ninto a brick wall for minutes, sometimes 10+ minutes, then it flies on.\n\nDavid\n\n\n",
"msg_date": "Wed, 24 Oct 2001 16:52:37 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": true,
"msg_subject": "CVS server stumbling?"
},
{
"msg_contents": "\nYes, I have seen this too today.\n\n> Is something amiss with the CVS server? I'm running an strace to watch \n> a cvs update and it's forbiddingly slow. It zooms along until it slams \n> into a brick wall for minutes, sometimes 10+ minutes, then it flys on.\n> \n> David\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Oct 2001 20:32:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] CVS server stumbling?"
},
{
"msg_contents": "\nSee notice on pgsql-announce concerning our upcoming move ... :)\n\nOn Wed, 24 Oct 2001, David Ford wrote:\n\n> Is something amiss with the CVS server? I'm running an strace to watch\n> a cvs update and it's forbiddingly slow. It zooms along until it slams\n> into a brick wall for minutes, sometimes 10+ minutes, then it flys on.\n>\n> David\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n",
"msg_date": "Wed, 24 Oct 2001 21:15:38 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: CVS server stumbling?"
}
] |
[
{
"msg_contents": "I have been looking at the way that deferred triggers slow down when the\nsame row is updated multiple times within a transaction. The problem\nappears to be entirely due to calling deferredTriggerGetPreviousEvent()\nto find the trigger list entry for the previous update of the row: we\ndo a linear search, so the behavior is roughly O(N^2) when there are N\nupdated rows.\n\nThe only reason we do this is to enforce the \"triggered data change\nviolation\" restriction of the spec. However, I think we've\nmisinterpreted the spec. The code prevents an RI referenced value from\nbeing changed more than once in a transaction, but what the spec\nactually says is thou shalt not change it more than once per\n*statement*. We have discussed this several times in the past and\nI think people have agreed that the current behavior is wrong,\nbut nothing's been done about it.\n\nI think all we need to do to implement things correctly is to consider a\nprevious event only if both xmin and cmin of the old tuple match the\ncurrent xact & command IDs, rather than considering it on the basis of\nxmin alone.\n\nAside from being correct, this will make a significant difference in\nperformance. If we were doing it per spec then\ndeferredTriggerGetPreviousEvent would never be called in typical\noperations, and so its speed wouldn't be an issue. Moreover, if we do\nit per spec then completed trigger event records could be removed from\nthe trigger list at end of statement, rather than keeping them till end\nof transaction, which'd save memory space.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Oct 2001 18:13:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "\"Triggered data change violation\", once again"
},
{
"msg_contents": "On Wed, 24 Oct 2001, Tom Lane wrote:\n\n> The only reason we do this is to enforce the \"triggered data change\n> violation\" restriction of the spec. However, I think we've\n> misinterpreted the spec. The code prevents an RI referenced value from\n> being changed more than once in a transaction, but what the spec\n> actually says is thou shalt not change it more than once per\n> *statement*. We have discussed this several times in the past and\n> I think people have agreed that the current behavior is wrong,\n> but nothing's been done about it.\n> \n> I think all we need to do to implement things correctly is to consider a\n> previous event only if both xmin and cmin of the old tuple match the\n> current xact & command IDs, rather than considering it on the basis of\n> xmin alone.\n\nAre there any things that might update the command ID during the execution\nof the statement from inside functions that are being run? I really don't\nunderstand the details of how that works (which is the biggest reason I\nhaven't yet tackled some of the big remaining broken stuff in the\nreferential actions, because AFAICT we need to be able to update a row\nthat matched at the beginning of the statement, not the ones that match\nat the time the triggers run). \n\n",
"msg_date": "Wed, 24 Oct 2001 17:39:10 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: \"Triggered data change violation\", once again"
},
{
"msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n>> I think all we need to do to implement things correctly is to consider a\n>> previous event only if both xmin and cmin of the old tuple match the\n>> current xact & command IDs, rather than considering it on the basis of\n>> xmin alone.\n\n> Are there any things that might update the command ID during the execution\n> of the statement from inside functions that are being run?\n\nFunctions can run new commands that get new command ID numbers within\nthe current transaction --- but on return from the function, the current\ncommand number is restored. I believe rows inserted by such a function\nwould look \"in the future\" to us at the outer command, and would be\nignored.\n\nActually, now that I think about it, the MVCC rules are that tuples with\nxmin = currentxact are not visible unless they have cmin < currentcmd.\nNot equal to. This seems to render the entire \"triggered data change\"\ntest moot --- I rather suspect that we cannot have such a condition\nas old tuple cmin = currentcmd at all, and so we could just yank all\nthat code entirely.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Oct 2001 20:49:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: \"Triggered data change violation\", once again "
},
{
"msg_contents": "\nOn Wed, 24 Oct 2001, Tom Lane wrote:\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> >> I think all we need to do to implement things correctly is to consider a\n> >> previous event only if both xmin and cmin of the old tuple match the\n> >> current xact & command IDs, rather than considering it on the basis of\n> >> xmin alone.\n> \n> > Are there any things that might update the command ID during the execution\n> > of the statement from inside functions that are being run?\n> \n> Functions can run new commands that get new command ID numbers within\n> the current transaction --- but on return from the function, the current\n> command number is restored. I believe rows inserted by such a function\n> would look \"in the future\" to us at the outer command, and would be\n> ignored.\n> \n> Actually, now that I think about it, the MVCC rules are that tuples with\n> xmin = currentxact are not visible unless they have cmin < currentcmd.\n> Not equal to. This seems to render the entire \"triggered data change\"\n> test moot --- I rather suspect that we cannot have such a condition\n> as old tuple cmin = currentcmd at all, and so we could just yank all\n> that code entirely.\n\nI'm not sure if this sequence would be an example of something that\nwould be disallowed, but if I do something like:\n\nMake a plpgsql function that does update table1 set key=1 where key=2;\nMake that an after update trigger on table1\nPut a key=1 row into table1\nUpdate table1 to set key to 2\n\nI end up with a 1 in the table. I'm not sure, but I think that such\na case would be possible through the fk stuff with triggers that modify \nthe primary key table (right now it might \"work\" due to the problems\nof checking intermediate states). Wouldn't this be the kind of thing\nthe \"triggered data change\" is supposed to prevent? I may be just\nmisunderstanding the intent of the spec.\n\n",
"msg_date": "Wed, 24 Oct 2001 18:10:14 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: \"Triggered data change violation\", once again "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> >> I think all we need to do to implement things correctly is to consider a\n> >> previous event only if both xmin and cmin of the old tuple match the\n> >> current xact & command IDs, rather than considering it on the basis of\n> >> xmin alone.\n> \n> > Are there any things that might update the command ID during the execution\n> > of the statement from inside functions that are being run?\n> \n> Functions can run new commands that get new command ID numbers within\n> the current transaction --- but on return from the function, the current\n> command number is restored. I believe rows inserted by such a function\n> would look \"in the future\" to us at the outer command, and would be\n> ignored.\n\nI'm suspicious whether this is reasonable. If those changes are ignored,\nwhen are they taken into account? ISTM deferred constraints have to see\nthe latest tuples and take the changes into account. \n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 25 Oct 2001 13:49:16 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: \"Triggered data change violation\", once again"
}
] |
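The O(N^2) behavior and the proposed (xmin, cmin) restriction can be modeled without touching the backend. A hypothetical Python sketch — not the actual trigger-queue code — showing why a linear scan per updated row is quadratic over a transaction while a same-statement keyed lookup is constant:

```python
# Invented model of the deferred-trigger event list (not the real code).
def previous_event_linear(events, ctid):
    # Like deferredTriggerGetPreviousEvent: scan every queued event.
    for pos, (etid, _xid, _cid) in enumerate(events):
        if etid == ctid:
            return pos
    return None

def previous_event_same_command(index, ctid, xid, cid):
    # Per-spec check: only an event from the same statement can match,
    # so key the lookup on (tuple, xmin, cmin) instead of scanning.
    return index.get((ctid, xid, cid))

events = [(t, 1, t) for t in range(1000)]   # 1000 updates, one per command
index = {(t, 1, t): t for t in range(1000)}

assert previous_event_linear(events, 999) == 999               # O(N) per lookup
assert previous_event_same_command(index, 999, 1, 999) == 999  # single hash probe
assert previous_event_same_command(index, 999, 1, 42) is None  # other command: no match
```

The last assertion is the crux of Tom's point: under the per-statement rule a previous event from an earlier command in the same transaction simply never matches, so the expensive search (and the retained event records) can go away.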
[
{
"msg_contents": "I was wondering if Jan's work on SPI portal creation of prepared/saved\nplans has something to do with caching query plans?\n\nthx in advance.\n\n\n",
"msg_date": "Thu, 25 Oct 2001 01:37:44 +0200",
"msg_from": "\"Christian Meunier\" <jelan@magelo.com>",
"msg_from_op": true,
"msg_subject": "Cache query plan.."
}
] |
[
{
"msg_contents": "OK, I see my email got through to the list. Running pgindent now and\nwill commit changes.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Oct 2001 20:34:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "pgindent run"
}
] |
[
{
"msg_contents": "Just noticed this QT software:\n\nhttp://www.globecom.net/tora/\n\nIt's a very lovely administrative tool for Oracle. I wonder if anyone would\nbe interested in porting it to Postgres?\n\nDon't think many of the funky administrative functions can be achieved\nremotely in Postgres yet tho?\n\nChris\n\n",
"msg_date": "Thu, 25 Oct 2001 12:54:35 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "TOra"
}
] |
[
{
"msg_contents": "\nIs there a way to automatically ROLLBACK transactions that are \nin \"idle in transaction\" for too long ?\n\nI remember it has been discussed on this list, but what is the current\nstatus ?\n\nThis is a problem that has haunted me on several web applications using \napplication servers that have persistent connections (Zope, apache-php\nwith \npersistent connections)\n\n-------------------\nHannu\n",
"msg_date": "Thu, 25 Oct 2001 10:18:20 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "timeout for \"idle in transaction\""
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Is there a way to automatically ROLLBACK transactions that are \n> in \"idle in transaction\" for too long ?\n\nNope, we don't have anything for that. Not clear to me that it's\nappropriate as a server-side function anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 Oct 2001 08:49:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout for \"idle in transaction\" "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <hannu@tm.ee> writes:\n> > Is there a way to automatically ROLLBACK transactions that are\n> > in \"idle in transaction\" for too long ?\n> \n> Nope, we don't have anything for that. Not clear to me that it's\n> appropriate as a server-side function anyway.\n\nThis can't be done from the client side and we do have other types of \ndeadlock detection on server side so this seems quite appropriate \nfrom where I stand.\n\nI guess it would be quite nice to have as a connection-level setting, \nso that things that benefit from it can set it to some reasonable \nvalue while others that want to behave unsocially can do it as well ;)\n\nThe default could be 1-3 sec of idle time in transaction for typical \nclient-server and web apps while command line clients (like psql) could \nset it to something more automatically.\n\n-----------------\nHannu\n",
"msg_date": "Thu, 25 Oct 2001 16:13:58 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: timeout for \"idle in transaction\""
}
] |
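Hannu's proposed connection-level setting can be sketched as a toy Python model. The class and attribute names here are invented for illustration — PostgreSQL had no such knob at the time of this thread — and a fake clock is injected so the policy is easy to exercise:

```python
import time  # real callers would use time.monotonic; the demo injects a fake clock

class Connection:
    """Toy model of a backend with an idle-in-transaction timeout."""

    def __init__(self, idle_timeout, clock=time.monotonic):
        self.idle_timeout = idle_timeout  # the hypothetical per-connection setting
        self.clock = clock
        self.in_transaction = False
        self.last_activity = clock()

    def begin(self):
        self.in_transaction = True
        self.last_activity = self.clock()

    def check_idle(self):
        # Would run periodically in the server; ROLLBACK if idle too long.
        if self.in_transaction and self.clock() - self.last_activity > self.idle_timeout:
            self.in_transaction = False  # the automatic ROLLBACK
            return "rolled back"
        return "ok"

now = [0.0]
conn = Connection(idle_timeout=3.0, clock=lambda: now[0])  # Hannu's 1-3 s default
conn.begin()
now[0] = 2.0
assert conn.check_idle() == "ok"           # still inside the budget
now[0] = 6.0
assert conn.check_idle() == "rolled back"  # idle too long: transaction aborted
```

Unsocial clients, in this sketch, would simply construct their connection with a large idle_timeout, as suggested for psql-style interactive use.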
[
{
"msg_contents": "I have run pgindent on the C files and run pgjindent on the jdbc files\nas requested by the jdbc list. You can package up beta now. I will\nupdate the HISTORY file tomorrow with recent changes.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 Oct 2001 02:02:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "pgindent run"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I have run pgindent on the C files and run pgjindent on the jdbc files\n> as requested by the jdbc list. You can package up beta now. I will\n> update the HISTORY file tomorrow with recent changes.\n\nPlease hold on that packaging until I add the int2<->int8 cast functions\nthat Philip pointed out pg_dump needs. Will have it done in an hour or\ntwo.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 Oct 2001 08:46:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run "
},
{
"msg_contents": "\nD'oh ...\n\nOkay, will hold off on packaging, but have already tag'd it ...\n\nIf we aren't putting that Packaging stuff into v7.2, can we get it into\nbeta as contrib also? Before I do the first packaging of the beta?\n\nOn Thu, 25 Oct 2001, Tom Lane wrote:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I have run pgindent on the C files and run pgjindent on the jdbc files\n> > as requested by the jdbc list. You can package up beta now. I will\n> > update the HISTORY file tomorrow with recent changes.\n>\n> Please hold on that packaging until I add the int2<->int8 cast functions\n> that Philip pointed out pg_dump needs. Will have it done in an hour or\n> two.\n>\n> \t\t\tregards, tom lane\n>\n",
"msg_date": "Thu, 25 Oct 2001 08:55:36 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run "
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> If we aren'g putting that Packaging stuff into v7.2, can we get it into\n> beta as contrib also? Before I do the first packagingof the beta?\n\nUh ... what?\n\nI just meant to wait a little bit on wrapping the tarball while I make\nthis last(?) catalog update. I don't know of anything that should go\ninto contrib.\n\nI saw you updated the version tag in configure, but aren't there three\nor four other places that need work to brand the version number?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 Oct 2001 09:39:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run "
},
{
"msg_contents": "On Thu, 25 Oct 2001, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > If we aren'g putting that Packaging stuff into v7.2, can we get it into\n> > beta as contrib also? Before I do the first packagingof the beta?\n>\n> Uh ... what?\n>\n> I just meant to wait a little bit on wrapping the tarball while I make\n> this last(?) catalog update. I don't know of anything that should go\n> into contrib.\n>\n> I saw you updated the version tag in configure, but aren't there three\n> or four other places that need work to brand the version number?\n\nNot that I've ever changed ... I know that Bruce does a bunch of docs\nrelated stuff, like in HISTORY and whatnot ...\n\n",
"msg_date": "Thu, 25 Oct 2001 09:43:52 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run "
},
{
"msg_contents": "> On Thu, 25 Oct 2001, Tom Lane wrote:\n> \n> > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > If we aren'g putting that Packaging stuff into v7.2, can we get it into\n> > > beta as contrib also? Before I do the first packagingof the beta?\n> >\n> > Uh ... what?\n> >\n> > I just meant to wait a little bit on wrapping the tarball while I make\n> > this last(?) catalog update. I don't know of anything that should go\n> > into contrib.\n> >\n> > I saw you updated the version tag in configure, but aren't there three\n> > or four other places that need work to brand the version number?\n> \n> Not that I've ever changed ... I know that Bruce does a bunch of docs\n> related stuff, like in HISTORY and whatnot ...\n\nI noticed that SELECT version() shows:\n\n\ttest=> select version();\n\t version \n\t----------------------------------------------------------------\n\t PostgreSQL 7.2devel on i386-pc-bsdi4.2, compiled by GCC 2.95.2\n\t ^^^^^\n\t(1 row)\t\n\nI see this in configure.in and am applying a patch to make it 7.2.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 Oct 2001 11:51:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run"
},
{
"msg_contents": "> On Thu, 25 Oct 2001, Tom Lane wrote:\n> \n> > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > If we aren'g putting that Packaging stuff into v7.2, can we get it into\n> > > beta as contrib also? Before I do the first packagingof the beta?\n> >\n> > Uh ... what?\n> >\n> > I just meant to wait a little bit on wrapping the tarball while I make\n> > this last(?) catalog update. I don't know of anything that should go\n> > into contrib.\n> >\n> > I saw you updated the version tag in configure, but aren't there three\n> > or four other places that need work to brand the version number?\n> \n> Not that I've ever changed ... I know that Bruce does a bunch of docs\n> related stuff, like in HISTORY and whatnot ...\n\nLooks like Marc already got configure.in:\n\n\tVERSION='7.2b1'\n\nI will work on HISTORY now but you don't have to wait for me for beta1.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 Oct 2001 12:03:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run"
}
] |
[
{
"msg_contents": "\nHi!\n\nI say kind of, as I am not sure about it, or whether there is a newer\nversion that does not show up the bug. Here's the description:\n\nWhen I use the following request (either on psql or using JDBC), the\nbackend crashes, making the other backends fail spectacularly.\n\nThe beast:\n\nselect S.last_stat, hdb_comfort as \"Confort Haut Débit\" , U.url as item\nfrom url_stats S, urls U where S.idzone = 9999 and S.idurl=U.idurl and\n\nS.idurl in (543888938, -776493094) and last_stat between '2001-09-24\n16:15:00.704' and '2001-10-25 00:00:00.0' union select\ntrunc_3hour(last_stat) as last_stat, avg(hdb_comfort) as \"Confort Haut\nDébit\", idcontact::varchar(512) as item from url_stats S,urls U, reports\nR where S.idzone = 9999 and S.last_stat between '2001-09-24\n16:15:00.704' and '2001-10-25 00:00:00.0' and S.idurl=u.idurl and\nr.idurl=u.idurl and (status=1 or status=5) and (idcontact in\n(-431758079)) group by idcontact, trunc_3hour(last_stat) order by\nlast_stat;\n\n(sorry about that ;-)\n\nI have three (interesting for the example) tables:\n\nTable url_stats ( hdb_comfort int, last_stat timestamp, idurl int,\nidzone int, [...] )\nTable urls ( idurl int, url varchar(512), status int [...] )\nTable reports ( idurl int, idcontact int, [...] )\n\nThere are indices, called:\nident_url\nurl_by_id\n both on table urls (idurl)\nurl_by_status\n on table urls (status)\n\nFor table url_stats, they are quite straightforward:\nIndices: stat_by_idurl,\n stat_by_idurl_idzone_laststat,\n stat_by_idurl_last_stat\n\nFunction timestamp trunc_3hour (timestamp) returns the year, month, day\nfields intact, minutes and seconds to zero, and hour /3 *3 (so as I only\nget 00:00:00, 03:00:00, 06:00:00, 09:00:00, ... 21:00:00).\n\nWell, now you have all the elements.\n\nAn explain select ... 
shows:\nUnique (cost=41329.35..41329.56 rows=3 width=32)\n -> Sort (cost=41329.35..41329.35 rows=28 width=32)\n -> Append (cost=0.00..41328.66 rows=28 width=32)\n -> Nested Loop (cost=0.00..41222.22 rows=28 width=32)\n -> Seq Scan on urls u (cost=0.00..68.31\nrows=1431 width=16)\n -> Index Scan using stat_by_idurl_idzone_laststat\non url_stats s (cost=0.00..28.75 rows=1 width=16)\n -> Aggregate (cost=106.44..106.44 rows=0 width=28)\n -> Group (cost=106.44..106.44 rows=1 width=28)\n -> Sort (cost=106.44..106.44 rows=1\nwidth=28)\n -> Nested Loop (cost=0.00..106.43\nrows=1 width=28)\n -> Nested Loop\n(cost=0.00..52.11 rows=2 width=12)\n -> Index Scan using\nurl_by_contact on reports r (cost=0.00..13.26 rows=19 width=8)\n -> Index Scan using\nurl_by_id on urls u (cost=0.00..2.02 rows=1 width=4)\n -> Index Scan using\nstat_by_idurl_idzone_laststat on url_stats s (cost=0.00..28.71 rows=6\nwidth=16)\n\n\nWould the verbose query plan be useful? I can send it to you if needed.\n\n\nAbout the version:\n$ psql --version\npsql (PostgreSQL) 7.0.3\ncontains readline, history, multibyte support\n\nI firmly believe that it's a RedHat compiled version.\n\nI do not wish to upgrade, if it is not absolutely required, as I have\nabout 2Gb data and availability is a main concern.\n\nMore information:\n\nIf I execute (from psql) the two parts of the union separately, none\ncrashes. If I do that into tables temp1 and temp2, which were not\npreviously created and I issue \"select * from temp1 union select * from\ntemp2;\" it does not crash either.\n\nThe other clients tell me that the backend wishes them to reconnect, as\nanother backend died and shared memory could be corrupted. 
The crashing\none just says pgReadData() -- the backend closed the connection\nunexpectedly, or something close to this.\n\nIf you have some clues, or some other way of writing the request without\ndramatically turning performance to unacceptable limits, anything will\nbe welcome.\n\nThe url_stats table contains 1500000+ tuples (I do not dare select\ncount(*) from url_stats ;-), urls contains 1000+ and reports contains\nabout 5000 (not sure, but >1000 and <100000).\n\nIf you believe that upgrading could lead us to a notable performance\nincrease, we may study the situation.\n\nThank you for reading my e-mail.\n\nThank you very, very much for answering it.\n\nYours,\n\nAntonio Fiol\nW3ping\n\n",
"msg_date": "Thu, 25 Oct 2001 10:43:17 +0200",
"msg_from": "Antonio Fiol =?iso-8859-1?Q?Bonn=EDn?= <fiol@w3ping.com>",
"msg_from_op": true,
"msg_subject": "Kind of \"bug-report\""
},
{
"msg_contents": "Antonio Fiol =?iso-8859-1?Q?Bonn=EDn?= <fiol@w3ping.com> writes:\n> I say kind of, as I am not sure about it, or whether there is a newer\n> version that does not show up the bug. Here's the description:\n\nPlease update to 7.1.3 and let us know whether you still see the\nproblem. We fixed a number of problems with UNION in 7.1.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 Oct 2001 08:55:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Kind of \"bug-report\" "
}
] |
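The trunc_3hour() function Antonio describes — keep the date fields, zero out minutes and seconds, round the hour down to a multiple of three — is easy to restate outside SQL. This is our reconstruction from his description, not his actual function:

```python
from datetime import datetime

def trunc_3hour(ts: datetime) -> datetime:
    # Year/month/day intact; minutes and seconds zeroed; hour // 3 * 3.
    return ts.replace(hour=ts.hour // 3 * 3, minute=0, second=0, microsecond=0)

# Timestamps from the query's range fall into 3-hour buckets like this:
assert trunc_3hour(datetime(2001, 9, 24, 16, 15, 0)) == datetime(2001, 9, 24, 15, 0)
assert trunc_3hour(datetime(2001, 10, 25, 2, 59, 59)) == datetime(2001, 10, 25, 0, 0)
assert trunc_3hour(datetime(2001, 10, 25, 21, 1, 0)) == datetime(2001, 10, 25, 21, 0)
```

Only the eight bucket values 00:00 through 21:00 can ever be produced, which is what makes the GROUP BY on trunc_3hour(last_stat) in the crashing query well-defined.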
[
{
"msg_contents": "\n> Of course the question \"did you vacuum\" (better, did you analyze) is\n> annoying, just as the requirement to analyze is annoying in the first\n> place, but unless someone designs a better query planner it \n> will have to do. The reason why we always ask that question first is \n> that people invariantly have not analyzed.\n\nI think it is also not always useful to ANALYZE. There are applications\nthat choose optimal plans with only the rudimentary statistics VACUUM \ncreates. And even such that use optimal plans with only the default \nstatistics in place.\n\nImho one of the biggest sources for problems is people creating new\nindexes on populated tables when the rest of the db/table has badly\noutdated statistics or even only default statistics in place.\nIn this situation the optimizer is badly misguided, because it now\nsees completely inconsistent statistics to work on.\n(e.g. old indexes on that table may seem way too cheap compared \nto table scan) \n\nI would thus propose a more distinguished approach of writing \nthe statistics gathered during \"create index\" to the system tables.\n\nSomething like:\nif (default stats in place)\n write defaults\nelse if (this is the only index)\n write gathered statistics\nelse \n write only normalized statistics for index\n (e.g. index.reltuples = table.reltuples;\n index.relpages = (index.gathered.relpages * \n table.relpages / table.gathered.relpages)\n\nAndreas\n",
"msg_date": "Thu, 25 Oct 2001 12:04:33 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Index of a table is not used (in any case)"
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> Imho one of the biggest sources for problems is people creating new\n> indexes on populated tables when the rest of the db/table has badly\n> outdated statistics or even only default statistics in place.\n> In this situation the optimizer is badly misguided, because it now\n> sees completely inconsistent statistics to work on.\n> (e.g. old indexes on that table may seem way too cheap compared \n> to table scan) \n\nI don't think any of this is correct. We don't have per-index\nstatistics. The only stats updated by CREATE INDEX are the same ones\nupdated by plain VACUUM, viz the number-of-tuples and number-of-pages\ncounts in pg_class. I believe it's reasonable to update those stats\nmore often than the pg_statistic stats (in fact, if we could keep them\nconstantly up-to-date at a reasonable cost, we'd do so). The\npg_statistic stats are designed as much as possible to be independent\nof the absolute number of rows in the table, so that it's okay if they\nare out of sync with the pg_class stats.\n\nThe major reason why \"you vacuumed but you never analyzed\" is such a\nkiller is that in the absence of any pg_statistic data, the default\nselectivity estimates are such that you may get either an index or seq\nscan depending on how big the table is. The cost estimates are\nnonlinear (correctly so, IMHO, though I wouldn't necessarily defend the\nexact shape of the curve) and ye olde default 0.01 will give you an\nindexscan for a small table but not for a big one. In 7.2 I have\nreduced the default selectivity estimate to 0.005, for a number of\nreasons but mostly to get it out of the range where the decision will\nflip-flop. 
Observe:\n\ntest71=# create table foo (f1 int);\nCREATE\ntest71=# create index fooi on foo(f1);\nCREATE\ntest71=# explain select * from foo where f1 = 42;\nNOTICE: QUERY PLAN:\n\nIndex Scan using fooi on foo (cost=0.00..8.14 rows=10 width=4)\n\nEXPLAIN\ntest71=# select reltuples,relpages from pg_class where relname = 'foo';\n reltuples | relpages\n-----------+----------\n 1000 | 10\n(1 row)\n\nEXPLAIN\ntest71=# update pg_class set reltuples = 100000, relpages = 1000 where relname = 'foo';\nUPDATE 1\ntest71=# explain select * from foo where f1 = 42;\nNOTICE: QUERY PLAN:\n\nIndex Scan using fooi on foo (cost=0.00..1399.04 rows=1000 width=4)\n\nEXPLAIN\ntest71=# update pg_class set reltuples = 1000000, relpages = 10000 where relname = 'foo';\nUPDATE 1\ntest71=# explain select * from foo where f1 = 42;\nNOTICE: QUERY PLAN:\n\nSeq Scan on foo (cost=0.00..22500.00 rows=10000 width=4)\n\nEXPLAIN\ntest71=#\n\nIn current sources you keep getting an indexscan as you increase the\nnumber of tuples...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 Oct 2001 09:19:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Index of a table is not used (in any case) "
}
] |
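Tom's flip-flop point can be sketched with a toy cost model. Everything below is invented for illustration and is not PostgreSQL's actual cost equations; it only mimics their shape (a seqscan cost that is linear in table size, and an index-scan cost whose per-fetch component grows nonlinearly as the table outruns cache). With the old default selectivity of 0.01 the cheaper plan flips from index to seq scan as the table grows; with 0.005 the index scan stays cheaper throughout.

```python
# Toy model of the index-vs-seqscan decision under a default selectivity
# estimate. All constants are invented; only the shape of the curves is
# meant to resemble the planner's.

TUPLES_PER_PAGE = 100

def seq_cost(tuples):
    """Sequential scan: read every page, plus per-tuple CPU cost."""
    pages = max(1, tuples // TUPLES_PER_PAGE)
    return pages * 1.0 + tuples * 0.01

def index_cost(tuples, selectivity):
    """Index scan: fetch the estimated matching tuples. The per-fetch
    cost climbs toward full random I/O as the table outgrows cache,
    which is what makes this curve nonlinear."""
    matches = selectivity * tuples
    per_fetch = 0.01 + 3.49 * min(1.0, tuples / 100_000)
    return 2.0 + matches * per_fetch

def chosen_plan(tuples, selectivity):
    """Pick whichever estimated cost is lower."""
    return "index" if index_cost(tuples, selectivity) < seq_cost(tuples) else "seq"

if __name__ == "__main__":
    for n in (1_000, 100_000, 1_000_000):
        print(n, "sel 0.01 ->", chosen_plan(n, 0.01),
              "| sel 0.005 ->", chosen_plan(n, 0.005))
```

Run as-is, the 0.01 column switches from "index" to "seq" as the row count grows, while the 0.005 column does not; that is the flip-flop behavior the lower default is meant to avoid.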
[
{
"msg_contents": "Hi All,\n\n my database server has very high load in this morning.\nI've found the problem. One of my index was not used so far!\nit's interesting:\n\naddb=> \\d banners\n Table \"banners\"\n Attribute | Type | Modifier \n------------+--------------------------+----------------------------------------------------\n b_no | integer | not null default nextval('banners_b_no_seq'::text)\n usr_no | integer | \n b_ext | character varying(10) | \n b_link | character varying(100) | \n b_from | date | \n b_to | date | \n b_lastview | timestamp with time zone | default now()\n b_maxview | integer | \n b_curview | integer | default 0\n b_maxclick | integer | \n b_curclick | integer | default 0\n b_weight | integer | default 1\n b_curwg | double precision | default 0\n b_active | boolean | default 'f'::bool\n last_upd | timestamp with time zone | default now()\n upd_usr | integer | \n b_name | character varying(40) | \nIndices: b_usr_no_idx,\n banners_b_no_key\n\naddb=> EXPLAIN SELECT b_link FROM banners WHERE b_no = 3;\nNOTICE: QUERY PLAN:\n\nSeq Scan on banners (cost=0.00..1.57 rows=1 width=12)\n\nEXPLAIN\naddb=> DROP INDEX banners_b_no_key;\nDROP\naddb=> CREATE INDEX banners_b_no_key ON banners (b_no);\nCREATE\naddb=> EXPLAIN SELECT b_link FROM banners WHERE b_no = 3;\nNOTICE: QUERY PLAN:\n\nIndex Scan using banners_b_no_key on banners (cost=0.00..4.43 rows=1 width=12)\n\nEXPLAIN\naddb=> \n\nWhy index wasn't used ?\npostgresql-7.1.2, redhat 7.0, kernel:2.2.19\n\nThanks, Gabor\n\n",
"msg_date": "Thu, 25 Oct 2001 14:01:14 +0200",
"msg_from": "\"Gabor Csuri\" <gcsuri@auto999.com>",
"msg_from_op": true,
"msg_subject": "Index not used ! Why?"
},
{
"msg_contents": "> Hello!\n> It needs some help by the command\n> VACUUM [VERBOSE] ANALYZE table;\n> to choose the ideal query strategy.\n\nHow can I choose better query strategy than ...WHERE key_field = x; ?\n\nRegards, Gabor.\n\n",
"msg_date": "Thu, 25 Oct 2001 14:49:38 +0200",
"msg_from": "\"Gabor Csuri\" <gcsuri@auto999.com>",
"msg_from_op": true,
"msg_subject": "Re: Index not used ! Why?"
},
{
"msg_contents": "> my database server has very high load in this morning.\n> I've found the problem. One of my index was not used so far!\n> it's interesting:\n> ...\n> addb=> CREATE INDEX banners_b_no_key ON banners (b_no);\n> CREATE\n> addb=> EXPLAIN SELECT b_link FROM banners WHERE b_no = 3;\n> NOTICE: QUERY PLAN:\n>\n> Index Scan using banners_b_no_key on banners (cost=0.00..4.43\n> rows=1 width=12)\n>\n> EXPLAIN\n> addb=>\n>\n> Why index wasn't used ?\n> postgresql-7.1.2, redhat 7.0, kernel:2.2.19\n\nTry to create a unique index :\nCREATE UNIQUE INDEX banners_b_no_key ON banners (b_no);\nor specify a primary key :\nALTER TABLE banners ADD CONSTRAINT pk_banners PRIMARY KEY (b_no);\n\nthen ANALYZE your table ....\n\n-- Nicolas --\n\nWe ( me and my teammate ) try to create a little graphical client for\nPostgreSQL in Java. If someone want to try it :\nhttp://pgInhaler.ifrance.com. It's an alpha version with lots of bugs... Try\nit and send us your feedback to pginhaler@ifrance.com... Thanx...\n\n",
"msg_date": "Thu, 25 Oct 2001 15:31:50 +0200",
"msg_from": "\"Nicolas Verger\" <nicolas@verger.net>",
"msg_from_op": false,
"msg_subject": "Re: Index not used ! Why? + Little graphical client ..."
}
] |
[
{
"msg_contents": "Tom Lane writes:\n> \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> > Imho one of the biggest sources for problems is people creating new\n> > indexes on populated tables when the rest of the db/table has badly\n> > outdated statistics or even only default statistics in place.\n> > In this situation the optimizer is badly misguided, because it now\n> > sees completely inconsistent statistics to work on.\n> > (e.g. old indexes on that table may seem way too cheap compared \n> > to table scan) \n> \n> I don't think any of this is correct. We don't have per-index\n> statistics. The only stats updated by CREATE INDEX are the same ones\n> updated by plain VACUUM, viz the number-of-tuples and number-of-pages\n> counts in pg_class.\n\n1. Have I said anything about other stats, than relpages and reltuples ?\n\n2. There is only limited use in the most accurate pg_statistics if\nreltuples\nand relpages is completely off. In the current behavior you eg get:\n\nrel1: pages = 100000\t-- updated from \"create index\"\nindex1 pages = 2\t\t-- outdated\nindex2 pages = 2000\t-- current\n\nrel2: pages = 1\t\t-- outdated\n\n--> Optimizer will prefer join order: rel2, rel1\n\n> I believe it's reasonable to update those stats\n> more often than the pg_statistic stats (in fact, if we could keep them\n> constantly up-to-date at a reasonable cost, we'd do so).\n\nThere is a whole lot of difference between keeping them constantly up to\n\ndate and modifying (part of) them in the \"create index\" command, so I do\n\nnot counter your above sentence, but imho the conclusion is wrong.\n\n> The\n> pg_statistic stats are designed as much as possible to be independent\n> of the absolute number of rows in the table, so that it's okay if they\n> are out of sync with the pg_class stats.\n\nIndependently, they can only be good for choosing whether to use an \nindex or seq scan. 
They are not sufficient to choose a good join order.\n\n> The major reason why \"you vacuumed but you never analyzed\" is such a\n> killer is that in the absence of any pg_statistic data, the default\n> selectivity estimates are such that you may get either an index or seq\n> scan depending on how big the table is. The cost estimates are\n> nonlinear (correctly so, IMHO, though I wouldn't necessarily \n> defend the\n> exact shape of the curve) and ye olde default 0.01 will give you an\n> indexscan for a small table but not for a big one. In 7.2 I have\n> reduced the default selectivity estimate to 0.005, for a number of\n> reasons but mostly to get it out of the range where the decision will\n> flip-flop.\n\nYes, the new selectivity is better, imho even still too high.\nImho the strategy should be to assume a good selectivity\nof values in absence of pg_statistics evidence.\nIf the index was not selective enough for an average query, the\ndba should not have created the index in the first place.\n\n> test71=# create table foo (f1 int);\n> test71=# create index fooi on foo(f1);\n> test71=# explain select * from foo where f1 = 42;\n\n> Index Scan using fooi on foo (cost=0.00..8.14 rows=10 width=4)\n\n> test71=# update pg_class set reltuples = 100000, relpages = \n> 1000 where relname = 'foo';\n> Index Scan using fooi on foo (cost=0.00..1399.04 rows=1000 width=4)\n\n> test71=# update pg_class set reltuples = 1000000, relpages = \n> 10000 where relname = 'foo';\n\n> Seq Scan on foo (cost=0.00..22500.00 rows=10000 width=4)\n\n> In current sources you keep getting an indexscan as you increase the\n> number of tuples...\n\nAs you can see it toppeled at 10 Mio rows :-(\n\nAndreas\n",
"msg_date": "Thu, 25 Oct 2001 16:24:25 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Index of a table is not used (in any case) "
}
] |
[
{
"msg_contents": "Hello all\n\nI asked this question a while back but got no response - is there any way of \ncreating a Java stored procedure in a postgres database ? I can see that \nthere is a built-in PL/sql type of environment and a python one but it would \nbe nice if I could migrate Java stored procedures in an Oracle database into \npostgres.\n\nAny comments?\n\nChris\n\n Posted Via Usenet.com Premium Usenet Newsgroup Services\n----------------------------------------------------------\n ** SPEED ** RETENTION ** COMPLETION ** ANONYMITY **\n---------------------------------------------------------- \n http://www.usenet.com\n",
"msg_date": "25 Oct 2001 09:46:58 -0500",
"msg_from": "tweekie <None@news.tht.net>",
"msg_from_op": true,
"msg_subject": "java virtual machine"
},
{
"msg_contents": "--- tweekie <None@news.tht.net> wrote:\n> I asked this question a while back but got no response - is there any way of \n> creating a Java stored procedure in a postgres database ?\n\nAFAIR, there was an answer to your question by Bruce,\nand it was that PostgreSQL does not support Java \nas a PL. The only languages are PL/pgSQL, PL/Python,\nPL/Perl and PL/TCL if I'm not mistaken.\n\n> I can see that \n> there is a built-in PL/sql type of environment and a python one but it would \n> be nice if I could migrate Java stored procedures in an Oracle database into \n> postgres.\n> Any comments?\n\nWrite PL/Java one day :)\n\n-s\n\n__________________________________________________\nDo You Yahoo!?\nMake a great connection at Yahoo! Personals.\nhttp://personals.yahoo.com\n",
"msg_date": "Thu, 25 Oct 2001 12:19:29 -0700 (PDT)",
"msg_from": "Serguei Mokhov <serguei_mokhov@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: java virtual machine"
},
{
"msg_contents": "* tweekie <None@news.tht.net> wrote:\n|\n| I asked this question a while back but got no response - is there any way of \n| creating a Java stored procedure in a postgres database ? I can see that \n| there is a built-in PL/sql type of environment and a python one but it would \n| be nice if I could migrate Java stored procedures in an Oracle database into \n| postgres.\n| \n| Any comments?\n\n\nIt would rock ;-) An Hungarian guy just sent a mail indicating that he\nhad a first prototype version of something with Kaffe up and running.\nBut I believe there is a lot of issues to be solved, especially\nthreading issues...\n\n-- \nGunnar R�nning - gunnar@polygnosis.com\nSenior Consultant, Polygnosis AS, http://www.polygnosis.com/\n",
"msg_date": "26 Oct 2001 03:05:49 +0200",
"msg_from": "Gunnar =?iso-8859-1?q?R=F8nning?= <gunnar@polygnosis.com>",
"msg_from_op": false,
"msg_subject": "Re: java virtual machine"
},
{
"msg_contents": "> > I asked this question a while back but got no response - is there any way of \n> > creating a Java stored procedure in a postgres database ?\n> \n> AFAIR, there was an answer to your question by Bruce,\n> and it was that PostgreSQL does not support Java \n> as a PL. The only languages are PL/pgSQL, PL/Python,\n> PL/Perl and PL/TCL if I'm not mistaken.\n\nPL/Ruby too. ;)\n\n> > I can see that \n> > there is a built-in PL/sql type of environment and a python one but it would \n> > be nice if I could migrate Java stored procedures in an Oracle database into \n> > postgres.\n> > Any comments?\n> \n> Write PL/Java one day :)\n\n::shudder:: -sc\n",
"msg_date": "Mon, 29 Oct 2001 21:52:21 -0800",
"msg_from": "Sean Chittenden <sean@chittenden.org>",
"msg_from_op": false,
"msg_subject": "Re: java virtual machine"
}
] |
[
{
"msg_contents": "\n... is now packaged ... mirrors will pick it up soon, but if anyone wants\nto do a quick check, its in /pub/beta ...\n\n\n\n",
"msg_date": "Thu, 25 Oct 2001 12:48:08 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "7.2b1 ..."
},
{
"msg_contents": "Marc G. Fournier writes:\n\n> ... is now packaged ... mirrors will pick it up soon, but if anyone wants\n> to do a quick check, its in /pub/beta ...\n\nWhat ever happened to 7.2beta1?\n\nSorry, but the inconsistency in naming of releases and CVS tags (if ever\nthere would be any) is driving me nuts.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Thu, 25 Oct 2001 20:41:44 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: 7.2b1 ..."
},
{
"msg_contents": "\nCVS tags have been conssitent since v7.1 ...\n\n\nOn Thu, 25 Oct 2001, Peter Eisentraut wrote:\n\n> Marc G. Fournier writes:\n>\n> > ... is now packaged ... mirrors will pick it up soon, but if anyone wants\n> > to do a quick check, its in /pub/beta ...\n>\n> What ever happened to 7.2beta1?\n>\n> Sorry, but the inconsistency in naming of releases and CVS tags (if ever\n> there would be any) is driving me nuts.\n>\n> --\n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n>\n>\n\n",
"msg_date": "Thu, 25 Oct 2001 15:13:58 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: 7.2b1 ..."
},
{
"msg_contents": "Marc G. Fournier writes:\n\n> CVS tags have been conssitent since v7.1 ...\n\nAllow me to consider that a joke:\n\nREL7_2_BETA1\nREL7_1_STABLE\nREL7_1_BETA3\nREL7_1_BETA2\nREL7_1_BETA\nREL7_1_2\nREL7_1\n\nSo,\n\nWhere is REL7_1_1? Where is REL7_1_BETA1? What does REL7_1_BETA belong\nto? What ever happened to beta4 thru beta6 or so, and rc1 through rc3?\nWhat is the CVS tag for 7.2b1? What release is the tag REL7_2_BETA1 for?\n\nAnd if there is a 7.2b1, where is 7.2a1? And do you realise that in the\nGNU tradition, a release 7.2b would be a beta release leading to 7.3?\n\nAnd I won't even start talking about the names of the ChangeLog files\nwhich are fortunately gone.\n\nAll of this requires just a minute of thought and will save countless\npeople a headache.\n\nPlease.\n\n\n>\n>\n> On Thu, 25 Oct 2001, Peter Eisentraut wrote:\n>\n> > Marc G. Fournier writes:\n> >\n> > > ... is now packaged ... mirrors will pick it up soon, but if anyone wants\n> > > to do a quick check, its in /pub/beta ...\n> >\n> > What ever happened to 7.2beta1?\n> >\n> > Sorry, but the inconsistency in naming of releases and CVS tags (if ever\n> > there would be any) is driving me nuts.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Fri, 26 Oct 2001 20:41:16 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: 7.2b1 ..."
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> Marc G. Fournier writes:\n> \n> > CVS tags have been conssitent since v7.1 ...\n> \n> Allow me to consider that a joke:\n> \n> REL7_2_BETA1\n> REL7_1_STABLE\n> REL7_1_BETA3\n> REL7_1_BETA2\n> REL7_1_BETA\n> REL7_1_2\n> REL7_1\n> \n> So,\n> \n> Where is REL7_1_1? Where is REL7_1_BETA1? What does REL7_1_BETA belong\n> to? What ever happened to beta4 thru beta6 or so, and rc1 through rc3?\n> What is the CVS tag for 7.2b1? What release is the tag REL7_2_BETA1 for?\n\nYou could also consider the above as an IQ test ;)\n\n> And if there is a 7.2b1, where is 7.2a1? And do you realise that in the\n> GNU tradition, a release 7.2b would be a beta release leading to 7.3?\n\nAnd in linux kernel tradition there would be no non-beta 7.3 and the\nbeta \nfor 7.2 would be 7.1.299 or something, and there would also be numerous \n\"brown paper bag\" releases ;)\n\n> And I won't even start talking about the names of the ChangeLog files\n> which are fortunately gone.\n> \n> All of this requires just a minute of thought and will save countless\n> people a headache.\n\nAs the result of \"a minute of thought\" is heavily dependent of the\nthinker\nI suggest that you do a writeup of yours, enumerating the rules for both \ninternal (code and CVS tags) and external development, alpha, beta and\nrelease\nnumbering and naming as well as rules for when and how to apply them.\n\nIf you come up with something that all thinkers can agree, I'm sure it\nwill \nbe used from now on.\n\n--------------------\nHannu\n",
"msg_date": "Sat, 27 Oct 2001 01:14:29 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: 7.2b1 ..."
},
{
"msg_contents": "On Friday 26 October 2001 04:14 pm, Hannu Krosing wrote:\n> And in linux kernel tradition there would be no non-beta 7.3 and the\n> beta\n> for 7.2 would be 7.1.299 or something, and there would also be numerous\n> \"brown paper bag\" releases ;)\n\nWe have had our share of 'brown paper bag' releases, too. And they serve a \nuseful purpose -- keeping us on our toes and reminded that even the best and \nbrightest open source development team around can make mistakes.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 26 Oct 2001 19:42:53 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: 7.2b1 ..."
},
{
"msg_contents": "...\n> If you come up with something that all thinkers can agree, I'm sure it\n> will be used from now on.\n\nI *think* that somewhere there is a list of \"things to do\" to prepare a\nrelease. If that isn't in the sgml doc set, it should be. And if it\ndoesn't mention the naming convention for beta and release labels, it\nshould.\n\nPeter or someone, do you want to collect that stuff (with useful\nadditions) and make a chapter or appendix in the developer's docs?\n\n - Thomas\n",
"msg_date": "Sat, 27 Oct 2001 00:32:51 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: 7.2b1 ..."
},
{
"msg_contents": "On Thursday 25 October 2001 12:48 pm, Marc G. Fournier wrote:\n> ... is now packaged ... mirrors will pick it up soon, but if anyone wants\n> to do a quick check, its in /pub/beta ...\n\nAttempting to build an initial RPMset here.... Will upload when I get a good \nbuild -- although I may have to release without the contrib tree packaged, \ndue to build errors.\n\nStay tuned for the latest.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 26 Oct 2001 21:04:54 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: 7.2b1 ..."
},
{
"msg_contents": "> ...\n> > If you come up with something that all thinkers can agree, I'm sure it\n> > will be used from now on.\n> \n> I *think* that somewhere there is a list of \"things to do\" to prepare a\n> release. If that isn't in the sgml doc set, it should be. And if it\n> doesn't mention the naming convention for beta and release labels, it\n> should.\n\nIt is tools/RELEASE_CHANGES.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 26 Oct 2001 21:38:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2b1 ..."
},
{
"msg_contents": "Is there some formal place to make comments on how 7.2b1 works? I'm about\nto run it through it's paces on OBSD. Or is this just a 'it's broked'\ntesting time?\n\n- Brandon\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n",
"msg_date": "Sat, 27 Oct 2001 19:32:24 -0400 (EDT)",
"msg_from": "bpalmer <bpalmer@crimelabs.net>",
"msg_from_op": false,
"msg_subject": "Re: 7.2b1 ..."
},
{
"msg_contents": "Hannu Krosing writes:\n\n> You could also consider the above as an IQ test ;)\n\nThe only problem is that computers have been shown to have an IQ of zero.\n\n> I suggest that you do a writeup of yours, enumerating the rules for\n> both internal (code and CVS tags) and external development, alpha,\n> beta and release numbering and naming as well as rules for when and\n> how to apply them.\n\nThe rules have been the same for as long as memory serves. The\ndevelopment tree is first labeled as betaX for a few consecutive X, then\nrcX a few times, then follows a release, and the numbering scheme of the\nreleases is well known. We've more recently introduced labeling the\ndevelopment tree itself as \"devel\".\n\nThe problem appears to be that the people that perform these actions do\nnot fully understand the scope of the issues the come with those actions,\nand therefore perform them carelessly. (If you don't believe \"careless\",\nthe commit message that changed the version to 7.2b1 is less than one line\nand contains two obvious spelling mistakes.)\n\nFor example, release numbers ought to sort lexicographically. There are\njust too many tools that would prefer this. Yet, this issue is ignored\ncompletely.\n\nRelease making should be reproduceable -- without race conditions. This\nwould at least require a CVS tag for every release, and a reliable way to\npackage the documentation with the rest of the source.\n\nPeople need to understand the meaning of the release names. There are\nobviously way too many release numbering schemes out there, few of which I\nlike. But in the history of PostgreSQL, there has never been a release\ncalled X.Yb1. 
I have currently no confidence that the next release won't\nbe called X.YBeta2, to mess up all chanced of anything sorting correctly.\n\nIn a sense, making a release is a change in the source code, and if it's\ndone in novel ways it should be discussed first.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 28 Oct 2001 13:30:17 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: 7.2b1 ..."
},
{
"msg_contents": "Lamar Owen writes:\n\n> Attempting to build an initial RPMset here.... Will upload when I get a good\n> build -- although I may have to release without the contrib tree packaged,\n> due to build errors.\n\nDid you get all the patches I sent you? These should have the contrib\ntree covered. If you plan to release the \"initial\" RPM set without\nanything remotely similar to those patches you'll probably run into a\nboatload of problems.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 28 Oct 2001 13:31:17 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: 7.2b1 ..."
},
{
"msg_contents": "On Sunday 28 October 2001 07:31 am, Peter Eisentraut wrote:\n> Lamar Owen writes:\n> > Attempting to build an initial RPMset here.... Will upload when I get a\n> > good build -- although I may have to release without the contrib tree\n> > packaged, due to build errors.\n\n> Did you get all the patches I sent you? These should have the contrib\n> tree covered. \n\nGot them; applied them; tweaked them (not involving contrib); got the \nfollowing:\n\nmake[1]: Entering directory \n`/usr/src/redhat/BUILD/postgresql-7.2b1/contrib/fuzzystrmatch'\nsed 's,MODULE_PATHNAME,$libdir/fuzzystrmatch,g' fuzzystrmatch.sql.in \n>fuzzystrmatch.sql\ngcc -O2 -march=i386 -mcpu=i686 -Wall -Wmissing-prototypes \n-Wmissing-declarations -fpic -I. -I../../src/include -I/usr/kerberos/include \n-c -o fuzzystrmatch.o fuzzystrmatch.c\nfuzzystrmatch.c: In function `_metaphone':\nfuzzystrmatch.c:345: parse error before `return'\nmake[1]: *** [fuzzystrmatch.o] Error 1\nmake[1]: Leaving directory \n`/usr/src/redhat/BUILD/postgresql-7.2b1/contrib/fuzzystrmatch'\nmake: *** [all] Error 2\n\nLooking at it, but with a transmitter not running right here it could be a \nfew days before I get back to it.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 29 Oct 2001 15:44:12 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: 7.2b1 ..."
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> gcc -O2 -march=i386 -mcpu=i686 -Wall -Wmissing-prototypes \n> -Wmissing-declarations -fpic -I. -I../../src/include -I/usr/kerberos/include \n> -c -o fuzzystrmatch.o fuzzystrmatch.c\n> fuzzystrmatch.c: In function `_metaphone':\n> fuzzystrmatch.c:345: parse error before `return'\n> make[1]: *** [fuzzystrmatch.o] Error 1\n> make[1]: Leaving directory \n> `/usr/src/redhat/BUILD/postgresql-7.2b1/contrib/fuzzystrmatch'\n> make: *** [all] Error 2\n\nThis is a bug introduced by Bruce's recent pgindent run, not an RPM\npackaging issue. I believe the fix is in CVS already.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 29 Oct 2001 16:58:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2b1 ... "
},
{
"msg_contents": "On Monday 29 October 2001 04:58 pm, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > gcc -O2 -march=i386 -mcpu=i686 -Wall -Wmissing-prototypes\n> > -Wmissing-declarations -fpic -I. -I../../src/include\n> > -I/usr/kerberos/include -c -o fuzzystrmatch.o fuzzystrmatch.c\n> > fuzzystrmatch.c: In function `_metaphone':\n> > fuzzystrmatch.c:345: parse error before `return'\n> > make[1]: *** [fuzzystrmatch.o] Error 1\n> > make[1]: Leaving directory\n> > `/usr/src/redhat/BUILD/postgresql-7.2b1/contrib/fuzzystrmatch'\n> > make: *** [all] Error 2\n\n> This is a bug introduced by Bruce's recent pgindent run, not an RPM\n> packaging issue. I believe the fix is in CVS already.\n\nOk.\n\nI'll patch only what I have to patch to get a build of 7.2b1, or I might as \nwell call any resultant RPMset postgresql-7.x.cvs20011029 or somesuch.\n\nAt least it was something simple.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 29 Oct 2001 17:29:01 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: 7.2b1 ..."
},
{
"msg_contents": "> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > gcc -O2 -march=i386 -mcpu=i686 -Wall -Wmissing-prototypes \n> > -Wmissing-declarations -fpic -I. -I../../src/include -I/usr/kerberos/include \n> > -c -o fuzzystrmatch.o fuzzystrmatch.c\n> > fuzzystrmatch.c: In function `_metaphone':\n> > fuzzystrmatch.c:345: parse error before `return'\n> > make[1]: *** [fuzzystrmatch.o] Error 1\n> > make[1]: Leaving directory \n> > `/usr/src/redhat/BUILD/postgresql-7.2b1/contrib/fuzzystrmatch'\n> > make: *** [all] Error 2\n> \n> This is a bug introduced by Bruce's recent pgindent run, not an RPM\n> packaging issue. I believe the fix is in CVS already.\n\nYes, fixed today. It was a macro that was called with no trailing\nsemicolon.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Oct 2001 21:37:03 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2b1 ..."
}
] |
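Peter's point that release names ought to sort lexicographically is easy to demonstrate with the tags he lists: a plain string sort puts the final release and the point release before their own betas. The sketch below also shows one possible sort key that recovers chronological order; the tag grammar in the regex is guessed from the tags quoted in this thread, nothing more.

```python
import re

# The tags quoted in the thread, in true chronological order.
CHRONOLOGICAL = ["REL7_1_BETA", "REL7_1_BETA2", "REL7_1_BETA3",
                 "REL7_1", "REL7_1_2"]

# A plain lexicographic sort gets the order backwards: the release
# and the point release sort BEFORE the betas that preceded them.
assert sorted(CHRONOLOGICAL) == ["REL7_1", "REL7_1_2", "REL7_1_BETA",
                                 "REL7_1_BETA2", "REL7_1_BETA3"]

_STAGE = {"BETA": 0, "RC": 1}           # prereleases sort before the release

def tag_key(tag):
    """Hypothetical sort key for tags of the form RELx_y[_BETAn|_RCn|_z]."""
    m = re.fullmatch(r"REL(\d+)_(\d+)(?:_(BETA|RC)(\d*)|_(\d+))?", tag)
    major, minor = int(m.group(1)), int(m.group(2))
    if m.group(3):                      # a prerelease of X.Y
        return (major, minor, 0, _STAGE[m.group(3)], int(m.group(4) or 1))
    micro = int(m.group(5) or 0)        # X.Y itself, or point release X.Y.Z
    return (major, minor, micro, 2, 0)

if __name__ == "__main__":
    print(sorted(CHRONOLOGICAL, key=tag_key))
```

The key sorts on a tuple (major, minor, micro, stage, stage number), so beta < rc < release falls out of the stage rank rather than the spelling of the tag.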
[
{
"msg_contents": "\nHello,\n\nI've tried to search some postgresql mailing lists and i haven't found\nan answer to my problem...\n\nI create a function with the setof keyword...\n\ncreate function employee(int) returns setof employee as 'select * from\nemployee where $1 = id'\nlanguage 'sql';\n\n\nInstead of returning a tuple, I get this:\n\n ?column?\n-----------\n 136491256\n \nI tried exchanging \"$1\" and \"id\" but the thing did not yet work. I\nreplaced the \"*\" with the actual fields in my table and it still would\nnot work. \n\nWhat could be the problem? By the way, I use postgreseql 7.1.3\n\nThanks!\n\nCarlo Florendo\n",
"msg_date": "Thu, 25 Oct 2001 15:55:26 -0400",
"msg_from": "fcarlo@ntsp.nec.co.jp",
"msg_from_op": true,
"msg_subject": "inquiry using create function"
}
] |
[
{
"msg_contents": "I have updated the HISTORY file to be current as of today. Marc, it may\nbe nice to repackage beta1 with that one file changed, but my guess is\nthat we will have a beta2 soon enough.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 Oct 2001 16:00:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "HISTORY updated"
}
] |
[
{
"msg_contents": "\nHello,\n\nThis is my first post here.\nI've tried to search the archive and i haven't found\nan answer to my problem...here it is...\n\nI created a function with the \"create function+setof\" keywords...\n\ncreate function employee(int) returns setof employee as 'select * from\nemployee where $1 = id'\nlanguage 'sql';\n\n\nInstead of returning a tuple, I get this:\n\n ?column?\n-----------\n 136491256\n \nI tried exchanging \"$1\" and \"id\" but the thing did not yet work. I\nreplaced the \"*\" with the actual fields in my table and it still would\nnot work. \n\nWhat could be the problem? By the way, I use postgreseql 7.1.3\n\nThanks!\n\nCarlo Florendo\n",
"msg_date": "Thu, 25 Oct 2001 16:47:28 -0400",
"msg_from": "fcarlo@ntsp.nec.co.jp",
"msg_from_op": true,
"msg_subject": "inquiry using create function"
}
] |
[
{
"msg_contents": "Hi,\n\nIn my application I use 'LOCK seq'. In 7.0.2 it worked fine but in\n7.1.2 Postgres complains that 'seq is not a table'. Is this\n(disabling to lock a sequences) an intended change?\n\nThanks\nMikhail\n",
"msg_date": "Thu, 25 Oct 2001 17:02:23 -0400",
"msg_from": "Mikhail Terekhov <terekhov@emc.com>",
"msg_from_op": true,
"msg_subject": "LOCK SEQUENCE"
},
{
"msg_contents": "Mikhail Terekhov <terekhov@emc.com> writes:\n> In my application I use 'LOCK seq'. In 7.0.2 it worked fine but in\n> 7.1.2 Postgres complains that 'seq is not a table'. Is this\n> (disabling to lock a sequences) an intended change?\n\nHmm, it wasn't thought about too much, but why in the world would you\nwant to lock a sequence? Seems like that destroys the point of using\none.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 Oct 2001 17:40:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LOCK SEQUENCE "
}
] |
[
{
"msg_contents": "I'm fresh in the code, but this has solved my issues with PQconnect* \nfailing when interrupted by signals. Some of it is sloppy and not to my \nliking yet, but I'm still digging through to see if anything else needs \ntouched. Comments appreciated.\n\nHonestly, I'm a bit surprised that this issue hasn't been encountered \nbefore.\n\nSummary:\n * changes to connect() sections to handle errno=EINTR. this solves \nlibpq PQconnect* family problems if the connect is interrupted by a \nsignal such as SIGALRM.\n * not all read/recv/write/send calls have been updated\n\nDavid",
"msg_date": "Thu, 25 Oct 2001 17:08:25 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": true,
"msg_subject": "[patch] helps fe-connect.c handle -EINTR more gracefully"
},
{
"msg_contents": "On 25 Oct 2001 at 17:08 (-0400), David Ford wrote:\n| I'm fresh in the code, but this has solved my issues with PQconnect* \n| failing when interrupted by signals. Some of it is sloppy and not to my \n| liking yet, but I'm still digging through to see if anything else needs \n| touched. Comments appreciated.\n\nDisclaimer: I may be wrong as hell ;-), but...\n\n\nI'm not sure this is correct. I've tried to /make/ a SIGALRM cause\nconnect to errno==EINTR, but I can't cause this condition. I suspect\nyou have another signal being raised that is causing your symptom.\nFTR, the textbook definition[1] of EINTR error for connect is:\n\n The attempt to establish a connection was interrupted by delivery \n of a signal that was caught; the connection will be established \n asynchronously.\n\nPlease check the attached prog to see if it is representative of your\ncode relating to the connect error you're seeing. If it is, please\nrun it and see if you can get it to cause the EINTR error from connect.\nIf you can't I'm more certain that you have a problem elsewhere.\n\ncheers.\n brent\n\n1. http://www.opengroup.org/onlinepubs/7908799/xns/connect.html\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman",
"msg_date": "Thu, 25 Oct 2001 23:13:19 -0400",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": false,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully"
},
{
"msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> I'm not sure this is correct. I've tried to /make/ a SIGALRM cause\n> connect to errno==EINTR, but I can't cause this condition.\n\nIt wouldn't surprise me in the least if this behavior is\nplatform-dependent. It may well be that David's kernel will allow\nconnect() to be interrupted by SIGALRM while yours won't. (Which\nreminds me that neither of you specified what platforms you were\ntesting on. For shame.) Or maybe the difference depends on whether\nyou are trying to connect to a local or remote server.\n\nUnless someone can point out a situation where retrying connect()\nafter EINTR is actively bad, my inclination is to accept the patch.\nIt seems like a net improvement in robustness to me, with no evident\ndownside other than a line or two more code.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 Oct 2001 00:05:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully "
},
{
"msg_contents": "Many signals may be the cause of -EINTR. It depends on what the signal \nis as to how it's normally handled. SIGALRM is the most common due to \nit being a timer event.\n\nGenerate a timer that expires as fast as possible (not too fast to \nprevent code execution), and you should see things left and right return \nwith -EINTR.\n\nI'm very much aware of why SIGALRM is happening, I generate it and I \ncatch it. As per my original message on this thread, my program does \nmaintenance on a scheduled basis. The period of that maintenance is \nmany times per second.\n\nSooo... :)\n\nNow let's get on with the story.\n\nLibpq doesn't deal with system calls being interrupted in the slightest. \n None of the read/write or socket calls handle any errors. Even benign \nreturns i.e. EINTR are treated as fatal errors and returned. Not to \nmalign, but there is no reason not to continue on and handle EINTR.\n\nDavid\np.s. you can't use the sleep() or alarm() functions and have a timer event as \nwell. The only POSIX-compliant function that doesn't trample signal \ntimer events is nanosleep().\n\nBrent Verner wrote:\n\nOn 25 Oct 2001 at 17:08 (-0400), David Ford wrote:\n| I'm fresh in the code, but this has solved my issues with PQconnect* \n| failing when interrupted by signals. Some of it is sloppy and not to my \n| liking yet, but I'm still digging through to see if anything else needs \n| touched. Comments appreciated.\n\nDisclaimer: I may be wrong as hell ;-), but...\n\n\nI'm not sure this is correct. I've tried to /make/ a SIGALRM cause\nconnect to errno==EINTR, but I can't cause this condition. 
I suspect\nyou have another signal being raised that is causing your symptom.\nFTR, the textbook definition[1] of EINTR error for connect is:\n\n The attempt to establish a connection was interrupted by delivery \n of a signal that was caught; the connection will be established \n asynchronously.\n\nPlease check the attached prog to see if it is representative of your\ncode relating to the connect error you're seeing. If it is, please\nrun it and see if you can get it to cause the EINTR error from connect.\nIf you can't I'm more certain that you have a problem elsewhere.\n\ncheers.\n brent\n\n1. http://www.opengroup.org/onlinepubs/7908799/xns/connect.html\n\n\n------------------------------------------------------------------------\n[snipped]\n\n",
"msg_date": "Fri, 26 Oct 2001 00:26:22 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": true,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully"
},
{
"msg_contents": "On 26 Oct 2001 at 00:05 (-0400), Tom Lane wrote:\n| Brent Verner <brent@rcfile.org> writes:\n| > I'm not sure this is correct. I've tried to /make/ a SIGALRM cause\n| > connect to errno==EINTR, but I can't cause this condition.\n| \n| It wouldn't surprise me in the least if this behavior is\n| platform-dependent. It may well be that David's kernel will allow\n| connect() to be interrupted by SIGALRM while yours won't. (Which\n| reminds me that neither of you specified what platforms you were\n| testing on. For shame.) Or maybe the difference depends on whether\n| you are trying to connect to a local or remote server.\n\nsorry, I tested the attached prog on linux(2.2/2.4) and freebsd(4.4R)\nto both local and remote(slow) servers.\n\n| Unless someone can point out a situation where retrying connect()\n| after EINTR is actively bad, my inclination is to accept the patch.\n| It seems like a net improvement in robustness to me, with no evident\n| downside other than a line or two more code.\n\n I've found numerous examples where connect() is retried after EINTR;\nin fact it appears to be fairly common.\n\ncheers,\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n",
"msg_date": "Fri, 26 Oct 2001 01:01:42 -0400",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": false,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully"
},
{
"msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> | Unless someone can point out a situation where retrying connect()\n> | after EINTR is actively bad, my inclination is to accept the patch.\n\n> I've found numerous examples where connect() is retried after EINTR,\n> infact it appears to be fairly common.\n\nPerhaps it does work that way on your system, but that's not the point.\nOn a machine that behaves that way, we'll never see EINTR returned by\nconnect(), and so our reaction to it is unimportant. The question is\nwhat we should do if we *do* get EINTR from connect(). AFAICS, the\nappropriate response is to retry. We already do retry after EINTR in\nlibpq's recv, send, select, etc calls --- perhaps connect got overlooked\nbecause it's usually only done at program startup.\n\nAfter further thought, though, it's unclear to me why this solves\nDavid's problem. If he's got a repeating SIGALRM on a cycle short\nenough to interrupt a connect(), seems like it'd just fail again\non the next try.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 Oct 2001 10:22:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully "
},
{
"msg_contents": "David Ford writes:\n\n> Libpq doesn't deal with system calls being interrupted in the slightest.\n> None of the read/write or socket calls handle any errors. Even benign\n> returns i.e. EINTR are treated as fatal errors and returned. Not to\n> malign, but there is no reason not to continue on and handle EINTR.\n\nLibpq certainly does deal with system calls being interrupted: It does\nnot allow them to be interrupted. Take a look into the file pqsignal.c to\nsee why.\n\nIf your alarm timer interrupts system calls then that's because you have\ninstalled your signal handler to allow that. In my mind, a reasonable\nbehaviour in that case would be to let the PQconnect or equivalent fail\nand provide the errno to the application.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Fri, 26 Oct 2001 23:04:39 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Libpq certainly does deal with system calls being interrupted: It does\n> not allow them to be interrupted. Take a look into the file pqsignal.c to\n> see why.\n\n??? Are you momentarily confusing backend and frontend libpq?\n\nAFAICT the client-side libpq doesn't (and shouldn't) touch signal\nhandling at all, except for a couple of places in the print routines\nthat temporarily block SIGPIPE.\n\nSince we deal happily with EINTR for most of the frontend socket calls,\nI don't see a reason not to cope with it for connect() too. I am\nsomewhat concerned about what exactly it means for a non-blocking\nconnect, however. Maybe it doesn't mean anything, and we could treat\nit the same as EINPROGRESS.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 Oct 2001 17:36:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully "
},
{
"msg_contents": "\"The **SA_RESTART** flag is not set by default by the underlying system in \nPOSIX mode, so interrupted system calls will fail with a return value \nof -1 and the *EINTR* error in /errno/ instead of getting restarted.\" \n libpq's pqsignal.c doesn't turn off SA_RESTART for SIGALRM. Further, \npqsignal.c only handles SIGPIPE, not to mention that other parts of \nlibpq do handle EINTR properly.\n\nThe PQconnect* family does not handle EINTR. It does not handle the \npossible and perfectly legitimate interruption of a system call. \n Globally trying to prevent system calls from being interrupted is a Bad \nThing. Having a timer event is common, having a timer event in a daemon \nis often required. Timers allow for good housekeeping and playing nice \nwith the rest of the system.\n\nYour reasonable behavior in the case of EINTR means repeatable and \nmysterious failure. There isn't a clean way to re-enter PQconnect* \nwhile maintaining state in the case of signal interruption and no \nguarantee the function won't be interrupted again.\n\nBasically if you have a timer event in your software and you use pgsql, \nthen the following will happen.\n\na) if the timer event always happens after the PQconnect* call is \ncompleted, your code will function\nb) if the timer event always fires during the PQconnect* call, your code \nwill never function\nc) if your timer event sometimes fires during the PQconnect* call, your \ncode will sometimes function\n\nThere are no ifs, ands, or buts about it, if a timer fires inside \nPQconnect* as it is now, there is no way to continue. With a suitably \nlong timer period, you can try the PQconnect* call again and if the \nconnect succeeds before the timer fires again you're fine. If not, you \nmust repeatedly try.\n\nThat said, there are two ways about it. a) handle it cleanly inside \nPQconnect* like it should be done, or b) have the programmer parse the \nerror string for \"Interrupted system call\" and re-enter PQconnect. 
a) \nis clean, short, and simple. b) wastes a lot of CPU to attempt to \naccomplish the task. a) is guaranteed and b) is not guaranteed.\n\nDavid\n\nPeter Eisentraut wrote:\n\nDavid Ford writes:\n\n>Libpq doesn't deal with system calls being interrupted in the slightest.\n> None of the read/write or socket calls handle any errors. Even benign\n>returns i.e. EINTR are treated as fatal errors and returned. Not to\n>malign, but there is no reason not to continue on and handle EINTR.\n>\n\nLibpq certainly does deal with system calls being interrupted: It does\nnot allow them to be interrupted. Take a look into the file pqsignal.c to\nsee why.\n\nIf your alarm timer interrupts system calls then that's because you have\ninstalled your signal handler to allow that. In my mind, a reasonable\nbehaviour in that case would be to let the PQconnect or equivalent fail\nand provide the errno to the application.\n\n\n",
"msg_date": "Fri, 26 Oct 2001 19:51:48 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": true,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully"
},
{
"msg_contents": "Tom Lane writes:\n\n> AFAICT the client-side libpq doesn't (and shouldn't) touch signal\n> handling at all, except for a couple of places in the print routines\n> that temporarily block SIGPIPE.\n\nWhich was my point.\n\n> Since we deal happily with EINTR for most of the frontend socket calls,\n> I don't see a reason not to cope with it for connect() too. I am\n> somewhat concerned about what exactly it means for a non-blocking\n> connect, however. Maybe it doesn't mean anything, and we could treat\n> it the same as EINPROGRESS.\n\nI feel that if the user installed his signal handlers to interrupt system\ncalls then he probably had a reason for it, possibly because of the timing\naspects of his application. Thus, it shouldn't be libpq's task to\noverride that decision. If the user doesn't want system calls to be\ninterrupted, then he should install the signal handlers in the proper way.\nIf he doesn't know how to do that, he needs to educate himself, that's\nall.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 28 Oct 2001 20:16:05 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully"
},
{
"msg_contents": "Peter Eisentraut wrote:\n\n>>AFAICT the client-side libpq doesn't (and shouldn't) touch signal\n>>handling at all, except for a couple of places in the print routines\n>>that temporarily block SIGPIPE.\n>>\n>\n>Which was my point.\n>\n\nMy patch doesn't affect signal handling; my patch affects the response \nof connect() after it is interrupted by a signal.\n\n>>Since we deal happily with EINTR for most of the frontend socket calls,\n>>I don't see a reason not to cope with it for connect() too. I am\n>>somewhat concerned about what exactly it means for a non-blocking\n>>connect, however. Maybe it doesn't mean anything, and we could treat\n>>it the same as EINPROGRESS.\n>>\n>\n>I feel that if the user installed his signal handlers to interrupt system\n>calls then he probably had a reason for it, possibly because of the timing\n>aspects of his application. Thus, it shouldn't be libpq's task to\n>override that decision. If the user doesn't want system calls to be\n>interrupted, then he should install the signal handlers in the proper way.\n>If he doesn't know how to do that, he needs to educate himself, that's\n>all.\n>\n\nLet me ask you how you would handle SIGALRM without interrupting \nsyscalls? Further, how would you guarantee your SIGALRM handler would \nexecute within your granular limits?\n\nYes, the user has a timing aspect. Libpq isn't overriding it and my \npatch doesn't change that. My patch adds the EINTR handling. Currently \nthere is no way for libpq to continue processing a connect call if that \nsyscall is interrupted and the user is using POSIX defaults. Please \nrefer to the POSIX specification; I provided a quote in a previous message.\n\nPOSIX default signal handling leaves SA_RESTART unset, which -disables- system \ncall restart.\n\nAt present, the PQconnect* function treats EINTR as a fatal error.\n\nDavid\n\n\n",
"msg_date": "Sun, 28 Oct 2001 19:19:01 -0500",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": true,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I feel that if the user installed his signal handlers to interrupt system\n> calls then he probably had a reason for it, possibly because of the timing\n> aspects of his application. Thus, it shouldn't be libpq's task to\n> override that decision.\n\nHow are we \"overriding\" it? A more correct description would be that\nwe are \"coping\" with it. We already do so when a send or recv is\ninterrupted, so I don't see why there's a problem with extending that\npolicy to connect calls.\n\nWhat I think you are arguing is that marking signal handlers SA_RESTART\nis a sufficient answer for this issue, but I don't really agree, on\ntwo grounds: (a) not everyone has POSIX signals, (b) SA_RESTART is a\nblunt instrument because it's per-signal. A user might well want\nSIGALRM to interrupt some operations, but that doesn't mean he wants it\nto cause failures inside subroutine libraries. Look at the backend:\nwe make SIGALRM non-SA_RESTART, which means we need to retry after\nEINTR in most places. We do it because we want certain specific waits\nto be interruptible, not because we want a global policy of \"fail if\ninterrupted\". (Now that I look at it, I wonder whether SIGINT,\nSIGTERM, SIGQUIT shouldn't be non-SA_RESTART as well, but that's a\ndifferent discussion.)\n\nMy quibble with David has been about whether the fix is correct in\ndetail, not about whether its purpose is correct.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 29 Oct 2001 12:10:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully "
},
{
"msg_contents": "Tom Lane wrote:\n\n>My quibble with David has been about whether the fix is correct in\n>detail, not about whether its purpose is correct.\n\nMy preference is to spin until ! EINTR as soon as possible after the \nsocket operation. I'm going to seek some advice from kernel people \nregarding the inner workings of sockets. I suspect that we'll be able \nto get away with handling it the way it is done with EINPROGRESS, but \nsomething is nagging me at the back of my head.\n\nWith regard to the other signals, I think there is another subtle \nproblem. Remember a month ago when I had a huge database that I had to \nupgrade, I had no disk space to export it to, and when the old version of \npsql ran out of memory it crashed? The backend continued to push query \ndata out the closed pipe until the backend was forcibly closed or the \nquery completed. Naturally this caused considerable spammage on the \nconsole.\n\nSo I suspect some tuning needs to be done with respect to SIGPIPE in the \nbackend.\n\nDavid\n\n\n",
"msg_date": "Mon, 29 Oct 2001 15:01:34 -0500",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": true,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully"
},
{
"msg_contents": "David Ford <david@blue-labs.org> writes:\n> problem. Remember a month ago when I had a huge database that I had to \n> upgrade, I had no disk space to export it to and when the old version of \n> psql ran out of memory it crashed? The backend continued to push query \n> data out the closed pipe until the backend was forcibly closed or the \n> query completed. Naturally this caused considerable spammage on the \n> console.\n\nThis has been discussed before. Don't bother proposing that backends\nshould use the default handling of SIGPIPE, because that won't be\naccepted. A safe limited solution would be to keep backend libpq from\nemitting multiple consecutive \"broken pipe\" reports to stderr.\nA better-sounding solution that might have unforeseen side effects\nis to set QueryCancel as soon as we see a nonrecoverable-looking\nsend() error. This is on the TODO list but no one's gotten round to\ndoing anything about it yet.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 29 Oct 2001 16:26:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully "
},
{
"msg_contents": "> David Ford <david@blue-labs.org> writes:\n> > problem. Remember a month ago when I had a huge database that I had to \n> > upgrade, I had no disk space to export it to and when the old version of \n> > psql ran out of memory it crashed? The backend continued to push query \n> > data out the closed pipe until the backend was forcibly closed or the \n> > query completed. Naturally this caused considerable spammage on the \n> > console.\n> \n> This has been discussed before. Don't bother proposing that backends\n> should use the default handling of SIGPIPE, because that won't be\n> accepted. A safe limited solution would be to keep backend libpq from\n> emitting multiple consecutive \"broken pipe\" reports to stderr.\n> A better-sounding solution that might have unforeseen side effects\n> is to set QueryCancel as soon as we see a nonrecoverable-looking\n> send() error. This is on the TODO list but no one's gotten round to\n> doing anything about it yet.\n\nGuys, can we come to a resolution on this so I can mark it as completed?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 11 Nov 2001 22:57:50 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Guys, can we come to a resolution this so I can mark it as completed?\n\nThe QueryCancel idea is a neat hack but I don't have enough confidence\nin it to throw it in during beta. I'll add a little bit of code to\npq_flush to suppress duplicate error messages; that should be safe\nenough and get rid of the worst aspects of the problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 11 Nov 2001 23:23:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Guys, can we come to a resolution this so I can mark it as completed?\n> \n> The QueryCancel idea is a neat hack but I don't have enough confidence\n> in it to throw it in during beta. I'll add a little bit of code to\n> pq_flush to suppress duplicate error messages; that should be safe\n> enough and get rid of the worst aspects of the problem.\n\nSo I will mark it as done. If there is something for the TODO list\nhere, please let me know.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 11 Nov 2001 23:41:59 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully"
}
] |
[
{
"msg_contents": "I think I read somewhere that you CANNOT access a table view etc. in another\ndatabase.\nI wanted to confirm that this was the case or see if there was a method\n(hack) to create\nan alias to another database.\n\nFor example:\n\npsql foo\nfoo# CREATE VIEW v1 AS SELECT * FROM BAR.t1;\n\nwhere BAR is another database (same PostgreSQL server).\n\nI was also wondering what the correct method is for restricting users'\nability to\ncreate tables, views etc.\n\n\n",
"msg_date": "Thu, 25 Oct 2001 19:53:44 -0400",
"msg_from": "\"Sean Sell\" <sksell@mindspring.com>",
"msg_from_op": true,
"msg_subject": "Cross Database Links"
},
{
"msg_contents": "----- Original Message ----- \nFrom: Sean Sell <sksell@mindspring.com>\nSent: Thursday, October 25, 2001 6:53 PM\n\n\n> I think I read somewhere that you CANNOT access a table view etc. in another\n> database.\n> I wanted to confirm that this was the case or see if there was a method\n> (hack) to create\n> an alias to another database.\n\nAs I said before, this will be possible when schemas are implemented.\n\n-s\n\n",
"msg_date": "Mon, 29 Oct 2001 14:48:06 -0500",
"msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>",
"msg_from_op": false,
"msg_subject": "Re: Cross Database Links"
}
] |
[
{
"msg_contents": ">\n>\n>\n>It wouldn't surprise me in the least if this behavior is\n>platform-dependent. It may well be that David's kernel will allow\n>connect() to be interrupted by SIGALRM while yours won't. (Which\n>reminds me that neither of you specified what platforms you were\n>testing on. For shame.) Or maybe the difference depends on whether\n>you are trying to connect to a local or remote server.\n>\n>Unless someone can point out a situation where retrying connect()\n>after EINTR is actively bad, my inclination is to accept the patch.\n>It seems like a net improvement in robustness to me, with no evident\n>downside other than a line or two more code.\n>\n\nI didn't specify my OS because this sort of a thing is standard *nix etc \ndesign (well, m$ excluded of course).\n\nI use Linux. Every *nix that I know of can have system calls be \ninterrupted.\n\nPlease wait a day before applying the patch, I want to make it a bit \nmore clean/readable and make sure I covered everything in fe-connect.c, \nI found that the SSL functions are traversed even if ssl is turned off \nin the config file and I have to handle that too.\n\nDavid\n\n\n",
"msg_date": "Fri, 26 Oct 2001 00:58:14 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": true,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully"
},
{
"msg_contents": "David Ford <david@blue-labs.org> writes:\n> Please wait a day before applying the patch, I want to make it a bit \n> more clean/readable and make sure I covered everything in fe-connect.c, \n\nBTW, reading the HPUX man page for connect I find the following relevant\nerror codes:\n\n [EALREADY] Nonblocking I/O is enabled with\n O_NONBLOCK, O_NDELAY, or FIOSNBIO, and a\n previous connection attempt has not yet\n completed.\n\n [EINPROGRESS] Nonblocking I/O is enabled using\n O_NONBLOCK, O_NDELAY, or FIOSNBIO, and\n the connection cannot be completed\n immediately. This is not a failure.\n Make the connect() call again a few\n seconds later. Alternatively, wait for\n completion by calling select() and\n selecting for write.\n\n [EINTR] The connect was interrupted by a signal\n before the connect sequence was\n complete. The building of the\n connection still takes place, even\n though the user is not blocked on the\n connect() call.\n\n [EISCONN] The socket is already connected.\n\nThis does not actually *say* that the appropriate behavior after EINTR\nis to retry, but reading between the lines one might infer that it will\nwork like the nonblocking case, wherein a retry of connect tries to link\nto the existing connection attempt, not start a new one.\n\nWhat's more important is that a retry will expose the possibility of\ngetting EALREADY or EISCONN. EALREADY certainly must be treated as\nsuccess the same as EINPROGRESS (if it exists on a given platform ---\nbetter #ifdef it I think). Not so sure about EISCONN; does that imply\n\"you moron, this socket's been open forever\", or does it get returned on\nthe first iteration that doesn't return EALREADY?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 Oct 2001 10:46:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully "
}
] |
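The EINPROGRESS-then-select() approach discussed above can be sketched roughly as follows. This is only an illustrative sketch, not the actual fe-connect.c code: the AF_UNIX socket and the path argument are assumptions made purely so the example is self-contained, and treating EINTR and EALREADY like EINPROGRESS reflects the proposal in these threads rather than shipped behavior.

```c
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Issue a non-blocking connect(), treat EINPROGRESS (and, per this
 * thread, EINTR/EALREADY) as "in progress", then wait with select()
 * and read SO_ERROR to learn whether the connection completed. */
static int nonblocking_connect(const char *path)
{
    struct sockaddr_un addr;
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    if (fd < 0)
        return -1;
    fcntl(fd, F_SETFL, O_NONBLOCK);

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    if (connect(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0 &&
        errno != EINPROGRESS && errno != EINTR && errno != EALREADY)
    {
        close(fd);
        return -1;              /* hard failure, e.g. ECONNREFUSED */
    }

    /* Wait until the socket is writable, then check SO_ERROR. */
    {
        fd_set wfds;
        int err = 0;
        socklen_t len = sizeof(err);

        FD_ZERO(&wfds);
        FD_SET(fd, &wfds);
        if (select(fd + 1, NULL, &wfds, NULL, NULL) < 0 ||
            getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len) < 0 ||
            err != 0)
        {
            close(fd);
            return -1;
        }
    }
    return fd;                  /* connected */
}
```

The key point, matching the HPUX man page quoted above, is that no busy loop on connect() itself is needed: once the kernel has accepted the request, completion is observed via select() and SO_ERROR.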
[
{
"msg_contents": ">\n>\n>After further thought, though, it's unclear to me why this solves\n>David's problem. If he's got a repeating SIGALRM on a cycle short\n>enough to interrupt a connect(), seems like it'd just fail again\n>on the next try.\n>\n\nOk, a few things. The connect() call is just an interface to the \nkernel. Sometimes a connect() to a socket may take a long time, even up \nto two minutes (or depending on your kernel's timeout), so it isn't \nunfeasible that the call can be interrupted. Next, the userland \nconnect() is interrupted but the kernel isn't. The kernel keeps working \nit and eventually completes or aborts the connection attempt. It then \nsets the data structures and values so the next time userland comes \nalive it's ready for it. The connect() call doesn't restart at the \nbeginning, it continues where it left off.\n\nDavid\n\n\n",
"msg_date": "Fri, 26 Oct 2001 15:37:05 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": true,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully"
}
] |
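Since, as the message above explains, the kernel keeps working the connection even after the userland connect() is interrupted, a blocking caller can simply retry on EINTR. Below is a minimal, testable sketch of that retry pattern; `fake_connect` is a hypothetical stand-in (interrupted twice, then succeeding) used only so the example runs without a network, and for the non-blocking connect libpq actually uses, looping like this would busy-wait, as Tom points out later in these threads.

```c
#include <errno.h>

/* Retry an interruptible call until it stops failing with EINTR.
 * `attempt` stands in for a *blocking* connect(); retrying picks up
 * the connection attempt the kernel kept working in the background. */
static int retry_eintr(int (*attempt)(void))
{
    int r;

    do
    {
        r = attempt();
    } while (r == -1 && errno == EINTR);
    return r;
}

/* Hypothetical stand-in for connect(): interrupted twice, then ok. */
static int interruptions = 2;

static int fake_connect(void)
{
    if (interruptions > 0)
    {
        interruptions--;
        errno = EINTR;
        return -1;
    }
    return 0;
}
```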
[
{
"msg_contents": ">\n>\n>This does not actually *say* that the appropriate behavior after EINTR\n>is to retry, but reading between the lines one might infer that it will\n>work like the nonblocking case, wherein a retry of connect tries to link\n>to the existing connection attempt, not start a new one.\n>\n>What's more important is that a retry will expose the possibility of\n>getting EALREADY or EISCONN. EALREADY certainly must be treated as\n>success the same as EINPROGRESS (if it exists on a given platform ---\n>better #ifdef it I think). Not so sure about EISCONN; does that imply\n>\"you moron, this socket's been open forever\", or does it get returned on\n>the first iteration that doesn't return EALREADY?\n>\n\nNot to worry. EINPROGRESS, etc, are normally used for select() and \nreturn the current state of the connection attempt. connect() in \nblocking mode will normally only return when it is interrupted or \ncomplete. In non-blocking mode, it will return immediately with errno \nset to EALREADY, in which case you should spin on the socket connection \nattempt and wait until it returns a good status, i.e. return value=0, or \nif you happen to make a mistake and drop that iteration, EISCONN.\n\nThose of you who are interested in socket operations would probably \nenjoy reading TCP/IP Illustrated, I believe the most recent version is \n#3, sorry I don't have the ISBN handy but most libraries should have v2.\n\nAs to your last concern Tom, the cycle should be: return=-1 [repeats \nuntil connection fails or succeeds], return=0 on success, or -1 on \nfailure w/ errno set to appropriate fault, and afterwards, either return \n-1 with errno=EISCONN (already connected with prior success), or \nappropriate fault.\n\nThe important part is that if the socket successfully connects, it will \nreturn a 0. That should be the last iteration of your spin on that \nconnect call. 
If you have a mistake in your code and you keep spinning \non a successfully connected socket, yes you will get EISCONN, i.e. \n\"hello..the socket is still connected\" :)\n\nThus a simplified loop should look like this:\n\nint r;\n\ndo {\n r=connect(...);\n if(!r) /* we're connected, r=0. exit the loop */\n break;\n \n switch(errno) { /* connect returns 0 or -1, so this must be -1 */\n case EINTR: /* we were interrupted by a signal */\n continue;\n default:\n error_log(logfd, \"Error connecting, kernel said: %s\", \nstrerror(errno));\n r=-2; /* jump out of the loop */\n break;\n }\n\n } while(r==-1);\n\nThis will continue trying the connect call as long as it hasn't returned \na failure message and the errno is EINTR. All other -1 return values \nwill exit the loop. As soon as it has connected properly it will also \nexit the loop.\n\nDavid\n\n\n",
"msg_date": "Fri, 26 Oct 2001 15:57:01 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": true,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully"
},
{
"msg_contents": "David Ford <david@blue-labs.org> writes:\n> Thus a simplified loop should look like this:\n\nNo, it should *not* look like that. The fe-connect.c code is designed\nto move on as soon as it's convinced that the kernel has accepted the\nconnection request. We use a non-blocking connect() call and later\nwait for connection complete by probing the select() status. Looping\non the connect() itself would be a busy-wait, which would be antisocial.\n\nCertainly we can move on as soon as we see EINPROGRESS or EALREADY.\nWhat I'm wondering at the moment is whether we could take EINTR to be\nalso an indication that the kernel has accepted the connection request,\nand proceed forward to the select(), rather than trying the connect()\nagain.\n\nAlthough Brent didn't say so very clearly, I think his real concern is\n\"why are you getting EINTR from a non-blocking connect, and what does\nthat mean anyway?\" The HPUX man page certainly makes it sound like\nthis is not distinct from the EINPROGRESS return, and that there's\nno need to retry the connect() per se.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 Oct 2001 17:29:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully"
}
] |
[
{
"msg_contents": "Hi!\n\nWould you please give me some comments about this subject? This proposal\nwas sent to you, attached to a previous mail, a few weeks ago.\nI would greatly appreciate your opinion, because we are using\nPostgreSQL for an important project at the University, and the\nincorporation of this piece of code is contributing to some tests we\nare carrying out. If you think this implementation is not correct or it\nhas some drawback, your suggestions will be welcome. I'll modify\neverything you think is needed.\nIn addition, I'm developing this software in the context of my\nundergraduate thesis. The inclusion of these features and their testing\nis the objective of this thesis, so all your help is invaluable to\nme!!\n\nPerhaps you do not need the features I implemented in my research\nproject (activate/deactivate) but it is a central point for us. \nBesides, if adding this characteristic does not produce any side\neffects, I think it could be added, so it would be useful for other\nusers. Some other DBMSs have these mechanisms.\nI'll wait for your comments. Take into account that if needed,\nI'll re-write any parts of the code you recommend.\nBest regards.\n\nSergio.\n\n> \n> Hello:\n> \n> Here at the Universidad Nacional del Centro de la Provincia de Buenos Aires,\n> we (myself as full professor and some other colleagues) are extensively using\n> the rules of PostgreSQL. This means that we are teaching using PostgreSQL and\n> we are doing some research using it as a platform for testing as well.\n> \n> Naturally we do not need to explain how useful in many contexts the rules\n> are.\n> \n> However we have found two minor weaknesses:\n> \n> A) It is related to situations where more than one rule is involved and the\n> second one requires completion of the first one. In our sort of problems this\n> happens frequently. 
This can be solved by adding the notion of\n> \"disablement\" of the first rule within the re-writing of the second rule when\n> the first rule is not required since the knowledge of the action of the second\n> rule allows it. To do this, the addition of two new commands is proposed:\n> DEACTIVATE/ACTIVATE RULE.\n> \n> B) The lack of a transaction abortion clause. (Chapter 17 Section 5,\n> PostgreSQL 7.1 Programmer's Guide)\n> The addition of the function\n> \n> pg_abort_with_msg(text)\n> which can be called from a SELECT is proposed.\n> \n> We asked one of our students (Sergio Pili) to develop an implementation for\n> both A and B. It has been in operation since Dec 2000 in version 7.0.3.\n> Lately, Sergio ported his implementation to version 7.1.3. We\n> think that this update to PostgreSQL will strengthen the power of the rules\n> rewriting system.\n> \n> Attached is Sergio's patch for version 7.1.3.\n> \n> Jorge H Doorn",
"msg_date": "Fri, 26 Oct 2001 20:01:53 -0300",
"msg_from": "Sergio Pili <sergiop@sinectis.com.ar>",
"msg_from_op": true,
"msg_subject": "[Fwd: PostgreSQL new commands proposal]"
},
{
"msg_contents": "Sergio Pili <sergiop@sinectis.com.ar> writes:\n>> A) It is related to situations where more than one rule is involved\n>> and the second one requires completion of the first one. In our sort\n>> of problems this happens frequently. This can be solved by adding the\n>> notion of \"disablement\" of the first rule within the re-writing of\n>> the second rule when the first rule is not required since the\n>> knowledge of the action of the second rule allows it. To do this, the\n>> addition of two new commands is proposed: DEACTIVATE/ACTIVATE RULE.\n\nYou haven't made a case at all for why this is a good idea, nor whether\nthe result couldn't be accomplished with some cleaner approach (no,\nI don't think short-term disablement of a rule is a clean approach...)\nPlease give some examples that show why you think such a feature is\nuseful.\n\n>> B) The lack of a transaction abortion clause. (Chapter 17 Section 5,\n>> PostgreSQL 7.1 Programmer's Guide)\n>> The addition of the function\n>> pg_abort_with_msg(text)\n>> which can be called from a SELECT is proposed.\n\nThis seems straightforward enough, but again I'm bemused why you'd want\nsuch a thing. Rules are sufficiently nonprocedural that it's hard to\nsee the point of putting deliberate error traps into them --- it seems\ntoo hard to control whether the error occurs or not. I understand\nreporting errors in procedural languages ... but all our procedural\nlanguages already have error-raising mechanisms. 
For example, you could\nimplement this function in plpgsql as\n\nregression=# create function pg_abort_with_msg(text) returns int as\nregression-# 'begin\nregression'# raise exception ''%'', $1;\nregression'# return 0;\nregression'# end;' language 'plpgsql';\nCREATE\nregression=# select pg_abort_with_msg('bogus');\nERROR: bogus\nregression=#\n\nAgain, a convincing example of a situation where this is an appropriate\nsolution would go a long way towards making me see why the feature is\nneeded.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 Oct 2001 20:45:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: PostgreSQL new commands proposal] "
}
] |