[ { "msg_contents": "I am looking at adding an index tuple flag to indicate when a heap tuple\nis expired so the index code can skip looking up the heap tuple.\n\nThe problem is that I can't figure out how to be sure that the heap tuple\ndoesn't need to be looked at by _any_ backend. Right now, we update the\ntransaction commit flags in the heap tuple to prevent a pg_log lookup,\nbut that is not enough because some transactions may still see that heap\ntuple as visible.\n\nVacuum has a complex test that looks at currently running transactions\nand stuff. Do I have to duplicate that test in the new code? Seems I\ndo.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 17 May 2001 11:50:07 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Adding index flag showing tuple status" } ]
[ { "msg_contents": "I think the restore process could be made more error-proof if the following \nadditions could be made:\n\npg_dump, (maybe started with a special flag) if run with superuser rights \nshould not issue\n\n\\connect - username\ncreate table foo (...\ncopy foo from ...\n\nto create tables, but use this syntax:\ncreate table foo(...\ncopy foo from ...\nupdate pg_class \n set relowner=u.usesysid \n from pg_user u \n where u.usename='username' and relname='foo';\n\nThis would avoid situations where a restore becomes impossible because \npassword authentication is necessary. This would allow restoring without \nhaving to set trust in pg_hba.conf. A patch would be simple, in fact I wrote \nit within a few minutes. \n\nOr am I completely wrong and there's a better way to accomplish this?\n\nBest regards,\n\tMario Weilguni\n\n-- \n===================================================\n Mario Weilguni                 KPNQwest Austria GmbH\n Senior Engineer Web Solutions  Nikolaiplatz 4\n tel: +43-316-813824            8020 graz, austria\n fax: +43-316-813824-26         http://www.kpnqwest.at\n e-mail: mario.weilguni@kpnqwest.com\n===================================================\n", "msg_date": "Thu, 17 May 2001 17:50:56 +0200", "msg_from": "Mario Weilguni <mweilguni@sime.com>", "msg_from_op": true, "msg_subject": "restore improvement" }, { "msg_contents": "Mario Weilguni writes:\n\n> pg_dump, (maybe started with a special flag) if run with superuser rights\n> should not issue\n>\n> \\connect - username\n> create table foo (...\n> copy foo from ...\n\n> This would avoid situations where a restore becomes impossible because\n> password authentication is necessary. This would allow restoring without\n> having to set trust in pg_hba.conf.
A patch would be simple, in fact I wrote\n> it within a few minutes.\n>\n> Or am I completly wrong and there's a better way to accomplish this?\n\nset session authentication 'username';\n\nThe command already exists in 7.2devel sources. Feel free to send a patch\nto make pg_dump use it. Note that this command requires superuser rights,\nso there needs to be an alternative.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 17 May 2001 18:27:57 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: restore improvement" } ]
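To make the two approaches concrete, here is a sketch of the dump output being discussed. The role name is invented for illustration, and note that the command Peter refers to was spelled SET SESSION AUTHORIZATION by the time it shipped, not "set session authentication":

```sql
-- What pg_dump emitted at the time: reconnect as each object's owner,
-- which fails when pg_hba.conf requires password authentication.
\connect - mario
CREATE TABLE foo (id integer);
COPY foo FROM stdin;

-- The proposed alternative: stay connected as the superuser and switch
-- the effective user per object instead of reconnecting.
SET SESSION AUTHORIZATION 'mario';
CREATE TABLE foo (id integer);
COPY foo FROM stdin;
```

Either way the table ends up owned by mario; the second form never re-authenticates, so the restore works regardless of the authentication method configured for that user (which is why Peter notes it needs superuser rights and an alternative path for non-superuser restores).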
[ { "msg_contents": "CVSROOT:\t/home/projects/pgsql/cvsroot\nModule name:\tpgsql\nChanges by:\tpetere@hub.org\t01/05/17 13:44:18\n\nModified files:\n\tdoc/src/sgml : runtime.sgml \n\tsrc/backend/utils/fmgr: Makefile dfmgr.c \n\tsrc/backend/utils/misc: guc.c \n\tsrc/include : fmgr.h \n\nLog message:\n\tAdd dynamic_library_path parameter and automatic appending of shared\n\tlibrary extension.\n\n", "msg_date": "Thu, 17 May 2001 13:44:19 -0400 (EDT)", "msg_from": "Peter Eisentraut - PostgreSQL <petere@hub.org>", "msg_from_op": true, "msg_subject": "pgsql/ oc/src/sgml/runtime.sgml rc/backend/uti ..." }, { "msg_contents": "Peter Eisentraut - PostgreSQL <petere@hub.org> writes:\n> \tAdd dynamic_library_path parameter and automatic appending of shared\n> \tlibrary extension.\n\nLooks good. One tiny nit: I think that DLSUFFIX should be appended if\nthe given basename doesn't contain any '.', rather than first trying\nthe name as-is. The problem with doing the latter is that the presence\nof a file or subdirectory named \"foo\" would prevent us from loading\n\"foo.so\" correctly.\n\nLooking ahead, there's a lot of subsidiary work that should now happen:\n\n1. createlang should no longer insert libdir or dlsuffix into the\nfunction declarations it makes (so building createlang from\ncreatelang.sh won't be necessary anymore).\n\n2. Likewise for CREATE FUNCTION commands in regress tests.\n\n3. CREATE FUNCTION documentation should now recommend using relative\npath and omitting suffix, rather than using full absolute pathname.\nUpdate examples in SGML docs, src/tutorial/.\n\nMaybe some other places too. Are you planning to do all that, or should\nwe put it on the to-do list for someone else?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 May 2001 14:54:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/ oc/src/sgml/runtime.sgml rc/backend/uti ... " }, { "msg_contents": "Tom Lane writes:\n\n> Looks good. 
One tiny nit: I think that DLSUFFIX should be appended if\n> the given basename doesn't contain any '.', rather than first trying\n> the name as-is.\n\nThat won't work if the basename already contains a '.', as is possible if\nsome version is embedded in the library name. Libtool sometimes creates\nsuch libraries.\n\n> The problem with doing the latter is that the presence of a file or\n> subdirectory named \"foo\" would prevent us from loading \"foo.so\"\n> correctly.\n\nPerhaps we should try appending the extension first? This would model\nlibtool's libltdl more closely. If we encourage leaving off the extension\nin load commands then this could be the better way. Of course we could\nalso check if the candidate file is a real file first.\n\n> 1. createlang should no longer insert libdir or dlsuffix into the\n> function declarations it makes\n\nHow do we handle the case where the user changed the path? Do we say,\n\"$libdir needs to be in the path or you're on your own\"?\n\nI like this better: Let createlang set the path to '$libdir/plpgsql.so'\n(literally) and substitute $libdir independend of the current mechanism.\nThat would survive an update.\n\n> 2. Likewise for CREATE FUNCTION commands in regress tests.\n\nI'd prefer to keep a few variations of each.\n\n> 3. CREATE FUNCTION documentation should now recommend using relative\n> path and omitting suffix, rather than using full absolute pathname.\n> Update examples in SGML docs, src/tutorial/.\n\nThe create function ref page needs some editorial changes which I will get\nto in the next few days.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 17 May 2001 21:25:20 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/ oc/src/sgml/runtime.sgml rc/backend/uti ..." }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n>> 1. 
createlang should no longer insert libdir or dlsuffix into the\n>> function declarations it makes\n\n> How do we handle the case where the user changed the path? Do we say,\n> \"$libdir needs to be in the path or you're on your own\"?\n\nSeems OK to me, but ...\n\n> I like this better: Let createlang set the path to '$libdir/plpgsql.so'\n> (literally) and substitute $libdir independend of the current mechanism.\n\n'$libdir/plpgsql', please, no platform dependencies. But OK otherwise.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 May 2001 19:49:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/ oc/src/sgml/runtime.sgml rc/backend/uti\n\t..." }, { "msg_contents": "Tom Lane writes:\n\n> 1. createlang should no longer insert libdir or dlsuffix into the\n> function declarations it makes (so building createlang from\n> createlang.sh won't be necessary anymore).\n\ncreatelang tries to be helpful by trying test -f $the_file first, to guard\nagainst attempts to load PL/Tcl when no Tcl support was configured. This\napproach has a few other subtle flaws: it requires createlang to be run\non the server machine and under the same user account as the server.\nMaybe we could devise another way to determine gracefully whether a given\nPL is installed.\n\n> 2. Likewise for CREATE FUNCTION commands in regress tests.\n\nThe shared objects used in the regression test are not located relative to\nthe $libdir. Some for the various tutorials, examples.\n\n> 3. 
CREATE FUNCTION documentation should now recommend using relative\n> path and omitting suffix, rather than using full absolute pathname.\n\nDetails in the Programmer's Guide now.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 19 May 2001 11:02:23 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/ oc/src/sgml/runtime.sgml rc/backend/uti ..." }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> createlang tries to be helpful by trying test -f $the_file first, to guard\n> against attempts to load PL/Tcl when no Tcl support was configured. This\n> approach has a few other subtle flaws: it requires createlang to be run\n> on the server machine and under the same user account as the server.\n> Maybe we could devise another way to determine gracefully whether a given\n> PL is installed.\n\nTry eliminating the test entirely. CREATE FUNCTION now checks to make\nsure that it can find the specified library (and that the named function\nexists therein), so as long as createlang aborts on CREATE FUNCTION\nfailure, I see no need for a separate test.\n\n>> 2. Likewise for CREATE FUNCTION commands in regress tests.\n\n> The shared objects used in the regression test are not located relative to\n> the $libdir. Some for the various tutorials, examples.\n\nDuh. Never mind that then ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 19 May 2001 10:28:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/ oc/src/sgml/runtime.sgml rc/backend/uti ... " } ]
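The follow-up items in this thread boil down to replacing absolute, platform-specific library paths in function declarations with $libdir-relative ones. A before/after sketch in 7.x-era syntax (the installation path shown is an assumed default):

```sql
-- Before: createlang baked the installation path and the platform's
-- shared-library suffix into the declaration, so a dump broke whenever
-- the server was relocated or the suffix differed across platforms.
CREATE FUNCTION plpgsql_call_handler() RETURNS opaque
    AS '/usr/local/pgsql/lib/plpgsql.so' LANGUAGE 'C';

-- After: '$libdir' is resolved against dynamic_library_path at load
-- time and the platform's DLSUFFIX is appended automatically.
CREATE FUNCTION plpgsql_call_handler() RETURNS opaque
    AS '$libdir/plpgsql' LANGUAGE 'C';

CREATE TRUSTED PROCEDURAL LANGUAGE 'plpgsql'
    HANDLER plpgsql_call_handler;
```

As Tom notes, since CREATE FUNCTION verifies that the library and symbol exist, a failed declaration is itself a sufficient test for whether a given PL was built, making createlang's test -f check unnecessary.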
[ { "msg_contents": "Jeremy Blumenfeld (jeremy@horizonlive.com) reports a bug with a severity of 1\nThe lower the number the more severe it is.\n\nShort Description\nProblems with avg on interval data type\n\nLong Description\nWe have recently upgraded from 7.0.3 to 7.1 and a query which used to work is no longer working.\nThe query does an avg on an interval column and now gets the error:\n\nERROR: Bad interval external representation '0'\n\nWe've reproduced this problem with other tables using an interval type. It can handle count, sum, but not avg.\n\nSample Code\n\n\nNo file was uploaded with this report\n\n", "msg_date": "Thu, 17 May 2001 15:24:39 -0400 (EDT)", "msg_from": "pgsql-bugs@postgresql.org", "msg_from_op": true, "msg_subject": "Problems with avg on interval data type" }, { "msg_contents": "pgsql-bugs@postgresql.org writes:\n> The query does an avg on an interval column and now gets the error:\n> ERROR: Bad interval external representation '0'\n\nSorry about that :-(. A last-minute tightening of the allowed input\nformats for interval broke avg(interval), but you're the first one to\nnotice.\n\nI have corrected this in the sources for 7.1.2, but that will not help\nyou much unless you care to re-initdb with 7.1.2. 
If you don't want to\ninitdb, you can manually correct the erroneous catalog entry with\n\nupdate pg_aggregate set agginitval = '{0 second,0 second}' where\naggname = 'avg' and aggbasetype = 1186;\n\nNote you will need to do this in each extant database including\ntemplate1 (or at least all the databases where you plan to use\navg(interval)).\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 12:08:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problems with avg on interval data type " }, { "msg_contents": "> We have recently upgraded from 7.0.3 to 7.1 and a query which used\n> to work is no longer working.\n> The query does an avg on an interval column and now gets the error:\n> ERROR: Bad interval external representation '0'\n\nOK, there is one case of interval constant which is not handled\ncorrectly in the 7.1.x release -- the simplest interval specification\nhaving only an unadorned integer. That is a bug, for which I have a\npatch (or patches) available.\n\nBefore I post the patch (which should go into the 7.1.2 release as a bug\nfix) I need feedback on a conventions dilemma, which led to the code\nmodifications which introduced the bug. Here it is:\n\nIntervals usually indicate a time span, and can be specified with either\n\"# units\" strings (e.g. '5 hours') or (as of 7.1) as \"hh:mm:ss\" (e.g.\n'05:00').\n\nA new construct, \"a_expr AT TIME ZONE c_expr\" is supported in 7.1, per\nSQL99 spec. One of the possible arguments is\n\n a_expr AT TIME ZONE 'PST'\n\nand\n\n a_expr AT TIME ZONE INTERVAL '-08:00'\n\nIt is this last style which leads to the problem of how to interpret\nsigned or unsigned integers as interval types. For example, in this\ncontext\n\n INTERVAL '-8'\n\nmust be interpreted as having units of \"hours\", while our historical\nusage has\n\n INTERVAL '8'\n\nbeing interpreted as \"seconds\" (even with signed values). 
Currently, we\ninterpret various forms as follows:\n\n Value\tUnits\n +8\thours\n -8\thours\n 8.0\tseconds\n 8\t?? seconds ??\n\nI would propose that the last example should be interpreted in units of\nseconds, but that could be perilously close to the conventions required\nfor the signed examples. Comments?\n\n - Thomas\n", "msg_date": "Fri, 18 May 2001 16:47:10 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: Problems with avg on interval data type" }, { "msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n>> We have recently upgraded from 7.0.3 to 7.1 and a query which used\n>> to work is no longer working.\n>> The query does an avg on an interval column and now gets the error:\n>> ERROR: Bad interval external representation '0'\n\n> OK, there is one case of interval constant which is not handled\n> correctly in the 7.1.x release -- the simplest interval specification\n> having only an unadorned integer. That is a bug, for which I have a\n> patch (or patches) available.\n\nI have modified the declaration of avg(interval) to initialize its\naccumulator as '0 second' instead of just '0', so as far as the\naggregate goes it's not necessary to consider this a bug.\n\n> Currently, we interpret various forms as follows:\n\n> Value\tUnits\n> +8\t\thours\n> -8\t\thours\n> 8.0\t\tseconds\n> 8\t\t?? seconds ??\n\n> I would propose that the last example should be interpreted in units of\n> seconds, but that could be perilously close to the conventions required\n> for the signed examples. Comments?\n\nYipes. I do not like the idea that '+8' and '8' yield radically\ndifferent results. That's definitely going to create unhappiness.\n\nI suggest that the current code is more correct than you think ;-).\nISTM it is a good idea to require a units field, or at least some\npunctuation giving a clue about units --- for example I do not object to\n'08:00' being interpreted as hours and minutes. 
But I would be inclined\nto reject all four of the forms '+8', '-8', '8.0', and '8' as ambiguous.\nIs there something in the SQL spec that requires us to accept them?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 12:55:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problems with avg on interval data type " }, { "msg_contents": "> > Value Units\n> > +8 hours\n> > -8 hours\n> > 8.0 seconds\n> > 8 ?? seconds ??\n> > I would propose that the last example should be interpreted in units of\n> > seconds, but that could be perilously close to the conventions required\n> > for the signed examples. Comments?\n> Yipes. I do not like the idea that '+8' and '8' yield radically\n> different results. That's definitely going to create unhappiness.\n\nYeah, I agree. The ugliness is that an unsigned integer has been\naccepted in the past as \"seconds\", and would seem to be a reasonable\nassumption.\n\n> I suggest that the current code is more correct than you think ;-).\n> ISTM it is a good idea to require a units field, or at least some\n> punctuation giving a clue about units --- for example I do not object to\n> '08:00' being interpreted as hours and minutes. But I would be inclined\n> to reject all four of the forms '+8', '-8', '8.0', and '8' as ambiguous.\n> Is there something in the SQL spec that requires us to accept them?\n\nSingle-field signed integers (and unsigned integers?) must be acceptable\nfor a time zone specification (pretty sure this is covered in the SQL\nspec). 
Remember that SQL is woefully inadequate for date, time, and time\nzone support, but afaicr a signed integer is one of the few things they\ndo specify ;)\n\n - Thomas\n", "msg_date": "Fri, 18 May 2001 23:30:14 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: Problems with avg on interval data type" }, { "msg_contents": "Tom Lane writes:\n\n> I suggest that the current code is more correct than you think ;-).\n> ISTM it is a good idea to require a units field, or at least some\n> punctuation giving a clue about units --- for example I do not object to\n> '08:00' being interpreted as hours and minutes. But I would be inclined\n> to reject all four of the forms '+8', '-8', '8.0', and '8' as ambiguous.\n> Is there something in the SQL spec that requires us to accept them?\n\nOur interval is quite a bit different from the SQL version. In SQL, an\ninterval value looks like this:\n\nINTERVAL -'5 12:30:15.3' DAY TO SECOND\n\nThe unit qualifier is required. Consequentially, I would reject anything\nwithout units, except '0' maybe.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 19 May 2001 02:55:12 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Re: Problems with avg on interval data type " }, { "msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n>> I suggest that the current code is more correct than you think ;-).\n>> ISTM it is a good idea to require a units field, or at least some\n>> punctuation giving a clue about units --- for example I do not object to\n>> '08:00' being interpreted as hours and minutes. But I would be inclined\n>> to reject all four of the forms '+8', '-8', '8.0', and '8' as ambiguous.\n>> Is there something in the SQL spec that requires us to accept them?\n\n> Single-field signed integers (and unsigned integers?) 
must be acceptable\n> for a time zone specification (pretty sure this is covered in the SQL\n> spec).\n\nBut surely there is other context cuing you that the number is a\ntimezone? In any case, you weren't proposing that interval_in\nshould accept '8' as a timezone ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 21:14:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problems with avg on interval data type " }, { "msg_contents": "> > Single-field signed integers (and unsigned integers?) must be acceptable\n> > for a time zone specification (pretty sure this is covered in the SQL\n> > spec).\n> But surely there is other context cuing you that the number is a\n> timezone? In any case, you weren't proposing that interval_in\n> should accept '8' as a timezone ...\n\nIn the particular case I mentioned, any context is gone by the time the\nparser gets beyond gram.y (and that is well before the constant is\nevaluated). The general point is that in this case (and perhaps some\nother cases) there is a need to convert an interval into a time zone, so\nthe specification of either or both had better be self consistant.\n\nWe do not have an explicit timezone type which could make different\nassumptions about units on unadorned integers (and I'd like to avoid\ndefining a new type for this purpose since SQL9x seems to think that\n\"interval\" should be enough). I'd also like to avoid *requiring* the\nfull brain damage of SQL9x interval specifications such as Peter\nmentioned; we may support it, but should not require it since it is\ntruely horrid.\n\n - Thomas\n", "msg_date": "Sat, 19 May 2001 06:21:18 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: Problems with avg on interval data type" } ]
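Thomas's units table reads more clearly as interval literals. The commented interpretations restate what the thread reports about 7.1.x behavior; the unadorned-integer case is precisely the one in dispute, so treat this as a discussion aid rather than documented behavior:

```sql
SELECT INTERVAL '5 hours';  -- unit-qualified: unambiguous
SELECT INTERVAL '05:00';    -- punctuated hh:mm form: 5 hours
SELECT INTERVAL '+8';       -- time-zone displacement reading: +8 hours
SELECT INTERVAL '-8';       -- likewise: -8 hours
SELECT INTERVAL '8.0';      -- historical reading: 8 seconds
SELECT INTERVAL '8';        -- disputed; the unadorned '0' is what
                            -- avg(interval) tripped over in 7.1
```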
[ { "msg_contents": "I've been noticing that the information in the HISTORY file is regularly\nincomplete, not understandable, and mostly useless except for \"wow, look\nwhat we've done\" purposes. When we get to a release a year from now\n(*grin*) I'm sure the dynamic_library_path thing is going to end up under\n\"shared library fixes\".\n\nWouldn't it be better if the author (or committer) of a user visible\nchange would himself add a snippet to the release notes?\n\nThe additional benefit would be that users that get intermediate\ndevelopment sources would have an idea what's been going on in between.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 17 May 2001 22:21:44 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Maintaining HISTORY" }, { "msg_contents": "> I've been noticing that the information in the HISTORY file is regularly\n> incomplete, not understandable, and mostly useless except for \"wow, look\n> what we've done\" purposes. When we get to a release a year from now\n> (*grin*) I'm sure the dynamic_library_path thing is going to end up under\n> \"shared library fixes\".\n\nI am sure you are correct.\n\n> Wouldn't it be better if the author (or committer) of a user visible\n> change would himself add a snippet to the release notes?\n> \n> The additional benefit would be that users that get intermediate\n> development sources would have an idea what's been going on in between.\n\nSure, feel free to dive in an add stuff to HISTORY. We usually have\n~150 items listed for a major release, and I tend to concentrate on\nuser-visible changes. Lots of optimizer and configuration stuff gets\nshortened because it is more invisible to the user, though no less\nimportant.\n\nMarc has started to included the Changelog for each release, though that\nwill be a monster for any major release. 
What I think would be ideal\nwould be to have the Changelog available as a web page, so you could see\nby date the changes all the way back to 1.0.2. We already have the\ntools to generate the web content as part of our commit messages.\n\nAlso, feel free to go back in the HISTORY/release.sgml and clarify\nanything in there. I keep it brief so people can quickly look over the\nchanges we have made, but clearly I don't understand all of them myself.\nIf people want it more verbose, we can do that, or if they want a\nsecondary location for more verbose information, we can do that too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 17 May 2001 16:37:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Maintaining HISTORY" } ]
[ { "msg_contents": "I have been thinking about the problem of VACUUM and how we might fix it\nfor 7.2. Vadim has suggested that we should attack this by implementing\nan overwriting storage manager and transaction UNDO, but I'm not totally\ncomfortable with that approach: it seems to me that it's an awfully large\nchange in the way Postgres works. Instead, here is a sketch of an attack\nthat I think fits better into the existing system structure.\n\nFirst point: I don't think we need to get rid of VACUUM, exactly. What\nwe want for 24x7 operation is to be able to do whatever housekeeping we\nneed without locking out normal transaction processing for long intervals.\nWe could live with routine VACUUMs if they could run in parallel with\nreads and writes of the table being vacuumed. They don't even have to run\nin parallel with schema updates of the target table (CREATE/DROP INDEX,\nALTER TABLE, etc). Schema updates aren't things you do lightly for big\ntables anyhow. So what we want is more of a \"background VACUUM\" than a\n\"no VACUUM\" solution.\n\nSecond: if VACUUM can run in the background, then there's no reason not\nto run it fairly frequently. In fact, it could become an automatically\nscheduled activity like CHECKPOINT is now, or perhaps even a continuously\nrunning daemon (which was the original conception of it at Berkeley, BTW).\nThis is important because it means that VACUUM doesn't have to be perfect.\nThe existing VACUUM code goes to huge lengths to ensure that it compacts\nthe table as much as possible. We don't need that; if we miss some free\nspace this time around, but we can expect to get it the next time (or\neventually), we can be happy. 
This leads to thinking of space management\nin terms of steady-state behavior, rather than the periodic \"big bang\"\napproach that VACUUM represents now.\n\nBut having said that, there's no reason to remove the existing VACUUM\ncode: we can keep it around for situations where you need to crunch a\ntable as much as possible and you can afford to lock the table while\nyou do it. The new code would be a new command, maybe \"VACUUM LAZY\"\n(or some other name entirely).\n\nEnough handwaving, what about specifics?\n\n1. Forget moving tuples from one page to another. Doing that in a\ntransaction-safe way is hugely expensive and complicated. Lazy VACUUM\nwill only delete dead tuples and coalesce the free space thus made\navailable within each page of a relation.\n\n2. This does no good unless there's a provision to re-use that free space.\nTo do that, I propose a free space map (FSM) kept in shared memory, which\nwill tell backends which pages of a relation have free space. Only if the\nFSM shows no free space available will the relation be extended to insert\na new or updated tuple.\n\n3. Lazy VACUUM processes a table in five stages:\n A. Scan relation looking for dead tuples; accumulate a list of their\n TIDs, as well as info about existing free space. (This pass is\n completely read-only and so incurs no WAL traffic.)\n B. Remove index entries for the dead tuples. (See below for details.)\n C. Physically delete dead tuples and compact free space on their pages.\n D. Truncate any completely-empty pages at relation's end. (Optional,\n see below.)\n E. Create/update FSM entry for the table.\nNote that this is crash-safe as long as the individual update operations\nare atomic (which can be guaranteed by WAL entries for them). If a tuple\nis dead, we care not whether its index entries are still around or not;\nso there's no risk to logical consistency.\n\n4. 
Observe that lazy VACUUM need not really be a transaction at all, since\nthere's nothing it does that needs to be cancelled or undone if it is\naborted. This means that its WAL entries do not have to hang around past\nthe next checkpoint, which solves the huge-WAL-space-usage problem that\npeople have noticed while VACUUMing large tables under 7.1.\n\n5. Also note that there's nothing saying that lazy VACUUM must do the\nentire table in one go; once it's accumulated a big enough batch of dead\ntuples, it can proceed through steps B,C,D,E even though it's not scanned\nthe whole table. This avoids a rather nasty problem that VACUUM has\nalways had with running out of memory on huge tables.\n\n\nFree space map details\n----------------------\n\nI envision the FSM as a shared hash table keyed by table ID, with each\nentry containing a list of page numbers and free space in each such page.\n\nThe FSM is empty at system startup and is filled by lazy VACUUM as it\nprocesses each table. Backends then decrement/remove page entries as they\nuse free space.\n\nCritical point: the FSM is only a hint and does not have to be perfectly\naccurate. It can omit space that's actually available without harm, and\nif it claims there's more space available on a page than there actually\nis, we haven't lost much except a wasted ReadBuffer cycle. This allows\nus to take shortcuts in maintaining it. In particular, we can constrain\nthe FSM to a prespecified size, which is critical for keeping it in shared\nmemory. We just discard entries (pages or whole relations) as necessary\nto keep it under budget. Obviously, we'd not bother to make entries in\nthe first place for pages with only a little free space. 
Relation entries\nmight be discarded on a least-recently-used basis.\n\nAccesses to the FSM could create contention problems if we're not careful.\nI think this can be dealt with by having each backend remember (in its\nrelcache entry for a table) the page number of the last page it chose from\nthe FSM to insert into. That backend will keep inserting new tuples into\nthat same page, without touching the FSM, as long as there's room there.\nOnly then does it go back to the FSM, update or remove that page entry,\nand choose another page to start inserting on. This reduces the access\nload on the FSM from once per tuple to once per page. (Moreover, we can\narrange that successive backends consulting the FSM pick different pages\nif possible. Then, concurrent inserts will tend to go to different pages,\nreducing contention for shared buffers; yet any single backend does\nsequential inserts in one page, so that a bulk load doesn't cause\ndisk traffic scattered all over the table.)\n\nThe FSM can also cache the overall relation size, saving an lseek kernel\ncall whenever we do have to extend the relation for lack of internal free\nspace. This will help pay for the locking cost of accessing the FSM.\n\n\nLocking issues\n--------------\n\nWe will need two extensions to the lock manager:\n\n1. A new lock type that allows concurrent reads and writes\n(AccessShareLock, RowShareLock, RowExclusiveLock) but not anything else.\nLazy VACUUM will grab this type of table lock to ensure the table schema\ndoesn't change under it. Call it a VacuumLock until we think of a better\nname.\n\n2. A \"conditional lock\" operation that acquires a lock if available, but\ndoesn't block if not.\n\nThe conditional lock will be used by lazy VACUUM to try to upgrade its\nVacuumLock to an AccessExclusiveLock at step D (truncate table). 
If it's\nable to get exclusive lock, it's safe to truncate any unused end pages.\nWithout exclusive lock, it's not, since there might be concurrent\ntransactions scanning or inserting into the empty pages. We do not want\nlazy VACUUM to block waiting to do this, since if it does that it will\ncreate a lockout situation (reader/writer transactions will stack up\nbehind it in the lock queue while everyone waits for the existing\nreader/writer transactions to finish). Better to not do the truncation.\n\nAnother place where lazy VACUUM may be unable to do its job completely\nis in compaction of space on individual disk pages. It can physically\nmove tuples to perform compaction only if there are not currently any\nother backends with pointers into that page (which can be tested by\nlooking to see if the buffer reference count is one). Again, we punt\nand leave the space to be compacted next time if we can't do it right\naway.\n\nThe fact that inserted/updated tuples might wind up anywhere in the table,\nnot only at the end, creates no headaches except for heap_update. That\nroutine needs buffer locks on both the page containing the old tuple and\nthe page that will contain the new. To avoid possible deadlocks between\ndifferent backends locking the same two pages in opposite orders, we need\nto constrain the lock ordering used by heap_update. This is doable but\nwill require slightly more code than is there now.\n\n\nIndex access method improvements\n--------------------------------\n\nPresently, VACUUM deletes index tuples by doing a standard index scan\nand checking each returned index tuple to see if it points at any of\nthe tuples to be deleted. If so, the index AM is called back to delete\nthe tested index tuple. 
This is horribly inefficient: it means one trip\ninto the index AM (with associated buffer lock/unlock and search overhead)\nfor each tuple in the index, plus another such trip for each tuple actually\ndeleted.\n\nThis is mainly a problem of a poorly chosen API. The index AMs should\noffer a \"bulk delete\" call, which is passed a sorted array of main-table\nTIDs. The loop over the index tuples should happen internally to the\nindex AM. At least in the case of btree, this could be done by a\nsequential scan over the index pages, which avoids the random I/O of an\nindex-order scan and so should offer additional speedup.\n\nFurther out (possibly not for 7.2), we should also look at making the\nindex AMs responsible for shrinking indexes during deletion, or perhaps\nvia a separate \"vacuum index\" API. This can be done without exclusive\nlocks on the index --- the original Lehman & Yao concurrent-btrees paper\ndidn't describe how, but more recent papers show how to do it. As with\nthe main tables, I think it's sufficient to recycle freed space within\nthe index, and not necessarily try to give it back to the OS.\n\nWe will also want to look at upgrading the non-btree index types to allow\nconcurrent operations. This may be a research problem; I don't expect to\ntouch that issue for 7.2. (Hence, lazy VACUUM on tables with non-btree\nindexes will still create lockouts until this is addressed. But note that\nthe lockout only lasts through step B of the VACUUM, not the whole thing.)\n\n\nThere you have it. If people like this, I'm prepared to commit to\nmaking it happen for 7.2. Comments, objections, better ideas?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 May 2001 19:05:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Plans for solving the VACUUM problem" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n[...]\n\n> There you have it. If people like this, I'm prepared to commit to\n> making it happen for 7.2. 
Comments, objections, better ideas?\n\nI'm just commenting from the peanut gallery, but it looks really\nwell-thought-out to me. I like the general \"fast, good enough, not\nnecessarily perfect\" approach you've taken.\n\nI'd be happy to help test and debug once things get going.\n\n-Doug\n-- \nThe rain man gave me two cures; he said jump right in,\nThe first was Texas medicine--the second was just railroad gin,\nAnd like a fool I mixed them, and it strangled up my mind,\nNow people just get uglier, and I got no sense of time... --Dylan\n", "msg_date": "17 May 2001 20:04:06 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "Tom Lane wrote:\n> \nI love it all. \nI agree that vacuum should be an optional function that really packs tables.\n\nI also like the idea of a vacuum that runs in the background and does not too\nbadly affect operation.\n\nMy only suggestion would be to store some information in the statistics about\nwhether or not, and how bad, a table needs to be vacuumed. In a scheduled\nbackground environment, the tables that need it most should get it most often.\nOften times many tables never need to be vacuumed.\n\nAlso, it would be good to be able to update the statistics without doing a\nvacuum, i.e. rather than having to vacuum to analyze, being able to analyze\nwithout a vacuum.\n", "msg_date": "Thu, 17 May 2001 21:31:59 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "> Free space map details\n> ----------------------\n> \n> I envision the FSM as a shared hash table keyed by table ID, with each\n> entry containing a list of page numbers and free space in each such page.\n> \n> The FSM is empty at system startup and is filled by lazy VACUUM as it\n> processes each table. 
Backends then decrement/remove page entries as they\n> use free space.\n> \n> Critical point: the FSM is only a hint and does not have to be perfectly\n> accurate. It can omit space that's actually available without harm, and\n> if it claims there's more space available on a page than there actually\n> is, we haven't lost much except a wasted ReadBuffer cycle. This allows\n> us to take shortcuts in maintaining it. In particular, we can constrain\n> the FSM to a prespecified size, which is critical for keeping it in shared\n> memory. We just discard entries (pages or whole relations) as necessary\n> to keep it under budget. Obviously, we'd not bother to make entries in\n> the first place for pages with only a little free space. Relation entries\n> might be discarded on a least-recently-used basis.\n\nThe only question I have is about the Free Space Map. It would seem\nbetter to me if we could get this map closer to the table itself, rather\nthan having every table of every database mixed into the same shared\nmemory area. I can just see random table access clearing out most of\nthe map cache and perhaps making it less useful.\n\nIt would be nice if we could store the map on the first page of the disk\ntable, or store it in a flat file per table. I know both of these ideas\nwill not work, but I am just throwing it out to see if someone has a\nbetter idea. \n\nI wonder if cache failures should be what drives the vacuum daemon to\nvacuum a table? Sort of like, \"Hey, someone is asking for free pages\nfor that table. Let's go find some!\" That may work really well. \nAnother advantage of centralization is that we can record update/delete\ncounters per table, helping tell vacuum where to vacuum next. 
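The counter idea might be pictured like this (a purely hypothetical sketch -- no such mechanism existed in the backend at the time, and the names are invented):

```python
class VacuumScheduler:
    """Hypothetical sketch: count updates/deletes per table and send the
    vacuum daemon to the table producing the most dead tuples."""

    def __init__(self):
        self.churn = {}  # table name -> updates + deletes since last vacuum

    def note_write(self, table, n=1):
        self.churn[table] = self.churn.get(table, 0) + n

    def next_table(self):
        # Pick the busiest table, if any activity has been recorded.
        candidates = {t: c for t, c in self.churn.items() if c > 0}
        if not candidates:
            return None
        table = max(candidates, key=candidates.get)
        self.churn[table] = 0  # reset its counter once it is scheduled
        return table

sched = VacuumScheduler()
sched.note_write("session", 500)  # update-heavy session table
sched.note_write("log", 3)        # mostly-insert log table
assert sched.next_table() == "session"
assert sched.next_table() == "log"
assert sched.next_table() is None
```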
Vacuum\nroaming around looking for old tuples seems wasteful.\n\nAlso, I suppose if we have the map act as a shared table cache (fseek\ninfo), it may override the disadvantage of having it all centralized.\n\nI know I am throwing out the advantages and disadvantages of\ncentralization, but I thought I would give out the ideas.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 17 May 2001 22:27:51 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "Heck ya...\n\n> I wonder if cache failures should be what drives the vacuum daemon to\n> vacuum a table? Sort of like, \"Hey, someone is asking for free pages\n> for that table. Let's go find some!\" That may work really well.\n> Another advantage of centralization is that we can record update/delete\n> counters per table, helping tell vacuum where to vacuum next. Vacuum\n> roaming around looking for old tuples seems wasteful.\n\nCounters seem like a nice addition. For example, access patterns to session\ntables are almost pure UPDATE/DELETEs and a ton of them. On the other hand,\nlog type tables see no UPDATE/DELETE but tend to be huge in comparison. I\nsuspect many applications outside ours will show large disparities in the\n\"Vacuumability\" score for different tables.\n\nQuick question:\nUsing lazy vacuum, if I have two identical (at the file level) copies of a\ndatabase, run the same queries against them for a few days, then shut them\ndown again, are the copies still identical? 
Is this different than the\ncurrent behavior (ie, queries, full vacuum)?\n\n\nAZ\n\n\n", "msg_date": "Thu, 17 May 2001 23:31:54 -0400", "msg_from": "\"August Zajonc\" <augustz@bigfoot.com>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n \n> > Also, it would be good to be able to update the statistics without doing a\n> > vacuum, i.e. rather than having to vacuum to analyze, being able to analyze\n> > without a vacuum.\n> \n> Irrelevant, not to mention already done ...\n\nDo you mean that we already can do just analyze ?\n\nWhat is the syntax ?\n\n----------------\nHannu\n", "msg_date": "Fri, 18 May 2001 08:50:40 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Re: Plans for solving the VACUUM problem" }, { "msg_contents": "Tom Lane wrote:\n> \n>> \n> Enough handwaving, what about specifics?\n> \n> 1. Forget moving tuples from one page to another. Doing that in a\n> transaction-safe way is hugely expensive and complicated. Lazy VACUUM\n> will only delete dead tuples and coalesce the free space thus made\n> available within each page of a relation.\n\nIf we really need to move a tuple we can do it by a regular update that \nSET-s nothing and just copies the tuple to another page inside a\nseparate \ntransaction.\n\n-------------------\nHannu\n", "msg_date": "Fri, 18 May 2001 09:04:42 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "> Also, it would be good to be able to update the statistics without doing a\n> vacuum, i.e. rather than having to vacuum to analyze, being able to\nanalyze\n> without a vacuum.\n>\nI was going to ask the same thing. In a lot of situations (insert\ndominated) vacuum analyze is more important for the update of statistics\nthan for recovery of space. Could we roll that into this? 
Or should we\nalso have an Analyze daemon?\n\n", "msg_date": "Thu, 17 May 2001 23:36:36 -0500", "msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>", "msg_from_op": false, "msg_subject": "Re: Re: Plans for solving the VACUUM problem" }, { "msg_contents": "> Free space map details\n> ----------------------\n>\n> Accesses to the FSM could create contention problems if we're not careful.\n\nAnother quick thought for handling FSM contention problems. A backend could\ngive up waiting for access to the FSM after a short period of time, and just\nappend its data to the end of the file the same way it's done now. Dunno\nif that is feasible but it seemed like an idea to me.\n\nOther than that, I would just like to say this will be a great improvement\nfor pgsql. Tom, you and several others on this list continue to impress the\nhell out of me.\n\n", "msg_date": "Thu, 17 May 2001 23:45:14 -0500", "msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> My only suggestion would be to store some information in the statistics about\n> whether or not, and how bad, a table needs to be vacuumed.\n\nI was toying with the notion of using the FSM to derive that info,\nsomewhat indirectly to be sure (since what the FSM could tell you would\nbe about tuples inserted not tuples deleted). Heavily used FSM entries\nwould be the vacuum daemon's cues for tables to hit more often.\n\nANALYZE stats don't seem like a productive way to attack this, since\nthere's no guarantee they'd be updated often enough. If the overall\ndata distribution of a table isn't changing much, there's no need to\nanalyze it often. \n\n> Also, it would be good to be able to update the statistics without doing a\n> vacuum, i.e. 
rather than having to vacuum to analyze, being able to analyze\n> without a vacuum.\n\nIrrelevant, not to mention already done ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 00:52:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "On Thu, 17 May 2001, Tom Lane wrote:\n\n> I have been thinking about the problem of VACUUM and how we might fix it\n> for 7.2. Vadim has suggested that we should attack this by implementing\n> an overwriting storage manager and transaction UNDO, but I'm not totally\n> comfortable with that approach: it seems to me that it's an awfully large\n> change in the way Postgres works. Instead, here is a sketch of an attack\n> that I think fits better into the existing system structure.\n\n<snip>\n\n> \n\nMy AUD0.02, FWIW, is this sounds great. You said you were only planning to\nconcentrate on performance enhancements, not new features, Tom, but IMHO\nthis is a new feature and a good one :).\n\nAs several others have mentioned, automatic analyze would also be nice. I\ngather the backend already has the ability to treat analyze as a separate\nprocess, so presumably this is a completely separate issue from automatic\nvacuum. Some sort of background daemon or whatever would be good. And\nagain, one could take the approach that it doesn't have to get it 100%\nright, at least in the short term; as long as it is continually\nincrementing itself in the direction of accurate statistics, then that's\nmuch better than the current situation. 
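As a toy illustration of that incremental notion (not how ANALYZE actually works -- just the idea of an estimate that keeps drifting toward the true value, here via an exponentially weighted running average):

```python
def ewma_estimates(samples, alpha=0.2):
    """Keep a running statistic that converges toward the true value as
    new samples arrive, instead of recomputing everything from scratch."""
    estimate, history = None, []
    for x in samples:
        estimate = x if estimate is None else estimate + alpha * (x - estimate)
        history.append(estimate)
    return history

# Starting from a stale value, the estimate approaches the steady value
# 100 without ever doing a full rescan of the data.
history = ewma_estimates([0] + [100] * 40)
assert history[0] == 0
assert abs(history[-1] - 100) < 1.0
```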
Presumably one could also retain\nthe option of doing an explicit analyze occasionally, if you have\nprocessor cycles to burn and are really keen to get the stats correct in\na hurry.\n\nTim\n\n-- \n-----------------------------------------------\nTim Allen tim@proximity.com.au\nProximity Pty Ltd http://www.proximity.com.au/\n http://www4.tpg.com.au/users/rita_tim/\n\n", "msg_date": "Fri, 18 May 2001 14:57:46 +1000 (EST)", "msg_from": "Tim Allen <tim@proximity.com.au>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "Tom Lane wrote:\n> \n> I have been thinking about the problem of VACUUM and how we might fix it\n> for 7.2. Vadim has suggested that we should attack this by implementing\n> an overwriting storage manager and transaction UNDO, but I'm not totally\n> comfortable with that approach: \n\nIIRC, Vadim doesn't intend to implement an overwriting\nsmgr at least in the near future, though he seems to \nhave a plan to implement functionality to allow space\nre-use without vacuum, as marked in TODO. IMHO UNDO\nfunctionality under a no-overwriting (ISTM much easier\nthan under an overwriting) smgr has the highest priority.\nSavepoints were planned for 7.0 but we don't have them yet.\n\n[snip]\n\n> \n> Locking issues\n> --------------\n> \n> We will need two extensions to the lock manager:\n> \n> 1. A new lock type that allows concurrent reads and writes\n> (AccessShareLock, RowShareLock, RowExclusiveLock) but not\n> anything else.\n\nWhat's different from RowExclusiveLock?\nDoes it conflict with itself?\n\n> \n> The conditional lock will be used by lazy VACUUM to try to upgrade its\n> VacuumLock to an AccessExclusiveLock at step D (truncate table). If it's\n> able to get exclusive lock, it's safe to truncate any unused end pages.\n> Without exclusive lock, it's not, since there might be concurrent\n> transactions scanning or inserting into the empty pages. 
We do not want\n> lazy VACUUM to block waiting to do this, since if it does that it will\n> create a lockout situation (reader/writer transactions will stack up\n> behind it in the lock queue while everyone waits for the existing\n> reader/writer transactions to finish). Better to not do the truncation.\n> \n\nAnd would the truncation occur that often in reality under\nthe scheme(without tuple movement) ?\n\n[snip]\n\n> \n> Index access method improvements\n> --------------------------------\n> \n> Presently, VACUUM deletes index tuples by doing a standard index scan\n> and checking each returned index tuple to see if it points at any of\n> the tuples to be deleted. If so, the index AM is called back to delete\n> the tested index tuple. This is horribly inefficient: it means one trip\n> into the index AM (with associated buffer lock/unlock and search overhead)\n> for each tuple in the index, plus another such trip for each tuple actually\n> deleted.\n> \n> This is mainly a problem of a poorly chosen API. The index AMs should\n> offer a \"bulk delete\" call, which is passed a sorted array of main-table\n> TIDs. The loop over the index tuples should happen internally to the\n> index AM. At least in the case of btree, this could be done by a\n> sequential scan over the index pages, which avoids the random I/O of an\n> index-order scan and so should offer additional speedup.\n> \n\n???? Isn't current implementation \"bulk delete\" ?\nFast access to individual index tuples seems to be\nalso needed in case a few dead tuples.\n\n> Further out (possibly not for 7.2), we should also look at making the\n> index AMs responsible for shrinking indexes during deletion, or perhaps\n> via a separate \"vacuum index\" API. This can be done without exclusive\n> locks on the index --- the original Lehman & Yao concurrent-btrees paper\n> didn't describe how, but more recent papers show how to do it. 
As with\n> the main tables, I think it's sufficient to recycle freed space within\n> the index, and not necessarily try to give it back to the OS.\n> \n\nGreat. There would be few disadvantages in our btree\nimplementation.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Fri, 18 May 2001 14:21:12 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "\"Matthew T. O'Connor\" <matthew@zeut.net> writes:\n> Another quick thought for handling FSM contention problems. A backend could\n> give up waiting for access to the FSM after a short period of time, and just\n> append it's data to the end of the file the same way it's done now. Dunno\n> if that is feasable but it seemed like an idea to me.\n\nMmm ... maybe, but I doubt it'd help much. Appending a page to the file\nrequires grabbing some kind of lock anyway (since you can't have two\nbackends doing it at the same instant). With any luck, that locking can\nbe merged with the locking involved in accessing the FSM.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 01:29:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> The only question I have is about the Free Space Map. It would seem\n> better to me if we could get this map closer to the table itself, rather\n> than having every table of every database mixed into the same shared\n> memory area. I can just see random table access clearing out most of\n> the map cache and perhaps making it less useless.\n\nWhat random access? 
Read transactions will never touch the FSM at all.\nAs for writes, seems to me the places you are writing are exactly the\nplaces you need info for.\n\nYou make a good point, which is that we don't want a schedule-driven\nVACUUM to load FSM entries for unused tables into the map at the cost\nof throwing out entries that *are* being used. But it seems to me that\nthat's easily dealt with if we recognize the risk.\n\n> It would be nice if we could store the map on the first page of the disk\n> table, or store it in a flat file per table. I know both of these ideas\n> will not work,\n\nYou said it. What's wrong with shared memory? You can't get any closer\nthan shared memory: keeping maps in the files would mean you'd need to\nchew up shared-buffer space to get at them. (And what was that about\nrandom accesses causing your maps to get dropped? That would happen\nfor sure if they live in shared buffers.)\n\nAnother problem with keeping stuff in the first page: what happens when\nthe table gets big enough that 8k of map data isn't really enough?\nWith a shared-memory area, we can fairly easily allocate a variable\namount of space based on total size of a relation vs. total size of\nrelations under management.\n\nIt is true that a shared-memory map would be useless at system startup,\nuntil VACUUM has run and filled in some info. But I don't see that as\na big drawback. People who aren't developers like us don't restart\ntheir postmasters every five minutes.\n\n> Another advantage of centralization is that we can record update/delete\n> counters per table, helping tell vacuum where to vacuum next. Vacuum\n> roaming around looking for old tuples seems wasteful.\n\nIndeed. 
But I thought you were arguing against centralization?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 01:41:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "At 19:05 17/05/01 -0400, Tom Lane wrote:\n>\n>But having said that, there's no reason to remove the existing VACUUM\n>code: we can keep it around for situations where you need to crunch a\n>table as much as possible and you can afford to lock the table while\n>you do it. \n\nIt would be great if this was the *only* reason to use the old-style\nVACUUM. ie. We should try to avoid a solution that has a VACUUM LAZY in\nbackground and a recommendation to a 'VACUUM PROPERLY' once in a while.\n\n\n>The new code would be a new command, maybe \"VACUUM LAZY\"\n>(or some other name entirely).\n\nMaybe a name that reflects its strength/purpose: 'VACUUM\nONLINE/BACKGROUND/NOLOCKS/CONCURRENT' etc.\n\n\n>Enough handwaving, what about specifics?\n>\n>1. Forget moving tuples from one page to another. Doing that in a\n>transaction-safe way is hugely expensive and complicated. Lazy VACUUM\n>will only delete dead tuples and coalesce the free space thus made\n>available within each page of a relation.\n\nCould this be done opportunistically, meaning it builds up a list of\ncandidates to move (perhaps based on emptiness of page), then moves a\nsubset of these in each pass? It's only really useful in the case of a\ntable that has a high update load then becomes static. Which is not as\nunusual as it sounds: people do archive tables by renaming them, then\ncreate a new lean 'current' table. With the new vacuum, the static table\nmay end up with many half-empty pages that are never reused.\n\n\n>2. This does no good unless there's a provision to re-use that free space.\n>To do that, I propose a free space map (FSM) kept in shared memory, which\n>will tell backends which pages of a relation have free space. 
Only if the\n>FSM shows no free space available will the relation be extended to insert\n>a new or updated tuple.\n\nI assume that now is not a good time to bring up memory-mapped files? ;-}\n\n\n>3. Lazy VACUUM processes a table in five stages:\n> A. Scan relation looking for dead tuples; accumulate a list of their\n> TIDs, as well as info about existing free space. (This pass is\n> completely read-only and so incurs no WAL traffic.)\n\nWere you planning on just a free byte count, or something smaller? Dec/RDB\nuses a nasty system of DBA-defined thresholds for each storage area: 4 bits,\nwhere 0=empty, and 1, 2 & 3 indicate above/below thresholds (3 is also\nconsidered 'full'). The thresholds are usually set based on average record\nsizes. In this day & age, I suspect a 1 byte percentage, or 2 byte count is\nOK unless space is really at a premium.\n\n\n>5. Also note that there's nothing saying that lazy VACUUM must do the\n>entire table in one go; once it's accumulated a big enough batch of dead\n>tuples, it can proceed through steps B,C,D,E even though it's not scanned\n>the whole table. This avoids a rather nasty problem that VACUUM has\n>always had with running out of memory on huge tables.\n\nThis sounds great, especially if the same approach could be adopted when/if\nmoving records.\n\n\n>Critical point: the FSM is only a hint and does not have to be perfectly\n>accurate. It can omit space that's actually available without harm, and\n>if it claims there's more space available on a page than there actually\n>is, we haven't lost much except a wasted ReadBuffer cycle.\n\nSo long as you store the # of bytes (or %), that should be fine. One of the\nhorrors of the Dec/RDB system is that with badly set thresholds you can\ncycle through many pages looking for one that *really* has enough free space.\n\nAlso, would the detecting process fix the bad entry?\n\n\n> This allows\n>us to take shortcuts in maintaining it. 
In particular, we can constrain\n>the FSM to a prespecified size, which is critical for keeping it in shared\n>memory. We just discard entries (pages or whole relations) as necessary\n>to keep it under budget.\n\nPresumably keeping the 'most empty' pages?\n\n\n>Obviously, we'd not bother to make entries in\n>the first place for pages with only a little free space. Relation entries\n>might be discarded on a least-recently-used basis.\n\nYou also might want to record some 'average/min/max' record size for the\ntable to assess when a page's free space is insufficient for the\naverage/minimum record size.\n\nWhile on the subject of record keeping, it would be great if it was coded\nto collect statistics about it's own operation for Jan's stats package.\n\n\n>Accesses to the FSM could create contention problems if we're not careful.\n>I think this can be dealt with by having each backend remember (in its\n>relcache entry for a table) the page number of the last page it chose from\n>the FSM to insert into. That backend will keep inserting new tuples into\n>that same page, without touching the FSM, as long as there's room there.\n>Only then does it go back to the FSM, update or remove that page entry,\n>and choose another page to start inserting on. This reduces the access\n>load on the FSM from once per tuple to once per page. \n\nThis seems to have the potential to create as many false FSM page entries\nas there are backends. Is it really that expensive to lock the FSM table\nentry, subtract a number, then unlock it? especially when compared to the\ncost of adding/updating a whole record.\n\n\n>(Moreover, we can\n>arrange that successive backends consulting the FSM pick different pages\n>if possible. 
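The once-per-page scheme just quoted might be modeled like this (a rough sketch only; the real thing would live in each backend's relcache entry and a shared-memory FSM written in C, and all class and field names here are invented):

```python
class Backend:
    """Rough model of the quoted scheme: cache the page last chosen from
    the shared FSM and keep filling it, so the FSM is consulted once per
    page instead of once per tuple."""

    def __init__(self, fsm):
        self.fsm = fsm      # shared map: page number -> free bytes
        self.page = None    # cached insertion target
        self.free = 0       # free space believed to remain on that page
        self.fsm_trips = 0  # how often we touched the shared structure

    def insert(self, tuple_size):
        if self.page is None or self.free < tuple_size:
            self.fsm_trips += 1                  # only now touch the FSM
            self.page = max(self.fsm, key=self.fsm.get)
            self.free = self.fsm.pop(self.page)  # remove that page entry
        self.free -= tuple_size
        return self.page

fsm = {1: 400, 2: 1000}
backend = Backend(fsm)
pages = [backend.insert(100) for _ in range(12)]
# Ten tuples fill page 2; only then does the backend revisit the FSM.
assert pages == [2] * 10 + [1] * 2
assert backend.fsm_trips == 2
```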
Then, concurrent inserts will tend to go to different pages,\n>reducing contention for shared buffers; yet any single backend does\n>sequential inserts in one page, so that a bulk load doesn't cause\n>disk traffic scattered all over the table.)\n\nThis will also increase the natural clustering of the database for SERIAL\nfields even though the overall ordering will be all over the place (at\nleast for insert-intensive apps).\n\n\n>\n>Locking issues\n>--------------\n>\n>We will need two extensions to the lock manager:\n>\n>1. A new lock type that allows concurrent reads and writes\n>(AccessShareLock, RowShareLock, RowExclusiveLock) but not anything else.\n>Lazy VACUUM will grab this type of table lock to ensure the table schema\n>doesn't change under it. Call it a VacuumLock until we think of a better\n>name.\n\nIs it possible/worth adding a 'blocking notification' to the lock manager.\nThen VACUUM could choose to terminate/restart when someone wants to do a\nschema change. This is realy only relevant if the VACUUM will be prolonged.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 18 May 2001 15:52:26 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Tom Lane wrote:\n>> 1. 
A new lock type that allows concurrent reads and writes\n>> (AccessShareLock, RowShareLock, RowExclusiveLock) but not\n>> anything else.\n\n> What's different from RowExclusiveLock ?\n\nI wanted something that *does* conflict with itself, thereby ensuring\nthat only one instance of VACUUM can be running on a table at a time.\nThis is not absolutely necessary, perhaps, but without it matters would\nbe more complex for little gain that I can see. (For example, the tuple\nTIDs that lazy VACUUM gathers in step A might already be deleted and\ncompacted out of existence by another VACUUM, and then reused as new\ntuples, before the first VACUUM gets back to the page to delete them.\nThere would need to be a defense against that if we allow concurrent\nVACUUMs.)\n\n> And would the truncation occur that often in reality under\n> the scheme(without tuple movement) ?\n\nProbably not, per my comments to someone else. I'm not very concerned\nabout that, as long as we are able to recycle freed space within the\nrelation.\n\nWe could in fact move tuples if we wanted to --- it's not fundamentally\ndifferent from an UPDATE --- but then VACUUM becomes a transaction and\nwe have the WAL-log-traffic problem back again. I'd like to try it\nwithout that and see if it gets the job done.\n\n> ???? Isn't current implementation \"bulk delete\" ?\n\nNo, the index AM is called separately for each index tuple to be\ndeleted; more to the point, the search for deletable index tuples\nshould be moved inside the index AM for performance reasons.\n\n> Fast access to individual index tuples seems to be\n> also needed in case a few dead tuples.\n\nYes, that is the approach Vadim used in the LAZY VACUUM code he did\nlast summer for Alfred Perlstein. I had a couple of reasons for not\nduplicating that method:\n\n1. Searching for individual index tuples requires hanging onto (or\nrefetching) the index key data for each target tuple. 
A bulk delete\nbased only on tuple TIDs is simpler and requires little memory.\n\n2. The main reason for that form of lazy VACUUM is to minimize the time\nspent holding the exclusive lock on a table. With this approach we\ndon't need to worry so much about that. A background task trawling\nthrough an index won't really bother anyone, since it won't lock out\nupdates.\n\n3. If we're concerned about the speed of lazy VACUUM when there's few\ntuples to be recycled, there's a really easy answer: don't recycle 'em.\nLeave 'em be until there are enough to make it worth going after 'em.\nRemember this is a \"fast and good enough\" approach, not a \"get every\npossible byte every time\" approach.\n\nUsing a key-driven delete when we have only a few deletable tuples might\nbe a useful improvement later on, but I don't think it's necessary to\nbother with it in the first go-round. This is a big project already...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 02:07:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At 19:05 17/05/01 -0400, Tom Lane wrote:\n>> 1. Forget moving tuples from one page to another.\n\n> Could this be done opportunistically, meaning it builds up a list of\n> candidates to move (perhaps based on emptiness of page), then moves a\n> subset of these in each pass?\n\nWell, if we move tuples at all then we have a considerably different\nanimal: to move tuples across pages you must be a transaction so that\nyou can have an atomic commit for both pages, and that brings back the\nissue of how long the transaction runs for and how large its WAL trail\nwill grow before it can be dropped. Yeah, you could move a limited\nnumber of tuples, commit, and start again ... 
but it's not so\nlightweight anymore.\n\nPerhaps we will eventually end up with three strengths of VACUUM:\nthe existing heavy-duty form, the \"lazy\" form that isn't transactional,\nand an intermediate form that is willing to move tuples in simpler\ncases (none of that tuple-chain-moving stuff please ;-)). But I'm\nnot buying into producing the intermediate form in this go-round.\nLet's build the lazy form first and get some experience with it\nbefore we decide if we need yet another kind of VACUUM.\n\n>> To do that, I propose a free space map (FSM) kept in shared memory, which\n>> will tell backends which pages of a relation have free space. Only if the\n>> FSM shows no free space available will the relation be extended to insert\n>> a new or updated tuple.\n\n> I assume that now is not a good time to bring up memory-mapped files? ;-}\n\nDon't see the relevance exactly ...\n\n> Were you planning on just a free byte count, or something smaller? Dec/RDB\n> uses a nast system of DBA-defined thresholds for each storage area: 4 bits,\n> where 0=empty, and 1, 2 & 3 indicate above/below thresholds (3 is also\n> considered 'full'). The thresholds are usually set based on average record\n> sizes. In this day & age, I suspect a 1 byte percentage, or 2 byte count is\n> OK unless space is really a premium.\n\nI had toyed with two different representations of the FSM:\n\n1. Bitmap: one bit per page in the relation, set if there's an\n\"interesting\" amount of free space in the page (exact threshold ???).\nDEC's approach seems to be a generalization of this.\n\n2. Page list: page number and number of free bytes. This is six bytes\nper page represented; you could maybe compress it to 5 but I'm not sure\nthere's much point.\n\nI went with #2 mainly because it adapts easily to being forced into a\nlimited amount of space (just forget the pages with least free space)\nwhich is critical for a table to be kept in shared memory. A bitmap\nwould be less forgiving. 
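A toy model of representation #2 under a fixed budget (hedged sketch with invented names; the real structure would be a hash table in shared memory, not Python dicts):

```python
class PageListFSM:
    """Toy page-list FSM: (page, free bytes) entries capped at a fixed
    budget by forgetting the pages with the least free space."""

    def __init__(self, budget):
        self.budget = budget
        self.entries = {}  # page number -> free bytes (a hint only)

    def record(self, page, free_bytes):
        self.entries[page] = free_bytes
        while len(self.entries) > self.budget:
            # Forget the page with least free space to stay under budget.
            worst = min(self.entries, key=self.entries.get)
            del self.entries[worst]

    def find_page(self, needed):
        # The caller must still verify the page on arrival, since the
        # FSM is only a hint and its entry may be stale.
        for page, free in self.entries.items():
            if free >= needed:
                return page
        return None  # no known free space: caller extends the relation

fsm = PageListFSM(budget=2)
fsm.record(1, 50)
fsm.record(2, 900)
fsm.record(3, 400)  # over budget: page 1 (50 free bytes) is forgotten
assert set(fsm.entries) == {2, 3}
assert fsm.find_page(500) == 2
assert fsm.find_page(5000) is None
```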
#2 also gives you a better chance of going\nto a page that actually has enough free space for your tuple, though\nyou'd still need to be prepared to find out that it no longer has\nenough once you get to it. (Whereupon, you go back to the FSM, fix\nthe outdated entry, and keep searching.)\n\n> While on the subject of record keeping, it would be great if it was coded\n> to collect statistics about it's own operation for Jan's stats package.\n\nSeems like a good idea, but I've seen no details yet about that package...\n\n> This seems to have the potential to create as many false FSM page entries\n> as there are backends. Is it really that expensive to lock the FSM table\n> entry, subtract a number, then unlock it?\n\nYes, if you are contending against N other backends to get that lock.\nRemember the *whole* point of this design is to avoid locking as much\nas possible. Too many trips to the FSM could throw away the performance\nadvantage.\n\n> Is it possible/worth adding a 'blocking notification' to the lock manager.\n> Then VACUUM could choose to terminate/restart when someone wants to do a\n> schema change. This is realy only relevant if the VACUUM will be prolonged.\n\nSeems messier than it's worth ... the VACUUM might not be the only thing\nholding off your schema update anyway, and regular transactions aren't\nlikely to pay any attention.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 02:58:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Tom Lane wrote:\n> \n> > And would the truncation occur that often in reality under\n> > the scheme(without tuple movement) ?\n> \n> Probably not, per my comments to someone else. 
I'm not very concerned\n> about that, as long as we are able to recycle freed space within the\n> relation.\n> \n\nAgreed.\n\n> We could in fact move tuples if we wanted to --- it's not fundamentally\n> different from an UPDATE --- but then VACUUM becomes a transaction and\n> we have the WAL-log-traffic problem back again.\n\nAnd it has always been the cause of bugs and inefficiency\nof VACUUM, IMHO.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Fri, 18 May 2001 16:03:36 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n>> Irrelevant, not to mention already done ...\n\n> Do you mean that we already can do just analyze ?\n\nIn development sources, yes. See\n\nhttp://www.ca.postgresql.org/devel-corner/docs/postgres/sql-analyze.html\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 03:18:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: Plans for solving the VACUUM problem " }, { "msg_contents": "On Thu, 17 May 2001, Tom Lane wrote:\n\n>\n> We will also want to look at upgrading the non-btree index types to allow\n> concurrent operations. This may be a research problem; I don't expect to\n> touch that issue for 7.2. (Hence, lazy VACUUM on tables with non-btree\n> indexes will still create lockouts until this is addressed. But note that\n> the lockout only lasts through step B of the VACUUM, not the whole thing.)\n\nam I right you plan to work with GiST indexes as well ?\nWe read a paper \"Concurrency and Recovery in Generalized Search Trees\"\nby Marcel Kornacker, C. Mohan, Joseph Hellerstein\n(http://citeseer.nj.nec.com/kornacker97concurrency.html)\nand probably we could go in this direction. 
Right now we're working\non adding multi-key support to GiST.\n\nbtw, I have a question about function gistPageAddItem in gist.c\nit just decompresses - compresses the key and calls PageAddItem to\nwrite the tuple. We don't understand why we need this function -\nwhy not use the PageAddItem function? Adding multi-key support requires\na lot of work and we don't want to waste our efforts and time.\nWe have already done some tests (gistPageAddItem -> PageAddItem) and\neverything is ok. Bruce, you're enthusiastic about removing unused code :-)\n\n\n\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 18 May 2001 18:45:46 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> On Thu, 17 May 2001, Tom Lane wrote:\n>> We will also want to look at upgrading the non-btree index types to allow\n>> concurrent operations.\n\n> am I right you plan to work with GiST indexes as well ?\n> We read a paper \"Concurrency and Recovery in Generalized Search Trees\"\n> by Marcel Kornacker, C. Mohan, Joseph Hellerstein\n> (http://citeseer.nj.nec.com/kornacker97concurrency.html)\n> and probably we could go in this direction. Right now we're working\n> on adding of multi-key support to GiST.\n\nYes, GIST should be upgraded to do concurrency. 
But I have no objection\nif you want to work on multi-key support first.\n\nMy feeling is that a few releases from now we will have btree and GIST\nas the preferred/well-supported index types. Hash and rtree might go\naway altogether --- AFAICS they don't do anything that's not done as\nwell or better by btree or GIST, so what's the point of maintaining\nthem?\n\n> btw, I have a question about function gistPageAddItem in gist.c\n> it just decompress - compress key and calls PageAddItem to\n> write tuple. We don't understand why do we need this function -\n\nThe comment says\n\n** Take a compressed entry, and install it on a page. Since we now know\n** where the entry will live, we decompress it and recompress it using\n** that knowledge (some compression routines may want to fish around\n** on the page, for example, or do something special for leaf nodes.)\n\nAre you prepared to say that you will no longer support the ability for\nGIST compression routines to do those things? That seems shortsighted.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 12:38:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "On Fri, 18 May 2001, Tom Lane wrote:\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > On Thu, 17 May 2001, Tom Lane wrote:\n> >> We will also want to look at upgrading the non-btree index types to allow\n> >> concurrent operations.\n>\n> > am I right you plan to work with GiST indexes as well ?\n> > We read a paper \"Concurrency and Recovery in Generalized Search Trees\"\n> > by Marcel Kornacker, C. Mohan, Joseph Hellerstein\n> > (http://citeseer.nj.nec.com/kornacker97concurrency.html)\n> > and probably we could go in this direction. 
Right now we're working\n> > on adding of multi-key support to GiST.\n\nAnother paper to read:\n\"Efficient Concurrency Control in Multidimensional Access Methods\"\nby Kaushik Chakrabarti\nhttp://www.ics.uci.edu/~kaushik/research/pubs.html\n\n>\n> Yes, GIST should be upgraded to do concurrency. But I have no objection\n> if you want to work on multi-key support first.\n>\n> My feeling is that a few releases from now we will have btree and GIST\n> as the preferred/well-supported index types. Hash and rtree might go\n> away altogether --- AFAICS they don't do anything that's not done as\n> well or better by btree or GIST, so what's the point of maintaining\n> them?\n\nCool ! We could write rtree (and btree) ops using GiST. We have already\nrealization of rtree for box ops and there are no problem to write\nadditional ops for points, polygons etc.\n\n>\n> > btw, I have a question about function gistPageAddItem in gist.c\n> > it just decompress - compress key and calls PageAddItem to\n> > write tuple. We don't understand why do we need this function -\n>\n> The comment says\n>\n> ** Take a compressed entry, and install it on a page. Since we now know\n> ** where the entry will live, we decompress it and recompress it using\n> ** that knowledge (some compression routines may want to fish around\n> ** on the page, for example, or do something special for leaf nodes.)\n>\n> Are you prepared to say that you will no longer support the ability for\n> GIST compression routines to do those things? That seems shortsighted.\n>\n\nNo-no !!! we don't intend to lose that (compression) functionality.\n\nthere are several reason we want to eliminate gistPageAddItem:\n1. It seems there are no examples where compress uses information about\n the page.\n2. There is some discrepancy between calculation of free space on page and\n the size of tuple saved on page - calculation of free space on page\n by gistNoSpace uses compressed tuple but tuple itself saved after\n recompression. 
It's possible that the size of the tuple could change\n after recompression.\n3. decompress/compress could slow down inserts because it happens\n for every tuple.\n4. Currently gistPageAddItem is broken because it's not toast safe\n (see call gist_tuple_replacekey in gistPageAddItem)\n\nRight now we use #define GIST_PAGEADDITEM in gist.c and\nwork with the original PageAddItem. If people insist on gistPageAddItem\nwe'll totally rewrite it. But for now we have enough work to do.\n\n\n> \t\t\tregards, tom lane\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 18 May 2001 20:10:10 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n>> The comment says\n>>\n>> ** Take a compressed entry, and install it on a page. Since we now know\n>> ** where the entry will live, we decompress it and recompress it using\n>> ** that knowledge (some compression routines may want to fish around\n>> ** on the page, for example, or do something special for leaf nodes.)\n>>\n>> Are you prepared to say that you will no longer support the ability for\n>> GIST compression routines to do those things? That seems shortsighted.\n\n> No-no !!! we don't intend to lose that (compression) functionality.\n\n> there are several reason we want to eliminate gistPageAddItem:\n> 1. It seems there are no examples where compress uses information about\n> the page.\n\nWe have none now, perhaps, but the original GIST authors seemed to think\nit would be a useful capability. 
I'm hesitant to rip out functionality\nthat they put in --- I don't think we understand GIST better than they\ndid ;-)\n\n> 2. There is some discrepancy between calculation of free space on page and\n> the size of tuple saved on page - calculation of free space on page\n> by gistNoSpace uses compressed tuple but tuple itself saved after\n> recompression. It's possible that size of tupple could changed\n> after recompression.\n\nYes, that's a serious problem with the idea. We'd have to suppose that\nrecompression could not increase the size of the tuple --- or else be\nprepared to back up and find another page and do it again (ugh).\n\n> 3. decompress/compress could slowdown insert because it happens\n> for every tuple.\n\nSeems like there should be a flag somewhere that tells whether the\ncompression function actually cares about the page context or not.\nIf not, you could skip the useless processing.\n\n> 4. Currently gistPageAddItem is broken because it's not toast safe\n> (see call gist_tuple_replacekey in gistPageAddItem)\n\nNot sure I see the problem. gist_tuple_replacekey is kinda ugly, but\nwhat's it got to do with TOAST?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 14:10:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "On Fri, 18 May 2001, Tom Lane wrote:\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> >> The comment says\n> >>\n> >> ** Take a compressed entry, and install it on a page. Since we now know\n> >> ** where the entry will live, we decompress it and recompress it using\n> >> ** that knowledge (some compression routines may want to fish around\n> >> ** on the page, for example, or do something special for leaf nodes.)\n> >>\n> >> Are you prepared to say that you will no longer support the ability for\n> >> GIST compression routines to do those things? That seems shortsighted.\n>\n> > No-no !!! 
we don't intend to lose that (compression) functionality.\n>\n> > there are several reason we want to eliminate gistPageAddItem:\n> > 1. It seems there are no examples where compress uses information about\n> > the page.\n>\n> We have none now, perhaps, but the original GIST authors seemed to think\n> it would be a useful capability. I'm hesitant to rip out functionality\n> that they put in --- I don't think we understand GIST better than they\n> did ;-)\n\nok. we save the code for future. Probably we could ask original author\nbtw, who is the orig, author (Hellerstein, Aoki) ?\n\n>\n> > 2. There is some discrepancy between calculation of free space on page and\n> > the size of tuple saved on page - calculation of free space on page\n> > by gistNoSpace uses compressed tuple but tuple itself saved after\n> > recompression. It's possible that size of tupple could changed\n> > after recompression.\n>\n> Yes, that's a serious problem with the idea. We'd have to suppose that\n> recompression could not increase the size of the tuple --- or else be\n> prepared to back up and find another page and do it again (ugh).\n>\n\nWe have to keep in mind this when return to gistPageAddItem.\n\n> > 3. decompress/compress could slowdown insert because it happens\n> > for every tuple.\n>\n> Seems like there should be a flag somewhere that tells whether the\n> compression function actually cares about the page context or not.\n> If not, you could skip the useless processing.\n>\n\nok. see above.\n\n> > 4. Currently gistPageAddItem is broken because it's not toast safe\n> > (see call gist_tuple_replacekey in gistPageAddItem)\n>\n> Not sure I see the problem. 
gist_tuple_replacekey is kinda ugly, but\n> what's it got to do with TOAST?\n>\n\ntuple could be formed not by index_formtuple which is a correct way.\n\n\n> \t\t\tregards, tom lane\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 18 May 2001 21:24:53 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > On Thu, 17 May 2001, Tom Lane wrote:\n> >> We will also want to look at upgrading the non-btree index types to allow\n> >> concurrent operations.\n> \n> > am I right you plan to work with GiST indexes as well ?\n> > We read a paper \"Concurrency and Recovery in Generalized Search Trees\"\n> > by Marcel Kornacker, C. Mohan, Joseph Hellerstein\n> > (http://citeseer.nj.nec.com/kornacker97concurrency.html)\n> > and probably we could go in this direction. Right now we're working\n> > on adding of multi-key support to GiST.\n> \n> Yes, GIST should be upgraded to do concurrency. But I have no objection\n> if you want to work on multi-key support first.\n> \n> My feeling is that a few releases from now we will have btree and GIST\n> as the preferred/well-supported index types. Hash and rtree might go\n> away altogether --- AFAICS they don't do anything that's not done as\n> well or better by btree or GIST, so what's the point of maintaining\n> them?\n\nWe clearly have too many index types, and we are actively telling people\nnot to use hash. 
And Oleg, don't you have a better version of GIST rtree\nthan our native rtree?\n\nI certainly like streamlining our stuff.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 18 May 2001 14:35:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "On Fri, 18 May 2001, Bruce Momjian wrote:\n\n> > Oleg Bartunov <oleg@sai.msu.su> writes:\n> > > On Thu, 17 May 2001, Tom Lane wrote:\n> > >> We will also want to look at upgrading the non-btree index types to allow\n> > >> concurrent operations.\n> >\n> > > am I right you plan to work with GiST indexes as well ?\n> > > We read a paper \"Concurrency and Recovery in Generalized Search Trees\"\n> > > by Marcel Kornacker, C. Mohan, Joseph Hellerstein\n> > > (http://citeseer.nj.nec.com/kornacker97concurrency.html)\n> > > and probably we could go in this direction. Right now we're working\n> > > on adding of multi-key support to GiST.\n> >\n> > Yes, GIST should be upgraded to do concurrency. But I have no objection\n> > if you want to work on multi-key support first.\n> >\n> > My feeling is that a few releases from now we will have btree and GIST\n> > as the preferred/well-supported index types. Hash and rtree might go\n> > away altogether --- AFAICS they don't do anything that's not done as\n> > well or better by btree or GIST, so what's the point of maintaining\n> > them?\n>\n> We clearly have too many index types, and we are actively telling people\n> not to use hash. And Oleg, don't you have a better version of GIST rtree\n> than our native rtree?\n\nWe have rtree implementation using GiST - it incomplete, just box.\nlook at http://www.sai.msu.su/~megera/postgres/gist/\nWe ported old code from pg95 time. 
But it's not difficult to port\nremaining part. I'm not sure if our version is better, we didn't thoroughly\ntest, but seems that memory requirements is better and much faster\nindex construction.\n\n>\n> I certainly like streamlining our stuff.\n>\n\n\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 18 May 2001 21:48:02 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "> Second: if VACUUM can run in the background, then there's no reason not\n> to run it fairly frequently. In fact, it could become an automatically\n> scheduled activity like CHECKPOINT is now, or perhaps even a continuously\n> running daemon (which was the original conception of it at Berkeley, BTW).\n\nMaybe it's obvious, but I'd like to mention that you need some way of setting \npriority. If it's a daemon, or a process, you an nice it. If not, you need to \nimplement something by yourself.\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2501\nHowitzvej 75 �ben 14.00-18.00 Web: www.suse.dk\n2000 Frederiksberg L�rdag 11.00-17.00 Email: kar@webline.dk\n", "msg_date": "Sat, 19 May 2001 10:38:29 +0200", "msg_from": "Kaare Rasmussen <kar@webline.dk>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "Kaare Rasmussen <kar@webline.dk> writes:\n>> Second: if VACUUM can run in the background, then there's no reason not\n>> to run it fairly frequently. 
In fact, it could become an automatically\n>> scheduled activity like CHECKPOINT is now, or perhaps even a continuously\n>> running daemon (which was the original conception of it at Berkeley, BTW).\n\n> Maybe it's obvious, but I'd like to mention that you need some way of\n> setting priority.\n\nNo, you don't --- or at least it's far less easy than it looks. If any\none of the backends gets niced, then you have a priority inversion\nproblem. That backend may be holding a lock that other backends want.\nIf it's not running because it's been niced, then the other backends\nthat are not niced are still kept from running.\n\nEven if we wanted to implement priority inversion detection (painful\nfor locks, and completely impractical for spinlocks), there wouldn't\nbe anything we could do about it: Unix doesn't allow non-root processes\nto reduce their nice setting.\n\nObviously any automatically-scheduled VACUUM would need some kind of\nthrottling mechanism to keep it from running constantly. But that's not\na priority setting.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 19 May 2001 10:44:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Plans for solving the VACUUM problem " } ]
[ { "msg_contents": "Very neat. You mention that the truncation of both heap and index \nrelations is not necessarily mandatory. Under what conditions would \neither of them be truncated?\n\nMike Mascari\nmascarm@mascari.com\n\n-----Original Message-----\nFrom:\tTom Lane [SMTP:tgl@sss.pgh.pa.us]\n\n....\n\n3. Lazy VACUUM processes a table in five stages:\n A. Scan relation looking for dead tuples; accumulate a list of \ntheir\n TIDs, as well as info about existing free space. (This pass is\n completely read-only and so incurs no WAL traffic.)\n B. Remove index entries for the dead tuples. (See below for \ndetails.)\n C. Physically delete dead tuples and compact free space on their \npages.\n D. Truncate any completely-empty pages at relation's end. \n (Optional,\n see below.)\n E. Create/update FSM entry for the table.\n\n...\n\nFurther out (possibly not for 7.2), we should also look at making the\nindex AMs responsible for shrinking indexes during deletion, or \nperhaps\nvia a separate \"vacuum index\" API. This can be done without \nexclusive\nlocks on the index --- the original Lehman & Yao concurrent-btrees \npaper\ndidn't describe how, but more recent papers show how to do it. As \nwith\nthe main tables, I think it's sufficient to recycle freed space \nwithin\nthe index, and not necessarily try to give it back to the OS.\n\n...\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 17 May 2001 20:52:51 -0400", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": true, "msg_subject": "RE: Plans for solving the VACUUM problem" }, { "msg_contents": "Mike Mascari <mascarm@mascari.com> writes:\n> Very neat. You mention that the truncation of both heap and index \n> relations is not necessarily mandatory. 
Under what conditions would \n> either of them be truncated?\n\nIn the proposal as given, a heap file would be truncated if (a) it\nhas at least one totally empty block at the end, and (b) no other\ntransaction is touching the table at the instant that VACUUM is\nready to truncate it.\n\nThis would probably be fairly infrequently true, especially for\nheavily used tables, but if you believe in a \"steady state\" analysis\nthen that's just fine. No point in handing blocks back to the OS\nonly to have to allocate them again soon.\n\nWe might want to try to tilt the FSM-driven reuse of freed space\nto favor space near the start of the file and avoid end blocks.\nWithout that, you might never get totally free blocks at the end.\n\nThe same comments hold for index blocks, with the additional problem\nthat the index structure would make it almost impossible to drive usage\naway from the physical end-of-file. For btrees I think it'd be\nsufficient if we could recycle empty blocks for use elsewhere in the\nbtree structure. Actually shrinking the index probably won't happen\nshort of a REINDEX.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 01:02:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem " } ]
[ { "msg_contents": "Hello,\n\nI've noticed that all custom operators of inet type (such as <<, <<=, etc) \ncannot use an index, even though it is possible to define such an\noperation on an index, for ex:\nX << Y can be translated to \"X >= network(Y) && X <= broadcast(Y)\" (or so)\n\nAccording to docs, postgres has hard-coded the ops which match index types\n(such as btree for <,>,=, etc and rtree for @, etc). Is there a better way\nthan hardcoding support for inet types into index-selection code?\n\n-alex\n\n", "msg_date": "Thu, 17 May 2001 20:58:11 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "operators and indices?" }, { "msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> I've noticed that all custom operators of inet type (such as <<, <<=, etc) \n> cannot use an index, even though it is possible to define such an\n> operation on an index, for ex:\n> X << Y can be translated to \"X >= network(Y) && X <= broadcast(Y)\" (or so)\n\nYou could possibly kluge that in the same way that LIKE is handled,\nsee match_special_index_operator() and friends. But before we extend\nthat kluge to too many operators, we ought to think about inventing\na table-driven implementation instead of hardwired code. I have no\nidea what one might look like though :-( ... there's not a lot of\nregularity visible in the cases at hand.\n\n> According to docs, postgres has hard-coded the ops which match index types\n> (such as btree for <,>,=, etc and rtree for @, etc).\n\nIt's not hardwired, at least not from the point of view of the main\nbackend. A given index AM defines the semantics of the operators it can\ndeal with, and then the pg_amop etc. tables show which operators fill\nwhich roles for each supported datatype.\n\nIf you can come up with a useful generalization of that inet property\nfor other datatypes, we could think about extending the set of operator\nroles for btree. 
But as long as we're dealing with one-of-a-kind\nspecial cases for particular datatypes, I'm not sure there's any better\nanswer than hardwired code...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 02:26:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: operators and indices? " } ]
[ { "msg_contents": "\n> I am looking at adding an index tuple flag to indicate when a \n> heap tuple is expired so the index code can skip looking up the heap tuple.\n> \n> The problem is that I can't figure out how be sure that the heap tuple\n> doesn't need to be looked at by _any_ backend. Right now, we update the\n> transaction commit flags in the heap tuple to prevent a pg_log lookup,\n> but that is not enough because some transactions may still see that heap\n> tuple as visible.\n\nIf you are only marking those, that need not be visible anymore, can you not\nsimply delete that key (slot) from the index ? I know vacuum then shows a count \nmismatch, but that could probably be accounted for.\n\nAndreas\n", "msg_date": "Fri, 18 May 2001 08:26:43 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Adding index flag showing tuple status" }, { "msg_contents": ">> I am looking at adding an index tuple flag to indicate when a \n>> heap tuple is expired so the index code can skip looking up the heap tuple.\n>> \n>> The problem is that I can't figure out how be sure that the heap tuple\n>> doesn't need to be looked at by _any_ backend. Right now, we update the\n>> transaction commit flags in the heap tuple to prevent a pg_log lookup,\n>> but that is not enough because some transactions may still see that heap\n>> tuple as visible.\n\n> If you are only marking those, that need not be visible anymore, can you not\n> simply delete that key (slot) from the index ? I know vacuum then shows a count \n> mismatch, but that could probably be accounted for.\n\nI am not sure we need this, if we implement a lightweight VACUUM per my\nproposal in a nearby thread. 
The conditions that you could mark or\ndelete an index tuple under are exactly the same that VACUUM would be\nlooking for (viz, tuple dead as far as all remaining transactions are\nconcerned).\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 03:17:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Adding index flag showing tuple status " }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> \n> > I am looking at adding an index tuple flag to indicate when a \n> > heap tuple is expired so the index code can skip looking up the heap tuple.\n> > \n> > The problem is that I can't figure out how be sure that the heap tuple\n> > doesn't need to be looked at by _any_ backend. Right now, we update the\n> > transaction commit flags in the heap tuple to prevent a pg_log lookup,\n> > but that is not enough because some transactions may still see that heap\n> > tuple as visible.\n> \n> If you are only marking those, that need not be visible anymore, can you not\n> simply delete that key (slot) from the index ? I know vacuum then shows a count \n> mismatch, but that could probably be accounted for.\n\nI wasn't going to delete it, just add a flag to index scans know they\ndon't need to look at the heap table.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 18 May 2001 10:59:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Adding index flag showing tuple status" } ]
[ { "msg_contents": "\nWhen organizing available free storage for re-use, we will probably have\na choice whether to favor using space in (mostly-) empty blocks, or in \nmostly-full blocks. Empty and mostly-empty blocks are quicker -- you \ncan put lots of rows in them before they fill up and you have to choose \nanother. Preferring mostly-full blocks improves active-storage and \ncache density because a table tends to occupy fewer total blocks.\n\nDoes anybody know of papers that analyze the tradeoffs involved?\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Fri, 18 May 2001 01:19:22 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": true, "msg_subject": "storage density" } ]
[ { "msg_contents": "\n> > ???? Isn't current implementation \"bulk delete\" ?\n> \n> No, the index AM is called separately for each index tuple to be\n> deleted; more to the point, the search for deletable index tuples\n> should be moved inside the index AM for performance reasons.\n\nWouldn't a sequential scan on the heap table be the fastest way to find\nkeys, that need to be deleted ?\n\nforeach tuple in heap that can be deleted do:\n\tforeach index\n\t\tcall the current \"index delete\" with constructed key and xtid\n\nThe advantage would be, that the current API would be sufficient and\nit should be faster. The problem would be to create a correct key from the heap\ntuple, that you can pass to the index delete function.\n\nAndreas\n", "msg_date": "Fri, 18 May 2001 12:13:55 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Plans for solving the VACUUM problem " }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> foreach tuple in heap that can be deleted do:\n> \tforeach index\n> \t\tcall the current \"index delete\" with constructed key and xtid\n\nSee discussion with Hiroshi. This is much more complex than TID-based\ndelete and would be faster only for small numbers of tuples. (Very\nsmall numbers of tuples, is my gut feeling, though there's no way to\nprove that without implementations of both in hand.)\n\nA particular point worth making is that in the common case where you've\nupdated the same row N times (without changing its index key), the above\napproach has O(N^2) runtime. The indexscan will find all N index tuples\nmatching the key ... only one of which is the one you are looking for on\nthis iteration of the outer loop.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 09:47:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Plans for solving the VACUUM problem " } ]
[ { "msg_contents": "\n> A particular point worth making is that in the common case where you've\n> updated the same row N times (without changing its index key), the above\n> approach has O(N^2) runtime.  The indexscan will find all N index tuples\n> matching the key ... only one of which is the one you are looking for on\n> this iteration of the outer loop.\n\nIt was my understanding, that the heap xtid is part of the key now, and thus \nwith a somewhat modified access, it would find the one exact row directly.\nAnd in above case, the keys (since identical except xtid) will stick close \ntogether, thus caching will be good.\n\nAndreas\n", "msg_date": "Fri, 18 May 2001 15:58:52 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: Plans for solving the VACUUM problem " }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> It was my understanding, that the heap xtid is part of the key now,\n\nIt is not.\n\nThere was some discussion of doing that, but it fell down on the little\nproblem that in normal index-search cases you *don't* know the heap tid\nyou are looking for.\n\n> And in above case, the keys (since identical except xtid) will stick close \n> together, thus caching will be good.\n\nEven without key-collision problems, deleting N tuples out of a total of\nM index entries will require search costs like this:\n\nbulk delete in linear scan way:\n\n\tO(M)\t\tI/O costs (read all the pages)\n\tO(M log N)\tCPU costs (lookup each TID in sorted list)\n\nsuccessive index probe way:\n\n\tO(N log M)\tI/O costs for probing index\n\tO(N log M)\tCPU costs for probing index (key comparisons)\n\nFor N << M, the latter looks like a win, but you have to keep in mind\nthat the constant factors hidden by the O() notation are a lot different\nin the two cases.
In particular, if there are T indexentries per page,\nthe former I/O cost is really M/T * sequential read cost whereas the\nlatter is N log M * random read cost, yielding a difference in constant\nfactors of probably a thousand or two. You get some benefit in the\nlatter case from caching the upper btree levels, but that's by\ndefinition not a large part of the index bulk. So where's the breakeven\npoint in reality? I don't know but I suspect that it's at pretty small\nN. Certainly far less than one percent of the table, whereas I would\nthink that people would try to schedule VACUUMs at an interval where\nthey'd be reclaiming several percent of the table.\n\nSo, as I said to Hiroshi, this alternative looks to me like a possible\nfuture refinement, not something we need to do in the first version.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 10:35:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: AW: Plans for solving the VACUUM problem " } ]
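Tom's breakeven question can be explored numerically. The constants below — tuples per index page T and the random-vs-sequential read penalty — are assumptions chosen only to illustrate the shape of the tradeoff, not measured PostgreSQL figures:

```python
import math

# Plug-in cost model for the two bulk-delete strategies discussed above.
# T, c_seq and c_rand are illustrative assumptions, not measured values.

def scan_io_cost(M, T=200, c_seq=1.0):
    return (M / T) * c_seq            # read every index page sequentially

def probe_io_cost(N, M, c_rand=40.0):
    return N * math.log2(M) * c_rand  # one root-to-leaf descent per victim

def breakeven_n(M, T=200, c_seq=1.0, c_rand=40.0):
    # N at which probing stops being cheaper than the full scan
    return scan_io_cost(M, T, c_seq) / (math.log2(M) * c_rand)

if __name__ == "__main__":
    M = 1_000_000
    n_star = breakeven_n(M)
    print(n_star, n_star / M)  # breakeven is a tiny fraction of the index
```

Under these assumed constants the breakeven N for a million-entry index comes out far below one percent of the table, consistent with the conclusion in the message above; changing T or the random-read penalty moves the number, but not the qualitative result.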
[ { "msg_contents": "set digest\n\n", "msg_date": "Fri, 18 May 2001 08:25:50 -0700", "msg_from": "Fernando Cabrera <fcabrera@dmvlink.com>", "msg_from_op": true, "msg_subject": "None" } ]
[ { "msg_contents": "\n> There was some discussion of doing that, but it fell down on the little\n> problem that in normal index-search cases you *don't* know the heap tid\n> you are looking for.\n\nI can not follow here. It does not matter if you don't know a trailing \npart of the key when doing a btree search, it only helps to directly find the \nleaf page.\n\n> \n> > And in above case, the keys (since identical except xtid) will stick close \n> > together, thus caching will be good.\n> \n> Even without key-collision problems, deleting N tuples out of a total of\n> M index entries will require search costs like this:\n> \n> bulk delete in linear scan way:\n> \n> \tO(M)\t\tI/O costs (read all the pages)\n> \tO(M log N)\tCPU costs (lookup each TID in sorted list)\n> \n> successive index probe way:\n> \n> \tO(N log M)\tI/O costs for probing index\n> \tO(N log M)\tCPU costs for probing index (key comparisons)\n\nboth\tO(N log (levels in index))\n\n> So, as I said to Hiroshi, this alternative looks to me like a possible\n> future refinement, not something we need to do in the first version.\n\nYes, I thought it might be easier to implement than the index scan :-)\n\nAndreas\n", "msg_date": "Fri, 18 May 2001 17:28:28 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: AW: Plans for solving the VACUUM problem " } ]
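The direct probe Andreas describes — treating the heap xtid as a trailing key column so one descent lands on the exact entry — can be mimicked with a sorted list and binary search (a toy stand-in for a btree; the names are hypothetical):

```python
import bisect

# Toy "index" of (key, tid) pairs kept sorted, as if the heap TID were a
# trailing key column.  Locating one exact entry is then a single O(log M)
# binary search, even when many entries share the same key, and entries
# with equal keys sit adjacent (the caching point made above).

def find_entry(entries, key, tid):
    i = bisect.bisect_left(entries, (key, tid))
    return i if i < len(entries) and entries[i] == (key, tid) else -1

if __name__ == "__main__":
    # one key updated many times: identical keys, differing TIDs
    entries = sorted([("k", t) for t in range(100)] + [("a", 1), ("z", 9)])
    print(find_entry(entries, "k", 42), find_entry(entries, "k", 500))
```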
[ { "msg_contents": "\n> > > I am looking at adding an index tuple flag to indicate when a \n> > > heap tuple is expired so the index code can skip looking up the heap tuple.\n> > > \n> > > The problem is that I can't figure out how be sure that the heap tuple\n> > > doesn't need to be looked at by _any_ backend. Right now, we update the\n> > > transaction commit flags in the heap tuple to prevent a pg_log lookup,\n> > > but that is not enough because some transactions may still see that heap\n> > > tuple as visible.\n> > \n> > If you are only marking those, that need not be visible anymore, can you not\n> > simply delete that key (slot) from the index ? I know vacuum then shows a count \n> > mismatch, but that could probably be accounted for.\n> \n> I wasn't going to delete it, just add a flag to index scans know they\n> don't need to look at the heap table.\n\nIf it is only a flag, you would need to go to the same trouble that vacuum already\ngoes to (you cannot mark it if someone else is still interested in this snapshot).\nThus I do not see any benefit in adding a flag, versus deleting not needed keys.\nTo avoid the snapshot trouble you would need a xid (xmax or something), and that \nis 4 bytes, and not a simple flag.\n\nAndreas\n", "msg_date": "Fri, 18 May 2001 17:55:34 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: Adding index flag showing tuple status" }, { "msg_contents": "> > I wasn't going to delete it, just add a flag to index scans know they\n> > don't need to look at the heap table.\n> \n> If it is only a flag, you would need to go to the same trouble\n> that vacuum already goes to (you cannot mark it if someone else\n> is still interested in this snapshot). Thus I do not see any\n> benefit in adding a flag, versus deleting not needed keys.
To\n> avoid the snapshot trouble you would need a xid (xmax or\n> something), and that is 4 bytes, and not a simple flag.\n\nYes, I would need:\n\n GetXmaxRecent(&XmaxRecent);\n\nwhich find the minimum visible transaction for all active backends. \nThis is currently used in vacuum and btbuild. I would use that\nvisibility to set the bit. Because it is only a bit, I would need read\nlock but not write lock. Multiple people can set the same bit.\n\nBasically, I need to:\n\n\tadd a global XmaxRecent and set it once per transaction if needed\n\tadd a parameter to heap_fetch to pass back an index tid\n\tif the heap is not visible to any backend, mark the index flag\n\tupdate the index scan code to skip returning these flaged entries\n\nOf course, Tom thinks is may not be needed with his new vacuum so I may\nnever implement it.\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 18 May 2001 12:08:30 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: AW: Adding index flag showing tuple status" } ]
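The test Bruce outlines reduces to comparing a tuple's deleting transaction against the oldest xmin any active backend still holds. A much-simplified sketch (real PostgreSQL visibility checking also involves commit status, hint bits, and more):

```python
# Simplified sketch of the GetXmaxRecent()-style check described above.
# Deliberately ignores commit status and wraparound; illustration only.

def get_xmax_recent(active_backend_xmins):
    """Oldest transaction id any active backend might still consider visible."""
    return min(active_backend_xmins)

def may_set_dead_flag(tuple_xmax, active_backend_xmins):
    """The deleted tuple is invisible to everyone only if its deleting
    transaction is older than the oldest snapshot still in use; only then
    is it safe for any backend to set the index-entry bit."""
    return tuple_xmax < get_xmax_recent(active_backend_xmins)

if __name__ == "__main__":
    print(may_set_dead_flag(90, [100, 120, 115]))   # safe to flag
    print(may_set_dead_flag(110, [100, 120, 115]))  # some backend may still see it
```

Because the comparison is one-directional and multiple backends setting the same bit is harmless, this is consistent with the observation above that only a read lock is needed.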
[ { "msg_contents": "> FYI, the code was warning about psql backslash command usage:\n> \n> /*\n> * If the command was not recognized, try to parse it as a one-letter\n> * command with immediately following argument (a still-supported,\n> * but no longer encouraged, syntax).\n> */\n\nSheesh. Sorry, I should actually look at the patch (or even just read\nthe email carefully). This one is none of my business, and I have no\nopinion about how it should be :(\n\n - Thomas\n", "msg_date": "Fri, 18 May 2001 16:49:17 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] syntax warning on" }, { "msg_contents": "> > FYI, the code was warning about psql backslash command usage:\n> > \n> > /*\n> > * If the command was not recognized, try to parse it as a one-letter\n> > * command with immediately following argument (a still-supported,\n> > * but no longer encouraged, syntax).\n> > */\n> \n> Sheesh. Sorry, I should actually look at the patch (or even just read\n> the email carefully). This one is none of my business, and I have no\n> opinion about how it should be :(\n\nYes, I was wondering about that. Seem to warn about \\connect, I think. \nAt least I think so, or \\cdbname.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 18 May 2001 12:54:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] syntax warning on" } ]
[ { "msg_contents": "I've been going talking with the SGI technical support about some of the\nerrors I got when compiling Postgres 7.1.1 on SGI IRIX 6.5.12 with the\nMIPSPro 7.3 C compiler. I've already mentioned that somehow the compiler\ncan't see the correct definition for strdup (I believe she thought that\nit was due to the POSIX declaration). There's also a problem with it not\nseeing the structure timeval defined. timeval is in\n/usr/include/sys/time.h and is declared in the following way:\n\n#if _XOPEN4UX || defined(_BSD_TYPES) || defined(_BSD_COMPAT)\n/*\n * Structure returned by gettimeofday(2) system call,\n * and used in other calls.\n * Note this is also defined in sys/resource.h\n */\n#ifndef _TIMEVAL_T\n#define _TIMEVAL_T\nstruct timeval {\n#if _MIPS_SZLONG == 64\n __int32_t :32;\n#endif\n time_t tv_sec; /* seconds */\n long tv_usec; /* and microseconds */\n};\n\nSo SGI is assuming that you're declaring BSD types or compatibility.\nHowever, the tech support person said that with the compiler's POSIX\ndeclaration, this is conflicting. Basically, she says that POSIX implies\ngeneralized portability across many platforms, but BSD implies a\nspecific type of platform. So that's where she thinks SGI is having the\ntrouble-- two conflicting type declarations.\n\nIs this correct?\n-Tony\n\n\n\n\n", "msg_date": "Fri, 18 May 2001 10:41:56 -0700", "msg_from": "\"G. Anthony Reina\" <reina@nsi.edu>", "msg_from_op": true, "msg_subject": "IRIX 6.5.12: POSIX & BSD" } ]
[ { "msg_contents": "\tAnyone know what the default password is for the postgres user? When\ntrying to use createuser, the password for the postgres user is asked. What\nis that default password? I searched all through the docs but couldn't find\nit.\n \nThanks.\n\n\n", "msg_date": "Fri, 18 May 2001 12:03:27 -0700", "msg_from": "Mike Cianflone <mcianflone@littlefeet-inc.com>", "msg_from_op": true, "msg_subject": "What is the default password for the postgres user in the default\n\tdatabase" }, { "msg_contents": "Mike Cianflone writes:\n\n> \tAnyone know what the default password is for the postgres user?\n\nNULL\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 18 May 2001 21:23:32 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: What is the default password for the postgres user in\n\tthe default database" } ]
[ { "msg_contents": "\nIs anyone using the define OLD_FILE_NAMING? Seems we can remove the\ntest now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 18 May 2001 16:17:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "#ifdef OLD_FILE_NAMING" }, { "msg_contents": "I have applied the following patch to remove code used by\nOLD_FILE_NAMING. No one responded to my email question so I assume no\none wants the old code.\n\n> \n> Is anyone using the define OLD_FILE_NAMING? Seems we can remove the\n> test now.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/catalog/catalog.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/catalog/catalog.c,v\nretrieving revision 1.41\ndiff -c -r1.41 catalog.c\n*** src/backend/catalog/catalog.c\t2001/05/30 14:15:26\t1.41\n--- src/backend/catalog/catalog.c\t2001/05/30 20:46:22\n***************\n*** 22,113 ****\n #include \"miscadmin.h\"\n #include \"utils/lsyscache.h\"\n \n- #ifdef OLD_FILE_NAMING\n /*\n- * relpath\t\t\t\t- construct path to a relation's file\n- *\n- * Note that this only works with relations that are visible to the current\n- * backend, ie, either in the current database or shared system relations.\n- *\n- * Result is a palloc'd string.\n- */\n- char *\n- relpath(const char *relname)\n- {\n- \tchar\t *path;\n- \n- \tif (IsSharedSystemRelationName(relname))\n- \t{\n- \t\t/* Shared system relations live in {datadir}/global */\n- \t\tsize_t\t\tbufsize = strlen(DataDir) + 8 + sizeof(NameData) + 1;\n- \n- \t\tpath = (char *) palloc(bufsize);\n- \t\tsnprintf(path, bufsize, \"%s/global/%s\", DataDir, relname);\n- \t\treturn path;\n- \t}\n- \n- \t/*\n- \t * If it is in the current database, assume it is in current working\n- \t * directory. NB: this does not work during bootstrap!\n- \t */\n- \treturn pstrdup(relname);\n- }\n- \n- /*\n- * relpath_blind\t\t\t- construct path to a relation's file\n- *\n- * Construct the path using only the info available to smgrblindwrt,\n- * namely the names and OIDs of the database and relation.\t(Shared system\n- * relations are identified with dbid = 0.) 
Note that we may have to\n- * access a relation belonging to a different database!\n- *\n- * Result is a palloc'd string.\n- */\n- \n- char *\n- relpath_blind(const char *dbname, const char *relname,\n- \t\t\t Oid dbid, Oid relid)\n- {\n- \tchar\t *path;\n- \n- \tif (dbid == (Oid) 0)\n- \t{\n- \t\t/* Shared system relations live in {datadir}/global */\n- \t\tpath = (char *) palloc(strlen(DataDir) + 8 + sizeof(NameData) + 1);\n- \t\tsprintf(path, \"%s/global/%s\", DataDir, relname);\n- \t}\n- \telse if (dbid == MyDatabaseId)\n- \t{\n- \t\t/* XXX why is this inconsistent with relpath() ? */\n- \t\tpath = (char *) palloc(strlen(DatabasePath) + sizeof(NameData) + 2);\n- \t\tsprintf(path, \"%s/%s\", DatabasePath, relname);\n- \t}\n- \telse\n- \t{\n- \t\t/* this is work around only !!! */\n- \t\tchar\t\tdbpathtmp[MAXPGPATH];\n- \t\tOid\t\t\tid;\n- \t\tchar\t *dbpath;\n- \n- \t\tGetRawDatabaseInfo(dbname, &id, dbpathtmp);\n- \n- \t\tif (id != dbid)\n- \t\t\telog(FATAL, \"relpath_blind: oid of db %s is not %u\",\n- \t\t\t\t dbname, dbid);\n- \t\tdbpath = ExpandDatabasePath(dbpathtmp);\n- \t\tif (dbpath == NULL)\n- \t\t\telog(FATAL, \"relpath_blind: can't expand path for db %s\",\n- \t\t\t\t dbname);\n- \t\tpath = (char *) palloc(strlen(dbpath) + sizeof(NameData) + 2);\n- \t\tsprintf(path, \"%s/%s\", dbpath, relname);\n- \t\tpfree(dbpath);\n- \t}\n- \treturn path;\n- }\n- \n- #else\t\t\t\t\t\t\t/* ! 
OLD_FILE_NAMING */\n- \n- /*\n * relpath\t\t\t- construct path to a relation's file\n *\n * Result is a palloc'd string.\n--- 22,28 ----\n***************\n*** 157,163 ****\n \treturn path;\n }\n \n- #endif\t /* OLD_FILE_NAMING */\n \n /*\n * IsSystemRelationName\n--- 72,77 ----\nIndex: src/backend/catalog/index.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/catalog/index.c,v\nretrieving revision 1.151\ndiff -c -r1.151 index.c\n*** src/backend/catalog/index.c\t2001/05/18 22:35:50\t1.151\n--- src/backend/catalog/index.c\t2001/05/30 20:46:27\n***************\n*** 1350,1360 ****\n \t */\n \tpg_class = heap_openr(RelationRelationName, RowExclusiveLock);\n \n- #ifdef\tOLD_FILE_NAMING\n- \tif (!IsIgnoringSystemIndexes())\n- #else\n \tif (!IsIgnoringSystemIndexes() && (!IsReindexProcessing() || pg_class->rd_rel->relhasindex))\n- #endif\t /* OLD_FILE_NAMING */\n \t{\n \t\ttuple = SearchSysCacheCopy(RELOID,\n \t\t\t\t\t\t\t\t ObjectIdGetDatum(relid),\n--- 1350,1356 ----\n***************\n*** 1424,1430 ****\n \theap_close(pg_class, RowExclusiveLock);\n }\n \n- #ifndef OLD_FILE_NAMING\n void\n setNewRelfilenode(Relation relation)\n {\n--- 1420,1425 ----\n***************\n*** 1494,1500 ****\n \tCommandCounterIncrement();\n }\n \n- #endif\t /* OLD_FILE_NAMING */\n \n /* ----------------\n *\t\tUpdateStats\n--- 1489,1494 ----\n***************\n*** 1553,1563 ****\n \t */\n \tpg_class = heap_openr(RelationRelationName, RowExclusiveLock);\n \n- #ifdef\tOLD_FILE_NAMING\n- \tin_place_upd = (IsReindexProcessing() || IsBootstrapProcessingMode());\n- #else\n \tin_place_upd = (IsIgnoringSystemIndexes() || IsReindexProcessing());\n- #endif\t /* OLD_FILE_NAMING */\n \n \tif (!in_place_upd)\n \t{\n--- 1547,1553 ----\n***************\n*** 2000,2013 ****\n \tif (iRel == NULL)\n \t\telog(ERROR, \"reindex_index: can't open index relation\");\n \n- #ifndef OLD_FILE_NAMING\n \tif (!inplace)\n \t{\n \t\tinplace = 
IsSharedSystemRelationName(NameStr(iRel->rd_rel->relname));\n \t\tif (!inplace)\n \t\t\tsetNewRelfilenode(iRel);\n \t}\n- #endif\t /* OLD_FILE_NAMING */\n \t/* Obtain exclusive lock on it, just to be sure */\n \tLockRelation(iRel, AccessExclusiveLock);\n \n--- 1990,2001 ----\n***************\n*** 2084,2092 ****\n \t\t\t\toverwrite,\n \t\t\t\tupd_pg_class_inplace;\n \n- #ifdef OLD_FILE_NAMING\n- \toverwrite = upd_pg_class_inplace = deactivate_needed = true;\n- #else\n \tRelation\trel;\n \n \toverwrite = upd_pg_class_inplace = deactivate_needed = false;\n--- 2072,2077 ----\n***************\n*** 2138,2144 ****\n \t\t\telog(ERROR, \"the target relation %u is shared\", relid);\n \t}\n \tRelationClose(rel);\n! #endif\t /* OLD_FILE_NAMING */\n \told = SetReindexProcessing(true);\n \tif (deactivate_needed)\n \t{\n--- 2123,2129 ----\n \t\t\telog(ERROR, \"the target relation %u is shared\", relid);\n \t}\n \tRelationClose(rel);\n! \n \told = SetReindexProcessing(true);\n \tif (deactivate_needed)\n \t{\nIndex: src/backend/commands/indexcmds.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/commands/indexcmds.c,v\nretrieving revision 1.47\ndiff -c -r1.47 indexcmds.c\n*** src/backend/commands/indexcmds.c\t2001/03/22 06:16:11\t1.47\n--- src/backend/commands/indexcmds.c\t2001/05/30 20:46:27\n***************\n*** 654,666 ****\n \t\telog(ERROR, \"relation \\\"%s\\\" is of type \\\"%c\\\"\",\n \t\t\t name, ((Form_pg_class) GETSTRUCT(tuple))->relkind);\n \n- #ifdef\tOLD_FILE_NAMING\n- \tif (!reindex_index(tuple->t_data->t_oid, force, false))\n- #else\n \tif (IsIgnoringSystemIndexes())\n \t\toverwrite = true;\n \tif (!reindex_index(tuple->t_data->t_oid, force, overwrite))\n- #endif\t /* OLD_FILE_NAMING */\n \t\telog(NOTICE, \"index \\\"%s\\\" wasn't reindexed\", name);\n \n \tReleaseSysCache(tuple);\n--- 654,662 ----\nIndex: 
src/backend/tcop/utility.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/tcop/utility.c,v\nretrieving revision 1.111\ndiff -c -r1.111 utility.c\n*** src/backend/tcop/utility.c\t2001/05/27 09:59:29\t1.111\n--- src/backend/tcop/utility.c\t2001/05/30 20:46:29\n***************\n*** 891,908 ****\n \t\t\t\t\t\tbreak;\n \t\t\t\t\tcase TABLE:\n \t\t\t\t\t\trelname = (char *) stmt->name;\n- \t\t\t\t\t\tif (IsSystemRelationName(relname))\n- \t\t\t\t\t\t{\n- #ifdef\tOLD_FILE_NAMING\n- \t\t\t\t\t\t\tif (!allowSystemTableMods && IsSystemRelationName(relname))\n- \t\t\t\t\t\t\t\telog(ERROR, \"\\\"%s\\\" is a system table. call REINDEX under standalone postgres with -O -P options\",\n- \t\t\t\t\t\t\t\t\t relname);\n- \t\t\t\t\t\t\tif (!IsIgnoringSystemIndexes())\n- \t\t\t\t\t\t\t\telog(ERROR, \"\\\"%s\\\" is a system table. call REINDEX under standalone postgres with -P -O options\",\n- \n- \t\t\t\t\t\t\t\t\t relname);\n- #endif\t /* OLD_FILE_NAMING */\n- \t\t\t\t\t\t}\n \t\t\t\t\t\tif (!pg_ownercheck(GetUserId(), relname, RELNAME))\n \t\t\t\t\t\t\telog(ERROR, \"%s: %s\", relname, aclcheck_error_strings[ACLCHECK_NOT_OWNER]);\n \t\t\t\t\t\tReindexTable(relname, stmt->force);\n--- 891,896 ----\nIndex: src/backend/utils/init/postinit.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/init/postinit.c,v\nretrieving revision 1.85\ndiff -c -r1.85 postinit.c\n*** src/backend/utils/init/postinit.c\t2001/05/08 21:06:43\t1.85\n--- src/backend/utils/init/postinit.c\t2001/05/30 20:46:30\n***************\n*** 21,30 ****\n #include <math.h>\n #include <unistd.h>\n \n- #ifndef OLD_FILE_NAMING\n #include \"catalog/catalog.h\"\n- #endif\n- \n #include \"access/heapam.h\"\n #include \"catalog/catname.h\"\n #include \"catalog/pg_database.h\"\n--- 21,27 ----\nIndex: 
src/backend/utils/misc/database.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/misc/database.c,v\nretrieving revision 1.46\ndiff -c -r1.46 database.c\n*** src/backend/utils/misc/database.c\t2001/05/30 14:15:27\t1.46\n--- src/backend/utils/misc/database.c\t2001/05/30 20:46:30\n***************\n*** 140,158 ****\n \tPage\t\tpg;\n \tchar\t *dbfname;\n \tForm_pg_database tup_db;\n \n! #ifdef OLD_FILE_NAMING\n! \tdbfname = (char *) palloc(strlen(DataDir) + 8 + strlen(DatabaseRelationName) + 2);\n! \tsprintf(dbfname, \"%s/global/%s\", DataDir, DatabaseRelationName);\n! #else\n! \t{\n! \t\tRelFileNode rnode;\n! \n! \t\trnode.tblNode = 0;\n! \t\trnode.relNode = RelOid_pg_database;\n! \t\tdbfname = relpath(rnode);\n! \t}\n! #endif\n \n \tif ((dbfd = open(dbfname, O_RDONLY | PG_BINARY, 0)) < 0)\n \t\telog(FATAL, \"cannot open %s: %m\", dbfname);\n--- 140,150 ----\n \tPage\t\tpg;\n \tchar\t *dbfname;\n \tForm_pg_database tup_db;\n+ \tRelFileNode rnode;\n \n! \trnode.tblNode = 0;\n! \trnode.relNode = RelOid_pg_database;\n! 
\tdbfname = relpath(rnode);\n \n \tif ((dbfd = open(dbfname, O_RDONLY | PG_BINARY, 0)) < 0)\n \t\telog(FATAL, \"cannot open %s: %m\", dbfname);\nIndex: src/include/catalog/catalog.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/catalog/catalog.h,v\nretrieving revision 1.16\ndiff -c -r1.16 catalog.h\n*** src/include/catalog/catalog.h\t2001/03/22 04:00:34\t1.16\n--- src/include/catalog/catalog.h\t2001/05/30 20:46:30\n***************\n*** 16,34 ****\n \n #include \"access/tupdesc.h\"\n \n- #ifdef OLD_FILE_NAMING\n- \n- extern char *relpath(const char *relname);\n- extern char *relpath_blind(const char *dbname, const char *relname,\n- \t\t\t Oid dbid, Oid relid);\n- \n- #else\n #include \"storage/relfilenode.h\"\n \n extern char *relpath(RelFileNode rnode);\n extern char *GetDatabasePath(Oid tblNode);\n- \n- #endif\n \n extern bool IsSystemRelationName(const char *relname);\n extern bool IsSharedSystemRelationName(const char *relname);\n--- 16,25 ----\nIndex: src/include/catalog/index.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/catalog/index.h,v\nretrieving revision 1.34\ndiff -c -r1.34 index.h\n*** src/include/catalog/index.h\t2001/05/07 00:43:24\t1.34\n--- src/include/catalog/index.h\t2001/05/30 20:46:30\n***************\n*** 50,59 ****\n extern bool IndexesAreActive(Oid relid, bool comfirmCommitted);\n extern void setRelhasindex(Oid relid, bool hasindex);\n \n- #ifndef OLD_FILE_NAMING\n extern void setNewRelfilenode(Relation relation);\n \n- #endif\t /* OLD_FILE_NAMING */\n extern bool SetReindexProcessing(bool processing);\n extern bool IsReindexProcessing(void);\n \n--- 50,57 ----\nIndex: src/interfaces/odbc/connection.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/odbc/connection.h,v\nretrieving revision 1.23\ndiff 
-c -r1.23 connection.h\n*** src/interfaces/odbc/connection.h\t2001/05/25 08:12:31\t1.23\n--- src/interfaces/odbc/connection.h\t2001/05/30 20:46:32\n***************\n*** 9,14 ****\n--- 9,17 ----\n #ifndef __CONNECTION_H__\n #define __CONNECTION_H__\n \n+ #include <stdlib.h>\n+ #include <string.h>\n+ \n #ifdef HAVE_CONFIG_H\n #include \"config.h\"\n #endif", "msg_date": "Wed, 30 May 2001 16:52:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] #ifdef OLD_FILE_NAMING" } ]
[ { "msg_contents": "Will anyone object if I change the calling convention for operator\nselectivity estimation functions (oprrest, oprjoin entries in\npg_operator)?\n\nHistorically the call conventions have been\n\ndouble oprrest(Oid opid, Oid relid, AttrNumber attno,\n\t\tDatum value, int32 flag);\n\ndouble oprjoin(Oid opid, Oid relid1, AttrNumber attno1,\n\t\tOid relid2, AttrNumber attno2);\n\nThese are not only extremely restrictive (no chance to do anything\nintelligent with clauses more complex than var op const or var op var),\nbut they force the estimator to re-look-up information that's already\nreadily available inside the planner, such as the type of the variables\nin question.\n\nI'd like to change these conventions to be like\n\ndouble oprjoin(Query *root, Oid operator, List *args);\n\nwhich would be more flexible and more efficient than the current\napproach. I can see a couple of possible objections though:\n\n1. This would break any user-written selectivity estimators that\nmight be out there. I kinda doubt that there are any, since we've\nnever really documented what these functions do or how to write one,\nbut if you've got one then speak up!\n\n2. To do anything useful, functions called like this would have to be\ncoded in C (since higher-level languages would have no idea what the\nstruct datatypes are). In theory at least, one might have coded\nestimator functions in PL languages or even SQL under the existing call\nconventions. Again, though, I think this is a theoretical possibility\nrather than something that's ever been done or is likely to be done.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 18:46:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "May I change the API for operator selectivity estimators?" } ]
[ { "msg_contents": "I have an external search engine system which plugs in to postgres. I use a few\nC functions to interface the search daemon with the Postgres back-end.\n\nThe best that I have been able to do is do a \"select\" for each result. I have a\nlive demo/test site:\n\nhttp://www.mohawksoft.com/search.php3, and the PHP source code is at\nhttp://www.mohawksoft.com/ftss_example.txt.\n\nI would love to get the results with one select statement, but have, to date,\nbeen unable to figure out how. Anyone with any ideas?\n", "msg_date": "Fri, 18 May 2001 19:23:28 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "External search engine, advice" }, { "msg_contents": "> I have an external search engine system which plugs in to postgres. I use a few\n> C functions to interface the search daemon with the Postgres back-end.\n> \n> The best that I have been able to do is do a \"select\" for each result. I have a\n> live demo/test site:\n> \n> http://www.mohawksoft.com/search.php3, and the PHP source code is at\n> http://www.mohawksoft.com/ftss_example.txt.\n> \n> I would love to get the results with one select statement, but have, to date,\n> been unable to figure out how. Anyone with any ideas?\n\nIt's possible to return a set of results from C functions using the\nnew function manager in 7.1 or later. Take a look at following email\nin the archive.\n\nSubject: Re: [INTERFACES] Re: can external C-function get multiple rows? \nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nTo: alexey@price.ru\ncc: pgsql-interfaces@postgresql.org\nDate: Mon, 30 Apr 2001 01:52:57 -0400\n\nActually I have created such a function calling an external full text\nsearch engine called \"namazu\". Here is an example to search a keyword\n\"int8\" from index files pre-generated by namazu.\n\ntest=# select pgnmzsrch('int8','/home/t-ishii/lib/namazu/hackers');\n ?column?
\n----------------------------------------\n /home/t-ishii/lib/namazu/hackers/21000\n /home/t-ishii/lib/namazu/hackers/21001\n /home/t-ishii/lib/namazu/hackers/21003\n /home/t-ishii/lib/namazu/hackers/21004\n /home/t-ishii/lib/namazu/hackers/21002\n /home/t-ishii/lib/namazu/hackers/21005\n /home/t-ishii/lib/namazu/hackers/21006\n(7 rows)\n--\nTatsuo Ishii\n", "msg_date": "Sat, 19 May 2001 12:05:28 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: External search engine, advice" }, { "msg_contents": "Tatsuo Ishii wrote:\n> \n> > I have an external search engine system which plugs in to postgres. I use a few\n> > C functions to interface the search daemon with the Postgres back-end.\n> >\n> > The best that I have been able to do is do a \"select\" for each result. I have a\n> > live demo/test site:\n> >\n> > http://www.mohawksoft.com/search.php3, and the PHP source code is at\n> > http://www.mohawksoft.com/ftss_example.txt.\n> >\n> > I would love to get the results with one select statement, but have, to date,\n> > been unable to figure out how. Anyone with any ideas?\n> \n> It's possible to return a set of results from C functions using the\n> new function manager in 7.1 or later. Take a look at following email\n> in the archive.\n\nWell, I kind of have that already.
I can return a set, but I can't use it in a\njoin.\n\nfreedb=# select ftss_search('all { pink floyd money }') ;\n ftss_search\n-------------\n 120\n(1 row)\n \nfreedb=# select * from cdsongs where songid = ftss_results() ;\nERROR: Set-valued function called in context that cannot accept a set\n\nHow do you join against a set?\n", "msg_date": "Sat, 19 May 2001 10:41:54 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: External search engine, advice" }, { "msg_contents": "> Well, I kind of have that already.
(Oracle syndrome) Also, an 'IN' clause does not\npreserve the order of the results, where as a join should.\n", "msg_date": "Sat, 19 May 2001 12:15:41 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: External search engine, advice" }, { "msg_contents": "Tatsuo Ishii wrote:\n> \n> > Well, I kind of have that already. I can return a set, but I can't use it in a\n> > join.\n> >\n> > freedb=# select ftss_search('all { pink floyd money }') ;\n> > ftss_search\n> > -------------\n> > 120\n> > (1 row)\n> >\n> > freedb=# select * from cdsongs where songid = ftss_results() ;\n> > ERROR: Set-valued function called in context that cannot accept a set\n> >\n> > How do you join against a set?\n> \n> Well, assuming that ftss_results() returns a set of songid, you could\n> do something like:\n> \n> select * from cdsongs where songid in (select ftss_results());\n\nThat, however, does not use the songid index, thus it renders the text search\nengine useless.\n\n> \n> BTW, what's the difference between ftss_search and ftss_results?\n\nftss_search executes the search to the external engine, and returns the number\nof results. ftss_results returns the set of results.\n", "msg_date": "Sat, 19 May 2001 12:18:40 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: External search engine, advice" }, { "msg_contents": "mlw wrote:\n> \n> Tom Lane wrote:\n> >\n> > mlw <markw@mohawksoft.com> writes:\n> > > freedb=# select * from cdsongs where songid = ftss_results() ;\n> > > ERROR: Set-valued function called in context that cannot accept a set\n> >\n> > '=' is a scalar operation. Try\n> >\n> > select * from cdsongs where songid IN (select ftss_results());\n> \n> I was afraid you'd say that. 
That does not use indexes.\n> \n> It is pointless to use a text search engine if the result has to perform a\n> table scan anyway.\n> \n> If I do:\n> \n> create temp table fubar as select ftss_results() as songid;\n> select * from cdsongs where songid = fubar.songid;\n> \n> That works, but that is slow and a lot of people have emotional difficulties\n> with using temporary tables. (Oracle syndrome) Also, an 'IN' clause does not\n> preserve the order of the results, where as a join should.\n\nSo the standard answer to \"IN doesn't use indexes\" is to use EXISTS instead. I'm\nsurely being hopelessly naive here, but why won't that work in this case?\n\nRegards,\t\n\t\t\t\t\tAndrew.\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: Andrew@catalyst.net.nz\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64(21)635-694, Fax: +64(4)499-5596, Office: +64(4)499-2267xtn709\n", "msg_date": "Sun, 20 May 2001 12:14:10 +1200", "msg_from": "Andrew McMillan <andrew@catalyst.net.nz>", "msg_from_op": false, "msg_subject": "Re: Re: External search engine, advice" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> If I do:\n> create temp table fubar as select ftss_results() as songid;\n> select * from cdsongs where songid = fubar.songid;\n> That works, but that is slow and a lot of people have emotional difficulties\n> with using temporary tables.\n\nIf you don't like temp tables, try\n\nselect cdsongs.* from cdsongs, (select ftss_results() as ftss) as tmp\nwhere songid = tmp.ftss;\n\nwhich'll produce the same results.\n\nDo I need to point out that the semantics aren't the same as with IN?\n(Unless the output of ftss_results is guaranteed unique...)\n\n> Also, an 'IN' clause does not\n> preserve the order of the results, where as a join should.\n\nThis statement is flat-out wrong --- don't you know that SQL makes no\npromises about tuple ordering?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 
19 May 2001 20:20:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: External search engine, advice " }, { "msg_contents": "mlw wrote:\n> \n> I have an external search engine system which plugs in to postgres. I use a few\n> C functions to interface the search daemon with the Postgres back-end.\n> \n> The best that I have been able to do is do a \"select\" for each result. I have a\n> live demo/test site:\n> \n> http://www.mohawksoft.com/search.php3, and the PHP source code is at\n> http://www.mohawksoft.com/ftss_example.txt.\n> \n> I would love to get the results with one select statement, but have, to date,\n> been unable to figure out how. Anyone with any ideas?\n\nWell, I think I got it, and I am posting so that people trying to do what I am\ndoing, can look through the postings!!\n\n Datum ftss_search(PG_FUNCTION_ARGS)\n {\n int4 result;\n int state;\n \n if(!fcinfo->resultinfo)\n {\n PG_RETURN_NULL();\n }\n state = search_state();\n if(state == 0)\n {\n text * string= PG_GETARG_TEXT_P(0);\n int len = VARSIZE(string)-VARHDRSZ;\n char szString[len+1];\n memcpy(szString, VARDATA(string), len);\n szString[len]=0;\n search(DEFAULT_PORT, DEFAULT_HOST, szString);\n }\n if(search_nextresult(&result))\n {\n ReturnSetInfo *rsi = (ReturnSetInfo *)fcinfo->resultinfo;\n rsi->isDone = ExprMultipleResult;\n PG_RETURN_INT32(result);\n }\n else\n {\n ReturnSetInfo *rsi = (ReturnSetInfo *)fcinfo->resultinfo;\n rsi->isDone = ExprEndResult ;\n }\n PG_RETURN_NULL();\n }\n\nThe above is an example of how to write a function that returns multiple\nresults.\n\ncreate function ftss_search (varchar)\n\treturns setof integer\n\tas '/usr/local/lib/library.so', 'ftss_search'\n\tlanguage 'c' with (iscachable);\n\nThe above in an example of how one would register this function in postgres.\n\nselect table.* from table, (select fts_search('all { bla bla }') as key) as\nresult where result.key = table.key;\n\nThe above is an example of how to use 
this function.\n\nThanks everyone for you help.\n", "msg_date": "Sat, 19 May 2001 22:35:54 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: External search engine, advice" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> The above is an example of how to write a function that returns multiple\n> results.\n\nOne suggestion: you must check not only that fcinfo->resultinfo isn't\nNULL, but that it points at the sort of node you're expecting. Say\n\n\tif (fcinfo->resultinfo == NULL ||\n\t ! IsA(fcinfo->resultinfo, ReturnSetInfo))\n\t\t<complain>;\n\nIf you fail to do this, you can fully expect your code to coredump\na version or two hence. Right now the only possibility for resultinfo\nis to point at a ReturnSetInfo, but that *will* change.\n\n> create function ftss_search (varchar)\n> \treturns setof integer\n> \tas '/usr/local/lib/library.so', 'ftss_search'\n> \tlanguage 'c' with (iscachable);\n\n> The above in an example of how one would register this function in postgres.\n\nHmm ... given that ftss refers to external files, is it a good idea to\nmark it cachable? I'd sort of expect that the values it returns for\na particular argument could change over time. Cachable amounts to a\npromise that the results for a given argument will not change over time.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 May 2001 00:28:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: External search engine, advice " }, { "msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > The above is an example of how to write a function that returns multiple\n> > results.\n> \n> One suggestion: you must check not only that fcinfo->resultinfo isn't\n> NULL, but that it points at the sort of node you're expecting. Say\n> \n> if (fcinfo->resultinfo == NULL ||\n> ! IsA(fcinfo->resultinfo, ReturnSetInfo))\n> <complain>;\n> \n\nOK, that makes sense. 
I will put that in.\n\n\n> If you fail to do this, you can fully expect your code to coredump\n> a version or two hence. Right now the only possibility for resultinfo\n> is to point at a ReturnSetInfo, but that *will* change.\n> \n> > create function ftss_search (varchar)\n> > returns setof integer\n> > as '/usr/local/lib/library.so', 'ftss_search'\n> > language 'c' with (iscachable);\n> \n> > The above in an example of how one would register this function in postgres.\n> \n> Hmm ... given that ftss refers to external files, is it a good idea to\n> mark it cachable? I'd sort of expect that the values it returns for\n> a particular argument could change over time. Cachable amounts to a\n> promise that the results for a given argument will not change over time.\n\nThis I don't understand. What is the lifetime of a value that \"iscacheable?\"\nNot using \"iscacheable\" will force a table scan, but are you saying that when a\nresult is marked \"iscacheable\" it lasts the life time of the postgres session? \n\n From what I've been able to tell, a function's value which has been cached\nseems only to last the life of a transaction. For instance:\n\nselect * from table where field = fubar ('bla bla') ;\n\nWhen executed, fubar gets called once. On the next invocation of the same\nquery, fubar is again called. So I don't think cacheable has any more\npersistence than transaction. If this isn't the case, then YIKES!\n", "msg_date": "Sun, 20 May 2001 08:06:55 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: External search engine, advice" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n>> Hmm ... given that ftss refers to external files, is it a good idea to\n>> mark it cachable?\n\n> This I don't understand. What is the lifetime of a value that \"iscacheable?\"\n\nForever. cachable says it's OK to reduce \"func(constant)\" to \"constant\"\non sight. 
Right now it's not really forever because we don't save query\nplans for very long (unless they're inside a plpgsql function) ... but\nif you have a function that depends on any outside data besides its\narguments, you'd be ill-advised to mark it cachable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 May 2001 12:39:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: External search engine, advice " }, { "msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> >> Hmm ... given that ftss refers to external files, is it a good idea to\n> >> mark it cachable?\n> \n> > This I don't understand. What is the lifetime of a value that \"iscacheable?\"\n> \n> Forever. cachable says it's OK to reduce \"func(constant)\" to \"constant\"\n> on sight. Right now it's not really forever because we don't save query\n> plans for very long (unless they're inside a plpgsql function) ... but\n> if you have a function that depends on any outside data besides its\n> arguments, you'd be ill-advised to mark it cachable.\n\nThat's scary!!!\n\nI can sort of see why you'd want that, but can you also see why a developer\nwould not want that?\n\nTake this query:\n\nselect * from table where field = function(...);\n\nWithout the \"iscacheable\" flag, this function will force a table scan, but\nalthough the returned value may change over time, it would not change for this\nparticular transaction.\n\nSome functions need to be called each time they are evaluated.\nSome functions need to be called only once per transaction.\n\nDo you really see any need for a function's result to have a lifetime beyond\nits transaction? I see a real danger in preserving the value of a function\nacross a transaction. 
Granted, things like \"create index fubar_ndx on fubar\n(function(field));\" depend on this behavior, but other applications will have\nproblems.\n\nHow do we get a cached value of a function that exists for a transaction, such\nthat we can use indexes, and how do we identify the functions who's results\nshould have a longer lifetime?\n\nAm I out in left field here? Does anyone see this as a problem? I guess there\nshould be three states to the lifetime of a functions return value?\n", "msg_date": "Sun, 20 May 2001 13:32:24 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: External search engine, advice" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> Am I out in left field here? Does anyone see this as a problem? I guess there\n> should be three states to the lifetime of a functions return value?\n\nThere has been some talk of that, but nailing down exactly what the\nsemantics ought to be still needs more thought.\n\nAs far as optimizing indexscans goes, the correct intermediate concept\nwould be something like \"result is fixed within any one scan\", not any\none transaction. You wouldn't really want to find that\n\n\tbegin;\n\tselect * from foo where x = functhatreadsbar();\n\tupdate bar ...;\n\tselect * from foo where x = functhatreadsbar();\n\tend;\n\ndoes not give you the desired results.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 May 2001 13:44:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: External search engine, advice " }, { "msg_contents": "At 01:44 PM 5/20/01 -0400, Tom Lane wrote:\n\n>As far as optimizing indexscans goes, the correct intermediate concept\n>would be something like \"result is fixed within any one scan\", not any\n>one transaction. 
You wouldn't really want to find that\n>\n>\tbegin;\n>\tselect * from foo where x = functhatreadsbar();\n>\tupdate bar ...;\n>\tselect * from foo where x = functhatreadsbar();\n>\tend;\n>\n>does not give you the desired results.\n\nNo, you certainly wouldn't want that. Cached for the extent of a statement\nmight make sense.\n\n\n\n- Don Baccus, Portland OR <dhogaza@pacifier.com>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sun, 20 May 2001 11:21:44 -0700", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: Re: External search engine, advice " }, { "msg_contents": "Tom Lane wrote:\n> \n> begin;\n> select * from foo where x = functhatreadsbar();\n> update bar ...;\n> select * from foo where x = functhatreadsbar();\n> end;\n> \n> does not give you the desired results.\n\nBut why would you be marking the function 'iscachable' if you wanted to see the\nchange there?\n\nCheers,\n\t\t\t\t\tAndrew.\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: Andrew@catalyst.net.nz\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64(21)635-694, Fax: +64(4)499-5596, Office: +64(4)499-2267xtn709\n", "msg_date": "Mon, 21 May 2001 07:46:36 +1200", "msg_from": "Andrew McMillan <andrew@catalyst.net.nz>", "msg_from_op": false, "msg_subject": "Re: Re: External search engine, advice" }, { "msg_contents": "Andrew McMillan wrote:\n> \n> Tom Lane wrote:\n> >\n> > begin;\n> > select * from foo where x = functhatreadsbar();\n> > update bar ...;\n> > select * from foo where x = functhatreadsbar();\n> > end;\n> >\n> > does not give you the desired results.\n> \n> But why would you be marking the function 'iscachable' if you wanted to see the\n> change there?\nBecause if there is an index on 'x' you would want to use it instead of\nperforming a full table scan. 
If table 'foo' has millions of records, and\nfuncthatreadsbar() returns one value, an operation that can take milliseconds\nnow takes seconds with no benefit.\n", "msg_date": "Sun, 20 May 2001 17:00:31 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: External search engine, advice" }, { "msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > Am I out in left field here? Does anyone see this as a problem? I guess there\n> > should be three states to the lifetime of a functions return value?\n> \n> There has been some talk of that, but nailing down exactly what the\n> semantics ought to be still needs more thought.\n> \n> As far as optimizing indexscans goes, the correct intermediate concept\n> would be something like \"result is fixed within any one scan\", not any\n> one transaction. You wouldn't really want to find that\n> \n> begin;\n> select * from foo where x = functhatreadsbar();\n> update bar ...;\n> select * from foo where x = functhatreadsbar();\n> end;\n> \n> does not give you the desired results.\n\nOK, what is one to do?\n\nThere is an obvious need to use functions which return a single value, and\nwhich can be assumed \"frozen\" for the life of a query or transaction, but would\nabsolutely break if they could never change after that. This distinction from\n\"iscachable\" is vitally important to people coding functions for Postgres. I\nknow a lot of what I have written for postgres would break if the desired\nmeaning of \"iscachable\" were to be applied.\n", "msg_date": "Sun, 20 May 2001 17:18:16 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: External search engine, advice" } ]
[ { "msg_contents": "> I have been thinking about the problem of VACUUM and how we \n> might fix it for 7.2. Vadim has suggested that we should\n> attack this by implementing an overwriting storage manager\n> and transaction UNDO, but I'm not totally comfortable with\n> that approach: it seems to me that it's an awfully large\n> change in the way Postgres works. \n\nI'm not sure if we should implement overwriting smgr at all.\nI was/is going to solve space reusing problem with non-overwriting\none, though I'm sure that we have to reimplement it (> 1 table\nper data file, stored on disk FSM etc).\n\n> Second: if VACUUM can run in the background, then there's no\n> reason not to run it fairly frequently. In fact, it could become\n> an automatically scheduled activity like CHECKPOINT is now,\n> or perhaps even a continuously running daemon (which was the\n> original conception of it at Berkeley, BTW).\n\nAnd original authors concluded that daemon was very slow in\nreclaiming dead space, BTW.\n\n> 3. Lazy VACUUM processes a table in five stages:\n> A. Scan relation looking for dead tuples;...\n> B. Remove index entries for the dead tuples...\n> C. Physically delete dead tuples and compact free space...\n> D. Truncate any completely-empty pages at relation's end. \n> E. Create/update FSM entry for the table.\n...\n> If a tuple is dead, we care not whether its index entries are still\n> around or not; so there's no risk to logical consistency.\n\nWhat does this sentence mean? We canNOT remove dead heap tuple untill\nwe know that there are no index tuples referencing it and your A,B,C\nreflect this, so ..?\n\n> Another place where lazy VACUUM may be unable to do its job completely\n> is in compaction of space on individual disk pages. It can physically\n> move tuples to perform compaction only if there are not currently any\n> other backends with pointers into that page (which can be tested by\n> looking to see if the buffer reference count is one). 
Again, we punt\n> and leave the space to be compacted next time if we can't do it right\n> away.\n\nWe could keep share buffer lock (or add some other kind of lock)\nuntill tuple projected - after projection we need not to read data\nfor fetched tuple from shared buffer and time between fetching\ntuple and projection is very short, so keeping lock on buffer will\nnot impact concurrency significantly.\n\nOr we could register callback cleanup function with buffer so bufmgr\nwould call it when refcnt drops to 0.\n\n> Presently, VACUUM deletes index tuples by doing a standard index\n> scan and checking each returned index tuple to see if it points\n> at any of the tuples to be deleted. If so, the index AM is called\n> back to delete the tested index tuple. This is horribly inefficient:\n...\n> This is mainly a problem of a poorly chosen API. The index AMs\n> should offer a \"bulk delete\" call, which is passed a sorted array\n> of main-table TIDs. The loop over the index tuples should happen\n> internally to the index AM.\n\nI agreed with others who think that the main problem of index cleanup\nis reading all index data pages to remove some index tuples. 
You told\nyourself about partial heap scanning - so for each scanned part of table\nyou'll have to read all index pages again and again - very good way to\ntrash buffer pool with big indices.\n\nWell, probably it's ok for first implementation and you'll win some CPU\nwith \"bulk delete\" - I'm not sure how much, though, and there is more\nsignificant issue with index cleanup if table is not locked exclusively:\nconcurrent index scan returns tuple (and unlock index page), heap_fetch\nreads table row and find that it's dead, now index scan *must* find\ncurrent index tuple to continue, but background vacuum could already\nremove that index tuple => elog(FATAL, \"_bt_restscan: my bits moved...\");\n\nTwo ways: hold index page lock untill heap tuple is checked or (rough\nschema)\nstore info in shmem (just IndexTupleData.t_tid and flag) that an index tuple\nis used by some scan so cleaner could change stored TID (get one from prev\nindex tuple) and set flag to help scan restore its current position on\nreturn.\n\nI'm particularly interested in discussing this issue because it must be\nresolved for UNDO and chosen way will affect in what volume we'll be able\nto implement dirty reads (first way doesn't allow to implement them in full\n- ie selects with joins, - but good enough to resolve RI constraints\nconcurrency issue).\n\n> There you have it. If people like this, I'm prepared to commit to\n> making it happen for 7.2. Comments, objections, better ideas?\n\nWell, my current TODO looks as (ORDER BY PRIORITY DESC):\n\n1. UNDO;\n2. New SMGR;\n3. Space reusing.\n\nand I cannot commit at this point anything about 3. So, why not to refine\nvacuum if you want it. 
I, personally, was never be able to convince myself\nto spend time for this.\n\nVadim\n", "msg_date": "Fri, 18 May 2001 17:08:07 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Plans for solving the VACUUM problem" }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n>> If a tuple is dead, we care not whether its index entries are still\n>> around or not; so there's no risk to logical consistency.\n\n> What does this sentence mean? We canNOT remove dead heap tuple untill\n> we know that there are no index tuples referencing it and your A,B,C\n> reflect this, so ..?\n\nSorry if it wasn't clear. I meant that if the vacuum process fails\nafter removing an index tuple but before removing the (dead) heap tuple\nit points to, there's no need to try to undo. That state is OK, and\nwhen we next get a chance to vacuum we'll still be able to finish\nremoving the heap tuple.\n\n>> Another place where lazy VACUUM may be unable to do its job completely\n>> is in compaction of space on individual disk pages. It can physically\n>> move tuples to perform compaction only if there are not currently any\n>> other backends with pointers into that page (which can be tested by\n>> looking to see if the buffer reference count is one). 
Again, we punt\n>> and leave the space to be compacted next time if we can't do it right\n>> away.\n\n> We could keep share buffer lock (or add some other kind of lock)\n> untill tuple projected - after projection we need not to read data\n> for fetched tuple from shared buffer and time between fetching\n> tuple and projection is very short, so keeping lock on buffer will\n> not impact concurrency significantly.\n\nOr drop the pin on the buffer to show we no longer have a pointer to it.\nI'm not sure that the time to do projection is short though --- what\nif there are arbitrary user-defined functions in the quals or the\nprojection targetlist?\n\n> Or we could register callback cleanup function with buffer so bufmgr\n> would call it when refcnt drops to 0.\n\nHmm ... might work. There's no guarantee that the refcnt would drop to\nzero before the current backend exits, however. Perhaps set a flag in\nthe shared buffer header, and the last guy to drop his pin is supposed\nto do the cleanup? But then you'd be pushing VACUUM's work into\nproductive transactions, which is probably not the way to go.\n\n>> This is mainly a problem of a poorly chosen API. The index AMs\n>> should offer a \"bulk delete\" call, which is passed a sorted array\n>> of main-table TIDs. The loop over the index tuples should happen\n>> internally to the index AM.\n\n> I agreed with others who think that the main problem of index cleanup\n> is reading all index data pages to remove some index tuples.\n\nFor very small numbers of tuples that might be true. But I'm not\nconvinced it's worth worrying about. 
If there aren't many tuples to\nbe freed, perhaps VACUUM shouldn't do anything at all.\n\n> Well, probably it's ok for first implementation and you'll win some CPU\n> with \"bulk delete\" - I'm not sure how much, though, and there is more\n> significant issue with index cleanup if table is not locked exclusively:\n> concurrent index scan returns tuple (and unlock index page), heap_fetch\n> reads table row and find that it's dead, now index scan *must* find\n> current index tuple to continue, but background vacuum could already\n> remove that index tuple => elog(FATAL, \"_bt_restscan: my bits moved...\");\n\nHm. Good point ...\n\n> Two ways: hold index page lock untill heap tuple is checked or (rough\n> schema)\n> store info in shmem (just IndexTupleData.t_tid and flag) that an index tuple\n> is used by some scan so cleaner could change stored TID (get one from prev\n> index tuple) and set flag to help scan restore its current position on\n> return.\n\nAnother way is to mark the index tuple \"gone but not forgotten\", so to\nspeak --- mark it dead without removing it. (We could know that we need\nto do that if we see someone else has a buffer pin on the index page.)\nIn this state, the index scan coming back to work would still be allowed\nto find the index tuple, but no other index scan would stop on the\ntuple. Later passes of vacuum would eventually remove the index tuple,\nwhenever vacuum happened to pass through at an instant where no one has\na pin on that index page.\n\nNone of these seem real clean though. Needs more thought.\n\n> Well, my current TODO looks as (ORDER BY PRIORITY DESC):\n\n> 1. UNDO;\n> 2. New SMGR;\n> 3. Space reusing.\n\n> and I cannot commit at this point anything about 3. So, why not to refine\n> vacuum if you want it. I, personally, was never be able to convince myself\n> to spend time for this.\n\nOkay, good. 
I was worried that this idea would conflict with what you\nwere doing, but it seems it won't.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 20:27:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "> Well, my current TODO looks as (ORDER BY PRIORITY DESC):\n> \n> 1. UNDO;\n> 2. New SMGR;\n> 3. Space reusing.\n> \n> and I cannot commit at this point anything about 3. So, why not to refine\n> vacuum if you want it. I, personally, was never be able to convince myself\n> to spend time for this.\n\nVadim, can you remind me what UNDO is used for?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 18 May 2001 20:39:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" } ]
[ { "msg_contents": "> I see postgres 7.1.1 is out now. Was the fix for this\n> problem included in the new release?\n\nI fear it will be in 7.2 only.\n\n> On Thursday 29 March 2001 20:02, Philip Warner wrote:\n> > At 19:14 29/03/01 -0800, Mikheev, Vadim wrote:\n> > >> >Reported problem is caused by bug (only one tuple \n> version must be\n> > >> >returned by SELECT) and this is way to fix it.\n> > >>\n> > >> I assume this is not possible in 7.1?\n> > >\n> > >Just looked in heapam.c - I can fix it in two hours.\n> > >The question is - should we do this now?\n> > >Comments?\n> >\n> > It's a bug; how confident are you of the fix?\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n", "msg_date": "Fri, 18 May 2001 17:28:48 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Re: possible row locking bug in 7.0.3 & 7.1" } ]
[ { "msg_contents": "> Vadim, can you remind me what UNDO is used for?\n\nOk, last reminder -:))\n\nOn transaction abort, read WAL records and undo (rollback)\nchanges made in storage. Would allow:\n\n1. Reclaim space allocated by aborted transactions.\n2. Implement SAVEPOINTs.\n Just to remind -:) - in the event of error discovered by server\n - duplicate key, deadlock, command mistyping, etc, - transaction\n will be rolled back to the nearest implicit savepoint setted\n just before query execution; - or transaction can be aborted by\n ROLLBACK TO <savepoint_name> command to some explicit savepoint\n setted by user. Transaction rolled back to savepoint may be continued.\n3. Reuse transaction IDs on postmaster restart.\n4. Split pg_log into small files with ability to remove old ones (which\n do not hold statuses for any running transactions).\n\nVadim\n", "msg_date": "Fri, 18 May 2001 18:10:10 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Plans for solving the VACUUM problem" }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n>> Vadim, can you remind me what UNDO is used for?\n> Ok, last reminder -:))\n\n> On transaction abort, read WAL records and undo (rollback)\n> changes made in storage. Would allow:\n\n> 1. Reclaim space allocated by aborted transactions.\n> 2. Implement SAVEPOINTs.\n> Just to remind -:) - in the event of error discovered by server\n> - duplicate key, deadlock, command mistyping, etc, - transaction\n> will be rolled back to the nearest implicit savepoint setted\n> just before query execution; - or transaction can be aborted by\n> ROLLBACK TO <savepoint_name> command to some explicit savepoint\n> setted by user. Transaction rolled back to savepoint may be continued.\n> 3. Reuse transaction IDs on postmaster restart.\n> 4. Split pg_log into small files with ability to remove old ones (which\n> do not hold statuses for any running transactions).\n\nHm. 
On the other hand, relying on WAL for undo means you cannot drop\nold WAL segments that contain records for any open transaction. We've\nalready seen several complaints that the WAL logs grow unmanageably huge\nwhen there is a long-running transaction, and I think we'll see a lot\nmore.\n\nIt would be nicer if we could drop WAL records after a checkpoint or two,\neven in the presence of long-running transactions. We could do that if\nwe were only relying on them for crash recovery and not for UNDO.\n\nLooking at the advantages:\n\n1. Space reclamation via UNDO doesn't excite me a whole lot, if we can\nmake lightweight VACUUM work well. (I definitely don't like the idea\nthat after a very long transaction fails and aborts, I'd have to wait\nanother very long time for UNDO to do its thing before I could get on\nwith my work. Would much rather have the space reclamation happen in\nbackground.)\n\n2. SAVEPOINTs would be awfully nice to have, I agree.\n\n3. Reusing xact IDs would be nice, but there's an answer with a lot less\nimpact on the system: go to 8-byte xact IDs. Having to shut down the\npostmaster when you approach the 4Gb transaction mark isn't going to\nimpress people who want a 24x7 commitment, anyway.\n\n4. Recycling pg_log would be nice too, but we've already discussed other\nhacks that might allow pg_log to be kept finite without depending on\nUNDO (or requiring postmaster restarts, IIRC).\n\nI'm sort of thinking that undoing back to a savepoint is the only real\nusefulness of WAL-based UNDO. Is it practical to preserve the WAL log\njust back to the last savepoint in each xact, not the whole xact?\n\nAnother thought: do we need WAL UNDO at all to implement savepoints?\nIs there some way we could do them like nested transactions, wherein\neach savepoint-to-savepoint segment is given its own transaction number?\nCommitting multiple xact IDs at once might be a little tricky, but it\nseems like a narrow, soluble problem. 
Implementing UNDO without\ncreating lots of performance issues looks a lot harder.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 21:37:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "On Fri, May 18, 2001 at 06:10:10PM -0700, Mikheev, Vadim wrote:\n> > Vadim, can you remind me what UNDO is used for?\n> \n> Ok, last reminder -:))\n> \n> On transaction abort, read WAL records and undo (rollback)\n> changes made in storage. Would allow:\n> \n> 1. Reclaim space allocated by aborted transactions.\n> 2. Implement SAVEPOINTs.\n> Just to remind -:) - in the event of error discovered by server\n> - duplicate key, deadlock, command mistyping, etc, - transaction\n> will be rolled back to the nearest implicit savepoint set\n> just before query execution; - or transaction can be aborted by\n> ROLLBACK TO <savepoint_name> command to some explicit savepoint\n> set by user. Transaction rolled back to savepoint may be continued.\n> 3. Reuse transaction IDs on postmaster restart.\n> 4. Split pg_log into small files with ability to remove old ones (which\n> do not hold statuses for any running transactions).\n\nI missed the original discussions; apologies if this has already been\nbeaten into the ground. But... mightn't sub-transactions be a \nbetter-structured way to expose this service?\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Fri, 18 May 2001 18:56:25 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "> Another thought: do we need WAL UNDO at all to implement savepoints?\n> Is there some way we could do them like nested transactions, wherein\n> each savepoint-to-savepoint segment is given its own transaction number?\n> Committing multiple xact IDs at once might be a little tricky, but it\n> seems like a narrow, soluble problem. 
Implementing UNDO without\n> creating lots of performance issues looks a lot harder.\n\nI am confused why we can't implement subtransactions as part of our\ncommand counter? The counter is already 4 bytes long. Couldn't we\nroll back to counter number X-10?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 18 May 2001 23:12:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I am confused why we can't implement subtransactions as part of our\n> command counter? The counter is already 4 bytes long. Couldn't we\n> roll back to counter number X-10?\n\nThat'd work within your own transaction, but not from outside it.\nAfter you commit, how will other backends know which command-counter\nvalues of your transaction to believe, and which not?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 23:15:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I am confused why we can't implement subtransactions as part of our\n> > command counter? The counter is already 4 bytes long. Couldn't we\n> > roll back to counter number X-10?\n> \n> That'd work within your own transaction, but not from outside it.\n> After you commit, how will other backends know which command-counter\n> values of your transaction to believe, and which not?\n\nSeems we would have to store the command counters for the parts of the\ntransaction that committed, or the ones that were rolled back. Yuck.\n\nI hate to add UNDO complexity just for subtransactions.\n\nHey, I have an idea. 
Can we do subtransactions as separate transactions\n(as Tom mentioned), and put the subtransaction IDs in the WAL, so they\ncan be safely committed/rolled back as a group?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 18 May 2001 23:29:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Hey, I have an idea. Can we do subtransactions as separate transactions\n> (as Tom mentioned), and put the subtransaction IDs in the WAL, so they\n> can be safely committed/rolled back as a group?\n\nIt's not quite that easy: all the subtransactions have to commit at\n*the same time* from the point of view of other xacts, or you have\nconsistency problems. So there'd need to be more xact-commit mechanism\nthan there is now. Snapshots are also interesting; we couldn't use a\nsingle xact ID per backend to show the open-transaction state.\n\nWAL doesn't really enter into it AFAICS...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 23:43:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Hey, I have an idea. Can we do subtransactions as separate transactions\n> > (as Tom mentioned), and put the subtransaction IDs in the WAL, so they\n> > can be safely committed/rolled back as a group?\n> \n> It's not quite that easy: all the subtransactions have to commit at\n> *the same time* from the point of view of other xacts, or you have\n> consistency problems. So there'd need to be more xact-commit mechanism\n> than there is now. 
Snapshots are also interesting; we couldn't use a\n> single xact ID per backend to show the open-transaction state.\n\nYes, I knew that was going to come up that you have to add a lock to the\npg_log that is only in effect when someone is committing a transaction\nwith subtransactions. Normal transactions get read/sharedlock, while\nsubtransactions need exclusive/writelock.\n\nSeems a lot easier than UNDO. Vadim, you mentioned UNDO would allow\nspace reuse for rolled-back transactions, but in most cases the space\nreuse is going to be for old copies of committed transactions, right? \nWere you going to use WAL to get free space from old copies too?\n\nVadim, I think I am missing something. You mentioned UNDO would be used\nfor these cases and I don't understand the purpose of adding what would\nseem to be a pretty complex capability:\n\n> 1. Reclaim space allocated by aborted transactions.\n\nIs there really a lot to be saved here vs. old tuples of committed\ntransactions?\n\n> 2. Implement SAVEPOINTs.\n> Just to remind -:) - in the event of error discovered by server\n> - duplicate key, deadlock, command mistyping, etc, - transaction\n> will be rolled back to the nearest implicit savepoint set\n> just before query execution; - or transaction can be aborted by\n> ROLLBACK TO <savepoint_name> command to some explicit savepoint\n> set by user. Transaction rolled back to savepoint may be\n> continued.\n\nDiscussing, perhaps using multiple transactions.\n\n> 3. Reuse transaction IDs on postmaster restart.\n\nDoesn't seem like a huge win.\n\n> 4. Split pg_log into small files with ability to remove old ones (which\n> do not hold statuses for any running transactions).\n\nThat one is interesting. 
Seems the only workaround for that would be to\nallow a global scan of all databases and tables to set commit flags,\nthen shrink pg_log and set the XID offset as the start of the file.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 18 May 2001 23:57:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Hey, I have an idea. Can we do subtransactions as separate transactions\n> > > (as Tom mentioned), and put the subtransaction IDs in the WAL, so they\n> > > can be safely committed/rolled back as a group?\n> > \n> > It's not quite that easy: all the subtransactions have to commit at\n> > *the same time* from the point of view of other xacts, or you have\n> > consistency problems. So there'd need to be more xact-commit mechanism\n> > than there is now. Snapshots are also interesting; we couldn't use a\n> > single xact ID per backend to show the open-transaction state.\n> \n> Yes, I knew that was going to come up that you have to add a lock to the\n> pg_log that is only in effect when someone is committing a transaction\n> with subtransactions. Normal transactions get read/sharedlock, while\n> subtransactions need exclusive/writelock.\n\nI was wrong here. Multiple backends can write to pg_log at the same\ntime, even subtransaction ones. It is just that no backend can read from\npg_log during a subtransaction commit. 
Actually, they can if they are\nreading a transaction status that is less than the minimum active\ntransaction id, see GetXmaxRecent().\n\nDoesn't seem too bad.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 19 May 2001 08:12:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Hey, I have an idea. Can we do subtransactions as separate transactions\n> > (as Tom mentioned), and put the subtransaction IDs in the WAL, so they\n> > can be safely committed/rolled back as a group?\n> \n> It's not quite that easy: all the subtransactions have to commit at\n> *the same time* from the point of view of other xacts, or you have\n> consistency problems. So there'd need to be more xact-commit mechanism\n> than there is now. Snapshots are also interesting; we couldn't use a\n> single xact ID per backend to show the open-transaction state.\n\nOK, I have another idea about subtransactions as multiple transaction\nids.\n\nI realize that the snapshot problem would be an issue, because now\ninstead of looking at your own transaction id, you have to look at\nmultiple transaction ids. We could do this as a List of xid's, but that\nwill not scale well.\n\nMy idea is for a subtransaction backend to have its own pg_log-style\nmemory area that shows which transactions it owns and has\ncommitted/aborted. It can have the log start at its start xid, and can\nlook in pg_log and in there anytime it needs to check the visibility of\na transaction greater than its minimum xid. 16k can hold 64k xids, so it\nseems it should scale pretty well. 
(Each xid is two bits in pg_log.)\n\nIn fact, multi-query transactions are just a special case of\nsubtransactions, where all previous subtransactions are\ncommitted/visible. We could use the same pg_log-style memory area for\nmulti-query transactions, eliminating the command counter and saving 8\nbytes overhead per tuple.\n\nCurrently, the XMIN/XMAX command counters are used only by the current\ntransaction, and they are useless once the transaction finishes and take\nup 8 bytes on disk.\n\nSo, this idea gets us subtransactions and saves 8 bytes overhead. This\nreduces our per-tuple overhead from 36 to 28 bytes, a 22% reduction!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 19 May 2001 08:23:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> In fact, multi-query transactions are just a special case of\n> subtransactions, where all previous subtransactions are\n> committed/visible. We could use the same pg_log-style memory area for\n> multi-query transactions, eliminating the command counter and saving 8\n> bytes overhead per tuple.\n\nInteresting thought, but command IDs don't act the same as transactions;\nin particular, visibility of one scan to another doesn't necessarily\ndepend on whether the scan has finished.\n\nPossibly that could be taken into account by having different rules for\n\"do we think it's committed\" in the local pg_log than the global one.\n\nAlso, this distinction would propagate out of the xact status code;\nfor example, it wouldn't do for heapam to set the \"known committed\"\nbit on a tuple just because it's from a previous subtransaction of the\ncurrent xact. 
Right now that works because heapam knows the difference\nbetween xacts and commands; it would still have to know the difference.\n\nA much more significant objection is that such a design would eat xact\nIDs at a tremendous rate, to no purpose. CommandCounterIncrement is a\ncheap operation now, and we do it with abandon. It would not be cheap\nif it implied allocating a new xact ID that would eventually need to be\nmarked committed. I don't mind allocating a new xact ID for each\nexplicitly-created savepoint, but a new ID per CommandCounterIncrement\nis a different story.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 19 May 2001 11:13:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "> Hm. On the other hand, relying on WAL for undo means you cannot drop\n> old WAL segments that contain records for any open transaction. We've\n> already seen several complaints that the WAL logs grow unmanageably huge\n> when there is a long-running transaction, and I think we'll see a lot\n> more.\n> \n> It would be nicer if we could drop WAL records after a checkpoint or two,\n> even in the presence of long-running transactions. We could do that if\n> we were only relying on them for crash recovery and not for UNDO.\n\nAs you understand, this is an old, well-known problem in database practice,\ndescribed in books. There are two ways - either abort too-long-running transactions\nor (/and) compact old log segments: fetch and save (to use for undo)\nrecords of long-running transactions and remove other records. Neither\nway is perfect but nothing is perfect at all -:)\n\n> 1. Space reclamation via UNDO doesn't excite me a whole lot, if we can\n> make lightweight VACUUM work well. (I definitely don't like the idea\n\nSorry, but I'm going to consider background vacuum as a temporary solution\nonly. 
As I've already pointed out, original PG authors finally became\ndisillusioned with the same approach. What is good in using UNDO for 1.\nis the fact that WAL records give you *direct* physical access to changes\nwhich should be rolled back.\n\n> that after a very long transaction fails and aborts, I'd have to wait\n> another very long time for UNDO to do its thing before I could get on\n> with my work. Would much rather have the space reclamation happen in\n> background.)\n\nUnderstandable, but why should other transactions read dirty data again\nand again while waiting for background vacuum? I think aborted transactions\nshould take some responsibility for the mess made by them -:)\nAnd keeping in mind 2., very long transactions could be continued -:)\n\n> 2. SAVEPOINTs would be awfully nice to have, I agree.\n> \n> 3. Reusing xact IDs would be nice, but there's an answer with a lot less\n> impact on the system: go to 8-byte xact IDs. Having to shut down the\n> postmaster when you approach the 4Gb transaction mark isn't going to\n> impress people who want a 24x7 commitment, anyway.\n\n+8 bytes in the tuple header is not such a tiny thing.\n\n> 4. Recycling pg_log would be nice too, but we've already discussed other\n> hacks that might allow pg_log to be kept finite without depending on\n> UNDO (or requiring postmaster restarts, IIRC).\n\nWe did... and didn't get agreement.\n\n> I'm sort of thinking that undoing back to a savepoint is the only real\n> usefulness of WAL-based UNDO. Is it practical to preserve the WAL log\n> just back to the last savepoint in each xact, not the whole xact?\n\nNo, it's not. 
It's not possible in overwriting systems at all - all\ntransaction records are required.\n\n> Another thought: do we need WAL UNDO at all to implement savepoints?\n> Is there some way we could do them like nested transactions, wherein\n> each savepoint-to-savepoint segment is given its own transaction number?\n> Committing multiple xact IDs at once might be a little tricky, but it\n> seems like a narrow, soluble problem.\n\nImplicit savepoints wouldn't be possible - this is a very convenient\nfeature I've found in Oracle.\nAnd additional code in tqual.c wouldn't be a good addition.\n\n> Implementing UNDO without creating lots of performance issues looks\n> a lot harder.\n\nWhat *performance* issues?!\nThe only issue is additional disk requirements.\n\nVadim\n\n\n", "msg_date": "Sat, 19 May 2001 23:36:34 -0700", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "\"Vadim Mikheev\" <vmikheev@sectorbase.com> writes:\n>> 1. Space reclamation via UNDO doesn't excite me a whole lot, if we can\n>> make lightweight VACUUM work well.\n\n> Sorry, but I'm going to consider background vacuum as a temporary solution\n> only. As I've already pointed out, original PG authors finally became\n> disillusioned with the same approach.\n\nHow could they become disillusioned with it, when they never tried it?\nI know of no evidence that any version of PG has had backgroundable\n(non-blocking-to-other-transactions) VACUUM, still less within-relation\nspace recycling. They may have become disillusioned with the form of\nVACUUM that they actually had (ie, the same one we've inherited) --- but\nplease don't call that \"the same approach\" I'm proposing.\n\nCertainly, doing VACUUM this way is an experiment that may fail, or may\nrequire further work before it really works well. 
But I'd appreciate it\nif you wouldn't prejudge the results of the experiment.\n\n>> Would much rather have the space reclamation happen in\n>> background.)\n\n> Understandable, but why should other transactions read dirty data again\n> and again while waiting for background vacuum? I think aborted transactions\n> should take some responsibility for the mess made by them -:)\n\nThey might read it again and again before the failed xact gets around to\nremoving the data, too. You cannot rely on UNDO for correctness; at\nmost it can be a speed/space optimization. I see no reason to assume\nthat it's a more effective optimization than a background vacuum\nprocess.\n\n>> 3. Reusing xact IDs would be nice, but there's an answer with a lot less\n>> impact on the system: go to 8-byte xact IDs.\n\n> +8 bytes in the tuple header is not such a tiny thing.\n\nAgreed, but the people who need 8-byte IDs are not running small\ninstallations. I think they'd sooner pay a little more in disk space\nthan risk costs in performance or reliability.\n\n>> Another thought: do we need WAL UNDO at all to implement savepoints?\n>> Is there some way we could do them like nested transactions, wherein\n>> each savepoint-to-savepoint segment is given its own transaction number?\n\n> Implicit savepoints wouldn't be possible - this is a very convenient\n> feature I've found in Oracle.\n\nWhy not? Seems to me that establishing implicit savepoints is just a\nuser-interface issue; you can do it, or not do it, regardless of the\nunderlying mechanism.\n\n>> Implementing UNDO without creating lots of performance issues looks\n>> a lot harder.\n\n> What *performance* issues?!\n> The only issue is additional disk requirements.\n\nNot so. UNDO does failed-transaction cleanup work in the interactive\nbackends, where it necessarily delays clients who might otherwise be\nissuing their next command. A VACUUM-based approach does the cleanup\nwork in the background. 
Same work, more or less, but it's not in the\nclients' critical path.\n\nBTW, UNDO for failed transactions alone will not eliminate the need for\nVACUUM. Will you also make successful transactions go back and\nphysically remove the tuples they deleted?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 May 2001 13:09:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "> >> 1. Space reclamation via UNDO doesn't excite me a whole lot, if we can\n> >> make lightweight VACUUM work well.\n> \n> > Sorry, but I'm going to consider background vacuum as a temporary solution\n> > only. As I've already pointed out, original PG authors finally became\n> > disillusioned with the same approach.\n> \n> How could they become disillusioned with it, when they never tried it?\n> I know of no evidence that any version of PG has had backgroundable\n> (non-blocking-to-other-transactions) VACUUM, still less within-relation\n> space recycling. They may have become disillusioned with the form of\n> VACUUM that they actually had (ie, the same one we've inherited) --- but\n> please don't call that \"the same approach\" I'm proposing.\n\nPre-Postgres'95 (original) versions had a vacuum daemon running in the\nbackground. I don't know if that vacuum shrank relations or not\n(there was no shrinking in the '95 version), I know that the daemon had to\ndo some extra work in moving old tuples to archival storage, but\nanyway as you can read in old papers in the case of consistent heavy\nload the daemon was not able to clean up storage fast enough. And the\nreason is obvious - no matter how optimized your daemon will be\n(in regard to blocking other transactions etc), it will have to\nperform a huge amount of IO just to find space available for reclaiming.\n\n> Certainly, doing VACUUM this way is an experiment that may fail, or may\n> require further work before it really works well. 
But I'd appreciate it\n> if you wouldn't prejudge the results of the experiment.\n\nWhy not, Tom? Why shouldn't I state my opinion?\nLast summer your comment about WAL, my experiment at that time, was that\nit will save just a few fsyncs. It was your right to make a prejudgment,\nwhat's wrong with my rights? And you appealed to old papers as well, BTW.\n\n> > Understandable, but why should other transactions read dirty data again\n> > and again while waiting for background vacuum? I think aborted transactions\n> > should take some responsibility for the mess made by them -:)\n> \n> They might read it again and again before the failed xact gets around to\n> removing the data, too. You cannot rely on UNDO for correctness; at\n> most it can be a speed/space optimization. I see no reason to assume\n> that it's a more effective optimization than a background vacuum\n> process.\n\nReally?! Once again: WAL records give you *physical* address of tuples\n(both heap and index ones!) to be removed and size of log to read\nrecords from is not comparable with size of data files.\n\n> >> Another thought: do we need WAL UNDO at all to implement savepoints?\n> >> Is there some way we could do them like nested transactions, wherein\n> >> each savepoint-to-savepoint segment is given its own transaction number?\n> \n> > Implicit savepoints wouldn't be possible - this is a very convenient\n> > feature I've found in Oracle.\n> \n> Why not? Seems to me that establishing implicit savepoints is just a\n> user-interface issue; you can do it, or not do it, regardless of the\n> underlying mechanism.\n\nImplicit savepoints are set by the server automatically before each\nquery execution - you wouldn't use transaction IDs for this.\n\n> >> Implementing UNDO without creating lots of performance issues looks\n> >> a lot harder.\n> \n> > What *performance* issues?!\n> > The only issue is additional disk requirements.\n> \n> Not so. 
UNDO does failed-transaction cleanup work in the interactive\n> backends, where it necessarily delays clients who might otherwise be\n> issuing their next command. A VACUUM-based approach does the cleanup\n> work in the background. Same work, more or less, but it's not in the\n> clients' critical path.\n\nNot the same work but much more, and in the critical paths of all clients.\nAnd - is overall performance of Oracle or Informix worse than in PG?\nSeems delays in clients for rollback don't affect performance so much.\nBut dirty storage does.\n\n> BTW, UNDO for failed transactions alone will not eliminate the need for\n> VACUUM. Will you also make successful transactions go back and\n> physically remove the tuples they deleted?\n\nThey can't do this, as you know pretty well. But using WAL to get TIDs to\nbe deleted is considerable, no?\n\nVadim\n\n\n", "msg_date": "Sun, 20 May 2001 14:00:48 -0700", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "> Were you going to use WAL to get free space from old copies too?\n\nConsiderable approach.\n\n> Vadim, I think I am missing something. You mentioned UNDO would be used\n> for these cases and I don't understand the purpose of adding what would\n> seem to be a pretty complex capability:\n\nYeh, we already won the title of most advanced among simple databases, -:)\nYes, looking in a list of IDs assigned to a single transaction in tqual.c is much\neasier to do than UNDO. Just as a couple of fsyncs is easier than WAL.\n\n> > 1. Reclaim space allocated by aborted transactions.\n> \n> Is there really a lot to be saved here vs. 
old tuples of committed\n> transactions?\n\nAre you able to protect COPY FROM from abort/crash?\n\nVadim\n\n\n", "msg_date": "Sun, 20 May 2001 14:13:37 -0700", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "\"Vadim Mikheev\" <vmikheev@sectorbase.com> writes:\n> Really?! Once again: WAL records give you *physical* address of tuples\n> (both heap and index ones!) to be removed and size of log to read\n> records from is not comparable with size of data files.\n\nYou sure? With our current approach of dumping data pages into the WAL\non first change since checkpoint (and doing so again after each\ncheckpoint) it's not too difficult to devise scenarios where the WAL log\nis *larger* than the affected datafiles ... and can't be truncated until\nsomeone commits.\n\nThe copied-data-page traffic is the worst problem with our current\nWAL implementation. I did some measurements last week on VACUUM of a\ntest table (the accounts table from a \"pg_bench -s 10\" setup, which\ncontains 1000000 rows; I updated 20000 rows and then vacuumed). This\ngenerated about 34400 8k blocks of WAL traffic, of which about 33300\nrepresented copied pages and the other 1100 blocks were actual WAL\nentries. That's a pretty massive I/O overhead, considering the table\nitself was under 20000 8k blocks. It was also interesting to note that\na large fraction of the CPU time was spent calculating CRCs on the WAL\ndata.\n\nWould it be possible to split the WAL traffic into two sets of files,\none for WAL log records proper and one for copied pages? Seems like\nwe could recycle the pages after each checkpoint rather than hanging\nonto them until the associated transactions commit.\n\n>> Why not? 
Seems to me that establishing implicit savepoints is just a\n>> user-interface issue; you can do it, or not do it, regardless of the\n>> underlying mechanism.\n\n> Implicit savepoints are set by the server automatically before each\n> query execution - you wouldn't use transaction IDs for this.\n\nIf the user asked you to, I don't see why not.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 May 2001 17:25:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "On Sun, 20 May 2001, Vadim Mikheev wrote:\n\n> > >> 1. Space reclamation via UNDO doesn't excite me a whole lot, if we can\n> > >> make lightweight VACUUM work well.\n> >\n> > > Sorry, but I'm going to consider background vacuum as a temporary solution\n> > > only. As I've already pointed out, original PG authors finally became\n> > > disillusioned with the same approach.\n> >\n> > How could they become disillusioned with it, when they never tried it?\n> > I know of no evidence that any version of PG has had backgroundable\n> > (non-blocking-to-other-transactions) VACUUM, still less within-relation\n> > space recycling. They may have become disillusioned with the form of\n> > VACUUM that they actually had (ie, the same one we've inherited) --- but\n> > please don't call that \"the same approach\" I'm proposing.\n>\n> Pre-Postgres'95 (original) versions had a vacuum daemon running in the\n> background. I don't know if that vacuum shrank relations or not\n> (there was no shrinking in the '95 version), I know that the daemon had to\n> do some extra work in moving old tuples to archival storage, but\n> anyway as you can read in old papers in the case of consistent heavy\n> load the daemon was not able to clean up storage fast enough. 
And the\n> reason is obvious - no matter how optimized your daemon will be\n> (in regard to blocking other transactions etc), it will have to\n> perform a huge amount of IO just to find space available for reclaiming.\n>\n> > Certainly, doing VACUUM this way is an experiment that may fail, or may\n> > require further work before it really works well. But I'd appreciate it\n> > if you wouldn't prejudge the results of the experiment.\n>\n> Why not, Tom? Why shouldn't I state my opinion?\n> Last summer your comment about WAL, my experiment at that time, was that\n> it will save just a few fsyncs. It was your right to make a prejudgment,\n> what's wrong with my rights? And you appealed to old papers as well, BTW.\n\nIf it's an \"experiment\", shouldn't it be done outside of the main source\ntree, with adequate testing in a high load situation, with a patch\nreleased to the community for further testing/comments, before it is added\nto the source tree? From reading Vadim's comment above (re:\npre-Postgres95), this daemonized approach would cause a high I/O load on\nthe server in a situation where there are *a lot* of UPDATE/DELETEs\nhappening to the database, which should be easily recreatable, no? Or,\nVadim, am I misunderstanding?\n\n\n", "msg_date": "Sun, 20 May 2001 18:29:03 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> If it's an \"experiment\", shouldn't it be done outside of the main source\n> tree, with adequate testing in a high load situation, with a patch\n> released to the community for further testing/comments, before it is added\n> to the source tree?\n\nMebbe we should've handled WAL that way too ;-)\n\nSeriously, I don't think that my proposed changes need be treated with\nquite that much suspicion. The only part that is really intrusive is\nthe shared-memory free-heap-space-management change. 
But AFAICT that\nwill be a necessary component of *any* approach to getting rid of\nVACUUM. We've been arguing here, in essence, about whether a background\nor on-line approach to finding free space will be more useful; but that\nstill leaves you with the question of what you do with the free space\nafter you've found it. Without some kind of shared free space map,\nthere's not anything you can do except have the process that found the\nspace do tuple moving and file truncation --- ie, VACUUM. So even if\nI'm quite wrong about the effectiveness of a background VACUUM, the FSM\ncode will still be needed: an UNDO-style approach is also going to need\nan FSM to do anything with the free space it finds. It's equally clear\nthat the index AMs have to support index tuple deletion without\nexclusive lock, or we'll still have blocking problems during free-space\ncleanup, no matter what drives that cleanup. The only part of what\nI've proposed that might end up getting relegated to the scrap heap is\nthe \"lazy vacuum\" command itself, which will be a self-contained and\nrelatively small module (smaller than the present commands/vacuum.c,\nfor sure).\n\nBesides which, Vadim has already said that he won't have time to do\nanything about space reclamation before 7.2. So even if background\nvacuum does end up getting superseded by something better, we're going\nto need it for a release or two ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 May 2001 17:57:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "At 01:09 PM 20-05-2001 -0400, Tom Lane wrote:\n>>> 3. Reusing xact IDs would be nice, but there's an answer with a lot less\n>>> impact on the system: go to 8-byte xact IDs.\n>\n>> +8 bytes in tuple header is not so tiny thing.\n>\n>Agreed, but the people who need 8-byte IDs are not running small\n>installations. 
I think they'd sooner pay a little more in disk space\n>than risk costs in performance or reliability.\n\nAn additional 4 (8?) bytes per tuple to increase the \"mean time before\nproblem \" 4 billion times sounds good to me.\n\nCheerio,\nLink.\n\n", "msg_date": "Mon, 21 May 2001 09:28:06 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "> > Really?! Once again: WAL records give you *physical* address of tuples\n> > (both heap and index ones!) to be removed and size of log to read\n> > records from is not comparable with size of data files.\n> \n> You sure? With our current approach of dumping data pages into the WAL\n> on first change since checkpoint (and doing so again after each\n> checkpoint) it's not too difficult to devise scenarios where the WAL log\n> is *larger* than the affected datafiles ... and can't be truncated until\n> someone commits.\n\nYes, but note mine \"size of log to read records from\" - each log record\nhas pointer to previous record made by same transaction: rollback must\nnot read entire log file to get all records of specific transaction.\n\n> >> Why not? 
Seems to me that establishing implicit savepoints is just a\n> >> user-interface issue; you can do it, or not do it, regardless of the\n> >> underlying mechanism.\n> \n> > Implicit savepoints are setted by server automatically before each\n> > query execution - you wouldn't use transaction IDs for this.\n> \n> If the user asked you to, I don't see why not.\n\nExample of one implicit savepoint usage: skipping duplicate key insertion.\nUsing transaction IDs when someone wants to insert a few thousand records?\n\nVadim\n\n\n", "msg_date": "Sun, 20 May 2001 19:27:07 -0700", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "> If it's an \"experiment\", shouldn't it be done outside of the main source\n> tree, with adequate testing in a high load situation, with a patch\n> released to the community for further testing/comments, before it is added\n> to the source tree? From reading Vadim's comment above (re:\n> pre-Postgres95), this daemonized approach would cause a high I/O load on\n> the server in a situation where there are *a lot* of UPDATE/DELETEs\n> happening to the database, which should be easily recreatable, no? Or,\n> Vadim, am I misunderstanding?\n\nIt probably will not cause more IO than vacuum does right now.\nBut unfortunately it will not reduce that IO. Cleanup work will be spread\nin time and users will not experience long lockouts but average impact\non overall system throughput will be same (or maybe higher).\nMy point is that we'll need in dynamic cleanup anyway and UNDO is\nwhat should be implemented for dynamic cleanup of aborted changes.\nPlus UNDO gives us natural implementation of savepoints and some\nabilities in transaction IDs management, which we may use or not\n(though, 4. 
- pg_log size management - is really good thing).\n\nVadim\n\n\n", "msg_date": "Sun, 20 May 2001 19:57:48 -0700", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "> Seriously, I don't think that my proposed changes need be treated with\n> quite that much suspicion. The only part that is really intrusive is\n\nAgreed. I fight for UNDO, not against background vacuum -:)\n\n> the shared-memory free-heap-space-management change. But AFAICT that\n> will be a necessary component of *any* approach to getting rid of\n> VACUUM. We've been arguing here, in essence, about whether a background\n> or on-line approach to finding free space will be more useful; but that\n> still leaves you with the question of what you do with the free space\n> after you've found it. Without some kind of shared free space map,\n> there's not anything you can do except have the process that found the\n> space do tuple moving and file truncation --- ie, VACUUM. So even if\n> I'm quite wrong about the effectiveness of a background VACUUM, the FSM\n> code will still be needed: an UNDO-style approach is also going to need\n> an FSM to do anything with the free space it finds. It's equally clear\n\nUnfortunately, I think that we'll need in on-disk FSM and that FSM is\nactually the most complex thing to do in \"space reclamation\" project.\n\n> Besides which, Vadim has already said that he won't have time to do\n> anything about space reclamation before 7.2. 
So even if background\n> vacuum does end up getting superseded by something better, we're going\n> to need it for a release or two ...\n\nYes.\n\nVadim\n\n\n", "msg_date": "Sun, 20 May 2001 20:06:15 -0700", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "\"Vadim Mikheev\" <vmikheev@sectorbase.com> writes:\n> Unfortunately, I think that we'll need in on-disk FSM and that FSM is\n> actually the most complex thing to do in \"space reclamation\" project.\n\nI hope we can avoid on-disk FSM. Seems to me that that would create\nproblems both for performance (lots of extra disk I/O) and reliability\n(what happens if FSM is corrupted? A restart won't fix it).\n\nBut, if we do need it, most of the work needed to install FSM APIs\nshould carry over. So I still don't see an objection to doing\nin-memory FSM as a first step.\n\n\nBTW, I was digging through the old Postgres papers this afternoon,\nto refresh my memory about what they actually said about VACUUM.\nI was interested to discover that at one time the tuple-insertion\nalgorithm went as follows:\n 1. Pick a page at random in the relation, read it in, and see if it\n has enough free space. Repeat up to three times.\n 2. If #1 fails to find space, append tuple at end.\nWhen they got around to doing some performance measurement, they\ndiscovered that step #1 was a serious loser, and dropped it in favor\nof pure #2 (which is what we still have today). Food for thought.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 May 2001 00:32:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "\"Vadim Mikheev\" <vmikheev@sectorbase.com> writes:\n> It probably will not cause more IO than vacuum does right now.\n> But unfortunately it will not reduce that IO.\n\nUh ... what? 
Certainly it will reduce the total cost of vacuum,\nbecause it won't bother to try to move tuples to fill holes.\nThe index cleanup method I've proposed should be substantially\nmore efficient than the existing code, as well.\n\n> My point is that we'll need in dynamic cleanup anyway and UNDO is\n> what should be implemented for dynamic cleanup of aborted changes.\n\nUNDO might offer some other benefits, but I doubt that it will allow\nus to eliminate VACUUM completely. To do that, you would need to\nkeep track of free space using exact, persistent (on-disk) bookkeeping\ndata structures. The overhead of that will be very substantial: more,\nI predict, than the approximate approach I proposed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 May 2001 10:06:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "\n\nTom Lane wrote:\n\n> \n> Hm. On the other hand, relying on WAL for undo means you cannot drop\n> old WAL segments that contain records for any open transaction. We've\n> already seen several complaints that the WAL logs grow unmanageably huge\n> when there is a long-running transaction, and I think we'll see a lot\n> more.\n> \n> It would be nicer if we could drop WAL records after a checkpoint or two,\n> even in the presence of long-running transactions. We could do that if\n> we were only relying on them for crash recovery and not for UNDO.\n\nIn Oracle the REDO and UNDO are separate. REDO in oracle is limited in \nsize to x log files of size y (where x and y are parameters at database \ncreation). These x log files are reused in a round robin way with \ncheckpoints forced when wrapping around.\n\nUNDO in oracle is done by something known as a 'rollback segment'. \nThere are many rollback segments. A transaction is assigned one \nrollback segment to write its UNDO records to. 
Transactions are \nassigned to a rollback segment randomly (or you can explicitly assign a \ntransaction to a particular rollback segment if you want). The sizes of \nrollback segments are limited and transactions are aborted if they \nexceed that size. This is why oracle has the option to assign a \nspecific rollback segment to a transaction, so if you know you are going \nto update/insert tons of stuff you want to use an appropriately sized \nrollback segment for that transaction.\n\nThings I like about the Oracle approach vs current WAL REDO and proposed \nUNDO:\n\n1 Sizes allocated to REDO and UNDO in Oracle are known and \nconfigurable. WAL sizes are unknown and not constrained.\n2 Oracle allows for big transactions by allowing them to use \nspecifically sized rollback segments to handle the large transaction.\nWhile WAL could be enhanced to fix 1, it appears difficult to have \ndifferent limits for different types of transactions as Oracle supports \nin 2.\n\nThings I don't like about the Oracle approach:\n\n1 Not only updates, but also long running queries can be aborted if the \nrollback segment size is too small, as the undo is necessary to create \na snapshot of the state of the database at the time the query started.\n\nthanks,\n--Barry\n\n> \n\n", "msg_date": "Mon, 21 May 2001 15:49:30 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "\n\nMikheev, Vadim wrote:\n\n> \n> Ok, last reminder -:))\n> \n> On transaction abort, read WAL records and undo (rollback)\n> changes made in storage. Would allow:\n> \n> 1. Reclaim space allocated by aborted transactions.\n> 2. 
Implement SAVEPOINTs.\n> Just to remind -:) - in the event of error discovered by server\n> - duplicate key, deadlock, command mistyping, etc, - transaction\n> will be rolled back to the nearest implicit savepoint setted\n> just before query execution; - or transaction can be aborted by\n> ROLLBACK TO <savepoint_name> command to some explicit savepoint\n> setted by user. Transaction rolled back to savepoint may be continued.\n> 3. Reuse transaction IDs on postmaster restart.\n> 4. Split pg_log into small files with ability to remove old ones (which\n> do not hold statuses for any running transactions).\n> \n> Vadim\n\nThis is probably not a good thread to add my two cents worth, but here \ngoes anyway.\n\nThe biggest issue I see with the proposed UNDO using WAL is the issue of \nlarge/long lasting transactions. It might be possible to solve \nthis problem with some extra work; however, keep in mind that different \ntypes of transactions (i.e. normal vs bulk loads) require different \namounts of time and/or UNDO. To solve this problem, you really need \nper-transaction limits, which seems difficult to implement.\n\nI have no doubt that UNDO with WAL can be done. But is there some other \nway of doing UNDO that might be just as good or better?\n\nPart of what I see in this thread, reading between the lines, is that some \nbelieve the solution to many problems in the long term is to implement \nan overwriting storage manager. Implementing UNDO via WAL is a \nnecessary step in that direction. Others seem to believe that the \nnon-overwriting storage manager has some life in it yet, and may even be \nthe storage manager for many releases to come. I don't know enough \nabout the internals to have any say in that discussion; however, the \ngrass isn't always greener on the other side of the fence (i.e. an \noverwriting storage manager will come with its own set of problems/issues).\n\nSo let me throw out an idea for UNDO using the current storage manager. 
\nFirst let me state that UNDO is a bit of a misnomer, since undo for \ntransactions is already implemented. That is what pg_log is all about. \nThe part of UNDO that is missing is savepoints (either explicit or \nimplicit), because pg_log doesn't capture the information for each \ncommand in a transaction. So the question really becomes, how to \nimplement savepoints with the current storage manager?\n\nI am going to lay out one assumption that I am making:\n1) Most transactions are either completely successful or completely \nrolled back\n (If this weren't true, i.e. you really needed savepoints to partially \nroll back changes, you couldn't be using the existing version of \npostgresql at all)\n\nMy proposal is:\n 1) create a new relation to store 'failed commands' for transactions. \n This is similar to pg_log for transactions, but takes it to the \ncommand level. Since it records only failed commands (or ranges of \nfailed commands), most transactions will not have any information \nin this relation per the assumption above.\n 2) Use the unused pg_log status (3 = unused, 2 = commit, 1 = abort, 0 \n= inprocess) to mean that the transaction was committed but some commands \nwere rolled back (i.e. partial commit)\n Again for the majority of transactions nothing will need to change, \nsince they will still be marked as committed or aborted.\n 3) Code that determines whether or not a tuple is committed \nneeds to be aware of this new pg_log status, and look in the new \nrelation to see if the particular command was rolled back or not to \ndetermine the committed status of the tuple. 
This subtly changes the \nmeaning of HEAP_XMIN_COMMITTED and related flags to reflect the \ntransaction and command status instead of just the transaction status.\n\nThe runtime cost of this shouldn't be too high, since the committed \nstate is cached in HEAP_XMIN_COMMITTED et al; it is only the added cost \nfor the pass that needs to set these flags, and then there is only added \ncost in the case that the transaction wasn't completely successful (again \nmy assumption above).\n\nNow I have no idea if what I am proposing is really doable or not. I \nam just throwing this out as an alternative to WAL-based \nUNDO/savepoints. The reason I am doing this is that to me it seems to \nleverage much of the existing infrastructure already in place that \nperforms undo for rolled-back transactions (all the tmin, tmax, cmin, \ncmax stuff as well as vacuum). Also it doesn't come with the large WAL \nlog file problem for large transactions.\n\nNow having said all of this, I realize that this doesn't solve the 4 \nbillion transaction id limit problem, or the large size of the pg_log \nfile with large numbers of transactions.\n\nthanks,\n--Barry\n\n\n> \n\n", "msg_date": "Mon, 21 May 2001 17:18:18 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" } ]
[ { "msg_contents": "Greetings,\n I have made the following table(s),indexes,etc. I wonder if there\nis an index (or something else), I can create to make the query use a\n\"better\" plan. (not that it's slow at the moment, but as the table\ngrows...). \n\nSchema:\n\n--\n-- Selected TOC Entries:\n--\n\\connect - neteng\n--\n-- TOC Entry ID 2 (OID 18735)\n--\n-- Name: attack_types_id_seq Type: SEQUENCE Owner: neteng\n--\n\nCREATE SEQUENCE \"attack_types_id_seq\" start 1 increment 1 maxvalue 2147483647 minvalue 1 cache 1 ;\n\n--\n-- TOC Entry ID 3 (OID 18754)\n--\n-- Name: attack_types Type: TABLE Owner: neteng\n--\n\nCREATE TABLE \"attack_types\" (\n\t\"id\" integer DEFAULT nextval('\"attack_types_id_seq\"'::text) NOT NULL,\n\t\"attack_type\" character varying(30),\n\tConstraint \"attack_types_pkey\" Primary Key (\"id\")\n);\n\n--\n-- TOC Entry ID 4 (OID 18769)\n--\n-- Name: attack_db Type: TABLE Owner: neteng\n--\n\nCREATE TABLE \"attack_db\" (\n\t\"attack_type\" integer,\n\t\"start_time\" timestamp with time zone,\n\t\"end_time\" timestamp with time zone,\n\t\"src_router\" inet,\n\t\"input_int\" integer,\n\t\"output_int\" integer,\n\t\"src_as\" integer,\n\t\"src_ip\" inet,\n\t\"src_port\" integer,\n\t\"dst_as\" integer,\n\t\"dst_ip\" inet,\n\t\"dst_port\" integer,\n\t\"protocol\" integer,\n\t\"tos\" integer,\n\t\"pr_flags\" integer,\n\t\"pkts\" bigint,\n\t\"bytes\" bigint,\n\t\"next_hop\" inet\n);\n\n--\n-- TOC Entry ID 5 (OID 19897)\n--\n-- Name: protocols Type: TABLE Owner: neteng\n--\n\nCREATE TABLE \"protocols\" (\n\t\"proto\" integer,\n\t\"proto_name\" text\n);\n\n\\connect - ler\n--\n-- TOC Entry ID 12 (OID 20362)\n--\n-- Name: \"getattack_type\" (integer) Type: FUNCTION Owner: ler\n--\n\nCREATE FUNCTION \"getattack_type\" (integer) RETURNS text AS 'SELECT CAST(attack_type as text) from attack_types\nwhere id = $1;' LANGUAGE 'sql';\n\n--\n-- TOC Entry ID 13 (OID 20462)\n--\n-- Name: \"format_port\" (integer,integer) Type: FUNCTION Owner: ler\n--\n\nCREATE 
FUNCTION \"format_port\" (integer,integer) RETURNS text AS 'SELECT CASE \n WHEN $1 = 1 THEN trim(to_char(($2 >> 8) & 255, ''09'')) || ''-'' ||\n trim(to_char($2 & 255,''09''))\n WHEN $1 > 1 THEN trim(to_char($2,''00009''))\n END;' LANGUAGE 'sql';\n\n--\n-- TOC Entry ID 14 (OID 20508)\n--\n-- Name: \"get_protocol\" (integer) Type: FUNCTION Owner: ler\n--\n\nCREATE FUNCTION \"get_protocol\" (integer) RETURNS text AS 'SELECT proto_name FROM protocols\n WHERE proto = $1;' LANGUAGE 'sql';\n\n--\n-- TOC Entry ID 15 (OID 20548)\n--\n-- Name: \"format_protocol\" (integer) Type: FUNCTION Owner: ler\n--\n\nCREATE FUNCTION \"format_protocol\" (integer) RETURNS text AS 'SELECT CASE \n WHEN get_protocol($1) IS NOT NULL THEN trim(get_protocol($1))\n ELSE CAST($1 as text)\n END;' LANGUAGE 'sql';\n\n--\n-- TOC Entry ID 10 (OID 20816)\n--\n-- Name: \"plpgsql_call_handler\" () Type: FUNCTION Owner: ler\n--\n\nCREATE FUNCTION \"plpgsql_call_handler\" () RETURNS opaque AS '/usr/local/pgsql/lib/plpgsql.so', 'plpgsql_call_handler' LANGUAGE 'C';\n\n--\n-- TOC Entry ID 11 (OID 20817)\n--\n-- Name: plpgsql Type: PROCEDURAL LANGUAGE Owner: \n--\n\nCREATE TRUSTED PROCEDURAL LANGUAGE 'plpgsql' HANDLER \"plpgsql_call_handler\" LANCOMPILER 'PL/pgSQL';\n\n--\n-- TOC Entry ID 16 (OID 20831)\n--\n-- Name: \"tcp_flags\" (integer) Type: FUNCTION Owner: ler\n--\n\nCREATE FUNCTION \"tcp_flags\" (integer) RETURNS text AS 'DECLARE flag ALIAS for $1;\n ret text;\nBEGIN \n IF (flag & 128) = 128 THEN ret := ''C'';\n ELSE ret := '' '';\n END IF;\n IF (flag & 64) = 64 THEN ret := ret || ''E'';\n ELSE ret := ret || '' '';\n END IF;\n IF (flag & 32) = 32 THEN ret := ret || ''U'';\n ELSE ret := ret || '' '';\n END IF;\n IF (flag & 16) = 16 THEN ret := ret || ''A'';\n ELSE ret := ret || '' '';\n END IF;\n IF (flag & 8) = 8 THEN ret := ret || ''P'';\n ELSE ret := ret || '' '';\n END IF;\n IF (flag & 4) = 4 THEN ret := ret || ''R'';\n ELSE ret := ret || '' '';\n END IF;\n IF (flag & 2) = 2 THEN ret := ret || 
''S'';\n ELSE ret := ret || '' '';\n END IF;\n IF (flag & 1) = 1 THEN ret := ret || ''F'';\n ELSE ret := ret || '' '';\n END IF;\n RETURN ret;\nEND;' LANGUAGE 'plpgsql';\n\n--\n-- TOC Entry ID 6 (OID 21918)\n--\n-- Name: exempt_ips Type: TABLE Owner: ler\n--\n\nCREATE TABLE \"exempt_ips\" (\n\t\"ip\" inet\n);\n\n--\n-- TOC Entry ID 7 (OID 21918)\n--\n-- Name: exempt_ips Type: ACL Owner: \n--\n\nREVOKE ALL on \"exempt_ips\" from PUBLIC;\nGRANT ALL on \"exempt_ips\" to PUBLIC;\nGRANT ALL on \"exempt_ips\" to \"ler\";\n\n--\n-- TOC Entry ID 17 (OID 22324)\n--\n-- Name: \"format_flags\" (integer,integer) Type: FUNCTION Owner: ler\n--\n\nCREATE FUNCTION \"format_flags\" (integer,integer) RETURNS text AS 'SELECT CASE \n WHEN $1 = 6 THEN tcp_flags($2)\n ELSE ''N/A''\n END;' LANGUAGE 'sql';\n\n\\connect - neteng\n--\n-- TOC Entry ID 8 (OID 18769)\n--\n-- Name: \"end_index\" Type: INDEX Owner: neteng\n--\n\nCREATE INDEX \"end_index\" on \"attack_db\" using btree ( \"end_time\" \"timestamp_ops\" );\n\n--\n-- TOC Entry ID 9 (OID 18769)\n--\n-- Name: \"start_index\" Type: INDEX Owner: neteng\n--\n\nCREATE INDEX \"start_index\" on \"attack_db\" using btree ( \"start_time\" \"timestamp_ops\" );\n\n--\n-- TOC Entry ID 20 (OID 18802)\n--\n-- Name: \"RI_ConstraintTrigger_18801\" Type: TRIGGER Owner: neteng\n--\n\nCREATE CONSTRAINT TRIGGER \"attack_type\" AFTER INSERT OR UPDATE ON \"attack_db\" FROM \"attack_types\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE \"RI_FKey_check_ins\" ('attack_type', 'attack_db', 'attack_types', 'UNSPECIFIED', 'attack_type', 'id');\n\n--\n-- TOC Entry ID 18 (OID 18804)\n--\n-- Name: \"RI_ConstraintTrigger_18803\" Type: TRIGGER Owner: neteng\n--\n\nCREATE CONSTRAINT TRIGGER \"attack_type\" AFTER DELETE ON \"attack_types\" FROM \"attack_db\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE \"RI_FKey_noaction_del\" ('attack_type', 'attack_db', 'attack_types', 'UNSPECIFIED', 'attack_type', 'id');\n\n--\n-- TOC Entry ID 
19 (OID 18806)\n--\n-- Name: \"RI_ConstraintTrigger_18805\" Type: TRIGGER Owner: neteng\n--\n\nCREATE CONSTRAINT TRIGGER \"attack_type\" AFTER UPDATE ON \"attack_types\" FROM \"attack_db\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE \"RI_FKey_noaction_upd\" ('attack_type', 'attack_db', 'attack_types', 'UNSPECIFIED', 'attack_type', 'id');\n\nQuery: \n EXPLAIN\n SELECT to_char(start_time,'MM/DD/YY') as mmddyy, \n to_char(start_time,'HH24:MI:SS') as hhmmss, \n getattack_type(attack_type) as type, \n src_router as router, \n input_int as ii, \n output_int as oi, \n src_as as srcas,host(src_ip) || '/' || masklen(src_ip) || ':' || \n format_port(protocol,src_port) as src_address, \n dst_as as dstas,host(dst_ip) || '/' || masklen(dst_ip) || ':' || \n format_port(protocol,dst_port) as dst_address, \n format_protocol(protocol) as prot, \n tos,format_flags(protocol,pr_flags) as tcpflags, \n pkts,bytes, \n bytes/pkts as bytes_per_packet, \n to_char(end_time,'MM/DD/YY') as end_mmddyy, \n to_char(end_time,'HH24:MI:SS') as end_hhmmss, \n next_hop \n FROM attack_db \n WHERE (start_time >= now() - '02:00:00'::interval OR \n end_time >= now() - '02:00:00'::interval) \n AND host(src_ip) NOT IN (select host(ip) from exempt_ips) \n AND host(dst_ip) NOT IN (select host(ip) from exempt_ips) \n ORDER BY bytes DESC; ;\n\n\nExplain Output:\n\nNOTICE: QUERY PLAN:\n\nSort (cost=10870.77..10870.77 rows=5259 width=120)\n -> Seq Scan on attack_db (cost=0.00..10358.95 rows=5259 width=120)\n SubPlan\n -> Seq Scan on exempt_ips (cost=0.00..1.04 rows=4 width=12)\n -> Seq Scan on exempt_ips (cost=0.00..1.04 rows=4 width=12)\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Fri, 18 May 2001 20:22:02 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Interesting question" }, { "msg_contents": "* Larry Rosenman 
<ler@lerctr.org> [010518 20:25]:\n> Greetings,\n> I have made the following table(s),indexes,etc. I wonder if there\n> is an index (or something else), I can create to make the query use a\n> \"better\" plan. (not that it's slow at the moment, but as the table\n> grows...). \n(Oh, one point, this is 7.2devel...)\n\n> \n> Schema:\n> \n> --\n> -- Selected TOC Entries:\n> --\n> \\connect - neteng\n> --\n> -- TOC Entry ID 2 (OID 18735)\n> --\n> -- Name: attack_types_id_seq Type: SEQUENCE Owner: neteng\n> --\n> \n> CREATE SEQUENCE \"attack_types_id_seq\" start 1 increment 1 maxvalue 2147483647 minvalue 1 cache 1 ;\n> \n> --\n> -- TOC Entry ID 3 (OID 18754)\n> --\n> -- Name: attack_types Type: TABLE Owner: neteng\n> --\n> \n> CREATE TABLE \"attack_types\" (\n> \t\"id\" integer DEFAULT nextval('\"attack_types_id_seq\"'::text) NOT NULL,\n> \t\"attack_type\" character varying(30),\n> \tConstraint \"attack_types_pkey\" Primary Key (\"id\")\n> );\n> \n> --\n> -- TOC Entry ID 4 (OID 18769)\n> --\n> -- Name: attack_db Type: TABLE Owner: neteng\n> --\n> \n> CREATE TABLE \"attack_db\" (\n> \t\"attack_type\" integer,\n> \t\"start_time\" timestamp with time zone,\n> \t\"end_time\" timestamp with time zone,\n> \t\"src_router\" inet,\n> \t\"input_int\" integer,\n> \t\"output_int\" integer,\n> \t\"src_as\" integer,\n> \t\"src_ip\" inet,\n> \t\"src_port\" integer,\n> \t\"dst_as\" integer,\n> \t\"dst_ip\" inet,\n> \t\"dst_port\" integer,\n> \t\"protocol\" integer,\n> \t\"tos\" integer,\n> \t\"pr_flags\" integer,\n> \t\"pkts\" bigint,\n> \t\"bytes\" bigint,\n> \t\"next_hop\" inet\n> );\n> \n> --\n> -- TOC Entry ID 5 (OID 19897)\n> --\n> -- Name: protocols Type: TABLE Owner: neteng\n> --\n> \n> CREATE TABLE \"protocols\" (\n> \t\"proto\" integer,\n> \t\"proto_name\" text\n> );\n> \n> \\connect - ler\n> --\n> -- TOC Entry ID 12 (OID 20362)\n> --\n> -- Name: \"getattack_type\" (integer) Type: FUNCTION Owner: ler\n> --\n> \n> CREATE FUNCTION \"getattack_type\" (integer) RETURNS text AS 
'SELECT CAST(attack_type as text) from attack_types\n> where id = $1;' LANGUAGE 'sql';\n> \n> --\n> -- TOC Entry ID 13 (OID 20462)\n> --\n> -- Name: \"format_port\" (integer,integer) Type: FUNCTION Owner: ler\n> --\n> \n> CREATE FUNCTION \"format_port\" (integer,integer) RETURNS text AS 'SELECT CASE \n> WHEN $1 = 1 THEN trim(to_char(($2 >> 8) & 255, ''09'')) || ''-'' ||\n> trim(to_char($2 & 255,''09''))\n> WHEN $1 > 1 THEN trim(to_char($2,''00009''))\n> END;' LANGUAGE 'sql';\n> \n> --\n> -- TOC Entry ID 14 (OID 20508)\n> --\n> -- Name: \"get_protocol\" (integer) Type: FUNCTION Owner: ler\n> --\n> \n> CREATE FUNCTION \"get_protocol\" (integer) RETURNS text AS 'SELECT proto_name FROM protocols\n> WHERE proto = $1;' LANGUAGE 'sql';\n> \n> --\n> -- TOC Entry ID 15 (OID 20548)\n> --\n> -- Name: \"format_protocol\" (integer) Type: FUNCTION Owner: ler\n> --\n> \n> CREATE FUNCTION \"format_protocol\" (integer) RETURNS text AS 'SELECT CASE \n> WHEN get_protocol($1) IS NOT NULL THEN trim(get_protocol($1))\n> ELSE CAST($1 as text)\n> END;' LANGUAGE 'sql';\n> \n> --\n> -- TOC Entry ID 10 (OID 20816)\n> --\n> -- Name: \"plpgsql_call_handler\" () Type: FUNCTION Owner: ler\n> --\n> \n> CREATE FUNCTION \"plpgsql_call_handler\" () RETURNS opaque AS '/usr/local/pgsql/lib/plpgsql.so', 'plpgsql_call_handler' LANGUAGE 'C';\n> \n> --\n> -- TOC Entry ID 11 (OID 20817)\n> --\n> -- Name: plpgsql Type: PROCEDURAL LANGUAGE Owner: \n> --\n> \n> CREATE TRUSTED PROCEDURAL LANGUAGE 'plpgsql' HANDLER \"plpgsql_call_handler\" LANCOMPILER 'PL/pgSQL';\n> \n> --\n> -- TOC Entry ID 16 (OID 20831)\n> --\n> -- Name: \"tcp_flags\" (integer) Type: FUNCTION Owner: ler\n> --\n> \n> CREATE FUNCTION \"tcp_flags\" (integer) RETURNS text AS 'DECLARE flag ALIAS for $1;\n> ret text;\n> BEGIN \n> IF (flag & 128) = 128 THEN ret := ''C'';\n> ELSE ret := '' '';\n> END IF;\n> IF (flag & 64) = 64 THEN ret := ret || ''E'';\n> ELSE ret := ret || '' '';\n> END IF;\n> IF (flag & 32) = 32 THEN ret := ret || ''U'';\n> ELSE 
ret := ret || '' '';\n> END IF;\n> IF (flag & 16) = 16 THEN ret := ret || ''A'';\n> ELSE ret := ret || '' '';\n> END IF;\n> IF (flag & 8) = 8 THEN ret := ret || ''P'';\n> ELSE ret := ret || '' '';\n> END IF;\n> IF (flag & 4) = 4 THEN ret := ret || ''R'';\n> ELSE ret := ret || '' '';\n> END IF;\n> IF (flag & 2) = 2 THEN ret := ret || ''S'';\n> ELSE ret := ret || '' '';\n> END IF;\n> IF (flag & 1) = 1 THEN ret := ret || ''F'';\n> ELSE ret := ret || '' '';\n> END IF;\n> RETURN ret;\n> END;' LANGUAGE 'plpgsql';\n> \n> --\n> -- TOC Entry ID 6 (OID 21918)\n> --\n> -- Name: exempt_ips Type: TABLE Owner: ler\n> --\n> \n> CREATE TABLE \"exempt_ips\" (\n> \t\"ip\" inet\n> );\n> \n> --\n> -- TOC Entry ID 7 (OID 21918)\n> --\n> -- Name: exempt_ips Type: ACL Owner: \n> --\n> \n> REVOKE ALL on \"exempt_ips\" from PUBLIC;\n> GRANT ALL on \"exempt_ips\" to PUBLIC;\n> GRANT ALL on \"exempt_ips\" to \"ler\";\n> \n> --\n> -- TOC Entry ID 17 (OID 22324)\n> --\n> -- Name: \"format_flags\" (integer,integer) Type: FUNCTION Owner: ler\n> --\n> \n> CREATE FUNCTION \"format_flags\" (integer,integer) RETURNS text AS 'SELECT CASE \n> WHEN $1 = 6 THEN tcp_flags($2)\n> ELSE ''N/A''\n> END;' LANGUAGE 'sql';\n> \n> \\connect - neteng\n> --\n> -- TOC Entry ID 8 (OID 18769)\n> --\n> -- Name: \"end_index\" Type: INDEX Owner: neteng\n> --\n> \n> CREATE INDEX \"end_index\" on \"attack_db\" using btree ( \"end_time\" \"timestamp_ops\" );\n> \n> --\n> -- TOC Entry ID 9 (OID 18769)\n> --\n> -- Name: \"start_index\" Type: INDEX Owner: neteng\n> --\n> \n> CREATE INDEX \"start_index\" on \"attack_db\" using btree ( \"start_time\" \"timestamp_ops\" );\n> \n> --\n> -- TOC Entry ID 20 (OID 18802)\n> --\n> -- Name: \"RI_ConstraintTrigger_18801\" Type: TRIGGER Owner: neteng\n> --\n> \n> CREATE CONSTRAINT TRIGGER \"attack_type\" AFTER INSERT OR UPDATE ON \"attack_db\" FROM \"attack_types\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE \"RI_FKey_check_ins\" ('attack_type', 'attack_db', 
'attack_types', 'UNSPECIFIED', 'attack_type', 'id');\n> \n> --\n> -- TOC Entry ID 18 (OID 18804)\n> --\n> -- Name: \"RI_ConstraintTrigger_18803\" Type: TRIGGER Owner: neteng\n> --\n> \n> CREATE CONSTRAINT TRIGGER \"attack_type\" AFTER DELETE ON \"attack_types\" FROM \"attack_db\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE \"RI_FKey_noaction_del\" ('attack_type', 'attack_db', 'attack_types', 'UNSPECIFIED', 'attack_type', 'id');\n> \n> --\n> -- TOC Entry ID 19 (OID 18806)\n> --\n> -- Name: \"RI_ConstraintTrigger_18805\" Type: TRIGGER Owner: neteng\n> --\n> \n> CREATE CONSTRAINT TRIGGER \"attack_type\" AFTER UPDATE ON \"attack_types\" FROM \"attack_db\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE \"RI_FKey_noaction_upd\" ('attack_type', 'attack_db', 'attack_types', 'UNSPECIFIED', 'attack_type', 'id');\n> \n> Query: \n> EXPLAIN\n> SELECT to_char(start_time,'MM/DD/YY') as mmddyy, \n> to_char(start_time,'HH24:MI:SS') as hhmmss, \n> getattack_type(attack_type) as type, \n> src_router as router, \n> input_int as ii, \n> output_int as oi, \n> src_as as srcas,host(src_ip) || '/' || masklen(src_ip) || ':' || \n> format_port(protocol,src_port) as src_address, \n> dst_as as dstas,host(dst_ip) || '/' || masklen(dst_ip) || ':' || \n> format_port(protocol,dst_port) as dst_address, \n> format_protocol(protocol) as prot, \n> tos,format_flags(protocol,pr_flags) as tcpflags, \n> pkts,bytes, \n> bytes/pkts as bytes_per_packet, \n> to_char(end_time,'MM/DD/YY') as end_mmddyy, \n> to_char(end_time,'HH24:MI:SS') as end_hhmmss, \n> next_hop \n> FROM attack_db \n> WHERE (start_time >= now() - '02:00:00'::interval OR \n> end_time >= now() - '02:00:00'::interval) \n> AND host(src_ip) NOT IN (select host(ip) from exempt_ips) \n> AND host(dst_ip) NOT IN (select host(ip) from exempt_ips) \n> ORDER BY bytes DESC; ;\n> \n> \n> Explain Output:\n> \n> NOTICE: QUERY PLAN:\n> \n> Sort (cost=10870.77..10870.77 rows=5259 width=120)\n> -> Seq Scan on attack_db 
(cost=0.00..10358.95 rows=5259 width=120)\n> SubPlan\n> -> Seq Scan on exempt_ips (cost=0.00..1.04 rows=4 width=12)\n> -> Seq Scan on exempt_ips (cost=0.00..1.04 rows=4 width=12)\n> \n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Fri, 18 May 2001 20:26:02 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: Interesting question" }, { "msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> EXPLAIN\n> SELECT ...\n> FROM attack_db \n> WHERE (start_time >= now() - '02:00:00'::interval OR \n> end_time >= now() - '02:00:00'::interval) \n> AND host(src_ip) NOT IN (select host(ip) from exempt_ips) \n> AND host(dst_ip) NOT IN (select host(ip) from exempt_ips) \n> ORDER BY bytes DESC;\n\n> NOTICE: QUERY PLAN:\n\n> Sort (cost=10870.77..10870.77 rows=5259 width=120)\n> -> Seq Scan on attack_db (cost=0.00..10358.95 rows=5259 width=120)\n> SubPlan\n> -> Seq Scan on exempt_ips (cost=0.00..1.04 rows=4 width=12)\n> -> Seq Scan on exempt_ips (cost=0.00..1.04 rows=4 width=12)\n\n\nMaking use of the indexes on start_time and end_time would be a good\nthing. The reason it's not doing that is it doesn't think that the\nexpressions \"now() - '02:00:00'::interval\" reduce to constants. 
We\nmay have a proper solution for that by the time 7.2 comes out, but\nin the meantime you could fake it with a function that hides the\nnoncacheable function and operator --- see previous discussions of\nthis identical issue in the archives.\n\nThe NOT INs are pretty ugly too (and do you need the host() conversion\nthere? Seems like a waste of cycles...). You might be able to live\nwith that if the timestamp condition will always be pretty restrictive,\nbut otherwise they'll be a no go. Consider NOT EXISTS with an index\non exempt_ips(ip).\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 22:09:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Interesting question " }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010518 21:09]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> > EXPLAIN\n> > SELECT ...\n> > FROM attack_db \n> > WHERE (start_time >= now() - '02:00:00'::interval OR \n> > end_time >= now() - '02:00:00'::interval) \n> > AND host(src_ip) NOT IN (select host(ip) from exempt_ips) \n> > AND host(dst_ip) NOT IN (select host(ip) from exempt_ips) \n> > ORDER BY bytes DESC;\n> \n> > NOTICE: QUERY PLAN:\n> \n> > Sort (cost=10870.77..10870.77 rows=5259 width=120)\n> > -> Seq Scan on attack_db (cost=0.00..10358.95 rows=5259 width=120)\n> > SubPlan\n> > -> Seq Scan on exempt_ips (cost=0.00..1.04 rows=4 width=12)\n> > -> Seq Scan on exempt_ips (cost=0.00..1.04 rows=4 width=12)\n> \n> \n> Making use of the indexes on start_time and end_time would be a good\n> thing. The reason it's not doing that is it doesn't think that the\n> expressions \"now() - '02:00:00'::interval\" reduce to constants. We\n> may have a proper solution for that by the time 7.2 comes out, but\n> in the meantime you could fake it with a function that hides the\n> noncacheable function and operator --- see previous discussions of\n> this identical issue in the archives.\nOK. What would you suggest for the function? 
I'd like the\n'02:00:00'::interval to be a variable somehow to change the \ninterval we're searching. What fills the table is a daemon that is\nlooking at the netflow data, and when a packet that matches one of the\nattack profiles comes along, it does an insert into attack_db. \n\n\n> \n> The NOT INs are pretty ugly too (and do you need the host() conversion\n> there? Seems like a waste of cycles...). You might be able to live\n> with that if the timestamp condition will always be pretty restrictive,\n> but otherwise they'll be a no go. Consider NOT EXISTS with an index\n> on exempt_ips(ip).\nYes, because the masks will probably be different each time (this is\nfrom netflow data from my cisco's). The exempt IP's table is, at the\nmoment 4 ip's, so that's quick anyway. \n\n> \n> \t\t\tregards, tom lane\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Fri, 18 May 2001 21:44:00 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: Interesting question" }, { "msg_contents": "* Larry Rosenman <ler@lerctr.org> [010518 21:48]:\n> * Tom Lane <tgl@sss.pgh.pa.us> [010518 21:09]:\n> > Larry Rosenman <ler@lerctr.org> writes:\n> > > EXPLAIN\n> > > SELECT ...\n> > > FROM attack_db \n> > > WHERE (start_time >= now() - '02:00:00'::interval OR \n> > > end_time >= now() - '02:00:00'::interval) \n> > > AND host(src_ip) NOT IN (select host(ip) from exempt_ips) \n> > > AND host(dst_ip) NOT IN (select host(ip) from exempt_ips) \n> > > ORDER BY bytes DESC;\n> > \n> > > NOTICE: QUERY PLAN:\n> > \n> > > Sort (cost=10870.77..10870.77 rows=5259 width=120)\n> > > -> Seq Scan on attack_db (cost=0.00..10358.95 rows=5259 width=120)\n> > > SubPlan\n> > > -> Seq Scan on exempt_ips (cost=0.00..1.04 rows=4 width=12)\n> > > -> Seq Scan on exempt_ips (cost=0.00..1.04 rows=4 width=12)\n> > \n> > \n> > Making use of the indexes on 
start_time and end_time would be a good\n> > thing. The reason it's not doing that is it doesn't think that the\n> > expressions \"now() - '02:00:00'::interval\" reduce to constants. We\n> > may have a proper solution for that by the time 7.2 comes out, but\n> > in the meantime you could fake it with a function that hides the\n> > noncacheable function and operator --- see previous discussions of\n> > this identical issue in the archives.\n> OK. What would you suggest for the function? I'd like the\n> '02:00:00'::interval to be a variable somehow to change the \n> interval we're searching. What fills the table is a daemon that is\n> looking at the netflow data, and when a packet that matches one of the\n> attack profiles comes along, it does an insert into attack_db. \nI tried the following function:\n\n--\n-- TOC Entry ID 15 (OID 35180)\n--\n-- Name: \"nowminus\" (interval) Type: FUNCTION Owner: ler\n--\n\nCREATE FUNCTION \"nowminus\" (interval) RETURNS timestamp with time zone AS 'SELECT now() - $1;' LANGUAGE 'sql';\n\nand the following query:\n\n EXPLAIN\n SELECT to_char(start_time,'MM/DD/YY') as mmddyy, \n to_char(start_time,'HH24:MI:SS') as hhmmss, \n getattack_type(attack_type) as type, \n src_router as router, \n input_int as ii, \n output_int as oi, \n src_as as srcas,host(src_ip) || '/' || masklen(src_ip) || ':' || \n format_port(protocol,src_port) as src_address, \n dst_as as dstas,host(dst_ip) || '/' || masklen(dst_ip) || ':' || \n format_port(protocol,dst_port) as dst_address, \n format_protocol(protocol) as prot, \n tos,format_flags(protocol,pr_flags) as tcpflags, \n pkts,bytes, \n bytes/pkts as bytes_per_packet, \n to_char(end_time,'MM/DD/YY') as end_mmddyy, \n to_char(end_time,'HH24:MI:SS') as end_hhmmss, \n next_hop \n FROM attack_db \n WHERE (start_time >= nowminus('02:00:00'::interval) OR \n end_time >= nowminus('02:00:00'::interval) )\n AND host(src_ip) NOT IN (select host(ip) from exempt_ips) \n AND host(dst_ip) NOT IN (select host(ip) from 
exempt_ips) \n ORDER BY bytes DESC; ;\n\nAnd got the following plan:\n\nNOTICE: QUERY PLAN:\n\nSort (cost=11313.95..11313.95 rows=5497 width=120)\n -> Seq Scan on attack_db (cost=0.00..10777.58 rows=5497 width=120)\n SubPlan\n -> Seq Scan on exempt_ips (cost=0.00..1.04 rows=4 width=12)\n -> Seq Scan on exempt_ips (cost=0.00..1.04 rows=4 width=12)\n\nEXPLAIN\n> \n> \n> > \n> > The NOT INs are pretty ugly too (and do you need the host() conversion\n> > there? Seems like a waste of cycles...). You might be able to live\n> > with that if the timestamp condition will always be pretty restrictive,\n> > but otherwise they'll be a no go. Consider NOT EXISTS with an index\n> > on exempt_ips(ip).\n> Yes, because the masks will probably be different each time (this is\n> from netflow data from my cisco's). The exempt IP's table is, at the\n> moment 4 ip's, so that's quick anyway. \n> \n> > \n> > \t\t\tregards, tom lane\n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Fri, 18 May 2001 22:27:50 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: Interesting question" }, { "msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> CREATE FUNCTION \"nowminus\" (interval) RETURNS timestamp with time zone AS 'SELECT now() - $1;' LANGUAGE 'sql';\n\nRight idea, but you need to mark it iscachable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 23:38:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Interesting question " }, { 
"msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010518 22:39]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> > CREATE FUNCTION \"nowminus\" (interval) RETURNS timestamp with time zone AS 'SELECT now() - $1;' LANGUAGE 'sql';\n> \n> Right idea, but you need to mark it iscachable.\nAha:\n\nSame query, with nowminus marked iscachable:\nNOTICE: QUERY PLAN:\n\nSort (cost=513.69..513.69 rows=447 width=120)\n -> Index Scan using start_index, end_index on attack_db (cost=0.00..494.01 rows=447 width=120)\n SubPlan\n -> Seq Scan on exempt_ips (cost=0.00..1.04 rows=4 width=12)\n -> Seq Scan on exempt_ips (cost=0.00..1.04 rows=4 width=12)\n\nEXPLAIN\n> \n> \t\t\tregards, tom lane\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sat, 19 May 2001 07:21:47 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: Interesting question" } ]
[ { "msg_contents": "We have on the TODO list:\n\n\t* SELECT pg_class FROM pg_class generates strange error\n\nIt passes the tablename as targetlist all the way to the executor, where\nit throws an error about Node 704 unknown.\n\nThis patch fixes the problem by generating an error in the parser:\n\n\ttest=> select pg_class from pg_class;\n\tERROR: You can't use a relation alone in a target list.\n\nIt passes regression tests.\n\nFYI, I am working down the TODO list, doing items I can handle.\n\nI will apply later if no one objects.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/parser/parse_target.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/parse_target.c,v\nretrieving revision 1.66\ndiff -c -r1.66 parse_target.c\n*** src/backend/parser/parse_target.c\t2001/03/22 03:59:41\t1.66\n--- src/backend/parser/parse_target.c\t2001/05/19 03:07:41\n***************\n*** 55,60 ****\n--- 55,63 ----\n \tif (expr == NULL)\n \t\texpr = transformExpr(pstate, node, EXPR_COLUMN_FIRST);\n \n+ \tif (IsA(expr, Ident) && ((Ident *) expr)->isRel)\n+ \t\telog(ERROR,\"You can't use a relation alone in a target list.\");\n+ \n \ttype_id = exprType(expr);\n \ttype_mod = exprTypmod(expr);", "msg_date": "Fri, 18 May 2001 23:17:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Fix for tablename in targetlist" }, { "msg_contents": "Bruce Momjian writes:\n\n> This patch fixes the problem by generating an error in the parser:\n>\n> \ttest=> select pg_class from pg_class;\n> \tERROR: You can't use a relation alone in a target list.\n\nMaybe it's the parser that's getting it wrong. 
What if pg_class has a\ncolumn called pg_class?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 19 May 2001 10:06:37 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Fix for tablename in targetlist" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > This patch fixes the problem by generating an error in the parser:\n> >\n> > \ttest=> select pg_class from pg_class;\n> > \tERROR: You can't use a relation alone in a target list.\n> \n> Maybe it's the parser that's getting it wrong. What if pg_class has a\n> column called pg_class?\n\nThe parser doesn't know about columns or table names. It just passes\nthem along. The code checks the ident and sets isRel if it matches\nsomething in the range table. Seems like it checks for column matches\nin the range table first. Looks OK:\n\t\n\ttest=> create table test (test int);\n\tCREATE\n\ttest=> insert into test values (1);\n\tINSERT 145570 1\n\ttest=> select test from test;\n\t test \n\t------\n\t 1\n\t(1 row)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 19 May 2001 08:09:00 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Fix for tablename in targetlist" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Bruce Momjian writes:\n>> This patch fixes the problem by generating an error in the parser:\n>> \n>> test=> select pg_class from pg_class;\n>> ERROR: You can't use a relation alone in a target list.\n\n> Maybe it's the parser that's getting it wrong. What if pg_class has a\n> column called pg_class?\n\nNot an issue: the ambiguous name will be resolved as a column name, and\nit will be so resolved before this code executes. 
We do know at this\npoint that we have an unadorned relation name; the question is what\nto do with it.\n\nI had a thought this morning that raising an error may be the wrong\nthing to do. We could instead choose to expand the name into\n\"pg_class.*\", which would take only a little more code and would\narguably do something useful instead of useless. (I suspect that the\nfjoin stuff that still remains in the backend was originally designed\nto support exactly this interpretation.)\n\nOf course, if we did that, then \"select huge_table;\" might spit back\na lot of stuff at you before you remembered you'd left off the rest\nof the query ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 19 May 2001 10:50:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix for tablename in targetlist " }, { "msg_contents": "> > Maybe it's the parser that's getting it wrong. What if pg_class has a\n> > column called pg_class?\n> \n> Not an issue: the ambiguous name will be resolved as a column name, and\n> it will be so resolved before this code executes. We do know at this\n> point that we have an unadorned relation name; the question is what\n> to do with it.\n> \n> I had a thought this morning that raising an error may be the wrong\n> thing to do. We could instead choose to expand the name into\n> \"pg_class.*\", which would take only a little more code and would\n> arguably do something useful instead of useless. (I suspect that the\n> fjoin stuff that still remains in the backend was originally designed\n> to support exactly this interpretation.)\n> \n> Of course, if we did that, then \"select huge_table;\" might spit back\n> a lot of stuff at you before you remembered you'd left off the rest\n> of the query ;-)\n\nI thought about adding the *. We already allow 'SELECT tab.*'. Should\n'SELECT tab' behave the same? 
It certainly could.\n\nActually, I just tried:\n\t\n\ttest=> select test;\n\tERROR: Attribute 'test' not found\n\ttest=> select test.*;\n\t test \n\t------\n\t 1\n\t(1 row)\n\nSeems a tablename with no FROM clause doesn't get marked as isRel\nbecause it is not in the range table to be matched.\n\nWhat would happen if we added auto-star is that a table name in a target\nlist would automatically become tablename.*. Seems it is too prone to\ncause bad queries to be accepted.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 19 May 2001 11:24:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Fix for tablename in targetlist" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Seems a tablename with no FROM clause doesn't get marked as isRel\n> because it is not in the range table to be matched.\n\n> What would happen if we added auto-star is that a table name in a target\n> list would automatically become tablename.*. Seems it is too prone to\n> cause bad queries to be accepted.\n\nNo; the auto-star would only happen if the thing is marked isRel.\nSo it would just cover the case of \"select tab from tab\". It seems\nreasonable to me --- what other possible interpretation of the meaning\nis there?\n\nI tend to agree that we should not change the code to make \"select tab\"\nwork, on the grounds of error-proneness.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 19 May 2001 11:33:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix for tablename in targetlist " }, { "msg_contents": "> No; the auto-star would only happen if the thing is marked isRel.\n> So it would just cover the case of \"select tab from tab\". 
It seems\n> reasonable to me --- what other possible interpretation of the meaning\n> is there?\n> \n> I tend to agree that we should not change the code to make \"select tab\"\n> work, on the grounds of error-proneness.\n\nOK, here is another patch that does this:\n\n\ttest=> select test from test;\n\t test \n\t------\n\t 1\n\t(1 row)\n\nIt is small so I am attaching it here.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/parser/parse_target.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/parse_target.c,v\nretrieving revision 1.66\ndiff -c -r1.66 parse_target.c\n*** src/backend/parser/parse_target.c\t2001/03/22 03:59:41\t1.66\n--- src/backend/parser/parse_target.c\t2001/05/19 17:18:11\n***************\n*** 154,166 ****\n \t\t}\n \t\telse\n \t\t{\n! \t\t\t/* Everything else but Attr */\n! \t\t\tp_target = lappend(p_target,\n! \t\t\t\t\t\t\t transformTargetEntry(pstate,\n! \t\t\t\t\t\t\t\t\t\t\t\t\tres->val,\n! \t\t\t\t\t\t\t\t\t\t\t\t\tNULL,\n! \t\t\t\t\t\t\t\t\t\t\t\t\tres->name,\n! \t\t\t\t\t\t\t\t\t\t\t\t\tfalse));\n \t\t}\n \n \t\ttargetlist = lnext(targetlist);\n--- 154,183 ----\n \t\t}\n \t\telse\n \t\t{\n! \t\t\tNode\t *rteorjoin;\n! \t\t\tint\t\t\tsublevels_up;\n! \n! \t\t\tif (IsA(res->val, Ident) &&\n! \t\t\t\t(rteorjoin = refnameRangeOrJoinEntry(pstate,\n! \t\t\t\t\t\t\t\t\t\t\t\t((Ident *) res->val)->name,\n! \t\t\t\t\t\t\t\t\t\t\t\t&sublevels_up)) != NULL &&\n! \t\t\t\tIsA(rteorjoin, RangeTblEntry))\n! \t\t\t{\n! \t\t\t\t/* Expand SELECT tab FROM tab; to SELECT tab.* FROM tab; */\n! \t\t\t\tp_target = nconc(p_target,\n! \t\t\t\t\t\t\texpandRelAttrs(pstate,\n! \t\t\t\t\t\t\t (RangeTblEntry *) rteorjoin));\n! \t\t\t}\n! \t\t\telse\n! \t\t\t{\n! \t\t\t\t/* Everything else */\n! 
\t\t\t\tp_target = lappend(p_target,\n! \t\t\t\t\t\t\t\t transformTargetEntry(pstate,\n! \t\t\t\t\t\t\t\t\t\t\t\t\t\tres->val,\n! \t\t\t\t\t\t\t\t\t\t\t\t\t\tNULL,\n! \t\t\t\t\t\t\t\t\t\t\t\t\t\tres->name,\n! \t\t\t\t\t\t\t\t\t\t\t\t\t\tfalse));\n! \t\t\t}\n \t\t}\n \n \t\ttargetlist = lnext(targetlist);", "msg_date": "Sat, 19 May 2001 13:20:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Fix for tablename in targetlist" }, { "msg_contents": "Tom Lane writes:\n\n> I had a thought this morning that raising an error may be the wrong\n> thing to do. We could instead choose to expand the name into\n> \"pg_class.*\",\n\nYeah, and maybe if there's a database called pg_class we select from all\ntables at once. In fact, this should be the default behaviour if I just\ntype 'SELECT;'.\n\nNo really, I don't see a point of not enforcing the correct syntax, when\nadding '.*' is all it takes to get the alternative behaviour in a standard\nway.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 19 May 2001 19:34:40 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Fix for tablename in targetlist " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> No really, I don't see a point of not enforcing the correct syntax, when\n> adding '.*' is all it takes to get the alternative behaviour in a standard\n> way.\n\nTrue, although there's a certain inconsistency in allowing a whole row\nto be passed to a function by\n\n\tselect foo(pg_class) from pg_class;\n\nand not allowing the same row to be output by\n\n\tselect pg_class from pg_class;\n\nI don't feel strongly about it though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 19 May 2001 20:23:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix for tablename in targetlist " }, { "msg_contents": "Bruce 
Momjian <pgman@candle.pha.pa.us> writes:\n> OK, here is another patch that does this:\n\nThat seems considerably uglier than your first patch. In particular,\nwhy aren't you looking for isRel being set in the Ident node? It\nlooks to me like you may have changed the behavior in the case where\nthe Ident could be either a table or column name.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 19 May 2001 20:26:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix for tablename in targetlist " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, here is another patch that does this:\n> \n> That seems considerably uglier than your first patch. In particular,\n> why aren't you looking for isRel being set in the Ident node? It\n> looks to me like you may have changed the behavior in the case where\n> the Ident could be either a table or column name.\n\nOK, here is a new patch. I thought I had to go through\ntransformTargetEntry() -> transformExpr() -> transformIdent() to get\nIdent.isRel set. Seems it is set earlier too, so the new code is\nshorter. I am still researching the purpose of Ident.isRel. If someone\nknows, please chime in. I know it says the Ident is a relation, but why\nhave a field when you can look it up in the rangetable?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: doc/TODO\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/doc/TODO,v\nretrieving revision 1.464\ndiff -c -r1.464 TODO\n*** doc/TODO\t2001/05/18 16:28:12\t1.464\n--- doc/TODO\t2001/05/20 00:35:29\n***************\n*** 140,146 ****\n * Allow RULE recompilation\n * Add BETWEEN ASYMMETRIC/SYMMETRIC\n * Change LIMIT val,val to offset,limit to match MySQL\n! 
* Allow Pl/PgSQL's RAISE function to take expressions\n * ALTER\n \t* ALTER TABLE ADD COLUMN to inherited table put column in wrong place\n \t [inheritance]\n--- 140,146 ----\n * Allow RULE recompilation\n * Add BETWEEN ASYMMETRIC/SYMMETRIC\n * Change LIMIT val,val to offset,limit to match MySQL\n! * Allow PL/PgSQL's RAISE function to take expressions\n * ALTER\n \t* ALTER TABLE ADD COLUMN to inherited table put column in wrong place\n \t [inheritance]\nIndex: src/backend/parser/parse_target.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/parse_target.c,v\nretrieving revision 1.66\ndiff -c -r1.66 parse_target.c\n*** src/backend/parser/parse_target.c\t2001/03/22 03:59:41\t1.66\n--- src/backend/parser/parse_target.c\t2001/05/20 00:35:30\n***************\n*** 154,166 ****\n \t\t}\n \t\telse\n \t\t{\n! \t\t\t/* Everything else but Attr */\n! \t\t\tp_target = lappend(p_target,\n! \t\t\t\t\t\t\t transformTargetEntry(pstate,\n! \t\t\t\t\t\t\t\t\t\t\t\t\tres->val,\n! \t\t\t\t\t\t\t\t\t\t\t\t\tNULL,\n! \t\t\t\t\t\t\t\t\t\t\t\t\tres->name,\n! \t\t\t\t\t\t\t\t\t\t\t\t\tfalse));\n \t\t}\n \n \t\ttargetlist = lnext(targetlist);\n--- 154,182 ----\n \t\t}\n \t\telse\n \t\t{\n! \t\t\tNode\t *rteorjoin;\n! \t\t\tint\t\t\tsublevels_up;\n! \n! \t\t\tif (IsA(res->val, Ident) && ((Ident *) res->val)->isRel)\n! \t\t\t{\n! \t\t\t\trteorjoin = refnameRangeOrJoinEntry(pstate,\n! \t\t\t\t\t\t\t\t\t\t\t\t((Ident *) res->val)->name,\n! \t\t\t\t\t\t\t\t\t\t\t\t&sublevels_up);\n! \t\t\t\tAssert(rteorjoin != NULL);\n! \t\t\t\tp_target = nconc(p_target,\n! \t\t\t\t\t\t\texpandRelAttrs(pstate,\n! \t\t\t\t\t\t\t (RangeTblEntry *) rteorjoin));\n! \t\t\t}\n! \t\t\telse\n! \t\t\t{\n! \t\t\t\t/* Everything else */\n! \t\t\t\tp_target = lappend(p_target,\n! \t\t\t\t\t\t\t\t transformTargetEntry(pstate,\n! \t\t\t\t\t\t\t\t\t\t\t\t\t\tres->val,\n! \t\t\t\t\t\t\t\t\t\t\t\t\t\tNULL,\n! 
\t\t\t\t\t\t\t\t\t\t\t\t\t\tres->name,\n! \t\t\t\t\t\t\t\t\t\t\t\t\t\tfalse));\n! \t\t\t}\n \t\t}\n \n \t\ttargetlist = lnext(targetlist);", "msg_date": "Sat, 19 May 2001 20:38:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Fix for tablename in targetlist" }, { "msg_contents": "> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > OK, here is another patch that does this:\n> > \n> > That seems considerably uglier than your first patch. In particular,\n> > why aren't you looking for isRel being set in the Ident node? It\n> > looks to me like you may have changed the behavior in the case where\n> > the Ident could be either a table or column name.\n> \n> OK, here is a new patch. I thought I had to go through\n> transformTargetEntry() -> transformExpr() -> transformIdent() to get\n> Ident.isRel set. Seems it is set earlier too, so the new code is\n> shorter. I am still researching the purpose of Ident.isRel. If someone\n> knows, please chime in. I know it says the Ident is a relation, but why\n> have a field when you can look it up in the rangetable?\n\nThis patch was no good. It worked only because I had created test as:\n\n\tCREATE TABLE test ( test int);\n\nto test Peter's test of matching column/table names. In fact, I was\nright that you have to call transformTargetEntry() -> transformExpr() ->\ntransformIdent() to get isRel set, and I have to do the longer fix.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 19 May 2001 22:50:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Fix for tablename in targetlist" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> In fact, I was\n> right that you have to call transformTargetEntry() -> transformExpr() ->\n> transformIdent() to get isRel set, and I have to do the longer fix.\n\nYes, I would think that you should do transformTargetEntry() first and\nthen look to see if you have an Ident w/ isRel set. The initial patch\nwas OK because it happened after that transformation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 May 2001 00:30:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix for tablename in targetlist " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > In fact, I was\n> > right that you have to call transformTargetEntry() -> transformExpr() ->\n> > transformIdent() to get isRel set, and I have to do the longer fix.\n> \n> Yes, I would think that you should do transformTargetEntry() first and\n> then look to see if you have an Ident w/ isRel set. The initial patch\n> was OK because it happened after that transformation.\n\nYes, and you can't do that later because you need to add to the target\nlist. Calling transformTargetEntry returns a Resdom, which I don't\nthink I can tell if that is a rel or not, and it mucks up\npstate->p_last_resno++. I will show the patch on the patches list. It\ndoes things similar to what happens above in that function.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 20 May 2001 00:35:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Fix for tablename in targetlist" }, { "msg_contents": "On Sat, May 19, 2001 at 10:50:31AM -0400, Tom Lane wrote:\n> I had a thought this morning that raising an error may be the wrong\n> thing to do. We could instead choose to expand the name into\n> \"pg_class.*\", which would take only a little more code and would\n> arguably do something useful instead of useless. (I suspect that the\n> fjoin stuff that still remains in the backend was originally designed\n> to support exactly this interpretation.)\n\nThis is almost certainly the wrong thing to do. It would introduce\nambiguity to the syntax, that can only be error prone in the long run.\n\nWhat happens if people put that kind of query in a view, or hard coded\ninto a program somewhere, then later decide to ALTER TABLE to add a\ncolumn by that name?\n\nIf somebody forgets the \".*\", they should reasonably expect an error\nmessage. (And, I would personally be annoyed if I didn't get one, and\ninstead my incorrect query went through)\n\n-- \nMichael Samuel <michael@miknet.net>\n", "msg_date": "Thu, 24 May 2001 09:25:09 +1000", "msg_from": "Michael Samuel <michael@miknet.net>", "msg_from_op": false, "msg_subject": "Re: Fix for tablename in targetlist" }, { "msg_contents": "Bruce,\n\nOn Fri, 18 May 2001, Bruce Momjian wrote:\n\n> We have on the TODO list:\n> \n> \t* SELECT pg_class FROM pg_class generates strange error\n> \n> It passes the tablename as targetlist all the way to the executor, where\n> it throws an error about Node 704 unkown.\n\nThe problem is caused in transformIdent() (parse_expr.c):\n\n if (ident->indirection == NIL &&\n refnameRangeTableEntry(pstate, ident->name) != NULL)\n {\n ident->isRel = TRUE;\n result = (Node *) ident;\n }\n\nIt is pretty clear what is happening here. 
ident->name is a member of\nrange table so the type of ident is not changed, as would be the case with\nan attribute. Commenting this code out means that result = NULL and the\nerror 'Attribute 'pg_class' not found'. This, in my opinion, is the\ncorrect error to be generated. Moreover, I cannot find any flow on effect\nwhich may result from removing this code -- regression tests all\npass. From what I can tell, all transformations of Nodes which are of type\nIdent should have already been transformed anyway -- have I overlooked\nsomething?\n\nGavin\n\n", "msg_date": "Tue, 12 Jun 2001 23:17:09 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Fix for tablename in targetlist" }, { "msg_contents": "> Bruce,\n> \n> On Fri, 18 May 2001, Bruce Momjian wrote:\n> \n> > We have on the TODO list:\n> > \n> > \t* SELECT pg_class FROM pg_class generates strange error\n> > \n> > It passes the tablename as targetlist all the way to the executor, where\n> > it throws an error about Node 704 unkown.\n> \n> The problem is caused in transformIdent() (parse_expr.c):\n> \n> if (ident->indirection == NIL &&\n> refnameRangeTableEntry(pstate, ident->name) != NULL)\n> {\n> ident->isRel = TRUE;\n> result = (Node *) ident;\n> }\n> \n> It is pretty clear what is happening here. ident->name is a member of\n> range table so the type of ident is not changed, as would be the case with\n> an attribute. Commenting this code out means that result = NULL and the\n> error 'Attribute 'pg_class' not found'. This, in my opinion, is the\n> correct error to be generated. Moreover, I cannot find any flow on effect\n> which may result from removing this code -- regression tests all\n> pass. From what I can tell, all transformations of Nodes which are of type\n> Ident should have already been transformed anyway -- have I over looked\n> something?\n\nI am confused. I thought I fixed this about a month ago. Do we need\nmore code added here? 
\n\nYou are suggesting throwing an error as soon as an ident appears as a\nrelation. I don't know enough about the code to be sure that is OK. I\nrealize the regression tests pass.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Jun 2001 10:59:46 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Fix for tablename in targetlist" }, { "msg_contents": "> > You are suggesting throwing an error as soon as an idend appears as a\n> > relation. I don't know enough about the code to be sure that is OK. I\n> > realize the regression tests pass.\n> \n> Removing the said code and not applying your patch allows the parser to\n> recognise that pg_class is not an attribute of pg_class relation. There\n> does not seem to be any side effect from removing this code, though I\n> would like to see if someone can find fault in that. If there is no\n> problem, then -- in light of the discussion on this a month or so ago --\n> SELECT pg_class FROM pg_class should be be considered 'select the column\n> pg_class from the pg_class relation' which is the same as SELECT\n> nosuchcolumn FROM pg_class. Isn't this the most effective way to solve the\n> problem then?\n\nUh..., I don't know.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Jun 2001 22:31:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Fix for tablename in targetlist" }, { "msg_contents": "Bruce,\n\nOn Tue, 12 Jun 2001, Bruce Momjian wrote:\n\n> > Bruce,\n> > \n> > On Fri, 18 May 2001, Bruce Momjian wrote:\n> > \n> > > We have on the TODO list:\n> > > \n> > > \t* SELECT pg_class FROM pg_class generates strange error\n> > > \n> > > It passes the tablename as targetlist all the way to the executor, where\n> > > it throws an error about Node 704 unkown.\n> > \n> > The problem is caused in transformIdent() (parse_expr.c):\n> > \n> > if (ident->indirection == NIL &&\n> > refnameRangeTableEntry(pstate, ident->name) != NULL)\n> > {\n> > ident->isRel = TRUE;\n> > result = (Node *) ident;\n> > }\n> > \n> > It is pretty clear what is happening here. ident->name is a member of\n> > range table so the type of ident is not changed, as would be the case with\n> > an attribute. Commenting this code out means that result = NULL and the\n> > error 'Attribute 'pg_class' not found'. This, in my opinion, is the\n> > correct error to be generated. Moreover, I cannot find any flow on effect\n> > which may result from removing this code -- regression tests all\n> > pass. From what I can tell, all transformations of Nodes which are of type\n> > Ident should have already been transformed anyway -- have I over looked\n> > something?\n> \n> I am confused. I thought I fixed this about a month ago. Do we need\n> more coded added here? \n> \n> You are suggesting throwing an error as soon as an idend appears as a\n> relation. I don't know enough about the code to be sure that is OK. I\n> realize the regression tests pass.\n\nRemoving the said code and not applying your patch allows the parser to\nrecognise that pg_class is not an attribute of pg_class relation. 
There\ndoes not seem to be any side effect from removing this code, though I\nwould like to see if someone can find fault in that. If there is no\nproblem, then -- in light of the discussion on this a month or so ago --\nSELECT pg_class FROM pg_class should be be considered 'select the column\npg_class from the pg_class relation' which is the same as SELECT\nnosuchcolumn FROM pg_class. Isn't this the most effective way to solve the\nproblem then?\n\nThanks\n\nGavin\n\n", "msg_date": "Wed, 13 Jun 2001 12:33:39 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Fix for tablename in targetlist" }, { "msg_contents": "> On Tue, 12 Jun 2001, Bruce Momjian wrote:\n>> I am confused. I thought I fixed this about a month ago. Do we need\n>> more coded added here? \n\nYou did, and we don't. In current sources:\n\nregression=# SELECT pg_class FROM pg_class;\nERROR: You can't use relation names alone in the target list, try relation.*.\nregression=#\n\nOne might quibble with the wording of the error message, but at least\nit's to the point.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Jun 2001 09:52:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix for tablename in targetlist " } ]
[ { "msg_contents": "Hi,\n\nI managed to drop really important table. Fortunately, I had a backup of\nthe table (raw #### file, not a ascii file). After putting that table into\nfreshly initdb'd database, postgres doesn't see new transactions even\nthough 'vacuum' sees the tuples alright. \n\nSo, question. I'd like to force XID (of last committed transaction) of\npostgres forward. Is there a good way to do it or hacking source is the\nonly way?\n\n", "msg_date": "Fri, 18 May 2001 23:42:47 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "force of last XID" }, { "msg_contents": "> Hi,\n> \n> I managed to drop really important table. Fortunately, I had a backup of\n> the table (raw #### file, not a ascii file). After putting that table into\n> freshly initdb'd database, postgres doesn't see new transactions even\n> though 'vacuum' sees the tuples alright. \n> \n> So, question. I'd like to force XID (of last committed transaction) of\n> postgres forward. Is there a good way to do it or hacking source is the\n> only way?\n\nTry copying over the pg_log file from the old database. That should\nhelp.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 18 May 2001 23:58:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: force of last XID" } ]
[ { "msg_contents": "I've sent this a few weeks ago and got support, I just wanted to issue the\nfinal call.\n\nSQL and Postgres differ in behaviour if the value of a char or varchar\ntype exceeds the declared length. Postgres cuts off the value, SQL\nrequires to raise an error.\n\nIn particular, the compliant behaviour is:\n\ncreate table test (a varchar(4));\n\ninsert into test values ('ok');\n[ok]\ninsert into test values ('not ok');\nERROR: value too long for type character varying(4)\ninsert into test values ('good ');\n[truncates spaces that are too long]\n\nI think this behaviour is desirable over the old one because it makes the\nchar and varchar types useful in the first place.\n\nFor bit types there is, of course, no such extra rule for spaces.\nHowever, SQL requires that for fixed-width bit strings, the input value\nmust have as many digits as the declared length of the string. That is,\n\ncreate table test (a bit(4));\ninsert into test values (b'101');\n\nwill fail. I think that's reasonable, too, because it avoids the\nendianness issues.\n\nUnless there are objections, I will make this happen.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 19 May 2001 12:33:40 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Truncation of char, varchar, bit, varbit types" } ]
[ { "msg_contents": "I tried to do a TRUNCATE on a table that was referenced by other table and\nthe refererential integrity wasn't kept. Seems to me this thing behaves\nreally differently then DELETE FROM table. I think it should be at least\nmentioned in the docs or disabled if the table is referenced by foreign\nkey.\n\nOndrej\n--\nAs President I have to go vacuum my coin collection!\n\n", "msg_date": "Sat, 19 May 2001 12:56:48 +0200 (CEST)", "msg_from": "Ondrej Palkovsky <xpalo03@vse.cz>", "msg_from_op": true, "msg_subject": "TRUNCATE doesn't follow referential integrity" } ]
[ { "msg_contents": "Is any support for reworking the postgres headers such that they can be used,\ncleanly, in a C++ program?\n", "msg_date": "Sat, 19 May 2001 10:21:27 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "C++ Headers" }, { "msg_contents": "\nMy guess it that certain defines have to be #ifdef'ed out for C++. Can\nyou list them again. We have talked about this in the past, but haven't\ngotten a solution yet.\n\n> Is any support for reworking the postgres headers such that they can be used,\n> cleanly, in a C++ program?\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 19 May 2001 11:25:11 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: C++ Headers" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> Is any support for reworking the postgres headers such that they can be used,\n> cleanly, in a C++ program?\n\nYou'll get no support for a request for a blank check. What do you have\nin mind exactly?\n\nISTM that making the backend's internal headers C++-clean has already\nbeen looked into, but rejected on grounds that I don't recall clearly.\nCheck the list archives.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 19 May 2001 11:26:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: C++ Headers " }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010519 10:29]:\n> mlw <markw@mohawksoft.com> writes:\n> > Is any support for reworking the postgres headers such that they can be used,\n> > cleanly, in a C++ program?\n> \n> You'll get no support for a request for a blank check. 
What do you have\n> in mind exactly?\n> \n> ISTM that making the backend's internal headers C++-clean has already\n> been looked into, but rejected on grounds that I don't recall clearly.\n> Check the list archives.\nI do know that you can use libpq-fe.h cleanly in a C++ program (the \ntable posted earlier today gets populated by a C++ app), modulo the\nOid type conflicts with an SNMP++ header, handled by a quick #define. \n\nLER\n\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sat, 19 May 2001 11:04:37 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: C++ Headers" }, { "msg_contents": "Tom Lane wrote:\n\n> mlw <markw@mohawksoft.com> writes:\n> > Is any support for reworking the postgres headers such that they can be used,\n> > cleanly, in a C++ program?\n>\n> You'll get no support for a request for a blank check. 
What do you have\n> in mind exactly?\n>\n> ISTM that making the backend's internal headers C++-clean has already\n> been looked into, but rejected on grounds that I don't recall clearly.\n> Check the list archives.\n>\n\nI have used:\n\n #ifdef __cplusplus\n extern \"C\" {\n #endif\n\n headers......\n\n\n\n #ifdef __cplusplus\n }\n #endif\n\n\non many backend header files for use\non my threaded version of postgres.\nI seems to work fine as I have not had any problems\nyet.\n\n\nMyron Scott\n\n\n\n\n", "msg_date": "Sat, 19 May 2001 10:09:59 -0700", "msg_from": "Myron Scott <mscott@sacadia.com>", "msg_from_op": false, "msg_subject": "Re: C++ Headers" }, { "msg_contents": "> Tom Lane wrote:\n> \n> > mlw <markw@mohawksoft.com> writes:\n> > > Is any support for reworking the postgres headers such that they can be used,\n> > > cleanly, in a C++ program?\n> >\n> > You'll get no support for a request for a blank check. What do you have\n> > in mind exactly?\n> >\n> > ISTM that making the backend's internal headers C++-clean has already\n> > been looked into, but rejected on grounds that I don't recall clearly.\n> > Check the list archives.\n> >\n> \n> I have used:\n> \n> #ifdef __cplusplus\n> extern \"C\" {\n> #endif\n> \n> headers......\n> \n> \n> \n> #ifdef __cplusplus\n> }\n> #endif\n> \n> \n> on many backend header files for use\n> on my threaded version of postgres.\n> I seems to work fine as I have not had any problems\n> yet.\n\nThe only mention I see of this is in c.h:\n\t\n\t#ifndef __cplusplus\n\t#ifndef bool\n\ttypedef char bool;\n\t\n\t#endif /* ndef bool */\n\t#endif /* not C++ */\n\nIf you need more cplusplus stuff, lets figure it out and add it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 19 May 2001 13:17:10 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: C++ Headers" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> The only mention I see of this is in c.h:\n\t\n> \t#ifndef __cplusplus\n> \t#ifndef bool\n> \ttypedef char bool;\n\t\n> \t#endif /* ndef bool */\n> \t#endif /* not C++ */\n\n> If you need more cplusplus stuff, lets figure it out and add it.\n\nActually, that portion of c.h is a time bomb that is likely to blow up\nin the face of some poor C++ user. There's no guarantee that a C++\ncompiler's built-in bool type will be compatible with \"char\", is there?\nIf it happened to be, say, same size as \"int\", then a C++ module\nwould interpret lots of things differently from a C module.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 19 May 2001 20:34:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: C++ Headers " }, { "msg_contents": "Tom Lane wrote:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The only mention I see of this is in c.h:\n>\n> > #ifndef __cplusplus\n> > #ifndef bool\n> > typedef char bool;\n>\n> > #endif /* ndef bool */\n> > #endif /* not C++ */\n>\n> > If you need more cplusplus stuff, lets figure it out and add it.\n>\n> Actually, that portion of c.h is a time bomb that is likely to blow up\n> in the face of some poor C++ user. There's no guarantee that a C++\n> compiler's built-in bool type will be compatible with \"char\", is there?\n> If it happened to be, say, same size as \"int\", then a C++ module\n> would interpret lots of things differently from a C module.\n\nThis in fact has happened within ECPG. But since sizeof(bool) is passed to\nlibecpg it was possible to figure out which 'bool' is requested.\n\nAnother issue of C++ compatibility would be cleaning up the usage of\n'const' declarations. 
C++ is really strict about 'const'ness. But I don't\nknow whether postgres' internal headers would need such a cleanup. (I\nsuspect that in ecpg there is an oddity left with respect to host variable\ndeclaration. I'll check that later)\n\n Christof\n\n\n", "msg_date": "Mon, 21 May 2001 11:01:01 +0200", "msg_from": "Christof Petig <christof@petig-baender.de>", "msg_from_op": false, "msg_subject": "Re: C++ Headers" }, { "msg_contents": "> This in fact has happened within ECPG. But since sizeof(bool) is passed to\n> libecpg it was possible to figure out which 'bool' is requested.\n> \n> Another issue of C++ compatibility would be cleaning up the usage of\n> 'const' declarations. C++ is really strict about 'const'ness. But I don't\n> know whether postgres' internal headers would need such a cleanup. (I\n> suspect that in ecpg there is an oddity left with respect to host variable\n> declaration. I'll check that later)\n\nWe have added more const-ness to libpq++ for 7.2.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 22 May 2001 00:19:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: C++ Headers" }, { "msg_contents": "On Tue, May 22, 2001 at 12:19:41AM -0400, Bruce Momjian wrote:\n> > This in fact has happened within ECPG. But since sizeof(bool) is passed to\n> > libecpg it was possible to figure out which 'bool' is requested.\n> > \n> > Another issue of C++ compatibility would be cleaning up the usage of\n> > 'const' declarations. C++ is really strict about 'const'ness. But I don't\n> > know whether postgres' internal headers would need such a cleanup. (I\n> > suspect that in ecpg there is an oddity left with respect to host variable\n> > declaration. 
I'll check that later)\n> \n> We have added more const-ness to libpq++ for 7.2.\n\nBreaking link compatibility without bumping the major version number\non the library seems to me serious no-no.\n\nTo const-ify member functions without breaking link compatibility,\nyou have to add another, overloaded member that is const, and turn\nthe non-const function into a wrapper. For example:\n\n void Foo::bar() { ... } // existing interface\n\nbecomes\n\n void Foo::bar() { ((const Foo*)this)->bar(); } \n void Foo::bar() const { ... } \n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Tue, 22 May 2001 13:34:58 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: C++ Headers" }, { "msg_contents": "> On Tue, May 22, 2001 at 12:19:41AM -0400, Bruce Momjian wrote:\n> > > This in fact has happened within ECPG. But since sizeof(bool) is passed to\n> > > libecpg it was possible to figure out which 'bool' is requested.\n> > > \n> > > Another issue of C++ compatibility would be cleaning up the usage of\n> > > 'const' declarations. C++ is really strict about 'const'ness. But I don't\n> > > know whether postgres' internal headers would need such a cleanup. (I\n> > > suspect that in ecpg there is an oddity left with respect to host variable\n> > > declaration. I'll check that later)\n> > \n> > We have added more const-ness to libpq++ for 7.2.\n> \n> Breaking link compatibility without bumping the major version number\n> on the library seems to me serious no-no.\n> \n> To const-ify member functions without breaking link compatibility,\n> you have to add another, overloaded member that is const, and turn\n> the non-const function into a wrapper. For example:\n> \n> void Foo::bar() { ... } // existing interface\n> \n> becomes\n> \n> void Foo::bar() { ((const Foo*)this)->bar(); } \n> void Foo::bar() const { ... } \n\nThanks. That was my problem, not knowing when I break link compatiblity\nin C++. 
Major updated.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 22 May 2001 17:52:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: C++ Headers" }, { "msg_contents": "On Tue, May 22, 2001 at 05:52:20PM -0400, Bruce Momjian wrote:\n> > On Tue, May 22, 2001 at 12:19:41AM -0400, Bruce Momjian wrote:\n> > > > This in fact has happened within ECPG. But since sizeof(bool) is\n> > > > passed to libecpg it was possible to figure out which 'bool' is\n> > > > requested.\n> > > >\n> > > > Another issue of C++ compatibility would be cleaning up the\n> > > > usage of 'const' declarations. C++ is really strict about\n> > > > 'const'ness. But I don't know whether postgres' internal headers\n> > > > would need such a cleanup. (I suspect that in ecpg there is an\n> > > > oddity left with respect to host variable declaration. I'll\n> > > > check that later)\n> > >\n> > > We have added more const-ness to libpq++ for 7.2.\n> > \n> > Breaking link compatibility without bumping the major version number\n> > on the library seems to me serious no-no.\n> > \n> > To const-ify member functions without breaking link compatibility,\n> > you have to add another, overloaded member that is const, and turn\n> > the non-const function into a wrapper. For example:\n> > \n> > void Foo::bar() { ... } // existing interface\n> > \n> > becomes\n> > \n> > void Foo::bar() { ((const Foo*)this)->bar(); } \n> > void Foo::bar() const { ... } \n> \n> Thanks. That was my problem, not knowing when I break link compatiblity\n> in C++. Major updated.\n\nWouldn't it be better to add the forwarding function and keep\nthe same major number? It's quite disruptive to change the\nmajor number for what are really very minor changes. 
Otherwise\nyou accumulate lots of near-copies of almost-identical libraries\nto be able to run old binaries.\n\nA major-number bump should usually be something planned for\nand scheduled.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Tue, 22 May 2001 18:27:19 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: C++ Headers" }, { "msg_contents": "> > > > We have added more const-ness to libpq++ for 7.2.\n> > > \n> > > Breaking link compatibility without bumping the major version number\n> > > on the library seems to me serious no-no.\n> > > \n> > > To const-ify member functions without breaking link compatibility,\n> > > you have to add another, overloaded member that is const, and turn\n> > > the non-const function into a wrapper. For example:\n> > > \n> > > void Foo::bar() { ... } // existing interface\n> > > \n> > > becomes\n> > > \n> > > void Foo::bar() { ((const Foo*)this)->bar(); } \n> > > void Foo::bar() const { ... } \n> > \n> > Thanks. That was my problem, not knowing when I break link compatiblity\n> > in C++. Major updated.\n> \n> Wouldn't it be better to add the forwarding function and keep\n> the same major number? It's quite disruptive to change the\n> major number for what are really very minor changes. Otherwise\n> you accumulate lots of near-copies of almost-identical libraries\n> to be able to run old binaries.\n> \n> A major-number bump should usually be something planned for\n> and scheduled.\n\nThat const was just one of many const's added, and I am sure there will\nbe more stuff happening to C++. I changed a function returning short\nfor tuple length to int. Not worth mucking it up.\n\nIf it was just that one it would be OK.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 23 May 2001 11:35:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: C++ Headers" } ]
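The const-forwarding idiom Nathan Myers describes in the thread above can be sketched as a small compilable example. The class and member names here are illustrative only (they are not from libpq++); the point is that the pre-existing non-const member survives as a thin wrapper whose mangled symbol is still emitted, so previously linked callers keep working while the real implementation moves into a new const-qualified overload.

```cpp
#include <cassert>

// Hypothetical class sketching the link-compatibility idiom: keep the
// old non-const member as a wrapper that forwards to a new const
// overload holding the actual implementation.
class Foo {
public:
    // existing interface, preserved so old binaries still link
    int bar() { return static_cast<const Foo *>(this)->bar(); }

    // new const member function carrying the actual implementation
    int bar() const { return 42; }
};
```

Because both the const and non-const symbols now exist in the library, const-ifying the interface this way need not force a major (soname) version bump.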
[ { "msg_contents": "Sorry to gripe here. Don't get me wrong, I think Postgres is amazing, and I\nthink all you guys do an amazing job. \n\nIs it just me, or do others agree, functions returning sets need to be able to\nbe used in a select where equal clause.\n\nselect * from table where field = funct_set('bla bla');\n\nWhere funct_set returns multiple results. This currently does not work. The\nonly other option with sets is usage like:\n\nselect * from table where field in (select funct_set('bla bla')) ;\n\nWhich would be OK if it were able to use an index and preserve the order\nreturned.\n\nWithout the ability to use a set to select multiple objects in a join, it is\ndramatically limited in use for a relational database. In fact, I can't think\nof many uses for it as is.\n\nHaving used Postgres since 1996/1997 I have only a very few gripes, and I think\n7.1 is a fantastic effort, but, I think a HUGE amount of applications can not\nbe done because of this limitation.\n\nWhat can I do to help get this ability in 7.2? I is VERY important to a project\non which I am working.\n", "msg_date": "Sat, 19 May 2001 16:35:56 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Functions returning sets" }, { "msg_contents": "\n(Machine couldn't find mx record for mohawksoft, replying only\nto list)\n\nOn Sat, 19 May 2001, mlw wrote:\n\n> Sorry to gripe here. Don't get me wrong, I think Postgres is amazing, and I\n> think all you guys do an amazing job. \n> \n> Is it just me, or do others agree, functions returning sets need to be able to\n> be used in a select where equal clause.\n>\n> select * from table where field = funct_set('bla bla');\n\nI think what we should probably do is make IN better and use that or then\nsupport =ANY(=SOME)/=ALL on such things. 
I think =ANY would be easy\nsince IN is defined in terms of it in the spec.\n\nI personally don't like the idea of value = set being value in set any\nmore than value = array being value in array.\n\n", "msg_date": "Sat, 19 May 2001 14:08:06 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Functions returning sets" }, { "msg_contents": "Stephan Szabo wrote:\n> \n> (Machine couldn't find mx record for mohawksoft, replying only\n> to list)\n> \n> On Sat, 19 May 2001, mlw wrote:\n> \n> > Sorry to gripe here. Don't get me wrong, I think Postgres is amazing, and I\n> > think all you guys do an amazing job.\n> >\n> > Is it just me, or do others agree, functions returning sets need to be able to\n> > be used in a select where equal clause.\n> >\n> > select * from table where field = funct_set('bla bla');\n\nI don't understand your reasoning. Look at the syntax:\n\nselect * from foo where bar = function(...);\n\nIf function() returns one value, then only one will be returned and the\nrelation features of postgres can be used, as in \"select * from foo, this where\nfoo.bar = function() and foo.bar = this.that\"\n\nIf function() can return multiple values, should it not follow that multiple\nvalues should be selected? \n\nIn the example where one result is returned, that makes sense. Why does the\nexample of multiple results being returned no longer make sense?\n\nIt is a point of extreme frustration to me that I can't do this easily. Lacking\nthis ability makes Postgres almost impossible to implement a search engine\ncorrectly. I know it is selfish to feel this way, but I am positive my\nfrustration is indicative of others out there trying to use Postgres for\ncertain applications. I bet a huge number of developers feel the same way,\nbut try a few quick tests and give up on Postgres all together, without saying\na word. 
What good are multiple results in a relational environment if one can\nnot use them as relations?\n", "msg_date": "Sat, 19 May 2001 17:42:13 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Functions returning sets" }, { "msg_contents": "At 02:08 PM 5/19/01 -0700, Stephan Szabo wrote:\n>\n>(Machine couldn't find mx record for mohawksoft, replying only\n>to list)\n>\n>On Sat, 19 May 2001, mlw wrote:\n>\n>> Sorry to gripe here. Don't get me wrong, I think Postgres is amazing, and I\n>> think all you guys do an amazing job. \n>> \n>> Is it just me, or do others agree, functions returning sets need to be able to\n>> be used in a select where equal clause.\n>>\n>> select * from table where field = funct_set('bla bla');\n>\n>I think what we should probably do is make IN better...\n\nIN is the right operator. \"field = [set of rows]\" isn't SQL92 and doesn't\nreally make sense. It's the equivalent of:\n\nselect * from table where field = (select some_column from some_other_table)\n\nIf the subselect returns a single row it will work, if it returns multiple\nrows you need to use IN. 
That's just how SQL92 is defined.\n\nI'd assume that a function returning a single column and single row will\nwork just as the subselect does, but there's no reason expect it to work\nif more than one row is returned.\n\nWhat's so hard about writing \"IN\" rather than \"=\" ???\n\n\n\n- Don Baccus, Portland OR <dhogaza@pacifier.com>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sat, 19 May 2001 16:08:37 -0700", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: Functions returning sets" }, { "msg_contents": "Don Baccus <dhogaza@pacifier.com> writes:\n> What's so hard about writing \"IN\" rather than \"=\" ???\n\nEven more to the point, if we did adopt such a (crazy IMHO)\ninterpretation of '=', what makes anyone think that it'd be\nany more efficient than IN?\n\nAFAICT, mlw is hoping that redefining '=' would magically avoid the\nperformance problems with IN, but my bet is it'd be just the same.\n\nWhat we need to do is teach the system how to handle WHERE ... IN ...\nas a form of join. Changing semantics of operators isn't necessary\nnor helpful.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 19 May 2001 20:44:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Functions returning sets " }, { "msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> I think what we should probably do is make IN better and use that or then\n> support =ANY(=SOME)/=ALL on such things. I think =ANY would be easy\n> since IN is defined in terms of it in the spec.\n\nAnd in our code too ;-). 
ANY/ALL have been there for awhile.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 19 May 2001 20:45:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Functions returning sets " }, { "msg_contents": "At 08:44 PM 5/19/01 -0400, Tom Lane wrote:\n>Don Baccus <dhogaza@pacifier.com> writes:\n>> What's so hard about writing \"IN\" rather than \"=\" ???\n>\n>Even more to the point, if we did adopt such a (crazy IMHO)\n>interpretation of '=', what makes anyone think that it'd be\n>any more efficient than IN?\n\nI was going to mention this, but stuck to the old letter-to-the-editor\nrule of one point per note. :)\n\n>AFAICT, mlw is hoping that redefining '=' would magically avoid the\n>performance problems with IN, but my bet is it'd be just the same.\n\nOr that it would actually be a join operator??? Wasn't clear to me\nexactly what he was expecting.\n\n>What we need to do is teach the system how to handle WHERE ... IN ...\n>as a form of join.\n\nYes.\n\nBTW Oracle has a 1000-element limit on the number of values in an\n\"IN\" set. This limits its generality as it applies to subselects\nas well as lists of constants. 
It seems that PG isn't the only\nRDMBS that has problems with getting \"IN\" implemented perfectly.\n\n\n\n- Don Baccus, Portland OR <dhogaza@pacifier.com>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sat, 19 May 2001 17:56:06 -0700", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: Functions returning sets " }, { "msg_contents": "Tom Lane wrote:\n> \n> Don Baccus <dhogaza@pacifier.com> writes:\n> > What's so hard about writing \"IN\" rather than \"=\" ???\n> \n> Even more to the point, if we did adopt such a (crazy IMHO)\n> interpretation of '=', what makes anyone think that it'd be\n> any more efficient than IN?\n> \n> AFAICT, mlw is hoping that redefining '=' would magically avoid the\n> performance problems with IN, but my bet is it'd be just the same.\n> \n> What we need to do is teach the system how to handle WHERE ... IN ...\n> as a form of join. Changing semantics of operators isn't necessary\n> nor helpful.\n\nI will defer, of course, to your interpretation of '=', but I still think it\n(if implemented efficiently) could be cool. However, I hang my head in shame\nthat I didn't see this syntax:\n\nselect table.* from table, (select function() as field) as result where\ntable.field = result.field;\n\nIt seems to be pretty efficient, and satisfies the main criteria that I needed,\nwhich was a full text search could be used on select with no external\nprogramming language.\n\nCurrently my system can't be used without an external programming language, and\nthis is a huge, if awkward solution. Thanks all.\n", "msg_date": "Sat, 19 May 2001 21:34:27 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Functions returning sets" }, { "msg_contents": "Don Baccus <dhogaza@pacifier.com> writes:\n> BTW Oracle has a 1000-element limit on the number of values in an\n> \"IN\" set.\n\nYeah? 
What happens when you get past that?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 May 2001 00:31:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Functions returning sets " }, { "msg_contents": "I think you probably could use contrib/intarray if you redesign\nyour scheme.\n\n\tOleg\n\nOn Sat, 19 May 2001, mlw wrote:\n\n> Tom Lane wrote:\n> >\n> > Don Baccus <dhogaza@pacifier.com> writes:\n> > > What's so hard about writing \"IN\" rather than \"=\" ???\n> >\n> > Even more to the point, if we did adopt such a (crazy IMHO)\n> > interpretation of '=', what makes anyone think that it'd be\n> > any more efficient than IN?\n> >\n> > AFAICT, mlw is hoping that redefining '=' would magically avoid the\n> > performance problems with IN, but my bet is it'd be just the same.\n> >\n> > What we need to do is teach the system how to handle WHERE ... IN ...\n> > as a form of join. Changing semantics of operators isn't necessary\n> > nor helpful.\n>\n> I will defer, of course, to your interpretation of '=', but I still think it\n> (if implemented efficiently) could be cool. However, I hang my head in shame\n> that I didn't see this syntax:\n>\n> select table.* from table, (select function() as field) as result where\n> table.field = result.field;\n>\n> It seems to be pretty efficient, and satisfies the main criteria that I needed,\n> which was a full text search could be used on select with no external\n> programming language.\n>\n> Currently my system can't be used without an external programming language, and\n> this is a huge, if awkward solution. 
Thanks all.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sun, 20 May 2001 07:58:28 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: Re: Functions returning sets" }, { "msg_contents": "mlw wrote:\n> \n> Stephan Szabo wrote:\n> >\n> > (Machine couldn't find mx record for mohawksoft, replying only\n> > to list)\n> >\n> > On Sat, 19 May 2001, mlw wrote:\n> >\n> > > Sorry to gripe here. Don't get me wrong, I think Postgres is amazing, and I\n> > > think all you guys do an amazing job.\n> > >\n> > > Is it just me, or do others agree, functions returning sets need to be able to\n> > > be used in a select where equal clause.\n> > >\n> > > select * from table where field = funct_set('bla bla');\n> \n> I don't understand your reasoning. Look at the syntax:\n> \n> select * from foo where bar = function(...);\n> \n> If function() returns one value, then only one will be returned and the\n> relation features of postgres can be used, as in \"select * from foo, this where\n> foo.bar = function() and foo.bar = this.that\"\n> \n> If function() can return multiple values, should it not follow that multiple\n> values should be selected?\n\nof course not! if function() can return (i.e. 
returns) as set then bar\nmust be of \ntype SET too and only rows that are = (in whatever sense currently\ndefined) should\nbe returned\n\n---------------\nHannu\n", "msg_date": "Sun, 20 May 2001 12:34:31 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Re: Functions returning sets" }, { "msg_contents": "At 12:31 AM 5/20/01 -0400, Tom Lane wrote:\n>Don Baccus <dhogaza@pacifier.com> writes:\n>> BTW Oracle has a 1000-element limit on the number of values in an\n>> \"IN\" set.\n>\n>Yeah? What happens when you get past that?\n\nMy understanding is that it gives you an error, though I've never tried\nit myself.\n\n\n\n- Don Baccus, Portland OR <dhogaza@pacifier.com>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sun, 20 May 2001 09:27:30 -0700", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: Functions returning sets " }, { "msg_contents": "On Sun, 20 May 2001, Don Baccus wrote:\n\n> >> BTW Oracle has a 1000-element limit on the number of values in an\n> >> \"IN\" set.\n> >\n> >Yeah? What happens when you get past that?\n>\n> My understanding is that it gives you an error, though I've never\n> tried it myself.\n\nIt does.\n\nMatthew.\n\n", "msg_date": "Sun, 20 May 2001 18:09:42 +0100 (BST)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": false, "msg_subject": "Re: Functions returning sets " }, { "msg_contents": "On Sat, 19 May 2001, mlw wrote:\n\n> Stephan Szabo wrote:\n> > \n> > (Machine couldn't find mx record for mohawksoft, replying only\n> > to list)\n> > \n> > On Sat, 19 May 2001, mlw wrote:\n> > \n> > > Sorry to gripe here. 
Don't get me wrong, I think Postgres is amazing, and I\n> > > think all you guys do an amazing job.\n> > >\n> > > Is it just me, or do others agree, functions returning sets need to be able to\n> > > be used in a select where equal clause.\n> > >\n> > > select * from table where field = funct_set('bla bla');\n> \n> I don't understand your reasoning. Look at the syntax:\n> \n> select * from foo where bar = function(...);\n> \n> If function() returns one value, then only one will be returned and the\n> relation features of postgres can be used, as in \"select * from foo, this where\n> foo.bar = function() and foo.bar = this.that\"\n> \n> If function() can return multiple values, should it not follow that multiple\n> values should be selected? \n\nWhat does select * from foo where bar=(select value from foo2) mean when\nthe second returns more than one row. IIRC, sql says that's an error.\nI don't see why functions returning sets would be different than a\nsubquery. That's what =ANY, =ALL, and IN are for. I don't see what the\ndifference between doing this with =s and making IN work better really is.\n\n", "msg_date": "Sun, 20 May 2001 10:53:55 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Functions returning sets" }, { "msg_contents": "On Sat, 19 May 2001, Tom Lane wrote:\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > I think what we should probably do is make IN better and use that or then\n> > support =ANY(=SOME)/=ALL on such things. I think =ANY would be easy\n> > since IN is defined in terms of it in the spec.\n> \n> And in our code too ;-). ANY/ALL have been there for awhile.\n\n:) Well that makes it easier anyway. I do agree with the point that at\nsome point IN needs to be smarter. 
Can the IN always get written as a\njoin and is it always better to do so?\n\n\n", "msg_date": "Sun, 20 May 2001 10:55:25 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Functions returning sets " }, { "msg_contents": "At 10:55 AM 5/20/01 -0700, Stephan Szabo wrote:\n> Can the IN always get written as a\n>join and is it always better to do so?\n\nNope:\n\nopenacs4=# select 1 in (1,2,3);\n ?column? \n----------\n t\n(1 row)\n\nYou might also do something like\n\n\"select count(*) from foo where foo.state in ('rejected', 'banned');\"\n\nA better question, I guess, is if it is always better to write\nit as a join if the left hand operand is a table column and\nthe right hand operand a rowset.\n\n\n\n\n\n- Don Baccus, Portland OR <dhogaza@pacifier.com>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sun, 20 May 2001 11:29:00 -0700", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: Functions returning sets " }, { "msg_contents": "\nOn Sun, 20 May 2001, Don Baccus wrote:\n\n> At 10:55 AM 5/20/01 -0700, Stephan Szabo wrote:\n> > Can the IN always get written as a\n> >join and is it always better to do so?\n> \n> Nope:\n> ...\n> A better question, I guess, is if it is always better to write\n> it as a join if the left hand operand is a table column and\n> the right hand operand a rowset.\n\nWell I was assuming we were talking about the subquery case\nin general :)\n\nIt might be a problem with subqueries with set value functions\nand parameters passed down from the outer tables:\nselect * from blah where\n blah.val1 in \n (select count(*) from blah2 where blah2.val2=blah.val2\n group by blah2.val3);\n\n", "msg_date": "Sun, 20 May 2001 11:55:02 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Functions 
returning sets " } ]
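[Editor's note: the thread above converges on joining a set-returning function through a subselect in the FROM clause. A minimal sketch of that pattern follows, reusing the thread's own hypothetical names — funct_set() and the titles table from Mike Mascari's example, plus an illustrative other_titles table that is not from the thread. None of these are real objects; this is era-appropriate PostgreSQL SQL, not a definitive implementation.]

```sql
-- funct_set(text) is assumed to be a user-defined function returning a set;
-- it is the thread's example, not a built-in.

-- Not supported: treating '=' against a set-returning function as membership:
--   SELECT * FROM titles WHERE title = funct_set('blah blah');

-- Workaround: expose the function's result set as a derived table and join it:
SELECT titles.*
FROM titles,
     (SELECT funct_set('blah blah') AS title) AS bar
WHERE titles.title = bar.title;

-- Joe Conway's variant: name the derived table once as a view.
CREATE VIEW vw_funct_set AS SELECT funct_set('blah blah') AS title;
SELECT titles.* FROM titles, vw_funct_set AS bar WHERE titles.title = bar.title;

-- Tom Lane's point about IN: until the planner rewrites IN as a join itself,
-- the same membership test can be phrased either way by hand:
SELECT * FROM titles WHERE title IN (SELECT title FROM other_titles);

SELECT DISTINCT titles.*                 -- DISTINCT guards against duplicate
FROM titles, other_titles                -- matches that the IN form would
WHERE titles.title = other_titles.title; -- collapse automatically
```

A sketch only: whether the IN form or the hand-written join runs faster was exactly the open question in the thread.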
[ { "msg_contents": "I see Tom Lane implemented the SQL92 feature of using subselects in \nFROM clauses:\n\nCREATE TABLE foo (\nkey integer not null,\nvalue text);\n\nSELECT * FROM (SELECT * FROM foo) AS bar\nWHERE bar.key = 1;\n\nPerhaps this is how functions returning sets should operate:\n\nSELECT titles.* FROM titles, (SELECT funct_set('blah blah')) AS bar\nWHERE titles.title = bar.title;\n\nFWIW,\n\nMike Mascari\nmascarm@mascari.com\n\n-----Original Message-----\nFrom:\tmlw [SMTP:markw@mohawksoft.com]\nSent:\tSaturday, May 19, 2001 5:42 PM\nTo:\tStephan Szabo\nCc:\tHackers List\nSubject:\t[HACKERS] Re: Functions returning sets\n\nStephan Szabo wrote:\n>\n> (Machine couldn't find mx record for mohawksoft, replying only\n> to list)\n>\n> On Sat, 19 May 2001, mlw wrote:\n>\n> > Sorry to gripe here. Don't get me wrong, I think Postgres is \namazing, and I\n> > think all you guys do an amazing job.\n> >\n> > Is it just me, or do others agree, functions returning sets need \nto be able to\n> > be used in a select where equal clause.\n> >\n> > select * from table where field = funct_set('bla bla');\n\nI don't understand your reasoning. Look at the syntax:\n\nselect * from foo where bar = function(...);\n\nIf function() returns one value, then only one will be returned and \nthe\nrelation features of postgres can be used, as in \"select * from foo, \nthis where\nfoo.bar = function() and foo.bar = this.that\"\n\nIf function() can return multiple values, should it not follow that \nmultiple\nvalues should be selected?\n\nIn the example where one result is returned, that makes sense. Why \ndoes the\nexample of multiple results being returned no longer make sense?\n\nIt is a point of extreme frustration to me that I can't do this \neasily. Lacking\nthis ability makes Postgres almost impossible to implement a search \nengine\ncorrectly. 
I know it is selfish to feel this way, but I am positive \nmy\nfrustration is indicative of others out there trying to use Postgres \nfor\ncertain applications. I bet a huge number of developers feel the \nsame way,\nbut try a few quick tests and give up on Postgres all together, \nwithout saying\na word. What good are multiple results in a relational environment if \none can\nnot use them as relations?\n\n", "msg_date": "Sat, 19 May 2001 18:41:45 -0400", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": true, "msg_subject": "RE: Re: Functions returning sets" }, { "msg_contents": "> I see Tom Lane implemented the SQL92 feature of using subselects in\n> FROM clauses:\n>\n> CREATE TABLE foo (\n> key integer not null,\n> value text);\n>\n> SELECT * FROM (SELECT * FROM foo) AS bar\n> WHERE bar.key = 1;\n>\n> Perhaps this is how functions returning sets should operate:\n>\n> SELECT titles.* FROM titles, (SELECT funct_set('blah blah')) AS bar\n> WHERE titles.title = bar.title;\n>\n\nThis is exactly what I was thinking. 
To take it one step further, then you\ncould define a view as:\n\nCREATE VIEW vw_funct_set as (SELECT funct_set('blah blah'));\n\nThen the statement above might look like:\n\nSELECT titles.* FROM titles, vw_funct_set AS bar WHERE titles.title =\nbar.title;\n\nIMHO this seems the most natural way to do what you have proposed.\n\n-- Joe\n\n", "msg_date": "Sat, 19 May 2001 16:38:16 -0700", "msg_from": "\"Joe Conway\" <joe@conway-family.com>", "msg_from_op": false, "msg_subject": "Re: Re: Functions returning sets" }, { "msg_contents": "Why not like Interbase ?\n\nwhen you define a procedure in Interbase, you a the 'suspend' instruction, \nit suspend execution of the stored procedure and returns variables, then \ncome back to the procedure.\n\nselect * from myfunc('ba ba');\nselect mycol from myfunc('dada');\n\nescuse my poor english :)\n\n....\nMike Mascari wrote:\n\n> I see Tom Lane implemented the SQL92 feature of using subselects in\n> FROM clauses:\n> \n> CREATE TABLE foo (\n> key integer not null,\n> value text);\n> \n> SELECT * FROM (SELECT * FROM foo) AS bar\n> WHERE bar.key = 1;\n> \n> Perhaps this is how functions returning sets should operate:\n> \n> SELECT titles.* FROM titles, (SELECT funct_set('blah blah')) AS bar\n> WHERE titles.title = bar.title;\n> \n> FWIW,\n> \n> Mike Mascari\n> mascarm@mascari.com\n\n\n", "msg_date": "Sun, 20 May 2001 09:32:33 +0200", "msg_from": "mordicus <mordicus@free.fr>", "msg_from_op": false, "msg_subject": "RE: Re: Functions returning sets" }, { "msg_contents": "Why not like Interbase ?\n\nwhen you define a procedure in Interbase, you have the 'suspend' \ninstruction, \nit suspend execution of the stored procedure and returns variables, then \ncome back to the procedure.\n\nselect * from myfunc('ba ba');\nselect mycol from myfunc('dada');\n\nescuse my poor english :)\n\n\nMike Mascari wrote:\n\n> I see Tom Lane implemented the SQL92 feature of using subselects in\n> FROM clauses:\n> \n> CREATE TABLE foo (\n> key integer not 
null,\n> value text);\n> \n> SELECT * FROM (SELECT * FROM foo) AS bar\n> WHERE bar.key = 1;\n> \n> Perhaps this is how functions returning sets should operate:\n> \n> SELECT titles.* FROM titles, (SELECT funct_set('blah blah')) AS bar\n> WHERE titles.title = bar.title;\n> \n> FWIW,\n> \n> Mike Mascari\n> mascarm@mascari.com\n\n", "msg_date": "Sun, 20 May 2001 09:47:29 +0200", "msg_from": "mordicus <mordicus@free.fr>", "msg_from_op": false, "msg_subject": "RE: Re: Functions returning sets" } ]
[ { "msg_contents": "Hi all,\n\nAttached is my patch that adds DROP CONSTRAINT support to PostgreSQL. I\nbasically want your guys feedback. I have sprinkled some of my q's thru\nthe text delimited with the @@ symbol. It seems to work perfectly.\n\nAt the moment it does CHECK constraints only, with inheritance. However,\ndue to the problem mentioned before with the mismatching between inherited\nconstraints it may be wise to disable the inheritance feature for a while.\nit is written in an extensible fashion to support future dropping of other\ntypes of constraint, and is well documented.\n\nPlease send me your comments, check my use of locking, updating of\nindices, use of ERROR and NOTICE, etc. and I will rework the patch based\non feedback until everyone\nis happy with it...\n\nChris", "msg_date": "Sun, 20 May 2001 15:50:07 +0800 (WST)", "msg_from": "Christopher <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "DROP CONSTRAINT patch" }, { "msg_contents": "Anyone looked at this yet?\n\nAlso, if someone could tell me where I should attempt to add a regression\ntest and what, exactly, I should be regression testing it would be\nhelpful...\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Christopher\n> Sent: Sunday, 20 May 2001 3:50 PM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] DROP CONSTRAINT patch\n>\n>\n> Hi all,\n>\n> Attached is my patch that adds DROP CONSTRAINT support to PostgreSQL. I\n> basically want your guys feedback. I have sprinkled some of my q's thru\n> the text delimited with the @@ symbol. It seems to work perfectly.\n>\n> At the moment it does CHECK constraints only, with inheritance. 
However,\n> due to the problem mentioned before with the mismatching between inherited\n> constraints it may be wise to disable the inheritance feature for a while.\n> it is written in an extensible fashion to support future dropping of other\n> types of constraint, and is well documented.\n>\n> Please send me your comments, check my use of locking, updating of\n> indices, use of ERROR and NOTICE, etc. and I will rework the patch based\n> on feedback until everyone\n> is happy with it...\n>\n> Chris\n>\n\n", "msg_date": "Tue, 22 May 2001 14:25:40 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: DROP CONSTRAINT patch" }, { "msg_contents": "> Anyone looked at this yet?\n> \n> Also, if someone could tell me where I should attempt to add a regression\n> test and what, exactly, I should be regression testing it would be\n> helpful...\n\nI was hoping someone else would comment on it. I will check it out\nlater and either apply or comment. You seemed little uncertain of it so\nI left it in my mailbox for a while.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 22 May 2001 07:13:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP CONSTRAINT patch" }, { "msg_contents": "No actually, I'm quite confident of it - it works perfectly on my computer\nand does exactly what it's supposed to do. I have tested it extensively.\nIt also works in a transaction and gets rolled back properly. However, I\nhaven't tested it with multiple backends simultaneously doing stuff. I'm\npretty sure there won't be anything wrong with it though...\n\nIt's the stylistic stuff I don't know about. 
For example, I issue a NOTICE\nif more than one constraint was removed - I don't know if this is desirable\nbehaviour from the viewpoint of you guys (I personally think it's very\nimportant!).\n\nBut hey...\n\nChris\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> Sent: Tuesday, 22 May 2001 7:13 PM\n> To: Christopher Kings-Lynne\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] DROP CONSTRAINT patch\n>\n>\n> > Anyone looked at this yet?\n> >\n> > Also, if someone could tell me where I should attempt to add a\n> regression\n> > test and what, exactly, I should be regression testing it would be\n> > helpful...\n>\n> I was hoping someone else would comment on it. I will check it out\n> later and either apply or comment. You seemed little uncertain of it so\n> I left it in my mailbox for a while.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n>\n\n", "msg_date": "Wed, 23 May 2001 09:52:03 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: DROP CONSTRAINT patch" }, { "msg_contents": "\nOn Tue, 22 May 2001, Christopher Kings-Lynne wrote:\n\n> Anyone looked at this yet?\n> \n> Also, if someone could tell me where I should attempt to add a regression\n> test and what, exactly, I should be regression testing it would be\n> helpful...\n\nAt the risk of making it even longer, probably alter_table.sql.\nYou probably want to try out various conceivable uses of the drop\nconstraint, including error conditions.\n\nSome things like:\ncreate table with constraint\ntry to insert valid row\ntry to insert invalid row\ndrop the constraint\ntry to insert valid row\ntry to insert row that was invalid\n\ncreate table with two equal named constraints\ninsert valid to both\ninsert valid to one but not two\ninsert valid to two but not one\ninsert valid to neither\n...\n\ncreate table with two non-equal named constraints\n(do inserts)\ndrop constraint one \ntry to insert valid for both, \n valid for one but not two\n valid for two but not one\n valid for neither\ndrop constraint two\n(do more inserts)\n\n", "msg_date": "Tue, 22 May 2001 19:59:36 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "RE: DROP CONSTRAINT patch" }, { "msg_contents": "For the add/drop constraint clauses would it be an idea to change the syntax\nto:\n\nALTER TABLE [ ONLY ] x ADD CONSTRAINT x;\nALTER TABLE [ ONLY ] x DROP CONSTRAINT x;\n\nSo that people can specify whether the constraint should be inherited or\nnot?\n\nChris\n\n", "msg_date": "Wed, 23 May 2001 11:21:35 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "ADD/DROP CONSTRAINT and inheritance" }, { "msg_contents": "Your patch has been added to the PostgreSQL 
unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it withing the next 48 hours.\n\n> Hi all,\n> \n> Attached is my patch that adds DROP CONSTRAINT support to PostgreSQL. I\n> basically want your guys feedback. I have sprinkled some of my q's thru\n> the text delimited with the @@ symbol. It seems to work perfectly.\n> \n> At the moment it does CHECK constraints only, with inheritance. However,\n> due to the problem mentioned before with the mismatching between inherited\n> constraints it may be wise to disable the inheritance feature for a while.\n> it is written in an extensible fashion to support future dropping of other\n> types of constraint, and is well documented.\n> \n> Please send me your comments, check my use of locking, updating of\n> indices, use of ERROR and NOTICE, etc. and I will rework the patch based\n> on feedback until everyone\n> is happy with it...\n> \n> Chris\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 23 May 2001 11:19:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP CONSTRAINT patch" }, { "msg_contents": "On Wed, 23 May 2001, Christopher Kings-Lynne wrote:\n\n> For the add/drop constraint clauses would it be an idea to change the syntax\n> to:\n> \n> ALTER TABLE [ ONLY ] x ADD CONSTRAINT x;\n> ALTER TABLE [ ONLY ] x DROP CONSTRAINT x;\n> \n> So that people can specify whether the constraint should be inherited or\n> not?\n\nI'm not sure. 
I don't much use the inheritance stuff, so I'm not sure\nwhat would be better for them. Hopefully one of the people interested\nin inheritance will speak up. :) Technically it's probably not difficult\nto do.\n\nA related question is whether or not you can drop a constraint on a\nsubtable that's inherited from a parent.\n\n\n", "msg_date": "Wed, 23 May 2001 09:21:37 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: ADD/DROP CONSTRAINT and inheritance" }, { "msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> On Wed, 23 May 2001, Christopher Kings-Lynne wrote:\n>> For the add/drop constraint clauses would it be an idea to change the syntax\n>> to:\n>> \n>> ALTER TABLE [ ONLY ] x ADD CONSTRAINT x;\n>> ALTER TABLE [ ONLY ] x DROP CONSTRAINT x;\n\nIf the patch is coded in the same style as the existing ALTER code then\nthis will happen automatically. Are you looking at current development\ntip as the comparison point for your changes?\n\n> A related question is whether or not you can drop a constraint on a\n> subtable that's inherited from a parent.\n\nThere is the question of whether it's a good idea to allow a constraint\nto exist on a parent but not on its subtables. Seems like a bad idea to\nme. But as long as the default is to propagate these changes, I'm not\nreally eager to prohibit DBAs from doing the other. Who's to say what's\na misuse of inheritance and what's not...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 May 2001 13:55:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ADD/DROP CONSTRAINT and inheritance " }, { "msg_contents": "> Your patch has been added to the PostgreSQL unapplied patches list at:\n>\n> \thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nWould you like me to submit a version minus all my questions? 
And what\nabout the inheritance feature - should it be left enabled (as is) at the\nmoment?\n\nChris\n\n", "msg_date": "Thu, 24 May 2001 09:34:20 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: DROP CONSTRAINT patch" }, { "msg_contents": "> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > On Wed, 23 May 2001, Christopher Kings-Lynne wrote:\n> >> For the add/drop constraint clauses would it be an idea to\n> change the syntax\n> >> to:\n> >>\n> >> ALTER TABLE [ ONLY ] x ADD CONSTRAINT x;\n> >> ALTER TABLE [ ONLY ] x DROP CONSTRAINT x;\n>\n> If the patch is coded in the same style as the existing ALTER code then\n> this will happen automatically. Are you looking at current development\n> tip as the comparison point for your changes?\n\nI'm not sure what you mean here, Tom - I meant that the ONLY keyword could\nbe optional. I know that most of the existing ADD CONSTRAINT code does not\npropagate constraints to children. However, if it was ever changed so that\nit did - would it be nice to allow the DBA to specify that it should not be\npropagated.\n\n> > A related question is whether or not you can drop a constraint on a\n> > subtable that's inherited from a parent.\n>\n> There is the question of whether it's a good idea to allow a constraint\n> to exist on a parent but not on its subtables.\n\nIt seems to me that someone needs to sit down and decide on the inheritance\nsemantics that should be enforced (ideally) and then they can be coded to\nthe design.\n\n> Seems like a bad idea to\n> me. But as long as the default is to propagate these changes, I'm not\n> really eager to prohibit DBAs from doing the other. 
Who's to say what's\n> a misuse of inheritance and what's not...\n\nAt the moment we have:\n\n* ADD CONSTRAINT does not propagate\n* If you create a table with a CHECK constraint, then create a table that\ninherits from that, the CHECK constraint _does_ propagate.\n\nSeems to me that these behaviours are inconsistent...\n\nChris\n\n", "msg_date": "Thu, 24 May 2001 10:40:36 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: ADD/DROP CONSTRAINT and inheritance " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> I'm not sure what you mean here, Tom - I meant that the ONLY keyword could\n> be optional.\n\nThe current gram.y code allows either ALTER TABLE foo ONLY or ALTER\nTABLE foo* for all forms of ALTER ... with the default interpretation\nbeing the latter.\n\n> At the moment we have:\n> * ADD CONSTRAINT does not propagate\n\nI doubt you will find anyone who's willing to argue that that's not a\nbug --- specifically, AlterTableAddConstraint()'s lack of inheritance\nrecursion like its siblings have. Feel free to fix it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 May 2001 22:59:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ADD/DROP CONSTRAINT and inheritance " }, { "msg_contents": "> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > I'm not sure what you mean here, Tom - I meant that the ONLY\n> keyword could\n> > be optional.\n>\n> The current gram.y code allows either ALTER TABLE foo ONLY or ALTER\n> TABLE foo* for all forms of ALTER ... 
with the default interpretation\n> being the latter.\n\nOops - ok, I didn't notice that...hmmm...maybe I should check my patch for\nthat before Bruce commits it...\n\n> > At the moment we have:\n> > * ADD CONSTRAINT does not propagate\n>\n> I doubt you will find anyone who's willing to argue that that's not a\n> bug --- specifically, AlterTableAddConstraint()'s lack of inheritance\n> recursion like its siblings have. Feel free to fix it.\n\nIt was next on my list. However, I don't want to step on Stephan's toes...\n\nChris\n\n", "msg_date": "Thu, 24 May 2001 11:34:06 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: ADD/DROP CONSTRAINT and inheritance " }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> > \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > > I'm not sure what you mean here, Tom - I meant that the ONLY\n> > keyword could\n> > > be optional.\n> >\n> > The current gram.y code allows either ALTER TABLE foo ONLY or ALTER\n> > TABLE foo* for all forms of ALTER ... with the default interpretation\n> > being the latter.\n> \n> Oops - ok, I didn't notice that...hmmm...maybe I should check my patch for\n> that before Bruce commits it...\n\nGot it. Patch on hold.\n\nAs you can see, inheritance really needs some attention. Feel free to\ngive it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 24 May 2001 00:29:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ADD/DROP CONSTRAINT and inheritance" }, { "msg_contents": "> > Your patch has been added to the PostgreSQL unapplied patches list at:\n> >\n> > \thttp://candle.pha.pa.us/cgi-bin/pgpatches\n> \n> Would you like me to submit a version minus all my questions? And what\n> about the inheritance feature - should it be left enabled (as is) at the\n> moment?\n\nYou can send me a totally new patch if you wish. That way, you can\nthrow the whole patch to the patches list for comment and improvement.\n\nIf you would prefer for me to apply it in pieces, I can.\n\nPlease keep going and resolve these inheritance issues.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 24 May 2001 00:31:40 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP CONSTRAINT patch" }, { "msg_contents": "> You can send me a totally new patch if you wish. That way, you can\n> throw the whole patch to the patches list for comment and improvement.\n\nI'll do that.\n\n> Please keep going and resolve these inheritance issues.\n\nIt's not that there's inheritance issues in my patch. I am actually certain\nthat it will work fine with the ONLY keyword. However, I will make some\nchanges and submit it to pgsql-patches.\n\nThe problem is that although my patch can quite happily remove all\nconstraints of the same name from all the child tables, the problem is - can\nwe be sure that all the constraints with that name are actually inherited\nfrom the parent constraint. Answer - no. At least, not until the add\nconstraint stuff is all fixed. 
So maybe I will just hash out inheritance\nsupport for the time being.\n\nChris\n\n", "msg_date": "Thu, 24 May 2001 12:40:20 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: DROP CONSTRAINT patch" }, { "msg_contents": "> > You can send me a totally new patch if you wish. That way, you can\n> > throw the whole patch to the patches list for comment and improvement.\n> \n> I'll do that.\n> \n> > Please keep going and resolve these inheritance issues.\n> \n> It's not that there's inheritance issues in my patch. I am actually certain\n> that it will work fine with the ONLY keyword. However, I will make some\n> changes and submit it to pgsql-patches.\n> \n> The problem is that although my patch can quite happily remove all\n> constraints of the same name from all the child tables, the problem is - can\n> we be sure that all the constraints with that name are actually inherited\n> from the parent constraint. Answer - no. At least, not until the add\n> constraint stuff is all fixed. So maybe I will just hash out inheritance\n> support for the time being.\n\nOh, I was hoping you could at least list _all_ the inheritance issues\nfor the TODO list. Seems you have found some already. If we have that,\nwe can get a plan for someone to finally fix them all.\n\nWhat we really need is a list, and a discussion, and decisions, rather\nthan just letting it lay around.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 24 May 2001 00:49:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP CONSTRAINT patch" }, { "msg_contents": "\n> > Seems like a bad idea to\n> > me. 
But as long as the default is to propagate these changes, I'm not\n> > really eager to prohibit DBAs from doing the other. Who's to say what's\n> > a misuse of inheritance and what's not...\n> \n> At the moment we have:\n> \n> * ADD CONSTRAINT does not propagate\n> * If you create a table with a CHECK constraint, then create a table that\n> inherits from that, the CHECK constraint _does_ propagate.\n> \n> Seems to me that these behaviours are inconsistent...\n\nYep, but I've got the minimal patch to fix ADD CONSTRAINT. I'm just\nwaiting for the upcoming weekend so I can add the regression tests and\npack it up.\n\n", "msg_date": "Wed, 23 May 2001 21:57:28 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "RE: ADD/DROP CONSTRAINT and inheritance " }, { "msg_contents": "\nThanks. Patch applied. @@ comments removed because patch was reviewed.\n\n> Hi all,\n> \n> Attached is my patch that adds DROP CONSTRAINT support to PostgreSQL. I\n> basically want your guys feedback. I have sprinkled some of my q's thru\n> the text delimited with the @@ symbol. It seems to work perfectly.\n> \n> At the moment it does CHECK constraints only, with inheritance. However,\n> due to the problem mentioned before with the mismatching between inherited\n> constraints it may be wise to disable the inheritance feature for a while.\n> it is written in an extensible fashion to support future dropping of other\n> types of constraint, and is well documented.\n> \n> Please send me your comments, check my use of locking, updating of\n> indices, use of ERROR and NOTICE, etc. and I will rework the patch based\n> on feedback until everyone\n> is happy with it...\n> \n> Chris\n\nContent-Description: \n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 30 May 2001 08:57:51 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP CONSTRAINT patch" }, { "msg_contents": "> \n> Thanks. Patch applied. @@ comments removed because patch was reviewed.\n> \n\nActually, can someone make sure all the @@ comments have been dealt\nwith? I assume people checked them, but maybe not.\n\n\n> > Hi all,\n> > \n> > Attached is my patch that adds DROP CONSTRAINT support to PostgreSQL. I\n> > basically want your guys feedback. I have sprinkled some of my q's thru\n> > the text delimited with the @@ symbol. It seems to work perfectly.\n> > \n> > At the moment it does CHECK constraints only, with inheritance. However,\n> > due to the problem mentioned before with the mismatching between inherited\n> > constraints it may be wise to disable the inheritance feature for a while.\n> > it is written in an extensible fashion to support future dropping of other\n> > types of constraint, and is well documented.\n> > \n> > Please send me your comments, check my use of locking, updating of\n> > indices, use of ERROR and NOTICE, etc. and I will rework the patch based\n> > on feedback until everyone\n> > is happy with it...\n> > \n> > Chris\n> \n> Content-Description: \n> \n> [ Attachment, skipping... 
]\n> \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> > \n> > http://www.postgresql.org/search.mpl\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n? config.log\n? config.cache\n? config.status\n? dropcons.diff\n? GNUmakefile\n? src/GNUmakefile\n? src/Makefile.global\n? src/backend/postgres\n? src/backend/catalog/global.bki\n? src/backend/catalog/global.description\n? src/backend/catalog/template1.bki\n? src/backend/catalog/template1.description\n? src/backend/port/Makefile\n? src/bin/initdb/initdb\n? src/bin/initlocation/initlocation\n? src/bin/ipcclean/ipcclean\n? src/bin/pg_config/pg_config\n? src/bin/pg_ctl/pg_ctl\n? src/bin/pg_dump/pg_dump\n? src/bin/pg_dump/pg_restore\n? src/bin/pg_dump/pg_dumpall\n? src/bin/pg_id/pg_id\n? src/bin/pg_passwd/pg_passwd\n? src/bin/psql/psql\n? src/bin/scripts/createlang\n? src/include/config.h\n? src/include/stamp-h\n? src/interfaces/ecpg/lib/libecpg.so.3\n? src/interfaces/ecpg/preproc/ecpg\n? src/interfaces/libpgeasy/libpgeasy.so.2\n? src/interfaces/libpq/libpq.so.2\n? src/pl/plpgsql/src/libplpgsql.so.1\n? src/test/regress/log\n? src/test/regress/pg_regress\n? src/test/regress/tmp_check\n? src/test/regress/results\n? src/test/regress/expected/copy.out\n? src/test/regress/expected/create_function_1.out\n? src/test/regress/expected/create_function_2.out\n? src/test/regress/expected/misc.out\n? 
src/test/regress/expected/constraints.out\n? src/test/regress/sql/copy.sql\n? src/test/regress/sql/create_function_1.sql\n? src/test/regress/sql/create_function_2.sql\n? src/test/regress/sql/misc.sql\n? src/test/regress/sql/constraints.sql\nIndex: src/backend/catalog/heap.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/catalog/heap.c,v\nretrieving revision 1.165\ndiff -c -r1.165 heap.c\n*** src/backend/catalog/heap.c\t2001/05/14 20:30:19\t1.165\n--- src/backend/catalog/heap.c\t2001/05/20 07:27:27\n***************\n*** 48,53 ****\n--- 48,54 ----\n #include \"miscadmin.h\"\n #include \"optimizer/clauses.h\"\n #include \"optimizer/planmain.h\"\n+ #include \"optimizer/prep.h\"\n #include \"optimizer/var.h\"\n #include \"nodes/makefuncs.h\"\n #include \"parser/parse_clause.h\"\n***************\n*** 1975,1980 ****\n--- 1976,2138 ----\n \n \theap_endscan(rcscan);\n \theap_close(rcrel, RowExclusiveLock);\n+ \n+ }\n+ \n+ /*\n+ * Removes all CHECK constraints on a relation that match the given name.\n+ * It is the responsibility of the calling function to acquire a lock on\n+ * the relation.\n+ * Returns: The number of CHECK constraints removed.\n+ */\n+ int\n+ RemoveCheckConstraint(Relation rel, const char *constrName, bool inh)\n+ {\n+ Oid relid;\n+ \tRelation\t\t\trcrel;\n+ \tRelation\t\t\trelrel;\n+ \tRelation\t\t\tinhrel;\n+ \tRelation\t\t\trelidescs[Num_pg_class_indices];\n+ \tTupleDesc\t\ttupleDesc;\n+ \tTupleConstr\t\t*oldconstr;\n+ \tint\t\t\t\tnumoldchecks;\n+ \tint\t\t\t\tnumchecks;\n+ \tHeapScanDesc\trcscan;\n+ \tScanKeyData\t\tkey[2];\n+ \tHeapTuple\t\trctup;\n+ \tHeapTuple\t\treltup;\n+ \tForm_pg_class\trelStruct;\n+ \tint\t\t\t\trel_deleted = 0;\n+ int all_deleted = 0;\n+ \n+ /* Find id of the relation */\n+ /* @@ Does this need to be done _after_ the heap_open\n+ has acquired a lock? 
@@ */\n+ relid = RelationGetRelid(rel);\n+ \n+ /* Process child tables and remove constraints of the\n+ same name. */\n+ if (inh)\n+ {\n+ List *child,\n+ *children;\n+ \n+ /* This routine is actually in the planner */\n+ children = find_all_inheritors(relid);\n+ \n+ /*\n+ * find_all_inheritors does the recursive search of the\n+ * inheritance hierarchy, so all we have to do is process all\n+ * of the relids in the list that it returns.\n+ */\n+ foreach(child, children)\n+ {\n+ Oid\tchildrelid = lfirsti(child);\n+ \n+ if (childrelid == relid)\n+ continue;\n+ inhrel = heap_open(childrelid, AccessExclusiveLock);\n+ all_deleted += RemoveCheckConstraint(inhrel, constrName, false);\n+ heap_close(inhrel, NoLock);\n+ }\n+ }\n+ \n+ \t/* Grab an exclusive lock on the pg_relcheck relation */\n+ \trcrel = heap_openr(RelCheckRelationName, RowExclusiveLock);\n+ \n+ \t/*\n+ \t * Create two scan keys. We need to match on the oid of the table\n+ \t * the CHECK is in and also we need to match the name of the CHECK\n+ \t * constraint.\n+ \t */\n+ \tScanKeyEntryInitialize(&key[0], 0, Anum_pg_relcheck_rcrelid,\n+ \t\t\t\t\t\t F_OIDEQ, RelationGetRelid(rel));\n+ \n+ \t/* @@ F_NAMEEQ correct? @@ */\n+ \tScanKeyEntryInitialize(&key[1], 0, Anum_pg_relcheck_rcname,\n+ \t\t\t\t\t\t F_NAMEEQ, PointerGetDatum(constrName));\n+ \n+ \t/* @@ &key needed here? @@ */\n+ \t/* Begin scanning the heap */\n+ \trcscan = heap_beginscan(rcrel, 0, SnapshotNow, 2, key);\n+ \n+ \t/*\n+ \t * Scan over the result set, removing any matching entries. Note\n+ \t * that this has the side-effect of removing ALL CHECK constraints\n+ \t * that share the specified constraint name.\n+ \t */\n+ \twhile (HeapTupleIsValid(rctup = heap_getnext(rcscan, 0))) {\n+ \t\tsimple_heap_delete(rcrel, &rctup->t_self);\n+ \t\t++rel_deleted;\n+ ++all_deleted;\n+ \t}\n+ \n+ \t/* Clean up after the scan */\n+ \theap_endscan(rcscan);\n+ \n+ \t/* @@ Do I need to heap_close here? 
@@ */\n+ \n+ \t/*\n+ \t * @@ I copied this from elsewhere - is it still appropriate? @@\n+ \t * Update the count of constraints in the relation's pg_class tuple.\n+ \t * We do this even if there was no change, in order to ensure that an\n+ \t * SI update message is sent out for the pg_class tuple, which will\n+ \t * force other backends to rebuild their relcache entries for the rel.\n+ \t * (Of course, for a newly created rel there is no need for an SI\n+ \t * message, but for ALTER TABLE ADD ATTRIBUTE this'd be important.)\n+ \t */\n+ \n+ \t/*\n+ \t * Get number of existing constraints.\n+ \t * @@ Does this need to be above where we delete them!? It seems\n+ * to work like it is! @@\n+ \t */\n+ \n+ \ttupleDesc = RelationGetDescr(rel);\n+ \toldconstr = tupleDesc->constr;\n+ \tif (oldconstr)\n+ \t\tnumoldchecks = oldconstr->num_check;\n+ \telse\n+ \t\tnumoldchecks = 0;\n+ \t/* @@ Do we need to pfree oldconstr? @@ */\n+ \n+ \t/* Calculate the new number of checks in the table, fail if negative */\n+ \tnumchecks = numoldchecks - rel_deleted;\n+ \n+ \tif (numchecks < 0)\n+ \t\telog(ERROR, \"check count became negative\");\n+ \n+ \t/* @@ Should I check if rel_deleted > 0 here for\n+ efficiency? (re: SI Updates)@@ */\n+ \t/* @@ Do I need to heap_open here? 
@@ */\n+ \trelrel = heap_openr(RelationRelationName, RowExclusiveLock);\n+ \treltup = SearchSysCacheCopy(RELOID,\n+ \t\tObjectIdGetDatum(RelationGetRelid(rel)), 0, 0, 0);\n+ \n+ \tif (!HeapTupleIsValid(reltup))\n+ \t\telog(ERROR, \"cache lookup of relation %u failed\",\n+ \t\t\t RelationGetRelid(rel));\n+ \trelStruct = (Form_pg_class) GETSTRUCT(reltup);\n+ \n+ \trelStruct->relchecks = numchecks;\n+ \n+ \tsimple_heap_update(relrel, &reltup->t_self, reltup);\n+ \n+ \t/* Keep catalog indices current */\n+ \tCatalogOpenIndices(Num_pg_class_indices, Name_pg_class_indices,\n+ \t\t\t\t\t relidescs);\n+ \tCatalogIndexInsert(relidescs, Num_pg_class_indices, relrel, reltup);\n+ \tCatalogCloseIndices(Num_pg_class_indices, relidescs);\n+ \n+ \t/* Clean up after the scan */\n+ \theap_freetuple(reltup);\n+ \theap_close(relrel, RowExclusiveLock);\n+ \n+ \t/* Close the heap relation */\n+ \theap_close(rcrel, RowExclusiveLock);\n+ \n+ \t/* Return the number of tuples deleted */\n+ \treturn all_deleted;\n }\n \n static void\nIndex: src/backend/commands/command.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/commands/command.c,v\nretrieving revision 1.127\ndiff -c -r1.127 command.c\n*** src/backend/commands/command.c\t2001/05/09 21:10:38\t1.127\n--- src/backend/commands/command.c\t2001/05/20 07:27:32\n***************\n*** 51,59 ****\n #include \"catalog/pg_shadow.h\"\n #include \"utils/relcache.h\"\n \n- #ifdef\t_DROP_COLUMN_HACK__\n #include \"parser/parse.h\"\n- #endif\t /* _DROP_COLUMN_HACK__ */\n #include \"access/genam.h\"\n \n \n--- 51,57 ----\n***************\n*** 1322,1327 ****\n--- 1320,1330 ----\n \n \t\t\t\t\t\t\tbreak;\n \t\t\t\t\t\t}\n+ \t\t\t\t\tcase CONSTR_PRIMARY:\n+ \t\t\t\t\t\t{\n+ \n+ \t\t\t\t\t\t\tbreak;\n+ \t\t\t\t\t\t}\n \t\t\t\t\tdefault:\n \t\t\t\t\t\telog(ERROR, \"ALTER TABLE / ADD CONSTRAINT is not implemented for that constraint type.\");\n \t\t\t\t}\n***************\n*** 
1583,1595 ****\n \n /*\n * ALTER TABLE DROP CONSTRAINT\n */\n void\n AlterTableDropConstraint(const char *relationName,\n \t\t\t\t\t\t bool inh, const char *constrName,\n \t\t\t\t\t\t int behavior)\n {\n! \telog(ERROR, \"ALTER TABLE / DROP CONSTRAINT is not implemented\");\n }\n \n \n--- 1586,1657 ----\n \n /*\n * ALTER TABLE DROP CONSTRAINT\n+ * Note: It is legal to remove a constraint with name \"\" as it is possible\n+ * to add a constraint with name \"\".\n+ * Christopher Kings-Lynne\n */\n void\n AlterTableDropConstraint(const char *relationName,\n \t\t\t\t\t\t bool inh, const char *constrName,\n \t\t\t\t\t\t int behavior)\n {\n! \tRelation\t\trel;\n! \tint\t\t\tdeleted;\n! \n! #ifndef NO_SECURITY\n! \tif (!pg_ownercheck(GetUserId(), relationName, RELNAME))\n! \t\telog(ERROR, \"ALTER TABLE: permission denied\");\n! #endif\n! \n! \t/* We don't support CASCADE yet - in fact, RESTRICT\n! \t * doesn't work to the spec either! */\n! \tif (behavior == CASCADE)\n! \t\telog(ERROR, \"ALTER TABLE / DROP CONSTRAINT does not support the CASCADE keyword\");\n! \n! \t/*\n! \t * Acquire an exclusive lock on the target relation for\n! \t * the duration of the operation.\n! \t * @@ AccessExclusiveLock or RowExclusiveLock??? @@\n! \t */\n! \n! \trel = heap_openr(relationName, AccessExclusiveLock);\n! \n! \t/* Disallow DROP CONSTRAINT on views, indexes, sequences, etc */\n! \tif (rel->rd_rel->relkind != RELKIND_RELATION)\n! \t\telog(ERROR, \"ALTER TABLE / DROP CONSTRAINT: %s is not a table\",\n! \t\t\t relationName);\n! \n! \t/*\n! \t * Since all we have is the name of the constraint, we have to look through\n! \t * all catalogs that could possibly contain a constraint for this relation.\n! \t * We also keep a count of the number of constraints removed.\n! \t */\n! \n! \tdeleted = 0;\n! \n! \t/*\n! \t * First, we remove all CHECK constraints with the given name\n! \t */\n! \n! \tdeleted += RemoveCheckConstraint(rel, constrName, inh);\n! \n! \t/*\n! 
\t * Now we remove NULL, UNIQUE, PRIMARY KEY and FOREIGN KEY constraints.\n! \t *\n! \t * Unimplemented.\n! \t */\n! \n! \t/* Close the target relation */\n! \theap_close(rel, NoLock);\n! \n! \t/* If zero constraints deleted, complain */\n! \tif (deleted == 0)\n! \t\telog(ERROR, \"ALTER TABLE / DROP CONSTRAINT: %s does not exist\",\n! \t\t\t constrName);\n! \t/* Otherwise if more than one constraint deleted, notify */\n! \telse if (deleted > 1)\n! \t\telog(NOTICE, \"Multiple constraints dropped\");\n! \n }\n \n \nIndex: src/include/catalog/heap.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/catalog/heap.h,v\nretrieving revision 1.35\ndiff -c -r1.35 heap.h\n*** src/include/catalog/heap.h\t2001/05/07 00:43:24\t1.35\n--- src/include/catalog/heap.h\t2001/05/20 07:27:38\n***************\n*** 45,50 ****\n--- 45,53 ----\n \t\t\t\t\t\t List *rawColDefaults,\n \t\t\t\t\t\t List *rawConstraints);\n \n+ extern int RemoveCheckConstraint(Relation rel, const char *constrName, bool inh);\n+ \n+ \n extern Form_pg_attribute SystemAttributeDefinition(AttrNumber attno);\n \n #endif\t /* HEAP_H */", "msg_date": "Wed, 30 May 2001 09:33:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP CONSTRAINT patch" } ]
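To make the semantics discussed in this thread concrete, here is a usage sketch of what the patch implements (all table and constraint names are hypothetical; per the patch as posted, only CHECK constraints are handled, the drop recurses to child tables unless ONLY is used, and removing a name that matches several constraints elicits a NOTICE):

```sql
CREATE TABLE parent (x integer CONSTRAINT x_pos CHECK (x > 0));
CREATE TABLE child () INHERITS (parent);

-- Removes the CHECK constraint x_pos from parent and,
-- via find_all_inheritors(), recursively from child:
ALTER TABLE parent DROP CONSTRAINT x_pos;

-- Per the discussion above, ONLY would drop just the parent's
-- copy, leaving the inherited copy on child behind:
-- ALTER TABLE ONLY parent DROP CONSTRAINT x_pos;
```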
[ { "msg_contents": "Hello,\n\nAs I understand it:\nIssuing a �select for update� within a transaction will prevent other users\nfrom modifying data. (exclusive lock)\n\nI�m contemplating using the DBI interface as a permanent client to postgres\nfor a group of users, and have a few questions.\n\nWill the above procedure still exclusively lock the row, when using a\nbrowser to issue a �select for update�?\nI�m confused about sth->finish; and will it finish the transaction, before\nthe user re-submits the form data to update the row....\n\nCan anyone help me or offer some �howtos� to read regarding postgres and\nDBI? \n\nthanks.\n\nRaoul. \n\n\n\n\"select for update\" question....\n\n\nHello,\n\nAs I understand it:\nIssuing a “select for update” within a transaction will prevent other users from modifying data. (exclusive lock)\n\nI’m contemplating using the DBI interface as a permanent client to postgres for a group of users, and have a few questions.\n\nWill the above procedure still exclusively lock the row, when using a browser to issue a “select for update”?\nI’m confused about sth->finish; and will it finish the transaction, before the user re-submits the form data to update the row....\n\nCan anyone help me or offer some “howtos” to read regarding postgres and DBI?  \n\nthanks.\n\nRaoul.", "msg_date": "Sun, 20 May 2001 19:31:04 GMT", "msg_from": "Raoul Callaghan <ausit@bigpond.net.au>", "msg_from_op": true, "msg_subject": "\"select for update\" question...." } ]
[ { "msg_contents": "As long as you're fixing bugs in pgindent, here are some more follies\nexhibited by the most recent pgindent run. Some of these bugs have\nbeen there for awhile, but at least one (the removal of space before\na same-line comment) seems to be new as of the most recent run.\n\nThe examples are all taken from \n\n$ cd .../src/backend/utils/adt/\n$ cvs diff -c -r1.85 -r1.86 selfuncs.c\n\nbut there are many similar cases elsewhere.\n\n1. I can sort of understand inserting space before an #ifdef (though I do\nnot agree with it), but why before #endif? Why does the first example\ninsert a blank line *only* before the #endif and not its matching #if?\nIf you're trying to set off the #if block from surrounding code, seems\nlike a blank line *after* the #endif would help, not one before. But\nnone of the here-inserted blank lines seem to me to improve readability;\nI'd vote for not making any such changes.\n\n***************\n*** 648,653 ****\n--- 653,659 ----\n {\n #ifdef NOT_USED\t\t\t\t\t/* see neqjoinsel() before removing me! */\n \tOid\t\t\topid = PG_GETARG_OID(0);\n+ \n #endif\n \tOid\t\t\trelid1 = PG_GETARG_OID(1);\n \tAttrNumber\tattno1 = PG_GETARG_INT16(2);\n***************\n*** 1078,1087 ****\n--- 1091,1102 ----\n convert_string_datum(Datum value, Oid typid)\n {\n \tchar\t *val;\n+ \n #ifdef USE_LOCALE\n \tchar\t *xfrmstr;\n \tsize_t\t\txfrmsize;\n \tsize_t\t\txfrmlen;\n+ \n #endif\n \n \tswitch (typid)\n***************\n\n2. Here are two examples of a long-standing bug: pgindent frequently\n(but not always) mis-indents the first 'case' line(s) of a switch.\nI don't agree with its indentation of block comments between case\nentries, either.\n\n***************\n*** 845,892 ****\n \tswitch (valuetypid)\n \t{\n \n! \t\t/*\n! \t\t * Built-in numeric types\n! \t\t */\n! \t\tcase BOOLOID:\n! \t\tcase INT2OID:\n! \t\tcase INT4OID:\n! \t\tcase INT8OID:\n! \t\tcase FLOAT4OID:\n! \t\tcase FLOAT8OID:\n! \t\tcase NUMERICOID:\n! \t\tcase OIDOID:\n! 
\t\tcase REGPROCOID:\n \t\t\t*scaledvalue = convert_numeric_to_scalar(value, valuetypid);\n \t\t\t*scaledlobound = convert_numeric_to_scalar(lobound, boundstypid);\n \t\t\t*scaledhibound = convert_numeric_to_scalar(hibound, boundstypid);\n \t\t\treturn true;\n \n! \t\t/*\n! \t\t * Built-in string types\n! \t\t */\n \t\tcase CHAROID:\n \t\tcase BPCHAROID:\n \t\tcase VARCHAROID:\n--- 851,898 ----\n \tswitch (valuetypid)\n \t{\n \n! \t\t\t/*\n! \t\t\t * Built-in numeric types\n! \t\t\t */\n! \t\t\tcase BOOLOID:\n! \t\t\tcase INT2OID:\n! \t\t\tcase INT4OID:\n! \t\t\tcase INT8OID:\n! \t\t\tcase FLOAT4OID:\n! \t\t\tcase FLOAT8OID:\n! \t\t\tcase NUMERICOID:\n! \t\t\tcase OIDOID:\n! \t\t\tcase REGPROCOID:\n \t\t\t*scaledvalue = convert_numeric_to_scalar(value, valuetypid);\n \t\t\t*scaledlobound = convert_numeric_to_scalar(lobound, boundstypid);\n \t\t\t*scaledhibound = convert_numeric_to_scalar(hibound, boundstypid);\n \t\t\treturn true;\n \n! \t\t\t/*\n! \t\t\t * Built-in string types\n! \t\t\t */\n \t\tcase CHAROID:\n \t\tcase BPCHAROID:\n \t\tcase VARCHAROID:\n***************\n*** 911,917 ****\n {\n \tswitch (typid)\n \t{\n! \t\tcase BOOLOID:\n \t\t\treturn (double) DatumGetBool(value);\n \t\tcase INT2OID:\n \t\t\treturn (double) DatumGetInt16(value);\n--- 917,923 ----\n {\n \tswitch (typid)\n \t{\n! \t\t\tcase BOOLOID:\n \t\t\treturn (double) DatumGetBool(value);\n \t\tcase INT2OID:\n \t\t\treturn (double) DatumGetInt16(value);\n***************\n\n3. This is new misbehavior as of the last pgindent run: whitespace\nbetween a statement and a comment on the same line is sometimes removed.\n\n***************\n*** 1120,1126 ****\n \n #ifdef USE_LOCALE\n \t/* Guess that transformed string is not much bigger than original */\n! \txfrmsize = strlen(val) + 32;\t\t/* arbitrary pad value here... 
*/\n \txfrmstr = (char *) palloc(xfrmsize);\n \txfrmlen = strxfrm(xfrmstr, val, xfrmsize);\n \tif (xfrmlen >= xfrmsize)\n--- 1137,1143 ----\n \n #ifdef USE_LOCALE\n \t/* Guess that transformed string is not much bigger than original */\n! \txfrmsize = strlen(val) + 32;/* arbitrary pad value here... */\n \txfrmstr = (char *) palloc(xfrmsize);\n \txfrmlen = strxfrm(xfrmstr, val, xfrmsize);\n \tif (xfrmlen >= xfrmsize)\n***************\n\n4. This breaking of a comment attached to a #define scares me.\nApparently gcc still thinks the comment is valid, but it seems to me\nthat this is making not-very-portable assumptions about the behavior of\nthe preprocessor. I always thought that you needed to use backslashes\nto continue a preprocessor command onto the next line reliably.\n\n***************\n*** 1691,1705 ****\n \n #define FIXED_CHAR_SEL\t0.04\t/* about 1/25 */\n #define CHAR_RANGE_SEL\t0.25\n! #define ANY_CHAR_SEL\t0.9\t\t/* not 1, since it won't match end-of-string */\n #define FULL_WILDCARD_SEL 5.0\n #define PARTIAL_WILDCARD_SEL 2.0\n \n--- 1718,1733 ----\n \n #define FIXED_CHAR_SEL\t0.04\t/* about 1/25 */\n #define CHAR_RANGE_SEL\t0.25\n! #define ANY_CHAR_SEL\t0.9\t\t/* not 1, since it won't match\n! \t\t\t\t\t\t\t\t * end-of-string */\n #define FULL_WILDCARD_SEL 5.0\n #define PARTIAL_WILDCARD_SEL 2.0\n \n***************\n\n5. Here's an interesting one: why was the \"return\" line misindented?\nI don't particularly agree with the handling of the comments on the\n#else and #endif lines either... they're not even mutually consistent,\nlet alone stylistically good.\n\n***************\n*** 1904,1912 ****\n \telse\n \t\tresult = false;\n \treturn (bool) result;\n! #else /* not USE_LOCALE */\n! \treturn true;\t\t\t\t/* We must be in C locale, which is OK */\n! #endif /* USE_LOCALE */\n }\n \n /*\n--- 1935,1943 ----\n \telse\n \t\tresult = false;\n \treturn (bool) result;\n! #else\t\t\t\t\t\t\t/* not USE_LOCALE */\n! 
\t\t\t\treturn true;\t/* We must be in C locale, which is OK */\n! #endif\t /* USE_LOCALE */\n }\n \n /*\n***************\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 May 2001 17:03:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "More pgindent follies" }, { "msg_contents": "\nOK, I have fixes for all of these. BSD indent clearly has some bugs,\nand ironically, the indent source code is quite ugly. The best way to\nwork around them is with pre/post processing, which we already do.\n\nI just checked FreeBSD and NetBSD and they don't seem to have added too\nmuch to BSD indent it since it left Berkeley. They have upped the\nkeyword limit to 1000, but that is still too small for us, and I don't\nsee any other major work being done.\n\nLooks like GNU indent development was picked up in 1999 again, so it may\nbe worth another look:\n\n-r--r--r-- 1 34 21 160411 Feb 3 1994 indent-1.9.1.tar.gz\n-r--r--r-- 1 34 21 143307 May 27 1999 indent-1.10.0.tar.gz\n-r--r--r-- 1 34 21 183160 Jul 4 1999 indent-2.1.0.tar.gz\n-r--r--r-- 1 34 21 183999 Jul 15 1999 indent-2.1.1.tar.gz\n-r--r--r-- 1 34 21 186287 Jul 24 1999 indent-2.2.0.tar.gz\n-r--r--r-- 1 34 21 191303 Sep 25 1999 indent-2.2.1.tar.gz\n-r--r--r-- 1 34 21 192872 Sep 28 1999 indent-2.2.2.tar.gz\n-r--r--r-- 1 34 21 198319 Oct 1 1999 indent-2.2.3.tar.gz\n-r--r--r-- 1 34 21 197332 Nov 4 1999 indent-2.2.4.tar.gz\n-r--r--r-- 1 34 21 203764 Jan 16 2000 indent-2.2.5.tar.gz\n-r--r--r-- 1 34 21 222510 Nov 17 2000 indent-2.2.6.tar.gz\n\nOK, here are my comments.\n\n\n> As long as you're fixing bugs in pgindent, here are some more follies\n> exhibited by the most recent pgindent run. Some of these bugs have\n> been there for awhile, but at least one (the removal of space before\n> a same-line comment) seems to be new as of the most recent run.\n> \n\nGlad to fix these. 
I hacked together pgindent just to match what I\nthought looked good, so if people have things that look bad, let me\nknow.\n\n\n> The examples are all taken from \n> \n> $ cd .../src/backend/utils/adt/\n> $ cvs diff -c -r1.85 -r1.86 selfuncs.c\n\nGreat that you found them all in one file, or bad that they all existed\nin one file. :-)\n\n\n> but there are many similar cases elsewhere.\n> \n> 1. I can sort of understand inserting space before an #ifdef (though I do\n> not agree with it), but why before #endif? Why does the first example\n> insert a blank line *only* before the #endif and not its matching #if?\n> If you're trying to set off the #if block from surrounding code, seems\n> like a blank line *after* the #endif would help, not one before. But\n> none of the here-inserted blank lines seem to me to improve readability;\n> I'd vote for not making any such changes.\n> \n> ***************\n> *** 648,653 ****\n> --- 653,659 ----\n> {\n> #ifdef NOT_USED\t\t\t\t\t/* see neqjoinsel() before removing me! */\n> \tOid\t\t\topid = PG_GETARG_OID(0);\n> + \n> #endif\n> \tOid\t\t\trelid1 = PG_GETARG_OID(1);\n> \tAttrNumber\tattno1 = PG_GETARG_INT16(2);\n> ***************\n> *** 1078,1087 ****\n> --- 1091,1102 ----\n> convert_string_datum(Datum value, Oid typid)\n> {\n> \tchar\t *val;\n> + \n> #ifdef USE_LOCALE\n> \tchar\t *xfrmstr;\n> \tsize_t\t\txfrmsize;\n> \tsize_t\t\txfrmlen;\n> + \n> #endif\n> \n> \tswitch (typid)\n> ***************\n\nFixed with 'awk' script to remove blank line above #endif.\n\n> \n> 2. Here are two examples of a long-standing bug: pgindent frequently\n> (but not always) mis-indents the first 'case' line(s) of a switch.\n> I don't agree with its indentation of block comments between case\n> entries, either.\n> \n> ***************\n> *** 845,892 ****\n> \tswitch (valuetypid)\n> \t{\n> \n> ! \t\t/*\n> ! \t\t * Built-in numeric types\n> ! \t\t */\n> ! \t\tcase BOOLOID:\n> ! \t\tcase INT2OID:\n> ! \t\tcase INT4OID:\n> ! \t\tcase INT8OID:\n> ! 
\t\tcase FLOAT4OID:\n> ! \t\tcase FLOAT8OID:\n> ! \t\tcase NUMERICOID:\n> ! \t\tcase OIDOID:\n> ! \t\tcase REGPROCOID:\n> \t\t\t*scaledvalue = convert_numeric_to_scalar(value, valuetypid);\n> \t\t\t*scaledlobound = convert_numeric_to_scalar(lobound, boundstypid);\n> \t\t\t*scaledhibound = convert_numeric_to_scalar(hibound, boundstypid);\n> \t\t\treturn true;\n> \n> ! \t\t/*\n> ! \t\t * Built-in string types\n> ! \t\t */\n> \t\tcase CHAROID:\n> \t\tcase BPCHAROID:\n> \t\tcase VARCHAROID:\n> --- 851,898 ----\n> \tswitch (valuetypid)\n> \t{\n> \n> ! \t\t\t/*\n> ! \t\t\t * Built-in numeric types\n> ! \t\t\t */\n> ! \t\t\tcase BOOLOID:\n> ! \t\t\tcase INT2OID:\n> ! \t\t\tcase INT4OID:\n> ! \t\t\tcase INT8OID:\n> ! \t\t\tcase FLOAT4OID:\n> ! \t\t\tcase FLOAT8OID:\n> ! \t\t\tcase NUMERICOID:\n> ! \t\t\tcase OIDOID:\n> ! \t\t\tcase REGPROCOID:\n> \t\t\t*scaledvalue = convert_numeric_to_scalar(value, valuetypid);\n> \t\t\t*scaledlobound = convert_numeric_to_scalar(lobound, boundstypid);\n> \t\t\t*scaledhibound = convert_numeric_to_scalar(hibound, boundstypid);\n> \t\t\treturn true;\n> \n> ! \t\t\t/*\n> ! \t\t\t * Built-in string types\n> ! \t\t\t */\n> \t\tcase CHAROID:\n> \t\tcase BPCHAROID:\n> \t\tcase VARCHAROID:\n> ***************\n> *** 911,917 ****\n> {\n> \tswitch (typid)\n> \t{\n> ! \t\tcase BOOLOID:\n> \t\t\treturn (double) DatumGetBool(value);\n> \t\tcase INT2OID:\n> \t\t\treturn (double) DatumGetInt16(value);\n> --- 917,923 ----\n> {\n> \tswitch (typid)\n> \t{\n> ! \t\t\tcase BOOLOID:\n> \t\t\treturn (double) DatumGetBool(value);\n> \t\tcase INT2OID:\n> \t\t\treturn (double) DatumGetInt16(value);\n> ***************\n\nThis is actually caused by functions that do not define local variables\nbut start with switch:\n\n\n\tint\n\tfunc(int x)\n\t{\n\t\tswitch (x)\n\t\t{\n\t\t\tcase 1:\n\t\t\tcase 2:\n\nin these cases, case 1 and case2 get indented too far. 
They actually\nget indented as though they were variable names, which is clearly wrong.\nBSD indent has a bug in such cases, so the trick was to add:\n\n\tint pgindent_func_no_var_fix;\n\nbefore indent is run, and after remove it and an optional blank line if\nit was the only variable definition.\n\n> \n> 3. This is new misbehavior as of the last pgindent run: whitespace\n> between a statement and a comment on the same line is sometimes removed.\n> \n> ***************\n> *** 1120,1126 ****\n> \n> #ifdef USE_LOCALE\n> \t/* Guess that transformed string is not much bigger than original */\n> ! \txfrmsize = strlen(val) + 32;\t\t/* arbitrary pad value here... */\n> \txfrmstr = (char *) palloc(xfrmsize);\n> \txfrmlen = strxfrm(xfrmstr, val, xfrmsize);\n> \tif (xfrmlen >= xfrmsize)\n> --- 1137,1143 ----\n> \n> #ifdef USE_LOCALE\n> \t/* Guess that transformed string is not much bigger than original */\n> ! \txfrmsize = strlen(val) + 32;/* arbitrary pad value here... */\n> \txfrmstr = (char *) palloc(xfrmsize);\n> \txfrmlen = strxfrm(xfrmstr, val, xfrmsize);\n> \tif (xfrmlen >= xfrmsize)\n> ***************\n\n\nThis is happening because it has landed on a tab stop and isn't adding\nanother one. I have added a 'sed' line to properly push these to the\nnext tab stop.\n\n> \n> 4. This breaking of a comment attached to a #define scares me.\n> Apparently gcc still thinks the comment is valid, but it seems to me\n> that this is making not-very-portable assumptions about the behavior of\n> the preprocessor. I always thought that you needed to use backslashes\n> to continue a preprocessor command onto the next line reliably.\n> \n> ***************\n> *** 1691,1705 ****\n> \n> #define FIXED_CHAR_SEL\t0.04\t/* about 1/25 */\n> #define CHAR_RANGE_SEL\t0.25\n> ! 
#define ANY_CHAR_SEL\t0.9\t\t/* not 1, since it won't match end-of-string */\n> #define FULL_WILDCARD_SEL 5.0\n> #define PARTIAL_WILDCARD_SEL 2.0\n> \n> --- 1718,1733 ----\n> \n> #define FIXED_CHAR_SEL\t0.04\t/* about 1/25 */\n> #define CHAR_RANGE_SEL\t0.25\n> ! #define ANY_CHAR_SEL\t0.9\t\t/* not 1, since it won't match\n> ! \t\t\t\t\t\t\t\t * end-of-string */\n> #define FULL_WILDCARD_SEL 5.0\n> #define PARTIAL_WILDCARD_SEL 2.0\n> \n> ***************\n\nI don't see the problem here. My assumption is that the comment is not\npart of the define, right?\n\n\n> \n> 5. Here's an interesting one: why was the \"return\" line misindented?\n> I don't particularly agree with the handling of the comments on the\n> #else and #endif lines either... they're not even mutually consistent,\n> let alone stylistically good.\n> \n> ***************\n> *** 1904,1912 ****\n> \telse\n> \t\tresult = false;\n> \treturn (bool) result;\n> ! #else /* not USE_LOCALE */\n> ! \treturn true;\t\t\t\t/* We must be in C locale, which is OK */\n> ! #endif /* USE_LOCALE */\n> }\n> \n> /*\n> --- 1935,1943 ----\n> \telse\n> \t\tresult = false;\n> \treturn (bool) result;\n> ! #else\t\t\t\t\t\t\t/* not USE_LOCALE */\n> ! \t\t\t\treturn true;\t/* We must be in C locale, which is OK */\n> ! #endif\t /* USE_LOCALE */\n> }\n> \n> /*\n> ***************\n\nSame cause as #2 (switch/case indent). Fixed.\n\nI will cvs commit the pgindent changes, and send a diff of selfuncs.c to\npatches so people can see the fixes. Fixing the entire tree will have\nto wait for 7.2 beta.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 21 May 2001 21:22:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: More pgindent follies" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> 4. This breaking of a comment attached to a #define scares me.\n>> \n>> ***************\n>> *** 1691,1705 ****\n>> \n>> #define FIXED_CHAR_SEL\t0.04\t/* about 1/25 */\n>> #define CHAR_RANGE_SEL\t0.25\n>> ! #define ANY_CHAR_SEL\t0.9\t\t/* not 1, since it won't match end-of-string */\n>> #define FULL_WILDCARD_SEL 5.0\n>> #define PARTIAL_WILDCARD_SEL 2.0\n>> \n>> --- 1718,1733 ----\n>> \n>> #define FIXED_CHAR_SEL\t0.04\t/* about 1/25 */\n>> #define CHAR_RANGE_SEL\t0.25\n>> ! #define ANY_CHAR_SEL\t0.9\t\t/* not 1, since it won't match\n>> ! \t\t\t\t\t\t\t\t * end-of-string */\n>> #define FULL_WILDCARD_SEL 5.0\n>> #define PARTIAL_WILDCARD_SEL 2.0\n>> \n>> ***************\n\n> I don't see the problem here. My assumption is that the comment is not\n> part of the define, right?\n\nWell, that's the question. ANSI C requires comments to be replaced by\nwhitespace before preprocessor commands are detected/executed, but there\nwas an awful lot of variation in preprocessor behavior before ANSI.\nI suspect there are still preprocessors out there that might misbehave\non this input --- for example, by leaving the text \"* end-of-string */\"\npresent in the preprocessor output. Now we still go to considerable\nlengths to support not-quite-ANSI preprocessors. I don't like the idea\nthat all the work done by configure and c.h in that direction might be\nwasted because of pgindent carelessness.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 May 2001 09:05:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: More pgindent follies " }, { "msg_contents": "> > I don't see the problem here. 
My assumption is that the comment is not\n> > part of the define, right?\n> \n> Well, that's the question. ANSI C requires comments to be replaced by\n> whitespace before preprocessor commands are detected/executed, but there\n> was an awful lot of variation in preprocessor behavior before ANSI.\n> I suspect there are still preprocessors out there that might misbehave\n> on this input --- for example, by leaving the text \"* end-of-string */\"\n> present in the preprocessor output. Now we still go to considerable\n> lengths to support not-quite-ANSI preprocessors. I don't like the idea\n> that all the work done by configure and c.h in that direction might be\n> wasted because of pgindent carelessness.\n\nI agree, but in a certain sense, we would have found those compilers\nalready. This is not new behavior as far as I know, and clearly this\nwould throw a compiler error.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 23 May 2001 11:58:51 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: More pgindent follies" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I agree, but in a certain sense, we would have found those compilers\n> already. This is not new behavior as far as I know, and clearly this\n> would throw a compiler error.\n\n[ Digs around ... ] OK, you're right, it's not new behavior; we have\ninstances of similarly-split comments at least back to REL6_5. 
I guess\nmy fears of portability problems are unfounded.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 May 2001 15:45:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: More pgindent follies " }, { "msg_contents": "On Wed, May 23, 2001 at 11:58:51AM -0400, Bruce Momjian wrote:\n> > > I don't see the problem here. My assumption is that the comment is not\n> > > part of the define, right?\n> > \n> > Well, that's the question. ANSI C requires comments to be replaced by\n> > whitespace before preprocessor commands are detected/executed, but there\n> > was an awful lot of variation in preprocessor behavior before ANSI.\n> > I suspect there are still preprocessors out there that might misbehave\n> > on this input --- for example, by leaving the text \"* end-of-string */\"\n> > present in the preprocessor output. Now we still go to considerable\n> > lengths to support not-quite-ANSI preprocessors. I don't like the idea\n> > that all the work done by configure and c.h in that direction might be\n> > wasted because of pgindent carelessness.\n> \n> I agree, but in a certain sense, we would have found those compilers\n> already. This is not new behavior as far as I know, and clearly this\n> would throw a compiler error.\n\nThis is good news!\n\nMaybe this process can be formalized. That is, each official release \nmight contain a source file with various \"modern\" constructs which we \nsuspect might break old compilers.\n\nA comment block at the top requests that any breakage be reported.\n\nA configure option would allow a user to avoid compiling it, and a\ncomment in the file would explain how to use the option. 
After a\nmajor release, any modern construct that caused no trouble in the \nlast release is considered OK to use.\n\nThis process makes it easy to leave behind obsolete language \nrestrictions: if you wonder if it's OK now to use a feature that once \nbroke some crufty platform, drop it in modern.c and forget about it. \nAfter the next release, you know the answer.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Wed, 23 May 2001 14:21:14 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: More pgindent follies" }, { "msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n> This is good news!\n\n> Maybe this process can be formalized. That is, each official release \n> migh contain a source file with various \"modern\" constructs which we \n> suspect might break old compilers.\n\nI have no objection to this, if the process *is* formalized --- that\nis, we explicitly know and agree to probing for certain obsolescent\nconstructs in each release. The thing that bothered me about this\nwas that pgindent was pushing the envelope without any explicit\ndiscussion or advance knowledge.\n\nThere's plenty of historical cruft in PG that I'd be glad to get rid\nof, if we can satisfy ourselves that it's no longer needed for any\nplatform of interest. It's \"stealth\" obsolescence checks that bother\nme ;-)\n\n> After a major release, any modern construct that caused no trouble in\n> the last release is considered OK to use.\n\nProbably need to allow a couple major releases, considering that we\nsee lots of people migrating from not-the-last release. But that's a\nquibble. My point is we need an explicit debate about the risks and\nbenefits of each change. 
Finding out two years later that a broken tool\nwas doing the experiment without our knowledge is not cool.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 24 May 2001 00:04:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: More pgindent follies " }, { "msg_contents": "> I have no objection to this, if the process *is* formalized --- that\n> is, we explicitly know and agree to probing for certain obsolescent\n> constructs in each release. The thing that bothered me about this\n> was that pgindent was pushing the envelope without any explicit\n> discussion or advance knowledge.\n\nActually, as can be seen from the recent use of readdir() by WAL, often\nnew features/constructs get in when someone adds them and they get\nthrough release testing.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 24 May 2001 00:37:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: More pgindent follies" } ]
[ { "msg_contents": "If I create an index based on a function, as:\n\ncreate index fubar on table (function(field));\n\nand then I later:\n\ndrop function function(...);\n\nIt makes it impossible to pg_dump a database until one also explicitly drops\nthe index. This happens in 7.0 and I haven't checked it in 7.1.\n\nIf it were my code, I wouldn't call it a bug, I'd call it user error, but there\nshould, at least, be some mention of this in the FAQ or something.\n", "msg_date": "Sun, 20 May 2001 18:53:08 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Index and functions" } ]
[ { "msg_contents": "While we're on the subject of new system catalogs, how about a many to many\ncatalog like this:\n\npg_depend (oid obj, oid dep)\n\nThat maps the oid of a system object (such as a constraint, index, function,\ntrigger, anything) to all other system objects that are dependent upon it.\nAlthough it may take a bit of work to implement, it will trivialise\nsupporting CASCADE/RESTRICT on DROP.\n\nChris\n\n", "msg_date": "Mon, 21 May 2001 09:58:37 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "New system catalog idea" }, { "msg_contents": "\nI think this is a good idea, and something to add to the TODO list. We\nare hitting too many of these gotchas.\n\n\t* Add pg_depend table to track object dependencies\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> While we're on the subject of new system catalogs, how about a many to many\n> catalog like this:\n> \n> pg_depend (oid obj, oid dep)\n> \n> That maps the oid of a system object (such as a constraint, index, function,\n> trigger, anything) to all other system objects that are dependent upon it.\n> Although it may take a bit of work to implement, it will trivialise\n> supporting CASCADE/RESTRICT on DROP.\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 21 May 2001 21:35:11 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: New system catalog idea" } ]
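The CASCADE/RESTRICT mechanics that a pg_depend table would enable are easy to model outside the backend. The sketch below is not PostgreSQL code — the names are made up — it just treats the proposed catalog as a set of (obj, dep) pairs and walks them transitively, which is the core of what DROP ... RESTRICT/CASCADE would have to do:

```python
def dependents(depend, obj):
    """Transitively collect everything recorded as depending on obj.

    `depend` models the proposed pg_depend catalog as a set of
    (obj, dep) pairs, meaning `dep` is dependent upon `obj`.
    """
    found, frontier = set(), {obj}
    while frontier:
        frontier = {d for (o, d) in depend if o in frontier and d not in found}
        found |= frontier
    return found

def drop(depend, obj, cascade=False):
    """Return the full set of objects a DROP would remove."""
    deps = dependents(depend, obj)
    if deps and not cascade:
        raise ValueError("cannot drop %r: %s depend(s) on it"
                         % (obj, sorted(deps)))
    return deps | {obj}

# A function, an index built on that function, and a view over the table.
depend = {("func", "index"), ("table", "index"), ("table", "view")}
print(sorted(drop(depend, "table", cascade=True)))   # ['index', 'table', 'view']
```

With RESTRICT semantics (`cascade=False`), dropping `func` here raises instead of silently leaving a dangling index — exactly the functional-index gotcha reported in the previous thread.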
[ { "msg_contents": "Hi All ,\n\nSELECT DISTINCT h_name FROM haszon;\n+---------------+\n| h_name |\n+---------------+\n| CITROEN |\n| DAEWOO-FSO |\n| DAEWOO-LUBLIN |\n| FIAT |\n| FORD |\n| GAZ |\n| HYUNDAI |\n| KIA |\n| MAZDA |\n| MERCEDES BENZ |\n| MITSUBISHI |\n| NISSAN |\n| OPEL |\n| PEUGEOT |\n| RENAULT |\n| SEAT |\n| SKODA |\n| SUZUKI |\n| TATA |\n| TOYOTA |\n| VOLKSWAGEN |\n+---------------+\nQuery OK, 21 rows in set (0,20 sec)\n\nSELECT cn_name FROM carname\n\n+---------------+\n| cn_name |\n+---------------+\n| ALFA ROMEO |\n| AUDI |\n| BMW |\n| CHRYSLER |\n| CITROEN |\n| DAEWOO |\n| DAIHATSU |\n| DAIMLER |\n| FIAT |\n| FORD |\n| HONDA |\n| HYUNDAI |\n| JAGUAR |\n| JEEP |\n| KIA |\n| LADA |\n| LANCIA |\n| LAND ROVER |\n| LEXUS |\n| MAZDA |\n| MERCEDES BENZ |\n| MG |\n| MITSUBISHI |\n| NISSAN |\n| OPEL |\n| PEUGEOT |\n| PORSCHE |\n| PROTON |\n| RENAULT |\n| ROVER |\n| SAAB |\n| SEAT |\n| SKODA |\n| SUBARU |\n| SUZUKI |\n| TOYOTA |\n| VOLKSWAGEN |\n| VOLVO |\n| <Null> |\n+---------------+\nQuery OK, 39 rows in set (0,35 sec)\n\nSELECT DISTINCT h_name\nFROM haszon\nWHERE h_name IN (SELECT cn_name FROM carname)\n\n+---------------+\n| h_name |\n+---------------+\n| CITROEN |\n| FIAT |\n| FORD |\n| HYUNDAI |\n| KIA |\n| MAZDA |\n| MERCEDES BENZ |\n| MITSUBISHI |\n| NISSAN |\n| OPEL |\n| PEUGEOT |\n| RENAULT |\n| SEAT |\n| SKODA |\n| SUZUKI |\n| TOYOTA |\n| VOLKSWAGEN |\n+---------------+\nQuery OK, 17 rows in set (0,22 sec)\n\nI think it's good, but\nSELECT DISTINCT h_name\nFROM haszon\nWHERE h_name NOT IN (SELECT cn_name FROM carname)\n\n+--------+\n| h_name |\n+--------+\n+--------+\nQuery OK, 0 rows in set (0,10 sec)\n\nWhy ?\n\npostgres-7.1 rpm on RedHat 7.0\n\nThanks, Gabor\n\n\n", "msg_date": "Mon, 21 May 2001 12:27:43 +0200", "msg_from": "\"Gabor Csuri\" <gcsuri@coder.hu>", "msg_from_op": true, "msg_subject": "I don't understand..." 
}, { "msg_contents": "Hi All again,\n\n after I deleted the \"null row\" from carname:\nSELECT DISTINCT h_name\nFROM haszon\nWHERE h_name NOT IN (SELECT cn_name FROM carname)\n\n+---------------+\n| h_name |\n+---------------+\n| DAEWOO-FSO |\n| DAEWOO-LUBLIN |\n| GAZ |\n| TATA |\n+---------------+\nQuery OK, 4 rows in set (0,13 sec)\n\nIt's working now, but is it correct?\n\nBye, Gabor.\n\n> I think it's good, but\n> SELECT DISTINCT h_name\n> FROM haszon\n> WHERE h_name NOT IN (SELECT cn_name FROM carname)\n>\n> +--------+\n> | h_name |\n> +--------+\n> +--------+\n> Query OK, 0 rows in set (0,10 sec)\n>\n> Why ?\n>\n> postgres-7.1 rpm on RedHat 7.0\n>\n> Thanks, Gabor\n\n\n\n", "msg_date": "Mon, 21 May 2001 13:09:09 +0200", "msg_from": "\"Gabor Csuri\" <gcsuri@coder.hu>", "msg_from_op": true, "msg_subject": "Re: I don't understand..." }, { "msg_contents": "Gabor Csuri writes:\n\n> SELECT DISTINCT h_name\n> >FROM haszon\n> WHERE h_name NOT IN (SELECT cn_name FROM carname)\n>\n> +--------+\n> | h_name |\n> +--------+\n> +--------+\n> Query OK, 0 rows in set (0,10 sec)\n>\n> Why ?\n\nBecause one of the cn_name values is NULL. Observe the semantics of the\nIN operator if the set contains a NULL value:\n\nh_name NOT IN (a, b, c)\nNOT (h_name = a OR h_name = b OR h_name = c)\n\nSay c is null:\n\nNOT (h_name = a OR h_name = b OR h_name = NULL)\nNOT (h_name = a OR h_name = b OR NULL)\nNOT (NULL)\nNULL\n\nwhich is false.\n\nYou might want to add a ... WHERE cn_name IS NOT NULL in the subquery.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 22 May 2001 16:40:55 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: I don't understand..." }, { "msg_contents": "Gabor - \nTri-valued logic strikes again. Remember, NULL represents \"don't know\",\nwhich means \"could be anything\". 
So, when you ask the system to return\nvalues that are guaranteed not to be in a list, and that list contains\na NULL, the system returns nothing, since the NULL _could_ be equal to\nthe whatever value you're comparing against: the system just doesn't know.\n\nThe operational fixes are:\n\n1) delete nulls where they're not appropriate\nor better\n2) use NOT NULL constraints everywhere you can.\nand\n3) use WHERE NOT NULL in your subselects, if NULL is appropriate in\n the underlying column\n\nRoss\n\n\nOn Mon, May 21, 2001 at 01:09:09PM +0200, Gabor Csuri wrote:\n> Hi All again,\n> \n> after I deleted the \"null row\" from carname:\n> SELECT DISTINCT h_name\n> FROM haszon\n> WHERE h_name NOT IN (SELECT cn_name FROM carname)\n> \n> +---------------+\n> | h_name |\n> +---------------+\n> | DAEWOO-FSO |\n> | DAEWOO-LUBLIN |\n> | GAZ |\n> | TATA |\n> +---------------+\n> Query OK, 4 rows in set (0,13 sec)\n> \n> It's working now, but is it correct?\n> \n> Bye, Gabor.\n> \n> > I think it's good, but\n> > SELECT DISTINCT h_name\n> > FROM haszon\n> > WHERE h_name NOT IN (SELECT cn_name FROM carname)\n> >\n> > +--------+\n> > | h_name |\n> > +--------+\n> > +--------+\n> > Query OK, 0 rows in set (0,10 sec)\n> >\n> > Why ?\n> >\n> > postgres-7.1 rpm on RedHat 7.0\n> >\n> > Thanks, Gabor\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n", "msg_date": "Tue, 22 May 2001 09:43:40 -0500", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: Re: I don't understand..." 
}, { "msg_contents": "\nOn Mon, 21 May 2001, Gabor Csuri wrote:\n\n> Hi All again,\n> \n> after I deleted the \"null row\" from carname:\n> SELECT DISTINCT h_name\n> FROM haszon\n> WHERE h_name NOT IN (SELECT cn_name FROM carname)\n> \n> +---------------+\n> | h_name |\n> +---------------+\n> | DAEWOO-FSO |\n> | DAEWOO-LUBLIN |\n> | GAZ |\n> | TATA |\n> +---------------+\n> Query OK, 4 rows in set (0,13 sec)\n> \n> It's working now, but is it correct?\n\nYep. :(\nSQLs NULLs give lots of pain and suffering.\n\nNULL is an unknown value, so you can know\nthat there *IS* a matching row, but you \nnever know with certainty that there *ISN'T*\na matching row when a NULL is involved. \nBasically IN says, if row1=row2 is true for \nany row, return true; if row1=row2 is false \nfor every row return false; otherwise return \nNULL. When it gets to the comparison with\nthe NULL, row1=row2 gives a NULL not a false,\nso the IN returns NULL (which won't get\nthrough the where clause).\n\n\n", "msg_date": "Tue, 22 May 2001 08:21:13 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Re: I don't understand..." } ]
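The three-valued logic that Peter, Ross, and Stephan walk through above is standard SQL, so it can be reproduced without a PostgreSQL install — here the sqlite3 module bundled with Python stands in for the server, since it implements the same IN/NOT IN semantics:

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()

# 1 NOT IN (2, 3): every comparison is plainly false, so NOT IN is true.
print(cur.execute("SELECT 1 NOT IN (2, 3)").fetchone()[0])      # 1

# 1 NOT IN (2, NULL): "1 = NULL" is unknown, so the whole IN is unknown
# and so is its negation; the result is NULL (None in Python).
print(cur.execute("SELECT 1 NOT IN (2, NULL)").fetchone()[0])   # None

# WHERE keeps only rows whose condition is *true*; unknown rows are
# filtered out -- which is why the NOT IN query returned zero rows.
print(cur.execute(
    "SELECT count(*) FROM (SELECT 1 AS x) WHERE x NOT IN (2, NULL)"
).fetchone()[0])                                                 # 0
```

Adding `WHERE cn_name IS NOT NULL` to the subquery, as Peter suggests, removes the NULL from the list and restores the intuitive behavior.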
[ { "msg_contents": "Hi.\n\nWe tried the RPM under RH 6.2 and it would unpack, saying we needed this or\nthat or the other... A whole bunch of dependencies.\n\nAnyway, once of us downloaded the source and compiled it and it worked...\n\nI just want to be sure... there's no compelling reason to use RH 7.x\n(basically any new distribution with newer libraries) right? Are there any\nsubtle problems we'd face if we continue using this setup?\n\nBasically, the dependencies with the RPM install have made me suspicious...\n\nThanks in advance.\n\n--Arsalan.\n\n", "msg_date": "Mon, 21 May 2001 17:16:31 +0530", "msg_from": "\"Arsalan Zaidi\" <azaidi@directi.com>", "msg_from_op": true, "msg_subject": "Using 7.1rc1 under RH 6.2" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Monday 21 May 2001 07:46, Arsalan Zaidi wrote:\n> We tried the RPM under RH 6.2 and it would unpack, saying we needed this or\n> that or the other... A whole bunch of dependencies.\n\n> I just want to be sure... there's no compelling reason to use RH 7.x\n> (basically any new distribution with newer libraries) right? Are there any\n> subtle problems we'd face if we continue using this setup?\n\nRedHat 7.1 is a substantial upgrade for more than one reason. Performance is \nbetter, and stability (at least for me) has been just as good.\n\n> Basically, the dependencies with the RPM install have made me suspicious...\n\nWell, first, 7.1rc1 is old. The official 7.1 release is available, as is \n7.1.1 in /pub/binary/v7.1.1/RPMS/redhat-6.2 on the ftp.postgresql.org mirror \nnear you.\n\nNext, is there an existing PostgreSQL 6.5.3 install still on your 6.2 box? \nIf so, you'll need to issue to upgrade with all existing RPM's listed on one \nrpm command line -- and you will need to add the package postgresql-libs to \nthe list. If that fails for some reason, please post (to the pgsql-ports \nlist) the resulting dependency failures.\n\nThe dependencies are there for a reason. 
But without details of what they \nwere, I cannot further advise.\n- --\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.4 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7CT4l5kGGI8vV9eERArgYAKCxMQgBWTFzdDldLUjQENLwfyziLgCg5Ujz\nAp8we+lWCeoAUNfYf/o+jKg=\n=lgNH\n-----END PGP SIGNATURE-----\n", "msg_date": "Mon, 21 May 2001 12:11:15 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Using 7.1rc1 under RH 6.2" }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n\n> On Monday 21 May 2001 07:46, Arsalan Zaidi wrote:\n> > We tried the RPM under RH 6.2 and it would unpack, saying we needed this or\n> > that or the other... A whole bunch of dependencies.\n> \n> > I just want to be sure... there's no compelling reason to use RH 7.x\n> > (basically any new distribution with newer libraries) right? Are there any\n> > subtle problems we'd face if we continue using this setup?\n> \n> RedHat 7.1 is a substantial upgrade for more than one reason. Performance is \n> better, and stability (at least for me) has been just as good.\n\nThe 2.4 kernels make sure that fdatasync() works properly, which could\nimprove performance in certain scenarios quite a bit.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "21 May 2001 13:38:46 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Using 7.1rc1 under RH 6.2" } ]
[ { "msg_contents": "\n>> I don't know what you are using those database for, but nothing prevents\n> >you from letting your clients connect to the different databases the\n> >same time.\n\n\n>But that requires me to make a new database connection for each database I\n>need to access.\n\n>And putting 200+ tables in one single database is not an option.\n\n>The application which needs to be able to do this is a\n>cross-database-application (MSSQL, Oracle, Sybase) and I have almost no\n>room for doing major changes to the SQL which this application uses.\n\n>But the lack of this feature in Postgres makes it almost impossible to\n>make a structured database design for huge application. I know this\n>question have been asked before in another postgres forum as early as\n>1998, and what Bruce Momjian said then was that most commercial databases\n>couldn't do it, which was probably right for 1998, but today even MySQL\n>can do this! Sybase, Oracle and MSSQL can also do this. I think even DB2\n>and Informix can.\n\n>I was really suprised when I discovered that this was even an issue with\n>Postgres, because everything else in this wonderful DBM is on an\n>enterprise level of quality and functionality.\n\nI'm stuck in the same cleft in the tree - database application originally \nwritten for Oracle and Sybase, that still needs to work in Oracle, and the \nSQL and database structure etched in stone. The problem isn't about a client \nwith multiple connections, its about executing the following query:\n\nSELECT A.*, B.* FROM FOO.USERS A, BAR.FAVORITE_BEERS B WHERE A.USER = \nB.GUZZLER\n\nPutting 200+ tables in a database certainly isn't a big deal, as I think Tom \nLane points out in another post in this thread. I am poking at the parser in \nmy copious free time just to see how easy it would be to just strip a schema \nname off the items in the FROM clause before anything happens, but one \ndoesn't pick up the internals of the parser in 10-15 minutes a day...hints \nanyone? 
Anyway, this way I COULD put all the tables in one database, keep the \nschema-based queries, and no one would ever know.\n\nI would twitch on the floor in utter extasy if I could hose Oracle...while \ntheir licensing is more flexible than in the past, it still doesn't sit \nright, and despite all their claims to the contrary their java support is a \njoke. And maybe their pinheaded sales reps WOULD STOP CALLING ME EVERY WEEK.\n\nIf I ever come up with said schema-dropping patch, and anyone else wants it, \nlet me know.\n\n-- \nRegards,\n\nAndrew Rawnsley\nRavensfield Geographic Resources, Ltd.\n(740) 587-0114\nwww.ravensfield.com\n", "msg_date": "Mon, 21 May 2001 07:55:13 -0400", "msg_from": "Andrew Rawnsley <ronz@ravensfield.com>", "msg_from_op": true, "msg_subject": "Queries across multiple databases =?iso-8859-1?q?=A0?=(was: SELECT\n\tfrom a table in another database)." }, { "msg_contents": "On Mon, 21 May 2001 07:55:13 -0400\nAndrew Rawnsley <ronz@ravensfield.com> wrote:\n\n> If I ever come up with said schema-dropping patch, and anyone else\nwants it, \n> let me know.\n\nI'd dance a happy jig ;-)\n\nI'm not sure whether it is quite the way to do it, but I'd have a better\ntime with things if I could span databases in a single request. Are\nthere theoretical problems with spanning databases in a single query? Is\nit a feature of bad database design & implementation?\n\nThanks\n\nCiao\n\nZak\n\n--\n====================================================================\nZak McGregor\nhttp://www.carfolio.com - Specifications of cars online. Over 7000!\n--------------------------------------------------------------------\nOf course my password is the same as my pet's name. 
\nMy macaw's name was Q47pY!3, but I change it every 90 days.\n====================================================================\n", "msg_date": "Mon, 21 May 2001 16:04:25 +0200", "msg_from": "Zak McGregor <zak@mighty.co.za>", "msg_from_op": false, "msg_subject": "Re: Queries across multiple databases (was: SELECT from a table in another database)." }, { "msg_contents": "On Monday 21 May 2001 10:04am, you wrote:\n> On Mon, 21 May 2001 16:04:25 +0200\n>\n> Andrew Rawnsley <ronz@ravensfield.com> wrote:\n> > If I ever come up with said schema-dropping patch, and anyone else\n>\n> wants it,\n>\n> > let me know.\n>\n> I'd dance a happy jig ;-)\n>\n> I'm not sure whether it is quite the way to do it, but I'd have a better\n> time with things if I could span databases in a single request. Are\n> there theoretical problems with spanning databases in a single query? Is\n> it a feature of bad database design & implementation?\n>\n\nI imagine it's one of those initial implementation decisions made 12 years ago \nby people completely separate from the current maintainers, to whom this sort \nof need wasn't a priority. I would also hazard to guess (in near-complete \nignorance - correct me if I'm wrong) that it would be difficult to change \nwithout doing serious re-plumbing. Hence its 'exotic' status. The current set \nof maintainers certainly have enough to think about without slamming them \nwith that sort of change...\n\nI do think the issue will continue to bubble to the top as more and more folk \nget stuck into the position of wanting to transition to Postgres from \nOracle/Sybase/whatever and either don't want to recode or simply can't, and \nalso want to maintain as much database neutrality as possible.\n\nThe 'drop the schema' thing isn't quite the way to do it, no. It's a gross \nhack, but if it gets us through until it becomes a more important issue then \nso be it. Hence the sense of getting the source... 
(or, as I have heard Paul \nEveritt say, you can actually fix the problem after hundreds of manhours as \nopposed to just being told that your support level isn't high enough).\n\n\n-- \nRegards,\n\nAndrew Rawnsley\nRavensfield Geographic Resources, Ltd.\n(740) 587-0114\nwww.ravensfield.com\n", "msg_date": "Mon, 21 May 2001 12:26:11 -0400", "msg_from": "Andrew Rawnsley <ronz@ravensfield.com>", "msg_from_op": true, "msg_subject": "Re: Queries across multiple databases (was: SELECT from a table in\n\tanother database)." }, { "msg_contents": "From: \"Zak McGregor\" <zak@mighty.co.za>\n\n> On Mon, 21 May 2001 07:55:13 -0400\n> Andrew Rawnsley <ronz@ravensfield.com> wrote:\n>\n> > If I ever come up with said schema-dropping patch, and anyone else\n> wants it,\n> > let me know.\n>\n> I'd dance a happy jig ;-)\n>\n> I'm not sure whether it is quite the way to do it, but I'd have a better\n> time with things if I could span databases in a single request. Are\n> there theoretical problems with spanning databases in a single query? Is\n> it a feature of bad database design & implementation?\n\nI think the developers are planning full schema support for the relatively\nnear future (possibly even 7.2, but check the archives and see what's been\nsaid). 
Although it looks easy to access a table from another database,\nthings can rapidly become more complicated as you start having to deal with\ntransactions, triggers, rules, constraints...\n\n- Richard Huxton\n\n", "msg_date": "Mon, 21 May 2001 17:40:54 +0100", "msg_from": "\"Richard Huxton\" <dev@archonet.com>", "msg_from_op": false, "msg_subject": "\n =?iso-8859-1?Q?Re:_=5BGENERAL=5D_Queries_across_multiple_databases_=A0=28?=\n\t=?iso-8859-1?Q?was:_SELECT_from_a_table_in_another_database=29.?=" }, { "msg_contents": "On Mon, 21 May 2001 16:04:25 +0200\nZak McGregor <zak@mighty.co.za> wrote:\n\n> I'm not sure whether it is quite the way to do it, but I'd have a\nbetter\n> time with things if I could span databases in a single request. Are\n> there theoretical problems with spanning databases in a single query?\nIs\n> it a feature of bad database design & implementation?\n\nI suspect my statement was not so clear. I mean is it a feature of *my*\nbad database design & implementation if I am finding it necessary to\ncontemplate cross-database queries?\n\nApologies for the vagueness of the original...\n\nCiao\n\nZak\n\n\n--\n====================================================================\nZak McGregor\nhttp://www.carfolio.com - Specifications of cars online. Over 7000!\n--------------------------------------------------------------------\nOf course my password is the same as my pet's name. \nMy macaw's name was Q47pY!3, but I change it every 90 days.\n====================================================================\n", "msg_date": "Mon, 21 May 2001 18:50:10 +0200", "msg_from": "Zak McGregor <zak@mighty.co.za>", "msg_from_op": false, "msg_subject": "Re: Queries across multiple databases (was: SELECT from a table in another database)." }, { "msg_contents": "> > I'm not sure whether it is quite the way to do it, but I'd have a better\n> > time with things if I could span databases in a single request. 
Are\n> > there theoretical problems with spanning databases in a single query? Is\n> > it a feature of bad database design & implementation?\n> \n> I think the developers are planning full schema support for the relatively\n> near future (possibly even 7.2, but check the archives and see what's been\n> said). Although it looks easy to access a table from another database,\n> things can rapidly become more complicated as you start having to deal with\n> transactions, triggers, rules, constraints...\n\nSchema is on my radar screen for 7.2. I am waiting to do some research\nin what needs to be done, but my initial idea is to use the system cache\nto do namespace mapping, just like is done now for temp tables.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 22 May 2001 07:31:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] Queries across multiple databases ?(was:\n\tSELECT from a table in another database)." 
}, { "msg_contents": "Hello,\n\tI know that queries across multiple databases are in TODO list\nDoes it mean that databases can be on multiple servers (postmasters) and\nthat update,insert,delete across multiple databases are possible.\n\nThanks\nsnpe@verat.net\n", "msg_date": "Tue, 22 May 2001 14:51:28 +0200", "msg_from": "snpe <snpe@verat.net>", "msg_from_op": false, "msg_subject": "Multiple database - multiple query - multiple server " }, { "msg_contents": "On Tue, 22 May 2001, snpe wrote:\n\n> Hello,\n> \tI know that queries across multiple databases are in TODO list\n> Does it mean that databases can be on multiple servers (postmasters) and\n> that update,insert,delete across multiple databases are possible.\n\nAll the multiple database stuff is in the exotic features section,\nit's not here now, and probably won't be for a while unless someone steps\nup to do it.\n\n", "msg_date": "Tue, 22 May 2001 13:06:30 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Multiple database - multiple query - multiple server" } ]
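Andrew's schema-stripping idea from earlier in the thread can be prototyped entirely client-side before anyone touches the parser. The toy below is an assumption of mine, not Andrew's patch: it uses a regex (which would mis-fire on string literals that happen to contain `SCHEMA.`) and rewrites only the named schemas, leaving table-alias qualifiers like `A.USER` alone:

```python
import re

def strip_schemas(sql, schemas):
    """Drop 'SCHEMA.' qualifiers for the listed schema names only."""
    for schema in schemas:
        sql = re.sub(r"\b%s\." % re.escape(schema), "", sql,
                     flags=re.IGNORECASE)
    return sql

query = ("SELECT A.*, B.* FROM FOO.USERS A, BAR.FAVORITE_BEERS B "
         "WHERE A.USER = B.GUZZLER")
print(strip_schemas(query, ["FOO", "BAR"]))
# SELECT A.*, B.* FROM USERS A, FAVORITE_BEERS B WHERE A.USER = B.GUZZLER
```

The Oracle-era SQL keeps working unmodified while all the tables live in one PostgreSQL database; once real schema support lands, a shim like this can simply be deleted.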
[ { "msg_contents": "Apologies if I've missed something, but isn't that the same xmin that ODBC\nuses for row versioning?\n- Stuart\n\n\t<Snip>\n> Currently, the XMIN/XMAX command counters are used only by the current\n> transaction, and they are useless once the transaction finishes and take\n> up 8 bytes on disk.\n\t<Snip>\n\n", "msg_date": "Mon, 21 May 2001 13:00:59 +0100", "msg_from": "\"Henshall, Stuart - WCP\" <SHenshall@westcountrypublications.co.uk>", "msg_from_op": true, "msg_subject": "RE: Plans for solving the VACUUM problem" }, { "msg_contents": "\"Henshall, Stuart - WCP\" wrote:\n> \n> Apologies if I've missed something, but isn't that the same xmin that ODBC\n> uses for row versioning?\n> - Stuart\n> \n> <Snip>\n> > Currently, the XMIN/XMAX command counters are used only by the current\n> > transaction, and they are useless once the transaction finishes and take\n> > up 8 bytes on disk.\n> <Snip>\n\nBTW, is there some place where I could read about exact semantics of\nsystem fields ?\n\n--------------------\nHannu\n", "msg_date": "Mon, 21 May 2001 16:53:13 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: RE: Plans for solving the VACUUM problem" }, { "msg_contents": "\nYou can read my Internals paper at the bottom of developers corner.\n\n[ Charset ISO-8859-15 unsupported, converting... 
]\n> \"Henshall, Stuart - WCP\" wrote:\n> > \n> > Apologises if I've missed something, but isn't that the same xmin that ODBC\n> > uses for row versioning?\n> > - Stuart\n> > \n> > <Snip>\n> > > Currently, the XMIN/XMAX command counters are used only by the current\n> > > transaction, and they are useless once the transaction finishes and take\n> > > up 8 bytes on disk.\n> > <Snip>\n> \n> BTW, is there some place where I could read about exact semantics of\n> sytem fields ?\n> \n> --------------------\n> Hannu\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 22 May 2001 00:38:15 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: RE: Plans for solving the VACUUM problem" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> You can read my Internals paper at the bottom of developers corner.\n> \n\nin answer to my question:\n\n> > BTW, is there some place where I could read about exact semantics of\n> > sytem fields ?\n\nthe only thing I found was on page 50 which claimed that\n\nxmax - destruction transaction id\n\nwhich is what I remembered, \n\nbut then no xmax should ever be visible in a regular query :\n\n\nhannu=# create table parent(pid int primary key);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index\n'parent_pkey' for table 'parent'\nCREATE\nhannu=# create table child(cid int references parent(pid) on update\ncascade);\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY\ncheck(s)\nCREATE\nhannu=# insert into parent values(1);\nINSERT 27525 1\nhannu=# insert into child 
values(1);\nINSERT 27526 1\nhannu=# update parent set pid=2;\nUPDATE 1\nhannu=# select xmin,xmax,* from parent;\n xmin | xmax | pid \n------+------+-----\n 688 | 688 | 2\n(1 row)\n\n------------------------\nHannu\n", "msg_date": "Tue, 22 May 2001 10:20:26 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "XMAX weirdness (was: Plans for solving the VACUUM problem)" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> but then no xmax should ever be visible in a regular query :\n\nNot so. For example, if a transaction tried to delete a tuple, but is\neither still open or rolled back, then other transactions would see its\nXID in the tuple's xmax. Also, SELECT FOR UPDATE uses xmax to record\nthe XID of the transaction that has the tuple locked --- that's the\ncase you are seeing, because of the SELECT FOR UPDATE done by the\nforeign-key triggers on the table.\n\nThere is a claim in the current documentation that xmax is never nonzero\nin a visible tuple, but that's incorrect ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 May 2001 15:12:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: XMAX weirdness (was: Plans for solving the VACUUM problem) " } ]
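The rule Tom Lane states above, that a nonzero xmax hides a tuple only if the deleting transaction actually committed, and that SELECT FOR UPDATE records an xmax without deleting, can be captured in a toy model. This is Python purely for illustration; it is not PostgreSQL's actual tuple-visibility code, and all names here are invented:

```python
# Toy model of the xmin/xmax visibility rule discussed above.
# Invented names: real PostgreSQL visibility checks live in C and
# consult pg_log and snapshots, which this deliberately glosses over.

COMMITTED = "committed"

def tuple_visible(xmin, xmax, status_of, lock_only=False):
    """Is a tuple visible, given its inserter (xmin) and deleter (xmax)?

    status_of maps a transaction id to its status string; xmax == 0
    means the tuple was never deleted; lock_only models SELECT FOR
    UPDATE, which records an xmax without deleting the tuple.
    """
    if status_of(xmin) != COMMITTED:
        return False          # inserter aborted or still in progress
    if xmax == 0 or lock_only:
        return True           # never deleted, or merely row-locked
    # An open or rolled-back deleter leaves the tuple visible,
    # with its XID still sitting in xmax -- Hannu's observation.
    return status_of(xmax) != COMMITTED
```

In Hannu's example the row shows xmin = xmax = 688 yet remains visible, because the xmax came from the foreign-key trigger's SELECT FOR UPDATE (the lock_only case in the sketch), not from a committed delete.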
[ { "msg_contents": "\n> > Vadim, can you remind me what UNDO is used for?\n> 4. Split pg_log into small files with ability to remove old ones (which\n> do not hold statuses for any running transactions).\n\nThey are already small (16Mb). Or do you mean even smaller ?\nThis imposes one huge risk, that is already a pain in other db's. You need\nall logs of one transaction online. For a GigaByte transaction like a bulk\ninsert this can be very inconvenient. \nImho there should be some limit where you can choose whether you want \nto continue without the feature (no savepoint) or are automatically aborted.\n\nIn any case, imho some thought should be put into this :-)\n\nAnother case where this is a problem is a client that starts a tx, does one little\ninsert or update on his private table, and then sits and waits for a day.\n\nBoth cases currently impose no problem whatsoever.\n\nAndreas\n", "msg_date": "Mon, 21 May 2001 16:44:42 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Plans for solving the VACUUM problem" }, { "msg_contents": "\n\nZeugswetter Andreas SB wrote:\n\n>>> Vadim, can you remind me what UNDO is used for?\n>> \n>> 4. Split pg_log into small files with ability to remove old ones (which\n>> do not hold statuses for any running transactions).\n> \n> \n> They are already small (16Mb). Or do you mean even smaller ?\n> This imposes one huge risk, that is already a pain in other db's. You need\n> all logs of one transaction online. For a GigaByte transaction like a bulk\n> insert this can be very inconvenient. 
\n> Imho there should be some limit where you can choose whether you want \n> to continue without the feature (no savepoint) or are automatically aborted.\n> \n> In any case, imho some thought should be put into this :-)\n> \n> Another case where this is a problem is a client that starts a tx, does one little\n> insert or update on his private table, and then sits and waits for a day.\n> \n> Both cases currently impose no problem whatsoever.\n\nCorrect me if I am wrong, but both cases do present a problem currently \nin 7.1. The WAL log will not remove any WAL files for transactions that \nare still open (even after a checkpoint occurs). Thus if you do a bulk \ninsert of gigabyte size you will require a gigabyte sized WAL \ndirectory. Also if you have a simple OLTP transaction that the user \nstarted and walked away from for his one week vacation, then no WAL log \nfiles can be deleted until that user returns from his vacation and ends \nhis transaction.\n\n--Barry\n\n> \n> \n> Andreas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n> \n\n", "msg_date": "Mon, 21 May 2001 13:19:08 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: AW: Plans for solving the VACUUM problem" }, { "msg_contents": "Barry Lind wrote:\n>\n>\n> Zeugswetter Andreas SB wrote:\n>\n> >>> Vadim, can you remind me what UNDO is used for?\n> >>\n> >> 4. Split pg_log into small files with ability to remove old ones (which\n> >> do not hold statuses for any running transactions).\n> >\n> >\n> > They are already small (16Mb). Or do you mean even smaller ?\n> > This imposes one huge risk, that is already a pain in other db's. You need\n> > all logs of one transaction online. 
For a GigaByte transaction like a bulk\n> > insert this can be very inconvenient.\n> > Imho there should be some limit where you can choose whether you want\n> > to continue without the feature (no savepoint) or are automatically aborted.\n> >\n> > In any case, imho some thought should be put into this :-)\n> >\n> > Another case where this is a problem is a client that starts a tx, does one little\n> > insert or update on his private table, and then sits and waits for a day.\n> >\n> > Both cases currently impose no problem whatsoever.\n>\n> Correct me if I am wrong, but both cases do present a problem currently\n> in 7.1. The WAL log will not remove any WAL files for transactions that\n> are still open (even after a checkpoint occurs). Thus if you do a bulk\n> insert of gigabyte size you will require a gigabyte sized WAL\n> directory. Also if you have a simple OLTP transaction that the user\n> started and walked away from for his one week vacation, then no WAL log\n> files can be deleted until that user returns from his vacation and ends\n> his transaction.\n\n As a rule of thumb, online applications that hold open\n transactions during user interaction are considered to be\n Broken By Design (tm). So I'd slap the programmer/design\n team with - let's use the server box since it doesn't contain\n anything useful.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Mon, 21 May 2001 16:41:33 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: AW: Plans for solving the VACUUM problem" }, { "msg_contents": "At 04:41 PM 21-05-2001 -0400, Jan Wieck wrote:\n>\n> As a rule of thumb, online applications that hold open\n> transactions during user interaction are considered to be\n> Broken By Design (tm). So I'd slap the programmer/design\n> team with - let's use the server box since it doesn't contain\n> anything useful.\n>\n\nMany web applications use persistent database connections for performance\nreasons.\n\nI suppose it's unlikely for webapps to update a row and then sit and wait a\nlong time for a hit, so it shouldn't affect most of them.\n\nHowever if long running transactions are to be aborted automatically, it\ncould possibly cause problems with some apps out there. \n\nWorse if long running transactions are _disconnected_ (not just aborted).\n\nRegards,\nLink.\n\n\n\n", "msg_date": "Tue, 22 May 2001 09:38:54 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: AW: Plans for solving the VACUUM problem" }, { "msg_contents": "> > As a rule of thumb, online applications that hold open\n> > transactions during user interaction are considered to be\n> > Broken By Design (tm). 
So I'd slap the programmer/design\n> > team with - let's use the server box since it doesn't contain\n> > anything useful.\n>\n> Many web applications use persistent database connections for performance\n> reasons.\n\nPersistent connection is not the same as an OPEN transaction BTW.\n\n> I suppose it's unlikely for webapps to update a row and then sit and wait a\n> long time for a hit, so it shouldn't affect most of them.\n>\n> However if long running transactions are to be aborted automatically, it\n> could possibly cause problems with some apps out there.\n>\n> Worse if long running transactions are _disconnected_ (not just aborted).\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: dyp@perchine.com\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Tue, 22 May 2001 14:25:59 +0700", "msg_from": "Denis Perchine <dyp@perchine.com>", "msg_from_op": false, "msg_subject": "Re: Re: AW: Plans for solving the VACUUM problem" }, { "msg_contents": "Lincoln Yeoh wrote:\n> At 04:41 PM 21-05-2001 -0400, Jan Wieck wrote:\n> >\n> > As a rule of thumb, online applications that hold open\n> > transactions during user interaction are considered to be\n> > Broken By Design (tm). So I'd slap the programmer/design\n> > team with - let's use the server box since it doesn't contain\n> > anything useful.\n> >\n>\n> Many web applications use persistent database connections for performance\n> reasons.\n>\n> I suppose it's unlikely for webapps to update a row and then sit and wait a\n> long time for a hit, so it shouldn't affect most of them.\n>\n> However if long running transactions are to be aborted automatically, it\n> could possibly cause problems with some apps out there.\n>\n> Worse if long running transactions are _disconnected_ (not just aborted).\n\n All true, but unrelated. 
He was talking about open\n transactions holding locks while the user is off to recycle\n some coffee or so. A persistent database connection doesn't\n mean that you're holding a transaction while waiting for the\n next hit.\n\n And Postgres doesn't abort transaction or disconnect because\n of their runtime. Then again, it'd take care for half done\n work aborted by the httpd because a connection loss inside of\n a transaction causes an implicit rollback.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Tue, 22 May 2001 09:01:56 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Re: AW: Plans for solving the VACUUM problem" } ]
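The retention problem Barry Lind raises, that no log segment written since the start of the oldest still-open transaction can be removed, reduces to a small calculation. The sketch below is a back-of-envelope Python model, not server code: the segment numbering and function names are invented, and only the 16 MB segment size comes from the thread.

```python
# Back-of-envelope model of log retention: no segment written at or
# after the start of the oldest still-open transaction may be
# recycled.  The 16 MB figure is the segment size quoted in the
# thread; everything else is illustrative.

SEGMENT_SIZE = 16 * 1024 * 1024

def recyclable_segments(current_segment, open_tx_start_segments):
    """Segments (numbered from 0) that can be removed or recycled.

    open_tx_start_segments holds, per open transaction, the segment
    in which that transaction wrote its first record.  An
    idle-in-transaction client pins everything from its start onward.
    """
    pinned_from = min(open_tx_start_segments, default=current_segment)
    return list(range(min(pinned_from, current_segment)))

def pinned_bytes(current_segment, open_tx_start_segments):
    """Disk space held by open transactions (current segment included)."""
    pinned_from = min(open_tx_start_segments, default=current_segment)
    return (current_segment - min(pinned_from, current_segment) + 1) * SEGMENT_SIZE
```

With the oldest open transaction started back in segment 0 and the server now writing segment 10, nothing is recyclable and 11 segments (176 MB) stay pinned: the "one week vacation" scenario from the thread.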
[ { "msg_contents": "\n> > > Vadim, can you remind me what UNDO is used for?\n> > 4. Split pg_log into small files with ability to remove old ones (which\n> > do not hold statuses for any running transactions).\n\nand I wrote:\n> They are already small (16Mb). Or do you mean even smaller ?\n\nSorry for the above little confusion of pg_log with WAL on my side :-(\n\nAndreas\n", "msg_date": "Mon, 21 May 2001 17:04:37 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Plans for solving the VACUUM problem" } ]
[ { "msg_contents": "Hi folks,\n\n I just committed changes to the SPI manager and PL/pgSQL,\n providing full CURSOR support. A detailed description is\n attached as a Postscript file. Brief description follows.\n\n Enhancement of SPI:\n\n There are a couple of new functions and internal changes to\n the SPI memory management. SPI now creates separate memory\n contexts for prepared and saved plans and tuple result\n sets. The contexts are children of where the allocations\n used to happen, so it's fully upgrade compatible. New\n functions SPI_freeplan(plan) and SPI_freetuptable(tuptab)\n allow to simply destroy the contexts when no longer needed.\n\n The other new functions deal with portals:\n\n Portal\n SPI_cursor_find(char *name);\n\n Get an existing portal by name\n\n Portal\n SPI_cursor_open(char *name, void *plan,\n Datum *Values, char *Nulls);\n\n Use a prepared or saved SPI plan to create a new\n portal. if <name> is NULL, the function will make up\n a unique name inside the backend. A portal created by\n this can be accessed by the main application as well\n if SPI_cursor_open() was called inside of an explicit\n transaction block.\n\n void\n SPI_cursor_fetch(Portal portal, bool forward, int count);\n\n Fetch at max <count> tuples from <portal> into the\n well known SPI_tuptable and set SPI_processed.\n <portal> could be any existing portal, even one\n created by the main application using DECLARE ...\n CURSOR.\n\n void\n SPI_cursor_move(Portal portal, bool forward, int count);\n\n Same as fetch but suppress tuples.\n\n void\n SPI_cursor_close(Portal portal);\n\n Close the given portal. Doesn't matter who created it\n (SPI or main application).\n\n New datatype \"refcursor\"\n\n A new datatype \"refcursor\" is created as a basetype, which\n is equivalent to \"text\". 
This is required below.\n\n Enhancement of PL/pgSQL\n\n Explicit cursor can be declared as:\n\n DECLARE\n ...\n curname CURSOR [(argname type [, ...])]\n IS <select_stmt>;\n ...\n\n The <select_stmt> can use any so far declared variable or\n positional function arguments (possibly aliased). These\n will be evaluated at OPEN time.\n\n Explicit cursor can be opened with:\n\n BEGIN\n ...\n OPEN curname [(expr [, ...])];\n ...\n\n The expression list is required if and only if the explicit\n cursor declaration contains an argument list. The created\n portal will be named 'curname' and is accessible globally.\n\n Reference cursor can be declared as:\n\n DECLARE\n ...\n varname REFCURSOR;\n ...\n\n and opened with\n\n BEGIN\n ...\n OPEN varname FOR <select_stmt>;\n -- or\n OPEN varname FOR EXECUTE <string expression>;\n ...\n\n The type \"refcursor\" is a datatype like text, and the\n variables \"value\" controls the \"name\" argument to\n SPI_cursor_open(). Defaulting to NULL, the resulting portal\n will get a generic, unique name and the variable will be\n set to that name at OPEN. If the function assigns a value\n before OPEN, that'll be used as the portal name.\n\n Cursors (of both types) are used with:\n\n BEGIN\n ...\n FETCH cursorvar INTO {record | row | var [, ...]};\n ...\n CLOSE cursorvar;\n\n FETCH sets the global variable FOUND to flag if another row\n is available. A typical loop thus looks like this:\n\n BEGIN\n OPEN myrefcur FOR SELECT * FROM mytab;\n LOOP\n FETCH myrefcur INTO myrow;\n EXIT WHEN NOT FOUND;\n -- Process one row\n END LOOP;\n CLOSE myrefcur;\n\n The \"refcursor\" type can be used for function arguments or\n return values as well. 
So one function can call another to\n open a cursor, assigning it's return value to a\n \"refcursor\", pass that down to other functions and - you\n get the idea.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #", "msg_date": "Mon, 21 May 2001 11:06:13 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": true, "msg_subject": "PL/pgSQL CURSOR support" } ]
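The FETCH/FOUND protocol in the announcement ("FETCH sets the global variable FOUND to flag if another row is available") is what drives the loop in Jan's PL/pgSQL example. Its control flow can be re-enacted outside the backend like this; a Python toy, not the SPI or PL/pgSQL implementation, with all class and function names invented:

```python
# Toy re-enactment of the OPEN / FETCH ... EXIT WHEN NOT FOUND / CLOSE
# loop from the announcement.  `found` plays the role of PL/pgSQL's
# global FOUND variable; plain tuples stand in for rows from the portal.

class ToyCursor:
    def __init__(self, rows):
        self._rows = iter(rows)   # OPEN: bind the result set
        self.found = False

    def fetch(self):
        """Return the next row and set found, as FETCH ... INTO does."""
        try:
            row = next(self._rows)
        except StopIteration:
            self.found = False
            return None
        self.found = True
        return row

def process_all(cur):
    out = []
    while True:
        row = cur.fetch()
        if not cur.found:         # EXIT WHEN NOT FOUND
            break
        out.append(row)           # -- Process one row
    return out                    # caller would CLOSE here
```

The point of the protocol is that the loop tests FOUND after each FETCH rather than pre-counting rows, which is why the same pattern works for any portal, including ones opened by another function and passed around as a refcursor value.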
[ { "msg_contents": "\n> > I tend to agree that we should not change the code to make \"select tab\"\n> > work, on the grounds of error-proneness.\n> \n> OK, here is another patch that does this:\n> \n> \ttest=> select test from test;\n> \t test \n> \t------\n> \t 1\n> \t(1 row)\n\nI object also. It is not a feature, and it might stand in the way of a future interpretation.\nWhat does it buy you except saving the 2 chars '.*' ? Imho it is far from obvious behavior.\n\nAndreas\n", "msg_date": "Mon, 21 May 2001 17:09:54 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Fix for tablename in targetlist" } ]
[ { "msg_contents": "I just upgraded PostgreSQL from 21 March CVS (rc1?) to May 19 16:21 GMT CVS.\nI found that all my cgi/fcg scripts which use libpq++ stopped working in\nthe vague sense of apache mentioning an internal server error. Relinking\nthem cured the problem (had to do this in haste => unfortunately no more\ninformation)\n\n-rwxr-xr-x 1 postgres postgres 154795 Mar 21 21:28 libpq++.so.3.1\n-rwxr-xr-x 1 postgres postgres 155212 May 21 14:48 libpq++.so.3.2\n\nis the change. The programs using libpq only (not lipq++ as well) worked as\nbefore. I am sorry, I don't have an error message to say how it is broken,\nbut I do have a slight feeling that maybe the major shared library number\ncould have been bumped up...\n\nAh... A clue!\n\nUndefined PLT symbol \"ConnectionBad__12PgConnection\" (reloc type = 7, symnum\n= 132)\n\nquartz% nm -g libpq++.so.3.1 | grep ConnectionBad\n000025e8 T ConnectionBad__12PgConnection\nquartz% !:s/1/2/\nnm -g libpq++.so.3.2 | grep ConnectionBad\n000024fc T ConnectionBad__C12PgConnection\n\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq++/pgconnection.h,v\nretrieving revision 1.10\nretrieving revision 1.11\ndiff -u -r1.10 -r1.11\n--- pgconnection.h 2001/02/10 02:31:30 1.10\n+++ pgconnection.h 2001/05/09 17:29:10 1.11\n\n- int ConnectionBad();\n...\n+ bool ConnectionBad() const;\n\n\nSo I would suggest that the major number be bumped, leaving a small window\nsince 9 May with a problem..\n\nCheers,\n\nPatrick\n", "msg_date": "Mon, 21 May 2001 16:13:28 +0100", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": true, "msg_subject": "shared library strangeness?" }, { "msg_contents": "\nI am always confused when to bump the minor and when the major. I also\nwas not sure how significant the change would be for apps. We added\nconst, and I changed the return type of one function from short to int. \nSeems like ConnectionBad was also changed.\n\nI bumped the minor in preparation for 7.2. 
Seems the major needs\nbumping. I will do it now for libpq++.\n\n\n> I just upgraded PostgreSQL from 21 March CVS (rc1?) to May 19 16:21 GMT CVS.\n> I found that all my cgi/fcg scripts which use libpq++ stopped working in\n> the vague sense of apache mentioning an internal server error. Relinking\n> them cured the problem (had to do this in haste => unfortunately no more\n> information)\n> \n> -rwxr-xr-x 1 postgres postgres 154795 Mar 21 21:28 libpq++.so.3.1\n> -rwxr-xr-x 1 postgres postgres 155212 May 21 14:48 libpq++.so.3.2\n> \n> is the change. The programs using libpq only (not lipq++ as well) worked as\n> before. I am sorry, I don't have an error message to say how it is broken,\n> but I do have a slight feeling that maybe the major shared library number\n> could have been bumped up...\n> \n> Ah... A clue!\n> \n> Undefined PLT symbol \"ConnectionBad__12PgConnection\" (reloc type = 7, symnum\n> = 132)\n> \n> quartz% nm -g libpq++.so.3.1 | grep ConnectionBad\n> 000025e8 T ConnectionBad__12PgConnection\n> quartz% !:s/1/2/\n> nm -g libpq++.so.3.2 | grep ConnectionBad\n> 000024fc T ConnectionBad__C12PgConnection\n> \n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq++/pgconnection.h,v\n> retrieving revision 1.10\n> retrieving revision 1.11\n> diff -u -r1.10 -r1.11\n> --- pgconnection.h 2001/02/10 02:31:30 1.10\n> +++ pgconnection.h 2001/05/09 17:29:10 1.11\n> \n> - int ConnectionBad();\n> ...\n> + bool ConnectionBad() const;\n> \n> \n> So I would suggest that the major number be bumped, leaving a small window\n> since 9 May with a problem..\n> \n> Cheers,\n> \n> Patrick\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 22 May 2001 07:23:45 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: shared library strangeness?" }, { "msg_contents": "On Tue, 22 May 2001, Bruce Momjian wrote:\n\n> I am always confused when to bump the minor and when the major. I also\n> was not sure how significant the change would be for apps. We added\n> const, and I changed the return type of one function from short to int. \n> Seems like ConnectionBad was also changed.\n\nSorry for the delay.\n\nYou need to bump the minor whenever you add to the library. You need to\nbump the major whenever you delete from the library or change(*) the\ninterface to a function. i.e. if a program links against the library, as\nlong as the routine names it linked against behave as it expected at\ncompile time, you don't need to bump the major.\n\n(*) NetBSD (and I think other OSs too) use a gcc-ism, RENAME, to be able\nto change the interface seen by new programs w/o changing the minor\nnumber. What you do is prototype the function as you want it now, and have\na __RENAME(new_name) at the end of the prototype. When you build the\nlibrary, you have a routine having the old footprint and old name, and a\nnew routine with the new footprint and named new_name. Old programs look\nfor the old name, and get what they expect. New programs look for the new\nname, and also get what they expect.\n\nI'm not sure if Postgres needs to go to that much trouble.\n\nTake care,\n\nBill\n\n", "msg_date": "Mon, 2 Jul 2001 09:29:23 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@zembu.com>", "msg_from_op": false, "msg_subject": "Re: shared library strangeness?" }, { "msg_contents": "On Mon, Jul 02, 2001 at 09:29:23AM -0700, Bill Studenmund wrote:\n> On Tue, 22 May 2001, Bruce Momjian wrote:\n> \n> > I am always confused when to bump the minor and when the major. 
I also\n> > was not sure how significant the change would be for apps. We added\n> > const, and I changed the return type of one function from short to int. \n> > Seems like ConnectionBad was also changed.\n> \n> You need to bump the minor whenever you add to the library. You need to\n> bump the major whenever you delete from the library or change(*) the\n> interface to a function. i.e. if a program links against the library, as\n> long as the routine names it linked against behave as it expected at\n> compile time, you don't need to bump the major.\n> \n> (*) NetBSD (and I think other OSs too) use a gcc-ism, RENAME, to be able\n> to change the interface seen by new programs w/o changing the minor\n> number. What you do is prototype the function as you want it now, and have\n> a __RENAME(new_name) at the end of the prototype. When you build the\n> library, you have a routine having the old footprint and old name, and a\n> new routine with the new footprint and named new_name. Old programs look\n> for the old name, and get what they expect. New programs look for the new\n> name, and also get what they expect.\n\nGNU binutils, Solaris ld, and other complete implementations of the ELF \nstandard also support \"symbol versioning\", which IIUC doesn't require\ncompiler support. This apparatus was used, for example, in Gnu libc to \nadd binary-backward-compatible support for a 64-bit file positioning\nwithout need to change the source-level interface.\n\nOf course just prepending a \"new_\" prefix would be a poor choice of \nversion naming convention.\n\nIt's possible to do this portably with elaborate macro apparatus, or (in \nC++) with namespace aliases, but it's not pretty. 
Because it clutters \nheader files, it can be confusing to users who depend on header files to \nsupplement documentation.\n\nNathan Myers\nncm@zembu.com\n\n", "msg_date": "Mon, 2 Jul 2001 11:58:15 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: shared library strangeness?" } ]
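Bill Studenmund's bump rule, add-only changes need a minor bump while removals or signature changes need a major bump, can be checked mechanically against two exported-symbol tables. The sketch below is illustrative Python, not a real tool: in practice you would diff `nm` output as Patrick did, and only the mangled names in the test come from the thread.

```python
# The major/minor rule from the thread, applied to exported-symbol
# maps (symbol name -> signature string).  Illustrative only.

def required_bump(old_api, new_api):
    """'major' if existing callers could break, 'minor' if symbols
    were only added, 'none' if the interface is unchanged."""
    removed_or_changed = [
        sym for sym, sig in old_api.items()
        if sym not in new_api or new_api[sym] != sig
    ]
    if removed_or_changed:
        return "major"          # deleted or changed interface
    if any(sym not in old_api for sym in new_api):
        return "minor"          # additions only
    return "none"
```

The libpq++ incident fits the "major" case: the old mangled symbol ConnectionBad__12PgConnection disappeared and a differently mangled ConnectionBad__C12PgConnection took its place, so already-linked binaries fail at load time exactly as Patrick saw.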
[ { "msg_contents": "\n> True, although there's a certain inconsistency in allowing a whole row\n> to be passed to a function by\n> \n> \tselect foo(pg_class) from pg_class;\n> \n> and not allowing the same row to be output by\n\nImho there is a big difference between the two. The foo(pg_class) calls a function \nwith argument type opaque or type pg_class.\nI would go so far as to say, that above foo function call would have a\ndifferent meaning if written with 'pg_class.*'.\n\n\tselect foo(pg_class.*) from pg_class;\n\t \nCould be interpreted as calling a function foo with pg_class ncolumns \narguments of the corresponding types.\n\n> \n> \tselect pg_class from pg_class; \n\nProbably a valid interpretation would be if type pg_class or opaque had an \noutput function.\n\nAndreas\n", "msg_date": "Mon, 21 May 2001 17:20:40 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Fix for tablename in targetlist " }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n>> select pg_class from pg_class; \n\n> Probably a valid interpretation would be if type pg_class or opaque had an \n> output function.\n\nHmm, good point. We shouldn't foreclose the possibility of handling\nthings that way. Okay, I'm convinced: allowing .* to be omitted isn't\na good idea.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 May 2001 11:47:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Fix for tablename in targetlist " }, { "msg_contents": "> Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> >> select pg_class from pg_class; \n> \n> > Probably a valid interpretation would be if type pg_class or opaque had an \n> > output function.\n> \n> Hmm, good point. We shouldn't foreclose the possibility of handling\n> things that way. 
Okay, I'm convinced: allowing .* to be omitted isn't\n> a good idea.\n> \n\nOK, old patch causing an elog being added now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 21 May 2001 13:42:40 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Fix for tablename in targetlist" }, { "msg_contents": "Patch applied and attached. TODO updated.\n\n> Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> >> select pg_class from pg_class; \n> \n> > Probably a valid interpretation would be if type pg_class or opaque had an \n> > output function.\n> \n> Hmm, good point. We shouldn't foreclose the possibility of handling\n> things that way. Okay, I'm convinced: allowing .* to be omitted isn't\n> a good idea.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/parser/parse_expr.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/parse_expr.c,v\nretrieving revision 1.95\ndiff -c -r1.95 parse_expr.c\n*** src/backend/parser/parse_expr.c\t2001/05/19 00:33:20\t1.95\n--- src/backend/parser/parse_expr.c\t2001/05/21 17:59:13\n***************\n*** 585,591 ****\n--- 585,594 ----\n \t\tNode\t *var = colnameToVar(pstate, ident->name);\n \n \t\tif (var != NULL)\n+ \t\t{\n+ \t\t\tident->isRel = FALSE;\n \t\t\tresult = transformIndirection(pstate, var, ident->indirection);\n+ \t\t}\n \t}\n \n \tif (result == NULL)\nIndex: src/backend/parser/parse_target.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/parse_target.c,v\nretrieving revision 1.66\ndiff -c -r1.66 parse_target.c\n*** src/backend/parser/parse_target.c\t2001/03/22 03:59:41\t1.66\n--- src/backend/parser/parse_target.c\t2001/05/21 17:59:14\n***************\n*** 55,60 ****\n--- 55,63 ----\n \tif (expr == NULL)\n \t\texpr = transformExpr(pstate, node, EXPR_COLUMN_FIRST);\n \n+ \tif (IsA(expr, Ident) && ((Ident *)expr)->isRel)\n+ \t\telog(ERROR,\"You can't use relation names alone in the target list, try relation.*.\");\t\n+ \n \ttype_id = exprType(expr);\n \ttype_mod = exprTypmod(expr);\n \nIndex: src/include/nodes/parsenodes.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/nodes/parsenodes.h,v\nretrieving revision 1.127\ndiff -c -r1.127 parsenodes.h\n*** src/include/nodes/parsenodes.h\t2001/05/07 00:43:25\t1.127\n--- src/include/nodes/parsenodes.h\t2001/05/21 17:59:19\n***************\n*** 1080,1087 ****\n \tNodeTag\t\ttype;\n \tchar\t *name;\t\t\t/* its name */\n \tList\t *indirection;\t/* array references */\n! \tbool\t\tisRel;\t\t\t/* is a relation - filled in by\n! 
\t\t\t\t\t\t\t\t * transformExpr() */\n } Ident;\n \n /*\n--- 1080,1086 ----\n \tNodeTag\t\ttype;\n \tchar\t *name;\t\t\t/* its name */\n \tList\t *indirection;\t/* array references */\n! \tbool\t\tisRel;\t\t\t/* is this a relation or a column? */\n } Ident;\n \n /*", "msg_date": "Mon, 21 May 2001 14:36:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] Fix for tablename in targetlist" } ]
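The check the applied patch adds in parse_target.c can be paraphrased as follows. This is a Python stand-in for the C logic, purely to show the behavior; the Ident shape mirrors the parser node from the patch and the error text is the one the patch emits:

```python
# Stand-in for the parse_target.c check added above: a bare relation
# name reaching the SELECT target list is rejected with a hint to use
# relation.* instead.

class Ident:
    """Mirrors the parser node: is_rel marks 'relation, not column'."""
    def __init__(self, name, is_rel):
        self.name = name
        self.is_rel = is_rel

def transform_target_entry(expr):
    if isinstance(expr, Ident) and expr.is_rel:
        raise ValueError(
            "You can't use relation names alone in the target list, "
            "try relation.*.")
    return expr
```

So a bare `select test from test` now errors out with that hint, while `select test.* from test`, which is expanded before this point is reached, continues to work.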
[ { "msg_contents": "Every once in a while some user complains that \"the cursor keys don't work\nanymore in psql\". There has even been one case where a major vendor has\nshipped binaries without readline support. While we keep telling them,\n\"make sure configure finds the library and the header files\", this is a\nrather difficult thing for users to do reliably, especially if they are\nusing a pre-packaged deal where no intervention is expected.\n\nI think we should add a --with-readline option to configure, and make\nconfigure die with an error if the option is used and no readline is\nfound. If the option is not used, readline would still be used if found.\n\nComments?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 21 May 2001 17:35:32 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Detecting readline in configure" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I think we should add a --with-readline option to configure, and make\n> configure die with an error if the option is used and no readline is\n> found. If the option is not used, readline would still be used if found.\n\nThis would not help, unless the user was clueful enough to invoke the\noption, which would likely not be true. (AFAICT, the majority of these\ncomplaints come from people who don't even know what libreadline is,\nlet alone that they have no or only a partial install of it.)\n\nAn effective but rather fascist approach would be to make the option be\n--without-readline, ie, the *default* behavior of configure is to fail\nunless a usable readline is found. I do not think I like that, since\nreadline does not qualify as a critical component IMHO. 
(And, given\nthe GPL-vs-every-other-license political agenda of the readline crowd,\nI don't want them to think we think that either ;-))\n\nI suggest that the behavior of configure not be changed, but that it be\ntweaked to put out a slightly more obvious notice about not being able\nto find readline support. Maybe\n\n\tchecking for libreadline ... no\n\tchecking for libedit ... no\n\t*\n\t* NOTICE: I couldn't find libreadline nor libedit. You will\n\t* not have history support in psql.\n\t*\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 May 2001 12:58:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Detecting readline in configure " }, { "msg_contents": "Tom Lane writes:\n\n> I suggest that the behavior of configure not be changed, but that it be\n> tweaked to put out a slightly more obvious notice about not being able\n> to find readline support. Maybe\n>\n> \tchecking for libreadline ... no\n> \tchecking for libedit ... no\n> \t*\n> \t* NOTICE: I couldn't find libreadline nor libedit. You will\n> \t* not have history support in psql.\n> \t*\n\nThis may be useful as well, but it doesn't help those doing unattended\nbuilds, such as RPMs and *BSD ports. In that case you need to abort to\nnotify the user that things didn't go the way the package maker had\nplanned.\n\nLet's do both.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 21 May 2001 19:48:59 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Detecting readline in configure " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> This may be useful as well, but it doesn't help those doing unattended\n> builds, such as RPMs and *BSD ports. 
In that case you need to abort to\n> notify the user that things didn't go the way the package maker had\n> planned.\n\nOh, I see: you are thinking of --with-readline as something a package\nmaintainer would put into a specfile. OK, that seems like something\nthat might be useful to do.\n\n> Let's do both.\n\nAgreed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 May 2001 13:56:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Detecting readline in configure " }, { "msg_contents": "peter_e@gmx.net (Peter Eisentraut) wrote in message news:<Pine.LNX.4.30.0105211728290.757-100000@peter.localdomain>...\n> Every once in a while some user complains that \"the cursor keys don't work\n> anymore in psql\". There has even been one case where a major vendor has\n> shipped binaries without readline support. While we keep telling them,\n> \"make sure configure finds the library and the header files\", this is a\n> rather difficult thing for users to do reliably, especially if they are\n> using a pre-packaged deal where no intervention is expected.\n\nJust FYI,\n\nOn SGI IRIX, the libreadline is in /usr/freeware/lib and\n/usr/freeware/include. It's no big thing to\nadd the --with-includes and --with-lib, but it would be nice if the\nirix template were\ntold to look for libreadline there.\n\n-Tony\n", "msg_date": "21 May 2001 13:28:20 -0700", "msg_from": "reina@nsi.edu (Tony Reina)", "msg_from_op": false, "msg_subject": "Re: Detecting readline in configure" }, { "msg_contents": "On Mon, 21 May 2001, Peter Eisentraut wrote:\n\n> Tom Lane writes:\n> \n> > \tchecking for libreadline ... no\n> > \tchecking for libedit ... no\n> > \t*\n> > \t* NOTICE: I couldn't find libreadline nor libedit. You will\n> > \t* not have history support in psql.\n> > \t*\n> \n> This may be useful as well, but it doesn't help those doing unattended\n> builds, such as RPMs and *BSD ports. 
In that case you need to abort to\n> notify the user that things didn't go the way the package maker had\n> planned.\n\n*BSD ports/packages shouldn't have much of a problem. They can encode\ndependencies, both in the binary package and in the build-from-source\nprocess. So if the package maker did things right, the packaging system\nwould have either squawked, or tried to install libreadline before running\nconfigure.\n\nTake care,\n\nBill\n\n", "msg_date": "Tue, 22 May 2001 23:58:20 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@zembu.com>", "msg_from_op": false, "msg_subject": "Re: Detecting readline in configure " } ]
[ { "msg_contents": "\n> Tom Lane wrote:\n> > \n> > begin;\n> > select * from foo where x = functhatreadsbar();\n\nI thought that the per statement way to do it with a non cacheable function was:\n\tselect * from foo where x = (select functhatreadsbar());\n\n??\nAndreas\n\nPS: an iscacheable function without arguments is imho a funny construct anyways.\n", "msg_date": "Mon, 21 May 2001 17:39:11 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Re: External search engine, advice" } ]
[ { "msg_contents": "\n> Really?! Once again: WAL records give you *physical* address of tuples\n> (both heap and index ones!) to be removed and size of log to read\n> records from is not comparable with size of data files.\n\nSo how about a background \"vacuum like\" process, that reads the WAL\nand does the cleanup ? Seems that would be great, since it then does not \nneed to scan, and does not make foreground cleanup necessary.\n\nProblem is when cleanup can not keep up with cleaning WAL files, that already \nwant to be removed. I would envision a config, that says how many Mb of WAL \nare allowed to queue up before clients are blocked.\n\nAndreas\n", "msg_date": "Mon, 21 May 2001 17:49:28 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Plans for solving the VACUUM problem " } ]
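A minimal sketch of the throttling rule proposed in the message above: writers block once the WAL still waiting for cleanup exceeds a configured size. The names (`max_uncleaned_wal_mb`, `XLogBacklog`) are invented for illustration; this is not PostgreSQL source.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical knob: how many MB of WAL may queue up for the cleanup
 * daemon before client write transactions are made to block. */
static int max_uncleaned_wal_mb = 64;

typedef struct XLogBacklog
{
    uint64_t written_bytes;     /* total WAL bytes written so far */
    uint64_t cleaned_bytes;     /* WAL bytes already processed by cleanup */
} XLogBacklog;

/* True if a writer should block until the cleanup daemon catches up. */
static bool
backlog_blocks_writers(const XLogBacklog *b)
{
    uint64_t pending = b->written_bytes - b->cleaned_bytes;

    return pending > (uint64_t) max_uncleaned_wal_mb * 1024 * 1024;
}
```

Vadim's follow-up below takes the same shape: the daemon advances `cleaned_bytes` as it gathers cleanup info from finished log segments, and writers consult the predicate before generating more WAL.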
[ { "msg_contents": "\n> Would it be possible to split the WAL traffic into two sets of files,\n\nSure, downside is two fsyncs :-( When I first suggested physical log \nI had a separate file in mind, but that is imho only a small issue.\n\nOf course people with more than 3 disks could benefit from a split.\n\nTom: If your ratio of physical pages vs WAL records is so bad, the config\nshould simply be changes to do fewer checkpoints (say every 20 min like a \ntypical Informix setup).\n\nAndreas\n", "msg_date": "Mon, 21 May 2001 18:00:58 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Plans for solving the VACUUM problem " }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> Tom: If your ratio of physical pages vs WAL records is so bad, the config\n> should simply be changes to do fewer checkpoints (say every 20 min like a \n> typical Informix setup).\n\nI was using the default configuration. What caused the problem was\nprobably not so much the standard 5-minute time-interval-driven\ncheckpoints, as it was the standard every-3-WAL-segments checkpoints.\nPossibly we ought to increase that number?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 May 2001 12:06:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Plans for solving the VACUUM problem " } ]
[ { "msg_contents": "\n> My point is that we'll need in dynamic cleanup anyway and UNDO is\n> what should be implemented for dynamic cleanup of aborted changes.\n\nI do not yet understand why you want to handle aborts different than outdated\ntuples. The ratio in a well tuned system should well favor outdated tuples.\nIf someone ever adds \"dirty read\" it is also not the case that it is guaranteed, \nthat nobody accesses the tuple you currently want to undo. So I really miss to see\nthe big difference.\n\nAndreas\n", "msg_date": "Mon, 21 May 2001 18:11:16 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Plans for solving the VACUUM problem " } ]
[ { "msg_contents": "\n> > Tom: If your ratio of physical pages vs WAL records is so bad, the config\n> > should simply be changes to do fewer checkpoints (say every 20 min like a \n> > typical Informix setup).\n> \n> I was using the default configuration. What caused the problem was\n> probably not so much the standard 5-minute time-interval-driven\n\nI am quite sure, that I would increase the default to at least 15 min here.\n\n> checkpoints, as it was the standard every-3-WAL-segments checkpoints.\n> Possibly we ought to increase that number?\n\nHere I am unfortunately not so sure with the current logic (that you can only free \nthem after the checkpoint). I think the admin has to choose this. Maybe increase to 4,\nbut 64 Mb is quite a lot for a small installation :-(\n\nAndreas\n", "msg_date": "Mon, 21 May 2001 18:22:47 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: Plans for solving the VACUUM problem " } ]
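The two checkpoint triggers being tuned in this exchange (the standard 5-minute timeout and the every-3-WAL-segments rule) combine into a single predicate. A sketch with invented names; the real decision is made inside the checkpoint code:

```c
#include <stdbool.h>

/* Tunables under discussion, with the defaults mentioned in the thread. */
static int checkpoint_timeout_secs = 300;  /* 5-minute interval */
static int checkpoint_segments = 3;        /* WAL segments per checkpoint */

/* Decide whether a checkpoint is due, given seconds elapsed and WAL
 * segments filled since the last one; either trigger suffices. */
static bool
checkpoint_due(int secs_since_last, int segments_since_last)
{
    return secs_since_last >= checkpoint_timeout_secs
        || segments_since_last >= checkpoint_segments;
}
```

Raising either constant, as Andreas suggests, simply makes checkpoints rarer at the cost of keeping more WAL segments around for recovery.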
[ { "msg_contents": "> > Really?! Once again: WAL records give you *physical*\n> > address of tuples (both heap and index ones!) to be\n> > removed and size of log to read records from is not\n> > comparable with size of data files.\n> \n> So how about a background \"vacuum like\" process, that reads\n> the WAL and does the cleanup ? Seems that would be great,\n> since it then does not need to scan, and does not make\n> forground cleanup necessary.\n> \n> Problem is when cleanup can not keep up with cleaning WAL\n> files, that already want to be removed. I would envision a\n> config, that sais how many Mb of WAL are allowed to queue\n> up before clients are blocked.\n\nYes, some daemon could read logs and gather cleanup info.\nWe could activate it when switching to new log file segment\nand synchronization with checkpointer is not big deal. That\ndaemon would also archive log files for WAL-based BAR,\nif archiving is ON.\n\nBut this will be useful only with on-disk FSM.\n\nVadim\n", "msg_date": "Mon, 21 May 2001 09:31:15 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Plans for solving the VACUUM problem " } ]
[ { "msg_contents": "We have a TODO item\n\t* Update reltuples in COPY\n\nI was just about to go do this when I realized that it may not be such\na hot idea after all. The problem is that updating pg_class.reltuples\nmeans that concurrent COPY operations will block each other, because\nthey want to update the same row in pg_class. You can already see this\nhappen in CREATE INDEX:\n\n\tcreate table foo(f1 int);\n\tbegin;\n\tcreate index fooey on foo(f1);\n\n-- in another psql do\n\n\tcreate index fooey2 on foo(f1);\n\n-- second backend blocks until first xact is committed or rolled back.\n\nWhile this doesn't bother me for CREATE INDEX, it does bother me for\nCOPY, since people often use COPY to avoid per-tuple INSERT overhead.\nIt seems pretty likely that this will cause blocking problems for real\napplications. I think that may be a bigger problem than the benefit of\nnot needing a VACUUM (or, now, ANALYZE) to get the stats updated.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 May 2001 12:45:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Is stats update during COPY IN really a good idea?" }, { "msg_contents": "> We have a TODO item\n> \t* Update reltuples in COPY\n> \n> I was just about to go do this when I realized that it may not be such\n> a hot idea after all. The problem is that updating pg_class.reltuples\n> means that concurrent COPY operations will block each other, because\n> they want to update the same row in pg_class. You can already see this\n> happen in CREATE INDEX:\n\nPeople are using COPY into the same table at the same time? \n\n> While this doesn't bother me for CREATE INDEX, it does bother me for\n> COPY, since people often use COPY to avoid per-tuple INSERT overhead.\n> It seems pretty likely that this will cause blocking problems for real\n> applications. 
I think that may be a bigger problem than the benefit of\n> not needing a VACUUM (or, now, ANALYZE) to get the stats updated.\n\nOh, well we can either decide to do it or remove the TODO item. Either\nway we win!\n\nMy vote is to update pg_class. The VACUUM takes much more time than the\nupdate, and we are only updating the pg_class row, right? Can't we just\nstart a new transaction and update the pg_class row, that way we don't\nhave to open it for writing during the copy.\n\nFYI, I had a 100k deep directory that caused me problems this morning. \nJust catching up.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 21 May 2001 13:41:50 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is stats update during COPY IN really a good idea?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> People are using COPY into the same table at the same time? \n\nYes --- we had a message from someone who was doing that (and running\ninto unrelated performance issues) just last week.\n\n> My vote is to update pg_class. The VACUUM takes much more time than the\n> update, and we are only updating the pg_class row, right?\n\nWhat? 
What does VACUUM have to do with this?\n\nThe reason this is a significant issue is that the first COPY could be\ninside a transaction, in which case the lock will persist until that\ntransaction commits, which could be awhile.\n\n> Can't we just start a new transaction and update the pg_class row,\n> that way we don't have to open it for writing during the copy.\n\nNo, we cannot; requiring COPY to happen outside a transaction block is\nnot acceptable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 May 2001 13:54:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Is stats update during COPY IN really a good idea? " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > People are using COPY into the same table at the same time? \n> \n> Yes --- we had a message from someone who was doing that (and running\n> into unrelated performance issues) just last week.\n\nOK.\n\n> > My vote is to update pg_class. The VACUUM takes much more time than the\n> > update, and we are only updating the pg_class row, right?\n> \n> What? What does VACUUM have to do with this?\n\nYou have to VACUUM to get pg_class updated after COPY, right?\n\n> The reason this is a significant issue is that the first COPY could be\n> inside a transaction, in which case the lock will persist until that\n> transaction commits, which could be awhile.\n\nOh, I see. Can we disable the pg_class update if we are in a\nmulti-statement transaction?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 21 May 2001 13:56:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is stats update during COPY IN really a good idea?" 
}, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> My vote is to update pg_class. The VACUUM takes much more time than the\n> update, and we are only updating the pg_class row, right?\n>> \n>> What? What does VACUUM have to do with this?\n\n> You have to VACUUM to get pg_class updated after COPY, right?\n\nBut doing this is only interesting if you need to update reltuples in\norder to get the planner to generate reasonable plans. In reality, if\nyou've added enough data to cause the plans to shift, you probably ought\nto do an ANALYZE anyway to update pg_statistic. Given that ANALYZE is a\nlot cheaper than it used to be, I think that the notion of making COPY\ndo this looks fairly obsolete anyhow.\n\n> Oh, I see. Can we disable the pg_class update if we are in a\n> multi-statement transaction?\n\nUgh. Do you really want COPY's behavior to depend on context like that?\n\nIf you did want context-dependent behavior, a saner approach would be to\nonly try to update reltuples if the copy has more than, say, doubled the\nold value. This would be likely to happen in bulk load and unlikely to\nhappen in concurrent-insertions-that-choose-to-use-COPY. But I'm not\nconvinced we need it at all.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 May 2001 14:07:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Is stats update during COPY IN really a good idea? " }, { "msg_contents": "> > You have to VACUUM to get pg_class updated after COPY, right?\n> \n> But doing this is only interesting if you need to update reltuples in\n> order to get the planner to generate reasonable plans. In reality, if\n> you've added enough data to cause the plans to shift, you probably ought\n> to do an ANALYZE anyway to update pg_statistic. 
Given that ANALYZE is a\n> lot cheaper than it used to be, I think that the notion of making COPY\n> do this looks fairly obsolete anyhow.\n\nYes, but remember, we are trying to catch ignorant cases, not\nexperienced people.\n\n> > Oh, I see. Can we disable the pg_class update if we are in a\n> > multi-statement transaction?\n> \n> Ugh. Do you really want COPY's behavior to depend on context like that?\n> \n> If you did want context-dependent behavior, a saner approach would be to\n> only try to update reltuples if the copy has more than, say, doubled the\n> old value. This would be likely to happen in bulk load and unlikely to\n> happen in concurrent-insertions-that-choose-to-use-COPY. But I'm not\n> convinced we need it at all.\n\nMaybe not. The COPY/pg_class hack is just to quiet people who have done\nCOPY and forgotten VACUUM or ANALYZE. Maybe the user is only performing\na few operations before deleting the table. Updating pg_class does help\nin that case.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 22 May 2001 07:34:14 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is stats update during COPY IN really a good idea?" } ]
[ { "msg_contents": "> > It probably will not cause more IO than vacuum does right now.\n> > But unfortunately it will not reduce that IO.\n> \n> Uh ... what? Certainly it will reduce the total cost of vacuum,\n> because it won't bother to try to move tuples to fill holes.\n\nOh, you're right here, but daemon will most likely read data files\nagain and again with in-memory FSM. Also, if we'll do partial table\nscans then we'll probably re-read indices > 1 time.\n\n> The index cleanup method I've proposed should be substantially\n> more efficient than the existing code, as well.\n\nNot in IO area.\n\n> > My point is that we'll need in dynamic cleanup anyway and UNDO is\n> > what should be implemented for dynamic cleanup of aborted changes.\n> \n> UNDO might offer some other benefits, but I doubt that it will allow\n> us to eliminate VACUUM completely. To do that, you would need to\n\nI never told this -:)\n\n> keep track of free space using exact, persistent (on-disk) bookkeeping\n> data structures. The overhead of that will be very substantial: more,\n> I predict, than the approximate approach I proposed.\n\nI doubt that \"big guys\" use in-memory FSM. If they were able to do this...\n\nVadim\n", "msg_date": "Mon, 21 May 2001 09:53:35 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Plans for solving the VACUUM problem " } ]
[ { "msg_contents": "> I hope we can avoid on-disk FSM. Seems to me that that would create\n> problems both for performance (lots of extra disk I/O) and reliability\n> (what happens if FSM is corrupted? A restart won't fix it).\n\nWe can use WAL for FSM.\n\nVadim\n", "msg_date": "Mon, 21 May 2001 09:55:40 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Plans for solving the VACUUM problem " } ]
[ { "msg_contents": "Jan Wieck <JanWieck@yahoo.com> writes:\n> I think the in-shared-mem FSM could have some max-per-table\n> limit and the background VACUUM just skips the entire table\n> as long as nobody reused any space.\n\nI was toying with the notion of trying to use Vadim's \"MNMB\" idea\n(see his description of the work he did for Perlstein last year);\nthat is, keep track of the lowest block number of any modified block\nwithin each relation since the last VACUUM. Then VACUUM would only\nhave to scan from there to the end. This covers the totally-untouched-\nrelation case nicely, and also helps a lot for large rels that you're\nmostly just adding to or perhaps updating recent additions.\n\nThe FSM could probably keep track of such info fairly easily, since\nit will already be aware of which blocks it's told backends to try\nto insert into. But it would have to be told about deletes too,\nwhich would mean more FSM access traffic and more lock contention.\nAnother problem (given my current view of how FSM should work) is that\nrels not being used at all would not be in FSM, or would age out of it,\nand so you wouldn't know that you didn't need to vacuum them.\nSo I'm not sure yet if it's a good idea.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 May 2001 13:22:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Plans for solving the VACUUM problem " } ]
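The MNMB bookkeeping described above can be sketched as follows. The structure and function names are invented; in a real implementation this would live in (or beside) the FSM, would need locking, and would have to hear about deletes as well as inserts:

```c
#include <stdint.h>

#define INVALID_BLOCK UINT32_MAX

/* Per-relation "minimal number of modified block" tracking: remember the
 * lowest block touched since the last VACUUM of the relation. */
typedef struct RelDirtyRange
{
    uint32_t min_modified_block;    /* INVALID_BLOCK => nothing modified */
} RelDirtyRange;

/* Called whenever a heap block is dirtied (insert/update/delete). */
static void
note_block_modified(RelDirtyRange *r, uint32_t blkno)
{
    if (r->min_modified_block == INVALID_BLOCK ||
        blkno < r->min_modified_block)
        r->min_modified_block = blkno;
}

/* VACUUM scans from this block to the end, then resets the range. */
static uint32_t
vacuum_start_block(RelDirtyRange *r, uint32_t nblocks)
{
    uint32_t start = (r->min_modified_block == INVALID_BLOCK)
        ? nblocks               /* untouched relation: nothing to scan */
        : r->min_modified_block;

    r->min_modified_block = INVALID_BLOCK;
    return start;
}
```

This covers the totally-untouched case (start equals the relation size, so the scan is empty) and the append-mostly case (start is near the end).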
[ { "msg_contents": "> From: Mikheev, Vadim \n> Sent: Monday, May 21, 2001 10:23 AM\n> To: 'Jan Wieck'; Tom Lane\n> Cc: The Hermit Hacker; 'Bruce Momjian';\n> pgsql-hackers@postgresql.orgrg.us.greatbridge.com\n\nStrange address, Jan?\n\n> Subject: RE: [HACKERS] Plans for solving the VACUUM problem\n> \n> \n> > I think the in-shared-mem FSM could have some max-per-table\n> > limit and the background VACUUM just skips the entire table\n> > as long as nobody reused any space. Also it might only\n> > compact pages that lead to 25 or more percent of freespace in\n> > the first place. That makes it more likely that if someone\n> > looks for a place to store a tuple that it'll fit into that\n> > block (remember that the toaster tries to keep main tuples\n> > below BLKSZ/4).\n> \n> This should be configurable parameter like PCFREE (or something\n> like that) in Oracle: consider page for insertion only if it's\n> PCFREE % empty.\n> \n> Vadim\n> \n", "msg_date": "Mon, 21 May 2001 10:37:58 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Plans for solving the VACUUM problem" } ]
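The PCFREE-style rule Vadim mentions is just a percentage test on a page's free space. An illustrative sketch; the parameter name follows Oracle's usage as quoted above, not any actual PostgreSQL setting:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical knob: a page is offered for new inserts only while at
 * least this percentage of it is still free. */
static int pcfree_percent = 25;

/* Consider a page for insertion only if it is pcfree_percent % empty
 * (integer arithmetic, so no floating point is needed). */
static bool
page_accepts_inserts(size_t page_size, size_t free_bytes)
{
    return free_bytes * 100 >= (size_t) pcfree_percent * page_size;
}
```

With an 8K page and the 25% threshold, a page drops out of the insert candidates once less than 2048 bytes remain free; this dovetails with Jan's point that the toaster tries to keep main tuples below BLKSZ/4.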
[ { "msg_contents": "> > We could keep share buffer lock (or add some other kind of lock)\n> > untill tuple projected - after projection we need not to read data\n> > for fetched tuple from shared buffer and time between fetching\n> > tuple and projection is very short, so keeping lock on buffer will\n> > not impact concurrency significantly.\n> \n> Or drop the pin on the buffer to show we no longer have a pointer to it.\n\nThis is not good for seqscans which will return to that buffer anyway.\n\n> > Or we could register callback cleanup function with buffer so bufmgr\n> > would call it when refcnt drops to 0.\n> \n> Hmm ... might work. There's no guarantee that the refcnt\n> would drop to zero before the current backend exits, however.\n> Perhaps set a flag in the shared buffer header, and the last guy\n> to drop his pin is supposed to do the cleanup?\n\nThis is what I've meant - set (register) some pointer in buffer header\nto cleanup function.\n\n> But then you'd be pushing VACUUM's work into productive transactions,\n> which is probably not the way to go.\n\nNot big work - I wouldn't worry about it.\n\n> > Two ways: hold index page lock untill heap tuple is checked\n> > or (rough schema) store info in shmem (just IndexTupleData.t_tid\n> > and flag) that an index tuple is used by some scan so cleaner could\n> > change stored TID (get one from prev index tuple) and set flag to\n> > help scan restore its current position on return.\n> \n> Another way is to mark the index tuple \"gone but not forgotten\", so to\n> speak --- mark it dead without removing it. (We could know that we need\n> to do that if we see someone else has a buffer pin on the index page.)\n\nRegister cleanup function just like with heap above.\n\n> None of these seem real clean though. 
Needs more thought.\n\nVadim\n", "msg_date": "Mon, 21 May 2001 10:52:28 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Plans for solving the VACUUM problem " } ]
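The scheme the two messages above converge on (register a cleanup pointer in the buffer header; the last backend to drop its pin runs it) looks roughly like this. All names are invented, and the real buffer manager would need locking around the reference count:

```c
#include <stddef.h>

/* Minimal stand-in for a shared buffer header carrying a deferred
 * cleanup callback, per the discussion above. */
typedef struct BufferDesc
{
    int refcount;                           /* number of pins held */
    void (*cleanup)(struct BufferDesc *);   /* NULL if none registered */
} BufferDesc;

/* The cleaner registers its deferred work on the pinned buffer. */
static void
register_cleanup(BufferDesc *buf, void (*fn)(BufferDesc *))
{
    buf->cleanup = fn;
}

/* Drop one pin; the last unpinner performs the deferred cleanup. */
static void
unpin_buffer(BufferDesc *buf)
{
    if (--buf->refcount == 0 && buf->cleanup != NULL)
    {
        void (*fn)(BufferDesc *) = buf->cleanup;

        buf->cleanup = NULL;    /* run at most once */
        fn(buf);
    }
}

/* Example cleanup used in the test below: just counts invocations. */
static int cleanups_run = 0;

static void
count_cleanup(BufferDesc *buf)
{
    (void) buf;
    cleanups_run++;
}
```

As Tom notes, there is no guarantee the refcount reaches zero before the backend exits, so the flag would also have to be checked at backend shutdown.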
[ { "msg_contents": "> > My point is that we'll need in dynamic cleanup anyway and UNDO is\n> > what should be implemented for dynamic cleanup of aborted changes.\n> \n> I do not yet understand why you want to handle aborts different than\n> outdated tuples.\n\nMaybe because of aborted tuples have shorter Time-To-Live.\nAnd probability to find pages for them in buffer pool is higher.\n\n> The ratio in a well tuned system should well favor outdated tuples.\n> If someone ever adds \"dirty read\" it is also not the case that it\n> is guaranteed, that nobody accesses the tuple you currently want\n> to undo. So I really miss to see the big difference.\n\nIt will not be guaranteed anyway as soon as we start removing\ntuples without exclusive access to relation.\n\nAnd, I cannot say that I would implement UNDO because of\n1. (cleanup) OR 2. (savepoints) OR 4. (pg_log management)\nbut because of ALL of 1., 2., 4.\n\nVadim\n", "msg_date": "Mon, 21 May 2001 11:01:45 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Plans for solving the VACUUM problem " }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> > > My point is that we'll need in dynamic cleanup anyway and UNDO is\n> > > what should be implemented for dynamic cleanup of aborted changes.\n> > \n> > I do not yet understand why you want to handle aborts different than\n> > outdated tuples.\n> \n> Maybe because of aborted tuples have shorter Time-To-Live.\n> And probability to find pages for them in buffer pool is higher.\n\nThis brings up an idea I had about auto-vacuum. I wonder if autovacuum\ncould do most of its work by looking at the buffer cache pages and\ncommit xids. Seems it would be quite easy to record freespace in pages\nalready in the buffer and collect that information for other backends to\nuse. 
It could also move tuples between cache pages with little\noverhead.\n\nThere wouldn't be an I/O overhead, and frequently used tables are\nalready in the cache.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 22 May 2001 13:32:40 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "> > The ratio in a well tuned system should well favor outdated tuples.\n> > If someone ever adds \"dirty read\" it is also not the case that it\n> > is guaranteed, that nobody accesses the tuple you currently want\n> > to undo. So I really miss to see the big difference.\n> \n> It will not be guaranteed anyway as soon as we start removing\n> tuples without exclusive access to relation.\n> \n> And, I cannot say that I would implement UNDO because of\n> 1. (cleanup) OR 2. (savepoints) OR 4. (pg_log management)\n> but because of ALL of 1., 2., 4.\n\nOK, I understand your reasoning here, but I want to make a comment.\n\nLooking at the previous features you added, like subqueries, MVCC, or\nWAL, these were major features that greatly enhanced the system's\ncapabilities.\n\nNow, looking at UNDO, I just don't see it in the same league as those\nother additions. Of course, you can work on whatever you want, but I\nwas hoping to see another major feature addition for 7.2. We know we\nbadly need auto-vacuum, improved replication, and point-in-time recover.\n\nI can see UNDO improving row reuse, and making subtransactions and\npg_log compression easier, but these items do not require UNDO. 
\n\nIn fact, I am unsure why we would want an UNDO way of reusing rows of\naborted transactions and an autovacuum way of reusing rows from\ncommitted transactions, expecially because aborted transactions account\nfor <5% of all transactions. It would be better to put work into one\nmechanism that would reuse all tuples.\n\nIf UNDO came with no limitations, it may be a good option, but the need\nto carry tuples until transaction commit does add an extra burden on\nprogrammers and administrators, and I just don't see what we are getting\nfor it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 22 May 2001 13:47:15 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > > The ratio in a well tuned system should well favor outdated tuples.\n> > > If someone ever adds \"dirty read\" it is also not the case that it\n> > > is guaranteed, that nobody accesses the tuple you currently want\n> > > to undo. So I really miss to see the big difference.\n> >\n> > It will not be guaranteed anyway as soon as we start removing\n> > tuples without exclusive access to relation.\n> >\n> > And, I cannot say that I would implement UNDO because of\n> > 1. (cleanup) OR 2. (savepoints) OR 4. (pg_log management)\n> > but because of ALL of 1., 2., 4.\n> \n> OK, I understand your reasoning here, but I want to make a comment.\n> \n> Looking at the previous features you added, like subqueries, MVCC, or\n> WAL, these were major features that greatly enhanced the system's\n> capabilities.\n> \n> Now, looking at UNDO, I just don't see it in the same league as those\n> other additions. \n\nHmm hasn't it been an agreement ? 
I know UNDO was planned\nfor 7.0 and I've never heard objections about it until\nrecently. People also have referred to an overwriting smgr\neasily. Please tell me how to introduce an overwriting smgr\nwithout UNDO.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Wed, 23 May 2001 09:49:33 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "> > Looking at the previous features you added, like subqueries, MVCC, or\n> > WAL, these were major features that greatly enhanced the system's\n> > capabilities.\n> > \n> > Now, looking at UNDO, I just don't see it in the same league as those\n> > other additions. \n> \n> Hmm hasn't it been an agreement ? I know UNDO was planned\n> for 7.0 and I've never heard objections about it until\n> recently. People also have referred to an overwriting smgr\n> easily. Please tell me how to introduce an overwriting smgr\n> without UNDO.\n\nI guess that is the question. Are we heading for an overwriting storage\nmanager? I didn't see that in Vadim's list of UNDO advantages, but\nmaybe that is his final goal. If so UNDO may make sense, but then the\nquestion is how do we keep MVCC with an overwriting storage manager?\n\nThe only way I can see doing it is to throw the old tuples into the WAL\nand have backends read through that for MVCC info.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 22 May 2001 20:53:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "\n\nBruce Momjian wrote:\n> \n> > > Looking at the previous features you added, like subqueries, MVCC, or\n> > > WAL, these were major features that greatly enhanced the system's\n> > > capabilities.\n> > >\n> > > Now, looking at UNDO, I just don't see it in the same league as those\n> > > other additions.\n> >\n> > Hmm hasn't it been an agreement ? I know UNDO was planned\n> > for 7.0 and I've never heard objections about it until\n> > recently. People also have referred to an overwriting smgr\n> > easily. Please tell me how to introduce an overwriting smgr\n> > without UNDO.\n> \n> I guess that is the question. Are we heading for an overwriting storage\n> manager?\n\nI've never heard that it was given up. So there seems to be\nat least a possibility to introduce it in the future.\nPostgreSQL could have lived without UNDO due to its no\noverwrite smgr. I don't know if avoiding UNDO is possible\nto implement partial rollback(I don't think it's easy\nanyway). However it seems harmful for the future \nimplementation of an overwriting smgr if we would\nintroduce it.\n\n> I didn't see that in Vadim's list of UNDO advantages, but\n> maybe that is his final goal.\n> If so UNDO may make sense, but then the\n> question is how do we keep MVCC with an overwriting storage manager?\n> \n\nIt doesn't seem easy. 
ISTM it's one of the main reason we\ncouldn't introduce an overwriting smgr in 7.2.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Wed, 23 May 2001 12:05:23 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "Hiroshi Inoue wrote:\n> \n> People also have referred to an overwriting smgr easily.\n\nI am all for an overwriting smgr, but as a feature that can be selected \non a table-by table basis (or at least in compile time), not as an\noverall change\n\n> Please tell me how to introduce an overwriting smgr\n> without UNDO.\n\nI would much more like a dead-space-reusing smgr on top of MVCC which\ndoes \nnot touch live transactions.\n\n------------------\nHannu\n", "msg_date": "Wed, 23 May 2001 12:11:49 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n>> I guess that is the question. Are we heading for an overwriting storage\n>> manager?\n\n> I've never heard that it was given up. So there seems to be\n> at least a possibility to introduce it in the future.\n\nUnless we want to abandon MVCC (which I don't), I think an overwriting\nsmgr is impractical. We need a more complex space-reuse scheme than\nthat.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 May 2001 15:31:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> >> I guess that is the question. Are we heading for an overwriting storage\n> >> manager?\n> \n> > I've never heard that it was given up. So there seems to be\n> > at least a possibility to introduce it in the future.\n> \n> Unless we want to abandon MVCC (which I don't), I think an overwriting\n> smgr is impractical. \n\nImpractical ? 
Oracle does it.\n\n> We need a more complex space-reuse scheme than\n> that.\n> \n\nIMHO we have to decide which to go now.\nAs I already mentioned, changing current handling\nof transactionId/CommandId to avoid UNDO is not\nonly useless but also harmful for an overwriting\nsmgr.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Thu, 24 May 2001 08:15:06 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "At 08:15 AM 5/24/01 +0900, Hiroshi Inoue wrote:\n\n>> Unless we want to abandon MVCC (which I don't), I think an overwriting\n>> smgr is impractical. \n>\n>Impractical ? Oracle does it.\n\nIt's not easy, though ... the current PG scheme has the advantage of being\nrelatively simple and probably more efficient than scanning logs like\nOracle has to do (assuming your datafiles aren't thoroughly clogged with\nold dead tuples).\n\nHas anyone looked at InterBase for hints for space-reusing strategies? \n\nAs I understand it, they have a tuple-versioning scheme similar to PG's.\n\nIf nothing else, something might be learned as to the efficiency and\neffectiveness of one particular approach to solving the problem.\n\n\n\n- Don Baccus, Portland OR <dhogaza@pacifier.com>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Wed, 23 May 2001 16:24:48 -0700", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "Don Baccus wrote:\n> \n> At 08:15 AM 5/24/01 +0900, Hiroshi Inoue wrote:\n> \n> >> Unless we want to abandon MVCC (which I don't), I think an overwriting\n> >> smgr is impractical.\n> >\n> >Impractical ? Oracle does it.\n> \n> It's not easy, though ... 
the current PG scheme has the advantage of being\n> relatively simple and probably more efficient than scanning logs like\n> Oracle has to do (assuming your datafiles aren't thoroughly clogged with\n> old dead tuples).\n> \n\nI think so too. I've never said that an overwriting smgr\nis easy and I don't love it particularily.\n\nWhat I'm objecting is to avoid UNDO without giving up\nan overwriting smgr. We shouldn't be noncommittal now. \n\nregards,\nHiroshi Inoue\n", "msg_date": "Thu, 24 May 2001 11:22:00 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Tom Lane wrote:\n>> Unless we want to abandon MVCC (which I don't), I think an overwriting\n>> smgr is impractical. \n\n> Impractical ? Oracle does it.\n\nOracle has MVCC?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 May 2001 23:02:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Tom Lane wrote:\n> >> Unless we want to abandon MVCC (which I don't), I think an overwriting\n> >> smgr is impractical.\n> \n> > Impractical ? Oracle does it.\n> \n> Oracle has MVCC?\n> \n\nYes.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Thu, 24 May 2001 12:24:20 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" }, { "msg_contents": "At 11:02 PM 5/23/01 -0400, Tom Lane wrote:\n>Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n>> Tom Lane wrote:\n>>> Unless we want to abandon MVCC (which I don't), I think an overwriting\n>>> smgr is impractical. \n>\n>> Impractical ? Oracle does it.\n>\n>Oracle has MVCC?\n\nWith restrictions, yes. You didn't know that? 
Vadim did ...\n\n\n\n- Don Baccus, Portland OR <dhogaza@pacifier.com>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Wed, 23 May 2001 21:42:43 -0700", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem " }, { "msg_contents": "Don Baccus wrote:\n> \n> At 08:15 AM 5/24/01 +0900, Hiroshi Inoue wrote:\n> \n> >> Unless we want to abandon MVCC (which I don't), I think an overwriting\n> >> smgr is impractical.\n> >\n> >Impractical ? Oracle does it.\n> \n> It's not easy, though ... the current PG scheme has the advantage of being\n> relatively simple and probably more efficient than scanning logs like\n> Oracle has to do (assuming your datafiles aren't thoroughly clogged with\n> old dead tuples).\n> \n> Has anyone looked at InterBase for hints for space-reusing strategies?\n> \n> As I understand it, they have a tuple-versioning scheme similar to PG's.\n> \n> If nothing else, something might be learned as to the efficiency and\n> effectiveness of one particular approach to solving the problem.\n\nIt may also be beneficial to study SapDB (which is IIRC a branch-off of \nAdabas) although they claim at http://www.sapdb.org/ in features\nsection:\n\nNOT supported features:\n\n Collations\n\n Result sets that are created within a stored procedure and\nfetched outside. This feature is planned to be\n offered in one of the coming releases.\n Meanwhile, use temporary tables.\n see Reference Manual: SAP DB 7.2 and 7.3 -> Data\ndefinition -> CREATE TABLE statement: Owner of a\n table\n\n Multi version concurrency for OLTP\n It is available with the object extension of SAPDB only.\n\n Hot stand by\n This feature is planned to be offered in one of the coming\nreleases. 
\n\nSo MVCC seems to be a bolt-on feature there.\n\n---------------------\nHannu\n", "msg_date": "Fri, 25 May 2001 02:05:19 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem" } ]
[ { "msg_contents": "Hi folks,\n\n I just committed changes to the SPI manager and PL/pgSQL,\n providing full CURSOR support. A detailed description is\n attached as a Postscript file. Brief description follows.\n\n#\n# Note:\n# The version with the attachment didn't make it up to now.\n# Resending it without.\n#\n\n Enhancement of SPI:\n\n There are a couple of new functions and internal changes to\n the SPI memory management. SPI now creates separate memory\n contexts for prepared and saved plans and tuple result\n sets. The contexts are children of where the allocations\n used to happen, so it's fully upgrade compatible. New\n functions SPI_freeplan(plan) and SPI_freetuptable(tuptab)\n allow to simply destroy the contexts when no longer needed.\n\n The other new functions deal with portals:\n\n Portal\n SPI_cursor_find(char *name);\n\n Get an existing portal by name\n\n Portal\n SPI_cursor_open(char *name, void *plan,\n Datum *Values, char *Nulls);\n\n Use a prepared or saved SPI plan to create a new\n portal. if <name> is NULL, the function will make up\n a unique name inside the backend. A portal created by\n this can be accessed by the main application as well\n if SPI_cursor_open() was called inside of an explicit\n transaction block.\n\n void\n SPI_cursor_fetch(Portal portal, bool forward, int count);\n\n Fetch at max <count> tuples from <portal> into the\n well known SPI_tuptable and set SPI_processed.\n <portal> could be any existing portal, even one\n created by the main application using DECLARE ...\n CURSOR.\n\n void\n SPI_cursor_move(Portal portal, bool forward, int count);\n\n Same as fetch but suppress tuples.\n\n void\n SPI_cursor_close(Portal portal);\n\n Close the given portal. Doesn't matter who created it\n (SPI or main application).\n\n New datatype \"refcursor\"\n\n A new datatype \"refcursor\" is created as a basetype, which\n is equivalent to \"text\". 
This is required below.\n\n Enhancement of PL/pgSQL\n\n Explicit cursor can be declared as:\n\n DECLARE\n ...\n curname CURSOR [(argname type [, ...])]\n IS <select_stmt>;\n ...\n\n The <select_stmt> can use any so far declared variable or\n positional function arguments (possibly aliased). These\n will be evaluated at OPEN time.\n\n Explicit cursor can be opened with:\n\n BEGIN\n ...\n OPEN curname [(expr [, ...])];\n ...\n\n The expression list is required if and only if the explicit\n cursor declaration contains an argument list. The created\n portal will be named 'curname' and is accessible globally.\n\n Reference cursor can be declared as:\n\n DECLARE\n ...\n varname REFCURSOR;\n ...\n\n and opened with\n\n BEGIN\n ...\n OPEN varname FOR <select_stmt>;\n -- or\n OPEN varname FOR EXECUTE <string expression>;\n ...\n\n The type \"refcursor\" is a datatype like text, and the\n variable's value controls the \"name\" argument to\n SPI_cursor_open(). Defaulting to NULL, the resulting portal\n will get a generic, unique name and the variable will be\n set to that name at OPEN. If the function assigns a value\n before OPEN, that'll be used as the portal name.\n\n Cursors (of both types) are used with:\n\n BEGIN\n ...\n FETCH cursorvar INTO {record | row | var [, ...]};\n ...\n CLOSE cursorvar;\n\n FETCH sets the global variable FOUND to flag if another row\n is available. A typical loop thus looks like this:\n\n BEGIN\n OPEN myrefcur FOR SELECT * FROM mytab;\n LOOP\n FETCH myrefcur INTO myrow;\n EXIT WHEN NOT FOUND;\n -- Process one row\n END LOOP;\n CLOSE myrefcur;\n\n The \"refcursor\" type can be used for function arguments or\n return values as well. 
So one function can call another to\n open a cursor, assigning its return value to a\n \"refcursor\", pass that down to other functions and - you\n get the idea.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Mon, 21 May 2001 15:05:19 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": true, "msg_subject": "PL/pgSQL CURSOR support" } ]
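To make the hand-off described in the announcement concrete, here is a minimal sketch using only the syntax it documents. The table name `mytab` and the two function names are invented for illustration; one function opens a cursor and returns its refcursor value, a second function fetches from that portal:

```sql
-- Hypothetical example of the refcursor hand-off: the portal is opened
-- in one function and consumed in another, via the refcursor value.
CREATE FUNCTION open_mytab_cursor () RETURNS refcursor AS '
DECLARE
    cur REFCURSOR;
BEGIN
    OPEN cur FOR SELECT * FROM mytab;
    RETURN cur;                  -- the portal name travels back to the caller
END;
' LANGUAGE 'plpgsql';

CREATE FUNCTION count_mytab () RETURNS integer AS '
DECLARE
    cur REFCURSOR;
    r   mytab%ROWTYPE;
    n   integer := 0;
BEGIN
    cur := open_mytab_cursor();  -- cursor was opened by the other function
    LOOP
        FETCH cur INTO r;
        EXIT WHEN NOT FOUND;     -- FOUND is set by FETCH, as described above
        n := n + 1;
    END LOOP;
    CLOSE cur;
    RETURN n;
END;
' LANGUAGE 'plpgsql';
```

Called as `SELECT count_mytab();`, the second function walks the whole table through the portal the first one opened.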
[ { "msg_contents": "> > We could keep share buffer lock (or add some other kind of lock)\n> > until tuple projected - after projection we need not to read data\n> > for fetched tuple from shared buffer and time between fetching\n> > tuple and projection is very short, so keeping lock on buffer will\n> > not impact concurrency significantly.\n> \n> Or drop the pin on the buffer to show we no longer have a pointer\n> to it. I'm not sure that the time to do projection is short though\n> --- what if there are arbitrary user-defined functions in the quals\n> or the projection targetlist?\n\nWell, while we are on this subject I finally should say about issue\nbothered me for long time: only \"simple\" functions should be allowed\nto deal with data in shared buffers directly. \"Simple\" means: no SQL\nqueries there. Why? One reason: we hold shlock on buffer while doing
One reason: we hold shlock on buffer while doing\n> seqscan qual - what if qual' SQL queries will try to acquire exclock\n> on the same buffer?\n\nI think we're there already: AFAICT, user-specified quals and\nprojections are done after dropping the buffer shlock. (Yes, I know\nthere's a HeapKeyTest inside heapgettup, but user quals don't get\ndone there.) We do still hold a pin, but that seems OK to me.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 May 2001 16:46:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plans for solving the VACUUM problem " } ]
[ { "msg_contents": "> Correct me if I am wrong, but both cases do present a problem\n> currently in 7.1. The WAL log will not remove any WAL files\n> for transactions that are still open (even after a checkpoint\n> occurs). Thus if you do a bulk insert of gigabyte size you will\n> require a gigabyte sized WAL directory. Also if you have a simple\n> OLTP transaction that the user started and walked away from for\n> his one week vacation, then no WAL log files can be deleted until\n> that user returns from his vacation and ends his transaction.\n\nTodo:\n\n1. Compact log files after checkpoint (save records of uncommitted\n transactions and remove/archive others).\n2. Abort long running transactions.\n\nVadim\n", "msg_date": "Mon, 21 May 2001 13:29:03 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: AW: Plans for solving the VACUUM problem" } ]
[ { "msg_contents": "I know this is not an \"on-topic\" post, but I wanted my message to reach the\nright people.\n\nI want to thank all of you who have worked so hard to make Postgres such an\nexcellent database. Since people tend to complain a lot, I thought it might\nbe nice to share some good news...\n\nI have just finished a several month project migrating a backend from Access\nto Postgres. We went live with PG about a week ago and the results are far\nbetter than we ever hoped for (knock on wood). The database is comprised of\n43 tables, consumes over 167 MB (\"du\" on \"data\") and tends to have 5-10\nsimultaneous users. We have gone from several database corruptions per day\nto 24 hour uptime. Using ADO/ODBC, our average calc engine run-time was\nreduced by 50%!\n\nLive backups, significantly increased performance, scalability for\nadditional users, elimination of corruption, and a path for web access to\nour system -- Postgres has greatly simplified my life!\n\nThank you for all of your hard work!\n\nSincerely,\n\nBrian E. Pangburn\nThe Pangburn Company, Inc.\nwww.nqadmin.com\n\n\n\n", "msg_date": "Mon, 21 May 2001 16:33:23 -0500", "msg_from": "\"Brian E. Pangburn\" <bepangburn@yahoo.com>", "msg_from_op": true, "msg_subject": "Thank you" } ]
[ { "msg_contents": "Hi,\n\nI need to grant access on my DB to many users who can SELECT, UPDATE,\nINSERT or DELETE over tables and views. That part is all OK.\n\nBut I don't want these users to be able to create new tables on my DB.\n\nHow can I do that ???\n\n\nRegards,\n\nTulio Oliveira\n\n\n-- \nA wise old man once said: \"When you update an exploit, you are\ngood. When you are the first to hack each successive version of a\nproduct that runs on millions of computers across the Internet, you\ncreate a Dynasty\".\n", "msg_date": "Mon, 21 May 2001 20:44:14 -0300", "msg_from": "Tulio Oliveira <mestredosmagos@marilia.com>", "msg_from_op": true, "msg_subject": "Prevent CREATE TABLE" } ]
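A hedged note for readers of this archive: in the PostgreSQL releases of this thread's era there was, as far as I can tell, no privilege covering table creation itself, but once schemas arrived (7.3 and later) the question above has a direct answer. The role name `appuser` and table name `mytable` below are invented for illustration:

```sql
-- Later PostgreSQL (7.3+, with schema support) can revoke the
-- schema-level CREATE privilege, which blocks new tables there while
-- ordinary DML grants on existing objects keep working.
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
GRANT USAGE ON SCHEMA public TO appuser;
GRANT SELECT, INSERT, UPDATE, DELETE ON mytable TO appuser;
```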
[ { "msg_contents": "\n> We have a TODO item\n> \t* Update reltuples in COPY\n> \n> I was just about to go do this when I realized that it may not be such\n> a hot idea after all.\n\nImho it is not a good idea at all. The statistics are a very sensitive area,\nthat imho should only be calculated on request. I already don't like the\nstatistics that are implicitly created during create index.\n\nEither you have online stats keeping or you don't.\nFor me this is a definite all or nothing issue. Anything in between is \nonly good for unpleasant surprises.\n\nI have very strong feelings about this, because of bad experience.\nI would be willing to go into detail.\n\nA syntactic extension to copy (\"with analyze\") on the other hand would \nbe a feature.\n\nAndreas\n", "msg_date": "Tue, 22 May 2001 09:19:53 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Is stats update during COPY IN really a good idea?" }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> \n> > We have a TODO item\n> > \t* Update reltuples in COPY\n> > \n> > I was just about to go do this when I realized that it may not be such\n> > a hot idea after all.\n> \n> Imho it is not a good idea at all. The statistics are a very sensitive area,\n> that imho should only be calculated on request. I already don't like the\n> statistics that are implicitly created during create index.\n\nOK, if you feel strongly, and Tom does, I will remove the item. \nHowever, just remember that pg_class already has a row count that we\nforce in there by default.\n\n\ttest=> create table test (x int);\n\tCREATE\n\ttest=> select reltuples from pg_class where relname = 'test';\n\t reltuples \n\t-----------\n\t 1000\n\t(1 row)\n\nI was just suggesting we make that accurate if we can, even if we can\nmake it accurate only 80% of the time. Once we INSERT, it isn't\naccurate anymore anyway. 
This is just an estimate, and in my mind, it\ndoesn't have to be accurate in all cases.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 22 May 2001 07:53:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Is stats update during COPY IN really a good idea?" } ]
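The "calculated on request" behaviour argued for in this thread can be seen directly in a session like the following sketch (table name invented; in the 7.1-era releases under discussion the explicit statistics refresh is spelled `VACUUM ANALYZE`):

```sql
CREATE TABLE test (x int);
SELECT reltuples FROM pg_class WHERE relname = 'test';
-- reltuples holds the 1000-row default estimate until stats are refreshed

COPY test FROM '/tmp/test.dat';   -- bulk load; reltuples is not updated

VACUUM ANALYZE test;              -- explicit, on-request statistics update
SELECT reltuples FROM pg_class WHERE relname = 'test';
-- now reflects the loaded row count (until further INSERTs drift it again)
```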
[ { "msg_contents": "Just thought that I'd tell you.\n\nI've been waiting (very patiently, I think) for a long time for outer joins, \nviews w/ joins and not the least, functions that can handle NULL's in an \norderly way.\n\nA little anxious I started implementing these elements in my projects, \nremoving the workarounds and patches I've put in there, and - well, they just \nall work like a charm!!\n\nThis is great. Working with Windows products, I'm more used to being \ndisappointed whenever people promise that this or that function will be in \nthe next release. But this time, with PostgreSQL, not :-)\n\n-- \nKaare Rasmussen --Linux, games,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2501\nHowitzvej 75 Open 14.00-18.00 Web: www.suse.dk\n2000 Frederiksberg Saturday 11.00-17.00 Email: kar@webline.dk\n", "msg_date": "Tue, 22 May 2001 09:37:36 +0200", "msg_from": "Kaare Rasmussen <kar@webline.dk>", "msg_from_op": true, "msg_subject": "Feedback" } ]
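For anyone curious which 7.1 features this note praises, a minimal sketch (table and column names invented) combining all three — an outer join, a view over a join, and a NULL-tolerant function:

```sql
-- Outer join: customers with no orders still appear, padded with NULLs.
SELECT c.name, o.total
  FROM customers c LEFT OUTER JOIN orders o ON o.customer_id = c.id;

-- A view over that join, with COALESCE handling the NULLs in an orderly way.
CREATE VIEW customer_totals AS
  SELECT c.name, COALESCE(o.total, 0) AS total
    FROM customers c LEFT OUTER JOIN orders o ON o.customer_id = c.id;
```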
[ { "msg_contents": "\n> Correct me if I am wrong, but both cases do present a problem currently \n> in 7.1. The WAL log will not remove any WAL files for transactions that \n> are still open (even after a checkpoint occurs). Thus if you do a bulk \n> insert of gigabyte size you will require a gigabyte sized WAL \n> directory. Also if you have a simple OLTP transaction that the user \n> started and walked away from for his one week vacation, then no WAL log \n> files can be deleted until that user returns from his vacation and ends \n> his transaction.\n\nI am not sure, it might be so implemented. But there is no technical reason\nto keep them beyond checkpoint without UNDO.\n\nAndreas\n", "msg_date": "Tue, 22 May 2001 09:54:21 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: Plans for solving the VACUUM problem" } ]
[ { "msg_contents": "\n> As a rule of thumb, online applications that hold open\n> transactions during user interaction are considered to be\n> Broken By Design (tm). So I'd slap the programmer/design\n> team with - let's use the server box since it doesn't contain\n> anything useful.\n\nWe have a database system here, and not an OLTP helper app.\nA database system must support all sorts of mixed usage from simple \nOLTP to OLAP. Imho the usual separation on different servers gives more\nheadaches than are necessary.\n\nThus above statement can imho be true for one OLTP application, but not \nfor all applications on one db server.\n\nAndreas\n", "msg_date": "Tue, 22 May 2001 10:06:54 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: Plans for solving the VACUUM problem" } ]
[ { "msg_contents": " \n> REDO in oracle is done by something known as a 'rollback segment'. \n\nYou are not seriously saying that you like the \"rollback segments\" in Oracle.\nThey only cause trouble: \n1. configuration (for every different workload you need a different config) \n2. snapshot too old \n3. tx abort because rollback segments are full\n4. They use up huge amounts of space (e.g. 20 Gb rollback seg for a 120 Gb SAP)\n\nIf I read the papers correctly Version 9 gets rid of Point 1 but the rest ...\n\nAndreas\n", "msg_date": "Tue, 22 May 2001 10:16:10 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Plans for solving the VACUUM problem" }, { "msg_contents": "Actually I don't like the problems with rollback segments in oracle at \nall. I am just concerned that using WAL for UNDO will have all of the \nsame problems if it isn't designed carefully. At least in oracle's \nrollback segments there are multiple of them, in WAL there is just one, \nthus you will potentially have that 20Gig all in your single log \ndirectory. People are already reporting the log directory growing to a \ngig or more when running vacuum in 7.1.\n\nOf the points you raised about oracle's rollback segment problems:\n\n1. configuration (for every different workload you need a different config) \n\nPostgres should be able to do a better job here.\n\n\n2. snapshot too old \n\nShouldn't be a problem as long as postgres continues to use a non-overwriting storage manager. However under an overwriting storage manager, you need to keep all of the changes in the UNDO records to satisfy the query snapshot, thus if you want to limit the size of UNDO you may need to kill long running queries.\n\n3. tx abort because rollback segments are full\nIf you want to limit the size of the UNDO, then this is a corresponding \nbyproduct. 
I believe a mail note was sent out yesterday suggesting that \nlimits like this be added to the todo list.\n\n4. They use up huge amounts of space (e.g. 20 Gb rollback seg for a 120 Gb SAP)\nYou need to store the UNDO information somewhere. And on active \ndatabases that can amount to a lot of information, especially for bulk \nloads or massive updates.\n\nthanks,\n--Barry\n\n\nZeugswetter Andreas SB wrote:\n\n> \n> \n>> REDO in oracle is done by something known as a 'rollback segment'. \n> \n> \n> You are not seriously saying that you like the \"rollback segments\" in Oracle.\n> They only cause trouble: \n> 1. configuration (for every different workload you need a different config) \n> 2. snapshot too old \n> 3. tx abort because rollback segments are full\n> 4. They use up huge amounts of space (e.g. 20 Gb rollback seg for a 120 Gb SAP)\n> \n> If I read the papers correctly Version 9 gets rid of Point 1 but the rest ...\n> \n> Andreas\n> \n> \n\n", "msg_date": "Tue, 22 May 2001 10:11:50 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: AW: Plans for solving the VACUUM problem" } ]
[ { "msg_contents": "> Todo:\n> \n> 1. Compact log files after checkpoint (save records of uncommitted\n> transactions and remove/archive others).\n\nOn the grounds that undo is not guaranteed anyway (concurrent heap access),\nwhy not simply forget it, since above sounds rather expensive ?\nThe downside would only be, that long running txn's cannot [easily] rollback\nto savepoint.\n\n> 2. Abort long running transactions.\n\nThis is imho \"the\" big downside of UNDO, and should not simply be put on \nthe TODO without thorough research. I think it would be better to forget UNDO for long \nrunning transactions before aborting them.\n\nAndreas\n", "msg_date": "Tue, 22 May 2001 11:27:19 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: Plans for solving the VACUUM problem" } ]
[ { "msg_contents": "\n> Explicit cursor can be declared as:\n> \n> DECLARE\n> ...\n> curname CURSOR [(argname type [, ...])]\n> IS <select_stmt>;\n\nIn esql you would have FOR instead of IS.\n\nDECLARE curname CURSOR ... FOR ....\n\nThus the question, where is the syntax from ?\nThere seems to be a standard for \"the\" SQL stored procedure language:\n\n\"Persistent Stored Module definition of the ANSI SQL99 standard\" (quote from DB/2)\nAnybody know this ?\n\nAndreas\n", "msg_date": "Tue, 22 May 2001 11:32:40 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: PL/pgSQL CURSOR support" }, { "msg_contents": "Definitely it's Oracle's syntax.\n\n\"Zeugswetter Andreas SB \" <ZeugswetterA@wien.spardat.at> wrote in message:\nnews:11C1E6749A55D411A9670001FA6879633682EA@sdexcsrv1.f000.d0188.sd.spardat.\nat...\n>\n> > Explicit cursor can be declared as:\n> >\n> > DECLARE\n> > ...\n> > curname CURSOR [(argname type [, ...])]\n> > IS <select_stmt>;\n>\n> In esql you would have FOR instead of IS.\n>\n> DECLARE curname CURSOR ... FOR ....\n>\n> Thus the question, where is the syntax from ?\n> There seems to be a standard for \"the\" SQL stored procedure language:\n>\n> \"Persistent Stored Module definition of the ANSI SQL99 standard\" (quote\nfrom DB/2)\n> Anybody know this ?\n>\n> Andreas\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n\n\n", "msg_date": "Tue, 22 May 2001 13:54:22 +0400", "msg_from": "\"Sergey E. 
Volkov\" <sve@raiden.bancorp.ru>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL CURSOR support" }, { "msg_contents": "Zeugswetter Andreas SB wrote:\n>\n> > Explicit cursor can be declared as:\n> >\n> > DECLARE\n> > ...\n> > curname CURSOR [(argname type [, ...])]\n> > IS <select_stmt>;\n>\n> In esql you would have FOR instead of IS.\n>\n> DECLARE curname CURSOR ... FOR ....\n>\n> Thus the question, where is the syntax from ?\n\n From the worlds most expens\\b\\b\\b\\b\\b\\b - er - reliable\n commercial database system.\n\n> There seems to be a standard for \"the\" SQL stored procedure language:\n>\n> \"Persistent Stored Module definition of the ANSI SQL99 standard\" (quote from DB/2)\n> Anybody know this ?\n\n The entire PL/pgSQL was written with some compatibility in\n mind. Otherwise FOR loops would look more like\n\n [ <<label>> ]\n FOR <loop_name> AS\n [ EACH ROW OF ] [ CURSOR <cursor_name> FOR ]\n <cursor_specification> DO\n <statements>\n END FOR;\n\n The good thing is that we can have any number of loadable\n procedural languages. It's relatively easy to change the\n PL/pgSQL parser and create some PL/SQL99 handler. As long as\n the symbols in the modules don't conflict, I see no reason\n why we shouldn't.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Tue, 22 May 2001 09:35:14 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: AW: PL/pgSQL CURSOR support" }, { "msg_contents": ">From SQL99 (Note: the 'FOR' keyword seems standard...):\n\n 14 Data manipulation\n\n\n\n 14.1 <declare cursor>\n\n Function\n\n Define a cursor.\n\n Format\n\n <declare cursor> ::=\n DECLARE <cursor name> [ <cursor sensitivity> ]\n [ <cursor scrollability> ] CURSOR\n [ <cursor holdability> ]\n [ <cursor returnability> ]\n FOR <cursor specification>\n\n <cursor sensitivity> ::=\n SENSITIVE\n | INSENSITIVE\n | ASENSITIVE\n\n <cursor scrollability> ::=\n SCROLL\n | NO SCROLL\n\n <cursor holdability> ::=\n WITH HOLD\n | WITHOUT HOLD\n\n <cursor returnability> ::=\n WITH RETURN\n | WITHOUT RETURN\n\n <cursor specification> ::=\n <query expression> [ <order by clause> ]\n [ <updatability clause> ]\n\n <updatability clause> ::=\n FOR { READ ONLY | UPDATE [ OF <column name list> ] }\n\n <order by clause> ::=\n ORDER BY <sort specification list>\n\n <sort specification list> ::=\n <sort specification> [ { <comma> <sort specification> }... 
]\n\n <sort specification> ::=\n <sort key> [ <collate clause> ] [ <ordering specification> ]\n\n <sort key> ::=\n <value expression>\n\n <ordering specification> ::= ASC | DESC\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Zeugswetter\n> Andreas SB\n> Sent: Tuesday, 22 May 2001 5:33 PM\n> To: 'Jan Wieck'; PostgreSQL HACKERS\n> Subject: AW: [HACKERS] PL/pgSQL CURSOR support\n> \n> \n> \n> > Explicit cursor can be declared as:\n> > \n> > DECLARE\n> > ...\n> > curname CURSOR [(argname type [, ...])]\n> > IS <select_stmt>;\n> \n> In esql you would have FOR instead of IS.\n> \n> DECLARE curname CURSOR ... FOR ....\n> \n> Thus the question, where is the syntax from ?\n> There seems to be a standard for \"the\" SQL stored procedure language:\n> \n> \"Persistent Stored Module definition of the ANSI SQL99 standard\" \n> (quote from DB/2)\n> Anybody know this ?\n> \n> Andreas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n", "msg_date": "Wed, 23 May 2001 09:56:29 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: PL/pgSQL CURSOR support" }, { "msg_contents": "\nCan someone comment on the use of FOR/IS in cursors?\n\n[ Charset ISO-8859-1 unsupported, converting... 
]\n> >From SQL99 (Note: the 'FOR' keyword seems standard...):\n> \n> 14 Data manipulation\n> \n> \n> \n> 14.1 <declare cursor>\n> \n> Function\n> \n> Define a cursor.\n> \n> Format\n> \n> <declare cursor> ::=\n> DECLARE <cursor name> [ <cursor sensitivity> ]\n> [ <cursor scrollability> ] CURSOR\n> [ <cursor holdability> ]\n> [ <cursor returnability> ]\n> FOR <cursor specification>\n> \n> <cursor sensitivity> ::=\n> SENSITIVE\n> | INSENSITIVE\n> | ASENSITIVE\n> \n> <cursor scrollability> ::=\n> SCROLL\n> | NO SCROLL\n> \n> <cursor holdability> ::=\n> WITH HOLD\n> | WITHOUT HOLD\n> \n> <cursor returnability> ::=\n> WITH RETURN\n> | WITHOUT RETURN\n> \n> <cursor specification> ::=\n> <query expression> [ <order by clause> ]\n> [ <updatability clause> ]\n> \n> <updatability clause> ::=\n> FOR { READ ONLY | UPDATE [ OF <column name list> ] }\n> \n> <order by clause> ::=\n> ORDER BY <sort specification list>\n> \n> <sort specification list> ::=\n> <sort specification> [ { <comma> <sort specification> }... ]\n> \n> <sort specification> ::=\n> <sort key> [ <collate clause> ] [ <ordering specification> ]\n> \n> <sort key> ::=\n> <value expression>\n> \n> <ordering specification> ::= ASC | DESC\n> \n> Chris\n> \n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Zeugswetter\n> > Andreas SB\n> > Sent: Tuesday, 22 May 2001 5:33 PM\n> > To: 'Jan Wieck'; PostgreSQL HACKERS\n> > Subject: AW: [HACKERS] PL/pgSQL CURSOR support\n> > \n> > \n> > \n> > > Explicit cursor can be declared as:\n> > > \n> > > DECLARE\n> > > ...\n> > > curname CURSOR [(argname type [, ...])]\n> > > IS <select_stmt>;\n> > \n> > In esql you would have FOR instead of IS.\n> > \n> > DECLARE curname CURSOR ... 
FOR ....\n> > \n> > Thus the question, where is the syntax from ?\n> > There seems to be a standard for \"the\" SQL stored procedure language:\n> > \n> > \"Persistent Stored Module definition of the ANSI SQL99 standard\" \n> > (quote from DB/2)\n> > Anybody know this ?\n> > \n> > Andreas\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> > \n> > http://www.postgresql.org/search.mpl\n> > \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 30 May 2001 10:45:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL CURSOR support" }, { "msg_contents": "\nSo FOR is standard ANSI, and IS is Oracle, and we went with Oracle. \nShould we allow both?\n\n\n> Bruce Momjian wrote:\n> >\n> > Can someone comment on the use of FOR/IS in cursors?\n> >\n> \n> DECLARE <name> CURSOR IS <select_stmt> is the Oracle PL/SQL\n> syntax. Since PL/pgSQL was written from the start with one\n> eye on portability from/to Oracle, I'd like to stick with\n> that.\n> \n> It's relatively simple to just substitute all PLpgSQL (and\n> other case combos) occurences by something else, then replace\n> the gram.y and scan.l files with whatever you want and voila,\n> you come up with another procedural language as compatible as\n> possible to your formerly preferred database. There is no\n> reason other than that we'll have more PL handlers to\n> support, why we shouldn't have two or three different\n> procedural SQL dialects. 
All can coexist and only those used\n> in your DB schema will get loaded.\n> \n> \n> Jan\n> \n> --\n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. #\n> #================================================== JanWieck@Yahoo.com #\n> \n> \n> \n> _________________________________________________________\n> Do You Yahoo!?\n> Get your free @yahoo.com address at http://mail.yahoo.com\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 30 May 2001 12:15:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL CURSOR support" }, { "msg_contents": "Bruce Momjian wrote:\n>\n> Can someone comment on the use of FOR/IS in cursors?\n>\n\n DECLARE <name> CURSOR IS <select_stmt> is the Oracle PL/SQL\n syntax. Since PL/pgSQL was written from the start with one\n eye on portability from/to Oracle, I'd like to stick with\n that.\n\n It's relatively simple to just substitute all PLpgSQL (and\n other case combos) occurences by something else, then replace\n the gram.y and scan.l files with whatever you want and voila,\n you come up with another procedural language as compatible as\n possible to your formerly preferred database. There is no\n reason other than that we'll have more PL handlers to\n support, why we shouldn't have two or three different\n procedural SQL dialects. All can coexist and only those used\n in your DB schema will get loaded.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 30 May 2001 12:17:29 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL CURSOR support" }, { "msg_contents": "Jan Wieck writes:\n\n> There is no\n> reason other than that we'll have more PL handlers to\n> support,\n\n... which is a pretty big reason ...\n\n> why we shouldn't have two or three different\n> procedural SQL dialects. All can coexist and only those used\n> in your DB schema will get loaded.\n\nOr you can make one PL support alternative, non-conflicting dialects.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 30 May 2001 20:30:51 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL CURSOR support" }, { "msg_contents": "On Wed, 30 May 2001, Bruce Momjian wrote:\n>\n> So FOR is standard ANSI, and IS is Oracle, and we went with Oracle.\n> Should we allow both?\n>\n\nI have the opinion that PostgreSQL should always support ANSI first and\n*then* go ahead and add the features that are becoming the standard due\nto inclusion in a major database system like Oracle. As such, I think it\nwould be wise to allow both.\n\n-- \n//========================================================\\\\\n|| D. Hageman <dhageman@dracken.com> ||\n\\\\========================================================//\n\n", "msg_date": "Wed, 30 May 2001 13:33:33 -0500 (CDT)", "msg_from": "\"D. Hageman\" <dhageman@dracken.com>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL CURSOR support" }, { "msg_contents": "> DECLARE <name> CURSOR IS <select_stmt> is the Oracle PL/SQL\n> syntax. 
Since PL/pgSQL was written from the start with one\n> eye on portability from/to Oracle, I'd like to stick with\n> that.\n> \n> It's relatively simple to just substitute all PLpgSQL (and\n> other case combos) occurences by something else, then replace\n> the gram.y and scan.l files with whatever you want and voila,\n> you come up with another procedural language as compatible as\n> possible to your formerly preferred database. There is no\n> reason other than that we'll have more PL handlers to\n> support, why we shouldn't have two or three different\n> procedural SQL dialects. All can coexist and only those used\n> in your DB schema will get loaded.\n\nOK, how about this patch that allows both FOR and IS. Seems like a\ngood idea, and we can document FOR.\n\nAlso, I don't see any documentation on the new plpgsql cursor support.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/pl/plpgsql/src/gram.y\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/pl/plpgsql/src/gram.y,v\nretrieving revision 1.19\ndiff -c -r1.19 gram.y\n*** src/pl/plpgsql/src/gram.y\t2001/05/21 14:22:18\t1.19\n--- src/pl/plpgsql/src/gram.y\t2001/05/30 20:05:03\n***************\n*** 355,361 ****\n \t\t\t\t\t{\n \t\t\t\t\t\tplpgsql_ns_rename($2, $4);\n \t\t\t\t\t}\n! \t\t\t\t| decl_varname K_CURSOR decl_cursor_args K_IS K_SELECT decl_cursor_query\n \t\t\t\t\t{\n \t\t\t\t\t\tPLpgSQL_var *new;\n \t\t\t\t\t\tPLpgSQL_expr *curname_def;\n--- 355,361 ----\n \t\t\t\t\t{\n \t\t\t\t\t\tplpgsql_ns_rename($2, $4);\n \t\t\t\t\t}\n! 
\t\t\t\t| decl_varname K_CURSOR decl_cursor_args decl_is_from K_SELECT decl_cursor_query\n \t\t\t\t\t{\n \t\t\t\t\t\tPLpgSQL_var *new;\n \t\t\t\t\t\tPLpgSQL_expr *curname_def;\n***************\n*** 499,505 ****\n \t\t\t\t\t\tplpgsql_ns_push(NULL);\n \t\t\t\t\t}\n \t\t\t\t;\n! \t\t\t\t\n \n decl_aliasitem\t: T_WORD\n \t\t\t\t\t{\n--- 499,507 ----\n \t\t\t\t\t\tplpgsql_ns_push(NULL);\n \t\t\t\t\t}\n \t\t\t\t;\n! \n! decl_is_from\t:\tK_IS |\n! \t\t\t\t\tK_FOR;\n \n decl_aliasitem\t: T_WORD\n \t\t\t\t\t{", "msg_date": "Wed, 30 May 2001 16:09:07 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL CURSOR support" }, { "msg_contents": "Peter Eisentraut wrote:\n> Jan Wieck writes:\n>\n> > There is no\n> > reason other than that we'll have more PL handlers to\n> > support,\n>\n> ... which is a pretty big reason ...\n>\n> > why we shouldn't have two or three different\n> > procedural SQL dialects. All can coexist and only those used\n> > in your DB schema will get loaded.\n>\n> Or you can make one PL support alternative, non-conflicting dialects.\n\nHmmm,\n\n    combining it, we need a place to tell the language handler\n    about its personality. So the handler in plpgsql.so could\n    serve more than one dialect and just jump into different\n    gram.y paths. Note that it already does kinda that by faking\n    a first token telling if actually a function or trigger gets\n    compiled.\n\n    Will sleep over that idea.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 30 May 2001 16:09:57 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL CURSOR support" }, { "msg_contents": "Jan has approved the following patch that allows both FOR and IS for\nPL/PgSQL cursors.\n\n> > DECLARE <name> CURSOR IS <select_stmt> is the Oracle PL/SQL\n> > syntax. Since PL/pgSQL was written from the start with one\n> > eye on portability from/to Oracle, I'd like to stick with\n> > that.\n> > \n> > It's relatively simple to just substitute all PLpgSQL (and\n> > other case combos) occurences by something else, then replace\n> > the gram.y and scan.l files with whatever you want and voila,\n> > you come up with another procedural language as compatible as\n> > possible to your formerly preferred database. There is no\n> > reason other than that we'll have more PL handlers to\n> > support, why we shouldn't have two or three different\n> > procedural SQL dialects. All can coexist and only those used\n> > in your DB schema will get loaded.\n> \n> OK, how about this patch that allows both FOR and IS. Seems like a\n> good idea, and we can document FOR.\n> \n> Also, I don't see any documentation on the new plpgsql cursor support.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\nIndex: src/pl/plpgsql/src/gram.y\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/pl/plpgsql/src/gram.y,v\nretrieving revision 1.19\ndiff -c -r1.19 gram.y\n*** src/pl/plpgsql/src/gram.y\t2001/05/21 14:22:18\t1.19\n--- src/pl/plpgsql/src/gram.y\t2001/05/30 20:05:03\n***************\n*** 355,361 ****\n \t\t\t\t\t{\n \t\t\t\t\t\tplpgsql_ns_rename($2, $4);\n \t\t\t\t\t}\n! \t\t\t\t| decl_varname K_CURSOR decl_cursor_args K_IS K_SELECT decl_cursor_query\n \t\t\t\t\t{\n \t\t\t\t\t\tPLpgSQL_var *new;\n \t\t\t\t\t\tPLpgSQL_expr *curname_def;\n--- 355,361 ----\n \t\t\t\t\t{\n \t\t\t\t\t\tplpgsql_ns_rename($2, $4);\n \t\t\t\t\t}\n! \t\t\t\t| decl_varname K_CURSOR decl_cursor_args decl_is_from K_SELECT decl_cursor_query\n \t\t\t\t\t{\n \t\t\t\t\t\tPLpgSQL_var *new;\n \t\t\t\t\t\tPLpgSQL_expr *curname_def;\n***************\n*** 499,505 ****\n \t\t\t\t\t\tplpgsql_ns_push(NULL);\n \t\t\t\t\t}\n \t\t\t\t;\n! \t\t\t\t\n \n decl_aliasitem\t: T_WORD\n \t\t\t\t\t{\n--- 499,507 ----\n \t\t\t\t\t\tplpgsql_ns_push(NULL);\n \t\t\t\t\t}\n \t\t\t\t;\n! \n! decl_is_from\t:\tK_IS |\t\t/* Oracle */\n! \t\t\t\t\tK_FOR;\t\t/* ANSI */\n \n decl_aliasitem\t: T_WORD\n \t\t\t\t\t{", "msg_date": "Thu, 31 May 2001 13:16:15 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL CURSOR support" } ]
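The patch approved above is tiny — a single `decl_is_from` grammar rule — but it means a function author can spell a bound cursor either way. A minimal sketch of what that looks like from the PL/pgSQL side (the function name and counting logic are illustrative, not taken from the thread, and written in the era's quoted-body style; `FETCH ... INTO` setting `FOUND` follows the cursor support Jan describes):

```sql
CREATE FUNCTION count_relations() RETURNS integer AS '
DECLARE
    -- Oracle-style spelling, as PL/pgSQL accepted originally:
    c1 CURSOR IS SELECT relname FROM pg_class;
    -- ANSI-style spelling, accepted once decl_is_from allows K_FOR:
    c2 CURSOR FOR SELECT relname FROM pg_class;
    r RECORD;
    n integer := 0;
BEGIN
    OPEN c1;
    LOOP
        FETCH c1 INTO r;
        EXIT WHEN NOT FOUND;
        n := n + 1;
    END LOOP;
    CLOSE c1;
    RETURN n;
END;
' LANGUAGE 'plpgsql';
```

Both declarations parse to the same thing; `c2` is declared only to show the ANSI spelling side by side with the Oracle one.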
[ { "msg_contents": "> However, just remember that pg_class already has a row count that we\n> force in there by default.\n\n> I was just suggesting we make that accurate if we can, even if we can\n> make it accurate only 80% of the time. Once we INSERT, it isn't\n> accurate anymore anyway. This is just an estimate, and in my mind, it\n> doesn't have to be accurate in all cases.\n\nActually I think the accuracy of db stats is often over estimated.\nFor installed OLTP applications the most important thing is, that\nquery plans are predictable. They do not even need to be optimal, \nthey only need to deliver an expected performance.\n\nI actually do get perfect query plans without any stats, because our\nindexes are perfectly matched to our statements, and in two cases we tuned\nthe sql appropriately (2 of >200 statements with Informix optimizer hints). For such a \ncondition you actually want a rule based optimizer. The current default values during \ncreate table are more or less chosen to give exactly this \"rule based\" behavior. \nThe trouble is, that after the first implicitly created stats,\nthe optimizer goes completely bananas, because now he thinks that one table has 1000 \n(the default) rows (it actually has 10000000), but the other has 100000 and the optimizer now\nknows that and chooses a different plan. And just because you copy a few rows ?\n\nAndreas\n", "msg_date": "Tue, 22 May 2001 14:31:19 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: Is stats update during COPY IN really a good id\n\tea?" }, { "msg_contents": "> I actually do get perfect query plans without any stats, because\n> our indexes are perfectly matched to our statements, and in two\n> cases we tuned the sql appropriately (2 of >200 statements with\n> Informix optimizer hints). For such a condition you actually\n> want a rule based optimizer. 
The current default values during\n> create table are more or less chosen to give exactly this \"rule\n> based\" behavior. The trouble is, that after the first implicitly\n> created stats, the optimizer goes completely bananas, because\n> now he thinks that one table has 1000 (the default) rows (it\n> actually has 10000000), but the other has 100000 and the optimizer\n> now knows that and chooses a different plan. And just because\n> you copy a few rows ?\n\nOh, that is interesting. You didn't explicitly ask for stats, but got\nthem anyway and that caused a problem.\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 22 May 2001 08:40:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: AW: Is stats update during COPY IN really a good id\n ea?" } ]
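The row counts Andreas and Bruce are arguing about are ordinary catalog columns, so the effect is easy to observe from SQL. A small sketch (the table names are hypothetical) of querying the estimates the planner reads from pg_class:

```sql
-- relpages/reltuples are the planner's stored estimates; a freshly
-- created table carries default values until VACUUM (or, per this
-- thread, an implicit stats update during COPY) replaces them.
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname IN ('big_table', 'small_table');
```

Comparing these numbers before and after a bulk COPY shows exactly the plan-stability hazard described above.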
[ { "msg_contents": "Off topic, but I thought this was interesting. This is information\nabout compiling OpenOffice. Hope PostgreSQL never gets to be this size:\n\n http://www.openoffice.org/FAQs/build_faq.html\n\n---------------------------------------------------------------------------\n\n\tHow much hard drive space needed for a full build of OpenOffice\n\tincluding source? \n\n\tThe current recommendation is 3GB.\n\n---------------------------------------------------------------------------\n\t\n\tHow long does an OpenOffice build take? \n\t\n\tOur current experience is that a full build of OpenOffice is\n\tapproximately 20 hours on a single CPU Pentium III with 256MB of RAM\n\trunning Linux. \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 22 May 2001 10:05:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "OpenOffice compile" }, { "msg_contents": "In article <200105221405.f4ME5OG13140@candle.pha.pa.us>, \"Bruce Momjian\"\n<pgman@candle.pha.pa.us> wrote:\n\n> Off topic, but I thought this was interesting. This is information\n> about compiling OpenOffice. Hope PostgreSQL never gets to be this size:\n> \n> http://www.openoffice.org/FAQs/build_faq.html\n\n\tIMHO, basing an open project on 7.6M lines of code that can only \ncompiled on four or five different architectures isn't a policy that\nanyone should emulate.\n", "msg_date": "Tue, 22 May 2001 19:57:03 GMT", "msg_from": "\"Craig Orsinger\" <orsingerc@epg.lewis.army_mil.invalid>", "msg_from_op": false, "msg_subject": "Re: OpenOffice compile" }, { "msg_contents": "On Mar 22 May 2001 22:57, you wrote:\n> In article <200105221405.f4ME5OG13140@candle.pha.pa.us>, \"Bruce Momjian\"\n>\n> <pgman@candle.pha.pa.us> wrote:\n> > Off topic, but I thought this was interesting. 
This is information\n> > about compiling OpenOffice. Hope PostgreSQL never gets to be this size:\n> >\n> > http://www.openoffice.org/FAQs/build_faq.html\n>\n> \tIMHO, basing an open project on 7.6M lines of code that can only\n> compiled on four or five different architectures isn't a policy that\n> anyone should emulate.\n\nI think that the problem with OpenOffice is that it comes from StarOffice. \nLast time I downloaded SO was 5.2 and had over 70MB of binary, compressed. \nAnyway, I don't think you can get it smaller, but, as the people of \nOpenOffice are doing, you can always split it up, modularize it, so it is \neasy to make changes.\n\nP.S.: The Linux kernel has quite an amount of lines of code, but the developers \nhave no problem finding bugs and all the stuff.\n\nRegards.... :-)\n\n-- \nAnyone can administer an NT.\nThat's the problem: that anyone administers it.\n-----------------------------------------------------------------\nMartin Marques | mmarques@unl.edu.ar\nProgrammer, Administrator | Centro de Telematica\n Universidad Nacional\n del Litoral\n-----------------------------------------------------------------\n", "msg_date": "Mon, 28 May 2001 11:07:10 +0300", "msg_from": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <martin@bugs.unl.edu.ar>", "msg_from_op": false, "msg_subject": "Re: Re: OpenOffice compile" }, { "msg_contents": "\"Craig Orsinger\" <orsingerc@epg.lewis.army_mil.invalid> writes:\n\n> In article <200105221405.f4ME5OG13140@candle.pha.pa.us>, \"Bruce Momjian\"\n> <pgman@candle.pha.pa.us> wrote:\n> \n> > Off topic, but I thought this was interesting. This is information\n> > about compiling OpenOffice. Hope PostgreSQL never gets to be this size:\n> > \n> > http://www.openoffice.org/FAQs/build_faq.html\n> \n> \tIMHO, basing an open project on 7.6M lines of code that can only \n> compiled on four or five different architectures\n\nThat many? 
OpenOffice hardly compiles anywhere, and it needs an\n_exact_ version of the compiler as it tries to do its own exception\nhandling. Very strange.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "28 May 2001 10:08:06 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Re: OpenOffice compile" }, { "msg_contents": "On Mon, May 28, 2001 at 10:08:06AM -0400, Trond Eivind Glomsrød wrote:\n> > \tIMHO, basing an open project on 7.6M lines of code that can only \n> > compiled on four or five different architectures\n> \n> That many? OpenOffice hardly compiles anywhere, and it needs an\n> _exact_ version of the compiler as it tries to do its own exception\n> handling. Very strange.\n\ni believe that Mozilla suffered from a similar fate when released by\nNetscape.\n\nwhile i don't know a lot of people using Mozilla directly, i am aware of a\nnumber of offshoot applications that are using the base source code.\n\ni would agree that source-suites with 10 million lines of code are rather\nunwieldy.\n\nhowever, i fully support the efforts of any group that tries to make it\na more sane thing.\n\nOpenOffice/StarOffice/whatever would be a wonderful boon to the OSS movement.\n\nit would be nice if the database engine behind it was a full-fledged SQL\nsystem like postgres.\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ Now with more and longer words for your reading enjoyment. ]\n", "msg_date": "Mon, 28 May 2001 12:12:53 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": false, "msg_subject": "Re: Re: OpenOffice compile" } ]
[ { "msg_contents": "One other data point:\n\n---------------------------------------------------------------------------\n\t\n\tHow much source code is there? \n\t\n\tOpenOffice source will have approx. 20,000 source files. \n\t\n\tOpenOffice will have approx. 7,600,000 lines of code. The majority of\n\tthe code is C++. \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 22 May 2001 10:08:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Open Office" } ]
[ { "msg_contents": "I asked this question on General/Questions about a week ago, I've done \na bit more testing since, but still can't get things to work properly. \nNoone answered on the other group and a search of the archives did not \nturn up anything even remotely similar.\n\nSystem:\n\nSolaris 7, patched through 01May2001\ngcc 2.95.3 release\nautomake 1.4\nautoconf 2.13\nreadline 4.2\nopenssl 0.9.6a\nGNU ld 2.11\nGNU make 3.79.1\n\n./configure options:\n--sysconfdir=/etc\n--docdir=/usr/share/doc\n--mandir=/usr/share/man\n--enable-debug\n--enable-depend\n--enable-cassert\n--with-perl\n--with-openssl\n--enable-odbc\n--with-gnu-ld\n--enable-syslog\n\nI have also tried changing --with-perl to --without-perl and removing -\n-with-gnu-ld (changing my path to point to /usr/ccs/bin/ld and \n/usr/ucb/ld and rebuilding after each change to use the different ld).\n\nI also have tried both the GNU make and the included Sun make.\n\nI have successfully built and can use:\nApache 1.3.19 with SSL\nSnort 1.7 with SSL\nPHP 4.05 with SSL\nOpenSSH v2.9p1\nPerl v5.6.1\n\nHowever, when I build out PostgreSQL (for the express purpose of using\nit in conjunction with Snort and the Advanced Console for Intrusion\nDatabases (ACID), I get an error on starting httpd that says:\n\nroot@slowlaris:/# httpsdctl start\nSyntax error on line 307 of /etc/apache/httpd.conf:\nCannot load /usr/apache/libexec/libphp4.so into server: ld.so.1:\n /usr/apache/bin/httpsd: fatal: relocation error:\n file /usr/local/pgsql/lib/libpq.so.2: symbol main:\n referenced symbol not\nfound /usr/apache/bin/httpsdctl start: httpsd could not be started\nroot@slowlaris:/#\n\nSo I did an ld test:\n\nroot@slowlaris:/# /usr/ucb/ld -lpq\nUndefined first referenced\n symbol in file\nmain /usr/local/pgsql/lib/libpq.so\nld: fatal: Symbol referencing errors. 
No output written to a.out\n\n(also happens with /usr/local/bin/ld (GNU) and /usr/ccs/bin/ld)\n \nI have tried back versions to v7.0 and up versions to the latest CVS,\nall give this error on running after compilation. I did double check \nand copy libpq.so.2.1 and it's sym-links into /usr/lib to see if the \nbinary was hard coding the path - no luck.\n\nPathing is correct, I can execute psql from the prompt and init /\ninstall a working database for Snort to use, but even though I can \nquery directly and see that the database is accepting data and storing \nit correctly.\n\nIt appears to only be this library that is not complete.\n\nAny ideas on what I might try to fix this? Am I missing something \nblindingly obvious?\n\n- Ed\n", "msg_date": "Tue, 22 May 2001 17:30:27 +0000 (UTC)", "msg_from": "evazquez@inflow.com (E A Vazquez Jr)", "msg_from_op": true, "msg_subject": "unable to use pgSQL due to undefined symbol" } ]
[ { "msg_contents": "\nftp://ftp.postgresql.org/pub/source/v7.1.2 ...\n\nJust want a second opinion before I announce more publicly ...\n\nThanks ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Tue, 22 May 2001 14:31:49 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Not released yet, but could someone take a quick peak ..." }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> ftp://ftp.postgresql.org/pub/source/v7.1.2 ...\n> \n> Just want a second opinion before I announce more publicly ...\n\nI'd check. But the postgresql ftp site appears to be broken for the past\nfew days.\n\n-- \nKarl\n", "msg_date": "Tue, 22 May 2001 13:53:16 -0400", "msg_from": "Karl DeBisschop <kdebisschop@alert.infoplease.com>", "msg_from_op": false, "msg_subject": "Re: Not released yet, but could someone take a quick peak ..." }, { "msg_contents": "\nbroken how? I just connected into it ...\n\nOn Tue, 22 May 2001, Karl DeBisschop wrote:\n\n> The Hermit Hacker wrote:\n> >\n> > ftp://ftp.postgresql.org/pub/source/v7.1.2 ...\n> >\n> > Just want a second opinion before I announce more publicly ...\n>\n> I'd check. But the postgresql ftp site appears to be broken for the past\n> few days.\n>\n> --\n> Karl\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Tue, 22 May 2001 15:33:55 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Not released yet, but could someone take a quick peak\n ..." }, { "msg_contents": "The Hermit Hacker writes:\n\n> ftp://ftp.postgresql.org/pub/source/v7.1.2 ...\n>\n> Just want a second opinion before I announce more publicly ...\n\nIt contains the 7.2 branch documentation set. 
No go...\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 22 May 2001 21:46:00 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Not released yet, but could someone take a quick peak\n ..." }, { "msg_contents": "> \n> broken how? I just connected into it ...\n\n\nMy guess is that he is seeing the ftp limit. I hit it too and could not\nlook at the URL.\n\n\n> \n> On Tue, 22 May 2001, Karl DeBisschop wrote:\n> \n> > The Hermit Hacker wrote:\n> > >\n> > > ftp://ftp.postgresql.org/pub/source/v7.1.2 ...\n> > >\n> > > Just want a second opinion before I announce more publicly ...\n> >\n> > I'd check. But the postgresql ftp site appears to be broken for the past\n> > few days.\n> >\n> > --\n> > Karl\n> >\n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 22 May 2001 15:53:12 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Not released yet, but could someone take a quick peak\n ..." }, { "msg_contents": "On Tue, 22 May 2001, The Hermit Hacker wrote:\n\n> > > ftp://ftp.postgresql.org/pub/source/v7.1.2 ...\n> > >\n> > > Just want a second opinion before I announce more publicly ...\n> >\n> > I'd check. 
But the postgresql ftp site appears to be broken for the past\n> > few days.\n\nMy mirror is up to sync for anyone who needs it.\n\nftp://ftp.crimelabs.net/pub\n\n- brandon\n\nb. palmer, bpalmer@crimelabs.net\npgp: www.crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Tue, 22 May 2001 16:29:51 -0400 (EDT)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": false, "msg_subject": "Re: Not released yet, but could someone take a quick peak\n ..." }, { "msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n>> I'd check. But the postgresql ftp site appears to be broken for the past\n>> few days.\n>\n> broken how? I just connected into it ...\n\nMaybe you did, but no one else can. ftp.postgresql.org has said\n\n530- The maximum number of concurrent connections\n530- has been reached.\n\nevery time I've tried it for the last several weeks. The mirrors seem\nto have a hard time getting in, too --- digex doesn't have 7.1.2 yet,\nfor example.\n\ncrimelabs has it, however ... am downloading that copy now, will report\nback.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 May 2001 11:32:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Not released yet, but could someone take a quick peak ... " }, { "msg_contents": "On Wed, 23 May 2001, Tom Lane wrote:\n\n> every time I've tried it for the last several weeks. The mirrors seem\n> to have a hard time getting in, too --- digex doesn't have 7.1.2 yet,\n> for example.\n\nDoes digex use rsync? That's how i'm getting the files, so it shouldn't\ncare about ftp access..\n\n- brandon\n\n\nb. palmer, bpalmer@crimelabs.net\npgp: www.crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Wed, 23 May 2001 11:57:25 -0400 (EDT)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": false, "msg_subject": "Re: Not released yet, but could someone take a quick peak\n ..." 
}, { "msg_contents": "\nall mirrors use rsync to update their code, and all of those that are\nlisted at www.postgresql.org, both ftp and www, are no more then 2 days\nold (Vince, it is two days we set it at, right?) ...\n\n\nOn Wed, 23 May 2001, bpalmer wrote:\n\n> On Wed, 23 May 2001, Tom Lane wrote:\n>\n> > every time I've tried it for the last several weeks. The mirrors seem\n> > to have a hard time getting in, too --- digex doesn't have 7.1.2 yet,\n> > for example.\n>\n> Does digex use rsync? That's how i'm getting the files, so it shouldn't\n> care about ftp access..\n>\n> - brandon\n>\n>\n> b. palmer, bpalmer@crimelabs.net\n> pgp: www.crimelabs.net/bpalmer.pgp5\n>\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Wed, 23 May 2001 13:21:48 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Not released yet, but could someone take a quick peak\n ..." }, { "msg_contents": "\nwhich ones should I pull in? the ones in ~/ftp/pub/doc/7.1? or is there\nnewer along that tree that we need to generate?\n\nOn Tue, 22 May 2001, Peter Eisentraut wrote:\n\n> The Hermit Hacker writes:\n>\n> > ftp://ftp.postgresql.org/pub/source/v7.1.2 ...\n> >\n> > Just want a second opinion before I announce more publicly ...\n>\n> It contains the 7.2 branch documentation set. No go...\n>\n> --\n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n>\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Wed, 23 May 2001 13:23:02 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Not released yet, but could someone take a quick peak\n ..." 
}, { "msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> ftp://ftp.postgresql.org/pub/source/v7.1.2 ...\n> Just want a second opinion before I announce more publicly ...\n\nLooks OK from here except for the problem Peter already noted:\nthe tarred-up doc files were created from development tip, not 7.1\nbranch.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 May 2001 13:09:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Not released yet, but could someone take a quick peak ... " }, { "msg_contents": "On Wed, 23 May 2001, The Hermit Hacker wrote:\n\n>\n> all mirrors use rsync to update their code, and all of those that are\n> listed at www.postgresql.org, both ftp and www, are no more then 2 days\n> old (Vince, it is two days we set it at, right?) ...\n\nYup.\n\n>\n>\n> On Wed, 23 May 2001, bpalmer wrote:\n>\n> > On Wed, 23 May 2001, Tom Lane wrote:\n> >\n> > > every time I've tried it for the last several weeks. The mirrors seem\n> > > to have a hard time getting in, too --- digex doesn't have 7.1.2 yet,\n> > > for example.\n> >\n> > Does digex use rsync? That's how i'm getting the files, so it shouldn't\n> > care about ftp access..\n> >\n> > - brandon\n> >\n> >\n> > b. palmer, bpalmer@crimelabs.net\n> > pgp: www.crimelabs.net/bpalmer.pgp5\n> >\n> >\n>\n> Marc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 23 May 2001 13:20:36 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Not released yet, but could someone take a quick peak\n ..." }, { "msg_contents": "The Hermit Hacker writes:\n\n> which ones should I pull in? the ones in ~/ftp/pub/doc/7.1? or is there\n> newer along that tree that we need to generate?\n\nYou can take\n\n~petere/pg71/pgsql/doc/src/postgres.tar.gz\n\nThe man.tar.gz from the ftp directory is okay.\n\n>\n> On Tue, 22 May 2001, Peter Eisentraut wrote:\n>\n> > The Hermit Hacker writes:\n> >\n> > > ftp://ftp.postgresql.org/pub/source/v7.1.2 ...\n> > >\n> > > Just want a second opinion before I announce more publicly ...\n> >\n> > It contains the 7.2 branch documentation set. No go...\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 23 May 2001 20:12:36 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Not released yet, but could someone take a quick peak\n ..." 
}, { "msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> which ones should I pull in? the ones in ~/ftp/pub/doc/7.1? or is there\n> newer along that tree that we need to generate?\n\nI think there are some doc fixes in the REL7_1 branch, so generating\nfrom the end of that branch would be nicest.\n\nReal question is why the build procedure didn't get this right\nautomatically ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 May 2001 15:50:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Not released yet, but could someone take a quick peak ... " }, { "msg_contents": "\nSorry for delay ... just built using the .tar.gz file as below if someone\nwants to check it to confirm ... for those with shell access on the\nserver, just ftp to it and login as yourself, and cd to\n~pgsql/ftp/pub/source/v7.1.2 ...\n\nif all looks well, I'll put out a more general announce later this evening\nafter some of the mirrors have had a chance to download ...\n\nOn Wed, 23 May 2001, Peter Eisentraut wrote:\n\n> The Hermit Hacker writes:\n>\n> > which ones should I pull in? the ones in ~/ftp/pub/doc/7.1? or is there\n> > newer along that tree that we need to generate?\n>\n> You can take\n>\n> ~petere/pg71/pgsql/doc/src/postgres.tar.gz\n>\n> The man.tar.gz from the ftp directory is okay.\n>\n> >\n> > On Tue, 22 May 2001, Peter Eisentraut wrote:\n> >\n> > > The Hermit Hacker writes:\n> > >\n> > > > ftp://ftp.postgresql.org/pub/source/v7.1.2 ...\n> > > >\n> > > > Just want a second opinion before I announce more publicly ...\n> > >\n> > > It contains the 7.2 branch documentation set. No go...\n>\n> --\n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n>\n>\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Thu, 24 May 2001 13:44:10 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Not released yet, but could someone take a quick peak\n ..." }, { "msg_contents": "Looks good for me:\nLinux x86, 2.2.19, compiled with --enable-locale\nAll 76 tests passed.\n\n\tOleg\nOn Thu, 24 May 2001, The Hermit Hacker wrote:\n\n>\n> Sorry for delay ... just built using the .tar.gz file as below if someone\n> wants to check it to confirm ... for those with shell access on the\n> server, just ftp to it and login as yourself, and cd to\n> ~pgsql/ftp/pub/source/v7.1.2 ...\n>\n> if all looks well, I'll put out a more general announce later this evening\n> after some of the mirrors have had a chance to download ...\n>\n> On Wed, 23 May 2001, Peter Eisentraut wrote:\n>\n> > The Hermit Hacker writes:\n> >\n> > > which ones should I pull in? the ones in ~/ftp/pub/doc/7.1? or is there\n> > > newer along that tree that we need to generate?\n> >\n> > You can take\n> >\n> > ~petere/pg71/pgsql/doc/src/postgres.tar.gz\n> >\n> > The man.tar.gz from the ftp directory is okay.\n> >\n> > >\n> > > On Tue, 22 May 2001, Peter Eisentraut wrote:\n> > >\n> > > > The Hermit Hacker writes:\n> > > >\n> > > > > ftp://ftp.postgresql.org/pub/source/v7.1.2 ...\n> > > > >\n> > > > > Just want a second opinion before I announce more publicly ...\n> > > >\n> > > > It contains the 7.2 branch documentation set. No go...\n> >\n> > --\n> > Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n> >\n> >\n>\n> Marc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Thu, 24 May 2001 20:57:04 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: Not released yet, but could someone take a quick peak\n ..." }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Thursday 24 May 2001 12:44, The Hermit Hacker wrote:\n> Sorry for delay ... just built using the .tar.gz file as below if someone\n> wants to check it to confirm ... for those with shell access on the\n> server, just ftp to it and login as yourself, and cd to\n> ~pgsql/ftp/pub/source/v7.1.2 ...\n\nI much prefer the line:\nscp user@server.org:~pgsql/ftp/pub/source/v7.1.2/* .\nmyself. :-) Not to mention that ftp passwords are sent over the wire in \ncleartext :-<. And if you set up RSA authentication, you don't even need to \ntype in your password, allowing automated mirroring in a secure fashion.\n\nIncidentally, that line works with the proper substitutions. 
Looking at the \ntarball now....\n- --\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.4 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7DU5V5kGGI8vV9eERApAZAKCQCCkUGZBlHRDIvMPqoc4PbfNoPwCg2NJA\nELm2n3GNhYkKjMf47xJ1cxw=\n=gYI/\n-----END PGP SIGNATURE-----\n", "msg_date": "Thu, 24 May 2001 14:09:23 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Not released yet, but could someone take a quick peak ..." }, { "msg_contents": "> I much prefer the line:\n> scp user@server.org:~pgsql/ftp/pub/source/v7.1.2/* .\n> myself. :-) Not to mention that ftp passwords are sent over the wire in \n> cleartext :-<. And if you set up RSA authentication, you don't \n> even need to \n> type in your password, allowing automated mirroring in a secure fashion.\n> \n> Incidentally, that line works with the proper substitutions. \n\nReally? I usually have to do this:\n\nscp user@server.org:~pgsql/ftp/pub/source/v7.1.2/\\* .\n\nChris\n\n", "msg_date": "Fri, 25 May 2001 09:45:36 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: Not released yet, but could someone take a quick peak ..." }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Thursday 24 May 2001 21:45, Christopher Kings-Lynne wrote:\n> Lamar Owen wrote:\n> > I much prefer the line:\n> > scp user@server.org:~pgsql/ftp/pub/source/v7.1.2/* .\n\n> > Incidentally, that line works with the proper substitutions.\n\n> Really? I usually have to do this:\n\n> scp user@server.org:~pgsql/ftp/pub/source/v7.1.2/\\* .\n\nThe unadorned '*' was one of the things covered by the 'proper substitutions' \nclause above.... 
:-) As I only grab the main tarball, I would do a sub for \nthe * as 'postgresql-7.1.2.tar.gz'\n- --\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.4 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7Dq3S5kGGI8vV9eERAkaHAKDG36gmrkDrMjvOUcBea0llUszziACeN14F\nsaMgPMkSLH8ayNCzB/k/+xU=\n=up1e\n-----END PGP SIGNATURE-----\n", "msg_date": "Fri, 25 May 2001 15:09:02 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Not released yet, but could someone take a quick peak ..." } ]
[ { "msg_contents": "Hi you all:\n\nI�ve got some kind of a problem in the deploy of my perl client.\nMy environment is the following\nIhave a Solaris 7 web server in the internet, powered by apache and outside the local net.\n>From it I can access, through a hole in the firewall to my PostgreSQL 7.0.2 (yes, I�d better migrate cause it�s illusionating) server (Solaris 8) in the local net, and the requests can only come \nfrom the local web and from the web server, so it works extremely fine with JSP, Java and so on (wow!).\nThe problem starts when I want my Perl CGI's to work in this environment, because i do not want to install more copies of Postgres in my not-very-much-free-space web server. (that�s why I \nhave my own DB-server).\nInstallation procedure tells me that I have to specify where is my Postgres installation, but I�ve got no installation!\nCan anyone tell me what I have to do to configure a pure client with no server capabilities? or it�s unpossible? or takes so long effort that it�s not worth?\n\nThank you.\n\nManuel SEDANO CADIN~ANOS\nmsedano@virtualcom.es\n\n\n", "msg_date": "Tue, 22 May 2001 17:54:04 GMT", "msg_from": "Manuel SEDANO <msedano@virtualcom.es>", "msg_from_op": true, "msg_subject": "Configurating perl access to a separate Postgres Server" } ]
[ { "msg_contents": "\nThis timezone fix was applied to jdbc2 in 7.1.2 by Thomas. Do you want\nthis in jdbc1 also? I am not recommending it, just asking to make sure\nit wasn't overlooked.\n\n---------------------------------------------------------------------------\n\nrevision 1.22.2.1\ndate: 2001/05/22 14:46:46; author: thomas; state: Exp; lines: +4 -4\nPatch from Barry Lind to correctly decode time zones in timestamp results.\n Without patch, the time zone field is ignored and the returned time is\n not correct.\n Already applied to the development tree...\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 22 May 2001 14:03:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "JDBC commit to 7.1.2" }, { "msg_contents": "The jdbc1 code does not have the problem that this patch corrects. So \nno it does not need to be and should not be applied to the jdbc1 area.\n\nI also noticed that that the original patch which was for the \njdbc2/ResultSet.java file was also applied to the jdbc1/ResultSet.java \nfile on the main trunk of the CVS. This will not work and that patch \nneeds to be removed from jdbc1/ResultSet.java. (The variable the patch \nuses doesn't exist in the jdbc1 code thus the current jdbc1 code will \nnot compile).\n\nThe jdbc1 and jdbc2 sets of files are sufficiently different that \npatches for one should not be applied to the other.\n\nthanks,\n--Barry\n\nBruce Momjian wrote:\n\n> This timezone fix was applied to jdbc2 in 7.1.2 by Thomas. Do you want\n> this in jdbc1 also? 
I am not recommending it, just asking to make sure\n> it wasn't overlooked.\n> \n> ---------------------------------------------------------------------------\n> \n> revision 1.22.2.1\n> date: 2001/05/22 14:46:46; author: thomas; state: Exp; lines: +4 -4\n> Patch from Barry Lind to correctly decode time zones in timestamp results.\n> Without patch, the time zone field is ignored and the returned time is\n> not correct.\n> Already applied to the development tree...\n> \n\n", "msg_date": "Tue, 22 May 2001 12:10:47 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: JDBC commit to 7.1.2" }, { "msg_contents": "> The jdbc1 code does not have the problem that this patch corrects. So \n> no it does not need to be and should not be applied to the jdbc1 area.\n> \n> I also noticed that that the original patch which was for the \n> jdbc2/ResultSet.java file was also applied to the jdbc1/ResultSet.java \n> file on the main trunk of the CVS. This will not work and that patch \n> needs to be removed from jdbc1/ResultSet.java. (The variable the patch \n> uses doesn't exist in the jdbc1 code thus the current jdbc1 code will \n> not compile).\n> \n> The jdbc1 and jdbc2 sets of files are sufficiently different that \n> patches for one should not be applied to the other.\n\nGot it. Backed out of jdbc1 in current.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 22 May 2001 15:20:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: JDBC commit to 7.1.2" }, { "msg_contents": "\nFYI, the reason I personally didn't want to apply this to 7.1.2 is\nbecause we already added two new bugs in 7.1.1, and I didn't want to be\nthe one to add another. Add that to the natural problems I have with\nJDBC, and I ran. 
:-)\n\n\n> The jdbc1 code does not have the problem that this patch corrects. So \n> no it does not need to be and should not be applied to the jdbc1 area.\n> \n> I also noticed that that the original patch which was for the \n> jdbc2/ResultSet.java file was also applied to the jdbc1/ResultSet.java \n> file on the main trunk of the CVS. This will not work and that patch \n> needs to be removed from jdbc1/ResultSet.java. (The variable the patch \n> uses doesn't exist in the jdbc1 code thus the current jdbc1 code will \n> not compile).\n> \n> The jdbc1 and jdbc2 sets of files are sufficiently different that \n> patches for one should not be applied to the other.\n> \n> thanks,\n> --Barry\n> \n> Bruce Momjian wrote:\n> \n> > This timezone fix was applied to jdbc2 in 7.1.2 by Thomas. Do you want\n> > this in jdbc1 also? I am not recommending it, just asking to make sure\n> > it wasn't overlooked.\n> > \n> > ---------------------------------------------------------------------------\n> > \n> > revision 1.22.2.1\n> > date: 2001/05/22 14:46:46; author: thomas; state: Exp; lines: +4 -4\n> > Patch from Barry Lind to correctly decode time zones in timestamp results.\n> > Without patch, the time zone field is ignored and the returned time is\n> > not correct.\n> > Already applied to the development tree...\n> > \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 22 May 2001 17:48:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: JDBC commit to 7.1.2" }, { "msg_contents": "> FYI, the reason I personally didn't want to apply this to 7.1.2 is\n> because we already added two new bugs in 7.1.1, and I didn't want to be\n> the one to add another. Add that to the natural problems I have with\n> JDBC, and I ran. :-)\n\nUnderstandable. You shouldn't *have* to take responsibility for every\npatch coming in, so feel free to leave some things for others. And if\nthere is a problem with this one, I'll understand if you need to yell at\nme a little ;)\n\nAs a (sort of) aside, by forcing me to inspect the code we have\nuncovered a problem with atypical time zones which we would not\notherwise have noticed, at least until someone in Newfoundland or India\nstumbled on it. No patch yet, but at least we know it is there...\n\n - Thomas\n", "msg_date": "Tue, 22 May 2001 23:18:48 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: JDBC commit to 7.1.2" }, { "msg_contents": "Attached is a patch to fix the problem Thomas mentions below. The JDBC \ndriver now correctly handles timezones that are offset fractional hours \nfrom GMT (ie. -06:30).\n\nthanks,\n--Barry\n\nThomas Lockhart wrote:\n\n>> FYI, the reason I personally didn't want to apply this to 7.1.2 is\n>> because we already added two new bugs in 7.1.1, and I didn't want to be\n>> the one to add another. Add that to the natural problems I have with\n>> JDBC, and I ran. :-)\n> \n> \n> Understandable. You shouldn't *have* to take responsibility for every\n> patch coming in, so feel free to leave some things for others. 
And if\n> there is a problem with this one, I'll understand if you need to yell at\n> me a little ;)\n> \n> As a (sort of) aside, by forcing me to inspect the code we have\n> uncovered a problem with atypical time zones which we would not\n> otherwise have noticed, at least until someone in Newfoundland or India\n> stumbled on it. No patch yet, but at least we know it is there...\n> \n> - Thomas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n>", "msg_date": "Fri, 25 May 2001 17:00:59 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Patch for JDBC fractional hour timezone offset bug" }, { "msg_contents": "Your patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it withing the next 48 hours.\n\n> \n> Attached is a patch to fix the problem Thomas mentions below. The JDBC \n> driver now correctly handles timezones that are offset fractional hours \n> from GMT (ie. -06:30).\n> \n> thanks,\n> --Barry\n> \n> Thomas Lockhart wrote:\n> \n> >> FYI, the reason I personally didn't want to apply this to 7.1.2 is\n> >> because we already added two new bugs in 7.1.1, and I didn't want to be\n> >> the one to add another. Add that to the natural problems I have with\n> >> JDBC, and I ran. :-)\n> > \n> > \n> > Understandable. You shouldn't *have* to take responsibility for every\n> > patch coming in, so feel free to leave some things for others. And if\n> > there is a problem with this one, I'll understand if you need to yell at\n> > me a little ;)\n> > \n> > As a (sort of) aside, by forcing me to inspect the code we have\n> > uncovered a problem with atypical time zones which we would not\n> > otherwise have noticed, at least until someone in Newfoundland or India\n> > stumbled on it. 
No patch yet, but at least we know it is there...\n> > \n> > - Thomas\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> > \n> > http://www.postgresql.org/search.mpl\n> > \n> > \n> \n\n> *** ./org/postgresql/jdbc1/ResultSet.java.orig\tFri May 25 16:23:19 2001\n> --- ./org/postgresql/jdbc1/ResultSet.java\tFri May 25 16:26:35 2001\n> ***************\n> *** 472,484 ****\n> //so this code strips off timezone info and adds on the GMT+/-...\n> //as well as adds a third digit for partial seconds if necessary\n> StringBuffer strBuf = new StringBuffer(s);\n> char sub = strBuf.charAt(strBuf.length()-3);\n> if (sub == '+' || sub == '-') {\n> strBuf.setLength(strBuf.length()-3);\n> if (subsecond) {\n> ! strBuf = strBuf.append('0').append(\"GMT\").append(s.substring(s.length()-3, s.length())).append(\":00\");\n> } else {\n> ! strBuf = strBuf.append(\"GMT\").append(s.substring(s.length()-3, s.length())).append(\":00\");\n> }\n> } else if (subsecond) {\n> strBuf = strBuf.append('0');\n> --- 472,506 ----\n> //so this code strips off timezone info and adds on the GMT+/-...\n> //as well as adds a third digit for partial seconds if necessary\n> StringBuffer strBuf = new StringBuffer(s);\n> + \n> + //we are looking to see if the backend has appended on a timezone.\n> + //currently postgresql will return +/-HH:MM or +/-HH for timezone offset\n> + //(i.e. -06, or +06:30, note the expectation of the leading zero for the\n> + //hours, and the use of the : for delimiter between hours and minutes)\n> + //if the backend ISO format changes in the future this code will\n> + //need to be changed as well\n> char sub = strBuf.charAt(strBuf.length()-3);\n> if (sub == '+' || sub == '-') {\n> strBuf.setLength(strBuf.length()-3);\n> if (subsecond) {\n> ! strBuf.append('0').append(\"GMT\").append(s.substring(s.length()-3, s.length())).append(\":00\");\n> ! } else {\n> ! 
strBuf.append(\"GMT\").append(s.substring(s.length()-3, s.length())).append(\":00\");\n> ! }\n> ! } else if (sub == ':') {\n> ! //we may have found timezone info of format +/-HH:MM, or there is no\n> ! //timezone info at all and this is the : preceding the seconds\n> ! char sub2 = strBuf.charAt(strBuf.length()-5);\n> ! if (sub2 == '+' || sub2 == '-') {\n> ! //we have found timezone info of format +/-HH:MM\n> ! strBuf.setLength(strBuf.length()-5);\n> ! if (subsecond) {\n> ! strBuf.append('0').append(\"GMT\").append(s.substring(s.length()-5));\n> } else {\n> ! strBuf.append(\"GMT\").append(s.substring(s.length()-5));\n> ! }\n> ! } else if (subsecond) {\n> ! strBuf.append('0');\n> }\n> } else if (subsecond) {\n> strBuf = strBuf.append('0');\n> *** ./org/postgresql/jdbc2/ResultSet.java.orig\tFri May 25 16:23:34 2001\n> --- ./org/postgresql/jdbc2/ResultSet.java\tFri May 25 16:24:25 2001\n> ***************\n> *** 484,497 ****\n> --- 484,519 ----\n> sbuf.setLength(0);\n> sbuf.append(s);\n> \n> + //we are looking to see if the backend has appended on a timezone.\n> + //currently postgresql will return +/-HH:MM or +/-HH for timezone offset\n> + //(i.e. 
-06, or +06:30, note the expectation of the leading zero for the\n> + //hours, and the use of the : for delimiter between hours and minutes)\n> + //if the backend ISO format changes in the future this code will\n> + //need to be changed as well\n> char sub = sbuf.charAt(sbuf.length()-3);\n> if (sub == '+' || sub == '-') {\n> + //we have found timezone info of format +/-HH\n> sbuf.setLength(sbuf.length()-3);\n> if (subsecond) {\n> sbuf.append('0').append(\"GMT\").append(s.substring(s.length()-3)).append(\":00\");\n> } else {\n> sbuf.append(\"GMT\").append(s.substring(s.length()-3)).append(\":00\");\n> }\n> + } else if (sub == ':') {\n> + //we may have found timezone info of format +/-HH:MM, or there is no\n> + //timezone info at all and this is the : preceding the seconds\n> + char sub2 = sbuf.charAt(sbuf.length()-5);\n> + if (sub2 == '+' || sub2 == '-') {\n> + //we have found timezone info of format +/-HH:MM\n> + sbuf.setLength(sbuf.length()-5);\n> + if (subsecond) {\n> + sbuf.append('0').append(\"GMT\").append(s.substring(s.length()-5));\n> + } else {\n> + sbuf.append(\"GMT\").append(s.substring(s.length()-5));\n> + }\n> + } else if (subsecond) {\n> + sbuf.append('0');\n> + }\n> } else if (subsecond) {\n> sbuf.append('0');\n> }\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 25 May 2001 20:44:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Patch for JDBC fractional hour timezone offset bug" }, { "msg_contents": "\nThanks. Applied.\n\n\n> \n> Attached is a patch to fix the problem Thomas mentions below. 
The JDBC \n> driver now correctly handles timezones that are offset fractional hours \n> from GMT (ie. -06:30).\n> \n> thanks,\n> --Barry\n> \n> Thomas Lockhart wrote:\n> \n> >> FYI, the reason I personally didn't want to apply this to 7.1.2 is\n> >> because we already added two new bugs in 7.1.1, and I didn't want to be\n> >> the one to add another. Add that to the natural problems I have with\n> >> JDBC, and I ran. :-)\n> > \n> > \n> > Understandable. You shouldn't *have* to take responsibility for every\n> > patch coming in, so feel free to leave some things for others. And if\n> > there is a problem with this one, I'll understand if you need to yell at\n> > me a little ;)\n> > \n> > As a (sort of) aside, by forcing me to inspect the code we have\n> > uncovered a problem with atypical time zones which we would not\n> > otherwise have noticed, at least until someone in Newfoundland or India\n> > stumbled on it. No patch yet, but at least we know it is there...\n> > \n> > - Thomas\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> > \n> > http://www.postgresql.org/search.mpl\n> > \n> > \n> \n\n> *** ./org/postgresql/jdbc1/ResultSet.java.orig\tFri May 25 16:23:19 2001\n> --- ./org/postgresql/jdbc1/ResultSet.java\tFri May 25 16:26:35 2001\n> ***************\n> *** 472,484 ****\n> //so this code strips off timezone info and adds on the GMT+/-...\n> //as well as adds a third digit for partial seconds if necessary\n> StringBuffer strBuf = new StringBuffer(s);\n> char sub = strBuf.charAt(strBuf.length()-3);\n> if (sub == '+' || sub == '-') {\n> strBuf.setLength(strBuf.length()-3);\n> if (subsecond) {\n> ! strBuf = strBuf.append('0').append(\"GMT\").append(s.substring(s.length()-3, s.length())).append(\":00\");\n> } else {\n> ! 
strBuf = strBuf.append(\"GMT\").append(s.substring(s.length()-3, s.length())).append(\":00\");\n> }\n> } else if (subsecond) {\n> strBuf = strBuf.append('0');\n> --- 472,506 ----\n> //so this code strips off timezone info and adds on the GMT+/-...\n> //as well as adds a third digit for partial seconds if necessary\n> StringBuffer strBuf = new StringBuffer(s);\n> + \n> + //we are looking to see if the backend has appended on a timezone.\n> + //currently postgresql will return +/-HH:MM or +/-HH for timezone offset\n> + //(i.e. -06, or +06:30, note the expectation of the leading zero for the\n> + //hours, and the use of the : for delimiter between hours and minutes)\n> + //if the backend ISO format changes in the future this code will\n> + //need to be changed as well\n> char sub = strBuf.charAt(strBuf.length()-3);\n> if (sub == '+' || sub == '-') {\n> strBuf.setLength(strBuf.length()-3);\n> if (subsecond) {\n> ! strBuf.append('0').append(\"GMT\").append(s.substring(s.length()-3, s.length())).append(\":00\");\n> ! } else {\n> ! strBuf.append(\"GMT\").append(s.substring(s.length()-3, s.length())).append(\":00\");\n> ! }\n> ! } else if (sub == ':') {\n> ! //we may have found timezone info of format +/-HH:MM, or there is no\n> ! //timezone info at all and this is the : preceding the seconds\n> ! char sub2 = strBuf.charAt(strBuf.length()-5);\n> ! if (sub2 == '+' || sub2 == '-') {\n> ! //we have found timezone info of format +/-HH:MM\n> ! strBuf.setLength(strBuf.length()-5);\n> ! if (subsecond) {\n> ! strBuf.append('0').append(\"GMT\").append(s.substring(s.length()-5));\n> } else {\n> ! strBuf.append(\"GMT\").append(s.substring(s.length()-5));\n> ! }\n> ! } else if (subsecond) {\n> ! 
strBuf.append('0');\n> }\n> } else if (subsecond) {\n> strBuf = strBuf.append('0');\n> *** ./org/postgresql/jdbc2/ResultSet.java.orig\tFri May 25 16:23:34 2001\n> --- ./org/postgresql/jdbc2/ResultSet.java\tFri May 25 16:24:25 2001\n> ***************\n> *** 484,497 ****\n> --- 484,519 ----\n> sbuf.setLength(0);\n> sbuf.append(s);\n> \n> + //we are looking to see if the backend has appended on a timezone.\n> + //currently postgresql will return +/-HH:MM or +/-HH for timezone offset\n> + //(i.e. -06, or +06:30, note the expectation of the leading zero for the\n> + //hours, and the use of the : for delimiter between hours and minutes)\n> + //if the backend ISO format changes in the future this code will\n> + //need to be changed as well\n> char sub = sbuf.charAt(sbuf.length()-3);\n> if (sub == '+' || sub == '-') {\n> + //we have found timezone info of format +/-HH\n> sbuf.setLength(sbuf.length()-3);\n> if (subsecond) {\n> sbuf.append('0').append(\"GMT\").append(s.substring(s.length()-3)).append(\":00\");\n> } else {\n> sbuf.append(\"GMT\").append(s.substring(s.length()-3)).append(\":00\");\n> }\n> + } else if (sub == ':') {\n> + //we may have found timezone info of format +/-HH:MM, or there is no\n> + //timezone info at all and this is the : preceding the seconds\n> + char sub2 = sbuf.charAt(sbuf.length()-5);\n> + if (sub2 == '+' || sub2 == '-') {\n> + //we have found timezone info of format +/-HH:MM\n> + sbuf.setLength(sbuf.length()-5);\n> + if (subsecond) {\n> + sbuf.append('0').append(\"GMT\").append(s.substring(s.length()-5));\n> + } else {\n> + sbuf.append(\"GMT\").append(s.substring(s.length()-5));\n> + }\n> + } else if (subsecond) {\n> + sbuf.append('0');\n> + }\n> } else if (subsecond) {\n> sbuf.append('0');\n> }\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 
830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 27 May 2001 20:36:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Patch for JDBC fractional hour timezone offset bug" } ]
[ { "msg_contents": "> > 1. Compact log files after checkpoint (save records of uncommitted\n> > transactions and remove/archive others).\n> \n> On the grounds that undo is not guaranteed anyway (concurrent\n> heap access), why not simply forget it,\n\nWe can set flag in ItemData and register callback function in\nbuffer header regardless concurrent heap/index access. So buffer\nwill be cleaned before throwing it out from buffer pool\n(little optimization: if at the time when pin drops to 0 buffer\nis undirty then we shouldn't really clean it up to avoid unnecessary\nwrite - we can save info in FSM that space is available and clean it\nup on first pin/read).\nSo, only ability of *immediate reusing* is not guaranteed. But this is\ngeneral problem of space reusing till we assume that buffer pin is\nenough to access data.\n\n> since above sounds rather expensive ?\n\nI'm not sure. For non-overwriting smgr required UNDO info is pretty\nsmall because of we're not required to keep tuple data, unlike\nOracle & Co. We can even store UNDO info in non-WAL format to avoid\nlog record header overhead. UNDO files would be kind of Oracle rollback\nsegments but muuuuch smaller. But yeh - additional log reads.\n\n> The downside would only be, that long running txn's cannot\n> [easily] rollback to savepoint.\n\nWe should implement savepoints for all or none transactions, no?\n\n> > 2. Abort long running transactions.\n> \n> This is imho \"the\" big downside of UNDO, and should not\n> simply be put on the TODO without thorow research. I think it\n> would be better to forget UNDO for long running transactions\n> before aborting them.\n\nAbort could be configurable.\n\nVadim\n", "msg_date": "Tue, 22 May 2001 11:20:37 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: AW: Plans for solving the VACUUM problem" } ]
[ { "msg_contents": "\nI see some use of SEP_CHAR for '/' in the source. Do we want to use it\nconsistenly, or not use it at all? I can make the change.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 22 May 2001 14:54:05 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "SEP_CHAR" }, { "msg_contents": "Bruce Momjian writes:\n\n> I see some use of SEP_CHAR for '/' in the source. Do we want to use it\n> consistenly, or not use it at all? I can make the change.\n\nI vote for not at all, since there is not at all a use for it.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 22 May 2001 22:35:18 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: SEP_CHAR" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Bruce Momjian writes:\n>> I see some use of SEP_CHAR for '/' in the source. Do we want to use it\n>> consistenly, or not use it at all? I can make the change.\n\n> I vote for not at all, since there is not at all a use for it.\n\nIf it's not actually needed for the Cygwin port, I'd vote for taking it\nout. It adds clutter to the sources, to no purpose. Also, even if you\nfix it to be used consistently *every* place it should be, how long will\nit stay that way? Seems practically unmaintainable to me.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 May 2001 11:38:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SEP_CHAR " }, { "msg_contents": "> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Bruce Momjian writes:\n> >> I see some use of SEP_CHAR for '/' in the source. Do we want to use it\n> >> consistenly, or not use it at all? 
I can make the change.\n> \n> > I vote for not at all, since there is not at all a use for it.\n> \n> If it's not actually needed for the Cygwin port, I'd vote for taking it\n> out. It adds clutter to the sources, to no purpose. Also, even if you\n> fix it to be used consistently *every* place it should be, how long will\n> it stay that way? Seems practically unmaintainable to me.\n\nOK, two votes, will be removed shortly.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 23 May 2001 11:40:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: SEP_CHAR" }, { "msg_contents": "> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Bruce Momjian writes:\n> >> I see some use of SEP_CHAR for '/' in the source. Do we want to use it\n> >> consistenly, or not use it at all? I can make the change.\n> \n> > I vote for not at all, since there is not at all a use for it.\n> \n> If it's not actually needed for the Cygwin port, I'd vote for taking it\n> out. It adds clutter to the sources, to no purpose. Also, even if you\n> fix it to be used consistently *every* place it should be, how long will\n> it stay that way? Seems practically unmaintainable to me.\n\nI see it used by psql and pg_dump and I will leave them alone, assuming\n/ is not hardeded elsewhere in that code.\n\nActually, it is:\n\n sprintf(psqlrc, \"%s/.psqlrc\", home);\n\nso it seems NT handles '/' anyway.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 23 May 2001 11:48:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: SEP_CHAR" }, { "msg_contents": "Bruce Momjian writes:\n\n> Actually, it is:\n>\n> sprintf(psqlrc, \"%s/.psqlrc\", home);\n>\n> so it seems NT handles '/' anyway.\n\n'/' has been a valid path separator when using the C io facilities at\nleast since Turbo C on MS-DOS.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 23 May 2001 18:29:40 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: SEP_CHAR" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I see it used by psql and pg_dump and I will leave them alone, assuming\n> / is not hardeded elsewhere in that code.\n\nIf you're gonna take it out, take it out completely.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 May 2001 12:31:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SEP_CHAR " } ]
[ { "msg_contents": "http://www.postgresql.org/~petere/gettext.html\n\nThis is a compilation of the BSD-licensed gettext tools from NetBSD plus\nsome of my own code, put into a (hopefully) portable package, intended to\nbe evaluated for possible use in PostgreSQL. Give it a try if you're\ninterested. I've already tried it on FreeBSD, Linux, and Unixware, so\ndon't bother with those.\n\nI feel that this is ready to go. NetBSD is using it in production as a\nGNU gettext replacement, even for large packages. The .mo file format is\npretty simple actually, so if issues came up we could tackle them\nourselves.\n\nAs for portability, the fanciest feature it uses is mmap(), but only to\nmap a real file into memory to read it there. The missing functions\n(strlcat, strlcpy, strsep) I've worked around with autoconf, the rest\nlooks all like basic POSIX stuff.\n\nSo I suppose if this looks okay I will start trying to work out how we can\nuse this for PostgreSQL.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 22 May 2001 22:34:20 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "BSD gettext" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> http://www.postgresql.org/~petere/gettext.html\n> \n> This is a compilation of the BSD-licensed gettext tools from NetBSD plus\n> some of my own code, put into a (hopefully) portable package, intended to\n> be evaluated for possible use in PostgreSQL. Give it a try if you're\n> interested. I've already tried it on FreeBSD, Linux, and Unixware, so\n> don't bother with those.\n\n# uname -a\nSunOS mage 5.8 Generic_108528-06 sun4u sparc SUNW,Ultra-5_10\n# pwd\n/usr/local/src/3/bsd-gettext-0.0\n# gmake install >/dev/null 2>&1 \n# echo $? \n0\n# LANGUAGE=sv /usr/local/bin/gettext\n/usr/local/bin/gettext: argument saknas\n# \n\n\n\n-- \n\nRick Robino v. (503) 891-9283\nWave Division Consulting @. 
wavedivision.com\n", "msg_date": "Tue, 22 May 2001 17:48:35 -0700", "msg_from": "Rick Robino <rrobino@wavedivision.com>", "msg_from_op": false, "msg_subject": "Re: BSD gettext" }, { "msg_contents": "Peter Eisentraut wrote:\n\n> This is a compilation of the BSD-licensed gettext tools from NetBSD plus\n> some of my own code, put into a (hopefully) portable package, intended to\n> be evaluated for possible use in PostgreSQL. Give it a try if you're\n> interested. I've already tried it on FreeBSD, Linux, and Unixware, so\n> don't bother with those.\n \nI tried on Digital Unix 4.0f, using Digital cc:\n\nMaking all in gettext\nmake[2]: Entering directory `/tmp/bsd-gettext-0.0/gettext'\ncc -DHAVE_CONFIG_H -I. -I. -I.. \n-DLOCALEDIR=\\\"/tmp/gettext/share/locale\\\" -I ../libintl -g -c\ngettext-main.c\ncc: Severe: ../config.h, line 36: Cannot find file <unistd.h> specified\nin #include directive. (noinclfilef)\n#include <unistd.h>\n-^\nmake[2]: *** [gettext-main.o] Error 1\n\nUsing gcc it breaks when linking against snprintf (this can be worked\naround)\n\nHope it helps.\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n", "msg_date": "Wed, 23 May 2001 10:55:35 +0300", "msg_from": "Alessio Bragadini <alessio@albourne.com>", "msg_from_op": false, "msg_subject": "Re: BSD gettext" }, { "msg_contents": "On Wed, May 23, 2001 at 10:55:35AM +0300, Alessio Bragadini wrote:\n> Peter Eisentraut wrote:\n> \n> > This is a compilation of the BSD-licensed gettext tools from NetBSD plus\n> > some of my own code, put into a (hopefully) portable package, intended to\n> > be evaluated for possible use in PostgreSQL. Give it a try if you're\n> > interested. 
I've already tried it on FreeBSD, Linux, and Unixware, so\n> > don't bother with those.\n> \n> I tried on Digital Unix 4.0f, using Digital cc:\n> \n> Making all in gettext\n> make[2]: Entering directory `/tmp/bsd-gettext-0.0/gettext'\n> cc -DHAVE_CONFIG_H -I. -I. -I.. \n> -DLOCALEDIR=\\\"/tmp/gettext/share/locale\\\" -I ../libintl -g -c\n> gettext-main.c\n> cc: Severe: ../config.h, line 36: Cannot find file <unistd.h> specified\n> in #include directive. (noinclfilef)\n> #include <unistd.h>\n> -^\n> make[2]: *** [gettext-main.o] Error 1\n\nMake sure you have no spaces after -I on Tru64 UNIX.\n\n-- \nalbert chin (china@thewrittenword.com)\n", "msg_date": "Wed, 23 May 2001 15:43:04 -0500", "msg_from": "pgsql-hackers@thewrittenword.com", "msg_from_op": false, "msg_subject": "Re: Re: BSD gettext" }, { "msg_contents": "pgsql-hackers@thewrittenword.com wrote:\n\n> Make sure you have no spaces after -I on Tru64 UNIX.\n\nThat was it, thanks. Sent a micro-patch to Peter.\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n", "msg_date": "Thu, 24 May 2001 10:39:23 +0300", "msg_from": "Alessio Bragadini <alessio@albourne.com>", "msg_from_op": false, "msg_subject": "Re: BSD gettext" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> This is a compilation of the BSD-licensed gettext tools from NetBSD plus\n> some of my own code, put into a (hopefully) portable package, intended to\n> be evaluated for possible use in PostgreSQL. Give it a try if you're\n> interested.\n\nOn HPUX 10.20:\n\nmake[2]: Entering directory `/home/tgl/pgsql/bsd-gettext-0.0/libintl'\n/bin/sh ../libtool --mode=compile gcc -DHAVE_CONFIG_H -I. -I. -I.. -g -O2 -c gettext.c\ngcc -DHAVE_CONFIG_H -I. -I. -I.. 
-g -O2 -c gettext.c -o gettext.o\ngettext.c: In function `mapit':\ngettext.c:313: `MAP_FAILED' undeclared (first use in this function)\ngettext.c:313: (Each undeclared identifier is reported only once\ngettext.c:313: for each function it appears in.)\ngettext.c: In function `unmapit':\ngettext.c:442: `MAP_FAILED' undeclared (first use in this function)\nmake[2]: *** [gettext.lo] Error 1\n\nThe HPUX man page for mmap documents its failure return value as \"-1\",\nso I hacked around this with\n\n#ifndef MAP_FAILED\n#define MAP_FAILED ((void *) (-1))\n#endif\n\nwhereupon it built and passed the simple self-test you suggested.\nHowever, I think it's pretty foolish to depend on mmap for such\nlittle reason as this code does. I suggest ripping out the mmap\nusage and just reading the file with good old read(2).\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 24 May 2001 08:31:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BSD gettext " }, { "msg_contents": "> The HPUX man page for mmap documents its failure return value as \"-1\",\n> so I hacked around this with\n> \n> #ifndef MAP_FAILED\n> #define MAP_FAILED ((void *) (-1))\n> #endif\n> \n> whereupon it built and passed the simple self-test you suggested.\n> However, I think it's pretty foolish to depend on mmap for such\n> little reason as this code does. I suggest ripping out the mmap\n> usage and just reading the file with good old read(2).\n\nAgreed. Let read() use mmap() internally if it wants to.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 24 May 2001 10:30:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: BSD gettext" }, { "msg_contents": "On Thu, May 24, 2001 at 10:30:01AM -0400, Bruce Momjian wrote:\n> > The HPUX man page for mmap documents its failure return value as \"-1\",\n> > so I hacked around this with\n> > \n> > #ifndef MAP_FAILED\n> > #define MAP_FAILED ((void *) (-1))\n> > #endif\n> > \n> > whereupon it built and passed the simple self-test you suggested.\n> > However, I think it's pretty foolish to depend on mmap for such\n> > little reason as this code does. I suggest ripping out the mmap\n> > usage and just reading the file with good old read(2).\n> \n> Agreed. Let read() use mmap() internally if it wants to.\n\nThe reason mmap() is faster than read() is that it can avoid copying \ndata to the place you specify. read() can \"use mmap() internally\" only \nin cases rare enough to hardly be worth checking for. \n\nStdio is often able to use mmap() internally for parsing, and in \nglibc-2.x (and, I think, on recent Solarix and BSDs) it does. Usually, \ntherefore, it would be better to use stdio functions (except fread()!) \nin place of read(), where possible, to allow this optimization.\n\nUsing mmap() in place of disk read() almost always results in enough\nperformance improvement to make doing so worth a lot of disruption.\nToday mmap() is used heavily enough, in important programs, that \nworries about unreliability are no better founded than worries about\nread().\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Thu, 24 May 2001 14:10:34 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: BSD gettext" }, { "msg_contents": "Tom Lane writes:\n\n> However, I think it's pretty foolish to depend on mmap for such\n> little reason as this code does. 
I suggest ripping out the mmap\n> usage and just reading the file with good old read(2).\n\nI'm prepared to do that if mmap doesn't work for someone, but I'm not\neager to fork the code without proven breakage.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 24 May 2001 23:11:09 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: BSD gettext " } ]
[ { "msg_contents": "\nFolks:\n As I study the source of LockBuffer in bufmgr.c I came across\nthe following code snippet for the case of releasing a\nshared (read) lock:\n\n\n if (mode == BUFFER_LOCK_UNLOCK)\n {\n if (*buflock & BL_R_LOCK)\n {\n Assert(buf->r_locks > 0);\n Assert(!(buf->w_lock));\n Assert(!(*buflock & (BL_W_LOCK | BL_RI_LOCK)));\n (buf->r_locks)--;\n *buflock &= ~BL_R_LOCK;\n\nThis code resets BL_R_LOCK on the first release of a shared lock.\nI think it should check that the count of readers be zero:\n( something like\n\n if (mode == BUFFER_LOCK_UNLOCK)\n {\n if (*buflock & BL_R_LOCK)\n {\n Assert(buf->r_locks > 0);\n Assert(!(buf->w_lock));\n Assert(!(*buflock & (BL_W_LOCK | BL_RI_LOCK)));\n (buf->r_locks)--;\n if (!buf->r_locks)\n *buflock &= ~BL_R_LOCK;\n\n\nOr I am missing something...\n\n thanks\n regards\n Mauricio\n_________________________________________________________________\nGet your FREE download of MSN Explorer at http://explorer.msn.com\n\n", "msg_date": "Tue, 22 May 2001 15:40:16 -0500", "msg_from": "\"Mauricio Breternitz\" <mbjsql@hotmail.com>", "msg_from_op": true, "msg_subject": "? potential bug in LockBuffer ?" } ]
[ { "msg_contents": "> (buf->r_locks)--;\n> if (!buf->r_locks)\n> *buflock &= ~BL_R_LOCK;\n> \n> \n> Or I am missing something...\n\nbuflock is per-backend flag, it's not in shmem. Backend is\nallowed only single lock per buffer.\n\nVadim\n", "msg_date": "Tue, 22 May 2001 13:57:14 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: ? potential bug in LockBuffer ?" } ]
[ { "msg_contents": "> > And, I cannot say that I would implement UNDO because of\n> > 1. (cleanup) OR 2. (savepoints) OR 4. (pg_log management)\n> > but because of ALL of 1., 2., 4.\n> \n> OK, I understand your reasoning here, but I want to make a comment.\n> \n> Looking at the previous features you added, like subqueries, MVCC, or\n> WAL, these were major features that greatly enhanced the system's\n> capabilities.\n> \n> Now, looking at UNDO, I just don't see it in the same league as those\n> other additions. Of course, you can work on whatever you want, but I\n> was hoping to see another major feature addition for 7.2. We know we\n> badly need auto-vacuum, improved replication, and point-in-time recover.\n\nI don't like auto-vacuum approach in long term, WAL-based BAR is too easy\nto do -:) (and you know that there is man who will do it, probably),\nbidirectional sync replication is good to work on, but I'm more\ninterested in storage/transaction management now. And I'm not sure\nif I'll have enough time for \"another major feature in 7.2\" anyway.\n\n> It would be better to put work into one mechanism that would\n> reuse all tuples.\n\nThis is what we're discussing now -:)\nIf community will not like UNDO then I'll probably try to implement\ndead space collector which will read log files and so on. Easy to\n#ifdef it in 7.2 to use in 7.3 (or so) with on-disk FSM. Also, I have\nto implement logging for non-btree indices (anyway required for UNDO,\nWAL-based BAR, WAL-based space reusing).\n\nVadim\n", "msg_date": "Tue, 22 May 2001 14:33:38 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Plans for solving the VACUUM problem" } ]
[ { "msg_contents": "love to have a copy of that script that backups postgre remotely. \n\nangel", "msg_date": "Tue, 22 May 2001 16:51:24 -0700", "msg_from": "\"roger\" <administrator@hollywoodmistress.com>", "msg_from_op": true, "msg_subject": "None" } ]
[ { "msg_contents": "Has this been already fixed or reported?\n\n---------------------------------------------------------\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\nUsing pager is off.\ntest=# select version();\n version \n---------------------------------------------------------------------\n PostgreSQL 7.1.1 on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66\n(1 row)\n\ntest=# \\dz\nDid not find any relation named \"z\".\ntest-# \\dz\nDid not find any relation named \"z\".\nSegmentation fault (core dumped)\n--\nTatsuo Ishii\n", "msg_date": "Wed, 23 May 2001 09:59:57 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "None" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Has this been already fixed or reported?\n\n> test-# \\dz\n> Did not find any relation named \"z\".\n> Segmentation fault (core dumped)\n\nYes, this is fixed in 7.1.2.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 May 2001 11:52:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: " } ]
[ { "msg_contents": "Can someone point me to an active postgres listing......\n\nthis one seems a bit “slow”....(no offense intended)...", "msg_date": "Wed, 23 May 2001 03:47:06 GMT", "msg_from": "Raoul Callaghan <ausit@bigpond.net.au>", "msg_from_op": true, "msg_subject": "anyone actively follow this group?" } ]
[ { "msg_contents": "\n> If community will not like UNDO then I'll probably try to implement\n\nImho UNDO would be great under the following circumstances:\n\t1. The undo is only registered for some background work process\n\t and not done in the client's backend (or only if it is a small txn).\n\t2. The same mechanism should also be used for outdated tuples\n\t (the only difference beeing, that some tuples need to wait longer\n\t because of an active xid)\n\nThe reasoning to not do it in the client's backend is not only that the client\ndoes not need to wait, but that the nervous dba tends to kill them if after one hour \nof forward work the backend seemingly does not respond anymore (because it is\nbusy with undo).\n\n> dead space collector which will read log files and so on.\n\nWhich would then only be a possible implementation variant of above :-)\nFirst step probably would be to separate the physical log to reduce WAL size.\n\n> to implement logging for non-btree indices (anyway required for UNDO,\n> WAL-based BAR, WAL-based space reusing).\n\nImho it would be great to implement a generic (albeit more expensive) \nredo for all possible index types, that would be used in absence of a physical \nredo for that particular index type (which is currently available for btree).\n\nThe prerequisites would be a physical log that saves the page before \nmodification. The redo could then be done (with the info from the heap tuple log record)\nwith the same index interface, that is used during normal operation.\n\nImho implementing a new index type is difficult enough as is without the need \nto write a redo and undo.\n\nAndreas\n", "msg_date": "Wed, 23 May 2001 09:35:22 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Plans for solving the VACUUM problem" } ]
[ { "msg_contents": "> > People also have referred to an overwriting smgr\n> > easily. Please tell me how to introduce an overwriting smgr\n> > without UNDO.\n\nThere is no way. Although undo for an overwriting smgr would involve a\nvery different approach than with non-overwriting. See Vadim's post about what \ninfo suffices for undo in non overwriting smgr (file and ctid).\n\n> I guess that is the question. Are we heading for an overwriting storage\n> manager? I didn't see that in Vadim's list of UNDO advantages, but\n> maybe that is his final goal. If so UNDO may make sense, but then the\n> question is how do we keep MVCC with an overwriting storage manager?\n> \n> The only way I can see doing it is to throw the old tuples into the WAL\n> and have backends read through that for MVCC info.\n\nIf PostgreSQL wants to stay MVCC, then we should imho forget \"overwriting smgr\"\nvery fast.\n\nLet me try to list the pros and cons that I can think of:\nPro:\n\tno index modification if key stays same\n\tno search for free space for update (if tuple still fits into page)\n\tno pg_log\nCon:\n\tadditional IO to write \"before image\" to rollback segment\n\t\t(every before image, not only first after checkpoint)\n\t\t(also before image of every index page that is updated !)\n\tneed a rollback segment that imposes all sorts of contention problems\n\tactive rollback, that needs to do a lot of undo work\n\nAndreas\n", "msg_date": "Wed, 23 May 2001 10:26:26 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Plans for solving the VACUUM problem" } ]
[ { "msg_contents": "\n> > The downside would only be, that long running txn's cannot\n> > [easily] rollback to savepoint.\n> \n> We should implement savepoints for all or none transactions, no?\n\nWe should not limit transaction size to online available disk space for WAL. \nImho that is much more important. With guaranteed undo we would need \ndiskspace for more than 2x new data size (+ at least space for 1x all modified \npages unless physical log is separated from WAL).\n\nImho a good design should involve only little more than 1x new data size.\n\n> \n> > > 2. Abort long running transactions.\n> > \n> > This is imho \"the\" big downside of UNDO, and should not\n> > simply be put on the TODO without thorow research. I think it\n> > would be better to forget UNDO for long running transactions\n> > before aborting them.\n> \n> Abort could be configurable.\n\nThe point is, that you need to abort before WAL runs out of disk space\nregardless of configuration.\n\nAndreas\n", "msg_date": "Wed, 23 May 2001 10:45:17 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: Plans for solving the VACUUM problem" } ]
[ { "msg_contents": "At 14:33 22/05/01 -0700, Mikheev, Vadim wrote:\n>\n>If community will not like UNDO then I'll probably try to implement\n>dead space collector which will read log files and so on. \n\nI'd vote for UNDO; in terms of usability & friendliness it's a big win.\nTom's plans for FSM etc are, at least, going to get us some useful data,\nand at best will mean we can hang of WAL based FSM for a few versions.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 23 May 2001 18:58:58 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": true, "msg_subject": "RE: Plans for solving the VACUUM problem" } ]
[ { "msg_contents": "\n> >If community will not like UNDO then I'll probably try to implement\n> >dead space collector which will read log files and so on. \n> \n> I'd vote for UNDO; in terms of usability & friendliness it's a big win.\n\nCould you please try it a little more verbose ? I am very interested in \nthe advantages you see in \"UNDO for rollback only\".\n\npg_log is a very big argument, but couldn't we try to change the format\nto something that only stores ranges of aborted txn's in a btree like format ? \nNow that we have WAL, that should be possible.\n\nAndreas\n", "msg_date": "Wed, 23 May 2001 11:25:12 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Plans for solving the VACUUM problem" } ]
[ { "msg_contents": "At 11:25 23/05/01 +0200, Zeugswetter Andreas SB wrote:\n>\n>> >If community will not like UNDO then I'll probably try to implement\n>> >dead space collector which will read log files and so on. \n>> \n>> I'd vote for UNDO; in terms of usability & friendliness it's a big win.\n>\n>Could you please try it a little more verbose ? I am very interested in \n>the advantages you see in \"UNDO for rollback only\".\n\nI have not been paying strict attention to this thread, so it may have\nwandered into a narrower band than I think we are in, but my understanding\nis that UNDO is required for partial rollback in the case of failed\ncommands withing a single TX. Specifically,\n\n- A simple typo in psql can currently cause a forced rollback of the entire\nTX. UNDO should avoid this.\n\n- It is not uncommon for application in other databases to handle errors\nfrom the database (eg. missing FKs), and continue a TX.\n\n- Similarly, when we get a new error reporting system, general constraint\n(or other) failures should be able to be handled in one TX.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 23 May 2001 20:43:24 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": true, "msg_subject": "Re: AW: Plans for solving the VACUUM problem" }, { "msg_contents": "> >> I'd vote for UNDO; in terms of usability & friendliness it's a big win.\n> >\n> >Could you please try it a little more verbose ? 
I am very interested in \n> >the advantages you see in \"UNDO for rollback only\".\n\nThe confusion here is that you say you want UNDO, but then say you want\nit to happen in the background, which sounds more like autovacuum than\nUNDO.\n\n> I have not been paying strict attention to this thread, so it may have\n> wandered into a narrower band than I think we are in, but my understanding\n> is that UNDO is required for partial rollback in the case of failed\n> commands withing a single TX. Specifically,\n> \n> - A simple typo in psql can currently cause a forced rollback of the entire\n> TX. UNDO should avoid this.\n> \n> - It is not uncommon for application in other databases to handle errors\n> from the database (eg. missing FKs), and continue a TX.\n> \n> - Similarly, when we get a new error reporting system, general constraint\n> (or other) failures should be able to be handled in one TX.\n\nI think what you are asking for here is subtransactions, which can be\ndone without UNDO if we assign a unique transaction id's to each\nsubtransaction. Not pretty, but possible. \n\nUNDO makes subtransactions easier because you can UNDO the part that\nfailed. However, someone mentioned you may have to wait for that part\nto be undone. I think you have to wait because the current transaction\nwould have no way to know which rows were visible to the current\ntransaction unless you mark them right away or grope through the WAL\nevery time you visit a tuple.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 23 May 2001 11:27:57 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Plans for solving the VACUUM problem" } ]
[ { "msg_contents": "\nset_inherited_rel_pathlist in src/backend/path/allpaths.c says:\n\n /*\n * XXX for now, can't handle inherited expansion of FOR UPDATE; can we\n * do better?\n */\n\nIs this a terribly difficult thing to implement?\n\nThe RI triggers use FOR UPDATE which makes RI impossible with\ninheritance hierarchies. Is there any chance this might make it onto\nsomeone's todo list?\n\n\nThanks.\n", "msg_date": "Wed, 23 May 2001 03:48:58 -0700", "msg_from": "Stephen Deasey <stephen@themer.com>", "msg_from_op": true, "msg_subject": "select ... for update and inheritence" }, { "msg_contents": "Stephen Deasey <stephen@themer.com> writes:\n> set_inherited_rel_pathlist in src/backend/path/allpaths.c says:\n\n> /*\n> * XXX for now, can't handle inherited expansion of FOR UPDATE; can we\n> * do better?\n> */\n\n> Is this a terribly difficult thing to implement?\n\nIt might be as easy as adding the child tables to the rowmark list.\nOr not. I didn't have time to experiment with it for 7.1. Want to\nwork on it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 May 2001 15:35:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: select ... for update and inheritence " } ]
[ { "msg_contents": "\n> - A simple typo in psql can currently cause a forced rollback of the entire\n> TX. UNDO should avoid this.\n\nYes, I forgot to mention this very big advantage, but undo is not the only possible way \nto implement savepoints. Solutions using CommandCounter have been discussed.\nAlthough the pg_log mechanism would become more complex, a background\n\"vacuum-like\" process could give highest priority to removing such rolled-back parts\nof transactions.\n\nAndreas\n", "msg_date": "Wed, 23 May 2001 13:08:28 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: Plans for solving the VACUUM problem" } ]
[ { "msg_contents": "\npg_dump's help provides this near the bottom:\n\nIf no database name is not supplied, then the PGDATABASE environment\nvariable value is used.\n\nShouldn't that read:\n\nIf no database name is supplied, then the PGDATABASE environment\nvariable value is used.\n\nor:\n\nIf the database name is not supplied, then the PGDATABASE environment\nvariable value is used.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 23 May 2001 11:28:59 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "miswording in pg_dump" } ]
[ { "msg_contents": "This is a resend of my message, which I sent by mistake to the BUGS list\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n---------- Forwarded message ----------\nDate: Wed, 23 May 2001 19:14:41 +0300 (GMT)\nFrom: Oleg Bartunov <oleg@sai.msu.su>\nTo: tgl@sss.pgh.pa.us\nCc: pgsql-bugs@postgresql.org\nSubject: rfd: multi-key GiST index problems\n\nTom,\n\nwe're working hard on multi-key support in GiST and the horizon is\ngetting closer :-) I'd like to raise several questions:\n\n1. index_getprocid (backend/access/index/indexam.c) doesn't\n properly support multi-key indexes with procnum > 1\n it works only if either procnum=1 (B-tree, hash) or attnum=1\n (Rtree,Gist,hash). But for multi-key GiST we have\n 7>=procnum >=1 and attnum > 1\n\n We've found a workaround for GiST by using a define, but a general solution\n requires knowledge of the number of procedures for a given type of index.\n We didn't find a place where this number is stored in the index structure;\n if it's not in the index, it's necessary to add it to the index structure.\n\n2. it's necessary to recognize if an index attribute requires checking\n for lossiness. Consider the following:\n create index a on table using gist(a gist__int_ops) with (islossy);\n create index b on table using gist(b gist_box_ops);\n\n create index c on table using gist(b gist_box_ops,a gist__int_ops) with (islossy);\n\n gist__int_ops uses lossy compression, so we need to check the heap tuple\n after a successful check of the index tuple, but gist_box_ops doesn't\n require such a test. 
In the third example with a multi-key index we are\n forced to use 'with (islossy)' for the whole index even if a select will\n use the index by its first attribute (b gist_box_ops), which is not the\n right thing.\n We'd like to specify the lossy property for each attribute of the index,\n something like:\n create index c on table using gist(b gist_box_ops,a gist__int_ops with (islossy));\n Accordingly, the executor should understand and process this syntax.\n\nCurrent status:\n\nwe can create a multi-key GiST index and search is working for the 1st attribute\nas in the current version. There is a problem with searching on the next attributes\nbecause currently the StrategyNumber (contains, overlap, less, etc.) isn't\ndetermined for these attributes. StrategyNumber is used by the\nConsistent method. We hope to resolve this problem.\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n\n", "msg_date": "Wed, 23 May 2001 20:07:34 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "rfd: multi-key GiST index problems (fwd)" } ]
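The lossy-versus-exact distinction raised in this message — only a lossily-compressed index attribute forces a recheck of the heap tuple, while an exact attribute can be trusted as-is — can be sketched per attribute. This is a toy model with an invented "bucket" compression, not the real GiST executor interface; the names are made up for illustration:

```cpp
#include <cassert>

// Per-attribute recheck: a lossy index attribute can return false
// positives, so candidate rows must be re-tested against the real heap
// value; an exact attribute needs no recheck.
struct IndexAttr {
    bool lossy;   // does this attribute's compression lose information?
    int  stored;  // (possibly compressed) value kept in the index
};

// The index answers "maybe" when the stored value only approximates the key.
static bool index_consistent(const IndexAttr &attr, int query) {
    if (attr.lossy)
        return attr.stored / 10 == query / 10;  // coarse bucket: may match falsely
    return attr.stored == query;                // exact: index answer is final
}

static bool row_matches(const IndexAttr &attr, int heap_value, int query) {
    if (!index_consistent(attr, query))
        return false;                 // index already ruled the row out
    if (attr.lossy)
        return heap_value == query;   // lossy attr: recheck against the heap
    return true;                      // exact attr: no heap recheck needed
}
```

With per-attribute flags like this, an index on (b gist_box_ops, a gist__int_ops) would trigger the heap recheck only when the lossy attribute actually participates in the scan, which is what the proposed per-attribute `with (islossy)` syntax is after.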
[ { "msg_contents": "On Wed, May 23, 2001 at 11:35:31AM -0400, Bruce Momjian wrote:\n> > > > > We have added more const-ness to libpq++ for 7.2.\n> > > > \n> > > > Breaking link compatibility without bumping the major version number\n> > > > on the library seems to me a serious no-no.\n> > > > \n> > > > To const-ify member functions without breaking link compatibility,\n> > > > you have to add another, overloaded member that is const, and turn\n> > > > the non-const function into a wrapper. For example:\n> > > > \n> > > > void Foo::bar() { ... } // existing interface\n> > > > \n> > > > becomes\n> > > > \n> > > > void Foo::bar() { ((const Foo*)this)->bar(); } \n> > > > void Foo::bar() const { ... } \n> > > \n> > > Thanks. That was my problem, not knowing when I break link compatibility\n> > > in C++. Major updated.\n> > \n> > Wouldn't it be better to add the forwarding function and keep\n> > the same major number? It's quite disruptive to change the\n> > major number for what are really very minor changes. Otherwise\n> > you accumulate lots of near-copies of almost-identical libraries\n> > to be able to run old binaries.\n> > \n> > A major-number bump should usually be something planned for\n> > and scheduled.\n> \n> That const was just one of many const's added, and I am sure there will\n> be more stuff happening to C++. I changed a function returning short\n> for tuple length to int. Not worth mucking it up.\n> \n> If it was just that one it would be OK.\n\nI'll bet lots of people would like to see more careful planning about \nbreaking link compatibility. Other changes that break link compatibility \ninclude changing a struct or class referred to from inline functions, and \nadding a virtual function in a base class.\n\nIt's possible to make a lot of improvements without breaking link\ncompatibility, but it does take more care than in C. 
If you wonder\nwhether a change would break link compatibility, please ask on the list.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Wed, 23 May 2001 14:36:50 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": true, "msg_subject": "Re: C++ Headers" }, { "msg_contents": "> > If it was just that one it would be OK.\n> \n> I'll bet lots of people would like to see more careful planning about \n> breaking link compatibility. Other changes that break link compatibility \n> include changing a struct or class referred to from inline functions, and \n> adding a virtual function in a base class.\n> \n> It's possible to make a lot of improvements without breaking link\n> compatibility, but it does take more care than in C. If you wonder\n> whether a change would break link compatibility, please ask on the list.\n\nOur C++ interface needs serious work, and I don't want to burden a\nmaintainer with adding muck for backward compatibility.\n\nWe do update libpq occasionally and don't keep link compatibility. We do\nkeep interface compatibility with the backend.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 23 May 2001 18:57:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: C++ Headers" } ]
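The forwarding pattern quoted in this thread can be fleshed out into a compilable sketch. The int return value and the doubling body are invented for illustration (the original example used void); the point is that the old non-const `Foo::bar()` symbol survives as a thin wrapper, so existing binaries keep linking while new code gains a const overload:

```cpp
#include <cassert>

// Link-compatible const-ification: keep the old non-const member as a
// wrapper that forwards to a new const overload. Old binaries still find
// the original Foo::bar() symbol; new code can call bar() on const objects.
class Foo {
public:
    explicit Foo(int v) : value_(v) {}

    int bar();        // existing (non-const) interface, kept for old binaries
    int bar() const;  // new const member carrying the real implementation

private:
    int value_;
};

int Foo::bar() {
    // Forward to the const version, as in the quoted mailing-list example.
    return static_cast<const Foo *>(this)->bar();
}

int Foo::bar() const {
    return value_ * 2;  // stand-in body for the real work
}
```

Both the non-const and the const call sites end up in the same implementation, which is exactly what makes the change safe without a major-number bump.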