[ { "msg_contents": "Hi,\n\nHas anyone considered creating an aggregate function that returns an \narray of all matching rows? I am not sure if this makes much sense from \na speed point of view or is possible, but it would help denormalizing \ntables when necessary. For example, consider a table that looks as follows:\n\nSELECT * FROM t;\n id | value\n----+-------\n 1 | 1.5\n 1 | 2.5\n 1 | 3.5\n 2 | 4.5\n 2 | 5.5\n(5 rows)\n\nIt would be nice to be able to do a query as follows:\n\nSELECT id, agg_array(value) FROM t GROUP BY id;\n id | agg_array\n----+-----\n 1 | {1.5,2.5,3.5}\n 2 | {4.5,5.5}\n(2 rows)\n\nThanks,\nDavid Kaplan\n\n\n\n\n", "msg_date": "Mon, 01 Jul 2002 18:25:18 -0700", "msg_from": "\"David M. Kaplan\" <dmkaplan@ucdavis.edu>", "msg_from_op": true, "msg_subject": "aggregate that returns array" }, { "msg_contents": "On Tue, 2002-07-02 at 03:25, David M. Kaplan wrote:\n> Hi,\n> \n> Has anyone considered creating an aggregate function that returns an \n> array of all matching rows?\n\ncheck contrib/intagg for a function that does it for integers.\n\n-----------\nHannu\n\n\n\n", "msg_date": "02 Jul 2002 10:40:46 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: aggregate that returns array" } ]
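The aggregate David asks for can in fact be built with PostgreSQL's extensible aggregate machinery. A minimal sketch follows; note that it relies on the polymorphic `anyelement`/`anyarray` types and the `array_append` function, which appeared in releases later than the 7.2-era servers in this thread, and the name `array_accum` is purely illustrative:

```sql
-- Hypothetical user-defined aggregate that collects its inputs into an array.
-- Requires polymorphic types (anyelement/anyarray), available in later releases.
CREATE AGGREGATE array_accum (anyelement) (
    sfunc    = array_append,  -- state transition: append each input value
    stype    = anyarray,      -- running state is an array
    initcond = '{}'           -- start from the empty array
);

-- The query from the original message then becomes:
SELECT id, array_accum(value) FROM t GROUP BY id;
```

Modern PostgreSQL (8.4 and later) ships this behavior built in as `array_agg`.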
[ { "msg_contents": "OK,\n\nOn HEAD, I am still seeing the attached failures. They didn't happen\nbefore, but appeared in the last couple of months. All other tests pass.\n\nIt seems to just be a tuple ordering issue - I really don't know what caused\nit? If necessary, I can just modify the expected result, but I haven't seen\nanyone else with this issue - maybe there's more to it.\n\nFor the impatient:\n\n*** ./expected/rules.out Fri May 3 08:32:19 2002\n--- ./results/rules.out Tue Jul 2 12:25:57 2002\n***************\n*** 1005,1012 ****\n SELECT * FROM shoe_ready WHERE total_avail >= 2;\n shoename | sh_avail | sl_name | sl_avail | total_avail\n ------------+----------+------------+----------+-------------\n- sh1 | 2 | sl1 | 5 | 2\n sh3 | 4 | sl7 | 7 | 4\n (2 rows)\n\n CREATE TABLE shoelace_log (\n--- 1005,1012 ----\n SELECT * FROM shoe_ready WHERE total_avail >= 2;\n shoename | sh_avail | sl_name | sl_avail | total_avail\n ------------+----------+------------+----------+-------------\n sh3 | 4 | sl7 | 7 | 4\n+ sh1 | 2 | sl1 | 5 | 2\n (2 rows)\n\n CREATE TABLE shoelace_log (\n\n======================================================================\n\nChris", "msg_date": "Tue, 2 Jul 2002 12:32:40 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Rule Regression Test Failure on FreeBSD/Alpha" } ]
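The diff above is a pure row-ordering difference: without an `ORDER BY`, SQL leaves the order of result rows unspecified, so it can legitimately vary across platforms or planner changes. One way such a regression test could be made deterministic is to give the query an explicit sort key (a sketch only; `shoe_ready` is the view from the rules test):

```sql
-- Pin down the row order so the expected output file stays stable
-- regardless of plan or platform differences.
SELECT * FROM shoe_ready WHERE total_avail >= 2 ORDER BY shoename;
```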
[ { "msg_contents": "I was getting CVS errors because regress/GNUmakefile does an 'rm -rf\nresults', removing the CVS directory from results. This was causing my\n'cvs update's to fail. My fix was to change 'rm -rf results' to 'rm -f\nresults/*.out'.\n\nDoes anyone know when this problem was added. I don't see any\nsignificant changes in GNUmakefile.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Tue, 2 Jul 2002 01:03:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "regress/results directory problem" } ]
[ { "msg_contents": "pgman wrote:\n> I was getting CVS errors because regress/GNUmakefile does an 'rm -rf\n> results', removing the CVS directory from results. This was causing my\n> 'cvs update's to fail. My fix was to change 'rm -rf results' to 'rm -f\n> results/*.out'.\n> \n> Does anyone know when this problem was added. I don't see any\n> significant changes in GNUmakefile.\n\nOK, I found the problem. I do a 'cvs update' as:\n\n\tcvs update -P -d pgsql\n\nThe -d brings in any directories that may have been created since the\nlast checkout. That is causing problems when the regress/results\ndirectory has been deleted and recreated.\n\nI am backing out my GNUmakefile change. I am still unclear why this has\nstarted happening all of a sudden.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Tue, 2 Jul 2002 01:42:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: regress/results directory problem" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Does anyone know when this problem was added. I don't see any\n>> significant changes in GNUmakefile.\n\nAFAICT, the results directory hasn't been touched in 21 months.\nAre you sure you haven't changed your own CVS setup or arguments?\n\n> The -d brings in any directories that may have been created since the\n> last checkout. That is causing problems when the regress/results\n> directory has been deleted and recreated.\n\nI use cvs update -d -P myself, and I do *not* see it creating the\nresults directory.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Jul 2002 09:27:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: regress/results directory problem " }, { "msg_contents": "...\n> I am backing out my GNUmakefile change. I am still unclear why this has\n> started happening all of a sudden.\n\n?\n\nThe results/ directory should not be a part of CVS (since it is assumed\nto not exist by the regression tests). But it has been in CVS since 1997\nduring a period of time when a Makefile in that directory was\nresponsible for cleaning the directory. \n\nWe are relying on the pruning capabilities of CVS and so never really\nnotice that this was the case (I use -Pd almost always too).\n\nI doubt anything has changed recently in this regard.\n\n - Thomas\n\n\n", "msg_date": "Tue, 02 Jul 2002 07:04:33 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: regress/results directory problem" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Does anyone know when this problem was added. I don't see any\n> >> significant changes in GNUmakefile.\n> \n> AFAICT, the results directory hasn't been touched in 21 months.\n> Are you sure you haven't changed your own CVS setup or arguments?\n\nI looked at that. My cvs hasn't changed, my OS hasn't changed. I\ndeleted my entire tree and did another checkout, but that didn't help.\n\n> > The -d brings in any directories that may have been created since the\n> > last checkout. That is causing problems when the regress/results\n> > directory has been deleted and recreated.\n> \n> I use cvs update -d -P myself, and I do *not* see it creating the\n> results directory.\n\nThe problem is that if the results directory exists, the update fails\nbecause there is no /CVS directory in there.\n\nHere is a sample session. At the start regress/results does not exist:\n\n---------------------------------------------------------------------------\n\n$ cvs update -d -P pgsql\n? pgsql/config.log\n...\n$ cd /pg/test/regress/\n$ mkdir results\n$ cd -\n/pgcvs\n$ cvs update -d -P pgsql\n? pgsql/config.log\n...\n? pgsql/src/test/regress/results\ncvs update: in directory pgsql/src/test/regress/results:\ncvs update: cannot open CVS/Entries for reading: No such file or directory\n\n---------------------------------------------------------------------------\n\nIt seems to be failing because there is no CVS directory in results.\n\nThis is CVS:\n\n\tConcurrent Versions System (CVS) 1.10.3 (client/server)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Tue, 2 Jul 2002 11:08:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: regress/results directory problem" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> I use cvs update -d -P myself, and I do *not* see it creating the\n>> results directory.\n\n> The problem is that if the results directory exists, the update fails\n> because there is no /CVS directory in there.\n\nAh. I always do a 'make distclean' before I risk a whole-tree update...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Jul 2002 11:12:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: regress/results directory problem " }, { "msg_contents": "\nMarc has removed the regress/results directory from CVS.\n\n---------------------------------------------------------------------------\n\nThomas Lockhart wrote:\n> ...\n> > I am backing out my GNUmakefile change. I am still unclear why this has\n> > started happening all of a sudden.\n> \n> ?\n> \n> The results/ directory should not be a part of CVS (since it is assumed\n> to not exist by the regression tests). But it has been in CVS since 1997\n> during a period of time when a Makefile in that directory was\n> responsible for cleaning the directory. \n> \n> We are relying on the pruning capabilities of CVS and so never really\n> notice that this was the case (I use -Pd almost always too).\n> \n> I doubt anything has changed recently in this regard.\n> \n> - Thomas\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Wed, 3 Jul 2002 13:05:19 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: regress/results directory problem" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Marc has removed the regress/results directory from CVS.\n\nUh ... say it ain't so, Joe!\n\nregress/results/Makefile was part of several releases. If you\nreally did that, then it is no longer possible to extract the state\nof some past releases from CVS.\n\nThis cure is way worse than the disease.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2002 00:08:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: regress/results directory problem " } ]
[ { "msg_contents": "A while ago, I started a small discussion about passing arguments to a NOTIFY \nso that the listening backend could get more information about the event.\n\nThere wasn't exactly a consensus from what I understand, but the last thing I \nremember is that someone intended to speed up the notification process by \nstoring the events in shared memory segments (IIRC this was Tom's idea). That \nwould create a remote possibility of a spurious notification, but the idea is \nthat the listening application can check the status and determine what \nhappened.\n\nI looked at the TODO, but I couldn't find anything, nor could I find anything \nin the docs. \n\nIs someone still interested in implementing this feature? Are there still \npeople who disagree with the above implementation strategy?\n\nRegards,\n\tJeff\n\n\n", "msg_date": "Tue, 2 Jul 2002 02:37:19 -0700", "msg_from": "Jeff Davis <list-pgsql-hackers@empires.org>", "msg_from_op": true, "msg_subject": "listen/notify argument (old topic revisited)" }, { "msg_contents": "On Tue, Jul 02, 2002 at 02:37:19AM -0700, Jeff Davis wrote:\n> A while ago, I started a small discussion about passing arguments to a NOTIFY \n> so that the listening backend could get more information about the event.\n\nFunny, I was just about to post to -hackers about this.\n\n> There wasn't exactly a consensus from what I understand, but the last thing I \n> remember is that someone intended to speed up the notification process by \n> storing the events in shared memory segments (IIRC this was Tom's idea). That \n> would create a remote possibility of a spurious notification, but the idea is \n> that the listening application can check the status and determine what \n> happened.\n\nYes, that was Tom Lane. IMHO, we need to replace the existing\npg_listener scheme with an improved model if we want to make any\nsignificant improvements to asynchronous notifications. In summary,\nthe two designs that have been suggested are:\n\n pg_notify: a new system catalog, stores notifications only --\n pg_listener stores only listening backends.\n\n shmem: all notifications are done via shared memory and not stored\n in system catalogs at all, in a manner similar to the cache\n invalidation code that already exists. This avoids the MVCC-induced\n performance problem with storing notification in system catalogs,\n but can lead to spurious notifications -- the statically sized\n buffer in which notifications are stored can overflow. Applications\n will be able to differentiate between overflow-induced and regular\n messages.\n\n> Is someone still interested in implementing this feature? Are there still \n> people who disagree with the above implementation strategy?\n\nSome people objected to shmem at the time; personally, I'm not really\nsure which design is best. Any comments from -hackers?\n\nIf there's a consensus on which route to take, I'll probably implement\nthe preferred design for 7.3. However, I think that a proper\nimplementation of notify messages will need an FE/BE protocol change,\nso that will need to wait for 7.4.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n\n", "msg_date": "Tue, 2 Jul 2002 10:52:35 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited)" }, { "msg_contents": "Jeff Davis wrote:\n> A while ago, I started a small discussion about passing arguments to a NOTIFY \n> so that the listening backend could get more information about the event.\n> \n> There wasn't exactly a consensus from what I understand, but the last thing I \n> remember is that someone intended to speed up the notification process by \n> storing the events in shared memory segments (IIRC this was Tom's idea). That \n> would create a remote possibility of a spurious notification, but the idea is \n> that the listening application can check the status and determine what \n> happened.\n\nI don't see a huge value to using shared memory. Once we get\nauto-vacuum, pg_listener will be fine, and shared memory like SI is just\ntoo hard to get working reliably because of all the backends\nreading/writing in there. We have tables that have the proper sharing\nsemantics; I think we should use those and hope we get autovacuum soon.\n\nAs far as the message, perhaps passing the oid of the pg_listener row to\nthe backend would help, and then the backend can look up any message for\nthat oid in pg_listener.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Tue, 2 Jul 2002 11:33:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited)" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I don't see a huge value to using shared memory. Once we get\n> auto-vacuum, pg_listener will be fine,\n\nNo it won't. The performance of notify is *always* going to suck\nas long as it depends on going through a table. This is particularly\ntrue given the lack of any effective way to index pg_listener; the\nmore notifications you feed through, the more dead rows there are\nwith the same key...\n\n> and shared memory like SI is just\n> too hard to get working reliably because of all the backends\n> reading/writing in there.\n\nA curious statement considering that PG depends critically on SI\nworking. This is a solved problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Jul 2002 14:50:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited) " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I don't see a huge value to using shared memory. Once we get\n> > auto-vacuum, pg_listener will be fine,\n> \n> No it won't. The performance of notify is *always* going to suck\n> as long as it depends on going through a table. This is particularly\n> true given the lack of any effective way to index pg_listener; the\n> more notifications you feed through, the more dead rows there are\n> with the same key...\n\nWhy can't we do efficient indexing, or clear out the table? I don't\nremember.\n\n> > and shared memory like SI is just\n> > too hard to get working reliably because of all the backends\n> > reading/writing in there.\n> \n> A curious statement considering that PG depends critically on SI\n> working. This is a solved problem.\n\nMy point is that SI was buggy for years until we found all the bugs, so\nyea, it is a solved problem, but solved with difficulty.\n\nDo we want to add another SI-type capability that could be as difficult\nto get working properly, or will the notify piggyback on the existing SI\ncode. If the latter, that would be fine with me, but we still have the\noverflow queue problem.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Tue, 2 Jul 2002 15:09:38 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited)" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Why can't we do efficient indexing, or clear out the table? I don't\n> remember.\n\nI don't recall either, but I do recall that we tried to index it and\nbacked out the changes. In any case, a table on disk is just plain\nnot the right medium for transitory-by-design notification messages.\n\n>> A curious statement considering that PG depends critically on SI\n>> working. This is a solved problem.\n\n> My point is that SI was buggy for years until we found all the bugs, so\n> yea, it is a solved problem, but solved with difficulty.\n\nThe SI message mechanism itself was not the source of bugs, as I recall\nit (although certainly the code was incomprehensible in the extreme;\nthe original programmer had absolutely no grasp of readable coding style\nIMHO). The problem was failure to properly design the interactions with\nrelcache and catcache, which are pretty complex in their own right.\nAn SI-like NOTIFY mechanism wouldn't have those issues.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Jul 2002 15:23:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited) " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Why can't we do efficient indexing, or clear out the table? I don't\n> > remember.\n> \n> I don't recall either, but I do recall that we tried to index it and\n> backed out the changes. In any case, a table on disk is just plain\n> not the right medium for transitory-by-design notification messages.\n\nOK, I can help here. I added an index on pg_listener so lookups would\ngo faster in the backend, but inserts/updates into the table also\nrequire index additions, and your feeling was that the table was small\nand we would be better without the index and just sequentially scanning\nthe table. I can easily add the index and make sure it is used properly\nif you are now concerned about table access time.\n\nI think your issue was that it is only looked up once, and only updated\nonce, so there wasn't much sense in having that index maintenance\noverhead, i.e. you only used the index once per row.\n\n(I remember the item being on TODO for quite a while when we discussed\nthis.)\n\nOf course, a shared memory system probably is going to either do it\nsequentially or have its own index issues, so I don't see a huge\nadvantage to going to shared memory, and I do see extra code and a queue\nlimit.\n\n> >> A curious statement considering that PG depends critically on SI\n> >> working. This is a solved problem.\n> \n> > My point is that SI was buggy for years until we found all the bugs, so\n> > yea, it is a solved problem, but solved with difficulty.\n> \n> The SI message mechanism itself was not the source of bugs, as I recall\n> it (although certainly the code was incomprehensible in the extreme;\n> the original programmer had absolutely no grasp of readable coding style\n> IMHO). The problem was failure to properly design the interactions with\n> relcache and catcache, which are pretty complex in their own right.\n> An SI-like NOTIFY mechanism wouldn't have those issues.\n\nOh, OK, interesting. So _that_ was the issue there.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Tue, 2 Jul 2002 15:47:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited)" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Of course, a shared memory system probably is going to either do it\n> sequentially or have its own index issues, so I don't see a huge\n> advantage to going to shared memory, and I do see extra code and a queue\n> limit.\n\nDisk I/O vs. no disk I/O isn't a huge advantage? Come now.\n\nA shared memory system would use sequential (well, actually\ncircular-buffer) access, which is *exactly* what you want given\nthe inherently sequential nature of the messages. The reason that\ntable storage hurts is that we are forced to do searches, which we\ncould eliminate if we had control of the storage ordering. Again,\nit comes down to the fact that tables don't provide the right\nabstraction for this purpose.\n\nThe \"extra code\" argument doesn't impress me either; async.c is\ncurrently 900 lines, about 2.5 times the size of sinvaladt.c which is\nthe guts of SI message passing. I think it's a good bet that a SI-like\nnotify module would be much smaller than async.c is now; it's certainly\nunlikely to be significantly larger.\n\nThe queue limit problem is a valid argument, but it's the only valid\ncomplaint IMHO; and it seems a reasonable tradeoff to make for the\nother advantages.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Jul 2002 16:07:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited) " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Of course, a shared memory system probably is going to either do it\n> > sequentially or have its own index issues, so I don't see a huge\n> > advantage to going to shared memory, and I do see extra code and a queue\n> > limit.\n> \n> Disk I/O vs. no disk I/O isn't a huge advantage? Come now.\n\nMy assumption is that it throws to disk as backing store, which seems\nbetter to me than dropping the notifies. Is disk i/o a real performance\npenalty for notify, and is performance a huge issue for notify anyway,\nassuming autovacuum?\n\n> A shared memory system would use sequential (well, actually\n> circular-buffer) access, which is *exactly* what you want given\n> the inherently sequential nature of the messages. The reason that\n> table storage hurts is that we are forced to do searches, which we\n> could eliminate if we had control of the storage ordering. Again,\n> it comes down to the fact that tables don't provide the right\n> abstraction for this purpose.\n\nTo me, it just seems like going to shared memory is taking our existing\ntable structure and moving it to memory. Yea, there is no tuple header,\nand yea we can make a circular list, but we can't index the thing, so is\nspinning around a circular list any better than a sequential scan of a\ntable. Yea, we can delete stuff better, but autovacuum would help with\nthat. It just seems like we are reinventing the wheel.\n\nAre there other uses for this? Can we make use of RAM-only tables?\n\n> The \"extra code\" argument doesn't impress me either; async.c is\n> currently 900 lines, about 2.5 times the size of sinvaladt.c which is\n> the guts of SI message passing. I think it's a good bet that a SI-like\n> notify module would be much smaller than async.c is now; it's certainly\n> unlikely to be significantly larger.\n> \n> The queue limit problem is a valid argument, but it's the only valid\n> complaint IMHO; and it seems a reasonable tradeoff to make for the\n> other advantages.\n\nI am just not excited about it. What do others think?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Tue, 2 Jul 2002 17:12:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited)" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Is disk i/o a real performance\n> penalty for notify, and is performance a huge issue for notify anyway,\n\nYes, and yes. I have used NOTIFY in production applications, and I know\nthat performance is an issue.\n\n>> The queue limit problem is a valid argument, but it's the only valid\n>> complaint IMHO; and it seems a reasonable tradeoff to make for the\n>> other advantages.\n\nBTW, it occurs to me that as long as we make this an independent message\nbuffer used only for NOTIFY (and *not* try to merge it with SI), we\ndon't have to put up with overrun-reset behavior. The overrun reset\napproach is useful for SI because there are only limited times when\nwe are prepared to handle SI notification in the backend work cycle.\nHowever, I think a self-contained NOTIFY mechanism could be much more\nflexible about when it will remove messages from the shared buffer.\nConsider this:\n\n1. To send NOTIFY: grab write lock on shared-memory circular buffer.\nIf enough space, insert message, release lock, send signal, done.\nIf not enough space, release lock, send signal, sleep some small\namount of time, and then try again. (Hard failure would occur only\nif the proposed message size exceeds the buffer size; as long as we\nmake the buffer size a parameter, this is the DBA's fault not ours.)\n\n2. On receipt of signal: grab read lock on shared-memory circular\nbuffer, copy all data up to write pointer into private memory,\nadvance my (per-process) read pointer, release lock. This would be\nsafe to do pretty much anywhere we're allowed to malloc more space,\nso it could be done say at the same points where we check for cancel\ninterrupts. Therefore, the expected time before the shared buffer\nis emptied after a signal is pretty small.\n\nIn this design, if someone sits in a transaction for a long time,\nthere is no risk of shared memory overflow; that backend's private\nmemory for not-yet-reported NOTIFYs could grow large, but that's\nhis problem. (We could avoid unnecessary growth by not storing\nmessages that don't correspond to active LISTENs for that backend.\nIndeed, a backend with no active LISTENs could be left out of the\ncircular buffer participation list altogether.)\n\nWe'd need to separate this processing from the processing that's used to\nforce SI queue reading (dz's old patch), so we'd need one more signal\ncode than we use now. But we do have SIGUSR1 available.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Jul 2002 17:35:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited) " }, { "msg_contents": "\nLet me tell you what would be really interesting. If we didn't report\nthe pid of the notifying process and we didn't allow arbitrary strings\nfor notify (just pg_class relation names), we could just add a counter\nto pg_class that is updated for every notify. If a backend is\nlistening, it remembers the counter at listen time, and on every commit\nchecks the pg_class counter to see if it has incremented. That way,\nthere is no queue, no shared memory, and there is no scanning. You just\npull up the cache entry for pg_class and look at the counter.\n\nOne problem is that pg_class would be updated more frequently. Anyway,\njust an idea.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Is disk i/o a real performance\n> > penalty for notify, and is performance a huge issue for notify anyway,\n> \n> Yes, and yes. I have used NOTIFY in production applications, and I know\n> that performance is an issue.\n> \n> >> The queue limit problem is a valid argument, but it's the only valid\n> >> complaint IMHO; and it seems a reasonable tradeoff to make for the\n> >> other advantages.\n> \n> BTW, it occurs to me that as long as we make this an independent message\n> buffer used only for NOTIFY (and *not* try to merge it with SI), we\n> don't have to put up with overrun-reset behavior. The overrun reset\n> approach is useful for SI because there are only limited times when\n> we are prepared to handle SI notification in the backend work cycle.\n> However, I think a self-contained NOTIFY mechanism could be much more\n> flexible about when it will remove messages from the shared buffer.\n> Consider this:\n> \n> 1. To send NOTIFY: grab write lock on shared-memory circular buffer.\n> If enough space, insert message, release lock, send signal, done.\n> If not enough space, release lock, send signal, sleep some small\n> amount of time, and then try again. (Hard failure would occur only\n> if the proposed message size exceeds the buffer size; as long as we\n> make the buffer size a parameter, this is the DBA's fault not ours.)\n> \n> 2. On receipt of signal: grab read lock on shared-memory circular\n> buffer, copy all data up to write pointer into private memory,\n> advance my (per-process) read pointer, release lock. This would be\n> safe to do pretty much anywhere we're allowed to malloc more space,\n> so it could be done say at the same points where we check for cancel\n> interrupts. Therefore, the expected time before the shared buffer\n> is emptied after a signal is pretty small.\n> \n> In this design, if someone sits in a transaction for a long time,\n> there is no risk of shared memory overflow; that backend's private\n> memory for not-yet-reported NOTIFYs could grow large, but that's\n> his problem. (We could avoid unnecessary growth by not storing\n> messages that don't correspond to active LISTENs for that backend.\n> Indeed, a backend with no active LISTENs could be left out of the\n> circular buffer participation list altogether.)\n> \n> We'd need to separate this processing from the processing that's used to\n> force SI queue reading (dz's old patch), so we'd need one more signal\n> code than we use now. But we do have SIGUSR1 available.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Tue, 2 Jul 2002 21:03:34 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited)" }, { "msg_contents": "On Tuesday 02 July 2002 06:03 pm, Bruce Momjian wrote:\n> Let me tell you what would be really interesting. If we didn't report\n> the pid of the notifying process and we didn't allow arbitrary strings\n> for notify (just pg_class relation names), we could just add a counter\n> to pg_class that is updated for every notify. If a backend is\n> listening, it remembers the counter at listen time, and on every commit\n> checks the pg_class counter to see if it has incremented. That way,\n> there is no queue, no shared memory, and there is no scanning. You just\n> pull up the cache entry for pg_class and look at the counter.\n>\n> One problem is that pg_class would be updated more frequently. Anyway,\n> just an idea.\n\nI think that currently a lot of people use select() (after all, it's mentioned \nin the docs) in the frontend to determine when a notify comes into a \nlistening backend. If the backend only checks on commit, and the backend is \nlargely idle except for notify processing, might it be a while before the \nfrontend realizes that a notify was sent?\n\nRegards,\n\tJeff\n\n\n\n>\n> ---------------------------------------------------------------------------\n>\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Is disk i/o a real performance\n> > > penalty for notify, and is performance a huge issue for notify anyway,\n> >\n> > Yes, and yes. I have used NOTIFY in production applications, and I know\n> > that performance is an issue.\n> >\n> > >> The queue limit problem is a valid argument, but it's the only valid\n> > >> complaint IMHO; and it seems a reasonable tradeoff to make for the\n> > >> other advantages.\n> >\n> > BTW, it occurs to me that as long as we make this an independent message\n> > buffer used only for NOTIFY (and *not* try to merge it with SI), we\n> > don't have to put up with overrun-reset behavior. The overrun reset\n> > approach is useful for SI because there are only limited times when\n> > we are prepared to handle SI notification in the backend work cycle.\n> > However, I think a self-contained NOTIFY mechanism could be much more\n> > flexible about when it will remove messages from the shared buffer.\n> > Consider this:\n> >\n> > 1. To send NOTIFY: grab write lock on shared-memory circular buffer.\n> > If enough space, insert message, release lock, send signal, done.\n> > If not enough space, release lock, send signal, sleep some small\n> > amount of time, and then try again. (Hard failure would occur only\n> > if the proposed message size exceeds the buffer size; as long as we\n> > make the buffer size a parameter, this is the DBA's fault not ours.)\n> >\n> > 2. On receipt of signal: grab read lock on shared-memory circular\n> > buffer, copy all data up to write pointer into private memory,\n> > advance my (per-process) read pointer, release lock. This would be\n> > safe to do pretty much anywhere we're allowed to malloc more space,\n> > so it could be done say at the same points where we check for cancel\n> > interrupts. Therefore, the expected time before the shared buffer\n> > is emptied after a signal is pretty small.\n> >\n> > In this design, if someone sits in a transaction for a long time,\n> > there is no risk of shared memory overflow; that backend's private\n> > memory for not-yet-reported NOTIFYs could grow large, but that's\n> > his problem. (We could avoid unnecessary growth by not storing\n> > messages that don't correspond to active LISTENs for that backend.\n> > Indeed, a backend with no active LISTENs could be left out of the\n> > circular buffer participation list altogether.)\n> >\n> > We'd need to separate this processing from the processing that's used to\n> > force SI queue reading (dz's old patch), so we'd need one more signal\n> > code than we use now. But we do have SIGUSR1 available.\n> >\n> > \t\t\tregards, tom lane\n\n\n\n", "msg_date": "Tue, 2 Jul 2002 19:02:58 -0700", "msg_from": "Jeff Davis <list-pgsql-hackers@empires.org>", "msg_from_op": true, "msg_subject": "Re: listen/notify argument (old topic revisited)" }, { "msg_contents": "Jeff Davis wrote:\n> On Tuesday 02 July 2002 06:03 pm, Bruce Momjian wrote:\n> > Let me tell you what would be really interesting. If we didn't report\n> > the pid of the notifying process and we didn't allow arbitrary strings\n> > for notify (just pg_class relation names), we could just add a counter\n> > to pg_class that is updated for every notify. If a backend is\n> > listening, it remembers the counter at listen time, and on every commit\n> > checks the pg_class counter to see if it has incremented. That way,\n> > there is no queue, no shared memory, and there is no scanning. You just\n> > pull up the cache entry for pg_class and look at the counter.\n> >\n> > One problem is that pg_class would be updated more frequently. Anyway,\n> > just an idea.\n> \n> I think that currently a lot of people use select() (after all, it's mentioned \n> in the docs) in the frontend to determine when a notify comes into a \n> listening backend. 
If the backend only checks on commit, and the backend is \n largely idle except for notify processing, might it be a while before the \n frontend realizes that a notify was sent?\n\nI meant to check exactly when it does now; when a query completes.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Tue, 2 Jul 2002 22:10:05 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited)" }, { "msg_contents": "> Of course, a shared memory system probably is going to either do it\n> sequentially or have its own index issues, so I don't see a huge\n> advantage to going to shared memory, and I do see extra code and a queue\n> limit.\n\nIs a shared memory implementation going to play silly buggers with the Win32\nport?\n\nChris\n\n\n\n", "msg_date": "Wed, 3 Jul 2002 14:20:14 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited)" }, { "msg_contents": "On Wed, 2002-07-03 at 08:20, Christopher Kings-Lynne wrote:\n> > Of course, a shared memory system probably is going to either do it\n> > sequentially or have its own index issues, so I don't see a huge\n> > advantage to going to shared memory, and I do see extra code and a queue\n> > limit.\n> \n> Is a shared memory implementation going to play silly buggers with the Win32\n> port?\n\nPerhaps this is a good place to introduce anonymous mmap ?\n\nIs there a way to grow anonymous mmap on demand ?\n\n----------------\nHannu\n\n\n\n", "msg_date": "03 Jul 2002 12:21:32 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited)" }, { "msg_contents": "On Tue, 2002-07-02 at 
23:35, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Is disk i/o a real performance\n> > penalty for notify, and is performance a huge issue for notify anyway,\n> \n> Yes, and yes. I have used NOTIFY in production applications, and I know\n> that performance is an issue.\n> \n> >> The queue limit problem is a valid argument, but it's the only valid\n> >> complaint IMHO; and it seems a reasonable tradeoff to make for the\n> >> other advantages.\n> \n> BTW, it occurs to me that as long as we make this an independent message\n> buffer used only for NOTIFY (and *not* try to merge it with SI), we\n> don't have to put up with overrun-reset behavior. The overrun reset\n> approach is useful for SI because there are only limited times when\n> we are prepared to handle SI notification in the backend work cycle.\n> However, I think a self-contained NOTIFY mechanism could be much more\n> flexible about when it will remove messages from the shared buffer.\n> Consider this:\n> \n> 1. To send NOTIFY: grab write lock on shared-memory circular buffer.\n\nAre you planning to have one circular buffer per listening backend ?\n\nWould that not be waste of space for large number of backends with long\nnotify arguments ?\n\n--------------\nHannu\n\n\n\n", "msg_date": "03 Jul 2002 12:53:25 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited)" }, { "msg_contents": "On Tue, 2002-07-02 at 17:12, Bruce Momjian wrote:\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Of course, a shared memory system probably is going to either do it\n> > > sequentailly or have its own index issues, so I don't see a huge\n> > > advantage to going to shared memory, and I do see extra code and a queue\n> > > limit.\n> > \n> > Disk I/O vs. no disk I/O isn't a huge advantage? 
Come now.\n> \n> My assumption is that it throws to disk as backing store, which seems\n> better to me than dropping the notifies. Is disk i/o a real performance\n> penalty for notify, and is performance a huge issue for notify anyway,\n> assuming autovacuum?\n\nFor me, performance would be one of the only concerns. Currently I use\ntwo methods of finding changes, one is NOTIFY which directs frontends to\nreload various sections of data, the second is a table which holds a\nQUEUE of actions to be completed (which must be tracked, logged and\ncompleted).\n\nIf performance wasn't a concern, I'd simply use more RULES which insert\nrequests into my queue table.\n\n\n\n", "msg_date": "03 Jul 2002 07:18:44 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited)" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Is a shared memory implementation going to play silly buggers with the Win32\n> port?\n\nNo. Certainly no more so than shared disk buffers or the SI message\nfacility, both of which are *not* optional.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Jul 2002 09:30:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited) " }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Perhaps this is a good place to introduce anonymous mmap ?\n\nI don't think so; it just adds a portability variable without buying\nus anything.\n\n> Is there a way to grow anonymous mmap on demand ?\n\nNope. Not portably, anyway. 
For instance, the HPUX man page for mmap\nsayeth:\n\n If the size of the mapped file changes after the call to mmap(), the\n effect of references to portions of the mapped region that correspond\n to added or removed portions of the file is unspecified.\n\nDynamically re-mmapping after enlarging the file might work, but there\nare all sorts of interesting constraints on that too; it looks like\nyou'd have to somehow synchronize things so that all the backends do it\nat the exact same time.\n\nOn the whole I see no advantage to be gained here, compared to the\nimplementation I sketched earlier with a fixed-size shared buffer and\nenlargeable internal buffers in backends.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Jul 2002 09:48:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited) " }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Are you planning to have one circular buffer per listening backend ?\n\nNo; one circular buffer, period.\n\nEach backend would also internally buffer notifies that it hadn't yet\ndelivered to its client --- but since the time until delivery could vary\ndrastically across clients, I think that's reasonable. 
I'd expect\nclients that are using LISTEN to avoid doing long-running transactions,\nso under normal circumstances the internal buffer should not grow very\nlarge.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Jul 2002 09:51:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited) " }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> There could a little more smartness here to avoid unneccessary copying\n> (not just storing) of not-listened-to data.\n\nYeah, I was wondering about that too.\n\n> I guess that depending on the circumstances this can be either faster or\n> slower than copying them all in one memmove.\n\nThe more interesting question is whether it's better to hold the read\nlock on the shared buffer for the minimum possible amount of time; if\nso, we'd be better off to pull the data from the buffer as quickly as\npossible and then sort it later. Determining whether we are interested\nin a particular notify name will probably take a probe into a (local)\nhashtable, so it won't be super-quick. However, I think we could\narrange for readers to use a sharable lock on the buffer, so having them\nexpend that processing while holding the read lock might be acceptable.\n\nMy guess is that the actual volume of data going through the notify\nmechanism isn't going to be all that large, and so avoiding one memcpy\nstep for it isn't going to be all that exciting. I think I'd lean\ntowards minimizing the time spent holding the shared lock, instead.\nBut it's a judgment call.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Jul 2002 10:30:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited) " }, { "msg_contents": "On Tue, Jul 02, 2002 at 05:35:42PM -0400, Tom Lane wrote:\n> 1. 
To send NOTIFY: grab write lock on shared-memory circular buffer.\n> If enough space, insert message, release lock, send signal, done.\n> If not enough space, release lock, send signal, sleep some small\n> amount of time, and then try again. (Hard failure would occur only\n> if the proposed message size exceeds the buffer size; as long as we\n> make the buffer size a parameter, this is the DBA's fault not ours.)\n\nHow would this interact with the current transactional behavior of\nNOTIFY? At the moment, executing a NOTIFY command only stores the\npending notification in a List in the backend you're connected to;\nwhen the current transaction commits, the NOTIFY is actually\nprocessed (stored in pg_listener, SIGUSR2 sent, etc) -- if the\ntransaction is rolled back, the NOTIFY isn't sent. If we do the\nactual insertion when the NOTIFY is executed, I don't see a simple\nway to get this behavior...\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n\n", "msg_date": "Wed, 3 Jul 2002 11:18:00 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited)" }, { "msg_contents": "On Wed, 2002-07-03 at 15:51, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > Are you planning to have one circular buffer per listening backend ?\n> \n> No; one circular buffer, period.\n> \n> Each backend would also internally buffer notifies that it hadn't yet\n> delivered to its client --- but since the time until delivery could vary\n> drastically across clients, I think that's reasonable. I'd expect\n> clients that are using LISTEN to avoid doing long-running transactions,\n> so under normal circumstances the internal buffer should not grow very\n> large.\n> \n> \t\t\tregards, tom lane\n\n> 2. 
On receipt of signal: grab read lock on shared-memory circular\n> buffer, copy all data up to write pointer into private memory,\n> advance my (per-process) read pointer, release lock. This would be\n> safe to do pretty much anywhere we're allowed to malloc more space,\n> so it could be done say at the same points where we check for cancel\n> interrupts. Therefore, the expected time before the shared buffer\n> is emptied after a signal is pretty small.\n>\n> In this design, if someone sits in a transaction for a long time,\n> there is no risk of shared memory overflow; that backend's private\n> memory for not-yet-reported NOTIFYs could grow large, but that's\n> his problem. (We could avoid unnecessary growth by not storing\n> messages that don't correspond to active LISTENs for that backend.\n> Indeed, a backend with no active LISTENs could be left out of the\n> circular buffer participation list altogether.)\n\nThere could be a little more smartness here to avoid unnecessary copying\n(not just storing) of not-listened-to data. Perhaps each notify message\ncould be stored as\n\n(ptr_to_next_blk,name,data)\n\nso that the receiving backend could skip uninteresting (not-listened-to)\nmessages. 
\n\nI guess that depending on the circumstances this can be either faster or\nslower than copying them all in one memmove.\n\nThis will be slower if all messages are interesting, this will be an\noverall win if there is one backend listening to messages with big\ndataload and lots of other backends listening to relatively small\nmessages.\n\nThere are scenarios where some more complex structure will be faster (a\nsparse communication structure, say 1000 backends each listening to 1\nname and notifying ten others - each backend has to (manually ;) check\n1000 messages to find the one that is for it) but your proposed\nstructure seems good enough for most common uses (and definitely better\nthan the current one)\n\n---------------------\nHannu\n\n\n\n", "msg_date": "03 Jul 2002 17:20:47 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited)" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> but we are already attracting a thundering herd by\n> sending a signal to all _possibly_ interested backends at the same time\n\nThat's why it's so important that the readers use a sharable lock. The\nonly thing they'd be locking out is some new writer trying to send (yet\nanother) notify.\n\nAlso, it's a pretty important optimization to avoid signaling backends\nthat are not listening for any notifies at all.\n\nWe could improve on it further by keeping info in shared memory about\nwhich backends are listening for which notify names, but I don't see\nany good way to do that in a fixed amount of space.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Jul 2002 11:48:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited) " }, { "msg_contents": "nconway@klamath.dyndns.org (Neil Conway) writes:\n> How would this interact with the current transactional behavior of\n> NOTIFY?\n\nNo change. 
Senders would only insert notify messages into the shared\nbuffer when they commit (uncommited notifies would live in a list in\nthe sender, same as now). Readers would be expected to remove messages\nfrom the shared buffer ASAP after receiving the signal, but they'd\nstore those messages internally and not forward them to the client until\nsuch time as they're not inside a transaction block.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Jul 2002 11:51:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited) " }, { "msg_contents": "On Wed, 2002-07-03 at 16:30, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > There could a little more smartness here to avoid unneccessary copying\n> > (not just storing) of not-listened-to data.\n> \n> Yeah, I was wondering about that too.\n> \n> > I guess that depending on the circumstances this can be either faster or\n> > slower than copying them all in one memmove.\n> \n> The more interesting question is whether it's better to hold the read\n> lock on the shared buffer for the minimum possible amount of time;\n\nOTOH, we may decide that getting a notify ASAP is not a priority and\njust go on doing what we did before if we can't get the lock and try\nagain the next time around.\n\nThis may have some pathological behaviours (starving some backends who\nalways come late ;), but we are already attracting a thundering herd by\nsending a signal to all _possibly_ interested backends at the same time\ntime.\n\nKeeping a list of who listens to what can solve this problem (but only\nin case of sparse listening habits).\n\n-----------------\nHannu\n\n\n\n", "msg_date": "03 Jul 2002 18:09:34 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited)" }, { "msg_contents": "On Wed, 2002-07-03 at 16:30, Tom Lane wrote:\n> My guess is that the actual volume of data 
going through the notify\n> mechanism isn't going to be all that large, and so avoiding one memcpy\n> step for it isn't going to be all that exciting. \n\nIt may become large if we will have an implementation which can cope\nwell with large volumes :)\n\n> I think I'd lean towards minimizing the time spent holding the\n> shared lock, instead.\n\nIn case you are waiting for just one message out of 1000 it may still be\nfaster to do selective copying.\n\nIt is possible that 1000 strcmp's + 1000 pointer traversals are faster\nthan one big memcpy, no ?\n\n> But it's a judgment call.\n\nIf we have a clean C interface + separate PG binding we may write\nseveral different modules for different scenarios and let the user\nchoose (even at startup time) - code optimized for messages that\neverybody wants is bound to be suboptimal for the case when they only want 1\nout of 1000 messages. Same for different message sizes.\n\n-------------\nHannu\n\n\n\n", "msg_date": "03 Jul 2002 18:18:39 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited)" }, { "msg_contents": "On Wed, 2002-07-03 at 22:43, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > On Wed, 2002-07-03 at 17:48, Tom Lane wrote:\n> >> That's why it's so important that the readers use a sharable lock. The\n> >> only thing they'd be locking out is some new writer trying to send (yet\n> >> another) notify.\n> \n> > But there must be some way to communicate the positions of read pointers\n> > of all backends for managing the free space, lest we are unable to know\n> > when the buffer is full. \n> \n> Right. But we play similar games already with the existing SI buffer,\n> to wit:\n> \n> Writers grab the controlling lock LW_EXCLUSIVE, thereby having sole\n> access; in this state it's safe for them to examine all the read\n> pointers as well as examine/update the write pointer (and of course\n> write data into the buffer itself). 
The furthest-back read pointer\n> limits what they can write.\n\nIt means a full seq scan over pointers ;)\n\n> Readers grab the controlling lock LW_SHARED, thereby ensuring there\n> is no writer (but there may be other readers). In this state they\n> may examine the write pointer (to see how much data there is) and\n> may examine and update their own read pointer. This is safe and\n> useful because no reader cares about any other's read pointer.\n\nOK. Now, how will we introduce transactional behaviour to this scheme ?\n\nIt is easy to save transaction id with each notify message, but is there\na quick way for backends to learn when these transactions commit/abort\nor if they have done either in the past ?\n\nIs there already a good common facility for that, or do I just need to\nexamine some random tuples in hope of finding out ;)\n\n--------------\nHannu\n\n\n\n\n", "msg_date": "03 Jul 2002 21:56:38 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited)" }, { "msg_contents": "On Wed, 2002-07-03 at 17:48, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > but we are already attracting a thundering herd by\n> > sending a signal to all _possibly_ interested backends at the same time\n> \n> That's why it's so important that the readers use a sharable lock. The\n> only thing they'd be locking out is some new writer trying to send (yet\n> another) notify.\n\nBut there must be some way to communicate the positions of read pointers\nof all backends for managing the free space, lest we are unable to know\nwhen the buffer is full. 
\n\nI imagined that at least this info was kept in shared memory.\n\n> Also, it's a pretty important optimization to avoid signaling backends\n> that are not listening for any notifies at all.\n\nBut of little help when they are all listening to something ;)\n\n> We could improve on it further by keeping info in shared memory about\n> which backends are listening for which notify names, but I don't see\n> any good way to do that in a fixed amount of space.\n\nA compromise would be to do it for some fixed amount of mem (say 10\nnames/backend) and assume \"all\" if out of that memory.\n\nNotifying everybody has less bad effects when backends listen to more\nnames and keeping lists is pure overhead when all listeners listen to\nall names.\n\n--------------\nHannu\n\n\n", "msg_date": "03 Jul 2002 19:25:44 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited)" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> On Wed, 2002-07-03 at 17:48, Tom Lane wrote:\n>> That's why it's so important that the readers use a sharable lock. The\n>> only thing they'd be locking out is some new writer trying to send (yet\n>> another) notify.\n\n> But there must be some way to communicate the positions of read pointers\n> of all backends for managing the free space, lest we are unable to know\n> when the buffer is full. \n\nRight. But we play similar games already with the existing SI buffer,\nto wit:\n\nWriters grab the controlling lock LW_EXCLUSIVE, thereby having sole\naccess; in this state it's safe for them to examine all the read\npointers as well as examine/update the write pointer (and of course\nwrite data into the buffer itself). The furthest-back read pointer\nlimits what they can write.\n\nReaders grab the controlling lock LW_SHARED, thereby ensuring there\nis no writer (but there may be other readers). 
In this state they\nmay examine the write pointer (to see how much data there is) and\nmay examine and update their own read pointer. This is safe and\nuseful because no reader cares about any other's read pointer.\n\n>> We could improve on it further by keeping info in shared memory about\n>> which backends are listening for which notify names, but I don't see\n>> any good way to do that in a fixed amount of space.\n\n> A compromise would be to do it for some fixed amount of mem (say 10\n> names/backend) and assume \"all\" if out of that memory.\n\nI thought of that too, but it's not clear how much it'd help. The\nwriter would have to scan through all the per-reader data while holding\nthe write lock, which is not good for concurrency. On SMP hardware it\ncould actually be a net loss. Might be worth experimenting with though.\n\nYou could make a good reduction in the shared-memory space needed by\nstoring just a hash code for the interesting names, and not the names\nthemselves. (I'd also be inclined to include the hash code in the\ntransmitted message, so that readers could more quickly ignore\nuninteresting messages.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Jul 2002 13:43:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited) " }, { "msg_contents": "Tom Lane wrote:\n> themselves. (I'd also be inclined to include the hash code in the\n> transmitted message, so that readers could more quickly ignore\n> uninteresting messages.)\n\nDoesn't seem worth it, and how would the user know their hash; they\nalready have a C string for comparison. Do we have to handle possible\nhash collisions?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Wed, 3 Jul 2002 13:49:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited)" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> themselves. (I'd also be inclined to include the hash code in the\n>> transmitted message, so that readers could more quickly ignore\n>> uninteresting messages.)\n\n> Doesn't seem worth it, and how would the user know their hash;\n\nThis is not the user's problem; it is the writing backend's\nresponsibility to compute and add the hash. Basically we trade off some\nspace to compute the hash code once at the writer not N times at all the\nreaders.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Jul 2002 13:54:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited) " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> themselves. (I'd also be inclined to include the hash code in the\n> >> transmitted message, so that readers could more quickly ignore\n> >> uninteresting messages.)\n> \n> > Doesn't seem worth it, and how would the user know their hash;\n> \n> This is not the user's problem; it is the writing backend's\n> responsibility to compute and add the hash. Basically we trade off some\n> space to compute the hash code once at the writer not N times at all the\n> readers.\n\nOh, OK. When you said \"transmitted\", I thought you meant transmitted to\nthe client.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Wed, 3 Jul 2002 13:56:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited)" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n>> Right. But we play similar games already with the existing SI buffer,\n>> to wit:\n\n> It means a full seq scan over pointers ;)\n\nI have not seen any indication that the corresponding scan in the SI\ncode is a bottleneck --- and that has to scan over *all* backends,\nwithout even the opportunity to skip those that aren't LISTENing.\n\n> OK. Now, how will we introduce transactional behaviour to this scheme ?\n\nIt's no different from before --- notify messages don't get into the\nbuffer at all, until they're committed. See my earlier response to Neil.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2002 00:13:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: listen/notify argument (old topic revisited) " } ]
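The buffer protocol this thread converges on — one writer holding the lock exclusively, readers each advancing their own read pointer, with the furthest-back reader bounding how far the writer may go — can be sketched as a toy model. This Python sketch is illustrative only: the class and method names are invented, the ring is modeled as a growing log plus per-reader offsets, and a single mutex stands in for the LW_EXCLUSIVE/LW_SHARED protocol described above; it is not the actual PostgreSQL shared-memory code.

```python
import threading


class NotifyBuffer:
    """Toy model of the shared circular NOTIFY buffer (names invented)."""

    def __init__(self, capacity):
        self.capacity = capacity      # max undrained messages ("buffer size")
        self.log = []                 # stands in for the circular buffer
        self.read_ptrs = {}           # per-backend read pointer
        self.lock = threading.Lock()  # stands in for the controlling lock

    def register(self, backend_id):
        # A backend with no active LISTENs would simply never register,
        # so it never participates in the buffer at all.
        with self.lock:
            self.read_ptrs[backend_id] = len(self.log)

    def notify(self, message):
        # Writer: the furthest-back read pointer limits what we can write.
        # Returns False when the buffer is full (the real design would
        # release the lock, signal the readers, sleep briefly, and retry).
        with self.lock:
            oldest = min(self.read_ptrs.values(), default=len(self.log))
            if len(self.log) - oldest >= self.capacity:
                return False
            self.log.append(message)
            return True

    def drain(self, backend_id):
        # Reader: copy everything up to the write pointer into private
        # memory and advance this backend's read pointer.
        with self.lock:
            start = self.read_ptrs[backend_id]
            private = self.log[start:]
            self.read_ptrs[backend_id] = len(self.log)
            return private
```

With a capacity of 2 and two registered readers, a third notify() is refused until *both* readers drain: the slowest reader, not the fastest, frees space for the writer, which is exactly why the design keeps long-running transactions out of the shared buffer and buffers undelivered notifies in backend-private memory instead.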
[ { "msg_contents": "SQL92 requires named constraints to have names that are unique within\ntheir schema. Our past implementation did not require constraint names\nto be unique at all; as a compromise I suggested requiring constraint\nnames to be unique for any given relation. Rod Taylor's pending\npg_constraint patch implements that approach, but I'm beginning to have\nsecond thoughts about it.\n\nOne problem I see is that pg_constraint entries can *only* be associated\nwith relations; so the table has no way to represent constraints\nassociated with domains --- not to mention assertions, which aren't\nassociated with any table at all. I'm in no hurry to try to implement\nassertions, but domain constraints are definitely interesting. We'd\nprobably have to put domain constraints into a separate table, which\nis possible but not very attractive.\n\nAt the SQL level, constraint names seem to be used in only two\ncontexts: DROP CONSTRAINT subcommands of ALTER TABLE and ALTER DOMAIN\ncommands, and SET CONSTRAINTS ... IMMEDIATE/DEFERRED. In the DROP\ncontext there's no real need to identify constraints globally, since\nthe associated table or domain name is available, but in SET CONSTRAINTS\nthe syntax doesn't include a table name.\n\nOur current implementation of SET CONSTRAINTS changes the behavior of\nall constraints matching the specified name, which is pretty bogus\ngiven the lack of uniqueness. If we don't go over to the SQL92 approach\nthen I think we need some other way of handling SET CONSTRAINTS that\nallows a more exact specification of the target constraint.\n\nA considerable advantage of per-relation constraint names is that a new\nunique name can be assigned for a nameless constraint while holding only\na lock on the target relation. We'd need a global lock to create unique\nconstraint names in the SQL92 semantics. The only way I can see around\nthat would be to use newoid(), or perhaps a dedicated sequence\ngenerator, to construct constraint names. 
The resulting unpredictable\nconstraint names would be horribly messy to deal with in the regression\ntests, so I'm not eager to do this.\n\nEven per-relation uniqueness has some unhappiness: if you have a domain\nwith a named constraint, and you try to use this domain for two columns\nof a relation, you'll get a constraint name conflict. Inheriting\nsimilar constraint names from two different parent relations is also\ntroublesome. We could get around these either by going back to the\nold no-uniqueness approach, or by being willing to alter constraint\nnames to make them unique (eg, by tacking on \"_nnn\" when needed).\nBut this doesn't help SET CONSTRAINTS.\n\nAt the moment I don't much like any of the alternatives. Ideas anyone?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Jul 2002 14:38:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Scope of constraint names" }, { "msg_contents": "> One problem I see is that pg_constraint entries can *only* be associated\n> with relations; so the table has no way to represent constraints\n> associated with domains --- not to mention assertions, which aren't\n\nIt's ugly, but one could make the relid 0, and add a typeid which is\nnon-zero to represent a constraint against a domain. Relation\nconstraints have typeid 0 and relid as a normal number.\n\nObviously I prefer unique constraint names mostly for my users. For\nsome reason they tend to try to make assumptions about a constraint\ngiven the name and have been fooled about what the constraint actually\nis more than once due to 'having seen it before elsewhere'.\n\nIs applying a lock on the pg_constraint table really that bad during\ncreation? 
Sure, you could only make one constraint at a time -- but\nthat's the same with relations, types, and a fair number of other things\nthat are usually created at the same time (or same transaction) as most\nconstraints will be.\n\n\n\n\n\n", "msg_date": "02 Jul 2002 20:15:11 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Scope of constraint names" }, { "msg_contents": "> One problem I see is that pg_constraint entries can *only* be associated\n> with relations; so the table has no way to represent constraints\n> associated with domains --- not to mention assertions, which aren't\n> associated with any table at all. I'm in no hurry to try to implement\n> assertions, but domain constraints are definitely interesting. We'd\n> probably have to put domain constraints into a separate table, which\n> is possible but not very attractive.\n\nHmmm...there must be some sort of schema that can do both in one table?\nEven something nasty like:\n\nrefid Oid of relation or domain\ntype 'r' for relation and 'd' for domain\n...\n\n> Our current implementation of SET CONSTRAINTS changes the behavior of\n> all constraints matching the specified name, which is pretty bogus\n> given the lack of uniqueness. If we don't go over to the SQL92 approach\n> then I think we need some other way of handling SET CONSTRAINTS that\n> allows a more exact specification of the target constraint.\n\nIf we do go over to SQL92, what kind of problems will people have reloading\ntheir old schema? Should <unnamed> be excluded from the uniqueness\ncheck...?\n\n> A considerable advantage of per-relation constraint names is that a new\n> unique name can be assigned for a nameless constraint while holding only\n> a lock on the target relation. We'd need a global lock to create unique\n> constraint names in the SQL92 semantics.\n\nSurely adding a foreign key is what you'd call a 'rare' event in a database,\noccurring once for millions of queries? 
Hence, we shouldn't worry\nabout it too much?\n\n> The only way I can see around\n> that would be to use newoid(), or perhaps a dedicated sequence\n> generator, to construct constraint names. The resulting unpredictable\n> constraint names would be horribly messy to deal with in the regression\n> tests, so I'm not eager to do this.\n\nSurely you do the ol' loop and test sort of thing...?\n\n> Even per-relation uniqueness has some unhappiness: if you have a domain\n> with a named constraint, and you try to use this domain for two columns\n> of a relation, you'll get a constraint name conflict. Inheriting\n> similar constraint names from two different parent relations is also\n> troublesome. We could get around these either by going back to the\n> old no-uniqueness approach, or by being willing to alter constraint\n> names to make them unique (eg, by tacking on \"_nnn\" when needed).\n> But this doesn't help SET CONSTRAINTS.\n>\n> At the moment I don't much like any of the alternatives. Ideas anyone?\n\nIf they're both equally evil, then maybe we should consider going the SQL92\nway, for compatibility's sake?\n\nChris\n\n\n\n", "msg_date": "Wed, 3 Jul 2002 16:38:10 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Scope of constraint names" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n>> A considerable advantage of per-relation constraint names is that a new\n>> unique name can be assigned for a nameless constraint while holding only\n>> a lock on the target relation. We'd need a global lock to create unique\n>> constraint names in the SQL92 semantics.\n\n> Surely adding a foreign key is what you'd call a 'rare' event in a database,\n> occurring once for millions of queries? 
Hence, we shouldn't worry\n> about it too much?\n\nI don't buy that argument even for foreign keys --- and remember that\npg_constraint will also hold entries for CHECK, UNIQUE, and PRIMARY KEY\nconstraints. I don't want to have to take a global lock whenever we\ncreate an index.\n\n>> The only way I can see around\n>> that would be to use newoid(), or perhaps a dedicated sequence\n>> generator, to construct constraint names. The resulting unpredictable\n>> constraint names would be horribly messy to deal with in the regression\n>> tests, so I'm not eager to do this.\n\n> Surely you do the ol' loop and test sort of thing...?\n\nHow is a static 'expected' file going to do loop-and-test?\n\nOne possible answer to that is to report all unnamed constraints as\n\"<unnamed>\" in error messages, even though they'd have distinct names\ninternally. I don't much care for that approach though, since it might\nmake it hard for users to figure out which internal name to mention in\nDROP CONSTRAINT. But it'd keep the expected regression output stable.\n\n> If they're both equally evil, then maybe we should consider going the SQL92\n> way, for compatibilities sake?\n\nIf the spec didn't seem so brain-damaged on this point, I'd be more\neager to follow it. I can't see any advantage in the way they chose\nto do it. But yeah, I'd lean to following the spec, if we can think\nof a way around the locking and regression testing issues it creates.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Jul 2002 09:41:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Scope of constraint names " }, { "msg_contents": "> I don't buy that argument even for foreign keys --- and remember that\n> pg_constraint will also hold entries for CHECK, UNIQUE, and PRIMARY KEY\n> constraints. 
I don't want to have to take a global lock whenever we\n> create an index.\n\nI don't understand why a global lock is necessary -- and not simply a\nlock on the pg_constraint table and the relations the constraint is\napplied to (foreign key locks two, all others one).\n\n\n\n", "msg_date": "03 Jul 2002 17:40:30 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Scope of constraint names" }, { "msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n>> I don't want to have to take a global lock whenever we\n>> create an index.\n\n> I don't understand why a global lock is necessary --\n\nTo be sure we are creating a unique constraint name.\n\n> and not simply a lock on the pg_constraint table\n\nIn this context, a lock on pg_constraint *is* global, because it will\nmean that no one else can be creating an index on some other table.\nThey'd need to hold that same lock to ensure that *their* chosen\nconstraint name is unique.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2002 00:29:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Scope of constraint names " }, { "msg_contents": "> > and not simply a lock on the pg_constraint table\n> \n> In this context, a lock on pg_constraint *is* global, because it will\n> mean that no one else can be creating an index on some other table.\n> They'd need to hold that same lock to ensure that *their* chosen\n> constraint name is unique.\n\nSo I am understanding correctly.\n\nI think it would be a rare event to have more than one person changing\nthe database structure at the same time. Anyway, the index example is a\nbad example isn't it? 
It already takes a lock on pg_class which is\njust as global.\n\nCheck constraints and foreign key constraints are two that I can see\naffected in the manner described.\n\n\nAnyway, my current implementation has constraint names unique to the\nrelation only -- not the namespace, although my locking may be excessive\nin that area.\n\n\n\n", "msg_date": "04 Jul 2002 07:59:49 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Scope of constraint names" }, { "msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n> I think it would be a rare event to have more than one person changing\n> the database structure at the same time.\n\nI don't buy this assumption --- consider for example two clients\ncreating temp tables.\n\n> Anyway, the index example is a\n> bad example isn't it? It already takes a lock on pg_class which is\n> just as global.\n\nAu contraire; there is no exclusive lock needed at present.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2002 12:42:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Scope of constraint names " }, { "msg_contents": "> > Anyway, the index example is a\n> > bad example isn't it? It already takes a lock on pg_class which is\n> > just as global.\n> \n> Au contraire; there is no exclusive lock needed at present.\n\nOh.. I thought pg_class was locked for any relation creation. If that's\nnot the case, then I wouldn't want constraints to impose additional\nlimitations.\n\nMisunderstanding on my part.\n\n\n\n", "msg_date": "04 Jul 2002 17:27:27 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Scope of constraint names" }, { "msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n> bad example isn't it? It already takes a lock on pg_class which is\n> just as global.\n>> \n>> Au contraire; there is no exclusive lock needed at present.\n\n> Oh.. 
I thought pg_class was locked for any relation creation.\n\nWe only take a standard writer's lock (RowExclusiveLock) on system\ncatalogs that we modify. This is mainly to avoid problems from\na concurrent VACUUM FULL moving tuples around --- it doesn't prevent\nother processes from inserting/updating/deleting other rows in the\nsame catalog.\n\nMost of the interesting locking done for DDL operations is done on\nthe relation being defined or modified, not on the system catalogs.\nSo we have full concurrency in terms of being able to do DDL operations\non different relations at the same time. That's what I don't want to\ngive up.\n\n\nIf we want to go over to SQL-spec-compatible naming of constraints\n(ie, unique within schemas), I think the only way to cope is to generate\nunique names for nameless constraints using the OID counter --- so\nthey'd really be, say, \"$8458321\" --- but when reporting constraint\nviolation errors, substitute \"<unnamed>\" or some other fixed string\nin the error report for any constraint having a name of this form.\nDoing the latter would keep the regression test expected outputs stable.\nUsing the OID counter (or a sequence generator) would avoid the locking\nproblem.\n\nAnother thing we need to think about in any case is coping with\nconstraints that are inherited from a parent table or from a domain\nthat's used as one or more columns' datatype. Whether we think\nconstraints should have per-schema or per-relation names, we lose\nanyway if we make multiple copies of such a constraint.\n\nPerhaps we need to distinguish \"original\" constraints (which could\nreasonably be expected to have schema-wide-unique names) from \"derived\"\nconstraints, which are the check expressions we actually attach to\nindividual columns. I think we'd want to store the derived versions\nexplicitly (eg, with column numbers in Vars adjusted to match the\ncolumn that's supposed to be tested in a given table), but they'd not\nbe expected to have unique names. 
This leads to the idea that a\npg_constraint entry needs two name fields: its real name, which is\nunique but might just be an autogenerated \"$nnn\", and its logical name\nthat we actually report in constraint violation messages. The logical\nname would be inherited from the \"original\" constraint when building\na \"derived\" constraint. We'd also set up pg_depend dependencies to\nlink the derived constraints to their original. This would be needed\nto make ALTER DOMAIN DROP CONSTRAINT work.\n\nThoughts anyone?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2002 18:58:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Scope of constraint names " }, { "msg_contents": "Tom Lane writes:\n\n> A considerable advantage of per-relation constraint names is that a new\n> unique name can be assigned for a nameless constraint while holding only\n> a lock on the target relation. We'd need a global lock to create unique\n> constraint names in the SQL92 semantics.\n\nPresumably, the field pg_class.relchecks already keeps a count of the\nnumber of constraints, so it should be possible to assign numbers easily.\n\n> The only way I can see around that would be to use newoid(), or perhaps\n> a dedicated sequence generator, to construct constraint names. 
The\n> resulting unpredictable constraint names would be horribly messy to deal\n> with in the regression tests, so I'm not eager to do this.\n\nOr we simply assign constraint names explicitly in the regression tests.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n\n\n", "msg_date": "Sun, 7 Jul 2002 23:55:25 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Scope of constraint names" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> A considerable advantage of per-relation constraint names is that a new\n>> unique name can be assigned for a nameless constraint while holding only\n>> a lock on the target relation. We'd need a global lock to create unique\n>> constraint names in the SQL92 semantics.\n\n> Presumably, the field pg_class.relchecks already keeps a count of the\n> number of constraints, so it should be possible to assign numbers easily.\n\nBut pg_class.relchecks is per-relation --- how does it help you assign a\nglobally unique number?\n\n\nAfter much thought I am coming around to the conclusion that we should\nname constraints within-schemas (ie, there will be a schema OID column\nin pg_constraint), but *not require these names to be unique*. DROP\nCONSTRAINT, SET CONSTRAINTS, etc will act on all constraints matching\nthe target name, as they do now. 
This will create the minimum risk of\nbreaking existing database schemas, while still allowing us to move\nsome of the way towards SQL compliance --- in particular, SET\nCONSTRAINTS with a schema-qualified constraint name would work as the\nspec expects.\n\nWe would still take care to generate unique-within-a-relation names for\nnameless constraints, using the same code that exists now, but we'd not\nenforce this by means of a unique index on pg_constraint.\n\nA compromise between that and exact SQL semantics would be to enforce\nuniqueness of conname + connamespace + conrelid + contypid (the last\nbeing a column that links to pg_type for domain constraints; conrelid\nand contypid are each zero if not relevant). This would have the effect\nof making relation constraint names unique per-relation, and domain\nconstraint names separately unique per-domain, and also allowing global\nassertion names that are unique per-schema as in SQL92. This seems a\nlittle baroque to me, but maybe it will appeal to others.\n\nComments?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Jul 2002 14:12:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Scope of constraint names " } ]
[ { "msg_contents": " Hi,\n I know ORACLE has distributed database management features: such as\n replication. Can postgreSQL do this? Is any software available for this\npurpose for postgreSQL?\n\n Thanks,\n\nLiping\n\n\n\n", "msg_date": "Tue, 02 Jul 2002 14:47:49 -0700", "msg_from": "liping guo <liping.guo@acsatl.com>", "msg_from_op": true, "msg_subject": "Does postgreSQL have distributed database management?" } ]
[ { "msg_contents": "OK, this is what I'm seeing on FreeBSD/Alpha for libpq++. I haven't figured\nout how to build libpqxx yet.:\n\ngmake[3]: Entering directory\n`/home/chriskl/pgsql-head/src/interfaces/libpq++'\ng++ -O2 -g -Wall -fpic -DPIC -I../../../src/interfaces/libpq -I../../../src/\ninclude -c -o pgconnection.o pgconnection.cc -MMD\ncc1plus: warning:\n***\n*** The -O2 flag TRIGGERS KNOWN OPTIMIZER BUGS ON THIS PLATFORM\n***\n\ng++ -O2 -g -Wall -fpic -DPIC -I../../../src/interfaces/libpq -I../../../src/\ninclude -c -o pgdatabase.o pgdatabase.cc -MMD\ncc1plus: warning:\n***\n*** The -O2 flag TRIGGERS KNOWN OPTIMIZER BUGS ON THIS PLATFORM\n***\n\ng++ -O2 -g -Wall -fpic -DPIC -I../../../src/interfaces/libpq -I../../../src/\ninclude -c -o pgtransdb.o pgtransdb.cc -MMD\ncc1plus: warning:\n***\n*** The -O2 flag TRIGGERS KNOWN OPTIMIZER BUGS ON THIS PLATFORM\n***\n\ng++ -O2 -g -Wall -fpic -DPIC -I../../../src/interfaces/libpq -I../../../src/\ninclude -c -o pgcursordb.o pgcursordb.cc -MMD\ncc1plus: warning:\n***\n*** The -O2 flag TRIGGERS KNOWN OPTIMIZER BUGS ON THIS PLATFORM\n***\n\ng++ -O2 -g -Wall -fpic -DPIC -I../../../src/interfaces/libpq -I../../../src/\ninclude -c -o pglobject.o pglobject.cc -MMD\ncc1plus: warning:\n***\n*** The -O2 flag TRIGGERS KNOWN OPTIMIZER BUGS ON THIS PLATFORM\n***\n\nar cr libpq++.a `lorder pgconnection.o pgdatabase.o pgtransdb.o pgcursordb.o\npglobject.o | tsort`\nranlib libpq++.a\ng++ -O2 -g -Wall -fpic -DPIC -shared -Wl,-x,-soname,libpq++.so.4\npgconnection.o pgdatabase.o pgtransdb.o pgcursordb.o\nlobject.o -L../../../src/interfaces/libpq -lpq -R/home/chriskl/local/lib -\no libpq++.so.4\nrm -f libpq++.so\nln -s libpq++.so.4 libpq++.so\ngmake[3]: Leaving directory\n`/home/chriskl/pgsql-head/src/interfaces/libpq++'\n\n\n\n\n", "msg_date": "Wed, 3 Jul 2002 14:25:46 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "libpq++ build problems" }, { "msg_contents": "On Wed, 
Jul 03, 2002 at 02:25:46PM +0800, Christopher Kings-Lynne wrote:\n> OK, this is what I'm seeing on FreeBSD/Alpha for libpq++. \n\n[cut]\n[paste]\n\n> cc1plus: warning:\n> ***\n> *** The -O2 flag TRIGGERS KNOWN OPTIMIZER BUGS ON THIS PLATFORM\n> ***\n\nDoesn't say it doesn't work though... Have you tried running the\nresulting code?\n\n\n> I haven't figured out how to build libpqxx yet.:\n\nBasically, ./configure; make; make check; make install. You may have to\nuse configure options --with-postgres=/your/postgres/dir or its cousins.\nPlus, you'll also run into the same gcc warning so you may have to set\nthe environment variable CXXFLAGS to something like -O before running \nconfigure. The same will probably help with libpq++ as well BTW.\n\n\nJeroen\n\n\n\n\n", "msg_date": "Wed, 3 Jul 2002 17:28:34 +0200", "msg_from": "jtv <jtv@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: libpq++ build problems" }, { "msg_contents": "\nAnd the problem is, what? Except for the -O2 warnings, looks fine to\nme.\n\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> OK, this is what I'm seeing on FreeBSD/Alpha for libpq++. 
I haven't figured\n> out how to build libpqxx yet.:\n> \n> gmake[3]: Entering directory\n> `/home/chriskl/pgsql-head/src/interfaces/libpq++'\n> g++ -O2 -g -Wall -fpic -DPIC -I../../../src/interfaces/libpq -I../../../src/\n> include -c -o pgconnection.o pgconnection.cc -MMD\n> cc1plus: warning:\n> ***\n> *** The -O2 flag TRIGGERS KNOWN OPTIMIZER BUGS ON THIS PLATFORM\n> ***\n> \n> g++ -O2 -g -Wall -fpic -DPIC -I../../../src/interfaces/libpq -I../../../src/\n> include -c -o pgdatabase.o pgdatabase.cc -MMD\n> cc1plus: warning:\n> ***\n> *** The -O2 flag TRIGGERS KNOWN OPTIMIZER BUGS ON THIS PLATFORM\n> ***\n> \n> g++ -O2 -g -Wall -fpic -DPIC -I../../../src/interfaces/libpq -I../../../src/\n> include -c -o pgtransdb.o pgtransdb.cc -MMD\n> cc1plus: warning:\n> ***\n> *** The -O2 flag TRIGGERS KNOWN OPTIMIZER BUGS ON THIS PLATFORM\n> ***\n> \n> g++ -O2 -g -Wall -fpic -DPIC -I../../../src/interfaces/libpq -I../../../src/\n> include -c -o pgcursordb.o pgcursordb.cc -MMD\n> cc1plus: warning:\n> ***\n> *** The -O2 flag TRIGGERS KNOWN OPTIMIZER BUGS ON THIS PLATFORM\n> ***\n> \n> g++ -O2 -g -Wall -fpic -DPIC -I../../../src/interfaces/libpq -I../../../src/\n> include -c -o pglobject.o pglobject.cc -MMD\n> cc1plus: warning:\n> ***\n> *** The -O2 flag TRIGGERS KNOWN OPTIMIZER BUGS ON THIS PLATFORM\n> ***\n> \n> ar cr libpq++.a `lorder pgconnection.o pgdatabase.o pgtransdb.o pgcursordb.o\n> pglobject.o | tsort`\n> ranlib libpq++.a\n> g++ -O2 -g -Wall -fpic -DPIC -shared -Wl,-x,-soname,libpq++.so.4\n> pgconnection.o pgdatabase.o pgtransdb.o pgcursordb.o\n> lobject.o -L../../../src/interfaces/libpq -lpq -R/home/chriskl/local/lib -\n> o libpq++.so.4\n> rm -f libpq++.so\n> ln -s libpq++.so.4 libpq++.so\n> gmake[3]: Leaving directory\n> `/home/chriskl/pgsql-head/src/interfaces/libpq++'\n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n> \n> \n\n-- \n 
Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Wed, 3 Jul 2002 11:46:12 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq++ build problems" }, { "msg_contents": "\nActually, I am confused. In src/template/freebsd I see:\n\t\n\tCFLAGS='-pipe'\n\t\n\tcase $host_cpu in\n\t alpha*) CFLAGS=\"$CFLAGS -O\";;\n\t i386*) CFLAGS=\"$CFLAGS -O2\";;\n\tesac\n\nso why is he seeing the -O2 flag on FreeBSD/alpha?\n\n---------------------------------------------------------------------------\n\njtv wrote:\n> On Wed, Jul 03, 2002 at 02:25:46PM +0800, Christopher Kings-Lynne wrote:\n> > OK, this is what I'm seeing on FreeBSD/Alpha for libpq++. \n> \n> [cut]\n> [paste]\n> \n> > cc1plus: warning:\n> > ***\n> > *** The -O2 flag TRIGGERS KNOWN OPTIMIZER BUGS ON THIS PLATFORM\n> > ***\n> \n> Doesn't say it doesn't work though... Have you tried running the\n> resulting code?\n> \n> \n> > I haven't figured out how to build libpqxx yet.:\n> \n> Basically, ./configure; make; make check; make install. You may have to\n> use configure options --with-postgres=/your/postgres/dir or its cousins.\n> Plus, you'll also run into the same gcc warning so you may have to set\n> the environment variable CXXFLAGS to something like -O before running \n> configure. The same will probably help with libpq++ as well BTW.\n> \n> \n> Jeroen\n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Wed, 3 Jul 2002 13:45:56 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq++ build problems" }, { "msg_contents": "On Wed, Jul 03, 2002 at 01:45:56PM -0400, Bruce Momjian wrote:\n> \n> Actually, I am confused. In src/template/freebsd I see:\n> \t\n> \tCFLAGS='-pipe'\n> \t\n> \tcase $host_cpu in\n> \t alpha*) CFLAGS=\"$CFLAGS -O\";;\n> \t i386*) CFLAGS=\"$CFLAGS -O2\";;\n> \tesac\n> \n> so why is he seeing the -O2 flag on FreeBSD/alpha?\n\nProbably because CXXFLAGS still has -O2 set.\n\n\nJeroen\n\n\n\n", "msg_date": "Wed, 3 Jul 2002 20:51:46 +0200", "msg_from": "jtv <jtv@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: libpq++ build problems" }, { "msg_contents": "jtv wrote:\n> On Wed, Jul 03, 2002 at 01:45:56PM -0400, Bruce Momjian wrote:\n> > \n> > Actually, I am confused. In src/template/freebsd I see:\n> > \t\n> > \tCFLAGS='-pipe'\n> > \t\n> > \tcase $host_cpu in\n> > \t alpha*) CFLAGS=\"$CFLAGS -O\";;\n> > \t i386*) CFLAGS=\"$CFLAGS -O2\";;\n> > \tesac\n> > \n> > so why is he seeing the -O2 flag on FreeBSD/alpha?\n> \n> Probably because CXXFLAGS still has -O2 set.\n\nInteresting. I thought -O2 was only set in /template files, but I now\nsee it is set in configure too. The following patch fixes the libpqxx\ncompile problem on FreeBSD/alpha. The old code set -O2 for\nFreeBSD/i386, but that is already set earlier. The new patch just\nupdates the FreeBSD/alpha compile.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\nIndex: src/template/freebsd\n===================================================================\nRCS file: /cvsroot/pgsql/src/template/freebsd,v\nretrieving revision 1.10\ndiff -c -r1.10 freebsd\n*** src/template/freebsd\t16 Nov 2000 05:51:07 -0000\t1.10\n--- src/template/freebsd\t3 Jul 2002 19:45:14 -0000\n***************\n*** 1,7 ****\n CFLAGS='-pipe'\n \n! case $host_cpu in\n! alpha*) CFLAGS=\"$CFLAGS -O\";;\n! i386*) CFLAGS=\"$CFLAGS -O2\";;\n! esac\n! \n--- 1,6 ----\n CFLAGS='-pipe'\n \n! if [ `expr \"$host_cpu\" : \"alpha\"` -ge 5 ]\n! then\tCFLAGS=\"$CFLAGS -O\"\n! \tCXXFLAGS=\"$CFLAGS -O\"\n! fi", "msg_date": "Wed, 3 Jul 2002 15:47:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq++ build problems" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> ... The following patch fixes the libpqxx\n> compile problem on FreeBSD/alpha. The old code set -O2 for\n> FreeBSD/i386, but that is already set earlier. The new patch just\n> updates the FreeBSD/alpha compile.\n\nAs a general rule, anything that affects one *BSD affects them all.\nI am always very suspicious of any patch that changes only one of\nthe *BSD templates or makefiles. I'm not even convinced we should\nhave separate makefiles/templates for 'em ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2002 00:52:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq++ build problems " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > ... The following patch fixes the libpqxx\n> > compile problem on FreeBSD/alpha. The old code set -O2 for\n> > FreeBSD/i386, but that is already set earlier. 
The new patch just\n> > updates the FreeBSD/alpha compile.\n> \n> As a general rule, anything that affects one *BSD affects them all.\n> I am always very suspicious of any patch that changes only one of\n> the *BSD templates or makefiles. I'm not even convinced we should\n> have separate makefiles/templates for 'em ...\n\nWell, in this case FreeBSD/alpha -O2 throws that warning. Hard to miss\nthat one. It _is_ a unique case for that platform.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Thu, 4 Jul 2002 01:32:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq++ build problems" }, { "msg_contents": "Bruce Momjian writes:\n\n> jtv wrote:\n> > On Wed, Jul 03, 2002 at 01:45:56PM -0400, Bruce Momjian wrote:\n> > >\n> > > Actually, I am confused. In src/template/freebsd I see:\n> > >\n> > > \tCFLAGS='-pipe'\n> > >\n> > > \tcase $host_cpu in\n> > > \t alpha*) CFLAGS=\"$CFLAGS -O\";;\n> > > \t i386*) CFLAGS=\"$CFLAGS -O2\";;\n> > > \tesac\n> > >\n> > > so why is he seeing the -O2 flag on FreeBSD/alpha?\n> >\n> > Probably because CXXFLAGS still has -O2 set.\n>\n> Interesting. I thought -O2 was only set in /template files, but I now\n> see it is set in configure too. The following patch fixes the libpqxx\n> compile problem on FreeBSD/alpha. The old code set -O2 for\n> FreeBSD/i386, but that is already set earlier. The new patch just\n> updates the FreeBSD/alpha compile.\n\nExcept that it now fails to set CFLAGS correctly. Please avoid \"expr\"\ntoo. \"case\" is fine.\n\nActually, you can't really set CXXFLAGS in the template file, because at\nthat point you don't know what kind of C++ compiler is going to be used\nyet. 
That's why it's handled in configure later.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n\n\n", "msg_date": "Sun, 7 Jul 2002 12:58:37 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: libpq++ build problems" }, { "msg_contents": "Tom Lane writes:\n\n> As a general rule, anything that affects one *BSD affects them all.\n> I am always very suspicious of any patch that changes only one of\n> the *BSD templates or makefiles. I'm not even convinced we should\n> have separate makefiles/templates for 'em ...\n\nIf they could retroactively agree on a way to build shared libraries, we\nmight have a shot.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n\n\n", "msg_date": "Sun, 7 Jul 2002 12:58:57 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: libpq++ build problems " }, { "msg_contents": "Peter Eisentraut wrote:\n> > Interesting. I thought -O2 was only set in /template files, but I now\n> > see it is set in configure too. The following patch fixes the libpqxx\n> > compile problem on FreeBSD/alpha. The old code set -O2 for\n> > FreeBSD/i386, but that is already set earlier. The new patch just\n> > updates the FreeBSD/alpha compile.\n> \n> Except that it now fails to set CFLAGS correctly. Please avoid \"expr\"\n> too. \"case\" is fine.\n\nChanged, I assume for portability.\n\n> Actually, you can't really set CXXFLAGS in the template file, because at\n> that point you don't know what kind of C++ compiler is going to be used\n> yet. 
That's why it's handled in configure later.\n\nLooking at configure.in, it looks pretty safe:\n\n if test \"$ac_env_CXXFLAGS\" != set; then\n if test \"$GXX\" = yes; then\n CXXFLAGS=-O2\n else\n case $template in\n osf) CXXFLAGS='-O4 -Olimit 2000' ;;\n unixware) CXXFLAGS='-O' ;;\n *) CXXFLAGS= ;;\n esac\n fi\n fi\n\nBecause CXXFLAGS is already set for freebsd/alpha, it falls through,\nmissing the -O2 setting.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Sun, 7 Jul 2002 10:26:12 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq++ build problems" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Looking at configure.in, it looks pretty safe:\n\n> if test \"$ac_env_CXXFLAGS\" != set; then\n> if test \"$GXX\" = yes; then\n> CXXFLAGS=-O2\n> else\n> case $template in\n> osf) CXXFLAGS='-O4 -Olimit 2000' ;;\n> unixware) CXXFLAGS='-O' ;;\n> *) CXXFLAGS= ;;\n> esac\n> fi\n> fi\n\n> Because CXXFLAGS is already set for freebsd/alpha, it falls through,\n\nI don't think so; the ac_env_ flag presumably indicates whether\nconfigure inherited CXXFLAGS from its environment, not whether it\nset it internally.\n\nBut even if setting CXXFLAGS in the template did override this code,\nit would be a mistake, because the point of this code is to allow a\nchoice between g++-specific and vendor's-compiler-specific CXXFLAGS.\n\nPerhaps we could do something like this:\n\n\t# set defaults for most platforms\n\tGCC_CXXFLAGS=\"-O2\"\n\tVENDOR_CXXFLAGS=\n\n\t# now include template, which may override either of the above\n\n\t# now select proper CXXFLAGS\n\tif test \"$ac_env_CXXFLAGS\" != set; then\n\t if test \"$GXX\" = yes; then\n\t CXXFLAGS=\"$GCC_CXXFLAGS\"\n\t else\n\t CXXFLAGS=\"$VENDOR_CXXFLAGS\"\n\t fi\n\tfi\n\nThis 
would allow us to push the special cases for osf and unixware out\ninto their template files, which would be a Good Thing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 07 Jul 2002 12:03:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq++ build problems " }, { "msg_contents": "Good idea. Patch attached. autoconf run.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Looking at configure.in, it looks pretty safe:\n> \n> > if test \"$ac_env_CXXFLAGS\" != set; then\n> > if test \"$GXX\" = yes; then\n> > CXXFLAGS=-O2\n> > else\n> > case $template in\n> > osf) CXXFLAGS='-O4 -Olimit 2000' ;;\n> > unixware) CXXFLAGS='-O' ;;\n> > *) CXXFLAGS= ;;\n> > esac\n> > fi\n> > fi\n> \n> > Because CXXFLAGS is already set for freebsd/alpha, it falls through,\n> \n> I don't think so; the ac_env_ flag presumably indicates whether\n> configure inherited CXXFLAGS from its environment, not whether it\n> set it internally.\n> \n> But even if setting CXXFLAGS in the template did override this code,\n> it would be a mistake, because the point of this code is to allow a\n> choice between g++-specific and vendor's-compiler-specific CXXFLAGS.\n> \n> Perhaps we could do something like this:\n> \n> \t# set defaults for most platforms\n> \tGCC_CXXFLAGS=\"-O2\"\n> \tVENDOR_CXXFLAGS=\n> \n> \t# now include template, which may override either of the above\n> \n> \t# now select proper CXXFLAGS\n> \tif test \"$ac_env_CXXFLAGS\" != set; then\n> \t if test \"$GXX\" = yes; then\n> \t CXXFLAGS=\"$GCC_CXXFLAGS\"\n> \t else\n> \t CXXFLAGS=\"$VENDOR_CXXFLAGS\"\n> \t fi\n> \tfi\n> \n> This would allow us to push the special cases for osf and unixware out\n> into their template files, which would be a Good Thing.\n> \n> \t\t\tregards, tom lane\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get 
off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: configure.in\n===================================================================\nRCS file: /cvsroot/pgsql/configure.in,v\nretrieving revision 1.186\ndiff -c -r1.186 configure.in\n*** configure.in\t28 May 2002 16:57:53 -0000\t1.186\n--- configure.in\t7 Jul 2002 20:05:54 -0000\n***************\n*** 243,248 ****\n--- 243,252 ----\n # variable.\n PGAC_ARG_REQ(with, CC, [], [CC=$with_CC])\n \n+ # Set here so it can be over-ridden in the template file\n+ GCC_CXXFLAGS=\"-O2\"\n+ VENDOR_CXXFLAGS=\"\"\n+ \n case $template in\n aix) pgac_cc_list=\"gcc xlc\";;\n irix) pgac_cc_list=\"cc\";; # no gcc\n***************\n*** 593,605 ****\n AC_PROG_CXX\n if test \"$ac_env_CXXFLAGS\" != set; then\n if test \"$GXX\" = yes; then\n! CXXFLAGS=-O2\n else\n! case $template in\n! \tosf)\t\tCXXFLAGS='-O4 -Olimit 2000' ;;\n! unixware)\tCXXFLAGS='-O' ;;\n! \t*)\t\tCXXFLAGS= ;;\n! esac\n fi\n fi\n if test \"$enable_debug\" = yes && test \"$ac_cv_prog_cxx_g\" = yes; then\n--- 597,605 ----\n AC_PROG_CXX\n if test \"$ac_env_CXXFLAGS\" != set; then\n if test \"$GXX\" = yes; then\n! CXXFLAGS=\"$GCC_CXXFLAGS\"\n else\n! CXXFLAGS=\"$VENDOR_CXXFLAGS\"\n fi\n fi\n if test \"$enable_debug\" = yes && test \"$ac_cv_prog_cxx_g\" = yes; then\nIndex: src/template/freebsd\n===================================================================\nRCS file: /cvsroot/pgsql/src/template/freebsd,v\nretrieving revision 1.12\ndiff -c -r1.12 freebsd\n*** src/template/freebsd\t7 Jul 2002 14:24:13 -0000\t1.12\n--- src/template/freebsd\t7 Jul 2002 20:05:57 -0000\n***************\n*** 2,6 ****\n \n case $host_cpu in\n alpha*) CFLAGS=\"$CFLAGS -O\";;\n! 
CXXFLAGS=\"$CXXFLAGS -O\"\n esac\n--- 2,6 ----\n \n case $host_cpu in\n alpha*) CFLAGS=\"$CFLAGS -O\";;\n! GCC_CXXFLAGS=\"-O\"\n esac\nIndex: src/template/osf\n===================================================================\nRCS file: /cvsroot/pgsql/src/template/osf,v\nretrieving revision 1.3\ndiff -c -r1.3 osf\n*** src/template/osf\t31 Oct 2000 18:16:20 -0000\t1.3\n--- src/template/osf\t7 Jul 2002 20:05:57 -0000\n***************\n*** 6,8 ****\n--- 6,10 ----\n CFLAGS='-O4 -Olimit 2000'\n CCC=cxx\n fi\n+ VENDOR_CXXFLAGS='-O4 -Olimit 2000'\n+ \nIndex: src/template/unixware\n===================================================================\nRCS file: /cvsroot/pgsql/src/template/unixware,v\nretrieving revision 1.9\ndiff -c -r1.9 unixware\n*** src/template/unixware\t22 Oct 2000 22:15:09 -0000\t1.9\n--- src/template/unixware\t7 Jul 2002 20:05:57 -0000\n***************\n*** 3,5 ****\n--- 3,7 ----\n else\n CFLAGS='-O -K inline'\n fi\n+ VENDOR_CXXFLAGS=\"-O\"\n+", "msg_date": "Sun, 7 Jul 2002 16:27:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq++ build problems" } ]
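The template-override pattern Tom proposes in the thread above can be sketched as a small standalone shell fragment (an illustration only; the function name `choose_cxxflags` and the sample calls are hypothetical, not part of the actual configure script):

```shell
#!/bin/sh
# Sketch of the CXXFLAGS selection discussed above: defaults that a
# platform template may override, then a g++-vs-vendor choice that is
# skipped entirely when the user supplied CXXFLAGS themselves.

choose_cxxflags () {
    gxx=$1           # "yes" when the C++ compiler is g++ (configure's $GXX)
    user_flags=$2    # user-supplied CXXFLAGS, empty string when not set

    GCC_CXXFLAGS="-O2"      # default; a template file could override this
    VENDOR_CXXFLAGS=""      # default; e.g. osf would set '-O4 -Olimit 2000'

    if test -n "$user_flags"; then
        CXXFLAGS=$user_flags        # honor the environment, like ac_env_CXXFLAGS
    elif test "$gxx" = yes; then
        CXXFLAGS=$GCC_CXXFLAGS
    else
        CXXFLAGS=$VENDOR_CXXFLAGS
    fi
    printf '%s\n' "$CXXFLAGS"
}

choose_cxxflags yes ""          # prints: -O2
choose_cxxflags no  ""          # prints an empty line (no vendor default set)
choose_cxxflags no  "-g -O1"    # prints: -g -O1
```

The point of keeping two separate variables is that a platform template can override either one without needing to know which compiler configure will eventually detect.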
[ { "msg_contents": "Hi All,\n\nI have given up working on the BETWEEN node. It got to the stage where I\nrealised I was really out of my depth! Rod Taylor has indicated an interest\nin the problem and I have sent him my latest patch, so hopefully he'll be\nable to crack it.\n\nSo instead, I've taken up with the DROP COLUMN crusade. It seems that the\nfollowing are the jobs that need to be done:\n\n* Add attisdropped to pg_attribute\n - Looking for takers for this one, otherwise I'll look into it.\n* Fill out AlterTableDropColumn\n - I've done this, with the assumption that attisdropped exists. It sets\nattisdropped to true, drops the column default and renames the column.\n(Plus does all other normal ALTER TABLE checks)\n* Modify parser and other places to ignore dropped columns\n - This is also up for grabs.\n* Modify psql and pg_dump to handle dropped columns\n - I've done this.\n\nOnce the above is done, we have a working drop column implementation.\n\n* Modify all other interfaces, JDBC, etc. to handle dropped cols.\n - I think this can be suggested to the relevant developers once the above\nis committed!\n\n* Modify VACUUM to add a RECLAIM option to reduce on disk table size.\n - This is out of my league, so it's up for grabs\n\nI have approached a couple of people off-list to see if they're interested\nin helping, so please post to the list if you intend to work on something.\n\nIt has also occurred to me that once drop column exists, users will be able\nto change the type of their columns manually (ie. create a new col, update\nall values, drop the old col). So, there is no reason why this new\nattisdropped field shouldn't allow us to implement a full ALTER TABLE/SET\nTYPE sort of feature - cool huh?\n\nChris\n\n\n\n", "msg_date": "Wed, 3 Jul 2002 16:46:55 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "BETWEEN Node & DROP COLUMN" }, { "msg_contents": "> all values, drop the old col). 
So, there is no reason why this new\n> attisdropped field shouldn't allow us to implement a full ALTER TABLE/SET\n> TYPE sort of feature - cool huh?\n\n\nI've not looked in a while, but the column rename code did not account\nfor issues in foreign keys, etc. Those should be easier to ferret out\nsoon, but may not be so nice to change yet.\n\nIt should also be noted that an ALTER TABLE / SET TYPE implemented with\nthe above idea will run into the 2x diskspace issue as well as take\nquite a while to process.\n\n\n\n", "msg_date": "03 Jul 2002 07:24:50 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "> I've not looked in a while, but the column rename code did not account\n> for issues in foreign keys, etc. Those should be easier to ferret out\n> soon, but may not be so nice to change yet.\n\nWhich is probably a good reason for us to offer it as an all-in-one command,\nrather than expecting them to do it manually...\n\n> It should also be noted that an ALTER TABLE / SET TYPE implemented with\n> the above idea will run into the 2x diskspace issue as well as take\n> quite a while to process.\n\nI think that if the 'SET TYPE' operation is ever to be rollback-able, it\nwill need to use 2x diskspace. If it's overwritten in place, there's no\nchance of fallback... I think that a DBA would choose to use the command\nknowing full well what it requires? 
Better than not offering them the\nchoice at all!\n\nChris\n\n\n\n\n", "msg_date": "Wed, 3 Jul 2002 20:23:52 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "> > It should also be noted that an ALTER TABLE / SET TYPE implemented with\n> > the above idea will run into the 2x diskspace issue as well as take\n> > quite a while to process.\n> \n> I think that if the 'SET TYPE' operation is ever to be rollback-able, it\n> will need to use 2x diskspace. If it's overwritten in place, there's no\n> chance of fallback... I think that a DBA would choose to use the command\n> knowing full well what it requires? Better than not offering them the\n> choice at all!\n\nTrue, but if we did the multi-version thing in pg_attribute we may be\nable to coerce to the right type on the way out making it a high speed\nchange.\n\n\n\n", "msg_date": "03 Jul 2002 08:32:30 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "On Wed, 2002-07-03 at 14:32, Rod Taylor wrote:\n> > > It should also be noted that an ALTER TABLE / SET TYPE implemented with\n> > > the above idea will run into the 2x diskspace issue as well as take\n> > > quite a while to process.\n> > \n> > I think that if the 'SET TYPE' operation is ever to be rollback-able, it\n> > will need to use 2x diskspace. If it's overwritten in place, there's no\n> > chance of fallback... I think that a DBA would choose to use the command\n> > knowing full well what it requires? Better than not offering them the\n> > choice at all!\n> \n> True, but if we did the multi-version thing in pg_attribute we may be\n> able to coerce to the right type on the way out making it a high speed\n> change.\n\nIf I understand you right, i.e. 
you want to do the conversion at each\nselect(), then the change is high speed but all subsequent queries using\nit will pay a speed penalty, not to mention added complexity of the\nwhole thing.\n\nI don't think that making changes quick outweighs added slowness and\ncomplexity - changes are meant to be slow ;)\n\nThe real-life analogue to the proposed scenario would be adding one\nextra wheel next to each existing one in a car in order to make it\npossible to change tyres while driving - while certainly possible nobody\nactually does it.\n\n---------------\nHannu\n\n\n\n\n\n", "msg_date": "03 Jul 2002 16:55:38 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> Hi All,\n>\n> I have given up working on the BETWEEN node. It got to the stage where I\n> realised I was really out of my depth! Rod Taylor has indicated an interest\n> in the problem and I have sent him my latest patch, so hopefully he'll be\n> able to crack it.\n>\n> So instead, I've taken up with the DROP COLUMN crusade. It seems that the\n> following are the jobs that need to be done:\n\nGreat crusade!\n\n> * Add attisdropped to pg_attribute\n> - Looking for takers for this one, otherwise I'll look into it.\n\nI can do this for you. Just let me know when.\n\n> * Fill out AlterTableDropColumn\n> - I've done this, with the assumption that attisdropped exists. It sets\n> attisdropped to true, drops the column default and renames the column.\n> (Plus does all other normal ALTER TABLE checks)\n> * Modify parser and other places to ignore dropped columns\n> - This is also up for grabs.\n\nAs I remember, Hiroshi's drop column changed the attribute number to a\nspecial negative value, which required lots of changes to track.\nKeeping the same number and just marking the column as dropped is a big\nwin. 
This does push the coding out the client though.\n\n> * Modify psql and pg_dump to handle dropped columns\n> - I've done this.\n> \n> Once the above is done, we have a working drop column implementation.\n> \n> * Modify all other interfaces, JDBC, etc. to handle dropped cols.\n> - I think this can be suggested to the relevant developers once the above\n> is committed!\n> \n> * Modify VACUUM to add a RECLAIM option to reduce on disk table size.\n> - This is out of my league, so it's up for grabs\n\nWill UPDATE on a row set the deleted column to NULL? If so, the\ndisk space used by the column would go away over time. In fact, a\nsimple:\n\t\n\tUPDATE tab SET col = col;\n\tVACUUM;\n\nwould remove the data stored in the deleted column; no change to VACUUM\nneeded.\n\n> I have approached a couple of people off-list to see if they're interested\n> in helping, so please post to the list if you intend to work on something.\n> \n> It has also occurred to me that once drop column exists, users will be able\n> to change the type of their columns manually (ie. create a new col, update\n> all values, drop the old col). So, there is no reason why this new\n> attisdropped field shouldn't allow us to implement a full ALTER TABLE/SET\n> TYPE sort of feature - cool huh?\n\nYep.\n \n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Wed, 3 Jul 2002 12:18:12 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "Hiroshi Inoue wrote:\n> > As I remember, Hiroshi's drop column changed the attribute number to a\n> > special negative value, which required lots of changes to track.\n> \n> ??? What do you mean by *lots of* ?\n\nYes, please remind me. Was your solution renumbering the attno values? 
\nI think there are fewer cases to fix if we keep the existing attribute\nnumbering and just mark the column as deleted. Is this accurate?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Wed, 3 Jul 2002 20:25:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "\n\nBruce Momjian wrote:\n> \n> Christopher Kings-Lynne wrote:\n> > Hi All,\n> >\n> > I have given up working on the BETWEEN node. It got to the stage where I\n> > realised I was really out of my depth! Rod Taylor has indicated an interest\n> > in the problem and I have sent him my latest patch, so hopefully he'll be\n> > able to crack it.\n> >\n> > So instead, I've taken up with the DROP COLUMN crusade. It seems that the\n> > following are the jobs that need to be done:\n> \n> Great crusade!\n> \n> > * Add attisdropped to pg_attribute\n> > - Looking for takers for this one, otherwise I'll look into it.\n> \n> I can do this for you. Just let me know when.\n> \n> > * Fill out AlterTableDropColumn\n> > - I've done this, with the assumption that attisdropped exists. It sets\n> > attisdropped to true, drops the column default and renames the column.\n> > (Plus does all other normal ALTER TABLE checks)\n> > * Modify parser and other places to ignore dropped columns\n> > - This is also up for grabs.\n> \n> As I remember, Hiroshi's drop column changed the attribute number to a\n> special negative value, which required lots of changes to track.\n\n??? 
What do you mean by *lots of* ?\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n\n\n", "msg_date": "Thu, 04 Jul 2002 09:26:01 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> > > As I remember, Hiroshi's drop column changed the attribute number to a\n> > > special negative value, which required lots of changes to track.\n> >\n> > ??? What do you mean by *lots of* ?\n> \n> Yes, please remind me. Was your solution renumbering the attno values?\n\nYes though I don't intend to object to Christopher's proposal.\n\n> I think there are fewer cases to fix if we keep the existing attribute\n> numbering and just mark the column as deleted. Is this accurate?\n\nNo. I don't understand why you think so. \n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n\n\n", "msg_date": "Thu, 04 Jul 2002 09:37:24 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "Hiroshi Inoue wrote:\n> Bruce Momjian wrote:\n> > \n> > Hiroshi Inoue wrote:\n> > > > As I remember, Hiroshi's drop column changed the attribute number to a\n> > > > special negative value, which required lots of changes to track.\n> > >\n> > > ??? What do you mean by *lots of* ?\n> > \n> > Yes, please remind me. Was your solution renumbering the attno values?\n> \n> Yes though I don't intend to object to Christopher's proposal.\n> \n> > I think there are fewer cases to fix if we keep the existing attribute\n> > numbering and just mark the column as deleted. Is this accurate?\n> \n> No. I don't understand why you think so. \n\nWith the isdropped column, you really only need to deal with '*'\nexpansion in a few places, and prevent the column from being accessed. 
\nWith renumbering, the backend loops that go through the attnos have to\nbe dealt with.\n\nIs this correct? I certainly prefer attno renumbering to isdropped\nbecause it allows us to get DROP COLUMN without any client changes, or\nat least with fewer because the dropped column has a negative attno. Is\nthis accurate?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Wed, 3 Jul 2002 20:38:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "\n\nBruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> > Bruce Momjian wrote:\n> > >\n> > > Hiroshi Inoue wrote:\n> > > > > As I remember, Hiroshi's drop column changed the attribute number to a\n> > > > > special negative value, which required lots of changes to track.\n> > > >\n> > > > ??? What do you mean by *lots of* ?\n> > >\n> > > Yes, please remind me. Was your solution renumbering the attno values?\n> >\n> > Yes though I don't intend to object to Christopher's proposal.\n> >\n> > > I think there are fewer cases to fix if we keep the existing attribute\n> > > numbering and just mark the column as deleted. Is this accurate?\n> >\n> > No. I don't understand why you think so.\n> \n> With the isdropped column, you really only need to deal with '*'\n> expansion in a few places, and prevent the column from being accessed.\n> With renumbering, the backend loops that go through the attnos have to\n> be dealt with.\n\nI used the following macro in my trial implementation.\n #define COLUMN_IS_DROPPED(attribute) ((attribute)->attnum <= \nDROP_COLUMN_OFFSET)\nThe places where the macro was put are exactly the places\nwhere attisdropped must be checked.\n\nThe difference is essentially little. Please don't propagate\na wrong information. 
\n \n> Is this correct? I certainly prefer attno renumbering to isdropped\n> because it allows us to get DROP COLUMN without any client changes,\n\nUnfortunately many apps rely on the fact that the attnos are\nconsecutive starting from 1. It was the main reason why Tom\nrejected my trial. Nothing has changed about it.\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n\n\n", "msg_date": "Thu, 04 Jul 2002 10:27:15 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "> > > > Yes, please remind me. Was your solution renumbering the\n> attno values?\n> > >\n> > > Yes though I don't intend to object to Christopher's proposal.\n\nHiroshi,\n\nI am thinking of rolling back my CVS to see if there's code from your\nprevious test implementation that we can use. Apart from the DropColumn\nfunction itself, what other changes did you make? Did you have\nmodifications for '*' expansion in the parser, etc.?\n\nChris\n\n\n\n", "msg_date": "Thu, 4 Jul 2002 10:00:11 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > > > > Yes, please remind me. Was your solution renumbering the\n> > attno values?\n> > > >\n> > > > Yes though I don't intend to object to Christopher's proposal.\n> \n> Hiroshi,\n> \n> I am thinking of rolling back my CVS to see if there's code from your\n> previous test implementation that we can use. Apart from the DropColumn\n> function itself, what other changes did you make? Did you have\n> modifications for '*' expansion in the parser, etc.?\n\nYes, please review Hiroshi's work. It is good work. Can we have an\nanalysis of Hiroshi's approach vs the isdropped case.\n\nIs it better to renumber the attno or set a column to isdropped. 
The\nformer may be easier on the clients.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Wed, 3 Jul 2002 22:18:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "> > I am thinking of rolling back my CVS to see if there's code from your\n> > previous test implementation that we can use. Apart from the DropColumn\n> > function itself, what other changes did you make? Did you have\n> > modifications for '*' expansion in the parser, etc.?\n>\n> Yes, please review Hiroshi's work. It is good work. Can we have an\n> analysis of Hiroshi's approach vs the isdropped case.\n\nYes, it is. I've rolled it back and I'm already incorporating his changes\nto the parser into my patch. I just have to grep all the source code for\n'HACK' to find all the changes. It's all very handy.\n\n> Is it better to renumber the attno or set a column to isdropped. The\n> former may be easier on the clients.\n\nWell, obviously I prefer the attisdropped approach. I think it's clearer\nand there's less confusion. As a head developer for phpPgAdmin that's what\nI'd prefer... Hiroshi obviously prefers his solution, but doesn't object to\nmine/Tom's. 
I think that with all the schema-related changes that clients\nwill have to handle in 7.3, we may as well hit them with the dropped column\nstuff in the same go, that way there's fewer rounds of clients scrambling to\nkeep up with the server.\n\nI intend to email every single postgres client I can find and tell them\nabout the new changes, well before we release 7.3...\n\nChris\n\n\n\n", "msg_date": "Thu, 4 Jul 2002 10:40:50 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > > I am thinking of rolling back my CVS to see if there's code from your\n> > > previous test implementation that we can use. Apart from the DropColumn\n> > > function itself, what other changes did you make? Did you have\n> > > modifications for '*' expansion in the parser, etc.?\n> >\n> > Yes, please review Hiroshi's work. It is good work. Can we have an\n> > analysis of Hiroshi's approach vs the isdropped case.\n> \n> Yes, it is. I've rolled it back and I'm already incorporating his changes\n> to the parser into my patch. I just have to grep all the source code for\n> 'HACK' to find all the changes. It's all very handy.\n\nYes. It should have been accepted long ago, but we were waiting for a\n\"perfect\" solution which we all know now will never come.\n\n> \n> > Is it better to renumber the attno or set a column to isdropped. The\n> > former may be easier on the clients.\n> \n> Well, obviously I prefer the attisdropped approach. I think it's clearer\n> and there's less confusion. As a head developer for phpPgAdmin that's what\n> I'd prefer... Hiroshi obviously prefers his solution, but doesn't object to\n\nOK, can you explain the issues from a server and client perspective,\ni.e. 
renumbering vs isdropped?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Wed, 3 Jul 2002 22:43:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> \n> > > > > Yes, please remind me. Was your solution renumbering the\n> > attno values?\n> > > >\n> > > > Yes though I don't intend to object to Christopher's proposal.\n> \n> Hiroshi,\n> \n> I am thinking of rolling back my CVS to see if there's code from your\n> previous test implementation that we can use. Apart from the DropColumn\n> function itself, what other changes did you make? Did you have\n> modifications for '*' expansion in the parser, etc.?\n\nDon't mind my posting.\nI'm only correcting a misunderstanding for my work.\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n\n\n", "msg_date": "Thu, 04 Jul 2002 11:44:45 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "> > Well, obviously I prefer the attisdropped approach. I think\n> it's clearer\n> > and there's less confusion. As a head developer for phpPgAdmin\n> that's what\n> > I'd prefer... Hiroshi obviously prefers his solution, but\n> doesn't object to\n>\n> OK, can you explain the issues from a server and client perspective,\n> i.e. renumbering vs isdropped?\n\nWell in the renumbering case, the client needs to know about missing attnos\nand it has to know to ignore negative attnos (which it probably does\nalready). ie. 
psql and pg_dump wouldn't have to be modified in that case.\n\nIn the isdropped case, the client needs to know to exclude any column with\n'attisdropped' set to true.\n\nSo in both cases, the client needs to be updated. I personally prefer the\nexplicit 'is dropped' as opposed to the implicit 'negative number', but hey.\n\n*sigh* Now I've gone and made an argument for the renumbering case. I'm\ngoing to have a good look at Hiroshi's old code and see which one is less\ncomplicated, etc. So far all I've really need to do is redefine Hiroshi's\nCOLUMN_DROPPED macro.\n\nI'm sure that both methods could be made to handle a 'ALTER TABLE/SET TYPE'\nsyntax.\n\nChris\n\n\n\n", "msg_date": "Thu, 4 Jul 2002 11:01:05 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > > Well, obviously I prefer the attisdropped approach. I think\n> > it's clearer\n> > > and there's less confusion. As a head developer for phpPgAdmin\n> > that's what\n> > > I'd prefer... Hiroshi obviously prefers his solution, but\n> > doesn't object to\n> >\n> > OK, can you explain the issues from a server and client perspective,\n> > i.e. renumbering vs isdropped?\n> \n> Well in the renumbering case, the client needs to know about missing attnos\n> and it has to know to ignore negative attnos (which it probably does\n> already). ie. psql and pg_dump wouldn't have to be modified in that case.\n> \n> In the isdropped case, the client needs to know to exclude any column with\n> 'attisdropped' set to true.\n> \n> So in both cases, the client needs to be updated. I personally prefer the\n> explicit 'is dropped' as opposed to the implicit 'negative number', but hey.\n> \n> *sigh* Now I've gone and made an argument for the renumbering case. I'm\n> going to have a good look at Hiroshi's old code and see which one is less\n> complicated, etc. 
So far all I've really need to do is redefine Hiroshi's\n> COLUMN_DROPPED macro.\n> \n> I'm sure that both methods could be made to handle a 'ALTER TABLE/SET TYPE'\n> syntax.\n\nYes! This is exactly what I would like investigated. I am embarrassed\nto see that we had Hiroshi's patch all this time and never implemented\nit.\n\nI think it underscores that we have drifted too far into the code purity\ncamp and need a little reality check that users have needs and we should\ntry to meet them if we want to be successful. How many DROP COLUMN\ngripes have we heard over the years! Now I am upset.\n\nOK, I calmed down now. What I would like to know is which DROP COLUMN\nmethod is easier on the server end, and which is easier on the client\nend. If one is easier in both places, let's use that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Wed, 3 Jul 2002 23:24:53 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "> Unfortunately many apps rely on the fact that the attnos are\n> consecutive starting from 1. It was the main reason why Tom\n> rejected my trial. Nothing has changed about it.\n\nOK, I've been looking at Hiroshi's implementation. It's basically\nsemantically equivalent to mine from what I can see so far. The only\ndifference really is in how the dropped columns are marked.\n\nI've been ruminating on Hiroshi's statement at the top there. What was the\nreasoning for assuming that 'many apps rely on the fact that the attnos are\nconsecutive'? Is that true? phpPgAdmin doesn't. 
In fact, phpPgAdmin won't\nrequire any changes with Hiroshi's implementation and will require changes\nwith mine.\n\nAnyway, an app that relies on consecutive attnos is going to have pain\nskipping over attisdropped columns anyway???\n\nIn fact, I'm now beginning to think that I should just resurrect Hiroshi's\nimplementation. I'm prepared to do that if people like...\n\nChris\n\n\n\n", "msg_date": "Thu, 4 Jul 2002 12:50:22 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> I used the following macro in my trial implementation.\n> #define COLUMN_IS_DROPPED(attribute) ((attribute)->attnum <= \n> DROP_COLUMN_OFFSET)\n> The places where the macro was put are exactly the places\n> where attisdropped must be checked.\n\nActually, your trial required column dropped-ness to be checked in\nmany more places than the proposed approach does. Since you renumbered\nthe dropped column, nominal column numbers didn't correspond to physical\norder of values in tuples anymore; that meant checking for dropped\ncolumns in many low-level tuple manipulations.\n\n>> Is this correct? I certainly prefer attno renumbering to isdropped\n>> because it allows us to get DROP COLUMN without any client changes,\n\n> Unfortunately many apps rely on the fact that the attnos are\n> consecutive starting from 1. It was the main reason why Tom\n> rejected my trial. Nothing has changed about it.\n\nI'm still not thrilled about it ... 
Unless\nyou have one, we may as well go for the approach that adds the least\ncomplication to the backend.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2002 01:04:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN " }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > Unfortunately many apps rely on the fact that the attnos are\n> > consecutive starting from 1. It was the main reason why Tom\n> > rejected my trial. Nothing has changed about it.\n> \n> OK, I've been looking at Hiroshi's implementation. It's basically\n> semantically equivalent to mine from what I can see so far. The only\n> difference really is in how the dropped columns are marked.\n> \n> I've been ruminating on Hiroshi's statement at the top there. What was the\n> reasoning for assuming that 'many apps rely on the fact that the attnos are\n> consecutive'? Is that true? phpPgAdmin doesn't. In fact, phpPgAdmin won't\n> require any changes with Hiroshi's implementaiton and will require changes\n> with mine.\n> \n> Anyway, an app that relies on consecutive attnos is going to have pain\n> skipping over attisdropped columns anyway???\n> \n> In fact, I'm now beginning to think that I should just resurrect Hiroshi's\n> implementation. I'm prepared to do that if people like...\n\nWell, you have clearly identified that Hiroshi's approach is cleaner for\nclients, because most clients don't need any changes. If the server end\nlooks equivalent for both approaches, I suggest you get started with\nHiroshi's idea.\n\nWhen Hiroshi's idea was originally proposed, some didn't like the\nuncleanliness of it, and particularly relations that relied on attno\nwould all have to be adjusted/removed. 
We didn't have pg_depend, of\ncourse, so there was this kind of gap in knowing how to remove all\nreferences to the dropped column.\n\nThere was also this idea that somehow the fairy software goddess was\ngoing to come down some day and give us a cleaner way to implement DROP\nCOLUMN.  She still hasn't shown up.  :-)\n\nI just read over TODO.detail/drop and my memory was correct.  It was a\nmixture of having no pg_depend coupled with other ideas.  Now that\npg_depend is coming, DROP COLUMN is ripe for a solution.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Thu, 4 Jul 2002 01:20:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "Tom Lane wrote:\n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > I used the following macro in my trial implementation.\n> > #define COLUMN_IS_DROPPED(attribute) ((attribute)->attnum <= \n> > DROP_COLUMN_OFFSET)\n> > The places where the macro was put are exactly the places\n> > where attisdropped must be checked.\n> \n> Actually, your trial required column dropped-ness to be checked in\n> many more places than the proposed approach does.  Since you renumbered\n> the dropped column, nominal column numbers didn't correspond to physical\n> order of values in tuples anymore; that meant checking for dropped\n> columns in many low-level tuple manipulations.\n> \n> >> Is this correct?  I certainly prefer attno renumbering to isdropped\n> >> because it allows us to get DROP COLUMN without any client changes,\n> \n> > Unfortunately many apps rely on the fact that the attnos are\n> > consecutive starting from 1. It was the main reason why Tom\n> > rejected my trial. Nothing has changed about it.\n> \n> I'm still not thrilled about it ... 
but I don't see a reasonable way\n> around it, either. I don't see any good way to do DROP COLUMN\n> without breaking applications that make such assumptions. Unless\n> you have one, we may as well go for the approach that adds the least\n> complication to the backend.\n\nIt may turn out to be a choice of client-cleanliness vs. backend\ncleanliness. Seems Hiroshi already wins for client cleanliness. We\njust need to know how many extra places need to be checked in the\nbackend.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Thu, 4 Jul 2002 01:23:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> It may turn out to be a choice of client-cleanliness vs. backend\n> cleanliness. Seems Hiroshi already wins for client cleanliness.\n\nNo, he only breaks even for client cleanliness --- either approach\n*will* require changes in clients that look at pg_attribute.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2002 01:26:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > It may turn out to be a choice of client-cleanliness vs. backend\n> > cleanliness. Seems Hiroshi already wins for client cleanliness.\n> \n> No, he only breaks even for client cleanliness --- either approach\n> *will* require changes in clients that look at pg_attribute.\n\nUh, Christopher already indicated three clients, psql, pg_dump, and\nanother that will not require changes for Hiroshi's approach, but will\nrequire changes for isdropped. 
That doesn't seem \"break even\" to me.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Thu, 4 Jul 2002 01:33:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "> Well in the renumbering case, the client needs to know about missing attnos\n> and it has to know to ignore negative attnos (which it probably does\n> already). ie. psql and pg_dump wouldn't have to be modified in that case.\n> In the isdropped case, the client needs to know to exclude any column with\n> 'attisdropped' set to true.\n> So in both cases, the client needs to be updated.\n\nHow about defining a view (or views) which hides these details? Perhaps\na view which is also defined in SQL99 as one of the information_schema\nviews which we might like to have anyway?\n\n - Thomas\n\n\n", "msg_date": "Wed, 03 Jul 2002 22:40:28 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> No, he only breaks even for client cleanliness --- either approach\n>> *will* require changes in clients that look at pg_attribute.\n\n> Uh, Christopher already indicated three clients, psql, pg_dump, and\n> another that will not require changes for Hiroshi's approach, but will\n> require changes for isdropped.\n\nOh? If either psql or pg_dump don't break, it's a mere coincidence,\nbecause they certainly depend on attnum. (It's also pretty much\nirrelevant considering they're both under our control and hence easily\nfixed.)\n\nI'm fairly certain that Christopher is mistaken, anyhow. 
Check the\nmanipulations of attribute defaults for a counterexample in pg_dump.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2002 01:50:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > I used the following macro in my trial implementation.\n> > #define COLUMN_IS_DROPPED(attribute) ((attribute)->attnum <=\n> > DROP_COLUMN_OFFSET)\n> > The places where the macro was put are exactly the places\n> > where attisdropped must be checked.\n> \n> Actually, your trial required column dropped-ness to be checked in\n> many more places than the proposed approach does.\n\nHave you ever really checked my trial implementation ?\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n\n\n", "msg_date": "Thu, 04 Jul 2002 14:53:55 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> No, he only breaks even for client cleanliness --- either approach\n> >> *will* require changes in clients that look at pg_attribute.\n> \n> > Uh, Christopher already indicated three clients, psql, pg_dump, and\n> > another that will not require changes for Hiroshi's approach, but will\n> > require changes for isdropped.\n> \n> Oh? If either psql or pg_dump don't break, it's a mere coincidence,\n> because they certainly depend on attnum. (It's also pretty much\n> irrelevant considering they're both under our control and hence easily\n> fixed.)\n> \n> I'm fairly certain that Christopher is mistaken, anyhow. Check the\n> manipulations of attribute defaults for a counterexample in pg_dump.\n\nWell, it seems isdropped is going to have to be checked by _any_ client,\nwhile holes in the number will have to be checked by _some_ clients. 
Is\nthat accurate?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Thu, 4 Jul 2002 02:00:05 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Tom Lane wrote:\n>> Actually, your trial required column dropped-ness to be checked in\n>> many more places than the proposed approach does.\n\n> Have you ever really checked my trial implementation ?\n\nWell, I've certainly stumbled over it in places like relcache.c\nand preptlist.c, which IMHO should not have to know about this...\nand I have little confidence that there are not more places that\nwould have needed fixes if the change had gotten any wide use.\nYou were essentially assuming that it was okay for pg_attribute.attnum\nto not agree with indexes into tuple descriptors, which seems very\nshaky to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2002 02:02:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN " }, { "msg_contents": "Tom Lane wrote:\n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Tom Lane wrote:\n> >> Actually, your trial required column dropped-ness to be checked in\n> >> many more places than the proposed approach does.\n> \n> > Have you ever really checked my trial implementation ?\n> \n> Well, I've certainly stumbled over it in places like relcache.c\n> and preptlist.c, which IMHO should not have to know about this...\n> and I have little confidence that there are not more places that\n> would have needed fixes if the change had gotten any wide use.\n> You were essentially assuming that it was okay for pg_attribute.attnum\n> to not agree with indexes into tuple descriptors, which 
seems very\n> shaky to me.\n\nIsn't it only the dropped column that doesn't agree with the descriptor.\nThe kept columns retain the same numbering, and a NULL sits in the\ndropped spot, right?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Thu, 4 Jul 2002 02:06:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Well, it seems isdropped is going to have to be checked by _any_ client,\n> while holes in the number will have to be checked by _some_ clients. Is\n> that accurate?\n\nWhat's your point? No client that examines pg_attribute can be trusted\nuntil it's been examined pretty closely (as in, more closely than\nChristopher looked at pg_dump). I'd prefer to see us keep the backend\nsimple and trustworthy, rather than pursue a largely-illusory idea that\nwe might be saving some trouble on the client side. The clients are\nless likely to cause unrecoverable data corruption if something is\nmissed.\n\nIf we were willing to remap attnums so that clients would require *no*\nchanges, it would be worth doing --- but I believe we've already\nrejected that approach as unworkable. 
I don't think \"maybe you don't\nneed to change, but you'd better study your code very carefully anyway\"\nis a big selling point.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2002 02:11:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Well, it seems isdropped is going to have to be checked by _any_ client,\n> > while holes in the number will have to be checked by _some_ clients. Is\n> > that accurate?\n> \n> What's your point? No client that examines pg_attribute can be trusted\n> until it's been examined pretty closely (as in, more closely than\n> Christopher looked at pg_dump). I'd prefer to see us keep the backend\n> simple and trustworthy, rather than pursue a largely-illusory idea that\n> we might be saving some trouble on the client side. The clients are\n> less likely to cause unrecoverable data corruption if something is\n> missed.\n> \n> If we were willing to remap attnums so that clients would require *no*\n> changes, it would be worth doing --- but I believe we've already\n> rejected that approach as unworkable. I don't think \"maybe you don't\n> need to change, but you'd better study your code very carefully anyway\"\n> is a big selling point.\n\nIt sure is. If most people don't need to modify their code, that is a\nwin. Your logic is that we should make everyone modify their code and\nsomehow that will be more reliable? No wonder people think we are more\nworried about clean code than making things easier for our users.\n\nI will vote for the option that has the less pain for our users _and_ in\nthe backend, but if it is close, I will prefer to make things easier on\nclients/users.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Thu, 4 Jul 2002 02:15:46 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "> > What's your point? No client that examines pg_attribute can be trusted\n> > until it's been examined pretty closely (as in, more closely than\n> > Christopher looked at pg_dump). I'd prefer to see us keep the backend\n> > simple and trustworthy, rather than pursue a largely-illusory idea that\n> > we might be saving some trouble on the client side. The clients are\n> > less likely to cause unrecoverable data corruption if something is\n> > missed.\n\nI'm prepared to admit I didn't look at pg_dump too hard. I have to say that\nI agree with Tom here, but that's personal opinion. If Tom reckons that\nthere's places where Hiroshi's implementation needed work and that there\nwould be messiness, then I'm inclined to believe him.\n\nIn all honesty, the amount of changes clients have to make to support\nschemas makes checking dropped columns pale in significance.\n\n> > If we were willing to remap attnums so that clients would require *no*\n> > changes, it would be worth doing --- but I believe we've already\n> > rejected that approach as unworkable. I don't think \"maybe you don't\n> > need to change, but you'd better study your code very carefully anyway\"\n> > is a big selling point.\n\nExactly. I like the whole 'explicit' idea of having attisdropped. There's\nno ifs and buts. It's not a case of, \"oh, the attnum is negative, but it's\nnot an arbitratily negative system column\" sort of thing.\n\n> I will vote for the option that has the less pain for our users _and_ in\n> the backend, but if it is close, I will prefer to make things easier on\n> clients/users.\n\nI will vote for attisdropped. However, I'm not a main developer and I will\ngo with the flow. 
In the meantime, I'm developing attisdropped but using\nsome of Hiroshi's implementation...\n\nChris\n\n\n\n", "msg_date": "Thu, 4 Jul 2002 14:27:40 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Well, it seems isdropped is going to have to be checked by _any_ client,\n> > while holes in the number will have to be checked by _some_ clients. Is\n> > that accurate?\n> \n> What's your point? No client that examines pg_attribute can be trusted\n> until it's been examined pretty closely (as in, more closely than\n> Christopher looked at pg_dump). I'd prefer to see us keep the backend\n> simple and trustworthy, rather than pursue a largely-illusory idea that\n> we might be saving some trouble on the client side.\n\nLargely-illusory? Almost every pg_attribute query will have to be modified\nfor isdropped, while Hiroshi's approach requires so few changes, we are\nhaving trouble even finding a query that needs to be modified. That's\npretty clear to me.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Thu, 4 Jul 2002 02:27:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> OK, I've been looking at Hiroshi's implementation. It's basically\n> semantically equivalent to mine from what I can see so far. The only\n> difference really is in how the dropped columns are marked.\n\nTrue enough, but that's not a trivial difference. 
The problem with\nHiroshi's implementation is that there's no longer a close tie between\npg_attribute.attnum and physical positions of datums in tuples. I think\nthat that's going to affect a lot of low-level code, and that he hasn't\nfound all of it.\n\nKeeping the attisdropped marker separate from attnum is logically\ncleaner, and IMHO much less likely to lead to trouble down the road.\n\nWe should not allow ourselves to put too much weight on the fact that\nsome clients use \"attnum > 0\" as a filter for attributes that they\n(think they) need not pay attention to. That's only a historical\nartifact, and it's far from clear that it will keep those clients\nout of trouble anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2002 02:28:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN " }, { "msg_contents": "By the way,\n\nWhat happens if someone drops ALL the columns in a table? Do you just leave\nit there as-is without any columns or should it be forbidden or should it be\ninterpreted as a drop table?\n\nChris\n\n\n\n", "msg_date": "Thu, 4 Jul 2002 14:29:34 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "> True enough, but that's not a trivial difference. The problem with\n> Hiroshi's implementation is that there's no longer a close tie between\n> pg_attribute.attnum and physical positions of datums in tuples. I think\n> that that's going to affect a lot of low-level code, and that he hasn't\n> found all of it.\n\nOK, I can see how that would be a problem actually. 
You'd have to regard\nattnum as a 'virtual attnum' and keep having to reverse the computation to\nfigure out what its original attnum was...\n\n> Keeping the attisdropped marker separate from attnum is logically\n> cleaner, and IMHO much less likely to lead to trouble down the road.\n\nI'm a purist and I like to think that good, clean, well thought out code\nalways results in more stable, bug free software.\n\n> We should not allow ourselves to put too much weight on the fact that\n> some clients use \"attnum > 0\" as a filter for attributes that they\n> (think they) need not pay attention to. That's only a historical\n> artifact, and it's far from clear that it will keep those clients\n> out of trouble anyway.\n\nIt's also not 'every client app' that will need to be altered. Just DB\nadmin apps, of which there aren't really that many. And remember, anyone\nwho uses the catalogs directly always does so at their own risk. I think\nthat once we have a proper INFORMATION_SCHEMA anyway, all clients should use\nthat. Heck, if INFORMATION_SCHEMA gets in in 7.3, then clients might have a\n_heap_ of work to do...\n\nChris\n\n\n\n", "msg_date": "Thu, 4 Jul 2002 14:36:45 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: BETWEEN Node & DROP COLUMN " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> What happens if someone drops ALL the columns in a table?\n\nGood point. Ideally we should allow that, but in practice I suspect\nthere are many places that will blow up on zero-length tuples.\nRejecting the situation might be the better part of valor ... 
anyway\nI'm not excited about spending a lot of time searching for such bugs.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2002 02:38:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN " }, { "msg_contents": "Tom Lane wrote:\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > OK, I've been looking at Hiroshi's implementation.  It's basically\n> > semantically equivalent to mine from what I can see so far.  The only\n> > difference really is in how the dropped columns are marked.\n> \n> True enough, but that's not a trivial difference.  The problem with\n> Hiroshi's implementation is that there's no longer a close tie between\n> pg_attribute.attnum and physical positions of datums in tuples.  I think\n> that that's going to affect a lot of low-level code, and that he hasn't\n> found all of it.\n\nIsn't that only for the dropped column?  Don't the remaining columns stay\nlogically clear as far as the tuple storage is concerned?\n\n> \n> Keeping the attisdropped marker separate from attnum is logically\n> cleaner, and IMHO much less likely to lead to trouble down the road.\n\nMy problem is that you are pushing the DROP COLUMN check out into almost\nevery client that uses pg_attribute.  And we are doing this to keep our\nbackend cleaner.  Seems we should do the work once, in the backend, and\nnot burden clients with all of this.\n\n> We should not allow ourselves to put too much weight on the fact that\n> some clients use \"attnum > 0\" as a filter for attributes that they\n> (think they) need not pay attention to.  That's only a historical\n> artifact, and it's far from clear that it will keep those clients\n> out of trouble anyway.\n\nWell, why shouldn't we use the fact that most/all clients don't look at\nattno < 0, and that we have no intention of changing that requirement. \nWe aren't coding in a vacuum. 
We have clients, they do that already,\nlet's use it.\n\nAttno < 0 is not historical.  It is in the current code, and will remain\nso for the foreseeable future, I think.\n\nI honestly don't understand the priorities we are setting here.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Thu, 4 Jul 2002 02:39:56 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Tom Lane wrote:\n> >> Actually, your trial required column dropped-ness to be checked in\n> >> many more places than the proposed approach does.\n> \n> > Have you ever really checked my trial implementation ?\n> \n> Well, I've certainly stumbled over it in places like relcache.c\n> and preptlist.c, which IMHO should not have to know about this...\n> and I have little confidence that there are not more places that\n> would have needed fixes if the change had gotten any wide use.\n> You were essentially assuming that it was okay for pg_attribute.attnum\n> to not agree with indexes into tuple descriptors, which seems very\n> shaky to me.\n\nI already explained it to you once in the thread Re: [HACKERS]\nRFC: Restructuring pg_aggregate. How many times should I\nexplain the same thing ?\nMy trial implementation is essentially the same as adding\nisdropped pg_attribute column. There's no strangeness in\nmy implementation.\nThe reason why I adopted negative attnos is as follows.\nI also explained it more than twice.\n\n1) It doesn't need initdb. 
It was very convenient for\n   the TRIAL implementation.\n2) It's more sensitive about oversights of modification\n   than isdropped column implementation.\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n\n\n", "msg_date": "Thu, 04 Jul 2002 15:53:02 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "> My problem is that you are pushing the DROP COLUMN check out into almost\n> every client that uses pg_attribute.  And we are doing this to keep our\n> backend cleaner.  Seems we should do the work once, in the backend, and\n> not burden clients with all of this.\n\nAs a user of Postgres, I found the following more painful:\n\n* Anti-varchar truncation in 7.2\n* Making you have to quote \"timestamp\"(), etc.\n\nPeople mail the list every day with backwards compatibility problems.  We've\ndone it before, why not do it again?  In fact, I'm sure there are already\nbackwards compatibility problems in 7.3.\n\n> > We should not allow ourselves to put too much weight on the fact that\n> > some clients use \"attnum > 0\" as a filter for attributes that they\n> > (think they) need not pay attention to.  That's only a historical\n> > artifact, and it's far from clear that it will keep those clients\n> > out of trouble anyway.\n>\n> Well, why shouldn't we use the fact that most/all clients don't look at\n> attno < 0, and that we have no intention of changing that requirement.\n> We aren't coding in a vacuum.  We have clients, they do that already,\n> let's use it.\n>\n> Attno < 0 is not historical. 
It is in the current code, and will remain\n> so for the forseeable future, I think.\n\nProblem is, the current code actually assumes that attno < 0 means that the\nattribute is a system column, NOT a dropped user column.\n\nAs an example, I'd have to change all of these in the Postgres source code:\n\n /* Prevent them from altering a system attribute */\n if (attnum < 0)\n elog(ERROR, \"ALTER TABLE: Cannot alter system attribute\n\\\"%s\\\"\",\n colName);\n\nWho knows how many other things like this are littered through the source?\n\nChris\n\n\n\n", "msg_date": "Thu, 4 Jul 2002 14:54:14 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "> > No, he only breaks even for client cleanliness --- either approach\n> > *will* require changes in clients that look at pg_attribute.\n> \n> Uh, Christopher already indicated three clients, psql, pg_dump, and\n> another that will not require changes for Hiroshi's approach, but will\n> require changes for isdropped. That doesn't seem \"break even\" to me.\n\nAnd Tom pointed out that I was wrong...\n\nChris\n\n\n\n", "msg_date": "Thu, 4 Jul 2002 15:07:59 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "Tom Lane wrote:\n> \n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > OK, I've been looking at Hiroshi's implementation. It's basically\n> > semantically equivalent to mine from what I can see so far. The only\n> > difference really is in how the dropped columns are marked.\n> \n> True enough, but that's not a trivial difference.\n\n> The problem with\n> Hiroshi's implementation is that there's no longer a close tie between\n> pg_attribute.attnum and physical positions of datums in tuples. \n\n?? 
Where does the above consideration come from ?\n\nBTW there seems a misunderstanding about my posting.\nI'm not objecting to add attisdropped pg_attribute column.\nThey are essentially the same and so I used macros\nlike COLUMN_IS_DROPPED in my implementation so that\nI can easily change the implementation to use isdropped\npg_attribute column.\nI'm only correcting the unfair valuation for my\ntrial work.\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n\n\n", "msg_date": "Thu, 04 Jul 2002 18:21:30 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "> BTW there seems a misunderstanding about my posting.\n> I'm not objecting to add attisdropped pg_attribute column.\n> They are essentially the same and so I used macros\n> like COLUMN_IS_DROPPED in my implementation so that\n> I can easily change the implementation to use isdropped\n> pg_attribute column.\n> I'm only correcting the unfair valuation for my\n> trial work.\n\nHiroshi, I totally respect your trial work. In fact, I'm relying on it to\ndo the attisdropped implementation. I think everyone's beginning to get a\nbit cranky here - I think we should all just calm down.\n\nChris\n\n\n\n", "msg_date": "Thu, 4 Jul 2002 17:27:18 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "Thomas Lockhart wrote:\n> > Well in the renumbering case, the client needs to know about missing attnos\n> > and it has to know to ignore negative attnos (which it probably does\n> > already). ie. psql and pg_dump wouldn't have to be modified in that case.\n> > In the isdropped case, the client needs to know to exclude any column with\n> > 'attisdropped' set to true.\n> > So in both cases, the client needs to be updated.\n> \n> How about defining a view (or views) which hides these details? 
Perhaps\n> a view which is also defined in SQL99 as one of the information_schema\n> views which we might like to have anyway?\n\nWe could change pg_attribute to another name, and create a view called\npg_attribute that never returned isdropped columns to the client. That\nwould allow clients to work cleanly, and the server to work cleanly.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Thu, 4 Jul 2002 08:20:19 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Largely-illusory? Almost every pg_attribute query will have to be modified\n> for isdropped, while Hiroshi's approach requires so few changes, we are\n> having trouble even finding a query that needs to be modified. That's\n> pretty clear to me.\n\nApparently you didn't think hard about the pg_dump example. The problem\nthere isn't the query so much as it is the wired-in assumption that the\nretrieved rows will correspond to attnums 1-N in sequence. That\nassumption breaks either way we do it. 
The illusion is thinking that\nclients won't break.\n\nI suspect it will actually be easier to fix pg_dump if we use the\nattisdropped approach --- it could keep the assumption that its array\nindexes equal attnums, include attisdropped explicitly in the rows\nit stores, and just not output rows that have attisdropped true.\nGetting rid of the index == attnum assumption will be a considerably\nmore subtle, and fragile, patch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2002 12:34:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN " }, { "msg_contents": "> We could change pg_attribute to another name, and create a view called\n> pg_attribute that never returned isdropped columns to the client. That\n> would allow clients to work cleanly, and the server to work cleanly.\n\nAnother case where having an informational schema would eliminate the\nwhole argument -- as the clients wouldn't need to touch the system\ntables.\n\nAny thoughts on that initial commit Peter?\n\n\n\n", "msg_date": "04 Jul 2002 17:29:28 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "Rod Taylor wrote:\n> > We could change pg_attribute to another name, and create a view called\n> > pg_attribute that never returned isdropped columns to the client. That\n> > would allow clients to work cleanly, and the server to work cleanly.\n> \n> Another case where having an informational schema would eliminate the\n> whole argument -- as the clients wouldn't need to touch the system\n> tables.\n> \n> Any thoughts on that initial commit Peter?\n\n From my new understanding, the client coders _want_ to see the isdropped\nrow so the attno's are consecutive.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Thu, 4 Jul 2002 17:32:11 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n>> We could change pg_attribute to another name, and create a view called\n>> pg_attribute that never returned isdropped columns to the client. That\n>> would allow clients to work cleanly, and the server to work cleanly.\n\n> Another case where having an informational schema would eliminate the\n> whole argument -- as the clients wouldn't need to touch the system\n> tables.\n\nThis is a long-term solution, not a near-term one. I suspect it's\nreally unlikely that pg_dump, pgAdmin, etc will ever want to switch\nover to the SQL-standard informational schema, because they will want\nto be able to look at Postgres-specific features that are not reflected\nin the standardized schema. Certainly there will be no movement in\nthat direction until the informational schema is complete; a first-cut\nimplementation won't attract any interest at all :-(\n\nI thought about the idea of a backward-compatible pg_attribute view,\nbut I don't see any efficient way to generate the consecutively-numbered\nattnum column in a view; anyone?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2002 19:56:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN " }, { "msg_contents": "Tom Lane wrote:\n> Rod Taylor <rbt@zort.ca> writes:\n> >> We could change pg_attribute to another name, and create a view called\n> >> pg_attribute that never returned isdropped columns to the client. 
That\n> >> would allow clients to work cleanly, and the server to work cleanly.\n> \n> > Another case where having an informational schema would eliminate the\n> > whole argument -- as the clients wouldn't need to touch the system\n> > tables.\n> \n> This is a long-term solution, not a near-term one. I suspect it's\n> really unlikely that pg_dump, pgAdmin, etc will ever want to switch\n> over to the SQL-standard informational schema, because they will want\n> to be able to look at Postgres-specific features that are not reflected\n> in the standardized schema. Certainly there will be no movement in\n> that direction until the informational schema is complete; a first-cut\n> implementation won't attract any interest at all :-(\n> \n> I thought about the idea of a backward-compatible pg_attribute view,\n> but I don't see any efficient way to generate the consecutively-numbered\n> attnum column in a view; anyone?\n\nNo, we can't, and because our client coders want consecutive, it is a\ndead idea. Even if we could do it, we would be feeding clients attno\nvalues that are inaccurate, causing problems when attno is joined to\nother tables.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Thu, 4 Jul 2002 20:03:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Even if we could do it, we would be feeding clients attno\n> values that are inaccurate, causing problems when attno is joined to\n> other tables.\n\nGood point; we'd need similar views replacing pg_attrdef and probably\nother places. 
Messy indeed :-(\n\nBut as Dave already pointed out, it's probably pointless to worry.\nThe schema support in 7.3 will already de-facto break nearly every\nclient that inspects the system catalogs, so adding some more work\nfor DROP COLUMN support isn't going to make much difference.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2002 20:15:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN " }, { "msg_contents": "> > We could change pg_attribute to another name, and create a view called\n> > pg_attribute that never returned isdropped columns to the client. That\n> > would allow clients to work cleanly, and the server to work cleanly.\n>\n> Another case where having an informational schema would eliminate the\n> whole argument -- as the clients wouldn't need to touch the system\n> tables.\n\nSince postgres has so many features that standard SQL doesn't have (eg.\ncustom operators), how are they going to be shown in the information schema?\n\nChris\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 10:07:22 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "On Thu, 2002-07-04 at 22:07, Christopher Kings-Lynne wrote:\n> > > We could change pg_attribute to another name, and create a view called\n> > > pg_attribute that never returned isdropped columns to the client. 
That\n> > > would allow clients to work cleanly, and the server to work cleanly.\n> >\n> > Another case where having an informational schema would eliminate the\n> > whole argument -- as the clients wouldn't need to touch the system\n> > tables.\n> \n> Since postgres has so many features that standard SQL doesn't have (eg.\n> custom operators), how are they going to be shown in the information schema?\n\nI would assume we would add pg_TABLE or TABLES.pg_COLUMN as appropriate\nand where it wouldn't disturb normal usage.\n\nIf we always put pg columns at the end it shouldn't disturb programs\nwhich use vectors to pull information out of the DB with a target of *.\n\n\n\n", "msg_date": "04 Jul 2002 22:48:06 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> \n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > OK, I've been looking at Hiroshi's implementation. It's basically\n> > semantically equivalent to mine from what I can see so far. The only\n> > difference really is in how the dropped columns are marked.\n> \n> True enough, but that's not a trivial difference. The problem with\n> Hiroshi's implementation is that there's no longer a close tie between\n> pg_attribute.attnum and physical positions of datums in tuples. I think\n> that that's going to affect a lot of low-level code, and that he hasn't\n> found all of it.\n\nPlease don't propagate an unfair valuation without any verification.\n\nregards,\nHiroshi Inoue\n\n\n", "msg_date": "Sun, 7 Jul 2002 00:43:26 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN " } ]
[ { "msg_contents": "Hi all,\n\nThe attached patch implements the SQL92 UNIQUE predicate. I've written\nsome regression tests (as well as adding a few for subselects in FROM\nclauses). I'll update the documentation if/when this patch is accepted.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC", "msg_date": "Wed, 3 Jul 2002 12:21:23 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "UNIQUE predicate" }, { "msg_contents": "Hi,\n\nI've attached the changes I've made to pg_attribute.h - I can't see what's\nwrong but whenever I do an initdb it fails:\n\ninitdb -D /home/chriskl/local/data\nThe files belonging to this database system will be owned by user \"chriskl\".\nThis user must also own the server process.\n\nThe database cluster will be initialized with locale C.\n\ncreating directory /home/chriskl/local/data... ok\ncreating directory /home/chriskl/local/data/base... ok\ncreating directory /home/chriskl/local/data/global... ok\ncreating directory /home/chriskl/local/data/pg_xlog... ok\ncreating directory /home/chriskl/local/data/pg_clog... 
ok\ncreating template1 database in /home/chriskl/local/data/base/1...\ninitdb failed.\nRemoving /home/chriskl/local/data.\n\nChris", "msg_date": "Thu, 4 Jul 2002 10:11:36 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Adding attisdropped" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> I've attached the changes I've made to pg_attribute.h - I can't see what's\n> wrong but whenever I do an initdb it fails:\n\nDid you change the relnatts entry in pg_class.h for pg_attribute?\n\nMore generally, run initdb with -d or -v or whatever its debug-output\nswitch is, and look at the last few lines to see the actual error.\n(Caution: this may produce megabytes of output.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2002 00:58:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Adding attisdropped " }, { "msg_contents": "\nSeems we may not need isdropped, so I will hold on evaluating this.\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> Hi,\n> \n> I've attached the changes I've made to pg_attribute.h - I can't see what's\n> wrong but whenever I do an initdb it fails:\n> \n> initdb -D /home/chriskl/local/data\n> The files belonging to this database system will be owned by user \"chriskl\".\n> This user must also own the server process.\n> \n> The database cluster will be initialized with locale C.\n> \n> creating directory /home/chriskl/local/data... ok\n> creating directory /home/chriskl/local/data/base... ok\n> creating directory /home/chriskl/local/data/global... ok\n> creating directory /home/chriskl/local/data/pg_xlog... ok\n> creating directory /home/chriskl/local/data/pg_clog... 
ok\n> creating template1 database in /home/chriskl/local/data/base/1...\n> initdb failed.\n> Removing /home/chriskl/local/data.\n> \n> Chris\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Thu, 4 Jul 2002 01:27:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Adding attisdropped" }, { "msg_contents": "nconway@klamath.dyndns.org (Neil Conway) writes:\n> The attached patch implements the SQL92 UNIQUE predicate.\n\nThe implementation seems to be well short of usefulness in a production\nsetting, for two reasons: (1) you're accumulating all the tuples into\nmemory --- what if they don't fit? (2) the comparison step is O(N^2),\nwhich renders the first point rather moot ... a test case large enough\nto risk memory exhaustion will not complete in your lifetime.\n\nI think a useful implementation will require work in the planner to\nconvert the UNIQUE predicate into a SORT/UNIQUE plan structure (somewhat\nlike the way DISTINCT is implemented, but we just want a boolean\nresult).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Jul 2002 17:32:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: UNIQUE predicate " }, { "msg_contents": "On Sat, Jul 06, 2002 at 05:32:53PM -0400, Tom Lane wrote:\n> nconway@klamath.dyndns.org (Neil Conway) writes:\n> > The attached patch implements the SQL92 UNIQUE predicate.\n> \n> The implementation seems to be well short of usefulness in a production\n> setting, for two reasons: (1) you're accumulating all the tuples into\n> memory --- what if they don't fit? 
(2) the comparison step is O(N^2),\n> which renders the first point rather moot ... a test case large enough\n> to risk memory exhaustion will not complete in your lifetime.\n\nThat's true -- I probably should have noted in the original email that\nmy implementation was pretty much \"the simplest thing that works\".\n\n> I think a useful implementation will require work in the planner to\n> convert the UNIQUE predicate into a SORT/UNIQUE plan structure (somewhat\n> like the way DISTINCT is implemented, but we just want a boolean\n> result).\n\nHmmm... that's certainly possible, but I'm not sure the feature is\nimportant enough to justify that much effort.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Tue, 9 Jul 2002 16:16:01 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "Re: UNIQUE predicate" }, { "msg_contents": "Neil Conway wrote:\n> On Sat, Jul 06, 2002 at 05:32:53PM -0400, Tom Lane wrote:\n> > nconway@klamath.dyndns.org (Neil Conway) writes:\n> > > The attached patch implements the SQL92 UNIQUE predicate.\n> > \n> > The implementation seems to be well short of usefulness in a production\n> > setting, for two reasons: (1) you're accumulating all the tuples into\n> > memory --- what if they don't fit? (2) the comparison step is O(N^2),\n> > which renders the first point rather moot ... a test case large enough\n> > to risk memory exhaustion will not complete in your lifetime.\n> \n> That's true -- I probably should have noted in the original email that\n> my implementation was pretty much \"the simplest thing that works\".\n> \n> > I think a useful implementation will require work in the planner to\n> > convert the UNIQUE predicate into a SORT/UNIQUE plan structure (somewhat\n> > like the way DISTINCT is implemented, but we just want a boolean\n> > result).\n> \n> Hmmm... 
that's certainly possible, but I'm not sure the feature is\n> important enough to justify that much effort.\n\nI am going to agree with Tom on this one. We do foreign key triggers in\nmemory, but having an entire query result in memory to perform UNIQUE\nseems really stretching the resources of the machine.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Jul 2002 18:17:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: UNIQUE predicate" } ]
[ { "msg_contents": "Hey,\n\n Oreilly and Assoc is interviewing me and they asked me two questions I \ndon't have the answers to:\n\nWhen is 7.3 set to land?\n\nWhen is 8.0 set to land?\n\n\nI said, when they're done, but they want a little more ;)\n\nJoshua Drake\nCo-Author Practical PostgreSQL\nCommand Prompt, Inc. -- Creators of Mammoth PostgreSQL\n\n\n\n", "msg_date": "Wed, 03 Jul 2002 13:46:46 -0700", "msg_from": "\"Joshua D. Drake\" <jd@commandprompt.com>", "msg_from_op": true, "msg_subject": "I am being interviewed by OReilly" }, { "msg_contents": "\"Joshua D. Drake\" <jd@commandprompt.com> writes:\n> Oreilly and Assoc is interviewing me and they asked me two questions I \n> don't have the answers to:\n> When is 7.3 set to land?\n> When is 8.0 set to land?\n\n7.3 will go beta at the end of August, barring major disasters.\nAs for final release, it's done when it's done --- the optimistic\nschedule would be end of September, but we do not release by the\ncalendar. We release when we think the code is ready.\n\nThere is no plan anywhere that involves an 8.0; if anyone thinks\nthey know how many 7.* releases there will be, when 8.0 will be\nout, or what will be in it, they are just blowing smoke. We have\na hard enough time seeing ahead to the next release...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2002 00:23:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly " }, { "msg_contents": "On Wed, Jul 03, 2002 at 01:46:46PM -0700, Joshua D. Drake wrote:\n> \n> When is 7.3 set to land?\n> \n> When is 8.0 set to land?\n\nAs a matter of curiosity, what would constitute \"8.0\" as opposed to,\nsay, 7.4? (I know that 7.0 happened partly because a great whack of\nnew features went in, but I haven't found anything in the -hackers\narchives to explain why the number change. 
Maybe it's just a phase\nof the moon thing, or something.)\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n\n\n", "msg_date": "Thu, 4 Jul 2002 11:36:41 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "> When is 8.0 set to land?\n\nYou might point out that every release is an 8.0 by the pathetic\nstandards now used by many or most products for labeling releases.\n\nWe take a perverse pride in versioning The Old Fashioned Way, perhaps to\nan extreme ;)\n\n - Thomas\n\n\n", "msg_date": "Thu, 04 Jul 2002 08:50:11 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "Andrew Sullivan wrote:\n> On Wed, Jul 03, 2002 at 01:46:46PM -0700, Joshua D. Drake wrote:\n> > \n> > When is 7.3 set to land?\n> > \n> > When is 8.0 set to land?\n> \n> As a matter of curiosity, what would constitute \"8.0\" as opposed to,\n> say, 7.4? (I know that 7.0 happened partly because a great whack of\n> new features went in, but I haven't found anything in the -hackers\n> archives to explain why the number change. Maybe it's just a phase\n> of the moon thing, or something.)\n\nActually, it was a wack of new features in 6.5 when we realized we had\nto up the version on the next release. I think multi-master replication\nwould be an 8.0 item, and point-in-time recovery.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Thu, 4 Jul 2002 12:17:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "Andrew Sullivan <andrew@libertyrms.info> writes:\n> As a matter of curiosity, what would constitute \"8.0\" as opposed to,\n> say, 7.4? (I know that 7.0 happened partly because a great whack of\n> new features went in, but I haven't found anything in the -hackers\n> archives to explain why the number change. Maybe it's just a phase\n> of the moon thing, or something.)\n\nI remember quite a deal of argument about whether to call it 7.0 or 6.6;\nwe had started that cycle with the assumption that it would be called\n6.6, and changed our minds near the end. Personally I'd have preferred\nto stick the 7.* label on starting with the next release (actually\ncalled 7.1) which had WAL and TOAST in it. That was really a\nsignificant set of changes, both on the inside and outside.\n\nYou could make a fair argument that the upcoming 7.3 ought to be\ncalled 8.0, because the addition of schema support will break an\nawful lot of client-side code ;-). But I doubt we will do that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2002 13:24:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly " }, { "msg_contents": "Tom Lane wrote:\n> Andrew Sullivan <andrew@libertyrms.info> writes:\n> > As a matter of curiosity, what would constitute \"8.0\" as opposed to,\n> > say, 7.4? (I know that 7.0 happened partly because a great whack of\n> > new features went in, but I haven't found anything in the -hackers\n> > archives to explain why the number change. 
Maybe it's just a phase\n> > of the moon thing, or something.)\n> \n> I remember quite a deal of argument about whether to call it 7.0 or 6.6;\n> we had started that cycle with the assumption that it would be called\n> 6.6, and changed our minds near the end. Personally I'd have preferred\n> to stick the 7.* label on starting with the next release (actually\n> called 7.1) which had WAL and TOAST in it. That was really a\n> significant set of changes, both on the inside and outside.\n> \n> You could make a fair argument that the upcoming 7.3 ought to be\n> called 8.0, because the addition of schema support will break an\n> awful lot of client-side code ;-). But I doubt we will do that.\n\nYes, the problem with incrementing on major features is that we would\nstart to look like Emacs numbering fairly quickly.\n\nAt some point, we may have to modify our name and start at 1.0 again.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Thu, 4 Jul 2002 14:36:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "Tom Lane wrote:\n\n> You could make a fair argument that the upcoming 7.3 ought to be\n> called 8.0, because the addition of schema support\n\nStar-schema support?\n\n will break an\n> awful lot of client-side code ;-).\n\n\n", "msg_date": "Thu, 04 Jul 2002 12:17:56 -0700", "msg_from": "Jeff Glatt is a Dumbass <glatt@dumbass.net>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "Bruce Momjian wrote:\n> \n<snip>\n> \n> At some point, we may have to modify our name and start at 1.0 again.\n\nHeh Heh Heh\n\nLet's do the M$ trick and pick a name that everyone will confuse and\nassume it's us:\n\n\"Standard SQL 1.0\".\n\nSo when people use the popularity question for deciding their database\n\"what database does everyone else use? I just want the standard one...\"\n\nWe win. :)\n\n+ Justin\n\n\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n\n\n", "msg_date": "Fri, 05 Jul 2002 10:22:37 +0930", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On Thu, 4 Jul 2002, Tom Lane wrote:\n\n> You could make a fair argument that the upcoming 7.3 ought to be called\n> 8.0, because the addition of schema support will break an awful lot of\n> client-side code ;-). But I doubt we will do that.\n\nActually, from reading that thread, I started to think along those lines\ntoo ... it is a major change, is there a reason why going to 8.0 on this\none is a bad idea? I realize that its *only* been 2 years that we've been\nin v7.0 ... :) v7.0 was released back in Mar of 2000 ... so its almost\n2.5 years ...\n\nI don't necessarily agree with Bruce's thought that distributed\nreplication would be the marker, since there is no set path to that right\nnow, nor is there, I believe, enough knowledge about whether or not bring\nsuch in will affect anyting other then the backend itself ...\n\nWith this next release, we are looking at breaking the front-end apps, as\nI understand it ... I think that's pretty drastic of a change to force\ngoing to 8.0 ...\n\nWe don't release fast, or often, so our v7.2 is like some other projects\nv7.26, at the rate some of them release ...\n\nI'd like to see this next release go to 8.0 ...\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 00:17:25 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Should next release by 8.0 (Was: Re: [GENERAL] I am being interviewed\n\tby OReilly )" }, { "msg_contents": "On Thu, 4 Jul 2002, Bruce Momjian wrote:\n\n> Tom Lane wrote:\n> > Andrew Sullivan <andrew@libertyrms.info> writes:\n> > > As a matter of curiosity, what would constitute \"8.0\" as opposed to,\n> > > say, 7.4? 
(I know that 7.0 happened partly because a great whack of\n> > > new features went in, but I haven't found anything in the -hackers\n> > > archives to explain why the number change. Maybe it's just a phase\n> > > of the moon thing, or something.)\n> >\n> > I remember quite a deal of argument about whether to call it 7.0 or 6.6;\n> > we had started that cycle with the assumption that it would be called\n> > 6.6, and changed our minds near the end. Personally I'd have preferred\n> > to stick the 7.* label on starting with the next release (actually\n> > called 7.1) which had WAL and TOAST in it. That was really a\n> > significant set of changes, both on the inside and outside.\n> >\n> > You could make a fair argument that the upcoming 7.3 ought to be\n> > called 8.0, because the addition of schema support will break an\n> > awful lot of client-side code ;-). But I doubt we will do that.\n>\n> Yes, the problem with incrementing on major features is that we would\n> start to look like Emacs numbering fairly quickly.\n\nAt 2.5years in v7.x, I think its going to be a long while before we start\ngetting into the 20's :)\n\n> At some point, we may have to modify our name and start at 1.0 again.\n\nYa, that's it ... we've only spent, what, 8 years now making 'PostgreSQL'\nknown, so let's change the name *just* so that we can start at 1.0 and\nface a new challenge of getting ppl to recognize the name?\n\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 00:19:19 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "> With this next release, we are looking at breaking the front-end apps, as\n> I understand it ... 
I think that's pretty drastic of a change to force\n> going to 8.0 ...\n> \n> We don't release fast, or often, so our v7.2 is like some other projects\n> v7.26, at the rate some of them release ...\n> \n> I'd like to see this next release go to 8.0 ...\n\nHmmm...makes sense. I'd be for it.\n\nBTW - has anyone looked at Neil's PREPARE patch yet?\n\nChris\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 11:54:39 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Should next release by 8.0" }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n>> At some point, we may have to modify our name and start at 1.0 again.\n\n> Ya, that's it ... we've only spent, what, 8 years now making 'PostgreSQL'\n> known, so let's change the name *just* so that we can start at 1.0 and\n> face a new challenge of getting ppl to recognize the name?\n\nI've heard a number of people opine that we should go back to just plain\n'Postgres', which is pronounceable by the uninitiate, and besides which\nthat's what we use informally most of the time. 'PostgreSQL' is about\nas marketing-unfriendly a name as you could easily find...\n\nI'd not be in favor of picking something new out of the blue, but I'd\npick 'Postgres' over 'PostgreSQL' if it were up to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2002 23:59:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly " }, { "msg_contents": "Tom Lane wrote:\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> >> At some point, we may have to modify our name and start at 1.0 again.\n> \n> > Ya, that's it ... 
we've only spent, what, 8 years now making 'PostgreSQL'\n> > known, so let's change the name *just* so that we can start at 1.0 and\n> > face a new challenge of getting ppl to recognize the name?\n> \n> I've heard a number of people opine that we should go back to just plain\n> 'Postgres', which is pronounceable by the uninitiate, and besides which\n> that's what we use informally most of the time. 'PostgreSQL' is about\n> as marketing-unfriendly a name as you could easily find...\n\nI personally agree.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Fri, 5 Jul 2002 00:01:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "\"Marc G. Fournier\" wrote:\n> \n> On Thu, 4 Jul 2002, Tom Lane wrote:\n> \n<snip>\n\nWe can also go any number in between... like \"7.5\"...\n\n:)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> I'd like to see this next release go to 8.0 ...\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n\n\n", "msg_date": "Fri, 05 Jul 2002 13:43:10 +0930", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Should next release by 8.0 (Was: Re: [GENERAL] I am being " }, { "msg_contents": "\nWhile there are big changes between 7.2 and the next release, they\naren't really any bigger than others during the 7.x series. I don't\nreally feel that the next release is worth an 8.0 rather than a 7.3. 
But\nthis is just an opinion; it's not something I'm prepared to argue about.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 16:34:01 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Should next release by 8.0 (Was: Re: [GENERAL] I am" }, { "msg_contents": "On Thu, 4 Jul 2002, Tom Lane wrote:\n\n> I'd not be in favor of picking something new out of the blue, but I'd\n> pick 'Postgres' over 'PostgreSQL' if it were up to me.\n\nAs I recall the only real reason for the change was to emphasize that\nthe query language had changed to SQL. Back in my young and naive days\n(probably early '95) I remember picking up Postgres, realizing it didn't\nuse SQL as the query language, thinking, \"How terrible!\" and immediately\ndropping it for MySQL. (I'm older and wiser now, but it's too late--all\nthe systems that let you use something less crappy than SQL are now\ngone. *Sigh*.) Anyway, I expect that others had the same experience, and\nthus something like that was required to get people who had previously\ndropped it to go back to it again.\n\nNow that QUEL or PostQUEL or whatever it was is long gone and fogotten\n(except maybe in certain CA-Unicenter shops), I see no reason we\ncouldn't go back to \"Postgres\" now.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 16:47:16 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly " }, { "msg_contents": "In my book, schema support is a big thing, leading to rethink a lot of\ndatabase organization and such. PostgreSQL 8 would stress this\nimportance.\n\n-- \nAlessio F. 
Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-22-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n\n\n\n", "msg_date": "05 Jul 2002 15:37:51 +0300", "msg_from": "Alessio Bragadini <alessio@albourne.com>", "msg_from_op": false, "msg_subject": "Re: Should next release by 8.0 (Was: Re: [GENERAL] I am" }, { "msg_contents": "On Fri, 5 Jul 2002, Curt Sampson wrote:\n\n>\n> While there are big changes between 7.2 and the next release, they\n> aren't really any bigger than others during the 7.x series. I don't\n> really feel that the next release is worth an 8.0 rather than a 7.3. But\n> this is just an opinion; it's not something I'm prepared to argue about.\n\nActually, the \"big\" change is such that will, at least as far as I'm\nunderstanding it, break pretty much every front-end applicaiton ... which,\nI'm guessing, is pretty major, no? :)\n\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 09:48:44 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Should next release by 8.0 (Was: Re: [GENERAL] I am" }, { "msg_contents": "On Thu, 4 Jul 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> >> At some point, we may have to modify our name and start at 1.0 again.\n>\n> > Ya, that's it ... we've only spent, what, 8 years now making 'PostgreSQL'\n> > known, so let's change the name *just* so that we can start at 1.0 and\n> > face a new challenge of getting ppl to recognize the name?\n>\n> I've heard a number of people opine that we should go back to just plain\n> 'Postgres', which is pronounceable by the uninitiate, and besides which\n\nI can never figure this out ... what is so difficult about 'Postgres-Q-L'?\n\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 10:05:21 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly " }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> Actually, the \"big\" change is such that will, at least as far as I'm\n> understanding it, break pretty much every front-end applicaiton ...\n\nOnly those that inspect system catalogs --- I'm not sure what percentage\nthat is, but surely it's not \"pretty much every\" one. psql for example\nis only affected because of its \\d commands.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Jul 2002 10:16:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Should next release by 8.0 (Was: Re: [GENERAL] I am being\n\tinterviewed by OReilly )" }, { "msg_contents": "On Fri, 5 Jul 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > Actually, the \"big\" change is such that will, at least as far as I'm\n> > understanding it, break pretty much every front-end applicaiton ...\n>\n> Only those that inspect system catalogs --- I'm not sure what percentage\n> that is, but surely it's not \"pretty much every\" one. psql for example\n> is only affected because of its \\d commands.\n\nOkay, anyone have any ideas of other packages that would inspect the\nsystem catalog? The only ones I could think of, off the top of my head,\nwould be pgAccess, pgAdmin and phpPgAdmin ... but I would guess that any\n'administratively oriented' interface would face similar problems, no?\n\n\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 11:27:16 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Should next release by 8.0 (Was: Re: [GENERAL] I am" }, { "msg_contents": "> Actually, the \"big\" change is such that will, at least as far as I'm\n> understanding it, break pretty much every front-end applicaiton ... which,\n> I'm guessing, is pretty major, no? 
:)\n\nI've always thought of our release numbering as having \"themes\". The 6.x\nseries took Postgres from interesting but buggy to a solid system, with\na clear path to additional capabilities. The 7.x series fleshes out SQL\nstandards compliance and rationalizes the O-R features, as well as adds\nto robustness and speed with WAL etc. And the 8.x series would enable\nPostgres to extend to distributed systems etc., quite likely having some\nfundamental restructuring of the way we handle sources of data (remember\nour discussions a couple years ago regarding \"tuple sources\"?).\n\nSo I feel that bumping to 8.x just for schemas is not necessary. I\n*like* the idea of having more than one or two releases in a series, and\nwould be very happy to see a 7.3 released.\n\n - Thomas\n\n\n", "msg_date": "Fri, 05 Jul 2002 09:37:12 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Should next release by 8.0 (Was: Re: [GENERAL] I am" }, { "msg_contents": "On Fri, 5 Jul 2002, Thomas Lockhart wrote:\n\n> > Actually, the \"big\" change is such that will, at least as far as I'm\n> > understanding it, break pretty much every front-end applicaiton ... which,\n> > I'm guessing, is pretty major, no? :)\n>\n> I've always thought of our release numbering as having \"themes\". The 6.x\n> series took Postgres from interesting but buggy to a solid system, with\n> a clear path to additional capabilities. The 7.x series fleshes out SQL\n> standards compliance and rationalizes the O-R features, as well as adds\n> to robustness and speed with WAL etc. And the 8.x series would enable\n> Postgres to extend to distributed systems etc., quite likely having some\n> fundamental restructuring of the way we handle sources of data (remember\n> our discussions a couple years ago regarding \"tuple sources\"?).\n>\n> So I feel that bumping to 8.x just for schemas is not necessary. 
I\n> *like* the idea of having more than one or two releases in a series, and\n> would be very happy to see a 7.3 released.\n\nSeems I'm the only one for 8.x, so 7.3 it is :)\n\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 13:43:26 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Should next release by 8.0 (Was: Re: [GENERAL] I am" }, { "msg_contents": "Can't figure it out 'cause you're a techie; most of us that subscribe\nto these lists are... When I was doing tech support I had people asking\n for help with [these aren't typos]\n\nProstgre sequel\n\nPostresquirrel\n\npgsql\n\npsql\n\nProgress\n\netc...\n\nand a lot of the folks I talked to were not newbies, they had been using\nit for a while.\n\n> -----Original Message-----\n> From: pgsql-general-owner@postgresql.org\n> [mailto:pgsql-general-owner@postgresql.org]On Behalf Of Marc G. Fournier\n> Sent: Friday, July 05, 2002 10:05 AM\n> To: Tom Lane\n> Cc: Bruce Momjian; Andrew Sullivan; pgsql-general@postgresql.org\n> Subject: Re: [GENERAL] I am being interviewed by OReilly\n>\n>\n> On Thu, 4 Jul 2002, Tom Lane wrote:\n>\n> > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > >> At some point, we may have to modify our name and start at 1.0 again.\n> >\n> > > Ya, that's it ... we've only spent, what, 8 years now making\n> 'PostgreSQL'\n> > > known, so let's change the name *just* so that we can start at 1.0 and\n> > > face a new challenge of getting ppl to recognize the name?\n> >\n> > I've heard a number of people opine that we should go back to just plain\n> > 'Postgres', which is pronounceable by the uninitiate, and besides which\n>\n> I can never figure this out ... 
what is so difficult about 'Postgres-Q-L'?\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n>\n>\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 13:48:38 -0300", "msg_from": "\"Jeff MacDonald\" <jeff@tsunamicreek.com>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly " }, { "msg_contents": "On Fri, 5 Jul 2002, Marc G. Fournier wrote:\n\n> On Thu, 4 Jul 2002, Tom Lane wrote:\n> \n> > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > >> At some point, we may have to modify our name and start at 1.0 again.\n> >\n> > > Ya, that's it ... we've only spent, what, 8 years now making 'PostgreSQL'\n> > > known, so let's change the name *just* so that we can start at 1.0 and\n> > > face a new challenge of getting ppl to recognize the name?\n> >\n> > I've heard a number of people opine that we should go back to just plain\n> > 'Postgres', which is pronounceable by the uninitiate, and besides which\n> \n> I can never figure this out ... what is so difficult about 'Postgres-Q-L'?\n\n\nOf course I got it completely the wrong way round and thought Postgres was the\nlatter of the two names. I've got to go change my documents now.\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 17:48:38 +0100 (BST)", "msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "Marc G. Fournier wrote:\n> On Thu, 4 Jul 2002, Tom Lane wrote:\n> \n> > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > >> At some point, we may have to modify our name and start at 1.0 again.\n> >\n> > > Ya, that's it ... 
we've only spent, what, 8 years now making 'PostgreSQL'\n> > > known, so let's change the name *just* so that we can start at 1.0 and\n> > > face a new challenge of getting ppl to recognize the name?\n> >\n> > I've heard a number of people opine that we should go back to just plain\n> > 'Postgres', which is pronounceable by the uninitiate, and besides which\n> \n> I can never figure this out ... what is so difficult about 'Postgres-Q-L'?\n\nBeats me, but when the Addison-Wesley publisher called to talk to me\nabout doing a book, he called it Postgre. I knew we were in trouble.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Fri, 5 Jul 2002 13:50:05 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "Marc G. Fournier wrote:\n> On Fri, 5 Jul 2002, Tom Lane wrote:\n> \n> > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > Actually, the \"big\" change is such that will, at least as far as I'm\n> > > understanding it, break pretty much every front-end application ...\n> >\n> > Only those that inspect system catalogs --- I'm not sure what percentage\n> > that is, but surely it's not \"pretty much every\" one. psql for example\n> > is only affected because of its \\d commands.\n> \n> Okay, anyone have any ideas of other packages that would inspect the\n> system catalog? The only ones I could think of, off the top of my head,\n> would be pgAccess, pgAdmin and phpPgAdmin ... but I would guess that any\n> 'administratively oriented' interface would face similar problems, no?\n\nThat's a good point. Only the admin stuff is affected, not all\napplications. 
All applications _can_ now use schemas, but for most\ncases applications remain working unchanged.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Fri, 5 Jul 2002 14:25:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Should next release by 8.0 (Was: Re: [GENERAL] I am" }, { "msg_contents": "Well, for one:\n\nAn awful lot of people think the name is:\n\"Postgre-SQL\", or\n\"Postgre-SEQUEL\"\n\nIf marketing matters (as a lot of people have been\nsuggesting in recent threads), then better to have a\nnon-ambiguous name that is easy to pronounce\ncorrectly.\n\n--- \"Marc G. Fournier\" <scrappy@hub.org> wrote:\n> I can never figure this out ... what is so difficult\n> about 'Postgres-Q-L'?\n> \n\n\n__________________________________________________\nDo You Yahoo!?\nSign up for SBC Yahoo! Dial - First Month Free\nhttp://sbc.yahoo.com\n\n\n", "msg_date": "Fri, 5 Jul 2002 11:57:29 -0700 (PDT)", "msg_from": "Jeff Eckermann <jeff_eckermann@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly " }, { "msg_contents": "On Fri, Jul 05, 2002 at 01:50:05PM -0400, Bruce Momjian wrote:\n> \n> Beats me, but when the Addison-Wesley publisher called to talk to me\n> about doing a book, he called it Postgre. I knew we were in trouble.\n\nFor what it's worth, the calling cards they printed for me here\n(which I have used exactly 0 times) when I arrived have \"Postgre SQL\nAdministrator\" on them. And I corrected the typo 5 times and sent it\nback for proof each time. They just didn't believe it, I guess.\n\nAnyway, I never think about it. But I _do_ tend to say \"Postgres\". 
\nPossibly for the same reason that I find \"GNU/Linux\" to be too much\ntrouble.\n\nAndrew \"not to mention Microsoft SQL Server 20,642\" Sullivan\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 16:06:09 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "Bruce Momjian dijo: \n\n> Marc G. Fournier wrote:\n\n> > I can never figure this out ... what is so difficult about 'Postgres-Q-L'?\n> \n> Beats me, but when the Addison-Wesley publisher called to talk to me\n> about doing a book, he called it Postgre. I knew we were in trouble.\n\nLots of people here call it \"Postgre\", even something like \"Postgree\"\n(Postgri in Spanish). Others say \"Postgre-S-Q-L\". But the vast\nmajority uses plain \"Postgres\". I have yet to meet somebody who says\n\"Postgres-Q-L\".\n\nI think the pronunciation is really counter-intuitive. Not that it's\ndifficult to say; I think it's difficult to read.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"God is real, unless declared as int\"\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 18:14:28 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "> Beats me, but when the Addison-Wesley publisher called to talk to me\n> about doing a book, he called it Postgre. I knew we were in trouble.\n\nI know someone who does that too (even after telling him it is NOT Postgre a\nLOT of times)... 
I just don't know where people get that idea!\n\nSander.\n\n\n\n\n\n", "msg_date": "Sat, 6 Jul 2002 00:44:07 +0200", "msg_from": "\"Sander Steffann\" <sander@steffann.nl>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "Hi!\n\n> I've always thought of our release numbering as having \"themes\". The 6.x\n> series took Postgres from interesting but buggy to a solid system, with\n> a clear path to additional capabilities. The 7.x series fleshes out SQL\n> standards compliance and rationalizes the O-R features, as well as adds\n> to robustness and speed with WAL etc. And the 8.x series would enable\n> Postgres to extend to distributed systems etc.\n\nThis sounds very good to me. I get the feeling sometimes that software\nprojects just increase the major version number to 'sound interesting'. I\ndon't think that PostgreSQL needs that anymore. A modest numbering policy\nmight even give it a 'stable' feeling...\n\nSander.\n\n\n\n\n\n", "msg_date": "Sat, 6 Jul 2002 00:51:55 +0200", "msg_from": "\"Sander Steffann\" <sander@steffann.nl>", "msg_from_op": false, "msg_subject": "Re: Should next release by 8.0 (Was: Re: [GENERAL] I am" }, { "msg_contents": "Sander Steffann wrote:\n> > Beats me, but when the Addison-Wesley publisher called to talk to me\n> > about doing a book, he called is Postgre. I knew we were in trouble.\n> \n> I know someone who does that too (even after telling him it is NOT Postgre a\n> LOT of times)... I just don't know where people get that idea!\n\nI didn't write it correctly. He said \"Postgray\" I think or \"Postgree\". \nBoth made me feel a little ill. 
The strange thing is that until\nPostgreSQL got popular, no one really said the word, we just wrote it,\nand my editor has a \"PostgreSQL\" string macro so I don't even type it\nanymore, but when you start to talk to people, it does become a problem.\n\nThe idea of calling it \"Postgres SQL Server\" has merit because it is so\nclose to what we already have, just an added 's' and a space.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Fri, 5 Jul 2002 22:13:45 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On Fri, 5 Jul 2002, Jeff Eckermann wrote:\n\n> Well, for one:\n>\n> An awful lot of people think the name is:\n> \"Postgre-SQL\", or\n> \"Postgre-SEQUEL\"\n>\n> If marketing matters (as a lot of people have been\n> suggesting in recent threads), then better to have a\n> non-ambiguous name that is easy to pronounce\n> correctly.\n\nPost-gres-Q-L ... again, that is difficult to pronounce in what way? Or\nis it just too much work to educate ppl? I'm sorry, but I have a lot of\nppl that ask me about 'Postgres', and the first thing I do is explain to\nthem what *Postgres* was, and how to pronounce Post-gres-Q-L ...\npersonally, I've been saying it for so long now that it just rolls off the\ntongue *shrug*\n\n\n\n\n", "msg_date": "Sat, 6 Jul 2002 00:04:26 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly " }, { "msg_contents": "On Sat, 6 Jul 2002, Marc G. Fournier wrote:\n\n> Post-gres-Q-L ... 
again, that is difficult to pronounce in what way?\n\n With the accent on which syllable?\n\nRich\n:-)\n\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 20:25:34 -0700 (PDT)", "msg_from": "Rich Shepard <rshepard@appl-ecosys.com>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly " }, { "msg_contents": "On Fri, 5 Jul 2002, Rich Shepard wrote:\n\n> On Sat, 6 Jul 2002, Marc G. Fournier wrote:\n>\n> > Post-gres-Q-L ... again, that is difficult to pronounce in what way?\n>\n> With the accent on which syllable?\n\nI never really thought about it ... but you tell me:\n\n\thttp://www2.ca.postgresql.org/postgresql.mp3\n\nI don't really hear an accent on any of the syllables in there, but I've\nbeen known to be tone deaf too :)\n\n\n\n\n", "msg_date": "Sat, 6 Jul 2002 01:17:34 -0300 (ADT)", "msg_from": "\"Marc G. 
Microsoft SQL\nServer, which people already say. Sometimes, okay so most times, this is\nshortened to SQL Server and we'd just be reversing this emphasis.\n\nAnyway, I do quite like the PostgreSQL, even if I've had to go back and\ncapitalise that S twice now, I just thought I'd point out the blindingly\nobvious.\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n\n\n", "msg_date": "Sat, 6 Jul 2002 11:07:03 +0100 (BST)", "msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On July 5, 2002 10:27 am, Marc G. Fournier wrote:\n> On Fri, 5 Jul 2002, Tom Lane wrote:\n> > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > Actually, the \"big\" change is such that will, at least as far as I'm\n> > > understanding it, break pretty much every front-end applicaiton ...\n> >\n> > Only those that inspect system catalogs --- I'm not sure what percentage\n> > that is, but surely it's not \"pretty much every\" one. psql for example\n> > is only affected because of its \\d commands.\n>\n> Okay, anyone have any ideas of other packages that would inspect the\n> system catalog? The only ones I could think of, off the top of my head,\n> would be pgAccess, pgAdmin and phpPgAdmin ... but I would guess that any\n> 'administratively oriented' interface would face similar problems, no?\n\nPyGreSQL pokes into the catalogues a bit.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n\n\n", "msg_date": "Sat, 6 Jul 2002 06:15:10 -0400", "msg_from": "\"D'Arcy J.M. 
Cain\" <darcy@druid.net>", "msg_from_op": false, "msg_subject": "Re: Should next release by 8.0 (Was: Re: [GENERAL] I am" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> The idea of calling it \"Postgres SQL Server\" has merit because it is so\n> close to what we already have, just an added 's' and a space.\n\n... and a M$ trademark violation suit, just waiting to happen whenever\nM$ decides we are big enough to be a threat.\n\nStay far far away from any name including \"SQL Server\".\n\n(I still like plain \"Postgres\" though.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Jul 2002 11:20:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly " }, { "msg_contents": "On Sat, 6 Jul 2002, Marc G. Fournier wrote:\n\n> I never really thought about it ... but you tell me:\n> \n> \thttp://www2.ca.postgresql.org/postgresql.mp3\n> \n> I don't really hear an accent on any of the syllables in there, but I've\n> been known to be tone deaf too :)\n\n No sound card or speakers here, only the built-in squeeker that comes with\nall units. In fact, I was just teasing, anyway, because the thread on the\nname and its pronounciation had taken on such a long and serious life. From\na linguistic perspective, however, I tend to pronounce it with the accent on\nthe second syllable.\n\nCiao,\n\nRich\n\nDr. Richard B. Shepard, President\n\n Applied Ecosystem Services, Inc. 
(TM)\n 2404 SW 22nd Street | Troutdale, OR 97060-1247 | U.S.A.\n + 1 503-667-4517 (voice) | + 1 503-667-8863 (fax) | rshepard@appl-ecosys.com\n http://www.appl-ecosys.com\n\n\n\n", "msg_date": "Sat, 6 Jul 2002 08:32:05 -0700 (PDT)", "msg_from": "Rich Shepard <rshepard@appl-ecosys.com>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n>>The idea of calling it \"Postgres SQL Server\" has merit because it is so\n>>close to what we already have, just an added 's' and a space.\n> \n> \n> ... and a M$ trademark violation suit, just waiting to happen whenever\n> M$ decides we are big enough to be a threat.\n> \n> Stay far far away from any name including \"SQL Server\".\n\nIIRC SQL Server (or \"SQL-server\" to be exact) is just a definition from \nthe SQL standard. I doubt whether Microsoft can monopolize something \nlike that (the SQL standard was there first right?). Of course I am \nmaking a pretty big assumption about the sanity of the actual law :)\n\nWhether it is really worthwhile to invest resources into that is an \nentirely different matter. Personally, I am happy with PostgreSQL.\n\nJochem\n\n\n\n", "msg_date": "Sat, 06 Jul 2002 18:15:39 +0200", "msg_from": "Jochem van Dieten <jochemd@oli.tudelft.nl>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "Jochem van Dieten <jochemd@oli.tudelft.nl> writes:\n>> Stay far far away from any name including \"SQL Server\".\n\n> IIRC SQL Server (or \"SQL-server\" to be exact) is just a definition from \n> the SQL standard. I doubt whether Microsoft can monopolize something \n> like that (the SQL standard was there first right?).\n\nMicrosoft has managed to make \"Windows\" into a trademark, even though\nit's by rights a generic term. 
Yes, I know the name of their product\nis really \"Microsoft Windows\", but just try calling something \"Windows\"\nand see what happens ...\n\nEven if we avoid any trademark problems, \"Postgres SQL Server\" just\nseems way too much like a me-too name. It *will* cause confusion\nwith Microsoft's product.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Jul 2002 13:37:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly " }, { "msg_contents": "Marc G. Fournier wrote:\n> On Fri, 5 Jul 2002, Jeff Eckermann wrote:\n> \n> > Well, for one:\n> >\n> > An awful lot of people think the name is:\n> > \"Postgre-SQL\", or\n> > \"Postgre-SEQUEL\"\n> >\n> > If marketing matters (as a lot of people have been\n> > suggesting in recent threads), then better to have a\n> > non-ambiguous name that is easy to pronounce\n> > correctly.\n> \n> Post-gres-Q-L ... again, that is difficult to pronounce in what way? Or\n\nI think the problem is that PostgreSQL is both a name and an acronym,\nmixed into a single word. Postgres is a name, SQL is an acronym,\nPostgreSQL is both. 
That is where people get confused.\n\nThe problem is that the typography makes it look like the split should\nbe \"Postgre / SQL\". I can see exactly why people think the name portion\nis \"Postgre\" --- it's not at all apparent that the \"S\" is part of both\nparts of the word, until you've been told.\n\nHad we capitalized the name like \"PostgresQL\" maybe the correct\npronunciation would be more obvious.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Jul 2002 14:10:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly " }, { "msg_contents": " \n> I think the problem is that PostgreSQL is both a name and an acronym,\n> mixed into single word. Postgres is a name, SQL is an acronym,\n> PostgreSQL is both. That is where people get confused.\n\nthats the point..\ni think \"postgres\" sounds better :)\ninfact i have posted , a couple of times, to pgsql-general@postgres.org :p\n ^_^\n-- \n\nVarun\n------\n@n=(544290696690,305106661574,116357),$b=16,@c=' .JPacehklnorstu'=~\n/./g;for$n(@n){map{$h=int$n/$b**$_;$n-=$b**$_*$h;$c[@c]=$h}c(0..9);\npush@p,map{$c[$_]}@c[c($b..$#c)];$#c=$b-1}print@p;sub'c{reverse @_}\n\n\n\n\n\n", "msg_date": "Sat, 06 Jul 2002 23:46:01 +0530", "msg_from": "Varun Kacholia <varunk@cse.iitb.ac.in>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Post-gres-Q-L ... again, that is difficult to pronounce in what way? Or\n> \n> > I think the problem is that PostgreSQL is both a name and an acronym,\n> > mixed into single word. Postgres is a name, SQL is an acronym,\n> > PostgreSQL is both. That is where people get confused.\n> \n> The problem is that the typography makes it look like the split should\n> be \"Postgre / SQL\". 
I can see exactly why people think the name portion\n> is \"Postgre\" --- it's not at all apparent that the \"S\" is part of both\n> parts of the word, until you've been told.\n> \n> Had we capitalized the name like \"PostgresQL\" maybe the correct\n> pronunciation would be more obvious.\n\nYep, there is no other word that has an acronym part and a word part,\nand where capitalization not match in the two parts. It is almost a\nrecipe for confusion.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Sat, 6 Jul 2002 14:21:30 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Post-gres-Q-L ... again, that is difficult to pronounce in what way? Or\n> \n> > I think the problem is that PostgreSQL is both a name and an acronym,\n> > mixed into single word. Postgres is a name, SQL is an acronym,\n> > PostgreSQL is both. That is where people get confused.\n> \n> The problem is that the typography makes it look like the split should\n> be \"Postgre / SQL\". I can see exactly why people think the name portion\n> is \"Postgre\" --- it's not at all apparent that the \"S\" is part of both\n> parts of the word, until you've been told.\n> \n> Had we capitalized the name like \"PostgresQL\" maybe the correct\n> pronunciation would be more obvious.\n\nNo one pronounces MySQL as Mice-QL.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Sat, 6 Jul 2002 14:22:10 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On Sat, 6 Jul 2002, Tom Lane wrote:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The idea of calling it \"Postgres SQL Server\" has merit because it is so\n> > close to what we already have, just an added 's' and a space.\n>\n> ... and a M$ trademark violation suit, just waiting to happen whenever\n> M$ decides we are big enough to be a threat.\n>\n> Stay far far away from any name including \"SQL Server\".\n>\n> (I still like plain \"Postgres\" though.)\n\nNobody to date, I don't believe, has jumped down ppl's throat for\ninformally calling it Postgres .. the \"formal\" name is PostgreSQL ..\nnothing stops ppl from using Postgres in 'conversation' though ... I\npersonally use PgSQL more often than not, since I find ppl seem to be able\nto spell it, while a lot of ppl that I've dealt with have a problem with\nspelling the Postgres part of PostgreSQL for some reason ...\n\n\n\n\n\n", "msg_date": "Sat, 6 Jul 2002 15:26:29 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly " }, { "msg_contents": "On Sat, 6 Jul 2002, Tom Lane wrote:\n\n> Had we capitalized the name like \"PostgresQL\" maybe the correct\n> pronunciation would be more obvious.\n ^^^^\n\n Remove this word and you'd be dead on. Perhaps the simplest resolution is\nto do just this.\n\nRich\n\n\n\n", "msg_date": "Sat, 6 Jul 2002 11:27:08 -0700 (PDT)", "msg_from": "Rich Shepard <rshepard@appl-ecosys.com>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly " }, { "msg_contents": "Marc G. 
Fournier wrote:\n> On Sat, 6 Jul 2002, Tom Lane wrote:\n> \n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > The idea of calling it \"Postgres SQL Server\" has merit because it is so\n> > > close to what we already have, just an added 's' and a space.\n> >\n> > ... and a M$ trademark violation suit, just waiting to happen whenever\n> > M$ decides we are big enough to be a threat.\n> >\n> > Stay far far away from any name including \"SQL Server\".\n> >\n> > (I still like plain \"Postgres\" though.)\n> \n> Nobody to date, I don't believe, has jumped down ppls throat for\n> informally calling it Postres .. the \"formal\" name is PostgreSQL ..\n> nothing stop's ppl from using Postgres in 'conversation' though ... I\n> personaly use PgSQL more often then not, since I find ppl seem to be able\n\nI think that is because PgSQL is a full acronym, rather than a mixed\nword/acronym combination, though actually if Pg were a word, they would\nsay pu-gu-SQL. The 'my' in MySQL is a word so they say it that way.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Sat, 6 Jul 2002 14:30:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "Rich Shepard wrote:\n> On Sat, 6 Jul 2002, Tom Lane wrote:\n> \n> > Had we capitalized the name like \"PostgresQL\" maybe the correct\n> > pronunciation would be more obvious.\n> ^^^^\n> \n> Remove this word and you'd be dead on. Perhaps the simplest resolution is\n> to do just this.\n\nWhat we really need then is Postgres-Q-L.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Sat, 6 Jul 2002 14:45:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On Sat, 6 Jul 2002, Tom Lane wrote:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The idea of calling it \"Postgres SQL Server\" has merit because it is so\n> > close to what we already have, just an added 's' and a space.\n>\n> ... and a M$ trademark violation suit, just waiting to happen whenever\n> M$ decides we are big enough to be a threat.\n>\n> Stay far far away from any name including \"SQL Server\".\n>\n> (I still like plain \"Postgres\" though.)\n\nHere in Russia, most people use \"Postgres\".\n\n>\n> \t\t\tregards, tom lane\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n\n", "msg_date": "Sat, 6 Jul 2002 21:49:31 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly " }, { "msg_contents": "On Sat, 6 Jul 2002, Bruce Momjian wrote:\n\n> What we really need then is Postgres-Q-L.\n \n That, or PostgreS-Q-L would both make clear the pronounciation and retain\nthe differentiation from the former Postgres. Businesses change names and\nlogos quite successfully. No reason not to make such a change if it resolves\neveryone's concerns.\n\nRich \n\nDr. Richard B. Shepard, President\n\n Applied Ecosystem Services, Inc. 
(TM)\n 2404 SW 22nd Street | Troutdale, OR 97060-1247 | U.S.A.\n + 1 503-667-4517 (voice) | + 1 503-667-8863 (fax) | rshepard@appl-ecosys.com\n http://www.appl-ecosys.com\n\n\n\n", "msg_date": "Sat, 6 Jul 2002 12:08:10 -0700 (PDT)", "msg_from": "Rich Shepard <rshepard@appl-ecosys.com>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On Sat, Jul 06, 2002 at 02:30:17PM -0400, Bruce Momjian wrote:\n> Marc G. Fournier wrote:\n> > On Sat, 6 Jul 2002, Tom Lane wrote:\n> > \n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > The idea of calling it \"Postgres SQL Server\" has merit because it is so\n> > > > close to what we already have, just an added 's' and a space.\n> > >\n> > > ... and a M$ trademark violation suit, just waiting to happen whenever\n> > > M$ decides we are big enough to be a threat.\n> > >\n> > > Stay far far away from any name including \"SQL Server\".\n> > >\n> > > (I still like plain \"Postgres\" though.)\n> > \n> > Nobody to date, I don't believe, has jumped down ppls throat for\n> > informally calling it Postres .. the \"formal\" name is PostgreSQL ..\n> > nothing stop's ppl from using Postgres in 'conversation' though ... I\n> > personaly use PgSQL more often then not, since I find ppl seem to be able\n> \n> I think that is because PgSQL is a full acronym, rather than a mixed\n> word/acronym combination, though actually if Pg were a word, they would\n> say pu-gu-SQL. The 'my' in MySQL is a word so they say it that way.\n\nI often hear it pronounced Pig-Squeal, which is memorable, but may not\ngive the right impression.\n\nCheers,\n Steve\n\n\n", "msg_date": "Sat, 6 Jul 2002 12:09:15 -0700", "msg_from": "Steve Atkins <steve@blighty.com>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "\n\tHi,\n\n> I didn't write it correctly. He said \"Postgray\" I think or \"Postgree\".\n> Both made me feel a little ill. 
The strange thing is that until\n> PostgreSQL got popular, no one really said the word, we just wrote it,\n> and my editor has a \"PostgreSQL\" string macro so I don't even type it\n> anymore, but when you start to talk to people, it does become a problem.\n>\n> The idea of calling it \"Postgres SQL Server\" has merit because it is so\n> close to what we already have, just an added 's' and a space.\n\n\tIs this under discussion? I do think that \"PostgreSQL\" is a good\nname but does cause a lot of confusion. I usually call it just \"Postgres\".\nI don't see the necessity of having \"SQL\" in it's name, although \"Postgres\nSQL\" is also interesting.\n\n\n\t[]'s\n\tRicardo.\n\n\n\n", "msg_date": "Sat, 6 Jul 2002 16:33:16 -0300 (BRT)", "msg_from": "Ricardo Junior <suga@netbsd.com.br>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "\n\tMaybe not so obvious when people start asking what this \"QL\"\nmeans. ;-)\n\n\t[]'s\n\tRicardo.\n\n\nOn Sat, 6 Jul 2002, Tom Lane wrote:\n\n> > I think the problem is that PostgreSQL is both a name and an acronym,\n> > mixed into single word. Postgres is a name, SQL is an acronym,\n> > PostgreSQL is both. That is where people get confused.\n>\n> The problem is that the typography makes it look like the split should\n> be \"Postgre / SQL\". 
I can see exactly why people think the name portion\n> is \"Postgre\" --- it's not at all apparent that the \"S\" is part of both\n> parts of the word, until you've been told.\n>\n> Had we capitalized the name like \"PostgresQL\" maybe the correct\n> pronunciation would be more obvious.\n>\n> \t\t\tregards, tom lane\n\n\n\n", "msg_date": "Sat, 6 Jul 2002 16:41:48 -0300 (BRT)", "msg_from": "Ricardo Junior <suga@netbsd.com.br>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The idea of calling it \"Postgres SQL Server\" has merit because it is so\n> > close to what we already have, just an added 's' and a space.\n>\n> ... and a M$ trademark violation suit, just waiting to happen whenever\n> M$ decides we are big enough to be a threat.\n>\n> Stay far far away from any name including \"SQL Server\".\n\nThen we just leave out the 'Server' :-)\n\n> (I still like plain \"Postgres\" though.)\n\nIf you make it 'Postgres SQL' you have it all. Sure, everybody will call it\n'Postgres' (like a lot of them do now). But you still have the 'PostgreSQL'\nfeeling...\n\nJust my feeling though...\nSander\n\n\n\n\n\n", "msg_date": "Sat, 6 Jul 2002 23:28:44 +0200", "msg_from": "\"Sander Steffann\" <sander@steffann.nl>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly " }, { "msg_contents": "On Sat, 6 Jul 2002, Ricardo Junior wrote:\n\n> On Sat, 6 Jul 2002, Tom Lane wrote:\n>\n> > Had we capitalized the name like \"PostgresQL\" maybe the correct\n> > pronunciation would be more obvious.\n>\n> \tMaybe not so obvious when people start asking what this \"QL\"\n> means. ;-)\n\nThat's obvious. 
Since QL means \"query language,\" \"Postgres QL\" would\nrefer to the old, QUEL-derived query language that Postgres used before\nit was ripped out and replaced with SQL, right?\n\n\"Postgres\" is simple, people use it anyway, and everybody now knows that\nPostgres uses SQL instead of its own query language now, so I think it\nwould be a very good to just switch back to to that. With the demise of\nGreat Bridge, we even have the postgres.org domain name free for this now.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n\n\n", "msg_date": "Mon, 8 Jul 2002 11:51:50 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly " }, { "msg_contents": "Hi Tom:\n\n> You could make a fair argument that the upcoming 7.3 ought to be\n> called 8.0, because the addition of schema support will break an\n> awful lot of client-side code ;-). But I doubt we will do that.\n\nHmm. As it happens, I've written an awful lot of client-side code. Can you\nelaborate on what will break, or point me to a resource that lays it out?\n\n-- sgl\n\n\n=======================================================\nSteve Lane\n\nVice President\nChris Moyer Consulting, Inc.\n833 West Chicago Ave Suite 203\n\nVoice: (312) 433-2421 Email: slane@fmpro.com\nFax: (312) 850-3930 Web: http://www.fmpro.com\n=======================================================\n\n\n\n", "msg_date": "Sun, 07 Jul 2002 23:40:44 -0500", "msg_from": "Steve Lane <slane@fmpro.com>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly " }, { "msg_contents": "On Fri, 5 Jul 2002, Marc G. Fournier wrote:\n\n> > I've heard a number of people opine that we should go back to just plain\n> > 'Postgres', which is pronounceable by the uninitiate, and besides which\n>\n> I can never figure this out ... 
what is so difficult about 'Postgres-Q-L'?\n\nNothing. We even have audio files on the website so there is no question\non how to pronounce it. Some folks just aren't happy unless they can\nchange what doesn't need changing is all. I guess it's their way of\ncontributing.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n\n\n", "msg_date": "Mon, 8 Jul 2002 08:22:02 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly " }, { "msg_contents": "Steve Lane <slane@fmpro.com> writes:\n>> You could make a fair argument that the upcoming 7.3 ought to be\n>> called 8.0, because the addition of schema support will break an\n>> awful lot of client-side code ;-). But I doubt we will do that.\n\n> Hmm. As it happens, I've written an awful lot of client-side code. Can you\n> elaborate on what will break, or point me to a resource that lays it out?\n\nAnything that looks at the system catalogs is likely to have some\ntrouble; the notion that there is at most one pg_class row named 'foo',\nfor example, will fall down.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Jul 2002 10:30:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly " }, { "msg_contents": " From someone whose DSN used to look like: PostgreSQL\n\nI can tell you that P O S T G R E S Q L means *nothing* to people who are\nnew to it. 
However, one can clearly see SQL at the end, and recognise that\nas a valid acronym, so assume that the prefix, and hence the product\nidentifier, is POSTGRE\n\nThat's where even educated people (perhaps even ESPECIALLY educated people)\nget the idea.\n\nTerry Fielder\nNetwork Engineer\nGreat Gulf Homes / Ashton Woods Homes\nterry@greatgulfhomes.com\n\n> -----Original Message-----\n> From: pgsql-general-owner@postgresql.org\n> [mailto:pgsql-general-owner@postgresql.org]On Behalf Of\n> Sander Steffann\n> Sent: Friday, July 05, 2002 6:44 PM\n> To: Bruce Momjian; Marc G. Fournier\n> Cc: Tom Lane; Andrew Sullivan; pgsql-general@postgresql.org\n> Subject: Re: [GENERAL] I am being interviewed by OReilly\n>\n>\n> > Beats me, but when the Addison-Wesley publisher called to talk to me\n> > about doing a book, he called is Postgre. I knew we were\n> in trouble.\n>\n> I know someone who does that too (even after telling him it\n> is NOT Postgre a\n> LOT of times)... I just don't know where people get that idea!\n>\n> Sander.\n>\n>\n>\n>\n>\n>\n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n\n\n", "msg_date": "Mon, 8 Jul 2002 12:17:30 -0400", "msg_from": "terry@greatgulfhomes.com", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Marc G. Fournier wrote:\n> > On Sat, 6 Jul 2002, Tom Lane wrote:\n> >\n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > The idea of calling it \"Postgres SQL Server\" has merit because it is so\n> > > > close to what we already have, just an added 's' and a space.\n> > >\n> > > ... 
and a M$ trademark violation suit, just waiting to happen whenever\n> > > M$ decides we are big enough to be a threat.\n> > >\n> > > Stay far far away from any name including \"SQL Server\".\n> > >\n> > > (I still like plain \"Postgres\" though.)\n> >\n> > Nobody to date, I don't believe, has jumped down ppls throat for\n> > informally calling it Postres .. the \"formal\" name is PostgreSQL ..\n> > nothing stop's ppl from using Postgres in 'conversation' though ... I\n> > personaly use PgSQL more often then not, since I find ppl seem to be able\n> \n> I think that is because PgSQL is a full acronym, rather than a mixed\n> word/acronym combination, though actually if Pg were a word, they would\n> say pu-gu-SQL. The 'my' in MySQL is a word so they say it that way.\n\nWouldn't that be \"PiggySeeQuel\" ?\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Mon, 08 Jul 2002 14:27:48 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "Marc G. Fournier:\n\n> I can never figure this out ... what is so difficult about 'Postgres-Q-L'?\n\nhttp://www.postgresql.org/\n\n\"Ever wonder how PostgreSQL is really pronounced?\" \n\n;-)\n", "msg_date": "Thu, 11 Jul 2002 20:46:36 +0200", "msg_from": "knut.suebert@web.de", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" } ]
[ { "msg_contents": "I have tried to compile PostgreSQL with the Intel C Compiler 6.0 for \nLinux. During this process some errors occurred which I have attached to \nthis email. I have compiled the sources using:\n\n[hs@duron postgresql-7.2.1]$ cat compile.sh\n#!/bin/sh\n\nCC=/usr/local/intel_compiler/compiler60/ia32/bin/icc CFLAGS=' -O3 ' \n./configure\nmake\n\nIf anybody is interested in testing the compiler feel free to contact me.\n\n Hans", "msg_date": "Thu, 04 Jul 2002 00:41:24 +0200", "msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <postgres@cybertec.at>", "msg_from_op": true, "msg_subject": "Compiling PostgreSQL with Intel C Compiler 6.0" }, { "msg_contents": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <postgres@cybertec.at> writes:\n> I have tried to compile PostgreSQL with the Intel C Compiler 6.0 for \n> Linux. During this process some errors occurred which I have attached to \n> this email. I have compiled the sources using:\n\nThese are not errors, only overly-pedantic warnings.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2002 00:47:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Compiling PostgreSQL with Intel C Compiler 6.0 " } ]
[ { "msg_contents": "Dear all,\n\nA simple testing program :\n\n* * * * * * * * * * * * * * begin * * * * * * * * * * * * * * * *\n\n#include <stdio.h>\n#include <stdlib.h>\n\nint main (void)\n{\n unsigned int v;\n\n v = 0x87654321L;\n\n return (0);\n}\n\n* * * * * * * * * * * * * * end * * * * * * * * * * * * * * * *\n\ncompile with ecpg using :\n\n ecpg -o mytest.c -I/usr/include/pgsql mytest.pgc\n\nproduces the output C program file as follow :\n\n* * * * * * * * * * * * * * begin * * * * * * * * * * * * * * * *\n\n/* Processed by ecpg (2.8.0) */\n/* These three include files are added by the preprocessor */\n#include <ecpgtype.h>\n#include <ecpglib.h>\n#include <ecpgerrno.h>\n#line 1 \"test.pgc\"\n#include <stdio.h>\n#include <stdlib.h>\n\nint main (void)\n{\n unsigned int v;\n\n v = '0x87654321'L;\n\n return (0);\n}\n\n* * * * * * * * * * * * * * * end * * * * * * * * * * * * * * *\n\nIt has translated the 4 bytes constant (0x87654321) into a one byte\nchar constant (within the single quotes) during pre-processing. Seems\nthis happens only when the high bit of the constant is set (i.e. it\nwon't add the quotes if the constant is 0x12345678). \n\nAlso, I noticed that the line number reported during the preprocessing \nerror output is incorrect : it is '1' less than the actual line number \nin the source file. As shown, I am using version 2.8.0 of ecpg. Is my \nversion being too old to be buggy ? Any suggestion to bypass the \ntranslation problem ?\n\nThanks,\nRaymond Fung.\n\n\n", "msg_date": "Thu, 04 Jul 2002 10:00:01 +0800", "msg_from": "Raymond Fung <raymondf@acm.org>", "msg_from_op": true, "msg_subject": "ecpg problem : pre-processor translated long constant to char" }, { "msg_contents": "On Thu, Jul 04, 2002 at 10:00:01AM +0800, Raymond Fung wrote:\n> Dear all,\n> ...\n> It has translated the 4 bytes constant (0x87654321) into a one byte\n> char constant (within the single quotes) during pre-processing. 
Seems\n> this happens only when the high bit of the constant is set (i.e. it\n> won't add the quotes if the constant is 0x12345678). \n\nYes, this is a bug. But look here:\n\nmm=# create table test (i int8);\nCREATE\nmm=# insert into test values (x'80000000');\nERROR: Bad hexadecimal integer input '80000000'\nmm=# insert into test values (1234567890123);\nINSERT 22762 1\n\nThe reason is that both the backend parser and the ecpg parser use\nstrtol to parse the hex constant. And strtol does not like anything >\n0x80000000. \n\n> Also, I noticed that the line number reported during the preprocessing \n> error output is incorrect : it is '1' less than the actual line number \n> in the source file. As shown, I am using version 2.8.0 of ecpg. Is my \n> version being too old to be buggy ? Any suggestion to bypass the \n> translation problem ?\n\nI heard about the off-by-one problem sometimes, but I've yet to find the\ntime to look for the reason. A collegue is bugging me all the time. :-)\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n\n\n", "msg_date": "Thu, 4 Jul 2002 16:21:01 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ecpg problem : pre-processor translated long constant\n\tto char" } ]
[ { "msg_contents": "pgman wrote:\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > The following patch fixes all of these, and it will be in 7.3.\n> > \n> > It will? Kindly explain what the heck this is doing. It looks to\n> > me like it will probably break more cases than it fixes. AFAICS\n> > it's just completely muddying the waters about what is quoted and\n> > what isn't ...\n> \n> Well, the waters were already muddy. Doing a pg_dump -Ft, you will see\n> with the old code how the tags for functions had quotes, and how the\n> GRANT/REVOKE has quotes for table names, while the CREATE TABLE tags for\n> these do not. This fixes that, and tries to get the -P string they\n> enter for pg_restore to match the tag by removing spaces.\n\nIn summary, the tags stored by pg_dump should _not_ have quotes. \nFunction names and GRANT/REVOKE were coming out with quotes around them\nin pg_dump -Ft format. This does not change the ASCII output, which is\nalways quoted.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Thu, 4 Jul 2002 01:39:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] [SQL] pg_restore cannot restore function" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us] \n> Sent: 04 July 2002 07:40\n> To: Tom Lane\n> Cc: Christopher Kings-Lynne; Hiroshi Inoue; Hackers\n> Subject: Re: [HACKERS] BETWEEN Node & DROP COLUMN\n> \n> \n> Well, why shouldn't we use the fact that most/all clients \n> don't look at attno < 0, and that we have no intention of \n> changing that requirement. \n> We aren't coding in a vacuum. We have clients, they do that \n> already, let's use it.\n\nJust to chuck my $0.02 in the pot:\n\npgAdmin will require modification not matter which route is taken. It\n*does* look at columns with negative attnums whenever the user switches\non the 'View System Objects' option which un-hides the pg_*\ntables/views, columns with attnums < 1, template1 and more.\n\nFrom my pov, the least painful route would be to add the attisdropped\ncolumn. I can add a check to this far more easily than messing about\nwith losing columns where attnum < -7 - especially, if in a future\nrelease of PostgreSQL the number of columns like tableoid, xid etc\nchanges.\n\nPersonnally, from a not caring about how it works, just how it's\npresented perspective, attisdropped seems much cleaner to me.\n\nI also agree with Christopher - compared to the work the addition of\nschemas required (~50 hours in pgAdmin) this is a 2 minute job!\n\nWell, that was more like $0.10....\n\nRegards, Dave.\n\n\n", "msg_date": "Thu, 4 Jul 2002 10:18:02 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" } ]
[ { "msg_contents": "I've committed support for IS DISTINCT FROM to the head of the CVS tree.\nI did not update the catalog version, but since one enumerated type has\nadded a value it may be that initdb is actually required. Almost\ncertainly a \"make clean\" is essential.\n\nThere is more work to do, including perhaps adding some more internal\nnodes, but initial functionality is there. I still owe some docs and\nregression tests as well as further work on the implementation (per\nprevious discussion on The Right Way to add row features).\n\nDetails from the cvs log are below.\n\n - Thomas\n\nMove INTERSECT DISTINCT to the supported category. Error in docs.\nImplement the IS DISTINCT FROM operator per SQL99.\nReused the Expr node to hold DISTINCT which strongly resembles\n the existing OP info. Define DISTINCT_EXPR which strongly resembles\n the existing OPER_EXPR opType, but with handling for NULLs required\n by SQL99.\nImplement the IS DISTINCT FROM operator per SQL99.\nReused the Expr node to hold DISTINCT which strongly resembles\n the existing OP info. Define DISTINCT_EXPR which strongly resembles\n the existing OPER_EXPR opType, but with handling for NULLs required\n by SQL99.\nWe have explicit support for single-element DISTINCT comparisons\n all the way through to the executor. But, multi-element DISTINCTs\n are handled by expanding into a comparison tree in gram.y as is done\nfor\n other row comparisons. Per discussions, it might be desirable to move\n this into one or more purpose-built nodes to be handled in the backend.\nDefine the optional ROW keyword and token per SQL99.\n This allows single-element row constructs, which were formerly\ndisallowed\n due to shift/reduce conflicts with parenthesized a_expr clauses.\nDefine the SQL99 TREAT() function. 
Currently, use as a synonym for\nCAST().\n\n\n", "msg_date": "Thu, 04 Jul 2002 08:28:51 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "IS DISTINCT FROM and TREAT() committed" } ]
[ { "msg_contents": "Hello:\n\nI've got the logging system to the point where I can take a shutdown\nconsistent copy of a system, and play forward through multiple\ncheckpoints. It seems to handle CREATE TABLE/DROP TABLE/TRUNCATE\nproperly, and things are moving forward well. Recovery to an arbitrary\npoint-in-time should be just as easy, but will need some administrative\ninterface for it.\n\nAt this point, some input would be useful on how I should handle things.\n\nThe most important questions that need answering are in sections 2 & 5,\nsince they impact the most other parts of the system. They will also\nrequire good documentation for sysadmins.\n\n\n\nIssues Outstanding for Point In Time Recovery (PITR)\n\n $Date: 2002/07/04 14:23:37 $\n\n $Revision: 1.4 $\n\n J.R. Nield\n\n (Enc: ISO 8859-15 Latin-9)\n\n\n§0 - Introduction\n\n This file is where I'm keeping track of all the issues I run into while\n trying to get PITR to work properly. Hopefully it will evolve into a\n description of how PITR actually works once it is implemented.\n\n I will also try to add feedback as it comes in.\n\n The big items so-far are:\n §1 - Logging Relation file creation, truncation, and removal \n This is mostly done. Can do infinte play-forward from\n online logs.\n §2 - Partial-Write and Bad Block detection\n Need input before starting. Migration issues.\n §3 - Detecting Shutdown Consistent System Recovery\n Mostly done.\n §4 - Interactive Play-Forward Recovery for an Entire System\n Need input before starting.\n §5 - Individual file consistent recovery\n Need input. Semi-Major changes required.\n\n§1 - Logging Relation file creation, truncation, and removal \n\n §1.1 - Problem:\n\n Without file creation in the log, we can't replay committed \n transactions that create relations. \n \n The current code assumes that any transaction reaching commit has already\n ensured it's files exist, and that those files will never be removed. 
This\n is true now, but not for log-replay from an old backup database system. \n \n The current XLOG code silently ignores block-write requests for\n non-existent files, and assumes that the transaction generating those\n requests must have aborted.\n\n Right now a crash during TRUNCATE TABLE will leave the table in an\n inconsistent state (partially truncated). This would not work when doing\n replay from before the last checkpoint.\n\n §1.1.1 - CREATE DATABASE is also unlogged\n\n This will cause the same replay problems as above.\n \n §1.2 - Proposal:\n\n a) Augment the SMGR code to log relation file operations, and to handle\n redo requests properly. This is simple in the case of create. Drop must be\n logged only IN the commit record. For truncate see (b).\n\n The 'struct f_smgr' needs new operations 'smgr_recreate', 'smgr_reunlink',\n and 'smgr_retruncate'. smgr_recreate should accept a RelFileNode instead\n of a Relation.\n\n Transactions that abort through system failure (ie. unlogged aborts) \n will simply continue to leak files.\n\n b) If TRUNCATE TABLE fails, the system must PANIC. Otherwise, the table\n may be used in a future command, and a replay-recovered database may\n end-up with different data than the original.\n\n WAL must be flushed before truncate as well.\n\n WAL does not need to be flushed before create, if we don't mind \n leaking files sometimes.\n\n c) Redo code should treat writes to non-existent files as an error.\n Changes affect heap & nbtree AM's. [Check others]\n\n d) rtree [and GiST? WTF is GiST? ] is not logged. A replay recovery of\n a database should mark all the rtree indices as corrupt.\n [ actually we should do that now, are we? 
]\n\n e) CREATE DATABASE must be logged properly, not use system(cp...)\n\n §1.3 - Status:\n\n All logged SMGR operations are now in a START_CRIT_SECTION()/\n END_CRIT_SECTION() pair enclosing the XLogInsert() and the underlying fs\n operations.\n\n Code has been added to smgr and xact modules to log:\n create (no XLogFlush)\n truncate (XLogFlush)\n pending deletes on commit record\n files to delete on abort record\n\n Code added to md.c to support redo ops\n\n Code added to smgr for RMGR redo/desc callbacks\n\n Code added to xact RMGR callbacks for redo/desc\n\n Database will do infinite shutdown consistent system recovery from the\n online logs, if you manually munge the control file to set state ==\n DB_IN_PRODUCTION instead of DB_SHUTDOWNED.\n\n Still need to do:\n Item (c), recovery cleanup in all AM's\n Item (d), logging in other index AM's\n Item (e), CREATE DATABASE stuff\n\n\n\n§2 - Partial-Write and Bad Block detection\n\n §2.1 - Problem:\n\n In order to protect against partial writes without logging pages\n twice, we need to detect partial pages in system files and report them\n to the system administrator. We also might want to be able to detect\n damaged pages from other causes, like memory corruption, OS errors,\n etc. 
or in the case where the disk doesn't report bad blocks, but\n returns bad data.\n\n We should also decide what should happen when a file is marked as\n containing corrupt pages, and requires log-archive recovery from a\n backup.\n\n §2.2 - Proposal:\n\n Add a 1 byte 'pd_flags' field to PageHeaderData, with the following\n flag definitions:\n\n PD_BLOCK_CHECKING (1)\n PD_BC_METHOD_BIT (1<<1)\n\n PageHasBlockChecking(page) ((page)->pd_flags & PD_BLOCK_CHECKING)\n PageBCMethodIsCRC64(page) ((page)->pd_flags & PD_BC_METHOD_BIT)\n PageBCMethodIsLSNLast(page) (!PageBCMethodIsCRC64(page))\n\n The last 64 bits of a page are reserved for use by the block checking \n code.\n\n [ Is it worth the trouble to allow the last 8 bytes of a\n page to contain data when block checking is turned off for a Page? \n\n This proposal does not allow that. ]\n\n If the block checking method is CRC64, then that field will contain\n the CRC64 of the block computed at write time.\n\n If the block checking method is LSNLast, then the field contains a\n duplicate of the pd_lsn field.\n\n §2.2.1 - Changes to Page handling routines\n\n All the page handling routines need to understand that \n pd_special == (pd_special - (specialSize + 8))\n\n Change header comment in bufpage.h to reflect this.\n\n §2.2.2 - When Reading a Page\n\n Block corruption is detected on read in the obvious way with CRC64.\n\n In the case of LSNLast, we check to see if pd_lsn == the lsn in the\n last 64 bits of the page. 
If not, we assume the page is corrupt from\n a partial write (although it could be something else).\n\n IMPORTANT ASSUMPTION:\n The OS/disk device will never write both the first part and\n last part of a block without writing the middle as well.\n This might be wrong in some cases, but at least it's fast.\n\n §2.2.4 - GUC Variables\n\n The user should be able to configure what method is used:\n\n block_checking_write_method = [ checksum | torn_page_flag | none ]\n\n Which method should be used for blocks we write?\n\n check_blocks_on_read = [ true | false ]\n\n When true, verify that the blocks we read are not corrupt, using\n whatever method is in the block header.\n\n When false, ignore the block checking information.\n\n §2.3 - Status:\n\n Waiting for input from pgsql-hackers.\n\n Questions:\n\n Should we allow the user to have more detailed control over\n which parts of a database use block checking?\n\n For example: use 'checksum' on all system catalogs in all databases, \n 'torn_page_flag' on the non-catalog parts of the production database,\n and 'none' on everything else?\n\n§3 - Detecting Shutdown Consistent System Recovery\n\n §3.1 - Problem:\n\n How to notice that we need to do log-replay for a system backup, when the\n restored control file points to a shutdown checkpoint record that is\n before the most recent checkpoint record in the log, and may point into\n an archived file.\n\n §3.2 - Proposal:\n\n At startup, after reading the ControlFile, scan the log directory to\n get the list of active log files, and find the lowest logId and\n logSeg of the files. 
Ensure that the files cover a contiguous range\n of LSN's.\n\n There are three cases:\n\n 1) ControlFile points to the last valid checkpoint (either\n checkPoint or prevCheckPoint, but one of them is the greatest\n valid checkpoint record in the log stream).\n\n 2) ControlFile points to a valid checkpoint record in an active\n log file, but there are more valid checkpoint records beyond\n it.\n\n 3) ControlFile points to a checkpoint record that should be in the\n archive logs, and is presumably valid.\n\n Case 1 is what we handle now.\n\n Cases 2 and 3 would result from restoring an entire system from\n backup in preparation to do a play-forward recovery.\n\n We need to:\n\n Detect cases 2 and 3.\n\n Alert the administrator and abort startup.\n [Question: Is this always the desired behavior? We can\n handle case 2 without intervention. ]\n\n Let the administrator start a standalone backend, and\n perform a play-forward recovery for the system.\n\n §3.3 - Status:\n\n In progress.\n\n§4 - Interactive Play-Forward Recovery for an Entire System\n\n Play-Forward File Recovery from a backup file must be interactive,\n because not all log files that we need are necessarily in the \n archive directory. It may be possible that not all the archive files\n we need can even fit on disk at one time.\n\n The system needs to be able to prompt the system administrator to feed\n it more log files.\n\n TODO: More here\n\n§5 - Individual file consistent recovery\n\n §5.1 - Problem:\n\n If a file detects corruption, and we restore it from backup, how do \n we know what archived files we need for recovery?\n\n Should file corruption (partial write, bad disk block, etc.) outside \n the system catalog cause us to abort the system, or should we just \n take the relation or database off-line?\n\n Given a backup file, how do we determine the point in the log \n where we should start recovery for the file? 
What is the highest LSN\n we can use that will fully recover the file?\n\n §5.2 - Proposal: \n \n Put a file header on each file, and update that header to the last\n checkpoint LSN at least once every 'file_lsn_time_slack' minutes, or\n at least once every dbsize/'file_lsn_log_slack' megabytes of log\n written, where dbsize is the estimated size of the database. Have\n these values be settable from the config file. These updates would be\n distributed throughout the hour, or interspersed between regular\n amounts of log generation.\n\n If we have a database backup program or command, it can update the\n header on the file before backup to the greatest value it can assure\n to be safe.\n\n §5.3 - Status:\n\n Waiting for input from pgsql-hackers.\n\n Questions:\n\n There are alternate methods than using a file header to get a\n known-good LSN lower bound for the starting point to recover a backup\n file. Is this the best way?\n\nA) The Definitions\n\n This stuff is obtuse, but I need it here to keep track of what I'm \n saying. 
Someday I should use it consistently in the rest of this\n document.\n\n \"system\" or \"database system\":\n\n A collection of postgres \"databases\" in one $PGDATA directory,\n managed by one postmaster instance at a time (and having one WAL\n log, etc.)\n\n All the files composing such a system, as a group.\n\n \"up to date\" or \"now\" or \"current\" or \"current LSN\":\n\n The most recent durable LSN for the system.\n\n \"block consistent copy\":\n\n When referring to a file:\n\n A copy of a file, which may be written to during the process of\n copying, but where each BLCKSZ size block is copied atomically.\n\n When referring to multiple files (in the same system):\n\n A copy of all the files, such that each is independently a \"block\n consistent copy\"\n\n \"file consistent copy\":\n\n When referring to a file:\n\n A copy of a file that is not written to between the start and end\n of the copy operation.\n\n When referring to multiple files (in the same system):\n\n A copy of all the files, such that each is independently a \"file\n consistent copy\"\n\n \"system consistent copy\":\n\n When referring to a file:\n \n A copy of a file, where the entire system of which it is a member\n is not written to during the copy.\n\n When referring to multiple files (in the same system):\n\n A copy of all the files, where the entire system of which they are\n members was not written to between the start and end of the\n copying of all the files, as a group.\n\n \"shutdown consistent copy\":\n\n When referring to a file:\n\n A copy of a file, where the entire system of which it is a member\n had been cleanly shutdown before the start of and for the duration\n of the copy.\n\n When referring to multiple files (in the same system):\n\n A copy of all the files, where the entire system of which they are\n members had been cleanly shutdown before the start of and for the\n duration of the copying of all the files, as a group.\n\n \"consistent copy\":\n\n A block, file, system, 
or shutdown consistent copy.\n\n \"known-good LSN lower bound\" \n or \"LSN lower bound\"\n or \"LSN-LB\":\n\n When referring to a group of blocks, a file, or a group of files:\n\n An LSN known to be old enough that no log entries before it are needed\n to bring the blocks or files up-to-date.\n\n \"known-good LSN greatest lower bound\" \n or \"LSN greatest lower bound\" \n or \"LSN-GLB\":\n\n When referring to a group of blocks, a file, or a group of files:\n\n The greatest possible LSN that is a known-good LSN lower bound for\n the group.\n\n \"backup file\":\n\n A consistent copy of a data file used by the system, for which \n we have a known-good LSN lower bound.\n\n \"optimal backup file\":\n\n A backup file, for which we have the known-good LSN greatest lower\n bound.\n\n \"backup system\":\n \n\n\n \"Play-Forward File Recovery\" or \"PFFR\":\n\n The process of bringing an individual backup file up to date.\n\n \n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n\n\n", "msg_date": "04 Jul 2002 11:45:49 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": true, "msg_subject": "Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "On Thu, 2002-07-04 at 11:45, J. R. Nield wrote:\n\nOne other item that should be here:\n> The big items so-far are:\n> §1 - Logging Relation file creation, truncation, and removal \n> This is mostly done. Can do infinte play-forward from\n> online logs.\n> §2 - Partial-Write and Bad Block detection\n> Need input before starting. Migration issues.\n> §3 - Detecting Shutdown Consistent System Recovery\n> Mostly done.\n> §4 - Interactive Play-Forward Recovery for an Entire System\n> Need input before starting.\n> §5 - Individual file consistent recovery\n> Need input. Semi-Major changes required.\n §6 - btbuild is not logged\n Not logged because of same assumptions as for file create.\n Only need to log the index build parameters to recreate\n the index, not each page change.\n\n\n-- \nJ. R. 
Nield\njrnield@usol.com\n\n\n\n", "msg_date": "04 Jul 2002 13:07:22 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": true, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "\nI noticed no one has responded to your questions yet.\n\nI think it is because we are sort of in shock. We have needed\npoint-in-time recovery for a long time, but the people who were capable\nof doing it weren't able to focus on it. Then, all of a sudden, we get\nan email from someone who is focusing on it and wants to get the job\ndone. GREAT!\n\nI will give you my short analysis and see how many other questions I can\nanswer.\n\nWe have always known there was a way to do PITR with WAL, but WAL needed\na few extra pieces of information. Unfortunately, we weren't able to\nanalyze what was required. Seems you have gotten very far here, and\nthat is great.\n\nAlso, we thought about having PITR as part of replication (replay of\nreplication traffic logs) but having it tied to WAL is much cleaner and\nhas better performance, I bet.\n\nI will do whatever I can to help. My chat addresses are:\n\t\n\tAIM\tbmomjian\n\tICQ\t151255111\n\tYahoo\tbmomjian\n\tMSN\troot@candle.pha.pa.us\n\tIRC\t#postgresql via efnet\n\n---------------------------------------------------------------------------\n\nJ. R. Nield wrote:\n> Hello:\n> \n> I've got the logging system to the point where I can take a shutdown\n> consistent copy of a system, and play forward through multiple\n> checkpoints. It seems to handle CREATE TABLE/DROP TABLE/TRUNCATE\n\nYes, that was always something we knew was lacking in the current WAL\ncontents.\n\n> properly, and things are moving forward well. Recovery to an arbitrary\n> point-in-time should be just as easy, but will need some administrative\n> interface for it.\n\nThe administrative part can be done easily. We can get that part done. 
\nIt is the low-level stuff we always needed help with.\n\n> At this point, some input would be useful on how I should handle things.\n> \n> The most important questions that need answering are in sections 2 & 5,\n> since they impact the most other parts of the system. They will also\n> require good documentation for sysadmins.\n\n> §0 - Introduction\n> \n> This file is where I'm keeping track of all the issues I run into while\n> trying to get PITR to work properly. Hopefully it will evolve into a\n> description of how PITR actually works once it is implemented.\n> \n> I will also try to add feedback as it comes in.\n> \n> The big items so-far are:\n> §1 - Logging Relation file creation, truncation, and removal \n> This is mostly done. Can do infinite play-forward from\n> online logs.\n\nExcellent!\n\n> §2 - Partial-Write and Bad Block detection\n> Need input before starting. Migration issues.\n\nUh, we do log pre-page writes to WAL to recover from partial page\nwrites to disk. Is there something more we need here?\n\nAs for bad block detection, we have thought about adding a CRC to each\npage header, or at least making it optional. WAL already has a CRC.\n\n\n> §3 - Detecting Shutdown Consistent System Recovery\n> Mostly done.\n> §4 - Interactive Play-Forward Recovery for an Entire System\n> Need input before starting.\n\nYou mean user interface? My idea would be to just get some command-line\ntool working and we can write some GUI app to manage it and use the\ncommand-line tool as an interface into the system.\n\n> §5 - Individual file consistent recovery\n> Need input. Semi-Major changes required.\n> \n\nOK, here are the specific questions. Got it.\n\n> §1 - Logging Relation file creation, truncation, and removal \n> \n> §1.1 - Problem:\n> \n> Without file creation in the log, we can't replay committed \n> transactions that create relations. 
\n> \n> The current code assumes that any transaction reaching commit has already\n> ensured its files exist, and that those files will never be removed. This\n> is true now, but not for log-replay from an old backup database system. \n> \n> The current XLOG code silently ignores block-write requests for\n> non-existent files, and assumes that the transaction generating those\n> requests must have aborted.\n> \n> Right now a crash during TRUNCATE TABLE will leave the table in an\n> inconsistent state (partially truncated). This would not work when doing\n> replay from before the last checkpoint.\n\nYes, there are a few places where we actually create a file, and if the\nserver crashes, the file remains out there forever. We need to track that\nbetter. I think we may need some utility that compares pg_class with\nthe files in the directory and cleans out unused files on server\nstartup. I started working on such code as part of VACUUM but it made\ntoo many assumptions because it knew other backends were working at the\nsame time. On recovery, you don't have that problem and can easily do\nalmost an 'ls' and clean out just what's left over from the crash. Seems that\nwould solve several of those problems.\n\n> \n> §1.1.1 - CREATE DATABASE is also unlogged\n> \n> This will cause the same replay problems as above.\n\nYep. Again, seems a master cleanup on startup is needed.\n\n> §1.2 - Proposal:\n> \n> a) Augment the SMGR code to log relation file operations, and to handle\n> redo requests properly. This is simple in the case of create. Drop must be\n> logged only IN the commit record. For truncate see (b).\n\nYep, we knew we needed that.\n\n> The 'struct f_smgr' needs new operations 'smgr_recreate', 'smgr_reunlink',\n> and 'smgr_retruncate'. smgr_recreate should accept a RelFileNode instead\n> of a Relation.\n\nNo problem. Clearly required.\n\n> Transactions that abort through system failure (ie. 
unlogged aborts) \n> will simply continue to leak files.\n\nYep, need a cleanup process on start.\n\n> b) If TRUNCATE TABLE fails, the system must PANIC. Otherwise, the table\n> may be used in a future command, and a replay-recovered database may\n> end-up with different data than the original.\n\nWe number based on oids. You mean oid wraparound could cause the file\nto be used again?\n\n> WAL must be flushed before truncate as well.\n> \n> WAL does not need to be flushed before create, if we don't mind \n> leaking files sometimes.\n\nCleanup?\n\n> c) Redo code should treat writes to non-existent files as an error.\n> Changes affect heap & nbtree AM's. [Check others]\n\nYep, once you log create/drop, if something doesn't match, it is an\nerror, while before, we could ignore it.\n\n> d) rtree [and GiST? WTF is GiST? ] is not logged. A replay recovery of\n> a database should mark all the rtree indices as corrupt.\n> [ actually we should do that now, are we? ]\n\nKnown problem. Not sure what is being done. TODO has:\n\n\t* Add WAL index reliability improvement to non-btree indexes\n\nso it is a known problem, and we aren't doing anything about it. What\nmore can I say? 
;-)\n\n> e) CREATE DATABASE must be logged properly, not use system(cp...)\n\nOK, should be interesting.\n\n> §1.3 - Status:\n> \n> All logged SMGR operations are now in a START_CRIT_SECTION()/\n> END_CRIT_SECTION() pair enclosing the XLogInsert() and the underlying fs\n> operations.\n> \n> Code has been added to smgr and xact modules to log:\n> create (no XLogFlush)\n> truncate (XLogFlush)\n> pending deletes on commit record\n> files to delete on abort record\n> \n> Code added to md.c to support redo ops\n> \n> Code added to smgr for RMGR redo/desc callbacks\n> \n> Code added to xact RMGR callbacks for redo/desc\n> \n> Database will do infinite shutdown consistent system recovery from the\n> online logs, if you manually munge the control file to set state ==\n> DB_IN_PRODUCTION instead of DB_SHUTDOWNED.\n\nWow, how did you get so far?\n\n> Still need to do:\n> Item (c), recovery cleanup in all AM's\n> Item (d), logging in other index AM's\n> Item (e), CREATE DATABASE stuff\n> \n> \n> \n> §2 - Partial-Write and Bad Block detection\n> \n> §2.1 - Problem:\n> \n> In order to protect against partial writes without logging pages\n> twice, we need to detect partial pages in system files and report them\n> to the system administrator. We also might want to be able to detect\n> damaged pages from other causes, like memory corruption, OS errors,\n> etc. or in the case where the disk doesn't report bad blocks, but\n> returns bad data.\n\nInteresting. We just had a discussion about MSSQL torn page bits on\nevery 512-byte block that are set to the same value before the write. \nOn recovery, if any of the bits in a block are different, they recommend\nrecovery using PITR. We don't have PITR (yet) so there was no need to\nimplement it (we just logged whole pages to WAL before writing). 
I\nthink we may go with just a CRC per page for partial-write detection.\n\n> We should also decide what should happen when a file is marked as\n> containing corrupt pages, and requires log-archive recovery from a\n> backup.\n\nWe can offer the option of no-wal change-page writing which will require\nPITR on partial-write detection or they can keep the existing system\nwith the performance hit.\n\n> \n> ?2.2 - Proposal:\n> \n> Add a 1 byte 'pd_flags' field to PageHeaderData, with the following\n> flag definitions:\n> \n> PD_BLOCK_CHECKING (1)\n> PD_BC_METHOD_BIT (1<<1)\n> \n> PageHasBlockChecking(page) ((page)->pd_flags & PD_BLOCK_CHECKING)\n> PageBCMethodIsCRC64(page) ((page)->pd_flags & PD_BC_METHOD_BIT)\n> PageBCMethodIsLSNLast(page) (!PageBCMethodIsCRC64(page))\n> \n> The last 64 bits of a page are reserved for use by the block checking \n> code.\n\nOK, so you already are on the CRC route.\n\n> [ Is it worth the trouble to allow the last 8 bytes of a\n> page to contain data when block checking is turned off for a Page? \n> This proposal does not allow that. ]\n\nYou can leave the 8-byte empty if no CRC. You may want to turn CRC\non/off without dump.\n> If the block checking method is CRC64, then that field will contain\n> the CRC64 of the block computed at write time.\n\nCool.\n\n> If the block checking method is LSNLast, then the field contains a\n> duplicate of the pd_lsn field.\n> \n> ?2.2.1 - Changes to Page handling routines\n> \n> All the page handling routines need to understand that \n> pd_special == (pd_special - (specialSize + 8))\n> \n> Change header comment in bufpage.h to reflect this.\n\nYes, we should add a format version to the heap page tail anyway like\nbtree has, i.e. 
some constant on every page that describes the format\nused in that PostgreSQL version.\n\n> ?2.2.2 - When Reading a Page\n> \n> Block corruption is detected on read in the obvious way with CRC64.\n> \n> In the case of LSNLast, we check to see if pd_lsn == the lsn in the\n> last 64 bits of the page. If not, we assume the page is corrupt from\n> a partial write (although it could be something else).\n\nLSN?\n\n> IMPORTANT ASSUMPTION:\n> The OS/disk device will never write both the first part and\n> last part of a block without writing the middle as well.\n> This might be wrong in some cases, but at least it's fast.\n> \n> ?2.2.4 - GUC Variables\n> \n> The user should be able to configure what method is used:\n> \n> block_checking_write_method = [ checksum | torn_page_flag | none ]\n> \n> Which method should be used for blocks we write?\n\nDo we want torn page flag? Seems like a pain to get that on every 512\nbyte section of the 8k page.\n\n> check_blocks_on_read = [ true | false ]\n> \n> When true, verify that the blocks we read are not corrupt, using\n> whatever method is in the block header.\n> \n> When false, ignore the block checking information.\n> \n\nGood idea. We always check on crash, but check on read only when set. \nGood for detecting hardware problems.\n\n\n> ?2.3 - Status:\n> \n> Waiting for input from pgsql-hackers.\n> \n> Questions:\n> \n> Should we allow the user to have more detailed control over\n> which parts of a database use block checking?\n\nI don't think that is needed; installation-wide settings are fine.\n\n> For example: use 'checksum' on all system catalogs in all databases, \n> 'torn_page_flag' on the non-catalog parts of the production database,\n> and 'none' on everything else?\n\nToo complicated. 
Let's get it implemented and in the field and see what\npeople ask for.\n\n> ?3 - Detecting Shutdown Consistent System Recovery\n> \n> ?3.1 - Problem:\n> \n> How to notice that we need to do log-replay for a system backup, when the\n> restored control file points to a shutdown checkpoint record that is\n> before the most recent checkpoint record in the log, and may point into\n> an archived file.\n> \n> ?3.2 - Proposal:\n> \n> At startup, after reading the ControlFile, scan the log directory to\n> get the list of active log files, and find the lowest logId and\n> logSeg of the files. Ensure that the files cover a contiguous range\n> of LSN's.\n> \n> There are three cases:\n> \n> 1) ControlFile points to the last valid checkpoint (either\n> checkPoint or prevCheckPoint, but one of them is the greatest\n> valid checkpoint record in the log stream).\n> \n> 2) ControlFile points to a valid checkpoint record in an active\n> log file, but there are more valid checkpoint records beyond\n> it.\n> \n> 3) ControlFile points to a checkpoint record that should be in the\n> archive logs, and is presumably valid.\n> \n> Case 1 is what we handle now.\n> \n> Cases 2 and 3 would result from restoring an entire system from\n> backup in preparation to do a play-forward recovery.\n> \n> We need to:\n> \n> Detect cases 2 and 3.\n> \n> Alert the administrator and abort startup.\n> [Question: Is this always the desired behavior? We can\n> handle case 2 without intervention. ]\n> \n> Let the administrator start a standalone backend, and\n> perform a play-forward recovery for the system.\n> \n> ?3.3 - Status:\n> \n> In progress.\n\nSorry, I was confused by this.\n\n> ?4 - Interactive Play-Forward Recovery for an Entire System\n> \n> Play-Forward File Recovery from a backup file must be interactive,\n> because not all log files that we need are necessarily in the \n> archive directory. 
It may be possible that not all the archive files\n> we need can even fit on disk at one time.\n> \n> The system needs to be able to prompt the system administrator to feed\n> it more log files.\n> \n> TODO: More here\n\nYes, we can have someone working on the GUI once the command-line\ninterface is defined.\n\n> ?5 - Individual file consistent recovery\n> \n> ?5.1 - Problem:\n> \n> If a file detects corruption, and we restore it from backup, how do \n> we know what archived files we need for recovery?\n> \n> Should file corruption (partial write, bad disk block, etc.) outside \n> the system catalog cause us to abort the system, or should we just \n> take the relation or database off-line?\n\nOffline is often best so they can get in there and recover if needed. \nWe usually allow them in with a special flag or utility like\npg_resetxlog.\n\n\n> Given a backup file, how do we determine the point in the log \n> where we should start recovery for the file? What is the highest LSN\n> we can use that will fully recover the file?\n\nThat is tricky. We have discussed it and your backup has to deal with\nsome pretty strange things that can happen while 'tar' is traversing the\ndirectory.\n\n\n> ?5.2 - Proposal: \n> \n> Put a file header on each file, and update that header to the last\n> checkpoint LSN at least once every 'file_lsn_time_slack' minutes, or\n> at least once every dbsize/'file_lsn_log_slack' megabytes of log\n> written, where dbsize is the estimated size of the database. Have\n> these values be settable from the config file. 
These updates would be\n> distributed throughout the hour, or interspersed between regular\n> amounts of log generation.\n> \n> If we have a database backup program or command, it can update the\n> header on the file before backup to the greatest value it can assure\n> to be safe.\n\nNot sure.\n\n> ?5.3 - Status:\n> \n> Waiting for input from pgsql-hackers.\n> \n> Questions:\n> \n> There are alternate methods than using a file header to get a\n> known-good LSN lower bound for the starting point to recover a backup\n> file. Is this the best way?\n\nNot sure.\n\nI am sure others will chime in with more information.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Fri, 5 Jul 2002 01:42:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "J. R. Nield wrote:\n> ?5 - Individual file consistent recovery\n> \n> ?5.1 - Problem:\n> \n> If a file detects corruption, and we restore it from backup, how do \n> we know what archived files we need for recovery?\n> \n> Should file corruption (partial write, bad disk block, etc.) outside \n> the system catalog cause us to abort the system, or should we just \n> take the relation or database off-line?\n> \n> Given a backup file, how do we determine the point in the log \n> where we should start recovery for the file? What is the highest LSN\n> we can use that will fully recover the file?\n> \n> ?5.2 - Proposal: \n> \n> Put a file header on each file, and update that header to the last\n> checkpoint LSN at least once every 'file_lsn_time_slack' minutes, or\n> at least once every dbsize/'file_lsn_log_slack' megabytes of log\n> written, where dbsize is the estimated size of the database. 
Have\n> these values be settable from the config file. These updates would be\n> distributed throughout the hour, or interspersed between regular\n> amounts of log generation.\n> \n> If we have a database backup program or command, it can update the\n> header on the file before backup to the greatest value it can assure\n> to be safe.\n\nI know there was discussion about this. The issue was when you are\ndoing the backup, how do you handle changes in the file that happen\nduring the backup? I think there was some idea of remembering the WAL\npointer location at the start of the tar backup, and somehow playing\nforward from that point. Now, the trick is to know how much of the WAL\nis duplicated in the backup, and how much needs to be applied to roll\nforward.\n\nBecause we have a non-overwriting storage manager, I think one idea was\nto just replay everything, knowing that some of it may be unnecessary. \nHowever, VACUUM FULL would complicate that because it does overwrite\ntuples and you may get into trouble playing all of that back.\n\nI am sure someone has analyzed this better than me.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Fri, 5 Jul 2002 02:01:00 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "On Fri, 2002-07-05 at 01:42, Bruce Momjian wrote:\n> \n> We have needed\n> point-in-time recovery for a long time, \n\nMost thanks should go to vadim (and whoever else worked on this), since\nhis WAL code already does most of the work. The key thing is auditing\nthe backend to look for every case where we assume some action is not\nvisible until after commit, and therefore don't log its effects. 
Those\nare the main cases that must be changed.\n\n\n> ---------------------------------------------------------------------------\n> \n> J. R. Nield wrote:\n> > Hello:\n> > \n> > I've got the logging system to the point where I can take a shutdown\n> > consistent copy of a system, and play forward through multiple\n> > checkpoints. It seems to handle CREATE TABLE/DROP TABLE/TRUNCATE\n\nBut notably not for the btree indexes! It looked like they were working,\nbecause the files were there, and all indexes created before the backup\nwould work under insert/delete (including sys catalog indexes). This is\nbecause btree insert/delete is logged, just not during build. So I\nmissed that one case.\n\nYou will end up with up-to-date table data though, so it is something.\n\nAdding logging support to btbuild is the next step, and I don't think it\nshould be too hard. I am working on this now.\n\nIt is also a major advantage that most everything in the system gets\nstored in the catalog tables, and so is logged already.\n\n\n> Uh, we do log pre-page writes to WAL to recover from partial page\n> writes to disk. Is there something more we need here?\n> \n> As for bad block detection, we have thought about adding a CRC to each\n> page header, or at least making it optional. WAL already has a CRC.\n>\n\nYes, this should be last to do, because it is not necessary for PITR,\nonly for performance (the option not to write pre-images without fear of\ndata loss). \n \n> Yes, there are a few places where we actually create a file, and if the\n> server crashes, the file remains out there forever. We need to track that\n> better. \n\nOK, there is a bigger problem than just tracking the file though. We\nsometimes do stuff to that file that we don't log. We assume that if we\ncommit, the file must be OK and will not need replay because the\ntransaction would not have committed if the file was not in a committable\nstate. 
If we abort, the system never sees the file, so in a sense we\nundo everything we did to the file. It is a kind of poor-man's rollback\nfor certain operations, like btbuild, create table, etc. But it means\nthat we cannot recover the file from the log, even after a commit.\n\n> \n> > \n> > ?1.1.1 - CREATE DATABASE is also unlogged\n> > \n> > This will cause the same replay problems as above.\n> \n> Yep. Again, seems a master cleanup on startup is needed.\n\nThe cleanup is not the problem, only a nuisance. Creating the files\nduring replay is the problem. I must recreate CREATE DATABASE from the\nlog exactly as it was done originally. I think just logging the\nparameters to the command function should be sufficient, but I need to\nthink more about it.\n\n> \n> > b) If TRUNCATE TABLE fails, the system must PANIC. Otherwise, the table\n> > may be used in a future command, and a replay-recovered database may\n> > end-up with different data than the original.\n> \n> We number based on oids. You mean oid wraparound could cause the file\n> to be used again?\n\nThat's not what I meant. Let's say I issue 'TRUNCATE TABLE foo'. Then,\nright before smgrtruncate is called, I do an XLogInsert saying \"Redo a\nTRUNCATE TABLE on foo to nblocks if we crash\". Then smgrtruncate fails\nand we do an elog(ERROR)\n\nNow the user decides that since TRUNCATE TABLE didn't work, he might as\nwell use the table, so he inserts some records into it, generating log\nentries.\n\nWhen I replay this log sequence later, what happens if the TRUNCATE\nsucceeds instead of failing?\n\nI admit that there are other ways of handling it than to PANIC if the\ntruncate fails. All the ones I can come up with seem to amount to some\nkind of ad-hoc UNDO log.\n\n\n> \n> > WAL must be flushed before truncate as well.\n> > \n> > WAL does not need to be flushed before create, if we don't mind \n> > leaking files sometimes.\n> \n> Cleanup?\n\nYes, we could garbage-collect leaked files. 
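Such a garbage-collection sweep — the 'ls'-and-compare idea mentioned earlier — might look roughly like this. Everything here is hypothetical illustration (the names are invented, and a real version would also have to skip non-relation files in the data directory):

```c
#include <string.h>

/* Return 1 if 'name' appears in the list of catalog-known relfilenodes. */
int
name_is_known(const char *name, const char **known, int nknown)
{
    int i;

    for (i = 0; i < nknown; i++)
        if (strcmp(name, known[i]) == 0)
            return 1;
    return 0;
}

/*
 * Compare the files actually on disk against the relfilenode names the
 * catalogs know about; fill 'orphans' with pointers into 'ondisk' and
 * return how many orphans (leaked files) were found.
 */
int
find_orphans(const char **ondisk, int nondisk,
             const char **known, int nknown,
             const char **orphans, int maxorphans)
{
    int i, n = 0;

    for (i = 0; i < nondisk && n < maxorphans; i++)
        if (!name_is_known(ondisk[i], known, nknown))
            orphans[n++] = ondisk[i];
    return n;
}
```

Run at startup recovery, before any backends attach, a sweep like this would not face the concurrent-backend problem that made the VACUUM-based attempt unworkable.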
XLogFlush is not that\nexpensive though, so I don't have an opinion on this yet.\n\n> \n> > c) Redo code should treat writes to non-existent files as an error.\n> > Changes affect heap & nbtree AM's. [Check others]\n> \n> Yep, once you log create/drop, if something doesn't match, it is an\n> error, while before, we could ignore it.\n> \n> > d) rtree [and GiST? WTF is GiST? ] is not logged. A replay recovery of\n> > a database should mark all the rtree indices as corrupt.\n> > [ actually we should do that now, are we? ]\n> \n> Known problem. Not sure what is being done. TODO has:\n> \n> \t* Add WAL index reliability improvement to non-btree indexes\n> \n> so it is a known problem, and we aren't doing anything about it. What\n> more can I say? ;-)\n\nOnce the other stuff works reliably, I will turn to rtree logging, which\nI have looked at somewhat, although I could really use a copy of the\npaper it is supposed to be based on. I have not figured out GiST enough\nto work on it yet.\n\n> \n> > e) CREATE DATABASE must be logged properly, not use system(cp...)\n> \n> OK, should be interesting.\n> \n> > ?1.3 - Status:\n> > \n> > All logged SMGR operations are now in a START_CRIT_SECTION()/\n> > END_CRIT_SECTION() pair enclosing the XLogInsert() and the underlying fs\n> > operations.\n> > \n> > Code has been added to smgr and xact modules to log:\n> > create (no XLogFlush)\n> > truncate (XLogFlush)\n> > pending deletes on commit record\n> > files to delete on abort record\n> > \n> > Code added to md.c to support redo ops\n> > \n> > Code added to smgr for RMGR redo/desc callbacks\n> > \n> > Code added to xact RMGR callbacks for redo/desc\n> > \n> > Database will do infinite shutdown consistent system recovery from the\n> > online logs, if you manually munge the control file to set state ==\n> > DB_IN_PRODUCTION instead of DB_SHUTDOWNED.\n> \n> Wow, how did you get so far?\n\nBecause it was almost there to start with :-)\n\nBesides, it sounded better before I realized there 
was still a remaining\nproblem with btree logging to fix. \n\n\n> > In the case of LSNLast, we check to see if pd_lsn == the lsn in the\n> > last 64 bits of the page. If not, we assume the page is corrupt from\n> > a partial write (although it could be something else).\n> \n> LSN?\n\nLog Sequence Number (XLogRecPtr)\n\n> \n> > IMPORTANT ASSUMPTION:\n> > The OS/disk device will never write both the first part and\n> > last part of a block without writing the middle as well.\n> > This might be wrong in some cases, but at least it's fast.\n> > \n> > ?2.2.4 - GUC Variables\n> > \n> > The user should be able to configure what method is used:\n> > \n> > block_checking_write_method = [ checksum | torn_page_flag | none ]\n> > \n> > Which method should be used for blocks we write?\n> \n> Do we want torn page flag? Seems like a pain to get that on every 512\n> byte section of the 8k page.\n\nOk, this section (2.2) was badly written and hard to understand. What I\nam proposing is that we put a copy of the log sequence number, which is\nat the head of the page, into the 8 byte field that we are creating at\nthe end of the page, in place of the CRC. The log sequence number\nincreases every time the page is written (it already does this). I have\ncalled this method 'LSNLast' internally, and the user would call it the\n'torn_page_flag' method.\n\nSo when we read the page, we compare the Log Sequence Number at the\nbeginning and end of the page, and if they are different we assume a\ntorn page.\n\nThis version is weaker than the MS one we were talking about, because it\nis not on every 512 byte section of the page, only the beginning and the\nend. 
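A minimal sketch of that check, with a deliberately simplified page layout (the struct and field names here are illustrative only — the real PageHeaderData is different, and the trailing copy would live in the space taken from pd_special):

```c
#include <string.h>
#include <stdint.h>

#define BLCKSZ 8192

/* Simplified page: LSN at the head, duplicate LSN in the last 8 bytes. */
typedef struct
{
    uint64_t pd_lsn;                              /* LSN of last change */
    char     data[BLCKSZ - 2 * sizeof(uint64_t)]; /* tuples, etc. */
    uint64_t pd_lsn_copy;                         /* duplicate of pd_lsn */
} TornCheckPage;

/* Called just before the block is handed to write(): duplicate the LSN. */
void
page_set_torn_flag(TornCheckPage *page)
{
    page->pd_lsn_copy = page->pd_lsn;
}

/* Called after read(): head and tail must agree, else assume a torn write. */
int
page_is_torn(const TornCheckPage *page)
{
    return page->pd_lsn != page->pd_lsn_copy;
}
```

On write, the head LSN is copied to the tail just before the block goes to the kernel; on read, a mismatch means the first and last sectors of the block came from different writes.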
I'm simply looking for a fast alternative to CRC64, that doesn't\nrequire massive reorganization of the page layout code.\n\n\n> \n> > ?2.3 - Status:\n> > \n> > Waiting for input from pgsql-hackers.\n> > \n> > Questions:\n> > \n> > Should we allow the user to have more detailed control over\n> > which parts of a database use block checking?\n> \n> I don't think that is needed; installation-wide settings are fine.\n> \n> > For example: use 'checksum' on all system catalogs in all databases, \n> > 'torn_page_flag' on the non-catalog parts of the production database,\n> > and 'none' on everything else?\n> \n> Too complicated. Let's get it implemented and in the field and see what\n> people ask for.\n\nOk. I agree.\n\n> \n> > ?3 - Detecting Shutdown Consistent System Recovery\n\n> > \n> > ?3.3 - Status:\n> > \n> > In progress.\n> \n> Sorry, I was confused by this.\n\nLet me re-write it, and I'll post it in the next version. The section\ndealt with what to do when you have a valid restored controlfile from a\nbackup system, which is in the DB_SHUTDOWNED state, and that points to a\nvalid shutdown/checkpoint record in the log; only the checkpoint record\nhappens not to be the last one in the log. This is a situation that\ncould never happen now, but would in PITR.\n\n\n> \n> > ?4 - Interactive Play-Forward Recovery for an Entire System\n> > \n> > Play-Forward File Recovery from a backup file must be interactive,\n> > because not all log files that we need are necessarily in the \n> > archive directory. It may be possible that not all the archive files\n> > we need can even fit on disk at one time.\n> > \n> > The system needs to be able to prompt the system administrator to feed\n> > it more log files.\n> > \n> > TODO: More here\n> \n> Yes, we can have someone working on the GUI once the command-line\n> interface is defined.\n\nYes, and the system must not allow any concurrent activity during\nrecovery either. 
So it looks like a standalone backend operation.\n\n> \n> > ?5 - Individual file consistent recovery\n> > \n> > ?5.1 - Problem:\n> > \n> > If a file detects corruption, and we restore it from backup, how do \n> > we know what archived files we need for recovery?\n> > \n> > Should file corruption (partial write, bad disk block, etc.) outside \n> > the system catalog cause us to abort the system, or should we just \n> > take the relation or database off-line?\n> \n> Offline is often best so they can get in there and recover if needed. \n> We usually allow them in with a special flag or utility like\n> pg_resetxlog.\n> \n> \n> > Given a backup file, how do we determine the point in the log \n> > where we should start recovery for the file? What is the highest LSN\n> > we can use that will fully recover the file?\n> \n> That is tricky. We have discussed it and your backup has to deal with\n> some pretty strange things that can happen while 'tar' is traversing the\n> directory.\n\nEven if we shutdown before we copy the file, we don't want a file that\nhasn't been written to in 5 weeks before it was backed up to require\nfive weeks of old log files to recover. So we need to track that\ninformation somehow, because right now if we scanned the blocks in the\nfile looking at the page LSN's, the greatest LSN we would see might\nbe much older than where it would be safe to recover from. That is the\nbiggest problem, I think.\n\n> \n> \n> > ?5.2 - Proposal: \n> > \n> > Put a file header on each file, and update that header to the last\n> > checkpoint LSN at least once every 'file_lsn_time_slack' minutes, or\n> > at least once every dbsize/'file_lsn_log_slack' megabytes of log\n> > written, where dbsize is the estimated size of the database. Have\n> > these values be settable from the config file. 
These updates would be\n> > distributed throughout the hour, or interspersed between regular\n> > amounts of log generation.\n> > \n> > If we have a database backup program or command, it can update the\n> > header on the file before backup to the greatest value it can assure\n> > to be safe.\n> \n> Not sure.\n> \n> > ?5.3 - Status:\n> > \n> > Waiting for input from pgsql-hackers.\n> > \n> > Questions:\n> > \n> > There are alternate methods than using a file header to get a\n> > known-good LSN lower bound for the starting point to recover a backup\n> > file. Is this the best way?\n> \n> Not sure.\n> \n> I am sure others will chime in with more information.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n\n\n", "msg_date": "05 Jul 2002 06:01:40 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": true, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "J. R. Nield wrote:\n> On Fri, 2002-07-05 at 01:42, Bruce Momjian wrote:\n> > \n> > We have needed\n> > point-in-time recovery for a long time, \n> \n> Most thanks should go to vadim (and whoever else worked on this), since\n> his WAL code already does most of the work. The key thing is auditing\n> the backend to look for every case where we assume some action is not\n> visible until after commit, and therefore don't log its effects. Those\n> are the main cases that must be changed.\n\nYep. Glad you can focus on that.\n\n> > ---------------------------------------------------------------------------\n> > \n> > J. R. Nield wrote:\n> > > Hello:\n> > > \n> > > I've got the logging system to the point where I can take a shutdown\n> > > consistent copy of a system, and play forward through multiple\n> > > checkpoints. 
It seems to handle CREATE TABLE/DROP TABLE/TRUNCATE\n> \n> But notably not for the btree indexes! It looked like they were working,\n> because the files were there, and all indexes created before the backup\n> would work under insert/delete (including sys catalog indexes). This is\n> because btree insert/delete is logged, just not during build. So I\n> missed that one case.\n> \n> You will end-up with up-to-date table data though, so it is something.\n> \n> Adding logging support to btbuild is the next step, and I don't think it\n> should be too hard. I am working this now.\n\nGreat.\n\n> It is also a major advantage that most everything in the system gets\n> stored in the catalog tables, and so is logged already.\n> \n> \n> > Uh, we do log pre-page writes to WAL to recover from partial page\n> > writes to disk. Is there something more we need here?\n> > \n> > As for bad block detection, we have thought about adding a CRC to each\n> > page header, or at least making it optional. WAL already has a CRC.\n> >\n> \n> Yes this should be last to do, because it is not necessary for PITR,\n> only for performance (the option not to write pre-images without fear of\n> data loss). \n\nYep.\n\n> > Yes, there are a few places where we actually create a file, and if the\n> > server crashes, the file remains out there forever. We need to track that\n> > better. \n> \n> OK, there is a bigger problem than just tracking the file though. We\n> sometimes do stuff to that file that we don't log. We assume that if we\n> commit, the file must be OK and will not need replay because the\n> transaction would not have committed if the file was not in a committable\n> state. If we abort, the system never sees the file, so in a sense we\n> undo everything we did to the file. It is a kind of poor-man's rollback\n> for certain operations, like btbuild, create table, etc. 
But it means\n> that we cannot recover the file from the log, even after a commit.\n\nYep.\n\n> > \n> > > \n> > > ?1.1.1 - CREATE DATABASE is also unlogged\n> > > \n> > > This will cause the same replay problems as above.\n> > \n> > Yep. Again, seems a master cleanup on startup is needed.\n> \n> The cleanup is not the problem, only a nuisance. Creating the files\n> during replay is the problem. I must recreate CREATE DATABASE from the\n> log exactly as it was done originally. I think just logging the\n> parameters to the command function should be sufficient, but I need to\n> think more about it.\n\nOK, makes sense. Nice when you can bundle a complex action into the\nlogging of one command and its parameters.\n\n> > \n> > > b) If TRUNCATE TABLE fails, the system must PANIC. Otherwise, the table\n> > > may be used in a future command, and a replay-recovered database may\n> > > end-up with different data than the original.\n> > \n> > We number based on oids. You mean oid wraparound could cause the file\n> > to be used again?\n> \n> That's not what I meant. Let's say I issue 'TRUNCATE TABLE foo'. Then,\n> right before smgrtruncate is called, I do an XLogInsert saying \"Redo a\n> TRUNCATE TABLE on foo to nblocks if we crash\". Then smgrtruncate fails\n> and we do an elog(ERROR)\n> \n> Now the user decides that since TRUNCATE TABLE didn't work, he might as\n> well use the table, so he inserts some records into it, generating log\n> entries.\n> \n> When I replay this log sequence later, what happens if the TRUNCATE\n> succeeds instead of failing?\n\nYou mean the user is now accessing a partially truncated table? That's\njust too weird. I don't see how the WAL would know how far truncation\nhad gone. I see why you would need the panic and it seems acceptable.\n\n> I admit that there are other ways of handling it than to PANIC if the\n> truncate fails. 
All the ones I can come up with seem to amount to some\n> kind of ad-hoc UNDO log.\n\nYea, truncate failure seems so rare/impossible to happen, we can do a\npanic and see if it ever happens to anyone. I bet it will not. Those\nare usually cases of an OS crash, so it is the same as a panic.\n\n> > > WAL must be flushed before truncate as well.\n> > > \n> > > WAL does not need to be flushed before create, if we don't mind \n> > > leaking files sometimes.\n> > \n> > Cleanup?\n> \n> Yes, we could garbage-collect leaked files. XLogFlush is not that\n> expensive though, so I don't have an opinion on this yet.\n\nRight now, if we do CREATE TABLE, and the backend crashes, I think it\nleaves a nonreferenced file around. Not something you need to worry\nabout for replication, I guess. We can address it later.\n\n> > > c) Redo code should treat writes to non-existent files as an error.\n> > > Changes affect heap & nbtree AM's. [Check others]\n> > \n> > Yep, once you log create/drop, if something doesn't match, it is an\n> > error, while before, we could ignore it.\n> > \n> > > d) rtree [and GiST? WTF is GiST? ] is not logged. A replay recovery of\n> > > a database should mark all the rtree indices as corrupt.\n> > > [ actually we should do that now, are we? ]\n> > \n> > Known problem. Not sure what is being done. TODO has:\n> > \n> > \t* Add WAL index reliability improvement to non-btree indexes\n> > \n> > so it is a known problem, and we aren't doing anything about it. What\n> > more can I say? ;-)\n> \n> Once the other stuff works reliably, I will turn to rtree logging, which\n> I have looked at somewhat, although I could really use a copy of the\n> paper it is supposed to be based on. I have not figured out GiST enough\n> to work on it yet.\n\nThere has been talk of retiring rtree and using the GIST version of\nrtree. I thought it had some advantages/disadvantages. 
I don't remember\nfor sure.\n\n> > > Database will do infinite shutdown consistent system recovery from the\n> > > online logs, if you manually munge the control file to set state ==\n> > > DB_IN_PRODUCTION instead of DB_SHUTDOWNED.\n> > \n> > Wow, how did you get so far?\n> \n> Because it was almost there to start with :-)\n> \n> Besides, it sounded better before I realized there was still a remaining\n> problem with btree logging to fix. \n\nOur major problem is that we have a very few people who like to work\nat this level in the code. Glad you are around.\n\n> > > In the case of LSNLast, we check to see if pd_lsn == the lsn in the\n> > > last 64 bits of the page. If not, we assume the page is corrupt from\n> > > a partial write (although it could be something else).\n> > \n> > LSN?\n> \n> Log Sequence Number (XLogRecPtr)\n\nYep, I remembered later.\n\n> > > IMPORTANT ASSUMPTION:\n> > > The OS/disk device will never write both the first part and\n> > > last part of a block without writing the middle as well.\n> > > This might be wrong in some cases, but at least it's fast.\n> > > \n> > > ?2.2.4 - GUC Variables\n> > > \n> > > The user should be able to configure what method is used:\n> > > \n> > > block_checking_write_method = [ checksum | torn_page_flag | none ]\n> > > \n> > > Which method should be used for blocks we write?\n> > \n> > Do we want torn page flag? Seems like a pain to get that on every 512\n> > byte section of the 8k page.\n> \n> Ok, this section (2.2) was badly written and hard to understand. What I\n> am proposing is that we put a copy of the log sequence number, which is\n> at the head of the page, into the 8 byte field that we are creating at\n> the end of the page, in place of the CRC. The log sequence number\n> increases every time the page is written (it already does this). 
I have\n> called this method 'LSNLast' internally, and the user would call it the\n> 'torn_page_flag' method.\n> \n> So when we read the page, we compare the Log Sequence Number at the\n> beginning and end of the page, and if they are different we assume a\n> torn page.\n\nExcellent.\n\n> This version is weaker than the MS one we were talking about, because it\n> is not on every 512 byte section of the page, only the beginning and the\n> end. I'm simply looking for a fast alternative to CRC64, that doesn't\n> require massive reorganization of the page layout code.\n\nGreat idea, and cheap.\n\n> > > The system needs to be able to prompt the system administrator to feed\n> > > it more log files.\n> > > \n> > > TODO: More here\n> > \n> > Yes, we can have someone working on the GUI once the command-line\n> > interface is defined.\n> \n> Yes, and the system must not allow any concurrent activity during\n> recovery either. So it looks like a standalone backend operation.\n\nOK, we can address this capability.\n\n> > That is tricky. We have discussed it and your backup has to deal with\n> > some pretty strange things that can happen while 'tar' is traversing the\n> > directory.\n\nOK, first, are you thinking of having the nightly backup operate at the\nfile system level, or accessing the pages through the PostgreSQL shared\nbuffers?\n\n> Even if we shutdown before we copy the file, we don't want a file that\n\nOh, so you are thinking of some kind of tar while the db is shutdown,\nand using that as the backup?\n\n> hasn't been written to in 5 weeks before it was backed up to require\n> five weeks of old log files to recover. So we need to track that\n> information somehow, because right now if we scanned the blocks in the\n> file looking for at the page LSN's, we greatest LSN we would see might\n> be much older than where it would be safe to recover from. 
That is the\n> biggest problem, I think.\n\nYou are saying, \"How do we know what WAL records go with that backup\nsnapshot of the file?\" OK, lets assume we are shutdown. You can grab\nthe WAL log info from pg_control using contrib/pg_controldata and that\ntells you what WAL logs to roll forward when you need to PIT recover\nthat backup later. If you store that info in the first file you backup,\nyou can have that WAL pointer available for later recovery in case you\nare restoring from that backup. Is that the issue?\n\nWhat seems more complicated is doing the backup while the database is\nactive, and this may be a requirement for a final PITR solution. Some\nthink we can grab the WAL pointer at 'tar' start and replay that on the\nbackup even if the file changes during backup.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Fri, 5 Jul 2002 13:20:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "Bruce Momjian wrote:\n> You are saying, \"How do we know what WAL records go with that backup\n> snapshot of the file?\" OK, lets assume we are shutdown. You can grab\n> the WAL log info from pg_control using contrib/pg_controldata and that\n> tells you what WAL logs to roll forward when you need to PIT recover\n> that backup later. If you store that info in the first file you backup,\n> you can have that WAL pointer available for later recovery in case you\n> are restoring from that backup. Is that the issue?\n> \n> What seems more complicated is doing the backup while the database is\n> active, and this may be a requirement for a final PITR solution. 
Some\n> think we can grab the WAL pointer at 'tar' start and replay that on the\n> backup even if the file changes during backup.\n\nOK, I think I understand live backups now using tar and PITR. Someone\nexplained this to me months ago but now I understand it.\n\nFirst, a key issue is that PostgreSQL doesn't fiddle with individual\nitems on disk. It reads an 8k block, modifies it, (writes it to WAL if\nit hasn't been written to that WAL segment before), and writes it to\ndisk. That is key. (Are there cases where we don't do this, like\npg_controldata?)\n\nOK, so you do a tar backup of a file. While you are doing the tar,\ncertain 8k blocks are being modified in the file. There is no way to\nknow what blocks are modified as you are doing the tar, and in fact you\ncould read partial page writes during the tar.\n\nOne solution would be to read the file using the PostgreSQL page buffer,\nbut even then, getting a stable snapshot of the file would be difficult.\nNow, we could lock the table and prevent writes while it is being backed\nup, but there is a better way.\n\nWe already have pre-change page images in WAL. When we do the backup,\nany page that was modified while we were backing up is in the WAL. On\nrestore, we can recover whatever tar saw of the file, knowing that the\nWAL page images will recover any page changes made during the tar.\n\nNow, you mentioned we may not want pre-change page images in WAL\nbecause, with PITR, we can more easily recover from the WAL rather than\nhaving this performance hit for many page writes.\n\nWhat I suggest is a way for the backup tar to turn on pre-change page\nimages while the tar is happening, and turn it off after the tar is\ndone.\n\nWe already have this TODO item:\n\n\t* Turn off after-change writes if fsync is disabled (?)\n\nNo sense in doing after-change WAL writes without fsync. We could\nextend this so those after-change writes could be turned on and off,\nallowing full tar backups and PITR recovery. 
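A sketch of why those pre-change (full-page) images make the fuzzy tar copy safe to restore. The record shape below is invented for illustration; the real WAL record format differs:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BLCKSZ 8192  /* assumed page size for the sketch */

/* Hypothetical replay record: either a complete page image or an
 * incremental change (the incremental case is elided here). */
typedef struct {
    int     has_full_page_image;  /* page logged in full since checkpoint? */
    uint8_t image[BLCKSZ];        /* the pre-change ("backup block") image */
} WalRecSketch;

/* On replay, a full-page image overwrites whatever state the fuzzy tar
 * backup captured, so a page torn while 'tar' was reading it is simply
 * replaced wholesale and never needs to be consistent on its own. */
static void redo_apply(uint8_t *page, const WalRecSketch *rec)
{
    if (rec->has_full_page_image)
        memcpy(page, rec->image, BLCKSZ);
    /* otherwise the record would describe an incremental change to a
     * page that replay has already made consistent */
}
```

The point is that the first WAL-logged change to a page writes the whole page, so replay never has to trust the backup's possibly-torn copy of any page that changed during the tar.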
In fact, for people with\nreliable hardware, we should already be giving them the option of\nturning off pre-change writes. We don't have a way of detecting partial\npage writes, but then again, we can't detect failures with fsync off\nanyway so it seems to be the same vulnerability. I guess that's why we\nwere going to wrap the effect into the same variable, but for PITR, I can\nsee wanting fsync always on and the ability to turn pre-change writes on\nand off.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Fri, 5 Jul 2002 22:08:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "J.R.,\n\nNice first draft and a good read. Was going to comment \nin-line but thought this method would be easier to follow. \nThe comments/suggestions below assume that PIT recovery is\nbeing performed at the cluster level with a data backup \nimage created by a tar-like utility. \n\nAs noted, one of the main problems is knowing where to begin\nin the log. This can be handled by having backup processing \nupdate the control file with the first lsn and log file \nrequired. At the time of the backup, this information is or \ncan be made available. The control file can be the last file\nadded to the tar and can contain information spanning the entire\nbackup process.\n\nFor data consistency, since the backup is being performed on \nan active cluster, we have to make sure to mark the end of the \nbackup. On restore, to make the cluster consistent, you have \nto force the user to perform forward recovery past the point\nof the backup completion marker in the (archived) log. This \ncan be handled using a backup end log record. 
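In sketch form, the gate this implies is just a comparison of log positions at the moment the user asks recovery to stop. The two-part position and all names below are illustrative stand-ins, not the actual PostgreSQL types:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical log position: (log file id, byte offset within it). */
typedef struct {
    uint32_t xlogid;
    uint32_t xrecoff;
} RecPtrSketch;

/* Ordering of log positions: compare file ids, then offsets. */
static int recptr_lt(RecPtrSketch a, RecPtrSketch b)
{
    return a.xlogid < b.xlogid ||
           (a.xlogid == b.xlogid && a.xrecoff < b.xrecoff);
}

/* After restoring an online backup, recovery may only stop once it has
 * replayed past the backup-end record; stopping earlier would leave the
 * cluster in the inconsistent state the backup captured mid-write. */
static int can_stop_recovery_at(RecPtrSketch stop, RecPtrSketch backup_end)
{
    return !recptr_lt(stop, backup_end);
}
```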
The backup end\nlog record would have to contain an identifier unique to this \nbackup. If a user requests to stop PIT recovery before this \nlog record is encountered, consistency is not guaranteed. \nPIT should either disallow the action or warn of possible / \nimpending doom.\n\nThe necessary logging for rtree (and others) insertions/deletions\ncan be added to the base code. Not much of a worry but I would\nexpect to encounter other missing log items during testing.\n\nThe idea of using the last lsn on the page to detect a partial\nwrite is used by other dbms systems. You already have that \ninformation available so there is no overhead in computing it. \nNothing wrong with CRC though.\n\nAs for the DB_SHUTDOWNED state, this could be handled by having\nthe backup processing update the control file field to \nDB_PIT_REQUIRED (or some such identifier). After a restore,\nusers would be blocked from connecting to the cluster's databases \nuntil a forward recovery past the backup end log record has\ncompleted successfully. \n\nAt the end of normal crash recovery, the user has to go digging\nto identify in-flight transactions still in the system and abort\nthem manually. It would be nice if PIT recovery automatically\naborted all in-flight transactions at the end. \n\nAs PostgreSQL heads towards forward recovery functionality, it\nmay be wise to add headers to the log files. As the logs from\nany cluster are identically named, the header would allow unique\nidentification of the file and contents (cluster name, unique \nlog id, id of the prior log file for chaining purposes, lsn \nranges, etc). Most helpful.\n\nJust a few notes from the administrative side. PIT recovery\nshould probably offer the user the following actions:\n\n. forward recover to end of logs [and stop]\n Process log files located in the current directory until you\n read through the last one. Allow the user the option to stop\n or not, just in case the logs are archived. 
Send back the\n timestamp of the last encountered commit log record and the\n series of log files scanned. \n\n. forward recover to PIT [and stop]\n Similar to that described above but use the commit timestamps\n to gauge PIT progress. \n \n. forward recover query\n Send back the log series covered and the last commit timestamp\n encountered. \n\n. forward recover stop\n Stop the current forward recovery session. Undo all in-flight\n transactions and bring the databases down in a consistent\n state. No other external user actions should be required.\n\nLooking forward to reading draft 2.\n\nCheers,\nPatrick\n--\nPatrick Macdonald \nRed Hat Canada \n\n\"J. R. Nield\" wrote:\n> \n> Hello:\n> \n> I've got the logging system to the point where I can take a shutdown\n> consistent copy of a system, and play forward through multiple\n> checkpoints. It seems to handle CREATE TABLE/DROP TABLE/TRUNCATE\n> properly, and things are moving forward well. Recovery to an arbitrary\n> point-in-time should be just as easy, but will need some administrative\n> interface for it.\n> \n> At this point, some input would be useful on how I should handle things.\n> \n> The most important questions that need answering are in sections 2 & 5,\n> since they impact the most other parts of the system. They will also\n> require good documentation for sysadmins.\n> \n> Issues Outstanding for Point In Time Recovery (PITR)\n> \n> $Date: 2002/07/04 14:23:37 $\n> \n> $Revision: 1.4 $\n> \n> J.R. Nield\n> \n> (Enc: ISO 8859-15 Latin-9)\n> \n> §0 - Introduction\n> \n> This file is where I'm keeping track of all the issues I run into while\n> trying to get PITR to work properly. Hopefully it will evolve into a\n> description of how PITR actually works once it is implemented.\n> \n> I will also try to add feedback as it comes in.\n> \n> The big items so-far are:\n> §1 - Logging Relation file creation, truncation, and removal\n> This is mostly done. 
Can do infinite play-forward from\n> online logs.\n> §2 - Partial-Write and Bad Block detection\n> Need input before starting. Migration issues.\n> §3 - Detecting Shutdown Consistent System Recovery\n> Mostly done.\n> §4 - Interactive Play-Forward Recovery for an Entire System\n> Need input before starting.\n> §5 - Individual file consistent recovery\n> Need input. Semi-Major changes required.\n> \n> §1 - Logging Relation file creation, truncation, and removal\n> \n> §1.1 - Problem:\n> \n> Without file creation in the log, we can't replay committed\n> transactions that create relations.\n> \n> The current code assumes that any transaction reaching commit has already\n> ensured its files exist, and that those files will never be removed. This\n> is true now, but not for log-replay from an old backup database system.\n> \n> The current XLOG code silently ignores block-write requests for\n> non-existent files, and assumes that the transaction generating those\n> requests must have aborted.\n> \n> Right now a crash during TRUNCATE TABLE will leave the table in an\n> inconsistent state (partially truncated). This would not work when doing\n> replay from before the last checkpoint.\n> \n> §1.1.1 - CREATE DATABASE is also unlogged\n> \n> This will cause the same replay problems as above.\n> \n> §1.2 - Proposal:\n> \n> a) Augment the SMGR code to log relation file operations, and to handle\n> redo requests properly. This is simple in the case of create. Drop must be\n> logged only IN the commit record. For truncate see (b).\n> \n> The 'struct f_smgr' needs new operations 'smgr_recreate', 'smgr_reunlink',\n> and 'smgr_retruncate'. smgr_recreate should accept a RelFileNode instead\n> of a Relation.\n> \n> Transactions that abort through system failure (ie. unlogged aborts)\n> will simply continue to leak files.\n> \n> b) If TRUNCATE TABLE fails, the system must PANIC. 
Otherwise, the table\n> may be used in a future command, and a replay-recovered database may\n> end-up with different data than the original.\n> \n> WAL must be flushed before truncate as well.\n> \n> WAL does not need to be flushed before create, if we don't mind\n> leaking files sometimes.\n> \n> c) Redo code should treat writes to non-existent files as an error.\n> Changes affect heap & nbtree AM's. [Check others]\n> \n> d) rtree [and GiST? WTF is GiST? ] is not logged. A replay recovery of\n> a database should mark all the rtree indices as corrupt.\n> [ actually we should do that now, are we? ]\n> \n> e) CREATE DATABASE must be logged properly, not use system(cp...)\n> \n> §1.3 - Status:\n> \n> All logged SMGR operations are now in a START_CRIT_SECTION()/\n> END_CRIT_SECTION() pair enclosing the XLogInsert() and the underlying fs\n> operations.\n> \n> Code has been added to smgr and xact modules to log:\n> create (no XLogFlush)\n> truncate (XLogFlush)\n> pending deletes on commit record\n> files to delete on abort record\n> \n> Code added to md.c to support redo ops\n> \n> Code added to smgr for RMGR redo/desc callbacks\n> \n> Code added to xact RMGR callbacks for redo/desc\n> \n> Database will do infinite shutdown consistent system recovery from the\n> online logs, if you manually munge the control file to set state ==\n> DB_IN_PRODUCTION instead of DB_SHUTDOWNED.\n> \n> Still need to do:\n> Item (c), recovery cleanup in all AM's\n> Item (d), logging in other index AM's\n> Item (e), CREATE DATABASE stuff\n> \n> §2 - Partial-Write and Bad Block detection\n> \n> §2.1 - Problem:\n> \n> In order to protect against partial writes without logging pages\n> twice, we need to detect partial pages in system files and report them\n> to the system administrator. We also might want to be able to detect\n> damaged pages from other causes, like memory corruption, OS errors,\n> etc. 
or in the case where the disk doesn't report bad blocks, but\n> returns bad data.\n> \n> We should also decide what should happen when a file is marked as\n> containing corrupt pages, and requires log-archive recovery from a\n> backup.\n> \n> §2.2 - Proposal:\n> \n> Add a 1 byte 'pd_flags' field to PageHeaderData, with the following\n> flag definitions:\n> \n> PD_BLOCK_CHECKING (1)\n> PD_BC_METHOD_BIT (1<<1)\n> \n> PageHasBlockChecking(page) ((page)->pd_flags & PD_BLOCK_CHECKING)\n> PageBCMethodIsCRC64(page) ((page)->pd_flags & PD_BC_METHOD_BIT)\n> PageBCMethodIsLSNLast(page) (!PageBCMethodIsCRC64(page))\n> \n> The last 64 bits of a page are reserved for use by the block checking\n> code.\n> \n> [ Is it worth the trouble to allow the last 8 bytes of a\n> page to contain data when block checking is turned off for a Page?\n> \n> This proposal does not allow that. ]\n> \n> If the block checking method is CRC64, then that field will contain\n> the CRC64 of the block computed at write time.\n> \n> If the block checking method is LSNLast, then the field contains a\n> duplicate of the pd_lsn field.\n> \n> §2.2.1 - Changes to Page handling routines\n> \n> All the page handling routines need to understand that\n> pd_special == (pd_special - (specialSize + 8))\n> \n> Change header comment in bufpage.h to reflect this.\n> \n> §2.2.2 - When Reading a Page\n> \n> Block corruption is detected on read in the obvious way with CRC64.\n> \n> In the case of LSNLast, we check to see if pd_lsn == the lsn in the\n> last 64 bits of the page. 
If not, we assume the page is corrupt from\n> a partial write (although it could be something else).\n> \n> IMPORTANT ASSUMPTION:\n> The OS/disk device will never write both the first part and\n> last part of a block without writing the middle as well.\n> This might be wrong in some cases, but at least it's fast.\n> \n> §2.2.4 - GUC Variables\n> \n> The user should be able to configure what method is used:\n> \n> block_checking_write_method = [ checksum | torn_page_flag | none ]\n> \n> Which method should be used for blocks we write?\n> \n> check_blocks_on_read = [ true | false ]\n> \n> When true, verify that the blocks we read are not corrupt, using\n> whatever method is in the block header.\n> \n> When false, ignore the block checking information.\n> \n> §2.3 - Status:\n> \n> Waiting for input from pgsql-hackers.\n> \n> Questions:\n> \n> Should we allow the user to have more detailed control over\n> which parts of a database use block checking?\n> \n> For example: use 'checksum' on all system catalogs in all databases,\n> 'torn_page_flag' on the non-catalog parts of the production database,\n> and 'none' on everything else?\n> \n> §3 - Detecting Shutdown Consistent System Recovery\n> \n> §3.1 - Problem:\n> \n> How to notice that we need to do log-replay for a system backup, when the\n> restored control file points to a shutdown checkpoint record that is\n> before the most recent checkpoint record in the log, and may point into\n> an archived file.\n> \n> §3.2 - Proposal:\n> \n> At startup, after reading the ControlFile, scan the log directory to\n> get the list of active log files, and find the lowest logId and\n> logSeg of the files. 
Ensure that the files cover a contiguous range\n> of LSN's.\n> \n> There are three cases:\n> \n> 1) ControlFile points to the last valid checkpoint (either\n> checkPoint or prevCheckPoint, but one of them is the greatest\n> valid checkpoint record in the log stream).\n> \n> 2) ControlFile points to a valid checkpoint record in an active\n> log file, but there are more valid checkpoint records beyond\n> it.\n> \n> 3) ControlFile points to a checkpoint record that should be in the\n> archive logs, and is presumably valid.\n> \n> Case 1 is what we handle now.\n> \n> Cases 2 and 3 would result from restoring an entire system from\n> backup in preparation to do a play-forward recovery.\n> \n> We need to:\n> \n> Detect cases 2 and 3.\n> \n> Alert the administrator and abort startup.\n> [Question: Is this always the desired behavior? We can\n> handle case 2 without intervention. ]\n> \n> Let the administrator start a standalone backend, and\n> perform a play-forward recovery for the system.\n> \n> §3.3 - Status:\n> \n> In progress.\n> \n> §4 - Interactive Play-Forward Recovery for an Entire System\n> \n> Play-Forward File Recovery from a backup file must be interactive,\n> because not all log files that we need are necessarily in the\n> archive directory. It may be possible that not all the archive files\n> we need can even fit on disk at one time.\n> \n> The system needs to be able to prompt the system administrator to feed\n> it more log files.\n> \n> TODO: More here\n> \n> §5 - Individual file consistent recovery\n> \n> §5.1 - Problem:\n> \n> If a file detects corruption, and we restore it from backup, how do\n> we know what archived files we need for recovery?\n> \n> Should file corruption (partial write, bad disk block, etc.) outside\n> the system catalog cause us to abort the system, or should we just\n> take the relation or database off-line?\n> \n> Given a backup file, how do we determine the point in the log\n> where we should start recovery for the file? 
What is the highest LSN\n> we can use that will fully recover the file?\n> \n> §5.2 - Proposal:\n> \n> Put a file header on each file, and update that header to the last\n> checkpoint LSN at least once every 'file_lsn_time_slack' minutes, or\n> at least once every dbsize/'file_lsn_log_slack' megabytes of log\n> written, where dbsize is the estimated size of the database. Have\n> these values be settable from the config file. These updates would be\n> distributed throughout the hour, or interspersed between regular\n> amounts of log generation.\n> \n> If we have a database backup program or command, it can update the\n> header on the file before backup to the greatest value it can assure\n> to be safe.\n> \n> §5.3 - Status:\n> \n> Waiting for input from pgsql-hackers.\n> \n> Questions:\n> \n> There are alternate methods other than using a file header to get a\n> known-good LSN lower bound for the starting point to recover a backup\n> file. Is this the best way?\n> \n> A) The Definitions\n> \n> This stuff is obtuse, but I need it here to keep track of what I'm\n> saying. 
Someday I should use it consistently in the rest of this\n> document.\n> \n> \"system\" or \"database system\":\n> \n> A collection of postgres \"databases\" in one $PGDATA directory,\n> managed by one postmaster instance at a time (and having one WAL\n> log, etc.)\n> \n> All the files composing such a system, as a group.\n> \n> \"up to date\" or \"now\" or \"current\" or \"current LSN\":\n> \n> The most recent durable LSN for the system.\n> \n> \"block consistent copy\":\n> \n> When referring to a file:\n> \n> A copy of a file, which may be written to during the process of\n> copying, but where each BLCKSZ size block is copied atomically.\n> \n> When referring to multiple files (in the same system):\n> \n> A copy of all the files, such that each is independently a \"block\n> consistent copy\"\n> \n> \"file consistent copy\":\n> \n> When referring to a file:\n> \n> A copy of a file that is not written to between the start and end\n> of the copy operation.\n> \n> When referring to multiple files (in the same system):\n> \n> A copy of all the files, such that each is independently a \"file\n> consistent copy\"\n> \n> \"system consistent copy\":\n> \n> When referring to a file:\n> \n> A copy of a file, where the entire system of which it is a member\n> is not written to during the copy.\n> \n> When referring to multiple files (in the same system):\n> \n> A copy of all the files, where the entire system of which they are\n> members was not written to between the start and end of the\n> copying of all the files, as a group.\n> \n> \"shutdown consistent copy\":\n> \n> When referring to a file:\n> \n> A copy of a file, where the entire system of which it is a member\n> had been cleanly shutdown before the start of and for the duration\n> of the copy.\n> \n> When referring to multiple files (in the same system):\n> \n> A copy of all the files, where the entire system of which they are\n> members had been cleanly shutdown before the start of and for the\n> duration of the 
copying of all the files, as a group.\n> \n> \"consistent copy\":\n> \n> A block, file, system, or shutdown consistent copy.\n> \n> \"known-good LSN lower bound\"\n> or \"LSN lower bound\"\n> or \"LSN-LB\":\n> \n> When referring to a group of blocks, a file, or a group of files:\n> \n> An LSN known to be old enough that no log entries before it are needed\n> to bring the blocks or files up-to-date.\n> \n> \"known-good LSN greatest lower bound\"\n> or \"LSN greatest lower bound\"\n> or \"LSN-GLB\":\n> \n> When referring to a group of blocks, a file, or a group of files:\n> \n> The greatest possible LSN that is a known-good LSN lower bound for\n> the group.\n> \n> \"backup file\":\n> \n> A consistent copy of a data file used by the system, for which\n> we have a known-good LSN lower bound.\n> \n> \"optimal backup file\":\n> \n> A backup file, for which we have the known-good LSN greatest lower\n> bound.\n> \n> \"backup system\":\n> \n> \n> \"Play-Forward File Recovery\" or \"PFFR\":\n> \n> The process of bringing an individual backup file up to date.\n> \n> \n> --\n> J. R. Nield\n> jrnield@usol.com\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n\n", "msg_date": "Sat, 06 Jul 2002 23:44:58 -0400", "msg_from": "Patrick Macdonald <patrickm@redhat.com>", "msg_from_op": false, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "Patrick Macdonald wrote:\n> The idea of using the last lsn on the page to detect a partial\n> write is used by other dbms systems. You already have that \n> information available so there is no overhead in computing it. \n> Nothing wrong with CRC though.\n\nAgreed. Just thought I would point out that is not guaranteed. 
Suppose\nthe 8k block is spread over sixteen 512-byte sectors in two cylinders. The OS or\nSCSI tagged queuing could write the second part of the page (sectors\n9-16) before the first group (1-8). If it writes 9-16, then writes 1-8\nbut fails in the middle of 1-8, the LSN will match at the front and back\nof the page, but the page will be partially written.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Sun, 7 Jul 2002 19:43:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "\nJ.R., just checking to see how PITR recovery is going. Do you need any\nassistance or have any questions for us?\n\nAlso, do you have any idea how close you are to having something\ncompleted? Are you aware we are closing development of 7.3 at the end\nof August and start beta September 1? Is there any way we can help you?\n\n---------------------------------------------------------------------------\n\nJ. R. Nield wrote:\n> On Fri, 2002-07-05 at 01:42, Bruce Momjian wrote:\n> > \n> > We have needed\n> > point-in-time recovery for a long time, \n> \n> Most thanks should go to vadim (and whoever else worked on this), since\n> his WAL code already does most of the work. The key thing is auditing\n> the backend to look for every case where we assume some action is not\n> visible until after commit, and therefore don't log its effects. Those\n> are the main cases that must be changed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Jul 2002 15:36:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "On Tue, 2002-07-16 at 15:36, Bruce Momjian wrote:\n> \n> J.R., just checking to see how PITR recovery is going. Do you need any\n> assistance or have any questions for us?\n> \n> Also, do you have any idea how close you are to having something\n> completed? Are you aware we are closing development of 7.3 at the end\n> of August and start beta September 1? Is there any way we can help you?\n> \n\nIt should be ready to go into CVS by the end of the month. \n\nThat will include: logging all operations except for rtree and GiST,\narchival of logfiles (with new postgresql.conf params), headers on the\nlogfiles to verify the system that created them, standalone backend\nrecovery to a point-in-time, and a rudimentary hot backup capability.\n\nI could use some advice on the proper way to add tests to configure.in,\ngiven that the autoconf output is in CVS. Would you ever want a patch to\ninclude the generated 'configure' file?\n\nRelated to that, the other place I need advice is on adding Ted Tso's\nLGPL'd UUID library (stolen from e2fsprogs) to the source. Are we\nallowed to use this? There is a free OSF/DCE spec for UUID's, so I can\nre-implement the library if required.\n\nWe also haven't discussed commands for backup/restore, but I will use\nwhat I think is appropriate and we can change the grammar if needed. The\ninitial hot-backup capability will require the database to be in\nread-only mode and use tar for backup, and I will add the ability to\nallow writes later.\n\nDoes this sound like a reasonable timeframe/feature-set to make the 7.3\nrelease?\n\n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n", "msg_date": "17 Jul 2002 00:24:10 -0400", "msg_from": "\"J. R. 
Nield\" <jrnield@usol.com>", "msg_from_op": true, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "\"J. R. Nield\" <jrnield@usol.com> writes:\n> Related to that, the other place I need advice is on adding Ted Tso's\n> LGPL'd UUID library (stolen from e2fsprogs) to the source. Are we\n> allowed to use this?\n\nUh, why exactly is UUID essential for this? (The correct answer is\n\"it's not\", IMHO.)\n\n> We also haven't discussed commands for backup/restore, but I will use\n> what I think is appropriate and we can change the grammar if needed. The\n> initial hot-backup capability will require the database to be in\n> read-only mode and use tar for backup, and I will add the ability to\n> allow writes later.\n\nThere is no read-only mode, and I for one will resist adding one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Jul 2002 01:15:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR) " }, { "msg_contents": "J. R. Nield wrote:\n> On Tue, 2002-07-16 at 15:36, Bruce Momjian wrote:\n> > \n> > J.R., just checking to see how PITR recovery is going. Do you need any\n> > assistance or have any questions for us?\n> > \n> > Also, do you have any idea how close you are to having something\n> > completed? Are you aware we are closing development of 7.3 at the end\n> > of August and start beta September 1? Is there any way we can help you?\n> > \n> \n> It should be ready to go into CVS by the end of the month. \n> \n> That will include: logging all operations except for rtree and GiST,\n> archival of logfiles (with new postgresql.conf params), headers on the\n> logfiles to verify the system that created them, standalone backend\n> recovery to a point-in-time, and a rudimentary hot backup capability.\n\nSounds great. That gives us another month to iron out any remaining\nissues. 
This will be a great 7.3 feature!\n\n\n> I could use some advice on the proper way to add tests to configure.in,\n> given that the autoconf output is in CVS. Would you ever want a patch to\n> include the generated 'configure' file?\n\nWe only patch configure.in. If you post to hackers, they can give you\nassistance and I will try to help however I can. I can do some\nconfigure.in stuff for you myself.\n\n> Related to that, the other place I need advice is on adding Ted Tso's\n> LGPL'd UUID library (stolen from e2fsprogs) to the source. Are we\n> allowed to use this? There is a free OSF/DCE spec for UUID's, so I can\n> re-implement the library if required.\n\nWe talked about this on the replication mailing list. We decided that\nhostname, properly hashed to an integer, was the proper way to get this\nvalue. Also, there should be a postgresql.conf variable so you can\noverride the hostname-generated value if you wish. I think that is\nsufficient.\n\n> We also haven't discussed commands for backup/restore, but I will use\n> what I think is appropriate and we can change the grammar if needed. The\n> initial hot-backup capability will require the database to be in\n> read-only mode and use tar for backup, and I will add the ability to\n> allow writes later.\n\nYea, I saw Tom balked at that. I think we have enough manpower and time\nthat we can get hot backup in normal read/write mode working before 7.3\nbeta so I would just code it assuming the system is live and we can deal\nwith making it hot-capable once it is in CVS. It doesn't have to work\n100% until beta time.\n\n> Does this sound like a reasonable timeframe/feature-set to make the 7.3\n> release?\n\nSounds great. This is another killer 7.3 feature, and we really need\nthis for greater enterprise acceptance of PostgreSQL.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Jul 2002 01:25:53 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "\nWe also have implemented a roll forward recovery mechanism. We modified a\n7.2.1 version of Postgres.\nThe mechanism is designed to provide a means of recovering from the loss or\ncorruption of media. It provides for duplicating wal_files so that if a\nwal_file is lost roll forward recovery can recover the database using the\nduplicated wal_files. Hooks were also added so that the roll forward\nrecovery mechanism can be used to implement a hot standby database.\nAlong with the roll forward recovery mechanism we have also implemented an\nonline database backup utility which is synchronized with the recovery log\nso that the backup can be a starting point for a roll forward recovery\nsession.\n\nRoll forward recovery is enabled for a database cluster by specifying one of\ntwo postmaster configuration parameters.\n\nwal_file_reuse = false This parameter tells the wal system to not reuse wal\nfiles. This option is intended for sites wishing to implement a hot standby\ndatabase where wal files will be periodically copied to another machine\nwhere they will be rolled forward into the standby database.\n\nwal_file_duplicate = <directory_path> This parameter tells the system to\nmirror the files in the pg_xlog directory to the specified directory. This\nallows for the recovery of a database where a wal file has been damaged or\nlost. It also allows for a variant of a hot standby database where the\nduplicate directory is the pg_xlog directory of the standby database.\n\nSince both of these options cause wal files to accumulate indefinitely the\ndba needs a means of purging wal files when they are no longer needed. So\nan sql command, \"ALTER SYSTEM PURGE WAL_FILES <wal_file_name>\", has also\nbeen implemented. 
This command deletes all wal files up to and including\nthe specified <wal_file_name> as long as those wal files are not needed to\nrecover the database in the event of a system crash. To find out the status\nof the wal files a function has been implemented to return a wal file name.\nThe function is:\n\nWal_file( <request_type>)\nRequest_type := [ 'current' | 'last' | 'checkpoint' | 'oldest']\n\nWal_file ('current') returns the name of the log file currently being\nwritten to.\nWal_file('last') returns the name of the last log file filled.\nWal_file('checkpoint') returns the name of the file containing the current\nredo position. The current redo position is the position in the recovery\nlog where crash recovery would start if the system were to crash now. All\nlogs prior to this one will not be needed to recover the database cluster\nand could be safely removed.\nWal_file('oldest') returns the oldest xlog file found in the pg_xlog\ndirectory.\n\n\nTo actually perform a roll forward you use the postmaster configuration\nparameter \"roll_forward=yes\". This parameter tells the startup process to\nperform crash recovery even though the state of the database as found in the\npg_control file indicates a normal shutdown. This is necessary since the\nstarting point of roll forward session could be the restore of a database\ncluster that was shutdown in order to back it up. Furthermore this\nparameter tells the startup process not to write out a checkpoint record at\nthe end of the roll forward session. This allows for the database cluster to\nreceive subsequent wal files and to have those rolled forward as well. When\nstarting the postmaster with the roll_forward=yes option, it shuts down the\ndatabase as soon as the startup process completes. 
So the idea is to\nrestore a backup, copy all of your saved/duplicated wal files into the\npg_xlog directory of the restored database and start the postmaster with the\nroll_forward option.\n\nFor point in time recovery there is also a roll_forward_until = <time> which\nrolls forward through the wal files until the first transaction commit note\nthat is greater than or equal to the specified time.\n\nThe pg_copy utility performs an online copy of a database cluster. Its\nsyntax is:\npg_copy <backup_directory> [-h host] [-p port] ...\nThis makes a copy of the database where backup_directory is what you would\nset PGDATA to in order to start a postmaster against the backup copy. The\ndatabase can be being updated while the copy occurs. If you start a\npostmaster against this copy it will appear to the startup process as a\ndatabase that crashed at the instant the pg_copy operation completed.\nFurthermore the pg_copy utility automatically removes any wal files not\nneeded to recover the database from either the pg_xlog directory or the\nwal_file_duplicate directory.\n\nSo to protect the database from media loss, a DBA just needs to set the\nwal_file_duplicate parameter and periodically pg_copy the database.\n\n\nThe BIG THING we have not done is address the issue that add/drop tables and\nindexes do not propagate through the roll forward recovery mechanism\nproperly.\n\n\n\n-regards\nRichard Tucker\n\n\n\n\n\n", "msg_date": "Wed, 17 Jul 2002 17:45:53 -0400", "msg_from": "Richard Tucker <richt@multera.com>", "msg_from_op": false, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "Richard Tucker wrote:\n> \n> We also have implemented a roll forward recovery mechanism. We modified a\n> 7.2.1 version of Postgres.\n> The mechanism is designed to provide a means of recoverying from the loss or\n> corruption of media. 
It provides for duplicating wal_files so that if a\n> wal_file is lost roll forward recovery can recover the database using the\n> duplicated wal_files. Hooks were also added so that the roll forward\n> recovery mechanism can be used to implement a hot standby database.\n> Along with the roll forward recovery mechanism we have also implemented an\n> online database backup utility which is synchronized with the recovery log\n> so that the backup can be a starting point for a roll forward recovery\n> session.\n\nIn researching who has done this work, I found that\nhttp://www.multera.com/ is Progress Software. I assume this\n\"Distributed but Connected\" is partly based on PostgreSQL.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Jul 2002 18:17:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "Bruce Momjian wrote:\n> Richard Tucker wrote:\n> > \n> > We also have implemented a roll forward recovery mechanism. We modified a\n> > 7.2.1 version of Postgres.\n> > The mechanism is designed to provide a means of recoverying from the loss or\n> > corruption of media. It provides for duplicating wal_files so that if a\n> > wal_file is lost roll forward recovery can recover the database using the\n> > duplicated wal_files. 
Hooks were also added so that the roll forward\n> > recovery mechanism can be used to implement a hot standby database.\n> > Along with the roll forward recovery mechanism we have also implemented an\n> > online database backup utility which is synchronized with the recovery log\n> > so that the backup can be a starting point for a roll forward recovery\n> > session.\n> \n> In researching who has done this work, I found that\n> http://www.multera.com/ is Progress Software. I assume this\n> \"Distributed but Connected\" is partly based on PostgreSQL.\n\nOh, I see it now, PostgreSQL is right there:\n\n * OuterEdge\n A distributed site technology suite that allows remote users to\nmaintain the integrity, not the congestion and price, of centralized\ndata. The OuterEdge includes the affordable UltraSQL^(TM) database\nserver powered by PostgreSQL, the third most popular database, and a\n\n ^^^^^^^^^^^^^^^^^^^^^\nSecure Application/web server that supports the industry's most popular\nInternet languages. Together with the Replication Engine software, the\nOuterEdge gives remote users seamless access to important data without\nrelying on bandwidth.\n\nWonder why we are \"the third most popular database\". I think that's\ngood?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Jul 2002 18:19:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "> server powered by PostgreSQL, the third most popular database, and a\n> \n> ^^^^^^^^^^^^^^^^^^^^^\n \n> Wonder why we are \"the third most popular database\". I think that's\n> good?\n\nYou'll notice they didn't qualify where. On this list, it's probably\n#1. 
Within Progress software perhaps we're third most popular (whatever\ntwo are typically used in the InnerEdge are 1 and 2).\n\n", "msg_date": "17 Jul 2002 18:26:43 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "Rod Taylor wrote:\n> > server powered by PostgreSQL, the third most popular database, and a\n> > \n> > ^^^^^^^^^^^^^^^^^^^^^\n> \n> > Wonder why we are \"the third most popular database\". I think that's\n> > good?\n> \n> You'll notice they didn't qualify where. On this list, it's probably\n> #1. Within Progress software perhaps we're third most popular (whatever\n> two are typically used in the InnerEdge are 1 and 2).\n\nYea, using that logic, #1 would be the Progress internal db system, #2\nwould be MySQL (though that seems doubtful at this point with Nusphere),\nand PostgreSQL.\n\nActually, PostgreSQL is #1 in my home.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Jul 2002 18:28:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "On July 17, 2002 05:45 pm, Richard Tucker wrote:\n> We also have implemented a roll forward recovery mechanism. We modified a\n> 7.2.1 version of Postgres.\n> ...\n\nExcellent! I can't wait. When will it be in current?\n\n> The BIG THING we have not done is address the issue that add/drop tables\n> and indexes do not propagate through the roll forward recovery mechanism\n> properly.\n\nI can live with that. Schemas shouldn't change so often that you can't just \ndup any changes to the backup(s).\n\n-- \nD'Arcy J.M. 
Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Wed, 17 Jul 2002 18:43:39 -0400", "msg_from": "\"D'Arcy J.M. Cain\" <darcy@druid.net>", "msg_from_op": false, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "I don't know how our marketing came up third most popular but I think the\norder is, Oracle, MySQL, and PostgreSQL or maybe Oracle, MSSQL and\nPostgreSQL. I'm sure there is some criterion by which PostgreSQL is tenth\nand by some other its number one.\nOf course, my posting was about Point In Time Recovery and not multera\nmarketing spin.\n-regards\nricht\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\nSent: Wednesday, July 17, 2002 6:29 PM\nTo: Rod Taylor\nCc: PostgreSQL-development; richt@multera.com; J. R. Nield\nSubject: Re: [HACKERS] Issues Outstanding for Point In Time Recovery\n(PITR)\n\n\nRod Taylor wrote:\n> > server powered by PostgreSQL, the third most popular database, and a\n> >\n> > ^^^^^^^^^^^^^^^^^^^^^\n>\n> > Wonder why we are \"the third most popular database\". I think that's\n> > good?\n>\n> You'll notice they didn't qualify where. On this list, it's probably\n> #1. Within Progress software perhaps we're third most popular (whatever\n> two are typically used in the InnerEdge are 1 and 2).\n\nYea, using that logic, #1 would be the Progress internal db system, #2\nwould be MySQL (though that seems doubtful at this point with Nusphere),\nand PostgreSQL.\n\nActually, PostgreSQL is #1 in my home.\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n", "msg_date": "Wed, 17 Jul 2002 18:55:55 -0400", "msg_from": "Richard Tucker <richt@multera.com>", "msg_from_op": false, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "On Wed, 2002-07-17 at 01:25, Bruce Momjian wrote:\n> \n> We only patch configure.in. If you post to hackers, they can give you\n> assistance and I will try to help however I can. I can so some\n> configure.in stuff for you myself.\n\nThanks for the offer. The only thing I was changing it for was to test\nwhether and how to get an ethernet MAC address using ioctl, so libuuid\ncould use it if available. That is dropped now.\n\n> \n> > Related to that, the other place I need advice is on adding Ted Tso's\n> > LGPL'd UUID library (stolen from e2fsprogs) to the source. Are we\n> > allowed to use this? There is a free OSF/DCE spec for UUID's, so I can\n> > re-implement the library if required.\n> \n> We talked about this on the replication mailing list. We decided that\n> hostname, properly hashed to an integer, was the proper way to get this\n> value. Also, there should be a postgresql.conf variable so you can\n> override the hostname-generated value if you wish. I think that is\n> sufficient.\n\nI will do something like this, but reserve 16 bytes for it just in case\nwe change our minds. It needs to be different among systems on the same\nmachine, so there needs to be a time value and a pseudo-random part as\nwell. Also, 'hostname' will likely be the same on many machines\n(localhost.localdomain or similar).\n\nThe only reason I bothered with UUID's before is because they have a\nstandard setup to make the possibility of collision extremely small, and\nI figured replication will end up using it someday.\n\n> \n> > We also haven't discussed commands for backup/restore, but I will use\n> > what I think is appropriate and we can change the grammar if needed. 
The\n> > initial hot-backup capability will require the database to be in\n> > read-only mode and use tar for backup, and I will add the ability to\n> > allow writes later.\n> \n> Yea, I saw Tom balked at that. I think we have enough manpower and time\n> that we can get hot backup in normal read/write mode working before 7.3\n> beta so I would just code it assuming the system is live and we can deal\n> with making it hot-capable once it is in CVS. It doesn't have to work\n> 100% until beta time.\n\nHot backup read/write requires that we force an advance in the logfile\nsegment after the backup. We need to save all the logs between backup\nstart and completion. Otherwise the files will be useless as a\nstandalone system if the current logs somehow get destroyed (fire in the\nmachine room, etc.).\n\nThe way I would do this is:\n\n create a checkpoint\n do the block-by-block walk of the files using the bufmgr\n create a second checkpoint\n force the log to advance past the end of the current segment\n save the log segments containing records between the\n first & second checkpont with the backup\n\nThen if you restore the backup, you can recover to the point of the\nsecond checkpoint, even if the logs since then are all gone.\n\nRight now the log segment size is fixed, so this means that we'd waste\n8MB of log space on average to do a backup. Also, the way XLOG reads\nrecords right now, we have to write placeholder records into the empty\nspace, because that's how it finds the end of the log stream. So I need\nto change XLOG to handle \"skip records\", and then to truncate the file\nwhen it gets archived, so we don't have to save up to 16MB of zeros.\n\nAlso, if archiving is turned off, then we can't recycle or delete any\nlogs for the duration of the backup, and we have to save them.\n\nSo I'll finish the XLOG support for this, and then think about the\ncorrect way to walk through all the files.\n \n-- \nJ. R. 
Nield\njrnield@usol.com\n\n\n\n", "msg_date": "17 Jul 2002 19:01:39 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": true, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "J. R. Nield wrote:\n> I will do something like this, but reserve 16 bytes for it just in case\n> we change our minds. It needs to be different among systems on the same\n> machine, so there needs to be a time value and a pseudo-random part as\n> well. Also, 'hostname' will likely be the same on many machines\n> (localhost.localdomain or similar).\n> \n> The only reason I bothered with UUID's before is because they have a\n> standard setup to make the possibility of collision extremely small, and\n> I figured replication will end up using it someday.\n\nSure. Problem is, we support so many platforms that any trickery is a\nproblem. If they can change it in postgresql.conf, that should be\nsufficient.\n\n> Hot backup read/write requires that we force an advance in the logfile\n> segment after the backup. We need to save all the logs between backup\n> start and completion. Otherwise the files will be useless as a\n> standalone system if the current logs somehow get destroyed (fire in the\n> machine room, etc.).\n> \n> The way I would do this is:\n> \n> create a checkpoint\n> do the block-by-block walk of the files using the bufmgr\n> create a second checkpoint\n> force the log to advance past the end of the current segment\n> save the log segments containing records between the\n> first & second checkpont with the backup\n\nSounds good.\n\n> Then if you restore the backup, you can recover to the point of the\n> second checkpoint, even if the logs since then are all gone.\n\nGood, you put the logs that happened during the backup inside the same\nbackup, make it consistent. Makes sense.\n\n> Right now the log segment size is fixed, so this means that we'd waste\n> 8MB of log space on average to do a backup. 
Also, the way XLOG reads\n> records right now, we have to write placeholder records into the empty\n> space, because that's how it finds the end of the log stream. So I need\n> to change XLOG to handle \"skip records\", and then to truncate the file\n> when it gets archived, so we don't have to save up to 16MB of zeros.\n> \n> Also, if archiving is turned off, then we can't recycle or delete any\n> logs for the duration of the backup, and we have to save them.\n> \n> So I'll finish the XLOG support for this, and then think about the\n> correct way to walk through all the files.\n\nSounds like a good plan.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Jul 2002 20:27:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" } ]
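The hot-backup sequence J. R. Nield lays out at the end of the thread above (checkpoint, block-by-block copy, second checkpoint, forced log-segment switch, then bundling the in-between segments with the backup) can be sketched as a toy model. This is a hypothetical illustration, not PostgreSQL source: `ToyWal`, `SEGMENT_SIZE`, and `hot_backup` are invented names, LSNs are simplified to record counters, and real WAL segments are 16MB files rather than 16 records.

```python
# Toy model (hypothetical, not PostgreSQL code) of the proposed hot-backup
# sequence: checkpoint, copy data blocks, checkpoint again, force the log
# past the current segment boundary, and record which segments must be
# bundled with the backup to make it self-contained.

SEGMENT_SIZE = 16  # records per segment; a stand-in for the real 16MB files

class ToyWal:
    def __init__(self):
        self.next_lsn = 0  # position of the next record in the log stream

    def append(self, payload):
        # Append one record and return its LSN (payload ignored in the toy).
        lsn = self.next_lsn
        self.next_lsn += 1
        return lsn

    def segment_of(self, lsn):
        return lsn // SEGMENT_SIZE

    def advance_to_new_segment(self):
        # Emulates the "skip record" idea: jump to the next segment
        # boundary so the backup's log bundle ends on a whole segment.
        self.next_lsn = (self.segment_of(self.next_lsn) + 1) * SEGMENT_SIZE

def hot_backup(wal, data_blocks):
    start_ckpt = wal.append("CHECKPOINT")   # step 1: first checkpoint
    copied = list(data_blocks)              # step 2: block-by-block walk
    end_ckpt = wal.append("CHECKPOINT")     # step 3: second checkpoint
    wal.advance_to_new_segment()            # step 4: force segment switch
    # step 5: every segment holding records between the two checkpoints
    needed = set(range(wal.segment_of(start_ckpt),
                       wal.segment_of(end_ckpt) + 1))
    return copied, needed
```

Restoring the copied blocks plus the `needed` segments is then enough to reach the second checkpoint even if all later logs are destroyed, which is the standalone-recovery property the thread is after.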
[ { "msg_contents": "> Christopher, if you are still having trouble adding the isdropped system\n> column, please let me know.\n\nThanks Bruce, but I think I've got it sorted now. One weird thing is that\nalthough I added it as being false in pg_attribute.h, I get these tuples\nhaving attisdropped set to true by initdb.\n\nAre these from the toasting process and maybe the stats or something??\n\nChris\n\n attrelid | attname\n----------+-------------------\n 16464 | chunk_id\n 16464 | chunk_seq\n 16464 | chunk_data\n 16466 | chunk_id\n 16466 | chunk_seq\n 16467 | chunk_id\n 16467 | chunk_seq\n 16467 | chunk_data\n 16469 | chunk_id\n 16469 | chunk_seq\n 16470 | chunk_id\n 16470 | chunk_seq\n 16470 | chunk_data\n 16472 | chunk_id\n 16472 | chunk_seq\n 16473 | chunk_id\n 16473 | chunk_seq\n 16473 | chunk_data\n 16475 | chunk_id\n 16475 | chunk_seq\n 16476 | chunk_id\n 16476 | chunk_seq\n 16476 | chunk_data\n 16478 | chunk_id\n 16478 | chunk_seq\n 16479 | chunk_id\n 16479 | chunk_seq\n 16479 | chunk_data\n 16481 | chunk_id\n 16481 | chunk_seq\n 16482 | chunk_id\n 16482 | chunk_seq\n 16482 | chunk_data\n 16484 | chunk_id\n 16484 | chunk_seq\n 16485 | chunk_id\n 16485 | chunk_seq\n 16485 | chunk_data\n 16487 | chunk_id\n 16487 | chunk_seq\n 16488 | chunk_id\n 16488 | chunk_seq\n 16488 | chunk_data\n 16490 | chunk_id\n 16490 | chunk_seq\n 16491 | usecreatedb\n 16491 | usesuper\n 16491 | passwd\n 16491 | valuntil\n 16491 | useconfig\n 16494 | schemaname\n 16494 | tablename\n 16494 | rulename\n 16494 | definition\n 16498 | schemaname\n 16498 | viewname\n 16498 | viewowner\n 16498 | definition\n 16501 | tablename\n 16501 | tableowner\n 16501 | hasindexes\n 16501 | hasrules\n 16501 | hastriggers\n 16504 | tablename\n 16504 | indexname\n 16504 | indexdef\n 16507 | tablename\n 16507 | attname\n 16507 | null_frac\n 16507 | avg_width\n 16507 | n_distinct\n 16507 | most_common_vals\n 16507 | most_common_freqs\n 16507 | histogram_bounds\n 16507 | correlation\n 16511 | relid\n 16511 
| relname\n 16511 | seq_scan\n 16511 | seq_tup_read\n 16511 | idx_scan\n 16511 | idx_tup_fetch\n 16511 | n_tup_ins\n 16511 | n_tup_upd\n 16511 | n_tup_del\n 16514 | relid\n 16514 | relname\n 16514 | heap_blks_read\n 16514 | heap_blks_hit\n 16514 | idx_blks_read\n 16514 | idx_blks_hit\n 16514 | toast_blks_read\n 16514 | toast_blks_hit\n 16514 | tidx_blks_read\n 16514 | tidx_blks_hit\n 16518 | relid\n 16518 | indexrelid\n 16518 | relname\n 16518 | indexrelname\n 16518 | idx_scan\n 16518 | idx_tup_read\n 16518 | idx_tup_fetch\n 16521 | relid\n 16521 | indexrelid\n 16521 | relname\n 16521 | indexrelname\n 16521 | idx_blks_read\n 16521 | idx_blks_hit\n 16524 | relid\n 16524 | relname\n 16524 | blks_read\n 16524 | blks_hit\n 16527 | datid\n 16527 | datname\n 16527 | procpid\n 16527 | usesysid\n 16527 | usename\n 16527 | current_query\n 16530 | datid\n 16530 | datname\n 16530 | numbackends\n 16530 | xact_commit\n 16530 | xact_rollback\n 16530 | blks_read\n 16530 | blks_hit\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 09:42:02 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "\nThe problem is that the new column is now part of pg_attribute so every\ncatalog/pg_attribute.h DATA() line has to be updated. Did you update\nthem all with 'false' in the right slot? Not sure what the chunks are.\n\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> > Christopher, if you are still having trouble adding the isdropped system\n> > column, please let me know.\n> \n> Thanks Bruce, but I think I've got it sorted now. 
One weird thing is that\n> although I added it as being false in pg_attribute.h, I get these tuples\n> having attisdropped set to true by initdb.\n> \n> Are these from the toasting process and maybe the stats or something??\n> \n> Chris\n> \n> attrelid | attname\n> ----------+-------------------\n> 16464 | chunk_id\n> 16464 | chunk_seq\n> 16464 | chunk_data\n> 16466 | chunk_id\n> 16466 | chunk_seq\n> 16467 | chunk_id\n> 16467 | chunk_seq\n> 16467 | chunk_data\n> 16469 | chunk_id\n> 16469 | chunk_seq\n> 16470 | chunk_id\n> 16470 | chunk_seq\n> 16470 | chunk_data\n> 16472 | chunk_id\n> 16472 | chunk_seq\n> 16473 | chunk_id\n> 16473 | chunk_seq\n> 16473 | chunk_data\n> 16475 | chunk_id\n> 16475 | chunk_seq\n> 16476 | chunk_id\n> 16476 | chunk_seq\n> 16476 | chunk_data\n> 16478 | chunk_id\n> 16478 | chunk_seq\n> 16479 | chunk_id\n> 16479 | chunk_seq\n> 16479 | chunk_data\n> 16481 | chunk_id\n> 16481 | chunk_seq\n> 16482 | chunk_id\n> 16482 | chunk_seq\n> 16482 | chunk_data\n> 16484 | chunk_id\n> 16484 | chunk_seq\n> 16485 | chunk_id\n> 16485 | chunk_seq\n> 16485 | chunk_data\n> 16487 | chunk_id\n> 16487 | chunk_seq\n> 16488 | chunk_id\n> 16488 | chunk_seq\n> 16488 | chunk_data\n> 16490 | chunk_id\n> 16490 | chunk_seq\n> 16491 | usecreatedb\n> 16491 | usesuper\n> 16491 | passwd\n> 16491 | valuntil\n> 16491 | useconfig\n> 16494 | schemaname\n> 16494 | tablename\n> 16494 | rulename\n> 16494 | definition\n> 16498 | schemaname\n> 16498 | viewname\n> 16498 | viewowner\n> 16498 | definition\n> 16501 | tablename\n> 16501 | tableowner\n> 16501 | hasindexes\n> 16501 | hasrules\n> 16501 | hastriggers\n> 16504 | tablename\n> 16504 | indexname\n> 16504 | indexdef\n> 16507 | tablename\n> 16507 | attname\n> 16507 | null_frac\n> 16507 | avg_width\n> 16507 | n_distinct\n> 16507 | most_common_vals\n> 16507 | most_common_freqs\n> 16507 | histogram_bounds\n> 16507 | correlation\n> 16511 | relid\n> 16511 | relname\n> 16511 | seq_scan\n> 16511 | seq_tup_read\n> 16511 | idx_scan\n> 16511 | 
idx_tup_fetch\n> 16511 | n_tup_ins\n> 16511 | n_tup_upd\n> 16511 | n_tup_del\n> 16514 | relid\n> 16514 | relname\n> 16514 | heap_blks_read\n> 16514 | heap_blks_hit\n> 16514 | idx_blks_read\n> 16514 | idx_blks_hit\n> 16514 | toast_blks_read\n> 16514 | toast_blks_hit\n> 16514 | tidx_blks_read\n> 16514 | tidx_blks_hit\n> 16518 | relid\n> 16518 | indexrelid\n> 16518 | relname\n> 16518 | indexrelname\n> 16518 | idx_scan\n> 16518 | idx_tup_read\n> 16518 | idx_tup_fetch\n> 16521 | relid\n> 16521 | indexrelid\n> 16521 | relname\n> 16521 | indexrelname\n> 16521 | idx_blks_read\n> 16521 | idx_blks_hit\n> 16524 | relid\n> 16524 | relname\n> 16524 | blks_read\n> 16524 | blks_hit\n> 16527 | datid\n> 16527 | datname\n> 16527 | procpid\n> 16527 | usesysid\n> 16527 | usename\n> 16527 | current_query\n> 16530 | datid\n> 16530 | datname\n> 16530 | numbackends\n> 16530 | xact_commit\n> 16530 | xact_rollback\n> 16530 | blks_read\n> 16530 | blks_hit\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Thu, 4 Jul 2002 21:50:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Thanks Bruce, but I think I've got it sorted now. One weird thing is that\n> although I added it as being false in pg_attribute.h, I get these tuples\n> having attisdropped set to true by initdb.\n\nIt sounds to me like you've failed to make sure that the field is\ninitialized properly when a pg_attribute row is dynamically created.\nLet's see... 
did you fix the static FormData_pg_attribute rows near\nthe top of heap.c? Does TupleDescInitEntry() know about initializing\nthe field? (I wonder why it doesn't memset() the whole row to zero\nanyway...)\n\npg_attribute is very possibly the most ticklish system catalog\nto add a column to. I'd suggest looking through every single use of\nsome other pg_attribute column, perhaps attstattarget or attnotnull,\nto make sure you're initializing attisdropped everywhere it should be.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2002 22:07:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN " }, { "msg_contents": "> The problem is that the new column is now part of pg_attribute so every\n> catalog/pg_attribute.h DATA() line has to be updated. Did you update\n> them all with 'false' in the right slot? Not sure what the chunks are.\n\nYep - I did that, I think the problem's more subtle.\n\nChris\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 10:10:37 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: BETWEEN Node & DROP COLUMN" }, { "msg_contents": "> It sounds to me like you've failed to make sure that the field is\n> initialized properly when a pg_attribute row is dynamically created.\n> Let's see... did you fix the static FormData_pg_attribute rows near\n> the top of heap.c? Does TupleDescInitEntry() know about initializing\n> the field? (I wonder why it doesn't memset() the whole row to zero\n> anyway...)\n\nOK I'll look at them.\n\n> pg_attribute is very possibly the most ticklish system catalog\n> to add a column to. 
I'd suggest looking through every single use of\n> some other pg_attribute column, perhaps attstattarget or attnotnull,\n> to make sure you're initializing attisdropped everywhere it should be.\n\nOK, I'm on the case.\n\nChris\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 10:18:45 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: BETWEEN Node & DROP COLUMN " }, { "msg_contents": "> > pg_attribute is very possibly the most ticklish system catalog\n> > to add a column to. I'd suggest looking through every single use of\n> > some other pg_attribute column, perhaps attstattarget or attnotnull,\n> > to make sure you're initializing attisdropped everywhere it should be.\n\nDone.\n\nWow - I've almost finished it now, actually! It's at the stage where\neverything works as expected, all the initdb attributes are properly marked\nnot dropped, the drop column command works fine and psql works fine. All\nregression tests also pass. '*' expansion works properly.\n\nI have a lot of testing to go, however. I will make up regression tests as\nwell.\n\nSome questions:\n\n1. I'm going to prevent people from dropping the last column in their table.\nI think this is the safest option. How do I check if there's any other non\ndropped columns in a table? Reference code anywhere?\n\n2. What should I do about inheritance? I'm going to implement it, but are\nthere issues? It will basically drop the column with the same name in all\nchild tables. Is that correct behaviour?\n\n3. I am going to initially implement the patch to ignore the behaviour and\ndo no dependency checking. 
I will assume that Rod's patch will handle that\nwithout much trouble.\n\nThanks,\n\nChris\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 14:28:04 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: BETWEEN Node & DROP COLUMN " }, { "msg_contents": "Christopher Kings-Lynne dijo: \n\nI have a question:\n\n> 2. What should I do about inheritance? I'm going to implement it, but are\n> there issues? It will basically drop the column with the same name in all\n> child tables. Is that correct behaviour?\n\nWhat happens if I drop an inherited column in a child table? Maybe it\nworks, but what happens when I SELECT the column in the parent table?\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Granting software the freedom to evolve guarantees only different results,\nnot better ones.\" (Zygo Blaxell)\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 02:42:25 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN " }, { "msg_contents": "> > 2. What should I do about inheritance? I'm going to implement \n> it, but are\n> > there issues? It will basically drop the column with the same \n> name in all\n> > child tables. Is that correct behaviour?\n> \n> What happens if I drop an inherited column in a child table? Maybe it\n> works, but what happens when I SELECT the column in the parent table?\n\nI don't know? Tom?\n\nWell, what happens if you rename a column in a child table? Same problem?\n\nChris\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 14:45:58 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: BETWEEN Node & DROP COLUMN " }, { "msg_contents": "Christopher Kings-Lynne dijo: \n\n> > > 2. What should I do about inheritance? I'm going to implement \n> > it, but are\n> > > there issues? It will basically drop the column with the same \n> > name in all\n> > > child tables. 
Is that correct behaviour?\n> > \n> > What happens if I drop an inherited column in a child table? Maybe it\n> > works, but what happens when I SELECT the column in the parent table?\n> \n> I don't know? Tom?\n> \n> Well, what happens if you rename a column in a child table? Same problem?\n\nIt merrily renames the column in the child table (I tried it). When\nSELECTing the parent, bogus data appears. Looks like a bug to me.\nMaybe the ALTER TABLE ... RENAME COLUMN code should check for inherited\ncolumns before renaming them.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\nOne man's impedance mismatch is another man's layer of abstraction.\n(Lincoln Yeoh)\n\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 02:57:44 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN " }, { "msg_contents": "> > Well, what happens if you rename a column in a child table? \n> Same problem?\n> \n> It merrily renames the column in the child table (I tried it). When\n> SELECTing the parent, bogus data appears. Looks like a bug to me.\n> Maybe the ALTER TABLE ... RENAME COLUMN code should check for inherited\n> columns before renaming them.\n\nHmmm...so how does one check if one is a child in an inheritance hierarchy?\n\nChris\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 15:01:26 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: BETWEEN Node & DROP COLUMN " }, { "msg_contents": "> > It merrily renames the column in the child table (I tried it). When\n> > SELECTing the parent, bogus data appears. Looks like a bug to me.\n> > Maybe the ALTER TABLE ... 
RENAME COLUMN code should check for inherited\n> > columns before renaming them.\n>\n> Hmmm...so how does one check if one is a child in an inheritance\n> hierarchy?\n\nActually, more specifically, how does one check that the column being\ndropped or renamed appears in none of one's parent tables?\n\nI notice there's no find_all_ancestors() function...\n\nChris\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 16:13:46 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: BETWEEN Node & DROP COLUMN " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n>> What happens if I drop an inherited column in a child table? Maybe it\n>> works, but what happens when I SELECT the column in the parent table?\n\n> Well, what happens if you rename a column in a child table? Same problem?\n\nIdeally we should disallow both of those, as well as cases like\nchanging the column type.\n\nIt might be that we can use the pg_depend stuff to enforce this (by\nsetting up dependency links from child to parent). However that would\nintroduce a ton of overhead in a regular DROP TABLE, and you'd still\nneed specialized code to prevent the RENAME case (pg_depend wouldn't\ncare about that). I'm thinking that it's worth adding an attisinherited\ncolumn to pg_attribute to make these rules easy to enforce.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Jul 2002 10:14:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> 1. I'm going to prevent people from dropping the last column in their table.\n> I think this is the safest option. How do I check if there's any other non\n> dropped columns in a table? 
Reference code anywhere?\n\nYou look through the Relation's tupledesc and make sure there's at least\none other non-dropped column.\n\n> 2. What should I do about inheritance? I'm going to implement it, but are\n> there issues? It will basically drop the column with the same name in all\n> child tables. Is that correct behaviour?\n\nYes, if the 'inh' flag is set.\n\nIf 'inh' is not set, then the right thing would be to drop the parent's\ncolumn and mark all the *first level* children's columns as\nnot-inherited. How painful that would be depends on what representation\nwe choose for marking inherited columns, if any.\n\n> 3. I am going to initially implement the patch to ignore the behaviour and\n> do no dependency checking. I will assume that Rod's patch will handle that\n> without much trouble.\n\nYeah, Rod was looking ahead to DROP COLUMN. I'm still working on his\npatch (mostly the pg_constraint side) but should have it soon.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Jul 2002 10:34:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN Node & DROP COLUMN " } ]
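Tom's answer above to question 1 — "look through the Relation's tupledesc and make sure there's at least one other non-dropped column" — can be sketched as the loop below. The struct is a simplified stand-in for `Form_pg_attribute` carrying only the `attisdropped` flag, and `has_other_live_column` is an illustrative name, not an existing backend routine; real code would walk `rel->rd_att->attrs` instead.

```c
#include <stdbool.h>

/* Simplified stand-in for Form_pg_attribute: only the dropped flag. */
typedef struct
{
    bool attisdropped;
} AttrStub;

/*
 * Return true iff some column other than the one being dropped
 * (0-based index 'dropping') is still live.  DROP COLUMN would
 * refuse to proceed when this returns false, so a table always
 * keeps at least one undropped column.
 */
static bool
has_other_live_column(const AttrStub *attrs, int natts, int dropping)
{
    int i;

    for (i = 0; i < natts; i++)
    {
        if (i != dropping && !attrs[i].attisdropped)
            return true;
    }
    return false;
}
```

Dropping column 0 of a three-column table whose only other live column is column 2 is allowed; dropping column 0 when every other column is already marked dropped is refused.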
[ { "msg_contents": "Hackers:\n\nI've modified commands/cluster.c so that it recreates the indexes on the\ntable after clustering the table. I attach the patch.\n\nThere are (of course) things I don't understand. For example, whether\n(or when) I should use CommandCounterIncrement() after each\nindex_create, or if I should call setRelhasindex() only once (and not\nonce per index); or whether I need to acquire some lock on the indexes.\n\nI tested it with one table and several indexes. Truth is I don't know\nhow to test for concurrency, or if it's worth the trouble.\n\nThe purpose of this experiment (and, I hope, more to follow) is to\nfamiliarize myself with the guts of PostgreSQL, so I can work on my CS\nthesis with it. If you can point me my misconceptions I'd be happy to\ntry again (and again, and...)\n\nThank you.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"La Primavera ha venido. Nadie sabe como ha sido\" (A. Machado)", "msg_date": "Thu, 4 Jul 2002 22:54:49 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": true, "msg_subject": "CLUSTER not lose indexes" }, { "msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> There are (of course) things I don't understand. For example, whether\n> (or when) I should use CommandCounterIncrement() after each\n> index_create, or if I should call setRelhasindex() only once (and not\n> once per index); or whether I need to acquire some lock on the indexes.\n\nI think you probably want a CommandCounterIncrement at the bottom of the\nloop (after setRelhasindex). 
If it works as-is it's just by chance,\nie due to internal CCI calls in index_create.\n\nLocking newly-created indexes is not really necessary, since no one else\ncan see them until you commit anyhow.\n\n+ \t\ttuple = SearchSysCache(RELOID, ObjectIdGetDatum(attrs->indexOID),\n+ \t\t\t\t0, 0, 0);\n+ \t\tif (!HeapTupleIsValid(tuple))\n+ \t\t\t\tbreak;\n\nBreaking out of the loop hardly seems an appropriate response to this\nfailure condition. Not finding the index' pg_class entry is definitely\nan error.\n\nI'd also suggest more-liberal commenting, as well as more attention to\nupdating the existing comments to match new reality.\n\n\nIn general, I'm not thrilled about expending more code on the existing\nfundamentally-broken implementation of CLUSTER. We need to look at\nmaking use of the ability to write a new version of a table (or index)\nunder a new relfilenode value, without changing the table's OID.\nHowever, some parts of your patch will probably still be needed when\nsomeone gets around to making that happen, so I won't object for now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2002 23:44:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CLUSTER not lose indexes " }, { "msg_contents": "Alvaro Herrera wrote:\n> Hackers:\n> \n> I've modified commands/cluster.c so that it recreates the indexes on the\n> table after clustering the table. I attach the patch.\n> \n> There are (of course) things I don't understand. For example, whether\n> (or when) I should use CommandCounterIncrement() after each\n> index_create, or if I should call setRelhasindex() only once (and not\n> once per index); or whether I need to acquire some lock on the indexes.\n> \n> I tested it with one table and several indexes. 
Truth is I don't know\n> how to test for concurrency, or if it's worth the trouble.\n> \n> The purpose of this experiment (and, I hope, more to follow) is to\n> familiarize myself with the guts of PostgreSQL, so I can work on my CS\n> thesis with it. If you can point me my misconceptions I'd be happy to\n> try again (and again, and...)\n\nI think Tom was suggesting that you may want to continue work on CLUSTER\nand make use of relfilenode. After the cluster, you can just update\npg_class.relfilenode with the new file name (random oid generated at\nbuild time) and as soon as you commit, all backends will start using the\nnew file and you can delete the old one.\n\nThe particular case we would like to improve is this:\n\n\t/* Destroy old heap (along with its index) and rename new. */\n\theap_drop_with_catalog(OIDOldHeap, allowSystemTableMods);\n\n\tCommandCounterIncrement();\n\n\trenamerel(OIDNewHeap, oldrelation->relname);\n\nIn this code, we delete the old relation, then rename the new one. It\nwould be good to have this all happen in one update of\npg_class.relfilenode; that way it is an atomic operation.\n\nSo, create a heap (in the temp namespace so it is deleted on crash),\ncopy the old heap into the new file in cluster order, and when you are\ndone, point the old pg_class relfilenode at the new clustered heap\nfilename, then point the new cluster heap pg_class at the old heap file,\nand then drop the cluster heap file; that will remove the _old_ file\n(I believe on commit) and you are ready to go.\n\nSo, you are basically creating a new heap, but at finish, the new heap's\npg_class and the old heap's file go away. I thought about doing it\nwithout creating the pg_class entry for the new heap, but the code\nreally wants to have a heap it can manipulate.\n\nSame with index rebuilding, I think. The indexes are built on the new\nheap, and the relfilenode swap works just the same.\n\nLet us know if you want to pursue that and we can give you additional\nassistance. 
I would like to see CLUSTER really polished up. More\npeople are using it now and it really needs someone to focus on it.\n\nGlad to see you working on CLUSTER. Welcome aboard.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Fri, 5 Jul 2002 01:06:38 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CLUSTER not lose indexes" }, { "msg_contents": "Bruce Momjian dijo: \n\n> Alvaro Herrera wrote:\n> > Hackers:\n> > \n> > I've modified commands/cluster.c so that it recreates the indexes on the\n> > table after clustering the table. I attach the patch.\n\n> I think Tom was suggesting that you may want to continue work on CLUSTER\n> and make use of relfilenode. After the cluster, you can just update\n> pg_class.relfilenode with the new file name (random oid generated at\n> build time) and as soon as you commit, all backends will start using the\n> new file and you can delete the old one.\n\nI'm looking at pg_class, and relfilenode is equal to oid in all cases\nAFAICS. If I create a new, \"random\" relfilenode, the equality will not\nhold. Is that OK? Also, is the new relfilenode somehow guaranteed to\nnot be assigned to another relation (pg_class tuple, I think)?\n\n\n> In this code, we delete the old relation, then rename the new one. It\n> would be good to have this all happen in one update of\n> pg_class.relfilenode; that way it is an atomic operation.\n\nOK, I'll see if I can do that.\n\n> Let us know if you want to pursue that and we can give you additional\n> assistance. I would like to see CLUSTER really polished up. More\n> people are using it now and it really needs someone to focus on it.\n\nI thought CLUSTER was meant to be replaced by automatically clustering\nindexes. Isn't that so? 
Is anyone working on that?\n\n\n> Glad to see you working on CLUSTER. Welcome aboard.\n\nThank you. I'm really happy to be able to work in Postgres.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Limitate a mirar... y algun dia veras\"\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 01:34:38 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": true, "msg_subject": "Re: CLUSTER not lose indexes" }, { "msg_contents": "Alvaro Herrera wrote:\n> Bruce Momjian dijo: \n> \n> > Alvaro Herrera wrote:\n> > > Hackers:\n> > > \n> > > I've modified commands/cluster.c so that it recreates the indexes on the\n> > > table after clustering the table. I attach the patch.\n> \n> > I think Tom was suggesting that you may want to continue work on CLUSTER\n> > and make use of relfilenode. After the cluster, you can just update\n> > pg_class.relfilenode with the new file name (random oid generated at\n> > build time) and as soon as you commit, all backends will start using the\n> > new file and you can delete the old one.\n> \n> I'm looking at pg_class, and relfilenode is equal to oid in all cases\n> AFAICS. If I create a new, \"random\" relfilenode, the equality will not\n> hold. Is that OK? Also, is the new relfilenode somehow guaranteed to\n\nYes, they are all the same only because we haven't gotten CLUSTER fixed\nyet. ;-) We knew we needed to make cases where they are not the same\nspecifically for cases like CLUSTER.\n\n> not be assigned to another relation (pg_class tuple, I think)?\n\nIt will be fine. The relfilenode will be the one assigned to the new\nheap table and will not be used by others.\n\n> \n> \n> > In this code, we delete the old relation, then rename the new one. It\n> > would be good to have this all happen in one update of\n> > pg_class.relfilenode; that way it is an atomic operation.\n> \n> OK, I'll see if I can do that.\n> \n> > Let us know if you want to pursue that and we can give you additional\n> > assistance. 
I would like to see CLUSTER really polished up. More\n> > people are using it now and it really needs someone to focus on it.\n> \n> I thought CLUSTER was meant to be replaced by automatically clustering\n> indexes. Isn't that so? Is anyone working on that?\n\nWe have no idea how to automatically cluster based on an index. Not\neven sure how the commercial guys do it.\n\n> > Glad to see you working on CLUSTER. Welcome aboard.\n> \n> Thank you. I'm really happy to be able to work in Postgres.\n\nGreat.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Fri, 5 Jul 2002 01:45:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CLUSTER not lose indexes" }, { "msg_contents": "Tom Lane dijo: \n\n> I think you probably want a CommandCounterIncrement at the bottom of the\n> loop (after setRelhasindex). If it works as-is it's just by chance,\n> ie due to internal CCI calls in index_create.\n\nDone.\n\n\n> + \t\ttuple = SearchSysCache(RELOID, ObjectIdGetDatum(attrs->indexOID),\n> + \t\t\t\t0, 0, 0);\n> + \t\tif (!HeapTupleIsValid(tuple))\n> + \t\t\t\tbreak;\n> \n> Breaking out of the loop hardly seems an appropriate response to this\n> failure condition. Not finding the index' pg_class entry is definitely\n> an error.\n\nSure. elog(ERROR) now. I'm not sure what was I thinking when I wrote\nthat.\n\n> I'd also suggest more-liberal commenting, as well as more attention to\n> updating the existing comments to match new reality.\n\nI'm afraid I cannot get too verbose no matter how hard I try. I hope\nthis one is OK.\n\n> In general, I'm not thrilled about expending more code on the existing\n> fundamentally-broken implementation of CLUSTER. 
We need to look at\n> making use of the ability to write a new version of a table (or index)\n> under a new relfilenode value, without changing the table's OID.\n> However, some parts of your patch will probably still be needed when\n> someone gets around to making that happen, so I won't object for now.\n\nWill try to do this.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\nLicensee shall have no right to use the Licensed Software\nfor productive or commercial use. (Licencia de StarOffice 6.0 beta)", "msg_date": "Fri, 5 Jul 2002 02:28:47 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": true, "msg_subject": "Re: CLUSTER not lose indexes " }, { "msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> I'm looking at pg_class, and relfilenode is equal to oid in all cases\n> AFAICS. If I create a new, \"random\" relfilenode, the equality will not\n> hold. Is that OK?\n\nThe idea is that OID is the logical table identifier and relfilenode is\nthe physical identifier (so relfilenode is what should be used in bufmgr\nand below). There are undoubtedly a few places that get this wrong, but\nwe won't be able to ferret them out until someone starts to exercise\nthe facility.\n\n> Also, is the new relfilenode somehow guaranteed to\n> not be assigned to another relation (pg_class tuple, I think)?\n\nI've been wondering about that myself. We might have to add a unique\nindex on pg_class.relfilenode to ensure this; otherwise, after OID\nwraparound there would be no guarantees.\n\n>> In this code, we delete the old relation, then rename the new one. It\n>> would be good to have this all happen in one update of\n>> pg_class.relfilenode; that way it is an atomic operation.\n\nAs long as you have not committed, it's atomic anyway because no one can\nsee your updates. 
It'd be nice to do it in one update for efficiency,\nbut don't contort the code beyond reason to achieve that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Jul 2002 10:27:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CLUSTER not lose indexes " }, { "msg_contents": "Tom Lane wrote:\n> > Also, is the new relfilenode somehow guaranteed to\n> > not be assigned to another relation (pg_class tuple, I think)?\n> \n> I've been wondering about that myself. We might have to add a unique\n> index on pg_class.relfilenode to ensure this; otherwise, after OID\n> wraparound there would be no guarantees.\n\nYep, good point.\n\n> >> In this code, we delete the old relation, then rename the new one. It\n> >> would be good to have this all happen in one update of\n> >> pg_class.relfilenode; that way it is an atomic operation.\n> \n> As long as you have not committed, it's atomic anyway because no one can\n> see your updates. It'd be nice to do it in one update for efficiency,\n> but don't contort the code beyond reason to achieve that.\n\nSorry, I meant to say that we added relfilenode for exactly this case,\nso that we have atomic file access semantics. Do we already have that\nfeature in the current code?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Fri, 5 Jul 2002 14:08:57 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CLUSTER not lose indexes" }, { "msg_contents": "Alvaro Herrera wrote:\n> > In general, I'm not thrilled about expending more code on the existing\n> > fundamentally-broken implementation of CLUSTER. 
We need to look at\n> > making use of the ability to write a new version of a table (or index)\n> > under a new relfilenode value, without changing the table's OID.\n> > However, some parts of your patch will probably still be needed when\n> > someone gets around to making that happen, so I won't object for now.\n\nOK, I remember now. In renamerel(), the new clustered file is renamed\nto the old table name. However, it has a new oid. The idea of\nrelfilenode was that we want to cluster the table and keep the same\nrelation oid. That way, other system tables that reference that OID\nwill still work.\n\nSeems like renamerel will have to stay because it is used by ALTER TABLE\nRENAME, so we just need some new code that updates the relfilenode of\nthe old pg_class row to point to the new clustered file. Swapping\nrelfilenodes between the old and new pg_class rows and deleting the new\ntable should do the trick of deleting the non-clustered file and the\ntemp pg_class row at the same time.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Fri, 5 Jul 2002 16:12:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CLUSTER not lose indexes" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Seems like renamerel will have to stay because it is used by ALTER TABLE\n> RENAME, so we just need some new code that updates the relfilenode of\n> the old pg_class row to point to the new clustered file. Swapping\n> relfilenodes between the old and new pg_class rows and deleting the new\n> table should do the trick of deleting the non-clustered file and the\n> temp pg_class row at the same time.\n\nI think you're still letting your thinking be contorted by the existing\nCLUSTER implementation. 
Do we need a temp pg_class entry at all? Seems\nlike we just want to UPDATE the pg_class row with the new relfilenode\nvalue; then we can see the update but no one else can (till we commit).\nDitto for the indexes.\n\nWhat's still a little unclear to me is how to access the old heap and\nindex files to read the data while simultaneously accessing the new ones\nto write it. Much of the existing code likes to have a Relation struct\navailable, not only a RelFileNode, so it may be necessary to have both\nold and new Relations present at the same time. If that's the case we\nmight be stuck with making a temp pg_class entry just to support a phony\nRelation :-(\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Jul 2002 11:11:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CLUSTER not lose indexes " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Seems like renamerel will have to stay because it is used by ALTER TABLE\n> > RENAME, so we just need some new code that updates the relfilenode of\n> > the old pg_class row to point to the new clustered file. Swapping\n> > relfilenodes between the old and new pg_class rows and deleting the new\n> > table should do the trick of deleting the non-clustered file and the\n> > temp pg_class row at the same time.\n> \n> I think you're still letting your thinking be contorted by the existing\n> CLUSTER implementation. Do we need a temp pg_class entry at all? Seems\n> like we just want to UPDATE the pg_class row with the new relfilenode\n> value; then we can see the update but no one else can (till we commit).\n> Ditto for the indexes.\n> \n> What's still a little unclear to me is how to access the old heap and\n> index files to read the data while simultaneously accessing the new ones\n> to write it. 
Much of the existing code likes to have a Relation struct\n> available, not only a RelFileNode, so it may be necessary to have both\n> old and new Relations present at the same time. If that's the case we\n> might be stuck with making a temp pg_class entry just to support a phony\n> Relation :-(\n\nYes, that was my conclusion, that we need the temp heap so we can access\nit in a clean manner. Sure, it would be nice if we could access a file\non its own, but it doesn't seem worth the complexity required to\naccomplish it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Sat, 6 Jul 2002 12:24:56 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CLUSTER not lose indexes" }, { "msg_contents": "Tom Lane dijo: \n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Seems like renamerel will have to stay because it is used by ALTER TABLE\n> > RENAME, so we just need some new code that updates the relfilenode of\n> > the old pg_class row to point to the new clustered file. Swapping\n> > relfilenodes between the old and new pg_class rows and deleting the new\n> > table should do the trick of deleting the non-clustered file and the\n> > temp pg_class row at the same time.\n> \n> I think you're still letting your thinking be contorted by the existing\n> CLUSTER implementation. Do we need a temp pg_class entry at all? Seems\n> like we just want to UPDATE the pg_class row with the new relfilenode\n> value; then we can see the update but no one else can (till we commit).\n> Ditto for the indexes.\n\nThat's what I originally thought: mess around directly with the smgr (or\nwith some upper layer? I don't know) to create a new relfilenode, and\nthen attach it to the heap. 
I don't know if it's possible or too\ndifficult.\n\nThen, with Bruce's explanation, I thought I should just create a temp\ntable and exchange relfilenodes, which is much simpler.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"The day you stop changing is the day you stop living\"\n\n\n\n", "msg_date": "Sat, 6 Jul 2002 18:40:19 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": true, "msg_subject": "Re: CLUSTER not lose indexes " }, { "msg_contents": "Alvaro Herrera wrote:\n> > I think you're still letting your thinking be contorted by the existing\n> > CLUSTER implementation. Do we need a temp pg_class entry at all? Seems\n> > like we just want to UPDATE the pg_class row with the new relfilenode\n> > value; then we can see the update but no one else can (till we commit).\n> > Ditto for the indexes.\n> \n> That's what I originally thought: mess around directly with the smgr (or\n> with some upper layer? I don't know) to create a new relfilenode, and\n> then attach it to the heap. I don't know if it's possible or too\n> difficult.\n> \n> Then, with Bruce's explanation, I thought I should just create a temp\n> table and exchange relfilenodes, which is much simpler.\n\nYes, creating the file is easy. Inserting into a file without a heap\nrelation pointer is a pain. ;-)\n\nActually, I did a little research on the crash/abort handling of\nswapping the relfilenodes between the old and clustered pg_class heap\nentries. An abort/crash can happen in several ways: abort of the\ntransaction, backend crash during the transaction, or cleaning up of old\nfile at commit.\n\nFirst, when you create a table, the relfilenode is _copied_ to\npendingDeletes. These files are cleaned up if the transaction aborts in\nsmgrDoPendingDeletes(). There will also be an entry in there for the\nsame table after you call heap_delete. As long as the relfilenode at\nheap_delete time points to the original heap file, you should be fine. 
\nIf it commits, the new file is removed (remember that oid was saved). \nIf it aborts, the new file is removed (CLUSTER failed).\n\nAs for a crash, there is RemoveTempRelations which calls\nFindTempRelations() but I think that only gets populated after the\ncommit happens and the temp namespace is populated. I assume the\naborted part of the transaction is invisible to the backend at that\npoint (crash time). Of course, it wouldn't hurt to test these cases to\nmake sure it works right, and I would document that the copying of\nrelfilenode in smgr create/unlink is a desired effect relied upon by\ncluster and any other commands that may use relfilenode renaming in the\nfuture. (If it wasn't copied, then an abort would delete the _old_\nheap. Bad.)\n\nFYI, REINDEX also deals with relfilenode renaming in setNewRelfilenode().\nThe difference is that REINDEX doesn't need to access the old index, just\nbuild a new one, so it can take shortcuts in how it handles things. It\nuses two methods to modify the tuple, one directly modifying pg_class\nrather than inserting a new heap row, and another of doing it the\ntraditional way. It does this because REINDEX is used to fix corrupt\nindexes, including corrupt system indexes. You will not be using that\ntype of code in CLUSTER because there is a real temp heap associated\nwith this operation. Just heap_update() like normal for both relations.\n\nHope this helps.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Sun, 7 Jul 2002 19:29:30 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CLUSTER not lose indexes" }, { "msg_contents": "Bruce Momjian dijo: \n\n> FYI, REINDEX also deals with relfilenode renaming in setNewRelfilenode().\n> The difference is that REINDEX doesn't need to access the old index, just\n> build a new one, so it can take shortcuts in how it handles things. It\n> uses two methods to modify the tuple, one directly modifying pg_class\n> rather than inserting a new heap row, and another of doing it the\n> traditional way. It does this because REINDEX is used to fix corrupt\n> indexes, including corrupt system indexes. You will not be using that\n> type of code in CLUSTER because there is a real temp heap associated\n> with this operation. Just heap_update() like normal for both relations.\n\nWell, I think my approach is somewhat more naive. What I'm actually\ndoing is something like:\n\n1. save information on extant indexes\n2. create a new heap, and populate (clustered)\n3. swap the relfilenodes of the new and old heaps\n4. drop new heap (with old relfilenode)\n5. for each index saved in 1:\n5.1. create a new index with the same attributes\n5.2. swap relfilenodes of original and new index\n5.3. drop new index (with old relfilenode)\n\nBut now I'm lost. It has worked sometimes; then I change a minimal\nthing, recompile and then it doesn't work (bufmgr fails an assertion).\nI can tell that the new (table) filenode is correct if I skip step 4\nabove, and check the temp table manually (but of course it has no\nindexes). I've never gotten as far as completing all steps right, so I\ncannot tell whether the new indexes' filenodes are correct.\n\nI'm posting the new patch. Please review it; I've turned it upside down\na few times and frankly don't know what's happening. 
I sure am\nforgetting a lock or something but cannot find what it is.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Learning without thinking is useless; thinking without learning, dangerous\" (Confucius)", "msg_date": "Wed, 10 Jul 2002 22:41:48 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": true, "msg_subject": "Re: CLUSTER not lose indexes" }, { "msg_contents": "Tom Lane writes:\n\n> > Also, is the new relfilenode somehow guaranteed to\n> > not be assigned to another relation (pg_class tuple, I think)?\n>\n> I've been wondering about that myself. We might have to add a unique\n> index on pg_class.relfilenode to ensure this; otherwise, after OID\n> wraparound there would be no guarantees.\n\nI've never been happy with the current setup. It's much too tempting to\nthink file name = OID, both in the backend code and by external onlookers,\nespecially since it's currently rare/impossible(?) for them to be\ndifferent.\n\nIt would be a lot clearer if relfilenode were replaced by an integer\nversion, starting at 0, and the heap files were named \"OID_VERSION\".\n\n(In related news, how about filling up the oid/relfilenode numbers with\nzeros on the left, so a directory listing would reflect the numerical\norder?)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 16 Jul 2002 00:54:09 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] CLUSTER not lose indexes " }, { "msg_contents": "Peter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > > Also, is the new relfilenode somehow guaranteed to\n> > > not be assigned to another relation (pg_class tuple, I think)?\n> >\n> > I've been wondering about that myself. We might have to add a unique\n> > index on pg_class.relfilenode to ensure this; otherwise, after OID\n> > wraparound there would be no guarantees.\n> \n> I've never been happy with the current setup. 
It's much too tempting to\n> think file name = OID, both in the backend code and by external onlookers,\n> especially since it's currently rare/impossible(?) for them to be\n> different.\n\nYea, only reindex and cluster change them. Problem is we already have\noid as a nice unique number ready for use. I don't see a huge advantage\nof improving it.\n\n> It would be a lot clearer if relfilenode were replaced by an integer\n> version, starting at 0, and the heap files were named \"OID_VERSION\".\n\nProblem there is that we can't have relfilenode as an int unless we take\nthe table oid and sequence number and merge them on the fly in the\nbackend. Would be nice for admins, though, so the oid would be there. \nI thought WAL liked the relfilenode as a single number.\n\n> (In related news, how about filling up the oid/relfilenode numbers with\n> zeros on the left, so a directory listing would reflect the numerical\n> order?)\n\nProblem there is that we increase the size of much of the directory\nlookups. Not sure if it is worth worrying about.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Jul 2002 18:58:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] CLUSTER not lose indexes" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> It would be a lot clearer if relfilenode were replaced by an integer\n> version, starting at 0, and the heap files were named \"OID_VERSION\".\n\nThe reason to not do that is that the bufmgr and levels below would now\nneed to pass around three numbers, not two, to identify physical files.\n\nWe already beat this topic to death a year ago, I'm not eager to re-open\nit.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Jul 2002 00:01:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] CLUSTER not lose indexes " }, { "msg_contents": "On Mon, 15 Jul 2002, Bruce Momjian wrote:\n\n> > (In related news, how about filling up the oid/relfilenode numbers with\n> > zeros on the left, so a directory listing would reflect the numerical\n> > order?)\n>\n> Problem there is that we increase the size of much of the directory\n> lookups. Not sure if it is worth worrying about.\n\nProbably not such a big deal, since most systems will be reading\na full block (8K or 16K under *BSD) to load the directory information\nanyway. Doing it in hex would give you only 8-char filenames, anyway.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n\n", "msg_date": "Tue, 16 Jul 2002 17:59:46 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] CLUSTER not lose indexes" }, { "msg_contents": "Curt Sampson wrote:\n> On Mon, 15 Jul 2002, Bruce Momjian wrote:\n> \n> > > (In related news, how about filling up the oid/relfilenode numbers with\n> > > zeros on the left, so a directory listing would reflect the numerical\n> > > order?)\n> >\n> > Problem there is that we increase the size of much of the directory\n> > lookups. Not sure if it is worth worrying about.\n> \n> Probably not such a big deal, since most systems will be reading\n> a full block (8K or 16K under *BSD) to load the directory information\n> anyway. Doing it in hex would give you only 8-char filenames, anyway.\n\nYes, hex may be interesting as a more compact, consistent format. We\nneed to change the docs so oid2name and queries convert to hex on\noutput.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Jul 2002 11:41:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] CLUSTER not lose indexes" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> (In related news, how about filling up the oid/relfilenode numbers with\n>> zeros on the left, so a directory listing would reflect the numerical\n>> order?)\n\n> Yes, hex may be interesting as a more compact, consistent format. We\n> need to change the docs so oid2name and queries convert to hex on\n> output.\n\nI don't really see the value-added here. 
If we had made this decision\nbefore releasing 7.1, I'd not have objected; but at this point we're\ntalking about breaking oid2name and any similar scripts that people\nmay have developed, for what's really a *very* marginal gain. Who cares\nwhether a directory listing reflects numerical order?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Jul 2002 11:57:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] CLUSTER not lose indexes " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> (In related news, how about filling up the oid/relfilenode numbers with\n> >> zeros on the left, so a directory listing would reflect the numerical\n> >> order?)\n> \n> > Yes, hex may be interesting as a more compact, consistent format. We\n> > need to change the docs so oid2name and queries convert to hex on\n> > output.\n> \n> I don't really see the value-added here. If we had made this decision\n> before releasing 7.1, I'd not have objected; but at this point we're\n> talking about breaking oid2name and any similar scripts that people\n> may have developed, for what's really a *very* marginal gain. Who cares\n> whether a directory listing reflects numerical order?\n\nI don't see the big value either, just brainstorming.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Jul 2002 11:59:07 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] CLUSTER not lose indexes" } ]
[ { "msg_contents": "Were docs ever submitted for all those new SSL changes? All that code's in\nthere and I have no idea how to set up certificate auth, etc.\n\nAnd also, remember Bill Studemund's CREATE OPERATOR CLASS patch!\n\nChris\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 11:20:31 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "SSL patch" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> Were docs ever submitted for all those new SSL changes? All that code's in\n> there and I have no idea how to set up certificate auth, etc.\n> \n> And also, remember Bill Studemund's CREATE OPERATOR CLASS patch!\n\nHere is the text version of the docs. They need to be merged into the\nSGML.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026", "msg_date": "Thu, 4 Jul 2002 23:34:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: SSL patch" } ]
[ { "msg_contents": "Here is my proposal for new CREATE CONVERSION which makes it possible\nto define a new encoding conversion mapping between two encodings on the\nfly.\n\nThe background:\n\nWe are getting more and more encoding conversion tables. Up to\nnow, they reach 385352 source lines and over 3MB in compiled forms\nin total. They are statically linked to the backend. I know this\nitself is not a problem since modern OSs have smart memory management\ncapabilities to fetch only necessary pages from a disk. However, I'm\nworried about the infinite growth of these static tables. I think\nusers won't love a 50MB PostgreSQL backend load module.\n\nThe second problem is more serious. The conversion definitions between\ncertain encodings, such as Unicode and others, are not well\ndefined. For example, there are several conversion tables for Japanese\nShift JIS and Unicode. This is because each vendor has its own\n\"special characters\" and they define the table so that the conversion\nfits their purpose.\n\nThe solution:\n\nThe proposed new CREATE CONVERSION will solve these problems. A\nparticular conversion table is statically linked to a dynamically loaded\nfunction, and CREATE CONVERSION will tell PostgreSQL that if\na conversion from encoding A to encoding B is requested, then function C\nshould be used. In this way, conversion tables are no longer statically\nlinked to the backend.\n\nUsers could also easily define their own conversion tables that would\nbest fit their purpose. 
Also needless to say, people could define\nnew conversions which PostgreSQL does not support yet.\n\nSyntax proposal:\n\nCREATE CONVERSION <conversion name>\n SOURCE <source encoding name>\n DESTINATION <destination encoding name>\n FROM <conversion function name>\n;\nDROP CONVERSION <conversion name>;\n\nExample usage:\n\nCREATE OR REPLACE FUNCTION euc_jp_to_utf8(TEXT, TEXT, INTEGER)\n RETURNS INTEGER AS 'euc_jp_to_utf8.so' LANGUAGE 'c';\nCREATE CONVERSION euc_jp_to_utf8\n SOURCE EUC_JP DESTINATION UNICODE\n FROM euc_jp_to_utf8;\n\nImplementation:\n\nImplementation would be quite straightforward. Create a new system\ntable, and CREATE CONVERSION stores the info into\nit. pg_find_encoding_converters(utils/mb/mbutils.c) and friends need\nto be modified so that they recognize dynamically defined conversions.\nAlso psql would need some capabilities to print conversion definition\ninfo.\n\nComments?\n--\nTatsuo Ishii\n\n\n", "msg_date": "Fri, 05 Jul 2002 15:36:41 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Proposal: CREATE CONVERSION" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Syntax proposal:\n> CREATE CONVERSION <conversion name>\n> SOURCE <source encoding name>\n> DESTINATION <destination encoding name>\n> FROM <conversion function name>\n\nDoesn't a conversion currently require several support functions?\nHow much overhead will you be adding to funnel them all through\none function?\n\nBasically I'd like to see a spec for the API of the conversion\nfunction...\n\nAlso, is there anything in SQL99 that we ought to try to be\ncompatible with?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Jul 2002 10:38:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION " }, { "msg_contents": "> > CREATE CONVERSION <conversion name>\n> > SOURCE <source encoding name>\n> > DESTINATION 
<conversion function name>\n> \n> Doesn't a conversion currently require several support functions?\n> How much overhead will you be adding to funnel them all through\n> one function?\n\nNo, only one function is sufficient. What else do you think of?\n\n> Basically I'd like to see a spec for the API of the conversion\n> function...\n\nThat would be very simple (the previous example I gave was unnecessarily\ncomplex). The function signature would look like:\n\nconversion_function(TEXT) RETURNS TEXT\n\nIt receives source text and converts it, then returns it. That's all.\n\n> Also, is there anything in SQL99 that we ought to try to be\n> compatible with?\n\nAs far as I know there's no such an equivalent in SQL99.\n--\nTatsuo Ishii\n\n\n", "msg_date": "Sat, 06 Jul 2002 00:33:30 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Proposal: CREATE CONVERSION " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n>> Doesn't a conversion currently require several support functions?\n>> How much overhead will you be adding to funnel them all through\n>> one function?\n\n> No, only one function is sufficient. What else do you think of?\n\nI see two different functions linked to from each pg_wchar_table\nentry... although perhaps those are associated with encodings\nnot with conversions.\n\n>> Basically I'd like to see a spec for the API of the conversion\n>> function...\n\n> That would be very simple (the previous example I gave was unnecessarily\n> complex). The function signature would look like:\n> conversion_function(TEXT) RETURNS TEXT\n> It receives source text and converts it, then returns it. That's all.\n\nIIRC the existing conversion functions deal in C string pointers and\nlengths. I'm a little worried about the extra overhead implicit\nin converting to a TEXT object and back again; that probably means at\nleast two more palloc and memcpy operations. 
I think you'd be better\noff sticking to a C-level API, because I really don't believe that\nanyone is going to code conversion functions in (say) plpgsql.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Jul 2002 11:50:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION " }, { "msg_contents": "Tatsuo Ishii wrote:\n> Here is my proposal for new CREATE CONVERSION which makes it possible\n> to define a new encoding conversion mapping between two encodings on the\n> fly.\n> \n> The background:\n> \n> We are getting more and more encoding conversion tables. Up to\n> now, they reach 385352 source lines and over 3MB in compiled forms\n> in total. They are statically linked to the backend. I know this\n> itself is not a problem since modern OSs have smart memory management\n> capabilities to fetch only necessary pages from a disk. However, I'm\n> worried about the infinite growth of these static tables. I think\n> users won't love a 50MB PostgreSQL backend load module.\n\nYes, those conversion tables are getting huge in the tarball too:\n\t\n\t$ pwd\n\t/pg/backend/utils/mb\n\t$ du\n\t4 ./CVS\n\t7 ./Unicode/CVS\n\t9541 ./Unicode\n\t15805 .\n\nLook at these two files alone:\n\n-rw-r--r-- 1 postgres wheel 1427492 Jun 13 04:28 gb18030_to_utf8.map\n-rw-r--r-- 1 postgres wheel 1427492 Jun 13 04:28 utf8_to_gb18030.map\n\nIf we can make these loadable, that would be good. What would be really\ninteresting is if we could split these out into a separate\ndirectory/project so development on those could take place in an\nindependent way. This would probably stimulate even more encoding\noptions for users.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Fri, 5 Jul 2002 14:17:57 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "> I see two different functions linked to from each pg_wchar_table\n> entry... although perhaps those are associated with encodings\n> not with conversions.\n\nYes, those are not directly associated with conversions.\n\n> IIRC the existing conversion functions deal in C string pointers and\n> lengths. I'm a little worried about the extra overhead implicit\n> in converting to a TEXT object and back again; that probably means at\n> least two more palloc and memcpy operations. I think you'd be better\n> off sticking to a C-level API, because I really don't believe that\n> anyone is going to code conversion functions in (say) plpgsql.\n\nI am worried about that too. But if we stick to a C-level API, how can we\ndefine the argument data type suitable for a C string? I don't see such\ndata types. Maybe you are suggesting that we should not use CREATE\nFUNCTION?\n--\nTatsuo Ishii\n\n\n", "msg_date": "Sat, 06 Jul 2002 09:53:32 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Proposal: CREATE CONVERSION " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> I am worried about that too. But if we stick to a C-level API, how can we\n> define the argument data type suitable for a C string? I don't see such\n> data types. Maybe you are suggesting that we should not use CREATE\n> FUNCTION?\n\nWell, you'd have to use the same cheat that's used for selectivity\nestimation functions, triggers, I/O functions and everything else that\ndeals in internal datatypes: declare the function as taking and\nreturning OPAQUE. 
This is moderately annoying but I don't think\nthere's anything really wrong with it in practice.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Jul 2002 10:19:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION " }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > I am worried about that too. But if we stick a C-level API, how can we\n> > define the argument data type suitable for C string? I don't see such\n> > data types. Maybe you are suggesting that we should not use CREATE\n> > FUNCTION?\n> \n> Well, you'd have to use the same cheat that's used for selectivity\n> estimation functions, triggers, I/O functions and everything else that\n> deals in internal datatypes: declare the function as taking and\n> returning OPAQUE. This is moderately annoying but I don't think\n> there's anything really wrong with it in practice.\n\nOh, I see.\n--\nTatsuo Ishii\n\n\n", "msg_date": "Sun, 07 Jul 2002 08:14:08 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Proposal: CREATE CONVERSION " }, { "msg_contents": "Tatsuo Ishii writes:\n\n> > Also, is there anything in SQL99 that we ought to try to be\n> > compatible with?\n>\n> As far as I know there's no such an equivalent in SQL99.\n\nSure:\n\n 11.34 <translation definition>\n\n Function\n\n Define a character translation.\n\n Format\n\n <translation definition> ::=\n CREATE TRANSLATION <translation name>\n FOR <source character set specification>\n TO <target character set specification>\n FROM <translation source>\n\n <source character set specification> ::= <character set specification>\n\n <target character set specification> ::= <character set specification>\n\n <translation source> ::=\n <existing translation name>\n | <translation routine>\n\n <existing translation name> ::= <translation name>\n\n <translation routine> ::= <specific routine designator>\n\n\nThat's 
pretty much exactly what you are describing.\n\nWhat would be really cool is if we could somehow reuse the conversion\nmodules provided by the C library and/or the iconv library. For example,\nI have 176 \"modules\" under /usr/lib/gconv. They should be useful for\nsomething.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n\n\n", "msg_date": "Sun, 7 Jul 2002 12:58:07 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION " }, { "msg_contents": "> Tatsuo Ishii writes:\n> \n> > > Also, is there anything in SQL99 that we ought to try to be\n> > > compatible with?\n> >\n> > As far as I know there's no such an equivalent in SQL99.\n> \n> Sure:\n> \n> 11.34 <translation definition>\n\nI guess you mix up SQL99's \"translate\" and \"convert\".\n\nAs far as I know, SQL99's \"translation\" is exactly a translation. e.g.\n\n rr) translation: A method of translating characters in one character\n repertoire into characters of the same or a different character\n repertoire.\n\nFor example, a certain \"translation\" might take an input of English text,\nand make an output of Japanese one (I don't know if we could\nimplement such a translation though :-).\n\nOn the other hand \"convert\" just changes the \"form-of-use\" (SQL's\nterm, actually equivalent to \"encoding\"), keeping the character\nrepertoire.\n--\nTatsuo Ishii\n\n\n", "msg_date": "Sun, 07 Jul 2002 21:30:57 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Proposal: CREATE CONVERSION " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> I guess you mix up SQL99's \"translate\" and \"convert\".\n\nNo, I believe Peter has read the spec correctly. 
Further down they have\n\n <character translation> is a function for changing each character\n of a given string according to some many-to-one or one-to-one\n mapping between two not necessarily distinct character sets.\n\nSo this is intended as a one-character-at-a-time mapping, not a language\ntranslation (which would be far beyond what anyone would expect of a\ndatabase anyway).\n\nOne thing that's really unclear to me is what's the difference between\na <character translation> and a <form-of-use conversion>, other than\nthat they didn't provide a syntax for defining new conversions.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 07 Jul 2002 11:46:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION " }, { "msg_contents": "Tom Lane writes:\n\n> One thing that's really unclear to me is what's the difference between\n> a <character translation> and a <form-of-use conversion>, other than\n> that they didn't provide a syntax for defining new conversions.\n\nThe standard has this messed up. In part 1, a form-of-use and an encoding\nare two distinct things that can be applied to a character repertoire (see\nclause 4.6.2.1), whereas in part 2 the term encoding is used in the\ndefinition of form-of-use (clause 3.1.5 r).\n\nWhen I sort it out, however, I think that what Tatsuo was describing is\nindeed a form-of-use conversion. Note that in part 2, clause 4.2.2.1, it\nsays about form-of-use conversions,\n\n It is intended,\n though not enforced by this part of ISO/IEC 9075, that S2 be\n exactly the same sequence of characters as S1, but encoded\n according some different form-of-use. 
A typical use might be to\nconvert a character string from two-octet UCS to one-octet Latin1\nor vice versa.\n\nThis seems to match what we're doing.\n\nA character translation does not make this requirement and it explicitly\ncalls out the possibility of \"many-to-one or one-to-one mapping between\ntwo not necessarily distinct character sets\". I imagine that what this is\nintended to do is to allow the user to create mappings such as ö\n-> oe (as is common in German to avoid using characters with diacritic\nmarks), or ô -> o (as one might do in French to achieve the same). In\nfact, it's a glorified sed command.\n\nSo I withdraw my earlier comment. But perhaps the syntax of the proposed\ncommand could be aligned with the CREATE TRANSLATION command.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n\n\n\n", "msg_date": "Sun, 7 Jul 2002 23:54:42 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION " }, { "msg_contents": "> So I withdraw my earlier comment. But perhaps the syntax of the proposed\n> command could be aligned with the CREATE TRANSLATION command.\n\nOk. What about this?\n\n CREATE CONVERSION <conversion name>\n FOR <encoding name>\n TO <encoding name>\n FROM <conversion routine name>\n\n DROP CONVERSION <conversion name>\n\nBTW, I wonder if we should invent new access privilege for conversion.\n--\nTatsuo Ishii\n\n\n", "msg_date": "Mon, 08 Jul 2002 10:04:47 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Proposal: CREATE CONVERSION " }, { "msg_contents": "Here is a proposal for new pg_conversion system table. 
Comments?\n\n/*-------------------------------------------------------------------------\n *\n * pg_conversion.h\n *\t definition of the system \"conversion\" relation (pg_conversion)\n *\t along with the relation's initial contents.\n *\n *\n * Portions Copyright (c) 1996-2002, PostgreSQL Global Development Group\n * Portions Copyright (c) 1994, Regents of the University of California\n *\n * $Id$\n *\n * NOTES\n *\t the genbki.sh script reads this file and generates .bki\n *\t information from the DATA() statements.\n *\n *-------------------------------------------------------------------------\n */\n#ifndef PG_CONVERSION_H\n#define PG_CONVERSION_H\n\n/* ----------------\n *\t\tpostgres.h contains the system type definitions and the\n *\t\tCATALOG(), BOOTSTRAP and DATA() sugar words so this file\n *\t\tcan be read by both genbki.sh and the C compiler.\n * ----------------\n */\n\n/* ----------------------------------------------------------------\n *\t\tpg_conversion definition.\n *\n *\t\tcpp turns this into typedef struct FormData_pg_conversion\n *\n *\tconname\t\t\t\tname of the conversion\n *\tconnamespace\t\tname space which the conversion belongs to\n *\tconowner\t\t\towner of the conversion\n *\tconforencoding\t\tFOR encoding id\n *\tcontoencoding\t\tTO encoding id\n * conproc\t\t\t\tOID of the conversion proc\n * ----------------------------------------------------------------\n */\nCATALOG(pg_conversion)\n{\n\tNameData\tconname;\n\tOid\t\t\tconnamespace;\n\tint4\t\tconowner;\n\tint4\t\tconforencoding;\n\tint4\t\tcontoencoding;\n\tOid\t\t\tconproc;\n} FormData_pg_conversion;\n\n/* ----------------\n *\t\tForm_pg_conversion corresponds to a pointer to a tuple with\n *\t\tthe format of pg_conversion relation.\n * ----------------\n */\ntypedef FormData_pg_conversion *Form_pg_conversion;\n\n/* ----------------\n *\t\tcompiler constants for pg_conversion\n * ----------------\n */\n\n#define Natts_pg_conversion\t\t\t\t6\n#define 
Anum_pg_conversion_conname\t\t1\n#define Anum_pg_conversion_connamespace\t2\n#define Anum_pg_conversion_conowner\t\t3\n#define Anum_pg_conversion_conforencoding\t\t4\n#define Anum_pg_conversion_contoencoding\t\t5\n#define Anum_pg_conversion_conproc\t\t6\n\n/* ----------------\n * initial contents of pg_conversion\n * ---------------\n */\n\n/*\n * prototypes for functions in pg_conversion.c\n */\nextern Oid\tConversionCreate(const char *conname, Oid connamespace,\n\t\t\t\t\t\t\t int32 conowner,\n\t\t\t\t\t\t\t int4 conforencoding, int4 contoencoding,\n\t\t\t\t\t\t\t Oid conproc);\n\n#endif /* PG_CONVERSION_H */\n\n\n", "msg_date": "Mon, 08 Jul 2002 17:34:28 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Proposal: CREATE CONVERSION " }, { "msg_contents": "On Sun, Jul 07, 2002 at 12:58:07PM +0200, Peter Eisentraut wrote:\n> What would be really cool is if we could somehow reuse the conversion\n> modules provided by the C library and/or the iconv library. For example,\n ^^^^^^^\n\n Very good point. Why use our own conversion routines/tables if there is a common\n library for this?\n\n The encoding API for PostgreSQL is a really cool idea.\n\n I'm unsure about having only one argument for the encoding function. What if I want\n to use one generic function for all encodings (for example as an API to\n iconv)? I think a better C interface is:\n\n encode( TEXT data, NAME from, NAME to );\n\n where from/to are encoding names. 
The other way is to use some struct\n that handles this information -- like ARGS in trigger functions.\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n\n\n", "msg_date": "Mon, 8 Jul 2002 13:07:50 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "> On Sun, Jul 07, 2002 at 12:58:07PM +0200, Peter Eisentraut wrote:\n> > What would be really cool is if we could somehow reuse the conversion\n> > modules provided by the C library and/or the iconv library. For example,\n> ^^^^^^^\n> \n> Very good point. Why use our own conversion routines/tables if there is a common\n> library for this?\n\nI'm still not sure about the details of the conversion map used by\niconv. Japanese users have enough trouble with the conversion between\nUnicode and other charsets. This is because there are many variations of\nconversion maps provided by vendors. For example, the conversion map\nused for Unicode and SJIS in PostgreSQL has been carefully designed to\nminimize problems described above. Another issue is the availability of\niconv among platforms. If we are sure that a particular iconv\nconversion routine is available on all platforms and the conversion\nresult is good enough, our conversion routine could be replaced by a new\none using iconv.\n\n> The encoding API for PostgreSQL is a really cool idea.\n> \n> I'm unsure about having only one argument for the encoding function. 
What if I want\n> to use one generic function for all encodings (for example as an API to\n> iconv)?\n\nUse a simple wrap function.\n--\nTatsuo Ishii\n\n\n", "msg_date": "Mon, 08 Jul 2002 21:59:44 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "On Mon, Jul 08, 2002 at 09:59:44PM +0900, Tatsuo Ishii wrote:\n> > On Sun, Jul 07, 2002 at 12:58:07PM +0200, Peter Eisentraut wrote:\n> > > What would be really cool is if we could somehow reuse the conversion\n> > > modules provided by the C library and/or the iconv library. For example,\n> > ^^^^^^^\n> > \n> > Very good point. Why use our own conversion routines/tables if there is a common\n> > library for this?\n> \n> I'm still not sure about the details of the conversion map used by\n> iconv. Japanese users have enough trouble with the conversion between\n> Unicode and other charsets. This is because there are many variations of\n> conversion maps provided by vendors. For example, the conversion map\n> used for Unicode and SJIS in PostgreSQL has been carefully designed to\n> minimize problems described above. Another issue is the availability of\n> iconv among platforms. If we are sure that a particular iconv\n> conversion routine is available on all platforms and the conversion\n> result is good enough, our conversion routine could be replaced by a new\n> one using iconv.\n\n This is not a problem if we will have some common API. You can use \n current conversion tables (maps) and for example I can use iconv \n on my i386/Linux.\n\n I don't want to replace the current maps if somebody needs them. I would\n just like an API.\n\n I see iconv is included in glibc now.\n\n> > I'm unsure about having only one argument for the encoding function. What if I want\n> > to use one generic function for all encodings (for example as an API to\n> > iconv)?\n> \n> Use a simple wrap function.\n\n How does this function know the to/from encodings? 
\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n\n\n", "msg_date": "Mon, 8 Jul 2002 15:33:35 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> BTW, I wonder if we should invent new access privilege for conversion.\n\nI believe the spec just demands USAGE on the underlying function for\nthe TRANSLATE case, and I don't see why it should be different for\nCONVERT. (In principle, if we didn't use a C-only API, you could\njust call the underlying function directly; so there's little point\nin having protection restrictions different from that case.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Jul 2002 09:44:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> CATALOG(pg_conversion)\n> {\n> \tNameData\tconname;\n> \tOid\t\t\tconnamespace;\n> \tint4\t\tconowner;\n> \tint4\t\tconforencoding;\n> \tint4\t\tcontoencoding;\n> \tOid\t\t\tconproc;\n> } FormData_pg_conversion;\n\nShould use type \"regproc\" for conproc, I think.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Jul 2002 09:48:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION " }, { "msg_contents": "...\n> So I withdraw my earlier comment. But perhaps the syntax of the proposed\n> command could be aligned with the CREATE TRANSLATION command.\n\nTatsuo, it seems that we should use SQL99 terminology and commands where\nappropriate. We do not yet implement the SQL99 forms of character\nsupport, and I'm not sure if our current system is modeled to fit the\nSQL99 framework. 
Are you suggesting CREATE CONVERSION to avoid\ninfringing on SQL99 syntax to allow us to use that sometime later?\n\n - Thomas\n\n\n", "msg_date": "Mon, 08 Jul 2002 08:01:25 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "Tatsuo Ishii writes:\n\n> Here is a proposal for new pg_conversion system table. Comments?\n\nI wonder if the encodings themselves shouldn't be represented in some\nsystem table, too. Admittedly, this is nearly orthogonal to the proposed\nsystem table, except perhaps the data type of the two encoding fields.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n\n\n", "msg_date": "Mon, 8 Jul 2002 20:27:46 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION " }, { "msg_contents": "Thomas Lockhart writes:\n\n> Tatsuo, it seems that we should use SQL99 terminology and commands where\n> appropriate. We do not yet implement the SQL99 forms of character\n> support, and I'm not sure if our current system is modeled to fit the\n> SQL99 framework. Are you suggesting CREATE CONVERSION to avoid\n> infringing on SQL99 syntax to allow us to use that sometime later?\n\nSQL99 says that the method by which conversions are created is\nimplementation-defined. Tatsuo is defining the implementation.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n\n\n", "msg_date": "Mon, 8 Jul 2002 20:29:56 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "> Tatsuo, it seems that we should use SQL99 terminology and commands where\n> appropriate. We do not yet implement the SQL99 forms of character\n> support, and I'm not sure if our current system is modeled to fit the\n> SQL99 framework. 
Are you suggesting CREATE CONVERSION to avoid\n> infringing on SQL99 syntax to allow us to use that sometime later?\n\nI'm not sure I understand your question, but I would say I would like\nto follow SQL99 as much as possible.\n\nWhen you say \"We do not yet implement the SQL99 forms of character\nsupport\", I think you mean the ability to specify per column (or even\nper string) charset. I don't think this would happen for 7.3 (or 8.0\nwhatever), but sometime later I would like to make it a reality.\n--\nTatsuo Ishii\n\n\n", "msg_date": "Tue, 09 Jul 2002 10:06:59 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "> > Use a simple wrap function.\n> \n> How does this function know the to/from encodings? \n\nFor example, if you want to define a function for LATIN1 to UNICODE, the\nconversion function would look like:\n\nfunction_for_LATIN1_to_UTF-8(from_string opaque, to_string opaque, length\ninteger)\n{\n\t:\n\t:\n\tgeneric_function_using_iconv(from_str, to_str, \"ISO-8859-1\", \"UTF-8\",\n\tlength);\n}\n\nCREATE FUNCTION function_for_LATIN1_to_UTF-8(opaque, opaque, integer)\nRETURNS integer;\nCREATE CONVERSION myconversion FOR 'LATIN1' TO 'UNICODE' FROM\nfunction_for_LATIN1_to_UTF-8;\n\n\n\n", "msg_date": "Tue, 09 Jul 2002 10:07:11 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "> > Here is a proposal for new pg_conversion system table. Comments?\n> \n> I wonder if the encodings themselves shouldn't be represented in some\n> system table, too. 
Admittedly, this is nearly orthogonal to the proposed\n> system table, except perhaps the data type of the two encoding fields.\n\nThat would be ideal, but I think that would happen at the same time\nwhen CREATE CHARACTER SET would be implemented.\n--\nTatsuo Ishii\n\n\n", "msg_date": "Tue, 09 Jul 2002 10:07:24 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Proposal: CREATE CONVERSION " }, { "msg_contents": "> I believe the spec just demands USAGE on the underlying function for\n> the TRANSLATE case, and I don't see why it should be different for\n> CONVERT. (In principle, if we didn't use a C-only API, you could\n> just call the underlying function directly; so there's little point\n> in having protection restrictions different from that case.)\n\nOk, so:\n\n(1) a CONVERSION can only be dropped by the superuser or its owner.\n(2) a grant syntax for CONVERSION is:\n\n GRANT USAGE ON CONVERSION <conversion_name> to\n {<user_name> | GROUP <group_name> | PUBLIC} [, ...]\n--\nTatsuo Ishii\n\n\n", "msg_date": "Tue, 09 Jul 2002 10:07:35 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Proposal: CREATE CONVERSION " }, { "msg_contents": "> When you say \"We do not yet implement the SQL99 forms of character\n> support\", I think you mean the ability to specify per column (or even\n> per string) charset. I don't think this would happen for 7.3(or 8.0\n> whatever), but sometime later I would like to make it reality.\n\nRight.\n\nAn aside: I was thinking about this some, from the PoV of using our\nexisting type system to handle this (as you might remember, this is an\ninclination I've had for quite a while). 
I think that most things line\nup fairly well to allow this (and having transaction-enabled features\nmay require it), but do notice that the SQL feature of allowing a\ndifferent character set for every column *name* does not map\nparticularly well to our underlying structures.\n\n - Thomas\n\n\n", "msg_date": "Mon, 08 Jul 2002 18:21:31 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "> An aside: I was thinking about this some, from the PoV of using our\n> existing type system to handle this (as you might remember, this is an\n> inclination I've had for quite a while). I think that most things line\n> up fairly well to allow this (and having transaction-enabled features\n> may require it), but do notice that the SQL feature of allowing a\n> different character set for every column *name* does not map\n> particularly well to our underlying structures.\n\nI've been thinking about this for a while too. What about collation? If we add\nnew charsets A and B, and each has 10 collations, then we are going to\nhave 20 new types? That seems overkill to me.\n--\nTatsuo Ishii\n\n\n", "msg_date": "Tue, 09 Jul 2002 10:47:52 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "> I've been thinking about this for a while too. What about collation? If we add\n> new charsets A and B, and each has 10 collations, then we are going to\n> have 20 new types? That seems overkill to me.\n\nWell, afaict all of the operations we would ask of a type we will be\nrequired to provide for character sets and collations. So ordering,\nconversions, operators, index methods, etc etc are all required. It\n*does* seem like a lot of work, but the type system is specifically\ndesigned to do exactly this. 
Lifting those capabilities out of the type\nsystem only to reimplement them elsewhere seems all trouble with no\nupside.\n\nPerhaps the current concept of \"binary compatible types\" could help\nreduce the complexity, if it were made extensible, which it needs\nanyway. But in most cases the character set/collation pair is a unique\ncombination, with a limited set of possibilities for other character\nset/collation pairs with equivalent forms of use, which would keep us\nfrom being able to reuse pieces anyway.\n\nFor most installations, we would install just those character sets the\ninstallation/database requires, so in practice the database size need\nnot grow much beyond what it already is. And we could have conventions\non how functions and operators are named for a character set and/or\ncollation, so we could auto-generate the SQL definitions given an\nimplementation which meets a template standard.\n\nHmm, an aside which might be relevant: I've been looking at the\n\"national character string\" syntax (you know, the N'string' convention)\nand at the binary and hex string syntax (B'101010' and X'AB1D', as\nexamples) and would like to implement them in the lexer and parser by\nhaving the string preceded with a type identifier as though they were\nsomething like\n\nNATIONAL CHARACTER 'string'\nBIN '101010'\nHEX 'AB1D'\n\nwhere both BIN and HEX result in the *same* underlying data type once\ningested (or at least a reasonable facsimile). 
I won't be allowed to\ncreate two data types with the same type OID, but maybe if I assign them\nto be binary compatible then I won't have to flesh out the hex data type\nbut only provide an input and output function.\n\n - Thomas\n\n\n", "msg_date": "Mon, 08 Jul 2002 19:38:26 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n>> I believe the spec just demands USAGE on the underlying function for\n>> the TRANSLATE case, and I don't see why it should be different for\n>> CONVERT. (In principle, if we didn't use a C-only API, you could\n>> just call the underlying function directly; so there's little point\n>> in having protection restrictions different from that case.)\n\n> Ok, so:\n\n> (1) a CONVERSION can only be dropped by the superuser or its owner.\n\nOkay ...\n\n> (2) a grant syntax for CONVERSION is:\n\n> GRANT USAGE ON CONVERSION <conversion_name> to\n> {<user_name> | GROUP <group_name> | PUBLIC} [, ...]\n\nNo, I don't think a conversion has any privileges of its own at all.\nYou either have USAGE on the underlying function, or not.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Jul 2002 23:47:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION " }, { "msg_contents": "> > I've been think this for a while too. What about collation? If we add\n> > new chaset A and B, and each has 10 collations then we are going to\n> > have 20 new types? That seems overkill to me.\n> \n> Well, afaict all of the operations we would ask of a type we will be\n> required to provide for character sets and collations. So ordering,\n> conversions, operators, index methods, etc etc are all required. It\n> *does* seem like a lot of work, but the type system is specifically\n> designed to do exactly this. 
Lifting those capabilities out of the type\n> system only to reimplement them elsewhere seems all trouble with no\n> upside.\n\nIf so, what about the \"coercibility\" property?\nThe standard defines four distinct coercibility properties. So in\nabove my example, actually you are going to define 80 new types?\n(also a collation could be either \"PAD SPACE\" or \"NO PAD\". So you\nmight have 160 new types).\n--\nTatsuo Ishii\n\n\n", "msg_date": "Tue, 09 Jul 2002 14:01:10 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "> > (1) a CONVERSION can only be dropped by the superuser or its owner.\n> \n> Okay ...\n> \n> > (2) a grant syntax for CONVERSION is:\n> \n> > GRANT USAGE ON CONVERSION <conversion_name> to\n> > {<user_name> | GROUP <group_name> | PUBLIC} [, ...]\n> \n> No, I don't think a conversion has any privileges of its own at all.\n> You either have USAGE on the underlying function, or not.\n\nI see.\n--\nTatsuo Ishii\n\n\n", "msg_date": "Tue, 09 Jul 2002 14:03:30 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Proposal: CREATE CONVERSION " }, { "msg_contents": "> If so, what about the \"coercibility\" property?\n> The standard defines four distinct coercibility properties. So in\n> above my example, actually you are going to define 80 new types?\n> (also a collation could be either \"PAD SPACE\" or \"NO PAD\". So you\n> might have 160 new types).\n\nWell, yes I suppose so. The point is that these relationships *must be\ndefined anyway*. Allowed and forbidden conversions must be defined,\ncollation order must be defined, indexing operations must be defined,\netc etc etc. 
In fact, everything typically associated with a type must\nbe defined, including the allowed conversions between other types\n(character sets/collations).\n\nSo, how are we going to do this *in a general way* without carrying the\ninfrastructure of a (the) type system along with it? What would we be\nable to leave out or otherwise get for free if we use another mechanism?\nAnd is that mechanism fundamentally simpler than (re)using the type\nsystem that we already have?\n\n - Thomas\n\n\n", "msg_date": "Mon, 08 Jul 2002 22:43:25 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "> > If so, what about the \"coercibility\" property?\n> > The standard defines four distinct coercibility properties. So in\n> > my example above, actually you are going to define 80 new types?\n> > (also a collation could be either \"PAD SPACE\" or \"NO PAD\". So you\n> > might have 160 new types).\n> \n> Well, yes I suppose so. The point is that these relationships *must be\n> defined anyway*. Allowed and forbidden conversions must be defined,\n> collation order must be defined, indexing operations must be defined,\n> etc etc etc. In fact, everything typically associated with a type must\n> be defined, including the allowed conversions between other types\n> (character sets/collations).\n> \n> So, how are we going to do this *in a general way* without carrying the\n> infrastructure of a (the) type system along with it? What would we be\n> able to leave out or otherwise get for free if we use another mechanism?\n> And is that mechanism fundamentally simpler than (re)using the type\n> system that we already have?\n\nWell, I think charset/collation/coercibility/pad are all string data\ntype specific properties, not common to any other data types. So it\nseems more appropriate for type systems not to have that kind of type-specific\nknowledge. 
For example,\n\nS1 < S2\n\nshould raise an error if S1 has \"no collating properties\" and S2 has\n\"implicit collating properties\", while ok if S1 has \"no collating\nproperties\" and S2 has \"explicit collating properties\". It would be\nvery hard for the type system to handle this kind of case since it\nrequires special knowledge about the string data type.\n\nAlternative?\n\nWhy don't we have these properties in the string data itself?\n(probably we do not need to have them on disk storage). The existing text\ndata type has length + data. I suggest extending it like:\n\nlength + charset + collation + pad + coercibility + data\n\nWith this, the above example could be easily handled by the < operator.\n\nFor indexes, maybe we could dynamically replace the varstr_cmp function\naccording to collation, though I have not actually examined my idea\nclosely.\n--\nTatsuo Ishii\n\n\n", "msg_date": "Tue, 09 Jul 2002 15:23:35 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "SQL99 allows on-the-fly encoding conversion:\n\nCONVERT('aaa' USING myconv 'bbb')\n\nSo there could be more than one conversion for a particular encodings\npair. This leads to an ambiguity for the \"default\" conversion used for the\nfrontend/backend automatic encoding conversion. Can we add a flag\nindicating that this is the \"default\" conversion? The new proposed\nsyntax would be:\n\nCREATE CONVERSION <conversion name>\n FOR <source encoding name>\n TO <destination encoding name>\n FROM <conversion function name>\n [DEFAULT]\n\nComments?\n--\nTatsuo Ishii\n\n\n", "msg_date": "Tue, 09 Jul 2002 15:46:40 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "On Tue, Jul 09, 2002 at 10:07:11AM +0900, Tatsuo Ishii wrote:\n> > > Use a simple wrap function.\n> > \n> > How does this function know the to/from encodings? 
\n> \n> For example, if you want to define a function for LATIN1 to UNICODE, the\n> conversion function would look like:\n> \n> function_for_LATIN1_to_UTF-8(from_string opaque, to_string opaque, length\n> integer)\n> {\n> \t:\n> \t:\n> \tgeneric_function_using_iconv(from_str, to_str, \"ISO-8859-1\", \"UTF-8\",\n> \tlength);\n> }\n> \n> CREATE FUNCTION function_for_LATIN1_to_UTF-8(opaque, opaque, integer)\n> RETURNS integer;\n> CREATE CONVERSION myconversion FOR 'LATIN1' TO 'UNICODE' FROM\n> function_for_LATIN1_to_UTF-8;\n\n Hmm, but it requires defining \"function_for_...\" for each conversion. \n For example, a trigger function I needn't define for each table, but I can\n use only one PostgreSQL function for an arbitrary table.\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n\n\n", "msg_date": "Tue, 9 Jul 2002 09:34:30 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "On Tue, 2002-07-09 at 03:47, Tatsuo Ishii wrote:\n> > An aside: I was thinking about this some, from the PoV of using our\n> > existing type system to handle this (as you might remember, this is an\n> > inclination I've had for quite a while). I think that most things line\n> > up fairly well to allow this (and having transaction-enabled features\n> > may require it), but do notice that the SQL feature of allowing a\n> > different character set for every column *name* does not map\n> > particularly well to our underlying structures.\n> \n> I've been thinking about this for a while too. What about collation? If we add\n> new charsets A and B, and each has 10 collations, then we are going to\n> have 20 new types? 
That seems overkill to me.\n\nCan't we do all collating in unicode and convert charsets A and B to and\nfrom it?\n\nI would even recommend going a step further and storing all 'national'\ncharacter sets in unicode.\n\n--------------\nHannu\n\n\n\n\n", "msg_date": "09 Jul 2002 17:50:07 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "Thomas Lockhart writes:\n\n> An aside: I was thinking about this some, from the PoV of using our\n> existing type system to handle this (as you might remember, this is an\n> inclination I've had for quite a while). I think that most things line\n> up fairly well to allow this (and having transaction-enabled features\n> may require it), but do notice that the SQL feature of allowing a\n> different character set for every column *name* does not map\n> particularly well to our underlying structures.\n\nThe more I think about it, the more I come to the conclusion that the\nSQL framework for \"character sets\" is both bogus and a red herring. (And\nit begins with figuring out exactly what a character set is, as opposed\nto a form-of-use, a.k.a.(?) encoding, but let's ignore that.)\n\nThe ability to store each column value in a different encoding sounds\ninteresting, because it allows you to create tables such as\n\n product_id | product_name_en | product_name_kr | product_name_jp\n\nbut you might as well create a table such as\n\n product_id | lang | product_name\n\nwith product_name in Unicode, and have a more extensible application that\nway, too.\n\nI think it's fine to have the encoding fixed for the entire database. It\nsure makes coding easier. If you want to be international, you use\nUnicode. If not you can \"optimize\" your database by using a more\nefficient encoding. In fact, I think we should consider making UTF-8 the\ndefault encoding sometime.\n\nThe real issue is the collation. 
But the collation is a small subset of\nthe whole locale/character set gobbledygook. Standardized collation rules\nin standardized forms exist. Finding/creating routines to interpret and\napply them should be the focus. SQL's notion to funnel the decision which\ncollation rule to apply through the character sets is bogus. It's\nimpossible to pick a default collation rule for many character sets\nwithout applying bias.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 10 Jul 2002 00:20:54 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "Hannu Krosing writes:\n\n> Can't we do all collating in unicode and convert charsets A and B to and\n> from it ?\n>\n> I would even recommend going a step further and storing all 'national'\n> character sets in unicode.\n\nSure. However, Tatsuo maintains that the customary Japanese character\nsets don't map very well with Unicode. Personally, I believe that this is\nan issue that should be fixed, not avoided, but I don't understand the\nissues well enough.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 10 Jul 2002 00:21:23 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "On Wed, 10 Jul 2002 08:21, Peter Eisentraut wrote:\n> Hannu Krosing writes:\n...\n> > I would even recommend going a step further and storing all 'national'\n> > character sets in unicode.\n>\n> Sure. However, Tatsuo maintains that the customary Japanese character\n> sets don't map very well with Unicode. Personally, I believe that this is\n> an issue that should be fixed, not avoided, but I don't understand the\n> issues well enough.\n\nPresumably improving the Unicode support to cover the full UTF32 (or UCS4) \nrange would help with this. 
Last time I checked, PostgreSQL only supports the \nUCS2 subset of Unicode, ie 16 bits. From the Unicode propaganda I've read, it \nseems that one of the main goals of the expansion of the range beyond 16 bits \nwas to answer the complaints of Japanese users.\n\nTim\n\n-- \n-----------------------------------------------\nTim Allen tim@proximity.com.au\nProximity Pty Ltd http://www.proximity.com.au/\n http://www4.tpg.com.au/users/rita_tim/\n\n", "msg_date": "Wed, 10 Jul 2002 10:53:37 +1000", "msg_from": "Tim Allen <tim@proximity.com.au>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "On Wed, 10 Jul 2002, Peter Eisentraut wrote:\n\n> Sure. However, Tatsuo maintains that the customary Japanese character\n> sets don't map very well with Unicode. Personally, I believe that this is\n> an issue that should be fixed, not avoided, but I don't understand the\n> issues well enough.\n\nI hear this all the time, but I have yet to have someone show me what,\nin ISO-2022-JP, EUC-JP or SJIS, cannot be transparently translated into\nUnicode and back.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n", "msg_date": "Wed, 10 Jul 2002 13:27:07 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "> > For example you want to define a function for LATIN1 to UNICODE conversion\n> > function would look like:\n> > \n> > function_for_LATIN1_to_UTF-8(from_string opaque, to_string opaque, length\n> > integer)\n> > {\n> > \t:\n> > \t:\n> > \tgeneric_function_using_iconv(from_str, to_str, \"ISO-8859-1\", \"UTF-8\",\n> > \tlength);\n> > }\n> > \n> > CREATE FUNCTION function_for_LATIN1_to_UTF-8(opaque, opaque, integer)\n> > RETURNS integer;\n> > CREATE CONVERSION myconversion FOR 'LATIN1' TO 'UNICODE' FROM\n> > function_for_LATIN1_to_UTF-8;\n> \n> Hmm, but it requires defining \"function_for_...\" for each conversion. \n> For example a trigger function I needn't define for each table, but I can\n> use only one PostgreSQL function for an arbitrary table.\n\nI don't think this is a big problem, IMO.\n\nHowever, thinking more, I came to the conclusion that passing encoding\nids would be a good thing. With the encoding id parameters, the\nfunction could check if it is called with correct encodings, and this\nwould prevent disaster. New interface proposal:\n\n pgconv(\n\t\tINTEGER,\t-- source encoding id\n\t\tINTEGER,\t-- destination encoding id\n\t\tOPAQUE,\t\t-- source string (null terminated C string)\n\t\tOPAQUE,\t\t-- destination string (null terminated C string)\n\t\tINTEGER\t-- source string length\n ) returns INTEGER;\t-- dummy. 
returns nothing, actually.\n\n--\nTatsuo Ishii\n", "msg_date": "Thu, 11 Jul 2002 15:37:49 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "On Thu, Jul 11, 2002 at 03:37:49PM +0900, Tatsuo Ishii wrote:\n> > > CREATE FUNCTION function_for_LATIN1_to_UTF-8(opaque, opaque, integer)\n> > > RETURNS integer;\n> > > CREATE CONVERSION myconversion FOR 'LATIN1' TO 'UNICODE' FROM\n> > > function_for_LATIN1_to_UTF-8;\n> > \n> > Hmm, but it requires defining \"function_for_...\" for each conversion. \n> > For example a trigger function I needn't define for each table, but I can\n> > use only one PostgreSQL function for an arbitrary table.\n> \n> I don't think this is a big problem, IMO.\n> \n> However, thinking more, I came to the conclusion that passing encoding\n> ids would be a good thing. With the encoding id parameters, the\n> function could check if it is called with correct encodings, and this\n> would prevent disaster. New interface proposal:\n\n OK.\n\n> pgconv(\n> \t\tINTEGER,\t-- source encoding id\n> \t\tINTEGER,\t-- destination encoding id\n\n Where/how is the conversion between encoding id and encoding\n name described? (Maybe I overlook something :-) I expect the new encoding system \n will be extendable and the encodings list will not be hardcoded like now.\n (extendable = add a new encoding without a PostgreSQL rebuild)\n\n BTW, the client side needs routines for working with encoding names too\n (pg_char_to_encoding()). Hmm.. it can't be extendable, or can it?\n\n> \t\tOPAQUE,\t\t-- source string (null terminated C string)\n> \t\tOPAQUE,\t\t-- destination string (null terminated C string)\n> \t\tINTEGER\t-- source string length\n> ) returns INTEGER;\t-- dummy. 
returns nothing, actually.\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Thu, 11 Jul 2002 10:14:28 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "> Where/how is the conversion between encoding id and encoding\n> name described? (Maybe I overlook something :-) I expect the new encoding system \n> will be extendable and the encodings list will not be hardcoded like now.\n> (extendable = add a new encoding without a PostgreSQL rebuild)\n\nUser defined charsets (encodings) are under discussion and I believe it\nwould not happen for 7.3.\n\n> BTW, the client side needs routines for working with encoding names too\n> (pg_char_to_encoding()). Hmm.. it can't be extendable, or can it?\n\npg_char_to_encoding() is already in libpq. Or am I missing something?\n--\nTatsuo Ishii\n", "msg_date": "Thu, 11 Jul 2002 17:26:01 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "On Thu, Jul 11, 2002 at 05:26:01PM +0900, Tatsuo Ishii wrote:\n> > Where/how is the conversion between encoding id and encoding\n> > name described? (Maybe I overlook something :-) I expect the new encoding system \n> > will be extendable and the encodings list will not be hardcoded like now.\n> > (extendable = add a new encoding without a PostgreSQL rebuild)\n> \n> User defined charsets (encodings) are under discussion and I believe it\n> would not happen for 7.3.\n> \n> > BTW, the client side needs routines for working with encoding names too\n> > (pg_char_to_encoding()). Hmm.. it can't be extendable, or can it?\n> \n> pg_char_to_encoding() is already in libpq. Or am I missing something?\n\n It works with encoding table (pg_enc2name_tbl) and it's compiled \n into backend and client too. 
It means number of encoding is not possible \n change after compilation and you (user) can't add new encoding without \n pg_enc2name_tbl[] change. I original thought we can add new encodings\n on-the-fly in 7.3 :-) You're right.\n\n IMHO implement \"User defined charsets(encodings)\" will problem for\n current libpq design.\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Thu, 11 Jul 2002 10:38:22 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "> > pg_char_to_encoding() is already in libpq. Or am I missing something?\n> \n> It works with encoding table (pg_enc2name_tbl) and it's compiled \n> into backend and client too. It means number of encoding is not possible \n> change after compilation and you (user) can't add new encoding without \n> pg_enc2name_tbl[] change. I original thought we can add new encodings\n> on-the-fly in 7.3 :-) You're right.\n> \n> IMHO implement \"User defined charsets(encodings)\" will problem for\n> current libpq design.\n\nNo, it's not a libpq problem, but more common \"client/server\" problem\nIMO. It's very hard to share dynamically created object (info)\neffectively between client and server.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 11 Jul 2002 17:52:18 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "On Thu, Jul 11, 2002 at 05:52:18PM +0900, Tatsuo Ishii wrote:\n> > > pg_char_to_encoding() is already in libpq. Or am I missing something?\n> > \n> > It works with encoding table (pg_enc2name_tbl) and it's compiled \n> > into backend and client too. It means number of encoding is not possible \n> > change after compilation and you (user) can't add new encoding without \n> > pg_enc2name_tbl[] change. 
I original thought we can add new encodings\n> > on-the-fly in 7.3 :-) You're right.\n> > \n> > IMHO implement \"User defined charsets(encodings)\" will problem for\n> > current libpq design.\n> \n> No, it's not a libpq problem, but more common \"client/server\" problem\n> IMO. It's very hard to share dynamically created object (info)\n> effectively between client and server.\n\n IMHO dynamic object will keep server and client must ask for wanted \n information to server.\n\n Karel\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Thu, 11 Jul 2002 11:04:59 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "> > No, it's not a libpq problem, but more common \"client/server\" problem\n> > IMO. It's very hard to share dynamically created object (info)\n> > effectively between client and server.\n> \n> IMHO dynamic object will keep server and client must ask for wanted \n> information to server.\n\nI agree with you. However real problem is how fast it could be. For\nexample, pg_mblen() is called for each word processed by libpq to know\nthe byte length of the word. If each call to pg_mblen() accesses\nbackend, the performance might be unacceptably slow.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 11 Jul 2002 18:30:48 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "On Thu, Jul 11, 2002 at 06:30:48PM +0900, Tatsuo Ishii wrote:\n> > > No, it's not a libpq problem, but more common \"client/server\" problem\n> > > IMO. It's very hard to share dynamically created object (info)\n> > > effectively between client and server.\n> > \n> > IMHO dynamic object will keep server and client must ask for wanted \n> > information to server.\n> \n> I agree with you. 
However real problem is how fast it could be. For\n> example, pg_mblen() is called for each word processed by libpq to know\n> the byte length of the word. If each call to pg_mblen() accesses\n> backend, the performance might be unacceptably slow.\n\n It must load all relevant information about actual encoding(s) and\n cache it in libpq.\n\n IMHO basic encoding information like name and id are not problem. \n The PQmblen() is big problem. Strange question: is PQmblen() really\n needful? I see it's used for result printing, but why backend not\n mark size of field (word) to result? If backend good knows size of\n data why not send this information to client togeter with data?\n \n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Thu, 11 Jul 2002 15:41:38 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Proposal: CREATE CONVERSION" }, { "msg_contents": "> IMHO basic encoding information like name and id are not problem. \n> The PQmblen() is big problem. Strange question: is PQmblen() really\n> needful? I see it's used for result printing, but why backend not\n> mark size of field (word) to result? If backend good knows size of\n> data why not send this information to client togeter with data?\n\nPQmblen() is used by psql in many places. It is used for parsing query\ntexts supplied by user, not only for data sent from backend.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 11 Jul 2002 23:47:09 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Proposal: CREATE CONVERSION" } ]
[ { "msg_contents": "OK,\n\nThis is the problem I'm having with the DROP COLUMN implementation. Since\nI've already incorporated all of Hiroshi's changes, I think this may have\nbeen an issue in his trial implementation as well.\n\nI have attached my current patch, which works fine and compiles properly.\n\nOk, if you drop a column 'b', then all these work properly:\n\nselect * from tab;\nselect tab.* from tab;\nselect b from tab;\nupdate tab set b = 3;\nselect * from tab where b = 3;\ninsert into tab (b) values (3);\n\nThat's all good. However, the issue is that one of the things that happens\nwhen you drop a column is that the column is renamed to 'dropped_%attnum%'.\nSo, say the 'b' column is renamed to 'dropped_2', then you can do this:\n\nselect dropped_2 from tab;\nselect tab.dropped_2 from tab;\nupdate tab set dropped_2 = 3;\nselect * from tab where dropped_2 = 3;\n\nWhere have I missed the COLUMN_IS_DROPPED checks???\n\nAnother thing: I don't want to name dropped columns 'dropped_...' as I\nthink that's unfair on our non-English speaking users. Should I just use\n'xxxx' or something?\n\nThanks for any help,\n\nChris", "msg_date": "Fri, 5 Jul 2002 15:57:44 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "DROP COLUMN Progress" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> So, say the 'b' column is renamed to 'dropped_2', then you can do this:\n\n> select dropped_2 from tab;\n> select tab.dropped_2 from tab;\n> update tab set dropped_2 = 3;\n> select * from tab where dropped_2 = 3;\n\n> Where have I missed the COLUMN_IS_DROPPED checks???\n\nSounds like you aren't checking in the part of the parser that resolves\nsimple variable references.\n\n> Another thing: I don't want to name dropped columns 'dropped_...' as I\n> think that's unfair on our non-English speaking users. 
Should I just use\n> 'xxxx' or something?\n\nDon't be silly --- the system catalogs are completely English-centric\nalready. Do you want to change all the catalog and column names to\nmeaningless strings? Since the dropped columns should be invisible to\nanyone who's not poking at the catalogs, I don't see that we are adding\nany cognitive load ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Jul 2002 10:42:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN Progress " }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> OK,\n> \n> This is the problem I'm having with the DROP COLUMN implementation. Since\n> I've already incorporated all of Hiroshi's changes, I think this may have\n> been an issue in his trial implementation as well.\n> \n> I have attached my current patch, which works fine and compiles properly.\n> \n> Ok, if you drop a column 'b', then all these work properly:\n> \n> select * from tab;\n> select tab.* from tab;\n> select b from tab;\n> update tab set b = 3;\n> select * from tab where b = 3;\n> insert into tab (b) values (3);\n> \n> That's all good. However, the issue is that one of the things that happens\n> when you drop a column is that the column is renamed to 'dropped_%attnum%'.\n> So, say the 'b' column is renamed to 'dropped_2', then you can do this:\n> \n> select dropped_2 from tab;\n> select tab.dropped_2 from tab;\n> update tab set dropped_2 = 3;\n> select * from tab where dropped_2 = 3;\n> \n> Where have I missed the COLUMN_IS_DROPPED checks???\n\nOK, my guess is that it is checks in parser/. I would issue each of\nthese queries with a non-existent column name, find the error message in\nthe code, and add an isdropped check in those places.\n\n> Another thing: I don't want to name dropped columns 'dropped_...' as I\n> think that's unfair on our non-English speaking users. Should I just use\n> 'xxxx' or something?\n\nI think \"dropped\" is OK. 
The SQL is still English, e.g. DROP.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Fri, 5 Jul 2002 14:24:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN Progress" }, { "msg_contents": "> OK, my guess is that it is checks in parser/. I would issue each of\n> these queries with a non-existent column name, find the error message in\n> the code, and add an isdropped check in those places.\n\nOK, I think I've narrowed it down to this block of code in scanRTEForColumn\nin parse_relation.c:\n\n /*\n * Scan the user column names (or aliases) for a match. Complain if\n * multiple matches.\n */\n foreach(c, rte->eref->colnames)\n {\n /* @@ SKIP DROPPED HERE? @@ */\n attnum++;\n if (strcmp(strVal(lfirst(c)), colname) == 0)\n {\n if (result)\n elog(ERROR, \"Column reference \\\"%s\\\" is\nambiguous\", colname);\n result = (Node *) make_var(pstate, rte, attnum);\n rte->checkForRead = true;\n }\n }\n\n\nI'm thinking that I should put a 'SearchSysCacheCopy' where my @@ comment is\nto retrieve the attribute by name, and then do a check to see if it's\ndropped. Is that the best/fastest way of doing things? Seems unfortunate\nto add another cache lookup in every parsed query...\n\nComments?\n\nChris\n\n\n\n", "msg_date": "Mon, 8 Jul 2002 12:06:07 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: DROP COLUMN Progress" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> /*\n> * Scan the user column names (or aliases) for a match. Complain if\n> * multiple matches.\n> */\n> foreach(c, rte->eref->colnames)\n> {\n> /* @@ SKIP DROPPED HERE? 
@@ */\n> attnum++;\n> if (strcmp(strVal(lfirst(c)), colname) == 0)\n> {\n> if (result)\n> elog(ERROR, \"Column reference \\\"%s\\\" is\nambiguous\", colname);\n> result = (Node *) make_var(pstate, rte, attnum);\n> rte->checkForRead = true;\n> }\n> }\n> \n> \n> I'm thinking that I should put a 'SearchSysCacheCopy' where my @@ comment is\n> to retrieve the attribute by name, and then do a check to see if it's\n> dropped. Is that the best/fastest way of doing things? Seems unfortunate\n> to add another cache lookup in every parsed query...\n\nI am still looking but perhaps you could suppress dropped columns from\ngetting into eref in the first place.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Mon, 8 Jul 2002 00:16:51 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN Progress" }, { "msg_contents": "> > I'm thinking that I should put a 'SearchSysCacheCopy' where my\n> @@ comment is\n> > to retrieve the attribute by name, and then do a check to see if it's\n> > dropped. Is that the best/fastest way of doing things? Seems\n> unfortunate\n> > to add another cache lookup in every parsed query...\n>\n> I am still looking but perhaps you could suppress dropped columns from\n> getting into eref in the first place.\n\nOK, that's done. I'm working on not allowing dropped columns in UPDATE\ntargets now.\n\nChris\n\n\n\n", "msg_date": "Mon, 8 Jul 2002 13:09:59 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: DROP COLUMN Progress" }, { "msg_contents": "> > I am still looking but perhaps you could suppress dropped columns from\n> > getting into eref in the first place.\n>\n> OK, that's done. 
I'm working on not allowing dropped columns in UPDATE\n> targets now.\n\nOK, I've fixed it so that dropped columns cannot be targeted in an update\nstatement, however now I'm running into this problem:\n\nIf you issue an INSERT statement without qualifying any field names, ie:\n\nINSERT INTO tab VALUES (3);\n\nAlthough it will automatically insert NULL for the dropped columns, it still\ndoes cache lookups for the type of the dropped columns, etc. I noticed that\nwhen I tried setting the atttypid of the dropped column to (Oid)-1. Where\nis the bit of code that figures out the list of columns to insert into in an\nunqualified INSERT statement?\n\nI'm thinking that it would be nice if the dropped columns never even make it\ninto the list of target attributes, for performance reasons...\n\nChris\n\n\n\n\n", "msg_date": "Mon, 8 Jul 2002 15:51:03 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: DROP COLUMN Progress" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Christopher Kings-Lynne wrote:\n>> I'm thinking that I should put a 'SearchSysCacheCopy' where my @@ comment is\n>> to retrieve the attribute by name, and then do a check to see if it's\n>> dropped. Is that the best/fastest way of doing things? Seems unfortunate\n>> to add another cache lookup in every parsed query...\n\n> I am still looking but perhaps you could suppress dropped columns from\n> getting into eref in the first place.\n\nThat was my first thought also, but then the wrong attnum would be used\nin the \"make_var\". Ugh. I think what Chris needs to do is extend the\neref data structure so that there can be placeholders for dropped\nattributes. 
Perhaps NULLs could be included in the list, and then the\ncode would become like\n\n\tattnum++;\n\tif (lfirst(c) && strcmp(strVal(lfirst(c)), colname) == 0)\n\t\t...\n\nThis would require changes in quite a number of places :-( but I'm not\nsure we have much choice. The eref structure really needs to line up\nwith attnums.\n\nAnother possibility is to enter the dropped attnames in the eref list\nnormally, and do the drop test *after* hitting a match not before.\nThis is still slow, but not as horrendously O(N^2) slow as Chris's\noriginal pseudocode. I'm not sure how much work it'd really save though;\nyou might find yourself hitting all the same places to add tests.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Jul 2002 10:26:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN Progress " }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > > I am still looking but perhaps you could supress dropped columns from\n> > > getting into eref in the first place.\n> >\n> > OK, that's done. I'm working on not allowing dropped columns in UPDATE\n> > targets now.\n> \n> OK, I've fixed it so that dropped columns cannot be targetted in an update\n> statement, however now I'm running into this problem:\n> \n> If you issue an INSERT statement without qualifying any field names, ie:\n> \n> INSERT INTO tab VALUES (3);\n> \n> Although it will automatically insert NULL for the dropped columns, it still\n> does cache lookups for the type of the dropped columns, etc. I noticed that\n> when I tried setting the atttypid of the dropped column to (Oid)-1. 
Where\n> is the bit of code that figures out the list of columns to insert into in an\n> unqualified INSERT statement?\n\nparse_target.c::checkInsertTargets()\n\n>\n> I'm thinking that it would be nice if the dropped columns never even make it\n> into the list of target attributes, for performance reasons...\n\nYea, just sloppy to have them there.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Mon, 8 Jul 2002 14:36:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN Progress" }, { "msg_contents": "> That was my first thought also, but then the wrong attnum would be used\n> in the \"make_var\". Ugh. I think what Chris needs to do is extend the\n> eref data structure so that there can be placeholders for dropped\n> attributes. Perhaps NULLs could be included in the list, and then the\n> code would become like\n\nHmmm... I don't get it - at the moment I'm preventing them from even\ngetting into the eref and all regression tests pass and every test I try\nworks as well...\n\nChris\n\n\n\n", "msg_date": "Tue, 9 Jul 2002 10:09:51 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: DROP COLUMN Progress " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n>> That was my first thought also, but then the wrong attnum would be used\n>> in the \"make_var\". Ugh. I think what Chris needs to do is extend the\n>> eref data structure so that there can be placeholders for dropped\n>> attributes. Perhaps NULLs could be included in the list, and then the\n>> code would become like\n\n> Hmmm... 
I don't get it - at the moment I'm preventing them from even\n> getting into the eref and all regression tests pass and every test I try\n> works as well...\n\nAre you checking access to columns that're to the right of the one\ndropped?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Jul 2002 23:49:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN Progress " }, { "msg_contents": "> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> >> That was my first thought also, but then the wrong attnum would be used\n> >> in the \"make_var\". Ugh. I think what Chris needs to do is extend the\n> >> eref data structure so that there can be placeholders for dropped\n> >> attributes. Perhaps NULLs could be included in the list, and then the\n> >> code would become like\n>\n> > Hmmm... I don't get it - at the moment I'm preventing them from even\n> > getting into the eref and all regression tests pass and every test I try\n> > works as well...\n>\n> Are you checking access to columns that're to the right of the one\n> dropped?\n\nOK, interesting:\n\ntest=# create table test (a int4, b int4, c int4, d int4);\nCREATE TABLE\ntest=# insert into test values (1,2,3,4);\nINSERT 16588 1\ntest=# alter table test drop b;\nALTER TABLE\ntest=# select * from test;\n a | d | d\n---+---+---\n 1 | 3 | 4\n(1 row)\n\nIt half works, half doesn't. 
Sigh - how come these things always turn out\nharder than I think!?\n\npg_attribute:\n\ntest=# select attrelid,attname,attisdropped from pg_attribute where\nattrelid=16586 and attnum > 0;\n attrelid | attname | attisdropped\n----------+-----------+--------------\n 16586 | a | f\n 16586 | dropped_2 | t\n 16586 | c | f\n 16586 | d | f\n(4 rows)\n\nChris\n\n\n\n", "msg_date": "Tue, 9 Jul 2002 12:00:00 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: DROP COLUMN Progress " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n>> Are you checking access to columns that're to the right of the one\n>> dropped?\n\n> OK, interesting:\n\n> test=# create table test (a int4, b int4, c int4, d int4);\n> CREATE TABLE\n> test=# insert into test values (1,2,3,4);\n> INSERT 16588 1\n> test=# alter table test drop b;\n> ALTER TABLE\n> test=# select * from test;\n> a | d | d\n> ---+---+---\n> 1 | 3 | 4\n> (1 row)\n\nWhat of\n\n\tSELECT a,c,d FROM test\n\nI'll bet that doesn't work at all...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Jul 2002 00:02:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN Progress " }, { "msg_contents": "> > test=# create table test (a int4, b int4, c int4, d int4);\n> > CREATE TABLE\n> > test=# insert into test values (1,2,3,4);\n> > INSERT 16588 1\n> > test=# alter table test drop b;\n> > ALTER TABLE\n> > test=# select * from test;\n> > a | d | d\n> > ---+---+---\n> > 1 | 3 | 4\n> > (1 row)\n> \n> What of\n> \n> \tSELECT a,c,d FROM test\n> \n> I'll bet that doesn't work at all...\n\nYeah, broken. 
Damn.\n\ntest=# SELECT a,c,d FROM test;\n a | c | d\n---+---+---\n 1 | 2 | 3\n(1 row)\n\ntest=# SELECT a,d FROM test;\n a | d\n---+---\n 1 | 3\n(1 row)\n\ntest=# SELECT d,c,a FROM test;\n d | c | a\n---+---+---\n 3 | 2 | 1\n(1 row)\n\nChris\n\n\n\n", "msg_date": "Tue, 9 Jul 2002 12:08:52 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: DROP COLUMN Progress " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n>> What of\n>> SELECT a,c,d FROM test\n>> I'll bet that doesn't work at all...\n\n> Yeah, broken. Damn.\n\nYup. That loop we were just looking at needs to derive the correct\nattnum when it matches a column name. If you remove deleted columns\nfrom the eref list altogether, you get the wrong answer.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Jul 2002 00:13:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN Progress " } ]
[ { "msg_contents": "I modified pg_controldata to display a few new fields. Example output\nappears at the end of this message, and the cvs log is:\n\nAdd a few new lines to display recently added fields in the ControlFile \n structure.\nNow includes the following new fields:\n integer/float date/time storage\n maximum length of names (+1; they must also include a null termination)\n maximum number of function arguments\n maximum length of locale name\n\n - Thomas\n\n\nmyst$ ./pg_controldata\npg_control version number: 72\nCatalog version number: 200207021\nDatabase state: IN_PRODUCTION\npg_control last modified: Fri Jul 5 08:33:18 2002\nCurrent log file id: 0\nNext log file segment: 2\nLatest checkpoint location: 0/1663838\nPrior checkpoint location: 0/16637F8\nLatest checkpoint's REDO location: 0/1663838\nLatest checkpoint's UNDO location: 0/0\nLatest checkpoint's StartUpID: 12\nLatest checkpoint's NextXID: 4925\nLatest checkpoint's NextOID: 139958\nTime of latest checkpoint: Fri Jul 5 08:33:01 2002\nDatabase block size: 8192\nBlocks per segment of large relation: 131072\nMaximum length of names: 32\nMaximum number of function arguments: 16\nDate/time type storage: 64-bit integers\nMaximum length of locale name: 128\nLC_COLLATE: C\nLC_CTYPE: C\n\n\n", "msg_date": "Fri, 05 Jul 2002 08:34:31 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "pg_controldata" } ]
[ { "msg_contents": "\n> Let me re-write it, and I'll post it in the next version. The section\n> dealt with what to do when you have a valid restored controlfile from a\n> backup system, which is in the DB_SHUTDOWNED state, and that points to a\n> valid shutdown/checkpoint record in the log; only the checkpoint record\n> happens not to be the last one in the log. This is a situation that\n> could never happen now, but would in PITR.\n\nBut it would need to be restore's responsibility to set the flag to \nDB_IN_PRODUCTION, no?\n\n> Even if we shutdown before we copy the file, we don't want a file that\n> hasn't been written to in 5 weeks before it was backed up to require\n> five weeks of old log files to recover. So we need to track that\n> information somehow, because right now if we scanned the blocks in the\n> file looking at the page LSN's, the greatest LSN we would see might\n> be much older than where it would be safe to recover from. That is the\n> biggest problem, I think.\n\nWell, if you skip a validity test it could be restore's responsibility \nto know which checkpoint was last before the file backup was taken. \n(When doing a backup you would need to include the last checkpoint info\n== pg_control at start of backup)\n\nAndreas\n\n\n", "msg_date": "Fri, 5 Jul 2002 17:39:09 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" } ]
[ { "msg_contents": "Fix typo in xl_heaptid comment\n\nServus\n Manfred\n\n--- ../base/src/include/access/htup.h\t2002-07-04 18:05:04.000000000 +0200\n+++ src/include/access/htup.h\t2002-07-05 16:52:44.000000000 +0200\n@@ -268,15 +268,15 @@\n /*\n * When we insert 1st item on new page in INSERT/UPDATE\n * we can (and we do) restore entire page in redo\n */\n #define XLOG_HEAP_INIT_PAGE 0x80\n \n /*\n- * All what we need to find changed tuple (18 bytes)\n+ * All what we need to find changed tuple (14 bytes)\n *\n * NB: on most machines, sizeof(xl_heaptid) will include some trailing pad\n * bytes for alignment. We don't want to store the pad space in the XLOG,\n * so use SizeOfHeapTid for space calculations. Similar comments apply for\n * the other xl_FOO structs.\n */\n typedef struct xl_heaptid\n {\n \tRelFileNode node;\n \tItemPointerData tid;\t\t/* changed tuple id */\n } xl_heaptid;\n\nServus\n Manfred\n\n\n", "msg_date": "Fri, 05 Jul 2002 18:24:37 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Typo in htup.h comment" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nManfred Koizar wrote:\n> Fix typo in xl_heaptid comment\n> \n> Servus\n> Manfred\n> \n> --- ../base/src/include/access/htup.h\t2002-07-04 18:05:04.000000000 +0200\n> +++ src/include/access/htup.h\t2002-07-05 16:52:44.000000000 +0200\n> @@ -268,15 +268,15 @@\n> /*\n> * When we insert 1st item on new page in INSERT/UPDATE\n> * we can (and we do) restore entire page in redo\n> */\n> #define XLOG_HEAP_INIT_PAGE 0x80\n> \n> /*\n> - * All what we need to find changed tuple (18 bytes)\n> + * All what we need to find changed tuple (14 bytes)\n> *\n> * NB: on most machines, sizeof(xl_heaptid) will include some trailing pad\n> * bytes for 
alignment. We don't want to store the pad space in the XLOG,\n> * so use SizeOfHeapTid for space calculations. Similar comments apply for\n> * the other xl_FOO structs.\n> */\n> typedef struct xl_heaptid\n> {\n> \tRelFileNode node;\n> \tItemPointerData tid;\t\t/* changed tuple id */\n> } xl_heaptid;\n> \n> Servus\n> Manfred\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Fri, 5 Jul 2002 22:17:46 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Typo in htup.h comment" }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\n\n\nManfred Koizar wrote:\n> Fix typo in xl_heaptid comment\n> \n> Servus\n> Manfred\n> \n> --- ../base/src/include/access/htup.h\t2002-07-04 18:05:04.000000000 +0200\n> +++ src/include/access/htup.h\t2002-07-05 16:52:44.000000000 +0200\n> @@ -268,15 +268,15 @@\n> /*\n> * When we insert 1st item on new page in INSERT/UPDATE\n> * we can (and we do) restore entire page in redo\n> */\n> #define XLOG_HEAP_INIT_PAGE 0x80\n> \n> /*\n> - * All what we need to find changed tuple (18 bytes)\n> + * All what we need to find changed tuple (14 bytes)\n> *\n> * NB: on most machines, sizeof(xl_heaptid) will include some trailing pad\n> * bytes for alignment. We don't want to store the pad space in the XLOG,\n> * so use SizeOfHeapTid for space calculations. 
Similar comments apply for\n> * the other xl_FOO structs.\n> */\n> typedef struct xl_heaptid\n> {\n> \tRelFileNode node;\n> \tItemPointerData tid;\t\t/* changed tuple id */\n> } xl_heaptid;\n> \n> Servus\n> Manfred\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Sun, 7 Jul 2002 21:52:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Typo in htup.h comment" } ]
[ { "msg_contents": "On generic recovery...\n\nWhat is wrong with this strategy...\n\n0. Put the database in single user mode.\n\n1. Dump the Schema, with creation order properly defined, and with all\nconstraints written to a separate file. (IOW, one file contains the\nbare tables with no index, constraint or trigger stuff, and the other\ncontains all the RI stuff.)\n\n2. Dump the tables (one by one) to text files with \"copy\"\n\n3. Create a new database in a new location.\n\n4. Feed it the bare table schema\n\n5. Pump in the table data using \"copy\" from the saved text files\n\n6. Run the RI script to rebuild index, trigger, PKey, FKey, etc.\n\nI find that is the most trouble free way to do it with most DBMS\nsystems.\n\nAn attempted dump from DBMS X.Y and a load to DBMS (X+1).Y is always a\npile of trouble waiting to happen -- no matter what the system is.\n\n\n", "msg_date": "Fri, 5 Jul 2002 15:30:27 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: (A) native Windows port" } ]
[ { "msg_contents": "As the upcoming release is breaking compatibility anyway: what do you\nthink about placing a magic number and some format version info into\nthe page header?\n\nOne 32-bit-number per page should be enough to encode page type and\nversion. We have just to decide, how we want it:\n\na) combine page type and version into a single 32-bit magic number\n\n HEAPPAGE73 = 0x63c490c9\n HEAPPAGE74 = 0x540beeb3\n ...\n BTREE73 = 0x8cdc8edb\n BTREE74 = 0xbb13f0a1\n\nb) use n bits for the page type and the rest for a version number\n\n HEAPPAGE73 = 0x63c40703\n HEAPPAGE74 = 0x63c40704\n ...\n BTREE73 = 0x8cdc0703\n BTREE74 = 0x8cdc0704\n\nThe latter has the advantage, that the software could easily check for\na version range (e.g. if (PageGetVersion(page) <= 0x0703) ...).\n\nOne might argue, that one magic number *per file* should be\nsufficient. That would mean, that the first page of a file had to\nhave a different format. Btree has such a meta page; I don't know\nabout the other access methods.\n\nWith a magic number in every single page it could even be possible to\ndo a smooth upgrade: \"Just install Postgres 8.0 and continue to use\nyour PostgreSQL 7.4 databases\" :-). Whenever the backend reads an old\nformat page it uses alternative accessor routines. New pages are\nwritten in the new format. Or the database can be run in\ncompatibility mode ... I'm dreaming ...\n\nThoughts?\n\nServus\n Manfred\n\n\n", "msg_date": "Sat, 06 Jul 2002 01:38:05 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Page type and version" }, { "msg_contents": "Manfred Koizar wrote:\n> As the upcoming release is breaking compatibility anyway: what do you\n> think about placing a magic number and some format version info into\n> the page header?\n> \n> One 32-bit-number per page should be enough to encode page type and\n> version. 
We have just to decide, how we want it:\n> \n> a) combine page type and version into a single 32-bit magic number\n> \n> HEAPPAGE73 = 0x63c490c9\n> HEAPPAGE74 = 0x540beeb3\n> ...\n> BTREE73 = 0x8cdc8edb\n> BTREE74 = 0xbb13f0a1\n> \n> b) use n bits for the page type and the rest for a version number\n> \n> HEAPPAGE73 = 0x63c40703\n> HEAPPAGE74 = 0x63c40704\n> ...\n> BTREE73 = 0x8cdc0703\n> BTREE74 = 0x8cdc0704\n> \n> The latter has the advantage, that the software could easily check for\n> a version range (e.g. if (PageGetVersion(page) <= 0x0703) ...).\n\nYea, b) sounds good.\n\n> One might argue, that one magic number *per file* should be\n> sufficient. That would mean, that the first page of a file had to\n> have a different format. Btree has such a meta page; I don't know\n> about the other access methods.\n\nHeap used to have a header page too but it was removed long ago.\n\nWe do have the TODO item:\n\n\t* Add version file format stamp to heap and other table types\n\nbut I am now questioning why that is there. btree had a version stamp,\nso I thought heap should have one too, but because the PG_VERSION file\nis in every directory, isn't that all that is needed for version\ninformation.\n\nMy vote is just to remove the btree version. If we decide to implement\nmulti-version reading in the backend, we can add it where appropriate.\n\n\n> With a magic number in every single page it could even be possible to\n> do a smooth upgrade: \"Just install Postgres 8.0 and continue to use\n> your PostgreSQL 7.4 databases\" :-). Whenever the backend reads an old\n> format page it uses alternative accessor routines. New pages are\n> written in the new format. Or the database can be run in\n> compatibility mode ... I'm dreaming ...\n\nYes, and as I understand, it is pretty easy from a tuple to snoop to the\nend of the block to see what version stamp is there. 
Will we ever use\nit?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Fri, 5 Jul 2002 22:42:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Page type and version" } ]
[ { "msg_contents": "There was an interesting thread/process discussion in the gproff\nSlashdot discussion:\n\n\thttp://slashdot.org/article.pl?sid=02/07/05/1457231&mode=nested&tid=106\n\nThis guy had interesting comments:\n\n\thttp://slashdot.org/~pthisis/\n\nEspecially this comment:\n\n\thttp://slashdot.org/comments.pl?sid=35441&cid=3829377\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Fri, 5 Jul 2002 22:59:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Thread discussion" } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> Sent: Friday, July 05, 2002 7:59 PM\n> To: PostgreSQL-development\n> Subject: [HACKERS] Thread discussion\n> \n> \n> There was an interesting thread/process discussion in the gproff\n> Slashdot discussion:\n> \n> \t\n> http://slashdot.org/article.pl?sid=02/07/05/1457231&mode=neste\nd&tid=106\n\nThis guy had interesting comments:\n\n\thttp://slashdot.org/~pthisis/\n\nEspecially this comment:\n\n\thttp://slashdot.org/comments.pl?sid=35441&cid=3829377\n\n==================================================================\nWhich is pretty much pointless MS bashing and incorrect.\n\nFrom the news:comp.programming.threads FAQ:\n\n Q147: Thread create timings \n\nMatthew Houseman writes:\n\nThought I'd throw this into the pyre. :) I ran the thread/process\ncreate\nstuff on a 166MHz Pentium (no pro, no mmx) under NT4 and Solaris x86\n2.6:\n\n\nNT spawn 240s 24.0 ms/spawn\nSolaris spawn (fork) 123s 12.3 ms/spawn (incl. exec)\nSolaris spawn (vfork) 95s 9.5 ms/spawn (incl. exec)\n\nSolaris fork 47s 4.7 ms/fork\nSolaris vfork 0.37 ms/vfork (37s/100000)\n\nNT thread create 12s 1.2 ms/create\nSolaris thread create 0.11 ms/create (11s/100000)\n\n\nAs you can see, I tried both fork() and vfork(). 
When doing an immediate\nexec(), you'd normally use vfork(); when just forking, fork() is usually\nwhat you want to use (or have to use).\n\nNote that I had to turn the number of creates up to 100000 for vfork\nand thread create to get better precision in the timings.\n\n\nTo remind you, here are greg's figures (on a Pentium MMX 200MHz):\n\n>NT Spawner (spawnl): 120 Seconds (12.0 millisecond/spawn)\n>Linux Spawner (fork+exec): 57 Seconds ( 6.0 millisecond/spawn)\n>\n>Linux Process Create (fork): 10 Seconds ( 1.0 millisecond/proc)\n>\n>NT Thread Create 9 Seconds ( 0.9 millisecond/thread)\n>Linux Thread Create 3 Seconds ( 0.3 millisecond/thread)\n\n\nJust for fun, I tried the same thing on a 2 CPU 170MHz Ultrasparc.\nI leave it to someone else to figure out how much of this is due to\nthe two CPUs... :)\n\nSolaris spawn (fork) 84s 8.4 ms/spawn (incl. exec)\nSolaris spawn (vfork) 69s 6.9 ms/spawn (incl. exec)\n\nSolaris fork 21s 2.1 ms/fork\nSolaris vfork 0.17 ms/vfork (17s/100000)\n\nSolaris thread create 0.06 ms/create (6s/100000)\n\n\n=================================TOP=============\n Q148: Timing Multithreaded Programs (Solaris) \n\nFrom: sullivan@aisg20a.erim.org (Richard Sullivan)\n\n>I'm trying to time my multithreaded programs on Solaris with multiple \n>processors. I want the real world running time as opposed to the total\n\n>execution time of the programming because I want to measure speedup\nversus \n>sequential algorithms and home much faster the parallel program is for\nthe user.\n\nBradly,\n\n Here is what I wrote to solve this problem (for Solaris anyway). To\nuse it just call iobench_start() after any setup that you don't want\nto measure. When you are done measuring call iobench_end(). When you\nwant to see the statistics call iobench_report(). 
The output to\nstderr will look like this:\n\nProcess info:\n elapsed time 249.995\n CPU time 164.446\n user time 152.095\n system time 12.3507\n trap time 0.661235\n wait time 68.6506\n pfs major/minor 3379/ 0\n blocks input/output 0/ 0\n \n65.8% CPU usage\n\nThe iobench code is included in the program sources on: index.html.\n=================================TOP=============\n\nMy opinion is that PostgreSQL does not have to exclusively fork() or\nexclusively thread.\nAs Spike Lee said:\n\"Always do the right thing.\"\n\n\n", "msg_date": "Fri, 5 Jul 2002 20:25:25 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: Thread discussion" }, { "msg_contents": "Dann Corbit wrote:\n> Especially this comment:\n> \n> \thttp://slashdot.org/comments.pl?sid=35441&cid=3829377\n> \n> ==================================================================\n> Which is pretty much pointless MS bashing and incorrect.\n\nIs there such a thing. ;-)\n\nAnyway, the analysis of Solaris is meaningless. It is in the same camp\nas NT as far as process creation bloat. I have always said threads help\non NT _and_ Solaris.\n\nOn Solaris, the thread popularity is there _because_ the OS is so slow\nat process creation (SVr$ bloat), not necessarily because people really\nwant threads on Solaris.\n\n> >NT Spawner (spawnl): 120 Seconds (12.0 millisecond/spawn)\n> >Linux Spawner (fork+exec): 57 Seconds ( 6.0 millisecond/spawn)\n> >\n> >Linux Process Create (fork): 10 Seconds ( 1.0 millisecond/proc)\n> >\n> >NT Thread Create 9 Seconds ( 0.9 millisecond/thread)\n> >Linux Thread Create 3 Seconds ( 0.3 millisecond/thread)\n\nThe Linux case is more interesting. The same guy had timings for thread\nvs. process of 6usecs vs. 
4usecs, but states that it really isn't even a\nblip on the performance radar, and the coding required to do the stuff\nin a threaded manner is a headache:\n\n\thttp://slashdot.org/article.pl?sid=02/07/05/1457231&tid=106\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Fri, 5 Jul 2002 23:34:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Thread discussion" } ]
[ { "msg_contents": "A conversation in pgsql-interfaces reminded me that it would be a good\nidea for initdb to try to set attnotnull correctly for columns of the\nsystem catalogs. Although we generally don't recommend that people\nupdate catalogs directly, it's sometimes done anyway; having NOT NULL\nconstraints set on the columns that mustn't be null would help make\nthe system more robust.\n\nIt would be fairly easy to make bootstrap.c set attnotnull to true\nfor any column that's of a fixed-width datatype. That appears to\nsolve 99% of the problem with an appropriate amount of effort.\nWe could imagine inventing some BKI macro to explicitly label nullable\nor not-nullable columns in the include/catalog headers, but I don't\nthink it's worth that much trouble.\n\nComments?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 07 Jul 2002 12:46:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Most system catalog columns should be NOT NULL" }, { "msg_contents": "Tom Lane wrote:\n> A conversation in pgsql-interfaces reminded me that it would be a good\n> idea for initdb to try to set attnotnull correctly for columns of the\n> system catalogs. Although we generally don't recommend that people\n> update catalogs directly, it's sometimes done anyway; having NOT NULL\n> constraints set on the columns that mustn't be null would help make\n> the system more robust.\n> \n> It would be fairly easy to make bootstrap.c set attnotnull to true\n> for any column that's of a fixed-width datatype. 
That appears to\n> solve 99% of the problem with an appropriate amount of effort.\n> We could imagine inventing some BKI macro to explicitly label nullable\n> or not-nullable columns in the include/catalog headers, but I don't\n> think it's worth that much trouble.\n> \n> Comments?\n\nYep, should be done.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Sun, 7 Jul 2002 19:35:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Most system catalog columns should be NOT NULL" } ]
[ { "msg_contents": "We have the TODO item:\n\n * Make sure all block numbers are unsigned to increase maximum table size\n\nI did some research on this and generated the following patch. I didn't\nfind much in the way of problems except two vacuum.c fields that should\nprobably be BlockNumber. freespace.c also has a numPages field in\nFSMRelation that is int. Should that be BlockNumber?\n\nI am holding the patch until I get some feedback.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/commands/vacuum.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/commands/vacuum.c,v\nretrieving revision 1.230\ndiff -c -r1.230 vacuum.c\n*** src/backend/commands/vacuum.c\t20 Jun 2002 20:29:27 -0000\t1.230\n--- src/backend/commands/vacuum.c\t8 Jul 2002 03:44:12 -0000\n***************\n*** 60,68 ****\n \n typedef struct VacPageListData\n {\n! \tBlockNumber empty_end_pages;\t/* Number of \"empty\" end-pages */\n! \tint\t\t\tnum_pages;\t\t/* Number of pages in pagedesc */\n! \tint\t\t\tnum_allocated_pages;\t/* Number of allocated pages in\n \t\t\t\t\t\t\t\t\t\t * pagedesc */\n \tVacPage *pagedesc;\t\t/* Descriptions of pages */\n } VacPageListData;\n--- 60,68 ----\n \n typedef struct VacPageListData\n {\n! \tBlockNumber empty_end_pages;\t\t/* Number of \"empty\" end-pages */\n! \tBlockNumber\tnum_pages;\t\t\t\t/* Number of pages in pagedesc */\n! 
\tBlockNumber\tnum_allocated_pages;\t/* Number of allocated pages in\n \t\t\t\t\t\t\t\t\t\t * pagedesc */\n \tVacPage *pagedesc;\t\t/* Descriptions of pages */\n } VacPageListData;\n***************\n*** 988,994 ****\n \t\t\t\tusable_free_size;\n \tSize\t\tmin_tlen = MaxTupleSize;\n \tSize\t\tmax_tlen = 0;\n- \tint\t\t\ti;\n \tbool\t\tdo_shrinking = true;\n \tVTupleLink\tvtlinks = (VTupleLink) palloc(100 * sizeof(VTupleLinkData));\n \tint\t\t\tnum_vtlinks = 0;\n--- 988,993 ----\n***************\n*** 1285,1291 ****\n \t */\n \tif (do_shrinking)\n \t{\n! \t\tAssert((BlockNumber) fraged_pages->num_pages >= empty_end_pages);\n \t\tfraged_pages->num_pages -= empty_end_pages;\n \t\tusable_free_size = 0;\n \t\tfor (i = 0; i < fraged_pages->num_pages; i++)\n--- 1284,1292 ----\n \t */\n \tif (do_shrinking)\n \t{\n! \t\tBlockNumber\ti;\n! \n! \t\tAssert(fraged_pages->num_pages >= empty_end_pages);\n \t\tfraged_pages->num_pages -= empty_end_pages;\n \t\tusable_free_size = 0;\n \t\tfor (i = 0; i < fraged_pages->num_pages; i++)\n***************\n*** 1412,1418 ****\n \n \tNvacpagelist.num_pages = 0;\n \tnum_fraged_pages = fraged_pages->num_pages;\n! \tAssert((BlockNumber) vacuum_pages->num_pages >= vacuum_pages->empty_end_pages);\n \tvacuumed_pages = vacuum_pages->num_pages - vacuum_pages->empty_end_pages;\n \tif (vacuumed_pages > 0)\n \t{\n--- 1413,1419 ----\n \n \tNvacpagelist.num_pages = 0;\n \tnum_fraged_pages = fraged_pages->num_pages;\n! \tAssert(vacuum_pages->num_pages >= vacuum_pages->empty_end_pages);\n \tvacuumed_pages = vacuum_pages->num_pages - vacuum_pages->empty_end_pages;\n \tif (vacuumed_pages > 0)\n \t{\n***************\n*** 2332,2342 ****\n \t\t\tWriteBuffer(buf);\n \t\t}\n \n! \t\t/* now - free new list of reaped pages */\n! \t\tcurpage = Nvacpagelist.pagedesc;\n! \t\tfor (i = 0; i < Nvacpagelist.num_pages; i++, curpage++)\n! \t\t\tpfree(*curpage);\n! \t\tpfree(Nvacpagelist.pagedesc);\n \t}\n \n \t/*\n--- 2333,2346 ----\n \t\t\tWriteBuffer(buf);\n \t\t}\n \n! 
\t\t{\n! \t\t\tBlockNumber i;\n! \t\t\t/* now - free new list of reaped pages */\n! \t\t\tcurpage = Nvacpagelist.pagedesc;\n! \t\t\tfor (i = 0; i < Nvacpagelist.num_pages; i++, curpage++)\n! \t\t\t\tpfree(*curpage);\n! \t\t\tpfree(Nvacpagelist.pagedesc);\n! \t\t}\n \t}\n \n \t/*\n***************\n*** 2381,2393 ****\n \tBuffer\t\tbuf;\n \tVacPage *vacpage;\n \tBlockNumber relblocks;\n! \tint\t\t\tnblocks;\n \tint\t\t\ti;\n \n \tnblocks = vacuum_pages->num_pages;\n \tnblocks -= vacuum_pages->empty_end_pages;\t/* nothing to do with them */\n \n! \tfor (i = 0, vacpage = vacuum_pages->pagedesc; i < nblocks; i++, vacpage++)\n \t{\n \t\tCHECK_FOR_INTERRUPTS();\n \t\tif ((*vacpage)->offsets_free > 0)\n--- 2385,2398 ----\n \tBuffer\t\tbuf;\n \tVacPage *vacpage;\n \tBlockNumber relblocks;\n! \tBlockNumber\tnblocks;\n! \tBlockNumber\tblks;\n \tint\t\t\ti;\n \n \tnblocks = vacuum_pages->num_pages;\n \tnblocks -= vacuum_pages->empty_end_pages;\t/* nothing to do with them */\n \n! \tfor (blks = 0, vacpage = vacuum_pages->pagedesc; blks < nblocks; blks++, vacpage++)\n \t{\n \t\tCHECK_FOR_INTERRUPTS();\n \t\tif ((*vacpage)->offsets_free > 0)\n***************\n*** 2636,2643 ****\n vac_update_fsm(Relation onerel, VacPageList fraged_pages,\n \t\t\t BlockNumber rel_pages)\n {\n! \tint\t\t\tnPages = fraged_pages->num_pages;\n! \tint\t\t\ti;\n \tBlockNumber *pages;\n \tSize\t *spaceAvail;\n \n--- 2641,2648 ----\n vac_update_fsm(Relation onerel, VacPageList fraged_pages,\n \t\t\t BlockNumber rel_pages)\n {\n! \tBlockNumber\tnPages = fraged_pages->num_pages;\n! 
\tBlockNumber\ti;\n \tBlockNumber *pages;\n \tSize\t *spaceAvail;\n \nIndex: src/backend/storage/freespace/freespace.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/storage/freespace/freespace.c,v\nretrieving revision 1.12\ndiff -c -r1.12 freespace.c\n*** src/backend/storage/freespace/freespace.c\t20 Jun 2002 20:29:34 -0000\t1.12\n--- src/backend/storage/freespace/freespace.c\t8 Jul 2002 03:44:13 -0000\n***************\n*** 371,382 ****\n MultiRecordFreeSpace(RelFileNode *rel,\n \t\t\t\t\t BlockNumber minPage,\n \t\t\t\t\t BlockNumber maxPage,\n! \t\t\t\t\t int nPages,\n \t\t\t\t\t BlockNumber *pages,\n \t\t\t\t\t Size *spaceAvail)\n {\n \tFSMRelation *fsmrel;\n! \tint\t\t\ti;\n \n \tLWLockAcquire(FreeSpaceLock, LW_EXCLUSIVE);\n \tfsmrel = lookup_fsm_rel(rel);\n--- 371,382 ----\n MultiRecordFreeSpace(RelFileNode *rel,\n \t\t\t\t\t BlockNumber minPage,\n \t\t\t\t\t BlockNumber maxPage,\n! \t\t\t\t\t BlockNumber nPages,\n \t\t\t\t\t BlockNumber *pages,\n \t\t\t\t\t Size *spaceAvail)\n {\n \tFSMRelation *fsmrel;\n! \tBlockNumber\ti;\n \n \tLWLockAcquire(FreeSpaceLock, LW_EXCLUSIVE);\n \tfsmrel = lookup_fsm_rel(rel);\nIndex: src/backend/utils/adt/tid.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/utils/adt/tid.c,v\nretrieving revision 1.31\ndiff -c -r1.31 tid.c\n*** src/backend/utils/adt/tid.c\t20 Jun 2002 20:29:38 -0000\t1.31\n--- src/backend/utils/adt/tid.c\t8 Jul 2002 03:44:13 -0000\n***************\n*** 87,93 ****\n \tblockNumber = BlockIdGetBlockNumber(blockId);\n \toffsetNumber = itemPtr->ip_posid;\n \n! \tsprintf(buf, \"(%d,%d)\", (int) blockNumber, (int) offsetNumber);\n \n \tPG_RETURN_CSTRING(pstrdup(buf));\n }\n--- 87,93 ----\n \tblockNumber = BlockIdGetBlockNumber(blockId);\n \toffsetNumber = itemPtr->ip_posid;\n \n! 
\tsprintf(buf, \"(%u,%d)\", blockNumber, (int) offsetNumber);\n \n \tPG_RETURN_CSTRING(pstrdup(buf));\n }\n***************\n*** 140,146 ****\n *\t\tcorrespond to the CTID of a base relation.\n */\n static Datum\n! currtid_for_view(Relation viewrel, ItemPointer tid) \n {\n \tTupleDesc\tatt = RelationGetDescr(viewrel);\n \tRuleLock\t*rulelock;\n--- 140,146 ----\n *\t\tcorrespond to the CTID of a base relation.\n */\n static Datum\n! currtid_for_view(Relation viewrel, ItemPointer tid)\n {\n \tTupleDesc\tatt = RelationGetDescr(viewrel);\n \tRuleLock\t*rulelock;\nIndex: src/include/storage/freespace.h\n===================================================================\nRCS file: /cvsroot/pgsql/src/include/storage/freespace.h,v\nretrieving revision 1.7\ndiff -c -r1.7 freespace.h\n*** src/include/storage/freespace.h\t20 Jun 2002 20:29:52 -0000\t1.7\n--- src/include/storage/freespace.h\t8 Jul 2002 03:44:14 -0000\n***************\n*** 38,44 ****\n extern void MultiRecordFreeSpace(RelFileNode *rel,\n \t\t\t\t\t BlockNumber minPage,\n \t\t\t\t\t BlockNumber maxPage,\n! \t\t\t\t\t int nPages,\n \t\t\t\t\t BlockNumber *pages,\n \t\t\t\t\t Size *spaceAvail);\n extern void FreeSpaceMapForgetRel(RelFileNode *rel);\n--- 38,44 ----\n extern void MultiRecordFreeSpace(RelFileNode *rel,\n \t\t\t\t\t BlockNumber minPage,\n \t\t\t\t\t BlockNumber maxPage,\n! \t\t\t\t\t BlockNumber nPages,\n \t\t\t\t\t BlockNumber *pages,\n \t\t\t\t\t Size *spaceAvail);\n extern void FreeSpaceMapForgetRel(RelFileNode *rel);", "msg_date": "Sun, 7 Jul 2002 23:51:30 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "BlockNumber fixes" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I did some research on this and generated the following patch. I didn't\n> find much in the way of problems except two vacuum.c fields that should\n> probably be BlockNumber. freespace.c also has a numPages field in\n> FSMRelation that is int. 
Should that be BlockNumber?\n\nNot necessary, since the freespace map will never be large enough to\noverflow a signed int (it wouldn't fit in the address space if it were).\nI think that your changes in vacuum.c are probably unnecessary for the\nsame reason. I am generally wary of changing values from signed to\nunsigned without close analysis of how they are used --- did you look\nat *every* comparison involving these fields? How about arithmetic\nthat might compute a negative result?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Jul 2002 10:19:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BlockNumber fixes " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I did some research on this and generated the following patch. I didn't\n> > find much in the way of problems except two vacuum.c fields that should\n> > probably be BlockNumber. freespace.c also has a numPages field in\n> > FSMRelation that is int. Should that be BlockNumber?\n> \n> Not necessary, since the freespace map will never be large enough to\n> overflow a signed int (it wouldn't fit in the address space if it were).\n> I think that your changes in vacuum.c are probably unnecessary for the\n> same reason. I am generally wary of changing values from signed to\n> unsigned without close analysis of how they are used --- did you look\n> at *every* comparison involving these fields? How about arithmetic\n> that might compute a negative result?\n\nThe only computation I saw was:\n\n vacuumed_pages = vacuum_pages->num_pages - vacuum_pages->empty_end_pages;\n\nvacuumed_pages is signed, the others are unsigned. However, we print\nthese values as %u so there is a certain confusion there.\n\nIf you say it isn't a problem here, I will just mark the item as done\nand that we are handling the block numbers correctly. The only other\nunusual case I saw was tid outputing block number as %d and not %u. 
Is\nthat OK?\n\n sprintf(buf, \"(%d,%d)\", (int) blockNumber, (int) offsetNumber);\n\ntidin uses atoi:\n\n blockNumber = (BlockNumber) atoi(coord[0]);\n\nso at least it is consistent. ;-) Doesn't seem right, however.\n\nAlso, pg_class.relpages is an int. We don't have unsigned int columns.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Jul 2002 01:58:19 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: BlockNumber fixes" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> The only other\n> unusual case I saw was tid outputing block number as %d and not %u. Is\n> that OK?\n\nSeems like it should use %u. The input side might be wrong too.\n\n> Also, pg_class.relpages is an int. We don't have unsigned int columns.\n\nYeah. I had a todo item to look at all the uses of relpages and make\nsure they were being casted to unsigned ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Jul 2002 02:25:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BlockNumber fixes " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The only other\n> > unusual case I saw was tid outputing block number as %d and not %u. Is\n> > that OK?\n> \n> Seems like it should use %u. The input side might be wrong too.\n> \n\nOK, fixed. Patch attached. 
There was also some confusion in the code\nof how strtol returns its end pointer as always non-NULL:\n\t\n\ttest=> insert into x values ('(1,2)');\n\tINSERT 16591 1\n\ttest=> insert into x values ('(1000000000,2)');\n\tINSERT 16592 1\n\ttest=> insert into x values ('(3000000000,2)');\n\tINSERT 16593 1\n\ttest=> select * from x;\n\t y \n\t----------------\n\t (1,2)\n\t (1000000000,2)\n\t (3000000000,2)\n\t(3 rows)\n\t\n\ttest=> insert into x values ('(5000000000,2)');\n\tERROR: tidin: invalid value.\n\ttest=> insert into x values ('(3000000000,200000)');\n\tERROR: tidin: invalid value.\n\ttest=> insert into x values ('(3000000000,20000)');\n\tINSERT 16595 1\n\n> > Also, pg_class.relpages is an int. We don't have unsigned int columns.\n> \n> Yeah. I had a todo item to look at all the uses of relpages and make\n> sure they were being casted to unsigned ...\n\nThey all look OK to me.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/utils/adt/numutils.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/utils/adt/numutils.c,v\nretrieving revision 1.49\ndiff -c -r1.49 numutils.c\n*** src/backend/utils/adt/numutils.c\t20 Jun 2002 20:29:38 -0000\t1.49\n--- src/backend/utils/adt/numutils.c\t16 Jul 2002 17:51:06 -0000\n***************\n*** 46,52 ****\n pg_atoi(char *s, int size, int c)\n {\n \tlong\t\tl = 0;\n! \tchar\t *badp = (char *) NULL;\n \n \tAssert(s);\n \n--- 46,52 ----\n pg_atoi(char *s, int size, int c)\n {\n \tlong\t\tl = 0;\n! \tchar\t *badp;\n \n \tAssert(s);\n \n***************\n*** 71,77 ****\n \t */\n \tif (errno && errno != EINVAL)\n \t\telog(ERROR, \"pg_atoi: error reading \\\"%s\\\": %m\", s);\n! 
\tif (badp && *badp && (*badp != c))\n \t\telog(ERROR, \"pg_atoi: error in \\\"%s\\\": can\\'t parse \\\"%s\\\"\", s, badp);\n \n \tswitch (size)\n--- 71,77 ----\n \t */\n \tif (errno && errno != EINVAL)\n \t\telog(ERROR, \"pg_atoi: error reading \\\"%s\\\": %m\", s);\n! \tif (*badp && *badp != c)\n \t\telog(ERROR, \"pg_atoi: error in \\\"%s\\\": can\\'t parse \\\"%s\\\"\", s, badp);\n \n \tswitch (size)\nIndex: src/backend/utils/adt/tid.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/utils/adt/tid.c,v\nretrieving revision 1.31\ndiff -c -r1.31 tid.c\n*** src/backend/utils/adt/tid.c\t20 Jun 2002 20:29:38 -0000\t1.31\n--- src/backend/utils/adt/tid.c\t16 Jul 2002 17:51:06 -0000\n***************\n*** 18,23 ****\n--- 18,27 ----\n \n #include \"postgres.h\"\n \n+ #include <errno.h>\n+ #include <math.h>\n+ #include <limits.h>\n+ \n #include \"access/heapam.h\"\n #include \"catalog/namespace.h\"\n #include \"utils/builtins.h\"\n***************\n*** 47,52 ****\n--- 51,58 ----\n \tItemPointer result;\n \tBlockNumber blockNumber;\n \tOffsetNumber offsetNumber;\n+ \tchar\t *badp;\n+ \tint\t\t\thold_offset;\n \n \tfor (i = 0, p = str; *p && i < NTIDARGS && *p != RDELIM; p++)\n \t\tif (*p == DELIM || (*p == LDELIM && !i))\n***************\n*** 55,62 ****\n \tif (i < NTIDARGS)\n \t\telog(ERROR, \"invalid tid format: '%s'\", str);\n \n! \tblockNumber = (BlockNumber) atoi(coord[0]);\n! \toffsetNumber = (OffsetNumber) atoi(coord[1]);\n \n \tresult = (ItemPointer) palloc(sizeof(ItemPointerData));\n \n--- 61,76 ----\n \tif (i < NTIDARGS)\n \t\telog(ERROR, \"invalid tid format: '%s'\", str);\n \n! \terrno = 0;\n! \tblockNumber = strtoul(coord[0], &badp, 10);\n! \tif (errno || *badp != DELIM)\n! \t\telog(ERROR, \"tidin: invalid value.\");\n! \n! \thold_offset = strtol(coord[1], &badp, 10);\n! \tif (errno || *badp != RDELIM ||\n! \t\thold_offset > USHRT_MAX || hold_offset < 0)\n! \t\telog(ERROR, \"tidin: invalid value.\");\n! 
\toffsetNumber = hold_offset;\n \n \tresult = (ItemPointer) palloc(sizeof(ItemPointerData));\n \n***************\n*** 87,93 ****\n \tblockNumber = BlockIdGetBlockNumber(blockId);\n \toffsetNumber = itemPtr->ip_posid;\n \n! \tsprintf(buf, \"(%d,%d)\", (int) blockNumber, (int) offsetNumber);\n \n \tPG_RETURN_CSTRING(pstrdup(buf));\n }\n--- 101,107 ----\n \tblockNumber = BlockIdGetBlockNumber(blockId);\n \toffsetNumber = itemPtr->ip_posid;\n \n! \tsprintf(buf, \"(%u,%u)\", blockNumber, offsetNumber);\n \n \tPG_RETURN_CSTRING(pstrdup(buf));\n }", "msg_date": "Tue, 16 Jul 2002 13:54:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: BlockNumber fixes" } ]
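The thread above turns on two C pitfalls: `atoi` silently mishandles block numbers above 2^31, and printing a `BlockNumber` with `%d` shows them as negative. The following standalone sketch restates both fixes outside the backend; `parse_block` and `format_tid` are illustrative names, not PostgreSQL functions, and the error handling is a simplified stand-in for the `elog` calls in the patch.

```c
#include <assert.h>
#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef unsigned int BlockNumber;   /* matches PostgreSQL's block.h */

/* Parse a block number the way the patched tidin does: strtoul covers
 * the full unsigned 32-bit range, and errno plus the end pointer are
 * checked instead of trusting atoi's silent truncation. */
static int
parse_block(const char *s, BlockNumber *out)
{
    char           *badp;
    unsigned long   v;

    errno = 0;
    v = strtoul(s, &badp, 10);
    if (errno != 0 || badp == s || *badp != '\0' ||
        v > (unsigned long) UINT_MAX)
        return -1;
    *out = (BlockNumber) v;
    return 0;
}

/* Format a TID the way the patched tidout does: %u, not %d, so block
 * numbers above 2^31 do not print as negative. */
static void
format_tid(char *buf, size_t len, BlockNumber block, unsigned short offset)
{
    snprintf(buf, len, "(%u,%u)", block, (unsigned int) offset);
}
```

With this, `(3000000000,2)` round-trips cleanly, while `5000000000` (too large for a 32-bit BlockNumber) is rejected, mirroring the behavior shown in the psql session above.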
[ { "msg_contents": "\n> OK, so you do a tar backup of a file. While you are doing the tar,\n> certain 8k blocks are being modified in the file. There is no way to\n> know what blocks are modified as you are doing the tar, and in fact you\n> could read partial page writes during the tar.\n\nNo, I think all OS's (Unix and NT at least) guard against this, as long as \nthe whole 8k block is written in one call. It is only the physical layer (disk) \nthat is prone to partial writes.\n\n> any page that was modified while we were backing up is in the WAL. On\n> restore, we can recover whatever tar saw of the file, knowing that the\n> WAL page images will recover any page changes made during the tar.\n\nAssuming above, I do not think this is necessary.\n\n> What I suggest is a way for the backup tar to turn on pre-change page\n> images while the tar is happening, and turn it off after the tar is\n> done.\n\nAgain, I do not think this is necessary.\n\nAndreas\n\n\n", "msg_date": "Mon, 8 Jul 2002 14:15:02 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "Zeugswetter Andreas SB SD wrote:\n> \n> > OK, so you do a tar backup of a file. While you are doing the tar,\n> > certain 8k blocks are being modified in the file. There is no way to\n> > know what blocks are modified as you are doing the tar, and in fact you\n> > could read partial page writes during the tar.\n> \n> No, I think all OS's (Unix and NT at least) guard against this, as long as \n> the whole 8k block is written in one call. It is only the physical layer (disk) \n> that is prone to partial writes.\n\nYes, good point. The kernel will present a unified view of the 8k\nblock. Of course, there are still cases where 8k blocks are being\nchanged in front/behind in the tarred file. 
Will WAL allow us to\nre-synchronize that file even if part of it has pages from an earlier in\ntime than other pages. Uh, I think so. So maybe we don't need the\npre-write images in WAL after all. Can we replay the WAL when some\npages in the restored file _have_ the WAL changes and some don't? Maybe\nthe LSN on the pages helps with this.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Mon, 8 Jul 2002 12:51:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" } ]
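The question at the end of this exchange — can replay cope when some restored pages already contain a WAL change and some don't — is exactly what the per-page LSN answers. The sketch below is a toy model, not backend code: the LSN is flattened to a single 64-bit counter and the "page contents" to one int, but it shows how LSN-gated redo lets pages copied at different moments during the tar converge to one state.

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;    /* simplified: one 64-bit log offset */

typedef struct
{
    XLogRecPtr  lsn;    /* LSN of the last WAL record applied here */
    int         value;  /* stand-in for the 8k page contents */
} Page;

/* LSN-gated redo: a record only touches pages that have not already
 * seen it.  A page copied early in the tar (older LSN) gets the change
 * replayed; a page copied later (newer LSN, change already present) is
 * left alone.  Replaying the same log against both gives one result. */
static void
redo(Page *page, XLogRecPtr rec_lsn, int new_value)
{
    if (page->lsn < rec_lsn)
    {
        page->value = new_value;
        page->lsn = rec_lsn;
    }
}
```

Replaying the full log from the backup-start checkpoint thus makes the mixed-vintage file consistent again, which is why the pre-change page images discussed earlier may be unnecessary.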
[ { "msg_contents": "\n> As noted, one of the main problems is knowing where to begin\n> in the log. This can be handled by having backup processing \n> update the control file with the first lsn and log file \n> required. At the time of the backup, this information is or \n> can be made available. The control file can be the last file\n> added to the tar and can contain information spanning the entire\n> backup process.\n\nlsn and logfile number (of latest checkpoints) is already in the control \nfile, thus you need control file at start of backup. (To reduce the number \nof logs needed for restore of an online backup you could force a checkpoint\nbefore starting file backup)\n\nYou will also need lsn and logfile number after file backup, to know how much \nlog needs to at least be replayed to regain a consistent state. \n\nAndreas\n\n\n", "msg_date": "Mon, 8 Jul 2002 14:51:04 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "Zeugswetter Andreas SB SD wrote:\n> \n> > As noted, one of the main problems is knowing where to begin\n> > in the log. This can be handled by having backup processing\n> > update the control file with the first lsn and log file\n> > required. At the time of the backup, this information is or\n> > can be made available. The control file can be the last file\n> > added to the tar and can contain information spanning the entire\n> > backup process.\n> \n> lsn and logfile number (of latest checkpoints) is already in the control\n> file, thus you need control file at start of backup. (To reduce the number\n> of logs needed for restore of an online backup you could force a checkpoint\n> before starting file backup)\n\nMaybe I should have been more clear. The control file snapshot must \nbe taken at backup start (as you mention) but can be stored in cache.\nThe fields can then be modified as we see fit. 
At the end of backup,\nwe can write this to a temp file and add it to the tar. Therefore,\nas mentioned, the snapshot spans the entire backup process.\n \n> You will also need lsn and logfile number after file backup, to know how much\n> log needs to at least be replayed to regain a consistent state.\n\nThis is a nicety but not a necessity. If you have a backup end log \nrecord, you just have to enforce that the PIT recovery encounters \nthat particular log record on forward recovery. Once encountered,\nyou know that you at passed the point of back up end.\n\nCheers,\nPatrick\n\n\n", "msg_date": "Mon, 08 Jul 2002 09:10:30 -0400", "msg_from": "Patrick Macdonald <patrickm@redhat.com>", "msg_from_op": false, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "I know that in Oracle there are 'alter database begin backup' and 'alter \ndatabase end backup' commands that allow you to script your hot backups \nthrough a cron job by calling the begin backup command first, then using \ndisk backup method of choice and then finally call the end backup command.\n\n--Barry\n\nPatrick Macdonald wrote:\n\n>Zeugswetter Andreas SB SD wrote:\n> \n>\n>>>As noted, one of the main problems is knowing where to begin\n>>>in the log. This can be handled by having backup processing\n>>>update the control file with the first lsn and log file\n>>>required. At the time of the backup, this information is or\n>>>can be made available. The control file can be the last file\n>>>added to the tar and can contain information spanning the entire\n>>>backup process.\n>>> \n>>>\n>>lsn and logfile number (of latest checkpoints) is already in the control\n>>file, thus you need control file at start of backup. (To reduce the number\n>>of logs needed for restore of an online backup you could force a checkpoint\n>>before starting file backup)\n>> \n>>\n>\n>Maybe I should have been more clear. 
The control file snapshot must \n>be taken at backup start (as you mention) but can be stored in cache.\n>The fields can then be modified as we see fit. At the end of backup,\n>we can write this to a temp file and add it to the tar. Therefore,\n>as mentioned, the snapshot spans the entire backup process.\n> \n> \n>\n>>You will also need lsn and logfile number after file backup, to know how much\n>>log needs to at least be replayed to regain a consistent state.\n>> \n>>\n>\n>This is a nicety but not a necessity. If you have a backup end log \n>record, you just have to enforce that the PIT recovery encounters \n>that particular log record on forward recovery. Once encountered,\n>you know that you at passed the point of back up end.\n>\n>Cheers,\n>Patrick\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n>http://www.postgresql.org/users-lounge/docs/faq.html\n>\n>\n>\n> \n>\n\n\n\n\n", "msg_date": "Mon, 08 Jul 2002 09:53:10 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "On Mon, 2002-07-08 at 21:53, Barry Lind wrote:\n> I know that in Oracle there are 'alter database begin backup' and 'alter \n> database end backup' commands that allow you to script your hot backups \n> through a cron job by calling the begin backup command first, then using \n> disk backup method of choice and then finally call the end backup command.\n\nThis gave me an idea of a not-too-difficult-to-implement way of doing\nconsistent online backups (thanks to MVCC it is probably much easier\nthan Oracle's):\n\n\nBackup:\n\n1) record the lowest uncommitted transaction number (LUTN) , this may\nhave problems with wraparound, but I guess they are solvable. Disllow\nVACUUM. Do a CHECKPOINT ('alter database begin backup')\n\n3) make a file-level (.tar) backup of data directory.\n\n4) Allow VACUUM. 
('alter database end backup')\n\n\n\nRestore:\n\n1) restore the data directory from file-level backup\n\n2) mark all transactions committed after LUTN as aborted, effectively\ndeleting all tuples inserted and resurrecting those deleted/updated\nafter start of backups. \n\n3) make sure that new transaction number is large enough.\n\n\n\nPS. It would be nice if our OID-based filenames had some type indicator\nin their names - it is usually waste of time and space to backup indexes\nand temp tables. The names could be of form\npg_class.relkind:pg_class.relfilenode instead of just\npg_class.relfilenode they are now.\n\n\n-------------------\nHannu\n\n\n\n", "msg_date": "09 Jul 2002 11:19:42 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> 1) record the lowest uncommitted transaction number (LUTN) , this may\n> have problems with wraparound, but I guess they are solvable. Disllow\n> VACUUM. Do a CHECKPOINT ('alter database begin backup')\n> 3) make a file-level (.tar) backup of data directory.\n> 4) Allow VACUUM. ('alter database end backup')\n\nTransactions don't necessarily commit in sequence number order, so the\nconcept of LUTN seems meaningless.\n\nWhy is it necessary (or even good) to disallow VACUUM? I really dislike\na design that allows the DBA to cripple the database by forgetting the\nlast step in a (long) process.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Jul 2002 11:26:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR) " }, { "msg_contents": "On Tue, 2002-07-09 at 17:26, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > 1) record the lowest uncommitted transaction number (LUTN) , this may\n> > have problems with wraparound, but I guess they are solvable. Disllow\n> > VACUUM. 
Do a CHECKPOINT ('alter database begin backup')\n> > 3) make a file-level (.tar) backup of data directory.\n> > 4) Allow VACUUM. ('alter database end backup')\n> \n> Transactions don't necessarily commit in sequence number order, so the\n> concept of LUTN seems meaningless.\n\nNot quite. It is the most simple way to be sure that if we invalidate\nall transactions >= than it we get back to a fairly recent\nPoint-In-Time.\n\nThe real solution would of course be to remember all committed\ntransactions at this PIT, which can probably be done by remembering LUTN\nand all individual committed transactions > LUTN\n\n> Why is it necessary (or even good) to disallow VACUUM?\n\nSo that it would be possible to resurrect these tuples that have been\ndeleted/updated during disk-level backup.\n\nI would like better the ability to tell VACUUM not to touch tuples where\ndeleting transaction number >= LUTN . IIRC the original postgres was\nable to do that.\n\n> I really dislike\n> a design that allows the DBA to cripple the database by forgetting the\n> last step in a (long) process.\n\nThere are several ways around it.\n\n1. do it in a script, that will not forget.\n\n2. Closing the session that did 'alter database begin backup' session\ncould do it automatically, but this would make the backup script\ntrickier.\n\n3. VACUUM should not block but report a warning about being restricted\nfrom running.\n\n4. database can be instructed to send a message to DBA's pager if it has\nbeen in 'begin backup' state too long ;)\n\n----------------\nHannu\n\n\n", "msg_date": "09 Jul 2002 19:27:37 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Issues Outstanding for Point In Time Recovery (PITR)" } ]
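Hannu's LUTN restore step 2 can be restated as a tuple-visibility rule. The sketch below is a deliberately simplified model of that proposal only: it ignores pg_clog commit-status lookups entirely, which is precisely where Tom's objection about out-of-order commits bites, and `visible_after_restore` is a hypothetical name, not a backend function.

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t TransactionId;

typedef struct
{
    TransactionId xmin;     /* inserting transaction */
    TransactionId xmax;     /* deleting transaction, 0 = never deleted */
} TupleHeader;

/* Toy restore-time visibility per the LUTN proposal: every transaction
 * numbered at or above the LUTN recorded at backup start is treated as
 * rolled back.  Inserts by such transactions disappear; deletes by
 * them are undone ("resurrected"), as long as VACUUM has not removed
 * the dead tuples in the meantime. */
static int
visible_after_restore(const TupleHeader *tup, TransactionId lutn)
{
    if (tup->xmin >= lutn)
        return 0;       /* inserted during/after backup: rolled back */
    if (tup->xmax != 0 && tup->xmax < lutn)
        return 0;       /* deleted before backup started: stays gone */
    return 1;
}
```

This also makes the VACUUM restriction concrete: resurrecting a tuple with `xmax >= lutn` only works if its physical copy still exists, hence the idea of blocking (or bounding) VACUUM between begin-backup and end-backup.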
[ { "msg_contents": "\n> No, what I envisioned was a standalone dumper that can produce dump output \n> without having a backend at all. If this dumper knows about the various \n> binary formats, and knows how to get my data into a form I can then restore \n> reliably, I will be satisfied. If it can be easily automated so much the \n> better. Doing it table by table would be ok as well.\n\nUnless it dumps binary representation of columns, a standalone dumper\nwould still need to load all the output function shared libs for custom types\n(or not support custom types which would imho not be good).\n\nAndreas\n\n\n", "msg_date": "Mon, 8 Jul 2002 15:15:00 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "Zeugswetter Andreas SB SD wrote:\n> \n> > No, what I envisioned was a standalone dumper that can produce dump output\n> > without having a backend at all. If this dumper knows about the various\n> > binary formats, and knows how to get my data into a form I can then restore\n> > reliably, I will be satisfied. If it can be easily automated so much the\n> > better. Doing it table by table would be ok as well.\n> \n> Unless it dumps binary representation of columns, a standalone dumper\n> would still need to load all the output function shared libs for custom types\n> (or not support custom types which would imho not be good).\n\nAnd now we change the internal representation of NUMERIC to a short\ninteger array holding the number in base 10,000 and what exactly does\nthe standalone dumpster do with our data?\n\nAnother good example: let's add a field to some parsenode struct (was\nthere a release where this didn't happen?). This causes the NodeOut()\nresults to become a little different, which actually changes the textual\ncontent of a likely toasted pg_rewrite attribute. Stored compressed and\nsliced. 
I am quite a bit familiar with TOAST and the rewrite system.\nYet, someone has to help me a little to understand how we can do this\nconversion in binary on the fly with an external tool. Especially where\nthis conversion results in different raw and compressed sizes of the\nTOASTed attribute, which has to propagate up into the TOAST reference in\nthe main table ... not to speak of possible required block splits in the\ntoast table and index because of needing one more slice!\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Mon, 08 Jul 2002 15:20:46 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Tue, 2002-07-09 at 00:20, Jan Wieck wrote:\n> Zeugswetter Andreas SB SD wrote:\n> > \n> > > No, what I envisioned was a standalone dumper that can produce dump output\n> > > without having a backend at all. If this dumper knows about the various\n> > > binary formats, and knows how to get my data into a form I can then restore\n> > > reliably, I will be satisfied. If it can be easily automated so much the\n> > > better. Doing it table by table would be ok as well.\n> > \n> > Unless it dumps binary representation of columns, a standalone dumper\n> > would still need to load all the output function shared libs for custom types\n> > (or not support custom types which would imho not be good).\n> \n> And now we change the internal representation of NUMERIC to a short\n> integer array holding the number in base 10,000 and what exactly does\n> the standalone dumpster do with our data?\n>\n> Another good example: let's add a field to some parsenode struct (was\n> there a release where this didn't happen?). 
This causes the NodeOut()\n> results to become a little different, which actually changes the textual\n> content of a likely toasted pg_rewrite attribute. Stored compressed and\n> sliced. I am quite a bit familiar with TOAST and the rewrite system.\n> Yet, someone has to help me a little to understand how we can do this\n> conversion in binary on the fly with an external tool. Especially where\n> this conversion results in different raw and compressed sizes of the\n> TOASTed attribute, which has to propagate up into the TOAST reference in\n> the main table ... not to speak of possible required block splits in the\n> toast table and index because of needing one more slice!\n\nThis brings us back to my original proposal : this \"external tool\" needs\nto be either a full postgres backend with added DUMP command or\nsomething that can use a (possibly single-user) backend as either a \nlibrary or, yes, a \"backend\" ;)\n\n---------------------\nHannu\n\n\n\n\n", "msg_date": "09 Jul 2002 00:24:13 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Monday 08 July 2002 03:20 pm, Jan Wieck wrote:\n> Zeugswetter Andreas SB SD wrote:\n> > Unless it dumps binary representation of columns, a standalone dumper\n> > would still need to load all the output function shared libs for custom\n> > types (or not support custom types which would imho not be good).\n\n> And now we change the internal representation of NUMERIC to a short\n> integer array holding the number in base 10,000 and what exactly does\n> the standalone dumpster do with our data?\n\nWhat does a standard dump/restore do then as well? Is the restore process \ncomplicated by a rebuild of the function(s) involved in custom types? This, \nIMHO, is a pathological case even for a standard dump/restore. 
Someone doing \nthis sort of thing is going to have more to do that a simple package upgrade.\n\n> Another good example: let's add a field to some parsenode struct (was\n> there a release where this didn't happen?). This causes the NodeOut()\n> results to become a little different, which actually changes the textual\n> content of a likely toasted pg_rewrite attribute. Stored compressed and\n> sliced. I am quite a bit familiar with TOAST and the rewrite system.\n> Yet, someone has to help me a little to understand how we can do this\n> conversion in binary on the fly with an external tool. Especially where\n> this conversion results in different raw and compressed sizes of the\n> TOASTed attribute, which has to propagate up into the TOAST reference in\n> the main table ... not to speak of possible required block splits in the\n> toast table and index because of needing one more slice!\n\nThis is more difficult, certainly. Martijn, how does pg_fsck handle such \nthings now?\n\nAgain, this tool has utility outside upgrading. And I'm talking about dumping \nthe binary down to ASCII to be restored, not binary to binary on the fly.\n\nThis is the best dialog yet on the issue of upgrading. Keep it coming! :-)\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 9 Jul 2002 11:19:09 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Tue, 2002-07-09 at 17:19, Lamar Owen wrote:\n> On Monday 08 July 2002 03:20 pm, Jan Wieck wrote:\n> > Another good example: let's add a field to some parsenode struct (was\n> > there a release where this didn't happen?). This causes the NodeOut()\n> > results to become a little different, which actually changes the textual\n> > content of a likely toasted pg_rewrite attribute. Stored compressed and\n> > sliced. 
I am quite a bit familiar with TOAST and the rewrite system.\n> > Yet, someone has to help me a little to understand how we can do this\n> > conversion in binary on the fly with an external tool. Especially where\n> > this conversion results in different raw and compressed sizes of the\n> > TOASTed attribute, which has to propagate up into the TOAST reference in\n> > the main table ... not to speak of possible required block splits in the\n> > toast table and index because of needing one more slice!\n> \n> This is more difficult, certainly. Martijn, how does pg_fsck handle such \n> things now?\n> \n> Again, this tool has utility outside upgrading. And I'm talking about dumping \n> the binary down to ASCII to be restored, not binary to binary on the fly.\n\nYou seem to be talking about pg_dump + old backend, no ? ;)\n\nFor me it seems a given that you need old binary-reading code to read\nold binary format data. The most convenient place to keep it is inside\nan old backend. \n\nIt may be possible to migrate simple user table data from old version to\nnew, but it gets complicated real fast for most other things, especially\nfor stuff in system tables.\n\n---------------\nHannu\n\n", "msg_date": "09 Jul 2002 18:57:54 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" } ]
[ { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n> Nothing. We even have audio files on the website so there is \n> no question on how to pronounce it. Some folks just aren't \n> happy unless they can change what doesn't need changing is \n> all. I guess it's their way of contributing.\n\nI don't understand some of the animosity I have seen towards \nwhat I consider a positive change. It obviously does need \nchanging, otherwise so many people would not be discussing it. \nAn audio file on the website is not going to help people who \nare reading the word inside text somewhere. The very fact that \nsuch an audio file even needs to exist should be telling us \nsomething. It was created because people were having a hard time \npronouncing it. Saying that people can pronounce it properly \nbecause there is an audio file smacks of circular logic.\n\n> However, we all have taken a *lot* of time and effort to get \n> the name \"PostgreSQL\" recognised, and we should continue on \n> doing this. Nowdays I'm finding it very unusual to see new \n> articles and publications going online and getting it wrong, \n> meaning that although there are legacy documents out there \n> refering to \"Postgres\", most of the new stuff online is \n> calling it the proper \"PostgreSQL\".\n\nSorry, but the shortcut \"Postgres\" is alive and kicking. Look \nanywhere, even in the mailing lists. The default user is \n\"postgres\" not \"postgresql\". Another point is that postgres \nfits in the 8.3 naming schema, which is not really used anymore, \nbut does serve as a fairly good rule of thumb. A product name \nshould be 8 letters or less (Linux, Apache, Windows, Sybase, \nOracle, Ingres, Sybase, MySQL, Apache, tinydns, sendmail, qmail, \niptables, etc.) 
(Microsoft products are an exception of course, \nbut they have the marketing power to name something \n\"throatwobblermangrove\" and still have the masses purchase it :)\n\nI might agree somewhat with the \"don't break tradition\" argument \nif there had been a concerted effort from the start to \n*dissuade* people from using the word \"postgres\", but as far as \nI can tell, most people involved with Postgre[sS][QL] don't \nreally care which one is being used, and all other things being \nequal, tend to simplify it to the shorter form, especially \nwhen talking out loud.\n\nI've given presentations on Postgres to technical and \nnon-technical people, and always have to throw in a section \nat the start about the name - how to pronounce the \"long form\", \nhow it came about, and to not worry about which term I use \nwithin the presentation, since they are synonymous. I'd rather \nspend that time extolling some of the better virtues of the \nproduct, rather than trying to explain why we smushed a word \nand an acronym together into something awkward to pronounce. \nPerhaps that's the best way to put it: the word is not difficult \nto pronounce, but it is awkward.\n\n\n\nGreg Sabino Mullane greg@turnstep.com\nPGP Key: 0x14964AC8 200207081020\n\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niD8DBQE9KaCJvJuQZxSWSsgRAloRAKD4tbIititzKXI08kEpAFSkeT/YrgCfcyDa\n0UOzfOknsIEi7B1kfdTFFGU=\n=ADDF\n-----END PGP SIGNATURE-----\n\n\n\n\n", "msg_date": "Mon, 8 Jul 2002 14:28:58 -0000", "msg_from": "\"Greg Sabino Mullane\" <greg@turnstep.com>", "msg_from_op": true, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On Mon, 8 Jul 2002, Greg Sabino Mullane wrote:\n\n>\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n>\n> > Nothing. We even have audio files on the website so there is\n> > no question on how to pronounce it. Some folks just aren't\n> > happy unless they can change what doesn't need changing is\n> > all. 
I guess it's their way of contributing.\n>\n> I don't understand some of the animosity I have seen towards\n> what I consider a positive change. It obviously does need\n> changing, otherwise so many people would not be discussing it.\n\nFunny, that's just about the same argument the pro GPL people use\nwhen the licensing discussion comes up.\n\n> An audio file on the website is not going to help people who\n> are reading the word inside text somewhere. The very fact that\n> such an audio file even needs to exist should be telling us\n> something. It was created because people were having a hard time\n> pronouncing it. Saying that people can pronounce it properly\n> because there is an audio file smacks of circular logic.\n\nThat's not what was said. Some people can play it over and over and\nstill not be able to pronounce it. Same with my last name and yours\nfor all that matter. But that's no reason to change it, or are you\nwilling to change your last name so all of use can pronounce it? The\naudio file is there so if someone wants to know how it's really\npronounced they can click on it and listen.\n\nTell me, do you pronounce \"Linux\" the same way Linus does or some\nother way? Should \"Linux\" be changed to something that has a more\ncommon pronunciation? 
And yes, I know how Linus pronounces it, I've\nheard it - someone sent me an mp3 of it.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n\n\n", "msg_date": "Mon, 8 Jul 2002 21:09:20 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "> I don't understand some of the animosity I have seen towards\n> what I consider a positive change. It obviously does need\n> changing, otherwise so many people would not be discussing it.\n\nWell, it is the same reaction you would have if someone walked up to you\nand said \"your child has a stupid name. You need to give it another\none\". At least you didn't call our child ugly too ;)\n\nI know that \"you\" is also one of \"us\", but the point is mostly the\nsame...\n\nafaict the name has not inhibited our market acceptance. But maybe I'm\ngiving too much credit to suits' abilities to cope.\n\nPgSQL works for me if I'm wanting an all-acronym acronym, and I've been\nknown to write and use \"Postgres\" as a synonym. But historically we\nneeded to differentiate the name from PostQuel-enabled Postgres. It\ncould be worse: we could have stuck with Postgres95 (ack, spit).\n\n - Thomas\n\n\n", "msg_date": "Mon, 08 Jul 2002 19:47:46 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On Mon, 8 Jul 2002, Greg Sabino Mullane wrote:\n\n> Sorry, but the shortcut \"Postgres\" is alive and kicking. Look anywhere,\n> even in the mailing lists. 
The default user is \"postgres\" not\n> \"postgresql\". Another point is that postgres fits in the 8.3 naming\n> schema\n\nSorry, but you just lost your argument the moment you throw out M$/DOS\nstandards as something to live by ...\n\n\n\n\n", "msg_date": "Tue, 9 Jul 2002 02:18:30 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On Mon, 8 Jul 2002, Vince Vielhaber wrote:\n\n> That's not what was said. Some people can play it over and over and\n> still not be able to pronounce it. Same with my last name and yours\n> for all that matter. But that's no reason to change it, or are you\n> willing to change your last name so all of use can pronounce it?\n\nWoah! I gather you are a techie, and don't care much about marketing, hmm?\n\nPersonally, if I were trying to market myself to a broader audience\nthan I have now, I would almost certainly change my name to something\nless confusing, more memorable and easier to pronounce. And many others\nagree with me, judging by the number of celebrities and wanna-be\ncelebrities that have done so. Not to mention products that have been\nrebranded every decade or three to change with the changing tastes of\nthe target market.\n\nA lot of the things that market-oriented folks want to do (such as\nchanging a name) may seem stupid and useless to you, as a techie, but\nthat doesn't mean that they don't work. It just means that they don't\nwork on people like you, who comprise a very small market. 
(I'm in that\nmarket too, by the way; I just recognise that most people do not make\ndecisions the way I do about what to purchase and/or use.)\n\nI don't know if you were one of the ones complaining about postgres not\nbeing so popular (as, say, compared to MySQL), but if you want it to be\nmore popular, trying to stop the folks interesting in marketing it from\nmarketing it is not the way to help things.\n\n> Tell me, do you pronounce \"Linux\" the same way Linus does or some\n> other way? Should \"Linux\" be changed to something that has a more\n> common pronunciation? And yes, I know how Linus pronounces it, I've\n> heard it - someone sent me an mp3 of it.\n\nThis is a quite different situation. Linux is almost never misspelled,\nor broken up into two incorrect pieces, and Linux is dead easy to\npronounce in English, even if it's not the same (\"correct\") Finnish\npronounciation. The incorrect pronounciation of \"Linux\" is no worse\na problem than your undoubtedly incorrect pronounciation of \"Nissan,\"\n\"Toyota\", \"Tokyo\", and many other Japanese words (or the Japanese\npronounciation of many popular foreign names).\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n\n\n", "msg_date": "Tue, 9 Jul 2002 20:15:31 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On Mon, 8 Jul 2002, Thomas Lockhart wrote:\n\n> > I don't understand some of the animosity I have seen towards\n> > what I consider a positive change. It obviously does need\n> > changing, otherwise so many people would not be discussing it.\n>\n> Well, it is the same reaction you would have if someone walked up to you\n> and said \"your child has a stupid name. You need to give it another\n> one\". At least you didn't call our child ugly too ;)\n\nAnother way of putting it ... 
alot of ppl name their child 'Samantha', but\nhow many refer to them as 'Sam' in everyday conversation? One of their\n'formal, on paper' name, the other is their common name *shrug*\n\n\n", "msg_date": "Tue, 9 Jul 2002 11:21:57 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "Curt Sampson wrote:\n> On Mon, 8 Jul 2002, Vince Vielhaber wrote:\n> \n> > That's not what was said. Some people can play it over and over and\n> > still not be able to pronounce it. Same with my last name and yours\n> > for all that matter. But that's no reason to change it, or are you\n> > willing to change your last name so all of use can pronounce it?\n> \n> Woah! I gather you are a techie, and don't care much about marketing, hmm?\n> \n> Personally, if I were trying to market myself to a broader audience\n> than I have now, I would almost certainly change my name to something\n> less confusing, more memorable and easier to pronounce. And many others\n> agree with me, judging by the number of celebrities and wanna-be\n> celebrities that have done so. Not to mention products that have been\n> rebranded every decade or three to change with the changing tastes of\n> the target market.\n> \n> A lot of the things that market-oriented folks want to do (such as\n> changing a name) may seem stupid and useless to you, as a techie, but\n> that doesn't mean that they don't work. It just means that they don't\n> work on people like you, who comprise a very small market. 
(I'm in that\n> market too, by the way; I just recognise that most people do not make\n> decisions the way I do about what to purchase and/or use.)\n> \n> I don't know if you were one of the ones complaining about postgres not\n> being so popular (as, say, compared to MySQL), but if you want it to be\n> more popular, trying to stop the folks interesting in marketing it from\n> marketing it is not the way to help things.\n\nI totally agree. The name has proven to be hard to pronounce, and that\nis bad for marketing, period. Is marketing important enough to change\nthe name? That is the question.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 9 Jul 2002 12:28:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On Tue, 9 Jul 2002, Bruce Momjian wrote:\n\n> I totally agree. The name has proven to be hard to pronounce, and that\n> is bad for marketing, period. Is marketing important enough to change\n> the name? That is the question.\n\nNo, period.\n\nFor starters, ppl are confusing 'word of mouth' with marketing, which they\naren't the same ...\n\nMarketing *is* the 8 or so books on the shelves and at Amazon for\nPostgreSQL, and the ones to follow ...\n\nMarketing is the thousands of t-shirts and mugs and CDs that have gone out\nover the past 4+ years ...\n\nMarketing is the countless articles/reviews that ppl have written that\ntalk about PostgreSQL ...\n\nMarketing is the awards we have won over the years ...\n\nMarketing is proliferation of the newsgroups over the 'Net over the past\n4+ years ...\n\nMarketing is the countless companies out there that offer PostgreSQL\nservices, support *and* training ...\n\nIf ppl want to be lazy and call it Postgres, so be it ... 
as I mentioned\nbefore, its like calling Samantha, Sam ... but the formal name itself is,\nand will remain, PostgreSQL ... its what *alot* of us have been marketing\nfor years now, period.\n\n\n", "msg_date": "Tue, 9 Jul 2002 14:09:23 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "Marc G. Fournier wrote:\n> On Tue, 9 Jul 2002, Bruce Momjian wrote:\n> \n> > I totally agree. The name has proven to be hard to pronounce, and that\n> > is bad for marketing, period. Is marketing important enough to change\n> > the name? That is the question.\n> \n> No, period.\n> \n> For starters, ppl are confusing 'word of mouth' with marketing, which they\n> aren't the same ...\n> \n> Marketing *is* the 8 or so books on the shelves and at Amazon for\n> PostgreSQL, and the ones to follow ...\n\nSo you think, even marketing-wise that PostgreSQL is better. At a\nminimum, we should add \"also called 'postgres'\" to all our introductory\ndocumentation.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 9 Jul 2002 13:32:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On Tue, 9 Jul 2002, Bruce Momjian wrote:\n\n> Marc G. Fournier wrote:\n> > On Tue, 9 Jul 2002, Bruce Momjian wrote:\n> >\n> > > I totally agree. The name has proven to be hard to pronounce, and that\n> > > is bad for marketing, period. Is marketing important enough to change\n> > > the name? 
That is the question.\n> >\n> > No, period.\n> >\n> > For starters, ppl are confusing 'word of mouth' with marketing, which they\n> > aren't the same ...\n> >\n> > Marketing *is* the 8 or so books on the shelves and at Amazon for\n> > PostgreSQL, and the ones to follow ...\n>\n> So you think, even marketing-wise that PostgreSQL is better. At a\n> minimum, we should add \"also called 'postgres'\" to all our introductory\n> documentation.\n\nI do agree with this ... in fact, it shoudl go as far as saying something\nlike:\n\nPostgreSQL (aka PgSQL aka Postgres aka Pg) ... ppl use all the various\nforms ...\n\nNote that the lists themselves act as their own marketing ...\npgsql-*@postgresql.org ... and all the search engines have postgresql.org\nin them, and, I'm sorry, but someones lame argument about 'whether to\nsearch for postgres or postgresql' ... like, come on ... if you have any\ndoubt, just search for postgres, it *is* a sub-string of the formal name\n...\n\n\n\n\n", "msg_date": "Tue, 9 Jul 2002 14:45:57 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "I hate to play devil's advocate, but isn't the reason it is called\nPostgreSQL now due to marketing? As I understand it, the name PostgreSQL\ncame about because the development team wanted to convey the fact that\nPostgres95 was now an SQL based DBMS. From a technical standpoint\nthere's no reason it couldn't be called Postgres2002 or some such\nnonsense, but it might be more cryptic as to what purpose it serves.\nThat's marketing, pure and simple. \n\nRobert Treat\n\nOn Tue, 2002-07-09 at 13:09, Marc G. Fournier wrote:\n> On Tue, 9 Jul 2002, Bruce Momjian wrote:\n> \n> > I totally agree. The name has proven to be hard to pronounce, and that\n> > is bad for marketing, period. Is marketing important enough to change\n> > the name? 
That is the question.\n> \n> No, period.\n> \n> For starters, ppl are confusing 'word of mouth' with marketing, which they\n> aren't the same ...\n> \n> Marketing *is* the 8 or so books on the shelves and at Amazon for\n> PostgreSQL, and the ones to follow ...\n> \n> Marketing is the thousands of t-shirts and mugs and CDs that have gone out\n> over the past 4+ years ...\n> \n> Marketing is the countless articles/reviews that ppl have written that\n> talk about PostgreSQL ...\n> \n> Marketing is the awards we have won over the years ...\n> \n> Marketing is proliferation of the newsgroups over the 'Net over the past\n> 4+ years ...\n> \n> Marketing is the countless companies out there that offer PostgreSQL\n> services, support *and* training ...\n> \n> If ppl want to be lazy and call it Postgres, so be it ... as I mentioned\n> before, its like calling Samantha, Sam ... but the formal name itself is,\n> and will remain, PostgreSQL ... its what *alot* of us have been marketing\n> for years now, period.\n>\n\n\n", "msg_date": "09 Jul 2002 14:06:20 -0400", "msg_from": "Robert Treat <rtreat@webmd.net>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On Tue, 9 Jul 2002, Bruce Momjian wrote:\n>Curt Sampson wrote:\n>> Personally, if I were trying to market myself to a broader audience\n>> than I have now, I would almost certainly change my name to something\n>> less confusing, more memorable and easier to pronounce. And many others\n>> agree with me, judging by the number of celebrities and wanna-be\n>> celebrities that have done so. Not to mention products that have been\n>> rebranded every decade or three to change with the changing tastes of\n>> the target market.\n\n While I think I agree with Curt, I don't thing that it is an *urgent*\nproblem that is plaguing Postgres[QL]. Funny thing on /. today with\nregards to OpenBeOS interviews:\n\n\" The answer to all the 'is there room in the market?' 
questions was\nanswered in a way: 'We are an OSS project. Marketing is not our job.' \"\n\n Although I do wish that Postgres had better market presence. It is\ncertainly my DB of choice, but many people I have to work with/for will\nfirst ask 'do you support MySQL?'. Now we've *had* to support MySQL, but I\nbelieve that this is only because of its popularity and market presence.\nI'd love to tell them all 'no, we support Postgres[QL] which offers a\nwhole lot more'.\n\nCheers,\n\nChris\n\n-- \n\nChristopher Murtagh\nWebmaster / Sysadmin\nWeb Communications Group\nMcGill University\nMontreal, Quebec\nCanada\n\nTel.: (514) 398-3122\nFax: (514) 398-2017\n\n\n", "msg_date": "Tue, 9 Jul 2002 16:14:36 -0400 (EDT)", "msg_from": "Christopher Murtagh <christopher.murtagh@mcgill.ca>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On Tue, 9 Jul 2002, Marc G. Fournier wrote:\n\n> PostgreSQL (aka PgSQL aka Postgres aka Pg) ... ppl use all the various\n> forms ...\n\nHaving to say that is a very, very good reason to just change the name\nback to \"postgres\" *everywhere*, and stick with that.\n\n> Note that the lists themselves act as their own marketing ...\n> pgsql-*@postgresql.org ... and all the search engines have\n> postgresql.org in them....\n\nIs there a difficulty in using postgres.org instead?\n\n> , and, I'm sorry, but someones lame argument about 'whether to search\n> for postgres or postgresql' ... like, come on ... if you have any\n> doubt, just search for postgres, it *is* a sub-string of the formal\n> name\n\nI did search. 
A search for \"postgres\" turns up only one quarter of the\nhits, does not turn up the advertisement for postgresql documentation,\nand the second link is to a page called \"University POSTGRES 4.2\".\n\nThe advertisement thing worries me particularly; it means that\nadvertisers have to advertise on more keywords in order to achieve\nreasonable coverage.\n\nNow, having done a web search on \"pgsql\", I can now see your difficulty\nwith the name change; you are the president of a company called\n\"PostgreSQL Inc.\" Why didn't you just say this from the beginnning?\nNobody here is trying to make life difficult for those promoting\npostgres, and if you really are marketing the \"PostgreSQL\" name hard,\nmaybe we shouldn't change it. But I'm not seeing a lot of evidence of\nthat marketing, unfortunately (or perhaps I would have heard of your\ncompany before this).\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Wed, 10 Jul 2002 11:20:41 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On Wed, 10 Jul 2002, Curt Sampson wrote:\n\n> On Tue, 9 Jul 2002, Marc G. Fournier wrote:\n>\n> > PostgreSQL (aka PgSQL aka Postgres aka Pg) ... ppl use all the various\n> > forms ...\n>\n> Having to say that is a very, very good reason to just change the name\n> back to \"postgres\" *everywhere*, and stick with that.\n\nIt will *not* happen, so you may as well just drop that part of the\nthread.\n\n> > Note that the lists themselves act as their own marketing ...\n> > pgsql-*@postgresql.org ... 
and all the search engines have\n> > postgresql.org in them....\n>\n> Is there a difficulty in using postgres.org instead?\n\nIs there a difficulty in accepting that it will not change?\n\n> > , and, I'm sorry, but someones lame argument about 'whether to search\n> > for postgres or postgresql' ... like, come on ... if you have any\n> > doubt, just search for postgres, it *is* a sub-string of the formal\n> > name\n>\n> I did search. A search for \"postgres\" turns up only one quarter of the\n> hits, does not turn up the advertisement for postgresql documentation,\n> and the second link is to a page called \"University POSTGRES 4.2\".\n\nGuess you should learn to type 'postgresql' then, eh?\n\n> Now, having done a web search on \"pgsql\", I can now see your difficulty\n> with the name change; you are the president of a company called\n> \"PostgreSQL Inc.\" Why didn't you just say this from the beginnning?\n\nBecause it was irrelevant? And still is ...\n\n", "msg_date": "Wed, 10 Jul 2002 01:20:58 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On Wed, 10 Jul 2002, Marc G. Fournier wrote:\n\n> It will *not* happen, so you may as well just drop that part of the\n> thread.\n\nAnd so who died and appointed you king?\n\nSorry, but PostgreSQL is not your product, much as you might like to\nthink so. And I find it rather offensive that you should pretend it is.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Wed, 10 Jul 2002 13:37:53 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On Wednesday 10 July 2002 12:37 am, Curt Sampson wrote:\n> On Wed, 10 Jul 2002, Marc G. 
Fournier wrote:\n> > It will *not* happen, so you may as well just drop that part of the\n> > thread.\n\n> And so who died and appointed you king?\n\n> Sorry, but PostgreSQL is not your product, much as you might like to\n> think so. And I find it rather offensive that you should pretend it is.\n\nCurt, you do realize that Marc Fournier helped start this whole thing (taking \nover Postgres95 from its two developers and founding PostgreSQL), is a \nfounding member of the PostgreSQL steering committee (core), \nadministers/runs/pays for the postgresql.org website, coordinates and \nperforms the actual work of the release, and many other things that are \nnecessary.\n\nHis contributions to the project entitle him to have far more say than you \nhave.\n\nIf you doubt that fact, you need to read the archives for awhile to get a \nsense of how this project is organized. If the steering committee (the core \nsix) decide against something, then that something _does_not_happen_. End of \nstory. This is not a democracy. It is an oligarchy. Marc is one of the six \noligarchs, so _Deal_with_it_. Bruce, another of the core six, has to an \nextent agreed with some of the difficulty of the current name. But how have \nthe rest weighed in? Up until the last portions of this thread I might have \nagreed with you to an extent. But after I weighed the difficulty of actually \npulling off a name change, I am dead set against it. It's too much effort \nfor too little gain.\n\nNow, if you want to pay for the bandwidth of a 'postgres.org', want to set up \na full CVS repository, want to administer a popular server, and want to \nevangelize enough developers to gather a critical mass to fork a 'postgres' \nproject, then go ahead.\n\nThe name is fine as it is. 
This is because:\n1.)\tIt is well known by that name;\n2.)\tBooks are already written using that name (there are no books about \n'Postgres';\n3.)\tIt is descriptive;\n4.)\tIt has history;\n5.)\t\"If it ain't broke, don't fix it\" -- and the name ain't broke.\n\nThe whole project should not have to deal with all the ramifications of a name \nchange just for a few people's convenience, laziness, and stubborness.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 10 Jul 2002 09:57:18 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "> It will *not* happen, so you may as well just drop that part of the\n> thread.\n\nHmm. But anyway, why not register the domain postgres.org in addition to \npostgresql.org, no matter if the name will change or not.\n\nLet it point to the same server, forward to postgresql.org or whatever. It \nmight make a few new people look the right place :-)\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2501\nHowitzvej 75 �ben 12.00-18.00 Email: kar@webline.dk\n2000 Frederiksberg L�rdag 11.00-17.00 Web: www.suse.dk\n", "msg_date": "Wed, 10 Jul 2002 20:34:12 +0200", "msg_from": "Kaare Rasmussen <kar@kakidata.dk>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On Wed, 10 Jul 2002, Kaare Rasmussen wrote:\n\n> > It will *not* happen, so you may as well just drop that part of the\n> > thread.\n>\n> Hmm. But anyway, why not register the domain postgres.org in addition to\n> postgresql.org, no matter if the name will change or not.\n\nAlready is ... and it *was* supposed to be pointing at postgresql.org ...\ngetting that fixed now :(\n\n\n", "msg_date": "Wed, 10 Jul 2002 15:41:02 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n>> Hmm. But anyway, why not register the domain postgres.org in addition to\n>> postgresql.org, no matter if the name will change or not.\n\n> Already is ... and it *was* supposed to be pointing at postgresql.org ...\n> getting that fixed now :(\n\nSince we also have postgres.com, don't forget to make that point to the\nright place too.\n\nGB did get a couple of things done anyway ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Jul 2002 18:51:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly " }, { "msg_contents": "Lamar Owen wrote:\n> If you doubt that fact, you need to read the archives for awhile to get a \n> sense of how this project is organized. If the steering committee (the core \n> six) decide against something, then that something _does_not_happen_. End of \n> story. This is not a democracy. It is an oligarchy. Marc is one of the six \n> oligarchs, so _Deal_with_it_. Bruce, another of the core six, has to an \n> extent agreed with some of the difficulty of the current name. But how have \n> the rest weighed in? Up until the last portions of this thread I might have \n> agreed with you to an extent. But after I weighed the difficulty of actually \n> pulling off a name change, I am dead set against it. It's too much effort \n> for too little gain.\n\nI don't think you can just \"shut down\" a discussion about a name change.\nSome good things are coming out of is, such as adding \"also called\n'postgres'\" to some of our documentation, and properly mapping\npostgres.org/com to postgresql.org.\n\nI think there is room for an \"also called postgres\" push among our users\nand for marketing. 
Oracle is changing the name of their server all the\ntime to position it for marketing so having a secondary name doesn't\nhurt. Our _official_ name is PostgreSQL.\n\n(I personally voted for 'tigres' at the time we chose PostgreSQL.)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 10 Jul 2002 21:26:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On Wed, 10 Jul 2002, Lamar Owen wrote:\n\n> Curt, you do realize that Marc Fournier....\n\nSure. You do realize that I've been using postgres on and off since\nbefore Postgres95 existed, right?\n\nRegardless, if people think it's not worthwhile to change the name, I\nhave no problem with that. It's the, \"Just shut up because I don't want\nto listen to you attitude\" that I have a problem with. Not to mention\nthe, \"It's wrong, but I don't have to give any reasons why, because I'm\njust right and you're not\" attitude.\n\nYou know, I just bought this up as an idea to be batted around. And I,\nfor one, am still not sure whether changing the name is a good idea or\nnot. But I find really disappointing the number of people here who a)\nobject to an idea for no good reason that they can explain, beyond \"I\ndon't like it,\" and b) are flaming those who are discussing this idea\nrather than dismissing it.\n\n> This is not a democracy. It is an oligarchy. 
Marc is one of the six\n> oligarchs, so _Deal_with_it_.\n\nSo, basically, \"if you don't like the product, get lost.\"\n\nIf this is really the case, we can certainly drop any discussion\nof the name, because we have much, much bigger marketing problems.\n\n> 1.)\tIt is well known by that name;\n\nNot to mention by one or two other names.\n\n> 4.)\tIt has history;\n\nThough less history than other names.\n\n> 5.)\t\"If it ain't broke, don't fix it\" -- and the name ain't broke.\n\nWell, apparently some people don't agree with you. You can listen to them\nor you can tell them to get lost. Your choice.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Thu, 11 Jul 2002 11:19:53 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On Thu, 11 Jul 2002, Curt Sampson wrote:\n\n> On Wed, 10 Jul 2002, Lamar Owen wrote:\n>\n> > Curt, you do realize that Marc Fournier....\n>\n> Sure. 
You do realize that I've been using postgres on and off since\n> before Postgres95 existed, right?\n\nWell you certainly don't act like it - and before you go snapping at me\nfor saying that, go back and look at your previous posts and ask yourself\nwhy you would ask some of the questions or make some of the statements you\nmade if you really had been around that long.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 10 Jul 2002 22:31:46 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "pgman@candle.pha.pa.us (Bruce Momjian) wrote:\n> Lamar Owen wrote:\n>> If you doubt that fact, you need to read the archives for awhile to\n>> get a sense of how this project is organized. If the steering\n>> committee (the core six) decide against something, then that\n>> something _does_not_happen_. End of story. This is not a\n>> democracy. It is an oligarchy. Marc is one of the six oligarchs,\n>> so _Deal_with_it_. Bruce, another of the core six, has to an\n>> extent agreed with some of the difficulty of the current name. But\n>> how have the rest weighed in? Up until the last portions of this\n>> thread I might have agreed with you to an extent. But after I\n>> weighed the difficulty of actually pulling off a name change, I am\n>> dead set against it. It's too much effort for too little gain.\n>\n> I don't think you can just \"shut down\" a discussion about a name\n> change. 
Some good things are coming out of is, such as adding \"also\n> called 'postgres'\" to some of our documentation, and properly\n> mapping postgres.org/com to postgresql.org.\n>\n> I think there is room for an \"also called postgres\" push among our\n> users and for marketing. Oracle is changing the name of their\n> server all the time to position it for marketing so having a\n> secondary name doesn't hurt. Our _official_ name is PostgreSQL.\n>\n> (I personally voted for 'tigres' at the time we chose PostgreSQL.)\n\nHear, hear!\n\nThis is a _wonderful_ thing.\n\nNote that Netscape Navigator has been spelled many ways over the\nyears, \"but is always pronounced `Mozilla.'\"\n\nWhile 'tigres' sounds quite nice, and strikes me as an attractive\noption were things open to a _completely_ new name, it hasn't the\nmerit \"postgres\" has of:\n a) Being a historical name, and\n b) Being highly similar to the current name.\n\nThere's NOTHING wrong with having a \"legal name\" as well as an\n\"operating as\" name; companies do that _all the time_.\n\nAnd if there are 20 places that say \"It's officially spelled\nPostgreSQL, but you can _pronounce_ that 'p\\O\\st-\"gres', and here's\nthe MP3 of Bruce saying it,\" that can cope with the situation nicely.\n\nI have no problem with the \"PostgreSQL\" _spelling_, but how it sounds\n_is_ important, and I don't think it's reasonable to expect to do the\n\"Cliff Richard\" thing where the famous British pop star coined a name\nspecifically so that he could regularly remind interviewers \"No, no,\nnot `Cliff Richards,' it's `Cliff Richard.' No 's' on the end!\"\n\nConsider the Hitchhikers Guide to the Galaxy series. The first three\nbooks are _tremendously_ more popular than the later sequels, and I\nbelieve that is the result of them having been first 'honed' by being\npresented as radio plays. 
They actually _sound_ better than they\n\"read.\" In contrast, later volumes like _So Long and Thanks for All\nthe Fish_ never were on radio, and read _very_ differently,\nunfortunately not as nicely.\n-- \n(reverse (concatenate 'string \"moc.enworbbc@\" \"enworbbc\"))\nhttp://cbbrowne.com/info/nonrdbms.html\n\"One of my most often repeated quips was the one I made when former\nPresidents Carter, Ford and Nixon stood by each other at a White House\nevent. 'There they are,' I said. 'See no evil, hear no evil, and ...\nevil.'\" -- Bob Dole, 1983\n", "msg_date": "Wed, 10 Jul 2002 22:58:39 -0400", "msg_from": "Christopher Browne <cbbrowne@cbbrowne.com>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On Wed, 10 Jul 2002, Christopher Browne wrote:\n\n> And if there are 20 places that say \"It's officially spelled\n> PostgreSQL, but you can _pronounce_ that 'p\\O\\st-\"gres', and here's\n> the MP3 of Bruce saying it,\" that can cope with the situation nicely.\n\nFor the record, the voice on the MP3 isn't Bruce. It's the voice of a\nprofessional broadcaster.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 11 Jul 2002 05:31:10 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On Sat, 10 Aug 2002, Jeff MacDonald wrote:\n\n> How long did it take you to teach him to say PostgreSQL ? 
:)\n\nLessee, the conversation went something like this:\n\nMe: I need a wav file of you saying \"PostgreSQL\".\n\nHim: \"PostgreSQL\"?\n\nMe: Yeah.\n\nHim: Ok, I'll get it to you later today.\n\nThen after I made it available someone else did the MP3 conversion\nand I put that there as well. He caught on pretty quick! :)\n\n\n\n>\n> Jeff.\n>\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Vince Vielhaber\n> Sent: Thursday, July 11, 2002 6:31 AM\n> To: Christopher Browne\n> Cc: pgsql-hackers@postgreSQL.org\n> Subject: Re: [HACKERS] I am being interviewed by OReilly\n>\n>\n> On Wed, 10 Jul 2002, Christopher Browne wrote:\n>\n> > And if there are 20 places that say \"It's officially spelled\n> > PostgreSQL, but you can _pronounce_ that 'p\\O\\st-\"gres', and here's\n> > the MP3 of Bruce saying it,\" that can cope with the situation nicely.\n>\n> For the record, the voice on the MP3 isn't Bruce. It's the voice of a\n> professional broadcaster.\n>\n> Vince.\n>\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 11 Jul 2002 08:27:19 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On Thu, 11 Jul 2002, Vince Vielhaber wrote:\n\n> On Sat, 10 Aug 2002, Jeff MacDonald wrote:\n>\n> > How long did it take you to teach him to say PostgreSQL ? 
:)\n>\n> Lessee, the conversation went something like this:\n>\n> Me: I need a wav file of you saying \"PostgreSQL\".\n>\n> Him: \"PostgreSQL\"?\n>\n> Me: Yeah.\n>\n> Him: Ok, I'll get it to you later today.\n>\n> Then after I made it available someone else did the MP3 conversion\n> and I put that there as well. He caught on pretty quick! :)\n\nMy experiences tend to be about the same ... worst case scenario, I have\nto show them where the syllables break down ... after that, they are\ngenerally fine ...\n\n >\n>\n>\n> >\n> > Jeff.\n> >\n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Vince Vielhaber\n> > Sent: Thursday, July 11, 2002 6:31 AM\n> > To: Christopher Browne\n> > Cc: pgsql-hackers@postgreSQL.org\n> > Subject: Re: [HACKERS] I am being interviewed by OReilly\n> >\n> >\n> > On Wed, 10 Jul 2002, Christopher Browne wrote:\n> >\n> > > And if there are 20 places that say \"It's officially spelled\n> > > PostgreSQL, but you can _pronounce_ that 'p\\O\\st-\"gres', and here's\n> > > the MP3 of Bruce saying it,\" that can cope with the situation nicely.\n> >\n> > For the record, the voice on the MP3 isn't Bruce. It's the voice of a\n> > professional broadcaster.\n> >\n> > Vince.\n> >\n>\n>\n> Vince.\n> --\n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n", "msg_date": "Thu, 11 Jul 2002 10:24:21 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "Curt Sampson wrote:\n> \n> On Wed, 10 Jul 2002, Lamar Owen wrote:\n> \n> > Curt, you do realize that Marc Fournier....\n> \n> Sure. You do realize that I've been using postgres on and off since\n> before Postgres95 existed, right?\n\nHave been there since v4.2 (last official Berkeley release, PostQUEL\nversion) and survived the dark era (Postgres95).\n\nThat by itself is no argument, nor does it give someones word more\nweight.\n\nPostgreSQL is the name of this software for many years, books are\nprinted using that name, press articles refer to it, marketing is based\non it. If we like it or not, it is IMHO not an option to change it just\nto sound better or because people with spelling difficulties don't get\nit right. \n\nLook at my own name. Jan Wieck (correctly pronounced like Yann Veek).\nSome people here in the US think I'm a girl, most pronounce my first\nname Dshaen or worse many fail on my last name somewhere around\nOUUEEEE... cough, cough. Do you think I consider changing something? No\nway, not for people who speak one or less languages only.\n\nPostgreSQL is PostgreSQL, and that's it.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n", "msg_date": "Thu, 11 Jul 2002 11:17:44 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "> (I personally voted for 'tigres' at the time we chose PostgreSQL.)\n\n??\n\nWhat about \"digress\" ??\n\n:-)\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2501\nHowitzvej 75 �ben 12.00-18.00 Email: kar@webline.dk\n2000 Frederiksberg L�rdag 11.00-17.00 Web: www.suse.dk\n", "msg_date": "Thu, 11 Jul 2002 17:51:00 +0200", "msg_from": "Kaare Rasmussen <kar@kakidata.dk>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On Thu, 11 Jul 2002, Jan Wieck wrote:\n\n> Look at my own name. Jan Wieck (correctly pronounced like Yann Veek).\n\nIs that how its pronounced?? :) I've known you for how long now and never\nhad a clue how to pronounce your last name ... first is/was easy, last I\nnever even tried ...\n\n\n\n", "msg_date": "Thu, 11 Jul 2002 16:07:40 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Jan's Name (Was: Re: I am being interviewed by OReilly)" }, { "msg_contents": "Marc G. Fournier wrote:\n> On Thu, 11 Jul 2002, Jan Wieck wrote:\n> \n> > Look at my own name. Jan Wieck (correctly pronounced like Yann Veek).\n> \n> Is that how its pronounced?? :) I've known you for how long now and never\n> had a clue how to pronounce your last name ... first is/was easy, last I\n> never even tried ...\n\nAnd we did think Jan was a girl for many months until somehow he\nmentioned something that gave us a clue he wasn't.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Jul 2002 15:21:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Jan's Name (Was: Re: I am being interviewed by OReilly)" }, { "msg_contents": "On Thu, 11 Jul 2002, Bruce Momjian wrote:\n\n> And we did think Jan was a girl for many months until somehow he\n> mentioned something that gave us a clue he wasn't.\n\nThrough the years I've had many friends that I've never seen and several\nthat I had no idea as to their gender. Isn't e-mail great. No\npreconceived ideas when we start communicating. (Heck I get to be called\na @#$%^& based on merit. :-) There has been a Jan, Lynn, Sandy and a\ncouple others I can't remember but the biggest shock was the phone call I\ngot from some one I'd been communicating with for several years and she\nsaid \"Hi Rod, this is Charlie\". I hadn't had a clue Charlie was a woman.\n\n As the cartoon says \"On the internet no one knows you're a dog.\"\n\n\nRod\n-- \n \"Open Source Software - Sometimes you get more than you paid for...\"\n\n", "msg_date": "Thu, 11 Jul 2002 14:48:22 -0700 (PDT)", "msg_from": "\"Roderick A. Anderson\" <raanders@acm.org>", "msg_from_op": false, "msg_subject": "Re: Jan's Name (Was: Re: I am being interviewed by OReilly)" }, { "msg_contents": "Roderick A. Anderson wrote:\n> On Thu, 11 Jul 2002, Bruce Momjian wrote:\n> \n> > And we did think Jan was a girl for many months until somehow he\n> > mentioned something that gave us a clue he wasn't.\n> \n> Through the years I've had many friends that I've never seen and several\n> that I had no idea as to their gender. Isn't e-mail great. No\n> preconceived ideas when we start communicating. (Heck I get to be called\n\nI do speak to women differently so it was weird not knowing about Jan. 
\nIf you have seen the SNL skit about Pat, well, its like that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Jul 2002 17:51:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Jan's Name (Was: Re: I am being interviewed by OReilly)" }, { "msg_contents": "Okay, going *way* off subject.... why on earth do you speak to women\ndifferently?\n\nOn Thu, 11 Jul 2002, Bruce Momjian wrote:\n\n> Roderick A. Anderson wrote:\n> > On Thu, 11 Jul 2002, Bruce Momjian wrote:\n> >\n> > > And we did think Jan was a girl for many months until somehow he\n> > > mentioned something that gave us a clue he wasn't.\n> >\n> > Through the years I've had many friends that I've never seen and several\n> > that I had no idea as to their gender. Isn't e-mail great. No\n> > preconceived ideas when we start communicating. (Heck I get to be called\n>\n> I do speak to women differently so it was weird not knowing about Jan.\n> If you have seen the SNL skit about Pat, well, its like that.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Thu, 11 Jul 2002 15:10:29 -0700 (PDT)", "msg_from": "Ben <bench@silentmedia.com>", "msg_from_op": false, "msg_subject": "Re: Jan's Name (Was: Re: I am being interviewed by OReilly)" }, { "msg_contents": "Ben wrote:\n> Okay, going *way* off subject.... why on earth do you speak to women\n> differently?\n\nYou don't make sarcastic comments to women, for one thing. I speak less\nharshly, I guess. 
I am married, so dating isn't the issue.\n\nI used to think \"treat them the same\" but in reality I think most women\nprefer that you didn't.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Jul 2002 18:13:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Jan's Name (Was: Re: I am being interviewed by OReilly)" }, { "msg_contents": "Bruce Momjian wrote:\n> Marc G. Fournier wrote:\n> \n>>On Thu, 11 Jul 2002, Jan Wieck wrote:\n>>\n>>\n>>>Look at my own name. Jan Wieck (correctly pronounced like Yann Veek).\n>>\n>>Is that how its pronounced?? :) I've known you for how long now and never\n>>had a clue how to pronounce your last name ... first is/was easy, last I\n>>never even tried ...\n> \n> \n> And we did think Jan was a girl for many months until somehow he\n> mentioned something that gave us a clue he wasn't.\n> \n\nThere are plenty of hard to spell/pronounce names on the lists - and I \ndon't want to be the one to start *that* thread - but I have to ask:\n\nMom-jee-arn ?\nMom-jy-ann ?\nMom-jeen ?\nMom-zhon ?\n\nCheers, Kurt (Male).\n\n", "msg_date": "Fri, 12 Jul 2002 10:57:16 +1200", "msg_from": "Kurt at iadvance <kurtw@iadvance.co.nz>", "msg_from_op": false, "msg_subject": "Re: Jan's Name (Was: Re: I am being interviewed by OReilly)" }, { "msg_contents": ">>>>Look at my own name. Jan Wieck (correctly pronounced like Yann Veek).\n\nVaguely remembered, then found on the Web...\n\nNicklaus Wirth, the designer of PASCAL, gave a talk once at which he\nwas asked \"How do you pronounce your name?\". 
He replied, \"You can call\nme by name, pronouncing it 'Virt', or call me by value, 'Worth'.\" \n\n-- \nJohn W Hall <wweexxsseessssaa@telusplanet.net>\nCalgary, Alberta, Canada.\n\"Helping People Prosper in the Information Age\"\n", "msg_date": "Thu, 11 Jul 2002 23:16:05 GMT", "msg_from": "John Hall <wweexxsseessssaa@telusplanet.net>", "msg_from_op": false, "msg_subject": "Re: Jan's Name (Was: Re: I am being interviewed by OReilly)" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Ben wrote:\n> > Okay, going *way* off subject.... why on earth do you speak to women\n> > differently?\n> \n> You don't make sarcastic comments to women, for one thing. I speak less\n> harshly, I guess. I am married, so dating isn't the issue.\n> \n> I used to think \"treat them the same\" but in reality I think most women\n> prefer that you didn't.\n\nAmen.\n\nMike Mascari\nmascarm@mascari.com\n", "msg_date": "Thu, 11 Jul 2002 20:36:39 -0400", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": false, "msg_subject": "Re: Jan's Name (Was: Re: I am being interviewed by OReilly)" }, { "msg_contents": "On Wednesday 10 July 2002 09:26 pm, Bruce Momjian wrote:\n> Lamar Owen wrote:\n> > If you doubt that fact, you need to read the archives for awhile to get a\n> > sense of how this project is organized. If the steering committee (the\n> > core six) decide against something, then that something\n\n> I don't think you can just \"shut down\" a discussion about a name change.\n\nWell, you're right. *I* can't shut anything down (except my server...). But \nmy reply was targeted to the 'who died and made you king' ad hominem.\n\n> I think there is room for an \"also called postgres\" push among our users\n> and for marketing. Oracle is changing the name of their server all the\n> time to position it for marketing so having a secondary name doesn't\n> hurt. Our _official_ name is PostgreSQL.\n\nIs that like the Artist Formerly Known as Prince? Now abbreviated to \n'Artist'? 
Why beat around the bush about our name? This is confusion.\n\n> (I personally voted for 'tigres' at the time we chose PostgreSQL.)\n\nAs in the river? Does that mean we consider ourselves to be the new seat of \npower in the world? Hey, maybe I need to change my name to Tiglathpileser or \nsomething. Hey, we could then have Sargon as our engine, Pul as our system \nof scripting, Shalmaneser as the trademark for 'readers don't have to wait \nfor writers', Sennacherib as the patented system for stuffing data that goes \nbeyond the size of a tuple.....\n\nAs long as a project from Iraq called 'Babylon' lead by a royal Nebuchadnezzar \ndoesn't come around....\n\nTime to retreat to Nineveh....\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 11 Jul 2002 20:48:27 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On Wednesday 10 July 2002 10:19 pm, Curt Sampson wrote:\n> You know, I just bought this up as an idea to be batted around. And I,\n> for one, am still not sure whether changing the name is a good idea or\n> not.\n\nA name change will be traumatic for a number of reasons. \n\nFirst, every distributor of the progam will have to change its name. Apache \nis doing this very thing now, and it has got things in a state of confusion \n-- the apache webserver is now packaged in a a tarball called 'httpd' -- \nwhich, IMNSHO, is arrogant.\n\nNext, name changes must be effectively communicated to people. If we are \nhaving a hard time communicating our current name, how much harder of a time \nwill we have with communicating the fact that 'we've changed our name because \nwe've given up on communicating our name to people'?\n\nApply Occam's Razor, please. 
Non sunt multiplicanda entia praeter \nnecessitatem.\n\nIt is up to the ones who want to change the name to come up with a compelling \nreason to change it -- it's not the responsibility of those who support the \ncurrent name to defend the current name in order for it to be kept.\n\nMy opinion is simply that the current name is fine. The question becomes 'is \nit worth the work to change the name for the minority's convenience?' I \nbelieve the logical answer is 'No. While there are seemingly good reasons \nfor this move, none compel this drastic of an action.'\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 11 Jul 2002 21:00:06 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "Kurt at iadvance wrote:\n> Bruce Momjian wrote:\n> > Marc G. Fournier wrote:\n> > \n> >>On Thu, 11 Jul 2002, Jan Wieck wrote:\n> >>\n> >>\n> >>>Look at my own name. Jan Wieck (correctly pronounced like Yann Veek).\n> >>\n> >>Is that how its pronounced?? :) I've known you for how long now and never\n> >>had a clue how to pronounce your last name ... first is/was easy, last I\n> >>never even tried ...\n> > \n> > \n> > And we did think Jan was a girl for many months until somehow he\n> > mentioned something that gave us a clue he wasn't.\n> > \n> \n> There are plenty of hard to spell/pronounce names on the lists - and I \n> don't want to be the one to start *that* thread - but I have to ask:\n> \n> Mom-jee-arn ?\n> Mom-jy-ann ?\n> Mom-jeen ?\n> Mom-zhon ?\n\nNow, that is a good question. It is MOM-jin. Actually, though the\nArmenian way to say it is MOOM-ji-an, but I don't say it that way.\n\nMaybe I need an MP3 file for that. Vince, can you hook me up? ;-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Jul 2002 22:52:57 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Jan's Name (Was: Re: I am being interviewed by OReilly)" }, { "msg_contents": "Lamar Owen dijo: \n\n> On Wednesday 10 July 2002 09:26 pm, Bruce Momjian wrote:\n\n> > (I personally voted for 'tigres' at the time we chose PostgreSQL.)\n> \n> As in the river?\n\nI don't think so. The river is actually called \"Tigris\" (that's the\nspanish version, and the version that generates more results in Google,\nalthought I admit that the english transliteration may be different).\n\nTigres is the spanish plural for \"tiger\", and of course it keeps the\nplay on \"-gres\" while suggesting some kind of power (and speed?). But I\nthink Bruce should be actually better informed than me as to what was\nthe origin of the word suggestion in the first place.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\nwww.google.com: interfaz de linea de comando para la web.\n\n", "msg_date": "Thu, 11 Jul 2002 23:03:50 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "Kurt at iadvance wrote:\n> Bruce Momjian wrote:\n> > Marc G. Fournier wrote:\n> > \n> >>On Thu, 11 Jul 2002, Jan Wieck wrote:\n> >>\n> >>\n> >>>Look at my own name. Jan Wieck (correctly pronounced like Yann Veek).\n> >>\n> >>Is that how its pronounced?? :) I've known you for how long now and never\n> >>had a clue how to pronounce your last name ... 
first is/was easy, last I\n> >>never even tried ...\n> > \n> > \n> > And we did think Jan was a girl for many months until somehow he\n> > mentioned something that gave us a clue he wasn't.\n> > \n> \n> There are plenty of hard to spell/pronounce names on the lists - and I \n> don't want to be the one to start *that* thread - but I have to ask:\n> \n> Mom-jee-arn ?\n> Mom-jy-ann ?\n> Mom-jeen ?\n> Mom-zhon ?\n\nSorry for the off-topic, but my wife just corrected the proper Armenian\npronunciation of my name. It should be MOHM-juh-yan.\n\nI rarely correct anyone who pronounces my name. I give them credit for\njust trying.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Jul 2002 23:04:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Jan's Name (Was: Re: I am being interviewed by OReilly)" }, { "msg_contents": "On Thu, 11 Jul 2002, Bruce Momjian wrote:\n\n> Kurt at iadvance wrote:\n> > Bruce Momjian wrote:\n> > > Marc G. Fournier wrote:\n> > >\n> > >>On Thu, 11 Jul 2002, Jan Wieck wrote:\n> > >>\n> > >>\n> > >>>Look at my own name. Jan Wieck (correctly pronounced like Yann Veek).\n> > >>\n> > >>Is that how its pronounced?? :) I've known you for how long now and never\n> > >>had a clue how to pronounce your last name ... first is/was easy, last I\n> > >>never even tried ...\n> > >\n> > >\n> > > And we did think Jan was a girl for many months until somehow he\n> > > mentioned something that gave us a clue he wasn't.\n> > >\n> >\n> > There are plenty of hard to spell/pronounce names on the lists - and I\n> > don't want to be the one to start *that* thread - but I have to ask:\n> >\n> > Mom-jee-arn ?\n> > Mom-jy-ann ?\n> > Mom-jeen ?\n> > Mom-zhon ?\n>\n> Now, that is a good question. It is MOM-jin. 
Actually, though the\n> Armenian way to say it is MOOM-ji-an, but I don't say it that way.\n>\n> Maybe I need an MP3 file for that. Vince, can you hook me up? ;-)\n\nNo problem, the broadcaster in question is my biz partner. I'll get\nwith him next week.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 11 Jul 2002 23:09:25 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Jan's Name (Was: Re: I am being interviewed by OReilly)" }, { "msg_contents": "Alvaro Herrera wrote:\n> Lamar Owen dijo: \n> \n> > On Wednesday 10 July 2002 09:26 pm, Bruce Momjian wrote:\n> \n> > > (I personally voted for 'tigres' at the time we chose PostgreSQL.)\n> > \n> > As in the river?\n> \n> I don't think so. The river is actually called \"Tigris\" (that's the\n> spanish version, and the version that generates more results in Google,\n> althought I admit that the english transliteration may be different).\n> \n> Tigres is the spanish plural for \"tiger\", and of course it keeps the\n> play on \"-gres\" while suggesting some kind of power (and speed?). But I\n> think Bruce should be actually better informed than me as to what was\n> the origin of the word suggestion in the first place.\n\nTigres (female tiger) was suggested because it is the only common *gres\nword we could think of.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Jul 2002 23:22:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "Lamar Owen wrote:\n> On Wednesday 10 July 2002 09:26 pm, Bruce Momjian wrote:\n> > Lamar Owen wrote:\n> > > If you doubt that fact, you need to read the archives for awhile to get a\n> > > sense of how this project is organized. If the steering committee (the\n> > > core six) decide against something, then that something\n> \n> > I don't think you can just \"shut down\" a discussion about a name change.\n> \n> Well, you're right. *I* can't shut anything down (except my server...). But \n> my reply was targeted to the 'who died and made you king' ad hominem.\n\nAnd my reply was to those who wanted to shut down the discussion. \nLamar, you weren't one of them.\n\nThe discussion addressed a serious issue, specifically that though\nPostgreSQL looks great on paper, with the Postgre* and the SQL, it is\nhard to pronounce.\n\nThat was the issue, and people are free to make suggestions on how to\naddress it. I think the only discussions we really shut down are those\nthat insult people or are blatantly disrespectful. I think anything\nelse is open for discussion.\n\n> > I think there is room for an \"also called postgres\" push among our users\n> > and for marketing. Oracle is changing the name of their server all the\n> > time to position it for marketing so having a secondary name doesn't\n> > hurt. Our _official_ name is PostgreSQL.\n> \n> Is that like the Artist Formerly Known as Prince? Now abbreviated to \n> 'Artist'? Why beat around the bush about our name? This is confusion.\n\nSure, just call us Database. Maybe Red Hat Database is a trend. ;-)\n\nActually, Prince did that because his record company had rights to his\nname for publishing for X years. 
The contract finally ran out and I\nthink he is back to Prince.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Jul 2002 23:29:00 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "On Thursday 11 July 2002 11:29 pm, Bruce Momjian wrote:\n> Lamar Owen wrote:\n> > Well, you're right. *I* can't shut anything down (except my server...). \n> > But my reply was targeted to the 'who died and made you king' ad hominem.\n\n> And my reply was to those who wanted to shut down the discussion.\n> Lamar, you weren't one of them.\n\nAs I was in the cc list, it was my message replied to, and my Asperger's \nkicked in again, I misunderstood.\n\n> The discussion addressed a serious issue, specifically that though\n> PostgreSQL looks great on paper, with the Postgre* and the SQL, it is\n> hard to pronounce.\n\nYou know, I think I brought the issue up once myself. But I further realized \nthat it really isn't hard to pronounce.\n\n> Sure, just call us Database. Maybe Red Hat Database is a trend. ;-)\n\nWe might get confused with dBASE. Hmmm. pBASE? (nah.)\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 11 Jul 2002 23:37:56 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "...\n> I think there is room for an \"also called postgres\" push among our users\n> and for marketing. Oracle is changing the name of their server all the\n> time to position it for marketing so having a secondary name doesn't\n> hurt. Our _official_ name is PostgreSQL.\n\nWe have had the following in our documentation for years (I remember\nwriting it ;)\n\n4. 
Terminology and Notation\n\nThe terms \"PostgreSQL\" and \"Postgres\" will be used interchangeably to\nrefer to the software that accompanies this documentation. \n...\n\nI fairly recently went through and scrubbed the docs, which at the time\nwere (if I remember correctly) roughly 60/40 split between the two forms\nof name. I think it is not helpful to try to emphasize the \"short form\",\nbut we have always acknowledged that it exists and folks have always\nused both forms. It just isn't that big a deal, and it just isn't that\nconfusing.\n\n - Thomas\n", "msg_date": "Thu, 11 Jul 2002 22:34:27 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" }, { "msg_contents": "Bruce Momjian wrote:\n> Ben wrote:\n> \n>>Okay, going *way* off subject.... why on earth do you speak to women\n>>differently?\n> \n> \n> You don't make sarcastic comments to women, for one thing. I speak less\n> harshly, I guess. I am married, so dating isn't the issue.\n> \n> I used to think \"treat them the same\" but in reality I think most women\n> prefer that you didn't.\n> \n\nI personally prefer to be treated the same.\nOtherwise I feel a bit like a child, children are treated with indulgence.\nI can't stand the feeling of not being taken seriously.\n\nRegards\nTina\n\n\n\n\n\n\n\n\n\n", "msg_date": "Fri, 12 Jul 2002 11:24:08 +0200", "msg_from": "Tina Messmann <tina.messmann@xinux.de>", "msg_from_op": false, "msg_subject": "Re: Jan's Name (Was: Re: I am being interviewed by OReilly)" }, { "msg_contents": "On Thu, 11 Jul 2002 23:16:05 GMT, John Hall wrote:\n> >>>>Look at my own name. Jan Wieck (correctly pronounced like Yann Veek).\n> \n> Vaguely remembered, then found on the Web...\n> \n> Nicklaus Wirth, the designer of PASCAL, gave a talk once at which he\n> was asked \"How do you pronounce your name?\". 
He replied, \"You can call\n> me by name, pronouncing it 'Virt', or call be by value, 'Worth'.\"\n\nYou mean this:\n\"In Europe they call me Niklaus Wirth; in the US they call me Nickel's worth.\n That's because in Europe they call me by name, and in the US by value!\"\n\n-- \n\tPeter Haworth\tpmh@edison.ioppublishing.com\n\"The familiar dot '.' symbol from Internet addresses\n is used in this book to terminate sentences.\"\n\t\t-- Carlton Egremont III, /Mr. Bunny's Guide to ActiveX/\n", "msg_date": "Mon, 15 Jul 2002 12:12:08 +0100", "msg_from": "\"Peter Haworth\" <pmh@edison.ioppublishing.com>", "msg_from_op": false, "msg_subject": "Re: Jan's Name (Was: Re: I am being interviewed by OReilly)" }, { "msg_contents": "How long did it take you to teach him to say PostgreSQL ? :)\n\nJeff.\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Vince Vielhaber\nSent: Thursday, July 11, 2002 6:31 AM\nTo: Christopher Browne\nCc: pgsql-hackers@postgreSQL.org\nSubject: Re: [HACKERS] I am being interviewed by OReilly\n\n\nOn Wed, 10 Jul 2002, Christopher Browne wrote:\n\n> And if there are 20 places that say \"It's officially spelled\n> PostgreSQL, but you can _pronounce_ that 'p\\O\\st-\"gres', and here's\n> the MP3 of Bruce saying it,\" that can cope with the situation nicely.\n\nFor the record, the voice on the MP3 isn't Bruce. 
It's the voice of a\nprofessional broadcaster.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\nhttp://archives.postgresql.org\n\n", "msg_date": "Sat, 10 Aug 2002 09:17:28 -0300", "msg_from": "\"Jeff MacDonald\" <jeff@tsunamicreek.com>", "msg_from_op": false, "msg_subject": "Re: I am being interviewed by OReilly" } ]
[ { "msg_contents": "With reference to the following message...We are wondering what was the resolution for the core..\n\nthanks\ngautam\n\n6.3.1: Core during initdb on SVR4 (MIPS)\n\n*\tFrom: Frank Ridderbusch <ridderbusch.pad@sni.de <mailto:ridderbusch.pad@sni.de>> \n*\tTo: pgsql-hackers@postgresql.org <mailto:pgsql-hackers@postgresql.org> \n*\tSubject: 6.3.1: Core during initdb on SVR4 (MIPS) \n*\tDate: Thu, 26 Mar 1998 13:46:51 GMT \n\nHi,\n\njust got me the complete 6.3.1 tarball. Compilation and installation\nwent smoothly for the most part.\n\nProblem is, that the backend dumps core during the initdb run. Here is \nthe transcript (see the 'Abort' below, vacuuming template1). 
\n\npostgres@utensil (16) $ dbx ../../../bin/postgres core\ndbx 2.1A00 SINIX (Apr 6 1995)\nCopyright (C) Siemens Nixdorf Informationssysteme AG 1995\nBase:\tBSD, Copyright (C) The Regents of the University of California\nAll rights reserved\nreading symbolic information ...\nCurrent signal in memory image is: SIGIOT (6) (generated by pid 6409 uid 121)\n[using memory image in core]\nType 'help' for help\n(dbx) where\n.kill() at 0x482d994\n.abort() at 0x4822d30\nExcAbort(excP = 0x62ddc0, detail = 0, data = (nilv), message = \"!(RelationIsValid(relation))\"), line 29 in \"excabort.c\"\nExcUnCaught(excP = 0x62ddc0, detail = 0, data = (nilv), message = \"!(RelationIsValid(relation))\"), line 173 in \"exc.c\"\nExcRaise(excP = 0x62ddc0, detail = 0, data = (nilv), message = \"!(RelationIsValid(relation))\"), line 190 in \"exc.c\"\nExceptionalCondition(conditionName = \"!(RelationIsValid(relation))\", exceptionP = 0x62ddc0, detail = (nilv), fileName = \"indexam.c\", lineNumber = 231), line 69 in \"assert.c\"\nindex_beginscan(relation = (nilv), scanFromEnd = '\\0', numberOfKeys = 0, key = (nilv)), line 231 in \"indexam.c\"\nvc_scanoneind(indrel = (nilv), nhtups = 0), line 1448 in \"vacuum.c\"\nvc_vacone(relid = 1247, analyze = '\\0', va_cols = (nilv)), line 560 in \"vacuum.c\"\nvc_vacuum(VacRelP = (nilv), analyze = '\\0', va_cols = (nilv)), line 253 in \"vacuum.c\"\n.vacuum.vacuum(vacrel = (nilv), verbose = '\\0', analyze = '\\0', va_spec = (nilv)), line 159 in \"vacuum.c\"\nProcessUtility(parsetree = 0x66c770, dest = Debug), line 633 in \"utility.c\"\npg_exec_query_dest(query_string = \"vacuum\\n\", argv = (nilv), typev = (nilv), nargs = 0, dest = Debug), line 653 in \"postgres.c\"\npg_exec_query(query_string = \"vacuum\\n\", argv = (nilv), typev = (nilv), nargs = 0), line 601 in \"postgres.c\"\nPostgresMain(argc = 7, argv = 0x7fffe7ac), line 1382 in \"postgres.c\"\n.main() at 0x4bb29c\n__start() at 0x417de4\n(dbx) \n\nMfG/Regards\n--\n /==== Siemens Nixdorf 
Informationssysteme AG\n / Ridderbusch / , Abt.: OEC XS QM4\n / /./ Heinz Nixdorf Ring\n /=== /,== ,===/ /,==, // 33106 Paderborn, Germany\n / // / / // / / \\ Tel.: (49) 5251-8-15211\n/ / `==/\\ / / / \\ Email: ridderbusch.pad@sni.de\n\nSince I have taken all the Gates out of my computer, it finally works!!\n\n\n\n", "msg_date": "Mon, 8 Jul 2002 10:12:39 -0700", "msg_from": "\"Gautam Jain\" <gautam.jain@intransa.com>", "msg_from_op": true, "msg_subject": "6.3.1: Core during initdb on SVR4 (MIPS) " } ]
[ { "msg_contents": "FYI, this person wants to hire someone to write a CRC function for\nPostgreSQL.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nOn Mon, 8 Jul 2002, Bruce Momjian wrote:\n\n> Not sure. Seems like you will start learning how to add your own\n> functions to the backend. :-)\n\nI believe in division of labor. :-)\nFor how much do you think I can get someone to write a loadable module\nwith CRC? What would be the best place to ask? The general list?", "msg_date": "Mon, 8 Jul 2002 21:13:56 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] CRC function? (fwd)" } ]
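The thread above asks for someone to write a CRC function loadable into PostgreSQL. A real module would be C code registered with `CREATE FUNCTION`; as a point of reference for anyone writing one, the expected values can be sanity-checked against Python's standard `binascii.crc32` (the common CRC-32 / zlib polynomial). The function name below is purely illustrative:

```python
import binascii

def crc32_text(s: str) -> int:
    """CRC-32 of a UTF-8 string, returned as an unsigned 32-bit integer."""
    return binascii.crc32(s.encode("utf-8")) & 0xFFFFFFFF

# Canonical CRC-32 check value: crc32("123456789") == 0xCBF43926
print(hex(crc32_text("123456789")))
```

The `& 0xFFFFFFFF` mask keeps the result unsigned, which matters when comparing against values produced by C code.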
[ { "msg_contents": "We have \"always\" stored our time zone offsets in a compressed\ndivide-by-ten form to allow storage in a single byte. But there are a\nfew time zones ending at a 45 minute boundary, which \"minutes divided by\n10\" can not represent. \n\nBut \"minutes divided by 15\" can represent this and afaik all other time\nzones in existance. I have patches which will do this (with changes done\nby a perl script so the math must be right ;). Can anyone think of a\ncounter-example time zone which does not fall on a 15 minute boundary?\nOr any other reason to not move to 15 minute quantization for these\nvalues?\n\nbtw, the \"45 minute cases\" are Nepal and the Chatham Islands, and the\n\"30 minute cases\" are New Foundland, Canada and India, a bunch in\nAustralia, and a few others.\n\n - Thomas\n\n\n", "msg_date": "Mon, 08 Jul 2002 21:11:32 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Units for storage of internal time zone offsets" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> We have \"always\" stored our time zone offsets in a compressed\n> divide-by-ten form to allow storage in a single byte. But there are a\n> few time zones ending at a 45 minute boundary, which \"minutes divided by\n> 10\" can not represent. \n\nIs there a reason for the restriction to one byte? Offhand I don't\nrecall that we store TZ offsets on disk at all...\n\n> But \"minutes divided by 15\" can represent this and afaik all other time\n> zones in existance. I have patches which will do this (with changes done\n> by a perl script so the math must be right ;). Can anyone think of a\n> counter-example time zone which does not fall on a 15 minute boundary?\n\nI dug through the zic timezone database, and it seems that as of current\npractice, everyone is on hour, half-hour, or 15-minute GMT offsets, just\nas you note. 
But twenty-minute offsets were in use as recently as 1979\nin some places (Line Islands), and if you want to go back before about\n1950 you'd better be prepared to handle very weird offsets indeed.\n\nGiven that we were just lecturing the glibc guys about access to\npre-1970 timezone info, it doesn't sit very well with me to adopt a\nformat that is designed only to support recent practice.\n\nMy preference would be to store the GMT offset to the nearest minute,\nif we can afford the storage. What's the constraint exactly?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Jul 2002 10:40:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Units for storage of internal time zone offsets " }, { "msg_contents": "> Is there a reason for the restriction to one byte? Offhand I don't\n> recall that we store TZ offsets on disk at all...\n\nAh, we don't. Sorry I wasn't clear. This is only for the lookup table we\nuse to interpret time zones on *input*. It is contained in\nsrc/backend/utils/adt/datetime.c\n\nAnd it has no long-term ramifications; it is entirely self-contained and\ndoes not result in information being stored on disk.\n\nThis table does not have any concept of multiple values for a particular\nstring time zone (e.g. \"PST\" or \"EDT\"). If/when we incorporate the zic\nstuff we will still need this lookup table, unless we develop an API to\nallow finding these within the zic database.\n\n - Thomas\n", "msg_date": "Tue, 09 Jul 2002 08:10:11 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Units for storage of internal time zone offsets" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> Is there a reason for the restriction to one byte? Offhand I don't\n>> recall that we store TZ offsets on disk at all...\n\n> Ah, we don't. Sorry I wasn't clear. This is only for the lookup table we\n> use to interpret time zones on *input*. 
It is contained in\n> src/backend/utils/adt/datetime.c\n\nWell, sheesh. Let's widen the byte field to int and store all the\noffsets in minutes, or even seconds.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Jul 2002 11:18:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Units for storage of internal time zone offsets " }, { "msg_contents": "> Well, sheesh. Let's widen the byte field to int and store all the\n> offsets in minutes, or even seconds.\n\nIt doesn't really matter, since the table is used only internally, and\nonly holds current accepted values for time zone offsets. The current\nscheme works.\n\nIt might be useful to change the units, but how they are stored is an\ninternal detail without much importance. Someone at some point decided\nthat minimizing row width and table size was important (probably back in\n1989 ;) and the table is structured with that in mind.\n\n - Thomas\n", "msg_date": "Tue, 09 Jul 2002 08:27:23 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Units for storage of internal time zone offsets" } ]
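The arithmetic in this thread is easy to check: with offsets compressed into a single signed byte, dividing minutes by 10 cannot represent Nepal's +05:45, while dividing by 15 can, and even a +12:00 offset still fits comfortably. A quick sketch of that encoding (illustrative only; the real table is the static lookup array in `src/backend/utils/adt/datetime.c`):

```python
def encode(offset_minutes: int, quantum: int) -> int:
    """Compress a GMT offset (in minutes) into a small integer by the
    given quantum, as the one-byte lookup-table scheme does.
    Exact only when the quantum divides the offset evenly."""
    return offset_minutes // quantum

def decode(stored: int, quantum: int) -> int:
    return stored * quantum

nepal = 5 * 60 + 45   # +05:45, one of the "45 minute cases"

print(decode(encode(nepal, 10), 10))  # 340 -- divide-by-ten loses the :45
print(decode(encode(nepal, 15), 15))  # 345 -- divide-by-fifteen is exact
print(encode(12 * 60, 15))            # 48  -- still fits a signed byte
```

Storing minutes directly (Tom's preference) would also fit recent practice, but needs more than one byte once offsets exceed ±127 minutes.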
[ { "msg_contents": "I've nearly finished off the patch Christopher distributed. Creates the\nbetween node, and passes all regression tests except horology. I need\nto update outfuncs and readfuncs -- but hope to fix the below first.\n\nSeems I have a funny case left (Note the last comparison should be\nfalse):\n\n\nregression=# select 3 between 2 and 4;\n ?column? \n----------\n t\n(1 row)\n\nregression=# select 5 between 2 and 4;\n ?column? \n----------\n f\n(1 row)\n\nregression=# select 1 between 2 and 4;\n ?column? \n----------\n f\n(1 row)\n\nregression=# select 3 between 2 and 4 and 5 between 2 and 4;\n ?column? \n----------\n f\n(1 row)\n\nregression=# select 3 between 2 and 4 and 3 between 2 and 4;\n ?column? \n----------\n t\n(1 row)\n\nregression=# select 3 between 2 and 4 and 1 between 2 and 4;\n ?column? \n----------\n t\n(1 row)\n\n\n\nThe patch can be found at:\nhttp://www.zort.ca/patches/postgresql_misc/between.patch\n\n<a href=\"http://www.zort.ca/patches/postgresql_misc/between.patch\">PATCH\nHERE</a>\n\n\n\n\n", "msg_date": "09 Jul 2002 00:20:13 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "More fun with BETWEEN" }, { "msg_contents": "> Seems I have a funny case left (Note the last comparison should be\n> false):\n\nUgh... I forgot to initialize the result -- so it picked up the previous\ncalls value.\n\n", "msg_date": "09 Jul 2002 15:09:17 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Re: More fun with BETWEEN" } ]
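Rod's follow-up pins the bug on a result variable that was never initialized and so "picked up the previous call's value". In C that is undefined behavior, but the failure mode is easy to model: if the result is only assigned on the true path, whatever the last evaluation left behind leaks through. A Python stand-in (the module-level variable plays the role of the stale stack slot; this is not the actual patch code):

```python
_stale = False  # stands in for whatever the previous call left behind

def between_buggy(x, lo, hi):
    """Assigns the result only on the true path, so a previous True
    survives into the next call -- the behavior seen in the session."""
    global _stale
    if lo <= x <= hi:
        _stale = True
    return _stale

def between_fixed(x, lo, hi):
    result = False  # the one-line fix: initialize on every call
    if lo <= x <= hi:
        result = True
    return result

print(between_buggy(3, 2, 4))  # True
print(between_buggy(1, 2, 4))  # True  -- stale result, as reported
print(between_fixed(1, 2, 4))  # False
```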
[ { "msg_contents": "Hi all,\n\nJust wondering - what is everyone working on? There's not much going on on\nthe list at the moment save lots of talk about DROP COLUMN and CREATE\nCONVERSION.\n\nWhat's going on in everyone's pipelines?\n\nCheers,\n\nChris\n\n\n\n\n", "msg_date": "Tue, 9 Jul 2002 16:31:57 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Postgres Projects" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> Hi all,\n> \n> Just wondering - what is everyone working on? There's not much going on on\n> the list at the moment save lots of talk about DROP COLUMN and CREATE\n> CONVERSION.\n> \n> What's going on in everyone's pipelines?\n\nI am working on query_timeout right now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 9 Jul 2002 11:43:07 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgres Projects" } ]
[ { "msg_contents": "Tom,\n\nDoes the pg_tables view need to be fixed now that we have schemas?\n\nYep:\n\ntest2=# create schema a;\nCREATE SCHEMA\ntest2=# create schema b;\nCREATE SCHEMA\ntest2=# create table a.test (a int4);\nCREATE TABLE\ntest2=# create table b.test (b int4);\nCREATE TABLE\ntest2=# select * from pg_tables where tablename not like 'pg_%';\n tablename | tableowner | hasindexes | hasrules | hastriggers\n-----------+------------+------------+----------+-------------\n test | chriskl | f | f | f\n test | chriskl | f | f | f\n(2 rows)\n\nChris\n\n\n\n", "msg_date": "Tue, 9 Jul 2002 17:26:03 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "pg_tables and schemas" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Does the pg_tables view need to be fixed now that we have schemas?\n\nYeah, many of the system views need work. I haven't stopped to think\nabout it yet. If anyone else wants to come up with a proposal,\ngo for it...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Jul 2002 10:42:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_tables and schemas " }, { "msg_contents": "> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > Does the pg_tables view need to be fixed now that we have schemas?\n>\n> Yeah, many of the system views need work. I haven't stopped to think\n> about it yet. If anyone else wants to come up with a proposal,\n> go for it...\n\nWell, just so long as it doesn't get forgotten before 7.3 is released! 
I\nthink it would be enough to just let the namespaceid column get into the\nviews?\n\nI just noticed it when I started adding schema support to WebDB (phpPgAdmin\nnext gen)\n\nChris\n\n", "msg_date": "Wed, 10 Jul 2002 09:51:52 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: pg_tables and schemas " } ]
[ { "msg_contents": "Hello,\n\nThis is a short update about the current stage of pgaccess.\n\nThe development team will try to release 0.98.8 version together with\nPostgreSQL 7.3\n\nThe current areas of active development are -\n\nBoyan - the visual schema\nChris - reports and forms\n\nThere are ideas in the area of -\n\nJeff - windows installer\nBartus - overloaded functions, Hungarian translation\nJosh - documentation and tutorial\n\nThere is a new wave revived - to make the pgaccess a tool for writing small\napplications similar to MS Access. This is an old idea of Teo, that was just\nnoticed. Chris is actually writing in this direction. And I am personally\nquite fascinated by it. I heard that the governments of Germany and China\nfavour the Linux very much - pgaccess based on PostgreSQL might become\nreally nice UNIX based application development tool.\n\nPlease, send your ideas, needs or wishes. If they are in the covered areas -\nthe developers there will take them. If not - we would need some more\npeople.\n\nPeople for testing on different platforms will be very much welcome.\n\nAll best,\n\nIavor\n\n--\nwww.pgaccess.org\n\n\n\n", "msg_date": "Tue, 9 Jul 2002 12:36:32 +0200", "msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>", "msg_from_op": true, "msg_subject": "pgaccess update" } ]
[ { "msg_contents": "How do I unsubscribe from here?\n\nThank You,\nAnthony\n\n", "msg_date": "Tue, 9 Jul 2002 09:23:15 -0400", "msg_from": "\"Anthony W. Marino\" <anthony@AWMObjects.com>", "msg_from_op": true, "msg_subject": "Help Unsubscribing" } ]
[ { "msg_contents": "Hello,\n\nAs of today, a Bugzilla has been made available at -\n\nbugzilla.pgaccess.org\n\nThis is a pretty straight forward installation of Bugzilla 2.14.2\n\nIt is currently empty. There are even no components so the first bug\nsubmissions can be either request for components or have to wait a few days.\n\nAs we do not have much experience setting Bugzila for open source project\n(we use it for internal projects - with groups and permissions), all\ncomments are welcome.\n\nIavor\n\n--\nIavor Raytchev\nvery small technologies (a company of CEE Solutions)\n\nin case of emergency -\n\n call: + 43 676 639 46 49\nor write to: support@verysmall.org\n\nwww.verysmall.org\n\n", "msg_date": "Tue, 9 Jul 2002 20:41:41 +0200", "msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>", "msg_from_op": true, "msg_subject": "bugzilla.pgaccess.org" }, { "msg_contents": "Iavor Raytchev wrote:\n> \n> Hello,\n> \n> As of today, a Bugzilla has been made available at -\n> \n> bugzilla.pgaccess.org\n> \n> This is a pretty straight forward installation of Bugzilla 2.14.2\n> \n> It is currently empty. There are even no components so the first bug\n> submissions can be either request for components or have to wait a few days.\n> \n> As we do not have much experience setting Bugzila for open source project\n> (we use it for internal projects - with groups and permissions), all\n> comments are welcome.\n\nJust out of curiosity, what database is backing it?\n\nIf it isn't PostgreSQL, what about using PHP BugTracker instead? That\nruns on top of PostgreSQL.\n\nhttp://sourceforge.net/projects/phpbt/\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n", "msg_date": "Tue, 09 Jul 2002 17:28:16 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] bugzilla.pgaccess.org" }, { "msg_contents": "----- Original Message ----- \nFrom: \"Jan Wieck\" <JanWieck@Yahoo.com>\nTo: \"Iavor Raytchev\" <iavor.raytchev@verysmall.org>\nCc: \"pgaccess - developers\" <developers@pgaccess.org>; \"pgaccess - users\" <users@pgaccess.org>; \"pgsql-hackers\" <pgsql-hackers@postgresql.org>; \"pgsql-interfaces\" <pgsql-interfaces@postgresql.org>\nSent: Tuesday, July 09, 2002 5:28 PM\nSubject: Re: [HACKERS] bugzilla.pgaccess.org\n\n\n> Iavor Raytchev wrote:\n> > \n> > Hello,\n> > \n> > As of today, a Bugzilla has been made available at -\n> > \n> > bugzilla.pgaccess.org\n> > \n> > This is a pretty straight forward installation of Bugzilla 2.14.2\n> > \n> > It is currently empty. There are even no components so the first bug\n> > submissions can be either request for components or have to wait a few days.\n> > \n> > As we do not have much experience setting Bugzila for open source project\n> > (we use it for internal projects - with groups and permissions), all\n> > comments are welcome.\n> \n> Just out of curiosity, what database is backing it?\n> \n> If it isn't PostgreSQL, what about using PHP BugTracker instead? That\n> runs on top of PostgreSQL.\n> \n> http://sourceforge.net/projects/phpbt/\n> \n> \n> Jan\n\n\nOr Gborg... ;-)\n\nhttp://gborg.postgresql.org/project/gborg/projdisplay.php\n\nCheers,\nNed\n\n", "msg_date": "Tue, 9 Jul 2002 17:37:48 -0400", "msg_from": "\"Ned Lilly\" <ned@nedscape.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] bugzilla.pgaccess.org" }, { "msg_contents": "> > Just out of curiosity, what database is backing it?\n> > \n> > If it isn't PostgreSQL, what about using PHP BugTracker instead? 
That\n> > runs on top of PostgreSQL.\n> > \n> > http://sourceforge.net/projects/phpbt/\n> > \n> > \n> > Jan\n> \n> \n> Or Gborg... ;-)\n> \n> http://gborg.postgresql.org/project/gborg/projdisplay.php\n> \n> Cheers,\n> Ned\n\nAny other suggestions?\n\nIavor\n", "msg_date": "Wed, 10 Jul 2002 09:55:29 +0200", "msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] bugzilla.pgaccess.org" }, { "msg_contents": "On Wed, 2002-07-10 at 21:49, Iavor Raytchev wrote:\n> Josh Berkus said:\n> > Iavor,\n> >\n> >> Any other suggestions?\n> >\n> > I can tell you from experience that Double-Choco-Latte, another\n> > PHP/PostgreSQL tool, is really set up just for single projects. So it\n> > would work fine for PGAccess-only. However, DCL has its own problems\n> > and is not necessarily better than Mozilla; I personally don't think\n> > it's worth switching tools just to eat our own dogfood if we don't gain\n> > some functionality in the process.\n\nStill we could use the bugzilla variant that uses PostgreSQL for its DB.\n\nYou can download it from the link uder the perl seal on the page\nhttps://bugzilla.redhat.com/bugzilla/index.cgi \n\nIt points to ftp://people.redhat.com/dkl where are variants for mysql,\noracle and postgresql.\n\nThe only thing you have to do after install is changing the first lines\nof scripts as they currently ppoint to #!/usr/bonsaitools/bin/perl which\nyou may not have ;)\n\n---------------\nHannu\n\n\n", "msg_date": "10 Jul 2002 19:16:46 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: [pgaccess-users] RE: [HACKERS] bugzilla.pgaccess.org" }, { "msg_contents": "Iavor,\n\n> Any other suggestions?\n\nI can tell you from experience that Double-Choco-Latte, another\nPHP/PostgreSQL tool, is really set up just for single projects. So it\nwould work fine for PGAccess-only. 
However, DCL has its own problems\nand is not necessarily better than Mozilla; I personally don't think\nit's worth switching tools just to eat our own dogfood if we don't gain\nsome functionality in the process.\n\nI'd love to re-write one of these tools someday; they all have ghastly\nUI problems. Right. Just after I do the accounting program, and get\ncaught up on my tax filing, and re-paint the bathroom ...\n\nThe one thing that really \"bugs\" me about Mozilla/Issuezilla is that,\nif you click the link on an issue e-mail, you don't get automatically\nlogged in, forcing you to search for the bug a second time if you want\nto commment or close the issue. Grrr. Also there's no option *not* to\nget the darned e-mails for people who check the web interface\nregularly.\n\n-Josh \n", "msg_date": "Wed, 10 Jul 2002 08:44:59 -0700", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: [pgaccess-users] RE: [HACKERS] bugzilla.pgaccess.org" }, { "msg_contents": "> Still we could use the bugzilla variant that uses PostgreSQL for its DB.\n>\n> You can download it from the link uder the perl seal on the page\n> https://bugzilla.redhat.com/bugzilla/index.cgi\n>\n> It points to ftp://people.redhat.com/dkl where are variants for mysql,\n> oracle and postgresql.\n>\n> The only thing you have to do after install is changing the first lines\n> of scripts as they currently ppoint to #!/usr/bonsaitools/bin/perl which\n> you may not have ;)\n\nDon't you think it is better to wait for Bugzilla 2.18 that will support\nPostgreSQL fresh from the bugzilla.org - I would like to focus on pgaccess\nnow :)\n\n", "msg_date": "Wed, 10 Jul 2002 18:35:59 +0200", "msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>", "msg_from_op": true, "msg_subject": "Re: [pgaccess-users] RE: [HACKERS] bugzilla.pgaccess.org" }, { "msg_contents": "Hello everybody,\n\nI want to apologise for inflating all lists (as Chris noticed) with\ninsignificant discussions. 
I would like to invite all pgaccess involved\npeople to restrict their postings to developers@pgaccess.org, unless there\nis a good reason for doing it differently.\n\nThanks,\n\nIavor\n\nPS I get the impression that some of the lists repeat the messages more than\nonce and sometimes with a delay... or there is something wrong with my\nfilters...\n\n", "msg_date": "Wed, 10 Jul 2002 18:38:14 +0200", "msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>", "msg_from_op": true, "msg_subject": "inflating lists" }, { "msg_contents": "Josh Berkus said:\n> Iavor,\n>\n>> Any other suggestions?\n>\n> I can tell you from experience that Double-Choco-Latte, another\n> PHP/PostgreSQL tool, is really set up just for single projects. So it\n> would work fine for PGAccess-only. However, DCL has its own problems\n> and is not necessarily better than Mozilla; I personally don't think\n> it's worth switching tools just to eat our own dogfood if we don't gain\n> some functionality in the process.\n>\n> I'd love to re-write one of these tools someday; they all have ghastly\n> UI problems. Right. Just after I do the accounting program, and get\n> caught up on my tax filing, and re-paint the bathroom ...\n\nThanks, Josh. This was the answer I wanted to hear.\n\n> The one thing that really \"bugs\" me about Mozilla/Issuezilla is that,\n> if you click the link on an issue e-mail, you don't get automatically\n> logged in, forcing you to search for the bug a second time if you want\n> to commment or close the issue. Grrr. Also there's no option *not* to\n> get the darned e-mails for people who check the web interface\n> regularly.\n\nWhat don't you try bugzilla.bugzilla.org or something like that :)\n\nI mean - the guys at Mozilla have made a great product using Bugzilla. I\nwould be happy if we bring pgaccess at least to the level of Mozilla. Then\nwe can talk about changing the bug tracking system. 
Agreed?\n\n", "msg_date": "Wed, 10 Jul 2002 18:49:31 +0200 (CEST)", "msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>", "msg_from_op": true, "msg_subject": "Re: [pgaccess-users] RE: [HACKERS] bugzilla.pgaccess.org" }, { "msg_contents": "Iavor Raytchev wrote:\n> \n> Hello everybody,\n> \n> I want to apologise for inflating all lists (as Chris noticed) with\n> insignificant discussions. I would like to invite all pgaccess involved\n> people to restrict their postings to developers@pgaccess.org, unless there\n> is a good reason for doing it differently.\n\nI am not subscribed to that list. So you would've missed my 2 cents.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n", "msg_date": "Wed, 10 Jul 2002 13:13:22 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] inflating lists" }, { "msg_contents": "> How hard will the migration from MySQLzilla to PostgreSQLzilla be ?\n\nIs this a rhetoric question?\n\nI have no idea.\n\nA posting I saw (by one of the Bugzilla guys, I think) required something to\nbe done in PostgreSQL before they can migrate - something exactly related to\nthe issue of upgrading from one Bugzilla version to another.\n\nThat means - Bugzilla takes care of the database upon upgrade. I am pretty\nsure somebody will write a migration tool. 
Otherwise the whole noise is\nsomehow useless.\n\nOr not?\n\n", "msg_date": "Thu, 11 Jul 2002 22:50:53 +0200", "msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>", "msg_from_op": true, "msg_subject": "Re: bugzilla.pgaccess.org" }, { "msg_contents": "On Wed, 2002-07-10 at 18:35, Iavor Raytchev wrote:\n> > Still we could use the bugzilla variant that uses PostgreSQL for its DB.\n...\n> \n> Don't you think it is better to wait for Bugzilla 2.18 that will support\n> PostgreSQL fresh from the bugzilla.org - I would like to focus on pgaccess\n> now :)\n\nHow hard will the migration from MySQLzilla to PostgreSQLzilla be ?\n\n-----------\nHannu\n\n\n\n", "msg_date": "11 Jul 2002 23:20:39 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: [pgaccess-users] RE: [HACKERS] bugzilla.pgaccess.org" }, { "msg_contents": "In reply to Hannu Krosing\nIavor Raytchev wrote:\n> \n> > How hard will the migration from MySQLzilla to PostgreSQLzilla be ?\n> \n> Is this a rhetoric question?\n> \n> I have no idea.\n> \n> A posting I saw (by one of the Bugzilla guys, I think) required something to\n> be done in PostgreSQL before they can migrate - something exactly related to\n> the issue of upgrading from one Bugzilla version to another.\n\nI have included pgsql-hackers again, where this discussion originally\nstarted crossposted. \n\nHannu's question is absolutely not rhetoric. I see a concern about using\na MySQL based tool for PostgreSQL related project management on a public\nsite in it. \n\nThe Bugzilla project plans to support PostgreSQL in one of their future\nreleases, but this requires functionality in PostgreSQL, that is not\neven scheduled for 7.3. So the availability of a supported PostgreSQL\nport of Bugzilla is unpredictable at this time.\n\nMy opinion is that a project as closely related to PostgreSQL as\npgaccess should try to use PostgreSQL backed management tools. 
The\nswitch to PHP BugTracker or something else at this time would be\neasiest, since the Bugzilla installation on pgaccess.org is virgin and\ndoes not contain any data yet.\n\nThis is reason why I suggested that switch when you asked for comments\noriginally. And I have not yet seen any argument against it, nor any\nreason why to start off with a MySQL based Bugzilla version now.\nEspecially when there are equivalent solutions using PostgreSQL\navailable. \n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n", "msg_date": "Thu, 11 Jul 2002 18:28:47 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: bugzilla.pgaccess.org" }, { "msg_contents": ".. inside of each other using ECPG ?\n\nI have a situation where it would be advantages to open a cursor, retrieve\na tuple, then open another query based on the results of the first. Then\nwhen that query has been processed return to the first query and get the\nsecond tuple.\n\nIs this possible ?\n\ncheers,\nJim Parker\n", "msg_date": "Thu, 11 Jul 2002 19:24:41 -0400", "msg_from": "Jim Parker <hopeye@cfl.rr.com>", "msg_from_op": false, "msg_subject": "Can I have multiple cursors open ..." }, { "msg_contents": "> The Bugzilla project plans to support PostgreSQL in one of their future\n> releases, but this requires functionality in PostgreSQL, that is not\n> even scheduled for 7.3. So the availability of a supported PostgreSQL\n> port of Bugzilla is unpredictable at this time.\n\nI think he said that they needed DROP COLUMN functionality, which is being\nworked on for 7.3. 
(Although I haven't had time to work on it for a few\ndays)\n\nChris\n\n", "msg_date": "Fri, 12 Jul 2002 09:35:43 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] bugzilla.pgaccess.org" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> \n> > The Bugzilla project plans to support PostgreSQL in one of their future\n> > releases, but this requires functionality in PostgreSQL, that is not\n> > even scheduled for 7.3. So the availability of a supported PostgreSQL\n> > port of Bugzilla is unpredictable at this time.\n> \n> I think he said that they needed DROP COLUMN functionality, which is being\n> worked on for 7.3. (Although I haven't had time to work on it for a few\n> days)\n\nDROP COLUMN is the one we might solve in 7.3. ALTER COLUMN ...\nTYPE was mentioned too and I don't know when or how we will have\nthat one.\n\nREPLACE INTO is one more. Though you can work around it. If you\nsetup a BEFORE INSERT trigger, in which you do a table lock, then\ntry to UPDATE an existing row with NEW's key. If that succeeds,\nyou return NULL, suppressing the INSERT. If it fails, you return\nNEW letting the INSERT happen. The table lock (what Bradley\ncalled \"heavy locking\") is required because otherwise someone can\nsneak in between your update attempt and letting the INSERT\nhappen, getting exactly the same result and ... boom, duplicate\nkey error.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being\nright. #\n# Let's break this rule - forgive\nme. 
#\n#==================================================\nJanWieck@Yahoo.com #\n", "msg_date": "Thu, 11 Jul 2002 22:46:47 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] bugzilla.pgaccess.org" }, { "msg_contents": "Jan Wieck wrote:\n> Christopher Kings-Lynne wrote:\n> > \n> > > The Bugzilla project plans to support PostgreSQL in one of their future\n> > > releases, but this requires functionality in PostgreSQL, that is not\n> > > even scheduled for 7.3. So the availability of a supported PostgreSQL\n> > > port of Bugzilla is unpredictable at this time.\n> > \n> > I think he said that they needed DROP COLUMN functionality, which is being\n> > worked on for 7.3. (Although I haven't had time to work on it for a few\n> > days)\n> \n> DROP COLUMN is the one we might solve in 7.3. ALTER COLUMN ...\n> TYPE was mentioned too and I don't know when or how we will have\n> that one.\n\nChristopher plans to work on that when he is done DROP COLUMN.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Jul 2002 22:50:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] bugzilla.pgaccess.org" }, { "msg_contents": "> Is this possible ?\n\nSure.\n\n - Thomas\n", "msg_date": "Fri, 12 Jul 2002 07:25:00 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Can I have multiple cursors open ..." } ]
[ { "msg_contents": "I'm pleased to see some renewed interest in pg_access. It seems obvious to me that MS Access is not currently...and probably never will be able to handle data in a robust and reliable fashion. MS Access' apparent success is due to the user interface quality and \"ease of use\" for \"non-programmers\". The \"Relationships View\" window, for example, is one of the best and most useful features ever invented for any database toolset.\n\nIn reality PostGreSQL is in a \"strong position\" to fill the \"reliability void\" left by MS Access. However, the general public doesn't know much about the short comings of Access, due to MS advertising and sales efforts. It seems clear to me that the best way to \"promote\" the use of PostGreSQL is to offer more \"ease of use\" GUI interfaces for changing table structures, indexes, relationships, and upgrading older versions of files. Although it would be nice to have a native Windows version of PostGreSQL, as well as a Linux version, I expect Linux to replace Windows on a large number of PCs in the near future. I think that \"having a Windows version\" will not be a significant issue at that point. However, GUI based \"ease of use\" features WILL be an extremely important issue and will increase in importance for the rest of the forseeable future. Using a \"browser\" to implement the GUI toolset is a good start, but it probably won't support the same degree of user friendliness that is seen in the \"Relationships View\" window of MS Access, where a relationship can be instantly \"drawn\" with a mouse, and fields added to the Table with a simple \"right click\" on the Table header.\n\nIf we do a good job of providing GUI based tools, similar to MS Access, as well as conversion tools from Access to PostGreSQL for existing data, then PostGreSQL and Linux should quickly become the \"defacto standard\" toolset for all website servers. 
It seems to me like PostGreSQL is already on this pathway, \"like it or not\", and that focussing on the GUI toolset is essential to maintaining a good relationship with those who are new to the Linux world. Whether you realize it or not, there is a humongous tidal wave of MS Access users currently gathering enough database theory expertise to \"realize\" the MS \"snow job\" they've been given about its reliability. They will be forced into finding another solution and chances are VERY good they won't opt for MS SQL Server or Oracle. If we are ready to give a solution to them...great....sorry MS, but they seem to \"like us better\". If we are not ready, then our future won't have anything to do with MS, only our own lack of vision.\n\nAt our current level of GUI tools, we can't expect any positive response even from fairly talented self taught computer programmers who have been interested in Linux since 1998 or later. Soon, there will be many Windows IT Specialists who will be seriously investigating the Linux OS and the \"best database tools\" available for it. Add to this list \"end users\" who are fed up with daily Windows crashes and are experimenting with hosting their own DSL based website servers....and well...there's your tidal wave! Ready or not....the wave is directly behind us....time to \"paddle\" for all we're worth!\n\nSincerely,\n\nArthur Baldwin", "msg_date": "Tue, 9 Jul 2002 14:04:56 -0700", "msg_from": "\"Arthur@LinkLine.com\" <arthur@linkline.com>", "msg_from_op": true, "msg_subject": "Re: pg_access" }, { "msg_contents": "I'm afraid that I don't hold as much faith as you that Linux will become\nthe \"defacto standard\" toolset for all website servers. MS, despite its\nmajor shortcomings, is fairly slow and steady when it comes to\nimprovements to its OS. That said, Access is crap because no one uses it\nfor what it was built to be used for. And I would imagine that MS would\nrather spend their time/money on SQL Server development. I agree with\nyou that pgsql needs a more powerful, GUI interface. The QBE interface\nin Access is nice. However, I don't agree that it is unimportant to have\na Windows version. Point being, that Linux users are used to - and sadly\noften expect - poor interfaces with the programs they use. Windows users\nare far less forgiving. 
If, what you are talking about, is truly wide\nspread use for PC's and small-time web-servers then a Windows interface\nis damn near necessary.\n \nEric\n \n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of\nArthur@LinkLine.com\nSent: Tuesday, July 09, 2002 4:05 PM\nTo: PostGreSQL Hackers\nSubject: Re: [HACKERS] pg_access\n \nI'm pleased to see some renewed interest in pg_access. It seems obvious\nto me that MS Access is not currently...and probably never will be able\nto handle data in a robust and reliable fashion. MS Access' apparent\nsuccess is due to the user interface quality and \"ease of use\" for\n\"non-programmers\". The \"Relationships View\" window, for example, is one\nof the best and most useful features ever invented for any database\ntoolset.\n \nIn reality PostGreSQL is in a \"strong position\" to fill the \"reliability\nvoid\" left by MS Access. However, the general public doesn't know much\nabout the short comings of Access, due to MS advertising and sales\nefforts. It seems clear to me that the best way to \"promote\" the use of\nPostGreSQL is to offer more \"ease of use\" GUI interfaces for changing\ntable structures, indexes, relationships, and upgrading older versions\nof files. Although it would be nice to have a native Windows version of\nPostGreSQL, as well as a Linux version, I expect Linux to replace\nWindows on a large number of PCs in the near future. I think that\n\"having a Windows version\" will not be a significant issue at that\npoint. However, GUI based \"ease of use\" features WILL be an extremely\nimportant issue and will increase in importance for the rest of the\nforseeable future. 
Using a \"browser\" to implement the GUI toolset is a\ngood start, but it probably won't support the same degree of user\nfriendliness that is seen in the \"Relationships View\" window of MS\nAccess, where a relationship can be instantly \"drawn\" with a mouse, and\nfields added to the Table with a simple \"right click\" on the Table\nheader.\n \nIf we do a good job of providing GUI based tools, similar to MS Access,\nas well as conversion tools from Access to PostGreSQL for existing data,\nthen PostGreSQL and Linux should quickly become the \"defacto standard\"\ntoolset for all website servers. It seems to me like PostGreSQL is\nalready on this pathway, \"like it or not\", and that focussing on the GUI\ntoolset is essential to maintaining a good relationship with those who\nare new to the Linux world. Whether you realize it or not, there is a\nhumongous tidal wave of MS Access users currently gathering enough\ndatabase theory expertise to \"realize\" the MS \"snow job\" they've been\ngiven about its reliability. They will be forced into finding another\nsolution and chances are VERY good they won't opt for MS SQL Server or\nOracle. If we are ready to give a solution to them...great....sorry MS,\nbut they seem to \"like us better\". If we are not ready, then our future\nwon't have anything to do with MS, only our own lack of vision.\n \nAt our current level of GUI tools, we can't expect any positive response\neven from fairly talented self taught computer programmers who have been\ninterested in Linux since 1998 or later. Soon, there will be many\nWindows IT Specialists who will be seriously investigating the Linux OS\nand the \"best database tools\" available for it. Add to this list \"end\nusers\" who are fed up with daily Windows crashes and are experimenting\nwith hosting their own DSL based website servers....and well...there's\nyour tidal wave! 
Ready or not....the wave is directly behind us....time\nto \"paddle\" for all we're worth!\n \nSincerely,\n \nArthur Baldwin", "msg_date": "Tue, 9 Jul 2002 17:14:18 -0500", "msg_from": "\"Eric Redmond\" <redmonde@purdue.edu>", "msg_from_op": false, "msg_subject": "Re: pg_access" }, { "msg_contents": "Ahh and let us not forget that postgresql runs beautifully (so far as I can\ntell) under Mac OS X, an operating system whose user base demands\neasy-to-use GUI software.\n\nPgaccess is a different story however-- I haven't gotten it to work yet\nbecause tcl/tk is not quite working... Haven't had any time to look at it\nyet.\n\n\nBest regards,\nMichael Ditto\n\nOn 7/9/02 4:14 PM, \"Eric Redmond\" <redmonde@purdue.edu> wrote:\n\n> I'm afraid that I don't hold as much faith as you that Linux will become the\n> \"defacto standard\" toolset for all website servers. MS, despite its major\n> shortcomings, is fairly slow and steady when it comes to improvements to its\n> OS. That said, Access is crap because no one uses it for what it was built to\n> be used for. And I would imagine that MS would rather spend their time/money\n> on SQL Server development. I agree with you that pgsql needs a more powerful,\n> GUI interface. The QBE interface in Access is nice. However, I don't agree\n> that it is unimportant to have a Windows version. 
Point being, that Linux\n> users are used to - and sadly often expect - poor interfaces with the programs\n> they use. Windows users are far less forgiving. If, what you are talking\n> about, is truly wide spread use for PC's and small-time web-servers then a\n> Windows interface is damn near necessary.\n> \n> \n> \n> Eric\n> \n> \n> \n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Arthur@LinkLine.com\n> Sent: Tuesday, July 09, 2002 4:05 PM\n> To: PostGreSQL Hackers\n> Subject: Re: [HACKERS] pg_access\n> \n> \n> \n> I'm pleased to see some renewed interest in pg_access. It seems obvious to me\n> that MS Access is not currently...and probably never will be able to handle\n> data in a robust and reliable fashion. MS Access' apparent success is due to\n> the user interface quality and \"ease of use\" for \"non-programmers\". The\n> \"Relationships View\" window, for example, is one of the best and most useful\n> features ever invented for any database toolset.\n> \n> \n> \n> In reality PostGreSQL is in a \"strong position\" to fill the \"reliability void\"\n> left by MS Access. However, the general public doesn't know much about the\n> short comings of Access, due to MS advertising and sales efforts. It seems\n> clear to me that the best way to \"promote\" the use of PostGreSQL is to offer\n> more \"ease of use\" GUI interfaces for changing table structures, indexes,\n> relationships, and upgrading older versions of files. Although it would be\n> nice to have a native Windows version of PostGreSQL, as well as a Linux\n> version, I expect Linux to replace Windows on a large number of PCs in the\n> near future. I think that \"having a Windows version\" will not be a\n> significant issue at that point. However, GUI based \"ease of use\" features\n> WILL be an extremely important issue and will increase in importance for the\n> rest of the forseeable future. 
Using a \"browser\" to implement the GUI toolset\n> is a good start, but it probably won't support the same degree of user\n> friendliness that is seen in the \"Relationships View\" window of MS Access,\n> where a relationship can be instantly \"drawn\" with a mouse, and fields added\n> to the Table with a simple \"right click\" on the Table header.\n> \n> \n> \n> If we do a good job of providing GUI based tools, similar to MS Access, as\n> well as conversion tools from Access to PostGreSQL for existing data, then\n> PostGreSQL and Linux should quickly become the \"defacto standard\" toolset for\n> all website servers. It seems to me like PostGreSQL is already on this\n> pathway, \"like it or not\", and that focussing on the GUI toolset is essential\n> to maintaining a good relationship with those who are new to the Linux world.\n> Whether you realize it or not, there is a humongous tidal wave of MS Access\n> users currently gathering enough database theory expertise to \"realize\" the MS\n> \"snow job\" they've been given about its reliability. They will be forced into\n> finding another solution and chances are VERY good they won't opt for MS SQL\n> Server or Oracle. If we are ready to give a solution to them...great....sorry\n> MS, but they seem to \"like us better\". If we are not ready, then our future\n> won't have anything to do with MS, only our own lack of vision.\n> \n> \n> \n> At our current level of GUI tools, we can't expect any positive response even\n> from fairly talented self taught computer programmers who have been interested\n> in Linux since 1998 or later. Soon, there will be many Windows IT Specialists\n> who will be seriously investigating the Linux OS and the \"best database tools\"\n> available for it. Add to this list \"end users\" who are fed up with daily\n> Windows crashes and are experimenting with hosting their own DSL based website\n> servers....and well...there's your tidal wave! 
Ready or not....the wave is\n> directly behind us....time to \"paddle\" for all we're worth!\n> \n> \n> \n> Sincerely,\n> \n> \n> \n> Arthur Baldwin\n> \n\n\n", "msg_date": "Tue, 09 Jul 2002 17:03:55 -0600", "msg_from": "\"Michael J. Ditto\" <janus@frii.com>", "msg_from_op": false, "msg_subject": "Re: pg_access" } ]
[ { "msg_contents": "In syscache.c, the structure cachedesc contains a field reloidattr that is\nsupposed to contain the number of an attribute that is an OID reference to\nanother table. But what if there are two such attributes?\n\nThe concrete case is an index on pg_cast (castsource, casttarget), which\nboth reference pg_type.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 10 Jul 2002 00:17:52 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Question about syscache" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> In syscache.c, the structure cachedesc contains a field reloidattr that is\n> supposed to contain the number of an attribute that is an OID reference to\n> another table. But what if there are two such attributes?\n\nSpecifically, it is a reference to pg_class.\n\n> The concrete case is an index on pg_cast (castsource, casttarget), which\n> both reference pg_type.\n\nThat is not a reference to pg_class, so you should not make reloidattr\nreference it.\n\nThe point of reloidattr is that during a relation cache clear event,\nit allows catcache to find all catcache rows that are related to that\nrelation cache entry (eg, pg_attribute, pg_trigger, etc). So far\nthere's not been need for more than one such attribute per tuple.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Jul 2002 18:36:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Question about syscache " } ]
[ { "msg_contents": "There seems to have been an accumulation lately of stuff that was simply\ndumped into the source tree without any sort of integration. I am\nparticularly talking about interfaces/ssl and interfaces/libpqxx. No\ndoubt both of these things are useful in the end, but as they are right\nnow they're a headache waiting to happen.\n\nCould someone try to address the following issues?\n\nSSL:\n\n* A bunch of cryptic configuration files -- what do they do?\n\n* Weird shell scripts -- what do they do?\n\n* The shell scripts are written in a completely unportable fashion and\nhave inappropriate names (surely PostgreSQL isn't the only application in\nthe world that allows to \"mkcert\").\n\n* They don't even belong into interfaces.\n\n* No build instructions, let alone a makefile.\n\nLibpqxx:\n\n* I'm no C++ whizz, but I guarantee that this coding style is not nearly\nas portable as we've tried to make libpq++ be. Who wants to answer those\nsupport calls all over again?\n\n* What's the deal with libpq++ vs. libpqxx? Who's going to want to\nexplain that to the crowd for the next 5 years?\n\n* Bogus Automake stuff -- hurts my eyes. ;-)\n\n* Doxygen -- is that going to be a quasi-required tool now?\n\n* Bonus points for documentation in DocBook format -- but unfortunately\nversion 4.1, and unfortunately not integrated with the rest of the\ndocumentation set.\n\n* No build integration.\n\n* Why are half the text files executable?\n\nPersonally, I'm uneasy about carrying around another interface library\nthat appears to have no basis in any sort of standard.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n\n", "msg_date": "Wed, 10 Jul 2002 00:18:16 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Unintegrated stuff in source tree" }, { "msg_contents": "Peter Eisentraut wrote:\n> There seems to have been an accumulation lately of stuff that was simply\n> dumped into the source tree without any sort of integration. 
I am\n> particularly talking about interfaces/ssl and interfaces/libpqxx. No\n> doubt both of these things are useful in the end, but as they are right\n> now they're a headache waiting to happen.\n> \n> Could someone try to address the following issues?\n> \n> SSL:\n> \n> * A bunch of cryptic configuration files -- what do they do?\n> \n> * Weird shell scripts -- what do they do?\n> \n> * The shell scripts are written in a completely unportable fashion and\n> have inappropriate names (surely PostgreSQL isn't the only application in\n> the world that allows to \"mkcert\").\n> \n> * They don't even belong into interfaces.\n> \n> * No build instructions, let alone a makefile.\n\nI have requested info from the ssl author with no replies. Yank it from\nCVS or I will do the honors.\n\n> Libpqxx:\n> \n> * I'm no C++ whizz, but I guarantee that this coding style is not nearly\n> as portable as we've tried to make libpq++ be. Who wants to answer those\n> support calls all over again?\n> \n> * What's the deal with libpq++ vs. libpqxx? Who's going to want to\n> explain that to the crowd for the next 5 years?\n> \n> * Bogus Automake stuff -- hurts my eyes. ;-)\n> \n> * Doxygen -- is that going to be a quasi-required tool now?\n> \n> * Bonus points for documentation in DocBook format -- but unfortunately\n> version 4.1, and unfortunately not integrated with the rest of the\n> documentation set.\n> \n> * No build integration.\n> \n> * Why are half the text files executable?\n> \n> Personally, I'm uneasy about carrying around another interface library\n> that appears to have no basis in any sort of standard.\n\nThe idea was that someone with more knowledge of PostgreSQL would\nintegrate the new C++ interface into our build system. If it doesn't\nhappen, we will yank it before 7.3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 9 Jul 2002 23:06:10 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unintegrated stuff in source tree" }, { "msg_contents": "> > There seems to have been an accumulation lately of stuff that was simply\n> > dumped into the source tree without any sort of integration. I am\n> > particularly talking about interfaces/ssl and interfaces/libpqxx. No\n> > doubt both of these things are useful in the end, but as they are right\n> > now they're a headache waiting to happen.\n\n> I have requested info from the ssl author with no replies. Yank it from\n> CVS or I will do the honors.\n\nLet's not yank the SSL stuff just yet. It's all good stuff. Let's give him\ntime to reply before 7.3.\n\nChris\n\n", "msg_date": "Wed, 10 Jul 2002 11:18:59 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Unintegrated stuff in source tree" }, { "msg_contents": "On Tue, Jul 09, 2002 at 11:06:10PM -0400, Bruce Momjian wrote:\n> Peter Eisentraut wrote:\n> > Personally, I'm uneasy about carrying around another interface library\n> > that appears to have no basis in any sort of standard.\n\nErm, upon which standards are the other language interfaces based?\n\n> The idea was that someone with more knowledge of PostgreSQL would\n> integrate the new C++ interface into our build system. 
If it doesn't\n> happen, we will yank it before 7.3.\n\nHas the author been given CVS access yet?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Tue, 9 Jul 2002 23:19:11 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": false, "msg_subject": "Re: Unintegrated stuff in source tree" }, { "msg_contents": "Neil Conway wrote:\n> On Tue, Jul 09, 2002 at 11:06:10PM -0400, Bruce Momjian wrote:\n> > Peter Eisentraut wrote:\n> > > Personally, I'm uneasy about carrying around another interface library\n> > > that appears to have no basis in any sort of standard.\n> \n> Erm, upon which standards are the other language interfaces based?\n> \n> > The idea was that someone with more knowledge of PostgreSQL would\n> > integrate the new C++ interface into our build system. If it doesn't\n> > happen, we will yank it before 7.3.\n> \n> Has the author been given CVS access yet?\n\nYes, he has. I got him in touch with Marc. I was just pointing out\nthat while he knows C++, he needs help getting the connecting stuff\nmerged to our coding style.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 9 Jul 2002 23:20:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unintegrated stuff in source tree" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > > There seems to have been an accumulation lately of stuff that was simply\n> > > dumped into the source tree without any sort of integration. I am\n> > > particularly talking about interfaces/ssl and interfaces/libpqxx. No\n> > > doubt both of these things are useful in the end, but as they are right\n> > > now they're a headache waiting to happen.\n> \n> > I have requested info from the ssl author with no replies. 
Yank it from\n> > CVS or I will do the honors.\n> \n> Let's not yank the SSL stuff just yet. It's all good stuff. Let's give him\n> time to reply before 7.3.\n\nOK, not knowing SSL, I can't judge it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 9 Jul 2002 23:21:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unintegrated stuff in source tree" }, { "msg_contents": "On Wed, Jul 10, 2002 at 12:18:16AM +0200, Peter Eisentraut wrote:\n> There seems to have been an accumulation lately of stuff that was simply\n> dumped into the source tree without any sort of integration. I am\n> particularly talking about interfaces/ssl and interfaces/libpqxx. No\n> doubt both of these things are useful in the end, but as they are right\n> now they're a headache waiting to happen.\n \nWell, I guess as the author I should comment on the libpqxx side of \nthings here. We decided to drop it into the source tree as-is because \n(1) it's fairly hard to come by the autoconf/automake expertise required \nto make it work from the main source tree's configure script, and (2) not \nall systems or C++ compilers support enough of the Standard to make it \nwork, so it ought to remain a separate, optional thing in the beginning.\n\n\n> Libpqxx:\n> \n> * I'm no C++ whizz, but I guarantee that this coding style is not nearly\n> as portable as we've tried to make libpq++ be. Who wants to answer those\n> support calls all over again?\n \nPlease explain. I'm sure this is not all portable, but what does coding\nstyle have to do with that? Sure, Microsoft compilers have a problem\nwith it, but that's just because they don't support the basic concepts\nof the language. We've tried to fix that, but it's very hard to work \naround a bad compiler. 
If your compiler doesn't handle C++, don't try \nto compile C++ with it. I'd try to run it through more critical\ncompilers, but according to existing compliance tests, gcc 2.95 through\n3.1 happen to be the best compilers around.\n\nAs far as I know, libpqxx relies on only two things: first, the C++\nstandard, and second, libpq. If your system doesn't support those,\nwhy bother addressing PostgreSQL from C++ in the first place?\n\n\n> * What's the deal with libpq++ vs. libpqxx? Who's going to want to\n> explain that to the crowd for the next 5 years?\n \nFrankly, libpq++ was severely broken. That's a simple story to tell\nthe crowd in 5 years' time. As it turned out, there was no way to\nfix it except by writing a completely new library. Use libpq++ if\nyou must; libpqxx if you have the choice. If your compiler only\nsupports C++ as \"A Better C\" then libpqxx is not for you. If there\nis some other problem with the code--and I can't say I've heard of any\nsuch case in a while--then it ought to be fixed.\n\n\n> * Bogus Automake stuff -- hurts my eyes. ;-)\n \nPlease elaborate. Like the docs say, the automake stuff was contributed\nby external developers, and I merely tried to edit it. If you could\njust say *what* is so bogus about it, perhaps it could be fixed. Just\nsaying that it's bogus and that it hurts your eyes isn't going to help\nanyone. The library has its own automake setup so that it can be\ninstalled separately on systems that support it. Hopefully someone\nwill see fit to integrate it into the existing setup at some point. This\nis not proprietary software, after all.\n\n\n> * Doxygen -- is that going to be a quasi-required tool now?\n\nIt comes with docs generated by doxygen. I don't think that means\nthat doxygen is \"quasi-required\" any more than emacs is, just because\nsome of the text may have been written with it. If reading HTML is a\nproblem for you, you have other problems. 
Sure, re-generate the docs\nwith doxygen if you like; you're going to end up with essentially the\nsame stuff that's bundled with the library. If you think that's a\nuseful exercise, you're better off with doxygen. If you don't, there's\nnothing wrong with the way things are.\n\n\n> * Bonus points for documentation in DocBook format -- but unfortunately\n> version 4.1, and unfortunately not integrated with the rest of the\n> documentation set.\n> \n> * No build integration.\n\nThese are on the to-do list, and some are beyond my control so I can't\nreally address them. But if you have time to complain about them, \nperhaps you can help fix them as well. That would be most welcome; I've\nessentially been flying blind when writing the docs, mimicking what was\nalready there.\n\n \n> * Why are half the text files executable?\n \nInteresting. Haven't been able to find a single file with this problem\nin the libpqxx distribution archive, so they may have been mangled \nsomewhere else. Where did you find the archive with this problem?\n\n\n> Personally, I'm uneasy about carrying around another interface library\n> that appears to have no basis in any sort of standard.\n\nThis suggests that either you haven't actually looked at it, or you're\nunfamiliar with C++. The thing about libpqxx is that it makes database\naccess adhere to standard C++ interfaces. So saying that it has \"no\nbasis in any sort of standard\" is just plain wrong. In fact, some C++\nprogrammers have found this tremendously useful, and some have chosen\nPostgreSQL over competing databases because of this library.\n\nNow if you want to maintain that it doesn't adhere to an existing, set\nstandard for C++ database access you may have a point--because there\ncurrently are none (unless you count \"industry standards\" like the \npitiful crap MFC users are forced to put up with). 
But someday there \nwill be, and rest assured that they will look very much like the way \nlibpqxx does things, because it sticks very closely to existing STL \nstandards. So far, all C++ experts I've presented this to (including\nsome involved in the C++ standards process, albeit loosely) have been \nvery supportive of libpqxx because of this existing lack of standards, \nand the way libpqxx tries to fit things in.\n\nIn my opinion, libpqxx is a good replacement for libpq++. It does more\nto conform to standards, it fixes some serious problems with the original\ndesign, and it makes it easy to write solid programs using PostgreSQL. In \nthe future it will probably be extended to cover other databases as well, \nbut for now it's a competitive advantage. Finally, it fills a void in the\nC++ world in a way that may someday define new standards. I don't think\nthis would be hard to explain to new or existing programmers.\n\n\nJeroen\n\n", "msg_date": "Wed, 10 Jul 2002 05:38:45 +0200", "msg_from": "jtv <jtv@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Unintegrated stuff in source tree" }, { "msg_contents": "On Tue, Jul 09, 2002 at 11:20:47PM -0400, Bruce Momjian wrote:\n> > \n> > Has the author been given CVS access yet?\n> \n> Yes, he has. I got him in touch with Marc. I was just pointing out\n> that while he knows C++, he needs help getting the connecting stuff\n> merged to our coding style.\n\nHaven't had a reply from Marc yet, so I guess I don't have CVS access\nyet. Anyway, my main problem at this time is know how to begin to\nintegrate the automake/autoconf stuff into the main source tree. 
Can't\nsay I know a whole lot about these tools or how PostgreSQL uses them,\nso as you say I'll need some help making this work in harmony with the \nPostgreSQL stuff.\n\nAny takers?\n\n\nJeroen\n\n", "msg_date": "Wed, 10 Jul 2002 05:48:37 +0200", "msg_from": "jtv <jtv@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Unintegrated stuff in source tree" }, { "msg_contents": "On Wed, 10 Jul 2002, jtv wrote:\n\n> On Tue, Jul 09, 2002 at 11:20:47PM -0400, Bruce Momjian wrote:\n> > >\n> > > Has the author been given CVS access yet?\n> >\n> > Yes, he has. I got him in touch with Marc. I was just pointing out\n> > that while he knows C++, he needs help getting the connecting stuff\n> > merged to our coding style.\n>\n> Haven't had a reply from Marc yet, so I guess I don't have CVS access\n> yet. Anyway, my main problem at this time is know how to begin to\n> integrate the automake/autoconf stuff into the main source tree. Can't\n> say I know a whole lot about these tools or how PostgreSQL uses them,\n> so as you say I'll need some help making this work in harmony with the\n> PostgreSQL stuff.\n>\n> Any takers?\n\nDamn, thanks for the reminder ... spent today working on the mailing list\nlag, and totally forgot about it ... will get that all setup tomorrow,\nsorry :(\n\n\n", "msg_date": "Wed, 10 Jul 2002 01:17:29 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Unintegrated stuff in source tree" } ]
[ { "msg_contents": "Dear Eric,\n\nThanks for your input! I'm still in favor of developing a Windows version of PostGreSQL. But I don't think that intelligent people are going to want to use any version of Windows (especially since the news of DRMOS and TCPA included in Windows XP and soon to be implemented in all future releases of Windows 2000 Server) to host their own website via their DSL line. Another few reasons for this choice are the constant crashes of ALL versions of Windows and the increasing lack of responsiveness to fix bugs or admit to \"less than advertised\" limitations.\n\nAlso, people who don't appreciate the far less than honest approach to Access users that MS has taken, are not about to spend thousands of dollars on MS SQL Server. At least not at a time when all signs point to abandoning Windows in favor of Linux, as a \"necessary\" step, albeit painful. \"Necessary\" because of the improved security that is possible, with sufficient Linux expertise and because of the lowered risk of being at only one company's mercy...as pointed out by the senator from Peru and many others since.\n\nAllowing MS to continue a 90 + percent market share (by remaining blissfully ignorant) until TCPA is fully implemented in all supported versions of Windows is nothing short of total insanity. I believe that most Americans are intelligent enough to realize this and avoid that scenario by making the switch to Linux. I don't believe that we Americans are stupid enough to let a company like MS take away almost every freedom that now exists in the field of technological development! If we are, then we will shortly lose our other freedoms as well...through ignorance and apathy.\n\nI believe that we will see many mfgs of motherboards that don't include the \"Fritz chip\" that are designed for those who vote for Linux and make the switch to an unfamiliar OS...primarily due to their discovery of the TCPA/MS agenda. Am I sure about this? Yes! 
Look at the intense interest shown by other countries in Linux application development for the GUI desktop. Do you think they will abandon their investment? I don't think so. Especially not in Taiwan or China where flexible and compatible \"hard Real Time\" versions of Linux are a primary development focus...to be marketed here in the US!\n\n(Examples: http://www.redsonic.com and China Soft...their website link can be found at the Redsonic website....hope you can read Chinese!)\n\nRedsonic's toolset will beat WinCE in the near future...without any doubt. The performance improvement and user friendliness of their tools far exceed any version of WinCE.\n\nAnd Redsonic isn't the only \"Real Time\" competitor. There are many, many others...and a steady flow of new ones.\n\nIf we as Americans don't make the switch to Linux instead of Windows, including end users at home, then I can think of one Biblical phrase that fits, \"Don't cast your pearls before swine, lest they turn and rend you\". In other words, \"send back your PC to the store\"...we're too stupid and self absorbed to have the privilege of using them or to enjoy our current freedoms. Switching to Linux is currently a \"pain\" to be sure...but it is less painful than what will surely follow if the MS plan works. \"A word to the wise is sufficient\". And it won't be long until it is much less painful to make the switch.\n\nSincerely,\n\nArthur Baldwin\n\nI'm afraid that I don't hold as much faith as you that Linux will become the \"defacto standard\" toolset for all website servers. MS, despite its major shortcomings, is fairly slow and steady when it comes to improvements to its OS. That said, Access is crap because no one uses it for what it was built to be used for. And I would imagine that MS would rather spend their time/money on SQL Server development. I agree with you that pgsql needs a more powerful, GUI interface. The QBE interface in Access is nice. 
However, I don't agree that it is unimportant to have a Windows version. Point being, that Linux users are used to - and sadly often expect - poor interfaces with the programs they use. Windows users are far less forgiving. If, what you are talking about, is truly wide spread use for PC's and small-time web-servers then a Windows interface is damn near necessary.\n\n \n\nEric\n\n \n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org [mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Arthur@LinkLine.com\nSent: Tuesday, July 09, 2002 4:05 PM\nTo: PostGreSQL Hackers\nSubject: Re: [HACKERS] pg_access\n\n \n\nI'm pleased to see some renewed interest in pg_access. It seems obvious to me that MS Access is not currently...and probably never will be able to handle data in a robust and reliable fashion. MS Access' apparent success is due to the user interface quality and \"ease of use\" for \"non-programmers\". The \"Relationships View\" window, for example, is one of the best and most useful features ever invented for any database toolset.\n\n \n\nIn reality PostGreSQL is in a \"strong position\" to fill the \"reliability void\" left by MS Access. However, the general public doesn't know much about the short comings of Access, due to MS advertising and sales efforts. It seems clear to me that the best way to \"promote\" the use of PostGreSQL is to offer more \"ease of use\" GUI interfaces for changing table structures, indexes, relationships, and upgrading older versions of files. Although it would be nice to have a native Windows version of PostGreSQL, as well as a Linux version, I expect Linux to replace Windows on a large number of PCs in the near future. I think that \"having a Windows version\" will not be a significant issue at that point. However, GUI based \"ease of use\" features WILL be an extremely important issue and will increase in importance for the rest of the forseeable future. 
Using a \"browser\" to implement the GUI toolset is a good start, but it probably won't support the same degree of user friendliness that is seen in the \"Relationships View\" window of MS Access, where a relationship can be instantly \"drawn\" with a mouse, and fields added to the Table with a simple \"right click\" on the Table header.\n\n \n\nIf we do a good job of providing GUI based tools, similar to MS Access, as well as conversion tools from Access to PostGreSQL for existing data, then PostGreSQL and Linux should quickly become the \"defacto standard\" toolset for all website servers. It seems to me like PostGreSQL is already on this pathway, \"like it or not\", and that focussing on the GUI toolset is essential to maintaining a good relationship with those who are new to the Linux world. Whether you realize it or not, there is a humongous tidal wave of MS Access users currently gathering enough database theory expertise to \"realize\" the MS \"snow job\" they've been given about its reliability. They will be forced into finding another solution and chances are VERY good they won't opt for MS SQL Server or Oracle. If we are ready to give a solution to them...great....sorry MS, but they seem to \"like us better\". If we are not ready, then our future won't have anything to do with MS, only our own lack of vision.\n\n \n\nAt our current level of GUI tools, we can't expect any positive response even from fairly talented self taught computer programmers who have been interested in Linux since 1998 or later. Soon, there will be many Windows IT Specialists who will be seriously investigating the Linux OS and the \"best database tools\" available for it. Add to this list \"end users\" who are fed up with daily Windows crashes and are experimenting with hosting their own DSL based website servers....and well...there's your tidal wave! 
Ready or not....the wave is directly behind us....time to \"paddle\" for all we're worth!\n\n \n\nSincerely,\n\n \n\nArthur Baldwin", "msg_date": "Tue, 9 Jul 2002 17:35:18 -0700", "msg_from": "\"Arthur@LinkLine.com\" <arthur@linkline.com>", "msg_from_op": true, "msg_subject": "Re: pg_access" }, { "msg_contents": "Yeah, sure. Whatever.\n \nEric\n \n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of\nArthur@LinkLine.com\nSent: Tuesday, July 09, 2002 7:35 PM\nTo: PostGreSQL Hackers\nSubject: Re: [HACKERS] pg_access\n \nDear Eric,\nThanks for your input! I'm still in favor of developing a Windows\nversion of PostGreSQL. But I don't think that intelligent people are\ngoing to want to use any version of Windows (especially since the news\nof DRMOS and TCPA included in Windows XP and soon to be implemented in\nall future releases of Windows 2000 Server) to host their own website\nvia their DSL line. Another few reasons for this choice are the\nconstant crashes of ALL versions of Windows and the increasing lack of\nresponsiveness to fix bugs or admit to \"less than advertised\"\nlimitations.\nAlso, people who don't appreciate the far less than honest approach to\nAccess users that MS has taken, are not about to spend thousands of\ndollars on MS SQL Server. 
At least not at a time when all signs point\nto abandoning Windows in favor of Linux, as a \"necessary\" step, albeit\npainful. \"Necessary\" because of the improved security that is possible,\nwith sufficient Linux expertise and because of the lowered risk of being\nat only one company's mercy...as pointed out by the senator from Peru\nand many others since.\nAllowing MS to continue a 90 + percent market share (by remaining\nblissfully ignorant) until TCPA is fully implemented in all supported\nversions of Windows is nothing short of total insanity. I believe that\nmost Americans are intelligent enough to realize this and avoid that\nscenario by making the switch to Linux. I don't believe that we\nAmericans are stupid enough to let a company like MS take away almost\nevery freedom that now exists in the field of technological development!\nIf we are, then we will shortly lose our other freedoms as\nwell...through ignorance and apathy.\nI believe that we will see many mfgs of motherboards that don't include\nthe \"Fritz chip\" that are designed for those who vote for Linux and make\nthe switch to an unfamiliar OS...primarily due to their discovery of the\nTCPA/MS agenda. Am I sure about this? Yes! Look at the intense\ninterest shown by other countries in Linux application development for\nthe GUI desktop. Do you think they will abandon their investment? I\ndon't think so. Especially not in Taiwan or China where flexible and\ncompatible \"hard Real Time\" versions of Linux are a primary development\nfocus...to be marketed here in the US!\n(Examples: http://www.redsonic.com and China Soft...their website link\ncan be found at the Redsonic website....hope you can read Chinese!)\nRedsonic's toolset will beat WinCE in the near future...without any\ndoubt. The performance improvement and user friendliness of their tools\nfar exceed any version of WinCE.\nAnd Redsonic isn't the only \"Real Time\" competitor. 
There are many,\nmany others...and a steady flow of new ones.\nIf we as Americans don't make the switch to Linux instead of Windows,\nincluding end users at home, then I can think of one Biblical phrase\nthat fits, \"Don't cast your pearls before swine, lest they turn and rend\nyou\". In other words, \"send back your PC to the store\"...we're too\nstupid and self absorbed to have the privilege of using them or to enjoy\nour current freedoms. Switching to Linux is currently a \"pain\" to be\nsure...but it is less painful than what will surely follow if the MS\nplan works. \"A word to the wise is sufficient\". And it won't be long\nuntil it is much less painful to make the switch.\nSincerely,\nArthur Baldwin\nI'm afraid that I don't hold as much faith as you that Linux will become\nthe \"defacto standard\" toolset for all website servers. MS, despite its\nmajor shortcomings, is fairly slow and steady when it comes to\nimprovements to its OS. That said, Access is crap because no one uses it\nfor what it was built to be used for. And I would imagine that MS would\nrather spend their time/money on SQL Server development. I agree with\nyou that pgsql needs a more powerful, GUI interface. The QBE interface\nin Access is nice. However, I don't agree that it is unimportant to have\na Windows version. Point being, that Linux users are used to - and sadly\noften expect - poor interfaces with the programs they use. Windows users\nare far less forgiving. If, what you are talking about, is truly wide\nspread use for PC's and small-time web-servers then a Windows interface\nis damn near necessary.\n \nEric\n \n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of\nArthur@LinkLine.com\nSent: Tuesday, July 09, 2002 4:05 PM\nTo: PostGreSQL Hackers\nSubject: Re: [HACKERS] pg_access\n \nI'm pleased to see some renewed interest in pg_access. 
It seems obvious\nto me that MS Access is not currently...and probably never will be able\nto handle data in a robust and reliable fashion. MS Access' apparent\nsuccess is due to the user interface quality and \"ease of use\" for\n\"non-programmers\". The \"Relationships View\" window, for example, is one\nof the best and most useful features ever invented for any database\ntoolset.\n \nIn reality PostGreSQL is in a \"strong position\" to fill the \"reliability\nvoid\" left by MS Access. However, the general public doesn't know much\nabout the short comings of Access, due to MS advertising and sales\nefforts. It seems clear to me that the best way to \"promote\" the use of\nPostGreSQL is to offer more \"ease of use\" GUI interfaces for changing\ntable structures, indexes, relationships, and upgrading older versions\nof files. Although it would be nice to have a native Windows version of\nPostGreSQL, as well as a Linux version, I expect Linux to replace\nWindows on a large number of PCs in the near future. I think that\n\"having a Windows version\" will not be a significant issue at that\npoint. However, GUI based \"ease of use\" features WILL be an extremely\nimportant issue and will increase in importance for the rest of the\nforseeable future. Using a \"browser\" to implement the GUI toolset is a\ngood start, but it probably won't support the same degree of user\nfriendliness that is seen in the \"Relationships View\" window of MS\nAccess, where a relationship can be instantly \"drawn\" with a mouse, and\nfields added to the Table with a simple \"right click\" on the Table\nheader.\n \nIf we do a good job of providing GUI based tools, similar to MS Access,\nas well as conversion tools from Access to PostGreSQL for existing data,\nthen PostGreSQL and Linux should quickly become the \"defacto standard\"\ntoolset for all website servers. 
It seems to me like PostGreSQL is\nalready on this pathway, \"like it or not\", and that focussing on the GUI\ntoolset is essential to maintaining a good relationship with those who\nare new to the Linux world. Whether you realize it or not, there is a\nhumongous tidal wave of MS Access users currently gathering enough\ndatabase theory expertise to \"realize\" the MS \"snow job\" they've been\ngiven about its reliability. They will be forced into finding another\nsolution and chances are VERY good they won't opt for MS SQL Server or\nOracle. If we are ready to give a solution to them...great....sorry MS,\nbut they seem to \"like us better\". If we are not ready, then our future\nwon't have anything to do with MS, only our own lack of vision.\n \nAt our current level of GUI tools, we can't expect any positive response\neven from fairly talented self taught computer programmers who have been\ninterested in Linux since 1998 or later. Soon, there will be many\nWindows IT Specialists who will be seriously investigating the Linux OS\nand the \"best database tools\" available for it. Add to this list \"end\nusers\" who are fed up with daily Windows crashes and are experimenting\nwith hosting their own DSL based website servers....and well...there's\nyour tidal wave! Ready or not....the wave is directly behind us....time\nto \"paddle\" for all we're worth!\n \nSincerely,\n \nArthur Baldwin", "msg_date": "Tue, 9 Jul 2002 19:37:19 -0500", "msg_from": "\"Eric Redmond\" <redmonde@purdue.edu>", "msg_from_op": false, "msg_subject": "Re: pg_access" }, { "msg_contents": "Just A note on Linux.\nI consider Linux on the same level as Windows. I have been able to crash it (kernel 2.4.18) just as well and the lib C API is nowhere near complete or fault tolerant to any commercial OS. For the moment nothing beats SCO Openserver/Unixware stability on PC. These are OS worth paying for. \n\n ----- Original Message ----- \n From: Eric Redmond \n To: 'PostGreSQL Hackers' \n Sent: Wednesday, July 10, 2002 10:37 AM\n Subject: Re: [HACKERS] pg_access\n\n\n Yeah, sure. Whatever.\n\n \n\n Eric\n\n \n\n -----Original Message-----\n From: pgsql-hackers-owner@postgresql.org [mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Arthur@LinkLine.com\n Sent: Tuesday, July 09, 2002 7:35 PM\n To: PostGreSQL Hackers\n Subject: Re: [HACKERS] pg_access\n\n \n\n Dear Eric,\n\n Thanks for your input! I'm still in favor of developing a Windows version of PostGreSQL. But I don't think that intelligent people are going to want to use any version of Windows (especially since the news of DRMOS and TCPA included in Windows XP and soon to be implemented in all future releases of Windows 2000 Server) to host their own website via their DSL line. 
Another few reasons for this choice are the constant crashes of ALL versions of Windows and the increasing lack of responsiveness to fix bugs or admit to \"less than advertised\" limitations.\n\n Also, people who don't appreciate the far less than honest approach to Access users that MS has taken, are not about to spend thousands of dollars on MS SQL Server. At least not at a time when all signs point to abandoning Windows in favor of Linux, as a \"necessary\" step, albeit painful. \"Necessary\" because of the improved security that is possible, with sufficient Linux expertise and because of the lowered risk of being at only one company's mercy...as pointed out by the senator from Peru and many others since.\n\n Allowing MS to continue a 90 + percent market share (by remaining blissfully ignorant) until TCPA is fully implemented in all supported versions of Windows is nothing short of total insanity. I believe that most Americans are intelligent enough to realize this and avoid that scenario by making the switch to Linux. I don't believe that we Americans are stupid enough to let a company like MS take away almost every freedom that now exists in the field of technological development! If we are, then we will shortly lose our other freedoms as well...through ignorance and apathy.\n\n I believe that we will see many mfgs of motherboards that don't include the \"Fritz chip\" that are designed for those who vote for Linux and make the switch to an unfamiliar OS...primarily due to their discovery of the TCPA/MS agenda. Am I sure about this? Yes! Look at the intense interest shown by other countries in Linux application development for the GUI desktop. Do you think they will abandon their investment? I don't think so. 
Especially not in Taiwan or China where flexible and compatible \"hard Real Time\" versions of Linux are a primary development focus...to be marketed here in the US!\n\n (Examples: http://www.redsonic.com and China Soft...their website link can be found at the Redsonic website....hope you can read Chinese!)\n\n Redsonic's toolset will beat WinCE in the near future...without any doubt. The performance improvement and user friendliness of their tools far exceed any version of WinCE.\n\n And Redsonic isn't the only \"Real Time\" competitor. There are many, many others...and a steady flow of new ones.\n\n If we as Americans don't make the switch to Linux instead of Windows, including end users at home, then I can think of one Biblical phrase that fits, \"Don't cast your pearls before swine, lest they turn and rend you\". In other words, \"send back your PC to the store\"...we're too stupid and self absorbed to have the privilege of using them or to enjoy our current freedoms. Switching to Linux is currently a \"pain\" to be sure...but it is less painful than what will surely follow if the MS plan works. \"A word to the wise is sufficient\". And it won't be long until it is much less painful to make the switch.\n\n Sincerely,\n\n Arthur Baldwin\n\n I'm afraid that I don't hold as much faith as you that Linux will become the \"defacto standard\" toolset for all website servers. MS, despite its major shortcomings, is fairly slow and steady when it comes to improvements to its OS. That said, Access is crap because no one uses it for what it was built to be used for. And I would imagine that MS would rather spend their time/money on SQL Server development. I agree with you that pgsql needs a more powerful, GUI interface. The QBE interface in Access is nice. However, I don't agree that it is unimportant to have a Windows version. Point being, that Linux users are used to - and sadly often expect - poor interfaces with the programs they use. 
Windows users are far less forgiving. If, what you are talking about, is truly wide spread use for PC's and small-time web-servers then a Windows interface is damn near necessary.\n\n \n\n Eric\n\n \n\n -----Original Message-----\n From: pgsql-hackers-owner@postgresql.org [mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Arthur@LinkLine.com\n Sent: Tuesday, July 09, 2002 4:05 PM\n To: PostGreSQL Hackers\n Subject: Re: [HACKERS] pg_access\n\n \n\n I'm pleased to see some renewed interest in pg_access. It seems obvious to me that MS Access is not currently...and probably never will be able to handle data in a robust and reliable fashion. MS Access' apparent success is due to the user interface quality and \"ease of use\" for \"non-programmers\". The \"Relationships View\" window, for example, is one of the best and most useful features ever invented for any database toolset.\n\n \n\n In reality PostGreSQL is in a \"strong position\" to fill the \"reliability void\" left by MS Access. However, the general public doesn't know much about the short comings of Access, due to MS advertising and sales efforts. It seems clear to me that the best way to \"promote\" the use of PostGreSQL is to offer more \"ease of use\" GUI interfaces for changing table structures, indexes, relationships, and upgrading older versions of files. Although it would be nice to have a native Windows version of PostGreSQL, as well as a Linux version, I expect Linux to replace Windows on a large number of PCs in the near future. I think that \"having a Windows version\" will not be a significant issue at that point. However, GUI based \"ease of use\" features WILL be an extremely important issue and will increase in importance for the rest of the forseeable future. 
Using a \"browser\" to implement the GUI toolset is a good start, but it probably won't support the same degree of user friendliness that is seen in the \"Relationships View\" window of MS Access, where a relationship can be instantly \"drawn\" with a mouse, and fields added to the Table with a simple \"right click\" on the Table header.\n\n \n\n If we do a good job of providing GUI based tools, similar to MS Access, as well as conversion tools from Access to PostGreSQL for existing data, then PostGreSQL and Linux should quickly become the \"defacto standard\" toolset for all website servers. It seems to me like PostGreSQL is already on this pathway, \"like it or not\", and that focussing on the GUI toolset is essential to maintaining a good relationship with those who are new to the Linux world. Whether you realize it or not, there is a humongous tidal wave of MS Access users currently gathering enough database theory expertise to \"realize\" the MS \"snow job\" they've been given about its reliability. They will be forced into finding another solution and chances are VERY good they won't opt for MS SQL Server or Oracle. If we are ready to give a solution to them...great....sorry MS, but they seem to \"like us better\". If we are not ready, then our future won't have anything to do with MS, only our own lack of vision.\n\n \n\n At our current level of GUI tools, we can't expect any positive response even from fairly talented self taught computer programmers who have been interested in Linux since 1998 or later. Soon, there will be many Windows IT Specialists who will be seriously investigating the Linux OS and the \"best database tools\" available for it. Add to this list \"end users\" who are fed up with daily Windows crashes and are experimenting with hosting their own DSL based website servers....and well...there's your tidal wave! 
Ready or not....the wave is directly behind us....time to \"paddle\" for all we're worth!\n\n \n\n Sincerely,\n\n \n\n Arthur Baldwin", "msg_date": "Wed, 10 Jul 2002 11:54:52 +1000", "msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_access" }, { "msg_contents": "Hehehe - try FreeBSD for stability over Linux as well :)\n\nChris\n -----Original Message-----\n From: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Nicolas Bazin\n Sent: Wednesday, 10 July 2002 9:55 AM\n To: Eric Redmond; 'PostGreSQL Hackers'\n Subject: Re: [HACKERS] pg_access\n\n\n Just A note on Linux.\n I consider Linux on the same level as Windows. I have been able to crash\nit (kernel 2.4.18) just as well and the lib C API is nowhere near complete\nor fault tolerant to any commercial OS. For the moment nothing beats SCO\nOpenserver/Unixware stability on PC. These are OS worth\npaying for.", "msg_date": "Wed, 10 Jul 2002 10:03:20 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_access" } ]
[ { "msg_contents": "\nIs there a bug tracking system somewhere online for the postgres engine? \n\nI was unable to locate one via the web site.\n\nThanks,\n\tJames\n", "msg_date": "Tue, 9 Jul 2002 20:05:33 -0500", "msg_from": "James Maes <jmaes@sportingnews.com>", "msg_from_op": true, "msg_subject": "Postgres Bug Database" } ]
[ { "msg_contents": "When trying to perform a full vacuum I am getting the following error:\n\nERROR: No one parent tuple was found\n\nPlain vacuum works fine. Thinking it might be a problem with the \nindexes I have rebuilt them but still get the error. What does this \nerror indicate and what are my options to solve the problem?\n\nI am running: PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC \n2.96 on a RedHat 7.3 system.\n\nthanks,\n--Barry\n\n", "msg_date": "Tue, 09 Jul 2002 20:41:36 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": true, "msg_subject": "error during vacuum full" }, { "msg_contents": "Barry Lind <barry@xythos.com> writes:\n> When trying to perform a full vacuum I am getting the following error:\n> ERROR: No one parent tuple was found\n\nWant to dig into it with gdb, or let someone else into your system to\ndo so?\n\nI've suspected for awhile that there's still a lurking bug or three\nin the VACUUM FULL tuple-chain-moving code, but without an example\nto study it's damn hard to make any progress.\n\nNote that restarting the postmaster will probably make the problem\ngo away, as tuple chains cannot exist unless there are old open\ntransactions. So unless your installation is already compiled\n--enable-debug, there's probably not much we can learn :=(\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Jul 2002 00:00:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: error during vacuum full " }, { "msg_contents": "Tom,\n\nIt was not compiled with debug. I will do that now and see if this \nhappens again in the future. If and when it happens again what would \nyou like me to do? 
I am willing provide you access if you need it.\n\nthanks,\n--Barry\n\nTom Lane wrote:\n\n>Barry Lind <barry@xythos.com> writes:\n> \n>\n>>When trying to perform a full vacuum I am getting the following error:\n>>ERROR: No one parent tuple was found\n>> \n>>\n>\n>Want to dig into it with gdb, or let someone else into your system to\n>do so?\n>\n>I've suspected for awhile that there's still a lurking bug or three\n>in the VACUUM FULL tuple-chain-moving code, but without an example\n>to study it's damn hard to make any progress.\n>\n>Note that restarting the postmaster will probably make the problem\n>go away, as tuple chains cannot exist unless there are old open\n>transactions. So unless your installation is already compiled\n>--enable-debug, there's probably not much we can learn :=(\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n>\n> \n>\n\n\n", "msg_date": "Tue, 09 Jul 2002 21:10:24 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": true, "msg_subject": "Re: error during vacuum full" }, { "msg_contents": "Barry Lind <barry@xythos.com> writes:\n> It was not compiled with debug. I will do that now and see if this \n> happens again in the future. If and when it happens again what would \n> you like me to do? I am willing provide you access if you need it.\n\nWell, first off, please confirm that killing off open client\ntransactions (you shouldn't even need to do a full postmaster restart)\nmakes the problem go away.\n\nBeyond that, I have no advice except to be prepared to apply a debugger\nnext time. 
I believe we could fix the problem if we could examine the\nsituation VACUUM is seeing --- but it's so far not been possible to\ndo that, because the triggering conditions are so transient.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Jul 2002 00:14:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: error during vacuum full " }, { "msg_contents": "Tom,\n\nI am not sure exactly what you mean by 'open client transactions' but I \njust killed off all client processes. The only postgres processes \nrunning are:\n\n[root@cvs root]# ps -ef | grep post\npostgres 1004 1 0 Jul03 ? 00:00:00 \n/usr/local/pgsql/bin/postmaster\npostgres 1069 1004 0 Jul03 ? 00:00:00 postgres: stats buffer \nprocess \npostgres 1070 1069 0 Jul03 ? 00:00:00 postgres: stats \ncollector proces\n\nI then reconnected via psql and reran the vacuum full getting the same \nerror.\n\n--Barry\n\n\nTom Lane wrote:\n\n>Barry Lind <barry@xythos.com> writes:\n> \n>\n>>It was not compiled with debug. I will do that now and see if this \n>>happens again in the future. If and when it happens again what would \n>>you like me to do? I am willing provide you access if you need it.\n>> \n>>\n>\n>Well, first off, please confirm that killing off open client\n>transactions (you shouldn't even need to do a full postmaster restart)\n>makes the problem go away.\n>\n>Beyond that, I have no advice except to be prepared to apply a debugger\n>next time. 
I believe we could fix the problem if we could examine the\n>situation VACUUM is seeing --- but it's so far not been possible to\n>do that, because the triggering conditions are so transient.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n>http://www.postgresql.org/users-lounge/docs/faq.html\n>\n> \n>\n\n\n", "msg_date": "Tue, 09 Jul 2002 21:29:23 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": true, "msg_subject": "Re: error during vacuum full" }, { "msg_contents": "Barry Lind <barry@xythos.com> writes:\n> The only postgres processes running are:\n\n> [root@cvs root]# ps -ef | grep post\n> postgres 1004 1 0 Jul03 ? 00:00:00 \n> /usr/local/pgsql/bin/postmaster\n> postgres 1069 1004 0 Jul03 ? 00:00:00 postgres: stats buffer \n> process \n> postgres 1070 1069 0 Jul03 ? 00:00:00 postgres: stats \n> collector proces\n\n> I then reconnected via psql and reran the vacuum full getting the same \n> error.\n\nReally!? Well, does restarting the postmaster fix it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Jul 2002 00:41:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: error during vacuum full " }, { "msg_contents": "Tom,\n\nNo. Restarting the postmaster does not resolve the problem. I am going \nto put the debug build in place and see if I can still reproduce.\n\n--Barry\n\n\nTom Lane wrote:\n\n>Barry Lind <barry@xythos.com> writes:\n> \n>\n>>The only postgres processes running are:\n>> \n>>\n>\n> \n>\n>>[root@cvs root]# ps -ef | grep post\n>>postgres 1004 1 0 Jul03 ? 00:00:00 \n>>/usr/local/pgsql/bin/postmaster\n>>postgres 1069 1004 0 Jul03 ? 00:00:00 postgres: stats buffer \n>>process \n>>postgres 1070 1069 0 Jul03 ? 00:00:00 postgres: stats \n>>collector proces\n>> \n>>\n>\n> \n>\n>>I then reconnected via psql and reran the vacuum full getting the same \n>>error.\n>> \n>>\n>\n>Really!? 
Well, does restarting the postmaster fix it?\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n>http://www.postgresql.org/users-lounge/docs/faq.html\n>\n> \n>\n\n\n", "msg_date": "Tue, 09 Jul 2002 21:50:19 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": true, "msg_subject": "Re: error during vacuum full" }, { "msg_contents": "Barry Lind <barry@xythos.com> writes:\n> No. Restarting the postmaster does not resolve the problem.\n\nNow you've got my attention ;-) ... this is completely at variance\nwith my theories about the cause of the problem. Destruction of\na theory is usually a sign of impending advance.\n\n> I am going \n> to put the debug build in place and see if I can still reproduce.\n\nGood. I am dead tired and am about to go to bed, but if you can\nreproduce with a debuggable build then I would definitely like to\ncrawl into it tomorrow.\n\nAt this point I do not have the faintest idea what might cause the\nproblem to go away, so if possible I'd urge you to do the minimum\npossible work on the system overnight. Alternatively, if you can\nspare the disk space, make a tarball copy of the whole $PGDATA\ntree while you have the postmaster shut down. Then we can\nstudy the problem offline without worrying about your live\napplication.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Jul 2002 00:55:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: error during vacuum full " }, { "msg_contents": "Tom,\n\n> Good. I am dead tired and am about to go to bed, but if you can\n> reproduce with a debuggable build then I would definitely like to\n> crawl into it tomorrow.\n\nGood night. We can pick this up tomorrow.\n\n> At this point I do not have the faintest idea what might cause the\n> problem to go away, so if possible I'd urge you to do the minimum\n> possible work on the system overnight. 
Alternatively, if you can\n> spare the disk space, make a tarball copy of the whole $PGDATA\n> tree while you have the postmaster shut down. Then we can\n> study the problem offline without worrying about your live\n> application.\n\nI need the app up and running, but I did shut it down and created a backup of the entire directory as you suggested. \n\nthanks,\n--Barry\n\n\n\n\nTom Lane wrote:\n\n>Barry Lind <barry@xythos.com> writes:\n> \n>\n>>No. Restarting the postmaster does not resolve the problem.\n>> \n>>\n>\n>Now you've got my attention ;-) ... this is completely at variance\n>with my theories about the cause of the problem. Destruction of\n>a theory is usually a sign of impending advance.\n>\n> \n>\n>>I am going \n>>to put the debug build in place and see if I can still reproduce.\n>> \n>>\n>\n>Good. I am dead tired and am about to go to bed, but if you can\n>reproduce with a debuggable build then I would definitely like to\n>crawl into it tomorrow.\n>\n>At this point I do not have the faintest idea what might cause the\n>problem to go away, so if possible I'd urge you to do the minimum\n>possible work on the system overnight. Alternatively, if you can\n>spare the disk space, make a tarball copy of the whole $PGDATA\n>tree while you have the postmaster shut down. Then we can\n>study the problem offline without worrying about your live\n>application.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: Have you searched our list archives?\n>\n>http://archives.postgresql.org\n>\n> \n>\n\n\n", "msg_date": "Tue, 09 Jul 2002 22:11:28 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": true, "msg_subject": "Re: error during vacuum full" }, { "msg_contents": "On Wednesday, 10 July 2002 06:50, Barry Lind wrote:\n> Tom,\n>\n> No. Restarting the postmaster does not resolve the problem. 
I am going\n> to put the debug build in place and see if I can still reproduce.\n>\n\nI've had this problem on different machines too (on a daily basis), and restarting the database has never helped. There are definitely no open transactions when this happens, and the only way out is to regenerate all tuples:\nupdate tablename set colname=colname; (take whatever column you like). I suspect it's because I have a cron job that runs every minute or so to check some conditions, and I guess it gets called while vacuum full is running too.\n\nBest regards,\n\tMario Weilguni\n", "msg_date": "Wed, 10 Jul 2002 08:28:24 +0200", "msg_from": "Mario Weilguni <mweilguni@sime.com>", "msg_from_op": false, "msg_subject": "Re: error during vacuum full" } ]
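The workaround Mario describes above (rewriting every tuple so that whatever transient state trips up VACUUM FULL is cleared) can be sketched as plain SQL. This is only a sketch of the reported workaround, not a fix for the underlying bug; tablename and colname are placeholders for the affected table and any one of its columns:

```sql
-- Force a rewrite of every row version; any column works, since the
-- no-op UPDATE creates a fresh copy of each tuple anyway.
UPDATE tablename SET colname = colname;

-- Then retry the compaction that previously failed.
VACUUM FULL tablename;
```

Note that the no-op UPDATE temporarily doubles the dead-tuple load on the table until the vacuum completes, so on a big table this is best done during a quiet period.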
[ { "msg_contents": "\nJust a test to make sure both are being used properly ... should help\nincrease overall list speeds ...\n\n", "msg_date": "Wed, 10 Jul 2002 16:09:01 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Just added a second relay server ..." }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> Just a test to make sure both are being used properly ... should help\n> increase overall list speeds ...\n\nThings do seem to be *markedly* faster today than a few days ago.\nGood work!\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Jul 2002 19:01:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Just added a second relay server ... " }, { "msg_contents": "Tom Lane wrote:\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > Just a test to make sure both are being used properly ... should help\n> > increase overall list speeds ...\n> \n> Things do seem to be *markedly* faster today than a few days ago.\n> Good work!\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 10 Jul 2002 22:57:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Just added a second relay server ..." }, { "msg_contents": "Tom Lane wrote:\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > Just a test to make sure both are being used properly ... should help\n> > increase overall list speeds ...\n> \n> Things do seem to be *markedly* faster today than a few days ago.\n> Good work!\n\nYea, but how come my email box is filling so quickly. ;-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 10 Jul 2002 22:57:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Just added a second relay server ..." }, { "msg_contents": "> Tom Lane wrote:\n> > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > Just a test to make sure both are being used properly ... should help\n> > > increase overall list speeds ...\n> > \n> > Things do seem to be *markedly* faster today than a few days ago.\n> > Good work!\n> \n> Yea, but how come my email box is filling so quickly. ;-)\n\nJust a curiosity. How many subscribers do we have?\nI have been running a PostgreSQL list having over 5000 subscribers\nwith mailman+postfix in Japan...\n--\nTatsuo Ishii\n", "msg_date": "Thu, 11 Jul 2002 12:02:43 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Just added a second relay server ..." } ]
[ { "msg_contents": "Hello,\n\nIs there a proper 'powered by postgresql' .gif or something?\n\nCan't find anything like this.\n\nIavor\n\n--\nIavor Raytchev\nvery small technologies (a company of CEE Solutions)\n\nin case of emergency -\n\n call: + 43 676 639 46 49\nor write to: support@verysmall.org\n\nwww.verysmall.org \n", "msg_date": "Wed, 10 Jul 2002 23:15:59 +0200", "msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>", "msg_from_op": true, "msg_subject": "powered by" }, { "msg_contents": "On Wed, Jul 10, 2002 at 11:15:59PM +0200, Iavor Raytchev wrote:\n> Hello,\n> \n> Is there a proper 'powered by postgresql' .gif or something?\n\nTry\n\n\thttp://www.pgsql.com/propaganda/\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 10 Jul 2002 17:43:08 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: powered by" } ]
[ { "msg_contents": "Hi,\n\nI looked around for a definitive answer to this question but to no avail. It appears that PostgreSQL does not implement the SQL function REPLACE(), which, in MySQL, works like this:\n\nREPLACE(str,from_str,to_str) \nReturns the string str with all occurrences of the string from_str replaced by the string to_str.\n\nWhat do folks usually do when they have to do a global search/replace on a big table?\n\nThanks,\n\nMatt", "msg_date": "Wed, 10 Jul 2002 15:00:03 -0700", "msg_from": "\"Agent155 Support\" <matt@planetnet.com>", "msg_from_op": true, "msg_subject": "workaround for lack of REPLACE() function" }, { "msg_contents": "\"Agent155 Support\" <matt@planetnet.com> writes:\n> What do folks usually do when they have to do a global search/replace on a\n> big table?\n\nYou can code pretty much any text transformation you'd like in plperl or\npltcl, both of which languages are very strong on string manipulations.\nSo there's not been a lot of concern about the lack of a SQL-level\nsubstitution operator.\n\nIIRC, SQL99 does specify some sort of substring replacement function,\nand Thomas recently implemented it for 7.3. 
But it's not very bright\nand I suspect people will keep falling back on plperl or pltcl to do\nanything nontrivial.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Jul 2002 10:06:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: workaround for lack of REPLACE() function " }, { "msg_contents": "Tom Lane wrote:\n> \"Agent155 Support\" <matt@planetnet.com> writes:\n> \n>>What do folks usually do when they have to do a global search/replace on a =\n>>big table?\n> \n> \n> You can code pretty much any text transformation you'd like in plperl or\n> pltcl, both of which languages are very strong on string manipulations.\n> So there's not been a lot of concern about the lack of a SQL-level\n> substitution operator.\n> \n> IIRC, SQL99 does specify some sort of substring replacement function,\n> and Thomas recently implemented it for 7.3. But it's not very bright\n> and I suspect people will keep falling back on plperl or pltcl to do\n> anything nontrivial.\n> \n\nI think Thomas did just recently commit the SQL99 OVERLAY function, but \nsimilar to Tom's comment, I don't like the way SQL99 defines it. I've \nwritten a replace() C function (along with a couple of other string \nmanipulation functions) for my own use. If you'd like a copy let me know \nand I'll gladly send it to you.\n\nI have thought about sending it in as a contrib, but wasn't sure if \nthere was enough interest to warrant it.\n\nJoe\n\n", "msg_date": "Thu, 11 Jul 2002 09:44:34 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: workaround for lack of REPLACE() function" }, { "msg_contents": "...\n> I think Thomas did just recently commit the SQL99 OVERLAY function, but\n> similar to Tom's comment, I don't like the way SQL99 defines it. I've\n> written a replace() C function (along with a couple of other string\n> manipulation functions) for my own use. 
If you'd like a copy let me know\n> and I'll gladly send it to you.\n\nOK, what don't you like about it? If you can define some functionality\nthat we *should* have, then it is likely to go into the main distro.\nEither as an extension to existing functions or as a separate function.\n\n\"Style\" counts for not-much, but \"can't do it with what we have\" counts\nfor a lot.\n\n - Thomas\n", "msg_date": "Thu, 11 Jul 2002 10:38:17 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: workaround for lack of REPLACE() function" }, { "msg_contents": "Thomas Lockhart wrote:\n>>I think Thomas did just recently commit the SQL99 OVERLAY function, but\n>>similar to Tom's comment, I don't like the way SQL99 defines it. I've\n>>written a replace() C function (along with a couple of other string\n>>manipulation functions) for my own use. If you'd like a copy let me know\n>>and I'll gladly send it to you.\n> \n> OK, what don't you like about it? If you can define some functionality\n> that we *should* have, then it is likely to go into the main distro.\n> Either as an extension to existing functions or as a separate function.\n> \n> \"Style\" counts for not-much, but \"can't do it with what we have\" counts\n> for a lot.\n\nHmmm, making justify my comment ;-)\n\nWell, OVERLAY is defined as:\n overlay(string placing string from integer [for integer])\nand replace() is defined (by me at least) as:\n replace(inputstring, old-substr, new-substr)\n\nOVERLAY requires that I know the \"from\" position and possibly the \"for\" \nin advance. Other functions (such as strpos() and substr()) can be used \nto help, but consider the following:\n\ntest=# create table strtest(f1 text);\nCREATE TABLE\ntest=# insert into strtest values('/usr/local/pgsql/data');\nINSERT 124955 1\ntest=# select replace(f1,'/local','') from strtest;\n replace\n-----------------\n /usr/pgsql/data\n(1 row)\n\nNow, how can I do this with overlay()? 
If I happen to know in advance \nthat my only input string is '/usr/local/pgsql/data', then I can do:\n\ntest=# select overlay(f1 placing '' from 5 for 6) from strtest;\n overlay\n-----------------\n /usr/pgsql/data\n(1 row)\n\nBut what if now I do:\ntest=# insert into strtest values('/m1/usr/local/pgsql/data');\nINSERT 124957 1\n\nNow\n\ntest=# select replace(f1,'/local','') from strtest;\n replace\n--------------------\n /usr/pgsql/data\n /m1/usr/pgsql/data\n(2 rows)\n\nworks fine, but\n\ntest=# select overlay(f1 placing '' from 5 for 6) from strtest;\n overlay\n--------------------\n /usr/pgsql/data\n /m1/cal/pgsql/data\n(2 rows)\n\ndoesn't give the desired result. Of course you can work around this, but \nit starts to get ugly:\n\ntest=# select overlay(f1 placing '' from strpos(f1,'/local') for 6) from \nstrtest;\n overlay\n--------------------\n /usr/pgsql/data\n /m1/usr/pgsql/data\n(2 rows)\n\nBut now what happens if you wanted to replace all of the '/' characters \nwith '\\'?\n\ntest=# select replace(f1,'/','\\\\') from strtest;\n replace\n--------------------------\n \\usr\\local\\pgsql\\data\n \\m1\\usr\\local\\pgsql\\data\n(2 rows)\n\nYou can't do this at all with overlay(), unless you want to write a \nPL/pgSQL function and loop through each string. 
I started out with \nexactly this, using strpos() and substr(), but I thought a C function \nwas cleaner, and it is certainly faster.\n\n\nBTW, the other functions already in the string manipulation module are:\n\nto_hex -- Accepts bigint and returns it as equivalent hex string\n to_hex(bigint inputnum) RETURNS text\n\ntest=# select to_hex(123456789::bigint);\n to_hex\n---------\n 75bcd15\n(1 row)\n\nand\n\nextract_tok -- Extracts and returns individual token from delimited\n text\n extract_tok(text inputstring, text delimiter, int posn) RETURNS text\n\ntest=# select extract_tok(extract_tok('f=1&g=3&h=4','&',2),'=',2);\n extract_tok\n-------------\n 3\n(1 row)\n\nextract_tok() is actually already in dblink (dblink_strtok), because it \nis useful in that context, but it probably belongs in a contrib for \nstring manipulation instead. In fact, now that I think about it, so is \nreplace() (dblink_replace).\n\nRegards,\n\nJoe\n\n", "msg_date": "Thu, 11 Jul 2002 11:34:04 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: workaround for lack of REPLACE() function" }, { "msg_contents": "(crossposted to -hackers, should follow up on that list)\n\n> Well, OVERLAY is defined as:\n> overlay(string placing string from integer [for integer])\n> and replace() is defined (by me at least) as:\n> replace(inputstring, old-substr, new-substr)\n\nOK.\n\n> OVERLAY requires that I know the \"from\" position and possibly the \"for\"\n> in advance. Other functions (such as strpos() and substr()) can be used\n> to help...\n\nRight. 
So you can do your example pretty easily:\n\nthomas=# select overlay(f1 placing '' from position('/local' in f1)\nthomas-# for length('/local')) from strtest;\n overlay \n--------------------\n /usr/pgsql/data\n /m1/usr/pgsql/data\n\nAnd if you don't like that much typing you can do:\n\nthomas=# create function replace(text, text, text) returns text as '\nthomas'# select overlay($1 placing $3 from position($2 in $1) for\nlength($2));\nthomas'# ' language 'sql';\nCREATE FUNCTION\nthomas=# select replace(f1, '/local', '') from strtest;\n replace \n--------------------\n /usr/pgsql/data\n /m1/usr/pgsql/data\n\n> But now what happens if you wanted to replace all of the '/' characters \n> with '\\'?...\n> You can't do this at all with overlay(), unless you want to write a\n> PL/pgSQL function and loop through each string. I started out with\n> exactly this, using strpos() and substr(), but I thought a C function\n> was cleaner, and it is certainly faster.\n\nOK, this is in the \"can't do it what we have\" category. Should we have\nit accept a regular expression rather than a simple string? In either\ncase it should probably go into the main distro. Except that I see\n\"REPLACE\" is mentioned as a reserved word in SQL99. But has no other\nmention in my copy of the draft standard. Anyone else have an idea what\nit might be used for in the standard?\n\nThe other functions look useful too, unless to_char() and varbit can be\nevolved to support this functionality.\n\n - Thomas\n", "msg_date": "Fri, 12 Jul 2002 08:07:33 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: workaround for lack of REPLACE() function" }, { "msg_contents": "Thomas Lockhart wrote:\n > (crossposted to -hackers, should follow up on that list)\n\n<snip>\n\n> OK, this is in the \"can't do it what we have\" category. Should we have\n> it accept a regular expression rather than a simple string? 
In either\n> case it should probably go into the main distro. Except that I see\n> \"REPLACE\" is mentioned as a reserved word in SQL99. But has no other\n> mention in my copy of the draft standard. Anyone else have an idea what\n> it might be used for in the standard?\n\nNot sure, but I see what you mean. Perhaps because of Oracle pushing to \nlegitimize the \"CREATE OR REPLACE\" syntax? In any case, this works in 8i:\n\nSQL> select replace('hello','l','x') from dual;\n\nREPLACE('HELLO','L','X')\n------------------------\nhexxo\n\nand here it is in MSSQL 7:\n\nselect replace('hello','l','x')\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- \n\nhexxo\n\n(1 row(s) affected)\n\nand my proposed PostgreSQL function:\n\ntest=# select replace('hello','l','x');\n replace\n---------\n hexxo\n(1 row)\n\nso at least we would be consistant/compatable with these two.\n\n\n> \n> The other functions look useful too, unless to_char() and varbit can be\n> evolved to support this functionality.\n\nI will take a look at merging these into existing functions, but I have \na few other things ahead of this in my queue.\n\nOne of the reasons I wasn't pushing too hard to get replace() into the \nbackend is because my current solution is a bit of a hack. It uses the \nbuiltin length, strpos and substr text functions (which I think makes \nsense since they already know how to deal with mb strings), but because \nthey accept and return text, I'm doing lots of conversions back and \nforth from (* text) to (* char). To do this \"right\" probably means \nreworking the text string manipulation functions to be wrappers around \nsome equivalent functions accepting and returning C strings. That was \nmore work than I had time for when I wrote the current replace(). 
But as \nI said, if there is support for getting this into the backend, I'll add \nit to my todo list:\n\n- Create new backend function replace()\n- Either create new backend functions, or merge into existing functions: \nto_hex() and extract_tok()\n\nJoe\n\n\n\n\n\n", "msg_date": "Fri, 12 Jul 2002 10:47:54 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] workaround for lack of REPLACE() function" }, { "msg_contents": "Joe Conway wrote:\n> more work than I had time for when I wrote the current replace(). But as \n> I said, if there is support for getting this into the backend, I'll add \n> it to my todo list:\n> \n> - Create new backend function replace()\n> - Either create new backend functions, or merge into existing functions: \n> to_hex() and extract_tok()\n> \n\nI'm just starting to take a look at this again. While studying the \ncurrent text_substr() function I found two behaviors which conflict with \nspecific SQL92/SQL99 requirements, and one bug. First the spec \ncompliance -- SQL92 section 6.7/SQL99 section 6.18 say:\n\nIf <character substring function> is specified, then:\na) Let C be the value of the <character value expression>, let LC be the\n length of C, and let S be the value of the <start position>.\nb) If <string length> is specified, then let L be the value of <string\n length> and let E be S+L. Otherwise, let E be the larger of LC + 1\n and S.\nc) If either C, S, or L is the null value, then the result of the\n <character substring function> is the null value.\nd) If E is less than S, then an exception condition is raised: data\n exception-substring error.\ne) Case:\n i) If S is greater than LC or if E is less than 1, then the result of\n the <character substring function> is a zero-length string.\n ii) Otherwise,\n 1) Let S1 be the larger of S and 1. Let E1 be the smaller of E and\n LC+1. 
Let L1 be E1-S1.\n 2) The result of the <character substring function> is a character\n string containing the L1 characters of C starting at character\n number S1 in the same order that the characters appear in C.\n\nThe only way for d) to be true is when L < 0. Instead of an error, we do:\ntest=# select substr('hello',2,-1);\n substr\n--------\n ello\n(1 row)\n\nThe other spec issue is wrt para e)i). If E (=S+L) < 1, we should return \na zero-length string. Currently I get:\ntest=# select substr('hello',-4,3);\n substr\n--------\n hello\n(1 row)\n\nNeither behavior is documented (unless it's somewhere other than:\nhttp://developer.postgresql.org/docs/postgres/functions-string.html ).\n\nThe bug is this one:\ntest=# create DATABASE testmb with encoding = 'EUC_JP';\nCREATE DATABASE\ntest=# \\c testmb\nYou are now connected to database testmb.\ntestmb=# select substr('hello',6,2);\n substr\n--------\n ~\n(1 row)\n\ntestmb=# \\c test\nYou are now connected to database test.\ntest=# select substr('hello',6,2);\n substr\n--------\n\n(1 row)\n\nThe multibyte database behavior is the bug. The SQL_ASCII behavior is \ncorrect (zero-length string):\ntest=# select substr('hello',6,2) is null;\n ?column?\n----------\n f\n(1 row)\n\n\nAny objection if I rework this function to meet SQL92 and fix the bug? \nOr is the SQL92 part not desirable because it breaks backward \ncompatibility?\n\nIn any case, can the #ifdef MULTIBYTE's be removed now in favor of a \ntest for encoding max length?\n\nJoe\n\n", "msg_date": "Fri, 09 Aug 2002 22:04:24 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] workaround for lack of REPLACE() function" }, { "msg_contents": "> Any objection if I rework this function to meet SQL92 and fix the bug? 
\n\nI don't object.\n\n> Or is the SQL92 part not desirable because it breaks backward \n> compatability?\n\nI don't think so.\n\n> In any case, can the #ifdef MULTIBYTE's be removed now in favor of a \n> test for encoding max length?\n\nSure.\n--\nTatsuo Ishii\n", "msg_date": "Sat, 10 Aug 2002 15:51:26 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] workaround for lack of REPLACE() function" }, { "msg_contents": "Tatsuo Ishii wrote:\n>>Any objection if I rework this function to meet SQL92 and fix the bug? \n> \n\nI've started working on text_substr() as described in this thread (which \nis hopefully prep work for the replace() function that started the \nthread). I haven't really looked at toast or multibyte closely before, \nso I'd like to ask a couple of questions to be sure I'm understanding \nthe relevant issues correctly.\n\nFirst, in textlen() I see (ignoring multibyte for a moment):\n\n text *t = PG_GETARG_TEXT_P(0);\n PG_RETURN_INT32(VARSIZE(t) - VARHDRSZ);\n\nTom has pointed out to me before that PG_GETARG_TEXT_P(n) incurs the \noverhead of retrieving and possibly decompressing a toasted datum. So my \nfirst question is, can we simply do:\n PG_RETURN_INT32(toast_raw_datum_size(PG_GETARG_DATUM(0)) - VARHDRSZ);\nand save the overhead of retrieving and decompressing the whole datum?\n\nNow, in the multibyte case, again in textlen(), I see:\n\n /* optimization for single byte encoding */\n if (pg_database_encoding_max_length() <= 1)\n PG_RETURN_INT32(VARSIZE(t) - VARHDRSZ);\n\n PG_RETURN_INT32(\n pg_mbstrlen_with_len(VARDATA(t), VARSIZE(t) - VARHDRSZ));\n\nThree questions here.\n1) In the case of encoding max length == 1, can we treat it the same as \nthe non-multibyte case (I presume they are exactly the same)?\n\n2) Can encoding max length ever be < 1? 
Doesn't make sense to me.\n\n3) In the case of encoding max length > 1, if I understand correctly, \neach encoded character can be one *or more* bytes, up to and including \nencoding max length bytes. So the *only* way presently to get the length \nof the original character string is to loop through the entire string \nchecking the length of each individual character (that's what \npg_mbstrlen_with_len() does it seems)?\n\nFinally, if 3) is true, then there is no way to avoid the retrieval and \ndecompression of the datum just to find out its length. For large \ndatums, detoasting plus the looping through each character would add a \nhuge amount of overhead just to get at the length of the original \nstring. I don't know if we need to be able to get *just* the length \noften enough to really care, but if we do, I had an idea for some future \nrelease (I wouldn't propose doing this for 7.3):\n\n- add a new EXTENDED state to va_external for MULTIBYTE\n- any string with max encoding length > 1 would be EXTENDED even if it\n is not EXTERNAL and not COMPRESSED.\n- to each of the structs in the union, add va_strlen\n- populate va_strlen on INSERT and maintain it on UPDATE.\n\nNow a new function similar to toast_raw_datum_size(), maybe \ntoast_raw_datum_strlen() could be used to get the original string \nlength, whether MB or not, without needing to retrieve and decompress \nthe entire datum.\n\nI understand we would either: have to steal another bit from the VARHDR \nwhich would reduce the effective size of a varlena from 1GB down to .5GB; \nor we would need to add a byte or two to the VARHDR which is extra \nper-datum overhead. I'm not sure we would want to do either. 
But I \nwanted to toss out the idea while it was fresh on my mind.\n\nThanks,\n\nJoe\n\n\n", "msg_date": "Sun, 11 Aug 2002 10:15:43 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] workaround for lack of REPLACE() function" }, { "msg_contents": "Tatsuo Ishii wrote:\n>>Any objection if I rework this function to meet SQL92 and fix the bug? \n> I don't object.\n\nOne more question on this: how can I generate some characters with > 1 \nencoding length? I need a way to test the work I'm doing, and I'm not \nquite sure how to test it.\n\nJust making a database that uses a MB encoding doesn't make 0-9/A-Z/a-z \ninto characters of > 1 byte, does it? Sorry, but never having used MB \nencoding has left me a bit clueless on this ;-)\n\nJoe\n\n", "msg_date": "Sun, 11 Aug 2002 13:09:43 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] workaround for lack of REPLACE() function" }, { "msg_contents": "> Now, in the multibyte case, again in textlen(), I see:\n> \n> /* optimization for single byte encoding */\n> if (pg_database_encoding_max_length() <= 1)\n> PG_RETURN_INT32(VARSIZE(t) - VARHDRSZ);\n> \n> PG_RETURN_INT32(\n> pg_mbstrlen_with_len(VARDATA(t), VARSIZE(t) - VARHDRSZ));\n> \n> Three questions here.\n> 1) In the case of encoding max length == 1, can we treat it the same as \n> the non-multibyte case (I presume they are exactly the same)?\n\nYes.\n\n> 2) Can encoding max length ever be < 1? Doesn't make sense to me.\n\nNo. 
It seems to be just defensive coding.\n\n> 3) In the case of encoding max length > 1, if I understand correctly, \n> each encoded character can be one *or more* bytes, up to and including \n> encoding max length bytes.\n\nRight.\n\n> So the *only* way presently to get the length \n> of the original character string is to loop through the entire string \n> checking the length of each individual character (that's what \n> pg_mbstrlen_with_len() does it seems)?\n\nYes.\n\n> Finally, if 3) is true, then there is no way to avoid the retrieval and \n> decompression of the datum just to find out its length. For large \n> datums, detoasting plus the looping through each character would add a \n> huge amount of overhead just to get at the length of the original \n> string. I don't know if we need to be able to get *just* the length \n> often enough to really care, but if we do, I had an idea for some future \n> release (I wouldn't propose doing this for 7.3):\n> \n> - add a new EXTENDED state to va_external for MULTIBYTE\n> - any string with max encoding length > 1 would be EXTENDED even if it\n> is not EXTERNAL and not COMPRESSED.\n> - to each of the structs in the union, add va_strlen\n> - populate va_strlen on INSERT and maintain it on UPDATE.\n> \n> Now a new function similar to toast_raw_datum_size(), maybe \n> toast_raw_datum_strlen() could be used to get the original string \n> length, whether MB or not, without needing to retrieve and decompress \n> the entire datum.\n> \n> I understand we would either: have to steal another bit from the VARHDR \n> which would reduce the effective size of a varlena from 1GB down to .5GB; \n> or we would need to add a byte or two to the VARHDR which is extra \n> per-datum overhead. I'm not sure we would want to do either. But I \n> wanted to toss out the idea while it was fresh on my mind.\n\nInteresting idea. I also was thinking about adding some extra\ninformation to text data types such as character set, collation\netc. 
for 7.4 or later.\n--\nTatsuo Ishii\n", "msg_date": "Mon, 12 Aug 2002 10:07:20 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] workaround for lack of REPLACE() function" }, { "msg_contents": "Tatsuo Ishii wrote:\n>>Now a new function similar to toast_raw_datum_size(), maybe \n>>toast_raw_datum_strlen() could be used to get the original string \n>>length, whether MB or not, without needing to retrieve and decompress \n>>the entire datum.\n>>\n>>I understand we would either: have to steal another bit from the VARHDR \n>>which would reduce the effective size of a varlena from 1GB down to .5GB; \n>>or we would need to add a byte or two to the VARHDR which is extra \n>>per-datum overhead. I'm not sure we would want to do either. But I \n>>wanted to toss out the idea while it was fresh on my mind.\n> \n> \n> Interesting idea. I also was thinking about adding some extra\n> information to text data types such as character set, collation\n> etc. 
for 7.4 or later.\n\nI ran some tests to confirm the theory above regarding overhead;\n\ncreate table strtest(f1 text);\ndo 100 times\n insert into strtest values('12345....'); -- 100000 characters\nloop\ndo 1000 times\n select length(f1) from strtest;\nloop\n\nResults:\n\nSQL_ASCII database, new code:\n=============================\nPG_RETURN_INT32(toast_raw_datum_size(PG_GETARG_DATUM(0)) - VARHDRSZ);\n==> 2 seconds\n\nSQL_ASCII database, old code:\n=============================\ntext \n *t = PG_GETARG_TEXT_P(0);\nPG_RETURN_INT32(VARSIZE(t) - VARHDRSZ);\n==> 66 seconds\n\nEUC_JP database, new & old code:\n================================\ntext \n *t = PG_GETARG_TEXT_P(0);\nPG_RETURN_INT32(pg_mbstrlen_with_len(VARDATA(t),\n VARSIZE(t) - VARHDRSZ));\n==> 469 seconds\n\nSo it appears that, while detoasting is moderately expensive (adds 64 \nseconds to the test), the call to pg_mbstrlen_with_len() is very \nexpensive (adds 403 seconds to the test).\n\nJoe\n\n", "msg_date": "Mon, 12 Aug 2002 14:55:54 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] workaround for lack of REPLACE() function" }, { "msg_contents": "Tatsuo Ishii wrote:\n > Joe Conway wrote:\n >>Any objection if I rework this function to meet SQL92 and fix the bug?\n >\n > I don't object.\n >\n >>Or is the SQL92 part not desirable because it breaks backward\n >>compatability?\n >\n > I don't think so.\n >\n >>In any case, can the #ifdef MULTIBYTE's be removed now in favor of a\n >>test for encoding max length?\n >\n > Sure.\n\n<sorry so long-winded>\n\nAttached is a patch that implements the above items wrt text_substr(). I\nalso modified textlen(), textoctetlen(), byteaoctetlen(), and\nbytea_substr(). 
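The EUC_JP numbers above come down to the per-character walk that pg_mbstrlen_with_len() has to do: with a variable-width encoding there is no way to count characters without decoding each one, while a single-byte encoding can just use the byte count (hence the toast_raw_datum_size() fast path). A minimal standalone sketch of that walk (UTF-8 width rules for brevity; the names here are illustrative, not the backend's):

```c
#include <assert.h>
#include <stddef.h>

/* Byte width of the multibyte character starting at *s.  Simplified
 * UTF-8 rule; pg_mblen() does the equivalent per-encoding dispatch. */
static int mb_len(const unsigned char *s)
{
    if (*s < 0x80)
        return 1;
    if ((*s & 0xE0) == 0xC0)
        return 2;
    if ((*s & 0xF0) == 0xE0)
        return 3;
    return 4;
}

/* Count characters in a buffer of known byte length -- the O(n) scan
 * that makes the multibyte length() case so much slower than the
 * single-byte fast path. */
static int mb_strlen_with_len(const unsigned char *s, size_t limit)
{
    int     chars = 0;
    size_t  used = 0;

    while (used < limit)
    {
        used += mb_len(s + used);
        chars++;
    }
    return chars;
}
```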
Here's a summary of the change to each:\n\n- text_substr(): rewrite function to meet SQL92, fix MB related bug,\n and remove #ifdef MULTIBYTE.\n\n- bytea_substr(): same as text_substr() enc max len == 1.\n\n- textoctetlen(), byteaoctetlen(): use\n toast_raw_datum_size(PG_GETARG_DATUM(0)) - VARHDRSZ)\n to avoid detoasting.\n\n- textlen(): same as textoctetlen() for enc max len == 1, and remove\n #ifdef MULTIBYTE.\n\nI did some benchmarking to ensure no performance degradation, and to\nhelp me understand MB and related performance issues. The results were\nvery enlightening:\n\n===================================================================\nFirst test - textlen() (already reported, repeated here for completeness):\n-------------------------------------------------------------------\ncreate table strtest(f1 text);\ndo 100 times\n insert into strtest values('12345....'); -- 100000 characters\nloop\ndo 1000 times\n select length(f1) from strtest;\nloop\n-------------------------------------------------------------------\nResults:\nSQL_ASCII database, new code\t\t2 seconds\nSQL_ASCII database, old code\t\t66 seconds\nEUC_JP database, new & old code\t\t469 seconds\n===================================================================\nSecond test - short string test:\n-------------------------------------------------------------------\ncreate table parts(partnum text);\n<fill with ~220000 rows, 8 to 12 characters each>\n\ndo 300 times\n select substr(partnum, 3, 3) from parts;\nloop\n-------------------------------------------------------------------\nResults:\nSQL_ASCII database, old code\t\t352 seconds\nSQL_ASCII database, new code\t\t350 seconds\nEUC_JP database, old code\t\t461 seconds\nEUC_JP database, new code\t\t422 seconds\n===================================================================\nThird test - long string, EXTENDED storage (EXTERNAL+COMPRESSED):\n-------------------------------------------------------------------\ncreate table strtest(f1 text);\ndo 
100 times\n insert into strtest values('12345....'); -- 100000 characters\nloop\ndo 1000 times\n select substr(f1, 89000, 10000) from strtest;\nloop\n-------------------------------------------------------------------\nResults:\nSQL_ASCII database, old code\t\t59 seconds\nSQL_ASCII database, new code\t\t58 seconds\nEUC_JP database, old code\t\t915 seconds\nEUC_JP database, new code\t\t912 seconds\n===================================================================\nForth test - long string, EXTERNAL storage (not COMPRESSED)\n-------------------------------------------------------------------\ncreate table strtest(f1 text);\ndo 100 times\n insert into strtest values('12345....'); -- 100000 characters\nloop\ndo 1000 times\n select substr(f1, 89000, 10000) from strtest;\nloop\n-------------------------------------------------------------------\nResults:\nSQL_ASCII database, old code\t\t17 seconds\nSQL_ASCII database, new code\t\t17 seconds\nEUC_JP database, old code\t\t918 seconds\nEUC_JP database, new code\t\t911 seconds\n\n\nThe only remaining problem is that this causes opr_sanity to fail based\non this query:\n\n-- Considering only built-in procs (prolang = 12), look for multiple\n-- uses of the same internal function (ie, matching prosrc fields).\n-- It's OK to have several entries with different pronames for the same\n-- internal function, but conflicts in the number of arguments and other\n-- critical items should be complained of.\nSELECT p1.oid, p1.proname, p2.oid, p2.proname\nFROM pg_proc AS p1, pg_proc AS p2\nWHERE p1.oid != p2.oid AND\n p1.prosrc = p2.prosrc AND\n p1.prolang = 12 AND p2.prolang = 12 AND\n (p1.prolang != p2.prolang OR\n p1.proisagg != p2.proisagg OR\n p1.prosecdef != p2.prosecdef OR\n p1.proisstrict != p2.proisstrict OR\n p1.proretset != p2.proretset OR\n p1.provolatile != p2.provolatile OR\n p1.pronargs != p2.pronargs);\n\nThis fails because I implemented text_substr() and bytea_substr() to\ntake either 2 or 3 args. 
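The SQL92-driven bound handling in the reworked substring code (visible in the patch further down as S1 = Max(S, 1), the E < S error, and the E < 1 empty-string case) can be sketched as a standalone helper; types and names here are illustrative only:

```c
#include <assert.h>

/* Result of adjusting a one-based start S and length L per SQL92/SQL99. */
typedef struct
{
    int error;          /* nonzero: negative substring length not allowed */
    int empty;          /* nonzero: result is the zero-length string */
    int start0;         /* zero-based slice start */
    int len;            /* slice length */
} slice_bounds;

static slice_bounds substr_bounds(int S, int L)
{
    slice_bounds b = {0, 0, 0, 0};
    int S1 = (S > 1) ? S : 1;   /* clamp start: S1 = Max(S, 1) */
    int E = S + L;              /* one-past-the-end position */

    if (E < S)                  /* only reachable when L < 0 */
        b.error = 1;
    else if (E < 1)             /* negative start plus short length */
        b.empty = 1;
    else
    {
        b.start0 = S1 - 1;
        b.len = E - S1;
    }
    return b;
}
```

So substring(str from 2 for 3) maps to a zero-based slice at offset 1 of length 3, a negative length is rejected, and a start far enough before the string yields the empty string, matching the behaviors the patch describes.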
This was necessary for SQL92 spec compliance.\n\nSQL92 requires L < 0 to throw an error, and L IS NULL to return NULL. It\nalso requires that if L is not provided, the length to the end of the\nstring is assumed. Current code handles L IS NULL correctly but not L <\n0 -- it assumes L < 0 is the same as L is not provided. By allowing the\nfunction to determine if it was passed 2 or 3 args, this can be handled\nproperly.\n\nSo the question is, can/should I change opr_sanity to allow this case?\n\nI also still owe some additions to the strings regression test to make\nit cover toasted values.\n\nOther than those two issues, I think the patch is ready to go. I'm\nplanning to take on the replace function next.\n\nThanks,\n\nJoe", "msg_date": "Tue, 13 Aug 2002 23:15:25 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [GENERAL] workaround for lack of REPLACE() function" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> The only remaining problem is that this causes opr_sanity to fail based\n> on this query: ...\n> This fails because I implemented text_substr() and bytea_substr() to\n> take either 2 or 3 args. This was necessary for SQL92 spec compliance.\n\nRather than loosening the opr_sanity test, I'd suggest setting this\nup as two separate builtin functions. They can call a common\nimplementation routine if you like. But a runtime test on the number\nof arguments doesn't offer any attractive improvement.\n\n> I'm planning to take on the replace function next.\n\nIsn't Gavin on that already?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Aug 2002 02:23:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [GENERAL] workaround for lack of REPLACE() function " }, { "msg_contents": "Tom Lane wrote:\n>>I'm planning to take on the replace function next.\n> Isn't Gavin on that already?\n\nNo, sorry for the confusion. 
I meant:\n\n replace(bigstring, substr, newsubstr)\n\nwhich I discussed (mainly with Thomas) a few weeks ago. This \ncurrent work was a warmup, since I wasn't comfortable with MB character \nhandling, and noticed some issues whilst studying it.\n\nJoe\n\n\n\n", "msg_date": "Tue, 13 Aug 2002 23:31:17 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [GENERAL] workaround for lack of REPLACE() function" }, { "msg_contents": "Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> \n>>The only remaining problem is that this causes opr_sanity to fail based\n>>on this query: ...\n>>This fails because I implemented text_substr() and bytea_substr() to\n>>take either 2 or 3 args. This was necessary for SQL92 spec compliance.\n> \n> \n> Rather than loosening the opr_sanity test, I'd suggest setting this\n> up as two separate builtin functions. They can call a common\n> implementation routine if you like. But a runtime test on the number\n> of arguments doesn't offer any attractive improvement.\n\nI took Tom's advice and added wrapper functions around text_substr() and \nbytea_substr() to cover the 2 argument case.\n\nI also added tests to strings.sql to cover substr() on toasted columns \nof both text and bytea.\n\nIf there are no objections, please apply.\n\nThanks,\n\nJoe", "msg_date": "Wed, 14 Aug 2002 11:09:41 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [GENERAL] workaround for lack of REPLACE() function" }, { "msg_contents": "Joe Conway wrote:\n> I took Tom's advice and added wrapper functions around text_substr() and \n> bytea_substr() to cover the 2 argument case.\n> \n> I also added tests to strings.sql to cover substr() on toasted columns \n> of both text and bytea.\n> \n\nPlease replace the original patch (substr.2002.08.14.1.patch) with the \nattached.
It includes everything from the previous one, plus newly \nimplemented builtin functions:\n\nreplace(string, from, to)\n -- replaces all occurrences of \"from\" in \"string\" to \"to\"\nsplit(string, fldsep, column)\n -- splits \"string\" on \"fldsep\" and returns \"column\" number piece\nto_hex(int32_num) & to_hex(int64_num)\n -- takes integer number and returns as hex string\n\nAll previously discussed on the list; see thread at:\nhttp://archives.postgresql.org/pgsql-hackers/2002-07/msg00511.php\n\nExamples:\n\nSELECT replace('yabadabadoo', 'ba', '123') AS \"ya123da123doo\";\n ya123da123doo\n---------------\n ya123da123doo\n(1 row)\n\nselect split('joeuser@mydatabase','@',1) AS \"joeuser\";\n joeuser\n---------\n joeuser\n(1 row)\n\nselect split('joeuser@mydatabase','@',2) AS \"mydatabase\";\n mydatabase\n------------\n mydatabase\n(1 row)\n\nselect to_hex(256::bigint*256::bigint*256::bigint*256::bigint - 1) AS \n\"ffffffff\";\n ffffffff\n----------\n ffffffff\n(1 row)\n\nTests have been added to the regression suite.\n\nPasses all regression tests. I've checked the strings.sql script in a \nmultibyte database and it works fine also. 
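The replace() examples above follow a simple loop: find the next occurrence of the search string, emit everything to its left plus the replacement, and continue with the right-hand remainder (the patch does the same over text datums with its TEXTPOS/LEFT/RIGHT helpers). A plain-C sketch of that idea on NUL-terminated strings, illustrative only (caller frees the result):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

static char *replace_str(const char *src, const char *from, const char *to)
{
    size_t      from_len = strlen(from);
    size_t      to_len = strlen(to);
    size_t      cap = strlen(src) + 1;
    size_t      out_len = 0;
    const char *p = src;
    const char *hit;
    char       *out;

    if (from_len == 0)          /* empty pattern: return the input as-is */
    {
        out = malloc(cap);
        memcpy(out, src, cap);
        return out;
    }

    /* pre-size the buffer: each occurrence may grow the string */
    for (hit = strstr(src, from); hit; hit = strstr(hit + from_len, from))
        if (to_len > from_len)
            cap += to_len - from_len;

    out = malloc(cap);
    while ((hit = strstr(p, from)) != NULL)
    {
        memcpy(out + out_len, p, (size_t) (hit - p));   /* left piece */
        out_len += (size_t) (hit - p);
        memcpy(out + out_len, to, to_len);              /* replacement */
        out_len += to_len;
        p = hit + from_len;                             /* continue on right */
    }
    strcpy(out + out_len, p);                           /* remaining tail */
    return out;
}
```

With the example from above, replace_str("yabadabadoo", "ba", "123") produces "ya123da123doo".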
I'd appreciate a good look by \nsomeone more familiar with multibyte related issues though.\n\nIf it is OK, I'd like to hold off on docs until this is committed and \nafter beta starts.\n\nIf there are no objections, please apply.\n\nThanks,\n\nJoe", "msg_date": "Fri, 16 Aug 2002 13:22:55 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [GENERAL] workaround for lack of REPLACE()" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> Joe Conway wrote:\n> > I took Tom's advice and added wrapper functions around text_substr() and \n> > bytea_substr() to cover the 2 argument case.\n> > \n> > I also added tests to strings.sql to cover substr() on toasted columns \n> > of both text and bytea.\n> > \n> \n> Please replace the original patch (substr.2002.08.14.1.patch) with the \n> attached. 
It includes everything from the previous one, plus newly \n> implemented builtin functions:\n> \n> replace(string, from, to)\n> -- replaces all occurrences of \"from\" in \"string\" to \"to\"\n> split(string, fldsep, column)\n> -- splits \"string\" on \"fldsep\" and returns \"column\" number piece\n> to_hex(int32_num) & to_hex(int64_num)\n> -- takes integer number and returns as hex string\n> \n> All previously discussed on the list; see thread at:\n> http://archives.postgresql.org/pgsql-hackers/2002-07/msg00511.php\n> \n> Examples:\n> \n> SELECT replace('yabadabadoo', 'ba', '123') AS \"ya123da123doo\";\n> ya123da123doo\n> ---------------\n> ya123da123doo\n> (1 row)\n> \n> select split('joeuser@mydatabase','@',1) AS \"joeuser\";\n> joeuser\n> ---------\n> joeuser\n> (1 row)\n> \n> select split('joeuser@mydatabase','@',2) AS \"mydatabase\";\n> mydatabase\n> ------------\n> mydatabase\n> (1 row)\n> \n> select to_hex(256::bigint*256::bigint*256::bigint*256::bigint - 1) AS \n> \"ffffffff\";\n> ffffffff\n> ----------\n> ffffffff\n> (1 row)\n> \n> Tests have been added to the regression suite.\n> \n> Passes all regression tests. I've checked the strings.sql script in a \n> multibyte database and it works fine also. 
I'd appreciate a good look by \n> someone more familiar with multibyte related issues though.\n> \n> If it is OK, I'd like to hold off on docs until this is committed and \n> after beta starts.\n> \n> If there are no objections, please apply.\n> \n> Thanks,\n> \n> Joe\n\n> Index: src/backend/utils/adt/varlena.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/utils/adt/varlena.c,v\n> retrieving revision 1.87\n> diff -c -r1.87 varlena.c\n> *** src/backend/utils/adt/varlena.c\t4 Aug 2002 06:44:47 -0000\t1.87\n> --- src/backend/utils/adt/varlena.c\t16 Aug 2002 19:54:03 -0000\n> ***************\n> *** 18,23 ****\n> --- 18,25 ----\n> \n> #include \"mb/pg_wchar.h\"\n> #include \"miscadmin.h\"\n> + #include \"access/tuptoaster.h\"\n> + #include \"lib/stringinfo.h\"\n> #include \"utils/builtins.h\"\n> #include \"utils/pg_locale.h\"\n> \n> ***************\n> *** 27,34 ****\n> --- 29,62 ----\n> #define DatumGetUnknownP(X)\t\t\t((unknown *) PG_DETOAST_DATUM(X))\n> #define PG_GETARG_UNKNOWN_P(n)\t\tDatumGetUnknownP(PG_GETARG_DATUM(n))\n> #define PG_RETURN_UNKNOWN_P(x)\t\tPG_RETURN_POINTER(x)\n> + #define PG_TEXTARG_GET_STR(arg_) \\\n> + DatumGetCString(DirectFunctionCall1(textout, PG_GETARG_DATUM(arg_)))\n> + #define PG_TEXT_GET_STR(textp_) \\\n> + DatumGetCString(DirectFunctionCall1(textout, PointerGetDatum(textp_)))\n> + #define PG_STR_GET_TEXT(str_) \\\n> + DatumGetTextP(DirectFunctionCall1(textin, CStringGetDatum(str_)))\n> + #define TEXTLEN(textp) \\\n> + \ttext_length(PointerGetDatum(textp))\n> + #define TEXTPOS(buf_text, from_sub_text) \\\n> + \ttext_position(PointerGetDatum(buf_text), PointerGetDatum(from_sub_text), 1)\n> + #define TEXTDUP(textp) \\\n> + \tDatumGetTextPCopy(PointerGetDatum(textp))\n> + #define LEFT(buf_text, from_sub_text) \\\n> + \ttext_substring(PointerGetDatum(buf_text), \\\n> + \t\t\t\t\t1, \\\n> + \t\t\t\t\tTEXTPOS(buf_text, from_sub_text) - 1, false)\n> + #define 
RIGHT(buf_text, from_sub_text, from_sub_text_len) \\\n> + \ttext_substring(PointerGetDatum(buf_text), \\\n> + \t\t\t\t\tTEXTPOS(buf_text, from_sub_text) + from_sub_text_len, \\\n> + \t\t\t\t\t-1, true)\n> \n> static int\ttext_cmp(text *arg1, text *arg2);\n> + static int32 text_length(Datum str);\n> + static int32 text_position(Datum str, Datum search_str, int matchnum);\n> + static text *text_substring(Datum str,\n> + \t\t\t\t\t\t\tint32 start,\n> + \t\t\t\t\t\t\tint32 length,\n> + \t\t\t\t\t\t\tbool length_not_specified);\n> \n> \n> /*****************************************************************************\n> ***************\n> *** 285,303 ****\n> Datum\n> textlen(PG_FUNCTION_ARGS)\n> {\n> ! \ttext\t *t = PG_GETARG_TEXT_P(0);\n> \n> ! #ifdef MULTIBYTE\n> ! \t/* optimization for single byte encoding */\n> ! \tif (pg_database_encoding_max_length() <= 1)\n> ! \t\tPG_RETURN_INT32(VARSIZE(t) - VARHDRSZ);\n> ! \n> ! \tPG_RETURN_INT32(\n> ! \t\tpg_mbstrlen_with_len(VARDATA(t), VARSIZE(t) - VARHDRSZ)\n> ! \t\t);\n> ! #else\n> ! \tPG_RETURN_INT32(VARSIZE(t) - VARHDRSZ);\n> ! #endif\n> }\n> \n> /*\n> --- 313,348 ----\n> Datum\n> textlen(PG_FUNCTION_ARGS)\n> {\n> ! \tPG_RETURN_INT32(text_length(PG_GETARG_DATUM(0)));\n> ! }\n> \n> ! /*\n> ! * text_length -\n> ! *\tDoes the real work for textlen()\n> ! *\tThis is broken out so it can be called directly by other string processing\n> ! *\tfunctions.\n> ! */\n> ! static int32\n> ! text_length(Datum str)\n> ! {\n> ! \t/* fastpath when max encoding length is one */\n> ! \tif (pg_database_encoding_max_length() == 1)\n> ! \t\tPG_RETURN_INT32(toast_raw_datum_size(str) - VARHDRSZ);\n> ! \n> ! \tif (pg_database_encoding_max_length() > 1)\n> ! \t{\n> ! \t\ttext\t *t = DatumGetTextP(str);\n> ! \n> ! \t\tPG_RETURN_INT32(pg_mbstrlen_with_len(VARDATA(t),\n> ! \t\t\t\t\t\t\t\t\t VARSIZE(t) - VARHDRSZ));\n> ! \t}\n> ! \n> ! \t/* should never get here */\n> ! \telog(ERROR, \"Invalid backend encoding; encoding max length \"\n> ! 
\t\t\t\t\"is less than one.\");\n> ! \n> ! \t/* not reached: suppress compiler warning */\n> ! \treturn 0;\n> }\n> \n> /*\n> ***************\n> *** 308,316 ****\n> Datum\n> textoctetlen(PG_FUNCTION_ARGS)\n> {\n> ! \ttext *arg = PG_GETARG_TEXT_P(0);\n> ! \n> ! \tPG_RETURN_INT32(VARSIZE(arg) - VARHDRSZ);\n> }\n> \n> /*\n> --- 353,359 ----\n> Datum\n> textoctetlen(PG_FUNCTION_ARGS)\n> {\n> ! \tPG_RETURN_INT32(toast_raw_datum_size(PG_GETARG_DATUM(0)) - VARHDRSZ);\n> }\n> \n> /*\n> ***************\n> *** 382,471 ****\n> * - Thomas Lockhart 1998-12-10\n> * Now uses faster TOAST-slicing interface\n> * - John Gray 2002-02-22\n> */\n> Datum\n> text_substr(PG_FUNCTION_ARGS)\n> {\n> ! \ttext\t *string;\n> ! \tint32\t\tm = PG_GETARG_INT32(1);\n> ! \tint32\t\tn = PG_GETARG_INT32(2);\n> ! \tint32 sm;\n> ! \tint32 sn;\n> ! \tint eml = 1;\n> ! #ifdef MULTIBYTE\n> ! \tint\t\t\ti;\n> ! \tint\t\t\tlen;\n> ! \ttext\t *ret;\n> ! \tchar\t *p;\n> ! #endif \n> \n> ! \t/*\n> ! \t * starting position before the start of the string? then offset into\n> ! \t * the string per SQL92 spec...\n> ! \t */\n> ! \tif (m < 1)\n> \t{\n> ! \t\tn += (m - 1);\n> ! \t\tm = 1;\n> ! \t}\n> ! \t/* Check for m > octet length is made in TOAST access routine */\n> \n> ! \t/* m will now become a zero-based starting position */\n> ! \tsm = m - 1;\n> ! \tsn = n;\n> \n> ! #ifdef MULTIBYTE\n> ! \teml = pg_database_encoding_max_length ();\n> \n> ! \tif (eml > 1)\n> \t{\n> ! \t\tsm = 0;\n> ! \t\tif (n > -1)\n> ! \t\t\tsn = (m + n) * eml + 3; /* +3 to avoid mb characters overhanging slice end */\n> \t\telse\n> ! \t\t\tsn = n;\t\t/* n < 0 is special-cased by heap_tuple_untoast_attr_slice */\n> ! \t}\n> ! #endif \n> \n> ! \tstring = PG_GETARG_TEXT_P_SLICE (0, sm, sn);\n> \n> ! \tif (eml == 1) \n> ! \t{\n> ! \t\tPG_RETURN_TEXT_P (string);\n> ! \t}\n> ! #ifndef MULTIBYTE\n> ! \tPG_RETURN_NULL(); /* notreached: suppress compiler warning */\n> ! #endif\n> ! #ifdef MULTIBYTE\n> ! \tif (n > -1)\n> ! 
\t\tlen = pg_mbstrlen_with_len (VARDATA (string), sn - 3);\n> ! \telse\t/* n < 0 is special-cased; need full string length */\n> ! \t\tlen = pg_mbstrlen_with_len (VARDATA (string), VARSIZE(string)-VARHDRSZ);\n> ! \n> ! \tif (m > len)\n> ! \t{\n> ! \t\tm = 1;\n> ! \t\tn = 0;\n> ! \t}\n> ! \tm--;\n> ! \tif (((m + n) > len) || (n < 0))\n> ! \t\tn = (len - m);\n> ! \n> ! \tp = VARDATA(string);\n> ! \tfor (i = 0; i < m; i++)\n> ! \t\tp += pg_mblen(p);\n> ! \tm = p - VARDATA(string);\n> ! \tfor (i = 0; i < n; i++)\n> ! \t\tp += pg_mblen(p);\n> ! \tn = p - (VARDATA(string) + m);\n> \n> ! \tret = (text *) palloc(VARHDRSZ + n);\n> ! \tVARATT_SIZEP(ret) = VARHDRSZ + n;\n> \n> ! \tmemcpy(VARDATA(ret), VARDATA(string) + m, n);\n> \n> ! \tPG_RETURN_TEXT_P(ret);\n> ! #endif\n> }\n> \n> /*\n> --- 425,625 ----\n> * - Thomas Lockhart 1998-12-10\n> * Now uses faster TOAST-slicing interface\n> * - John Gray 2002-02-22\n> + * Remove \"#ifdef MULTIBYTE\" and test for encoding_max_length instead. Change\n> + * behaviors conflicting with SQL92 to meet SQL92 (if E = S + L < S throw\n> + * error; if E < 1, return '', not entire string). Fixed MB related bug when\n> + * S > LC and < LC + 4 sometimes garbage characters are returned.\n> + * - Joe Conway 2002-08-10 \n> */\n> Datum\n> text_substr(PG_FUNCTION_ARGS)\n> {\n> ! \tPG_RETURN_TEXT_P(text_substring(PG_GETARG_DATUM(0),\n> ! \t\t\t\t\t\t\t\t\tPG_GETARG_INT32(1),\n> ! \t\t\t\t\t\t\t\t\tPG_GETARG_INT32(2),\n> ! \t\t\t\t\t\t\t\t\tfalse));\n> ! }\n> \n> ! /*\n> ! * text_substr_no_len -\n> ! *\t Wrapper to avoid opr_sanity failure due to\n> ! *\t one function accepting a different number of args.\n> ! */\n> ! Datum\n> ! text_substr_no_len(PG_FUNCTION_ARGS)\n> ! {\n> ! \tPG_RETURN_TEXT_P(text_substring(PG_GETARG_DATUM(0),\n> ! \t\t\t\t\t\t\t\t\tPG_GETARG_INT32(1),\n> ! \t\t\t\t\t\t\t\t\t-1, true));\n> ! }\n> ! \n> ! /*\n> ! * text_substring -\n> ! *\tDoes the real work for text_substr() and text_substr_no_len()\n> ! 
*\tThis is broken out so it can be called directly by other string processing\n> ! *\tfunctions.\n> ! */\n> ! static text*\n> ! text_substring(Datum str, int32 start, int32 length, bool length_not_specified)\n> ! {\n> ! \tint32\t\teml = pg_database_encoding_max_length();\n> ! \tint32\t\tS = start;\t\t\t\t/* start position */\n> ! \tint32\t\tS1;\t\t\t\t\t\t/* adjusted start position */\n> ! \tint32\t\tL1;\t\t\t\t\t\t/* adjusted substring length */\n> ! \n> ! \t/* life is easy if the encoding max length is 1 */\n> ! \tif (eml == 1)\n> \t{\n> ! \t\tS1 = Max(S, 1);\n> \n> ! \t\tif (length_not_specified)\t/* special case - get length to end of string */\n> ! \t\t\tL1 = -1;\n> ! \t\telse\n> ! \t\t{\n> ! \t\t\t/* end position */\n> ! \t\t\tint\tE = S + length;\n> \n> ! \t\t\t/*\n> ! \t\t\t * A negative value for L is the only way for the end position\n> ! \t\t\t * to be before the start. SQL99 says to throw an error.\n> ! \t\t\t */\n> ! \t\t\tif (E < S)\n> ! \t\t\t\telog(ERROR, \"negative substring length not allowed\");\n> \n> ! \t\t\t/* \n> ! \t\t\t * A zero or negative value for the end position can happen if the start\n> ! \t\t\t * was negative or one. SQL99 says to return a zero-length string.\n> ! \t\t\t */\n> ! \t\t\tif (E < 1)\n> ! \t\t\t\treturn PG_STR_GET_TEXT(\"\");\n> ! \n> ! \t\t\tL1 = E - S1;\n> ! \t\t}\n> ! \n> ! \t\t/* \n> ! \t\t * If the start position is past the end of the string,\n> ! \t\t * SQL99 says to return a zero-length string -- \n> ! \t\t * PG_GETARG_TEXT_P_SLICE() will do that for us.\n> ! \t\t * Convert to zero-based starting position\n> ! \t\t */\n> ! \t\treturn DatumGetTextPSlice(str, S1 - 1, L1);\n> ! \t}\n> ! \telse if (eml > 1)\n> \t{\n> ! \t\t/*\n> ! \t\t * When encoding max length is > 1, we can't get LC without\n> ! \t\t * detoasting, so we'll grab a conservatively large slice\n> ! \t\t * now and go back later to do the right thing\n> ! \t\t */\n> ! \t\tint32\t\tslice_start;\n> ! \t\tint32\t\tslice_size;\n> ! 
\t\tint32\t\tslice_strlen;\n> ! \t\ttext\t\t*slice;\n> ! \t\tint32\t\tE1;\n> ! \t\tint32\t\ti;\n> ! \t\tchar\t *p;\n> ! \t\tchar\t *s;\n> ! \t\ttext\t *ret;\n> ! \n> ! \t\t/*\n> ! \t\t * if S is past the end of the string, the tuple toaster\n> ! \t\t * will return a zero-length string to us\n> ! \t\t */\n> ! \t\tS1 = Max(S, 1);\n> ! \n> ! \t\t/*\n> ! \t\t * We need to start at position zero because there is no\n> ! \t\t * way to know in advance which byte offset corresponds to \n> ! \t\t * the supplied start position.\n> ! \t\t */\n> ! \t\tslice_start = 0;\n> ! \n> ! \t\tif (length_not_specified)\t/* special case - get length to end of string */\n> ! \t\t\tslice_size = L1 = -1;\n> \t\telse\n> ! \t\t{\n> ! \t\t\tint\tE = S + length;\n> ! \n> ! \t\t\t/*\n> ! \t\t\t * A negative value for L is the only way for the end position\n> ! \t\t\t * to be before the start. SQL99 says to throw an error.\n> ! \t\t\t */\n> ! \t\t\tif (E < S)\n> ! \t\t\t\telog(ERROR, \"negative substring length not allowed\");\n> \n> ! \t\t\t/* \n> ! \t\t\t * A zero or negative value for the end position can happen if the start\n> ! \t\t\t * was negative or one. SQL99 says to return a zero-length string.\n> ! \t\t\t */\n> ! \t\t\tif (E < 1)\n> ! \t\t\t\treturn PG_STR_GET_TEXT(\"\");\n> \n> ! \t\t\t/*\n> ! \t\t\t * if E is past the end of the string, the tuple toaster\n> ! \t\t\t * will truncate the length for us\n> ! \t\t\t */\n> ! \t\t\tL1 = E - S1;\n> ! \n> ! \t\t\t/*\n> ! \t\t\t * Total slice size in bytes can't be any longer than the start\n> ! \t\t\t * position plus substring length times the encoding max length.\n> ! \t\t\t */\n> ! \t\t\tslice_size = (S1 + L1) * eml;\n> ! \t\t}\n> ! \t\tslice = DatumGetTextPSlice(str, slice_start, slice_size);\n> \n> ! \t\t/* see if we got back an empty string */\n> ! \t\tif ((VARSIZE(slice) - VARHDRSZ) == 0)\n> ! \t\t\treturn PG_STR_GET_TEXT(\"\");\n> \n> ! \t\t/* Now we can get the actual length of the slice in MB characters */\n> ! 
\t\tslice_strlen = pg_mbstrlen_with_len (VARDATA(slice), VARSIZE(slice) - VARHDRSZ);\n> \n> ! \t\t/* Check that the start position wasn't > slice_strlen. If so,\n> ! \t\t * SQL99 says to return a zero-length string.\n> ! \t\t */\n> ! \t\tif (S1 > slice_strlen)\n> ! \t\t\treturn PG_STR_GET_TEXT(\"\");\n> ! \n> ! \t\t/*\n> ! \t\t * Adjust L1 and E1 now that we know the slice string length.\n> ! \t\t * Again remember that S1 is one based, and slice_start is zero based.\n> ! \t\t */\n> ! \t\tif (L1 > -1)\n> ! \t\t\tE1 = Min(S1 + L1 , slice_start + 1 + slice_strlen);\n> ! \t\telse\n> ! \t\t\tE1 = slice_start + 1 + slice_strlen;\n> ! \n> ! \t\t/*\n> ! \t\t * Find the start position in the slice;\n> ! \t\t * remember S1 is not zero based\n> ! \t\t */\n> ! \t\tp = VARDATA(slice);\n> ! \t\tfor (i = 0; i < S1 - 1; i++)\n> ! \t\t\tp += pg_mblen(p);\n> ! \n> ! \t\t/* hang onto a pointer to our start position */\n> ! \t\ts = p;\n> ! \n> ! \t\t/*\n> ! \t\t * Count the actual bytes used by the substring of \n> ! \t\t * the requested length.\n> ! \t\t */\n> ! \t\tfor (i = S1; i < E1; i++)\n> ! \t\t\tp += pg_mblen(p);\n> ! \n> ! \t\tret = (text *) palloc(VARHDRSZ + (p - s));\n> ! \t\tVARATT_SIZEP(ret) = VARHDRSZ + (p - s);\n> ! \t\tmemcpy(VARDATA(ret), s, (p - s));\n> ! \n> ! \t\treturn ret;\n> ! \t}\n> ! \telse\n> ! \t\telog(ERROR, \"Invalid backend encoding; encoding max length \"\n> ! \t\t\t\t\t\"is less than one.\");\n> ! \n> ! \t/* not reached: suppress compiler warning */\n> ! \treturn PG_STR_GET_TEXT(\"\");\n> }\n> \n> /*\n> ***************\n> *** 481,536 ****\n> Datum\n> textpos(PG_FUNCTION_ARGS)\n> {\n> ! \ttext\t *t1 = PG_GETARG_TEXT_P(0);\n> ! \ttext\t *t2 = PG_GETARG_TEXT_P(1);\n> ! \tint\t\t\tpos;\n> ! \tint\t\t\tpx,\n> ! \t\t\t\tp;\n> ! \tint\t\t\tlen1,\n> \t\t\t\tlen2;\n> - \tpg_wchar *p1,\n> - \t\t\t *p2;\n> \n> ! #ifdef MULTIBYTE\n> ! \tpg_wchar *ps1,\n> ! \t\t\t *ps2;\n> ! 
#endif\n> \n> \tif (VARSIZE(t2) <= VARHDRSZ)\n> \t\tPG_RETURN_INT32(1);\t\t/* result for empty pattern */\n> \n> \tlen1 = (VARSIZE(t1) - VARHDRSZ);\n> \tlen2 = (VARSIZE(t2) - VARHDRSZ);\n> ! #ifdef MULTIBYTE\n> ! \tps1 = p1 = (pg_wchar *) palloc((len1 + 1) * sizeof(pg_wchar));\n> ! \t(void) pg_mb2wchar_with_len((unsigned char *) VARDATA(t1), p1, len1);\n> ! \tlen1 = pg_wchar_strlen(p1);\n> ! \tps2 = p2 = (pg_wchar *) palloc((len2 + 1) * sizeof(pg_wchar));\n> ! \t(void) pg_mb2wchar_with_len((unsigned char *) VARDATA(t2), p2, len2);\n> ! \tlen2 = pg_wchar_strlen(p2);\n> ! #else\n> ! \tp1 = VARDATA(t1);\n> ! \tp2 = VARDATA(t2);\n> ! #endif\n> ! \tpos = 0;\n> \tpx = (len1 - len2);\n> ! \tfor (p = 0; p <= px; p++)\n> \t{\n> ! #ifdef MULTIBYTE\n> ! \t\tif ((*p2 == *p1) && (pg_wchar_strncmp(p1, p2, len2) == 0))\n> ! #else\n> ! \t\tif ((*p2 == *p1) && (strncmp(p1, p2, len2) == 0))\n> ! #endif\n> \t\t{\n> ! \t\t\tpos = p + 1;\n> ! \t\t\tbreak;\n> ! \t\t};\n> ! \t\tp1++;\n> ! \t};\n> ! #ifdef MULTIBYTE\n> ! \tpfree(ps1);\n> ! \tpfree(ps2);\n> ! #endif\n> \tPG_RETURN_INT32(pos);\n> }\n> \n> --- 635,729 ----\n> Datum\n> textpos(PG_FUNCTION_ARGS)\n> {\n> ! \tPG_RETURN_INT32(text_position(PG_GETARG_DATUM(0), PG_GETARG_DATUM(1), 1));\n> ! }\n> ! \n> ! /*\n> ! * text_position -\n> ! *\tDoes the real work for textpos()\n> ! *\tThis is broken out so it can be called directly by other string processing\n> ! *\tfunctions.\n> ! */\n> ! static int32\n> ! text_position(Datum str, Datum search_str, int matchnum)\n> ! {\n> ! \tint\t\t\teml = pg_database_encoding_max_length();\n> ! \ttext\t *t1 = DatumGetTextP(str);\n> ! \ttext\t *t2 = DatumGetTextP(search_str);\n> ! \tint\t\t\tmatch = 0,\n> ! \t\t\t\tpos = 0,\n> ! \t\t\t\tp = 0,\n> ! \t\t\t\tpx,\n> ! \t\t\t\tlen1,\n> \t\t\t\tlen2;\n> \n> ! \tif(matchnum == 0)\n> ! 
\t\treturn 0;\t\t/* result for 0th match */\n> \n> \tif (VARSIZE(t2) <= VARHDRSZ)\n> \t\tPG_RETURN_INT32(1);\t\t/* result for empty pattern */\n> \n> \tlen1 = (VARSIZE(t1) - VARHDRSZ);\n> \tlen2 = (VARSIZE(t2) - VARHDRSZ);\n> ! \n> ! \t/* no use in searching str past point where search_str will fit */\n> \tpx = (len1 - len2);\n> ! \n> ! \tif (eml == 1)\t/* simple case - single byte encoding */\n> \t{\n> ! \t\tchar *p1,\n> ! \t\t\t *p2;\n> ! \n> ! \t\tp1 = VARDATA(t1);\n> ! \t\tp2 = VARDATA(t2);\n> ! \n> ! \t\tfor (p = 0; p <= px; p++)\n> \t\t{\n> ! \t\t\tif ((*p2 == *p1) && (strncmp(p1, p2, len2) == 0))\n> ! \t\t\t{\n> ! \t\t\t\tif (++match == matchnum)\n> ! \t\t\t\t{\n> ! \t\t\t\t\tpos = p + 1;\n> ! \t\t\t\t\tbreak;\n> ! \t\t\t\t}\n> ! \t\t\t}\n> ! \t\t\tp1++;\n> ! \t\t}\n> ! \t}\n> ! \telse if (eml > 1)\t/* not as simple - multibyte encoding */\n> ! \t{\n> ! \t\tpg_wchar *p1,\n> ! \t\t\t\t *p2,\n> ! \t\t\t\t *ps1,\n> ! \t\t\t\t *ps2;\n> ! \n> ! \t\tps1 = p1 = (pg_wchar *) palloc((len1 + 1) * sizeof(pg_wchar));\n> ! \t\t(void) pg_mb2wchar_with_len((unsigned char *) VARDATA(t1), p1, len1);\n> ! \t\tlen1 = pg_wchar_strlen(p1);\n> ! \t\tps2 = p2 = (pg_wchar *) palloc((len2 + 1) * sizeof(pg_wchar));\n> ! \t\t(void) pg_mb2wchar_with_len((unsigned char *) VARDATA(t2), p2, len2);\n> ! \t\tlen2 = pg_wchar_strlen(p2);\n> ! \n> ! \t\tfor (p = 0; p <= px; p++)\n> ! \t\t{\n> ! \t\t\tif ((*p2 == *p1) && (pg_wchar_strncmp(p1, p2, len2) == 0))\n> ! \t\t\t{\n> ! \t\t\t\tif (++match == matchnum)\n> ! \t\t\t\t{\n> ! \t\t\t\t\tpos = p + 1;\n> ! \t\t\t\t\tbreak;\n> ! \t\t\t\t}\n> ! \t\t\t}\n> ! \t\t\tp1++;\n> ! \t\t}\n> ! \n> ! \t\tpfree(ps1);\n> ! \t\tpfree(ps2);\n> ! \t}\n> ! \telse\n> ! \t\telog(ERROR, \"Invalid backend encoding; encoding max length \"\n> ! \t\t\t\t\t\"is less than one.\");\n> ! \n> \tPG_RETURN_INT32(pos);\n> }\n> \n> ***************\n> *** 758,766 ****\n> Datum\n> byteaoctetlen(PG_FUNCTION_ARGS)\n> {\n> ! \tbytea\t *v = PG_GETARG_BYTEA_P(0);\n> ! \n> ! 
\tPG_RETURN_INT32(VARSIZE(v) - VARHDRSZ);\n> }\n> \n> /*\n> --- 951,957 ----\n> Datum\n> byteaoctetlen(PG_FUNCTION_ARGS)\n> {\n> ! \tPG_RETURN_INT32(toast_raw_datum_size(PG_GETARG_DATUM(0)) - VARHDRSZ);\n> }\n> \n> /*\n> ***************\n> *** 805,810 ****\n> --- 996,1003 ----\n> \tPG_RETURN_BYTEA_P(result);\n> }\n> \n> + #define PG_STR_GET_BYTEA(str_) \\\n> + DatumGetByteaP(DirectFunctionCall1(byteain, CStringGetDatum(str_)))\n> /*\n> * bytea_substr()\n> * Return a substring starting at the specified position.\n> ***************\n> *** 813,845 ****\n> * Input:\n> *\t- string\n> *\t- starting position (is one-based)\n> ! *\t- string length\n> *\n> * If the starting position is zero or less, then return from the start of the string\n> * adjusting the length to be consistent with the \"negative start\" per SQL92.\n> ! * If the length is less than zero, return the remaining string.\n> ! *\n> */\n> Datum\n> bytea_substr(PG_FUNCTION_ARGS)\n> {\n> ! \tint32\t\tm = PG_GETARG_INT32(1);\n> ! \tint32\t\tn = PG_GETARG_INT32(2);\n> \n> ! \t/*\n> ! \t * starting position before the start of the string? then offset into\n> ! \t * the string per SQL92 spec...\n> ! \t */\n> ! \tif (m < 1)\n> \t{\n> ! \t\tn += (m - 1);\n> ! \t\tm = 1;\n> \t}\n> \n> ! \t/* m will now become a zero-based starting position */\n> ! \tm--;\n> \n> ! \tPG_RETURN_BYTEA_P(PG_GETARG_BYTEA_P_SLICE (0, m, n));\n> }\n> \n> /*\n> --- 1006,1076 ----\n> * Input:\n> *\t- string\n> *\t- starting position (is one-based)\n> ! *\t- string length (optional)\n> *\n> * If the starting position is zero or less, then return from the start of the string\n> * adjusting the length to be consistent with the \"negative start\" per SQL92.\n> ! * If the length is less than zero, an ERROR is thrown. If no third argument\n> ! * (length) is provided, the length to the end of the string is assumed.\n> */\n> Datum\n> bytea_substr(PG_FUNCTION_ARGS)\n> {\n> ! \tint\t\tS = PG_GETARG_INT32(1);\t/* start position */\n> ! 
\tint\t\tS1;\t\t\t\t\t\t/* adjusted start position */\n> ! \tint\t\tL1;\t\t\t\t\t\t/* adjusted substring length */\n> \n> ! \tS1 = Max(S, 1);\n> ! \n> ! \tif (fcinfo->nargs == 2)\n> ! \t{\n> ! \t\t/*\n> ! \t\t * Not passed a length - PG_GETARG_BYTEA_P_SLICE()\n> ! \t\t * grabs everything to the end of the string if we pass it\n> ! \t\t * a negative value for length.\n> ! \t\t */\n> ! \t\tL1 = -1;\n> ! \t}\n> ! \telse\n> \t{\n> ! \t\t/* end position */\n> ! \t\tint\tE = S + PG_GETARG_INT32(2);\n> ! \n> ! \t\t/*\n> ! \t\t * A negative value for L is the only way for the end position\n> ! \t\t * to be before the start. SQL99 says to throw an error.\n> ! \t\t */\n> ! \t\tif (E < S)\n> ! \t\t\telog(ERROR, \"negative substring length not allowed\");\n> ! \n> ! \t\t/* \n> ! \t\t * A zero or negative value for the end position can happen if the start\n> ! \t\t * was negative or one. SQL99 says to return a zero-length string.\n> ! \t\t */\n> ! \t\tif (E < 1)\n> ! \t\t\tPG_RETURN_BYTEA_P(PG_STR_GET_BYTEA(\"\"));\n> ! \n> ! \t\tL1 = E - S1;\n> \t}\n> \n> ! \t/* \n> ! \t * If the start position is past the end of the string,\n> ! \t * SQL99 says to return a zero-length string -- \n> ! \t * PG_GETARG_TEXT_P_SLICE() will do that for us.\n> ! \t * Convert to zero-based starting position\n> ! \t */\n> ! \tPG_RETURN_BYTEA_P(PG_GETARG_BYTEA_P_SLICE (0, S1 - 1, L1));\n> ! }\n> \n> ! /*\n> ! * bytea_substr_no_len -\n> ! *\t Wrapper to avoid opr_sanity failure due to\n> ! *\t one function accepting a different number of args.\n> ! */\n> ! Datum\n> ! bytea_substr_no_len(PG_FUNCTION_ARGS)\n> ! {\n> ! 
\treturn bytea_substr(fcinfo);\n> }\n> \n> /*\n> ***************\n> *** 1422,1424 ****\n> --- 1653,1834 ----\n> \n> \tPG_RETURN_INT32(cmp);\n> }\n> + \n> + /*\n> + * replace_text\n> + * replace all occurences of 'old_sub_str' in 'orig_str'\n> + * with 'new_sub_str' to form 'new_str'\n> + * \n> + * returns 'orig_str' if 'old_sub_str' == '' or 'orig_str' == ''\n> + * otherwise returns 'new_str' \n> + */\n> + Datum\n> + replace_text(PG_FUNCTION_ARGS)\n> + {\n> + \ttext\t\t*left_text;\n> + \ttext\t\t*right_text;\n> + \ttext\t\t*buf_text;\n> + \ttext\t\t*ret_text;\n> + \tint\t\t\tcurr_posn;\n> + \ttext\t\t*src_text = PG_GETARG_TEXT_P(0);\n> + \tint\t\t\tsrc_text_len = TEXTLEN(src_text);\n> + \ttext\t\t*from_sub_text = PG_GETARG_TEXT_P(1);\n> + \tint\t\t\tfrom_sub_text_len = TEXTLEN(from_sub_text);\n> + \ttext\t\t*to_sub_text = PG_GETARG_TEXT_P(2);\n> + \tchar\t\t*to_sub_str = PG_TEXT_GET_STR(to_sub_text);\n> + \tStringInfo\tstr = makeStringInfo();\n> + \n> + \tif (src_text_len == 0 || from_sub_text_len == 0)\n> + \t\tPG_RETURN_TEXT_P(src_text);\n> + \n> + \tbuf_text = TEXTDUP(src_text);\n> + \tcurr_posn = TEXTPOS(buf_text, from_sub_text);\n> + \n> + \twhile (curr_posn > 0)\n> + \t{\n> + \t\tleft_text = LEFT(buf_text, from_sub_text);\n> + \t\tright_text = RIGHT(buf_text, from_sub_text, from_sub_text_len);\n> + \n> + \t\tappendStringInfo(str, PG_TEXT_GET_STR(left_text));\n> + \t\tappendStringInfo(str, to_sub_str);\n> + \n> + \t\tpfree(buf_text);\n> + \t\tpfree(left_text);\n> + \t\tbuf_text = right_text;\n> + \t\tcurr_posn = TEXTPOS(buf_text, from_sub_text);\n> + \t}\n> + \n> + \tappendStringInfo(str, PG_TEXT_GET_STR(buf_text));\n> + \tpfree(buf_text);\n> + \n> + \tret_text = PG_STR_GET_TEXT(str->data);\n> + \tpfree(str->data);\n> + \tpfree(str);\n> + \n> + \tPG_RETURN_TEXT_P(ret_text);\n> + }\n> + \n> + /*\n> + * split_text\n> + * parse input string\n> + * return ord item (1 based)\n> + * based on provided field separator\n> + */\n> + Datum\n> + 
split_text(PG_FUNCTION_ARGS)\n> + {\n> + \ttext\t *inputstring = PG_GETARG_TEXT_P(0);\n> + \tint\t\t\tinputstring_len = TEXTLEN(inputstring);\n> + \ttext\t *fldsep = PG_GETARG_TEXT_P(1);\n> + \tint\t\t\tfldsep_len = TEXTLEN(fldsep);\n> + \tint\t\t\tfldnum = PG_GETARG_INT32(2);\n> + \tint\t\t\tstart_posn = 0;\n> + \tint\t\t\tend_posn = 0;\n> + \ttext\t\t*result_text;\n> + \n> + \t/* return empty string for empty input string */\n> + \tif (inputstring_len < 1)\n> + \t\tPG_RETURN_TEXT_P(PG_STR_GET_TEXT(\"\"));\n> + \n> + \t/* empty field separator */\n> + \tif (fldsep_len < 1)\n> + \t{\n> + \t\tif (fldnum == 1)\t/* first field - just return the input string */\n> + \t\t\tPG_RETURN_TEXT_P(inputstring);\n> + \t\telse\t\t\t\t/* otherwise return an empty string */\n> + \t\t\tPG_RETURN_TEXT_P(PG_STR_GET_TEXT(\"\"));\n> + \t}\n> + \n> + \t/* field number is 1 based */\n> + \tif (fldnum < 1)\n> + \t\telog(ERROR, \"field position must be > 0\");\n> + \n> + \tstart_posn = text_position(PointerGetDatum(inputstring),\n> + \t\t\t\t\t\t\t\tPointerGetDatum(fldsep),\n> + \t\t\t\t\t\t\t\tfldnum - 1);\n> + \tend_posn = text_position(PointerGetDatum(inputstring),\n> + \t\t\t\t\t\t\t\tPointerGetDatum(fldsep),\n> + \t\t\t\t\t\t\t\tfldnum);\n> + \n> + \tif ((start_posn == 0) && (end_posn == 0))\t/* fldsep not found */\n> + \t{\n> + \t\tif (fldnum == 1)\t/* first field - just return the input string */\n> + \t\t\tPG_RETURN_TEXT_P(inputstring);\n> + \t\telse\t\t\t\t/* otherwise return an empty string */\n> + \t\t\tPG_RETURN_TEXT_P(PG_STR_GET_TEXT(\"\"));\n> + \t}\n> + \telse if ((start_posn != 0) && (end_posn == 0))\n> + \t{\n> + \t\t/* last field requested */\n> + \t\tresult_text = text_substring(PointerGetDatum(inputstring), start_posn + fldsep_len, -1, true);\n> + \t\tPG_RETURN_TEXT_P(result_text);\n> + \t}\n> + \telse if ((start_posn == 0) && (end_posn != 0))\n> + \t{\n> + \t\t/* first field requested */\n> + \t\tresult_text = LEFT(inputstring, fldsep);\n> + 
\t\tPG_RETURN_TEXT_P(result_text);\n> + \t}\n> + \telse\n> + \t{\n> + \t\t/* prior to last field requested */\n> + \t\tresult_text = text_substring(PointerGetDatum(inputstring), start_posn + fldsep_len, end_posn - start_posn - fldsep_len, false);\n> + \t\tPG_RETURN_TEXT_P(result_text);\n> + \t}\n> + }\n> + \n> + #define HEXBASE 16\n> + /*\n> + * Convert a int32 to a string containing a base 16 (hex) representation of\n> + * the number.\n> + */\n> + Datum\n> + to_hex32(PG_FUNCTION_ARGS)\n> + {\n> + \tstatic char\t\tdigits[] = \"0123456789abcdef\";\n> + \tchar\t\t\tbuf[32];\t/* bigger than needed, but reasonable */\n> + \tchar\t\t *ptr,\n> + \t\t\t\t *end;\n> + \ttext\t\t *result_text;\n> + \tint32\t\t\tvalue = PG_GETARG_INT32(0);\n> + \n> + \tend = ptr = buf + sizeof(buf) - 1;\n> + \t*ptr = '\\0';\n> + \n> + \tdo\n> + \t{\n> + \t\t*--ptr = digits[value % HEXBASE];\n> + \t\tvalue /= HEXBASE;\n> + \t} while (ptr > buf && value);\n> + \n> + \tresult_text = PG_STR_GET_TEXT(ptr);\n> + \tPG_RETURN_TEXT_P(result_text);\n> + }\n> + \n> + /*\n> + * Convert a int64 to a string containing a base 16 (hex) representation of\n> + * the number.\n> + */\n> + Datum\n> + to_hex64(PG_FUNCTION_ARGS)\n> + {\n> + \tstatic char\t\tdigits[] = \"0123456789abcdef\";\n> + \tchar\t\t\tbuf[32];\t/* bigger than needed, but reasonable */\n> + \tchar\t\t\t*ptr,\n> + \t\t\t\t\t*end;\n> + \ttext\t\t\t*result_text;\n> + \tint64\t\t\tvalue = PG_GETARG_INT64(0);\n> + \n> + \tend = ptr = buf + sizeof(buf) - 1;\n> + \t*ptr = '\\0';\n> + \n> + \tdo\n> + \t{\n> + \t\t*--ptr = digits[value % HEXBASE];\n> + \t\tvalue /= HEXBASE;\n> + \t} while (ptr > buf && value);\n> + \n> + \tresult_text = PG_STR_GET_TEXT(ptr);\n> + \tPG_RETURN_TEXT_P(result_text);\n> + }\n> + \n> Index: src/include/catalog/pg_proc.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/include/catalog/pg_proc.h,v\n> retrieving revision 1.254\n> diff -c -r1.254 pg_proc.h\n> *** 
src/include/catalog/pg_proc.h\t15 Aug 2002 02:51:27 -0000\t1.254\n> --- src/include/catalog/pg_proc.h\t16 Aug 2002 18:53:13 -0000\n> ***************\n> *** 2121,2127 ****\n> DESCR(\"remove initial characters from string\");\n> DATA(insert OID = 882 ( rtrim\t\t PGNSP PGUID 14 f f t f i 1 25 \"25\" \"select rtrim($1, \\' \\')\" - _null_ ));\n> DESCR(\"remove trailing characters from string\");\n> ! DATA(insert OID = 883 ( substr\t PGNSP PGUID 14 f f t f i 2 25 \"25 23\"\t\"select substr($1, $2, -1)\" - _null_ ));\n> DESCR(\"return portion of string\");\n> DATA(insert OID = 884 ( btrim\t\t PGNSP PGUID 12 f f t f i 2 25 \"25 25\"\tbtrim - _null_ ));\n> DESCR(\"trim both ends of string\");\n> --- 2121,2127 ----\n> DESCR(\"remove initial characters from string\");\n> DATA(insert OID = 882 ( rtrim\t\t PGNSP PGUID 14 f f t f i 1 25 \"25\" \"select rtrim($1, \\' \\')\" - _null_ ));\n> DESCR(\"remove trailing characters from string\");\n> ! DATA(insert OID = 883 ( substr\t PGNSP PGUID 12 f f t f i 2 25 \"25 23\"\ttext_substr_no_len - _null_ ));\n> DESCR(\"return portion of string\");\n> DATA(insert OID = 884 ( btrim\t\t PGNSP PGUID 12 f f t f i 2 25 \"25 25\"\tbtrim - _null_ ));\n> DESCR(\"trim both ends of string\");\n> ***************\n> *** 2130,2137 ****\n> \n> DATA(insert OID = 936 ( substring PGNSP PGUID 12 f f t f i 3 25 \"25 23 23\" text_substr - _null_ ));\n> DESCR(\"return portion of string\");\n> ! DATA(insert OID = 937 ( substring PGNSP PGUID 14 f f t f i 2 25 \"25 23\"\t\"select substring($1, $2, -1)\" - _null_ ));\n> DESCR(\"return portion of string\");\n> \n> /* for multi-byte support */\n> \n> --- 2130,2145 ----\n> \n> DATA(insert OID = 936 ( substring PGNSP PGUID 12 f f t f i 3 25 \"25 23 23\" text_substr - _null_ ));\n> DESCR(\"return portion of string\");\n> ! 
DATA(insert OID = 937 ( substring PGNSP PGUID 12 f f t f i 2 25 \"25 23\"\ttext_substr_no_len - _null_ ));\n> DESCR(\"return portion of string\");\n> + DATA(insert OID = 2087 ( replace PGNSP PGUID 12 f f t f i 3 25 \"25 25 25\" replace_text - _null_ ));\n> + DESCR(\"replace all occurrences of old_substr with new_substr in string\");\n> + DATA(insert OID = 2088 ( split PGNSP PGUID 12 f f t f i 3 25 \"25 25 23\" split_text - _null_ ));\n> + DESCR(\"split string by field_sep and return field_num\");\n> + DATA(insert OID = 2089 ( to_hex PGNSP PGUID 12 f f t f i 1 25 \"23\" to_hex32 - _null_ ));\n> + DESCR(\"convert int32 number to hex\");\n> + DATA(insert OID = 2090 ( to_hex PGNSP PGUID 12 f f t f i 1 25 \"20\" to_hex64 - _null_ ));\n> + DESCR(\"convert int64 number to hex\");\n> \n> /* for multi-byte support */\n> \n> ***************\n> *** 2778,2784 ****\n> DESCR(\"concatenate\");\n> DATA(insert OID = 2012 ( substring\t\t PGNSP PGUID 12 f f t f i 3 17 \"17 23 23\" bytea_substr - _null_ ));\n> DESCR(\"return portion of string\");\n> ! DATA(insert OID = 2013 ( substring\t\t PGNSP PGUID 14 f f t f i 2 17 \"17 23\"\t\"select substring($1, $2, -1)\" - _null_ ));\n> DESCR(\"return portion of string\");\n> DATA(insert OID = 2014 ( position\t\t PGNSP PGUID 12 f f t f i 2 23 \"17 17\"\tbyteapos - _null_ ));\n> DESCR(\"return position of substring\");\n> --- 2786,2796 ----\n> DESCR(\"concatenate\");\n> DATA(insert OID = 2012 ( substring\t\t PGNSP PGUID 12 f f t f i 3 17 \"17 23 23\" bytea_substr - _null_ ));\n> DESCR(\"return portion of string\");\n> ! DATA(insert OID = 2013 ( substring\t\t PGNSP PGUID 12 f f t f i 2 17 \"17 23\"\tbytea_substr_no_len - _null_ ));\n> ! DESCR(\"return portion of string\");\n> ! DATA(insert OID = 2085 ( substr\t\t PGNSP PGUID 12 f f t f i 3 17 \"17 23 23\" bytea_substr - _null_ ));\n> ! DESCR(\"return portion of string\");\n> ! 
DATA(insert OID = 2086 ( substr\t\t PGNSP PGUID 12 f f t f i 2 17 \"17 23\"\tbytea_substr_no_len - _null_ ));\n> DESCR(\"return portion of string\");\n> DATA(insert OID = 2014 ( position\t\t PGNSP PGUID 12 f f t f i 2 23 \"17 17\"\tbyteapos - _null_ ));\n> DESCR(\"return position of substring\");\n> Index: src/include/utils/builtins.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/include/utils/builtins.h,v\n> retrieving revision 1.191\n> diff -c -r1.191 builtins.h\n> *** src/include/utils/builtins.h\t15 Aug 2002 02:51:27 -0000\t1.191\n> --- src/include/utils/builtins.h\t16 Aug 2002 18:53:13 -0000\n> ***************\n> *** 447,458 ****\n> --- 447,463 ----\n> extern Datum textoctetlen(PG_FUNCTION_ARGS);\n> extern Datum textpos(PG_FUNCTION_ARGS);\n> extern Datum text_substr(PG_FUNCTION_ARGS);\n> + extern Datum text_substr_no_len(PG_FUNCTION_ARGS);\n> extern Datum name_text(PG_FUNCTION_ARGS);\n> extern Datum text_name(PG_FUNCTION_ARGS);\n> extern int\tvarstr_cmp(char *arg1, int len1, char *arg2, int len2);\n> extern List *textToQualifiedNameList(text *textval, const char *caller);\n> extern bool SplitIdentifierString(char *rawstring, char separator,\n> \t\t\t\t\t\t\t\t List **namelist);\n> + extern Datum replace_text(PG_FUNCTION_ARGS);\n> + extern Datum split_text(PG_FUNCTION_ARGS);\n> + extern Datum to_hex32(PG_FUNCTION_ARGS);\n> + extern Datum to_hex64(PG_FUNCTION_ARGS);\n> \n> extern Datum unknownin(PG_FUNCTION_ARGS);\n> extern Datum unknownout(PG_FUNCTION_ARGS);\n> ***************\n> *** 476,481 ****\n> --- 481,487 ----\n> extern Datum byteacat(PG_FUNCTION_ARGS);\n> extern Datum byteapos(PG_FUNCTION_ARGS);\n> extern Datum bytea_substr(PG_FUNCTION_ARGS);\n> + extern Datum bytea_substr_no_len(PG_FUNCTION_ARGS);\n> \n> /* version.c */\n> extern Datum pgsql_version(PG_FUNCTION_ARGS);\n> Index: src/test/regress/expected/strings.out\n> 
===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/test/regress/expected/strings.out,v\n> retrieving revision 1.12\n> diff -c -r1.12 strings.out\n> *** src/test/regress/expected/strings.out\t11 Jun 2002 15:41:38 -0000\t1.12\n> --- src/test/regress/expected/strings.out\t16 Aug 2002 18:53:13 -0000\n> ***************\n> *** 573,575 ****\n> --- 573,738 ----\n> text and varchar\n> (1 row)\n> \n> + --\n> + -- test substr with toasted text values\n> + --\n> + CREATE TABLE toasttest(f1 text);\n> + insert into toasttest values(repeat('1234567890',10000));\n> + insert into toasttest values(repeat('1234567890',10000));\n> + -- If the starting position is zero or less, then return from the start of the string\n> + -- adjusting the length to be consistent with the \"negative start\" per SQL92.\n> + SELECT substr(f1, -1, 5) from toasttest;\n> + substr \n> + --------\n> + 123\n> + 123\n> + (2 rows)\n> + \n> + -- If the length is less than zero, an ERROR is thrown.\n> + SELECT substr(f1, 5, -1) from toasttest;\n> + ERROR: negative substring length not allowed\n> + -- If no third argument (length) is provided, the length to the end of the\n> + -- string is assumed.\n> + SELECT substr(f1, 99995) from toasttest;\n> + substr \n> + --------\n> + 567890\n> + 567890\n> + (2 rows)\n> + \n> + -- If start plus length is > string length, the result is truncated to\n> + -- string length\n> + SELECT substr(f1, 99995, 10) from toasttest;\n> + substr \n> + --------\n> + 567890\n> + 567890\n> + (2 rows)\n> + \n> + DROP TABLE toasttest;\n> + --\n> + -- test substr with toasted bytea values\n> + --\n> + CREATE TABLE toasttest(f1 bytea);\n> + insert into toasttest values(decode(repeat('1234567890',10000),'escape'));\n> + insert into toasttest values(decode(repeat('1234567890',10000),'escape'));\n> + -- If the starting position is zero or less, then return from the start of the string\n> + -- adjusting the length to be consistent with the 
\"negative start\" per SQL92.\n> + SELECT substr(f1, -1, 5) from toasttest;\n> + substr \n> + --------\n> + 123\n> + 123\n> + (2 rows)\n> + \n> + -- If the length is less than zero, an ERROR is thrown.\n> + SELECT substr(f1, 5, -1) from toasttest;\n> + ERROR: negative substring length not allowed\n> + -- If no third argument (length) is provided, the length to the end of the\n> + -- string is assumed.\n> + SELECT substr(f1, 99995) from toasttest;\n> + substr \n> + --------\n> + 567890\n> + 567890\n> + (2 rows)\n> + \n> + -- If start plus length is > string length, the result is truncated to\n> + -- string length\n> + SELECT substr(f1, 99995, 10) from toasttest;\n> + substr \n> + --------\n> + 567890\n> + 567890\n> + (2 rows)\n> + \n> + DROP TABLE toasttest;\n> + --\n> + -- test length\n> + --\n> + SELECT length('abcdef') AS \"length_6\";\n> + length_6 \n> + ----------\n> + 6\n> + (1 row)\n> + \n> + --\n> + -- test strpos\n> + --\n> + SELECT strpos('abcdef', 'cd') AS \"pos_3\";\n> + pos_3 \n> + -------\n> + 3\n> + (1 row)\n> + \n> + SELECT strpos('abcdef', 'xy') AS \"pos_0\";\n> + pos_0 \n> + -------\n> + 0\n> + (1 row)\n> + \n> + --\n> + -- test replace\n> + --\n> + SELECT replace('abcdef', 'de', '45') AS \"abc45f\";\n> + abc45f \n> + --------\n> + abc45f\n> + (1 row)\n> + \n> + SELECT replace('yabadabadoo', 'ba', '123') AS \"ya123da123doo\";\n> + ya123da123doo \n> + ---------------\n> + ya123da123doo\n> + (1 row)\n> + \n> + SELECT replace('yabadoo', 'bad', '') AS \"yaoo\";\n> + yaoo \n> + ------\n> + yaoo\n> + (1 row)\n> + \n> + --\n> + -- test split\n> + --\n> + select split('joeuser@mydatabase','@',0) AS \"an error\";\n> + ERROR: field position must be > 0\n> + select split('joeuser@mydatabase','@',1) AS \"joeuser\";\n> + joeuser \n> + ---------\n> + joeuser\n> + (1 row)\n> + \n> + select split('joeuser@mydatabase','@',2) AS \"mydatabase\";\n> + mydatabase \n> + ------------\n> + mydatabase\n> + (1 row)\n> + \n> + select split('joeuser@mydatabase','@',3) AS 
\"empty string\";\n> + empty string \n> + --------------\n> + \n> + (1 row)\n> + \n> + select split('@joeuser@mydatabase@','@',2) AS \"joeuser\";\n> + joeuser \n> + ---------\n> + joeuser\n> + (1 row)\n> + \n> + --\n> + -- test to_hex\n> + --\n> + select to_hex(256*256*256 - 1) AS \"ffffff\";\n> + ffffff \n> + --------\n> + ffffff\n> + (1 row)\n> + \n> + select to_hex(256::bigint*256::bigint*256::bigint*256::bigint - 1) AS \"ffffffff\";\n> + ffffffff \n> + ----------\n> + ffffffff\n> + (1 row)\n> + \n> Index: src/test/regress/sql/strings.sql\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/test/regress/sql/strings.sql,v\n> retrieving revision 1.8\n> diff -c -r1.8 strings.sql\n> *** src/test/regress/sql/strings.sql\t11 Jun 2002 15:41:38 -0000\t1.8\n> --- src/test/regress/sql/strings.sql\t16 Aug 2002 18:53:13 -0000\n> ***************\n> *** 197,199 ****\n> --- 197,292 ----\n> SELECT text 'text' || char(20) ' and characters' AS \"Concat text to char\";\n> \n> SELECT text 'text' || varchar ' and varchar' AS \"Concat text to varchar\";\n> + \n> + --\n> + -- test substr with toasted text values\n> + --\n> + CREATE TABLE toasttest(f1 text);\n> + \n> + insert into toasttest values(repeat('1234567890',10000));\n> + insert into toasttest values(repeat('1234567890',10000));\n> + \n> + -- If the starting position is zero or less, then return from the start of the string\n> + -- adjusting the length to be consistent with the \"negative start\" per SQL92.\n> + SELECT substr(f1, -1, 5) from toasttest;\n> + \n> + -- If the length is less than zero, an ERROR is thrown.\n> + SELECT substr(f1, 5, -1) from toasttest;\n> + \n> + -- If no third argument (length) is provided, the length to the end of the\n> + -- string is assumed.\n> + SELECT substr(f1, 99995) from toasttest;\n> + \n> + -- If start plus length is > string length, the result is truncated to\n> + -- string length\n> + SELECT substr(f1, 99995, 10) from 
toasttest;\n> + \n> + DROP TABLE toasttest;\n> + \n> + --\n> + -- test substr with toasted bytea values\n> + --\n> + CREATE TABLE toasttest(f1 bytea);\n> + \n> + insert into toasttest values(decode(repeat('1234567890',10000),'escape'));\n> + insert into toasttest values(decode(repeat('1234567890',10000),'escape'));\n> + \n> + -- If the starting position is zero or less, then return from the start of the string\n> + -- adjusting the length to be consistent with the \"negative start\" per SQL92.\n> + SELECT substr(f1, -1, 5) from toasttest;\n> + \n> + -- If the length is less than zero, an ERROR is thrown.\n> + SELECT substr(f1, 5, -1) from toasttest;\n> + \n> + -- If no third argument (length) is provided, the length to the end of the\n> + -- string is assumed.\n> + SELECT substr(f1, 99995) from toasttest;\n> + \n> + -- If start plus length is > string length, the result is truncated to\n> + -- string length\n> + SELECT substr(f1, 99995, 10) from toasttest;\n> + \n> + DROP TABLE toasttest;\n> + \n> + --\n> + -- test length\n> + --\n> + \n> + SELECT length('abcdef') AS \"length_6\";\n> + \n> + --\n> + -- test strpos\n> + --\n> + \n> + SELECT strpos('abcdef', 'cd') AS \"pos_3\";\n> + \n> + SELECT strpos('abcdef', 'xy') AS \"pos_0\";\n> + \n> + --\n> + -- test replace\n> + --\n> + SELECT replace('abcdef', 'de', '45') AS \"abc45f\";\n> + \n> + SELECT replace('yabadabadoo', 'ba', '123') AS \"ya123da123doo\";\n> + \n> + SELECT replace('yabadoo', 'bad', '') AS \"yaoo\";\n> + \n> + --\n> + -- test split\n> + --\n> + select split('joeuser@mydatabase','@',0) AS \"an error\";\n> + \n> + select split('joeuser@mydatabase','@',1) AS \"joeuser\";\n> + \n> + select split('joeuser@mydatabase','@',2) AS \"mydatabase\";\n> + \n> + select split('joeuser@mydatabase','@',3) AS \"empty string\";\n> + \n> + select split('@joeuser@mydatabase@','@',2) AS \"joeuser\";\n> + \n> + --\n> + -- test to_hex\n> + --\n> + select to_hex(256*256*256 - 1) AS \"ffffff\";\n> + \n> + select 
to_hex(256::bigint*256::bigint*256::bigint*256::bigint - 1) AS \"ffffffff\";\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 21 Aug 2002 13:24:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [GENERAL] workaround for lack of REPLACE()" }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\n\nJoe Conway wrote:\n> Joe Conway wrote:\n> > I took Tom's advice and added wrapper functions around text_substr() and \n> > bytea_substr() to cover the 2 argument case.\n> > \n> > I also added tests to strings.sql to cover substr() on toasted columns \n> > of both text and bytea.\n> > \n> \n> Please replace the original patch (substr.2002.08.14.1.patch) with the \n> attached. 
It includes everything from the previous one, plus newly \n> implemented builtin functions:\n> \n> replace(string, from, to)\n> -- replaces all occurrences of \"from\" in \"string\" to \"to\"\n> split(string, fldsep, column)\n> -- splits \"string\" on \"fldsep\" and returns \"column\" number piece\n> to_hex(int32_num) & to_hex(int64_num)\n> -- takes integer number and returns as hex string\n> \n> All previously discussed on the list; see thread at:\n> http://archives.postgresql.org/pgsql-hackers/2002-07/msg00511.php\n> \n> Examples:\n> \n> SELECT replace('yabadabadoo', 'ba', '123') AS \"ya123da123doo\";\n> ya123da123doo\n> ---------------\n> ya123da123doo\n> (1 row)\n> \n> select split('joeuser@mydatabase','@',1) AS \"joeuser\";\n> joeuser\n> ---------\n> joeuser\n> (1 row)\n> \n> select split('joeuser@mydatabase','@',2) AS \"mydatabase\";\n> mydatabase\n> ------------\n> mydatabase\n> (1 row)\n> \n> select to_hex(256::bigint*256::bigint*256::bigint*256::bigint - 1) AS \n> \"ffffffff\";\n> ffffffff\n> ----------\n> ffffffff\n> (1 row)\n> \n> Tests have been added to the regression suite.\n> \n> Passes all regression tests. I've checked the strings.sql script in a \n> multibyte database and it works fine also. 
I'd appreciate a good look by \n> someone more familiar with multibyte related issues though.\n> \n> If it is OK, I'd like to hold off on docs until this is committed and \n> after beta starts.\n> \n> If there are no objections, please apply.\n> \n> Thanks,\n> \n> Joe\n\n> Index: src/backend/utils/adt/varlena.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/utils/adt/varlena.c,v\n> retrieving revision 1.87\n> diff -c -r1.87 varlena.c\n> *** src/backend/utils/adt/varlena.c\t4 Aug 2002 06:44:47 -0000\t1.87\n> --- src/backend/utils/adt/varlena.c\t16 Aug 2002 19:54:03 -0000\n> ***************\n> *** 18,23 ****\n> --- 18,25 ----\n> \n> #include \"mb/pg_wchar.h\"\n> #include \"miscadmin.h\"\n> + #include \"access/tuptoaster.h\"\n> + #include \"lib/stringinfo.h\"\n> #include \"utils/builtins.h\"\n> #include \"utils/pg_locale.h\"\n> \n> ***************\n> *** 27,34 ****\n> --- 29,62 ----\n> #define DatumGetUnknownP(X)\t\t\t((unknown *) PG_DETOAST_DATUM(X))\n> #define PG_GETARG_UNKNOWN_P(n)\t\tDatumGetUnknownP(PG_GETARG_DATUM(n))\n> #define PG_RETURN_UNKNOWN_P(x)\t\tPG_RETURN_POINTER(x)\n> + #define PG_TEXTARG_GET_STR(arg_) \\\n> + DatumGetCString(DirectFunctionCall1(textout, PG_GETARG_DATUM(arg_)))\n> + #define PG_TEXT_GET_STR(textp_) \\\n> + DatumGetCString(DirectFunctionCall1(textout, PointerGetDatum(textp_)))\n> + #define PG_STR_GET_TEXT(str_) \\\n> + DatumGetTextP(DirectFunctionCall1(textin, CStringGetDatum(str_)))\n> + #define TEXTLEN(textp) \\\n> + \ttext_length(PointerGetDatum(textp))\n> + #define TEXTPOS(buf_text, from_sub_text) \\\n> + \ttext_position(PointerGetDatum(buf_text), PointerGetDatum(from_sub_text), 1)\n> + #define TEXTDUP(textp) \\\n> + \tDatumGetTextPCopy(PointerGetDatum(textp))\n> + #define LEFT(buf_text, from_sub_text) \\\n> + \ttext_substring(PointerGetDatum(buf_text), \\\n> + \t\t\t\t\t1, \\\n> + \t\t\t\t\tTEXTPOS(buf_text, from_sub_text) - 1, false)\n> + #define 
RIGHT(buf_text, from_sub_text, from_sub_text_len) \\\n> + \ttext_substring(PointerGetDatum(buf_text), \\\n> + \t\t\t\t\tTEXTPOS(buf_text, from_sub_text) + from_sub_text_len, \\\n> + \t\t\t\t\t-1, true)\n> \n> static int\ttext_cmp(text *arg1, text *arg2);\n> + static int32 text_length(Datum str);\n> + static int32 text_position(Datum str, Datum search_str, int matchnum);\n> + static text *text_substring(Datum str,\n> + \t\t\t\t\t\t\tint32 start,\n> + \t\t\t\t\t\t\tint32 length,\n> + \t\t\t\t\t\t\tbool length_not_specified);\n> \n> \n> /*****************************************************************************\n> ***************\n> *** 285,303 ****\n> Datum\n> textlen(PG_FUNCTION_ARGS)\n> {\n> ! \ttext\t *t = PG_GETARG_TEXT_P(0);\n> \n> ! #ifdef MULTIBYTE\n> ! \t/* optimization for single byte encoding */\n> ! \tif (pg_database_encoding_max_length() <= 1)\n> ! \t\tPG_RETURN_INT32(VARSIZE(t) - VARHDRSZ);\n> ! \n> ! \tPG_RETURN_INT32(\n> ! \t\tpg_mbstrlen_with_len(VARDATA(t), VARSIZE(t) - VARHDRSZ)\n> ! \t\t);\n> ! #else\n> ! \tPG_RETURN_INT32(VARSIZE(t) - VARHDRSZ);\n> ! #endif\n> }\n> \n> /*\n> --- 313,348 ----\n> Datum\n> textlen(PG_FUNCTION_ARGS)\n> {\n> ! \tPG_RETURN_INT32(text_length(PG_GETARG_DATUM(0)));\n> ! }\n> \n> ! /*\n> ! * text_length -\n> ! *\tDoes the real work for textlen()\n> ! *\tThis is broken out so it can be called directly by other string processing\n> ! *\tfunctions.\n> ! */\n> ! static int32\n> ! text_length(Datum str)\n> ! {\n> ! \t/* fastpath when max encoding length is one */\n> ! \tif (pg_database_encoding_max_length() == 1)\n> ! \t\tPG_RETURN_INT32(toast_raw_datum_size(str) - VARHDRSZ);\n> ! \n> ! \tif (pg_database_encoding_max_length() > 1)\n> ! \t{\n> ! \t\ttext\t *t = DatumGetTextP(str);\n> ! \n> ! \t\tPG_RETURN_INT32(pg_mbstrlen_with_len(VARDATA(t),\n> ! \t\t\t\t\t\t\t\t\t VARSIZE(t) - VARHDRSZ));\n> ! \t}\n> ! \n> ! \t/* should never get here */\n> ! \telog(ERROR, \"Invalid backend encoding; encoding max length \"\n> ! 
\t\t\t\t\"is less than one.\");\n> ! \n> ! \t/* not reached: suppress compiler warning */\n> ! \treturn 0;\n> }\n> \n> /*\n> ***************\n> *** 308,316 ****\n> Datum\n> textoctetlen(PG_FUNCTION_ARGS)\n> {\n> ! \ttext *arg = PG_GETARG_TEXT_P(0);\n> ! \n> ! \tPG_RETURN_INT32(VARSIZE(arg) - VARHDRSZ);\n> }\n> \n> /*\n> --- 353,359 ----\n> Datum\n> textoctetlen(PG_FUNCTION_ARGS)\n> {\n> ! \tPG_RETURN_INT32(toast_raw_datum_size(PG_GETARG_DATUM(0)) - VARHDRSZ);\n> }\n> \n> /*\n> ***************\n> *** 382,471 ****\n> * - Thomas Lockhart 1998-12-10\n> * Now uses faster TOAST-slicing interface\n> * - John Gray 2002-02-22\n> */\n> Datum\n> text_substr(PG_FUNCTION_ARGS)\n> {\n> ! \ttext\t *string;\n> ! \tint32\t\tm = PG_GETARG_INT32(1);\n> ! \tint32\t\tn = PG_GETARG_INT32(2);\n> ! \tint32 sm;\n> ! \tint32 sn;\n> ! \tint eml = 1;\n> ! #ifdef MULTIBYTE\n> ! \tint\t\t\ti;\n> ! \tint\t\t\tlen;\n> ! \ttext\t *ret;\n> ! \tchar\t *p;\n> ! #endif \n> \n> ! \t/*\n> ! \t * starting position before the start of the string? then offset into\n> ! \t * the string per SQL92 spec...\n> ! \t */\n> ! \tif (m < 1)\n> \t{\n> ! \t\tn += (m - 1);\n> ! \t\tm = 1;\n> ! \t}\n> ! \t/* Check for m > octet length is made in TOAST access routine */\n> \n> ! \t/* m will now become a zero-based starting position */\n> ! \tsm = m - 1;\n> ! \tsn = n;\n> \n> ! #ifdef MULTIBYTE\n> ! \teml = pg_database_encoding_max_length ();\n> \n> ! \tif (eml > 1)\n> \t{\n> ! \t\tsm = 0;\n> ! \t\tif (n > -1)\n> ! \t\t\tsn = (m + n) * eml + 3; /* +3 to avoid mb characters overhanging slice end */\n> \t\telse\n> ! \t\t\tsn = n;\t\t/* n < 0 is special-cased by heap_tuple_untoast_attr_slice */\n> ! \t}\n> ! #endif \n> \n> ! \tstring = PG_GETARG_TEXT_P_SLICE (0, sm, sn);\n> \n> ! \tif (eml == 1) \n> ! \t{\n> ! \t\tPG_RETURN_TEXT_P (string);\n> ! \t}\n> ! #ifndef MULTIBYTE\n> ! \tPG_RETURN_NULL(); /* notreached: suppress compiler warning */\n> ! #endif\n> ! #ifdef MULTIBYTE\n> ! \tif (n > -1)\n> ! 
\t\tlen = pg_mbstrlen_with_len (VARDATA (string), sn - 3);\n> ! \telse\t/* n < 0 is special-cased; need full string length */\n> ! \t\tlen = pg_mbstrlen_with_len (VARDATA (string), VARSIZE(string)-VARHDRSZ);\n> ! \n> ! \tif (m > len)\n> ! \t{\n> ! \t\tm = 1;\n> ! \t\tn = 0;\n> ! \t}\n> ! \tm--;\n> ! \tif (((m + n) > len) || (n < 0))\n> ! \t\tn = (len - m);\n> ! \n> ! \tp = VARDATA(string);\n> ! \tfor (i = 0; i < m; i++)\n> ! \t\tp += pg_mblen(p);\n> ! \tm = p - VARDATA(string);\n> ! \tfor (i = 0; i < n; i++)\n> ! \t\tp += pg_mblen(p);\n> ! \tn = p - (VARDATA(string) + m);\n> \n> ! \tret = (text *) palloc(VARHDRSZ + n);\n> ! \tVARATT_SIZEP(ret) = VARHDRSZ + n;\n> \n> ! \tmemcpy(VARDATA(ret), VARDATA(string) + m, n);\n> \n> ! \tPG_RETURN_TEXT_P(ret);\n> ! #endif\n> }\n> \n> /*\n> --- 425,625 ----\n> * - Thomas Lockhart 1998-12-10\n> * Now uses faster TOAST-slicing interface\n> * - John Gray 2002-02-22\n> + * Remove \"#ifdef MULTIBYTE\" and test for encoding_max_length instead. Change\n> + * behaviors conflicting with SQL92 to meet SQL92 (if E = S + L < S throw\n> + * error; if E < 1, return '', not entire string). Fixed MB related bug when\n> + * S > LC and < LC + 4 sometimes garbage characters are returned.\n> + * - Joe Conway 2002-08-10 \n> */\n> Datum\n> text_substr(PG_FUNCTION_ARGS)\n> {\n> ! \tPG_RETURN_TEXT_P(text_substring(PG_GETARG_DATUM(0),\n> ! \t\t\t\t\t\t\t\t\tPG_GETARG_INT32(1),\n> ! \t\t\t\t\t\t\t\t\tPG_GETARG_INT32(2),\n> ! \t\t\t\t\t\t\t\t\tfalse));\n> ! }\n> \n> ! /*\n> ! * text_substr_no_len -\n> ! *\t Wrapper to avoid opr_sanity failure due to\n> ! *\t one function accepting a different number of args.\n> ! */\n> ! Datum\n> ! text_substr_no_len(PG_FUNCTION_ARGS)\n> ! {\n> ! \tPG_RETURN_TEXT_P(text_substring(PG_GETARG_DATUM(0),\n> ! \t\t\t\t\t\t\t\t\tPG_GETARG_INT32(1),\n> ! \t\t\t\t\t\t\t\t\t-1, true));\n> ! }\n> ! \n> ! /*\n> ! * text_substring -\n> ! *\tDoes the real work for text_substr() and text_substr_no_len()\n> ! 
*\tThis is broken out so it can be called directly by other string processing\n> ! *\tfunctions.\n> ! */\n> ! static text*\n> ! text_substring(Datum str, int32 start, int32 length, bool length_not_specified)\n> ! {\n> ! \tint32\t\teml = pg_database_encoding_max_length();\n> ! \tint32\t\tS = start;\t\t\t\t/* start position */\n> ! \tint32\t\tS1;\t\t\t\t\t\t/* adjusted start position */\n> ! \tint32\t\tL1;\t\t\t\t\t\t/* adjusted substring length */\n> ! \n> ! \t/* life is easy if the encoding max length is 1 */\n> ! \tif (eml == 1)\n> \t{\n> ! \t\tS1 = Max(S, 1);\n> \n> ! \t\tif (length_not_specified)\t/* special case - get length to end of string */\n> ! \t\t\tL1 = -1;\n> ! \t\telse\n> ! \t\t{\n> ! \t\t\t/* end position */\n> ! \t\t\tint\tE = S + length;\n> \n> ! \t\t\t/*\n> ! \t\t\t * A negative value for L is the only way for the end position\n> ! \t\t\t * to be before the start. SQL99 says to throw an error.\n> ! \t\t\t */\n> ! \t\t\tif (E < S)\n> ! \t\t\t\telog(ERROR, \"negative substring length not allowed\");\n> \n> ! \t\t\t/* \n> ! \t\t\t * A zero or negative value for the end position can happen if the start\n> ! \t\t\t * was negative or one. SQL99 says to return a zero-length string.\n> ! \t\t\t */\n> ! \t\t\tif (E < 1)\n> ! \t\t\t\treturn PG_STR_GET_TEXT(\"\");\n> ! \n> ! \t\t\tL1 = E - S1;\n> ! \t\t}\n> ! \n> ! \t\t/* \n> ! \t\t * If the start position is past the end of the string,\n> ! \t\t * SQL99 says to return a zero-length string -- \n> ! \t\t * PG_GETARG_TEXT_P_SLICE() will do that for us.\n> ! \t\t * Convert to zero-based starting position\n> ! \t\t */\n> ! \t\treturn DatumGetTextPSlice(str, S1 - 1, L1);\n> ! \t}\n> ! \telse if (eml > 1)\n> \t{\n> ! \t\t/*\n> ! \t\t * When encoding max length is > 1, we can't get LC without\n> ! \t\t * detoasting, so we'll grab a conservatively large slice\n> ! \t\t * now and go back later to do the right thing\n> ! \t\t */\n> ! \t\tint32\t\tslice_start;\n> ! \t\tint32\t\tslice_size;\n> ! 
\t\tint32\t\tslice_strlen;\n> ! \t\ttext\t\t*slice;\n> ! \t\tint32\t\tE1;\n> ! \t\tint32\t\ti;\n> ! \t\tchar\t *p;\n> ! \t\tchar\t *s;\n> ! \t\ttext\t *ret;\n> ! \n> ! \t\t/*\n> ! \t\t * if S is past the end of the string, the tuple toaster\n> ! \t\t * will return a zero-length string to us\n> ! \t\t */\n> ! \t\tS1 = Max(S, 1);\n> ! \n> ! \t\t/*\n> ! \t\t * We need to start at position zero because there is no\n> ! \t\t * way to know in advance which byte offset corresponds to \n> ! \t\t * the supplied start position.\n> ! \t\t */\n> ! \t\tslice_start = 0;\n> ! \n> ! \t\tif (length_not_specified)\t/* special case - get length to end of string */\n> ! \t\t\tslice_size = L1 = -1;\n> \t\telse\n> ! \t\t{\n> ! \t\t\tint\tE = S + length;\n> ! \n> ! \t\t\t/*\n> ! \t\t\t * A negative value for L is the only way for the end position\n> ! \t\t\t * to be before the start. SQL99 says to throw an error.\n> ! \t\t\t */\n> ! \t\t\tif (E < S)\n> ! \t\t\t\telog(ERROR, \"negative substring length not allowed\");\n> \n> ! \t\t\t/* \n> ! \t\t\t * A zero or negative value for the end position can happen if the start\n> ! \t\t\t * was negative or one. SQL99 says to return a zero-length string.\n> ! \t\t\t */\n> ! \t\t\tif (E < 1)\n> ! \t\t\t\treturn PG_STR_GET_TEXT(\"\");\n> \n> ! \t\t\t/*\n> ! \t\t\t * if E is past the end of the string, the tuple toaster\n> ! \t\t\t * will truncate the length for us\n> ! \t\t\t */\n> ! \t\t\tL1 = E - S1;\n> ! \n> ! \t\t\t/*\n> ! \t\t\t * Total slice size in bytes can't be any longer than the start\n> ! \t\t\t * position plus substring length times the encoding max length.\n> ! \t\t\t */\n> ! \t\t\tslice_size = (S1 + L1) * eml;\n> ! \t\t}\n> ! \t\tslice = DatumGetTextPSlice(str, slice_start, slice_size);\n> \n> ! \t\t/* see if we got back an empty string */\n> ! \t\tif ((VARSIZE(slice) - VARHDRSZ) == 0)\n> ! \t\t\treturn PG_STR_GET_TEXT(\"\");\n> \n> ! \t\t/* Now we can get the actual length of the slice in MB characters */\n> ! 
\t\tslice_strlen = pg_mbstrlen_with_len (VARDATA(slice), VARSIZE(slice) - VARHDRSZ);\n> \n> ! \t\t/* Check that the start position wasn't > slice_strlen. If so,\n> ! \t\t * SQL99 says to return a zero-length string.\n> ! \t\t */\n> ! \t\tif (S1 > slice_strlen)\n> ! \t\t\treturn PG_STR_GET_TEXT(\"\");\n> ! \n> ! \t\t/*\n> ! \t\t * Adjust L1 and E1 now that we know the slice string length.\n> ! \t\t * Again remember that S1 is one based, and slice_start is zero based.\n> ! \t\t */\n> ! \t\tif (L1 > -1)\n> ! \t\t\tE1 = Min(S1 + L1 , slice_start + 1 + slice_strlen);\n> ! \t\telse\n> ! \t\t\tE1 = slice_start + 1 + slice_strlen;\n> ! \n> ! \t\t/*\n> ! \t\t * Find the start position in the slice;\n> ! \t\t * remember S1 is not zero based\n> ! \t\t */\n> ! \t\tp = VARDATA(slice);\n> ! \t\tfor (i = 0; i < S1 - 1; i++)\n> ! \t\t\tp += pg_mblen(p);\n> ! \n> ! \t\t/* hang onto a pointer to our start position */\n> ! \t\ts = p;\n> ! \n> ! \t\t/*\n> ! \t\t * Count the actual bytes used by the substring of \n> ! \t\t * the requested length.\n> ! \t\t */\n> ! \t\tfor (i = S1; i < E1; i++)\n> ! \t\t\tp += pg_mblen(p);\n> ! \n> ! \t\tret = (text *) palloc(VARHDRSZ + (p - s));\n> ! \t\tVARATT_SIZEP(ret) = VARHDRSZ + (p - s);\n> ! \t\tmemcpy(VARDATA(ret), s, (p - s));\n> ! \n> ! \t\treturn ret;\n> ! \t}\n> ! \telse\n> ! \t\telog(ERROR, \"Invalid backend encoding; encoding max length \"\n> ! \t\t\t\t\t\"is less than one.\");\n> ! \n> ! \t/* not reached: suppress compiler warning */\n> ! \treturn PG_STR_GET_TEXT(\"\");\n> }\n> \n> /*\n> ***************\n> *** 481,536 ****\n> Datum\n> textpos(PG_FUNCTION_ARGS)\n> {\n> ! \ttext\t *t1 = PG_GETARG_TEXT_P(0);\n> ! \ttext\t *t2 = PG_GETARG_TEXT_P(1);\n> ! \tint\t\t\tpos;\n> ! \tint\t\t\tpx,\n> ! \t\t\t\tp;\n> ! \tint\t\t\tlen1,\n> \t\t\t\tlen2;\n> - \tpg_wchar *p1,\n> - \t\t\t *p2;\n> \n> ! #ifdef MULTIBYTE\n> ! \tpg_wchar *ps1,\n> ! \t\t\t *ps2;\n> ! 
#endif\n> \n> \tif (VARSIZE(t2) <= VARHDRSZ)\n> \t\tPG_RETURN_INT32(1);\t\t/* result for empty pattern */\n> \n> \tlen1 = (VARSIZE(t1) - VARHDRSZ);\n> \tlen2 = (VARSIZE(t2) - VARHDRSZ);\n> ! #ifdef MULTIBYTE\n> ! \tps1 = p1 = (pg_wchar *) palloc((len1 + 1) * sizeof(pg_wchar));\n> ! \t(void) pg_mb2wchar_with_len((unsigned char *) VARDATA(t1), p1, len1);\n> ! \tlen1 = pg_wchar_strlen(p1);\n> ! \tps2 = p2 = (pg_wchar *) palloc((len2 + 1) * sizeof(pg_wchar));\n> ! \t(void) pg_mb2wchar_with_len((unsigned char *) VARDATA(t2), p2, len2);\n> ! \tlen2 = pg_wchar_strlen(p2);\n> ! #else\n> ! \tp1 = VARDATA(t1);\n> ! \tp2 = VARDATA(t2);\n> ! #endif\n> ! \tpos = 0;\n> \tpx = (len1 - len2);\n> ! \tfor (p = 0; p <= px; p++)\n> \t{\n> ! #ifdef MULTIBYTE\n> ! \t\tif ((*p2 == *p1) && (pg_wchar_strncmp(p1, p2, len2) == 0))\n> ! #else\n> ! \t\tif ((*p2 == *p1) && (strncmp(p1, p2, len2) == 0))\n> ! #endif\n> \t\t{\n> ! \t\t\tpos = p + 1;\n> ! \t\t\tbreak;\n> ! \t\t};\n> ! \t\tp1++;\n> ! \t};\n> ! #ifdef MULTIBYTE\n> ! \tpfree(ps1);\n> ! \tpfree(ps2);\n> ! #endif\n> \tPG_RETURN_INT32(pos);\n> }\n> \n> --- 635,729 ----\n> Datum\n> textpos(PG_FUNCTION_ARGS)\n> {\n> ! \tPG_RETURN_INT32(text_position(PG_GETARG_DATUM(0), PG_GETARG_DATUM(1), 1));\n> ! }\n> ! \n> ! /*\n> ! * text_position -\n> ! *\tDoes the real work for textpos()\n> ! *\tThis is broken out so it can be called directly by other string processing\n> ! *\tfunctions.\n> ! */\n> ! static int32\n> ! text_position(Datum str, Datum search_str, int matchnum)\n> ! {\n> ! \tint\t\t\teml = pg_database_encoding_max_length();\n> ! \ttext\t *t1 = DatumGetTextP(str);\n> ! \ttext\t *t2 = DatumGetTextP(search_str);\n> ! \tint\t\t\tmatch = 0,\n> ! \t\t\t\tpos = 0,\n> ! \t\t\t\tp = 0,\n> ! \t\t\t\tpx,\n> ! \t\t\t\tlen1,\n> \t\t\t\tlen2;\n> \n> ! \tif(matchnum == 0)\n> ! 
\t\treturn 0;\t\t/* result for 0th match */\n> \n> \tif (VARSIZE(t2) <= VARHDRSZ)\n> \t\tPG_RETURN_INT32(1);\t\t/* result for empty pattern */\n> \n> \tlen1 = (VARSIZE(t1) - VARHDRSZ);\n> \tlen2 = (VARSIZE(t2) - VARHDRSZ);\n> ! \n> ! \t/* no use in searching str past point where search_str will fit */\n> \tpx = (len1 - len2);\n> ! \n> ! \tif (eml == 1)\t/* simple case - single byte encoding */\n> \t{\n> ! \t\tchar *p1,\n> ! \t\t\t *p2;\n> ! \n> ! \t\tp1 = VARDATA(t1);\n> ! \t\tp2 = VARDATA(t2);\n> ! \n> ! \t\tfor (p = 0; p <= px; p++)\n> \t\t{\n> ! \t\t\tif ((*p2 == *p1) && (strncmp(p1, p2, len2) == 0))\n> ! \t\t\t{\n> ! \t\t\t\tif (++match == matchnum)\n> ! \t\t\t\t{\n> ! \t\t\t\t\tpos = p + 1;\n> ! \t\t\t\t\tbreak;\n> ! \t\t\t\t}\n> ! \t\t\t}\n> ! \t\t\tp1++;\n> ! \t\t}\n> ! \t}\n> ! \telse if (eml > 1)\t/* not as simple - multibyte encoding */\n> ! \t{\n> ! \t\tpg_wchar *p1,\n> ! \t\t\t\t *p2,\n> ! \t\t\t\t *ps1,\n> ! \t\t\t\t *ps2;\n> ! \n> ! \t\tps1 = p1 = (pg_wchar *) palloc((len1 + 1) * sizeof(pg_wchar));\n> ! \t\t(void) pg_mb2wchar_with_len((unsigned char *) VARDATA(t1), p1, len1);\n> ! \t\tlen1 = pg_wchar_strlen(p1);\n> ! \t\tps2 = p2 = (pg_wchar *) palloc((len2 + 1) * sizeof(pg_wchar));\n> ! \t\t(void) pg_mb2wchar_with_len((unsigned char *) VARDATA(t2), p2, len2);\n> ! \t\tlen2 = pg_wchar_strlen(p2);\n> ! \n> ! \t\tfor (p = 0; p <= px; p++)\n> ! \t\t{\n> ! \t\t\tif ((*p2 == *p1) && (pg_wchar_strncmp(p1, p2, len2) == 0))\n> ! \t\t\t{\n> ! \t\t\t\tif (++match == matchnum)\n> ! \t\t\t\t{\n> ! \t\t\t\t\tpos = p + 1;\n> ! \t\t\t\t\tbreak;\n> ! \t\t\t\t}\n> ! \t\t\t}\n> ! \t\t\tp1++;\n> ! \t\t}\n> ! \n> ! \t\tpfree(ps1);\n> ! \t\tpfree(ps2);\n> ! \t}\n> ! \telse\n> ! \t\telog(ERROR, \"Invalid backend encoding; encoding max length \"\n> ! \t\t\t\t\t\"is less than one.\");\n> ! \n> \tPG_RETURN_INT32(pos);\n> }\n> \n> ***************\n> *** 758,766 ****\n> Datum\n> byteaoctetlen(PG_FUNCTION_ARGS)\n> {\n> ! \tbytea\t *v = PG_GETARG_BYTEA_P(0);\n> ! \n> ! 
\tPG_RETURN_INT32(VARSIZE(v) - VARHDRSZ);\n> }\n> \n> /*\n> --- 951,957 ----\n> Datum\n> byteaoctetlen(PG_FUNCTION_ARGS)\n> {\n> ! \tPG_RETURN_INT32(toast_raw_datum_size(PG_GETARG_DATUM(0)) - VARHDRSZ);\n> }\n> \n> /*\n> ***************\n> *** 805,810 ****\n> --- 996,1003 ----\n> \tPG_RETURN_BYTEA_P(result);\n> }\n> \n> + #define PG_STR_GET_BYTEA(str_) \\\n> + DatumGetByteaP(DirectFunctionCall1(byteain, CStringGetDatum(str_)))\n> /*\n> * bytea_substr()\n> * Return a substring starting at the specified position.\n> ***************\n> *** 813,845 ****\n> * Input:\n> *\t- string\n> *\t- starting position (is one-based)\n> ! *\t- string length\n> *\n> * If the starting position is zero or less, then return from the start of the string\n> * adjusting the length to be consistent with the \"negative start\" per SQL92.\n> ! * If the length is less than zero, return the remaining string.\n> ! *\n> */\n> Datum\n> bytea_substr(PG_FUNCTION_ARGS)\n> {\n> ! \tint32\t\tm = PG_GETARG_INT32(1);\n> ! \tint32\t\tn = PG_GETARG_INT32(2);\n> \n> ! \t/*\n> ! \t * starting position before the start of the string? then offset into\n> ! \t * the string per SQL92 spec...\n> ! \t */\n> ! \tif (m < 1)\n> \t{\n> ! \t\tn += (m - 1);\n> ! \t\tm = 1;\n> \t}\n> \n> ! \t/* m will now become a zero-based starting position */\n> ! \tm--;\n> \n> ! \tPG_RETURN_BYTEA_P(PG_GETARG_BYTEA_P_SLICE (0, m, n));\n> }\n> \n> /*\n> --- 1006,1076 ----\n> * Input:\n> *\t- string\n> *\t- starting position (is one-based)\n> ! *\t- string length (optional)\n> *\n> * If the starting position is zero or less, then return from the start of the string\n> * adjusting the length to be consistent with the \"negative start\" per SQL92.\n> ! * If the length is less than zero, an ERROR is thrown. If no third argument\n> ! * (length) is provided, the length to the end of the string is assumed.\n> */\n> Datum\n> bytea_substr(PG_FUNCTION_ARGS)\n> {\n> ! \tint\t\tS = PG_GETARG_INT32(1);\t/* start position */\n> ! 
\tint\t\tS1;\t\t\t\t\t\t/* adjusted start position */\n> ! \tint\t\tL1;\t\t\t\t\t\t/* adjusted substring length */\n> \n> ! \tS1 = Max(S, 1);\n> ! \n> ! \tif (fcinfo->nargs == 2)\n> ! \t{\n> ! \t\t/*\n> ! \t\t * Not passed a length - PG_GETARG_BYTEA_P_SLICE()\n> ! \t\t * grabs everything to the end of the string if we pass it\n> ! \t\t * a negative value for length.\n> ! \t\t */\n> ! \t\tL1 = -1;\n> ! \t}\n> ! \telse\n> \t{\n> ! \t\t/* end position */\n> ! \t\tint\tE = S + PG_GETARG_INT32(2);\n> ! \n> ! \t\t/*\n> ! \t\t * A negative value for L is the only way for the end position\n> ! \t\t * to be before the start. SQL99 says to throw an error.\n> ! \t\t */\n> ! \t\tif (E < S)\n> ! \t\t\telog(ERROR, \"negative substring length not allowed\");\n> ! \n> ! \t\t/* \n> ! \t\t * A zero or negative value for the end position can happen if the start\n> ! \t\t * was negative or one. SQL99 says to return a zero-length string.\n> ! \t\t */\n> ! \t\tif (E < 1)\n> ! \t\t\tPG_RETURN_BYTEA_P(PG_STR_GET_BYTEA(\"\"));\n> ! \n> ! \t\tL1 = E - S1;\n> \t}\n> \n> ! \t/* \n> ! \t * If the start position is past the end of the string,\n> ! \t * SQL99 says to return a zero-length string -- \n> ! \t * PG_GETARG_TEXT_P_SLICE() will do that for us.\n> ! \t * Convert to zero-based starting position\n> ! \t */\n> ! \tPG_RETURN_BYTEA_P(PG_GETARG_BYTEA_P_SLICE (0, S1 - 1, L1));\n> ! }\n> \n> ! /*\n> ! * bytea_substr_no_len -\n> ! *\t Wrapper to avoid opr_sanity failure due to\n> ! *\t one function accepting a different number of args.\n> ! */\n> ! Datum\n> ! bytea_substr_no_len(PG_FUNCTION_ARGS)\n> ! {\n> ! 
\treturn bytea_substr(fcinfo);\n> }\n> \n> /*\n> ***************\n> *** 1422,1424 ****\n> --- 1653,1834 ----\n> \n> \tPG_RETURN_INT32(cmp);\n> }\n> + \n> + /*\n> + * replace_text\n> + * replace all occurences of 'old_sub_str' in 'orig_str'\n> + * with 'new_sub_str' to form 'new_str'\n> + * \n> + * returns 'orig_str' if 'old_sub_str' == '' or 'orig_str' == ''\n> + * otherwise returns 'new_str' \n> + */\n> + Datum\n> + replace_text(PG_FUNCTION_ARGS)\n> + {\n> + \ttext\t\t*left_text;\n> + \ttext\t\t*right_text;\n> + \ttext\t\t*buf_text;\n> + \ttext\t\t*ret_text;\n> + \tint\t\t\tcurr_posn;\n> + \ttext\t\t*src_text = PG_GETARG_TEXT_P(0);\n> + \tint\t\t\tsrc_text_len = TEXTLEN(src_text);\n> + \ttext\t\t*from_sub_text = PG_GETARG_TEXT_P(1);\n> + \tint\t\t\tfrom_sub_text_len = TEXTLEN(from_sub_text);\n> + \ttext\t\t*to_sub_text = PG_GETARG_TEXT_P(2);\n> + \tchar\t\t*to_sub_str = PG_TEXT_GET_STR(to_sub_text);\n> + \tStringInfo\tstr = makeStringInfo();\n> + \n> + \tif (src_text_len == 0 || from_sub_text_len == 0)\n> + \t\tPG_RETURN_TEXT_P(src_text);\n> + \n> + \tbuf_text = TEXTDUP(src_text);\n> + \tcurr_posn = TEXTPOS(buf_text, from_sub_text);\n> + \n> + \twhile (curr_posn > 0)\n> + \t{\n> + \t\tleft_text = LEFT(buf_text, from_sub_text);\n> + \t\tright_text = RIGHT(buf_text, from_sub_text, from_sub_text_len);\n> + \n> + \t\tappendStringInfo(str, PG_TEXT_GET_STR(left_text));\n> + \t\tappendStringInfo(str, to_sub_str);\n> + \n> + \t\tpfree(buf_text);\n> + \t\tpfree(left_text);\n> + \t\tbuf_text = right_text;\n> + \t\tcurr_posn = TEXTPOS(buf_text, from_sub_text);\n> + \t}\n> + \n> + \tappendStringInfo(str, PG_TEXT_GET_STR(buf_text));\n> + \tpfree(buf_text);\n> + \n> + \tret_text = PG_STR_GET_TEXT(str->data);\n> + \tpfree(str->data);\n> + \tpfree(str);\n> + \n> + \tPG_RETURN_TEXT_P(ret_text);\n> + }\n> + \n> + /*\n> + * split_text\n> + * parse input string\n> + * return ord item (1 based)\n> + * based on provided field separator\n> + */\n> + Datum\n> + 
split_text(PG_FUNCTION_ARGS)\n> + {\n> + \ttext\t *inputstring = PG_GETARG_TEXT_P(0);\n> + \tint\t\t\tinputstring_len = TEXTLEN(inputstring);\n> + \ttext\t *fldsep = PG_GETARG_TEXT_P(1);\n> + \tint\t\t\tfldsep_len = TEXTLEN(fldsep);\n> + \tint\t\t\tfldnum = PG_GETARG_INT32(2);\n> + \tint\t\t\tstart_posn = 0;\n> + \tint\t\t\tend_posn = 0;\n> + \ttext\t\t*result_text;\n> + \n> + \t/* return empty string for empty input string */\n> + \tif (inputstring_len < 1)\n> + \t\tPG_RETURN_TEXT_P(PG_STR_GET_TEXT(\"\"));\n> + \n> + \t/* empty field separator */\n> + \tif (fldsep_len < 1)\n> + \t{\n> + \t\tif (fldnum == 1)\t/* first field - just return the input string */\n> + \t\t\tPG_RETURN_TEXT_P(inputstring);\n> + \t\telse\t\t\t\t/* otherwise return an empty string */\n> + \t\t\tPG_RETURN_TEXT_P(PG_STR_GET_TEXT(\"\"));\n> + \t}\n> + \n> + \t/* field number is 1 based */\n> + \tif (fldnum < 1)\n> + \t\telog(ERROR, \"field position must be > 0\");\n> + \n> + \tstart_posn = text_position(PointerGetDatum(inputstring),\n> + \t\t\t\t\t\t\t\tPointerGetDatum(fldsep),\n> + \t\t\t\t\t\t\t\tfldnum - 1);\n> + \tend_posn = text_position(PointerGetDatum(inputstring),\n> + \t\t\t\t\t\t\t\tPointerGetDatum(fldsep),\n> + \t\t\t\t\t\t\t\tfldnum);\n> + \n> + \tif ((start_posn == 0) && (end_posn == 0))\t/* fldsep not found */\n> + \t{\n> + \t\tif (fldnum == 1)\t/* first field - just return the input string */\n> + \t\t\tPG_RETURN_TEXT_P(inputstring);\n> + \t\telse\t\t\t\t/* otherwise return an empty string */\n> + \t\t\tPG_RETURN_TEXT_P(PG_STR_GET_TEXT(\"\"));\n> + \t}\n> + \telse if ((start_posn != 0) && (end_posn == 0))\n> + \t{\n> + \t\t/* last field requested */\n> + \t\tresult_text = text_substring(PointerGetDatum(inputstring), start_posn + fldsep_len, -1, true);\n> + \t\tPG_RETURN_TEXT_P(result_text);\n> + \t}\n> + \telse if ((start_posn == 0) && (end_posn != 0))\n> + \t{\n> + \t\t/* first field requested */\n> + \t\tresult_text = LEFT(inputstring, fldsep);\n> + 
\t\tPG_RETURN_TEXT_P(result_text);\n> + \t}\n> + \telse\n> + \t{\n> + \t\t/* prior to last field requested */\n> + \t\tresult_text = text_substring(PointerGetDatum(inputstring), start_posn + fldsep_len, end_posn - start_posn - fldsep_len, false);\n> + \t\tPG_RETURN_TEXT_P(result_text);\n> + \t}\n> + }\n> + \n> + #define HEXBASE 16\n> + /*\n> + * Convert a int32 to a string containing a base 16 (hex) representation of\n> + * the number.\n> + */\n> + Datum\n> + to_hex32(PG_FUNCTION_ARGS)\n> + {\n> + \tstatic char\t\tdigits[] = \"0123456789abcdef\";\n> + \tchar\t\t\tbuf[32];\t/* bigger than needed, but reasonable */\n> + \tchar\t\t *ptr,\n> + \t\t\t\t *end;\n> + \ttext\t\t *result_text;\n> + \tint32\t\t\tvalue = PG_GETARG_INT32(0);\n> + \n> + \tend = ptr = buf + sizeof(buf) - 1;\n> + \t*ptr = '\\0';\n> + \n> + \tdo\n> + \t{\n> + \t\t*--ptr = digits[value % HEXBASE];\n> + \t\tvalue /= HEXBASE;\n> + \t} while (ptr > buf && value);\n> + \n> + \tresult_text = PG_STR_GET_TEXT(ptr);\n> + \tPG_RETURN_TEXT_P(result_text);\n> + }\n> + \n> + /*\n> + * Convert a int64 to a string containing a base 16 (hex) representation of\n> + * the number.\n> + */\n> + Datum\n> + to_hex64(PG_FUNCTION_ARGS)\n> + {\n> + \tstatic char\t\tdigits[] = \"0123456789abcdef\";\n> + \tchar\t\t\tbuf[32];\t/* bigger than needed, but reasonable */\n> + \tchar\t\t\t*ptr,\n> + \t\t\t\t\t*end;\n> + \ttext\t\t\t*result_text;\n> + \tint64\t\t\tvalue = PG_GETARG_INT64(0);\n> + \n> + \tend = ptr = buf + sizeof(buf) - 1;\n> + \t*ptr = '\\0';\n> + \n> + \tdo\n> + \t{\n> + \t\t*--ptr = digits[value % HEXBASE];\n> + \t\tvalue /= HEXBASE;\n> + \t} while (ptr > buf && value);\n> + \n> + \tresult_text = PG_STR_GET_TEXT(ptr);\n> + \tPG_RETURN_TEXT_P(result_text);\n> + }\n> + \n> Index: src/include/catalog/pg_proc.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/include/catalog/pg_proc.h,v\n> retrieving revision 1.254\n> diff -c -r1.254 pg_proc.h\n> *** 
src/include/catalog/pg_proc.h\t15 Aug 2002 02:51:27 -0000\t1.254\n> --- src/include/catalog/pg_proc.h\t16 Aug 2002 18:53:13 -0000\n> ***************\n> *** 2121,2127 ****\n> DESCR(\"remove initial characters from string\");\n> DATA(insert OID = 882 ( rtrim\t\t PGNSP PGUID 14 f f t f i 1 25 \"25\" \"select rtrim($1, \\' \\')\" - _null_ ));\n> DESCR(\"remove trailing characters from string\");\n> ! DATA(insert OID = 883 ( substr\t PGNSP PGUID 14 f f t f i 2 25 \"25 23\"\t\"select substr($1, $2, -1)\" - _null_ ));\n> DESCR(\"return portion of string\");\n> DATA(insert OID = 884 ( btrim\t\t PGNSP PGUID 12 f f t f i 2 25 \"25 25\"\tbtrim - _null_ ));\n> DESCR(\"trim both ends of string\");\n> --- 2121,2127 ----\n> DESCR(\"remove initial characters from string\");\n> DATA(insert OID = 882 ( rtrim\t\t PGNSP PGUID 14 f f t f i 1 25 \"25\" \"select rtrim($1, \\' \\')\" - _null_ ));\n> DESCR(\"remove trailing characters from string\");\n> ! DATA(insert OID = 883 ( substr\t PGNSP PGUID 12 f f t f i 2 25 \"25 23\"\ttext_substr_no_len - _null_ ));\n> DESCR(\"return portion of string\");\n> DATA(insert OID = 884 ( btrim\t\t PGNSP PGUID 12 f f t f i 2 25 \"25 25\"\tbtrim - _null_ ));\n> DESCR(\"trim both ends of string\");\n> ***************\n> *** 2130,2137 ****\n> \n> DATA(insert OID = 936 ( substring PGNSP PGUID 12 f f t f i 3 25 \"25 23 23\" text_substr - _null_ ));\n> DESCR(\"return portion of string\");\n> ! DATA(insert OID = 937 ( substring PGNSP PGUID 14 f f t f i 2 25 \"25 23\"\t\"select substring($1, $2, -1)\" - _null_ ));\n> DESCR(\"return portion of string\");\n> \n> /* for multi-byte support */\n> \n> --- 2130,2145 ----\n> \n> DATA(insert OID = 936 ( substring PGNSP PGUID 12 f f t f i 3 25 \"25 23 23\" text_substr - _null_ ));\n> DESCR(\"return portion of string\");\n> ! 
DATA(insert OID = 937 ( substring PGNSP PGUID 12 f f t f i 2 25 \"25 23\"\ttext_substr_no_len - _null_ ));\n> DESCR(\"return portion of string\");\n> + DATA(insert OID = 2087 ( replace PGNSP PGUID 12 f f t f i 3 25 \"25 25 25\" replace_text - _null_ ));\n> + DESCR(\"replace all occurrences of old_substr with new_substr in string\");\n> + DATA(insert OID = 2088 ( split PGNSP PGUID 12 f f t f i 3 25 \"25 25 23\" split_text - _null_ ));\n> + DESCR(\"split string by field_sep and return field_num\");\n> + DATA(insert OID = 2089 ( to_hex PGNSP PGUID 12 f f t f i 1 25 \"23\" to_hex32 - _null_ ));\n> + DESCR(\"convert int32 number to hex\");\n> + DATA(insert OID = 2090 ( to_hex PGNSP PGUID 12 f f t f i 1 25 \"20\" to_hex64 - _null_ ));\n> + DESCR(\"convert int64 number to hex\");\n> \n> /* for multi-byte support */\n> \n> ***************\n> *** 2778,2784 ****\n> DESCR(\"concatenate\");\n> DATA(insert OID = 2012 ( substring\t\t PGNSP PGUID 12 f f t f i 3 17 \"17 23 23\" bytea_substr - _null_ ));\n> DESCR(\"return portion of string\");\n> ! DATA(insert OID = 2013 ( substring\t\t PGNSP PGUID 14 f f t f i 2 17 \"17 23\"\t\"select substring($1, $2, -1)\" - _null_ ));\n> DESCR(\"return portion of string\");\n> DATA(insert OID = 2014 ( position\t\t PGNSP PGUID 12 f f t f i 2 23 \"17 17\"\tbyteapos - _null_ ));\n> DESCR(\"return position of substring\");\n> --- 2786,2796 ----\n> DESCR(\"concatenate\");\n> DATA(insert OID = 2012 ( substring\t\t PGNSP PGUID 12 f f t f i 3 17 \"17 23 23\" bytea_substr - _null_ ));\n> DESCR(\"return portion of string\");\n> ! DATA(insert OID = 2013 ( substring\t\t PGNSP PGUID 12 f f t f i 2 17 \"17 23\"\tbytea_substr_no_len - _null_ ));\n> ! DESCR(\"return portion of string\");\n> ! DATA(insert OID = 2085 ( substr\t\t PGNSP PGUID 12 f f t f i 3 17 \"17 23 23\" bytea_substr - _null_ ));\n> ! DESCR(\"return portion of string\");\n> ! 
DATA(insert OID = 2086 ( substr\t\t PGNSP PGUID 12 f f t f i 2 17 \"17 23\"\tbytea_substr_no_len - _null_ ));\n> DESCR(\"return portion of string\");\n> DATA(insert OID = 2014 ( position\t\t PGNSP PGUID 12 f f t f i 2 23 \"17 17\"\tbyteapos - _null_ ));\n> DESCR(\"return position of substring\");\n> Index: src/include/utils/builtins.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/include/utils/builtins.h,v\n> retrieving revision 1.191\n> diff -c -r1.191 builtins.h\n> *** src/include/utils/builtins.h\t15 Aug 2002 02:51:27 -0000\t1.191\n> --- src/include/utils/builtins.h\t16 Aug 2002 18:53:13 -0000\n> ***************\n> *** 447,458 ****\n> --- 447,463 ----\n> extern Datum textoctetlen(PG_FUNCTION_ARGS);\n> extern Datum textpos(PG_FUNCTION_ARGS);\n> extern Datum text_substr(PG_FUNCTION_ARGS);\n> + extern Datum text_substr_no_len(PG_FUNCTION_ARGS);\n> extern Datum name_text(PG_FUNCTION_ARGS);\n> extern Datum text_name(PG_FUNCTION_ARGS);\n> extern int\tvarstr_cmp(char *arg1, int len1, char *arg2, int len2);\n> extern List *textToQualifiedNameList(text *textval, const char *caller);\n> extern bool SplitIdentifierString(char *rawstring, char separator,\n> \t\t\t\t\t\t\t\t List **namelist);\n> + extern Datum replace_text(PG_FUNCTION_ARGS);\n> + extern Datum split_text(PG_FUNCTION_ARGS);\n> + extern Datum to_hex32(PG_FUNCTION_ARGS);\n> + extern Datum to_hex64(PG_FUNCTION_ARGS);\n> \n> extern Datum unknownin(PG_FUNCTION_ARGS);\n> extern Datum unknownout(PG_FUNCTION_ARGS);\n> ***************\n> *** 476,481 ****\n> --- 481,487 ----\n> extern Datum byteacat(PG_FUNCTION_ARGS);\n> extern Datum byteapos(PG_FUNCTION_ARGS);\n> extern Datum bytea_substr(PG_FUNCTION_ARGS);\n> + extern Datum bytea_substr_no_len(PG_FUNCTION_ARGS);\n> \n> /* version.c */\n> extern Datum pgsql_version(PG_FUNCTION_ARGS);\n> Index: src/test/regress/expected/strings.out\n> 
===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/test/regress/expected/strings.out,v\n> retrieving revision 1.12\n> diff -c -r1.12 strings.out\n> *** src/test/regress/expected/strings.out\t11 Jun 2002 15:41:38 -0000\t1.12\n> --- src/test/regress/expected/strings.out\t16 Aug 2002 18:53:13 -0000\n> ***************\n> *** 573,575 ****\n> --- 573,738 ----\n> text and varchar\n> (1 row)\n> \n> + --\n> + -- test substr with toasted text values\n> + --\n> + CREATE TABLE toasttest(f1 text);\n> + insert into toasttest values(repeat('1234567890',10000));\n> + insert into toasttest values(repeat('1234567890',10000));\n> + -- If the starting position is zero or less, then return from the start of the string\n> + -- adjusting the length to be consistent with the \"negative start\" per SQL92.\n> + SELECT substr(f1, -1, 5) from toasttest;\n> + substr \n> + --------\n> + 123\n> + 123\n> + (2 rows)\n> + \n> + -- If the length is less than zero, an ERROR is thrown.\n> + SELECT substr(f1, 5, -1) from toasttest;\n> + ERROR: negative substring length not allowed\n> + -- If no third argument (length) is provided, the length to the end of the\n> + -- string is assumed.\n> + SELECT substr(f1, 99995) from toasttest;\n> + substr \n> + --------\n> + 567890\n> + 567890\n> + (2 rows)\n> + \n> + -- If start plus length is > string length, the result is truncated to\n> + -- string length\n> + SELECT substr(f1, 99995, 10) from toasttest;\n> + substr \n> + --------\n> + 567890\n> + 567890\n> + (2 rows)\n> + \n> + DROP TABLE toasttest;\n> + --\n> + -- test substr with toasted bytea values\n> + --\n> + CREATE TABLE toasttest(f1 bytea);\n> + insert into toasttest values(decode(repeat('1234567890',10000),'escape'));\n> + insert into toasttest values(decode(repeat('1234567890',10000),'escape'));\n> + -- If the starting position is zero or less, then return from the start of the string\n> + -- adjusting the length to be consistent with the 
\"negative start\" per SQL92.\n> + SELECT substr(f1, -1, 5) from toasttest;\n> + substr \n> + --------\n> + 123\n> + 123\n> + (2 rows)\n> + \n> + -- If the length is less than zero, an ERROR is thrown.\n> + SELECT substr(f1, 5, -1) from toasttest;\n> + ERROR: negative substring length not allowed\n> + -- If no third argument (length) is provided, the length to the end of the\n> + -- string is assumed.\n> + SELECT substr(f1, 99995) from toasttest;\n> + substr \n> + --------\n> + 567890\n> + 567890\n> + (2 rows)\n> + \n> + -- If start plus length is > string length, the result is truncated to\n> + -- string length\n> + SELECT substr(f1, 99995, 10) from toasttest;\n> + substr \n> + --------\n> + 567890\n> + 567890\n> + (2 rows)\n> + \n> + DROP TABLE toasttest;\n> + --\n> + -- test length\n> + --\n> + SELECT length('abcdef') AS \"length_6\";\n> + length_6 \n> + ----------\n> + 6\n> + (1 row)\n> + \n> + --\n> + -- test strpos\n> + --\n> + SELECT strpos('abcdef', 'cd') AS \"pos_3\";\n> + pos_3 \n> + -------\n> + 3\n> + (1 row)\n> + \n> + SELECT strpos('abcdef', 'xy') AS \"pos_0\";\n> + pos_0 \n> + -------\n> + 0\n> + (1 row)\n> + \n> + --\n> + -- test replace\n> + --\n> + SELECT replace('abcdef', 'de', '45') AS \"abc45f\";\n> + abc45f \n> + --------\n> + abc45f\n> + (1 row)\n> + \n> + SELECT replace('yabadabadoo', 'ba', '123') AS \"ya123da123doo\";\n> + ya123da123doo \n> + ---------------\n> + ya123da123doo\n> + (1 row)\n> + \n> + SELECT replace('yabadoo', 'bad', '') AS \"yaoo\";\n> + yaoo \n> + ------\n> + yaoo\n> + (1 row)\n> + \n> + --\n> + -- test split\n> + --\n> + select split('joeuser@mydatabase','@',0) AS \"an error\";\n> + ERROR: field position must be > 0\n> + select split('joeuser@mydatabase','@',1) AS \"joeuser\";\n> + joeuser \n> + ---------\n> + joeuser\n> + (1 row)\n> + \n> + select split('joeuser@mydatabase','@',2) AS \"mydatabase\";\n> + mydatabase \n> + ------------\n> + mydatabase\n> + (1 row)\n> + \n> + select split('joeuser@mydatabase','@',3) AS 
\"empty string\";\n> + empty string \n> + --------------\n> + \n> + (1 row)\n> + \n> + select split('@joeuser@mydatabase@','@',2) AS \"joeuser\";\n> + joeuser \n> + ---------\n> + joeuser\n> + (1 row)\n> + \n> + --\n> + -- test to_hex\n> + --\n> + select to_hex(256*256*256 - 1) AS \"ffffff\";\n> + ffffff \n> + --------\n> + ffffff\n> + (1 row)\n> + \n> + select to_hex(256::bigint*256::bigint*256::bigint*256::bigint - 1) AS \"ffffffff\";\n> + ffffffff \n> + ----------\n> + ffffffff\n> + (1 row)\n> + \n> Index: src/test/regress/sql/strings.sql\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/test/regress/sql/strings.sql,v\n> retrieving revision 1.8\n> diff -c -r1.8 strings.sql\n> *** src/test/regress/sql/strings.sql\t11 Jun 2002 15:41:38 -0000\t1.8\n> --- src/test/regress/sql/strings.sql\t16 Aug 2002 18:53:13 -0000\n> ***************\n> *** 197,199 ****\n> --- 197,292 ----\n> SELECT text 'text' || char(20) ' and characters' AS \"Concat text to char\";\n> \n> SELECT text 'text' || varchar ' and varchar' AS \"Concat text to varchar\";\n> + \n> + --\n> + -- test substr with toasted text values\n> + --\n> + CREATE TABLE toasttest(f1 text);\n> + \n> + insert into toasttest values(repeat('1234567890',10000));\n> + insert into toasttest values(repeat('1234567890',10000));\n> + \n> + -- If the starting position is zero or less, then return from the start of the string\n> + -- adjusting the length to be consistent with the \"negative start\" per SQL92.\n> + SELECT substr(f1, -1, 5) from toasttest;\n> + \n> + -- If the length is less than zero, an ERROR is thrown.\n> + SELECT substr(f1, 5, -1) from toasttest;\n> + \n> + -- If no third argument (length) is provided, the length to the end of the\n> + -- string is assumed.\n> + SELECT substr(f1, 99995) from toasttest;\n> + \n> + -- If start plus length is > string length, the result is truncated to\n> + -- string length\n> + SELECT substr(f1, 99995, 10) from 
toasttest;\n> + \n> + DROP TABLE toasttest;\n> + \n> + --\n> + -- test substr with toasted bytea values\n> + --\n> + CREATE TABLE toasttest(f1 bytea);\n> + \n> + insert into toasttest values(decode(repeat('1234567890',10000),'escape'));\n> + insert into toasttest values(decode(repeat('1234567890',10000),'escape'));\n> + \n> + -- If the starting position is zero or less, then return from the start of the string\n> + -- adjusting the length to be consistent with the \"negative start\" per SQL92.\n> + SELECT substr(f1, -1, 5) from toasttest;\n> + \n> + -- If the length is less than zero, an ERROR is thrown.\n> + SELECT substr(f1, 5, -1) from toasttest;\n> + \n> + -- If no third argument (length) is provided, the length to the end of the\n> + -- string is assumed.\n> + SELECT substr(f1, 99995) from toasttest;\n> + \n> + -- If start plus length is > string length, the result is truncated to\n> + -- string length\n> + SELECT substr(f1, 99995, 10) from toasttest;\n> + \n> + DROP TABLE toasttest;\n> + \n> + --\n> + -- test length\n> + --\n> + \n> + SELECT length('abcdef') AS \"length_6\";\n> + \n> + --\n> + -- test strpos\n> + --\n> + \n> + SELECT strpos('abcdef', 'cd') AS \"pos_3\";\n> + \n> + SELECT strpos('abcdef', 'xy') AS \"pos_0\";\n> + \n> + --\n> + -- test replace\n> + --\n> + SELECT replace('abcdef', 'de', '45') AS \"abc45f\";\n> + \n> + SELECT replace('yabadabadoo', 'ba', '123') AS \"ya123da123doo\";\n> + \n> + SELECT replace('yabadoo', 'bad', '') AS \"yaoo\";\n> + \n> + --\n> + -- test split\n> + --\n> + select split('joeuser@mydatabase','@',0) AS \"an error\";\n> + \n> + select split('joeuser@mydatabase','@',1) AS \"joeuser\";\n> + \n> + select split('joeuser@mydatabase','@',2) AS \"mydatabase\";\n> + \n> + select split('joeuser@mydatabase','@',3) AS \"empty string\";\n> + \n> + select split('@joeuser@mydatabase@','@',2) AS \"joeuser\";\n> + \n> + --\n> + -- test to_hex\n> + --\n> + select to_hex(256*256*256 - 1) AS \"ffffff\";\n> + \n> + select 
to_hex(256::bigint*256::bigint*256::bigint*256::bigint - 1) AS \"ffffffff\";\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 21 Aug 2002 23:23:51 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [GENERAL] workaround for lack of REPLACE()" } ]
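The boundary arithmetic the patched `text_substring()` and `bytea_substr()` apply — compute the one-based end position E = S + L, raise an error when E < S (negative length), return an empty string when E < 1, and clamp the start to 1 — can be modeled compactly. The following is only an illustrative sketch in Python, not the C implementation; it mirrors the regression-test expectations quoted in the patch (`substr(f1, -1, 5)` yielding the first three characters, a negative length raising an error, and a start past the end yielding an empty string):

```python
def sql_substring(s, start, length=None):
    """Illustrative model of the SQL99 boundary arithmetic in the patch above.

    start is one-based; a zero or negative start shifts the window left,
    a negative length raises an error, and omitting length takes the rest
    of the string -- matching the patched text_substring() behavior.
    """
    s1 = max(start, 1)             # adjusted (clamped) start position, S1
    if length is None:
        return s[s1 - 1:]          # no length given: take to end of string
    e = start + length             # one-based end position, E = S + L
    if e < start:                  # only possible when length < 0
        raise ValueError("negative substring length not allowed")
    if e < 1:                      # window ends before the string begins
        return ""
    return s[s1 - 1:e - 1]         # slice is [S1, E), converted to zero-based
```

For example, `sql_substring("1234567890", -1, 5)` gives `"123"`, matching the `SELECT substr(f1, -1, 5)` case in the regression output above.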
[ { "msg_contents": "Consider\n\n\tCREATE TABLE foo (f1 int primary key);\n\n\tCREATE TABLE bar (f1 int references foo);\n\n\tDROP TABLE foo RESTRICT;\n\nShould this succeed? Or should it be necessary to say DROP CASCADE to\nget rid of the foreign-key reference to foo?\n\nOur historical behavior is to allow the drop, while issuing a notice\nabout implicit deletion of triggers. But I think SQL92 intends that\nCASCADE should be required.\n\n(If you deduce from this question that a lot of Rod Taylor's pg_depend\npatch is working here, you are right...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Jul 2002 18:33:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Should this require CASCADE?" }, { "msg_contents": "On Wed, 10 Jul 2002, Tom Lane wrote:\n\n> Consider\n>\n> \tCREATE TABLE foo (f1 int primary key);\n>\n> \tCREATE TABLE bar (f1 int references foo);\n>\n> \tDROP TABLE foo RESTRICT;\n>\n> Should this succeed? Or should it be necessary to say DROP CASCADE to\n> get rid of the foreign-key reference to foo?\n>\n> Our historical behavior is to allow the drop, while issuing a notice\n> about implicit deletion of triggers. But I think SQL92 intends that\n> CASCADE should be required.\n\nI think the above should fail. If someone was adding restrict since it\nwas optional, I'd guess they were doing so in advance for the days when\nwe'd actually restrict the drop.\n\n", "msg_date": "Wed, 10 Jul 2002 15:45:24 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Should this require CASCADE?" }, { "msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> On Wed, 10 Jul 2002, Tom Lane wrote:\n>> DROP TABLE foo RESTRICT;\n>> \n>> Should this succeed? Or should it be necessary to say DROP CASCADE to\n>> get rid of the foreign-key reference to foo?\n\n> I think the above should fail. 
If someone was adding restrict since it\n> was optional, I'd guess they were doing so in advance for the days when\n> we'd actually restrict the drop.\n\nSorry if I wasn't clear: we never had the RESTRICT/CASCADE syntax at all\nuntil now. What I'm intending though is that DROP with no option will\ndefault to DROP RESTRICT, which means that a lot of cases that used to\nbe \"gotchas\" will now fail until you say CASCADE. I wrote RESTRICT in\nmy example just to emphasize that the intended behavior is RESTRICT.\n\nSo if you prefer, imagine same example but you merely say\n\tDROP TABLE foo;\nDoes your answer change?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Jul 2002 18:59:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Should this require CASCADE? " }, { "msg_contents": "On Wed, 10 Jul 2002, Tom Lane wrote:\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > On Wed, 10 Jul 2002, Tom Lane wrote:\n> >> DROP TABLE foo RESTRICT;\n> >>\n> >> Should this succeed? Or should it be necessary to say DROP CASCADE to\n> >> get rid of the foreign-key reference to foo?\n>\n> > I think the above should fail. If someone was adding restrict since it\n> > was optional, I'd guess they were doing so in advance for the days when\n> > we'd actually restrict the drop.\n>\n> Sorry if I wasn't clear: we never had the RESTRICT/CASCADE syntax at all\n> until now. What I'm intending though is that DROP with no option will\n> default to DROP RESTRICT, which means that a lot of cases that used to\n> be \"gotchas\" will now fail until you say CASCADE. I wrote RESTRICT in\n> my example just to emphasize that the intended behavior is RESTRICT.\n>\n> So if you prefer, imagine same example but you merely say\n> \tDROP TABLE foo;\n> Does your answer change?\n\nThat's tougher. 
If I had a choice without worrying about the complexities\ninvolved, I'd say that DROP TABLE foo; should restrict unless the only\nreferences were from foreign keys and that those should cascade which is\nthe similar behavior to past versions without the really unsafe\nreferencing things that don't exist, and restrict and cascade should work\nas specified. However, that adds effectively a third drop behavior and\none that isn't in the spec and would have to be documented, however I\nthink (unless I misread the spec) it wouldn't directly conflict with the\nspec since drop behavior isn't optional.\n\nGiven that that's a can of worms we probably don't want to open, I\nthink restrict is probably safer behavior even though it breaks\ncompatibility with old versions even more than the above, but I think\nsilently cascading will be more difficult for users (hey, where did\nmy definition of <X> go?).\n\n\n", "msg_date": "Wed, 10 Jul 2002 16:27:39 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Should this require CASCADE? " }, { "msg_contents": "On Wed, 2002-07-10 at 18:33, Tom Lane wrote:\n> Consider\n> \n> \tCREATE TABLE foo (f1 int primary key);\n> \n> \tCREATE TABLE bar (f1 int references foo);\n> \n> \tDROP TABLE foo RESTRICT;\n\n> Our historical behavior is to allow the drop, while issuing a notice\n> about implicit deletion of triggers. But I think SQL92 intends that\n> CASCADE should be required.\n\nI think you know my answer (Fail).\n\n- As stated, spec intends it to be required\n- Number of automated scripts doing drop table is small\n- Users will quickly learn the ropes. They would be surprised if it\ncascaded by default.\n\nThe question I suppose is:\n\nDROP TABLE foo;\n\nDoes it default to restrict or cascade? Currently it is restrict. 
I\ndon't believe the spec allows those statements to be without the\nqualifier.\n\n\nOr, how about ALTER TABLE bar DROP CONSTRAINT <fkey_cons> RESTRICT;\n\nI forget what happens here -- does bar depend on foo via the fkey?\n\n\nALTER TABLE foo DROP CONSTRAINT <primary key> RESTRICT; should\ndefinitely fail (bar depends on fkey which depends on foo.pkey).\n\n\n\n\n\n", "msg_date": "10 Jul 2002 19:27:47 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Should this require CASCADE?" }, { "msg_contents": "Tom Lane wrote:\n> \n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > On Wed, 10 Jul 2002, Tom Lane wrote:\n> >> DROP TABLE foo RESTRICT;\n> >>\n> >> Should this succeed? Or should it be necessary to say DROP CASCADE to\n> >> get rid of the foreign-key reference to foo?\n> \n> > I think the above should fail. If someone was adding restrict since it\n> > was optional, I'd guess they were doing so in advance for the days when\n> > we'd actually restrict the drop.\n> \n> Sorry if I wasn't clear: we never had the RESTRICT/CASCADE syntax at all\n> until now. What I'm intending though is that DROP with no option will\n> default to DROP RESTRICT, which means that a lot of cases that used to\n> be \"gotchas\" will now fail until you say CASCADE. I wrote RESTRICT in\n> my example just to emphasize that the intended behavior is RESTRICT.\n\nI think the idea was to have it default to CASCADE for this release, not\nto break existing code right away. 
Then 7.3 is transition time and\nRESTRICT will be the default from the next release on.\n\nIf so, this has to go into the release notes.\n\n\nJan\n\n> \n> So if you prefer, imagine same example but you merely say\n> DROP TABLE foo;\n> Does your answer change?\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n", "msg_date": "Wed, 10 Jul 2002 19:34:53 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Should this require CASCADE?" }, { "msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> I think the idea was to have it default to CASCADE for this release, not\n> to break existing code right away.\n\nI never thought that. If we default to CASCADE then a DROP is likely to\ndelete stuff that it would not have deleted in prior releases. That\nseems *far* more dangerous than \"breaking existing code\". I doubt\nthere's much existing code that does automatic DROPs anyway, at least\nof things that might have dependencies.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Jul 2002 22:20:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Should this require CASCADE? " }, { "msg_contents": "> Jan Wieck <JanWieck@Yahoo.com> writes:\n> > I think the idea was to have it default to CASCADE for this release, not\n> > to break existing code right away.\n>\n> I never thought that. If we default to CASCADE then a DROP is likely to\n> delete stuff that it would not have deleted in prior releases. That\n> seems *far* more dangerous than \"breaking existing code\". 
I doubt\n> there's much existing code that does automatic DROPs anyway, at least\n> of things that might have dependencies.\n\nWow - I think defaulting to CASCADE is nuts! Surely RESTRICT should be the\nsafest default?\n\nChris\n\n", "msg_date": "Thu, 11 Jul 2002 10:24:45 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Should this require CASCADE? " }, { "msg_contents": "On Wed, 10 Jul 2002, Tom Lane wrote:\n\n> \tCREATE TABLE foo (f1 int primary key);\n> \tCREATE TABLE bar (f1 int references foo);\n> \tDROP TABLE foo RESTRICT;\n>\n> Should this succeed? Or should it be necessary to say DROP CASCADE to\n> get rid of the foreign-key reference to foo?\n>\n> Our historical behavior is to allow the drop, while issuing a notice\n> about implicit deletion of triggers. But I think SQL92 intends that\n> CASCADE should be required.\n\nI don't think it should succeed, no; to me the historical behaviour\nseems wrong. To get a bit Dateish (or is that Datish?) for a moment,\nwhen you created table bar, you added this rule to the set of rules\nfor your database:\n\n For every f1 in bar, there exists the same f1 in foo.\n\nIf you allow table foo to be dropped, you make that statement false. But\nI think removing that rule should be an explicit, not implicit action.\n\nAnd as far as the compatibility thing goes, well, probably pretty\nmuch everyone agrees that it's better to break things such that\nyou are less likely to lose data....\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Thu, 11 Jul 2002 11:55:35 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Should this require CASCADE?" 
}, { "msg_contents": "Jan Wieck wrote:\n> Tom Lane wrote:\n> > \n> > Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > > On Wed, 10 Jul 2002, Tom Lane wrote:\n> > >> DROP TABLE foo RESTRICT;\n> > >>\n> > >> Should this succeed? Or should it be necessary to say DROP CASCADE to\n> > >> get rid of the foreign-key reference to foo?\n> > \n> > > I think the above should fail. If someone was adding restrict since it\n> > > was optional, I'd guess they were doing so in advance for the days when\n> > > we'd actually restrict the drop.\n> > \n> > Sorry if I wasn't clear: we never had the RESTRICT/CASCADE syntax at all\n> > until now. What I'm intending though is that DROP with no option will\n> > default to DROP RESTRICT, which means that a lot of cases that used to\n> > be \"gotchas\" will now fail until you say CASCADE. I wrote RESTRICT in\n> > my example just to emphasize that the intended behavior is RESTRICT.\n> \n> I think the idea was to have it default to CASCADE for this release, not\n> to break existing code right away. Then 7.3 is transition time and\n> RESTRICT will be the default from the next release on.\n\nI am not in favor of changing thing 1/2 way for one release, then doing\nthe final job in the next release. If we need to change it, we document\nit and move on. Two smalll changes are worse than one big change.\n\nAs far as this question, seems with no RESTRICT/CASCADE, it fails, with\nRESTRICT it drops the trigger, and with CASCADE it drops the referencing\ntable. Is that accurate?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 10 Jul 2002 23:05:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Should this require CASCADE?" 
}, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> As far as this question, seems with no RESTRICT/CASCADE, it fails, with\n> RESTRICT it drops the trigger, and with CASCADE it drops the referencing\n> table. Is that accurate?\n\nNot at all. CASCADE would drop the foreign key constraint (including\nthe triggers that implement it), but not the other table. In my mind\nthe issue is whether RESTRICT mode should do the same, or report an\nerror.\n\nI'm not eager to accept the idea that DROP-without-either-option should\nbehave in some intermediate fashion. I want it to be the same as\nRESTRICT.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Jul 2002 23:19:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Should this require CASCADE? " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > As far as this question, seems with no RESTRICT/CASCADE, it fails, with\n> > RESTRICT it drops the trigger, and with CASCADE it drops the referencing\n> > table. Is that accurate?\n> \n> Not at all. CASCADE would drop the foreign key constraint (including\n> the triggers that implement it), but not the other table. In my mind\n> the issue is whether RESTRICT mode should do the same, or report an\n> error.\n> \n> I'm not eager to accept the idea that DROP-without-either-option should\n> behave in some intermediate fashion. I want it to be the same as\n> RESTRICT.\n\nSounds good to me, and I don't think we need to require RESTRICT just\nbecause the standard says so. Does the standard require RESTRICT for\nevery DROP or just drops that have foreign keys?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 10 Jul 2002 23:24:34 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Should this require CASCADE?" }, { "msg_contents": "> > As far as this question, seems with no RESTRICT/CASCADE, it fails, with\n> > RESTRICT it drops the trigger, and with CASCADE it drops the referencing\n> > table. Is that accurate?\n>\n> Not at all. CASCADE would drop the foreign key constraint (including\n> the triggers that implement it), but not the other table. In my mind\n> the issue is whether RESTRICT mode should do the same, or report an\n> error.\n>\n> I'm not eager to accept the idea that DROP-without-either-option should\n> behave in some intermediate fashion. I want it to be the same as\n> RESTRICT.\n\nI think that an unqualified drop should restrict and fail to drop if there's\na foreign key. Any app that lets people do a drop is probably already\nchecking for error conditions. Hence, it's just another error condition.\n\nChris\n\n", "msg_date": "Thu, 11 Jul 2002 11:30:41 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Should this require CASCADE? " }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Sounds good to me, and I don't think we need to require RESTRICT just\n> because the standard says so. Does the standard require RESTRICT for\n> every DROP or just drops that have foreign keys?\n\nSyntactically, it requires RESTRICT or CASCADE on *every* form of DROP.\nI don't think we're willing to do that...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Jul 2002 23:31:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Should this require CASCADE? 
" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Sounds good to me, and I don't think we need to require RESTRICT just\n> > because the standard says so. Does the standard require RESTRICT for\n> > every DROP or just drops that have foreign keys?\n> \n> Syntactically, it requires RESTRICT or CASCADE on *every* form of DROP.\n> I don't think we're willing to do that...\n\nYuck, or RESTRICT|CASCADE yuck.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 10 Jul 2002 23:33:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Should this require CASCADE?" } ]
[ { "msg_contents": "<delurk - reading from the archives, so please cc me on responses>\n\nNote that before bugzilla really supports postgresql, we (ie the bugzilla\nteam) are going to need DROP COLUMN support, as well as support for\nchanging a field's type. This is because thats how upgrades are done, when\nnew features change the bz schema.\n\nSee \nhttp://lxr.mozilla.org/mozilla/source/webtools/bugzilla/checksetup.pl#2193 \nand below for the code. Lots of it is currently mysql specific, and could \neasily be wrapped in helper functions - some of it already is. That won't \nhelp if there isn't an easy way to use the functionality, though.\n\nReclaiming the disk space is also really needed, because upgrading a\nbugzilla installation could change a table multiple times, and requirng\nall that extra disk space will probably be an issue with most admins.\n\nBradley\n\n", "msg_date": "Thu, 11 Jul 2002 09:44:01 +1000 (EST)", "msg_from": "Bradley Baetz <bbaetz@student.usyd.edu.au>", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] [pgaccess-users] RE: bugzilla.pgaccess.org" }, { "msg_contents": "On Wed, 2002-07-10 at 19:44, Bradley Baetz wrote:\n> <delurk - reading from the archives, so please cc me on responses>\n> \n> Note that before bugzilla really supports postgresql, we (ie the bugzilla\n> team) are going to need DROP COLUMN support, as well as support for\n> changing a field's type. This is because thats how upgrades are done, when\n> new features change the bz schema.\n\nAgreed it would be nice, but how come upgrades cannot be done with temp\ntables -- especially since the bugzilla database is simple (no foreign\nkey constraints, etc.) 
If you're persisting with using ENUM(), a common\nupgrade script won't work anyway.\n\n\n-- Create a table\ncreate table a (col1 varchar(4));\n\n-- Add some data\ninsert into a values ('1');\ninsert into a values ('2');\n\n-- Let's change the datatype of col1\nbegin;\ncreate temp table a_tmp as select * from a;\ndrop table a;\ncreate table a (col1 integer);\ninsert into a (col1) select cast(col1 as integer) from a_tmp;\ncommit;\n\nYou've just changed the datatype from a varchar to integer. With the\ntransaction support, you're guaranteed it won't be left mid way through\neither.\n\n", "msg_date": "10 Jul 2002 19:56:01 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] [pgaccess-users] RE: " }, { "msg_contents": "On 10 Jul 2002, Rod Taylor wrote:\n\n> On Wed, 2002-07-10 at 19:44, Bradley Baetz wrote:\n> > <delurk - reading from the archives, so please cc me on responses>\n> > \n> > Note that before bugzilla really supports postgresql, we (ie the bugzilla\n> > team) are going to need DROP COLUMN support, as well as support for\n> > changing a field's type. This is because thats how upgrades are done, when\n> > new features change the bz schema.\n> \n> Agreed it would be nice, but how come upgrades cannot be done with temp\n> tables -- especially since the bugzilla database is simple (no foreign\n> key constraints, etc.) If you're persisting with using ENUM(), a common\n> upgrade script won't work anyway.\n\nWe don't plan on persisting in using enum :)\n\n<snip>\n\n> \n> You've just changed the datatype from a varchar to integer. With the\n> transaction support, you're guaranteed it won't be left mid way through\n> either.\n> \n\nWell, when bugzilla supports a db which supports foreign constraints, we'd \nlike to add those in, too\n\nHowever, is there an easy way of obtaining the list of columns (and their\ntypes/indexes/etc) in a table, so that we can recreate table a with just\nthat column missing? 
One which won't break when the underlying pg_* schema \nchanges?\n\nThe alternative is to pass the set of columns/indexes/etc into the\nDropColumn function each time its called, which would get messy quite\nquickly.\n\nBradley\n\n", "msg_date": "Thu, 11 Jul 2002 10:08:47 +1000 (EST)", "msg_from": "Bradley Baetz <bbaetz@student.usyd.edu.au>", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] [pgaccess-users] RE: bugzilla.pgaccess.org" }, { "msg_contents": "> However, is there an easy way of obtaining the list of columns (and their\n> types/indexes/etc) in a table, so that we can recreate table a with just\n> that column missing? One which won't break when the underlying pg_* schema \n> changes?\n\nI see. No, not that I know of. You could take an SQL dump of the DB\nand work on that, then restore at the end of the upgrade process -- but\nthats not so good :)\n\nAnyway, I'd *really* like to see PostgreSQL officially supported by\nBugzilla.\n\nWe may get DROP COLUMN in this release (Christopher?).\n\nChanging data types probably won't appear. 
I don't know of anyone\nworking on it -- and it can be quite a complex issue to get a good\n(resource friendly and transaction safe) version.\n\nThat said, if drop column is finished in time would the below be close\nenough to do a data type change?:\n\nalter table <table> rename <column> to <coltemp>;\nalter table <table> add column <column> <newtype>;\nupdate table <table> set <column> = <coltemp>;\nalter table <table> drop column <coltemp>;\n\n\nAre there any other requirements aside from drop column and altering\ndata types?\n\n\n\n", "msg_date": "10 Jul 2002 20:43:58 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] [pgaccess-users] RE: " }, { "msg_contents": "On 10 Jul 2002, Rod Taylor wrote:\n\n> > However, is there an easy way of obtaining the list of columns (and their\n> > types/indexes/etc) in a table, so that we can recreate table a with just\n> > that column missing? One which won't break when the underlying pg_* schema \n> > changes?\n> \n> I see. No, not that I know of. You could take an SQL dump of the DB\n> and work on that, then restore at the end of the upgrade process -- but\n> thats not so good :)\n\n:)\n\n> \n> Anyway, I'd *really* like to see PostgreSQL officially supported by\n> Bugzilla.\n\nSo would I. I cringe every time I think of the locking issues we have with \nmysql. There is work being done on that (on a branch), but I don't know \nwhat the state of it is.\n\n> We may get DROP COLUMN in this release (Christopher?).\n\nYeah, I've been reading the archives. bugzilla's auto-updating schema is \nprobably a bit of an unusual application, but it works for us.\n\n> \n> Changing data types probably won't appear. I don't know of anyone\n> working on it -- and it can be quite a complex issue to get a good\n> (resource friendly and transaction safe) version.\n\nI'd be happy with a non-resource friendly and non-transaction-safe version \nover not having the functionality at all... 
;)\n\n> \n> That said, if drop column is finished in time would the below be close\n> enough to do a data type change?:\n> \n> alter table <table> rename <column> to <coltemp>;\n> alter table <table> add column <column> <newtype>;\n> update table <table> set <column> = <coltemp>;\n> alter table <table> drop column <coltemp>;\n> \n\nThat would work - we'd have to manually recreate the indexes, but most of\nthe type changes are done in combination with other changes which have us\ndoing that anyway.\n\n> \n> Are there any other requirements aside from drop column and altering\n> data types?\n> \n\nI think the big issues are bugzilla ones, using mysql specific features\n(enum/timestamp types, REPLACE INTO, etc) Locking is the major one, but\nthe first port to pgsql will almost certainly use heavy locking (ie mysql\nREAD -> pgsql SHARE MODE, mysql WRITE -> ACCESS EXCLUSIVE MODE), because\nthats the easiest thing to port the mysql-based code over to. Less\nrestrictive locking + select for update & friends can be added later.\n\nThanks,\n\nBradley\n\n", "msg_date": "Thu, 11 Jul 2002 11:17:02 +1000 (EST)", "msg_from": "Bradley Baetz <bbaetz@student.usyd.edu.au>", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] [pgaccess-users] RE: bugzilla.pgaccess.org" }, { "msg_contents": "Rod Taylor wrote:\n> \n> > However, is there an easy way of obtaining the list of columns (and their\n> > types/indexes/etc) in a table, so that we can recreate table a with just\n> > that column missing? One which won't break when the underlying pg_* schema\n> > changes?\n> \n> I see. No, not that I know of. You could take an SQL dump of the DB\n> and work on that, then restore at the end of the upgrade process -- but\n> thats not so good :)\n\nOne way to make the application more DB version independent is to\nhide the system catalog version specific stuff in views. Whatever\ninformation you need from the catalog, wrap it into bzdd_*\nfunctions and views (BugZillaDataDictionary). 
If the catalog\nchanges, you just change the functions and views.\n\nYou finally end up with one .sql file per supported BZ/PG\ncombination. But that's a lot better than bunches of if()'s in the\napplication code.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n", "msg_date": "Wed, 10 Jul 2002 21:21:24 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] [pgaccess-users] RE: " 
This is because thats how upgrades are\n> done, when\n> > new features change the bz schema.\n>\n> DROP COLUMNS should be in 7.3, due out in the Fall.\n\nAssuming, of course, that I can get past these latest problems that Tom\nbrought up - otherwise I guess I could pass it off again :(\n\n> You can simulate\n> ALTER COLUMN by creating a new column, UPDATING the data to the new\n> column, dropping the old, then renaming the new column to the old name.\n\nWell, once DROP COLUMN is natively supported, I intend to implement a SET\nTYPE or MODIFY function - it should be quite straightforward once DROP\nCOLUMN is done. It will maintain all foreign key and index references\nproperly...\n\nChris\n\n", "msg_date": "Thu, 11 Jul 2002 10:13:31 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] [pgaccess-users] RE: bugzilla.pgaccess.org" }, { "msg_contents": "> > Changing data types probably won't appear. I don't know of anyone\n> > working on it -- and it can be quite a complex issue to get a good\n> > (resource friendly and transaction safe) version.\n> \n> I'd be happy with a non-resource friendly and non-transaction-safe version \n> over not having the functionality at all... ;)\n\nFor me, I'd have to buy / install harddrives if I wanted to change data\ntypes in some of the larger tables. I've done a number of silly things\nlike store an Apache hitlog in the DB for pattern analysis. 
Lots and\nlots of rows ;)\n \n> > That said, if drop column is finished in time would the below be close\n> > enough to do a data type change?:\n> > \n> > alter table <table> rename <column> to <coltemp>;\n> > alter table <table> add column <column> <newtype>;\n> > update table <table> set <column> = <coltemp>;\n> > alter table <table> drop column <coltemp>;\n> > \n> \n> That would work - we'd have to manually recreate the indexes, but most of\n> the type changes are done in combination with other changes which have us\n> doing that anyway.\n\nOkay, if thats all it truly takes, I'll see if I can help get it done.\n\n> > Are there any other requirements aside from drop column and altering\n> > data types?\n\n> I think the big issues are bugzilla ones, using mysql specific features\n> (enum/timestamp types, REPLACE INTO, etc) Locking is the major one, but\n\nenum(A,B,C) -> column char(1) check (column IN ('A', 'B', 'C'))\n\ntimestamp? Output pattern may be different, but PostgreSQL 7.3 will\naccept any timestamp I've thrown at it. Lots of weird and wonderful\nforms.\n\nAnyway, I think there is a way to coerce MySQL into outputting an ISO\nstyle timestamp, which would probably be the best way to move as it'll\nmake adding other DBs easier in the future.\n\nREPLACE INTO: Have an ON INSERT TRIGGER on all tables which will update\na row if the primary key already exists -- or catch an INSERT error and\ntry an update instead.\n\n", "msg_date": "10 Jul 2002 22:18:19 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] [pgaccess-users] RE: " }, { "msg_contents": "> > > Changing data types probably won't appear. I don't know of anyone\n> > > working on it -- and it can be quite a complex issue to get a good\n> > > (resource friendly and transaction safe) version.\n> >\n> > I'd be happy with a non-resource friendly and\n> non-transaction-safe version\n> > over not having the functionality at all... 
;)\n\nI absolutely, definitely agree with this! If I really, really, really need\nto change a column type then even if it takes 2 hours, I should have the\noption. People can always resort to doing a dump, edit and restore if they\nreally want...\n\n> For me, I'd have to buy / install harddrives if I wanted to change data\n> types in some of the larger tables. I've done a number of silly things\n> like store an Apache hitlog in the DB for pattern analysis. Lots and\n> lots of rows ;)\n\nOf course, you might have thought about the correct column types in advance,\nbut hey :) I think that there's no way to have a rollback-able column type\nchange without temporarily doubling space. Actually, I think Oracle has\nsome sort of system whereby the column type change is irreversible, and if\nit crashes halfway thru, the table is unusable. You can issue a command on\nthe table to pick up where it left off. You continue to do this until it's\nfully complete. However, I think the temporary doubling is probably good\nenough for 90% of our users...\n\n> > > That said, if drop column is finished in time would the below be close\n> > > enough to do a data type change?:\n> > >\n> > > alter table <table> rename <column> to <coltemp>;\n> > > alter table <table> add column <column> <newtype>;\n> > > update table <table> set <column> = <coltemp>;\n> > > alter table <table> drop column <coltemp>;\n> > >\n> >\n> > That would work - we'd have to manually recreate the indexes,\n> but most of\n> > the type changes are done in combination with other changes\n> which have us\n> > doing that anyway.\n> >\n> Okay, if thats all it truly takes, I'll see if I can help get it done.\n\nWell, you're always welcome to help me out with this DROP COLUMN business -\nafter which MODIFY will be straightforward. 
Don't forget that maybe foreign\nkeys, rules, triggers and views might have to be updated?\n\n> > I think the big issues are bugzilla ones, using mysql specific features\n> > (enum/timestamp types, REPLACE INTO, etc) Locking is the major one, but\n>\n> enum(A,B,C) -> column char(1) check (column IN ('A', 'B', 'C'))\n>\n> timestamp? Output pattern may be different, but PostgreSQL 7.3 will\n> accept any timestamp I've thrown at it. Lots of weird and wonderful\n> forms.\n>\n> Anyway, I think there is a way to coerce MySQL into outputting an ISO\n> style timestamp, which would probably be the best way to move as it'll\n> make adding other DBs easier in the future.\n>\n> REPLACE INTO: Have an ON INSERT TRIGGER on all tables which will update\n> a row if the primary key already exists -- or catch an INSERT error and\n> try an update instead.\n\nThe main thing I pick up from all of this is that Bugzilla is rather poorly\nwritten for cross-db compatibility. It should be using a database\nabstraction layer such as ADODB that will let you do a 'replace' in _any_\ndatabase, is type independent, syntax independent, etc.\n\nChris\n\n", "msg_date": "Thu, 11 Jul 2002 10:27:06 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] [pgaccess-users] RE: bugzilla.pgaccess.org" }, { "msg_contents": "On 10 Jul 2002, Rod Taylor wrote:\n\n> enum(A,B,C) -> column char(1) check (column IN ('A', 'B', 'C'))\n\nright.\n\n> \n> timestamp? Output pattern may be different, but PostgreSQL 7.3 will\n> accept any timestamp I've thrown at it. Lots of weird and wonderful\n> forms.\n\nI'm referring to the mysql |timestamp| type, which will update that\ncolumn's contents to |now()| when any UPDATE is given for that partcular\nrow, unless the column was assigned to. I don't know how to handle the\nlast part in a trigger. 
Bugzilla's use of that feature is more trouble\nthan it's worth, though, because we keep forgetting to stop the update in \nplaces where it should be, and there are plans to remove it anyway.\n\n> REPLACE INTO: Have an ON INSERT TRIGGER on all tables which will update\n> a row if the primary key already exists -- or catch an INSERT error and\n> try an update instead.\n\nBZ uses REPLACE in 4 places, and 3 of those check for the row existing \nimmediately before. :)\n\nThese are bugzilla problems, not postgres ones, though.\n\nBradley\n\n", "msg_date": "Thu, 11 Jul 2002 13:08:11 +1000 (EST)", "msg_from": "Bradley Baetz <bbaetz@student.usyd.edu.au>", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] [pgaccess-users] RE: bugzilla.pgaccess.org" }, { "msg_contents": "On Thu, 11 Jul 2002, Christopher Kings-Lynne wrote:\n\n> Of course, you might have thought about the correct column types in advance,\n> but hey :) I think that there's no way to have a rollback-able column type\n> change without temporarily doubling space. Actually, I think Oracle has\n> some sort of system whereby the column type change is irreversible, and if\n> it crashes halfway thru, the table is unusable. You can issue a command on\n> the table to pick up where it left off. You continue to do this until it's\n> fully complete. However, I think the temporary doubling is probably good\n> enough for 90% of our users...\n\nI don't mind temporarily doubling space - mysql docs say that all its \nALTER TABLE stuff (except for renaming) is done by making a copy.\n\n> The main thing I pick up from all of this is that Bugzilla is rather poorly\n> written for cross-db compatibility. It should be using a database\n> abstraction layer such as ADODB that will let you do a 'replace' in _any_\n> database, is type independent, syntax independent, etc.\n\nYep. BZ isn't very portable - it wasn't a design goal at the time, I \nbelieve. 
redhat do have an oracle port though, and are working on a \npostgres port, so it is possible.\n\nADODB (or a perl equivalent) is possibly overkill once we get the (legacy)\ncolumn typing stuff worked out. BZ doesn't really use any non-basic SQL\nfunctionality, although the query stuff will benefit from subselects.\n\n> \n> Chris\n> \n\nBradley\n\n", "msg_date": "Thu, 11 Jul 2002 13:15:13 +1000 (EST)", "msg_from": "Bradley Baetz <bbaetz@student.usyd.edu.au>", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] [pgaccess-users] RE: bugzilla.pgaccess.org" }, { "msg_contents": "Bradley Baetz <bbaetz@student.usyd.edu.au> writes:\n> I'm referring to the mysql |timestamp| type, which will update that\n> column's contents to |now()| when any UPDATE is given for that particular\n> row, unless the column was assigned to. I don't know how to handle the\n> last part in a trigger.\n\nIt'd probably be close enough to have an UPDATE trigger that does\n\n\tif (new.timestamp = old.timestamp)\n\t\tnew.timestamp = now();\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Jul 2002 23:25:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] [pgaccess-users] RE: bugzilla.pgaccess.org " }, { "msg_contents": "On Wed, 10 Jul 2002, Tom Lane wrote:\n\n> Bradley Baetz <bbaetz@student.usyd.edu.au> writes:\n> > I'm referring to the mysql |timestamp| type, which will update that\n> > column's contents to |now()| when any UPDATE is given for that particular\n> > row, unless the column was assigned to. 
I don't know how to handle the\n> > last part in a trigger.\n> \n> It'd probably be close enough to have an UPDATE trigger that does\n> \n> \tif (new.timestamp = old.timestamp)\n> \t\tnew.timestamp = now();\n\nNope, because the documented way of making sure that the field doesn't \nchange is to use |UPDATE foo SET bar=bar ....|, and that's what bz uses.\n\nDon't worry about this, though - we will hopefully be removing this \n'feature' soon.\n\nBradley\n\n", "msg_date": "Thu, 11 Jul 2002 13:29:26 +1000 (EST)", "msg_from": "Bradley Baetz <bbaetz@student.usyd.edu.au>", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] [pgaccess-users] RE: bugzilla.pgaccess.org" } ]
[ { "msg_contents": "IMHO, I believe that the standard should be adhered to if at all possible.\nSince Cascade was added, Restrict must be the default is my reading of the\nstandard.\n\nSo that everyone can talk from the same sheet, the 1999 SQL Standard for DROP\nTABLE follows:\n\n\n11.20 <drop table statement>\nFunction\nDestroy a table.\nFormat\n<drop table statement> ::=\nDROP TABLE <table name> <drop behavior>\nSyntax Rules\n1) Let T be the table identified by the <table name> and let TN be that <table\nname>.\n2) The schema identified by the explicit or implicit schema name of the <table\nname> shall include the descriptor of T.\n3) T shall be a base table.\n4) T shall not be a declared local temporary table.\n5) If RESTRICT is specified, then T shall not have any subtables.\n6) If RESTRICT is specified, then T shall not be referenced in any of the\nfollowing:\na) The <query expression> of any view descriptor.\nb) The <search condition> of any table check constraint descriptor of any table\nother than T or the <search condition> of a constraint descriptor of an\nassertion descriptor.\nc) The table descriptor of the referenced table of any referential constraint\ndescriptor of any table other than T.\nd) The scope of the declared type of a column of a table other than T and of the\ndeclared type of an SQL parameter of any SQL-invoked routine.\ne) The <SQL routine body> of any SQL-invoked routine descriptor.\nf) The scope of the declared type of an SQL parameter of any SQL-invoked\nroutine.\ng) The trigger action of any trigger descriptor.\nNOTE 197 - If CASCADE is specified, then such referenced objects will be dropped\nby the execution of the <revoke statement> specified in the General Rules of\nthis Subclause.\n7) Let A be the <authorization identifier> that owns the schema identified by\nthe <schema name> of the table identified by TN.\n8) Let the containing schema be the schema identified by the <schema name>\nexplicitly or implicitly contained in <table 
name>.\nAccess Rules\n1) The enabled authorization identifiers shall include A.\n \nGeneral Rules\n1) Let ST be the <table name> of any subtable of T. The following <drop table\nstatement> is effectively executed without further Access Rule checking:\nDROP TABLE ST CASCADE\n2) If T is a referenceable table, then:\na) Let ST be structured type associated with T.\nb) Let RST be the reference type whose referenced type is ST.\nc) Let DT be any table whose table descriptor includes a column descriptor that\ngenerally includes a field descriptor, an attribute descriptor, or an array\ndescriptor that includes a reference type descriptor RST whose scope includes\nTN.\nNOTE 198 - A descriptor that ''generally includes'' another descriptor is\ndefined in Subclause 6.2.4, \"Descriptors\", in ISO/IEC 9075-1.\nd) Let DTN be the name of the table DT.\ne) Case:\ni) If DT is a base table, then the following <drop table statement> is\neffectively executed without further Access Rule checking:\nDROP TABLE DTN CASCADE\nii) Otherwise, the following <drop view statement> is effectively executed\nwithout further Access Rule checking:\nDROP VIEW DTN CASCADE\n3) For every supertable of T, every superrow and every subrow of every row of T\nis effectively deleted at the end of the SQL-statement, prior to the checking of\nany integrity constraints.\nNOTE 199 - This deletion creates neither a new trigger execution context nor the\ndefinition of a new state change in the current trigger execution context.\n4) The following <revoke statement> is effectively executed with a current\nauthorization identifier of ''_SYSTEM'' and without further Access Rule\nchecking:\nREVOKE ALL PRIVILEGES ON TN FROM A CASCADE\n5) Let R be any SQL-invoked routine whose routine descriptor contains the <table\nname> of T in the <SQL routine body>. Let SN be the <specific name> of R. 
The\nfollowing <drop routine statement> is effectively executed without further\nAccess Rule checking:\nDROP SPECIFIC ROUTINE SN CASCADE\n6) For each direct supertable DST of T, the table name of T is removed from the\nlist of table names of direct subtables of DST that is included in the table\ndescriptor of DST.\n7) The descriptor of T is destroyed.\n \nConformance Rules\n1) Without Feature F032, ''CASCADE drop behavior'', a <drop behavior> of CASCADE\nshall not be specified in <drop table statement>.\n\n\n\n-----Original Message-----\nFrom: Jan Wieck [mailto:JanWieck@Yahoo.com]\nSent: Wednesday, July 10, 2002 7:35 PM\nTo: Tom Lane\nCc: Stephan Szabo; pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Should this require CASCADE?\n\n\nTom Lane wrote:\n> \n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > On Wed, 10 Jul 2002, Tom Lane wrote:\n> >> DROP TABLE foo RESTRICT;\n> >>\n> >> Should this succeed? Or should it be necessary to say DROP CASCADE to\n> >> get rid of the foreign-key reference to foo?\n> \n> > I think the above should fail. If someone was adding restrict since it\n> > was optional, I'd guess they were doing so in advance for the days when\n> > we'd actually restrict the drop.\n> \n> Sorry if I wasn't clear: we never had the RESTRICT/CASCADE syntax at all\n> until now. What I'm intending though is that DROP with no option will\n> default to DROP RESTRICT, which means that a lot of cases that used to\n> be \"gotchas\" will now fail until you say CASCADE. I wrote RESTRICT in\n> my example just to emphasize that the intended behavior is RESTRICT.\n\nI think the idea was to have it default to CASCADE for this release, not\nto break existing code right away. 
Then 7.3 is transition time and\nRESTRICT will be the default from the next release on.\n\nIf so, this has to go into the release notes.\n\n\nJan\n\n> \n> So if you prefer, imagine same example but you merely say\n> DROP TABLE foo;\n> Does your answer change?\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\nhttp://www.postgresql.org/users-lounge/docs/faq.html\n", "msg_date": "Wed, 10 Jul 2002 20:25:30 -0400", "msg_from": "\"Groff, Dana\" <Dana.Groff@filetek.com>", "msg_from_op": true, "msg_subject": "Re: Should this require CASCADE?" } ]
[ { "msg_contents": "Actually, the answer is even clearer, the standard calls for the specification\nof \"CASCADE\" or \"RESTRICT\" and doesn't support _not_ having that specified.\n(the <drop behavior> is NOT [drop behavior] aka optional)\n\nNOTE: <drop behavior> ::= CASCADE | RESTRICT \nas defined in 11.2\n\n\nDana\nPS to be complete: all these references are to: ISO/IEC 9075-2:1999 \"Foundation\"\n\n-----Original Message-----\nFrom: Groff, Dana [mailto:Dana.Groff@filetek.com]\nSent: Wednesday, July 10, 2002 8:30 PM\nTo: 'pgsql-hackers@postgreSQL.org'\nSubject: Re: [HACKERS] Should this require CASCADE?\n\n\nIMHO, I believe that the standard should be adhered to if at all possible.\nSince Cascade was added, Restrict must be the default is my reading of the\nstandard.\n\nSo that everyone can talk from the same sheet, the 1999 SQL Standard for DROP\nTABLE follows:\n\n\n11.20 <drop table statement>\nFunction\nDestroy a table.\nFormat\n<drop table statement> ::=\nDROP TABLE <table name> <drop behavior>\nSyntax Rules\n1) Let T be the table identified by the <table name> and let TN be that <table\nname>.\n2) The schema identified by the explicit or implicit schema name of the <table\nname> shall include the descriptor of T.\n3) T shall be a base table.\n4) T shall not be a declared local temporary table.\n5) If RESTRICT is specified, then T shall not have any subtables.\n6) If RESTRICT is specified, then T shall not be referenced in any of the\nfollowing:\na) The <query expression> of any view descriptor.\nb) The <search condition> of any table check constraint descriptor of any table\nother than T or the <search condition> of a constraint descriptor of an\nassertion descriptor.\nc) The table descriptor of the referenced table of any referential constraint\ndescriptor of any table other than T.\nd) The scope of the declared type of a column of a table other than T and of the\ndeclared type of an SQL parameter of any SQL-invoked routine.\ne) The <SQL routine body> of 
any SQL-invoked routine descriptor.\nf) The scope of the declared type of an SQL parameter of any SQL-invoked\nroutine.\ng) The trigger action of any trigger descriptor.\nNOTE 197 - If CASCADE is specified, then such referenced objects will be dropped\nby the execution of the <revoke statement> specified in the General Rules of\nthis Subclause.\n7) Let A be the <authorization identifier> that owns the schema identified by\nthe <schema name> of the table identified by TN.\n8) Let the containing schema be the schema identified by the <schema name>\nexplicitly or implicitly contained in <table name>.\nAccess Rules\n1) The enabled authorization identifiers shall include A.\n \nGeneral Rules\n1) Let ST be the <table name> of any subtable of T. The following <drop table\nstatement> is effectively executed without further Access Rule checking:\nDROP TABLE ST CASCADE\n2) If T is a referenceable table, then:\na) Let ST be structured type associated with T.\nb) Let RST be the reference type whose referenced type is ST.\nc) Let DT be any table whose table descriptor includes a column descriptor that\ngenerally includes a field descriptor, an attribute descriptor, or an array\ndescriptor that includes a reference type descriptor RST whose scope includes\nTN.\nNOTE 198 - A descriptor that ''generally includes'' another descriptor is\ndefined in Subclause 6.2.4, \"Descriptors\", in ISO/IEC 9075-1.\nd) Let DTN be the name of the table DT.\ne) Case:\ni) If DT is a base table, then the following <drop table statement> is\neffectively executed without further Access Rule checking:\nDROP TABLE DTN CASCADE\nii) Otherwise, the following <drop view statement> is effectively executed\nwithout further Access Rule checking:\nDROP VIEW DTN CASCADE\n3) For every supertable of T, every superrow and every subrow of every row of T\nis effectively deleted at the end of the SQL-statement, prior to the checking of\nany integrity constraints.\nNOTE 199 - This deletion creates neither a new trigger 
execution context nor the\ndefinition of a new state change in the current trigger execution context.\n4) The following <revoke statement> is effectively executed with a current\nauthorization identifier of ''_SYSTEM'' and without further Access Rule\nchecking:\nREVOKE ALL PRIVILEGES ON TN FROM A CASCADE\n5) Let R be any SQL-invoked routine whose routine descriptor contains the <table\nname> of T in the <SQL routine body>. Let SN be the <specific name> of R. The\nfollowing <drop routine statement> is effectively executed without further\nAccess Rule checking:\nDROP SPECIFIC ROUTINE SN CASCADE\n6) For each direct supertable DST of T, the table name of T is removed from the\nlist of table names of direct subtables of DST that is included in the table\ndescriptor of DST.\n7) The descriptor of T is destroyed.\n \nConformance Rules\n1) Without Feature F032, ''CASCADE drop behavior'', a <drop behavior> of CASCADE\nshall not be specified in <drop table statement>.\n\n\n\n-----Original Message-----\nFrom: Jan Wieck [mailto:JanWieck@Yahoo.com]\nSent: Wednesday, July 10, 2002 7:35 PM\nTo: Tom Lane\nCc: Stephan Szabo; pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Should this require CASCADE?\n\n\nTom Lane wrote:\n> \n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > On Wed, 10 Jul 2002, Tom Lane wrote:\n> >> DROP TABLE foo RESTRICT;\n> >>\n> >> Should this succeed? Or should it be necessary to say DROP CASCADE to\n> >> get rid of the foreign-key reference to foo?\n> \n> > I think the above should fail. If someone was adding restrict since it\n> > was optional, I'd guess they were doing so in advance for the days when\n> > we'd actually restrict the drop.\n> \n> Sorry if I wasn't clear: we never had the RESTRICT/CASCADE syntax at all\n> until now. What I'm intending though is that DROP with no option will\n> default to DROP RESTRICT, which means that a lot of cases that used to\n> be \"gotchas\" will now fail until you say CASCADE. 
I wrote RESTRICT in\n> my example just to emphasize that the intended behavior is RESTRICT.\n\nI think the idea was to have it default to CASCADE for this release, not\nto break existing code right away. Then 7.3 is transition time and\nRESTRICT will be the default from the next release on.\n\nIf so, this has to go into the release notes.\n\n\nJan\n\n> \n> So if you prefer, imagine same example but you merely say\n> DROP TABLE foo;\n> Does your answer change?\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\nhttp://www.postgresql.org/users-lounge/docs/faq.html\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\nhttp://archives.postgresql.org\n", "msg_date": "Wed, 10 Jul 2002 20:40:04 -0400", "msg_from": "\"Groff, Dana\" <Dana.Groff@filetek.com>", "msg_from_op": true, "msg_subject": "Re: Should this require CASCADE?" }, { "msg_contents": "\"Groff, Dana\" <Dana.Groff@filetek.com> writes:\n> Actually, the answer is even clearer, the standard calls for the specification\n> of \"CASCADE\" or \"RESTRICT\" and doesn't support _not_ having that specified.\n> (the <drop behavior> is NOT [drop behavior] aka optional)\n\nRight, the spec does not allow it to be defaulted. 
We will, however,\nsince the alternative is breaking every PG application that uses DROP.\n\nDefaulting to RESTRICT behavior seems a reasonably safe way of\npreserving as much backwards compatibility as we can.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Jul 2002 23:09:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Should this require CASCADE? " } ]
[ { "msg_contents": "As I currently have Rod's dependency code set up, an index derived from\na UNIQUE or PRIMARY KEY clause can't be dropped directly; you must drop\nthe constraint instead. For example:\n\nregression=# create table foo (f1 text primary key);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'foo_pkey' for table 'foo'\nCREATE TABLE\nregression=# drop index foo_pkey;\nERROR: Cannot drop index foo_pkey because constraint foo_pkey on table foo requires it\n You may DROP the other object instead\nregression=# alter table foo drop constraint foo_pkey;\nALTER TABLE\n-- now the index is gone, eg\nregression=# drop index foo_pkey;\nERROR: index \"foo_pkey\" does not exist\n\nBut on the other hand an index created from CREATE INDEX has no\nassociated pg_constraint entry, so it can (and must) be dropped with\nDROP INDEX.\n\nIs this a good idea, or should we consider the index and the constraint\nto be equivalent (ie, you can drop both with either syntax)?\n\nI went out of my way to make the above happen, but now I'm wondering if\nit was a good idea or not. Backwards compatibility would suggest\nallowing DROP INDEX to get rid of UNIQUE/PRIMARY KEY constraints.\nOTOH one might feel that the index is an implementation detail, and\nthe user should only think about the constraint.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Jul 2002 23:41:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Okay, how about indexes versus unique/primary constraints?" }, { "msg_contents": "On Wed, 10 Jul 2002, Tom Lane wrote:\n\n> As I currently have Rod's dependency code set up, an index derived from\n> a UNIQUE or PRIMARY KEY clause can't be dropped directly; you must drop\n> the constraint instead.\n> ...\n> I went out of my way to make the above happen, but now I'm wondering if\n> it was a good idea or not.\n\nI think it's a great idea. 
It helps make it clear just why the index was\ncreated, so you don't get someone less familiar with the schema saying,\n\"we don't have any queries that use this index, so we might as well get\nrid of it....\"\n\nI think this change is hardly likely to cause problems, since adding or\ndeleting indexes seems unlikely to be automated. It's really a system\nadministration activity, not something the application would do on its own.\n\n> OTOH one might feel that the index is an implementation detail, and\n> the user should only think about the constraint.\n\nExactly my feeling.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Thu, 11 Jul 2002 13:38:42 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Okay, how about indexes versus unique/primary constraints?" } ]
[ { "msg_contents": "I think that we are getting into two or three issues here. If I may:\n\n\t(1) Is DROP TABLE <foo> acceptable by the standard?\n\t(2) Should we break \"old\" functionality?\n\t(3) assuming we support the old syntax: \n\t\tshould DROP TABLE <foo> be functionally the same as \n\t\tDROP TABLE <foo> RESTRICT\n\t(4) does that mean that That a DROP TABLE <foo> RESTRICT fails on\nforeign key reference to foo.\n\nAnswers from my experience and from my reading of the standard. (See earlier\nnote, I encourage you to determine if I am mistaken, the stand is often hard to\nread.)\n\n\t(1) It is ONLY acceptable (see conformance note) if you do not support\nCASCADE. If you support CASCADE, you must indicate CASCADE or RESTRICT. This\nisn't an \"optional parameter\". So, no -- the suggestion that \"DROP TABLE <foo>\"\nis now valid syntax given the CASCADE functionality breaks the standard.\nVendors <sarcasm> occasionally </sarcasm> decide to break the standard.\n(currently the standards node seems to be down -- I was going to verify that\nnothing in 2004 has yet to change this syntax. That verification will have to\ncome tomorrow (assuming it comes back up).)\n\n\t(2) I am new here. This is really an answer that should be driven by\nthe user community. My experience doing database engineering allows me to argue\nboth viewpoints. I would claim, as I have seen others hint at, that \"drop\ntable\" operations are not heavily embedded into application code and this is one\ncase where backward compatibility may not be as important as clarity and\nstandard conformance. Tom seems to have made a clear statement that there is A\nDESIRE to not break old implementations directly with a syntax restriction. I\ndon't know if you have a mechanism to provide a warning that this is a\ndeprecated feature -- if so, that may be \"a good idea\"(tm). 
The reasoning\nbehind the standard thrust here is that a maintainer should explicitly know what\nthe drop command will accomplish.\n\n\t(3) I believe Tom is right in that you don't want to do something \"half\nway\". It should behave like RESTRICT. There are commercial examples for this\nbehavior (Oracle Rdb (aka Digital Rdb) is the one that immediately comes to\nmind; Oracle 9i's CASCADE CONSTRAINTS|<nothing> is another variant).\n\n\t(4) yes seems to be the general answer from all corners. (I agree)\n\nDana\n(BTW: while I mentioned this to some of the core folks, I actually do\nparticipate on the SQL standard for the US. NO, that doesn't make me the last\nword when it comes to the standard (that's Jim Melton's job :-), but I do have\naccess to all the documents. I have been developing commercial DB engines for\nsome time, so I have some experience in the field but I am only just starting to hack\nPostgreSQL.)\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\nSent: Wednesday, July 10, 2002 11:25 PM\nTo: Tom Lane\nCc: Jan Wieck; Stephan Szabo; pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Should this require CASCADE?\n\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > As far as this question, seems with no RESTRICT/CASCADE, it fails, with\n> > RESTRICT it drops the trigger, and with CASCADE it drops the referencing\n> > table. Is that accurate?\n> \n> Not at all. CASCADE would drop the foreign key constraint (including\n> the triggers that implement it), but not the other table. In my mind\n> the issue is whether RESTRICT mode should do the same, or report an\n> error.\n> \n> I'm not eager to accept the idea that DROP-without-either-option should\n> behave in some intermediate fashion. I want it to be the same as\n> RESTRICT.\n\nSounds good to me, and I don't think we need to require RESTRICT just\nbecause the standard says so. 
Does the standard require RESTRICT for\nevery DROP or just drops that have foreign keys?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n", "msg_date": "Thu, 11 Jul 2002 00:59:48 -0400", "msg_from": "\"Groff, Dana\" <Dana.Groff@filetek.com>", "msg_from_op": true, "msg_subject": "Re: Should this require CASCADE?" }, { "msg_contents": "Groff, Dana wrote:\n> I think that we are getting into two or three issues here. If I may:\n> \n> \t(1) Is DROP TABLE <foo> acceptable by the standard?\n> \t(2) Should we break \"old\" functionality?\n> \t(3) assuming we support the old syntax: \n> \t\tshould DROP TABLE <foo> be functionally the same as \n> \t\tDROP TABLE <foo> RESTRICT\n> \t(4) does that mean that That a DROP TABLE <foo> RESTRICT fails on\n> foreign key reference to foo.\n> \n> Answers from my experience and from my reading of the standard. (See earlier\n> note, I encourage you to determine if I am mistaken, the stand is often hard to\n> read.)\n> \n> \t(1) It is ONLY acceptable (see conformance note) if you do not support\n> CASCADE. If you support CASCADE, you must indicate CASCADE or RESTRICT. This\n> isn't an \"optional parameter\". So, no -- the suggestion that \"DROP TABLE <foo>\"\n> is now valid syntax given the CASCADE functionality breaks the standard.\n> Vendors <sarcasm> occasionally </sarcasm> decide to break the standard.\n> (currently the standards node seems to be down -- I was going to verify that\n> nothing in 2004 has yet to change this syntax. 
That verification will have to\n> come tomorrow (assuming it comes back up).)\n\nHard to argue why we should invalidate all preexisting SQL books by\nrejecting DROP TABLE tab. If I create a table, and then want to drop\nit, why should I have to put another noise word in there to make the\nserver happy. Now, if someone wanted to say CASCADE|RESTRICT was\nrequired for DROP _only_ if there is some foreign key references to the\ntable, I would be OK with that, but that's not what the standard says.\n\nHard to imagine what the standards people were thinking on this one.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Jul 2002 12:27:30 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Should this require CASCADE?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Now, if someone wanted to say CASCADE|RESTRICT was\n> required for DROP _only_ if there is some foreign key references to the\n> table, I would be OK with that, but that's not what the standard says.\n\nBut in fact that is not different from what I propose to do. Consider\nwhat such a rule really means:\n\t* if no dependencies exist for the object, go ahead and delete.\n\t* if dependencies exist, complain.\nHow is that different from \"the default behavior is RESTRICT\"?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Jul 2002 12:36:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Should this require CASCADE? 
" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Now, if someone wanted to say CASCADE|RESTRICT was\n> > required for DROP _only_ if there is some foreign key references to the\n> > table, I would be OK with that, but that's not what the standard says.\n> \n> But in fact that is not different from what I propose to do. Consider\n> what such a rule really means:\n> \t* if no dependencies exist for the object, go ahead and delete.\n> \t* if dependencies exist, complain.\n> How is that different from \"the default behavior is RESTRICT\"?\n\nNo, I support your ideas. We are allowing RESTRICT to be the default.\n\nWhat I was saying is that the standard _requiring_ RESTRICT or CASCADE\nwas really strange, and I could understand such a requirement only if\nforeign keys existed on the table. Requiring it when no foreign keys\nexist is really weird. I agree we should default to RESTRICT in all\ncases.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Jul 2002 12:46:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Should this require CASCADE?" }, { "msg_contents": "With all this dependency stuff, what happens with the ALTER TABLE / DROP NOT\nNULL syntax we came up with?\n\nMaybe we should allow RESTRICT/CASCADE on that syntax and if restrict is\nspecified, you can't drop it if a primary key depends on it and if cascade\nis specified it will drop the primary key...\n\nJust for consistency...\n\nAlso, when talking about whether or not the index supporting a constraint\nshould be sort of 'hidden' from the user, should not we change pg_dump to\ndump unique indices using the ALTER TABLE syntax, rather than the CREATE\nUNIQUE INDEX syntax? 
Otherwise this information will be lost.\n\nChris\n\n", "msg_date": "Fri, 12 Jul 2002 09:29:09 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Should this require CASCADE? " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> With all this dependency stuff, what happens with the ALTER TABLE / DROP NOT\n> NULL syntax we came up with?\n\nNothing, AFAICS. NOT NULL doesn't have any dependency implications.\n\n> Also, when talking about whether or not the index supporting a constraint\n> should be sort of 'hidden' from the user, should not we change pg_dump to\n> dump unique indices using the ALTER TABLE syntax, rather than the CREATE\n> UNIQUE INDEX syntax? Otherwise this information will be lost.\n\nI thought we did that already. We do need to tweak pg_dump's handling\nof foreign keys though --- dumping some trigger definitions is no longer\nthe right thing.\n\nIt would be interesting to see if we can reasonably reverse-engineer\na foreign-key-constraint structure given the CREATE TRIGGER commands\nthat are actually going to be present in existing pg_dump scripts.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Jul 2002 23:04:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Should this require CASCADE? " }, { "msg_contents": "> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > With all this dependency stuff, what happens with the ALTER\n> TABLE / DROP NOT\n> > NULL syntax we came up with?\n>\n> Nothing, AFAICS. NOT NULL doesn't have any dependency implications.\n\nWhat about the primary keys that I mentioned? In the current\nimplementation, it's restrict-only.\n\nChris\n\n", "msg_date": "Fri, 12 Jul 2002 11:08:41 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Should this require CASCADE? 
" }, { "msg_contents": "> > Also, when talking about whether or not the index supporting a\n> constraint\n> > should be sort of 'hidden' from the user, should not we change\n> pg_dump to\n> > dump unique indices using the ALTER TABLE syntax, rather than the CREATE\n> > UNIQUE INDEX syntax? Otherwise this information will be lost.\n>\n> I thought we did that already.\n\nNope: (CVS-HEAD)\n\ntest=# create table test (a int4 unique);\nNOTICE: CREATE TABLE / UNIQUE will create implicit index 'test_a_key' for\ntable 'test'\nCREATE TABLE\ntest=# \\q\nchriskl@alpha:~$ pg_dump test\n--\n-- Selected TOC Entries:\n--\n\\connect - chriskl\n\nSET search_path = public, pg_catalog;\n\n--\n-- TOC Entry ID 2 (OID 16575)\n--\n-- Name: test Type: TABLE Schema: public Owner: chriskl\n--\n\nCREATE TABLE \"test\" (\n \"a\" integer\n);\n\n--\n-- Data for TOC Entry ID 4 (OID 16575)\n--\n-- Name: test Type: TABLE DATA Schema: public Owner: chriskl\n--\n\n\nCOPY \"test\" FROM stdin;\n\\.\n--\n-- TOC Entry ID 3 (OID 16577)\n--\n-- Name: test_a_key Type: INDEX Schema: public Owner: chriskl\n--\n\nCREATE UNIQUE INDEX test_a_key ON test USING btree (a);\n\nI think that if an index is unique and uses btree, it should be dumped as an\nalter table statement?\n\nChris\n\n", "msg_date": "Fri, 12 Jul 2002 11:12:19 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Should this require CASCADE? " } ]
[ { "msg_contents": "I have committed changes enabling new CREATE CONVERSION/DROP\nCONVERSION SQL commands. You can issue these SQL commands, but they\njust add/remove tuples to/from new system catalog pg_conversion.\nStill needs lots of work...\n\nInitdb required.\n\nFor those who wishes to try these (currently) useless commands, here\nis the brief description:\n\nCREATE [DEFAULT] CONVERSION conversion_name FOR for_encoding_name\n TO to_encoding_name FROM function_name;\n\nDROP CONVERSION conversion_name;\n\ndescription:\n\nCREATE CONVERSION creates new encoding conversion. conversion_name is\nthe name of the conversion possibly schema qualified name(yes,\nconversion is schema aware). Duplicate conversion name is not allowed\nin a schema. for_encoding_name is the encoding of the source\ntext. to_encoding_name is the encoding of the destination\ntext. function_name is the C function actually performs the\nconversion(see below for the function signature).\n\nIf DEFAULT is specified, then the conversion is used for the automatic\nfrontend/backend encoding conversion.\n\nconversion function signature is as follows:\n\n pgconv(\n\t\tINTEGER,\t-- source encoding id\n\t\tINTEGER,\t-- destination encoding id\n\t\tOPAQUE,\t\t-- source string (null terminated C string)\n\t\tOPAQUE,\t\t-- destination string (null terminated C string)\n\t\tINTEGER\t\t-- source string length\n ) returns INTEGER;\t-- dummy. returns nothing, actually.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 11 Jul 2002 16:57:12 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "new SQL command: CREATE CONVERSION/DROP CONVERSION added" } ]
[ { "msg_contents": "\n> > > > Changing data types probably won't appear. I don't know of anyone\n> > > > working on it -- and it can be quite a complex issue to get a good\n> > > > (resource friendly and transaction safe) version.\n> > >\n> > > I'd be happy with a non-resource friendly and non-transaction-safe version\n> > > over not having the functionality at all... ;)\n\nCertain (imho common) cases could be short circuited to only manipulate the column \ndefinition (including default value), like alter varchar(10) to varchar(20), \nor decimal(10) to decimal(20), or even varchar(10) to varchar(5) (which would need \nto select all rows, and check that none is actually longer than 5, or abort).\n\nThat is what I have seen other db's do. Do it inplace where it is easily possible,\nelse do a long transaction that rewrites the content.\n\nAndreas\n", "msg_date": "Thu, 11 Jul 2002 10:32:16 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] [pgaccess-users] RE: bugzilla.pgaccess.org" } ]
[ { "msg_contents": "The estimator only uses effective_cache_size, it never looks at\nNBuffers. So shouldn't we add\n\tif (effective_cache_size < NBuffers)\n\t{\n\t\telog(NOTICE, \"adjusting effective_cache_size to %d\",\n\t\t NBuffers);\n\t\teffective_cache_size = NBuffers;\n\t}\n\nsomewhere near\n\t * Check for invalid combinations of switches\n\nin postmaster.c? I can see only one reason not to it: If\neffective_cache_size were meant as a hint to the backend that it\ncannot count on the whole buffer cache, because it has to share the\ncache with other backends; though I didn't find any indication of\nsuch an intention. Am I missing something else?\n\nIn costsize.c there is this comment:\n\n * We also use a rough estimate \"effective_cache_size\" of the number\nof\n * disk pages in Postgres + OS-level disk cache. (We can't simply use\n * NBuffers for this purpose because that would ignore the effects of\n * the kernel's disk cache.)\n\nIn the docs we have:\n\nEFFECTIVE_CACHE_SIZE (floating point): Sets the optimizer's\nassumption about the effective size of the disk cache (that is, the\nportion of the kernel's disk cache that will be used for PostgreSQL\ndata files). This is measured in disk pages, which are normally 8 kB\neach. \n\nWhat about adding something like \"including the effects of the\nbackend's own shared buffers\" here?\n\nServus\n Manfred\n", "msg_date": "Thu, 11 Jul 2002 14:25:30 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "effective_cache_size" }, { "msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> The estimator only uses effective_cache_size, it never looks at\n> NBuffers. So shouldn't we add\n> \tif (effective_cache_size < NBuffers)\n\nPretty useless considering that effective_cache_size can be SET on the\nfly...\n\nIn general, my philosophy has been not to constrain settings of\noptimizer cost parameters more than absolutely necessary. 
For example,\nthe system will let you set random_page_cost to values between 0 and 1,\neven though values less than 1 are surely nonsensical. Once in a while\nsomeone might want to do that just for testing purposes...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Jul 2002 10:11:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: effective_cache_size " } ]
[ { "msg_contents": "I can't improve performance on this query:\n\nSELECT\n supplier.name,\n supplier.address\nFROM\n supplier,\n nation\nWHERE\n supplier.suppkey IN(\n SELECT\n partsupp.suppkey\n FROM\n partsupp\n WHERE\n partsupp.partkey IN(\n SELECT\n part.partkey\n FROM\n part\n WHERE\n part.name like 'forest%'\n )\n AND partsupp.availqty>(\n SELECT\n 0.5*(sum(lineitem.quantity)::FLOAT)\n FROM\n lineitem\n WHERE\n lineitem.partkey=partsupp.partkey\n AND lineitem.suppkey=partsupp.partkey\n AND lineitem.shipdate>=('1994-01-01')::DATE\n AND lineitem.shipdate<(('1994-01-01')::DATE+('1 year')::INTERVAL)::DATE\n )\n )\n AND supplier.nationkey=nation.nationkey\n AND nation.name='CANADA'\nORDER BY\n supplier.name;\n\n\nexplain results:\nNOTICE: QUERY PLAN:\n\nSort (cost=2777810917708.17..2777810917708.17 rows=200 width=81)\n -> Nested Loop (cost=0.00..2777810917700.53 rows=200 width=81)\n -> Seq Scan on nation (cost=0.00..1.31 rows=1 width=4)\n -> Index Scan using snation_index on supplier (cost=0.00..2777810917696.72 rows=200 width=77)\n SubPlan\n -> Materialize (cost=6944527291.72..6944527291.72 rows=133333 width=4)\n -> Seq Scan on partsupp (cost=0.00..6944527291.72 rows=133333 width=4)\n SubPlan\n -> Materialize (cost=8561.00..8561.00 rows=1 width=4)\n -> Seq Scan on part (cost=0.00..8561.00 rows=1 width=4)\n -> Aggregate (cost=119.61..119.61 rows=1 width=4)\n -> Index Scan using lineitem_index on lineitem (cost=0.00..119.61 rows=1 width=4)\n\npartsupp::800000 tuples\n Table \"partsupp\"\n Column | Type | Modifiers \n------------+----------------+-----------\n partkey | integer | not null\n suppkey | integer | not null\n availqty | integer | \n supplycost | numeric(10,2) | \n comment | character(199) | \nPrimary key: partsupp_pkey\nTriggers: RI_ConstraintTrigger_16597,\n RI_ConstraintTrigger_16603\n\ntpch=# select attname,n_distinct,correlation from pg_stats where tablename='partsupp';\n attname | n_distinct | correlation \n------------+------------+-------------\n 
partkey | -0.195588 | 1\n suppkey | 9910 | 0.00868363\n availqty | 9435 | -0.00788662\n supplycost | -0.127722 | -0.0116864\n comment | -1 | 0.0170702\n\nI accept query changes, reordering, indexes ideas and horizontal partitioning\nthanks in advance.\nRegards", "msg_date": "Thu, 11 Jul 2002 17:22:14 +0200", "msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>", "msg_from_op": true, "msg_subject": "please help on query" }, 
{ "msg_contents": "[moving to pgsql-sql]\nOn Thu, 11 Jul 2002 17:22:14 +0200, \"Luis Alberto Amigo Navarro\"\n<lamigo@atc.unican.es> wrote:\n>I can't improve performance on this query:\n>\n>SELECT\n> supplier.name,\n> supplier.address\n>FROM\n> supplier,\n> nation\n>WHERE\n> supplier.suppkey IN(\n> SELECT\n> partsupp.suppkey\n> FROM\n> partsupp\n> WHERE\n> partsupp.partkey IN(\n> SELECT\n> part.partkey\n> FROM\n> part\n> WHERE\n> part.name like 'forest%'\n> )\n> AND partsupp.availqty>(\n> SELECT\n> 0.5*(sum(lineitem.quantity)::FLOAT)\n> FROM\n> lineitem\n> WHERE\n> lineitem.partkey=partsupp.partkey\n> AND lineitem.suppkey=partsupp.partkey\n ^^^^^^^\n suppkey ???\n> AND lineitem.shipdate>=('1994-01-01')::DATE\n> AND lineitem.shipdate<(('1994-01-01')::DATE+('1 year')::INTERVAL)::DATE\n> )\n> )\n> AND supplier.nationkey=nation.nationkey\n> AND nation.name='CANADA'\n>ORDER 
BY\n> supplier.name;\n\nLuis,\nrules of thumb: \"Avoid subselects; use joins!\" and \"If you have to use\nsubselects, avoid IN, use EXISTS!\"\n\nLet's try. If partkey is unique in part, then\n| FROM partsupp\n| WHERE partsupp.partkey IN (SELECT part.partkey\n\ncan be replaced by\n FROM partsupp ps, part p\n WHERE ps.partkey = p.partkey\n\nor\n partsupp ps INNER JOIN part p\n ON (ps.partkey = p.partkey AND p.name LIKE '...')\n\nWhen we ignore \"part\" for now, your subselect boils down to\n\n| SELECT partsupp.suppkey\n| FROM partsupp\n| WHERE partsupp.availqty > (\n| SELECT 0.5*(sum(lineitem.quantity)::FLOAT)\n| FROM lineitem\n| WHERE lineitem.partkey=partsupp.partkey\n| AND lineitem.suppkey=partsupp.suppkey\n| AND lineitem.shipdate BETWEEN ... AND ...\n| )\n\nwhich can be rewritten to (untested)\n\n SELECT ps.suppkey\n FROM partsupp ps, lineitem li\n WHERE li.partkey=ps.partkey\n AND li.suppkey=ps.suppkey\n AND lineitem.shipdate BETWEEN ... AND ...\n GROUP BY ps.partkey, ps.suppkey\n HAVING min(ps.availqty) > 0.5*(sum(lineitem.quantity)::FLOAT)\n ^^^\n As all ps.availqty are equal in one group, you can as well\nuse max() or avg().\n\nNow we have left only one IN:\n| WHERE supplier.suppkey IN (\n| SELECT partsupp.suppkey FROM partsupp WHERE <condition> )\n\nBeing to lazy to find out, if this can be rewritten to a join, let`s\napply rule 2 here:\n\n WHERE EXISTS (\n SELECT ... FROM partsupp ps\n WHERE supplier.suppkey = ps.suppkey\n AND <condition> )\n\nHTH, but use with a grain of salt ...\n\n>Sort (cost=2777810917708.17..2777810917708.17 rows=200 width=81)\n ^^^^^^^^^^^^^^^^\nBTW, how many years are these? 
:-)\n\nServus\n Manfred\n", "msg_date": "Thu, 11 Jul 2002 18:47:03 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] please help on query" }, { "msg_contents": "I've tried\nSELECT\n supplier.name,\n supplier.address\nFROM\n supplier,\n nation,\n lineitem\nWHERE\n EXISTS(\n SELECT\n partsupp.suppkey\n FROM\n partsupp,lineitem\n WHERE\n lineitem.partkey=partsupp.partkey\n AND lineitem.suppkey=partsupp.partkey\n AND lineitem.shipdate>=('1994-01-01')::DATE\n AND lineitem.shipdate<(('1994-01-01')::DATE+('1 year')::INTERVAL)::DATE\n AND EXISTS(\n SELECT\n part.partkey\n FROM\n part\n WHERE\n part.name like 'forest%'\n )\n GROUP BY partsupp.partkey,partsupp.suppkey\n HAVING min(availqty)>(0.5*(sum(lineitem.quantity)::FLOAT))\n )\n AND supplier.nationkey=nation.nationkey\n AND nation.name='CANADA'\nORDER BY\n supplier.name;\n\nas you said and something is wrong\nSort (cost=1141741215.35..1141741215.35 rows=2400490000 width=81)\n InitPlan\n -> Aggregate (cost=0.00..921773.85 rows=48 width=24)\n InitPlan\n -> Seq Scan on part (cost=0.00..8561.00 rows=1 width=4)\n -> Group (cost=0.00..921771.44 rows=481 width=24)\n -> Result (cost=0.00..921769.04 rows=481 width=24)\n -> Merge Join (cost=0.00..921769.04 rows=481\nwidth=24)\n -> Index Scan using partsupp_pkey on partsupp\n(cost=0.00..98522.75 rows=800000 width=12)\n -> Index Scan using lsupp_index on lineitem\n(cost=0.00..821239.91 rows=145 width=12)\n -> Result (cost=1.31..112888690.31 rows=2400490000 width=81)\n -> Nested Loop (cost=1.31..112888690.31 rows=2400490000 width=81)\n -> Hash Join (cost=1.31..490.31 rows=400 width=81)\n -> Seq Scan on supplier (cost=0.00..434.00 rows=10000\nwidth=77)\n -> Hash (cost=1.31..1.31 rows=1 width=4)\n -> Seq Scan on nation (cost=0.00..1.31 rows=1\nwidth=4)\n -> Seq Scan on lineitem (cost=0.00..222208.25 rows=6001225\nwidth=0)\n\nwhere might be my mistake\nThanks and regards\n----- Original Message -----\nFrom: \"Manfred Koizar\" 
<mkoi-pg@aon.at>\nTo: \"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>\nCc: <pgsql-sql@postgresql.org>\nSent: Thursday, July 11, 2002 6:47 PM\nSubject: Re: [HACKERS] please help on query\n\n\n> [moving to pgsql-sql]\n> On Thu, 11 Jul 2002 17:22:14 +0200, \"Luis Alberto Amigo Navarro\"\n> <lamigo@atc.unican.es> wrote:\n> >I can't improve performance on this query:\n> >\n> >SELECT\n> > supplier.name,\n> > supplier.address\n> >FROM\n> > supplier,\n> > nation\n> >WHERE\n> > supplier.suppkey IN(\n> > SELECT\n> > partsupp.suppkey\n> > FROM\n> > partsupp\n> > WHERE\n> > partsupp.partkey IN(\n> > SELECT\n> > part.partkey\n> > FROM\n> > part\n> > WHERE\n> > part.name like 'forest%'\n> > )\n> > AND partsupp.availqty>(\n> > SELECT\n> > 0.5*(sum(lineitem.quantity)::FLOAT)\n> > FROM\n> > lineitem\n> > WHERE\n> > lineitem.partkey=partsupp.partkey\n> > AND lineitem.suppkey=partsupp.partkey\n> ^^^^^^^\n> suppkey ???\n> > AND lineitem.shipdate>=('1994-01-01')::DATE\n> > AND lineitem.shipdate<(('1994-01-01')::DATE+('1\nyear')::INTERVAL)::DATE\n> > )\n> > )\n> > AND supplier.nationkey=nation.nationkey\n> > AND nation.name='CANADA'\n> >ORDER BY\n> > supplier.name;\n>\n\n\n\n", "msg_date": "Thu, 11 Jul 2002 19:40:46 +0200", "msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] please help on query" }, { "msg_contents": "On Thu, 2002-07-11 at 11:22, Luis Alberto Amigo Navarro wrote:\n> I can't improve performance on this query:\n\nBlame Canada!\n\n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n", "msg_date": "11 Jul 2002 14:06:47 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": false, "msg_subject": "Re: please help on query" }, { "msg_contents": "On Thursday 11 July 2002 12:06, J. R. Nield wrote:\n> On Thu, 2002-07-11 at 11:22, Luis Alberto Amigo Navarro wrote:\n> > I can't improve performance on this query:\n>\n> Blame Canada!\n\nWhatever ... 
\n\nHow's that silver medal down there in the states?\n\n;-)\n\n", "msg_date": "Thu, 11 Jul 2002 12:09:52 -0600", "msg_from": "Andy Kopciuch <akopciuch@olympusproject.org>", "msg_from_op": false, "msg_subject": "Re: please help on query" }, { "msg_contents": "On Thu, 11 Jul 2002 19:40:46 +0200, \"Luis Alberto Amigo Navarro\"\n<lamigo@atc.unican.es> wrote:\n>I've tried\n[reformatted to fit on one page]\n| SELECT supplier.name, supplier.address\n| FROM supplier, nation, lineitem\nYou already found out that you do not need lineitem here.\n\n| WHERE EXISTS(\n| SELECT partsupp.suppkey\n| FROM partsupp,lineitem\n| WHERE\n| lineitem.partkey=partsupp.partkey\n| AND lineitem.suppkey=partsupp.partkey\nI still don't believe this suppkey=partkey\n\n| AND lineitem.shipdate [...]\n| AND EXISTS( SELECT part.partkey\n| FROM part WHERE part.name like 'forest%')\nThis subselect gives either true or false, but in any case always the\nsame result. You might want to add a condition\n\tAND part.partkey=partsupp.partkey\n\nAre you sure partkey is not unique? If it is unique you can replace\nthis subselect by a join.\n\n| GROUP BY partsupp.partkey,partsupp.suppkey\n| HAVING min(availqty)>(0.5*(sum(lineitem.quantity)::FLOAT))\n| )\n| AND supplier.nationkey=nation.nationkey\n| AND nation.name='CANADA'\n| ORDER BY supplier.name;\n\n>as you said and something is wrong\n>Sort (cost=1141741215.35..1141741215.35 rows=2400490000 width=81)\n\nThe cost is now only 1141741215.35 compared to 2777810917708.17\nbefore; this is an improvement factor of more than 2000. So what's\nyour problem? 
;-)\n\nServus\n Manfred\n", "msg_date": "Thu, 11 Jul 2002 20:44:58 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] please help on query" }, { "msg_contents": "On Thu, 2002-07-11 at 17:22, Luis Alberto Amigo Navarro wrote:\n> I can't improve performance on this query:\n\nYou could try rewriting the IN's into = joins\nor even use explicit INNER JOIN syntax to force certain plans\n\nwith a select inside another and depending on value of partsupp.partkey\nit is really hard for optimiser to do anything else than to perform the\nquery for each row.\n\nBut it may help to rewrite\n\n SELECT\n partsupp.suppkey\n FROM\n partsupp\n WHERE\n partsupp.partkey IN (\n SELECT\n part.partkey\n FROM\n part\n WHERE\n part.name like 'forest%'\n )\n AND partsupp.availqty>(\n SELECT\n 0.5*(sum(lineitem.quantity)::FLOAT)\n FROM\n lineitem\n WHERE\n lineitem.partkey=partsupp.partkey\n AND lineitem.suppkey=partsupp.partkey\n AND lineitem.shipdate>=('1994-01-01')::DATE\n AND lineitem.shipdate<(('1994-01-01')::DATE+('1\nyear')::INTERVAL)::DATE\n )\n )\n\ninto\n\n SELECT\n partsupp.suppkey\n FROM\n partsupp,\n (SELECT part.partkey as partkey\n FROM part\n WHERE part.name like 'forest%'\n ) fp,\n (SELECT 0.5*(sum(lineitem.quantity)::FLOAT) as halfsum,\n partkey\n FROM lineitem\n WHERE\n lineitem.partkey=partsupp.partkey\n AND lineitem.suppkey=partsupp.partkey\n AND lineitem.shipdate>=('1994-01-01')::DATE\n AND lineitem.shipdate<(('1994-01-01')::DATE+('1\nyear')::INTERVAL)::DATE\n ) li\n WHERE partsupp.partkey = fp.partkey \n AND partsupp.partkey = li.partkey \n AND partsupp.availqty > halfsum\n\nif \"lineitem\" is significantly smaller than \"partsupp\"\n\n\n\nBut you really should tell us more, like how many lines does lineitem\nand other tables have, \n\n----------\nHannu\n\n", "msg_date": "11 Jul 2002 22:48:08 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: please help on query" }, { 
"msg_contents": "On Thu, 2002-07-11 at 17:22, Luis Alberto Amigo Navarro wrote:\n> I can't improve performance on this query:\n\nYou may also want to rewrite\n\nlineitem.shipdate<(('1994-01-01')::DATE+('1 year')::INTERVAL)::DATE\n\ninto\n\nlineitem.shipdate<(('1995-01-01')::DATE\n\nif you can, as probably the optimiser will not recognize it else as a\nconstant and won't use index on lineitem.shipdate.\n\n----------------\nHannu\n\n\n", "msg_date": "11 Jul 2002 22:51:04 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: please help on query" }, { "msg_contents": "On Thu, 2002-07-11 at 17:22, Luis Alberto Amigo Navarro wrote:\n> I can't improve performance on this query:\n\n\nThis _may_ work.\n\nSELECT\n supplier.name,\n supplier.address\n FROM\n supplier,\n nation,\n WHERE supplier.suppkey IN (\n SELECT part.partkey\n FROM part\n WHERE part.name like 'forest%'\n INNER JOIN partsupp ON part.partkey=partsupp.partkey\n INNER JOIN (\n SELECT 0.5*(sum(lineitem.quantity)::FLOAT) as halfsum\n FROM lineitem\n WHERE lineitem.partkey=partsupp.partkey\n AND shipdate >= '1994-01-01'\n AND shipdate < '1995-01-01'\n ) li ON partsupp.availqty > halfsum\n )\n AND supplier.nationkey=nation.nationkey\n AND nation.name='CANADA'\nORDER BY supplier.name;\n\n---------------\nHannu\n\n", "msg_date": "11 Jul 2002 23:08:15 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: please help on query" }, { "msg_contents": "\n\n\n> The cost is now only 1141741215.35 compared to 2777810917708.17\n> before; this is an improvement factor of more than 2000. So what's\n> your problem? 
;-)\n>\n> Servus\n> Manfred\n>\nIn fact planner is estimating incredibly badly, it took only 833msecs now\nruns perfectly\n\nI'm going to keep on asking about another query:\n\nSELECT\n customer.name,\n customer.custkey,\n orders.orderkey,\n orders.orderdate,\n orders.totalprice,\n sum(lineitem.quantity)\nFROM\n customer,\n orders,\n lineitem\nWHERE\n exists(\n SELECT\n lineitem.orderkey\n FROM\n lineitem\n WHERE\n lineitem.orderkey=orders.orderkey\n GROUP BY\n lineitem.orderkey HAVING\n sum(lineitem.quantity)>300\n )\n AND customer.custkey=orders.custkey\n AND orders.orderkey=lineitem.orderkey\nGROUP BY\n customer.name,\n customer.custkey,\n orders.orderkey,\n orders.orderdate,\n orders.totalprice\n\nORDER BY\n orders.totalprice DESC,\n orders.orderdate;\n\nNOTICE: QUERY PLAN:\n\nSort (cost=26923941.97..26923941.97 rows=300061 width=66)\n -> Aggregate (cost=26851634.86..26896644.05 rows=300061 width=66)\n -> Group (cost=26851634.86..26889142.52 rows=3000612 width=66)\n -> Sort (cost=26851634.86..26851634.86 rows=3000612\nwidth=66)\n -> Hash Join (cost=26107574.81..26457309.10\nrows=3000612 width=66)\n -> Seq Scan on lineitem (cost=0.00..222208.25\nrows=6001225 width=8)\n -> Hash (cost=26105699.81..26105699.81\nrows=750000 width=58)\n -> Hash Join (cost=7431.00..26105699.81\nrows=750000 width=58)\n -> Seq Scan on orders\n(cost=0.00..26083268.81 rows=750000 width=25)\n SubPlan\n -> Aggregate\n(cost=0.00..17.35 rows=1 width=8)\n -> Group\n(cost=0.00..17.34 rows=5 width=8)\n -> Index Scan\nusing lineitem_pkey on lineitem (cost=0.00..17.33 rows=5 width=8)\n -> Hash (cost=7056.00..7056.00\nrows=150000 width=33)\n -> Seq Scan on customer\n(cost=0.00..7056.00 rows=150000 width=33)\n\nagain:\norders 1500000 tuples\nlineitem 6000000 tuples there are 1 to 7 lineitems per orderkey\nCustomer 150000 tuples\n\nselect attname,n_distinct,correlation from pg_stats where\ntablename='lineitem';\n attname | n_distinct | correlation\n---------------+------------+-------------\n 
orderkey | -0.199847 | 1\n partkey | 196448 | 0.0223377\n suppkey | 9658 | -0.00822751\n linenumber | 7 | 0.17274\n quantity | 50 | 0.0150153\n extendedprice | 25651 | -0.00790245\n discount | 11 | 0.103761\n tax | 9 | 0.0993771\n returnflag | 3 | 0.391434\n linestatus | 2 | 0.509791\n shipdate | 2440 | 0.0072777\n commitdate | 2497 | 0.00698162\n receiptdate | 2416 | 0.00726686\n shipinstruct | 4 | 0.241511\n shipmode | 7 | 0.138432\n comment | 275488 | 0.0188006\n(16 rows)\n\nselect attname,n_distinct,correlation from pg_stats where\ntablename='orders';\n attname | n_distinct | correlation\n---------------+------------+-------------\n orderkey | -1 | -0.999925\n custkey | 76309 | 0.00590596\n orderstatus | 3 | 0.451991\n totalprice | -1 | -0.00768806\n orderdate | 2431 | -0.0211354\n orderpriority | 5 | 0.182489\n clerk | 1009 | 0.00546939\n shippriority | 1 | 1\n comment | -0.750125 | -0.0123887\n\nCustomer\n attname | n_distinct | correlation\n------------+------------+-------------\n custkey | -1 | 1\n name | -1 | 1\n address | -1 | -0.00510274\n nationkey | 25 | 0.0170533\n phone | -1 | -0.0227816\n acctbal | -0.83444 | -0.00220958\n mktsegment | 5 | 0.205013\n comment | -1 | 0.0327827\n\nThis query takes 12 minutes to run and returns about 50 customers.\nlineitem.quantity takes values from 1 to 50, so 300 per orderkey is very\nrestrictive\n\nMay someone help on improving performance?\nAgain thanks in advance\nRegards\n\n\n", "msg_date": "Fri, 12 Jul 2002 13:37:47 +0200", "msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] please help on query" }, { "msg_contents": "hi,\n\navoid subselect: create a temp table and use join...\n\nCREATE TEMP TABLE tmp AS\n SELECT\n lineitem.orderkey\n FROM\n lineitem\n WHERE\n lineitem.orderkey=orders.orderkey\n GROUP BY\n lineitem.orderkey HAVING\n sum(lineitem.quantity)>300;\n\nCREATE INDEX tmp_idx ON tmp (orderkey);\n\n SELECT\n customer.name,\n 
customer.custkey,\n orders.orderkey,\n orders.orderdate,\n orders.totalprice,\n sum(lineitem.quantity)\n FROM\n customer,\n orders,\n lineitem,\n tmp\n WHERE\n orders.orderkey=tmp.orderkey\n AND customer.custkey=orders.custkey\n AND orders.orderkey=lineitem.orderkey\n GROUP BY\n customer.name,\n customer.custkey,\n orders.orderkey,\n orders.orderdate,\n orders.totalprice\n ORDER BY\n orders.totalprice DESC,\n orders.orderdate;\n\n\nmay be the index is not necessary...\n\nkuba\n\n\n> I'm going to keep on asking about another query:\n>\n> SELECT\n> customer.name,\n> customer.custkey,\n> orders.orderkey,\n> orders.orderdate,\n> orders.totalprice,\n> sum(lineitem.quantity)\n> FROM\n> customer,\n> orders,\n> lineitem\n> WHERE\n> exists(\n> SELECT\n> lineitem.orderkey\n> FROM\n> lineitem\n> WHERE\n> lineitem.orderkey=orders.orderkey\n> GROUP BY\n> lineitem.orderkey HAVING\n> sum(lineitem.quantity)>300\n> )\n> AND customer.custkey=orders.custkey\n> AND orders.orderkey=lineitem.orderkey\n> GROUP BY\n> customer.name,\n> customer.custkey,\n> orders.orderkey,\n> orders.orderdate,\n> orders.totalprice\n>\n> ORDER BY\n> orders.totalprice DESC,\n> orders.orderdate;\n>\n> NOTICE: QUERY PLAN:\n>\n> Sort (cost=26923941.97..26923941.97 rows=300061 width=66)\n> -> Aggregate (cost=26851634.86..26896644.05 rows=300061 width=66)\n> -> Group (cost=26851634.86..26889142.52 rows=3000612 width=66)\n> -> Sort (cost=26851634.86..26851634.86 rows=3000612\n> width=66)\n> -> Hash Join (cost=26107574.81..26457309.10\n> rows=3000612 width=66)\n> -> Seq Scan on lineitem (cost=0.00..222208.25\n> rows=6001225 width=8)\n> -> Hash (cost=26105699.81..26105699.81\n> rows=750000 width=58)\n> -> Hash Join (cost=7431.00..26105699.81\n> rows=750000 width=58)\n> -> Seq Scan on orders\n> (cost=0.00..26083268.81 rows=750000 width=25)\n> SubPlan\n> -> Aggregate\n> (cost=0.00..17.35 rows=1 width=8)\n> -> Group\n> (cost=0.00..17.34 rows=5 width=8)\n> -> Index Scan\n> using lineitem_pkey on lineitem 
(cost=0.00..17.33 rows=5 width=8)\n> -> Hash (cost=7056.00..7056.00\n> rows=150000 width=33)\n> -> Seq Scan on customer\n> (cost=0.00..7056.00 rows=150000 width=33)\n>\n> again:\n> orders 1500000 tuples\n> lineitem 6000000 tuples there are 1 to 7 lineitems per orderkey\n> Customer 150000 tuples\n>\n> select attname,n_distinct,correlation from pg_stats where\n> tablename='lineitem';\n> attname | n_distinct | correlation\n> ---------------+------------+-------------\n> orderkey | -0.199847 | 1\n> partkey | 196448 | 0.0223377\n> suppkey | 9658 | -0.00822751\n> linenumber | 7 | 0.17274\n> quantity | 50 | 0.0150153\n> extendedprice | 25651 | -0.00790245\n> discount | 11 | 0.103761\n> tax | 9 | 0.0993771\n> returnflag | 3 | 0.391434\n> linestatus | 2 | 0.509791\n> shipdate | 2440 | 0.0072777\n> commitdate | 2497 | 0.00698162\n> receiptdate | 2416 | 0.00726686\n> shipinstruct | 4 | 0.241511\n> shipmode | 7 | 0.138432\n> comment | 275488 | 0.0188006\n> (16 rows)\n>\n> select attname,n_distinct,correlation from pg_stats where\n> tablename='orders';\n> attname | n_distinct | correlation\n> ---------------+------------+-------------\n> orderkey | -1 | -0.999925\n> custkey | 76309 | 0.00590596\n> orderstatus | 3 | 0.451991\n> totalprice | -1 | -0.00768806\n> orderdate | 2431 | -0.0211354\n> orderpriority | 5 | 0.182489\n> clerk | 1009 | 0.00546939\n> shippriority | 1 | 1\n> comment | -0.750125 | -0.0123887\n>\n> Customer\n> attname | n_distinct | correlation\n> ------------+------------+-------------\n> custkey | -1 | 1\n> name | -1 | 1\n> address | -1 | -0.00510274\n> nationkey | 25 | 0.0170533\n> phone | -1 | -0.0227816\n> acctbal | -0.83444 | -0.00220958\n> mktsegment | 5 | 0.205013\n> comment | -1 | 0.0327827\n>\n> This query takes 12 minutes to run and returns about 50 customers.\n> lineitem.quantity takes values from 1 to 50, so 300 per orderkey is very\n> restrictive\n>\n> May someone help on improving performance?\n> Again thanks in advance\n> Regards\n>\n>\n>\n> 
---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n", "msg_date": "Fri, 12 Jul 2002 13:50:29 +0200 (CEST)", "msg_from": "Jakub Ouhrabka <jakub.ouhrabka@comgate.cz>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] please help on query" }, { "msg_contents": "Lineitem is being modified on run time, so creating a temp table don't\nsolves my problem\nThe time of creating this table is the same of performing the subselect (or\nso I think), it could be done creating a new table, and a new trigger, but\nthere are already triggers to calculate\nlineitem.extendedprice=part.retailprice*lineitem.quantity*(1+taxes)*(1-disco\nunt) and to calculate orderstatus in order with linestatus and to calculate\norders.totalprice as sum(extendedprice) where\nlineitem.orderkey=new.orderkey. A new trigger in order to insert orderkey if\nsum(quantity) where orderkey=new.orderkey might be excessive.\nAny other idea?\nThanks And Regards\n\n----- Original Message -----\nFrom: \"Jakub Ouhrabka\" <jakub.ouhrabka@comgate.cz>\nTo: \"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>\nCc: \"Manfred Koizar\" <mkoi-pg@aon.at>; <pgsql-sql@postgresql.org>\nSent: Friday, July 12, 2002 1:50 PM\nSubject: Re: [SQL] [HACKERS] please help on query\n\n\n> hi,\n>\n> avoid subselect: create a temp table and use join...\n>\n> CREATE TEMP TABLE tmp AS\n> SELECT\n> lineitem.orderkey\n> FROM\n> lineitem\n> WHERE\n> lineitem.orderkey=orders.orderkey\n> GROUP BY\n> lineitem.orderkey HAVING\n> sum(lineitem.quantity)>300;\n>\n> CREATE INDEX tmp_idx ON tmp (orderkey);\n>\n> SELECT\n> customer.name,\n> customer.custkey,\n> orders.orderkey,\n> orders.orderdate,\n> orders.totalprice,\n> sum(lineitem.quantity)\n> FROM\n> customer,\n> orders,\n> lineitem,\n> tmp\n> WHERE\n> orders.orderkey=tmp.orderkey\n> AND customer.custkey=orders.custkey\n> AND orders.orderkey=lineitem.orderkey\n> GROUP 
BY\n> customer.name,\n> customer.custkey,\n> orders.orderkey,\n> orders.orderdate,\n> orders.totalprice\n> ORDER BY\n> orders.totalprice DESC,\n> orders.orderdate;\n>\n>\n> may be the index is not necessary...\n>\n> kuba\n>\n>\n> > I'm going to keep on asking about another query:\n> >\n> > SELECT\n> > customer.name,\n> > customer.custkey,\n> > orders.orderkey,\n> > orders.orderdate,\n> > orders.totalprice,\n> > sum(lineitem.quantity)\n> > FROM\n> > customer,\n> > orders,\n> > lineitem\n> > WHERE\n> > exists(\n> > SELECT\n> > lineitem.orderkey\n> > FROM\n> > lineitem\n> > WHERE\n> > lineitem.orderkey=orders.orderkey\n> > GROUP BY\n> > lineitem.orderkey HAVING\n> > sum(lineitem.quantity)>300\n> > )\n> > AND customer.custkey=orders.custkey\n> > AND orders.orderkey=lineitem.orderkey\n> > GROUP BY\n> > customer.name,\n> > customer.custkey,\n> > orders.orderkey,\n> > orders.orderdate,\n> > orders.totalprice\n> >\n> > ORDER BY\n> > orders.totalprice DESC,\n> > orders.orderdate;\n> >\n> > NOTICE: QUERY PLAN:\n> >\n> > Sort (cost=26923941.97..26923941.97 rows=300061 width=66)\n> > -> Aggregate (cost=26851634.86..26896644.05 rows=300061 width=66)\n> > -> Group (cost=26851634.86..26889142.52 rows=3000612 width=66)\n> > -> Sort (cost=26851634.86..26851634.86 rows=3000612\n> > width=66)\n> > -> Hash Join (cost=26107574.81..26457309.10\n> > rows=3000612 width=66)\n> > -> Seq Scan on lineitem\n(cost=0.00..222208.25\n> > rows=6001225 width=8)\n> > -> Hash (cost=26105699.81..26105699.81\n> > rows=750000 width=58)\n> > -> Hash Join\n(cost=7431.00..26105699.81\n> > rows=750000 width=58)\n> > -> Seq Scan on orders\n> > (cost=0.00..26083268.81 rows=750000 width=25)\n> > SubPlan\n> > -> Aggregate\n> > (cost=0.00..17.35 rows=1 width=8)\n> > -> Group\n> > (cost=0.00..17.34 rows=5 width=8)\n> > -> Index Scan\n> > using lineitem_pkey on lineitem (cost=0.00..17.33 rows=5 width=8)\n> > -> Hash (cost=7056.00..7056.00\n> > rows=150000 width=33)\n> > -> Seq Scan on customer\n> > 
(cost=0.00..7056.00 rows=150000 width=33)\n> >\n> > again:\n> > orders 1500000 tuples\n> > lineitem 6000000 tuples there are 1 to 7 lineitems per orderkey\n> > Customer 150000 tuples\n> >\n> > select attname,n_distinct,correlation from pg_stats where\n> > tablename='lineitem';\n> > attname | n_distinct | correlation\n> > ---------------+------------+-------------\n> > orderkey | -0.199847 | 1\n> > partkey | 196448 | 0.0223377\n> > suppkey | 9658 | -0.00822751\n> > linenumber | 7 | 0.17274\n> > quantity | 50 | 0.0150153\n> > extendedprice | 25651 | -0.00790245\n> > discount | 11 | 0.103761\n> > tax | 9 | 0.0993771\n> > returnflag | 3 | 0.391434\n> > linestatus | 2 | 0.509791\n> > shipdate | 2440 | 0.0072777\n> > commitdate | 2497 | 0.00698162\n> > receiptdate | 2416 | 0.00726686\n> > shipinstruct | 4 | 0.241511\n> > shipmode | 7 | 0.138432\n> > comment | 275488 | 0.0188006\n> > (16 rows)\n> >\n> > select attname,n_distinct,correlation from pg_stats where\n> > tablename='orders';\n> > attname | n_distinct | correlation\n> > ---------------+------------+-------------\n> > orderkey | -1 | -0.999925\n> > custkey | 76309 | 0.00590596\n> > orderstatus | 3 | 0.451991\n> > totalprice | -1 | -0.00768806\n> > orderdate | 2431 | -0.0211354\n> > orderpriority | 5 | 0.182489\n> > clerk | 1009 | 0.00546939\n> > shippriority | 1 | 1\n> > comment | -0.750125 | -0.0123887\n> >\n> > Customer\n> > attname | n_distinct | correlation\n> > ------------+------------+-------------\n> > custkey | -1 | 1\n> > name | -1 | 1\n> > address | -1 | -0.00510274\n> > nationkey | 25 | 0.0170533\n> > phone | -1 | -0.0227816\n> > acctbal | -0.83444 | -0.00220958\n> > mktsegment | 5 | 0.205013\n> > comment | -1 | 0.0327827\n> >\n> > This query takes 12 minutes to run and returns about 50 customers.\n> > lineitem.quantity takes values from 1 to 50, so 300 per orderkey is very\n> > restrictive\n> >\n> > May someone help on improving performance?\n> > Again thanks in advance\n> > Regards\n> >\n> >\n> >\n> 
> ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n>\n\n\n", "msg_date": "Fri, 12 Jul 2002 17:32:50 +0200", "msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] please help on query" }, { "msg_contents": "On Fri, 12 Jul 2002 17:32:50 +0200\n\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es> wrote:\n\n\n> Lineitem is being modified on run time, so creating a temp table don't\n> solves my problem\n> The time of creating this table is the same of performing the subselect (or\n> so I think), it could be done creating a new table, and a new trigger, but\n> there are already triggers to calculate\n> lineitem.extendedprice=part.retailprice*lineitem.quantity*(1+taxes)*(1-disco\n> unt) and to calculate orderstatus in order with linestatus and to calculate\n> orders.totalprice as sum(extendedprice) where\n> lineitem.orderkey=new.orderkey. 
A new trigger in order to insert orderkey if\n> sum(quantity) where orderkey=new.orderkey might be excessive.\n> Any other idea?\n> Thanks And Regards\n> \n> ----- Original Message -----\n> From: \"Jakub Ouhrabka\" <jakub.ouhrabka@comgate.cz>\n> To: \"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>\n> Cc: \"Manfred Koizar\" <mkoi-pg@aon.at>; <pgsql-sql@postgresql.org>\n> Sent: Friday, July 12, 2002 1:50 PM\n> Subject: Re: [SQL] [HACKERS] please help on query\n> \n> >\n> > avoid subselect: create a temp table and use join...\n> >\n> > CREATE TEMP TABLE tmp AS\n> > SELECT\n> > lineitem.orderkey\n> > FROM\n> > lineitem\n> > WHERE\n> > lineitem.orderkey=orders.orderkey\n> > GROUP BY\n> > lineitem.orderkey HAVING\n> > sum(lineitem.quantity)>300;\n\n\nHi,\n\nI'm not sure whether its performance can be improved or not. But I feel\nthere is a slight chance to reduce the total number of the tuples which \nPlanner must think.\n\nBTW, how much time does the following query take in your situation, \nand how many rows does it retrieve ?\n\n\nEXPLAIN ANALYZE\nSELECT\n lineitem.orderkey\n FROM\n lineitem\n GROUP BY\n lineitem.orderkey\n HAVING\n SUM(lineitem.quantity) > 300;\n\n\n\nRegards,\nMasaru Sugawara\n\n\n", "msg_date": "Sun, 14 Jul 2002 21:23:28 +0900", "msg_from": "Masaru Sugawara <rk73@sea.plala.or.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] please help on query" }, { "msg_contents": "\n----- Original Message -----\nFrom: \"Masaru Sugawara\" <rk73@sea.plala.or.jp>\nTo: \"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>\nCc: <pgsql-sql@postgresql.org>\nSent: Sunday, July 14, 2002 2:23 PM\nSubject: Re: [SQL] [HACKERS] please help on query\n\n\nThis is the output:\n\nAggregate (cost=0.00..647161.10 rows=600122 width=8) (actual\ntime=4959.19..347328.83 rows=62 loops=1)\n -> Group (cost=0.00..632158.04 rows=6001225 width=8) (actual\ntime=10.79..274259.16 rows=6001225 loops=1)\n -> Index Scan using lineitem_pkey on lineitem\n(cost=0.00..617154.97 
rows=6001225 width=8) (actual time=10.77..162439.11\nrows=6001225 loops=1)\nTotal runtime: 347330.28 msec\n\nit is returning all rows in lineitem. Why is it using index?\nThanks and regards\n\n\n> On Fri, 12 Jul 2002 17:32:50 +0200\n> \"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es> wrote:\n>\n>\n> > Lineitem is being modified on run time, so creating a temp table don't\n> > solves my problem\n> > The time of creating this table is the same of performing the subselect\n(or\n> > so I think), it could be done creating a new table, and a new trigger,\nbut\n> > there are already triggers to calculate\n> >\nlineitem.extendedprice=part.retailprice*lineitem.quantity*(1+taxes)*(1-disco\n> > unt) and to calculate orderstatus in order with linestatus and to\ncalculate\n> > orders.totalprice as sum(extendedprice) where\n> > lineitem.orderkey=new.orderkey. A new trigger in order to insert\norderkey if\n> > sum(quantity) where orderkey=new.orderkey might be excessive.\n> > Any other idea?\n> > Thanks And Regards\n> >\n> > ----- Original Message -----\n> > From: \"Jakub Ouhrabka\" <jakub.ouhrabka@comgate.cz>\n> > To: \"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>\n> > Cc: \"Manfred Koizar\" <mkoi-pg@aon.at>; <pgsql-sql@postgresql.org>\n> > Sent: Friday, July 12, 2002 1:50 PM\n> > Subject: Re: [SQL] [HACKERS] please help on query\n> >\n> > >\n> > > avoid subselect: create a temp table and use join...\n> > >\n> > > CREATE TEMP TABLE tmp AS\n> > > SELECT\n> > > lineitem.orderkey\n> > > FROM\n> > > lineitem\n> > > WHERE\n> > > lineitem.orderkey=orders.orderkey\n> > > GROUP BY\n> > > lineitem.orderkey HAVING\n> > > sum(lineitem.quantity)>300;\n>\n>\n> Hi,\n>\n> I'm not sure whether its performance can be improved or not. 
But I feel\n> there is a slight chance to reduce the total number of the tuples which\n> Planner must think.\n>\n> BTW, how much time does the following query take in your situation,\n> and how many rows does it retrieve ?\n>\n>\n> EXPLAIN ANALYZE\n> SELECT\n> lineitem.orderkey\n> FROM\n> lineitem\n> GROUP BY\n> lineitem.orderkey\n> HAVING\n> SUM(lineitem.quantity) > 300;\n>\n>\n>\n> Regards,\n> Masaru Sugawara\n>\n>\n>\n\n\n", "msg_date": "Mon, 15 Jul 2002 09:45:36 +0200", "msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] please help on query" }, { "msg_contents": "On Mon, 15 Jul 2002 09:45:36 +0200\n\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es> wrote:\n\n> This is the output:\n> \n> Aggregate (cost=0.00..647161.10 rows=600122 width=8) (actual\n> time=4959.19..347328.83 rows=62 loops=1)\n> -> Group (cost=0.00..632158.04 rows=6001225 width=8) (actual\n> time=10.79..274259.16 rows=6001225 loops=1)\n> -> Index Scan using lineitem_pkey on lineitem\n> (cost=0.00..617154.97 rows=6001225 width=8) (actual time=10.77..162439.11\n> rows=6001225 loops=1)\n> Total runtime: 347330.28 msec\n> \n> it is returning all rows in lineitem. Why is it using index?\n\n\nSorry, I don't know the reason. \nI need more info. 
Can you show me the outputs of EXPLAIN ANALYZE ?\n\n\nEXPLAIN ANALYZE\nSELECT\n orders.orderkey\n FROM\n lineitem LEFT OUTER JOIN\n orders USING(orderkey)\n WHERE\n orders.orderkey IS NOT NULL\n GROUP BY\n orders.orderkey\n HAVING\n SUM(lineitem.quantity) > 300;\n\n\n\nEXPLAIN ANALYZE\nSELECT\n t2.*\nFROM (SELECT\n orders.orderkey\n FROM\n lineitem LEFT OUTER JOIN\n orders USING(orderkey)\n WHERE\n orders.orderkey IS NOT NULL\n GROUP BY\n orders.orderkey\n HAVING\n SUM(lineitem.quantity) > 300\n ) AS t1 LEFT OUTER JOIN\n orders AS t2 USING(orderkey)\nORDER BY t2.custkey \n\n\nRegards,\nMasaru Sugawara\n\n\n", "msg_date": "Tue, 16 Jul 2002 01:15:35 +0900", "msg_from": "Masaru Sugawara <rk73@sea.plala.or.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] please help on query" }, { "msg_contents": "\n----- Original Message -----\nFrom: \"Masaru Sugawara\" <rk73@sea.plala.or.jp>\nTo: \"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>\nCc: <pgsql-sql@postgresql.org>\nSent: Monday, July 15, 2002 6:15 PM\nSubject: Re: [SQL] [HACKERS] please help on query\n\n\n\n>\n> Sorry, I don't know the reason.\n> I need more info. 
Can you show me the outputs of EXPLAIN ANALYZE ?\n>\nHere it is:\n\n\n>\n> EXPLAIN ANALYZE\n> SELECT\n> orders.orderkey\n> FROM\n> lineitem LEFT OUTER JOIN\n> orders USING(orderkey)\n> WHERE\n> orders.orderkey IS NOT NULL\n> GROUP BY\n> orders.orderkey\n> HAVING\n> SUM(lineitem.quantity) > 300;\n>\nAggregate (cost=1257368.92..1287375.04 rows=600122 width=12) (actual\ntime=1236941.71..1454824.56 rows=62 loops=1)\n -> Group (cost=1257368.92..1272371.98 rows=6001225 width=12) (actual\ntime=1233968.87..1385034.91 rows=6001225 loops=1)\n -> Sort (cost=1257368.92..1257368.92 rows=6001225 width=12)\n(actual time=1233968.82..1276147.37 rows=6001225 loops=1)\n -> Hash Join (cost=166395.00..520604.08 rows=6001225\nwidth=12) (actual time=59061.21..773997.08 rows=6001225 loops=1)\n -> Seq Scan on lineitem (cost=0.00..195405.25\nrows=6001225 width=8) (actual time=20.66..115511.34 rows=6001225 loops=1)\n -> Hash (cost=162645.00..162645.00 rows=1500000\nwidth=4) (actual time=59032.16..59032.16 rows=0 loops=1)\n -> Seq Scan on orders (cost=0.00..162645.00\nrows=1500000 width=4) (actual time=17.33..44420.10 rows=1500000 loops=1)\nTotal runtime: 1454929.11 msec\n\n\n\n\n>\n>\n> EXPLAIN ANALYZE\n> SELECT\n> t2.*\n> FROM (SELECT\n> orders.orderkey\n> FROM\n> lineitem LEFT OUTER JOIN\n> orders USING(orderkey)\n> WHERE\n> orders.orderkey IS NOT NULL\n> GROUP BY\n> orders.orderkey\n> HAVING\n> SUM(lineitem.quantity) > 300\n> ) AS t1 LEFT OUTER JOIN\n> orders AS t2 USING(orderkey)\n> ORDER BY t2.custkey\n>\n\nSort (cost=1739666.43..1739666.43 rows=600122 width=119) (actual\ntime=1538897.23..1538897.47 rows=62 loops=1)\n -> Merge Join (cost=1344971.49..1682069.98 rows=600122 width=119)\n(actual time=1440886.58..1538886.03 rows=62 loops=1)\n -> Index Scan using orders_pkey on orders t2 (cost=0.00..324346.65\nrows=1500000 width=115) (actual time=32.80..87906.98 rows=1455276 loops=1)\n -> Sort (cost=1344971.49..1344971.49 rows=600122 width=12) (actual\ntime=1439550.31..1439550.73 rows=62 
loops=1)\n -> Subquery Scan t1 (cost=1257368.92..1287375.04 rows=600122\nwidth=12) (actual time=1222560.86..1439549.36 rows=62 loops=1)\n -> Aggregate (cost=1257368.92..1287375.04 rows=600122\nwidth=12) (actual time=1222560.84..1439548.42 rows=62 loops=1)\n -> Group (cost=1257368.92..1272371.98\nrows=6001225 width=12) (actual time=1219607.04..1369327.42 rows=6001225\nloops=1)\n -> Sort (cost=1257368.92..1257368.92\nrows=6001225 width=12) (actual time=1219607.00..1261208.08 rows=6001225\nloops=1)\n -> Hash Join\n(cost=166395.00..520604.08 rows=6001225 width=12) (actual\ntime=65973.31..769253.41 rows=6001225 loops=1)\n -> Seq Scan on lineitem\n(cost=0.00..195405.25 rows=6001225 width=8) (actual time=20.07..115247.61\nrows=6001225 loops=1)\n -> Hash\n(cost=162645.00..162645.00 rows=1500000 width=4) (actual\ntime=65943.80..65943.80 rows=0 loops=1)\n -> Seq Scan on orders\n(cost=0.00..162645.00 rows=1500000 width=4) (actual time=39.04..52049.90\nrows=1500000 loops=1)\nTotal runtime: 1539010.00 msec\n\n\n\nThanks and regards\n\n\n", "msg_date": "Tue, 16 Jul 2002 10:51:03 +0200", "msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] please help on query" }, { "msg_contents": "On Tue, 16 Jul 2002 10:51:03 +0200\n\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es> wrote:\n\n\n> Aggregate (cost=1257368.92..1287375.04 rows=600122 width=12) (actual\n> time=1236941.71..1454824.56 rows=62 loops=1)\n> -> Group (cost=1257368.92..1272371.98 rows=6001225 width=12) (actual\n> time=1233968.87..1385034.91 rows=6001225 loops=1)\n> -> Sort (cost=1257368.92..1257368.92 rows=6001225 width=12)\n> (actual time=1233968.82..1276147.37 rows=6001225 loops=1)\n> -> Hash Join (cost=166395.00..520604.08 rows=6001225\n> width=12) (actual time=59061.21..773997.08 rows=6001225 loops=1)\n> -> Seq Scan on lineitem (cost=0.00..195405.25\n> rows=6001225 width=8) (actual time=20.66..115511.34 rows=6001225 loops=1)\n> -> Hash 
(cost=162645.00..162645.00 rows=1500000\n> width=4) (actual time=59032.16..59032.16 rows=0 loops=1)\n> -> Seq Scan on orders (cost=0.00..162645.00\n> rows=1500000 width=4) (actual time=17.33..44420.10 rows=1500000 loops=1)\n> Total runtime: 1454929.11 msec\n\n\nHmm, does each of the three tables have some indices like the following?\nIf not so, could you execute EXPLAIN ANALYZE after creating the indices.\n\n\ncreate index idx_lineitem_orderkey on lineitem(orderkey);\ncreate index idx_orders_orderkey on orders(orderkey);\ncreate index idx_orders_custkey on orders(custkey);\ncreate index idx_customer_custkey on customer(custkey);\n\n\nRegards,\nMasaru Sugawara\n\n\n", "msg_date": "Wed, 17 Jul 2002 22:28:37 +0900", "msg_from": "Masaru Sugawara <rk73@sea.plala.or.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] please help on query" } ]
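The rewrite explored in the thread above — replacing the correlated `EXISTS ... GROUP BY ... HAVING` subquery with a single join against a pre-aggregated derived table — can be sketched in miniature. The snippet below is an illustration only: it uses SQLite in place of PostgreSQL 7.2 and a few made-up rows rather than the TPC-H tables from the thread, and simply checks that both query shapes return the same orders.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (orderkey INTEGER PRIMARY KEY, custkey INTEGER)")
cur.execute("CREATE TABLE lineitem (orderkey INTEGER, quantity INTEGER)")
cur.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10), (2, 11), (3, 10)])
# Order 3 is the only one whose line items sum past the threshold of 100.
cur.executemany("INSERT INTO lineitem VALUES (?, ?)",
                [(1, 40), (1, 45), (2, 10), (3, 50), (3, 49), (3, 48)])

# Original shape: correlated EXISTS with GROUP BY/HAVING inside,
# re-evaluated per outer row.
exists_rows = cur.execute("""
    SELECT o.orderkey, SUM(l.quantity)
      FROM orders o JOIN lineitem l ON l.orderkey = o.orderkey
     WHERE EXISTS (SELECT l2.orderkey FROM lineitem l2
                    WHERE l2.orderkey = o.orderkey
                    GROUP BY l2.orderkey
                   HAVING SUM(l2.quantity) > 100)
     GROUP BY o.orderkey ORDER BY o.orderkey
""").fetchall()

# Rewritten shape: aggregate lineitem once into a derived table, then join.
join_rows = cur.execute("""
    SELECT o.orderkey, SUM(l.quantity)
      FROM (SELECT orderkey FROM lineitem
             GROUP BY orderkey HAVING SUM(quantity) > 100) big
      JOIN orders o ON o.orderkey = big.orderkey
      JOIN lineitem l ON l.orderkey = o.orderkey
     GROUP BY o.orderkey ORDER BY o.orderkey
""").fetchall()

print(exists_rows, join_rows)
```

Whether the join form actually wins depends on the planner; the point in the thread is that it computes the aggregate over `lineitem` once instead of once per outer row.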
[ { "msg_contents": "I think that is the proper behavior Tom.\n\nAlso I agree with Bruce that this might be an oversight in the standard. That\nis why standards evolve. As I write this I am also sending a note to H2 asking\nabout this very issue. The latest working draft still has this construct.\n\nDana\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Thursday, July 11, 2002 12:36 PM\n> To: Bruce Momjian\n> Cc: Groff, Dana; Jan Wieck; Stephan Szabo; \n> pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Should this require CASCADE? \n> \n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Now, if someone wanted to say CASCADE|RESTRICT was\n> > required for DROP _only_ if there is some foreign key \n> references to the\n> > table, I would be OK with that, but that's not what the \n> standard says.\n> \n> But in fact that is not different from what I propose to do. Consider\n> what such a rule really means:\n> \t* if no dependencies exist for the object, go ahead and delete.\n> \t* if dependencies exist, complain.\n> How is that different from \"the default behavior is RESTRICT\"?\n> \n> \t\t\tregards, tom lane\n> \n", "msg_date": "Thu, 11 Jul 2002 12:43:18 -0400", "msg_from": "\"Groff, Dana\" <Dana.Groff@filetek.com>", "msg_from_op": true, "msg_subject": "Re: Should this require CASCADE? " } ]
[ { "msg_contents": "What should be the permissions required to create a cast?\n\nCurrently, it's approximately first come, first serve. You probably need\nto have execute privilege on the function, but that is the least concern.\n\nWith no permissions required on either the source or the target type, it's\neasy to boobytrap the entire system by creating bogus casting functions.\n\nGiven the current granularity of the permissions on data types we'd need\nto require the user to own both the source and the target type, which\nwould make the entire effort quite useless.\n\nEven if we had a \"usage\" privilege on types, I'm not sure if that would be\nappropriate, because creating a cast function is really more than usage --\nit affects how the type behaves.\n\nSo I'm afraid this might even need to be a separate privilege altogether.\n\nSQL99 effectively says that you must own the source type, the target type,\nand the cast function, unless a type is not \"user-defined\", which is a\ndistinction we don't make.\n\nComments?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 11 Jul 2002 20:32:21 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Permissions to create casts" } ]
[ { "msg_contents": "Mac OS X:\npostgres% psql --version\npsql (PostgreSQL) 7.2.1\ncontains support for: multibyte\n\nLEDEV=# create table test1 (foo varchar(5));\nCREATE\nLEDEV=# create table test2 (foo char(5));\nCREATE\nLEDEV=# insert into test2 (foo) values ('S');\nINSERT 3724249 1\nLEDEV=# insert into test1 (foo) values ('S');\nINSERT 3724250 1\nLEDEV=# select a.foo, b.foo from test1 a, test2 b where a.foo = \nb.foo::text;\n foo | foo\n-----+-----\n(0 rows)\n\nLEDEV=# select a.foo = 'S', b.foo = 'S' from test1 a, test2 b;\n ?column? | ?column?\n----------+----------\n t | t\n(1 row)\n\nLEDEV=# select a.foo, b.foo from test1 a, test2 b where CAST(a.foo as \nCHAR) = b.foo;\n foo | foo\n-----+-------\n S | S\n(1 row)\n\n", "msg_date": "Thu, 11 Jul 2002 16:47:39 -0500", "msg_from": "Scott Royston <scroyston@mac.com>", "msg_from_op": true, "msg_subject": "string cast/compare broken?" }, { "msg_contents": "Scott Royston <scroyston@mac.com> writes:\n> [ various examples of comparing char and varchar ]\n\nI see no bug here. For the CHAR datatype, trailing spaces are defined\nto be insignificant. For VARCHAR and TEXT, trailing spaces are\nsignificant. If you want to compare a CHAR value to a VARCHAR or TEXT\nvalue, your best bet is a locution like\n\trtrim(charval) = varcharval\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Jul 2002 23:50:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: string cast/compare broken? " }, { "msg_contents": "On Fri, 2002-07-12 at 08:50, Tom Lane wrote:\n> Scott Royston <scroyston@mac.com> writes:\n> > [ various examples of comparing char and varchar ]\n> \n> I see no bug here. For the CHAR datatype, trailing spaces are defined\n> to be insignificant. For VARCHAR and TEXT, trailing spaces are\n> significant. 
If you want to compare a CHAR value to a VARCHAR or TEXT\n> value, your best bet is a locution like\n> \trtrim(charval) = varcharval\n\nI guess the strangest part was that both a.foo = 'S' and b.foo = 'S' but\nnot a.foo=b.foo; (a.foo is varchar(5) , b.foo is char(5) )\n\nI guess that tha 'S' that b.foo gets compared to is converted to 'S '\nbefore comparison but when comparing varchar(5) and char(5) they are\nboth compared by converting them to varchar which keeps the trailing\nspaces from char(5). If the conversion where varchar(5) --> char(5) then\nthey would compare equal.\n\nI vaguely remember something in the standard about cases when comparing\nchar() types should discard extra spaces.\n\n-------------\nHannu\n\n\n\n", "msg_date": "12 Jul 2002 09:14:07 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: string cast/compare broken?" } ]
[ { "msg_contents": "\n> I guess the strangest part was that both a.foo = 'S' and b.foo = 'S' but\n> not a.foo=b.foo; (a.foo is varchar(5) , b.foo is char(5) )\n> \n> I guess that tha 'S' that b.foo gets compared to is converted to 'S '\n> before comparison but when comparing varchar(5) and char(5) they are\n> both compared by converting them to varchar which keeps the trailing\n> spaces from char(5). \n\nYes, I think this is inconvenient/unintuitive. If it is doable according to \nstandards, this should imho be fixed.\n\n> If the conversion where varchar(5) --> char(5) then\n> they would compare equal.\n\nI am not sure, since, if the varchar stored 'S ' then the comparison to a char 'S' \nshould probably still fail, since those spaces in the varchar are significant. \nInformix compares them equal, so I guess argumentation can be made in that direction\ntoo (that currently evades my understanding of intuitive reasoning :-). \n\nAndreas\n", "msg_date": "Fri, 12 Jul 2002 09:35:26 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: string cast/compare broken?" 
}, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n>> If the conversion where varchar(5) --> char(5) then\n>> they would compare equal.\n\n> I am not sure, since, if the varchar stored 'S ' then the comparison\n> to a char 'S' should probably still fail,\n\nThere is no comparison of varchar to char:\n\nregression=# select 'z'::char = 'z'::varchar;\nERROR: Unable to identify an operator '=' for types 'character' and 'character varying'\n You will have to retype this query using an explicit cast\nregression=#\n\nI consider this a feature, not a bug, since it's quite unclear which\nsemantics ought to be used.\n\nThe cases Scott originally posted all involved various forms of\ncoercion to force both sides to be the same type; I'm not sure\nthat he quite understood why he had to do that, but perhaps it's now\nbecoming clear.\n\nI wonder whether it would be a good idea to stop considering char\nas binary-compatible to varchar and text. Instead we could set\nthings up so that there is a coercion function involved, namely\nrtrim(). But that would probably make us diverge even further\nfrom the spec.\n\nHas anyone studied how other DBMSs handle CHAR vs VARCHAR? Judging\nfrom the number of questions we get on this point, I have to wonder\nif we are not out of step with the way other systems do it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Jul 2002 09:16:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: string cast/compare broken? 
" }, { "msg_contents": "On Fri, 12 Jul 2002, Tom Lane wrote:\n\n> \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> >> If the conversion where varchar(5) --> char(5) then\n> >> they would compare equal.\n>\n> > I am not sure, since, if the varchar stored 'S ' then the comparison\n> > to a char 'S' should probably still fail,\n>\n> There is no comparison of varchar to char:\n>\n> regression=# select 'z'::char = 'z'::varchar;\n> ERROR: Unable to identify an operator '=' for types 'character' and 'character varying'\n> You will have to retype this query using an explicit cast\n> regression=#\n>\n> I consider this a feature, not a bug, since it's quite unclear which\n> semantics ought to be used.\n>\n> The cases Scott originally posted all involved various forms of\n> coercion to force both sides to be the same type; I'm not sure\n> that he quite understood why he had to do that, but perhaps it's now\n> becoming clear.\n>\n> I wonder whether it would be a good idea to stop considering char\n> as binary-compatible to varchar and text. Instead we could set\n> things up so that there is a coercion function involved, namely\n> rtrim(). But that would probably make us diverge even further\n> from the spec.\n>\n> Has anyone studied how other DBMSs handle CHAR vs VARCHAR? Judging\n> from the number of questions we get on this point, I have to wonder\n> if we are not out of step with the way other systems do it.\n\nI don't think it's just a CHAR vs VARCHAR issue. 
AFAICT the spec defines\nall of this in terms of the collations used and there are (imho arcane)\nrules about converting between them for comparisons and operations.\n\nTechnically I think varcharcol=charcol *is* illegal if we are\nsaying that char has a collation with PAD SPACE and varchar\nhas a collation with NO PAD, because they're different collations\nand character value expressions from column reference are implicit\nand that doesn't allow comparison between two different collations.\nOf course I could also be misreading it.\n\n", "msg_date": "Fri, 12 Jul 2002 07:16:58 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: string cast/compare broken? " } ]
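The behavior debated in the thread above comes down to the SQL collation attributes Stephan cites: char comparisons use PAD SPACE (blank-pad both values to a common length before comparing), while varchar/text use NO PAD (trailing blanks are significant). A minimal Python sketch of the two rules — purely illustrative, with invented function names, and not PostgreSQL's implementation:

```python
# Illustrative sketch (not PostgreSQL internals): SQL's PAD SPACE vs NO PAD
# comparison semantics, which is what makes char and varchar behave differently.

def pad_space_eq(a: str, b: str) -> bool:
    """char-style comparison: pad the shorter value with spaces first."""
    width = max(len(a), len(b))
    return a.ljust(width) == b.ljust(width)

def no_pad_eq(a: str, b: str) -> bool:
    """varchar-style comparison: trailing spaces are significant."""
    return a == b

# 'S' stored into char(5) is blank-padded to 'S    ' on input.
char_value = "S".ljust(5)      # what char(5) actually stores
varchar_value = "S"            # what varchar(5) stores

assert pad_space_eq(char_value, "S")     # char comparison: equal
assert not no_pad_eq(char_value, "S")    # varchar comparison: not equal
assert no_pad_eq(varchar_value, "S")
```

Under the PAD SPACE rule `'S'` and `'S    '` compare equal, which is why coercing varchar to char (as Informix apparently does) would make the original comparisons in this thread succeed, while coercing char to varchar makes them fail.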
[ { "msg_contents": "Hi\nWe're two doctoring students and we have a little problem to resolve.\nWe're using Grass5pre3 and PostgreSQL 7.2 (Linux) to map vehicular\npollution of our city. We have a map of the streets and we have to\nassign 24 values (+ the label) to each street.\nWhat would be a smart way to solve this problem using Postgres?\nThanx a lot for your help, Alberto & Massimo.\n\n\n\nHello.\nWe are two students finishing our degree and we have a problem to solve.\nWe are using Grass5pre3 and PostgreSQL 7.2 (Linux) to map vehicular\npollution in our city.\nWe have a map of the streets and we have to assign 24 values\n(plus the label) to each street.\nWhat is the smartest and fastest way to do this using Postgres?\nMany thanks for your help, Alberto & Massimo.\n", "msg_date": "Fri, 12 Jul 2002 10:53:59 +0200", "msg_from": "\"alf0@email.it\" <alf0@email.it>", "msg_from_op": true, "msg_subject": "urgent needed" }, { "msg_contents": "On Fri, 12 Jul 2002, alf0@email.it wrote:\n\n> Hi\n> We're two doctoring students and we have a little problem to resolve.\n> We're using Grass5pre3 and PostgreSQL 7.2 (Linux) to map vehicular\n> pollution of our city. 
We have a map of the streets and we have to\n> assign 24 values (+ the label) to each street.\n> What would be a smart way to solve this problem using Postgres?\n> Thanx a lot for your help, Alberto & Massimo.\n>\n\nI'd use the contrib/intarray module, which was developed for exactly this\nkind of problem (in our case we have messages assigned to several\nsections).\n\n\n\tRegards,\n\n\t\tOleg\n\n>\n>\n> Hello.\n> We are two students finishing our degree and we have a problem to solve.\n> We are using Grass5pre3 and PostgreSQL 7.2 (Linux) to map vehicular\n> pollution in our city.\n> We have a map of the streets and we have to assign 24 values\n> (plus the label) to each street.\n> What is the smartest and fastest way to do this using Postgres?\n> Many thanks for your help, Alberto & Massimo.\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 12 Jul 2002 15:45:24 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: urgent needed" } ]
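The intarray suggestion in the thread above amounts to keeping the 24 per-street values in a single array column instead of 24 separate rows. A rough Python sketch of that grouping step, just to show the data layout an array column would store — the function and street names here are invented for illustration and have nothing to do with the contrib module's actual API:

```python
# Hypothetical sketch: collapse (street, hour, value) measurements into one
# 24-element list per street, i.e. the shape of an array-per-row layout.

def rows_to_arrays(rows):
    """rows: iterable of (street_label, hour in 0..23, value)."""
    streets = {}
    for label, hour, value in rows:
        streets.setdefault(label, [None] * 24)[hour] = value
    return streets

measurements = [("Via Roma", 0, 41.0), ("Via Roma", 1, 38.5),
                ("Corso Italia", 0, 52.3)]
arrays = rows_to_arrays(measurements)
assert arrays["Via Roma"][0] == 41.0
assert arrays["Via Roma"][1] == 38.5
assert len(arrays["Corso Italia"]) == 24
```

With this layout, each street is one row holding its label plus a 24-element array, which matches the "24 values + label per street" requirement directly.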
[ { "msg_contents": "\n\tDear Sirs!:)I encounted one small problem,working with \nPostgreSQL 7.3devel.It can look a\nbit strange,but i have to use whitespaces in names of databases,tables,fields\nand so on(like \"roomno jk\").It's possible to create them all and work with them\n(INSERT,DELETE,UPDATE),but PL/pgSQL parser(compiler ?) can't execute such \nstatements.To explain the problem, I took and changed next examples from \npgsql/src/pl/plpgsql/test:\n\n-- ************************************************************ \n-- * Tables for the patchfield test of PL/pgSQL \n-- * $Header: /projects/cvsroot/pgsql/src/pl/plpgsql/test/tables.sql,v 1.1 1998/08/24 19:16:27 momjian Exp $\n-- ************************************************************\n\ncreate table Room (\n \"roomno jk\"\tchar(8), --- common SQL parser eats it\n comment\ttext\n);\ncreate unique index Room_rno on Room using btree (\"roomno jk\" bpchar_ops);\n\ncreate table WSlot (\n slotname\tchar(20),\n \"roomno jk\"\tchar(8), --- common SQL parser eats it\n slotlink\tchar(20),\n backlink\tchar(20)\n);\ncreate unique index WSlot_name on WSlot using btree (slotname bpchar_ops);\n \nYou also can use such \"roomno jk\" in DECLARATION of PL/pgSQL procedures and functions :\n \n-- ************************************************************\n-- * Trigger procedures and functions for the patchfield\n-- * test of PL/pgSQL\n-- * $Header: /projects/cvsroot/pgsql/src/pl/plpgsql/test/triggers.sql,v 1.2 2000/10/22 23:25:11 tgl Exp $\n-- ************************************************************\n-- * AFTER UPDATE on Room\n-- *\t- If room no changes let wall slots follow\n-- ************************************************************\n\nPL/pgSQL eats it,he will cry during execution.\n\ncreate function tg_room_au() returns opaque as '\nbegin\n if new.\"roomno jk\" != old.\"roomno jk\" then\n update WSlot set \"roomno jk\" = new.\"roomno jk\" where \"roomno jk\" = old.\"roomno jk\";\n end if;\n return new;\nend;\n' 
language 'plpgsql';\n\ncreate trigger tg_room_au after update\n on Room for each row execute procedure tg_room_au();\n\n-- ************************************************************\n-- * BEFORE INSERT or UPDATE on WSlot\n-- *\t- Check that room exists\n-- ************************************************************\n\nPL/pgSQL also accepts this at creation time, but it will fail at execution:\n\ncreate function tg_wslot_biu() returns opaque as '\nbegin\n\tif count(*) = 0 from Room where \"roomno jk\" = new.\"roomno jk\" then\n raise exception ''Room % does not exist'', new.\"roomno jk\";\n end if;\n return new;\nend;\n' language 'plpgsql';\n\n\ncreate trigger tg_wslot_biu before insert or update\n on WSlot for each row execute procedure tg_wslot_biu();\n\nThen do the following:\ninsert into Room values ('001', 'Entrance'); -- everything is ok\n\nThen run this and catch the failure:\ninsert into WSlot values ('WS.001.1a', '001', '', '');\n\nPostgreSQL returns:\n\npsql:/home/eu/SQL/plt/p_test.sql:19: ERROR: parse error at or near \"new\"\npsql:/home/eu/SQL/plt/p_test.sql:20: WARNING: plpgsql: ERROR during compile of tg_wslot_biu near line 3\n\nAs you can see, there is no support for \"roomno jk\" in the PL/pgSQL parser.\nSo far I know nothing serious about flex, lex and yacc, but a quick look at\nthe PL/pgSQL parser shows that cases like \"roomno jk\" are simply not handled\nthere.\n\t\t\t\t\tregards, Eugene\nP.S. In case you make a patch, please send me a copy. 
\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Fri, 12 Jul 2002 14:31:18 +0400 (MSD)", "msg_from": "\"eutm\" <eutm@yandex.ru>", "msg_from_op": true, "msg_subject": "Bug of PL/pgSQL parser" }, { "msg_contents": "I see this on the TODO list:\n\n# Fix PL/PgSQL to handle quoted mixed-case identifiers\n\n\nPerhaps you could make a view (alias the names with spaces) to work on?\n\n\nOn Fri, 2002-07-12 at 06:31, eutm wrote:\n> \n> \tDear Sirs!:)I encounted one small problem,working with \n> PostgreSQL 7.3devel.It can look a\n> bit strange,but i have to use whitespaces in names of databases,tables,fields\n> and so on(like \"roomno jk\").It's possible to create them all and work with them\n> (INSERT,DELETE,UPDATE),but PL/pgSQL parser(compiler ?) can't execute such \n> statements.To explain the problem, I took and changed next examples from \n> pgsql/src/pl/plpgsql/test:\n> \n> -- ************************************************************ \n> -- * Tables for the patchfield test of PL/pgSQL \n> -- * $Header: /projects/cvsroot/pgsql/src/pl/plpgsql/test/tables.sql,v 1.1 1998/08/24 19:16:27 momjian Exp $\n> -- ************************************************************\n> \n> create table Room (\n> \"roomno jk\"\tchar(8), --- common SQL parser eats it\n> comment\ttext\n> );\n> create unique index Room_rno on Room using btree (\"roomno jk\" bpchar_ops);\n> \n> create table WSlot (\n> slotname\tchar(20),\n> \"roomno jk\"\tchar(8), --- common SQL parser eats it\n> slotlink\tchar(20),\n> backlink\tchar(20)\n> );\n> create unique index WSlot_name on WSlot using btree (slotname bpchar_ops);\n> \n> You also can use such \"roomno jk\" in DECLARATION of PL/pgSQL procedures and functions :\n> \n> -- ************************************************************\n> -- * Trigger procedures and functions for the patchfield\n> -- * test of PL/pgSQL\n> -- * $Header: /projects/cvsroot/pgsql/src/pl/plpgsql/test/triggers.sql,v 1.2 2000/10/22 23:25:11 tgl Exp $\n> -- 
************************************************************\n> -- * AFTER UPDATE on Room\n> -- *\t- If room no changes let wall slots follow\n> -- ************************************************************\n> \n> PL/pgSQL eats it,he will cry during execution.\n> \n> create function tg_room_au() returns opaque as '\n> begin\n> if new.\"roomno jk\" != old.\"roomno jk\" then\n> update WSlot set \"roomno jk\" = new.\"roomno jk\" where \"roomno jk\" = old.\"roomno jk\";\n> end if;\n> return new;\n> end;\n> ' language 'plpgsql';\n> \n> create trigger tg_room_au after update\n> on Room for each row execute procedure tg_room_au();\n> \n> -- ************************************************************\n> -- * BEFORE INSERT or UPDATE on WSlot\n> -- *\t- Check that room exists\n> -- ************************************************************\n> \n> PL/pgSQL also eats it,he will cry during execution.\n> \n> create function tg_wslot_biu() returns opaque as '\n> begin\n> \tif count(*) = 0 from Room where \"roomno jk\" = new.\"roomno jk\" then\n> raise exception ''Room % does not exist'', new.\"roomno jk\";\n> end if;\n> return new;\n> end;\n> ' language 'plpgsql';\n> \n> \n> create trigger tg_wslot_biu before insert or update\n> on WSlot for each row execute procedure tg_wslot_biu();\n> \n> Then do next:\n> insert into Room values ('001', 'Entrance'); --Everything is ok\n> \n> Then do it and catch failure:\n> insert into WSlot values ('WS.001.1a', '001', '', '');\n> \n> PostgreSQL returns :\n> \n> psql:/home/eu/SQL/plt/p_test.sql:19: ERROR: parse error at or near \"new\"\n> psql:/home/eu/SQL/plt/p_test.sql:20: WARNING: plpgsql: ERROR during compile of tg_wslot_biu near line 3\n> \n> As you see there's no support for \"roomno jk\" in PL/pgSQL parser.\n> To this moment i know nothing serious about flex,lex and yacc,but\n> a simple look at PL/pgSQL parser shows,that situations of\n> \"roomno jk\" are just undefined there.\n> \t\t\t\t\tregards,Eugene\n> P.S.In case you make 
patch,please,send me a copy. \n> \n> \n> \n> \n> \n> \n> \n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n\n", "msg_date": "12 Jul 2002 08:51:50 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Bug of PL/pgSQL parser" }, { "msg_contents": "\"eutm\" <eutm@yandex.ru> writes:\n> \tDear Sirs!:)I encounted one small problem,working with \n> PostgreSQL 7.3devel.It can look a\n> bit strange,but i have to use whitespaces in names of databases,tables,fields\n> and so on(like \"roomno jk\").It's possible to create them all and work with them\n> (INSERT,DELETE,UPDATE),but PL/pgSQL parser(compiler ?) can't execute such \n> statements.\n\nYeah, this is a known bug: the plpgsql lexer doesn't really handle\nquoted identifiers correctly. (It effectively acts like double-quote\nis just another letter, which of course falls down on cases like\nembedded whitespace.) If you have any experience with writing flex\nrules, perhaps you'd care to submit a patch.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Jul 2002 11:14:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug of PL/pgSQL parser " } ]
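The bug confirmed at the end of the thread above is in the plpgsql lexer's identifier rule: the double quote is treated like an ordinary identifier character, so whitespace inside a quoted name ends the token early. A toy Python tokenizer — purely illustrative, since the real fix belongs in the flex rules — showing the difference between the two rules:

```python
import re

# Buggy rule: '"' is just another identifier character, so the space
# inside "roomno jk" splits the name into two tokens.
buggy_ident = re.compile(r'[A-Za-z_"][A-Za-z0-9_"]*')

# Fixed rule: a double-quoted identifier is one token, whitespace and all
# (a doubled quote inside stands for a literal quote, as in SQL).
fixed_ident = re.compile(r'"(?:[^"]|"")*"|[A-Za-z_][A-Za-z0-9_]*')

src = 'new."roomno jk"'
assert buggy_ident.findall(src) == ['new', '"roomno', 'jk"']
assert fixed_ident.findall(src) == ['new', '"roomno jk"']
```

The first result is exactly the failure mode reported: `"roomno` and `jk"` arrive at the parser as two separate tokens, producing the `parse error at or near "new"` style errors seen in the bug report.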
[ { "msg_contents": "\n> Has anyone studied how other DBMSs handle CHAR vs VARCHAR? Judging\n> from the number of questions we get on this point, I have to wonder\n> if we are not out of step with the way other systems do it.\n\nWell, I already gave the Informix example, which compares them as equal\n(they obviously coerce varchar to char).\n\nIn nearly all cases I have seen so far, the different handling of trailing\nblanks is not wanted. In most of these cases varchar is simply used instead\nof char to save disk space.\n\nIn Informix ESQL/C there is a host variable type CSTRINGTYPE that automatically \nrtrims columns of char type upon select.\n\nImho the advantages of an automatic coercion would outweigh the few corner cases\nwhere the behavior would not be intuitive to everybody.\n\nAndreas\n", "msg_date": "Fri, 12 Jul 2002 15:48:59 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: string cast/compare broken? " }, { "msg_contents": "There is no comparison of varchar to char in Oracle either.\nThe cast cases Scott provided exercise some PostgreSQL-specific behavior;\neach database may handle such casts differently.\n\nIn a good design/application, char should be replaced by the\nvarchar type unless you know the exact length. It would not be a bad\nidea to get rid of char gradually in the future, to avoid such\ninconsistency between databases; that's just my view.\n\njohnl\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Zeugswetter\n> Andreas SB SD\n> Sent: Friday, July 12, 2002 8:49 AM\n> To: Tom Lane\n> Cc: Hannu Krosing; Scott Royston; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] string cast/compare broken? \n> \n> \n> \n> > Has anyone studied how other DBMSs handle CHAR vs VARCHAR? 
Judging\n> > from the number of questions we get on this point, I have to wonder\n> > if we are not out of step with the way other systems do it.\n> \n> Well, I already gave the Informix example, that compares them as equal.\n> (they obviously coerce varchar to char)\n> \n> In nearly all cases I have seen so far the different handling of trailing\n> blanks is not wanted. In most of these varchar is simply used \n> instead of char to \n> save disk space.\n> \n> In Informix ESQL/C there is a host variable type CSTRINGTYPE that \n> automatically \n> rtrims columns of char type upon select.\n> \n> Imho the advantages of an automatic coercion would outweigh the \n> few corner cases\n> where the behavior would not be intuitive to everybody.\n> \n> Andreas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n", "msg_date": "Fri, 12 Jul 2002 09:24:03 -0500", "msg_from": "\"John Liu\" <johnl@synthesys.com>", "msg_from_op": false, "msg_subject": "Re: string cast/compare broken? " }, { "msg_contents": "On Fri, Jul 12, 2002 at 03:48:59PM +0200, Zeugswetter Andreas SB SD wrote:\n> Imho the advantages of an automatic coercion would outweigh the few\n> corner cases where the behavior would not be intuitive to\n> everybody.\n\nHow then would one get the correct behaviour from char()?\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 12 Jul 2002 11:28:20 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: string cast/compare broken?" } ]
[ { "msg_contents": "So, what should the behavior be of a constant declared as\n\nCHAR 'hi'\n\n? Right now it fails, since SQL9x asks that the char type defaults to a\nlength of one and our parser does not distinguish between usage as a\nconstant declaration and as a column definition (where you would want\nthe \"char(1)\" to be filled in). But istm that for a constant string, the\nlength should be whatever the string is, or unspecified.\n\nComments?\n\n - Thomas\n", "msg_date": "Fri, 12 Jul 2002 08:42:44 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "CHAR constants" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> So, what should the behavior be of a constant declared as\n> CHAR 'hi'\n\n> ? Right now it fails, since SQL9x asks that the char type defaults to a\n> length of one and our parser does not distinguish between usage as a\n> constant declaration and as a column definition (where you would want\n> the \"char(1)\" to be filled in). But istm that for a constant string, the\n> length should be whatever the string is, or unspecified.\n\nSeems we should convert that to char(2). Not sure how difficult it is\nto do though...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Jul 2002 13:09:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CHAR constants " }, { "msg_contents": "> > So, what should the behavior be of a constant declared as\n> > CHAR 'hi'\n> > ? Right now it fails, since SQL9x asks that the char type defaults to a\n> > length of one and our parser does not distinguish between usage as a\n> > constant declaration and as a column definition (where you would want\n> > the \"char(1)\" to be filled in). But istm that for a constant string, the\n> > length should be whatever the string is, or unspecified.\n\nOK, I've got patches (not yet applied; any comments before they go\nin??)...\n\n> Seems we should convert that to char(2). 
Not sure how difficult it is\n> to do though...\n\nafaict there is no need to internally set a specific length; it is\nsufficient to set typmod to -1 *in this case only*. So I've got patches\nwhich separate uses of character string declarations in constants vs\nothers (e.g. column declarations).\n\nSo, before patches this fails:\n\nthomas=# select char 'hi';\nERROR: value too long for type character(1)\n\nand after patches:\n\nthomas=# select char 'hi';\n bpchar \n--------\n hi\n(1 row)\n\nthomas=# select char(1) 'hi';\nERROR: value too long for type character(1)\n\n\nbtw, I first tried solving this with in-line actions just setting a flag\nto indicate that this was going to be a production for a constant. And\nprobably set a record for shift/reduce conflicts:\n\nbison -y -d -v gram.y\nconflicts: 20591 shift/reduce, 10 reduce/reduce\n\nWoohoo!! :)\n\n - Thomas\n", "msg_date": "Sat, 13 Jul 2002 07:32:50 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: CHAR constants" } ]
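The fix described in the thread above hinges on the typmod: leaving it at -1 ("length unspecified") for the constant case only means `CHAR 'hi'` takes its length from the string itself, while `char(1) 'hi'` still enforces the declared length. A hedged Python model of that input rule — the function name is invented and the real logic lives in the backend's bpchar input code:

```python
def bpchar_input(value: str, typmod: int) -> str:
    """Toy model of char input coercion. typmod == -1 means no declared
    length (the CHAR 'hi' constant case); otherwise enforce/pad to typmod."""
    if typmod == -1:
        return value                     # length taken from the string itself
    stripped = value.rstrip(" ")         # trailing blanks may be trimmed away
    if len(stripped) > typmod:
        raise ValueError("value too long for type character(%d)" % typmod)
    return value.ljust(typmod)           # blank-pad to the declared width

assert bpchar_input("hi", -1) == "hi"    # CHAR 'hi' after the patch
assert bpchar_input("h", 1) == "h"
try:
    bpchar_input("hi", 1)                # char(1) 'hi' still fails
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```

The two assertions mirror the before/after behavior shown in the thread: `select char 'hi'` succeeds and returns `hi`, while `select char(1) 'hi'` still raises "value too long for type character(1)".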
[ { "msg_contents": "I'd like to look at the performance of the query optimizer (both the\ntraditional one and GEQO) when joining large numbers of tables: 10-15,\nor more. In order to do that (and to get meaningful results), I'll\nneed to work with some data that actually requires joins of that\nmagnitude. Ideally, I'd like the data to be somewhat realistic -- so\nthat the performance I'm seeing will reflect the performance a typical\nuser might see. (i.e. I don't want an artificial benchmark)\n\nHowever, I don't possess any data of that nature, and I'm unsure\nwhere I can find some (or how to generate some of my own). Does\nanyone know of:\n\n - a freely available collection of data that requires queries\n of this type, and is reasonably representative of \"real world\"\n applications\n\n - or, a means to generate programatically some data that\n fits the above criteria.\n\nThanks in advance,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Fri, 12 Jul 2002 12:05:41 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "test data for query optimizer" }, { "msg_contents": "nconway@klamath.dyndns.org (Neil Conway) writes:\n> I'd like to look at the performance of the query optimizer (both the\n> traditional one and GEQO) when joining large numbers of tables: 10-15,\n> or more. In order to do that (and to get meaningful results), I'll\n> need to work with some data that actually requires joins of that\n> magnitude.\n\nThe easiest way to construct a realistic many-way join is to use a star\nschema. Here you have a primary \"fact table\" that includes a lot of\ncolumns that individually join to the primary keys of other \"detail\ntables\". For example, you might have a column \"State\" in the fact table\nwith values like \"PA\", \"NY\", etc, and you want to join it to a table\nstates(abbrev,fullname,...) so your query can display \"Pennsylvania\",\n\"New York\", etc. 
It's easy to make up realistic examples that involve\nany number of joins.\n\nThis is of course only one usage pattern for lots-o-joins, so don't put\ntoo much credence in it alone as a benchmark, but it's certainly a\nwidely used pattern.\n\nSearching for \"star schema\" at Google turned up some interesting things\nlast time I tried it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Jul 2002 13:14:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: test data for query optimizer " }, { "msg_contents": "What about the OSDB benchmark? Does that contain a large dataset?\n\nChris\n\n----- Original Message ----- \nFrom: \"Neil Conway\" <nconway@klamath.dyndns.org>\nTo: \"PostgreSQL Hackers\" <pgsql-hackers@postgresql.org>\nSent: Saturday, July 13, 2002 12:05 AM\nSubject: [HACKERS] test data for query optimizer\n\n\n> I'd like to look at the performance of the query optimizer (both the\n> traditional one and GEQO) when joining large numbers of tables: 10-15,\n> or more. In order to do that (and to get meaningful results), I'll\n> need to work with some data that actually requires joins of that\n> magnitude. Ideally, I'd like the data to be somewhat realistic -- so\n> that the performance I'm seeing will reflect the performance a typical\n> user might see. (i.e. I don't want an artificial benchmark)\n> \n> However, I don't possess any data of that nature, and I'm unsure\n> where I can find some (or how to generate some of my own). 
Does\n> anyone know of:\n> \n> - a freely available collection of data that requires queries\n> of this type, and is reasonably representative of \"real world\"\n> applications\n> \n> - or, a means to generate programatically some data that\n> fits the above criteria.\n> \n> Thanks in advance,\n> \n> Neil\n> \n> -- \n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n", "msg_date": "Sat, 13 Jul 2002 11:18:14 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: test data for query optimizer" }, { "msg_contents": "What about the TPC-H benchmark?\n\nI can't recall if it has more than 10 tables, but it seemed like the\nqueries were \"quite good\" for a benchmark. In addition it comes with a\ndata generator.\n\n\nregards\n\nMark\n> On Sat, 2002-07-13 at 04:05, Neil Conway wrote:\n> I'd like to look at the performance of the query optimizer (both the\n> traditional one and GEQO) when joining large numbers of tables: 10-15,\n> or more. In order to do that (and to get meaningful results), I'll\n> need to work with some data that actually requires joins of that\n> magnitude. Ideally, I'd like the data to be somewhat realistic -- so\n> that the performance I'm seeing will reflect the performance a typical\n> user might see. (i.e. I don't want an artificial benchmark)\n> \n>\n\n", "msg_date": "13 Jul 2002 16:00:40 +1200", "msg_from": "Mark kirkwood <markir@slingshot.co.nz>", "msg_from_op": false, "msg_subject": "Re: test data for query optimizer" }, { "msg_contents": "On Sat, Jul 13, 2002 at 11:18:14AM +0800, Christopher Kings-Lynne wrote:\n> What about the OSDB benchmark? 
Does that contain a large dataset?\n\nNo -- it only uses 5 relations total, with the most complex query\nonly involving 4 joins.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Sat, 13 Jul 2002 00:06:47 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "Re: test data for query optimizer" } ]
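The star-schema pattern suggested in the thread above is also easy to generate programmatically, which addresses the second half of the original question: one fact table whose columns are keys into N small detail tables gives an N-way join of adjustable size. A rough Python generator for such test data — the table and column naming scheme here is invented for illustration:

```python
import random

def make_star_schema(n_details: int, n_facts: int, seed: int = 0):
    """Return (detail_tables, fact_rows) for an n_details-way join test.
    Each fact row holds one key per detail table; each detail table maps
    keys 0..9 to display values."""
    rng = random.Random(seed)          # seeded, so the data is reproducible
    details = {
        "detail_%d" % i: ["d%d_val_%d" % (i, k) for k in range(10)]
        for i in range(n_details)
    }
    facts = [
        tuple(rng.randrange(10) for _ in range(n_details))
        for _ in range(n_facts)
    ]
    return details, facts

def resolve(details, fact_row):
    """The N-way join: replace each key with its detail-table value."""
    return [details["detail_%d" % i][key] for i, key in enumerate(fact_row)]

details, facts = make_star_schema(n_details=12, n_facts=100)
assert len(details) == 12 and len(facts) == 100
assert resolve(details, facts[0])[0].startswith("d0_val_")
```

Dumping `details` and `facts` as CREATE TABLE/INSERT statements then yields a query that must join 12+ tables to display the detail names, which is exactly the optimizer workload being asked about.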
[ { "msg_contents": "CVSROOT:\t/cvsroot\nModule name:\tpgsql\nChanges by:\ttgl@postgresql.org\t02/07/12 14:43:20\n\nModified files:\n\tdoc/src/sgml : catalogs.sgml release.sgml \n\tdoc/src/sgml/ref: alter_table.sgml comment.sgml \n\t drop_aggregate.sgml drop_domain.sgml \n\t drop_function.sgml drop_index.sgml \n\t drop_language.sgml drop_operator.sgml \n\t drop_rule.sgml drop_sequence.sgml \n\t drop_table.sgml drop_trigger.sgml \n\t drop_type.sgml drop_view.sgml \n\tsrc/backend/bootstrap: bootparse.y \n\tsrc/backend/catalog: Makefile heap.c index.c indexing.c \n\t namespace.c pg_type.c \n\tsrc/backend/commands: aggregatecmds.c cluster.c comment.c \n\t dbcommands.c functioncmds.c indexcmds.c \n\t operatorcmds.c proclang.c tablecmds.c \n\t trigger.c typecmds.c view.c \n\tsrc/backend/nodes: copyfuncs.c equalfuncs.c outfuncs.c \n\tsrc/backend/parser: analyze.c gram.y \n\tsrc/backend/rewrite: rewriteDefine.c rewriteRemove.c \n\t rewriteSupport.c \n\tsrc/backend/tcop: utility.c \n\tsrc/backend/utils/cache: lsyscache.c relcache.c \n\tsrc/bin/initdb : initdb.sh \n\tsrc/bin/pg_dump: pg_dump.c \n\tsrc/bin/psql : describe.c \n\tsrc/include/access: tupdesc.h \n\tsrc/include/catalog: catname.h catversion.h heap.h index.h \n\t indexing.h \n\tsrc/include/commands: comment.h defrem.h proclang.h trigger.h \n\tsrc/include/nodes: parsenodes.h \n\tsrc/include/rewrite: rewriteRemove.h \n\tsrc/include/utils: lsyscache.h \n\tsrc/test/regress/expected: alter_table.out domain.out \n\t foreign_key.out sanity_check.out \n\tsrc/test/regress/output: constraints.source \n\tsrc/test/regress/sql: alter_table.sql domain.sql foreign_key.sql \nAdded files:\n\tsrc/backend/catalog: dependency.c pg_constraint.c pg_depend.c \n\tsrc/include/catalog: dependency.h pg_constraint.h pg_depend.h \nRemoved files:\n\tsrc/include/catalog: pg_relcheck.h \n\nLog message:\n\tSecond phase of committing Rod Taylor's pg_depend/pg_constraint patch.\n\tpg_relcheck is gone; CHECK, UNIQUE, PRIMARY KEY, and FOREIGN 
KEY\n\tconstraints all have real live entries in pg_constraint. pg_depend\n\texists, and RESTRICT/CASCADE options work on most kinds of DROP;\n\thowever, pg_depend is not yet very well populated with dependencies.\n\t(Most of the ones that are present at this point just replace formerly\n\thardwired associations, such as the implicit drop of a relation's pg_type\n\tentry when the relation is dropped.) Need to add more logic to create\n\tdependency entries, improve pg_dump to dump constraints in place of\n\tindexes and triggers, and add some regression tests.\n\n", "msg_date": "Fri, 12 Jul 2002 14:43:20 -0400 (EDT)", "msg_from": "tgl@postgresql.org (Tom Lane)", "msg_from_op": true, "msg_subject": "pgsql/ oc/src/sgml/catalogs.sgml oc/src/sgml/r ..." }, { "msg_contents": "Is it at all a problem that several columns in pg_conversion have the same\nname as columns in pg_constraint?\n\nShould the ones in pg_conversion become: convname instead of conname, etc.\nsimply for clarity?\n\nChris\n\n----- Original Message -----\n\n> Log message:\n> Second phase of committing Rod Taylor's pg_depend/pg_constraint patch.\n> pg_relcheck is gone; CHECK, UNIQUE, PRIMARY KEY, and FOREIGN KEY\n> constraints all have real live entries in pg_constraint. pg_depend\n> exists, and RESTRICT/CASCADE options work on most kinds of DROP;\n> however, pg_depend is not yet very well populated with dependencies.\n> (Most of the ones that are present at this point just replace formerly\n> hardwired associations, such as the implicit drop of a relation's pg_type\n> entry when the relation is dropped.) 
Need to add more logic to create\n> dependency entries, improve pg_dump to dump constraints in place of\n> indexes and triggers, and add some regression tests.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n", "msg_date": "Sat, 13 Jul 2002 13:35:31 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/ oc/src/sgml/catalogs.sgml oc/src/sgml/r ..." }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Is it at all a problem that several columns in pg_conversion have the same\n> name as columns in pg_constraint?\n> Should the ones in pg_conversion become: convname instead of conname, etc.\n> simply for clarity?\n\nPerhaps so. The two patches were developed independently and so no one\nthought about it. I don't have a strong feeling about which set of\nnames to change, although perhaps pg_conversion is referenced in fewer\nplaces at the moment.\n\nTatsuo, any opinions?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Jul 2002 13:07:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/ oc/src/sgml/catalogs.sgml oc/src/sgml/r ... " } ]
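For readers following the pg_depend work described in the commit message above: RESTRICT/CASCADE on DROP reduces to a traversal of the recorded dependency graph, refusing the drop when dependents exist unless CASCADE deletes them too. A simplified Python sketch of that rule — the object names and the function name are invented, and this is not the backend's actual performDeletion logic:

```python
def perform_deletion(obj, depends_on, cascade=False):
    """Toy model of RESTRICT/CASCADE drop over a dependency graph.
    depends_on maps each object to the set of objects it depends on.
    Returns the set of objects deleted."""
    # Invert the graph: who depends on each object?
    dependents = {o: set() for o in depends_on}
    for o, targets in depends_on.items():
        for t in targets:
            dependents[t].add(o)
    to_drop, queue = set(), [obj]
    while queue:
        cur = queue.pop()
        if cur in to_drop:
            continue
        deps = dependents[cur] - to_drop
        if deps and not cascade:          # RESTRICT: refuse if anything remains
            raise RuntimeError("cannot drop %s: %s depends on it"
                               % (cur, ", ".join(sorted(deps))))
        to_drop.add(cur)
        queue.extend(deps)                # CASCADE: drop the dependents too
    return to_drop

graph = {"table t": set(), "view v": {"table t"}}
try:
    perform_deletion("table t", graph)    # RESTRICT refuses
    raise AssertionError("expected RuntimeError")
except RuntimeError:
    pass
assert perform_deletion("table t", graph, cascade=True) == {"table t", "view v"}
assert perform_deletion("view v", graph) == {"view v"}
```

The RESTRICT failure mirrors messages like the thread's "Cannot drop type foo because table foo requires it", while the cascade case models how dropping a table now takes its dependent catalog entries with it instead of relying on hardwired code.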
[ { "msg_contents": "Now that the pg_depend mechanism is mostly in there, it is no longer\na good idea to delete things directly (for example, by calling\nheap_drop_with_catalog or even just heap_delete'ing a catalog tuple).\n\nThe correct thing to do is to call performDeletion() with a parameter\nspecifying what it is you want to delete. Object deletion\ncommands should be implemented in two routines: an outer wrapper that\nlooks up the object, verifies permissions, and calls performDeletion,\nand an inner routine that actually deletes the catalog entry (plus\nany other directly-associated work). The inner routine is called from\nperformDeletion() after handling any dependency processing that might\nbe needed. A good example to look at is the way RemoveFunction()\nhas been split into RemoveFunction() and RemoveFunctionById().\n\nThe payoff for this seeming extra complexity is that we can get rid of\na lot of former hard-wired code in favor of letting dependencies do it.\nFor instance, heap_drop_with_catalog no longer does anything directly\nabout deleting indexes, constraints, or type tuples --- that's all\ngotten rid of by dependency links when you do a DROP TABLE. Thus\nheap.c is about 300 lines shorter than it used to be. We also have\nmuch more control over whether to allow deletions of dependent objects.\nFor instance, you now get fairly sane behavior when you try to drop\nthe pg_type entry associated with a relation:\n\nregression=# create table foo(f1 int);\nCREATE TABLE\nregression=# drop type foo;\nERROR: Cannot drop type foo because table foo requires it\n You may DROP the other object instead\n\n\nI notice that Tatsuo recently committed DROP CONVERSION code that does\nthings the old way. I didn't try to change it, but as-is it will not\nwork to have any dependencies leading to or from conversions. 
I\nrecommend changing it so that it can participate in dependencies.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Jul 2002 15:17:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Memo on dropping practices" }, { "msg_contents": "Tom Lane wrote:\n> Now that the pg_depend mechanism is mostly in there, it is no longer\n> a good idea to delete things directly (for example, by calling\n> heap_drop_with_catalog or even just heap_delete'ing a catalog tuple).\n> \n> The correct thing to do is to call performDeletion() with a parameter\n\nShould it be called performDrop rather than Deletion?\n\n> The payoff for this seeming extra complexity is that we can get rid of\n> a lot of former hard-wired code in favor of letting dependencies do it.\n> For instance, heap_drop_with_catalog no longer does anything directly\n> about deleting indexes, constraints, or type tuples --- that's all\n> gotten rid of by dependency links when you do a DROP TABLE. Thus\n> heap.c is about 300 lines shorter than it used to be.
We also have\n> much more control over whether to allow deletions of dependent objects.\n> For instance, you now get fairly sane behavior when you try to drop\n> the pg_type entry associated with a relation:\n\nYes, this code now allows lots of cleanups we weren't able to do before.\nTODO has:\n\t\n\tDependency Checking\n\t===================\n\t\n\t* Add pg_depend table for dependency recording; use sysrelid, oid,\n\t depend_sysrelid, depend_oid, name\n\t* Auto-destroy sequence on DROP of table with SERIAL; perhaps a separate\n\t SERIAL type\n\t* Have SERIAL generate non-colliding sequence names when we have \n\t auto-destruction\n\t* Prevent column dropping if column is used by foreign key\n\t* Propagate column or table renaming to foreign key constraints\n\t* Automatically drop constraints/functions when object is dropped\n\t* Make constraints clearer in dump file\n\t* Make foreign keys easier to identify\n\t* Flush cached query plans when their underlying catalog data changes\n\nWhich of these are done with the patch?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup.
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Jul 2002 22:20:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Memo on dropping practices" }, { "msg_contents": "> \t* Add pg_depend table for dependency recording; use sysrelid, oid,\n> \t depend_sysrelid, depend_oid, name\n> \t* Auto-destroy sequence on DROP of table with SERIAL; perhaps a separate\n> \t SERIAL type\n> \t* Have SERIAL generate non-colliding sequence names when we have \n> \t auto-destruction\n> \t* Prevent column dropping if column is used by foreign key\n> \t* Propagate column or table renaming to foreign key constraints\n> \t* Automatically drop constraints/functions when object is dropped\n> \t* Make constraints clearer in dump file\n> \t* Make foreign keys easier to identify\n> \t* Flush cached query plans when their underlying catalog data changes\n> \n> Which of these are done with the patch?\n\nBelow is what I listed off as complete when submitting the patch.\n\n'Make constraints clearer in dump file' is questionable. Foreign keys\nare, others not yet, but they need to be.\n\n\n# Add ALTER TABLE DROP non-CHECK CONSTRAINT\n# Allow psql \\d to show foreign keys\n* Add pg_depend table for dependency recording; use sysrelid, oid, \ndepend_sysrelid, depend_oid, name\n# Auto-destroy sequence on DROP of table with SERIAL\n# Prevent column dropping if column is used by foreign key\n# Automatically drop constraints/functions when object is dropped\n# Make constraints clearer in dump file\n# Make foreign keys easier to identify\n\n", "msg_date": "12 Jul 2002 22:38:09 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Memo on dropping practices" }, { "msg_contents": "\nThanks, TODO updated. I split out \"Make constraints clearer in dump\nfile\" into a foreign key version, which I marked as done, and a second\nversion which I left as undone.\n\nThanks.
That's a heap of items completed.\n\n---------------------------------------------------------------------------\n\nRod Taylor wrote:\n> > \t* Add pg_depend table for dependency recording; use sysrelid, oid,\n> > \t depend_sysrelid, depend_oid, name\n> > \t* Auto-destroy sequence on DROP of table with SERIAL; perhaps a separate\n> > \t SERIAL type\n> > \t* Have SERIAL generate non-colliding sequence names when we have \n> > \t auto-destruction\n> > \t* Prevent column dropping if column is used by foreign key\n> > \t* Propagate column or table renaming to foreign key constraints\n> > \t* Automatically drop constraints/functions when object is dropped\n> > \t* Make constraints clearer in dump file\n> > \t* Make foreign keys easier to identify\n> > \t* Flush cached query plans when their underlying catalog data changes\n> > \n> > Which of these are done with the patch?\n> \n> Below is what I listed off as complete when submitting the patch.\n> \n> 'Make constraints clearer in dump file' is questionable. Foreign keys\n> are, others not yet, but they need to be.\n> \n> \n> # Add ALTER TABLE DROP non-CHECK CONSTRAINT\n> # Allow psql \\d to show foreign keys\n> * Add pg_depend table for dependency recording; use sysrelid, oid, \n> depend_sysrelid, depend_oid, name\n> # Auto-destroy sequence on DROP of table with SERIAL\n> # Prevent column dropping if column is used by foreign key\n> # Automatically drop constraints/functions when object is dropped\n> # Make constraints clearer in dump file\n> # Make foreign keys easier to identify\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup.
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Jul 2002 22:43:14 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Memo on dropping practices" }, { "msg_contents": "On Fri, 2002-07-12 at 15:17, Tom Lane wrote:\n> Now that the pg_depend mechanism is mostly in there, it is no longer\n> a good idea to delete things directly (for example, by calling\n> heap_drop_with_catalog or even just heap_delete'ing a catalog tuple).\n\nI noticed that SERIAL sequences aren't dropping with the application of\nthe patch.\n\nWas this intentional?\n\nI know I didn't have a way of carrying sequence information across a\ndump (yet), but didn't think it would hurt to have.\n\n", "msg_date": "12 Jul 2002 22:51:54 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Memo on dropping practices" }, { "msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n>> Which of these are done with the patch?\n\n> Below is what I listed off as complete when submitting the patch.\n\nNote that I have not yet finished committing all of Rod's original\npatch.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Jul 2002 10:25:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Memo on dropping practices " }, { "msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n> I noticed that SERIAL sequences aren't dropping with the application of\n> the patch.\n\n> Was this intentional?\n\nYeah, the dependency isn't stored yet. I didn't like the way you did\nthat, and was trying to think of a better way...\n\nMore generally, a lot of dependencies that should be in place aren't\nyet.
I committed as soon as I had stable functionality that more or\nless duplicated the former behavior; there's more patch still to review.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Jul 2002 10:27:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Memo on dropping practices " }, { "msg_contents": "On Sat, 2002-07-13 at 10:27, Tom Lane wrote:\n> Rod Taylor <rbt@zort.ca> writes:\n> > I noticed that SERIAL sequences aren't dropping with the application of\n> > the patch.\n> \n> > Was this intentional?\n> \n> Yeah, the dependency isn't stored yet. I didn't like the way you did\n> that, and was trying to think of a better way...\n> \n> More generally, a lot of dependencies that should be in place aren't\n> yet. I committed as soon as I had stable functionality that more or\n> less duplicated the former behavior; there's more patch still to review.\n\n\nI eventually figured that out. I had only initially noticed a couple of\nmissing items which would have been easy to miss.\n\n", "msg_date": "13 Jul 2002 11:05:16 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Memo on dropping practices" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> The correct thing to do is to call performDeletion() with a parameter\n\n> Should it be called performDrop rather than Deletion?\n\nWell, if you want to rationalize the naming of these various routines:\n\nI think DROP ought to be associated with the SQL-level commands.\nperformDeletion is the next level down (since it doesn't do any\npermissions checks) and then there are the bottom-level deletion\nroutines for each object type (which do even less). It would make sense\nto choose different verbs for each level. Right now, since I just split\nRemoveFoo into two routines and called the second one RemoveFooById,\nit's not very mnemonic at all.
Perhaps:\n\n\tDropFoo\t\t--- top level, corresponds to SQL DROP command\n\n\tperformSomething -- dependency controller\n\n\tRemoveFoo\t--- bottom level deleter\n\nNot sure what \"performSomething\" should be called, but I'd like to\nthink of a verb that's not either Drop or Remove. I'm not wedded to\nRemove for the bottom level, either. Thoughts?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Jul 2002 12:53:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Memo on dropping practices " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> The correct thing to do is to call performDeletion() with a parameter\n> \n> > Should it be called performDrop rather than Deletion?\n> \n> Well, if you want to rationalize the naming of these various routines:\n> \n> I think DROP ought to be associated with the SQL-level commands.\n> performDeletion is the next level down (since it doesn't do any\n> permissions checks) and then there are the bottom-level deletion\n> routines for each object type (which do even less). It would make sense\n> to choose different verbs for each level. Right now, since I just split\n> RemoveFoo into two routines and called the second one RemoveFooById,\n> it's not very mnemonic at all. Perhaps:\n> \n> \tDropFoo\t\t--- top level, corresponds to SQL DROP command\n> \n> \tperformSomething -- dependency controller\n> \n> \tRemoveFoo\t--- bottom level deleter\n> \n> Not sure what \"performSomething\" should be called, but I'd like to\n> think of a verb that's not either Drop or Remove. I'm not wedded to\n> Remove for the bottom level, either.
Thoughts?\n\nHow about:\n\n> \tDropFoo\t\t--- top level, corresponds to SQL DROP command\n> \tDropCascadeFoo --- dependency controller\n> \tRemoveFoo\t--- bottom level deleter\n\nIs that accurate for cascade?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 13 Jul 2002 16:08:46 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Memo on dropping practices" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> How about:\n\n>> DropFoo\t\t--- top level, corresponds to SQL DROP command\n>> DropCascadeFoo --- dependency controller\n>> RemoveFoo\t--- bottom level deleter\n\nThere is only one dependency controller; it's not Foo anything.\nAnd I don't want to call it by the same verb as either the top\nor bottom levels...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Jul 2002 20:02:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Memo on dropping practices " } ]
[ { "msg_contents": "I'm going to change the pg_dump command to pull these constraints out of\npg_constaint where possible, creating the appropriate alter table add\nconstraint command (see primary key).\n\n\nShould unique constraints created with 'create index' (no entry in\npg_constraint) be re-created via alter table add constraint, or via\ncreate unique index? \n\nI prefer ...add constraint. After a while (release or 2) removal of\ncreate unique index all together.\n\nSince index names are unique, and all unique and primary key constraints\nhave a matching name in pg_index there isn't a problem with name\nconflicts.\n\n", "msg_date": "12 Jul 2002 23:11:38 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Unique and Primary Key Constraints" }, { "msg_contents": "Rod Taylor wrote:\n> I'm going to change the pg_dump command to pull these constraints out of\n> pg_constaint where possible, creating the appropriate alter table add\n> constraint command (see primary key).\n> \n> \n> Should unique constraints created with 'create index' (no entry in\n> pg_constraint) be re-created via alter table add constraint, or via\n> create unique index? \n\nCREATE UNIQUE INDEX has optimization purpose as well as an constraint\npurpose. I think CREATE UNIQUE INDEX is the way to go.\n\n> I prefer ...add constraint. After a while (release or 2) removal of\n> create unique index all together.\n\nRemove CREATE UNIQUE INDEX entirely? Why?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 13 Jul 2002 00:20:36 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unique and Primary Key Constraints" }, { "msg_contents": "> > I prefer ...add constraint.
After a while (release or 2) removal of\n> > create unique index all together.\n> \n> Remove CREATE UNIQUE INDEX entirely? Why?\n\nI was looking to encourage users to use core SQL as I spend more time\nthan I want converting between systems -- thanks in part to users who\ncreate non-portable structures.\n\nTemporarily forgot there are index types other than btree :)\n\nAnyway, thanks for the answers.\n\n", "msg_date": "13 Jul 2002 01:33:07 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Re: Unique and Primary Key Constraints" }, { "msg_contents": "Rod Taylor wrote:\n> > > I prefer ...add constraint. After a while (release or 2) removal of\n> > > create unique index all together.\n> > \n> > Remove CREATE UNIQUE INDEX entirely? Why?\n> \n> I was looking to encourage users to use core SQL as I spend more time\n> than I want converting between systems -- thanks in part to users who\n> create non-portable structures.\n> \n> Temporarily forgot there are index types other than btree :)\n\nNot so much non-btree, but non-unique indexes themselves. UNIQUE index\nis funny because it is a constraint and an performance utility. I see\nyour point that a constraint is more ANSI standard, but because we can't\nget rid of non-unique indexes, I am not sure if there is really a good\nreason to move to UNIQUE constraints. Well, it does make the table\ndefinition and index more compact (one statement) but we split them up\non pg_dump so we can load the table without the index, so it doesn't\nseem to be a win.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup.
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 13 Jul 2002 10:29:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unique and Primary Key Constraints" }, { "msg_contents": "On Sat, 2002-07-13 at 10:29, Bruce Momjian wrote:\n> Rod Taylor wrote:\n> > > > I prefer ...add constraint. After a while (release or 2) removal of\n> > > > create unique index all together.\n> > > \n> > > Remove CREATE UNIQUE INDEX entirely? Why?\n> > \n> > I was looking to encourage users to use core SQL as I spend more time\n> > than I want converting between systems -- thanks in part to users who\n> > create non-portable structures.\n> > \n> > Temporarily forgot there are index types other than btree :)\n> \n> Not so much non-btree, but non-unique indexes themselves.
UNIQUE index\n> > is funny because it is a constraint and an performance utility. I see\n> > your point that a constraint is more ANSI standard, but because we can't\n> \n> Yup. Makes sense. I submitted a patch which retains the difference. \n> If the index is created with CREATE UNIQUE, it's dumped with CREATE\n> UNIQUE. Constraint UNIQUE is treated likewise.\n\nYes, very nice.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 13 Jul 2002 11:11:40 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unique and Primary Key Constraints" }, { "msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n> Yup. Makes sense. I submitted a patch which retains the difference. \n> If the index is created with CREATE UNIQUE, it's dumped with CREATE\n> UNIQUE. Constraint UNIQUE is treated likewise.\n\nYes, I was going to suggest that --- we should try to reproduce the way\nthat the definition was created, not enforce our own ideas of style.\n\nCREATE INDEX will always be more flexible than constraints anyway\n(non-default index type, non-default opclasses, partial indexes for\nstarters) so the notion that it might go away someday is a nonstarter.\n\nRod's original pg_depend patch tried to make a pg_constraint entry for\nany unique index, but I changed it to only make entries for indexes\nthat were actually made from constraint clauses, so the distinction\nis preserved in the system catalogs. Just a matter of having pg_dump\nrespect it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Jul 2002 13:03:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unique and Primary Key Constraints " } ]
[ { "msg_contents": "[Cc:ed to hackers]\n\nFrom: nconway@klamath.dyndns.org (Neil Conway)\nSubject: pgbench questions\nDate: Sat, 13 Jul 2002 00:57:37 -0400\nMessage-ID: <20020713045736.GA9258@klamath.dyndns.org>\n\n> Hi,\n> \n> I was looking at doing some performance profiling on PostgreSQL, and\n> I had a few questions on pgbench.\n> \n> (1) Is there a reason you chose to use the TPC-B benchmark rather\n> than TPC-C or TPC-H, for example? Do you think there might be any\n> merit in converting pgbench to use TPC-H or AS3AP?\n\nJust easy to implement. Ideally pgbench should be able handle to\nseveral kinds of benchmarks (I don't have time to do that sigh...)\n\nBTW, TPC-H is very different from othe benchmarks. As far as I know,\nit focuses on Data Ware House. So TPC-H cannot be a replacement for\nTPC-B.\n\n> (2) At least in the current CVS version, the code to do a 'CHECKPOINT'\n> after creating a table has been #ifdef'ed out. Why is that?\n\nThat is not after creation of a table, but while creating it, which is\nnot necessary any more since Tom has fix the growth of WAL logs.\n\n> (3) Several people (Rod Taylor, Tom Lane, myself, perhaps others) have\n> noticed that the results obtained from pgbench can be somewhat\n> inconsistent (i.e. can vary between runs quite a bit).\n> \n> Have you found this to be the case in your own experience?\n> \n> Do you have any suggestions on how pgbench can be made more\n> consistent (either through special benchmarking procedures, or\n> through a change to pgbench)\n\nI believe it's a common problem with benchmark programs.
I think Tom\nor Jan has posted a good summary to hackers list showing how to get\na consistent result with pgbench.\n--\nTatsuo Ishii\n", "msg_date": "Sat, 13 Jul 2002 15:25:21 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: pgbench questions" }, { "msg_contents": "On Sat, 2002-07-13 at 02:25, Tatsuo Ishii wrote: \n> > (2) At least in the current CVS version, the code to do a 'CHECKPOINT'\n> > after creating a table has been #ifdef'ed out. Why is that?\n> \n> That is not after creation of a table, but while creating it, which is\n> not necessary any more since Tom has fix the growth of WAL logs.\n> \n\nTatsou:\n\nCould you or Tom give me some background on what this change was about?\nIs this something recent, or would it have been in CVS about a month\nago?\n\nI can't see any reason to force a checkpoint after CREATE TABLE, but it\nwould be interesting to know why it was done before.\n\n;John Nield\n\n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n", "msg_date": "13 Jul 2002 17:53:50 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": false, "msg_subject": "Re: pgbench questions" }, { "msg_contents": "J. R. Nield wrote:\n> On Sat, 2002-07-13 at 02:25, Tatsuo Ishii wrote: \n> > > (2) At least in the current CVS version, the code to do a 'CHECKPOINT'\n> > > after creating a table has been #ifdef'ed out. Why is that?\n> > \n> > That is not after creation of a table, but while creating it, which is\n> > not necessary any more since Tom has fix the growth of WAL logs.\n> > \n> \n> Tatsou:\n> \n> Could you or Tom give me some background on what this change was about?\n> Is this something recent, or would it have been in CVS about a month\n> ago?\n> \n> I can't see any reason to force a checkpoint after CREATE TABLE, but it\n> would be interesting to know why it was done before.\n\n7.1.0 had a problem with WAL file growth.
This was fixed in 7.1.2 or\n7.1.3 so checkpoint is not required anymore.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 13 Jul 2002 17:59:15 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgbench questions" } ]
[ { "msg_contents": "I encountered a problem while implementing new CREATE\nCONVERSION. Since converion procs are dynamically invoked while doing\nan encoding conversion, it might fail for some reasons:\n\n(1) stale pg_conversion entry. If someone re-register that proc, the\n oid might be changed and the reference from pg_conversion to\n pg_proc becomes stale.\n\n(2) buggy conversion proc is defined by a user\n\n(3) schema search path changed. Since conversion is schema aware, if\n someone sets a wrong schema path, the conversion proc might not be\n found anymore. This is actually not a problem right now, since in\n this case a conversion search would be performed on pg_catalog\n name space which should always be exist. However I am a little bit\n worried about this.\n\nProblem is, in any case mentioned above, an ERROR is raised and\nbackend tries to send an error message which again raise an ERROR. As\na result, backend goes into an infinite loop.\n\nI have to do some syscache searches aginst pg_proc before calling\nconversion proc using fmgr, since there seems no API for checking that\nconversion proc surely exists without throwing an ERROR. This is ugly\nand is not ideal IMO.\n\nAny idea?\n--\nTatsuo Ishii\n", "msg_date": "Sat, 13 Jul 2002 15:25:32 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "help needed with CREATE CONVERSION" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> I encountered a problem while implementing new CREATE\n> CONVERSION. Since converion procs are dynamically invoked while doing\n> an encoding conversion, it might fail for some reasons:\n\n> (1) stale pg_conversion entry.
If someone re-register that proc, the\n> oid might be changed and the reference from pg_conversion to\n> pg_proc becomes stale.\n\nThis could (and should IMHO) be prevented with a dependency.\n\n> (2) buggy conversion proc is defined by a user\n\nThis I think we have to be concerned about; there will always be the\npossibility of a failure in the conversion proc.\n\n> (3) schema search path changed.\n\nI do not see how that's an issue. The conversion proc is referred to\nby OID, no?\n\n> Problem is, in any case mentioned above, an ERROR is raised and\n> backend tries to send an error message which again raise an ERROR. As\n> a result, backend goes into an infinite loop.\n\nAs long as we can restrict this failure to the case of a buggy\nconversion proc, I think the risk can be lived with. Buggy C code\ncan cause crashes in plenty of ways ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Jul 2002 13:12:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: help needed with CREATE CONVERSION " }, { "msg_contents": "Tom Lane dijo: \n\n> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > I encountered a problem while implementing new CREATE\n> > CONVERSION. Since converion procs are dynamically invoked while doing\n> > an encoding conversion, it might fail for some reasons:\n> \n> > (2) buggy conversion proc is defined by a user\n> \n> This I think we have to be concerned about; there will always be the\n> possibility of a failure in the conversion proc.\n> \n> > Problem is, in any case mentioned above, an ERROR is raised and\n> > backend tries to send an error message which again raise an ERROR. As\n> > a result, backend goes into an infinite loop.\n\nWhat about having a separate elog() category that evades the conversion\nstuff, so if something fails in character conversion it doesn't get\ncalled again?
Some way to ``disable'' recursion in elog().\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\nLicensee shall have no right to use the Licensed Software\nfor productive or commercial use. (Licencia de StarOffice 6.0 beta)\n\n", "msg_date": "Sat, 13 Jul 2002 15:35:55 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": false, "msg_subject": "Re: help needed with CREATE CONVERSION " } ]
[ { "msg_contents": "Hello,\n\nI somehow feel that I do not know anymore what are the current expectations\nfor a release date for 7.3.\n\nAnd when do you think it will be stable enough so that testing of interfaces\n(like pgaccess) will be meaningful (is this the period you call 'slow\ndown').\n\nIavor\n\n--\nwww.pgaccess.org\n\n", "msg_date": "Sat, 13 Jul 2002 11:31:10 +0200", "msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>", "msg_from_op": true, "msg_subject": "7.3 - current expectations for a release date" }, { "msg_contents": "Iavor Raytchev wrote:\n> Hello,\n> \n> I somehow feel that I do not know anymore what are the current expectations\n> for a release date for 7.3.\n\nBeta freeze September 1, final release October/November, is my guess.\n\n> And when do you think it will be stable enough so that testing of interfaces\n> (like pgaccess) will be meaningful (is this the period you call 'slow\n> down').\n\nShould be stable now. Isn't it?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 13 Jul 2002 10:33:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3 - current expectations for a release date" } ]
[ { "msg_contents": "The answer from H2 (Jim Melton).\n\nWhen this feature was being voted on, some vendors had \"cascade\" as a default,\nothers had \"restrict\". So, the compromise was not to define a default.\n<grumble grumble>\n\nAs such providing a \"default\" is a vendor extension and compliance simply\nrequires we also support the standard syntax.\n\nDana\n\n> -----Original Message-----\n> From: Groff, Dana [mailto:Dana.Groff@filetek.com]\n> Sent: Thursday, July 11, 2002 12:43 PM\n> To: 'Tom Lane'; Bruce Momjian\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Should this require CASCADE? \n> \n> \n> I think that is the proper behavior Tom.\n> \n> Also I agree with Bruce that this might be an oversight in \n> the standard. That\n> is why standards evolve. As I write this I am also sending a \n> note to H2 asking\n> about this very issue. The latest working draft still has \n> this construct.\n> \n> Dana\n> \n> > -----Original Message-----\n> > From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> > Sent: Thursday, July 11, 2002 12:36 PM\n> > To: Bruce Momjian\n> > Cc: Groff, Dana; Jan Wieck; Stephan Szabo; \n> > pgsql-hackers@postgresql.org\n> > Subject: Re: [HACKERS] Should this require CASCADE? \n> > \n> > \n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Now, if someone wanted to say CASCADE|RESTRICT was\n> > > required for DROP _only_ if there is some foreign key \n> > references to the\n> > > table, I would be OK with that, but that's not what the \n> > standard says.\n> > \n> > But in fact that is not different from what I propose to \n> > do.
Consider\n> > what such a rule really means:\n> > \t* if no dependencies exist for the object, go ahead and delete.\n> > \t* if dependencies exist, complain.\n> > How is that different from \"the default behavior is RESTRICT\"?\n> > \n> > \t\t\tregards, tom lane\n> > \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n", "msg_date": "Sat, 13 Jul 2002 20:22:15 -0400", "msg_from": "\"Groff, Dana\" <Dana.Groff@filetek.com>", "msg_from_op": true, "msg_subject": "Re: Should this require CASCADE? " } ]
[ { "msg_contents": "I was trying to see if it was possible to create an 'internal' function \nafter bootstrap (i.e. without listing in pg_proc.h). The test case below \nillustrates that it is indeed possible.\n\ntest=# CREATE OR REPLACE FUNCTION mytest(text,int,int) RETURNS text AS \n'text_substr' LANGUAGE 'internal' IMMUTABLE STRICT;\nCREATE FUNCTION\ntest=# select mytest('abcde',2,2);\n mytest\n--------\n bc\n(1 row)\n\nIt made me wonder why don't we always create internal functions this \nway, or at least all except a core set of bootstrapped functions. Am I \nwrong in thinking that it would eliminate the need to initdb every time \na new internal function is added?\n\nWe could have a script, say \"internal_functions.sql\", that would contain \n\"CREATE OR REPLACE FUNCTION...LANGUAGE 'internal'...\" for each internal \nfunction and be executed by initdb. When a new builtin function is added \nto the backend, you could run this script directly to update your catalog.\n\nJust a thought. Any reason we can't or don't want to do this?\n\nJoe\n\n", "msg_date": "Sat, 13 Jul 2002 17:50:38 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "question re internal functions requiring initdb" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> It made me wonder why don't we always create internal functions this \n> way, or at least all except a core set of bootstrapped functions.\n\nI don't believe it will actually work: you *must* add an internal\nfunction to include/catalog/pg_proc.h, or it won't get into the function\nlookup table that's built by Gen_fmgrtab.sh.\n\nIt is true that you don't have to force an initdb right away, but\nthere's an efficiency penalty IIRC (can't bypass the lookup table\nsearch, or something ...
read the fmgr code for details).\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Jul 2002 21:01:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: question re internal functions requiring initdb " }, { "msg_contents": "Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>>It made me wonder why don't we always create internal functions this \n>>way, or at least all except a core set of bootstrapped functions.\n> \n> I don't believe it will actually work: you *must* add an internal\n> function to include/catalog/pg_proc.h, or it won't get into the function\n> lookup table that's built by Gen_fmgrtab.sh.\n> \n> It is true that you don't have to force an initdb right away, but\n> there's an efficiency penalty IIRC (can't bypass the lookup table\n> search, or something ... read the fmgr code for details).\n> \n\nOK -- I see what you mean now. For a *user alias* of an existing builtin \nfunction fmgr_isbuiltin(), which does a binary search on the sorted \nfmgr_builtins array, will fail. So there is a speed penalty in that the \nfunction is looked up with fmgr_lookupByName(), which does a sequential \nscan through the fmgr_builtins array.\n\nRegardless, if the function is not listed in the fmgr_builtins array at \nall, which it won't be if Gen_fmgrtab.sh doesn't see it in pg_proc.h, \nthen the lookup will fail entirely. I guess I would have found this out \non my own if I had carried the experiment out a little farther. Shucks!\n\nJoe\n\n", "msg_date": "Sat, 13 Jul 2002 19:25:53 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: question re internal functions requiring initdb" } ]
[ { "msg_contents": "Probably the most succinct explanation would be to copy & paste from the \nterminal...\n\ntjhart=> create table a_line( foo line );\nCREATE\ntjhart=> insert into a_line ( foo ) values( '(0,0), (1,1)' );\nERROR: line not yet implemented\ntjhart=> select version();\n version\n---------------------------------------------------------------------\n PostgreSQL 7.2.1 on powerpc-apple-darwin5.3, compiled by GCC 2.95.2\n(1 row)\n\n\nThe documentation (datatype-geometric.html) indicates both a 'line' type \nand an 'lseg' type in the summary table at the top of the page. The same \ncode above using the type 'lseg' in place of 'line' works just fine.\n\nWhy can I create a table with a column of type 'line' if I can't insert \ninto it?\n\n", "msg_date": "Sat, 13 Jul 2002 23:09:15 -0600", "msg_from": "Tim Hart <tjhart@mac.com>", "msg_from_op": true, "msg_subject": "line datatype" }, { "msg_contents": "Tim Hart wrote:\n> Probably the most succinct explanation would be to copy & paste from the \n> terminal...\n> \n> tjhart=> create table a_line( foo line );\n> CREATE\n> tjhart=> insert into a_line ( foo ) values( '(0,0), (1,1)' );\n> ERROR: line not yet implemented\n> tjhart=> select version();\n> version\n> ---------------------------------------------------------------------\n> PostgreSQL 7.2.1 on powerpc-apple-darwin5.3, compiled by GCC 2.95.2\n> (1 row)\n> \n> \n> The documentation (datatype-geometric.html) indicates both a 'line' type \n> and an 'lseg' type in the summary table at the top of the page. The same \n> code above using the type 'lseg' in place of 'line' works just fine.\n> \n> Why can I create a table with a column of type 'line' if I can't insert \n> into it?\n\nWell, that's a very good question. 
I see you have to compile PostgreSQL\nwith ENABLE_LINE_TYPE defined in pg_config.h.in and rerun configure.\n\nI see this commit from August 16, 1998:\n\t\n\trevision 1.35\n\tdate: 1998/08/16 04:06:55; author: thomas; state: Exp; lines: +7 -6\n\tDisable not-ready-to-use support code for the line data type.\n\tBracket things with #ifdef ENABLE_LINE_TYPE.\n\tThe line data type has always been used internally to support other \n\ttypes, but I/O routines have never been defined for it.\n\npsql \\dT clearly shows line and lseg:\n\n line | geometric line '(pt1,pt2)'\n lseg | geometric line segment '(pt1,pt2)'\n\nso I think we have both a documentation problem, psql problem, and code\nproblem. Let's see what Thomas says.\n\nFor the short term, I would use lseg because it looks the same.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Jul 2002 10:43:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: line datatype" }, { "msg_contents": "\nOK, I have added comments to \\dT and SGML docs to mention that 'line' is\nnot implemented. This should help future folks.\n\nIt would be nice to get the line type working 100%. Thomas says the\nproblem is input/output format. 
I don't completely understand.\n\n---------------------------------------------------------------------------\n\nTim Hart wrote:\n> Probably the most succinct explanation would be to copy & paste from the \n> terminal...\n> \n> tjhart=> create table a_line( foo line );\n> CREATE\n> tjhart=> insert into a_line ( foo ) values( '(0,0), (1,1)' );\n> ERROR: line not yet implemented\n> tjhart=> select version();\n> version\n> ---------------------------------------------------------------------\n> PostgreSQL 7.2.1 on powerpc-apple-darwin5.3, compiled by GCC 2.95.2\n> (1 row)\n> \n> \n> The documentation (datatype-geometric.html) indicates both a 'line' type \n> and an 'lseg' type in the summary table at the top of the page. The same \n> code above using the type 'lseg' in place of 'line' works just fine.\n> \n> Why can I create a table with a column of type 'line' if I can't insert \n> into it?\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Jul 2002 23:33:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: line datatype" }, { "msg_contents": "> It would be nice to get the line type working 100%. Thomas says the\n> problem is input/output format. I don't completely understand.\n\nThe issue is in choosing an external format for LINE which does not lose\nprecision during dump/reload. 
Internally, LINE is described by a formula\nwhich is likely subject to problems with limited precision for some line\norientations.\n\nDoes anyone have a suggestion (perhaps drawn from another GIS package)\nfor representing lines? We already have this implemented internally, and\nthe algorithms are used to support other data types; the only unresolved\nissue is in how to input the values.\n\n - Thomas\n", "msg_date": "Mon, 15 Jul 2002 23:10:29 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: [SQL] line datatype" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> The issue is in choosing an external format for LINE which does not lose\n> precision during dump/reload.\n\nWhy is this any worse for LINE than for any of the other geometric\ntypes (or for plain floats, for that matter)?\n\nWe do need a solution for exact dump/reload of floating-point data,\nbut I don't see why the lack of it should be reason to disable access\nto the LINE type.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Jul 2002 09:44:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] line datatype " }, { "msg_contents": "Tom Lane wrote:\n> Thomas Lockhart <lockhart@fourpalms.org> writes:\n> > The issue is in choosing an external format for LINE which does not lose\n> > precision during dump/reload.\n> \n> Why is this any worse for LINE than for any of the other geometric\n> types (or for plain floats, for that matter)?\n> \n> We do need a solution for exact dump/reload of floating-point data,\n> but I don't see why the lack of it should be reason to disable access\n> to the LINE type.\n\nI don't understand why dumping the two point values isn't sufficient.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Jul 2002 11:21:05 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] line datatype" }, { "msg_contents": "...\n> > We do need a solution for exact dump/reload of floating-point data,\n> > but I don't see why the lack of it should be reason to disable access\n> > to the LINE type.\n> I don't understand why dumping the two point values isn't sufficient.\n\nWhich two point values? LINE is handled as an equation, not as points,\nunlike the LSEG type which has two points.\n\nOne possibility is to have the external representation *be* the same as\nLSEG, then convert internally. Then we need to decide how to scale those\npoints; maybe always using a unit vector is the right thing to do...\n\n - Thomas\n", "msg_date": "Tue, 16 Jul 2002 08:29:46 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: [SQL] line datatype" }, { "msg_contents": "Thomas Lockhart wrote:\n> ...\n> > > We do need a solution for exact dump/reload of floating-point data,\n> > > but I don't see why the lack of it should be reason to disable access\n> > > to the LINE type.\n> > I don't understand why dumping the two point values isn't sufficient.\n> \n> Which two point values? LINE is handled as an equation, not as points,\n> unlike the LSEG type which has two points.\n\nWell, the \\dT documentation used to show line as two points, so I\nassumed that was how it was specified.\n\n> One possibility is to have the external representation *be* the same as\n> LSEG, then convert internally. Then we need to decide how to scale those\n> points; maybe always using a unit vector is the right thing to do...\n\nNo one likes entering an equation. 
Two points seems the simplest.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Jul 2002 11:38:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] line datatype" }, { "msg_contents": "...\n> Well, the \\dT documentation used to show line as two points, so I\n> assumed that was how it was specified.\n\nHmm. And it seems I entered it a few years ago ;)\n\nCut and paste error. At that time the line type was defined but has\nnever had the i/o routines enabled.\n\n> No one likes entering an equation. Two points seems the simplest.\n\nThat it does.\n\n - Thomas\n", "msg_date": "Tue, 16 Jul 2002 09:07:10 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: [SQL] line datatype" }, { "msg_contents": "On Tuesday 16 July 2002 11:29 am, Thomas Lockhart wrote:\n> > > We do need a solution for exact dump/reload of floating-point data,\n> > > but I don't see why the lack of it should be reason to disable access\n> > > to the LINE type.\n\n> > I don't understand why dumping the two point values isn't sufficient.\n\n> Which two point values? LINE is handled as an equation, not as points,\n> unlike the LSEG type which has two points.\n\n> One possibility is to have the external representation *be* the same as\n> LSEG, then convert internally. Then we need to decide how to scale those\n> points; maybe always using a unit vector is the right thing to do...\n\nLines are entered now by specifying two points, anywhere on the line, right? \nThe internal representation is then slope-intercept? Why not allow either \nthe 'two-point' entry, or direct entry as slope-intercept? How do we \nrepresent lines now in output? Do we pick two arbitrary points on the line? 
\nIf so, I can see Thomas' point here, where the original data entry might have \nspecified two relatively distant points -- but then there's a precision error \nanyway converting to slope-intercept, if indeed that is the internal \nrepresentation. So why not dump in slope-intercept form, if that is the \ninternal representation?\n\nBut, you're telling me floats aren't dumpable/restoreable to exactly the same \nvalue? (????) This can't be good.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 16 Jul 2002 12:10:23 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: [SQL] line datatype" }, { "msg_contents": "Lamar Owen wrote:\n> On Tuesday 16 July 2002 11:29 am, Thomas Lockhart wrote:\n> > > > We do need a solution for exact dump/reload of floating-point data,\n> > > > but I don't see why the lack of it should be reason to disable access\n> > > > to the LINE type.\n> \n> > > I don't understand why dumping the two point values isn't sufficient.\n> \n> > Which two point values? LINE is handled as an equation, not as points,\n> > unlike the LSEG type which has two points.\n> \n> > One possibility is to have the external representation *be* the same as\n> > LSEG, then convert internally. Then we need to decide how to scale those\n> > points; maybe always using a unit vector is the right thing to do...\n> \n> Lines are entered now by specifying two points, anywhere on the line, right? \n> The internal representation is then slope-intercept? Why not allow either \n> the 'two-point' entry, or direct entry as slope-intercept? How do we \n> represent lines now in output? Do we pick two arbitrary points on the line? \n> If so, I can see Thomas' point here, where the original data entry might have \n> specified two relatively distant points -- but then there's a precision error \n> anyway converting to slope-intercept, if indeed that is the internal \n> representation. 
So why not dump in slope-intercept form, if that is the \n> internal representation?\n\nYow, I can see the pain of having slope/intercept and trying to output\ntwo points. What if we store line internally as two points, and convert\nto slope/intercept when needed. That way, it would dump out just as it\nwas entered.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Jul 2002 12:26:49 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] line datatype" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> No one likes entering an equation. Two points seems the simplest.\n\n> That it does.\n\nOn the other hand, if you want to enter two points why don't you just\nuse lseg to begin with? There's not much justification for having a\nseparate line type unless it behaves differently ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Jul 2002 12:30:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] line datatype " }, { "msg_contents": "Tom Lane wrote:\n> Thomas Lockhart <lockhart@fourpalms.org> writes:\n> >> No one likes entering an equation. Two points seems the simplest.\n> \n> > That it does.\n> \n> On the other hand, if you want to enter two points why don't you just\n> use lseg to begin with? There's not much justification for having a\n> separate line type unless it behaves differently ...\n\nI assume the line type keeps going after the two end points, while lseg\ndoesn't.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Jul 2002 12:31:49 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] line datatype" }, { "msg_contents": "> >> No one likes entering an equation. Two points seems the simplest.\n> > That it does.\n> On the other hand, if you want to enter two points why don't you just\n> use lseg to begin with? There's not much justification for having a\n> separate line type unless it behaves differently ...\n\nThey are different. One is infinite in length, the other is finite.\nDistances, etc are calculated differently between the two types.\n\n - Thomas\n", "msg_date": "Tue, 16 Jul 2002 13:42:14 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: [SQL] line datatype" }, { "msg_contents": "On Tuesday 16 July 2002 04:42 pm, Thomas Lockhart wrote:\n> > On the other hand, if you want to enter two points why don't you just\n> > use lseg to begin with? There's not much justification for having a\n> > separate line type unless it behaves differently ...\n\n> They are different. One is infinite in length, the other is finite.\n> Distances, etc are calculated differently between the two types.\n\nFor some of my work a type of 'ray' would be nice... :-) But LSEG's usually \nwork OK as long as you specify an endpoint that is far enough away.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 16 Jul 2002 16:46:57 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: [SQL] line datatype" } ]
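The lseg workaround recommended at the top of this thread can be sketched as follows; the table name is illustrative, and the behavior assumes a 7.2-era server where line input raises "not yet implemented":

```sql
-- 'line' columns can be declared but not loaded (the input routine
-- errors with "line not yet implemented" unless the server was built
-- with ENABLE_LINE_TYPE), so store the two points as an lseg instead:
CREATE TABLE a_seg (foo lseg);
INSERT INTO a_seg (foo) VALUES ('(0,0),(1,1)');

-- lseg uses the same two-point input syntax that failed for 'line',
-- and dumps back out as its two endpoints:
SELECT foo, length(foo) FROM a_seg;
```

As Lamar notes above, an lseg with a sufficiently distant endpoint often serves where a true infinite line (or ray) would be wanted.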
[ { "msg_contents": "I've been running a few functions within schema's. It's annoying that\neverything needs to be qualified as it doesn't allow the functions to be\nmoved very easily.\n\n\nWould it be appropriate for the function to have it's own schema as\npre-pended onto the user path while in the users function?\n\n\n\n", "msg_date": "14 Jul 2002 20:36:43 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "plpgsql and Schemas" }, { "msg_contents": "On Sun, 2002-07-14 at 20:36, Rod Taylor wrote:\n> I've been running a few functions within schema's. It's annoying that\n> everything needs to be qualified as it doesn't allow the functions to be\n> moved very easily.\n> \n> \n> Would it be appropriate for the function to have it's own schema as\n> pre-pended onto the user path while in the users function?\n\nThats a weird way of saying 'prepended to the path during function\nexecution'.\n\n", "msg_date": "14 Jul 2002 20:39:27 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Re: plpgsql and Schemas" }, { "msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n> I've been running a few functions within schema's. It's annoying that\n> everything needs to be qualified as it doesn't allow the functions to be\n> moved very easily.\n> Would it be appropriate for the function to have it's own schema as\n> pre-pended onto the user path while in the users function?\n\nHmm. 
I can think of examples where you wouldn't want that (because\nthe function *should* see the caller's namespace) about as easily\nas cases where you would.\n\nIf a function wants to access \"its own schema\", why shouldn't it\nuse qualified references?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 14 Jul 2002 21:19:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: plpgsql and Schemas " }, { "msg_contents": "On Sun, 2002-07-14 at 21:19, Tom Lane wrote:\n> Rod Taylor <rbt@zort.ca> writes:\n> > I've been running a few functions within schema's. It's annoying that\n> > everything needs to be qualified as it doesn't allow the functions to be\n> > moved very easily.\n> > Would it be appropriate for the function to have it's own schema as\n> > pre-pended onto the user path while in the users function?\n> \n> Hmm. I can think of examples where you wouldn't want that (because\n> the function *should* see the caller's namespace) about as easily\n> as cases where you would.\n> \n> If a function wants to access \"its own schema\", why shouldn't it\n> use qualified references?\n\nI was thinking of the effort put into pg_dump to prevent over qualifying\nreferences in order to allow the user to move stuff easily. It's not a\nbig deal, but does prevent this ability with functions.\n\n", "msg_date": "14 Jul 2002 21:27:11 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Re: plpgsql and Schemas" }, { "msg_contents": "Rod Taylor writes:\n\n> I've been running a few functions within schema's. It's annoying that\n> everything needs to be qualified as it doesn't allow the functions to be\n> moved very easily.\n\n> Would it be appropriate for the function to have it's own schema as\n> pre-pended onto the user path while in the users function?\n\nThe SQL standard has rules on how the effective schema path during a\nfunction execution is determined. 
In a nutshell, it allows you to specify\nthe path as an attribute of the containing schema. E.g.,\n\n CREATE SCHEMA foo PATH here, there;\n\nI haven't thought this through, but you might want to think about it.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 16 Jul 2002 00:55:02 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: plpgsql and Schemas" } ]
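The trade-off Tom describes — a function seeing the caller's namespace versus its own — can be made concrete with a small sketch (the schema and table names are purely illustrative, and plpgsql is assumed to be installed in the database):

```sql
CREATE SCHEMA app;
CREATE TABLE app.counters (n integer);

-- Option 1: the function qualifies references to "its own" schema, so
-- it works under any caller search_path -- at the cost of pinning the
-- schema name in the body, which is Rod's portability complaint:
CREATE FUNCTION app.bump() RETURNS integer AS '
BEGIN
    UPDATE app.counters SET n = n + 1;
    RETURN 1;
END;
' LANGUAGE 'plpgsql';

-- Option 2: leave references unqualified and let the caller's
-- search_path resolve them, the behavior Tom defends above:
SET search_path TO app, public;
SELECT app.bump();
```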
[ { "msg_contents": "RAISE seems to be quite wonky.\n\nThe below function fails to compile due to the function call in the\narguement.\n\ncreate function test(text) \n returns bool \n as '\nbegin\n raise exception ''test %'', quote_literal($1);\n \n return false;\nend;\n' language plpgsql;\n\n\nLikewise, the below fails to compile due to error message being an\nexpression. I know this one used to work not all that long ago. \nDigging through 7.0 docs even found an example of it.\n\ncreate function test2(text) \n returns bool \n as '\nbegin\n raise exception ''test '' || ''more...'';\n \n return false;\nend;\n' language plpgsql;\n\n\n\nAny thoughts on how to fix the first, and if the second could be\nreverted? It's a pain for long messages that they cannot be broken.\n\nYes, I could write an error handling routine which passes nice messages\nto elog(), but I think thats a silly thing to do.\n\n\n\n", "msg_date": "14 Jul 2002 20:37:32 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "PLPGSQL and Exceptions" }, { "msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n> Likewise, the below fails to compile due to error message being an\n> expression. I know this one used to work not all that long ago. \n> Digging through 7.0 docs even found an example of it.\n\nNo, it never worked; that's why it was taken out of the docs.\n\nThe RAISE syntax is indeed annoyingly restrictive --- feel free to\nimprove it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 14 Jul 2002 21:07:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PLPGSQL and Exceptions " } ]
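Until RAISE itself is improved, the usual way around both restrictions Rod shows — no function calls and no expressions as arguments — is to evaluate into a plain variable first, since RAISE does accept a simple variable as a % parameter (a sketch, assuming 7.2-era plpgsql is installed):

```sql
CREATE OR REPLACE FUNCTION test3(text) RETURNS bool AS '
DECLARE
    msg text;
BEGIN
    -- Build the message with any expression or function call here...
    msg := ''test '' || quote_literal($1);
    -- ...then hand RAISE only a literal format and a simple variable:
    RAISE EXCEPTION ''%'', msg;
    RETURN false;
END;
' LANGUAGE 'plpgsql';
```

This also answers the long-message complaint: the concatenation that RAISE rejects inline is legal in the assignment.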
[ { "msg_contents": "OK,\n\nDROP COLUMN now seems to work perfectly. All the old test cases that failed\nnow work fine.\n\nHowever, I'm not happy with the way dropped columns are renamed. I want to\ngive them a name that no-one would ever want to use as a legit column name.\nI don't like this behaviour:\n\ntest=# create table test (a int4, b int4);\nCREATE TABLE\ntest=# alter table test drop a;\nALTER TABLE\ntest=# select dropped_1 from test;\nERROR: Attribute \"dropped_1\" not found\ntest=# alter table test add dropped_1 int4;\nERROR: ALTER TABLE: column name \"dropped_1\" already exists in table \"test\"\n\nIt's a bit confusing, hey?\n\nWhat should we do about it?\n\nMaybe I could make ADD COLUMN give this message instead for dropped columns?\n\nERROR: ALTER TABLE: column name \"dropped_1\" is a dropped column in table\n\"test\" ... or something ...\n\nWe could name the fields \"________dropped_x\" sort of thing perhaps????\n\nChris\n\n", "msg_date": "Mon, 15 Jul 2002 11:45:44 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "More DROP COLUMN" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> However, I'm not happy with the way dropped columns are renamed.\n\nOkay...\n\n> We could name the fields \"________dropped_x\" sort of thing perhaps????\n\nIn practice that would certainly work, especially if we increase\nNAMEDATALEN to 128 or so, as has been proposed repeatedly.\n\nAlternatively, we could invest a lot of work to make it possible for\nattname to be NULL, but I don't see the payoff...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 14 Jul 2002 23:57:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: More DROP COLUMN " }, { "msg_contents": "On Mon, 15 Jul 2002, Christopher Kings-Lynne wrote:\n\n> However, I'm not happy with the way dropped columns are renamed. 
I want to\n> give them a name that no-one would ever want to use as a legit column name.\n> ...\n> We could name the fields \"________dropped_x\" sort of thing perhaps????\n\nI suggest you _dropped_N_XXXXXXXXXXXXXXXX where \"n\" is that same\nsequence number (1, 2, 3, etc.) and the Xs are the hexedecimal\nrepresentation of a 64-bit random number. So you'd get names like\n\"_dropped_2_719fe940a46eb39c\".\n\nThis is easy to generate and highly unlikley to conflict with anything.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Mon, 15 Jul 2002 13:01:34 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: More DROP COLUMN" }, { "msg_contents": "> > We could name the fields \"________dropped_x\" sort of thing perhaps????\n>\n> In practice that would certainly work, especially if we increase\n> NAMEDATALEN to 128 or so, as has been proposed repeatedly.\n\nWell, x is just an integer anyway, so even with 32 it's not a problem...\n\nIn case anyone was wondering btw, if a column named 'dropped_1' already\nexists when you drop column 1 in the table, it will be renamed like this:\n\ndropped1_1\n\nAnd if that also exists, it will become\n\ndropped2_1\n\netc. I put that extra number after dropped and not at the end so prevent it\nbeing off the end of a 32 character name.\n\n> Alternatively, we could invest a lot of work to make it possible for\n> attname to be NULL, but I don't see the payoff...\n\nYeah, I think a weird name should be good enough...\n\nChris\n\n", "msg_date": "Mon, 15 Jul 2002 12:06:47 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: More DROP COLUMN " }, { "msg_contents": "> > etc. 
I put that extra number after dropped and not at the end\n> so prevent it\n> > being off the end of a 32 character name.\n> >\n> > > Alternatively, we could invest a lot of work to make it possible for\n> > > attname to be NULL, but I don't see the payoff...\n> >\n> > Yeah, I think a weird name should be good enough...\n>\n> perhaps starting it with spaces instead of _ would make it even harder\n> to write by accident, so tha name could be\n> \" dropped 0000000001\"\n>\n> or to make it even more self documenting store the drop time,\n> \" col001 dropped@020715.101427\"\n> --------------------------------\n\nWell, are there characters that are illegal in column names that I could\nuse? I did a quick check and couldn't find any!\n\nChris\n\n", "msg_date": "Mon, 15 Jul 2002 15:20:08 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: More DROP COLUMN" }, { "msg_contents": "On Mon, 2002-07-15 at 06:06, Christopher Kings-Lynne wrote:\n> > > We could name the fields \"________dropped_x\" sort of thing perhaps????\n> >\n> > In practice that would certainly work, especially if we increase\n> > NAMEDATALEN to 128 or so, as has been proposed repeatedly.\n> \n> Well, x is just an integer anyway, so even with 32 it's not a problem...\n> \n> In case anyone was wondering btw, if a column named 'dropped_1' already\n> exists when you drop column 1 in the table, it will be renamed like this:\n> \n> dropped1_1\n> \n> And if that also exists, it will become\n> \n> dropped2_1\n> \n> etc. 
I put that extra number after dropped and not at the end so prevent it\n> being off the end of a 32 character name.\n> \n> > Alternatively, we could invest a lot of work to make it possible for\n> > attname to be NULL, but I don't see the payoff...\n> \n> Yeah, I think a weird name should be good enough...\n\nperhaps starting it with spaces instead of _ would make it even harder\nto write by accident, so tha name could be\n \" dropped 0000000001\"\n\nor to make it even more self documenting store the drop time, \n\" col001 dropped@020715.101427\"\n--------------------------------\n\n---------------\nHannu\n\n", "msg_date": "15 Jul 2002 10:17:52 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: More DROP COLUMN" }, { "msg_contents": "On Mon, 2002-07-15 at 09:20, Christopher Kings-Lynne wrote:\n> > > etc. I put that extra number after dropped and not at the end\n> > so prevent it\n> > > being off the end of a 32 character name.\n> > >\n> > > > Alternatively, we could invest a lot of work to make it possible for\n> > > > attname to be NULL, but I don't see the payoff...\n> > >\n> > > Yeah, I think a weird name should be good enough...\n> >\n> > perhaps starting it with spaces instead of _ would make it even harder\n> > to write by accident, so tha name could be\n> > \" dropped 0000000001\"\n> >\n> > or to make it even more self documenting store the drop time,\n> > \" col001 dropped@020715.101427\"\n> > --------------------------------\n> \n> Well, are there characters that are illegal in column names that I could\n> use? 
I did a quick check and couldn't find any!\n\nI guess that \\0 would be unusable (not sure if its illegal)\n\n\\r \\n and \\t (and others < 0x20) are probably quite unlikely too.\n\n--------------\nHannu\n\n", "msg_date": "15 Jul 2002 12:17:11 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: More DROP COLUMN" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> or to make it even more self documenting store the drop time,\n> \" col001 dropped@020715.101427\"\n\nI'm not at all excited about trying to store times, random numbers,\netc in dropped column names. We are not trying to do cryptography\nhere, only invent an improbable name. I do not believe that injecting\npseudo-randomness will help. I'd prefer to keep the names of dropped\ncolumns predictable.\n\n> I guess that \\0 would be unusable (not sure if its illegal)\n\nYou can NOT use \\0, and I don't think other nonprinting characters would\nbe a good idea either. I think a bunch of leading spaces or underscores\nwould be fine.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Jul 2002 08:44:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: More DROP COLUMN " }, { "msg_contents": "\nchris, have you looked at how sapdb (http://www.sapdb.org)\ndoes this ?\n\n/sergio\nps: IANAL\n\n\n\"\"Christopher Kings-Lynne\"\" <chriskl@familyhealth.com.au> escribi� en el\nmensaje news:GNELIHDDFBOCMGBFGEFOAECECDAA.chriskl@familyhealth.com.au...\n> OK,\n>\n> DROP COLUMN now seems to work perfectly. All the old test cases that\nfailed\n> now work fine.\n>\n> However, I'm not happy with the way dropped columns are renamed. 
I want\nto\n> give them a name that no-one would ever want to use as a legit column\nname.\n> I don't like this behaviour:\n>\n> test=# create table test (a int4, b int4);\n> CREATE TABLE\n> test=# alter table test drop a;\n> ALTER TABLE\n> test=# select dropped_1 from test;\n> ERROR: Attribute \"dropped_1\" not found\n> test=# alter table test add dropped_1 int4;\n> ERROR: ALTER TABLE: column name \"dropped_1\" already exists in table\n\"test\"\n>\n> It's a bit confusing, hey?\n>\n> What should we do about it?\n>\n> Maybe I could make ADD COLUMN give this message instead for dropped\ncolumns?\n>\n> ERROR: ALTER TABLE: column name \"dropped_1\" is a dropped column in table\n> \"test\" ... or something ...\n>\n> We could name the fields \"________dropped_x\" sort of thing perhaps????\n>\n> Chris\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n\n", "msg_date": "Mon, 15 Jul 2002 14:45:40 -0300", "msg_from": "\"Sergio A. Kessler\" <sak@ksb.com.ar>", "msg_from_op": false, "msg_subject": "Re: More DROP COLUMN" } ]
[ { "msg_contents": "\n> However, I'm not happy with the way dropped columns are renamed. I want to\n> give them a name that no-one would ever want to use as a legit column name.\n> I don't like this behaviour:\n\nYes, how about prepending a character that would usually need to be escaped.\n\nI like Hannu's proposal with the blanks \" col1 dropped@2002-07-17.10:30:00\",\nthe underscores are too commonly used.\nMaybe add two characters, one special and a backspace after the first blank. \nSo it would print nicely, but be very unlikely.\n\nI would prefer a simple but highly predictable rule, where you can say \"Don't \nname your columns starting with \" \\353\\010\" (blank, greek d, BS) over some random \nalgo that stays out of the way by means of low probability.\n\nAndreas\n", "msg_date": "Mon, 15 Jul 2002 11:10:03 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: More DROP COLUMN" }, { "msg_contents": "On Mon, 15 Jul 2002, Zeugswetter Andreas SB SD wrote:\n\n> I would prefer a simple but highly predictable rule, where you can say\n> \"Don't name your columns starting with \" \\353\\010\" (blank, greek d,\n> BS) over some random algo that stays out of the way by means of low\n> probability.\n\n\\353 is not a delta in most of the character encodings that I use,\nand is not valid at all in ASCII. Non-graphic chars are also likely\nto cause misery because it's not obvious, using normal tools, what\nthey are. (The above example would appear to many people as just\na space.)\n\nI would suggest it's probably a good idea to stick to ASCII graphic\n(i.e., non-control, not delete) characters.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Mon, 15 Jul 2002 19:35:59 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: More DROP COLUMN" } ]
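Curt's point above — keep the marker inside the ASCII graphic range so it stays visible in ordinary tools — is easy to encode as a check. A small sketch (the function name and the policy of also rejecting space, control characters, and DEL are assumptions layered on his suggestion, not project policy):

```python
def is_ascii_graphic(name: str) -> bool:
    """True if every character is a printable ASCII graphic character.

    Rejects control characters (< 0x20, including the \010 backspace
    proposed above), space (0x20), DEL (0x7f), and anything non-ASCII
    such as \353, which renders differently across encodings.
    """
    return all(0x21 <= ord(c) <= 0x7e for c in name)
```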
[ { "msg_contents": "I'm trying to use a closed source program with PostgreSQL over ODBC\n\nThe problem is that it tries to make a table which has a column called\n'cmin' which of course is not allowed.\n\nAre there any plans of either \n\n1) (optionally) renaming such system columns in the ODBC layer \n\n2) renaming system columns to start with pg_ (pg_oid, pg_cmin, pg_*)\n\n3) moving them to another namespace, accessible like a schema\n\nso \n\n select oid from mytable;\n\nwould become \n\n select system.oid from mytable;\n\n\nI guess that SQL is not as flexible as XML in putting anything in its\nown namespace ;(\n\n---------------\nHannu\n\n\n", "msg_date": "15 Jul 2002 12:46:40 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "advice for user column named cmin" } ]
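All three options above amount to keeping user names out of the set of implicit system column names. For a client-side layer (option 1, done in the ODBC driver), the collision test is just a set lookup; the rename convention below (appending an underscore) is only an assumed example, not what any driver actually does:

```python
# PostgreSQL's implicit system columns, which user columns may not shadow.
SYSTEM_COLUMNS = {"oid", "tableoid", "xmin", "cmin", "xmax", "cmax", "ctid"}


def client_side_name(column: str) -> str:
    """Rename a user column that collides with a system column.

    e.g. the closed-source program's 'cmin' would become 'cmin_'
    before the CREATE TABLE is sent to the backend.
    """
    return column + "_" if column.lower() in SYSTEM_COLUMNS else column
```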
[ { "msg_contents": "OK, more DROP COLUMN funny business:\n\nAssuming that selects, updates and deletes all ignore the dropped column,\nwhat happens with things like alter table statements?\n\nYou can still quite happily set the default for a dropped column, etc.\n\nWill I have to add a dropped column check everywhere that a command is\nable to target a column, i.e. create index, cluster, alter table, etc,\netc.? Or is there an easier way?\n\nCheers,\n\nChris\n\n", "msg_date": "Mon, 15 Jul 2002 23:30:05 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "DROP COLUMN" }, { "msg_contents": "On Mon, 2002-07-15 at 11:30, Christopher Kings-Lynne wrote:\n> OK, more DROP COLUMN funny business:\n> \n> Assuming that selects, updates and deletes all ignore the dropped column,\n> what happens with things like alter table statements?\n> \n> You can still quite happily set the default for a dropped column, etc.\n> \n> Will I have to add a dropped column check everywhere that a command is\n> able to target a column, i.e. create index, cluster, alter table, etc,\n> etc.? Or is there an easier way?\n\nEach utility statement does some kind of a SearchSysCache() to determine\nthe status of the column (whether it exists or not).\n\nYou may want to write a wrapper function in lsyscache.c that returns the\nstatus of the column (dropped or not).
Perhaps the att tuple could be\nfetched through this function (processed on the way out) -- though\nlsyscache routines tend to return simple items.\n\n", "msg_date": "15 Jul 2002 12:23:45 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN" }, { "msg_contents": "Rod Taylor wrote:\n> On Mon, 2002-07-15 at 11:30, Christopher Kings-Lynne wrote:\n> > OK, more DROP COLUMN funny business:\n> > \n> > Assuming that selects, updates and deletes all ignore the dropped column,\n> > what happens with things like alter table statements?\n> > \n> > You can still quite happily set the default for a dropped column, etc.\n> > \n> > Will I have to add a dropped column check in everywhere that a command is\n> > able to target a column. ie. create index, cluster, alter table, etc,\n> > etc.? Or is there an easier way?\n> \n> Each utility statement does some kind of a SearchSysCache() to determine\n> the status of the column (whether it exists or not).\n> \n> You may want to write a wrapper function in lsyscache.c that returns the\n> status of the column (dropped or not). Perhaps the att tuple could be\n> fetched through this function (processed on the way out) -- though\n> lsyscache routines tend to return simple items.\n\nExcellent idea. That's how temp tables worked, by bypassing the\nsyscache. I wonder if you could just prevent dropped columns from being\nreturned by the syscache. That may work just fine.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Jul 2002 13:17:45 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Excellent idea. That's how temp tables worked, by bypassing the\n> syscache. 
I wonder if you could just prevent dropped columns from being\n> returned by the syscache. That may work just fine.\n\nNo, it will break all the places that need to see dropped columns.\n\nI agree that a wrapper function is probably an appropriate solution,\nbut only some of the calls of SearchSysCache should use it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Jul 2002 13:41:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Excellent idea. That's how temp tables worked, by bypassing the\n> > syscache. I wonder if you could just prevent dropped columns from being\n> > returned by the syscache. That may work just fine.\n>\n> No, it will break all the places that need to see dropped columns.\n>\n> I agree that a wrapper function is probably an appropriate solution,\n> but only some of the calls of SearchSysCache should use it.\n\nWhat like add another parameter to SearchSysCache*?\n\nAnother question: How do I fill out the ObjectAddress when trying to drop\nrelated objects?\n\neg:\n\n object.classId = ??;\n object.objectId = ??;\n object.objectSubId = ??;\n\n\t performDeletion(&object, behavior);\n\nChris\n\n", "msg_date": "Tue, 16 Jul 2002 11:46:30 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n>> I agree that a wrapper function is probably an appropriate solution,\n>> but only some of the calls of SearchSysCache should use it.\n\n> What like add another parameter to SearchSysCache*?\n\nDefinitely *not*; I don't want to kluge up every call to SearchSysCache\nwith a feature that's only relevant to a small number of them.\n\n> Another question: How do I fill out the ObjectAddress when trying to drop\n> related objects?\n\nA column would be classId = 
RelOid_pg_class, objectId = OID of relation,\nobjectSubId = column's attnum.\n\nBTW, it occurred to me recently that most of the column-specific\nAlterTable operations will happily try to alter system columns (eg,\nOID). In most cases this makes no sense and should be forbidden.\nIt definitely makes no sense for DROP COLUMN...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Jul 2002 00:09:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN " }, { "msg_contents": "Tom Lane wrote:\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> >> I agree that a wrapper function is probably an appropriate solution,\n> >> but only some of the calls of SearchSysCache should use it.\n> \n> > What like add another parameter to SearchSysCache*?\n> \n> Definitely *not*; I don't want to kluge up every call to SearchSysCache\n> with a feature that's only relevant to a small number of them.\n\nUh, then what? The only idea I had was to set a static boolean variable in\nsyscache.c that controls whether dropped columns are returned, and have\nan enable/disable function that can turn it on/off. The only problem is\nthat an elog inside a syscache lookup would leave that value set.\n\nMy only other idea is to make a syscache that is like ATTNAME except\nthat it doesn't return a dropped column. I could probably code that up\nif you wish.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Jul 2002 00:38:49 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN" }, { "msg_contents": "> Uh, then what?
The only idea I had was to set a static boolean\n> variable in\n> syscache.c that controls whether dropped columns are returned, and have\n> an enable/disable function that can turn it on/off. The only problem is\n> that an elog inside a syscache lookup would leave that value set.\n>\n> My only other idea is to make a syscache that is like ATTNAME except\n> that it doesn't return a dropped column. I could probably code that up\n> if you wish.\n\nThat'd be cool.\n\nI guess the thing is that either way, I will need to manually change every\nsingle instance where a dropped column should be avoided. So, really\nthere's not much difference between me changing the SysCache search to use\nATTNAMEUNDROPPED or whatever, or just checking the attisdropped field of the\ntuple in the same way that you must always check that attnum > 0.\n\nIn fact, looking at it logically...if all the commands currently are\nrequired to check that they're not modifying a system column, then why not\nadd the requirement that they must also not modify dropped columns? I can\ndo a careful doc search and try to make sure I've touched everything...\n\nChris\n\n", "msg_date": "Tue, 16 Jul 2002 12:47:59 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > Uh, then what? The only idea I had was to set a static boolean\n> > variable in\n> > syscache.c that controls whether dropped columns are returned, and have\n> > an enable/disable function that can turn it on/off. The only problem is\n> > that an elog inside a syscache lookup would leave that value set.\n> >\n> > My only other idea is to make a syscache that is like ATTNAME except\n> > that it doesn't return a dropped column.
I could probably code that up\n> > if you wish.\n> \n> That'd be cool.\n> \n> I guess the thing is that either way, I will need to manually change every\n> single instance where a dropped column should be avoided. So, really\n> there's not much difference between me changing the SysCache search to use\n> ATTNAMEUNDROPPED or whatever, or just checking the attisdropped field of the\n> tuple in the same way that you must always check that attnum > 0.\n> \n> In fact, looking at it logically...if all the commands currently are\n> required to check that they're not modifiying a system column, then why not\n> add the requirement that they must also not modify dropped columns? I can\n> do a careful doc search and try to make sure I've touched everything...\n\nMakes sense. Of course, we could make a syscache that didn't return\nsystem columns either.\n\nActually, the original argument for negative attno's for dropped columns\nwas exactly for this case, that the system column check would catch\ndropped columns too, but it causes other problems that are harder to fix\nso we _dropped_ the idea.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Jul 2002 00:50:59 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> Definitely *not*; I don't want to kluge up every call to SearchSysCache\n>> with a feature that's only relevant to a small number of them.\n\n> Uh, then what?\n\nThen we make a wrapper function. Something like\n\n\tGetUndeletedColumnByName(relid,attname)\n\nreplaces SearchSysCache(ATTNAME,...) in those places where you don't\nwant to see deleted columns. 
It'd return NULL if it finds a column\ntuple but sees it's deleted.\n\n\tGetUndeletedColumnByNum(relid,attnum)\n\nreplaces SearchSysCache(ATTNUM,...) similarly.\n\n> My only other idea is to make a syscache that is like ATTNAME except\n> that it doesn't return a dropped column.\n\nThat would mean duplicate storage of tuples inside the catcache...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Jul 2002 00:56:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> Definitely *not*; I don't want to kluge up every call to SearchSysCache\n> >> with a feature that's only relevant to a small number of them.\n> \n> > Uh, then what?\n> \n> Then we make a wrapper function. Something like\n> \n> \tGetUndeletedColumnByName(relid,attname)\n> \n> replaces SearchSysCache(ATTNAME,...) in those places where you don't\n> want to see deleted columns. It'd return NULL if it finds a column\n> tuple but sees it's deleted.\n> \n> \tGetUndeletedColumnByNum(relid,attnum)\n> \n> replaces SearchSysCache(ATTNUM,...) similarly.\n\nGood idea.\n\n> > My only other idea is to make a syscache that is like ATTNAME except\n> > that it doesn't return a dropped column.\n> \n> That would mean duplicate storage of tuples inside the catcache...\n\nNo, I was thinking of something that did the normal ATTNAME lookup in\nthe syscache code, then returned NULL on dropped columns; similar to\nyour idea but done inside the syscache code rather than in a separate\nfunction.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Jul 2002 00:58:50 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN" }, { "msg_contents": "> Actually, the original argument for negative attno's for dropped columns\n> was exactly for this case, that the system column check would catch\n> dropped columns too, but it causes other problems that are harder to fix\n> so we _dropped_ the idea.\n\nWell, negative attnums are a good idea and yes, you sort of avoid all these\nproblems. However, the backend is _full_ of stuff like this:\n\nif (attnum < 0)\n\telog(ERROR, \"Cannot footle system attribute.\");\n\nBut the problem is that we'd have to change all of them anyway in a negative\nattnum implementation, since they're not system attributes, they're dropped\ncolumns.\n\nBut let's not start another thread about this!!\n\nChris\n\n", "msg_date": "Tue, 16 Jul 2002 13:02:14 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN" }, { "msg_contents": "> > In fact, looking at it logically...if all the commands currently are\n> > required to check that they're not modifiying a system column,\n> then why not\n> > add the requirement that they must also not modify dropped\n> columns? I can\n> > do a careful doc search and try to make sure I've touched everything...\n>\n> Makes sense. Of course, we could make a syscache that didn't return\n> system columns either.\n\nActually - are you certain that every command uses a SearchSysCache and not\nsome other weirdness? 
If we have to do the odd exception, then maybe we\nshould do them all as 'exceptions'?\n\nChris\n\n", "msg_date": "Tue, 16 Jul 2002 13:03:14 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > > In fact, looking at it logically...if all the commands currently are\n> > > required to check that they're not modifiying a system column,\n> > then why not\n> > > add the requirement that they must also not modify dropped\n> > columns? I can\n> > > do a careful doc search and try to make sure I've touched everything...\n> >\n> > Makes sense. Of course, we could make a syscache that didn't return\n> > system columns either.\n> \n> Actually - are you certain that every command uses a SearchSysCache and not\n> some other weirdness? If we have to do the odd exception, then maybe we\n> should do them all as 'exceptions'?\n\nI actually don't know. I know all the table name lookups do use\nsyscache or temp tables wouldn't have worked. ;-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Jul 2002 01:04:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Actually - are you certain that every command uses a SearchSysCache and not\n> some other weirdness?\n\nThey probably don't. 
You'll have to look closely at places that use the\nTupleDesc from a relcache entry.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Jul 2002 01:15:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN " }, { "msg_contents": "> -----Original Message-----\n> From: Bruce Momjian\n> \n> Christopher Kings-Lynne wrote:\n> > > Uh, then what? The only idea I had was to set a static boolean\n> > > variable in\n> > > syscache.c that controls whether droppped columns are \n> returned, and have\n> > > a enable/disable functions that can turn it on/off. The only \n> problem is\n> > > that an elog inside a syscache lookup would leave that value set.\n> > >\n> > > My only other idea is to make a syscache that is like ATTNAME except\n> > > that it doesn't return a dropped column. I could probably \n> code that up\n> > > if you wish.\n> > \n> > That'd be cool.\n> > \n> > I guess the thing is that either way, I will need to manually \n> change every\n> > single instance where a dropped column should be avoided. So, really\n> > there's not much difference between me changing the SysCache \n> search to use\n> > ATTNAMEUNDROPPED or whatever, or just checking the attisdropped \n> field of the\n> > tuple in the same way that you must always check that attnum > 0.\n> > \n> > In fact, looking at it logically...if all the commands currently are\n> > required to check that they're not modifiying a system column, \n> then why not\n> > add the requirement that they must also not modify dropped \n> columns? I can\n> > do a careful doc search and try to make sure I've touched everything...\n> \n> Makes sense. 
Of course, we could make a syscache that didn't return\n> system columns either.\n> \n> Actually, the original argument for negative attno's for dropped columns\n> was exactly for this case, that the system column check would catch\n> dropped columns too, \n\n> but it causes other problems that are harder to fix\n> so we _dropped_ the idea.\n\nWhat does this mean ?\nBTW would we do nothing for clients after all ?\n\nregards,\nHiroshi Inoue\n\n", "msg_date": "Wed, 17 Jul 2002 01:19:42 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN" }, { "msg_contents": "Hiroshi Inoue wrote:\n> > Makes sense. Of course, we could make a syscache that didn't return\n> > system columns either.\n> > \n> > Actually, the original argument for negative attno's for dropped columns\n> > was exactly for this case, that the system column check would catch\n> > dropped columns too, \n> \n> > but it causes other problems that are harder to fix\n> > so we _dropped_ the idea.\n> \n> What does this mean ?\n\nClient programmers preferred the dropped flag rather than negative\nattno's so we went with that.\n\n> BTW would we do nothing for clients after all ?\n\nClients will now need to check that dropped flag.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup.
Of course, we could make a syscache that didn't return\n> > > system columns either.\n> > > \n> > > Actually, the original argument for negative attno's for dropped columns\n> > > was exactly for this case, that the system column check would catch\n> > > dropped columns too, \n> > \n> > > but it causes other problems that are harder to fix\n> > > so we _dropped_ the idea.\n> > \n> > What does this mean ?\n> \n> Client programmers prefered the dropped flag rather than negative\n> attno's so we went with that.\n\nWhile you are at it,could you add another flag is_system ?\n\n<evil grin>\n\n-----------\nHannu\n\n", "msg_date": "16 Jul 2002 20:29:36 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> > > Makes sense. Of course, we could make a syscache that didn't return\n> > > system columns either.\n> > >\n> > > Actually, the original argument for negative attno's for dropped columns\n> > > was exactly for this case, that the system column check would catch\n> > > dropped columns too,\n> >\n> > > but it causes other problems that are harder to fix\n> > > so we _dropped_ the idea.\n> >\n> > What does this mean ?\n> \n> Client programmers prefered the dropped flag rather than negative\n> attno's so we went with that.\n\nWhat I asked you is what *harder to fix* means. \n \n> > BTW would we do nothing for clients after all ?\n> \n> Clients will now need to check that dropped flag.\n\nClients would have to check the flag everywhere\npg_attribute appears. \nWhy should clients do such a thing ?\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n", "msg_date": "Wed, 17 Jul 2002 08:37:13 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN" }, { "msg_contents": "Hiroshi Inoue wrote:\n> Bruce Momjian wrote:\n> > \n> > Hiroshi Inoue wrote:\n> > > > Makes sense. 
Of course, we could make a syscache that didn't return\n> > > system columns either.\n> > > \n> > > Actually, the original argument for negative attno's for dropped columns\n> > > was exactly for this case, that the system column check would catch\n> > > dropped columns too, \n> > \n> > > but it causes other problems that are harder to fix\n> > > so we _dropped_ the idea.\n> > \n> > What does this mean ?\n> \n> Client programmers preferred the dropped flag rather than negative\n> attno's so we went with that.\n\nWhile you are at it, could you add another flag is_system ?\n\n<evil grin>\n\n-----------\nHannu\n\n", "msg_date": "16 Jul 2002 20:29:36 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> > > Makes sense.
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Jul 2002 23:15:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> What I asked you is what *harder to fix* means. \n\n> Uh, some said that having attno's like 1,2,3,5,7,8,9 with gaps would\n> cause coding problems in client applications, and that was easier to\n> have the numbers as 1-9 and check a flag if the column is dropped. Why\n> that is easier than having gaps, I don't understand. I voted for the\n> gaps (with negative attno's) but client coders liked the flag, so we\n> went with that.\n\nIt seems to me that the problems Chris is noticing have to do with\ngaps in the sequence of valid (positive) attnums. I don't believe that\nthe negative-attnum approach to marking deleted columns would make those\nissues any easier (or harder) to fix. Either way you have a gap.\n\nBut since the historical convention is \"negative attnum is a system\ncolumn\", and deleted columns are *not* system columns, I prefer the idea\nof using a separate marker for deleted columns. AFAICT the comments\nfrom application coders have also been that they don't want to confuse\nthese two concepts.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Jul 2002 00:04:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN " }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> > Bruce Momjian wrote:\n> > >\n> > > Hiroshi Inoue wrote:\n> \n> > > > BTW would we do nothing for clients after all ?\n> > >\n> > > Clients will now need to check that dropped flag.\n> >\n> > Clients would have to check the flag everywhere\n> > pg_attribute appears.\n> > Why should clients do such a thing ?\n> \n> Well, good question. 
They could easily skip the dropped columns if we\n> used negative attno's because they usually already skip system columns.\n> However, they prefered a specific dropped column flag and positive\n> attno's. I don't know why. They would have to explain.\n\nI don't stick to negative attno's but\n \n> >From my perspective, when client coders like Dave Page and others say\n> they would prefer the flag to the negative attno's, I don't have to\n> understand. I just take their word for it.\n\ndo they really love to check attisdropped everywhere ?\nIsn't it the opposite of the encapsulation ?\nI don't understand why we would do nothing for clients.\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n", "msg_date": "Wed, 17 Jul 2002 13:11:42 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN" }, { "msg_contents": "On Wed, 2002-07-17 at 09:11, Hiroshi Inoue wrote:\n> Bruce Momjian wrote:\n> \n> > From my perspective, when client coders like Dave Page and others say\n> > they would prefer the flag to the negative attno's, I don't have to\n> > understand. I just take their word for it.\n> \n> do they really love to check attisdropped everywhere ?\n> Isn't it the opposite of the encapsulation ?\n> I don't understand why we would do nothing for clients.\n\nAFAIK, there is separate work being done on defining SQL99 compatible\nsystem views, that most client apps could and should use. 
\n\nBut those (few) apps that still need intimate knowledge about postrges'\ninternals will always have to query the original system _tables_.\n\nAlso, as we have nothing like Oracles ROWNR, I think it will be quite\nhard to have colnums without gaps in the system views, so we could\nperhaps have a stopgap solution of adding logical column numbers (\n(pg_attribute.attlognum) that will be changed every time a col is\nadded/dropped just for that purpose.\n\n-------------\nHannu\n\n\n", "msg_date": "17 Jul 2002 09:13:09 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN" }, { "msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> What I asked you is what *harder to fix* means.\n> \n> > Uh, some said that having attno's like 1,2,3,5,7,8,9 with gaps would\n> > cause coding problems in client applications, and that was easier to\n> > have the numbers as 1-9 and check a flag if the column is dropped. Why\n> > that is easier than having gaps, I don't understand. I voted for the\n> > gaps (with negative attno's) but client coders liked the flag, so we\n> > went with that.\n> \n> It seems to me that the problems Chris is noticing have to do with\n> gaps in the sequence of valid (positive) attnums. I don't believe that\n> the negative-attnum approach to marking deleted columns would make those\n> issues any easier (or harder) to fix. Either way you have a gap.\n\nHave I ever mentioned that negative attno's is better\nthan the attisdropped flag implemetation in the handling\nof gaps in attnums ? 
And I don't object to the attisdropped\nflag implemetation as long as it doesn't scatter the \nattisdropped test around client applications.\nWhy would you like to do nothing for clients ?\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n", "msg_date": "Wed, 17 Jul 2002 13:41:08 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN" }, { "msg_contents": "On Wed, 2002-07-17 at 11:29, Christopher Kings-Lynne wrote:\n> > > But those (few) apps that still need intimate knowledge about postrges'\n> > > internals will always have to query the original system _tables_.\n> > > \n> > > Also, as we have nothing like Oracles ROWNR, I think it will be quite\n> > > hard to have colnums without gaps in the system views,\n> > \n> > Agreed. However do we have to give up all views which omit\n> > dropped columns ? \n> \n> What's Oracle's ROWNR?\n\nA pseudocolumn that is always the number of row as it is retrieved.\n\nso if we had it, we could do something like\n\nselect\n ROWNUM as attlognum,\n attname\nfrom (\n select attname\n from pg_attribute\n where attrelid = XXX\n and attisdropped\n order by attnum\n ) att\norder by attlognum;\n\nand have nice consecutive colnums\n\nthe internal select is needed because ROWNUM is generated in the\nexecutor as the tuple is output, so sorting it later would mess it up\n\n-------------\nHannu\n\n\n", "msg_date": "17 Jul 2002 09:44:01 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Have I ever mentioned that negative attno's is better\n> than the attisdropped flag implemetation in the handling\n> of gaps in attnums ?\n\nHow so? 
I don't see any improvement ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Jul 2002 00:46:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Have I ever mentioned that negative attno's is better\n> > than the attisdropped flag implementation in the handling\n> > of gaps in attnums ?\n> \n> How so? I don't see any improvement ...\n\nSorry, please ignore my above words if they have no meaning to you.\n\nMy comments about this item always seem to be misunderstood.\nI've never intended to insist that my trial work using\nnegative attno's was better than the attisdropped implementation.\nI've only intended to guard my work from being evaluated\nunfairly. In my feeling you evaluated my work unfairly without\nany verification twice. I've protested to you about it\neach time but never got your explicit reply. Or have I missed\nyour reply ?\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n", "msg_date": "Wed, 17 Jul 2002 14:26:18 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Also, as we have nothing like Oracles ROWNR, I think it will be quite\n> hard to have colnums without gaps in the system views, so we could\n> perhaps have a stopgap solution of adding logical column numbers (\n> (pg_attribute.attlognum) that will be changed every time a col is\n> added/dropped just for that purpose.\n\n[ thinks... ] I don't believe this would make life any easier, really.\nInside the backend it's not much help, because we still have to look\nat every single attnum reference to see if it should be logical or\nphysical attnum. On the client side it seems promising at first sight\n...
but the client will still break if it tries to correlate the\nlogical colnum it sees with physical colnums in pg_attrdef and other\nsystem catalogs.\n\nBottom line AFAICT is that it's a lot of work and a lot of code\nto examine either way :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Jul 2002 02:26:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN " }, { "msg_contents": "> > But those (few) apps that still need intimate knowledge about postrges'\n> > internals will always have to query the original system _tables_.\n> > \n> > Also, as we have nothing like Oracles ROWNR, I think it will be quite\n> > hard to have colnums without gaps in the system views,\n> \n> Agreed. However do we have to give up all views which omit\n> dropped columns ? \n\nWhat's Oracle's ROWNR?\n\nChris\n\n", "msg_date": "Wed, 17 Jul 2002 14:29:50 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN" }, { "msg_contents": "Hannu Krosing wrote:\n> \n> On Wed, 2002-07-17 at 09:11, Hiroshi Inoue wrote:\n> > Bruce Momjian wrote:\n> >\n> > > From my perspective, when client coders like Dave Page and others say\n> > > they would prefer the flag to the negative attno's, I don't have to\n> > > understand. I just take their word for it.\n> >\n> > do they really love to check attisdropped everywhere ?\n> > Isn't it the opposite of the encapsulation ?\n> > I don't understand why we would do nothing for clients.\n> \n> AFAIK, there is separate work being done on defining SQL99 compatible\n> system views, that most client apps could and should use.\n> \n> But those (few) apps that still need intimate knowledge about postrges'\n> internals will always have to query the original system _tables_.\n> \n> Also, as we have nothing like Oracles ROWNR, I think it will be quite\n> hard to have colnums without gaps in the system views,\n\nAgreed. 
However do we have to give up all views which omit\ndropped columns ? \n\n> so we could\n> perhaps have a stopgap solution of adding logical column numbers (\n> (pg_attribute.attlognum) that will be changed every time a col is\n> added/dropped just for that purpose.\n> \n> -------------\n> Hannu\n\n-- \nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n", "msg_date": "Wed, 17 Jul 2002 15:30:23 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN" }, { "msg_contents": "I sent a draft by mistake, sorry.\n\nHannu Krosing wrote:\n> \n> On Wed, 2002-07-17 at 09:11, Hiroshi Inoue wrote:\n> > Bruce Momjian wrote:\n> >\n> > > From my perspective, when client coders like Dave Page and others say\n> > > they would prefer the flag to the negative attno's, I don't have to\n> > > understand. I just take their word for it.\n> >\n> > do they really love to check attisdropped everywhere ?\n> > Isn't it the opposite of the encapsulation ?\n> > I don't understand why we would do nothing for clients.\n> \n> AFAIK, there is separate work being done on defining SQL99 compatible\n> system views, that most client apps could and should use.\n> \n> But those (few) apps that still need intimate knowledge about postrges'\n> internals will always have to query the original system _tables_.\n> \n> Also, as we have nothing like Oracles ROWNR, I think it will be quite\n> hard to have colnums without gaps in the system views,\n\nAgreed. However do we have to give up all views which omit\ndropped columns ? Logical numbers aren't always needed.\nI think the system view created by 'CREATE VIEW xxxx as\nselect * from pg_attribute where not attisdropped' has\nits reason for existing. 
\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n", "msg_date": "Wed, 17 Jul 2002 15:48:42 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN" }, { "msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <hannu@tm.ee> writes:\n> > Also, as we have nothing like Oracles ROWNR, I think it will be quite\n> > hard to have colnums without gaps in the system views, so we could\n> > perhaps have a stopgap solution of adding logical column numbers (\n> > (pg_attribute.attlognum) that will be changed every time a col is\n> > added/dropped just for that purpose.\n> \n> [ thinks... ] I don't believe this would make life any easier, really.\n> Inside the backend it's not much help, because we still have to look\n> at every single attnum reference to see if it should be logical or\n> physical attnum. On the client side it seems promising at first sight\n> ... but the client will still break if it tries to correlate the\n> logical colnum it sees with physical colnums in pg_attrdef and other\n> system catalogs.\n\nWhy do we have to give up all even though we can't handle\nphysical/logical attnums in the same way ?\n\nregards, \nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n", "msg_date": "Wed, 17 Jul 2002 17:24:53 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN" }, { "msg_contents": "On Wed, 2002-07-17 at 08:26, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > Also, as we have nothing like Oracles ROWNR, I think it will be quite\n> > hard to have colnums without gaps in the system views, so we could\n> > perhaps have a stopgap solution of adding logical column numbers (\n> > (pg_attribute.attlognum) that will be changed every time a col is\n> > added/dropped just for that purpose.\n> \n> [ thinks... 
] I don't believe this would make life any easier, really.\n> Inside the backend it's not much help, because we still have to look\n> at every single attnum reference to see if it should be logical or\n> physical attnum. \n\nI meant this as a workaround for missing ROWNR pseudocolumn.\n\nAll backend functions would still use real attnum's. And I doubt that\nbackend will ever work though system views.\n\nAdding them should touch _only_ CREATE TABLE, ADD COLUMN, DROP COLUMN\nplus the system views and possibly output from SELECT(*), if we allow\nlogical reordering of columns by changing attlognum.\n\nOf course we would not need them if we had ROWNR (or was it ROWNUM ;),\nexcept for the hypothetical column reordering (which would be useful for\nALTER COLUMN CHANGE TYPE too)\n\n\n> On the client side it seems promising at first sight\n> ... but the client will still break if it tries to correlate the\n> logical colnum it sees with physical colnums in pg_attrdef and other\n> system catalogs.\n\nOne can alway look it up in pg_attribute ;)\n\nJust remember to use attlognum _only_ for presentation.\n\n> Bottom line AFAICT is that it's a lot of work and a lot of code\n> to examine either way :-(\n\nYes, I see that it can open another can of worms .\n\n--------------\nHannu\n\n\n", "msg_date": "17 Jul 2002 11:28:17 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN" }, { "msg_contents": "On Wed, 2002-07-17 at 08:48, Hiroshi Inoue wrote:\n> I sent a draft by mistake, sorry.\n> \n> Hannu Krosing wrote:\n> > \n> > On Wed, 2002-07-17 at 09:11, Hiroshi Inoue wrote:\n> > > Bruce Momjian wrote:\n> > >\n> > > > From my perspective, when client coders like Dave Page and others say\n> > > > they would prefer the flag to the negative attno's, I don't have to\n> > > > understand. 
I just take their word for it.\n> > >\n> > > do they really love to check attisdropped everywhere ?\n> > > Isn't it the opposite of the encapsulation ?\n> > > I don't understand why we would do nothing for clients.\n> > \n> > AFAIK, there is separate work being done on defining SQL99 compatible\n> > system views, that most client apps could and should use.\n> > \n> > But those (few) apps that still need intimate knowledge about postrges'\n> > internals will always have to query the original system _tables_.\n> > \n> > Also, as we have nothing like Oracles ROWNR, I think it will be quite\n> > hard to have colnums without gaps in the system views,\n> \n> Agreed. However do we have to give up all views which omit\n> dropped columns ? Logical numbers aren't always needed.\n\nOf course not. I just proposed it as a solution for getting\nORDINAL_POSITION for ANSI/ISO system view COLUMNS.\n\nThe standard view is defined below but we will no doubt have to\nimplement it differently ;)\n\nCREATE VIEW COLUMNS AS\n SELECT DISTINCT\n TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME,\n C.COLUMN_NAME, ORDINAL_POSITION,\n CASE WHEN EXISTS\n ( SELECT *\n FROM DEFINITION_SCHEMA.SCHEMATA AS S \n WHERE ( TABLE_CATALOG, TABLE_SCHEMA )\n = (S.CATALOG_NAME, S.SCHEMA_NAME )\n AND \n ( SCHEMA_OWNER IN\n ( PUBLIC , CURRENT_USER )\n OR SCHEMA_OWNER IN\n ( SELECT ROLE_NAME\n FROM ENABLED_ROLES ) ) )\n THEN COLUMN_DEFAULT\n ELSE NULL\n END AS COLUMN_DEFAULT,\n IS_NULLABLE,\n COALESCE (D1.DATA_TYPE, D2.DATA_TYPE)\n AS DATA_TYPE,\n COALESCE (D1.CHARACTER_MAXIMUM_LENGTH,\n D2.CHARACTER_MAXIMUM_LENGTH)\n AS CHARACTER_MAXIMUM_LENGTH,\n COALESCE (D1.CHARACTER_OCTET_LENGTH, D2.CHARACTER_OCTET_LENGTH)\n AS CHARACTER_OCTET_LENGTH,\n COALESCE (D1.NUMERIC_PRECISION, D2.NUMERIC_PRECISION)\n AS NUMERIC_PRECISION,\n COALESCE (D1.NUMERIC_PRECISION_RADIX, D2.NUMERIC_PRECISION_RADIX)\n AS NUMERIC_PRECISION_RADIX,\n COALESCE (D1.NUMERIC_SCALE, D2.NUMERIC_SCALE)\n AS NUMERIC_SCALE,\n COALESCE (D1.DATETIME_PRECISION, 
D2.DATETIME_PRECISION)\n AS DATETIME_PRECISION,\n COALESCE (D1.INTERVAL_TYPE, D2.INTERVAL_TYPE)\n AS INTERVAL_TYPE,\n COALESCE (D1.INTERVAL_PRECISION, D2.INTERVAL_PRECISION)\n AS INTERVAL_PRECISION,\n COALESCE (C1.CHARACTER_SET_CATALOG, C2.CHARACTER_SET_CATALOG)\n AS CHARACTER_SET_CATALOG,\n COALESCE (C1.CHARACTER_SET_SCHEMA, C2.CHARACTER_SET_SCHEMA)\n AS CHARACTER_SET_SCHEMA,\n COALESCE (C1.CHARACTER_SET_NAME, C2.CHARACTER_SET_NAME)\n AS CHARACTER_SET_NAME,\n COALESCE (D1.COLLATION_CATALOG, D2.COLLATION_CATALOG)\n AS COLLATION_CATALOG,\n COALESCE (D1.COLLATION_SCHEMA, D2.COLLATION_SCHEMA)\n AS COLLATION_SCHEMA,\n COALESCE (D1.COLLATION_NAME, D2.COLLATION_NAME)\n AS COLLATION_NAME,\n DOMAIN_CATALOG, DOMAIN_SCHEMA, DOMAIN_NAME, \n COALESCE (D1.USER_DEFINED_TYPE_CATALOG,\n D2.USER_DEFINED_TYPE_CATALOG)\n AS UDT_CATALOG,\n COALESCE (D1.USER_DEFINED_TYPE_SCHEMA,\n D2.USER_DEFINED_TYPE_SCHEMA)\n AS UDT_SCHEMA,\n COALESCE (D1.USER_DEFINED_TYPE_NAME, D2.USER_DEFINED_TYPE_NAME)\n AS UDT_NAME,\n COALESCE (D1.SCOPE_CATALOG, D2.SCOPE_CATALOG) AS SCOPE_CATALOG,\n COALESCE (D1.SCOPE_SCHEMA, D2.SCOPE_SCHEMA) AS SCOPE_SCHEMA,\n COALESCE (D1.SCOPE_NAME, D2.SCOPE_NAME) AS SCOPE_NAME,\n COALESCE (D1.MAXIMUM_CARDINALITY, D2.MAXIMUM_CARDINALITY)\n AS MAXIMUM_CARDINALITY,\n COALESCE (D1.DTD_IDENTIFIER, D2.DTD_IDENTIFIER) AS DTD_IDENTIFIER,\n IS_SELF_REFERENCING\n FROM ( ( DEFINITION_SCHEMA.COLUMNS AS C\n LEFT JOIN\n ( DEFINITION_SCHEMA.DATA_TYPE_DESCRIPTOR AS D1\n LEFT JOIN\n DEFINITION_SCHEMA.COLLATIONS AS C1\n ON ( ( C1.COLLATION_CATALOG, C1.COLLATION_SCHEMA,\n C1.COLLATION_NAME )\n = ( D1.COLLATION_CATALOG, D1.COLLATION_SCHEMA,\n D1.COLLATION_NAME ) ) )\n ON ( ( C.TABLE_CATALOG, C.TABLE_SCHEMA, C.TABLE_NAME,\n 'TABLE', C.DTD_IDENTIFIER )\n = ( D1.OBJECT_CATALOG, D1.OBJECT_SCHEMA, D1.OBJECT_NAME,\n D1.OBJECT_TYPE, D1.DTD_IDENTIFIER ) ) ) )\n LEFT JOIN\n ( DEFINITION_SCHEMA.DATA_TYPE_DESCRIPTOR AS D2\n LEFT JOIN\n DEFINITION_SCHEMA.COLLATIONS AS C2\n ON ( ( C2.COLLATION_CATALOG, 
C2.COLLATION_SCHEMA,\n C2.COLLATION_NAME )\n = ( D2.COLLATION_CATALOG, D2.COLLATION_SCHEMA,\n D2.COLLATION_NAME ) ) )\n ON ( ( C.DOMAIN_CATALOG, C.DOMAIN_SCHEMA, C.DOMAIN_NAME, \n 'DOMAIN', C.DTD_IDENTIFIER )\n = ( D2.OBJECT_CATALOG, D2.OBJECT_SCHEMA, D2.OBJECT_NAME,\n D2.OBJECT_TYPE, D2.DTD_IDENTIFIER ) )\n WHERE ( C.TABLE_CATALOG, C.TABLE_SCHEMA, C.TABLE_NAME,\n C.COLUMN_NAME ) IN \n ( SELECT\n TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME,\n COLUMN_NAME\n FROM DEFINITION_SCHEMA.COLUMN_PRIVILEGES\n WHERE ( SCHEMA_OWNER IN\n ( 'PUBLIC', CURRENT_USER )\n OR\n SCHEMA_OWNER IN\n ( SELECT ROLE_NAME\n FROM ENABLED_ROLES ) ) )\n AND\n C.TABLE_CATALOG\n = ( SELECT CATALOG_NAME\n FROM INFORMATION_SCHEMA_CATALOG_NAME );\n\n----------------\nHannu\n", "msg_date": "17 Jul 2002 12:15:03 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> All backend functions would still use real attnum's. And I doubt that\n> backend will ever work though system views.\n> Adding them should touch _only_ CREATE TABLE, ADD COLUMN, DROP COLUMN\n> plus the system views and possibly output from SELECT(*), if we allow\n> logical reordering of columns by changing attlognum.\n\nHmm. That last point is attractive enough to make it interesting to do.\n\nChristopher, you're the man doing the legwork ... what do you think?\nOffhand I'd think that expansion of \"SELECT *\" and association of \ncolumn aliases to specific columns would be the two places that would\nneed work to support attlognum; but we know they're both broken anyway\nby introduction of dropped columns.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Jul 2002 11:28:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN " }, { "msg_contents": "> Hannu Krosing <hannu@tm.ee> writes:\n> > All backend functions would still use real attnum's. 
And I doubt that\n> > backend will ever work though system views.\n> > Adding them should touch _only_ CREATE TABLE, ADD COLUMN, DROP COLUMN\n> > plus the system views and possibly output from SELECT(*), if we allow\n> > logical reordering of columns by changing attlognum.\n>\n> Hmm. That last point is attractive enough to make it interesting to do.\n>\n> Christopher, you're the man doing the legwork ... what do you think?\n> Offhand I'd think that expansion of \"SELECT *\" and association of\n> column aliases to specific columns would be the two places that would\n> need work to support attlognum; but we know they're both broken anyway\n> by introduction of dropped columns.\n\nSure you don't want me to submit a working patch for DROP COLUMN first and\nthen do it after?\n\nIt wouldn't even cause any backward compatibility problems would it? Older\nclients would just order the columns by attnum...\n\nChris\n\n", "msg_date": "Thu, 18 Jul 2002 10:10:18 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN " } ]
[ { "msg_contents": "Can somebody point me to any escape handlers in the COPY mechanism. I \nhave grepped and examined to the best of my ability and haven't found \nany indication that there even is an escape handler around COPY.\n\nJust to give a little background, using pgdump in \"default\" mode creates \na dump file that includes inline newlines and tabs. The solution is to \nuse the -Fc option but it's a little late for that if all one has is a \n\"default\" dump file. I was hoping to simply run a conversion on the file \nto create an \"escaped\" version of the file, but none of the traditional \nescape methods appear to work (ie. \\n, \\010, 0x10, etc).\n\nAm I correct that there is no escape handler in the COPY command. If so, \nshould this be considered an enhancement or is there an underlying \nreason why it isn't there. Also, in the case this is an enhancement, \nwhere in the code would this kind of mechanism be appropriate (ie. \ninterface, back end, parser, etc.). Wow ... nice sentence. ;-)\n\nThanks in advance!\n\nMarc L.\n\n", "msg_date": "Mon, 15 Jul 2002 11:45:15 -0400", "msg_from": "Marc Lavergne <mlavergn-pub@richlava.com>", "msg_from_op": true, "msg_subject": "COPY x FROM STDIN escape handler" } ]
[ { "msg_contents": "The fmtId() function used in pg_dump for optional quoting identifiers\nhas bothered me for a while now. The API is confusing: the returned\nvalue needs to be used before fmtId() is called again, because the\nbuffer the return value points to is re-used for each call of fmtId().\nThat leads to bugs for those unaware of this requirement, and clumsy\ncode for those that are -- for example (pg_dump.c:2911)\n\n appendPQExpBuffer(delq, \"DROP TYPE %s.\",\n fmtId(tinfo->typnamespace->nspname,\n force_quotes));\n appendPQExpBuffer(delq, \"%s;\\n\",\n fmtId(tinfo->typname, force_quotes));\n\nShould really only be 1 line of code. Similar ugliness occurs in many\nplaces (e.g. several lines down, there is a section of 4 calls to\nappendPQExpBuffer() that could be condensed down to 1, if not for\nfmtId() ).\n\nLastly, it has a tendancy to produce memory leaks -- for example,\nconvertRegProcReference() in pg_dump.c will leak memory when used\nwith PostgreSQL >= 7.3. When I mentioned this to Tom Lane, he\nsaid that he didn't see a way to fix it without convoluting the\ncode, and suggested I bring it up on -hackers.\n\nMy suggestion is: since fmtId() is almost always used with\nappendPQExpBuffer(), we should add a wrapper function to pg_dump\nthat accepts an extra escape sequence (%S, or %i, perhaps), which\nwould properly quote the input string before passing it to\nappendPQExpBuffer(). That should ensure that there won't be any\nleaks from adding quotes to identifiers, but also allow us to avoid\nplaying games with static buffers, like fmtId() does now.\n\nAny comments?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Mon, 15 Jul 2002 12:16:55 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "fmtId() and pg_dump" }, { "msg_contents": "\nInteresting idea. Not sure how you are going to do that since\nappendPQExpBuffer uses vsnprintf. 
Would you spin through the format\nstring and modify the pointers sent to vsnprintf? Seems like a lot of\nwork.\n\nFYI, the 7.2 code had fmtId called pretty messed up in certain places\nbut I think I fixed them all.\n\n---------------------------------------------------------------------------\n\nNeil Conway wrote:\n> The fmtId() function used in pg_dump for optional quoting identifiers\n> has bothered me for a while now. The API is confusing: the returned\n> value needs to be used before fmtId() is called again, because the\n> buffer the return value points to is re-used for each call of fmtId().\n> That leads to bugs for those unaware of this requirement, and clumsy\n> code for those that are -- for example (pg_dump.c:2911)\n> \n> appendPQExpBuffer(delq, \"DROP TYPE %s.\",\n> fmtId(tinfo->typnamespace->nspname,\n> force_quotes));\n> appendPQExpBuffer(delq, \"%s;\\n\",\n> fmtId(tinfo->typname, force_quotes));\n> \n> Should really only be 1 line of code. Similar ugliness occurs in many\n> places (e.g. several lines down, there is a section of 4 calls to\n> appendPQExpBuffer() that could be condensed down to 1, if not for\n> fmtId() ).\n> \n> Lastly, it has a tendancy to produce memory leaks -- for example,\n> convertRegProcReference() in pg_dump.c will leak memory when used\n> with PostgreSQL >= 7.3. When I mentioned this to Tom Lane, he\n> said that he didn't see a way to fix it without convoluting the\n> code, and suggested I bring it up on -hackers.\n> \n> My suggestion is: since fmtId() is almost always used with\n> appendPQExpBuffer(), we should add a wrapper function to pg_dump\n> that accepts an extra escape sequence (%S, or %i, perhaps), which\n> would properly quote the input string before passing it to\n> appendPQExpBuffer(). 
That should ensure that there won't be any\n> leaks from adding quotes to identifiers, but also allow us to avoid\n> playing games with static buffers, like fmtId() does now.\n> \n> Any comments?\n> \n> Cheers,\n> \n> Neil\n> \n> -- \n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Jul 2002 02:11:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: fmtId() and pg_dump" }, { "msg_contents": "Neil Conway writes:\n\n> My suggestion is: since fmtId() is almost always used with\n> appendPQExpBuffer(), we should add a wrapper function to pg_dump\n> that accepts an extra escape sequence (%S, or %i, perhaps), which\n> would properly quote the input string before passing it to\n> appendPQExpBuffer().\n\nI think that would be nice.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 17 Jul 2002 20:45:08 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: fmtId() and pg_dump" } ]
[ { "msg_contents": "Just to give a little background, using pgdump in \"default\" mode creates \na dump file that includes inline newlines and tabs. The solution is to \nuse the -Fc option but it's a little late for that if all one has is a \n\"default\" dump file. I was hoping to simply run a conversion on the file \nto create an \"escaped\" version of the file, but none of the traditional \nescape methods appear to work (i.e. \\n, \\010, 0x10, etc.).\n\nThe code errors out in CopyReadNewline() but it's the result of a call \nfrom CopyFrom(). From what I can see, there is no escape handler in the \nCopyFrom function. So, should this be considered an enhancement or is \nthere an underlying reason why it isn't there? If it's an enhancement, \nI'll patch it and submit it but I *really* don't want to end up with a \nnon-standard version of PostgreSQL!\n\nThanks in advance!\n\nMarc L.\n\n", "msg_date": "Mon, 15 Jul 2002 12:18:17 -0400", "msg_from": "Marc Lavergne <mlavergne-pub@richlava.com>", "msg_from_op": true, "msg_subject": "COPY x FROM STDIN escape handlers" }, { "msg_contents": "Marc Lavergne <mlavergne-pub@richlava.com> writes:\n> Just to give a little background, using pgdump in \"default\" mode creates \n> a dump file that includes inline newlines and tabs.\n\nHow old a PG release are you using? COPY has quoted special characters\nproperly for a long time.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Jul 2002 13:05:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: COPY x FROM STDIN escape handlers " }, { "msg_contents": "I see the problem now. It was my file parser that was escaping the value \nthen passing it to PQescapeString which resulted in \\\\n instead of \\n. \nGuess I was on a wild goose chase. I guess PQescapeString() and \nPQputline() are mutually exclusive ...
my bad!\n\nThanks,\n\nMarc L.\n\nTom Lane wrote:\n> Marc Lavergne <mlavergne-pub@richlava.com> writes:\n> \n>>Just to give a little background, using pgdump in \"default\" mode creates \n>>a dump file that includes inline newlines and tabs.\n> \n> \n> How old a PG release are you using? COPY has quoted special characters\n> properly for a long time.\n> \n> \t\t\tregards, tom lane\n> \n\n\n", "msg_date": "Mon, 15 Jul 2002 14:14:59 -0400", "msg_from": "Marc Lavergne <mlavergne-pub@richlava.com>", "msg_from_op": true, "msg_subject": "Re: COPY x FROM STDIN escape handlers" } ]
[ { "msg_contents": "An example is:\n\nregression=# create table foo (f1 text);\nCREATE TABLE\nregression=# insert into foo values ('zzzzzzzzzzzzz');\nINSERT 148289 1\nregression=# insert into foo select * from foo;\nINSERT 148290 1\nregression=# insert into foo select * from foo;\nINSERT 0 2\nregression=# insert into foo select * from foo;\nINSERT 0 4\n<< repeat enough times to have 1000 or so tuples in table >>\n\nregression=# insert into foo values ('q');\nINSERT 150337 1\nregression=# delete from foo where f1 != 'q';\nDELETE 2048\nregression=# vacuum full foo;\nVACUUM\nregression=# update foo set f1 = 'qq';\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\n\nBacktracing shows that the assertion in HeapTupleHeaderSetCmax fails,\nbecause it's not expecting to find the HEAP_MOVED bits set in its input.\n(The above test is simply an easy way of forcing an update on a tuple\nthat has been moved by VACUUM FULL.)\n\nI am not sure if this is a bug introduced by the patch, or if it's\nexposed a previously lurking bug. It seems that the HEAP_MOVED bits\nshould be cleared before re-using cmax for something else, but I have\nnot dug through the old logic to see how it was done before. 
Or perhaps\nwe cannot really reduce the number of fields this far.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Jul 2002 16:46:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "HeapTuple header changes cause core dumps in CVS tip" }, { "msg_contents": "On Mon, 15 Jul 2002 16:46:44 -0400, Tom Lane <tgl@sss.pgh.pa.us>\nwrote:\n>regression=# update foo set f1 = 'qq';\n>server closed the connection unexpectedly\n\nSame with DELETE FROM foo;\n\n>I am not sure if this is a bug introduced by the patch, or if it's\n>exposed a previously lurking bug.\n\nI suspect the former :-(\n\n>It seems that the HEAP_MOVED bits\n>should be cleared before re-using cmax for something else, but I have\n>not dug through the old logic to see how it was done before.\n\nAFAICS from a quick look at tqual it didn't matter before. Once the\nvacuum transaction had committed, on the next access to the tuple\nMOVED_IN caused XMIN_COMMITTED to be set, and after that the MOVED\nbits were never looked at again. MOVED_OFF is not an issue.\n\nI'll take a closer look at tqual and the vacuum source code tomorrow.\nFor now the attached patch cures both symptoms (UPDATE and DELETE) and\npasses all regression tests. 
A regression test for this case will\nfollow.\n\nServus\n Manfred\n\ndiff -ruN ../base/src/backend/access/heap/heapam.c src/backend/access/heap/heapam.c\n--- ../base/src/backend/access/heap/heapam.c\t2002-07-15 22:22:28.000000000 +0200\n+++ src/backend/access/heap/heapam.c\t2002-07-16 00:16:59.000000000 +0200\n@@ -1123,11 +1123,11 @@\n \t\t\tCheckMaxObjectId(HeapTupleGetOid(tup));\n \t}\n \n+\ttup->t_data->t_infomask &= ~(HEAP_XACT_MASK);\n \tHeapTupleHeaderSetXmin(tup->t_data, GetCurrentTransactionId());\n \tHeapTupleHeaderSetCmin(tup->t_data, cid);\n \tHeapTupleHeaderSetXmaxInvalid(tup->t_data);\n-\tHeapTupleHeaderSetCmax(tup->t_data, FirstCommandId);\n-\ttup->t_data->t_infomask &= ~(HEAP_XACT_MASK);\n+\t/* HeapTupleHeaderSetCmax(tup->t_data, FirstCommandId); */\n \ttup->t_data->t_infomask |= HEAP_XMAX_INVALID;\n \ttup->t_tableOid = relation->rd_id;\n \n@@ -1321,7 +1321,7 @@\n \n \tSTART_CRIT_SECTION();\n \t/* store transaction information of xact deleting the tuple */\n-\ttp.t_data->t_infomask &= ~(HEAP_XMAX_COMMITTED |\n+\ttp.t_data->t_infomask &= ~(HEAP_XMAX_COMMITTED | HEAP_MOVED |\n \t\t\t\t\t\t\t HEAP_XMAX_INVALID | HEAP_MARKED_FOR_UPDATE);\n \tHeapTupleHeaderSetXmax(tp.t_data, GetCurrentTransactionId());\n \tHeapTupleHeaderSetCmax(tp.t_data, cid);\n@@ -1554,7 +1554,7 @@\n \t\t_locked_tuple_.tid = oldtup.t_self;\n \t\tXactPushRollback(_heap_unlock_tuple, (void *) &_locked_tuple_);\n \n-\t\toldtup.t_data->t_infomask &= ~(HEAP_XMAX_COMMITTED |\n+\t\toldtup.t_data->t_infomask &= ~(HEAP_XMAX_COMMITTED | HEAP_MOVED |\n \t\t\t\t\t\t\t\t\t HEAP_XMAX_INVALID |\n \t\t\t\t\t\t\t\t\t HEAP_MARKED_FOR_UPDATE);\n \t\toldtup.t_data->t_infomask |= HEAP_XMAX_UNLOGGED;\n@@ -1645,7 +1645,7 @@\n \t}\n \telse\n \t{\n-\t\toldtup.t_data->t_infomask &= ~(HEAP_XMAX_COMMITTED |\n+\t\toldtup.t_data->t_infomask &= ~(HEAP_XMAX_COMMITTED | HEAP_MOVED |\n \t\t\t\t\t\t\t\t\t HEAP_XMAX_INVALID |\n \t\t\t\t\t\t\t\t\t HEAP_MARKED_FOR_UPDATE);\n \t\tHeapTupleHeaderSetXmax(oldtup.t_data, 
GetCurrentTransactionId());\n@@ -1816,7 +1816,7 @@\n \t((PageHeader) BufferGetPage(*buffer))->pd_sui = ThisStartUpID;\n \n \t/* store transaction information of xact marking the tuple */\n-\ttuple->t_data->t_infomask &= ~(HEAP_XMAX_COMMITTED | HEAP_XMAX_INVALID);\n+\ttuple->t_data->t_infomask &= ~(HEAP_XMAX_COMMITTED | HEAP_XMAX_INVALID | HEAP_MOVED);\n \ttuple->t_data->t_infomask |= HEAP_MARKED_FOR_UPDATE;\n \tHeapTupleHeaderSetXmax(tuple->t_data, GetCurrentTransactionId());\n \tHeapTupleHeaderSetCmax(tuple->t_data, cid);\n@@ -2147,7 +2147,7 @@\n \n \tif (redo)\n \t{\n-\t\thtup->t_infomask &= ~(HEAP_XMAX_COMMITTED |\n+\t\thtup->t_infomask &= ~(HEAP_XMAX_COMMITTED | HEAP_MOVED |\n \t\t\t\t\t\t\t HEAP_XMAX_INVALID | HEAP_MARKED_FOR_UPDATE);\n \t\tHeapTupleHeaderSetXmax(htup, record->xl_xid);\n \t\tHeapTupleHeaderSetCmax(htup, FirstCommandId);\n@@ -2320,7 +2320,7 @@\n \t\t}\n \t\telse\n \t\t{\n-\t\t\thtup->t_infomask &= ~(HEAP_XMAX_COMMITTED |\n+\t\t\thtup->t_infomask &= ~(HEAP_XMAX_COMMITTED | HEAP_MOVED |\n \t\t\t\t\t\t\t HEAP_XMAX_INVALID | HEAP_MARKED_FOR_UPDATE);\n \t\t\tHeapTupleHeaderSetXmax(htup, record->xl_xid);\n \t\t\tHeapTupleHeaderSetCmax(htup, FirstCommandId);", "msg_date": "Tue, 16 Jul 2002 01:38:48 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: HeapTuple header changes cause core dumps in CVS tip" } ]
[ { "msg_contents": "I have applied the following diff to make the sharing of C files among\nmodules more sane. Instead of having configure.in set the file name to\nstrdup.o and have the Makefiles specify the path, I set it to the full\npath $(top_builddir)/src/utils/strdup.o and have the makefiles use\nthat directly, rather than going through with 'make -c dirname\nfilename'.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: configure.in\n===================================================================\nRCS file: /cvsroot/pgsql/configure.in,v\nretrieving revision 1.187\ndiff -c -r1.187 configure.in\n*** configure.in\t7 Jul 2002 20:28:24 -0000\t1.187\n--- configure.in\t15 Jul 2002 21:07:07 -0000\n***************\n*** 870,877 ****\n # have working \"long long int\" support -- see below.\n \n SNPRINTF=''\n! AC_CHECK_FUNCS(snprintf, [], SNPRINTF='snprintf.o')\n! AC_CHECK_FUNCS(vsnprintf, [], SNPRINTF='snprintf.o')\n AC_SUBST(SNPRINTF)\n \n \n--- 870,877 ----\n # have working \"long long int\" support -- see below.\n \n SNPRINTF=''\n! AC_CHECK_FUNCS(snprintf, [], SNPRINTF='$(top_builddir)/src/backend/port/snprintf.o')\n! AC_CHECK_FUNCS(vsnprintf, [], SNPRINTF='$(top_builddir)/src/backend/port/snprintf.o')\n AC_SUBST(SNPRINTF)\n \n \n***************\n*** 913,921 ****\n AC_SUBST(MISSING_RANDOM)\n AC_CHECK_FUNCS(inet_aton, [], INET_ATON='inet_aton.o')\n AC_SUBST(INET_ATON)\n! AC_CHECK_FUNCS(strerror, [], STRERROR='strerror.o')\n AC_SUBST(STRERROR)\n! AC_CHECK_FUNCS(strdup, [], STRDUP='../../utils/strdup.o')\n AC_SUBST(STRDUP)\n AC_CHECK_FUNCS(strtol, [], STRTOL='strtol.o')\n AC_SUBST(STRTOL)\n--- 913,921 ----\n AC_SUBST(MISSING_RANDOM)\n AC_CHECK_FUNCS(inet_aton, [], INET_ATON='inet_aton.o')\n AC_SUBST(INET_ATON)\n! 
AC_CHECK_FUNCS(strerror, [], STRERROR='$(top_builddir)/src/backend/port/strerror.o')\n AC_SUBST(STRERROR)\n! AC_CHECK_FUNCS(strdup, [], STRDUP='$(top_builddir)/src/utils/strdup.o')\n AC_SUBST(STRDUP)\n AC_CHECK_FUNCS(strtol, [], STRTOL='strtol.o')\n AC_SUBST(STRTOL)\n***************\n*** 1093,1109 ****\n ],\n [ AC_MSG_RESULT(no)\n \t# Force usage of our own snprintf, since system snprintf is broken\n! \tSNPRINTF='snprintf.o'\n \tINT64_FORMAT='\"%lld\"'\n ],\n [ AC_MSG_RESULT(assuming not on target machine)\n \t# Force usage of our own snprintf, since we cannot test foreign snprintf\n! \tSNPRINTF='snprintf.o'\n \tINT64_FORMAT='\"%lld\"'\n ]) ],\n [ AC_MSG_RESULT(assuming not on target machine)\n \t# Force usage of our own snprintf, since we cannot test foreign snprintf\n! \tSNPRINTF='snprintf.o'\n \tINT64_FORMAT='\"%lld\"'\n ])\n else\n--- 1093,1109 ----\n ],\n [ AC_MSG_RESULT(no)\n \t# Force usage of our own snprintf, since system snprintf is broken\n! \tSNPRINTF='$(top_builddir)/src/backend/port/snprintf.o'\n \tINT64_FORMAT='\"%lld\"'\n ],\n [ AC_MSG_RESULT(assuming not on target machine)\n \t# Force usage of our own snprintf, since we cannot test foreign snprintf\n! \tSNPRINTF='$(top_builddir)/src/backend/port/snprintf.o'\n \tINT64_FORMAT='\"%lld\"'\n ]) ],\n [ AC_MSG_RESULT(assuming not on target machine)\n \t# Force usage of our own snprintf, since we cannot test foreign snprintf\n! \tSNPRINTF='$(top_builddir)/src/backend/port/snprintf.o'\n \tINT64_FORMAT='\"%lld\"'\n ])\n else\nIndex: doc/FAQ_Solaris\n===================================================================\nRCS file: /cvsroot/pgsql/doc/FAQ_Solaris,v\nretrieving revision 1.14\ndiff -c -r1.14 FAQ_Solaris\n*** doc/FAQ_Solaris\t4 Mar 2002 17:47:11 -0000\t1.14\n--- doc/FAQ_Solaris\t15 Jul 2002 21:07:07 -0000\n***************\n*** 94,100 ****\n (1) In src/Makefile.global, change the line\n \tSNPRINTF = \n to read\n! 
\tSNPRINTF = snprintf.o\n \n (2) In src/backend/port/Makefile, add \"snprintf.o\" to OBJS. (Skip this\n step if you see \"$(SNPRINTF)\" already listed in OBJS.)\n--- 94,100 ----\n (1) In src/Makefile.global, change the line\n \tSNPRINTF = \n to read\n! \tSNPRINTF = $(top_builddir)/src/backend/port/snprint.o\n \n (2) In src/backend/port/Makefile, add \"snprintf.o\" to OBJS. (Skip this\n step if you see \"$(SNPRINTF)\" already listed in OBJS.)\nIndex: src/backend/port/Makefile\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/port/Makefile,v\nretrieving revision 1.13\ndiff -c -r1.13 Makefile\n*** src/backend/port/Makefile\t5 May 2002 16:02:37 -0000\t1.13\n--- src/backend/port/Makefile\t15 Jul 2002 21:07:16 -0000\n***************\n*** 21,54 ****\n top_builddir = ../../..\n include $(top_builddir)/src/Makefile.global\n \n! OBJS = dynloader.o pg_sema.o pg_shmem.o\n \n! OBJS += $(GETHOSTNAME) $(GETRUSAGE) $(INET_ATON) $(ISINF) $(MEMCMP) \\\n! $(MISSING_RANDOM) $(SNPRINTF) $(SRANDOM) $(STRCASECMP) $(STRERROR) \\\n! $(STRTOL) $(STRTOUL)\n \n! OBJS += $(TAS)\n \n- ifdef STRDUP\n- OBJS += $(top_builddir)/src/utils/strdup.o\n- endif\n ifeq ($(PORTNAME), qnx4)\n! OBJS += getrusage.o qnx4/SUBSYS.o\n endif\n ifeq ($(PORTNAME), beos)\n! OBJS += beos/SUBSYS.o\n endif\n ifeq ($(PORTNAME), darwin)\n! OBJS += darwin/SUBSYS.o\n endif\n \n all: SUBSYS.o\n \n SUBSYS.o: $(OBJS)\n \t$(LD) $(LDREL) $(LDOUT) $@ $^\n- \n- $(top_builddir)/src/utils/strdup.o:\n- \t$(MAKE) -C $(top_builddir)/src/utils strdup.o\n \n qnx4/SUBSYS.o: qnx4.dir\n \n--- 21,48 ----\n top_builddir = ../../..\n include $(top_builddir)/src/Makefile.global\n \n! OBJS=dynloader.o pg_sema.o pg_shmem.o\n \n! OBJS+=$(GETHOSTNAME) $(GETRUSAGE) $(INET_ATON) $(ISINF) $(MEMCMP) \\\n! $(MISSING_RANDOM) $(SNPRINTF) $(SRANDOM) $(STRCASECMP) $(STRDUP) \\\n! \t$(STRERROR) $(STRTOL) $(STRTOUL)\n \n! OBJS+=$(TAS)\n \n ifeq ($(PORTNAME), qnx4)\n! 
OBJS+=getrusage.o qnx4/SUBSYS.o\n endif\n ifeq ($(PORTNAME), beos)\n! OBJS+=beos/SUBSYS.o\n endif\n ifeq ($(PORTNAME), darwin)\n! OBJS+=darwin/SUBSYS.o\n endif\n \n all: SUBSYS.o\n \n SUBSYS.o: $(OBJS)\n \t$(LD) $(LDREL) $(LDOUT) $@ $^\n \n qnx4/SUBSYS.o: qnx4.dir\n \nIndex: src/bin/pg_dump/Makefile\n===================================================================\nRCS file: /cvsroot/pgsql/src/bin/pg_dump/Makefile,v\nretrieving revision 1.34\ndiff -c -r1.34 Makefile\n*** src/bin/pg_dump/Makefile\t6 Jul 2002 20:12:30 -0000\t1.34\n--- src/bin/pg_dump/Makefile\t15 Jul 2002 21:07:17 -0000\n***************\n*** 14,34 ****\n include $(top_builddir)/src/Makefile.global\n \n OBJS= pg_backup_archiver.o pg_backup_db.o pg_backup_custom.o \\\n! pg_backup_files.o pg_backup_null.o pg_backup_tar.o sprompt.o\n! \n! ifdef STRDUP\n! OBJS+=$(top_builddir)/src/utils/strdup.o\n! \n! $(top_builddir)/src/utils/strdup.o:\n! \t$(MAKE) -C $(top_builddir)/src/utils strdup.o\n! endif\n! \n! ifdef STRTOUL\n! OBJS+=$(top_builddir)/src/backend/port/strtoul.o\n! \n! $(top_builddir)/src/backend/port/strtoul.o:\n! \t$(MAKE) -C $(top_builddir)/src/backend/port strtoul.o\n! endif\n \n override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)\n \n--- 14,21 ----\n include $(top_builddir)/src/Makefile.global\n \n OBJS= pg_backup_archiver.o pg_backup_db.o pg_backup_custom.o \\\n! pg_backup_files.o pg_backup_null.o pg_backup_tar.o sprompt.o \\\n! $(STRDUP) $(STRTOUL)\n \n override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)\n \nIndex: src/bin/psql/Makefile\n===================================================================\nRCS file: /cvsroot/pgsql/src/bin/psql/Makefile,v\nretrieving revision 1.33\ndiff -c -r1.33 Makefile\n*** src/bin/psql/Makefile\t6 Jul 2002 20:12:30 -0000\t1.33\n--- src/bin/psql/Makefile\t15 Jul 2002 21:07:17 -0000\n***************\n*** 17,57 ****\n \n override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)\n \n! OBJS=command.o common.o help.o input.o stringutils.o mainloop.o \\\n! 
\tcopy.o startup.o prompt.o variables.o large_obj.o print.o describe.o \\\n! \tsprompt.o tab-complete.o mbprint.o\n \n all: submake psql\n- \n- ifdef STRDUP\n- OBJS+=$(top_builddir)/src/utils/strdup.o\n- \n- $(top_builddir)/src/utils/strdup.o:\n- \t$(MAKE) -C $(top_builddir)/src/utils strdup.o\n- endif\n- \n- # Move these to the utils directory?\n- \n- ifdef STRERROR\n- OBJS+=$(top_builddir)/src/backend/port/strerror.o\n- \n- $(top_builddir)/src/backend/port/strerror.o:\n- \t$(MAKE) -C $(top_builddir)/src/backend/port strerror.o\n- endif\n- \n- ifdef SNPRINTF\n- OBJS+=$(top_builddir)/src/backend/port/snprintf.o\n- \n- $(top_builddir)/src/backend/port/snprintf.o:\n- \t$(MAKE) -C $(top_builddir)/src/backend/port snprintf.o\n- endif\n- \n- ifdef STRTOUL\n- OBJS+=$(top_builddir)/src/backend/port/strtoul.o\n- \n- $(top_builddir)/src/backend/port/strtoul.o:\n- \t$(MAKE) -C $(top_builddir)/src/backend/port strtoul.o\n- endif\n \n # End of hacks for picking up backend 'port' modules\n \n--- 17,28 ----\n \n override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)\n \n! OBJS=command.o common.o help.o input.o stringutils.o mainloop.o copy.o \\\n! \tstartup.o prompt.o variables.o large_obj.o print.o describe.o \\\n! \tsprompt.o tab-complete.o mbprint.o $(SNPRINTF) $(STRDUP) \\\n! \t$(STRERROR) $(STRTOUL)\n \n all: submake psql\n \n # End of hacks for picking up backend 'port' modules\n \nIndex: src/interfaces/ecpg/preproc/Makefile\n===================================================================\nRCS file: /cvsroot/pgsql/src/interfaces/ecpg/preproc/Makefile,v\nretrieving revision 1.83\ndiff -c -r1.83 Makefile\n*** src/interfaces/ecpg/preproc/Makefile\t11 Mar 2002 12:56:02 -0000\t1.83\n--- src/interfaces/ecpg/preproc/Makefile\t15 Jul 2002 21:07:17 -0000\n***************\n*** 18,40 ****\n endif\n \n OBJS=preproc.o pgc.o type.o ecpg.o ecpg_keywords.o output.o\\\n! 
keywords.o c_keywords.o ../lib/typename.o descriptor.o variable.o\n \n all: ecpg\n- \n- ifdef SNPRINTF\n- OBJS+=$(top_builddir)/src/backend/port/snprintf.o\n- \n- $(top_builddir)/src/backend/port/snprintf.o:\n- \t$(MAKE) -C $(top_builddir)/src/backend/port snprintf.o\n- endif\n- \n- ifdef STRDUP\n- OBJS+=$(top_builddir)/src/utils/strdup.o\n- \n- $(top_builddir)/src/utils/strdup.o:\n- \t$(MAKE) -C $(top_builddir)/src/utils strdup.o\n- endif\n \n ecpg: $(OBJS)\n \t$(CC) $(CFLAGS) $(LDFLAGS) $^ $(LIBS) -o $@\n--- 18,27 ----\n endif\n \n OBJS=preproc.o pgc.o type.o ecpg.o ecpg_keywords.o output.o\\\n! keywords.o c_keywords.o ../lib/typename.o descriptor.o variable.o \\\n! $(SNPRINTF) $(STRDUP)\n \n all: ecpg\n \n ecpg: $(OBJS)\n \t$(CC) $(CFLAGS) $(LDFLAGS) $^ $(LIBS) -o $@\nIndex: src/utils/Makefile\n===================================================================\nRCS file: /cvsroot/pgsql/src/utils/Makefile,v\nretrieving revision 1.9\ndiff -c -r1.9 Makefile\n*** src/utils/Makefile\t31 Aug 2000 16:12:35 -0000\t1.9\n--- src/utils/Makefile\t15 Jul 2002 21:07:18 -0000\n***************\n*** 22,27 ****\n--- 22,29 ----\n include $(top_builddir)/src/Makefile.global\n \n all:\n+ \t# Nothing required here. These C files are compiled in\n+ \t# directories as needed.\n \n clean distclean maintainer-clean:\n \trm -f dllinit.o getopt.o strdup.o", "msg_date": "Mon, 15 Jul 2002 17:31:00 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "utils C files" }, { "msg_contents": "Bruce Momjian writes:\n\n> I have applied the following diff to make the sharing of C files among\n> modules more sane. Instead of having configure.in set the file name to\n> strdup.o and have the Makefiles specify the path, I set it to the full\n> path $(top_builddir)/src/utils/strdup.o and have the makefiles use\n> that directly, rather than going through with 'make -c dirname\n> filename'.\n\nDon't do that, it doesn't work. 
Building outside the source tree, weird\ncompilers, etc. The current state was the result of much labor to get rid\nof exactly the state you reintroduced.\n\nA secondary objection is that I've been meaning to replace configure\nchecks of the form\n\nAC_CHECK_FUNCS(inet_aton, [], INET_ATON='inet_aton.o')\nAC_SUBST(INET_ATON)\n\nwith one integrated macro, which doesn't work if we have to encode the\npath into configure.\n\nAlso, do not tab-indent comments in makefiles. That makes them part of\nthe command to execute.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 16 Jul 2002 01:04:48 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: utils C files" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > I have applied the following diff to make the sharing of C files among\n> > modules more sane. Instead of having configure.in set the file name to\n> > strdup.o and have the Makefiles specify the path, I set it to the full\n> > path $(top_builddir)/src/utils/strdup.o and have the makefiles use\n> > that directly, rather than going through with 'make -c dirname\n> > filename'.\n> \n> Don't do that, it doesn't work. Building outside the source tree, weird\n> compilers, etc. The current state was the result of much labor to get rid\n> of exactly the state you reintroduced.\n\nWell, the actual problem was that there was inconsistency in the way\nthings were handled, e.g. some had their own rules for making the *.o\nfiles if the *.o files were out of the current directory, others didn't. \nI can change it but it has to be consistent. 
What do you suggest?\n\n> A secondary objection is that I've been meaning to replace configure\n> checks of the form\n> \n> AC_CHECK_FUNCS(inet_aton, [], INET_ATON='inet_aton.o')\n> AC_SUBST(INET_ATON)\n> \n> with one integrated macro, which doesn't work if we have to encode the\n> path into configure.\n\nThe path is only the thing we assign to the variable. I can't see how\nthat affects the configure script. Actually, once we move stuff into\nthe same directory, it will not matter. They will all be in the same\ndirectory so you can just prepend whatever directory we need.\n\n> Also, do not tab-indent comments in makefiles. That makes them part of\n> the command to execute.\n\nFixed. Thanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Jul 2002 19:32:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: utils C files" }, { "msg_contents": "Bruce Momjian writes:\n\n> Well, the actual problem was that there was inconsistency in the way\n> things where handled, e.g. some had their own rules for making the *.o\n> files if the *.o files were out of the current directory, other didn't.\n> I can change it but it has to be consistent. What do you suggest?\n\nCan you point to one example of such an inconsistency? I can't find one.\n\nThe rule of thumb is that rules involving the C compiler should only use\nfiles in the current directory. 
Otherwise you don't know where those\nfiles are going to end up.\n\n> > A secondary objection is that I've been meaning to replace configure\n> > checks of the form\n> >\n> > AC_CHECK_FUNCS(inet_aton, [], INET_ATON='inet_aton.o')\n> > AC_SUBST(INET_ATON)\n> >\n> > with one integrated macro, which doesn't work if we have to encode the\n> > path into configure.\n>\n> The path is only the thing we assign to the variable. I can't see how\n> that effects the configure script.\n\nWhat I want this to read in the end is\n\nPGAC_SOME_MACRO([func1 func2 func3])\n\n> Actually, once we move stuff into the same directory, it will not\n> matter.\n\nTrue, that will help a lot.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 16 Jul 2002 23:57:18 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: utils C files" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > Well, the actual problem was that there was inconsistency in the way\n> > things where handled, e.g. some had their own rules for making the *.o\n> > files if the *.o files were out of the current directory, other didn't.\n> > I can change it but it has to be consistent. What do you suggest?\n> \n> Can you point to one example of such an inconsistency? 
I can't find one.\n\nSure, interfaces/libpq had:\n\t\n\tOBJS= fe-auth.o fe-connect.o fe-exec.o fe-misc.o fe-print.o fe-lobj.o \\\n\t pqexpbuffer.o dllist.o md5.o pqsignal.o fe-secure.o \\\n\t $(INET_ATON) $(SNPRINTF) $(STRERROR)\n\nwhile psql/Makefile had what I think you wanted:\n\n\tOBJS=command.o common.o help.o input.o stringutils.o mainloop.o \\\n\t copy.o startup.o prompt.o variables.o large_obj.o print.o describe.o \\\n\t sprompt.o tab-complete.o mbprint.o\n\t\n\tall: submake psql\n\t\n\tifdef STRDUP\n\tOBJS+=$(top_builddir)/src/utils/strdup.o\n\t\n\t$(top_builddir)/src/utils/strdup.o:\n\t $(MAKE) -C $(top_builddir)/src/utils strdup.o\n\tendif\n\n\tifdef STRERROR\n\tOBJS+=$(top_builddir)/src/backend/port/strerror.o\n\t\n\t$(top_builddir)/src/backend/port/strerror.o:\n\t $(MAKE) -C $(top_builddir)/src/backend/port strerror.o\n\tendif\n\t\n\tifdef SNPRINTF\n\tOBJS+=$(top_builddir)/src/backend/port/snprintf.o\n\t\n\t$(top_builddir)/src/backend/port/snprintf.o:\n\t $(MAKE) -C $(top_builddir)/src/backend/port snprintf.o\n\tendif\n\n\nHere we see SNPRINTF done in two different ways. I think the library\nfile is the way to go anyway. We compile it once, and use it whenever\nwe need it. Clean.\n\n> The rule of thumb is that rules involving the C compiler should only use\n> files in the current directory. Otherwise you don't know where those\n> files are going to end up.\n\nSure.\n\n> > > A secondary objection is that I've been meaning to replace configure\n> > > checks of the form\n> > >\n> > > AC_CHECK_FUNCS(inet_aton, [], INET_ATON='inet_aton.o')\n> > > AC_SUBST(INET_ATON)\n> > >\n> > > with one integrated macro, which doesn't work if we have to encode the\n> > > path into configure.\n> >\n> > The path is only the thing we assign to the variable. 
I can't see how\n> > that effects the configure script.\n> \n> What I want this to read in the end is\n> \n> PGAC_SOME_MACRO([func1 func2 func3])\n> \n> > Actually, once we move stuff into the same directory, it will not\n> > matter.\n> \n> True, that will help a lot.\n\nHeck, soon configure is only going to control what gets put in the\nlibport.a file and that's it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Jul 2002 23:03:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: utils C files" }, { "msg_contents": "Bruce Momjian writes:\n\n> > Can you point to one example of such an inconsistency? I can't find one.\n>\n> Sure, interfaces/libpq had:\n>\n> \tOBJS= fe-auth.o fe-connect.o fe-exec.o fe-misc.o fe-print.o fe-lobj.o \\\n> \t pqexpbuffer.o dllist.o md5.o pqsignal.o fe-secure.o \\\n> \t $(INET_ATON) $(SNPRINTF) $(STRERROR)\n>\n> while psql/Makefile had what I think you wanted:\n\nNote that the libpq makefile goes through trouble to link the inet_aton.c\nfile into the current directory, so this example doesn't count.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 18 Jul 2002 00:32:50 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: utils C files" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > > Can you point to one example of such an inconsistency? 
I can't find one.\n> >\n> > Sure, interfaces/libpq had:\n> >\n> > \tOBJS= fe-auth.o fe-connect.o fe-exec.o fe-misc.o fe-print.o fe-lobj.o \\\n> > \t pqexpbuffer.o dllist.o md5.o pqsignal.o fe-secure.o \\\n> > \t $(INET_ATON) $(SNPRINTF) $(STRERROR)\n> >\n> > while psql/Makefile had what I think you wanted:\n> \n> Note that the libpq makefile goes through trouble to link the inet_aton.c\n> file into the current directory, so this example doesn't count.\n\nWell, the code is:\n\n\t# this only gets done if configure finds system doesn't have inet_aton()\n\tinet_aton.c: $(backend_src)/port/inet_aton.c\n\t rm -f $@ && $(LN_S) $< .\n\nHow is this any better than just mentioning the *.o file and letting the\ndefault rules compile it. I don't understand how linking to the current\ndirectory gets us anything. Now, if you did a 'make -C dir target' that\nwould be different.\n\nIn fact, with the lib idea dead, if there are special rules for\ncertain port/*.o files, we should put those rules in Makefile.global and\nlet all the code use it. That way, we define it in one place, but can\nuse the object file anywhere.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Jul 2002 20:32:21 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: utils C files" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> How is this any better than just mentioning the *.o file and letting the\n> default rules compile it. I don't understand how linking to the current\n> directory gets us anything. Now, if you did a 'make -C dir target' that\n> would be different.\n\nThe whole point of the pushups for libpq is that we DON'T want the\ndefault rules. We need to compile it PIC so that it can go into a\nshared library. 
This will not be the same object file built in the\nports directory.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Jul 2002 21:37:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: utils C files " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > How is this any better than just mentioning the *.o file and letting the\n> > default rules compile it. I don't understand how linking to the current\n> > directory gets us anything. Now, if you did a 'make -C dir target' that\n> > would be different.\n> \n> The whole point of the pushups for libpq is that we DON'T want the\n> default rules. We need to compile it PIC so that it can go into a\n> shared library. This will not be the same object file built in the\n> ports directory.\n\nOh, so specifying the full path for the *.o allows it to use an existing\n*.o, while putting it in the directory forces a recompile with the\ncurrent Makefile. Got it. Libpq has all of them done.\n\nI see the comment in libpq/Makefile now:\n\n# We use several backend modules verbatim, but since we need to\n# compile with appropriate options to build a shared lib, we can't\n# necessarily use the same object files as the backend uses. Instead,\n# symlink the source files in here and build our own object file.\n\nThanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Jul 2002 21:40:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: utils C files" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > How is this any better than just mentioning the *.o file and letting the\n> > default rules compile it. 
I don't understand how linking to the current\n> > directory gets us anything. Now, if you did a 'make -C dir target' that\n> > would be different.\n> \n> The whole point of the pushups for libpq is that we DON'T want the\n> default rules. We need to compile it PIC so that it can go into a\n> shared library. This will not be the same object file built in the\n> ports directory.\n\nOK, this cleanup makes the src/backend/port file location dependent only\non configure.in values. I will now move them from src/backend/port to\nsrc/port.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: contrib/pg_controldata/Makefile\n===================================================================\nRCS file: /cvsroot/pgsql/contrib/pg_controldata/Makefile,v\nretrieving revision 1.5\ndiff -c -r1.5 Makefile\n*** contrib/pg_controldata/Makefile\t6 Sep 2001 10:49:29 -0000\t1.5\n--- contrib/pg_controldata/Makefile\t18 Jul 2002 03:58:46 -0000\n***************\n*** 5,18 ****\n include $(top_builddir)/src/Makefile.global\n \n PROGRAM = pg_controldata\n! OBJS\t= pg_controldata.o pg_crc.o $(SNPRINTF)\n \n pg_crc.c: $(top_srcdir)/src/backend/utils/hash/pg_crc.c\n \trm -f $@ && $(LN_S) $< .\n \n! # this only gets done if configure finds system doesn't have snprintf()\n! snprintf.c: $(top_srcdir)/src/backend/port/snprintf.c\n \trm -f $@ && $(LN_S) $< .\n \n EXTRA_CLEAN = pg_crc.c snprintf.c\n \n--- 5,19 ----\n include $(top_builddir)/src/Makefile.global\n \n PROGRAM = pg_controldata\n! OBJS\t= pg_controldata.o pg_crc.o $(notdir $(SNPRINTF))\n \n pg_crc.c: $(top_srcdir)/src/backend/utils/hash/pg_crc.c\n \trm -f $@ && $(LN_S) $< .\n \n! ifdef SNPRINTF\n! 
$(basename $(notdir $(SNPRINTF))).c: $(basename $(SNPRINTF)).c\n \trm -f $@ && $(LN_S) $< .\n+ endif\n \n EXTRA_CLEAN = pg_crc.c snprintf.c\n \nIndex: contrib/pg_resetxlog/Makefile\n===================================================================\nRCS file: /cvsroot/pgsql/contrib/pg_resetxlog/Makefile,v\nretrieving revision 1.3\ndiff -c -r1.3 Makefile\n*** contrib/pg_resetxlog/Makefile\t6 Sep 2001 10:49:30 -0000\t1.3\n--- contrib/pg_resetxlog/Makefile\t18 Jul 2002 03:58:46 -0000\n***************\n*** 5,18 ****\n include $(top_builddir)/src/Makefile.global\n \n PROGRAM = pg_resetxlog\n! OBJS\t= pg_resetxlog.o pg_crc.o $(SNPRINTF)\n \n pg_crc.c: $(top_srcdir)/src/backend/utils/hash/pg_crc.c\n \trm -f $@ && $(LN_S) $< .\n \n! # this only gets done if configure finds system doesn't have snprintf()\n! snprintf.c: $(top_srcdir)/src/backend/port/snprintf.c\n \trm -f $@ && $(LN_S) $< .\n \n EXTRA_CLEAN = pg_crc.c snprintf.c\n \n--- 5,19 ----\n include $(top_builddir)/src/Makefile.global\n \n PROGRAM = pg_resetxlog\n! OBJS\t= pg_resetxlog.o pg_crc.o $(notdir $(SNPRINTF))\n \n pg_crc.c: $(top_srcdir)/src/backend/utils/hash/pg_crc.c\n \trm -f $@ && $(LN_S) $< .\n \n! ifdef SNPRINTF\n! 
$(basename $(notdir $(SNPRINTF))).c: $(basename $(SNPRINTF)).c\n \trm -f $@ && $(LN_S) $< .\n+ endif\n \n EXTRA_CLEAN = pg_crc.c snprintf.c\n \nIndex: src/interfaces/libpq/Makefile\n===================================================================\nRCS file: /cvsroot/pgsql/src/interfaces/libpq/Makefile,v\nretrieving revision 1.62\ndiff -c -r1.62 Makefile\n*** src/interfaces/libpq/Makefile\t14 Jun 2002 04:23:17 -0000\t1.62\n--- src/interfaces/libpq/Makefile\t18 Jul 2002 03:58:48 -0000\n***************\n*** 12,17 ****\n--- 12,18 ----\n top_builddir = ../../..\n include $(top_builddir)/src/Makefile.global\n \n+ \n # shared library parameters\n NAME= pq\n SO_MAJOR_VERSION= 2\n***************\n*** 21,32 ****\n \n OBJS= fe-auth.o fe-connect.o fe-exec.o fe-misc.o fe-print.o fe-lobj.o \\\n pqexpbuffer.o dllist.o md5.o pqsignal.o fe-secure.o \\\n! $(INET_ATON) $(SNPRINTF) $(STRERROR)\n \n ifdef MULTIBYTE\n OBJS+= wchar.o encnames.o\n endif\n \n # Add libraries that libpq depends (or might depend) on into the\n # shared library link. (The order in which you list them here doesn't\n # matter.)\n--- 22,34 ----\n \n OBJS= fe-auth.o fe-connect.o fe-exec.o fe-misc.o fe-print.o fe-lobj.o \\\n pqexpbuffer.o dllist.o md5.o pqsignal.o fe-secure.o \\\n! $(notdir $(INET_ATON)) $(notdir $(SNPRINTF)) $(notdir $(STRERROR))\n \n ifdef MULTIBYTE\n OBJS+= wchar.o encnames.o\n endif\n \n+ \n # Add libraries that libpq depends (or might depend) on into the\n # shared library link. (The order in which you list them here doesn't\n # matter.)\n***************\n*** 37,68 ****\n \n # Shared library stuff\n include $(top_srcdir)/src/Makefile.shlib\n- \n- \n- # We use several backend modules verbatim, but since we need to\n- # compile with appropriate options to build a shared lib, we can't\n- # necessarily use the same object files as the backend uses. 
Instead,\n- # symlink the source files in here and build our own object file.\n- \n backend_src = $(top_srcdir)/src/backend\n \n dllist.c: $(backend_src)/lib/dllist.c\n \trm -f $@ && $(LN_S) $< .\n \n md5.c: $(backend_src)/libpq/md5.c\n \trm -f $@ && $(LN_S) $< .\n \n # this only gets done if configure finds system doesn't have inet_aton()\n! inet_aton.c: $(backend_src)/port/inet_aton.c\n \trm -f $@ && $(LN_S) $< .\n \n! # this only gets done if configure finds system doesn't have snprintf()\n! snprintf.c: $(backend_src)/port/snprintf.c\n \trm -f $@ && $(LN_S) $< .\n \n! # this only gets done if configure finds system doesn't have strerror()\n! strerror.c: $(backend_src)/port/strerror.c\n \trm -f $@ && $(LN_S) $< .\n \n ifdef MULTIBYTE\n wchar.c : % : $(backend_src)/utils/mb/%\n--- 39,73 ----\n \n # Shared library stuff\n include $(top_srcdir)/src/Makefile.shlib\n backend_src = $(top_srcdir)/src/backend\n \n+ \n dllist.c: $(backend_src)/lib/dllist.c\n \trm -f $@ && $(LN_S) $< .\n \n md5.c: $(backend_src)/libpq/md5.c\n \trm -f $@ && $(LN_S) $< .\n \n+ # We use several backend modules verbatim, but since we need to\n+ # compile with appropriate options to build a shared lib, we can't\n+ # necessarily use the same object files as the backend uses. Instead,\n+ # symlink the source files in here and build our own object file.\n # this only gets done if configure finds system doesn't have inet_aton()\n! \n! ifdef INET_ATON\n! $(basename $(notdir $(INET_ATON))).c: $(basename $(INET_ATON)).c\n \trm -f $@ && $(LN_S) $< .\n+ endif\n \n! ifdef SNPRINTF\n! $(basename $(notdir $(SNPRINTF))).c: $(basename $(SNPRINTF)).c\n \trm -f $@ && $(LN_S) $< .\n+ endif\n \n! ifdef STRERROR\n! 
$(basename $(notdir $(STRERROR))).c: $(basename $(STRERROR)).c\n \trm -f $@ && $(LN_S) $< .\n+ endif\n \n ifdef MULTIBYTE\n wchar.c : % : $(backend_src)/utils/mb/%", "msg_date": "Wed, 17 Jul 2002 23:59:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: utils C files" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > How is this any better than just mentioning the *.o file and letting the\n> > default rules compile it. I don't understand how linking to the current\n> > directory gets us anything. Now, if you did a 'make -C dir target' that\n> > would be different.\n> \n> The whole point of the pushups for libpq is that we DON'T want the\n> default rules. We need to compile it PIC so that it can go into a\n> shared library. This will not be the same object file built in the\n> ports directory.\n\nOK, I have moved the files to src/port. Would people like this rule\nadded to Makefile.global.in so that any usage of src/port/*.c files will\ncompile in the local directory?\n\t\n\tifdef SNPRINTF\n\t$(basename $(notdir $(SNPRINTF))).c: $(basename $(SNPRINTF)).c\n\t rm -f $@ && $(LN_S) $< .\n\tendif\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Jul 2002 00:35:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: utils C files" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, I have moved the files to src/port. Would people like this rule\n> added to Makefile.global.in so that any usage of src/port/*.c files will\n> compile in the local directory?\n\nI'd guess *not*. 
libpq is a special case.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jul 2002 00:55:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: utils C files " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, I have moved the files to src/port. Would people like this rule\n> > added to Makefile.global.in so that any usage of src/port/*.c files will\n> > compile in the local directory?\n> \n> I'd guess *not*. libpq is a special case.\n\nBut the case is specifically for libpq, so that it appears as a local\nfile for compile by the local makefile rules. Adding this would make\nall uses of those files behave the same way. \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Jul 2002 00:57:16 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: utils C files" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, I have moved the files to src/port. Would people like this rule\n> > added to Makefile.global.in so that any usage of src/port/*.c files will\n> > compile in the local directory?\n> \n> I'd guess *not*. libpq is a special case.\n\nOh, so you are saying let most uses of src/port use the *.o files that\nare in the directory, and it isn't needed to have other directories use\nthe link trick. Just let me know what people want.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Jul 2002 01:00:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: utils C files" }, { "msg_contents": "Bruce Momjian writes:\n\n> Oh, so you are saying let most uses of src/port use the *.o files that\n> are in the directory, and it isn't needed to have other directories use\n> the link trick. Just let me know what people want.\n\nWhat I want is this:\n\nIn configure.in, call\n\nAC_REPLACE_FUNCS([inet_aton ...]) # the whole list\n\nIf you need more sophisticated checks on top of \"function exists\", you\nkeep the existing tests, but instead of, say,\n\nSNPRINTF=snprintf.c\nAC_SUBST(SNPRINTF)\n\nyou'd call\n\nAC_LIBOBJ(snprintf)\n\nIn Makefile.global.in:\n\nLIBOBJS = @LIBOBJS@\n\nIn utils/port/Makefile:\n\nlibpgport.a: $(LIBOBJS)\n\tar crs $@ $^\n\nIn Makefile.global.in:\n\nAdd -L$(top_builddir)/src/port to LDFLAGS (near the start), -lpgport to\nLIBS (near the end).\n\nThen you need to make sure that the src/port directory is build before\nbeing referred to.\n\nIn the libpq makefile, you can write the rules like:\n\nifneq(,$(filter snprintf.o, $(LIBOBJS)))\n# do what it's doing now in case of 'ifdef SNPRINF'\nendif\n\n\nCaveat implementor.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Fri, 19 Jul 2002 01:00:05 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] utils C files" }, { "msg_contents": "\nWow. I think I will hold for a while.\n\n---------------------------------------------------------------------------\n\nPeter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > Oh, so you are saying let most uses of src/port use the *.o files that\n> > are in the directory, and it isn't needed to have other directories use\n> > the link trick. 
Just let me know what people want.\n> \n> What I want is this:\n> \n> In configure.in, call\n> \n> AC_REPLACE_FUNCS([inet_aton ...]) # the whole list\n> \n> If you need more sophisticated checks on top of \"function exists\", you\n> keep the existing tests, but instead of, say,\n> \n> SNPRINTF=snprintf.c\n> AC_SUBST(SNPRINTF)\n> \n> you'd call\n> \n> AC_LIBOBJ(snprintf)\n> \n> In Makefile.global.in:\n> \n> LIBOBJS = @LIBOBJS@\n> \n> In utils/port/Makefile:\n> \n> libpgport.a: $(LIBOBJS)\n> \tar crs $@ $^\n> \n> In Makefile.global.in:\n> \n> Add -L$(top_builddir)/src/port to LDFLAGS (near the start), -lpgport to\n> LIBS (near the end).\n> \n> Then you need to make sure that the src/port directory is built before\n> being referred to.\n> \n> In the libpq makefile, you can write the rules like:\n> \n> ifneq (,$(filter snprintf.o, $(LIBOBJS)))\n> # do what it's doing now in case of 'ifdef SNPRINTF'\n> endif\n> \n> \n> Caveat implementor.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Jul 2002 19:05:30 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] utils C files" } ]
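
Spelled out end to end, the libpgport scheme Peter sketches above would look roughly like the following. This is a sketch only, assuming GNU make and Autoconf's LIBOBJS machinery; the libpgport name and the src/port paths come from the discussion in this thread, not from committed code:

```makefile
# configure.in: one macro per replaceable function; Autoconf adds
# "func.o" to @LIBOBJS@ whenever func() is missing from libc:
#   AC_REPLACE_FUNCS([inet_aton snprintf strdup])
#   AC_LIBOBJ(snprintf)        # manual form, after a hand-written test

# src/port/Makefile: bundle all replacement objects into one library
LIBOBJS = @LIBOBJS@

libpgport.a: $(LIBOBJS)
	ar crs $@ $^

# Makefile.global.in: every link then picks the library up automatically
LDFLAGS += -L$(top_builddir)/src/port
LIBS += -lpgport

# A client makefile (e.g. libpq's) can still special-case one replacement:
ifneq (,$(filter snprintf.o,$(LIBOBJS)))
# extra handling while snprintf is being replaced
endif
```

AC_REPLACE_FUNCS(func) is shorthand for checking whether func() exists and calling AC_LIBOBJ(func) when it does not, so the two configure.in forms above cooperate in filling the same LIBOBJS list.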
[ { "msg_contents": "On Tue, 2002-07-16 at 03:53, Peter Eisentraut wrote:\n> The following system table columns are currently unused and don't appear\n> to be in the line of resurrection.\n> \n> pg_language.lancompiler\n> pg_operator.oprprec\n> pg_operator.oprisleft\n> pg_proc.probyte_pct\n> pg_proc.properbyte_cpu\n> pg_proc.propercall_cpu\n> pg_proc.prooutin_ratio\n> pg_shadow.usetrace\n> pg_type.typprtlen\n> pg_type.typreceive\n> pg_type.typsend\n\npg_type.typreceive and pg_type.typsend\nare unused, but I think they should be saved for use as converters\nfrom/to unified binary wire protocol (as their name implies ;) once we\nget at it.\n\nThe alternative would be yet another system table which would allow us\nto support unlimited number of to/from converters for different wire\nprotocols, but it will definitely be easier to start with\ntypreceive/typsend.\n\n---------------\nHannu\n\n\n", "msg_date": "16 Jul 2002 02:53:29 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: Unused system table columns" }, { "msg_contents": "On Tue, 2002-07-16 at 04:55, Bruce Momjian wrote:\n> Hannu Krosing wrote:\n> > On Tue, 2002-07-16 at 03:53, Peter Eisentraut wrote:\n> > > The following system table columns are currently unused and don't appear\n> > > to be in the line of resurrection.\n> > > \n> > > pg_language.lancompiler\n> > > pg_operator.oprprec\n> > > pg_operator.oprisleft\n> > > pg_proc.probyte_pct\n> > > pg_proc.properbyte_cpu\n> > > pg_proc.propercall_cpu\n> > > pg_proc.prooutin_ratio\n> > > pg_shadow.usetrace\n> > > pg_type.typprtlen\n> > > pg_type.typreceive\n> > > pg_type.typsend\n> > \n> > pg_type.typreceive and pg_type.typsend\n> > are unused, but I think they should be saved for use as converters\n> > from/to unified binary wire protocol (as their name implies ;) once we\n> > get at it.\n> > \n> > The alternative would be yet another system table which would allow us\n> > to support unlimited number of to/from converters 
for different wire\n> > protocols, but it will definitely be easier to start with\n> > typreceive/typsend.\n> \n> We can always re-add the columns them.\n\nBut would it not be nice if we could add uniform binary protocol without\nrequiring initdb ?\n\nIf the main concern is disk space, just set them to NULL .\n\n------------\nHannu\n\n", "msg_date": "16 Jul 2002 03:03:22 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: Unused system table columns" }, { "msg_contents": "On Tue, 2002-07-16 at 05:19, Bruce Momjian wrote:\n> Hannu Krosing wrote:\n> > > > The alternative would be yet another system table which would allow us\n> > > > to support unlimited number of to/from converters for different wire\n> > > > protocols, but it will definitely be easier to start with\n> > > > typreceive/typsend.\n> > > \n> > > We can always re-add the columns them.\n> > \n> > But would it not be nice if we could add uniform binary protocol without\n> > requiring initdb ?\n> \n> Seems impossible that would ever happen without an initdb.\n\nWhy?\n\nWe already have a binary protocol, the only part I see missing for\nmaking it _universal_ is binary representation of types + alignment\nissues.\n\nIf we just write the functions for typreceive/send (mostly\nidentity+padding for x86, some byte swapping on SPARC (or vice versa))\nand start using them when cursor is in binary mode plus we determine\nminimal acceptable alignments then we are (almost?) 
there for output.\n\nFor input, putting in PREPARE/EXECUTE with binary argument passing will\nlikely need initdb (maybe not), but _temporarily_ throwing out _just_\ntypreceive seems weird.\n\n> > If the main concern is disk space, just set them to NULL .\n> \n> Good point, but it does confuse developers.\n\nBut it confuses them _less_ than our current practice of putting unused\ncopies of typinput/typoutput there, and nobody seems too confused even\nnow ;)\n\nAnd keeping them as NULL may be used to indicate than no conversion is\nneeded and data can be sent as-is like we do now, so we are even doing\nthe right thing for this scenario, all without any coding ;)\n\n--------------\nHannu\n\n\n", "msg_date": "16 Jul 2002 03:41:01 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: Unused system table columns" }, { "msg_contents": "\nAgreed, and with schemas coming in, we are going to break so much stuff\nanyway we should remove them.\n\n---------------------------------------------------------------------------\n\nPeter Eisentraut wrote:\n> The following system table columns are currently unused and don't appear\n> to be in the line of resurrection.\n> \n> pg_language.lancompiler\n> pg_operator.oprprec\n> pg_operator.oprisleft\n> pg_proc.probyte_pct\n> pg_proc.properbyte_cpu\n> pg_proc.propercall_cpu\n> pg_proc.prooutin_ratio\n> pg_shadow.usetrace\n> pg_type.typprtlen\n> pg_type.typreceive\n> pg_type.typsend\n> \n> This adds up to quite some space -- on disk and on the screen. I think we\n> should remove them.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Jul 2002 18:52:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unused system table columns" }, { "msg_contents": "The following system table columns are currently unused and don't appear\nto be in the line of resurrection.\n\npg_language.lancompiler\npg_operator.oprprec\npg_operator.oprisleft\npg_proc.probyte_pct\npg_proc.properbyte_cpu\npg_proc.propercall_cpu\npg_proc.prooutin_ratio\npg_shadow.usetrace\npg_type.typprtlen\npg_type.typreceive\npg_type.typsend\n\nThis adds up to quite some space -- on disk and on the screen. I think we\nshould remove them.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 16 Jul 2002 00:53:36 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Unused system table columns" }, { "msg_contents": "For all intents and purposes, pg_index.indisprimary can be added to that\nlist. Can't make a primary key without a pg_constraint entry.\n\nThe below are also reported unused by the documentation:\n\npg_class.relukeys\npg_class.relfkeys\npg_class.relrefs\npg_index.indisclustered\npg_index.indreference\n\n\n\nOn Mon, 2002-07-15 at 18:53, Peter Eisentraut wrote:\n> The following system table columns are currently unused and don't appear\n> to be in the line of resurrection.\n\n> pg_language.lancompiler\n> pg_operator.oprprec\n> pg_operator.oprisleft\n> pg_proc.probyte_pct\n> pg_proc.properbyte_cpu\n> pg_proc.propercall_cpu\n> pg_proc.prooutin_ratio\n> pg_shadow.usetrace\n> pg_type.typprtlen\n> pg_type.typreceive\n> pg_type.typsend\n> \n> This adds up to quite some space -- on disk and on the screen. 
I think we\n> should remove them.\n\n\n", "msg_date": "15 Jul 2002 19:01:22 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Unused system table columns" }, { "msg_contents": "On Tue, 2002-07-16 at 05:43, Bruce Momjian wrote:\n> Hannu Krosing wrote:\n> > On Tue, 2002-07-16 at 05:19, Bruce Momjian wrote:\n> > > Hannu Krosing wrote:\n> > > > > > The alternative would be yet another system table which would allow us\n> > > > > > to support unlimited number of to/from converters for different wire\n> > > > > > protocols, but it will definitely be easier to start with\n> > > > > > typreceive/typsend.\n> > > > > \n> > > > > We can always re-add the columns them.\n> > > > \n> > > > But would it not be nice if we could add uniform binary protocol without\n> > > > requiring initdb ?\n> > > \n> > > Seems impossible that would ever happen without an initdb.\n> > \n> > Why?\n> \n> It is inconceivable we would add such a feature without a major release,\n> and every major release requires an initdb.\n\nEven if we change nothing in system tables ;)\n\nAs I explained, we already have a binary protocol. What I proposed,\nwould make it usable between hosts with different CPU's by inserting\nappropriate functions for types - without typsend(), i.e typesend=NULL\nthe behaviour would be exactly as it is now, but people would be free to\nexperiment without fatally breaking all other installations.\n\nTechnically this will probably not extend much beyond modifying function\nprinttup_internal in src/backend/access/common/printtup.c \n\n/* ----------------\n * printtup_internal\n * We use a different data prefix, e.g. 
'B' instead of 'D' to\n * indicate a tuple in internal (binary) form.\n *\n * This is largely same as printtup,except we don't use the typout func.\n * ----------------\n */\nstatic void\nprinttup_internal(HeapTuple tuple, TupleDesc typeinfo, DestReceiver\n*self)\n\n\nThe hard part will be agreeing on the actual data format(s), but this\ncan be postponed by having this implementation where people can\nexperiment.\n\nAfter looking at the code again, it seems that we must have already\nsolved (most) alignment issues in printtup, so the task is just agreeing\non types' on-wire representations.\n\n------------------\nHannu\n\n\n", "msg_date": "16 Jul 2002 04:18:46 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: Unused system table columns" }, { "msg_contents": "Hannu Krosing wrote:\n> On Tue, 2002-07-16 at 03:53, Peter Eisentraut wrote:\n> > The following system table columns are currently unused and don't appear\n> > to be in the line of resurrection.\n> > \n> > pg_language.lancompiler\n> > pg_operator.oprprec\n> > pg_operator.oprisleft\n> > pg_proc.probyte_pct\n> > pg_proc.properbyte_cpu\n> > pg_proc.propercall_cpu\n> > pg_proc.prooutin_ratio\n> > pg_shadow.usetrace\n> > pg_type.typprtlen\n> > pg_type.typreceive\n> > pg_type.typsend\n> \n> pg_type.typreceive and pg_type.typsend\n> are unused, but I think they should be saved for use as converters\n> from/to unified binary wire protocol (as their name implies ;) once we\n> get at it.\n> \n> The alternative would be yet another system table which would allow us\n> to support unlimited number of to/from converters for different wire\n> protocols, but it will definitely be easier to start with\n> typreceive/typsend.\n\nWe can always re-add the columns them.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Jul 2002 19:55:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unused system table columns" }, { "msg_contents": "Hannu Krosing wrote:\n> > > The alternative would be yet another system table which would allow us\n> > > to support unlimited number of to/from converters for different wire\n> > > protocols, but it will definitely be easier to start with\n> > > typreceive/typsend.\n> > \n> > We can always re-add the columns them.\n> \n> But would it not be nice if we could add uniform binary protocol without\n> requiring initdb ?\n\nSeems impossible that would ever happen without an initdb.\n\n> If the main concern is disk space, just set them to NULL .\n\nGood point, but it does confuse developers.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Jul 2002 20:19:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unused system table columns" }, { "msg_contents": "Hannu Krosing wrote:\n> On Tue, 2002-07-16 at 05:19, Bruce Momjian wrote:\n> > Hannu Krosing wrote:\n> > > > > The alternative would be yet another system table which would allow us\n> > > > > to support unlimited number of to/from converters for different wire\n> > > > > protocols, but it will definitely be easier to start with\n> > > > > typreceive/typsend.\n> > > > \n> > > > We can always re-add the columns them.\n> > > \n> > > But would it not be nice if we could add uniform binary protocol without\n> > > requiring initdb ?\n> > \n> > Seems impossible that would ever happen without an initdb.\n> \n> Why?\n\nIt is inconceivable we would add such a feature without a major release,\nand every major release requires an initdb.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Jul 2002 20:43:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unused system table columns" }, { "msg_contents": "Hannu Krosing wrote:\n> Technically this will probably not extend much beyond modifying function\n> printtup_internal in src/backend/access/common/printtup.c \n> \n> /* ----------------\n> * printtup_internal\n> * We use a different data prefix, e.g. 
'B' instead of 'D' to\n> * indicate a tuple in internal (binary) form.\n> *\n> * This is largely same as printtup,except we don't use the typout func.\n> * ----------------\n> */\n> static void\n> printtup_internal(HeapTuple tuple, TupleDesc typeinfo, DestReceiver\n> *self)\n> \n> \n> The hard part will be agreeing on the actual data format(s), but this\n> can be postponed by having this implementation where people can\n> experiment.\n> \n> After looking at the code again, it seems that we must have already\n> solved (most) alignment issues in printtup, so the task is just agreeing\n> on types' on-wire representations.\n\nI just can't imagine adding anything like that in a minor release. In\nfact, I most would object to adding it in a minor release because we\ndon't add features in a minor release. Now, if you want it kept because\nyou may want to work in that area, we can surely do that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Jul 2002 21:23:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unused system table columns" }, { "msg_contents": "On Tue, 2002-07-16 at 11:13, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> >> We can always re-add the columns them.\n> \n> > But would it not be nice if we could add uniform binary protocol without\n> > requiring initdb ?\n> \n> That won't happen, because the existing contents of those columns are\n> completely useless for a binary-protocol feature.\n> \n> If we do ever add such a feature, we'd be better off adding new columns\n> with a different name, just to avoid confusion over what's supposed to\n> be there. 
(For example: extant pg_dump scripts for user-defined types\n> will try to load wrong values into those columns if given a chance.\n\nSo you know some place that actually uses the values from these columns\n?\n\nOr is it just that we have told users long enough to make them same as\ntypinput/typoutput ?\n\n> We *must* use new names for those slots in CREATE TYPE to avoid that\n> pitfall, and so we might as well change the system column name too.)\n\nCan't we just add a warning when typinput==typreceive or\ntypoutput==typsend, or just plain refuse to make them equal and force\npeople to create another function binding even if they are.\n\nThe interim solution would be to set typreceive/typsend to NULL if they\nare the same as typinput/typoutput in CREATE TYPE.\n\n-------------------\nHannu\n\n\n", "msg_date": "16 Jul 2002 09:36:53 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: Unused system table columns" }, { "msg_contents": "On Tue, 2002-07-16 at 11:44, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > The interim solution would be to set typreceive/typsend to NULL if they\n> > are the same as typinput/typoutput in CREATE TYPE.\n> \n> Which still puts you right back at square one. You might as well define\n> two new columns that will carry the function names for binary transport.\n\nOk, but could this be then rename instead of removing the columns - it\nwill be roughly the same amount of work going through all the places\nthat touch pg_type. 
I'd even guess that renaming is _less_ work..\n\n> typsend/typreceive are hopelessly contaminated at this point, IMHO;\n> it'll be less work and less confusion to adopt other column names than\n> to try to reuse them just \"because they're there\".\n\n-----------\nHannu\n\n\n", "msg_date": "16 Jul 2002 10:04:52 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: Unused system table columns" }, { "msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n> For all intent and purpose, pg_index.indisprimary can be added to that\n> list. Can't make a primary key without a pg_constraint entry.\n\nI disagree. For one thing, there are clients that look at that column.\nThere's no percentage in breaking them to gain zero (and it will be\nzero, because of alignment considerations...)\n\nindisclustered is currently pretty useless, but may become less so\nif CLUSTER gets upgraded to usefulness, so I'm not in favor of deleting\nthat either.\n\nNo strong attachment to the other stuff mentioned so far.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Jul 2002 02:05:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unused system table columns " }, { "msg_contents": "Tom Lane wrote:\n> Rod Taylor <rbt@zort.ca> writes:\n> > For all intent and purpose, pg_index.indisprimary can be added to that\n> > list. Can't make a primary key without a pg_constraint entry.\n> \n> I disagree. For one thing, there are clients that look at that column.\n> There's no percentage in breaking them to gain zero (and it will be\n> zero, because of alignment considerations...)\n\nYes, pgaccess uses it, as I remember. 
Would be nice if it was accurate.\n\n> \n> indisclustered is currently pretty useless, but may become less so\n> if CLUSTER gets upgraded to usefulness, so I'm not in favor of deleting\n> that either.\n\nYea, I can see that being useful some day.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Jul 2002 02:12:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unused system table columns" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n>> We can always re-add the columns them.\n\n> But would it not be nice if we could add uniform binary protocol without\n> requiring initdb ?\n\nThat won't happen, because the existing contents of those columns are\ncompletely useless for a binary-protocol feature.\n\nIf we do ever add such a feature, we'd be better off adding new columns\nwith a different name, just to avoid confusion over what's supposed to\nbe there. (For example: extant pg_dump scripts for user-defined types\nwill try to load wrong values into those columns if given a chance.\nWe *must* use new names for those slots in CREATE TYPE to avoid that\npitfall, and so we might as well change the system column name too.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Jul 2002 02:13:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unused system table columns " }, { "msg_contents": "> > I disagree. For one thing, there are clients that look at that column.\n> > There's no percentage in breaking them to gain zero (and it will be\n> > zero, because of alignment considerations...)\n> \n> Yes, pgaccess uses it, as I remember. 
Would be nice if it was accurate.\n\nNot to mention phpPgAdmin, psql, pg_dump, TOra, pgadmin, etc.\n\nChris\n\n", "msg_date": "Tue, 16 Jul 2002 14:18:10 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Unused system table columns" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Yes, pgaccess uses it, as I remember. Would be nice if it was accurate.\n\nEh? It is, AFAIK.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Jul 2002 02:28:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unused system table columns " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Yes, pgaccess uses it, as I remember. Would be nice if it was accurate.\n> \n> Eh? It is, AFAIK.\n\nOh, I can't remember if it was fixed or broken. Certainly it is nice to\nhave for apps like pgaccess.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Jul 2002 02:29:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unused system table columns" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> The interim solution would be to set typreceive/typsend to NULL if they\n> are the same as typinput/typoutput in CREATE TYPE.\n\nWhich still puts you right back at square one. 
You might as well define\ntwo new columns that will carry the function names for binary transport.\n\ntypsend/typreceive are hopelessly contaminated at this point, IMHO;\nit'll be less work and less confusion to adopt other column names than\nto try to reuse them just \"because they're there\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Jul 2002 02:44:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unused system table columns " } ]
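
The byte swapping Hannu alludes to earlier in this thread (identity on one CPU, swapped bytes on another) is easy to sketch for a single type. The int4_send/int4_recv names below are hypothetical stand-ins for typsend/typreceive-style converters, not PostgreSQL's actual code; they simply pin a 32-bit value to big-endian wire order:

```c
#include <stdint.h>

/* Hypothetical typsend-style converter: write a 32-bit value in
 * network (big-endian) byte order, so x86 and SPARC peers see the
 * same bytes on the wire regardless of host endianness. */
void int4_send(uint32_t v, unsigned char out[4])
{
    out[0] = (unsigned char) (v >> 24);
    out[1] = (unsigned char) (v >> 16);
    out[2] = (unsigned char) (v >> 8);
    out[3] = (unsigned char) v;
}

/* Hypothetical typreceive-style converter: rebuild the host value. */
uint32_t int4_recv(const unsigned char in[4])
{
    return ((uint32_t) in[0] << 24) | ((uint32_t) in[1] << 16) |
           ((uint32_t) in[2] << 8) | (uint32_t) in[3];
}
```

On a big-endian host the pair degenerates to a plain copy, which matches the idea above that a NULL converter could mean "send as-is".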
[ { "msg_contents": "I have added:\n\n\tAC_CHECK_LIB(getopt, main)\n\nto configure.in to allow PostgreSQL to perhaps find getopt_long() in a\nseparate library.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Jul 2002 18:40:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "getopt_long search in configure" }, { "msg_contents": "Bruce Momjian writes:\n\n> I have added:\n>\n> \tAC_CHECK_LIB(getopt, main)\n>\n> to configure.in to allow PostgreSQL to perhaps find getopt_long() in a\n> separate library.\n\nIs there a system that distributes a libgetopt library that contains\ngetopt_long()?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 16 Jul 2002 01:05:42 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: getopt_long search in configure" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > I have added:\n> >\n> > \tAC_CHECK_LIB(getopt, main)\n> >\n> > to configure.in to allow PostgreSQL to perhaps find getopt_long() in a\n> > separate library.\n> \n> Is there a system that distributes a libgetopt library that contains\n> getopt_long()?\n\nI have it here in /usr/local/include. Not sure how it got there. It\nmust have been installed by some other software.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Jul 2002 19:26:36 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: getopt_long search in configure" }, { "msg_contents": "Bruce Momjian writes:\n\n> I have it here in /usr/local/include. 
Not sure how it got there. It\n> must have been installed by some other software.\n\nOK good. But the check should be\n\nAC_SEARCH_LIBS(getopt_long, [getopt])\n\nThat way you check if the library actually contains the function you want.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 16 Jul 2002 23:56:37 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: getopt_long search in configure" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > I have it here in /usr/local/include. Not sure how it got there. It\n> > must have been installed by some other software.\n> \n> OK good. But the check should be\n> \n> AC_SEARCH_LIBS(getopt_long, [getopt])\n> \n> That way you check if the library actually contains the function you want.\n\nThanks. Change made. I was finding it hard to debug the pg_restore\nflag problems without long options. This way, I have them. I will try\nto research how I got libgetopt.a in /usr/local/include. Does anyone\nelse have one? Maybe I generated it by hand.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Jul 2002 22:54:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: getopt_long search in configure" } ]
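
For context, the long options that the AC_SEARCH_LIBS probe is meant to enable are used like the sketch below. The --dbname option here is a made-up example for illustration, not the real flag table of psql or pg_restore:

```c
#include <getopt.h>
#include <stddef.h>

/* Minimal getopt_long() use: scan argv for a hypothetical
 * --dbname/-d option and return its argument (NULL if absent). */
const char *parse_dbname(int argc, char *argv[])
{
    static const struct option longopts[] = {
        {"dbname", required_argument, NULL, 'd'},
        {NULL, 0, NULL, 0}
    };
    const char *dbname = NULL;
    int c;

    optind = 0;  /* glibc-style full reset so the function can be called repeatedly */
    while ((c = getopt_long(argc, argv, "d:", longopts, NULL)) != -1)
    {
        if (c == 'd')
            dbname = optarg;
    }
    return dbname;
}
```

Without getopt_long() the same loop falls back to plain getopt() and only the short -d form is available, which is why configure probes a separate libgetopt when libc lacks the function.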
[ { "msg_contents": "We had a fixed version of getopt() that would properly warn users that\nthey compiled psql without long options on systems with buggy getopt's,\nlike FreeBSD.\n\nNow that I have added a search for libgetopt.a, which may find\ngetopt_long on those platforms, I have removed utils/getopt.c. No\nmodules were using it anyway.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Jul 2002 18:51:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "getopt bug" } ]
[ { "msg_contents": "We have src/utils for stuff that is supposedly used by the backend and\nother binaries, and src/backend/port for stuff used only by the backend.\n\nHowever, over time, this distinction has broken down and we have a\nnumber of backend/port stuff used in other binaries. I propose moving\nthe src/utils remaining items into src/backend/port, and removing the\nsrc/utils directory.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Jul 2002 18:54:42 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Future of src/utils" }, { "msg_contents": "Bruce Momjian writes:\n\n> However, over time, this distinction has broken down and we have a\n> number of backend/port stuff used in other binaries. I propose moving\n> the src/utils remaining items into src/backend/port, and removing the\n> src/utils directory.\n\nI propose the reverse operation.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 16 Jul 2002 01:06:45 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Future of src/utils" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > However, over time, this distinction has broken down and we have a\n> > number of backend/port stuff used in other binaries. I propose moving\n> > the src/utils remaining items into src/backend/port, and removing the\n> > src/utils directory.\n> \n> I propose the reverse operation.\n\nYea, I thought of that. Means all the subdirectories have to move too. 
\nIt is more extreme than moving stuff from /src/utils, but it is more\nlogical.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Jul 2002 19:27:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Future of src/utils" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > However, over time, this distinction has broken down and we have a\n> > number of backend/port stuff used in other binaries. I propose moving\n> > the src/utils remaining items into src/backend/port, and removing the\n> > src/utils directory.\n> \n> I propose the reverse operation.\n\nThe following patch moves dllinit.c and strdup.c into backend/port, and\nremoves the src/utils directory, for the time being. It also cleans up\ndllinit so it has its own variable to point to the file path rather than\nhaving it hard-coded in all the makefiles.\n\nWhen we decide to move everything to src/utils, it will be all ready.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: configure.in\n===================================================================\nRCS file: /cvsroot/pgsql/configure.in,v\nretrieving revision 1.189\ndiff -c -r1.189 configure.in\n*** configure.in\t15 Jul 2002 22:41:45 -0000\t1.189\n--- configure.in\t16 Jul 2002 05:41:22 -0000\n***************\n*** 916,922 ****\n AC_SUBST(INET_ATON)\n AC_CHECK_FUNCS(strerror, [], STRERROR='$(top_builddir)/src/backend/port/strerror.o')\n AC_SUBST(STRERROR)\n! 
AC_CHECK_FUNCS(strdup, [], STRDUP='$(top_builddir)/src/utils/strdup.o')\n AC_SUBST(STRDUP)\n AC_CHECK_FUNCS(strtol, [], STRTOL='$(top_builddir)/src/backend/port/strtol.o')\n AC_SUBST(STRTOL)\n--- 916,922 ----\n AC_SUBST(INET_ATON)\n AC_CHECK_FUNCS(strerror, [], STRERROR='$(top_builddir)/src/backend/port/strerror.o')\n AC_SUBST(STRERROR)\n! AC_CHECK_FUNCS(strdup, [], STRDUP='$(top_builddir)/src/backend/port/strdup.o')\n AC_SUBST(STRDUP)\n AC_CHECK_FUNCS(strtol, [], STRTOL='$(top_builddir)/src/backend/port/strtol.o')\n AC_SUBST(STRTOL)\n***************\n*** 924,929 ****\n--- 924,936 ----\n AC_SUBST(STRTOUL)\n AC_CHECK_FUNCS(strcasecmp, [], STRCASECMP='$(top_builddir)/src/backend/port/strcasecmp.o')\n AC_SUBST(STRCASECMP)\n+ \n+ # Set path of dllinit.c for cygwin\n+ DLLINIT=\"\"\n+ case $host_os in \n+ cygwin*) DLLINIT='$(top_builddir)/src/backend/port/dllinit.o' ;;\n+ esac\n+ AC_SUBST(DLLINIT)\n \n # On HPUX 9, rint() is not in regular libm.a but in /lib/pa1.1/libm.a;\n # this hackery with HPUXMATHLIB allows us to cope.\nIndex: src/Makefile.global.in\n===================================================================\nRCS file: /cvsroot/pgsql/src/Makefile.global.in,v\nretrieving revision 1.148\ndiff -c -r1.148 Makefile.global.in\n*** src/Makefile.global.in\t28 May 2002 16:57:53 -0000\t1.148\n--- src/Makefile.global.in\t16 Jul 2002 05:41:23 -0000\n***************\n*** 359,364 ****\n--- 359,365 ----\n STRERROR = @STRERROR@\n STRTOL = @STRTOL@\n STRTOUL = @STRTOUL@\n+ DLLINIT = @DLLINIT@\n \n TAS = @TAS@\n \nIndex: src/Makefile.shlib\n===================================================================\nRCS file: /cvsroot/pgsql/src/Makefile.shlib,v\nretrieving revision 1.58\ndiff -c -r1.58 Makefile.shlib\n*** src/Makefile.shlib\t24 May 2002 18:10:17 -0000\t1.58\n--- src/Makefile.shlib\t16 Jul 2002 05:41:23 -0000\n***************\n*** 327,339 ****\n else # PORTNAME == win\n \n # WIN case\n! 
$(shlib) lib$(NAME).a: $(OBJS) $(top_builddir)/src/utils/dllinit.o\n \t$(DLLTOOL) --export-all --output-def $(NAME).def $(OBJS)\n! \t$(DLLWRAP) -o $(shlib) --dllname $(shlib) --def $(NAME).def $(OBJS) $(top_builddir)/src/utils/dllinit.o $(DLLINIT) $(DLLLIBS) $(SHLIB_LINK)\n \t$(DLLTOOL) --dllname $(shlib) --def $(NAME).def --output-lib lib$(NAME).a\n \n! $(top_builddir)/src/utils/dllinit.o: $(top_srcdir)/src/utils/dllinit.c\n! \t$(MAKE) -C $(top_builddir)/src/utils dllinit.o\n \n endif # PORTNAME == win\n \n--- 327,339 ----\n else # PORTNAME == win\n \n # WIN case\n! $(shlib) lib$(NAME).a: $(OBJS) $(DLLINIT)\n \t$(DLLTOOL) --export-all --output-def $(NAME).def $(OBJS)\n! \t$(DLLWRAP) -o $(shlib) --dllname $(shlib) --def $(NAME).def $(OBJS) $(DLLINIT) $(DLLLIBS) $(SHLIB_LINK)\n \t$(DLLTOOL) --dllname $(shlib) --def $(NAME).def --output-lib lib$(NAME).a\n \n! $(DLLINIT):\n! \t$(MAKE) -C $(@D) $(@F)\n \n endif # PORTNAME == win\n \nIndex: src/backend/Makefile\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/Makefile,v\nretrieving revision 1.79\ndiff -c -r1.79 Makefile\n*** src/backend/Makefile\t22 May 2002 21:46:40 -0000\t1.79\n--- src/backend/Makefile\t16 Jul 2002 05:41:24 -0000\n***************\n*** 43,49 ****\n \n # No points for style here. How about encapsulating some of these\n # commands into variables?\n! postgres: $(OBJS) $(top_builddir)/src/utils/dllinit.o postgres.def libpostgres.a\n \tdlltool --dllname $@$(X) --output-exp $@.exp --def postgres.def\n \tgcc $(LDFLAGS) -g -o $@$(X) -Wl,--base-file,$@.base $@.exp $(OBJS) $(DLLLIBS)\n \tdlltool --dllname $@$(X) --base-file $@.base --output-exp $@.exp --def postgres.def\n--- 43,49 ----\n \n # No points for style here. How about encapsulating some of these\n # commands into variables?\n! 
postgres: $(OBJS) $(DLLINIT) postgres.def libpostgres.a\n \tdlltool --dllname $@$(X) --output-exp $@.exp --def postgres.def\n \tgcc $(LDFLAGS) -g -o $@$(X) -Wl,--base-file,$@.base $@.exp $(OBJS) $(DLLLIBS)\n \tdlltool --dllname $@$(X) --base-file $@.base --output-exp $@.exp --def postgres.def\n***************\n*** 67,80 ****\n postgres.def: $(OBJS)\n \t$(DLLTOOL) --export-all --output-def $@ $(OBJS)\n \n! libpostgres.a: $(OBJS) $(top_builddir)/src/utils/dllinit.o postgres.def\n \t$(DLLTOOL) --dllname postgres.exe --def postgres.def --output-lib $@\n \n endif # MAKE_DLL\n \n \n! $(top_builddir)/src/utils/dllinit.o: $(top_srcdir)/src/utils/dllinit.c\n! \t$(MAKE) -C $(top_builddir)/src/utils dllinit.o\n \n # The postgres.o target is needed by the rule in Makefile.global that\n # creates the exports file when MAKE_EXPORTS = true.\n--- 67,80 ----\n postgres.def: $(OBJS)\n \t$(DLLTOOL) --export-all --output-def $@ $(OBJS)\n \n! libpostgres.a: $(OBJS) $(DLLINIT) postgres.def\n \t$(DLLTOOL) --dllname postgres.exe --def postgres.def --output-lib $@\n \n endif # MAKE_DLL\n \n \n! $(DLLINIT):\n! \t$(MAKE) -C $(@D) $(@F)\n \n # The postgres.o target is needed by the rule in Makefile.global that\n # creates the exports file when MAKE_EXPORTS = true.\nIndex: src/makefiles/Makefile.win\n===================================================================\nRCS file: /cvsroot/pgsql/src/makefiles/Makefile.win,v\nretrieving revision 1.15\ndiff -c -r1.15 Makefile.win\n*** src/makefiles/Makefile.win\t6 Sep 2001 02:58:33 -0000\t1.15\n--- src/makefiles/Makefile.win\t16 Jul 2002 05:41:25 -0000\n***************\n*** 17,23 ****\n \n %.dll: %.o\n \t$(DLLTOOL) --export-all --output-def $*.def $<\n! \t$(DLLWRAP) -o $@ --def $*.def $< $(top_builddir)/src/utils/dllinit.o $(DLLLIBS)\n \trm -f $*.def\n \n ifeq ($(findstring backend,$(subdir)), backend)\n--- 17,23 ----\n \n %.dll: %.o\n \t$(DLLTOOL) --export-all --output-def $*.def $<\n! 
\t$(DLLWRAP) -o $@ --def $*.def $< $(DLLINIT) $(DLLLIBS)\n \trm -f $*.def\n \n ifeq ($(findstring backend,$(subdir)), backend)", "msg_date": "Tue, 16 Jul 2002 01:42:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Future of src/utils" }, { "msg_contents": "Bruce Momjian writes:\n\n> Yea, I thought of that. Means all the subdirectores have to move too.\n> It is more extreme than moving stuff from /src/utils, but it is more\n> logical.\n\nI don't think we need to move the subdirectories, which involve stuff\nthat's heavily tied to the backend. But the generic C library replacement\nfiles should move into src/utils preferably. In fact, what we could do is\nassemble all the files we need (as determined by configure) into a static\nlibrary and link all executables with that. That way we don't have to\ndeal with the individual files in each individual makefile.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 16 Jul 2002 23:57:37 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Future of src/utils" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I don't think we need to move the subdirectories, which involve stuff\n> that's heavily tied to the backend. But the generic C library replacement\n> files should move into src/utils preferably. In fact, what we could do is\n> assemble all the files we need (as determined by configure) into a static\n> library and link all executables with that. That way we don't have to\n> deal with the individual files in each individual makefile.\n\nI like that a lot. But will it work for libpq? I have a feeling we'd\nend up linking *all* the replacement functions into libpq, which might\ncreate some namespace issues for client applications. 
Ideally we should\nonly link the functions libpq actually needs into libpq, but I'm not\nsure that works with standard linker behavior.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Jul 2002 18:27:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Future of src/utils " }, { "msg_contents": "Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > I don't think we need to move the subdirectories, which involve stuff\n> > that's heavily tied to the backend. But the generic C library replacement\n> > files should move into src/utils preferably. In fact, what we could do is\n> > assemble all the files we need (as determined by configure) into a static\n> > library and link all executables with that. That way we don't have to\n> > deal with the individual files in each individual makefile.\n> \n> I like that a lot. But will it work for libpq? I have a feeling we'd\n\nYes, I like it too, and I like the fact that the subdirectories stay,\nbecause those are so backend-specific, it doesn't make any sense to move\nthem.\n\nCan we move them to src/port rather than src/utils? Port makes more\nsense to me because that's what they are. Maybe it should be called\nsrc/libc?\n\n> end up linking *all* the replacement functions into libpq, which might\n> create some namespace issues for client applications. Ideally we should\n> only link the functions libpq actually needs into libpq, but I'm not\n> sure that works with standard linker behavior.\n\nLinkers work per object file, so if each member of the library has only\none function in it (which is how we do it now anyway) a linker will pick\nout only the object files it needs. Many C libraries have multiple\nfunctions per object file, and that's where you see the namespace\npollution.\n\nActually, our current setup is more prone to pollution because we\nunconditionally add *.o files to the link line. 
Adding a library makes\nsure only the object files needed are added to the executable.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Jul 2002 22:47:36 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Future of src/utils" }, { "msg_contents": "Tom Lane writes:\n\n> > assemble all the files we need (as determined by configure) into a static\n> > library and link all executables with that. That way we don't have to\n> > deal with the individual files in each individual makefile.\n>\n> I like that a lot. But will it work for libpq?\n\nNo, just for executables. Otherwise you'd get into a big PIC mess.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 18 Jul 2002 00:31:31 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Future of src/utils " }, { "msg_contents": "Bruce Momjian writes:\n\n> Can we move them to src/port rather than src/utils? Port makes more\n> sense to me because that's what they are. Maybe it should be called\n> src/libc?\n\nWell, there is a bit of a history in picking a really silly name for this\nlibrary. GCC calls it libiberty, Kerberos calls it libroken. Make up\nyour own. \"port\" makes sense, though.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 18 Jul 2002 00:32:29 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Future of src/utils" }, { "msg_contents": "Peter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > > assemble all the files we need (as determined by configure) into a static\n> > > library and link all executables with that. That way we don't have to\n> > > deal with the individual files in each individual makefile.\n> >\n> > I like that a lot. 
But will it work for libpq?\n> \n> No, just for executables. Otherwise you'd get into a big PIC mess.\n\nOh, OK, so we keep the same object file variables, but just move them to\nsrc/port.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Jul 2002 20:29:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Future of src/utils" } ]
[ { "msg_contents": "I've been conversing with Bruce off-list about getting people together \nfor dinner one night during next week's OSCon in San Diego. Please email \nme if you are interested with your preferred day/time and I will try to \ncoordinate something.\n\nAlso FYI there is a PostgreSQL BOF scheduled:\nPostgreSQL\nDate: 07/24/2002\nTime: 8:00pm - 9:00pm\nLocation: Grande Ballroom B in the East Tower\nModerated by: Bruce Momjian, SRA/Japan\n\nThanks,\n\nJoe\n\n", "msg_date": "Mon, 15 Jul 2002 15:56:22 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "OT: O'Reilly OSCon gatherings" } ]
[ { "msg_contents": "CVSROOT:\t/cvsroot\nModule name:\tpgsql\nChanges by:\tmomjian@postgresql.org\t02/07/15 20:51:37\n\nModified files:\n\tconfig : docbook.m4 \n\tdoc/src/sgml : features.sgml \n\nLog message:\n\tThis fixes 2 inaccuracies in the recently added SQL99 feature list docs.\n\tUNIQUE and DISTINCT predicates are both listed as implemented -- AFAIK,\n\tneither is.\n\t\n\tI also included another trivial patch which adds the default location\n\tof the DSSSL stylesheets on my system (Debian unstable, docbook-dsssl\n\t1.76) to the list of paths that configure looks for.\n\t\n\tNeil Conway\n\n", "msg_date": "Mon, 15 Jul 2002 20:51:37 -0400 (EDT)", "msg_from": "momjian@postgresql.org (Bruce Momjian - CVS)", "msg_from_op": true, "msg_subject": "pgsql/ onfig/docbook.m4 oc/src/sgml/features.sgml" }, { "msg_contents": "> Log message:\n> This fixes 2 inaccuracies in the recently added SQL99 feature list docs.\n> UNIQUE and DISTINCT predicates are both listed as implemented -- AFAIK,\n> neither is.\n\nDISTINCT was implemented a couple of weeks ago. I'll change the docs\nagain...\n\n - Thomas\n", "msg_date": "Mon, 15 Jul 2002 18:46:16 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: pgsql/ onfig/docbook.m4 oc/src/sgml/features.sgml" } ]