[
{
"msg_contents": "With the freshly retrieved current source, now PostgreSQL is running\nfine on an AIX 5L box. Thanks Tom.\n\nBTW, I have done some benchmarking using pgbench on this machine and\nfound that 7.2 is almost two times slower than 7.1. The hardware is a\n4way machine. Since I thought that 7.2 improves the performance for\nSMP machines, I'm now wondering why 7.2 is so slow.\n\npostgresql.conf parameters changed from default values are:\n\nmax_connections = 1024\nwal_sync_method = fdatasync\nshared_buffers = 4096\ndeadlock_timeout = 1000000\n\nconfigure option is: --enable-multibyte=EUC_JP\n\nOf course, these settings are identical for both 7.1 and 7.2.\n\nSee attached graph...",
"msg_date": "Mon, 17 Dec 2001 15:46:37 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "7.2 is slow?"
},
{
"msg_contents": "> With the freshly retrieved current source, now PostgreSQL is running\n> fine on an AIX 5L box. Thanks Tom.\n> \n> BTW, I have done some benchmarking using pgbench on this machine and\n> found that 7.2 is almost two times slower than 7.1. The hardware is a\n> 4way machine. Since I thought that 7.2 improves the performance for\n ^^^^\n> SMP machines, I'm now wondering why 7.2 is so slow.\n\nEwe. I will remind people that this multi-cpu setup is exactly the type\nof machine we wanted to speed up with the new light-weight locking code\nthat reduced spinlock looping.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Dec 2001 02:40:42 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow?"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> \n> With the freshly retrieved current source, now PostgreSQL is running\n> fine on an AIX 5L box. Thanks Tom.\n> \n> BTW, I have done some benchmarking using pgbench on this machine and\n> found that 7.2 is almost two times slower than 7.1.\n\nIs this an AIX specific problem or do all/all SMP/all 4way computers \nhave it ?\n\nIs this a bug that needs to be addressed before release of final ?\n\nOr would we just prominently warn people that the new release is 2x \nslower and advise upgrading only if they have powerful enough computers \n(system load < 0.5 during normal operation)?\n\n------------------------\nHannu\n",
"msg_date": "Mon, 17 Dec 2001 10:54:14 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow?"
},
{
"msg_contents": "> > BTW, I have done some benchmarking using pgbench on this machine and\n> > found that 7.2 is almost two times slower than 7.1.\n> \n> Is this an AIX specific problem or do all/all SMP/all 4way computers \n> have it ?\n\nNot sure. As far as I can tell, nobody except me has tested 7.2 on big\nboxes.\n\n> Is this a bug that needs to be addresse before release of final ?\n\nI hope this will be solved before final. At least I would like to\nknow what's going on.\n\nAnyway, I will do some tests on a smaller machine (that is, my\nlaptop) to see if I see the same performance degradation on it.\n--\nTatsuo Ishii\n",
"msg_date": "Mon, 17 Dec 2001 18:26:44 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: 7.2 is slow?"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> \n> > > BTW, I have done some benchmarking using pgbench on this machine and\n> > > found that 7.2 is almost two times slower than 7.1.\n> >\n> > Is this an AIX specific problem or do all/all SMP/all 4way computers\n> > have it ?\n> \n> Not sure. As far as I can tell, nobody except me has tested 7.2 on big\n> boxes.\n> \n> > Is this a bug that needs to be addresse before release of final ?\n> \n> I hope this would be solved before final. At least I would like to\n> know what's going on.\n> \n> Anyway, I will do some testings on a smaller machine (that is my\n> laptop) to see if I see the same performance degration on it.\n\nHow did you test ?\n\nI could do the same test on Dual Pentium III / 800 w/1024 MB\nwith IBM 45 G/7200 IDE disk.\n\nSo we could compare different platforms as well :)\n\n-------------\nHannu\n",
"msg_date": "Mon, 17 Dec 2001 12:43:05 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow?"
},
{
"msg_contents": "On Mon, Dec 17, 2001 at 12:43:05PM +0200, Hannu Krosing allegedly wrote:\n> Tatsuo Ishii wrote:\n> > \n> > > > BTW, I have done some benchmarking using pgbench on this machine and\n> > > > found that 7.2 is almost two times slower than 7.1.\n> > >\n> > > Is this an AIX specific problem or do all/all SMP/all 4way computers\n> > > have it ?\n> > \n> > Not sure. As far as I can tell, nobody except me has tested 7.2 on big\n> > boxes.\n> > \n> > > Is this a bug that needs to be addresse before release of final ?\n> > \n> > I hope this would be solved before final. At least I would like to\n> > know what's going on.\n> > \n> > Anyway, I will do some testings on a smaller machine (that is my\n> > laptop) to see if I see the same performance degration on it.\n> \n> How did you test ?\n> \n> I could do the same test on Dual Pentium III / 800 w/1024 MB\n> with IBM 45 G/7200 IDE disk.\n> \n> So we could compare different platforms as well :)\n\nI could do some testing on a Sun 450 / 4x400 MHz / 4 GB, if that's helpful.\n\nCheers,\n\nMathijs\n",
"msg_date": "Mon, 17 Dec 2001 13:30:02 +0100",
"msg_from": "Mathijs Brands <mathijs@ilse.nl>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow?"
},
{
"msg_contents": "> > > > Is this an AIX specific problem or do all/all SMP/all 4way computers\n> > > > have it ?\n\nI'll have 4 way and 8 way xeon boxes tues evening that I can test this\nagainst (though I won't get to test till wed unless I don't sleep)\n\n- Brandon\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n",
"msg_date": "Mon, 17 Dec 2001 08:06:54 -0500 (EST)",
"msg_from": "bpalmer <bpalmer@crimelabs.net>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow?"
},
{
"msg_contents": "> > How did you test ?\n> > \n> > I could do the same test on Dual Pentium III / 800 w/1024 MB\n> > with IBM 45 G/7200 IDE disk.\n> > \n> > So we could compare different platforms as well :)\n> \n> I could do some testing on a Sun 450 / 4x400 MHz / 4 GB, if that's helpful.\n> \n> Cheers,\n> \n> Mathijs\n> \n\n> I'll have 4 way and 8 way xeon boxes tues evening that I can test this\n> against (though I won't get to test till wed unless I don't sleep)\n>\n> - Brandon\n\nThanks to everyone. Here are the methods I used for testing, including\ngenerating graphs (actually very simple).\n\n(1) Tweak postgresql.conf to allow a large number of concurrent users. I\n tested up to 1024 on AIX, but for the comparison I think testing up to\n 128 users is enough. Here are example settings:\n\n max_connections = 128\n shared_buffers = 4096\n deadlock_timeout = 100000\n\n You might want to tweak wal_sync_method to get the best\n performance. However this should not affect the comparison between\n 7.1 and 7.2.\n\n(2) Run:\n\n sh bench.sh\n\n It will invoke pgbench for various numbers of concurrent users. So you\n need to install pgbench beforehand (it's in contrib/pgbench. Just type\n make install there to install pgbench).\n\n This will take a while.\n\n(3) (2) will generate a file named \"bench.data\". The file has rows\n where the first column is the number of concurrent users and the\n second one is the tps. Rename it to bench-7.2.data.\n\n(4) Do (1) and (2) for PostgreSQL 7.1 and rename bench.data to\n bench-7.1.data.\n\n(5) Run plot.sh to see the result graph. Note that plot.sh requires\n gnuplot.\n---\nTatsuo Ishii",
"msg_date": "Mon, 17 Dec 2001 23:12:53 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: 7.2 is slow?"
},
{
"msg_contents": "\nIt seems that on dual PIII we are indeed faster than 7.1.3 for a \nsmall number of clients but slower for a large number (~ 40)\n\nMy initial results on dual PIII/800 are as follows\n\n 7.1.3 7.2b4 7.2b4-FULL\n==================================================================\n./pgbench -i -p 5433 \n./pgbench -p 5433 -c 1 -t 100 240/251 217/223 177/181\n./pgbench -p 5433 -c 5 -t 100 93/ 94 211/217 207/212\n./pgbench -p 5433 -c 10 -t 100 57/ 58 145/148 160/163\n------------------------------------------------------------------\n./pgbench -i -s 10 -p 5433 \n./pgbench -p 5433 -c 1 -t 100 171/177 162/166 169/173\n./pgbench -p 5433 -c 5 -t 100 140/143 191/196 202/207\n./pgbench -p 5433 -c 10 -t 100 132/135 165/168 159/163\n./pgbench -p 5433 -c 25 -t 100 65/ 66 60/ 60 75/ 76\n./pgbench -p 5433 -c 50 -t 100 60/ 61 43/ 43 55/ 59\n./pgbench -p 5433 -c 100 -t 100 48/ 48 23/ 23 34/ 34\n------------------------------------------------------------------\n\nOne of the reasons seems to be that vacuum has changed\n\nafter doing \n\npsql -p 5433 -c 'vacuum full'\n\nthe result of\n\n./pgbench -p 5433 -c 100 -t 100\n\nwas 34/34 - still ~25% slower than 7.1.3 but much better \nthan with non-full vacuum (which I guess is used by pgbench)\n\nThe third column 7.2b4-FULL is done by running \n \"psql -p 5433 -c 'vacuum full'\"\nbetween each pgbench run - now the lines cross somewhere \nbetween 25 and 50 concurrent users\n\nOne of the reasons pg is slower on the last lines of my test is that \npostgres is slower when vacuum is not done often enough - \non a fresh db\n\"./pgbench -p 5433 -c 100 -t 10\" gives 67/75 as result\nindicating that one reason is just our non-overwriting storage manager.\n\n\nI also tried to outsmart pg by running the new vacuum \nconcurrently, but was disappointed.\n\nvacuuming in 'normal' psql gave me 20/20 tps and running \nwith nice psql gave 21/21 tps\nrunning ./pgbench -p 5433 -c 100 -t 100 as the first benchmark gave the \nsame result as running it after vacuum full\n\n\n-----------------------------------------------------------------------\nPS. I hope to get single-processor results from the same computer in \nabout 6 hours as well (after my co-worker arrives home and can reboot \nhis computer to single-user)\n\nInxc - after you have rebooted to single-processor mode, please start \nthe postgres daemon by\n\nsu - hannu\ncd db/7.2b4/\nbin/pg_ctl -D data -l logfile\n\nand then run the above pgbench commands from \ncd /home/hannu/src/postgresql-7.1.3/contrib/pgbench/\n-----------------------------------------------------------------------\n",
"msg_date": "Mon, 17 Dec 2001 17:18:18 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow?"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> \n> \n> Thanks to everyone. Here are the methods I used for testings including\n> generating graphs (actually very simple).\n> \n> (1) Tweak postgresql.conf to allow large concurrent users. I tested up\n> to 1024 on AIX, but for the comparison I think testing up to 128\n> users is enough. Here are example settings:\n> \n> max_connections = 128\n> shared_buffers = 4096\n> deadlock_timeout = 100000\n> \n> You might want to tweak wal_sync_method to get the best\n> performance. However this should not affect the comparison between\n> 7.1 and 7.2.\n>\n> (2) Run:\n> \n> sh bench.sh\n\nI have no more time today, but I'll redo the tests with your script\ntomorrow\n(after I have found where to stick database name and port :)\n\n----------------\nHannu\n",
"msg_date": "Mon, 17 Dec 2001 17:37:16 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow?"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> ./pgbench -i -s 10 -p 5433 \n> ./pgbench -p 5433 -c 1 -t 100 171/177 162/166 169/173\n> ./pgbench -p 5433 -c 5 -t 100 140/143 191/196 202/207\n> ./pgbench -p 5433 -c 10 -t 100 132/135 165/168 159/163\n> ./pgbench -p 5433 -c 25 -t 100 65/ 66 60/ 60 75/ 76\n> ./pgbench -p 5433 -c 50 -t 100 60/ 61 43/ 43 55/ 59\n> ./pgbench -p 5433 -c 100 -t 100 48/ 48 23/ 23 34/ 34\n\nYou realize, of course, that when the number of clients exceeds the\nscale factor you're not really measuring anything except update\ncontention on the \"branch\" rows? Every transaction tries to update\nthe balance for its branch, so if you have more clients than branches\nthen there will be lots of transactions blocked waiting for someone\nelse to commit. With a 10:1 ratio, there will be several transactions\nblocked waiting for *each* active transaction; and when that guy\ncommits, all the others will waken simultaneously and contend for the\nchance to update the branch row. One will win, the others will go\nback to sleep, having done nothing except wasting CPU time. Thus a\nsevere falloff in measured TPS is inevitable when -c >> -s. I don't\nthink this scenario has all that much to do with real-world loads,\nhowever.\n\nI think you are right that the difference between 7.1 and 7.2 may have\nmore to do with the change in VACUUM strategy than anything else. Could\nyou retry the test after changing all the \"vacuum\" commands in pgbench.c\nto \"vacuum full\"?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Dec 2001 10:53:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <hannu@tm.ee> writes:\n> > ./pgbench -i -s 10 -p 5433\n> > ./pgbench -p 5433 -c 1 -t 100 171/177 162/166 169/173\n> > ./pgbench -p 5433 -c 5 -t 100 140/143 191/196 202/207\n> > ./pgbench -p 5433 -c 10 -t 100 132/135 165/168 159/163\n> > ./pgbench -p 5433 -c 25 -t 100 65/ 66 60/ 60 75/ 76\n> > ./pgbench -p 5433 -c 50 -t 100 60/ 61 43/ 43 55/ 59\n> > ./pgbench -p 5433 -c 100 -t 100 48/ 48 23/ 23 34/ 34\n> \n> You realize, of course, that when the number of clients exceeds the\n> scale factor you're not really measuring anything except update\n> contention on the \"branch\" rows? \n\nOops! I thought that the deciding table would be tellers and this -s 10 \nwould be ok for up to 100 users\n\nI will retry this with Tatsuo's script using -s 128 (if it still fits on disk \n- at about 160MB/1M tuples it needs 1.6GB for a test with -s 100 and \nI currently have only 1.3G free)\n\nI re-ran some of them with -s 50 (on 7.2b4)\n\neach one after running \"psql -p 5433 -c 'vacuum full;checkpoint;'\"\n\n tps \n./pgbench -p 5433 -i -s 50\n./pgbench -p 5433 -c 1 -t 1000 93/ 93\n./pgbench -p 5433 -c 3 -t 333 106/107\n./pgbench -p 5433 -c 5 -t 200 106/107\n./pgbench -p 5433 -c 8 -t 125 112/113\n./pgbench -p 5433 -c 10 -t 100 94/ 95\n./pgbench -p 5433 -c 25 -t 40 98/ 91\n./pgbench -p 5433 -c 50 -t 20 70/ 74\n\n> Every transaction tries to update\n> the balance for its branch, so if you have more clients than branches\n> then there will be lots of transactions blocked waiting for someone\n> else to commit. With a 10:1 ratio, there will be several transactions\n> blocked waiting for *each* active transaction; and when that guy\n> commits, all the others will waken simultaneously and contend for the\n> chance to update the branch row. One will win, the others will go\n> back to sleep, having done nothing except wasting CPU time. Thus a\n> severe falloff in measured TPS is inevitable when -c >> -s. I don't\n> think this scenario has all that much to do with real-world loads,\n> however.\n\nIt probably models a real-world ill-tuned database :)\n\nAnd it seems that we fall off more rapidly on 7.2 than we did on 7.1, \neven so much so that we will be slower in the end.\n\n> I think you are right that the difference between 7.1 and 7.2 may have\n> more to do with the change in VACUUM strategy than anything else. Could\n> you retry the test after changing all the \"vacuum\" commands in pgbench.c\n> to \"vacuum full\"?\n\nThe third column should be the equivalent of doing so (I did run \n'vacuum full' between each pgbench run, and AFAICT pgbench runs vacuum only \nbefore each run)\n\n--------------\nHannu\n",
"msg_date": "Mon, 17 Dec 2001 18:57:03 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <hannu@tm.ee> writes:\n> > ./pgbench -i -s 10 -p 5433\n> > ./pgbench -p 5433 -c 1 -t 100 171/177 162/166 169/173\n> > ./pgbench -p 5433 -c 5 -t 100 140/143 191/196 202/207\n> > ./pgbench -p 5433 -c 10 -t 100 132/135 165/168 159/163\n> > ./pgbench -p 5433 -c 25 -t 100 65/ 66 60/ 60 75/ 76\n> > ./pgbench -p 5433 -c 50 -t 100 60/ 61 43/ 43 55/ 59\n> > ./pgbench -p 5433 -c 100 -t 100 48/ 48 23/ 23 34/ 34\n> \n> You realize, of course, that when the number of clients exceeds the\n> scale factor you're not really measuring anything except update\n> contention on the \"branch\" rows? Every transaction tries to update\n> the balance for its branch, so if you have more clients than branches\n> then there will be lots of transactions blocked waiting for someone\n> else to commit. With a 10:1 ratio, there will be several transactions\n> blocked waiting for *each* active transaction; and when that guy\n> commits, all the others will waken simultaneously and contend for the\n> chance to update the branch row. One will win, the others will go\n> back to sleep, having done nothing except wasting CPU time. Thus a\n> severe falloff in measured TPS is inevitable when -c >> -s. I don't\n> think this scenario has all that much to do with real-world loads,\n> however.\n\nI did some benchmarking and the interesting part is that 7.2b4 is up to \n2.5X faster than 7.1.3 for _small_ scale factors and up to 25% slower \nwhen there is no contention (-s128, clients <= 128)\n\nPerhaps the waiting on locks somehow organizes things to happen in some \norder that avoids some stupidity in some other locking logic ?\n\nI ran the benchmark (with added vacuum full for 7.2b4) on a Dual PIII 800MHz \nwith 1 G of RAM and an IDE disk. The results are the mean of six runs \nwith the two slowest removed (there was other activity going on sometimes)\n\nthey are for scale factors 1, 10 and 128 \n\nin order to measure real performance of roughly the _same_ dataset each\ntest run did the same total number of transactions 512 with each client \ndoing 512/nr_of_trx.",
"msg_date": "Fri, 21 Dec 2001 14:21:12 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow?"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> in order to measure real performance of roughly the _same_ dataset each\n> test run did the same total number of transactions 512 with each client \n> doing 512/nr_of_trx.\n\nThat means you're only measuring a few transactions per backend (as few\nas 4, near the upper end of the scale). I think the results may say\nmore about backend-startup transients than true peak throughput.\nCould you try it again with a run about ten times that long?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Dec 2001 11:00:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow? "
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n>Hannu Krosing <hannu@tm.ee> writes:\n>\n>>in order to measure real performance of roughly the _same_ dataset each\n>>test run did the same total number of transactions 512 with each client \n>>doing 512/nr_of_trx.\n>>\n>\n>That means you're only measuring a few transactions per backend (as few\n>as 4, near the upper end of the scale). I think the results may say\n>more about backend-startup transients than true peak throughput.\n>Could you try it again with a run about ten times that long?\n>\nI did run 4096trx on 7.2b4 with -s 1, best 4-of-6\n\n 512trx 4096trx ratio\n1 180.59 90.15 2.00\n2 221.52 80.92 2.74\n4 203.72 75.60 2.69\n8 179.54 69.29 2.59\n16 156.68 63.15 2.48\n32 123.48 57.73 2.14\n64 89.99 54.14 1.66\n128 61.84 48.97 1.26\n\nso it seems that a large number of transactions degrades tps \nperformance faster than connection setup overhead does.\n\nI'll try running the whole suite again with a higher number of \ntransactions in a few days\n\n------------------\nHannu\n\n\n",
"msg_date": "Sat, 22 Dec 2001 02:31:12 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow?"
}
]
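The comparison procedure described in the thread above boils down to parsing two of Tatsuo's two-column "clients tps" bench.data files and looking at the ratio at each client count. The sketch below is a hypothetical helper (the function names are not from the thread, and the sample numbers are illustrative, taken from Hannu's dual-PIII table):

```python
# Hypothetical sketch: compare two pgbench result files of the form
# produced by bench.sh as described in the thread ("<clients> <tps>" per
# line). Helper names and sample numbers are illustrative assumptions.

def read_bench(lines):
    """Parse 'clients tps' rows into a dict {clients: tps}."""
    data = {}
    for line in lines:
        parts = line.split()
        if len(parts) == 2:
            data[int(parts[0])] = float(parts[1])
    return data

def compare(old, new):
    """Return {clients: new_tps / old_tps} for client counts in both runs."""
    return {c: new[c] / old[c] for c in sorted(old) if c in new}

if __name__ == "__main__":
    bench_71 = read_bench(["1 240", "10 132", "100 48"])
    bench_72 = read_bench(["1 217", "10 165", "100 23"])
    for clients, ratio in compare(bench_71, bench_72).items():
        # ratio < 1.0 means the newer version is slower at that client count
        print(f"{clients:>4} clients: 7.2 runs at {ratio:.2f}x of 7.1")
```

In real use the lists would be `open("bench-7.1.data")` and `open("bench-7.2.data")`; gnuplot via plot.sh remains the thread's actual plotting method.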
[
{
"msg_contents": "Marc, can we increase the size of messages allowed on the patches list? \nI am getting complaints about delays, and I keep the patches only 1-2 days\nbefore applying them, too.\n\nPeople are starting to try to bypass the limit by posting URLs or\nsending emails directly to me; neither seems good.\n\nAlso, can we allow people subscribed to one list to post to other lists\neven though they are not subscribed? Seems like it would improve list\nresponse times.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Dec 2001 02:11:49 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Size restriction on patches list"
},
{
"msg_contents": "\nOn Mon, 17 Dec 2001, Bruce Momjian wrote:\n\n> Marc, can we increase the size of messages allowed on the patches list?\n> I am getting complaints about delays, and I keep the patches 1-2 days\n> before being applied too.\n\nDamn, I searched for this before, couldn't find it, and just found it now:\n\n%mj_shell -p xx configset pgsql-patches maxlength = 400000\nmaxlength set to \"400000\".\n\n400k large enough?\n\n\n",
"msg_date": "Mon, 17 Dec 2001 09:20:21 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Size restriction on patches list"
},
{
"msg_contents": "> \n> On Mon, 17 Dec 2001, Bruce Momjian wrote:\n> \n> > Marc, can we increase the size of messages allowed on the patches list?\n> > I am getting complaints about delays, and I keep the patches 1-2 days\n> > before being applied too.\n> \n> Damn, I searched for this before, couldn't find it, and just found it now:\n> \n> %mj_shell -p xx configset pgsql-patches maxlength = 400000\n> maxlength set to \"400000\".\n> \n> 400k large enough?\n\nOh, very good. Do we get large emails that need blocking? Should it be\n1MB, which I believe is the default for most mailers?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Dec 2001 11:35:05 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Size restriction on patches list"
}
]
[
{
"msg_contents": "Tom writes:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Zeugswetter Andreas SB SD writes:\n> >> Second I do not understand why the Makefile in pl/tcl is so complicated,\n> >> and not similar e.g. to the plpython one, so the first task should be to\n> >> simplify the Makefile to use Makefile.shlib, and not cook it's own soup.\n> \n> > All the oddities in that makefile are there because presumably some system\n> > needed them in the past.\n> \n> No, it's a historical thing: the Makefile.shlib stuff didn't exist when\n> pltcl was developed. I'm not entirely sure why I didn't try to fold\n> pltcl in with the Makefile.shlib approach when we started doing that.\n> Possibly I was just thinking \"don't fix what ain't broken\". But now\n> I'd vote for changing over.\n\nHere is a patch per above thread.\nI tested the getDBs function in pgtclsh and a simple pltcl function.\n\nI duplicated the file mkMakefile.tcldefs.sh which is probably not good,\nbut bin/pgtclsh already did it like that, so ...\n\nAndreas",
"msg_date": "Mon, 17 Dec 2001 14:43:10 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Problem compiling postgres sql --with-tcl "
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> Here is a patch per above thread.\n\nWhy did you change libpgtcl's build process? AFAIK no one was claiming\nthat was broken.\n\nThis does not seem the right time to be making undiscussed changes in\nMakefile.shlib, either. What's with that?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Dec 2001 19:05:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem compiling postgres sql --with-tcl "
},
{
"msg_contents": "Tom Lane writes:\n\n> \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> > Here is a patch per above thread.\n>\n> Why did you change libpgtcl's build process? AFAIK no one was claiming\n> that was broken.\n>\n> This does not seem the right time to be making undiscussed changes in\n> Makefile.shlib, either. What's with that?\n\nI wasn't under the impression that we wanted to get this patch into 7.2.\nAFAIK, the Tcl stuff has not built on AIX for quite a while, so it's not a\nregression from a previous release. Moreover, the chances that any\nsignificant number of people will test the Tcl build in the remaining test\nperiod is nearly zero.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 18 Dec 2001 23:23:42 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Problem compiling postgres sql --with-tcl "
}
]
[
{
"msg_contents": "Apparently there's been a change in the way views are handled within \nPL/pgSQL. The following program works fine in earlier versions of \nPostgreSQL. It also works if the select on the view is replaced with a \ndirect call to nextval().\n\nWe use this construct repeatedly in OpenACS so we can share queries that \nuse sequences between Oracle and PostgreSQL unless they differ in other \nways that can't be reconciled between the two RDBMS's. Obviously, the \ngreater the number of queries we can share in this way, the lower our \nporting, new code development, and code maintenance costs.\n\nAs it stands neither the older OpenACS 3.x toolkit nor the upcoming \nOpenACS 4.x toolkit will work with PG 7.2.\n\nOuch.\n\nSorry for the long delay in reporting this. I only recently decided to \nget off my rear end and test against PG 7.2 after Marc Fournier tried to \ndo an install and ran into a few problems.\n\nHere's some code. The problem's caused by the fact that two rows are \nbeing inserted due to the reference to \"multiple rows\". Sequential \ncalls to the function which insert single rows works fine:\n\ncreate sequence test_seq_x;\n\ncreate view test_seq as\n select nextval('test_seq_x') as nextval;\n\ncreate table data (i integer primary key);\n\ncreate table multiple_rows (i integer);\n\ninsert into multiple_rows values (1);\ninsert into multiple_rows values (2);\n\ncreate function f() returns boolean as '\nbegin\n\n -- The insert works if you use nextval() instead of the view.\n\n insert into data\n select test_seq.nextval\n from multiple_rows;\n\n return ''true'';\n\nend;' language 'plpgsql';\n\nselect f();\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Mon, 17 Dec 2001 10:20:14 -0800",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": true,
"msg_subject": "Bug in PG 7.2b4 (and b2, for good measure)"
}
]
[
{
"msg_contents": "Is there any reason why recursive SQL functions are not allowed in PG 7.2?\n\nAfter all this:\n\ncreate function foo() returns setof integer as 'select 1'\nlanguage 'sql';\n\ncreate or replace function foo() returns setof integer as\n'select foo()'\nlanguage 'sql';\n\nWorks fine ...\n\nIt turns out that with the aid of a very simple and efficient recursive \nSQL function it is quite easy to devise a key structure for trees that \nscales very, very well. Probably better than using hierarchical \n(\"connect by\") queries with an appropriate parent foreign key in Oracle, \nthough I haven't done any serious benchmarking yet.\n\nThis is important for the OpenACS project which uses a filesystem \nparadigm to organize content in many of its packages.\n\nOne of our volunteer hackers figured out an ugly kludge that lets us \ndefine a recursive SQL function in PG 7.1 and it works great, leading to \nextremely efficient queries that work on the parents of a given node.\n\nWe were thinking we could just declare the function directly in PG 7.2 \nbut instead found we have to resort to a kludge similar to the example \nabove in order to do it. It's a far nicer kludge than our PG 7.1 hack, \nbelieve me, but we were hoping for a clean define of a recursive function.\n\nSQL functions can return rowsets but recursive ones can't be defined \ndirectly.\n\nRecursive PL/pgSQL functions can be defined directly but they can't \nreturn rowsets.\n\nSniff...sniff...sniff :)\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Mon, 17 Dec 2001 10:30:31 -0800",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": true,
"msg_subject": "Recursive SQL functions ..."
}
]
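The tree-key structure Don alludes to can be illustrated outside SQL. The sketch below is a hypothetical materialized-path encoding, not the OpenACS implementation (which builds the key with a recursive SQL function): if each node's key is its parent's key plus a fixed-width component, every ancestor is a prefix of the key and every descendant shares the node's key as a prefix, so both directions of the tree query reduce to key comparisons instead of recursive walks.

```python
# Hypothetical materialized-path sketch of the tree-key idea in the thread.
# The fixed-width encoding and helper names are assumptions for illustration.

WIDTH = 4  # digits per path component (an assumed width)

def child_key(parent_key: str, ordinal: int) -> str:
    """Append one fixed-width component to the parent's key."""
    return parent_key + f"{ordinal:0{WIDTH}d}"

def ancestor_keys(key: str) -> list:
    """All proper ancestors of a node are prefixes of its key."""
    return [key[:i] for i in range(WIDTH, len(key), WIDTH)]

def is_descendant(key: str, root_key: str) -> bool:
    """A descendant's key starts with the ancestor's key."""
    return key != root_key and key.startswith(root_key)

if __name__ == "__main__":
    root = child_key("", 1)              # '0001'
    folder = child_key(root, 3)          # '00010003'
    item = child_key(folder, 2)          # '000100030002'
    print(ancestor_keys(item))           # ['0001', '00010003']
    print(is_descendant(item, root))     # True
```

In SQL terms the descendant test becomes a `LIKE 'prefix%'` range scan over an ordinary b-tree index, which is why a scheme like this can compete with Oracle's `CONNECT BY`.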
[
{
"msg_contents": "\nGot a report the other day of a \"problem\" with pg_dump where, if in the\nmiddle of a dump, someone happens to drop a table, it errors out with:\n\n SQL query to dump the contents of Table 'server_2' did not execute.\nExplanation from backend: 'ERROR: Relation 'server_2' does not exist'.\n The query was: 'COPY \"server_2\" TO stdout;'.\n\nNow, altho I can't imagine it happening often, I know I'd be a bit annoyed\nif I came in the next morning, tried to restore what was in the table that\nshould have been dumped *after* this one, and found that my dump didn't\nwork :(\n\nThis is using a v7.1.3 ... and I know there has been *alot* of changes to\npg_dump since, so this might have already been caught/dealt with ... ?\n\n\n\n\n",
"msg_date": "Mon, 17 Dec 2001 14:10:28 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Potential bug in pg_dump ... "
},
{
"msg_contents": "At 14:10 17/12/01 -0500, Marc G. Fournier wrote:\n>\n>Got a report the other day of a \"problem\" with pg_dump where, if in the\n>middle of a dump, someone happens to drop a table, it errors out with:\n>\n\npg_dump runs in a single TX, which should mean that metadata changes in\nanother process won't affect it. Have I misunderstood the way PG handles\nmetadata changes?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 18 Dec 2001 08:25:21 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Potential bug in pg_dump ... "
},
{
"msg_contents": "At 08:25 18/12/01 +1100, Philip Warner wrote:\n>At 14:10 17/12/01 -0500, Marc G. Fournier wrote:\n>>\n>>Got a report the other day of a \"problem\" with pg_dump where, if in the\n>>middle of a dump, someone happens to drop a table, it errors out with:\n>>\n>\n>pg_dump runs in a single TX, which should mean that metadata changes in\n>another process won't affect it. Have I misunderstood the way PG handles\n>metadata changes?\n>\n\nMaybe I'm expecting too much from the locking, since pg_dump is only\nreading pg_class to get a list of tables. Perhaps it should do one of the\nfollowing:\n\na) Issue a LOCK TABLE for each table (seems like a bad idea)\n\nb) Reconfirm the existence of the table before trying to dump it.\n\nc) Ignore the problem\n\nI favour either (b) or (c). Anyone have comments or other suggestions?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 18 Dec 2001 09:01:25 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Potential bug in pg_dump ... "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At 14:10 17/12/01 -0500, Marc G. Fournier wrote:\n>> Got a report the other day of a \"problem\" with pg_dump where, if in the\n>> middle of a dump, someone happens to drop a table, it errors out with:\n\n> pg_dump runs in a single TX, which should mean that metadata changes in\n> another process won't affect it. Have I misunderstood the way PG handles\n> metadata changes?\n\nIn the case Marc is describing, pg_dump hasn't yet tried to touch the\ntable that someone else is dropping, so it has no lock on the table,\nso the drop is allowed to occur.\n\nA possible (partial) solution is for pg_dump to obtain a read-lock on\nevery table in the database as soon as it sees the table mentioned in\npg_class, rather than waiting till it's ready to read the contents of\nthe table. However this cure might be worse than the disease,\nparticularly for people running \"pg_dump -t table\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Dec 2001 17:06:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Potential bug in pg_dump ... "
},
{
"msg_contents": "[2001-12-17 17:06] Tom Lane said:\n| Philip Warner <pjw@rhyme.com.au> writes:\n| > At 14:10 17/12/01 -0500, Marc G. Fournier wrote:\n| >> Got a report the other day of a \"problem\" with pg_dump where, if in the\n| >> middle of a dump, someone happens to drop a table, it errors out with:\n| \n| > pg_dump runs in a single TX, which should mean that metadata changes in\n| > another process won't affect it. Have I misunderstood the way PG handles\n| > metadata changes?\n| \n| In the case Marc is describing, pg_dump hasn't yet tried to touch the\n| table that someone else is dropping, so it has no lock on the table,\n| so the drop is allowed to occur.\n| \n| A possible (partial) solution is for pg_dump to obtain a read-lock on\n| every table in the database as soon as it sees the table mentioned in\n| pg_class, rather than waiting till it's ready to read the contents of\n| the table. However this cure might be worse than the disease,\n| particularly for people running \"pg_dump -t table\".\n\nHow would this lock-when-seen approach cause problems with '-t'?\n\nISTM, that we could make getTables like\n\n tblinfo = getTables(&numTables, finfo, numFuncs, tablename);\n\nso only that table gets locked when reading pg_class if tablename\nisn't NULL, otherwise all tables get locked.\n\nAside from me not being familiar with the specifics of table locking,\navoiding the \"table dropped during dump\" condition looks \nstraightforward and uncomplicated. From reading the docs, an \nACCESS SHARE lock should keep any pending ALTER TABLE from modifying\nthe table.\n\nWhat am I overlooking?\n\n b\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n",
"msg_date": "Wed, 9 Jan 2002 18:48:29 -0500",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": false,
"msg_subject": "Re: Potential bug in pg_dump ..."
},
{
"msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> [2001-12-17 17:06] Tom Lane said:\n> | A possible (partial) solution is for pg_dump to obtain a read-lock on\n> | every table in the database as soon as it sees the table mentioned in\n> | pg_class, rather than waiting till it's ready to read the contents of\n> | the table. However this cure might be worse than the disease,\n> | particularly for people running \"pg_dump -t table\".\n\n> How would this lock-when-seen approach cause problems with '-t'?\n\nLocking the whole DB when you only want to dump one table might be seen\nas a denial of service. Also, consider the possibility that you don't\nhave the right to lock every table in the DB.\n\nIf we can arrange to lock only those tables that will end up getting\ndumped, then these problems go away. I have not looked to see if that's\na difficult change or not.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 09 Jan 2002 18:55:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Potential bug in pg_dump ... "
},
{
"msg_contents": "[2002-01-09 18:55] Tom Lane said:\n| Brent Verner <brent@rcfile.org> writes:\n| > [2001-12-17 17:06] Tom Lane said:\n| > | A possible (partial) solution is for pg_dump to obtain a read-lock on\n| > | every table in the database as soon as it sees the table mentioned in\n| > | pg_class, rather than waiting till it's ready to read the contents of\n| > | the table. However this cure might be worse than the disease,\n| > | particularly for people running \"pg_dump -t table\".\n| \n| > How would this lock-when-seen approach cause problems with '-t'?\n| \n| Locking the whole DB when you only want to dump one table might be seen\n| as a denial of service. Also, consider the possibility that you don't\n| have the right to lock every table in the DB.\n| \n| If we can arrange to lock only those tables that will end up getting\n| dumped, then these problems go away. I have not looked to see if that's\n| a difficult change or not.\n\nWe can try to lock one or lock all very easily. An ACCESS SHARE\nlock is granted to the user having SELECT privs, if they don't have\nSELECT privs, they'll not have much luck dumping data anyway.\n\nthanks.\n brent\n\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n",
"msg_date": "Wed, 9 Jan 2002 19:19:34 -0500",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": false,
"msg_subject": "Re: Potential bug in pg_dump ..."
},
{
"msg_contents": "[2002-01-09 19:19] Brent Verner said:\n| [2002-01-09 18:55] Tom Lane said:\n| | Brent Verner <brent@rcfile.org> writes:\n| | > [2001-12-17 17:06] Tom Lane said:\n| | > | A possible (partial) solution is for pg_dump to obtain a read-lock on\n| | > | every table in the database as soon as it sees the table mentioned in\n| | > | pg_class, rather than waiting till it's ready to read the contents of\n| | > | the table. However this cure might be worse than the disease,\n| | > | particularly for people running \"pg_dump -t table\".\n| | \n| | > How would this lock-when-seen approach cause problems with '-t'?\n| | \n| | Locking the whole DB when you only want to dump one table might be seen\n| | as a denial of service. Also, consider the possibility that you don't\n| | have the right to lock every table in the DB.\n| | \n| | If we can arrange to lock only those tables that will end up getting\n| | dumped, then these problems go away. I have not looked to see if that's\n| | a difficult change or not.\n| \n| We can try to lock one or lock all very easily. An ACCESS SHARE\n| lock is granted to the user having SELECT privs, if they don't have\n| SELECT privs, they'll not have much luck dumping data anyway.\n\nAttached is a patch that implements table locking for pg_dump.\n\nIf a tablename is specified with '-t tablename', only that table will\nbe locked; otherwise, all tables will be locked. Locks are with \nACCESS SHARE to block only concurrent AccessExclusiveLock operations\non the table.\n\ncomments?\n\nthanks.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman",
"msg_date": "Thu, 10 Jan 2002 22:08:20 -0500",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": false,
"msg_subject": "Re: Potential bug in pg_dump ..."
},
{
"msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> Attached is a patch that implements table locking for pg_dump.\n\nChecked and applied, with some small tweaking. I broke the outer loop\nof getTables() into two loops, one that extracts data from the pg_class\nSELECT result and locks the target tables, and a second one that does\nthe rest of the stuff that that routine does. This is to minimize the\ntime window between doing the pg_class SELECT and locking the tables.\n\nIn testing this thing, I noticed that pg_dump spends a really\nunreasonable amount of time on schema extraction. For example, on the\nregression database the actual COPY commands take less than a quarter of\nthe runtime. (Of course, regression deliberately doesn't contain huge\nvolumes of data, so this case may be unrepresentative of real-world\nsituations.) The retail queries to get table attributes, descriptions,\netc are probably the cause. Something to think about improving\nsomeday...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Jan 2002 18:30:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Potential bug in pg_dump ... "
},
{
"msg_contents": "[2002-01-11 18:30] Tom Lane said:\n| Brent Verner <brent@rcfile.org> writes:\n| > Attached is a patch that implements table locking for pg_dump.\n| \n| Checked and applied, with some small tweaking. I broke the outer loop\n| of getTables() into two loops, one that extracts data from the pg_class\n| SELECT result and locks the target tables, and a second one that does\n| the rest of the stuff that that routine does. This is to minimize the\n| time window between doing the pg_class SELECT and locking the tables.\n\nACK. Now I understand what you meant by the \"more than zero time\"\nto lock the tables :-)\n\nthanks.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n",
"msg_date": "Fri, 11 Jan 2002 21:10:04 -0500",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": false,
"msg_subject": "Re: Potential bug in pg_dump ..."
}
]
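The fix that came out of the thread above amounts to taking the weakest lock that still conflicts with DROP TABLE as soon as each table is seen in pg_class. A minimal sketch of the SQL involved, for illustration only (the table name `some_table` is hypothetical, not from the patch):

```sql
BEGIN;

-- pg_dump first reads the catalog to learn which tables exist ...
SELECT relname FROM pg_class WHERE relkind = 'r';

-- ... and immediately takes ACCESS SHARE on each one. ACCESS SHARE is
-- granted to anyone with SELECT privilege and conflicts only with
-- ACCESS EXCLUSIVE, so it blocks DROP TABLE and ALTER TABLE without
-- blocking concurrent readers or writers.
LOCK TABLE some_table IN ACCESS SHARE MODE;

-- The table can no longer vanish before its contents are dumped.
COPY some_table TO STDOUT;

COMMIT;
```

The window between the pg_class SELECT and the LOCK statements is why the outer loop of getTables() was split in two: a table dropped inside that window still makes the lock fail, but the window is kept as small as possible.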
[
{
"msg_contents": "Apparently there's been a change in the way views are handled within \nPostgreSQL. The following program works fine in earlier versions. It \nalso works if the select on the view is replaced with a direct call to \nnextval().\n\nWe use this construct repeatedly in OpenACS so we can share queries that \nuse sequences between Oracle and PostgreSQL unless they differ in other \nways that can't be reconciled between the two RDBMS's. Obviously, the \ngreater the number of queries we can share in this way, the lower our \nporting, new code development, and code maintenance costs.\n\nAs it stands neither the older OpenACS 3.x toolkit nor the upcoming \nOpenACS 4.x toolkit will work with PG 7.2.\n\nOuch.\n\nSorry for the long delay in reporting this. I only recently decided to \nget off my rear end and test against PG 7.2 after Marc Fournier tried to \ndo an install and ran into a few problems.\n\nHere's some code. The problem's caused by the fact that two rows are \nbeing inserted due to the reference to \"multiple rows\". Sequential \ninserts of single rows using the view work just fine:\n\ncreate sequence test_seq_x;\n\ncreate view test_seq as\n select nextval('test_seq_x') as nextval;\n\ncreate table data (i integer primary key);\n\ncreate table multiple_rows (i integer);\n\ninsert into multiple_rows values (1);\ninsert into multiple_rows values (2);\n\ninsert into data\nselect test_seq.nextval\nfrom multiple_rows;\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Mon, 17 Dec 2001 11:47:40 -0800",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": true,
"msg_subject": "PG 7.2b4 bug?"
},
{
"msg_contents": "Don Baccus <dhogaza@pacifier.com> writes:\n> Apparently there's been a change in the way views are handled within \n> PostreSQL. The following program works fine in earlier versions.\n\nAFAICT, it was just pure, unadulterated luck that it \"works\" in prior\nversions.\n\nIn 7.1 I get:\n\nregression=# select test_seq.nextval from multiple_rows;\nNOTICE: Adding missing FROM-clause entry for table \"test_seq\"\n nextval\n---------\n 3\n 4\n(2 rows)\n\nregression=# explain select test_seq.nextval from multiple_rows;\nNOTICE: Adding missing FROM-clause entry for table \"test_seq\"\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=0.00..30.00 rows=1000 width=4)\n -> Seq Scan on multiple_rows (cost=0.00..20.00 rows=1000 width=0)\n -> Subquery Scan test_seq (cost=0.00..0.00 rows=0 width=0)\n -> Result (cost=0.00..0.00 rows=0 width=0)\n\nEXPLAIN\n\nIn 7.2 I get:\n\nregression=# select test_seq.nextval from multiple_rows;\nNOTICE: Adding missing FROM-clause entry for table \"test_seq\"\n nextval\n---------\n 4\n 4\n(2 rows)\n\nregression=# explain select test_seq.nextval from multiple_rows;\nNOTICE: Adding missing FROM-clause entry for table \"test_seq\"\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=0.00..30.01 rows=1000 width=8)\n -> Subquery Scan test_seq (cost=0.00..0.01 rows=1 width=0)\n -> Result (cost=0.00..0.01 rows=1 width=0)\n -> Seq Scan on multiple_rows (cost=0.00..20.00 rows=1000 width=0)\n\nEXPLAIN\n\nThe reason it \"works\" in 7.1 is that the view is the inside of the\nnested loop, and so is re-evaluated for each tuple from the outer query.\n(The Result node is where the nextval call is actually being evaluated.)\nIn 7.2 the view has been placed on the outside of the nested loop, so\nit's only evaluated once. The reason for the change is that the 7.2\nplanner makes the (much more realistic) assumption that evaluating the\nResult node isn't free, and so it considers that evaluating the view\nmultiple times is more expensive than doing it only once. 
This can be\ndemonstrated to be the cause by setting the Result cost to zero; then\nthe behavior matches 7.1:\n\nregression=# show cpu_tuple_cost ;\nNOTICE: cpu_tuple_cost is 0.01\nSHOW VARIABLE\nregression=# set cpu_tuple_cost to 0;\nSET VARIABLE\nregression=# explain select test_seq.nextval from multiple_rows;\nNOTICE: Adding missing FROM-clause entry for table \"test_seq\"\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=0.00..10.00 rows=1000 width=8)\n -> Seq Scan on multiple_rows (cost=0.00..10.00 rows=1000 width=0)\n -> Subquery Scan test_seq (cost=0.00..0.00 rows=1 width=0)\n -> Result (cost=0.00..0.00 rows=1 width=0)\n\nEXPLAIN\nregression=# select test_seq.nextval from multiple_rows;\nNOTICE: Adding missing FROM-clause entry for table \"test_seq\"\n nextval\n---------\n 5\n 6\n(2 rows)\n\nHowever, it's pure luck that you get the nested loop expressed this way\nand not the other way when the costs come out the same. I'm surprised\nthat you consistently got the behavior you wanted in queries more\ncomplex than this test case.\n\nI'd have to say that I consider the code as given to be broken; it's not\na bug for the planner to rearrange this query in any way it sees fit.\n\nIt would be nice to accept the Oracle syntax for nextval, but I'm\nafraid this hack doesn't get the job done :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Dec 2001 15:35:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG 7.2b4 bug? "
},
{
"msg_contents": "\nOn Mon, 17 Dec 2001, Don Baccus wrote:\n\n> insert into data\n> select test_seq.nextval\n> from multiple_rows;\n\nI'm not sure that's wrong though with that example. test_seq.nextval in\nthe select list means to PostgreSQL a join with test_seq which is a view\nwith one row and I'd expect it to only evaluate that one row once, if\nit did it more than once in the past, I'd say it was buggy.\n\nHowever, I'd think:\n\"select (select nextval from test_seq) from multiple_rows;\"\nshould give you different values and doesn't, although\n\"select (select nextval from test_seq where i IS NULL or i IS NOT NULL)\n from multiple_rows;\" does give you different values.\n\n",
"msg_date": "Mon, 17 Dec 2001 12:37:07 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: PG 7.2b4 bug?"
},
{
"msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> However, I'd think:\n> \"select (select nextval from test_seq) from multiple_rows;\"\n> should give you different values and doesn't, although\n> \"select (select nextval from test_seq where i IS NULL or i IS NOT NULL)\n> from multiple_rows;\" does give you different values.\n\nIn the first case, the subselect is visibly not dependent on the outer\nquery, so it's evaluated only once; in the second case it has to be\nre-evaluated for each row using that row's value of i. You can see the\ndifference (InitPlan vs. SubPlan) in the query's EXPLAIN output.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Dec 2001 16:13:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG 7.2b4 bug? "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Don Baccus <dhogaza@pacifier.com> writes:\n> \n>>Apparently there's been a change in the way views are handled within \n>>PostreSQL. The following program works fine in earlier versions.\n>>\n> \n> AFAICT, it was just pure, unadulterated luck that it \"works\" in prior\n> versions.\n> \n> In 7.1 I get:\n> \n> regression=# select test_seq.nextval from multiple_rows;\n> NOTICE: Adding missing FROM-clause entry for table \"test_seq\"\n> nextval\n> ---------\n> 3\n> 4\n> (2 rows)\n\n\nNormally one expects a statement's semantics to depend only upon the \nsource code and to be consistent, not to vary depending on the mood du \njour of the processor ... this also fails (it's the same statement with \nmanual substitution):\n\ntest=# select (select nextval('test_seq_x') as nextval) as test_seq from \nmultiple_rows;\n test_seq\n----------\n 2\n 2\n(2 rows)\n\ntest=#\n\nIn other words the function's only called once (as I expected).\n\nI've looked at Date and Darwin's appendix on SQL3's PSMs but it's no \nhelp that I can see, it doesn't get into nitpicking semantic details \nregarding their use in queries, just their definition.\n\nMaybe the behavior's implementation defined ... if not, I'd presume SQL3 \n states that a function in the above context is called either once per \nrow or once per query, not sometimes one or sometimes the other.\n\nSo I think it's too early to write this off as not being a bug ...\n\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Mon, 17 Dec 2001 13:16:40 -0800",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": true,
"msg_subject": "Re: PG 7.2b4 bug?"
},
{
"msg_contents": "\nOn Mon, 17 Dec 2001, Tom Lane wrote:\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > However, I'd think:\n> > \"select (select nextval from test_seq) from multiple_rows;\"\n> > should give you different values and doesn't, although\n> > \"select (select nextval from test_seq where i IS NULL or i IS NOT NULL)\n> > from multiple_rows;\" does give you different values.\n>\n> In the first case, the subselect is visibly not dependent on the outer\n> query, so it's evaluated only once; in the second case it has to be\n> re-evaluated for each row using that row's value of i. You can see the\n> difference (InitPlan vs. SubPlan) in the query's EXPLAIN output.\n\nI figured that, but I'm not sure whether or not that's a bug of\nover-optimizing since I think that it probably *should* give you\ndifferent results since it is different each time it's evaluated.\nWithout checking spec, I'd expect that conceptually the select list\nentries are evaluated per row, even if we can avoid that when the\nvalue is certain to be the same, which would mean the result is\nincorrect since if it was evaluated per row it would give different\nresults each time.\n\n\n",
"msg_date": "Mon, 17 Dec 2001 13:36:19 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: PG 7.2b4 bug? "
},
{
"msg_contents": "Don Baccus <dhogaza@pacifier.com> writes:\n> Maybe the behavior's implementation defined ... if not, I'd presume SQL3 \n> states that a function in the above context is called either once per \n> row or once per query, not sometimes one or sometimes the other.\n\nAFAICT, the relevant concept in SQL99 is whether a function is\n\"deterministic\" or not:\n\n An SQL-invoked routine is either deterministic or possibly non-\n deterministic. An SQL-invoked function that is deterministic always\n returns the same return value for a given list of SQL argument\n values. An SQL-invoked procedure that is deterministic always\n returns the same values in its output and inout SQL parameters\n for a given list of SQL argument values. An SQL-invoked routine\n is possibly non-deterministic if, during invocation of that SQL-\n invoked routine, an SQL-implementation might, at two different\n times when the state of the SQL-data is the same, produce unequal\n results due to General Rules that specify implementation-dependent\n behavior.\n\nIt looks to me like the spec does NOT attempt to nail down the behavior\nof non-deterministic functions; in the places where they talk about\nnon-deterministic functions at all, it's mostly to forbid their use in\ncontexts where nondeterminism would affect the final result. Otherwise\nthe results are implementation-defined.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Dec 2001 16:44:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG 7.2b4 bug? "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Don Baccus <dhogaza@pacifier.com> writes:\n> \n>>Maybe the behavior's implementation defined ... if not, I'd presume SQL3 \n>> states that a function in the above context is called either once per \n>>row or once per query, not sometimes one or sometimes the other.\n\n\n> It looks to me like the spec does NOT attempt to nail down the behavior\n> of non-deterministic functions; in the places where they talk about\n> non-deterministic functions at all, it's mostly to forbid their use in\n> contexts where nondeterminism would affect the final result. Otherwise\n> the results are implementation-defined.\n\n\nThanks ... I wasn't trying to lobby for a change, I just wanted to make \nsure that the standard stated that the behavior is implementation \ndefined or otherwise punted on the issue before my example was written \noff as a non-bug.\n\n\nAt some point the non-deterministic behavior of non-deterministic \nfunctions called in subselects in the target list should probably be \ndocumented, no? Most language standards - at least the ones I've worked \non - require compliant implementations to define and document \nimplementation-defined behavior ...\n\nMaybe a warning would be appropriate, too?\n\nI realize both of the above would rank pretty low in priority on the \ntodo list ...\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Mon, 17 Dec 2001 13:54:39 -0800",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": true,
"msg_subject": "Re: PG 7.2b4 bug?"
},
{
"msg_contents": "Don Baccus <dhogaza@pacifier.com> writes:\n> Most language standards - at least the ones I've worked \n> on - require compliant implementations to define and document \n> implementation-defined behavior ...\n\nSQL99 saith:\n\n g) implementation-defined: Possibly differing between SQL-\n implementations, but specified by the implementor for each\n particular SQL-implementation.\n\n h) implementation-dependent: Possibly differing between SQL-\n implementations, but not specified by ISO/IEC 9075, and not\n required to be specified by the implementor for any particular\n SQL-implementations.\n\nBehavior of nondeterministic functions falls in the second category ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Dec 2001 16:59:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG 7.2b4 bug? "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Don Baccus <dhogaza@pacifier.com> writes:\n> \n>>Most language standards - at least the ones I've worked \n>>on - require compliant implementations to define and document \n>>implementation-defined behavior ...\n>>\n> \n> SQL99 saith:\n> \n> g) implementation-defined: Possibly differing between SQL-\n> implementations, but specified by the implementor for each\n> particular SQL-implementation.\n> \n> h) implementation-dependent: Possibly differing between SQL-\n> implementations, but not specified by ISO/IEC 9075, and not\n> required to be specified by the implementor for any particular\n> SQL-implementations.\n> \n> Behavior of nondeterministic functions falls in the second category ...\n\n\n\nYep, those are the definitions I'm used to. OK, then, since this is \nimplementation-dependent, not implementation-defined, PG's off the hook \nentirely!\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Mon, 17 Dec 2001 14:02:08 -0800",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": true,
"msg_subject": "Re: PG 7.2b4 bug?"
},
{
"msg_contents": "Tom Lane wrote:\n\n> Don Baccus <dhogaza@pacifier.com> writes:\n> \n>>Maybe the behavior's implementation defined ... if not, I'd presume SQL3 \n>> states that a function in the above context is called either once per \n>>row or once per query, not sometimes one or sometimes the other.\n\n\nThis is still bothering me so I decided to plunge into the standard \nmyself. First of all...\n\n\n> \n> AFAICT, the relevant concept in SQL99 is whether a function is\n> \"deterministic\" or not:\n\n\nActually this argument may well apply to the function all within the \nsubselect or view, but I fail to see any language in the standard that \nsuggests that this trumps the following declaration about the execution \nof a <query specification> (what many of us informally refer to as a \n\"SELECT\"):\n\n(from section 7.12, Query Specification)\n\na) If T is not a grouped table, then\n\n Case:\n\n(I deleted Case i, which refers to standard aggregates like COUNT)\n\n ii) If the <select list> does not include a <set function\n specification> that contains a reference to T, then\n each <value expression> is applied to each row of T \n\n yielding a table of M rows, where M is the cardinality \n\n of T ...\n\n(FYI a <set function specification> is a standard aggregate like COUNT, \ni.e. Case ii pertains to those queries that don't fall into Case i)\n\nISTM that this quite clearly states that a subselect in a target list \nshould be applied to each row to be returned in M. I don't see any \nwaffle-room here. 
I would have to dig more deeply into the standard's \nview regarding VIEW semantics but I would assume it would knit together \nin a consistent manner.\n\nFor instance, earlier we saw the following exchange between Stephan and Tom:\n\n\nStephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n\n > > However, I'd think:\n > > \"select (select nextval from test_seq) from multiple_rows;\"\n > > should give you different values and doesn't, although\n > > \"select (select nextval from test_seq where i IS NULL or i IS NOT NULL)\n > > from multiple_rows;\" does give you different values.\n\n >\n > In the first case, the subselect is visibly not dependent on the outer\n > query, so it's evaluated only once; in the second case it has to be\n > re-evaluated for each row using that row's value of i.\n\nNote that the standard does not give you this freedom. It says the \n<value expression> (in this case the subselect, and yes subselects are \nvalid <value expressions> in SQL3, at least in my draft) must be applied \nto each row.\n\nIMO this means that the optimizer can choose to evaluate the <value \nexpression> once only if it knows for certain that multiple calls will \nreturn the same value. For example \"my_cachable_function()\", not \n\"my_uncachable_function()\" or \"nextval()\".\n\nOr IMO a view built using a non-cachable function.\n\nIn other words it can only suppress evaluation if it can be certain that \ndoing so doesn't change the result.\n\nAnother nit-pick, the claim's not even strictly true. \"i IS NULL OR i \nIS NOT NULL\" can be folded to true, so the subselect's not \"visibly \ndependent on i\". In fact, it is quite visibly *not* dependent on the \nouter query. PG just isn't smart enough to fold the boolean expression \ninto the known value \"true\".\n\nIt's this kind of uncertainty that makes the current behavior so ... \nugly. You get different answers depending on various optimization \nvalues, the complexity of the query, etc.\n\nISTM that the standard is quite clearly worded to avoid this unpleasantness.\n\n...\n\n\n> It looks to me like the spec does NOT attempt to nail down the behavior\n> of non-deterministic functions; in the places where they talk about\n> non-deterministic functions at all, it's mostly to forbid their use in\n> contexts where nondeterminism would affect the final result. Otherwise\n> the results are implementation-dependent.\n\n\nI've been looking at a few of the \"non-deterministic\" clauses in the \nGeneral Rules, out of curiosity.\n\nThey generally aren't involved with the execution or non-execution of \nexpressions. Ordering of execution is in many cases non-deterministic \nand implementation-dependent. There are plenty of General Rules of this \nsort.\n\nWe also have this big gaping black hole of non-determinism due to \ncharacter set collation.\n\nIn other words:\n\nselect foo\nfrom bar\norder by foo;\n\nis non-deterministic (we don't know the order in which the rows will be \nreturned) if foo is a character type. This can even be true within \nimplementations, for instance in PG it changes when you change \nlocales (and have locale support enabled).\n\nHowever, it seems clear that:\n\nselect foo, my_function()\nfrom bar\norder by foo;\n\nrequires my_function() to be called for every row - we just can't depend \non the order in which it will be applied to those rows in the case where \nfoo is a character type. Of course, iscachable tells the optimizer that \nit's OK to just call it once but that's an extension outside SQL3's \ndomain.\n\nObviously if you run this query over and over again with the same \ncollation order the \"order by\" is deterministic. The non-determinism is \nwith respect to the portability of the query to implementations built on \ndiffering character sets.\n\nI'm just not seeing justification for claiming that Section 7.12 can be \nignored if the subselect or view happens to contain a function that's \nnot cachable.\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Tue, 18 Dec 2001 14:05:30 -0800",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": true,
"msg_subject": "Re: PG 7.2b4 bug?"
},
{
"msg_contents": "Some more bug-or-not-bug thoughts ...\n\nI thought I'd add a quote from Date that furthers my belief that the \nsubselect example I posted does indeed expose a bug:\n\n(T1 is the table conceptually created by the various joins, etc)\n\n\"[if] the select-item takes the form \"scalar-expression [[AS] column]\"\n\n...\n\nFor each row in T1 the scalar-expression is evaluated ..\"\n\n(page 151 Date & Darwin)\n\nSQL92 didn't support subselects in the select-item-list. SQL3 extends \nthe expression to include one-row selects that return a single scalar \nvalue. It does NOT however add any wording that allows the subselect to \nbe yanked and evaluated once rather than evaluated for each row. The \nstandard uses the word \"applied\" not \"evaluated\". I interpret this to \nmean \"evaluated\" and it appears that Date does, too.\n\nOn the other hand the view example is giving the proper result in PG \n7.2, though only by luck, as Tom pointed out earler. For (given the \nview \"create view foo as select nextval('foo_sequence') as nextval;\")\n\nselect foo.nextval\nfrom multiple_rows;\n\nisn't actually legal SQL. It must be stated as:\n\nselect foo.nextval\nfrom foo, multiple_rows;\n\n(all PG does is add \"foo\" to the from clause for me if I leave it out).\n\nThe semantics of this are obvious when you think about it - materialize \n\"foo\" then cross-join the resulting table with multiple_rows. Since \n\"foo\" returns a single row computed by \"nextval('foo_sequence')\" \nobviously the result seen with PG 7.2 is correct. Date is quite clear \non the semantics of this and it makes tons of sense since views are \nmeant to be treated like tables.\n\nSo:\n\n1. If an explicit scalar subselect appears in the target list, it should\n be executed for every row in the result set.\n\n2. 
A view referenced in the target list is actually supposed to be\n materialized in the FROM clause (even if implicitly added to it for\n you) then joined to the other tables in the query, if any. Meaning\n it should always be executed once and only once. The standard\n doesn't have PG-style rules, of course, but such tables also\n should be in the FROM clause, evaluated and joined afterwards\n IMO.\n\nAt least that's my reading and I've spent quite a bit of time on this now.\n\nUnfortunately PG currently doesn't use the form of the query to decide \nwhether or not to execute the subselect or view once or for each row, \nbut rather does so depending on the estimated cost of each approach.\n\nThat's the real bug it seems. The form of the query, not the whim of \nthe optimizer, is the determinant.\n\nNeither of these cases is likely to arise frequently in practice, so if \nI ruled Middle Earth I'd decree that:\n\n1. It be filed as a bug\n\n2. It not be assigned a high priority.\n\nHowever it's not merely of academic interest. The semantics of the view \nexample are such that you should be able to force single-evaluation of a \nfunction by simply wrapping it in a view, regardless of whether or not \nit has side-effects.\n\nMeanwhile I get to go off and inspect the roughly 750 queries that use \nthis particular style view and determine which ones incorrectly assume \nthat the view's evaluated more than once per query! :)\n\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Wed, 19 Dec 2001 12:59:04 -0800",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": true,
"msg_subject": "Re: PG 7.2b4 bug?"
}
] |
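Editorial aside on the thread above: Don's two claims — an explicit scalar subselect is evaluated once per result row, while a view wrapping the same function is materialized once and cross-joined — can be illustrated outside the database. The sketch below is a toy Python model of those claimed semantics, not PostgreSQL behavior; the counter stands in for `nextval('foo_sequence')` and the list for the rows of `multiple_rows` (both names taken from the messages, the Python plumbing invented here):

```python
from itertools import count

# Invented stand-in for a PostgreSQL sequence: each call returns the
# next integer, like nextval('foo_sequence').
seq = count(1)
def nextval():
    return next(seq)

rows = ["a", "b", "c"]  # stands in for the rows of multiple_rows

# Claim 1: an explicit scalar subselect in the target list is
# evaluated once per row of the result set.
per_row = [(r, nextval()) for r in rows]

# Claim 2: a view wrapping nextval() is materialized once in FROM and
# then cross-joined, so every result row sees the same value.
foo = [(nextval(),)]                      # materialize the one-row view
materialized = [(r, v) for r in rows for (v,) in foo]

print(per_row)       # [('a', 1), ('b', 2), ('c', 3)] -- fresh value per row
print(materialized)  # [('a', 4), ('b', 4), ('c', 4)] -- one shared value
```

The point of the model is only that the two query forms have observably different results when the function has side effects, which is why the thread treats the planner's cost-based choice between them as a bug.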
[
{
"msg_contents": "Is there any reason why recursive SQL functions are not allowed in PG 7.2?\n\nAfter all this:\n\ncreate function foo() returns setof integer as 'select 1'\nlanguage 'sql';\n\ncreate or replace function foo() returns setof integer as\n'select foo()'\nlanguage 'sql';\n\nWorks fine ... (until you call it and run out of stack space!)\n\nIt turns out that with the aid of a very simple and efficient recursive \nSQL function it is quite easy to devise a key structure for trees that \nscales very, very well. Probably better than using hierarchical \n(\"connect by\") queries with an appropriate parent foreign key in Oracle, \nthough I haven't done any serious benchmarking yet.\n\nThis is important for the OpenACS project which uses a filesystem \nparadigm to organize content in many of its packages.\n\nOne of our volunteer hackers figured out an ugly kludge that lets us \ndefine a recursive SQL function in PG 7.1 and it works great, leading to \nextremely efficient queries that work on the parents of a given node.\n\nWe were thinking we could just declare the function directly in PG 7.2 \nbut instead found we have to resort to a kludge similar to the example \nabove in order to do it. It's a far nicer kludge than our PG 7.1 hack, \nbelieve me, but we were hoping for a clean define of a recursive function.\n\nSQL functions can return rowsets but recursive ones can't be defined \ndirectly.\n\nRecursive PL/pgSQL functions can be defined directly but they can't \nreturn rowsets.\n\nSniff...sniff...sniff [:)]\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Mon, 17 Dec 2001 11:49:36 -0800",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": true,
"msg_subject": "recursive SQL functions"
},
{
"msg_contents": "Don Baccus <dhogaza@pacifier.com> writes:\n> Is there any reason why recursive SQL functions are not allowed in PG 7.2?\n\nIt's been discussed, cf\nhttp://fts.postgresql.org/db/mw/msg.html?mid=1038929\nbut no one got around to it for 7.2.\n\nAside from the forward-reference problem, which seems easy enough to\nsolve, I think there may be some performance issues that'd have to be\ndealt with (memory leaks and so forth).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Dec 2001 15:57:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: recursive SQL functions "
},
{
"msg_contents": "Tom Lane wrote:\n\n\n> Aside from the forward-reference problem, which seems easy enough to\n> solve, I think there may be some performance issues that'd have to be\n> dealt with (memory leaks and so forth).\n\nOK, that's reasonable. If potential memory leaks are only for the \nduration of a transaction I'm not worried (in regard to the one \nrecursive function we're using) as it's cachable and our trees are not very \ndeep.\n\nI have found one bug that crashed the backend if a recursive function's \ncalled. Are you interested in it, since folks can now define them \nsemi-officially (if a bit unconventionally) via CREATE OR REPLACE?\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Mon, 17 Dec 2001 13:05:15 -0800",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": true,
"msg_subject": "Re: recursive SQL functions"
},
{
"msg_contents": "Don Baccus <dhogaza@pacifier.com> writes:\n> I have found one bug that crashed the backend if a recursive function's \n> called. Are you interested in it,\n\nYou bet. Crashes are always interesting ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Dec 2001 16:08:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: recursive SQL functions "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Don Baccus <dhogaza@pacifier.com> writes:\n> \n>>I have found one bug that crashed the backend if a recursive function's \n>>called. Are you interested in it,\n>>\n> \n> You bet. Crashes are always interesting ...\n\n\nDarn ... dies in PG 7.1 works in PG 7.2 :)\n\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Mon, 17 Dec 2001 13:46:06 -0800",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": true,
"msg_subject": "Re: recursive SQL functions"
},
{
"msg_contents": "Tom Lane writes:\n\n> Don Baccus <dhogaza@pacifier.com> writes:\n> > Is there any reason why recursive SQL functions are not allowed in PG 7.2?\n>\n> It's been discussed, cf\n> http://fts.postgresql.org/db/mw/msg.html?mid=1038929\n> but no one got around to it for 7.2.\n>\n> Aside from the forward-reference problem, which seems easy enough to\n> solve, I think there may be some performance issues that'd have to be\n> dealt with (memory leaks and so forth).\n\nI've prepared a patch for recursive SQL functions. I assume these memory\nleaks \"and so forth\" that you speak of are just issues of quality, not\nsomething that should prevent the use of recursive functions altogether.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sat, 23 Feb 2002 12:55:42 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: recursive SQL functions "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I've prepared a patch for recursive SQL functions. I assume these memory\n> leaks \"and so forth\" that you speak of are just issues of quality, not\n> something that should prevent the use of recursive functions altogether.\n\nI suspect it would not work to re-use a query plan tree at multiple\nrecursion levels (someday, plan trees should be read-only during\nexecution, but they ain't now). As long as you are making a new\nplan tree for each recursive entry, it should work ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 23 Feb 2002 13:07:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: recursive SQL functions "
},
{
"msg_contents": "Tom Lane writes:\n\n> I suspect it would not work to re-use a query plan tree at multiple\n> recursion levels (someday, plan trees should be read-only during\n> execution, but they ain't now). As long as you are making a new\n> plan tree for each recursive entry, it should work ...\n\nISTM that this is already what's happening. Each level gets a new plan\ntree.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sat, 23 Feb 2002 14:08:24 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: recursive SQL functions "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> ISTM that this is already what's happening. Each level gets a new plan\n> tree.\n\nFine then. I wasn't sure if that'd fall out of the existing behavior\nor not...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 23 Feb 2002 14:23:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: recursive SQL functions "
}
] |
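Editorial aside on the thread above: the recursive set-returning function OpenACS uses for its tree keys is never shown in the messages. As a hedged illustration of the idea — walking from a node up through its parents, the operation the thread says makes "queries that work on the parents of a given node" cheap — here is a minimal Python sketch with invented sample data; the real thing would be a recursive SQL function over a parent-key column:

```python
def ancestors(parent_of, node):
    """Yield node's parent, grandparent, ... up to the root.

    parent_of is an invented child -> parent mapping standing in for a
    parent foreign-key column; None marks the root.
    """
    parent = parent_of.get(node)
    if parent is not None:
        yield parent
        yield from ancestors(parent_of, parent)

# Invented sample tree.
tree = {"leaf": "branch", "branch": "trunk", "trunk": None}
print(list(ancestors(tree, "leaf")))  # ['branch', 'trunk']
```

The recursion depth is bounded by the tree's height, which matches Don's remark that stack use is no worry because "our trees are not very deep."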
[
{
"msg_contents": "I was browsing through the TODO list and stumbled upon the connection pooling \nitem discussed over a year ago. Is it safe to assume that since it's still \nlisted in the TODO that it's still a desirable feature? Although I don't \nclaim to know jack about the postgresql source (although I am trying to learn \nmy way around) it does seem like the initial idea proposed by Alfred \nPerlstein could offer considerable improvements with respect to postgresql's \nability to handle larger numbers of connections. Passing file descriptors \nbetween processes aside, is it feasible to at least get the backend to where \nit could theoretically handle this --- multiple, simultaneous clients.\n\nEven if the descriptor passing were not the best (most portable) way to get \ninformation from the client - postmaster to the backend, there still might be \nanother way by which data from the client socket can be transmitted to a \nbackend, thereby allowing the data from multiple clients to be sent to a \nsingle backend. For instance, the postmaster could accept() and also \nmultiplex (select()/poll(), or use kqueue on BSD) across all file \ndescriptors, sending and receiving data to/from the clients. After calling \nselect(), it assigns/maps the client socket descriptor to one of the \nbackends, which it is connected to via domain socket. All subsequent data \ncoming from that client descriptor will be passed directly to that backend \nvia the domain socket. Information sent from the backend to the postmaster \nthrough that socket includes the descriptor number of the client to whom that \ninformation is intended. The postmaster would multiplex across descriptors to \nclients and backends alike, effectively connecting the M clients to N \nbackends, without having to pass descriptors.\n\nI doubt if what I am going to say in this paragraph is new to any of you, so \ndon't think I'm being preachy, or that I think this is a novel idea. 
But \nallowing multiple backends to process multiple clients, if only considered \nfrom a mathematical standpoint, seems like a way to increase (perhaps even \ndramatically) the maximum number of connections postgresql can service while \nmaintaining a fixed number of backends, and therefore a known performance \nlevel. The central issue is the fact that the database server reaches \nmarginal returns with the number of active backend processes, beyond which \nthe overhead from shared resource contention, context switching, etc., \ndegrades performance. This ideal number, varying from machine to machine, can \nof course be controlled by setting the max number of connections. However, \nonce reached, other clients are then locked out. When that number is reached, \nthe real question becomes whether or not the server is really being used to \nits full potential. And most likely it is not --- because there is still a \nsignificant amount of idle time in each backend. And while the maximum number \nof connections has been reached, the overall utilization is less than what \nthe machine can be performing, all the while locking other clients out. Then \nthe pendulum can swing the other way: you set the max number of connections \nwell beyond the ideal number (creating a buffer of sorts) to allow for this \nscenario, so those blocked clients are let in. The problem is that this also \nopens another worst-case scenario: what if all of the maximum connections are \nactive, sending the number of active backends well beyond the ideal limit?\n\nBy having the postmaster map multiple clients to a fixed number of backends, \nyou achieve the happy medium: You never exceed the ideal number of active \nbackends, and at the same time you are not limited to only accepting a fixed \nnumber of connections. Accepting connections can now be based on load \n(however you wish to define it), not number. 
You now make decisions based on \nutilization.\n\nIf it were shown that even half of a backend's life consisted of idle time, \nleasing out that idle time to another active connection would potentially \ndouble the average number of simultaneous requests without (theoretically) \nincurring any significant degradation in performance.\n\nI have worked on code that uses this model, and would be glad to \nadapt/contribute it to postgresql. Currently it does pass file descriptors \nfrom a listening process to a queue process using send/recvmsg(). I have no \ntrouble with it on Linux and BSD, but I don't pretend to know anything about \nportability. However, even if this is an issue, I would be willing to adapt \nthe model mentioned above. Currently, the design I have is to pass in \ncomplete transactions, so that the N to M mapping can be achieved as an \napplication server. The downside is that state cannot be maintained within \nthe database (backend) as they are shared between clients on the application \nserver. Every request the server makes to postgresql must be a standalone \ntransaction.\n\nSo I guess my question is whether or not there is still interest in this, and \nwhether there are still great difficulties in making the necessary changes to \nthe backend so that it could handle multiple clients. If it does seem \npossible, and there is interest, then I would be willing to take a stab at it \nwith the code I have developed.\n",
"msg_date": "Mon, 17 Dec 2001 18:46:44 -0600",
"msg_from": "Michael Owens <owensmk@earthlink.net>",
"msg_from_op": true,
"msg_subject": "Connection Pooling, a year later"
},
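Editorial aside on the proposal above: the M-clients-to-N-backends mapping can be modeled abstractly. The toy Python sketch below is an assumption-laden illustration only — round-robin assignment and the client-descriptor tag on each frame are the parts the message actually proposes, while the class names and data shapes are invented; real postmaster code would multiplex actual sockets with select()/poll():

```python
import itertools

class Postmaster:
    """Toy model of mapping M clients onto N backends: assign each
    client to a backend, and tag every forwarded message with the
    client's descriptor so the backend's reply can be routed back."""

    def __init__(self, n_backends):
        self.rr = itertools.cycle(range(n_backends))  # round-robin assignment
        self.route = {}                               # client fd -> backend id

    def assign(self, client_fd):
        self.route[client_fd] = next(self.rr)
        return self.route[client_fd]

    def forward(self, client_fd, data):
        # Returns (backend id, tagged frame); the tag is the client fd,
        # as the proposal describes.
        return self.route[client_fd], (client_fd, data)

pm = Postmaster(n_backends=2)
for fd in (7, 8, 9):
    pm.assign(fd)
print(pm.forward(9, b"SELECT 1"))  # (0, (9, b'SELECT 1'))
```

Three clients share two backends here, which is the whole point of the design: the number of active backends stays fixed while the number of accepted clients can grow.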
{
"msg_contents": "Michael Owens <owensmk@earthlink.net> writes:\n> Even if the descriptor passing were not the best (most portable) way to get \n> information from the client - postmaster to the backend, there still might be\n> another way by which data from the client socket can be transmitted to a \n> backend, thereby allowing the data from multiple clients to be sent to a \n> single backend. For instance, the postmaster could accept() and also \n> multiplex (select()/poll(), or use kqueue on BSD) across all file \n> descriptors, sending and receiving data to/from the clients.\n\nThat would turn the postmaster into a performance bottleneck, since it\nwould have to do work for every client interaction.\n\nAnother objection is that the postmaster would need to have two open\nfile descriptors (the original client connection and a pipe to the\nbackend) for every active connection. On systems with a fairly low\nmax-files-per-process setting (~ 50 is not uncommon) that would put\na serious crimp in the number of connections that could be supported.\n\nOn the whole it strikes me as more practical to do connection pooling\non the client side...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Dec 2001 20:16:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling, a year later "
},
{
"msg_contents": "I don't get the deal with connection pooling.\n\nSure, there are some efficiencies in reducing the number of back-end postgres\nprocesses, but at the cost of what I see as a huge complication.\n\nHaving experimented with Oracle's connection pooling, and watching either it or\nPHP(Apache) crash because of a bug in the query state tracking, I figured I'd\nbuy some more RAM and forget about the process memory and call myself lucky.\n\nIf you have a web server and use (in PHP) pg_pConnect, you will get a\npostgresql process for each http process on your web servers.\n\nBesides memory, are there any real costs associated with having a good number of\nidle PostgreSQL processes sitting around? \n\nTom, Bruce?\n",
"msg_date": "Mon, 17 Dec 2001 21:34:25 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling, a year later"
},
{
"msg_contents": "> I don't get the deal with connection pooling.\n> \n> Sure, there are some efficiencies in reducing the number of\n> back-end postgres processes, but at what I see as a huge\n> complication.\n> \n> Having experimented with Oracle's connection pooling, and watching\n> either it or PHP(Apache) crash because of a bug in the query\n> state tracking, I figured I'd buy some more RAM and forget about\n> the process memory and call myself lucky.\n> \n> If you have a web server and use (in PHP) pg_pConnect, you will\n> get a postgresql process for each http process on your web\n> servers.\n> \n> Beside memory, are there any real costs associated with having\n> a good number of idle PostgreSQL processes sitting around?\n\nI think it is the startup cost that most people want to avoid, and ours\nis higher than most db's that use threads; at least I think so.\n\nIt would just be nice to have it done internally rather than have all\nthe clients do it, iff it can be done cleanly.\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Dec 2001 22:42:03 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling, a year later"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > I don't get the deal with connection pooling.\n> >\n> > Sure, there are some efficiencies in reducing the number of\n> > back-end postgres processes, but at what I see as a huge\n> > complication.\n> >\n> > Having experimented with Oracle's connection pooling, and watching\n> > either it or PHP(Apache) crash because of a bug in the query\n> > state tracking, I figured I'd buy some more RAM and forget about\n> > the process memory and call myself lucky.\n> >\n> > If you have a web server and use (in PHP) pg_pConnect, you will\n> > get a postgresql process for each http process on your web\n> > servers.\n> >\n> > Beside memory, are there any real costs associated with having\n> > a good number of idle PostgreSQL processes sitting around?\n> \n> I think it is the startup cost that most people want to avoid, and our's\n> is higher than most db's that use threads; at least I think so.\n> \n> It would just be nice to have it done internally rather than have all\n> the clients do it, iff it can be done cleanly.\n\nYou can usually avoid most (all?) of the startup cost by using persistent\nconnections with PHP.\n\nMy concern is, and do you know, besides the memory used by idle postgres\nprocesses, are there any performance reasons why connection pooling a smaller\nnumber of processes would perform better than a larger number of idle\npersistent processes?\n\nUnless it does, I would say that connection pooling is pointless.\n",
"msg_date": "Mon, 17 Dec 2001 22:50:24 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling, a year later"
},
{
"msg_contents": "> > I think it is the startup cost that most people want to avoid, and our's\n> > is higher than most db's that use threads; at least I think so.\n> > \n> > It would just be nice to have it done internally rather than have all\n> > the clients do it, iff it can be done cleanly.\n> \n> You can usually avoid most (all?) of the startup cost by using persistent\n> connections with PHP.\n\nYes, that is assuming you are using PHP. If you are using something\nelse, you need connection pooling in there too. All those client interfaces\nreimplementing connection pooling seems like a waste to me.\n\n> My concern is, and do you know, besides the memory used by idle postgres\n> processes, are there any performance reasons why connection pooling a fewer\n> number of processes, would perform better than a larger number of idle\n> persistent processes?\n> \n> Unless it does, I would say that connection pooling is pointless.\n\nNo, idle backends take minimal resources.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Dec 2001 22:57:11 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling, a year later"
},
{
"msg_contents": "> If you have a web server and use (in PHP) pg_pConnect, you will get a\n> postgresql process for each http process on your web servers.\n>\n> Beside memory, are there any real costs associated with having a\n> good number of\n> idle PostgreSQL processes sitting around?\n\nIf implemented, surely the best place to put it would be in libpq? You\ncould always add a function to libpq to create a 'pooled' connection,\nrather than a normal connection. Basically then the PHP guys would just use\nthat instead of their own pg_connect function. I guess it would mean that\nlots of people who use the pgsql client wouldn't have to rewrite their own\nconnection sharing code.\n\nHowever, where would you put all the options for the pool? Like max\nprocesses, min processes, etc.\n\nI have learnt that half the problem with connection pooling is transactions\nthat fail to be rolled back...\n\nChris\n\n",
"msg_date": "Tue, 18 Dec 2001 12:42:55 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling, a year later"
},
{
"msg_contents": "> If implemented, surely the best place to put it would be in libpq? You\n> could always add a function to lib pq to create a 'pooled' connection,\n> rather than a normal connection. Basically then the PHP guys would just use\n> that instead of their own pg_connect function. I guess it would mean that\n> lots of people who use the pgsql client wouldn't have to rewrite their own\n> connection sharing code.\n> \n> However, where would you put all the options for the pool? Like max\n> processes, min processes, etc.\n> \n> I have learnt that half the problem with connection pooling is transactions\n> that fail to be rolled back...\n\nThe trick for that is to call COMMIT before you pass the backend to a\nnew person. Now, if you want to abort a left-over transaction, you can\ndo an ABORT but that is going to show up in the server logs because an\nABORT without a transaction causes an error message.\n\nWe also have RESET ALL for connection pooling use.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Dec 2001 23:49:06 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling, a year later"
},
{
"msg_contents": "> I think it is the startup cost that most people want to avoid, and our's\n> is higher than most db's that use threads; at least I think so.\n>\n> It would just be nice to have it done internally rather than have all\n> the clients do it, iff it can be done cleanly.\n\nI'd add that client side connection pooling isn't effective in some cases\nanyway - one application we work with has 4 physical application servers\nrunning around 6 applications. Each of the applications was written by a\ndifferent vendor, and thus a pool size of five gives you 120 open\nconnections.\n\nFrom another message, implementing it in libpq doesn't solve for JDBC\nconnectivity either.\n\nMy knowledge of the PostgreSQL internals is rather limited, but could you\nnot kick off a number of backends and use the already existing block of\nshared memory to grab and process requests?\n\nCheers,\n\nMark Pritchard\n\n",
"msg_date": "Tue, 18 Dec 2001 17:06:40 +1100",
"msg_from": "\"Mark Pritchard\" <mark@tangent.net.au>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling, a year later"
},
{
"msg_contents": "At 11:49 PM 12/17/01 -0500, Bruce Momjian wrote:\n>new person. Now, if you want to abort a left-over transaction, you can\n>do an ABORT but that is going to show up in the server logs because an\n>ABORT without a transaction causes an error message.\n\nI do a lot of rollbacks typically. Would that cause errors?\n\nI prefer doing rollbacks to commits when in doubt.\n\nRegards,\nLink.\n\n\n",
"msg_date": "Tue, 18 Dec 2001 19:02:02 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling, a year later"
},
{
"msg_contents": "At 10:57 PM 12/17/01 -0500, Bruce Momjian wrote:\n>Yes, that is assuming you are using PHP. If you are using something\n>else, you connection pooling in there too. All those client interfaces\n>reimplementing connection pooling seems like a waste to me.\n\nBut trying to connect and reconnect to an RDBMS 100 times a sec sounds\nbroken (plus authentication etc).\n\nI personally think the fix for that should be at the client side. At worst\nit should be in an intermediate application (listener). Otherwise it's like\ntrying to turn a db server into a webserver, quite a bit of work there.\n\n>> My concern is, and do you know, besides the memory used by idle postgres\n>> processes, are there any performance reasons why connection pooling a fewer\n>> number of processes, would perform better than a larger number of idle\n>> persistent processes?\n>> \n>> Unless it does, I would say that connection pooling is pointless.\n>\n>No, idle backends take minimal resources.\n\nI'd personally be happy with a large number of backends then. Probably\nmore deterministic having everything fully loaded to the max. \n\nCheerio,\nLink.\n\n",
"msg_date": "Tue, 18 Dec 2001 19:14:51 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling, a year later"
},
{
"msg_contents": "\nBruce Momjian wrote:\n> \n> > If implemented, surely the best place to put it would be in libpq? You\n> > could always add a function to lib pq to create a 'pooled' connection,\n> > rather than a normal connection. Basically then the PHP guys would just use\n> > that instead of their own pg_connect function. I guess it would mean that\n> > lots of people who use the pgsql client wouldn't have to rewrite their own\n> > connection sharing code.\n> >\n> > However, where would you put all the options for the pool? Like max\n> > processes, min processes, etc.\n> >\n> > I have learnt that half the problem with connection pooling is transactions\n> > that fail to be rolled back...\n> \n> The trick for that is to call COMMIT before you pass the backend to a\n> new person. Now, if you want to abort a left-over transaction, you can\n> do an ABORT but that is going to show up in the server logs because an\n> ABORT without a transaction causes an error message.\n> \n> We also have RESET ALL for connection pooling use.\n\nThe problem with connection pooling, and it can be a very difficult problem, is\nthe state of the connection. What we saw with the Oracle connection pooling was\nif a SQL query took too long, and/or the PHP front end timed out or crashed\n(The XML library does this sometimes) that the Oracle connection was in a\nstrange state. Sometimes the connection stayed active with respect to the\npooling software, but brain dead. The apache process which was lucky enough\nto get that pooled connection either errored or hung.\n\nThere were a large number of virtually untraceable problems related to the\nprevious query and the previous client's behavior.\n\nI know I am being alarmist, but my experience with connection pooling left a\nbad taste in my mouth. I can see a persistent connection used per process, but\npooling \"n\" processes across \"x < n\" connections is problematic. 
The pooling\nsoftware has to be able to detect and act upon the real \"unexpected\" status of\nthe back-end, not just what it thinks it is.\n\nMost high performance software already has a notion of persistent connection,\nwhich has been debugged and tuned. If there is no real benefit to reducing the\nnumber of back-end processes, I think connection pooling is something that will be\nmore problematic than productive.\n",
"msg_date": "Tue, 18 Dec 2001 08:31:04 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling, a year later"
},
{
"msg_contents": "> At 11:49 PM 12/17/01 -0500, Bruce Momjian wrote:\n> >new person. Now, if you want to abort a left-over transaction, you can\n> >do an ABORT but that is going to show up in the server logs because an\n> >ABORT without a transaction causes an error message.\n> \n> I do a lot of rollbacks typically. Would that cause errors?\n> \n> I prefer doing rollbacks to commits when in doubt.\n\nNo problem, it is just that rollbacks when you are not in a transaction\ncause a log error message.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Dec 2001 09:15:05 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling, a year later"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> No problem, it is just that rollbacks when you are not in a transaction\n> cause a log error message.\n\nI don't see any difference in the behavior: you get a notice either way.\n\nregression=# commit;\nNOTICE: COMMIT: no transaction in progress\nCOMMIT\nregression=# rollback;\nNOTICE: ROLLBACK: no transaction in progress\nROLLBACK\nregression=#\n\nMy recommendation would generally be to do a ROLLBACK not a COMMIT, on\nthe grounds that if the previous user failed to complete his transaction\nyou probably want to abort it, not assume that it's safe to commit.\n\nHowever, this safety-first approach might be unworkable if you have a\nlarge body of existing code that all assumes it needn't issue COMMIT\nexplicitly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Dec 2001 10:08:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling, a year later "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > No problem, it is just that rollbacks when you are not in a transaction\n> > cause a log error message.\n> \n> I don't see any difference in the behavior: you get a notice either way.\n> \n> regression=# commit;\n> NOTICE: COMMIT: no transaction in progress\n> COMMIT\n> regression=# rollback;\n> NOTICE: ROLLBACK: no transaction in progress\n> ROLLBACK\n> regression=#\n> \n> My recommendation would generally be to do a ROLLBACK not a COMMIT, on\n> the grounds that if the previous user failed to complete his transaction\n> you probably want to abort it, not assume that it's safe to commit.\n> \n> However, this safety-first approach might be unworkable if you have a\n> large body of existing code that all assumes it needn't issue COMMIT\n> explicitly.\n\nSorry, I should have said do a \"BEGIN;COMMIT;\". That only generates an\nerror message if a transaction was left open, and it commits the\nleft-open transaction.\n\nWe can add a SILENT keyword to COMMIT/ROLLBACK if people really want it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Dec 2001 10:12:57 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling, a year later"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n\n> It would just be nice to have it done internally rather than have all\n> the clients do it, iff it can be done cleanly.\n\nSerious client applications that need it already do it. Firing up an \nOracle or most other db's isn't that lightweight a deal, either, it's \nnot useful only for PG..\n\nPersonally I'd just view it as getting in the way, but then I use a \nwebserver that's provided connection pooling for client threads for the \nlast seven years ...\n\nI agree with Tom that the client seems to be the best place to do this.\n\nAmong other things it isn't that difficult. If you know how to fire up \none connection, you know how to fire up N of them and adding logic to \npool them afterwards is easy enough.\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Tue, 18 Dec 2001 08:14:57 -0800",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling, a year later"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n\n> Yes, that is assuming you are using PHP. If you are using something\n> else, you connection pooling in there too. All those client interfaces\n> reimplementing connection pooling seems like a waste to me.\n\n\nEffective pooling's pretty specific to your environment, though, so any \ngeneral mechanism would have to provide a wide-ranging suite of \nparameters governing the number to pool, how long each handle should \nlive, what to do if a handle's released by a client while in the midst \nof a transaction (AOLserver rolls back the transaction, other clients \nmight want to do something else, i.e. fire a callback or the like), etc etc.\n\nI think it would be fairly complex and for those high-throughput \napplications already written with client-side pooling no improvement.\n\nAnd those are the only applications that need it.\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Tue, 18 Dec 2001 08:24:31 -0800",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling, a year later"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n\n> \n> The trick for that is to call COMMIT before you pass the backend to a\n> new person.\n\n\nThe failure to COMMIT is a programmer error - ROLLBACK's much safer. At \n least that's what we decided in the AOLserver community, and that's \nwhat the drivers for Oracle and PG (the two I maintain) implement.\n\n> Now, if you want to abort a left-over transaction, you can\n> do an ABORT but that is going to show up in the server logs because an\n> ABORT without a transaction causes an error message.\n\n\nThe connection pooling mechanism needs to track the transaction state \nand only ROLLBACK a handle that's not in autocommit state or in the \nmidst of a BEGIN/END transaction (again, Oracle vs. PG)..\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Tue, 18 Dec 2001 08:29:10 -0800",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling, a year later"
},
{
"msg_contents": "Mark Pritchard wrote:\n\n>>I think it is the startup cost that most people want to avoid, and our's\n>>is higher than most db's that use threads; at least I think so.\n>>\n>>It would just be nice to have it done internally rather than have all\n>>the clients do it, iff it can be done cleanly.\n>>\n> \n> I'd add that client side connection pooling isn't effective in some cases\n> anyway - one application we work with has 4 physical application servers\n> running around 6 applications. Each of the applications was written by a\n> different vendor, and thus a pool size of five gives you 120 open\n> connections.\n\nTuning a central pooling mechanism to run well in this kind of situation \nisn't going to be a trivial task, either. The next thing you'll want is \nsome way to prioritize the various clients so your more serious \napplications have a better chance of getting a pool.\n\nOr you'll want to set up subpools so they don't compete with each other, \nin effect replicating what's done now, but adding more complexity to the \ncentral service.\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Tue, 18 Dec 2001 08:33:53 -0800",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling, a year later"
},
{
"msg_contents": "Will schema support resolve this discussion?\nIf I understand correctly, the initial argument for connection pooling\nwas the restriction on the number of persistent connections. It's true in\ncurrent PostgreSQL that if one wants to keep connections open for performance\nreasons to several databases, the total number of connections will be\ndoubled, tripled and so on. But if I understand correctly, schema support will\neventually do away with this problem because we could keep only one\npool of connections to the *one* database.\n\n\tOleg\n\nOn Tue, 18 Dec 2001, Don Baccus wrote:\n\n> Bruce Momjian wrote:\n>\n>\n> > Yes, that is assuming you are using PHP. If you are using something\n> > else, you connection pooling in there too. All those client interfaces\n> > reimplementing connection pooling seems like a waste to me.\n>\n>\n> Effective pooling's pretty specific to your environment, though, so any\n> general mechanism would have to provide a wide-ranging suite of\n> parameters governing the number to pool, how long each handle should\n> live, what to do if a handle's released by a client while in the midst\n> of a transaction (AOLserver rolls back the transaction, other clients\n> might want to do something else, i.e. fire a callback or the like), etc etc.\n>\n> I think it would be fairly complex and for those high-throughput\n> applications already written with client-side pooling no improvement.\n>\n> And those are the only applications that need it.\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 18 Dec 2001 20:05:26 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling, a year later"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> \n> \n> > \n> > The trick for that is to call COMMIT before you pass the backend to a\n> > new person.\n> \n> \n> The failure to COMMIT is a programmer error - ROLLBACK's much safer. At \n> least that's what we decided in the AOLserver community, and that's \n> what the drivers for Oracle and PG (the two I maintain) implement.\n\n\nThen you can issue a \"BEGIN;ROLLBACK;\" when you pass the session to the\nnext user, and \"RESET ALL;\" of course.\n\n> > Now, if you want to abort a left-over transaction, you can\n> > do an ABORT but that is going to show up in the server logs because an\n> > ABORT without a transaction causes an error message.\n> \n> \n> The connection pooling mechanism needs to track the transaction state \n> and only ROLLBACK a handle that's not in autocommit state or in the \n> midst of a BEGIN/END transaction (again, Oracle vs. PG)..\n\nSeems like a lot of work to keep track of transaction state in the\nclient; seems easier to just unconditionally issue the begin;rollback.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Dec 2001 14:56:31 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling, a year later"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n\n> Seems like a lot of work to keep track of transaction state in the\n> client; seems easier to just unconditionally issue the begin;rollback.\n\n\nWell, in the Oracle and PG drivers for AOLserver it wasn't, but then \nagain application code in that environment doesn't call libpq directly \nbut through an abstraction layer that works with all DBs (the layer \ndoes, the query strings obviously sometimes don't!). The db primitives \nthen call an RDBMS-specific driver, which can call thread-safe RDBMS \nclient libraries directly or call an external driver (possibly the \nexternal ODBC driver) for RDBMS's without a thread-safe client library.\n\nSo we can track things easily. Along with other things, for instance \nretrying queries in one backend after another backend has bombed out and \ngiven the nice little message saying \"another backend has closed, please \nretry your query\". Luckily it was pretty easy to kill PG 6.5 so I could \ntest and debug this feature...\n\nI suspect that major applications that support multiple RDBMS's take a \nsomewhat similar approach. In the context of providing an abstract \ndatabase API for one's client code, adding persistent connection pooling \nseems pretty minor.\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Tue, 18 Dec 2001 13:01:09 -0800",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling, a year later"
},
{
"msg_contents": "> I suspect that major applications that support multiple RDBMS's take a \n> somewhat similar approach. In the context of providing an abstract \n> database API for one's client code, adding persistent connection pooling \n> seems pretty minor.\n\nYes, with that abstraction layer, it is quite easy.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Dec 2001 16:03:30 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling, a year later"
},
{
"msg_contents": "On Tue, 2001-12-18 at 13:46, Michael Owens wrote:\n> \n> By having the postmaster map multiple clients to a fixed number of backends, \n> you achieve the happy medium: You never exceed the ideal number of active \n> backends, and at the same time you are not limited to only accepting a fixed \n> number of connections. Accepting connections can now be based on load \n> (however you wish to define it), not number. You now make decisions based on \n> utlization.\n> \n> If it were shown that even half of a backend's life consisted of idle time, \n> leasing out that idle time to another active connection would potentially \n> double the average number of simultaneous requests without (theoretically) \n> incurring any significant degradation in performance.\n> \n\nHave you looked at the client-side connection pooling solutions out\nthere?\n\nDBBalancer ( http://dbbalancer.sourceforge.net/ ) tries to sit very\ntransparently between your application and PostgreSQL, letting you\nimplement connection pooling with almost no application changes.\n\nThere was another one I came across too, but that one requires you to\nmake more wide-reaching changes to the application.\n\nIn my applications I have found DBBalancer to be roughly the same level\nof performance as PHP persistent connections, but a lot fewer\nconnections are needed in the pool because they are only needed when\nApache is delivering dynamic content - not the associated static\nstylesheets and images.\n\nRegards,\n\t\t\t\t\tAndrew.\n-- \n--------------------------------------------------------------------\nAndrew @ Catalyst .Net.NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)916-7201 MOB: +64(21)635-694 OFFICE: +64(4)499-2267\n Are you enrolled at http://schoolreunions.co.nz/ yet?\n\n",
"msg_date": "19 Dec 2001 18:30:34 +1300",
"msg_from": "Andrew McMillan <andrew@catalyst.net.nz>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling, a year later"
},
{
"msg_contents": "As long as each client's call is composed of a standalone transaction, there \nis no problem with external connection pools. But what about when a client's \ntransaction spans two or more calls, such as SELECT FOR UPDATE? Then pooling \nis not safe: it offers no assurance of what may be interjected into an open \ntransaction between calls. For example, each is a separate call to a shared \nconnection:\n\nClient A: BEGIN WORK; SELECT last_name from customer for update where <X>;\n\nClient B: BEGIN WORK; SELECT street from customer for update where <Y>;\n\nClient A: update customer set lastname=<modified value> where <X>; COMMIT \nWORK;\n\n\nNow, isn't Client B's write lock gone with Client A's commit? Yet Client A's \nlock is still hanging around. While Client B's commit will close it, Client B \nhas lost the assurance of its lock, defeating the purpose of SELECT FOR \nUPDATE.\n\nIf this is correct, then external connection pools limit what you can do with \nthe database to a single call. Any transaction spanning more than one call is \nunsafe, because it is not isolated from other clients sharing the same \nconnection.\n\n\n\nOn Tuesday 18 December 2001 11:30 pm, Andrew McMillan wrote:\n> On Tue, 2001-12-18 at 13:46, Michael Owens wrote:\n> > By having the postmaster map multiple clients to a fixed number of\n> > backends, you achieve the happy medium: You never exceed the ideal number\n> > of active backends, and at the same time you are not limited to only\n> > accepting a fixed number of connections. Accepting connections can now be\n> > based on load (however you wish to define it), not number. You now make\n> > decisions based on utlization.\n> >\n> > If it were shown that even half of a backend's life consisted of idle\n> > time, leasing out that idle time to another active connection would\n> > potentially double the average number of simultaneous requests without\n> > (theoretically) incurring any significant degradation in performance.\n>\n> Have you looked at the client-side connection pooling solutions out\n> there?\n>\n> DBBalancer ( http://dbbalancer.sourceforge.net/ ) tries to sit very\n> transparently between your application and PostgreSQL, letting you\n> implement connection pooling with almost no application changes.\n>\n> There was another one I came across too, but that one requires you to\n> make more wide-reaching changes to the application.\n>\n> In my applications I have found DBBalancer to be roughly the same level\n> of performance as PHP persistent connections, but a lot fewer\n> connections are needed in the pool because they are only needed when\n> Apache is delivering dynamic content - not the associated static\n> stylesheets and images.\n>\n> Regards,\n> \t\t\t\t\tAndrew.\n",
"msg_date": "Wed, 19 Dec 2001 12:22:58 -0600",
"msg_from": "Michael Owens <owensmk@earthlink.net>",
"msg_from_op": true,
"msg_subject": "Re: Connection Pooling, a year later"
},
{
"msg_contents": "Michael Owens wrote:\n\n> As long as each client's call is composed of a standalone transaction, there \n> is no problem with external connection pools. But what about when a client's \n> transactions spans two or more calls, such as SELECT FOR UPDATE? Then pooling \n> is not safe: it offers no assurance of what may be interjected into an open \n> transaction between calls. For example, each is a separate call to a shared \n> connection:\n> \n> Client A: BEGIN WORK; SELECT last_name from customer for update where <X>;\n> \n> Client B: BEGIN WORK; SELECT street from customer for update where <Y>;\n> \n> Client A: update customer set lastname=<modified value> where <X>; COMMIT \n> WORK;\n> \n> \n> Now, isn't Client B's write lock gone with Client A's commit? Yet Client A's \n> lock is still hanging around. While Client B's commit will close it, Client B \n> has lost the assurance of its lock, defeating the purpose of SELECT FOR \n> UPDATE.\n> \n> If this is corrent, then external connection pools limit what you can do with \n> the database to a single call. Any transaction spanning more than one call is \n> unsafe, because it is not isolated from other clients sharing the same \n> connection.\n\n\nThe general idea is that you grab a handle and hold onto it until you're \ndone. This makes the above scenario impossible.\n\nForgetting to commit or rollback before relinquishing the handle is \nanother scenario that can lead to problems but that's already been \ndiscussed in detail.\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Wed, 19 Dec 2001 11:04:25 -0800",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling, a year later"
},
{
"msg_contents": "On Wednesday 19 December 2001 01:04 pm, Don Baccus wrote:\n\n\n> The general idea is that you grab a handle and hold onto it until you're\n> done. This makes the above scenario impossible.\n>\n> Forgetting to commit or rollback before relenquishing the handle is\n> another scenario that can lead to problems but that's already been\n> discussed in detail.\n\nBut then the shared connection is unshared, sitting idle while the client \nworks in between calls, thus introducing idle time among a fixed number of \nconnections. The server is doing less than it could.\n\nI agree that this connection pool has improved things in eliminating backend \nstartup time. But idle time still exists for the clients performing multiple \ncalls, proportional to the product of the number of multiple-call clients and \nthe number of calls they make, plus the idle time between them.\n\nHowever this probably only ever happens on update. Inserts and selects can be \ndone in one call. And, I suppose updates comprise only a small fraction of \nthe requests sent to the database. Even then, you can probably eliminate some \nmultiple calls by using things such as procedures.\n\nFactoring all that in, you can probably do as well by optimizing your \nparticular database/application as by writing code.\n\nI relent. Thanks for your thoughts.\n",
"msg_date": "Wed, 19 Dec 2001 14:28:14 -0600",
"msg_from": "Michael Owens <owensmk@earthlink.net>",
"msg_from_op": true,
"msg_subject": "Re: Connection Pooling, a year later"
},
{
"msg_contents": "On Thu, 2001-12-20 at 07:22, Michael Owens wrote:\n> As long as each client's call is composed of a standalone transaction, there \n> is no problem with external connection pools. But what about when a client's \n> transactions spans two or more calls, such as SELECT FOR UPDATE? Then pooling \n> is not safe: it offers no assurance of what may be interjected into an open \n> transaction between calls. For example, each is a separate call to a shared \n> connection:\n> \n> Client A: BEGIN WORK; SELECT last_name from customer for update where <X>;\n> \n> Client B: BEGIN WORK; SELECT street from customer for update where <Y>;\n> \n> Client A: update customer set lastname=<modified value> where <X>; COMMIT \n> WORK;\n> \n> \n> Now, isn't Client B's write lock gone with Client A's commit? Yet Client A's \n> lock is still hanging around. While Client B's commit will close it, Client B \n> has lost the assurance of its lock, defeating the purpose of SELECT FOR \n> UPDATE.\n> \n> If this is corrent, then external connection pools limit what you can do with \n> the database to a single call. Any transaction spanning more than one call is \n> unsafe, because it is not isolated from other clients sharing the same \n> connection.\n\nOh, I see. You are absolutely correct that client-side pooling wouldn't\nwork in that situation of course.\n\nAs an application developer nobody has forced me into such a corner yet,\nhowever. Long running transactions are something I avoid like the\nplague.\n\nCheers,\n\t\t\t\t\tAndrew.\n-- \n--------------------------------------------------------------------\nAndrew @ Catalyst .Net.NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)916-7201 MOB: +64(21)635-694 OFFICE: +64(4)499-2267\n Are you enrolled at http://schoolreunions.co.nz/ yet?\n\n",
"msg_date": "20 Dec 2001 13:52:28 +1300",
"msg_from": "Andrew McMillan <andrew@catalyst.net.nz>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling, a year later"
}
] |
[
{
"msg_contents": "I have observed a nasty three-way deadlock condition.\n\nThis proc is trying to generate a new transaction ID, and has hit the one\ncase in every 32K where a new page must be added to the CLOG. That\nmeans that an XLOG record must be written to record the creation of the\nnew CLOG page:\n\nUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\ntgl 1135 0.0 3.4 41012 8812 pts/2 SN 19:54 0:00 postgres: tgl bench [local] idle\n\n#0 0x401d63b2 in semop (semid=1474560, sops=0xbfffcd20, nsops=1) at ../sysdeps/unix/sysv/linux/semop.c:36\n#1 0x0811ccab in IpcSemaphoreLock (semId=1474560, sem=4, interruptOK=0 '\\000') at ipc.c:422\n#2 0x0812332f in LWLockAcquire (lockid=WALInsertLock, mode=LW_EXCLUSIVE) at lwlock.c:271\n#3 0x08091d90 in XLogInsert (rmid=3 '\\003', info=0 '\\000', rdata=0xbfffef10) at xlog.c:644\n#4 0x08090237 in WriteZeroPageXlogRec (pageno=2) at clog.c:962\n#5 0x0808f7e0 in ZeroCLOGPage (pageno=2, writeXlog=1 '\\001') at clog.c:357\nThis proc is holding CLogControlLock, LW_EXCLUSIVE:\n#6 0x0808ff50 in ExtendCLOG (newestXact=65536) at clog.c:778\nThis proc is holding XidGenLock, LW_EXCLUSIVE:\n#7 0x08090590 in GetNewTransactionId () at varsup.c:58\n#8 0x08090d77 in StartTransaction () at xact.c:863\n#9 0x080910f9 in StartTransactionCommand () at xact.c:1156\n#10 0x08126753 in pg_exec_query_string (query_string=0x8271410 \"begin\", dest=Remote, parse_context=0x8247adc) at postgres.c:603\n#11 0x081278da in PostgresMain (argc=4, argv=0xbffff1c0, username=0x822dce9 \"tgl\") at postgres.c:1849\n\nThe first proc is waiting for the second, who already holds WALInsertLock.\nThe second proc is trying to make the first XLOG entry of his transaction.\nTherefore he needs to set MyProc->logRec, which presently requires him\nto obtain SInvalLock:\n\nUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\ntgl 1196 0.0 3.5 41028 8928 pts/2 SN 19:54 0:00 postgres: tgl bench [local] UPDATE\n\n#0 0x401d63b2 in semop (semid=1572867, sops=0xbfffcb50, nsops=1) 
at ../sysdeps/unix/sysv/linux/semop.c:36\n#1 0x0811ccab in IpcSemaphoreLock (semId=1572867, sem=15, interruptOK=0 '\\000') at ipc.c:422\n#2 0x0812332f in LWLockAcquire (lockid=SInvalLock, mode=LW_EXCLUSIVE) at lwlock.c:271\nThis proc is holding WALInsertLock, LW_EXCLUSIVE:\n#3 0x0809222f in XLogInsert (rmid=10 '\\n', info=40 '(', rdata=0xbfffed50) at xlog.c:747\n#4 0x08079f4e in log_heap_update (reln=0x425c7fe0, oldbuf=238, from={ip_blkid = {bi_hi = 1, bi_lo = 5639}, ip_posid = 16}, \n newbuf=3307, newtup=0x82865e8, move=0 '\\000') at heapam.c:1931\n#5 0x0807948f in heap_update (relation=0x425c7fe0, otid=0xbfffef10, newtup=0x82865e8, ctid=0xbfffee80) at heapam.c:1565\n#6 0x080d6216 in ExecReplace (slot=0x827a9ec, tupleid=0xbfffef10, estate=0x827ae38) at execMain.c:1454\n#7 0x080d5f1d in ExecutePlan (estate=0x827ae38, plan=0x827ad90, operation=CMD_UPDATE, numberTuples=0, \n direction=ForwardScanDirection, destfunc=0x827b6e4) at execMain.c:1129\n#8 0x080d5260 in ExecutorRun (queryDesc=0x827ae1c, estate=0x827ae38, feature=3, count=0) at execMain.c:233\n#9 0x08127e13 in ProcessQuery (parsetree=0x8272148, plan=0x827ad90, dest=Remote) at pquery.c:293\n#10 0x08126942 in pg_exec_query_string (\n query_string=0x8271d90 \"update accounts set abalance = abalance + 735 where aid = 4270516\\n\", dest=Remote, \n parse_context=0x824845c) at postgres.c:781\n#11 0x081278da in PostgresMain (argc=4, argv=0xbffff1c0, username=0x822dce9 \"tgl\") at postgres.c:1849\n\nAnd this proc is trying to obtain XidGenLock while already holding\nSInvalLock:\n\nUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\ntgl 1138 0.0 3.5 41020 8936 pts/2 SN 19:54 0:00 postgres: tgl bench [local] idle in transaction\n\n#0 0x401d63b2 in semop (semid=1474560, sops=0xbfffef00, nsops=1) at ../sysdeps/unix/sysv/linux/semop.c:36\n#1 0x0811ccab in IpcSemaphoreLock (semId=1474560, sem=7, interruptOK=0 '\\000') at ipc.c:422\n#2 0x0812332f in LWLockAcquire (lockid=XidGenLock, mode=LW_SHARED) at lwlock.c:271\n#3 
0x080905d4 in ReadNewTransactionId () at varsup.c:103\nThis proc is holding SInvalLock, LW_SHARED:\n#4 0x0811e2ae in GetSnapshotData (serializable=0 '\\000') at sinval.c:359\n#5 0x081767ce in SetQuerySnapshot () at tqual.c:752\n#6 0x081268f9 in pg_exec_query_string (\n query_string=0x8271458 \"insert into history(tid,bid,aid,delta,mtime) values(336,81,9860149,356,'now')\", dest=Remote, \n parse_context=0x8247b0c) at postgres.c:764\n#7 0x081278da in PostgresMain (argc=4, argv=0xbffff1c0, username=0x822dce9 \"tgl\") at postgres.c:1849\n\nUnfortunately the first proc is holding XidGenLock, ergo deadlock.\n\nI don't think we have any room to wiggle in terms of the locking\nsequence of the first proc (see comments in GetNewTransactionId),\nnor of the third (see comments in GetSnapshotData). That means\nthe only way to resolve the deadlock is to not grab SInvalLock\nwhile holding the WALInsertLock in XLogInsert.\n\nI believe this is actually safe, because the only code that looks at the\nlogRec fields of other backends' PROC structures is GetUndoRecPtr,\nwhich is only called while holding WALInsertLock in CreateCheckPoint.\nTherefore, we could re-document proc->logRec as being protected by\nWALInsertLock not SInvalLock and not have to get SInvalLock in\nXLogInsert.\n\nHowever, there's still a problem: GetUndoRecPtr also gets SInvalLock\nwhile its caller holds WALInsertLock, and therefore this routine\ncould create the second leg of the deadlock too. Removing the\nSInvalLock lock there creates the problem that backends might be\nadded to or deleted from the PROC array while GetUndoRecPtr runs.\nI think it might be possible to survive that, by adding an assumption\nthat logRec.xrecoff can be set to zero atomically, but it seems tricky.\n\nComments? Anyone see a better approach?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Dec 2001 22:29:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Deadlock condition in current sources"
}
] |
[
{
"msg_contents": "I have just run 7.2b4 on FreeBSD/alpha.\n\nThe float8 and geometry tests failed.\n\nAttached is the regression stuff. I have submitted it to the database.\n\nI'm not certain how significant the results are.\n\nEspecially these ones:\n\n--- 241,249 ----\n INSERT INTO FLOAT8_TBL(f1) VALUES ('-10e400');\n ERROR: Input '-10e400' is out of range for float8\n INSERT INTO FLOAT8_TBL(f1) VALUES ('10e-400');\n+ ERROR: Input '10e-400' is out of range for float8\n INSERT INTO FLOAT8_TBL(f1) VALUES ('-10e-400');\n+ ERROR: Input '-10e-400' is out of range for float8\n -- maintain external table consistency across platforms\n -- delete all values and reinsert well-behaved ones\n DELETE FROM FLOAT8_TBL;\n\nChris",
"msg_date": "Tue, 18 Dec 2001 15:20:52 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "FreeBSD/alpha"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> I have just run 7.2b4 on FreeBSD/alpha.\n> The float8 and geometry tests failed.\n> I'm not certain how significant the results are.\n\nThe float8 difference looks like float8-small-is-zero is not the correct\ncomparison file for that platform. Looking in resultmap I see\n\nfloat8/.*-freebsd=float8-small-is-zero\nfloat8/i.86-.*-openbsd=float8-small-is-zero\nfloat8/i.86-.*-netbsd=float8-small-is-zero\n\nI generally think it suspicious when one of the BSD ports varies from\nthe other two, and here it would seem that it's wrong for freebsd to\nbe out of step. I propose making the entry read\n\nfloat8/i.86-.*-freebsd=float8-small-is-zero\n\nAny comments from freebsd users out there? Are there any other freebsd\nplatforms besides i86 and alpha? If so, how do they do on this test?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Dec 2001 10:27:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FreeBSD/alpha "
},
{
"msg_contents": "On Tue, Dec 18, 2001 at 10:27:08AM -0500, Tom Lane wrote:\n> I generally think it suspicious when one of the BSD ports varies from\n> the other two, and here it would seem that it's wrong for freebsd to\n> be out of step. I propose making the entry read\n> \n> float8/i.86-.*-freebsd=float8-small-is-zero\n> \n> Any comments from freebsd users out there? Are there any other freebsd\n> platforms besides i86 and alpha? If so, how do they do on this test?\n\nThere is preliminary work on a sparc port but it is not in usable\nshape as far as I am aware.\n\n-- \nDavid Terrell | \"To increase the hype, I'm gonna release a bunch\nNebcorp PM | of BLT variants (NetBLT, FreeBLT, BLT386, etc)\ndbt@meat.net | and create artificial rivalries.\"\nwwn.nebcorp.com | - Brian Swetland (www.openblt.org)\n",
"msg_date": "Tue, 18 Dec 2001 11:40:48 -0800",
"msg_from": "David Terrell <dbt@meat.net>",
"msg_from_op": false,
"msg_subject": "Re: FreeBSD/alpha"
},
{
"msg_contents": "> I generally think it suspicious when one of the BSD ports varies from\n> the other two, and here it would seem that it's wrong for freebsd to\n> be out of step. I propose making the entry read\n>\n> float8/i.86-.*-freebsd=float8-small-is-zero\n>\n> Any comments from freebsd users out there? Are there any other freebsd\n> platforms besides i86 and alpha? If so, how do they do on this test?\n\nFreeBSD is i386 and Alpha only ATM. I've submitted reports for both. Could\nyou please tell me where I can make the modification you suggest above to\ntest it?\n\nChris\n\n",
"msg_date": "Wed, 19 Dec 2001 10:48:52 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: FreeBSD/alpha "
},
{
"msg_contents": "> FreeBSD is i386 and Alpha only ATM. I've submitted reports for \n> both. Could\n> you please tell me where I can make the modification you suggest above to\n> test it?\n\nOK, I made the change myself - now only the geometry test doesn't pass.\n\nChris",
"msg_date": "Wed, 19 Dec 2001 11:10:46 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: FreeBSD/alpha "
},
{
"msg_contents": "> > FreeBSD is i386 and Alpha only ATM...\n> OK, I made the change myself - now only the geometry test doesn't pass.\n\nGot it. Thanks for the new platform!\n\n - Thomas\n",
"msg_date": "Wed, 19 Dec 2001 08:00:29 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: FreeBSD/alpha"
},
{
"msg_contents": "> > > FreeBSD is i386 and Alpha only ATM...\n> > OK, I made the change myself - now only the geometry test doesn't pass.\n>\n> Got it. Thanks for the new platform!\n\nGot what? The change hasn't been committed, and I'd like to correct the\ngeometry problem as well?\n\nChris\n\n",
"msg_date": "Wed, 19 Dec 2001 16:09:18 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: FreeBSD/alpha"
},
{
"msg_contents": "> Got what? The change hasn't been committed, and I'd like to correct the\n> geometry problem as well?\n\nSorry for being unclear. I've followed your reports, and even without\nthe patch you have demonstrated correct performance under FreeBSD/alpha.\n\nIf your patch does not get committed by someone else (and if it does not\naffect other platforms) then I'll help commit it if you would like.\n\nbtw, small geometry differences are an ongoing \"feature\", since the\ntranscendental functions do not return *exactly* the same results for\nall platforms. There may be an alternate geometry file which matches\nexactly, or you can submit one yourself, but we need to figure out what\nclass of machine your results would fall into if the geometry test\nresults are not unique to your platform.\n\n - Thomas\n",
"msg_date": "Wed, 19 Dec 2001 15:48:43 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: FreeBSD/alpha"
}
] |
[
{
"msg_contents": "Hi,\n\nI am new to Postgres. I have a problem. I want to\nschedule some jobs at a particular time on a\nparticular day of a week. To make it more clear, my\nrequirement is to send mails to people based on the\navailability of data in some fields once every\nweek. How do I go about doing this ???\n\nCan any one help me on this??\n\nRegards,\n\nJay Oorath\n\nBangalore\n\n\n\n__________________________________________________\nDo You Yahoo!?\nCheck out Yahoo! Shopping and Yahoo! Auctions for all of\nyour unique holiday gifts! Buy at http://shopping.yahoo.com\nor bid at http://auctions.yahoo.com\n",
"msg_date": "Tue, 18 Dec 2001 02:38:15 -0800 (PST)",
"msg_from": "Jayaraj Oorath <jayoorath@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Scheduling Jobs in Postgres"
},
{
"msg_contents": "What you want first is a script or program to do the job\nthat you are trying to do, I would write it in Perl but \nPython, C or even shell is also fine.\n\nthe program should be able to do the job without output \nto the screen (stdout or stderr) UNLESS something does\nnot work (and even then it is best if it sends mail on\nerror as well, to you).\n\nthen just have the program above run by cron.\n\nthis is not a postgresql answer, the same would be true\nfor any database or other system on unix (I have assumed\nthat you are not running on Windows).\n\nOn 18 Dec, Jayaraj Oorath wrote:\n> Hi,\n> \n> I am a new to Postgres. I have a problem. I want to\n> schedule some jobs at a particular time on a\n> particular day of a week. To make it more clear, my\n> requirement is to send mails to people based on the\n> availability of data in some fields once in every\n> week. How do I go about doing this ???\n> \n> Can any one help me on this??\n> \n> Regards,\n> \n> Jay Oorath\n> \n> Bangalore\n> \n> \n> \n> __________________________________________________\n> Do You Yahoo!?\n> Check out Yahoo! Shopping and Yahoo! Auctions for all of\n> your unique holiday gifts! Buy at http://shopping.yahoo.com\n> or bid at http://auctions.yahoo.com\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \nIt is MDT, Inc's policy to delete mail containing unsolicited file attachments.\nPlease be sure to contact the MDT staff member BEFORE sending an e-mail with\nany file attachments; they will be able to arrange for the files to be received.\n\nThis email, and any files transmitted with it, is confidential and intended\nsolely for the use of the individual or entity to whom they are addressed.\nIf you have received this email in error, please advise postmaster@mdtsoft.com\n<mailto:postmaster@mdtsoft.com>.\n\nPhilip W. Dalrymple III <pwd@mdtsoft.com>\nMDT Software - The Change Management Company\n+1 678 297 1001\nFax +1 678 297 1003\n\n\n",
"msg_date": "Tue, 18 Dec 2001 06:32:43 -0500 (EST)",
"msg_from": "pwd@mdtsoft.com",
"msg_from_op": false,
"msg_subject": "Re: Scheduling Jobs in Postgres"
},
{
"msg_contents": "On Tue, 18 Dec 2001, Jayaraj Oorath wrote:\n\n> Hi,\n> \n> I am a new to Postgres. I have a problem. I want to\n> schedule some jobs at a particular time on a\n\ncron is your friend.\n\nA database is not the right place to do such a thing.\n\nGavin\n\n",
"msg_date": "Wed, 19 Dec 2001 10:30:29 +1100 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Scheduling Jobs in Postgres"
}
] |
[
{
"msg_contents": "I haven't tested with the new 7.2 betas, but here are some results from\n7.1.\n\nWe have a development computer, IBM x series 250, with 4 processors\n(PIII Xeon 750MHz), 1 Gb memory and 2 SCSI disks (u160).\n\nThe software is writing new rows to a table, and after this it reads the\nid from that row. There are currently about 50 connections doing the\nsame thing.\n\nWhen I run this test with the Red Hat 7.1 SMP kernel, I noticed that the\nprocessors are more than 90% idle. Disk utilisation is not the\nbottleneck either, since there is very low disk usage. Some data is\nwritten to disks every 4-5 seconds. Fsync is turned off. In transactions,\nthis means about 200 inserted rows per second. The software that is used\nto give the feed is capable of several thousand rows per second.\n\nOkay, so I tried this also with the same computer, but using the non-SMP\nkernel. So only with one processor. The result was about 600\nrows per second. The configuration file was unchanged. Now, the\nprocessor is about 100% utilized.\n\nI didn't find any parameters that should help in this, but if you have a\nversion of 7.2 that you would like to get information about, let me\nknow, so I'll test.\n\nJussi\n\n\n\n",
"msg_date": "Tue, 18 Dec 2001 12:54:13 +0200",
"msg_from": "Jussi Mikkola <jussi.mikkola@bonware.com>",
"msg_from_op": true,
"msg_subject": "Re: 7.2 is slow?"
}
] |
[
{
"msg_contents": "> \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> > Here is a patch per above thread.\n> \n> Why did you change libpgtcl's build process? AFAIK no one was claiming\n> that was broken.\n\nIt was definitely broken on AIX, and was also the original complaint.\nSince the make stopped there, the next failure in pl/tcl was not reported.\n\n> This does not seem the right time to be making undiscussed changes in\n> Makefile.shlib, either. What's with that?\n\nI was presuming that the patch would not, at this stage, be applied without further \nreview and testing on some other port.\n\nIt is not a really big problem, but for some targets \"make clean\" fails to\nclean libxxx.2.0 (depending on what $(shlib) is on that port) and the \nlibxxx.exp file without the patch.\nThe patch brings \"clean-lib\" in line with \"uninstall-lib\".\n\nUnfortunately it fails to clean xxx.dll on Windows, very sorry :-(\nI thought I triple-checked. Incremental patch attached.\n\nAndreas",
"msg_date": "Tue, 18 Dec 2001 12:26:06 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Problem compiling postgres sql --with-tcl "
}
] |
[
{
"msg_contents": "I feel there was a reasonably nice client-side attempt at this using a\nworker pool model or something. Can't seem to track it down at this moment.\nAlso would spread queries in different ways to get a hot backup equivalent\netc. It was slick.\n\nThe key is that pgsql be able to support a very significant number of\ntransactions. Be neat to see some numbers on your attempt.\n\nSite I used to run had 6 front end webservers running PHP apps. Each\npersistent connection (a requirement to avoid overhead of set-up/teardowns)\nlived as long as the httpd process lived, even if idle. That meant at 250\nprocesses per server we had a good 1500 connections clicking over. Our\nfeeling was that rather than growing to 3,000 connections as the frontend\ngrew, why not pool those connections off each machine down to perhaps\n75/machine worker threads that actually did the work.\n\nLooks like that's not an issue if these backends suck up few resources.\nDoing something similar with MySQL we'd experience problems if we got into\nthe 2,000 connection range. (kernel/system limits bumped plenty high).\n\nWhile we are on TODO's I would like to point out that some way to fully\nvacuum (i.e. recover deleted and changed tuples) while a db is in full swing is\ncritical to larger installations. We did 2 billion queries between reboots on\na quad Xeon MySQL box, and those are real user-based queries not data loads\nor anything like that. At 750-1000 queries/second bringing the database down\nor seriously degrading its performance is not a good option.\n\nEnjoy playing with pgsql as always....\n\n- AZ\n\n",
"msg_date": "Tue, 18 Dec 2001 05:00:57 -0800",
"msg_from": "\"August Zajonc\" <ml@augustz.com>",
"msg_from_op": true,
"msg_subject": "Connection Pooling, a year later"
}
] |
[
{
"msg_contents": "I know I have expressed these concerns before but lost the argument, or\nat least no one rallied to my position, but I feel I have to mention\nthese again because they came up during beta.\n\nMy first concern is that VACUUM now defaults to the non-locking version.\nWhile I appreciate the new non-locking code, I imagine a release where\nnon-locking VACUUM will happen automatically, making no need to run\nVACUUM. However, locking VACUUM will still need to be run by\nadministrators, so we may be left with a VACUUM no one needs to run but\na VACUUM FULL that does need to be run, leaving us with a useless\ndefault for VACUUM without the FULL keyword. Also, because VACUUM does\nnot store the freetuple list between postmaster restarts, nor does it\nhave any way of recording _all_ free tuples, it has to be run with a\ndifferent frequency than the old VACUUM that I assume most people ran at\nnight. I would have preferred to leave VACUUM as locking VACUUM and\ncreate a new lighter option to the VACUUM command, and if light vacuum\nlater becomes automatic, the option can just go away.\n\nSecond, I am concerned about the removal of oids from system tables. I\nrealize this was done to prevent oid usage, particularly by the creation\nof temp tables, but such savings only make sense when oids are turned\noff in postgresql.conf. I imagine future releases where we have a\nseparate oid counter for system/user tables, or 8-byte oids, in which\ncase the removal of oids from system tables may be needless. We have\nseen that OpenACS is broken now because the new pg_description requires\nseparate classoid/objsubid columns to uniquely access tables without\noids, like pg_attribute. Object dependency tracking, using\npg_depend, will also require these additional fields to track\ndependency, rather than just using the oid, and such access will be more\nconfusing.\n\nI realize the motivation for these changes was to make PostgreSQL more\nenterprise-ready, but I am concerned these changes may need to be\nmodified in future releases, causing confusion and porting problems for\nusers.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Dec 2001 09:43:27 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Concerns about this release"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I know I have expressed these concerns before but lost the argument,\n\nYes, you did, and it's way too late to bring them up again.\nParticularly the OID issue; do you seriously propose an initdb\nat this stage to put back OIDs in the system tables?\n\nBut for the record:\n\nI think your argument about VACUUM misses the point. The reason FULL\nisn't the default is that we want the default form to be the one people\nmost want to use. If lightweight VACUUM starts to be run automatically\nin some future release, FULL might at that time become the default.\nI don't see anything wrong with changing the default behavior of the\ncommand whenever the system's other behavior changes enough to alter the\n\"typical\" usage of the command.\n\nAs for pg_description, the change in primary key is unfortunate but\n*necessary*. I don't foresee us reversing it. The consensus view as\nI recall it was that we wanted to go over to a separate OID generator\nper table in some future release, which fits right in with the new\nstructure of pg_description, but is entirely unworkable with the old.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Dec 2001 10:53:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release "
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> I know I have expressed these concerns before but lost the argument, or\n> at least no one rallied to my position, but I feel I have to mention\n> these again because they came up during beta.\n> \n> My first concern is that VACUUM now defaults to the non-locking version.\n> While I appreciate the new non-locking code, I imagine a release where\n> non-locking VACUUM will happen automatically, making no need to run\n> VACUUM. However, locking VACUUM will still need to be run by\n> administrators, so we may be left with a VACUUM no one needs to run but\n> a VACUUM FULL that does need to be run, leaving us with a useless\n> default for VACUUM without the FULL keyword. Also, because VACUUM does\n> not store the freetuple list between postmaster restarts, nor does it\n> have any way of recording _all_ free tuples, it has to be run with a\n> different frequency than the old VACUUM that I assume most people ran at\n> night. I would have preferred to leave VACUUM as locking VACUUM and\n> create a new lighter option to the VACUUM command, and if light vacuum\n> later becomes automatic, the option can just go away.\n\nI kind of second your opinion here. I also have my doubts that the\ndefault is not as well tested as the option. Plus, aren't there some\nissues with the non-locking vacuum?\n\n> \n> Second, I am concerned about the removal of oids from system tables. I\n> realize this was done to prevent oid usage, particularly by the creation\n> of temp tables, but such savings only make sense when oids are turned\n> off in postgresql.conf. I imagine future releases where we have a\n> separate oid counter for system/user tables, or 8-bytes oids, in which\n> case the removal of oids from system tables may be needless. We have\n> seen OpenACS is broken now because the new pg_description requires a\n> separate classoid/objsubid columns to uniquely access tables without\n> oids, like pg_attribute. These new columns are used only for non-oid\n> tables, making access cumbersome, rather than the simpler oid lookup. I\n> don't even know if the current setup will allow other tables without\n> oids to be referenced cleanly. Object dependency tracking, using\n> pg_depend, will also require these additional fields to track\n> dependency, rather than just using the oid, and such access will be more\n> confusing.\n> \n> I realize the motivation for these changes were to make PostgreSQL more\n> enterprise-ready, but I am concerned these changes may need to be\n> modified in future releases, causing confusion and porting problems for\n> users.\n\nI don't understand why the oids were taken from the system tables.\nSurely there can't be so many that it is a real problem in practice. The\ndangerous issue is, of course, oid reuse in system tables.\n\nThe ability to remove OIDs from user tables is a big bonus. There are so\nmany times when you just want to store huge amounts of lookup data in a\nstatic table where there is no need for the overhead of an OID. This saves\ndisk space and avoids OID depletion (wrap around).\n\nAre all the PostgreSQL developers really, really, sure that all the new\nfeatures in 7.2 are ready for prime time?\n",
"msg_date": "Tue, 18 Dec 2001 10:58:11 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n\n> My first concern is that VACUUM now defaults to the non-locking version.\n> While I appreciate the new non-locking code, I imagine a release where\n> non-locking VACUUM will happen automatically, making no need to run\n> VACUUM. However, locking VACUUM will still need to be run by\n> administrators, so we may be left with a VACUUM no one needs to run but\n> a VACUUM FULL that does need to be run, leaving us with a useless\n> default for VACUUM without the FULL keyword. Also, because VACUUM does\n> not store the freetuple list between postmaster restarts, nor does it\n> have any way of recording _all_ free tuples, it has to be run with a\n> different frequency than the old VACUUM that I assume most people ran at\n> night. I would have preferred to leave VACUUM as locking VACUUM and\n> create a new lighter option to the VACUUM command, and if light vacuum\n> later becomes automatic, the option can just go away.\n\n\nI've kept my mouth shut about this mostly because I've been extremely \nbusy and because I've suspected my opinion is a minority one.\n\nIn general I think changing the behavior of familiar commands should be \navoided as much as possible. VACUUM has worked the same way \"since the \nbeginning\" and this seems like a gratuitous semantic change.\n\n(NOT the new code, but rather the fact that VACUUM now uses the new \ncode, and the fact that you need to do a VACUUM FULL to get the old).\n\nIt just seems unnecessary.\n\n\n> Second, I am concerned about the removal of oids from system tables. I\n> realize this was done to prevent oid usage, particularly by the creation\n> of temp tables, but such savings only make sense when oids are turned\n> off in postgresql.conf. I imagine future releases where we have a\n> separate oid counter for system/user tables, or 8-bytes oids, in which\n> case the removal of oids from system tables may be needless. 
We have\n> seen OpenACS is broken now because the new pg_description requires a\n> separate classoid/objsubid columns to uniquely access tables without\n> oids, like pg_attribute\n\n\nThis one doesn't bother me particularly. We were only caught by \nsurprise because those of us who follow PG developments haven't been \npaying as close attention as we have in the past. Mostly because in the \npast we've been desperate for new features and thus working with \"the \nnext PG beta\" version, thus tracking progress and changes in upcoming \nreleases has been central to managing our project. It's a tribute to \nall the hard work you guys have done in the past couple of years that \nwe've decided to support PG 7.1 as well as PG 7.2 (for those too chicken \nto change over).\n\nIn this case I just wrote a small PL/pgSQL function that queries \n\"version()\" and generates a view which works properly for the given \n7.1/7.2 version.\n\nI think that's reasonable when you're mucking around system tables. The \nnew function to grab the comment is cleaner, too.\n\nI'm not trying to undercut Bruce's overall argument, just pointing out \nthat as the OpenACS 4 project manager I have no strong feelings either \nway. Bruce, obviously, does and has thought about it a lot more than I \nhave.\n\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Tue, 18 Dec 2001 08:51:34 -0800",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> I kind of second your opinion here. I also have my doubts that the\n> default is not as well tested as the option.\n\nBy that logic, we could never make any new releases, or at least never\nadd any new code. \"New code isn't as well tested as old code\" is an\nunhelpful observation.\n\nFWIW, I trust lazy VACUUM a lot *more* than I trust the old VACUUM code.\nRead the tuple-chain-moving logic in vacuum.c sometime, and then tell me\nhow confident you feel in it. (My gut tells me that that logic is\nresponsible for the recent reports of duplicate tuples in 7.1.*, though\nI can't yet back this up with any evidence.)\n\n> Plus, aren't there some isses with the non-locking vacuum?\n\nSuch as?\n\n> Are all the PostgreSQL developers really, really, sure that all the new\n> features in 7.2 are ready for prime time?\n\nSee above. If you like, we'll push out the release date a few years.\nOf course, the code won't get any more ready for prime time just by\nsitting on it.\n\nI think that we've very nearly reached the point where further time in\nbeta phase isn't going to yield any new info. Only putting it out as\nan official release will draw enough new users to expose remaining bugs.\nWe've been through this same dynamic with every previous release; 7.2\nwon't be any different.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Dec 2001 11:52:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release "
},
{
"msg_contents": "At 10:58 AM 12/18/01 -0500, mlw wrote:\n>\n>Are all the PostgreSQL developers really, really, sure that all the new\n>features in 7.2 are ready for prime time?\n\nProbably needs more testing - given the recent performance stuff and some\nother issues.\n\nFor me there's no rush - vacuum isn't as big a pain to me as it is for\nother people, and 7.1.3 seems to work well enough for now ;).\n\n7.1 was significantly better overall than 7.0 which was better overall than\n6.5.3.\n\nI hope once 7.2 is ready it will be significantly better than 7.1. This\ntime it might not be better in everything, but hopefully it'll be\nconvincingly better.\n\nCheerio,\nLink.\n\n",
"msg_date": "Wed, 19 Dec 2001 00:53:08 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release"
},
{
"msg_contents": "Tom Lane wrote:\n\n> mlw <markw@mohawksoft.com> writes:\n> \n>>I kind of second your opinion here. I also have my doubts that the\n>>default is not as well tested as the option.\n>>\n> \n> By that logic, we could never make any new releases, or at least never\n> add any new code. \"New code isn't as well tested as old code\" is an\n> unhelpful observation.\n\n\nI'd switch production sites I control more quickly if I didn't have to \nrun around and change scripts galore to say \"VACUUM FULL\" rather than \n\"VACUUM\". I personally will let the new VACUUM code run on non-critical \nsites for a few months before using it on production sites.\n\nThe issue isn't new code, the issue is changing semantics for an old \ncommand when there is no need to do so.\n\nThat is a very different thing.\n\nOne such change is no big deal, but it's a bad product design philosophy \nin general.\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Tue, 18 Dec 2001 09:21:40 -0800",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > I kind of second your opinion here. I also have my doubts that the\n> > default is not as well tested as the option.\n> \n> By that logic, we could never make any new releases, or at least never\n> add any new code. \"New code isn't as well tested as old code\" is an\n> unhelpful observation.\n\nOh come on now. That isn't the point. One could, however, leave the\ndefault \"as is\" and make the new feature the option for at least one\nrelease cycle. That seems pretty sane without being overly conservative.\n\n> \n> FWIW, I trust lazy VACUUM a lot *more* than I trust the old VACUUM code.\n> Read the tuple-chain-moving logic in vacuum.c sometime, and then tell me\n> how confident you feel in it. (My gut tells me that that logic is\n> responsible for the recent reports of duplicate tuples in 7.1.*, though\n> I can't yet back this up with any evidence.)\n> \n> > Plus, aren't there some isses with the non-locking vacuum?\n> \n> Such as?\n\nI'm not sure, I have a vague recollection of some things (I thought the\nduplicate primary keys issue was on 7.2), but if you think it is good, then\nI'll take your word for it.\n\n> \n> > Are all the PostgreSQL developers really, really, sure that all the new\n> > features in 7.2 are ready for prime time?\n> \n> See above. If you like, we'll push out the release date a few years.\n> Of course, the code won't get any more ready for prime time just by\n> sitting on it.\n\n\nNo, that isn't my point. My point is that the changes in OIDs and the new\nvacuum code seem like a more drastic set of changes than in previous\nreleases. \n\nAgain, there is a time on every project when it is speak now or forever\nhold your peace. Bruce spoke, he raised some concerns, I had similar\nones. There can be no harm in doing a little retrospect. \n\n> \n> I think that we've very nearly reached the point where further time in\n> beta phase isn't going to yield any new info. 
Only putting it out as\n> an official release will draw enough new users to expose remaining bugs.\n> We've been through this same dynamic with every previous release; 7.2\n> won't be any different.\n\nI agree.\n",
"msg_date": "Tue, 18 Dec 2001 12:23:14 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n\n> Tom Lane wrote:\n> > \n> > mlw <markw@mohawksoft.com> writes:\n> > > I kind of second your opinion here. I also have my doubts that the\n> > > default is not as well tested as the option.\n> > \n> > By that logic, we could never make any new releases, or at least never\n> > add any new code. \"New code isn't as well tested as old code\" is an\n> > unhelpful observation.\n> \n> Oh come on now. That isn't the point. One could, however, leave the\n> default \"as is\" and make the new feature the option for at least one\n> release cycle. That seem pretty sane without being overly\n> conservative.\n\nActually I think that it makes a lot of sense to change the semantics\nof VACUUM. None of us here like the way that vacuum currently works.\nThe PostgreSQL hackers could have given the new VACUUM a different\nname, but a million email messages in the mailing list archives tell\nnewbies to VACUUM. That's why changing VACUUM so that it \"does what\nyou mean\" makes perfect sense. I am looking forward to being able to\nvacuum my database while the plant is still running.\n\nFor most people (especially the new PostgreSQL users) the new VACUUM\nis precisely what they want. They don't want to lock their tables\nwhile vacuuming. The fact that Tom trusts the new vacuum over the old\none should clinch the matter.\n\nThose of you that have 7.1 databases in production that don't mind the\ncurrent VACUUM implementation can simply wait until the new VACUUM\ngets put out in production and see how it goes. If you are happy with\nwhat you have now there is little pressure to upgrade. However,\nchanging the default makes it easier for new PostgreSQL users. It\nalso means that the PostgreSQL hackers can say that their database is\nready for 24x7 operation in \"the default\" mode.\n\nAdvertising is important too.\n\n> > FWIW, I trust lazy VACUUM a lot *more* than I trust the old VACUUM\n> > code. 
Read the tuple-chain-moving logic in vacuum.c sometime, and\n> > then tell me how confident you feel in it. (My gut tells me that\n> > that logic is responsible for the recent reports of duplicate\n> > tuples in 7.1.*, though I can't yet back this up with any\n> > evidence.)\n> > \n> > > Plus, aren't there some isses with the non-locking vacuum?\n> > \n> > Such as?\n> \n> I'm not sure, I have vague recollection of some things (I thought the\n> duplicate primary keys was on 7.2), but if you think it is good, then\n> I'll take your word for it.\n> \n> > \n> > > Are all the PostgreSQL developers really, really, sure that all the new\n> > > features in 7.2 are ready for prime time?\n> > \n> > See above. If you like, we'll push out the release date a few years.\n> > Of course, the code won't get any more ready for prime time just by\n> > sitting on it.\n> \n> \n> No, that isn't my point. My point is the changes in OIDs and the new\n> vacuum code seem like a more drastic set of changes than previous\n> releases.\n\nI think that you are forgetting some of PostgreSQL's past changes.\nTake a look at the changelog and you will realize that removing OIDs\nand changing the semantics of VACUUM are peanuts compared to 7.1's\ntransaction log, TOAST, Outer Joins, the new function manager, etc.\n\nThe developers have been warning against using OIDs forever, and you\ncan still leave them on if you need them. And the new vacuum is a\nvast improvement for those of us that use our databases 24x7. I see\nthis as a bug fix of the old procedure with the option of using the\nold implementation if you really need it.\n\n> Again, there is a time on every project when it is speak now or\n> forever hold your peace. Bruce spoke, he raised some concerns, I had\n> similar ones. There can be no harm in doing a little retrospect.\n\nThat's certainly true. I suppose the primary difference between my\nattitude and yours and Bruce's is that I can hardly wait for the\nnew features in 7.2 :). Likewise you don't remember the massive\nchanges in 7.1 because you *needed* them. Now you are happy with 7.1\nand don't want to make radical changes :).\n\n> > I think that we've very nearly reached the point where further time in\n> > beta phase isn't going to yield any new info. Only putting it out as\n> > an official release will draw enough new users to expose remaining bugs.\n> > We've been through this same dynamic with every previous release; 7.2\n> > won't be any different.\n> \n> I agree.\n\nBring it on! I have good backups :).\n\nJason\n",
"msg_date": "18 Dec 2001 11:53:04 -0700",
"msg_from": "Jason Earl <jason.earl@simplot.com>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > I kind of second your opinion here. I also have my doubts that the\n> > default is not as well tested as the option.\n> \n> By that logic, we could never make any new releases, or at least never\n> add any new code. \"New code isn't as well tested as old code\" is an\n> unhelpful observation.\n> \n> FWIW, I trust lazy VACUUM a lot *more* than I trust the old VACUUM code.\n> Read the tuple-chain-moving logic in vacuum.c sometime, and then tell me\n> how confident you feel in it. (My gut tells me that that logic is\n> responsible for the recent reports of duplicate tuples in 7.1.*, though\n> I can't yet back this up with any evidence.)\n\nFor all the various bugs which have been in PG over the years, the\nrecent crop of \"duplicate tuples\" is the absolute scariest. Can a\nrelease really be made without knowing precisely the cause of those\ncorruptions? The various theories offered by the posters (SMP\nmachine, CREATE INDEX in pl/pgsql functions, etc.) aren't comforting\neither. To me, all other feature enhancements pale in comparison to\npinning down this bug.\n\nJust my opinion, \n\nMike Mascari\nmascarm@mascari.com\n",
"msg_date": "Tue, 18 Dec 2001 14:13:15 -0500",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release"
},
{
"msg_contents": "Mike Mascari <mascarm@mascari.com> writes:\n\n> Tom Lane wrote:\n> > \n> > FWIW, I trust lazy VACUUM a lot *more* than I trust the old VACUUM code.\n> > Read the tuple-chain-moving logic in vacuum.c sometime, and then tell me\n> > how confident you feel in it. (My gut tells me that that logic is\n> > responsible for the recent reports of duplicate tuples in 7.1.*, though\n> > I can't yet back this up with any evidence.)\n> \n> For all the various bugs which have been in PG over the years, the\n> recent crop of \"duplicate tuples\" is the absolute scariest. Can a\n> release really be made without knowing precisely the cause of those\n> corruptions? The various theories offered by the posters (SMP\n> machine, CREATE INDEX in pl/pgsql functions, etc.) aren't comforting\n> either. To me, all other feature enhancements pale in comparison to\n> pinning down this bug.\n\nThe one instance of the bug that has been definitely pinned down\ninvolved the old VACUUM code in 7.1.3 (plus a don't-do-that-then index\nfunction).\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n",
"msg_date": "18 Dec 2001 14:47:37 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release"
},
{
"msg_contents": "Mike Mascari <mascarm@mascari.com> writes:\n> For all the various bugs which have been in PG over the years, the\n> recent crop of \"duplicate tuples\" is the absolute scariest. Can a\n> release really be made without knowing precisely the cause of those\n> corruptions?\n\nI'm not happy about it either, but the existence of an unrepaired bug\nin 7.1 is hardly grounds for not releasing 7.2. Finding a bug in 7.2\nthat doesn't exist in 7.1 would be such grounds, sure. But we put out\nreleases when we think they are better than what's out there; holding\nthem in a (vain) quest for perfection isn't the way to make progress.\n\nFWIW, we seem to understand the mechanism behind Brian Hirt's case,\nat least. Need to poll the other reporters and see if the explanation\nfits their cases.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Dec 2001 14:52:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release "
},
{
"msg_contents": "Jason Earl wrote:\n\n\n> \n> Actually I think that it makes a lot of sense to change the semantics\n> of VACUUM. None of us here like the way that vacuum currently works.\n\n\nNone of us? You forgot to ask me when you polled all of us ...\n\nI think the new VACUUM strategy has great promise, but until I see it \nrunning in production environments for a while that's as far as I'm \nwilling to go. I've lived with the old VACUUM for the two or so years \nI've been running PG-based websites, I can live with it a few more \nmonths until you run into all the table-corrupting bugs for me. Or have \nsuch a successful experience that I become convinced there aren't any \n(my hope, of course!)\n\n> It also means that the PostgreSQL hackers can say that their database is\n> ready for 24x7 operation in \"the default\" mode.\n\n\nPlenty of people already run it 24x7.\n\n\n> I think that you are forgetting some of PostgreSQL's past changes.\n> Take a look at the changelog and you will realize that removing OIDs\n> and changine the semantics of VACUUM are peanuts compared to 7.1's\n> transaction log, TOAST, Outer Joins, the new function manager, etc.\n\n\nWhile true, it is also true that:\n\n1. TOAST didn't change the semantics of queries on existing tables. It\n just allows you to define tables that weren't definable before.\n\n2. outer joins didn't change the semantics of existing queries. You can\n write queries you couldn't write before, but your old queries work\n without change.\n\n3. The new function manager is transparent to all but those who write C\n functions, other than fixing the NULL parameter bug.\n\n4. WAL was a scary change due to the scope, but it didn't change the\n semantics of transactions.\n\n5. The semantics of VACUUM have changed. Silently (in the sense that\n there's no notification or warning spewed out).\n\n> And the new vacuum is a\n> vast improvement for those of us that use our databases 24x7. 
I see\n> this as a bug fix of the old procedure with the option of using the\n> old implementation if you really need it.\n\n\nOr if the new one hoses your tables. Tom's bright, but he's not \ncertified bug-free.\n\n \n>>Again, there is a time on every project when it is speak now or\n>>forever hold your peace. Bruce spoke, he raised some concerns, I had\n>>similar ones. There can be no harm in doing a little retrospect.\n>>\n> \n> That's certainly true. I suppose the primary difference between my\n> attitude and the yours and Bruce's is that I can't hardly wait for the\n> new features in 7.2 :). Likewise you don't remember the massive\n> changes in 7.1 because you *needed* them. Now you are happy with 7.1\n> and don't want to make radical changes :).\n\n\nI think the new VACUUM is a great thing, and frankly I don't care all \nthat much about the VACUUM FULL command issue. I tend to take the view \nthat command semantics shouldn't be changed if one can avoid it. It's a \nview built on the fact that about twenty years of my past life were \nspent as a language implementor and dealing with customer feedback when \nI've been foolish enough to make semantics changes of this sort.\n\nI certainly wouldn't've raised this issue, at this late date, on my own.\n\nBut since Bruce did and since I'm temporarily subscribed to the list I \nfeel justified in piling on :)\n\n\n> Bring it on! I have good backups :).\n\n\nUh ... if you're truly 24x7 critical then rolling back to your last \nbackup wouldn't cut it, I'd think ...\n\n\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Tue, 18 Dec 2001 12:12:48 -0800",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release"
},
{
"msg_contents": "Don Baccus <dhogaza@pacifier.com> writes:\n> 5. The semantics of VACUUM have changed. Silently (in the sense that\n> there's no notification or warning spewed out).\n\n??? VACUUM has no semantics: it does not alter your data (or at least\nit's not supposed to ;-)). This change is transparent in the same\nway that the WAL and function manager changes were. If there is any\nlack of transparency, it would show up as greater disk space usage\nthan you might expect --- which seems *exactly* parallel to WAL.\nAnd you don't have the option to turn WAL off. I don't think you\ncan consistently maintain that adding WAL is good and changing VACUUM\nis bad.\n\n> Or if the new one hoses your tables. Tom's bright, but he's not \n> certified bug-free.\n\nCertainly, but you are assuming that the old VACUUM is bug-free,\nwhich is, um, overly optimistic. The new VACUUM code is (mostly) a\nsubset of the old, and has removed all the most ticklish bits of the old\ncode. So if you are looking for the fewest bugs you should prefer the\nnew to the old anyway. Case in point: Brian Hirt's bug does not arise\nunder new-style VACUUM. I had to say VACUUM FULL to make it happen\nin current sources.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Dec 2001 15:27:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release "
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> Plus, aren't there some isses with the non-locking vacuum?\n>> \n>> Such as?\n\n> I'm not sure, I have vague recollection of some things (I thought the\n> duplicate primary keys was on 7.2), but if you think it is good, then\n> I'll take your word for it.\n\nAll the reports of duplicate rows are from people running 7.1.*.\nThe bug (or bugs) probably do exist in current sources as well,\nbut you cannot blame non-locking vacuum for it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Dec 2001 15:32:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I know I have expressed these concerns before but lost the argument,\n> \n> Yes, you did, and it's way too late to bring them up again.\n> Particularly the OID issue; do you seriously propose an initdb\n> at this stage to put back OIDs in the system tables?\n\nNo, I don't expect any changes. I just felt I needed to clearly state\nmy opinion on this so people know where we are headed.\n\n> But for the record:\n> \n> I think your argument about VACUUM misses the point. The reason FULL\n> isn't the default is that we want the default form to be the one people\n> most want to use. If lightweight VACUUM starts to be run automatically\n> in some future release, FULL might at that time become the default.\n> I don't see anything wrong with changing the default behavior of the\n> command whenever the system's other behavior changes enough to alter the\n> \"typical\" usage of the command.\n\nAgreed. VACUUM nolock will be the most used method of vacuum for 7.2.\n\nMy concern was that FULL is still needed sometimes _and_ may become the\nmore popular vacuum method in later releases as vacuum nolock becomes\nautomatic. \n\nVacuum may go from locking (7.1), to non-locking (7.2), to locking (7.3)\nin the span of three releases. My point was that keeping vacuum as\nlocking in all releases and adding a non-locking version only for 7.2\nseemed cleaner.\n\nNow, if you think we will continue needing a non-locking vacuum in all\nfuture releases then we are doing the right thing by making non-locking\nthe default. Is that true?\n\n> As for pg_description, the change in primary key is unfortunate but\n> *necessary*. I don't foresee us reversing it. 
The consensus view as\n> I recall it was that we wanted to go over to a separate OID generator\n> per table in some future release, which fits right in with the new\n> structure of pg_description, but is entirely unworkable with the old.\n\nIn other words, a separate sequence for each system table, right? Is\nthat where we are headed? We could still call the column oid and\nqueries would continue to work. I don't think there are many\ncases where the oid is used to find a particular table, except my\n/contrib/findoidjoins, which can be removed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Dec 2001 16:02:22 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Concerns about this release"
},
{
"msg_contents": "\n\nBruce Momjian wrote:\n\n>>As for pg_description, the change in primary key is unfortunate but\n>>*necessary*. I don't foresee us reversing it. The consensus view as\n>>I recall it was that we wanted to go over to a separate OID generator\n>>per table in some future release, which fits right in with the new\n>>structure of pg_description, but is entirely unworkable with the old.\n>>\nThis is the clash of views between OO and R parts of ORDB - tho OO part\n_needs_ oid and a better support structure for OIDs, while the classical \nRDB\n(aka. bean-counting ;) part has not need for them..\n\nIt's not the right time to discuss it during beta time, but the pendulum \nhas been\nswinging heavily to the \"classic RDB\" side of thing in recent years for \nPostgreSQL,\nwhile it has been already going the other way for at least Oracle and \nInformix(Illustra)\nat least.\n\nIf we want to keep up the general copycat image of free software we of \ncourse can,\nbut I would much more like us to \"return to the roots\" of postgres and \nbe in/near the\nforefront for a change ;) . That would mean at least stronger support \nfor OO, and\nperhaps also restoring support for (per table/optional) time travel.\n\nThere were possibly more nice features that could be restored...\n\n>\n>In other words, a separate sequence for each system table, right? Is\n>that where we are headed? We could still call the column oid and\n>queries would continue to work. I don't think there are many\n>cases where the oid is used to find a particular table, except my\n>/contrib/findoidjoins, which can be removed.\n>\nIn the 7.x.x series even a system column tableoid was added that can be \nused to\ndetermine the table the tuple came form. I guess it was in preparation \nfor implementing\nSQL99's CREATE TABLE x UNDER y construct in one table, perhaps in a way that\nwould allow fast DROP COLUMN's (i.e separation of physical/logical \ncolumn order)\n\n----------------\nHannu\n\n\n",
"msg_date": "Wed, 19 Dec 2001 02:28:07 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> This is the clash of views between OO and R parts of ORDB - tho OO part\n> _needs_ oid and a better support structure for OIDs, while the classical \n> RDB (aka. bean-counting ;) part has not need for them..\n\nWhat's that have to do with it? The direction we are moving in is that\nthe globally unique identifier of an object is tableoid+rowoid, not just\noid; but I fail to see why that's less support than before. If\nanything, I think it's better support. The tableoid tells you which\ntable the object is in, and thus its type, whereas a single global OID\nsequence gives you no information at all about what the object\nrepresented by an OID is or where to look for it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Dec 2001 20:04:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release "
},
{
"msg_contents": "> Hannu Krosing <hannu@tm.ee> writes:\n> > This is the clash of views between OO and R parts of ORDB - tho OO part\n> > _needs_ oid and a better support structure for OIDs, while the classical \n> > RDB (aka. bean-counting ;) part has not need for them..\n> \n> What's that have to do with it? The direction we are moving in is that\n> the globally unique identifier of an object is tableoid+rowoid, not just\n> oid; but I fail to see why that's less support than before. If\n> anything, I think it's better support. The tableoid tells you which\n> table the object is in, and thus its type, whereas a single global OID\n> sequence gives you no information at all about what the object\n> represented by an OID is or where to look for it.\n\nI like that idea a lot. I had not see that proposed before, to use a\ncombination table oid/sequence as the globally unique oid. Nice.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Dec 2001 21:03:51 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Concerns about this release"
},
{
"msg_contents": "> > Are all the PostgreSQL developers really, really, sure that all the new\n> > features in 7.2 are ready for prime time?\n>\n> See above. If you like, we'll push out the release date a few years.\n> Of course, the code won't get any more ready for prime time just by\n> sitting on it.\n>\n> I think that we've very nearly reached the point where further time in\n> beta phase isn't going to yield any new info. Only putting it out as\n> an official release will draw enough new users to expose remaining bugs.\n> We've been through this same dynamic with every previous release; 7.2\n> won't be any different.\n\nJust for the record, I *never* use a x.x version of anything, especially\nPostgres. I'll wait until at least 7.2.1 before running it on a production\nserver...\n\nNot saying that there's anything wrong with 7.2 - just that there's _always_\ninteresting problems (look at the changelogs for 7.1.1 - 7.1.3)\n\nChris\n\n",
"msg_date": "Wed, 19 Dec 2001 10:24:33 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release "
},
{
"msg_contents": "> I think the new VACUUM is a great thing, and frankly I don't care all\n> that much about the VACUUM FULL command issue. I tend to take the view\n> that command semantics shouldn't be changed if one can avoid it. It's a\n> view built on the fact that about twenty years of my past life were\n> spent as a language implementor and dealing with customer feedback when\n> I've been foolish enough to make semantics changes of this sort.\n\nI actually just add 'vacuumdb -a -z -q' sort of thing to my crontab. Which\nversion of VACCUM will vacuumdb be using in 7.2? Will there be an option to\nvacuumdb to run VACUUM FULL if it defaults to just VACUUM?\n\nChris\n\n",
"msg_date": "Wed, 19 Dec 2001 10:40:30 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n\n> Just for the record, I *never* use a x.x version of anything, especially\n> Postgres. I'll wait until at least 7.2.1 before running it on a production\n> server...\n> \n> Not saying that there's anything wrong with 7.2 - just that there's _always_\n> interesting problems (look at the changelogs for 7.1.1 - 7.1.3)\n\nOn the other hand, *somebody's* got to use x.x, otherwise the\n\"interesting\" problems would never get found. ;)\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n",
"msg_date": "18 Dec 2001 21:55:37 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release"
},
{
"msg_contents": ">>>Don Baccus said:\n > > It also means that the PostgreSQL hackers can say that their database is\n > > ready for 24x7 operation in \"the default\" mode.\n > \n > \n > Plenty of people already run it 24x7.\n\nI guess many do, actually. With the old (existing) code. I used to run most of \nmy production systems with beta code years ago - as at that time PostgreSQL \nwas beta quality anyway :) - but I currently run 7.0 on such systems, planning \nan upgarde to 7.1 anytime now - actually, the delay is because I need to shut \ndown that large 24x7 setup!\n\n > 2. outer joins didn't change the semantics of existing queries. You can\n > write queries you couldn't write before, but your old queries work\n > without change.\n\nNot true!\n\nSQL that worked in 7.0 does not work always in 7.1. Arguably, that can be \nexplained by 'bad SQL', but does not change the fact.\n\nThe default SQL inheritance also makes things much, much different. Let me \nexplain with this 'intuitive' schema:\n\nCREATE TABLE maillog (\n\tfrom\ttext,\n\tto\ttext,\n\tmsgdate\tdatetime,\n\tmsgid\ttext,\n\tbytes\tint\n);\n\nNow this is the 'current' log data, and we need to archive it periodically, to \nkeep the old data and at the same time avoid unnecessary performance decrease. \nSo we create archive tables like this\n\nCREATE TABLE maillog200101 () INHERITS (maillog);\nCREATE TABLE maillog200102 () INHERITS (maillog);\n\nData gets there by\n\nINSERT INTO maillog200101 SELECT * FROM maillog WHERE msgdate >= '2001-01-01' \nand msgdate < '2001-02-01';\nDELETE FROM maillog WHERE msgdate >= '2001-01-01' and msgdate < '2001-02-01';\n\netc.\n\nOne of the reasons in doing this is that the archive creation software needs \nnot know the exact structure of the table it archives...\n\nUnder 7.0 everything is as expected. Under 7.1 when you SELECT from maillog, \nyou also select from all inherited tables, which is not expected. 
One can of \ncourse rewrite all the SELECTs :-)\n\nSorry for the off-topic, but this hit me badly :)\n\n > 4. WAL was a scary change due to the scope, but it didn't change the\n > semantics of transactions.\n\nBut it was discussed many, many times before being implemented. \n\n > 5. The semantics of VACUUM have changed. Silently (in the sense that\n > there's no notification or warning spewed out).\n\nYou argue with Tom here, but both of you go call the same beast differently.\nSure, semantics of the 'VACUUM' command has changed. The old semantics is now \navailable as 'VACUUM FULL' - as it was already commented, this is no worry for \nnew users. The new semantics of VACUUM may actually be an improvement, because \nmany people need to run VACUUM in order to update statistics. The only benefit \nI see from VACUUM FULL is disk space usage reduction - which most \nadministrators do not do as default action (that is, rarely in scripts).\n\nIf free heap is reused in both tables and indexes, I do not see much trouble \nin the new semantics, especially for 24x7 setups! Tables, that are used for \n'logs' always grow - if you take my example above, given you have ca 1 million \nrecords a month, after the first month the maillog table is supposed to \ncontain 1 million records, then those get deleted and the new tuples will \noccupy another 1 million records disk space... now given that we archive the \nold data and not just throw it away, if free space reuse is available, the \nmaillog table will never grow more than 2 million records in size, which is \nreasonable. With the old vacuum semantics, the vacuuming of such tables is \nexpensive (as they usually have indexes) and the saved disk space is not \nsignificant.\n\n > I think the new VACUUM is a great thing, and frankly I don't care all \n > that much about the VACUUM FULL command issue. I tend to take the view \n > that command semantics shouldn't be changed if one can avoid it. 
It's a \n > view built on the fact that about twenty years of my past life were \n > spent as a language implementor and dealing with customer feedback when \n > I've been foolish enough to make semantics changes of this sort.\n\nI fully agree with this.\n\nHowever, what I think Tom had not said in his defense is that the new VACUUM \nsemantics is expected to greatly improve the experience with PostgreSQL. As \nsuch, NEW users are more likely to appreciate it. More users = more found bugs \n= better PostgreSQL (perhaps other benefits :) It may also be an improvement \nfor true 24x7 setups - users of my PostgreSQL powered company management \ndatabase start to complain when i do VACUUM during the day, as table locking \nis 'freezing' for example cash register operation, and 'people in the queue \nare waiting' etc ;)\n\n > > Bring it on! I have good backups :).\n > \n > \n > Uh ... if you're truly 24x7 critical then rolling back to your last \n > backup wouldn't cut it, I'd think ...\n\nAh... backups.. and they also load back so slow :)\n\n\nDaniel\n\n",
"msg_date": "Wed, 19 Dec 2001 10:23:25 +0200",
"msg_from": "Daniel Kalchev <daniel@digsys.bg>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release "
},
{
"msg_contents": "Doug McNaught <doug@wireboard.com> writes:\n\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> \n> > Just for the record, I *never* use a x.x version of anything, especially\n> > Postgres. I'll wait until at least 7.2.1 before running it on a production\n> > server...\n> > \n> > Not saying that there's anything wrong with 7.2 - just that there's _always_\n> > interesting problems (look at the changelogs for 7.1.1 - 7.1.3)\n> \n> On the other hand, *somebody's* got to use x.x, otherwise the\n> \"interesting\" problems would never get found. ;)\n\nIt's simple. The people who use the x.x releases are those that\neither 1) need the added functionality or 2) haven't read the HACKERS\nmailing list :).\n\nPersonally the new non-locking VACUUM is enough of a win for me that I\nwill be giving 7.2 a whirl sooner rather than later. In fact, I have\na planned downtime over the Christmas holiday, and I have been toying\nwith the idea of rolling out whatever is available (even if it isn't\nthe official release). In fact, if there was a Debian package of\n7.2b4 I would probably be running it right now (hint hint).\n\nJason\n",
"msg_date": "19 Dec 2001 11:05:09 -0700",
"msg_from": "Jason Earl <jason.earl@simplot.com>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release"
},
{
"msg_contents": "Jason Earl wrote:\n\n> In fact, if there was a Debian package of\n> 7.2b4 I would probably be running it right now (hint hint).\n\n\nI know there will be folks putting up demo OpenACS 4 sites on PG 7.2 \nsomething, as soon as we can resolve the problems that prevent it from \nworking. Just because I personally feel cautious about using the new \nVACUUM on production sites doesn't mean I won't be using it on \nnon-critical sites ASAP.\n\n\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Wed, 19 Dec 2001 10:26:27 -0800",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release"
},
{
"msg_contents": "> I know there will be folks putting up demo OpenACS 4 sites on PG 7.2\n> something, as soon as we can resolve the problems that prevent it from\n> working. Just because I personally feel cautious about using the new\n> VACUUM on production sites doesn't mean I won't be using it on\n> non-critical sites ASAP.\n\nI've been trying to follow the VACUUM concern thread but have missed a few\nso I thought I might ask someone to summarize the concern and the real risk\nof table corruption and any other problems that have been reported... I was\nplanning on upgrading several production databases to 7.2 the day it was\nreleased (I've been developing with the betas with little problem) but won't\ndo so if there are some real concerns about corruption and such...\n\nSo if anyone doesn't mind to take a minute, could I get opinions? Is it too\nparanoid to not use the 7.2 release in production?\n\nThanks!\n\n-Mitch\n\n",
"msg_date": "Wed, 19 Dec 2001 12:16:36 -0700",
"msg_from": "\"Mitch Vincent\" <mitch@doot.org>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release"
},
{
"msg_contents": "\"Mitch Vincent\" <mitch@doot.org> writes:\n\n> I've been trying to follow the VACUUM concern thread but have missed a few\n> so I thought I might ask someone to summarize the concern and the real risk\n> of table corruption and any other problems that have been reported... I was\n> planning on upgrading several production databases to 7.2 the day it was\n> released (I've been developing with the betas with little problem) but won't\n> do so if there are some real concerns about corruption and such...\n> \n> So if anyone doesn't mind to take a minute, could I get opinions? Is it too\n> paranoid to not use the 7.2 release in production?\n\nInterestingly enough, the one definite table-corrupting bug that has\nbeen found was actually in old code (what is now VACUUM FULL). It was\nhard to trigger which explains why it didn't turn up earlier, but it\nwas hiding in 7.1.x the whole time. \n\nI will probably put 7.2rc1 on my dev server when it comes out.\nI am running a mix of 7.1.x versions in production so I will probably \nmigrate them all to 7.1.3 first (because pg_dump in earlier 7.1.x\nmakes unrestorable dumps in some situations) then look to go to 7.2\nafter it's been out a bit.\n\nYour call, but I'd certainly run 7.2 on a test box for a little while\nbefore rolling it out anywhere important.\n\nJust my take...\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n",
"msg_date": "19 Dec 2001 18:44:02 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release"
},
{
"msg_contents": "\"Mitch Vincent\" <mitch@doot.org> writes:\n> So if anyone doesn't mind to take a minute, could I get opinions? Is it too\n> paranoid to not use the 7.2 release in production?\n\nDon is working from the \"don't be a pioneer\" theory, which is hard to\ndispute in the abstract. In the concrete, though, I see little reason\nto think that 7.2 will be less reliable than 7.1.*, even before we fix\nthe inevitable early-return bugs and issue a 7.2.1. We have not made\nany huge changes like WAL in this cycle.\n\nAs an idle exercise, I just went through the CVS log entries since\n7.2beta2 (Nov 6, about six weeks ago). I counted 68 log entries that\nI could classify as bug fixes; of these, 47 were for bugs that exist in\n7.1, the other 21 for new bugs introduced in 7.2 code. I'd call about\n4 of the old bugs and 6 of the new ones significant issues (eg, a core\ndump is significant, fixing to_char's handling of roman numeral dates\nis not). 4 out of the 6 significant new-bug fixes were in the first two\nweeks of the six-week period.\n\nYou can read those numbers however you want, but to me they look like\n7.2.0 will be better than 7.1.anything.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Dec 2001 18:52:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release "
},
{
"msg_contents": "Tom Lane wrote:\n\n> \"Mitch Vincent\" <mitch@doot.org> writes:\n> \n>>So if anyone doesn't mind to take a minute, could I get opinions? Is it too\n>>paranoid to not use the 7.2 release in production?\n>>\n> \n> Don is working from the \"don't be a pioneer\" theory, which is hard to\n> dispute in the abstract. In the concrete, though, I see little reason\n> to think that 7.2 will be less reliable than 7.1.*, even before we fix\n> the inevitable early-return bugs and issue a 7.2.1. We have not made\n> any huge changes like WAL in this cycle.\n\n\nOnly on production boxes, though. In practice, Tom's probably right \nabout the relative reliability but, on the other hand ...\n\nThis is really the first release cycle where those of us working on the \nOpenACS project feel we have had the luxury of not being early adaptors. \n It's a nice feeling for a change and a reflection on PG's maturity.\n\nOur first version of OpenACS was released to run on PG 6.5 beta. 6.4, \nwhich sync'd the log file after every select statement (remember those \ndays?), just didn't cut it. We also were crashing the backend in 6.4 \nright-and-left and 6.5 fixed a ton of those. PG 7.0 fixed a bunch of \nbugs that were hindering us and we quickly told everyone using our \ntoolkit to upgrade ASAP. When we inherited the aD code base for what's \nnow OpenACS 4 and decided to support both Oracle and PG with common \nsource code, we quickly decided we'd only support PG 7.1, with its outer \njoins, even though it was still in a pre-release state.\n\nSo ... 
now I can afford to be conservative and hey, I'm enjoying it!\n\n\n> You can read those numbers however you want, but to me they look like\n> 7.2.0 will be better than 7.1.anything.\n\n\nNone of my sites seem to be affected by any PG 7.1 bug, and as they run \nthe same stereotyped queries over and over again, I'm not terribly \nconcerned about suddenly hitting a roadbump.\n\nSo while PG 7.2 may have fewer bugs than PG 7.1, it may still introduce \nnew bugs that will affect me in a negative way. Just because I appear \nto be dodging 7.1's bullets doesn't mean I'll dodge new rounds fired by 7.2.\n\nAlso fixing bugs may expose accidental dependencies on such bugs. When \nwe first started moving OpenACS 3.x from 6.5 to 7.0 we found a few \nillegal queries that had slipped through 6.5, appearing to work, which \ntriggered explicit errors in PG 7.0.\n\nSo testing first on a development box isn't just a protection against \nany new bugs you introduce. New fixes can lead to work, too, and it's \nreally nicer to find this out on test, not production, boxes.\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Wed, 19 Dec 2001 17:45:35 -0800",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": false,
"msg_subject": "Re: Concerns about this release"
}
] |
[
{
"msg_contents": "Hi,\nI got this patch working on 7.2b4, uses locale setting for input/output \nnumeric datatype.\nIt do not modify the regression tests.\n\nAny suggestion on How is implemented ?\nHow to change the regression test, to make it work on all the locale \nsettings ?\n\nthanks\nGiuseppe\n\n\n-- \n-------------------------------------------------------\nGiuseppe Tanzilli\t\tg.tanzilli@gruppocsf.com\nCSF Sistemi srl\t\t\tphone ++39 0775 7771\nVia del Ciavattino \nAnagni FR\nItaly\n\n\n\n",
"msg_date": "Tue, 18 Dec 2001 16:59:33 +0100",
"msg_from": "Giuseppe Tanzilli - CSF <g.tanzilli@gruppocsf.com>",
"msg_from_op": true,
"msg_subject": "RFC: Locale support for Numeric datatype"
}
] |
[
{
"msg_contents": "> However, there's still a problem: GetUndoRecPtr also gets SInvalLock\n> while its caller holds WALInsertLock, and therefore this routine\n> could create the second leg of the deadlock too. Removing the\n> SInvalLock lock there creates the problem that backends might be\n> added to or deleted from the PROC array while GetUndoRecPtr runs.\n> I think it might be possible to survive that, by adding an assumption\n> that logRec.xrecoff can be set to zero atomically, but it \n> seems tricky.\n\nCheckpoint' undo is not used currently so just comment out GetUndoRecPtr\ncall in CreateCheckPoint - we'll find solution later.\n\nVadim\n",
"msg_date": "Tue, 18 Dec 2001 10:55:41 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "Re: Deadlock condition in current sources"
},
{
"msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> Checkpoint' undo is not used currently so just comment out GetUndoRecPtr\n> call in CreateCheckPoint - we'll find solution later.\n\nI thought about that, but figured you'd object ;-)\n\nOne possibility is to do something you had recommended awhile back for\nother reasons: add a spinlock to each PROC structure and use the\nspinlock, rather than SInvalLock, to protect setting and reading of\nxmin, logRec, and related fields. I haven't finished working out all\nthe details, though, and this seems like a rather large change to make\nat this late stage of beta. Comments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Dec 2001 14:05:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Deadlock condition in current sources "
}
] |
[
{
"msg_contents": "> > Checkpoint' undo is not used currently so just comment out \n> > GetUndoRecPtr call in CreateCheckPoint - we'll find solution later.\n> \n> I thought about that, but figured you'd object ;-)\n> \n> One possibility is to do something you had recommended awhile back for\n> other reasons: add a spinlock to each PROC structure and use the\n> spinlock, rather than SInvalLock, to protect setting and reading of\n> xmin, logRec, and related fields. I haven't finished working out all\n> the details, though, and this seems like a rather large change to make\n> at this late stage of beta. Comments?\n\nSure - *comment* out GetUndoRecPtr -:)\n\nVadim\n",
"msg_date": "Tue, 18 Dec 2001 11:11:55 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "Re: Deadlock condition in current sources "
}
] |
[
{
"msg_contents": "> This is trying to get rid of the original copy of a tuple that's been\n> moved to another page. The problem is that your index \n> function causes a\n> table scan, which means that by the time control gets here, \n> someone else\n> has looked at this tuple and marked it good --- so the initial test of\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> HEAP_XMIN_COMMITTED fails, and the tuple is never removed!\n> \n> I would say that it's incorrect for vacuum.c to assume that\n> HEAP_XMIN_COMMITTED can't become set on HEAP_MOVED_OFF/HEAP_MOVED_IN\n> tuples during the course of vacuum's processing; after all, the xmin\n> definitely does refer to a committed xact, and we can't realistically\n> assume that we know what processing will be induced by user-defined\n> index functions. Vadim, what do you think? How should we fix this?\n\nBut it's incorrect for table scan to mark tuple as good neither.\nLooks like we have to add checks for the case\nTransactionIdIsCurrentTransactionId(tuple->t_cmin) when\nthere is HEAP_MOVED_OFF or HEAP_MOVED_IN in t_infomask to\nall HeapTupleSatisfies* in tqual.c as we do in\nHeapTupleSatisfiesDirty - note comments about uniq btree-s there.\n\nVadim\n",
"msg_date": "Tue, 18 Dec 2001 11:25:08 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "Re: problems with table corruption continued "
},
{
"msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n>> I would say that it's incorrect for vacuum.c to assume that\n>> HEAP_XMIN_COMMITTED can't become set on HEAP_MOVED_OFF/HEAP_MOVED_IN\n>> tuples during the course of vacuum's processing; after all, the xmin\n>> definitely does refer to a committed xact, and we can't realistically\n>> assume that we know what processing will be induced by user-defined\n>> index functions. Vadim, what do you think? How should we fix this?\n\n> But it's incorrect for table scan to mark tuple as good neither.\n\nOh, that makes sense.\n\n> Looks like we have to add checks for the case\n> TransactionIdIsCurrentTransactionId(tuple->t_cmin) when\n> there is HEAP_MOVED_OFF or HEAP_MOVED_IN in t_infomask to\n> all HeapTupleSatisfies* in tqual.c as we do in\n> HeapTupleSatisfiesDirty - note comments about uniq btree-s there.\n\nSounds like a plan. Do you want to work on this, or shall I?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Dec 2001 14:36:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: problems with table corruption continued "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane\n> \n> \"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> >> I would say that it's incorrect for vacuum.c to assume that\n> >> HEAP_XMIN_COMMITTED can't become set on HEAP_MOVED_OFF/HEAP_MOVED_IN\n> >> tuples during the course of vacuum's processing; after all, the xmin\n> >> definitely does refer to a committed xact, and we can't realistically\n> >> assume that we know what processing will be induced by user-defined\n> >> index functions. Vadim, what do you think? How should we fix this?\n> \n> > But it's incorrect for table scan to mark tuple as good neither.\n> \n> Oh, that makes sense.\n> \n> > Looks like we have to add checks for the case\n> > TransactionIdIsCurrentTransactionId(tuple->t_cmin) when\n> > there is HEAP_MOVED_OFF or HEAP_MOVED_IN in t_infomask to\n> > all HeapTupleSatisfies* in tqual.c as we do in\n> > HeapTupleSatisfiesDirty - note comments about uniq btree-s there.\n\nShould the change be TransactionIdIsInProgress(tuple->t_cmin) ?\n\nThe cause of Brian's issue was exactly what I was afraid of. \nVACUUM is guarded by AccessExclusive lock but IMHO we\nshouldn't rely on it too heavily. \n\nregards,\nHiroshi Inoue\n \n",
"msg_date": "Wed, 19 Dec 2001 06:39:34 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: problems with table corruption continued "
},
{
"msg_contents": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> Should the change be TransactionIdIsInProgress(tuple->t_cmin) ?\n\nI'd be willing to do\n\tif (tuple->t_cmin is my XID)\n\t\tdo something;\n\tAssert(!TransactionIdIsInProgress(tuple->t_cmin));\nif that makes you feel better. But anything that's scanning\na table exclusive-locked by another transaction is broken.\n(BTW: contrib/pgstattuple is thus broken. Will fix.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Dec 2001 18:01:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: problems with table corruption continued "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> > Should the change be TransactionIdIsInProgress(tuple->t_cmin) ?\n> \n> I'd be willing to do\n> if (tuple->t_cmin is my XID)\n> do something;\n> Assert(!TransactionIdIsInProgress(tuple->t_cmin));\n> if that makes you feel better. \n\nIt's useless in hard to reproduce cases. \nI've thought but given up many times to propose this\nchange and my decision seems to have been right because\nI see only *Assert* even after this issue.\n\n> But anything that's scanning\n> a table exclusive-locked by another transaction is broken.\n> (BTW: contrib/pgstattuple is thus broken. Will fix.)\n\nIt seems dangerous to me to rely only on developers'\ncarefulness. There are some places where relations\nare opened with NoLock. We had better change them.\nI once examined them but AFAIR there are few cases\nwhen they are really opened with NoLock. In most\ncases they are already opened previously with other\nlock modes. I don't remember well if there was a\nreally dangerous(scan an relation opened with NoLock)\nplace.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 19 Dec 2001 09:19:54 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: problems with table corruption continued"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> I don't remember well if there was a really dangerous(scan an relation\n> opened with NoLock) place.\n\nI have just looked for that, and the only such place is in the new\ncontrib module pgstattuple. This is clearly broken, since there is\nnothing stopping someone from (eg) dropping the relation while\npgsstattuple is scanning it. I think it should get AccessShareLock\non the relation.\n\nThe ri_triggers code has a lot of places that open things NoLock,\nbut it only looks into the relcache entry and doesn't try to scan\nthe relation. Nonetheless that code bothers me; we could be using\nan obsolete relcache entry if someone has just committed an ALTER\nTABLE on the relation. Some of the cases may be safe because a lock\nis held higher up (eg, on the table from which the trigger was fired)\nbut I doubt they all are.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Dec 2001 19:29:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: problems with table corruption continued "
},
{
"msg_contents": "On Tue, 18 Dec 2001, Tom Lane wrote:\n\n> The ri_triggers code has a lot of places that open things NoLock,\n> but it only looks into the relcache entry and doesn't try to scan\n> the relation. Nonetheless that code bothers me; we could be using\n> an obsolete relcache entry if someone has just committed an ALTER\n> TABLE on the relation. Some of the cases may be safe because a lock\n> is held higher up (eg, on the table from which the trigger was fired)\n> but I doubt they all are.\n\nProbably not, since it looks like that's being done for the other table of\nthe constraint (not the one on which the trigger was fired).\n\n",
"msg_date": "Tue, 18 Dec 2001 17:39:33 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: problems with table corruption continued "
},
{
"msg_contents": "Stephan Szabo wrote:\n> \n> On Tue, 18 Dec 2001, Tom Lane wrote:\n> \n> > The ri_triggers code has a lot of places that open things NoLock,\n> > but it only looks into the relcache entry and doesn't try to scan\n> > the relation. Nonetheless that code bothers me; we could be using\n> > an obsolete relcache entry if someone has just committed an ALTER\n> > TABLE on the relation. Some of the cases may be safe because a lock\n> > is held higher up (eg, on the table from which the trigger was fired)\n> > but I doubt they all are.\n> \n> Probably not, since it looks like that's being done for the other table of\n> the constraint (not the one on which the trigger was fired).\n\nIf a lock is held already, acquiring an AccessShareLock\nwould cause no addtional conflict. I don't see any reason\nto walk a tightrope with NoLock intentionally.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 19 Dec 2001 12:56:40 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: problems with table corruption continued"
},
{
"msg_contents": "\nOn Wed, 19 Dec 2001, Hiroshi Inoue wrote:\n\n> Stephan Szabo wrote:\n> >\n> > On Tue, 18 Dec 2001, Tom Lane wrote:\n> >\n> > > The ri_triggers code has a lot of places that open things NoLock,\n> > > but it only looks into the relcache entry and doesn't try to scan\n> > > the relation. Nonetheless that code bothers me; we could be using\n> > > an obsolete relcache entry if someone has just committed an ALTER\n> > > TABLE on the relation. Some of the cases may be safe because a lock\n> > > is held higher up (eg, on the table from which the trigger was fired)\n> > > but I doubt they all are.\n> >\n> > Probably not, since it looks like that's being done for the other table of\n> > the constraint (not the one on which the trigger was fired).\n>\n> If a lock is held already, acquiring an AccessShareLock\n> would cause no addtional conflict. I don't see any reason\n> to walk a tightrope with NoLock intentionally.\n\nI don't know why NoLock was used there, I was just pointing out that the\nodds of a lock being held higher up is probably low.\n\n",
"msg_date": "Tue, 18 Dec 2001 20:29:24 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: problems with table corruption continued"
}
] |
[
{
"msg_contents": "> > Looks like we have to add checks for the case\n> > TransactionIdIsCurrentTransactionId(tuple->t_cmin) when\n> > there is HEAP_MOVED_OFF or HEAP_MOVED_IN in t_infomask to\n> > all HeapTupleSatisfies* in tqual.c as we do in\n> > HeapTupleSatisfiesDirty - note comments about uniq btree-s there.\n> \n> Sounds like a plan. Do you want to work on this, or shall I?\n\nSorry, I'm not able to do this fast enough for current pre-release stage -:(\n\nVadim\n",
"msg_date": "Tue, 18 Dec 2001 11:49:24 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "Re: problems with table corruption continued "
},
{
"msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n>> Sounds like a plan. Do you want to work on this, or shall I?\n\n> Sorry, I'm not able to do this fast enough for current pre-release stage -:(\n\nOkay. I'll work on a patch and send it to you for review, if you like.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Dec 2001 14:53:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: problems with table corruption continued "
},
{
"msg_contents": "Okay, I've applied the attached patch to tqual.c. This brings all the\nvariants of HeapTupleSatisfies up to speed: I have compared them\ncarefully and they now all make equivalent series of tests on the tuple\nstatus (though they may interpret the results differently).\n\nI decided that Hiroshi had a good point about testing\nTransactionIdIsInProgress, so the code now does that before risking\na change to t_infomask, even in the VACUUM cases.\n\n\t\t\tregards, tom lane\n\n\n*** src/backend/utils/time/tqual.c.orig\tMon Oct 29 15:31:08 2001\n--- src/backend/utils/time/tqual.c\tWed Dec 19 12:09:32 2001\n***************\n*** 31,37 ****\n /*\n * HeapTupleSatisfiesItself\n *\t\tTrue iff heap tuple is valid for \"itself.\"\n! *\t\t\"{it}self\" means valid as of everything that's happened\n *\t\tin the current transaction, _including_ the current command.\n *\n * Note:\n--- 31,37 ----\n /*\n * HeapTupleSatisfiesItself\n *\t\tTrue iff heap tuple is valid for \"itself.\"\n! *\t\t\"itself\" means valid as of everything that's happened\n *\t\tin the current transaction, _including_ the current command.\n *\n * Note:\n***************\n*** 53,59 ****\n bool\n HeapTupleSatisfiesItself(HeapTupleHeader tuple)\n {\n- \n \tif (!(tuple->t_infomask & HEAP_XMIN_COMMITTED))\n \t{\n \t\tif (tuple->t_infomask & HEAP_XMIN_INVALID)\n--- 53,58 ----\n***************\n*** 61,86 ****\n \n \t\tif (tuple->t_infomask & HEAP_MOVED_OFF)\n \t\t{\n! \t\t\tif (TransactionIdDidCommit((TransactionId) tuple->t_cmin))\n! \t\t\t{\n! \t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID;\n \t\t\t\treturn false;\n \t\t\t}\n \t\t}\n \t\telse if (tuple->t_infomask & HEAP_MOVED_IN)\n \t\t{\n! \t\t\tif (!TransactionIdDidCommit((TransactionId) tuple->t_cmin))\n \t\t\t{\n! \t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID;\n! 
\t\t\t\treturn false;\n \t\t\t}\n \t\t}\n \t\telse if (TransactionIdIsCurrentTransactionId(tuple->t_xmin))\n \t\t{\n \t\t\tif (tuple->t_infomask & HEAP_XMAX_INVALID)\t/* xid invalid */\n \t\t\t\treturn true;\n \t\t\tif (tuple->t_infomask & HEAP_MARKED_FOR_UPDATE)\n \t\t\t\treturn true;\n \t\t\treturn false;\n \t\t}\n \t\telse if (!TransactionIdDidCommit(tuple->t_xmin))\n--- 60,102 ----\n \n \t\tif (tuple->t_infomask & HEAP_MOVED_OFF)\n \t\t{\n! \t\t\tif (TransactionIdIsCurrentTransactionId((TransactionId) tuple->t_cmin))\n \t\t\t\treturn false;\n+ \t\t\tif (!TransactionIdIsInProgress((TransactionId) tuple->t_cmin))\n+ \t\t\t{\n+ \t\t\t\tif (TransactionIdDidCommit((TransactionId) tuple->t_cmin))\n+ \t\t\t\t{\n+ \t\t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID;\n+ \t\t\t\t\treturn false;\n+ \t\t\t\t}\n+ \t\t\t\ttuple->t_infomask |= HEAP_XMIN_COMMITTED;\n \t\t\t}\n \t\t}\n \t\telse if (tuple->t_infomask & HEAP_MOVED_IN)\n \t\t{\n! \t\t\tif (!TransactionIdIsCurrentTransactionId((TransactionId) tuple->t_cmin))\n \t\t\t{\n! \t\t\t\tif (TransactionIdIsInProgress((TransactionId) tuple->t_cmin))\n! \t\t\t\t\treturn false;\n! \t\t\t\tif (TransactionIdDidCommit((TransactionId) tuple->t_cmin))\n! \t\t\t\t\ttuple->t_infomask |= HEAP_XMIN_COMMITTED;\n! \t\t\t\telse\n! \t\t\t\t{\n! \t\t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID;\n! \t\t\t\t\treturn false;\n! \t\t\t\t}\n \t\t\t}\n \t\t}\n \t\telse if (TransactionIdIsCurrentTransactionId(tuple->t_xmin))\n \t\t{\n \t\t\tif (tuple->t_infomask & HEAP_XMAX_INVALID)\t/* xid invalid */\n \t\t\t\treturn true;\n+ \n+ \t\t\tAssert(TransactionIdIsCurrentTransactionId(tuple->t_xmax));\n+ \n \t\t\tif (tuple->t_infomask & HEAP_MARKED_FOR_UPDATE)\n \t\t\t\treturn true;\n+ \n \t\t\treturn false;\n \t\t}\n \t\telse if (!TransactionIdDidCommit(tuple->t_xmin))\n***************\n*** 89,97 ****\n \t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID; /* aborted */\n \t\t\treturn false;\n \t\t}\n! \t\ttuple->t_infomask |= HEAP_XMIN_COMMITTED;\n \t}\n! 
\t/* the tuple was inserted validly */\n \n \tif (tuple->t_infomask & HEAP_XMAX_INVALID)\t/* xid invalid or aborted */\n \t\treturn true;\n--- 105,115 ----\n \t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID; /* aborted */\n \t\t\treturn false;\n \t\t}\n! \t\telse\n! \t\t\ttuple->t_infomask |= HEAP_XMIN_COMMITTED;\n \t}\n! \n! \t/* by here, the inserting transaction has committed */\n \n \tif (tuple->t_infomask & HEAP_XMAX_INVALID)\t/* xid invalid or aborted */\n \t\treturn true;\n***************\n*** 117,123 ****\n \t\treturn true;\n \t}\n \n! \t/* by here, deleting transaction has committed */\n \ttuple->t_infomask |= HEAP_XMAX_COMMITTED;\n \n \tif (tuple->t_infomask & HEAP_MARKED_FOR_UPDATE)\n--- 135,141 ----\n \t\treturn true;\n \t}\n \n! \t/* xmax transaction committed */\n \ttuple->t_infomask |= HEAP_XMAX_COMMITTED;\n \n \tif (tuple->t_infomask & HEAP_MARKED_FOR_UPDATE)\n***************\n*** 177,194 ****\n \n \t\tif (tuple->t_infomask & HEAP_MOVED_OFF)\n \t\t{\n! \t\t\tif (TransactionIdDidCommit((TransactionId) tuple->t_cmin))\n! \t\t\t{\n! \t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID;\n \t\t\t\treturn false;\n \t\t\t}\n \t\t}\n \t\telse if (tuple->t_infomask & HEAP_MOVED_IN)\n \t\t{\n! \t\t\tif (!TransactionIdDidCommit((TransactionId) tuple->t_cmin))\n \t\t\t{\n! \t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID;\n! \t\t\t\treturn false;\n \t\t\t}\n \t\t}\n \t\telse if (TransactionIdIsCurrentTransactionId(tuple->t_xmin))\n--- 195,225 ----\n \n \t\tif (tuple->t_infomask & HEAP_MOVED_OFF)\n \t\t{\n! 
\t\t\tif (TransactionIdIsCurrentTransactionId((TransactionId) tuple->t_cmin))\n \t\t\t\treturn false;\n+ \t\t\tif (!TransactionIdIsInProgress((TransactionId) tuple->t_cmin))\n+ \t\t\t{\n+ \t\t\t\tif (TransactionIdDidCommit((TransactionId) tuple->t_cmin))\n+ \t\t\t\t{\n+ \t\t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID;\n+ \t\t\t\t\treturn false;\n+ \t\t\t\t}\n+ \t\t\t\ttuple->t_infomask |= HEAP_XMIN_COMMITTED;\n \t\t\t}\n \t\t}\n \t\telse if (tuple->t_infomask & HEAP_MOVED_IN)\n \t\t{\n! \t\t\tif (!TransactionIdIsCurrentTransactionId((TransactionId) tuple->t_cmin))\n \t\t\t{\n! \t\t\t\tif (TransactionIdIsInProgress((TransactionId) tuple->t_cmin))\n! \t\t\t\t\treturn false;\n! \t\t\t\tif (TransactionIdDidCommit((TransactionId) tuple->t_cmin))\n! \t\t\t\t\ttuple->t_infomask |= HEAP_XMIN_COMMITTED;\n! \t\t\t\telse\n! \t\t\t\t{\n! \t\t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID;\n! \t\t\t\t\treturn false;\n! \t\t\t\t}\n \t\t\t}\n \t\t}\n \t\telse if (TransactionIdIsCurrentTransactionId(tuple->t_xmin))\n***************\n*** 215,221 ****\n \t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID; /* aborted */\n \t\t\treturn false;\n \t\t}\n! \t\ttuple->t_infomask |= HEAP_XMIN_COMMITTED;\n \t}\n \n \t/* by here, the inserting transaction has committed */\n--- 246,253 ----\n \t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID; /* aborted */\n \t\t\treturn false;\n \t\t}\n! \t\telse\n! \t\t\ttuple->t_infomask |= HEAP_XMIN_COMMITTED;\n \t}\n \n \t/* by here, the inserting transaction has committed */\n***************\n*** 257,348 ****\n }\n \n int\n! HeapTupleSatisfiesUpdate(HeapTuple tuple)\n {\n! \tHeapTupleHeader th = tuple->t_data;\n \n \tif (AMI_OVERRIDE)\n \t\treturn HeapTupleMayBeUpdated;\n \n! \tif (!(th->t_infomask & HEAP_XMIN_COMMITTED))\n \t{\n! \t\tif (th->t_infomask & HEAP_XMIN_INVALID) /* xid invalid or aborted */\n \t\t\treturn HeapTupleInvisible;\n \n! \t\tif (th->t_infomask & HEAP_MOVED_OFF)\n \t\t{\n! \t\t\tif (TransactionIdDidCommit((TransactionId) th->t_cmin))\n! 
\t\t\t{\n! \t\t\t\tth->t_infomask |= HEAP_XMIN_INVALID;\n \t\t\t\treturn HeapTupleInvisible;\n \t\t\t}\n \t\t}\n! \t\telse if (th->t_infomask & HEAP_MOVED_IN)\n \t\t{\n! \t\t\tif (!TransactionIdDidCommit((TransactionId) th->t_cmin))\n \t\t\t{\n! \t\t\t\tth->t_infomask |= HEAP_XMIN_INVALID;\n! \t\t\t\treturn HeapTupleInvisible;\n \t\t\t}\n \t\t}\n! \t\telse if (TransactionIdIsCurrentTransactionId(th->t_xmin))\n \t\t{\n! \t\t\tif (CommandIdGEScanCommandId(th->t_cmin))\n \t\t\t\treturn HeapTupleInvisible;\t\t/* inserted after scan\n \t\t\t\t\t\t\t\t\t\t\t\t * started */\n \n! \t\t\tif (th->t_infomask & HEAP_XMAX_INVALID)\t\t/* xid invalid */\n \t\t\t\treturn HeapTupleMayBeUpdated;\n \n! \t\t\tAssert(TransactionIdIsCurrentTransactionId(th->t_xmax));\n \n! \t\t\tif (th->t_infomask & HEAP_MARKED_FOR_UPDATE)\n \t\t\t\treturn HeapTupleMayBeUpdated;\n \n! \t\t\tif (CommandIdGEScanCommandId(th->t_cmax))\n \t\t\t\treturn HeapTupleSelfUpdated;\t/* updated after scan\n \t\t\t\t\t\t\t\t\t\t\t\t * started */\n \t\t\telse\n \t\t\t\treturn HeapTupleInvisible;\t\t/* updated before scan\n \t\t\t\t\t\t\t\t\t\t\t\t * started */\n \t\t}\n! \t\telse if (!TransactionIdDidCommit(th->t_xmin))\n \t\t{\n! \t\t\tif (TransactionIdDidAbort(th->t_xmin))\n! \t\t\t\tth->t_infomask |= HEAP_XMIN_INVALID;\t/* aborted */\n \t\t\treturn HeapTupleInvisible;\n \t\t}\n! \t\tth->t_infomask |= HEAP_XMIN_COMMITTED;\n \t}\n \n \t/* by here, the inserting transaction has committed */\n \n! \tif (th->t_infomask & HEAP_XMAX_INVALID)\t\t/* xid invalid or aborted */\n \t\treturn HeapTupleMayBeUpdated;\n \n! \tif (th->t_infomask & HEAP_XMAX_COMMITTED)\n \t{\n! \t\tif (th->t_infomask & HEAP_MARKED_FOR_UPDATE)\n \t\t\treturn HeapTupleMayBeUpdated;\n \t\treturn HeapTupleUpdated;\t/* updated by other */\n \t}\n \n! \tif (TransactionIdIsCurrentTransactionId(th->t_xmax))\n \t{\n! \t\tif (th->t_infomask & HEAP_MARKED_FOR_UPDATE)\n \t\t\treturn HeapTupleMayBeUpdated;\n! 
\t\tif (CommandIdGEScanCommandId(th->t_cmax))\n \t\t\treturn HeapTupleSelfUpdated;\t\t/* updated after scan\n \t\t\t\t\t\t\t\t\t\t\t\t * started */\n \t\telse\n \t\t\treturn HeapTupleInvisible;\t/* updated before scan started */\n \t}\n \n! \tif (!TransactionIdDidCommit(th->t_xmax))\n \t{\n! \t\tif (TransactionIdDidAbort(th->t_xmax))\n \t\t{\n! \t\t\tth->t_infomask |= HEAP_XMAX_INVALID;\t\t/* aborted */\n \t\t\treturn HeapTupleMayBeUpdated;\n \t\t}\n \t\t/* running xact */\n--- 289,394 ----\n }\n \n int\n! HeapTupleSatisfiesUpdate(HeapTuple htuple)\n {\n! \tHeapTupleHeader tuple = htuple->t_data;\n \n \tif (AMI_OVERRIDE)\n \t\treturn HeapTupleMayBeUpdated;\n \n! \tif (!(tuple->t_infomask & HEAP_XMIN_COMMITTED))\n \t{\n! \t\tif (tuple->t_infomask & HEAP_XMIN_INVALID)\n \t\t\treturn HeapTupleInvisible;\n \n! \t\tif (tuple->t_infomask & HEAP_MOVED_OFF)\n \t\t{\n! \t\t\tif (TransactionIdIsCurrentTransactionId((TransactionId) tuple->t_cmin))\n \t\t\t\treturn HeapTupleInvisible;\n+ \t\t\tif (!TransactionIdIsInProgress((TransactionId) tuple->t_cmin))\n+ \t\t\t{\n+ \t\t\t\tif (TransactionIdDidCommit((TransactionId) tuple->t_cmin))\n+ \t\t\t\t{\n+ \t\t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID;\n+ \t\t\t\t\treturn HeapTupleInvisible;\n+ \t\t\t\t}\n+ \t\t\t\ttuple->t_infomask |= HEAP_XMIN_COMMITTED;\n \t\t\t}\n \t\t}\n! \t\telse if (tuple->t_infomask & HEAP_MOVED_IN)\n \t\t{\n! \t\t\tif (!TransactionIdIsCurrentTransactionId((TransactionId) tuple->t_cmin))\n \t\t\t{\n! \t\t\t\tif (TransactionIdIsInProgress((TransactionId) tuple->t_cmin))\n! \t\t\t\t\treturn HeapTupleInvisible;\n! \t\t\t\tif (TransactionIdDidCommit((TransactionId) tuple->t_cmin))\n! \t\t\t\t\ttuple->t_infomask |= HEAP_XMIN_COMMITTED;\n! \t\t\t\telse\n! \t\t\t\t{\n! \t\t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID;\n! \t\t\t\t\treturn HeapTupleInvisible;\n! \t\t\t\t}\n \t\t\t}\n \t\t}\n! \t\telse if (TransactionIdIsCurrentTransactionId(tuple->t_xmin))\n \t\t{\n! 
\t\t\tif (CommandIdGEScanCommandId(tuple->t_cmin))\n \t\t\t\treturn HeapTupleInvisible;\t\t/* inserted after scan\n \t\t\t\t\t\t\t\t\t\t\t\t * started */\n \n! \t\t\tif (tuple->t_infomask & HEAP_XMAX_INVALID)\t\t/* xid invalid */\n \t\t\t\treturn HeapTupleMayBeUpdated;\n \n! \t\t\tAssert(TransactionIdIsCurrentTransactionId(tuple->t_xmax));\n \n! \t\t\tif (tuple->t_infomask & HEAP_MARKED_FOR_UPDATE)\n \t\t\t\treturn HeapTupleMayBeUpdated;\n \n! \t\t\tif (CommandIdGEScanCommandId(tuple->t_cmax))\n \t\t\t\treturn HeapTupleSelfUpdated;\t/* updated after scan\n \t\t\t\t\t\t\t\t\t\t\t\t * started */\n \t\t\telse\n \t\t\t\treturn HeapTupleInvisible;\t\t/* updated before scan\n \t\t\t\t\t\t\t\t\t\t\t\t * started */\n \t\t}\n! \t\telse if (!TransactionIdDidCommit(tuple->t_xmin))\n \t\t{\n! \t\t\tif (TransactionIdDidAbort(tuple->t_xmin))\n! \t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID;\t/* aborted */\n \t\t\treturn HeapTupleInvisible;\n \t\t}\n! \t\telse\n! \t\t\ttuple->t_infomask |= HEAP_XMIN_COMMITTED;\n \t}\n \n \t/* by here, the inserting transaction has committed */\n \n! \tif (tuple->t_infomask & HEAP_XMAX_INVALID)\t\t/* xid invalid or aborted */\n \t\treturn HeapTupleMayBeUpdated;\n \n! \tif (tuple->t_infomask & HEAP_XMAX_COMMITTED)\n \t{\n! \t\tif (tuple->t_infomask & HEAP_MARKED_FOR_UPDATE)\n \t\t\treturn HeapTupleMayBeUpdated;\n \t\treturn HeapTupleUpdated;\t/* updated by other */\n \t}\n \n! \tif (TransactionIdIsCurrentTransactionId(tuple->t_xmax))\n \t{\n! \t\tif (tuple->t_infomask & HEAP_MARKED_FOR_UPDATE)\n \t\t\treturn HeapTupleMayBeUpdated;\n! \t\tif (CommandIdGEScanCommandId(tuple->t_cmax))\n \t\t\treturn HeapTupleSelfUpdated;\t\t/* updated after scan\n \t\t\t\t\t\t\t\t\t\t\t\t * started */\n \t\telse\n \t\t\treturn HeapTupleInvisible;\t/* updated before scan started */\n \t}\n \n! \tif (!TransactionIdDidCommit(tuple->t_xmax))\n \t{\n! \t\tif (TransactionIdDidAbort(tuple->t_xmax))\n \t\t{\n! 
\t\t\ttuple->t_infomask |= HEAP_XMAX_INVALID;\t\t/* aborted */\n \t\t\treturn HeapTupleMayBeUpdated;\n \t\t}\n \t\t/* running xact */\n***************\n*** 350,358 ****\n \t}\n \n \t/* xmax transaction committed */\n! \tth->t_infomask |= HEAP_XMAX_COMMITTED;\n \n! \tif (th->t_infomask & HEAP_MARKED_FOR_UPDATE)\n \t\treturn HeapTupleMayBeUpdated;\n \n \treturn HeapTupleUpdated;\t/* updated by other */\n--- 396,404 ----\n \t}\n \n \t/* xmax transaction committed */\n! \ttuple->t_infomask |= HEAP_XMAX_COMMITTED;\n \n! \tif (tuple->t_infomask & HEAP_MARKED_FOR_UPDATE)\n \t\treturn HeapTupleMayBeUpdated;\n \n \treturn HeapTupleUpdated;\t/* updated by other */\n***************\n*** 374,396 ****\n \n \t\tif (tuple->t_infomask & HEAP_MOVED_OFF)\n \t\t{\n- \t\t\t/*\n- \t\t\t * HeapTupleSatisfiesDirty is used by unique btree-s and so\n- \t\t\t * may be used while vacuuming.\n- \t\t\t */\n \t\t\tif (TransactionIdIsCurrentTransactionId((TransactionId) tuple->t_cmin))\n \t\t\t\treturn false;\n! \t\t\tif (TransactionIdDidCommit((TransactionId) tuple->t_cmin))\n \t\t\t{\n! \t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID;\n! \t\t\t\treturn false;\n \t\t\t}\n- \t\t\ttuple->t_infomask |= HEAP_XMIN_COMMITTED;\n \t\t}\n \t\telse if (tuple->t_infomask & HEAP_MOVED_IN)\n \t\t{\n \t\t\tif (!TransactionIdIsCurrentTransactionId((TransactionId) tuple->t_cmin))\n \t\t\t{\n \t\t\t\tif (TransactionIdDidCommit((TransactionId) tuple->t_cmin))\n \t\t\t\t\ttuple->t_infomask |= HEAP_XMIN_COMMITTED;\n \t\t\t\telse\n--- 420,443 ----\n \n \t\tif (tuple->t_infomask & HEAP_MOVED_OFF)\n \t\t{\n \t\t\tif (TransactionIdIsCurrentTransactionId((TransactionId) tuple->t_cmin))\n \t\t\t\treturn false;\n! \t\t\tif (!TransactionIdIsInProgress((TransactionId) tuple->t_cmin))\n \t\t\t{\n! \t\t\t\tif (TransactionIdDidCommit((TransactionId) tuple->t_cmin))\n! \t\t\t\t{\n! \t\t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID;\n! \t\t\t\t\treturn false;\n! \t\t\t\t}\n! 
\t\t\t\ttuple->t_infomask |= HEAP_XMIN_COMMITTED;\n \t\t\t}\n \t\t}\n \t\telse if (tuple->t_infomask & HEAP_MOVED_IN)\n \t\t{\n \t\t\tif (!TransactionIdIsCurrentTransactionId((TransactionId) tuple->t_cmin))\n \t\t\t{\n+ \t\t\t\tif (TransactionIdIsInProgress((TransactionId) tuple->t_cmin))\n+ \t\t\t\t\treturn false;\n \t\t\t\tif (TransactionIdDidCommit((TransactionId) tuple->t_cmin))\n \t\t\t\t\ttuple->t_infomask |= HEAP_XMIN_COMMITTED;\n \t\t\t\telse\n***************\n*** 416,425 ****\n \t\t{\n \t\t\tif (TransactionIdDidAbort(tuple->t_xmin))\n \t\t\t{\n! \t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID; /* aborted */\n \t\t\t\treturn false;\n \t\t\t}\n \t\t\tSnapshotDirty->xmin = tuple->t_xmin;\n \t\t\treturn true;\t\t/* in insertion by other */\n \t\t}\n \t\telse\n--- 463,473 ----\n \t\t{\n \t\t\tif (TransactionIdDidAbort(tuple->t_xmin))\n \t\t\t{\n! \t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID;\n \t\t\t\treturn false;\n \t\t\t}\n \t\t\tSnapshotDirty->xmin = tuple->t_xmin;\n+ \t\t\t/* XXX shouldn't we fall through to look at xmax? */\n \t\t\treturn true;\t\t/* in insertion by other */\n \t\t}\n \t\telse\n***************\n*** 474,479 ****\n--- 522,528 ----\n \tif (AMI_OVERRIDE)\n \t\treturn true;\n \n+ \t/* XXX this is horribly ugly: */\n \tif (ReferentialIntegritySnapshotOverride)\n \t\treturn HeapTupleSatisfiesNow(tuple);\n \n***************\n*** 484,501 ****\n \n \t\tif (tuple->t_infomask & HEAP_MOVED_OFF)\n \t\t{\n! \t\t\tif (TransactionIdDidCommit((TransactionId) tuple->t_cmin))\n! \t\t\t{\n! \t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID;\n \t\t\t\treturn false;\n \t\t\t}\n \t\t}\n \t\telse if (tuple->t_infomask & HEAP_MOVED_IN)\n \t\t{\n! \t\t\tif (!TransactionIdDidCommit((TransactionId) tuple->t_cmin))\n \t\t\t{\n! \t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID;\n! \t\t\t\treturn false;\n \t\t\t}\n \t\t}\n \t\telse if (TransactionIdIsCurrentTransactionId(tuple->t_xmin))\n--- 533,563 ----\n \n \t\tif (tuple->t_infomask & HEAP_MOVED_OFF)\n \t\t{\n! 
\t\t\tif (TransactionIdIsCurrentTransactionId((TransactionId) tuple->t_cmin))\n \t\t\t\treturn false;\n+ \t\t\tif (!TransactionIdIsInProgress((TransactionId) tuple->t_cmin))\n+ \t\t\t{\n+ \t\t\t\tif (TransactionIdDidCommit((TransactionId) tuple->t_cmin))\n+ \t\t\t\t{\n+ \t\t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID;\n+ \t\t\t\t\treturn false;\n+ \t\t\t\t}\n+ \t\t\t\ttuple->t_infomask |= HEAP_XMIN_COMMITTED;\n \t\t\t}\n \t\t}\n \t\telse if (tuple->t_infomask & HEAP_MOVED_IN)\n \t\t{\n! \t\t\tif (!TransactionIdIsCurrentTransactionId((TransactionId) tuple->t_cmin))\n \t\t\t{\n! \t\t\t\tif (TransactionIdIsInProgress((TransactionId) tuple->t_cmin))\n! \t\t\t\t\treturn false;\n! \t\t\t\tif (TransactionIdDidCommit((TransactionId) tuple->t_cmin))\n! \t\t\t\t\ttuple->t_infomask |= HEAP_XMIN_COMMITTED;\n! \t\t\t\telse\n! \t\t\t\t{\n! \t\t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID;\n! \t\t\t\t\treturn false;\n! \t\t\t\t}\n \t\t\t}\n \t\t}\n \t\telse if (TransactionIdIsCurrentTransactionId(tuple->t_xmin))\n***************\n*** 519,528 ****\n \t\telse if (!TransactionIdDidCommit(tuple->t_xmin))\n \t\t{\n \t\t\tif (TransactionIdDidAbort(tuple->t_xmin))\n! \t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID; /* aborted */\n \t\t\treturn false;\n \t\t}\n! \t\ttuple->t_infomask |= HEAP_XMIN_COMMITTED;\n \t}\n \n \t/*\n--- 581,591 ----\n \t\telse if (!TransactionIdDidCommit(tuple->t_xmin))\n \t\t{\n \t\t\tif (TransactionIdDidAbort(tuple->t_xmin))\n! \t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID;\n \t\t\treturn false;\n \t\t}\n! \t\telse\n! 
\t\t\ttuple->t_infomask |= HEAP_XMIN_COMMITTED;\n \t}\n \n \t/*\n***************\n*** 623,646 ****\n \t\t\treturn HEAPTUPLE_DEAD;\n \t\telse if (tuple->t_infomask & HEAP_MOVED_OFF)\n \t\t{\n \t\t\tif (TransactionIdDidCommit((TransactionId) tuple->t_cmin))\n \t\t\t{\n \t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID;\n \t\t\t\treturn HEAPTUPLE_DEAD;\n \t\t\t}\n- \t\t\t/* Assume we can only get here if previous VACUUM aborted, */\n- \t\t\t/* ie, it couldn't still be in progress */\n \t\t\ttuple->t_infomask |= HEAP_XMIN_COMMITTED;\n \t\t}\n \t\telse if (tuple->t_infomask & HEAP_MOVED_IN)\n \t\t{\n! \t\t\tif (!TransactionIdDidCommit((TransactionId) tuple->t_cmin))\n \t\t\t{\n- \t\t\t\t/* Assume we can only get here if previous VACUUM aborted */\n \t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID;\n \t\t\t\treturn HEAPTUPLE_DEAD;\n \t\t\t}\n- \t\t\ttuple->t_infomask |= HEAP_XMIN_COMMITTED;\n \t\t}\n \t\telse if (TransactionIdIsInProgress(tuple->t_xmin))\n \t\t\treturn HEAPTUPLE_INSERT_IN_PROGRESS;\n--- 686,715 ----\n \t\t\treturn HEAPTUPLE_DEAD;\n \t\telse if (tuple->t_infomask & HEAP_MOVED_OFF)\n \t\t{\n+ \t\t\tif (TransactionIdIsCurrentTransactionId((TransactionId) tuple->t_cmin))\n+ \t\t\t\treturn HEAPTUPLE_DELETE_IN_PROGRESS;\n+ \t\t\tif (TransactionIdIsInProgress((TransactionId) tuple->t_cmin))\n+ \t\t\t\treturn HEAPTUPLE_DELETE_IN_PROGRESS;\n \t\t\tif (TransactionIdDidCommit((TransactionId) tuple->t_cmin))\n \t\t\t{\n \t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID;\n \t\t\t\treturn HEAPTUPLE_DEAD;\n \t\t\t}\n \t\t\ttuple->t_infomask |= HEAP_XMIN_COMMITTED;\n \t\t}\n \t\telse if (tuple->t_infomask & HEAP_MOVED_IN)\n \t\t{\n! \t\t\tif (TransactionIdIsCurrentTransactionId((TransactionId) tuple->t_cmin))\n! \t\t\t\treturn HEAPTUPLE_INSERT_IN_PROGRESS;\n! \t\t\tif (TransactionIdIsInProgress((TransactionId) tuple->t_cmin))\n! \t\t\t\treturn HEAPTUPLE_INSERT_IN_PROGRESS;\n! \t\t\tif (TransactionIdDidCommit((TransactionId) tuple->t_cmin))\n! 
\t\t\t\ttuple->t_infomask |= HEAP_XMIN_COMMITTED;\n! \t\t\telse\n \t\t\t{\n \t\t\t\ttuple->t_infomask |= HEAP_XMIN_INVALID;\n \t\t\t\treturn HEAPTUPLE_DEAD;\n \t\t\t}\n \t\t}\n \t\telse if (TransactionIdIsInProgress(tuple->t_xmin))\n \t\t\treturn HEAPTUPLE_INSERT_IN_PROGRESS;\n***************\n*** 671,676 ****\n--- 740,751 ----\n \tif (tuple->t_infomask & HEAP_XMAX_INVALID)\n \t\treturn HEAPTUPLE_LIVE;\n \n+ \tif (tuple->t_infomask & HEAP_MARKED_FOR_UPDATE)\n+ \t{\n+ \t\t/* \"deleting\" xact really only marked it for update */\n+ \t\treturn HEAPTUPLE_LIVE;\n+ \t}\n+ \n \tif (!(tuple->t_infomask & HEAP_XMAX_COMMITTED))\n \t{\n \t\tif (TransactionIdIsInProgress(tuple->t_xmax))\n***************\n*** 698,709 ****\n \t/*\n \t * Deleter committed, but check special cases.\n \t */\n- \n- \tif (tuple->t_infomask & HEAP_MARKED_FOR_UPDATE)\n- \t{\n- \t\t/* \"deleting\" xact really only marked it for update */\n- \t\treturn HEAPTUPLE_LIVE;\n- \t}\n \n \tif (TransactionIdEquals(tuple->t_xmin, tuple->t_xmax))\n \t{\n--- 773,778 ----\n\n",
"msg_date": "Wed, 19 Dec 2001 12:23:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: problems with table corruption continued "
}
] |
[
{
"msg_contents": "After reading through the strong opinions about the location of the\nconfiguration files in the current and in previous threads, I must concede\nthat despite the best intentions, the current \"everything in one place\"\nsystem is obviously not addressing the needs of the user. So while we're\nat it we might as well consider more sweeping changes to bring the\nsystem in line with the \"expected\" or \"standard\" behaviour.\n\nConsider the following points:\n\n1. Most users will probably only run one server -- especially new users.\n\n2. Most users will expect configuration files somewhere under =~ /etc/ --\n including new users.\n\n3. To run more than one server, special knowledge and configuration is\n required anyway.\n\nTherefore, a wired-in configuration file location near /etc would be\nhelpful or at least indifferent for most users.\n\nI suggest that we wire-in the location of the configuration files into the\nbinaries as ${sysconfdir} as determined by configure. This would default\nto /usr/local/pgsql/etc, so the \"everything in one place\" system is still\nsomewhat preserved for those that care. For the confused, we could for a\nwhile install into the data directory files named \"postgresql.conf\",\n\"pg_hba.conf\", etc. that only contain text like \"This file is now to be\nfound at @sysconfdir@ by popular demand.\"\n\nFurthermore, I suggest that we wire-in the default location of the data\nfiles as ${localstatedir} as determined by configure. This would default\nto /usr/local/pgsql/var, which is not quite the same as the customary\n/usr/local/pgsql/data but it doesn't matter because with both \"initdb\" and\n\"postmaster\" defaulting to this directory and the configuration files\nelsewhere you don't really need to know except on few occasions. Having\nthis default would also save me a lot of typing during development. ;-)\n\nSurely we can also add a -C option to override the sysconfdir just as -D\noverrides localstatedir. 
Those that refuse to convert can also set -C\nequal to -D and have the old setup. Or the user can only specify -C to\npoint to the former -D and use the proposed 'datadir' parameter to find\nthe data area.\n\nBut I find a wired-in configuration file location better than having to\nuse -C every time you want a \"standard\" setup because this way we force\nusers to have consistent setups.\n\nWhat does this mean for multiple-server setups? Basically you add a -C to\neach invocation or you replace -D by -C as explained above. Or if you\nwant to share the configuration file you only need to add the right -p\noption to each invocation. This probably means someone will need to\nchange their scripts but as we hear they don't like them anyway. Someone\nthat currently relies on a -D being sufficient will at least get a clean\nfailure when the ports conflict.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 18 Dec 2001 23:24:43 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Thoughts on the location of configuration files"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> After reading through the strong opinions about the location of the\n> configuration files in the current and in previous threads, I must concede\n> that despite the best intentions, the current \"everything in one place\"\n> system is obviously not addressing the needs of the user. So while we're\n> at it we might as well consider more sweeping changes to bring the\n> system in line with the \"expected\" or \"standard\" behaviour.\n> \n> Consider the following points:\n> \n> 1. Most users will probably only run one server -- especially new users.\nAgreed\n> \n> 2. Most users will expect configuration files somewhere under =~ /etc/ --\n> including new users.\nAgreed\n> \n> 3. To run more than one server, special knowledge and configuration is\n> required anyway.\nAgreed\n> \n> Therefore, a wired-in configuration file location near /etc would be\n> helpful or at least indifferent for most users.\n> \n> I suggest that we wire-in the location of the configuration files into the\n> binaries as ${sysconfdir} as determined by configure. This would default\n> to /usr/local/pgsql/etc, so the \"everything in one place\" system is still\n> somewhat preserved for those that care. For the confused, we could for a\n> while install into the data directory files named \"postgresql.conf\",\n> \"pg_hba.conf\", etc. that only contain text like \"This file is now to be\n> found at @sysconfdir@ by popular demand.\"\n\nGreat!\n> \n> Furthermore, I suggest that we wire-in the default location of the data\n> files as ${localstatedir} as determined by configure. This would default\n> to /usr/local/pgsql/var, which is not quite the same as the customary\n> /usr/local/pgsql/data but it doesn't matter because with both \"initdb\" and\n> \"postmaster\" defaulting to this directory and the configuration files\n> elsewhere you don't really need to know except on few occasions. 
Having\n> this default would also save me a lot of typing during development. ;-)\n\nI guess that is OK, but I would also like a setting in the config file for the\ndatadir location, as well as hba and ident. That way a single \"-C\" can set the\nwhole world.\n\n> \n> Surely we can also add a -C option to override the sysconfdir just as -D\n> overrides localstatedir. Those that refuse to convert can also set -C\n> equal to -D and have the old setup. Or the user can only specify -C to\n> point to the former -D and use the proposed 'datadir' parameter to find\n> the data area.\n> \n> But I find a wired-in configuration file location better than having to\n> use -C everytime you want a \"standard\" setup because this way we force\n> users to have consistent setups.\n\nHowever, I wish \"-C\" to point to a specific file. \n\n> \n> What does this mean for multiple-server setups? Basically you add a -C to\n> each invocation or you replace -D by -C as explained above. Or if you\n> want to share the configuration file you only need to add the right -p\n> option to each invocation. This probably means someone will need to\n> change their scripts but as we hear they don't like them anyway. Someone\n> that currently relies on a -D being sufficient will at least get a clean\n> failure when the ports conflict.\n",
"msg_date": "Tue, 18 Dec 2001 18:31:32 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Therefore, a wired-in configuration file location near /etc would be\n> helpful or at least indifferent for most users.\n\nBy \"wired in\" you evidently don't mean hard-wired, but \"default\nestablished at configure time with the option to override from the\ncommand line\". That I can live with. We would presumably also\nretire the use of environment variable PGDATA, which strikes\nme as a Good Thing.\n\nOne thing we should think about before becoming too enthusiastic is\nsecurity considerations. Up to now, we have not really thought hard\nabout whether there are any items in the configuration files that\nshouldn't be visible to random users, because all of them live under\n$PGDATA and the directory protection on $PGDATA renders all the config\nfiles secure from prying eyes. But I do not think it is safe to assume\nthat config files living in /etc will reliably be made mode 0600. Are\nthere, or might in the future there be, any items in these files that\nwe'd not want to be world-readable?\n\nSecondary password files are a fairly obvious example of stuff better\nnot left out in the cold. We could probably deprecate the practice\nof keeping any actual passwords in such files ;-) ... but I wonder\nwhether it'd not be better to leave them under $PGDATA. A person\nslightly more paranoid than myself would argue against exposing any\npart of pg_hba.conf or pg_ident.conf.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Dec 2001 18:42:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files "
},
{
"msg_contents": "Peter Eisentraut wrote:\n\n>After reading through the strong opinions about the location of the\n>configuration files in the current and in previous threads, I must concede\n>that despite the best intentions, the current \"everything in one place\"\n>system is obviously not addressing the needs of the user. So while we're\n>at it we might as well consider more sweeping changes to bring the\n>system in line with the \"expected\" or \"standard\" behaviour.\n>\n>Consider the following points:\n>\n>1. Most users will probably only run one server -- especially new users.\n>\n>2. Most users will expect configuration files somewhere under =~ /etc/ --\n> including new users.\n>\n>3. To run more than one server, special knowledge and configuration is\n> required anyway.\n>\n>Therefore, a wired-in configuration file location near /etc would be\n>helpful or at least indifferent for most users.\n>\n>I suggest that we wire-in the location of the configuration files into the\n>binaries as ${sysconfdir} as determined by configure. This would default\n>to /usr/local/pgsql/etc, so the \"everything in one place\" system is still\n>somewhat preserved for those that care. For the confused, we could for a\n>while install into the data directory files named \"postgresql.conf\",\n>\"pg_hba.conf\", etc. that only contain text like \"This file is now to be\n>found at @sysconfdir@ by popular demand.\"\n>\nIn keeping with some of the more modern daemons (xinetd, etc) you might \nwant to consider something like /etc/pgsql.d/ as a directory name. \n Where as most folders with a .d contain a set of files or a referenced \nby the main config file in /etc. This is on a RedHat system, but I \nthink the logic applies well if you are flexible the location of the \nbase system config directory. (/usr/local/etc vs /etc, etc.)\n\n>\n>\n>Furthermore, I suggest that we wire-in the default location of the data\n>files as ${localstatedir} as determined by configure. 
This would default\n>to /usr/local/pgsql/var, which is not quite the same as the customary\n>/usr/local/pgsql/data but it doesn't matter because with both \"initdb\" and\n>\"postmaster\" defaulting to this directory and the configuration files\n>elsewhere you don't really need to know except on few occasions. Having\n>this default would also save me a lot of typing during development. ;-)\n>\n>Surely we can also add a -C option to override the sysconfdir just as -D\n>overrides localstatedir. Those that refuse to convert can also set -C\n>equal to -D and have the old setup. Or the user can only specify -C to\n>point to the former -D and use the proposed 'datadir' parameter to find\n>the data area.\n>\n>But I find a wired-in configuration file location better than having to\n>use -C everytime you want a \"standard\" setup because this way we force\n>users to have consistent setups.\n>\n>What does this mean for multiple-server setups? Basically you add a -C to\n>each invocation or you replace -D by -C as explained above. Or if you\n>want to share the configuration file you only need to add the right -p\n>option to each invocation. This probably means someone will need to\n>change their scripts but as we hear they don't like them anyway. Someone\n>that currently relies on a -D being sufficient will at least get a clean\n>failure when the ports conflict.\n>\n\n>\n\n",
"msg_date": "Tue, 18 Dec 2001 18:04:49 -0600",
"msg_from": "Thomas Swan <tswan-lst@ics.olemiss.edu>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files"
},
{
"msg_contents": "> >I suggest that we wire-in the location of the configuration files into the\n> >binaries as ${sysconfdir} as determined by configure. This would default\n> >to /usr/local/pgsql/etc, so the \"everything in one place\" system is still\n> >somewhat preserved for those that care. For the confused, we could for a\n> >while install into the data directory files named \"postgresql.conf\",\n> >\"pg_hba.conf\", etc. that only contain text like \"This file is now to be\n> >found at @sysconfdir@ by popular demand.\"\n> >\n> In keeping with some of the more modern daemons (xinetd, etc) you might \n> want to consider something like /etc/pgsql.d/ as a directory name. \n> Where as most folders with a .d contain a set of files or a referenced \n> by the main config file in /etc. This is on a RedHat system, but I \n> think the logic applies well if you are flexible the location of the \n> base system config directory. (/usr/local/etc vs /etc, etc.)\n\nI often wondered, if it is a directory, why do they need the '.d' in the\nname? What possible purpose could it have except to look ugly? :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Dec 2001 21:06:31 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files"
},
{
"msg_contents": "On Tuesday 18 December 2001 05:24 pm, Peter Eisentraut wrote:\n> After reading through the strong opinions about the location of the\n> configuration files in the current and in previous threads, I must concede\n> that despite the best intentions, the current \"everything in one place\"\n> system is obviously not addressing the needs of the user. So while we're\n> at it we might as well consider more sweeping changes to bring the\n> system in line with the \"expected\" or \"standard\" behaviour.\n\n> Surely we can also add a -C option to override the sysconfdir just as -D\n> overrides localstatedir. Those that refuse to convert can also set -C\n> equal to -D and have the old setup. Or the user can only specify -C to\n> point to the former -D and use the proposed 'datadir' parameter to find\n> the data area.\n\nWhile having config files named differently from 'postgresql.conf' would be a \nnice thing, I can live with your proposal without being able to specify \narbitrary conf file names. A subdirectory under sysconfdir for each \n'virtual' database would be sufficient.\n\nAs to the security points that Tom brings up, you don't put anything in /etc \ndirectly -- you put it under /etc/pgsql, and lock it down the same as $PGDATA. \nOf course, the logic to do this sort of thing already exists in the configure \nscript.... Also on the topic of security, 'encouraging' the use of separate \nsubdirs for each server would also provide more isolation for the users of \nthose servers.\n\nOh, and BTW: when this is implemented, the dream of multiple servers running \nunder the RPMset install will be realizable... :-)\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 18 Dec 2001 22:47:18 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files"
},
{
"msg_contents": "Noting Peter's thought on the matter and merging some of the things I think are\nimportant, I wrote this down as a sort of way of fleshing out the precise\nmeaning of the configuration options:\n\nCommand line options:\n-C filepath_name\nIf filepath_name is a file, it is treated as a configuration file, no other\ninformation is assumed. If filepath_name is a directory, it is used to replace\nthe default \"sysconfdir\" obtained from configure. \n\n-D datadir\nThe -D option overrides any default setting, and any setting in\npostgresql.conf.\n\nOther options\nAll other options override the defaults set by either the \"configure\" operation\nor the active postgresql.conf file.\n\npostgresql.conf\nBy default it will live in sysconfdir as configured by \"configure.\" \nIf it is not found in the \"sysconfdir,\" $PGDATA will be searched.\nUsing the \"-C\" option forces the file's explicit location. If \"-C\" is\nspecified, postgresql.conf (or filename) must exist as specified in accordance\nwith the documented \"-C\" behavior.\nIt may contain a setting \"hbaconfig\" which will override the default location.\nIt may contain a setting \"identconfig\" which will override the default\nlocation.\nIt may contain a setting \"datadir\" which will override the default location.\n\npg_hba.conf\nBy default it will live in sysconfdir as configured by \"configure.\"\nIts location can be changed by \"hbaconfig\" in postgresql.conf.\nIf not configured in postgresql.conf and not found within \"sysconfdir,\" $PGDATA\nwill be searched. If explicitly configured in the postgresql.conf file, it must\nexist as specified.\n\npg_ident.conf\nBy default it will live in sysconfdir as configured by \"configure.\"\nIts location can be changed by \"identconfig\" in postgresql.conf.\nIf not configured in postgresql.conf and not found within \"sysconfdir,\" $PGDATA\nwill be searched. 
If explicitly configured in the postgresql.conf file, it must\nexist as specified.\n\nPGDATA\nThe data directory will be found in the directory configured by GNU \"configure\".\nIf the environment variable PGDATA is specified, it overrides the configured\ndefault.\nIf the postgresql.conf file contains \"datadir\" it overrides the previous\ntwo.\nIf the command line \"-D\" option is used, it overrides the previous three.\n\n\nNote:\nI think the data directory should be explicitly configured by either the\npostgresql.conf file, environment variable (PGDATA), or through the command line\noption, but using the \"configure\" statedir isn't anything anyone would object\nto.\n\n\nWhat do you all think? \nIs anything ambiguous?\nIs anything wrong?\nCan we all agree this is how it should be?\n",
"msg_date": "Tue, 18 Dec 2001 23:48:32 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files, how about this:"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> As to the security points that Tom brings up, you don't put anything in /etc \n> directly -- you put it under /etc/pgsql, and lock it down the same as$PGDATA.\n\nThat'd work if we assume that /etc/pgsql can be owned by the postgres\nuser. Is that kosher per the various filesystem layout standards?\nSeems to me that someone who thinks the executables should be root-owned\nis likely to think the same of the config files.\n\nPersonally I think this would be a fine idea, I'm just worried that\nwe'll find packagers overriding the decision because \"the Debian\nstandards don't allow you to do that\" or whatever.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Dec 2001 23:50:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files "
},
{
"msg_contents": "On Tuesday 18 December 2001 11:50 pm, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > As to the security points that Tom brings up, you don't put anything in\n> > /etc directly -- you put it under /etc/pgsql, and lock it down the same\n> > as $PGDATA.\n\n> That'd work if we assume that /etc/pgsql can be owned by the postgres\n> user. Is that kosher per the various filesystem layout standards?\n\nThe Red-Hat-issue 'ntp' package has a /etc/ntp that is owned by ntp.ntp. So \nthere's at least a precedent. I'll have to peruse the FHS to see if it's \nparve or not. Cursory reading indicates that it is not specified as to \nownership in /etc. The LSB may state something else -- I'll look at it \nlater, unless someone else wants to beat me to it... :-)\n\nHowever, that same standard states, about /var/lib (under which PGDATA lives, \nas the database itself is 'state information'), that users must never need to \nmodify files here for configuration of program operation. IE, the current \nRPM packages are not FHS-2.2 compliant, as postgresql.conf is under /var/lib. \n:-(\n\nThis config file change would allow compliance much more easily.\n\n> Seems to me that someone who thinks the executables should be root-owned\n> is likely to think the same of the config files.\n\nSorry to disappoint you :-). No, I envision a tree where you could have:\n/etc/pgsql\ndrwx------ 1 pari pari 4096 Nov 9 01:16 pari\ndrwx------ 1 postgres postgres 4096 Nov 9 01:11 main-web\ndrwx------ 1 nobody nobody 4096 May 15 2000 devel\ndrwxrwx--- 1 lowen wgcr 4096 Nov 9 22:37 wgcr\n\nOr some such. And the existing config files are postgres.postgres owned, \nunder /var/lib/pgsql (the whole tree is postgres owned). To match the \n/etc/pgsql tree, I'd do the same in /var/lib/pgsql, with the default location \nbeing 'data' in order to be backward-compatible.\n\nHowever, IMHO, for best security, the executables do need to be root owned. \nIMHO. 
Even though none of our executables runs as root or is suid root, it \nis just a good practice to not have network-accessible executables being able \nto overwrite themselves under buffer overflow conditions. This is procedure \nde rigueur for webservers -- at least one set of the AOLserver docs \nspecifically recommends it. Of course, a webserver requires running as root \nto bind TCP port 80, but the principle is, IMHO, still valid for non-root \nunprivileged-port-binding daemons -- they shouldn't be able to scribble on \ntop of themselves.\n\n> Personally I think this would be a fine idea, I'm just worried that\n> we'll find packagers overriding the decision because \"the Debian\n> standards don't allow you to do that\" or whatever.\n\nOliver? My gut feel is that Oliver would jump for joy over this proposal. \nBut Oliver should answer for himself.\n\nRed Hat doesn't have an external packaging standards document; what I've \nfound I've found through the FHS, the Mandrake RPM HOWTO, and trial and error \n(the trials of error?). Trond, Jeff Johnson, Cristian Gafton, and lots of \nactual users of my packages have taught me much more than any document has. \n:-) Some lessons are more 'memorable' than others.....\n\nOr, more bluntly, I don't plan on 'overriding' this -- nay, this thing would \nsuit me _just_fine_. Too bad this is a post-7.2 thing.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 19 Dec 2001 00:42:45 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files"
},
{
"msg_contents": "> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > As to the security points that Tom brings up, you don't put anything in /etc \n> > directly -- you put it under /etc/pgsql, and lock it down the same as$PGDATA.\n> \n> That'd work if we assume that /etc/pgsql can be owned by the postgres\n> user. Is that kosher per the various filesystem layout standards?\n> Seems to me that someone who thinks the executables should be root-owned\n> is likely to think the same of the config files.\n> \n> Personally I think this would be a fine idea, I'm just worried that\n> we'll find packagers overriding the decision because \"the Debian\n> standards don't allow you to do that\" or whatever.\n\nSeems the proper default location is /usr/local/pgsql/config. Anything\nelse and non-root people have trouble with the install.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 19 Dec 2001 00:47:41 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Seems the proper default location is /usr/local/pgsql/config. Anything\n> else and non-root people have trouble with the install.\n\nI think it'd be reasonable for the source distribution to be set up\nto default to that, but the RPMs need not, since they're not intended\nto be installed non-root.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Dec 2001 00:57:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Seems the proper default location is /usr/local/pgsql/config. Anything\n> > else and non-root people have trouble with the install.\n> \n> I think it'd be reasonable for the source distribution to be set up\n> to default to that, but the RPMs need not, since they're not intended\n> to be installed non-root.\n\nYes, I thought we were just talking about our source default.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 19 Dec 2001 01:05:34 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files"
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Seems the proper default location is /usr/local/pgsql/config. Anything\n> > else and non-root people have trouble with the install.\n> \n> I think it'd be reasonable for the source distribution to be set up\n> to default to that, but the RPMs need not, since they're not intended\n> to be installed non-root.\n\nLet me add I think a separate /config directory is a good idea rather\nthan putting it in /data because when you do pg_dump, you don't need a\nfile system backup of /data, except that you do need to backup those\nconfig files because they are not part of the contents of pg_dump. I\nhad to mention that particularly in my book, and it was kind of awkward.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 19 Dec 2001 01:07:45 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n>> Seems to me that someone who thinks the executables should be root-owned\n>> is likely to think the same of the config files.\n\n> Sorry to disappoint you :-).\n> ...\n> However, IMHO, for best security, the executables do need to be root owned. \n\nOr at least not owned/writable by the postgres user. Sure, that seems\nlike a good idea for a high-security installation. But I always thought\nthe motivation for that rule was to prevent someone who'd gained some\ncontrol of the program (eg via a buffer-overrun exploit) from expanding\nhis exploit by overwriting the executables with malicious code. If the\nconfig files can be overwritten by the postgres user, then you still\nhave an avenue for an attacker to expand his privileges. Example: he\ncan trivially become postgres superuser after altering pg_hba.conf.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Dec 2001 01:09:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files "
},
{
"msg_contents": "On Wednesday 19 December 2001 01:09 am, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> >> Seems to me that someone who thinks the executables should be root-owned\n> >> is likely to think the same of the config files.\n\n> > Sorry to disappoint you :-).\n ...\n> > However, IMHO, for best security, the executables do need to be root\n> > owned.\n\n> his exploit by overwriting the executables with malicious code. If the\n> config files can be overwritten by the postgres user, then you still\n> have an avenue for an attacker to expand his privileges. Example: he\n> can trivially become postgres superuser after altering pg_hba.conf.\n\nThis much is true. Hmmm. More thought required.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 19 Dec 2001 01:13:29 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files"
},
{
"msg_contents": "On Wednesday 19 December 2001 12:47 am, Bruce Momjian wrote:\n> > Lamar Owen <lamar.owen@wgcr.org> writes:\n> > > As to the security points that Tom brings up, you don't put anything in\n> > > /etc directly -- you put it under /etc/pgsql, and lock it down the same\n> > > as$PGDATA.\n\n> > That'd work if we assume that /etc/pgsql can be owned by the postgres\n\n> > Personally I think this would be a fine idea, I'm just worried that\n> > we'll find packagers overriding the decision because \"the Debian\n> > standards don't allow you to do that\" or whatever.\n\n> Seems the proper default location is /usr/local/pgsql/config. Anything\n> else and non-root people have trouble with the install.\n\nOh, I'm not talking _default_ -- I'm talking 'optional and allowed'. IMHO, \ndefault should be /usr/local/pgsql/etc. This is sysconfdir under configure in \nthe default case, right? That's Peter's proposal -- use sysconfdir for its \nintended purpose in all installs. Of course, sysconfdir varies -- but then a \n'pg_config --configure' gives you where things are by default..... Although I \ndidn't know about 'statedir' being PREFIX/var by default. Nice one to know.\n\nCould pg_config possibly be endowed with another option -- while listing the \noptions given to configure is nice, it would be nicer to list all the options \nconfigure had, including the defaults, in a slightly more useful form?\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 19 Dec 2001 01:23:34 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files"
},
{
"msg_contents": ">>>Tom Lane said:\n > Secondary password files are a fairly obvious example of stuff better\n > not left out in the cold. We could probably deprecate the practice\n > of keeping any actual passwords in such files ;-) ... but I wonder\n > whether it'd not be better to leave them under $PGDATA. A person\n > slightly more paranoid than myself would argue against exposing any\n > part of pg_hba.conf or pg_ident.conf.\n\nThen, count me more paranoid that you.\n\nIn a 'serious' database setup, it is unlikely anyone to have 'shell' access to \nthe database server except 'root' and the DBA (I tend to assume in many places \nsuch separation is valid). This will include larger setups. In these cases \nwhere the config files are is not important at all - perhaps the reason for \nthe current situation.\n\nIn 'lets try it' setups, many people will have access to the files on the \nmachine and the current setup is fairly secure. However, it will also be \nsecure enough, if files in /etc are mode 600 (or just not writable/readable by \nother) - perhaps PostgreSQL should just refuse to run, if they are not?\n\nThe point in hiding pg_hba.conf and pg_ident.conf for example is that an \ninexperienced DBA may well make errors in these files that permit unwanted \naccess - this is much easier to exploit - and no, I don't advocate security \ntrough obscurity.\n\nDaniel\n\n",
"msg_date": "Wed, 19 Dec 2001 10:36:41 +0200",
"msg_from": "Daniel Kalchev <daniel@digsys.bg>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files "
},
{
"msg_contents": ">>>Thomas Swan said:\n > In keeping with some of the more modern daemons (xinetd, etc) you might \n > want to consider something like /etc/pgsql.d/ as a directory name. \n > Where as most folders with a .d contain a set of files or a referenced \n > by the main config file in /etc. This is on a RedHat system, but I \n > think the logic applies well if you are flexible the location of the \n > base system config directory. (/usr/local/etc vs /etc, etc.)\n\nI run BSD, and I believe config files should sit in /etc if the files are not \nmany. We can even go with one config file, such as postgres.conf which will \ninclude the paths to other files - that can sit anywhere - in /etc/pgsql for \nexample or in /usr/local/pgsql/etc.\n\nBut, let's not start religious wars whether the System V way is better than \nBSD's :-)\n\nDaniel\n\n",
"msg_date": "Wed, 19 Dec 2001 10:40:59 +0200",
"msg_from": "Daniel Kalchev <daniel@digsys.bg>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> >> Seems to me that someone who thinks the executables should be root-owned\n> >> is likely to think the same of the config files.\n> \n> > Sorry to disappoint you :-).\n> > ...\n> > However, IMHO, for best security, the executables do need to be root owned.\n> \n> Or at least not owned/writable by the postgres user. Sure, that seems\n> like a good idea for a high-security installation. But I always thought\n> the motivation for that rule was to prevent someone who'd gained some\n> control of the program (eg via a buffer-overrun exploit) from expanding\n> his exploit by overwriting the executables with malicious code. If the\n> config files can be overwritten by the postgres user, then you still\n> have an avenue for an attacker to expand his privileges. Example: he\n> can trivially become postgres superuser after altering pg_hba.conf.\n\nOne of the nice features of putting configuration files in /etc\ninstead of /var is that some people like to mount the root\nfilesystem (non-/var directories) read-only on a disc that is\nphysically jumpered read-only, or some other read-only media. Its an\nattempt to prevent buffer exploits from modifying executables and\nconfiguration files, even if root is achieved. Of course, it\nwouldn't stop someone with destroying anything in /var, but it at\nleast limits the potential damage in some meaningful way.\n\nMike Mascari\nmascarm@mascari.com\n",
"msg_date": "Wed, 19 Dec 2001 04:16:56 -0500",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files"
},
{
"msg_contents": "mlw writes:\n\n> -C filepath_name\n> If filepath_name is a file, it is treated as a configuration file, no other\n> information is assumed. If filepath_file is a directory, it is used to replace\n> the default \"sysconfdir\" obtained from configure.\n\nThis seems like a reasonable compromise, but I think I'm OK with -C\nspecifying the name of the config file only. Otherwise it would be too\nmuch logic for something that you can work around with tab completion.\n\n> postgresql.conf\n> By default it will live in sysconfdir as configured by \"configure.\"\n> If it is not found in the \"sysconfdir,\" $PGDATA will be searched.\n\nI don't think I like \"if not found in X then search Y\". If the file is\nnot where it was configured to be then it's an error or it will be\nignored, or whatever the usual behavior would be.\n\nLooking into $PGDATA is probably something we want to discourage, not do\nimplicitly. The backup/upgrade issue would be much simplified if we kept\nhand-edited files out of there. Are you concerned about backward\ncompatibility? I think a note in the data directory that tells users\nwhere to find the files is OK. Others may disagree.\n\nAlso, I'm not sure exactly what you mean with $PGDATA. If you mean \"the\ndata area\", then this would be complicated to arrange, because the data\narea is or may be configured in postgresql.conf. If you mean the actual\nenvironment variable, I think environment variables should override\ncompiled-in defaults, not serve as fallbacks. That's just the usual order\nof priorities.\n\n> pg_hba.conf\n> By default it will live in sysconfdir as configured by \"configure.\"\n> Its location can be changed by \"hbaconfig\" in postgresql.conf.\n> If not configured in postgresql.conf and not found within \"sysconfdir,\" $PGDATA\n> will be searched. 
If explicitly configured in the postgresql.conf file, it must\n> exist as specified.\n\nSame concern here.\n\n> Note:\n> I think the data directory should be explicitly configured by either the\n> posgresql.conf file, environment variable (PGDATA), or through the command line\n> option, but using the \"configure\" statedir isn't anything anyone would object\n> too.\n\nFixed locations create consistency and save typing. Both are tremendous\ntime-savers.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 20 Dec 2001 18:12:55 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on the location of configuration files, how about this:"
},
{
"msg_contents": "Tom Lane writes:\n\n> One thing we should think about before becoming too enthusiastic is\n> security considerations. Up to now, we have not really thought hard\n> about whether there are any items in the configuration files that\n> shouldn't be visible to random users, because all of them live under\n> $PGDATA and the directory protection on $PGDATA renders all the config\n> files secure from prying eyes.\n\nThe important thing is that we give users the option of setting it up in\nwhich ever way they like.\n\nPersonally, I would make the configuration files 0644 by default.\nThere's nothing in there that you can't get at in another way or which\nwould matter to outsiders. I hope in the next release we make the\nunix_socket_permissions default to 0700 so the default setup is totally\nsecure even if you messed up your pg_hba.conf.\n\nIf people don't feel like exposing their pg_hba.conf setup to the world,\nthen let them change the permissions. There are several useful ways,\nincluding the old owned-by-postgres, or root ownership and a 'postgres'\ngroup that can read the file for the sophisticated. Put a comment at the\ntop of the file reminding the user to think about it, and we should be as\nsafe as it can get.\n\n> Secondary password files are a fairly obvious example of stuff better\n> not left out in the cold. We could probably deprecate the practice\n> of keeping any actual passwords in such files ;-) ... but I wonder\n> whether it'd not be better to leave them under $PGDATA.\n\nIf you put actual passwords in those files then you should think about\nmaking the file not readable by anyone but the server. The most we can\nreasonably do there is to put a clear reminder somewhere. But password\nfiles are traditionally kept with config files, so I think it's okay.\nAlso, keeping *all* hand-edited files out of the data directory would\nsimplify the backup and upgrade process.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 20 Dec 2001 18:13:30 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on the location of configuration files "
},
{
"msg_contents": "> Personally, I would make the configuration files 0644 by default.\n> There's nothing in there that you can't get at in another way or which\n> would matter to outsiders. I hope in the next release we make the\n> unix_socket_permissions default to 0700 so the default setup is totally\n> secure even if you messed up your pg_hba.conf.\n\nI have an idea for the Unix socket file permissions and local 'trust'\npermissoins as default. Right now we allow the socket permissions to be\nset in postgresql.conf, but that seems like the wrong place for it.\n\nSuppose we add an option to pg_hba.conf for 'local' connections called\n'singleuser' and 'singlegroup' which set enable socket permissions only for the\npostgres super-user or his group.\n\nThat way, we can ship the default pg_hba.conf file default as\n'singleuser' and allow people to change it as they wish.\n\nIf people think this is a good idea, I will add it to the TODO list.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 23 Dec 2001 22:27:13 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I have an idea for the Unix socket file permissions and local 'trust'\n> permissoins as default. Right now we allow the socket permissions to be\n> set in postgresql.conf, but that seems like the wrong place for it.\n\n> Suppose we add an option to pg_hba.conf for 'local' connections called\n> 'singleuser' and 'singlegroup' which set enable socket permissions\n> only for the postgres super-user or his group.\n\nThat strikes me as (a) not better, and (b) not backwards compatible.\nWhat's the point?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 23 Dec 2001 22:31:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I have an idea for the Unix socket file permissions and local 'trust'\n> > permissoins as default. Right now we allow the socket permissions to be\n> > set in postgresql.conf, but that seems like the wrong place for it.\n> \n> > Suppose we add an option to pg_hba.conf for 'local' connections called\n> > 'singleuser' and 'singlegroup' which set enable socket permissions\n> > only for the postgres super-user or his group.\n> \n> That strikes me as (a) not better, and (b) not backwards compatible.\n> What's the point?\n\nWell, the problem with backward compatibility here is that now we have\npg_hba.conf to configure some part of local authentication and\npostgresql.conf to configure the other part. Seems quite confusing to\nme. If you would prefer, we could allow specification of the socket\npermissions in pg_hba.conf.\n\nAren't the socket permissions best dealt with in pg_hba.conf?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 23 Dec 2001 22:35:58 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Well, the problem with backward compatibility here is that now we have\n> pg_hba.conf to configure some part of local authentication and\n> postgresql.conf to configure the other part.\n\nSeems a pretty empty argument. pg_ident.conf also (now) bears on local\nauthentication, as does any random secondary-password file the user\nmight select. Shall we find a way to smush all that into pg_hba.conf?\n\n> Aren't the socket permissions best dealt with in pg_hba.conf?\n\nMaybe if we were designing the whole thing from scratch, it'd be cleaner\nto do it that way ... but it doesn't seem enough cleaner to justify\ncreating a compatibility issue.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 23 Dec 2001 22:43:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Well, the problem with backward compatibility here is that now we have\n> > pg_hba.conf to configure some part of local authentication and\n> > postgresql.conf to configure the other part.\n> \n> Seems a pretty empty argument. pg_ident.conf also (now) bears on local\n> authentication, as does any random secondary-password file the user\n> might select. Shall we find a way to smush all that into pg_hba.conf?\n> \n> > Aren't the socket permissions best dealt with in pg_hba.conf?\n> \n> Maybe if we were designing the whole thing from scratch, it'd be cleaner\n> to do it that way ... but it doesn't seem enough cleaner to justify\n> creating a compatibility issue.\n\nHow many people really use unix socket permissions in postgresql.conf?\nProbably very few. We could announce when it goes away, and even throw\nan error if it appears in postgresql.conf. Seems that would clear it up\nand make the feature much more usable.\n\nSecurity is very easy to mess up. That's why I think clarity is\nimportant. If we are going to change the default socket permissions to\n700, that clearly would be a good time to make the change, no?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 23 Dec 2001 22:49:38 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files"
},
{
"msg_contents": "> How many people really use unix socket permissions in postgresql.conf?\n> Probably very few. We could announce when it goes away, and even throw\n> an error if it appears in postgresql.conf. Seems that would clear it up\n> and make the feature much more usable.\n> \n> Security is very easy to mess up. That's why I think clarity is\n> important. If we are going to change the default socket permissions to\n> 700, that clearly would be a good time to make the change, no?\n\nNow that I look at postgresql.conf, I do see lots of connection-related\nstuff:\n\t\n\t#\n\t# Connection Parameters\n\t#\n\t#tcpip_socket = false\n\t#ssl = false\n\t\n\t#max_connections = 32\n\t\n\t#port = 5432 \n\t#hostname_lookup = false\n\t#show_source_port = false\n\t\n\t#unix_socket_directory = ''\n\t#unix_socket_group = ''\n\t#unix_socket_permissions = 0777\n\t\n\t#virtual_host = ''\n\t\n\t#krb_server_keyfile = ''\n\nI guess my problem is that we will have 'trust' in pg_hba.conf, but then\noverride that in postgresql.conf by restricting permissions to one user.\nThat seems kind of strange. We may have to change pg_hba.conf 'trust'\nanyway to something like 'socketpermit', or remove the permission\nsetting in postgresql.conf and add the two new ones I suggested,\nsingleuser, and singlegroup.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 23 Dec 2001 23:39:47 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files"
},
{
"msg_contents": "I still don't think you've presented any argument that justifies\nbreaking existing config files ... but I'll shut up now and wait\nto see what others think.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 23 Dec 2001 23:56:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files "
},
{
"msg_contents": "Bruce Momjian writes:\n\n> I have an idea for the Unix socket file permissions and local 'trust'\n> permissoins as default. Right now we allow the socket permissions to be\n> set in postgresql.conf, but that seems like the wrong place for it.\n>\n> Suppose we add an option to pg_hba.conf for 'local' connections called\n> 'singleuser' and 'singlegroup' which set enable socket permissions only for the\n> postgres super-user or his group.\n\nThis is neither necessarily better, nor even possible.\n\nThe pg_hba.conf file describes a set (or list) of rules whose input values\nare certain known parameters from the connection request and whose output\nvalue is an authentication method. The permissions of the socket operate\non a completely different level: they are considered before a connection\nrequest is even generated from the postmaster's point of view, and they\ndon't describe any part of any rule that evaluates to an authentication\nmethod, instead they are a scalar state variable of the server.\n\nYou can have more than one 'local' record, but you can have only one set\nof permissions for the socket, so it wouldn't work in general cases.\nMoreover, attaching the permissions to each record gives users a view of\nthe world which really isn't there, which is quite worse, considering that\nit's a security-related issue.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 25 Dec 2001 21:29:25 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on the location of configuration files"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n\n> Therefore, a wired-in configuration file location near /etc would be\n> helpful or at least indifferent for most users.\n> \n> I suggest that we wire-in the location of the configuration files into the\n> binaries as ${sysconfdir} as determined by configure. This would default\n> to /usr/local/pgsql/etc, so the \"everything in one place\" system is still\n> somewhat preserved for those that care. For the confused, we could for a\n> while install into the data directory files named \"postgresql.conf\",\n> \"pg_hba.conf\", etc. that only contain text like \"This file is now to be\n> found at @sysconfdir@ by popular demand.\"\n> \n> Furthermore, I suggest that we wire-in the default location of the data\n> files as ${localstatedir} as determined by configure. This would default\n> to /usr/local/pgsql/var, which is not quite the same as the customary\n> /usr/local/pgsql/data but it doesn't matter because with both \"initdb\" and\n> \"postmaster\" defaulting to this directory and the configuration files\n> elsewhere you don't really need to know except on few occasions. Having\n> this default would also save me a lot of typing during development. ;-)\n> \n> Surely we can also add a -C option to override the sysconfdir just as -D\n> overrides localstatedir. Those that refuse to convert can also set -C\n> equal to -D and have the old setup. Or the user can only specify -C to\n> point to the former -D and use the proposed 'datadir' parameter to find\n> the data area.\n\nI like this, but I'd prefer to have \"-C\" point to a specific\nconfiguration file.\n\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n",
"msg_date": "26 Dec 2001 09:51:04 -0500",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files"
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> \n> > Therefore, a wired-in configuration file location near /etc would be\n> > helpful or at least indifferent for most users.\n> > \n> > I suggest that we wire-in the location of the configuration files into the\n> > binaries as ${sysconfdir} as determined by configure. This would default\n> > to /usr/local/pgsql/etc, so the \"everything in one place\" system is still\n> > somewhat preserved for those that care. For the confused, we could for a\n> > while install into the data directory files named \"postgresql.conf\",\n> > \"pg_hba.conf\", etc. that only contain text like \"This file is now to be\n> > found at @sysconfdir@ by popular demand.\"\n> > \n> > Furthermore, I suggest that we wire-in the default location of the data\n> > files as ${localstatedir} as determined by configure. This would default\n> > to /usr/local/pgsql/var, which is not quite the same as the customary\n> > /usr/local/pgsql/data but it doesn't matter because with both \"initdb\" and\n> > \"postmaster\" defaulting to this directory and the configuration files\n> > elsewhere you don't really need to know except on few occasions. Having\n> > this default would also save me a lot of typing during development. ;-)\n> > \n> > Surely we can also add a -C option to override the sysconfdir just as -D\n> > overrides localstatedir. Those that refuse to convert can also set -C\n> > equal to -D and have the old setup. 
Or the user can only specify -C to\n> > point to the former -D and use the proposed 'datadir' parameter to find\n> > the data area.\n> \n> I like this, but I'd prefer to have \"-C\" point to a specific\n> configuration file.\n\nI understand the value of pointing to a specific configuration file, but\nwe then would need to define the location of pg_hba.conf and others in\nthat file, and it makes it hard to move that directory anywhere because\nthe file paths have to be updated in the file. Of course, we could\ndefault to look in the same directory as the config file, but that seems\nquite confusing. Seems easier to just point to a directory and find all\nthe stuff in there you want.\n\nThis does allow you to share postgresql.conf, pg_hba.conf, and\npg_ident.conf with multiple servers if you override the port on\npostmaster startup for each server. What it doesn't allow you to do is\nshare only pg_hba.conf. For that, you have to set up multiple\ndirectories and symlink the pg_hba.conf's together.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 29 Dec 2001 21:38:44 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files"
},
{
"msg_contents": "\nPlease forget what I said here. I see this thread was continued later\nand a compromise was reached.\n\n---------------------------------------------------------------------------\n\n> > > Surely we can also add a -C option to override the sysconfdir just as -D\n> > > overrides localstatedir. Those that refuse to convert can also set -C\n> > > equal to -D and have the old setup. Or the user can only specify -C to\n> > > point to the former -D and use the proposed 'datadir' parameter to find\n> > > the data area.\n> > \n> > I like this, but I'd prefer to have \"-C\" point to a specific\n> > configuration file.\n> \n> I understand the value of pointing to a specific configuration file, but\n> we then would need to define the location of pg_hba.conf and others in\n> that file, and it makes it hard to move that directory anywhere because\n> the file paths have to be updated in the file. Of course, we could\n> default to look in the same directory as the config file, but that seems\n> quite confusing. Seems easier to just point to a directory and find all\n> the stuff in there you want.\n> \n> This does allow you to share postgresql.conf, pg_hba.conf, and\n> pg_ident.conf with multiple servers if you override the port on\n> postmaster startup for each server. What it doesn't allow you to do is\n> share only pg_hba.conf. For that, you have to set up multiple\n> directories and symlink the pg_hba.conf's together.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 29 Dec 2001 22:20:44 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files"
}
] |
[
{
"msg_contents": "In the vacuum tuple-chain moving logic, shouldn't the lines that update\nthe new tuple's t_ctid (vacuum.c lines 1882-1891 in current sources)\nbe moved up to before the log_heap_move call at line 1866?\n\nIt appears to me that as the code stands, log_heap_move will log the new\ntuple containing the wrong t_ctid; therefore, if we crash and have to\nredo the transaction from WAL, the wrong t_ctid will be restored. No?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Dec 2001 19:22:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Possible bug in vacuum redo"
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane\n> \n> In the vacuum tuple-chain moving logic, shouldn't the lines that update\n> the new tuple's t_ctid (vacuum.c lines 1882-1891 in current sources)\n> be moved up to before the log_heap_move call at line 1866?\n> \n> It appears to me that as the code stands, log_heap_move will log the new\n> tuple containing the wrong t_ctid; therefore, if we crash and have to\n> redo the transaction from WAL, the wrong t_ctid will be restored. No?\n\nAFAIR t_ctid isn't logged in WAL.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Sun, 23 Dec 2001 00:52:37 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Possible bug in vacuum redo"
},
{
"msg_contents": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> AFAIR t_ctid isn't logged in WAL.\n\nAfter looking at the heap_update code I think you are right. Doesn't\nthat render the field completely useless/unreliable?\n\nIn the simple heap_update case I think that heap_xlog_update could\neasily set the old tuple's t_ctid field correctly. Not sure how\nit works when VACUUM is moving tuple chains around, however.\n\nAnother thing I am currently looking at is that I do not believe VACUUM\nhandles tuple chain moves correctly. It only enters the chain-moving\nlogic if it finds a tuple that is in the *middle* of an update chain,\nie, both the prior and next tuples still exist. In the case of a\ntwo-element update chain (only the latest and next-to-latest tuples of\na row survive VACUUM), AFAICT vacuum will happily move the latest tuple\nwithout ever updating the previous tuple's t_ctid.\n\nIn short t_ctid seems extremely unreliable. I have been trying to work\nout a way that a bad t_ctid link could lead to the duplicate-tuple\nreports we've been hearing lately, but so far I haven't seen one. I do\nthink it can lead to missed UPDATEs in read-committed mode, however.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 22 Dec 2001 11:13:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Possible bug in vacuum redo "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> \n> \"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> > AFAIR t_ctid isn't logged in WAL.\n> \n> After looking at the heap_update code I think you are right. Doesn't\n> that render the field completely useless/unreliable?\n\nRedo runs with no concurrent backends. New backends invoked\nafter a redo operation don't need(see) the existent t_ctid values.\nPostgreSQL before MVCC didn't need the t_ctid.\n\n> \n> In the simple heap_update case I think that heap_xlog_update could\n> easily set the old tuple's t_ctid field correctly. Not sure how\n> it works when VACUUM is moving tuple chains around, however.\n> \n> Another thing I am currently looking at is that I do not believe VACUUM\n> handles tuple chain moves correctly. It only enters the chain-moving\n> logic if it finds a tuple that is in the *middle* of an update chain,\n> ie, both the prior and next tuples still exist. \n ^^^^^\nIsn't it *either* not *both* ?\nAnyway I agree with you at the point that the tuple chain-moving\nis too complex. It's one of the main reason why I prefer the new\nVACUUM.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Sun, 23 Dec 2001 08:35:58 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Possible bug in vacuum redo "
},
{
"msg_contents": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n>> Another thing I am currently looking at is that I do not believe VACUUM\n>> handles tuple chain moves correctly. It only enters the chain-moving\n>> logic if it finds a tuple that is in the *middle* of an update chain,\n>> ie, both the prior and next tuples still exist. \n> ^^^^^\n> Isn't it *either* not *both* ?\n\n[ reads it again ] Oh, you're right.\n\nStill, if WAL isn't taking care to maintain t_ctid then we have a\nproblem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 22 Dec 2001 20:45:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Possible bug in vacuum redo "
},
{
"msg_contents": "FWIW, I got the following deadlock-like situation by simply running \npgbench :\n\n 968 ? S 0:00 /usr/bin/postmaster -D /var/lib/pgsql/data\n13245 pts/1 S 0:00 su - postsql\n31671 pts/1 S 0:00 bin/postmaster\n31672 pts/1 S 0:00 postgres: stats buffer process\n31674 pts/1 S 0:00 postgres: stats collector process\n32606 pts/1 S 0:02 pgbench -p 7204 -t 128 -c 32 postsql\n32613 pts/1 S 0:01 postgres: postsql postsql [local] UPDATE\n32615 pts/1 S 0:00 postgres: postsql postsql [local] UPDATE\n32616 pts/1 S 0:01 postgres: postsql postsql [local] UPDATE\n32617 pts/1 S 0:00 postgres: postsql postsql [local] UPDATE\n32618 pts/1 S 0:00 postgres: postsql postsql [local] UPDATE\n32623 pts/1 S 0:01 postgres: postsql postsql [local] UPDATE\n32624 pts/1 S 0:01 postgres: postsql postsql [local] idle in \ntransaction\n32628 pts/1 S 0:01 postgres: postsql postsql [local] UPDATE\n32633 pts/1 S 0:00 postgres: postsql postsql [local] idle\n32634 pts/1 S 0:00 postgres: postsql postsql [local] UPDATE\n32635 pts/1 S 0:00 postgres: postsql postsql [local] COMMIT\n32636 pts/1 S 0:01 postgres: postsql postsql [local] UPDATE waiting\n32637 pts/1 S 0:00 postgres: postsql postsql [local] UPDATE\n32638 pts/1 S 0:00 postgres: postsql postsql [local] UPDATE\n32639 pts/1 S 0:00 postgres: postsql postsql [local] UPDATE\n32640 pts/1 S 0:00 postgres: checkpoint subprocess\n32692 pts/2 S 0:00 su - postsql\n32752 pts/2 S 0:00 grep post\n\nIt is probably some bug in pgbench, but could also be something more \nserious.\n\nI was running pgbench using modified (to use vacuum full between runs) \nTatsuos\nbench script on a uniprocessor Athlon 850.:\n\n[postsql@taru bench]$ cat bench.sh\n#! /bin/sh\nfor i in 1 2 3 4 5 6 7 8\n do\n# pg_ctl stop\n# pg_ctl -w -o '-c \"wal_sync_method=fdatasync\" -S' start\n# pg_ctl -w start\n sh mpgbench| tee bench.res.$i |\n sh extract.sh > bench.data.$i\ndone\n[postsql@taru bench]$ cat mpgbench\n#! 
/bin/sh\nDB=postsql\nPORT=7204\n\n#pgbench -p $PORT -i -s 128 $DB\n#pgbench -p $PORT -i -s 10 $DB\n#pgbench -p $PORT -i -s 1 $DB\n\nfor i in 1 2 4 8 16 32 64 128\ndo\n# t=$(echo \"scale=0; 512/$i\" | bc -l)\n t=$(echo \"scale=0; 4096/$i\" | bc -l)\n echo $i concurrent users... 1>&2\n echo $t transactions each... 1>&2\n pgbench -p $PORT -t $t -c $i $DB\n psql -p $PORT -c 'vacuum full' $DB\n psql -p $PORT -c 'checkpoint' $DB\n echo \"===== sync ======\" 1>&2\n sync;sync;sync;sleep 10\n echo \"===== sync done ======\" 1>&2\ndone\n\n\n[postsql@taru bench]$ cat extract.sh\n#! /bin/sh\nsed -n -e '/^number of clients.*'/p \\\n -e '/.*excluding connections establishing.*'/p |\nsed -e 's/number of clients: //' \\\n -e 's/^tps = //' \\\n -e 's/(excluding connections establishing)//'|\nwhile read i\ndo\n echo -n \"$i \"\n read j\n echo $j\ndone\n\n-------------------\nHannu\n\n\n",
"msg_date": "Mon, 24 Dec 2001 03:24:22 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Possible bug in vacuum redo"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> >> Another thing I am currently looking at is that I do not believe VACUUM\n> >> handles tuple chain moves correctly. It only enters the chain-moving\n> >> logic if it finds a tuple that is in the *middle* of an update chain,\n> >> ie, both the prior and next tuples still exist.\n> > ^^^^^\n> > Isn't it *either* not *both* ?\n> \n> [ reads it again ] Oh, you're right.\n> \n> Still, if WAL isn't taking care to maintain t_ctid then we have a\n> problem.\n\nI don't think it's preferable either. However there's\nno problem unless there's an application which handle\nthe tuples containing the t_ctid link. I know few\napplications(vacuum, create index ??) which handles\nthe tuples already updated to new ones and committed\nbefore the transaction started.\nNote that redo is executed alone. Shutdown recovery\nis called after killing all backends if necessary.\nOf cource there are no other backends running when\nstartup recovery is called.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Mon, 24 Dec 2001 09:09:03 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Possible bug in vacuum redo"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> I don't think it's preferable either. However there's\n> no problem unless there's an application which handle\n> the tuples containing the t_ctid link.\n\nWhat about READ COMMITTED mode? EvalPlanQual uses the t_ctid field\nto find the updated version of the row. If the t_ctid is wrong,\nyou might get an elog(), or you might miss the row you should have\nupdated, or possibly you might update a row that you shouldn't have.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 23 Dec 2001 19:44:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Possible bug in vacuum redo "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > I don't think it's preferable either. However there's\n> > no problem unless there's an application which handle\n> > the tuples containing the t_ctid link.\n> \n> What about READ COMMITTED mode? EvalPlanQual uses the t_ctid field\n> to find the updated version of the row.\n\nIn READ COMMITTED mode, an app searches valid tuples first\nusing the snapshot taken when the query started. It never\nsearches already updated(to newer ones) and committed tuples\nat the point when the query started. Essentially t_ctid is\nonly needed by the concurrently running backends.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Mon, 24 Dec 2001 10:04:47 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Possible bug in vacuum redo"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> In READ COMMITTED mode, an app searches valid tuples first\n> using the snapshot taken when the query started. It never\n> searches already updated(to newer ones) and committed tuples\n> at the point when the query started. Essentially t_ctid is\n> only needed by the concurrently running backends.\n\n[ thinks for awhile ] I see: you're saying that t_ctid is only\nused by transactions that are concurrent with the deleting transaction,\nso if there's a database crash there's no need to restore t_ctid.\n\nProbably true, but still mighty ugly.\n\nMeanwhile, I guess I gotta look elsewhere for a theory to explain\nthose reports of duplicate rows. Oh well...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 23 Dec 2001 20:28:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Possible bug in vacuum redo "
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> FWIW, I got the following deadlock-like situation by simply running \n> pgbench :\n\nUsing what version? That looks like the deadlock I fixed last week.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 23 Dec 2001 20:29:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Possible bug in vacuum redo "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > In READ COMMITTED mode, an app searches valid tuples first\n> > using the snapshot taken when the query started. It never\n> > searches already updated(to newer ones) and committed tuples\n> > at the point when the query started. Essentially t_ctid is\n> > only needed by the concurrently running backends.\n> \n> [ thinks for awhile ] I see: you're saying that t_ctid is only\n> used by transactions that are concurrent with the deleting transaction,\n> so if there's a database crash there's no need to restore t_ctid.\n\nYes.\n \n> Probably true, but still mighty ugly.\n\nYes.\n \n> Meanwhile, I guess I gotta look elsewhere for a theory to explain\n> those reports of duplicate rows. Oh well...\n\nGreat. Where is it ?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Mon, 24 Dec 2001 13:39:05 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Possible bug in vacuum redo"
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n>Hannu Krosing <hannu@tm.ee> writes:\n>\n>>FWIW, I got the following deadlock-like situation by simply running \n>>pgbench :\n>>\n>\n>Using what version? That looks like the deadlock I fixed last week.\n>\nOfficial 7.2b4. Should I get a fresh one from CVS or is beta5/RC1 imminent ?\n\n--------------------\nHannu\n\n\n\n",
"msg_date": "Mon, 24 Dec 2001 11:14:29 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Possible bug in vacuum redo"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n>> Using what version? That looks like the deadlock I fixed last week.\n>> \n> Official 7.2b4. Should I get a fresh one from CVS or is beta5/RC1 imminent ?\n\nUse CVS (or a nightly snapshot if that's easier). I doubt we'll do\nanything until after Christmas...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 24 Dec 2001 09:16:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Possible bug in vacuum redo "
},
{
"msg_contents": "Tom Lane wrote:\n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > In READ COMMITTED mode, an app searches valid tuples first\n> > using the snapshot taken when the query started. It never\n> > searches already updated(to newer ones) and committed tuples\n> > at the point when the query started. Essentially t_ctid is\n> > only needed by the concurrently running backends.\n> \n> [ thinks for awhile ] I see: you're saying that t_ctid is only\n> used by transactions that are concurrent with the deleting transaction,\n> so if there's a database crash there's no need to restore t_ctid.\n> \n> Probably true, but still mighty ugly.\n> \n> Meanwhile, I guess I gotta look elsewhere for a theory to explain\n> those reports of duplicate rows. Oh well...\n\nCan someone document this in the sources somewhere? I am not sure how\nto do it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jan 2002 00:53:54 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Possible bug in vacuum redo"
}
] |
[
{
"msg_contents": "I was wondering, when we start to reuse a WAL file, do we know that all\ndirty buffers modified in that WAL file have been flushed to disk? Do\nwe fsync() dirty buffers at time of checkpoint, and do we also make sure\nthat buffers we wrote to disk and later reused before the checkpoint\nalso made it to disk?\n\nMy point is that writing it to the kernel doesn't guarantee it made it\nto disk.\n\nI see the WAL records being fsync'ed in xlog.c but I don't see the\nbuffer pages being fsynced.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Dec 2001 22:03:39 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "checkpoint reliability"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I was wondering, when we start to reuse a WAL file, do we know that all\n> dirty buffers modified in that WAL file have been flushed to disk?\n\nYes. At least two checkpoints ago, in fact.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Dec 2001 23:06:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: checkpoint reliability "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I was wondering, when we start to reuse a WAL file, do we know that all\n> > dirty buffers modified in that WAL file have been flushed to disk?\n> \n> Yes. At least two checkpoints ago, in fact.\n\nSo when we decide to reuse a shared memory buffer and write it to disk,\ndo we fsync it, or do we run a file sync() to force all dirty buffers to\ndisk?\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Dec 2001 23:10:20 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: checkpoint reliability"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I was wondering, when we start to reuse a WAL file, do we know that all\n> > dirty buffers modified in that WAL file have been flushed to disk?\n> \n> Yes. At least two checkpoints ago, in fact.\n\nIsn't the following what Bruce asked ?\n\n/*\n *\tmdsync() -- Sync storage.\n *\n */\nint\nmdsync()\n{\n\tsync();\n\tif (IsUnderPostmaster)\n\t\tsleep(2);\n\tsync();\n\treturn SM_SUCCESS;\n}\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 19 Dec 2001 13:33:49 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: checkpoint reliability"
},
{
"msg_contents": "> Tom Lane wrote:\n> > \n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > I was wondering, when we start to reuse a WAL file, do we know that all\n> > > dirty buffers modified in that WAL file have been flushed to disk?\n> > \n> > Yes. At least two checkpoints ago, in fact.\n> \n> Isn't the following what Bruce asked ?\n> \n> /*\n> *\tmdsync() -- Sync storage.\n> *\n> */\n> int\n> mdsync()\n> {\n> \tsync();\n> \tif (IsUnderPostmaster)\n> \t\tsleep(2);\n> \tsync();\n> \treturn SM_SUCCESS;\n> }\n\nOh, yes. That is it. I couldn't find out how we were sure our pages\nthat we had written to the kernel were actually on disk before we\nstarted reusing the WAL files.\n\nThanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Dec 2001 23:34:36 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: checkpoint reliability"
}
] |
[
{
"msg_contents": "Hello,\nI found the following code in postgresql-7.2b4/src/interfaces/perl5/Pg.pm:101 file.\n\n$Pg::VERSION = '1.9.0';\n\nIs this version number correct? I think this version is 1.8.X.\nThe latest release of pgsql_perl5 is also 1.9.0. And we can find the following url.\n\nhttp://search.cpan.org/search?module=Pg\n\nAs you know, the pgsql_perl5 1.9.0 of CPAN can not use old-style interface,\nbut the perl interface of pg7.2b4 can use it.\n--\njunichi@sra.co.jp\n",
"msg_date": "Wed, 19 Dec 2001 13:56:16 +0900",
"msg_from": "KobayashiJunichi <junichi@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Pg7.2bata4 pgsql_perl5 version."
}
] |
[
{
"msg_contents": "Hi,\nI got this patch working on 7.2b4, uses locale setting for input/output\nnumeric datatype.\nIt do not modify the regression tests.\n\nAny suggestion on How is implemented ?\nHow to change the regression test, to make it work on all the locale\nsettings ?\n\nthanks\nGiuseppe\n\n\n-- \n-------------------------------------------------------\nGiuseppe Tanzilli\t\tg.tanzilli@gruppocsf.com\nCSF Sistemi srl\t\t\tphone ++39 0775 7771\nVia del Ciavattino\nAnagni FR\nItaly",
"msg_date": "Wed, 19 Dec 2001 09:51:08 +0100",
"msg_from": "Giuseppe Tanzilli - CSF <g.tanzilli@gruppocsf.com>",
"msg_from_op": true,
"msg_subject": "RFC: Locale support for Numeric datatype"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us] \n> Sent: 19 December 2001 02:07\n> To: Thomas Swan\n> Cc: Peter Eisentraut; PostgreSQL Development\n> Subject: Re: Thoughts on the location of configuration files\n> \n> \n> > >I suggest that we wire-in the location of the configuration files \n> > >into the binaries as ${sysconfdir} as determined by \n> configure. This \n> > >would default to /usr/local/pgsql/etc, so the \"everything in one \n> > >place\" system is still somewhat preserved for those that \n> care. For \n> > >the confused, we could for a while install into the data directory \n> > >files named \"postgresql.conf\", \"pg_hba.conf\", etc. that \n> only contain \n> > >text like \"This file is now to be found at @sysconfdir@ by popular \n> > >demand.\"\n> > >\n> > In keeping with some of the more modern daemons (xinetd, \n> etc) you might \n> > want to consider something like /etc/pgsql.d/ as a \n> directory name. \n> > Where as most folders with a .d contain a set of files or a \n> > referenced\n> > by the main config file in /etc. This is on a RedHat system, but I \n> > think the logic applies well if you are flexible the \n> location of the \n> > base system config directory. (/usr/local/etc vs /etc, etc.)\n> \n> I often wondered, if it is directory, why do they need the \n> '.d' in the name? What possible purpose could it have except \n> to look ugly? :-)\n\nIsn't this a RedHat thing anyway? Precisely why I use Slackware...\n\nRegards, Dave.\n",
"msg_date": "Wed, 19 Dec 2001 09:07:14 -0000",
"msg_from": "Dave Page <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on the location of configuration files"
},
{
"msg_contents": "\n\n\n\n\nDave Page wrote:\n\n*snip*\n\n\n\nIn keeping with some of the more modern daemons (xinetd,etc) you might want to consider something like /etc/pgsql.d/ as a directory name. Where as most folders with a .d contain a set of files or a referenced by the main config file in /etc. This is on a RedHat system, but I think the logic applies well if you are flexible the location of the base system config directory. (/usr/local/etc vs /etc, etc.)\n\nI often wondered, if it is directory, why do they need the '.d' in the name? What possible purpose could it have except to look ugly? :-)\n\nIsn't this a RedHat thing anyway? Precisely why I use Slackware...\n\nPerhaps... I just thought I'd mention it as an observation. Regardless,\nbeing able to locate the config outside of the database directory is a Good\nThing (tm). I'm really in favor of the /etc/postgresql.conf and support\nfiles being put in /etc/pgsql/ or some other system config dir,--with-sysconfdir={something}\nas specified at compile time...\n@sysconfdir@ = /etc ...\npostgresql.conf in @sysconfdir@ \nsupport files in @sysconfdir@/pgsql or someother place specified in postgresql.conf\n\n\n\n\n",
"msg_date": "Wed, 19 Dec 2001 04:29:05 -0600",
"msg_from": "Thomas Swan <tswan-lst@ics.olemiss.edu>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files"
}
] |
[
{
"msg_contents": "\n> > > Here is a patch per above thread.\n> >\n> > Why did you change libpgtcl's build process? AFAIK no one was claiming\n> > that was broken.\n> >\n> > This does not seem the right time to be making undiscussed changes in\n> > Makefile.shlib, either. What's with that?\n> \n> I wasn't under the impression that we wanted to get this patch into 7.2.\n> AFAIK, the Tcl stuff has not built on AIX for quite a while, so it's not a\n> regression from a previous release. Moreover, the chances that any\n> significant number of people will test the Tcl build in the remaining test\n> period is nearly zero.\n\nIt would have been nice, if you would have stated your concern when I asked \nif this patch had chance for inclusion, because if I had known that, I would\nhave worked at a patch after new year, because we are quite busy here with \nthe Euro coming.\n\nAndreas\n",
"msg_date": "Wed, 19 Dec 2001 10:29:09 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Problem compiling postgres sql --with-tcl "
},
{
"msg_contents": "Zeugswetter Andreas SB SD writes:\n\n> It would have been nice, if you would have stated your concern when I asked\n> if this patch had chance for inclusion, because if I had known that, I would\n> have worked at a patch after new year, because we are quite busy here with\n> the Euro coming.\n\nSorry, I was always assuming you were working for a future release, since\nyou indicated that you wouldn't have much time anyway. But you must\nunderstand that the Tcl code gets really little testing. The patch is\nstill available for other interestd AIX users, so the work is not in vain.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 19 Dec 2001 19:45:54 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Problem compiling postgres sql --with-tcl "
},
{
"msg_contents": "> It would have been nice, if you would have stated your concern\n> when I asked\n> if this patch had chance for inclusion, because if I had known\n> that, I would\n> have worked at a patch after new year, because we are quite busy\n> here with\n> the Euro coming.\n\nI saw a show about the first meeting to perhaps form an African Union. My\nfriend then suggested that if it goes ahead, they should called their\nunified currency the 'Afro' :)\n\nMerry Christmas all,\n\nChris\n\n",
"msg_date": "Thu, 20 Dec 2001 09:36:05 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Problem compiling postgres sql --with-tcl"
}
] |
[
{
"msg_contents": "\n> I think the data directory should be explicitly configured by either the\n> posgresql.conf file, environment variable (PGDATA), or through the command line\n> option, but using the \"configure\" statedir isn't anything \n> anyone would object too.\n\nChanging the default data location at this point would imho be a major hassle\nfor those actually using the default.\nThus I object to changing it.\n\nAndreas\n",
"msg_date": "Wed, 19 Dec 2001 10:52:44 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on the location of configuration files, how about this:"
},
{
"msg_contents": ">>>\"Zeugswetter Andreas SB SD\" said:\n > \n > > I think the data directory should be explicitly configured by either the\n > > posgresql.conf file, environment variable (PGDATA), or through the command\n line\n > > option, but using the \"configure\" statedir isn't anything \n > > anyone would object too.\n > \n > Changing the default data location at this point would imho be a major hassl\n e\n > for those actually using the default.\n > Thus I object to changing it.\n\nThe idea is to have the default configuration file location possible to change \nat configure time. If whoever compiles PostgreSQL does not specify different \nlocation, I guess /usr/local/pgsql/data will be the default location.\n\nDaniel\n\n",
"msg_date": "Wed, 19 Dec 2001 12:53:54 +0200",
"msg_from": "Daniel Kalchev <daniel@digsys.bg>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files, how "
},
{
"msg_contents": "Zeugswetter Andreas SB SD wrote:\n> \n> > I think the data directory should be explicitly configured by either the\n> > posgresql.conf file, environment variable (PGDATA), or through the command line\n> > option, but using the \"configure\" statedir isn't anything\n> > anyone would object too.\n> \n> Changing the default data location at this point would imho be a major hassle\n> for those actually using the default.\n> Thus I object to changing it.\n\nThe \"new\" proposed configuration should be 100% backward compatible with\nexisting scripts.\n",
"msg_date": "Wed, 19 Dec 2001 07:37:52 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on the location of configuration files, how about this:"
}
] |
[
{
"msg_contents": "> I think you are right that the difference between 7.1 and 7.2 may have\n> more to do with the change in VACUUM strategy than anything else. Could\n> you retry the test after changing all the \"vacuum\" commands in pgbench.c\n> to \"vacuum full\"?\n\nMight there also be a difference in chosen query plans ?\nWasn't 7.1 more willing to choose an index over seq scan,\neven though the scan would be faster in the single user case ?\nOr was that change after 7.0 ?\n\nThe seq scan would be slower that the index in the case of \nmany concurrent accesses.\n\nAndreas\n",
"msg_date": "Wed, 19 Dec 2001 11:38:28 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: 7.2 is slow? "
},
{
"msg_contents": "I haven't tested with the new 7.2 betas, but here are some results from\n7.1.\n\nWe have a developement computer, IBM x series 250, with 4 processors\n(PIII Xeon 750Mhz), 1 Gb memory and 2SCSI disks (u160).\n\nThe software is writing new rows to a table, and after this it reads the\nid from that row. There are currently about 50 connections doing the\nsame thing.\n\nWhen I run this test with the Redhat 7.1, with SMP kernel, I noticed, that\nthe\nprocessors are more than 90% idle. Disks utilisation is not the\nbottleneck either, since there is very low disk usage. Some data is\nwritten to disks every 4-5 seconds. Fsync is turned of. In transactions,\nthis means about 200 inserted rows per second. The software that is used\nto give the feed, is capable of several thousand rows per second.\n\nOkey, so I tried this also with the same computer, but using the not SMP\nsupported kernel. So only with one processor. The result was about 600\nrows per second. The configuration file was unchanged. Now, the\nprocessor is about 100% utilized.\n\nI didn't find any parameters that should help in this, but if you have a\nversion of 7.2 that you would like to get information about, let me\nknow, so I'll test.\n\nJussi\n\nZeugswetter Andreas SB SD wrote:\n\n> > I think you are right that the difference between 7.1 and 7.2 may have\n> > more to do with the change in VACUUM strategy than anything else. 
Could\n> > you retry the test after changing all the \"vacuum\" commands in pgbench.c\n> > to \"vacuum full\"?\n>\n> Might there also be a difference in chosen query plans ?\n> Wasn't 7.1 more willing to choose an index over seq scan,\n> even though the scan would be faster in the single user case ?\n> Or was that change after 7.0 ?\n>\n> The seq scan would be slower than the index in the case of\n> many concurrent accesses.\n>\n> Andreas\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n--\nJussi Mikkola Project Manager\nBonware Oy gsm +358 40 830 7561\nTekniikantie 12 tel +358 9 2517 5570\n02150 Espoo fax +358 9 2517 5571\nFinland www.bonware.com",
"msg_date": "Wed, 19 Dec 2001 13:00:03 +0200",
"msg_from": "Jussi Mikkola <jussi.mikkola@bonware.com>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow?"
},
{
"msg_contents": "> I haven't tested with the new 7.2 betas, but here are some results from\n> 7.1.\n> \n> We have a developement computer, IBM x series 250, with 4 processors\n> (PIII Xeon 750Mhz), 1 Gb memory and 2SCSI disks (u160).\n> \n> The software is writing new rows to a table, and after this it reads the\n> id from that row. There are currently about 50 connections doing the\n> same thing.\n> \n> When I run this test with the Redhat 7.1, with SMP kernel, I noticed, that\n> the\n> processors are more than 90% idle. Disks utilisation is not the\n> bottleneck either, since there is very low disk usage. Some data is\n> written to disks every 4-5 seconds. Fsync is turned of. In transactions,\n> this means about 200 inserted rows per second. The software that is used\n> to give the feed, is capable of several thousand rows per second.\n> \n> Okey, so I tried this also with the same computer, but using the not SMP\n> supported kernel. So only with one processor. The result was about 600\n> rows per second. The configuration file was unchanged. Now, the\n> processor is about 100% utilized.\n> \n> I didn't find any parameters that should help in this, but if you have a\n> version of 7.2 that you would like to get information about, let me\n> know, so I'll test.\n\nYes! This sleeping case is the problem we expected to see on SMP\nmachines in >= 7.1 because of lock contention and a select() that can't\nsleep for less than 1/100 second. Please try the current 7.2 snapshot\nand let us know what performance you get.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 19 Dec 2001 06:37:04 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow?"
},
{
"msg_contents": "Hi !\n\nYes, now I've tested with 7.2b4. The result is about the same as with 7.1.\n\nAbout 200 messages with four processors and about 600 messages with one\nprocessor.\n\nJussi\n\n>\n>\n> Yes! This sleeping case is the problem we expected to see on SMP\n> machines in >= 7.1 because of lock contention and a select() that can't\n> sleep for less than 1/100 second. Please try the current 7.2 snapshot\n> and let us know what performance you get.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n--\nJussi Mikkola Project Manager\nBonware Oy gsm +358 40 830 7561\nTekniikantie 12 tel +358 9 2517 5570\n02150 Espoo fax +358 9 2517 5571\nFinland www.bonware.com\n\n\n\n",
"msg_date": "Wed, 19 Dec 2001 14:54:48 +0200",
"msg_from": "Jussi Mikkola <jussi.mikkola@bonware.com>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow?"
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> Might there also be a difference in chosen query plans ?\n\nIf so, it'd affect the results across-the-board, but AFAICT\nTatsuo is seeing comparable results for small numbers of clients.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Dec 2001 10:05:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow? "
},
{
"msg_contents": "Hi All,\n\nI have experienced similar problems after moving my main server from UP \n(PII 300 box) to SMP (PII400 box). I remeber someone said look at the \niostat figures for the different runs, but I haven't had time to check \nit out.\n\nOut of curosity, what does iostat say when run in SMP vs UP?\n\nAshley Cambrell\n\nJussi Mikkola wrote:\n\n>Hi !\n>\n>Yes, now I've tested with 7.2b4. The result is about the same as with 7.1.\n>\n>About 200 messages with four processors and about 600 messages with one\n>processor.\n>\n>Jussi\n>\n>>\n>><snip>\n>>\n\n\n",
"msg_date": "Thu, 20 Dec 2001 02:09:42 +1100",
"msg_from": "Ashley Cambrell <ash@freaky-namuh.com>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow?"
},
{
"msg_contents": "We're getting similar problem.\nWe're currently working on TPC-H benchmarking using postgresql 7.2b3.\n>From one up to 8 paralell conexions (we've got 8 MIPS processors), uptime\nincreases from 1 to 8,\nbut increasing above 8 makes performance drop rapidly to uptimes even lower\nthan 2 for 22 conexions.\nAs we've could trail, when we have more processes than processors, we're\ngetting an increasing number of collisions.\nwhen collision happens both processes get idle for a while, then collision may\nhappen again and so...\nRegards\nLuis Amigo\nUniversidad de Cantabria\n\n\n",
"msg_date": "Wed, 19 Dec 2001 16:20:40 +0100",
"msg_from": "Luis Amigo <lamigo@atc.unican.es>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow?"
},
{
"msg_contents": "Jussi Mikkola <jussi.mikkola@bonware.com> writes:\n> Yes, now I've tested with 7.2b4. The result is about the same as with 7.1.\n\n> About 200 messages with four processors and about 600 messages with one\n> processor.\n\nThat's annoying. The LWLock changes were intended to solve the\ninefficiency with multiple CPUs, but it seems like we still have a\nproblem somewhere.\n\nCould you recompile the backend with profiling enabled and try to get\na profile from your test case? To build a profilable backend, it's\nsufficient to do\n\n\tcd .../src/backend\n\tgmake clean\n\tgmake PROFILE=-pg all\n\tgmake install-bin\n\n(assuming you are using gcc). Then restart the postmaster, and you\nshould notice \"gmon.out\" files being dropped into the various database\nsubdirectories anytime a backend exits. Next run your test case,\nand as soon as it finishes copy the gmon.out file to a safe place.\n(You'll only be able to get the profile from the last process to exit,\nso try to make sure that this is representative. Might be worth\nrepeating the test a few times to make sure that the results don't\nvary a whole lot.) Finally, do\n\n\tgprof .../bin/postgres gmon.out >resultfile\n\nto produce a legible result.\n\nOh, one more thing: on Linuxen you are likely to find that all the\nreported routine runtimes are zero, rendering the results useless.\nApply the attached patch (for 7.2beta) to fix this.\n\n\t\t\tregards, tom lane\n\n*** src/backend/postmaster/postmaster.c.orig\tWed Dec 12 14:52:03 2001\n--- src/backend/postmaster/postmaster.c\tMon Dec 17 19:38:29 2001\n***************\n*** 1823,1828 ****\n--- 1823,1829 ----\n {\n \tBackend *bn;\t\t\t\t/* for backend cleanup */\n \tpid_t\t\tpid;\n+ \tstruct itimerval svitimer;\n \n \t/*\n \t * Compute the cancel key that will be assigned to this backend. 
The\n***************\n*** 1858,1869 ****\n--- 1859,1874 ----\n \tbeos_before_backend_startup();\n #endif\n \n+ \tgetitimer(ITIMER_PROF, &svitimer);\n+ \n \tpid = fork();\n \n \tif (pid == 0)\t\t\t\t/* child */\n \t{\n \t\tint\t\t\tstatus;\n \n+ \t\tsetitimer(ITIMER_PROF, &svitimer, NULL);\n+ \n \t\tfree(bn);\n #ifdef __BEOS__\n \t\t/* Specific beos backend startup actions */\n",
"msg_date": "Wed, 19 Dec 2001 10:30:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow? "
},
{
"msg_contents": "Luis Amigo <lamigo@atc.unican.es> writes:\n> We're getting similar problem.\n> We're currently working on TPC-H benchmarking using postgresql 7.2b3.\n> From one up to 8 paralell conexions (we've got 8 MIPS processors),\n\nMIPS? Which spinlock implementation is getting used? (Look in\nsrc/include/storage/s_lock.h and src/backend/storage/lmgr/s_lock.c)\n\nIf you're falling back to the default SysV-semaphore based spinlock\nimplementation, I wouldn't be surprised to see a performance problem...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Dec 2001 10:44:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow? "
},
{
"msg_contents": "I haven't actually tried to compile 7.2 from the CVS, but there seems to \nbe a problem? [maybe on my side]\n\nmake[3]: Leaving directory \n`/home/ash/ash-server/Work/build/pgsql/src/backend/utils'\ngcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations \n-Wl,-rpath,/tmp//lib -export-dynamic access/SUBSYS.o bootstrap/SUBSYS.o \ncatalog/SUBSYS.o parser/SUBSYS.o commands/SUBSYS.o executor/SUBSYS.o \nlib/SUBSYS.o libpq/SUBSYS.o main/SUBSYS.o nodes/SUBSYS.o \noptimizer/SUBSYS.o port/SUBSYS.o postmaster/SUBSYS.o regex/SUBSYS.o \nrewrite/SUBSYS.o storage/SUBSYS.o tcop/SUBSYS.o utils/SUBSYS.o -lz \n-lcrypt -lresolv -lnsl -ldl -lm -lreadline -o postgres\nnodes/SUBSYS.o: In function `pprint':\nnodes/SUBSYS.o(.text+0xdc95): undefined reference to `MIN'\nnodes/SUBSYS.o(.text+0xdcfd): undefined reference to `MIN'\ncollect2: ld returned 1 exit status\nmake[2]: *** [postgres] Error 1\nmake[2]: Leaving directory \n`/home/ash/ash-server/Work/build/pgsql/src/backend'\nmake[1]: *** [all] Error 2\nmake[1]: Leaving directory `/home/ash/ash-server/Work/build/pgsql/src'\nmake: *** [all] Error 2\n\nIn ./src/backend/nodes/print.c:\n\n/* outdent */\nif (indentLev > 0)\n{\n indentLev--;\n indentDist = MIN(indentLev * INDENTSTOP, MAXINDENT);\n}\n\n\nIf I add\n#ifndef MIN\n#define MIN(a,b) (((a)<(b)) ? (a) : (b))\n#endif\nto print.c it compiles fine.\n\nAshley Cambrell\n\n\n<snip>\n\n>\n>That's annoying. The LWLock changes were intended to solve the\n>inefficiency with multiple CPUs, but it seems like we still have a\n>problem somewhere.\n>\n>Could you recompile the backend with profiling enabled and try to get\n>a profile from your test case? To build a profilable backend, it's\n>sufficient to do\n>\n>\tcd .../src/backend\n>\tgmake clean\n>\tgmake PROFILE=-pg all\n>\tgmake install-bin\n>\n>(assuming you are using gcc). Then restart the postmaster, and you\n>should notice \"gmon.out\" files being dropped into the various database\n>subdirectories anytime a backend exits. 
Next run your test case,\n>and as soon as it finishes copy the gmon.out file to a safe place.\n>(You'll only be able to get the profile from the last process to exit,\n>so try to make sure that this is representative. Might be worth\n>repeating the test a few times to make sure that the results don't\n>vary a whole lot.) Finally, do\n>\n>\tgprof .../bin/postgres gmon.out >resultfile\n>\n>to produce a legible result.\n>\n>Oh, one more thing: on Linuxen you are likely to find that all the\n>reported routine runtimes are zero, rendering the results useless.\n>Apply the attached patch (for 7.2beta) to fix this.\n>\n>\t\t\tregards, tom lane\n>\n>*** src/backend/postmaster/postmaster.c.orig\tWed Dec 12 14:52:03 2001\n>--- src/backend/postmaster/postmaster.c\tMon Dec 17 19:38:29 2001\n>***************\n>*** 1823,1828 ****\n>--- 1823,1829 ----\n> {\n> \tBackend *bn;\t\t\t\t/* for backend cleanup */\n> \tpid_t\t\tpid;\n>+ \tstruct itimerval svitimer;\n> \n> \t/*\n> \t * Compute the cancel key that will be assigned to this backend. The\n>***************\n>*** 1858,1869 ****\n>--- 1859,1874 ----\n> \tbeos_before_backend_startup();\n> #endif\n> \n>+ \tgetitimer(ITIMER_PROF, &svitimer);\n>+ \n> \tpid = fork();\n> \n> \tif (pid == 0)\t\t\t\t/* child */\n> \t{\n> \t\tint\t\t\tstatus;\n> \n>+ \t\tsetitimer(ITIMER_PROF, &svitimer, NULL);\n>+ \n> \t\tfree(bn);\n> #ifdef __BEOS__\n> \t\t/* Specific beos backend startup actions */\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n>http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n\n\n",
"msg_date": "Thu, 20 Dec 2001 11:51:03 +1100",
"msg_from": "Ashley Cambrell <ash@freaky-namuh.com>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow? [compile problem]"
},
{
"msg_contents": "I just got the same problem on latest CVS on freebsd/i386\n\ngmake[4]: Entering directory `/home/chriskl/pgsql/src/backend/utils/time'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../.\n./src/include -c -o tqual.o tqual.c\n/usr/libexec/elf/ld -r -o SUBSYS.o tqual.o\ngmake[4]: Leaving directory `/home/chriskl/pgsql/src/backend/utils/time'\n/usr/libexec/elf/ld -r -o SUBSYS.o fmgrtab.o adt/SUBSYS.o cache/SUBSYS.o\nerror/SUBSYS.o fmgr/SUBSYS.o hash/SUBSYS.o init/SUBSYS.o misc/SUBS\nYS.o mmgr/SUBSYS.o sort/SUBSYS.o time/SUBSYS.o\ngmake[3]: Leaving directory `/home/chriskl/pgsql/src/backend/utils'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -R/home/chr\niskl/local/lib -export-dynamic access/SUBSYS.o bootstrap/SUBSYS\n.o catalog/SUBSYS.o parser/SUBSYS.o commands/SUBSYS.o executor/SUBSYS.o\nlib/SUBSYS.o libpq/SUBSYS.o main/SUBSYS.o nodes/SUBSYS.o optimizer/\nSUBSYS.o port/SUBSYS.o postmaster/SUBSYS.o regex/SUBSYS.o rewrite/SUBSYS.o\nstorage/SUBSYS.o tcop/SUBSYS.o utils/SUBSYS.o -lz -lcrypt -lcomp\nat -lm -lutil -lreadline -o postgres\nnodes/SUBSYS.o: In function `pprint':\nnodes/SUBSYS.o(.text+0xda71): undefined reference to `MIN'\nnodes/SUBSYS.o(.text+0xdade): undefined reference to `MIN'\ngmake[2]: *** [postgres] Error 1\ngmake[2]: Leaving directory `/home/chriskl/pgsql/src/backend'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/chriskl/pgsql/src'\ngmake: *** [all] Error 2\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Ashley Cambrell\n> Sent: Thursday, 20 December 2001 8:51 AM\n> To: Tom Lane\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] 7.2 is slow? [compile problem]\n>\n>\n> I haven't actually tried to compile 7.2 from the CVS, but there seems to\n> be a problem? 
[maybe on my side]\n>\n> make[3]: Leaving directory\n> `/home/ash/ash-server/Work/build/pgsql/src/backend/utils'\n> gcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations\n> -Wl,-rpath,/tmp//lib -export-dynamic access/SUBSYS.o bootstrap/SUBSYS.o\n> catalog/SUBSYS.o parser/SUBSYS.o commands/SUBSYS.o executor/SUBSYS.o\n> lib/SUBSYS.o libpq/SUBSYS.o main/SUBSYS.o nodes/SUBSYS.o\n> optimizer/SUBSYS.o port/SUBSYS.o postmaster/SUBSYS.o regex/SUBSYS.o\n> rewrite/SUBSYS.o storage/SUBSYS.o tcop/SUBSYS.o utils/SUBSYS.o -lz\n> -lcrypt -lresolv -lnsl -ldl -lm -lreadline -o postgres\n> nodes/SUBSYS.o: In function `pprint':\n> nodes/SUBSYS.o(.text+0xdc95): undefined reference to `MIN'\n> nodes/SUBSYS.o(.text+0xdcfd): undefined reference to `MIN'\n> collect2: ld returned 1 exit status\n> make[2]: *** [postgres] Error 1\n> make[2]: Leaving directory\n> `/home/ash/ash-server/Work/build/pgsql/src/backend'\n> make[1]: *** [all] Error 2\n> make[1]: Leaving directory `/home/ash/ash-server/Work/build/pgsql/src'\n> make: *** [all] Error 2\n>\n> In ./src/backend/nodes/print.c:\n>\n> /* outdent */\n> if (indentLev > 0)\n> {\n> indentLev--;\n> indentDist = MIN(indentLev * INDENTSTOP, MAXINDENT);\n> }\n>\n>\n> If I add\n> #ifndef MIN\n> #define MIN(a,b) (((a)<(b)) ? (a) : (b))\n> #endif\n> to print.c it compiles fine.\n>\n> Ashley Cambrell\n>\n>\n> <snip>\n>\n> >\n> >That's annoying. The LWLock changes were intended to solve the\n> >inefficiency with multiple CPUs, but it seems like we still have a\n> >problem somewhere.\n> >\n> >Could you recompile the backend with profiling enabled and try to get\n> >a profile from your test case? To build a profilable backend, it's\n> >sufficient to do\n> >\n> >\tcd .../src/backend\n> >\tgmake clean\n> >\tgmake PROFILE=-pg all\n> >\tgmake install-bin\n> >\n> >(assuming you are using gcc). Then restart the postmaster, and you\n> >should notice \"gmon.out\" files being dropped into the various database\n> >subdirectories anytime a backend exits. 
Next run your test case,\n> >and as soon as it finishes copy the gmon.out file to a safe place.\n> >(You'll only be able to get the profile from the last process to exit,\n> >so try to make sure that this is representative. Might be worth\n> >repeating the test a few times to make sure that the results don't\n> >vary a whole lot.) Finally, do\n> >\n> >\tgprof .../bin/postgres gmon.out >resultfile\n> >\n> >to produce a legible result.\n> >\n> >Oh, one more thing: on Linuxen you are likely to find that all the\n> >reported routine runtimes are zero, rendering the results useless.\n> >Apply the attached patch (for 7.2beta) to fix this.\n> >\n> >\t\t\tregards, tom lane\n> >\n> >*** src/backend/postmaster/postmaster.c.orig\tWed Dec 12 14:52:03 2001\n> >--- src/backend/postmaster/postmaster.c\tMon Dec 17 19:38:29 2001\n> >***************\n> >*** 1823,1828 ****\n> >--- 1823,1829 ----\n> > {\n> > \tBackend *bn;\t\t\t\t/* for backend cleanup */\n> > \tpid_t\t\tpid;\n> >+ \tstruct itimerval svitimer;\n> >\n> > \t/*\n> > \t * Compute the cancel key that will be assigned to this backend. The\n> >***************\n> >*** 1858,1869 ****\n> >--- 1859,1874 ----\n> > \tbeos_before_backend_startup();\n> > #endif\n> >\n> >+ \tgetitimer(ITIMER_PROF, &svitimer);\n> >+\n> > \tpid = fork();\n> >\n> > \tif (pid == 0)\t\t\t\t/* child */\n> > \t{\n> > \t\tint\t\t\tstatus;\n> >\n> >+ \t\tsetitimer(ITIMER_PROF, &svitimer, NULL);\n> >+\n> > \t\tfree(bn);\n> > #ifdef __BEOS__\n> > \t\t/* Specific beos backend startup actions */\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 5: Have you checked our extensive FAQ?\n> >\n> >http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Thu, 20 Dec 2001 10:02:42 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow? [compile problem]"
},
{
"msg_contents": "\nOK, I just committed a fix. MIN() was used in the pretty node print\npatch; should have been Min().\n\n---------------------------------------------------------------------------\n\n> I just got the same problem on latest CVS on freebsd/i386\n> \n> gmake[4]: Entering directory `/home/chriskl/pgsql/src/backend/utils/time'\n> gcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../.\n> ./src/include -c -o tqual.o tqual.c\n> /usr/libexec/elf/ld -r -o SUBSYS.o tqual.o\n> gmake[4]: Leaving directory `/home/chriskl/pgsql/src/backend/utils/time'\n> /usr/libexec/elf/ld -r -o SUBSYS.o fmgrtab.o adt/SUBSYS.o cache/SUBSYS.o\n> error/SUBSYS.o fmgr/SUBSYS.o hash/SUBSYS.o init/SUBSYS.o misc/SUBS\n> YS.o mmgr/SUBSYS.o sort/SUBSYS.o time/SUBSYS.o\n> gmake[3]: Leaving directory `/home/chriskl/pgsql/src/backend/utils'\n> gcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -R/home/chr\n> iskl/local/lib -export-dynamic access/SUBSYS.o bootstrap/SUBSYS\n> .o catalog/SUBSYS.o parser/SUBSYS.o commands/SUBSYS.o executor/SUBSYS.o\n> lib/SUBSYS.o libpq/SUBSYS.o main/SUBSYS.o nodes/SUBSYS.o optimizer/\n> SUBSYS.o port/SUBSYS.o postmaster/SUBSYS.o regex/SUBSYS.o rewrite/SUBSYS.o\n> storage/SUBSYS.o tcop/SUBSYS.o utils/SUBSYS.o -lz -lcrypt -lcomp\n> at -lm -lutil -lreadline -o postgres\n> nodes/SUBSYS.o: In function `pprint':\n> nodes/SUBSYS.o(.text+0xda71): undefined reference to `MIN'\n> nodes/SUBSYS.o(.text+0xdade): undefined reference to `MIN'\n> gmake[2]: *** [postgres] Error 1\n> gmake[2]: Leaving directory `/home/chriskl/pgsql/src/backend'\n> gmake[1]: *** [all] Error 2\n> gmake[1]: Leaving directory `/home/chriskl/pgsql/src'\n> gmake: *** [all] Error 2\n> \n> Chris\n> \n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Ashley Cambrell\n> > Sent: Thursday, 20 December 2001 8:51 AM\n> > To: Tom Lane\n> > Cc: pgsql-hackers@postgresql.org\n> > Subject: 
Re: [HACKERS] 7.2 is slow? [compile problem]\n> >\n> >\n> > I haven't actually tried to compile 7.2 from the CVS, but there seems to\n> > be a problem? [maybe on my side]\n> >\n> > make[3]: Leaving directory\n> > `/home/ash/ash-server/Work/build/pgsql/src/backend/utils'\n> > gcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations\n> > -Wl,-rpath,/tmp//lib -export-dynamic access/SUBSYS.o bootstrap/SUBSYS.o\n> > catalog/SUBSYS.o parser/SUBSYS.o commands/SUBSYS.o executor/SUBSYS.o\n> > lib/SUBSYS.o libpq/SUBSYS.o main/SUBSYS.o nodes/SUBSYS.o\n> > optimizer/SUBSYS.o port/SUBSYS.o postmaster/SUBSYS.o regex/SUBSYS.o\n> > rewrite/SUBSYS.o storage/SUBSYS.o tcop/SUBSYS.o utils/SUBSYS.o -lz\n> > -lcrypt -lresolv -lnsl -ldl -lm -lreadline -o postgres\n> > nodes/SUBSYS.o: In function `pprint':\n> > nodes/SUBSYS.o(.text+0xdc95): undefined reference to `MIN'\n> > nodes/SUBSYS.o(.text+0xdcfd): undefined reference to `MIN'\n> > collect2: ld returned 1 exit status\n> > make[2]: *** [postgres] Error 1\n> > make[2]: Leaving directory\n> > `/home/ash/ash-server/Work/build/pgsql/src/backend'\n> > make[1]: *** [all] Error 2\n> > make[1]: Leaving directory `/home/ash/ash-server/Work/build/pgsql/src'\n> > make: *** [all] Error 2\n> >\n> > In ./src/backend/nodes/print.c:\n> >\n> > /* outdent */\n> > if (indentLev > 0)\n> > {\n> > indentLev--;\n> > indentDist = MIN(indentLev * INDENTSTOP, MAXINDENT);\n> > }\n> >\n> >\n> > If I add\n> > #ifndef MIN\n> > #define MIN(a,b) (((a)<(b)) ? (a) : (b))\n> > #endif\n> > to print.c it compiles fine.\n> >\n> > Ashley Cambrell\n> >\n> >\n> > <snip>\n> >\n> > >\n> > >That's annoying. The LWLock changes were intended to solve the\n> > >inefficiency with multiple CPUs, but it seems like we still have a\n> > >problem somewhere.\n> > >\n> > >Could you recompile the backend with profiling enabled and try to get\n> > >a profile from your test case? 
To build a profilable backend, it's\n> > >sufficient to do\n> > >\n> > >\tcd .../src/backend\n> > >\tgmake clean\n> > >\tgmake PROFILE=-pg all\n> > >\tgmake install-bin\n> > >\n> > >(assuming you are using gcc). Then restart the postmaster, and you\n> > >should notice \"gmon.out\" files being dropped into the various database\n> > >subdirectories anytime a backend exits. Next run your test case,\n> > >and as soon as it finishes copy the gmon.out file to a safe place.\n> > >(You'll only be able to get the profile from the last process to exit,\n> > >so try to make sure that this is representative. Might be worth\n> > >repeating the test a few times to make sure that the results don't\n> > >vary a whole lot.) Finally, do\n> > >\n> > >\tgprof .../bin/postgres gmon.out >resultfile\n> > >\n> > >to produce a legible result.\n> > >\n> > >Oh, one more thing: on Linuxen you are likely to find that all the\n> > >reported routine runtimes are zero, rendering the results useless.\n> > >Apply the attached patch (for 7.2beta) to fix this.\n> > >\n> > >\t\t\tregards, tom lane\n> > >\n> > >*** src/backend/postmaster/postmaster.c.orig\tWed Dec 12 14:52:03 2001\n> > >--- src/backend/postmaster/postmaster.c\tMon Dec 17 19:38:29 2001\n> > >***************\n> > >*** 1823,1828 ****\n> > >--- 1823,1829 ----\n> > > {\n> > > \tBackend *bn;\t\t\t\t/* for backend cleanup */\n> > > \tpid_t\t\tpid;\n> > >+ \tstruct itimerval svitimer;\n> > >\n> > > \t/*\n> > > \t * Compute the cancel key that will be assigned to this backend. 
The\n> > >***************\n> > >*** 1858,1869 ****\n> > >--- 1859,1874 ----\n> > > \tbeos_before_backend_startup();\n> > > #endif\n> > >\n> > >+ \tgetitimer(ITIMER_PROF, &svitimer);\n> > >+\n> > > \tpid = fork();\n> > >\n> > > \tif (pid == 0)\t\t\t\t/* child */\n> > > \t{\n> > > \t\tint\t\t\tstatus;\n> > >\n> > >+ \t\tsetitimer(ITIMER_PROF, &svitimer, NULL);\n> > >+\n> > > \t\tfree(bn);\n> > > #ifdef __BEOS__\n> > > \t\t/* Specific beos backend startup actions */\n> > >\n> > >---------------------------(end of broadcast)---------------------------\n> > >TIP 5: Have you checked our extensive FAQ?\n> > >\n> > >http://www.postgresql.org/users-lounge/docs/faq.html\n> > >\n> >\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 19 Dec 2001 21:40:09 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow? [compile problem]"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, I just committed a fix. MIN() was used in the pretty node print\n> patch; should have been Min().\n\nMea maxima (or MINima?) culpa. Thanks ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Dec 2001 22:18:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow? [compile problem] "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, I just committed a fix. MIN() was used in the pretty node print\n> > patch; should have been Min().\n> \n> Mea maxima (or MINima?) culpa. Thanks ...\n\nI also noticed we define our our MAX/MIN in adt/numeric.c and a few\nother files. I put it on my list to research that in 7.3 and maybe use\nthe c.h versions.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 19 Dec 2001 22:20:42 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow? [compile problem]"
},
{
"msg_contents": "Hi Tom!\n\nWell, here's the profile, but yes, almost all the times are zero. Both without\nthe patch, and with it. Did I miss something? (Yes, I did make and install\nafterwards ;-)\n\nI made a new database, so as it was growing, the times were even slower than\nbefore. (It would be nice, if I could create a large database from the\nbeginning.;)\n\nIs this of any help? Do you need the same result with less cpus?\n\nJussi\n\n\nTom Lane wrote:\n\n> Jussi Mikkola <jussi.mikkola@bonware.com> writes:\n> > Yes, now I've tested with 7.2b4. The result is about the same as with 7.1.\n>\n> > About 200 messages with four processors and about 600 messages with one\n> > processor.\n>\n> That's annoying. The LWLock changes were intended to solve the\n> inefficiency with multiple CPUs, but it seems like we still have a\n> problem somewhere.\n>\n> Could you recompile the backend with profiling enabled and try to get\n> a profile from your test case? To build a profilable backend, it's\n> sufficient to do\n>\n> cd .../src/backend\n> gmake clean\n> gmake PROFILE=-pg all\n> gmake install-bin\n>\n> (assuming you are using gcc). Then restart the postmaster, and you\n> should notice \"gmon.out\" files being dropped into the various database\n> subdirectories anytime a backend exits. Next run your test case,\n> and as soon as it finishes copy the gmon.out file to a safe place.\n> (You'll only be able to get the profile from the last process to exit,\n> so try to make sure that this is representative. Might be worth\n> repeating the test a few times to make sure that the results don't\n> vary a whole lot.) 
Finally, do\n>\n> gprof .../bin/postgres gmon.out >resultfile\n>\n> to produce a legible result.\n>\n> Oh, one more thing: on Linuxen you are likely to find that all the\n> reported routine runtimes are zero, rendering the results useless.\n> Apply the attached patch (for 7.2beta) to fix this.\n>\n> regards, tom lane\n>\n> *** src/backend/postmaster/postmaster.c.orig Wed Dec 12 14:52:03 2001\n> --- src/backend/postmaster/postmaster.c Mon Dec 17 19:38:29 2001\n> ***************\n> *** 1823,1828 ****\n> --- 1823,1829 ----\n> {\n> Backend *bn; /* for backend cleanup */\n> pid_t pid;\n> + struct itimerval svitimer;\n>\n> /*\n> * Compute the cancel key that will be assigned to this backend. The\n> ***************\n> *** 1858,1869 ****\n> --- 1859,1874 ----\n> beos_before_backend_startup();\n> #endif\n>\n> + getitimer(ITIMER_PROF, &svitimer);\n> +\n> pid = fork();\n>\n> if (pid == 0) /* child */\n> {\n> int status;\n>\n> + setitimer(ITIMER_PROF, &svitimer, NULL);\n> +\n> free(bn);\n> #ifdef __BEOS__\n> /* Specific beos backend startup actions */\n\n--\nJussi Mikkola Project Manager\nBonware Oy gsm +358 40 830 7561\nTekniikantie 12 tel +358 9 2517 5570\n02150 Espoo fax +358 9 2517 5571\nFinland www.bonware.com",
"msg_date": "Thu, 20 Dec 2001 15:55:28 +0200",
"msg_from": "Jussi Mikkola <jussi.mikkola@bonware.com>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow?"
},
{
"msg_contents": "Jussi Mikkola <jussi.mikkola@bonware.com> writes:\n> Well, here's the profile, but yes, almost all the times are zero.\n\nIt looks to me like this profile only covers the postmaster, not a\nbackend. You want to use gmon.out from down inside the database's\nsubdirectory ($PGDATA/base/something/gmon.out).\n\n> Both without the patch, and with it. Did I miss something? (Yes, I did\n> make and install afterwards ;-)\n\n[ scratches head ] Dunno. You did restart the postmaster after\ninstalling the new executable, right?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Dec 2001 09:55:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow? "
},
{
"msg_contents": "Jussi Mikkola wrote:\n> \n> Hi Tom!\n> \n> Well, here's the profile, but yes, almost all the times are zero. Both without\n> the patch, and with it. Did I miss something? (Yes, I did make and install\n> afterwards ;-)\n> \n> I made a new database, so as it was growing, the times were even slower than\n> before. (It would be nice, if I could create a large database from the\n> beginning.;)\n> \n> Is this of any help? Do you need the same result with less cpus?\n> \n> Jussi\n> \n> Tom Lane wrote:\n> \n> > Jussi Mikkola <jussi.mikkola@bonware.com> writes:\n> > > Yes, now I've tested with 7.2b4. The result is about the same as with 7.1.\n> >\n> > > About 200 messages with four processors and about 600 messages with one\n> > > processor.\n> >\n\nWas this solved with latest LWLock patches ?\n\n--------------\nHannu\n",
"msg_date": "Wed, 09 Jan 2002 12:02:42 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 is slow?"
}
] |
[
{
"msg_contents": "\n> > Changing the default data location at this point would imho \n> > be a major hassle for those actually using the default.\n> > Thus I object to changing it.\n> \n> The idea is to have the default configuration file location \n> possible to change at configure time.\n\nI totally support that, and think sysconfdir is a good default.\n\n> If whoever compiles PostgreSQL does not specify different \n> location, I guess /usr/local/pgsql/data will be the default location.\n\nThat is what I meant, but proposed was statedir (== /usr/local/pgsql/var).\n\nAndreas\n",
"msg_date": "Wed, 19 Dec 2001 12:00:23 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on the location of configuration files, how about this:"
}
] |
[
{
"msg_contents": "I've just installed 7.2b4 on Irix modifications works perfectly.\noffsetof works properly\nMy apologizes for the long time.\nRegards\nLuis Amigo\nProfiling must wait for a while\n\n\n",
"msg_date": "Wed, 19 Dec 2001 18:25:22 +0100",
"msg_from": "Luis Amigo <lamigo@atc.unican.es>",
"msg_from_op": true,
"msg_subject": "7.2b4 on irix6.5.13"
}
] |
[
{
"msg_contents": "Hi,\n\nI was installing the ODBC driver on an alpha box. Problem is that there \nare assumptions in the typedefs that a four byte integer is a \"long \nint\". Of course, this is often an incorrect assumption. I can fix this, \nbut wanted to know how people wanted this done. How do you handle this \nissue in the server? Seems confingure.in and associated files is the most \nreasonable way to fix this to me.\n\nCheers,\n\n\n-- Andrew Bell\nacbell@iastate.edu\n \n\n",
"msg_date": "Wed, 19 Dec 2001 11:52:10 -0600",
"msg_from": "Andrew Bell <acbell@iastate.edu>",
"msg_from_op": true,
"msg_subject": "long ints use for 4-byte entities in ODBC"
},
{
"msg_contents": "> I was installing the ODBC driver on an alpha box. Problem is that there\n> are assumptions in the typedefs that a four byte integer is a \"long\n> int\". Of course, this is often an incorrect assumption. I can fix this,\n> but wanted to know how people wanted this done. How do you handle this\n> issue in the server? Seems confingure.in and associated files is the most\n> reasonable way to fix this to me.\n\nAlso; which driver manager are you using? I think Nick Gorham has been \nworking on this issue within unixODBC.\n\nPeter\n\n",
"msg_date": "Wed, 19 Dec 2001 11:38:29 -0800",
"msg_from": "Peter Harvey <pharvey@codebydesign.com>",
"msg_from_op": false,
"msg_subject": "Re: long ints use for 4-byte entities in ODBC"
},
{
"msg_contents": "Peter Harvey wrote:\n\n> > I was installing the ODBC driver on an alpha box. Problem is that there\n> > are assumptions in the typedefs that a four byte integer is a \"long\n> > int\". Of course, this is often an incorrect assumption. I can fix this,\n> > but wanted to know how people wanted this done. How do you handle this\n> > issue in the server? Seems confingure.in and associated files is the most\n> > reasonable way to fix this to me.\n>\n> Also; which driver manager are you using? I think Nick Gorham has been\n> working on this issue within unixODBC.\n>\n> Peter\n\nAFAIK there should be nothing wrong with\n\ntypedef Int4 int\n\ninstead of the\n\ntypedef Int4 long\n\nwhich is plainly wrong on 64 bit platforms.\n\n--\nNick Gorham\nEasysoft Ltd\n\n\n\n",
"msg_date": "Thu, 20 Dec 2001 11:41:43 +0000",
"msg_from": "Nick Gorham <nick@easysoft.com>",
"msg_from_op": false,
"msg_subject": "Re: long ints use for 4-byte entities in ODBC"
},
{
"msg_contents": "At 11:41 AM 12/20/2001 +0000, Nick Gorham wrote:\n>Peter Harvey wrote:\n>\n> > > I was installing the ODBC driver on an alpha box. Problem is that there\n> > > are assumptions in the typedefs that a four byte integer is a \"long\n> > > int\". Of course, this is often an incorrect assumption. I can fix this,\n> > > but wanted to know how people wanted this done. How do you handle this\n> > > issue in the server? Seems confingure.in and associated files is the \n> most\n> > > reasonable way to fix this to me.\n> >\n> > Also; which driver manager are you using? I think Nick Gorham has been\n> > working on this issue within unixODBC.\n> >\n> > Peter\n>\n>AFAIK there should be nothing wrong with\n>\n>typedef Int4 int\n>\n>instead of the\n>\n>typedef Int4 long\n>\n>which is plainly wrong on 64 bit platforms.\n\nOf course, the C standard doesn't say anything about the sizes of any of \nthe int-like datatypes, it only specifies their relative sizes. MySQL \n(don't throw stones) addresses the problem like this in configure.in:\n\n-------------------------------------------\n\nAC_CHECK_SIZEOF(int, 4)\nif test \"$ac_cv_sizeof_int\" -eq 0\nthen\n AC_MSG_ERROR(\"No size for int type.\")\nfi\nAC_CHECK_SIZEOF(long, 4)\nif test \"$ac_cv_sizeof_long\" -eq 0\nthen\n AC_MSG_ERROR(\"No size for long type.\")\nfi\nAC_CHECK_SIZEOF(long long, 8)\nif test \"$ac_cv_sizeof_long_long\" -eq 0\nthen\n AC_MSG_ERROR(\"MySQL needs a long long type.\")\nfi\n# off_t is not a builtin type\nMYSQL_CHECK_SIZEOF(off_t, 4)\nif test \"$ac_cv_sizeof_off_t\" -eq 0\nthen\n AC_MSG_ERROR(\"MySQL needs a off_t type.\")\nfi\n\n-------------------------------------------\n\nCoupled with a few SIZEOF_<datatype> checks in the headers which set up the \ntypedefs for sized data, the problem is solved generically.\n\n\n-- Andrew Bell\nacbell@iastate.edu\n \n\n",
"msg_date": "Thu, 20 Dec 2001 10:55:56 -0600",
"msg_from": "Andrew Bell <acbell@iastate.edu>",
"msg_from_op": true,
"msg_subject": "Re: long ints use for 4-byte entities in ODBC"
}
] |
[
{
"msg_contents": "I'd like to claim that these will be removed from the parser in the next\nrelease (7.3). Any objections? I had thought to advocate removing them\nnow, but decided the outcome of the discussion isn't worth the\ndiscussion itself at the moment ;)\n\nComments? I'm updating the docs to reflect \"almost gone deprecated\"\nrather than just \"deprecated\" as they did for 7.1.\n\n - Thomas\n",
"msg_date": "Thu, 20 Dec 2001 03:49:18 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "datetime and timespan deprecated"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> I'd like to claim that these will be removed from the parser in the next\n> release (7.3). Any objections?\n\nNot here. I seem to recall that there are several other \"remove in the\nnext release\" compatibility hacks in gram.y, most of them of more than\none release's standing. Shall we put them all on a \"this IS going away,\nthis time for sure\" hitlist?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Dec 2001 23:18:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: datetime and timespan deprecated "
},
{
"msg_contents": "> Not here. I seem to recall that there are several other \"remove in the\n> next release\" compatibility hacks in gram.y, most of them of more than\n> one release's standing. Shall we put them all on a \"this IS going away,\n> this time for sure\" hitlist?\n\nSure. I'm not recalling any particulars, other than the ODBC hack which\nis already covered now in the driver itself afaict. Not sure if we ever\nhad a detailed test case specified to test that one anyway. Peter?\n\nOh, there is an SQL9x vs PostgreSQL issue wrt whether timestamp defaults\nto \"with time zone\" or not. I have it default to \"with time zone\" for\nthis release for compatibility with previous releases, but we could\nconsider changing that in 7.3 per SQL9x spec on defaults. To make that\neasy, pg_dump should do the right thing by explicitly specifying this on\noutput, and it seems to do that already. So we have a pretty seamless\npath to 7.3 for this issue.\n\n - Thomas\n",
"msg_date": "Thu, 20 Dec 2001 05:25:39 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Re: datetime and timespan deprecated"
}
] |
[
{
"msg_contents": "\n See:\n\n./configure --prefix=/usr/lib/postgresql \\\n --with-unixodbc \\\n --enable-odbc \\\n --with-openssl \\\n --with-pam \\\n --with-python \\\n --with-perl \\\n --with-tcl \\\n --enable-nls \\\n --enable-multibyte \\\n --enable-recode \\\n --enable-locale\n\n [--cut--]\n checking for tclsh... /usr/bin/tclsh\n checking for tclConfig.sh... /usr/lib/tcl8.3/tclConfig.sh\n checking for tkConfig.sh... no\n configure: error: file `tkConfig.sh' is required for Tk\n\n ..hmm, I try:\n\n $ ls -la /usr/lib/tk8.3/tkConfig.sh\n -rw-r--r-- 1 root root 3194 Oct 27 10:00\n /usr/lib/tk8.3/tkConfig.sh\n\n \n If I define directly path by --with-tkconfig=/usr/lib/tk8.3 it pass. \n But why is it needful for tkConfig.sh if it's at very simular place \n as tclConfig.sh?\n\n /usr/lib/tcl8.3/tclConfig.sh\n /usr/lib/tk8.3/tkConfig.sh\n\n Comments?\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Thu, 20 Dec 2001 14:53:07 +0100",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "tkConfig.sh vs. ./configure"
},
{
"msg_contents": "Karel Zak <zakkr@zf.jcu.cz> writes:\n> checking for tclConfig.sh... /usr/lib/tcl8.3/tclConfig.sh\n> checking for tkConfig.sh... no\n> configure: error: file `tkConfig.sh' is required for Tk\n \n> If I define directly path by --with-tkconfig=/usr/lib/tk8.3 it pass. \n> But why is it needful for tkConfig.sh if it's at very simular place \n> as tclConfig.sh?\n\nIt looks like the default way to find the search path for these things\nis to ask Tcl, via\n echo 'puts $auto_path' | $TCLSH\n\nUnfortunately tclsh is only going to answer about plain Tcl, not Tk.\nWe'd need to ask wish to get the path for Tk stuff. For example,\nI get\n\n$ tclsh\n% puts $auto_path\n/usr/local/lib/tcl8.0 /usr/local/lib\n\n$ wish\n% puts $auto_path\n/usr/local/lib/tcl8.0 /usr/local/lib /usr/local/lib/tk8.0\n\nAsking wish does not seem like a good idea, since it will fail to fire\nup if you aren't in an X environment.\n\nHowever, on my machine both tclConfig.sh and tkConfig.sh are in\n/usr/local/lib, not in the subdirectories. Putting them in\nversion-specific subdirectories seems pretty self-defeating.\nWhat packaging of tcl/tk did you use?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Dec 2001 10:20:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: tkConfig.sh vs. ./configure "
},
{
"msg_contents": "On Thu, Dec 20, 2001 at 10:20:34AM -0500, Tom Lane wrote:\n> Karel Zak <zakkr@zf.jcu.cz> writes:\n\n> However, on my machine both tclConfig.sh and tkConfig.sh are in\n> /usr/local/lib, not in the subdirectories. Putting them in\n> version-specific subdirectories seems pretty self-defeating.\n> What packaging of tcl/tk did you use?\n\n Latest unstable GNU/Linux Debian. Tcl 8.3.3-1, Tk 8.3.3-1.\n \n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Thu, 20 Dec 2001 16:36:18 +0100",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "Re: tkConfig.sh vs. ./configure"
},
{
"msg_contents": "Karel Zak <zakkr@zf.jcu.cz> writes:\n> On Thu, Dec 20, 2001 at 10:20:34AM -0500, Tom Lane wrote:\n>> What packaging of tcl/tk did you use?\n\n> Latest unstable GNU/Linux Debian. Tcl 8.3.3-1, Tk 8.3.3-1.\n\nI think the Debian packager blew it.\n\nI just looked at a Red Hat 7.2 machine, and it has both tclConfig.sh\nand tkConfig.sh in /usr/lib.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Dec 2001 10:42:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: tkConfig.sh vs. ./configure "
},
{
"msg_contents": "Karel Zak writes:\n\n> [--cut--]\n> checking for tclsh... /usr/bin/tclsh\n> checking for tclConfig.sh... /usr/lib/tcl8.3/tclConfig.sh\n> checking for tkConfig.sh... no\n> configure: error: file `tkConfig.sh' is required for Tk\n>\n> ..hmm, I try:\n>\n> $ ls -la /usr/lib/tk8.3/tkConfig.sh\n> -rw-r--r-- 1 root root 3194 Oct 27 10:00\n> /usr/lib/tk8.3/tkConfig.sh\n\nThe tclConfig.sh file is found by looking into the path returned by `echo\n'puts $auto_path' | tclsh`.\n\nThen theoretically, the tkConfig.sh file should be found by looking into\nthe path returned by `echo 'puts $auto_path' | wish`, which would indeed\ngive the right answer, but when I execute that, wish also opens a window\non my desktop and hangs, which is not exactly what you'd want during a\nconfigure run.\n\nIf you have a plan to work around that and the case where X is not running\nduring configuration, then I'm all ears.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 20 Dec 2001 18:13:12 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: tkConfig.sh vs. ./configure"
},
{
"msg_contents": ">>>Peter Eisentraut said:\n > Karel Zak writes:\n > \n > > [--cut--]\n > > checking for tclsh... /usr/bin/tclsh\n > > checking for tclConfig.sh... /usr/lib/tcl8.3/tclConfig.sh\n > > checking for tkConfig.sh... no\n > > configure: error: file `tkConfig.sh' is required for Tk\n > >\n > > ..hmm, I try:\n > >\n > > $ ls -la /usr/lib/tk8.3/tkConfig.sh\n > > -rw-r--r-- 1 root root 3194 Oct 27 10:00\n > > /usr/lib/tk8.3/tkConfig.sh\n > \n > The tclConfig.sh file is found by looking into the path returned by `echo\n > 'puts $auto_path' | tclsh`.\n > \n > Then theoretically, the tkConfig.sh file should be found by looking into\n > the path returned by `echo 'puts $auto_path' | wish`, which would indeed\n > give the right answer, but when I execute that, wish also opens a window\n > on my desktop and hangs, which is not exactly what you'd want during a\n > configure run.\n\nWhat about `echo 'puts $auto_path; exit' | wish'?\n\nDaniel\n\n\n",
"msg_date": "Thu, 20 Dec 2001 19:29:35 +0200",
"msg_from": "Daniel Kalchev <daniel@digsys.bg>",
"msg_from_op": false,
"msg_subject": "Re: tkConfig.sh vs. ./configure "
},
{
"msg_contents": "On Thu, Dec 20, 2001 at 06:13:12PM +0100, Peter Eisentraut wrote:\n\n> The tclConfig.sh file is found by looking into the path returned by `echo\n> 'puts $auto_path' | tclsh`.\n> \n> Then theoretically, the tkConfig.sh file should be found by looking into\n> the path returned by `echo 'puts $auto_path' | wish`, which would indeed\n\n In the X-win:\n\n $ echo 'puts $auto_path; exit' | wish\n /usr/lib/tcl8.3 /usr/lib /usr/lib/tk8.3\n\n it's right, but really ugly is that it require X display. \n \n $ echo 'puts $auto_path; exit' | wish\n Application initialization failed: no display name and no $DISPLAY\n environment variable\n\n\n The other thing is that tcl*.h and tk.h files are in /usr/inlude/tcl8.3,\n and \n\n$ ./configure --prefix=/usr/lib/postgresql \\\n --with-tcl \\\n --with-tkconfig=/usr/lib/tk8.3/\n \n finish with:\n\nmake[4]: Leaving directory `/var/home/PG_DEVEL/pgsql/src/interfaces/libpgtcl'\ngcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/interfaces/libpgtcl -I../../../src/include -I/usr/X11R6/include -c -o pgtkAppInit.o pgtkAppInit.c\npgtkAppInit.c:15: tk.h: No such file or directory\nmake[3]: *** [pgtkAppInit.o] Error 1\n\n (yes, I know --includedir= etc., but it's _standard_ latest unstable\n Debian...) \n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Thu, 20 Dec 2001 19:00:20 +0100",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "Re: tkConfig.sh vs. ./configure"
},
{
"msg_contents": "Karel Zak <zakkr@zf.jcu.cz> writes:\n> The other thing is that tcl*.h and tk.h files are in /usr/inlude/tcl8.3,\n\nThere's not a darn thing we can do about that, because tclConfig.sh\ndoesn't tell exactly where the include files are (which is Tcl's\noversight, but we're stuck with it). We could try \"$TCL_PREFIX/include\",\nwhich'd work on standard Tcl install layouts, but Debian has evidently\nmanaged to botch that too.\n\n> (yes, I know --includedir= etc., but it's _standard_ latest unstable\n> Debian...) \n\nIt may be Debian's idea of standard, but that doesn't mean it's not\nbroken. How are we supposed to guess where they are hiding these files,\nshort of searching the entire filesystem?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Dec 2001 13:29:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: tkConfig.sh vs. ./configure "
},
{
"msg_contents": "On Thu, Dec 20, 2001 at 01:29:42PM -0500, Tom Lane wrote:\n> Karel Zak <zakkr@zf.jcu.cz> writes:\n> > The other thing is that tcl*.h and tk.h files are in /usr/inlude/tcl8.3,\n> \n> There's not a darn thing we can do about that, because tclConfig.sh\n> doesn't tell exactly where the include files are (which is Tcl's\n> oversight, but we're stuck with it). We could try \"$TCL_PREFIX/include\",\n> which'd work on standard Tcl install layouts, but Debian has evidently\n> managed to botch that too.\n> \n> > (yes, I know --includedir= etc., but it's _standard_ latest unstable\n> > Debian...) \n> \n> It may be Debian's idea of standard, but that doesn't mean it's not\n\n I agree, but change of this is probably out of our possibility...\n\n> broken. How are we supposed to guess where they are hiding these files,\n> short of searching the entire filesystem?\n\n\n I see debian/rule (build system) of tkdvi package and here is \n \n ./configure --with-tcl=/usr/lib/tcl8.3 \\\n --with-tk=/usr/lib/tk8.3 \\\n --with-tclinclude=/usr/include/tcl8.3 \\\n --with-tkinclude=/usr/include/tcl8.3\n\n Karel\n \n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Thu, 20 Dec 2001 19:48:16 +0100",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "Re: tkConfig.sh vs. ./configure"
},
{
"msg_contents": "\nWas this resolved?\n\n---------------------------------------------------------------------------\n\n> On Thu, Dec 20, 2001 at 01:29:42PM -0500, Tom Lane wrote:\n> > Karel Zak <zakkr@zf.jcu.cz> writes:\n> > > The other thing is that tcl*.h and tk.h files are in /usr/inlude/tcl8.3,\n> > \n> > There's not a darn thing we can do about that, because tclConfig.sh\n> > doesn't tell exactly where the include files are (which is Tcl's\n> > oversight, but we're stuck with it). We could try \"$TCL_PREFIX/include\",\n> > which'd work on standard Tcl install layouts, but Debian has evidently\n> > managed to botch that too.\n> > \n> > > (yes, I know --includedir= etc., but it's _standard_ latest unstable\n> > > Debian...) \n> > \n> > It may be Debian's idea of standard, but that doesn't mean it's not\n> \n> I agree, but change of this is probably out of our possibility...\n> \n> > broken. How are we supposed to guess where they are hiding these files,\n> > short of searching the entire filesystem?\n> \n> \n> I see debian/rule (build system) of tkdvi package and here is \n> \n> ./configure --with-tcl=/usr/lib/tcl8.3 \\\n> --with-tk=/usr/lib/tk8.3 \\\n> --with-tclinclude=/usr/include/tcl8.3 \\\n> --with-tkinclude=/usr/include/tcl8.3\n> \n> Karel\n> \n> -- \n> Karel Zak <zakkr@zf.jcu.cz>\n> http://home.zf.jcu.cz/~zakkr/\n> \n> C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 29 Dec 2001 16:44:18 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: tkConfig.sh vs. ./configure"
}
] |
[
{
"msg_contents": "\n Hi,\n\n on system without perl-dev the ./configure --with-perl pass without\n some error message and error appear during compilation:\n\n LD_RUN_PATH=\"\" cc -shared -L/usr/local/lib plperl.o eloglvl.o SPI.o -L/usr/local/lib /usr/lib/perl/5.6.1/auto/DynaLoader/DynaLoader.a -L/usr/lib/perl/5.6.1/CORE -lperl -ldl -lm -lc -lcrypt -o blib/arch/auto/plperl/plperl.so\n /usr/bin/ld: cannot find -lperl\n collect2: ld returned 1 exit status\n\n IMHO right place for this is in the ./configure and something like \n AC_CHECK_LIB().\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Thu, 20 Dec 2001 19:16:26 +0100",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "-lperl not checked"
},
{
"msg_contents": "Karel Zak writes:\n\n> LD_RUN_PATH=\"\" cc -shared -L/usr/local/lib plperl.o eloglvl.o SPI.o -L/usr/local/lib /usr/lib/perl/5.6.1/auto/DynaLoader/DynaLoader.a -L/usr/lib/perl/5.6.1/CORE -lperl -ldl -lm -lc -lcrypt -o blib/arch/auto/plperl/plperl.so\n> /usr/bin/ld: cannot find -lperl\n> collect2: ld returned 1 exit status\n>\n> IMHO right place for this is in the ./configure and something like\n> AC_CHECK_LIB().\n\nFirst, the Perl build uses mostly the information provided by MakeMaker,\nnot by configure. MakeMaker is completely broken and it's only\ncoincidence that it returns the right answers most of the time. I've made\nsome desperate attempts to introduce some sane configury into this system,\nbut the interfaces are so obscure and poorly documented, it's pretty\ndifficult. But it will keep evolving until it does the right thing.\n\nSecondly, in case of a missing libperl, there isn't much you can do as an\nalternative, so a configure test will only report an error at a different\ntime. I don't think we should load up configure with those kinds of\ntests.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 23 Dec 2001 19:16:13 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: -lperl not checked"
}
] |
[
{
"msg_contents": "A quick search of the source code and archives failed to show \nanything related to this (which may reflect my searching skills \nmore than anything else), so I'll toss it out here as a wishlist\nitem. \n\nWishlist: TLS (SSL) authentication\n----------------------------------\n\nWhile superficially similar to SSL, the emphasis here is\non mutual strong authentication via digital certificates,\nnot an encrypted data channel.\n\nSERVER AUTHENTICATION\n\nThe database server requires an X.509 certificate and private\nkey before it will accept TLS/SSL connections, and it can provide\nthis cert to the client. If the certificate does not match a \npreviously published one or is not verifiable via external means\nthen the connection can be refused. This prevents a server from \nbeing \"hijacked\" - the connection may be forwarded, but the \ncerts won't match.\n\nThis functionality can be largely duplicated with careful\nconfiguration of an SSL tunnel with port forwarding, but \nOpenSSL tunnels require that the user has a \"shell account\"\n(even if the \"shell\" is /bin/yes) on the database server.\nThis approach does not require user \"shell accounts.\"\n\nIf \"anonymous DH\" session keys are supported, this functionality\nmakes the database server \"ssl-aware\" much like HTTPS servers.\n(In fact, the implementation will almost certainly start with\n\"anonymous DH\" and add client authentication later.)\n\nCLIENT AUTHENTICATION\n\nThe clients require an X.509 certificate before it will accept\nTLS connections, and this can be used as strong authentication\nof the client. (\"something they have\" - an X.509 certificate,\nand \"something they know\" - the password to the private key for\nthat certificate.) This can be tied into the PostgreSQL \nauthentication mechanism in the usual way.\n\nThe client certs can be issued by the server (if it acts as a\ncertificate authority for this purpose), or issued elsewhere and\nstored in the server. With PKI, the certs could also be checked\nagainst external certificate stores. You *don't* want to simply\ncheck the \"distinguished name\" on the certificate since it's\ntrivial to create a fraudulent self-signed certificate.\n\nLong term: X.509 certificates can be stored on \"smart cards,\"\nUSB fobs, or the like. Clients could be written that only \nsuccessfully connect when the user has attached such a device \nto the system, providing strong evidence (with a USB keychain\nfob, at least) that the individual is actually present.\n\nThis functionality cannot be duplicated with OpenSSL tunnels.\n\nPUBLIC KEY INFRASTRUCTURE (PKI) RESOURCES\n\nHinted at above are extensions to the server itself to support\nPKI. This can be supported in phases:\n\n 1) new data types to store PKI information.\n\n 2) self-hosting certificate authority (CA) functionality, allowing\n the database to maintain its own server and client certificates.\n (no need to bother with external CAs - most users just need\n\tto run a program like pg_keygen and it will handle everything.)\n\n 3) public certificate store functionality, for storage and retrieval\n\tof certificates used by other applications.\n\n------------------------------------------------------------\n\nAs a practical matter, the lion's share of the work to implement\nthis is in moving the network code from sockets to SSL. Once\nthat's been done it's straightforward to request the client\ncertificate and tie it into the authentication code.\n\nHowever, I have finished[*] TOASTable user-defined types for \nvarious PKI items: X509, X509_REQ, X509_CRL, PKCS7, PKCS8, SPKAC,\nplus some utility classes. I've also written all of the useful \nuser-defined accessor functions, e.g., x509_subject(x509). I'm\nwilling to contribute this code to the PostgreSQL contrib (or\nmain ADT) section.\n\n[*] I lie. The implementation with the old interface was\nfinished, the port to TOASTable 7.1.3 should be finished by \nthis weekend. But I wanted to get this message out now since\nmany people will be distracted next week. :-)\n\n---\nBear Giles\nbgiles (at) coyotesong (dot) com\n",
"msg_date": "Thu, 20 Dec 2001 11:39:45 -0700 (MST)",
"msg_from": "Bear Giles <bear@coyotesong.com>",
"msg_from_op": true,
"msg_subject": "Wishlist: TLS, PKI"
}
] |
[
{
"msg_contents": "Where are we on the RC1 release? What are the open items? I have\ngotten lost.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Dec 2001 18:15:45 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Status on RC1?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Where are we on the RC1 release? What are the open items?\n\n* That missing \"July\" entry in datetime.c (Thomas indicated that he'd\n fix that, but as far as I've seen he didn't commit yet).\n\n* I think there are some unapplied NLS submissions (Peter seems to be\n keeping track of those).\n\n* Probable bug in vacuum's tuple chain moving, per my message of the\n 18th (Vadim hasn't responded to that yet).\n\n* Hiroshi reported a regression failure on R4000 SysV awhile back,\n but with no followup details there's little we can do.\n\n* Have you committed the SunOS memcmp fix yet?\n\nIn addition to these we have some unresolved bug reports about\nduplicated tuples and sequences being mis-restored after crashes.\nHowever, those bugs (if real) are in 7.1.*, so they don't seem\nlike a good reason to hold off a 7.2 release.\n\nThat's everything on my hot-list.\n\nPerhaps we should be thinking about a docs freeze and the usual\nother preparations for release.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Dec 2001 22:15:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Status on RC1? "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Where are we on the RC1 release? What are the open items?\n> \n> * That missing \"July\" entry in datetime.c (Thomas indicated that he'd\n> fix that, but as far as I've seen he didn't commit yet).\n\nYes, I have that in my mailbox too, with Thomas's patch.\n\n> * I think there are some unapplied NLS submissions (Peter seems to be\n> keeping track of those).\n\nYep, I see those. Waiting on Peter for application.\n\n> * Probable bug in vacuum's tuple chain moving, per my message of the\n> 18th (Vadim hasn't responded to that yet).\n> \n> * Hiroshi reported a regression failure on R4000 SysV awhile back,\n> but with no followup details there's little we can do.\n> \n> * Have you committed the SunOS memcmp fix yet?\n\nYes. Stuck in queue for moderator because the subject has the word\n\"config.\"\n\n> \n> In addition to these we have some unresolved bug reports about\n> duplicated tuples and sequences being mis-restored after crashes.\n> However, those bugs (if real) are in 7.1.*, so they don't seem\n> like a good reason to hold off a 7.2 release.\n> \n> That's everything on my hot-list.\n> \n> Perhaps we should be thinking about a docs freeze and the usual\n> other preparations for release.\n\nSeems we should move ahead. We have to leave something for the minor\nreleases. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Dec 2001 22:53:00 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Status on RC1?"
},
{
"msg_contents": "> * That missing \"July\" entry in datetime.c\n\nOK, I've committed the fix. Also, I removed a few duplicate lines from\nodbc.sql left over from my last ill-fated update attempt via my bad DSL\nline (looks like they fixed it today. woohoo!).\n\n - Thomas\n",
"msg_date": "Fri, 21 Dec 2001 06:10:36 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Status on RC1?"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> * That missing \"July\" entry in datetime.c\n> OK, I've committed the fix.\n\nSounds good. How are you feeling about the state of the documentation?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Dec 2001 01:23:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Status on RC1? "
},
{
"msg_contents": "> > * That missing \"July\" entry in datetime.c\n> OK, I've committed the fix.\n\nfwiw, I seem to have lost that line in September, when I added more ISO\ntime support to the date/time routines. I've checked the rest of that\nsame update and nothing else seems to be missing; must have been an\nediting fat finger on my part...\n\n - Thomas\n",
"msg_date": "Fri, 21 Dec 2001 06:30:26 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Status on RC1?"
},
{
"msg_contents": "> Sounds good. How are you feeling about the state of the documentation?\n\nNot bad afaict. I had made a tutorial hardcopy a couple of weeks ago\n(nothing is likely to have changed on that one since then) and the\nothers should flow fairly quickly (much faster than in previous years).\nThough I'm not sure if hardcopy is even going to be packaged into the\nmain release this time around, others will likely want to have them for,\nsay, RPM packaging (Lamar?).\n\nI usually find a bunch of small things when going through the docs to\nbuild the hardcopy.\n\nThere is doc/HISTORY, and doc/src/sgml/release.sgml which needs to be\nmerged. We *should* be generating doc/HISTORY from the release.sgml\nsource, but we apparently have never started doing that. Next release,\nwe really should since the sgml docs could and should be the definitive\nsource for this kind of information.\n\nFor now, someone will need to hand-merge the contents of doc/HISTORY\nback into the sgml source file.\n\n - Thomas\n",
"msg_date": "Fri, 21 Dec 2001 06:42:48 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Status on RC1?"
},
{
"msg_contents": "> There is doc/HISTORY, and doc/src/sgml/release.sgml which needs to be\n> merged. We *should* be generating doc/HISTORY from the release.sgml\n> source, but we apparently have never started doing that. Next release,\n> we really should since the sgml docs could and should be the definitive\n> source for this kind of information.\n> \n> For now, someone will need to hand-merge the contents of doc/HISTORY\n> back into the sgml source file.\n\nI have indicated the places where things should come from the HISTORY\nfile to release.sgml. It would be nice to make it automatic. I think\nthe problem we had was that ASCII output from SGML is kind of hard to\ntweak to look good.\n\nWhenever you want to get started, just cut-paste them over. If you can\ngive me a little warning, I will make sure HISTORY is up to date with\nCVS. I know it is pretty close now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 Dec 2001 01:51:39 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Status on RC1?"
},
{
"msg_contents": "\nSeems these are all done except for duplicates, but we picked up some\nmore in the mean time. :-)\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Where are we on the RC1 release? What are the open items?\n> \n> * That missing \"July\" entry in datetime.c (Thomas indicated that he'd\n> fix that, but as far as I've seen he didn't commit yet).\n> \n> * I think there are some unapplied NLS submissions (Peter seems to be\n> keeping track of those).\n> \n> * Probable bug in vacuum's tuple chain moving, per my message of the\n> 18th (Vadim hasn't responded to that yet).\n> \n> * Hiroshi reported a regression failure on R4000 SysV awhile back,\n> but with no followup details there's little we can do.\n> \n> * Have you committed the SunOS memcmp fix yet?\n> \n> In addition to these we have some unresolved bug reports about\n> duplicated tuples and sequences being mis-restored after crashes.\n> However, those bugs (if real) are in 7.1.*, so they don't seem\n> like a good reason to hold off a 7.2 release.\n> \n> That's everything on my hot-list.\n> \n> Perhaps we should be thinking about a docs freeze and the usual\n> other preparations for release.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jan 2002 01:11:14 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Status on RC1?"
}
] |
[
{
"msg_contents": "> If \"pre-page WAL write\" means the value of the page before the current\n> changes, then there is generally another reason for writing it out.\n\nBruce, \"*pre*-page\" is confusing - we write \"after-change\" page\nimage to WAL.\n\n> When the system comes back up, we need to do a rollback on\n> transaction B since it did not commit and we need the \"pre-page\"\n> to know how to undo the change for B that got saved in step 6 above.\n\nBrian, PGSQL still uses non-overwriting storage manager -\nremoving rows inserted by aborted transactions is not required.\n\nVadim\n",
"msg_date": "Thu, 20 Dec 2001 16:03:58 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Perform"
},
{
"msg_contents": "> > If \"pre-page WAL write\" means the value of the page before the current\n> > changes, then there is generally another reason for writing it out.\n> \n> Bruce, \"*pre*-page\" is confusing - we write \"after-change\" page\n> image to WAL.\n\nYes, I didn't like pre-page either. Changed.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Dec 2001 22:56:06 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Perform"
}
] |
[
{
"msg_contents": "I wrote two functions I think are quite cool.\n\nint_aggregate_array(int)\nand\nint_enum_array(int[])\n\n\nWhile I'm not sure I can submit them because I wrote them on company\ntime, I thought I should tell you about them because they are fairly\ntrivial, and could make postgresql better.\n\nThey are used as:\n\ncreate table test select id1, int_aggregate_array(id2) as ar group by\nid1;\n\nThis creates a summary table. It is used as:\n\nselect id1, int_enum_array(ar) from test where id1 = 'nnnnn';\n\n\nIf you have a \"one to many\" table the \"int_aggregate_array\" will reduce\nit to one row with an array. The int_enum_array() is used to extract it.\n\nThe idea is that a multiple a -> b records could be encoded as one row.\nThis could be a huge performance improvement.\n",
"msg_date": "Thu, 20 Dec 2001 19:53:10 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Integer aggregate and enum"
}
] |
[
{
"msg_contents": "Hi All,\n\nYou know how when you create a foreign key in postgres it isn't\nautomatically indexed, and it seems to me that it's very useful to have\nindexed foreign keys, especially if you use lots of them.\n\nSo, how about a 'findslowfks' contrib? This would basically be similar to\nBruce's findoidjoins thingy...\n\nJust an idea,\n\nChris\n\n",
"msg_date": "Fri, 21 Dec 2001 10:47:01 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "contrib idea"
},
{
"msg_contents": "> Hi All,\n> \n> You know how when you create a foreign key in postgres it isn't\n> automatically indexed, and it seems to me that it's very useful to have\n> indexed foreign keys, especially if you use lots of them.\n> \n> So, how about a 'findslowfks' contrib? This would basically be similar to\n> Bruce's findoidjoins thingy...\n\nWhy would you want an index on a foreign key. Primary I can understand,\nbut is there use to foreignt? Is it for checking of changes to primary\nkeys?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Dec 2001 22:58:40 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib idea"
},
{
"msg_contents": "If you have a foreign key on a column, then whenever the primary key is\nmodified, the following checks may occur:\n\n* Check to see if the child row exists (no action)\n* Delete the child row (cascade delete)\n* Update the child row (cascade update)\n\nAll of which will benefit from an index...\n\nChris\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> Sent: Friday, 21 December 2001 11:59 AM\n> To: Christopher Kings-Lynne\n> Cc: Hackers\n> Subject: Re: [HACKERS] contrib idea\n>\n>\n> > Hi All,\n> >\n> > You know how when you create a foreign key in postgres it isn't\n> > automatically indexed, and it seems to me that it's very useful to have\n> > indexed foreign keys, especially if you use lots of them.\n> >\n> > So, how about a 'findslowfks' contrib? This would basically be\n> similar to\n> > Bruce's findoidjoins thingy...\n>\n> Why would you want an index on a foreign key. Primary I can understand,\n> but is there use to foreignt? Is it for checking of changes to primary\n> keys?\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n",
"msg_date": "Fri, 21 Dec 2001 12:03:25 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: contrib idea"
},
{
"msg_contents": "> If you have a foreign key on a column, then whenever the primary key is\n> modified, the following checks may occur:\n> \n> * Check to see if the child row exists (no action)\n> * Delete the child row (cascade delete)\n> * Update the child row (cascade update)\n> \n> All of which will benefit from an index...\n\nOK, then perhaps we should be creating an index automatically? Folks?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Dec 2001 23:03:53 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib idea"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> If you have a foreign key on a column, then whenever the primary key is\n>> modified, the following checks may occur:\n>> \n>> * Check to see if the child row exists (no action)\n>> * Delete the child row (cascade delete)\n>> * Update the child row (cascade update)\n>> \n>> All of which will benefit from an index...\n\n> OK, then perhaps we should be creating an index automatically? Folks?\n\nWe should not *force* people to have an index. If the master table very\nseldom changes, then an index on the referencing table will be a net\nloss (at least as far as the foreign-key ops go). You'll pay for it on\nevery referencing-table update, and use it only seldom.\n\nPossibly there should be an entry in the \"performance tips\" chapter\nrecommending that people consider adding an index on the referencing\ncolumn if they are concerned about the speed of updates to the\nreferenced table. But I dislike software that considers itself smarter\nthan the DBA.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Dec 2001 23:29:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib idea "
},
{
"msg_contents": "> Possibly there should be an entry in the \"performance tips\" chapter\n> recommending that people consider adding an index on the referencing\n> column if they are concerned about the speed of updates to the\n> referenced table. But I dislike software that considers itself smarter\n> than the DBA.\n\nWhich is why I proposed a contrib that can assist the DBA in finding ones\nthey've forgotten to index...\n\nIn fact, it would be cool if it just dumped out a whole bunch of CREATE\nINDEX commands...\n\nChris\n\n",
"msg_date": "Fri, 21 Dec 2001 12:33:09 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: contrib idea "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> If you have a foreign key on a column, then whenever the primary key is\n> >> modified, the following checks may occur:\n> >> \n> >> * Check to see if the child row exists (no action)\n> >> * Delete the child row (cascade delete)\n> >> * Update the child row (cascade update)\n> >> \n> >> All of which will benefit from an index...\n> \n> > OK, then perhaps we should be creating an index automatically? Folks?\n> \n> We should not *force* people to have an index.  If the master table very\n> seldom changes, then an index on the referencing table will be a net\n> loss (at least as far as the foreign-key ops go).  You'll pay for it on\n> every referencing-table update, and use it only seldom.\n> \n> Possibly there should be an entry in the \"performance tips\" chapter\n> recommending that people consider adding an index on the referencing\n> column if they are concerned about the speed of updates to the\n> referenced table.  But I dislike software that considers itself smarter\n> than the DBA.\n\nKeep in mind that the penalty for no index is a sequential scan, which\n_usually_ is a light operation.  In fact, many queries don't even use\nindexes if they are going to need to see more than a small portion of\nthe table.\n\nBut yes, if your primary key is changing often, that is a valid issue.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Dec 2001 23:35:39 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib idea"
},
{
"msg_contents": "> Keep in mind that the penalty for no index is a sequential scan, which\n> _usually_ is a light operation.  In fact, many queryes don't even use\n> indexes if they are going to need to see more than a small portion of\n> the table.\n\nI agree... \n\nManaging customers' DBs for years now, I'm convinced that systematic indexes are\ngood only for the intellect of the DBA because it may respect some methods :-)\n\nToo many tables with less than thousands of records. Automatic indexes are\nannoying, I have to drop em all every time. It's harder to think in dropping\nunwanted indexes than creating wanted ones.\n\nI know DBAs that drop the automatic PK index created by PG only because the naming\nmethod chosen for the index is not like they want.. :-)\n\nTable scans are always a good idea for little tables. Even more if the table is \nfully cached (I dream of a \"CREATE TABLE... CACHE\"). Cool too when we'll be\nable to store execution plans :-)\n\nFinally, there would be tables with more indexes than data :-) if you consider\ntables with many FK. Where's the gain then?\n\nBest regards,\n\n-- \nJean-Paul ARGUDO \t\tIDEALX S.A.S\nConsultant bases de données\t\t\t15-17, av. de Ségur\nhttp://IDEALX.com/ \t\t\t\tF-75007 PARIS\n",
"msg_date": "Fri, 21 Dec 2001 10:12:59 +0100",
"msg_from": "Jean-Paul ARGUDO <jean-paul.argudo@IDEALX.com>",
"msg_from_op": false,
"msg_subject": "Re: contrib idea"
},
{
"msg_contents": "Tom Lane wrote:\n\n\n> We should not *force* people to have an index. If the master table very\n> seldom changes, then an index on the referencing table will be a net\n> loss (at least as far as the foreign-key ops go). You'll pay for it on\n> every referencing-table update, and use it only seldom.\n\n\nNot only that but it's non standard ... people porting code over which \ncorrectly defines an explicit index when appropriate would end up with \ntwo of them.\n\n\n> Possibly there should be an entry in the \"performance tips\" chapter\n> recommending that people consider adding an index on the referencing\n> column if they are concerned about the speed of updates to the\n> referenced table. But I dislike software that considers itself smarter\n> than the DBA.\n\n\nThis is a much better idea.\n\n\n\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Fri, 21 Dec 2001 08:09:59 -0800",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": false,
"msg_subject": "Re: contrib idea"
},
{
"msg_contents": "Don Baccus writes:\n\n> Not only that but it's non standard ... people porting code over which\n> correctly defines an explicit index when appropriate would end up with\n> two of them.\n\nNot that there's anything remotely standard about indexes...\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sat, 22 Dec 2001 09:30:47 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: contrib idea"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> If you have a foreign key on a column, then whenever the primary key is\n> >> modified, the following checks may occur:\n> >> \n> >> * Check to see if the child row exists (no action)\n> >> * Delete the child row (cascade delete)\n> >> * Update the child row (cascade update)\n> >> \n> >> All of which will benefit from an index...\n> \n> > OK, then perhaps we should be creating an index automatically?  Folks?\n> \n> We should not *force* people to have an index.  If the master table very\n> seldom changes, then an index on the referencing table will be a net\n> loss (at least as far as the foreign-key ops go).  You'll pay for it on\n> every referencing-table update, and use it only seldom.\n> \n> Possibly there should be an entry in the \"performance tips\" chapter\n> recommending that people consider adding an index on the referencing\n> column if they are concerned about the speed of updates to the\n> referenced table.  But I dislike software that considers itself smarter\n> than the DBA.\n\nOK, I have added the following to the create_lang.sgml manual page.  I\ncouldn't find a good place to put this in the performance page.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n\nIndex: doc/src/sgml/ref/create_table.sgml\n===================================================================\nRCS file: /cvsroot/pgsql/doc/src/sgml/ref/create_table.sgml,v\nretrieving revision 1.50\ndiff -c -r1.50 create_table.sgml\n*** doc/src/sgml/ref/create_table.sgml\t2001/12/08 03:24:35\t1.50\n--- doc/src/sgml/ref/create_table.sgml\t2002/01/03 06:23:36\n***************\n*** 437,442 ****\n--- 437,449 ----\n      </varlistentry>\n     </variablelist>\n    </para>\n+   <para>\n+    If primary key column is updated frequently, it may be wise to\n+    add an index to the <literal>REFERENCES</literal> column so that\n+    <literal>NO ACTION</literal> and <literal>CASCADE</literal>\n+    actions associated with the <literal>REFERENCES</literal>\n+    column can be more efficiently performed.\n+   </para>\n  \n   </listitem>\n  </varlistentry>\n***************\n*** 472,477 ****\n--- 479,486 ----\n    </listitem>\n   </varlistentry>\n  </variablelist>\n+ \n+ \n </refsect1>",
"msg_date": "Thu, 3 Jan 2002 01:25:52 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib idea"
}
] |
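The 'findslowfks' helper proposed at the top of this thread can be approximated with a catalog query. The sketch below is a rough, hypothetical version against the modern `pg_constraint` catalog (the RI implementation of that era kept foreign keys as `pg_trigger` entries instead, so a contemporary tool would have had to dig through trigger arguments); it only checks whether the foreign-key column leads some index:

```sql
-- Sketch: referencing columns of foreign keys that have no index
-- starting with that column.
SELECT c.conrelid::regclass AS referencing_table,
       a.attname            AS fk_column
FROM pg_constraint c
JOIN pg_attribute a
  ON a.attrelid = c.conrelid
 AND a.attnum   = ANY (c.conkey)
WHERE c.contype = 'f'
  AND NOT EXISTS (SELECT 1
                  FROM pg_index i
                  WHERE i.indrelid = c.conrelid
                    AND i.indkey[0] = a.attnum);
```

Turning each result row into a ready-to-run `CREATE INDEX` statement, as suggested later in the thread, is then just a formatting step.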
[
{
"msg_contents": "In HEAD contrib/dbase:\n\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../src/\ninterf\naces/libpq -I. -I../../src/include -c -o dbf2pg.o dbf2pg.c\ndbf2pg.c:19: iconv.h: No such file or directory\ndbf2pg.c:38: syntax error before `iconv_d'\ndbf2pg.c:38: warning: type defaults to `int' in declaration of `iconv_d'\ndbf2pg.c:38: warning: data definition has no type or storage class\ndbf2pg.c: In function `convert_charset':\ndbf2pg.c:148: warning: implicit declaration of function `iconv'\ndbf2pg.c: In function `main':\ndbf2pg.c:789: warning: implicit declaration of function `iconv_open'\ndbf2pg.c:790: `iconv_t' undeclared (first use in this function)\ndbf2pg.c:790: (Each undeclared identifier is reported only once\ndbf2pg.c:790: for each function it appears in.)\ndbf2pg.c:810: warning: implicit declaration of function `iconv_close'\ngmake: *** [dbf2pg.o] Error 1\n\nChris\n\n",
"msg_date": "Fri, 21 Dec 2001 11:27:10 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "The dbase conrtib doesn't compile"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> In HEAD contrib/dbase:\n> gcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../src/\n> interf\n> aces/libpq -I. -I../../src/include -c -o dbf2pg.o dbf2pg.c\n> dbf2pg.c:19: iconv.h: No such file or directory\n\nLooks like someone took a shortcut in dealing with <iconv.h>.\nWhat the heck is that, anyway, and do we need the ifdef'd code at all?\n\n(FWIW, the code compiles fine if you do have <iconv.h>)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Dec 2001 22:54:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The dbase conrtib doesn't compile "
},
{
"msg_contents": "So what's the fix? I have iconv.h. This is my system:\n\nIntel:\n\n/usr/include/sys/iconv.h\n/usr/local/include/giconv.h\n/usr/local/include/iconv.h\n\nAnd the Alpha:\n\n/usr/include/sys/iconv.h\n\nChris\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Friday, 21 December 2001 11:55 AM\n> To: Christopher Kings-Lynne\n> Cc: Hackers\n> Subject: Re: [HACKERS] The dbase conrtib doesn't compile \n> \n> \n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > In HEAD contrib/dbase:\n> > gcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations \n> -I../../src/\n> > interf\n> > aces/libpq -I. -I../../src/include -c -o dbf2pg.o dbf2pg.c\n> > dbf2pg.c:19: iconv.h: No such file or directory\n> \n> Looks like someone took a shortcut in dealing with <iconv.h>.\n> What the heck is that, anyway, and do we need the ifdef'd code at all?\n> \n> (FWIW, the code compiles fine if you do have <iconv.h>)\n> \n> \t\t\tregards, tom lane\n> \n\n",
"msg_date": "Fri, 21 Dec 2001 11:59:35 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: The dbase conrtib doesn't compile "
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> So what's the fix?  I have iconv.h.  This is my system:\n> Intel:\n\n> /usr/include/sys/iconv.h\n> /usr/local/include/giconv.h\n> /usr/local/include/iconv.h\n\n> And the Alpha:\n\n> /usr/include/sys/iconv.h\n\nHmm, either it should be #include <sys/iconv.h> or you need to configure\n--with-includes=/usr/local/include.  Try it and see.\n\nYou have to remember that the contrib stuff is not ready for prime time;\nif it were, it'd likely be in the main tree.  Portability problems are\npar for the course.  (The cast things you just found in pgcrypto look\nlike they might be real bugs, btw.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Dec 2001 23:03:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The dbase conrtib doesn't compile "
},
{
"msg_contents": "> In HEAD contrib/dbase:\n> \n> gcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../src/\n> interf\n> aces/libpq -I. -I../../src/include   -c -o dbf2pg.o dbf2pg.c\n> dbf2pg.c:19: iconv.h: No such file or directory\n> dbf2pg.c:38: syntax error before `iconv_d'\n> dbf2pg.c:38: warning: type defaults to `int' in declaration of `iconv_d'\n> dbf2pg.c:38: warning: data definition has no type or storage class\n> dbf2pg.c: In function `convert_charset':\n> dbf2pg.c:148: warning: implicit declaration of function `iconv'\n> dbf2pg.c: In function `main':\n> dbf2pg.c:789: warning: implicit declaration of function `iconv_open'\n> dbf2pg.c:790: `iconv_t' undeclared (first use in this function)\n> dbf2pg.c:790: (Each undeclared identifier is reported only once\n> dbf2pg.c:790: for each function it appears in.)\n> dbf2pg.c:810: warning: implicit declaration of function `iconv_close'\n> gmake: *** [dbf2pg.o] Error 1\n\nYes, I am seeing the same failure here.  I knew there was a /contrib\nmodule that needed iconv but I can't find it now.  I suppose this was\nthe one.\n\nI see the old Makefile used iconv:\n\n!       $(CC) $(CFLAGS) $(OBJS) $(libpq) $(LDFLAGS) $(LIBS) -liconv -o $@\n\nbut the overhaul of these Makefiles on 2001/09/06 removed it.  I am\napplying the following patch to re-add it.  You will need libiconv for\nit to link.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n\nIndex: contrib/dbase/Makefile\n===================================================================\nRCS file: /cvsroot/pgsql/contrib/dbase/Makefile,v\nretrieving revision 1.2\ndiff -c -r1.2 Makefile\n*** contrib/dbase/Makefile\t2001/09/06 10:49:29\t1.2\n--- contrib/dbase/Makefile\t2001/12/21 04:10:17\n***************\n*** 7,13 ****\n  PROGRAM = dbf2pg\n  OBJS\t= dbf.o dbf2pg.o endian.o\n  PG_CPPFLAGS = -I$(libpq_srcdir)\n! PG_LIBS = $(libpq)\n  \n  DOCS = README.dbf2pg\n  MAN = dbf2pg.1\t\t\t# XXX not implemented\n--- 7,13 ----\n  PROGRAM = dbf2pg\n  OBJS\t= dbf.o dbf2pg.o endian.o\n  PG_CPPFLAGS = -I$(libpq_srcdir)\n! PG_LIBS = $(libpq) -liconv\n  \n  DOCS = README.dbf2pg\n  MAN = dbf2pg.1\t\t\t# XXX not implemented\nIndex: contrib/dbase/README.dbf2pg\n===================================================================\nRCS file: /cvsroot/pgsql/contrib/dbase/README.dbf2pg,v\nretrieving revision 1.1\ndiff -c -r1.1 README.dbf2pg\n*** contrib/dbase/README.dbf2pg\t2001/05/10 14:41:23\t1.1\n--- contrib/dbase/README.dbf2pg\t2001/12/21 04:10:17\n***************\n*** 107,113 ****\n  ENVIRONMENT\n       This program is affected by the\tenvironment-variables as\n       used by \"PostgresSQL.\" See the documentation of Post-\n!      gresSQL for more info.\n  \n  BUGS\n       Fields larger than 8192 characters are not supported and\n--- 107,113 ----\n  ENVIRONMENT\n       This program is affected by the\tenvironment-variables as\n       used by \"PostgresSQL.\" See the documentation of Post-\n!      gresSQL for more info.  This program requires libiconv.\n  \n  BUGS\n       Fields larger than 8192 characters are not supported and",
"msg_date": "Thu, 20 Dec 2001 23:12:44 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The dbase conrtib doesn't compile"
},
{
"msg_contents": "> You have to remember that the contrib stuff is not ready for prime time;\n> if it were, it'd likely be in the mainframe. Portability problems are\n> par for the course. (The cast things you just found in pgcrypto look\n> like they might be real bugs, btw.)\n\nI'm just checking them all cos I'm a bit of a FreeBSD & PostgreSQL fanatic\nand I want FBSD to be a good platform for pgsql. Maybe we could add to the\nregression test database fields for 'result of running \"gmake all\" in\ncontrib/' ? Unfortunately this process stops at the first error. Maybe it\nwould be nice if it could continue on and give a report of the failures?\n\nChris\n\n",
"msg_date": "Fri, 21 Dec 2001 12:13:57 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: The dbase conrtib doesn't compile "
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> I'm just checking them all cos I'm a bit of a FreeBSD & PostgreSQL fanatic\n> and I want FBSD to be a good platform for pgsql. Maybe we could add to the\n> regression test database fields for 'result of running \"gmake all\" in\n> contrib/' ?\n\nOkay, though you really should also try the regression tests for the\nmodules that have one.\n\n> Unfortunately this process stops at the first error. Maybe it\n> would be nice if it could continue on and give a report of the failures?\n\n\"gmake -k\" is your friend.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Dec 2001 23:19:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The dbase conrtib doesn't compile "
},
{
"msg_contents": "> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > In HEAD contrib/dbase:\n> > gcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../src/\n> > interf\n> > aces/libpq -I. -I../../src/include   -c -o dbf2pg.o dbf2pg.c\n> > dbf2pg.c:19: iconv.h: No such file or directory\n> \n> Looks like someone took a shortcut in dealing with <iconv.h>.\n> What the heck is that, anyway, and do we need the ifdef'd code at all?\n> \n> (FWIW, the code compiles fine if you do have <iconv.h>)\n\nI see that now too.  Seems we need to test for libiconv and set\nHAVE_ICONV_H accordingly, and the link line too.  If I comment out the\ndefine, it does not compile so the conditionals in the code are not\ncorrect anyway.\n\nThe following patch does allow it to compile with HAVE_ICONV_H not\ndefined;  clearly a step in the right direction.  Of course, you will\nstill need to remove the -liconv from the link line.\n\nI wonder if the best solution is to not assume libiconv exists.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n\nIndex: contrib/dbase/dbf2pg.c\n===================================================================\nRCS file: /cvsroot/pgsql/contrib/dbase/dbf2pg.c,v\nretrieving revision 1.4\ndiff -c -r1.4 dbf2pg.c\n*** contrib/dbase/dbf2pg.c\t2001/10/25 05:49:19\t1.4\n--- contrib/dbase/dbf2pg.c\t2001/12/21 04:27:48\n***************\n*** 742,753 ****\n--- 742,755 ----\n \t\t\tcase 'U':\n \t\t\t\tusername = (char *) strdup(optarg);\n \t\t\t\tbreak;\n+ #ifdef HAVE_ICONV_H\n \t\t\tcase 'F':\n \t\t\t\tcharset_from = (char *) strdup(optarg);\n \t\t\t\tbreak;\n \t\t\tcase 'T':\n \t\t\t\tcharset_to = (char *) strdup(optarg);\n \t\t\t\tbreak;\n+ #endif\n \t\t\tcase ':':\n \t\t\t\tusage();\n \t\t\t\tprintf(\"missing argument!\\n\");\n***************\n*** 806,813 ****\n--- 808,817 ----\n \t\t\tfree(username);\n \t\tif (password)\n \t\t\tfree(password);\n+ #ifdef HAVE_ICONV_H\n \t\tif (charset_from)\n \t\t\ticonv_close(iconv_d);\n+ #endif\n \t\texit(1);\n \t}\n \n***************\n*** 846,853 ****\n--- 850,859 ----\n \t\t\tfree(username);\n \t\tif (password)\n \t\t\tfree(password);\n+ #ifdef HAVE_ICONV_H\n \t\tif (charset_from)\n \t\t\ticonv_close(iconv_d);\n+ #endif\n \t\texit(1);\n \t}\n \n***************\n*** 864,871 ****\n--- 870,879 ----\n \t\t\t\tfree(username);\n \t\t\tif (password)\n \t\t\t\tfree(password);\n+ #ifdef HAVE_ICONV_H\n \t\t\tif (charset_from)\n \t\t\t\ticonv_close(iconv_d);\n+ #endif\n \t\t\texit(1);\n \t\t}\n \t\tif (del)\n***************\n*** 880,887 ****\n--- 888,897 ----\n \t\t\t\t\tfree(username);\n \t\t\t\tif (password)\n \t\t\t\t\tfree(password);\n+ #ifdef HAVE_ICONV_H\n \t\t\t\tif (charset_from)\n \t\t\t\t\ticonv_close(iconv_d);\n+ #endif\n \t\t\t\texit(1);\n \t\t\t}\n \t\t\tif (verbose > 1)\n***************\n*** 903,910 ****\n--- 913,922 ----\n \t\t\t\tfree(username);\n \t\t\tif (password)\n \t\t\t\tfree(password);\n+ #ifdef HAVE_ICONV_H\n \t\t\tif (charset_from)\n \t\t\t\ticonv_close(iconv_d);\n+ #endif\n \t\t\texit(1);\n \t\t}\n \t\tif (verbose > 1)\n***************\n*** 933,939 ****\n--- 945,953 ----\n \t\tfree(username);\n \tif (password)\n \t\tfree(password);\n+ #ifdef HAVE_ICONV_H\n \tif (charset_from)\n \t\ticonv_close(iconv_d);\n+ #endif\n \texit(0);\n }",
"msg_date": "Thu, 20 Dec 2001 23:28:40 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The dbase conrtib doesn't compile"
},
{
"msg_contents": "> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > So what's the fix? I have iconv.h. This is my system:\n> > Intel:\n> \n> > /usr/include/sys/iconv.h\n> > /usr/local/include/giconv.h\n> > /usr/local/include/iconv.h\n> \n> > And the Alpha:\n> \n> > /usr/include/sys/iconv.h\n> \n> Hmm, either it should be #include <sys/iconv.h> or you need to configure\n> --with-includes=/usr/local/include. Try it and see.\n> \n> You have to remember that the contrib stuff is not ready for prime time;\n> if it were, it'd likely be in the mainframe. Portability problems are\n> par for the course. (The cast things you just found in pgcrypto look\n> like they might be real bugs, btw.)\n\nAlso, you need current CVS because I just added the -liconv in Makefile.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Dec 2001 23:29:29 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The dbase conrtib doesn't compile"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Also, you need current CVS because I just added the -liconv in Makefile.\n\nWell, it *used* to build under HPUX. And whatever the contributor's\nsystem was. Your change has fixed one platform and broken two others.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Dec 2001 23:36:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The dbase conrtib doesn't compile "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Also, you need current CVS because I just added the -liconv in Makefile.\n> \n> Well, it *used* to build under HPUX. And whatever the contributor's\n> system was. Your change has fixed one platform and broken two others.\n\nThe -liconv used to be there before in 7.1 and earlier. It was only\nremoved in September. Are you saying those system calls work for you,\nbut you don't have a libiconv?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Dec 2001 23:38:21 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The dbase conrtib doesn't compile"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> The -liconv used to be there before in 7.1 and earlier. It was only\n> removed in September. Are you saying those system calls work for you,\n> but you don't have a libiconv?\n\nThe <iconv.h> routines live in libc on HPUX. And on Red Hat Linux\n(I suppose also on other Linux flavors, but RHL 7.2 is the only one\nI have handy to check). And presumably on whatever platform Peter uses,\nelse he'd not have removed the -liconv.\n\nChristopher has not yet opined on where they are on his platform...\nthough since it's a BSD variant, it might be the same as yours.\n\nTo fix this correctly we'd need to add configure tests for <iconv.h>\nand libiconv. I'm disinclined to do that, partly because it'd slow\ndown configure for everyone whether they intended to build contrib/dbase\nor not, but mainly because in the present state of the build process\nit'd cause libiconv (if present) to be linked to *every* executable\nwe build.\n\nI wonder if it's practical for contrib modules to have their own\nmini-configure checks above and beyond what the main configure script\ndoes?\n\nIn the meantime, I don't really care that much whether dbase/Makefile\ncontains -liconv or not; clearly, that will help some platforms and\nhurt others no matter which way we jump. I was just pointing out \nthat your makefile change is not a clear win.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Dec 2001 23:59:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The dbase conrtib doesn't compile "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The -liconv used to be there before in 7.1 and earlier. It was only\n> > removed in September. Are you saying those system calls work for you,\n> > but you don't have a libiconv?\n> \n> The <iconv.h> routines live in libc on HPUX. And on Red Hat Linux\n> (I suppose also on other Linux flavors, but RHL 7.2 is the only one\n> I have handy to check). And presumably on whatever platform Peter uses,\n> else he'd not have removed the -liconv.\n> \n> Christopher has not yet opined on where they are on his platform...\n> though since it's a BSD variant, it might be the same as yours.\n> \n> To fix this correctly we'd need to add configure tests for <iconv.h>\n> and libiconv. I'm disinclined to do that, partly because it'd slow\n> down configure for everyone whether they intended to build contrib/dbase\n> or not, but mainly because in the present state of the build process\n> it'd cause libiconv (if present) to be linked to *every* executable\n> we build.\n> \n> I wonder if it's practical for contrib modules to have their own\n> mini-configure checks above and beyond what the main configure script\n> does?\n> \n> In the meantime, I don't really care that much whether dbase/Makefile\n> contains -liconv or not; clearly, that will help some platforms and\n> hurt others no matter which way we jump. I was just pointing out \n> that your makefile change is not a clear win.\n\nYes, glad you pointed it out. I think the best solution is to remove\n#define HAVE_ICONV_H and -liconv so it will work fine on all platforms. \nIf someone wants the iconv conversions, they can add the needed #define\nand link library, OK?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 Dec 2001 00:07:20 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The dbase conrtib doesn't compile"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Yes, glad you pointed it out. I think the best solution is to remove\n> #define HAVE_ICONV_H and -liconv so it will work fine on all platforms. \n> If someone wants the iconv conversions, they can add the needed #define\n> and link library, OK?\n\nThat seems like a plan. Perhaps add some commented-out macro\ndefinitions to the makefile to make it a simple addition. Something\nlike\n\n# Uncomment this to provide charset translation\n# CFLAGS += -DHAVE_ICONV_H\n# You might need to uncomment this too, if libiconv is a separate\n# library on your platform\n# LIBS += -liconv\n\nUntested but you get the idea ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Dec 2001 00:12:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The dbase conrtib doesn't compile "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Yes, glad you pointed it out. I think the best solution is to remove\n> > #define HAVE_ICONV_H and -liconv so it will work fine on all platforms. \n> > If someone wants the iconv conversions, they can add the needed #define\n> > and link library, OK?\n> \n> That seems like a plan. Perhaps add some commented-out macro\n> definitions to the makefile to make it a simple addition. Something\n> like\n> \n> # Uncomment this to provide charset translation\n> # CFLAGS += -DHAVE_ICONV_H\n> # You might need to uncomment this too, if libiconv is a separate\n> # library on your platform\n> # LIBS += -liconv\n> \n> Untested but you get the idea ...\n\nOK, patch attached and applied. I tested with and without the Makefile\ndefines. iconv defaults to off, mentioned in README.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: contrib/dbase/Makefile\n===================================================================\nRCS file: /cvsroot/pgsql/contrib/dbase/Makefile,v\nretrieving revision 1.3\ndiff -c -r1.3 Makefile\n*** contrib/dbase/Makefile\t2001/12/21 04:13:12\t1.3\n--- contrib/dbase/Makefile\t2001/12/21 05:27:58\n***************\n*** 7,13 ****\n PROGRAM = dbf2pg\n OBJS\t= dbf.o dbf2pg.o endian.o\n PG_CPPFLAGS = -I$(libpq_srcdir)\n! PG_LIBS = $(libpq) -liconv\n \n DOCS = README.dbf2pg\n MAN = dbf2pg.1\t\t\t# XXX not implemented\n--- 7,19 ----\n PROGRAM = dbf2pg\n OBJS\t= dbf.o dbf2pg.o endian.o\n PG_CPPFLAGS = -I$(libpq_srcdir)\n! PG_LIBS = $(libpq)\n! \n! # Uncomment this to provide charset translation\n! #PG_CPPFLAGS += -DHAVE_ICONV_H\n! # You might need to uncomment this too, if libiconv is a separate\n! # library on your platform\n! #PG_LIBS += -liconv\n \n DOCS = README.dbf2pg\n MAN = dbf2pg.1\t\t\t# XXX not implemented\nIndex: contrib/dbase/README.dbf2pg\n===================================================================\nRCS file: /cvsroot/pgsql/contrib/dbase/README.dbf2pg,v\nretrieving revision 1.2\ndiff -c -r1.2 README.dbf2pg\n*** contrib/dbase/README.dbf2pg\t2001/12/21 04:13:12\t1.2\n--- contrib/dbase/README.dbf2pg\t2001/12/21 05:27:58\n***************\n*** 97,113 ****\n \t fied charset. Example:\n \t -F IBM437\n \t Consult your system documentation to see the con-\n! \t vertions available.\n \n -T charset_to\n \t Together with -F charset_from ,\tit converts the\n \t data to the\tspecified charset. Default is\n! \t \"ISO-8859-1\".\n \n ENVIRONMENT\n This program is affected by the\tenvironment-variables as\n used by \"PostgresSQL.\" See the documentation of Post-\n! gresSQL for more info. This program requires libiconv.\n \n BUGS\n Fields larger than 8192 characters are not supported and\n--- 97,116 ----\n \t fied charset. Example:\n \t -F IBM437\n \t Consult your system documentation to see the con-\n! \t versions available. This requires iconv to be enabled\n! in the compile.\n \n -T charset_to\n \t Together with -F charset_from ,\tit converts the\n \t data to the\tspecified charset. Default is\n! \t \"ISO-8859-1\". This requires iconv to be enabled\n! in the compile.\n \n ENVIRONMENT\n This program is affected by the\tenvironment-variables as\n used by \"PostgresSQL.\" See the documentation of Post-\n! gresSQL for more info. This program can optionally use iconv \n! character set conversion routines.\n \n BUGS\n Fields larger than 8192 characters are not supported and\nIndex: contrib/dbase/dbf2pg.c\n===================================================================\nRCS file: /cvsroot/pgsql/contrib/dbase/dbf2pg.c,v\nretrieving revision 1.5\ndiff -c -r1.5 dbf2pg.c\n*** contrib/dbase/dbf2pg.c\t2001/12/21 04:30:59\t1.5\n--- contrib/dbase/dbf2pg.c\t2001/12/21 05:27:59\n***************\n*** 7,14 ****\n */\n #include \"postgres_fe.h\"\n \n- #define HAVE_ICONV_H\t\t\t/* should be somewhere else */\n- \n #include <fcntl.h>\n #include <unistd.h>\n #include <ctype.h>\n--- 7,12 ----",
"msg_date": "Fri, 21 Dec 2001 00:29:34 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The dbase contrib doesn't compile"
},
{
"msg_contents": "OK, that builds perfectly out of the box on Freebsd alpha and intel.\n\nChris\n\n",
"msg_date": "Fri, 21 Dec 2001 14:21:15 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: The dbase contrib doesn't compile"
}
] |
[
{
"msg_contents": "Hi Marko,\n\nJust testing pgcrypto on freebsd/alpha. I get some warnings:\n\ngcc -pipe -O -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC \n-DRAND_SILLY -I. -I. -I../../src/include -c -\no internal.o internal.c\ninternal.c: In function `rj_encrypt':\ninternal.c:314: warning: cast from pointer to integer of different size\ninternal.c: In function `rj_decrypt':\ninternal.c:342: warning: cast from pointer to integer of different size\ninternal.c: In function `bf_encrypt':\ninternal.c:429: warning: cast from pointer to integer of different size\ninternal.c: In function `bf_decrypt':\ninternal.c:453: warning: cast from pointer to integer of different size\n\nAnd I can't do regression:\n\ngmake -C ../../src/test/regress pg_regress\ngmake[1]: Entering directory\n`/home/chriskl/postgresql-7.2b4/src/test/regress'\ngmake[1]: `pg_regress' is up to date.\ngmake[1]: Leaving directory\n`/home/chriskl/postgresql-7.2b4/src/test/regress'\n../../src/test/regress/pg_regress init md5 sha1 hmac-md5 hmac-sha1 blowfish\nrijndael crypt-des crypt-md5 crypt-blowfish cryp\nt-xdes\n(using postmaster on Unix socket, default port)\n============== dropping database \"regression\" ==============\nERROR: DROP DATABASE: database \"regression\" does not exist\nERROR: DROP DATABASE: database \"regression\" does not exist\ndropdb: database removal failed\n============== creating database \"regression\" ==============\nCREATE DATABASE\n============== dropping regression test user accounts ==============\nERROR: DROP GROUP: group \"regressgroup1\" does not exist\n============== installing PL/pgSQL ==============\n============== running regression test queries ==============\ntest init ... ERROR: stat failed on file\n'$libdir/pgcrypto': No such file or directory\nERROR: stat failed on file '$libdir/pgcrypto': No such file or directory\nERROR: stat failed on file '$libdir/pgcrypto': No such file or directory\nERROR: stat failed on file '$libdir/pgcrypto': No such file or directory\nERROR: stat failed on file '$libdir/pgcrypto': No such file or directory\nERROR: stat failed on file '$libdir/pgcrypto': No such file or directory\nERROR: stat failed on file '$libdir/pgcrypto': No such file or directory\nERROR: stat failed on file '$libdir/pgcrypto': No such file or directory\nERROR: stat failed on file '$libdir/pgcrypto': No such file or directory\nERROR: stat failed on file '$libdir/pgcrypto': No such file or directory\nERROR: stat failed on file '$libdir/pgcrypto': No such file or directory\nERROR: stat failed on file '$libdir/pgcrypto': No such file or directory\nFAILED\ntest md5 ... ERROR: Function 'digest(unknown, unknown)'\ndoes not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'digest(unknown, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'digest(unknown, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'digest(unknown, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'digest(unknown, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'digest(unknown, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'digest(unknown, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nFAILED\ntest sha1 ... ERROR: Function 'digest(unknown, unknown)'\ndoes not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'digest(unknown, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'digest(unknown, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'digest(unknown, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'digest(unknown, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'digest(unknown, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'digest(unknown, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nFAILED\ntest hmac-md5 ... ERROR: Function 'hmac(unknown, bytea,\nunknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'hmac(unknown, unknown, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'hmac(bytea, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'hmac(bytea, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'hmac(unknown, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'hmac(unknown, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'hmac(unknown, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nFAILED\ntest hmac-sha1 ... ERROR: Function 'hmac(unknown, bytea,\nunknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'hmac(unknown, unknown, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'hmac(bytea, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'hmac(bytea, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'hmac(unknown, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'hmac(unknown, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'hmac(unknown, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nFAILED\ntest blowfish ... ERROR: Function 'encrypt(bytea, bytea,\nunknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'encrypt(bytea, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'encrypt(bytea, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'encrypt(bytea, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'encrypt(bytea, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'encrypt(bytea, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'encrypt(bytea, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'encrypt(bytea, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'encrypt(bytea, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'encrypt(bytea, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'encrypt(bytea, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nFAILED\ntest rijndael ... ERROR: Function 'encrypt(bytea, bytea,\nunknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'encrypt(bytea, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'encrypt(bytea, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'encrypt(bytea, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'encrypt(bytea, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'encrypt(bytea, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'encrypt(bytea, bytea, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nFAILED\ntest crypt-des ... ERROR: Function 'crypt(unknown, unknown)'\ndoes not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'crypt(unknown, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'gen_salt(unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'crypt(text, text)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'crypt(text, text)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nFAILED\ntest crypt-md5 ... ERROR: Function 'crypt(unknown, unknown)'\ndoes not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'crypt(unknown, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'gen_salt(unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'crypt(text, text)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'crypt(text, text)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nFAILED\ntest crypt-blowfish ... ERROR: Function 'crypt(unknown, unknown)'\ndoes not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'crypt(unknown, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'gen_salt(unknown, int4)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'crypt(text, text)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'crypt(text, text)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nFAILED\ntest crypt-xdes ... ERROR: Function 'crypt(unknown, unknown)'\ndoes not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'crypt(unknown, unknown)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'gen_salt(unknown, int4)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'crypt(text, text)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nERROR: Function 'crypt(text, text)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\nFAILED\n\n========================\n 11 of 11 tests failed.\n========================\n\nThe differences that caused some tests to fail can be viewed in the\nfile `./regression.diffs'. A copy of the test summary that you see\nabove is saved in the file `./regression.out'.\n\n",
"msg_date": "Fri, 21 Dec 2001 11:43:21 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "pgcrypto failures on freebsd/alpha"
},
{
"msg_contents": "On Fri, Dec 21, 2001 at 11:43:21AM +0800, Christopher Kings-Lynne wrote:\n> Hi Marko,\n> \n> Just testing pgcrypto on freebsd/alpha. I get some warnings:\n> \n> gcc -pipe -O -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC \n> -DRAND_SILLY -I. -I. -I../../src/include -c -\n> o internal.o internal.c\n> internal.c: In function `rj_encrypt':\n> internal.c:314: warning: cast from pointer to integer of different size\n> internal.c: In function `rj_decrypt':\n> internal.c:342: warning: cast from pointer to integer of different size\n> internal.c: In function `bf_encrypt':\n> internal.c:429: warning: cast from pointer to integer of different size\n> internal.c: In function `bf_decrypt':\n> internal.c:453: warning: cast from pointer to integer of different size\n\nThey should be harmless, although I should fix them.\n\n> And I can't do regression:\n\n[ ... ]\n\n> ============== running regression test queries ==============\n> test init ... ERROR: stat failed on file\n> '$libdir/pgcrypto': No such file or directory\n> ERROR: stat failed on file '$libdir/pgcrypto': No such file or directory\n\nYou need to do 'make install' first.\n\n-- \nmarko\n\n",
"msg_date": "Fri, 21 Dec 2001 10:34:45 +0200",
"msg_from": "Marko Kreen <marko@l-t.ee>",
"msg_from_op": false,
"msg_subject": "Re: pgcrypto failures on freebsd/alpha"
},
{
"msg_contents": "Marko Kreen wrote:\n> On Fri, Dec 21, 2001 at 11:43:21AM +0800, Christopher Kings-Lynne wrote:\n> > Hi Marko,\n> > \n> > Just testing pgcrypto on freebsd/alpha. I get some warnings:\n> > \n> > gcc -pipe -O -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC \n> > -DRAND_SILLY -I. -I. -I../../src/include -c -\n> > o internal.o internal.c\n> > internal.c: In function `rj_encrypt':\n> > internal.c:314: warning: cast from pointer to integer of different size\n> > internal.c: In function `rj_decrypt':\n> > internal.c:342: warning: cast from pointer to integer of different size\n> > internal.c: In function `bf_encrypt':\n> > internal.c:429: warning: cast from pointer to integer of different size\n> > internal.c: In function `bf_decrypt':\n> > internal.c:453: warning: cast from pointer to integer of different size\n> \n> They should be harmless, although I should fix them.\n\nThe actual code is:\n\n if ((dlen & 15) || (((unsigned) res) & 3))\n return -1;\n\nwhile res is defined as an uint8 pointer:\n\n rj_encrypt(PX_Cipher * c, const uint8 *data, unsigned dlen, uint8 *res)\n\nHard to imagine how (uint *) & 3 makes any sense, unless res isn't\nalways a (uint8 *). Is that true?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jan 2002 01:13:55 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgcrypto failures on freebsd/alpha"
},
{
"msg_contents": "On Thu, Jan 03, 2002 at 01:13:55AM -0500, Bruce Momjian wrote:\n> Marko Kreen wrote:\n> > On Fri, Dec 21, 2001 at 11:43:21AM +0800, Christopher Kings-Lynne wrote:\n> > > Just testing pgcrypto on freebsd/alpha. I get some warnings:\n> > They should be harmless, although I should fix them.\n> \n> The actual code is:\n> \n> if ((dlen & 15) || (((unsigned) res) & 3))\n> return -1;\n\n> Hard to imagine how (uint *) & 3 makes any sense, unless res isn't\n> always a (uint8 *). Is that true?\n\nAt some point it was casted to (uint32*) so I wanted to be sure its ok.\nATM its pointless. Please apply the following patch.\n\n-- \nmarko\n\n\nIndex: contrib/pgcrypto/internal.c\n===================================================================\nRCS file: /opt/cvs/pgsql/pgsql/contrib/pgcrypto/internal.c,v\nretrieving revision 1.10\ndiff -u -r1.10 internal.c\n--- contrib/pgcrypto/internal.c\t20 Nov 2001 18:54:07 -0000\t1.10\n+++ contrib/pgcrypto/internal.c\t21 Dec 2001 08:45:21 -0000\n@@ -311,7 +311,7 @@\n \tif (dlen == 0)\n \t\treturn 0;\n \n-\tif ((dlen & 15) || (((unsigned) res) & 3))\n+\tif (dlen & 15)\n \t\treturn -1;\n \n \tmemcpy(res, data, dlen);\n@@ -339,7 +339,7 @@\n \tif (dlen == 0)\n \t\treturn 0;\n \n-\tif ((dlen & 15) || (((unsigned) res) & 3))\n+\tif (dlen & 15)\n \t\treturn -1;\n \n \tmemcpy(res, data, dlen);\n@@ -426,7 +426,7 @@\n \tif (dlen == 0)\n \t\treturn 0;\n \n-\tif ((dlen & 7) || (((unsigned) res) & 3))\n+\tif (dlen & 7)\n \t\treturn -1;\n \n \tmemcpy(res, data, dlen);\n@@ -450,7 +450,7 @@\n \tif (dlen == 0)\n \t\treturn 0;\n \n-\tif ((dlen & 7) || (((unsigned) res) & 3))\n+\tif (dlen & 7)\n \t\treturn -1;\n \n \tmemcpy(res, data, dlen);\n",
"msg_date": "Thu, 3 Jan 2002 09:21:05 +0200",
"msg_from": "Marko Kreen <marko@l-t.ee>",
"msg_from_op": false,
"msg_subject": "Re: pgcrypto failures on freebsd/alpha"
},
{
"msg_contents": "\nPatch applied because it is to /contrib and is from the author, and\nfixes some unusual code. Did testing the bottom two bits actually test\nanything (res & 3)?\n\n\n---------------------------------------------------------------------------\n\nMarko Kreen wrote:\n> On Thu, Jan 03, 2002 at 01:13:55AM -0500, Bruce Momjian wrote:\n> > Marko Kreen wrote:\n> > > On Fri, Dec 21, 2001 at 11:43:21AM +0800, Christopher Kings-Lynne wrote:\n> > > > Just testing pgcrypto on freebsd/alpha. I get some warnings:\n> > > They should be harmless, although I should fix them.\n> > \n> > The actual code is:\n> > \n> > if ((dlen & 15) || (((unsigned) res) & 3))\n> > return -1;\n> \n> > Hard to imagine how (uint *) & 3 makes any sense, unless res isn't\n> > always a (uint8 *). Is that true?\n> \n> At some point it was casted to (uint32*) so I wanted to be sure its ok.\n> ATM its pointless. Please apply the following patch.\n> \n> -- \n> marko\n> \n> \n> Index: contrib/pgcrypto/internal.c\n> ===================================================================\n> RCS file: /opt/cvs/pgsql/pgsql/contrib/pgcrypto/internal.c,v\n> retrieving revision 1.10\n> diff -u -r1.10 internal.c\n> --- contrib/pgcrypto/internal.c\t20 Nov 2001 18:54:07 -0000\t1.10\n> +++ contrib/pgcrypto/internal.c\t21 Dec 2001 08:45:21 -0000\n> @@ -311,7 +311,7 @@\n> \tif (dlen == 0)\n> \t\treturn 0;\n> \n> -\tif ((dlen & 15) || (((unsigned) res) & 3))\n> +\tif (dlen & 15)\n> \t\treturn -1;\n> \n> \tmemcpy(res, data, dlen);\n> @@ -339,7 +339,7 @@\n> \tif (dlen == 0)\n> \t\treturn 0;\n> \n> -\tif ((dlen & 15) || (((unsigned) res) & 3))\n> +\tif (dlen & 15)\n> \t\treturn -1;\n> \n> \tmemcpy(res, data, dlen);\n> @@ -426,7 +426,7 @@\n> \tif (dlen == 0)\n> \t\treturn 0;\n> \n> -\tif ((dlen & 7) || (((unsigned) res) & 3))\n> +\tif (dlen & 7)\n> \t\treturn -1;\n> \n> \tmemcpy(res, data, dlen);\n> @@ -450,7 +450,7 @@\n> \tif (dlen == 0)\n> \t\treturn 0;\n> \n> -\tif ((dlen & 7) || (((unsigned) res) & 3))\n> +\tif (dlen & 7)\n> \t\treturn -1;\n> \n> \tmemcpy(res, data, dlen);\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jan 2002 02:23:25 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgcrypto failures on freebsd/alpha"
},
{
"msg_contents": "On Thu, Jan 03, 2002 at 02:23:25AM -0500, Bruce Momjian wrote:\n> \n> Patch applied because it is to /contrib and is from the author, and\n> fixes some unusual code. Did testing the bottom two bits actually test\n> anything (res & 3)?\n\nYou mean if I did catch anything with it? No.\n\n-- \nmarko\n\n",
"msg_date": "Thu, 3 Jan 2002 09:36:05 +0200",
"msg_from": "Marko Kreen <marko@l-t.ee>",
"msg_from_op": false,
"msg_subject": "Re: pgcrypto failures on freebsd/alpha"
}
] |
[
{
"msg_contents": "\n> > If you have a foreign key on a column, then whenever the primary key is\n> > modified, the following checks may occur:\n> > \n> > * Check to see if the child row exists (no action)\n> > * Delete the child row (cascade delete)\n> > * Update the child row (cascade update)\n> > \n> > All of which will benefit from an index...\n> \n> OK, then perhaps we should be creating an index automatically? Folks?\n\nThe index is only useful where you actually have an on delete or on update\nclause. I don't think we want to conditionally create an index. That would\nbee too confusing. A contrib, to find \"suggested\" indexes seems fine.\n\nAndreas\n",
"msg_date": "Fri, 21 Dec 2001 09:20:50 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: contrib idea"
},
{
"msg_contents": "\nOn Fri, 21 Dec 2001, Zeugswetter Andreas SB SD wrote:\n\n>\n> > > If you have a foreign key on a column, then whenever the primary key is\n> > > modified, the following checks may occur:\n> > >\n> > > * Check to see if the child row exists (no action)\n> > > * Delete the child row (cascade delete)\n> > > * Update the child row (cascade update)\n> > >\n> > > All of which will benefit from an index...\n> >\n> > OK, then perhaps we should be creating an index automatically? Folks?\n>\n> The index is only useful where you actually have an on delete or on update\n> clause. I don't think we want to conditionally create an index. That would\n> bee too confusing. A contrib, to find \"suggested\" indexes seems fine.\n\nActually, even without an on delete or on update it would be used (for the\ncheck to see if there was a row to prevent the action), however autocreate\nseems bad. The contrib thing sounds cool, another vote that way.\n\n\n",
"msg_date": "Fri, 21 Dec 2001 13:44:16 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: contrib idea"
},
{
"msg_contents": "> > > * Check to see if the child row exists (no action)\n> > > * Delete the child row (cascade delete)\n> > > * Update the child row (cascade update)\n> > >\n> > > All of which will benefit from an index...\n> >\n> > OK, then perhaps we should be creating an index automatically? Folks?\n>\n> The index is only useful where you actually have an on delete or on update\n> clause.\n\nHmm...not necessarily true. A default 'no action' foreign key still needs\nto prevent the parent key from being deleted if the child exists. This\nrequires that postgres do a search of the child table.\n\n> I don't think we want to conditionally create an index. That would\n> bee too confusing. A contrib, to find \"suggested\" indexes seems fine.\n\nThat's what I suggested.\n\nChris\n\n",
"msg_date": "Mon, 24 Dec 2001 10:49:57 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: contrib idea"
}
] |
[
{
"msg_contents": "I just looked at the source for dbase. I have a long hacked version of\nthis. The one in contrib is based on a newer version of what I have been\nusing.\n\nIt is funny, because I made some of the same changes to mine as are in\nthe one in contrib, with a few exceptions.\n\nusage: dbf2pg [options] filename.dbf\n Options\n -h hostname, host name of PostgreSQL server\n -y translate field name, oldname=newname\n -z translate data type, fieldname=datatype\n -d dbase\n -t table_name\n -p prefix, prepends prefix to default table name\n [-c | -D] Create or delete\n -f convert field names to lower case\n [-u | -l] Converts data to upper or lower case,\nrespectively\n -v[v] sets verbosity\n\n\nWho contributed it? Did Maarten Boekhold? I would like to merge my data\ntype and table prefix code into the main tree, if this is where it will\nnow be officially maintained.\n",
"msg_date": "Fri, 21 Dec 2001 14:52:29 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "contrib/dbase"
},
{
"msg_contents": "> I just looked at the source for dbase. I have a long hacked version of\n> this. The one in contrib is based on a newer version of what I have been\n> using.\n> \n> It is funny, because I made some of the same changes to mine as are in\n> the one in contrib, with a few exceptions.\n> \n> usage: dbf2pg [options] filename.dbf\n> Options\n> -h hostname, host name of PostgreSQL server\n> -y translate field name, oldname=newname\n> -z translate data type, fieldname=datatype\n> -d dbase\n> -t table_name\n> -p prefix, prepends prefix to default table name\n> [-c | -D] Create or delete\n> -f convert field names to lower case\n> [-u | -l] Converts data to upper or lower case,\n> respectively\n> -v[v] sets verbosity\n> \n> \n> Who contributed it? Did Maarten Boekhold? I would like to merge my data\n> type and table prefix code into the main tree, if this is where it will\n> now be officially maintained.\n\ncontrib/README shows:\n\n\tdbase -\n\t Converts from dbase/xbase to PostgreSQL\n\t by Ivan Baldo, lubaldo@adinet.com.uy\n\nYes, please send in your improvements and we will add them.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 Dec 2001 15:50:08 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib/dbase"
}
] |
[
{
"msg_contents": "I've said it before, the HISTORY file really contains a lot of inaccurate\ninformation, not to say nonsense. Plus, I find the sectioning and sorting\nto be pretty peculiar.\n\nSo I've tried to give it some better structure, correct the mistakes,\nimprove spelling, and convert it to DocBook in one go.\n\nAttached is the result (only the 7.2 part). Let me know what you think.\n\nBtw., the item \"Fire INSERT rules after statement\" doesn't sound right.\nRules generally don't \"fire\", to name one thing.\n\n-- \nPeter Eisentraut peter_e@gmx.net",
"msg_date": "Sat, 22 Dec 2001 22:41:51 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "HISTORY file"
},
{
"msg_contents": "> I've said it before, the HISTORY file really contains a lot of inaccurate\n> information, not to say nonsense. Plus, I find the sectioning and sorting\n> to be pretty peculiar.\n> \n> So I've tried to give it some better structure, correct the mistakes,\n> improve spelling, and convert it to DocBook in one go.\n> \n> Attached is the result (only the 7.2 part). Let me know what you think.\n> \n> Btw., the item \"Fire INSERT rules after statement\" doesn't sound right.\n> Rules generally don't \"fire\", to name one thing.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 22 Dec 2001 19:15:52 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: HISTORY file"
},
{
"msg_contents": "\nSorry for the previous empty email. \n\nThis list looks great. As I said in 7.1, I do my best, but need help,\nand it is good to get it. I like the new sectioning and I am sure the\nitem descriptions are improved.\n\nHow do you want to go from here? Do you want to maintain HISTORY, or do\nyou want me to keep generating the raw list of items and let you mark\nthem up? Or do you want me to keep maintaining it and use your new\ntemplate? HISTORY needs updating with new stuff added in the past\nweeks. How do you want to do that?\n\n---------------------------------------------------------------------------\n\n> I've said it before, the HISTORY file really contains a lot of inaccurate\n> information, not to say nonsense. Plus, I find the sectioning and sorting\n> to be pretty peculiar.\n> \n> So I've tried to give it some better structure, correct the mistakes,\n> improve spelling, and convert it to DocBook in one go.\n> \n> Attached is the result (only the 7.2 part). Let me know what you think.\n> \n> Btw., the item \"Fire INSERT rules after statement\" doesn't sound right.\n> Rules generally don't \"fire\", to name one thing.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 22 Dec 2001 19:19:53 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: HISTORY file"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> [ mostly excellent HISTORY updates ]\n\n> * The \"pg_hba.conf\" configuration is now only reloaded after receiving a\n> SIGHUP signal, not with each connection.\n\nThat applies to pg_ident.conf too, no?\n\n> Btw., the item \"Fire INSERT rules after statement\" doesn't sound right.\n> Rules generally don't \"fire\", to name one thing.\n\n\"Execute\", \"process\"?\n\n> Remove VACUUM warning about index tuples fewer than heap (Martijn van Oosterhout)\n\nIt's not \"removed\", it's only limited to the cases where it applies,\nviz, indexes that are not partial and include NULLs.\n\n\n\nOverall, it's slightly astonishing how much we've gotten done since 7.1.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 23 Dec 2001 03:07:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: HISTORY file "
},
{
"msg_contents": "Bruce Momjian writes:\n\n> How do you want to go from here? Do you want to maintain HISTORY, or do\n> you want me to keep generating the raw list of items and let you mark\n> them up? Or do you want me to keep maintaining it and use your new\n> template? HISTORY needs updating with new stuff added in the past\n> weeks. How do you want to do that?\n\nI think it would be most effective if when a significant change is checked\nin, the release information is updated with it. That way we don't have to\nthink months later what that obscure log message meant. Also, it would\ngive users of snapshot releases an idea of what has happened so far.\n\nI've updated the list I posted up to the 22nd and checked it in. For the\nhopefully few remaining additions I think it's OK to just update both\nfiles, unless you want to do the reformatting. (I used a program called\n\"links\" to convert from HTML to text.)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 24 Dec 2001 14:18:18 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: HISTORY file"
}
] |
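Peter mentions converting the DocBook/HTML output back to text with a program called "links". For readers without that tool, here is a rough stdlib-only sketch of the same flatten-to-text step; the tag handling below is my own simplification for illustration, not what `links -dump` actually does:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect text content, starting a new line at block-level tags."""
    BLOCK_TAGS = {"p", "li", "h1", "h2", "h3", "ul", "ol", "div", "br"}

    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.BLOCK_TAGS:
            self.chunks.append("\n")

    def handle_data(self, data):
        self.chunks.append(data)

    def text(self):
        # Collapse the blank lines that nested block tags leave behind
        lines = "".join(self.chunks).splitlines()
        return "\n".join(l.strip() for l in lines if l.strip())

html = ("<h2>Release 7.2</h2><ul>"
        "<li>Fire INSERT rules after statement</li>"
        "<li>VACUUM improvements</li></ul>")
p = TextExtractor()
p.feed(html)
print(p.text())
```

A real conversion would also need to handle entities and tables, which is why a full browser dump is the more practical route.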
[
{
"msg_contents": "has the default clause been changed for 7.2b4 so as not use a statement as such:\n\ncreate table news(\nid serial,\ndate date DEFAULT 'select now()::date' NOT NULL,\ntopic varchar(256),\nbody text\n);\n\nthe above query results in ERROR: Bad date external representation 'select now()::date', yet if I use this on a production 7.1.2 machine it works fine.\n\nMike",
"msg_date": "Sat, 22 Dec 2001 17:25:57 -0800",
"msg_from": "\"mike\" <matrix@vianet.ca>",
"msg_from_op": true,
"msg_subject": "default modifiers for 7.2b4"
},
{
"msg_contents": "\"mike\" <matrix@vianet.ca> writes:\n> create table news(\n> id serial,\n> date date DEFAULT 'select now()::date' NOT NULL,\n> topic varchar(256),\n> body text\n> );\n\n[ goggles ]\n\n> the above query results in ERROR: Bad date external representation 'select=\n> now()::date', yet if I use this on a production 7.1.2 machine it works fin=\n> e.\n\nApparently the 7.1.* date parser was rather more forgiving than it\nshould have been. You might care to contemplate the difference between\n\tselect 'now'::date;\nand\n\tselect 'select now()::date'::date;\nor just to make it crystal clear,\n\tselect 'foo now#&%!@bar'::date;\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 22 Dec 2001 20:56:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: default modifiers for 7.2b4 "
},
{
"msg_contents": "(cross posted to -hackers, just in case...)\n\n> or just to make it crystal clear,\n> select 'foo now#&%!@bar'::date;\n\nOh yeah. From the 1.0x days of Postgres95 (and probably earlier) the\ndate/time parser ignored any part of a date string it didn't understand.\n\nThat has been the case, more or less, until select date 'now' ;)\n\nAs you might imagine, that could lead to Really Bad Errors, such as\nignoring mistyped time zones. So for 7.2 fields are not ignored, unless\nthey are on the short list of strings which have been explicitly ignored\nforever. For example, 'abstime' can be inside a date string without\ntrouble, to support compatibility with the argument 'Invalid Abstime',\nwhich apparently was actually emitted by Postgres or Postgres95 (or\nmaybe someone thought it may be someday. Who knows??!).\n\nAnyway, the parsing has been tightened up a bit. btw, there are some new\nfeatures in the parser in addition to this, such as supporting more\nvaried forms of ISO-8601 dates and times.\n\n - Thomas\n",
"msg_date": "Sun, 23 Dec 2001 05:42:46 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: default modifiers for 7.2b4"
}
] |
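The parser change Thomas describes can be modeled as token-at-a-time scanning: pre-7.2, unrecognized tokens were silently dropped, so `'select now()::date'` degenerated to `'now'`; 7.2 rejects them. A toy Python sketch of that difference (the token table and the fixed "today" are invented for the example; this is not PostgreSQL's actual datetime code):

```python
import re
from datetime import date

TODAY = date(2001, 12, 22)          # fixed "now" so the example is deterministic
KNOWN = {"now": TODAY}              # tokens the toy parser understands
IGNORED = {"abstime"}               # junk tokens ignored for historical reasons

def parse_date(text, strict=True):
    """Token-at-a-time date parsing: lenient mode skips unknown tokens,
    strict mode raises, roughly the 7.1 -> 7.2 change described above."""
    value = None
    for tok in re.split(r"[^a-z0-9]+", text.lower()):
        if not tok or tok in IGNORED:
            continue
        if tok in KNOWN:
            value = KNOWN[tok]
        elif strict:
            raise ValueError(f"Bad date external representation '{text}'")
        # lenient mode: silently drop the token, as the old parsers did
    if value is None:
        raise ValueError(f"Bad date external representation '{text}'")
    return value

print(parse_date("select now()::date", strict=False))  # lenient: finds "now"
print(parse_date("now", strict=True))                  # fine either way
```

Run strictly, the first input raises on the unknown token `select`, which is exactly the error mike hit when moving from 7.1.2 to 7.2b4.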
[
{
"msg_contents": "Hi,\n\nI just fixed some nasty bugs in ecpg, especially when it comes to using\narrays of structs. It seems this feature has never been fully implemented.\nDuring my search for these bugs I also fixed some others, including one\nmemory-related one. \n\nIn the end I had to make changes in quite some source files, but I'm pretty\nsure I managed to not mess up the remaining functionality. However, I had to\nchange some variable addressing and I could only test it on i386 Linux. \n\nI did commit the changes to CVS but unless they are tested by some others on\nother archs and maybe with different embedded sql sources I don't feel good\nreleasing it with 7.2. Since the changes just fix some bugs, I don't feel\ngood if we have to back them out either, so please everyone check this.\n\nIf you have some sources just run them through the latest ecpg version.\n\nIf you have some other arch, it would be okay if you just run my test cases\nand see if the output is okay. \n\nThanks a lot.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Sun, 23 Dec 2001 13:29:52 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "ECPG changes"
}
] |
[
{
"msg_contents": "I am pleased to announce the initial release of \"libpkixpq\", \nPostgreSQL user-defined types and functions that allow the\ndatabase to understand the basic PKIX types. \n\nThis release should be considered EXPERIMENTAL. This is \nliterally the first public release and the lack of known\nbugs undoubtedly speaks to my own poor testing skills, not\nto the quality of the code.\n\nThe intention of this package is to enable the database to extract\n(and check) fields from PKIX objects, not to create new ones \nor manipulate existing ones. The latter functions would best \nbe supported via a second set of user-defined functions.\n\nOne practical use of these types is to create \"friendly\" views\nof PKIX fields:\n\n create table x (x x509);\n\n create view v as\n select \n x509_serial(x) as serial,\n x509_subject(x) as subject,\n x509_issuer(x) as issuer,\n x509_notBefore(x) as notBefore,\n x509_notAfter(x) as notAfter\n from x;\n\nA second practical use is supporting integrity checks on the\ndata:\n\n create table cachedx (\n x x509,\n subject varchar(80)\n constraint c1 check (subject = x509_subject(x))\n );\n\nThis is not yet fully supported since there is no test for equality \nof \"x509_name\" objects. 
You can compare individual components.\n\n\nThese new types are defined:\n\n Certificates and bags:\n\n x509\n pkcs7\n pkcs8\n pkcs12\n\n Other PKIX information:\n\n x509_req\n x509_crl\n pubkey\n rsapubkey\n dsapubkey\n dsaparams\n dhparams\n\n Miscellaneous\n\n x509_name\n asn1_integer (probably renamed in future)\n\nA large number of accessor functions are also defined, see the\n\"test\" directory for a list of these files and demonstrations of\ntheir use.\n\nSource:\n\n1) Source is available at http://www.dimensional.com/~bgiles/\n\n2) Source is released under a new-style BSD license.\n\n3) Source can be built with either standard autoconf techniques,\n or as a Debian package.\n\n4) Ideally, the source will eventually be distributed as\n contributed code with either the PostgreSQL or OpenSSL projects.\n\nRequirements:\n\n1) OpenSSL 0.9.6b was used during development, but (slightly) older\n versions shouldn't be a problem.\n\n2) PostgreSQL 7.1.3, primarily because all new types are \"TOASTable\"\n to allow the contents to be moved out of the main table when necessary.\n\nKnown bugs:\n\n1) Many internal functions still guess at how much memory will be\n required to hold results, and silently truncate the output to 4k.\n This has not been a problem during testing, but it's an unnecessary\n restriction.\n\n2) There is essentially no documentation yet.\n\n3) Certificate times are parsed to the minute, not to the second,\n and are presented as \"abstime\", not \"datetime.\"\n\nFuture enhancements:\n\n1) Make it possible to compare x509_name and asn1_integer objects\n directly.\n\n2) Add all arithmetic functions for asn1_integer.\n\nExport stuff:\n\n1) A copy of this notice has been sent to crypt@bxa.doc.gov.\n\n--\nBear Giles\nbgiles (at) coyotesong (dot) com\n",
"msg_date": "Sun, 23 Dec 2001 19:46:31 -0700 (MST)",
"msg_from": "Bear Giles <bear@coyotesong.com>",
"msg_from_op": true,
"msg_subject": "Announcement: libpkixpq 0.1 released"
},
{
"msg_contents": "Bear Giles <bear@coyotesong.com> writes:\n> I am pleased to announce the initial release of \"libpkixpq\", \n> PostgreSQL user-defined types and functions that allow the\n> database to understand the basic PKIX types. \n\nFor the ignorant among us ... what is PKIX?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 23 Dec 2001 23:06:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Announcement: libpkixpq 0.1 released "
},
{
"msg_contents": "On Sun, Dec 23, 2001 at 11:06:30PM -0500, Tom Lane wrote:\n> Bear Giles <bear@coyotesong.com> writes:\n> > I am pleased to announce the initial release of \"libpkixpq\", \n> > PostgreSQL user-defined types and functions that allow the\n> > database to understand the basic PKIX types. \n> \n> For the ignorant among us ... what is PKIX?\n\nBear posted two days previously, with a nice long message about how\nall this should work. Bruce, could you drop that post into a TODO.pki\nor TODO.crypto ?\n\nhttp://archives.postgresql.org/pgsql-hackers/2001-12/msg00823.php\n\nBear, there _is_ an existing SSL patch/connection option. Have you\nlooked at that code?\n\nRoss\n-- \nRoss Reedstrom, Ph.D. reedstrm@rice.edu\nExecutive Director phone: 713-348-6166\nGulf Coast Consortium for Bioinformatics fax: 713-348-6182\nRice University MS-39\nHouston, TX 77005\n",
"msg_date": "Thu, 3 Jan 2002 11:58:57 -0600",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: Announcement: libpkixpq 0.1 released"
}
] |
[
{
"msg_contents": "hello pgsql-hackers,\n\nI'm very very very... new here, but...\nWell, I see you're all stressed by the new release coming up soon, but maybe\none of you guys has got an ear and a couple of words for a smb who really\nwants to get involved.\n\nme: I think I've got enough programming experience to get involved; if smb.\nreally wants to take a look at what I can do, please go to\nhttp://www.pbit.org/me_e.html\nI used to work with SQL databases in many projects, and PostgreSQL was the\nbetter one :-)\n\nwhy I'm here: well, I don't think that I really can become an\ninternals-guru after I read some of your messages here - it would take a\nlong time - and I think you guys don't need anyone new in the\ninternals-kitchen. But I checked out your TODOs and I found an exotic issue\nwhich sounds interesting to me: to make a PostgreSQL database work like an\nOracle database to clients - it's not small and is not a bugfix, so I don't\nhave to know about the deepest internals of PostgreSQL to implement it.\n\nmy questions: is there smb. already working on it? Is it smth. this database\nreally needs? Did you guys work out any usable docs about what's the\nultimate way to implement such a feature (I just see a listener-idea in the\nlist)? Or is it the right time to discuss smth. like that?\nOr is there another \"major\" feature waiting for its implementation and\nbeing more urgent?\n\nrgds\nPavlo\n\n\n",
"msg_date": "Mon, 24 Dec 2001 17:26:34 +0100",
"msg_from": "\"Pavlo Baron\" <pb@pbit.org>",
"msg_from_op": true,
"msg_subject": "Smb to get involved"
},
{
"msg_contents": "Hi Pavlo, welcome aboard! Like any good free software collaborative\nproject, PostgreSQL is always happy to have new contributors. Just be\nprepared for public, honest, _productive_ criticism of your code. If\nyou've got an itch to make an Oracle compatibility layer, scratch away:\nno one here will try to tell you what you _should_ be working on. Do\nnote that being 'just like Oracle' is not a major goal for the project,\nbut making it easy to port or write software against both databases is.\n\nIf you decide to take a look at this, I'd suggest talking to the OpenACS\npeople, and in particular Don Baccus <dhogaza@pacifier.com> who pops\nup here occasionally: they've probably ported more code from Oracle to\nPostgreSQL than anybody, and have a good idea about the niggly little\ndetails you'd need to address.\n\nRoss\n\nOn Mon, Dec 24, 2001 at 05:26:34PM +0100, Pavlo Baron wrote:\n> hello pgsql-hackers,\n> \n> I'm very very very... new here, but...\n> Well, I see you're all stressed by the new release comming up soon, but may\n> be one of you guys has got an ear and a couple of words for a smb who really\n> wants to get involved.\n> \n> me: I think, I've got enough programming experience to get involved, if smb.\n> really want's to take a look at what I can, please go to\n> http://www.pbit.org/me_e.html\n> I used to work with SQL databases in many projects, and PostgreSQL was the\n> better one :-)\n> \n> why I'm here: well, I don't think, that I really can become an\n> internals-guru after I read some of your messages here - it would take a\n> long time - and I think, you guys don't need anyone new in the\n> internals-kitchen. But I checked out your TODOs and I found an exotic issue,\n> which sound interesting to me: to make a PostgreSQL database to work like an\n> Oracle database to clients - it's not small and is not a bugfix, so I don't\n> have to know about the deepest internals of PostgreSQL to implement it.\n> \n> my questions: is there smb. 
already working on it? Is it smth. this database\n> realy needs? Did you guys worked out any usable docs about what's the\n> ultimate way to implement such a feature (I just see a listener-idea in the\n> list)? Or is it the right time to discuss smth. like that?\n> Or is there an other \"major\" feature waiting for it's implementation and\n> being more urgent?\n> \n> rgds\n> Pavlo\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n",
"msg_date": "Thu, 3 Jan 2002 12:12:20 -0600",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: Smb to get involved"
},
{
"msg_contents": "Ross J. Reedstrom writes:\n\n> Hi Pavlo, welcome aboard!\n\n:-)\n\n> Like any good free software collaborative\n> project, PostgreSQL is always happy to have new contributors. Just be\n> prepared for public, honest, _productive_ criticism of your code.\n\noh yeah, I've already communicated with some core gurus (Tom, Bruce, Thomas)\nover the holidays about providing a fix for a TODO-item, but it seems to me the\nthing I implemented would never match their expectations about how to fix\nsmth. lying deep in the basic code. I thought nobody would ever reply to my\nfirst, Oracle-related email, so I used the moment to take a look at the\ndeepest basics. Then I pestered those guys a little ;) Now, I really don't\nthink the codebase is the right thing for me - in my honest opinion -\nthere is a great amount of code written by smb. else and there are\nsufficient codebase experts able to fix all those bugs. But\nthis playing around did really help me to understand some basic concepts of\nthe parser.\n\n> If\n> you've got an itch to make an Oracle compatability layer, scratch away:\n> noone here will try to tell you what you _should_ be working on. Do\n> note that being 'just like Oracle' is not a major goal for the project,\n> but making it easy to port or write software against both databases is.\n\nI never wanted to make PostgreSQL=Oracle, oh no. My original idea was to get\ninvolved by providing a translation service, or a part of it, to overlay one\nof them with the other. I found some code and an Oracle-specific TODO in\nthe source tree, thinking a bit differently from the idea of a Net8 listener\nin the main TODO - I found that one terrible - maybe because I couldn't\nfind any free documentation on how to implement the Net8 layer.\nI think a complete SQL translator for both databases would be the first\nstep. But there are very many features in Oracle missing in PG, so\nthe translator Oracle2PG would get a little bit tricky, having to stub out some\nthings.\nIs it interesting to discuss this topic now? (Smb. said it's the wrong\nmoment for such discussions.) I didn't want to appear where everybody\nis stressed with the release completion.\n\nrgds\nPavlo Baron\n\n",
"msg_date": "Thu, 3 Jan 2002 19:59:03 +0100",
"msg_from": "\"Pavlo Baron\" <pb@pbit.org>",
"msg_from_op": true,
"msg_subject": "Re: Smb to get involved"
}
] |
[
{
"msg_contents": "Hi, \n\nI just stumbled on this one:\n\ncreate table radlog (\n...\ndebut timestamp,\nfin timestamp,\n...)\n\nselect extract(YEAR FROM debut) as annee,extract(MONTH FROM debut) as\nmois, EXTRACT(EPOCH FROM sum(fin-debut)) as total, EXTRACT(EPOCH FROM\navg(fin - debut)) from radlog where fin is not null;\n\nFails with ERROR: Bad interval external representation '0' for any debut = fin.\n\nThis can also be seen with select now() - now();\n\nThis is with 7.1.3.\n\nI've not yet been able to test with 7.2.\n\nIs there any quick patch for 7.1.3?\n\nMerry Xmas to you all\n\nRegards\n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n",
"msg_date": "Mon, 24 Dec 2001 19:39:11 +0100 (MET)",
"msg_from": "Olivier PRENANT <ohp@pyrenet.fr>",
"msg_from_op": true,
"msg_subject": "interval bug?"
}
] |
[
{
"msg_contents": "On Mon, 24 Dec 2001, Thomas Lockhart wrote:\n\n> (resent due to dns trouble)\n> \n> > select extract(YEAR FROM debut) as annee,...\n> > EXTRACT(EPOCH FROM avg(fin - debut))\n> > from radlog where fin is not null;\n> > Fails with '0' bas interval external representation for any debut = fin.\n> > This can also be seen with select now() - now();\n> \n> I'm not seeing this symptom exactly as you describe (and your query\n> fails due to lack of a group by clause, right?). However, I *do* see a\nYeah!! That was left as an exercise :)\n> problem on 7.1 with aggregates on intervals, seemingly due to the values\n> used to initialize the aggregate. Fix is below. I don't see the same\n> trouble in 7.2.\n> \n> > This is with 7.1.3\n> > Is there any quick patch for 7.1.3\n> \n> Tested on 7.1:\n> \n> thomas=# select extract(year from debut) as annee,\n> thomas-# avg(fin-debut) as total from radlog group by annee;\n> ERROR: Bad interval external representation '0'\n> thomas=# update pg_aggregate set agginitval='{\\'0 sec\\',\\'0 sec\\'}'\n> thomas-# where aggname = 'avg' and aggbasetype=1186;\n> UPDATE 1\nYep!! It works ...\n\nThank you so much...\n> thomas=# select extract(year from debut) as annee, avg(fin-debut) as\n> total from radlog group by annee;\n> annee | total \n> -------+------------------\n> 2001 | @ 3 mins 38 secs\n> \nI started Xmas night so I can't go any further.\n\nBut I hope all of you guys have a nice time with family and/or friends\n\nBest regards from France\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n",
"msg_date": "Tue, 25 Dec 2001 00:04:44 +0100 (MET)",
"msg_from": "Olivier PRENANT <ohp@pyrenet.fr>",
"msg_from_op": true,
"msg_subject": "Re: interval bug?"
}
] |
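The `agginitval` fix above works because an aggregate folds rows into a transition state that must begin as a valid zero interval. A small Python illustration of `avg(interval)` as such a fold (pure sketch, not backend code):

```python
from datetime import timedelta

def interval_avg(intervals):
    """avg(interval) as a fold: the transition state starts from a valid
    'zero' value, the analogue of agginitval = {'0 sec','0 sec'} above."""
    running_sum, count = timedelta(0), 0   # a malformed initial value here
    for iv in intervals:                   # is what broke 7.1's interval avg()
        running_sum += iv
        count += 1
    return running_sum / count

# Rows with fin = debut contribute a zero-length interval and average fine
durations = [timedelta(minutes=5), timedelta(0), timedelta(minutes=2)]
print(interval_avg(durations))
```

The 7.1 bug was not in the fold itself but in the stored initial value `'0'`, which is not a parseable interval; seeding the state with `'0 sec'` makes the accumulation start from a well-formed zero.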
[
{
"msg_contents": "For those who're celebrating it now,\nfor those it has passed a few hours ago and\nfor those it will come in a few hours ---\n\n Merry Christmas\n Joyeux Noël\n Feliz Navidad\n\n!!!\n\n--\nSerguei A. Mokhov\n\n\n",
"msg_date": "Mon, 24 Dec 2001 18:52:29 -0500",
"msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>",
"msg_from_op": true,
"msg_subject": "[OT] Merry Xmas!"
}
] |
[
{
"msg_contents": "I found a failure in a JDBC driver of 7.2b4.\n (1) It does not support timestamptz type\n (2) Exception occurs by timestamp without time zone type\nI attach a patch correcting the first failure.\nYou can confirm it in example.psql as follows:\n\n---------------------------------------------------------------------\n$ java -cp postgresql-examples.jar:postgresql.jar example.psql jdbc:postgresql:r-matuda r-matuda pass\nPostgreSQL psql example v6.3 rev 1\n\nConnecting to Database URL = jdbc:postgresql:r-matuda\nConnected to PostgreSQL 7.2b4\n\n[1] select 'now'::timestamp;\ntimestamptz\nNo class found for timestamptz.\n[1] [1] select 'now'::timestamp with time zone;\ntimestamptz\nNo class found for timestamptz.\n[1] [1] select 'now'::timestamp without time zone;\ntimestamp\nException caught.\njava.lang.StringIndexOutOfBoundsException: String index out of range: 26\njava.lang.StringIndexOutOfBoundsException: String index out of range: 26\n at java.lang.String.charAt(String.java:516)\n at org.postgresql.jdbc2.ResultSet.toTimestamp(ResultSet.java:1653)\n at org.postgresql.jdbc2.ResultSet.getTimestamp(ResultSet.java:398)\n at org.postgresql.jdbc2.ResultSet.getObject(ResultSet.java:768)\n at example.psql.displayResult(psql.java:137)\n at example.psql.processLine(psql.java:96)\n at example.psql.<init>(psql.java:62)\n at example.psql.main(psql.java:227)",
"msg_date": "Tue, 25 Dec 2001 17:19:39 +0900",
"msg_from": "Ryouichi Matsuda <r-matuda@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Failure in timestamptz of JDBC of 7.2b4"
},
{
"msg_contents": "Ryouichi,\n\nI did not follow your explanation of the problem and therefore I don't \nsee how your suggested patch fixes the problem.\n\nYou claim that the jdbc driver does not support the timestamptz type. \nHowever when I try using this type, I don't have any problems. I just \nexecuted the sql statements you had included in your email:\n\nselect 'now'::timestamp;\nselect 'now'::timestamptz;\nselect 'now'::timestamp with time zone;\n\nThese all seem to work correctly for me.\n\nThen however I did try your last query:\n\nselect 'now'::timestamp without time zone;\n\nand this does fail for me with the string index out of bounds exception. \n However the patch you sent does not seem to fix this error. And I \nreally don't know what to do with this datatype, since jdbc does not \nhave a corresponding datatype that doesn't contain timezone information.\n\nSo to summarize, after looking at the sql statements you mention in your \nemail, they all seem to work correctly for me without your patch being \napplied, except for the last one, which still fails even with your patch \napplied. 
So I am unsure what your patch is supposed to fix, since I do \nnot see any evidence that it fixes anything that is broken.\n\nthanks,\n--Barry\n\n\n\nRyouichi Matsuda wrote:\n\n> I found a failure in a JDBC driver of 7.2b4.\n> (1) It does not support timestamptz type\n> (2) Exception occurs by timestamp without time zone type\n> I attach a patch correcting the first failure.\n> You can confirm it in example.psql as follows:\n> \n> ---------------------------------------------------------------------\n> $ java -cp postgresql-examples.jar:postgresql.jar example.psql jdbc:postgresql:r-matuda r-matuda pass\n> PostgreSQL psql example v6.3 rev 1\n> \n> Connecting to Database URL = jdbc:postgresql:r-matuda\n> Connected to PostgreSQL 7.2b4\n> \n> [1] select 'now'::timestamp;\n> timestamptz\n> No class found for timestamptz.\n> [1] [1] select 'now'::timestamp with time zone;\n> timestamptz\n> No class found for timestamptz.\n> [1] [1] select 'now'::timestamp without time zone;\n> timestamp\n> Exception caught.\n> java.lang.StringIndexOutOfBoundsException: String index out of range: 26\n> java.lang.StringIndexOutOfBoundsException: String index out of range: 26\n> at java.lang.String.charAt(String.java:516)\n> at org.postgresql.jdbc2.ResultSet.toTimestamp(ResultSet.java:1653)\n> at org.postgresql.jdbc2.ResultSet.getTimestamp(ResultSet.java:398)\n> at org.postgresql.jdbc2.ResultSet.getObject(ResultSet.java:768)\n> at example.psql.displayResult(psql.java:137)\n> at example.psql.processLine(psql.java:96)\n> at example.psql.<init>(psql.java:62)\n> at example.psql.main(psql.java:227)\n> \n> \n> ------------------------------------------------------------------------\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n\n",
"msg_date": "Wed, 26 Dec 2001 18:44:35 -0800",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Failure in timestamptz of JDBC of 7.2b4"
},
{
"msg_contents": "\nBarry Lind wrote:\n\n> select 'now'::timestamp;\n> select 'now'::timestamptz;\n> select 'now'::timestamp with time zone;\n>\n> These all seem to work correctly for me.\n\nWhen I executed SQL in <<example.psql>>, exception occurred.\nIn an approach by a getTimestamp() method, exception does not occur.\n\n Statement st = db.createStatement();\n ResultSet rs = st.executeQuery(\"SELECT 'now'::timestamp\");\n rs.next();\n Timestamp ts = rs.getTimestamp(1);\n System.out.println(ts);\n\n\nBut, in an approach by a getObject() method, exception occurs.\n\n Statement st = db.createStatement();\n ResultSet rs = st.executeQuery(\"SELECT 'now'::timestamp\");\n rs.next();\n Timestamp ts = (Timestamp)rs.getObject(1);\n System.out.println(ts);\n\nBecause a displayResult() method of 'example/psql.java' uses\ngetObject(), exception of 'No class found for timestamptz' occurs.\nThe patch which I attached to a former mail corrects this error.\n\n\n> Then however I did try your last query:\n> \n> select 'now'::timestamp without time zone;\n> \n> and this does fail for me with the string index out of bounds exception. \n> However the patch you sent does not seem to fix this error. And I \n> really don't know what to do with this datatype, since jdbc does not \n> have a corresponding datatype that doesn't contain timezone information.\n\nMy patch does not correct this error. It is difficult for me to\ncorrect this error justly, but I try to think.\n\n\nIn addition, I found an error of time type.\n\n Statement st = db.createStatement();\n ResultSet rs = st.executeQuery(\"SELECT 'now'::time\");\n rs.next();\n Time t = rs.getTime(1);\n System.out.println(t);\n\nThis becomes string index out of bounds exception.\n\n",
"msg_date": "Mon, 07 Jan 2002 18:19:44 +0900",
"msg_from": "Ryouichi Matsuda <r-matuda@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Failure in timestamptz of JDBC of 7.2b4"
},
{
"msg_contents": "Ryouichi,\n\nThanks for the additional information. I will look at this when I get \nsome free time.\n\nthanks,\n--Barry\n\n\n\nRyouichi Matsuda wrote:\n\n> Barry Lind wrote:\n> \n> \n>>select 'now'::timestamp;\n>>select 'now'::timestamptz;\n>>select 'now'::timestamp with time zone;\n>>\n>>These all seem to work correctly for me.\n>>\n> \n> When I executed SQL in <<example.psql>>, exception occurred.\n> In an approach by a getTimestamp() method, exception does not occur.\n> \n> Statement st = db.createStatement();\n> ResultSet rs = st.executeQuery(\"SELECT 'now'::timestamp\");\n> rs.next();\n> Timestamp ts = rs.getTimestamp(1);\n> System.out.println(ts);\n> \n> \n> But, in an approach by a getObject() method, exception occurs.\n> \n> Statement st = db.createStatement();\n> ResultSet rs = st.executeQuery(\"SELECT 'now'::timestamp\");\n> rs.next();\n> Timestamp ts = (Timestamp)rs.getObject(1);\n> System.out.println(ts);\n> \n> Because a displayResult() method of 'example/psql.java' uses\n> getObject(), exception of 'No class found for timestamptz' occurs.\n> The patch which I attached to a former mail corrects this error.\n> \n> \n> \n>>Then however I did try your last query:\n>>\n>>select 'now'::timestamp without time zone;\n>>\n>>and this does fail for me with the string index out of bounds exception. \n>> However the patch you sent does not seem to fix this error. And I \n>>really don't know what to do with this datatype, since jdbc does not \n>>have a corresponding datatype that doesn't contain timezone information.\n>>\n> \n> My patch does not correct this error. It is difficult for me to\n> correct this error justly, but I try to think.\n> \n> \n> In addition, I found an error of time type.\n> \n> Statement st = db.createStatement();\n> ResultSet rs = st.executeQuery(\"SELECT 'now'::time\");\n> rs.next();\n> Time t = rs.getTime(1);\n> System.out.println(t);\n> \n> This becomes string index out of bounds exception.\n> \n> \n\n\n",
"msg_date": "Tue, 08 Jan 2002 10:19:27 -0800",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: Failure in timestamptz of JDBC of 7.2b4"
},
{
"msg_contents": "Ryouichi,\n\nI have applied the patch for this bug. The fix won't be in 7.2b5, but \nwill be in 7.2RC1. It will also be in the latest builds on the \njdbc.postgresql.org website.\n\nthanks,\n--Barry\n\n\nRyouichi Matsuda wrote:\n\n> Barry Lind wrote:\n> \n> \n>>select 'now'::timestamp;\n>>select 'now'::timestamptz;\n>>select 'now'::timestamp with time zone;\n>>\n>>These all seem to work correctly for me.\n>>\n> \n> When I executed SQL in <<example.psql>>, exception occurred.\n> In an approach by a getTimestamp() method, exception does not occur.\n> \n> Statement st = db.createStatement();\n> ResultSet rs = st.executeQuery(\"SELECT 'now'::timestamp\");\n> rs.next();\n> Timestamp ts = rs.getTimestamp(1);\n> System.out.println(ts);\n> \n> \n> But, in an approach by a getObject() method, exception occurs.\n> \n> Statement st = db.createStatement();\n> ResultSet rs = st.executeQuery(\"SELECT 'now'::timestamp\");\n> rs.next();\n> Timestamp ts = (Timestamp)rs.getObject(1);\n> System.out.println(ts);\n> \n> Because a displayResult() method of 'example/psql.java' uses\n> getObject(), exception of 'No class found for timestamptz' occurs.\n> The patch which I attached to a former mail corrects this error.\n> \n> \n> \n>>Then however I did try your last query:\n>>\n>>select 'now'::timestamp without time zone;\n>>\n>>and this does fail for me with the string index out of bounds exception. \n>> However the patch you sent does not seem to fix this error. And I \n>>really don't know what to do with this datatype, since jdbc does not \n>>have a corresponding datatype that doesn't contain timezone information.\n>>\n> \n> My patch does not correct this error. It is difficult for me to\n> correct this error justly, but I try to think.\n> \n> \n> In addition, I found an error of time type.\n> \n> Statement st = db.createStatement();\n> ResultSet rs = st.executeQuery(\"SELECT 'now'::time\");\n> rs.next();\n> Time t = rs.getTime(1);\n> System.out.println(t);\n> \n> This becomes string index out of bounds exception.\n> \n> \n\n\n",
"msg_date": "Mon, 14 Jan 2002 22:55:12 -0800",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: Failure in timestamptz of JDBC of 7.2b4"
},
{
"msg_contents": "I wrote:\n> In addition, I found an error of time type.\n> \n> Statement st = db.createStatement();\n> ResultSet rs = st.executeQuery(\"SELECT 'now'::time\");\n> rs.next();\n> Time t = rs.getTime(1);\n> System.out.println(t);\n> \n> This becomes string index out of bounds exception.\n\nAn attached patch corrects this error.\n\nBut the time is always in the local time zone.\nIn this patch, the time zone is not examined.",
"msg_date": "Tue, 15 Jan 2002 19:41:31 +0900",
"msg_from": "Ryouichi Matsuda <r-matuda@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Failure in timestamptz of JDBC of 7.2b4"
},
{
"msg_contents": "Barry Lind wrote:\n> Then however I did try your last query:\n> \n> select 'now'::timestamp without time zone;\n> \n> and this does fail for me with the string index out of bounds exception. \n\nAn attached patch corrects problem of this bug and fractional second.\n\n\nThe handling of time zone was as follows:\n\n (a) with time zone\n using SimpleDateFormat(\"yyyy-MM-dd HH:mm:ss z\")\n (b) without time zone\n using SimpleDateFormat(\"yyyy-MM-dd HH:mm:ss\")\n\n\nAbout problem of fractional second,\nFractional second was changed from milli-second to nano-second.",
"msg_date": "Thu, 17 Jan 2002 20:00:01 +0900",
"msg_from": "Ryouichi Matsuda <r-matuda@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Problem in ResultSet#getTimestamp() of 7.2b4"
},
{
"msg_contents": "Hi Matsuda-san,\n\nYou beat me to it :) The following is a similar patch to the same code.\nI've tested it with your GetTimestampTest.java code and it looks good\nfrom what I can see. I'm attaching both jdbc1 and jdbc2 patches. This\npatch changes a bit less in the code and basically adds a check to the\nfraction loop for the end of string, as well as a check for a tz before\nadding the GMT bit.\n\nTom.\n\nOn Thu, Jan 17, 2002 at 08:00:01PM +0900, Ryouichi Matsuda wrote:\n> Barry Lind wrote:\n> > Then however I did try your last query:\n> > \n> > select 'now'::timestamp without time zone;\n> > \n> > and this does fail for me with the string index out of bounds exception. \n> \n> An attached patch corrects problem of this bug and fractional second.\n> \n> \n> The handling of time zone was as follows:\n> \n> (a) with time zone\n> using SimpleDateFormat(\"yyyy-MM-dd HH:mm:ss z\")\n> (b) without time zone\n> using SimpleDateFormat(\"yyyy-MM-dd HH:mm:ss\")\n> \n> \n> About problem of fractional second,\n> Fractional second was changed from milli-second to nano-second.\n\n-- \nThomas O'Dowd. - Nooping - http://nooper.com\ntom@nooper.com - Testing - http://nooper.co.jp/labs",
"msg_date": "Thu, 17 Jan 2002 22:57:08 +0900",
"msg_from": "\"Thomas O'Dowd\" <tom@nooper.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem in ResultSet#getTimestamp() of 7.2b4"
},
{
"msg_contents": "\nThis has been saved for the 7.3 release:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches2\n\n---------------------------------------------------------------------------\n\nRyouichi Matsuda wrote:\n> Barry Lind wrote:\n> > Then however I did try your last query:\n> > \n> > select 'now'::timestamp without time zone;\n> > \n> > and this does fail for me with the string index out of bounds exception. \n> \n> An attached patch corrects problem of this bug and fractional second.\n> \n> \n> The handling of time zone was as follows:\n> \n> (a) with time zone\n> using SimpleDateFormat(\"yyyy-MM-dd HH:mm:ss z\")\n> (b) without time zone\n> using SimpleDateFormat(\"yyyy-MM-dd HH:mm:ss\")\n> \n> \n> About problem of fractional second,\n> Fractional second was changed from milli-second to nano-second.\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 24 Jan 2002 21:08:19 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Problem in ResultSet#getTimestamp() of 7.2b4"
},
{
"msg_contents": "\nThis has been saved for the 7.3 release:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches2\n\n---------------------------------------------------------------------------\n\nThomas O'Dowd wrote:\n> Hi Matsuda-san,\n> \n> You beat me too it :) The following is a similar patch to the same code.\n> I've tested it with your GetTimestampTest.java code and it looks good\n> from what I can see. I'm attaching both jdbc1 and jdbc2 patches. This\n> patch changes a bit less in the code and basically adds a check to the\n> fraction loop for the end of string, as well as a check for a tz before\n> adding the GMT bit.\n> \n> Tom.\n> \n> On Thu, Jan 17, 2002 at 08:00:01PM +0900, Ryouichi Matsuda wrote:\n> > Barry Lind wrote:\n> > > Then however I did try your last query:\n> > > \n> > > select 'now'::timestamp without time zone;\n> > > \n> > > and this does fail for me with the string index out of bounds exception. \n> > \n> > An attached patch corrects problem of this bug and fractional second.\n> > \n> > \n> > The handling of time zone was as follows:\n> > \n> > (a) with time zone\n> > using SimpleDateFormat(\"yyyy-MM-dd HH:mm:ss z\")\n> > (b) without time zone\n> > using SimpleDateFormat(\"yyyy-MM-dd HH:mm:ss\")\n> > \n> > \n> > About problem of fractional second,\n> > Fractional second was changed from milli-second to nano-second.\n> \n> -- \n> Thomas O'Dowd. - Nooping - http://nooper.com\n> tom@nooper.com - Testing - http://nooper.co.jp/labs\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 24 Jan 2002 21:08:28 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Problem in ResultSet#getTimestamp() of 7.2b4"
},
{
"msg_contents": "Hi\n\nI am trying to insert non-English characters (chars which lie in the\n128-255 range in the ASCII character set). When I try to get this data from\nthe table, I am getting garbled data. Is there any way to insert and\nretrieve non-English characters in pgsql?\n\nregards\nSulakshana\n\n",
"msg_date": "Fri, 25 Jan 2002 18:55:37 +0530",
"msg_from": "\"Sulakshana Awsarikar\" <sulakshana@mithi.com>",
"msg_from_op": false,
"msg_subject": "Inserting Non English characters in a varchar type field"
},
{
"msg_contents": "I think you must adjust the character set of your database. By default it is\nASCII, but you can create databases with Unicode or one of the European\ncharacter sets (LATIN-1 or something else).\n\nThat should solve your problem.\n\nOliver Friedrich\n\n\n-----Original Message-----\nFrom: pgsql-jdbc-owner@postgresql.org\n[mailto:pgsql-jdbc-owner@postgresql.org]On Behalf Of Sulakshana\nAwsarikar\nSent: Friday, January 25, 2002 2:26 PM\nTo: pgsql-jdbc@postgresql.org\nSubject: [JDBC] Inserting Non English characters in a varchar type field\n\n\nHi\n\nI am trying to insert non english characters ( chars which lie in the\n128-255 range in the ASCII character set ). When I try to get this data from\nthe table, I am getting garbled data. Is there any way to insert and\nretrieve non english characters in pgsql\n\nregards\nSulakshana\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\nsubscribe-nomail command to majordomo@postgresql.org so that your\nmessage can get through to the mailing list cleanly\n\n",
"msg_date": "Fri, 25 Jan 2002 15:18:49 +0100",
"msg_from": "\"Oliver Friedrich\" <oliver@familie-friedrich.de>",
"msg_from_op": false,
"msg_subject": "Re: Inserting Non English characters in a varchar type field"
},
{
"msg_contents": "hi,\nI get the following error while retrieving a datetime column using \ngetTimestamp.\n\nBad Timestamp Format at 12 in 1970-1-1 0:0;\n\n\nBad Timestamp Format at 12 in 1970-1-1 0:0\n at org.postgresql.jdbc2.ResultSet.getTimestamp(Unknown Source)\n at org.postgresql.jdbc2.ResultSet.getTimestamp(Unknown Source)\n at \norg.jboss.pool.jdbc.ResultSetInPool.getTimestamp(ResultSetInPool.java:734)\n\nregards,\nNagarajan.\n",
"msg_date": "Sat, 26 Jan 2002 14:33:15 +0100",
"msg_from": "Gunaseelan Nagarajan <gnagarajan@dkf.de>",
"msg_from_op": false,
"msg_subject": "Bad Timestamp Format"
},
{
"msg_contents": "Hi Nagarajan,\n\nWhat driver version are you using? The latest drivers at least\nshould switch the database to return ISO datestyle timestamps\nwhen it first gets a connection. The current driver code expects\nto parse the ISO format which is why you are getting this exception\nbelow. Are you changing the datestyle in your program? I'm unfamiliar\nwith how to get postgres to return a timestamp in the short format without\nthe seconds which you have below. The ISO format usually returns...\n1970-01-01 00:00:00 if without time zone is specified. Can you perhaps\nprovide sample code that reproduces the problem using the latest driver?\n\nYou can download the latest drivers here...\n\n\thttp://jdbc.postgresql.org/download.html\n\nCheers,\n\nTom.\n\nOn Sat, Jan 26, 2002 at 02:33:15PM +0100, Gunaseelan Nagarajan wrote:\n> hi,\n> I get the following error while retrieving a datetime column using \n> getTimestamp.\n> \n> Bad Timestamp Format at 12 in 1970-1-1 0:0;\n> \n> \n> Bad Timestamp Format at 12 in 1970-1-1 0:0\n> at org.postgresql.jdbc2.ResultSet.getTimestamp(Unknown Source)\n> at org.postgresql.jdbc2.ResultSet.getTimestamp(Unknown Source)\n> at \n> org.jboss.pool.jdbc.ResultSetInPool.getTimestamp(ResultSetInPool.java:734)\n> \n> regards,\n> Nagarajan.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \nThomas O'Dowd. - Nooping - http://nooper.com\ntom@nooper.com - Testing - http://nooper.co.jp/labs\n",
"msg_date": "Sun, 27 Jan 2002 00:47:24 +0900",
"msg_from": "\"Thomas O'Dowd\" <tom@nooper.com>",
"msg_from_op": false,
"msg_subject": "Re: Bad Timestamp Format"
},
{
"msg_contents": "Hi Tom,\n\nI went to ResultSet.java and made all the formats into\ndf = new SimpleDateFormat(\"yyyy-MM-dd HH:mm\");\n\nso now it works. As I don't require the seconds and milliseconds part,\nit is ok now.\n\nmy program inserts the date values in the \"yyyy-MM-dd HH:mm\" format.\nPerhaps that is causing the problem. However I am not changing the\ndefault format either in the driver or in the database.\n\nThanks,\nNagarajan.\n\n\n-----Original Message-----\nFrom: pgsql-jdbc-owner@postgresql.org\n[mailto:pgsql-jdbc-owner@postgresql.org]On Behalf Of Thomas O'Dowd\nSent: Saturday, January 26, 2002 4:47 PM\nTo: Gunaseelan Nagarajan\nCc: pgsql-jdbc@postgresql.org\nSubject: Re: [JDBC] Bad Timestamp Format\n\n\nHi Nagarajan,\n\nWhat driver version are you using? The latest drivers at least\nshould switch the database to return ISO datestyle timestamps\nwhen it first gets a connection. The current driver code expects\nto parse the ISO format which is why you are getting this exception\nbelow. Are you changing the datestyle in your program? I'm unfamilar\nwith how to get postgres to return a timestamp in the short format without\nthe seconds which you have below. The ISO format usually returns...\n1970-01-01 00:00:00 if without time zone is specified. Can you perhaps\nprovide sample code that reproduces the problem using the latest driver?\n\nYou can download the latest drivers here...\n\n\thttp://jdbc.postgresql.org/download.html\n\nCheers,\n\nTom.\n\nOn Sat, Jan 26, 2002 at 02:33:15PM +0100, Gunaseelan Nagarajan wrote:\n> hi,\n> I get the following error while retrieving a datetime column using\n> getTimestamp.\n>\n> Bad Timestamp Format at 12 in 1970-1-1 0:0;\n>\n>\n> Bad Timestamp Format at 12 in 1970-1-1 0:0\n> at org.postgresql.jdbc2.ResultSet.getTimestamp(Unknown Source)\n> at org.postgresql.jdbc2.ResultSet.getTimestamp(Unknown Source)\n> at\n> org.jboss.pool.jdbc.ResultSetInPool.getTimestamp(ResultSetInPool.java:734)\n>\n> regards,\n> Nagarajan.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n--\nThomas O'Dowd. - Nooping - http://nooper.com\ntom@nooper.com - Testing - http://nooper.co.jp/labs\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n",
"msg_date": "Sat, 26 Jan 2002 17:14:31 +0100",
"msg_from": "\"G.Nagarajan\" <gnagarajan@dkf.de>",
"msg_from_op": false,
"msg_subject": "Re: Bad Timestamp Format"
},
{
"msg_contents": "Hi Nagarajan,\n\nOn Sat, Jan 26, 2002 at 05:14:31PM +0100, G.Nagarajan wrote:\n> Hi Tom,\n> \n> I went to ResultSet.java and made all the formats into\n> df = new SimpleDateFormat(\"yyyy-MM-dd HH:mm\");\n> \n> so now it works. As I don't require the seconds and milliseconds part,\n> it is ok now.\n\nHmmmmmm. Not sure that this is the correct solution for your particular\nproblem (ie. altering the driver) although it may work for you now. It\nwould be better to find out the core of the problem and fix that.\n\n> my program inserts the date values in the \"yyyy-MM-dd HH:mm\" format.\n> Perhaps that is causing the problem. However I am not changing the\n> default format either in the driver or in the database.\n\nWhat database version are you using? What driver version are you using?\nI can't figure out how to get 7.2 beta to return a shorter timestamp\nthen 1970-01-01 00:00:00. even if I insert a short timestamp. I presume\nyou are using older code? As I said before, the driver should force\nthe datestyle to ISO when it first gets a connection.\n\nselect '1970-1-1 0:0'::timestamp without time zone, '1970-1-1 0:0'::timestamp;\n timestamp | timestamptz \n---------------------+------------------------\n 1970-01-01 00:00:00 | 1970-01-01 00:00:00+09\n(1 row)\n\nIf I can figure out how you are doing it and if the its a problem in the\nlatest code, then we can fix it.\n\nTom.\n\n> -----Original Message-----\n> From: pgsql-jdbc-owner@postgresql.org\n> [mailto:pgsql-jdbc-owner@postgresql.org]On Behalf Of Thomas O'Dowd\n> Sent: Saturday, January 26, 2002 4:47 PM\n> To: Gunaseelan Nagarajan\n> Cc: pgsql-jdbc@postgresql.org\n> Subject: Re: [JDBC] Bad Timestamp Format\n> \n> \n> Hi Nagarajan,\n> \n> What driver version are you using? The latest drivers at least\n> should switch the database to return ISO datestyle timestamps\n> when it first gets a connection. The current driver code expects\n> to parse the ISO format which is why you are getting this exception\n> below. Are you changing the datestyle in your program? I'm unfamilar\n> with how to get postgres to return a timestamp in the short format without\n> the seconds which you have below. The ISO format usually returns...\n> 1970-01-01 00:00:00 if without time zone is specified. Can you perhaps\n> provide sample code that reproduces the problem using the latest driver?\n> \n> You can download the latest drivers here...\n> \n> \thttp://jdbc.postgresql.org/download.html\n> \n> Cheers,\n> \n> Tom.\n> \n> On Sat, Jan 26, 2002 at 02:33:15PM +0100, Gunaseelan Nagarajan wrote:\n> > hi,\n> > I get the following error while retrieving a datetime column using\n> > getTimestamp.\n> >\n> > Bad Timestamp Format at 12 in 1970-1-1 0:0;\n> >\n> >\n> > Bad Timestamp Format at 12 in 1970-1-1 0:0\n> > at org.postgresql.jdbc2.ResultSet.getTimestamp(Unknown Source)\n> > at org.postgresql.jdbc2.ResultSet.getTimestamp(Unknown Source)\n> > at\n> > org.jboss.pool.jdbc.ResultSetInPool.getTimestamp(ResultSetInPool.java:734)\n> >\n> > regards,\n> > Nagarajan.\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> --\n> Thomas O'Dowd. - Nooping - http://nooper.com\n> tom@nooper.com - Testing - http://nooper.co.jp/labs\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \nThomas O'Dowd. - Nooping - http://nooper.com\ntom@nooper.com - Testing - http://nooper.co.jp/labs\n",
"msg_date": "Sun, 27 Jan 2002 14:50:13 +0900",
"msg_from": "\"Thomas O'Dowd\" <tom@nooper.com>",
"msg_from_op": false,
"msg_subject": "Re: Bad Timestamp Format"
},
{
"msg_contents": "Hi Tom,\n\nI am using PostgreSQL 7.1.3. It was built from the source file\nby using ./configure --with-java etc. I used the jdbc source that comes\nalong with the source and did not download it separately.\n\nHere is my java environment.\n\tJDK 1.3.10\n\tJBoss 2.4\n\nI use the following code for inserting the date value\n\n\tpublic String giveInsert(Calendar cal)\n\t\tthrows Exception{\n\t String colValue = \"\";\n\t colValue = \"'\" + cal.get(Calendar.YEAR) + \"-\"\n\t\t + ( cal.get( Calendar.MONTH ) + 1 ) + \"-\"\n\t\t + cal.get(Calendar.DAY_OF_MONTH) + \" \"\n\t\t + cal.get( Calendar.HOUR_OF_DAY ) + \":\"\n\t\t + cal.get( Calendar.MINUTE ) + \"'\";\n\n\t\treturn colValue;\n\t}\n\nFor retrieving from the database, I use the following code\n\n\tpublic Calendar takeFromResultSet(ResultSet rs, String columnName)\n\t\tthrows SQLException, Exception{\n \tTimestamp ts = rs.getTimestamp(columnName);\n\tCalendar c = Calendar.getInstance();\n\tif(ts != null)\n\t\t\tc.set(ts.getYear()+1900, ts.getMonth(), ts.getDate(), \nts.getHours(),ts.getMinutes());\n\t\telse\n\t\t\tc = null;\n\t\treturn c;\n\t}\n\nThe database connection comes through the JBoss connection pool handler. \nMaybe that is not setting the required database property?\n\nhere is the configuration settings in jboss.jcml\n\n<mbean code=\"org.jboss.jdbc.XADataSourceLoader\" \nname=\"DefaultDomain:service=XADataSource,name=postgresqlDS\">\n\t<attribute \nname=\"DataSourceClass\">org.jboss.pool.jdbc.xa.wrapper.XADataSourceImpl</attribute>\n\t<attribute name=\"PoolName\">postgresqlDS</attribute>\n\t<attribute name=\"URL\">jdbc:postgresql:db</attribute>\n\t<attribute name=\"JDBCUser\">root</attribute>\n\t<attribute name=\"Password\">japan</attribute>\n</mbean>\n\n\nTo get the database connection, I use the following code\n\n\t\tcat.debug(\"Using jboss pool to get Connection\");\n\t\tContext ctx = new InitialContext();\n\t\tjavax.sql.DataSource ds =(javax.sql.DataSource)ctx.lookup( jndiName );\n\t\tcon = ds.getConnection(); \n\nThanks and Regards,\nNagarajan.\n\t\nOn Saturday 26 January 2002 14:33, you wrote:\n> hi,\n> I get the following error while retrieving a datetime column using\n> getTimestamp.\n>\n> Bad Timestamp Format at 12 in 1970-1-1 0:0;\n>\n>\n> Bad Timestamp Format at 12 in 1970-1-1 0:0\n> at org.postgresql.jdbc2.ResultSet.getTimestamp(Unknown Source)\n> at org.postgresql.jdbc2.ResultSet.getTimestamp(Unknown Source)\n> at\n> org.jboss.pool.jdbc.ResultSetInPool.getTimestamp(ResultSetInPool.java:734)\n>\n> regards,\n> Nagarajan.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n",
"msg_date": "Sun, 27 Jan 2002 15:22:58 +0100",
"msg_from": "Gunaseelan Nagarajan <gnagarajan@dkf.de>",
"msg_from_op": false,
"msg_subject": "Re: Bad Timestamp Format"
},
{
"msg_contents": "Hi Nagarajan,\n\nI'm not familiar with jboss, but I just tried messing with an old\n7.1.2 database that I had lying around and can't get it to return\na short format of a timestamp like \"1970-1-1 0:0\". The driver\ncode in 7.1.3 will cause an exception for this short form as you\nare getting and the 7.2 driver will just parse the datepart but\nnot the hour/minute part. \n\nBefore changing the new driver code to handle this format, I'd\nlike to know how you are generating it, so that a) I know it makes\nsense to support it and b) that I can test it. Can anyone on the\nlist let me know how to get the backend to return this short format?\n\nThe only way I can think about doing it is using the to_char() \nfunctions which you would have to be explicitly doing??? ie, something\nlike:\n\nselect now(), to_char(now(), 'YYYY-FMMM-FMDD FMHH24:FMMI');\n now | to_char \n------------------------+-----------------\n 2002-01-28 10:26:09+09 | 2002-1-28 10:26\n(1 row)\n\nIf your select code or jboss is somehow using to_char() to alter the\ndefault timestamp string, its doing something unusual which the driver\ndoesn't support.\n\nCan anyone add anymore reasons why the backend would be returning \na short timestamp string to the driver, such as specific datestyle\noptions, a specific locale or other?\n\nTom.\n\nOn Sun, Jan 27, 2002 at 03:22:58PM +0100, Gunaseelan Nagarajan wrote:\n> Hi Tom,\n> \n> I am using PostgreSQL 7.1.3. It was built from the source file\n> by using ./configure --with-java etc. I used the jdbc source that comes\n> along with the source and did not download it seperately.\n> \n> Here is my java environment.\n> \tJDK 1.3.10\n> \tJBoss 2.4\n> \n> I use the following code for inserting the date value\n> \n> \tpublic String giveInsert(Calendar cal)\n> \t\tthrows Exception{\n> \t String colValue = \"\";\n> \t colValue = \"'\" + cal.get(Calendar.YEAR) + \"-\"\n> \t\t + ( cal.get( Calendar.MONTH ) + 1 ) + \"-\"\n> \t\t + cal.get(Calendar.DAY_OF_MONTH) + \" \"\n> \t\t + cal.get( Calendar.HOUR_OF_DAY ) + \":\"\n> \t\t + cal.get( Calendar.MINUTE ) + \"'\";\n> \n> \t\treturn colValue;\n> \t}\n> \n> For retreiving from the database, I use the following code\n> \n> \tpublic Calendar takeFromResultSet(ResultSet rs, String columnName)\n> \t\tthrows SQLException, Exception{\n> \tTimestamp ts = rs.getTimestamp(columnName);\n> \tCalendar c = Calendar.getInstance();\n> \tif(ts != null)\n> \t\t\tc.set(ts.getYear()+1900, ts.getMonth(), ts.getDate(), \n> ts.getHours(),ts.getMinutes());\n> \t\telse\n> \t\t\tc = null;\n> \t\treturn c;\n> \t}\n> \n> The database connection comes through the JBoss connection pool handler. \n> Maybe that is not setting the required database property?\n> \n> here is the configuration settings in jboss.jcml\n> \n> <mbean code=\"org.jboss.jdbc.XADataSourceLoader\" \n> name=\"DefaultDomain:service=XADataSource,name=postgresqlDS\">\n> \t<attribute \n> name=\"DataSourceClass\">org.jboss.pool.jdbc.xa.wrapper.XADataSourceImpl</attribute>\n> \t<attribute name=\"PoolName\">postgresqlDS</attribute>\n> \t<attribute name=\"URL\">jdbc:postgresql:db</attribute>\n> \t<attribute name=\"JDBCUser\">root</attribute>\n> \t<attribute name=\"Password\">japan</attribute>\n> </mbean>\n> \n> \n> To get the database connection, I use the following code\n> \n> \t\tcat.debug(\"Using jboss pool to get Connection\");\n> \t\tContext ctx = new InitialContext();\n> \t\tjavax.sql.DataSource ds =(javax.sql.DataSource)ctx.lookup( jndiName );\n> \t\tcon = ds.getConnection(); \n> \n> Thanks and Regards,\n> Nagarajan.\n> \t\n> On Saturday 26 January 2002 14:33, you wrote:\n> > hi,\n> > I get the following error while retrieving a datetime column using\n> > getTimestamp.\n> >\n> > Bad Timestamp Format at 12 in 1970-1-1 0:0;\n> >\n> >\n> > Bad Timestamp Format at 12 in 1970-1-1 0:0\n> > at org.postgresql.jdbc2.ResultSet.getTimestamp(Unknown Source)\n> > at org.postgresql.jdbc2.ResultSet.getTimestamp(Unknown Source)\n> > at\n> > org.jboss.pool.jdbc.ResultSetInPool.getTimestamp(ResultSetInPool.java:734)\n> >\n> > regards,\n> > Nagarajan.\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \nThomas O'Dowd. - Nooping - http://nooper.com\ntom@nooper.com - Testing - http://nooper.co.jp/labs\n",
"msg_date": "Mon, 28 Jan 2002 10:29:39 +0900",
"msg_from": "\"Thomas O'Dowd\" <tom@nooper.com>",
"msg_from_op": false,
"msg_subject": "Re: Bad Timestamp Format"
},
{
"msg_contents": "Hi Tom,\n\nI use a \"Select * from table\" and no to_char() functions. I will post\nthe same question to the jboss-mailing list and see if anyone there\nhas had the same problem. It could also be a feature of the JBoss\nresultset implementation.\n\nRegards,\nNagarajan.\n\n-----Original Message-----\nFrom: pgsql-jdbc-owner@postgresql.org\n[mailto:pgsql-jdbc-owner@postgresql.org]On Behalf Of Thomas O'Dowd\nSent: Monday, January 28, 2002 2:30 AM\nTo: Gunaseelan Nagarajan\nCc:\nSubject: Re: [JDBC] Bad Timestamp Format\n\n\nHi Nagarajan,\n\nI'm not familiar with jboss, but I just tried messing with an old\n7.1.2 database that I had lying around and can't get it to return\na short format of a timestamp like \"1970-1-1 0:0\". The driver\ncode in 7.1.3 will cause an exception for this short form as you\nare getting and the 7.2 driver will just parse the datepart but\nnot the hour/minute part.\n\nBefore changing the new driver code to handle this format, I'd\nlike to know how you are generating it, so that a) I know it makes\nsense to support it and b) that I can test it. Can anyone on the\nlist let me know how to get the backend to return this short format?\n\nThe only way I can think about doing it is using the to_char()\nfunctions which you would have to be explicitly doing??? ie, something\nlike:\n\nselect now(), to_char(now(), 'YYYY-FMMM-FMDD FMHH24:FMMI');\n now | to_char\n------------------------+-----------------\n 2002-01-28 10:26:09+09 | 2002-1-28 10:26\n(1 row)\n\nIf your select code or jboss is somehow using to_char() to alter the\ndefault timestamp string, its doing something unusual which the driver\ndoesn't support.\n\nCan anyone add anymore reasons why the backend would be returning\na short timestamp string to the driver, such as specific datestyle\noptions, a specific locale or other?\n\nTom.\n\nOn Sun, Jan 27, 2002 at 03:22:58PM +0100, Gunaseelan Nagarajan wrote:\n> Hi Tom,\n>\n> I am using PostgreSQL 7.1.3. It was built from the source file\n> by using ./configure --with-java etc. I used the jdbc source that comes\n> along with the source and did not download it seperately.\n>\n> Here is my java environment.\n> \tJDK 1.3.10\n> \tJBoss 2.4\n>\n> I use the following code for inserting the date value\n>\n> \tpublic String giveInsert(Calendar cal)\n> \t\tthrows Exception{\n> \t String colValue = \"\";\n> \t colValue = \"'\" + cal.get(Calendar.YEAR) + \"-\"\n> \t\t + ( cal.get( Calendar.MONTH ) + 1 ) + \"-\"\n> \t\t + cal.get(Calendar.DAY_OF_MONTH) + \" \"\n> \t\t + cal.get( Calendar.HOUR_OF_DAY ) + \":\"\n> \t\t + cal.get( Calendar.MINUTE ) + \"'\";\n>\n> \t\treturn colValue;\n> \t}\n>\n> For retreiving from the database, I use the following code\n>\n> \tpublic Calendar takeFromResultSet(ResultSet rs, String columnName)\n> \t\tthrows SQLException, Exception{\n> \tTimestamp ts = rs.getTimestamp(columnName);\n> \tCalendar c = Calendar.getInstance();\n> \tif(ts != null)\n> \t\t\tc.set(ts.getYear()+1900, ts.getMonth(), ts.getDate(),\n> ts.getHours(),ts.getMinutes());\n> \t\telse\n> \t\t\tc = null;\n> \t\treturn c;\n> \t}\n>\n> The database connection comes through the JBoss connection pool handler.\n> Maybe that is not setting the required database property?\n>\n> here is the configuration settings in jboss.jcml\n>\n> <mbean code=\"org.jboss.jdbc.XADataSourceLoader\"\n> name=\"DefaultDomain:service=XADataSource,name=postgresqlDS\">\n> \t<attribute\n>\nname=\"DataSourceClass\">org.jboss.pool.jdbc.xa.wrapper.XADataSourceImpl</attr\nibute>\n> \t<attribute name=\"PoolName\">postgresqlDS</attribute>\n> \t<attribute name=\"URL\">jdbc:postgresql:db</attribute>\n> \t<attribute name=\"JDBCUser\">root</attribute>\n> \t<attribute name=\"Password\">japan</attribute>\n> </mbean>\n>\n>\n> To get the database connection, I use the following code\n>\n> \t\tcat.debug(\"Using jboss pool to get Connection\");\n> \t\tContext ctx = new InitialContext();\n> \t\tjavax.sql.DataSource ds =(javax.sql.DataSource)ctx.lookup( jndiName );\n> \t\tcon = ds.getConnection();\n>\n> Thanks and Regards,\n> Nagarajan.\n>\n> On Saturday 26 January 2002 14:33, you wrote:\n> > hi,\n> > I get the following error while retrieving a datetime column using\n> > getTimestamp.\n> >\n> > Bad Timestamp Format at 12 in 1970-1-1 0:0;\n> >\n> >\n> > Bad Timestamp Format at 12 in 1970-1-1 0:0\n> > at org.postgresql.jdbc2.ResultSet.getTimestamp(Unknown Source)\n> > at org.postgresql.jdbc2.ResultSet.getTimestamp(Unknown Source)\n> > at\n> >\norg.jboss.pool.jdbc.ResultSetInPool.getTimestamp(ResultSetInPool.java:734)\n> >\n> > regards,\n> > Nagarajan.\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n--\nThomas O'Dowd. - Nooping - http://nooper.com\ntom@nooper.com - Testing - http://nooper.co.jp/labs\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\nsubscribe-nomail command to majordomo@postgresql.org so that your\nmessage can get through to the mailing list cleanly\n\n",
"msg_date": "Mon, 28 Jan 2002 09:33:10 +0100",
"msg_from": "\"G.Nagarajan\" <gnagarajan@dkf.de>",
"msg_from_op": false,
"msg_subject": "Re: Bad Timestamp Format"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nThomas O'Dowd wrote:\n> Hi Matsuda-san,\n> \n> You beat me to it :) The following is a similar patch to the same code.\n> I've tested it with your GetTimestampTest.java code and it looks good\n> from what I can see. I'm attaching both jdbc1 and jdbc2 patches. This\n> patch changes a bit less in the code and basically adds a check to the\n> fraction loop for the end of string, as well as a check for a tz before\n> adding the GMT bit.\n> \n> Tom.\n> \n> On Thu, Jan 17, 2002 at 08:00:01PM +0900, Ryouichi Matsuda wrote:\n> > Barry Lind wrote:\n> > > Then however I did try your last query:\n> > > \n> > > select 'now'::timestamp without time zone;\n> > > \n> > > and this does fail for me with the string index out of bounds exception. \n> > \n> > An attached patch corrects problem of this bug and fractional second.\n> > \n> > \n> > The handling of time zone was as follows:\n> > \n> > (a) with time zone\n> > using SimpleDateFormat(\"yyyy-MM-dd HH:mm:ss z\")\n> > (b) without time zone\n> > using SimpleDateFormat(\"yyyy-MM-dd HH:mm:ss\")\n> > \n> > \n> > About problem of fractional second,\n> > Fractional second was changed from milli-second to nano-second.\n> \n> -- \n> Thomas O'Dowd. - Nooping - http://nooper.com\n> tom@nooper.com - Testing - http://nooper.co.jp/labs\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Feb 2002 21:09:50 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Problem in ResultSet#getTimestamp() of 7.2b4"
},
{
"msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nThomas O'Dowd wrote:\n> Hi Matsuda-san,\n> \n> You beat me to it :) The following is a similar patch to the same code.\n> I've tested it with your GetTimestampTest.java code and it looks good\n> from what I can see. I'm attaching both jdbc1 and jdbc2 patches. This\n> patch changes a bit less in the code and basically adds a check to the\n> fraction loop for the end of string, as well as a check for a tz before\n> adding the GMT bit.\n> \n> Tom.\n> \n> On Thu, Jan 17, 2002 at 08:00:01PM +0900, Ryouichi Matsuda wrote:\n> > Barry Lind wrote:\n> > > Then however I did try your last query:\n> > > \n> > > select 'now'::timestamp without time zone;\n> > > \n> > > and this does fail for me with the string index out of bounds exception. \n> > \n> > An attached patch corrects problem of this bug and fractional second.\n> > \n> > \n> > The handling of time zone was as follows:\n> > \n> > (a) with time zone\n> > using SimpleDateFormat(\"yyyy-MM-dd HH:mm:ss z\")\n> > (b) without time zone\n> > using SimpleDateFormat(\"yyyy-MM-dd HH:mm:ss\")\n> > \n> > \n> > About problem of fractional second,\n> > Fractional second was changed from milli-second to nano-second.\n> \n> -- \n> Thomas O'Dowd. - Nooping - http://nooper.com\n> tom@nooper.com - Testing - http://nooper.co.jp/labs\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 4 Mar 2002 23:11:03 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Problem in ResultSet#getTimestamp() of 7.2b4"
}
] |
[
{
"msg_contents": "Lamar Owen writes:\n\n> Why isn't all of it in postgresql.conf?\n\nBecause the postgresql.conf and pg_hba.conf files describe different data\nstructures that are evaluated at different times in different ways for\ndifferent reasons.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 25 Dec 2001 21:33:44 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on the location of configuration files"
}
] |
[
{
"msg_contents": "Someone reported to me that they can't get their queries to use indexes.\nIt turns out this is because timestamp() has pg_proc.proiscachable set\nto false in many cases. Date() also has this in some cases.\n\nI realized timestamp() can be called with 'CURRENT_TIMESTAMP', which of\ncourse is not cachable, but when called with a real date, it seems it\nwould be cachable. However, I seem to remember that the timezone\nsetting can affect the output, and therefore it isn't cachable, or\nsomething like that.\n\nWhile the actual conversion call is very minor, there is code in\nbackend/optimizer/utils/clauses::simplify_op_or_func() that has:\n\n if (!proiscachable)\n return NULL;\n\nThis prevents index usage for non-cachable functions, as shown below. \n\nThe first does only a date() conversion, the second adds an\ninterval, which results in a timestamp() conversion. Notice this uses a\nsequential scan. The final one avoids timestamp by just adding '1' to\nthe date value:\n\n \ttest=> EXPLAIN SELECT * FROM test WHERE x = DATE('2001-01-01');\n \tNOTICE: QUERY PLAN:\n \t\n \tIndex Scan USING i_test ON test (cost=0.00..3.01 ROWS=1 width=208)\n \t\n \tEXPLAIN\n \ttest=> EXPLAIN SELECT * FROM test WHERE x = DATE('2001-01-01') +\n \tINTERVAL '1 DAY';\n \tNOTICE: QUERY PLAN:\n \t\n \tSeq Scan ON test (cost=0.00..26.00 ROWS=5 width=208)\n \t\n \tEXPLAIN\n \ttest=> EXPLAIN SELECT * FROM test WHERE x = DATE('2001-01-01') + 1;\n \tNOTICE: QUERY PLAN:\n \t\n \tIndex Scan USING i_test ON test (cost=0.00..3.01 ROWS=1 width=208)\n \t\n \tEXPLAIN\n \nCan someone explain the rationale behind which timestamp/date calls are\ncachable and which are not, and whether the cachability really relates\nto index usage or is this just a problem with our having only one\ncachable setting for each function? 
I would like to understand this so\nI can formulate a TODO item to document it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 26 Dec 2001 00:36:55 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Timestamp conversion can't use index"
}
] |
[
{
"msg_contents": "> Someone reported to me that they can't get their queries to use indexes.\n> It turns out this is because timestamp() has pg_proc.proiscachable set\n> to false in many cases. Date() also has this in some cases.\n\nPlease let me add a reference to this email from Tom Lane:\n\n\thttp://fts.postgresql.org/db/mw/msg.html?mid=1041918\n\nIt specifically states:\n\t\n\t[More complete] reasonable [cachable] definitions would be:\n\t\n\t1. noncachable: must be called every time; not guaranteed to return same\n\tresult for same parameters even within a query. random(), timeofday(),\n\tnextval() are examples.\n\t\n\t2. fully cachable: function guarantees same result for same parameters\n\tno matter when invoked. This setting allows a call with constant\n\tparameters to be constant-folded on sight.\n\t\n\t3. query cachable: function guarantees same result for same parameters\n\twithin a single query, or more precisely within a single\n\tCommandCounterIncrement interval. This corresponds to the actual\n\tbehavior of functions that execute SELECTs, and it's sufficiently strong\n\tto allow the function result to be used in an indexscan, which is what\n\twe really care about.\n\nItem #2 clearly mentions constant folding, I assume by the optimizer. \nWhat has me confused is why constant folding is needed to perform index\nlookups. Can't the executor call the function and then do the index\nlookup? Is this just a failing in our executor? Is there a reason\n#1-type noncachable functions can't use indexes? Is the timezone\nrelated here?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 26 Dec 2001 00:47:33 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Timestamp conversion can't use index"
},
{
"msg_contents": "> > Someone reported to me that they can't get their queries to use indexes.\n> > It turns out this is because timestamp() has pg_proc.proiscachable set\n> > to false in many cases. Date() also has this in some cases.\n> Please let me add a reference to this email from Tom Lane:\n\nThe functions marked as non-cachable are those that are converting from\ndata types (such as text) for which the input may need to be evaluated\nfor (at least) that transaction.\n\nWhat kind of queries against constants are they doing that can't use\nSQL-standard syntax to avoid a conversion from another data type?\n\ntimestamp('stringy time')\n\nmay not be good, but I would think that\n\ntimestamp 'timey time'\n\nshould let the optimizer use indices just fine. It *could* do some more\nconstant folding if we had a distinction between functions with\nindeterminate side effects (e.g. random()) as opposed to those that just\nneed to be evaluated once per transaction (say, date/time conversion\nfunctions needing the time zone evaluated).\n\n - Thomas\n",
"msg_date": "Wed, 26 Dec 2001 06:30:03 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp conversion can't use index"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> What has me confused is why constant folding is needed to perform index\n> lookups.\n\nYou are confused because those aren't related.\n\nThe entire notion of an indexscan is predicated on the assumption that\nyou are comparing all elements of the index to the same comparison\nvalue. Thus for example \"x = random()\" is not indexable. To use an\nindexscan the query planner must be able to determine that the right\nhand side will not change over the course of the scan.\n\nConstant-folding requires a stronger assumption: that the result the\nfunction gives when evaluated by the query planner will be the same\nresult we'd get later (perhaps much later) at execution time.\n\nSince we only have one kind of noncachable function at the moment,\nthese two restrictions are conflated ... but there should be more than\none kind of noncachable function.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 26 Dec 2001 10:49:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp conversion can't use index "
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> timestamp('stringy time')\n> may not be good, but I would think that\n> timestamp 'timey time'\n> should let the optimizer use indices just fine.\n\nYup. Possibly this should be noted in the FAQ?\n\nActually,\n\ttimestamp('stringy time')\ndoesn't work at all anymore in 7.2, unless you doublequote the name:\n\nregression=# select timestamp('now');\nERROR: parser: parse error at or near \"'\"\nregression=# select \"timestamp\"('now');\n timestamp\n----------------------------\n 2001-12-26 12:18:07.008337\n(1 row)\n\nAnother interesting factoid is that \"timestamp\"('now') does indeed\nproduce a constant in 7.2, not a runtime evaluation of text_timestamp.\ntext_timestamp is still considered noncachable, but the expression is\nconsidered to represent timestamp 'now' and not a call of text_timestamp,\npresumably because of this change:\n\n2001-10-04 18:06 tgl\n\n\t* doc/src/sgml/typeconv.sgml, src/backend/commands/indexcmds.c,\n\tsrc/backend/parser/parse_func.c, src/include/parser/parse_func.h:\n\tConsider interpreting a function call as a trivial\n\t(binary-compatible) type coercion after failing to find an exact\n\tmatch in pg_proc, but before considering interpretations that\n\tinvolve a function call with one or more argument type coercions. \n\tThis avoids surprises wherein what looks like a type coercion is\n\tinterpreted as coercing to some third type and then to the\n\tdestination type, as in Dave Blasby's bug report of 3-Oct-01. See\n\tsubsequent discussion in pghackers.\n\nSo there's more here than meets the eye, but the syntax change from\n7.1 to 7.2 is definitely going to warrant a FAQ entry, IMHO.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 26 Dec 2001 12:35:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp conversion can't use index "
},
{
"msg_contents": "> > > Someone reported to me that they can't get their queries to use indexes.\n> > > It turns out this is because timestamp() has pg_proc.proiscachable set\n> > > to false in many cases. Date() also has this in some cases.\n> > Please let me add a reference to this email from Tom Lane:\n> \n> The functions marked as non-cachable are those that are converting from\n> data types (such as text for which the input may need to be evaluated\n> for (at least) that transaction.\n> \n> What kind of queries against constants are they doing that can't use\n> SQL-standard syntax to avoid a conversion from another data type?\n\n\nThey are trying to add one day to a date field:\n\n test=> EXPLAIN SELECT * FROM test WHERE x = DATE('2001-01-01');\n NOTICE: QUERY PLAN:\n \n Index Scan USING i_test ON test (cost=0.00..3.01 ROWS=1 width=208)\n \n EXPLAIN\n test=> EXPLAIN SELECT * FROM test WHERE x = DATE('2001-01-01') +\n INTERVAL '1 DAY';\n NOTICE: QUERY PLAN:\n \n Seq Scan ON test (cost=0.00..26.00 ROWS=5 width=208)\n\n ^^^^^^^^\n\n EXPLAIN\n test=> EXPLAIN SELECT * FROM test WHERE x = DATE('2001-01-01') + 1;\n NOTICE: QUERY PLAN:\n \n Index Scan USING i_test ON test (cost=0.00..3.01 ROWS=1 width=208)\n \n EXPLAIN\n\n\nSeems it is an operator that returns a timestamp.\n\n> \n> timestamp('stringy time')\n> \n> may not be good, but I would think that\n> \n> timestamp 'timey time'\n> \n> should let the optimizer use indices just fine. It *could* do some more\n> constant folding if we had a distinction between functions with\n> indeterminate side effects (e.g. random()) as opposed to those who just\n> need to be evaluated once per transaction (say, date/time conversion\n> functions needing the time zone evaluated).\n\nI have added this to the TODO list.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 26 Dec 2001 18:53:49 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Timestamp conversion can't use index"
},
{
"msg_contents": "> Thomas Lockhart <lockhart@fourpalms.org> writes:\n> > timestamp('stringy time')\n> > may not be good, but I would think that\n> > timestamp 'timey time'\n> > should let the optimizer use indices just fine.\n> \n> Yup. Possibly this should be noted in the FAQ?\n> \n> Actually,\n> \ttimestamp('stringy time')\n> doesn't work at all anymore in 7.2, unless you doublequote the name:\n> \n> regression=# select timestamp('now');\n> ERROR: parser: parse error at or near \"'\"\n> regression=# select \"timestamp\"('now');\n> timestamp\n> ----------------------------\n> 2001-12-26 12:18:07.008337\n> (1 row)\n\nI have updated HISTORY and release.sgml Migration sections:\n\n * The timestamp() function is no longer available. Use timestamp\n \"string\" instead, or CAST. \n \n> \n> Another interesting factoid is that \"timestamp\"('now') does indeed\n> produce a constant in 7.2, not a runtime evaluation of text_timestamp.\n> text_timestamp is still considered noncachable, but the expression is\n> considered to represent timestamp 'now' and not a call of text_timestamp,\n> presumably because of this change:\n> \n> 2001-10-04 18:06 tgl\n> \n> \t* doc/src/sgml/typeconv.sgml, src/backend/commands/indexcmds.c,\n> \tsrc/backend/parser/parse_func.c, src/include/parser/parse_func.h:\n> \tConsider interpreting a function call as a trivial\n> \t(binary-compatible) type coercion after failing to find an exact\n> \tmatch in pg_proc, but before considering interpretations that\n> \tinvolve a function call with one or more argument type coercions. \n> \tThis avoids surprises wherein what looks like a type coercion is\n> \tinterpreted as coercing to some third type and then to the\n> \tdestination type, as in Dave Blasby's bug report of 3-Oct-01. See\n> \tsubsequent discussion in pghackers.\n> \n> So there's more here than meets the eye, but the syntax change from\n> 7.1 to 7.2 is definitely going to warrant a FAQ entry, IMHO.\n\nAdded to same files:\n\n\tdatatype(const,...) 
function calls now evaluated earlier \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 26 Dec 2001 19:04:03 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Timestamp conversion can't use index"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Added to same files:\n> \tdatatype(const,...) function calls now evaluated earlier \n\nThis is quite wrong, since (a) the change only applies to single-\nargument function calls (so, no \"...\"), (b) the call is not\nevaluated \"earlier\", but \"differently\", and (c) it doesn't only\napply to constant arguments.\n\nNot sure that I can come up with a one-liner definition of this change,\nbut the above definitely doesn't do the job.\n\nWe already have\n\n Modify type coersion logic to attempt binary-compatible functions first (Tom)\n\nand I'm not sure there is a better one-liner for it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 26 Dec 2001 23:39:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp conversion can't use index "
},
{
"msg_contents": "\nOK, new text:\n\n\tSome datatype() function calls now evaluated differently \n\nAt least it is a warning.\n\n---------------------------------------------------------------------------\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Added to same files:\n> > \tdatatype(const,...) function calls now evaluated earlier \n> \n> This is quite wrong, since (a) the change only applies to single-\n> argument function calls (so, no \"...\"), (b) the call is not\n> evaluated \"earlier\", but \"differently\", and (c) it doesn't only\n> apply to constant arguments.\n> \n> Not sure that I can come up with a one-liner definition of this change,\n> but the above definitely doesn't do the job.\n> \n> We already have\n> \n> Modify type coersion logic to attempt binary-compatible functions first (Tom)\n> \n> and I'm not sure there is a better one-liner for it.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 26 Dec 2001 23:45:45 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Timestamp conversion can't use index"
}
] |
[
{
"msg_contents": "\nBased on Tom's comments and this email, I am adding this to the TODO\nlist:\n\n* Add new pg_proc cachable settings to specify whether function can be\n evaluated only once or once per query\n\n\n---------------------------------------------------------------------------\n\n> > Someone reported to me that they can't get their queries to use indexes.\n> > It turns out this is because timestamp() has pg_proc.proiscachable set\n> > to false in many cases. Date() also has this in some cases.\n> \n> Please let me add a reference to this email from Tom Lane:\n> \n> \thttp://fts.postgresql.org/db/mw/msg.html?mid=1041918\n> \n> It specifically states:\n> \t\n> \t[More complete] reasonable [cachable] definitions would be:\n> \t\n> \t1. noncachable: must be called every time; not guaranteed to return same\n> \tresult for same parameters even within a query. random(), timeofday(),\n> \tnextval() are examples.\n> \t\n> \t2. fully cachable: function guarantees same result for same parameters\n> \tno matter when invoked. This setting allows a call with constant\n> \tparameters to be constant-folded on sight.\n> \t\n> \t3. query cachable: function guarantees same result for same parameters\n> \twithin a single query, or more precisely within a single\n> \tCommandCounterIncrement interval. This corresponds to the actual\n> \tbehavior of functions that execute SELECTs, and it's sufficiently strong\n> \tto allow the function result to be used in an indexscan, which is what\n> \twe really care about.\n> \n> Item #2 clearly mentions constant folding, I assume by the optimizer. \n> What has me confused is why constant folding is needed to perform index\n> lookups. Can't the executor call the function and then do the index\n> lookup? Is this just a failing in our executor? Is there a reason\n> #1-type noncachable functions can't use indexes? 
Is the timezone\n> related here?\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 26 Dec 2001 18:41:21 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Timestamp conversion can't use index"
}
] |
[
{
"msg_contents": "A quote from backend's po:\n\n> #: ../access/nbtree/nbtinsert.c:552\n> #, c-format\n> msgid \"\"\n> \"_bt_getstackbuf: my bits moved right off the end of the world!\\n\"\n\nShould I feel free to translate this, Bruce? ;)\n\n--\nSerguei A. Mokhov\n \n\n",
"msg_date": "Wed, 26 Dec 2001 19:18:47 -0500",
"msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>",
"msg_from_op": true,
"msg_subject": "Smbd's bits moved somewhere far.."
},
{
"msg_contents": "> A quote from backend's po:\n> \n> > #: ../access/nbtree/nbtinsert.c:552\n> > #, c-format\n> > msgid \"\"\n> > \"_bt_getstackbuf: my bits moved right off the end of the world!\\n\"\n> \n> Should I feel free to translate this, Bruce? ;)\n\nWow, that is a good one. Maybe there is a typical local saying you\ncould use? :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 26 Dec 2001 19:32:42 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Smbd's bits moved somewhere far.."
}
] |
[
{
"msg_contents": "I have a /data directory from someone running 7.1.3 and they are seeing\ndata corruption on a table using TOAST columns. Specifically, they are\nseeing this:\n\t\n\ttest=> SELECT woman FROM user_details WHERE uid = '00eezEoLyWJK';\n\tERROR: Relation 1 does not exist\n\n\ttest=> SELECT question_num FROM user_details WHERE uid = '00eezEoLyWJK';\n\tERROR: MemoryContextAlloc: invalid request size 2139062147\n\nThe other columns in the same row are fine, and the other rows in the\ntable are fine too. This is with RH Linux 6.2. They have not turned\noff fsync.\n\nThe backtrace for the first failure is:\n\t\n\t#0 elog (lev=-1, fmt=0x813a5de \"Relation %u does not exist\") at elog.c:119\n\t#1 0x806ef17 in heap_open (relationId=1, lockmode=1) at heapam.c:589\n\t#2 0x80736a5 in toast_fetch_datum (attr=0x2b5bd124) at tuptoaster.c:972\n\t#3 0x80720bc in heap_tuple_untoast_attr (attr=0x2b5bd124) at tuptoaster.c:127\n\t#4 0x812f556 in pg_detoast_datum (datum=0x2b5bd124) at fmgr.c:1434\n\t#5 0x80678a1 in printtup (tuple=0x2b5bd020, typeinfo=0x8335fb0, \n\t self=0x83498f0) at printtup.c:206\n\t#6 0x80b6609 in ExecRetrieve (slot=0x833d608, destfunc=0x83498f0, \n\t estate=0x833d338) at execMain.c:1187\n\t#7 0x80b6523 in ExecutePlan (estate=0x833d338, plan=0x833d2b0, \n\t operation=CMD_SELECT, numberTuples=0, direction=ForwardScanDirection, \n\t destfunc=0x83498f0) at execMain.c:1107\n\t#8 0x80b5b33 in ExecutorRun (queryDesc=0x8335f20, estate=0x833d338, \n\t feature=3, count=0) at execMain.c:233\n\t#9 0x80f6ce3 in ProcessQuery (parsetree=0x8328e38, plan=0x833d2b0, \n\t dest=Remote) at pquery.c:295\n\t#10 0x80f58eb in pg_exec_query_string (\n\t query_string=0x8328a68 \"select * from user_details where uid = '00eezEoLyWJK';\", dest=Remote, parse_context=0x8302b48) at postgres.c:810\n\nObviously, it is a TOAST-related problem. The error is happening while\nthe Datum is trying to be untoasted. 
If I look at the Datum for the\nfirst failure I see:\n\n\t(gdb) print *(struct varlena *)datum\n\t$1 = {vl_len = 134507004, vl_dat = \"c\"}\n\n\t(gdb) printf \"%x\\n\", (struct varlena *)datum.vl_len.vl_len\n\t80469fc\n\nAs you can see, the high bit 0x08 is set, meaning that data is external,\nand 0x04 is not set, meaning it is not compressed. However, when it\nattempts to find the TOAST value, it fails trying to open a relation\nwith oid equal to 1; obviously a problem.\n\nHere is the frame of 'attr' which has the improper relid:\n\n (gdb) print *attr\n $8 = {va_header = -1072362015, va_content = {va_compressed = {va_rawsize = 1, \n va_data = \"\"}, va_external = {va_rawsize = 1, va_extsize = 0, \n va_valueid = 47, va_toastrelid = 1, va_toastidxid = 0, va_rowid = 0, \n\n ^^^^^^^^^^^^^^^^^\n va_attno = 0}, va_data = \"\\001\"}}\n\nThis is the varattrib structure stored on disk for TOAST entries. \nObviously something is wrong there. But how, and does this problem\nstill exist in current sources? 
Was it already fixed?\n\nThe MemoryContextAlloc error backtrace is:\n\t\n\t#0 elog (lev=-1, fmt=0x81bf278 \"MemoryContextAlloc: invalid request size %lu\")\n\t at elog.c:120\n\t#1 0x81682ae in MemoryContextAlloc (context=0x8294800, size=2139062147)\n\t at mcxt.c:418\n\t#2 0x8078ca5 in heap_tuple_untoast_attr (attr=0x831f54c) at tuptoaster.c:151\n\t#3 0x8162888 in pg_detoast_datum (datum=0x831f54c) at fmgr.c:1434\n\t#4 0x8066dac in printtup (tuple=0x831f514, typeinfo=0x831f144, self=0x831f4cc)\n\t at printtup.c:206\n\t#5 0x80d1088 in ExecRetrieve (slot=0x831ee80, destfunc=0x831f4cc, \n\t estate=0x831e7c0) at execMain.c:1187\n\t#6 0x80d0fd7 in ExecutePlan (estate=0x831e7c0, plan=0x831e734, \n\t operation=CMD_SELECT, numberTuples=0, direction=ForwardScanDirection, \n\t destfunc=0x831f4cc) at execMain.c:1107\n\t#7 0x80d02d6 in ExecutorRun (queryDesc=0x831ee1c, estate=0x831e7c0, \n\t feature=3, count=0) at execMain.c:233\n\t#8 0x81221cb in ProcessQuery (parsetree=0x831a45c, plan=0x831e734, \n\t dest=Remote) at pquery.c:295\n\t#9 0x8120aa1 in pg_exec_query_string (\n\t query_string=0x831a038 \"select question_num from user_details where uid = '00eezEoLyWJK';\", dest=Remote, parse_context=0x82946e8) at postgres.c:810\n\t#10 0x8121c38 in PostgresMain (argc=5, argv=0x8046f1c, real_argc=6, \n\t real_argv=0x804786c, username=0x8259661 \"postgres\") at postgres.c:1908\n\t#11 0x81078b0 in DoBackend (port=0x8259400) at postmaster.c:2114\n\t#12 0x81073f6 in BackendStartup (port=0x8259400) at postmaster.c:1897\n\t#13 0x8106579 in ServerLoop () at postmaster.c:995\n\t#14 0x8105f59 in PostmasterMain (argc=6, argv=0x804786c) at postmaster.c:685\n\t#15 0x80e2ed2 in main (argc=6, argv=0x804786c) at main.c:171\n\t#16 0x8064c4e in __start ()\n\nFrame 3 reports datum as:\n\n\t(gdb) print *(struct varlena *)datum\n\t$5 = {vl_len = 2139062142, vl_dat = \"\\177\"}\n\t(gdb) printf \"%x\\n\", (struct varlena *)datum.vl_len\n\t7f7f7f7e\n\nThat 0x7f7f7f7e looks quite strange.\n\nFrame 2 reports 
attr, which is the same as datum:\n\n (gdb) print *attr\n $25 = {va_header = 2139062142, va_content = {va_compressed = {\n va_rawsize = 2139062143, va_data = \"�\"}, va_external = {\n va_rawsize = 2139062143, va_extsize = 137491692, va_valueid = 16, \n va_toastrelid = 0, va_toastidxid = 2139062143, va_rowid = 2139062143, \n va_attno = 32639}, va_data = \"\\177\"}}\n\nObviously the length is huge and PostgreSQL fails on the memory\nallocation.\n\nHere is someone reporting the same problem in August:\n\n\thttp://fts.postgresql.org/db/mw/msg.html?mid=1029724\n\nVACUUM ANALYZE has the same failure as \"SELECT\" because it accesses\nevery value in the table. Unfortunately, I don't see an answer supplied\nto this bug report. This is Tom Lane's reply:\n\n\thttp://fts.postgresql.org/db/mw/msg.html?mid=1029778\n\nThe general problem appears to be that certain toast rows have improper\nlengths or settings. The problem appears only with the 'woman' and\n'question_num' columns in that row. The other rows are fine.\n\nSeems this may be an unknown problem. I did a search for TOAST and\nthe MemoryContextAlloc message:\n\n\thttp://fts.postgresql.org/db/mw/index.html?section=-1&word=toast+MemoryContextAlloc&action=Search&sdb_d=25&sdb_m=1&sdb_y=2001&sde_d=26&sde_m=12&sde_y=2001&weight=0&format=0&order=1\n\nComments?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 27 Dec 2001 01:14:50 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Problem with TOAST column corruption"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I have a /data directory from someone running 7.1.3 and they are seeing\n> data corruption on a table using TOAST columns.\n\nWhat is the complete schema of the table? Could we see a dump of the\nentire tuple not just the parts that the system thinks are toasted\nfields?\n\nNeither value looks plausible at all as a toasted datum, so I am\nwondering if there is corruption of earlier columns in the row,\nleading to an alignment problem (ie, length of some earlier varlena\nitem was misdetermined).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 27 Dec 2001 10:26:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Problem with TOAST column corruption "
}
] |
[
{
"msg_contents": "hello hackers\n\nhere is the patch for the INSERT INTO t VALUES (..., DEFAULT, ...) from the\ncurrent TODO list.\nIt's my first patch so please forgive me if it's not the best solution - I'm\nnow learning about the codebase and this implementation really haunted me\nthrough some deep and dark places...\nYour comments on how to do it right or what additionally needs to be done are\nwelcome, of course\n\nWell, here is how it works:\n\n1. only 3 files are related, all from the src/backend/parser\n2. my comments are all beginning with \"pavlo (pb@pbit.org)\"\n3. gram.y - here I added a rule for the DEFAULT-element in the target_list\nused for the INSERT-statement. It now replaces DEFAULT by an anti-thing like\n\"@default\" because I couldn't find out where it fails if I leave DEFAULT\nunchanged. If somebody knows a way to do it I'll drop this @default\n4. parse_coerce.c - coerce_type - here I faked the standard handling failing\nwhen it tries to convert the @default-string\n5. parse_target.c - updateTargetListEntry - here the @default-string is\nhandled. I just look up if there is a default value for the column and use\nit instead of the @default placeholder\n6. I tested it with psql\n7. I tested it under Sun Solaris8 SPARC and FreeBSD 4.0 Intel\n8. I used \"cvs diff > patch\" to diff changes\n\nPlease let me know if it's ok or if some changes must be done.\n\n\n\nrgds\nPavlo Baron",
"msg_date": "Thu, 27 Dec 2001 12:05:55 +0100",
"msg_from": "\"Pavlo Baron\" <pb@pbit.org>",
"msg_from_op": true,
"msg_subject": "patch: INSERT INTO t VALUES (a, b, ..., DEFAULT, ...)"
},
{
"msg_contents": "\"Pavlo Baron\" <pb@pbit.org> writes:\n> 3. gram.y - here I added a rule for the DEFAULT-element in the target_list\n> used for the INSERT-statement. It now replaces DEFAULT by an anti-thing like\n> \"@default\" because I couldn't find out were it fails if I leave DEFAULT\n> unchainged. If smb. knows a way to do it I'll drop this @default\n\nThis would break\n\tINSERT INTO foo(textcolumn) VALUES ('@default')\nwhich I find hardly acceptable.\n\nThe only way to do it without breaking valid data entries is to\nintroduce a new parse node type to represent a DEFAULT placeholder.\n\nI also wonder what's going to happen if I write DEFAULT in a SELECT's\ntargetlist, which is possible given where you made the grammar change.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 27 Dec 2001 10:11:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: patch: INSERT INTO t VALUES (a, b, ..., DEFAULT, ...) "
},
{
"msg_contents": "> \"Pavlo Baron\" <pb@pbit.org> writes:\n> > 3. gram.y - here I added a rule for the DEFAULT-element in the\ntarget_list\n> > used for the INSERT-statement. It now replaces DEFAULT by an anti-thing\nlike\n> > \"@default\" because I couldn't find out were it fails if I leave DEFAULT\n> > unchainged. If smb. knows a way to do it I'll drop this @default\n>\n\nTom Lane writes:\n> This would break\n> INSERT INTO foo(textcolumn) VALUES ('@default')\n> which I find hardly acceptable.\n>\n> The only way to do it without breaking valid data entries is to\n> introduce a new parse node type to represent a DEFAULT placeholder.\n\nI know what you mean and I hope to know where to do it. I thought, there\ncould be some similar cases handled a similar way. I'll try to implement\nproviding a new parse node type to represent a DEFAULT placeholder.\n\nTom Lane writes:\n>\n> I also wonder what's going to happen if I write DEFAULT in a SELECT's\n> targetlist, which is possible given where you made the grammar change.\n\nI also thought about it, but maybe I tested a wrong statement. Could you\ngive me an example on what you mean would possibly appear funny? a\nselect-statement...\nMaybe I don't understand what the targetlist means in the case of the select\nstatement. I tried smth. like \"select f1, f2 from tab1;\". I think, \"f1, f2\"\nis a targetlist and if I try to use DEFAULT in the list, a parser error is\ngenerated as it was before I put my changes.\nI could specify a new targetlist branch used only in the case of the INSERT\nstatement, and that's what I had before I minimized my code to that what you\nsee now. I think, I see a way to declare a rule only used with the INSERT\nstatement, but I couldn't find any problem caused by using default in the\ntargetlist of the SELECT stmt. What should we do?\n\nrgds\nPavlo Baron\n\n",
"msg_date": "Thu, 27 Dec 2001 17:19:51 +0100",
"msg_from": "\"Pavlo Baron\" <pb@pbit.org>",
"msg_from_op": true,
"msg_subject": "Re: patch: INSERT INTO t VALUES (a, b, ..., DEFAULT, ...) "
},
{
"msg_contents": "here is a new patch containing all changes you (Tom) suggested to make. I\nstill use my \"pavlo (pbpbit.org)\" for me to locate my code; feel free to\nilliminate them before integrating :-)\n\nTom Lane:\n> This would break\n> INSERT INTO foo(textcolumn) VALUES ('@default')\n> which I find hardly acceptable.\n>\n> The only way to do it without breaking valid data entries is to\n> introduce a new parse node type to represent a DEFAULT placeholder.\n\nNow there is a newly declared parse node type \"Default\" - the corresponding\nstructure has no data. The \"@default\" hack is now illiminated - I'm the\nhappiest about it\n\nTom Lane:\n>\n> I also wonder what's going to happen if I write DEFAULT in a SELECT's\n> targetlist, which is possible given where you made the grammar change.\n\nThe grammer now contains two new rules: \"insert_target_list\" and\n\"insert_target_el\", the SELECT and INSERT don't use the same targetlist\nanymore, but the \"insert_target_el\" completely inherits \"target_el\" to avoid\nmultiple declarations and it just provides the new DEFAULT-rule.\n\nI hope, this patch is ok - to me, it looks correct now\n\nrgds\nPavlo Baron",
"msg_date": "Thu, 27 Dec 2001 21:37:40 +0100",
"msg_from": "\"Pavlo Baron\" <pb@pbit.org>",
"msg_from_op": true,
"msg_subject": "Re: patch: INSERT INTO t VALUES (a, b, ..., DEFAULT, ...) "
},
{
"msg_contents": "\nCan we get a context diff, diff -c for this?\n\n---------------------------------------------------------------------------\n\nPavlo Baron wrote:\n> here is a new patch containing all changes you (Tom) suggested to make. I\n> still use my \"pavlo (pbpbit.org)\" for me to locate my code; feel free to\n> illiminate them before integrating :-)\n> \n> Tom Lane:\n> > This would break\n> > INSERT INTO foo(textcolumn) VALUES ('@default')\n> > which I find hardly acceptable.\n> >\n> > The only way to do it without breaking valid data entries is to\n> > introduce a new parse node type to represent a DEFAULT placeholder.\n> \n> Now there is a newly declared parse node type \"Default\" - the corresponding\n> structure has no data. The \"@default\" hack is now illiminated - I'm the\n> happiest about it\n> \n> Tom Lane:\n> >\n> > I also wonder what's going to happen if I write DEFAULT in a SELECT's\n> > targetlist, which is possible given where you made the grammar change.\n> \n> The grammer now contains two new rules: \"insert_target_list\" and\n> \"insert_target_el\", the SELECT and INSERT don't use the same targetlist\n> anymore, but the \"insert_target_el\" completely inherits \"target_el\" to avoid\n> multiple declarations and it just provides the new DEFAULT-rule.\n> \n> I hope, this patch is ok - to me, it looks correct now\n> \n> rgds\n> Pavlo Baron\n> \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Feb 2002 20:48:36 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: patch: INSERT INTO t VALUES (a, b, ..., DEFAULT, ...)"
},
{
"msg_contents": "\nIs this ready for application? Patch is at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches2\n\n\n---------------------------------------------------------------------------\n\nPavlo Baron wrote:\n> here is a new patch containing all changes you (Tom) suggested to make. I\n> still use my \"pavlo (pbpbit.org)\" for me to locate my code; feel free to\n> illiminate them before integrating :-)\n> \n> Tom Lane:\n> > This would break\n> > INSERT INTO foo(textcolumn) VALUES ('@default')\n> > which I find hardly acceptable.\n> >\n> > The only way to do it without breaking valid data entries is to\n> > introduce a new parse node type to represent a DEFAULT placeholder.\n> \n> Now there is a newly declared parse node type \"Default\" - the corresponding\n> structure has no data. The \"@default\" hack is now illiminated - I'm the\n> happiest about it\n> \n> Tom Lane:\n> >\n> > I also wonder what's going to happen if I write DEFAULT in a SELECT's\n> > targetlist, which is possible given where you made the grammar change.\n> \n> The grammer now contains two new rules: \"insert_target_list\" and\n> \"insert_target_el\", the SELECT and INSERT don't use the same targetlist\n> anymore, but the \"insert_target_el\" completely inherits \"target_el\" to avoid\n> multiple declarations and it just provides the new DEFAULT-rule.\n> \n> I hope, this patch is ok - to me, it looks correct now\n> \n> rgds\n> Pavlo Baron\n> \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 7 Mar 2002 19:57:56 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: patch: INSERT INTO t VALUES (a, b, ..., DEFAULT, ...)"
},
{
"msg_contents": "\nOops, this is a non-context diff. Would you please resubmit as a\ncontext diff, diff -c?\n\n---------------------------------------------------------------------------\n\nPavlo Baron wrote:\n> here is a new patch containing all changes you (Tom) suggested to make. I\n> still use my \"pavlo (pbpbit.org)\" for me to locate my code; feel free to\n> illiminate them before integrating :-)\n> \n> Tom Lane:\n> > This would break\n> > INSERT INTO foo(textcolumn) VALUES ('@default')\n> > which I find hardly acceptable.\n> >\n> > The only way to do it without breaking valid data entries is to\n> > introduce a new parse node type to represent a DEFAULT placeholder.\n> \n> Now there is a newly declared parse node type \"Default\" - the corresponding\n> structure has no data. The \"@default\" hack is now illiminated - I'm the\n> happiest about it\n> \n> Tom Lane:\n> >\n> > I also wonder what's going to happen if I write DEFAULT in a SELECT's\n> > targetlist, which is possible given where you made the grammar change.\n> \n> The grammer now contains two new rules: \"insert_target_list\" and\n> \"insert_target_el\", the SELECT and INSERT don't use the same targetlist\n> anymore, but the \"insert_target_el\" completely inherits \"target_el\" to avoid\n> multiple declarations and it just provides the new DEFAULT-rule.\n> \n> I hope, this patch is ok - to me, it looks correct now\n> \n> rgds\n> Pavlo Baron\n> \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 7 Mar 2002 20:01:44 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: patch: INSERT INTO t VALUES (a, b, ..., DEFAULT, ...)"
}
] |
[
{
"msg_contents": "I've seen a few postings in multiple newsgroups saying that in 7.1.x and \nup, literals in SQL statements are implicitly cast to strings.\n\nFor example in:\nselect distinct 'hello' from mytable;\nthe 'hello' is implicitly assumed to be 'hello'::text\n\nHowever, in both 7.1.3, and a fresh build of 7.2b4 from cvs, (with all \nregressions passing) I get:\n\nmytest=# select distinct 'hello' from mytable;\nERROR: Unable to identify an ordering operator '<' for type 'unknown'\n Use an explicit ordering operator or modify the query\n\n\nan explicit 'hello'::text works fine.\n\nI've spent a day looking through the code and can't really find any \nobvious #define's or compile time flags that would be causing this \nproblem.\nIt looks like\nConst *\nmake_const(Value *value)\n{\n...\n case T_String:\n val = DirectFunctionCall1(textin, \nCStringGetDatum(strVal(value)));\n\n typeid = UNKNOWNOID; /* will be coerced \nlater */\n typelen = -1; /* variable len */\n typebyval = false;\n break;\n...\n}\n\ndoes the damage, and it never gets 'coerced later', at least not before \ntransformDistinctClause(...) gets called, which is where the failure \nhappens (a few levels down).\n\ndoes this really work for everybody else? Can someone point me to a \ncompile flag I may be missing, or the code that actually does the \nimplicit cast?\n\nthanks\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 27 Dec 2001 14:37:14 -0600",
"msg_from": "Scott Royston <scroyston71@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Implicit cast of literal in SQL statements"
},
{
"msg_contents": "Scott Royston <scroyston71@yahoo.com> writes:\n> I've seen a few postings in multiple newsgroups saying that in 7.1.x and \n> up, literals in SQL statements are implicitly cast to strings.\n\nThat's an oversimplification: the implicit coercion of unknown literals\nonly happens when looking for an operator or function to apply to them.\nFor an unprocessed result literal such as you describe, the type\nnever does get changed. Which is okay because type \"unknown\" does have\nan output routine, which is all that's needed to emit the literal.\nYou may care to peruse the rules in\nhttp://developer.postgresql.org/docs/postgres/typeconv.html\n\n> However, in both 7.1.3, and a fresh build of 7.2b4 from cvs, (with all \n> regressions passing) I get:\n\n> mytest=# select distinct 'hello' from mytable;\n> ERROR: Unable to identify an ordering operator '<' for type 'unknown'\n> Use an explicit ordering operator or modify the query\n\nThis is mildly annoying but I'm not sure that fixing it wouldn't\nintroduce greater annoyances. As an example of the pitfalls, consider:\n\nregression=# select 1 union select '2';\n ?column?\n----------\n 1\n 2\n(2 rows)\n\nregression=# select 1 union select '2'::text;\nERROR: UNION types \"int4\" and \"text\" not matched\n\nThe first example works because the right-hand SELECT's result is not\ncoerced to \"text\" before UNION can get its hands on it.\n\nPossibly DISTINCT should be allowed to type-coerce unknown inputs to\ntext the same way that explicit operators and functions can. Offhand\nI'm not sure if that's a good solution or not. There are related\ncases to consider too, eg ORDER BY and GROUP BY.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 27 Dec 2001 16:22:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Implicit cast of literal in SQL statements "
},
{
"msg_contents": "> I've seen a few postings in multiple newsgroups saying that in 7.1.x and\n> up, literals in SQL statements are implicitly cast to strings.\n\nIn some contexts, that statement is true, yes. The cases where this is\ntrue is when the parser is trying to match literals with available\nfunction calls. If there is a literal of unknown type, and if there is a\nfunction which could take a string literal as input, then that function\nis the one chosen.\n\n> does this really work for everybody else? Can someone point me to a\n> compile flag I may be missing, or the code that actually does the\n> implicit cast?\n\nIt looks like we are not handling the case where there is no explicit\nfunction call, and there a string literal in the target list (so no\nunderlying column to infer a type from), and there is a subsequent\nordering operation. That might be fixable, but it may not be a useful\nreal world example afaict.\n\nDo you have another example to illustrate the problem for a query which\none might actually need to use?\n\n - Thomas\n",
"msg_date": "Thu, 27 Dec 2001 21:28:53 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Implicit cast of literal in SQL statements"
},
{
"msg_contents": "I've got some 'legacy' code that I'm dealing with - sometimes it will \nreceive query requests that are simply the union of two easier requests \nit already knows the sql for.\nThese 'easier' queries have 'distincts' in them, and the code doesn't go \nto the trouble of removing by hand when doing a union.\nso the query ends up looking like:\nSELECT DISTINCT firstName, middleName, lastName FROM completeNameTable \nWHERE (...)\nUNION\nSELECT DISTINCT firstName, ' ', lastName FROM partialNameTable WHERE(...)\n\nugly, I know. ( and probably inefficient, I should check the plan )\n\nthanks for the quick response\n\nOn Thursday, December 27, 2001, at 03:28 PM, Thomas Lockhart wrote:\n\n>> I've seen a few postings in multiple newsgroups saying that in 7.1.x \n>> and\n>> up, literals in SQL statements are implicitly cast to strings.\n>\n> In some contexts, that statement is true, yes. The cases where this is\n> true is when the parser is trying to match literals with available\n> function calls. If there is a literal of unknown type, and if there is a\n> function which could take a string literal as input, then that function\n> is the one chosen.\n>\n>> does this really work for everybody else? Can someone point me to a\n>> compile flag I may be missing, or the code that actually does the\n>> implicit cast?\n>\n> It looks like we are not handling the case where there is no explicit\n> function call, and there a string literal in the target list (so no\n> underlying column to infer a type from), and there is a subsequent\n> ordering operation. 
That might be fixable, but it may not be a useful\n> real world example afaict.\n>\n> Do you have another example to illustrate the problem for a query which\n> one might actually need to use?\n>\n> - Thomas\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 27 Dec 2001 16:28:29 -0600",
"msg_from": "Scott Royston <scroyston71@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Implicit cast of literal in SQL statements"
}
] |
[
{
"msg_contents": "what is meant by the TODO-item \"Disallow missing columns in INSERT ...\nVALUES, per ANSI\" ?\n\nrgds\nPavlo Baron\n\n",
"msg_date": "Thu, 27 Dec 2001 21:58:06 +0100",
"msg_from": "\"Pavlo Baron\" <pb@pbit.org>",
"msg_from_op": true,
"msg_subject": "TODO question"
},
{
"msg_contents": "On Thu, 27 Dec 2001, Pavlo Baron wrote:\n\n> what is meant by the TODO-item \"Disallow missing columns in INSERT ...\n> VALUES, per ANSI\" ?\n\nINSERT INTO foo(a,b) values(1,2,3);\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 27 Dec 2001 17:21:22 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: TODO question"
},
{
"msg_contents": "Vince Vielhaber:\n> INSERT INTO foo(a,b) values(1,2,3);\n>\n\nsorry, but I'm a little bit confused: what should or should not happen with\nthis:\nif I execute this INSERT on a table having 2 cols, an error is generated:\nERROR: INSERT has more expressions than target columns\n\nhas it already been fixed? or what should be exactly disallowed? every time\nI execute such statement with any cols/vals combination having different\ncols/vals number it fails. Should it be allowed now?\n\nrgds\nPavlo Baron\n\n",
"msg_date": "Thu, 27 Dec 2001 23:54:14 +0100",
"msg_from": "\"Pavlo Baron\" <pb@pbit.org>",
"msg_from_op": true,
"msg_subject": "Re: TODO question"
},
{
"msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> On Thu, 27 Dec 2001, Pavlo Baron wrote:\n>> what is meant by the TODO-item \"Disallow missing columns in INSERT ...\n>> VALUES, per ANSI\" ?\n\n> INSERT INTO foo(a,b) values(1,2,3);\n\nI think you mean\n\nregression=# INSERT INTO foo(a,b,c) values(1,2);\nINSERT 172219 1\n\nas the other behaves as expected:\n\nregression=# INSERT INTO foo(a,b) values(1,2,3);\nERROR: INSERT has more expressions than target columns\n\nI am not sure why this is on the TODO list, as opposed to having been\nfixed out-of-hand; it sure doesn't look complex to the naked eye.\nPerhaps there were some compatibility concerns, or some other issue.\nPavlo would be well advised to go search the archives for awhile to\nunderstand the background of the TODO item.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 27 Dec 2001 18:07:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TODO question "
},
{
"msg_contents": "On Thu, 27 Dec 2001, Pavlo Baron wrote:\n\n> Vince Vielhaber:\n> > INSERT INTO foo(a,b) values(1,2,3);\n> >\n>\n> sorry, but I'm a little bit confused: what should or should not happen with\n> this:\n> if I execute this INSERT on a table having 2 cols, an error is generated:\n> ERROR: INSERT has more expressions than target columns\n>\n> has it already been fixed? or what should be exactly disallowed? every time\n> I execute such statement with any cols/vals combination having different\n> cols/vals number it fails. Should it be allowed now?\n\nSorry, I had it backwards.\n\ninsert into foo(a,b,c) values(1,2);\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 27 Dec 2001 18:17:29 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: TODO question"
},
{
"msg_contents": "> I am not sure why this is on the TODO list, as opposed to having been\n> fixed out-of-hand; it sure doesn't look complex to the naked eye.\n> Perhaps there were some compatibility concerns, or some other issue.\n> Pavlo would be well advised to go search the archives for awhile to\n> understand the background of the TODO item.\n\nIt has the potential to break existing queries so I don't think we were\nworked up about fixing it quickly. Whoever does will have to weather\nthe complaints, but I agree it should be done. :-)\n\nThere are actually lots of nice easy items on the TODO list. Most\npeople that they are easy for are focusing on other things. I found\nsome time to pick off some easy ones for 7.2 and I hope more for 7.3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 27 Dec 2001 19:12:59 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TODO question"
},
{
"msg_contents": "> I am not sure why this is on the TODO list, as opposed to having been\n> fixed out-of-hand; it sure doesn't look complex to the naked eye.\n> Perhaps there were some compatibility concerns, or some other issue.\n> Pavlo would be well advised to go search the archives for awhile to\n> understand the background of the TODO item.\n\nActually, the bigest problem with these small items is not the coding\nbut making a patch everyone likes (at least for me). :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 27 Dec 2001 19:17:10 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TODO question"
},
{
"msg_contents": "Bruce Momjian:\n> Actually, the bigest problem with these small items is not the coding\n> but making a patch everyone likes (at least for me). :-)\n\nok, but what's the exact process of integrating patches made by new\ncontributers (or smb. who tries to become a contributer, smb. like me ,-) )?\nI made (and remade, thank you Tom) a patch for \"Allow INSERT INTO my_table\nVALUES (a, b, c, DEFAULT, x, y, z, ...)\" from the TODO list being a small\nnice item. How do I know if it's been accepted? I explicitely wanted to\nbegin with such small items, because: what looks small'n'easy for you guys\nwould take some time for me to implement, and if I've got it after I jogged\nthrough the whole parser, maybe it would look small'n'easy for me, too.\n\nrgds\nPavlo Baron\n\n",
"msg_date": "Fri, 28 Dec 2001 10:15:59 +0100",
"msg_from": "\"Pavlo Baron\" <pb@pbit.org>",
"msg_from_op": true,
"msg_subject": "Re: TODO question"
},
{
"msg_contents": "> ok, but what's the exact process of integrating patches made by new\n> contributers (or smb. who tries to become a contributer, smb. like me ,-) )?\n> I made (and remade, thank you Tom) a patch for \"Allow INSERT INTO my_table\n> VALUES (a, b, c, DEFAULT, x, y, z, ...)\" from the TODO list being a small\n> nice item. How do I know if it's been accepted? I explicitely wanted to\n> begin with such small items, because: what looks small'n'easy for you guys\n> would take some time for me to implement, and if I've got it after I jogged\n> through the whole parser, maybe it would look small'n'easy for me, too.\n\nYou have seen the process for acceptance, with the exception of the \"OK,\napplied!\" exchange that happens at the end. That is because we have not\nquite released 7.2, and the source tree is essentially frozen for the\nnext month or so.\n\nbtw, it *may* be premature to assume that your patches are completely\naccepted yet, since none of us are very focused on new features at this\nmoment.\n\nIf you had submitted patches three months ago, they would be in the 7.2\nrelease ;)\n\n - Thomas\n",
"msg_date": "Fri, 28 Dec 2001 15:07:12 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: TODO question"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> btw, it *may* be premature to assume that your patches are completely\n> accepted yet, since none of us are very focused on new features at this\n> moment.\n\nIn fact the patch seemed quite incomplete to me; adding a new parsenode\ntype requires much more than just a struct declaration. But this isn't\nthe right time of the cycle to be reviewing new-feature patches.\n\nBTW, patches should usually be sent to pgsql-patches not pgsql-hackers.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 28 Dec 2001 10:39:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TODO question "
},
{
"msg_contents": "Tom Lane:\n> In fact the patch seemed quite incomplete to me; adding a new parsenode\n> type requires much more than just a struct declaration.\n\nbtw, it's not correct, that just a new structure has been declared. I added\nthe T_Default to the Type-Enum and it seems to me, my new parsenode type has\nbeen full-automatically integrated in the parser-workflow. In the gram.y,\nthere is a new set of rules describing the DEFAULT value in the INSERT\nstmt - this is the place, where it's being identified and node-ed (using\nit's type), the transformation has got the new T_Default-case leaving this\nnode \"as is\", and it's being transformed (replaced by the default value\ntaken from the relation specified by the corresponding parsestate-field)\nlater.\n\n> But this isn't\n> the right time of the cycle to be reviewing new-feature patches.\n\nok, but I hope you've got a 3%-free--ear-capacity at least to answer some of\nmy questions (having a very bad timing ,-) ). I don't ask offen and about\nevery step, but sometimes it breaks through...\n\n>\n> BTW, patches should usually be sent to pgsql-patches not pgsql-hackers.\n\n...where they will get dusty before the new release has been finished... ,)\nno problem, I'll wait a little with my patches but not with my questions ,)\n\nsorry if I increased your current stress level :-)\n\nrgds\nPavlo Baron\n\n",
"msg_date": "Fri, 28 Dec 2001 17:17:19 +0100",
"msg_from": "\"Pavlo Baron\" <pb@pbit.org>",
"msg_from_op": true,
"msg_subject": "Re: TODO question "
},
{
"msg_contents": "Thomas Lockhart:\n> You have seen the process for acceptance, with the exception of the \"OK,\n> applied!\" exchange that happens at the end. That is because we have not\n> quite released 7.2, and the source tree is essentially frozen for the\n> next month or so.\n\nno problem, if I don't hear a *NO!*, it could be a *maybe yes...*\n\n>\n> btw, it *may* be premature to assume that your patches are completely\n> accepted yet, since none of us are very focused on new features at this\n> moment.\n\nI really don't assume the immediate acceptance of my patches, I'm just\nimpregnating the codebase, so I ask some questions, and there are, IMHO, not\nvery lot of them, and I hope it's not a big problem if I submit my patches\nsometimes with a complete description on what I did so nobody really needs\n(I know very very very precise about a Release completion phase %-( )\nimmediately to take a look at the code I wrote but possibly at the concept I\nused doing that.\n\n>\n> If you had submitted patches three months ago, they would be in the 7.2\n> release ;)\n\nI think, my timing is perfect! Before you guys are got rid of the 7.2, I'll\nbe ready to take care of some new features and learn it by doing and\nproviding portions which have a chance to survive! I think, I've got a lot\nof ideas and usefull experiences, so may be some of them would be usefull\nfor PostgreSQL, too.\nExcuse me my enthusiasm, but I don't think it's a problem\n\nrgds\nPavlo Baron\n\n",
"msg_date": "Fri, 28 Dec 2001 17:29:36 +0100",
"msg_from": "\"Pavlo Baron\" <pb@pbit.org>",
"msg_from_op": true,
"msg_subject": "Re: TODO question"
},
{
"msg_contents": "> Tom Lane:\n> > In fact the patch seemed quite incomplete to me; adding a new parsenode\n> > type requires much more than just a struct declaration.\n> \n> btw, it's not correct, that just a new structure has been declared. I added\n> the T_Default to the Type-Enum and it seems to me, my new parsenode type has\n> been full-automatically integrated in the parser-workflow. In the gram.y,\n> there is a new set of rules describing the DEFAULT value in the INSERT\n> stmt - this is the place, where it's being identified and node-ed (using\n> it's type), the transformation has got the new T_Default-case leaving this\n> node \"as is\", and it's being transformed (replaced by the default value\n> taken from the relation specified by the corresponding parsestate-field)\n> later.\n> \n> > But this isn't\n> > the right time of the cycle to be reviewing new-feature patches.\n> \n> ok, but I hope you've got a 3%-free--ear-capacity at least to answer some of\n> my questions (having a very bad timing ,-) ). I don't ask offen and about\n> every step, but sometimes it breaks through...\n\nSure, ask away. We will do our best. In fact, adding a new node is\npretty tricky. There is a developer's FAQ item about it, number 7. I\nassume you read that already.\n\nI may be able to take you patch and add the needed node support stuff,\nand send the patch back to you so you can continue on it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 28 Dec 2001 12:57:53 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TODO question"
},
{
"msg_contents": "Bruce Momjian writes:\n> Sure, ask away. We will do our best. In fact, adding a new node is\n> pretty tricky. There is a developer's FAQ item about it, number 7. I\n> assume you read that already.\n\nhm...do you mean the current version of FAQ on the web? there are 2 seventh\nitems in the table of contents, both jumping to\n----------------------------------------snip--------------------------------\n-\n7) I just added a field to a structure. What else should I do?\nThe structures passing around from the parser, rewrite, optimizer, and\nexecutor require quite a bit of support. Most structures have support\nroutines in src/backend/nodes used to create, copy, read, and output those\nstructures. Make sure you add support for your new field to these files.\nFind any other places the structure may need code for your new field. mkid\nis helpful with this (see above).\n\n----------------------------------------snap--------------------------------\n-\n\nIs that what you mean? It confuses me a bit, surely because I'm new here...\nI didn't change any existing struct, and coudn't find any struct I which\ncould grow bacause of my new parsenode type. The type-enum contains a\ncorresponding range of values (I think, it was smth. starting with 700 upto\n...) - I just added my new type here.\nOr do I use a wrong copy of FAQ?\n\n>\n> I may be able to take you patch and add the needed node support stuff,\n> and send the patch back to you so you can continue on it.\n\nthanx. please, feel free - I attached my last patch to this email.\nI see, that my patch provides the needed operation, but maybe it's not\nenough and could cause some side effects. I'm looking forward to your\nmodifications (I hope it doesn't keep you from your current work, at least\nnot crucial). 
What I'm just interested in is, if my changes would be a\nsubset of your code or if what I did is an absolute b*-sh* ,)\n\nrgds\nPavlo Baron\n\n\nIndex: src/backend/parser/parse_target.c\n===================================================================\nRCS file: /projects/cvsroot/pgsql/src/backend/parser/parse_target.c,v\nretrieving revision 1.76\ndiff -c -r1.76 parse_target.c\n*** src/backend/parser/parse_target.c 2001/11/05 17:46:26 1.76\n--- src/backend/parser/parse_target.c 2001/12/28 23:13:43\n***************\n*** 60,65 ****\n--- 60,77 ----\n if (IsA(expr, Ident) &&((Ident *) expr)->isRel)\n elog(ERROR, \"You can't use relation names alone in the target list, try\nrelation.*.\");\n\n+ /* pavlo (pb@pbit.org: 2001-12-27: handle the DEFAULT in INSERT\nINTO foo VALUES (..., DEFAULT, ...))*/\n+ if (IsA(expr, Default))\n+ {\n+ if (pstate->p_target_relation->rd_att->attrs[(AttrNumber)\npstate->p_last_resno - 1]->atthasdef)\n+ {\n+ Const *con = (Const *)\nstringToNode(pstate->p_target_relation->rd_att->constr->defval[(AttrNumber)\npstate->p_last_resno - 1].adbin);\n+ expr = con;\n+ }\n+ else\n+ elog(ERROR, \"no default value for column \\\"%s\\\"\nfound\\nDEFAULT cannot be inserted\", colname);\n+ }\n+\n type_id = exprType(expr);\n type_mod = exprTypmod(expr);\n\n***************\n*** 260,266 ****\n {\n tle->expr = CoerceTargetExpr(pstate, tle->expr, type_id,\n attrtype, attrtypmod);\n! if (tle->expr == NULL)\n elog(ERROR, \"column \\\"%s\\\" is of type '%s'\"\n \" but expression is of type '%s'\"\n \"\\n\\tYou will need to rewrite or cast the expression\",\n--- 272,278 ----\n {\n tle->expr = CoerceTargetExpr(pstate, tle->expr, type_id,\n attrtype, attrtypmod);\n! if (tle->expr == NULL)\n elog(ERROR, \"column \\\"%s\\\" is of type '%s'\"\n \" but expression is of type '%s'\"\n \"\\n\\tYou will need to rewrite or cast the expression\",\n***************\n*** 299,305 ****\n int32 attrtypmod)\n {\n if (can_coerce_type(1, &type_id, &attrtype))\n! 
expr = coerce_type(pstate, expr, type_id, attrtype, attrtypmod);\n\n #ifndef DISABLE_STRING_HACKS\n\n--- 311,317 ----\n int32 attrtypmod)\n {\n if (can_coerce_type(1, &type_id, &attrtype))\n! expr = coerce_type(pstate, expr, type_id, attrtype,\nattrtypmod);\n\n #ifndef DISABLE_STRING_HACKS\n\n***************\n*** 525,528 ****\n }\n\n return strength;\n! }\n--- 537,540 ----\n }\n\n return strength;\n! }\n\\ No newline at end of file\nIndex: src/backend/parser/parse_expr.c\n===================================================================\nRCS file: /projects/cvsroot/pgsql/src/backend/parser/parse_expr.c,v\nretrieving revision 1.105\ndiff -c -r1.105 parse_expr.c\n*** src/backend/parser/parse_expr.c 2001/11/12 20:05:24 1.105\n--- src/backend/parser/parse_expr.c 2001/12/28 23:15:08\n***************\n*** 121,126 ****\n--- 121,131 ----\n result = (Node *) make_const(val);\n break;\n }\n+ case T_Default: /* pavlo (pb@pbit.org): 2001-12-27:\ntransormation for the DEFAULT value for INSERT INTO foo VALUES (...,\nDEFAULT, ...)*/\n+ {\n+ result = (Default *) expr;\n+ break;\n+ }\n case T_ParamNo:\n {\n ParamNo *pno = (ParamNo *) expr;\n***************\n*** 1066,1069 ****\n }\n else\n return typename->name;\n! }\n--- 1071,1074 ----\n }\n else\n return typename->name;\n! }\n\\ No newline at end of file\nIndex: src/backend/parser/gram.y\n===================================================================\nRCS file: /projects/cvsroot/pgsql/src/backend/parser/gram.y,v\nretrieving revision 2.276\ndiff -c -r2.276 gram.y\n*** src/backend/parser/gram.y 2001/12/09 04:39:39 2.276\n--- src/backend/parser/gram.y 2001/12/28 23:15:39\n***************\n*** 195,201 ****\n opt_column_list, columnList, opt_name_list,\n sort_clause, sortby_list, index_params, index_list, name_list,\n from_clause, from_list, opt_array_bounds,\n! 
expr_list, attrs, target_list, update_target_list,\n def_list, opt_indirection, group_clause, TriggerFuncArgs,\n select_limit, opt_select_limit\n\n--- 195,201 ----\n opt_column_list, columnList, opt_name_list,\n sort_clause, sortby_list, index_params, index_list, name_list,\n from_clause, from_list, opt_array_bounds,\n! expr_list, attrs, target_list, insert_target_list, update_target_list,\n def_list, opt_indirection, group_clause, TriggerFuncArgs,\n select_limit, opt_select_limit\n\n***************\n*** 253,259 ****\n %type <node> table_ref\n %type <jexpr> joined_table\n %type <range> relation_expr\n! %type <target> target_el, update_target_el\n %type <paramno> ParamNo\n\n %type <typnam> Typename, SimpleTypename, ConstTypename\n--- 253,259 ----\n %type <node> table_ref\n %type <jexpr> joined_table\n %type <range> relation_expr\n! %type <target> target_el, update_target_el, insert_target_el\n %type <paramno> ParamNo\n\n %type <typnam> Typename, SimpleTypename, ConstTypename\n***************\n*** 3302,3308 ****\n }\n ;\n\n! insert_rest: VALUES '(' target_list ')'\n {\n $$ = makeNode(InsertStmt);\n $$->cols = NIL;\n--- 3302,3308 ----\n }\n ;\n\n! insert_rest: VALUES '(' insert_target_list ')'\n {\n $$ = makeNode(InsertStmt);\n $$->cols = NIL;\n***************\n*** 5482,5488 ****\n *\n\n****************************************************************************\n*/\n\n! /* Target lists as found in SELECT ... and INSERT VALUES ( ... ) */\n\n target_list: target_list ',' target_el\n { $$ = lappend($1, $3); }\n--- 5482,5488 ----\n *\n\n****************************************************************************\n*/\n\n! /* Target lists as found in SELECT ... */\n\n target_list: target_list ',' target_el\n { $$ = lappend($1, $3); }\n***************\n*** 5490,5496 ****\n--- 5490,5522 ----\n { $$ = makeList1($1); }\n ;\n\n+ /* Target lists as found in INSERT VALUES ( ... 
) */\n+\n+ /* pavlo (pb@pbit.org): 2001-12-27: parse node based handling for the\nDEFAULT value added;\n+ now it's possible to INSERT INTO ... VALUES (..., FEFAULT, ...)!\n+ */\n+ insert_target_list: insert_target_list ',' insert_target_el\n+ { $$ = lappend($1, $3); }\n+ | insert_target_el\n+ { $$ = makeList1($1); }\n+ | insert_target_list ',' target_el\n+ { $$ = lappend($1, $3); }\n+ | target_el\n+ { $$ = makeList1($1); }\n+ ;\n+\n+ insert_target_el: DEFAULT\n+ {\n+ Default *n = makeNode(Default);\n+ $$ = makeNode(ResTarget);\n+ $$->name = NULL;\n+ $$->indirection = NULL;\n+ $$->val = (Node *)n;\n+ }\n+ ;\n+\n /* AS is not optional because shift/red conflict with unary ops */\n+\n target_el: a_expr AS ColLabel\n {\n $$ = makeNode(ResTarget);\n***************\n*** 6380,6383 ****\n strcpy(newval+1, oldval);\n v->val.str = newval;\n }\n! }\n--- 6406,6409 ----\n strcpy(newval+1, oldval);\n v->val.str = newval;\n }\n! }\n\\ No newline at end of file\nIndex: src/include/nodes/parsenodes.h\n===================================================================\nRCS file: /projects/cvsroot/pgsql/src/include/nodes/parsenodes.h,v\nretrieving revision 1.151\ndiff -c -r1.151 parsenodes.h\n*** src/include/nodes/parsenodes.h 2001/11/05 17:46:34 1.151\n--- src/include/nodes/parsenodes.h 2001/12/28 23:16:30\n***************\n*** 1005,1010 ****\n--- 1005,1019 ----\n } A_Const;\n\n /*\n+ * pavlo (pb@pbit.org): 2001.12.27:\n+ * Default - the DEFAULT constant expression used in the target list of\nINSERT INTO ... VALUES (..., DEFAULT, ...)\n+ */\n+ typedef struct Default\n+ {\n+ NodeTag type;\n+ } Default;\n+\n+ /*\n * TypeCast - a CAST expression\n *\n * NOTE: for mostly historical reasons, A_Const and ParamNo parsenodes\ncontain\n***************\n*** 1355,1358 ****\n */\n typedef SortClause GroupClause;\n\n! #endif /* PARSENODES_H */\n--- 1364,1367 ----\n */\n typedef SortClause GroupClause;\n\n! #endif /* PARSENODES_H */\n\\ No newline at end of file",
"msg_date": "Sat, 29 Dec 2001 00:25:11 +0100",
"msg_from": "\"Pavlo Baron\" <pb@pbit.org>",
"msg_from_op": true,
"msg_subject": "Re: TODO question"
},
{
"msg_contents": "BTW, inserting the actual default expression in the parser is no good.\nWe just finished getting rid of that behavior last month:\n\n2001-11-02 15:23 tgl\n\n\t* src/backend/: catalog/heap.c, optimizer/prep/preptlist.c,\n\tparser/analyze.c: Add default expressions to INSERTs during\n\tplanning, not during parse analysis. This keeps stored rules from\n\tprematurely absorbing default information, which is necessary for\n\tALTER TABLE SET DEFAULT to work unsurprisingly with rules. See\n\tpgsql-bugs discussion 24-Oct-01.\n\nThe way I would tackle this is to do the work in\ntransformInsertStmt: if you find a Default node in the input targetlist,\nyou can simply omit the corresponding entry of the column list. In\nother words, transform\n\n\tINSERT INTO foo (col1, col2, col3) VALUES (11, DEFAULT, 13);\n\ninto\n\n\tINSERT INTO foo (col1, col3) VALUES (11, 13);\n\nThen you can rely on the planner to insert the correct defaults at planning\ntime.\n\nMy inclination would be to twiddle the order of operations so that the\nDefault node is spotted and intercepted before being fed to\ntransformExpr. This would probably mean doing some surgery on\ntransformTargetList. The advantage of doing it that way is that\ntransformExpr and subsequent processing need never see Default nodes\nand can just error out if they do. The way you have it, Default needs\nto pass through transformExpr, which raises a bunch of questions in my\nmind about what parts of the system will need to accept it. There's\nan established distinction between \"pre-parse-analysis\" node types\n(eg, Attr) and \"post-parse-analysis\" node types (eg, Var), and we\nunderstand which places need to support which node types as long as\nthat's the distinction. If you allow Default to pass through\ntransformExpr then you're to some extent allowing it to be a\npost-parse-analysis node type, which doesn't strike me as a good idea.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 28 Dec 2001 19:15:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TODO question "
},
{
"msg_contents": "> Bruce Momjian writes:\n> > Sure, ask away. We will do our best. In fact, adding a new node is\n> > pretty tricky. There is a developer's FAQ item about it, number 7. I\n> > assume you read that already.\n> \n> hm...do you mean the current version of FAQ on the web? there are 2 seventh\n> items in the table of contents, both jumping to\n> ----------------------------------------snip--------------------------------\n> -\n> 7) I just added a field to a structure. What else should I do?\n> The structures passing around from the parser, rewrite, optimizer, and\n> executor require quite a bit of support. Most structures have support\n> routines in src/backend/nodes used to create, copy, read, and output those\n> structures. Make sure you add support for your new field to these files.\n> Find any other places the structure may need code for your new field. mkid\n> is helpful with this (see above).\n> \n> ----------------------------------------snap--------------------------------\n> -\n> \n> Is that what you mean? It confuses me a bit, surely because I'm new here...\n> I didn't change any existing struct, and coudn't find any struct I which\n> could grow bacause of my new parsenode type. The type-enum contains a\n> corresponding range of values (I think, it was smth. starting with 700 upto\n> ...) - I just added my new type here.\n> Or do I use a wrong copy of FAQ?\n\nYes, that is the one. The steps for adding an entry to an existing\nstructure is similar to add an entirely new one.\n\n> > I may be able to take you patch and add the needed node support stuff,\n> > and send the patch back to you so you can continue on it.\n> \n> thanx. please, feel free - I attached my last patch to this email.\n> I see, that my patch provides the needed operation, but maybe it's not\n> enough and could cause some side effects. I'm looking forward to your\n> modifications (I hope it doesn't keep you from your current work, at least\n> not crucial). 
What I'm just interested in is, if my changes would be a\n> subset of your code or if what I did is an absolute b*-sh* ,)\n\nI think it would be a subset. I haven't looked at it closely yet,\nthough.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 28 Dec 2001 22:44:50 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TODO question"
},
{
"msg_contents": "> Bruce Momjian writes:\n> > Sure, ask away. We will do our best. In fact, adding a new node is\n> > pretty tricky. There is a developer's FAQ item about it, number 7. I\n> > assume you read that already.\n> \n> hm...do you mean the current version of FAQ on the web? there are 2 seventh\n> items in the table of contents, both jumping to\n\nI have updated the developer's FAQ to remove the duplicate entries. I\nhave wanted to split the FAQ into two sections anyway, and this was a\ngood time to do it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 28 Dec 2001 22:51:06 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TODO question"
},
{
"msg_contents": "> > Bruce Momjian writes:\n> > > Sure, ask away. We will do our best. In fact, adding a new node is\n> > > pretty tricky. There is a developer's FAQ item about it, number 7. I\n> > > assume you read that already.\n> >\n> > hm...do you mean the current version of FAQ on the web? there are 2\nseventh\n> > items in the table of contents, both jumping to\n>\n> ----------------------------------------snip------------------------------\n--\n> > -\n> > 7) I just added a field to a structure. What else should I do?\n> > The structures passing around from the parser, rewrite, optimizer, and\n> > executor require quite a bit of support. Most structures have support\n> > routines in src/backend/nodes used to create, copy, read, and output\nthose\n> > structures. Make sure you add support for your new field to these files.\n> > Find any other places the structure may need code for your new field.\nmkid\n> > is helpful with this (see above).\n> >\n>\n> ----------------------------------------snap------------------------------\n--\n> > -\n> >\n> > Is that what you mean? It confuses me a bit, surely because I'm new\nhere...\n> > I didn't change any existing struct, and coudn't find any struct I which\n> > could grow bacause of my new parsenode type. The type-enum contains a\n> > corresponding range of values (I think, it was smth. starting with 700\nupto\n> > ...) - I just added my new type here.\n> > Or do I use a wrong copy of FAQ?\n>\n> Yes, that is the one. The steps for adding an entry to an existing\n> structure is similar to add an entirely new one.\n\nWhat structure could you suggest me to pick out and to follow in the code\nwhich would be handled similarily to my \"Default\"? This one is a one-way\nuse'n'drop struct and wouldn't appear anywhere else.\n\n>\n> > > I may be able to take you patch and add the needed node support stuff,\n> > > and send the patch back to you so you can continue on it.\n> >\n> > thanx. 
please, feel free - I attached my last patch to this email.\n> > I see, that my patch provides the needed operation, but maybe it's not\n> > enough and could cause some side effects. I'm looking forward to your\n> > modifications (I hope it doesn't keep you from your current work, at\nleast\n> > not crucial). What I'm just interested in is, if my changes would be a\n> > subset of your code or if what I did is an absolute b*-sh* ,)\n>\n> I think it would be a subset. I haven't looked at it closely yet,\n> though.\n>\n\noops, a new patch attached: the last one wouldn't compile because of missing\nT_Default value I just forgot to diff-in (or out) - sorry\n\nrgds\nPavlo Baron\n\n\nIndex: src/backend/parser/parse_target.c\n===================================================================\nRCS file: /projects/cvsroot/pgsql/src/backend/parser/parse_target.c,v\nretrieving revision 1.76\ndiff -c -r1.76 parse_target.c\n*** src/backend/parser/parse_target.c 2001/11/05 17:46:26 1.76\n--- src/backend/parser/parse_target.c 2001/12/28 23:13:43\n***************\n*** 60,65 ****\n--- 60,77 ----\n if (IsA(expr, Ident) &&((Ident *) expr)->isRel)\n elog(ERROR, \"You can't use relation names alone in the target list, try\nrelation.*.\");\n\n+ /* pavlo (pb@pbit.org: 2001-12-27: handle the DEFAULT in INSERT\nINTO foo VALUES (..., DEFAULT, ...))*/\n+ if (IsA(expr, Default))\n+ {\n+ if (pstate->p_target_relation->rd_att->attrs[(AttrNumber)\npstate->p_last_resno - 1]->atthasdef)\n+ {\n+ Const *con = (Const *)\nstringToNode(pstate->p_target_relation->rd_att->constr->defval[(AttrNumber)\npstate->p_last_resno - 1].adbin);\n+ expr = con;\n+ }\n+ else\n+ elog(ERROR, \"no default value for column \\\"%s\\\"\nfound\\nDEFAULT cannot be inserted\", colname);\n+ }\n+\n type_id = exprType(expr);\n type_mod = exprTypmod(expr);\n\n***************\n*** 260,266 ****\n {\n tle->expr = CoerceTargetExpr(pstate, tle->expr, type_id,\n attrtype, attrtypmod);\n! 
if (tle->expr == NULL)\n elog(ERROR, \"column \\\"%s\\\" is of type '%s'\"\n \" but expression is of type '%s'\"\n \"\\n\\tYou will need to rewrite or cast the expression\",\n--- 272,278 ----\n {\n tle->expr = CoerceTargetExpr(pstate, tle->expr, type_id,\n attrtype, attrtypmod);\n! if (tle->expr == NULL)\n elog(ERROR, \"column \\\"%s\\\" is of type '%s'\"\n \" but expression is of type '%s'\"\n \"\\n\\tYou will need to rewrite or cast the expression\",\n***************\n*** 299,305 ****\n int32 attrtypmod)\n {\n if (can_coerce_type(1, &type_id, &attrtype))\n! expr = coerce_type(pstate, expr, type_id, attrtype, attrtypmod);\n\n #ifndef DISABLE_STRING_HACKS\n\n--- 311,317 ----\n int32 attrtypmod)\n {\n if (can_coerce_type(1, &type_id, &attrtype))\n! expr = coerce_type(pstate, expr, type_id, attrtype,\nattrtypmod);\n\n #ifndef DISABLE_STRING_HACKS\n\n***************\n*** 525,528 ****\n }\n\n return strength;\n! }\n--- 537,540 ----\n }\n\n return strength;\n! }\n\\ No newline at end of file\nIndex: src/backend/parser/parse_expr.c\n===================================================================\nRCS file: /projects/cvsroot/pgsql/src/backend/parser/parse_expr.c,v\nretrieving revision 1.105\ndiff -c -r1.105 parse_expr.c\n*** src/backend/parser/parse_expr.c 2001/11/12 20:05:24 1.105\n--- src/backend/parser/parse_expr.c 2001/12/28 23:15:08\n***************\n*** 121,126 ****\n--- 121,131 ----\n result = (Node *) make_const(val);\n break;\n }\n+ case T_Default: /* pavlo (pb@pbit.org): 2001-12-27:\ntransormation for the DEFAULT value for INSERT INTO foo VALUES (...,\nDEFAULT, ...)*/\n+ {\n+ result = (Default *) expr;\n+ break;\n+ }\n case T_ParamNo:\n {\n ParamNo *pno = (ParamNo *) expr;\n***************\n*** 1066,1069 ****\n }\n else\n return typename->name;\n! }\n--- 1071,1074 ----\n }\n else\n return typename->name;\n! 
}\n\\ No newline at end of file\nIndex: src/backend/parser/gram.y\n===================================================================\nRCS file: /projects/cvsroot/pgsql/src/backend/parser/gram.y,v\nretrieving revision 2.276\ndiff -c -r2.276 gram.y\n*** src/backend/parser/gram.y 2001/12/09 04:39:39 2.276\n--- src/backend/parser/gram.y 2001/12/28 23:15:39\n***************\n*** 195,201 ****\n opt_column_list, columnList, opt_name_list,\n sort_clause, sortby_list, index_params, index_list, name_list,\n from_clause, from_list, opt_array_bounds,\n! expr_list, attrs, target_list, update_target_list,\n def_list, opt_indirection, group_clause, TriggerFuncArgs,\n select_limit, opt_select_limit\n\n--- 195,201 ----\n opt_column_list, columnList, opt_name_list,\n sort_clause, sortby_list, index_params, index_list, name_list,\n from_clause, from_list, opt_array_bounds,\n! expr_list, attrs, target_list, insert_target_list, update_target_list,\n def_list, opt_indirection, group_clause, TriggerFuncArgs,\n select_limit, opt_select_limit\n\n***************\n*** 253,259 ****\n %type <node> table_ref\n %type <jexpr> joined_table\n %type <range> relation_expr\n! %type <target> target_el, update_target_el\n %type <paramno> ParamNo\n\n %type <typnam> Typename, SimpleTypename, ConstTypename\n--- 253,259 ----\n %type <node> table_ref\n %type <jexpr> joined_table\n %type <range> relation_expr\n! %type <target> target_el, update_target_el, insert_target_el\n %type <paramno> ParamNo\n\n %type <typnam> Typename, SimpleTypename, ConstTypename\n***************\n*** 3302,3308 ****\n }\n ;\n\n! insert_rest: VALUES '(' target_list ')'\n {\n $$ = makeNode(InsertStmt);\n $$->cols = NIL;\n--- 3302,3308 ----\n }\n ;\n\n! insert_rest: VALUES '(' insert_target_list ')'\n {\n $$ = makeNode(InsertStmt);\n $$->cols = NIL;\n***************\n*** 5482,5488 ****\n *\n\n****************************************************************************\n*/\n\n! /* Target lists as found in SELECT ... 
and INSERT VALUES ( ... ) */\n\n target_list: target_list ',' target_el\n { $$ = lappend($1, $3); }\n--- 5482,5488 ----\n *\n\n****************************************************************************\n*/\n\n! /* Target lists as found in SELECT ... */\n\n target_list: target_list ',' target_el\n { $$ = lappend($1, $3); }\n***************\n*** 5490,5496 ****\n--- 5490,5522 ----\n { $$ = makeList1($1); }\n ;\n\n+ /* Target lists as found in INSERT VALUES ( ... ) */\n+\n+ /* pavlo (pb@pbit.org): 2001-12-27: parse node based handling for the\nDEFAULT value added;\n+ now it's possible to INSERT INTO ... VALUES (..., FEFAULT, ...)!\n+ */\n+ insert_target_list: insert_target_list ',' insert_target_el\n+ { $$ = lappend($1, $3); }\n+ | insert_target_el\n+ { $$ = makeList1($1); }\n+ | insert_target_list ',' target_el\n+ { $$ = lappend($1, $3); }\n+ | target_el\n+ { $$ = makeList1($1); }\n+ ;\n+\n+ insert_target_el: DEFAULT\n+ {\n+ Default *n = makeNode(Default);\n+ $$ = makeNode(ResTarget);\n+ $$->name = NULL;\n+ $$->indirection = NULL;\n+ $$->val = (Node *)n;\n+ }\n+ ;\n+\n /* AS is not optional because shift/red conflict with unary ops */\n+\n target_el: a_expr AS ColLabel\n {\n $$ = makeNode(ResTarget);\n***************\n*** 6380,6383 ****\n strcpy(newval+1, oldval);\n v->val.str = newval;\n }\n! }\n--- 6406,6409 ----\n strcpy(newval+1, oldval);\n v->val.str = newval;\n }\n! }\n\\ No newline at end of file\nIndex: src/include/nodes/parsenodes.h\n===================================================================\nRCS file: /projects/cvsroot/pgsql/src/include/nodes/parsenodes.h,v\nretrieving revision 1.151\ndiff -c -r1.151 parsenodes.h\n*** src/include/nodes/parsenodes.h 2001/11/05 17:46:34 1.151\n--- src/include/nodes/parsenodes.h 2001/12/28 23:16:30\n***************\n*** 1005,1010 ****\n--- 1005,1019 ----\n } A_Const;\n\n /*\n+ * pavlo (pb@pbit.org): 2001.12.27:\n+ * Default - the DEFAULT constant expression used in the target list of\nINSERT INTO ... 
VALUES (..., DEFAULT, ...)\n+ */\n+ typedef struct Default\n+ {\n+ NodeTag type;\n+ } Default;\n+\n+ /*\n * TypeCast - a CAST expression\n *\n * NOTE: for mostly historical reasons, A_Const and ParamNo parsenodes\ncontain\n***************\n*** 1355,1358 ****\n */\n typedef SortClause GroupClause;\n\n! #endif /* PARSENODES_H */\n--- 1364,1367 ----\n */\n typedef SortClause GroupClause;\n\n! #endif /* PARSENODES_H */\n\\ No newline at end of file\nIndex: src/include/nodes/nodes.h\n===================================================================\nRCS file: /projects/cvsroot/pgsql/src/include/nodes/nodes.h,v\nretrieving revision 1.96\ndiff -c -r1.96 nodes.h\n*** src/include/nodes/nodes.h 2001/11/05 17:46:34 1.96\n--- src/include/nodes/nodes.h 2001/12/29 09:23:37\n***************\n*** 222,227 ****\n--- 222,228 ----\n T_CaseWhen,\n T_FkConstraint,\n T_PrivGrantee,\n+ T_Default, /* pavlo (pb@pbit.org) : 2001.12.27: DEFAULT element of\nthe INSERT INTO target list*/\n\n /*\n * TAGS FOR FUNCTION-CALL CONTEXT AND RESULTINFO NODES (see fmgr.h)\n***************\n*** 365,368 ****\n (jointype) == JOIN_FULL || \\\n (jointype) == JOIN_RIGHT)\n\n! #endif /* NODES_H */\n--- 366,369 ----\n (jointype) == JOIN_FULL || \\\n (jointype) == JOIN_RIGHT)\n\n! #endif /* NODES_H */\n\\ No newline at end of file",
"msg_date": "Sat, 29 Dec 2001 10:28:34 +0100",
"msg_from": "\"Pavlo Baron\" <pb@pbit.org>",
"msg_from_op": true,
"msg_subject": "Re: TODO question"
},
{
"msg_contents": "Tom Lane writes:\n\n> BTW, inserting the actual default expression in the parser is no good.\n> We just finished getting rid of that behavior last month:\n>\n> 2001-11-02 15:23 tgl\n>\n> * src/backend/: catalog/heap.c, optimizer/prep/preptlist.c,\n> parser/analyze.c: Add default expressions to INSERTs during\n> planning, not during parse analysis. This keeps stored rules from\n> prematurely absorbing default information, which is necessary for\n> ALTER TABLE SET DEFAULT to work unsurprisingly with rules. See\n> pgsql-bugs discussion 24-Oct-01.\n>\n> The way I would tackle this is to do the work in\n> transformInsertStmt: if you find a Default node in the input targetlist,\n> you can simply omit the corresponding entry of the column list. In\n> other words, transform\n>\n> INSERT INTO foo (col1, col2, col3) VALUES (11, DEFAULT, 13);\n>\n> into\n>\n> INSERT INTO foo (col1, col3) VALUES (11, 13);\n>\n> Then you can rely on the planner to insert the correct defaults at\nplanning\n> time.\n\nthis is an information of gold! I was really unhappy with my hacks looked\nlike an alien element to me (and surely being those, too). When I look on\nthe pointer chain I used to get the default value... And I'm not sure if my\nsolution would handle every kind of default expr. correctly.\n\n>\n> My inclination would be to twiddle the order of operations so that the\n> Default node is spotted and intercepted before being fed to\n> transformExpr. This would probably mean doing some surgery on\n> transformTargetList. The advantage of doing it that way is that\n> transformExpr and subsequent processing need never see Default nodes\n> and can just error out if they do. The way you have it, Default needs\n> to pass through transformExpr, which raises a bunch of questions in my\n> mind about what parts of the system will need to accept it. 
There's\n> an established distinction between \"pre-parse-analysis\" node types\n> (eg, Attr) and \"post-parse-analysis\" node types (eg, Var), and we\n> understand which places need to support which node types as long as\n> that's the distinction. If you allow Default to pass through\n> transformExpr then you're to some extent allowing it to be a\n> post-parse-analysis node type, which doesn't strike me as a good idea.\n\nI see. Can I say in simple words, that everything that can be parsed without\nany knowledge about the database condition has to be pre-parse-analyzed and\nthe rest has to be post-parse-analyzed, then? Or, things needed to parse\ncorrectly are prepared by the pre-parse analysis? Could you point me to a\nsummary (if any) where I would find a rough desc. of such a distinction?\nI'll re-code to match the basic concepts you've explained here.\n\nBTW, what about constructs like:\n\nINSERT INTO foo VALUES (1,,2);\n\nwouldn't it make the whole thing a bit simpler? or:\n\nINSERT INTO foo(c1, c2, c3) VALUES (,1,);\n\nI find this short form is a bit more finger-friendly and I think it could\nbe implemented without dropping the standard one? This would better match\nyour idea about \"to omit\" the DEFAULT-ed column, since one doesn't care\nwhat value has to be put in if you write smth. like \"...VALUES\n(,,,,,4,,,1,)\". Or, there could be an expression defined if not even DEFAULT\nhas been specified, another expression than the DEFAULT expr., eg, (I just\nspin around :-) ) it could be allowed to define smth. like OMITED appearing\nas a placeholder for a post-parse replacement where eg column references\ncould be used, eg:\n\ndb=# CREATE TABLE foo (c1 int2, c2 int2, c3 int2 OMITED c1*20, c4 int2);\ndb=# INSERT INTO foo values (2, 3,, 4);\ndb=# SELECT c3 FROM foo;\nc3\n----\n40\n(1 row)\n\nrgds\nPavlo Baron\n\n",
"msg_date": "Sat, 29 Dec 2001 13:09:58 +0100",
"msg_from": "\"Pavlo Baron\" <pb@pbit.org>",
"msg_from_op": true,
"msg_subject": "Re: TODO question "
},
{
"msg_contents": "\"Pavlo Baron\" <pb@pbit.org> writes:\n> I see. Can I say in simple words, that everything that can be parsed without\n> any knowledge about the database condition has to be pre-parse-analized and\n> the rest has to be post-parse-analized, then?\n\nExactly. The grammar phase has to operate without examining the\ndatabase, because it needs to be able to run even in transaction-aborted\nstate (so we can recognize COMMIT and ROLLBACK commands). Parse\nanalysis does the lookups and so forth to determine exactly what the\nraw syntax tree really means.\n\n> summary (if any) where I would find a rough desc. on such distinction?\n\nThere's no substitute for reading the code ... the point above is\nmentioned in the comments at the head of gram.y, for example.\n\n> BTW, what about constructs like:\n> INSERT INTO foo VALUES (1,,2);\n> wouldn't it make the whole thing a bit simpler?\n\nMaybe, but it's not SQL92. Looks kind of error-prone to me anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 29 Dec 2001 11:54:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TODO question "
},
{
"msg_contents": "Tom Lane writes:\n\n> My inclination would be to twiddle the order of operations so that the\n> Default node is spotted and intercepted before being fed to\n> transformExpr. This would probably mean doing some surgery on\n> transformTargetList.\n\nWhy transformTargetList? The comment above this function says that it's\nindiffernet if there is a SELECT or INSERT or an other stmt. being parsed -\nthe expressions are just to be transformed? Woudn't it break this\nindifference if I add a new branch handling with the Default node to this\nfunction? Or is it uncritical if I simply add it without any distinction of\nwhat stmt. is being parsed? Could it become a problem later if the Default\nnode is reused by some other stmt.?\n\nrgds\nPavlo Baron\n\n",
"msg_date": "Sat, 29 Dec 2001 21:12:09 +0100",
"msg_from": "\"Pavlo Baron\" <pb@pbit.org>",
"msg_from_op": true,
"msg_subject": "Re: TODO question "
},
{
"msg_contents": "\"Pavlo Baron\" <pb@pbit.org> writes:\n> Tom Lane writes:\n>> My inclination would be to twiddle the order of operations so that the\n>> Default node is spotted and intercepted before being fed to\n>> transformExpr. This would probably mean doing some surgery on\n>> transformTargetList.\n\n> Why transformTargetList? The comment above this function says that it's\n> indiffernet if there is a SELECT or INSERT or an other stmt. being parsed -\n> the expressions are just to be transformed? Woudn't it break this\n> indifference if I add a new branch handling with the Default node to this\n> function?\n\nWell, you could either assume that a DEFAULT node must be okay if\npresent (relying on the grammar to have allowed it only for INSERT),\nor you could add a parameter to transformTargetList saying whether it's\ndealing with an INSERT list or not. If not, it could error out if it\nsees a DEFAULT node. This might be better than rejecting DEFAULT in\nthe grammar, since you could give a more specific error message than\njust \"parse error\"; and you wouldn't need two separate targetlist\nconstructs in the grammar.\n\nYou also need to think about what to return in the output targetlist if\nyou see a DEFAULT node. You can't build a correct, valid Resdom since\nyou have no info about datatype. I think I'd be inclined to return the\nDEFAULT node in place of a TargetEntry, and then transformInsertStmt\nwould be changed to discard list items that weren't TargetEntrys. A\nlittle ugly, but probably less ugly than building a not-really-valid\nTargetEntry structure.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 29 Dec 2001 15:56:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TODO question "
},
{
"msg_contents": "Tom Lane writes:\n> Well, you could either assume that a DEFAULT node must be okay if\n> present (relying on the grammar to have allowed it only for INSERT),\n> or you could add a parameter to transformTargetList saying whether it's\n> dealing with an INSERT list or not. If not, it could error out if it\n> sees a DEFAULT node. This might be better than rejecting DEFAULT in\n> the grammar, since you could give a more specific error message than\n> just \"parse error\"; and you wouldn't need two separate targetlist\n> constructs in the grammar.\n\nOh, I really don't think I should modify the parameter list of such a\n*present* function. The comment on it looked very *proud* to me, proud\nbecause it highlights that it doesn't matter which stmt. is\ncoming in, so everything breaking this *pride* would be an evil hack,\nwouldn't it?\nFurthermore, Bruce says such a small'n'easy hack must match everybody's\nexpectations before it gets a chance to be accepted ,)\nSeriously: I know it's not the finest method to change the grammar, though\nit looks very compact to me.\nQuestions: I followed your recommendation to create a new parse node for\nDEFAULT. Is this step ok? I'll additionally take a look at what to do to\n*fully* integrate a new node - Bruce said maybe he would be able to help a\nlittle. Further, if the previous step is correct, do I understand you\nright: you are not happy with rejecting DEFAULT in cases other than\nINSERT, but DEFAULT should be recognized anyway, then translated to my new\nparse node \"Default\" (the grammar ends here) and then handled in\ntransformTargetList? Or did you mean that the grammar doesn't have to be changed\nat all and DEFAULT is to be handled later from the string constant\n(unknown type)?\nI'll take a look at the call chain leading to the final call of\ntransformTargetList - maybe I'll find a way to avoid a modification of its\nparameter list. 
Is there a chance to handle Default somewhere *above*\ntransformTargetList without modifying its parameters?\nIf not, can I really feel free to modify this parameter list and adapt\nit everywhere I find a call to this function?\n*But*: what would you say if I add an intermediate function called instead of\ntransformTargetList in the case of INSERT, doing its thing and finally calling\ntransformTargetList? Something like transformInsertTargetList? Wouldn't it be\nmore elegant than modifying the parameter list of one of the basic\ntransformation functions? Then I could add a case to transformTargetList\nwhere I would generate an error on every other Default parse node coming\nin, denying it and explaining its cause? Would that be ok?\n\nrgds\nPavlo Baron\n\n",
"msg_date": "Sat, 29 Dec 2001 23:06:48 +0100",
"msg_from": "\"Pavlo Baron\" <pb@pbit.org>",
"msg_from_op": true,
"msg_subject": "Re: TODO question "
},
{
"msg_contents": "> oops, a new patch attached: the last one wouldn't compile because of missing\n> T_Default value I just forgot to diff-in (or out) - sorry\n> \n> rgds\n> Pavlo Baron\n\nNever mind. I see it now. We still need to discuss it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Feb 2002 20:51:02 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TODO question"
},
{
"msg_contents": "Bruce writes:\n> > oops, a new patch attached: the last one wouldn't compile because of\nmissing\n> > T_Default value I just forgot to diff-in (or out) - sorry\n> >\n> > rgds\n> > Pavlo Baron\n>\n> Never mind. I see it now. We still need to discuss it.\n>\n\nIs it now the right time to discuss new features? I had stopped bombing\nyou guys with my questions while you were busy with the new release\ncompletion.\n\nThere was some communication between Tom Lane and me about where to\nimplement that INSERT...DEFAULT fix - now I really don't know where to\nbegin. Do you have an idea of how that thing can start rolling now?\n\nrgds\nPavlo Baron\n\n",
"msg_date": "Sat, 2 Mar 2002 21:01:44 +0100",
"msg_from": "\"Pavlo Baron\" <pb@pbit.org>",
"msg_from_op": true,
"msg_subject": "Re: TODO question"
},
{
"msg_contents": "\nDouble-oops. There is a later one under a different email subject\ncalled TODO questino that is a context diff. I just need to know if\nthis is the newest version and if anyone doesn't like it.\n\nThis adds DEFAULT functionality to the INSERT values list.\n\n> oops, a new patch attached: the last one wouldn't compile because of missing\n> T_Default value I just forgot to diff-in (or out) - sorry\n> \n> rgds\n> Pavlo Baron\n> \n> \n> Index: src/backend/parser/parse_target.c\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/backend/parser/parse_target.c,v\n> retrieving revision 1.76\n> diff -c -r1.76 parse_target.c\n> *** src/backend/parser/parse_target.c 2001/11/05 17:46:26 1.76\n> --- src/backend/parser/parse_target.c 2001/12/28 23:13:43\n> ***************\n> *** 60,65 ****\n> --- 60,77 ----\n> if (IsA(expr, Ident) &&((Ident *) expr)->isRel)\n> elog(ERROR, \"You can't use relation names alone in the target list, try\n> relation.*.\");\n> \n> + /* pavlo (pb@pbit.org: 2001-12-27: handle the DEFAULT in INSERT\n> INTO foo VALUES (..., DEFAULT, ...))*/\n> + if (IsA(expr, Default))\n> + {\n> + if (pstate->p_target_relation->rd_att->attrs[(AttrNumber)\n> pstate->p_last_resno - 1]->atthasdef)\n> + {\n> + Const *con = (Const *)\n> stringToNode(pstate->p_target_relation->rd_att->constr->defval[(AttrNumber)\n> pstate->p_last_resno - 1].adbin);\n> + expr = con;\n> + }\n> + else\n> + elog(ERROR, \"no default value for column \\\"%s\\\"\n> found\\nDEFAULT cannot be inserted\", colname);\n> + }\n> +\n> type_id = exprType(expr);\n> type_mod = exprTypmod(expr);\n> \n> ***************\n> *** 260,266 ****\n> {\n> tle->expr = CoerceTargetExpr(pstate, tle->expr, type_id,\n> attrtype, attrtypmod);\n> ! 
if (tle->expr == NULL)\n> elog(ERROR, \"column \\\"%s\\\" is of type '%s'\"\n> \" but expression is of type '%s'\"\n> \"\\n\\tYou will need to rewrite or cast the expression\",\n> --- 272,278 ----\n> {\n> tle->expr = CoerceTargetExpr(pstate, tle->expr, type_id,\n> attrtype, attrtypmod);\n> ! if (tle->expr == NULL)\n> elog(ERROR, \"column \\\"%s\\\" is of type '%s'\"\n> \" but expression is of type '%s'\"\n> \"\\n\\tYou will need to rewrite or cast the expression\",\n> ***************\n> *** 299,305 ****\n> int32 attrtypmod)\n> {\n> if (can_coerce_type(1, &type_id, &attrtype))\n> ! expr = coerce_type(pstate, expr, type_id, attrtype, attrtypmod);\n> \n> #ifndef DISABLE_STRING_HACKS\n> \n> --- 311,317 ----\n> int32 attrtypmod)\n> {\n> if (can_coerce_type(1, &type_id, &attrtype))\n> ! expr = coerce_type(pstate, expr, type_id, attrtype,\n> attrtypmod);\n> \n> #ifndef DISABLE_STRING_HACKS\n> \n> ***************\n> *** 525,528 ****\n> }\n> \n> return strength;\n> ! }\n> --- 537,540 ----\n> }\n> \n> return strength;\n> ! }\n> \\ No newline at end of file\n> Index: src/backend/parser/parse_expr.c\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/backend/parser/parse_expr.c,v\n> retrieving revision 1.105\n> diff -c -r1.105 parse_expr.c\n> *** src/backend/parser/parse_expr.c 2001/11/12 20:05:24 1.105\n> --- src/backend/parser/parse_expr.c 2001/12/28 23:15:08\n> ***************\n> *** 121,126 ****\n> --- 121,131 ----\n> result = (Node *) make_const(val);\n> break;\n> }\n> + case T_Default: /* pavlo (pb@pbit.org): 2001-12-27:\n> transormation for the DEFAULT value for INSERT INTO foo VALUES (...,\n> DEFAULT, ...)*/\n> + {\n> + result = (Default *) expr;\n> + break;\n> + }\n> case T_ParamNo:\n> {\n> ParamNo *pno = (ParamNo *) expr;\n> ***************\n> *** 1066,1069 ****\n> }\n> else\n> return typename->name;\n> ! }\n> --- 1071,1074 ----\n> }\n> else\n> return typename->name;\n> ! 
}\n> \\ No newline at end of file\n> Index: src/backend/parser/gram.y\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/backend/parser/gram.y,v\n> retrieving revision 2.276\n> diff -c -r2.276 gram.y\n> *** src/backend/parser/gram.y 2001/12/09 04:39:39 2.276\n> --- src/backend/parser/gram.y 2001/12/28 23:15:39\n> ***************\n> *** 195,201 ****\n> opt_column_list, columnList, opt_name_list,\n> sort_clause, sortby_list, index_params, index_list, name_list,\n> from_clause, from_list, opt_array_bounds,\n> ! expr_list, attrs, target_list, update_target_list,\n> def_list, opt_indirection, group_clause, TriggerFuncArgs,\n> select_limit, opt_select_limit\n> \n> --- 195,201 ----\n> opt_column_list, columnList, opt_name_list,\n> sort_clause, sortby_list, index_params, index_list, name_list,\n> from_clause, from_list, opt_array_bounds,\n> ! expr_list, attrs, target_list, insert_target_list, update_target_list,\n> def_list, opt_indirection, group_clause, TriggerFuncArgs,\n> select_limit, opt_select_limit\n> \n> ***************\n> *** 253,259 ****\n> %type <node> table_ref\n> %type <jexpr> joined_table\n> %type <range> relation_expr\n> ! %type <target> target_el, update_target_el\n> %type <paramno> ParamNo\n> \n> %type <typnam> Typename, SimpleTypename, ConstTypename\n> --- 253,259 ----\n> %type <node> table_ref\n> %type <jexpr> joined_table\n> %type <range> relation_expr\n> ! %type <target> target_el, update_target_el, insert_target_el\n> %type <paramno> ParamNo\n> \n> %type <typnam> Typename, SimpleTypename, ConstTypename\n> ***************\n> *** 3302,3308 ****\n> }\n> ;\n> \n> ! insert_rest: VALUES '(' target_list ')'\n> {\n> $$ = makeNode(InsertStmt);\n> $$->cols = NIL;\n> --- 3302,3308 ----\n> }\n> ;\n> \n> ! 
insert_rest: VALUES '(' insert_target_list ')'\n> {\n> $$ = makeNode(InsertStmt);\n> $$->cols = NIL;\n> ***************\n> *** 5482,5488 ****\n> *\n> \n> ****************************************************************************\n> */\n> \n> ! /* Target lists as found in SELECT ... and INSERT VALUES ( ... ) */\n> \n> target_list: target_list ',' target_el\n> { $$ = lappend($1, $3); }\n> --- 5482,5488 ----\n> *\n> \n> ****************************************************************************\n> */\n> \n> ! /* Target lists as found in SELECT ... */\n> \n> target_list: target_list ',' target_el\n> { $$ = lappend($1, $3); }\n> ***************\n> *** 5490,5496 ****\n> --- 5490,5522 ----\n> { $$ = makeList1($1); }\n> ;\n> \n> + /* Target lists as found in INSERT VALUES ( ... ) */\n> +\n> + /* pavlo (pb@pbit.org): 2001-12-27: parse node based handling for the\n> DEFAULT value added;\n> + now it's possible to INSERT INTO ... VALUES (..., FEFAULT, ...)!\n> + */\n> + insert_target_list: insert_target_list ',' insert_target_el\n> + { $$ = lappend($1, $3); }\n> + | insert_target_el\n> + { $$ = makeList1($1); }\n> + | insert_target_list ',' target_el\n> + { $$ = lappend($1, $3); }\n> + | target_el\n> + { $$ = makeList1($1); }\n> + ;\n> +\n> + insert_target_el: DEFAULT\n> + {\n> + Default *n = makeNode(Default);\n> + $$ = makeNode(ResTarget);\n> + $$->name = NULL;\n> + $$->indirection = NULL;\n> + $$->val = (Node *)n;\n> + }\n> + ;\n> +\n> /* AS is not optional because shift/red conflict with unary ops */\n> +\n> target_el: a_expr AS ColLabel\n> {\n> $$ = makeNode(ResTarget);\n> ***************\n> *** 6380,6383 ****\n> strcpy(newval+1, oldval);\n> v->val.str = newval;\n> }\n> ! }\n> --- 6406,6409 ----\n> strcpy(newval+1, oldval);\n> v->val.str = newval;\n> }\n> ! 
}\n> \\ No newline at end of file\n> Index: src/include/nodes/parsenodes.h\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/include/nodes/parsenodes.h,v\n> retrieving revision 1.151\n> diff -c -r1.151 parsenodes.h\n> *** src/include/nodes/parsenodes.h 2001/11/05 17:46:34 1.151\n> --- src/include/nodes/parsenodes.h 2001/12/28 23:16:30\n> ***************\n> *** 1005,1010 ****\n> --- 1005,1019 ----\n> } A_Const;\n> \n> /*\n> + * pavlo (pb@pbit.org): 2001.12.27:\n> + * Default - the DEFAULT constant expression used in the target list of\n> INSERT INTO ... VALUES (..., DEFAULT, ...)\n> + */\n> + typedef struct Default\n> + {\n> + NodeTag type;\n> + } Default;\n> +\n> + /*\n> * TypeCast - a CAST expression\n> *\n> * NOTE: for mostly historical reasons, A_Const and ParamNo parsenodes\n> contain\n> ***************\n> *** 1355,1358 ****\n> */\n> typedef SortClause GroupClause;\n> \n> ! #endif /* PARSENODES_H */\n> --- 1364,1367 ----\n> */\n> typedef SortClause GroupClause;\n> \n> ! #endif /* PARSENODES_H */\n> \\ No newline at end of file\n> Index: src/include/nodes/nodes.h\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/include/nodes/nodes.h,v\n> retrieving revision 1.96\n> diff -c -r1.96 nodes.h\n> *** src/include/nodes/nodes.h 2001/11/05 17:46:34 1.96\n> --- src/include/nodes/nodes.h 2001/12/29 09:23:37\n> ***************\n> *** 222,227 ****\n> --- 222,228 ----\n> T_CaseWhen,\n> T_FkConstraint,\n> T_PrivGrantee,\n> + T_Default, /* pavlo (pb@pbit.org) : 2001.12.27: DEFAULT element of\n> the INSERT INTO target list*/\n> \n> /*\n> * TAGS FOR FUNCTION-CALL CONTEXT AND RESULTINFO NODES (see fmgr.h)\n> ***************\n> *** 365,368 ****\n> (jointype) == JOIN_FULL || \\\n> (jointype) == JOIN_RIGHT)\n> \n> ! #endif /* NODES_H */\n> --- 366,369 ----\n> (jointype) == JOIN_FULL || \\\n> (jointype) == JOIN_RIGHT)\n> \n> ! 
#endif /* NODES_H */\n> \\ No newline at end of file\n> \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 7 Mar 2002 20:03:52 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TODO question"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Double-oops. There is a later one under a different email subject\n> called TODO questino that is a context diff. I just need to know if\n> this is the newest version and if anyone doesn't like it.\n\nI don't like it. As given, it inserts default values into the query\nat parse time, whereas this must not be done until planning time.\n(Otherwise the defaults sneak into stored rules, and if you change\ndefaults with ALTER TABLE you will get unexpected results.) The\ncorrect (and actually easier) way is to simply drop the defaulted column\nout of the analyzed query altogether.\n\nThis is not Pavlo's fault exactly, since he copied the way we used\nto do it in 7.1 ... but the patch must be updated to follow 7.2\npractice.\n\nAnother problem: no copy/equal/outfuncs support for the added node type.\n\nStylistic issue: we should discourage people from putting their initials\non every bit of code they touch. The code will soon be unreadable if\nsuch becomes common practice.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Mar 2002 21:02:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TODO question "
},
{
"msg_contents": "\nGood analysis. Thanks. Pavlov, I know this code is complex. I think\nthese tips will help.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Double-oops. There is a later one under a different email subject\n> > called TODO questino that is a context diff. I just need to know if\n> > this is the newest version and if anyone doesn't like it.\n> \n> I don't like it. As given, it inserts default values into the query\n> at parse time, whereas this must not be done until planning time.\n> (Otherwise the defaults sneak into stored rules, and if you change\n> defaults with ALTER TABLE you will get unexpected results.) The\n> correct (and actually easier) way is to simply drop the defaulted column\n> out of the analyzed query altogether.\n> \n> This is not Pavlo's fault exactly, since he copied the way we used\n> to do it in 7.1 ... but the patch must be updated to follow 7.2\n> practice.\n> \n> Another problem: no copy/equal/outfuncs support for the added node type.\n> \n> Stylistic issue: we should discourage people from putting their initials\n> on every bit of code they touch. The code will soon be unreadable if\n> such becomes common practice.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 7 Mar 2002 21:05:12 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TODO question"
},
{
"msg_contents": "Tom Lane wrote:\n\n> I don't like it. As given, it inserts default values into the query\n> at parse time, whereas this must not be done until planning time.\n> (Otherwise the defaults sneak into stored rules, and if you change\n> defaults with ALTER TABLE you will get unexpected results.) The\n\n:(\nBut of course you are right - my solution was just a hack to learn about the\ninternals and I really didn't expect my first patch to be accepted.\n\n> correct (and actually easier) way is to simply drop the defaulted column\n> out of the analyzed query altogether.\n\nI know what you mean.\n\n>\n> This is not Pavlo's fault exactly, since he copied the way we used\n> to do it in 7.1 ... but the patch must be updated to follow 7.2\n> practice.\n\nI couldn't have known about such details. But I see my solution wasn't completely\nwrong.\n\n>\n> Another problem: no copy/equal/outfuncs support for the added node type.\n\nIs it also interesting in the case of the DEFAULT-pair being dropped?\n\n>\n> Stylistic issue: we should discourage people from putting their initials\n> on every bit of code they touch. The code will soon be unreadable if\n> such becomes common practice.\n\nUnderstandable. I don't want to immortalize my initials in a foreign work.\nMy comments were just orientation marks for me, to quickly find my own\ncode. As you know, I just wanted to know if I'm on the right track\nexperimenting with this TODO issue.\n\nSorry, but I don't think I should provide this patch. I really don't have\nenough energy and time to become an internals guru. This software is too\nmighty and its internals contain too many special thoughts of its core\ndevelopers, so I would never get the required motivation to participate in\nkernel development.\nWhat I'll look at next is what I wanted to do at the beginning\nof my postgresql-dev-adventure. I'm developing very complex applications\nbased on Oracle. 
IMHO, it's an expensive and overheaded colossus, so Pg is\nmy absolute favorite when I develop a low-cost web-based solution for somebody\nwho doesn't want to (or can't) pay a sack of money for the database.\nRoss J. Reedstrom pointed me to the developer group working on the\nOracle-related issues. So I'll contact those guys to discuss some ideas I've\ngot after working with both Pg and Oracle.\n\nBTW, here's something really funny about that TODO issue: somebody sent me an\nSQL script containing inserts using that \"default\" placeholder, and my\nmodified Pg version did it! But now I use the stable version without my\nmodifications.\n\nrgds\nPavlo Baron\n\n",
"msg_date": "Sat, 9 Mar 2002 09:45:38 +0100",
"msg_from": "\"Pavlo Baron\" <pb@pbit.org>",
"msg_from_op": true,
"msg_subject": "Re: TODO question "
},
{
"msg_contents": "\"Pavlo Baron\" <pb@pbit.org> writes:\n>> Another problem: no copy/equal/outfuncs support for the added node type.\n\n> Is it also interesting in the case of the DEFAULT-pair being dropped?\n\nYes, mainly on general principles: for example, debug dumps of raw parse\ntrees will fail if there's not outfuncs support for your node type.\n\nWe've been lax on this in the past but I think it's important to try to\nkeep those support routines complete.\n\n> Sorry, but I don't think I should provide this patch.\n\nSorry to hear that. It needs to be done, and you are not that far away\nfrom having it done.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 09 Mar 2002 10:26:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TODO question "
}
] |
[
{
"msg_contents": "This TODO-item \"SELECT col FROM tab WHERE numeric_col = 10.1 fails, requires\nquotes\" - is it already fixed? I executed something like\n\nselect * from tab2 where dpcol = 10.1;\n\nand it returns a record containing this value in the double precision column\n\"dpcol\".\n\nrgds\nPavlo Baron\n\n",
"msg_date": "Thu, 27 Dec 2001 22:09:14 +0100",
"msg_from": "\"Pavlo Baron\" <pb@pbit.org>",
"msg_from_op": true,
"msg_subject": "TODO question"
},
{
"msg_contents": "> This TODO-item \"SELECT col FROM tab WHERE numeric_col = 10.1 fails, requires\n> quotes\" - is it already fixed? ...\n\nthomas=# create table t3 (n numeric);\nCREATE\nthomas=# select * from t3 where n = 10.1;\nERROR: Unable to identify an operator '=' for types 'numeric' and\n'double precision'\n\tYou will have to retype this query using an explicit cast\nthomas=# select * from t3 where n = '10.1';\n n \n---\n(0 rows)\n\n\n - Thomas\n",
"msg_date": "Thu, 27 Dec 2001 21:32:18 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: TODO question"
},
{
"msg_contents": "Thomas Lockhart:\n> > This TODO-item \"SELECT col FROM tab WHERE numeric_col = 10.1 fails,\nrequires\n> > quotes\" - is it already fixed? ...\n>\n> thomas=# create table t3 (n numeric);\n> CREATE\n> thomas=# select * from t3 where n = 10.1;\n> ERROR: Unable to identify an operator '=' for types 'numeric' and\n> 'double precision'\n> You will have to retype this query using an explicit cast\n> thomas=# select * from t3 where n = '10.1';\n> n\n> ---\n> (0 rows)\n\noops, I should clean my eyeballs: my test case was:\n\ncreate table t3 (n double precision);\nselect * from t3 where n = 10.1;\nn\n---\n(0 rows)\n\nthe problem lies in the numeric field - let's see if I can find out where to\nfix it\n\nrgds\nPavlo Baron\n\n",
"msg_date": "Thu, 27 Dec 2001 22:50:21 +0100",
"msg_from": "\"Pavlo Baron\" <pb@pbit.org>",
"msg_from_op": true,
"msg_subject": "Re: TODO question"
}
] |
[
{
"msg_contents": "These things look like bugs to me:\n\nheap_xlog_clean doesn't update the page's LSN and SUI.\n\nheap_xlog_insert and heap_xlog_update leave random bits in the\ninserted/updated tuple's t_ctid. heap_xlog_update leaves random\nbits in the updated tuple's t_cmax in the \"move\" case.\n\nI am not sure about what sort of visible fault these omissions could\nproduce; perhaps none. But they look like dangerous things IMHO.\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 27 Dec 2001 21:02:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Glitches in heapam.c xlog redo routines"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> These things look like bugs to me:\n> \n> heap_xlog_clean doesn't update the page's LSN and SUI.\n> \n> heap_xlog_insert and heap_xlog_update leave random bits in the\n> inserted/updated tuple's t_ctid. heap_xlog_update leaves random\n> bits in the updated tuple's t_cmax in the \"move\" case.\n\nAs far as I see, t_cmax isn't used anywhere once a tuple\nwas committed.\n\n> \n> I am not sure about what sort of visible fault these omissions could\n> produce; perhaps none. But they look like dangerous things IMHO.\n> Comments?\n\nI'm not sure about the heap_xlog_clean case. Updating\nLSN and SUI seems better to me.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Fri, 28 Dec 2001 13:13:31 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Glitches in heapam.c xlog redo routines"
}
] |
[
{
"msg_contents": "Hi All,\n\nHas anyone used pltclu in postgres (version 7.1, on\nLinux 7.1)? I wanted to use a function called pgmail\nwritten in pltclu. But I am not able to create that\nfunction in postgres as the language pltclu is not\ncreated. I tried to create the language pltclu, but\ncouldn't succeed as I was not able to locate the\nrequired .so file on my Linux system. For using the\nsocket command of Tcl in a postgres function you need\npltclu.\n\nI had created the language plpgsql using the file\nplpgsql.so and have also written functions in plpgsql.\n\n\nThe pgmail function mentioned above helps to send mail\nfrom postgres. Is there any way other than pgmail,\nusing plpgsql, to send mail from postgres?\n\n\n\nRegards,\n\nJay\n\n__________________________________________________\nDo You Yahoo!?\nSend your FREE holiday greetings online!\nhttp://greetings.yahoo.com\n",
"msg_date": "Thu, 27 Dec 2001 21:07:06 -0800 (PST)",
"msg_from": "Jayaraj Oorath <jayoorath@yahoo.com>",
"msg_from_op": true,
"msg_subject": "pltclu"
}
] |
[
{
"msg_contents": "I got this error from the JDBC driver while performing an\nupdate. Does\nanyone know what causes it? I am using PostgreSQL 7.1.3\nand its JDBC\ndriver.\n\nThanks,\nKevin\n",
"msg_date": "Fri, 28 Dec 2001 12:12:58 -0800 (PST)",
"msg_from": "Kevin Xiao <kevin_xiao@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Error: SET TRANSACTION ISOLATION LEVEL must be called before any\n query"
}
] |
[
{
"msg_contents": "For 7.2, to support some ISO-8601 variants, I'm tightening up the date\ndelimiter parsing to require the same delimiter to be used between all\nparts of a date.\n\nDoes anyone use the German date notation for PostgreSQL? If so, what is\nthe actual format you input? The reasons I'm asking are:\n\no I had recalled that the format was \"dd.mm/yyyy\", but actually\nPostgreSQL emits \"dd.mm.yyyy\".\n\no By tightening up the parsing, \"dd.mm.yyyy\" would be accepted, but\n\"dd.mm/yyyy\", \"yyyy.mm-dd\", etc would not.\n\no The stricter parsing in this area would allow more general parsing\nelsewhere, enabling other variants such as\n\n yyyymmddThhmmss\n yyyymmdd hhmmss.ss-zz\n Thhmmss-zz\n\nWith these changes, more formats should be correctly handled, including\nsome edge cases which should have worked but seemed not to; the current\nregression tests still all pass. As an example of edge case troubles,\n7.1 accepts both of the following:\n\n timestamp '2001-12-27 04:05:06-08'\n timestamp '2001-12-27 040506 -08'\n\nBut rejects the latter if the space before the time zone is removed:\n\n timestamp '2001-12-27 040506-08'\n\nComments? Suggestions?\n\n - Thomas\n",
"msg_date": "Sat, 29 Dec 2001 03:39:42 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "date/time formats in 7.2"
}
] |
[
{
"msg_contents": "Hi all!\n\nI have an Access 2000 database (under Windows 2000) which has tables linked\nfrom a PostgreSQL database. These tables have been linked using the\nPostgreSQL ODBC driver 7.01.0004 or some version around that (sorry, I don't\nremember the exact version). The Access ODBC connect string of one table is:\n\n ODBC;DSN=lmdb;DATABASE=lmdb;SERVER=sphinx.link-m.de;PORT=5432;\n UID=lmdb;READONLY=0;PROTOCOL=6.4;FAKEOIDINDEX=0;SHOWOIDCOLUMN=0;\n ROWVERSIONING=1;SHOWSYSTEMTABLES=0;CONNSETTINGS=;TABLE=**Logfile\n\nHowever, I have now installed a newer version of the ODBC driver\n(7.01.0009), and when I create a new table link for the *same* table, I get\nquite a different ODBC connect string:\n\n ODBC;DSN=lmdb;DATABASE=lmdb;SERVER=sphinx.link-m.de;PORT=5432;\n A0=0;A1=6.4;A2=0;A3=0;A4=1;A5=0;A6=;A7=100;A8=4096;A9=0;B0=254;\n B1=8190;B2=0;B3=0;B4=1;B5=1;B6=0;B7=1;B8=0;B9=0;C0=0;C1=0;\n C2=dd_;;TABLE=**Logfile\n\nQuestions:\n\n* Did the ODBC connect string format indeed change between ODBC driver\nversions?\n* Can I still use the old format with newer ODBC drivers without having to\nexpect any disadvantages or bugs?\n* Is the new format better? (It seems to include more of the parameters that\ncan be set in the Windows ODBC control panel applet for the PostgreSQL ODBC\ndriver, but on the other hand, it seems to miss parameters, like \"UID\".)\n\nThis really confuses me, and I would appreciate any insights you can give\nme!\n\nThanks in advance,\nJulian Mehnle.\n--\nLinksystem Muenchen GmbH info@link-m.de\nSchloerstrasse 10 http://www.link-m.de\n80634 Muenchen Tel. 089 / 890 518-0\nWe make the Net work. Fax 089 / 890 518-77\n\n\n\n",
"msg_date": "Sat, 29 Dec 2001 16:58:54 +0100",
"msg_from": "\"Julian Mehnle, Linksystem Muenchen\" <j.mehnle@buero.link-m.de>",
"msg_from_op": true,
"msg_subject": "ODBC connect string format differs between ODBC driver versions?"
},
{
"msg_contents": "\"Julian Mehnle, Linksystem Muenchen\" wrote:\n> \n> Hi all!\n> \n> I have an Access 2000 database (under Windows 2000) which has tables linked\n> from a PostgreSQL database. These tables have been linked using the\n> PostgreSQL ODBC driver 7.01.0004 or some version around that (sorry, I don't\n> remember the exact version). The Access ODBC connect string of one table is:\n> \n> ODBC;DSN=lmdb;DATABASE=lmdb;SERVER=sphinx.link-m.de;PORT=5432;\n> UID=lmdb;READONLY=0;PROTOCOL=6.4;FAKEOIDINDEX=0;SHOWOIDCOLUMN=0;\n> ROWVERSIONING=1;SHOWSYSTEMTABLES=0;CONNSETTINGS=;TABLE=**Logfile\n> \n> However, I have now installed a newer version of the ODBC driver\n> (7.01.0009), and when I create a new table link for the *same* table, I get\n> quite a different ODBC connect string:\n> \n> ODBC;DSN=lmdb;DATABASE=lmdb;SERVER=sphinx.link-m.de;PORT=5432;\n> A0=0;A1=6.4;A2=0;A3=0;A4=1;A5=0;A6=;A7=100;A8=4096;A9=0;B0=254;\n> B1=8190;B2=0;B3=0;B4=1;B5=1;B6=0;B7=1;B8=0;B9=0;C0=0;C1=0;\n> C2=dd_;;TABLE=**Logfile\n> \n> Questions:\n> \n> * Did the ODBC connect string format indeed change between ODBC driver\n> versions?\n\nYes it was changed in 7.01.0007. Most driver options are changed\nto be DSN options.\n\n> * Can I still use the old format with newer ODBC drivers without having to\n> expect any disadvantages or bugs?\n\nUnfortunately I couldn't guarantee that the change was\nbug-free. However I have seen no bug report other than\nimplicit *too long connect string* bug. A0=,B1= etc are\nabbreviated format to shorten connect strings. Original\nformat READONLY= etc is still allowed.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Mon, 31 Dec 2001 07:24:06 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: ODBC connect string format differs between ODBC driver versions?"
}
] |
[
{
"msg_contents": "I've committed changes to tighten up the date/time parsing, while\nenabling more edge cases which should have worked but didn't. The\nchanges have the following effects:\n\n1) date fields like \"2001-12-29\" need the same delimiter between all\nfields. Previously, any of \"-\", \"/\", or \".\" might have been accepted;\nnow the second delimiter must match the first.\n\n2) Julian day actually works for both input (with a leading \"J\") and\noutput (using EXTRACT('julian' from timestamp)).\n\n3) the ISO-8601 identifier for time fields (\"T\") now works for all cases\nI can think of to test. So '20011227T040506.789' and other variants now\nare recognized.\n\n4) more cases of time zones are interpreted correctly. Previously, some\ncases were sensitive to leading spaces or lack thereof.\n\n5) regression tests still pass, and I have some additional regression\ntest cases to add after the upcoming 7.2 release (and those pass too).\n\n - Thomas\n",
"msg_date": "Sat, 29 Dec 2001 18:55:55 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Updated date/time parsing"
}
] |
[
{
"msg_contents": "After some further experimentation, I believe I understand the reason for\nthe reports we've had of 7.2 producing heavy context-swap activity where\n7.1 didn't. Here is an extract from tracing lwlock activity for one\nbackend in a pgbench run:\n\n2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(0): awakened\n2001-12-29 13:30:30 [31442] DEBUG: LWLockRelease(0): excl 1 shared 0 head 0x422c27d4\n2001-12-29 13:30:30 [31442] DEBUG: LWLockRelease(0): release waiter\n2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(300): excl 0 shared 0 head (nil)\n2001-12-29 13:30:30 [31442] DEBUG: LWLockRelease(300): excl 0 shared 1 head (nil)\n2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(0): excl 1 shared 0 head 0x422c2bfc\n2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(0): waiting\n2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(0): awakened\n2001-12-29 13:30:30 [31442] DEBUG: LWLockRelease(0): excl 1 shared 0 head 0x422c27d4\n2001-12-29 13:30:30 [31442] DEBUG: LWLockRelease(0): release waiter\n2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(232): excl 0 shared 0 head (nil)\n2001-12-29 13:30:30 [31442] DEBUG: LWLockRelease(232): excl 0 shared 1 head (nil)\n2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(300): excl 0 shared 0 head (nil)\n2001-12-29 13:30:30 [31442] DEBUG: LWLockRelease(300): excl 0 shared 1 head (nil)\n2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(0): excl 1 shared 0 head 0x422c2bfc\n2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(0): waiting\n2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(0): awakened\n2001-12-29 13:30:30 [31442] DEBUG: LWLockRelease(0): excl 1 shared 0 head 0x422c27d4\n2001-12-29 13:30:30 [31442] DEBUG: LWLockRelease(0): release waiter\n2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(232): excl 0 shared 0 head (nil)\n2001-12-29 13:30:30 [31442] DEBUG: LWLockRelease(232): excl 0 shared 1 head (nil)\n2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(300): excl 0 shared 0 head (nil)\n2001-12-29 13:30:30 [31442] DEBUG: 
LWLockRelease(300): excl 0 shared 1 head (nil)\n2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(0): excl 1 shared 0 head 0x422c2bfc\n2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(0): waiting\n2001-12-29 13:30:30 [31442] DEBUG: LWLockAcquire(0): awakened\n2001-12-29 13:30:30 [31442] DEBUG: LWLockRelease(0): excl 1 shared 0 head 0x422c27d4\n2001-12-29 13:30:30 [31442] DEBUG: LWLockRelease(0): release waiter\n\nLWLock 0 is the BufMgrLock, while the locks with numbers like 232 and\n300 are context locks for individual buffers. At the beginning of this\ntrace we see the process awoken after having been granted the\nBufMgrLock. It does a small amount of processing (probably a ReadBuffer\noperation) and releases the BufMgrLock. At that point, someone else is\nalready waiting for BufMgrLock, and the line about \"release waiter\"\nmeans that ownership of BufMgrLock has been transferred to that other\nsomeone. Next, the context lock 300 is acquired and released (there's no\ncontention for it). Next we need to get the BufMgrLock again (probably\nto do a ReleaseBuffer). Since we've already granted the BufMgrLock to\nsomeone else, we are forced to block here. When control comes back,\nwe do the ReleaseBuffer and then release the BufMgrLock --- again,\nimmediately granting it to someone else. That guarantees that our next\nattempt to acquire BufMgrLock will cause us to block. The cycle repeats\nfor every attempt to lock BufMgrLock.\n\nIn essence, what we're seeing here is a \"tag team\" behavior: someone is\nalways waiting on the BufMgrLock, and so each LWLockRelease(BufMgrLock)\ntransfers lock ownership to someone else; then the next\nLWLockAcquire(BufMgrLock) in the same process is guaranteed to block;\nand that means we have a new waiter on BufMgrLock, so that the cycle\nrepeats. 
Net result: a process context swap for *every* entry to the\nbuffer manager.\n\nIn previous versions, since BufMgrLock was only a spinlock, releasing it\ndid not cause ownership of the lock to be immediately transferred to\nsomeone else. Therefore, the releaser would be able to re-acquire the\nlock if he wanted to do another bufmgr operation before his time quantum\nexpired. This made for many fewer context swaps.\n\nIt would seem, therefore, that lwlock.c's behavior of immediately\ngranting the lock to released waiters is not such a good idea after all.\nPerhaps we should release waiters but NOT grant them the lock; when they\nget to run, they have to loop back, try to get the lock, and possibly go\nback to sleep if they fail. This apparent waste of cycles is actually\nbeneficial because it saves context swaps overall.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 29 Dec 2001 14:10:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "LWLock contention: I think I understand the problem"
},
{
"msg_contents": "...\n> It would seem, therefore, that lwlock.c's behavior of immediately\n> granting the lock to released waiters is not such a good idea after all.\n> Perhaps we should release waiters but NOT grant them the lock; when they\n> get to run, they have to loop back, try to get the lock, and possibly go\n> back to sleep if they fail. This apparent waste of cycles is actually\n> beneficial because it saves context swaps overall.\n\nHmm. Seems reasonable. In some likely scenerios, it would seem that the\nwaiters *could* grab the lock when they are next scheduled, since the\ncurrent locker would have finished at least one\ngrab/release/grab/release cycle in the meantime.\n\nHow hard will it be to try this out?\n\n - Thomas\n",
"msg_date": "Sat, 29 Dec 2001 19:24:02 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "> It would seem, therefore, that lwlock.c's behavior of immediately\n> granting the lock to released waiters is not such a good idea after all.\n> Perhaps we should release waiters but NOT grant them the lock; when they\n> get to run, they have to loop back, try to get the lock, and possibly go\n> back to sleep if they fail. This apparent waste of cycles is actually\n> beneficial because it saves context swaps overall.\n\nI still need to think about this, but the above idea doesn't seem good. \nRight now, we wake only one waiting process who gets the lock while\nother waiters stay sleeping, right? If we don't give them the lock,\ndon't we have to wake up all the waiters? If there are many, that\nsounds like lots of context switches no?\n\nI am still thinking.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 29 Dec 2001 14:35:49 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "> It would seem, therefore, that lwlock.c's behavior of immediately\n> granting the lock to released waiters is not such a good idea after all.\n> Perhaps we should release waiters but NOT grant them the lock; when they\n> get to run, they have to loop back, try to get the lock, and possibly go\n> back to sleep if they fail. This apparent waste of cycles is actually\n> beneficial because it saves context swaps overall.\n\nAnother question: Is there a way to release buffer locks without\naquiring the master lock?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 29 Dec 2001 14:37:15 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I still need to think about this, but the above idea doesn't seem good. \n> Right now, we wake only one waiting process who gets the lock while\n> other waiters stay sleeping, right? If we don't give them the lock,\n> don't we have to wake up all the waiters?\n\nNo. We'll still wake up the same processes as now: either one would-be\nexclusive lock holder, or multiple would-be shared lock holders.\nBut what I'm proposing is that they don't get granted the lock at that\ninstant; they have to try to get the lock once they actually start to\nrun.\n\nOnce in a while, they'll fail to get the lock, either because the\noriginal releaser reacquired the lock, and then ran out of his time\nquantum before releasing it, or because some third process came along\nand acquired the lock. In either of these scenarios they'd have to\nblock again, and we'd have wasted a process dispatch cycle. The\nimportant thing though is that the current arrangement wastes a process\ndispatch cycle for every acquisition of a contended-for lock.\n\nWhat I had not really focused on before, but it's now glaringly obvious,\nis that on modern machines one process time quantum (0.01 sec typically)\nis enough time for a LOT of computation, in particular an awful lot of\ntrips through the buffer manager or other modules with shared state.\nWe want to be sure that a process can repeatedly acquire and release\nthe shared lock for as long as its time quantum holds out, even if there\nare other processes waiting for the lock. Otherwise we'll be swapping\nprocesses too often.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 29 Dec 2001 14:45:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: LWLock contention: I think I understand the problem "
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> How hard will it be to try this out?\n\nIt's a pretty minor rearrangement of the logic in lwlock.c, I think.\nWorking on it now.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 29 Dec 2001 14:46:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: LWLock contention: I think I understand the problem "
},
{
"msg_contents": "> No. We'll still wake up the same processes as now: either one would-be\n> exclusive lock holder, or multiple would-be shared lock holders.\n> But what I'm proposing is that they don't get granted the lock at that\n> instant; they have to try to get the lock once they actually start to\n> run.\n> \n> Once in a while, they'll fail to get the lock, either because the\n> original releaser reacquired the lock, and then ran out of his time\n> quantum before releasing it, or because some third process came along\n> and acquired the lock. In either of these scenarios they'd have to\n> block again, and we'd have wasted a process dispatch cycle. The\n> important thing though is that the current arrangement wastes a process\n> dispatch cycle for every acquisition of a contended-for lock.\n> \n> What I had not really focused on before, but it's now glaringly obvious,\n> is that on modern machines one process time quantum (0.01 sec typically)\n> is enough time for a LOT of computation, in particular an awful lot of\n> trips through the buffer manager or other modules with shared state.\n> We want to be sure that a process can repeatedly acquire and release\n> the shared lock for as long as its time quantum holds out, even if there\n> are other processes waiting for the lock. Otherwise we'll be swapping\n> processes too often.\n\nOK, I understand what you are saying now. You are not talking about the\nSysV semaphore but a level above that.\n\nWhat you are saying is that when we release a lock, we are currently\nautomatically giving it to another process that is asleep and may not be\nscheduled to run for some time. We then continue processing, and when\nwe need that lock again, we can't get it because the sleeper is holding\nit. We go to sleep and the sleeper wakes up, gets the lock, and\ncontinues.\n\nWhat you want to do is to wake up the sleeper but not give them the lock\nuntil they are actually running and can aquire it themselves. 
\n\nSeems like a no-brainer win to me. Giving the lock to a process that is\nnot currently running seems quite bad to me. It would be one thing if\nwe were trying to do some real-time processing, but throughput is the\nkey for us.\n\nIf you code up a patch, I will test it on my SMP machine using pgbench. \nHopefully this will help Tatsuo's 4-way AIX machine too, and Linux.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 29 Dec 2001 15:07:47 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "\n\nOn Sat, 29 Dec 2001, Tom Lane wrote:\n\n> After some further experimentation, I believe I understand the reason for\n> the reports we've had of 7.2 producing heavy context-swap activity where\n> 7.1 didn't. Here is an extract from tracing lwlock activity for one\n> backend in a pgbench run:\n\n...\n\n> It would seem, therefore, that lwlock.c's behavior of immediately\n> granting the lock to released waiters is not such a good idea after all.\n> Perhaps we should release waiters but NOT grant them the lock; when they\n> get to run, they have to loop back, try to get the lock, and possibly go\n> back to sleep if they fail. This apparent waste of cycles is actually\n> beneficial because it saves context swaps overall.\n\nSounds reasonable enough, but there seems to be a possibility of a process\nstarving. For example, if A releases the lock, B and C wake up, B gets\nthe lock. Then B releases the lock, A and C wake, and A gets the lock\nback. C gets CPU time but never gets the lock.\n\nBTW I am not on this list.\n\n-jwb\n\n",
"msg_date": "Sat, 29 Dec 2001 12:23:20 -0800 (PST)",
"msg_from": "\"Jeffrey W. Baker\" <jwbaker@acm.org>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> What you want to do is to wake up the sleeper but not give them the lock\n> until they are actually running and can aquire it themselves. \n\nYeah. Essentially this is a partial reversion to the idea of a\nspinlock. But it's more efficient than our old implementation with\ntimed waits between retries, because (a) a process will not be awoken\nunless it has a chance at getting the lock, and (b) when a contended-for\nlock is freed, a waiting process will be made ready immediately, rather\nthan waiting for a time tick to elapse. So, if the lock-releasing\nprocess does block before the end of its quantum, the released process\nis available to run immediately. Under the old scheme, a process that\nhad failed to get a spinlock couldn't run until its select wait timed\nout, even if the lock were now available. So I think it's still a net\nwin to have the LWLock mechanism in there, rather than just changing\nthem back to spinlocks.\n\n> If you code up a patch, I will test it on my SMP machine using pgbench. \n> Hopefully this will help Tatsuo's 4-way AIX machine too, and Linux.\n\nAttached is a proposed patch (against the current-CVS version of\nlwlock.c). I haven't committed this yet, but it seems to be a win on\na single CPU. Can people try it on multi CPUs?\n\n\t\t\tregards, tom lane\n\n\n*** src/backend/storage/lmgr/lwlock.c.orig\tFri Dec 28 18:26:04 2001\n--- src/backend/storage/lmgr/lwlock.c\tSat Dec 29 15:20:08 2001\n***************\n*** 195,201 ****\n LWLockAcquire(LWLockId lockid, LWLockMode mode)\n {\n \tvolatile LWLock *lock = LWLockArray + lockid;\n! \tbool\t\tmustwait;\n \n \tPRINT_LWDEBUG(\"LWLockAcquire\", lockid, lock);\n \n--- 195,202 ----\n LWLockAcquire(LWLockId lockid, LWLockMode mode)\n {\n \tvolatile LWLock *lock = LWLockArray + lockid;\n! \tPROC\t *proc = MyProc;\n! 
\tint\t\t\textraWaits = 0;\n \n \tPRINT_LWDEBUG(\"LWLockAcquire\", lockid, lock);\n \n***************\n*** 206,248 ****\n \t */\n \tHOLD_INTERRUPTS();\n \n! \t/* Acquire mutex. Time spent holding mutex should be short! */\n! \tSpinLockAcquire_NoHoldoff(&lock->mutex);\n! \n! \t/* If I can get the lock, do so quickly. */\n! \tif (mode == LW_EXCLUSIVE)\n \t{\n! \t\tif (lock->exclusive == 0 && lock->shared == 0)\n \t\t{\n! \t\t\tlock->exclusive++;\n! \t\t\tmustwait = false;\n \t\t}\n \t\telse\n- \t\t\tmustwait = true;\n- \t}\n- \telse\n- \t{\n- \t\t/*\n- \t\t * If there is someone waiting (presumably for exclusive access),\n- \t\t * queue up behind him even though I could get the lock. This\n- \t\t * prevents a stream of read locks from starving a writer.\n- \t\t */\n- \t\tif (lock->exclusive == 0 && lock->head == NULL)\n \t\t{\n! \t\t\tlock->shared++;\n! \t\t\tmustwait = false;\n \t\t}\n- \t\telse\n- \t\t\tmustwait = true;\n- \t}\n \n! \tif (mustwait)\n! \t{\n! \t\t/* Add myself to wait queue */\n! \t\tPROC\t *proc = MyProc;\n! \t\tint\t\t\textraWaits = 0;\n \n \t\t/*\n \t\t * If we don't have a PROC structure, there's no way to wait. This\n \t\t * should never occur, since MyProc should only be null during\n \t\t * shared memory initialization.\n--- 207,263 ----\n \t */\n \tHOLD_INTERRUPTS();\n \n! \t/*\n! \t * Loop here to try to acquire lock after each time we are signaled\n! \t * by LWLockRelease.\n! \t *\n! \t * NOTE: it might seem better to have LWLockRelease actually grant us\n! \t * the lock, rather than retrying and possibly having to go back to\n! \t * sleep. But in practice that is no good because it means a process\n! \t * swap for every lock acquisition when two or more processes are\n! \t * contending for the same lock. Since LWLocks are normally used to\n! \t * protect not-very-long sections of computation, a process needs to\n! \t * be able to acquire and release the same lock many times during a\n! 
\t * single process dispatch cycle, even in the presence of contention.\n! \t * The efficiency of being able to do that outweighs the inefficiency of\n! \t * sometimes wasting a dispatch cycle because the lock is not free when a\n! \t * released waiter gets to run. See pgsql-hackers archives for 29-Dec-01.\n! \t */\n! \tfor (;;)\n \t{\n! \t\tbool\t\tmustwait;\n! \n! \t\t/* Acquire mutex. Time spent holding mutex should be short! */\n! \t\tSpinLockAcquire_NoHoldoff(&lock->mutex);\n! \n! \t\t/* If I can get the lock, do so quickly. */\n! \t\tif (mode == LW_EXCLUSIVE)\n \t\t{\n! \t\t\tif (lock->exclusive == 0 && lock->shared == 0)\n! \t\t\t{\n! \t\t\t\tlock->exclusive++;\n! \t\t\t\tmustwait = false;\n! \t\t\t}\n! \t\t\telse\n! \t\t\t\tmustwait = true;\n \t\t}\n \t\telse\n \t\t{\n! \t\t\tif (lock->exclusive == 0)\n! \t\t\t{\n! \t\t\t\tlock->shared++;\n! \t\t\t\tmustwait = false;\n! \t\t\t}\n! \t\t\telse\n! \t\t\t\tmustwait = true;\n \t\t}\n \n! \t\tif (!mustwait)\n! \t\t\tbreak;\t\t\t\t/* got the lock */\n \n \t\t/*\n+ \t\t * Add myself to wait queue.\n+ \t\t *\n \t\t * If we don't have a PROC structure, there's no way to wait. This\n \t\t * should never occur, since MyProc should only be null during\n \t\t * shared memory initialization.\n***************\n*** 267,275 ****\n \t\t *\n \t\t * Since we share the process wait semaphore with the regular lock\n \t\t * manager and ProcWaitForSignal, and we may need to acquire an\n! \t\t * LWLock while one of those is pending, it is possible that we\n! \t\t * get awakened for a reason other than being granted the LWLock.\n! \t\t * If so, loop back and wait again. 
Once we've gotten the lock,\n \t\t * re-increment the sema by the number of additional signals\n \t\t * received, so that the lock manager or signal manager will see\n \t\t * the received signal when it next waits.\n--- 282,290 ----\n \t\t *\n \t\t * Since we share the process wait semaphore with the regular lock\n \t\t * manager and ProcWaitForSignal, and we may need to acquire an\n! \t\t * LWLock while one of those is pending, it is possible that we get\n! \t\t * awakened for a reason other than being signaled by LWLockRelease.\n! \t\t * If so, loop back and wait again. Once we've gotten the LWLock,\n \t\t * re-increment the sema by the number of additional signals\n \t\t * received, so that the lock manager or signal manager will see\n \t\t * the received signal when it next waits.\n***************\n*** 287,309 ****\n \n \t\tLOG_LWDEBUG(\"LWLockAcquire\", lockid, \"awakened\");\n \n! \t\t/*\n! \t\t * The awakener already updated the lock struct's state, so we\n! \t\t * don't need to do anything more to it. Just need to fix the\n! \t\t * semaphore count.\n! \t\t */\n! \t\twhile (extraWaits-- > 0)\n! \t\t\tIpcSemaphoreUnlock(proc->sem.semId, proc->sem.semNum);\n! \t}\n! \telse\n! \t{\n! \t\t/* Got the lock without waiting */\n! \t\tSpinLockRelease_NoHoldoff(&lock->mutex);\n \t}\n \n \t/* Add lock to list of locks held by this backend */\n \tAssert(num_held_lwlocks < MAX_SIMUL_LWLOCKS);\n \theld_lwlocks[num_held_lwlocks++] = lockid;\n }\n \n /*\n--- 302,322 ----\n \n \t\tLOG_LWDEBUG(\"LWLockAcquire\", lockid, \"awakened\");\n \n! \t\t/* Now loop back and try to acquire lock again. */\n \t}\n \n+ \t/* We are done updating shared state of the lock itself. 
*/\n+ \tSpinLockRelease_NoHoldoff(&lock->mutex);\n+ \n \t/* Add lock to list of locks held by this backend */\n \tAssert(num_held_lwlocks < MAX_SIMUL_LWLOCKS);\n \theld_lwlocks[num_held_lwlocks++] = lockid;\n+ \n+ \t/*\n+ \t * Fix the process wait semaphore's count for any absorbed wakeups.\n+ \t */\n+ \twhile (extraWaits-- > 0)\n+ \t\tIpcSemaphoreUnlock(proc->sem.semId, proc->sem.semNum);\n }\n \n /*\n***************\n*** 344,355 ****\n \t}\n \telse\n \t{\n! \t\t/*\n! \t\t * If there is someone waiting (presumably for exclusive access),\n! \t\t * queue up behind him even though I could get the lock. This\n! \t\t * prevents a stream of read locks from starving a writer.\n! \t\t */\n! \t\tif (lock->exclusive == 0 && lock->head == NULL)\n \t\t{\n \t\t\tlock->shared++;\n \t\t\tmustwait = false;\n--- 357,363 ----\n \t}\n \telse\n \t{\n! \t\tif (lock->exclusive == 0)\n \t\t{\n \t\t\tlock->shared++;\n \t\t\tmustwait = false;\n***************\n*** 427,446 ****\n \t\tif (lock->exclusive == 0 && lock->shared == 0)\n \t\t{\n \t\t\t/*\n! \t\t\t * Remove the to-be-awakened PROCs from the queue, and update\n! \t\t\t * the lock state to show them as holding the lock.\n \t\t\t */\n \t\t\tproc = head;\n! \t\t\tif (proc->lwExclusive)\n! \t\t\t\tlock->exclusive++;\n! \t\t\telse\n \t\t\t{\n- \t\t\t\tlock->shared++;\n \t\t\t\twhile (proc->lwWaitLink != NULL &&\n \t\t\t\t\t !proc->lwWaitLink->lwExclusive)\n \t\t\t\t{\n \t\t\t\t\tproc = proc->lwWaitLink;\n- \t\t\t\t\tlock->shared++;\n \t\t\t\t}\n \t\t\t}\n \t\t\t/* proc is now the last PROC to be released */\n--- 435,451 ----\n \t\tif (lock->exclusive == 0 && lock->shared == 0)\n \t\t{\n \t\t\t/*\n! \t\t\t * Remove the to-be-awakened PROCs from the queue. If the\n! \t\t\t * front waiter wants exclusive lock, awaken him only.\n! \t\t\t * Otherwise awaken as many waiters as want shared access.\n \t\t\t */\n \t\t\tproc = head;\n! 
\t\t\tif (!proc->lwExclusive)\n \t\t\t{\n \t\t\t\twhile (proc->lwWaitLink != NULL &&\n \t\t\t\t\t !proc->lwWaitLink->lwExclusive)\n \t\t\t\t{\n \t\t\t\t\tproc = proc->lwWaitLink;\n \t\t\t\t}\n \t\t\t}\n \t\t\t/* proc is now the last PROC to be released */",
"msg_date": "Sat, 29 Dec 2001 15:49:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: LWLock contention: I think I understand the problem "
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Another question: Is there a way to release buffer locks without\n> aquiring the master lock?\n\nWe might want to think about making bufmgr locking more fine-grained\n... in a future release. For 7.2 I don't really want to mess around\nwith the bufmgr logic at this late hour. Too risky.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 29 Dec 2001 15:58:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: LWLock contention: I think I understand the problem "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Another question: Is there a way to release buffer locks without\n> > aquiring the master lock?\n> \n> We might want to think about making bufmgr locking more fine-grained\n> ... in a future release. For 7.2 I don't really want to mess around\n> with the bufmgr logic at this late hour. Too risky.\n\nYou want a TODO item on this?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 29 Dec 2001 16:09:56 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> We might want to think about making bufmgr locking more fine-grained\n>> ... in a future release. For 7.2 I don't really want to mess around\n>> with the bufmgr logic at this late hour. Too risky.\n\n> You want a TODO item on this?\n\nSure. But don't phrase it as just a bufmgr problem. Maybe:\n\n* Make locking of shared data structures more fine-grained\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 29 Dec 2001 18:09:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: LWLock contention: I think I understand the problem "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > What you want to do is to wake up the sleeper but not give them the lock\n> > until they are actually running and can aquire it themselves. \n> \n> Yeah. Essentially this is a partial reversion to the idea of a\n> spinlock. But it's more efficient than our old implementation with\n> timed waits between retries, because (a) a process will not be awoken\n> unless it has a chance at getting the lock, and (b) when a contended-for\n> lock is freed, a waiting process will be made ready immediately, rather\n> than waiting for a time tick to elapse. So, if the lock-releasing\n> process does block before the end of its quantum, the released process\n> is available to run immediately. Under the old scheme, a process that\n> had failed to get a spinlock couldn't run until its select wait timed\n> out, even if the lock were now available. So I think it's still a net\n> win to have the LWLock mechanism in there, rather than just changing\n> them back to spinlocks.\n> \n> > If you code up a patch, I will test it on my SMP machine using pgbench. \n> > Hopefully this will help Tatsuo's 4-way AIX machine too, and Linux.\n> \n> Attached is a proposed patch (against the current-CVS version of\n> lwlock.c). I haven't committed this yet, but it seems to be a win on\n> a single CPU. Can people try it on multi CPUs?\n\nOK, here are the results on BSD/OS 4.2 on a 2-cpu system. The first is\nbefore the patch, the second after. Both average 14tps, so the patch\nhas no negative effect on my system. Of course, it has no positive\neffect either. :-)\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\ntransaction type: TPC-B (sort of)\nscaling factor: 50\nnumber of clients: 1\nnumber of transactions per client: 1000\nnumber of transactions actually processed: 1000/1000\ntps = 15.755389(including connections establishing)\ntps = 15.765396(excluding connections establishing)\ntransaction type: TPC-B (sort of)\nscaling factor: 50\nnumber of clients: 10\nnumber of transactions per client: 1000\nnumber of transactions actually processed: 10000/10000\ntps = 16.926562(including connections establishing)\ntps = 16.935963(excluding connections establishing)\ntransaction type: TPC-B (sort of)\nscaling factor: 50\nnumber of clients: 25\nnumber of transactions per client: 1000\nnumber of transactions actually processed: 25000/25000\ntps = 16.219866(including connections establishing)\ntps = 16.228470(excluding connections establishing)\ntransaction type: TPC-B (sort of)\nscaling factor: 50\nnumber of clients: 50\nnumber of transactions per client: 1000\nnumber of transactions actually processed: 50000/50000\ntps = 12.071730(including connections establishing)\ntps = 12.076470(excluding connections establishing)\n\ntransaction type: TPC-B (sort of)\nscaling factor: 50\nnumber of clients: 1\nnumber of transactions per client: 1000\nnumber of transactions actually processed: 1000/1000\ntps = 13.784963(including connections establishing)\ntps = 13.792893(excluding connections establishing)\ntransaction type: TPC-B (sort of)\nscaling factor: 50\nnumber of clients: 10\nnumber of transactions per client: 1000\nnumber of transactions actually processed: 10000/10000\ntps = 16.287374(including connections establishing)\ntps = 16.296349(excluding connections establishing)\ntransaction type: TPC-B (sort of)\nscaling factor: 50\nnumber of clients: 25\nnumber of transactions per client: 1000\nnumber of transactions actually processed: 25000/25000\ntps = 15.810789(including connections establishing)\ntps = 15.819153(excluding connections 
establishing)\ntransaction type: TPC-B (sort of)\nscaling factor: 50\nnumber of clients: 50\nnumber of transactions per client: 1000\nnumber of transactions actually processed: 50000/50000\ntps = 12.030432(including connections establishing)\ntps = 12.035500(excluding connections establishing)",
"msg_date": "Sat, 29 Dec 2001 20:50:48 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "\n\nOn Sat, 29 Dec 2001, Bruce Momjian wrote:\n\n> OK, here are the results on BSD/OS 4.2 on a 2-cpu system. The first is\n> before the patch, the second after. Both average 14tps, so the patch\n> has no negative effect on my system. Of course, it has no positive\n> effect either. :-)\n\nActually it looks slighty worse with the patch. What about CPU usage?\n\n-jwb\n\n",
"msg_date": "Sat, 29 Dec 2001 18:00:43 -0800 (PST)",
"msg_from": "\"Jeffrey W. Baker\" <jwbaker@acm.org>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "> \n> \n> On Sat, 29 Dec 2001, Bruce Momjian wrote:\n> \n> > OK, here are the results on BSD/OS 4.2 on a 2-cpu system. The first is\n> > before the patch, the second after. Both average 14tps, so the patch\n> > has no negative effect on my system. Of course, it has no positive\n> > effect either. :-)\n> \n> Actually it looks slighty worse with the patch. What about CPU usage?\n\nYes, slightly, but I have better performance on 2 cpu's than 1, so I\ndidn't expect to see any major change, partially because the context\nswitching overhead problem doesn't see to exist on this OS.\n\nIf we find that it helps single-cpu machines, and perhaps helps machines\nthat had worse performance on SMP than single-cpu, my guess is it would\nbe a win, in general.\n\nLet me tell you what I did to test it. I ran /contrib/pgbench. I had\nthe postmaster configured with 1000 buffers, and ran pgbench with a\nscale of 50. I then ran it with 1, 10, 25, and 50 clients using 1000\ntransactions.\n\nThe commands were:\n\n\t$ createdb pgbench\n\t$ pgbench -i -s 50\t\n\t$ for CLIENT in 1 10 25 50\n\tdo\n\t\tpgbench -c $CLIENT -t 1000 pgbench\n\tdone | tee -a pgbench2_7.2\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 29 Dec 2001 21:13:36 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "> \n> \n> On Sat, 29 Dec 2001, Bruce Momjian wrote:\n> \n> > OK, here are the results on BSD/OS 4.2 on a 2-cpu system. The first is\n> > before the patch, the second after. Both average 14tps, so the patch\n> > has no negative effect on my system. Of course, it has no positive\n> > effect either. :-)\n> \n> Actually it looks slighty worse with the patch. What about CPU usage?\n\nFor 5 clients, CPU's are 96% idle. Load average is around 5. Seems\ntotally I/O bound.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 29 Dec 2001 21:30:23 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, here are the results on BSD/OS 4.2 on a 2-cpu system. The first is\n> before the patch, the second after. Both average 14tps, so the patch\n> has no negative effect on my system. Of course, it has no positive\n> effect either. :-)\n\nI am also having a hard time measuring any difference using pgbench.\nHowever, pgbench is almost entirely I/O bound on my hardware (CPU is\ntypically 70-80% idle) so this is not very surprising.\n\nI can confirm that the patch accomplishes the intended goal of reducing\ncontext swaps. Using pgbench with 64 clients, a profile of the old code\nshowed about 7% of LWLockAcquire calls blocking (invoking\nIpcSemaphoreLock). A profile of the new code shows 0.1% of the calls\nblocking.\n\nI suspect that we need something less I/O-bound than pgbench to really\ntell whether this patch is worthwhile or not. Jeffrey, what are you\nseeing in your application?\n\nAnd btw, what are you using to count context swaps?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 29 Dec 2001 21:42:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: LWLock contention: I think I understand the problem "
},
{
"msg_contents": "> > If you code up a patch, I will test it on my SMP machine using pgbench. \n> > Hopefully this will help Tatsuo's 4-way AIX machine too, and Linux.\n> \n> Attached is a proposed patch (against the current-CVS version of\n> lwlock.c). I haven't committed this yet, but it seems to be a win on\n> a single CPU. Can people try it on multi CPUs?\n\nYour patches seem lightly enhanced 7.2 performance on AIX 5L (still\nslower than 7.1, however).",
"msg_date": "Sun, 30 Dec 2001 16:53:09 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem "
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Your patches seem lightly enhanced 7.2 performance on AIX 5L (still\n> slower than 7.1, however).\n\nIt's awfully hard to see what's happening near the left end of that\nchart. May I suggest plotting the x-axis on a log scale?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 30 Dec 2001 11:52:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: LWLock contention: I think I understand the problem "
},
{
"msg_contents": "I have thought of a further refinement to the patch I produced\nyesterday. Assume that there are multiple waiters blocked on (eg)\nBufMgrLock. After we release the first one, we want the currently\nrunning process to be able to continue acquiring and releasing the lock\nfor as long as its time quantum holds out. But in the patch as given,\neach acquire/release cycle releases another waiter. This is probably\nnot good.\n\nAttached is a modification that prevents additional waiters from being\nreleased until the first releasee has a chance to run and acquire the\nlock. Would you try this and see if it's better or not in your test\ncases? It doesn't seem to help on a single CPU, but maybe on multiple\nCPUs it'll make a difference.\n\nTo try to make things simple, I've attached the mod in two forms:\nas a diff from current CVS, and as a diff from the previous patch.\n\n\t\t\tregards, tom lane\n\n\n*** src/backend/storage/lmgr/lwlock.c.orig\tSat Dec 29 19:48:03 2001\n--- src/backend/storage/lmgr/lwlock.c\tSun Dec 30 12:11:47 2001\n***************\n*** 30,35 ****\n--- 30,36 ----\n typedef struct LWLock\n {\n \tslock_t\t\tmutex;\t\t\t/* Protects LWLock and queue of PROCs */\n+ \tbool\t\treleaseOK;\t\t/* T if ok to release waiters */\n \tchar\t\texclusive;\t\t/* # of exclusive holders (0 or 1) */\n \tint\t\t\tshared;\t\t\t/* # of shared holders (0..MaxBackends) */\n \tPROC\t *head;\t\t\t/* head of list of waiting PROCs */\n***************\n*** 67,75 ****\n PRINT_LWDEBUG(const char *where, LWLockId lockid, const volatile LWLock *lock)\n {\n \tif (Trace_lwlocks)\n! \t\telog(DEBUG, \"%s(%d): excl %d shared %d head %p\",\n \t\t\t where, (int) lockid,\n! \t\t\t (int) lock->exclusive, lock->shared, lock->head);\n }\n \n inline static void\n--- 68,77 ----\n PRINT_LWDEBUG(const char *where, LWLockId lockid, const volatile LWLock *lock)\n {\n \tif (Trace_lwlocks)\n! \t\telog(DEBUG, \"%s(%d): excl %d shared %d head %p rOK %d\",\n \t\t\t where, (int) lockid,\n! 
\t\t\t (int) lock->exclusive, lock->shared, lock->head,\n! \t\t\t (int) lock->releaseOK);\n }\n \n inline static void\n***************\n*** 153,158 ****\n--- 155,161 ----\n \tfor (id = 0, lock = LWLockArray; id < numLocks; id++, lock++)\n \t{\n \t\tSpinLockInit(&lock->mutex);\n+ \t\tlock->releaseOK = true;\n \t\tlock->exclusive = 0;\n \t\tlock->shared = 0;\n \t\tlock->head = NULL;\n***************\n*** 195,201 ****\n LWLockAcquire(LWLockId lockid, LWLockMode mode)\n {\n \tvolatile LWLock *lock = LWLockArray + lockid;\n! \tbool\t\tmustwait;\n \n \tPRINT_LWDEBUG(\"LWLockAcquire\", lockid, lock);\n \n--- 198,206 ----\n LWLockAcquire(LWLockId lockid, LWLockMode mode)\n {\n \tvolatile LWLock *lock = LWLockArray + lockid;\n! \tPROC\t *proc = MyProc;\n! \tbool\t\tretry = false;\n! \tint\t\t\textraWaits = 0;\n \n \tPRINT_LWDEBUG(\"LWLockAcquire\", lockid, lock);\n \n***************\n*** 206,248 ****\n \t */\n \tHOLD_INTERRUPTS();\n \n! \t/* Acquire mutex. Time spent holding mutex should be short! */\n! \tSpinLockAcquire_NoHoldoff(&lock->mutex);\n! \n! \t/* If I can get the lock, do so quickly. */\n! \tif (mode == LW_EXCLUSIVE)\n \t{\n! \t\tif (lock->exclusive == 0 && lock->shared == 0)\n \t\t{\n! \t\t\tlock->exclusive++;\n! \t\t\tmustwait = false;\n \t\t}\n \t\telse\n- \t\t\tmustwait = true;\n- \t}\n- \telse\n- \t{\n- \t\t/*\n- \t\t * If there is someone waiting (presumably for exclusive access),\n- \t\t * queue up behind him even though I could get the lock. This\n- \t\t * prevents a stream of read locks from starving a writer.\n- \t\t */\n- \t\tif (lock->exclusive == 0 && lock->head == NULL)\n \t\t{\n! \t\t\tlock->shared++;\n! \t\t\tmustwait = false;\n \t\t}\n- \t\telse\n- \t\t\tmustwait = true;\n- \t}\n \n! \tif (mustwait)\n! \t{\n! \t\t/* Add myself to wait queue */\n! \t\tPROC\t *proc = MyProc;\n! \t\tint\t\t\textraWaits = 0;\n \n \t\t/*\n \t\t * If we don't have a PROC structure, there's no way to wait. 
This\n \t\t * should never occur, since MyProc should only be null during\n \t\t * shared memory initialization.\n--- 211,271 ----\n \t */\n \tHOLD_INTERRUPTS();\n \n! \t/*\n! \t * Loop here to try to acquire lock after each time we are signaled\n! \t * by LWLockRelease.\n! \t *\n! \t * NOTE: it might seem better to have LWLockRelease actually grant us\n! \t * the lock, rather than retrying and possibly having to go back to\n! \t * sleep. But in practice that is no good because it means a process\n! \t * swap for every lock acquisition when two or more processes are\n! \t * contending for the same lock. Since LWLocks are normally used to\n! \t * protect not-very-long sections of computation, a process needs to\n! \t * be able to acquire and release the same lock many times during a\n! \t * single process dispatch cycle, even in the presence of contention.\n! \t * The efficiency of being able to do that outweighs the inefficiency of\n! \t * sometimes wasting a dispatch cycle because the lock is not free when a\n! \t * released waiter gets to run. See pgsql-hackers archives for 29-Dec-01.\n! \t */\n! \tfor (;;)\n \t{\n! \t\tbool\t\tmustwait;\n! \n! \t\t/* Acquire mutex. Time spent holding mutex should be short! */\n! \t\tSpinLockAcquire_NoHoldoff(&lock->mutex);\n! \n! \t\t/* If retrying, allow LWLockRelease to release waiters again */\n! \t\tif (retry)\n! \t\t\tlock->releaseOK = true;\n! \n! \t\t/* If I can get the lock, do so quickly. */\n! \t\tif (mode == LW_EXCLUSIVE)\n \t\t{\n! \t\t\tif (lock->exclusive == 0 && lock->shared == 0)\n! \t\t\t{\n! \t\t\t\tlock->exclusive++;\n! \t\t\t\tmustwait = false;\n! \t\t\t}\n! \t\t\telse\n! \t\t\t\tmustwait = true;\n \t\t}\n \t\telse\n \t\t{\n! \t\t\tif (lock->exclusive == 0)\n! \t\t\t{\n! \t\t\t\tlock->shared++;\n! \t\t\t\tmustwait = false;\n! \t\t\t}\n! \t\t\telse\n! \t\t\t\tmustwait = true;\n \t\t}\n \n! \t\tif (!mustwait)\n! 
\t\t\tbreak;\t\t\t\t/* got the lock */\n \n \t\t/*\n+ \t\t * Add myself to wait queue.\n+ \t\t *\n \t\t * If we don't have a PROC structure, there's no way to wait. This\n \t\t * should never occur, since MyProc should only be null during\n \t\t * shared memory initialization.\n***************\n*** 267,275 ****\n \t\t *\n \t\t * Since we share the process wait semaphore with the regular lock\n \t\t * manager and ProcWaitForSignal, and we may need to acquire an\n! \t\t * LWLock while one of those is pending, it is possible that we\n! \t\t * get awakened for a reason other than being granted the LWLock.\n! \t\t * If so, loop back and wait again. Once we've gotten the lock,\n \t\t * re-increment the sema by the number of additional signals\n \t\t * received, so that the lock manager or signal manager will see\n \t\t * the received signal when it next waits.\n--- 290,298 ----\n \t\t *\n \t\t * Since we share the process wait semaphore with the regular lock\n \t\t * manager and ProcWaitForSignal, and we may need to acquire an\n! \t\t * LWLock while one of those is pending, it is possible that we get\n! \t\t * awakened for a reason other than being signaled by LWLockRelease.\n! \t\t * If so, loop back and wait again. Once we've gotten the LWLock,\n \t\t * re-increment the sema by the number of additional signals\n \t\t * received, so that the lock manager or signal manager will see\n \t\t * the received signal when it next waits.\n***************\n*** 287,309 ****\n \n \t\tLOG_LWDEBUG(\"LWLockAcquire\", lockid, \"awakened\");\n \n! \t\t/*\n! \t\t * The awakener already updated the lock struct's state, so we\n! \t\t * don't need to do anything more to it. Just need to fix the\n! \t\t * semaphore count.\n! \t\t */\n! \t\twhile (extraWaits-- > 0)\n! \t\t\tIpcSemaphoreUnlock(proc->sem.semId, proc->sem.semNum);\n! \t}\n! \telse\n! \t{\n! \t\t/* Got the lock without waiting */\n! 
\t\tSpinLockRelease_NoHoldoff(&lock->mutex);\n \t}\n \n \t/* Add lock to list of locks held by this backend */\n \tAssert(num_held_lwlocks < MAX_SIMUL_LWLOCKS);\n \theld_lwlocks[num_held_lwlocks++] = lockid;\n }\n \n /*\n--- 310,331 ----\n \n \t\tLOG_LWDEBUG(\"LWLockAcquire\", lockid, \"awakened\");\n \n! \t\t/* Now loop back and try to acquire lock again. */\n! \t\tretry = true;\n \t}\n \n+ \t/* We are done updating shared state of the lock itself. */\n+ \tSpinLockRelease_NoHoldoff(&lock->mutex);\n+ \n \t/* Add lock to list of locks held by this backend */\n \tAssert(num_held_lwlocks < MAX_SIMUL_LWLOCKS);\n \theld_lwlocks[num_held_lwlocks++] = lockid;\n+ \n+ \t/*\n+ \t * Fix the process wait semaphore's count for any absorbed wakeups.\n+ \t */\n+ \twhile (extraWaits-- > 0)\n+ \t\tIpcSemaphoreUnlock(proc->sem.semId, proc->sem.semNum);\n }\n \n /*\n***************\n*** 344,355 ****\n \t}\n \telse\n \t{\n! \t\t/*\n! \t\t * If there is someone waiting (presumably for exclusive access),\n! \t\t * queue up behind him even though I could get the lock. This\n! \t\t * prevents a stream of read locks from starving a writer.\n! \t\t */\n! \t\tif (lock->exclusive == 0 && lock->head == NULL)\n \t\t{\n \t\t\tlock->shared++;\n \t\t\tmustwait = false;\n--- 366,372 ----\n \t}\n \telse\n \t{\n! \t\tif (lock->exclusive == 0)\n \t\t{\n \t\t\tlock->shared++;\n \t\t\tmustwait = false;\n***************\n*** 419,451 ****\n \n \t/*\n \t * See if I need to awaken any waiters. If I released a non-last\n! \t * shared hold, there cannot be anything to do.\n \t */\n \thead = lock->head;\n \tif (head != NULL)\n \t{\n! \t\tif (lock->exclusive == 0 && lock->shared == 0)\n \t\t{\n \t\t\t/*\n! \t\t\t * Remove the to-be-awakened PROCs from the queue, and update\n! \t\t\t * the lock state to show them as holding the lock.\n \t\t\t */\n \t\t\tproc = head;\n! \t\t\tif (proc->lwExclusive)\n! \t\t\t\tlock->exclusive++;\n! 
\t\t\telse\n \t\t\t{\n- \t\t\t\tlock->shared++;\n \t\t\t\twhile (proc->lwWaitLink != NULL &&\n \t\t\t\t\t !proc->lwWaitLink->lwExclusive)\n \t\t\t\t{\n \t\t\t\t\tproc = proc->lwWaitLink;\n- \t\t\t\t\tlock->shared++;\n \t\t\t\t}\n \t\t\t}\n \t\t\t/* proc is now the last PROC to be released */\n \t\t\tlock->head = proc->lwWaitLink;\n \t\t\tproc->lwWaitLink = NULL;\n \t\t}\n \t\telse\n \t\t{\n--- 436,469 ----\n \n \t/*\n \t * See if I need to awaken any waiters. If I released a non-last\n! \t * shared hold, there cannot be anything to do. Also, do not awaken\n! \t * any waiters if someone has already awakened waiters that haven't\n! \t * yet acquired the lock.\n \t */\n \thead = lock->head;\n \tif (head != NULL)\n \t{\n! \t\tif (lock->exclusive == 0 && lock->shared == 0 && lock->releaseOK)\n \t\t{\n \t\t\t/*\n! \t\t\t * Remove the to-be-awakened PROCs from the queue. If the\n! \t\t\t * front waiter wants exclusive lock, awaken him only.\n! \t\t\t * Otherwise awaken as many waiters as want shared access.\n \t\t\t */\n \t\t\tproc = head;\n! 
\t\t\tif (!proc->lwExclusive)\n \t\t\t{\n \t\t\t\twhile (proc->lwWaitLink != NULL &&\n \t\t\t\t\t !proc->lwWaitLink->lwExclusive)\n \t\t\t\t{\n \t\t\t\t\tproc = proc->lwWaitLink;\n \t\t\t\t}\n \t\t\t}\n \t\t\t/* proc is now the last PROC to be released */\n \t\t\tlock->head = proc->lwWaitLink;\n \t\t\tproc->lwWaitLink = NULL;\n+ \t\t\t/* prevent additional wakeups until retryer gets to run */\n+ \t\t\tlock->releaseOK = false;\n \t\t}\n \t\telse\n \t\t{\n\n*** src/backend/storage/lmgr/lwlock.c.try1\tSat Dec 29 15:20:08 2001\n--- src/backend/storage/lmgr/lwlock.c\tSun Dec 30 12:11:47 2001\n***************\n*** 30,35 ****\n--- 30,36 ----\n typedef struct LWLock\n {\n \tslock_t\t\tmutex;\t\t\t/* Protects LWLock and queue of PROCs */\n+ \tbool\t\treleaseOK;\t\t/* T if ok to release waiters */\n \tchar\t\texclusive;\t\t/* # of exclusive holders (0 or 1) */\n \tint\t\t\tshared;\t\t\t/* # of shared holders (0..MaxBackends) */\n \tPROC\t *head;\t\t\t/* head of list of waiting PROCs */\n***************\n*** 67,75 ****\n PRINT_LWDEBUG(const char *where, LWLockId lockid, const volatile LWLock *lock)\n {\n \tif (Trace_lwlocks)\n! \t\telog(DEBUG, \"%s(%d): excl %d shared %d head %p\",\n \t\t\t where, (int) lockid,\n! \t\t\t (int) lock->exclusive, lock->shared, lock->head);\n }\n \n inline static void\n--- 68,77 ----\n PRINT_LWDEBUG(const char *where, LWLockId lockid, const volatile LWLock *lock)\n {\n \tif (Trace_lwlocks)\n! \t\telog(DEBUG, \"%s(%d): excl %d shared %d head %p rOK %d\",\n \t\t\t where, (int) lockid,\n! \t\t\t (int) lock->exclusive, lock->shared, lock->head,\n! 
\t\t\t (int) lock->releaseOK);\n }\n \n inline static void\n***************\n*** 153,158 ****\n--- 155,161 ----\n \tfor (id = 0, lock = LWLockArray; id < numLocks; id++, lock++)\n \t{\n \t\tSpinLockInit(&lock->mutex);\n+ \t\tlock->releaseOK = true;\n \t\tlock->exclusive = 0;\n \t\tlock->shared = 0;\n \t\tlock->head = NULL;\n***************\n*** 196,201 ****\n--- 199,205 ----\n {\n \tvolatile LWLock *lock = LWLockArray + lockid;\n \tPROC\t *proc = MyProc;\n+ \tbool\t\tretry = false;\n \tint\t\t\textraWaits = 0;\n \n \tPRINT_LWDEBUG(\"LWLockAcquire\", lockid, lock);\n***************\n*** 230,235 ****\n--- 234,243 ----\n \t\t/* Acquire mutex. Time spent holding mutex should be short! */\n \t\tSpinLockAcquire_NoHoldoff(&lock->mutex);\n \n+ \t\t/* If retrying, allow LWLockRelease to release waiters again */\n+ \t\tif (retry)\n+ \t\t\tlock->releaseOK = true;\n+ \n \t\t/* If I can get the lock, do so quickly. */\n \t\tif (mode == LW_EXCLUSIVE)\n \t\t{\n***************\n*** 303,308 ****\n--- 311,317 ----\n \t\tLOG_LWDEBUG(\"LWLockAcquire\", lockid, \"awakened\");\n \n \t\t/* Now loop back and try to acquire lock again. */\n+ \t\tretry = true;\n \t}\n \n \t/* We are done updating shared state of the lock itself. */\n***************\n*** 427,438 ****\n \n \t/*\n \t * See if I need to awaken any waiters. If I released a non-last\n! \t * shared hold, there cannot be anything to do.\n \t */\n \thead = lock->head;\n \tif (head != NULL)\n \t{\n! \t\tif (lock->exclusive == 0 && lock->shared == 0)\n \t\t{\n \t\t\t/*\n \t\t\t * Remove the to-be-awakened PROCs from the queue. If the\n--- 436,449 ----\n \n \t/*\n \t * See if I need to awaken any waiters. If I released a non-last\n! \t * shared hold, there cannot be anything to do. Also, do not awaken\n! \t * any waiters if someone has already awakened waiters that haven't\n! \t * yet acquired the lock.\n \t */\n \thead = lock->head;\n \tif (head != NULL)\n \t{\n! 
\t\tif (lock->exclusive == 0 && lock->shared == 0 && lock->releaseOK)\n \t\t{\n \t\t\t/*\n \t\t\t * Remove the to-be-awakened PROCs from the queue. If the\n***************\n*** 451,456 ****\n--- 462,469 ----\n \t\t\t/* proc is now the last PROC to be released */\n \t\t\tlock->head = proc->lwWaitLink;\n \t\t\tproc->lwWaitLink = NULL;\n+ \t\t\t/* prevent additional wakeups until retryer gets to run */\n+ \t\t\tlock->releaseOK = false;\n \t\t}\n \t\telse\n \t\t{",
"msg_date": "Sun, 30 Dec 2001 13:04:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: LWLock contention: I think I understand the problem "
},
{
"msg_contents": "Several people complained that my email client was not properly\nattributing quotations to the people who made them. I figured out the\nelmrc option and I have it working now, as you can see:\n\n-->\tTom Lane wrote:\n\t> I have thought of a further refinement to the patch I produced\n\t> yesterday. Assume that there are multiple waiters blocked on (eg)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 30 Dec 2001 18:44:13 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "My email is fixed"
},
{
"msg_contents": "> I have thought of a further refinement to the patch I produced\n> yesterday. Assume that there are multiple waiters blocked on (eg)\n> BufMgrLock. After we release the first one, we want the currently\n> running process to be able to continue acquiring and releasing the lock\n> for as long as its time quantum holds out. But in the patch as given,\n> each acquire/release cycle releases another waiter. This is probably\n> not good.\n> \n> Attached is a modification that prevents additional waiters from being\n> released until the first releasee has a chance to run and acquire the\n> lock. Would you try this and see if it's better or not in your test\n> cases? It doesn't seem to help on a single CPU, but maybe on multiple\n> CPUs it'll make a difference.\n> \n> To try to make things simple, I've attached the mod in two forms:\n> as a diff from current CVS, and as a diff from the previous patch.\n\nOk, here is a pgbench (-s 10) result on an AIX 5L box (4 way).\n\n\"7.2 with patch\" is for the previous patch. \"7.2 with patch (revised)\"\nis for the this patch. I see virtually no improvement. Please note\nthat xy axis are now in log scale.",
"msg_date": "Thu, 03 Jan 2002 10:18:25 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem "
},
{
"msg_contents": "Tom Lane wrote:\n> I have thought of a further refinement to the patch I produced\n> yesterday. Assume that there are multiple waiters blocked on (eg)\n> BufMgrLock. After we release the first one, we want the currently\n> running process to be able to continue acquiring and releasing the lock\n> for as long as its time quantum holds out. But in the patch as given,\n> each acquire/release cycle releases another waiter. This is probably\n> not good.\n> \n> Attached is a modification that prevents additional waiters from being\n> released until the first releasee has a chance to run and acquire the\n> lock. Would you try this and see if it's better or not in your test\n> cases? It doesn't seem to help on a single CPU, but maybe on multiple\n> CPUs it'll make a difference.\n> \n> To try to make things simple, I've attached the mod in two forms:\n> as a diff from current CVS, and as a diff from the previous patch.\n\nThis does seem like a nice optimization. I will try to test it tomorrow\nbut I doubt I will see any change on BSD/OS.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jan 2002 02:20:16 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> > I have thought of a further refinement to the patch I produced\n> > yesterday. Assume that there are multiple waiters blocked on (eg)\n> > BufMgrLock. After we release the first one, we want the currently\n> > running process to be able to continue acquiring and releasing the lock\n> > for as long as its time quantum holds out. But in the patch as given,\n> > each acquire/release cycle releases another waiter. This is probably\n> > not good.\n> > \n> > Attached is a modification that prevents additional waiters from being\n> > released until the first release has a chance to run and acquire the\n> > lock. Would you try this and see if it's better or not in your test\n> > cases? It doesn't seem to help on a single CPU, but maybe on multiple\n> > CPUs it'll make a difference.\n> > \n> > To try to make things simple, I've attached the mod in two forms:\n> > as a diff from current CVS, and as a diff from the previous patch.\n> \n> Ok, here is a pgbench (-s 10) result on an AIX 5L box (4 way).\n> \n> \"7.2 with patch\" is for the previous patch. \"7.2 with patch (revised)\"\n> is for the this patch. I see virtually no improvement. Please note\n> that xy axis are now in log scale.\n\nWell, there is clearly some good news in that graph. The unpatched 7.2\nhad _terrible_ performance for a few users. The patch clearly helped\nthat.\n\nBoth the 7.2 with patch tests show much better performance, close to\n7.1. Interestingly the first 7.2 patch shows better performance than\nthe later one, perhaps because it is a 4-way system and maybe it is\nfaster to start up more waiting backends on such a system, but the\nperformance difference is minor.\n\nI guess what really bothers me now is why the select() in 7.1 wasn't\nslower than it was. We made 7.2 especially for multicpu systems, and\nhere we have identical performance to 7.1. Tatsuo, is AIX capable of\n<10 millisecond sleeps? 
I see there is such a program in the archives\nfrom Tom Lane:\n\n\thttp://fts.postgresql.org/db/mw/msg.html?mid=1217731\n\nTatsuo, can you run that program on the AIX box and tell us what it\nreports? It would not surprise me if AIX supported sub-10ms select()\ntiming because I have heard AIX is a mixing of Unix and IBM mainframe\ncode.\n\nI have attached a clean version of the code because the web mail archive\nmunged the C code. I called it tst1.c. If you compile it and run it\nlike this:\n\n\t#$ time tst1 1\n\n\treal 0m10.013s\n\tuser 0m0.000s\n\tsys 0m0.004s\n\nThis runs select(1) 1000 times, meaning 10ms per select for BSD/OS.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n#include <sys/types.h>\n#include <sys/time.h>\n#include <sys/select.h>\n#include <signal.h>\n#include <stdio.h>\n#include <stdlib.h>\n\nint main(int argc, char** argv)\n{\n\tstruct timeval delay;\n\tint i, del;\n\n\tdel = atoi(argv[1]);\n\n\tfor (i = 0; i < 1000; i++) {\n\t\tdelay.tv_sec = 0;\n\t\tdelay.tv_usec = del;\n\t\t(void) select(0, NULL, NULL, NULL, &delay);\n\t}\n\treturn 0;\n}",
"msg_date": "Thu, 3 Jan 2002 02:55:26 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "> I guess what really bothers me now is why the select() in 7.1 wasn't\n> slower than it was. We made 7.2 especially for multicpu systems, and\n> here we have identical performance to 7.1. Tatsuo, is AIX capable of\n> <10 millisecond sleeps? I see there is such a program in the archives\n> from Tom Lane:\n> \n> \thttp://fts.postgresql.org/db/mw/msg.html?mid=1217731\n> \n> Tatsuo, can you run that program on the AIX box and tell us what it\n> reports? It would not surprise me if AIX supported sub-10ms select()\n> timing because I have heard AIX is a mixing of Unix and IBM mainframe\n> code.\n> \n> I have attached a clean version of the code because the web mail archive\n> munged the C code. I called it tst1.c. If you compile it and run it\n> like this:\n> \n> \t#$ time tst1 1\n> \n> \treal 0m10.013s\n> \tuser 0m0.000s\n> \tsys 0m0.004s\n> \n> This runs select(1) 1000 times, meaning 10ms per select for BSD/OS.\n\nBingo. It seems AIX 5L can run select() at 1ms timing.\n\nbash-2.04$ time ./a.out 1\n\nreal 0m1.027s\nuser 0m0.000s\nsys 0m0.000s\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 03 Jan 2002 18:00:10 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Ok, here is a pgbench (-s 10) result on an AIX 5L box (4 way).\n> \"7.2 with patch\" is for the previous patch. \"7.2 with patch (revised)\"\n> is for the this patch. I see virtually no improvement.\n\nIf anything, the revised patch seems to make things slightly worse :-(.\nThat agrees with my measurement on a single CPU.\n\nI am inclined to use the revised patch anyway, though, because I think\nit will be less prone to starvation (ie, a process repeatedly being\nawoken but failing to get the lock). The original form of lwlock.c\nguaranteed that a writer could not be locked out by large numbers of\nreaders, but I had to abandon that goal in the first version of the\npatch. The second version still doesn't keep the writer from being\nblocked by active readers, but it does ensure that readers queued up\nbehind the writer won't be released. Comments?\n\n> Please note that xy axis are now in log scale.\n\nSeems much easier to read this way. Thanks.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jan 2002 10:20:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: LWLock contention: I think I understand the problem "
},
{
"msg_contents": "Tom Lane wrote:\n> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > Ok, here is a pgbench (-s 10) result on an AIX 5L box (4 way).\n> > \"7.2 with patch\" is for the previous patch. \"7.2 with patch (revised)\"\n> > is for the this patch. I see virtually no improvement.\n> \n> If anything, the revised patch seems to make things slightly worse :-(.\n> That agrees with my measurement on a single CPU.\n> \n> I am inclined to use the revised patch anyway, though, because I think\n> it will be less prone to starvation (ie, a process repeatedly being\n> awoken but failing to get the lock). The original form of lwlock.c\n> guaranteed that a writer could not be locked out by large numbers of\n> readers, but I had to abandon that goal in the first version of the\n> patch. The second version still doesn't keep the writer from being\n> blocked by active readers, but it does ensure that readers queued up\n> behind the writer won't be released. Comments?\n\nYes, I agree with the later patch.\n\n> \n> > Please note that xy axis are now in log scale.\n> \n> Seems much easier to read this way. Thanks.\n\nYes, good idea. I want to read up on gnuplot. I knew how to use it long\nago.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jan 2002 12:08:34 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "Tom Lane wrote:\n> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > Ok, here is a pgbench (-s 10) result on an AIX 5L box (4 way).\n> > \"7.2 with patch\" is for the previous patch. \"7.2 with patch (revised)\"\n> > is for the this patch. I see virtually no improvement.\n> \n> If anything, the revised patch seems to make things slightly worse :-(.\n> That agrees with my measurement on a single CPU.\n> \n> I am inclined to use the revised patch anyway, though, because I think\n> it will be less prone to starvation (ie, a process repeatedly being\n> awoken but failing to get the lock). The original form of lwlock.c\n> guaranteed that a writer could not be locked out by large numbers of\n> readers, but I had to abandon that goal in the first version of the\n> patch. The second version still doesn't keep the writer from being\n> blocked by active readers, but it does ensure that readers queued up\n> behind the writer won't be released. Comments?\n\nOK, so now we know that while the new lock code handles the select(1)\nproblem better, we also know that on AIX the old select(1) code wasn't\nas bad as we thought.\n\nAs to why we don't see better numbers on AIX, we are getting 100tps,\nwhich seems pretty good to me. Tatsuo, were you expecting higher than\n100tps on that machine? My hardware is at listed at\nhttp://candle.pha.pa.us/main/hardware.html and I don't get over 16tps.\n\nI believe we don't see improvement on SMP machines using pgbench because\npgbench, at least at high scaling factors, is really testing disk i/o,\nnot backend processing speed. It would be interesting to test pgbench\nusing scaling factors that allowed most of the tables to sit in shared\nmemory buffers. Then, we wouldn't be testing disk i/o and would be\ntesting more backend processing throughput. 
(Tom, is that true?)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jan 2002 12:16:31 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, so now we know that while the new lock code handles the select(1)\n> problem better, we also know that on AIX the old select(1) code wasn't\n> as bad as we thought.\n\nIt still seems that the select() blocking method should be a loser.\n\nI notice that for AIX, s_lock.h defines TAS() as a call on a system\nroutine cs(). I wonder what cs() actually does and how long it takes.\nTatsuo or Andreas, any info? It might be interesting to try the pgbench\ntests on AIX with s_lock.c's SPINS_PER_DELAY set to different values\n(try 10 and 1000 instead of the default 100).\n\n> I believe we don't see improvement on SMP machines using pgbench because\n> pgbench, at least at high scaling factors, is really testing disk i/o,\n> not backend processing speed.\n\nGood point. I suspect this is even more true on the PC-hardware setups\nthat most of the rest of us are using: we've got these ridiculously fast\nprocessors and consumer-grade disks (with IDE interfaces, yet).\nTatsuo's AIX setup might have a better CPU-to-IO throughput balance,\nbut it's probably still ultimately I/O bound in this test. Tatsuo,\ncan you report anything about CPU idle time percentage while you are\nrunning these tests?\n\n> It would be interesting to test pgbench\n> using scaling factors that allowed most of the tables to sit in shared\n> memory buffers. Then, we wouldn't be testing disk i/o and would be\n> testing more backend processing throughput. (Tom, is that true?)\n\nUnfortunately, at low scaling factors pgbench is guaranteed to look\nhorrible because of contention for the \"branches\" rows. 
I think that\nit'd be necessary to adjust the ratios of branches, tellers, and\naccounts rows to make it possible to build a small pgbench database\nthat didn't show a lot of contention.\n\nBTW, I realized over the weekend that the reason performance tails off\nfor more clients is that if you hold tx/client constant, more clients\nmeans more total updates executed, which means more dead rows, which\nmeans more time spent in unique-index duplicate checks. We know we want\nto change the way that works, but not for 7.2. At the moment, the only\nway to make a pgbench run that accurately reflects the impact of\nmultiple clients and not the inefficiency of dead index entries is to\nscale tx/client down as #clients increases, so that the total number of\ntransactions is the same for all test runs.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jan 2002 12:41:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: LWLock contention: I think I understand the problem "
},
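Tom's suggestion above — scale tx/client down as #clients increases so that every run executes the same total number of transactions — can be sketched as a small driver loop. This is a hypothetical harness, not a script from the thread; the pgbench command line is echoed rather than executed so the per-client arithmetic stands on its own, and "bench" is a placeholder database name:

```shell
#!/bin/sh
# Hold the total transaction count constant across client counts so that
# dead-index-entry buildup is comparable between runs.
totxacts=10000
for c in 1 2 4 5 10 25 50
do
    # transactions per client shrinks as the client count grows
    t=`expr $totxacts / $c`
    echo "pgbench -n -t $t -c $c bench"
done
```

Each emitted line represents one run of 10000 total transactions; pick client counts that divide the total evenly so no run does fewer transactions than the others.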
{
"msg_contents": "Bruce Momjian wrote:\n\n>Tom Lane wrote:\n>\n>>Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n>>\n>>>Ok, here is a pgbench (-s 10) result on an AIX 5L box (4 way).\n>>>\"7.2 with patch\" is for the previous patch. \"7.2 with patch (revised)\"\n>>>is for the this patch. I see virtually no improvement.\n>>>\n>>If anything, the revised patch seems to make things slightly worse :-(.\n>>That agrees with my measurement on a single CPU.\n>>\n>>I am inclined to use the revised patch anyway, though, because I think\n>>it will be less prone to starvation (ie, a process repeatedly being\n>>awoken but failing to get the lock). The original form of lwlock.c\n>>guaranteed that a writer could not be locked out by large numbers of\n>>readers, but I had to abandon that goal in the first version of the\n>>patch. The second version still doesn't keep the writer from being\n>>blocked by active readers, but it does ensure that readers queued up\n>>behind the writer won't be released. Comments?\n>>\n>\n>OK, so now we know that while the new lock code handles the select(1)\n>problem better, we also know that on AIX the old select(1) code wasn't\n>as bad as we thought.\n>\n>As to why we don't see better numbers on AIX, we are getting 100tps,\n>which seems pretty good to me. Tatsuo, were you expecting higher than\n>100tps on that machine? 
My hardware is listed at\n>http://candle.pha.pa.us/main/hardware.html and I don't get over 16tps.\n>\nWhat scaling factor do you use ?\nWhat OS ?\n\nI got from ~40 tps for -s 128 up to 50-230 tps for -s 1 or 10 on dual \nPIII 800 on IDE\ndisk (Model=IBM-DTLA-307045) with hdparm -t the following\n\n/dev/hda:\n Timing buffered disk reads: 64 MB in 3.10 seconds = 20.65 MB/sec\n\nThe only difference from Tom's hdparm is unmaskirq = 1 (on) (the -u \n1 switch that\nenables interrupts during IDE processing - there is an ancient warning \nabout it being a risk,\nbut I have been running this way for years on very different configurations \nwith no problems)\n\nI'll reattach the graph (old one, without either of Tom's 7.2b4 patches). \nThis is on RedHat 7.2\n\n>I believe we don't see improvement on SMP machines using pgbench because\n>pgbench, at least at high scaling factors, is really testing disk i/o,\n>not backend processing speed. It would be interesting to test pgbench\n>using scaling factors that allowed most of the tables to sit in shared\n>memory buffers. Then, we wouldn't be testing disk i/o and would be\n>testing more backend processing throughput.\n>\nI suspect that we should run at about the same level of disk i/o for the same \nTPS level regardless\nof number of clients, so pgbench is measuring ability to run \nconcurrently in this scenario.\n\n-----------------\nHannu",
"msg_date": "Fri, 04 Jan 2002 00:37:17 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n>>It would be interesting to test pgbench\n>>using scaling factors that allowed most of the tables to sit in shared\n>>memory buffers. \n>>\nThat's why I recommended testing on ram disk ;)\n\n>>Then, we wouldn't be testing disk i/o and would be\n>>testing more backend processing throughput. (Tom, is that true?)\n>>\n>\n>Unfortunately, at low scaling factors pgbench is guaranteed to look\n>horrible because of contention for the \"branches\" rows. \n>\nNot really! See graph in my previous post - the database size affects \nperformance\nmuch more !\n\n-s 1 is faster than -s 128 for all cases except 7.1.3 where it becomes \nslower when\nthe number of clients is > 16\n\n>I think that\n>it'd be necessary to adjust the ratios of branches, tellers, and\n>accounts rows to make it possible to build a small pgbench database\n>that didn't show a lot of contention.\n>\nMy understanding is that pgbench is meant to have some level of \ncontention and should\nbe tested up to ( -c = 10 times -s ), as each test client should emulate \na real \"teller\" and\nthere are 10 tellers per -s.\n\n>BTW, I realized over the weekend that the reason performance tails off\n>for more clients is that if you hold tx/client constant, more clients\n>means more total updates executed, which means more dead rows, which\n>means more time spent in unique-index duplicate checks. \n>\nThat's the point I tried to make by modifying Tatsuo's script to do what \nyou describe.\nI'm not smart enough to attribute it directly to index lookups but my \ngut feeling told\nme that dead tuples must be the culprit ;)\n\nI first tried to counter the slowdown by running a concurrent new-type \nvacuum process\nbut it made things 2X slower still (38 --> 20 tps for -s 100 with \nthe original number for -t )\n\n> We know we want\n>to change the way that works, but not for 7.2. 
At the moment, the only\n>way to make a pgbench run that accurately reflects the impact of\n>multiple clients and not the inefficiency of dead index entries is to\n>scale tx/client down as #clients increases, so that the total number of\n>transactions is the same for all test runs.\n>\nYes. My test also showed that the impact of per-client startup costs is \nmuch smaller\nthan the impact of the increased number of transactions.\n\nI posted the modified script that does exactly that (512 total transactions\nfor 1-2-4-8-16-32-64-128 concurrent clients ) about a week ago together \nwith a\ngraph of results.\n\n------------------------\nHannu\n",
"msg_date": "Fri, 04 Jan 2002 00:55:10 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
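Hannu's rule of thumb above (test up to -c = 10 times -s, one client per emulated teller) follows from how pgbench -i sizes the tables. A rough sketch of the proportions — the multipliers are my recollection of pgbench's defaults, so verify them against your pgbench version:

```shell
#!/bin/sh
# Approximate row counts pgbench -i creates for a given scale factor,
# and the matching upper bound on -c (one client per teller).
s=10
branches=$s                      # 1 branch per scale unit
tellers=`expr $s \* 10`          # 10 tellers per branch
accounts=`expr $s \* 100000`     # 100000 accounts per branch
echo "scale=$s branches=$branches tellers=$tellers accounts=$accounts max_clients=$tellers"
```

With only $s rows in "branches", every transaction updates one of those few rows, which is where the contention at low scale factors comes from.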
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Tom Lane wrote:\n>> Unfortunately, at low scaling factors pgbench is guaranteed to look\n>> horrible because of contention for the \"branches\" rows. \n>> \n> Not really! See graph in my previous post - the database size affects \n> performance much more !\n\nBut the way that pgbench is currently set up, you can't really tell the\ndifference between database size effects and contention effects, because\nyou can't vary one while holding the other constant.\n\nI based my comments on having done profiles that show most of the CPU\ntime going into attempts to acquire row locks for updates and/or\nchecking of dead tuples in _bt_check_unique. So at least in the\nconditions I was using (single CPU) I think those are the bottlenecks.\nI don't have any profiles for SMP machines, yet.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jan 2002 18:39:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: LWLock contention: I think I understand the problem "
},
{
"msg_contents": "Fredrik Estreen <estreen@algonet.se> writes:\n> Here are some results for Linux 2.2 on a Dual PentiumPro 200MHz, SCSI \n> disks and way too litte RAM (just 128MB).\n\nMany thanks for the additional datapoints! I converted the data into\na plot (attached) to make it easier to interpret.\n\n> I observed the loadavg. with the three different 7.2 versions and 50\n> clients, without patch the load stayed low (2-3), with patch no1 very\n> high (12-14) and with patch no2 between the two others (6-8).\n\nThat makes sense. The first patch would release more processes than\nit probably should, which would translate to more processes in the\nkernel's run queue = higher load average. This would only make a\ndifference if the additional processes were not able to get the lock\nwhen they finally get a chance to run; which would happen sometimes\nbut not always. So the small improvement for patch2 is pretty much\nwhat I would've expected.\n\n> I could run benchmarks on 7.1 if that would be interesting.\n\nYes, if you have the time to run the same test conditions on 7.1, it\nwould be good.\n\nAlso, per recent discussions, it would probably be better to try to keep\nthe total number of transactions the same for all runs (maybe about\n10000 transactions total, so -t would vary between 10000 and 200 as\n-c ranges from 1 to 50).\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 03 Jan 2002 20:02:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: LWLock contention: I think I understand the problem "
},
{
"msg_contents": "On Thu, Jan 03, 2002 at 11:17:04PM +0100, Fredrik Estreen wrote:\nFredrik:\n\tNot sure who or where this should go to, but here is what I did,\n\thope it makes some sense.. The box normally runs Oracle; it's not\n\tbusy at the moment.. I sent a copy to pgsql-hackers@postgresql.org,\n\tI think that is the correct address. \n\n\tFor the SMP test (I think it was using pgbench)\n\tdownloaded the 7.2b4 source\n\tbuilt postgres from source into the /usr/local tree\n\tmanually started the db with defaults\n built pgbench\n\n\thardware is a 2-processor Dell box, 1.2 GHz Xeon processors\n\t4G memory with RAID SCSI disks\n\tLinux seti 2.4.7-10smp #1 SMP Thu Sep 6 17:09:31 EDT 2001 i686 unknown\n\n\tset up pgbench with: pgbench -i testdb -c 50 -t 40 -s 10\n\tchanged postgresql.conf parameters\n\t\twal_files = 4 # range 0-64\n\t\tshared_buffers = 200 # 2*max_connections, min 16\n\t\n\ttest run as pgbench testdb -- output follows:\n\n[kklatt@seti pgbench]$ pgbench testdb -c 50 -t 40 -s 10\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 50\nnumber of transactions per client: 40\nnumber of transactions actually processed: 2000/2000\ntps = 101.847384(including connections establishing)\ntps = 104.345472(excluding connections establishing)\n\nHope this makes some sense..\n\nKenny Klatt\nData Architect / Oracle DBA\nUniversity of Wisconsin Milwaukee\n",
"msg_date": "Thu, 3 Jan 2002 20:35:11 -0600",
"msg_from": "Kenny H Klatt <kklatt@csd.uwm.edu>",
"msg_from_op": false,
"msg_subject": "Transaction tests on SMP Linux"
},
{
"msg_contents": "Hannu Krosing wrote:\n> \n> \n> Bruce Momjian wrote:\n> \n> >Tom Lane wrote:\n> >\n> >>Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> >>\n> >>>Ok, here is a pgbench (-s 10) result on an AIX 5L box (4 way).\n> >>>\"7.2 with patch\" is for the previous patch. \"7.2 with patch (revised)\"\n> >>>is for the this patch. I see virtually no improvement.\n> >>>\n> >>If anything, the revised patch seems to make things slightly worse :-(.\n> >>That agrees with my measurement on a single CPU.\n> >>\n> >>I am inclined to use the revised patch anyway, though, because I think\n> >>it will be less prone to starvation (ie, a process repeatedly being\n> >>awoken but failing to get the lock). The original form of lwlock.c\n> >>guaranteed that a writer could not be locked out by large numbers of\n> >>readers, but I had to abandon that goal in the first version of the\n> >>patch. The second version still doesn't keep the writer from being\n> >>blocked by active readers, but it does ensure that readers queued up\n> >>behind the writer won't be released. Comments?\n> >>\n> >\n> >OK, so now we know that while the new lock code handles the select(1)\n> >problem better, we also know that on AIX the old select(1) code wasn't\n> >as bad as we thought.\n> >\n> >As to why we don't see better numbers on AIX, we are getting 100tps,\n> >which seems pretty good to me. Tatsuo, were you expecting higher than\n> >100tps on that machine? 
My hardware is listed at\n> >http://candle.pha.pa.us/main/hardware.html and I don't get over 16tps.\n> >\n> What scaling factor do you use ?\n> What OS ?\n> \n> I got from ~40 tps for -s 128 up to 50-230 tps for -s 1 or 10 on dual \n> PIII 800 on IDE\n> disk (Model=IBM-DTLA-307045) with hdparm -t the following\n\nScale 50, transactions 1000, clients 1, 5, 10, 25, 50, all around 15tps.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jan 2002 23:44:32 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, so now we know that while the new lock code handles the select(1)\n> > problem better, we also know that on AIX the old select(1) code wasn't\n> > as bad as we thought.\n> \n> It still seems that the select() blocking method should be a loser.\n\nNo question the new locking code is better. It just frustrates me we\ncan't get something to show that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jan 2002 23:46:04 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> It still seems that the select() blocking method should be a loser.\n\n> No question the new locking code is better. It just frustrates me we\n> can't get something to show that.\n\npgbench may not be the setting in which that can be shown. It's I/O\nbound to start with, and it exercises some of our other weak spots\n(viz duplicate-key checking). So I'm not really surprised that it's\nnot showing any improvement from 7.1 to 7.2.\n\nBut yeah, it'd be nice to get some cross-version comparisons on other\ntest cases.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jan 2002 23:55:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: LWLock contention: I think I understand the problem "
},
{
"msg_contents": "\n\nOn Thu, 3 Jan 2002, Bruce Momjian wrote:\n\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > OK, so now we know that while the new lock code handles the select(1)\n> > > problem better, we also know that on AIX the old select(1) code wasn't\n> > > as bad as we thought.\n> >\n> > It still seems that the select() blocking method should be a loser.\n>\n> No question the new locking code is better. It just frustrates me we\n> can't get something to show that.\n\nEven though I haven't completed controlled benchmarks yet, 7.2b4 was using\nall of my CPU time, whereas a patched version is using around half of CPU\ntime, all in user space.\n\nI think not pissing away all our time in the scheduler is a big\nimprovement!\n\n-jwb\n\n",
"msg_date": "Thu, 3 Jan 2002 20:59:11 -0800 (PST)",
"msg_from": "\"Jeffrey W. Baker\" <jwbaker@acm.org>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > OK, so now we know that while the new lock code handles the select(1)\n> > > > problem better, we also know that on AIX the old select(1) code wasn't\n> > > > as bad as we thought.\n> > >\n> > > It still seems that the select() blocking method should be a loser.\n> >\n> > No question the new locking code is better. It just frustrates me we\n> > can't get something to show that.\n> \n> Even though I haven't completed controlled benchmarks yet, 7.2b4 was using\n> all of my CPU time, whereas a patched version is using around half of CPU\n> time, all in user space.\n> \n> I think not pissing away all our time in the scheduler is a big\n> improvement!\n\nYes, the new patch is clearly better than 7.2b4. We are really hoping\nto see the patched version beat 7.1.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 4 Jan 2002 00:02:29 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "Tom Lane wrote:\n\n>Fredrik Estreen <estreen@algonet.se> writes:\n>\n>>I could run benchmarks on 7.1 if that would be interesting.\n>>\n>\n>Yes, if you have the time to run the same test conditions on 7.1, it\n>would be good.\n>\n>Also, per recent discussions, it would probably be better to try to keep\n>the total number of transactions the same for all runs (maybe about\n>10000 transactions total, so -t would vary between 10000 and 200 as\n>-c ranges from 1 to 50).\n>\n\nI'll test my original series on 7.1 and also test the constant number of \ntransactions this\nweekend. A quick test with 20 transactions and 50 clients gave ca 25 tps \nwith the latest\npatch, but I'm not sure that point is good, other loads etc.\n\nRegards\n Fredrik Estreen\n\n\n",
"msg_date": "Fri, 04 Jan 2002 07:21:54 +0100",
"msg_from": "Fredrik Estreen <estreen@algonet.se>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <hannu@tm.ee> writes:\n> > Tom Lane wrote:\n> >> Unfortunately, at low scaling factors pgbench is guaranteed to look\n> >> horrible because of contention for the \"branches\" rows.\n> >>\n> > Not really! See graph in my previous post - the database size affects\n> > performance much more !\n> \n> But the way that pgbench is currently set up, you can't really tell the\n> difference between database size effects and contention effects, because\n> you can't vary one while holding the other constant.\n\nWhat I meant was that a small -s (lots of contention and a small database) \nruns much faster than a big -s (low contention and a big database)\n\n> I based my comments on having done profiles that show most of the CPU\n> time going into attempts to acquire row locks for updates and/or\n> checking of dead tuples in _bt_check_unique. So at least in the\n> conditions I was using (single CPU) I think those are the bottlenecks.\n> I don't have any profiles for SMP machines, yet.\n\nYou have good theoretical grounds for your claim - it just does not fit \nwith real-world tests. It may be due to contention in some other places \nbut not on the branches table (i.e., small scale factor)\n\n--------------\nHannu\n\n",
"msg_date": "Fri, 04 Jan 2002 13:45:43 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "Tom Lane wrote:\n\n>Attached is a modification that prevents additional waiters from being\n>released until the first releasee has a chance to run and acquire the\n>lock. Would you try this and see if it's better or not in your test\n>cases? It doesn't seem to help on a single CPU, but maybe on multiple\n>CPUs it'll make a difference.\n>\nHere are some results for Linux 2.2 on a Dual PentiumPro 200MHz, SCSI \ndisks and way too little RAM\n(just 128MB). I observed the loadavg. with the three different 7.2 \nversions and 50 clients, without patch\nthe load stayed low (2-3), with patch no1 very high (12-14) and with \npatch no2 between the two others\n(6-8). Either of the patches seems to be a big win, with the second version \nbeing slightly better. I could run\nbenchmarks on 7.1 if that would be interesting. I used the same \nbenchmark database with a\nVACUUM FULL between each version of the backend tested. I also re-ran \nsome of the tests on the same\ndatabase after I tested all loads on the different versions, and numbers \nstayed very similar (difference:\n 0.1-0.3 tps).\n\nBest regards\n Fredrik Estreen",
"msg_date": "Fri, 04 Jan 2002 12:58:28 +0100",
"msg_from": "Fredrik Estreen <estreen@algonet.se>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "I have gotten my hands on a Linux 4-way SMP box (courtesy of my new\nemployer Red Hat), and have obtained pgbench results that look much\nmore promising than Tatsuo's. It seems the question is not so much\n\"why is 7.2 bad?\" as \"why is it bad on AIX?\"\n\nThe test machine has 4 550MHz Pentium III CPUs, 5Gb RAM, and a passel\nof SCSI disks hanging off ultra-wide controllers. It's presently\nrunning Red Hat 7.1 enterprise release, kernel version 2.4.2-2enterprise\n#1 SMP. (Not the latest thing, but perhaps representative of what\npeople are running in production situations. I can get it rebooted with\nother kernel versions if anyone thinks the results will be interesting.)\n\nFor the tests, the postmasters were started with parameters\n\tpostmaster -F -N 100 -B 3800\n(the -B setting chosen to fit within 32Mb, which is the shmmax setting\non stock Linux). -F is not very representative of production use,\nbut I thought it was appropriate since we are trying to measure CPU\neffects not disk I/O. pgbench scale factor is 50; xacts/client varied\nso that each run executes 10000 transactions, per this script:\n\n#! /bin/sh\n\nDB=bench\ntotxacts=10000\n\nfor c in 1 2 3 4 5 6 10 25 50 100\ndo\n t=`expr $totxacts / $c`\n psql -c 'vacuum' $DB\n psql -c 'checkpoint' $DB\n echo \"===== sync ======\" 1>&2\n sync;sync;sync;sleep 10\n echo $c concurrent users... 1>&2\n pgbench -n -t $t -c $c $DB\ndone\n\nThe results are shown in the attached plot. Interesting, hmm?\nThe \"sweet spot\" at 3 processes might be explained by assuming that\npgbench itself chews up the fourth CPU.\n\nThis still leaves me undecided whether to apply the first or second\nversion of the LWLock patch.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 04 Jan 2002 18:13:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: LWLock contention: I think I understand the problem "
},
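Tom's "-B 3800 ... to fit within 32Mb" choice above can be sanity-checked with a little arithmetic. The sketch below assumes the default 8 KB block size and ignores PostgreSQL's other shared-memory overhead (per-connection state, FSM, and so on), so it is only a rough lower bound on the segment size:

```shell
#!/bin/sh
# Rough check: does the buffer pool alone fit under stock Linux SHMMAX?
buffers=3800
blocksize=8192                  # default BLCKSZ
shmmax=33554432                 # 32 MB, the stock Linux 2.4 default
bytes=`expr $buffers \* $blocksize`
if [ "$bytes" -lt "$shmmax" ]
then
    echo "$bytes bytes of buffers: fits under SHMMAX"
else
    echo "$bytes bytes of buffers: too large for SHMMAX"
fi
```

3800 buffers come to 31129600 bytes, leaving roughly 2 MB of headroom under the 32 MB limit for the rest of the shared segment.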
{
"msg_contents": "> This still leaves me undecided whether to apply the first or second\n> version of the LWLock patch.\n\nI vote for the second. Logically it makes more sense, and my guess is\nthat the first patch wins only if there are enough CPU's available to\nrun all the newly-awoken processes.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 4 Jan 2002 19:32:37 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "> The results are shown in the attached plot. Interesting, hmm?\n> The \"sweet spot\" at 3 processes might be explained by assuming that\n> pgbench itself chews up the fourth CPU.\n\nTo test the theory, you could run pgbench on a different machine.\n\nBTW, could you run the test while varying the number of CPUs? I'm\ninterested in how 7.2 scales with the number of processors.\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 05 Jan 2002 10:25:32 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem "
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> BTW, could you run the test with changing the number of CPUs?\n\nI'm not sure how to do that (and I don't have root on that machine,\nso probably couldn't do it myself anyway). Maybe I can arrange\nsomething with the admins next week.\n\nBTW, I am currently getting some interesting results from adjusting\nSPINS_PER_DELAY in s_lock.c. Will post results when I finish the\nset of test runs.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Jan 2002 20:44:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: LWLock contention: I think I understand the problem "
},
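The SPINS_PER_DELAY sweep Tom mentions can be scripted. This sketch only demonstrates the edit step, on a throwaway copy of the define; in a real run you would point the sed at src/backend/storage/lmgr/s_lock.c (the location as of the 7.2 tree, to my recollection) and rebuild and restart the postmaster between values:

```shell
#!/bin/sh
# Rewrite the SPINS_PER_DELAY define for each trial value.
# Operates on a demo file so the substitution itself can be verified.
src=/tmp/s_lock_demo.c
echo "#define SPINS_PER_DELAY 100" > $src
for spins in 10 100 1000
do
    sed "s/#define SPINS_PER_DELAY.*/#define SPINS_PER_DELAY $spins/" \
        $src > $src.new && mv $src.new $src
    grep SPINS_PER_DELAY $src
done
rm -f $src
```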
{
"msg_contents": "\n\nTom Lane wrote:\n\n>I have gotten my hands on a Linux 4-way SMP box (courtesy of my new\n>employer Red Hat), and have obtained pgbench results that look much\n>more promising than Tatsuo's. It seems the question is not so much\n>\"why is 7.2 bad?\" as \"why is it bad on AIX?\"\n>\nCould you rerun some of the tests on the same hardware but with a \nuniprocessor kernel\nto get another reference point ?\n\nThere were some reports about very poor insert performance on 4way vs 1way\nprocessors.\n\nYou could also try timing pgbench -i to compare raw insert performance.\n\n>The test machine has 4 550MHz Pentium III CPUs, 5Gb RAM, and a passel\n>of SCSI disks hanging off ultra-wide controllers. It's presently\n>running Red Hat 7.1 enterprise release, kernel version 2.4.2-2enterprise\n>#1 SMP. (Not the latest thing, but perhaps representative of what\n>people are running in production situations. I can get it rebooted with\n>other kernel versions if anyone thinks the results will be interesting.)\n>\n>\n>For the tests, the postmasters were started with parameters\n>\tpostmaster -F -N 100 -B 3800\n>(the -B setting chosen to fit within 32Mb, which is the shmmax setting\n>on stock Linux). -F is not very representative of production use,\n>but I thought it was appropriate since we are trying to measure CPU\n>effects not disk I/O. pgbench scale factor is 50; xacts/client varied\n>so that each run executes 10000 transactions, per this script:\n>\n>#! /bin/sh\n>\n>DB=bench\n>totxacts=10000\n>\n>for c in 1 2 3 4 5 6 10 25 50 100\n>do\n> t=`expr $totxacts / $c`\n> psql -c 'vacuum' $DB\n>\nShould this not be 'vacuum full' ?\n\n>\n> psql -c 'checkpoint' $DB\n> echo \"===== sync ======\" 1>&2\n> sync;sync;sync;sleep 10\n> echo $c concurrent users... 1>&2\n> pgbench -n -t $t -c $c $DB\n>done\n>\n-----------\nHannu\n\n\n",
"msg_date": "Sat, 05 Jan 2002 22:54:29 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Could you rerun some of the tests on the same hardware but with \n> uniprocesor kernel\n\nI don't have root on that machine, but will see what I can arrange next\nweek.\n\n> There were some reports about very poor insert performance on 4way vs 1way\n> processors.\n\nIIRC, that was fixed for 7.2. (As far as I can tell from profiling,\ncontention for the shared free-space-map is a complete nonissue, at\nleast in this test. That was something I was a tad worried about\nwhen I wrote the FSM code, but the tactic of locally caching a current\ninsertion page seems to have sidestepped the problem nicely.)\n\n>> psql -c 'vacuum' $DB\n>> \n> Should this not be 'vacuum full' ?\n\nDon't see why I should expend the extra time to do a vacuum full.\nThe point here is just to ensure a comparable starting state for all\nthe runs.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 05 Jan 2002 16:44:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: LWLock contention: I think I understand the problem "
},
{
"msg_contents": "This may be of interest on this topic..\n\nhttp://kerneltrap.org/article.php?sid=461\n\nMost of this is way above my head, but it's still interesting and ties \nin with possible current bad performance of SMP under Linux..[?] \n Anyways.. apologies if this is spam..\n\n\nAshley Cambrell\n\n\n\n",
"msg_date": "Sun, 06 Jan 2002 23:01:44 +1100",
"msg_from": "Ashley Cambrell <ash@freaky-namuh.com>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "On Sun, 2002-01-06 at 02:44, Tom Lane wrote: \n> Hannu Krosing <hannu@tm.ee> writes:\n> > Could you rerun some of the tests on the same hardware but with \n> > uniprocesor kernel\n> \n> I don't have root on that machine, but will see what I can arrange next\n> week.\n> \n> > There were some reports about very poor insert performance on 4way vs 1way\n> > processors.\n> \n> IIRC, that was fixed for 7.2. (As far as I can tell from profiling,\n> contention for the shared free-space-map is a complete nonissue, at\n> least in this test. That was something I was a tad worried about\n> when I wrote the FSM code, but the tactic of locally caching a current\n> insertion page seems to have sidestepped the problem nicely.)\n> \n> >> psql -c 'vacuum' $DB\n> >> \n> > Should this not be 'vacuum full' ?\n> \n> Don't see why I should expend the extra time to do a vacuum full.\n> The point here is just to ensure a comparable starting state for all\n> the runs.\n\nOk. I thought that you would also want to compare performance for different \nconcurrency levels where the number of dead tuples matters more as shown by\nthe attached graph. It is for Dual PIII 800 on RH 7.2 with IDE hdd, scale 5,\n1-25 concurrent backends and 10000 trx per run",
"msg_date": "07 Jan 2002 03:32:40 +0500",
"msg_from": "Hannu Krosing <hannu@krosing.net>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "On Mon, 2002-01-07 at 06:37, Tom Lane wrote:\n> Hannu Krosing <hannu@krosing.net> writes:\n> > Should this not be 'vacuum full' ?\n> >> \n> >> Don't see why I should expend the extra time to do a vacuum full.\n> >> The point here is just to ensure a comparable starting state for all\n> >> the runs.\n> \n> > Ok. I thought that you would also want to compare performance for different \n> > concurrency levels where the number of dead tuples matters more as shown by\n> > the attached graph. It is for Dual PIII 800 on RH 7.2 with IDE hdd, scale 5,\n> > 1-25 concurrent backends and 10000 trx per run\n> \n> VACUUM and VACUUM FULL will provide the same starting state as far as\n> number of dead tuples goes: none. \n\nI misinterpreted the fact that new VACUUM will skip locked pages - there\nare none if run independently.\n\n> So that doesn't explain the\n> difference you see. My guess is that VACUUM FULL looks better because\n> all the new tuples will get added at the end of their tables; possibly\n> that improves I/O locality to some extent. After a plain VACUUM the\n> system will tend to allow each backend to drop new tuples into a\n> different page of a relation, at least until the partially-empty pages\n> all fill up.\n> \n> What -B setting were you using?\n\nI had the following in the postgresql.conf\n\nshared_buffers = 4096\n\n--------------\nHannu\n\nI attach a similar run, only with scale 50, from my desktop computer\n(uniprocessor Athlon 850MHz, RedHat 7.1) \n\nBTW, both were running unpatched PostgreSQL 7.2b4.\n\n--------------\nHannu",
"msg_date": "07 Jan 2002 04:12:07 +0500",
"msg_from": "Hannu Krosing <hannu@krosing.net>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "Hannu Krosing <hannu@krosing.net> writes:\n> Should this not be 'vacuum full' ?\n>> \n>> Don't see why I should expend the extra time to do a vacuum full.\n>> The point here is just to ensure a comparable starting state for all\n>> the runs.\n\n> Ok. I thought that you would also want to compare performance for different \n> concurrency levels where the number of dead tuples matters more as shown by\n> the attached graph. It is for Dual PIII 800 on RH 7.2 with IDE hdd, scale 5,\n> 1-25 concurrent backends and 10000 trx per run\n\nVACUUM and VACUUM FULL will provide the same starting state as far as\nnumber of dead tuples goes: none. So that doesn't explain the\ndifference you see. My guess is that VACUUM FULL looks better because\nall the new tuples will get added at the end of their tables; possibly\nthat improves I/O locality to some extent. After a plain VACUUM the\nsystem will tend to allow each backend to drop new tuples into a\ndifferent page of a relation, at least until the partially-empty pages\nall fill up.\n\nWhat -B setting were you using?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 06 Jan 2002 20:37:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: LWLock contention: I think I understand the problem "
},
{
"msg_contents": "Hannu Krosing <hannu@krosing.net> writes:\n> I misinterpreted the fact that new VACUUM will skip locked pages\n\nHuh? There is no such \"fact\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 06 Jan 2002 21:32:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: LWLock contention: I think I understand the problem "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <hannu@krosing.net> writes:\n> > I misinterpreted the fact that new VACUUM will skip locked pages\n> \n> Huh? There is no such \"fact\".\n> \n> regards, tom lane\n\nWas it not the case that instead of locking whole tables the new \nvacuum locks only one page at a time. If it can't lock that page it \njust moves to next one instead of waiting for other backend to release \nits lock. At least I remember that this was the (proposed?) behaviour \nonce.\n\n---------------\nHannu\n",
"msg_date": "Mon, 07 Jan 2002 09:01:15 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Was it not the case that instead of locking whole tables the new \n> vacuum locks only one page at a time. If it can't lock that page it \n> just moves to next one instead of waiting for other backend to release \n> its lock.\n\nNo, it just waits till it can get the page lock.\n\nThe only conditional part of the new vacuum algorithm is truncation of\nthe relation file (releasing empty end pages back to the OS). That\nrequires exclusive lock on the relation, which it will not be able to\nget if there are any other users of the relation. In that case it\nforgets about truncation and just leaves the empty pages as free space.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Jan 2002 11:39:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: LWLock contention: I think I understand the problem "
},
{
"msg_contents": "Hi,\n\nI received a mail that report a problem using fetch_fields() with view.\nIt seems that it return good data but with a length of 0.\n\nMaybe someone can take look and check if it's right.\n\nIs the fix concerning alias on columns name as been applied, it seems not\nto me ?\nrecall:\n\n select column1 as col1 from my table\n\nreturn a colum name as column1 but expected should be col1 isn't it ?\n\nRegards\n\n",
"msg_date": "Mon, 07 Jan 2002 18:51:19 +0100",
"msg_from": "Gilles DAROLD <gilles@darold.net>",
"msg_from_op": false,
"msg_subject": "Problem with view and fetch_fields"
},
{
"msg_contents": "Gilles DAROLD wrote:\n> \n> Hi,\n> \n> I received a mail that report a problem using fetch_fields() with view.\n> It seems that it return good data but with a length of 0.\n> \n> Maybe someone can take look and check if it's right.\n\nDetails please.\n\n> \n> Is the fix concerning alias on columns name as been applied, it seems not\n> to me ?\n\nIt was already applied. What kind of source you are\nseeing ?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Tue, 08 Jan 2002 11:20:30 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with view and fetch_fields"
},
{
"msg_contents": "I know it's a bit too late, but here are unpatched 7.2b3 and patched 7.2b4\nresults for pgbench\nscale factor 50 on a 8 MIPS r10000 sgi-Irix machine with 1Gb\nhope it helps",
"msg_date": "Mon, 14 Jan 2002 14:12:42 +0100",
"msg_from": "Luis Amigo <lamigo@atc.unican.es>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > Was it not the case that instead of locking whole tables the new \n> > vacuum locks only one page at a time. If it can't lock that page it \n> > just moves to next one instead of waiting for other backend to release \n> > its lock.\n> \n> No, it just waits till it can get the page lock.\n> \n> The only conditional part of the new vacuum algorithm is truncation of\n> the relation file (releasing empty end pages back to the OS). That\n> requires exclusive lock on the relation, which it will not be able to\n> get if there are any other users of the relation. In that case it\n> forgets about truncation and just leaves the empty pages as free space.\n\nIf we have one page with data, and 100 empty pages, and another page\nwith data on the end, will VACUUM shrink that to two pages if no one is\naccessing the table, or does it do _only_ intra-page moves.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 23 Jan 2002 14:10:27 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> If we have one page with data, and 100 empty pages, and another page\n> with data on the end, will VACUUM shrink that to two pages if no one is\n> accessing the table, or does it do _only_ intra-page moves.\n\nThe only way to shrink that is VACUUM FULL.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Jan 2002 14:15:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: LWLock contention: I think I understand the problem "
}
] |
[
{
"msg_contents": "How does PG handle DDL statements that are wrapped in a transaction?\nDoes it roll them back if one fails, or is it like Oracle?\n\nWhere is this documented?\n\nThanks.\n",
"msg_date": "29 Dec 2001 13:12:27 -0800",
"msg_from": "james@unifiedmind.com (James Thornton)",
"msg_from_op": true,
"msg_subject": "DDLs in Transactions...?"
},
{
"msg_contents": "Looks like it supports transactions:\n\n[postgres@drow postgres]$ /usr/local/pgsql/bin/psql template1\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\ntemplate1=# begin;\nBEGIN\ntemplate1=# create table intransaction (field1 int);\nCREATE\ntemplate1=# rollback;\nROLLBACK\ntemplate1=# select * from intransaction;\nERROR: Relation 'intransaction' does not exist\ntemplate1=# begin;\nBEGIN\ntemplate1=# create table intransaction (field1 int);\nCREATE\ntemplate1=# commit;\nCOMMIT\ntemplate1=# select * from intransaction ;\n field1 \n--------\n(0 rows)\n\ntemplate1=# begin; \nBEGIN\ntemplate1=# drop table intransaction;\nDROP\ntemplate1=# rollback; \nROLLBACK\ntemplate1=# select * from intransaction ;\n field1 \n--------\n(0 rows)\n\ntemplate1=# begin;\nBEGIN\ntemplate1=# drop table intransaction;\nDROP\ntemplate1=# commit;\nCOMMIT\ntemplate1=# select * from intransaction;\nERROR: Relation 'intransaction' does not exist\ntemplate1=# \n\nCheers,\n\nMark Pritchard\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of James Thornton\n> Sent: Sunday, 30 December 2001 8:12 AM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] DDLs in Transactions...?\n> \n> \n> How does PG handle DDL statements that are wrapped in a transaction?\n> Does it roll them back if one fails, or is it like Oracle?\n> \n> Where is this documented?\n> \n> Thanks.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n",
"msg_date": "Fri, 4 Jan 2002 07:45:26 +1100",
"msg_from": "\"Mark Pritchard\" <mark@tangent.net.au>",
"msg_from_op": false,
"msg_subject": "Re: DDLs in Transactions...?"
}
] |
[
{
"msg_contents": "make[3]: Entering directory `/home/postgres/pgsql/src/backend/parser'\ngcc -O1 -Wall -Wmissing-prototypes -Wmissing-declarations -g -I../../../src/include -c -o analyze.o analyze.c\nIn file included from analyze.c:23:\n../../../src/include/parser/parse.h:160: warning: `TIME' redefined\n../../../src/include/utils/datetime.h:113: warning: this is the location of the previous definition\n\nand similarly in half a dozen other modules. This is very bad; I have\nno confidence that the correct value of the symbol is being used in each\nplace that references it. Could we rename the one in datetime.h to a\nnon-conflicting name?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 29 Dec 2001 20:23:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Latest datetime changes produce gcc complaints"
},
{
"msg_contents": "> make[3]: Entering directory `/home/postgres/pgsql/src/backend/parser'\n> gcc -O1 -Wall -Wmissing-prototypes -Wmissing-declarations -g -I../../../src/include -c -o analyze.o analyze.c\n> In file included from analyze.c:23:\n> ../../../src/include/parser/parse.h:160: warning: `TIME' redefined\n> ../../../src/include/utils/datetime.h:113: warning: this is the location of the previous definition\n> \n> and similarly in half a dozen other modules. This is very bad; I have\n> no confidence that the correct value of the symbol is being used in each\n> place that references it. Could we rename the one in datetime.h to a\n> non-conflicting name?\n\nOf course this brings up the question of whether there were any other\nrecent changes that will break ports. Thomas, can you check on that? \nI didn't see your patch and can't find it in the archives. (Of course,\nthe fact the patch didn't compile and we are thinking of RC1 tomorrow\ndoesn't help.) :-)\n\nOK, I see it now via CVS. There are almost 2k lines of code in the\npatch. It looks like DATE and TIME are the only two new defines. Can\nyou change TIME to PGTIME and DATE to PGDATE? Seems safer.\n\nDon't know about the rest of the code. Hope it works. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 29 Dec 2001 21:08:59 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Latest datetime changes produce gcc complaints"
},
{
"msg_contents": "> and similarly in half a dozen other modules. This is very bad; I have\n> no confidence that the correct value of the symbol is being used in each\n> place that references it. Could we rename the one in datetime.h to a\n> non-conflicting name?\n\nYup. Sorry. Will fix it up tonight.\n\n - Thomas\n",
"msg_date": "Mon, 31 Dec 2001 20:43:23 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Latest datetime changes produce gcc complaints"
},
{
"msg_contents": "> > ... Could we rename the one in datetime.h to a\n> > non-conflicting name?\n\nOK, renamed to \"ISOTIME\", since it is a field defined in ISO-8601. I did\na \"make clean all install\" and the regression tests pass.\n\n> Of course this brings up the question of whether there were any other\n> recent changes that will break ports. Thomas, can you check on that?\n\nNot sure what you mean here. I made changes in only a few files. Not\nsure about other recent changes of course, any more than you are. Better\nkeep testing, eh?\n\n> Don't know about the rest of the code. Hope it works. :-)\n\nWithout any more changes, it certainly works \"better\" than the old code.\nWhether it breaks a case that used to work is not known, but all of the\nregress tests plus more tests I've written all pass. Features which\ndidn't used to work now do, and some inconsistancies have been repaired.\nThese are all bug fixes, and I'll continue to poke at the docs to get\nthe features more completely illuminated (which is how I got to patching\nin the first place).\n\n - Thomas\n",
"msg_date": "Tue, 01 Jan 2002 03:06:15 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Latest datetime changes produce gcc complaints"
}
] |
[
{
"msg_contents": "This article explains the value of a constent coding style:\n\n\thttp://ezine.daemonnews.org/200112/single_coding_style.html\n\nI have added a link from the developer's FAQ.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 29 Dec 2001 23:32:11 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Constent coding style"
}
] |
[
{
"msg_contents": "Presently, the RESUME_INTERRUPTS() and END_CRIT_SECTION() macros implicitly\ndo a CHECK_FOR_INTERRUPTS(); that is, if a cancel request arrived during\nthe interrupt-free section it will be serviced immediately upon exit\nfrom the section.\n\nIt strikes me that this is a really bad idea. There are lots of places\nwhere we release one lock then acquire another, and are not expecting to\nlose control in between. The original concept of the query-cancel\nfacility was that we'd accept cancels only at *explicit*\nCHECK_FOR_INTERRUPTS points. What we actually have at the moment is\nthat cancels could be accepted in a very wide variety of places, and\nI don't believe we've considered the consequences at each such place.\n\nI am inclined to remove the ProcessInterrupts calls from\nRESUME_INTERRUPTS and END_CRIT_SECTION. Does anyone object?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 30 Dec 2001 13:46:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Are we accepting cancel interrupts too often?"
},
{
"msg_contents": "Tom Lane wrote:\n> Presently, the RESUME_INTERRUPTS() and END_CRIT_SECTION() macros implicitly\n> do a CHECK_FOR_INTERRUPTS(); that is, if a cancel request arrived during\n> the interrupt-free section it will be serviced immediately upon exit\n> from the section.\n> \n> It strikes me that this is a really bad idea. There are lots of places\n> where we release one lock then acquire another, and are not expecting to\n> lose control in between. The original concept of the query-cancel\n> facility was that we'd accept cancels only at *explicit*\n> CHECK_FOR_INTERRUPTS points. What we actually have at the moment is\n> that cancels could be accepted in a very wide variety of places, and\n> I don't believe we've considered the consequences at each such place.\n> \n> I am inclined to remove the ProcessInterrupts calls from\n> RESUME_INTERRUPTS and END_CRIT_SECTION. Does anyone object?\n\nI read the nice description of RESUME_INTERRUPTS at the top of\nmiscadmin.c and I agree that the idea of allowing a CHECK_FOR_INTERRUPTS\ncall is not the same as making the call.\n\nI started to look at when this nice code was added to determine if this\nwas part of the original design or added later and found you wrote it\nyourself, so I guess we don't have to ask anyone to make sure there\nisn't something were are missing.\n\nLooking at CHECK_FOR_INTERRUPTS calls, those are all in safe places,\nwhile the RESUME_INTERRUPTS are not in obviously safe places, so I agree\nwith your suggested change.\n\n---------------------------------------------------------------------------\n\n Revision 1.77 / (download) - annotate - [select for diffs] , Sun Jan 14\n05:08:16 2001 UTC (11 months, 2 weeks ago) by tgl\nChanges since 1.76: +73 -36 lines\nDiff to previous 1.76\n\nRestructure backend SIGINT/SIGTERM handling so that 'die' interrupts\nare treated more like 'cancel' interrupts: the signal handler sets a\nflag that is examined at well-defined spots, rather than trying to cope\nwith 
an interrupt that might happen anywhere. See pghackers discussion\nof 1/12/01.\n\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 31 Dec 2001 11:14:33 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Are we accepting cancel interrupts too often?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I started to look at when this nice code was added to determine if this\n> was part of the original design or added later and found you wrote it\n> yourself, so I guess we don't have to ask anyone to make sure there\n> isn't something were are missing.\n\nAs far as I can recall my thinking at the time, it went like so:\n\"We *should* be able to accept a cancel interrupt anywhere we are not\nactually in the midst of modifying shared-memory data structures,\nbecause after all the database system is supposed to be robust against\ncrashes, and those could happen anyplace\".\n\nBut the fallacy in equating a cancel to a crash is that we have rather\nextensive logic for coping with a crash (including reinitializing shared\nmemory from scratch). A cancel will only provoke elog cleanup, which is\nnot nearly as thorough. For example, it's not obvious that shared\nmemory structures that are protected by different locks couldn't get out\nof sync.\n\n\nBTW, I spent some time yesterday trying to use this worry to explain my\nlatest favorite bugaboo, the duplicate-rows complaints we've gotten from\na few people. It is easy to see that a cancel being accepted at the\nright place (exit from the first WriteBuffer in heap_update) could leave\nan updated tuple created and its buffer marked dirty, while the old\ntuple's buffer is not yet marked dirty and might therefore be discarded\nunwritten. (The WAL entry is correct but will never be consulted unless\nthere's a crash.) However, this scenario doesn't seem to explain the\nfailures because the cancel would lead to transaction abort, so the\nupdated tuple should never be considered good anyway. Back to the\ndrawing board...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 Dec 2001 11:32:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Are we accepting cancel interrupts too often? "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I started to look at when this nice code was added to determine if this\n> > was part of the original design or added later and found you wrote it\n> > yourself, so I guess we don't have to ask anyone to make sure there\n> > isn't something were are missing.\n> \n> As far as I can recall my thinking at the time, it went like so:\n> \"We *should* be able to accept a cancel interrupt anywhere we are not\n> actually in the midst of modifying shared-memory data structures,\n> because after all the database system is supposed to be robust against\n> crashes, and those could happen anyplace\".\n> \n> But the fallacy in equating a cancel to a crash is that we have rather\n> extensive logic for coping with a crash (including reinitializing shared\n> memory from scratch). A cancel will only provoke elog cleanup, which is\n> not nearly as thorough. For example, it's not obvious that shared\n> memory structures that are protected by different locks couldn't get out\n> of sync.\n> \n\nYes, I saw the RESUME_INTERRUPTS in SpinLockRelease(). It seems very\naggresive to allow a query cancel there.\n\n> \n> BTW, I spent some time yesterday trying to use this worry to explain my\n> latest favorite bugaboo, the duplicate-rows complaints we've gotten from\n> a few people. It is easy to see that a cancel being accepted at the\n> right place (exit from the first WriteBuffer in heap_update) could leave\n> an updated tuple created and its buffer marked dirty, while the old\n> tuple's buffer is not yet marked dirty and might therefore be discarded\n> unwritten. (The WAL entry is correct but will never be consulted unless\n> there's a crash.) However, this scenario doesn't seem to explain the\n> failures because the cancel would lead to transaction abort, so the\n> updated tuple should never be considered good anyway. 
Back to the\n> drawing board...\n\nI thought we were seeing duplicates in 7.1, which didn't have this code.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 31 Dec 2001 11:41:56 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Are we accepting cancel interrupts too often?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I thought we were seeing duplicates in 7.1, which didn't have this code.\n\nNo, the query-cancel stuff is the same in 7.1.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 Dec 2001 12:02:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Are we accepting cancel interrupts too often? "
}
] |
[
{
"msg_contents": "Hi all!!\n\nI'm testing 7.2b4 and I have this (little problem) when importing the\n7.1.3 databases:\n\n2 of the databases are owner by a gone user (not in the system\nanymore) That gives errors when recreating databases.\n\nI'd like to see this databases belong to me (that one was easy : change\nsysuid in pg_databases); however, how do I change table owner, permissions\netc..\n\nRegards\n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n",
"msg_date": "Sun, 30 Dec 2001 20:12:00 +0100 (MET)",
"msg_from": "Olivier PRENANT <ohp@pyrenet.fr>",
"msg_from_op": true,
"msg_subject": "Little question on owners"
}
] |
[
{
"msg_contents": "On 12/22/2001 12:50:08 AM Bruce Momjian wrote:\n> > Who contributed it? Did Maarten Boekhold? I would like to merge my \ndata\n> > type and table prefix code into the main tree, if this is where it \nwill\n> > now be officially maintained.\n> \n> contrib/README shows:\n> \n> dbase -\n> Converts from dbase/xbase to PostgreSQL\n> by Ivan Baldo, lubaldo@adinet.com.uy\n\nHmmm, this was most certainly originally written by me, with (according to \nthe README) later updates from Frank Koormann (fkoorman@usf.uni-osnabrueck.de) (dbf.c). Not to be picky, but listing Ivan as the (only) author doesn't \nsound right to me. Some of the files have funny CVS history entries, i.e. \nreferences to WAL locking code?!?\n\nI've also got a version on my own systems that *appears* to be newer, \nsupposedly with bug fixes and added features. I'll have to verify that \nthough.\n\nbtw. there's a listing of my email address in dbf.h that doesn't exist \nanymore. Can somebody update it to read maarten.boekhold@reuters.com?\n\nMaarten\n\n----\n\nMaarten Boekhold, maarten.boekhold@reuters.com\n\nReuters Consulting / TIBCO Finance Technology Inc.\nDubai Media City\nBuilding 1, 5th Floor\nPO Box 1426\nDubai, United Arab Emirates\ntel:+971(0)4 3918300 ext 249\nfax:+971(0)4 3918333\nmob:+971(0)505526539\n\n-------------------------------------------------------------- --\n Visit our Internet site at http://www.reuters.com\n\nAny views expressed in this message are those of the individual\nsender, except where the sender specifically states them to be\nthe views of Reuters Ltd.\n\nOn 12/22/2001 12:50:08 AM Bruce Momjian wrote:\n> > Who contributed it? Did Maarten Boekhold? 
I would like to merge my data\n> > type and table prefix code into the main tree, if this is where it will\n> > now be officially maintained.\n> \n> contrib/README shows:\n> \n> dbase -\n> Converts from dbase/xbase to PostgreSQL\n> by Ivan Baldo, lubaldo@adinet.com.uy\n\nHmmm, this was most certainly originally written by me, with (according to the README) later updates from Frank Koormann (fkoorman@usf.uni-osnabrueck.de) (dbf.c). Not to be picky, but listing Ivan as the (only) author doesn't sound right to me. Some of the files have funny CVS history entries, i.e. references to WAL locking code?!?\n\nI've also got a version on my own systems that *appears* to be newer, supposedly with bug fixes and added features. I'll have to verify that though.\n\nbtw. there's a listing of my email address in dbf.h that doesn't exist anymore. Can somebody update it to read maarten.boekhold@reuters.com?\n\nMaarten\n\n----\n\nMaarten Boekhold, maarten.boekhold@reuters.com\n\nReuters Consulting / TIBCO Finance Technology Inc.\nDubai Media City\nBuilding 1, 5th Floor\nPO Box 1426\nDubai, United Arab Emirates\ntel:+971(0)4 3918300 ext 249\nfax:+971(0)4 3918333\nmob:+971(0)505526539\n\n-------------------------------------------------------------- --\n Visit our Internet site at http://www.reuters.com\n\nAny views expressed in this message are those of the individual\nsender, except where the sender specifically states them to be\nthe views of Reuters Ltd.",
"msg_date": "Mon, 31 Dec 2001 15:56:57 +0400",
"msg_from": "Maarten.Boekhold@reuters.com",
"msg_from_op": true,
"msg_subject": "Re: contrib/dbase"
},
{
"msg_contents": "> btw. there's a listing of my email address in dbf.h that doesn't exist \n> anymore. Can somebody update it to read maarten.boekhold@reuters.com?\n\nOK, updated. I found your email in two other files.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: contrib/README\n===================================================================\nRCS file: /cvsroot/pgsql/contrib/README,v\nretrieving revision 1.50\ndiff -c -r1.50 README\n*** contrib/README\t2001/10/12 23:19:09\t1.50\n--- contrib/README\t2001/12/31 13:30:22\n***************\n*** 44,50 ****\n \n dbase -\n \tConverts from dbase/xbase to PostgreSQL\n! \tby Ivan Baldo, lubaldo@adinet.com.uy\n \n dblink -\n \tAllows remote query execution\n--- 44,52 ----\n \n dbase -\n \tConverts from dbase/xbase to PostgreSQL\n! \tby Maarten.Boekhold <Maarten.Boekhold@reuters.com>,\n! \t Frank Koormann <fkoorman@usf.uni-osnabrueck.de>,\n! \t Ivan Baldo <lubaldo@adinet.com.uy>\n \n dblink -\n \tAllows remote query execution\nIndex: contrib/dbase/dbf.h\n===================================================================\nRCS file: /cvsroot/pgsql/contrib/dbase/dbf.h,v\nretrieving revision 1.4\ndiff -c -r1.4 dbf.h\n*** contrib/dbase/dbf.h\t2001/11/05 17:46:22\t1.4\n--- contrib/dbase/dbf.h\t2001/12/31 13:30:22\n***************\n*** 2,8 ****\n declares routines for reading and writing xBase-files (.dbf), and\n associated structures\n \n! Maarten Boekhold (boekhold@cindy.et.tudelft.nl) 29 oktober 1995\n */\n \n #ifndef _DBF_H\n--- 2,8 ----\n declares routines for reading and writing xBase-files (.dbf), and\n associated structures\n \n! 
Maarten Boekhold (maarten.boekhold@reuters.com) 29 oktober 1995\n */\n \n #ifndef _DBF_H\nIndex: contrib/dbase/dbf2pg.c\n===================================================================\nRCS file: /cvsroot/pgsql/contrib/dbase/dbf2pg.c,v\nretrieving revision 1.7\ndiff -c -r1.7 dbf2pg.c\n*** contrib/dbase/dbf2pg.c\t2001/12/30 23:09:41\t1.7\n--- contrib/dbase/dbf2pg.c\t2001/12/31 13:30:23\n***************\n*** 1,7 ****\n /* This program reads in an xbase-dbf file and sends 'inserts' to an\n PostgreSQL-server with the records in the xbase-file\n \n! M. Boekhold (boekhold@cindy.et.tudelft.nl) okt. 1995\n oktober 1996: merged sources of dbf2msql.c and dbf2pg.c\n oktober 1997: removed msql support\n */\n--- 1,7 ----\n /* This program reads in an xbase-dbf file and sends 'inserts' to an\n PostgreSQL-server with the records in the xbase-file\n \n! M. Boekhold (maarten.boekhold@reuters.com) okt. 1995\n oktober 1996: merged sources of dbf2msql.c and dbf2pg.c\n oktober 1997: removed msql support\n */\nIndex: contrib/dbase/endian.c\n===================================================================\nRCS file: /cvsroot/pgsql/contrib/dbase/endian.c,v\nretrieving revision 1.2\ndiff -c -r1.2 endian.c\n*** contrib/dbase/endian.c\t2001/10/25 05:49:19\t1.2\n--- contrib/dbase/endian.c\t2001/12/31 13:30:23\n***************\n*** 1,4 ****\n! /* Maarten Boekhold (boekhold@cindy.et.tudelft.nl) oktober 1995 */\n \n #include <sys/types.h>\n #include \"dbf.h\"\n--- 1,4 ----\n! /* Maarten Boekhold (maarten.boekhold@reuters.com) oktober 1995 */\n \n #include <sys/types.h>\n #include \"dbf.h\"",
"msg_date": "Mon, 31 Dec 2001 08:31:39 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib/dbase"
},
{
"msg_contents": "> Hmmm, this was most certainly originally written by me, with (according to \n> the README) later updates from Frank Koormann (fkoorman@usf.uni-osnabrueck.de) (dbf.c). Not to be picky, but listing Ivan as the (only) author doesn't \n> sound right to me. Some of the files have funny CVS history entries, i.e. \n> references to WAL locking code?!?\n> \n> I've also got a version on my own systems that *appears* to be newer, \n> supposedly with bug fixes and added features. I'll have to verify that \n> though.\n\nPlease send over some improvements if you have them. Thanks.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jan 2002 03:03:42 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib/dbase"
}
] |
[
{
"msg_contents": "\n\nThe one variant I'd like it to accept is 'yyyy-mm-ddThh:nn:ss', as in\n'2001-12-31T13:30:46'. This is the ISO-8601 format that perl's Time::Piece and I\nthink at least one other perl date module use as their 'standard ISO' output\nformat, and the default format that's closest to what Postgresql 7.1.3 seems to\naccept. For now, I'm taking a string that looks like the above, replacing the T\nwith a space with Perl, and then inserting the value into Postgresql. Not a huge\ndeal, of course, but it might make it a little more convenient for some of us.\nThanks,\n\nWes Sheldahl\n\n\n\nThomas Lockhart <lockhart%fourpalms.org@interlock.lexmark.com> on 12/28/2001\n10:39:42 PM\n\nPlease respond to lockhart%fourpalms.org@interlock.lexmark.com\n\nTo: Hackers List <pgsql-hackers%postgresql.org@interlock.lexmark.com>, General\n Postgres List <pgsql-general%postgresql.org@interlock.lexmark.com>\ncc: (bcc: Wesley Sheldahl/Lex/Lexmark)\nSubject: [GENERAL] date/time formats in 7.2\n\n\nFor 7.2, to support some ISO-8601 variants, I'm tightening up the date\ndelimiter parsing to require the same delimiter to be used between all\nparts of a date.\n\nDoes anyone use the German date notation for PostgreSQL? If so, what is\nthe actual format you input? The reasons I'm asking are:\n\no I had recalled that the format was \"dd.mm/yyyy\", but actually\nPostgreSQL emits \"dd.mm.yyyy\".\n\no By tightening up the parsing, \"dd.mm.yyyy\" would be accepted, but\n\"dd.mm/yyyy\", \"yyyy.mm-dd\", etc would not.\n\no The stricter parsing in this area would allow more general parsing\nelsewhere, enabling other variants such as\n\n yyyymmddThhmmss\n yyyymmdd hhmmss.ss-zz\n Thhmmss-zz\n\nWith these changes, more formats should be correctly handled, including\nsome edge cases which should have worked but seemed not to; the current\nregression tests still all pass. As an example of edge case troubles,\n7.1 accepts both of the following:\n\n timestamp '2001-12-27 04:05:06-08'\n timestamp '2001-12-27 040506 -08'\n\nBut rejects the latter if the space before the time zone is removed:\n\n timestamp '2001-12-27 040506-08'\n\nComments? Suggestions?\n\n - Thomas\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n\n\n\n\n\n",
"msg_date": "Mon, 31 Dec 2001 11:18:50 -0500",
"msg_from": "wsheldah@lexmark.com",
"msg_from_op": true,
"msg_subject": "Re: date/time formats in 7.2"
},
{
"msg_contents": "> The one variant I'd like it to accept is 'yyyy-mm-ddThh:nn:ss', as in\n> '2001-12-31T13:30:46'. This is the ISO-8601 format that perl's Time::Piece and I\n> think at least one other perl date module use as their 'standard ISO' output\n> format, and the default format that's closest to what Postgresql 7.1.3 seems to\n> accept. For now, I'm taking a string that looks like the above, replacing the T\n> with a space with Perl, and then inserting the value into Postgresql. Not a huge\n> deal, of course, but it might make it a little more convenient for some of us.\n\nAlready accomodated in 7.2.\n\n - Thomas\n",
"msg_date": "Mon, 31 Dec 2001 20:45:13 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: date/time formats in 7.2"
}
] |
[
{
"msg_contents": "Happy New Year\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 31 Dec 2001 21:39:59 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "test new server, please ignore"
}
] |
[
{
"msg_contents": "Just wishing a Marvelous New Year :)\n\nLet this year be a bit more luckier for everyone\ncompared to the previous one!\n\n--\nSerguei A. Mokhov\n \n\n",
"msg_date": "Mon, 31 Dec 2001 14:36:05 -0500",
"msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>",
"msg_from_op": true,
"msg_subject": "[OT] Happy New Year à tout le monde!"
}
] |
[
{
"msg_contents": "Happy new year to all of you.\n\nMay this year give a lot to you AND postgresql!!!\n\nRegards to you all\n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n",
"msg_date": "Mon, 31 Dec 2001 23:23:11 +0100 (MET)",
"msg_from": "Olivier PRENANT <ohp@pyrenet.fr>",
"msg_from_op": true,
"msg_subject": "Happy new year"
}
] |
[
{
"msg_contents": "Happy new year for all!\n\nI would like to tell you about the results of my work on pl/j.\nmemo: Java and postgres must run in a separate address space. First I \nwanted to use the sys v ipc, which was a bad idea becouse of some \nproblems with java VM-s. Many hackers told me about its bad sides, and \nthe good sides of the sockets, so I droped the whole code and started a \nnew one.\n\nI started to write the java side first, which is maybe almost 10% ready :))\n-we have is a communication protocol between the two process. I know \nnoone will like it, so there is an API for protocols, so it is plugable. \nThe current implementation is receiveing calls,sends exceptions, but \nsending the results is not implemented yet.\n\n-the Postgres side is not yet done. It sends function calls without \narguments, it doesn`t receive sql queries, exceptions or results at all, \nand there is no API for it, it is an uggly hardcoded thing.\n\n-there is no JDBC implementation, and I have never written JDBC driver, \nso it may take for a while...\n\nBut it says \"hello world\" :))\n\nTodo for me:\n\n-learn more about postgres, jdbc drivers, etc, etc\n-develop api for the postgres side of the communication.\n\nThis will take for a good while becouse of other todos but I hope next \ntime I can tell you good news.\n\nthx,\nLaszlo Hornyak\n\n",
"msg_date": "Tue, 01 Jan 2002 14:03:51 +0100",
"msg_from": "Laszlo Hornyak <hornyakl@freemail.hu>",
"msg_from_op": true,
"msg_subject": "PL/(pg)J"
}
] |
[
{
"msg_contents": "Dear all,\n\nI wish you all a happy new year.\nThanks again for working on PostgreSQL\n\nBest regards,\nJean-Michel POURE\n",
"msg_date": "Tue, 1 Jan 2002 14:59:54 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Happy new year"
}
] |
[
{
"msg_contents": "I have found a case in nbtree where it is possible for two concurrent\ntransactions to insert the same key value in a unique index :-(\n\nThe reason is that _bt_check_unique can (of course) only detect keys\nthat are already in the index. To prevent concurrent insertion of\nduplicate keys, we have to rely on locking. Two would-be inserters\nof the same key must lock the same target page in _bt_doinsert, so\nthey cannot run the check at the same time. The second to obtain\nthe lock will see the first one's already-inserted key.\n\nHowever, suppose that the index contains many existing instances\nof the same key (presumably pointing at deleted-but-not-yet-vacuumed\ntuples). If the existing instances span several index pages, it is\nlikely that _bt_insertonpg will decide to insert the new key on one\nof the pages to the right of the original target key. As presently\ncoded, _bt_insertonpg drops the write lock it has before acquiring\nwrite lock on the next page to the right. This creates a window\nwherein another process could perform _bt_check_unique, fail to\nfind any non-dead instances of the key, and decide that it's okay\nto insert the same key.\n\nThe fix is to acquire the next page's write lock before we drop the\ncurrent page's. This is deadlock-free since no writer ever tries\nto chain write-locks to the left (in fact the same thing is done in\nthe page split logic).\n\nI am not convinced that this explains any of the field reports of\nduplicate rows that we've seen, but it's clearly a bug. Will commit\nthe fix shortly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 Jan 2002 14:46:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Duplicate-key-detection failure case found in btree"
},
{
"msg_contents": "> The fix is to acquire the next page's write lock before we drop the\n> current page's. This is deadlock-free since no writer ever tries\n> to chain write-locks to the left (in fact the same thing is done in\n> the page split logic).\n> \n> I am not convinced that this explains any of the field reports of\n> duplicate rows that we've seen, but it's clearly a bug. Will commit\n> the fix shortly.\n\nSounds good to me. Good find.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 1 Jan 2002 15:49:31 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Duplicate-key-detection failure case found in btree"
},
{
"msg_contents": "Bruce Momjian wrote:\n> > The fix is to acquire the next page's write lock before we drop the\n> > current page's. This is deadlock-free since no writer ever tries\n> > to chain write-locks to the left (in fact the same thing is done in\n> > the page split logic).\n> >\n> > I am not convinced that this explains any of the field reports of\n> > duplicate rows that we've seen, but it's clearly a bug. Will commit\n> > the fix shortly.\n>\n> Sounds good to me. Good find.\n\n It always scares me that bugs like this can live untriggered\n for multiple releases. What else is lingering under the\n surface ...\n\n And I absolutely agree with Tom, there is no connection\n between this bug and the duplicate rows reported. Good catch\n anyway.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 2 Jan 2002 08:40:21 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Duplicate-key-detection failure case found in btree"
},
{
"msg_contents": "Jan Wieck <janwieck@yahoo.com> writes:\n> It always scares me that bugs like this can live untriggered\n> for multiple releases. What else is lingering under the\n> surface ...\n\nFWIW, I don't think that this bug did survive for multiple releases\n(though it very nearly made it into 7.2). The present form of\nbt_check_unique and bt_insertonpg was new code in 7.1, replacing\nthe BTP_CHAIN method of handling identical keys. That code had a\nlot of bugs of its own, but I don't think it had this one. I will\ntake the blame for introducing this bug into 7.1 ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Jan 2002 11:09:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Duplicate-key-detection failure case found in btree "
}
] |
[
{
"msg_contents": "\tSo I discovered today that pgdb follows in the traditional style of\ncarrying timestamp and most other time fields through to the user as\ntext strings, so I either need to have all my queries do some gymnastics\nto have the server format my time information in a way that is printable\nor can be handled by my client code or whatever.\n\nIs there a better way? I was thinking that if there was a way to set a\ndatestyle that would just emit the seconds since the Unix epoch, I could\nkick them into the python time module's functions for easier formatting,\nand it would give all clients a more standardized way to deal with time\nby letting them get the 'raw' values and handle them locally.\n\nIs this a good, bad, or old idea? Should I spend some time trying to\npatch my local system for testing?\n\n-- \nAdam Haberlach | Who buys an eight-processor machine and then watches 30\nadam@newsnipple.com | movies on it all at the same time? Beats me. They told\n | us they could sell it, so we made it.\n | -- George Hoffman, Be Engineer\n",
"msg_date": "Tue, 1 Jan 2002 14:26:47 -0800",
"msg_from": "Adam Haberlach <adam@newsnipple.com>",
"msg_from_op": true,
"msg_subject": "SET DATESTYLE to time_t style for client libraries?"
},
{
"msg_contents": "Adam Haberlach <adam@newsnipple.com> writes:\n> Is there a better way?\n\n\tSELECT EXTRACT(epoch FROM timestamp-value)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jan 2002 10:05:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SET DATESTYLE to time_t style for client libraries? "
},
{
"msg_contents": "> So I discovered today that pgdb follows in the traditional style of\n> carrying timestamp and most other time fields through to the user as\n> text strings, so I either need to have all my queries do some gymnastics\n> to have the server format my time information in a way that is printable\n> or can be handled by my client code or whatever.\n\nRight. Though the available styles *should* cover common usage, and\nISO-8601 is not a bad way to go imho.\n\n> Is there a better way? I was thinking that if there was a way to set a\n> datestyle that would just emit the seconds since the Unix epoch, I could\n> kick them into the python time module's functions for easier formatting,\n> and it would give all clients a more standardized way to deal with time\n> by letting them get the 'raw' values and handle them locally.\n\nHmm. If the Python module has any date/time input routines, it *should*\nbe easy to ingest ISO-formatted dates. No? How about one of the other\navailable styles? If nothing else, you could go through to_char() to\nformat the date exactly as Python needs to see it (or directly for\ndisplay on your client apps). date_part('epoch'...) could get you Unix\nsystem time, but that would last on my list...\n\n - Thomas\n",
"msg_date": "Thu, 03 Jan 2002 15:44:50 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: SET DATESTYLE to time_t style for client libraries?"
},
{
"msg_contents": "On Thu, Jan 03, 2002 at 03:44:50PM +0000, Thomas Lockhart wrote:\n> > So I discovered today that pgdb follows in the traditional style of\n> > carrying timestamp and most other time fields through to the user as\n> > text strings, so I either need to have all my queries do some gymnastics\n> > to have the server format my time information in a way that is printable\n> > or can be handled by my client code or whatever.\n> \n> Right. Though the available styles *should* cover common usage, and\n> ISO-8601 is not a bad way to go imho.\n\n\tOk...\n\n> > Is there a better way? I was thinking that if there was a way to set a\n> > datestyle that would just emit the seconds since the Unix epoch, I could\n> > kick them into the python time module's functions for easier formatting,\n> > and it would give all clients a more standardized way to deal with time\n> > by letting them get the 'raw' values and handle them locally.\n> \n> Hmm. If the Python module has any date/time input routines, it *should*\n> be easy to ingest ISO-formatted dates. No? How about one of the other\n> available styles? If nothing else, you could go through to_char() to\n> format the date exactly as Python needs to see it (or directly for\n> display on your client apps). date_part('epoch'...) could get you Unix\n> system time, but that would last on my list...\n\n\tI'll look into getting it to ingest dates, but it seems wasteful to have\nthe server take its internal reprentation, pretty-format it into a nice\nhuman-readable representation to send to the client, and then have the client\nparse that into something it can deal with internally. While it is a fairly\nminor performance issue, it seems there are a lot of chances for things to\ngo wrong.\n\n\tI've already had to hack my python libs a bit to make the money\ntype work correctly. It takes the incoming text, removes '$' and ',' and\nthen tries to convert it into a float. In the case of negative values, it\nwill blow up because there are \"()\" around the value. I'll submit a patch\nif anyone is interested.\n\n...I assume that the ISO-8601 representation itself won't be changing, but\ntime is silly and there's a lot of edge cases. It'd be nice to have a way\nto reliabily tell the server \"Give me standardized raw values, I'll sort\nthings out on my end.\" Of course, this may already be happening within\nthe C libraries and I'm not seeing them inside python. I'll look around\na bit more.\n\n-- \nAdam Haberlach | Who buys an eight-processor machine and then watches 30\nadam@newsnipple.com | movies on it all at the same time? Beats me. They told\n | us they could sell it, so we made it.\n | -- George Hoffman, Be Engineer\n",
"msg_date": "Thu, 3 Jan 2002 12:32:44 -0800",
"msg_from": "Adam Haberlach <adam@newsnipple.com>",
"msg_from_op": true,
"msg_subject": "Re: SET DATESTYLE to time_t style for client libraries?"
},
{
"msg_contents": "\nI would suggest taking a look at the mxDateTime package if you want to\nmanipulate dates in Python.\n\nAdam Haberlach <adam@newsnipple.com> writes:\n\n<snip>\n\n> \n> \tI'll look into getting it to ingest dates, but it seems\n> wasteful to have the server take its internal reprentation,\n> pretty-format it into a nice human-readable representation to send\n> to the client, and then have the client parse that into something it\n> can deal with internally. While it is a fairly minor performance\n> issue, it seems there are a lot of chances for things to go wrong.\n\nThat's a good point. On the other hand, I trust the PostgreSQL folks\nto know more about all of the wacky time edge cases than I do. I know\nthat I am not particular excited about using raw time_t values.\n\n> \tI've already had to hack my python libs a bit to make the\n> money type work correctly. It takes the incoming text, removes '$'\n> and ',' and then tries to convert it into a float. In the case of\n> negative values, it will blow up because there are \"()\" around the\n> value. I'll submit a patch if anyone is interested.\n\nWhy not simply use the numeric type? I thought the money type was\ndeprecated.\n\n> ...I assume that the ISO-8601 representation itself won't be\n> changing, but time is silly and there's a lot of edge cases. It'd\n> be nice to have a way to reliabily tell the server \"Give me\n> standardized raw values, I'll sort things out on my end.\" Of\n> course, this may already be happening within the C libraries and I'm\n> not seeing them inside python. I'll look around a bit more.\n\nmxDateTime is your friend.\n\nJason\n",
"msg_date": "03 Jan 2002 15:13:29 -0700",
"msg_from": "Jason Earl <jason.earl@simplot.com>",
"msg_from_op": false,
"msg_subject": "Re: SET DATESTYLE to time_t style for client libraries?"
}
] |
[
{
"msg_contents": "Over the last two days I have been struggling with running vacuum on a \n7.2b4 database that is running in a production environment. This was \nessentially the first time I ran vacuum on this database since it was \nupgraded to 7.2. This database is characterized by one large table that \nhas many inserts and deletes, however generally contains zero rows. So \nover the course of the last few weeks this table had grown in size to \nabout 2.5G (or more correctly the corresponding toast table grew that \nlarge).\n\nSo the first problem I had was that the vaccum (regular vacuum not full \nvacuum) took a very long time on this table (2+ hours). Now I would \nexpect it to take a while, so that in and of itself isn't a problem. \nBut while this vacuum was running the rest of the system was performing \nvery poorly. Opperations that usually are subsecond, where taking \nminutes to complete. At first I thought there was some sort of locking \nproblem, but these opperations did complete, but after a very long time.\n\nIn looking at the log files from this time, I noticed that while the \nvacuum process was running, there were a lot of the following messages \nin the log file:\n\n2001-12-31 22:16:40 [20655] DEBUG: recycled transaction log file \n000000010000009A\n\nThe interesting thing (at least in my mind) is that these messages were \nproduced by all of the other postgres processes, not by the vacuum \nprocess. (And by the way what do they mean?)\n\n\n\nThe second issue I noticed was that the vacuum process later just hung. \n Since I didn't think the new vacuum was supposed to hang (since I \nthought it tried a best effort and if it couldn't lock something it \nwould just skip it).\n\n2001-12-31 22:18:04 [19945] NOTICE: --Relation xyf_files--\n2001-12-31 22:21:51 [20673] DEBUG: recycled transaction log file \n000000010000009C\n2001-12-31 22:21:51 [20673] DEBUG: recycled transaction log file \n000000010000009D\n2001-12-31 22:21:51 [20673] DEBUG: recycled transaction log file \n000000010000009B\n2001-12-31 22:31:54 [20711] DEBUG: recycled transaction log file \n000000010000009F\n2001-12-31 22:31:54 [20711] DEBUG: recycled transaction log file \n000000010000009E\n2002-01-01 07:30:58 [19945] ERROR: Query was cancelled.\n\nIt hung until I cancelled the vacuum with a ^c. So then I tried to \nrerun the vacuum and it hung in the same spot, this time for 1.5 hours \nbefore I killed it.\n\nThinking that maybe there was some sort of problem with this table I ran \na vacuum full (after restarting the database to make sure no other \nprocesses would be locking the full vacuum) and it ran to completion.\n\nNow after the full vacuum the the regular vacuum runs almost instantly \nwith no further problems.\n\nIs there a bug here, I don't know, but I thought it was interesting \nenough to post what I just saw.\n\nthanks,\n--Barry\n\n\n\n",
"msg_date": "Tue, 01 Jan 2002 16:31:38 -0800",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": true,
"msg_subject": "problems with new vacuum (??)"
},
{
"msg_contents": "Barry Lind <barry@xythos.com> writes:\n> But while this vacuum was running the rest of the system was performing \n> very poorly. Opperations that usually are subsecond, where taking \n> minutes to complete.\n\nIs this any different from the behavior of 7.1 vacuum? Also, what\nplatform are you on?\n\nI've noticed on a Linux 2.4 box (RH 7.2, typical commodity-grade PC\nhardware) that vacuum, pgbench, or almost any I/O intensive operation\ndrives interactive performance into the ground. I have not had an\nopportunity to try to characterize the problem, but I suspect Linux's\ndisk I/O scheduler is not bright enough to prioritize interactive\noperations.\n\n> 2001-12-31 22:16:40 [20655] DEBUG: recycled transaction log file \n> 000000010000009A\n\n> The interesting thing (at least in my mind) is that these messages were \n> produced by all of the other postgres processes, not by the vacuum \n> process.\n\nNo surprise, as they're coming from the checkpoint process(es).\n\n> The second issue I noticed was that the vacuum process later just hung. \n\nYou sure you just didn't wait long enough?\n\nThere was a deadlock condition found in 7.2b4 recently, but I am not\nconvinced that it could affect VACUUM. Anyway, if you can replicate\nthe problem then please attach to the stuck process with gdb and provide\na stack backtrace.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 Jan 2002 20:07:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: problems with new vacuum (??) "
},
{
"msg_contents": "Tom,\n\nThe platform is Redhat 7.0 with a 2.2.19 kernal.\n\nThe behavior is different from the 7.1 vacuum, in 7.1, processes would \njust hang since they needed to access the table being vacuumed. So they \nwould hang as long as the vacuum took on a particular table. In 7.2 \nthey proceed, and don't hang, but take a long time. So you could say \nthat 7.2 is better than 7.1, but my expectations where higher of 7.2. I \nwas expecting vacuum to be a benign background process that could run in \nparallel with other transactions. That doesn't yet seem to be the case. \n I will continue to monitor my system and see if this is reproducable. \n I will then try to get a backtrace if I find I have a reproducable case.\n\nthanks,\n--Barry\n\nTom Lane wrote:\n\n> Barry Lind <barry@xythos.com> writes:\n> \n>>But while this vacuum was running the rest of the system was performing \n>>very poorly. Opperations that usually are subsecond, where taking \n>>minutes to complete.\n>>\n> \n> Is this any different from the behavior of 7.1 vacuum? Also, what\n> platform are you on?\n> \n> I've noticed on a Linux 2.4 box (RH 7.2, typical commodity-grade PC\n> hardware) that vacuum, pgbench, or almost any I/O intensive operation\n> drives interactive performance into the ground. I have not had an\n> opportunity to try to characterize the problem, but I suspect Linux's\n> disk I/O scheduler is not bright enough to prioritize interactive\n> operations.\n> \n> \n>>2001-12-31 22:16:40 [20655] DEBUG: recycled transaction log file \n>>000000010000009A\n>>\n> \n>>The interesting thing (at least in my mind) is that these messages were \n>>produced by all of the other postgres processes, not by the vacuum \n>>process.\n>>\n> \n> No surprise, as they're coming from the checkpoint process(es).\n> \n> \n>>The second issue I noticed was that the vacuum process later just hung. \n>>\n> \n> You sure you just didn't wait long enough?\n> \n> There was a deadlock condition found in 7.2b4 recently, but I am not\n> convinced that it could affect VACUUM. Anyway, if you can replicate\n> the problem then please attach to the stuck process with gdb and provide\n> a stack backtrace.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n",
"msg_date": "Tue, 01 Jan 2002 21:23:44 -0800",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": true,
"msg_subject": "Re: problems with new vacuum (??)"
},
{
"msg_contents": "On Wed, 2002-01-02 at 13:31, Barry Lind wrote:\n> Over the last two days I have been struggling with running vacuum on a \n> 7.2b4 database that is running in a production environment. This was \n> essentially the first time I ran vacuum on this database since it was \n> upgraded to 7.2. This database is characterized by one large table that \n> has many inserts and deletes, however generally contains zero rows. So \n> over the course of the last few weeks this table had grown in size to \n> about 2.5G (or more correctly the corresponding toast table grew that \n> large).\n> \n> So the first problem I had was that the vaccum (regular vacuum not full \n> vacuum) took a very long time on this table (2+ hours). Now I would \n> expect it to take a while, so that in and of itself isn't a problem. \n> But while this vacuum was running the rest of the system was performing \n> very poorly. Opperations that usually are subsecond, where taking \n> minutes to complete. At first I thought there was some sort of locking \n> problem, but these opperations did complete, but after a very long time.\n\nIs it possible that you waited until a point when the work that vacuum\nhas to do is being undone faster by the new transactions coming\nthrough? This might be complicated by the fact that (from your vague\ndescription) the table is heavily toasted.\n\nAlso, as a suggestion, if you can know there are zero records in the\ntable very often, why not TRUNCATE it at those times? That should be a\n_lot_ quicker than vacuuming it!\n\nCheers,\n\t\t\t\t\tAndrew.\n-- \n--------------------------------------------------------------------\nAndrew @ Catalyst .Net.NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)916-7201 MOB: +64(21)635-694 OFFICE: +64(4)499-2267\n Are you enrolled at http://schoolreunions.co.nz/ yet?\n\n",
"msg_date": "02 Jan 2002 21:29:59 +1300",
"msg_from": "Andrew McMillan <andrew@catalyst.net.nz>",
"msg_from_op": false,
"msg_subject": "Re: problems with new vacuum (??)"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Barry Lind <barry@xythos.com> writes:\n> > But while this vacuum was running the rest of the system was performing\n> > very poorly. Opperations that usually are subsecond, where taking\n> > minutes to complete.\n> \n> Is this any different from the behavior of 7.1 vacuum? Also, what\n> platform are you on?\n> \n> I've noticed on a Linux 2.4 box (RH 7.2, typical commodity-grade PC\n> hardware) that vacuum, pgbench, or almost any I/O intensive operation\n> drives interactive performance into the ground.\n\nThey drive each other to the ground too ;(\n\nWhen I tried to run the new vacuum concurrently with a pgbench in hope \nto make it perform better for large number of updates (via removing the \nneed to scan large number of dead tuples) 1 concurrent vacuum was able\nto \nmake 128 pgbench backends more than twice as slow as they were without\nvacuum. \nAnd this is an extra slowdown from another 2-3X slowdown due to dead\ntuples \n(got from comparing speed on VACUUM FULL db and db aftre doing ~10k \npgbench transactions)\n\n> I have not had an\n> opportunity to try to characterize the problem, but I suspect Linux's\n> disk I/O scheduler is not bright enough to prioritize interactive\n> operations.\n\nHave you any ideas how to distinguish between interactive and\nnon-interactive\ndisk I/O coming from postgresql backends ?\n\nCan I for example nice the vacuum'ing backend without getting the \n\"reverse priority\" effects ?\n\n> > 2001-12-31 22:16:40 [20655] DEBUG: recycled transaction log file\n> > 000000010000009A\n> \n> > The interesting thing (at least in my mind) is that these messages were\n> > produced by all of the other postgres processes, not by the vacuum\n> > process.\n> \n> No surprise, as they're coming from the checkpoint process(es).\n> \n> > The second issue I noticed was that the vacuum process later just hung.\n> \n> You sure you just didn't wait long enough?\n> \n> There was a deadlock condition found in 7.2b4 recently, but I am not\n> convinced that it could affect VACUUM. Anyway, if you can replicate\n> the problem then please attach to the stuck process with gdb and provide\n> a stack backtrace.\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n",
"msg_date": "Wed, 02 Jan 2002 12:55:56 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: problems with new vacuum (??)"
},
{
"msg_contents": "On Tuesday 01 January 2002 11:23 pm, Barry Lind wrote:\n> Tom,\n>\n> The platform is Redhat 7.0 with a 2.2.19 kernal.\n\nIs this and IDE based system? If so do you have the drives running in DMA \nmode?\n\nWhat are the results of \"/sbin/hdparm /dev/hd(?)\" (a,b,c,d ... which ever \ndrive you are running the database on.)\n\nThe 2.2 linux kernel defaults to DMA off. You can try to enable dma by \nissuing /sbin/hdparm -d1 /dev/hd(?) You can also test the disk speed with \n/sbin/hdparm -tT /dev/hd(?).\n\nIn my experience enabling this feature can make a huge improvement in I/O \nintensive applications. Other options can help also, but I find dma to have \nthe largest impact. I find linux almost unusable without it.\n",
"msg_date": "Wed, 2 Jan 2002 08:49:58 -0600",
"msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>",
"msg_from_op": false,
"msg_subject": "Re: problems with new vacuum (??)"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Have you any ideas how to distinguish between interactive and\n> non-interactive disk I/O coming from postgresql backends ?\n\nI don't see how. For one thing, the backend that originally dirtied\na buffer is not necessarily the one that writes it out. Even assuming\nthat we could assign a useful priority to different I/O requests,\nhow do we tell the kernel about it? There's no portable API for that\nAFAIK.\n\nOne thing that would likely help a great deal is to have the WAL files\non a separate disk spindle, but since what I've got is a one-disk\nsystem, I can't test that on this PC.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Jan 2002 10:56:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: problems with new vacuum (??) "
},
{
"msg_contents": "Tom Lane wrote:\n> Barry Lind <barry@xythos.com> writes:\n> > But while this vacuum was running the rest of the system was performing \n> > very poorly. Opperations that usually are subsecond, where taking \n> > minutes to complete.\n> \n> Is this any different from the behavior of 7.1 vacuum? Also, what\n> platform are you on?\n> \n> I've noticed on a Linux 2.4 box (RH 7.2, typical commodity-grade PC\n> hardware) that vacuum, pgbench, or almost any I/O intensive operation\n> drives interactive performance into the ground. I have not had an\n> opportunity to try to characterize the problem, but I suspect Linux's\n> disk I/O scheduler is not bright enough to prioritize interactive\n> operations.\n\nJust as a data point, I have not seen pgbench dramatically affect\nperformance on BSD/OS. Interactive sessions are just slightly slower\nwhen then need to access the disk.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 2 Jan 2002 13:12:03 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: problems with new vacuum (??)"
},
{
"msg_contents": "> In my experience enabling this feature can make a huge improvement in I/O \n> intensive applications. Other options can help also, but I find dma to have \n> the largest impact. I find linux almost unusable without it.\n\nOh, I should mention my BSD/OS data point is with one SCSI disk, soft\nupdates and tagged queuing enabled.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 2 Jan 2002 13:13:32 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: problems with new vacuum (??)"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n>>In my experience enabling this feature can make a huge improvement in I/O \n>>intensive applications. Other options can help also, but I find dma to have \n>>the largest impact. I find linux almost unusable without it.\n>>\n> \n> Oh, I should mention my BSD/OS data point is with one SCSI disk, soft\n> updates and tagged queuing enabled.\n\n\nIf Tom's system is IDE-based and he's not explicitly enabled DMA then \nthis alone would explain the difference you two are seeing, just as the \nposter above is implying. I have one system with an older 15GB disk \nthat causes a kernel panic if I try to enable DMA, and I see the kind of \nsystem performance issues described by Tom on that system.\n\nOn my main server downtown (SCSI) and my normal desktop (two IDE drives \nthat do work properly with DMA enabled) things run much, much better \nwhen there's a lot of disk I/O going on. These are all Linux systems, \nnot BSD...\n\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Wed, 02 Jan 2002 10:34:35 -0800",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": false,
"msg_subject": "Re: problems with new vacuum (??)"
},
{
"msg_contents": "Don Baccus <dhogaza@pacifier.com> writes:\n> If Tom's system is IDE-based and he's not explicitly enabled DMA then \n> this alone would explain the difference you two are seeing,\n\nIt is IDE, but DMA is on:\n\n[root@rh1 root]# hdparm -v /dev/hda\n\n/dev/hda:\n multcount = 16 (on)\n I/O support = 0 (default 16-bit)\n unmaskirq = 0 (off)\n using_dma = 1 (on)\n keepsettings = 0 (off)\n nowerr = 0 (off)\n readonly = 0 (off)\n readahead = 8 (on)\n geometry = 9729/255/63, sectors = 156301488, start = 0\n\n[root@rh1 root]# hdparm -i /dev/hda\n\n/dev/hda:\n\n Model=ST380021A, FwRev=3.10, SerialNo=3HV0CZ2L\n Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs RotSpdTol>.5% }\n RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=4\n BuffType=unknown, BuffSize=2048kB, MaxMultSect=16, MultSect=16\n CurCHS=16383/16/63, CurSects=-66060037, LBA=yes, LBAsects=156301488\n IORDY=on/off, tPIO={min:240,w/IORDY:120}, tDMA={min:120,rec:120}\n PIO modes: pio0 pio1 pio2 pio3 pio4\n DMA modes: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 *udma5\n AdvancedPM=no\n Drive Supports : Reserved : ATA-1 ATA-2 ATA-3 ATA-4 ATA-5\n\nThis is an out-of-the-box RH 7.2 install (kernel 2.4.7-10) on recent\nDell hardware. If anyone can suggest further tuning of the hdparm\nsettings, I'm all ears. Don't know a darn thing about disk tuning\nfor Linux.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Jan 2002 13:40:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: problems with new vacuum (??) "
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n>Hannu Krosing <hannu@tm.ee> writes:\n>\n>>Have you any ideas how to distinguish between interactive and\n>>non-interactive disk I/O coming from postgresql backends ?\n>>\n>\n>I don't see how. For one thing, the backend that originally dirtied\n>a buffer is not necessarily the one that writes it out. Even assuming\n>that we could assign a useful priority to different I/O requests,\n>how do we tell the kernel about it? There's no portable API for that\n>AFAIK.\n>\n>One thing that would likely help a great deal is to have the WAL files\n>on a separate disk spindle, but since what I've got is a one-disk\n>system, I can't test that on this PC.\n>\nIf you have enough memory you can put WAL files on a RAM disk for testing :)\n\nIt is totally to the contrary of their intended use, but could reveal \nsomething\ninteresting while testing\n\n----------------\nHannu\n\n\n",
"msg_date": "Thu, 03 Jan 2002 02:09:14 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: problems with new vacuum (??)"
}
] |
[
{
"msg_contents": "Hi, I'm new to this list.\n\nHaving got some insight into the PostgreSQL sources by following\nserver action using GDB and after trying to implement some features I\nbecame able to formulate a proposal which IMHO could be useful for\nothers, too.\n\nThe proposal consists in generalizing the PostgreSQL deferred trigger\nmechanism to more general types of events. Such an event could be,\ne.g., a function call deferred up to transaction end.\n\nOne use case would be a PostgreSQL procedure `mark_tx_rollback()' to\nmark a transaction for rollback. The rollback will be performed at\ntransaction commit time. (Actually `mark_tx_rollback()' can be\nimplemented by faking a `ResultRelInfo' and calling\n`ExecARInsertTriggers' or similar functions with the faked\n`ResultRelInfo' but this is not the cleanest way and will not work for\nother use cases.)\n\nThe implementation would be straightforward, changes have to be made\nonly to `trigger.c' and `trigger.h'. A data structure\n`DeferredEventData' must be added, from which as well the existing\n`DeferredTriggerEventData' as also `DeferredFunctionCallEvent' will be\nderived. The implementation of `deferredTriggerInvokeEvents(bool)'\nmust be changed to separate DeferredTriggerEvent's from\nDeferredFunctionCallEvent's. A function `DeferredFunctionCallExecute'\ncalled from `deferredTriggerInvokeEvents(bool)' in the case of a\n`DeferredFunctionCallEvent' and functions to add\nDeferredFunctionCallEvent's to the queue have to be added.\n\nI would be able to make the necessary changes and post the patch (to\nwhom ?). Is it OK ? Are there others working on similar or\ncontradicting features ? Does my proposal conform to the further plans\nof PostgreSQL development ?\n\n-- \nHolger Krug\nhkrug@rationalizer.com\n",
"msg_date": "Wed, 2 Jan 2002 10:26:52 +0100",
"msg_from": "Holger Krug <hkrug@rationalizer.com>",
"msg_from_op": true,
"msg_subject": "Feature proposal: generalizing deferred trigger events"
},
{
"msg_contents": "Holger Krug <hkrug@rationalizer.com> writes:\n> The proposal consists in generalizing the PostgreSQL deferred trigger\n> mechanism to more general types of events.\n\nWhile I have no inherent objection to that, I'm not seeing what it's\ngood for either. What SQL feature would be associated with this?\n\n> One use case would be a PostgreSQL procedure `mark_tx_rollback()' to\n> mark a transaction for rollback.\n\nThis is unconvincing, since Postgres has no need for rollback procedures\n(and I personally will disagree with any attempt to introduce 'em; if\nyou need one then you have a problem doing crash recovery).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Jan 2002 11:31:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature proposal: generalizing deferred trigger events "
},
{
"msg_contents": "On Wed, Jan 02, 2002 at 11:31:16AM -0500, Tom Lane wrote:\n> Holger Krug <hkrug@rationalizer.com> writes:\n> > The proposal consists in generalizing the PostgreSQL deferred trigger\n> > mechanism to more general types of events.\n> \n> While I have no inherent objection to that, I'm not seeing what it's\n> good for either. What SQL feature would be associated with this?\n\nThe use I want to make of this feature is to collect errors during a\ntransaction, report them back to the user (in my case an application\nserver) and rollback the transaction when the user tries to commit.\nIt's not a SQL feature, but nevertheless useful in some cases, because\nyou can collect several errors within one transaction and are not\nfobbed off with only one.\n\n> > One use case would be a PostgreSQL procedure `mark_tx_rollback()' to\n> > mark a transaction for rollback.\n> \n> This is unconvincing, since Postgres has no need for rollback procedures\n> (and I personally will disagree with any attempt to introduce 'em; if\n> you need one then you have a problem doing crash recovery).\n\nI think you misunderstand me. `mark_tx_rollback()', when called, would\nadd an entry to the deferred trigger queue, which is called at commit\ntime *before* the transaction closes and causes transaction rollback\n(and not more). All is done *within one transaction*, not after the\ntransaction has finished.\n\nAre there any other ways to add transaction end hooks\n(i.e. functions to be called immediately *before* the transaction\ncloses) ? I would use them.\n\n-- \nHolger Krug\nhkrug@rationalizer.com\n",
"msg_date": "Wed, 2 Jan 2002 18:03:41 +0100",
"msg_from": "Holger Krug <hkrug@rationalizer.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature proposal: generalizing deferred trigger events"
},
{
"msg_contents": "Holger Krug <hkrug@rationalizer.com> writes:\n> On Wed, Jan 02, 2002 at 11:31:16AM -0500, Tom Lane wrote:\n>> While I have no inherent objection to that, I'm not seeing what it's\n>> good for either. What SQL feature would be associated with this?\n\n> The use I want to make of this feature is to collect errors during a\n> transaction, report them back to the user (in my case an application\n> server) and rollback the transaction when the user tries to commit.\n\nHmm. So basically your code is saying \"I want to abort the current\ntransaction, but not right now --- I can wait till the end to force\nabort.\" Okay. Of course, if some other code causes a transaction\nabort, your deferred trigger will never get run, but maybe that's\nall right in your situation.\n\n> Are there any other ways to add transaction end hooks\n> (i.e. functions to be called immediately *before* the transaction\n> closes) ? I would use them.\n\nI don't think so. I've occasionally thought that xact.c should have\na modifiable data structure instead of a hard-wired list of action\nsubroutines to call. But in practice, there are usually a bunch of\nconstraints on the order that things must be done in, so just \"add\nanother commit-time function call\" would not be an adequate API for\nadding to the list. You'd need to be able to specify \"before these\nguys, after those guys\" in some way.\n\nIn your example, it would seem that you'd want the force-an-abort\nsubroutine to be called at the last possible instant before we'd\notherwise commit, so as to allow as many potential errors as possible\nto be detected and reported. In particular, if you just throw it into\nthe deferred trigger list at a random time, then only the other deferred\ntriggers that are queued before it will have an opportunity to detect\nerrors. Seems like that is not really the behavior you want.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Jan 2002 12:16:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature proposal: generalizing deferred trigger events "
},
{
"msg_contents": "On Wed, Jan 02, 2002 at 12:16:14PM -0500, Tom Lane wrote:\n\n> Of course, if some other code causes a transaction\n> abort, your deferred trigger will never get run, but maybe that's\n> all right in your situation.\n\nIt is because I use special triggers for referential integrity checks etc..\n\n> In your example, it would seem that you'd want the force-an-abort\n> subroutine to be called at the last possible instant before we'd\n> otherwise commit, so as to allow as many potential errors as possible\n> to be detected and reported. In particular, if you just throw it into\n> the deferred trigger list at a random time, then only the other deferred\n> triggers that are queued before it will have an opportunity to detect\n> errors. Seems like that is not really the behavior you want.\n\nYou're right, I oversimplified. I planned some ugly hacks within my\ntriggers to \"solve\" this problem. It would be better to introduce\npriority levels for triggers. But this involves some deeper changes of\nthe implementation in `trigger.c' (not necessarily of the interface),\nwhich I feared not to get the OK for.\n\nIf I would do so the following questions should be agreed on:\n\n1) One trigger list for all priority levels or several lists for different\n levels ?\n2) A small number of priority levels or something like int16 ?\n\nThere is even a second problem with my approach: If many errors occur,\nthe rollback trigger is added many times to the deferred trigger\nlist. This is really nonsense. I can circumvent this, by holding the\nxid of the last call as a static variable within the procedure adding\nthe rollback trigger.\n\n-- \nHolger Krug\nhkrug@rationalizer.com\n",
"msg_date": "Wed, 2 Jan 2002 18:58:31 +0100",
"msg_from": "Holger Krug <hkrug@rationalizer.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature proposal: generalizing deferred trigger events"
},
{
"msg_contents": "On Wed, Jan 02, 2002 at 12:16:14PM -0500, Tom Lane wrote:\n\nHey, I got an SQL feature for you: One could use those generalized\ndeferred triggers to implement:\n\nCREATE TEMP TABLE <table> ( ... ) ON COMMIT DELETE ROWS\n\nSimply add a not deferred trigger on insertion to <table>, which\nchecks if it was already called in the same transaction (using a\nhard-coded transaction id as static variable) and if not, adds a\ngeneralized clean-up trigger which truncates <table>. (Shall triggers\nON DELETE for <table> be called in this case or not ?)\n\n-- \nHolger Krug\nhkrug@rationalizer.com\n",
"msg_date": "Wed, 2 Jan 2002 19:18:06 +0100",
"msg_from": "Holger Krug <hkrug@rationalizer.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature proposal: generalizing deferred trigger events"
}
] |
[
{
"msg_contents": "\n Hi,\n\n I start fix my bug with \"YY vs. zero\" in formatting.c, and before it\n I see current CVS:\n\ntest=# select to_timestamp('10-10-2001', 'MM-DD-YYYY');\n to_timestamp\n------------------------\n 2001-10-10 00:00:00+02\n(1 row)\n\ntest=# select to_date('10-10-2001', 'MM-DD-YYYY');\n to_date\n------------\n 2001-10-09\n ^^\n \n It looks like a bug in to_date(), but there is no real code of \nto_date(), because to_date and to_timestamp use same code:\n\nDatum\nto_date(PG_FUNCTION_ARGS)\n{\n /*\n * Quick hack: since our inputs are just like to_timestamp, hand over\n * the whole input info struct...\n */\n return DirectFunctionCall1(timestamp_date, to_timestamp(fcinfo));\n}\n\n\n What do you mean? \n\n Karel\n\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Wed, 2 Jan 2002 12:03:24 +0100",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "datetime error?"
},
{
"msg_contents": "Karel Zak <zakkr@zf.jcu.cz> writes:\n> I start fix my bug with \"YY vs. zero\" in formatting.c, and before it\n> a see current CVS:\n\n> test=# select to_timestamp('10-10-2001', 'MM-DD-YYYY');\n> to_timestamp\n> ------------------------\n> 2001-10-10 00:00:00+02\n> (1 row)\n\n> test=# select to_date('10-10-2001', 'MM-DD-YYYY');\n> to_date\n> ------------\n> 2001-10-09\n> ^^\n\nHmm, is 2001-10-10 a daylight-savings transition day in your timezone?\nAlthough I thought we'd fixed all those bugs ... and I don't see any\ncorresponding problem here.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Jan 2002 11:35:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: datetime error? "
},
{
"msg_contents": "On Wed, Jan 02, 2002 at 11:35:08AM -0500, Tom Lane wrote:\n> Karel Zak <zakkr@zf.jcu.cz> writes:\n> > I start fix my bug with \"YY vs. zero\" in formatting.c, and before it\n> > a see current CVS:\n> \n> > test=# select to_timestamp('10-10-2001', 'MM-DD-YYYY');\n> > to_timestamp\n> > ------------------------\n> > 2001-10-10 00:00:00+02\n> > (1 row)\n> \n> > test=# select to_date('10-10-2001', 'MM-DD-YYYY');\n> > to_date\n> > ------------\n> > 2001-10-09\n> > ^^\n> \n> Hmm, is 2001-10-10 a daylight-savings transition day in your timezone?\n\n No, it's daylight-savings independent. The interesting thing is that \n you not see it. I found some things:\n\n * it not happen for GMT timezone, but for others only (I test 'Japan'\n and 'CET').\n\n * the difference between to_date and to_timestamp is that to_date use\n the timestamp_date() for conversion. And in the timestamp_date() is\n used timestamp2tm() that output bad 'tm' struct.\n\n The basic difference is that timestamp2tm() with right output do\n code that call localtime() and timestamp2tm() with bad output skip\n it, because 'tzp' is not defined (\"if (tzp != NULL)\" in this\n timestamp2tm()). \n \n \n * and the other thing:\n\n # select to_date('12-13-1901', 'MM-DD-YYYY');\n to_date\n ------------\n 1901-12-13\n (1 row)\n\n # select to_date('12-14-1901', 'MM-DD-YYYY');\n NOTICE: timestamp_date: year:1901 mon:12, mday:13\n to_date\n ------------\n 1901-12-13\n (1 row)\n \n For 'CET' timezone are all dates before '12-14-1901' right :-)\n\n IMHO it's timezone problem.\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Thu, 3 Jan 2002 12:28:40 +0100",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "Re: datetime error?"
},
{
"msg_contents": "> > > I start fix my bug with \"YY vs. zero\" in formatting.c, and before it\n> > > a see current CVS:\n...\n> > Hmm, is 2001-10-10 a daylight-savings transition day in your timezone?\n> No, it's daylight-savings independent. The interesting thing is that\n> you not see it. I found some things:\n> \n> * it not happen for GMT timezone, but for others only (I test 'Japan'\n> and 'CET').\n> \n> * the difference between to_date and to_timestamp is that to_date use\n> the timestamp_date() for conversion. And in the timestamp_date() is\n> used timestamp2tm() that output bad 'tm' struct.\n> \n> The basic difference is that timestamp2tm() with right output do\n> code that call localtime() and timestamp2tm() with bad output skip\n> it, because 'tzp' is not defined (\"if (tzp != NULL)\" in this\n> timestamp2tm()).\n\nAh! Have you tried calling timestamptz_date() instead? That one allows\nhandling time zones internally. Before 7.2, timestamp_date() did handle\ntime zones, but now we have TIMESTAMP WITHOUT TIME ZONE and TIMESTAMP\nWITH TIME ZONE so those internal routines changed out from under you.\n\n - Thomas\n",
"msg_date": "Thu, 03 Jan 2002 15:56:23 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: datetime error?"
},
{
"msg_contents": "On Thu, Jan 03, 2002 at 03:56:23PM +0000, Thomas Lockhart wrote:\n> \n> > The basic difference is that timestamp2tm() with right output do\n> > code that call localtime() and timestamp2tm() with bad output skip\n> > it, because 'tzp' is not defined (\"if (tzp != NULL)\" in this\n> > timestamp2tm()).\n> \n> Ah! Have you tried calling timestamptz_date() instead? That one allows\n> handling time zones internally. Before 7.2, timestamp_date() did handle\n> time zones, but now we have TIMESTAMP WITHOUT TIME ZONE and TIMESTAMP\n> WITH TIME ZONE so those internal routines changed out from under you.\n\n You are right. \n\n I don't want send patch with 2 chars. Can you or anyone other fix it\n in src/backend/utils/adt/formatting.c in function to_date (line cca \n 3130) and rename timestamp_date to timestamptz_date?\n ^^\n Thanks. The formatting.c is ready to RC1 with this fix.\n\n Karel\n \n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Fri, 4 Jan 2002 11:23:08 +0100",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "Re: datetime error?"
},
{
"msg_contents": "...\n> I don't want send patch with 2 chars. Can you or anyone other fix it\n> in src/backend/utils/adt/formatting.c in function to_date (line cca\n> 3130) and rename timestamp_date to timestamptz_date?\n\nOK. Sorry for the breakage. Think about a regression test which would\ncatch this one; I've got a few more input format tests for\ndate/time/timestamp to add after 7.2 is released to catch more edge\ncases in that area...\n\n - Thomas\n",
"msg_date": "Fri, 04 Jan 2002 15:12:12 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: datetime error?"
},
{
"msg_contents": "...\n> I don't want send patch with 2 chars. Can you or anyone other fix it\n> in src/backend/utils/adt/formatting.c in function to_date (line cca\n> 3130) and rename timestamp_date to timestamptz_date?\n\nDone, committed to CVS, the code builds from a \"make clean\", and the\nregression tests pass.\n\n - Thomas\n",
"msg_date": "Fri, 04 Jan 2002 15:52:45 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: datetime error?"
}
] |
[
{
"msg_contents": "\nHow do you people look at various trees and lists when debugging them ?\n\nDo you \n\n1. use functions from nodes/print.c to print out tree snapshots\n\n2. run the backend under visual debugger (like DDD) and look at things\nthere\n\n3. memorize everything in binary and work on raw memory image from\n/proc/core ;)\n\n4. use some other method\n\n\nI am currently working on understanding enough of the parse/plan/execute \nprocess to make up a good plan for implementing WITH RECURSIVE ...\nSELECT ...\nand perhaps GROUP BY ROLLUP(a,b,c) from SQL99 spec and I'm not\nproceeding \nas fast as I'd like ;-p\n\nAlso could anyone recommend any tools for debugging gram.y or is this\nalso \ndone mostly by hand even for large grammars ?\n\n------------------\nHannu\n",
"msg_date": "Wed, 02 Jan 2002 15:25:10 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "how to watch parse/plan trees"
},
{
"msg_contents": "On Wed, Jan 02, 2002 at 03:25:10PM +0200, Hannu Krosing wrote:\n\n> I am currently working on understanding enough of the parse/plan/execute \n> process \n\nI tried the same a few days ago and made very good progress using simply GDB\ngoing through the code step by step as the system executes.\n\nAs mentioned in the Developers FAQ using\n$ call pprint(nodepointer)\nfrom within GDB and looking at the debug output was very useful.\n\nOnly sometimes I called pprint on something which wasn't really a\nnode-pointer (but a pointer to something else) and in very rare cases\nI got nice system crashes because of this;-)\n\nSome hints for the use GDB with PostgreSQL (deviating from what is\nrecommended in other places):\n\n1) Start postmaster with `LD_DEBUG=files' set in the environment if you need\n access to dynamically loaded libraries. (Seems not to be necessary in your\n case.)\n2) Start gdb with all PostgreSQL source directories loaded (I start it\n within Emacs, at the same time using the Tags table which can be created \n with tools/make_etags, which allows fast code browsing.)\n3) Connect with psql.\n4) pstree -p | grep postmaster\n to get informed about the pid of the backend your psql is connected to.\n5) Make the appropriate calls from psql to get all the dynamically loadable\n libraries loaded. Follow the PostgreSQL log output to get informed about\n the entry addresses of the dynamically loaded libraries. (Not necessary\n in your case.)\n6) In gdb:\n $ file <bindir>/postgres\n # now use the entry address which appeared in the logs:\n $ add-symbol-file <libdir>/plpgsql.so <entryaddress> \n $ add-symbol-file <libdir>/<otherlib>.so <otherentryaddress> \n # now use the pid of the backend\n $ attach pid\n # now work with gdb\n\nThis sounds much to do, but works astonishingly well.\n\n-- \nHolger Krug\nhkrug@rationalizer.com\n",
"msg_date": "Wed, 2 Jan 2002 15:34:11 +0100",
"msg_from": "Holger Krug <hkrug@rationalizer.com>",
"msg_from_op": false,
"msg_subject": "Re: how to watch parse/plan trees"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> How do you people look at various trees and lists when debugging them ?\n\nI tend to start psql with PGOPTIONS=\"-d2\" and then look at the\nprettyprinted trees in the postmaster log.\n\nIf you have a bug that prevents you from getting as far as the parsetree\ndump, however, gdb is probably the only way.\n\n> Also could anyone recommend any tools for debugging gram.y or is this\n> also done mostly by hand even for large grammars ?\n\nOnce you've got rid of any shift/reduce or reduce/reduce conflicts\n(bison -v output is helpful for that), I find that the grammar itself\nseldom has any surprising behaviors that you need to use a debugger\nto follow.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Jan 2002 11:41:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: how to watch parse/plan trees "
}
] |
[
{
"msg_contents": "\nWe are running the 7.2b4 beta server.\n\nThis join worked last week and today it gets an error:\n\nselect * from b, d\n where b.address = d.address;\n\nIt now fails with the following error:\nERROR: join_selectivity: bad value -0.121693\n\nAll that is in the pgsql.log file is the same error.\n\nI am working on a reproduction I can give you, but in the meantime I was\nwondering if this is a current problem with the server or me?\n\nThanks,\n\n-- \nLaurette Cisneros\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\nPassenger Information Everywhere\n\n",
"msg_date": "Wed, 2 Jan 2002 13:40:32 -0800 (PST)",
"msg_from": "Laurette Cisneros <laurette@nextbus.com>",
"msg_from_op": true,
"msg_subject": "bug in join?"
},
{
"msg_contents": "Laurette Cisneros <laurette@nextbus.com> writes:\n> This join worked last week and today it gets and error:\n> select * from b, d\n> where b.address = d.address;\n> It now fails with the following error:\n> ERROR: join_selectivity: bad value -0.121693\n\nProbably what has changed is the pg_statistic data (VACUUM ANALYZE\nresults). Please send the results of\n\nselect * from pg_stats where tablename = 'b';\nselect * from pg_stats where tablename = 'd';\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Jan 2002 17:02:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: bug in join? "
},
{
"msg_contents": "Yeah, the culprit appears to be vacuum analyze (vacuum alone doesn't do it).\n\nThe problem is that I fixed the original database by dropping and\nrecreating the tables populating them with backed up data. And, now it\nwon't recreate (the values in pg_stats for them is lost).\n\nUgh.\n\nI will keep trying to recreate it for you.\n\nL.\nOn Wed, 2 Jan 2002, Tom Lane wrote:\n\n> Laurette Cisneros <laurette@nextbus.com> writes:\n> > This join worked last week and today it gets and error:\n> > select * from b, d\n> > where b.address = d.address;\n> > It now fails with the following error:\n> > ERROR: join_selectivity: bad value -0.121693\n>\n> Probably what has changed is the pg_statistic data (VACUUM ANALYZE\n> results). Please send the results of\n>\n> select * from pg_stats where tablename = 'b';\n> select * from pg_stats where tablename = 'd';\n>\n> \t\t\tregards, tom lane\n>\n\n-- \nLaurette Cisneros\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\nPassenger Information Everywhere\n\n",
"msg_date": "Wed, 2 Jan 2002 14:25:35 -0800 (PST)",
"msg_from": "Laurette Cisneros <laurette@nextbus.com>",
"msg_from_op": true,
"msg_subject": "Re: bug in join? "
},
{
"msg_contents": "Laurette Cisneros <laurette@nextbus.com> writes:\n> I will keep trying to recreate it for you.\n\nYou could just try\n\n\tanalyze b;\n\tanalyze d;\n\texplain select * from b,d where b.address = d.address;\n\nand repeat until you see the error from EXPLAIN. Since ANALYZE takes\na random sampling these days, successive loops will in fact produce\nslightly different results, and you may be able to recreate the\nerroneous state eventually.\n\nThe math in eqjoinsel() is not entirely trivial, but I thought I had\nconvinced myself it was okay. I need to see a failing example to \nfigure out what's wrong with it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Jan 2002 17:29:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: bug in join? "
},
{
"msg_contents": "Will keep trying. I also noticed that several triggers were dropped when I\ndropped the table. I need to add these back and see if they make a\ndifference.\n\nMore as I have it...\n\nL.\nOn Wed, 2 Jan 2002, Tom Lane wrote:\n\n> Laurette Cisneros <laurette@nextbus.com> writes:\n> > I will keep trying to recreate it for you.\n>\n> You could just try\n>\n> \tanalyze b;\n> \tanalyze d;\n> \texplain select * from b,d where b.address = d.address;\n>\n> and repeat until you see the error from EXPLAIN. Since ANALYZE takes\n> a random sampling these days, successive loops will in fact produce\n> slightly different results, and you may be able to recreate the\n> erroneous state eventually.\n>\n> The math in eqjoinsel() is not entirely trivial, but I thought I had\n> convinced myself it was okay. I need to see a failing example to\n> figure out what's wrong with it.\n>\n> \t\t\tregards, tom lane\n>\n\n-- \nLaurette Cisneros\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\nPassenger Information Everywhere\n\n",
"msg_date": "Wed, 2 Jan 2002 14:31:45 -0800 (PST)",
"msg_from": "Laurette Cisneros <laurette@nextbus.com>",
"msg_from_op": true,
"msg_subject": "Re: bug in join? "
},
{
"msg_contents": "OOps, never mind about the triggers. But, I will keep trying to induce the\nproblem for you.\n\nL.\nOn Wed, 2 Jan 2002, Tom Lane wrote:\n\n> Laurette Cisneros <laurette@nextbus.com> writes:\n> > I will keep trying to recreate it for you.\n>\n> You could just try\n>\n> \tanalyze b;\n> \tanalyze d;\n> \texplain select * from b,d where b.address = d.address;\n>\n> and repeat until you see the error from EXPLAIN. Since ANALYZE takes\n> a random sampling these days, successive loops will in fact produce\n> slightly different results, and you may be able to recreate the\n> erroneous state eventually.\n>\n> The math in eqjoinsel() is not entirely trivial, but I thought I had\n> convinced myself it was okay. I need to see a failing example to\n> figure out what's wrong with it.\n>\n> \t\t\tregards, tom lane\n>\n\n-- \nLaurette Cisneros\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\nPassenger Information Everywhere\n\n",
"msg_date": "Wed, 2 Jan 2002 14:32:52 -0800 (PST)",
"msg_from": "Laurette Cisneros <laurette@nextbus.com>",
"msg_from_op": true,
"msg_subject": "Re: bug in join? "
},
{
"msg_contents": "Laurette Cisneros <laurette@nextbus.com> writes:\n> Will keep trying. I also noticed that several triggers were dropped when I\n> dropped the table. I need to add these back and see if they make a\n> difference.\n\nNo, join_selectivity isn't going to care about triggers. What's failing\nis the estimation of the fraction of rows that will match on address\nbetween the two tables (join_selectivity is rejecting the result as\nobviously bogus, which it is). That doesn't depend on anything except\nthe ANALYZE statistics.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Jan 2002 17:36:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: bug in join? "
},
{
"msg_contents": "OK, I reproduced it again. I had restored the data for the tables from the\nwrong backup. I restored the two tables from last night's backup (BTW, we use\npg_dump -a -O -Fc...) and ran vacuum analyze and it reproduces (each and\nevery time).\n\nWe've turned off vacuum analyze (we do it every night *after* the backup)\nfor now. We would love for this to get fixed asap (of course ;.)\n\nHere's the info. you asked for:\n\nselect * from pg_stats where tablesname ='b':\n\n tablename | attname | null_frac | avg_width | n_distinct | most_common_vals | most_common_freqs | histogram_bounds | correlation\n-----------+--------------+-----------+-----------+------------+---------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------+------------------+-------------\n b | rev | 0 | 4 | -0.142857 | {\"0\",\"1001\",\"1002\"} | {\"0.333333\",\"0.333333\",\"0.333333\"}\n | | 1\n b | b_tag | 0 | 7 | -0.333333 | {\"S001\",\"S002\",\"S003\",\"S004\",\"S005\",\"VT1\",\"VT2\"} | {\"0.142857\",\"0.142857\",\"0.142857\",\"0.142857\",\"0.142857\",\"0.142857\",\"0.142857\"} | | 0.454545\n b | input_tag | 0 | 16 | 1 | {\"AirLinkInput\"} | {\"1\"} | | 1\n b | address | 0 | 19 | -0.333333 | {\"166.128.052.237\",\"166.128.053.084\",\"166.128.054.017\",\"166.128.054.018\",\"166.128.057.250\",\"166.128.058.202\",\"166.128.058.203\"} | {\"0.142857\",\"0.142857\",\"0.142857\",\"0.142857\",\"0.142857\",\"0.142857\",\"0.142857\"} | | 0.454545\n b | b_distance | 0 | 4 | 1 | {\"200\"} | {\"1\"} | | 1\n b | b_time | 0 | 4 | 1 | {\"90\"} | {\"1\"} | | 1\n(6 rows)\n\nselect * from pg_status where tablename = 'd';\n tablename | attname | null_frac | avg_width | n_distinct | most_common_vals | most_common_freqs | histogram_bounds | 
correlation\n-----------+-------------+-----------+-----------+------------+---------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------\n d | rev | 0 | 4 | 3 | {\"0\",\"1002\",\"1001\"} | {\"0.45679\",\"0.45679\",\"0.0864198\"} | | 0.912263\n d | address | 0 | 19 | -0.45679 | {\"166.128.052.237\",\"166.128.053.084\",\"166.128.054.017\",\"166.128.054.018\",\"166.128.057.250\",\"166.128.058.202\",\"166.128.058.203\"} | {\"0.037037\",\"0.037037\",\"0.037037\",\"0.037037\",\"0.037037\",\"0.037037\",\"0.037037\"} | {\"166.128.169.189\",\"166.128.169.199\",\"166.128.169.236\",\"166.128.171.002\",\"166.129.108.003\",\"166.132.126.184\",\"166.133.174.093\",\"166.133.174.098\",\"166.204.012.171\",\"166.204.045.135\",\"166.204.066.001\"} | 0.451197\n d | d_passwd | 0 | 9 | 1 | {\"Czech\"} | {\"1\"} | | 1\n d | d_port | 0 | 4 | 1 | {\"22335\"} | {\"1\"} |\n | 1\n d | d_type | 0 | 9 | 2 | {\"signs\",\"buses\"} | {\"0.740741\",\"0.259259\"} | | 0.912263\n d | d_status | 0 | 4 | 1 | {\"0\"} | {\"1\"} | | 1\n(6 rows)\n\nThanks!\n\nLaurette\nOn Wed, 2 Jan 2002, Tom Lane wrote:\n\n> Laurette Cisneros <laurette@nextbus.com> writes:\n> > This join worked last week and today it gets and error:\n> > select * from b, d\n> > where b.address = d.address;\n> > It now fails with the following error:\n> > ERROR: join_selectivity: bad value -0.121693\n>\n> Probably what has changed is the pg_statistic data (VACUUM ANALYZE\n> results). 
Please send the results of\n>\n> select * from pg_stats where tablename = 'b';\n> select * from pg_stats where tablename = 'd';\n>\n> \t\t\tregards, tom lane\n>\n\n-- \nLaurette Cisneros\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\nPassenger Information Everywhere\n\n",
"msg_date": "Wed, 2 Jan 2002 15:32:43 -0800 (PST)",
"msg_from": "Laurette Cisneros <laurette@nextbus.com>",
"msg_from_op": true,
"msg_subject": "Re: bug in join? "
}
] |
[
{
"msg_contents": "\nA question came up on the jdbc mail list today that I couldn't answer. \nIt involves some code that is in the jdbc cvs tree (and has been for \nmany releases). This code contains the following license terms in each \nfile:\n\n/*\n* Redistribution and use of this software and associated documentation\n* (\"Software\"), with or without modification, are permitted provided\n* that the following conditions are met:\n*\n* 1. Redistributions of source code must retain copyright\n* \n statements and notices. Redistributions must also contain a\n* \n copy of this document.\n*\n* 2. Redistributions in binary form must reproduce the\n* \n above copyright notice, this list of conditions and the\n* \n following disclaimer in the documentation and/or other\n* \n materials provided with the distribution.\n*\n* 3. The name \"Exolab\" must not be used to endorse or promote\n* \n products derived from this Software without prior written\n* \n permission of Exoffice Technologies. For written permission,\n* \n please contact info@exolab.org.\n*\n* 4. Products derived from this Software may not be called \"Exolab\"\n* \n nor may \"Exolab\" appear in their names without prior written\n* \n permission of Exoffice Technologies. Exolab is a registered\n* \n trademark of Exoffice Technologies.\n*\n* 5. 
Due credit should be given to the Exolab Project\n* \n (http://www.exolab.org/).\n*\n* THIS SOFTWARE IS PROVIDED BY EXOFFICE TECHNOLOGIES AND CONTRIBUTORS\n* ``AS IS'' AND ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT\n* NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND\n* FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.\tIN NO EVENT SHALL\n* EXOFFICE TECHNOLOGIES OR ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT,\n* INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n* SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\n* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,\n* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED\n* OF THE POSSIBILITY OF SUCH DAMAGE.\n*\n* Copyright 1999 (C) Exoffice Technologies Inc. All Rights Reserved.\n*\n\n\n\nIs this a problem?\n\nthanks,\n--Barry\n\n\n",
"msg_date": "Wed, 02 Jan 2002 13:47:13 -0800",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": true,
"msg_subject": "software license question"
},
{
"msg_contents": "Barry Lind <barry@xythos.com> writes:\n> A question came up on the jdbc mail list today that I couldn't answer. \n> It involves some code that is in the jdbc cvs tree (and has been for \n> many releases). This code contains the following license terms in each \n> file:\n> [ snip ]\n> Is this a problem?\n\nLooks like a pretty standard BSD-style license to me. I don't see a\nproblem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Jan 2002 17:04:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: software license question "
},
{
"msg_contents": "Barry Lind wrote:\n> \n> A question came up on the jdbc mail list today that I couldn't answer. \n> It involves some code that is in the jdbc cvs tree (and has been for \n> many releases). This code contains the following license terms in each \n> file:\n> \n\n> * 5. Due credit should be given to the Exolab Project\n> * \n> (http://www.exolab.org/).\n> *\n> * OF THE POSSIBILITY OF SUCH DAMAGE.\n> *\n> * Copyright 1999 (C) Exoffice Technologies Inc. All Rights Reserved.\n> *\n> \n\nYes, this does seems like a serious problem. Can the original\ncontributor be contacted?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 2 Jan 2002 17:05:29 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: software license question"
},
{
"msg_contents": "> *\n> * 5. Due credit should be given to the Exolab Project\n> * \n> (http://www.exolab.org/).\n> *\n> * Copyright 1999 (C) Exoffice Technologies Inc. All Rights Reserved.\n> *\n\nThis was the part that seemed to be a problem. Do we need to mention\nexolab in our COPYRIGHT announcment in psql?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 2 Jan 2002 18:17:42 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: software license question"
},
{
"msg_contents": "I'm not sure its a problem. The source code already contains the\ncopyright, and this only affects the JDBC driver. Perhaps we just need\nto put in the JDBC driver readme file that parts were provided by\nExolab?\n\nI guess, it all depends on who defines \"due credit\". The code was\nsubmitted by Assaf Arkin <arkin@exoffice.com>. Who gets to ask?\n\n\n\nOn Wed, 2002-01-02 at 15:05, Bruce Momjian wrote:\n> Barry Lind wrote:\n> > \n> > A question came up on the jdbc mail list today that I couldn't answer. \n> > It involves some code that is in the jdbc cvs tree (and has been for \n> > many releases). This code contains the following license terms in each \n> > file:\n> > \n> \n> > * 5. Due credit should be given to the Exolab Project\n> > * \n> > (http://www.exolab.org/).\n> > *\n> > * OF THE POSSIBILITY OF SUCH DAMAGE.\n> > *\n> > * Copyright 1999 (C) Exoffice Technologies Inc. All Rights Reserved.\n> > *\n> > \n> \n> Yes, this does seems like a serious problem. Can the original\n> contributor be contacted?\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n-- \n\nVirtually, \nNed Wolpert <ned.wolpert@knowledgenet.com>\n\nD08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45",
"msg_date": "02 Jan 2002 17:16:18 -0700",
"msg_from": "Ned Wolpert <ned.wolpert@knowledgenet.com>",
"msg_from_op": false,
"msg_subject": "Re: software license question"
},
{
"msg_contents": "Ned Wolpert wrote:\n\nChecking application/pgp-signature: FAILURE\n-- Start of PGP signed section.\n> I'm not sure its a problem. The source code already contains the\n> copyright, and this only affects the JDBC driver. Perhaps we just need\n> to put in the JDBC driver readme file that parts were provided by\n> Exolab?\n> \n> I guess, it all depends on who defines \"due credit\". The code was\n> submitted by Assaf Arkin <arkin@exoffice.com>. Who gets to ask?\n\nTo make things even more complex, the newer BSD copyright doesn't have a\n\"due credit\" clause. See the COPYRIGHT file at the top of our source\ntree.\n\nI suppose the copyright at the top of the file, and perhaps in the jdbc\nREADME is enough.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 2 Jan 2002 20:17:38 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: software license question"
}
] |
[
{
"msg_contents": "Okay, I've been able to reproduce the problem here. Looks like the\neqjoinsel math is not wrong, exactly, but small roundoff errors are\ncausing the logic to do unreasonable things. I think\nget_att_numdistinct needs to round its result to an integer, and\nprobably there needs to be some clamping of computed probabilities to\nthe 0..1 range (otherfreq1 is coming out about -4.4703483581542969e-08\nin this example, which should be clamped to 0).\n\nWill have a fix late tonight or tomorrow.\n\nThanks for the example case!\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Jan 2002 19:02:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: bug in join? "
},
{
"msg_contents": "Thanks Tom!\n\nOn Wed, 2 Jan 2002, Tom Lane wrote:\n\n> Okay, I've been able to reproduce the problem here. Looks like the\n> eqjoinsel math is not wrong, exactly, but small roundoff errors are\n> causing the logic to do unreasonable things. I think\n> get_att_numdistinct needs to round its result to an integer, and\n> probably there needs to be some clamping of computed probabilities to\n> the 0..1 range (otherfreq1 is coming out about -4.4703483581542969e-08\n> in this example, which should be clamped to 0).\n>\n> Will have a fix late tonight or tomorrow.\n>\n> Thanks for the example case!\n>\n> \t\t\tregards, tom lane\n>\n\n-- \nLaurette Cisneros\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\nPassenger Information Everywhere\n\n",
"msg_date": "Wed, 2 Jan 2002 16:12:53 -0800 (PST)",
"msg_from": "Laurette Cisneros <laurette@nextbus.com>",
"msg_from_op": false,
"msg_subject": "Re: bug in join? "
},
{
"msg_contents": "Okay, I've committed a fix to CVS. The easiest way for you to pick it\nup is probably to grab tonight's nightly snapshot,\nftp://ftp.us.postgresql.org/dev/postgresql-snapshot.tar.gz\n(or a closer ftp mirror if you have one).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Jan 2002 23:05:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: bug in join? "
},
{
"msg_contents": "Hi Tom,\n\nI did download this friday morning and compiled and installed it on my\nsystem. Looks like the bug is fixed! I also ran vacuum analyze a few\nhundred times to make sure ;.)\n\nWe were wondering when the next (beta) release is scheduled for?\n\nThank you for the quick response to this problem!\n\nL.\nOn Wed, 2 Jan 2002, Tom Lane wrote:\n\n> Okay, I've committed a fix to CVS. The easiest way for you to pick it\n> up is probably to grab tonight's nightly snapshot,\n> ftp://ftp.us.postgresql.org/dev/postgresql-snapshot.tar.gz\n> (or a closer ftp mirror if you have one).\n>\n> \t\t\tregards, tom lane\n>\n\n-- \nLaurette Cisneros\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\nPassenger Information Everywhere\n\n",
"msg_date": "Mon, 7 Jan 2002 10:13:02 -0800 (PST)",
"msg_from": "Laurette Cisneros <laurette@nextbus.com>",
"msg_from_op": false,
"msg_subject": "Re: bug in join? "
}
] |
[
{
"msg_contents": "\nOK, can someone comment on this email I received. I am confused myself.\nFAQ item 4.1 says:\n\n> 4.1) Why is system confused about commas, decimal points, and date formats.\n\nDoes locale only control dates? If so, why the mention of commas? This\nitem was there before I started maintaining the FAQ.\n \n\n---------------------------------------------------------------------------\n\nCarsten Grewe wrote:\n> Hi Bruce!\n> \n> Below you'll find a question I asked to the pgsql-general list. I \n> missunderstood FAQ 4.1. Maybe it's just me - maybe the answer to the FAQ is \n> irritating - it's for you to judge.\n> \n> Kind regards and merry Christmas\n> Carsten\n> \n> ---------- forwarded message ----------\n> \n> Subject: Re: [GENERAL] localization\n> Date: Fri, 21 Dec 2001 13:06:20 +0100\n> From: Carsten Grewe <DerReisende@schatzis.org>\n> To: Karel Zak <zakkr@zf.jcu.cz>\n> \n> Karel Zak, Freitag, 21. Dezember 2001 10:09:\n> > On Thu, Dec 20, 2001 at 10:24:31PM +0100, Carsten Grewe wrote:\n> > > I am a bit lost here. I have tried to convince pgsql to show numbers with\n> > > a decimal-comma (german style) instead of the decimal-point.\n> [snip]\n> >\n> > The backend ignore LC_NUMERIC. If you want output numbers in locale\n> > depend format use to_char() (for example to_char(1234.5, '9G999D9').\n> >\n> > Karel\n> \n> Thanks! I know about the Format Functions, but FAQ 4.1 made me believe that\n> there is a way to influence the backend.\n> \n> VVVVVVVVV\n> 4.1) Why is system confused about commas, decimal points, and date formats.\n> \n> Check your locale configuration. PostgreSQL uses the locale setting of the\n> user that ran the postmaster process. There are postgres and psql SET\n> commands to control the date format. 
Set those accordingly for your operating\n> environment.\n> VVVVVVVVV\n> \n> Kind regards and merry Christmas\n> Carsten\n> \n> -------------------------------------------------------\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jan 2002 01:37:19 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PGSQL - FAQ 4.1"
},
{
"msg_contents": "> OK, can someone comment on this email I received. I am confused myself.\n> FAQ item 4.1 says:\n> > 4.1) Why is system confused about commas, decimal points, and date formats.\n> Does locale only control dates? If so, why the mention of commas? This\n> item was there before I started maintaining the FAQ.\n\nI think there is confusion on these things, but the FAQ content does not\nseem to reduce it much. afaik locale settings affect *nothing* in\ndate/time, but SET DATESTYLE does. Is it the case that LC_NUMERIC or\nother settings would affect numeric *input*?\n\n - Thomas\n\n> Carsten Grewe wrote:\n> > Hi Bruce!\n> >\n> > Below you'll find a question I asked to the pgsql-general list. I\n> > missunderstood FAQ 4.1. Maybe it's just me - maybe the answer to the FAQ is\n> > irritating - it's for you to judge.\n> >\n> > Kind regards and merry Christmas\n> > Carsten\n> >\n> > ---------- forwarded message ----------\n> >\n> > Subject: Re: [GENERAL] localization\n> > Date: Fri, 21 Dec 2001 13:06:20 +0100\n> > From: Carsten Grewe <DerReisende@schatzis.org>\n> > To: Karel Zak <zakkr@zf.jcu.cz>\n> >\n> > Karel Zak, Freitag, 21. Dezember 2001 10:09:\n> > > On Thu, Dec 20, 2001 at 10:24:31PM +0100, Carsten Grewe wrote:\n> > > > I am a bit lost here. I have tried to convince pgsql to show numbers with\n> > > > a decimal-comma (german style) instead of the decimal-point.\n> > [snip]\n> > >\n> > > The backend ignore LC_NUMERIC. If you want output numbers in locale\n> > > depend format use to_char() (for example to_char(1234.5, '9G999D9').\n> > >\n> > > Karel\n> >\n> > Thanks! I know about the Format Functions, but FAQ 4.1 made me believe that\n> > there is a way to influence the backend.\n> >\n> > VVVVVVVVV\n> > 4.1) Why is system confused about commas, decimal points, and date formats.\n> >\n> > Check your locale configuration. PostgreSQL uses the locale setting of the\n> > user that ran the postmaster process. 
There are postgres and psql SET\n> > commands to control the date format. Set those accordingly for your operating\n> > environment.\n> > VVVVVVVVV\n> >\n> > Kind regards and merry Christmas\n> > Carsten\n> >\n> > -------------------------------------------------------\n> >\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n",
"msg_date": "Thu, 03 Jan 2002 15:11:13 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: PGSQL - FAQ 4.1"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> Is it the case that LC_NUMERIC or\n> other settings would affect numeric *input*?\n\nNo, because we don't accept those settings from the environment;\nif you look in main.c, you'll see that only LC_MESSAGES, \nLC_CTYPE, LC_COLLATE, and LC_MONETARY are accepted.\n\nto_char does look at additional locale settings, I believe, but in\ngeneral we ignore LC_NUMERIC.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jan 2002 10:51:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGSQL - FAQ 4.1 "
},
{
"msg_contents": "Tom Lane wrote:\n> Thomas Lockhart <lockhart@fourpalms.org> writes:\n> > Is it the case that LC_NUMERIC or\n> > other settings would affect numeric *input*?\n> \n> No, because we don't accept those settings from the environment;\n> if you look in main.c, you'll see that only LC_MESSAGES, \n> LC_CTYPE, LC_COLLATE, and LC_MONETARY are accepted.\n> \n> to_char does look at additional locale settings, I believe, but in\n> general we ignore LC_NUMERIC.\n\nSeems I should just remove the FAQ item.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jan 2002 12:27:01 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PGSQL - FAQ 4.1"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Tom Lane wrote:\n> > Thomas Lockhart <lockhart@fourpalms.org> writes:\n> > > Is it the case that LC_NUMERIC or\n> > > other settings would affect numeric *input*?\n> > \n> > No, because we don't accept those settings from the environment;\n> > if you look in main.c, you'll see that only LC_MESSAGES, \n> > LC_CTYPE, LC_COLLATE, and LC_MONETARY are accepted.\n> > \n> > to_char does look at additional locale settings, I believe, but in\n> > general we ignore LC_NUMERIC.\n> \n> Seems I should just remove the FAQ item.\n\nOK, I have removed the item. Seemed just too confusing.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 4 Jan 2002 00:45:03 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PGSQL - FAQ 4.1"
}
] |
[
{
"msg_contents": "TODO: I have added a syntax suggested by someone on IRC:\n\n* Make it easier to create a database owned by someone who can't createdb,\n perhaps CREATE DATABASE dbname WITH USER = \"user\"\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jan 2002 03:00:44 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Updated TODO item"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n>TODO: I have added a syntax suggested by someone on IRC:\n>\n>* Make it easier to create a database owned by someone who can't createdb,\n> perhaps CREATE DATABASE dbname WITH USER = \"user\"\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n>\nPerhaps \"CREATE DATABASE dbname WITH USER username\" would fit a little \nbetter. I'd love to see this one get done... ;-)\n\n\n",
"msg_date": "Thu, 03 Jan 2002 03:19:29 -0600",
"msg_from": "Thomas Swan <tswan-lst@ics.olemiss.edu>",
"msg_from_op": false,
"msg_subject": "Re: Updated TODO item"
},
{
"msg_contents": "On Thu, 3 Jan 2002, Bruce Momjian wrote:\n\n> TODO: I have added a syntax suggested by someone on IRC:\n> \n> * Make it easier to create a database owned by someone who can't createdb,\n> perhaps CREATE DATABASE dbname WITH USER = \"user\"\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nAttached is a patch which implements this. Passes regression:\n\n======================\n All 79 tests passed.\n======================\n\nThis required the modification of the CreatedbStmt data structure. I've\nensured that all functions which interact with CreatedbStmt have been\nmodified.\n\nGavin",
"msg_date": "Fri, 4 Jan 2002 02:27:03 +1100 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Updated TODO item"
},
{
"msg_contents": "Thomas Swan wrote:\n> Bruce Momjian wrote:\n> \n> >TODO: I have added a syntax suggested by someone on IRC:\n> >\n> >* Make it easier to create a database owned by someone who can't createdb,\n> > perhaps CREATE DATABASE dbname WITH USER = \"user\"\n> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> >\n> Perhaps \"CREATE DATABASE dbname WITH USER username\" would fit a little \n> better. I'd love to see this one get done... ;-)\n\nWell, I had the equals sign because the documentation for other params\nsuggests it:\n\n\tCREATE DATABASE name\n\t [ WITH [ LOCATION = 'dbpath' ]\n\t [ TEMPLATE = template ]\n\t [ ENCODING = encoding ] ]\n\nMaybe we should make the equals optional for all these options.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jan 2002 12:21:11 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Updated TODO item"
},
{
"msg_contents": "\nThanks. Saved for 7.3.\n\n---------------------------------------------------------------------------\n\nGavin Sherry wrote:\n> On Thu, 3 Jan 2002, Bruce Momjian wrote:\n> \n> > TODO: I have added a syntax suggested by someone on IRC:\n> > \n> > * Make it easier to create a database owned by someone who can't createdb,\n> > perhaps CREATE DATABASE dbname WITH USER = \"user\"\n> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> \n> Attached is a patch which implements this. Passes regression:\n> \n> ======================\n> All 79 tests passed.\n> ======================\n> \n> This required the modification of the CreatedbStmt data structure. I've\n> ensured that all functions which interact with CreatedbStmt have been\n> modified.\n> \n> Gavin\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jan 2002 12:27:58 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Updated TODO item"
},
{
"msg_contents": "\n\n\n\n\nBruce Momjian wrote:\n\nThomas Swan wrote:\n\nBruce Momjian wrote:\n\nTODO: I have added a syntax suggested by someone on IRC:* Make it easier to create a database owned by someone who can't createdb, perhaps CREATE DATABASE dbname WITH USER = \"user\" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nPerhaps \"CREATE DATABASE dbname WITH USER username\" would fit a little better. I'd love to see this one get done... ;-)\n\nWell, I had the equals sign because the documentation for other paramssuggests it:\tCREATE DATABASE name\t [ WITH [ LOCATION = 'dbpath' ]\t [ TEMPLATE = template ]\t [ ENCODING = encoding ] ]Maybe we should make the equals optional for all these options.\n\nThey need to be consistent. I had forgotten that the existing syntax contained\nthe \"=\" character. Either way would be fine, but the \"user = \" option would\ndefinitely be a welcome addition. I\n\n\n\n\n",
"msg_date": "Thu, 03 Jan 2002 12:17:39 -0600",
"msg_from": "Thomas Swan <tswan-lst@ics.olemiss.edu>",
"msg_from_op": false,
"msg_subject": "Re: Updated TODO item"
},
{
"msg_contents": "Thomas Swan wrote:\n> Bruce Momjian wrote:\n> \n> >TODO: I have added a syntax suggested by someone on IRC:\n> >\n> >* Make it easier to create a database owned by someone who can't createdb,\n> > perhaps CREATE DATABASE dbname WITH USER = \"user\"\n> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> >\n> Perhaps \"CREATE DATABASE dbname WITH USER username\" would fit a little \n> better. I'd love to see this one get done... ;-)\n\nOK, I checked at CREATE DATABASE is the only command that has the equals\nin WITH PARAM = VAL. Added to TODO:\n\n\t* Make equals sign optional in CREATE DATABASE WITH param = 'val'\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jan 2002 13:21:23 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Updated TODO item"
},
{
"msg_contents": "On Thu, 3 Jan 2002, Bruce Momjian wrote:\n\n> > Perhaps \"CREATE DATABASE dbname WITH USER username\" would fit a little \n> > better. I'd love to see this one get done... ;-)\n> \n> Well, I had the equals sign because the documentation for other params\n> suggests it:\n> \n> \tCREATE DATABASE name\n> \t [ WITH [ LOCATION = 'dbpath' ]\n> \t [ TEMPLATE = template ]\n> \t [ ENCODING = encoding ] ]\n> \n> Maybe we should make the equals optional for all these options.\n> \n\n'WITH LOCATION dbpath', etc makes more sense. I'd be happy to revise my\npatch to change this. Does anyone have any strong feelings against this?\n\nGavin\n\n",
"msg_date": "Fri, 4 Jan 2002 12:17:09 +1100 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Updated TODO item"
},
{
"msg_contents": "On Thu, 3 Jan 2002, Bruce Momjian wrote:\n\n> Thomas Swan wrote:\n> > Bruce Momjian wrote:\n> > \n> > >TODO: I have added a syntax suggested by someone on IRC:\n> > >\n> > >* Make it easier to create a database owned by someone who can't createdb,\n> > > perhaps CREATE DATABASE dbname WITH USER = \"user\"\n> > > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > >\n> > Perhaps \"CREATE DATABASE dbname WITH USER username\" would fit a little \n> > better. I'd love to see this one get done... ;-)\n> \n> OK, I checked at CREATE DATABASE is the only command that has the equals\n> in WITH PARAM = VAL. Added to TODO:\n> \n> \t* Make equals sign optional in CREATE DATABASE WITH param = 'val'\n\nI've revised my previous patch to handle this.\n\nGavin",
"msg_date": "Fri, 4 Jan 2002 12:56:34 +1100 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Updated TODO item"
},
{
"msg_contents": "Gavin Sherry wrote:\n> On Thu, 3 Jan 2002, Bruce Momjian wrote:\n> \n> > Thomas Swan wrote:\n> > > Bruce Momjian wrote:\n> > > \n> > > >TODO: I have added a syntax suggested by someone on IRC:\n> > > >\n> > > >* Make it easier to create a database owned by someone who can't createdb,\n> > > > perhaps CREATE DATABASE dbname WITH USER = \"user\"\n> > > > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > > >\n> > > Perhaps \"CREATE DATABASE dbname WITH USER username\" would fit a little \n> > > better. I'd love to see this one get done... ;-)\n> > \n> > OK, I checked at CREATE DATABASE is the only command that has the equals\n> > in WITH PARAM = VAL. Added to TODO:\n> > \n> > \t* Make equals sign optional in CREATE DATABASE WITH param = 'val'\n> \n> I've revised my previous patch to handle this.\n> \n\nThanks. Saved for 7.3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jan 2002 23:43:27 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Updated TODO item"
},
{
"msg_contents": "> * Make it easier to create a database owned by someone who can't createdb,\n> perhaps CREATE DATABASE dbname WITH USER = \"user\"\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nShouldn't that be\n\nCREATE DATABASE dbname WITH OWNER = \"user\"\n\n?\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2501\nHowitzvej 75 Åben 14.00-18.00 Web: www.suse.dk\n2000 Frederiksberg Lørdag 11.00-17.00 Email: kar@kakidata.dk\n",
"msg_date": "Fri, 4 Jan 2002 17:47:51 +0100",
"msg_from": "Kaare Rasmussen <kar@webline.dk>",
"msg_from_op": false,
"msg_subject": "Re: Updated TODO item"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nGavin Sherry wrote:\n> On Thu, 3 Jan 2002, Bruce Momjian wrote:\n> \n> > Thomas Swan wrote:\n> > > Bruce Momjian wrote:\n> > > \n> > > >TODO: I have added a syntax suggested by someone on IRC:\n> > > >\n> > > >* Make it easier to create a database owned by someone who can't createdb,\n> > > > perhaps CREATE DATABASE dbname WITH USER = \"user\"\n> > > > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > > >\n> > > Perhaps \"CREATE DATABASE dbname WITH USER username\" would fit a little \n> > > better. I'd love to see this one get done... ;-)\n> > \n> > OK, I checked at CREATE DATABASE is the only command that has the equals\n> > in WITH PARAM = VAL. Added to TODO:\n> > \n> > \t* Make equals sign optional in CREATE DATABASE WITH param = 'val'\n> \n> I've revised my previous patch to handle this.\n> \n> Gavin\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Feb 2002 20:59:29 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Updated TODO item"
}
] |
[
{
"msg_contents": "> Tatsuo, is AIX capable of <10 millisecond sleeps?\n\nYes, the select granularity is 1 ms for non-root users on AIX.\n\nAIX is able to actually sleep microseconds with select\nas user root (non-root users can use usleep for the same\nresult). AIX also has yield.\n\nI already reported this once, but a patch was not welcomed,\nmaybe I failed to properly describe ... \n\nAndreas\n",
"msg_date": "Thu, 3 Jan 2002 14:02:05 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: LWLock contention: I think I understand the problem"
}
] |
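[Editor's illustration] The technique Andreas describes — sleeping for a fine-grained interval by calling select() with no file descriptors and only a timeout — can be sketched as follows. This is a generic, hypothetical illustration in Python (PostgreSQL itself does this in C); the achievable granularity is platform-dependent, and the thread reports roughly 1 ms for non-root users on AIX.

```python
import select
import time

def microsleep(usec):
    # Sleep by calling select() with empty fd sets and only a timeout --
    # the technique discussed in the thread above.  How finely this
    # resolves depends on the kernel's timer granularity.
    select.select([], [], [], usec / 1_000_000)

start = time.monotonic()
microsleep(10_000)                        # request ~10 ms
print(time.monotonic() - start >= 0.005)  # slept at least a few ms
```

On most systems select() sleeps for at least the requested timeout, so the requested interval is a lower bound rather than an exact figure.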
[
{
"msg_contents": "Hi,\n\nI am doing some tests with functions in 7.2b4. I got some stuff to work \nthat returns 'setof text' but I was wondering whether it is possible to let \nthe function return more than one column, something like:\n\ncreate function myfunc (integer) returns setof (text, integer)\n\nor\n\ncreate function myfunc (integer) returns setof text, integer\n\nI've searched the docs about the syntax of the 'setof' keyword but could \nnot find really much.\n\nTIA,\n\nReinoud van Leeuwen\n\n",
"msg_date": "Thu, 3 Jan 2002 15:11:11 +0100 (CET)",
"msg_from": "\"Reinoud van Leeuwen\" <reinoud@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "setof (more than one column)"
}
] |
[
{
"msg_contents": "Shahid Mohammad Shamsi (mshamsi@dinmar.com) reports a bug with a severity of 2\nThe lower the number the more severe it is.\n\nShort Description\nselect table privilege in postgres allows user to create index on the table\n\nLong Description\nI created a user and assigned select privilege on a table. The user can not insert any data or add a field to the table. But, the user can create indexes on the table despite having select only privileges. This becomes a serious problem if the user can create unique indexes.\n\n\nSample Code\n\n\nNo file was uploaded with this report\n\n",
"msg_date": "Thu, 3 Jan 2002 11:12:05 -0500 (EST)",
"msg_from": "pgsql-bugs@postgresql.org",
"msg_from_op": true,
"msg_subject": "Bug #549: select table privilege in postgres allows user to create\n\tindex on the table"
},
{
"msg_contents": "pgsql-bugs@postgresql.org writes:\n> select table privilege in postgres allows user to create index on the table\n\nActually, it appears that CREATE INDEX has no permission check at all.\n\nI agree this is a bug. Probably CREATE INDEX should require ownership\npermission, the same as ALTER TABLE.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jan 2002 11:54:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug #549: select table privilege in postgres allows user to\n\tcreate index on the table"
},
{
"msg_contents": "Tom Lane wrote:\n> pgsql-bugs@postgresql.org writes:\n> > select table privilege in postgres allows user to create index on the table\n> \n> Actually, it appears that CREATE INDEX has no permission check at all.\n> \n> I agree this is a bug. Probably CREATE INDEX should require ownership\n> permission, the same as ALTER TABLE.\n\nAdded to TODO:\n\n\t* Allow only owner to create indexes\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jan 2002 12:30:25 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug #549: select table privilege in postgres allows"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Added to TODO:\n> \t* Allow only owner to create indexes\n\nI was going to just fix it now. Do you want to leave it for 7.3?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jan 2002 12:43:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug #549: select table privilege in postgres allows\n\tuser to create index on the table"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Added to TODO:\n> > \t* Allow only owner to create indexes\n> \n> I was going to just fix it now. Do you want to leave it for 7.3?\n\nIf you think it is safe, go ahead. I fixed some stuff last night. :-)\n\nI will remove from TODO when I see the commit.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jan 2002 12:44:36 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug #549: select table privilege in postgres allows"
}
] |
[
{
"msg_contents": "Look at this:\n\n\t$ dropdb lijasdf oiuwqe test\n\tDROP DATABASE\n\nThe create/drop scripts only process the last arguments, ignoring\nearlier ones. I assume no one wants me to fix it now so I will add this\nto TODO:\n\n\t* Prevent create/drop scripts from allowing extra args (Bruce)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jan 2002 13:12:06 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "More problem with scripts"
},
{
"msg_contents": "[2002-01-03 13:12] Bruce Momjian said:\n| Look at this:\n| \n| \t$ dropdb lijasdf oiuwqe test\n| \tDROP DATABASE\n| \n| The create/drop scripts only process the last arguments, ignoring\n| earlier ones. I assume no one wants me to fix it now so I will add this\n| to TODO:\n\nsomething /simple/ might look like.\n\nIndex: dropdb\n===================================================================\nRCS file: /var/cvsup/pgsql/src/bin/scripts/dropdb,v\nretrieving revision 1.13\ndiff -c -r1.13 dropdb\n*** dropdb 30 Sep 2001 22:17:51 -0000 1.13\n--- dropdb 3 Jan 2002 18:54:17 -0000\n***************\n*** 88,94 ****\n exit 1\n ;;\n *)\n! dbname=\"$1\"\n ;;\n esac\n shift\n--- 88,95 ----\n exit 1\n ;;\n *)\n! [ ! -z \"$dbname\" ] && usage=1\n! dbname=$1\n ;;\n esac\n shift\n***************\n*** 132,137 ****\n--- 133,141 ----\n \n \n dbname=`echo $dbname | sed 's/\\\"/\\\\\\\"/g'`\n+ \n+ echo $dbname\n+ exit 0;\n \n ${PATHNAME}psql $PSQLOPT -d template1 -c \"DROP DATABASE \\\"$dbname\\\"\"\n if [ \"$?\" -ne 0 ]; then\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n",
"msg_date": "Thu, 3 Jan 2002 13:56:43 -0500",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": false,
"msg_subject": "Re: More problem with scripts"
},
{
"msg_contents": "[2002-01-03 13:56] Brent Verner said:\n| [2002-01-03 13:12] Bruce Momjian said:\n| | Look at this:\n| | \n| | \t$ dropdb lijasdf oiuwqe test\n| | \tDROP DATABASE\n| | \n| | The create/drop scripts only process the last arguments, ignoring\n| | earlier ones. I assume no one wants me to fix it now so I will add this\n| | to TODO:\n| \n| something /simple/ might look like.\n| \n| Index: dropdb\n| ===================================================================\n| RCS file: /var/cvsup/pgsql/src/bin/scripts/dropdb,v\n [snip]\n| + \n| + echo $dbname\n| + exit 0;\n\ner, minus this debugging cruft :-)\n\nsorry,\n b\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n",
"msg_date": "Thu, 3 Jan 2002 14:16:24 -0500",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": false,
"msg_subject": "Re: More problem with scripts"
},
{
"msg_contents": "\nActually, we can just do:\n\n> *)\n> dbname=$1\n> [ \"$#\" -ne 1 ] && usage=1\n\nMeaning if they have anything after the dbname, it is an error. This\ncatches flags _after_ the dbname. Seems most of the script have this\nproblem. If people want it fixed, I can easily do it; just give me to\ngo-ahead.\n\n---------------------------------------------------------------------------\n\nBrent Verner wrote:\n> [2002-01-03 13:12] Bruce Momjian said:\n> | Look at this:\n> | \n> | \t$ dropdb lijasdf oiuwqe test\n> | \tDROP DATABASE\n> | \n> | The create/drop scripts only process the last arguments, ignoring\n> | earlier ones. I assume no one wants me to fix it now so I will add this\n> | to TODO:\n> \n> something /simple/ might look like.\n> \n> Index: dropdb\n> ===================================================================\n> RCS file: /var/cvsup/pgsql/src/bin/scripts/dropdb,v\n> retrieving revision 1.13\n> diff -c -r1.13 dropdb\n> *** dropdb 30 Sep 2001 22:17:51 -0000 1.13\n> --- dropdb 3 Jan 2002 18:54:17 -0000\n> ***************\n> *** 88,94 ****\n> exit 1\n> ;;\n> *)\n> ! dbname=\"$1\"\n> ;;\n> esac\n> shift\n> --- 88,95 ----\n> exit 1\n> ;;\n> *)\n> ! [ ! -z \"$dbname\" ] && usage=1\n> ! dbname=$1\n> ;;\n> esac\n> shift\n> ***************\n> *** 132,137 ****\n> --- 133,141 ----\n> \n> \n> dbname=`echo $dbname | sed 's/\\\"/\\\\\\\"/g'`\n> + \n> + echo $dbname\n> + exit 0;\n> \n> ${PATHNAME}psql $PSQLOPT -d template1 -c \"DROP DATABASE \\\"$dbname\\\"\"\n> if [ \"$?\" -ne 0 ]; then\n> \n> -- \n> \"Develop your talent, man, and leave the world something. Records are \n> really gifts from people. 
To think that an artist would love you enough\n> to share his music with anyone is a beautiful thing.\" -- Duane Allman\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jan 2002 14:19:12 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: More problem with scripts"
},
{
"msg_contents": "[2002-01-03 14:19] Bruce Momjian said:\n| \n| Actually, we can just do:\n| \n| > *)\n| > dbname=$1\n| > [ \"$#\" -ne 1 ] && usage=1\n| \n| Meaning if they have anything after the dbname, it is an error. This\n| catches flags _after_ the dbname. Seems most of the script have this\n| problem. If people want it fixed, I can easily do it; just give me to\n| go-ahead.\n\n+1\n\n I can't see a reason to /not/ fix something this simple for the 7.2 \nrelease. In general, I think it's best to fix things like this[1]\n\"on sight\" as opposed to queueing them in TODO where they /might/ sit\nuntouched through another release cycle.\n\n[1] meaning problems that require little effort to fix, and whose\n solutions are /very/ localized.\n\ncheers.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n",
"msg_date": "Thu, 3 Jan 2002 14:51:46 -0500",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": false,
"msg_subject": "Re: More problem with scripts"
},
{
"msg_contents": "Brent Verner wrote:\n> [2002-01-03 14:19] Bruce Momjian said:\n> | \n> | Actually, we can just do:\n> | \n> | > *)\n> | > dbname=$1\n> | > [ \"$#\" -ne 1 ] && usage=1\n> | \n> | Meaning if they have anything after the dbname, it is an error. This\n> | catches flags _after_ the dbname. Seems most of the script have this\n> | problem. If people want it fixed, I can easily do it; just give me to\n> | go-ahead.\n> \n> +1\n> \n> I can't see a reason to /not/ fix something this simple for the 7.2 \n> release. In general, I think it's best to fix things like this[1]\n> \"on sight\" as opposed to queueing them in TODO where they /might/ sit\n> untouched through another release cycle.\n> \n> [1] meaning problems that require little effort to fix, and whose\n> solutions are /very/ localized.\n\nOK, one more +1 and I will get to it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jan 2002 14:54:58 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: More problem with scripts"
},
{
"msg_contents": ">> I can't see a reason to /not/ fix something this simple for the 7.2 \n>> release.\n\nSeems like a safe fix to me too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jan 2002 15:58:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: More problem with scripts "
},
{
"msg_contents": "> Brent Verner wrote:\n> > [2002-01-03 14:19] Bruce Momjian said:\n> > | \n> > | Actually, we can just do:\n> > | \n> > | > *)\n> > | > dbname=$1\n> > | > [ \"$#\" -ne 1 ] && usage=1\n> > | \n> > | Meaning if they have anything after the dbname, it is an error. This\n> > | catches flags _after_ the dbname. Seems most of the script have this\n> > | problem. If people want it fixed, I can easily do it; just give me to\n> > | go-ahead.\n> > \n> > +1\n> > \n> > I can't see a reason to /not/ fix something this simple for the 7.2 \n> > release. In general, I think it's best to fix things like this[1]\n> > \"on sight\" as opposed to queueing them in TODO where they /might/ sit\n> > untouched through another release cycle.\n> > \n> > [1] meaning problems that require little effort to fix, and whose\n> > solutions are /very/ localized.\n> \n> OK, one more +1 and I will get to it.\n\n-4\n\n1: It's not a regression from 7.1. Anything else is too late.\n\n2: The issue does not cause problems if you stick to the documented syntax\n and it does not cause hazard if you don't.\n\n3: The patch is wrong, because showing the usage screen in case of an error\n is inappropriate.\n\n4: Even beginning to talk of \"localized\", \"trivial\", \"little effort\" should\n cause an automatic ban on programming for 1 month.\n\n\n-- \nGMX - Die Kommunikationsplattform im Internet.\nhttp://www.gmx.net",
"msg_date": "Thu, 3 Jan 2002 23:33:42 +0100 (MET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: More problem with scripts"
},
{
"msg_contents": "...\n> 4: Even beginning to talk of \"localized\", \"trivial\", \"little effort\" should\n> cause an automatic ban on programming for 1 month.\n\nHey, this a new year of optimism and hope. Save the realism until summer\nor fall at least :)\n\n - Thomas\n",
"msg_date": "Thu, 03 Jan 2002 23:03:20 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: More problem with scripts"
},
{
"msg_contents": "[2002-01-03 23:33] Peter Eisentraut said:\n| > Brent Verner wrote:\n| > > [2002-01-03 14:19] Bruce Momjian said:\n| > > | \n| > > | Actually, we can just do:\n\n| > > +1\n| > > \n| > > I can't see a reason to /not/ fix something this simple for the 7.2 \n| > > release. In general, I think it's best to fix things like this[1]\n| > > \"on sight\" as opposed to queueing them in TODO where they /might/ sit\n| > > untouched through another release cycle.\n| > > \n| > > [1] meaning problems that require little effort to fix, and whose\n| > > solutions are /very/ localized.\n| > \n| > OK, one more +1 and I will get to it.\n| \n| -4\n\n Hey, no ballot stuffing ;-)\n\n| 1: It's not a regression from 7.1. Anything else is too late.\n\n IMO, this is not a valid reason to /not/ fix _this problem_. Taken \nto an extreme, this reasoning would allow any number of fatal bugs to\nremain in 7.2 because they also existed in 7.1. Given the imminent\nrelease, this position is valid even for a number of non-fatal bugs, \nbut not this one.\n\n| 2: The issue does not cause problems if you stick to the documented syntax\n| and it\n| does not cause hazard if you don't.\n\n First half correct. It is arguable whether or not the following\nwould be considered a hazard or not. Is unexpected behavior \nhazardous?\n\n$ createdb whatiwanted whatigot\n\n| 3: The patch is wrong, because showing the usage screen in case of an error\n| is\n| inappropriate.\n\n Perhaps a more suitable message similar to the \"invalid option\" error\nshould be printed prior to exit, but this is not as important as \nenforcing proper/documented use of the script(s).\n\ncheers.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n",
"msg_date": "Thu, 3 Jan 2002 19:50:23 -0500",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": false,
"msg_subject": "Re: More problem with scripts"
},
{
"msg_contents": "Brent Verner wrote:\n> [2002-01-03 23:33] Peter Eisentraut said:\n> | > Brent Verner wrote:\n> | > > [2002-01-03 14:19] Bruce Momjian said:\n> | > > | \n> | > > | Actually, we can just do:\n> \n> | > > +1\n> | > > \n> | > > I can't see a reason to /not/ fix something this simple for the 7.2 \n> | > > release. In general, I think it's best to fix things like this[1]\n> | > > \"on sight\" as opposed to queueing them in TODO where they /might/ sit\n> | > > untouched through another release cycle.\n> | > > \n> | > > [1] meaning problems that require little effort to fix, and whose\n> | > > solutions are /very/ localized.\n> | > \n> | > OK, one more +1 and I will get to it.\n> | \n> | -4\n\nOK, here's the patch. Seems createdb wasn't properly handling the db\ncomment (arg code now similar to createlang), createlang dbname being\noptional wasn't documented in --help, and vacuumdb wasn't handlling an\noptional dbname. I added the required checks so extra arguments report\na failure:\n\t\n\t$ dropdb lkjas asdfljk test\n\tdropdb: invalid option: asdfljk\n\tTry 'dropdb --help' for more information.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: doc/src/sgml/ref/vacuumdb.sgml\n===================================================================\nRCS file: /cvsroot/pgsql/doc/src/sgml/ref/vacuumdb.sgml,v\nretrieving revision 1.20\ndiff -c -r1.20 vacuumdb.sgml\n*** doc/src/sgml/ref/vacuumdb.sgml\t2001/12/08 03:24:40\t1.20\n--- doc/src/sgml/ref/vacuumdb.sgml\t2002/01/04 05:23:04\n***************\n*** 23,33 ****\n <cmdsynopsis>\n <command>vacuumdb</command>\n <arg rep=\"repeat\"><replaceable>connection-options</replaceable></arg>\n! 
<arg><arg>-d</arg> <replaceable>dbname</replaceable></arg>\n <group><arg>--full</arg><arg>-f</arg></group>\n <group><arg>--verbose</arg><arg>-v</arg></group>\n <group><arg>--analyze</arg><arg>-z</arg></group>\n! <arg>--table '<replaceable>table</replaceable>\n <arg>( <replaceable class=\"parameter\">column</replaceable> [,...] )</arg>'\n </arg>\n <sbr>\n--- 23,33 ----\n <cmdsynopsis>\n <command>vacuumdb</command>\n <arg rep=\"repeat\"><replaceable>connection-options</replaceable></arg>\n! <arg><replaceable>dbname</replaceable></arg>\n <group><arg>--full</arg><arg>-f</arg></group>\n <group><arg>--verbose</arg><arg>-v</arg></group>\n <group><arg>--analyze</arg><arg>-z</arg></group>\n! <arg>--table | -t '<replaceable>table</replaceable>\n <arg>( <replaceable class=\"parameter\">column</replaceable> [,...] )</arg>'\n </arg>\n <sbr>\nIndex: src/bin/scripts/createdb\n===================================================================\nRCS file: /cvsroot/pgsql/src/bin/scripts/createdb,v\nretrieving revision 1.18\ndiff -c -r1.18 createdb\n*** src/bin/scripts/createdb\t2001/09/30 22:17:51\t1.18\n--- src/bin/scripts/createdb\t2002/01/04 05:23:05\n***************\n*** 104,114 ****\n \t\texit 1\n \t\t;;\n \t*)\n! \t\tif [ -z \"$dbname\" ]; then\n! \t\t\tdbname=\"$1\"\n! \t\telse\n \t\t\tdbcomment=\"$1\"\n \t\tfi\n \t\t;;\n esac\n shift\n--- 104,120 ----\n \t\texit 1\n \t\t;;\n \t*)\n! \t\tdbname=\"$1\"\n! \t\tif [ \"$2\" ]\n! \t\tthen\n! \t\t\tshift\n \t\t\tdbcomment=\"$1\"\n \t\tfi\n+ \t\tif [ \"$#\" -ne 1 ]; then\n+ \t\t\techo \"$CMDNAME: invalid option: $2\" 1>&2\n+ \t echo \"Try '$CMDNAME --help' for more information.\" 1>&2\n+ \t\t\texit 1\n+ \t\tfi\n \t\t;;\n esac\n shift\n***************\n*** 118,124 ****\n echo \"$CMDNAME creates a PostgreSQL database.\"\n echo\n \techo \"Usage:\"\n! 
echo \" $CMDNAME [options] dbname [description]\"\n echo\n \techo \"Options:\"\n \techo \" -D, --location=PATH Alternative place to store the database\"\n--- 124,130 ----\n echo \"$CMDNAME creates a PostgreSQL database.\"\n echo\n \techo \"Usage:\"\n! echo \" $CMDNAME [options] [dbname] [description]\"\n echo\n \techo \"Options:\"\n \techo \" -D, --location=PATH Alternative place to store the database\"\nIndex: src/bin/scripts/createlang.sh\n===================================================================\nRCS file: /cvsroot/pgsql/src/bin/scripts/createlang.sh,v\nretrieving revision 1.32\ndiff -c -r1.32 createlang.sh\n*** src/bin/scripts/createlang.sh\t2002/01/03 05:30:04\t1.32\n--- src/bin/scripts/createlang.sh\t2002/01/04 05:23:05\n***************\n*** 116,121 ****\n--- 116,126 ----\n \t\t\tfi\n \t\telse\tdbname=\"$1\"\n \t\tfi\n+ \t\tif [ \"$#\" -ne 1 ]; then\n+ \t\t\techo \"$CMDNAME: invalid option: $2\" 1>&2\n+ \t echo \"Try '$CMDNAME --help' for more information.\" 1>&2\n+ \t\t\texit 1\n+ \t\tfi\n ;;\n esac\n shift\nIndex: src/bin/scripts/createuser\n===================================================================\nRCS file: /cvsroot/pgsql/src/bin/scripts/createuser,v\nretrieving revision 1.22\ndiff -c -r1.22 createuser\n*** src/bin/scripts/createuser\t2001/09/30 22:17:51\t1.22\n--- src/bin/scripts/createuser\t2002/01/04 05:23:05\n***************\n*** 123,128 ****\n--- 123,133 ----\n \t\t;;\n *)\n \t\tNewUser=\"$1\"\n+ \t\tif [ \"$#\" -ne 1 ]; then\n+ \t\t\techo \"$CMDNAME: invalid option: $2\" 1>&2\n+ \t echo \"Try '$CMDNAME --help' for more information.\" 1>&2\n+ \t\t\texit 1\n+ \t\tfi\n \t\t;;\n esac\n shift;\nIndex: src/bin/scripts/dropdb\n===================================================================\nRCS file: /cvsroot/pgsql/src/bin/scripts/dropdb,v\nretrieving revision 1.13\ndiff -c -r1.13 dropdb\n*** src/bin/scripts/dropdb\t2001/09/30 22:17:51\t1.13\n--- src/bin/scripts/dropdb\t2002/01/04 05:23:05\n***************\n*** 89,94 ****\n--- 89,99 
----\n \t\t;;\n \t *)\n \t\tdbname=\"$1\"\n+ \t\tif [ \"$#\" -ne 1 ]; then\n+ \t\t\techo \"$CMDNAME: invalid option: $2\" 1>&2\n+ \t echo \"Try '$CMDNAME --help' for more information.\" 1>&2\n+ \t\t\texit 1\n+ \t\tfi\n \t\t;;\n esac\n shift\nIndex: src/bin/scripts/droplang\n===================================================================\nRCS file: /cvsroot/pgsql/src/bin/scripts/droplang,v\nretrieving revision 1.20\ndiff -c -r1.20 droplang\n*** src/bin/scripts/droplang\t2002/01/03 08:53:00\t1.20\n--- src/bin/scripts/droplang\t2002/01/04 05:23:05\n***************\n*** 105,110 ****\n--- 105,115 ----\n \t\t\tfi\n \t\telse\tdbname=\"$1\"\n \t\tfi\n+ \t\tif [ \"$#\" -ne 1 ]; then\n+ \t\t\techo \"$CMDNAME: invalid option: $2\" 1>&2\n+ \t echo \"Try '$CMDNAME --help' for more information.\" 1>&2\n+ \t\t\texit 1\n+ \t\tfi\n ;;\n esac\n shift\nIndex: src/bin/scripts/dropuser\n===================================================================\nRCS file: /cvsroot/pgsql/src/bin/scripts/dropuser,v\nretrieving revision 1.14\ndiff -c -r1.14 dropuser\n*** src/bin/scripts/dropuser\t2001/09/30 22:17:51\t1.14\n--- src/bin/scripts/dropuser\t2002/01/04 05:23:05\n***************\n*** 91,96 ****\n--- 91,101 ----\n \t\t;;\n *)\n \t\tDelUser=\"$1\"\n+ \t\tif [ \"$#\" -ne 1 ]; then\n+ \t\t\techo \"$CMDNAME: invalid option: $2\" 1>&2\n+ \t echo \"Try '$CMDNAME --help' for more information.\" 1>&2\n+ \t\t\texit 1\n+ \t\tfi\n \t\t;;\n esac\n shift;\nIndex: src/bin/scripts/vacuumdb\n===================================================================\nRCS file: /cvsroot/pgsql/src/bin/scripts/vacuumdb,v\nretrieving revision 1.19\ndiff -c -r1.19 vacuumdb\n*** src/bin/scripts/vacuumdb\t2001/09/30 22:17:51\t1.19\n--- src/bin/scripts/vacuumdb\t2002/01/04 05:23:05\n***************\n*** 112,117 ****\n--- 112,122 ----\n \t\t;;\n \t*)\n \t\tdbname=\"$1\"\n+ \t\tif [ \"$#\" -ne 1 ]; then\n+ \t\t\techo \"$CMDNAME: invalid option: $2\" 1>&2\n+ \t echo \"Try '$CMDNAME --help' for more information.\" 
1>&2\n+ \t\t\texit 1\n+ \t\tfi\n \t\t;;\n esac\n shift\n***************\n*** 151,159 ****\n \tdbname=`${PATHNAME}psql $PSQLOPT -q -t -A -d template1 -c 'SELECT datname FROM pg_database WHERE datallowconn'`\n \n elif [ -z \"$dbname\" ]; then\n! \techo \"$CMDNAME: missing required argument: database name\" 1>&2\n! echo \"Try '$CMDNAME -?' for help.\" 1>&2\n! \texit 1\n fi\n \n for db in $dbname\n--- 156,167 ----\n \tdbname=`${PATHNAME}psql $PSQLOPT -q -t -A -d template1 -c 'SELECT datname FROM pg_database WHERE datallowconn'`\n \n elif [ -z \"$dbname\" ]; then\n! if [ \"$PGUSER\" ]; then\n! dbname=\"$PGUSER\"\n! else\n! dbname=`${PATHNAME}pg_id -u -n`\n! fi\n! [ \"$?\" -ne 0 ] && exit 1\n fi\n \n for db in $dbname",
"msg_date": "Fri, 4 Jan 2002 00:34:53 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: More problem with scripts"
},
{
"msg_contents": "[2002-01-04 00:34] Bruce Momjian said:\n| Brent Verner wrote:\n| > [2002-01-03 23:33] Peter Eisentraut said:\n| > | > Brent Verner wrote:\n| > | > > [2002-01-03 14:19] Bruce Momjian said:\n| > | > > | \n| > | > > | Actually, we can just do:\n| > \n| > | > > +1\n| > | > > \n| > | > > I can't see a reason to /not/ fix something this simple for the 7.2 \n| > | > > release. In general, I think it's best to fix things like this[1]\n| > | > > \"on sight\" as opposed to queueing them in TODO where they /might/ sit\n| > | > > untouched through another release cycle.\n| > | > > \n| > | > > [1] meaning problems that require little effort to fix, and whose\n| > | > > solutions are /very/ localized.\n| > | > \n| > | > OK, one more +1 and I will get to it.\n| > | \n| > | -4\n| \n| OK, here's the patch. Seems createdb wasn't properly handling the db\n| comment (arg code now similar to createlang), createlang dbname being\n| optional wasn't documented in --help, and vacuumdb wasn't handlling an\n| optional dbname. I added the required checks so extra arguments report\n| a failure:\n| \t\n| \t$ dropdb lkjas asdfljk test\n| \tdropdb: invalid option: asdfljk\n| \tTry 'dropdb --help' for more information.\n\nThis looks good to me.\n\nthanks.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n",
"msg_date": "Fri, 4 Jan 2002 16:59:50 -0500",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": false,
"msg_subject": "Re: More problem with scripts"
},
{
"msg_contents": "\nPeter is against this patch, and he has done lots of work on the\nscripts, so I will keep my patch for 7.3.\n\n---------------------------------------------------------------------------\n\nBrent Verner wrote:\n> [2002-01-03 23:33] Peter Eisentraut said:\n> | > Brent Verner wrote:\n> | > > [2002-01-03 14:19] Bruce Momjian said:\n> | > > | \n> | > > | Actually, we can just do:\n> \n> | > > +1\n> | > > \n> | > > I can't see a reason to /not/ fix something this simple for the 7.2 \n> | > > release. In general, I think it's best to fix things like this[1]\n> | > > \"on sight\" as opposed to queueing them in TODO where they /might/ sit\n> | > > untouched through another release cycle.\n> | > > \n> | > > [1] meaning problems that require little effort to fix, and whose\n> | > > solutions are /very/ localized.\n> | > \n> | > OK, one more +1 and I will get to it.\n> | \n> | -4\n> \n> Hey, no ballot stuffing ;-)\n> \n> | 1: It's not a regression from 7.1. Anything else is too late.\n> \n> IMO, this is not a valid reason to /not/ fix _this problem_. Taken \n> to an extreme, this reasoning would allow any number of fatal bugs to\n> remain in 7.2 because they also existed in 7.1. Given the imminent\n> release, this position is valid even for a number of non-fatal bugs, \n> but not this one.\n> \n> | 2: The issue does not cause problems if you stick to the documented syntax\n> | and it\n> | does not cause hazard if you don't.\n> \n> First half correct. It is arguable whether or not the following\n> would be considered a hazard or not. 
Is unexpected behavior \n> hazardous?\n> \n> $ createdb whatiwanted whatigot\n> \n> | 3: The patch is wrong, because showing the usage screen in case of an error\n> | is\n> | inappropriate.\n> \n> Perhaps a more suitable message similar to the \"invalid option\" error\n> should be printed prior to exit, but this is not as important as \n> enforcing proper/documented use of the script(s).\n> \n> cheers.\n> brent\n> \n> -- \n> \"Develop your talent, man, and leave the world something. Records are \n> really gifts from people. To think that an artist would love you enough\n> to share his music with anyone is a beautiful thing.\" -- Duane Allman\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 18 Jan 2002 15:59:46 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: More problem with scripts"
}
] |
[
{
"msg_contents": "> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Seems nested transactions are not required if we load\n> > > each COPY line in its own transaction, like we do with\n> > > INSERT from pg_dump.\n> > \n> > I don't think that's an acceptable answer. Consider\n> \n> Oh, very good point. \"Requires nested transactions\" added to TODO.\n\nAlso add performance issue with per-line-commit...\n\nAlso-II - there is more common name for required feature - savepoints.\n\nVadim\n",
"msg_date": "Thu, 3 Jan 2002 13:11:34 -0800 ",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "Re: Bulkloading using COPY - ignore duplicates?"
},
{
"msg_contents": "Mikheev, Vadim wrote:\n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > Seems nested transactions are not required if we load\n> > > > each COPY line in its own transaction, like we do with\n> > > > INSERT from pg_dump.\n> > > \n> > > I don't think that's an acceptable answer. Consider\n> > \n> > Oh, very good point. \"Requires nested transactions\" added to TODO.\n> \n> Also add performance issue with per-line-commit...\n> \n> Also-II - there is more common name for required feature - savepoints.\n\nOK, updated TODO to prefer savepoints term.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jan 2002 16:15:04 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bulkloading using COPY - ignore duplicates?"
},
{
"msg_contents": ">>>Bruce Momjian said:\n > Mikheev, Vadim wrote:\n > > > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n > > > > > Seems nested transactions are not required if we load\n > > > > > each COPY line in its own transaction, like we do with\n > > > > > INSERT from pg_dump.\n > > > > \n > > > > I don't think that's an acceptable answer. Consider\n > > > \n > > > Oh, very good point. \"Requires nested transactions\" added to TODO.\n > > \n > > Also add performance issue with per-line-commit...\n > > \n > > Also-II - there is more common name for required feature - savepoints.\n > \n > OK, updated TODO to prefer savepoints term.\n\nNow, how about the same functionality for\n\nINSERT into table1 SELECT * from table2 ... WITH ERRORS;\n\nShould allow the insert to complete, even if table1 has unique indexes and we \ntry to insert duplicate rows. Might save LOTS of time in bulkloading scripts \nnot having to do single INSERTs.\n\nGuess all this will be available in 7.3?\n\nDaniel\n\n",
"msg_date": "Fri, 04 Jan 2002 09:36:01 +0200",
"msg_from": "Daniel Kalchev <daniel@digsys.bg>",
"msg_from_op": false,
"msg_subject": "Re: Bulkloading using COPY - ignore duplicates? "
},
{
"msg_contents": "> Now, how about the same functionality for\n>\n> INSERT into table1 SELECT * from table2 ... WITH ERRORS;\n>\n> Should allow the insert to complete, even if table1 has unique indexes and\nwe\n> try to insert duplicate rows. Might save LOTS of time in bulkloading\nscripts\n> not having to do single INSERTs.\n\n1. I prefer Oracle' (and others, I believe) way - put statement(s) in PL\nblock and define\nfor what exceptions (errors) what actions should be taken (ie IGNORE for\nNON_UNIQ_KEY\nerror, etc).\n\n2. For INSERT ... SELECT statement one can put DISTINCT in select' target\nlist.\n\n> Guess all this will be available in 7.3?\n\nWe'll see.\n\nVadim\n\n\n",
"msg_date": "Thu, 3 Jan 2002 23:47:36 -0800",
"msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>",
"msg_from_op": false,
"msg_subject": "Re: Bulkloading using COPY - ignore duplicates? "
},
{
"msg_contents": ">>>\"Vadim Mikheev\" said:\n > 1. I prefer Oracle' (and others, I believe) way - put statement(s) in PL\n > block and define\n > for what exceptions (errors) what actions should be taken (ie IGNORE for\n > NON_UNIQ_KEY\n > error, etc).\n\nSome people prefer 'pure' SQL. Anyway, it can be argued which is worse - the \nusage of non-SQL language, or usage of extended SQL language. I guess the SQL \nstandard does not provide for such functionality?\n\n > 2. For INSERT ... SELECT statement one can put DISTINCT in select' target\n > list.\n\nWith this construct, you are effectively copying rows from one table to \nanother - or constructing rows from various sources (constants, other tables \netc) and inserting these in the table. If the target table has unique indexes \n(or constraints), and some of the rows returned by SELECT violate the \nrestrictions - you are supposed to get errors - and unfortunately the entire \nINSERT is aborted. I fail to see how DISTINCT can help here... Perhaps it is \npossible to include checking for already existing tuples in the destination \ntable in the select... but this will significantly increase the runtime, \nespecially when the destination table is huge.\n\nMy idea is to let this INSERT statement insert as much of its rows as \npossible, eventually returning NOTICEs or ignoring the errors (with an IGNORE \nERRORS syntax for example :)\n\nI believe all this functionality will have to consider the syntax firts.\n\nDaniel\n\n",
"msg_date": "Fri, 04 Jan 2002 10:07:21 +0200",
"msg_from": "Daniel Kalchev <daniel@digsys.bg>",
"msg_from_op": false,
"msg_subject": "Re: Bulkloading using COPY - ignore duplicates? "
},
{
"msg_contents": "Vadim Mikheev wrote:\n> > Now, how about the same functionality for\n> >\n> > INSERT into table1 SELECT * from table2 ... WITH ERRORS;\n> >\n> > Should allow the insert to complete, even if table1 has unique indexes and\n> we\n> > try to insert duplicate rows. Might save LOTS of time in bulkloading\n> scripts\n> > not having to do single INSERTs.\n> \n> 1. I prefer Oracle' (and others, I believe) way - put statement(s) in PL\n> block and define\n> for what exceptions (errors) what actions should be taken (ie IGNORE for\n> NON_UNIQ_KEY\n> error, etc).\n\nAdded to TODO:\n\n\t* Allow command blocks that can ignore certain types of errors\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 4 Jan 2002 13:16:46 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bulkloading using COPY - ignore duplicates?"
}
] |
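Editor's note: the per-row savepoint technique the thread above converges on (wrap each inserted row in a savepoint so that a duplicate-key error rolls back only that row, rather than aborting the whole bulk load) can be sketched with the Python standard library's sqlite3 module, since SQLite also supports SAVEPOINT. This is an illustrative sketch only; the table `t`, the helper name, and the sample rows are hypothetical and not taken from the thread.

```python
import sqlite3

def bulk_load_ignore_duplicates(conn, rows):
    # Wrap each row in its own savepoint: a duplicate-key error rolls
    # back only that row, so the surrounding transaction survives.
    cur = conn.cursor()
    loaded, skipped = 0, 0
    cur.execute("BEGIN")
    for key, val in rows:
        cur.execute("SAVEPOINT row_sp")
        try:
            cur.execute("INSERT INTO t (id, v) VALUES (?, ?)", (key, val))
            cur.execute("RELEASE SAVEPOINT row_sp")
            loaded += 1
        except sqlite3.IntegrityError:
            # Undo just this row and continue with the next one.
            cur.execute("ROLLBACK TO SAVEPOINT row_sp")
            cur.execute("RELEASE SAVEPOINT row_sp")
            skipped += 1
    cur.execute("COMMIT")
    return loaded, skipped

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage BEGIN/COMMIT ourselves
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
loaded, skipped = bulk_load_ignore_duplicates(
    conn, [(1, "a"), (2, "b"), (1, "dup"), (3, "c")])
print(loaded, skipped)  # prints: 3 1
```

In PostgreSQL the same pattern uses SAVEPOINT / ROLLBACK TO inside a transaction (the "savepoints" feature named in the thread), and much later releases added INSERT ... ON CONFLICT DO NOTHING, which handles the duplicate case server-side.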
[
{
"msg_contents": "Someone on IRC was confused by the contents of postgresql.conf:\n\t\n\t#ifdef USE_ASSERT_CHECKING\n\t#debug_assertions = true\n\t#endif\n\nI have to say I was confused myself. The #ifdef and #end are merely\ncomments, while appears like that syntax is supported by PostgreSQL. I\nthink we should put just plain comments at the top of each section. \nObjections?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jan 2002 16:33:02 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "postgresql.conf syntax"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Someone on IRC was confused by the contents of postgresql.conf:\n> \t\n> \t#ifdef USE_ASSERT_CHECKING\n> \t#debug_assertions = true\n> \t#endif\n> \n> I have to say I was confused myself. The #ifdef and #end are merely\n> comments, while appears like that syntax is supported by PostgreSQL. I\n> think we should put just plain comments at the top of each section. \n> Objections?\n\nOK, I have replaced #ifdef with # requires. Patch attached and applied.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/utils/misc/postgresql.conf.sample\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/utils/misc/postgresql.conf.sample,v\nretrieving revision 1.27\ndiff -c -r1.27 postgresql.conf.sample\n*** src/backend/utils/misc/postgresql.conf.sample\t2001/12/17 19:09:01\t1.27\n--- src/backend/utils/misc/postgresql.conf.sample\t2002/01/04 05:48:47\n***************\n*** 120,138 ****\n #debug_print_plan = false\n #debug_pretty_print = false\n \n! #ifdef USE_ASSERT_CHECKING\n #debug_assertions = true\n- #endif\n \n \n #\n #\tSyslog\n #\n! #ifdef ENABLE_SYSLOG\n #syslog = 0 # range 0-2\n #syslog_facility = 'LOCAL0'\n #syslog_ident = 'postgres'\n- #endif\n \n \n #\n--- 120,136 ----\n #debug_print_plan = false\n #debug_pretty_print = false\n \n! # requires USE_ASSERT_CHECKING\n #debug_assertions = true\n \n \n #\n #\tSyslog\n #\n! # requires ENABLE_SYSLOG\n #syslog = 0 # range 0-2\n #syslog_facility = 'LOCAL0'\n #syslog_ident = 'postgres'\n \n \n #\n***************\n*** 142,150 ****\n #show_planner_stats = false\n #show_executor_stats = false\n #show_query_stats = false\n! 
#ifdef BTREE_BUILD_STATS\n #show_btree_build_stats = false\n- #endif\n \n \n #\n--- 140,148 ----\n #show_planner_stats = false\n #show_executor_stats = false\n #show_query_stats = false\n! \n! # requires BTREE_BUILD_STATS\n #show_btree_build_stats = false\n \n \n #\n***************\n*** 161,174 ****\n #\tLock Tracing\n #\n #trace_notify = false\n! #ifdef LOCK_DEBUG\n #trace_locks = false\n #trace_userlocks = false\n #trace_lwlocks = false\n #debug_deadlocks = false\n #trace_lock_oidmin = 16384\n #trace_lock_table = 0\n- #endif\n \n \n #\n--- 159,172 ----\n #\tLock Tracing\n #\n #trace_notify = false\n! \n! # requires LOCK_DEBUG\n #trace_locks = false\n #trace_userlocks = false\n #trace_lwlocks = false\n #debug_deadlocks = false\n #trace_lock_oidmin = 16384\n #trace_lock_table = 0\n \n \n #",
"msg_date": "Fri, 4 Jan 2002 00:50:12 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: postgresql.conf syntax"
}
] |