[
{
"msg_contents": "Here are a few open concerns about pg_dump:\n\nCritical:\n\n* pg_dumpall is not compatible with pre-7.3. It used to be ignorant but\nnow that it has extra columns in pg_database and pg_user to take care of\nit will break with older releases. This should be straightforward to fix\nfor me (I hope) within the next few days.\n\n* pg_dumpall doesn't know about the new database-level privileges, yet.\n\nNon-critical:\n\n* The pg_dumpall documentation contains this:\n\n| -c, --clean\n|\n| Include SQL commands to clean (drop) database objects before\n| recreating them. (This option is fairly useless, since the output script\n| expects to create the databases themselves; they would always be empty\n| upon creation.)\n\npg_dumpall processes this option itself and puts out DROP DATABASE\ncommands for each database dumped, which seems to be a reasonable feature.\nPerhaps the option should not be passed through to pg_dump (where it is\nuseless) and the documentation should be changed to reflect that.\n\n* The --ignore-version description says that pg_dump only works with\nservers of the same release. Nowadays we take great care to make it\nbackward compatible, so the documentation should be changed if we want to\npublicize that.\n\n* The \"disable trigger\" feature currently puts out code like this:\n\n-- Disable triggers\nUPDATE pg_catalog.pg_class SET reltriggers = 0 WHERE oid = 'char_tbl'::pg_catalog.regclass;\n\nCOPY char_tbl (f1) FROM stdin;\na\nab\nabcd\nabcd\n\\.\n\n-- Enable triggers\nUPDATE pg_catalog.pg_class SET reltriggers = (SELECT pg_catalog.count(*) FROM pg_catalog.pg_trigger where pg_class.oid = tgrelid) WHERE oid = 'char_tbl'::pg_catalog.regclass;\n\nAs the pg_dump man page correctly advises, this may leave the system\ncatalogs corrupted if the restore is interrupted. I was wondering why we\ndon't do this:\n\nBEGIN;\nUPDATE ...\nCOPY ...\nUPDATE ...\nCOMMIT;\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 5 Sep 2002 00:53:12 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Open pg_dump issues"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I was wondering why we\n> don't do this:\n\n> BEGIN;\n> UPDATE ...\n> COPY ...\n> UPDATE ...\n> COMMIT;\n\nSeems like a good idea.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 04 Sep 2002 21:01:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open pg_dump issues "
}
]
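Tom agreed the transactional form is a good idea. Spelled out against Peter's own `char_tbl` example from this thread, the dump script would come out roughly like this (a sketch of the proposed output, not what any pg_dump release actually emits):

```sql
BEGIN;

-- Disable triggers
UPDATE pg_catalog.pg_class SET reltriggers = 0
    WHERE oid = 'char_tbl'::pg_catalog.regclass;

COPY char_tbl (f1) FROM stdin;
a
ab
abcd
\.

-- Enable triggers
UPDATE pg_catalog.pg_class
    SET reltriggers = (SELECT pg_catalog.count(*)
                       FROM pg_catalog.pg_trigger
                       WHERE pg_class.oid = tgrelid)
    WHERE oid = 'char_tbl'::pg_catalog.regclass;

COMMIT;
```

The point of the wrapper: if the restore dies between the two UPDATEs, the whole transaction rolls back, so `reltriggers` can never be left stuck at 0 in the catalogs.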
[
{
"msg_contents": "Hannu Krosing wrote:\n> On Thu, 2002-09-05 at 03:17, Neil Conway wrote:\n> > \n> > Tom did some work on this as well as Chris, I believe:\n> > \n> > - Add ALTER TABLE DROP COLUMN (Christopher)\n> \n> IIRC, some of it was originally based on Hiroshi's earlyer trial code,\n> so he should probably be mentioned as well ?\n\nYes, absolutely:\n\n\tAdd ALTER TABLE DROP COLUMN (Christopher, Hiroshi)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 4 Sep 2002 19:00:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Beta1 schedule"
},
{
"msg_contents": "> Hannu Krosing wrote:\n> > On Thu, 2002-09-05 at 03:17, Neil Conway wrote:\n> > >\n> > > Tom did some work on this as well as Chris, I believe:\n> > >\n> > > - Add ALTER TABLE DROP COLUMN (Christopher)\n> >\n> > IIRC, some of it was originally based on Hiroshi's earlyer trial code,\n> > so he should probably be mentioned as well ?\n>\n> Yes, absolutely:\n>\n> \tAdd ALTER TABLE DROP COLUMN (Christopher, Hiroshi)\n\nWhile we're competing for the humble award, you might want to add Tom to\nthat list...\n\nChris\n\n",
"msg_date": "Thu, 5 Sep 2002 11:27:08 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Beta1 schedule"
},
{
"msg_contents": "It would be far simpler to put each of the core teams names on the top\nof the history file in big bold letters -- or perhaps a watermark in the\nbackground ;)\n\n> While we're competing for the humble award, you might want to add Tom to\n> that list...\n\n\n",
"msg_date": "04 Sep 2002 23:43:04 -0400",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Beta1 schedule"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> > Hannu Krosing wrote:\n> > > On Thu, 2002-09-05 at 03:17, Neil Conway wrote:\n> > > >\n> > > > Tom did some work on this as well as Chris, I believe:\n> > > >\n> > > > - Add ALTER TABLE DROP COLUMN (Christopher)\n> > >\n> > > IIRC, some of it was originally based on Hiroshi's earlyer trial code,\n> > > so he should probably be mentioned as well ?\n> >\n> > Yes, absolutely:\n> >\n> > \tAdd ALTER TABLE DROP COLUMN (Christopher, Hiroshi)\n> \n> While we're competing for the humble award, you might want to add Tom to\n> that list...\n\nAlready done, with Hiroshi too:\n\n\tAdd ALTER TABLE DROP COLUMN (Christopher, Tom, Hiroshi)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 5 Sep 2002 00:19:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Beta1 schedule"
}
]
[
{
"msg_contents": "A long time ago you mentioned in passing that postgres.h should be\nincluded before including any system headers. I have been desultorily\nchanging files to meet that rule, but AFAIK no one's made a pass to\nensure that it's followed everywhere.\n\nWell, now we have a reason it had better be that way: largefile support\nwill break otherwise. Since we've arranged to define stuff like\n_FILE_OFFSET_BITS in pg_config.h which is included by postgres.h, it\nis *critical* to read postgres.h before reading any system headers.\nOtherwise you are likely to see non-64-bit-aware definitions of library\nroutines, FILE structs, etc.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 04 Sep 2002 19:40:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Largefile fallout: postgres.h MUST be included first"
}
]
[
{
"msg_contents": "Can someone maybe do a bit of a 'wc' on the cvs logs to see how much we've\nchanged between 7.2 - 7.3 compared to 7.1 - 7.2? It's evident that the\nHISTORY file shows many more changes in this release than the previous, and\nI think it'd be interesting to know how much/how fast postgres is gaining\nmomentum, what the developer appearance and attrition rate is, etc.\n\nJust interesting...\n\nChris\n\n",
"msg_date": "Thu, 5 Sep 2002 11:20:30 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "7.2 - 7.3 activity"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> Can someone maybe do a bit of a 'wc' on the cvs logs to see how much we've\n> changed between 7.2 - 7.3 compared to 7.1 - 7.2? It's evident that the\n> HISTORY file shows many more changes in this release than the previous, and\n> I think it'd be interesting to know how much/how fast postgres is gaining\n> momentum, what the developer appearance and attrition rate is, etc.\n\nGood question. As far as lines of *.[chy] code in pgsql/src, you have:\n\t\n\t Date | Release | Lines of code \n\t--------------+----------+----------------\n\t 1994 | | 244,581 \n\t 1996-08-01 | 1.02.1 | \n\t 1996-10-27 | 1.09 | 178,976 \n\t 1997-01-29 | 6.0 | \n\t 1997-06-08 | 6.1 | 200,709 \n\t 1997-10-02 | 6.2 | 225,848 \n\t 1998-03-01 | 6.3 | 260,809 \n\t 1998-10-30 | 6.4 | 297,918 \n\t 1999-06-09 | 6.5 | 331,278 \n\t 2000-05-08 | 7.0 | 383,270 \n\t 2001-04-13 | 7.1 | 410,500 \n\t 2002-02-04 | 7.2 | 394,274 \n\t 2002-??-?? | 7.3 | 453,282 \n\nAs you can see, a 15% increase over 7.2. As far as the feature list,\n7.3 has the largest list ever, again about a 15% increase.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 5 Sep 2002 01:45:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 - 7.3 activity"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Good question. As far as lines of *.[chy] code in pgsql/src, you have:\n\t\n> \t Date | Release | Lines of code \n> \t--------------+----------+----------------\n> \t ...\n> \t 2002-02-04 | 7.2 | 394,274 \n> \t 2002-??-?? | 7.3 | 453,282 \n\n> As you can see, a 15% increase over 7.2.\n\nAnd that's despite having removed a goodly amount of code to gborg.\nDo you have an idea how many lines of code were pushed out? You'd\nhave to add them back to get truly comparable numbers.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 05 Sep 2002 09:37:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 - 7.3 activity "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Good question. As far as lines of *.[chy] code in pgsql/src, you have:\n> \t\n> > \t Date | Release | Lines of code \n> > \t--------------+----------+----------------\n> > \t ...\n> > \t 2002-02-04 | 7.2 | 394,274 \n> > \t 2002-??-?? | 7.3 | 453,282 \n> \n> > As you can see, a 15% increase over 7.2.\n> \n> And that's despite having removed a goodly amount of code to gborg.\n> Do you have an idea how many lines of code were pushed out? You'd\n> have to add them back to get truly comparable numbers.\n\nGood point. I see 36k lines move to gborg, which makes the increase\nmore like 25%.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 5 Sep 2002 11:12:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 - 7.3 activity"
},
{
"msg_contents": "On Thu, 2002-09-05 at 20:12, Bruce Momjian wrote:\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Good question. As far as lines of *.[chy] code in pgsql/src, you have:\n> > \t\n> > > \t Date | Release | Lines of code \n> > > \t--------------+----------+----------------\n> > > \t ...\n> > > \t 2002-02-04 | 7.2 | 394,274 \n> > > \t 2002-??-?? | 7.3 | 453,282 \n> > \n> > > As you can see, a 15% increase over 7.2.\n> > \n> > And that's despite having removed a goodly amount of code to gborg.\n> > Do you have an idea how many lines of code were pushed out? You'd\n> > have to add them back to get truly comparable numbers.\n> \n> Good point. I see 36k lines move to gborg, which makes the increase\n> more like 25%.\n\nHas anyone run any speed tests to see how 7.2 and 7.3 compare ?\n\n----------------\nHannu\n\n\n",
"msg_date": "05 Sep 2002 20:27:23 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 - 7.3 activity"
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Good question. As far as lines of *.[chy] code in pgsql/src, you have:\n>\n> > \t Date | Release | Lines of code\n> > \t--------------+----------+----------------\n> > \t ...\n> > \t 2002-02-04 | 7.2 | 394,274\n> > \t 2002-??-?? | 7.3 | 453,282\n>\n> > As you can see, a 15% increase over 7.2.\n>\n> And that's despite having removed a goodly amount of code to gborg.\n> Do you have an idea how many lines of code were pushed out? You'd\n> have to add them back to get truly comparable numbers.\n\nProbably a more accurate assessment would in fact be the number of _changes_\nie. the size of the diffs...\n\nBut really hard to figure out of course!\n\nChris\n\n",
"msg_date": "Fri, 6 Sep 2002 09:24:17 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: 7.2 - 7.3 activity "
},
{
"msg_contents": "On 05 Sep 2002 20:27:23 +0500, Hannu Krosing <hannu@tm.ee> wrote:\n>Has anyone run any speed tests to see how 7.2 and 7.3 compare ?\n\nRunning a modified OSDB (CREATE TABLE ... WITHOUT OIDS) with 400 MB\ndata on a Pentium III 1 GHz, 382 MB RAM, 7200 rpm IBM 14 GB HD under\nLinux, this is what I got so far:\n\nTestname Wo8 Old8wo 721b\nNr 17 18 19\nTest MB 400 400 400\nSystem mem 382 382 382\n\nTuple header small large large\nWITH / WITHOUT OIDS WITHOUT WITHOUT WITHOUT\n\n(populate + single user)\nElapsed hh:mm:ss 05:04:36 06:54:18 06:27:26\nUser mm:ss.00 00:19.32 00:19.61 00:17.72\nSystem mm:ss.00 00:15.97 00:17.37 00:15.90\n\nXlog ...5B ...5B ...5A\nSize KB 1,038,564 1,070,656 1,069,652\nCTIME postmaster mmm:ss 284:22 391:44 363:09\nUpdates 2,009 2,009 2,009\n VAC, ANA VAC, ANA VAC, ANA *1\n\n(multi user) *2\nElapsed hh:mm:ss 31:34:17 51:33:54\nUser mm:ss.00 130:31.22 222:08.98\nSystem mm:ss.00 92:59.18 159:24.81\n\nXlog 5B...1,8F 5B...2,21\nSize KB 1,143,080 1,193,536\nCTIME postmaster mmm:ss 1640:00 2680:00\nUpdates 243,390 341,233\n\n\ncreate_tables() 0.08 0.08 0.06\nload() 633.30 681.91 725.79\ncreate_idx_uniques_key_bt() 320.90 344.45 305.63\ncreate_idx_updates_key_bt() 321.23 351.97 327.52\ncreate_idx_hundred_key_bt() 319.26 349.17 327.87\ncreate_idx_tenpct_key_bt() 318.78 349.05 326.82\ncreate_idx_tenpct_key_code_bt() 65.40 94.34 70.69\ncreate_idx_tiny_key_bt() 3.15 0.10 4.69\ncreate_idx_tenpct_int_bt() 23.44 27.04 21.60\ncreate_idx_tenpct_signed_bt() 25.16 25.81 25.45\ncreate_idx_uniques_code_h() 118.48 138.47 122.57\ncreate_idx_tenpct_double_bt() 32.03 29.78 29.49\ncreate_idx_updates_decim_bt() 130.92 149.37 136.27\ncreate_idx_tenpct_float_bt() 28.71 29.66 28.88\ncreate_idx_updates_int_bt() 55.05 62.62 56.90\ncreate_idx_tenpct_decim_bt() 52.14 54.05 52.41\ncreate_idx_hundred_code_h() 116.09 136.30 122.34\ncreate_idx_tenpct_name_h() 40.91 42.94 39.28\ncreate_idx_updates_code_h() 73.54 81.80 75.48\ncreate_idx_tenpct_code_h() 36.51 37.99 36.17\ncreate_idx_updates_double_bt() 64.02 71.18 67.72\ncreate_idx_hundred_foreign() 135.44 140.54 131.18\nSum 2,914.54 3,198.62 3,034.81\npopulateDataBase() 2,914.69 3,195.71 3,034.89\n\nsel_1_cl() 0.09 0.07 0.08\njoin_3_cl() 0.10 0.10 0.10\nsel_100_ncl() 2.60 2.62 2.53\ntable_scan() 36.72 41.32 37.74\nagg_func() 100.06 137.39 113.70\nagg_scal() 37.93 41.68 37.69\nsel_100_cl() 2.59 29.53 2.54\njoin_3_ncl() 231.39 234.77 239.32\nsel_10pct_ncl() 51.50 20.68 133.47\nagg_simple_report() 8,734.76 14,222.07 13,132.75\nagg_info_retrieval() 46.03 133.41 131.11\nagg_create_view() 0.69 0.67 0.47\nagg_subtotal_report() 98.67 146.69 87.07\nagg_total_report() 94.19 132.59 120.86\njoin_2_cl() 0.12 0.11 0.08\njoin_2() 96.67 108.61 101.16\nsel_variable_select_low() 21.92 35.75 20.35\nsel_variable_select_high() 30.12 29.57 28.33\njoin_4_cl() 0.02 0.01 0.01\nproj_100() 100.81 144.07 114.83\njoin_4_ncl() 282.74 368.88 315.62\nproj_10pct() 109.96 144.27 124.76\nsel_1_ncl() 0.14 0.09 0.07\njoin_2_ncl() 94.76 113.95 105.70\nintegrity_test() 5.61 6.00 5.60\ndrop_updates_keys() 0.38 0.36 0.48\nbulk_save() 0.25 0.30 0.26\nbulk_modify() 2,464.31 2,647.28 2,552.16\nupd_append_duplicate() 0.11 0.13 0.12\nupd_remove_duplicate() 0.00 0.00 0.00\nupd_app_t_mid() 0.01 0.01 0.01\nupd_mod_t_mid() 2.46 2.84 2.58\nupd_del_t_mid() 2.48 2.83 2.55\nupd_app_t_end() 0.04 0.04 0.04\nupd_mod_t_end() 2.46 2.84 2.57\nupd_del_t_end() 2.47 2.84 2.54\ncreate_idx_updates_code_h() 73.84 83.08 75.63\nupd_app_t_mid() 0.10 0.11 0.10\nupd_mod_t_cod() 0.00 0.01 0.00\nupd_del_t_mid() 2.48 2.82 2.56\ncreate_idx_updates_int_bt() 54.50 61.96 57.03\nupd_app_t_mid() 0.10 0.09 0.12\nupd_mod_t_int() 0.00 0.01 0.01\nupd_del_t_mid() 2.52 2.91 2.57\nbulk_append() 11.88 19.01 11.49\nbulk_delete() 2,513.29 2,685.69 2,593.61\nSum 15,313.87 21,610.06 20,162.37\nSingle User Test 15,313.88 21,610.08 20,162.40\n\nMixed IR (tup/sec) 101.32 98.97 *2\nsel_1_ncl() 0.08 0.09\nagg_simple_report() 98,086.15 162,492.57\nmu_sel_100_seq() 0.89 0.81\nmu_sel_100_rand() 0.22 0.20\nmu_mod_100_seq_abort() 2.82 3.27\nmu_mod_100_rand() 0.18 0.21\nmu_unmod_100_seq() 0.26 0.45\nmu_unmod_100_rand() 0.38 0.33\n 98,090.98 162,497.93\ncrossSectionTests\n (Mixed IR) 98,090.98 162,497.93\n\nmu_checkmod_100_seq() 0.15 0.15\nmu_checkmod_100_rand() 0.01 0.01\n\nMixed OLTP (tup/sec) 32.03 28.69\nsel_1_ncl() 0.15 0.32\nagg_simple_report() 13,094.84 20,635.31\nmu_sel_100_seq() 15.93 16.88\nmu_sel_100_rand() 1.34 4.18\nmu_mod_100_seq_abort() 10.72 25.82\nmu_mod_100_rand() 3.58 0.61\nmu_unmod_100_seq() 1.75 1.37\nmu_unmod_100_rand() 2.18 0.57\n 13,130.49 20,685.06\ncrossSectionTests\n (Mixed OLTP) 13,130.63 20,685.20\nmu_checkmod_100_seq() 2.39 1.88\nmu_checkmod_100_rand() 0.10 0.64\n\nMulti-User Test 113,649.68 185,610.40 *3\n\n\nwo8 is a CVS snapshot from 2002-07-20. Since then there have been\nNAMEDATALEN and FUNC_MAX_ARGS changes making PG slightly slower (cf.\nJoe Conway's mail \"Re: [HACKERS] FUNC_MAX_ARGS benchmarks\"\n2002-08-06). Are there any other performance-relevant changes?\nAnyway I'll do some tests with 7.3beta1 later.\n\nold8wo is the 2002-08-20 snapshot with the heap tuple header changes\nreversed.\n\n721b is plain 7.2.1.\n\n*1 I did a VACUUM ANALYZE for all user tables before the multi user\ntest.\n\n*2 721b multi user test still running, expected to finish on Friday. \n\n*3 Multi user tests were run with ten users.\n\nDon't trust the numbers returned by the multi user tests. There are at\nleast two problems, both related to the fact that child processes do\nrandom selects/updates as fast as possible:\n\nFirst, with a faster server you have more deleted tuples making the\ntests slower.\n\nSecond, with system RAM significantly smaller than database size the\nrandom selects/updates have to wait for I/O most of the time and the\nmaster process gets almost all the CPU, especially on the longer tests\n(agg_simple_report!). With enough RAM to cache most of the database\nthere is much less I/O and CPU is distributed evenly between all\nprocesses, so the master process gets only a small fraction of CPU\npower.\n\nThe single user tests look plausible to me. Though I did each test\nonly once. So please use with a grain of salt and feel free to\ncomment, if you think there's something wrong.\n\nServus\n Manfred\n",
"msg_date": "Wed, 11 Sep 2002 17:38:21 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 - 7.3 activity"
}
]
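For anyone wanting to reproduce Bruce's table, the count is easy to approximate with find and wc (a sketch; the exact file set and options Bruce used are an assumption on my part). It is shown here against a throwaway tree so it runs anywhere; point `find` at `pgsql/src` of a real checkout for the real numbers.

```shell
# Count lines across all *.c, *.h and *.y sources under a tree,
# the way the "Lines of code" column above was produced.
src=$(mktemp -d)
printf 'int a;\nint b;\n' > "$src/foo.c"   # 2 lines of C
printf 'extern int a;\n'  > "$src/foo.h"   # 1 header line
printf 'statement: ;\n'   > "$src/gram.y"  # 1 yacc line
total=$(find "$src" -name '*.[chy]' -exec cat {} + | wc -l)
echo "$total"
rm -rf "$src"
```

Chris's follow-up point stands, though: a raw line count measures size, not churn; diff size between release tags would be the better activity metric.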
[
{
"msg_contents": "I just tried to build all of contrib, and it stops at earthdistance. \nLooks like this is the cause:\n\n[...]\n\t\tdbmirror\t\\\n\t\tdbsize\t\t\\\n\t\tearthdistance\t\\\n#\t\tfindoidjoins\t\\\n\t\tfulltextindex\t\\\n[...]\n\nThe comment on findoidjoins breaks the line continuation, doesn't it?\n\nJoe\n\n",
"msg_date": "Wed, 04 Sep 2002 20:52:09 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "contrib Makefile"
},
{
"msg_contents": "Joe Conway writes:\n\n> I just tried to build all of contrib, and it stops at earthdistance.\n\nWhoops...\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 5 Sep 2002 20:43:30 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: contrib Makefile"
}
]
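Joe's diagnosis is right: in make, a `#` line cannot safely sit in the middle of a backslash-continued list, because the comment (itself ending in a backslash) swallows the following continuation lines, so the list effectively ends at `earthdistance`. A sketch of the fix (hypothetical variable name; the real contrib Makefile differs) is to move the comment outside the continuation:

```make
# findoidjoins is intentionally not built here; the comment must sit
# outside the backslash-continued list, not in the middle of it.
WANTED_DIRS = \
		dbmirror \
		dbsize \
		earthdistance \
		fulltextindex
```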
[
{
"msg_contents": "I'm also getting a failure on tsearch:\n\nmake[1]: Entering directory `/opt/src/pgsql/contrib/tsearch'\ngcc -O2 -g -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -I. \n-I../../src/include -c -o morph.o morph.c -MMD\nmorph.c: In function `initmorph':\nmorph.c:107: `PG_LocaleCategories' undeclared (first use in this function)\nmorph.c:107: (Each undeclared identifier is reported only once\nmorph.c:107: for each function it appears in.)\nmorph.c:107: parse error before `lc'\nmorph.c:116: warning: implicit declaration of function `PGLC_current'\nmorph.c:116: `lc' undeclared (first use in this function)\nmorph.c:124: warning: implicit declaration of function \n`PGLC_free_categories'\nmake[1]: *** [morph.o] Error 1\nmake[1]: Leaving directory `/opt/src/pgsql/contrib/tsearch'\n\n\nJoe\n\n",
"msg_date": "Wed, 04 Sep 2002 20:54:57 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "more contrib breakage"
},
{
"msg_contents": "Oleg knows about it and is planning to fix it.\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Joe Conway\n> Sent: Thursday, 5 September 2002 11:55 AM\n> To: pgsql-hackers\n> Subject: [HACKERS] more contrib breakage\n>\n>\n> I'm also getting a failure on tsearch:\n>\n> make[1]: Entering directory `/opt/src/pgsql/contrib/tsearch'\n> gcc -O2 -g -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -I.\n> -I../../src/include -c -o morph.o morph.c -MMD\n> morph.c: In function `initmorph':\n> morph.c:107: `PG_LocaleCategories' undeclared (first use in this function)\n> morph.c:107: (Each undeclared identifier is reported only once\n> morph.c:107: for each function it appears in.)\n> morph.c:107: parse error before `lc'\n> morph.c:116: warning: implicit declaration of function `PGLC_current'\n> morph.c:116: `lc' undeclared (first use in this function)\n> morph.c:124: warning: implicit declaration of function\n> `PGLC_free_categories'\n> make[1]: *** [morph.o] Error 1\n> make[1]: Leaving directory `/opt/src/pgsql/contrib/tsearch'\n>\n>\n> Joe\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n",
"msg_date": "Thu, 5 Sep 2002 11:59:39 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: more contrib breakage"
}
]
[
{
"msg_contents": "Is this item completed? It sure looks like it:\n\n\t* Make triggers refer to columns by number, not name\n\ntest=> \\d pg_trigger\n Table \"pg_catalog.pg_trigger\"\n Column | Type | Modifiers \n----------------+------------+-----------\n tgrelid | oid | not null\n tgname | name | not null\n tgfoid | oid | not null\n tgtype | smallint | not null\n tgenabled | boolean | not null\n tgisconstraint | boolean | not null\n tgconstrname | name | not null\n tgconstrrelid | oid | not null\n tgdeferrable | boolean | not null\n tginitdeferred | boolean | not null\n tgnargs | smallint | not null\n tgattr | int2vector | not null\n tgargs | bytea | \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 5 Sep 2002 00:58:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "TODO item on triggers"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Is this item completed? It sure looks like it:\n> \t* Make triggers refer to columns by number, not name\n\nIt is not necessary anymore. The triggers still use names, but there's\ncode in ALTER...RENAME to fix the trigger parameters. I'm perfectly\nhappy with that solution and see no need to do what the TODO item\nsuggests.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 05 Sep 2002 09:17:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TODO item on triggers "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Is this item completed? It sure looks like it:\n> > \t* Make triggers refer to columns by number, not name\n> \n> It is not necessary anymore. The triggers still use names, but there's\n> code in ALTER...RENAME to fix the trigger parameters. I'm perfectly\n> happy with that solution and see no need to do what the TODO item\n> suggests.\n\nOK, item removed, again. ;-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 5 Sep 2002 12:33:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: TODO item on triggers"
}
]
[
{
"msg_contents": "CVSROOT:\t/cvsroot\nModule name:\tpgsql-server\nChanges by:\tmomjian@postgresql.org\t02/09/05 00:58:28\n\nModified files:\n\tdoc : TODO \n\nLog message:\n\tDone:\n\t\n\t> * -Make triggers refer to columns by number, not name\n\n",
"msg_date": "Thu, 5 Sep 2002 00:58:28 -0400 (EDT)",
"msg_from": "momjian@postgresql.org (Bruce Momjian - CVS)",
"msg_from_op": true,
"msg_subject": "pgsql-server/doc TODO"
},
{
"msg_contents": "Hang on - try looking at the tgargs field. I bet it still refers to fields\nby their name, not their number...\n\nChris\n\n> -----Original Message-----\n> From: pgsql-committers-owner@postgresql.org\n> [mailto:pgsql-committers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> - CVS\n> Sent: Thursday, 5 September 2002 12:58 PM\n> To: pgsql-committers@postgresql.org\n> Subject: [COMMITTERS] pgsql-server/doc TODO\n>\n>\n> CVSROOT:\t/cvsroot\n> Module name:\tpgsql-server\n> Changes by:\tmomjian@postgresql.org\t02/09/05 00:58:28\n>\n> Modified files:\n> \tdoc : TODO\n>\n> Log message:\n> \tDone:\n>\n> \t> * -Make triggers refer to columns by number, not name\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n",
"msg_date": "Thu, 5 Sep 2002 13:03:36 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/doc TODO"
},
{
"msg_contents": "\nYep, I see in regression database:\n\n tgargs \n \n---------------------------------------------------------------------------\n clstr_tst_con\\000clstr_tst\\000clstr_tst_s\\000UNSPECIFIED\\000b\\000rf_a\\000\n clstr_tst_con\\000clstr_tst\\000clstr_tst_s\\000UNSPECIFIED\\000b\\000rf_a\\000\n clstr_tst_con\\000clstr_tst\\000clstr_tst_s\\000UNSPECIFIED\\000b\\000rf_a\\000\n clstr_tst_con\\000clstr_tst\\000clstr_tst_s\\000UNSPECIFIED\\000b\\000rf_a\\000\n\nTODO updated to:\n\n * Make pg_trigger.tgargs refer to columns by number, not name \n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> Hang on - try looking at the tgargs field. I bet it still refers to fields\n> by their name, not their number...\n> \n> Chris\n> \n> > -----Original Message-----\n> > From: pgsql-committers-owner@postgresql.org\n> > [mailto:pgsql-committers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> > - CVS\n> > Sent: Thursday, 5 September 2002 12:58 PM\n> > To: pgsql-committers@postgresql.org\n> > Subject: [COMMITTERS] pgsql-server/doc TODO\n> >\n> >\n> > CVSROOT:\t/cvsroot\n> > Module name:\tpgsql-server\n> > Changes by:\tmomjian@postgresql.org\t02/09/05 00:58:28\n> >\n> > Modified files:\n> > \tdoc : TODO\n> >\n> > Log message:\n> > \tDone:\n> >\n> > \t> * -Make triggers refer to columns by number, not name\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 5 Sep 2002 01:10:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/doc TODO"
}
]
[
{
"msg_contents": "Anyone else think we should add some more pins to the developer map? At the\nmoment, it looks like we have very few developers!\n\nChris\n\n",
"msg_date": "Thu, 5 Sep 2002 13:10:26 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Map of developers"
},
{
"msg_contents": "\nI will work on that this month. It is part of the advocacy project.\n\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> Anyone else think we should add some more pins to the developer map? At the\n> moment, it looks like we have very few developers!\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 5 Sep 2002 01:10:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Map of developers"
},
{
"msg_contents": "On Thu, 5 Sep 2002, Christopher Kings-Lynne wrote:\n\n> Anyone else think we should add some more pins to the developer map? At the\n> moment, it looks like we have very few developers!\n\nIf so then now's the time to do it 'cuze I'm planning on generating a new\none as soon as I get this tcl tool working.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 5 Sep 2002 06:22:16 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Map of developers"
},
{
"msg_contents": "On Thu, 5 Sep 2002, Bruce Momjian wrote:\n\n>\n> I will work on that this month. It is part of the advocacy project.\n\nSince when?\n\n\n>\n>\n> ---------------------------------------------------------------------------\n>\n> Christopher Kings-Lynne wrote:\n> > Anyone else think we should add some more pins to the developer map? At the\n> > moment, it looks like we have very few developers!\n> >\n> > Chris\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >\n>\n>\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 5 Sep 2002 06:23:09 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Map of developers"
},
{
"msg_contents": "Vince Vielhaber wrote:\n> On Thu, 5 Sep 2002, Bruce Momjian wrote:\n> \n> >\n> > I will work on that this month. It is part of the advocacy project.\n> \n> Since when?\n\nSince I decide to take over the world. :-)\n\nWhat I meant was that it was on my TODO list as part of advocacy stuff I\nplan for September.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 5 Sep 2002 12:16:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Map of developers"
},
{
"msg_contents": "On Thu, 5 Sep 2002, Bruce Momjian wrote:\n\n> Vince Vielhaber wrote:\n> > On Thu, 5 Sep 2002, Bruce Momjian wrote:\n> >\n> > >\n> > > I will work on that this month. It is part of the advocacy project.\n> >\n> > Since when?\n>\n> Since I decide to take over the world. :-)\n>\n> What I meant was that it was on my TODO list as part of advocacy stuff I\n> plan for September.\n\nRape and pillage is on your todo list? Go find your own content. The\nmap is \"The PostgreSQL Developers\" not \"The PostgreSQL Advocacy Squad\".\nThe last thing we need is the two maps getting out of sync, and no, I\ndon't plan on using yours.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 5 Sep 2002 12:22:14 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Map of developers"
},
{
"msg_contents": "Vince Vielhaber wrote:\n> On Thu, 5 Sep 2002, Bruce Momjian wrote:\n> \n> > Vince Vielhaber wrote:\n> > > On Thu, 5 Sep 2002, Bruce Momjian wrote:\n> > >\n> > > >\n> > > > I will work on that this month. It is part of the advocacy project.\n> > >\n> > > Since when?\n> >\n> > Since I decide to take over the world. :-)\n> >\n> > What I meant was that it was on my TODO list as part of advocacy stuff I\n> > plan for September.\n> \n> Rape and pillage is on your todo list? Go find your own content. The\n> map is \"The PostgreSQL Developers\" not \"The PostgreSQL Advocacy Squad\".\n> The last thing we need is the two maps getting out of sync, and no, I\n> don't plan on using yours.\n\nOK, what I really meant is that I want to make changes to the list of\ndevelopers on the developers page and have you regenerate the map once\nthat is updated. It is related to advocacy.\n\nMaybe I can be just vague enough again to push your buttons. ;-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 5 Sep 2002 12:39:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Map of developers"
},
{
"msg_contents": "On Thu, 5 Sep 2002, Bruce Momjian wrote:\n\n> Vince Vielhaber wrote:\n> > On Thu, 5 Sep 2002, Bruce Momjian wrote:\n> >\n> > > Vince Vielhaber wrote:\n> > > > On Thu, 5 Sep 2002, Bruce Momjian wrote:\n> > > >\n> > > > >\n> > > > > I will work on that this month. It is part of the advocacy project.\n> > > >\n> > > > Since when?\n> > >\n> > > Since I decide to take over the world. :-)\n> > >\n> > > What I meant was that it was on my TODO list as part of advocacy stuff I\n> > > plan for September.\n> >\n> > Rape and pillage is on your todo list? Go find your own content. The\n> > map is \"The PostgreSQL Developers\" not \"The PostgreSQL Advocacy Squad\".\n> > The last thing we need is the two maps getting out of sync, and no, I\n> > don't plan on using yours.\n>\n> OK, what I really meant is that I want to make changes to the list of\n> developers on the developers page and have you regenerate the map once\n> that is updated. It is related to advocacy.\n>\n> Maybe I can be just vague enough again to push your buttons. ;-)\n\nPush 'em all you want, just don't bitch about the result!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 5 Sep 2002 12:59:16 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Map of developers"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> \n> Anyone else think we should add some more pins to the developer map? At the\n> moment, it looks like we have very few developers!\n\nWe might as well refresh that thing a bit. I haven't been to Hamburg\nsince April 2001! Vince already has my ... er ... rather old coordinates\nhere in Massachusetts and a newer photo. \n\nOther pin's that need an update?\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Mon, 09 Sep 2002 11:23:12 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Map of developers"
},
{
"msg_contents": "On Mon, 9 Sep 2002, Jan Wieck wrote:\n\n> Christopher Kings-Lynne wrote:\n> >\n> > Anyone else think we should add some more pins to the developer map? At the\n> > moment, it looks like we have very few developers!\n>\n> We might as well refresh that thing a bit. I haven't been to Hamburg\n> since April 2001! Vince already has my ... er ... rather old coordinates\n> here in Massachusetts and a newer photo.\n>\n> Other pin's that need an update?\n\nStill don't know where Peter's going to be so his pin may end up\nin Dresden. I've had zero success in getting that tcl tool to work\nwhich is the current holdup but I do have all the updates I know of\nthat are needed.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n",
"msg_date": "Mon, 9 Sep 2002 12:49:53 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Map of developers"
},
{
"msg_contents": "Vince Vielhaber wrote:\n> \n> Still don't know where Peter's going to be so his pin may end up\n> in Dresden. I've had zero success in getting that tcl tool to work\n> which is the current holdup but I do have all the updates I know of\n> that are needed.\n\nYou mean that Tcl/Tk application that manages the imagemap for the\npopups? What's the problem with it?\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Mon, 09 Sep 2002 13:32:45 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Map of developers"
}
]
[
{
"msg_contents": "Hi Oleg/Teodor,\n\nI'm sorry to keep posting bugs without patches, but I'm just hoping you guys\nknow the answer faster than I...I know you're busy.\n\nWhat does tsearch have against the word 'herring' (as in the fish). Why is\nit considered a stopword?\n\nAttached is example queries...\n\nChris",
"msg_date": "Thu, 5 Sep 2002 14:35:48 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "contrib/tsearch"
},
{
"msg_contents": "Hmmm...thinking about it, maybe 'herring' is being reduced to 'her' after\nthe stemming process and hence is thought to be a stopword? This is a bug,\nbut how should it be fixed?\n\nAlthough, tests don't support that:\n\nusa=# select food_id, brand,description,ftiidx from food_foods where ftiidx\n## 'himring';\n food_id | brand | description | ftiidx\n---------+-------+-------------+--------\n(0 rows)\nusa=# select food_id, brand,description,ftiidx from food_foods where ftiidx\n## 'hisring';\n food_id | brand | description | ftiidx\n---------+-------+-------------+--------\n(0 rows)\n\nusa=# select food_id, brand,description,ftiidx from food_foods where ftiidx\n## 'hising';\n food_id | brand | description | ftiidx\n---------+-------+-------------+--------\n(0 rows)\n\nusa=# select food_id, brand,description,ftiidx from food_foods where ftiidx\n## 'himing';\n food_id | brand | description | ftiidx\n---------+-------+-------------+--------\n(0 rows)\n\nAll work...?\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Christopher\n> Kings-Lynne\n> Sent: Thursday, 5 September 2002 2:36 PM\n> To: Hackers\n> Subject: [HACKERS] contrib/tsearch\n>\n>\n> Hi Oleg/Teodor,\n>\n> I'm sorry to keep posting bugs without patches, but I'm just\n> hoping you guys\n> know the answer faster than I...I know you're busy.\n>\n> What does tsearch have against the word 'herring' (as in the\n> fish). Why is\n> it considered a stopword?\n>\n> Attached is example queries...\n>\n> Chris\n>\n\n",
"msg_date": "Thu, 5 Sep 2002 14:40:38 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: contrib/tsearch"
},
{
"msg_contents": "On Thu, 5 Sep 2002, Christopher Kings-Lynne wrote:\n\n> Hmmm...thinking about it, maybe 'herring' is being reduced to 'her' after\n> the stemming process and hence is thought to be a stopword? This is a bug,\n> but how should it be fixed?\n>\n\nIt's difficult question how to use stop words. We'll see what we could\ndo. Probably, porter's stemming algorithm has problem here.\n'herring' -> 'her'~'ring'\n(I have a demo of english-russian stemmr, so you can play)\nhttp://intra.astronet.ru/db/lingua/snowball/\nI'll ask Martin Porter if there could be an error stemmer.\nBut I think the problem is in concept of using stop words.\nShould we check for stop words before stemming or after ?\nIn the first case we have to collect all forms of stop-words which is doable\nbut difficult to maintain, in latter - we'll have current problem.\n\nIt's time for beta1 and I'm not sure if we could work on this issue\nright now, but I feel a big pressure from tsearch users :-)\nIf people want to help us why not to work on stop words list including\nall forms ? In any case, we are not native english, so don't expect we'll\ncreate more or less decent list. Programming changes are trivial, probably\nwe'll end for the moment just using compile time option.\nAs always, your patches are welcome !\n\nbtw, you may test your queries much easier:\n\nlist=# select 'herring'::mquery_txt;\nERROR: Your query contained only stopword(s), ignored\nlist=# select 'herring'::query_txt;\n query_txt\n-----------\n 'herring'\n(1 row)\n\n\n\n\n> Although, tests don't support that:\n>\n> usa=# select food_id, brand,description,ftiidx from food_foods where ftiidx\n> ## 'himring';\n> food_id | brand | description | ftiidx\n> ---------+-------+-------------+--------\n> (0 rows)\n> usa=# select food_id, brand,description,ftiidx from food_foods where ftiidx\n> ## 'hisring';\n> food_id | brand | description | ftiidx\n> ---------+-------+-------------+--------\n> (0 rows)\n>\n> usa=# select food_id, brand,description,ftiidx from food_foods where ftiidx\n> ## 'hising';\n> food_id | brand | description | ftiidx\n> ---------+-------+-------------+--------\n> (0 rows)\n>\n> usa=# select food_id, brand,description,ftiidx from food_foods where ftiidx\n> ## 'himing';\n> food_id | brand | description | ftiidx\n> ---------+-------+-------------+--------\n> (0 rows)\n>\n> All work...?\n>\n> Chris\n>\n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Christopher\n> > Kings-Lynne\n> > Sent: Thursday, 5 September 2002 2:36 PM\n> > To: Hackers\n> > Subject: [HACKERS] contrib/tsearch\n> >\n> >\n> > Hi Oleg/Teodor,\n> >\n> > I'm sorry to keep posting bugs without patches, but I'm just\n> > hoping you guys\n> > know the answer faster than I...I know you're busy.\n> >\n> > What does tsearch have against the word 'herring' (as in the\n> > fish). Why is\n> > it considered a stopword?\n> >\n> > Attached is example queries...\n> >\n> > Chris\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n\n",
"msg_date": "Thu, 5 Sep 2002 13:46:32 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: contrib/tsearch"
},
{
"msg_contents": "> Should we check for stop words before stemming or after ?\n\nI think you should.\n\n> In the first case we have to collect all forms of stop-words\n> which is doable\n> but difficult to maintain, in latter - we'll have current problem.\n\nLooking at the list of stopwords you sent me, Oleg, there are only about 1\nout of the list of 120 stopwords that need to have all word forms added. I\nalso don't think it'll be a maintenance problem. The reason I think this is\nbecause stopwords in general don't have different word forms.\n\neg. her, his, i, and, etc. They don't have different forms. In fact, the\n_only_ word in the stopword list that needs a different form is yourself and\nyourselves. Actually, according to dictionary.com 'ourself' is also a word.\n'themself' isn't tho. Some others I don't know about are:\n\n'veri' - I assume this is stemmed 'very', so why not just use 'very'?\n\nSo, why don't you change tsearch to check for stop words _before_ stemming?\nI can give you a list of revised stopwords that haven't been stemmed, with\nall forms of the words.\n\n> It's time for beta1 and I'm not sure if we could work on this issue\n> right now, but I feel a big pressure from tsearch users :-)\n> If people want to help us why not to work on stop words list including\n> all forms ? In any case, we are not native english, so don't expect we'll\n> create more or less decent list. Programming changes are trivial, probably\n> we'll end for the moment just using compile time option.\n> As always, your patches are welcome !\n\nI'm happy to work on the list of stopwords for you, Oleg. I agree this\nmight be 7.4 thing though...\n\nChris\n\n",
"msg_date": "Fri, 6 Sep 2002 12:01:30 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: contrib/tsearch"
},
{
"msg_contents": "> Looking at the list of stopwords you sent me, Oleg, there are only about 1\n> out of the list of 120 stopwords that need to have all word forms \n> added. I\n> also don't think it'll be a maintenance problem. The reason I \n> think this is\n> because stopwords in general don't have different word forms.\n\nActually, it just occurred to me that stuff like:\n\nwill\nwon't\nit\nit's\nwhere\nwhere's\n\nWill all have to be in the list, right?\n\nChris\n\n",
"msg_date": "Fri, 6 Sep 2002 12:20:11 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: contrib/tsearch"
},
{
"msg_contents": "There also seems to be a more complete list of english stopwords here:\n\nhttp://www.dcs.gla.ac.uk/idom/ir_resources/linguistic_utils/\n\nHowever this list again does not include contractions. I can take this\nlist, check it and submit it to you Oleg, but do you want me to add\ncontractions?\n\neg. wasn't, isn't, it's, etc.?\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Christopher\n> Kings-Lynne\n> Sent: Friday, 6 September 2002 12:20 PM\n> To: Christopher Kings-Lynne; Oleg Bartunov\n> Cc: Hackers; martin_porter@softhome.net\n> Subject: Re: [HACKERS] contrib/tsearch\n>\n>\n> > Looking at the list of stopwords you sent me, Oleg, there are\n> only about 1\n> > out of the list of 120 stopwords that need to have all word forms\n> > added. I\n> > also don't think it'll be a maintenance problem. The reason I\n> > think this is\n> > because stopwords in general don't have different word forms.\n>\n> Actually, it just occurred to me that stuff like:\n>\n> will\n> won't\n> it\n> it's\n> where\n> where's\n>\n> Will all have to be in the list, right?\n>\n> Chris\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Fri, 6 Sep 2002 12:59:56 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: contrib/tsearch"
},
{
"msg_contents": "On Fri, 6 Sep 2002, Christopher Kings-Lynne wrote:\n\n> There also seems to be a more complete list of english stopwords here:\n>\n> http://www.dcs.gla.ac.uk/idom/ir_resources/linguistic_utils/\n\nChris, I think we have to separate stop word list from tsearch package and\nsupply just some defaults. The reason for this is to let user decide what is\na stop word - various domains should have different stop words.\nThis is how OpenFTS works.\nAlso, we probably need to let user decide when to check for stop word -\nafter or before stemming. I'm waiting for Martin's fix for english stemmerr\nand probably we'll switch to use snowball one, which are more qualified.\n\nDamn, we wanted to do these and much more a bit later because we're under\nbig pressure of our work. We'll see if we could manage our plans.\n\nWe certainly need developers to help us in full text searching,\nltree ( it has a chance to support XML ). Also we need to work\non adding concurrency support to GiST.\n\nso, I couldn't promise we'll work on tsearch right now, but we provide\nmakedict.pl so you could build dictionary with custom list of stop words.\nDid you try it ?\n\n>\n> However this list again does not include contractions. I can take this\n> list, check it and submit it to you Oleg, but do you want me to add\n> contractions?\n>\n> eg. wasn't, isn't, it's, etc.?\n\nHmm, our parser isn't smart to handle them as a single word, so\nit'll not helps:\n\n13:30:03[megera@amon]~/app/fts/test-suite>./testdict.pl -p\nwasn't\nlexeme:wasn:1:Latin word\nlexeme:':12:Space symbols\nlexeme:t:1:Latin word\n\nBut, you always could add 'wasn', 'isn' ... and 't','s' to list of your\nstop words and be happy. Hmm, probably we could enhance our parser to\nhandle such words too.\n\nAnyway, most problems just a question of time we don't have :-(\n\n\n>\n> Chris\n>\n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Christopher\n> > Kings-Lynne\n> > Sent: Friday, 6 September 2002 12:20 PM\n> > To: Christopher Kings-Lynne; Oleg Bartunov\n> > Cc: Hackers; martin_porter@softhome.net\n> > Subject: Re: [HACKERS] contrib/tsearch\n> >\n> >\n> > > Looking at the list of stopwords you sent me, Oleg, there are\n> > only about 1\n> > > out of the list of 120 stopwords that need to have all word forms\n> > > added. I\n> > > also don't think it'll be a maintenance problem. The reason I\n> > > think this is\n> > > because stopwords in general don't have different word forms.\n> >\n> > Actually, it just occurred to me that stuff like:\n> >\n> > will\n> > won't\n> > it\n> > it's\n> > where\n> > where's\n> >\n> > Will all have to be in the list, right?\n> >\n> > Chris\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> >\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 6 Sep 2002 13:41:01 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: contrib/tsearch"
},
{
"msg_contents": "On Fri, 6 Sep 2002, Christopher Kings-Lynne wrote:\n\n> > Looking at the list of stopwords you sent me, Oleg, there are only about 1\n> > out of the list of 120 stopwords that need to have all word forms\n> > added. I\n> > also don't think it'll be a maintenance problem. The reason I\n> > think this is\n> > because stopwords in general don't have different word forms.\n>\n> Actually, it just occurred to me that stuff like:\n>\n> will\n> won't\n> it\n> it's\n> where\n> where's\n>\n> Will all have to be in the list, right?\n\nright, see my previous message. Teodor is our main developer, he should be\nback from vacation very soon. But he already has many assignments regarding\nour main project. Are there one smart programmer ?\n\n\n>\n> Chris\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 6 Sep 2002 13:46:00 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: contrib/tsearch"
},
{
"msg_contents": "On Fri, 6 Sep 2002, Christopher Kings-Lynne wrote:\n\n> > Should we check for stop words before stemming or after ?\n>\n> I think you should.\n>\n> > In the first case we have to collect all forms of stop-words\n> > which is doable\n> > but difficult to maintain, in latter - we'll have current problem.\n>\n> Looking at the list of stopwords you sent me, Oleg, there are only about 1\n> out of the list of 120 stopwords that need to have all word forms\n> added. I\n> also don't think it'll be a maintenance problem. The reason I think this is\n> because stopwords in general don't have different word forms.\n>\n> eg. her, his, i, and, etc. They don't have different forms. In fact, the\n> _only_ word in the stopword list that needs a different form is yourself and\n> yourselves. Actually, according to dictionary.com 'ourself' is also a word.\n> 'themself' isn't tho. Some others I don't know about are:\n>\n> 'veri' - I assume this is stemmed 'very', so why not just use 'very'?\n\nThat's because we currently check for stop word after stemming and\nI think porters algorithm converts 'very' to 'veri' :-)\n\n>\n> So, why don't you change tsearch to check for stop words _before_ stemming?\n> I can give you a list of revised stopwords that haven't been stemmed, with\n> all forms of the words.\n>\n\nI agree that english list is, probably, easy to maintain, but what about\nother languages ? We don't have any volunteers - you're the first one.\n\n\n> > It's time for beta1 and I'm not sure if we could work on this issue\n> > right now, but I feel a big pressure from tsearch users :-)\n> > If people want to help us why not to work on stop words list including\n> > all forms ? In any case, we are not native english, so don't expect we'll\n> > create more or less decent list. Programming changes are trivial, probably\n> > we'll end for the moment just using compile time option.\n> > As always, your patches are welcome !\n>\n> I'm happy to work on the list of stopwords for you, Oleg. I agree this\n> might be 7.4 thing though...\n\nWe always could keep updates separately on our page and in CVS.\n\n>\n> Chris\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 6 Sep 2002 13:52:11 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: contrib/tsearch"
},
{
"msg_contents": "> Should we check for stop words before stemming or after ?\n\nCurrent implementation supports both variants. Look dictionary interface \ndefinition in morph.c:\n\ntypedef struct\n{\n char localename[NAMEDATALEN];\n /* init dictionary */\n void *(*init) (void);\n /* close dictionary */\n void (*close) (void *);\n /* find in dictionary */\n char *(*lemmatize) (void *, char *, int *);\n int (*is_stoplemm) (void *, char *, int);\n int (*is_stemstoplemm) (void *, char *, int);\n} DICT;\n\n'is_stoplemm' method is called before 'lemmtize' and 'is_stemstoplemm' after.\ndict/porter_english.dct at the end:\nTABLE_DICT_START\n \"C\",\n setup_english_stemmer,\n closedown_english_stemmer,\n engstemming,\n NULL,\n is_stopengword\nTABLE_DICT_END\n\ndict/russian_stemming.dct:\nTABLE_DICT_START\n \"ru_RU.KOI8-R\",\n NULL,\n NULL,\n ru_RUKOI8R_stem,\n ru_RUKOI8R_is_stopword,\n NULL\nTABLE_DICT_END\n\nSo english stemmer defines is lexem stop or not after stemming, but russian before.\n\n\n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n",
"msg_date": "Mon, 09 Sep 2002 18:19:42 +0400",
"msg_from": "Teodor Sigaev <teodor@stack.net>",
"msg_from_op": false,
"msg_subject": "Re: contrib/tsearch"
}
]
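The thread above turns on whether stop words should be filtered before or after stemming. A minimal Python sketch of the trade-off, using a hypothetical toy stemmer and stop list (not tsearch's actual Porter code or its real stopword file): checking after stemming can discard legitimate words whose stems happen to collide with a stop word, which is the suspected cause of the 'herring' report, while checking before stemming keeps such words but misses inflected forms of stop words unless every form is listed, which is exactly the maintenance burden Oleg describes.

```python
# Toy illustration of stop-word filtering order. NOT tsearch's real code:
# the "stemmer" below is a fake suffix stripper standing in for Porter's
# algorithm, and the stop list is invented for the example.

STOP_WORDS = {"her", "his", "it", "very"}

def toy_stem(word):
    # Hypothetical stemmer: strip the first matching suffix, leaving at
    # least a 3-character stem. (Real Porter stemming is far subtler.)
    for suffix in ("ring", "ies", "ing", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def index_terms(words, stop_before_stemming):
    terms = []
    for w in words:
        if stop_before_stemming:
            if w in STOP_WORDS:        # check the surface form
                continue
            terms.append(toy_stem(w))
        else:
            stem = toy_stem(w)
            if stem in STOP_WORDS:     # check the stemmed form
                continue
            terms.append(stem)
    return terms

words = ["herring", "very", "fish"]
# Checking after stemming: "herring" stems to "her", collides with the
# stop word, and is silently dropped from the index.
print(index_terms(words, stop_before_stemming=False))  # ['fish']
# Checking before stemming keeps "herring", but an inflected stop word
# would slip through unless its surface form is also in the list.
print(index_terms(words, stop_before_stemming=True))   # ['her', 'fish']
```

Either ordering alone is lossy, which is why the DICT interface Teodor shows later in the thread exposes both hooks (is_stoplemm before lemmatizing, is_stemstoplemm after) and lets each dictionary choose.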
[
{
"msg_contents": "\nIt seems that my last mail on this did not get through to the list ;(\n\n\n\nPlease consider renaming the new builtin function \n\n split(text,text,int)\n\nto something else, perhaps\n\n split_part(text,text,int)\n\n(like date_part)\n\nThe reason for this request is that 3 most popular scripting languages\n(perl, python, php) all have also a function with similar signature, but\nreturning an array instead of single element and the (optional) third\nargument is limit (maximum number of splits to perform)\n\nI think that it would be good to have similar function in (some future\nrelease of) postgres, but if we now let in a function with same name and\narguments but returning a single string instead an array of them, then\nwe will need to invent a new and not so easy to recognise name for the\n\"real\" split function.\n\n----------------\nHannu\n\n",
"msg_date": "05 Sep 2002 09:30:53 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Please rename split(text,text,int) to splitpart"
},
{
"msg_contents": "Hannu Krosing wrote:\n> It seems that my last mail on this did not get through to the list ;(\n> \n> Please consider renaming the new builtin function \n> \n> split(text,text,int)\n> \n> to something else, perhaps\n> \n> split_part(text,text,int)\n> \n> (like date_part)\n> \n> The reason for this request is that 3 most popular scripting languages\n> (perl, python, php) all have also a function with similar signature, but\n> returning an array instead of single element and the (optional) third\n> argument is limit (maximum number of splits to perform)\n> \n> I think that it would be good to have similar function in (some future\n> release of) postgres, but if we now let in a function with same name and\n> arguments but returning a single string instead an array of them, then\n> we will need to invent a new and not so easy to recognise name for the\n> \"real\" split function.\n> \n\nThis is a good point, and I'm not opposed to changing the name, but it \nis too bad your original email didn't get through before beta1 was \nrolled. The change would now require an initdb, which I know we were \ntrying to avoid once beta started (although we could change it without \n*requiring* an initdb I suppose).\n\nI guess if we do end up needing an initdb for other reasons, we should \nmake this change too. Any other opinions? Is split_part an acceptable name?\n\nAlso, if we add a todo to produce a \"real\" split function that returns \nan array, similar to those languages, I'll take it for 7.4.\n\nThanks,\n\nJoe\n\n\n\n",
"msg_date": "Thu, 05 Sep 2002 08:12:19 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Please rename split(text,text,int) to splitpart"
},
{
"msg_contents": "Joe Conway wrote:\n > Hannu Krosing wrote:\n >\n >> It seems that my last mail on this did not get through to the list\n >> ;(\n >>\n >> Please consider renaming the new builtin function\n >> split(text,text,int)\n >>\n >> to something else, perhaps\n >>\n >> split_part(text,text,int)\n >>\n >> (like date_part)\n >>\n >> The reason for this request is that 3 most popular scripting\n >> languages (perl, python, php) all have also a function with similar\n >> signature, but returning an array instead of single element and the\n >> (optional) third argument is limit (maximum number of splits to\n >> perform)\n >>\n >> I think that it would be good to have similar function in (some\n >> future release of) postgres, but if we now let in a function with\n >> same name and arguments but returning a single string instead an\n >> array of them, then we will need to invent a new and not so easy to\n >> recognise name for the \"real\" split function.\n >>\n >\n > This is a good point, and I'm not opposed to changing the name, but\n > it is too bad your original email didn't get through before beta1 was\n > rolled. The change would now require an initdb, which I know we were\n > trying to avoid once beta started (although we could change it\n > without *requiring* an initdb I suppose).\n >\n > I guess if we do end up needing an initdb for other reasons, we\n > should make this change too. Any other opinions? Is split_part an\n > acceptable name?\n >\n > Also, if we add a todo to produce a \"real\" split function that\n > returns an array, similar to those languages, I'll take it for 7.4.\n\nNo one commented on the choice of name, so the attached patch changes \nthe name of split(text,text,int) to split_part(text,text,int) per \nHannu's recommendation above. This can be applied without an initdb if \ncurrent beta testers are advised to run:\n\n update pg_proc set proname = 'split_part' where proname = 'split';\n\nin the case they want to use this function. Regression and doc fix is \nalso included in the patch.\n\nPlease apply.\n\nThanks,\n\nJoe",
"msg_date": "Sat, 07 Sep 2002 12:45:13 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Please rename split(text,text,int) to splitpart"
},
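As context for the rename discussed above: the function returns a single field of the split result, selected by a 1-based index, unlike the array-returning split() of Perl, Python, and PHP that Hannu wants the plain name reserved for. A rough Python sketch of the two behaviours (an approximation based on this thread, not the actual C implementation; PostgreSQL's edge-case handling, e.g. for out-of-range or negative indexes, may differ):

```python
def split_part(string, delimiter, n):
    # Return the n-th field (1-based) of string split on delimiter,
    # or '' when n is out of range, mirroring the single-value
    # semantics discussed for the PostgreSQL function.
    fields = string.split(delimiter)
    if 1 <= n <= len(fields):
        return fields[n - 1]
    return ""

# The scripting-language split() that motivates the rename returns
# the whole array instead of one element:
print(split_part("a@b@c", "@", 2))   # prints: b
print("a@b@c".split("@"))            # prints: ['a', 'b', 'c']
```

The name split_part follows the date_part precedent: an extra argument picks out one component, so the unqualified name stays free for a future array-returning split.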
{
"msg_contents": "\nWhat do people think if this change?\n\n---------------------------------------------------------------------------\n\nHannu Krosing wrote:\n> \n> It seems that my last mail on this did not get through to the list ;(\n> \n> \n> \n> Please consider renaming the new builtin function \n> \n> split(text,text,int)\n> \n> to something else, perhaps\n> \n> split_part(text,text,int)\n> \n> (like date_part)\n> \n> The reason for this request is that 3 most popular scripting languages\n> (perl, python, php) all have also a function with similar signature, but\n> returning an array instead of single element and the (optional) third\n> argument is limit (maximum number of splits to perform)\n> \n> I think that it would be good to have similar function in (some future\n> release of) postgres, but if we now let in a function with same name and\n> arguments but returning a single string instead an array of them, then\n> we will need to invent a new and not so easy to recognise name for the\n> \"real\" split function.\n> \n> ----------------\n> Hannu\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 10 Sep 2002 22:33:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Please rename split(text,text,int) to splitpart"
},
{
"msg_contents": "I think it should be made. Don't force an initdb. Beta testers can run the\nupdate. 'split' is a pretty standard function these days...\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> Sent: Wednesday, 11 September 2002 10:33 AM\n> To: Hannu Krosing\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Please rename split(text,text,int) to splitpart\n>\n>\n>\n> What do people think if this change?\n>\n> ------------------------------------------------------------------\n> ---------\n>\n> Hannu Krosing wrote:\n> >\n> > It seems that my last mail on this did not get through to the list ;(\n> >\n> >\n> >\n> > Please consider renaming the new builtin function\n> >\n> > split(text,text,int)\n> >\n> > to something else, perhaps\n> >\n> > split_part(text,text,int)\n> >\n> > (like date_part)\n> >\n> > The reason for this request is that 3 most popular scripting languages\n> > (perl, python, php) all have also a function with similar signature, but\n> > returning an array instead of single element and the (optional) third\n> > argument is limit (maximum number of splits to perform)\n> >\n> > I think that it would be good to have similar function in (some future\n> > release of) postgres, but if we now let in a function with same name and\n> > arguments but returning a single string instead an array of them, then\n> > we will need to invent a new and not so easy to recognise name for the\n> > \"real\" split function.\n> >\n> > ----------------\n> > Hannu\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. 
| Newtown Square,\n> Pennsylvania 19073\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n",
"msg_date": "Wed, 11 Sep 2002 11:02:17 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Please rename split(text,text,int) to splitpart"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> I think it should be made. Don't force an initdb. Beta testers can run the\n> update. 'split' is a pretty standard function these days...\n> \n\nMe too. Patch already sent in, including doc and regression test.\n\nAnd as I said, I'll take a TODO to create a 'split' which either returns an \narray or maybe as an SRF, so the behavior is more like people will be expecting.\n\nJoe\n\n\n",
"msg_date": "Tue, 10 Sep 2002 20:28:38 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Please rename split(text,text,int) to splitpart"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> What do people think if this change?\n\nI'm not thrilled about renaming the function without forcing an initdb\n... but the alternatives seem worse. Okay by me if we do it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Sep 2002 23:56:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Please rename split(text,text,int) to splitpart "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > What do people think if this change?\n> \n> I'm not thrilled about renaming the function without forcing an initdb\n> ... but the alternatives seem worse. Okay by me if we do it.\n\nI am not either. How do you do the documentation when the function can\nbe called two ways. I guess we can give the SQL query to fix it during\nbeta2 _and_ add a regression test to make sure it is fix. That sounds\nlike a plan.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 11 Sep 2002 00:00:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Please rename split(text,text,int) to splitpart"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> Joe Conway wrote:\n> > Hannu Krosing wrote:\n> >\n> >> It seems that my last mail on this did not get through to the list\n> >> ;(\n> >>\n> >> Please consider renaming the new builtin function\n> >> split(text,text,int)\n> >>\n> >> to something else, perhaps\n> >>\n> >> split_part(text,text,int)\n> >>\n> >> (like date_part)\n> >>\n> >> The reason for this request is that 3 most popular scripting\n> >> languages (perl, python, php) all have also a function with similar\n> >> signature, but returning an array instead of single element and the\n> >> (optional) third argument is limit (maximum number of splits to\n> >> perform)\n> >>\n> >> I think that it would be good to have similar function in (some\n> >> future release of) postgres, but if we now let in a function with\n> >> same name and arguments but returning a single string instead an\n> >> array of them, then we will need to invent a new and not so easy to\n> >> recognise name for the \"real\" split function.\n> >>\n> >\n> > This is a good point, and I'm not opposed to changing the name, but\n> > it is too bad your original email didn't get through before beta1 was\n> > rolled. The change would now require an initdb, which I know we were\n> > trying to avoid once beta started (although we could change it\n> > without *requiring* an initdb I suppose).\n> >\n> > I guess if we do end up needing an initdb for other reasons, we\n> > should make this change too. Any other opinions? 
Is split_part an\n> > acceptable name?\n> >\n> > Also, if we add a todo to produce a \"real\" split function that\n> > returns an array, similar to those languages, I'll take it for 7.4.\n> \n> No one commented on the choice of name, so the attached patch changes \n> the name of split(text,text,int) to split_part(text,text,int) per \n> Hannu's recommendation above. This can be applied without an initdb if \n> current beta testers are advised to run:\n> \n> update pg_proc set proname = 'split_part' where proname = 'split';\n> \n> in the case they want to use this function. Regression and doc fix is \n> also included in the patch.\n> \n> Please apply.\n> \n> Thanks,\n> \n> Joe\n> \n\n> Index: src/include/catalog/pg_proc.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/include/catalog/pg_proc.h,v\n> retrieving revision 1.270\n> diff -c -r1.270 pg_proc.h\n> *** src/include/catalog/pg_proc.h\t4 Sep 2002 20:31:38 -0000\t1.270\n> --- src/include/catalog/pg_proc.h\t7 Sep 2002 18:54:57 -0000\n> ***************\n> *** 2130,2136 ****\n> DESCR(\"return portion of string\");\n> DATA(insert OID = 2087 ( replace\t PGNSP PGUID 12 f f t f i 3 25 \"25 25 25\" replace_text - _null_ ));\n> DESCR(\"replace all occurrences of old_substr with new_substr in string\");\n> ! DATA(insert OID = 2088 ( split\t\t PGNSP PGUID 12 f f t f i 3 25 \"25 25 23\" split_text - _null_ ));\n> DESCR(\"split string by field_sep and return field_num\");\n> DATA(insert OID = 2089 ( to_hex\t PGNSP PGUID 12 f f t f i 1 25 \"23\" to_hex32 - _null_ ));\n> DESCR(\"convert int32 number to hex\");\n> --- 2130,2136 ----\n> DESCR(\"return portion of string\");\n> DATA(insert OID = 2087 ( replace\t PGNSP PGUID 12 f f t f i 3 25 \"25 25 25\" replace_text - _null_ ));\n> DESCR(\"replace all occurrences of old_substr with new_substr in string\");\n> ! 
DATA(insert OID = 2088 ( split_part PGNSP PGUID 12 f f t f i 3 25 \"25 25 23\" split_text - _null_ ));\n> DESCR(\"split string by field_sep and return field_num\");\n> DATA(insert OID = 2089 ( to_hex\t PGNSP PGUID 12 f f t f i 1 25 \"23\" to_hex32 - _null_ ));\n> DESCR(\"convert int32 number to hex\");\n> Index: src/test/regress/expected/strings.out\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/test/regress/expected/strings.out,v\n> retrieving revision 1.16\n> diff -c -r1.16 strings.out\n> *** src/test/regress/expected/strings.out\t28 Aug 2002 20:18:29 -0000\t1.16\n> --- src/test/regress/expected/strings.out\t7 Sep 2002 19:09:44 -0000\n> ***************\n> *** 719,747 ****\n> (1 row)\n> \n> --\n> ! -- test split\n> --\n> ! select split('joeuser@mydatabase','@',0) AS \"an error\";\n> ERROR: field position must be > 0\n> ! select split('joeuser@mydatabase','@',1) AS \"joeuser\";\n> joeuser \n> ---------\n> joeuser\n> (1 row)\n> \n> ! select split('joeuser@mydatabase','@',2) AS \"mydatabase\";\n> mydatabase \n> ------------\n> mydatabase\n> (1 row)\n> \n> ! select split('joeuser@mydatabase','@',3) AS \"empty string\";\n> empty string \n> --------------\n> \n> (1 row)\n> \n> ! select split('@joeuser@mydatabase@','@',2) AS \"joeuser\";\n> joeuser \n> ---------\n> joeuser\n> --- 719,747 ----\n> (1 row)\n> \n> --\n> ! -- test split_part\n> --\n> ! select split_part('joeuser@mydatabase','@',0) AS \"an error\";\n> ERROR: field position must be > 0\n> ! select split_part('joeuser@mydatabase','@',1) AS \"joeuser\";\n> joeuser \n> ---------\n> joeuser\n> (1 row)\n> \n> ! select split_part('joeuser@mydatabase','@',2) AS \"mydatabase\";\n> mydatabase \n> ------------\n> mydatabase\n> (1 row)\n> \n> ! select split_part('joeuser@mydatabase','@',3) AS \"empty string\";\n> empty string \n> --------------\n> \n> (1 row)\n> \n> ! 
select split_part('@joeuser@mydatabase@','@',2) AS \"joeuser\";\n> joeuser \n> ---------\n> joeuser\n> Index: src/test/regress/sql/strings.sql\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/test/regress/sql/strings.sql,v\n> retrieving revision 1.10\n> diff -c -r1.10 strings.sql\n> *** src/test/regress/sql/strings.sql\t28 Aug 2002 20:18:29 -0000\t1.10\n> --- src/test/regress/sql/strings.sql\t7 Sep 2002 19:09:00 -0000\n> ***************\n> *** 288,304 ****\n> SELECT replace('yabadoo', 'bad', '') AS \"yaoo\";\n> \n> --\n> ! -- test split\n> --\n> ! select split('joeuser@mydatabase','@',0) AS \"an error\";\n> \n> ! select split('joeuser@mydatabase','@',1) AS \"joeuser\";\n> \n> ! select split('joeuser@mydatabase','@',2) AS \"mydatabase\";\n> \n> ! select split('joeuser@mydatabase','@',3) AS \"empty string\";\n> \n> ! select split('@joeuser@mydatabase@','@',2) AS \"joeuser\";\n> \n> --\n> -- test to_hex\n> --- 288,304 ----\n> SELECT replace('yabadoo', 'bad', '') AS \"yaoo\";\n> \n> --\n> ! -- test split_part\n> --\n> ! select split_part('joeuser@mydatabase','@',0) AS \"an error\";\n> \n> ! select split_part('joeuser@mydatabase','@',1) AS \"joeuser\";\n> \n> ! select split_part('joeuser@mydatabase','@',2) AS \"mydatabase\";\n> \n> ! select split_part('joeuser@mydatabase','@',3) AS \"empty string\";\n> \n> ! select split_part('@joeuser@mydatabase@','@',2) AS \"joeuser\";\n> \n> --\n> -- test to_hex\n> Index: doc/src/sgml/func.sgml\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/doc/src/sgml/func.sgml,v\n> retrieving revision 1.120\n> diff -c -r1.120 func.sgml\n> *** doc/src/sgml/func.sgml\t2 Sep 2002 05:53:23 -0000\t1.120\n> --- doc/src/sgml/func.sgml\t7 Sep 2002 19:12:34 -0000\n> ***************\n> *** 1899,1912 ****\n> </row>\n> \n> <row>\n> ! 
<entry><function>split</function>(<parameter>string</parameter> <type>text</type>,\n> <parameter>delimiter</parameter> <type>text</type>,\n> <parameter>column</parameter> <type>integer</type>)</entry>\n> <entry><type>text</type></entry>\n> <entry>Split <parameter>string</parameter> on <parameter>delimiter</parameter>\n> returning the resulting (one based) <parameter>column</parameter> number.\n> </entry>\n> ! <entry><literal>split('abc~@~def~@~ghi','~@~',2)</literal></entry>\n> <entry><literal>def</literal></entry>\n> </row>\n> \n> --- 1899,1912 ----\n> </row>\n> \n> <row>\n> ! <entry><function>split_part</function>(<parameter>string</parameter> <type>text</type>,\n> <parameter>delimiter</parameter> <type>text</type>,\n> <parameter>column</parameter> <type>integer</type>)</entry>\n> <entry><type>text</type></entry>\n> <entry>Split <parameter>string</parameter> on <parameter>delimiter</parameter>\n> returning the resulting (one based) <parameter>column</parameter> number.\n> </entry>\n> ! <entry><literal>split_part('abc~@~def~@~ghi','~@~',2)</literal></entry>\n> <entry><literal>def</literal></entry>\n> </row>\n> \n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 11 Sep 2002 00:01:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Please rename split(text,text,int) to splitpart"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I am not either. How do you do the documentation when the function can\n> be called two ways.\n\nYou don't. There is only one supported name, so that's the only one\nyou document.\n\n> I guess we can give the SQL query to fix it during\n> beta2 _and_ add a regression test to make sure it is fix. That sounds\n> like a plan.\n\nThat sounds like massive overkill. Just apply the patch. We don't need\nto institutionalize a regression test for this.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 11 Sep 2002 00:40:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Please rename split(text,text,int) to splitpart "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I am not either. How do you do the documentation when the function can\n> > be called two ways.\n> \n> You don't. There is only one supported name, so that's the only one\n> you document.\n> \n> > I guess we can give the SQL query to fix it during\n> > beta2 _and_ add a regression test to make sure it is fix. That sounds\n> > like a plan.\n> \n> That sounds like massive overkill. Just apply the patch. We don't need\n> to institutionalize a regression test for this.\n\nIt would catch people who don't apply the patch. We could remove the\ntest after 7.3. Just an idea.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 11 Sep 2002 00:42:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Please rename split(text,text,int) to splitpart"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Tom Lane wrote:\n>>That sounds like massive overkill. Just apply the patch. We don't need\n>>to institutionalize a regression test for this.\n> \n> It would catch people who don't apply the patch. We could remove the\n> test after 7.3. Just an idea.\n> \n\nThe existing strings regression test will fail if the update patch isn't applied.\n\nJoe\n\n",
"msg_date": "Tue, 10 Sep 2002 22:05:45 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Please rename split(text,text,int) to splitpart"
},
{
"msg_contents": "\nPatch applied. Thanks.\n\nI have not forced an initdb, _but_ there will be regression failures if\nan initdb is not done. The regression test was part of the patch.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> Joe Conway wrote:\n> > Hannu Krosing wrote:\n> >\n> >> It seems that my last mail on this did not get through to the list\n> >> ;(\n> >>\n> >> Please consider renaming the new builtin function\n> >> split(text,text,int)\n> >>\n> >> to something else, perhaps\n> >>\n> >> split_part(text,text,int)\n> >>\n> >> (like date_part)\n> >>\n> >> The reason for this request is that 3 most popular scripting\n> >> languages (perl, python, php) all have also a function with similar\n> >> signature, but returning an array instead of single element and the\n> >> (optional) third argument is limit (maximum number of splits to\n> >> perform)\n> >>\n> >> I think that it would be good to have similar function in (some\n> >> future release of) postgres, but if we now let in a function with\n> >> same name and arguments but returning a single string instead an\n> >> array of them, then we will need to invent a new and not so easy to\n> >> recognise name for the \"real\" split function.\n> >>\n> >\n> > This is a good point, and I'm not opposed to changing the name, but\n> > it is too bad your original email didn't get through before beta1 was\n> > rolled. The change would now require an initdb, which I know we were\n> > trying to avoid once beta started (although we could change it\n> > without *requiring* an initdb I suppose).\n> >\n> > I guess if we do end up needing an initdb for other reasons, we\n> > should make this change too. Any other opinions? 
Is split_part an\n> > acceptable name?\n> >\n> > Also, if we add a todo to produce a \"real\" split function that\n> > returns an array, similar to those languages, I'll take it for 7.4.\n> \n> No one commented on the choice of name, so the attached patch changes \n> the name of split(text,text,int) to split_part(text,text,int) per \n> Hannu's recommendation above. This can be applied without an initdb if \n> current beta testers are advised to run:\n> \n> update pg_proc set proname = 'split_part' where proname = 'split';\n> \n> in the case they want to use this function. Regression and doc fix is \n> also included in the patch.\n> \n> Please apply.\n> \n> Thanks,\n> \n> Joe\n> \n\n> Index: src/include/catalog/pg_proc.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/include/catalog/pg_proc.h,v\n> retrieving revision 1.270\n> diff -c -r1.270 pg_proc.h\n> *** src/include/catalog/pg_proc.h\t4 Sep 2002 20:31:38 -0000\t1.270\n> --- src/include/catalog/pg_proc.h\t7 Sep 2002 18:54:57 -0000\n> ***************\n> *** 2130,2136 ****\n> DESCR(\"return portion of string\");\n> DATA(insert OID = 2087 ( replace\t PGNSP PGUID 12 f f t f i 3 25 \"25 25 25\" replace_text - _null_ ));\n> DESCR(\"replace all occurrences of old_substr with new_substr in string\");\n> ! DATA(insert OID = 2088 ( split\t\t PGNSP PGUID 12 f f t f i 3 25 \"25 25 23\" split_text - _null_ ));\n> DESCR(\"split string by field_sep and return field_num\");\n> DATA(insert OID = 2089 ( to_hex\t PGNSP PGUID 12 f f t f i 1 25 \"23\" to_hex32 - _null_ ));\n> DESCR(\"convert int32 number to hex\");\n> --- 2130,2136 ----\n> DESCR(\"return portion of string\");\n> DATA(insert OID = 2087 ( replace\t PGNSP PGUID 12 f f t f i 3 25 \"25 25 25\" replace_text - _null_ ));\n> DESCR(\"replace all occurrences of old_substr with new_substr in string\");\n> ! 
DATA(insert OID = 2088 ( split_part PGNSP PGUID 12 f f t f i 3 25 \"25 25 23\" split_text - _null_ ));\n> DESCR(\"split string by field_sep and return field_num\");\n> DATA(insert OID = 2089 ( to_hex\t PGNSP PGUID 12 f f t f i 1 25 \"23\" to_hex32 - _null_ ));\n> DESCR(\"convert int32 number to hex\");\n> Index: src/test/regress/expected/strings.out\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/test/regress/expected/strings.out,v\n> retrieving revision 1.16\n> diff -c -r1.16 strings.out\n> *** src/test/regress/expected/strings.out\t28 Aug 2002 20:18:29 -0000\t1.16\n> --- src/test/regress/expected/strings.out\t7 Sep 2002 19:09:44 -0000\n> ***************\n> *** 719,747 ****\n> (1 row)\n> \n> --\n> ! -- test split\n> --\n> ! select split('joeuser@mydatabase','@',0) AS \"an error\";\n> ERROR: field position must be > 0\n> ! select split('joeuser@mydatabase','@',1) AS \"joeuser\";\n> joeuser \n> ---------\n> joeuser\n> (1 row)\n> \n> ! select split('joeuser@mydatabase','@',2) AS \"mydatabase\";\n> mydatabase \n> ------------\n> mydatabase\n> (1 row)\n> \n> ! select split('joeuser@mydatabase','@',3) AS \"empty string\";\n> empty string \n> --------------\n> \n> (1 row)\n> \n> ! select split('@joeuser@mydatabase@','@',2) AS \"joeuser\";\n> joeuser \n> ---------\n> joeuser\n> --- 719,747 ----\n> (1 row)\n> \n> --\n> ! -- test split_part\n> --\n> ! select split_part('joeuser@mydatabase','@',0) AS \"an error\";\n> ERROR: field position must be > 0\n> ! select split_part('joeuser@mydatabase','@',1) AS \"joeuser\";\n> joeuser \n> ---------\n> joeuser\n> (1 row)\n> \n> ! select split_part('joeuser@mydatabase','@',2) AS \"mydatabase\";\n> mydatabase \n> ------------\n> mydatabase\n> (1 row)\n> \n> ! select split_part('joeuser@mydatabase','@',3) AS \"empty string\";\n> empty string \n> --------------\n> \n> (1 row)\n> \n> ! 
select split_part('@joeuser@mydatabase@','@',2) AS \"joeuser\";\n> joeuser \n> ---------\n> joeuser\n> Index: src/test/regress/sql/strings.sql\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/test/regress/sql/strings.sql,v\n> retrieving revision 1.10\n> diff -c -r1.10 strings.sql\n> *** src/test/regress/sql/strings.sql\t28 Aug 2002 20:18:29 -0000\t1.10\n> --- src/test/regress/sql/strings.sql\t7 Sep 2002 19:09:00 -0000\n> ***************\n> *** 288,304 ****\n> SELECT replace('yabadoo', 'bad', '') AS \"yaoo\";\n> \n> --\n> ! -- test split\n> --\n> ! select split('joeuser@mydatabase','@',0) AS \"an error\";\n> \n> ! select split('joeuser@mydatabase','@',1) AS \"joeuser\";\n> \n> ! select split('joeuser@mydatabase','@',2) AS \"mydatabase\";\n> \n> ! select split('joeuser@mydatabase','@',3) AS \"empty string\";\n> \n> ! select split('@joeuser@mydatabase@','@',2) AS \"joeuser\";\n> \n> --\n> -- test to_hex\n> --- 288,304 ----\n> SELECT replace('yabadoo', 'bad', '') AS \"yaoo\";\n> \n> --\n> ! -- test split_part\n> --\n> ! select split_part('joeuser@mydatabase','@',0) AS \"an error\";\n> \n> ! select split_part('joeuser@mydatabase','@',1) AS \"joeuser\";\n> \n> ! select split_part('joeuser@mydatabase','@',2) AS \"mydatabase\";\n> \n> ! select split_part('joeuser@mydatabase','@',3) AS \"empty string\";\n> \n> ! select split_part('@joeuser@mydatabase@','@',2) AS \"joeuser\";\n> \n> --\n> -- test to_hex\n> Index: doc/src/sgml/func.sgml\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/doc/src/sgml/func.sgml,v\n> retrieving revision 1.120\n> diff -c -r1.120 func.sgml\n> *** doc/src/sgml/func.sgml\t2 Sep 2002 05:53:23 -0000\t1.120\n> --- doc/src/sgml/func.sgml\t7 Sep 2002 19:12:34 -0000\n> ***************\n> *** 1899,1912 ****\n> </row>\n> \n> <row>\n> ! 
<entry><function>split</function>(<parameter>string</parameter> <type>text</type>,\n> <parameter>delimiter</parameter> <type>text</type>,\n> <parameter>column</parameter> <type>integer</type>)</entry>\n> <entry><type>text</type></entry>\n> <entry>Split <parameter>string</parameter> on <parameter>delimiter</parameter>\n> returning the resulting (one based) <parameter>column</parameter> number.\n> </entry>\n> ! <entry><literal>split('abc~@~def~@~ghi','~@~',2)</literal></entry>\n> <entry><literal>def</literal></entry>\n> </row>\n> \n> --- 1899,1912 ----\n> </row>\n> \n> <row>\n> ! <entry><function>split_part</function>(<parameter>string</parameter> <type>text</type>,\n> <parameter>delimiter</parameter> <type>text</type>,\n> <parameter>column</parameter> <type>integer</type>)</entry>\n> <entry><type>text</type></entry>\n> <entry>Split <parameter>string</parameter> on <parameter>delimiter</parameter>\n> returning the resulting (one based) <parameter>column</parameter> number.\n> </entry>\n> ! <entry><literal>split_part('abc~@~def~@~ghi','~@~',2)</literal></entry>\n> <entry><literal>def</literal></entry>\n> </row>\n> \n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 11 Sep 2002 20:21:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Please rename split(text,text,int) to splitpart"
}
] |
[
{
"msg_contents": "\nI get the following error when building beta 1 on CYGWIN_NT-5.1 PC9\n1.3.10(0.51/3/2) 2002-02-25 11:14 i686 unknown:\n\nmake[3]: Entering directory\n`/usr/local/src/postgresql-7.3b1/src/backend/utils/mb/conversion_procs/c\nyrillic_and_mic'\ngcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations\n-I../../../../../../src/include -I/usr/local/include -DBUILDING_DLL=1\n-c -o cyrillic_and_mic.o cyrillic_and_mic.c\ndlltool --export-all --output-def cyrillic_and_mic.def\ncyrillic_and_mic.o\ndllwrap -o cyrillic_and_mic.dll --dllname cyrillic_and_mic.dll --def\ncyrillic_and_mic.def cyrillic_and_mic.o\n../../../../../../src/utils/dllinit.o -lcygipc -lcrypt -L/usr/local/lib\n-L../../../../../../src/backend -lpostgres\nWarning: resolving _CurrentMemoryContext by linking to\n__imp__CurrentMemoryContext (auto-import)\nfu000001.o(.idata$3+0xc): undefined reference to `libpostgres_a_iname'\nfu000002.o(.idata$3+0xc): undefined reference to `libpostgres_a_iname'\nfu000003.o(.idata$3+0xc): undefined reference to `libpostgres_a_iname'\nfu000004.o(.idata$3+0xc): undefined reference to `libpostgres_a_iname'\nfu000005.o(.idata$3+0xc): undefined reference to `libpostgres_a_iname'\nfu000006.o(.idata$3+0xc): more undefined references to\n`libpostgres_a_iname' follow\nnmth000000.o(.idata$4+0x0): undefined reference to\n`_nm__CurrentMemoryContext'\ncollect2: ld returned 1 exit status\ndllwrap: gcc exited with status 1\nmake[3]: *** [cyrillic_and_mic.dll] Error 1\nmake[3]: Leaving directory\n`/usr/local/src/postgresql-7.3b1/src/backend/utils/mb\n/conversion_procs/cyrillic_and_mic'\nmake[2]: *** [all] Error 2\nmake[2]: Leaving directory\n`/usr/local/src/postgresql-7.3b1/src/backend/utils/mb\n/conversion_procs'\nmake[1]: *** [all] Error 2\nmake[1]: Leaving directory `/usr/local/src/postgresql-7.3b1/src'\nmake: *** [all] Error 2\nPC9 $\n\nRegards, Dave\n",
"msg_date": "Thu, 5 Sep 2002 12:54:50 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "7.3 Beta 1 Build Error on Cygwin"
},
{
"msg_contents": "The following build error under Cygwin was recently reported:\n\nOn Thu, Sep 05, 2002 at 12:54:50PM +0100, Dave Page wrote:\n> I get the following error when building beta 1 on CYGWIN_NT-5.1 PC9\n> 1.3.10(0.51/3/2) 2002-02-25 11:14 i686 unknown:\n> \n> make[3]: Entering directory `/usr/local/src/postgresql-7.3b1/src/backend/utils/mb/conversion_procs/cyrillic_and_mic'\n> gcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../../../src/include -I/usr/local/include -DBUILDING_DLL=1 -c -o cyrillic_and_mic.o cyrillic_and_mic.c\n> [snip]\n> dllwrap -o cyrillic_and_mic.dll --dllname cyrillic_and_mic.dll --def cyrillic_and_mic.def cyrillic_and_mic.o ../../../../../../src/utils/dllinit.o -lcygipc -lcrypt -L/usr/local/lib -L../../../../../../src/backend -lpostgres\n> Warning: resolving _CurrentMemoryContext by linking to __imp__CurrentMemoryContext (auto-import)\n> [snip]\n> nmth000000.o(.idata$4+0x0): undefined reference to `_nm__CurrentMemoryContext'\n\nThe first patch fixes the above, the second one fixes the following:\n\nmake[4]: Entering directory `/home/jt/src/pgsql/src/pl/plpgsql/src'\n[snip]\ndllwrap -o plpgsql.dll --dllname plpgsql.dll --def plpgsql.def pl_gram.o pl_scan.o pl_handler.o pl_comp.o pl_exec.o pl_funcs.o ../../../../src/utils/dllinit.o -L../../../../src/backend -lpostgres -lcygipc -lcrypt -L/usr/local/lib\nWarning: resolving _InterruptPending by linking to __imp__InterruptPending (auto-import)\nWarning: resolving _SortMem by linking to __imp__SortMem (auto-import)\n[snip]\nnmth000000.o(.idata$4+0x0): undefined reference to `_nm__InterruptPending'\nnmth000002.o(.idata$4+0x0): undefined reference to `_nm__SortMem'\n\nAfter applying these patches, PostgreSQL CVS builds cleanly under Cygwin\nagain.\n\nThanks,\nJason\n\nP.S. Dave, thanks for the heads up!",
"msg_date": "Thu, 05 Sep 2002 11:19:15 -0400",
"msg_from": "Jason Tishler <jason@tishler.net>",
"msg_from_op": false,
"msg_subject": "Re: [CYGWIN] 7.3 Beta 1 Build Error on Cygwin"
},
{
"msg_contents": "Dave,\n\nOn Thu, Sep 05, 2002 at 12:54:50PM +0100, Dave Page wrote:\n> I get the following error when building beta 1 on CYGWIN_NT-5.1 PC9\n> 1.3.10(0.51/3/2) 2002-02-25 11:14 i686 unknown:\n> \n> make[3]: Entering directory `/usr/local/src/postgresql-7.3b1/src/backend/utils/mb/conversion_procs/cyrillic_and_mic'\n> gcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../../../src/include -I/usr/local/include -DBUILDING_DLL=1 -c -o cyrillic_and_mic.o cyrillic_and_mic.c\n> [snip]\n> dllwrap -o cyrillic_and_mic.dll --dllname cyrillic_and_mic.dll --def cyrillic_and_mic.def cyrillic_and_mic.o ../../../../../../src/utils/dllinit.o -lcygipc -lcrypt -L/usr/local/lib -L../../../../../../src/backend -lpostgres\n> Warning: resolving _CurrentMemoryContext by linking to __imp__CurrentMemoryContext (auto-import)\n> [snip]\n> nmth000000.o(.idata$4+0x0): undefined reference to `_nm__CurrentMemoryContext'\n\nI just submitted a patch to pgsql-patches to fix the above and to add a\ncouple of missing DLLIMPORTs to src/include/miscadmin.h.\n\nFYI, plperl no longer builds cleanly against Cygwin Perl 5.6.1 because\nPostgreSQL no longer uses the Perl extension infrastructure. However,\nupgrading Cygwin Perl to 5.8.0 solves the problem because this version\nuses the conventional name for libperl.a instead of one that has the\nversion embedded in it.\n\nThanks again for the heads up.\n\nJason\n",
"msg_date": "Thu, 05 Sep 2002 11:29:50 -0400",
"msg_from": "Jason Tishler <jason@tishler.net>",
"msg_from_op": false,
"msg_subject": "Re: 7.3 Beta 1 Build Error on Cygwin"
},
{
"msg_contents": "Dave Page writes:\n\n> I get the following error when building beta 1 on CYGWIN_NT-5.1 PC9\n> 1.3.10(0.51/3/2) 2002-02-25 11:14 i686 unknown:\n\nShould all be fixed now.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 5 Sep 2002 20:33:20 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 7.3 Beta 1 Build Error on Cygwin"
},
{
"msg_contents": "\nYour changes have been applied by Peter.\n\n---------------------------------------------------------------------------\n\nJason Tishler wrote:\n> The following build error under Cygwin was recently reported:\n> \n> On Thu, Sep 05, 2002 at 12:54:50PM +0100, Dave Page wrote:\n> > I get the following error when building beta 1 on CYGWIN_NT-5.1 PC9\n> > 1.3.10(0.51/3/2) 2002-02-25 11:14 i686 unknown:\n> > \n> > make[3]: Entering directory `/usr/local/src/postgresql-7.3b1/src/backend/utils/mb/conversion_procs/cyrillic_and_mic'\n> > gcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../../../src/include -I/usr/local/include -DBUILDING_DLL=1 -c -o cyrillic_and_mic.o cyrillic_and_mic.c\n> > [snip]\n> > dllwrap -o cyrillic_and_mic.dll --dllname cyrillic_and_mic.dll --def cyrillic_and_mic.def cyrillic_and_mic.o ../../../../../../src/utils/dllinit.o -lcygipc -lcrypt -L/usr/local/lib -L../../../../../../src/backend -lpostgres\n> > Warning: resolving _CurrentMemoryContext by linking to __imp__CurrentMemoryContext (auto-import)\n> > [snip]\n> > nmth000000.o(.idata$4+0x0): undefined reference to `_nm__CurrentMemoryContext'\n> \n> The first patch fixes the above, the second one fixes the following:\n> \n> make[4]: Entering directory `/home/jt/src/pgsql/src/pl/plpgsql/src'\n> [snip]\n> dllwrap -o plpgsql.dll --dllname plpgsql.dll --def plpgsql.def pl_gram.o pl_scan.o pl_handler.o pl_comp.o pl_exec.o pl_funcs.o ../../../../src/utils/dllinit.o -L../../../../src/backend -lpostgres -lcygipc -lcrypt -L/usr/local/lib\n> Warning: resolving _InterruptPending by linking to __imp__InterruptPending (auto-import)\n> Warning: resolving _SortMem by linking to __imp__SortMem (auto-import)\n> [snip]\n> nmth000000.o(.idata$4+0x0): undefined reference to `_nm__InterruptPending'\n> nmth000002.o(.idata$4+0x0): undefined reference to `_nm__SortMem'\n> \n> After applying these patches, PostgreSQL CVS builds cleanly under Cygwin\n> again.\n> \n> Thanks,\n> Jason\n> 
\n> P.S. Dave, thanks for the heads up!\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 5 Sep 2002 14:43:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [CYGWIN] 7.3 Beta 1 Build Error on Cygwin"
},
{
"msg_contents": "Jason Tishler wrote:\n> Peter,\n> \n> On Thu, Sep 05, 2002 at 08:33:20PM +0200, Peter Eisentraut wrote:\n> > Dave Page writes:\n> > \n> > > I get the following error when building beta 1 on CYGWIN_NT-5.1 PC9\n> > > 1.3.10(0.51/3/2) 2002-02-25 11:14 i686 unknown:\n> > \n> > Should all be fixed now.\n> \n> Huh? I don't see any recent CVS commits to indicate this.\n\nI see as a commit:\n\n Assorted fixes for Cygwin:\n \n Eliminate the mysterious games that the Cygwin build plays with the linker\n flag variables. DLLLIBS is gone, use SHLIB_LINK like everyone else.\n Detect cygipc in configure, after the linker flags are set up, otherwise\n configure might not work at all.\n \n Make sure everything is covered by make clean.\n \n Fix the build of the new conversion procedure modules.\n \n Add new DLLIMPORT markers where required.\n \n Finally, the compiler complains if we use an explicit\n -I/usr/local/include, so don't do that. Curiously, -L/usr/local/lib is\n still necessary.\n\nI assume it was in there.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 5 Sep 2002 14:51:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 7.3 Beta 1 Build Error on Cygwin"
},
{
"msg_contents": "Peter,\n\nOn Thu, Sep 05, 2002 at 08:33:20PM +0200, Peter Eisentraut wrote:\n> Dave Page writes:\n> \n> > I get the following error when building beta 1 on CYGWIN_NT-5.1 PC9\n> > 1.3.10(0.51/3/2) 2002-02-25 11:14 i686 unknown:\n> \n> Should all be fixed now.\n\nHuh? I don't see any recent CVS commits to indicate this.\n\nJason\n",
"msg_date": "Thu, 05 Sep 2002 14:51:33 -0400",
"msg_from": "Jason Tishler <jason@tishler.net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 7.3 Beta 1 Build Error on Cygwin"
},
{
"msg_contents": "Peter,\n\nOn Thu, Sep 05, 2002 at 02:51:31PM -0400, Bruce Momjian wrote:\n> Jason Tishler wrote:\n> > On Thu, Sep 05, 2002 at 08:33:20PM +0200, Peter Eisentraut wrote:\n> > > Should all be fixed now.\n> > \n> > Huh? I don't see any recent CVS commits to indicate this.\n> \n> I see as a commit:\n> \n> [snip]\n> \n> I assume it was in there.\n\nSorry for the noise, but at the time:\n\n cvs status include/miscadmin.h makefiles/Makefile.win\n\ndid *not* indicate any recent commits. Maybe you sent the above email\nbefore you committed your changes?\n\nAnyway, I just tried a:\n\n make distclean\n rm include/miscadmin.h makefiles/Makefile.win # remove my patch\n cvs update\n make\n\nand got the following error:\n\n [snip]\n make[3]: Leaving directory `/home/jt/src/pgsql/src/backend/utils'\n dlltool --dllname postgres.exe --output-exp postgres.exp --def postgres.def\n gcc -L/usr/local/lib -o postgres.exe -Wl,--base-file,postgres.base postgres.exp access/SUBSYS.o bootstrap/SUBSYS.o catalog/SUBSYS.o parser/SUBSYS.o commands/SUBSYS.o executor/SUBSYS.o lib/SUBSYS.o libpq/SUBSYS.o main/SUBSYS.o nodes/SUBSYS.o optimizer/SUBSYS.o port/SUBSYS.o postmaster/SUBSYS.o regex/SUBSYS.o rewrite/SUBSYS.o storage/SUBSYS.o tcop/SUBSYS.o utils/SUBSYS.o \n libpq/SUBSYS.o(.text+0x1c84):crypt.c: undefined reference to `crypt'\n port/SUBSYS.o(.text+0x262):pg_sema.c: undefined reference to `semget'\n [snip]\n\nI can get postgres.exe to successfully link by manually appending\n\"-lcrypt -lcygipc\" to the end of the above gcc command line.\n\nSince you are already working on this, would you be willing to fix this\nproblem?\n\nThanks,\nJason\n",
"msg_date": "Thu, 05 Sep 2002 15:37:57 -0400",
"msg_from": "Jason Tishler <jason@tishler.net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 7.3 Beta 1 Build Error on Cygwin"
}
] |
[
{
"msg_contents": "I'm suspecting that something blocks mail from my home computer\n\nThis is sent to test if it is so.\n\n\n\n",
"msg_date": "05 Sep 2002 20:10:09 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "test, please ignore"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Jason Tishler [mailto:jason@tishler.net] \n> Sent: 05 September 2002 16:30\n> To: Dave Page\n> Cc: pgsql-hackers; pgsql-cygwin\n> Subject: Re: [CYGWIN] 7.3 Beta 1 Build Error on Cygwin\n> \n> \n> I just submitted a patch to pgsql-patches to fix the above \n> and to add a couple of missing DLLIMPORTs to src/include/miscadmin.h.\n\nYup, saw that, thanks.\n\n> FYI, plperl no longer builds cleanly against Cygwin Perl \n> 5.6.1 because PostgreSQL no longer uses the Perl extension \n> infrastructure. However, upgrading Cygwin Perl to 5.8.0 \n> solves the problem because this version uses the conventional \n> name for libperl.a instead of one that has the version embedded in it.\n\nI'll bear that in mind, though I don't normally use Perl.\n\n> Thanks again for the heads up.\n\nYou're welcome, just trying to get a headstart on the testing of pgAdmin\nfor 7.3 and the 7.3 regression testing...\n\n> Jason\n> \n",
"msg_date": "Thu, 5 Sep 2002 16:30:00 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: 7.3 Beta 1 Build Error on Cygwin"
}
] |
[
{
"msg_contents": "\nOleg,\n\nThe Porter stemming stems herring and herrings to her, which is a bit\nunfortunate. A quick fix is to put 'herring/herrings' in the exception list\nin the english (porter2) stemmer, but I'll look at this case over the next\nfew days and see if I can come up with something a bit better.\n\nInteresting that no one has reported this before.\n\nMartin\n\n\n",
"msg_date": "Thu, 05 Sep 2002 10:12:03 -0600",
"msg_from": "martin_porter@softhome.net (Martin Porter)",
"msg_from_op": true,
"msg_subject": "Re: contrib/tsearch"
},
{
"msg_contents": "On Thu, 5 Sep 2002, Martin Porter wrote:\n\n>\n> Oleg,\n>\n> The Porter stemming stems herring and herrings to her, which is a bit\n> unfortunate. A quick fix is to put 'herring/herrings' in the exception list\n> in the english (porter2) stemmer, but I'll look at this case over the next\n> few days and see if I can come up with something a bit better.\n\nUnfrtunately, we wrote tsearch module before the Snowball project has started,\nso we used one implementation we found in the net (www.muscat.com) and\nthere is no exception list. OpenFTS uses snowball stemming, so we'd like\nto have a fix. I think we have enough arguments to use snowball stemmers\nin tsearch also.\n\n>\n> Interesting that no one has reported this before.\n\n:-) Thanks Cristopher for his persistence.\n\n>\n> Martin\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 5 Sep 2002 21:10:12 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: contrib/tsearch"
}
] |
[
{
"msg_contents": "In the process of upgrading a few systems for the Beta, I ended up\nwriting a tool to upgrade the Foreign key, Unique, and Serial objects to\ntheir 7.3 version from the 7.2 version (may work on prior -- but not\nguarenteed).\n\nI imagine it'll fail miserably on mixed case, or names with spaces --\nbut oh well.\n\nAnyway, goes through step by step asking the user if they wish to\nupgrade each element. Lightly tested\n\nhttp://www.rbt.ca/postgresql/upgrade.shtml\n\nIt assumes you've already dumped / upgraded / restored to 7.3 before\nrunning the script.\n\nKinda slow, but safe to run more than once.\n\n",
"msg_date": "05 Sep 2002 12:38:08 -0400",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Ok, I broke down..."
},
{
"msg_contents": "Whoot! I was just thinking about writing such a tool. Thanks.\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Rod Taylor\n> Sent: Friday, 6 September 2002 12:38 AM\n> To: PostgreSQL-development\n> Subject: [HACKERS] Ok, I broke down...\n> \n> \n> In the process of upgrading a few systems for the Beta, I ended up\n> writing a tool to upgrade the Foreign key, Unique, and Serial objects to\n> their 7.3 version from the 7.2 version (may work on prior -- but not\n> guarenteed).\n> \n> I imagine it'll fail miserably on mixed case, or names with spaces --\n> but oh well.\n> \n> Anyway, goes through step by step asking the user if they wish to\n> upgrade each element. Lightly tested\n> \n> http://www.rbt.ca/postgresql/upgrade.shtml\n> \n> It assumes you've already dumped / upgraded / restored to 7.3 before\n> running the script.\n> \n> Kinda slow, but safe to run more than once.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n",
"msg_date": "Fri, 6 Sep 2002 09:37:57 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Ok, I broke down..."
},
{
"msg_contents": "Feel free to add to it. It's creation was primarily to get Autodoc to\nwork with my old 7.2 structures.\n\n- Needs to handle mixed case / quoted element names (UNIQUE keys\nespecially).\n- Deferred foreign key constraints.\n\nI use neither, so I'm not overly worried about either unless I get a\nbunch of requests.\n\nOn Thu, 2002-09-05 at 21:37, Christopher Kings-Lynne wrote:\n> Whoot! I was just thinking about writing such a tool. Thanks.\n> \n> Chris\n> \n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Rod Taylor\n> > Sent: Friday, 6 September 2002 12:38 AM\n> > To: PostgreSQL-development\n> > Subject: [HACKERS] Ok, I broke down...\n> > \n> > \n> > In the process of upgrading a few systems for the Beta, I ended up\n> > writing a tool to upgrade the Foreign key, Unique, and Serial objects to\n> > their 7.3 version from the 7.2 version (may work on prior -- but not\n> > guarenteed).\n> > \n> > I imagine it'll fail miserably on mixed case, or names with spaces --\n> > but oh well.\n> > \n> > Anyway, goes through step by step asking the user if they wish to\n> > upgrade each element. Lightly tested\n> > \n> > http://www.rbt.ca/postgresql/upgrade.shtml\n> > \n> > It assumes you've already dumped / upgraded / restored to 7.3 before\n> > running the script.\n> > \n> > Kinda slow, but safe to run more than once.\n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> > \n> > http://archives.postgresql.org\n> > \n> \n\n\n",
"msg_date": "05 Sep 2002 22:55:13 -0400",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Re: Ok, I broke down..."
}
] |
[
{
"msg_contents": "Just in time for 7.3 beta 1 :\n\nhttp://dsc.discovery.com/news/briefs/20020902/elephant.html\n\n-----------\nHannu\n\n\n",
"msg_date": "05 Sep 2002 22:27:52 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "postgres crowd may find this interesting ;)"
}
] |
[
{
"msg_contents": "The following happens in latest CVS and a fresh database:\n\ncreate table test (a int);\ninsert into test values (1);\nalter table test add column b text check (b <> '');\nalter table test add check (a > 0);\nalter table test add check (a <> 1);\n\nAfter the last command I get\n\nERROR: CheckConstraintFetch: unexpected record found for rel test\n\nand then the table seems to be wedged because any access to it will get\nthe same error.\n\nAlso, psql seems to forget about check constraints in peculiar ways:\n\ncreate table test (a int);\ninsert into test values (1);\nalter table test add column b text check (b <> '');\n\\d test\nalter table test add check (a > 0);\n\\d test\n\nThe first shows:\n\n Table \"public.test\"\n Spalte | Typ | Attribute\n--------+---------+-----------\n a | integer |\n b | text |\n\nThe second shows:\n\n Table \"public.test\"\n Spalte | Typ | Attribute\n--------+---------+-----------\n a | integer |\n b | text |\nCheck-Constraints: ᅵtest_bᅵ (b <> ''::text)\n ᅵ$1ᅵ (a > 0)\n\nNote the first one doesn't show any constraints.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 5 Sep 2002 23:37:47 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Add check constraint bug"
},
{
"msg_contents": "\nOn Thu, 5 Sep 2002, Peter Eisentraut wrote:\n\n> The following happens in latest CVS and a fresh database:\n>\n> create table test (a int);\n> insert into test values (1);\n> alter table test add column b text check (b <> '');\n> alter table test add check (a > 0);\n> alter table test add check (a <> 1);\n>\n> After the last command I get\n>\n> ERROR: CheckConstraintFetch: unexpected record found for rel test\n>\n> and then the table seems to be wedged because any access to it will get\n> the same error.\n\n\nI don't have reasonable access to the machine at home for code purposes,\nbut it looks to me that the add column line is the one that's causing\nthe bug. It's inserting a check constraint but not upping relchecks\nwhich seems to work because it's zero and therefore doesn't even look, but\nthe add check is incrementing the count and inserting its constraint which\nmakes 2 real constraints and relchecks=1 which causes the error.\n\nThis is probably also why it forgets about the check constraint below\nsince relchecks is 0, but I didn't look.\n\nNote that:\ncreate table test(a int check (a>3));\nalter table test add column b text check(b<>'');\nselect * from test;\n\nwill error.\n\n\n",
"msg_date": "Thu, 5 Sep 2002 15:49:40 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Add check constraint bug"
},
{
"msg_contents": "On Thu, 5 Sep 2002, Stephan Szabo wrote:\n\n> \n> On Thu, 5 Sep 2002, Peter Eisentraut wrote:\n> \n> > The following happens in latest CVS and a fresh database:\n> >\n> > create table test (a int);\n> > insert into test values (1);\n> > alter table test add column b text check (b <> '');\n> > alter table test add check (a > 0);\n> > alter table test add check (a <> 1);\n> >\n> > After the last command I get\n> >\n> > ERROR: CheckConstraintFetch: unexpected record found for rel test\n> >\n> > and then the table seems to be wedged because any access to it will get\n> > the same error.\n\nJust fyi, 7.2.1 does this too.\n\n",
"msg_date": "Thu, 5 Sep 2002 16:57:32 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Re: Add check constraint bug"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> The following happens in latest CVS and a fresh database:\n> create table test (a int);\n> insert into test values (1);\n> alter table test add column b text check (b <> '');\n\nThis bug's been there awhile I fear. The failure occurs when\nAlterTableAddColumn needs to add a check constraint AND the\nnew column causes AlterTableCreateToastTable to do its thing.\n\nThe reason there is a bug is that AlterTableCreateToastTable\ngratuitously does a heap_mark4update, thereby selecting the un-updated\nversion of the pg_class tuple as its basis for modification (and\nignoring the HeapTupleSelfUpdated return code that warned that there\nwas a problem).\n\nI've said before that I do not like heap_mark4update in catalog\nmanipulations, and here's a perfect example of why it's a bad idea.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 05 Sep 2002 19:52:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add check constraint bug "
},
{
"msg_contents": "\nIs there a TODO here?\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > The following happens in latest CVS and a fresh database:\n> > create table test (a int);\n> > insert into test values (1);\n> > alter table test add column b text check (b <> '');\n> \n> This bug's been there awhile I fear. The failure occurs when\n> AlterTableAddColumn needs to add a check constraint AND the\n> new column causes AlterTableCreateToastTable to do its thing.\n> \n> The reason there is a bug is that AlterTableCreateToastTable\n> gratuitously does a heap_mark4update, thereby selecting the un-updated\n> version of the pg_class tuple as its basis for modification (and\n> ignoring the HeapTupleSelfUpdated return code that warned that there\n> was a problem).\n> \n> I've said before that I do not like heap_mark4update in catalog\n> manipulations, and here's a perfect example of why it's a bad idea.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 5 Sep 2002 20:44:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add check constraint bug"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Is there a TODO here?\n\nI've committed a fix for the immediate problem. I want to take a very\nhard look at the other heap_mark4update calls, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 05 Sep 2002 21:22:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add check constraint bug "
}
] |
[
{
"msg_contents": "I have removed PGPASSWORDFILE in CVS and therefore in beta2.\n\nIt was decided that $HOME/.pgpass should always be tested, rather than\nhave an environment variable for it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 5 Sep 2002 18:07:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Removal of PGPASSWORDFILE in beta"
}
] |
[
{
"msg_contents": "OK,\n\nI note that the regression tests for the following contribs are failing:\n\ncube\nintarray\nseg\n\nChris\n\n",
"msg_date": "Fri, 6 Sep 2002 10:32:24 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Contrib installcheck problems"
}
] |
[
{
"msg_contents": "I haven't see the beta announcement on the announce list. Do we\nannounce it there?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 5 Sep 2002 22:42:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "7.3 beta announcement"
},
{
"msg_contents": "On Thu, 5 Sep 2002, Bruce Momjian wrote:\n\n> I haven't see the beta announcement on the announce list. Do we\n> announce it there?\n\nI've been expecting it but haven't seen it yet.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 5 Sep 2002 22:52:48 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: 7.3 beta announcement"
}
] |
[
{
"msg_contents": "OK,\n\nThe argument about using ALTER TABLE/ADD FOREIGN KEY in dumps was that it\ncaused an actual check of the data in the table, right? This was going to\nbe much slower than using CREATE CONSTRAINT TRIGGER.\n\nSo, why can't we do this in the SQL that pg_dump creates (TODO):\n\nCREATE TABLE ...\nALTER TABLE/ADD FOREIGN KEY ...\nupdate catalogs and disable triggers that the ADD FOREIGN KEY just created\n...\nCOPY .. FROM ...\n\\.\nupdate catalogs and enable triggers\n\nDoesn't this give us the best of both worlds? ie. Keeps dependencies but\ndoes fast COPYing?\n\nAlso, I think a new super-user (or owner) only SQL command would be nice\n(TODO):\n\nALTER TABLE foo {DISABLE|ENABLE} TRIGGER { ALL | trigger_name [ ,... ] };\n\nThis is like MSSQL syntax (IIRC):\n\nhttp://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_\naa-az_3ied.asp\nSpecifies that trigger_name is enabled or disabled. When a trigger is\ndisabled it is still defined for the table; however, when INSERT, UPDATE, or\nDELETE statements are executed against the table, the actions in the trigger\nare not performed until the trigger is re-enabled.\n\n\nIt would certainly tidy up the dumps a bit...\n\nChris\n\n",
"msg_date": "Fri, 6 Sep 2002 13:19:44 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Foreign keys in pg_dump"
},
{
"msg_contents": "On Fri, 2002-09-06 at 01:19, Christopher Kings-Lynne wrote:\n> OK,\n> \n> The argument about using ALTER TABLE/ADD FOREIGN KEY in dumps was that it\n> caused an actual check of the data in the table, right? This was going to\n> be much slower than using CREATE CONSTRAINT TRIGGER.\n> \n> So, why can't we do this in the SQL that pg_dump creates (TODO):\n> \n> CREATE TABLE ...\n> ALTER TABLE/ADD FOREIGN KEY ...\n> update catalogs and disable triggers that the ADD FOREIGN KEY just created\n> ...\n> COPY .. FROM ...\n> \\.\n> update catalogs and enable triggers\n\nThe problem with this is you may enable a trigger that was disabled by\nthe user. It cannot be done to all triggers. We could figure out which\ntriggers were created for the foreign key via pg_depend, then re-enable\nonly those.\n\nIf we did most of this in a single transaction it should be fairly safe.\n\n> Doesn't this give us the best of both worlds? ie. Keeps dependencies but\n> does fast COPYing?\n> \n> Also, I think a new super-user (or owner) only SQL command would be nice\n> (TODO):\n> \n> ALTER TABLE foo {DISABLE|ENABLE} TRIGGER { ALL | trigger_name [ ,... ] };\n\npg_dump shouldn't need to know that a trigger is involved for foreign\nkeys. A SET CONSTRAINTS DISABLED would be more appropriate in a binary\nmode dump -- but I firmly believe that text mode dumps should run full\nchecks on the data to ensure the user didn't muck with it.\n\n\n\n",
"msg_date": "06 Sep 2002 09:34:21 -0400",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Foreign keys in pg_dump"
}
] |
[
{
"msg_contents": "Dear all,\nI want to make library for visual basic to connect to\nPostgreSQL, but I have problem to get libpq.dll source\ncode. Can somebody help me ?\n(Sorry for bad english :))\n\nBest Regards,\nAchmad Amin\n\n__________________________________________________\nDo You Yahoo!?\nYahoo! Finance - Get real-time stock quotes\nhttp://finance.yahoo.com\n",
"msg_date": "Thu, 5 Sep 2002 22:54:41 -0700 (PDT)",
"msg_from": "Achmad Amin <ma_achmad@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Libpq.dll Souce Code"
},
{
"msg_contents": "* Achmad Amin <ma_achmad@yahoo.com> [2002-09-05 22:54 -0700]:\n> Dear all,\n> I want to make library for visual basic to connect to\n> PostgreSQL, but I have problem to get libpq.dll source\n> code. Can somebody help me ?\n\nDownload a PostgreSQL source distribution. The libpq sources are in\nsrc/interfaces/libpq. The PostgreSQL documentation explains how to\ncompile it on Windows using Vi$ual C++. If you don't have it, you can\nfind Makefiles for building libpq with gcc (either mingw or Cygwin\nflavour) at my homepage: http://www.cs.fhm.edu/~ifw00065/ \n\nIn the future, please ask support questions on pgsql-general, not here.\nThe correct list for discussion of libpq is pgsql-interfaces.\n\n-- Gerhard\n",
"msg_date": "Fri, 6 Sep 2002 08:32:56 +0200",
"msg_from": "Gerhard =?iso-8859-1?Q?H=E4ring?= <haering_postgresql@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: Libpq.dll Souce Code"
}
] |
[
{
"msg_contents": "Hello!\n\nSome time ago I've got troubles with performance of my PG.\nAfter investigation I had found that the most probable reason was the big\nnumber of \"unused\" pages. Below follows what VACUUM reported:\n\n=======================\nvacuum verbose goods;\nNOTICE: --Relation goods--\nNOTICE: Pages 15068: Changed 0, Empty 0; Tup 16157: Vac 0, Keep 0, UnUsed 465938.\n=======================\nselect count(*) from goods;\n count\n-------\n 16157\n=======================\n\nThe same schema with the almost identical number of rows gives completely\ndifferent result on another table:\n=======================\nvacuum verbose goods;\nNOTICE: --Relation goods--\nNOTICE: Pages 912: Changed 0, Empty 0; Tup 11209: Vac 0, Keep 0, UnUsed\n19778.\n=======================\nselect count(*) from goods;\n count\n-------\n 11209\n=======================\n\nTwo questions:\n\n1) Where to seek the real source of the enormous big number of unused\npages?\n\n2) How to shrink the table (i.e. how can I get rid those unused pages)?\n\nPG: was 7.2.1, now 7.2.2.\n\n-- \nWBR, Yury Bokhoncovich, Senior System Administrator, NOC of F1 Group.\nPhone: +7 (3832) 106228, ext.140, E-mail: byg@center-f1.ru.\nUnix is like a wigwam -- no Gates, no Windows, and an Apache inside.\n\n\n",
"msg_date": "Fri, 6 Sep 2002 13:04:14 +0700 (NOVST)",
"msg_from": "Yury Bokhoncovich <byg@center-f1.ru>",
"msg_from_op": true,
"msg_subject": "Big number of \"unused\" pages as reported by VACUUM"
},
{
"msg_contents": "Hi Yury,\n\nThis question should not be posted to -patches, changed accordingly.\n\nWhat happens if you go 'VACUUM VERBOSE FULL goods;'?\n\nYour on-disk files won't shrink or have unused tuples removed unless you\nVACUUM FULL. The problem with doing VACUUM FULL is that it totally locks\nthe whole table while it's running, meaning no-one can use the table. This\nis bad in production environments, so it's not the default. Bear in mind\nthat postgres will re-use the unused portion of the table as you add new\ntuples...\n\nChris\n\n> Some time ago I've got troubles with performance of my PG.\n> After investigation I had found that the most probable reason was the big\n> number of \"unused\" pages. Below follows what VACUUM reported:\n>\n> =======================\n> vacuum verbose goods;\n> NOTICE: --Relation goods--\n> NOTICE: Pages 15068: Changed 0, Empty 0; Tup 16157: Vac 0, Keep\n> 0, UnUsed 465938.\n> =======================\n> select count(*) from goods;\n> count\n> -------\n> 16157\n\n",
"msg_date": "Fri, 6 Sep 2002 14:29:19 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Big number of \"unused\" pages as reported by VACUUM"
},
{
"msg_contents": "Hello!\n\nOn Fri, 6 Sep 2002, Christopher Kings-Lynne wrote:\n\n> This question should not be posted to -patches, changed accordingly.\n>\n> What happens if you go 'VACUUM VERBOSE FULL goods;'?\n\nOh, big thanx!\nBut 'VACUUM VERBOSE FULL goods;' didn't work, only 'VACUUM FULL VERBOSE\ngoods;' did.:)\n\nI make a guess I've got this due to parallel running of a program making\nbulk INSERTs/UPDATEs into that table. Mmm...I need a way to avoid the big\nnumber of unused pages in such a case. LOCK TABLE?\n\n>\n> Your on-disk files won't shrink or have unused tuples removed unless you\n> VACUUM FULL. The problem with doing VACUUM FULL is that it totally locks\n> the whole table while it's running, meaning no-one can use the table. This\n\nThis can't scare people whom had dealt with 6.x.;)\nOnly if \"We scare because we care\"...=)\n\n> is bad in production environments, so it's not the default. Bear in mind\n> that postgres will re-use the unused portion of the table as you add new\n> tuples...\n\nYes, as an ole MUMPSter I did catch this very well some times ago.=)\n\n>\n> Chris\n>\n> > Some time ago I've got troubles with performance of my PG.\n> > After investigation I had found that the most probable reason was the big\n> > number of \"unused\" pages. Below follows what VACUUM reported:\n> >\n> > =======================\n> > vacuum verbose goods;\n> > NOTICE: --Relation goods--\n> > NOTICE: Pages 15068: Changed 0, Empty 0; Tup 16157: Vac 0, Keep\n> > 0, UnUsed 465938.\n> > =======================\n> > select count(*) from goods;\n> > count\n> > -------\n> > 16157\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\nYep! 
Suggest to add this as well as that typical mistake with\nLANGUAGE/HANDLER (plpgsql.so I mean).:-)\n\n-- \nWBR, Yury Bokhoncovich, Senior System Administrator, NOC of F1 Group.\nPhone: +7 (3832) 106228, ext.140, E-mail: byg@center-f1.ru.\nUnix is like a wigwam -- no Gates, no Windows, and an Apache inside.\n\n\n",
"msg_date": "Fri, 6 Sep 2002 13:56:52 +0700 (NOVST)",
"msg_from": "Yury Bokhoncovich <byg@center-f1.ru>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Big number of \"unused\" pages as reported by"
},
{
"msg_contents": "> I make a guess I've got this due to parallel running of a program making\n> bulk INSERTs/UPDATEs into that table. Mmm...I need a way to avoid the big\n> number of unused pages in such a case. LOCK TABLE?\n\nWell, I suggest doing a normal vacuum analyze ('VACUUM ANALYZE goods') after\nevery bulk insert/update. This will go through the table and mark all new\noutdated tuples as re-usable. That way, when you do your next bulk\ninsert/update it will be able to reuse the unused tuples. Give that a\ntry...\n\nChris\n\n",
"msg_date": "Fri, 6 Sep 2002 15:20:43 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Big number of \"unused\" pages as reported by VACUUM"
},
{
"msg_contents": "\nI think you want to use VACUUM FULL to actually shrink the table. In\n7.2.X, VACUUM only records free space for later reuse.\n\n---------------------------------------------------------------------------\n\nYury Bokhoncovich wrote:\n> Hello!\n> \n> Some time ago I've got troubles with performance of my PG.\n> After investigation I had found that the most probable reason was the big\n> number of \"unused\" pages. Below follows what VACUUM reported:\n> \n> =======================\n> vacuum verbose goods;\n> NOTICE: --Relation goods--\n> NOTICE: Pages 15068: Changed 0, Empty 0; Tup 16157: Vac 0, Keep 0, UnUsed 465938.\n> =======================\n> select count(*) from goods;\n> count\n> -------\n> 16157\n> =======================\n> \n> The same schema with the almost identical number of rows gives completely\n> different result on another table:\n> =======================\n> vacuum verbose goods;\n> NOTICE: --Relation goods--\n> NOTICE: Pages 912: Changed 0, Empty 0; Tup 11209: Vac 0, Keep 0, UnUsed\n> 19778.\n> =======================\n> select count(*) from goods;\n> count\n> -------\n> 11209\n> =======================\n> \n> Two questions:\n> \n> 1) Where to seek the real source of the enormous big number of unused\n> pages?\n> \n> 2) How to shrink the table (i.e. 
how can I get rid those unused pages)?\n> \n> PG: was 7.2.1, now 7.2.2.\n> \n> -- \n> WBR, Yury Bokhoncovich, Senior System Administrator, NOC of F1 Group.\n> Phone: +7 (3832) 106228, ext.140, E-mail: byg@center-f1.ru.\n> Unix is like a wigwam -- no Gates, no Windows, and an Apache inside.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 6 Sep 2002 09:56:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Big number of \"unused\" pages as reported by VACUUM"
}
] |
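The VACUUM VERBOSE numbers quoted in this thread can be turned into a rough bloat estimate. The sketch below assumes PostgreSQL's default 8 KB page size (`BLCKSZ = 8192`); it only does the arithmetic on the figures Yury posted (15068 pages / 16157 live rows versus 912 pages / 11209 rows), which is why Bruce's advice applies — in 7.2, plain `VACUUM` only records the free space, while `VACUUM FULL` actually returns it:

```python
# Rough bloat estimate from the VACUUM VERBOSE output quoted above.
# Assumes the default 8 KB PostgreSQL page size (BLCKSZ = 8192).
PAGE_SIZE = 8192

def table_stats(pages, live_tuples):
    """Return (on-disk size in MB, average live tuples per page)."""
    size_mb = pages * PAGE_SIZE / (1024 * 1024)
    return size_mb, live_tuples / pages

# Bloated table: 15068 pages holding only 16157 live rows.
bloated_mb, bloated_density = table_stats(15068, 16157)

# Healthy table with a comparable row count: 912 pages for 11209 rows.
healthy_mb, healthy_density = table_stats(912, 11209)

print(f"bloated: {bloated_mb:.1f} MB, {bloated_density:.1f} live rows/page")
print(f"healthy: {healthy_mb:.1f} MB, {healthy_density:.1f} live rows/page")
```

Roughly one live row per page versus twelve on the healthy table: about 92% of the first table is reusable-but-unreclaimed space, which matches the 465938 "UnUsed" tuple slots VACUUM reported.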
[
{
"msg_contents": "I have some sql to define some functions for doing conversions between\ncube and latitude and longitude (as float8) and for calculating\ngreat circle distances between cubes (using a spherical model of the earth).\nI am not sure the code is suitable for contrib.\nThe code picks a radius of the earth in meters. Other people may choose to\nuse different units or even use a different radius in meters.\nI have grants in the code to make the cube functions and the functions\ndefined by the script as execute for public. (The cube stuff needs to be\ndone as postgres since a type is created, but then the functions aren't\ngenerally accessible by default.)\nThe script is about 5K.\nSome people might find this useful as there are some advantages to keeping\ntrack of locations on the earth using cube (with 3D coordinates) as opposed\nto using point (with 2D coordinates).\n",
"msg_date": "Fri, 6 Sep 2002 02:09:21 -0500",
"msg_from": "Bruno Wolff III <bruno@wolff.to>",
"msg_from_op": true,
"msg_subject": "Making small bits of code available"
},
{
"msg_contents": "\n/contrib/earthdistance already exists. Is this new functionality?\n\n---------------------------------------------------------------------------\n\nBruno Wolff III wrote:\n> I have some sql to define some functions for doing conversions between\n> cube and latitude and longitude (as float8) and for calculating\n> great circle distances between cubes (using a spherical model of the earth).\n> I am not sure the code is suitable for contrib.\n> The code picks a radius of the earth in meters. Other people may choose to\n> use different units or even use a different radius in meters.\n> I have grants in the code to make the cube functions and the functions\n> defined by the script as execute for public. (The cube stuff needs to be\n> done as postgres since a type is created, but then the functions aren't\n> generally accessible by default.)\n> The script is about 5K.\n> Some people might find this useful as there are some advantages to keeping\n> track of locations on the earth using cube (with 3D coordinates) as opposed\n> to using point (with 2D coordinates).\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 6 Sep 2002 09:58:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Making small bits of code available"
},
{
"msg_contents": "On Fri, Sep 06, 2002 at 09:58:00 -0400,\n Bruce Momjian <pgman@candle.pha.pa.us> wrote:\n> \n> /contrib/earthdistance already exists. Is this new functionality?\n\nThis works with cube instead of point. If you use point hold latitude and\nlongitude you have to worry about whether you will have data near 180\ndegrees of longitude or near the poles. This may not be a problem if\nyour data is mostly on one continent.\n\nThe script I have is most grant calls for the cube functions. Since cube\nneeds to be installed as postgres (or other super user), most likely\nyou want to grant execute to public on the provided functions. (I don't\nknow if you need to do this for ones just used be the gist stuff.)\n\nThe stuff people might want to see are a few sql functions for getting\nto and from latitude and longitude and cube (as domain earth) and some\nfunctions related to getting the size of boxes to use for searching for\npoints within a great circle distance of a specified point.\n\nIf 5K isn't too much I could post it to the list and it will get archived\nand people that are interested can find it with google and can take what they\nwant from the code.\n\nThis stuff isn't packaged up neatly for a contrib with a regression test\nand all. Probably people who use this will want to tinker with it before\nusing it themselves.\n\nThe function prototypes extracted from the file are:\ncreate function earth() returns float8 language 'sql' immutable as\ncreate function sec_to_gc(float8) returns float8 language 'sql'\ncreate function gc_to_sec(float8) returns float8 language 'sql'\ncreate function ll_to_earth(float8, float8) returns earth language 'sql'\ncreate function latitude(earth) returns float8 language 'sql'\ncreate function longitude(earth) returns float8 language 'sql'\ncreate function earth_distance(earth, earth) returns float8 language 'sql'\ncreate function earth_box(earth, float8) returns cube language 'sql'\n",
"msg_date": "Sat, 7 Sep 2002 07:01:01 -0500",
"msg_from": "Bruno Wolff III <bruno@wolff.to>",
"msg_from_op": true,
"msg_subject": "Re: Making small bits of code available"
},
{
"msg_contents": "\nWhat would be really valuable would be to add your routines to\n/contrib/earthdistance. Is that possible?\n\n---------------------------------------------------------------------------\n\nBruno Wolff III wrote:\n> On Fri, Sep 06, 2002 at 09:58:00 -0400,\n> Bruce Momjian <pgman@candle.pha.pa.us> wrote:\n> > \n> > /contrib/earthdistance already exists. Is this new functionality?\n> \n> This works with cube instead of point. If you use point hold latitude and\n> longitude you have to worry about whether you will have data near 180\n> degrees of longitude or near the poles. This may not be a problem if\n> your data is mostly on one continent.\n> \n> The script I have is most grant calls for the cube functions. Since cube\n> needs to be installed as postgres (or other super user), most likely\n> you want to grant execute to public on the provided functions. (I don't\n> know if you need to do this for ones just used be the gist stuff.)\n> \n> The stuff people might want to see are a few sql functions for getting\n> to and from latitude and longitude and cube (as domain earth) and some\n> functions related to getting the size of boxes to use for searching for\n> points within a great circle distance of a specified point.\n> \n> If 5K isn't too much I could post it to the list and it will get archived\n> and people that are interested can find it with google and can take what they\n> want from the code.\n> \n> This stuff isn't packaged up neatly for a contrib with a regression test\n> and all. 
Probably people who use this will want to tinker with it before\n> using it themselves.\n> \n> The function prototypes extracted from the file are:\n> create function earth() returns float8 language 'sql' immutable as\n> create function sec_to_gc(float8) returns float8 language 'sql'\n> create function gc_to_sec(float8) returns float8 language 'sql'\n> create function ll_to_earth(float8, float8) returns earth language 'sql'\n> create function latitude(earth) returns float8 language 'sql'\n> create function longitude(earth) returns float8 language 'sql'\n> create function earth_distance(earth, earth) returns float8 language 'sql'\n> create function earth_box(earth, float8) returns cube language 'sql'\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 7 Sep 2002 10:05:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Making small bits of code available"
},
{
"msg_contents": "On Sat, Sep 07, 2002 at 10:05:14 -0400,\n Bruce Momjian <pgman@candle.pha.pa.us> wrote:\n> \n> What would be really valuable would be to add your routines to\n> /contrib/earthdistance. Is that possible?\n\nYes.\n\nRight now the script contains:\n\nSome leading comments\n\ngrant execute to public commands for each function in contrib/cube\n\nA definition of the earth domain along with comments about what check\nconstraints should be used (until domains support check constraints)\n\nFor each new function there is a comment about it, a definition (using\nlanguage 'sql') and a grant execute to public\n\nThere is currently no regression test.\n\nNow to the questions.\n\nWere the function names (earth, sec_to_gc, gc_to_sec, ll_to_earth, latitude,\nlongitude, earth_distance, and earth_box) acceptable?\n\nShould I make a separate regression test file or add it on to the existing\none for earth_distance?\n\nShould I make a separate README file or just add stuff to the end of the\nexisting REAMDE file?\n\nShould I leave the grants in, leave that to the administrator or provide\na separate script?\n\nShould the creation of these functions be added to the existing script\nfor earth_distance or should it be a separate script? It seems unlikely\nthat someone would be using both of these at the same time, since one\nis based on the point type and the other on the cube type. However the\noverhead of installing both seems small, so maybe making it easier to\ntry both and then pick one is worthwhile.\n\nAnother option would be to go back to the contrib/cube install script\nand and grants for the functions there. And then just to a grant for\nthe old geo_distance function in earthdistance (since that is the only\n'C' function)? 
I didn't do that previously because the previous contrib/cube\ndidn't, but of course, functions didn't have an execute privilege previously.\nIf I do that, do I have to grant public access to internal functions\n(used for the gist index) or can I just make the ones meant for users\nto access directly public?\n",
"msg_date": "Sat, 7 Sep 2002 11:20:40 -0500",
"msg_from": "Bruno Wolff III <bruno@wolff.to>",
"msg_from_op": true,
"msg_subject": "Re: Making small bits of code available"
},
{
"msg_contents": "Bruno Wolff III wrote:\n> On Sat, Sep 07, 2002 at 10:05:14 -0400,\n> Bruce Momjian <pgman@candle.pha.pa.us> wrote:\n> > \n> > What would be really valuable would be to add your routines to\n> > /contrib/earthdistance. Is that possible?\n> \n> Yes.\n> \n> Right now the script contains:\n> \n> Some leading comments\n> \n> grant execute to public commands for each function in contrib/cube\n> \n> A definition of the earth domain along with comments about what check\n> constraints should be used (until domains support check constraints)\n> \n> For each new function there is a comment about it, a definition (using\n> language 'sql') and a grant execute to public\n> \n> There is currently no regression test.\n> \n> Now to the questions.\n> \n> Were the function names (earth, sec_to_gc, gc_to_sec, ll_to_earth, latitude,\n> longitude, earth_distance, and earth_box) acceptable?\n\nSure.\n\n> Should I make a separate regression test file or add it on to the existing\n> one for earth_distance?\n\nNo, just add. If someone wants earth measurements, it should all be in\none place.\n\n> Should I make a separate README file or just add stuff to the end of the\n> existing REAMDE file?\n\nJust add.\n\n> Should I leave the grants in, leave that to the administrator or provide\n> a separate script?\n\nI would not add the grants.\n\n> Should the creation of these functions be added to the existing script\n> for earth_distance or should it be a separate script? It seems unlikely\n> that someone would be using both of these at the same time, since one\n> is based on the point type and the other on the cube type. However the\n> overhead of installing both seems small, so maybe making it easier to\n> try both and then pick one is worthwhile.\n\n\nInstall them both. 
Just make sure it is clear which is which, or are\nyours superior and the old one should be removed?\n\n> Another option would be to go back to the contrib/cube install script\n> and and grants for the functions there. And then just to a grant for\n> the old geo_distance function in earthdistance (since that is the only\n> 'C' function)? I didn't do that previously because the previous contrib/cube\n> didn't, but of course, functions didn't have an execute privilege previously.\n> If I do that, do I have to grant public access to internal functions\n> (used for the gist index) or can I just make the ones meant for users\n> to access directly public?\n\nNot sure. I don't think we want to public permit this stuff unless the\nadmin asks for it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 7 Sep 2002 12:52:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Making small bits of code available"
},
{
"msg_contents": "Bruno Wolff III wrote:\n> On Sat, Sep 07, 2002 at 12:52:06 -0400,\n> Bruce Momjian <pgman@candle.pha.pa.us> wrote:\n> > \n> > > Should the creation of these functions be added to the existing script\n> > > for earth_distance or should it be a separate script? It seems unlikely\n> > > that someone would be using both of these at the same time, since one\n> > > is based on the point type and the other on the cube type. However the\n> > > overhead of installing both seems small, so maybe making it easier to\n> > > try both and then pick one is worthwhile.\n> > \n> > \n> > Install them both. Just make sure it is clear which is which, or are\n> > yours superior and the old one should be removed?\n> \n> They are different and someone could want either.\n\n[ CC changed to hackers.]\n\n> \n> I forgot to ask about how to handle the dependency on contrib/cube.\n\n> \n> I can see three options. Automatically install contrib/cube when building\n> contrib/earthdistance, refuse to work unless contrib cube appears to be\n> installed, or only install the original stuff if contribe/cube is not\n> available. Trying to do different installs based on whether or not\n\nAuto-install cube. I think this is done by psql making/installing libpq\nbecause it depends on that.\n\n> > Not sure. I don't think we want to public permit this stuff unless the\n> > admin asks for it.\n> \n> I will put in some comments about needing to make functions public for normal\n> user access.\n\nOK.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 7 Sep 2002 15:27:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Making small bits of code available"
},
{
"msg_contents": "On Sat, Sep 07, 2002 at 12:52:06 -0400,\n Bruce Momjian <pgman@candle.pha.pa.us> wrote:\n> \n> > Should the creation of these functions be added to the existing script\n> > for earth_distance or should it be a separate script? It seems unlikely\n> > that someone would be using both of these at the same time, since one\n> > is based on the point type and the other on the cube type. However the\n> > overhead of installing both seems small, so maybe making it easier to\n> > try both and then pick one is worthwhile.\n> \n> \n> Install them both. Just make sure it is clear which is which, or are\n> yours superior and the old one should be removed?\n\nThey are different and someone could want either.\n\nI forgot to ask about how to handle the dependency on contrib/cube.\n\nI can see three options. Automatically install contrib/cube when building\ncontrib/earthdistance, refuse to work unless contrib cube appears to be\ninstalled, or only install the original stuff if contribe/cube is not\navailable. Trying to do different installs based on whether or not\ncontrib/cube is installed seems like a bad idea as it is mistake prone\nand could be confusing.\n\n> \n> Not sure. I don't think we want to public permit this stuff unless the\n> admin asks for it.\n\nI will put in some comments about needing to make functions public for normal\nuser access.\n",
"msg_date": "Sat, 7 Sep 2002 14:32:40 -0500",
"msg_from": "Bruno Wolff III <bruno@wolff.to>",
"msg_from_op": true,
"msg_subject": "Re: Making small bits of code available"
},
{
"msg_contents": "I am almost done. While working on the regression test I found a significant\nbug in the original earth distance package, so this really does need to\nget updated. While I was doing that I switched it to use the haversine\nformula as that is more accurate for short distances than the formula\nthey used previously.\n",
"msg_date": "Sat, 7 Sep 2002 23:14:20 -0500",
"msg_from": "Bruno Wolff III <bruno@wolff.to>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Making small bits of code available"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Bruno Wolff III wrote:\n>> Should I leave the grants in, leave that to the administrator or provide\n>> a separate script?\n\n> I would not add the grants.\n\nActually I disagree. Bruno's comment made me realize that all the\ncontrib scripts that create functions are now effectively broken,\nbecause they create functions that are not callable by anyone\nexcept the creating user. 99% of the time that will be wrong.\n\nThe scripts were all written under the assumption that the functions\nthey create would be callable by world. I think we should add explicit\nGRANT EXECUTE TO PUBLIC commands to them to maintain\nbackwards-compatible behavior.\n\nIf there's anyone who does not want that result, they can easily edit\nthe script before they run it. Adding missing GRANTs to a creation\nscript is a lot harder than commenting out ones you don't want ...\n\n>> If I do that, do I have to grant public access to internal functions\n>> (used for the gist index) or can I just make the ones meant for users\n\nDon't believe it matters. Anything taking an INTERNAL parameter cannot\nbe called manually anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 Sep 2002 15:12:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "7.3 function permissions (was Re: Making small bits of code\n available)"
}
] |
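The cube-based approach discussed in this thread — convert latitude/longitude to a 3-D point, then turn the straight-line (secant) distance between points into a great-circle arc — can be sketched in Python. The function names mirror the prototypes Bruno listed (`ll_to_earth`, `sec_to_gc`, `earth_distance`), plus the haversine formula he later switched the package to; the radius constant is an assumed mean Earth radius, since the thread notes the actual script "picks a radius of the earth in meters" that readers may change:

```python
import math

# Assumed mean Earth radius in metres; the real contrib script may use
# a different value, as the thread points out.
EARTH_RADIUS_M = 6371000.0

def ll_to_earth(lat_deg, lon_deg):
    """Latitude/longitude in degrees -> 3-D point on the sphere's surface."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (EARTH_RADIUS_M * math.cos(lat) * math.cos(lon),
            EARTH_RADIUS_M * math.cos(lat) * math.sin(lon),
            EARTH_RADIUS_M * math.sin(lat))

def sec_to_gc(chord_m):
    """Straight-line (secant) distance between surface points -> arc length."""
    half = min(1.0, chord_m / (2.0 * EARTH_RADIUS_M))
    return 2.0 * EARTH_RADIUS_M * math.asin(half)

def earth_distance(p1, p2):
    """Great-circle distance between two ll_to_earth() points."""
    return sec_to_gc(math.dist(p1, p2))

def haversine(lat1, lon1, lat2, lon2):
    """Direct haversine formula, for cross-checking the cube-based route."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin((phi2 - phi1) / 2.0) ** 2
         + math.cos(phi1) * math.cos(phi2)
         * math.sin(math.radians(lon2 - lon1) / 2.0) ** 2)
    return 2.0 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# Two points straddling the 180th meridian: the 3-D representation needs
# no wraparound special case, which is Bruno's argument for cube over point.
d_cube = earth_distance(ll_to_earth(0.0, 179.9), ll_to_earth(0.0, -179.9))
d_hav = haversine(0.0, 179.9, 0.0, -179.9)
```

Both routes agree on roughly 22 km here, whereas naive subtraction of the raw longitudes would see a 359.8° separation — exactly the 180th-meridian hazard the thread describes.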
[
{
"msg_contents": "\n> I make a guess I've got this due to parallel running of a program making\n> bulk INSERTs/UPDATEs into that table. Mmm...I need a way to avoid the big\n> number of unused pages in such a case. LOCK TABLE?\n\nOnly UPDATEs and DELETEs (and rolled back INSERTs) cause unused pages.\nThe trick for other people was to run very frequent 'VACUUM goods;'\n(like every 15 seconds) on tables when relatively few rows (in small tables)\nwhere constantly beeing updated (e.g. counters/balances).\n\nIt might be sufficient in your case though to do the 'VACUUM goods;' after \nevery bulk UPDATE, like Christopher suggested. A concurrent vacuum won't \nhelp if each bulk update is done in one single transaction.\n\nAndreas\n",
"msg_date": "Fri, 6 Sep 2002 09:33:03 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Big number of \"unused\" pages as reported by"
}
] |
[
{
"msg_contents": "Hello, All\n I have read the source code /cvsroot/pgsql/src/backend/optimizer/path/costsize.c and there is a function cost_sort(...). I think the code in 464 to 465 lines must be changed to:\n startup_cost += npageaccesses *\n\t (1.0 + cost_nonsequential_access(1)) * 0.5;\n\nThe original code is:\n startup_cost += npageaccesses *\n\t (1.0 + cost_nonsequential_access(npages)) * 0.5;\nCan any one discuss about this issue with me ? Thanks for your response very much!\n----------------------\n Guo long jiang. 2002-9-6\n",
"msg_date": "Fri, 06 Sep 2002 16:09:59 +0800",
"msg_from": "ljguo_1234 <ljguo_1234@sina.com>",
"msg_from_op": true,
"msg_subject": "abou the cost estimation"
},
{
"msg_contents": "ljguo_1234 <ljguo_1234@sina.com> writes:\n> I have read the source code /cvsroot/pgsql/src/backend/optimizer/path/costsize.c and there is a function cost_sort(...). I think the code in 464 to 465 lines must be changed to:\n> startup_cost += npageaccesses *\n> \t (1.0 + cost_nonsequential_access(1)) * 0.5;\n\nThat would be wrong. Note the definition of cost_nonsequential_access:\n\n *\t Estimate the cost of accessing one page at random from a relation\n *\t (or sort temp file) of the given size in pages.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 06 Sep 2002 09:01:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: abou the cost estimation "
}
] |
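Tom's objection can be illustrated numerically. The comment he quotes says `cost_nonsequential_access()` estimates the cost of fetching one page at random from a file of the given size — the larger the file, the less likely any page is cached, so the dearer the fetch. The model below is a hypothetical stand-in (the real 7.2 function's interpolation differs), but it shows why hard-coding `1` for `npages`, as proposed, would price every sort temp file as if it were fully cached:

```python
RANDOM_PAGE_COST = 4.0      # planner's cost for a truly random page fetch
CACHED_PAGES = 10_000.0     # hypothetical number of pages assumed cached

def cost_nonsequential_access(npages):
    # Toy stand-in: cost 1.0 (sequential-equivalent) while the file fits
    # in cache, approaching RANDOM_PAGE_COST as the file grows past it.
    # The real function's interpolation is different; only the shape matters.
    uncached_fraction = max(0.0, 1.0 - CACHED_PAGES / max(npages, 1.0))
    return 1.0 + (RANDOM_PAGE_COST - 1.0) * uncached_fraction

def sort_io_startup_cost(npageaccesses, npages):
    # The expression from cost_sort() quoted in the thread.
    return npageaccesses * (1.0 + cost_nonsequential_access(npages)) * 0.5

current = sort_io_startup_cost(npageaccesses=200_000, npages=100_000)
proposed = sort_io_startup_cost(npageaccesses=200_000, npages=1)
print(current, proposed)  # the proposed change underestimates large sorts
```

With any model of this shape, `cost_nonsequential_access(1)` is the cheapest possible value, so passing `1` systematically underestimates the I/O cost of sorts that spill large temp files — which is why the original code passes `npages`.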
[
{
"msg_contents": "Hi,\n\nBeen playing with the 7.3beta1 version and I've noticed a small\nproblem with dependency checking when dropping a column. If you have\na view which uses JOIN's to join tables then dropping a column will\nfail on a dependency check, even though the column being dropped is\nnot used at all in the view. If you join the tables in the WHERE\nclause the column can be dropped without problems.\n\nPlease see below some example SQL to demonstrate:\n\n\n-- wrap it all up in a transaction so we don't do anything permanent\n\nBEGIN;\n\nCREATE TABLE table1 (col_a text, col_b int);\nCREATE TABLE table2 (col_b int, col_c text);\n\nCREATE VIEW tester1 AS SELECT A.col_a,B.col_b FROM table1 A, table2 B\nWHERE (b.col_b=a.col_b);\n\nCREATE VIEW tester2 AS SELECT A.col_a,B.col_b FROM table2 B INNER JOIN\ntable1 A ON (b.col_b=a.col_b);\n\n--Now try and drop column col_c from table2\nALTER TABLE table2 DROP COLUMN col_c RESTRICT;\n\n--You should now get an error to say that col_c is a dependent object\nin view tester2\n\nROLLBACK;\n\n\n-- I have also noticied the following behaviour when using the SET\ncommand with incorrect option names\n\nSET anythingyoulike = 1,2\n\n--will cause the error to be reported as ERROR: SET anythingyoulike\ntakes only one argument\n\nSET anythingyoulike = 1\n--will cause the error to be reported correctly ('anythingyoulike' is\nnot a valid option name)\n",
"msg_date": "6 Sep 2002 03:54:37 -0700",
"msg_from": "tim@ametco.co.uk (Tim Knowles)",
"msg_from_op": true,
"msg_subject": "7.3beta1 DROP COLUMN DEPENDENCY PROBLEM"
},
{
"msg_contents": "On Fri, 2002-09-06 at 06:54, Tim Knowles wrote:\n> Hi,\n> \n> Been playing with the 7.3beta1 version and I've noticed a small\n> problem with dependency checking when dropping a column. If you have\n> a view which uses JOIN's to join tables then dropping a column will\n\nThis has to do with the way the JOIN currently functions. At the moment\nthe JOIN nodes record an alias which has all columns listed, which is\nappropriately picked up by the dependency code.\n\nTom is debating whether or not the alias on columns not used in the\nwhere or clause or returned is strictly necessary.\n\nIndeed, if you delete the dependencies, then drop the column the view\ncontinues to function but I'm not sure thats always the case.\n\n \n-- \n Rod Taylor\n\n",
"msg_date": "09 Sep 2002 10:15:56 -0400",
"msg_from": "Rod Taylor <rbt@rbt.ca>",
"msg_from_op": false,
"msg_subject": "Re: 7.3beta1 DROP COLUMN DEPENDENCY PROBLEM"
}
] |
[
{
"msg_contents": "Seems to build cleanly here now. Perhaps anoncvs just hadn't sync'd up\nwhen you tried Jason?\n\nRegards, Dave.\n\n> -----Original Message-----\n> From: Jason Tishler [mailto:jason@tishler.net] \n> Sent: 05 September 2002 20:38\n> To: Peter Eisentraut\n> Cc: Bruce Momjian; Dave Page; pgsql-hackers; pgsql-cygwin\n> Subject: Re: [HACKERS] 7.3 Beta 1 Build Error on Cygwin\n> \n> \n> Peter,\n> \n> On Thu, Sep 05, 2002 at 02:51:31PM -0400, Bruce Momjian wrote:\n> > Jason Tishler wrote:\n> > > On Thu, Sep 05, 2002 at 08:33:20PM +0200, Peter Eisentraut wrote:\n> > > > Should all be fixed now.\n> > > \n> > > Huh? I don't see any recent CVS commits to indicate this.\n> > \n> > I see as a commit:\n> > \n> > [snip]\n> > \n> > I assume it was in there.\n> \n> Sorry for the noise, but at the time:\n> \n> cvs status include/miscadmin.h makefiles/Makefile.win\n> \n> did *not* indicate any recent commits. Maybe you sent the \n> above email before you committed your changes?\n> \n> Anyway, I just tried a:\n> \n> make distclean\n> rm include/miscadmin.h makefiles/Makefile.win # remove my patch\n> cvs update\n> make\n> \n> and got the following error:\n> \n> [snip]\n> make[3]: Leaving directory `/home/jt/src/pgsql/src/backend/utils'\n> dlltool --dllname postgres.exe --output-exp postgres.exp \n> --def postgres.def\n> gcc -L/usr/local/lib -o postgres.exe \n> -Wl,--base-file,postgres.base postgres.exp access/SUBSYS.o \n> bootstrap/SUBSYS.o catalog/SUBSYS.o parser/SUBSYS.o \n> commands/SUBSYS.o executor/SUBSYS.o lib/SUBSYS.o \n> libpq/SUBSYS.o main/SUBSYS.o nodes/SUBSYS.o \n> optimizer/SUBSYS.o port/SUBSYS.o postmaster/SUBSYS.o \n> regex/SUBSYS.o rewrite/SUBSYS.o storage/SUBSYS.o \n> tcop/SUBSYS.o utils/SUBSYS.o \n> libpq/SUBSYS.o(.text+0x1c84):crypt.c: undefined reference \n> to `crypt'\n> port/SUBSYS.o(.text+0x262):pg_sema.c: undefined reference \n> to `semget'\n> [snip]\n> \n> I can get postgres.exe to successfully link by manually \n> appending \"-lcrypt -lcygipc\" 
to the end of the above gcc command line.\n> \n> Since you are already working on this, would you be willing \n> to fix this problem?\n> \n> Thanks,\n> Jason\n> \n",
"msg_date": "Fri, 6 Sep 2002 12:54:13 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 7.3 Beta 1 Build Error on Cygwin"
},
{
"msg_contents": "Peter,\n\nOn Fri, Sep 06, 2002 at 12:54:13PM +0100, Dave Page wrote:\n> Seems to build cleanly here now.\n\nAnd here (and now) too.\n\n> Perhaps anoncvs just hadn't sync'd up when you tried Jason?\n\nI guess so -- very strange...\n\nAnyway, sorry (again) for the noise and thanks for fixing the Cygwin\nbuild.\n\nJason\n",
"msg_date": "Fri, 06 Sep 2002 09:06:43 -0400",
"msg_from": "Jason Tishler <jason@tishler.net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 7.3 Beta 1 Build Error on Cygwin"
}
] |
[
{
"msg_contents": "\nWell, I swear, this is the first release we've actually kept on scheduale\nwith, as far as going into beta is concerned ...\n\nWe've just packaged up and released v7.3beta1 for broader testing ... and\nthis is a big one as far as changes are concerned.\n\n Major changes in this release:\n\n Schemas\n\n Schemas allow users to create objects in their own namespace\n so two people or applications can have tables with the same\n name. There is also a public schema for shared tables.\n Table/index creation can be restricted by removing\n permissions on the public schema.\n\n Drop Column\n\n PostgreSQL now supports ALTER TABLE ... DROP COLUMN functionality.\n\n Table Functions\n\n Functions returning multiple rows and/or multiple columns are\n now much easier to use than before. You can call such a\n \"table function\" in the SELECT FROM clause, treating its output\n like a table. Also, plpgsql functions can now return sets.\n\n Prepared Queries\n\n For performance, PostgreSQL now supports prepared queries.\n\n Dependency Tracking\n\n PostgreSQL now records object dependencies, which allows\n improvements in many areas.\n\n Privileges\n\n Functions and procedural languages now have privileges, and\n people running them can take on the privileges of their creators.\n\n Multibyte/Locale\n\n Both multibyte and locale are now always enabled.\n\n Logging\n\n A variety of logging options have been enhanced.\n\n Interfaces\n\n A large number of interfaces have been moved to\n http://gborg.postgresql.org where they can be developed\n and released independently.\n\n Functions/Identifiers\n\n By default, functions can now take up to 32 parameters, and\n identifiers can be up to 63 bytes long.\n\nAnd these are only the Major Changes ... 
the minor changes are extensive\nas well, and are documented in the HISTORY file.\n\nThis release can be found on the main site, as well as the mirrors in:\n\n\tftp://ftp.postgresql.org/pub/beta\n\nNote that this is a *beta* release ... we have only *just* stop'd\ndevelopment of features, so there are instabilities in the system\nexpected. Anyone, and everyone, is encouraged to download and test this\non their various platforms, but do not use it in a production environment\nas of yet. The more people that can test this release, the faster bugs\nwill get reported and fixed in a much shorter time.\n\nAny bugs/problems, please report them to pgsql-bugs@postgresql.org ...\n\nIf we are lucky, we can keep this to a reasonably short beta period ...\n\nMarc G. Fournier\n\n\n",
"msg_date": "Fri, 6 Sep 2002 09:10:48 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "v7.3beta1 Packaged and Released ..."
}
] |
[
{
"msg_contents": "Harris,\n\nWhat error do you get?\n\nAlso you don't need the quotes around id\n\nDave\nOn Fri, 2002-09-06 at 10:06, snpe wrote:\n> Hello,\n> I have simple table with column ID and values '4' in this.\n> I user 7.3 beta1 (from cvs 05.09.2002) and autocommit off in postgresql.conf.\n> Next program don't work .\n> I am tried with compiled postgresql.jar form CVS and with\n> pg73b1jdbc3.jar from 05.09.2002 on jdbc.postgresql.org\n> \n> What is wrong ?\n> \n> regards\n> Haris Peco\n> import java.io.*;\n> import java.sql.*;\n> import java.text.*;\n> \n> public class PrepStatTest\n> {\n> \tConnection db;\t\n> \tString stat=\"DELETE FROM org_ban WHERE \\\"id\\\" = ?\";\n> \tString delid = \"4\";\n> \tpublic PrepStatTest() throws ClassNotFoundException, FileNotFoundException, \n> IOException, SQLException\n> \t{\n> \t\tClass.forName(\"org.postgresql.Driver\");\n> \t\tdb = DriverManager.getConnection(\"jdbc:postgresql://spnew/snpe\", \"snpe\", \n> \"snpe\");\n> \t\tPreparedStatement st = db.prepareStatement(stat);\n> \t\tst.setString(1, delid);\n> \t\tint rowsDeleted = st.executeUpdate();\n> \t\tSystem.out.println(\"Rows deleted \" + rowsDeleted);\n> \t\tdb.commit();\n> \t\tst.close();\n> \t\tdb.close();\n> \t}\n> \n> \tpublic static void main(String args[])\n> \t{\n> \t\ttry\n> \t\t{\n> \t\t\tPrepStatTest test = new PrepStatTest();\n> \t\t}\n> \t\tcatch (Exception ex)\n> \t\t{\n> \t\t\tSystem.err.println(\"Exception caught.\\n\" + ex);\n> \t\t\tex.printStackTrace();\n> \t\t}\n> \t}\n> }\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n> \n\n\n\n",
"msg_date": "06 Sep 2002 10:05:18 -0400",
"msg_from": "Dave Cramer <Dave@micro-automation.net>",
"msg_from_op": true,
"msg_subject": "Re: JDBC 7.3 dev (Java 2 SDK 1.4.0)"
},
{
"msg_contents": "Hello,\n I have a simple table with a column ID containing the value '4'.\nI use 7.3 beta1 (from cvs 05.09.2002) with autocommit off in postgresql.conf.\nThe next program doesn't work.\nI tried both postgresql.jar compiled from CVS and\npg73b1jdbc3.jar from 05.09.2002 on jdbc.postgresql.org\n\nWhat is wrong?\n\nregards\nHaris Peco\nimport java.io.*;\nimport java.sql.*;\nimport java.text.*;\n\npublic class PrepStatTest\n{\n\tConnection db;\n\tString stat=\"DELETE FROM org_ban WHERE \\\"id\\\" = ?\";\n\tString delid = \"4\";\n\tpublic PrepStatTest() throws ClassNotFoundException, FileNotFoundException, \nIOException, SQLException\n\t{\n\t\tClass.forName(\"org.postgresql.Driver\");\n\t\tdb = DriverManager.getConnection(\"jdbc:postgresql://spnew/snpe\", \"snpe\", \n\"snpe\");\n\t\tPreparedStatement st = db.prepareStatement(stat);\n\t\tst.setString(1, delid);\n\t\tint rowsDeleted = st.executeUpdate();\n\t\tSystem.out.println(\"Rows deleted \" + rowsDeleted);\n\t\tdb.commit();\n\t\tst.close();\n\t\tdb.close();\n\t}\n\n\tpublic static void main(String args[])\n\t{\n\t\ttry\n\t\t{\n\t\t\tPrepStatTest test = new PrepStatTest();\n\t\t}\n\t\tcatch (Exception ex)\n\t\t{\n\t\t\tSystem.err.println(\"Exception caught.\\n\" + ex);\n\t\t\tex.printStackTrace();\n\t\t}\n\t}\n}\n\n",
"msg_date": "Fri, 6 Sep 2002 16:06:52 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "JDBC 7.3 dev (Java 2 SDK 1.4.0) "
},
{
"msg_contents": "Remove the quotes around id, and let me know what happens\n\nDave\nOn Fri, 2002-09-06 at 10:52, snpe wrote:\n> Hello Dave,\n> There isn't any error.Program write 'Rows deleted 1', but row hasn't been \n> deleted\n> \n> Thanks\n> Haris Peco\n> On Friday 06 September 2002 04:05 pm, Dave Cramer wrote:\n> > Harris,\n> >\n> > What error do you get?\n> >\n> > Also you don't need the quotes around id\n> >\n> > Dave\n> >\n> > On Fri, 2002-09-06 at 10:06, snpe wrote:\n> > > Hello,\n> > > I have simple table with column ID and values '4' in this.\n> > > I user 7.3 beta1 (from cvs 05.09.2002) and autocommit off in\n> > > postgresql.conf. Next program don't work .\n> > > I am tried with compiled postgresql.jar form CVS and with\n> > > pg73b1jdbc3.jar from 05.09.2002 on jdbc.postgresql.org\n> > >\n> > > What is wrong ?\n> > >\n> > > regards\n> > > Haris Peco\n> > > import java.io.*;\n> > > import java.sql.*;\n> > > import java.text.*;\n> > >\n> > > public class PrepStatTest\n> > > {\n> > > \tConnection db;\n> > > \tString stat=\"DELETE FROM org_ban WHERE \\\"id\\\" = ?\";\n> > > \tString delid = \"4\";\n> > > \tpublic PrepStatTest() throws ClassNotFoundException,\n> > > FileNotFoundException, IOException, SQLException\n> > > \t{\n> > > \t\tClass.forName(\"org.postgresql.Driver\");\n> > > \t\tdb = DriverManager.getConnection(\"jdbc:postgresql://spnew/snpe\",\n> > > \"snpe\", \"snpe\");\n> > > \t\tPreparedStatement st = db.prepareStatement(stat);\n> > > \t\tst.setString(1, delid);\n> > > \t\tint rowsDeleted = st.executeUpdate();\n> > > \t\tSystem.out.println(\"Rows deleted \" + rowsDeleted);\n> > > \t\tdb.commit();\n> > > \t\tst.close();\n> > > \t\tdb.close();\n> > > \t}\n> > >\n> > > \tpublic static void main(String args[])\n> > > \t{\n> > > \t\ttry\n> > > \t\t{\n> > > \t\t\tPrepStatTest test = new PrepStatTest();\n> > > \t\t}\n> > > \t\tcatch (Exception ex)\n> > > \t\t{\n> > > \t\t\tSystem.err.println(\"Exception caught.\\n\" + ex);\n> > > 
\t\t\tex.printStackTrace();\n> > > \t\t}\n> > > \t}\n> > > }\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > > subscribe-nomail command to majordomo@postgresql.org so that your\n> > > message can get through to the mailing list cleanly\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n> \n\n\n\n",
"msg_date": "06 Sep 2002 10:35:53 -0400",
"msg_from": "Dave Cramer <Dave@micro-automation.net>",
"msg_from_op": true,
"msg_subject": "Re: JDBC 7.3 dev (Java 2 SDK 1.4.0)"
},
{
"msg_contents": "Hello Dave,\n There isn't any error.Program write 'Rows deleted 1', but row hasn't been \ndeleted\n\nThanks\nHaris Peco\nOn Friday 06 September 2002 04:05 pm, Dave Cramer wrote:\n> Harris,\n>\n> What error do you get?\n>\n> Also you don't need the quotes around id\n>\n> Dave\n>\n> On Fri, 2002-09-06 at 10:06, snpe wrote:\n> > Hello,\n> > I have simple table with column ID and values '4' in this.\n> > I user 7.3 beta1 (from cvs 05.09.2002) and autocommit off in\n> > postgresql.conf. Next program don't work .\n> > I am tried with compiled postgresql.jar form CVS and with\n> > pg73b1jdbc3.jar from 05.09.2002 on jdbc.postgresql.org\n> >\n> > What is wrong ?\n> >\n> > regards\n> > Haris Peco\n> > import java.io.*;\n> > import java.sql.*;\n> > import java.text.*;\n> >\n> > public class PrepStatTest\n> > {\n> > \tConnection db;\n> > \tString stat=\"DELETE FROM org_ban WHERE \\\"id\\\" = ?\";\n> > \tString delid = \"4\";\n> > \tpublic PrepStatTest() throws ClassNotFoundException,\n> > FileNotFoundException, IOException, SQLException\n> > \t{\n> > \t\tClass.forName(\"org.postgresql.Driver\");\n> > \t\tdb = DriverManager.getConnection(\"jdbc:postgresql://spnew/snpe\",\n> > \"snpe\", \"snpe\");\n> > \t\tPreparedStatement st = db.prepareStatement(stat);\n> > \t\tst.setString(1, delid);\n> > \t\tint rowsDeleted = st.executeUpdate();\n> > \t\tSystem.out.println(\"Rows deleted \" + rowsDeleted);\n> > \t\tdb.commit();\n> > \t\tst.close();\n> > \t\tdb.close();\n> > \t}\n> >\n> > \tpublic static void main(String args[])\n> > \t{\n> > \t\ttry\n> > \t\t{\n> > \t\t\tPrepStatTest test = new PrepStatTest();\n> > \t\t}\n> > \t\tcatch (Exception ex)\n> > \t\t{\n> > \t\t\tSystem.err.println(\"Exception caught.\\n\" + ex);\n> > \t\t\tex.printStackTrace();\n> > \t\t}\n> > \t}\n> > }\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to 
majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n\n",
"msg_date": "Fri, 6 Sep 2002 16:52:05 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "Re: JDBC 7.3 dev (Java 2 SDK 1.4.0)"
},
{
"msg_contents": "Hi Dave,\nIt is the same. The program works with and without the quotes, but the row isn't deleted.\nPostgresql is 7.3 beta (from cvs) and the autocommit parameter in postgresql.conf \nis off (no auto commit).\nI also tried db.setAutoCommit(true) after getConnection, but no success.\n\nI think that is a bug in JDBC.\nPGSql 7.3 beta has the new autocommit on/off feature and the JDBC driver doesn't work\nwith autocommit off.\n\nThanks\n\nP.S.\nI am playing with Oracle JDeveloper 9i and Postgresql and I get an error in a prepared \nstatement like this :\n(oracle.jbo.SQLStmtException) JBO-27123: SQL error during call statement \npreparation. Statement: DELETE FROM org_ban WHERE \"id\"=?\n\nand the pgsql error is :\n(org.postgresql.util.PSQLException) Malformed stmt [DELETE FROM org_ban WHERE \n\"id\"=?] usage : {[? =] call <some_function> ([? [,?]*]) }\n\nI think that JDeveloper uses a CallableStatement for insert or delete (select \nand update work fine), but I don't know how.\n\nOn Friday 06 September 2002 04:35 pm, Dave Cramer wrote:\n> Remove the quotes around id, and let me know what happens\n>\n> Dave\n>\n> On Fri, 2002-09-06 at 10:52, snpe wrote:\n> > Hello Dave,\n> > There isn't any error.Program write 'Rows deleted 1', but row hasn't\n> > been deleted\n> >\n> > Thanks\n> > Haris Peco\n> >\n> > On Friday 06 September 2002 04:05 pm, Dave Cramer wrote:\n> > > Harris,\n> > >\n> > > What error do you get?\n> > >\n> > > Also you don't need the quotes around id\n> > >\n> > > Dave\n> > >\n> > > On Fri, 2002-09-06 at 10:06, snpe wrote:\n> > > > Hello,\n> > > > I have simple table with column ID and values '4' in this.\n> > > > I user 7.3 beta1 (from cvs 05.09.2002) and autocommit off in\n> > > > postgresql.conf. 
Next program don't work .\n> > > > I am tried with compiled postgresql.jar form CVS and with\n> > > > pg73b1jdbc3.jar from 05.09.2002 on jdbc.postgresql.org\n> > > >\n> > > > What is wrong ?\n> > > >\n> > > > regards\n> > > > Haris Peco\n> > > > import java.io.*;\n> > > > import java.sql.*;\n> > > > import java.text.*;\n> > > >\n> > > > public class PrepStatTest\n> > > > {\n> > > > \tConnection db;\n> > > > \tString stat=\"DELETE FROM org_ban WHERE \\\"id\\\" = ?\";\n> > > > \tString delid = \"4\";\n> > > > \tpublic PrepStatTest() throws ClassNotFoundException,\n> > > > FileNotFoundException, IOException, SQLException\n> > > > \t{\n> > > > \t\tClass.forName(\"org.postgresql.Driver\");\n> > > > \t\tdb = DriverManager.getConnection(\"jdbc:postgresql://spnew/snpe\",\n> > > > \"snpe\", \"snpe\");\n> > > > \t\tPreparedStatement st = db.prepareStatement(stat);\n> > > > \t\tst.setString(1, delid);\n> > > > \t\tint rowsDeleted = st.executeUpdate();\n> > > > \t\tSystem.out.println(\"Rows deleted \" + rowsDeleted);\n> > > > \t\tdb.commit();\n> > > > \t\tst.close();\n> > > > \t\tdb.close();\n> > > > \t}\n> > > >\n> > > > \tpublic static void main(String args[])\n> > > > \t{\n> > > > \t\ttry\n> > > > \t\t{\n> > > > \t\t\tPrepStatTest test = new PrepStatTest();\n> > > > \t\t}\n> > > > \t\tcatch (Exception ex)\n> > > > \t\t{\n> > > > \t\t\tSystem.err.println(\"Exception caught.\\n\" + ex);\n> > > > \t\t\tex.printStackTrace();\n> > > > \t\t}\n> > > > \t}\n> > > > }\n> > > >\n> > > >\n> > > > ---------------------------(end of\n> > > > broadcast)--------------------------- TIP 3: if posting/reading\n> > > > through Usenet, please send an appropriate subscribe-nomail command\n> > > > to majordomo@postgresql.org so that your message can get through to\n> > > > the mailing list cleanly\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister 
YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n",
"msg_date": "Fri, 6 Sep 2002 17:17:21 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "Re: JDBC 7.3 dev (Java 2 SDK 1.4.0)"
},
{
"msg_contents": "I set autocommit true in postgresql.conf and program work fine\n\nregards\nHaris Peco\nOn Friday 06 September 2002 04:35 pm, Dave Cramer wrote:\n> Remove the quotes around id, and let me know what happens\n>\n> Dave\n>\n> On Fri, 2002-09-06 at 10:52, snpe wrote:\n> > Hello Dave,\n> > There isn't any error.Program write 'Rows deleted 1', but row hasn't\n> > been deleted\n> >\n> > Thanks\n> > Haris Peco\n> >\n> > On Friday 06 September 2002 04:05 pm, Dave Cramer wrote:\n> > > Harris,\n> > >\n> > > What error do you get?\n> > >\n> > > Also you don't need the quotes around id\n> > >\n> > > Dave\n> > >\n> > > On Fri, 2002-09-06 at 10:06, snpe wrote:\n> > > > Hello,\n> > > > I have simple table with column ID and values '4' in this.\n> > > > I user 7.3 beta1 (from cvs 05.09.2002) and autocommit off in\n> > > > postgresql.conf. Next program don't work .\n> > > > I am tried with compiled postgresql.jar form CVS and with\n> > > > pg73b1jdbc3.jar from 05.09.2002 on jdbc.postgresql.org\n> > > >\n> > > > What is wrong ?\n> > > >\n> > > > regards\n> > > > Haris Peco\n> > > > import java.io.*;\n> > > > import java.sql.*;\n> > > > import java.text.*;\n> > > >\n> > > > public class PrepStatTest\n> > > > {\n> > > > \tConnection db;\n> > > > \tString stat=\"DELETE FROM org_ban WHERE \\\"id\\\" = ?\";\n> > > > \tString delid = \"4\";\n> > > > \tpublic PrepStatTest() throws ClassNotFoundException,\n> > > > FileNotFoundException, IOException, SQLException\n> > > > \t{\n> > > > \t\tClass.forName(\"org.postgresql.Driver\");\n> > > > \t\tdb = DriverManager.getConnection(\"jdbc:postgresql://spnew/snpe\",\n> > > > \"snpe\", \"snpe\");\n> > > > \t\tPreparedStatement st = db.prepareStatement(stat);\n> > > > \t\tst.setString(1, delid);\n> > > > \t\tint rowsDeleted = st.executeUpdate();\n> > > > \t\tSystem.out.println(\"Rows deleted \" + rowsDeleted);\n> > > > \t\tdb.commit();\n> > > > \t\tst.close();\n> > > > \t\tdb.close();\n> > > > \t}\n> > > >\n> > > > \tpublic static void 
main(String args[])\n> > > > \t{\n> > > > \t\ttry\n> > > > \t\t{\n> > > > \t\t\tPrepStatTest test = new PrepStatTest();\n> > > > \t\t}\n> > > > \t\tcatch (Exception ex)\n> > > > \t\t{\n> > > > \t\t\tSystem.err.println(\"Exception caught.\\n\" + ex);\n> > > > \t\t\tex.printStackTrace();\n> > > > \t\t}\n> > > > \t}\n> > > > }\n> > > >\n> > > >\n> > > > ---------------------------(end of\n> > > > broadcast)--------------------------- TIP 3: if posting/reading\n> > > > through Usenet, please send an appropriate subscribe-nomail command\n> > > > to majordomo@postgresql.org so that your message can get through to\n> > > > the mailing list cleanly\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n",
"msg_date": "Fri, 6 Sep 2002 17:21:00 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "Re: JDBC 7.3 dev (Java 2 SDK 1.4.0)"
},
{
"msg_contents": "Hello,\n\nIf I set db.setAutoCommit(false) after getConnection, the row is deleted.\n\nThe driver doesn't see the autocommit parameter in postgresql.conf.\nregards\nOn Friday 06 September 2002 05:21 pm, snpe wrote:\n> I set autocommit true in postgresql.conf and program work fine\n>\n> regards\n> Haris Peco\n>\n> On Friday 06 September 2002 04:35 pm, Dave Cramer wrote:\n> > Remove the quotes around id, and let me know what happens\n> >\n> > Dave\n> >\n> > On Fri, 2002-09-06 at 10:52, snpe wrote:\n> > > Hello Dave,\n> > > There isn't any error.Program write 'Rows deleted 1', but row hasn't\n> > > been deleted\n> > >\n> > > Thanks\n> > > Haris Peco\n> > >\n> > > On Friday 06 September 2002 04:05 pm, Dave Cramer wrote:\n> > > > Harris,\n> > > >\n> > > > What error do you get?\n> > > >\n> > > > Also you don't need the quotes around id\n> > > >\n> > > > Dave\n> > > >\n> > > > On Fri, 2002-09-06 at 10:06, snpe wrote:\n> > > > > Hello,\n> > > > > I have simple table with column ID and values '4' in this.\n> > > > > I user 7.3 beta1 (from cvs 05.09.2002) and autocommit off in\n> > > > > postgresql.conf. 
Next program don't work .\n> > > > > I am tried with compiled postgresql.jar form CVS and with\n> > > > > pg73b1jdbc3.jar from 05.09.2002 on jdbc.postgresql.org\n> > > > >\n> > > > > What is wrong ?\n> > > > >\n> > > > > regards\n> > > > > Haris Peco\n> > > > > import java.io.*;\n> > > > > import java.sql.*;\n> > > > > import java.text.*;\n> > > > >\n> > > > > public class PrepStatTest\n> > > > > {\n> > > > > \tConnection db;\n> > > > > \tString stat=\"DELETE FROM org_ban WHERE \\\"id\\\" = ?\";\n> > > > > \tString delid = \"4\";\n> > > > > \tpublic PrepStatTest() throws ClassNotFoundException,\n> > > > > FileNotFoundException, IOException, SQLException\n> > > > > \t{\n> > > > > \t\tClass.forName(\"org.postgresql.Driver\");\n> > > > > \t\tdb = DriverManager.getConnection(\"jdbc:postgresql://spnew/snpe\",\n> > > > > \"snpe\", \"snpe\");\n> > > > > \t\tPreparedStatement st = db.prepareStatement(stat);\n> > > > > \t\tst.setString(1, delid);\n> > > > > \t\tint rowsDeleted = st.executeUpdate();\n> > > > > \t\tSystem.out.println(\"Rows deleted \" + rowsDeleted);\n> > > > > \t\tdb.commit();\n> > > > > \t\tst.close();\n> > > > > \t\tdb.close();\n> > > > > \t}\n> > > > >\n> > > > > \tpublic static void main(String args[])\n> > > > > \t{\n> > > > > \t\ttry\n> > > > > \t\t{\n> > > > > \t\t\tPrepStatTest test = new PrepStatTest();\n> > > > > \t\t}\n> > > > > \t\tcatch (Exception ex)\n> > > > > \t\t{\n> > > > > \t\t\tSystem.err.println(\"Exception caught.\\n\" + ex);\n> > > > > \t\t\tex.printStackTrace();\n> > > > > \t\t}\n> > > > > \t}\n> > > > > }\n> > > > >\n> > > > >\n> > > > > ---------------------------(end of\n> > > > > broadcast)--------------------------- TIP 3: if posting/reading\n> > > > > through Usenet, please send an appropriate subscribe-nomail command\n> > > > > to majordomo@postgresql.org so that your message can get through to\n> > > > > the mailing list cleanly\n> > >\n> > > ---------------------------(end of\n> > > broadcast)--------------------------- TIP 
2: you can get off all lists\n> > > at once with the unregister command (send \"unregister\n> > > YourEmailAddressHere\" to majordomo@postgresql.org)\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n",
"msg_date": "Fri, 6 Sep 2002 17:53:48 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "Re: JDBC 7.3 dev (Java 2 SDK 1.4.0)"
},
{
"msg_contents": "Hello Dave,\n I found a bug with CallableStatement in the Pgsql JDBC driver (I think it is a bug).\nThe CallableStatement interface extends PreparedStatement (JDBC 3.0 specification),\nso DELETE or UPDATE commands must work the same as with PreparedStatement.\nPgsql JDBC only works with the {[? =] call <some_function> ([? [,?]*]) } form.\n\nThe next code works in Oracle :\n\nimport java.io.*;\nimport java.sql.*;\nimport java.text.*;\n\npublic class PrepStatTestOra\n{\n\tConnection db;\n\tString stat=\"DELETE FROM org_ban WHERE id = ?\";\n\tString delid = \"4\";\n\tpublic PrepStatTestOra() throws ClassNotFoundException, \nFileNotFoundException, IOException, SQLException\n\t{\n\t\tClass.forName(\"oracle.jdbc.OracleDriver\");\n\t\tdb = DriverManager.getConnection(\"jdbc:oracle:thin:@spnew:1521:V9i\", \n\"snpe2001\", \"snpe2001\");\n\t\t//db.setAutoCommit(false);\n\t\t//PrepareStatement st = db.prepareStatement(stat);\n\t\tCallableStatement st = db.prepareCall(stat);\n\t\tst.setString(1, delid);\n\t\tint rowsDeleted = st.executeUpdate();\n\t\tSystem.out.println(\"Rows deleted \" + rowsDeleted);\n\t\tdb.commit();\n\t\tst.close();\n\t\tdb.close();\n\t}\n\n\tpublic static void main(String args[])\n\t{\n\t\ttry\n\t\t{\n\t\t\tPrepStatTestOra test = new PrepStatTestOra();\n\t\t}\n\t\tcatch (Exception ex)\n\t\t{\n\t\t\tSystem.err.println(\"Exception caught.\\n\" + ex);\n\t\t\tex.printStackTrace();\n\t\t}\n\t}\n}\n\nThis is for pgsql :\n\nimport java.io.*;\nimport java.sql.*;\nimport java.text.*;\n\npublic class PrepStatTest\n{\n\tConnection db;\n\tString stat=\"DELETE FROM org_ban WHERE id = ?\";\n\tString delid = \"4\";\n\tpublic PrepStatTest() throws ClassNotFoundException, FileNotFoundException, \nIOException, SQLException\n\t{\n\t\tClass.forName(\"org.postgresql.Driver\");\n\t\tdb = DriverManager.getConnection(\"jdbc:postgresql://spnew/snpe\", \"snpe\", \n\"snpe\");\n\t\tdb.setAutoCommit(false); // hack for 'autocommit true' in postgresql.conf\n\t\t//PrepareStatement st = 
db.prepareStatement(stat); // PreparedStatement work \nfine\n\t\tCallableStatement st = db.prepareCall(stat); // this must work like previous \nline with PreparedStatement\n \t\tst.setString(1, delid);\n \t\tint rowsDeleted = st.executeUpdate();\n\t\tSystem.out.println(\"Rows deleted \" + rowsDeleted);\n\t\tdb.commit();\n\t\tst.close();\n\t\tdb.close();\n\t}\n\n\tpublic static void main(String args[])\n\t{\n\t\ttry\n\t\t{\n\t\t\tPrepStatTest test = new PrepStatTest();\n\t\t}\n\t\tcatch (Exception ex)\n\t\t{\n\t\t\tSystem.err.println(\"Exception caught.\\n\" + ex);\n\t\t\tex.printStackTrace();\n\t\t}\n\t}\n}\n\nExample for Oracle work fine and Pgsql get error (same in JDeveloper) :\n\nException caught.\nMalformed stmt [DELETE FROM org_ban WHERE \"id\" = ?] usage : {[? =] call \n<some_function> ([? [,?]*]) }\nMalformed stmt [DELETE FROM org_ban WHERE \"id\" = ?] usage : {[? =] call \n<some_function> ([? [,?]*]) }\n\tat \norg.postgresql.jdbc1.AbstractJdbc1Statement.modifyJdbcCall(AbstractJdbc1Statement.java:1720)\n\tat \norg.postgresql.jdbc1.AbstractJdbc1Statement.parseSqlStmt(AbstractJdbc1Statement.java:88)\n\tat \norg.postgresql.jdbc1.AbstractJdbc1Statement.<init>(AbstractJdbc1Statement.java:79)\n\tat \norg.postgresql.jdbc2.AbstractJdbc2Statement.<init>(AbstractJdbc2Statement.java:32)\n\tat \norg.postgresql.jdbc3.AbstractJdbc3Statement.<init>(AbstractJdbc3Statement.java:23)\n\tat \norg.postgresql.jdbc3.Jdbc3CallableStatement.<init>(Jdbc3CallableStatement.java:11)\n\tat org.postgresql.jdbc3.Jdbc3Connection.prepareCall(Jdbc3Connection.java:36)\n\tat \norg.postgresql.jdbc2.AbstractJdbc2Connection.prepareCall(AbstractJdbc2Connection.java:39)\n\tat PrepStatTest.<init>(PrepStatTest.java:16)\n\tat PrepStatTest.main(PrepStatTest.java:29)\n\nOn Friday 06 September 2002 05:38 pm, you wrote:\n> Possibly, callable statements are a bit of a hack in postgres, since\n> they don't really exist. 
If you can send me something that causes the\n> errors I can try to fix it.\n>\n> Dave\n>\n> On Fri, 2002-09-06 at 11:59, snpe wrote:\n> > Hello,\n> > This is postgresql error on DELETE command\n> > 'Malformed stmt' is in CallableStatement in JDBC source only\n> > I think that CallableStatement in Pgsql JDBC driver have any\n> > incompatibilty because meratnt drivers (for DB2, MS SQL) work fine\n> >\n> > Thanks\n> > Haris Peco\n> > (org.postgresql.util.PSQLException) Malformed stmt [DELETE FROM org_ban\n> > WHERE \"id\"=?] usage : {[? =] call <some_function> ([? [,?]*]) }\n> >\n> > On Friday 06 September 2002 05:21 pm, you wrote:\n> > > No, not off hand, I've never used JDeveloper, nor have I seen that\n> > > error message\n> > >\n> > > Sorry,\n> > >\n> > > Dave\n> > >\n> > > On Fri, 2002-09-06 at 11:36, snpe wrote:\n> > > > Hi Dave\n> > > > Have You any cooment on my P.S. (problem with JDeveloper) ?\n> > > >\n> > > > On Friday 06 September 2002 05:07 pm, you wrote:\n> > > > > Hmmm.... interesting, I guess we have to fix that in the driver\n> > > > >\n> > > > > Dave\n> > > > >\n> > > > > On Fri, 2002-09-06 at 11:21, snpe wrote:\n> > > > > > I set autocommit true in postgresql.conf and program work fine\n> > > > > >\n> > > > > > regards\n> > > > > > Haris Peco\n> > > > > >\n> > > > > > On Friday 06 September 2002 04:35 pm, Dave Cramer wrote:\n> > > > > > > Remove the quotes around id, and let me know what happens\n> > > > > > >\n> > > > > > > Dave\n> > > > > > >\n> > > > > > > On Fri, 2002-09-06 at 10:52, snpe wrote:\n> > > > > > > > Hello Dave,\n> > > > > > > > There isn't any error.Program write 'Rows deleted 1', but\n> > > > > > > > row hasn't been deleted\n> > > > > > > >\n> > > > > > > > Thanks\n> > > > > > > > Haris Peco\n> > > > > > > >\n> > > > > > > > On Friday 06 September 2002 04:05 pm, Dave Cramer wrote:\n> > > > > > > > > Harris,\n> > > > > > > > >\n> > > > > > > > > What error do you get?\n> > > > > > > > >\n> > > > > > > > > Also you don't need the 
quotes around id\n> > > > > > > > >\n> > > > > > > > > Dave\n> > > > > > > > >\n> > > > > > > > > On Fri, 2002-09-06 at 10:06, snpe wrote:\n> > > > > > > > > > Hello,\n> > > > > > > > > > I have simple table with column ID and values '4' in\n> > > > > > > > > > this. I user 7.3 beta1 (from cvs 05.09.2002) and\n> > > > > > > > > > autocommit off in postgresql.conf. Next program don't\n> > > > > > > > > > work .\n> > > > > > > > > > I am tried with compiled postgresql.jar form CVS and with\n> > > > > > > > > > pg73b1jdbc3.jar from 05.09.2002 on jdbc.postgresql.org\n> > > > > > > > > >\n> > > > > > > > > > What is wrong ?\n> > > > > > > > > >\n> > > > > > > > > > regards\n> > > > > > > > > > Haris Peco\n> > > > > > > > > > import java.io.*;\n> > > > > > > > > > import java.sql.*;\n> > > > > > > > > > import java.text.*;\n> > > > > > > > > >\n> > > > > > > > > > public class PrepStatTest\n> > > > > > > > > > {\n> > > > > > > > > > \tConnection db;\n> > > > > > > > > > \tString stat=\"DELETE FROM org_ban WHERE \\\"id\\\" = ?\";\n> > > > > > > > > > \tString delid = \"4\";\n> > > > > > > > > > \tpublic PrepStatTest() throws ClassNotFoundException,\n> > > > > > > > > > FileNotFoundException, IOException, SQLException\n> > > > > > > > > > \t{\n> > > > > > > > > > \t\tClass.forName(\"org.postgresql.Driver\");\n> > > > > > > > > > \t\tdb =\n> > > > > > > > > > DriverManager.getConnection(\"jdbc:postgresql://spnew/snpe\n> > > > > > > > > >\", \"snpe\", \"snpe\");\n> > > > > > > > > > \t\tPreparedStatement st = db.prepareStatement(stat);\n> > > > > > > > > > \t\tst.setString(1, delid);\n> > > > > > > > > > \t\tint rowsDeleted = st.executeUpdate();\n> > > > > > > > > > \t\tSystem.out.println(\"Rows deleted \" + rowsDeleted);\n> > > > > > > > > > \t\tdb.commit();\n> > > > > > > > > > \t\tst.close();\n> > > > > > > > > > \t\tdb.close();\n> > > > > > > > > > \t}\n> > > > > > > > > >\n> > > > > > > > > > \tpublic static void main(String args[])\n> > > > > > > > > > \t{\n> > > > > > 
> > > > \t\ttry\n> > > > > > > > > > \t\t{\n> > > > > > > > > > \t\t\tPrepStatTest test = new PrepStatTest();\n> > > > > > > > > > \t\t}\n> > > > > > > > > > \t\tcatch (Exception ex)\n> > > > > > > > > > \t\t{\n> > > > > > > > > > \t\t\tSystem.err.println(\"Exception caught.\\n\" + ex);\n> > > > > > > > > > \t\t\tex.printStackTrace();\n> > > > > > > > > > \t\t}\n> > > > > > > > > > \t}\n> > > > > > > > > > }\n> > > > > > > > > >\n> > > > > > > > > >\n> > > > > > > > > > ---------------------------(end of\n> > > > > > > > > > broadcast)--------------------------- TIP 3: if\n> > > > > > > > > > posting/reading through Usenet, please send an\n> > > > > > > > > > appropriate subscribe-nomail command to\n> > > > > > > > > > majordomo@postgresql.org so that your message can get\n> > > > > > > > > > through to the mailing list cleanly\n> > > > > > > >\n> > > > > > > > ---------------------------(end of\n> > > > > > > > broadcast)--------------------------- TIP 2: you can get off\n> > > > > > > > all lists at once with the unregister command (send\n> > > > > > > > \"unregister YourEmailAddressHere\" to\n> > > > > > > > majordomo@postgresql.org)\n> > > > > > >\n> > > > > > > ---------------------------(end of\n> > > > > > > broadcast)--------------------------- TIP 4: Don't 'kill -9'\n> > > > > > > the postmaster\n> > > > > >\n> > > > > > ---------------------------(end of\n> > > > > > broadcast)--------------------------- TIP 6: Have you searched\n> > > > > > our list archives?\n> > > > > >\n> > > > > > http://archives.postgresql.org\n\n",
"msg_date": "Fri, 6 Sep 2002 18:45:27 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "Re: JDBC 7.3 dev (Java 2 SDK 1.4.0)"
},
{
"msg_contents": "Hello Barry,\n The JDBC driver should detect the autocommit setting (off or on) and set its autoCommit field\nwhen it opens the connection.\nregards\nOn Friday 06 September 2002 06:52 pm, Barry Lind wrote:\n> Haris,\n>\n> You can't use jdbc (and probably most other postgres clients) with\n> autocommit in postgresql.conf turned off.\n>\n> Hackers,\n>\n> How should client interfaces handle this new autocommit feature? Is it\n> best to just issue a set at the beginning of the connection to ensure\n> that it is always on?\n>\n> thanks,\n> --Barry\n>\n> snpe wrote:\n> >Hi Dave,\n> >That is same.Program work with and without quote but row don't deleted.\n> >Postgresql is 7.3 beta (from cvs) and parameter autocommit in\n>\n> postgresql.conf\n>\n> >is off (no auto commit).\n> >I am tried with db.autocommit(true) after getConnection, but no success\n> >\n> >I thin that is bug in JDBC\n> >PGSql 7.3 beta have new features autocommit on/off and JDBC driver\n>\n> don't work\n>\n> >with autocommit off\n> >\n> >Thanks\n> >\n> >P.S\n> >I am play ith Oracle JDeveloper 9i and Postgresql and I get error in\n>\n> prepared\n>\n> >statement like this error :\n> >(oracle.jbo.SQLStmtException) JBO-27123: SQL error during call statement\n> >preparation. Statement: DELETE FROM org_ban WHERE \"id\"=?\n> >\n> >and pgsqlerror is :\n> >(org.postgresql.util.PSQLException) Malformed stmt [DELETE FROM\n>\n> org_ban WHERE\n>\n> >\"id\"=?] usage : {[? =] call <some_function> ([? 
[,?]*]) }\n> >\n> >I think that JDeveloper call CallableStatement for insert or delete\n>\n> (select\n>\n> >and update work fine), but I don't know how.\n> >\n> >On Friday 06 September 2002 04:35 pm, Dave Cramer wrote:\n> >>Remove the quotes around id, and let me know what happens\n> >>\n> >>Dave\n> >>\n> >>On Fri, 2002-09-06 at 10:52, snpe wrote:\n> >>>Hello Dave,\n> >>> There isn't any error.Program write 'Rows deleted 1', but row hasn't\n> >>>been deleted\n> >>>\n> >>>Thanks\n> >>>Haris Peco\n> >>>\n> >>>On Friday 06 September 2002 04:05 pm, Dave Cramer wrote:\n> >>>>Harris,\n> >>>>\n> >>>>What error do you get?\n> >>>>\n> >>>>Also you don't need the quotes around id\n> >>>>\n> >>>>Dave\n> >>>>\n> >>>>On Fri, 2002-09-06 at 10:06, snpe wrote:\n> >>>>>Hello,\n> >>>>> I have simple table with column ID and values '4' in this.\n> >>>>>I user 7.3 beta1 (from cvs 05.09.2002) and autocommit off in\n> >>>>>postgresql.conf. Next program don't work .\n> >>>>>I am tried with compiled postgresql.jar form CVS and with\n> >>>>>pg73b1jdbc3.jar from 05.09.2002 on jdbc.postgresql.org\n> >>>>>\n> >>>>>What is wrong ?\n> >>>>>\n> >>>>>regards\n> >>>>>Haris Peco\n> >>>>>import java.io.*;\n> >>>>>import java.sql.*;\n> >>>>>import java.text.*;\n> >>>>>\n> >>>>>public class PrepStatTest\n> >>>>>{\n> >>>>>\tConnection db;\n> >>>>>\tString stat=\"DELETE FROM org_ban WHERE \\\"id\\\" = ?\";\n> >>>>>\tString delid = \"4\";\n> >>>>>\tpublic PrepStatTest() throws ClassNotFoundException,\n> >>>>>FileNotFoundException, IOException, SQLException\n> >>>>>\t{\n>\n> \tClass.forName(\"org.postgresql.Driver\");\n>\n> \tdb = DriverManager.getConnection(\"jdbc:postgresql://spnew/snpe\",\n>\n> >>>>>\"snpe\", \"snpe\");\n>\n> \tPreparedStatement st = db.prepareStatement(stat);\n>\n> >>>>> \t\tst.setString(1, delid);\n> >>>>> \t\tint rowsDeleted = st.executeUpdate();\n>\n> \tSystem.out.println(\"Rows deleted \" + rowsDeleted);\n>\n> \tdb.commit();\n>\n> \tst.close();\n>\n> \tdb.close();\n>\n> 
>>>>>\t}\n> >>>>>\n> >>>>>\tpublic static void main(String args[])\n> >>>>>\t{\n>\n> \ttry\n>\n> \t{\n>\n> \t\tPrepStatTest test = new PrepStatTest();\n>\n> \t}\n>\n> \tcatch (Exception ex)\n>\n> \t{\n>\n> \t\tSystem.err.println(\"Exception caught.\\n\" + ex);\n>\n> \t\tex.printStackTrace();\n>\n> \t}\n>\n> >>>>>\t}\n> >>>>>}\n> >>>>>\n> >>>>>\n> >>>>>---------------------------(end of\n> >>>>>broadcast)--------------------------- TIP 3: if posting/reading\n> >>>>>through Usenet, please send an appropriate subscribe-nomail command\n> >>>>>to majordomo@postgresql.org so that your message can get through to\n> >>>>>the mailing list cleanly\n> >>>\n> >>>---------------------------(end of\n> >>> broadcast)--------------------------- TIP 2: you can get off all lists\n> >>> at once with the unregister command (send \"unregister\n> >>> YourEmailAddressHere\" to majordomo@postgresql.org)\n> >>\n> >>---------------------------(end of broadcast)---------------------------\n> >>TIP 4: Don't 'kill -9' the postmaster\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 3: if posting/reading through Usenet, please send an appropriate\n> >subscribe-nomail command to majordomo@postgresql.org so that your\n> >message can get through to the mailing list cleanly\n\n",
"msg_date": "Fri, 6 Sep 2002 19:37:15 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "Haris,\n\nYou can't use jdbc (and probably most other postgres clients) with\nautocommit in postgresql.conf turned off.\n\nHackers,\n\nHow should client interfaces handle this new autocommit feature? Is it\nbest to just issue a set at the beginning of the connection to ensure\nthat it is always on?\n\nthanks,\n--Barry\n\nsnpe wrote:\n >Hi Dave,\n >That is same.Program work with and without quote but row don't deleted.\n >Postgresql is 7.3 beta (from cvs) and parameter autocommit in postgresql.conf\n >is off (no auto commit).\n >I am tried with db.autocommit(true) after getConnection, but no success\n >\n >I thin that is bug in JDBC\n >PGSql 7.3 beta have new features autocommit on/off and JDBC driver don't work\n >with autocommit off\n >\n >Thanks\n >\n >P.S\n >I am play ith Oracle JDeveloper 9i and Postgresql and I get error in prepared\n >statement like this error :\n >(oracle.jbo.SQLStmtException) JBO-27123: SQL error during call statement\n >preparation. Statement: DELETE FROM org_ban WHERE \"id\"=?\n >\n >and pgsqlerror is :\n >(org.postgresql.util.PSQLException) Malformed stmt [DELETE FROM org_ban WHERE\n >\"id\"=?] usage : {[? =] call <some_function> ([? [,?]*]) }\n >\n >I think that JDeveloper call CallableStatement for insert or delete (select\n >and update work fine), but I don't know how.\n >\n >On Friday 06 September 2002 04:35 pm, Dave Cramer wrote:\n >\n >>Remove the quotes around id, and let me know what happens\n >>\n >>Dave\n >>\n >>On Fri, 2002-09-06 at 10:52, snpe wrote:\n >>\n >>>Hello Dave,\n >>> There isn't any error.Program write 'Rows deleted 1', but row hasn't\n >>>been deleted\n >>>\n >>>Thanks\n >>>Haris Peco\n >>>\n >>>On Friday 06 September 2002 04:05 pm, Dave Cramer wrote:\n >>>\n >>>>Harris,\n >>>>\n >>>>What error do you get?\n >>>>\n >>>>Also you don't need the quotes around id\n >>>>\n >>>>Dave\n >>>>\n >>>>On Fri, 2002-09-06 at 10:06, snpe wrote:\n >>>>\n >>>>>Hello,\n >>>>> I have simple table with column ID and values '4' in this.\n >>>>>I user 7.3 beta1 (from cvs 05.09.2002) and autocommit off in\n >>>>>postgresql.conf. Next program don't work .\n >>>>>I am tried with compiled postgresql.jar form CVS and with\n >>>>>pg73b1jdbc3.jar from 05.09.2002 on jdbc.postgresql.org\n >>>>>\n >>>>>What is wrong ?\n >>>>>\n >>>>>regards\n >>>>>Haris Peco\n >>>>>import java.io.*;\n >>>>>import java.sql.*;\n >>>>>import java.text.*;\n >>>>>\n >>>>>public class PrepStatTest\n >>>>>{\n >>>>>\tConnection db;\n >>>>>\tString stat=\"DELETE FROM org_ban WHERE \\\"id\\\" = ?\";\n >>>>>\tString delid = \"4\";\n >>>>>\tpublic PrepStatTest() throws ClassNotFoundException,\n >>>>>FileNotFoundException, IOException, SQLException\n >>>>>\t{\n >>>>>\t\tClass.forName(\"org.postgresql.Driver\");\n >>>>>\t\tdb = DriverManager.getConnection(\"jdbc:postgresql://spnew/snpe\",\n >>>>>\"snpe\", \"snpe\");\n >>>>>\t\tPreparedStatement st = db.prepareStatement(stat);\n >>>>>\t\tst.setString(1, delid);\n >>>>>\t\tint rowsDeleted = st.executeUpdate();\n >>>>>\t\tSystem.out.println(\"Rows deleted \" + rowsDeleted);\n >>>>>\t\tdb.commit();\n >>>>>\t\tst.close();\n >>>>>\t\tdb.close();\n >>>>>\t}\n >>>>>\n >>>>>\tpublic static void main(String args[])\n >>>>>\t{\n >>>>>\t\ttry\n >>>>>\t\t{\n >>>>>\t\t\tPrepStatTest test = new PrepStatTest();\n >>>>>\t\t}\n >>>>>\t\tcatch (Exception ex)\n >>>>>\t\t{\n >>>>>\t\t\tSystem.err.println(\"Exception caught.\\n\" + ex);\n >>>>>\t\t\tex.printStackTrace();\n >>>>>\t\t}\n >>>>>\t}\n >>>>>}\n",
"msg_date": "Fri, 06 Sep 2002 16:30:27 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "Barry Lind wrote:\n> Haris,\n> \n> You can't use jdbc (and probably most other postgres clients) with\n> autocommit in postgresql.conf turned off.\n> \n> Hackers,\n> \n> How should client interfaces handle this new autocommit feature? Is it\n> best to just issue a set at the beginning of the connection to ensure\n> that it is always on?\n\nYes, I thought that was the best fix for apps that can't deal with\nautocommit being off.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 6 Sep 2002 20:55:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "On Saturday 07 September 2002 02:55 am, Bruce Momjian wrote:\n> Barry Lind wrote:\n> > Haris,\n> >\n> > You can't use jdbc (and probably most other postgres clients) with\n> > autocommit in postgresql.conf turned off.\n> >\n> > Hackers,\n> >\n> > How should client interfaces handle this new autocommit feature? Is it\n> > best to just issue a set at the beginning of the connection to ensure\n> > that it is always on?\n>\n> Yes, I thought that was the best fix for apps that can't deal with\n> autocommit being off.\nCan client get information from backend for autocommit (on or off) and that\nwork like psql ?\n\n\n",
"msg_date": "Sat, 7 Sep 2002 14:59:31 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "snpe wrote:\n> On Saturday 07 September 2002 02:55 am, Bruce Momjian wrote:\n> > Barry Lind wrote:\n> > > Haris,\n> > >\n> > > You can't use jdbc (and probably most other postgres clients) with\n> > > autocommit in postgresql.conf turned off.\n> > >\n> > > Hackers,\n> > >\n> > > How should client interfaces handle this new autocommit feature? Is it\n> > > best to just issue a set at the beginning of the connection to ensure\n> > > that it is always on?\n> >\n> > Yes, I thought that was the best fix for apps that can't deal with\n> > autocommit being off.\n> Can client get information from backend for autocommit (on or off) and that\n> work like psql ?\n\nSure, you can do SHOW autocommit.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 7 Sep 2002 10:07:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] problem with new autocommit config parameter and"
},
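As Bruce notes above, a client can ask the server with SHOW autocommit. A minimal sketch of how a driver might consume the answer; the query itself needs a live server, so only the value handling is shown as runnable code, and the accepted spellings are an assumption about how GUC booleans are rendered, not confirmed by this thread:

```java
class AutocommitProbe {
    // The value would come from something like:
    //   ResultSet rs = stmt.executeQuery("SHOW autocommit");
    //   rs.next(); String value = rs.getString(1);
    // Map the reported setting string to a boolean; the list of
    // spellings accepted here is an assumption.
    static boolean isOn(String value) {
        switch (value.trim().toLowerCase()) {
            case "on": case "true": case "yes": case "1":
                return true;
            default:
                return false;
        }
    }
}
```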
{
"msg_contents": "On Saturday 07 September 2002 04:07 pm, Bruce Momjian wrote:\n> snpe wrote:\n> > On Saturday 07 September 2002 02:55 am, Bruce Momjian wrote:\n> > > Barry Lind wrote:\n> > > > Haris,\n> > > >\n> > > > You can't use jdbc (and probably most other postgres clients) with\n> > > > autocommit in postgresql.conf turned off.\n> > > >\n> > > > Hackers,\n> > > >\n> > > > How should client interfaces handle this new autocommit feature? Is\n> > > > it best to just issue a set at the beginning of the connection to\n> > > > ensure that it is always on?\n> > >\n> > > Yes, I thought that was the best fix for apps that can't deal with\n> > > autocommit being off.\n> >\n> > Can client get information from backend for autocommit (on or off) and\n> > that work like psql ?\n>\n> Sure, you can do SHOW autocommit.\nI am interesting with JDBC driver.\nWhen I make connection in base I want that driver find autocommit mode \n(from postgresql.conf or call in backend) and set mode true or false \n\nthanks\n\n",
"msg_date": "Sat, 7 Sep 2002 20:57:55 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "snpe wrote:\n>>Sure, you can do SHOW autocommit.\n> \n> I am interesting with JDBC driver.\n> When I make connection in base I want that driver find autocommit mode \n> (from postgresql.conf or call in backend) and set mode true or false \n> \n\nYou could make a call to current_setting in the backend.\n\nI don't know anything about the jdbc driver, but if it's written in C \nsomething like this should work:\n\ntext *autocommit = DatumGetTextP(DirectFunctionCall1(current_setting,\n CStringGetDatum(\"autocommit\")));\n\nWould this work?\n\nJoe\n\n",
"msg_date": "Sat, 07 Sep 2002 12:12:36 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] problem with new autocommit config parameter"
},
{
"msg_contents": "On Saturday 07 September 2002 09:12 pm, Joe Conway wrote:\n> snpe wrote:\n> >>Sure, you can do SHOW autocommit.\n> >\n> > I am interesting with JDBC driver.\n> > When I make connection in base I want that driver find autocommit mode\n> > (from postgresql.conf or call in backend) and set mode true or false\n>\n> You could make a call to current_setting in the backend.\n>\n> I don't know anything about the jdbc driver, but if it's written in C\n> something like this should work:\n>\n> text *autocommit = DatumGetTextP(DirectFunctionCall1(current_setting,\n> CStringGetDatum(\"autocommit\")));\n>\n> Would this work?\n\nYes.But I don't know like call in JDBC.\n\n",
"msg_date": "Sat, 7 Sep 2002 22:09:45 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] problem with new autocommit config parameter"
},
{
"msg_contents": "Yes it is possible, but according to the jdbc spec, a new connection in \njdbc is always initialized to autocommit=true. So jdbc needs to ignore \nwhatever the current server setting is and reset to autocommit=true.\n\n--Barry\n\nsnpe wrote:\n> On Saturday 07 September 2002 02:55 am, Bruce Momjian wrote:\n> \n>>Barry Lind wrote:\n>>\n>>>Haris,\n>>>\n>>>You can't use jdbc (and probably most other postgres clients) with\n>>>autocommit in postgresql.conf turned off.\n>>>\n>>>Hackers,\n>>>\n>>>How should client interfaces handle this new autocommit feature? Is it\n>>>best to just issue a set at the beginning of the connection to ensure\n>>>that it is always on?\n>>\n>>Yes, I thought that was the best fix for apps that can't deal with\n>>autocommit being off.\n> \n> Can client get information from backend for autocommit (on or off) and that\n> work like psql ?\n> \n> \n> \n\n",
"msg_date": "Sat, 07 Sep 2002 15:39:46 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] problem with new autocommit config parameter"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Barry Lind wrote:\n>> How should client interfaces handle this new autocommit feature? Is it\n>> best to just issue a set at the beginning of the connection to ensure\n>> that it is always on?\n\n> Yes, I thought that was the best fix for apps that can't deal with\n> autocommit being off.\n\nIf autocommit=off really seriously breaks JDBC then I don't think a\nsimple SET command at the start of a session is going to do that much\nto improve robustness. What if the user issues another SET to turn it\non?\n\nI'd suggest just documenting that it is broken and you can't use it,\nuntil such time as you can get it fixed. Band-aids that only partially\ncover the problem don't seem worth the effort to me.\n\nIn general I think that autocommit=off is probably going to be very\npoorly supported in the 7.3 release. We can document it as being\n\"work in progress, use at your own risk\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 Sep 2002 14:53:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc "
},
{
"msg_contents": "On Mon, 2002-09-09 at 17:04, snpe wrote:\n\n> I'm use 'autocommit=false' and have problem with psql\n> When any commnad is lost, then next commnad get error for transactions\n> (simple select command).BTW\n> \n> snpe> select * from org_ba;\n> ERROR: relation org_ba does not exists\n> snpe> select * from org_ban;\n> ERROR: current transactions is aborted, queries ignored until end of \n> transaction block\n> snpe> rollback;\n> ROLLBACK\n> snpe> select * from org_ban;\n\nMaybe I'm missing something, but isn't that the expected behaviour when\nautocommit is turned off?\n \n-- \n Rod Taylor\n\n",
"msg_date": "09 Sep 2002 17:03:46 -0400",
"msg_from": "Rod Taylor <rbt@rbt.ca>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "On Monday 09 September 2002 08:53 pm, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Barry Lind wrote:\n> >> How should client interfaces handle this new autocommit feature? Is it\n> >> best to just issue a set at the beginning of the connection to ensure\n> >> that it is always on?\n> >\n> > Yes, I thought that was the best fix for apps that can't deal with\n> > autocommit being off.\n>\n> If autocommit=off really seriously breaks JDBC then I don't think a\n> simple SET command at the start of a session is going to do that much\n> to improve robustness. What if the user issues another SET to turn it\n> on?\n>\n> I'd suggest just documenting that it is broken and you can't use it,\n> until such time as you can get it fixed. Band-aids that only partially\n> cover the problem don't seem worth the effort to me.\n>\n> In general I think that autocommit=off is probably going to be very\n> poorly supported in the 7.3 release. We can document it as being\n> \"work in progress, use at your own risk\".\n>\n\nI'm use 'autocommit=false' and have problem with psql\nWhen any commnad is lost, then next commnad get error for transactions\n(simple select command).BTW\n\nsnpe> select * from org_ba;\nERROR: relation org_ba does not exists\nsnpe> select * from org_ban;\nERROR: current transactions is aborted, queries ignored until end of \ntransaction block\nsnpe> rollback;\nROLLBACK\nsnpe> select * from org_ban;\n\nthis command is ok.\nregards\nHaris Peco\n",
"msg_date": "Mon, 9 Sep 2002 23:04:38 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "\nOn Tue, 10 Sep 2002, snpe wrote:\n\n> On Monday 09 September 2002 11:03 pm, Rod Taylor wrote:\n> > On Mon, 2002-09-09 at 17:04, snpe wrote:\n> > > I'm use 'autocommit=false' and have problem with psql\n> > > When any commnad is lost, then next commnad get error for transactions\n> > > (simple select command).BTW\n> > >\n> > > snpe> select * from org_ba;\n> > > ERROR: relation org_ba does not exists\n> > > snpe> select * from org_ban;\n> > > ERROR: current transactions is aborted, queries ignored until end of\n> > > transaction block\n> > > snpe> rollback;\n> > > ROLLBACK\n> > > snpe> select * from org_ban;\n> >\n> > Maybe I'm missing something, but isn't that the expected behaviour when\n> > autocommit is turned off?\n> I get this every time.When exists command with error next command don't work\n> without explicit rollback and commit (this is not for psql, this error get in\n> with JDeveloper - JDBC driver).When autocommit=ture all is fine\n\nIt starts a transaction, failes the first command and goes into the\nerror has occurred in this transaction state. Seems like reasonable\nbehavior.\n\n\n",
"msg_date": "Mon, 9 Sep 2002 18:05:00 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "On Monday 09 September 2002 11:03 pm, Rod Taylor wrote:\n> On Mon, 2002-09-09 at 17:04, snpe wrote:\n> > I'm use 'autocommit=false' and have problem with psql\n> > When any commnad is lost, then next commnad get error for transactions\n> > (simple select command).BTW\n> >\n> > snpe> select * from org_ba;\n> > ERROR: relation org_ba does not exists\n> > snpe> select * from org_ban;\n> > ERROR: current transactions is aborted, queries ignored until end of\n> > transaction block\n> > snpe> rollback;\n> > ROLLBACK\n> > snpe> select * from org_ban;\n>\n> Maybe I'm missing something, but isn't that the expected behaviour when\n> autocommit is turned off?\nI get this every time.When exists command with error next command don't work \nwithout explicit rollback and commit (this is not for psql, this error get in \nwith JDeveloper - JDBC driver).When autocommit=ture all is fine\n\nharis peco\n\n",
"msg_date": "Tue, 10 Sep 2002 03:05:21 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "On Tuesday 10 September 2002 03:05 am, Stephan Szabo wrote:\n> On Tue, 10 Sep 2002, snpe wrote:\n> > On Monday 09 September 2002 11:03 pm, Rod Taylor wrote:\n> > > On Mon, 2002-09-09 at 17:04, snpe wrote:\n> > > > I'm use 'autocommit=false' and have problem with psql\n> > > > When any commnad is lost, then next commnad get error for\n> > > > transactions (simple select command).BTW\n> > > >\n> > > > snpe> select * from org_ba;\n> > > > ERROR: relation org_ba does not exists\n> > > > snpe> select * from org_ban;\n> > > > ERROR: current transactions is aborted, queries ignored until end of\n> > > > transaction block\n> > > > snpe> rollback;\n> > > > ROLLBACK\n> > > > snpe> select * from org_ban;\n> > >\n> > > Maybe I'm missing something, but isn't that the expected behaviour when\n> > > autocommit is turned off?\n> >\n> > I get this every time.When exists command with error next command don't\n> > work without explicit rollback and commit (this is not for psql, this\n> > error get in with JDeveloper - JDBC driver).When autocommit=ture all is\n> > fine\n>\n> It starts a transaction, failes the first command and goes into the\n> error has occurred in this transaction state. Seems like reasonable\n> behavior.\nSelect command don't start transaction - it is not good\nError command don't start transaction - nothing hapen, only typing error\n\nregards\nharis peco\n\n",
"msg_date": "Tue, 10 Sep 2002 04:08:01 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "\nOn Tue, 10 Sep 2002, snpe wrote:\n\n> On Tuesday 10 September 2002 03:05 am, Stephan Szabo wrote:\n> > On Tue, 10 Sep 2002, snpe wrote:\n> > > On Monday 09 September 2002 11:03 pm, Rod Taylor wrote:\n> > > > On Mon, 2002-09-09 at 17:04, snpe wrote:\n> > > > > I'm use 'autocommit=false' and have problem with psql\n> > > > > When any commnad is lost, then next commnad get error for\n> > > > > transactions (simple select command).BTW\n> > > > >\n> > > > > snpe> select * from org_ba;\n> > > > > ERROR: relation org_ba does not exists\n> > > > > snpe> select * from org_ban;\n> > > > > ERROR: current transactions is aborted, queries ignored until end of\n> > > > > transaction block\n> > > > > snpe> rollback;\n> > > > > ROLLBACK\n> > > > > snpe> select * from org_ban;\n> > > >\n> > > > Maybe I'm missing something, but isn't that the expected behaviour when\n> > > > autocommit is turned off?\n> > >\n> > > I get this every time.When exists command with error next command don't\n> > > work without explicit rollback and commit (this is not for psql, this\n> > > error get in with JDeveloper - JDBC driver).When autocommit=ture all is\n> > > fine\n> >\n> > It starts a transaction, failes the first command and goes into the\n> > error has occurred in this transaction state. Seems like reasonable\n> > behavior.\n\n> Select command don't start transaction - it is not good\n\nI think you need more justification than \"it is not good.\" If I do a\nsequence of select statements in autocommit=false, I'd expect the same\nconsistancy as if I'd done\nbegin;\nselect ...;\nselect ...;\n\n> Error command don't start transaction - nothing hapen, only typing error\n\nIf you do an insert that violates a constraint, does that start an\ntransaction or not? I think we have to choose before we start doing the\nstatement not after.\n\n",
"msg_date": "Mon, 9 Sep 2002 19:16:47 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "snpe <snpe@snpe.co.yu> writes:\n> I'm use 'autocommit=false' and have problem with psql\n> When any commnad is lost, then next commnad get error for transactions\n> (simple select command).BTW\n\n> snpe> select * from org_ba;\n> ERROR: relation org_ba does not exists\n> snpe> select * from org_ban;\n> ERROR: current transactions is aborted, queries ignored until end of \n> transaction block\n\nUm, what's wrong with that?\n\nIt seems to me that an application that is using autocommit=off will\nexpect the first SELECT to start a transaction block. If the first\nSELECT fails, then subsequent commands *should* fail until you commit\nor rollback. Certainly if you did an explicit BEGIN before the first\nSELECT, the above is what you'd get --- why should implicit BEGIN\nwork differently?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 Sep 2002 22:27:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc "
},
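The semantics Tom describes — with autocommit off, the first statement implicitly opens a transaction block, and a failure then poisons the block until COMMIT or ROLLBACK — can be sketched as a small state machine. This is a toy model of the backend behavior, not real server or driver code:

```java
class SessionModel {
    enum State { IDLE, IN_TRANSACTION, ABORTED }

    private State state = State.IDLE;

    // Model one statement arriving with autocommit=off; 'fails' marks a
    // statement that errors out (e.g. selecting a nonexistent table).
    String run(String sql, boolean fails) {
        String verb = sql.trim().split("\\s+")[0].toUpperCase();
        if (verb.equals("COMMIT") || verb.equals("ROLLBACK")) {
            state = State.IDLE;            // ends the transaction block
            return verb;
        }
        if (state == State.ABORTED)        // queries ignored until block ends
            return "ERROR: current transaction is aborted";
        if (state == State.IDLE)
            state = State.IN_TRANSACTION;  // the implicit BEGIN
        if (fails) {
            state = State.ABORTED;
            return "ERROR";
        }
        return "OK";
    }
}
```

Running snpe's psql transcript through this model reproduces the sequence he reported: the failed SELECT aborts the implicitly started block, later SELECTs are rejected, and only ROLLBACK makes the session usable again.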
{
"msg_contents": "On Mon, 9 Sep 2002, Tom Lane wrote:\n\n> snpe <snpe@snpe.co.yu> writes:\n>\n> > snpe> select * from org_ba;\n> > ERROR: relation org_ba does not exists\n> > snpe> select * from org_ban;\n> > ERROR: current transactions is aborted, queries ignored until end of\n> > transaction block\n>\n> Um, what's wrong with that?\n>\n> It seems to me that an application that is using autocommit=off will\n> expect the first SELECT to start a transaction block.\n\nYup. In fact, the standard (at least, insofar as I have information\nrelating to it), specifies that the first SELECT statement above\n*must* start a transaction.\n\n From Date's _A Guide to the SQL Standard_ (Fourth Edition):\n\n An SQL-transaction is initiated when the relevant SQL-agent executes\n a \"transaction-initiating\" SQL Statement (see below) and the\n SQL-agent does not already have an SQL-transaction in progress.\n ...\n The following SQL statements are _not_ transaction-initiating:\n\n\tCONNECT\n\tSET CONNECTION\n\tDISCONNECT\n\tSET SESSION AUTHORIZATION\n\tSET CATALOG\n\tSET SCHEMA\n\tSET NAMES\n\tSET TIME ZONE\n\tSET TRANSACTION\n\tSET CONSTRAINTS\n\tCOMMIT\n\tROLLBACK\n\tGET DIAGNOSTICS\n\n Nor, of course, are the nonexecutable statements DECLARE CURSOR,\n DECLAR LOCAL TEMPORARY TABLE, BEGIN DECLARE SECTION, SEND DECLARE\n SECTIONS, and WHENEVER.\n\nSo SELECT ought always to initiate a transaction, if one is not already\nin progress. If auto-commit is enabled, of course, that statement may\nbe committed immediately after execution, if it doesn't fail.\n\nAs far as the JDBC driver goes, I'm not too sure of the issues here, but\nit should certainly be ensuring that autocommit is enabled, as per the\nJDBC specification, when a new connection is created. 
I see no reason\nthis couldn't be done with a \"SET AUTOCOMMIT TO OFF\" or whatever, if\nthat's necessary to override a possible configuration file setting.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Tue, 10 Sep 2002 11:51:21 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "On Mon, 9 Sep 2002, Tom Lane wrote:\n\n> If autocommit=off really seriously breaks JDBC then I don't think a\n> simple SET command at the start of a session is going to do that much\n> to improve robustness. What if the user issues another SET to turn it\n> on?\n\nYou mean, to turn it off again? The driver should catch this, in theory.\n\nIn practice we could probably live with saying, \"Don't use SET\nAUTOCOMMIT; use the methods on the Connection class instead.\"\n\nProbably the driver should be changed for 7.3 just to use the server's\nSET AUTOCOMMIT functionality....\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Tue, 10 Sep 2002 11:59:55 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] problem with new autocommit config parameter"
},
{
"msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> Probably the driver should be changed for 7.3 just to use the server's\n> SET AUTOCOMMIT functionality....\n\nThat should happen eventually, IMHO, but I am not going to tell the JDBC\ndevelopers that they must make it happen for 7.3. They've already got a\npile of much-higher-priority things to fix for 7.3, like schema\ncompatibility and dropped-column handling.\n\nMy feeling about the original complaint is very simple: setting server\nautocommit to off is not supported with JDBC (nor is it fully supported\nwith any other of our frontend clients, right at this instant, though\nthat may improve somewhat before 7.3 release). If you don't like it,\ntough; contribute the required fixes or stop complaining. Someone else\nwill fix it when they get around to it, but there are bigger problems to\ndeal with first. Autocommit is only a work-in-progress today, not\nsomething that we promise will do anything useful for anybody.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 Sep 2002 23:27:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] problem with new autocommit config parameter and jdbc "
},
{
"msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> From Date's _A Guide to the SQL Standard_ (Fourth Edition):\n> ...\n> The following SQL statements are _not_ transaction-initiating:\n\n> \tCONNECT\n> \tSET CONNECTION\n> \tDISCONNECT\n> \tSET SESSION AUTHORIZATION\n> \tSET CATALOG\n> \tSET SCHEMA\n> \tSET NAMES\n> \tSET TIME ZONE\n> \tSET TRANSACTION\n> \tSET CONSTRAINTS\n> \tCOMMIT\n> \tROLLBACK\n> \tGET DIAGNOSTICS\n\nHm. This brings up a thought I've been turning over for the past\ncouple days. As of CVS tip, SET commands *do* initiate transactions\nif you have autocommit off. By your reading of Date, this is not\nspec compliant for certain SET variables: a SET not already within\na transaction should not start a transaction block, at least for the\nvariables mentioned above. It occurs to me that it'd be reasonable\nto make it act that way for all SET variables.\n\nAn example of how this would simplify life: consider the problem of\na client that wants to ensure autocommit is on. A simple\n\tSET autocommit TO on;\ndoesn't work at the moment: if autocommit is off, then you'll need\nto issue a COMMIT as well to get out of the implicitly started\ntransaction. But you don't want to just issue a COMMIT, because\nyou'll get a nasty ugly WARNING message on stderr if indeed autocommit\nwas on already. The only warning-free way to issue a SET right now\nif you are uncertain about autocommit status is\n\tBEGIN; SET .... ; COMMIT;\nBlech. But if SET doesn't start a transaction then you can still\njust do SET. This avoids some changes we'll otherwise have to make\nin libpq startup, among other places.\n\nDoes anyone see any cases where it's important for SET to start\na transaction? (Of course, if you are already *in* a transaction,\nthe SET will be part of that transaction. 
The question is whether\nwe want SET to trigger an implicit BEGIN or not.)\n\n> Nor, of course, are the nonexecutable statements DECLARE CURSOR,\n> DECLAR LOCAL TEMPORARY TABLE, BEGIN DECLARE SECTION, SEND DECLARE\n> SECTIONS, and WHENEVER.\n\nHmm. I think the spec's notion of DECLARE must be different from ours.\nOur implementation of DECLARE CURSOR both declares and opens the cursor,\nand as such it *must* be transaction-initiating; else it's useless.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Sep 2002 09:40:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc "
},
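Tom's proposal amounts to a per-statement rule: SET (like the statements in Date's list) would not trigger the implicit BEGIN, while ordinary statements would. A minimal sketch of such a classification; the exact set of exempt verbs here is illustrative, not the actual 7.3 behavior:

```java
class ImplicitBegin {
    // Would this statement open an implicit transaction block when
    // autocommit is off and no block is in progress? The exempt list
    // below is an assumption based on the discussion, not backend code.
    static boolean triggersImplicitBegin(String sql) {
        String verb = sql.trim().split("\\s+")[0].toUpperCase();
        switch (verb) {
            case "SET": case "SHOW": case "COMMIT":
            case "ROLLBACK": case "BEGIN": case "END":
                return false;  // no implicit BEGIN (BEGIN is explicit)
            default:
                return true;   // SELECT, INSERT, UPDATE, DELETE, ...
        }
    }
}
```

Under this rule the warning-free client idiom Tom wants would be a bare `SET autocommit TO on;` with no surrounding BEGIN/COMMIT.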
{
"msg_contents": "On Tuesday 10 September 2002 04:16 am, Stephan Szabo wrote:\n> On Tue, 10 Sep 2002, snpe wrote:\n> > On Tuesday 10 September 2002 03:05 am, Stephan Szabo wrote:\n> > > On Tue, 10 Sep 2002, snpe wrote:\n> > > > On Monday 09 September 2002 11:03 pm, Rod Taylor wrote:\n> > > > > On Mon, 2002-09-09 at 17:04, snpe wrote:\n> > > > > > I'm use 'autocommit=false' and have problem with psql\n> > > > > > When any commnad is lost, then next commnad get error for\n> > > > > > transactions (simple select command).BTW\n> > > > > >\n> > > > > > snpe> select * from org_ba;\n> > > > > > ERROR: relation org_ba does not exists\n> > > > > > snpe> select * from org_ban;\n> > > > > > ERROR: current transactions is aborted, queries ignored until end\n> > > > > > of transaction block\n> > > > > > snpe> rollback;\n> > > > > > ROLLBACK\n> > > > > > snpe> select * from org_ban;\n> > > > >\n> > > > > Maybe I'm missing something, but isn't that the expected behaviour\n> > > > > when autocommit is turned off?\n> > > >\n> > > > I get this every time.When exists command with error next command\n> > > > don't work without explicit rollback and commit (this is not for\n> > > > psql, this error get in with JDeveloper - JDBC driver).When\n> > > > autocommit=ture all is fine\n> > >\n> > > It starts a transaction, failes the first command and goes into the\n> > > error has occurred in this transaction state. 
Seems like reasonable\n> > > behavior.\n> >\n> > Select command don't start transaction - it is not good\n>\n> I think you need more justification than \"it is not good.\" If I do a\n> sequence of select statements in autocommit=false, I'd expect the same\n> consistancy as if I'd done\n> begin;\n> select ...;\n> select ...;\n>\nOk.You start transaction explicit and this is ok.\nBut simple SELECT don't start transaction.\n\n> > Error command don't start transaction - nothing hapen, only typing error\n>\n> If you do an insert that violates a constraint, does that start an\n> transaction or not? I think we have to choose before we start doing the\n> statement not after.\nThis is typeing error.Nothing happen.That is not transaction.\nI don't know that is possible, but before start transaction we need parsing \ncommand and select or any error don't start transaction\nThis is problem for every client (I know for JDBC)\nregards\nHaris Peco\n",
"msg_date": "Tue, 10 Sep 2002 15:43:24 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "> > > > It starts a transaction, failes the first command and goes into the\n> > > > error has occurred in this transaction state. Seems like reasonable\n> > > > behavior.\n> > >\n> > > Select command don't start transaction - it is not good\n> >\n> > I think you need more justification than \"it is not good.\" If I do a\n> > sequence of select statements in autocommit=false, I'd expect the same\n> > consistancy as if I'd done\n> > begin;\n> > select ...;\n> > select ...;\n> >\n> Ok.You start transaction explicit and this is ok.\n> But simple SELECT don't start transaction.\n\nActually someone post a bit from Date's book that implies it does.\nAnd, that's still not an justification, it's just a restating of same\nposition. I don't see any reason why the two should be different from\na data consistency standpoint, there might be one, but you haven't\ngiven any reasons.\n\n> > > Error command don't start transaction - nothing hapen, only typing error\n> >\n> > If you do an insert that violates a constraint, does that start an\n> > transaction or not? I think we have to choose before we start doing the\n> > statement not after.\n> This is typeing error.Nothing happen.That is not transaction.\n> I don't know that is possible, but before start transaction we need parsing\n> command and select or any error don't start transaction\n\nWhy not? AFAICT it should, the transaction is initiated a statement is\nrun and it fails. Now maybe we shouldn't be going into the wierd disabled\nstatement state, but that's a different argument entirely.\n\n\n",
"msg_date": "Tue, 10 Sep 2002 08:33:16 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "I am waiting for this thread to conclude before deciding exactly what to\ndo for the jdbc driver for 7.3. While using the 'set autocommit true'\nsyntax is nice when talking to a 7.3 server, the jdbc driver also needs\nto be backwardly compatible with 7.2 and 7.1 servers. So it may just be\neasier to continue with the current way of doing things, even in the 7.3\ncase.\n\nthanks,\n--Barry\n\nCurt Sampson wrote:\n > On Mon, 9 Sep 2002, Tom Lane wrote:\n >\n >\n >>If autocommit=off really seriously breaks JDBC then I don't think a\n >>simple SET command at the start of a session is going to do that much\n >>to improve robustness. What if the user issues another SET to turn it\n >>on?\n >\n >\n > You mean, to turn it off again? The driver should catch this, in theory.\n >\n > In practice we could probably live with saying, \"Don't use SET\n > AUTOCOMMIT; use the methods on the Connection class instead.\"\n >\n > Probably the driver should be changed for 7.3 just to use the server's\n > SET AUTOCOMMIT functionality....\n >\n > cjs\n\n\n",
"msg_date": "Tue, 10 Sep 2002 09:36:28 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] problem with new autocommit config parameter"
},
{
"msg_contents": "Tom Lane wrote:\n> An example of how this would simplify life: consider the problem of\n> a client that wants to ensure autocommit is on. A simple\n> \tSET autocommit TO on;\n> doesn't work at the moment: if autocommit is off, then you'll need\n> to issue a COMMIT as well to get out of the implicitly started\n> transaction. But you don't want to just issue a COMMIT, because\n> you'll get a nasty ugly WARNING message on stderr if indeed autocommit\n> was on already. The only warning-free way to issue a SET right now\n> if you are uncertain about autocommit status is\n> \tBEGIN; SET .... ; COMMIT;\n> Blech. But if SET doesn't start a transaction then you can still\n> just do SET. This avoids some changes we'll otherwise have to make\n> in libpq startup, among other places.\n> \n> Does anyone see any cases where it's important for SET to start\n> a transaction? (Of course, if you are already *in* a transaction,\n> the SET will be part of that transaction. The question is whether\n> we want SET to trigger an implicit BEGIN or not.)\n\nUh, well, because we now have SET's rollback in an aborted transaction,\nthere is an issue of whether the SET is part of the transaction or not. \nSeems it has to be for consistency with our rollback behavior.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 10 Sep 2002 13:42:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] problem with new autocommit config parameter and"
},
{
"msg_contents": "On Tue, 10 Sep 2002, Stephan Szabo wrote:\n\n> > > > > It starts a transaction, failes the first command and goes into the\n> > > > > error has occurred in this transaction state. Seems like reasonable\n> > > > > behavior.\n> > > >\n> > > > Select command don't start transaction - it is not good\n> > >\n> > > I think you need more justification than \"it is not good.\" If I do a\n> > > sequence of select statements in autocommit=false, I'd expect the same\n> > > consistancy as if I'd done\n> > > begin;\n> > > select ...;\n> > > select ...;\n> > >\n> > Ok.You start transaction explicit and this is ok.\n> > But simple SELECT don't start transaction.\n> \n> Actually someone post a bit from Date's book that implies it does.\n> And, that's still not an justification, it's just a restating of same\n> position. I don't see any reason why the two should be different from\n> a data consistency standpoint, there might be one, but you haven't\n> given any reasons.\n\nWhat if it's a select for update? IF that failed because of a timout on a \nlock, shouldn't the transaction fail? Or a select into? Either of those \nshould make a transaction fail, and they're just selects.\n\n",
"msg_date": "Tue, 10 Sep 2002 11:46:17 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> Does anyone see any cases where it's important for SET to start\n>> a transaction? (Of course, if you are already *in* a transaction,\n>> the SET will be part of that transaction. The question is whether\n>> we want SET to trigger an implicit BEGIN or not.)\n\n> Uh, well, because we now have SET's rollback in an aborted transaction,\n> there is an issue of whether the SET is part of the transaction or not. \n> Seems it has to be for consistency with our rollback behavior.\n\nYeah, it must be part of the transaction unless we want to reopen the\nSET-rollback can of worms (which I surely don't want to).\n\nHowever, a SET issued outside any pre-existing transaction block could\nform a self-contained transaction without any logical difficulty, even\nin autocommit-off mode. The question is whether that's more or less\nconvenient, or standards-conforming, than what we have.\n\nAn alternative that I'd really rather not consider is making SET's\nbehavior dependent on exactly which variable is being set ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Sep 2002 14:45:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] problem with new autocommit config parameter and jdbc "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> Does anyone see any cases where it's important for SET to start\n> >> a transaction? (Of course, if you are already *in* a transaction,\n> >> the SET will be part of that transaction. The question is whether\n> >> we want SET to trigger an implicit BEGIN or not.)\n> \n> > Uh, well, because we now have SET's rollback in an aborted transaction,\n> > there is an issue of whether the SET is part of the transaction or not. \n> > Seems it has to be for consistency with our rollback behavior.\n> \n> Yeah, it must be part of the transaction unless we want to reopen the\n> SET-rollback can of worms (which I surely don't want to).\n> \n> However, a SET issued outside any pre-existing transaction block could\n> form a self-contained transaction without any logical difficulty, even\n> in autocommit-off mode. The question is whether that's more or less\n> convenient, or standards-conforming, than what we have.\n\nThat seems messy. What you are saying is that if autocommit is off,\nthen in:\n\n\tSET x=1;\n\tUPDATE ...\n\tSET y=2;\n\tROLLBACK;\n\nthat the x=1 doesn't get rolled back bu the y=2 does? I can't see any\ngood logic for that.\n\n> An alternative that I'd really rather not consider is making SET's\n> behavior dependent on exactly which variable is being set ...\n\nAgreed. We discussed that in the SET rollback case and found it was\nmore trouble that it was worth.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 10 Sep 2002 14:49:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] problem with new autocommit config parameter and"
},
{
"msg_contents": "On Tue, 10 Sep 2002, scott.marlowe wrote:\n\n> On Tue, 10 Sep 2002, Stephan Szabo wrote:\n>\n> > > > > > It starts a transaction, failes the first command and goes into the\n> > > > > > error has occurred in this transaction state. Seems like reasonable\n> > > > > > behavior.\n> > > > >\n> > > > > Select command don't start transaction - it is not good\n> > > >\n> > > > I think you need more justification than \"it is not good.\" If I do a\n> > > > sequence of select statements in autocommit=false, I'd expect the same\n> > > > consistancy as if I'd done\n> > > > begin;\n> > > > select ...;\n> > > > select ...;\n> > > >\n> > > Ok.You start transaction explicit and this is ok.\n> > > But simple SELECT don't start transaction.\n> >\n> > Actually someone post a bit from Date's book that implies it does.\n> > And, that's still not an justification, it's just a restating of same\n> > position. I don't see any reason why the two should be different from\n> > a data consistency standpoint, there might be one, but you haven't\n> > given any reasons.\n>\n> What if it's a select for update? IF that failed because of a timout on a\n> lock, shouldn't the transaction fail? Or a select into? Either of those\n> should make a transaction fail, and they're just selects.\n\nYes, but I think it should still work the same as if it had failed in an\nexplicit transaction if autocommit is false (or was that directed at\nsomeone else).\n\n\n",
"msg_date": "Tue, 10 Sep 2002 12:31:09 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "On Tue, 10 Sep 2002, Stephan Szabo wrote:\n\n> On Tue, 10 Sep 2002, scott.marlowe wrote:\n> \n> > On Tue, 10 Sep 2002, Stephan Szabo wrote:\n> >\n> > > > > > > It starts a transaction, failes the first command and goes into the\n> > > > > > > error has occurred in this transaction state. Seems like reasonable\n> > > > > > > behavior.\n> > > > > >\n> > > > > > Select command don't start transaction - it is not good\n> > > > >\n> > > > > I think you need more justification than \"it is not good.\" If I do a\n> > > > > sequence of select statements in autocommit=false, I'd expect the same\n> > > > > consistancy as if I'd done\n> > > > > begin;\n> > > > > select ...;\n> > > > > select ...;\n> > > > >\n> > > > Ok.You start transaction explicit and this is ok.\n> > > > But simple SELECT don't start transaction.\n> > >\n> > > Actually someone post a bit from Date's book that implies it does.\n> > > And, that's still not an justification, it's just a restating of same\n> > > position. I don't see any reason why the two should be different from\n> > > a data consistency standpoint, there might be one, but you haven't\n> > > given any reasons.\n> >\n> > What if it's a select for update? IF that failed because of a timout on a\n> > lock, shouldn't the transaction fail? Or a select into? Either of those\n> > should make a transaction fail, and they're just selects.\n> \n> Yes, but I think it should still work the same as if it had failed in an\n> explicit transaction if autocommit is false (or was that directed at\n> someone else).\n\nSorry, I was agreeing with you, and disagreeing with the guy who was \nsaying that selects shouldn't start a transaction. Should have mentioned \nthat. :-)\n\n",
"msg_date": "Tue, 10 Sep 2002 13:50:44 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> That seems messy. What you are saying is that if autocommit is off,\n> then in:\n\n> \tSET x=1;\n> \tUPDATE ...\n> \tSET y=2;\n> \tROLLBACK;\n\n> that the x=1 doesn't get rolled back bu the y=2 does?\n\nYes, if you weren't in a transaction at the start.\n\n> I can't see any good logic for that.\n\nHow about \"the SQL spec requires it\"? Date seems to think it does,\nat least for some variables (of course we have lots of variables\nthat are not in the spec).\n\nI can't find anything very clear in the SQL92 or SQL99 documents,\nand I'm not at home at the moment to look at my copy of Date, but\nif Curt's reading is correct then we have spec precedent for acting\nthis way.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Sep 2002 15:55:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] problem with new autocommit config parameter and jdbc "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > That seems messy. What you are saying is that if autocommit is off,\n> > then in:\n> \n> > \tSET x=1;\n> > \tUPDATE ...\n> > \tSET y=2;\n> > \tROLLBACK;\n> \n> > that the x=1 doesn't get rolled back bu the y=2 does?\n> \n> Yes, if you weren't in a transaction at the start.\n> \n> > I can't see any good logic for that.\n> \n> How about \"the SQL spec requires it\"? Date seems to think it does,\n> at least for some variables (of course we have lots of variables\n> that are not in the spec).\n> \n> I can't find anything very clear in the SQL92 or SQL99 documents,\n> and I'm not at home at the moment to look at my copy of Date, but\n> if Curt's reading is correct then we have spec precedent for acting\n> this way.\n\nSpec or not, it looks pretty weird so I would question following the\nspec on this one.\n\nDo we want to say \"With autocommit off, SET will be in it's own\ntransaction if it appears before any non-SET command\", and \"SETs are\nrolled back except if autocommit off and they appear before any\nnon-SET\"? \n\nI sure don't.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 10 Sep 2002 16:00:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] problem with new autocommit config parameter and"
},
{
"msg_contents": "On Tuesday 10 September 2002 09:55 pm, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > That seems messy. What you are saying is that if autocommit is off,\n> > then in:\n> >\n> > \tSET x=1;\n> > \tUPDATE ...\n> > \tSET y=2;\n> > \tROLLBACK;\n> >\n> > that the x=1 doesn't get rolled back bu the y=2 does?\n>\n> Yes, if you weren't in a transaction at the start.\n>\n> > I can't see any good logic for that.\n>\n> How about \"the SQL spec requires it\"? Date seems to think it does,\n> at least for some variables (of course we have lots of variables\n> that are not in the spec).\n>\n> I can't find anything very clear in the SQL92 or SQL99 documents,\n> and I'm not at home at the moment to look at my copy of Date, but\n> if Curt's reading is correct then we have spec precedent for acting\n> this way.\n\nI know what Oracle do (default mode autocommit off except JDBC) :\nonly DML and DDL command start transaction and DDL command end transaction.\nThere is another problem: if select start transaction why error - I will \ncontinue transaction.\nWhy invalid command start transaction ?\n\nregards \nharis peco\n",
"msg_date": "Tue, 10 Sep 2002 22:49:38 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "On Tuesday 10 September 2002 07:46 pm, scott.marlowe wrote:\n> On Tue, 10 Sep 2002, Stephan Szabo wrote:\n> > > > > > It starts a transaction, failes the first command and goes into\n> > > > > > the error has occurred in this transaction state. Seems like\n> > > > > > reasonable behavior.\n> > > > >\n> > > > > Select command don't start transaction - it is not good\n> > > >\n> > > > I think you need more justification than \"it is not good.\" If I do a\n> > > > sequence of select statements in autocommit=false, I'd expect the\n> > > > same consistancy as if I'd done\n> > > > begin;\n> > > > select ...;\n> > > > select ...;\n> > >\n> > > Ok.You start transaction explicit and this is ok.\n> > > But simple SELECT don't start transaction.\n> >\n> > Actually someone post a bit from Date's book that implies it does.\n> > And, that's still not an justification, it's just a restating of same\n> > position. I don't see any reason why the two should be different from\n> > a data consistency standpoint, there might be one, but you haven't\n> > given any reasons.\n>\n> What if it's a select for update? IF that failed because of a timout on a\n> lock, shouldn't the transaction fail? Or a select into? Either of those\n> should make a transaction fail, and they're just selects.\nOk.Any lock or update,delete, insert (and all ddl command) start transaction\n(select for update, too), but simple select no.Select don't change data and no \ntransaction - this process cannot lost consistency (any command with error \ntoo).\nAnd if transaction start, so what ... I will (maybe) continue transaction\n(I don't end transaction), but I get error. and I must end transaction\nI think that we must parse command, choose if 'start transaction' and start \ntransaction or no.\nregards\nHaris Peco\n\n",
"msg_date": "Tue, 10 Sep 2002 22:49:40 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "\nOn Tue, 10 Sep 2002, snpe wrote:\n\n> On Tuesday 10 September 2002 07:46 pm, scott.marlowe wrote:\n> > What if it's a select for update? IF that failed because of a timout on a\n> > lock, shouldn't the transaction fail? Or a select into? Either of those\n> > should make a transaction fail, and they're just selects.\n> Ok.Any lock or update,delete, insert (and all ddl command) start transaction\n> (select for update, too), but simple select no.Select don't change data and no\n> transaction - this process cannot lost consistency (any command with error\n> too).\n\nAt least in serializable isolation level you'll probably get different\nresults if a transaction commits between those two selects based on\nwhether a transaction is started or not. Should two serializable selects\nin the same session see the same snapshot when autocommit is off?\n\n\n",
"msg_date": "Tue, 10 Sep 2002 14:50:52 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "On Wed, 11 Sep 2002, snpe wrote:\n\n> On Tuesday 10 September 2002 11:50 pm, Stephan Szabo wrote:\n> > On Tue, 10 Sep 2002, snpe wrote:\n> > > On Tuesday 10 September 2002 07:46 pm, scott.marlowe wrote:\n> > > > What if it's a select for update? IF that failed because of a timout\n> > > > on a lock, shouldn't the transaction fail? Or a select into? Either\n> > > > of those should make a transaction fail, and they're just selects.\n> > >\n> > > Ok.Any lock or update,delete, insert (and all ddl command) start\n> > > transaction (select for update, too), but simple select no.Select don't\n> > > change data and no transaction - this process cannot lost consistency\n> > > (any command with error too).\n> >\n> > At least in serializable isolation level you'll probably get different\n> > results if a transaction commits between those two selects based on\n> > whether a transaction is started or not. Should two serializable selects\n> > in the same session see the same snapshot when autocommit is off?\n\n> It is session, not transaction.My select don't change data and this is not\n> transaction.\n\nWe're going around in circles.\n\nDoes it matter if data is changed? I don't think so, since at least in\nserializable isolation level the snapshot that is seen depends on whether\nyou're in a transaction or not, and given autocommit=off I believe that\nyou should get a consistent snapshot between them.\n\nIf you believe it should matter, you need to give a reason. 
I don't\nthink it's a spec reason given that my sql92 spec draft says:\n\n\"The following SQL-statements are transaction initiating SQL-\nstatements, i.e., if there is no current transaction, and a\nstatement of this class is executed, a transaction is initiated:\n...\n o <select statement: single row>\n\n o <direct select statement: multiple rows>\"\nunless it changed.\n\nThere might be a compatibility reason, if so, with what and is it stronger\nthan reasons to start a transaction.\n\nThere might be another logical reason, if so, what is it and why does\nit matter?\n\n> My abother question, agian : why error (bad typing) start transaction ?\n\nThat depends. Given the way the spec is worded, it says nothing about\nother statements, so we need to decide those ourselves. I don't see\nanything that implies that a select statement that errors would be\nany different than a select statement that doesn't as far as starting\na transaction goes in my sql92 spec draft. If you were to type in\nfoo as a command, I could see a case that maybe that shouldn't be\ntransaction initiating, but afair that wasn't the case you had, you\nhad a select command against an invalid table name.\n\n\n",
"msg_date": "Tue, 10 Sep 2002 16:25:10 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "On Tuesday 10 September 2002 11:50 pm, Stephan Szabo wrote:\n> On Tue, 10 Sep 2002, snpe wrote:\n> > On Tuesday 10 September 2002 07:46 pm, scott.marlowe wrote:\n> > > What if it's a select for update? IF that failed because of a timout\n> > > on a lock, shouldn't the transaction fail? Or a select into? Either\n> > > of those should make a transaction fail, and they're just selects.\n> >\n> > Ok.Any lock or update,delete, insert (and all ddl command) start\n> > transaction (select for update, too), but simple select no.Select don't\n> > change data and no transaction - this process cannot lost consistency\n> > (any command with error too).\n>\n> At least in serializable isolation level you'll probably get different\n> results if a transaction commits between those two selects based on\n> whether a transaction is started or not. Should two serializable selects\n> in the same session see the same snapshot when autocommit is off?\nIt is session, not transaction.My select don't change data and this is not \ntransaction.\n\nMy abother question, agian : why error (bad typing) start transaction ?\nregards\nharis peco\n\n",
"msg_date": "Wed, 11 Sep 2002 01:30:56 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "On Wednesday 11 September 2002 01:25 am, Stephan Szabo wrote:\n> On Wed, 11 Sep 2002, snpe wrote:\n> > On Tuesday 10 September 2002 11:50 pm, Stephan Szabo wrote:\n> > > On Tue, 10 Sep 2002, snpe wrote:\n> > > > On Tuesday 10 September 2002 07:46 pm, scott.marlowe wrote:\n> > > > > What if it's a select for update? IF that failed because of a\n> > > > > timout on a lock, shouldn't the transaction fail? Or a select\n> > > > > into? Either of those should make a transaction fail, and they're\n> > > > > just selects.\n> > > >\n> > > > Ok.Any lock or update,delete, insert (and all ddl command) start\n> > > > transaction (select for update, too), but simple select no.Select\n> > > > don't change data and no transaction - this process cannot lost\n> > > > consistency (any command with error too).\n> > >\n> > > At least in serializable isolation level you'll probably get different\n> > > results if a transaction commits between those two selects based on\n> > > whether a transaction is started or not. Should two serializable\n> > > selects in the same session see the same snapshot when autocommit is\n> > > off?\n> >\n> > It is session, not transaction.My select don't change data and this is\n> > not transaction.\n>\n> We're going around in circles.\n>\n> Does it matter if data is changed? I don't think so, since at least in\n> serializable isolation level the snapshot that is seen depends on whether\n> you're in a transaction or not, and given autocommit=off I believe that\n> you should get a consistent snapshot between them.\n>\n> If you believe it should matter, you need to give a reason. 
I don't\n> think it's a spec reason given that my sql92 spec draft says:\n>\n> \"The following SQL-statements are transaction initiating SQL-\n> statements, i.e., if there is no current transaction, and a\n> statement of this class is executed, a transaction is initiated:\n> ...\n> o <select statement: single row>\n>\n> o <direct select statement: multiple rows>\"\n> unless it changed.\n>\n> There might be a compatibility reason, if so, with what and is it stronger\n> than reasons to start a transaction.\n>\n> There might be another logical reason, if so, what is it and why does\n> it matter?\n>\n> > My abother question, agian : why error (bad typing) start transaction ?\n>\n> That depends. Given the way the spec is worded, it says nothing about\n> other statements, so we need to decide those ourselves. I don't see\n> anything that implies that a select statement that errors would be\n> any different than a select statement that doesn't as far as starting\n> a transaction goes in my sql92 spec draft. If you were to type in\n> foo as a command, I could see a case that maybe that shouldn't be\n> transaction initiating, but afair that wasn't the case you had, you\n> had a select command against an invalid table name.\n\nyes, we're going around in circles.\n\nOk.I agreed (I think because Oracle do different)\nTransaction start\nI type invalid command \nI correct command\nI get error\n\nWhy.If is it transactin, why I get error\nI want continue.\nI am see this error with JDeveloper (work with Oracle, DB2 an SQL Server)\n\nIt is not matter for me transaction or not.I get error for correct command \nafter invalid\n\nI am sorry if I am confused.English is not my language.\n\nregards\nHaris Peco\n",
"msg_date": "Wed, 11 Sep 2002 01:57:31 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "On Wed, 11 Sep 2002, snpe wrote:\n\n> yes, we're going around in circles.\n>\n> Ok.I agreed (I think because Oracle do different)\n> Transaction start\n> I type invalid command\n> I correct command\n> I get error\n>\n> Why.If is it transactin, why I get error\n> I want continue.\n> I am see this error with JDeveloper (work with Oracle, DB2 an SQL Server)\n\nRight, that's a separate issue (I alluded to it earlier, but wasn't sure\nthat's what you were interested in). PostgreSQL treats all errors as\nunrecoverable. It may be a little loose about immediately rolling back\ndue to the fact that historically autocommit was on and it seemed better\nto not go into autocommit mode after the error.\n\nI doubt that 7.3 is going to change that behavior, but a case might be\nmade that when autocommit is off the error immediately causes a rollback\nand new transaction will start upon the next statement (that would\nnormally start a transaction).\n\nAt some point in the future, you'll probably be able to do nested\ntransactions or savepoints or error recovery and this will all be moot.\n\n> It is not matter for me transaction or not.I get error for correct command\n> after invalid\n\n",
"msg_date": "Tue, 10 Sep 2002 17:09:08 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "On Wednesday 11 September 2002 02:09 am, Stephan Szabo wrote:\n> On Wed, 11 Sep 2002, snpe wrote:\n> > yes, we're going around in circles.\n> >\n> > Ok.I agreed (I think because Oracle do different)\n> > Transaction start\n> > I type invalid command\n> > I correct command\n> > I get error\n> >\n> > Why.If is it transactin, why I get error\n> > I want continue.\n> > I am see this error with JDeveloper (work with Oracle, DB2 an SQL Server)\n>\n> Right, that's a separate issue (I alluded to it earlier, but wasn't sure\n> that's what you were interested in). PostgreSQL treats all errors as\n> unrecoverable. It may be a little loose about immediately rolling back\n> due to the fact that historically autocommit was on and it seemed better\n> to not go into autocommit mode after the error.\n>\n> I doubt that 7.3 is going to change that behavior, but a case might be\n> made that when autocommit is off the error immediately causes a rollback\n> and new transaction will start upon the next statement (that would\n> normally start a transaction).\n>\n\nWhy rollback.This is error (typing error).Nothing happen.\nI think that we need clear set : what is start transaction ?\nI think that transaction start with change data in database\n(what don't change data this start not transaction.\nOracle dot this and I think that is correct))\n\nP.S when I can find SQL 99 specification ?\n\nregards\n",
"msg_date": "Wed, 11 Sep 2002 03:07:00 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "On Wed, 11 Sep 2002, snpe wrote:\n\n> On Wednesday 11 September 2002 02:09 am, Stephan Szabo wrote:\n> > On Wed, 11 Sep 2002, snpe wrote:\n> > > yes, we're going around in circles.\n> > >\n> > > Ok.I agreed (I think because Oracle do different)\n> > > Transaction start\n> > > I type invalid command\n> > > I correct command\n> > > I get error\n> > >\n> > > Why.If is it transactin, why I get error\n> > > I want continue.\n> > > I am see this error with JDeveloper (work with Oracle, DB2 an SQL Server)\n> >\n> > Right, that's a separate issue (I alluded to it earlier, but wasn't sure\n> > that's what you were interested in). PostgreSQL treats all errors as\n> > unrecoverable. It may be a little loose about immediately rolling back\n> > due to the fact that historically autocommit was on and it seemed better\n> > to not go into autocommit mode after the error.\n> >\n> > I doubt that 7.3 is going to change that behavior, but a case might be\n> > made that when autocommit is off the error immediately causes a rollback\n> > and new transaction will start upon the next statement (that would\n> > normally start a transaction).\n> >\n>\n> Why rollback.This is error (typing error).Nothing happen.\n\nPostgresql currently has no real notion of a recoverable error.\nIn the case of the error you had, probably nothing bad would happen\nif it continued, but what if that was a unique constraint violation?\nContinuing would currently probably let you see the table in an\ninvalid state.\n\n> I think that we need clear set : what is start transaction ?\n> I think that transaction start with change data in database\n> (what don't change data this start not transaction.\n> Oracle dot this and I think that is correct))\n\nI disagree because I think that two serializable select statements\nin autocommit=off (without a commit or rollback of course) should\nsee the same snapshot.\n\nI'm trying to find something either way in a pdf copy of sql99.\nThe multiple row select has 
gotten hidden somewhere, so it's possible\nthat it's not, but all of opening a cursor, fetching from a cursor\nand the single row select syntax are labeled as transaction initiating.\n\n",
"msg_date": "Tue, 10 Sep 2002 18:14:22 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "On Tue, 10 Sep 2002, Tom Lane wrote:\n\n> As of CVS tip, SET commands *do* initiate transactions\n> if you have autocommit off. By your reading of Date, this is not\n> spec compliant for certain SET variables: a SET not already within\n> a transaction should not start a transaction block, at least for the\n> variables mentioned above. It occurs to me that it'd be reasonable\n> to make it act that way for all SET variables.\n\nI agree. SET variables are normally related to the behaviour of a\nsession, not information stored in the database. And your autocommit\nexample shows why having them start a transaction is a problem.\n\nBut there were some issues with rolling back and SET commands,\nweren't there? I remember a long discussion about this that I'm\nnot sure I want to go back to. :-)\n\n> > Nor, of course, are the nonexecutable statements DECLARE CURSOR,\n> > DECLAR LOCAL TEMPORARY TABLE, BEGIN DECLARE SECTION, SEND DECLARE\n> > SECTIONS, and WHENEVER.\n>\n> Hmm. I think the spec's notion of DECLARE must be different from ours.\n> Our implementation of DECLARE CURSOR both declares and opens the cursor,\n> and as such it *must* be transaction-initiating; else it's useless.\n\nWell, I'm not going to go chase it down right now, but ISTR that\nDECLAREing a cursor just allocates a variable name or the storage for it\nor something like that; it doesn't actually create an active cursor.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Wed, 11 Sep 2002 10:44:39 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "On Tue, 10 Sep 2002, Barry Lind wrote:\n\n> I am waiting for this thread to conclude before deciding exactly what to\n> do for the jdbc driver for 7.3. While using the 'set autocommit true'\n> syntax is nice when talking to a 7.3 server, the jdbc driver also needs\n> to be backwardly compatible with 7.2 and 7.1 servers.\n\nCan you not check the server's version on connect?\n\nIt would be ideal if the JDBC driver, without modification, ran\nall tests properly against 7.3, 7.2 and 7.1.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Wed, 11 Sep 2002 10:53:09 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] problem with new autocommit config parameter"
},
{
"msg_contents": "On Tue, 10 Sep 2002, Bruce Momjian wrote:\n\n> Do we want to say \"With autocommit off, SET will be in it's own\n> transaction if it appears before any non-SET command\", and \"SETs are\n> rolled back except if autocommit off and they appear before any\n> non-SET\"?\n\nNot really, I don't think.\n\nBut I'm starting to wonder if we should re-think all SET commands being\nrolled back if a transaction fails. Some don't seem to make sense, such\nas having SET AUTOCOMMIT or SET SESSION AUTHORIZATION roll back.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Wed, 11 Sep 2002 10:57:51 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] problem with new autocommit config parameter"
},
{
"msg_contents": "Curt Sampson wrote:\n> On Tue, 10 Sep 2002, Bruce Momjian wrote:\n> \n> > Do we want to say \"With autocommit off, SET will be in it's own\n> > transaction if it appears before any non-SET command\", and \"SETs are\n> > rolled back except if autocommit off and they appear before any\n> > non-SET\"?\n> \n> Not really, I don't think.\n> \n> But I'm starting to wonder if we should re-think all SET commands being\n> rolled back if a transaction fails. Some don't seem to make sense, such\n> as having SET AUTOCOMMIT or SET SESSION AUTHORIZATION roll back.\n\nYes, but the question is whether it is better to be consistent and roll\nthem all back, or to pick and choose which ones to roll back. \nConsistency is nice.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 10 Sep 2002 22:12:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] problem with new autocommit config parameter"
},
{
"msg_contents": "On Tue, 2002-09-10 at 21:44, Curt Sampson wrote:\n> But there were some issues with rolling back and SET commands,\n> weren't there? I remember a long discussion about this that I'm\n> not sure I want to go back to. :-)\n\nSo.. Unless explicitly requested, a SET command should have immediate\neffect?\n\nThe other constrictive value I can think of is search_path.\n\n-- Must be transaction safe\nBEGIN;\nCREATE SCHEMA <newschema>;\nSET search_path = <newschema>;\nROLLBACK;\nCREATE TABLE...\n\n\n-- This should be ok\nBEGIN;\nSET autocommit = on;\nINSERT ...\nCOMMIT;\n-- SET takes place on commit, as it was an explicit transaction\n\n\n-- This is requested behavior\nSET autocommit = off;\nSET autocommit = on;\nINSERT... -- immediate effect, since autocommit is on\n\n\n-- This gets interesting be ok as the schema must exist\nSET autocommit = off;\nCREATE SCHEMA <newschema>;\nSET search_path = <newschema>; -- implicit commit here?\nROLLBACK;\nCREATE TABLE ... \n-- search_path must roll back or schema must have been created\n\n\n-- Similar to the above\nSET autocommit = off;\nCREATE TABLE ...\nSET autocommit = on; -- implicit commit here?\nROLLBACK;\n-- Does this rollback anything?\n-- Was CREATE TABLE committed with the second SET statement?\n \n\n\n> Well, I'm not going to go chase it down right now, but ISTR that\n> DECLAREing a cursor just allocates a variable name or the storage for it\n> or something like that; it doesn't actually create an active cursor.\n\nIndeed, this is how the cursor is able to cross transactions. It is\nclosed at transaction commit, and re-created in next use.\n\n4.29:\n\nFor every <declare cursor> in an SQL-client module, a cursor is\neffectively created when an SQLtransaction (see Subclause 4.32, \nSQL-transactions ) referencing the SQL-client module is initiated.\n\n-- \n Rod Taylor\n\n",
"msg_date": "10 Sep 2002 22:17:53 -0400",
"msg_from": "Rod Taylor <rbt@rbt.ca>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
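The cases Rod lists can be modeled by splitting SET variables into two buckets: ones that participate in the transaction (his `search_path` examples) and ones that take immediate, non-transactional effect (his `autocommit` examples). A minimal sketch, assuming that split; the `GUC` class and variable names here are illustrative, not PostgreSQL's implementation:

```python
# Sketch of two SET behaviours: transactional variables are staged and
# undone by ROLLBACK; session-level switches apply immediately and
# survive ROLLBACK. (Assumed model for illustration only.)

class GUC:
    def __init__(self):
        self.committed = {"search_path": "public", "autocommit": True}
        self.pending = {}                    # staged until COMMIT
        self.transactional = {"search_path"}

    def set(self, name, value):
        if name in self.transactional:
            self.pending[name] = value       # visible now, undone on ROLLBACK
        else:
            self.committed[name] = value     # immediate, unaffected by ROLLBACK

    def get(self, name):
        return self.pending.get(name, self.committed[name])

    def commit(self):
        self.committed.update(self.pending)
        self.pending.clear()

    def rollback(self):
        self.pending.clear()
```

In this model Rod's first example works as required: `SET search_path = <newschema>` inside an aborted transaction rolls back along with the `CREATE SCHEMA`, while a `SET autocommit` sticks regardless, which sidesteps his "implicit commit here?" questions for that variable.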
{
"msg_contents": "Curt,\n\nYes I can check the server version on connect. In fact that is what the \n driver already does. However I can't check the version and then based \non the version call set autocommit true in one round trip to the server. \n Since many people don't use connection pools, I am reluctant to add \nthe overhead of an extra roundtrip to the database to set a variable \nthat for most people will already be set to true. It would be ideal if \nI could in one hit to the database determine the server version and \nconditionally call set autocommit based on the version at the same time.\n\nthanks,\n--Barry\n\n\n\nCurt Sampson wrote:\n> On Tue, 10 Sep 2002, Barry Lind wrote:\n> \n> \n>>I am waiting for this thread to conclude before deciding exactly what to\n>>do for the jdbc driver for 7.3. While using the 'set autocommit true'\n>>syntax is nice when talking to a 7.3 server, the jdbc driver also needs\n>>to be backwardly compatible with 7.2 and 7.1 servers.\n> \n> \n> Can you not check the server's version on connect?\n> \n> It would be ideal if the JDBC driver, without modification, ran\n> all tests properly against 7.3, 7.2 and 7.1.\n> \n> cjs\n\n",
"msg_date": "Tue, 10 Sep 2002 19:48:51 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] problem with new autocommit config parameter"
},
{
"msg_contents": "\nOn Wed, 11 Sep 2002, snpe wrote:\n\n> On Wednesday 11 September 2002 02:09 am, Stephan Szabo wrote:\n> > On Wed, 11 Sep 2002, snpe wrote:\n> > > yes, we're going around in circles.\n> > >\n> > > Ok.I agreed (I think because Oracle do different)\n> > > Transaction start\n> > > I type invalid command\n> > > I correct command\n> > > I get error\n> > >\n> > > Why.If is it transactin, why I get error\n> > > I want continue.\n> > > I am see this error with JDeveloper (work with Oracle, DB2 an SQL Server)\n> >\n> > Right, that's a separate issue (I alluded to it earlier, but wasn't sure\n> > that's what you were interested in). PostgreSQL treats all errors as\n> > unrecoverable. It may be a little loose about immediately rolling back\n> > due to the fact that historically autocommit was on and it seemed better\n> > to not go into autocommit mode after the error.\n> >\n> > I doubt that 7.3 is going to change that behavior, but a case might be\n> > made that when autocommit is off the error immediately causes a rollback\n> > and new transaction will start upon the next statement (that would\n> > normally start a transaction).\n> >\n>\n> Why rollback.This is error (typing error).Nothing happen.\n> I think that we need clear set : what is start transaction ?\n> I think that transaction start with change data in database\n> (what don't change data this start not transaction.\n\nAnother interesting case for a select is, what about\nselect func(x) from table;\nDoes func() have any side effects that might change data?\nAt what point do we decide that the statement needs a\ntransaction?\n\n",
"msg_date": "Tue, 10 Sep 2002 19:58:09 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "On Tue, 10 Sep 2002, Barry Lind wrote:\n\n> Yes I can check the server version on connect. In fact that is what the\n> driver already does. However I can't check the version and then based\n> on the version call set autocommit true in one round trip to the server.\n> Since many people don't use connection pools, I am reluctant to add\n> the overhead of an extra roundtrip to the database to set a variable\n> that for most people will already be set to true. It would be ideal if\n> I could in one hit to the database determine the server version and\n> conditionally call set autocommit based on the version at the same time.\n\nHmm. I don't think that there's any real way to avoid a second round\ntrip now, but one thing we might do with 7.3 would be to add a standard\nstored procedure that will deal with setting appropriate variables and\nsuchlike, and returning the version number and any other information\nthat the JDBC driver needs. (Maybe it can return a key/value table.)\nThat way, once we desupport 7.2 in the far future, we can reduce this to\none round trip.\n\nOr perhaps we we could try to execute that stored procedure and, if it\nfails, create it. (Or, if creating it fails, do things the hard way.) That\nway the first connection you make where the SP is not there you have the\noverhead of adding it, but all connections after that can use it. (I assume\nyou'd grant all rights to it to the general public.) And it could return\nits own version so that newer drivers could upgrade it if necessary. Or\nmaybe just have a differently-named one for each version of the driver.\nThis is a bit kludgy, but also sort of elegant, if you think about it....\n\nOn the other hand, perhaps we should just live with two round trips. 
So\nlong as we've got command batching at some point, we can get the version,\nand then send all the setup commands we need as a single batch after that.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Wed, 11 Sep 2002 12:06:31 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] problem with new autocommit config parameter"
},
{
"msg_contents": "\n> > > Why rollback.This is error (typing error).Nothing happen.\n> > > I think that we need clear set : what is start transaction ?\n> > > I think that transaction start with change data in database\n> > > (what don't change data this start not transaction.\n> >\n> > Another interesting case for a select is, what about\n> > select func(x) from table;\n> > Does func() have any side effects that might change data?\n> > At what point do we decide that the statement needs a\n> > transaction?\n> Function in select list mustn't change any data.\n> What if function change data in from clause ?\n\nWhy can't the function change data? I've done this one a number of\ntimes through views to log the user pulling out information from the\nsystem, and what it was at the time (time sensitive data).\n\n-- \n Rod Taylor\n\n",
"msg_date": "11 Sep 2002 08:38:44 -0400",
"msg_from": "Rod Taylor <rbt@rbt.ca>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "On Wed, 11 Sep 2002, snpe wrote:\n\n> On Wednesday 11 September 2002 04:58 am, Stephan Szabo wrote:\n> > On Wed, 11 Sep 2002, snpe wrote:\n> > > On Wednesday 11 September 2002 02:09 am, Stephan Szabo wrote:\n> > > > On Wed, 11 Sep 2002, snpe wrote:\n> > > > > yes, we're going around in circles.\n> > > > >\n> > > > > Ok.I agreed (I think because Oracle do different)\n> > > > > Transaction start\n> > > > > I type invalid command\n> > > > > I correct command\n> > > > > I get error\n> > > > >\n> > > > > Why.If is it transactin, why I get error\n> > > > > I want continue.\n> > > > > I am see this error with JDeveloper (work with Oracle, DB2 an SQL\n> > > > > Server)\n> > > >\n> > > > Right, that's a separate issue (I alluded to it earlier, but wasn't\n> > > > sure that's what you were interested in). PostgreSQL treats all errors\n> > > > as unrecoverable. It may be a little loose about immediately rolling\n> > > > back due to the fact that historically autocommit was on and it seemed\n> > > > better to not go into autocommit mode after the error.\n> > > >\n> > > > I doubt that 7.3 is going to change that behavior, but a case might be\n> > > > made that when autocommit is off the error immediately causes a\n> > > > rollback and new transaction will start upon the next statement (that\n> > > > would normally start a transaction).\n> > >\n> > > Why rollback.This is error (typing error).Nothing happen.\n> > > I think that we need clear set : what is start transaction ?\n> > > I think that transaction start with change data in database\n> > > (what don't change data this start not transaction.\n> >\n> > Another interesting case for a select is, what about\n> > select func(x) from table;\n> > Does func() have any side effects that might change data?\n> > At what point do we decide that the statement needs a\n> > transaction?\n> Function in select list mustn't change any data.\n> What if function change data in from clause ?\n\nThere is no such restriction. 
The behavior is not necessarily\nwell defined in all cases, but postgresql certainly doesn't\nrequire that the functions not change data especially given\nthat postgresql takes:\nselect func();\nas the way to call to func();\nExample session from 7.3 just pre-beta included below.\n\n\n----\n\nsszabo=# create table b(a int);\nCREATE TABLE\nsszabo=# create table a(a int);\nCREATE TABLE\nsszabo=# create function f(int) returns int as 'insert into b values ($1);\nselect $1;' language 'sql';\nCREATE FUNCTION\nsszabo=# insert into a values (1);\nINSERT 17010 1\nsszabo=# select f(a) from a;\n f\n---\n 1\n(1 row)\n\nsszabo=# select * from b;\n a\n---\n 1\n(1 row)\n\n\n\n",
"msg_date": "Wed, 11 Sep 2002 05:43:03 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "\nOn Wed, 11 Sep 2002, snpe wrote:\n\n> On Wednesday 11 September 2002 03:14 am, Stephan Szabo wrote:\n> > On Wed, 11 Sep 2002, snpe wrote:\n> > > On Wednesday 11 September 2002 02:09 am, Stephan Szabo wrote:\n> > > > On Wed, 11 Sep 2002, snpe wrote:\n> > > > > yes, we're going around in circles.\n> > > > >\n> > > > > Ok.I agreed (I think because Oracle do different)\n> > > > > Transaction start\n> > > > > I type invalid command\n> > > > > I correct command\n> > > > > I get error\n> > > > >\n> > > > > Why.If is it transactin, why I get error\n> > > > > I want continue.\n> > > > > I am see this error with JDeveloper (work with Oracle, DB2 an SQL\n> > > > > Server)\n> > > >\n> > > > Right, that's a separate issue (I alluded to it earlier, but wasn't\n> > > > sure that's what you were interested in). PostgreSQL treats all errors\n> > > > as unrecoverable. It may be a little loose about immediately rolling\n> > > > back due to the fact that historically autocommit was on and it seemed\n> > > > better to not go into autocommit mode after the error.\n> > > >\n> > > > I doubt that 7.3 is going to change that behavior, but a case might be\n> > > > made that when autocommit is off the error immediately causes a\n> > > > rollback and new transaction will start upon the next statement (that\n> > > > would normally start a transaction).\n> > >\n> > > Why rollback.This is error (typing error).Nothing happen.\n> >\n> > Postgresql currently has no real notion of a recoverable error.\n> > In the case of the error you had, probably nothing bad would happen\n> > if it continued, but what if that was a unique constraint violation?\n> > Continuing would currently probably let you see the table in an\n> > invalid state.\n> >\n> If decision (transaction or not) is after parser (before execute) this isn't\n> problem.\n> I don't know when postgresql make decision, but that is best after parser.\n> I parser find error simple return error and nothing happen\n\nAre you 
saying that it's okay for:\ninsert into nonexistant values (3);\nand\ninsert into existant values (3);\nwhere 3 is invalid for existant to work\ndifferently?\nI think that'd be tough to get past some people, but you might\nwant to write a proposal for why it should act that way. (Don't\nexpect anything for 7.3, but 7.4's devel will start sometime.)\n\n> > > I think that we need clear set : what is start transaction ?\n> > > I think that transaction start with change data in database\n> > > (what don't change data this start not transaction.\n> > > Oracle dot this and I think that is correct))\n> >\n> > I disagree because I think that two serializable select statements\n> > in autocommit=off (without a commit or rollback of course) should\n> > see the same snapshot.\n> >\n> Question ?\n> All select in one transaction return same data - no matter if any change and\n> commit data ?\n\nIt depends on the isolation level of the transaction I believe.\nThis sequence in read committed (in postgresql) and serializable give\ndifferent results.\n\nT1: begin;\nT1: select * from a;\nT2: begin;\nT2: insert into a values (3);\nT2: commit;\nT1: select * from a;\n\nIn serializable mode, you can't get \"non-repeatable read\" effects:\nSQL-transaction T1 reads a row. SQL-transaction T2 then modifies\nor deletes that row and performs a COMMIT. If T1 then attempts to\nreread the row, it may receive the modified value or discover that the\nrow has been deleted.\n\n",
"msg_date": "Wed, 11 Sep 2002 05:55:32 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
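The read-committed vs. serializable difference in Stephan's T1/T2 sequence can be mimicked with a toy snapshot model. This is illustrative only; real MVCC is far more involved, and the class names here are invented for the sketch:

```python
# Toy snapshot model: READ COMMITTED takes a fresh snapshot for each
# statement, while SERIALIZABLE keeps the snapshot taken at the first
# statement of the transaction. (Simplified for illustration.)

class Table:
    def __init__(self):
        self.versions = [set()]   # committed states, newest last

    def commit_insert(self, value):
        # Another transaction inserts a row and commits.
        self.versions.append(self.versions[-1] | {value})

class Txn:
    def __init__(self, table, serializable):
        self.table = table
        self.serializable = serializable
        self.snapshot = None

    def select(self):
        if self.serializable:
            if self.snapshot is None:
                self.snapshot = self.table.versions[-1]   # freeze once
            return self.snapshot
        return self.table.versions[-1]                    # fresh each statement
```

Replaying the T1/T2 sequence against this model, the read-committed transaction sees T2's committed insert on its second select (a non-repeatable read), while the serializable transaction keeps returning its original snapshot.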
{
"msg_contents": "On Wednesday 11 September 2002 04:58 am, Stephan Szabo wrote:\n> On Wed, 11 Sep 2002, snpe wrote:\n> > On Wednesday 11 September 2002 02:09 am, Stephan Szabo wrote:\n> > > On Wed, 11 Sep 2002, snpe wrote:\n> > > > yes, we're going around in circles.\n> > > >\n> > > > Ok.I agreed (I think because Oracle do different)\n> > > > Transaction start\n> > > > I type invalid command\n> > > > I correct command\n> > > > I get error\n> > > >\n> > > > Why.If is it transactin, why I get error\n> > > > I want continue.\n> > > > I am see this error with JDeveloper (work with Oracle, DB2 an SQL\n> > > > Server)\n> > >\n> > > Right, that's a separate issue (I alluded to it earlier, but wasn't\n> > > sure that's what you were interested in). PostgreSQL treats all errors\n> > > as unrecoverable. It may be a little loose about immediately rolling\n> > > back due to the fact that historically autocommit was on and it seemed\n> > > better to not go into autocommit mode after the error.\n> > >\n> > > I doubt that 7.3 is going to change that behavior, but a case might be\n> > > made that when autocommit is off the error immediately causes a\n> > > rollback and new transaction will start upon the next statement (that\n> > > would normally start a transaction).\n> >\n> > Why rollback.This is error (typing error).Nothing happen.\n> > I think that we need clear set : what is start transaction ?\n> > I think that transaction start with change data in database\n> > (what don't change data this start not transaction.\n>\n> Another interesting case for a select is, what about\n> select func(x) from table;\n> Does func() have any side effects that might change data?\n> At what point do we decide that the statement needs a\n> transaction?\nFunction in select list mustn't change any data.\nWhat if function change data in from clause ?\n\n\n",
"msg_date": "Wed, 11 Sep 2002 14:56:09 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "On Wednesday 11 September 2002 03:14 am, Stephan Szabo wrote:\n> On Wed, 11 Sep 2002, snpe wrote:\n> > On Wednesday 11 September 2002 02:09 am, Stephan Szabo wrote:\n> > > On Wed, 11 Sep 2002, snpe wrote:\n> > > > yes, we're going around in circles.\n> > > >\n> > > > Ok.I agreed (I think because Oracle do different)\n> > > > Transaction start\n> > > > I type invalid command\n> > > > I correct command\n> > > > I get error\n> > > >\n> > > > Why.If is it transactin, why I get error\n> > > > I want continue.\n> > > > I am see this error with JDeveloper (work with Oracle, DB2 an SQL\n> > > > Server)\n> > >\n> > > Right, that's a separate issue (I alluded to it earlier, but wasn't\n> > > sure that's what you were interested in). PostgreSQL treats all errors\n> > > as unrecoverable. It may be a little loose about immediately rolling\n> > > back due to the fact that historically autocommit was on and it seemed\n> > > better to not go into autocommit mode after the error.\n> > >\n> > > I doubt that 7.3 is going to change that behavior, but a case might be\n> > > made that when autocommit is off the error immediately causes a\n> > > rollback and new transaction will start upon the next statement (that\n> > > would normally start a transaction).\n> >\n> > Why rollback.This is error (typing error).Nothing happen.\n>\n> Postgresql currently has no real notion of a recoverable error.\n> In the case of the error you had, probably nothing bad would happen\n> if it continued, but what if that was a unique constraint violation?\n> Continuing would currently probably let you see the table in an\n> invalid state.\n>\nIf decision (transaction or not) is after parser (before execute) this isn't \nproblem.\nI don't know when postgresql make decision, but that is best after parser.\nI parser find error simple return error and nothing happen\n> > I think that we need clear set : what is start transaction ?\n> > I think that transaction start with change data in database\n> 
> (what don't change data this start not transaction.\n> > Oracle dot this and I think that is correct))\n>\n> I disagree because I think that two serializable select statements\n> in autocommit=off (without a commit or rollback of course) should\n> see the same snapshot.\n>\nQuestion ?\nAll select in one transaction return same data - no matter if any change and \ncommit data ?\n> I'm trying to find something either way in a pdf copy of sql99.\n> The multiple row select has gotten hidden somewhere, so it's possible\n> that it's not, but all of opening a cursor, fetching from a cursor\n> and the single row select syntax are labeled as transaction initiating.\n\nCan I find sql99 spec anywhere ?\n\nThanks\n",
"msg_date": "Wed, 11 Sep 2002 15:02:23 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "\nOn Wed, 11 Sep 2002, snpe wrote:\n\n> On Wednesday 11 September 2002 02:38 pm, Rod Taylor wrote:\n> > > > > Why rollback.This is error (typing error).Nothing happen.\n> > > > > I think that we need clear set : what is start transaction ?\n> > > > > I think that transaction start with change data in database\n> > > > > (what don't change data this start not transaction.\n> > > >\n> > > > Another interesting case for a select is, what about\n> > > > select func(x) from table;\n> > > > Does func() have any side effects that might change data?\n> > > > At what point do we decide that the statement needs a\n> > > > transaction?\n> > >\n> > > Function in select list mustn't change any data.\n> > > What if function change data in from clause ?\n> >\n> > Why can't the function change data? I've done this one a number of\n> > times through views to log the user pulling out information from the\n> > system, and what it was at the time (time sensitive data).\n> Scenario :\n> Func change data in table in form clause\n> I fetch 3 (after row 1 and 2) and then change row 1\n> What result expect ?\n\nJust because the behavior is sometimes undefined by the spec doesn't mean\nthe construct should be disallowed. Grouped character string columns also\ncould have implementation-dependent behavior (which never needs to be\nspecified), but we don't disallow that either.\n\n\n",
"msg_date": "Wed, 11 Sep 2002 06:06:57 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "On Wednesday 11 September 2002 02:38 pm, Rod Taylor wrote:\n> > > > Why rollback.This is error (typing error).Nothing happen.\n> > > > I think that we need clear set : what is start transaction ?\n> > > > I think that transaction start with change data in database\n> > > > (what don't change data this start not transaction.\n> > >\n> > > Another interesting case for a select is, what about\n> > > select func(x) from table;\n> > > Does func() have any side effects that might change data?\n> > > At what point do we decide that the statement needs a\n> > > transaction?\n> >\n> > Function in select list mustn't change any data.\n> > What if function change data in from clause ?\n>\n> Why can't the function change data? I've done this one a number of\n> times through views to log the user pulling out information from the\n> system, and what it was at the time (time sensitive data).\nScenario :\nFunc change data in table in form clause\nI fetch 3 (after row 1 and 2) and then change row 1\nWhat result expect ?\n\n",
"msg_date": "Wed, 11 Sep 2002 15:13:20 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "On Wednesday 11 September 2002 02:55 pm, Stephan Szabo wrote:\n> On Wed, 11 Sep 2002, snpe wrote:\n> > On Wednesday 11 September 2002 03:14 am, Stephan Szabo wrote:\n> > > On Wed, 11 Sep 2002, snpe wrote:\n> > > > On Wednesday 11 September 2002 02:09 am, Stephan Szabo wrote:\n> > > > > On Wed, 11 Sep 2002, snpe wrote:\n> > > > > > yes, we're going around in circles.\n> > > > > >\n> > > > > > Ok.I agreed (I think because Oracle do different)\n> > > > > > Transaction start\n> > > > > > I type invalid command\n> > > > > > I correct command\n> > > > > > I get error\n> > > > > >\n> > > > > > Why.If is it transactin, why I get error\n> > > > > > I want continue.\n> > > > > > I am see this error with JDeveloper (work with Oracle, DB2 an SQL\n> > > > > > Server)\n> > > > >\n> > > > > Right, that's a separate issue (I alluded to it earlier, but wasn't\n> > > > > sure that's what you were interested in). PostgreSQL treats all\n> > > > > errors as unrecoverable. It may be a little loose about\n> > > > > immediately rolling back due to the fact that historically\n> > > > > autocommit was on and it seemed better to not go into autocommit\n> > > > > mode after the error.\n> > > > >\n> > > > > I doubt that 7.3 is going to change that behavior, but a case might\n> > > > > be made that when autocommit is off the error immediately causes a\n> > > > > rollback and new transaction will start upon the next statement\n> > > > > (that would normally start a transaction).\n> > > >\n> > > > Why rollback.This is error (typing error).Nothing happen.\n> > >\n> > > Postgresql currently has no real notion of a recoverable error.\n> > > In the case of the error you had, probably nothing bad would happen\n> > > if it continued, but what if that was a unique constraint violation?\n> > > Continuing would currently probably let you see the table in an\n> > > invalid state.\n> >\n> > If decision (transaction or not) is after parser (before execute) this\n> > isn't problem.\n> > I 
don't know when postgresql make decision, but that is best after\n> > parser. I parser find error simple return error and nothing happen\n>\n> Are you saying that it's okay for:\n> insert into nonexistant values (3);\n> and\n> insert into existant values (3);\n> where 3 is invalid for existant to work\n> differently?\n> I think that'd be tough to get past some people, but you might\n> want to write a proposal for why it should act that way. (Don't\n> expect anything for 7.3, but 7.4's devel will start sometime.)\n>\nI don't understand all, but when I tell 'error' I think \"syntax error\"\nIf error is contraint error again nothin change, only error return\n\n> > > > I think that we need clear set : what is start transaction ?\n> > > > I think that transaction start with change data in database\n> > > > (what don't change data this start not transaction.\n> > > > Oracle dot this and I think that is correct))\n> > >\n> > > I disagree because I think that two serializable select statements\n> > > in autocommit=off (without a commit or rollback of course) should\n> > > see the same snapshot.\n> >\n> > Question ?\n> > All select in one transaction return same data - no matter if any change\n> > and commit data ?\n>\n> It depends on the isolation level of the transaction I believe.\n> This sequence in read committed (in postgresql) and serializable give\n> different results.\n>\n> T1: begin;\n> T1: select * from a;\n> T2: begin;\n> T2: insert into a values (3);\n> T2: commit;\n> T1: select * from a;\n>\n> In serializable mode, you can't get \"non-repeatable read\" effects:\n> SQL-transaction T1 reads a row. SQL-transaction T2 then modifies\n> or deletes that row and performs a COMMIT. If T1 then attempts to\n> reread the row, it may receive the modified value of discover that the\n> row has been deleted.\nIf serialization strict connect with transaction then ok.\n\nharis peco\n",
"msg_date": "Wed, 11 Sep 2002 15:45:54 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "\nOn Wed, 11 Sep 2002, snpe wrote:\n\n> On Wednesday 11 September 2002 02:55 pm, Stephan Szabo wrote:\n> > On Wed, 11 Sep 2002, snpe wrote:\n> > >\n> > > If decision (transaction or not) is after parser (before execute) this\n> > > isn't problem.\n> > > I don't know when postgresql make decision, but that is best after\n> > > parser. I parser find error simple return error and nothing happen\n> >\n> > Are you saying that it's okay for:\n> > insert into nonexistant values (3);\n> > and\n> > insert into existant values (3);\n> > where 3 is invalid for existant to work\n> > differently?\n> > I think that'd be tough to get past some people, but you might\n> > want to write a proposal for why it should act that way. (Don't\n> > expect anything for 7.3, but 7.4's devel will start sometime.)\n> >\n> I don't understand all, but when I tell 'error' I think \"syntax error\"\n> If error is contraint error again nothin change, only error return\n\nI don't understand what you mean here. Are you saying that both of\nthose queries should not start transactions? Then that wouldn't\nbe starting between the parser and execute since you won't know that\nthe row violates a constraint until execution time.\n\n> > > > I disagree because I think that two serializable select statements\n> > > > in autocommit=off (without a commit or rollback of course) should\n> > > > see the same snapshot.\n> > >\n> > > Question ?\n> > > All select in one transaction return same data - no matter if any change\n> > > and commit data ?\n> >\n> > It depends on the isolation level of the transaction I believe.\n> > This sequence in read committed (in postgresql) and serializable give\n> > different results.\n> >\n> > T1: begin;\n> > T1: select * from a;\n> > T2: begin;\n> > T2: insert into a values (3);\n> > T2: commit;\n> > T1: select * from a;\n> >\n> > In serializable mode, you can't get \"non-repeatable read\" effects:\n> > SQL-transaction T1 reads a row. 
SQL-transaction T2 then modifies\n> > or deletes that row and performs a COMMIT. If T1 then attempts to\n> > reread the row, it may receive the modified value or discover that the\n> > row has been deleted.\n> If serialization strict connect with transaction then ok.\n\nI again am not sure I understand, are you saying that under serializable\nselect should start a transaction but it shouldn't under read committed?\nThat seems like a bad idea to me, either it should or it shouldn't in\nmy opinion.\n\nPerhaps it'd be better if you wrote up what you think it should do in\nall these cases and then we could look at them as a whole.\n(Cases I can see right now are, select under serializable, select under\nread committed, garbage command, select to non existant table,\ninsert to non existant table, insert that fails due to unique constraint,\ninsert that fails due to exception raised by a before trigger,\ninsert that fails due to exception raised by an after trigger,\ninsert that does nothing due to before trigger, update that fails\ndue to any of those after some rows have already successfully been\nmodified and probably some others).\n\n\n",
"msg_date": "Wed, 11 Sep 2002 09:11:19 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "Please appl\n--- AbstractJdbc1DatabaseMetaData.java\tWed Sep 11 22:21:25 2002\n+++ AbstractJdbc1DatabaseMetaData.java.orig\tWed Sep 11 22:20:36 2002\n@@ -2381,44 +2381,21 @@\n \t// Implementation note: This is required for Borland's JBuilder to work\n \tpublic java.sql.ResultSet getBestRowIdentifier(String catalog, String \nschema, String table, int scope, boolean nullable) throws SQLException\n \t{\n-\t\tif (connection.haveMinimumServerVersion(\"7.3\")) {\n-\t\t\tStringBuffer sql = new StringBuffer(512);\n-\t\t\tsql.append(\"SELECT \" +\n-\t\t\t\tscope + \" as SCOPE,\" +\n-\t\t\t\t\"a.attname as COLUMN_NAME,\" +\n-\t\t\t\t\"a.atttypid as DATA_TYPE,\" +\n-\t\t\t\t\"t.typname as TYPE_NAME,\" +\n-\t\t\t\t\"t.typlen as COLUMN_SIZE,\" +\n-\t\t\t\t\"0::int4 as BUFFER_LENGTH,\" +\n-\t\t\t\t\"0::int4 as DECIMAL_DIGITS,\" +\n-\t\t\t\t\"0::int4 as PSEUDO_COLUMN \" +\n-\t\t \t\"FROM pg_catalog.pg_type t,pg_catalog.pg_class bc,\" +\n-\t\t\t\t\"pg_catalog.pg_class ic, pg_catalog.pg_index i, pg_catalog.pg_attribute a \n\" +\n-\t\t \t\"WHERE bc.relkind = 'r' \" +\n-\t\t \t\"AND t.oid=a.atttypid \" +\n-\t\t \t\"AND upper(bc.relname) = upper('\" + table + \"') \" +\n-\t\t \t\"AND i.indrelid = bc.oid \" +\n-\t\t \t\"AND i.indexrelid = ic.oid \" +\n-\t\t \t\"AND ic.oid = a.attrelid \" +\n-\t\t \t\"AND i.indisprimary='t' \");\n-\t\t\treturn connection.createStatement().executeQuery(sql.toString());\n-\t\t} else {\n-\t\t\t// for now, this returns an empty result set.\n-\t\t\tField f[] = new Field[8];\n-\t\t\tResultSet r;\t// ResultSet for the SQL query that we need to do\n-\t\t\tVector v = new Vector();\t\t// The new ResultSet tuple stuff\n+\t\t// for now, this returns an empty result set.\n+\t\tField f[] = new Field[8];\n+\t\tResultSet r;\t// ResultSet for the SQL query that we need to do\n+\t\tVector v = new Vector();\t\t// The new ResultSet tuple stuff\n \n-\t\t\tf[0] = new Field(connection, \"SCOPE\", iInt2Oid, 2);\n-\t\t\tf[1] = new Field(connection, 
\"COLUMN_NAME\", iVarcharOid, NAME_SIZE);\n-\t\t\tf[2] = new Field(connection, \"DATA_TYPE\", iInt2Oid, 2);\n-\t\t\tf[3] = new Field(connection, \"TYPE_NAME\", iVarcharOid, NAME_SIZE);\n-\t\t\tf[4] = new Field(connection, \"COLUMN_SIZE\", iInt4Oid, 4);\n-\t\t\tf[5] = new Field(connection, \"BUFFER_LENGTH\", iInt4Oid, 4);\n-\t\t\tf[6] = new Field(connection, \"DECIMAL_DIGITS\", iInt2Oid, 2);\n-\t\t\tf[7] = new Field(connection, \"PSEUDO_COLUMN\", iInt2Oid, 2);\n+\t\tf[0] = new Field(connection, \"SCOPE\", iInt2Oid, 2);\n+\t\tf[1] = new Field(connection, \"COLUMN_NAME\", iVarcharOid, NAME_SIZE);\n+\t\tf[2] = new Field(connection, \"DATA_TYPE\", iInt2Oid, 2);\n+\t\tf[3] = new Field(connection, \"TYPE_NAME\", iVarcharOid, NAME_SIZE);\n+\t\tf[4] = new Field(connection, \"COLUMN_SIZE\", iInt4Oid, 4);\n+\t\tf[5] = new Field(connection, \"BUFFER_LENGTH\", iInt4Oid, 4);\n+\t\tf[6] = new Field(connection, \"DECIMAL_DIGITS\", iInt2Oid, 2);\n+\t\tf[7] = new Field(connection, \"PSEUDO_COLUMN\", iInt2Oid, 2);\n \n-\t\t\treturn connection.getResultSet(null, f, v, \"OK\", 1);\n-\t\t}\n+\t\treturn connection.getResultSet(null, f, v, \"OK\", 1);\n \t}\n \n \t/*\n\n\n",
"msg_date": "Wed, 11 Sep 2002 22:38:08 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "Patch for getBestRowIdentifier (for testing with Oracle JDeveloper)"
},
{
"msg_contents": "I'am sorry (reverse *java and *orig)\n\ncorrect patch\n--- AbstractJdbc1DatabaseMetaData.java.orig\tWed Sep 11 22:20:36 2002\n+++ AbstractJdbc1DatabaseMetaData.java\tWed Sep 11 22:50:37 2002\n@@ -2381,21 +2381,44 @@\n \t// Implementation note: This is required for Borland's JBuilder to work\n \tpublic java.sql.ResultSet getBestRowIdentifier(String catalog, String \nschema, String table, int scope, boolean nullable) throws SQLException\n \t{\n-\t\t// for now, this returns an empty result set.\n-\t\tField f[] = new Field[8];\n-\t\tResultSet r;\t// ResultSet for the SQL query that we need to do\n-\t\tVector v = new Vector();\t\t// The new ResultSet tuple stuff\n+\t\tif (connection.haveMinimumServerVersion(\"7.3\")) {\n+\t\t\tStringBuffer sql = new StringBuffer(512);\n+\t\t\tsql.append(\"SELECT \" +\n+\t\t\t\tscope + \" as SCOPE,\" +\n+\t\t\t\t\"a.attname as COLUMN_NAME,\" +\n+\t\t\t\t\"a.atttypid as DATA_TYPE,\" +\n+\t\t\t\t\"t.typname as TYPE_NAME,\" +\n+\t\t\t\t\"t.typlen as COLUMN_SIZE,\" +\n+\t\t\t\t\"0::int4 as BUFFER_LENGTH,\" +\n+\t\t\t\t\"0::int4 as DECIMAL_DIGITS,\" +\n+\t\t\t\t\"0::int4 as PSEUDO_COLUMN \" +\n+\t\t \t\"FROM pg_catalog.pg_type t,pg_catalog.pg_class bc,\" +\n+\t\t\t\t\"pg_catalog.pg_class ic, pg_catalog.pg_index i, pg_catalog.pg_attribute a \n\" +\n+\t\t \t\"WHERE bc.relkind = 'r' \" +\n+\t\t \t\"AND t.oid=a.atttypid \" +\n+\t\t \t\"AND upper(bc.relname) = upper('\" + table + \"') \" +\n+\t\t \t\"AND i.indrelid = bc.oid \" +\n+\t\t \t\"AND i.indexrelid = ic.oid \" +\n+\t\t \t\"AND ic.oid = a.attrelid \" +\n+\t\t \t\"AND i.indisprimary='t' \");\n+\t\t\treturn connection.createStatement().executeQuery(sql.toString());\n+\t\t} else {\n+\t\t\t// for now, this returns an empty result set.\n+\t\t\tField f[] = new Field[8];\n+\t\t\tResultSet r;\t// ResultSet for the SQL query that we need to do\n+\t\t\tVector v = new Vector();\t\t// The new ResultSet tuple stuff\n \n-\t\tf[0] = new Field(connection, \"SCOPE\", iInt2Oid, 
2);\n-\t\tf[1] = new Field(connection, \"COLUMN_NAME\", iVarcharOid, NAME_SIZE);\n-\t\tf[2] = new Field(connection, \"DATA_TYPE\", iInt2Oid, 2);\n-\t\tf[3] = new Field(connection, \"TYPE_NAME\", iVarcharOid, NAME_SIZE);\n-\t\tf[4] = new Field(connection, \"COLUMN_SIZE\", iInt4Oid, 4);\n-\t\tf[5] = new Field(connection, \"BUFFER_LENGTH\", iInt4Oid, 4);\n-\t\tf[6] = new Field(connection, \"DECIMAL_DIGITS\", iInt2Oid, 2);\n-\t\tf[7] = new Field(connection, \"PSEUDO_COLUMN\", iInt2Oid, 2);\n+\t\t\tf[0] = new Field(connection, \"SCOPE\", iInt2Oid, 2);\n+\t\t\tf[1] = new Field(connection, \"COLUMN_NAME\", iVarcharOid, NAME_SIZE);\n+\t\t\tf[2] = new Field(connection, \"DATA_TYPE\", iInt2Oid, 2);\n+\t\t\tf[3] = new Field(connection, \"TYPE_NAME\", iVarcharOid, NAME_SIZE);\n+\t\t\tf[4] = new Field(connection, \"COLUMN_SIZE\", iInt4Oid, 4);\n+\t\t\tf[5] = new Field(connection, \"BUFFER_LENGTH\", iInt4Oid, 4);\n+\t\t\tf[6] = new Field(connection, \"DECIMAL_DIGITS\", iInt2Oid, 2);\n+\t\t\tf[7] = new Field(connection, \"PSEUDO_COLUMN\", iInt2Oid, 2);\n \n-\t\treturn connection.getResultSet(null, f, v, \"OK\", 1);\n+\t\t\treturn connection.getResultSet(null, f, v, \"OK\", 1);\n+\t\t}\n \t}\n \n \t/*\n\n",
"msg_date": "Wed, 11 Sep 2002 22:52:55 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "Re: Patch for getBestRowIdentifier (for testing with Oracle\n\tJDeveloper)"
},
{
"msg_contents": "\nOn Wednesday 11 September 2002 06:11 pm, Stephan Szabo wrote:\n> On Wed, 11 Sep 2002, snpe wrote:\n> > On Wednesday 11 September 2002 02:55 pm, Stephan Szabo wrote:\n> > > On Wed, 11 Sep 2002, snpe wrote:\n> > > > If decision (transaction or not) is after parser (before execute)\n> > > > this isn't problem.\n> > > > I don't know when postgresql make decision, but that is best after\n> > > > parser. I parser find error simple return error and nothing happen\n> > >\n> > > Are you saying that it's okay for:\n> > > insert into nonexistant values (3);\n> > > and\n> > > insert into existant values (3);\n> > > where 3 is invalid for existant to work\n> > > differently?\n> > > I think that'd be tough to get past some people, but you might\n> > > want to write a proposal for why it should act that way. (Don't\n> > > expect anything for 7.3, but 7.4's devel will start sometime.)\n> >\n> > I don't understand all, but when I tell 'error' I think \"syntax error\"\n> > If error is contraint error again nothin change, only error return\n>\n> I don't understand what you mean here. Are you saying that both of\n> those queries should not start transactions? 
Then that wouldn't\n> be starting between the parser and execute since you won't know that\n> the row violates a constraint until execution time.\n>\n> > > > > I disagree because I think that two serializable select statements\n> > > > > in autocommit=off (without a commit or rollback of course) should\n> > > > > see the same snapshot.\n> > > >\n> > > > Question ?\n> > > > All select in one transaction return same data - no matter if any\n> > > > change and commit data ?\n> > >\n> > > It depends on the isolation level of the transaction I believe.\n> > > This sequence in read committed (in postgresql) and serializable give\n> > > different results.\n> > >\n> > > T1: begin;\n> > > T1: select * from a;\n> > > T2: begin;\n> > > T2: insert into a values (3);\n> > > T2: commit;\n> > > T1: select * from a;\n> > >\n> > > In serializable mode, you can't get \"non-repeatable read\" effects:\n> > > SQL-transaction T1 reads a row. SQL-transaction T2 then modifies\n> > > or deletes that row and performs a COMMIT. 
If T1 then attempts to\n> > > reread the row, it may receive the modified value of discover that the\n> > > row has been deleted.\n> >\n> > If serialization strict connect with transaction then ok.\n>\n> I again am not sure I understand, are you saying that under serializable\n> select should start a transaction but it shouldn't under read committed?\n> That seems like a bad idea to me, either it should or it shouldn't in\n> my opinion.\n>\n> Perhaps it'd be better if you wrote up what you think it should do in\n> all these cases and then we could look at them as a whole.\n> (Cases I can see right now are, select under serializable, select under\n> read committed, garbage command, select to non existant table,\n> insert to non existant table, insert that fails due to unique constraint,\n> insert that fails due to exception raised by a before trigger,\n> insert that fails due to exception raised by an after trigger,\n> insert that does nothing due to before trigger, update that fails\n> due to any of those after some rows have already successfully been\n> modified and probably some others).\n\nOne question first ?\n\nWhat mean ?\nERROR: current transaction is aborted, queries ignored until end of \ntransaction block\nI am tried next (autocommit=true in postgresql.conf)\n\n1. begin;\n2. select * from tab;\nquery work\n3. show t; -- force stupid syntax error\n4. select * from tab;\nERROR: current transaction is aborted, queries ignored until end of \ntransaction block\n5.end;\n6. select * from tab;\nquery work\n\nI must rollback or commit transaction when I make stupid syntax error.\nThis is same with autocommit=false\nIt is maybe ok, I don't know.\nFor rest is ok (if level serializable select start transaction)\n\nThanks\n\n\n",
"msg_date": "Wed, 11 Sep 2002 23:33:56 +0200",
"msg_from": "snpe <snpe@snpe.co.yu>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "On Wed, 11 Sep 2002, snpe wrote:\n\n> On Wednesday 11 September 2002 06:11 pm, Stephan Szabo wrote:\n>\n> > I again am not sure I understand, are you saying that under serializable\n> > select should start a transaction but it shouldn't under read committed?\n> > That seems like a bad idea to me, either it should or it shouldn't in\n> > my opinion.\n> >\n> > Perhaps it'd be better if you wrote up what you think it should do in\n> > all these cases and then we could look at them as a whole.\n> > (Cases I can see right now are, select under serializable, select under\n> > read committed, garbage command, select to non existant table,\n> > insert to non existant table, insert that fails due to unique constraint,\n> > insert that fails due to exception raised by a before trigger,\n> > insert that fails due to exception raised by an after trigger,\n> > insert that does nothing due to before trigger, update that fails\n> > due to any of those after some rows have already successfully been\n> > modified and probably some others).\n>\n> One question first ?\n>\n> What mean ?\n> ERROR: current transaction is aborted, queries ignored until end of\n> transaction block\n> I am tried next (autocommit=true in postgresql.conf)\n\nThe transaction has encountered an unrecoverable error (remember, all\nerrors are currently considered unrecoverable) and the transaction\nis in a potentially unsafe state.\n\n> 1. begin;\n> 2. select * from tab;\n> query work\n> 3. show t; -- force stupid syntax error\n> 4. select * from tab;\n> ERROR: current transaction is aborted, queries ignored until end of\n> transaction block\n> 5.end;\n> 6. 
select * from tab;\n> query work\n>\n> I must rollback or commit transaction when I make stupid syntax error.\n\nNote that even with end you get effectively a rollback in this case\nsince the transaction as a whole ended in an error state.\n\n> This is same with autocommit=false\n> It is maybe ok, I don't know.\n\nWell, at least until we have savepoints or nested transactions,\nthere's only a limited amount of freedom in the implementation.\n\n> For rest is ok (if level serializable select start transaction)\n\nLike I said above, having the transaction starting of select being\ndependent on the isolation level variable sounds like a bad idea.\nIn addition that still doesn't deal with select statements with side\neffects.\n\n",
"msg_date": "Wed, 11 Sep 2002 15:03:48 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with new autocommit config parameter and jdbc"
},
{
"msg_contents": "\nI will add a version of this patch to my work on making DatabaseMetaData\nschema aware.\n\nKris Jurka\n\n\nOn Wed, 11 Sep 2002, snpe wrote:\n\n> I'am sorry (reverse *java and *orig)\n>\n> correct patch\n> --- AbstractJdbc1DatabaseMetaData.java.orig\tWed Sep 11 22:20:36 2002\n> +++ AbstractJdbc1DatabaseMetaData.java\tWed Sep 11 22:50:37 2002\n> @@ -2381,21 +2381,44 @@\n> \t// Implementation note: This is required for Borland's JBuilder to work\n> \tpublic java.sql.ResultSet getBestRowIdentifier(String catalog, String\n> schema, String table, int scope, boolean nullable) throws SQLException\n> \t{\n> -\t\t// for now, this returns an empty result set.\n> -\t\tField f[] = new Field[8];\n> -\t\tResultSet r;\t// ResultSet for the SQL query that we need to do\n> -\t\tVector v = new Vector();\t\t// The new ResultSet tuple stuff\n> +\t\tif (connection.haveMinimumServerVersion(\"7.3\")) {\n> +\t\t\tStringBuffer sql = new StringBuffer(512);\n> +\t\t\tsql.append(\"SELECT \" +\n> +\t\t\t\tscope + \" as SCOPE,\" +\n> +\t\t\t\t\"a.attname as COLUMN_NAME,\" +\n> +\t\t\t\t\"a.atttypid as DATA_TYPE,\" +\n> +\t\t\t\t\"t.typname as TYPE_NAME,\" +\n> +\t\t\t\t\"t.typlen as COLUMN_SIZE,\" +\n> +\t\t\t\t\"0::int4 as BUFFER_LENGTH,\" +\n> +\t\t\t\t\"0::int4 as DECIMAL_DIGITS,\" +\n> +\t\t\t\t\"0::int4 as PSEUDO_COLUMN \" +\n> +\t\t \t\"FROM pg_catalog.pg_type t,pg_catalog.pg_class bc,\" +\n> +\t\t\t\t\"pg_catalog.pg_class ic, pg_catalog.pg_index i, pg_catalog.pg_attribute a\n> \" +\n> +\t\t \t\"WHERE bc.relkind = 'r' \" +\n> +\t\t \t\"AND t.oid=a.atttypid \" +\n> +\t\t \t\"AND upper(bc.relname) = upper('\" + table + \"') \" +\n> +\t\t \t\"AND i.indrelid = bc.oid \" +\n> +\t\t \t\"AND i.indexrelid = ic.oid \" +\n> +\t\t \t\"AND ic.oid = a.attrelid \" +\n> +\t\t \t\"AND i.indisprimary='t' \");\n> +\t\t\treturn connection.createStatement().executeQuery(sql.toString());\n> +\t\t} else {\n> +\t\t\t// for now, this returns an empty result set.\n> +\t\t\tField f[] = new 
Field[8];\n> +\t\t\tResultSet r;\t// ResultSet for the SQL query that we need to do\n> +\t\t\tVector v = new Vector();\t\t// The new ResultSet tuple stuff\n>\n> -\t\tf[0] = new Field(connection, \"SCOPE\", iInt2Oid, 2);\n> -\t\tf[1] = new Field(connection, \"COLUMN_NAME\", iVarcharOid, NAME_SIZE);\n> -\t\tf[2] = new Field(connection, \"DATA_TYPE\", iInt2Oid, 2);\n> -\t\tf[3] = new Field(connection, \"TYPE_NAME\", iVarcharOid, NAME_SIZE);\n> -\t\tf[4] = new Field(connection, \"COLUMN_SIZE\", iInt4Oid, 4);\n> -\t\tf[5] = new Field(connection, \"BUFFER_LENGTH\", iInt4Oid, 4);\n> -\t\tf[6] = new Field(connection, \"DECIMAL_DIGITS\", iInt2Oid, 2);\n> -\t\tf[7] = new Field(connection, \"PSEUDO_COLUMN\", iInt2Oid, 2);\n> +\t\t\tf[0] = new Field(connection, \"SCOPE\", iInt2Oid, 2);\n> +\t\t\tf[1] = new Field(connection, \"COLUMN_NAME\", iVarcharOid, NAME_SIZE);\n> +\t\t\tf[2] = new Field(connection, \"DATA_TYPE\", iInt2Oid, 2);\n> +\t\t\tf[3] = new Field(connection, \"TYPE_NAME\", iVarcharOid, NAME_SIZE);\n> +\t\t\tf[4] = new Field(connection, \"COLUMN_SIZE\", iInt4Oid, 4);\n> +\t\t\tf[5] = new Field(connection, \"BUFFER_LENGTH\", iInt4Oid, 4);\n> +\t\t\tf[6] = new Field(connection, \"DECIMAL_DIGITS\", iInt2Oid, 2);\n> +\t\t\tf[7] = new Field(connection, \"PSEUDO_COLUMN\", iInt2Oid, 2);\n>\n> -\t\treturn connection.getResultSet(null, f, v, \"OK\", 1);\n> +\t\t\treturn connection.getResultSet(null, f, v, \"OK\", 1);\n> +\t\t}\n> \t}\n>\n> \t/*\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n",
"msg_date": "Thu, 12 Sep 2002 13:02:28 -0400 (EDT)",
"msg_from": "Kris Jurka <books@ejurka.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch for getBestRowIdentifier (for testing with Oracle"
}
] |
[
{
"msg_contents": "Hello all,\n\nPostgreSQL *still* has a bug where PQcmdStatus() won't return the\nnumber of rows updated. But that is essential for applications, since\nwithout it of course we don't know if the update/delete/insert\ncommands succeeded. Even worse, on interfaces like Delphi/dbExpress the\nprogram will return an error message and roll back the transaction,\nthinking nothing has been updated. In other words, unusable.\n\nThis renders views useless (I either use views with rules and don't get\nmy program working) and won't allow me to properly use security settings\non PostgreSQL...\n\nThis is a *major* issue in my opinion that appeared on a May thread\nbut I can't see it fixed in version 7.2.2. Even worse, I can't see\nanything about it in the TODO file.\n\nWill this fix finally appear in 7.3? Any ways to work around this?\nHow can I know at least if *something* succeeded, or how many rows\n(the proper behavior)?\n\nThank you very much.\n\n------------- \nBest regards,\n Steve Howe mailto:howe@carcass.dhs.org\n\n",
"msg_date": "Fri, 6 Sep 2002 15:10:15 -0300",
"msg_from": "Steve Howe <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "Steve Howe wrote:\n> Hello all,\n> \n> PostgreSQL *still* has a bug where PQcmdStatus() won't return the\n> number of rows updated. But that is essential for applications, since\n> without it of course we don't know if the updates/delete/insert\n> commands succeded. Even worst, on interfaces like Delphi/dbExpress the\n> program will return an error message and rollback transaction thinking\n> nothing have been updated. In other words, unusable.\n> \n> This render views useless (I either use view with rules and don't get\n> my program working) and won't allow me to proper use security settings\n> on PostgreSQL...\n> \n> This is a *major* issue in my opinion that appeared on a May thread\n> but I can't see it done on version 7.2.2. Even worst, I can't see\n> nothing on the TODO file.\n> \n> Will this fix finally appear on 7.3 ? Any ways to work around this ?\n> How can I know at least if *something* succeeded, or how many rows\n> (the proper behavior)?\n\nI see on TODO:\n\n\t* Return proper effected tuple count from complex commands [return]\n\nand that \"return\" link has a discussion of possible fixes.\nUnfortunately, no fix was agreed upon so there is no fix in 7.3.\n\nAnd, on top of that, I can't even think of a workaround. At best,\nperhaps someone can write you a patch to fix this.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 6 Sep 2002 14:22:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "Hello Bruce,\n\nFriday, September 6, 2002, 3:22:13 PM, you wrote:\n\nBM> Steve Howe wrote:\n>> Hello all,\n>> \n>> PostgreSQL *still* has a bug where PQcmdStatus() won't return the\n>> number of rows updated. But that is essential for applications, since\n>> without it of course we don't know if the updates/delete/insert\n>> commands succeded. Even worst, on interfaces like Delphi/dbExpress the\n>> program will return an error message and rollback transaction thinking\n>> nothing have been updated. In other words, unusable.\n>> \n>> This render views useless (I either use view with rules and don't get\n>> my program working) and won't allow me to proper use security settings\n>> on PostgreSQL...\n>> \n>> This is a *major* issue in my opinion that appeared on a May thread\n>> but I can't see it done on version 7.2.2. Even worst, I can't see\n>> nothing on the TODO file.\n>> \n>> Will this fix finally appear on 7.3 ? Any ways to work around this ?\n>> How can I know at least if *something* succeeded, or how many rows\n>> (the proper behavior)?\n\nBM> I see on TODO:\n\nBM> * Return proper effected tuple count from complex commands [return]\nSorry, I missed it because I checked the v7.2.2 TODO.\n\nBM> and that \"return\" link has a discussion of possible fixes.\nBM> Unfortunately, no fix was agreed upon so there is no fix in 7.3.\nSo all the databases that use rules will still be broken? I don't\nbelieve you guys are so unconcerned about this...\n\nBM> And, on top of that, I can't even think of a workaround. At best,\nBM> perhaps someone can write you a patch to fix this.\nLet's hope so... and I disagree about the 'write for me' point; it's\nfor *everyone using rules*. They are useless, currently... and it's\nbeen broken for months with nothing agreed until now... I just can't\nbelieve it.\nWhat do you do when you have to update a view?\n\n------------- \nBest regards,\n Steve Howe mailto:howe@carcass.dhs.org\n\n",
"msg_date": "Fri, 6 Sep 2002 19:38:54 -0300",
"msg_from": "Steve Howe <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "\nI am not any happier about it than you are. Your report is good because\nit is the first case where returning the wrong value actually breaks\nsoftware. You may be able to justify adding a fix during beta by saying\nit is a bug fix.\n\nOf course, someone is going to have to generate a patch and champion the\ncause. This stuff doesn't happen by magic.\n\n---------------------------------------------------------------------------\n\nSteve Howe wrote:\n> Hello Bruce,\n> \n> Friday, September 6, 2002, 3:22:13 PM, you wrote:\n> \n> BM> Steve Howe wrote:\n> >> Hello all,\n> >> \n> >> PostgreSQL *still* has a bug where PQcmdStatus() won't return the\n> >> number of rows updated. But that is essential for applications, since\n> >> without it of course we don't know if the updates/delete/insert\n> >> commands succeded. Even worst, on interfaces like Delphi/dbExpress the\n> >> program will return an error message and rollback transaction thinking\n> >> nothing have been updated. In other words, unusable.\n> >> \n> >> This render views useless (I either use view with rules and don't get\n> >> my program working) and won't allow me to proper use security settings\n> >> on PostgreSQL...\n> >> \n> >> This is a *major* issue in my opinion that appeared on a May thread\n> >> but I can't see it done on version 7.2.2. Even worst, I can't see\n> >> nothing on the TODO file.\n> >> \n> >> Will this fix finally appear on 7.3 ? Any ways to work around this ?\n> >> How can I know at least if *something* succeeded, or how many rows\n> >> (the proper behavior)?\n> \n> BM> I see on TODO:\n> \n> BM> * Return proper effected tuple count from complex commands [return]\n> Sorry, I missed it because I check the v7.2.2 TODO.\n> \n> BM> and that \"return\" link has a discussion of possible fixes.\n> BM> Unfortunately, no fix was agreed upon so there is no fix in 7.3.\n> So all the databases that uses rules will still be broken ? 
I don't\n> believe you guys are so unconcerned about this...\n> \n> BM> And, on top of that, I can't even think of a workaround. At best,\n> BM> perhaps someone can write you a patch to fix this.\n> Let's hope so... and I disagree about the 'write for me' point; it's\n> for *everyone using rules*. They are useless, currently... and it's\n> broken for months and nothing agreed until know... I just can't\n> believe in it.\n> What do you do when you have to update a view ?\n> \n> ------------- \n> Best regards,\n> Steve Howe mailto:howe@carcass.dhs.org\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 6 Sep 2002 20:52:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "Hello Bruce,\n\nFriday, September 6, 2002, 9:52:18 PM, you wrote:\n\n\nBM> I am not any happier about it than you are. Your report is good because\nBM> it is the first case where returning the wrong value actually breaks\nBM> software. You may be able to justify adding a fix during beta by saying\nBM> it is a bug fix.\nActually I think it must have happened to someone else too, but they\nmust have quit using rules or something...\nI can't ensure security in the system without rules.\n\nBM> Of course, someone is going to have to generate a patch and champion the\nBM> cause. This stuff doesn't happen by magic.\nI understand your point. I was just hoping to see more concern about\nthe issue from the developers... but it's been broken for months.\n\nUnfortunately I can't do it myself because it would take weeks to get\nfamiliar with the internals of PostgreSQL...\n\nLet's hope someone realizes how serious this is and makes a fix.\n\nThanks again...\n------------- \nBest regards,\n Steve Howe mailto:howe@carcass.dhs.org\n\n",
"msg_date": "Fri, 6 Sep 2002 22:30:12 -0300",
"msg_from": "Steve Howe <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "Steve Howe wrote:\n> Hello Bruce,\n> \n> Friday, September 6, 2002, 9:52:18 PM, you wrote:\n> \n> \n> BM> I am not any happier about it than you are. Your report is good because\n> BM> it is the first case where returning the wrong value actually breaks\n> BM> software. You may be able to justify adding a fix during beta by saying\n> BM> it is a bug fix.\n> Actually I think it must have happened with someone else, but they\n> must have quit using rules or something...\n> Actually I can't ensure security in the system without rules.\n> \n> BM> Of course, someone is going to have to generate a patch and champion the\n> BM> cause. This stuff doesn't happen by magic.\n> I understand your point. I just was hoping to see more concern about\n> the issue by the developers... but that's been broken for months.\n> \n> Unhappily I can't do it myself because it would take weeks to get\n> familiar with the inners of PostgreSQL...\n\nWell, there was a big discussion, and I did bring up the issue in early\nAugust to see if I could get a resolution to it and was told no\nconclusion could be made.\n\nI suggest you read the TODO detail on the item and make a proposal on\nhow it _should_ work and if you can get agreement from everyone, you may\nbe able to nag someone into doing a patch.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 6 Sep 2002 21:58:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "Hello Bruce,\n\nFriday, September 6, 2002, 10:58:13 PM, you wrote:\n\nBM> Well, there was a big discussion, and I did bring up the issue in early\nBM> August to see if I could get a resolution to it and was told no\nBM> conclusion could be made.\n\nBM> I suggest you read the TODO detail on the item and make a proposal on\nBM> how it _should_ work and if you can get agreement from everyone, you may\nBM> be able to nag someone into doing a patch.\nI think it should return the number of rows modified in the context of\nthe view, and not exactly that of each of the tables affected. And\nthis would not work well with PQcmdStatus() because it returns a\nsingle integer entry only.\n\nThis was working in some previous build, wasn't it? What was the\nprevious behavior? Shouldn't the patch follow that way?\n\n------------- \nBest regards,\n Steve Howe mailto:howe@carcass.dhs.org\n\n",
"msg_date": "Fri, 6 Sep 2002 23:52:47 -0300",
"msg_from": "Steve Howe <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "Steve Howe <howe@carcass.dhs.org> writes:\n> BM> I suggest you read the TODO detail on the item and make a proposal on\n> BM> how it _should_ work and if you can get agreement from everyone, you may\n> BM> be able to nag someone into doing a patch.\n\n> I think it should return the number of rows modified in the context of\n> the view, and not exactly that of each of the tables affected.\n\nThat's so vague as to be useless. What is \"in the context of the view\"?\nHow does that notion help us resolve the uncertainties discussed in the\nTODO thread?\n\n> This was working on some previous build, wasn't it ? What was the\n> previous behavior ? Shouldn't the patch follow that way ?\n\nThe old behavior was quite broken too, just not in a way that affected\nyou. We will not be reverting the change that fatally broke it (namely\naltering the order of RULE applications for INSERTs) and so \"go back\nto the old code\" isn't a workable answer at all.\n\nI don't think fixing the code is the hard part; agreeing on what the\nbehavior should be in complex cases is the hard part.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 07 Sep 2002 16:42:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue "
},
{
"msg_contents": "Tom Lane wrote:\n> Steve Howe <howe@carcass.dhs.org> writes:\n> > BM> I suggest you read the TODO detail on the item and make a proposal on\n> > BM> how it _should_ work and if you can get agreement from everyone, you may\n> > BM> be able to nag someone into doing a patch.\n> \n> > I think it should return the number of rows modified in the context of\n> > the view, and not exactly that of each of the tables affected.\n> \n> That's so vague as to be useless. What is \"in the context of the view\"?\n> How does that notion help us resolve the uncertainties discussed in the\n> TODO thread?\n> \n> > This was working on some previous build, wasn't it ? What was the\n> > previous behavior ? Shouldn't the patch follow that way ?\n> \n> The old behavior was quite broken too, just not in a way that affected\n> you. We will not be reverting the change that fatally broke it (namely\n> altering the order of RULE applications for INSERTs) and so \"go back\n> to the old code\" isn't a workable answer at all.\n> \n> I don't think fixing the code is the hard part; agreeing on what the\n> behavior should be in complex cases is the hard part.\n\nYes, Steve, if you want a fix, you better read the TODO detail and come\nup with a proposal and try to sell it to the group.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 7 Sep 2002 17:22:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "Hello Tom,\n\nSaturday, September 7, 2002, 5:42:33 PM, you wrote:\n\nTL> Steve Howe <howe@carcass.dhs.org> writes:\n>> BM> I suggest you read the TODO detail on the item and make a proposal on\n>> BM> how it _should_ work and if you can get agreement from everyone, you may\n>> BM> be able to nag someone into doing a patch.\n\n>> I think it should return the number of rows modified in the context of\n>> the view, and not exactly that of each of the tables affected.\n\nTL> That's so vague as to be useless. What is \"in the context of the view\"?\nTL> How does that notion help us resolve the uncertainties discussed in the\nTL> TODO thread?\nI just mean that PQcmdStatus() should not return a value for each\nchanged table, but how many rows \"viewable by the view\" it could\nchange.\nAgain, I'm not familiar enough with the internals of PostgreSQL to feel\ncomfortable making a better suggestion.\n\n>> This was working on some previous build, wasn't it ? What was the\n>> previous behavior ? Shouldn't the patch follow that way ?\n\nTL> The old behavior was quite broken too, just not in a way that affected\nTL> you. We will not be reverting the change that fatally broke it (namely\nTL> altering the order of RULE applications for INSERTs) and so \"go back\nTL> to the old code\" isn't a workable answer at all.\nI didn't mean to revert the code but to make it work like the older\nversion did. I was unaware that it was broken too, but the removal now\nbroke views/rules entirely, so I wonder what could be worse...\nAlso, it should have affected thousands of users, not just me. Unless\nnobody uses views...\n\nTL> I don't think fixing the code is the hard part; agreeing on what the\nTL> behavior should be in complex cases is the hard part.\nI understand your point and I'll try to give a proper solution, but\nsince I'm not familiar with the PostgreSQL internals, I wonder how good\nit could be...\n\nThanks :)\n\n------------- \nBest regards,\n Steve Howe mailto:howe@carcass.dhs.org\n\n",
"msg_date": "Sat, 7 Sep 2002 18:28:09 -0300",
"msg_from": "Steve Howe <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "Steve Howe wrote:\n> \n> Hello all,\n> \n> PostgreSQL *still* has a bug where PQcmdStatus() won't return the\n> number of rows updated. But that is essential for applications, since\n> without it of course we don't know if the updates/delete/insert\n> commands succeded. Even worst, on interfaces like Delphi/dbExpress the\n> program will return an error message and rollback transaction thinking\n> nothing have been updated. In other words, unusable.\n> \n> This render views useless (I either use view with rules and don't get\n> my program working) and won't allow me to proper use security settings\n> on PostgreSQL...\n> \n> This is a *major* issue in my opinion that appeared on a May thread\n> but I can't see it done on version 7.2.2. Even worst, I can't see\n> nothing on the TODO file.\n> \n> Will this fix finally appear on 7.3 ? Any ways to work around this ?\n> How can I know at least if *something* succeeded, or how many rows\n> (the proper behavior)?\n\nAnd of course, if you insert into a real table and a trigger procedure\nsuppresses your original INSERT, but fires a cascade of other triggers\nby doing a mass UPDATE somewhere else instead, you expect that all\nthese caused UPDATEs and whatnots are summed up and returned\ninstead, right? Or what is proper behavior here?\n\nSo please, \"proper behavior\" is not always what your favorite tool\nexpects. And just because you cannot \"fix\" your tool doesn't make that\nbehavior any more \"proper\".\n\n\nJan\n\n> \n> Thank you very much.\n> \n> -------------\n> Best regards,\n> Steve Howe mailto:howe@carcass.dhs.org\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Mon, 09 Sep 2002 10:15:47 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "Steve Howe wrote:\n> \n> Hello Bruce,\n> \n> Friday, September 6, 2002, 9:52:18 PM, you wrote:\n> \n> BM> I am not any happier about it than you are. Your report is good because\n> BM> it is the first case where returning the wrong value actually breaks\n> BM> software. You may be able to justify adding a fix during beta by saying\n> BM> it is a bug fix.\n> Actually I think it must have happened with someone else, but they\n> must have quit using rules or something...\n> Actually I can't ensure security in the system without rules.\n> \n> BM> Of course, someone is going to have to generate a patch and champion the\n> BM> cause. This stuff doesn't happen by magic.\n> I understand your point. I just was hoping to see more concern about\n> the issue by the developers... but that's been broken for months.\n> \n> Unhappily I can't do it myself because it would take weeks to get\n> familiar with the inners of PostgreSQL...\n> \n> Let's hope someone realize how serious is this and make a fix.\n\nSeems you at least realized how serious it is. Even if you cannot code\nthe \"proper\" solution, could you please make a complete table of all\npossible situations and the expected returns? With complete I mean\nincluding all combinations of rules, triggers, deferred constraints and\nthe like. Or do you at least see now where in the discussion we got\nstuck?\n\nIt doesn't help to cry for a quick hack that fixes your particular\nproblem. That only leads to the situation that someday we have a final\nfix that changes the behavior for your case again and then you cry again\nand ask us not to break backwards compatibility.\n\n\nThanks for your patience and understanding,\nJan\n\n> \n> Thanks again...\n> -------------\n> Best regards,\n> Steve Howe mailto:howe@carcass.dhs.org\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Mon, 09 Sep 2002 10:26:20 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "Steve Howe wrote:\n> \n> Hello Bruce,\n> \n> Friday, September 6, 2002, 10:58:13 PM, you wrote:\n> \n> BM> Well, there was a big discussion, and I did bring up the issue in early\n> BM> August to see if I could get a resolution to it and was told no\n> BM> conclusion could be made.\n> \n> BM> I suggest you read the TODO detail on the item and make a proposal on\n> BM> how it _should_ work and if you can get agreement from everyone, you may\n> BM> be able to nag someone into doing a patch.\n> I think it should return the number of rows modified in the context of\n> the view, and not exactly that of each of the tables affected. And\n> this would not work well with PQcmdStatus() because it returns a\n> single integer entry only.\n> \n> This was working on some previous build, wasn't it ? What was the\n> previous behavior ? Shouldn't the patch follow that way ?\n\nIn previous versions rules even fired in different orders. We cannot get\nback to that, because it was the reason for total failure of rules at\nall. So no, the patch should not follow that way.\n\nYou say that the return should be the rows modified in the context of\nthe view. Er ... what is that? You mean only INSERTS, UPDATES and\nDELETES made by rule actions directly to any table referenced by the\nview itself count, not if a modification to another third table or view\ntriggers back a modification to one of these base tables in return ...\nwould that be through a rule or a trigger?\n\nWhat about a view over views, that has rules that in turn get rewritten\nby the rewrite rules of the views it consists of? What is that view's\ncontext in detail?\n\n\nJan\n\n> \n> -------------\n> Best regards,\n> Steve Howe mailto:howe@carcass.dhs.org\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Mon, 09 Sep 2002 10:36:56 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "Hello Jan,\n\nMonday, September 9, 2002, 11:26:20 AM, you wrote:\n\nJW> Steve Howe wrote:\n>> \n>> Hello Bruce,\n>> \n>> Friday, September 6, 2002, 9:52:18 PM, you wrote:\n>> \n>> BM> I am not any happier about it than you are. Your report is good because\n>> BM> it is the first case where returning the wrong value actually breaks\n>> BM> software. You may be able to justify adding a fix during beta by saying\n>> BM> it is a bug fix.\n>> Actually I think it must have happened with someone else, but they\n>> must have quit using rules or something...\n>> Actually I can't ensure security in the system without rules.\n>> \n>> BM> Of course, someone is going to have to generate a patch and champion the\n>> BM> cause. This stuff doesn't happen by magic.\n>> I understand your point. I just was hoping to see more concern about\n>> the issue by the developers... but that's been broken for months.\n>> \n>> Unhappily I can't do it myself because it would take weeks to get\n>> familiar with the inners of PostgreSQL...\n>> \n>> Let's hope someone realize how serious is this and make a fix.\n\nJW> Seems you at least realized how serious it is. Even if you cannot code\n\"At least\" ?... What do you mean by that ?\n\nJW> the \"proper\" solution, could you please make a complete table of all\nJW> possible situations and the expected returns? With complete I mean\nJW> including all combinations of rules, triggers, deferred constraints and\nJW> the like. Or do you at least see now where in the discussion we got\nJW> stuck?\nI had seen it, and the proposal was posted two days ago.\n\nJW> It doesn't help to cry for a quick hack that fixes your particular\nJW> problem. That only leads to the situation that someday we have a final\nJW> fix that changes the behavior for your case again and then you cry again\nJW> and ask us not to break backwards compatibility.\nSee, I'm not crying. I'm just another user who needs something\nworking. The whole problem was that the PostgreSQL team knew the\nproblem existed, had a brief discussion on the subject, and couldn't\nreach an agreement. That's ok for me, I understand... but releasing\nversions known to be broken is something I can't understand.\n\n------------- \nBest regards,\n Steve Howe mailto:howe@carcass.dhs.org\n\n",
"msg_date": "Mon, 9 Sep 2002 14:06:39 -0300",
"msg_from": "Steve Howe <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "Hello Jan,\n\nMonday, September 9, 2002, 11:26:20 AM, you wrote:\n\nJW> Seems you at least realized how serious it is. Even if you cannot code\nJW> the \"proper\" solution, could you please make a complete table of all\nJW> possible situations and the expected returns? With complete I mean\nJW> including all combinations of rules, triggers, deferred constraints and\nJW> the like. Or do you at least see now where in the discussion we got\nJW> stuck?\nBy the way, I don't think triggers and constraints are in focus here,\nnor are rules other than \"DO INSTEAD\".\nThese should be transparent to the user.\nI suggest you read the proposal posted to become aware of the\ndiscussion.\n\nThanks.\n\n------------- \nBest regards,\n Steve Howe mailto:howe@carcass.dhs.org\n\n",
"msg_date": "Mon, 9 Sep 2002 14:14:49 -0300",
"msg_from": "Steve Howe <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "Hello Jan,\n\nMonday, September 9, 2002, 11:15:47 AM, you wrote:\n\nJW> Steve Howe wrote:\n>> \n>> Hello all,\n>> \n>> PostgreSQL *still* has a bug where PQcmdStatus() won't return the\n>> number of rows updated. But that is essential for applications, since\n>> without it of course we don't know if the updates/delete/insert\n>> commands succeded. Even worst, on interfaces like Delphi/dbExpress the\n>> program will return an error message and rollback transaction thinking\n>> nothing have been updated. In other words, unusable.\n>> \n>> This render views useless (I either use view with rules and don't get\n>> my program working) and won't allow me to proper use security settings\n>> on PostgreSQL...\n>> \n>> This is a *major* issue in my opinion that appeared on a May thread\n>> but I can't see it done on version 7.2.2. Even worst, I can't see\n>> nothing on the TODO file.\n>> \n>> Will this fix finally appear on 7.3 ? Any ways to work around this ?\n>> How can I know at least if *something* succeeded, or how many rows\n>> (the proper behavior)?\n\nJW> And of course, in the case you insert into a real table you expect if a\nJW> trigger procedure suppressed your original INSERT, but fired a cascade\nJW> of other triggers by doing a mass UPDATE somewhere else instead, that\nJW> all these caused UPDATEs and whatnot's are summed up and returned\nJW> instead, right? Or what is proper behavior here?\nWhat is documented, and what is expected: PQcmdStatus(),\nPQcmdTuples() and PQoidValue() returning the information they should.\n\nJW> So please, \"proper behavior\" is not allways what your favorite tool\nJW> expects. And just because you cannot \"fix\" your tool doesn't make that\nJW> behavior any more \"proper\".\nDo you have a more appropriate word ?\n\nAnd just so that you know, I can't \"fix\" my tool because I have other\nwork to do (a lot of it, and that work uses PostgreSQL), and\nunhappily I couldn't join the development team and thus I'm not aware\nof how it works internally. The reason isn't that I just don't have\nintellectual capacity.\n\nAnd it looks like *you* overhauled the query rewrite rule system, so\nwhat we are talking about is something that must have passed through\nyou. So instead of offending me, your \"proper\" behavior would be to try\nto help and suggest a solution for the problem, as other developers are\ndoing.\n\nThanks again.\n\n------------- \nBest regards,\n Steve Howe mailto:howe@carcass.dhs.org\n\n",
"msg_date": "Mon, 9 Sep 2002 14:43:46 -0300",
"msg_from": "Steve Howe <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "> existed, had a brief discussion on the subject, and couldn't reach an\n> agreement. That's ok for me, I understand... but releasing versions\n> known to be broken is something I can't understand.\n\nIf we didn't do that, then Postgresql never would have been released in\nthe first place, nor at any date between then and now.\n\nThere has been, and currently is, a ton of known broken, wonky, or\nincomplete stuff -- but it's felt that the current version has a lot\nmore to offer than the previous version, so it's being released.\n\nThis works for *all* software. If you never release, nothing gets\nbetter.\n\n\nI suspect it'll be several more major releases before we begin to\nconsider it approaching completely functional.\n\n-- \n Rod Taylor\n\n",
"msg_date": "09 Sep 2002 13:55:18 -0400",
"msg_from": "Rod Taylor <rbt@rbt.ca>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "\nOn Mon, 9 Sep 2002, Steve Howe wrote:\n\n> JW> Steve Howe wrote:\n> >>\n> >> Hello all,\n> >>\n> >> PostgreSQL *still* has a bug where PQcmdStatus() won't return the\n> >> number of rows updated. But that is essential for applications, since\n> >> without it of course we don't know if the updates/delete/insert\n> >> commands succeded. Even worst, on interfaces like Delphi/dbExpress the\n> >> program will return an error message and rollback transaction thinking\n> >> nothing have been updated. In other words, unusable.\n\nAs a note, I assume you realize that it returning any number doesn't\nguarantee that the command succeeded if you assume succeeding means doing\nwhat the statement sent would appear to do. ;) Although I think\nwe need to change the current behavior, we are turning a false \"failure\"\ninto a potentially false \"success\" (I did an update, it said two rows were\nchanged but there's no visible data change in the entire system?)\nFortunately, the likely bad effects from the false \"success\" are probably\nonly going to happen in somewhat degenerate cases.\n\nI quote \"failure\" and \"success\" because there's already a notion of\nsuccess and failure which is raising an exception condition or not (AFAICT\n0 rows is a completion condition - the statement succeeded but nothing was\nmodified). As such, using the count to determine success of the statement\nis wrong for an interface, but it may be meaningful for applications\nattempting to apply some sort of business logic.\n\n",
"msg_date": "Mon, 9 Sep 2002 11:15:29 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "Steve Howe wrote:\n> \n> Hello Jan,\n> \n> Monday, September 9, 2002, 11:15:47 AM, you wrote:\n> \n> JW> So please, \"proper behavior\" is not allways what your favorite tool\n> JW> expects. And just because you cannot \"fix\" your tool doesn't make that\n> JW> behavior any more \"proper\".\n> Do you have any word more appropriate ?\n> [...]\n> And it looks like *you* overhauled the query rewrite rule system, so\n> what we are talking is something that must have passed through you. So\n> instead of offending me, your \"proper\" behavior would be try to help\n> and suggest a solution for the problem, as other developers are doing.\n\nSee, and exactly here lies the problem. Indeed, I spent about 3 months\nof my spare time back in 95 or so to fix it, after I spent many more\nmonths over years to get familiar with the internals.\n\nNow, instead of even trying to spend some serious amount of time\nyourself, you give some vague hints about the functionality that might\nmake your problems disappear, name that a proposal and expect someone\nelse to do what you need for free. This is not exactly how open source\nworks.\n\nWe should surely keep this on a much more technical level and avoid any\npersonal offendings. To do so, please explain to me why you think that\ntriggers and constraints are out of focus here? What is the difference\nbetween a trigger, a rule and an instead rule from a business process\noriented point of view? I think there is none at all. They are just\ndifferent techniques to do one and the same, implement business logic in\nthe database system.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Mon, 09 Sep 2002 15:56:04 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "Hello Jan,\n\nMonday, September 9, 2002, 4:56:04 PM, you wrote:\n\nJW> Steve Howe wrote:\n>> \n>> Hello Jan,\n>> \n>> Monday, September 9, 2002, 11:15:47 AM, you wrote:\n>> \n>> JW> So please, \"proper behavior\" is not allways what your favorite tool\n>> JW> expects. And just because you cannot \"fix\" your tool doesn't make that\n>> JW> behavior any more \"proper\".\n>> Do you have any word more appropriate ?\n>> [...]\n>> And it looks like *you* overhauled the query rewrite rule system, so\n>> what we are talking is something that must have passed through you. So\n>> instead of offending me, your \"proper\" behavior would be try to help\n>> and suggest a solution for the problem, as other developers are doing.\n\nJW> See, and exactly here lies the problem. Indeed, I spent about 3 months\nJW> of my spare time back in 95 or so to fix it, after I spent many more\nJW> months over years to get familiar with the internals.\n\nJW> Now, instead of even trying to spend some serious amount of time\nJW> yourself, you give some vague hints about the functionality that might\nJW> make your problems disappear, name that a proposal and expect someone\nJW> else to do what you need for free. This is not exactly how open source\nJW> works.\nAs I told you, this would demand weeks and I just don't have time to\ndo it. Other developers offered to make a fix and asked me to write\nthat proposal. And so I did.\nIt's sad that you alone don't seem to be trying to help in\nany way. Other developers have considered the proposal and are actually\nvoting and giving constructive ideas on the subject.\n\nJW> We should surely keep this on a much more technical level and avoid any\nJW> personal offendings. To do so, please explain to me why you think that\nJW> triggers and constraints are out of focus here? What is the difference\nJW> between a trigger, a rule and an instead rule from a business process\nJW> oriented point of view? I think there is none at all. They are just\nJW> different techniques to do one and the same, implement business logic in\nJW> the database system.\nBecause the affected commands are supposed to give you back\ninformation on what your INSERT/UPDATE/DELETE commands did, not on what\nis happening behind the scenes.\n\nAnd it seems that other people in the thread agree with me; please\nread the thread.\n\nSince you are probably very familiar with the rules system, why don't\nyou vote on a proposal too, or just suggest yours. Your opinion is\nvery important. I'm not saying I own the truth; I'm just another\ndeveloper who needs a feature working again.\n\nThank you.\n\n------------- \nBest regards,\n Steve Howe mailto:howe@carcass.dhs.org\n\n",
"msg_date": "Mon, 9 Sep 2002 18:05:27 -0300",
"msg_from": "Steve Howe <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "Jan Wieck wrote:\n> We should surely keep this on a much more technical level and avoid any\n> personal offendings. To do so, please explain to me why you think that\n> triggers and constraints are out of focus here? What is the difference\n> between a trigger, a rule and an instead rule from a business process\n> oriented point of view? I think there is none at all. They are just\n> different techniques to do one and the same, implement business logic in\n> the database system.\n\nAll the problems here are coming from INSTEAD rules. We don't have\nINSTEAD triggers or constraints.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 9 Sep 2002 22:11:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "Steve Howe wrote:\n> Because the affected commands are supposed to give you back\n> information on what your INSERT/UPDATE/DELETE commands, not what is\n> making behind the scenes.\n> \n> And it seems that other people in the thread agree with me, please\n> read thread.\n> \n> Since you are probably very familiar with the rules system, why don't\n> you vote on a proposal too, or just suggest yours. Your opinion is\n> very important. I'm not saying I'm the truth owner; I'm just another\n> developer who needs a feature working again.\n\nJan actually did vote in the first round which appears in TODO.detail. \nHe voted that if the INSTEAD rule had only _one_ statement, return that,\nif not, return nothing.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 9 Sep 2002 22:13:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "On Mon, 9 Sep 2002, Bruce Momjian wrote:\n\n> Jan Wieck wrote:\n> > We should surely keep this on a much more technical level and avoid any\n> > personal offendings. To do so, please explain to me why you think that\n> > triggers and constraints are out of focus here? What is the difference\n> > between a trigger, a rule and an instead rule from a business process\n> > oriented point of view? I think there is none at all. They are just\n> > different techniques to do one and the same, implement business logic in\n> > the database system.\n>\n> All the problems here are coming from INSTEAD rules. We don't have\n> INSTEAD triggers or contraints.\n\nSure we do, well sort of. :)\nMake a before trigger that does a different statement and returns NULL\nto abort the original action on that row.\n\n\n",
"msg_date": "Mon, 9 Sep 2002 19:18:07 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
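Stephan's "sort of" INSTEAD trigger can be sketched like this (a hypothetical example; the table and function names are illustrative, and the quoted function body is the pre-dollar-quoting PL/pgSQL syntax of the era):

```sql
-- A BEFORE ROW trigger that performs a different statement and then
-- returns NULL, which cancels the original row action -- effectively
-- an "instead" trigger.
CREATE FUNCTION log_instead() RETURNS trigger AS '
BEGIN
    -- do something else entirely...
    INSERT INTO audit_log (note) VALUES (''insert suppressed'');
    -- ...and suppress the original INSERT for this row
    RETURN NULL;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER suppress_ins BEFORE INSERT ON some_table
    FOR EACH ROW EXECUTE PROCEDURE log_instead();

-- The command tag reports zero rows inserted, even though audit_log
-- gained a row behind the scenes.
INSERT INTO some_table DEFAULT VALUES;
```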
{
"msg_contents": "On Mon, 2002-09-09 at 22:11, Bruce Momjian wrote:\n> Jan Wieck wrote:\n> > We should surely keep this on a much more technical level and avoid any\n> > personal offendings. To do so, please explain to me why you think that\n> > triggers and constraints are out of focus here? What is the difference\n> > between a trigger, a rule and an instead rule from a business process\n> > oriented point of view? I think there is none at all. They are just\n> > different techniques to do one and the same, implement business logic in\n> > the database system.\n> \n> All the problems here are coming from INSTEAD rules. We don't have\n> INSTEAD triggers or contraints.\n\nWell.. Triggers could be exclusively INSTEAD. A trigger could easily\nwrite a few things to a number of other tables, and return NULL in a\nBEFORE trigger which would prevent execution of the requested command.\n\n\n\n-- \n Rod Taylor\n\n",
"msg_date": "09 Sep 2002 22:20:31 -0400",
"msg_from": "Rod Taylor <rbt@rbt.ca>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "Hello Bruce,\n\nMonday, September 9, 2002, 11:13:20 PM, you wrote:\n\nBM> Steve Howe wrote:\n>> Because the affected commands are supposed to give you back\n>> information on what your INSERT/UPDATE/DELETE commands, not what is\n>> making behind the scenes.\n>> \n>> And it seems that other people in the thread agree with me, please\n>> read thread.\n>> \n>> Since you are probably very familiar with the rules system, why don't\n>> you vote on a proposal too, or just suggest yours. Your opinion is\n>> very important. I'm not saying I'm the truth owner; I'm just another\n>> developer who needs a feature working again.\n\nBM> Jan actually did vote in the first round which appears in TODO.detail. \nBM> He voted that if the INSTEAD rule had only _one_ statement, return that,\nBM> if not, return nothing.\nWe still need Tom's and Hiroshi's word, since they were the most\ninvolved with the subject, and the other developers' opinions... :)\n\n------------- \nBest regards,\n Steve Howe mailto:howe@carcass.dhs.org\n\n",
"msg_date": "Tue, 10 Sep 2002 00:25:48 -0300",
"msg_from": "Steve Howe <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> On Mon, 9 Sep 2002, Bruce Momjian wrote:\n>> All the problems here are coming from INSTEAD rules. We don't have\n>> INSTEAD triggers or contraints.\n\n> Sure we do, well sort of. :)\n> Make a before trigger that does a different statement and returns NULL\n> to abort the original action on that row.\n\nI think we can reasonably leave the side-effects of triggers out of the\ndiscussion. PQcmdStatus numbers have never included side-effects of\ntriggers in the past, and I see no reason for them to start now.\n\nI think it's reasonable to exclude both triggers and non-INSTEAD rules\nfrom the status count, on the grounds that these normally represent\n\"add-on\" actions and not the \"real\" action. The cases that get\ninteresting are those that involve multiple INSTEAD actions (either from\nmultiple INSTEAD rules, or a single rule with multiple commands in its\nbody) and those cases where the INSTEAD action is a different type from\nthe original command (eg, ON UPDATE DO INSTEAD INSERT...).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Sep 2002 09:24:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue "
},
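The two "interesting" cases Tom identifies can be sketched as follows (hypothetical table, view, and rule names, not from the thread):

```sql
-- Case 1: the INSTEAD action is a different command type than the
-- original statement (ON UPDATE DO INSTEAD INSERT):
CREATE RULE archive_upd AS ON UPDATE TO current_items DO INSTEAD
    INSERT INTO item_history VALUES (OLD.id, NEW.price, now());
-- An UPDATE on current_items performs only an INSERT.  Should the tag
-- read "UPDATE n", "INSERT 0 n", or something else?

-- Case 2: a single INSTEAD rule with multiple commands in its body:
CREATE RULE split_del AS ON DELETE TO current_items DO INSTEAD (
    INSERT INTO item_history VALUES (OLD.id, OLD.price, now());
    UPDATE stats SET deletions = deletions + 1;
);
-- One DELETE performs an INSERT and an UPDATE with potentially
-- different row counts; which count is "the" result of the statement?
```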
{
"msg_contents": "\nOn Tue, 10 Sep 2002, Tom Lane wrote:\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > On Mon, 9 Sep 2002, Bruce Momjian wrote:\n> >> All the problems here are coming from INSTEAD rules. We don't have\n> >> INSTEAD triggers or contraints.\n>\n> > Sure we do, well sort of. :)\n> > Make a before trigger that does a different statement and returns NULL\n> > to abort the original action on that row.\n>\n> I think we can reasonably leave the side-effects of triggers out of the\n> discussion. PQcmdStatus numbers have never included side-effects of\n> triggers in the past, and I see no reason for them to start now.\n>\n\nI agree, I was just commenting on the instead trigger comment.\n\n",
"msg_date": "Tue, 10 Sep 2002 08:25:10 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue "
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Jan Wieck wrote:\n> > We should surely keep this on a much more technical level and avoid any\n> > personal offendings. To do so, please explain to me why you think that\n> > triggers and constraints are out of focus here? What is the difference\n> > between a trigger, a rule and an instead rule from a business process\n> > oriented point of view? I think there is none at all. They are just\n> > different techniques to do one and the same, implement business logic in\n> > the database system.\n> \n> All the problems here are coming from INSTEAD rules. We don't have\n> INSTEAD triggers or contraints.\n\nSo a BEFORE INSERT trigger on table1 that does an UPDATE to table2 and\nthen returns NULL is not effectively the same as an ON INSERT ... DO\nINSTEAD UPDATE ... rule? Hmmm, the end result is exactly the same so\nwhat do you call it?\n\nI think we will have no chance to really return the number of\nVIEW-tuples affected. So any implementation is only a guess and we could\nsimply return fixed 42 if \"some\" tuples where affected at all. This\nreturn is as wrong (according to Steve) as everything else but at least\nwe have a clear definition what it means.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Tue, 10 Sep 2002 17:27:32 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "Jan Wieck wrote:\n> Bruce Momjian wrote:\n> > \n> > Jan Wieck wrote:\n> > > We should surely keep this on a much more technical level and avoid any\n> > > personal offendings. To do so, please explain to me why you think that\n> > > triggers and constraints are out of focus here? What is the difference\n> > > between a trigger, a rule and an instead rule from a business process\n> > > oriented point of view? I think there is none at all. They are just\n> > > different techniques to do one and the same, implement business logic in\n> > > the database system.\n> > \n> > All the problems here are coming from INSTEAD rules. We don't have\n> > INSTEAD triggers or contraints.\n> \n> So a BEFORE INSERT trigger on table1 that does an UPDATE to table2 and\n> then returns NULL is not effectively the same as an ON INSERT ... DO\n> INSTEAD UPDATE ... rule? Hmmm, the end result is exactly the same so\n> what do you call it?\n\nWell, yes, functionally it is the same and we would have trouble dealing\nwith that too. I didn't know you could NULL return from a trigger and\nit would exit the statement.\n\n> I think we will have no chance to really return the number of\n> VIEW-tuples affected. So any implementation is only a guess and we could\n> simply return fixed 42 if \"some\" tuples where affected at all. This\n> return is as wrong (according to Steve) as everything else but at least\n> we have a clear definition what it means.\n\nYes, my guess is that accumulating everything with the same tags is the\nclosest we are going to get and does return the proper values in simple\nmulti-statement INSTEAD rules.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 10 Sep 2002 18:32:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "Jan Wieck writes:\n\n> I think we will have no chance to really return the number of\n> VIEW-tuples affected. So any implementation is only a guess and we could\n> simply return fixed 42 if \"some\" tuples where affected at all. This\n> return is as wrong (according to Steve) as everything else but at least\n> we have a clear definition what it means.\n\nMaybe we should return something to the effect of \"unknown, but something\nhappened\". I can see that returning 0 in case of doubt might confuse\napplications.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 11 Sep 2002 21:59:10 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
}
] |
[
{
"msg_contents": "From the Department of Redundancy Department:\n\nAttached is a perl script called 'pguniqchk'. It checks the uniqueness\nof unique constraints on tables in a PostgreSQL database using the\nPG_TABLES and PG_INDEXES system \"tables\".\n\nWhy would this be useful?\n\nIf you're planning to dump and restore the database, this might be a\ngood sanity check to run before doing it.\n\nApparently, when such an impossible event occurs, the unique index on\nthe table only \"sees\" one of the duplicate rows. In order to even query\nboth rows, one must run this SQL command (via psql) to turn off index\nscans:\n\n => set enable_indexscan = off;\n\nThe attached script does this, then verifies the uniqueness of the\nunique index by scanning the entire table.\n\nIt is probably useless for 99.999% of PostgreSQL users, but I thought\nI'd share it just in case someone finds it useful, even if only\nas a simple example of querying system tables.\n\nHow I found the problem:\n\nI had a need to alter the data types of a column on two different tables\n(VARCHAR(32) -> VARCHAR(128) and VARCHAR(128) -> TEXT) and drop a column\nfrom another table. The only way to do this in v7.1.x is to perform a\nfull dump and then restore. When I tried to reload the data, I got\nunique key violation errors, and data for two other tables did not load.\n\nAs it turns out, one table had a single pair of duplicate keys while the\nother table had five pair of duplicates and one set of triplicates.\n\nThe incident happened around April 05, 2002 (from what I can tell of\nthe duplicated data), but hasn't happened since. I was having SCSI\ndisk errors around that time on my production server, which is the prime\nsuspect.\n\nNOTES:\n\n- Only tested on PostgreSQL 7.1.3.\n\n- When a UNIQUE INDEX is put on a NULLABLE column, duplicates with NULL\n values are possible. This is a feature, though the script does not\n check for this case (so don't be alarmed if it finds something).\n\n 7.4. Unique Indexes\n http://www.postgresql.org/idocs/index.php?indexes-unique.html\n\nDave",
"msg_date": "Fri, 6 Sep 2002 16:06:24 -0500",
"msg_from": "\"David D. Kilzer\" <ddkilzer@lubricants-oil.com>",
"msg_from_op": true,
"msg_subject": "[SCRIPT] pguniqchk -- checks uniqueness of unique constraints on\n\ttables"
}
] |
[
{
"msg_contents": "In testing the new 7.3 prepared statement functionality I have come\nacross some findings that I cannot explain. I was testing using PREPARE\nfor a fairly complex sql statement that gets used frequently in my\napplication. I used the timing information from:\nshow_parser_stats = true\nshow_planner_stats = true\nshow_executor_stats = true\n\nThe timing information showed that 60% of time was in the parse and\nplanning, and 40% was in the execute for the original statement. This\nindicated that this statement was a good candidate for using the new\nPREPARE functionality.\n\nNow for the strange part. When looking at the execute timings as shown\nby 'show_executor_stats' under three different scenarios I see:\nregular execute = 787ms (regular sql execution, not using prepare at\nall)\nprepare execute = 737ms (execution of a prepared statement via\nEXECUTE with no bind variable, all values are hardcoded into the\nprepared sql statement)\nprepare/bind execute = 693ms (same as above, but using bind variables)\n\nThese results were consistent across multiple runs. I don't understand\nwhy the timings for prepared statements would be less than for a regular\nstatement, and especially why using bind variables would be better than\nwithout. I am concerned that prepared statements may be choosing a\ndifferent execution plan than non-prepared statements. But I am not\nsure how to find out what the execution plan is for a prepared\nstatement, since EXPLAIN doesn't work for a prepared statement (i.e.\nEXPLAIN EXECUTE <preparedStatementName> doesn't work).\n\nI like the fact that the timings are better in this particular case\n(up to 12% better), but since I don't understand why that is, I am\nconcerned that under different circumstances they may be worse. Can\nanyone shed some light on this?\n\nthanks,\n--Barry\n\n\n\n\n",
"msg_date": "Fri, 06 Sep 2002 16:30:15 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": true,
"msg_subject": "Interesting results using new prepared statements"
},
{
"msg_contents": "Barry Lind <barry@xythos.com> writes:\n> ... I don't understand\n> why the timings for prepared statements would be less than for a regular\n> statement, and especially why using bind variables would be better than\n> without. I am concerned that prepared statements may be choosing a\n> different execution plan than non-prepared statements.\n\nThat's entirely likely if you are using bind variables in the prepared\nstatements, since the planner will not have access to the same constant\nvalues that it does in a plain SQL statement --- for example, \"WHERE foo\n= $1\" looks a lot different from \"WHERE foo = 42\" to the planner.\n\nIn most cases I'd expect the planner to generate worse plans when given\nless info :-( ... but in your particular case it seems to be guessing\nslightly wrong.\n\n> But I am not\n> sure how to find out what the execution plan is for a prepared\n> statement, since EXPLAIN doesn't work for a prepared statement (i.e.\n> EXPLAIN EXECUTE <preparedStatementName>, doesn't work).\n\nHmmm --- I can see the usefulness of that, but it looks like a new\nfeature and hence verboten during beta. Maybe a TODO for 7.4?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Sep 2002 16:12:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Interesting results using new prepared statements "
},
{
"msg_contents": "Tom Lane wrote:\n> > But I am not\n> > sure how to find out what the execution plan is for a prepared\n> > statement, since EXPLAIN doesn't work for a prepared statement (i.e.\n> > EXPLAIN EXECUTE <preparedStatementName>, doesn't work).\n> \n> Hmmm --- I can see the usefulness of that, but it looks like a new\n> feature and hence verboten during beta. Maybe a TODO for 7.4?\n\nAdded to TODO:\n\n\to Allow EXPLAIN EXECUTE to see prepared plans\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 18 Sep 2002 00:09:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Interesting results using new prepared statements"
}
] |
[
{
"msg_contents": "Hello everyone.\n While studying the system catalog, I couldn't understand the concept of \"operator strategies for an access method\". Can anyone tell me what that is and where I can find documentation on the Web? Thanks very much for your responses, especially to Hannu Krosing.\n\n Guo longjiang Harbin China\n",
"msg_date": "Sat, 07 Sep 2002 16:18:03 +0800",
"msg_from": "ljguo_1234 <ljguo_1234@sina.com>",
"msg_from_op": true,
"msg_subject": "Operator strategies for an access method"
}
] |
[
{
"msg_contents": "\nNow I'm testing connectby() in the /contrib/tablefunc in 7.3b1, which would\nbe a useful function for many users. However, I found the fact that\nif connectby_tree has the following data, connectby() tries to search the end\nof roots without knowing that the relations are infinite(-5-9-10-11-9-10-11-) .\nI hope connectby() supports a check routine to find infinite relations. \n\n\nCREATE TABLE connectby_tree(keyid int, parent_keyid int);\nINSERT INTO connectby_tree VALUES(1,NULL);\nINSERT INTO connectby_tree VALUES(2,1);\nINSERT INTO connectby_tree VALUES(3,1);\nINSERT INTO connectby_tree VALUES(4,2);\nINSERT INTO connectby_tree VALUES(5,2);\nINSERT INTO connectby_tree VALUES(6,4);\nINSERT INTO connectby_tree VALUES(7,3);\nINSERT INTO connectby_tree VALUES(8,6);\nINSERT INTO connectby_tree VALUES(9,5);\n\nINSERT INTO connectby_tree VALUES(10,9);\nINSERT INTO connectby_tree VALUES(11,10);\nINSERT INTO connectby_tree VALUES(9,11); <-- infinite\n\n\n\nRegards,\nMasaru Sugawara\n\n\n",
"msg_date": "Sat, 07 Sep 2002 21:41:43 +0900",
"msg_from": "Masaru Sugawara <rk73@sea.plala.or.jp>",
"msg_from_op": true,
"msg_subject": "About connectby()"
},
{
"msg_contents": "Masaru Sugawara wrote:\n> Now I'm testing connectby() in the /contrib/tablefunc in 7.3b1, which would\n> be a useful function for many users. However, I found the fact that\n> if connectby_tree has the following data, connectby() tries to search the end\n> of roots without knowing that the relations are infinite(-5-9-10-11-9-10-11-) .\n> I hope connectby() supports a check routine to find infinite relations. \n> \n> \n> CREATE TABLE connectby_tree(keyid int, parent_keyid int);\n> INSERT INTO connectby_tree VALUES(1,NULL);\n> INSERT INTO connectby_tree VALUES(2,1);\n> INSERT INTO connectby_tree VALUES(3,1);\n> INSERT INTO connectby_tree VALUES(4,2);\n> INSERT INTO connectby_tree VALUES(5,2);\n> INSERT INTO connectby_tree VALUES(6,4);\n> INSERT INTO connectby_tree VALUES(7,3);\n> INSERT INTO connectby_tree VALUES(8,6);\n> INSERT INTO connectby_tree VALUES(9,5);\n> \n> INSERT INTO connectby_tree VALUES(10,9);\n> INSERT INTO connectby_tree VALUES(11,10);\n> INSERT INTO connectby_tree VALUES(9,11); <-- infinite\n\nHmm, good point. I can think of two ways to deal with this:\n1. impose an arbitrary absolute limit on recursion depth\n2. perform a relatively expensive ancestor check\n\nI didn't really want to do #1. You can already use max_depth to cap off \ninfinite recursion:\n\ntest=# SELECT * FROM connectby('connectby_tree', 'keyid', \n'parent_keyid', '2', 8, '~') AS t(keyid int, parent_keyid int, level \nint, branch text);\n keyid | parent_keyid | level | branch\n-------+--------------+-------+-----------------------\n 2 | | 0 | 2\n 4 | 2 | 1 | 2~4\n 6 | 4 | 2 | 2~4~6\n 8 | 6 | 3 | 2~4~6~8\n 5 | 2 | 1 | 2~5\n 9 | 5 | 2 | 2~5~9\n 10 | 9 | 3 | 2~5~9~10\n 11 | 10 | 4 | 2~5~9~10~11\n 9 | 11 | 5 | 2~5~9~10~11~9\n 10 | 9 | 6 | 2~5~9~10~11~9~10\n 11 | 10 | 7 | 2~5~9~10~11~9~10~11\n 9 | 11 | 8 | 2~5~9~10~11~9~10~11~9\n(12 rows)\n\nI guess it would be better to look for repeating values in branch and \nbail out there. I'm just a bit worried about the added processing \noverhead. It also means branch will have to be built, even if it is not \nreturned, eliminating the efficiency gain of using the function without \nreturning branch.\n\nAny other suggestions?\n\nThanks,\n\nJoe\n\n",
"msg_date": "Sat, 07 Sep 2002 08:35:20 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: About connectby()"
},
{
"msg_contents": "Masaru Sugawara wrote:\n> Now I'm testing connectby() in the /contrib/tablefunc in 7.3b1, which would\n> be a useful function for many users. However, I found the fact that\n> if connectby_tree has the following data, connectby() tries to search the end\n> of roots without knowing that the relations are infinite(-5-9-10-11-9-10-11-) .\n> I hope connectby() supports a check routine to find infinite relations. \n> \n> \n> CREATE TABLE connectby_tree(keyid int, parent_keyid int);\n> INSERT INTO connectby_tree VALUES(1,NULL);\n> INSERT INTO connectby_tree VALUES(2,1);\n> INSERT INTO connectby_tree VALUES(3,1);\n> INSERT INTO connectby_tree VALUES(4,2);\n> INSERT INTO connectby_tree VALUES(5,2);\n> INSERT INTO connectby_tree VALUES(6,4);\n> INSERT INTO connectby_tree VALUES(7,3);\n> INSERT INTO connectby_tree VALUES(8,6);\n> INSERT INTO connectby_tree VALUES(9,5);\n> \n> INSERT INTO connectby_tree VALUES(10,9);\n> INSERT INTO connectby_tree VALUES(11,10);\n> INSERT INTO connectby_tree VALUES(9,11); <-- infinite\n> \n\nThe attached patch fixes the infinite recursion bug in \ncontrib/tablefunc/tablefunc.c:connectby found by Masaru Sugawara.\n\ntest=# SELECT * FROM connectby('connectby_tree', 'keyid', \n'parent_keyid', '2', 4, '~') AS t(keyid int, parent_keyid int, level \nint, branch text);\n keyid | parent_keyid | level | branch\n-------+--------------+-------+-------------\n 2 | | 0 | 2\n 4 | 2 | 1 | 2~4\n 6 | 4 | 2 | 2~4~6\n 8 | 6 | 3 | 2~4~6~8\n 5 | 2 | 1 | 2~5\n 9 | 5 | 2 | 2~5~9\n 10 | 9 | 3 | 2~5~9~10\n 11 | 10 | 4 | 2~5~9~10~11\n(8 rows)\n\ntest=# SELECT * FROM connectby('connectby_tree', 'keyid', \n'parent_keyid', '2', 5, '~') AS t(keyid int, parent_keyid int, level \nint, branch text);\nERROR: infinite recursion detected\n\nI implemented it by checking the branch string for repeated keys \n(whether or not the branch is returned). The performance hit was pretty \nminimal -- about 1% for a moderately complex test case (220000 record \ntable, 9 level tree with 3800 members).\n\nPlease apply.\n\nThanks,\n\nJoe",
"msg_date": "Sat, 07 Sep 2002 10:21:21 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] About connectby()"
},
{
"msg_contents": "Masaru Sugawara wrote:\n> Now I'm testing connectby() in the /contrib/tablefunc in 7.3b1, which would\n> be a useful function for many users. However, I found the fact that\n> if connectby_tree has the following data, connectby() tries to search the end\n> of roots without knowing that the relations are infinite(-5-9-10-11-9-10-11-) .\n> I hope connectby() supports a check routine to find infinite relations. \n> \n> \n> CREATE TABLE connectby_tree(keyid int, parent_keyid int);\n> INSERT INTO connectby_tree VALUES(1,NULL);\n> INSERT INTO connectby_tree VALUES(2,1);\n> INSERT INTO connectby_tree VALUES(3,1);\n> INSERT INTO connectby_tree VALUES(4,2);\n> INSERT INTO connectby_tree VALUES(5,2);\n> INSERT INTO connectby_tree VALUES(6,4);\n> INSERT INTO connectby_tree VALUES(7,3);\n> INSERT INTO connectby_tree VALUES(8,6);\n> INSERT INTO connectby_tree VALUES(9,5);\n> \n> INSERT INTO connectby_tree VALUES(10,9);\n> INSERT INTO connectby_tree VALUES(11,10);\n> INSERT INTO connectby_tree VALUES(9,11); <-- infinite\n> \n\nOK -- patch submitted to fix this. Once the patch is applied, this case \ngives:\n\ntest=# SELECT * FROM connectby('connectby_tree', 'keyid', \n'parent_keyid', '2', 0, '~') AS t(keyid int, parent_keyid int, level \nint, branch text);\nERROR: infinite recursion detected\n\nIf you specifically limit the depth to less than where the repeated key \nis hit, everything works as before:\n\ntest=# SELECT * FROM connectby('connectby_tree', 'keyid', \n'parent_keyid', '2', 4, '~') AS t(keyid int, parent_keyid int, level \nint, branch text);\n keyid | parent_keyid | level | branch\n-------+--------------+-------+-------------\n 2 | | 0 | 2\n 4 | 2 | 1 | 2~4\n 6 | 4 | 2 | 2~4~6\n 8 | 6 | 3 | 2~4~6~8\n 5 | 2 | 1 | 2~5\n 9 | 5 | 2 | 2~5~9\n 10 | 9 | 3 | 2~5~9~10\n 11 | 10 | 4 | 2~5~9~10~11\n(8 rows)\n\nThanks for the feedback!\n\nJoe\n\n",
"msg_date": "Sat, 07 Sep 2002 10:26:36 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: About connectby()"
},
{
"msg_contents": "I prefer the max depth method. Every tree I am aware of has a maximum usable \ndepth.\n\nThis should never be a problem in trees where keyid is unique.\n\nOn Saturday 07 September 2002 10:35 am, (Via wrote:\n> Masaru Sugawara wrote:\n> > Now I'm testing connectby() in the /contrib/tablefunc in 7.3b1, which\n> > would be a useful function for many users. However, I found the fact\n> > that if connectby_tree has the following data, connectby() tries to\n> > search the end of roots without knowing that the relations are\n> > infinite(-5-9-10-11-9-10-11-) . I hope connectby() supports a check\n> > routine to find infinite relations.\n> >\n> >\n> > CREATE TABLE connectby_tree(keyid int, parent_keyid int);\n> > INSERT INTO connectby_tree VALUES(1,NULL);\n> > INSERT INTO connectby_tree VALUES(2,1);\n> > INSERT INTO connectby_tree VALUES(3,1);\n> > INSERT INTO connectby_tree VALUES(4,2);\n> > INSERT INTO connectby_tree VALUES(5,2);\n> > INSERT INTO connectby_tree VALUES(6,4);\n> > INSERT INTO connectby_tree VALUES(7,3);\n> > INSERT INTO connectby_tree VALUES(8,6);\n> > INSERT INTO connectby_tree VALUES(9,5);\n> >\n> > INSERT INTO connectby_tree VALUES(10,9);\n> > INSERT INTO connectby_tree VALUES(11,10);\n> > INSERT INTO connectby_tree VALUES(9,11); <-- infinite\n>\n> Hmm, good point. I can think of two ways to deal with this:\n> 1. impose an arbitrary absolute limit on recursion depth\n> 2. perform a relatively expensive ancestor check\n>\n> I didn't really want to do #1. You can already use max_depth to cap off\n> infinite recursion:\n>\n> test=# SELECT * FROM connectby('connectby_tree', 'keyid',\n> 'parent_keyid', '2', 8, '~') AS t(keyid int, parent_keyid int, level\n> int, branch text);\n> keyid | parent_keyid | level | branch\n> -------+--------------+-------+-----------------------\n> 2 | | 0 | 2\n> 4 | 2 | 1 | 2~4\n> 6 | 4 | 2 | 2~4~6\n> 8 | 6 | 3 | 2~4~6~8\n> 5 | 2 | 1 | 2~5\n> 9 | 5 | 2 | 2~5~9\n> 10 | 9 | 3 | 2~5~9~10\n> 11 | 10 | 4 | 2~5~9~10~11\n> 9 | 11 | 5 | 2~5~9~10~11~9\n> 10 | 9 | 6 | 2~5~9~10~11~9~10\n> 11 | 10 | 7 | 2~5~9~10~11~9~10~11\n> 9 | 11 | 8 | 2~5~9~10~11~9~10~11~9\n> (12 rows)\n>\n> I guess it would be better to look for repeating values in branch and\n> bail out there. I'm just a bit worried about the added processing\n> overhead. It also means branch will have to be built, even if it is not\n> returned, eliminating the efficiency gain of using the function without\n> returning branch.\n>\n> Any other suggestions?\n>\n> Thanks,\n>\n> Joe\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n",
"msg_date": "Sat, 7 Sep 2002 12:27:15 -0500",
"msg_from": "David Walker <pgsql@grax.com>",
"msg_from_op": false,
"msg_subject": "Re: About connectby()"
},
{
"msg_contents": "David Walker wrote:\n> I prefer the max depth method. Every tree I am aware of has a maximum usable \n> depth.\n> \n> This should never be a problem in trees where keyid is unique.\n> \n\nI just sent in a patch using the ancestor check method. It turned out \nthat the performance hit was pretty small on a moderate sized tree.\n\nMy test case was a 220000 record bill-of-material table. The tree built \nwas 9 levels deep with about 3800 nodes. The performance hit was only \nabout 1%.\n\nAre there cases where infinite recursion to some max depth *should* be \nallowed? I couldn't think of any. If a max depth was imposed, what \nshould it be?\n\nJoe\n\n",
"msg_date": "Sat, 07 Sep 2002 10:34:07 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: About connectby()"
},
{
"msg_contents": "On Sat, 07 Sep 2002 10:26:36 -0700\nJoe Conway <mail@joeconway.com> wrote:\n\n> \n> OK -- patch submitted to fix this. Once the patch is applied, this case \n> gives:\n> \n> test=# SELECT * FROM connectby('connectby_tree', 'keyid', \n> 'parent_keyid', '2', 0, '~') AS t(keyid int, parent_keyid int, level \n> int, branch text);\n> ERROR: infinite recursion detected\n\n\n Thank you for your patch.\n\n\n> \n> If you specifically limit the depth to less than where the repeated key \n> is hit, everything works as before:\n\n\n And I also think this approach is reasonable.\n\n\n> \n> test=# SELECT * FROM connectby('connectby_tree', 'keyid', \n> 'parent_keyid', '2', 4, '~') AS t(keyid int, parent_keyid int, level \n> int, branch text);\n> keyid | parent_keyid | level | branch\n> -------+--------------+-------+-------------\n> 2 | | 0 | 2\n> 4 | 2 | 1 | 2~4\n> 6 | 4 | 2 | 2~4~6\n> 8 | 6 | 3 | 2~4~6~8\n> 5 | 2 | 1 | 2~5\n> 9 | 5 | 2 | 2~5~9\n> 10 | 9 | 3 | 2~5~9~10\n> 11 | 10 | 4 | 2~5~9~10~11\n> (8 rows)\n> \n> Thanks for the feedback!\n> \n> Joe\n> \n> \n\nRegards,\nMasaru Sugawara\n\n\n",
"msg_date": "Sun, 08 Sep 2002 22:35:12 +0900",
"msg_from": "Masaru Sugawara <rk73@sea.plala.or.jp>",
"msg_from_op": true,
"msg_subject": "Re: About connectby()"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> Masaru Sugawara wrote:\n> > Now I'm testing connectby() in the /contrib/tablefunc in 7.3b1, which would\n> > be a useful function for many users. However, I found the fact that\n> > if connectby_tree has the following data, connectby() tries to search the end\n> > of roots without knowing that the relations are infinite(-5-9-10-11-9-10-11-) .\n> > I hope connectby() supports a check routine to find infinite relations. \n> > \n> > \n> > CREATE TABLE connectby_tree(keyid int, parent_keyid int);\n> > INSERT INTO connectby_tree VALUES(1,NULL);\n> > INSERT INTO connectby_tree VALUES(2,1);\n> > INSERT INTO connectby_tree VALUES(3,1);\n> > INSERT INTO connectby_tree VALUES(4,2);\n> > INSERT INTO connectby_tree VALUES(5,2);\n> > INSERT INTO connectby_tree VALUES(6,4);\n> > INSERT INTO connectby_tree VALUES(7,3);\n> > INSERT INTO connectby_tree VALUES(8,6);\n> > INSERT INTO connectby_tree VALUES(9,5);\n> > \n> > INSERT INTO connectby_tree VALUES(10,9);\n> > INSERT INTO connectby_tree VALUES(11,10);\n> > INSERT INTO connectby_tree VALUES(9,11); <-- infinite\n> > \n> \n> The attached patch fixes the infinite recursion bug in \n> contrib/tablefunc/tablefunc.c:connectby found by Masaru Sugawara.\n> \n> test=# SELECT * FROM connectby('connectby_tree', 'keyid', \n> 'parent_keyid', '2', 4, '~') AS t(keyid int, parent_keyid int, level \n> int, branch text);\n> keyid | parent_keyid | level | branch\n> -------+--------------+-------+-------------\n> 2 | | 0 | 2\n> 4 | 2 | 1 | 2~4\n> 6 | 4 | 2 | 2~4~6\n> 8 | 6 | 3 | 2~4~6~8\n> 5 | 2 | 1 | 2~5\n> 9 | 5 | 2 | 2~5~9\n> 10 | 9 | 3 | 2~5~9~10\n> 11 | 10 | 4 | 2~5~9~10~11\n> (8 rows)\n> \n> test=# SELECT * FROM connectby('connectby_tree', 'keyid', \n> 'parent_keyid', '2', 5, '~') AS t(keyid int, parent_keyid int, level \n> int, branch text);\n> ERROR: infinite recursion detected\n> \n> I implemented it by checking the branch string for repeated keys \n> (whether or not the branch is returned). The performance hit was pretty \n> minimal -- about 1% for a moderately complex test case (220000 record \n> table, 9 level tree with 3800 members).\n> \n> Please apply.\n> \n> Thanks,\n> \n> Joe\n\n> Index: contrib/tablefunc/tablefunc.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/contrib/tablefunc/tablefunc.c,v\n> retrieving revision 1.7\n> diff -c -r1.7 tablefunc.c\n> *** contrib/tablefunc/tablefunc.c\t5 Sep 2002 00:43:06 -0000\t1.7\n> --- contrib/tablefunc/tablefunc.c\t7 Sep 2002 16:28:50 -0000\n> ***************\n> *** 801,806 ****\n> --- 801,810 ----\n> \t\tchar\t\tcurrent_level[INT32_STRLEN];\n> \t\tchar\t *current_branch;\n> \t\tchar\t **values;\n> + \t\tStringInfo\tbranchstr = NULL;\n> + \n> + \t\t/* start a new branch */\n> + \t\tbranchstr = makeStringInfo();\n> \n> \t\tif (show_branch)\n> \t\t\tvalues = (char **) palloc(CONNECTBY_NCOLS * sizeof(char *));\n> ***************\n> *** 852,865 ****\n> \n> \t\tfor (i = 0; i < proc; i++)\n> \t\t{\n> ! \t\t\tStringInfo\tbranchstr = NULL;\n> ! \n> ! \t\t\t/* start a new branch */\n> ! \t\t\tif (show_branch)\n> ! \t\t\t{\n> ! \t\t\t\tbranchstr = makeStringInfo();\n> ! \t\t\t\tappendStringInfo(branchstr, \"%s\", branch);\n> ! \t\t\t}\n> \n> \t\t\t/* get the next sql result tuple */\n> \t\t\tspi_tuple = tuptable->vals[i];\n> --- 856,863 ----\n> \n> \t\tfor (i = 0; i < proc; i++)\n> \t\t{\n> ! \t\t\t/* initialize branch for this pass */\n> ! \t\t\tappendStringInfo(branchstr, \"%s\", branch);\n> \n> \t\t\t/* get the next sql result tuple */\n> \t\t\tspi_tuple = tuptable->vals[i];\n> ***************\n> *** 868,884 ****\n> \t\t\tcurrent_key = SPI_getvalue(spi_tuple, spi_tupdesc, 1);\n> \t\t\tcurrent_key_parent = pstrdup(SPI_getvalue(spi_tuple, spi_tupdesc, 2));\n> \n> \t\t\t/* get the current level */\n> \t\t\tsprintf(current_level, \"%d\", level);\n> \n> \t\t\t/* extend the branch */\n> ! \t\t\tif (show_branch)\n> ! \t\t\t{\n> ! \t\t\t\tappendStringInfo(branchstr, \"%s%s\", branch_delim, current_key);\n> ! \t\t\t\tcurrent_branch = branchstr->data;\n> ! \t\t\t}\n> ! \t\t\telse\n> ! \t\t\t\tcurrent_branch = NULL;\n> \n> \t\t\t/* build a tuple */\n> \t\t\tvalues[0] = pstrdup(current_key);\n> --- 866,881 ----\n> \t\t\tcurrent_key = SPI_getvalue(spi_tuple, spi_tupdesc, 1);\n> \t\t\tcurrent_key_parent = pstrdup(SPI_getvalue(spi_tuple, spi_tupdesc, 2));\n> \n> + \t\t\t/* check to see if this key is also an ancestor */\n> + \t\t\tif (strstr(branchstr->data, current_key))\n> + \t\t\t\telog(ERROR, \"infinite recursion detected\");\n> + \n> \t\t\t/* get the current level */\n> \t\t\tsprintf(current_level, \"%d\", level);\n> \n> \t\t\t/* extend the branch */\n> ! \t\t\tappendStringInfo(branchstr, \"%s%s\", branch_delim, current_key);\n> ! \t\t\tcurrent_branch = branchstr->data;\n> \n> \t\t\t/* build a tuple */\n> \t\t\tvalues[0] = pstrdup(current_key);\n> ***************\n> *** 916,921 ****\n> --- 913,922 ----\n> \t\t\t\t\t\t\t\t\t\t\t\t\tper_query_ctx,\n> \t\t\t\t\t\t\t\t\t\t\t\t\tattinmeta,\n> \t\t\t\t\t\t\t\t\t\t\t\t\ttupstore);\n> + \n> + \t\t\t/* reset branch for next pass */\n> + \t\t\txpfree(branchstr->data);\n> + \t\t\tinitStringInfo(branchstr);\n> \t\t}\n> \t}\n> \n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 11 Sep 2002 00:04:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] About connectby()"
},
{
"msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> Masaru Sugawara wrote:\n> > Now I'm testing connectby() in the /contrib/tablefunc in 7.3b1, which would\n> > be a useful function for many users. However, I found the fact that\n> > if connectby_tree has the following data, connectby() tries to search the end\n> > of roots without knowing that the relations are infinite(-5-9-10-11-9-10-11-) .\n> > I hope connectby() supports a check routine to find infinite relations. \n> > \n> > \n> > CREATE TABLE connectby_tree(keyid int, parent_keyid int);\n> > INSERT INTO connectby_tree VALUES(1,NULL);\n> > INSERT INTO connectby_tree VALUES(2,1);\n> > INSERT INTO connectby_tree VALUES(3,1);\n> > INSERT INTO connectby_tree VALUES(4,2);\n> > INSERT INTO connectby_tree VALUES(5,2);\n> > INSERT INTO connectby_tree VALUES(6,4);\n> > INSERT INTO connectby_tree VALUES(7,3);\n> > INSERT INTO connectby_tree VALUES(8,6);\n> > INSERT INTO connectby_tree VALUES(9,5);\n> > \n> > INSERT INTO connectby_tree VALUES(10,9);\n> > INSERT INTO connectby_tree VALUES(11,10);\n> > INSERT INTO connectby_tree VALUES(9,11); <-- infinite\n> > \n> \n> The attached patch fixes the infinite recursion bug in \n> contrib/tablefunc/tablefunc.c:connectby found by Masaru Sugawara.\n> \n> test=# SELECT * FROM connectby('connectby_tree', 'keyid', \n> 'parent_keyid', '2', 4, '~') AS t(keyid int, parent_keyid int, level \n> int, branch text);\n> keyid | parent_keyid | level | branch\n> -------+--------------+-------+-------------\n> 2 | | 0 | 2\n> 4 | 2 | 1 | 2~4\n> 6 | 4 | 2 | 2~4~6\n> 8 | 6 | 3 | 2~4~6~8\n> 5 | 2 | 1 | 2~5\n> 9 | 5 | 2 | 2~5~9\n> 10 | 9 | 3 | 2~5~9~10\n> 11 | 10 | 4 | 2~5~9~10~11\n> (8 rows)\n> \n> test=# SELECT * FROM connectby('connectby_tree', 'keyid', \n> 'parent_keyid', '2', 5, '~') AS t(keyid int, parent_keyid int, level \n> int, branch text);\n> ERROR: infinite recursion detected\n> \n> I implemented it by checking the branch string for repeated keys \n> (whether or not the branch is returned). The performance hit was pretty \n> minimal -- about 1% for a moderately complex test case (220000 record \n> table, 9 level tree with 3800 members).\n> \n> Please apply.\n> \n> Thanks,\n> \n> Joe\n\n> Index: contrib/tablefunc/tablefunc.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/contrib/tablefunc/tablefunc.c,v\n> retrieving revision 1.7\n> diff -c -r1.7 tablefunc.c\n> *** contrib/tablefunc/tablefunc.c\t5 Sep 2002 00:43:06 -0000\t1.7\n> --- contrib/tablefunc/tablefunc.c\t7 Sep 2002 16:28:50 -0000\n> ***************\n> *** 801,806 ****\n> --- 801,810 ----\n> \t\tchar\t\tcurrent_level[INT32_STRLEN];\n> \t\tchar\t *current_branch;\n> \t\tchar\t **values;\n> + \t\tStringInfo\tbranchstr = NULL;\n> + \n> + \t\t/* start a new branch */\n> + \t\tbranchstr = makeStringInfo();\n> \n> \t\tif (show_branch)\n> \t\t\tvalues = (char **) palloc(CONNECTBY_NCOLS * sizeof(char *));\n> ***************\n> *** 852,865 ****\n> \n> \t\tfor (i = 0; i < proc; i++)\n> \t\t{\n> ! \t\t\tStringInfo\tbranchstr = NULL;\n> ! \n> ! \t\t\t/* start a new branch */\n> ! \t\t\tif (show_branch)\n> ! \t\t\t{\n> ! \t\t\t\tbranchstr = makeStringInfo();\n> ! \t\t\t\tappendStringInfo(branchstr, \"%s\", branch);\n> ! \t\t\t}\n> \n> \t\t\t/* get the next sql result tuple */\n> \t\t\tspi_tuple = tuptable->vals[i];\n> --- 856,863 ----\n> \n> \t\tfor (i = 0; i < proc; i++)\n> \t\t{\n> ! \t\t\t/* initialize branch for this pass */\n> ! \t\t\tappendStringInfo(branchstr, \"%s\", branch);\n> \n> \t\t\t/* get the next sql result tuple */\n> \t\t\tspi_tuple = tuptable->vals[i];\n> ***************\n> *** 868,884 ****\n> \t\t\tcurrent_key = SPI_getvalue(spi_tuple, spi_tupdesc, 1);\n> \t\t\tcurrent_key_parent = pstrdup(SPI_getvalue(spi_tuple, spi_tupdesc, 2));\n> \n> \t\t\t/* get the current level */\n> \t\t\tsprintf(current_level, \"%d\", level);\n> \n> \t\t\t/* extend the branch */\n> ! \t\t\tif (show_branch)\n> ! \t\t\t{\n> ! \t\t\t\tappendStringInfo(branchstr, \"%s%s\", branch_delim, current_key);\n> ! \t\t\t\tcurrent_branch = branchstr->data;\n> ! \t\t\t}\n> ! \t\t\telse\n> ! \t\t\t\tcurrent_branch = NULL;\n> \n> \t\t\t/* build a tuple */\n> \t\t\tvalues[0] = pstrdup(current_key);\n> --- 866,881 ----\n> \t\t\tcurrent_key = SPI_getvalue(spi_tuple, spi_tupdesc, 1);\n> \t\t\tcurrent_key_parent = pstrdup(SPI_getvalue(spi_tuple, spi_tupdesc, 2));\n> \n> + \t\t\t/* check to see if this key is also an ancestor */\n> + \t\t\tif (strstr(branchstr->data, current_key))\n> + \t\t\t\telog(ERROR, \"infinite recursion detected\");\n> + \n> \t\t\t/* get the current level */\n> \t\t\tsprintf(current_level, \"%d\", level);\n> \n> \t\t\t/* extend the branch */\n> ! \t\t\tappendStringInfo(branchstr, \"%s%s\", branch_delim, current_key);\n> ! 
\t\t\tcurrent_branch = branchstr->data;\n> \n> \t\t\t/* build a tuple */\n> \t\t\tvalues[0] = pstrdup(current_key);\n> ***************\n> *** 916,921 ****\n> --- 913,922 ----\n> \t\t\t\t\t\t\t\t\t\t\t\t\tper_query_ctx,\n> \t\t\t\t\t\t\t\t\t\t\t\t\tattinmeta,\n> \t\t\t\t\t\t\t\t\t\t\t\t\ttupstore);\n> + \n> + \t\t\t/* reset branch for next pass */\n> + \t\t\txpfree(branchstr->data);\n> + \t\t\tinitStringInfo(branchstr);\n> \t\t}\n> \t}\n> \n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 11 Sep 2002 20:19:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] About connectby()"
}
] |
[
{
"msg_contents": "SIMILAR TO doesn't implement the SQL standard, it's only a wrapper around\nthe POSIX regexp matching, which is wrong. I thought someone wanted to\nfix that, but if it's not happening it should be removed.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sat, 7 Sep 2002 18:15:44 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "SIMILAR TO"
},
{
"msg_contents": "> SIMILAR TO doesn't implement the SQL standard, it's only a wrapper around\n> the POSIX regexp matching, which is wrong. I thought someone wanted to\n> fix that, but if it's not happening it should be removed.\n\nPlease be specific on what you would consider correct. I'm not recalling\nany details of past discussions so need some background.\n\nI see mention in my SQL99 docs of escape characters for \"similar\npattern\" which would suggest that it resembles Posix regexp matching. I\ndon't have the code in front of me to check on the details of the\ncurrent implementation, but I'd hope that you have something helpful to\nsay on what a better implementation would be.\n\nRegards.\n\n - Tom\n",
"msg_date": "Sat, 07 Sep 2002 17:17:01 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: SIMILAR TO"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> > SIMILAR TO doesn't implement the SQL standard, it's only a wrapper around\n> > the POSIX regexp matching, which is wrong. I thought someone wanted to\n> > fix that, but if it's not happening it should be removed.\n>\n> Please be specific on what you would consider correct. I'm not recalling\n> any details of past discussions so need some background.\n\nThe pattern that should be accepted by SIMILAR TO (as defined in SQL99\npart 2 clause 8.6) and the POSIX regular expressions that it accepts now\nare not the same.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 9 Sep 2002 20:41:19 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: SIMILAR TO"
},
{
"msg_contents": "\nIs this a TODO?\n\n---------------------------------------------------------------------------\n\nPeter Eisentraut wrote:\n> Thomas Lockhart writes:\n> \n> > > SIMILAR TO doesn't implement the SQL standard, it's only a wrapper around\n> > > the POSIX regexp matching, which is wrong. I thought someone wanted to\n> > > fix that, but if it's not happening it should be removed.\n> >\n> > Please be specific on what you would consider correct. I'm not recalling\n> > any details of past discussions so need some background.\n> \n> The pattern that should be accepted by SIMILAR TO (as defined in SQL99\n> part 2 clause 8.6) and the POSIX regular expressions that it accepts now\n> are not the same.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 11 Sep 2002 00:24:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SIMILAR TO"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Is this a TODO?\n\nIt's a must-fix for 7.3, but frankly I don't see how we could justify\nmaking the required extensive changes during beta. I suggest that we keep\nthe parser support and throw an error when it's invoked.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 11 Sep 2002 20:15:33 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: SIMILAR TO"
}
] |
[
{
"msg_contents": "Does anyone else feel that the pg_hba.conf inline documentation is getting\ntoo long? The default file is now 259 lines. I feel we should try to cut\nthis down to about 30-50 lines that have a reminder function, not a\ncomplete specification.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sat, 7 Sep 2002 18:16:38 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "pg_hba.conf documentation"
},
{
"msg_contents": "On Sat, Sep 07, 2002 at 18:16:38 +0200,\n Peter Eisentraut <peter_e@gmx.net> wrote:\n> Does anyone else feel that the pg_hba.conf inline documentation is getting\n> too long? The default file is now 259 lines. I feel we should try to cut\n> this down to about 30-50 lines that have a reminder function, not a\n> complete specification.\n\nYes. The documentation in the config file should be available in the normal\ndocumentation and hence only used to prompt an admin's memory, not provide\na detailed spec. Unless this documentation is built automatically it is\njust two copies of the same data that have to be separately maintained.\nHowever, it isn't a big deal for an admin to delete the comments if they\nwant.\n",
"msg_date": "Sat, 7 Sep 2002 11:25:50 -0500",
"msg_from": "Bruno Wolff III <bruno@wolff.to>",
"msg_from_op": false,
"msg_subject": "Re: pg_hba.conf documentation"
},
{
"msg_contents": "Bruno Wolff III wrote:\n> On Sat, Sep 07, 2002 at 18:16:38 +0200,\n> Peter Eisentraut <peter_e@gmx.net> wrote:\n> > Does anyone else feel that the pg_hba.conf inline documentation is getting\n> > too long? The default file is now 259 lines. I feel we should try to cut\n> > this down to about 30-50 lines that have a reminder function, not a\n> > complete specification.\n> \n> Yes. The documentation in the config file should be available in the normal\n> documentation and hence only used to prompt an admin's memory, not provide\n> a detailed spec. Unless this documentation is built automatically it is\n> just two copies of the same data that have to be separately maintained.\n> However, it isn't a big deal for an admin to delete the comments if they\n> want.\n\nYes, it is probably time to pull that stuff out of there and get it into\nSGML.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 7 Sep 2002 12:55:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_hba.conf documentation"
}
] |
[
{
"msg_contents": "Didn't we want to remove that option?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sat, 7 Sep 2002 18:23:52 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "--with-maxbackends"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Didn't we want to remove that option?\n\nI didn't know it was still in there. I see no reason for it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 7 Sep 2002 12:52:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: --with-maxbackends"
},
{
"msg_contents": "On Saturday 07 September 2002 12:52 pm, Bruce Momjian wrote:\n> Peter Eisentraut wrote:\n> > Didn't we want to remove that option?\n>\n> I didn't know it was still in there. I see no reason for it.\n\nHow about --enable-depend, that's not still needed is it? Or is that \nsomething other than the new dependancy system?\n\n",
"msg_date": "Sun, 8 Sep 2002 00:25:30 -0400",
"msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>",
"msg_from_op": false,
"msg_subject": "Re: --with-maxbackends"
},
{
"msg_contents": "Matthew T. O'Connor wrote:\n> On Saturday 07 September 2002 12:52 pm, Bruce Momjian wrote:\n> > Peter Eisentraut wrote:\n> > > Didn't we want to remove that option?\n> >\n> > I didn't know it was still in there. I see no reason for it.\n> \n> How about --enable-depend, that's not still needed is it? Or is that \n> something other than the new dependancy system?\n\nThat relates to C file dependencies, not pg_depend.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 8 Sep 2002 14:59:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: --with-maxbackends"
}
] |
[
{
"msg_contents": "Hello,\n\nIn http://developer.postgresql.org/docs/postgres/runtime-config.html,\nthe SEARCH_PATH variable description mentions the use of\ncurrent_schemas(), but this function doesn't exist (or it didn't exist\nlast time I updated:\n#define CATALOG_VERSION_NO 200209021\n)\n\nWhat exists is current_schema(), but it doesn't expand to the full\nsearch path; apparently, only the first item that exists:\n\ntesting=# set search_path to '$user', 'public', 'alvh1', 'alvh2';\nSET\ntesting=# select current_schema();\n current_schema \n----------------\n public\n(1 row)\n\nThis is not good:\n\ntesting=# \\d foo\n Table \"alvh1.foo\"\n Column | Type | Modifiers \n--------+---------+-----------\n a | integer | not null\nIndexes: foo_pkey primary key btree (a)\n\n(and alvh1 does not appear on current_schema)\n\nIs this a bug?\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Como puedes confiar en algo que pagas y que no ves,\ny no confiar en algo que te dan y te lo muestran?\" (German Poo)\n\n",
"msg_date": "Sat, 7 Sep 2002 14:39:35 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": true,
"msg_subject": "current_schemas()"
},
{
"msg_contents": "On Sat, 2002-09-07 at 14:39, Alvaro Herrera wrote:\n> Hello,\n> \n> In http://developer.postgresql.org/docs/postgres/runtime-config.html,\n> the SEARCH_PATH variable description mentions the use of\n> current_schemas(), but this function doesn't exist (or it didn't exist\n> last time I updated:\n> #define CATALOG_VERSION_NO 200209021\n> )\n> \n> What exists is current_schema(), but it doesn't expand to the full\n> search path; apparently, only the first item that exists:\n> \n> testing=# set search_path to '$user', 'public', 'alvh1', 'alvh2';\n> SET\n> testing=# select current_schema();\n> current_schema \n> ----------------\n> public\n> (1 row)\n\nHeres what I get. Note current_schemas() shows the full search path,\nwhere current_schema() shows only the first.\n\na=# set search_path to k,l;\nSET\na=# select current_schemas(true);\n current_schemas \n------------------\n {pg_catalog,k,l}\n(1 row)\n\n\n\n",
"msg_date": "07 Sep 2002 14:58:08 -0400",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: current_schemas()"
},
{
"msg_contents": "Rod Taylor dijo: \n\n> On Sat, 2002-09-07 at 14:39, Alvaro Herrera wrote:\n> > Hello,\n> > \n> > In http://developer.postgresql.org/docs/postgres/runtime-config.html,\n> > the SEARCH_PATH variable description mentions the use of\n> > current_schemas(), but this function doesn't exist (or it didn't exist\n> \n> Heres what I get. Note current_schemas() shows the full search path,\n> where current_schema() shows only the first.\n\nOh, I see. This seems like a bug in the documentation to me. It should\nhave a cross-reference to the \"miscellaneous functions\" section, or\nmention the usage of the parameter.\n\nI think this is a global and serious bug in the SGML documentation: the\nlack of cross references. It seriously decreases the usability of the\ndocumentation system.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"El destino baraja y nosotros jugamos\" (A. Schopenhauer)\n\n",
"msg_date": "Sat, 7 Sep 2002 15:30:05 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": true,
"msg_subject": "Re: current_schemas()"
}
] |
[
{
"msg_contents": "Hi,\n\nThis is my configuration :\ntemplate1=# select version();\n version\n---------------------------------------------------------------\n PostgreSQL 7.3b1 on i686-pc-linux-gnu, compiled by GCC 2.95.4\n(1 row)\n\n128 Mb of RAM with a PIII933\nLinux debian woody\n\nAs usual I have done like explain is the doc ...\npg_dumpall of my 7.2.2 database ...\n\nBut when I try to import it inside 7.3b1 I get this :\n(seems that the copy command is not fully compatible with the 7.2.2 \npg_dumpall ?)\n\nMany thinks like this : (I have only copied some parts ...)\nSize of the dump about 1.5 Gb ...\n\nQuery buffer reset (cleared).\npsql:/tmp/dump_mybase.txt:1015274: invalid command \\nPour\nQuery buffer reset (cleared).\npsql:/tmp/dump_mybase.txt:1015274: invalid command \\n<b>\npsql:/tmp/dump_mybase.txt:1015274: invalid command \\N\npsql:/tmp/dump_mybase.txt:1015274: invalid command \\N\npsql:/tmp/dump_mybase.txt:1015274: invalid command \\N\nQuery buffer reset (cleared).\npsql:/tmp/dump_mybase.txt:1015275: invalid command \\nLa\npsql:/tmp/dump_mybase.txt:1015275: invalid command \\N\npsql:/tmp/dump_mybase.txt:1015275: invalid command \\N\npsql:/tmp/dump_mybase.txt:1015275: invalid command \\N\nQuery buffer reset (cleared).\npsql:/tmp/dump_mybase.txt:1015276: invalid command \\nLa\npsql:/tmp/dump_mybase.txt:1015276: invalid command \\N\npsql:/tmp/dump_mybase.txt:1015276: invalid command \\N\npsql:/tmp/dump_mybase.txt:1015276: invalid command \\N\npsql:/tmp/dump_mybase.txt:1015277: ERROR: parser: parse error at or near \n\"1038\" at character 1\nQuery buffer reset (cleared).\npsql:/tmp/dump_mybase.txt:1015277: invalid command \\n\nQuery buffer reset (cleared).\npsql:/tmp/dump_mybase.txt:1015277: invalid command \\n*\nQuery buffer reset (cleared).\npsql:/tmp/dump_mybase.txt:1015277: invalid command \\n\nQuery buffer reset (cleared).\npsql:/tmp/dump_mybase.txt:1015277: invalid command \\n\nQuery buffer reset (cleared).\npsql:/tmp/dump_mybase.txt:1015277: invalid 
command \\n\npsql:/tmp/dump_mybase.txt:1015277: invalid command \\N\npsql:/tmp/dump_mybase.txt:1015277: invalid command \\N\npsql:/tmp/dump_mybase.txt:1015277: invalid command \\N\nQuery buffer reset (cleared).\npsql:/tmp/dump_mybase.txt:1015278: invalid command \\n<STRONG>Cette\npsql:/tmp/dump_mybase.txt:1015278: invalid command \\N\npsql:/tmp/dump_mybase.txt:1015278: invalid command \\N\npsql:/tmp/dump_mybase.txt:1015278: invalid command \\N\nQuery buffer reset (cleared).\npsql:/tmp/dump_mybase.txt:1015279: invalid command \\n\npsql:/tmp/dump_mybase.txt:1015279: invalid command \\N\npsql:/tmp/dump_mybase.txt:1015279: invalid command \\N\npsql:/tmp/dump_mybase.txt:1015279: invalid command \\N\nQuery buffer reset (cleared).\npsql:/tmp/dump_mybase.txt:1015280: invalid command \\n<b>Attention\nQuery buffer reset (cleared).\npsql:/tmp/dump_mybase.txt:1015280: invalid command \\n2.\npsql:/tmp/dump_mybase.txt:1015280: invalid command \\N\npsql:/tmp/dump_mybase.txt:1015280: invalid command \\N\npsql:/tmp/dump_mybase.txt:1015280: invalid command \\N\nQuery buffer reset (cleared).\npsql:/tmp/dump_mybase.txt:1015281: invalid command \\nInterdit\npsql:/tmp/dump_mybase.txt:1015281: invalid command \\N\npsql:/tmp/dump_mybase.txt:1015281: invalid command \\N\npsql:/tmp/dump_mybase.txt:1015281: invalid command \\N\nQuery buffer reset (cleared).\npsql:/tmp/dump_mybase.txt:1015286: invalid command \\nClip\npsql:/tmp/dump_mybase.txt:1015286: invalid command \\N\npsql:/tmp/dump_mybase.txt:1015286: invalid command \\N\npsql:/tmp/dump_mybase.txt:1015286: invalid command \\N\npsql:/tmp/dump_mybase.txt:1015287: invalid command \\.\n\nShowing on tuples.\nTuples only is off.\nShowing on tuples.\nTuples only is off.\nShowing on tuples.\nTuples only is off.\nShowing on tuples.\nTuples only is off.\nShowing on tuples.\nTuples only is off.\nShowing on tuples.\nTuples only is off.\nShowing on tuples.\nTuples only is off.\nShowing on tuples.\nTuples only is off.\nShowing on tuples.\nTuples only 
is off.\nShowing on tuples.\nTuples only is off.\nShowing on tuples.\nTuples only is off.\nShowing on tuples.\nTuples only is off.\nShowing on tuples.\nTuples only is off.\n\npsql:dump_mybase.txt:2736773: invalid command \\N\npsql:dump_mybase.txt:2736774: ERROR: parser: parse error at or near \"9\" at \ncharacter 1\npsql:dump_mybase.txt:2736774: ERROR: parser: parse error at or near \n\"4405004\" at character 1\npsql:dump_mybase.txt:2736774: ERROR: parser: parse error at or near \n\"7327180\" at character 1\npsql:dump_mybase.txt:2736774: invalid command \\N\npsql:dump_mybase.txt:2736774: invalid command \\N\npsql:dump_mybase.txt:2736775: invalid command \\N\npsql:dump_mybase.txt:2736775: invalid command \\N\npsql:dump_mybase.txt:2736776: invalid command \\N\npsql:dump_mybase.txt:2736776: invalid command \\N\npsql:dump_mybase.txt:2736777: invalid command \\N\npsql:dump_mybase.txt:2736777: invalid command \\N\npsql:dump_mybase.txt:2736778: invalid command \\N\npsql:dump_mybase.txt:2736778: invalid command \\N\npsql:dump_mybase.txt:2736779: ERROR: parser: parse error at or near \"w\" at \ncharacter 1\npsql:dump_mybase.txt:2736779: ERROR: parser: parse error at or near \n\"4405004\" at character 1\npsql:dump_mybase.txt:2736779: ERROR: parser: parse error at or near \n\"7327180\" at character 1\npsql:dump_mybase.txt:2736779: invalid command \\N\npsql:dump_mybase.txt:2736779: invalid command \\N\npsql:dump_mybase.txt:2736780: ERROR: parser: parse error at or near \"w\" at \ncharacter 1\npsql:dump_mybase.txt:2736780: ERROR: parser: parse error at or near \n\"4405004\" at character 1\npsql:dump_mybase.txt:2736780: ERROR: parser: parse error at or near \n\"7327180\" at character 1\npsql:dump_mybase.txt:2736780: invalid command \\N\npsql:dump_mybase.txt:2736780: invalid command \\N\npsql:dump_mybase.txt:2736781: invalid command \\N\npsql:dump_mybase.txt:2736781: invalid command \\N\npsql:dump_mybase.txt:2736782: invalid command \\N\npsql:dump_mybase.txt:2736782: invalid 
command \\N\npsql:dump_mybase.txt:2736783: ERROR: parser: parse error at or near \"w\" at \ncharacter 1\npsql:dump_mybase.txt:2736783: ERROR: parser: parse error at or near \n\"4405004\" at character 1\npsql:dump_mybase.txt:2736783: ERROR: parser: parse error at or near \n\"7327180\" at character 1\npsql:dump_mybase.txt:2736783: invalid command \\N\npsql:dump_mybase.txt:2736783: invalid command \\N\npsql:dump_mybase.txt:2736784: invalid command \\N\npsql:dump_mybase.txt:2736784: invalid command \\N\npsql:dump_mybase.txt:2736785: invalid command \\N\npsql:dump_mybase.txt:2736785: invalid command \\N\npsql:dump_mybase.txt:2736786: invalid command \\N\npsql:dump_mybase.txt:2736786: invalid command \\N\npsql:dump_mybase.txt:2736787: invalid command \\.\nYou are now connected as new user herve.\npsql:dump_mybase.txt:2736797: ERROR: parser: parse error at or near \"w\" at \ncharacter 1\npsql:dump_mybase.txt:2880938: invalid command \\.\nYou are now connected as new user postgres.\npsql:dump_mybase.txt:2880948: ERROR: parser: parse error at or near \n\"110013570705\" at character 1\nQuery buffer reset (cleared).\npsql:dump_mybase.txt:2882296: \\r: extra argument '7116' ignored\npsql:dump_mybase.txt:2882296: \\r: extra argument '2001-08-07' ignored\npsql:dump_mybase.txt:2882296: \\r: extra argument 'nl42txt' ignored\npsql:dump_mybase.txt:2882774: ERROR: parser: parse error at or near \n\"lo1252386\" at character 1\n\npsql:dump_mybase.txt:3719939: invalid command \\\npsql:dump_mybase.txt:3719940: invalid command \\\npsql:dump_mybase.txt:3719941: invalid command \\\npsql:dump_mybase.txt:3719942: invalid command \\\npsql:dump_mybase.txt:3719943: invalid command \\\npsql:dump_mybase.txt:3719944: invalid command \\\npsql:dump_mybase.txt:3719945: invalid command \\\npsql:dump_mybase.txt:3719946: invalid command \\.\nYou are now connected as new user postgres.\npsql:dump_mybase.txt:3747485: invalid command \\.\n\nFinish like this all the time ! 
:(\n\npsql:dump_mybase.txt:16596289: invalid command \\N\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\n\nAny ideas ?\n\nRegards,\n-- \nHerv�\n\n\n\n\n",
"msg_date": "Sat, 7 Sep 2002 21:03:47 +0200",
"msg_from": "=?iso-8859-1?B?SGVydukgUGllZHZhY2hl?= <herve@elma.fr>",
"msg_from_op": true,
"msg_subject": "Impossible to import pg_dumpall from 7.2.2 to 7.3b1"
},
{
"msg_contents": "=?iso-8859-1?B?SGVydukgUGllZHZhY2hl?= <herve@elma.fr> writes:\n> But when I try to import it inside 7.3b1 I get this :\n> (seems that the copy command is not fully compatible with the 7.2.2 \n> pg_dumpall ?)\n\n> Many thinks like this : (I have only copied some parts ...)\n> Size of the dump about 1.5 Gb ...\n\n> Query buffer reset (cleared).\n> psql:/tmp/dump_mybase.txt:1015274: invalid command \\nPour\n> Query buffer reset (cleared).\n\nIt seems pretty clear that the COPY command itself failed, leaving psql\ntrying to interpret the following data as SQL commands. But you have\nnot shown us either the COPY command or the error message it generated,\nso there's not a lot we can say about it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 Sep 2002 15:17:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Impossible to import pg_dumpall from 7.2.2 to 7.3b1 "
}
] |
[
{
"msg_contents": "[ Discussion moved to hackers.]\n\nI recently added a test for JAVA_HOME to configure.in to issue a more\nhelpful message when Ant can't be run rather than throwing a more\ngeneric Ant failure message and expecting people to look in config.log.\n\nMy question is should we be doing such checks to try and help people who\nget configure failures, should we be pointing people to config.log more\noften. and are there other common failure cases in configure.in that\nshould be improved.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 7 Sep 2002 18:09:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Testing for failures in configure.in"
}
] |
[
{
"msg_contents": "I'm trying to compile PostgreSQL 7.2.2 under the Mac OS X 10.2/Darwin \n6.0 (Jaguar) and having some difficulties. I'm using a clean install so \nthere shouldn't be any customizations hindering the compilation \nprocess. I have it stored in /usr/local/pgsql and I configure it as \nbelow:\n\n./configure --enable-locale --enable-recode --enable-multibyte \n--with-perl --with-python --with-java --with-openssl --enable-odbc \n--with-CXX --enable-syslog\n\nBut while making the following error occurs:\n\nmake -C darwin all\ngcc -traditional-cpp -g -O2 -Wall -Wmissing-prototypes \n-Wmissing-declarations -I../../../../src/include -c -o sem.o sem.c\nIn file included from sem.c:30:\n../../../../src/include/port/darwin/sem.h:66: warning: `union semun' \ndeclared inside parameter list\n../../../../src/include/port/darwin/sem.h:66: warning: its scope is \nonly this definition or declaration, which is probably not what you want\n../../../../src/include/port/darwin/sem.h:66: warning: parameter has \nincomplete type\nsem.c:67: warning: `union semun' declared inside parameter list\nsem.c:68: parameter `arg' has incomplete type\nmake[4]: *** [sem.o] Error 1\nmake[3]: *** [darwin.dir] Error 2\nmake[2]: *** [port-recursive] Error 2\nmake[1]: *** [all] Error 2\nmake: *** [all] Error 2\n\nObviously the sem.h file doesn't contain a definition for union semun, \nand the only include is for <sys/ipc.h>. That doesn't contain the \ndefinition either. I poked around a bit and found a CVS commit relating \nto sem.h. The URL is:\n\nhttp://archives.postgresql.org/pgsql-committers/2002-05/msg00019.php\n\nIt says sem.h isn't needed anymore and has been removed, but the date \nis marked May 5th of 2002. Is this referring to the beta 3 release? I \nalso found that <sys/sem.h> contains the needed definition, but isn't \nincluded anywhere. Also, the sem.h file in question appears to be \nreplicating much of the functionality found in <sys/mem.h>. 
This seems \nto be quite an issue because in pgsql/include/port/darwin there is a \nreadme file talking about some work-around required for Mac OS X 10.1 \nthat relates to mem.h.\n\nDoes anyone have an idea as to how to fix this? Thanks.\n\n___________________\nAjay Ayyagari\ndaleks@seattleu.edu\n___________________\n\n",
"msg_date": "Sat, 7 Sep 2002 15:54:32 -0700",
"msg_from": "Ajay Ayyagari <daleks@seattleu.edu>",
"msg_from_op": true,
"msg_subject": "Mac OS X 10.2/Darwin 6.0"
}
] |
[
{
"msg_contents": "I found the following while poking around. RangeVarGetRelid takes a \nsecond parameter that is intended to allow it to not fail, returning \nInvalidOid instead. However it calls LookupExplicitNamespace, which does \nnot honor any such request, and happily generates an error on a bad \nnamespace name:\n\n/*\n * RangeVarGetRelid\n * Given a RangeVar describing an existing relation,\n * select the proper namespace and look up the relation OID.\n *\n * If the relation is not found, return InvalidOid if failOK = true,\n * otherwise raise an error.\n */\nOid\nRangeVarGetRelid(const RangeVar *relation, bool failOK)\n{\n[...]\n if (relation->schemaname)\n {\n /* use exact schema given */\n namespaceId = LookupExplicitNamespace(relation->schemaname);\n relId = get_relname_relid(relation->relname, namespaceId);\n }\n[...]\n}\n\nOid\nLookupExplicitNamespace(const char *nspname)\n{\n[...]\n namespaceId = GetSysCacheOid(NAMESPACENAME,\n CStringGetDatum(nspname),0, 0, 0);\n if (!OidIsValid(namespaceId))\n elog(ERROR, \"Namespace \\\"%s\\\" does not exist\", nspname);\n[...]\n}\n\nShouldn't LookupExplicitNamespace be changed to allow the same second \nparameter? All uses of LookupExplicitNamespace, besides in \nRangeVarGetRelid, would have the parameter set to false.\n\nComments?\n\nJoe\n\n",
"msg_date": "Sat, 07 Sep 2002 23:19:19 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "bug?"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> I found the following while poking around. RangeVarGetRelid takes a \n> second parameter that is intended to allow it to not fail, returning \n> InvalidOid instead. However it calls LookupExplicitNamespace, which does \n> not honor any such request, and happily generates an error on a bad \n> namespace name:\n\nISTR deciding that that was okay, and there was no need to clutter\nLookupExplicitNamespace with an extra parameter. Don't recall the\nreasoning at the moment...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 Sep 2002 09:33:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: bug? "
},
{
"msg_contents": "I said:\n> Joe Conway <mail@joeconway.com> writes:\n>> I found the following while poking around. RangeVarGetRelid takes a \n>> second parameter that is intended to allow it to not fail, returning \n>> InvalidOid instead. However it calls LookupExplicitNamespace, which does \n>> not honor any such request, and happily generates an error on a bad \n>> namespace name:\n\n> ISTR deciding that that was okay, and there was no need to clutter\n> LookupExplicitNamespace with an extra parameter. Don't recall the\n> reasoning at the moment...\n\nAfter looking: the only place that calls RangeVarGetRelid with a \"true\"\nsecond parameter is tcop/utility.c, and it just does it so that it can\ngive a different error message for the \"relation not found\" case. Thus,\nwe don't actually *want* failures other than \"relation not found\" to\nreturn from RangeVarGetRelid. So the code is right as-is. Perhaps the\ncomments could stand improvement though, to make it clearer what failOK\nis meant to do.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 Sep 2002 12:10:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: bug? "
},
{
"msg_contents": "Tom Lane wrote:\n>>Joe Conway <mail@joeconway.com> writes:\n>>\n>>>I found the following while poking around. RangeVarGetRelid takes a \n>>>second parameter that is intended to allow it to not fail, returning \n>>>InvalidOid instead. However it calls LookupExplicitNamespace, which does \n>>>not honor any such request, and happily generates an error on a bad \n>>>namespace name:\n>>\n> \n>>ISTR deciding that that was okay, and there was no need to clutter\n>>LookupExplicitNamespace with an extra parameter. Don't recall the\n>>reasoning at the moment...\n> \n> \n> After looking: the only place that calls RangeVarGetRelid with a \"true\"\n> second parameter is tcop/utility.c, and it just does it so that it can\n> give a different error message for the \"relation not found\" case. Thus,\n> we don't actually *want* failures other than \"relation not found\" to\n> return from RangeVarGetRelid. So the code is right as-is. Perhaps the\n> comments could stand improvement though, to make it clearer what failOK\n> is meant to do.\n\nOK. The reason I brought it up was that while working on the plpgsql patch \n(posted last night), I found that plpgsql gives a better error message if \nRangeVarGetRelid returns InvalidOid instead of simply elog'ing. The comments \ndid lead me to believe I could get an InvalidOid, so I was surprised when I \ndidn't.\n\nJoe\n\n\n\n",
"msg_date": "Mon, 09 Sep 2002 09:33:23 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Re: bug?"
}
] |
[
{
"msg_contents": "Hi,\n\nI didn't download the beta but compared the CVS checkouts and it appears\nthe ecpg directory is still the one from 7.2 not the one tagged\nbig_bison. Will this one be moved into the mainstream source? Else we\nwould be stuck with a non-compatible parser.\n\nIf I shall move it, please tell me, I'm just not doing it before talking\nto you guys.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Sun, 8 Sep 2002 10:20:11 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "7.3beta and ecpg"
},
{
"msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n> I didn't download the beta but compared the CVS checkouts and it appears\n> the ecpg directory is still the one from 7.2 not the one tagged\n> big_bison. Will this one be moved into the mainstream source?\n\nWell, I think we can't do that until postgresql.org has a version of\nbison installed that will compile it. And I'm really hesitant to see us\ndepending on a beta version of bison. Any word on a new bison official\nrelease?\n\nWe still have a few weeks until the situation gets critical, but maybe\nit is time to start thinking about a fallback plan...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 Sep 2002 09:38:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.3beta and ecpg "
},
{
"msg_contents": "On Mon, Sep 09, 2002 at 09:38:39AM -0400, Tom Lane wrote:\n> Well, I think we can't do that until postgresql.org has a version of\n> bison installed that will compile it. And I'm really hesitant to see us\n> depending on a beta version of bison. Any word on a new bison official\n> release?\n\nNo news yet. They just said \"as soon as possible\". :-)\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Mon, 9 Sep 2002 15:46:16 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: 7.3beta and ecpg"
},
{
"msg_contents": "Tom Lane wrote:\n> Michael Meskes <meskes@postgresql.org> writes:\n> > I didn't download the beta but compared the CVS checkouts and it appears\n> > the ecpg directory is still the one from 7.2 not the one tagged\n> > big_bison. Will this one be moved into the mainstream source?\n> \n> Well, I think we can't do that until postgresql.org has a version of\n> bison installed that will compile it. And I'm really hesitant to see us\n> depending on a beta version of bison. Any word on a new bison official\n> release?\n> \n> We still have a few weeks until the situation gets critical, but maybe\n> it is time to start thinking about a fallback plan...\n\nIMHO, our fallback is to ship based on the bison beta.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 9 Sep 2002 22:17:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.3beta and ecpg"
},
{
"msg_contents": "\nI think we should stop playing around with ecpg. Let's get the beta\nbison on postgresql.org and package the proper ecpg version for\n7.3beta2. If we don't, we are going to get zero testing for 7.3 final.\n\nMarc?\n\nWe will not find out if there are problems with the bison beta until we\nship it as part of beta, and I don't think we have to be scared of it just\nbecause it is beta.\n\n---------------------------------------------------------------------------\n\nMichael Meskes wrote:\n> Hi,\n> \n> I didn't download the beta but compared the CVS checkouts and it appears\n> the ecpg directory is still the one from 7.2 not the one tagged\n> big_bison. Will this one be moved into the mainstream source? Else we\n> would be stuck with a non-compatible parser.\n> \n> If I shall move it, please tell me, I'm just not doing it before talking\n> to you guys.\n> \n> Michael\n> -- \n> Michael Meskes\n> Michael@Fam-Meskes.De\n> Go SF 49ers! Go Rhein Fire!\n> Use Debian GNU/Linux! Use PostgreSQL!\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 11 Sep 2002 00:10:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.3beta and ecpg"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> We will not find out if there are problems with the bison beta until we\n> ship it as part of beta and I don't think we have to be scared of just\n> because it is beta.\n\nNo? If there are bugs in it, they will break the main SQL parser, not\nonly ecpg. I am scared.\n\nMy idea of a reasonable fallback is to add prebuilt-with-the-beta-bison\noutput files to the ecpg directory, but not anyplace else. That is\nugly, but the effects of any bison problems will be limited to ecpg.\n\nI am also still wondering if we couldn't tweak the grammar to eliminate\nstates so that ecpg would build with a standard bison. That would be a\nwin all 'round, but it requires effort that we maybe don't have to\nspend.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 11 Sep 2002 00:45:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.3beta and ecpg "
},
{
"msg_contents": "Tom Lane said: \n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > We will not find out if there are problems with the bison beta until we\n> > ship it as part of beta and I don't think we have to be scared of just\n> > because it is beta.\n> \n> No? If there are bugs in it, they will break the main SQL parser, not\n> only ecpg. I am scared.\n\nJust for the record: bison 1.49b reports lots of \"invalid character\"\nwhen processing plpgsql's grammar. However, the regression test passes.\nThis is Linux/i686.\n\n$ make gram.c -C src/pl/plpgsql/src\nmake: Entering directory `/home/alvherre/CVS/pgsql/src/pl/plpgsql/src'\nbison -y gram.y \ngram.y:101.24: invalid character: `,'\ngram.y:102.25: invalid character: `,'\ngram.y:104.26: invalid character: `,'\ngram.y:104.44: invalid character: `,'\ngram.y:106.24: invalid character: `,'\ngram.y:108.29: invalid character: `,'\ngram.y:108.46: invalid character: `,'\ngram.y:111.24: invalid character: `,'\ngram.y:112.22: invalid character: `,'\ngram.y:112.37: invalid character: `,'\ngram.y:117.25: invalid character: `,'\ngram.y:121.24: invalid character: `,'\ngram.y:121.36: invalid character: `,'\ngram.y:121.47: invalid character: `,'\ngram.y:122.23: invalid character: `,'\ngram.y:123.25: invalid character: `,'\ngram.y:123.34: invalid character: `,'\ngram.y:123.45: invalid character: `,'\ngram.y:123.57: invalid character: `,'\ngram.y:124.25: invalid character: `,'\ngram.y:124.43: invalid character: `,'\ngram.y:124.55: invalid character: `,'\ngram.y:125.23: invalid character: `,'\ngram.y:125.34: invalid character: `,'\ngram.y:125.47: invalid character: `,'\ngram.y:126.29: invalid character: `,'\ngram.y:126.43: invalid character: `,'\ngram.y:127.23: invalid character: `,'\ngram.y:127.35: invalid character: `,'\ngram.y:130.25: invalid character: `,'\ngram.y:134.26: invalid character: `,'\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"El conflicto es el camino real hacia la union\"\n\n",
"msg_date": "Wed, 11 Sep 2002 00:56:59 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: 7.3beta and ecpg "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > We will not find out if there are problems with the bison beta until we\n> > ship it as part of beta and I don't think we have to be scared of just\n> > because it is beta.\n> \n> No? If there are bugs in it, they will break the main SQL parser, not\n> only ecpg. I am scared.\n> \n> My idea of a reasonable fallback is to add prebuilt-with-the-beta-bison\n> output files to the ecpg directory, but not anyplace else. That is\n> ugly, but the effects of any bison problems will be limited to ecpg.\n\nYes, I assumed we would use the new bison only for ecpg.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 11 Sep 2002 01:39:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.3beta and ecpg"
},
{
"msg_contents": "On Wed, Sep 11, 2002 at 12:45:06AM -0400, Tom Lane wrote:\n> No? If there are bugs in it, they will break the main SQL parser, not\n> only ecpg. I am scared.\n\nActually there is one more problem. The backend introduced the EXECUTE\ncommand just recently. However, this clashes with the embedded SQL\nEXECUTE command. Since both may be called just with EXECUTE <name>,\nthere is no way to distinguish them.\n\nI have no idea if there's a standard about execution of a plan, but\ncouldn't/shouldn't it be named \"EXECUTE PLAN\" instead of just \"EXECUTE\"?\n\n> I am also still wondering if we couldn't tweak the grammar to eliminate\n> states so that ecpg would build with a standard bison. That would be a\n> win all 'round, but it requires effort that we maybe don't have to\n> spend.\n\nActually I think it will need quite some effort, in particular since I\nstay away from the backend grammar as much as possible. Once I change\nthe backend-compatible part of the grammar, I either have to make the\nsame changes to the backend's parser or ecpg will soon be unmaintainable.\n\nMichael\n\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Wed, 11 Sep 2002 10:21:43 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: 7.3beta and ecpg"
},
{
"msg_contents": "On Wed, Sep 11, 2002 at 12:56:59AM -0400, Alvaro Herrera wrote:\n> Just for the record: bison 1.49b reports lots of \"invalid character\"\n> when processing plpgsql's grammar. However, the regression test passes.\n> This is Linux/i686.\n> \n> $ make gram.c -C src/pl/plpgsql/src\n> make: Entering directory `/home/alvherre/CVS/pgsql/src/pl/plpgsql/src'\n> bison -y gram.y \n> gram.y:101.24: invalid character: `,'\n\nNo big deal. Just remove all the ','. The new bison does not like them\nas separators anymore. We will have to make that change in the near\nfuture anyway.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Wed, 11 Sep 2002 10:29:29 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: 7.3beta and ecpg"
}
] |
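For reference, the "invalid character: `,'" errors discussed in the thread above come from comma-separated %token lists, which bison 1.49 no longer accepts. A before/after grammar fragment (the token names are illustrative, in the style of plpgsql's gram.y):

```yacc
/* accepted by bison 1.28/1.35: comma-separated token list */
%token K_ALIAS, K_ASSIGN, K_BEGIN

/* required by bison 1.49 and later: whitespace-separated */
%token K_ALIAS K_ASSIGN K_BEGIN
```

This matches Michael's advice: simply removing the commas makes the same grammar acceptable to both old and new bison.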
[
{
"msg_contents": "Hi,\n\nSorry to insist, maybe my previous subject was misunderstood ...\nreferring to this message:\nhttp://archives.postgresql.org/pgsql-hackers/2002-09/msg00461.php\n\nBut I can't import my data from 7.2.2 into 7.3b1 ...\n1- Many errors during importation of the data\n2- Seems to use all the memory (and swap) during the import of the data made \nwith a classical pg_dumpall from the 7.2.2 to the 7.3b1 ... \n\nI'm used to making the same import on the same computer from the same data \nfrom 7.2.2 (other server) to 7.2.2 (this computer)... so the 7.3b1 uses more \nmemory and seems to not understand the pg_dumpall data from the 7.2.2 ?\n\nAny idea ?\n\nAm I the only one with this kind of trouble ?\nThe beta page tells us to make dump/initdb/reload ... it's what I've done but \nwithout any result ;)\n\nRegards,\n-- \nHervé\n\n\n\n",
"msg_date": "Sun, 8 Sep 2002 16:21:27 +0200",
"msg_from": "=?iso-8859-1?B?SGVydukgUGllZHZhY2hl?= <herve@elma.fr>",
"msg_from_op": true,
"msg_subject": "Importing data from 7.2.2 into 7.3b1 !?"
},
{
"msg_contents": "Hervé Piedvache writes:\n\n> Any idea ?\n\nNo.\n\nWe need the complete details (including the input files), not vague\nobservations.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 9 Sep 2002 20:41:01 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Importing data from 7.2.2 into 7.3b1 !?"
}
] |
[
{
"msg_contents": "Hello all,\n\nHere are the proposals for solving the \"Return proper effected\ntuple count from complex commands [return]\" issue as seen on TODO.\n\nAny comments?... This is obviously open to voting and discussion.\n\n-- \nBest regards,\n Steve Howe mailto:howe@carcass.dhs.org\n\n \n-------------------------------------------------------------------------\n\nIntroduction\n------------\nThese are four proposals to solve the issue:\n\n* Return proper effected tuple count from complex commands [return]\n\n... as seen on TODO http://developer.postgresql.org/todo.php as of 09\nSep 2002.\n\n\nAffected Versions:\n------------------\n\nPostgreSQL v7.2.X\nPre-7.2 PostgreSQL versions have inconsistent behavior as stated below.\n\n\nReferences\n----------\nThe main thread discussion is listed in (1):\nhttp://momjian.postgresql.org/cgi-bin/pgtodo?return\n\nSome previous discussion started in (2):\nhttp://archives.postgresql.org/pgsql-general/2002-05/msg00096.php\n\nThe topic was revisited by Steve Howe in the thread (3):\nhttp://archives.postgresql.org/pgsql-hackers/2002-09/msg00429.php\n\n\nProblem Description:\n--------------------\n\nPQcmdStatus(), PQcmdTuples() and PQoidValue() do not work properly on\nrules, most notably when updating views.
An additional layer of problems\ncan arise if the user issues multiple commands per rule, as to what\nthe output of those functions should be in that situation.\n\nEspecially problematic is PQcmdTuples(), which will return 0, confusing\nclient applications into thinking nothing was updated and even\nbreaking some applications.\n\nThe pre-version 7.2 behavior is not acceptable, as stated by Tom Lane\nin the threads above.\n\nAn urgent fix is needed to allow applications using rules to work\nproperly and to allow clients to retrieve proper execution information.\n\n\nProposal #1 (author: Steve Howe):\n---------------------------------\n\nAs stated in the threads above (from the [References] topic), we have\n3 tags to worry about, returned by the following functions:\n\nPQcmdStatus() - command status string\nPQcmdTuples() - number of rows updated\nPQoidValue() - last inserted OID\n\nMy proposal consists basically of keeping the same behavior we\ncurrently have when multiple commands per execution string are\nexecuted (except for PQcmdTuples()):\n\nPQcmdStatus() ==> Should return the last executed command or the same\n as the original command (I prefer the second way,\n but the first is more intuitive in a\n multiple-execution case, as I'll explain below).\n\nPQcmdTuples() ==> should return the sum of modified rows of all\n commands executed by the rule (DELETE / INSERT /\n UPDATE).\n \nPQoidValue() ==> should return the value for the last INSERT command\n executed in the rule (if any).\n\nUsing this scheme, any SELECT commands executed would not count toward\nPQcmdTuples(), which makes plain sense.
The other commands would give a\nsimilar response to what we already have when we issue multiple\ncommands per execution string.\n\nI would like to quote an issue pointed out by Tom Lane, from one of the\nmessages in the thread above:\n\n>I'm also concerned about having an understandable definition for the\n>OID returned for an INSERT query --- if there are additional INSERTs\n>triggered by rules, does that mean you don't get to see the OID assigned\n>to the single row you tried to insert?\n\nIn this case, the user has to be aware that if he issued multiple\ncommands, he will get the result for only the last one. This is the\nsame behavior as with multiple commands when you execute:\n\ndb# insert into MyTable values(1 ,1); insert into MyTable values(2 ,2);\nINSERT 93345 1\nINSERT 93346 1\n\nOf course this could lead to a PQcmdTuples() return value greater\nthan the number of rows viewable by the rule, but I think that's\nperfectly understandable if there are multiple commands involved, and\nthe client application programmer should be aware of that.\n\nPQoidValue() will return the OID only for the last command, so (again)\nthe proposed behavior is compatible with what already happens when you issue\nmultiple commands.
So if the user issues some insert commands but\n\nThe proposed behavior would be the same for DO and DO INSTEAD rules\nunless someone points out some flaw.\n\n\nProposal #2 (author: Tom Lane):\n---------------------------------\n\nTom Lane's proposal, as posted on\nhttp://candle.pha.pa.us/mhonarc/todo.detail/return/msg00012.html,\nconsists basically of the following:\n\nPQcmdStatus() ==> Should always return the same command type originally\n submitted by the client.\n\nPQcmdTuples() ==> If no INSTEAD rule, return the same output as for the\n original command, ignoring other commands in the\n rule. If there are INSTEAD rules, use the result of\n the last command in the rewritten series, the result\n of the last command of the same type as the original\n command, or sum up the results of all the rewritten\n commands.\n\n (I particularly prefer the sum).\n\nPQoidValue() ==> If the original command was not INSERT, return 0.\n Otherwise, if one INSERT, return its original\n PQoidValue(). If more than one INSERT command\n applied, use the last, or other possibilities (please\n refer to the thread for details).\n\nPlease refer to the original post for the full message. I\nwould like to point out that it was the most consistent proposal\nput forward so far in the previous discussions (Bruce M.
agrees\nwith this one).\n\n \nProposal #3 (author: Steve Howe):\n---------------------------------\n\nAnother possibility (which does not go against the other proposals but\nextends them) would be returning a stack of all commands executed and\nreturning it via new functions, which extend the primary functions'\nfunctionality; let's say these new functions are called\nPQcmdStatusEx(), PQcmdTuplesEx() and PQoidValueEx().\n\nThese \"extended\" functions should return the same as the original\nfunctions for single commands issued, but they should give more\ndetailed information if a complex command has been issued.\n\nA simple example of complex calls to those functions would return\n(case situation: two inserts, then a delete command which affects three\nrows):\n\nPQcmdStatusEx() ==> 'INSERT INSERT DELETE'\nPQcmdTuplesEx() ==> '1 1 3'\nPQoidValueEx() ==> '939494 939495 0'\n\nThe advantage of this solution is that it does not suffer from the\nproblems of the other solutions (namely, what to return when multiple\ncommands are issued in a single rule).\n\nThis would imply that other \"XXXXEx()\" functions would have to be\nmade (namely PQcmdTuples() and PQoidStatus()), but it might be worth the\neffort because it would cover all three tags, for all executed\ncommands, giving the possibility of reconstituting the whole\nexecution, and most importantly, without breaking existing\napplications. And those functions would be very easy to code after all\n(just append to the return string of those functions the value returned\nby a call to the original function, for each applied command).
The\nclient application could parse those strings easily and get any info\nneeded for all the steps of the execution.\n\nStill, the best situation would have the original PQcmdStatus(),\nPQcmdTuples(), PQoidValue() functions fixed according to one of the\nother Proposals, making the fix available also for existing\napplications, and this proposal applied.\n\nAnother possibility along the same lines would be just\none function returning a SETOF with three columns (one for each of\nthose three functions), each row representing a command issued (the same\nstack but in another format). But I like the first solution (returning\nstrings for each function) better, as it would better follow the style\nof the results for the existing libpq functions.\n\nFinally, an additional, good side effect of these functions is that\nthey could also return the same information for another odd\nsituation: when multiple commands are executed on a regular command\nline. Currently, only the results for the last execution string are\nreturned.\n\n\nProposal #4 (author: Hiroshi Inoue):\n------------------------------------\n\nHiroshi's proposal consists of a makeshift solution as stated in\nhttp://archives.postgresql.org/pgsql-general/2002-05/msg00170.php.\n\nPlease refer to that thread for details.\n\n\nFinal Comments\n--------------\n\nI particularly would like to see Proposal #1 or #2 implemented, and\nif possible Proposal #3 too.\nThis would provide a good solution for existing clients and a great\nsolution for new clients in the future.\n\nMaybe someone wishes to combine ideas from the first and second\nproposals to make a better Proposal.
This would be interesting to\nhear.\n\nOf course, given the simplicity of the solutions and the urgency of the\nfix, I think this could well fit in a pre-7.3 release, if someone can\ncode it.\n\nFinally, I would like to thank Bruce Momjian for the help and support in\nwriting this Proposal, and I hope the PostgreSQL team can reach an\nagreement on the best solution for this issue.\n\n-------------------------------------------------------------------------\n\n",
"msg_date": "Sun, 8 Sep 2002 19:50:21 -0300",
"msg_from": "Steve Howe <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Proposal: Solving the \"Return proper effected tuple count from\n\tcomplex commands [return]\" issue"
},
{
"msg_contents": "\nI liked option #2. I don't think the _last_ query in a rule should have\nany special handling.\n\nSo, to summarize #2, we have:\n\n\tif no INSTEAD, \n\treturn value of original command\n\n\tif INSTEAD, \n\treturn tag of original command\n\treturn sum of all affected rows with the same tag\n\treturn OID if all INSERTs in the rule insert only one row, else zero\n\nThis INSERT behavior seems consistent with INSERTs inserting multiple\nrows via INSERT INTO ... SELECT:\n\t\n\ttest=> create table x (y int);\n\tCREATE TABLE\n\ttest=> insert into x select 1;\n\tINSERT 507324 1\n ^^^^^^\n\ttest=> insert into x select 1 union select 2;\n\tINSERT 0 2\n ^\n\nI don't think we should add tuple counts from different commands, i.e.\nadding UPDATE and DELETE counts just yields a totally meaningless\nnumber.\n\nI don't think there is any need/desire to add additional API routines to\nhandle multiple return values.\n\nCan I get some votes on this? We have one user very determined to get a\nfix, and the TODO.detail file has another user who really wants a fix.\n\n---------------------------------------------------------------------------\n\n> Proposal #2 (author: Tom Lane):\n> ---------------------------------\n> \n> Tom Lane's proposal, as posted on\n> http://candle.pha.pa.us/mhonarc/todo.detail/return/msg00012.html,\n> consists basically of the following:\n> \n> PQcmdStatus() ==> Should always return the same command type originally\n> submitted by the client.\n> \n> PQcmdTuples() ==> If no INSTEAD rule, return same output as for\n> original command, ignoring other commands in the\n> rule. If there are INSTEAD rules, use result of last\n> command in the rewritten series, use result of last\n> command of same type as original command or sum up\n> the results of all the rewritten commands.\n> \n> (I particularly prefer the sum).\n> \n> PQoidValue() ==> If the original command was not INSERT, return 0.\n> otherwise, if one INSERT, return its original\n> 
PQoidValue(). If more than one INSERT command\n> applied, use last or other possibilities (please\n> refer to the thread for details).\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 8 Sep 2002 21:52:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple count"
},
{
"msg_contents": "Bruce Momjian wrote:\n> I liked option #2. I don't think the _last_ query in a rule should have\n> any special handling.\n> \n> So, to summarize #2, we have:\n> \n> \tif no INSTEAD, \n> \treturn value of original command\n> \n> \tif INSTEAD, \n> \treturn tag of original command\n> \treturn sum of all affected rows with the same tag\n> \treturn OID if all INSERTs in the rule insert only one row, else zero\n> \n\nHow about:\n\n if no INSTEAD,\n return value of original command\n\n if INSTEAD,\n return tag MUTATED\n return sum of tuple counts of all replacement commands\n return OID if sum of all replacement INSERTs in the rule inserted\n only one row, else zero\n\nThis is basically Tom's proposal, but substituting MUTATED for the \noriginal command tag name acknowledges that the original command was not \n executed unchanged. It also serves as a warning that the affected \ntuple count is from one or more substitute operations, not the original \ncommand.\n\n> I don't think we should add tuple counts from different commands, i.e.\n> adding UPDATE and DELETE counts just yields a totally meaningless\n> number.\n\nI don't know about that. The number of \"rows affected\" is indeed this \nnumber. It's just that they were not all affected in the same way.\n\n> I don't think there is any need/desire to add additional API routines to\n> handle multiple return values.\n\nAgreed.\n\n> \n> Can I get some votes on this? We have one user very determined to get a\n> fix, and the TODO.detail file has another user who really wants a fix.\n\n+1 for the version above ;-)\n\nJoe\n\n",
"msg_date": "Sun, 08 Sep 2002 19:54:45 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple"
},
{
"msg_contents": "Hello Bruce,\n\nSunday, September 8, 2002, 10:52:45 PM, you wrote:\n\nBM> I liked option #2. I don't think the _last_ query in a rule should have\nBM> any special handling.\n\nBM> So, to summarize #2, we have:\n\nBM> if no INSTEAD, \nBM> return value of original command\nThe problem is, this would lead us to the same behavior of Proposal\n#1 (returning the value for the last command executed), which you\ndidn't like...\n\nBM> if INSTEAD, \nBM> return tag of original command\nBM> return sum of all affected rows with the same tag\nBM> return OID if all INSERTs in the rule insert only one row, else zero\n\nBM> This INSERT behavior seems consistent with INSERTs inserting multiple\nBM> rows via INSERT INTO ... SELECT:\n \nBM> test=> create table x (y int);\nBM> inseCREATE TABLE\nBM> test=> insert into x select 1;\nBM> INSERT 507324 1\nBM> ^^^^^^\nBM> test=> insert into x select 1 union select 2;\nBM> INSERT 0 2\nBM> ^\n\nBM> I don't think we should add tuple counts from different commands, i.e.\nBM> adding UPDATE and DELETE counts just yeilds a totally meaningless\nBM> number.\nBut this *is* the total number of rows affected. There is no current\n(defined) behavior of \"rows affected by the same kind of command\nissued\", although I agree it makes some sense.\n\nBM> I don't think there is any need/desire to add additional API routines to\nBM> handle multiple return values.\nI'm ok with that if we can reach an agreement on how the existing API\nshould work. But as I stated, a new API would be a no-discussion way\nto solve this, and preferably extending some of the other proposals.\n\nBM> Can I get some votes on this? We have one user very determined to get a\nBM> fix, and the TODO.detail file has another user who really wants a fix.\n*Please* let's do it :)\n\nThanks.\n\n------------- \nBest regards,\n Steve Howe mailto:howe@carcass.dhs.org\n\n",
"msg_date": "Mon, 9 Sep 2002 00:14:43 -0300",
"msg_from": "Steve Howe <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple count from\n\tcomplex commands [return]\" issue"
},
{
"msg_contents": "Joe Conway wrote:\n> Bruce Momjian wrote:\n> > I liked option #2. I don't think the _last_ query in a rule should have\n> > any special handling.\n> > \n> > So, to summarize #2, we have:\n> > \n> > \tif no INSTEAD, \n> > \treturn value of original command\n> > \n> > \tif INSTEAD, \n> > \treturn tag of original command\n> > \treturn sum of all affected rows with the same tag\n> > \treturn OID if all INSERTs in the rule insert only one row, else zero\n> > \n> \n> How about:\n> \n> if no INSTEAD,\n> return value of original command\n> \n> if INSTEAD,\n> return tag MUTATED\n> return sum of tuple counts of all replacement commands\n> return OID if sum of all replacement INSERTs in the rule inserted\n> only one row, else zero\n> \n> This is basically Tom's proposal, but substituting MUTATED for the \n> original command tag name acknowledges that the original command was not \n> executed unchanged. It also serves as a warning that the affected \n> tuple count is from one or more substitute operations, not the original \n> command.\n\nAny suggestion on how to show the tag mutated? Do we want to add more\ntag possibilities?\n\n> > I don't think we should add tuple counts from different commands, i.e.\n> > adding UPDATE and DELETE counts just yields a totally meaningless\n> > number.\n> \n> I don't know about that. The number of \"rows affected\" is indeed this \n> number. It's just that they were not all affected in the same way.\n\nYes, that is true. The problem is that a DELETE returning a value of 10\nmay have deleted only one row and updated another 9 rows. In such\ncases, returning 1 is better. Of course, if there are multiple deletes\nthen perhaps the total is better, but then again, there is no way to\nflag this so we have to do one or the other consistently.\n\nThe real problem you outline is this: suppose the delete does _no_\ndeletes but only inserts.
In my plan, we would return zero while in\nyours you would return the rows updated.\n\nIn my view, if you return a delete tag, you better only count deletes.\n\nAlso, your total affected isn't going to work well with INSERT because\nwe could return a non-1 for rows affected and still return an OID, which\nwould be quite confusing. I did the total only matching tags because it\ndoes mesh with the INSERT behavior.\n\n> > I don't think there is any need/desire to add additional API routines to\n> > handle multiple return values.\n> \n> Agreed.\n\nYep.\n\n> > Can I get some votes on this? We have one user very determined to get a\n> > fix, and the TODO.detail file has another user who really wants a fix.\n> \n> +1 for the version above ;-)\n\nOK, we are getting closer.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 8 Sep 2002 23:16:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple count"
},
{
"msg_contents": "Hello Joe,\n\nSunday, September 8, 2002, 11:54:45 PM, you wrote:\n\nJC> Bruce Momjian wrote:\n>> I liked option #2. I don't think the _last_ query in a rule should have\n>> any special handling.\n>> \n>> So, to summarize #2, we have:\n>> \n>> if no INSTEAD, \n>> return value of original command\n>> \n>> if INSTEAD, \n>> return tag of original command\n>> return sum of all affected rows with the same tag\n>> return OID if all INSERTs in the rule insert only one row, else zero\n>> \n\nJC> How about:\n\nJC> if no INSTEAD,\nJC> return value of original command\n\nJC> if INSTEAD,\nJC> return tag MUTATED\nI see PQcmdStatus() returning a SQL command and not a pseudo-keyword,\nso I don't agree with this tag.\n\nJC> return sum of sum of tuple counts of all replacement commands\nAgreed.\n\nJC> return OID if sum of all replacement INSERTs in the rule inserted\nJC> only one row, else zero\nI don't agree with this one since it would lead us to a meaningless\ninformation... what would be the number retrieved ? Not an OID, nor\nnothing.\n\nJC> I don't know about that. The number of \"rows affected\" is indeed this\nJC> number. It's just that they were not all affected in the same way.\nAgreed too...\n\nJC> +1 for the version above ;-)\nWhich ? Yours or Tom's ? :)\n\n------------- \nBest regards,\n Steve Howe mailto:howe@carcass.dhs.org\n\n",
"msg_date": "Mon, 9 Sep 2002 00:19:00 -0300",
"msg_from": "Steve Howe <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple count"
},
{
"msg_contents": "Steve Howe wrote:\n> Hello Bruce,\n> \n> Sunday, September 8, 2002, 10:52:45 PM, you wrote:\n> \n> BM> I liked option #2. I don't think the _last_ query in a rule should have\n> BM> any special handling.\n> \n> BM> So, to summarize #2, we have:\n> \n> BM> if no INSTEAD, \n> BM> return value of original command\n> The problem is, this would lead us to the same behavior of Proposal\n> #1 (returning the value for the last command executed), which you\n> didn't like...\n\nI don't like treating the last command as special when there is more\nthan one command. Of course, if there is no INSTEAD, the main\nstatement is the only one we care about returning information for.\n\n> \n> BM> if INSTEAD, \n> BM> return tag of original command\n> BM> return sum of all affected rows with the same tag\n> BM> return OID if all INSERTs in the rule insert only one row, else zero\n> \n> BM> This INSERT behavior seems consistent with INSERTs inserting multiple\n> BM> rows via INSERT INTO ... SELECT:\n> \n> BM> test=> create table x (y int);\n> BM> CREATE TABLE\n> BM> test=> insert into x select 1;\n> BM> INSERT 507324 1\n> BM> ^^^^^^\n> BM> test=> insert into x select 1 union select 2;\n> BM> INSERT 0 2\n> BM> ^\n> \n> BM> I don't think we should add tuple counts from different commands, i.e.\n> BM> adding UPDATE and DELETE counts just yeilds a totally meaningless\n> BM> number.\n> But this *is* the total number of rows affected. There is no current\n> (defined) behavior of \"rows affected by the same kind of command\n> issued\", although I agree it makes some sense.\n\nYes, that is a good point, i.e. rows effected. However, see my previous\nemail on how this doesn't play well with INSERT.\n\n> BM> I don't think there is any need/desire to add additional API routines to\n> BM> handle multiple return values.\n> I'm ok with that if we can reach an agreement on how the existing API\n> should work. 
But as I stated, a new API would be a no-discussion way\n> to solve this, and preferably extending some of the other proposals.\n\n\nWe don't like to add complexity if we can help it.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 8 Sep 2002 23:21:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple"
},
{
"msg_contents": "Steve Howe wrote:\n> JC> return OID if sum of all replacement INSERTs in the rule inserted\n> JC> only one row, else zero\n> I don't agree with this one since it would lead us to a meaningless\n> information... what would be the number retrieved ? Not an OID, nor\n> nothing.\n\nI don't understand this objection.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 8 Sep 2002 23:22:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple"
},
{
"msg_contents": "Hello Bruce,\n\nMonday, September 9, 2002, 12:16:32 AM, you wrote:\n\nBM> Joe Conway wrote:\n\nBM> Any suggestion on how to show the tag mutated? Do we want to add more\nBM> tag possibilities?\nAgain, I don't agree with PQcmdStatus() returning a pseudo-keyword,\nsince I would expect a SQL command executed.\nI prefer Tom's suggestion of returning the same kind of command\nexecuted, or the last command as of Proposal #1.\n\n>> > I don't think we should add tuple counts from different commands, i.e.\n>> > adding UPDATE and DELETE counts just yeilds a totally meaningless\n>> > number.\n>> \n>> I don't know about that. The number of \"rows affected\" is indeed this \n>> number. It's just that they were not all affected in the same way.\n\nBM> Yes, that is true. The problem is that a DELETE returning a value of 10\nBM> may have deleted only one row and updated another 9 rows. In such\nBM> cases, returning 1 is better. Of course, if there are multiple deletes\nBM> then perhaps the total is better, but then again, there is no way to\nBM> flag this so we have to do one or the other consistently.\nBM>\nBM> The real problem which you outline is that suppose the delete does _no_\nBM> deletes but only inserts. In my plan, we would return zero while in\nBM> yours you would return the rows updated.\nYou have a good point here, Bruce. And for avoiding it, maybe Tom's\nsuggestion is the best. Unless the new API as of Proposal #3 is\nintroduced.\n\nBM> In my view, if you return a delete tag, you better only count deletes.\nYes, this is Tom's Proposal and it makes more sense when you imagine a\ncase situation.\nProposal #1 tried to be more compatible with the behavior of multiple\ncommands execution but that would lead us to bad situations like\nBruce exposes here.\n\nBM> Also, your total affected isn't going to work well with INSERT because\nBM> we could return a non-1 for rows affected and still return an OID, which\nBM> would be quite confusing. 
I did the total only matching tags because it\nBM> does mesh with the INSERT behavior.\nEven if this is 100% true, I'm afraid the only way to cover all\nspecific situations is the new API. Let's remember it's easy to\nimplement, and could serve both multiple commands execution *and*\nthis rules situation.\n\n>> > I don't think there is any need/desire to add additional API routines to\n>> > handle multiple return values.\n>> \n>> Agreed.\n\nBM> Yep.\nOK, this counts two points against the new API :)\n\n\n------------- \nBest regards,\n Steve Howe                            mailto:howe@carcass.dhs.org\n\n",
"msg_date": "Mon, 9 Sep 2002 00:27:47 -0300",
"msg_from": "Steve Howe <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple count"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Joe Conway wrote:\n>>This is basically Tom's proposal, but substituting MUTATED for the \n>>original command tag name acknowledges that the original command was not \n>> executed unchanged. It also serves as a warning that the affected \n>>tuple count is from one or more substitute operations, not the original \n>>command.\n> \n> Any suggestion on how to show the tag mutated? Do we want to add more\n> tag possibilities?\n\nThe suggestion was made based on what I think is the desired behavior, \nbut I must admit I have no idea how it would be implemented at this \npoint. It may turn out to be more pain than it's worth.\n\n>>I don't know about that. The number of \"rows affected\" is indeed this \n>>number. It's just that they were not all affected in the same way.\n> \n> Yes, that is true. The problem is that a DELETE returning a value of 10\n> may have deleted only one row and updated another 9 rows. In such\n> cases, returning 1 is better. Of course, if there are multiple deletes\n> then perhaps the total is better, but then again, there is no way to\n> flag this so we have to do one or the other consistently.\n> \n> The real problem which you outline is that suppose the delete does _no_\n> deletes but only inserts. In my plan, we would return zero while in\n> yours you would return the rows updated.\n> \n> In my view, if you return a delete tag, you better only count deletes.\n> \n> Also, your total affected isn't going to work well with INSERT because\n> we could return a non-1 for rows affected and still return an OID, which\n> would be quite confusing. I did the total only matching tags because it\n> does mesh with the INSERT behavior.\n\nSure, but that's why I am in favor of changing the tag. 
If you did:\n\nDELETE FROM fooview WHERE name LIKE 'Joe%';\n\nand got:\n\nMUTATED 507324 3\n\nit would mean that 3 tuples in total were affected by all of the \nsubstitute operations, only one of them being an INSERT, and the Oid of \nthe lone INSERT was 507324. If instead I got:\n\nDELETE 0\n\nI'd be back to having no useful information. Did any rows in fooview \nmatch the criteria \"LIKE 'Joe%'\"? Did any data in my database get \naltered? Can't tell from this.\n\nJoe\n\n\n",
"msg_date": "Sun, 08 Sep 2002 20:32:14 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple"
},
{
"msg_contents": "Hello Bruce,\n\nMonday, September 9, 2002, 12:21:11 AM, you wrote:\n\nBM> Steve Howe wrote:\n>> Hello Bruce,\n>> \n\n>> But this *is* the total number of rows affected. There is no current\n>> (defined) behavior of \"rows affected by the same kind of command\n>> issued\", although I agree it makes some sense.\n\nBM> Yes, that is a good point, i.e. rows effected. However, see my previous\nBM> email on how this doesn't play with with INSERT.\nI agree with your point. In fact, since everybody until now seems to\nagree that the \"last command\" behavior isn't consistent, I think Tom's\nsuggestion is the best.\n\nBM> We don't like to add complexity if we can help it.\nI understand. If we can reach an agreement on another way, that's ok\nfor me...\n\nWe still have to hear the other developers about this, but for a\nwhile, my votes go to Proposal's #2 (by Tom) and Proposal #3 if enough\npeople consider it important.\n\n------------- \nBest regards,\n Steve Howe mailto:howe@carcass.dhs.org\n\n",
"msg_date": "Mon, 9 Sep 2002 00:32:26 -0300",
"msg_from": "Steve Howe <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple count from\n\tcomplex commands [return]\" issue"
},
{
"msg_contents": "Steve Howe wrote:\n> We still have to hear the other developers about this, but for a\n> while, my votes go to Proposal's #2 (by Tom) and Proposal #3 if enough\n> people consider it important.\n\nI think Tom and Hiroshi were the people most involved in the previous\ndiscussion.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 359-1001\n  +  If your life is a hard drive,     |  13 Roberts Road\n  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 8 Sep 2002 23:33:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple"
},
{
"msg_contents": "Joe Conway wrote:\n> Sure, but that's why I am in favor of changing the tag. If you did:\n> \n> DELETE FROM fooview WHERE name LIKE 'Joe%';\n> \n> and got:\n> \n> MUTATED 507324 3\n> \n> it would mean that 3 tuples in total were affected by all of the \n> substitute operations, only of of them being an INSERT, and the Oid of \n> the lone INSERT was 507324. If instead I got:\n> \n> DELETE 0\n> \n> I'd be back to having no useful information. Did any rows in fooview \n> match the criteria \"LIKE 'Joe%'\"? Did any data in my database get \n> altered? Can't tell from this.\n\nOK. Do any people have INSTEAD rules where there are not commands\nmatching the original query tag? Can anyone think of such a case being\ncreated?\n\nThe only one I can think of is UPDATE implemented as separate INSERT and\nDELETE commands.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 8 Sep 2002 23:36:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple count"
},
{
"msg_contents": "Hello Bruce,\n\nMonday, September 9, 2002, 12:22:26 AM, you wrote:\n\nBM> Steve Howe wrote:\n>> JC> return OID if sum of all replacement INSERTs in the rule inserted\n>> JC> only one row, else zero\n>> I don't agree with this one since it would lead us to a meaningless\n>> information... what would be the number retrieved ? Not an OID, nor\n>> nothing.\n\nBM> I don't understand this objection.\nI misunderstood Joe's statement into thinking we wanted to sum the\nOIDs for all INSERT commands applied :)\nPlease ignore this.\nBut now that I read it again, I would prefer having at least one OID\nfor the last inserted row. With this info, I would be able to refresh\nmy client dataset to reflect the new inserted rows.\n\nI see returning 0 if multiple INSERT commands issued is as weird as\nreturning some OID if multiple INSERT commands issued. But the second\noption is usable, while the first one is useless... So I would prefer\nretrieving the last inserted OID.\n\n------------- \nBest regards,\n Steve Howe                            mailto:howe@carcass.dhs.org\n\n",
"msg_date": "Mon, 9 Sep 2002 00:37:39 -0300",
"msg_from": "Steve Howe <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple count"
},
{
"msg_contents": "Steve Howe wrote:\n> Hello Bruce,\n> \n> Monday, September 9, 2002, 12:22:26 AM, you wrote:\n> \n> BM> Steve Howe wrote:\n> >> JC> return OID if sum of all replacement INSERTs in the rule inserted\n> >> JC> only one row, else zero\n> >> I don't agree with this one since it would lead us to a meaningless\n> >> information... what would be the number retrieved ? Not an OID, nor\n> >> nothing.\n> \n> BM> I don't understand this objection.\n> I misunderstood Joe's statement into thinking we wanted to sum the\n> OIDs for all INSERT commands applied :)\n> Please ignore this.\n> But now that I read it again, I would prefer having at least one OID\n> for the last inserted row. With this info, I would be able to refresh\n> my client dataset to reflect the new inserted rows.\n> \n> I see returning 0 if multiple INSERT commands issued is as weird as\n> returning some OID if multiple INSERT commands issued. But the second\n> options is usable, while the first one is useless... So I would prefer\n> retrieving the last inserted OID.\n\nWe would return 0 for oid and an insert count, just like INSERT INTO ...\nSELECT. How is that weird?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 8 Sep 2002 23:39:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple"
},
{
"msg_contents": "Bruce Momjian wrote:\n> OK. Do any people have INSTEAD rules where there are not commands\n> matching the original query tag? Can anyone think of such a case being\n> created?\n> \n> The only one I can think of is UPDATE implemented as separate INSERT and\n> DELETE commands.\n> \n\nI could see an UPDATE implemented as an UPDATE and an INSERT. You would \nUPDATE the original row to mark it as dead (e.g. change END_DATE from \nNULL to CURRENT_DATE), and INSERT a new row to represent the new state. \nThis is pretty common in business systems where you need complete \ntransaction history, and never update in place over critical data.\n\nSimilarly, a DELETE might be implemented as an UPDATE for the same \nreason (mark it dead, but keep the data). In fact, the view itself might \nscreen out the dead rows using the field which was UPDATED.\n\nJoe\n\n",
"msg_date": "Sun, 08 Sep 2002 20:43:03 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple"
},
{
"msg_contents": "Hello Bruce,\n\nMonday, September 9, 2002, 12:36:38 AM, you wrote:\n\nBM> Joe Conway wrote:\n>> Sure, but that's why I am in favor of changing the tag. If you did:\n>> \n>> DELETE FROM fooview WHERE name LIKE 'Joe%';\n>> \n>> and got:\n>> \n>> MUTATED 507324 3\n>> \n>> it would mean that 3 tuples in total were affected by all of the \n>> substitute operations, only of of them being an INSERT, and the Oid of \n>> the lone INSERT was 507324. If instead I got:\n>> \n>> DELETE 0\n>> \n>> I'd be back to having no useful information. Did any rows in fooview \n>> match the criteria \"LIKE 'Joe%'\"? Did any data in my database get \n>> altered? Can't tell from this.\n\nBM> OK. Do any people have INSTEAD rules where there are not commands\nBM> matching the original query tag? Can anyone think of such a case being\nBM> created?\nI can think of a thousand cases.\nFor instance, one could create an update rule that would delete rows\nreferenced in a second table (to avoid orphan rows). Or a user could\nmake an insert rule that empties a table with DELETE so that only one\nrow can always be assumed in that table... the possibilities are\ninfinite.\n\nBM> The only one I can think of is UPDATE implemented as separate INSERT and\nBM> DELETE commands.\nI'm afraid the great imagination of PostgreSQL users has come to all\nkinds of uses and misuses for such a powerful feature :)\n\n------------- \nBest regards,\n Steve Howe                            mailto:howe@carcass.dhs.org\n\n",
"msg_date": "Mon, 9 Sep 2002 00:44:37 -0300",
"msg_from": "Steve Howe <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple count"
},
{
"msg_contents": "Hello Bruce,\n\nMonday, September 9, 2002, 12:39:20 AM, you wrote:\n\n>> BM> I don't understand this objection.\n>> I misunderstood Joe's statement into thinking we wanted to sum the\n>> OIDs for all INSERT commands applied :)\n>> Please ignore this.\n>> But now that I read it again, I would prefer having at least one OID\n>> for the last inserted row. With this info, I would be able to refresh\n>> my client dataset to reflect the new inserted rows.\n>> \n>> I see returning 0 if multiple INSERT commands issued is as weird as\n>> returning some OID if multiple INSERT commands issued. But the second\n>> options is usable, while the first one is useless... So I would prefer\n>> retrieving the last inserted OID.\n\nBM> We would return 0 for oid and an insert count, just like INSERT INTO ...\nBM> SELECT. How is that weird?\nIt's not weird, or as weird as the other proposal which is retrieving\nthe last inserted OID number. If we can return some information for\nthe client, why not do it? :-)\n\n------------- \nBest regards,\n Steve Howe                            mailto:howe@carcass.dhs.org\n\n",
"msg_date": "Mon, 9 Sep 2002 00:46:56 -0300",
"msg_from": "Steve Howe <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple count"
},
{
"msg_contents": "Steve Howe wrote:\n> BM> We would return 0 for oid and an insert count, just like INSERT INTO ...\n> BM> SELECT. How is that weird?\n> It's not weird, or as weird as the other proposal which is retrieving\n> the last inserted OID number. If we can return some information for\n> the client, why not doing it ? :-)\n\nWell, we don't return an OID from a random row when we do INSERT INTO\n... SELECT (and no one has complained about it) so I can't see why we\nwould return an OID there.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 8 Sep 2002 23:52:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple count"
},
{
"msg_contents": "\nOn Sun, 8 Sep 2002, Steve Howe wrote:\n\n> Here are the proposals for solutioning the \"Return proper effected\n> tuple count from complex commands [return]\" issue as seen on TODO.\n>\n> Any comments ?... This is obviously open to voting and discussion.\n\nAs it seems we're voting, I think Tom's scheme is about as good\nas we'll do for the current API. I actually think that a better API\nis a good idea, but it's an API change and we're in beta, so not\nfor 7.3.\n\nI'm not 100% sure which of the PQcmdTuples behaviors makes sense (actually\nI'm pretty sure neither do, but since the general complaint is knowing\nwhether something happened or not, sum gets around the last statement\ndoing 0 rows and running into the same type of problem).\n\n> Proposal #2 (author: Tom lane):\n> ---------------------------------\n>\n> Tom Lane's proposal, as posted on\n> http://candle.pha.pa.us/mhonarc/todo.detail/return/msg00012.html,\n> consists basically on the following:\n>\n> PQcmdStatus() ==> Should always return the same command type original\n> submitted by the client.\n>\n> PQcmdTuples() ==> If no INSTEAD rule, return same output as for\n> original command, ignoring other commands in the\n> rule.If there is INSTEAD rules, use result of last\n> command in the rewritten series, use result of last\n> command of same type as original command or sum up\n> the results of all the rewritten commands.\n>\n> (I particularly prefer the sum).\n>\n> PQoidValue() ==> If the original command was not INSERT, return 0.\n> otherwise, if one INSERT, return it's original\n> PQoidValue(). If more then one INSERT command\n> applied, use last or other possibilities (please\n> refer to the thread for details).\n>\n> Please refer to the original post to refer to the original message. I\n> would like to point out that it was the most consistent proposal\n> pointed out until now on the previous discussions (Bruce M. agrees\n> with this one).\n\n",
"msg_date": "Sun, 8 Sep 2002 21:53:41 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple"
},
{
"msg_contents": "Steve Howe writes:\n\n> Here are the proposals for solutioning the \"Return proper effected\n> tuple count from complex commands [return]\" issue as seen on TODO.\n>\n> Any comments ?... This is obviously open to voting and discussion.\n\nWe don't have a whole lot of freedom in this; this area is covered by the\nSQL standard. The major premise in the standard's point of view is that\nviews are supposed to be transparent. That is, if\n\n SELECT * FROM my_view WHERE condition;\n\nreturn N rows, then a subsequently executed\n\n UPDATE my_view SET ... WHERE condition;\n\nreturns an update count of N, no matter what happens behind the scenes. I\ndon't think this matches Tom Lane's view exactly, but it's a lot closer\nthan your proposal.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 9 Sep 2002 20:41:41 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple"
},
{
"msg_contents": "Hello Peter,\n\nMonday, September 9, 2002, 3:41:41 PM, you wrote:\n\nPE> Steve Howe writes:\n\n>> Here are the proposals for solutioning the \"Return proper effected\n>> tuple count from complex commands [return]\" issue as seen on TODO.\n>>\n>> Any comments ?... This is obviously open to voting and discussion.\n\nPE> We don't have a whole lot of freedom in this; this area is covered by the\nPE> SQL standard. The major premise in the standard's point of view is that\nPE> views are supposed to be transparent. That is, if\n\nPE> SELECT * FROM my_view WHERE condition;\n\nPE> return N rows, then a subsequently executed\n\nPE> UPDATE my_view SET ... WHERE condition;\n\nPE> returns an update count of N, no matter what happens behind the scenes. I\nPE> don't think this matches Tom Lane's view exactly, but it's a lot closer\nPE> than your proposal.\nIf there were a single statement per rule executed, this would be the end\nof discussion... but as you know there can be multiple\nstatements per rule, and the difficulty is what to do in those\ncases.\n\nAs of now, Tom Lane's proposal seems to be the most accepted,\nwithout using a new API.\n\n------------- \nBest regards,\n Steve Howe                            mailto:howe@carcass.dhs.org\n\n",
"msg_date": "Mon, 9 Sep 2002 17:43:50 -0300",
"msg_from": "Steve Howe <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple count from\n\tcomplex commands [return]\" issue"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Steve Howe writes:\n> \n> > Here are the proposals for solutioning the \"Return proper effected\n> > tuple count from complex commands [return]\" issue as seen on TODO.\n> >\n> > Any comments ?... This is obviously open to voting and discussion.\n> \n> We don't have a whole lot of freedom in this; this area is covered by the\n> SQL standard. The major premise in the standard's point of view is that\n> views are supposed to be transparent. That is, if\n> \n> SELECT * FROM my_view WHERE condition;\n> \n> return N rows, then a subsequently executed\n> \n> UPDATE my_view SET ... WHERE condition;\n> \n> returns an update count of N, no matter what happens behind the scenes. I\n> don't think this matches Tom Lane's view exactly, but it's a lot closer\n> than your proposal.\n\nOh, this is bad news. The problem we have is that rules don't\ndistinguish the UPDATE on the underlying tables of the rule from other\nupdates that may appear in the query.\n\nIf we go with Tom's idea and total just UPDATE's, we will get the right\nanswer when there is only one UPDATE in the ruleset.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 9 Sep 2002 22:24:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple"
},
{
"msg_contents": "Sorry guys - it's killing me! It's 'affected' in the subject line - not\n'effected'!!! Sigh :)\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> Sent: Tuesday, 10 September 2002 10:24 AM\n> To: Peter Eisentraut\n> Cc: Steve Howe; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Proposal: Solving the \"Return proper effected\n> tuple\n>\n>\n> Peter Eisentraut wrote:\n> > Steve Howe writes:\n> >\n> > > Here are the proposals for solutioning the \"Return proper effected\n> > > tuple count from complex commands [return]\" issue as seen on TODO.\n> > >\n> > > Any comments ?... This is obviously open to voting and discussion.\n> >\n> > We don't have a whole lot of freedom in this; this area is\n> covered by the\n> > SQL standard. The major premise in the standard's point of view is that\n> > views are supposed to be transparent. That is, if\n> >\n> > SELECT * FROM my_view WHERE condition;\n> >\n> > return N rows, then a subsequently executed\n> >\n> > UPDATE my_view SET ... WHERE condition;\n> >\n> > returns an update count of N, no matter what happens behind the\n> scenes. I\n> > don't think this matches Tom Lane's view exactly, but it's a lot closer\n> > than your proposal.\n>\n> Oh, this is bad news. The problem we have is that rules don't\n> distinguish the UPDATE on the underlying tables of the rule from other\n> updates that may appear in the query.\n>\n> If we go with Tom's idea and total just UPDATE's, we will get the right\n> answer when there is only one UPDATE in the ruleset.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. 
| Newtown Square,\n> Pennsylvania 19073\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Tue, 10 Sep 2002 10:36:44 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Solving the \"Return proper affected tuple"
},
{
"msg_contents": "Hello Christopher,\n\nMonday, September 9, 2002, 11:36:44 PM, you wrote:\n\nCKL> Sorry guys - it's killing me! It's 'affected' in the subject line - not\nCKL> 'effected'!!! Sigh :)\n\nlol... my bad, English is not my primary language and these things\njust seem to happen sometimes... I apologize.\n\n------------- \nBest regards,\n Steve Howe mailto:howe@carcass.dhs.org\n\n",
"msg_date": "Tue, 10 Sep 2002 00:26:49 -0300",
"msg_from": "Steve Howe <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: Solving the \"Return proper affected tuple"
},
{
"msg_contents": "On Sun, 8 Sep 2002 19:50:21 -0300, Steve Howe <howe@carcass.dhs.org>\nwrote:\n>Proposal #1 (author: Steve Howe):\n>---------------------------------\n>\n>PQcmdStatus() ==> Should return the last executed command\n\n#1a\n\n> or the same as the original command\n\n#1b = #2\n\n>PQcmdTuples() ==> should return the sum of modified rows of all\n> commands executed by the rule (DELETE / INSERT /\n> UPDATE).\n\n= #2c\n\n> \n>PQoidValue() ==> should return the value for the last INSERT executed\n> command in the rule (if any).\n\n\n>Proposal #2 (author: Tom lane):\n>-------------------------------\n>\n>PQcmdStatus() ==> Should always return the same command type original\n> submitted by the client.\n>\n>PQcmdTuples() ==> If no INSTEAD rule, return same output as for\n> original command, ignoring other commands in the\n> rule.If there is INSTEAD rules,\n> use result of last command in the rewritten series,\n\n#2a\n\n> use result of last command of same type as original command\n\n#2b\n\n> or sum up the results of all the rewritten commands.\n\n#2c\n\n>PQoidValue() ==> If the original command was not INSERT, return 0.\n> otherwise, if one INSERT, return it's original\n> PQoidValue(). 
If more then one INSERT command\n> applied, use last\n\n#2A\n\n> or other possibilities\n\n#2B; one of these possibilities is: return 0 (#2C).\n\n\nOn Sun, 8 Sep 2002 21:52:45 -0400 (EDT), Bruce Momjian\n<pgman@candle.pha.pa.us> wrote:\n:So, to summarize #2, we have:\n:\n:\tif no INSTEAD, \n:\treturn value of original command\n:\n:\tif INSTEAD, \n:\treturn tag of original command\n:\treturn sum of all affected rows with the same tag\n\nthis is a new interpretation: #2d\n\n:\treturn OID if all INSERTs in the rule insert only one row, else zero\n\nthis is #2C\n\n\n>Proposal #3 (author: Steve Howe):\n>---------------------------------\n>\n>Another possibility (which does not go against the other proposals but\n>extends them) would be returning a stack of all commands executed and\n>returning it on new functions, whose extend the primary's\n>functionality; let's say these new functions are called\n>PQcmdStatusEx(), PQcmdTuplesEx() and PQoidValueEx().\n\n\n>Proposal #4 (author: Hiroshi Inoue):\n>------------------------------------\n>\n>Hiroshi's proposal consist in a makeshift solution as stated on\n>http://archives.postgresql.org/pgsql-general/2002-05/msg00170.php.\n>\n>Please refer to that thread for details.\n\n\nProposal #5:\n\nOn Sun, 08 Sep 2002 19:54:45 -0700, Joe Conway <mail@joeconway.com>\nwrote:\n: if no INSTEAD,\n: return value of original command\n:\n: if INSTEAD,\n: return tag MUTATED\n: return sum of sum of tuple counts of all replacement commands\n\nthis equals #2c\n\n: return OID if sum of all replacement INSERTs in the rule inserted\n: only one row, else zero\n\nthis is #2C\n\n\nOn Mon, 9 Sep 2002 20:41:41 +0200 (CEST), Peter Eisentraut\n<peter_e@gmx.net> wrote:\n:The major premise in the standard's point of view is that\n:views are supposed to be transparent. That is, if\n:\n: SELECT * FROM my_view WHERE condition;\n:\n:return N rows, then a subsequently executed\n:\n: UPDATE my_view SET ... 
WHERE condition;\n:\n:returns an update count of N, no matter what happens behind the scenes.\n\nISTM this is one of those problems where there is no generic solution.\nWhatever you implement, it is easy to come up with an example that\nshows that the implementation is broken (for a suitable definition of\n\"broken\"), because there are so many different ways to use this\nfeature.\n\nHere is just another \"bad idea\": As it is impossible to *guess* the\ncorrect behaviour, let the dba *define* what he wants. There is no\nCREATE RULE statement in SQL92, so we can't break any standard by\nchanging its syntax.\n\n CREATE [ OR REPLACE ] RULE name AS ON event\n TO table [ WHERE condition ]\n DO [ INSTEAD ] action\n \n where action can be:\n \n NOTHING\n | rulequery\n | ( rulequery; rulequery ... )\n \n where rulequery is:\n \n [ COUNT ] query\n\n(or any other keyword instead of COUNT)\n\n\nProposal #6:\n\nIf no INSTEAD, return value of original command (this is compatible with\n#2), else ...\n\nPQcmdStatus() ==> Always return tag of original command\n (this equals #2).\n\nPQcmdTuples() ==> Sum up the results of all the rewritten commands\n marked as COUNTed.\n\nPQoidValue() ==> If the original command was not INSERT, return 0.\n otherwise, if all COUNTed rewritten INSERTs insert\n exactly one row, then return its OID, else 0.\n\n\nProposal #7 (a variation of #6):\n\nIf no INSTEAD, treat the original command the same as a COUNTed\nrewritten command.\n\n\n+/- for both #6 and #7\n\nPro: Regarding PQcmdTuples this can emulate #1 and all variants of #2.\n\nCon: need to store COUNTed flag for rule queries ==> catalog change\n==> initdb ==> not for 7.3 (unless we can find an unused bit).\n\n\nServus\n Manfred\n",
"msg_date": "Wed, 11 Sep 2002 11:35:55 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple count from\n\tcomplex commands [return]\" issue"
}
] |
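An aside for readers following the API details in the thread above: all three libpq accessors under debate derive from the command tag the backend sends back after a statement (the documented tag formats are `INSERT <oid> <rows>`, `UPDATE <rows>`, `DELETE <rows>`). A toy shell sketch of that client-side extraction — an illustration only, not libpq's actual code; the dispute in the thread is about which tag the backend should produce when rules rewrite the statement, not about this parsing:

```shell
# Command tags as the backend reports them (documented formats):
#   "INSERT <oid> <rows>"   "UPDATE <rows>"   "DELETE <rows>"
# PQcmdTuples() exposes the trailing row count; PQoidValue() the OID
# of a single-row INSERT, else 0. Toy re-implementations:
cmd_tuples() { echo "$1" | awk '{print $NF}'; }
oid_value()  { case "$1" in "INSERT "*) echo "$1" | awk '{print $2}';; *) echo 0;; esac; }

cmd_tuples "UPDATE 5"        # -> 5
cmd_tuples "INSERT 16384 1"  # -> 1
oid_value  "INSERT 16384 1"  # -> 16384
oid_value  "DELETE 3"        # -> 0
```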
[
{
"msg_contents": "Hi Guys,\n\nYou might be interested in the results of the Australian Open Source Awards:\n\nhttp://www.smh.com.au/articles/2002/09/06/1031115931961.html\n\nJustin Clift and I both rated mentions - Justin for the Postgres websites\nand myself for BSD Users Group WA.\n\nOne good things is that both Postgres and BUGWA got a mention on Slashdot\nand the Sydney Morning Herald with is neat.\n\nCheers,\n\nChris\n\n",
"msg_date": "Mon, 9 Sep 2002 10:32:38 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Australian Open Source Awards"
},
{
"msg_contents": "\nGod, I wish ppl would at least get information correct :(\n\nJustin Clift (for the postgreSQL documentation website)\n\nthe website they point to *isn't* techdocs, but www, which Justin has had\nnothing to do with ;(\n\nOn Mon, 9 Sep 2002, Christopher Kings-Lynne wrote:\n\n> Hi Guys,\n>\n> You might be interested in the results of the Australian Open Source Awards:\n>\n> http://www.smh.com.au/articles/2002/09/06/1031115931961.html\n>\n> Justin Clift and I both rated mentions - Justin for the Postgres websites\n> and myself for BSD Users Group WA.\n>\n> One good things is that both Postgres and BUGWA got a mention on Slashdot\n> and the Sydney Morning Herald with is neat.\n>\n> Cheers,\n>\n> Chris\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n",
"msg_date": "Mon, 9 Sep 2002 00:00:36 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Australian Open Source Awards"
},
{
"msg_contents": "Well annoyingly enough they have me down as 'founding pandaemonium' whereas\nit should be co-founded pandaemonium :(\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Marc G. Fournier\n> Sent: Monday, 9 September 2002 11:01 AM\n> To: Christopher Kings-Lynne\n> Cc: Hackers; pgsql-general@postgresql.org\n> Subject: Re: [HACKERS] [GENERAL] Australian Open Source Awards\n>\n>\n>\n> God, I wish ppl would at least get information correct :(\n>\n> Justin Clift (for the postgreSQL documentation website)\n>\n> the website they point to *isn't* techdocs, but www, which Justin has had\n> nothing to do with ;(\n>\n> On Mon, 9 Sep 2002, Christopher Kings-Lynne wrote:\n>\n> > Hi Guys,\n> >\n> > You might be interested in the results of the Australian Open\n> Source Awards:\n> >\n> > http://www.smh.com.au/articles/2002/09/06/1031115931961.html\n> >\n> > Justin Clift and I both rated mentions - Justin for the\n> Postgres websites\n> > and myself for BSD Users Group WA.\n> >\n> > One good things is that both Postgres and BUGWA got a mention\n> on Slashdot\n> > and the Sydney Morning Herald with is neat.\n> >\n> > Cheers,\n> >\n> > Chris\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n",
"msg_date": "Mon, 9 Sep 2002 11:06:21 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Australian Open Source Awards"
}
] |
[
{
"msg_contents": "Because we have seen many complains about sequential vs index scans, I\nwrote a script which computes the value for your OS/hardware\ncombination.\n\nUnder BSD/OS on one SCSI disk, I get a random_page_cost around 60. Our\ncurrent postgresql.conf default is 4.\n\nWhat do other people get for this value?\n\nKeep in mind if we increase this value, we will get a more sequential\nscans vs. index scans.\n\nOne flaw in this test is that it randomly reads blocks from different\nfiles rather than randomly reading from the same file. Do people have a\nsuggestion on how to correct this? Does it matter?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n#!/bin/bash\n\ntrap \"rm -f /tmp/$$\" 0 1 2 3 15\n\nBLCKSZ=8192\n\nif [ \"$RANDOM\" = \"$RANDOM\" ]\nthen\techo \"Your shell does not support \\$RANDOM. Try using bash.\" 1>&2\n\texit 1\nfi\n\n# XXX We assume 0 <= random <= 32767\n\necho \"Collecting sizing information ...\"\n\nTEMPLATE1=`du -s \"$PGDATA/base/1\" | awk '{print $1}'`\nFULL=`du -s \"$PGDATA/base\" | awk '{print $1}'`\nif [ \"$FULL\" -lt `expr \"$TEMPLATE1\" \\* 4` ]\nthen\techo \"Your installation should have at least four times the data stored in template1 to yield meaningful results\" 1>&2\n\texit 1 \nfi\n\nfind \"$PGDATA/base\" -type f -exec ls -ld {} \\; |\nawk '$5 % '\"$BLCKSZ\"' == 0 {print $5 / '\"$BLCKSZ\"', $9}' |\ngrep -v '^0 ' > /tmp/$$\n\nTOTAL=`awk 'BEGIN\t{sum=0}\n\t\t\t{sum += $1}\n\t END\t\t{print sum}' /tmp/$$`\n\necho \"Running random access timing test ...\"\n\nSTART=`date '+%s'`\nPAGES=1000\n\nwhile [ \"$PAGES\" -ne 0 ]\ndo\n\tBIGRAND=`expr \"$RANDOM\" \\* 32768 + \"$RANDOM\"`\n\t\n\tOFFSET=`awk 'BEGIN{printf \"%d\\n\", ('\"$BIGRAND\"' / 2^30) * '\"$TOTAL\"'}'`\n\t\n\tRESULT=`awk '\tBEGIN\t{offset = 0}\n\t\toffset + $1 > '\"$OFFSET\"' \\\n\t\t\t{print $2, 
'\"$OFFSET\"' - offset ; exit}\n\t\t\t{offset += $1}' /tmp/$$`\n\tFILE=`echo \"$RESULT\" | awk '{print $1}'`\n\tOFFSET=`echo \"$RESULT\" | awk '{print $2}'`\n\t\n\tdd bs=\"$BLCKSZ\" seek=\"$OFFSET\" count=1 if=\"$FILE\" of=\"/dev/null\" >/dev/null 2>&1\n\tPAGES=`expr \"$PAGES\" - 1`\ndone\n\nSTOP=`date '+%s'`\nRANDTIME=`expr \"$STOP\" - \"$START\"`\n\necho \"Running sequential access timing test ...\"\n\nSTART=`date '+%s'`\n# We run the random test 10 times more because it is quicker and\n# we need it to run for a while to get accurate results.\nPAGES=10000\n\nwhile [ \"$PAGES\" -ne 0 ]\ndo\n\tBIGRAND=`expr \"$RANDOM\" \\* 32768 + \"$RANDOM\"`\n\t\n\tOFFSET=`awk 'BEGIN{printf \"%d\\n\", ('\"$BIGRAND\"' / 2^30) * '\"$TOTAL\"'}'`\n\t\n\tRESULT=`awk '\tBEGIN\t{offset = 0}\n\t\toffset + $1 > '\"$OFFSET\"' \\\n\t\t\t{print $2, $1; exit}\n\t\t\t{offset += $1}' /tmp/$$`\n\tFILE=`echo \"$RESULT\" | awk '{print $1}'`\n\tFILEPAGES=`echo \"$RESULT\" | awk '{print $2}'`\n\n\tif [ \"$FILEPAGES\" -gt \"$PAGES\" ]\n\tthen\tFILEPAGES=\"$PAGES\"\n\tfi\n\t\n\tdd bs=\"$BLCKSZ\" count=\"$FILEPAGES\" if=\"$FILE\" of=\"/dev/null\" >/dev/null 2>&1\n\tPAGES=`expr \"$PAGES\" - \"$FILEPAGES\"`\ndone\n\nSTOP=`date '+%s'`\nSEQTIME=`expr \"$STOP\" - \"$START\"`\n\necho\nawk 'BEGIN\t{printf \"random_page_cost = %f\\n\", ('\"$RANDTIME\"' / '\"$SEQTIME\"') * 10}'",
"msg_date": "Mon, 9 Sep 2002 01:05:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Script to compute random page cost"
},
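The script's random-offset arithmetic deserves a note: bash's `$RANDOM` is only 15 bits (0..32767), so the script combines two draws into a ~30-bit value and scales it down to a page offset. A standalone sketch of that step, with a made-up `TOTAL` page count (the real script sums page counts over the files under `$PGDATA/base`):

```shell
# Two 15-bit $RANDOM draws give a ~30-bit value, scaled down to a
# page offset. TOTAL is a hypothetical page count for illustration.
# (${RANDOM:-...} guards the arithmetic if the shell lacks $RANDOM.)
TOTAL=5000
BIGRAND=$(( ${RANDOM:-12345} * 32768 + ${RANDOM:-6789} ))   # 0 .. 2^30-1
OFFSET=$(awk 'BEGIN{printf "%d\n", ('"$BIGRAND"' / 2^30) * '"$TOTAL"'}')
echo "$OFFSET"    # always lands in 0 .. TOTAL-1
```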
{
"msg_contents": "\nOK, turns out that the loop for sequential scan ran fewer times and was\nskewing the numbers. I have a new version at:\n\n\tftp://candle.pha.pa.us/pub/postgresql/randcost\n\nI get _much_ lower numbers now for random_page_cost.\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> Because we have seen many complains about sequential vs index scans, I\n> wrote a script which computes the value for your OS/hardware\n> combination.\n> \n> Under BSD/OS on one SCSI disk, I get a random_page_cost around 60. Our\n> current postgresql.conf default is 4.\n> \n> What do other people get for this value?\n> \n> Keep in mind if we increase this value, we will get a more sequential\n> scans vs. index scans.\n> \n> One flaw in this test is that it randomly reads blocks from different\n> files rather than randomly reading from the same file. Do people have a\n> suggestion on how to correct this? Does it matter?\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n> #!/bin/bash\n> \n> trap \"rm -f /tmp/$$\" 0 1 2 3 15\n> \n> BLCKSZ=8192\n> \n> if [ \"$RANDOM\" = \"$RANDOM\" ]\n> then\techo \"Your shell does not support \\$RANDOM. 
Try using bash.\" 1>&2\n> \texit 1\n> fi\n> \n> # XXX We assume 0 <= random <= 32767\n> \n> echo \"Collecting sizing information ...\"\n> \n> TEMPLATE1=`du -s \"$PGDATA/base/1\" | awk '{print $1}'`\n> FULL=`du -s \"$PGDATA/base\" | awk '{print $1}'`\n> if [ \"$FULL\" -lt `expr \"$TEMPLATE1\" \\* 4` ]\n> then\techo \"Your installation should have at least four times the data stored in template1 to yield meaningful results\" 1>&2\n> \texit 1 \n> fi\n> \n> find \"$PGDATA/base\" -type f -exec ls -ld {} \\; |\n> awk '$5 % '\"$BLCKSZ\"' == 0 {print $5 / '\"$BLCKSZ\"', $9}' |\n> grep -v '^0 ' > /tmp/$$\n> \n> TOTAL=`awk 'BEGIN\t{sum=0}\n> \t\t\t{sum += $1}\n> \t END\t\t{print sum}' /tmp/$$`\n> \n> echo \"Running random access timing test ...\"\n> \n> START=`date '+%s'`\n> PAGES=1000\n> \n> while [ \"$PAGES\" -ne 0 ]\n> do\n> \tBIGRAND=`expr \"$RANDOM\" \\* 32768 + \"$RANDOM\"`\n> \t\n> \tOFFSET=`awk 'BEGIN{printf \"%d\\n\", ('\"$BIGRAND\"' / 2^30) * '\"$TOTAL\"'}'`\n> \t\n> \tRESULT=`awk '\tBEGIN\t{offset = 0}\n> \t\toffset + $1 > '\"$OFFSET\"' \\\n> \t\t\t{print $2, '\"$OFFSET\"' - offset ; exit}\n> \t\t\t{offset += $1}' /tmp/$$`\n> \tFILE=`echo \"$RESULT\" | awk '{print $1}'`\n> \tOFFSET=`echo \"$RESULT\" | awk '{print $2}'`\n> \t\n> \tdd bs=\"$BLCKSZ\" seek=\"$OFFSET\" count=1 if=\"$FILE\" of=\"/dev/null\" >/dev/null 2>&1\n> \tPAGES=`expr \"$PAGES\" - 1`\n> done\n> \n> STOP=`date '+%s'`\n> RANDTIME=`expr \"$STOP\" - \"$START\"`\n> \n> echo \"Running sequential access timing test ...\"\n> \n> START=`date '+%s'`\n> # We run the random test 10 times more because it is quicker and\n> # we need it to run for a while to get accurate results.\n> PAGES=10000\n> \n> while [ \"$PAGES\" -ne 0 ]\n> do\n> \tBIGRAND=`expr \"$RANDOM\" \\* 32768 + \"$RANDOM\"`\n> \t\n> \tOFFSET=`awk 'BEGIN{printf \"%d\\n\", ('\"$BIGRAND\"' / 2^30) * '\"$TOTAL\"'}'`\n> \t\n> \tRESULT=`awk '\tBEGIN\t{offset = 0}\n> \t\toffset + $1 > '\"$OFFSET\"' \\\n> \t\t\t{print $2, $1; exit}\n> \t\t\t{offset += $1}' 
/tmp/$$`\n> \tFILE=`echo \"$RESULT\" | awk '{print $1}'`\n> \tFILEPAGES=`echo \"$RESULT\" | awk '{print $2}'`\n> \n> \tif [ \"$FILEPAGES\" -gt \"$PAGES\" ]\n> \tthen\tFILEPAGES=\"$PAGES\"\n> \tfi\n> \t\n> \tdd bs=\"$BLCKSZ\" count=\"$FILEPAGES\" if=\"$FILE\" of=\"/dev/null\" >/dev/null 2>&1\n> \tPAGES=`expr \"$PAGES\" - \"$FILEPAGES\"`\n> done\n> \n> STOP=`date '+%s'`\n> SEQTIME=`expr \"$STOP\" - \"$START\"`\n> \n> echo\n> awk 'BEGIN\t{printf \"random_page_cost = %f\\n\", ('\"$RANDTIME\"' / '\"$SEQTIME\"') * 10}'\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 9 Sep 2002 02:13:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Script to compute random page cost"
},
{
"msg_contents": "I got:\n\nrandom_page_cost = 0.807018\n\nFor FreeBSD 4.4/i386\n\nWith 512MB RAM & SCSI HDD\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> Sent: Monday, 9 September 2002 2:14 PM\n> To: PostgreSQL-development\n> Subject: Re: [HACKERS] Script to compute random page cost\n>\n>\n>\n> OK, turns out that the loop for sequential scan ran fewer times and was\n> skewing the numbers. I have a new version at:\n>\n> \tftp://candle.pha.pa.us/pub/postgresql/randcost\n>\n> I get _much_ lower numbers now for random_page_cost.\n>\n> ------------------------------------------------------------------\n> ---------\n>\n> Bruce Momjian wrote:\n> > Because we have seen many complains about sequential vs index scans, I\n> > wrote a script which computes the value for your OS/hardware\n> > combination.\n> >\n> > Under BSD/OS on one SCSI disk, I get a random_page_cost around 60. Our\n> > current postgresql.conf default is 4.\n> >\n> > What do other people get for this value?\n> >\n> > Keep in mind if we increase this value, we will get a more sequential\n> > scans vs. index scans.\n> >\n> > One flaw in this test is that it randomly reads blocks from different\n> > files rather than randomly reading from the same file. Do people have a\n> > suggestion on how to correct this? Does it matter?\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 359-1001\n> > + If your life is a hard drive, | 13 Roberts Road\n> > + Christ can be your backup. | Newtown Square,\n> Pennsylvania 19073\n>\n> > #!/bin/bash\n> >\n> > trap \"rm -f /tmp/$$\" 0 1 2 3 15\n> >\n> > BLCKSZ=8192\n> >\n> > if [ \"$RANDOM\" = \"$RANDOM\" ]\n> > then\techo \"Your shell does not support \\$RANDOM. 
Try\n> using bash.\" 1>&2\n> > \texit 1\n> > fi\n> >\n> > # XXX We assume 0 <= random <= 32767\n> >\n> > echo \"Collecting sizing information ...\"\n> >\n> > TEMPLATE1=`du -s \"$PGDATA/base/1\" | awk '{print $1}'`\n> > FULL=`du -s \"$PGDATA/base\" | awk '{print $1}'`\n> > if [ \"$FULL\" -lt `expr \"$TEMPLATE1\" \\* 4` ]\n> > then\techo \"Your installation should have at least four\n> times the data stored in template1 to yield meaningful results\" 1>&2\n> > \texit 1\n> > fi\n> >\n> > find \"$PGDATA/base\" -type f -exec ls -ld {} \\; |\n> > awk '$5 % '\"$BLCKSZ\"' == 0 {print $5 / '\"$BLCKSZ\"', $9}' |\n> > grep -v '^0 ' > /tmp/$$\n> >\n> > TOTAL=`awk 'BEGIN\t{sum=0}\n> > \t\t\t{sum += $1}\n> > \t END\t\t{print sum}' /tmp/$$`\n> >\n> > echo \"Running random access timing test ...\"\n> >\n> > START=`date '+%s'`\n> > PAGES=1000\n> >\n> > while [ \"$PAGES\" -ne 0 ]\n> > do\n> > \tBIGRAND=`expr \"$RANDOM\" \\* 32768 + \"$RANDOM\"`\n> >\n> > \tOFFSET=`awk 'BEGIN{printf \"%d\\n\", ('\"$BIGRAND\"' / 2^30) *\n> '\"$TOTAL\"'}'`\n> >\n> > \tRESULT=`awk '\tBEGIN\t{offset = 0}\n> > \t\toffset + $1 > '\"$OFFSET\"' \\\n> > \t\t\t{print $2, '\"$OFFSET\"' - offset ; exit}\n> > \t\t\t{offset += $1}' /tmp/$$`\n> > \tFILE=`echo \"$RESULT\" | awk '{print $1}'`\n> > \tOFFSET=`echo \"$RESULT\" | awk '{print $2}'`\n> >\n> > \tdd bs=\"$BLCKSZ\" seek=\"$OFFSET\" count=1 if=\"$FILE\"\n> of=\"/dev/null\" >/dev/null 2>&1\n> > \tPAGES=`expr \"$PAGES\" - 1`\n> > done\n> >\n> > STOP=`date '+%s'`\n> > RANDTIME=`expr \"$STOP\" - \"$START\"`\n> >\n> > echo \"Running sequential access timing test ...\"\n> >\n> > START=`date '+%s'`\n> > # We run the random test 10 times more because it is quicker and\n> > # we need it to run for a while to get accurate results.\n> > PAGES=10000\n> >\n> > while [ \"$PAGES\" -ne 0 ]\n> > do\n> > \tBIGRAND=`expr \"$RANDOM\" \\* 32768 + \"$RANDOM\"`\n> >\n> > \tOFFSET=`awk 'BEGIN{printf \"%d\\n\", ('\"$BIGRAND\"' / 2^30) *\n> '\"$TOTAL\"'}'`\n> >\n> > \tRESULT=`awk 
'\tBEGIN\t{offset = 0}\n> > \t\toffset + $1 > '\"$OFFSET\"' \\\n> > \t\t\t{print $2, $1; exit}\n> > \t\t\t{offset += $1}' /tmp/$$`\n> > \tFILE=`echo \"$RESULT\" | awk '{print $1}'`\n> > \tFILEPAGES=`echo \"$RESULT\" | awk '{print $2}'`\n> >\n> > \tif [ \"$FILEPAGES\" -gt \"$PAGES\" ]\n> > \tthen\tFILEPAGES=\"$PAGES\"\n> > \tfi\n> >\n> > \tdd bs=\"$BLCKSZ\" count=\"$FILEPAGES\" if=\"$FILE\"\n> of=\"/dev/null\" >/dev/null 2>&1\n> > \tPAGES=`expr \"$PAGES\" - \"$FILEPAGES\"`\n> > done\n> >\n> > STOP=`date '+%s'`\n> > SEQTIME=`expr \"$STOP\" - \"$START\"`\n> >\n> > echo\n> > awk 'BEGIN\t{printf \"random_page_cost = %f\\n\", ('\"$RANDTIME\"' /\n> '\"$SEQTIME\"') * 10}'\n>\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square,\n> Pennsylvania 19073\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Mon, 9 Sep 2002 15:20:42 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute random page cost"
},
{
"msg_contents": "> OK, turns out that the loop for sequential scan ran fewer times and was\n> skewing the numbers. I have a new version at:\n> \n> \tftp://candle.pha.pa.us/pub/postgresql/randcost\n> \n> I get _much_ lower numbers now for random_page_cost.\n\nI got:\n\nrandom_page_cost = 1.047619\n\nLinux kernel 2.4.18\nPentium III 750MHz\nMemory 256MB\nIDE HDD\n\n(A notebook/SONY VAIO PCG-Z505CR/K)\n--\nTatsuo Ishii\n",
"msg_date": "Mon, 09 Sep 2002 16:57:53 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute random page cost"
},
{
"msg_contents": "On Mon, 9 Sep 2002, Bruce Momjian wrote:\n\n> What do other people get for this value?\n\nWith your new script, with a 1.5 GHz Athlon, 512 MB RAM, and a nice fast\nIBM 7200 RPM IDE disk, I get random_page_cost = 0.933333.\n\n> One flaw in this test is that it randomly reads blocks from different\n> files rather than randomly reading from the same file. Do people have a\n> suggestion on how to correct this? Does it matter?\n\n From my quick glance, it also does a lot of work work to read each\nblock, including forking off serveral other programs. This would tend to\npush up the cost of a random read. You might want to look at modifying\nthe randread program (http://randread.sourceforge.net) to do what you\nwant....\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Mon, 9 Sep 2002 19:52:46 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute random page cost"
},
{
"msg_contents": "On Mon, 2002-09-09 at 07:13, Bruce Momjian wrote:\n> \n> OK, turns out that the loop for sequential scan ran fewer times and was\n> skewing the numbers. I have a new version at:\n> \n> \tftp://candle.pha.pa.us/pub/postgresql/randcost\n> \n> I get _much_ lower numbers now for random_page_cost.\n> \n> ---------------------------------------------------------------------------\n\nFive successive runs:\n\nrandom_page_cost = 0.947368\nrandom_page_cost = 0.894737\nrandom_page_cost = 0.947368\nrandom_page_cost = 0.894737\nrandom_page_cost = 0.894737\n\n\nlinux 2.4.18 SMP\ndual Athlon MP 1900+\n512Mb RAM\nSCSI\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Submit yourselves therefore to God. Resist the devil, \n and he will flee from you.\" James 4:7 \n\n",
"msg_date": "09 Sep 2002 12:52:38 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute random page cost"
},
{
"msg_contents": "On Mon, 2002-09-09 at 02:13, Bruce Momjian wrote:\n> \n> OK, turns out that the loop for sequential scan ran fewer times and was\n> skewing the numbers. I have a new version at:\n> \n> \tftp://candle.pha.pa.us/pub/postgresql/randcost\n> \n> I get _much_ lower numbers now for random_page_cost.\n\nThe current script pulls way more data for Sequential scan than random\nscan now.\n\nRandom is pulling a single page (count=1 for dd) with every loop. \nSequential does the same number of loops, but pulls count > 1 in each.\n\nIn effect, sequential is random with more data load -- which explains\nall of the 0.9's.\n\n\n Rod Taylor\n\n",
"msg_date": "09 Sep 2002 08:00:45 -0400",
"msg_from": "Rod Taylor <rbt@rbt.ca>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute random page cost"
},
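For reference, the final figure the script prints is a per-page cost ratio: 1000 pages are read at random offsets but 10000 sequentially, so dividing the two wall-clock times and multiplying by 10 normalizes both to cost per page. A sketch with made-up example timings (the times are hypothetical, not measured):

```shell
# Hypothetical wall-clock results, for illustration only:
RANDTIME=30    # seconds to read 1000 pages at random offsets
SEQTIME=25     # seconds to read 10000 pages sequentially
# cost-per-page ratio: (RANDTIME/1000) / (SEQTIME/10000)
#                    = (RANDTIME/SEQTIME) * 10
awk 'BEGIN{printf "random_page_cost = %f\n", ('"$RANDTIME"' / '"$SEQTIME"') * 10}'
```

Values below 1.0, as several posters report, therefore mean a random page appeared *cheaper* than a sequential one — the anomaly Nick and Ray flag later in the thread.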
{
"msg_contents": "Bruce-\n\nWith the change in the script that I mentioned to you off-list (which I\nbelieve just pointed it at our \"real world\" data), I got the following\nresults with 6 successive runs on each of our two development platforms:\n\n(We're running PGSQL 7.2.1 on Debian Linux 2.4)\n\nSystem 1:\n1.2 GHz Athlon Processor, 512MB RAM, Database on IDE hard drive\nrandom_page_cost = 0.857143\nrandom_page_cost = 0.809524\nrandom_page_cost = 0.809524\nrandom_page_cost = 0.809524\nrandom_page_cost = 0.857143\nrandom_page_cost = 0.884615\n\nSystem 2:\nDual 1.2Ghz Athlon MP Processors, SMP enabled, 1 GB RAM, Database on Ultra\nSCSI RAID 5 with Hardware controller.\nrandom_page_cost = 0.894737\nrandom_page_cost = 0.842105\nrandom_page_cost = 0.894737\nrandom_page_cost = 0.894737\nrandom_page_cost = 0.842105\nrandom_page_cost = 0.894737\n\n\nI was surprised that the SCSI RAID drive is generally slower than IDE, but\nthe values are in line with the results that others have been getting.\n\n-Nick\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> Sent: Monday, September 09, 2002 1:14 AM\n> To: PostgreSQL-development\n> Subject: Re: [HACKERS] Script to compute random page cost\n>\n>\n>\n> OK, turns out that the loop for sequential scan ran fewer times and was\n> skewing the numbers. I have a new version at:\n>\n> \tftp://candle.pha.pa.us/pub/postgresql/randcost\n>\n> I get _much_ lower numbers now for random_page_cost.\n\n",
"msg_date": "Mon, 9 Sep 2002 11:25:08 -0500",
"msg_from": "\"Nick Fankhauser\" <nickf@ontko.com>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute random page cost"
},
{
"msg_contents": "I'm getting an infinite wait on that file, could someone post it to the \nlist please?\n\n\n\nOn Mon, 9 Sep 2002, Bruce Momjian wrote:\n\n> \n> OK, turns out that the loop for sequential scan ran fewer times and was\n> skewing the numbers. I have a new version at:\n> \n> \tftp://candle.pha.pa.us/pub/postgresql/randcost\n> \n> I get _much_ lower numbers now for random_page_cost.\n\n",
"msg_date": "Mon, 9 Sep 2002 11:46:00 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute random page cost"
},
{
"msg_contents": "Hi again-\n\nI bounced these numbers off of Ray Ontko here at our shop, and he pointed\nout that random page cost is measured in multiples of a sequential page\nfetch. It seems almost impossible that a random fetch would be less\nexpensive than a sequential fetch, yet we all seem to be getting results <\n1. I can't see anything obviously wrong with the script, but something very\nodd is going.\n\n-Nick\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Nick Fankhauser\n> Sent: Monday, September 09, 2002 11:25 AM\n> To: Bruce Momjian; PostgreSQL-development\n> Cc: Ray Ontko\n> Subject: Re: [HACKERS] Script to compute random page cost\n>\n>\n> Bruce-\n>\n> With the change in the script that I mentioned to you off-list (which I\n> believe just pointed it at our \"real world\" data), I got the following\n> results with 6 successive runs on each of our two development platforms:\n>\n> (We're running PGSQL 7.2.1 on Debian Linux 2.4)\n>\n> System 1:\n> 1.2 GHz Athlon Processor, 512MB RAM, Database on IDE hard drive\n> random_page_cost = 0.857143\n> random_page_cost = 0.809524\n> random_page_cost = 0.809524\n> random_page_cost = 0.809524\n> random_page_cost = 0.857143\n> random_page_cost = 0.884615\n>\n> System 2:\n> Dual 1.2Ghz Athlon MP Processors, SMP enabled, 1 GB RAM, Database on Ultra\n> SCSI RAID 5 with Hardware controller.\n> random_page_cost = 0.894737\n> random_page_cost = 0.842105\n> random_page_cost = 0.894737\n> random_page_cost = 0.894737\n> random_page_cost = 0.842105\n> random_page_cost = 0.894737\n>\n>\n> I was surprised that the SCSI RAID drive is generally slower than IDE, but\n> the values are in line with the results that others have been getting.\n>\n> -Nick\n>\n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> > Sent: Monday, September 09, 2002 
1:14 AM\n> > To: PostgreSQL-development\n> > Subject: Re: [HACKERS] Script to compute random page cost\n> >\n> >\n> >\n> > OK, turns out that the loop for sequential scan ran fewer times and was\n> > skewing the numbers. I have a new version at:\n> >\n> > \tftp://candle.pha.pa.us/pub/postgresql/randcost\n> >\n> > I get _much_ lower numbers now for random_page_cost.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Mon, 9 Sep 2002 13:22:55 -0500",
"msg_from": "\"Nick Fankhauser\" <nickf@ontko.com>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute random page cost"
},
{
"msg_contents": "\"Nick Fankhauser\" <nickf@ontko.com> writes:\n> I bounced these numbers off of Ray Ontko here at our shop, and he pointed\n> out that random page cost is measured in multiples of a sequential page\n> fetch. It seems almost impossible that a random fetch would be less\n> expensive than a sequential fetch, yet we all seem to be getting results <\n> 1. I can't see anything obviously wrong with the script, but something very\n> odd is going.\n\nThe big problem with the script is that it involves an invocation of\n\"dd\" - hence, at least one process fork --- for every page read\noperation. The seqscan part of the test is even worse, as it adds a\ntest(1) call and a shell if/then/else to the overhead. My guess is that\nwe are measuring script overhead here, and not the desired I/O quantities\nat all --- the script overhead is completely swamping the latter. The\napparent stability of the results across a number of different platforms\nbolsters that thought.\n\nSomeone else opined that the script was also not comparing equal\nnumbers of pages read for the random and sequential cases. I haven't\ntried to decipher the logic enough to see if that allegation is true,\nbut it's not obviously false.\n\nFinally, I wouldn't believe the results for a moment if they were taken\nagainst databases that are not several times the size of physical RAM\non the test machine, with a total I/O volume also much more than\nphysical RAM. We are trying to measure the behavior when kernel\ncaching is not helpful; if the database fits in RAM then you are just\nnaturally going to get random_page_cost close to 1, because the kernel\nwill avoid doing any I/O at all.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 Sep 2002 17:09:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute random page cost "
},
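Tom's fork-overhead diagnosis is easy to check directly: reading N pages via N separate dd invocations pays N fork/exec costs, while one dd call pays one. A rough sketch — timings are machine-dependent, and /dev/zero is used so there is no disk I/O at all and any gap is pure process-startup cost:

```shell
# Compare N single-page dd calls against one N-page dd call.
BLCKSZ=8192
N=200
START=$(date '+%s')
i=0
while [ "$i" -lt "$N" ]; do           # N forks, one page each
    dd bs="$BLCKSZ" count=1 if=/dev/zero of=/dev/null 2>/dev/null
    i=$((i + 1))
done
MANY=$(( $(date '+%s') - START ))
START=$(date '+%s')
dd bs="$BLCKSZ" count="$N" if=/dev/zero of=/dev/null 2>/dev/null   # one fork
ONE=$(( $(date '+%s') - START ))
echo "$N dd calls: ${MANY}s; one dd call: ${ONE}s"
```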
{
"msg_contents": "Nick Fankhauser wrote:\n> Hi again-\n> \n> I bounced these numbers off of Ray Ontko here at our shop, and he pointed\n> out that random page cost is measured in multiples of a sequential page\n> fetch. It seems almost impossible that a random fetch would be less\n> expensive than a sequential fetch, yet we all seem to be getting results <\n> 1. I can't see anything obviously wrong with the script, but something very\n> odd is going.\n\nOK, new version at:\n\n\tftp://candle.pha.pa.us/pub/postgresql/randcost\n\nWhat I have done is to take all of the computation stuff out of the\ntimed loop so only the 'dd' is done in the loop.\n\nI am getting a 1.0 for random pages cost with this new code, but I don't\nhave much data in the database so it is very possible I have it all\ncached. Would others please test it?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 9 Sep 2002 21:24:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Script to compute random page cost"
},
{
"msg_contents": "On Mon, 9 Sep 2002, Tom Lane wrote:\n\n> Finally, I wouldn't believe the results for a moment if they were taken\n> against databases that are not several times the size of physical RAM\n> on the test machine, with a total I/O volume also much more than\n> physical RAM. We are trying to measure the behavior when kernel\n> caching is not helpful; if the database fits in RAM then you are just\n> naturally going to get random_page_cost close to 1, because the kernel\n> will avoid doing any I/O at all.\n\nUm...yeah; another reason to use randread against a raw disk device.\n(A little hard to use on linux systems, I bet, but works fine on\nBSD systems.)\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Tue, 10 Sep 2002 11:54:07 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute random page cost "
},
{
"msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> On Mon, 9 Sep 2002, Tom Lane wrote:\n>> ... We are trying to measure the behavior when kernel\n>> caching is not helpful; if the database fits in RAM then you are just\n>> naturally going to get random_page_cost close to 1, because the kernel\n>> will avoid doing any I/O at all.\n\n> Um...yeah; another reason to use randread against a raw disk device.\n> (A little hard to use on linux systems, I bet, but works fine on\n> BSD systems.)\n\nUmm... not really; surely randread wouldn't know anything about\nread-ahead logic?\n\nThe reason this is a difficult topic is that we are trying to measure\ncertain kernel behaviors --- namely readahead for sequential reads ---\nand not others --- namely caching, because we have other parameters\nof the cost models that purport to deal with that.\n\nMebbe this is an impossible task and we need to restructure the cost\nmodels from the ground up. But I'm not convinced of that. The fact\nthat a one-page shell script can't measure the desired quantity doesn't\nmean we can't measure it with more effort.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 Sep 2002 23:43:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute random page cost "
},
{
"msg_contents": "On Mon, 9 Sep 2002, Tom Lane wrote:\n\n> Curt Sampson <cjs@cynic.net> writes:\n> > On Mon, 9 Sep 2002, Tom Lane wrote:\n> >> ... We are trying to measure the behavior when kernel\n> >> caching is not helpful; if the database fits in RAM then you are just\n> >> naturally going to get random_page_cost close to 1, because the kernel\n> >> will avoid doing any I/O at all.\n>\n> > Um...yeah; another reason to use randread against a raw disk device.\n> > (A little hard to use on linux systems, I bet, but works fine on\n> > BSD systems.)\n>\n> Umm... not really; surely randread wouldn't know anything about\n> read-ahead logic?\n\nRandread doesn't know anything about read-ahead logic, but I don't\nsee how that matters one way or the other. The chances of it reading\nblocks sequentially are pretty much infinitesimal if you're reading\nacross a reasonably large area of disk (I recommend at least 4GB),\nso readahead will never be triggered.\n\n> The reason this is a difficult topic is that we are trying to measure\n> certain kernel behaviors --- namely readahead for sequential reads ---\n> and not others --- namely caching, because we have other parameters\n> of the cost models that purport to deal with that.\n\nWell, for the sequential reads, the readahead should be triggered\neven when reading from a raw device. So just use dd to measure\nthat. If you want to slightly more accurately model postgres'\nbehaviour, you probably want to pick a random area of the disk,\nread a gigabyte, switch areas, read another gigabyte, and so on.\nThis will model the \"split into 1GB\" files thing.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Tue, 10 Sep 2002 13:19:46 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute random page cost "
},
{
"msg_contents": "OK, I have a better version at:\n\n\tftp://candle.pha.pa.us/pub/postgresql/randcost\n\nI have added a null loop which does a dd on a single file without\nreading any data, and by netting that loop out of the total computation\nand increasing the number of tests, I have gotten the following results\nfor three runs:\n\t\n\trandom test: 36\n\tsequential test: 33\n\tnull timing test: 27\n\t\n\trandom_page_cost = 1.500000\n\n\t\n\trandom test: 38\n\tsequential test: 32\n\tnull timing test: 27\n\t\n\trandom_page_cost = 2.200000\n\t\n\n\trandom test: 40\n\tsequential test: 31\n\tnull timing test: 27\n\t\n\trandom_page_cost = 3.250000\n\nInteresting that random time is increasing, while the others were\nstable. I think this may have to do with other system activity at the\ntime of the test. I will run it some more tomorrow but clearly we are\nseeing reasonable numbers now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 10 Sep 2002 02:01:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Script to compute random page cost"
},
{
"msg_contents": "I got somewhat different:\n\n$ ./randcost /usr/local/pgsql/data\nCollecting sizing information ...\nRunning random access timing test ...\nRunning sequential access timing test ...\nRunning null loop timing test ...\nrandom test: 13\nsequential test: 15\nnull timing test: 11\n\nrandom_page_cost = 0.500000\n\nChris\n\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> Sent: Tuesday, 10 September 2002 2:02 PM\n> To: Curt Sampson\n> Cc: Tom Lane; nickf@ontko.com; PostgreSQL-development; Ray Ontko\n> Subject: Re: [HACKERS] Script to compute random page cost\n> \n> \n> OK, I have a better version at:\n> \n> \tftp://candle.pha.pa.us/pub/postgresql/randcost\n> \n> I have added a null loop which does a dd on a single file without\n> reading any data, and by netting that loop out of the total computation\n> and increasing the number of tests, I have gotten the following results\n> for three runs:\n> \t\n> \trandom test: 36\n> \tsequential test: 33\n> \tnull timing test: 27\n> \t\n> \trandom_page_cost = 1.500000\n> \n> \t\n> \trandom test: 38\n> \tsequential test: 32\n> \tnull timing test: 27\n> \t\n> \trandom_page_cost = 2.200000\n> \t\n> \n> \trandom test: 40\n> \tsequential test: 31\n> \tnull timing test: 27\n> \t\n> \trandom_page_cost = 3.250000\n> \n> Interesting that random time is increasing, while the others were\n> stable. I think this may have to do with other system activity at the\n> time of the test. I will run it some more tomorrow but clearly we are\n> seeing reasonable numbers now.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, \n> Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n",
"msg_date": "Tue, 10 Sep 2002 14:23:29 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute random page cost"
},
{
"msg_contents": "On Tue, 10 Sep 2002, Bruce Momjian wrote:\n\n> Interesting that random time is increasing, while the others were\n> stable. I think this may have to do with other system activity at the\n> time of the test.\n\nActually, the random versus sequential time may also be different\ndepending on how many processes are competing for disk access, as\nwell. If the OS isn't maintaining readahead for whatever reason,\nsequential access could, in theory, degrade to being the same speed\nas random access. It might be interesting to test this, too.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Tue, 10 Sep 2002 15:24:21 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute random page cost"
},
{
"msg_contents": "On Mon, 2002-09-09 at 07:13, Bruce Momjian wrote:\n> \n> OK, turns out that the loop for sequential scan ran fewer times and was\n> skewing the numbers. I have a new version at:\n> \n> \tftp://candle.pha.pa.us/pub/postgresql/randcost\n\nLatest version:\n\nolly@linda$ \nrandom test: 14\nsequential test: 11\nnull timing test: 9\nrandom_page_cost = 2.500000\n\nolly@linda$ for a in 1 2 3 4 5\n> do\n> ~/randcost\n> done\nCollecting sizing information ...\nrandom test: 11\nsequential test: 11\nnull timing test: 9\nrandom_page_cost = 1.000000\n\nrandom test: 11\nsequential test: 10\nnull timing test: 9\nrandom_page_cost = 2.000000\n\nrandom test: 11\nsequential test: 11\nnull timing test: 9\nrandom_page_cost = 1.000000\n\nrandom test: 11\nsequential test: 10\nnull timing test: 9\nrandom_page_cost = 2.000000\n\nrandom test: 10\nsequential test: 10\nnull timing test: 10\nSequential time equals null time. Increase TESTCYCLES and rerun.\n\n\nAvailable memory (512M) exceeds the total database size, so sequential\nand random are almost the same for the second and subsequent runs.\n \nSince, in production, I would hope to have all active tables permanently\nin RAM, would there be a case for my using a page cost of 1 on the\nassumption that no disk reads would be needed?\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Draw near to God and he will draw near to you. \n Cleanse your hands, you sinners; and purify your \n hearts, you double minded.\" James 4:8 \n\n",
"msg_date": "10 Sep 2002 11:47:53 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute random page cost"
},
{
"msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> Well, for the sequential reads, the readahead should be trigerred\n> even when reading from a raw device.\n\nThat strikes me as an unportable assumption.\n\nEven if true, we can't provide a test mechanism that requires root\naccess to run it --- raw-device testing is out of the question just on\nthat basis, never mind that it is not measuring what we want to measure.\n\nPerhaps it's time to remind people that what we want to measure\nis the performance seen by a C program issuing write() and read()\ncommands, transferring 8K at a time, on a regular Unix filesystem.\nA shell script invoking dd is by definition going to see a very\ndifferent performance ratio, even if what dd does under the hood\nis 8K read() and write() (another not-well-supported assumption,\nIMHO). If you try to \"improve\" the results by using a raw device,\nyou're merely moving even further away from the scenario of interest.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Sep 2002 10:00:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute random page cost "
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I will run it some more tomorrow but clearly we are\n> seeing reasonable numbers now.\n\n... which still have no provable relationship to the ratio we need to\nmeasure. See my previous comments to Curt; I don't think you can\npossibly get trustworthy results out of a shell script + dd approach,\nbecause we do not implement Postgres using dd.\n\nIf you implemented a C testbed and then proved by experiment that the\nshell script got comparable numbers, then I'd believe its results.\nWithout that confirmation, these are just meaningless numbers.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Sep 2002 10:09:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute random page cost "
},
{
"msg_contents": "\nOK, what you are seeing here is that for your platform the TESTCYCLES\nsize isn't enough; the numbers are too close to measure the difference.\n\nI am going to increase the TESTCYCLES from 5k to 10k. That should\nprovide better numbers.\n\n---------------------------------------------------------------------------\n\nOliver Elphick wrote:\n> On Mon, 2002-09-09 at 07:13, Bruce Momjian wrote:\n> > \n> > OK, turns out that the loop for sequential scan ran fewer times and was\n> > skewing the numbers. I have a new version at:\n> > \n> > \tftp://candle.pha.pa.us/pub/postgresql/randcost\n> \n> Latest version:\n> \n> olly@linda$ \n> random test: 14\n> sequential test: 11\n> null timing test: 9\n> random_page_cost = 2.500000\n> \n> olly@linda$ for a in 1 2 3 4 5\n> > do\n> > ~/randcost\n> > done\n> Collecting sizing information ...\n> random test: 11\n> sequential test: 11\n> null timing test: 9\n> random_page_cost = 1.000000\n> \n> random test: 11\n> sequential test: 10\n> null timing test: 9\n> random_page_cost = 2.000000\n> \n> random test: 11\n> sequential test: 11\n> null timing test: 9\n> random_page_cost = 1.000000\n> \n> random test: 11\n> sequential test: 10\n> null timing test: 9\n> random_page_cost = 2.000000\n> \n> random test: 10\n> sequential test: 10\n> null timing test: 10\n> Sequential time equals null time. Increase TESTCYCLES and rerun.\n> \n> \n> Available memory (512M) exceeds the total database size, so sequential\n> and random are almost the same for the second and subsequent runs.\n> \n> Since, in production, I would hope to have all active tables permanently\n> in RAM, would there be a case for my using a page cost of 1 on the\n> assumption that no disk reads would be needed?\n> \n> -- \n> Oliver Elphick Oliver.Elphick@lfix.co.uk\n> Isle of Wight, UK \n> http://www.lfix.co.uk/oliver\n> GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n> ========================================\n> \"Draw near to God and he will draw near to you. \n> Cleanse your hands, you sinners; and purify your \n> hearts, you double minded.\" James 4:8 \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 10 Sep 2002 11:27:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Script to compute random page cost"
},
{
"msg_contents": "Oliver Elphick wrote:\n> Available memory (512M) exceeds the total database size, so sequential\n> and random are almost the same for the second and subsequent runs.\n> \n> Since, in production, I would hope to have all active tables permanently\n> in RAM, would there be a case for my using a page cost of 1 on the\n> assumption that no disk reads would be needed?\n\nYes, in your case random_page_cost would be 1 once the data gets into\nRAM.\n\nIn fact, that is the reason I used only /data/base for testing so places\nwhere data can load into ram will see lower random pages costs.\n\nI could just create a random file and test on that but it isn't the\nsame.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 10 Sep 2002 11:28:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Script to compute random page cost"
},
{
"msg_contents": "On Tue, 10 Sep 2002, Tom Lane wrote:\n\n> Curt Sampson <cjs@cynic.net> writes:\n> > Well, for the sequential reads, the readahead should be trigerred\n> > even when reading from a raw device.\n>\n> That strikes me as an unportable assumption.\n\nNot only unportable: but false. :-) NetBSD, at least, does read-ahead\nonly through the buffer cache. Thinking about it, you *can't* do\nread-ahead on a raw device, because you're not buffering. Doh!\n\n> Perhaps it's time to remind people that what we want to measure\n> is the performance seen by a C program issuing write() and read()\n> commands, transferring 8K at a time, on a regular Unix filesystem.\n\nRight. Which is what randread does, if you give it a file rather\nthan a raw device. I'm actually just now working on some modifications\nfor it that will let you work against a bunch of files, rather than\njust one, so it will very accurately emulate a postgres random read\nof blocks from a table.\n\nThere are two other tricky things related to the behaviour, however:\n\n1. The buffer cache. You really need to be working against your\nentire database, not just a few gigabytes of its data, or sample\ndata.\n\n2. Multiple users. You really want a mix of simultaneous accesses\ngoing on, with as many processes as you normally have users querying\nthe database.\n\nThese can probably both be taken care of with shell scripts, though.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Wed, 11 Sep 2002 10:51:41 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute random page cost "
},
{
"msg_contents": "Tom Lane wrote:\n> Perhaps it's time to remind people that what we want to measure\n> is the performance seen by a C program issuing write() and read()\n >commands, transferring 8K at a time, on a regular Unix filesystem\n\nYes...and at the risk of being accused of marketing ;-) , that is \nexactly what the 3 programs in my archive do (see previous post for url) :\n\n- one called 'write' creates a suitably sized data file (8k at a time - \nconfigurable), using the write() call\n- another called 'read' does sequential reads (8k at a time - \nconfigurable), using the read() call\n- finally one called 'seek' does random reads (8k chunks - \nconfigurable), using the lseek() and read() calls\n\nI tried to use code as similar as possible to how Postgres does its \nio....so the results *should* be meaningful !\nLarge file support in enabled too (as you need to use a file several \ntimes bigger than your RAM - and everyone seems to have >1G of it these \ndays...)\n\nI think the code is reasonably readable too....\nIts been *tested* on Linux, Freebsd, Solaris, MacosX.\n\n\nThe only downer is that they don't automatically compute \nrandom_page_cost for you..(I was more interested in the raw sequential \nread, write and random read rates at the time). However it would be a \nfairly simple modification to combine the all 3 programs into one \nexecutable that outputs random_page_cost...\n\nregards\n\nMark\n\n\n\n\n\n\n",
"msg_date": "Wed, 11 Sep 2002 17:47:25 +1200",
"msg_from": "Mark Kirkwood <markir@slingshot.co.nz>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute random page cost"
},
{
"msg_contents": "On Wed, 11 Sep 2002, Mark Kirkwood wrote:\n\n> Yes...and at the risk of being accused of marketing ;-) , that is\n> exactly what the 3 programs in my archive do (see previous post for url) :\n\nHm, it appears we've both been working on something similar. However,\nI've just released version 0.2 of randread, which has the following\nfeatures:\n\n Written in C, uses read(2) and write(2), pretty much like postgres.\n\n Reads or writes random blocks from a specified list of files,\n treated as a contiguous range of blocks, again like postgres. This\n allows you to do random reads from the actual postgres data files\n for a table, if you like.\n\n You can specify the block size to use, and the number of reads to do.\n\n Allows you to specify how many blocks you want to read before you\n start reading again at a new random location. (The default is 1.)\n This allows you to model various sequential and random read mixes.\n\nIf you want to do writes, I suggest you create your own set of files to\nwrite, rather than destroying postgresql data. This can easily be done\nwith something like this Bourne shell script:\n\n for i in 1 2 3 4; do\n\tdd if=/dev/zero of=file.$i bs=1m count=1024\n done\n\nHowever, it doesn't calculate the random vs. sequential ratio for you;\nyou've got to do that for yourself. E.g.,:\n\n$ ./randread -l 512 -c 256 /u/cjs/z?\n256 reads of 512 x 8.00 KB blocks (4096.00 KB)\n totalling 131072 blocks (1024.00 MB)\n from 524288 blocks (4092.00 MB) in 4 files.\n256 reads in 36.101119 sec. (141019 usec/read, 7 reads/sec, 29045.53 KB/sec)\n\n$ ./randread -c 4096 /u/cjs/z?\n4096 reads of 1 x 8.00 KB blocks (8.00 KB)\n totalling 4096 blocks (32.00 MB)\n from 524288 blocks (4095.99 MB) in 4 files.\n4096 reads in 34.274582 sec. (8367 usec/read, 120 reads/sec, 956.04 KB/sec)\n\nIn this case, across 4 GB in 4 files on my 512 MB, 1.5 GHz Athlon\nwith an IBM 7200 RPM IDE drive, I read about 30 times faster doing\na full sequential read of the files than I do reading 32 MB randomly\nfrom it. But because of the size of this, there's basically no\nbuffer cache involved. If I do this on a single 512 MB file:\n\n$ ./randread -c 4096 /u/cjs/z1:0-65536\n4096 reads of 1 x 8.00 KB blocks (8.00 KB)\n totalling 4096 blocks (32.00 MB)\n from 65536 blocks (511.99 MB) in 1 files.\n4096 reads in 28.064573 sec. (6851 usec/read, 146 reads/sec, 1167.59 KB/sec)\n\n$ ./randread -l 65535 -c 1 /u/cjs/z1:0-65536\n1 reads of 65535 x 8.00 KB blocks (524280.00 KB)\n totalling 65535 blocks (511.99 MB)\n from 65536 blocks (0.01 MB) in 1 files.\n1 reads in 17.107867 sec. (17107867 usec/read, 0 reads/sec, 30645.55 KB/sec)\n\n$ ./randread -c 4096 /u/cjs/z1:0-65536\n4096 reads of 1 x 8.00 KB blocks (8.00 KB)\n totalling 4096 blocks (32.00 MB)\n from 65536 blocks (511.99 MB) in 1 files.\n4096 reads in 19.413738 sec. (4739 usec/read, 215 reads/sec, 1687.88 KB/sec)\n\nWell, there you see some of the buffer cache effect from starting\nwith about half the file in memory. If you want to see serious buffer\ncache action, just use the first 128 MB of my first test file:\n\n$ ./randread -c 4096 /u/cjs/z1:0-16536\n4096 reads of 1 x 8.00 KB blocks (8.00 KB)\n totalling 4096 blocks (32.00 MB)\n from 16536 blocks (129.18 MB) in 1 files.\n4096 reads in 20.220791 sec. (4936 usec/read, 204 reads/sec, 1620.51 KB/sec)\n\n$ ./randread -l 16535 -c 1 /u/cjs/z1:0-16536\n1 reads of 16535 x 8.00 KB blocks (132280.00 KB)\n totalling 16535 blocks (129.18 MB)\n from 16536 blocks (0.01 MB) in 1 files.\n1 reads in 3.469231 sec. (3469231 usec/read, 0 reads/sec, 38129.49 KB/sec)\n\n$ ./randread -l 16535 -c 64 /u/cjs/z1:0-16536\n64 reads of 16535 x 8.00 KB blocks (132280.00 KB)\n totalling 1058240 blocks (8267.50 MB)\n from 16536 blocks (0.01 MB) in 1 files.\n64 reads in 23.643026 sec. (369422 usec/read, 2 reads/sec, 358072.59 KB/sec)\n\nFor those last three, we're basically limited completely by the\nCPU, as there's not much disk I/O going on at all. The many-block\none is going to be slower because it's got to generate a lot more\nrandom numbers and do a lot more lseek operations.\n\nAnyway, looking at the real difference between truly sequential\nand truly random reads on a large amount of data file (30:1 or so),\nit looks to me that people getting much less than that are getting\ngood work out of their buffer cache. You've got to wonder if there's\nsome way to auto-tune for this sort of thing....\n\nAnyway, feel free to download and play. If you want to work on the\nprogram, I'm happy to give developer access on sourceforge.\n\n http://sourceforge.net/project/showfiles.php?group_id=55994\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Wed, 11 Sep 2002 16:18:14 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute random page cost"
},
{
"msg_contents": "AMD Athlon 500\n512MB Ram\nIBM 120GB IDE\n\nTested with:\nBLCKSZ=8192\nTESTCYCLES=500000\n\nResult:\nCollecting sizing information ...\nRunning random access timing test ...\nRunning sequential access timing test ...\nRunning null loop timing test ...\nrandom test: 2541\nsequential test: 2455\nnull timing test: 2389\n\nrandom_page_cost = 2.303030\n\n Hans\n\n\n-- \n*Cybertec Geschwinde u Schoenig*\nLudo-Hartmannplatz 1/14, A-1160 Vienna, Austria\nTel: +43/1/913 68 09; +43/664/233 90 75\nwww.postgresql.at <http://www.postgresql.at>, cluster.postgresql.at \n<http://cluster.postgresql.at>, www.cybertec.at \n<http://www.cybertec.at>, kernel.cybertec.at <http://kernel.cybertec.at>\n\n",
"msg_date": "Wed, 11 Sep 2002 09:55:48 +0200",
"msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <postgres@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute random page cost"
},
{
"msg_contents": "Curt Sampson wrote:\n> On Wed, 11 Sep 2002, Mark Kirkwood wrote:\n> \n> \n> \n> Hm, it appears we've both been working on something similar. However,\n> I've just released version 0.2 of randread, which has the following\n> features:\n> \n\n\nfunny how often that happens...( I think its often worth the effort to \nwrite your own benchmarking / measurement tool in order to gain an good \nunderstanding of what you intend to measure)\n\n >Anyway, feel free to download and play. If you want to work on the\n >program, I'm happy to give developer access on sourceforge.\n >\n > http://sourceforge.net/project/showfiles.php?group_id=55994\n\nI'll take a look.\n\n\nbest wishes\n\nMark\n\n",
"msg_date": "Wed, 11 Sep 2002 20:04:48 +1200",
"msg_from": "Mark Kirkwood <markir@slingshot.co.nz>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute random page cost"
},
{
"msg_contents": "\nHi!\n\nOn Tue, 10 Sep 2002 14:01:11 +0000 (UTC)\n tgl@sss.pgh.pa.us (Tom Lane) wrote:\n[...]\n> Perhaps it's time to remind people that what we want to measure\n> is the performance seen by a C program issuing write() and read()\n> commands, transferring 8K at a time, on a regular Unix filesystem.\n[...]\n\nI've written something like that.\nIt is not C but might be useful.\nAny comments are welcome.\n\nhttp://www.a-nugget.org/downloads/randread.py\n\nBye\n Guido\n",
"msg_date": "12 Sep 2002 01:47:59 +0200",
"msg_from": "Guido Goldstein <news@a-nugget.de>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute random page cost"
},
{
"msg_contents": "Hi all,\n\nAs an end result of all this, do we now have a decent utility by which\nend user admin's can run it against the same disk/array that their\nPostgreSQL installation is on, and get a reasonably accurate number for\nrandom page cost?\n\nie:\n\n$ ./get_calc_cost\nTry using random_page_cost = foo\n\n$\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Sat, 12 Oct 2002 22:28:02 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute random page cost"
},
{
"msg_contents": "Justin Clift wrote:\n> Hi all,\n> \n> As an end result of all this, do we now have a decent utility by which\n> end user admin's can run it against the same disk/array that their\n> PostgreSQL installation is on, and get a reasonably accurate number for\n> random page cost?\n> \n> ie:\n> \n> $ ./get_calc_cost\n> Try using random_page_cost = foo\n> \n> $\n> \n> :-)\n\nRight now we only have my script:\n\n\tftp://candle.pha.pa.us/pub/postgresql/randcost\n\nIt uses dd so it forks for every loop and shows a value for my machine\naround 2.5. I need to code the loop in C to get more accurate numbers.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 12 Oct 2002 10:45:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Script to compute random page cost"
}
] |
[
{
"msg_contents": "\nDell Inspiron 8100 laptop, 1.2GHz Pentium, 512Mb RAM, Windows XP Pro\nCYGWIN_NT-5.1 PC9 1.3.10(0.51/3/2) 2002-02-25 11:14 i686 unknown\n\nrandom_page_cost = 0.924119\n\nRegards, Dave.\n\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us] \n> Sent: 09 September 2002 07:14\n> To: PostgreSQL-development\n> Subject: Re: [HACKERS] Script to compute random page cost\n> \n> \n> \n> OK, turns out that the loop for sequential scan ran fewer \n> times and was skewing the numbers. I have a new version at:\n> \n> \tftp://candle.pha.pa.us/pub/postgresql/randcost\n> \n> I get _much_ lower numbers now for random_page_cost.\n",
"msg_date": "Mon, 9 Sep 2002 08:37:06 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: Script to compute random page cost"
}
] |
[
{
"msg_contents": "\n Probably nothing important, but I saw it in\n src/backend/commands/prepare.c:\n\n 1/ ExecuteQuery() (line 110). Why is needful use copyObject()? The\n PostgreSQL executor modify query planns? I think copyObject() is\n expensive call.\n\n 2/ Lines 236 -- 245. Why do you \"check for pre-existing entry of\n same name\" if you create hash table? I think better is use \"else\"\n for this block of code and check it only if hash table already\n exist.\n \n 3/ Last is cosmetic: see line 404, what happen if memory context\n is not valid? :-) (maybe use some elog() call)\n\n Thanks\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Mon, 9 Sep 2002 10:03:18 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "PREPARE code notes"
},
{
"msg_contents": "Karel Zak <zakkr@zf.jcu.cz> writes:\n> 1/ ExecuteQuery() (line 110). Why is needful use copyObject()? The\n> PostgreSQL executor modify query planns?\n\nYes, and yes. Unfortunately.\n\n> 2/ Lines 236 -- 245. Why do you \"check for pre-existing entry of\n> same name\" if you create hash table? I think better is use \"else\"\n> for this block of code and check it only if hash table already\n> exist.\n\nThe code reads more cleanly as-is; changing it as you suggest would\ncreate an unnecessary interdependency between two logically distinct\nconcerns.\n \n> 3/ Last is cosmetic: see line 404, what happen if memory context\n> is not valid? :-) (maybe use some elog() call)\n\nOr just get rid of the MemoryContextIsValid test --- it shouldn't\never not be valid. Not very important though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 Sep 2002 11:51:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PREPARE code notes "
},
{
"msg_contents": "On Mon, Sep 09, 2002 at 11:51:08AM -0400, Tom Lane wrote:\n> Karel Zak <zakkr@zf.jcu.cz> writes:\n> > 1/ ExecuteQuery() (line 110). Why is needful use copyObject()? The\n> > PostgreSQL executor modify query planns?\n> \n> Yes, and yes. Unfortunately.\n\n Hmm, it's bad. Is there any way to \"fix\" executor? Maybe in far\n future we will save to cache all planns and copyObject() is not\n performance winning.\n\n> > 2/ Lines 236 -- 245. Why do you \"check for pre-existing entry of\n> > same name\" if you create hash table? I think better is use \"else\"\n> > for this block of code and check it only if hash table already\n> > exist.\n> \n> The code reads more cleanly as-is; changing it as you suggest would\n> create an unnecessary interdependency between two logically distinct\n> concerns.\n\n I don't believe :-)\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Tue, 10 Sep 2002 09:27:05 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "Re: PREPARE code notes"
},
{
"msg_contents": "Karel Zak <zakkr@zf.jcu.cz> writes:\n> On Mon, Sep 09, 2002 at 11:51:08AM -0400, Tom Lane wrote:\n>>> PostgreSQL executor modify query planns?\n>> \n>> Yes, and yes. Unfortunately.\n\n> Hmm, it's bad. Is there any way to \"fix\" executor?\n\nIt should be fixed IMHO ... but it'll be a major restructuring and\nit's difficult to justify spending the time ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Sep 2002 10:15:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PREPARE code notes "
}
] |
[
{
"msg_contents": "Linux RedHat 7.3 (ext3, kernel 2.4.18-3)\n512MB Ram\nAMD Athlon 500\nIBM 120GB IDE\n\n\n[hs@backup hs]$ ./randcost.sh /data/db/\nCollecting sizing information ...\nRunning random access timing test ...\nRunning sequential access timing test ...\n\nrandom_page_cost = 0.901961\n\n\n\n[hs@backup hs]$ ./randcost.sh /data/db/\nCollecting sizing information ...\nRunning random access timing test ...\nRunning sequential access timing test ...\n\nrandom_page_cost = 0.901961\n\n\nGreat script - it should be in contrib.\n\n Hans\n\n\n\n-- \n*Cybertec Geschwinde u Schoenig*\nLudo-Hartmannplatz 1/14, A-1160 Vienna, Austria\nTel: +43/1/913 68 09; +43/664/233 90 75\nwww.postgresql.at <http://www.postgresql.at>, cluster.postgresql.at \n<http://cluster.postgresql.at>, www.cybertec.at \n<http://www.cybertec.at>, kernel.cybertec.at <http://kernel.cybertec.at>\n\n",
"msg_date": "Mon, 09 Sep 2002 10:06:19 +0200",
"msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <postgres@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Script to compute randon page cost"
},
{
"msg_contents": "Assuming it's giving out correct information, there seems to be a lot of\nevidence for dropping the default random_page_cost to 1...\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Hans-J�rgen\n> Sch�nig\n> Sent: Monday, 9 September 2002 4:06 PM\n> To: pgsql-hackers\n> Subject: [HACKERS] Script to compute randon page cost\n>\n>\n> Linux RedHat 7.3 (ext3, kernel 2.4.18-3)\n> 512MB Ram\n> AMD Athlon 500\n> IBM 120GB IDE\n>\n>\n> [hs@backup hs]$ ./randcost.sh /data/db/\n> Collecting sizing information ...\n> Running random access timing test ...\n> Running sequential access timing test ...\n>\n> random_page_cost = 0.901961\n>\n>\n>\n> [hs@backup hs]$ ./randcost.sh /data/db/\n> Collecting sizing information ...\n> Running random access timing test ...\n> Running sequential access timing test ...\n>\n> random_page_cost = 0.901961\n>\n>\n> Great script - it should be in contrib.\n>\n> Hans\n>\n>\n>\n> --\n> *Cybertec Geschwinde u Schoenig*\n> Ludo-Hartmannplatz 1/14, A-1160 Vienna, Austria\n> Tel: +43/1/913 68 09; +43/664/233 90 75\n> www.postgresql.at <http://www.postgresql.at>, cluster.postgresql.at\n> <http://cluster.postgresql.at>, www.cybertec.at\n> <http://www.cybertec.at>, kernel.cybertec.at <http://kernel.cybertec.at>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Mon, 9 Sep 2002 16:08:37 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute randon page cost"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n\n>Assuming it's giving out correct information, there seems to be a lot of\n>evidence for dropping the default random_page_cost to 1...\n>\n>Chris\n> \n>\nSome time ago Joe Conway suggest a tool based on a genetic algorithm \nwhich tries to find the best parameter settings.\nAs input the user could use a set of SQL statements. The algorithm will \ntry to find those settings which lead to the lowest execution time based \non the set of SQL.\n\nWhat about something like that?\nThis way people could tune the database theirselves.\n\n Hans\n\n\n-- \n*Cybertec Geschwinde u Schoenig*\nLudo-Hartmannplatz 1/14, A-1160 Vienna, Austria\nTel: +43/1/913 68 09; +43/664/233 90 75\nwww.postgresql.at <http://www.postgresql.at>, cluster.postgresql.at \n<http://cluster.postgresql.at>, www.cybertec.at \n<http://www.cybertec.at>, kernel.cybertec.at <http://kernel.cybertec.at>\n\n",
"msg_date": "Mon, 09 Sep 2002 10:16:14 +0200",
"msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <postgres@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Script to compute randon page cost"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Assuming it's giving out correct information, there seems to be a lot of\n> evidence for dropping the default random_page_cost to 1...\n\nThe fact that a lot of people are reporting numbers below 1 is\nsufficient evidence that the script is broken. A value below 1\nis physically impossible.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 Sep 2002 11:46:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute randon page cost "
},
{
"msg_contents": "On Mon, 2002-09-09 at 01:16, Hans-Jürgen Schönig wrote:\n> Christopher Kings-Lynne wrote:\n> \n> >Assuming it's giving out correct information, there seems to be a lot of\n> >evidence for dropping the default random_page_cost to 1...\n> >\n> >Chris\n> > \n> >\n> Some time ago Joe Conway suggest a tool based on a genetic algorithm \n> which tries to find the best parameter settings.\n> As input the user could use a set of SQL statements. The algorithm will \n> try to find those settings which lead to the lowest execution time based \n> on the set of SQL.\n> \n> What about something like that?\n> This way people could tune the database theirselves.\n> \n\nI actually had starting coding a tool like this, but have become\ndistracted with other things. I plan on continuing with it maybe next\nweek. If anyone has suggestions, please let me know...\n\n --brett\n\n\n-- \nBrett Schwarz\nbrett_schwarz AT yahoo.com\n\n",
"msg_date": "09 Sep 2002 09:41:15 -0700",
"msg_from": "Brett Schwarz <brett_schwarz@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute randon page cost"
}
] |
[
{
"msg_contents": "I am trying to populate a 7.3 database from a 7.2 dump. I used 7.3's\npg_dumpall, but this did not handle all the issues:\n\n1. The language dumping needs to be improved:\n\n CREATE FUNCTION plperl_call_handler () RETURNS opaque\n ^^^^^^^^^^^^^^\n AS '/usr/local/pgsql/lib/plperl.so', 'plperl_call_handler'\n LANGUAGE \"C\";\n CREATE FUNCTION\n GRANT ALL ON FUNCTION plperl_call_handler () TO PUBLIC;\n GRANT\n REVOKE ALL ON FUNCTION plperl_call_handler () FROM postgres;\n REVOKE\n CREATE TRUSTED PROCEDURAL LANGUAGE plperl HANDLER plperl_call_handler;\n ERROR: function plperl_call_handler() does not return type language_handler\n \n \n2. Either casts or extra default conversions may be needed:\n\n CREATE TABLE cust_alloc_history (\n customer character varying(8) NOT NULL,\n product character varying(10) NOT NULL,\n \"year\" integer DEFAULT date_part('year'::text, ('now'::text)::timestamp(6) with time zone) NOT NULL,\n jan integer DEFAULT 0 NOT NULL,\n feb integer DEFAULT 0 NOT NULL,\n mar integer DEFAULT 0 NOT NULL,\n apr integer DEFAULT 0 NOT NULL,\n may integer DEFAULT 0 NOT NULL,\n jun integer DEFAULT 0 NOT NULL,\n jul integer DEFAULT 0 NOT NULL,\n aug integer DEFAULT 0 NOT NULL,\n sep integer DEFAULT 0 NOT NULL,\n oct integer DEFAULT 0 NOT NULL,\n nov integer DEFAULT 0 NOT NULL,\n dbr integer DEFAULT 0 NOT NULL,\n CONSTRAINT c_a_h_year CHECK (((float8(\"year\") <= date_part('year'::text, ('now'::text)::timestamp(6) with time zone)) AND (\"year\" > 1997)))\n );\n ERROR: Column \"year\" is of type integer but default expression is of type double precision\n You will need to rewrite or cast the expression\n \n \n3. A view is being created before one of the tables it refers to. 
\nShould not views be created only at the very end?\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Submit yourselves therefore to God. Resist the devil, \n and he will flee from you.\" James 4:7 \n\n",
"msg_date": "09 Sep 2002 12:31:39 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": true,
"msg_subject": "pg_dump problems in upgrading"
},
{
"msg_contents": "At 12:31 PM 9/09/2002 +0100, Oliver Elphick wrote:\n>3. A view is being created before one of the tables it refers to.\n>Should not views be created only at the very end?\n\nThis would be trivial (and we already put several items at the end), but I \nam not sure it would fix the problem since views can also be on other \nviews. I presume the bad ordering happened as a result of a drop/create on \na table? Or is there some other cause?\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n",
"msg_date": "Thu, 12 Sep 2002 09:52:40 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump problems in upgrading"
},
{
"msg_contents": "At 12:31 PM 9/09/2002 +0100, Oliver Elphick wrote:\n\n> CREATE FUNCTION plperl_call_handler () RETURNS opaque\n> ^^^^^^^^^^^^^^\n> AS '/usr/local/pgsql/lib/plperl.so', 'plperl_call_handler'\n> LANGUAGE \"C\";\n...\n> CREATE TRUSTED PROCEDURAL LANGUAGE plperl HANDLER plperl_call_handler;\n> ERROR: function plperl_call_handler() does not return type \n> language_handler\n\nThis is reminiscent of the mess with language definitions in the last \nversion, prior to the more sensible function manager definition system.\n\nA similar solution could be adopted here: extend the function manager \ndefinition macros to also (optionally) capture the return type; then when \nthe function is defined, the function manager could check the real return \ntype, issue a warning, and define it properly. This could be extended to \nargs as well, if we felt so inclined. This solution obviously only works \nfor languages since (I assume) they will be the only ones modified to use \nthe improved macros; but it will fix 90% of problems.\n\n\n\n> ERROR: Column \"year\" is of type integer but default expression is of \n> type double precision\n> You will need to rewrite or cast the expression\n\nThis does seem like a problem to me - has anything been done about this? \nThere does not seem to be much traffic in this thread.\n\n\n>3. 
A view is being created before one of the tables it refers to.\n>Should not views be created only at the very end?\n\nUnless this is a 7.3-specific problem, I'd put this at a lower priority; as \nI suggested in an earlier post, moving the views to the end won't \nnecessarily fix the problem; and pre-7.3 databases don't know about \ndependencies, so we can't use the rudimentary support for dependencies in \npg_dump.\n\n\n\n\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: Have you searched our list archives?\n>\n>http://archives.postgresql.org\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n",
"msg_date": "Thu, 12 Sep 2002 11:50:21 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump problems in upgrading"
},
{
"msg_contents": "On Thu, 2002-09-12 at 00:52, Philip Warner wrote:\n> At 12:31 PM 9/09/2002 +0100, Oliver Elphick wrote:\n> >3. A view is being created before one of the tables it refers to.\n> >Should not views be created only at the very end?\n> \n> This would be trivial (and we already put several items at the end), but I \n> am not sure it would fix the problem since views can also be on other \n> views. I presume the bad ordering happened as a result of a drop/create on \n> a table? Or is there some other cause?\n\nIt could be, but I don't know for sure. This is a development db which\nquite often gets reloaded entirely and repopulated.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Let the wicked forsake his way, and the unrighteous \n man his thoughts; and let him return unto the LORD, \n and He will have mercy upon him; and to our God, for \n he will abundantly pardon.\" Isaiah 55:7 \n\n",
"msg_date": "12 Sep 2002 12:35:17 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump problems in upgrading"
},
{
"msg_contents": "Awhile back, Oliver Elphick <olly@lfix.co.uk> wrote:\n> I am trying to populate a 7.3 database from a 7.2 dump. I used 7.3's\n> pg_dumpall, but this did not handle all the issues:\n\n> 1. The language dumping needs to be improved:\n\nThis is now fixed.\n\n> 2. Either casts or extra default conversions may be needed:\n\nThis too --- at least in the example you give.\n\n> 3. A view is being created before one of the tables it refers to. \n\nOn thinking about it, I'm having a hard time seeing how that case could\narise, unless the source database was old enough to have wrapped around\nits OID counter. I'd be interested to see the details of your case.\nWhile the only long-term solution is proper dependency tracking in\npg_dump, there might be some shorter-term hack that we should apply...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 21 Sep 2002 14:49:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump problems in upgrading "
},
{
"msg_contents": "On Sat, 2002-09-21 at 19:49, Tom Lane wrote:\n> > 3. A view is being created before one of the tables it refers to. \n> \n> On thinking about it, I'm having a hard time seeing how that case could\n> arise, unless the source database was old enough to have wrapped around\n> its OID counter. I'd be interested to see the details of your case.\n> While the only long-term solution is proper dependency tracking in\n> pg_dump, there might be some shorter-term hack that we should apply...\n\nWhile I don't think that the oids have wrapped round, the oid of the\ntable in question is larger than the oid of the view. It is quite\nlikely that the table was dropped and recreated after the view was\ncreated.\n\nIn fact, the view no longer works:\n ERROR: Relation \"sales_forecast\" with OID 26246751 no longer exists\nso that must be what happened.\n \n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Charge them that are rich in this world, that they not\n be highminded nor trust in uncertain riches, but in \n the living God, who giveth us richly all things to \n enjoy; That they do good, that they be rich in good \n works, ready to distribute, willing to communicate; \n Laying up in store for themselves a good foundation \n against the time to come, that they may lay hold on \n eternal life.\" I Timothy 6:17-19 \n\n",
"msg_date": "21 Sep 2002 20:28:11 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump problems in upgrading"
},
{
"msg_contents": "Oliver Elphick <olly@lfix.co.uk> writes:\n>>> 3. A view is being created before one of the tables it refers to. \n\n> While I don't think that the oids have wrapped round, the oid of the\n> table in question is larger than the oid of the view. It is quite\n> likely that the table was dropped and recreated after the view was\n> created.\n\n> In fact, the view no longer works:\n> ERROR: Relation \"sales_forecast\" with OID 26246751 no longer exists\n> so that must be what happened.\n\nAh ... so the view was broken already. I'm surprised you didn't get a\nfailure while attempting to dump the view definition.\n\nThe new dependency stuff should help prevent this type of problem in\nfuture ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 21 Sep 2002 15:35:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump problems in upgrading "
}
] |
[
{
"msg_contents": "> What do other people get for this value?\n> \n> Keep in mind if we increase this value, we will get a more sequential\n> scans vs. index scans.\n\nWith the new script I get 0.929825 on 2 IBM DTLA 5400RPM (80GB) with a 3Ware\n6400 Controller (RAID-1)\n\nBest regards,\n\tMario Weilguni\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 9 Sep 2002 13:50:59 +0200",
"msg_from": "\"Mario Weilguni\" <mario.weilguni@icomedias.com>",
"msg_from_op": true,
"msg_subject": "Re: Script to compute random page cost"
}
] |
[
{
"msg_contents": "\n> I don't think we should add tuple counts from different commands, i.e.\n> adding UPDATE and DELETE counts just yields a totally meaningless\n> number.\n\nAgreed.\n\n \n> I don't think there is any need/desire to add additional API routines to\n> handle multiple return values.\n\nYup.\n\n> Can I get some votes on this?\n\nI vote for Tom's proposal, especially regarding non instead rules (a note to Steve:\nnon instead rules are not for views).\nI also think summing up is good, it would nicely fit the partitioned table requirements. \nAnd even if you imagine an insert statement with one row, even though I would be quite \nsurprised if I got 3 rows inserted as an answer, I think it is the dba's responsibility \nto do the 2nd and 3rd row with a non instead rule or a trigger. \nFor the same reason I would not restrict the count to one tag (do what you don't want in \nthe count with a non instead rule or a trigger).\n\nI would vote for OID from first or last command. And I would disregarding the tag, since that \ngives me the power to transparently move an updated table to a history keeping table, \nthat only does inserts.\n\nWhether first or last result is probably not so important, since the rule creator can \ninfluence what is done first/last, no ? You'd only need to know which. \n\nAndreas\n",
"msg_date": "Mon, 9 Sep 2002 17:04:39 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple count"
}
] |
[
{
"msg_contents": "> could you please make a complete table of all\n> possible situations and the expected returns? With complete I mean\n> including all combinations of rules, triggers, deferred constraints and\n> the like. Or do you at least see now where in the discussion we got\n> stuck?\n\nImho only view rules (== instead rules) should affect the returned info.\nNot \"non instead\" rules, triggers or constraints. Those are imho supposed to \nbe transparent as long as they don't abort the statement. \n\nEspecially for triggers and constraints there is no room for flexibility,\nsince other db's also don't modify the \"affected rows\" count for these.\nThink sqlca.sqlerrd[2] /* number of rows processed */!\n\nAndreas\n",
"msg_date": "Mon, 9 Sep 2002 17:48:09 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
}
] |
[
{
"msg_contents": "\nHELP!!!\n\nI'm stuck for strange reason!\nThis is my first attempt to use pg_lo concept in my apps:\n\n...\n Oid oid;\n PGconn* dbcon = PQconnectdb(conninfo.c_str());\n oid = lo_creat(dbcon, INV_WRITE | INV_READ);\n int pgfd = lo_open(dbcon, oid, INV_WRITE | INV_READ);\n...\n\n\nlo_open ALWAYS returns -1 while oid is positive (I can even see oid\nin pg_largeobject system table)!!!!\n\npostmaster reports the following:\nERROR: lo_lseek: invalid large obj descriptor (0)\n\nI realy NEED a prompt advice!\n\nPlease find a couple of minutes for reply!\nTIA\nStanislav\n\nps> I run FreeBSD-4.4 + ported PostgreSQL-7.1.3\npps> my other pg-connected apps run OK\n\n",
"msg_date": "Mon, 9 Sep 2002 20:01:12 +0400 (MSD)",
"msg_from": "Stanislav Silnitski <stalker@minicorp.ru>",
"msg_from_op": true,
"msg_subject": "IN FIRE"
},
{
"msg_contents": "Stanislav Silnitski <stalker@minicorp.ru> writes:\n\n> HELP!!!\n> \n> I'm stuck for strange reason!\n> This is my first attempt to use pg_lo concept in my apps:\n> \n> ...\n> Oid oid;\n> PGconn* dbcon = PQconnectdb(conninfo.c_str());\n> oid = lo_creat(dbcon, INV_WRITE | INV_READ);\n> int pgfd = lo_open(dbcon, oid, INV_WRITE | INV_READ);\n> ...\n> \n> \n> lo_open ALWAYS returns -1 while oid is positive (I can even see oid\n> in pg_largeobject system table)!!!!\n> \n> postmaster reports the following:\n> ERROR: lo_lseek: invalid large obj descriptor (0)\n\nYou need to do all your LO manipulation inside a transaction. See the\ndocs.\n\n-Doug\n",
"msg_date": "10 Sep 2002 08:56:24 -0400",
"msg_from": "Doug McNaught <doug@mcnaught.org>",
"msg_from_op": false,
"msg_subject": "Re: IN FIRE"
},
{
"msg_contents": "> \n> I'm stuck for strange reason!\n> This is my first attempt to use pg_lo concept in my apps:\n> \n> ...\n> Oid oid;\n> PGconn* dbcon = PQconnectdb(conninfo.c_str());\n> oid = lo_creat(dbcon, INV_WRITE | INV_READ);\n> int pgfd = lo_open(dbcon, oid, INV_WRITE | INV_READ);\n> ...\n> \n> \n> lo_open ALWAYS returns -1 while oid is positive (I can even see oid\n> in pg_largeobject system table)!!!!\n\nUse transactions (BEGIN; END;). Large objects rely on this\n\n\n",
"msg_date": "Tue, 10 Sep 2002 16:25:12 +0200",
"msg_from": "\"Mario Weilguni\" <mweilguni@sime.com>",
"msg_from_op": false,
"msg_subject": "Re: IN FIRE"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Rod Taylor [mailto:rbt@rbt.ca] \n> Sent: Monday, September 09, 2002 10:55 AM\n> To: Steve Howe\n> Cc: PostgreSQL-development\n> Subject: Re: [HACKERS] Rule updates and PQcmdstatus() issue\n> \n> \n> > existed, had a brief discussion on the subject, and \n> couldn't reach an \n> > agreement. That's ok for me, I understand... but releasing versions \n> > known to be broken is something I can't understand.\n> -9' the postmaster\n> \n> If we didn't do that, then Postgresql never would have been \n> released in the first place, nor any date between then and now.\n> \n> There has been, and currently is a ton of known broken, \n> wonky, or incomplete stuff -- but it's felt that the current \n> version has a lot more to offer than the previous version, so \n> it's being released.\n> \n> This works for *all* software. If you never release, nothing \n> gets better.\n> \n> \n> I suspect it'll be several more major releases before we \n> begin to consider it approaching completely functional.\n\nI believe that the surprise is at the focus, when it comes to a release.\nWith commercial products (anyway) if you have any sort of show-stopper\nbug (crashing, incorrect results, etc.) you do not release the tool\nuntil the bug, and all others like it, are fixed. Bugs that have to do\nwith appearance or convenience can be overlooked for a release as long\nas they are documented in the release notes. Now, it is not unlikely\nthat there are unintentional show-stopper bugs that get through Q/A.\nBut intentionally passing them through would be incompetent for a\ncommercial enterprise.\n\nWith open source projects, the empasis tends to be on features, with far\nless regard for correcting known problems. Even bugs that can cause a\ncrash seem to be viewed as acceptable if they happen rarely.\n\nNow, at first blush, the open source strategy seems ludicrous. 
After\nall, who will want to use a product which could potentially (albeit\nunlikely) destroy your data or give wrong results? Then, after a bit of\nthought, you can see that the same sort of strategy as the open source\nprojects *is* followed by one very large and very successful software\ngiant. So maybe \"burgeoning featuritis without extreme concern for\nrobust stability\" isn't such a stupid strategy after all. ;-)\n\nAll kidding aside, I would like to see increased emphasis on stability\nand correctness. But I will admit that it is a lot less fun than adding\nnew features.\n",
"msg_date": "Mon, 9 Sep 2002 11:30:52 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "On Mon, Sep 09, 2002 at 11:30:52AM -0700, Dann Corbit wrote:\n> \n> All kidding aside, I would like to see increased emphasis on stability\n> and correctness. But I will admit that it is a lot less fun than adding\n> new features.\n\nBut in fairness, I think you'd be hard pressed to find a set of\ndevelopers anywhere who take more seriously that the PostgreSQL core\nthe responsibility to provide stable, correct software. I've\nreported show-stopping bugs to commercial database providers and on\nthe PostgreSQL lists, and I'd be hard pressed to come up with an\noccasion where I received from a commercial software company service\nthat was even 1/10th the quality and speed that I get here. \n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Mon, 9 Sep 2002 15:22:50 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "On Mon, Sep 09, 2002 at 11:30:52AM -0700, Dann Corbit wrote:\n> > \n> > I suspect it'll be several more major releases before we \n> > begin to consider it approaching completely functional.\n> \n> I believe that the surprise is at the focus, when it comes to a release.\n> With commercial products (anyway) if you have any sort of show-stopper\n> bug (crashing, incorrect results, etc.) you do not release the tool\n> until the bug, and all others like it, are fixed. Bugs that have to do\n> with appearance or convenience can be overlooked for a release as long\n> as they are documented in the release notes. Now, it is not unlikely\n> that there are unintentional show-stopper bugs that get through Q/A.\n> But intentionally passing them through would be incompetent for a\n> commercial enterprise.\n\nHmm, you don't have any drinking buddies who work QA, do you? _Lots_ of\nknown, \"eat your harddrive\" bugs get classified as \"to be fixed in future\nrelease\" in commercial software, when the release date pressure grows.\n\n> With open source projects, the empasis tends to be on features, with far\n> less regard for correcting known problems. Even bugs that can cause a\n> crash seem to be viewed as acceptable if they happen rarely.\n\nHuh? I tend to see exactly the opposite. Actual crash and \"wrong\nanswer\" bugs tend to get very prompt attention on all the open source\nprojects I know and use. What _does_ get delayed or even ignored are \"bug\ncompability\" problems, like this one. That is, software that relies on the\n\"affected rows\" count is in fact broken, since it's making assumptions\nabout that number that were never promised in any standard or interface\ndocs.\n\n<snip silly comparison to commercial software house>\n\n> All kidding aside, I would like to see increased emphasis on stability\n> and correctness. 
But I will admit that it is a lot less fun than adding\n> new features.\n\nAnd this has got to be trolling: PostgreSQL is one of the _most_\nstability and correctness focused software projects I've ever known. In\nthis particular case, the complaints about this issue where \"Your bugfix\nbroke my tool! make it better!\" The answer was \"We can't just put it\nback, that's an actual bug in there (rules firing in an unpredicatable\norder). What's the _correct_ behavior?\" The people with the complaints\nthen did not come up with a compelling, complete description of what\nthe correct behavior should be. There's always been vague parts to the\n\"desired behavior\" like the phrase Tom pointed out: \"in the context of\nthe view\" which was clarified to mean \"viewable by the view\", which is\nnearly impossible to code, if not an example of the halting problem.\n\nPostgreSQL as a project errs on the side of not coding the quick fix,\nin favor of waiting for the right answer. Sometimes too long, but this\ncase isn't one of those, IMHO.\n\nRoss\n-- \nRoss Reedstrom, Ph.D. reedstrm@rice.edu\nExecutive Director phone: 713-348-6166\nGulf Coast Consortium for Bioinformatics fax: 713-348-6182\nRice University MS-39\nHouston, TX 77005\n",
"msg_date": "Mon, 9 Sep 2002 14:25:48 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "> > If we didn't do that, then Postgresql never would have been \n> > released in the first place, nor any date between then and now.\n\n> I believe that the surprise is at the focus, when it comes to a release.\n> With commercial products (anyway) if you have any sort of show-stopper\n> bug (crashing, incorrect results, etc.) you do not release the tool\n\nMost companies / groups (opensource or otherwise) will not hold back\nmany bugfixes and features for the sake of getting an additional out of\nthe way fix in as it tends to piss off the majority of the users.\n\nI'm afraid right now I see this as a very minor item which is heavily\nbroken, meaning it's really really important to very few users.\n\nNot having foreign keys break when renaming a column or table will\nprobably affect more people and is awaiting the next release. Ditto for\nsecurity enhancements. I see these as more important -- since they\naffect me :)\n\nIf the changes are agreed upon and fixed, great. It's a better product\nbecause of it. But forcing others to use an older version with\nequivelently broken items because the next one doesn't do everything\nperfectly doesn't make for progress.\n\nHowever, rest assured, with anything if you push and put in the work\nrequire it'll eventually go where you want it to.\n\n-- \n Rod Taylor\n\n",
"msg_date": "09 Sep 2002 15:41:59 -0400",
"msg_from": "Rod Taylor <rbt@rbt.ca>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "On Mon, 2002-09-09 at 21:25, Ross J. Reedstrom wrote:\n> And this has got to be trolling: PostgreSQL is one of the _most_\n> stability and correctness focused software projects I've ever known. In\n> this particular case, the complaints about this issue where \"Your bugfix\n> broke my tool! make it better!\" The answer was \"We can't just put it\n> back, that's an actual bug in there (rules firing in an unpredicatable\n> order).\n\nWhy is \"rules firing in an unpredicatable order\" a bug but \"returned\naffected tuple count is wrong \" just a compatibility issue ?\n\nAfaik, rule firing order has never been promised, while pqCmdTuples()\nhas.\n\n> What's the _correct_ behavior?\" The people with the complaints\n> then did not come up with a compelling, complete description of what\n> the correct behavior should be. There's always been vague parts to the\n> \"desired behavior\" like the phrase Tom pointed out: \"in the context of\n> the view\" which was clarified to mean \"viewable by the view\", which is\n> nearly impossible to code, if not an example of the halting problem.\n\nOne approach could be to expose the tuple count at SQL level and then\nlet the user decide what to return.\n\n> PostgreSQL as a project errs on the side of not coding the quick fix,\n> in favor of waiting for the right answer. Sometimes too long, but this\n> case isn't one of those, IMHO.\n\nYou usually learn afterwards when it has been \"too long\" ;)\n\n--------------\nHannu\n\n",
"msg_date": "10 Sep 2002 10:39:18 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Why is \"rules firing in an unpredicatable order\" a bug but \"returned\n> affected tuple count is wrong \" just a compatibility issue ?\n> Afaik, rule firing order has never been promised, while pqCmdTuples()\n> has.\n\nThere has never been any spec saying exactly what PQcmdTuples would give\nin complicated cases. The effective behavior pre-7.2 was that you'd get\nthe result tag of the last action executed, but this was undocumented,\nand unsafe to rely on in multi-rule cases even then, considering that\nthe firing order of rules was not predictable.\n\nWhat actually happened was this: in 7.2 we changed ON INSERT rule firing\nto execute non-INSTEAD rules after the original INSERT, rather than\nbefore it. In the old behavior, non-INSTEAD rules just plain did not\nwork with an INSERT: they wouldn't see any \"NEW\" row, because it wasn't\nthere yet when they ran. This is surely a bug fix in my book (and it is\nunrelated to the 7.3 change that provides predictable firing order of\nmultiple rules).\n\nNow the side effect of that change was to change PQcmdTuples' behavior,\nbecause the \"last action\" was no longer the same thing as before. This\nbroke various clients that were depending on the \"last action\" to be the\noriginal INSERT. The fix we applied was to redefine PQcmdTuples to\nreturn the result count of the original query regardless of firing\norder.\n\nThis behavior is evidently not good for Steve, and I'm perfectly\nprepared to discuss modifying it some more --- but I don't want to have\na PQcmdTuples behavior-of-the-month with new changes in every release.\nI want a discussed, agreed-to, well-defined behavior that we aren't\ngoing to revisit again in future releases. When we have that agreement\nwe can implement it and forget it ... but if we apply a bandaid now\nand then change the behavior again later, we're just going to make life\neven harder for clients. 
I'd rather leave the behavior broken (by\nSteve's view anyway) but *the same as 7.2* than have a new but still-\nunsatisfactory definition in there for 7.3.\n\nI think the other developers have the same negative opinion about API\nchurn as I do, and so when we couldn't get agreement about what to\ndo back in May, we shelved the topic in hopes a fresh idea would come\nalong.\n\nNow could we drop the name-calling and the bogus opinionating about\nhow serious or not-serious this problem is, and concentrate on finding\na satisfactory answer?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Sep 2002 10:46:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue "
}
] |
[
{
"msg_contents": "Dear PostgreSQL people,\n\n\tSorry for jumping into this conversation in the middle.\n\tAutocommit is very important, as appservers may turn it on or off at\nwill in order to support EJB transactions (being able to set them up, roll\nthem back, commit them, etc. by using the JDBC API). If it is broken, then\nall EJB apps using PostgreSQL may be broken also. ...This frightens me a\nlittle. Could somebody please explain?\n\nSincerely,\n\n\tDaryl.\n\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Monday, September 09, 2002 2:54 PM\n> To: Bruce Momjian\n> Cc: Barry Lind; pgsql-jdbc@postgresql.org; \n> pgsql-hackers@postgresql.org\n> Subject: Re: [JDBC] [HACKERS] problem with new autocommit config\n> parameter and jdbc \n> \n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Barry Lind wrote:\n> >> How should client interfaces handle this new autocommit \n> feature? Is it\n> >> best to just issue a set at the beginning of the \n> connection to ensure\n> >> that it is always on?\n> \n> > Yes, I thought that was the best fix for apps that can't deal with\n> > autocommit being off.\n> \n> If autocommit=off really seriously breaks JDBC then I don't think a\n> simple SET command at the start of a session is going to do that much\n> to improve robustness. What if the user issues another SET to turn it\n> on?\n> \n> I'd suggest just documenting that it is broken and you can't use it,\n> until such time as you can get it fixed. Band-aids that only \n> partially\n> cover the problem don't seem worth the effort to me.\n> \n> In general I think that autocommit=off is probably going to be very\n> poorly supported in the 7.3 release. 
We can document it as being\n> \"work in progress, use at your own risk\".\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n",
"msg_date": "Mon, 9 Sep 2002 15:01:23 -0400 ",
"msg_from": "Daryl Beattie <dbeattie@insystems.com>",
"msg_from_op": true,
"msg_subject": "Re: [JDBC] problem with new autocommit config parameter"
},
{
"msg_contents": "Daryl,\n\nThe problem is an incompatiblity between a new server autocommit feature \nand the existing jdbc autocommit feature. The problem manifests itself \nwhen you turn autocommit off on the server (which is new functionality \nin 7.3). If you leave autocommit turned on on the server (which is the \nway the server has always worked until 7.3) the jdbc driver correctly \nhandles issuing the correct begin/commit/rollback commands to support \nautocommit functionality in the jdbc driver.\n\nAutocommit will work with jdbc in 7.3 (and it does now as long as you \nleave autocommit set on in the postgresql.conf file). We are just need \nto decide what to do in this one corner case.\n\nthanks,\n--Barry\n\n\nDaryl Beattie wrote:\n> Dear PostgreSQL people,\n> \n> \tSorry for jumping into this conversation in the middle.\n> \tAutocommit is very important, as appservers may turn it on or off at\n> will in order to support EJB transactions (being able to set them up, roll\n> them back, commit them, etc. by using the JDBC API). If it is broken, then\n> all EJB apps using PostgreSQL may be broken also. ...This frightens me a\n> little. Could somebody please explain?\n> \n> Sincerely,\n> \n> \tDaryl.\n> \n> \n> \n>>-----Original Message-----\n>>From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n>>Sent: Monday, September 09, 2002 2:54 PM\n>>To: Bruce Momjian\n>>Cc: Barry Lind; pgsql-jdbc@postgresql.org; \n>>pgsql-hackers@postgresql.org\n>>Subject: Re: [JDBC] [HACKERS] problem with new autocommit config\n>>parameter and jdbc \n>>\n>>\n>>Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>>\n>>>Barry Lind wrote:\n>>>\n>>>>How should client interfaces handle this new autocommit \n>>>\n>>feature? 
Is it\n>>\n>>>>best to just issue a set at the beginning of the \n>>>\n>>connection to ensure\n>>\n>>>>that it is always on?\n>>>\n>>>Yes, I thought that was the best fix for apps that can't deal with\n>>>autocommit being off.\n>>\n>>If autocommit=off really seriously breaks JDBC then I don't think a\n>>simple SET command at the start of a session is going to do that much\n>>to improve robustness. What if the user issues another SET to turn it\n>>on?\n>>\n>>I'd suggest just documenting that it is broken and you can't use it,\n>>until such time as you can get it fixed. Band-aids that only \n>>partially\n>>cover the problem don't seem worth the effort to me.\n>>\n>>In general I think that autocommit=off is probably going to be very\n>>poorly supported in the 7.3 release. We can document it as being\n>>\"work in progress, use at your own risk\".\n>>\n>>\t\t\tregards, tom lane\n>>\n>>---------------------------(end of \n>>broadcast)---------------------------\n>>TIP 3: if posting/reading through Usenet, please send an appropriate\n>>subscribe-nomail command to majordomo@postgresql.org so that your\n>>message can get through to the mailing list cleanly\n>>\n> \n> \n\n",
"msg_date": "Mon, 09 Sep 2002 12:29:02 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] problem with new autocommit config parameter"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Ross J. Reedstrom [mailto:reedstrm@rice.edu] \n> Sent: Monday, September 09, 2002 12:26 PM\n> To: Dann Corbit\n> Cc: Rod Taylor; Steve Howe; PostgreSQL-development\n> Subject: Re: [HACKERS] Rule updates and PQcmdstatus() issue\n> \n> \n> On Mon, Sep 09, 2002 at 11:30:52AM -0700, Dann Corbit wrote:\n> > > \n> > > I suspect it'll be several more major releases before we\n> > > begin to consider it approaching completely functional.\n> > \n> > I believe that the surprise is at the focus, when it comes to a \n> > release. With commercial products (anyway) if you have any sort of \n> > show-stopper bug (crashing, incorrect results, etc.) you do not \n> > release the tool until the bug, and all others like it, are fixed. \n> > Bugs that have to do with appearance or convenience can be \n> overlooked \n> > for a release as long as they are documented in the release notes. \n> > Now, it is not unlikely that there are unintentional \n> show-stopper bugs \n> > that get through Q/A. But intentionally passing them \n> through would be \n> > incompetent for a commercial enterprise.\n> \n> Hmm, you don't have any drinking buddies who work QA, do you? \n\nI do have friends who work in Q/A.\n\n> _Lots_ of known, \"eat your harddrive\" bugs get classified as \n> \"to be fixed in future release\" in commercial software, when \n> the release date pressure grows.\n\nI have been programming since 1976 on literally many dozens of projects.\nThere is no project on which I have been a part where such a thing would\nbe allowed. On the other hand, the projects I tend to work on are the\n\"these tools are used to run your business\" MIS sorts of things.\nPerhaps other areas of development are different.\n \n> > With open source projects, the emphasis tends to be on \n> features, with \n> > far less regard for correcting known problems. 
Even bugs that can \n> > cause a crash seem to be viewed as acceptable if they happen rarely.\n> \n> Huh? I tend to see exactly the opposite. Actual crash and \n> \"wrong answer\" bugs tend to get very prompt attention on all \n> the open source projects I know and use. What _does_ get \n> delayed or even ignored are \"bug compatibility\" problems, like \n> this one. That is, software that relies on the \"affected \n> rows\" count is in fact broken, since it's making assumptions \n> about that number that were never promised in any standard or \n> interface docs.\n\nIf this particular case is a case of someone relying on undocumented\nbehavior, then there is no bug. If this is a case of relying upon\ndocumented behavior and the behavior changes, then there is a bug.\n \n> <snip silly comparison to commercial software house>\n> \n> > All kidding aside, I would like to see increased emphasis \n> on stability \n> > and correctness. But I will admit that it is a lot less fun than \n> > adding new features.\n> \n> And this has got to be trolling: PostgreSQL is one of the \n> _most_ stability and correctness focused software projects \n> I've ever known.\n\nThere are very serious problems that have been in the release notes for\na very long time and yet have never been addressed. Most of them are\nrather esoteric and won't affect most users. I have been on many\nprojects that were far more concerned with correctness. As I have said,\n\"no serious bugs are allowed in a release\" is not uncommon on the\ncommercial projects where I have experience. That includes 9 years as a\nsubcontractor at Microsoft. If they have a serious bug that cannot be\nfixed, they will simply cut scope. But my experience was on MS (ITG)\nprojects. 
Perhaps other branches of MS did not require the same rigor.\nOn the other hand, PostgreSQL is more responsive in this area than any\nother open source project that I know of.\n\n> In this particular case, the complaints \n> about this issue were \"Your bugfix broke my tool! make it \n> better!\" The answer was \"We can't just put it back, that's an \n> actual bug in there (rules firing in an unpredictable \n> order). What's the _correct_ behavior?\" The people with the \n> complaints then did not come up with a compelling, complete \n> description of what the correct behavior should be. There's \n> always been vague parts to the \"desired behavior\" like the \n> phrase Tom pointed out: \"in the context of the view\" which \n> was clarified to mean \"viewable by the view\", which is nearly \n> impossible to code, if not an example of the halting problem.\n\nThis may be an example where the original poster is asking for something\nthey should not be asking for. If the original poster was relying upon\nundocumented behavior, then there is nothing that needs to be done, and\nthe resulting problem is the original poster's fault.\n \n> PostgreSQL as a project errs on the side of not coding the \n> quick fix, in favor of waiting for the right answer. \n> Sometimes too long, but this case isn't one of those, IMHO.\n\nYou are probably right about this case. In fact, I am not defending the\noriginal poster's demand. I have no idea if their request has merit or\nnot. I was merely expressing an opinion that a good standard to follow\nis to fix all outstanding showstopper bugs before making a release.\nNothing more, nothing less.\n\nI am not attacking the PostgreSQL project or team. In fact, I think it\nis the finest piece of open source, freely available software on the\nplanet. My request was more of an aside -- simply wishing out loud for\nan intense focus on fixing problems.\n",
"msg_date": "Mon, 9 Sep 2002 12:46:58 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
},
{
"msg_contents": "Actually, this problem is part of a whole scope of problems that were in\nthe Berkeley code, because rules, and inheritance, just have a certain\ncontorting effect on SQL queries where it is difficult to get them\nworking properly.\n\nIf these features didn't come from Berkeley, I doubt we would have ever\nimplemented them, so in some case there are inherited bugs from features\nthat weren't 100% thought out when they were added many years ago.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 9 Sep 2002 22:10:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
}
] |
[
{
"msg_contents": "\nI am trying move my development database to 7.3b1.\n\nHowever, when I try to restore from a 7.2.2 dump to the 7.3.b1 server I get\nthe following error:\n\npg_restore -U nbadmin -h lnc -p 5432 -d stats -Fc /tmp/stats.pgdmp\n\npg_restore: [archiver (db)] could not execute query: ERROR: function\nplpgsql_call_handler() does not return type language_handler\n\nAny ideas?\n\nThanks,\n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n----------------------------------\nA wiki we will go...\n\n",
"msg_date": "Mon, 9 Sep 2002 13:34:33 -0700 (PDT)",
"msg_from": "Laurette Cisneros <laurette@nextbus.com>",
"msg_from_op": true,
"msg_subject": "None"
},
{
"msg_contents": "On Mon, 9 Sep 2002, Laurette Cisneros wrote:\n\n> \n> I am trying move my development database to 7.3b1.\n> \n> However, when I try to restore from a 7.2.2 dump to the 7.3.b1 server I get\n> the following error:\n> \n> pg_restore -U nbadmin -h lnc -p 5432 -d stats -Fc /tmp/stats.pgdmp\n> \n> pg_restore: [archiver (db)] could not execute query: ERROR: function\n> plpgsql_call_handler() does not return type language_handler\n\nI sounds like there's a language installed on your 7.2.2 server that your \n7.3 server doesn't have installed.\n\n",
"msg_date": "Mon, 9 Sep 2002 14:50:41 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "On Mon, 2002-09-09 at 21:34, Laurette Cisneros wrote:\n> \n> I am trying move my development database to 7.3b1.\n> \n> However, when I try to restore from a 7.2.2 dump to the 7.3.b1 server I get\n> the following error:\n> \n> pg_restore -U nbadmin -h lnc -p 5432 -d stats -Fc /tmp/stats.pgdmp\n> \n> pg_restore: [archiver (db)] could not execute query: ERROR: function\n> plpgsql_call_handler() does not return type language_handler\n> \n> Any ideas?\n\nAt the moment, you have to edit the dump. Where the language handler\nfunction is declared, change \"RETURNS opaque\" to \"RETURNS\nlanguage_handler\".\n\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Submit yourselves therefore to God. Resist the devil, \n and he will flee from you.\" James 4:7 \n\n",
"msg_date": "09 Sep 2002 22:01:57 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "Thanks!\n\nOn 9 Sep 2002, Oliver Elphick wrote:\n\n> On Mon, 2002-09-09 at 21:34, Laurette Cisneros wrote:\n> > \n> > I am trying move my development database to 7.3b1.\n> > \n> > However, when I try to restore from a 7.2.2 dump to the 7.3.b1 server I get\n> > the following error:\n> > \n> > pg_restore -U nbadmin -h lnc -p 5432 -d stats -Fc /tmp/stats.pgdmp\n> > \n> > pg_restore: [archiver (db)] could not execute query: ERROR: function\n> > plpgsql_call_handler() does not return type language_handler\n> > \n> > Any ideas?\n> \n> At the moment, you have to edit the dump. Where the language handler\n> function is declared, change \"RETURNS opaque\" to \"RETURNS\n> language_handler\".\n> \n> \n> \n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n----------------------------------\nA wiki we will go...\n\n",
"msg_date": "Mon, 9 Sep 2002 14:02:55 -0700 (PDT)",
"msg_from": "Laurette Cisneros <laurette@nextbus.com>",
"msg_from_op": true,
"msg_subject": "Re: "
},
{
"msg_contents": "Ok, am I missing somethig here?\n\nIn 7.3, the -Fp option has been removed which leaves the -Fc (which we use\nin our 7.2 dumps) or -Ft. \n\nHow does one edit a compressed or tar file?\n\nAlso, is this problem going to be fixed in a later beta or regular release\nof 7.3? This could pose a problem to restore full database dumps.\n\nThanks,\n\nL.\nOn 9 Sep 2002, Oliver Elphick wrote:\n\n> On Mon, 2002-09-09 at 21:34, Laurette Cisneros wrote:\n> > \n> > I am trying move my development database to 7.3b1.\n> > \n> > However, when I try to restore from a 7.2.2 dump to the 7.3.b1 server I get\n> > the following error:\n> > \n> > pg_restore -U nbadmin -h lnc -p 5432 -d stats -Fc /tmp/stats.pgdmp\n> > \n> > pg_restore: [archiver (db)] could not execute query: ERROR: function\n> > plpgsql_call_handler() does not return type language_handler\n> > \n> > Any ideas?\n> \n> At the moment, you have to edit the dump. Where the language handler\n> function is declared, change \"RETURNS opaque\" to \"RETURNS\n> language_handler\".\n> \n> \n> \n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n----------------------------------\nA wiki we will go...\n\n",
"msg_date": "Mon, 9 Sep 2002 14:58:01 -0700 (PDT)",
"msg_from": "Laurette Cisneros <laurette@nextbus.com>",
"msg_from_op": true,
"msg_subject": "Re: "
},
{
"msg_contents": "Ok, I made the changes in the compressed pg_dump file. Now pg_restore crashes:\n\npg_restore: [archiver] out of memory\n\n*sigh*\n\nL.\nOn 9 Sep 2002, Oliver Elphick wrote:\n\n> On Mon, 2002-09-09 at 21:34, Laurette Cisneros wrote:\n> > \n> > I am trying move my development database to 7.3b1.\n> > \n> > However, when I try to restore from a 7.2.2 dump to the 7.3.b1 server I get\n> > the following error:\n> > \n> > pg_restore -U nbadmin -h lnc -p 5432 -d stats -Fc /tmp/stats.pgdmp\n> > \n> > pg_restore: [archiver (db)] could not execute query: ERROR: function\n> > plpgsql_call_handler() does not return type language_handler\n> > \n> > Any ideas?\n> \n> At the moment, you have to edit the dump. Where the language handler\n> function is declared, change \"RETURNS opaque\" to \"RETURNS\n> language_handler\".\n> \n> \n> \n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n----------------------------------\nA wiki we will go...\n\n",
"msg_date": "Mon, 9 Sep 2002 15:54:49 -0700 (PDT)",
"msg_from": "Laurette Cisneros <laurette@nextbus.com>",
"msg_from_op": true,
"msg_subject": "Re: "
},
{
"msg_contents": "At 03:54 PM 9/09/2002 -0700, Laurette Cisneros wrote:\n>Ok, I made the changes in the compressed pg_dump file.\n\nThat's probably a very bad idea.\n\nIt's a little more long-winded, but try:\n\npg_restore -l dumpfile > dump1.lis\n\ncopy dump1.lis to dump2.lis\n\ndelete everything from dump1.lis at and after the definition that causes \nthe problem.\n\ndelete everything from dump2.lis at and before the definition that causes \nthe problem.\n\npg_restore -L dump1.lis\n\nmanually define the language\n\npg_restore -L dump2.lis\n\n\nALTERNATIVELY, define the language in template1, then just edit dump1.lis \nto remove the line for the language definition, and run pg_restore -L \ndump1.lis.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n",
"msg_date": "Tue, 10 Sep 2002 09:50:22 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "On Tue, 2002-09-10 at 00:50, Philip Warner wrote:\n\n> ALTERNATIVELY, define the language in template1, then just edit dump1.lis \n> to remove the line for the language definition, and run pg_restore -L \n> dump1.lis.\n\nThat doesn't work for a dump and reload, because 7.3's pg_dumpall writes\na script to create the databases from template0 rather than template1.\n\nThe 7.3 documentation for pg_dump says:\n\n Notes\n \n If your installation has any local additions to the template1\n database, be careful to restore the output of pg_dump into a truly\n empty database; otherwise you are likely to get errors due to\n duplicate definitions of the added objects. To make an empty\n database without any local additions, copy from template0 not\n template1, for example:\n \n CREATE DATABASE foo WITH TEMPLATE = template0;\n \nbut this seems to be out of date. pg_dumpall actually uses template0\nitself.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Draw near to God and he will draw near to you. \n Cleanse your hands, you sinners; and purify your \n hearts, you double minded.\" James 4:8 \n\n",
"msg_date": "10 Sep 2002 12:18:36 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "\nI am confused. This wording seems fine to me.\n\n---------------------------------------------------------------------------\n\nOliver Elphick wrote:\n> On Tue, 2002-09-10 at 00:50, Philip Warner wrote:\n> \n> > ALTERNATIVELY, define the language in template1, then just edit dump1.lis \n> > to remove the line for the language definition, and run pg_restore -L \n> > dump1.lis.\n> \n> That doesn't work for a dump and reload, because 7.3's pg_dumpall writes\n> a script to create the databases from template0 rather than template1.\n> \n> The 7.3 documentation for pg_dump says:\n> \n> Notes\n> \n> If your installation has any local additions to the template1\n> database, be careful to restore the output of pg_dump into a truly\n> empty database; otherwise you are likely to get errors due to\n> duplicate definitions of the added objects. To make an empty\n> database without any local additions, copy from template0 not\n> template1, for example:\n> \n> CREATE DATABASE foo WITH TEMPLATE = template0;\n> \n> but this seems to be out of date. pg_dumpall actually uses template0\n> itself.\n> \n> -- \n> Oliver Elphick Oliver.Elphick@lfix.co.uk\n> Isle of Wight, UK \n> http://www.lfix.co.uk/oliver\n> GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n> ========================================\n> \"Draw near to God and he will draw near to you. \n> Cleanse your hands, you sinners; and purify your \n> hearts, you double minded.\" James 4:8 \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 10 Sep 2002 13:38:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "I do this to begin with (createdb -T template0 db).\n\nFYI: Here's what I've determined is the best thing to do:\n\n1. create the database from template0\n2. create the needed languages (plpgsql, plperl, plpython) in the database\n3. create the needed tables, functions, types, etc. from script files.\n4. restore only the data from the dump.\n\nSeems to be the \"easiest\" and safest way to convert the database(s) to\n7.3b1 (we have a mirad of databases for different needs each having their\nown set of types, functions and languages that they use). I'll let you\nknow if I run into problems with this - as this, in my opinion, should not!\n\nThanks to all for the help,\n\nL.\nOn Tue, 10 Sep 2002, Bruce Momjian wrote:\n\n> \n> I am confused. This wording seems fine to me.\n> \n> ---------------------------------------------------------------------------\n> \n> Oliver Elphick wrote:\n> > On Tue, 2002-09-10 at 00:50, Philip Warner wrote:\n> > \n> > > ALTERNATIVELY, define the language in template1, then just edit dump1.lis \n> > > to remove the line for the language definition, and run pg_restore -L \n> > > dump1.lis.\n> > \n> > That doesn't work for a dump and reload, because 7.3's pg_dumpall writes\n> > a script to create the databases from template0 rather than template1.\n> > \n> > The 7.3 documentation for pg_dump says:\n> > \n> > Notes\n> > \n> > If your installation has any local additions to the template1\n> > database, be careful to restore the output of pg_dump into a truly\n> > empty database; otherwise you are likely to get errors due to\n> > duplicate definitions of the added objects. To make an empty\n> > database without any local additions, copy from template0 not\n> > template1, for example:\n> > \n> > CREATE DATABASE foo WITH TEMPLATE = template0;\n> > \n> > but this seems to be out of date. 
pg_dumpall actually uses template0\n> > itself.\n> > \n> > -- \n> > Oliver Elphick Oliver.Elphick@lfix.co.uk\n> > Isle of Wight, UK \n> > http://www.lfix.co.uk/oliver\n> > GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n> > ========================================\n> > \"Draw near to God and he will draw near to you. \n> > Cleanse your hands, you sinners; and purify your \n> > hearts, you double minded.\" James 4:8 \n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> > \n> \n> \n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n----------------------------------\nA wiki we will go...\n\n",
"msg_date": "Tue, 10 Sep 2002 10:45:43 -0700 (PDT)",
"msg_from": "Laurette Cisneros <laurette@nextbus.com>",
"msg_from_op": true,
"msg_subject": "Re: "
},
{
"msg_contents": "On Tue, 2002-09-10 at 18:38, Bruce Momjian wrote:\n> \n> I am confused. This wording seems fine to me.\n\nThe confusion was mine. Of course, pg_dump doesn't create the\ndatabase. I was mixing it up with pg_dumpall.\n\nHowever, there is a problem in that recent changes have made it quite\nlikely that an upgrade will fail and will requre the dump script to be\nedited. There are some issues in pg_dump / pg_dumpall that need\naddressing before final release.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Draw near to God and he will draw near to you. \n Cleanse your hands, you sinners; and purify your \n hearts, you double minded.\" James 4:8 \n\n",
"msg_date": "10 Sep 2002 23:08:00 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "Oliver Elphick wrote:\n> On Tue, 2002-09-10 at 18:38, Bruce Momjian wrote:\n> > \n> > I am confused. This wording seems fine to me.\n> \n> The confusion was mine. Of course, pg_dump doesn't create the\n> database. I was mixing it up with pg_dumpall.\n> \n> However, there is a problem in that recent changes have made it quite\n> likely that an upgrade will fail and will requre the dump script to be\n> edited. There are some issues in pg_dump / pg_dumpall that need\n> addressing before final release.\n\nOK, can you specifically list them?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 10 Sep 2002 18:09:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "On Tue, 2002-09-10 at 23:09, Bruce Momjian wrote:\n> Oliver Elphick wrote:\n> > edited. There are some issues in pg_dump / pg_dumpall that need\n> > addressing before final release.\n> \n> OK, can you specifically list them?\n\nMessage yesterday to pgsql-hackers\n \n Subject: [HACKERS] pg_dump problems in upgrading\n Date: 09 Sep 2002 12:31:39 +0100\n Message-Id: <1031571099.24419.199.camel@linda>\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Draw near to God and he will draw near to you. \n Cleanse your hands, you sinners; and purify your \n hearts, you double minded.\" James 4:8 \n\n",
"msg_date": "10 Sep 2002 23:22:19 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "Oliver Elphick <olly@lfix.co.uk> writes:\n> However, there is a problem in that recent changes have made it quite\n> likely that an upgrade will fail and will requre the dump script to be\n> edited. There are some issues in pg_dump / pg_dumpall that need\n> addressing before final release.\n\nAFAIK, we did what we could on that front in 7.2.1. If you have ideas\non how we can retroactively make things better, I'm all ears ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Sep 2002 23:43:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "On Tuesday 10 September 2002 11:43 pm, Tom Lane wrote:\n> Oliver Elphick <olly@lfix.co.uk> writes:\n> > However, there is a problem in that recent changes have made it quite\n> > likely that an upgrade will fail and will requre the dump script to be\n> > edited. There are some issues in pg_dump / pg_dumpall that need\n> > addressing before final release.\n\n> AFAIK, we did what we could on that front in 7.2.1. If you have ideas\n> on how we can retroactively make things better, I'm all ears ...\n\nSo this release is going to be the royal pain release to upgrade to? Not \ngood. People may just not upgrade at all in that case.\n\nMy datasets aren't complicated enough to trigger some of these problems; \npeople who have complex datasets need to report all failures so that we can \nat least write a sed/perl/awk script to massage the things that need \nmassaging, if it can be done that easily.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 10 Sep 2002 23:52:48 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> On Tuesday 10 September 2002 11:43 pm, Tom Lane wrote:\n>> AFAIK, we did what we could on that front in 7.2.1. If you have ideas\n>> on how we can retroactively make things better, I'm all ears ...\n\n> So this release is going to be the royal pain release to upgrade to?\n\npg_dumpall from a 7.2 db, and reload into 7.2, is broken if you have\nmixed-case DB names. AFAIK it's okay if you use a later-than-7.2\npg_dumpall, or reload with a later-than-7.2 psql. If Oliver's got\ninfo to the contrary then he'd better be more specific about what\nhe thinks should be fixed for 7.3. Griping about the fact that 7.2.0\nis broken is spectacularly unproductive at this point.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 11 Sep 2002 00:20:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: "
},
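[Editorial note: the mixed-case breakage Tom refers to is an identifier-quoting issue — unquoted SQL identifiers fold to lower case, so a dump that emits database names unquoted cannot round-trip a mixed-case name. An illustrative sketch, not taken from the thread:]

```sql
CREATE DATABASE MyDB;    -- unquoted: the name is folded to mydb
CREATE DATABASE "MyDB";  -- quoted: the mixed-case name survives dump and reload
```

A dump tool therefore has to quote every name it emits (or at least every name that is not already all lower case) to restore faithfully.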
{
"msg_contents": "On Wed, 2002-09-11 at 05:20, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > On Tuesday 10 September 2002 11:43 pm, Tom Lane wrote:\n> >> AFAIK, we did what we could on that front in 7.2.1. If you have ideas\n> >> on how we can retroactively make things better, I'm all ears ...\n> \n> > So this release is going to be the royal pain release to upgrade to?\n> \n> pg_dumpall from a 7.2 db, and reload into 7.2, is broken if you have\n> mixed-case DB names. AFAIK it's okay if you use a later-than-7.2\n> pg_dumpall, or reload with a later-than-7.2 psql. If Oliver's got\n> info to the contrary then he'd better be more specific about what\n> he thinks should be fixed for 7.3. Griping about the fact that 7.2.0\n> is broken is spectacularly unproductive at this point.\n\nI ran pg_dumpall from 7.3 on the 7.2 database. So I am talking about\nthe pg_dump that is now being beta-tested. Because of the major changes\nin 7.3, the 7.2 dump is not very useful. I am *not* complaining about\n7.2's pg_dump!\n\nLet me reiterate. I got these problems dumping 7.2 data with 7.3's\npg_dumpall:\n\n1. The language handlers were dumped as opaque; that needs to be\nchanged to language_handler.\n\n2. The dump produced:\n CREATE TABLE cust_alloc_history (\n ...\n \"year\" integer DEFAULT date_part('year'::text,\n ('now'::text)::timestamp(6) with time zone) NOT NULL,\n ...\n ERROR: Column \"year\" is of type integer but default expression is\nof type double precision\n You will need to rewrite or cast the expression\n\n3. 
A view was created before one of the tables to which it referred.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"I am crucified with Christ; nevertheless I live; yet \n not I, but Christ liveth in me; and the life which I \n now live in the flesh I live by the faith of the Son \n of God, who loved me, and gave himself for me.\" \n Galatians 2:20 \n\n",
"msg_date": "11 Sep 2002 07:29:12 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "Oliver Elphick <olly@lfix.co.uk> writes:\n> Let me reiterate. I got these problems dumping 7.2 data with 7.3's\n> pg_dumpall:\n\n> 1. The language handlers were dumped as opaque; that needs to be\n> changed to language_handler.\n\nOkay, we need to do something about that, though I'm not sure I see\na clean solution offhand.\n\n> 2. The dump produced:\n> CREATE TABLE cust_alloc_history (\n> ...\n> \"year\" integer DEFAULT date_part('year'::text,\n> ('now'::text)::timestamp(6) with time zone) NOT NULL,\n> ...\n> ERROR: Column \"year\" is of type integer but default expression is\n> of type double precision\n> You will need to rewrite or cast the expression\n\nHmm ... what was the original coding of the default?\n\n> 3. A view was created before one of the tables to which it referred.\n\nThis has been a problem all along and will continue to be a problem\nfor awhile longer. Sorry.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 11 Sep 2002 09:59:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "On Wed, 2002-09-11 at 14:59, Tom Lane wrote:\n> Oliver Elphick <olly@lfix.co.uk> writes:\n> > Let me reiterate. I got these problems dumping 7.2 data with 7.3's\n> > pg_dumpall:\n> \n> > 1. The language handlers were dumped as opaque; that needs to be\n> > changed to language_handler.\n> \n> Okay, we need to do something about that, though I'm not sure I see\n> a clean solution offhand.\n\nIn 7.2, this will identify the functions that need to be dumped as\nlanguage handlers:\n\njunk=# SELECT p.proname\njunk-# FROM pg_proc AS p, pg_language AS l\njunk-# WHERE l.lanplcallfoid = p.oid AND l.lanplcallfoid != 0;\n proname \n----------------------\n plperl_call_handler\n plpgsql_call_handler\n pltcl_call_handler\n(3 rows)\n\n\n> > 2. The dump produced:\n> > CREATE TABLE cust_alloc_history (\n> > ...\n> > \"year\" integer DEFAULT date_part('year'::text,\n> > ('now'::text)::timestamp(6) with time zone) NOT NULL,\n> > ...\n> > ERROR: Column \"year\" is of type integer but default expression is\n> > of type double precision\n> > You will need to rewrite or cast the expression\n> \n> Hmm ... what was the original coding of the default?\n\n year INTEGER DEFAULT date_part('year',CURRENT_TIMESTAMP)\n\n\n\n> > 3. A view was created before one of the tables to which it referred.\n> \n> This has been a problem all along and will continue to be a problem\n> for awhile longer. Sorry.\n\nIs it not enough to defer all views until the end? Why would they be\nneeded any sooner?\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"I am crucified with Christ; nevertheless I live; yet \n not I, but Christ liveth in me; and the life which I \n now live in the flesh I live by the faith of the Son \n of God, who loved me, and gave himself for me.\" \n Galatians 2:20 \n\n",
"msg_date": "11 Sep 2002 15:29:09 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: "
},
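[Editorial note: the failing default Oliver quotes can be kept working under 7.3's stricter implicit casting by casting date_part's double precision result down explicitly. A hypothetical rewrite, with the column list abbreviated:]

```sql
CREATE TABLE cust_alloc_history (
    -- date_part() returns double precision; 7.3 no longer casts it to
    -- integer implicitly, so the default needs an explicit cast.
    "year" integer DEFAULT CAST(date_part('year', CURRENT_TIMESTAMP) AS integer) NOT NULL
);
```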
{
"msg_contents": "\n> Is it not enough to defer all views until the end? Why would they be\n> needed any sooner?\n\nI would think that views of views, or permissions on views, or prepared \nstatements might need the right view to be declared first. There may be other \nexamples as well.\n\nYour solution might be better than the current situation, however.\n\nRegards,\n\tJeff\n",
"msg_date": "Wed, 11 Sep 2002 12:41:11 -0700",
"msg_from": "Jeff Davis <list-pgsql-hackers@empires.org>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "\nyes, deferring views to the end will also break if you have\nSQL functions defined that use views. The dependencies\nis (are?) a really hard problem.\n\nelein\n\nAt 12:41 PM 9/11/02, Jeff Davis wrote:\n\n> > Is it not enough to defer all views until the end? Why would they be\n> > needed any sooner?\n>\n>I would think that views of views, or permissions on views, or prepared\n>statements might need the right view to be declared first. There may be other\n>examples as well.\n>\n>Your solution might be better than the current situation, however.\n>\n>Regards,\n> Jeff\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n>subscribe-nomail command to majordomo@postgresql.org so that your\n>message can get through to the mailing list cleanly\n\n:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:\n elein@norcov.com (510)543-6079\n \"Taking a Trip. Not taking a Trip.\" --anonymous\n:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:\n\n",
"msg_date": "Wed, 11 Sep 2002 12:58:10 -0700",
"msg_from": "elein <elein@norcov.com>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "Oliver Elphick <olly@lfix.co.uk> writes:\n> ERROR: Column \"year\" is of type integer but default expression is\n> of type double precision\n> You will need to rewrite or cast the expression\n>> \n>> Hmm ... what was the original coding of the default?\n\n> year INTEGER DEFAULT date_part('year',CURRENT_TIMESTAMP)\n\nWell, date_part has always yielded double, so what we are really looking\nat here is a side-effect of the tightening of implicit casting in 7.3.\nIt wants you to cast down to integer explicitly.\n\nThere was some discussion of allowing \"implicit explicit casting\" of\nINSERT and UPDATE values to the target column's datatype, ie, allow a\ncast path to be used even if it is not marked as implicitly castable.\nIf we did that then it's be reasonable to do it for default values as\nwell, and that would allow this coding to keep working. But we did not\nhave a consensus to do it AFAIR.\n\n> 3. A view was created before one of the tables to which it referred.\n>> \n>> This has been a problem all along and will continue to be a problem\n>> for awhile longer. Sorry.\n\n> Is it not enough to defer all views until the end? Why would they be\n> needed any sooner?\n\nWell, one counterexample is where the view is being used as a substitute\nfor a standalone composite type: there might be a function somewhere\nthat uses the view's rowtype as an input or result datatype. I recall\nseeing exactly that coding technique in some of Joe Conway's contrib\nstuff (though it's now been superseded by use of standalone types).\nIn any case, such a rule won't ensure getting cross-references between\nviews to work.\n\nThe only real solution to pg_dump's ordering woes is to examine the\ndatabase dependency graph and do a topological sort to determine a\nsafe dump order. 
As of 7.3 we have the raw materials to do this (in\nthe form of the pg_depend system table), but making pg_dump actually\ndo it is a major rewrite that didn't get done, and IMHO shouldn't be\ntackled during beta. (I sure want to see it for 7.4 though.)\n\nIn the meantime, I think that we shouldn't mess with pg_dump's basically\nOID-order-driven dump ordering. It works in normal cases, and adding\narbitrary rules to it to fix one corner case is likely to accomplish\nlittle except breaking other corner cases.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 11 Sep 2002 16:19:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: "
},
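[Editorial note: the dependency-driven ordering Tom describes is a topological sort of the object graph recorded in pg_depend. pg_dump itself is C, and at this point the rewrite had not been done; the sketch below is only an illustration of the algorithm, with invented object names standing in for catalog entries:]

```python
from collections import deque

def topo_order(objects, depends_on):
    """Return a dump-safe ordering: every object appears after
    everything it depends on (Kahn's algorithm)."""
    indegree = {obj: 0 for obj in objects}
    dependents = {obj: [] for obj in objects}
    for obj, deps in depends_on.items():
        for dep in deps:
            indegree[obj] += 1
            dependents[dep].append(obj)
    # Start from objects with no unmet dependencies.
    queue = deque(obj for obj in objects if indegree[obj] == 0)
    order = []
    while queue:
        obj = queue.popleft()
        order.append(obj)
        for nxt in dependents[obj]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    if len(order) != len(objects):
        raise ValueError("dependency cycle; cannot order dump")
    return order

# Invented example: a view over a table, and a function using the view's rowtype.
objs = ["table_t", "view_v", "func_f"]
deps = {"view_v": ["table_t"], "func_f": ["view_v"]}
print(topo_order(objs, deps))  # → ['table_t', 'view_v', 'func_f']
```

The cycle check matters: a real implementation would also have to break genuine cycles somehow (for instance, emitting a view's underlying table first and attaching its rule later), which is part of why the rewrite was deferred.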
{
"msg_contents": "On Wed, 2002-09-11 at 21:19, Tom Lane wrote:\n> In the meantime, I think that we shouldn't mess with pg_dump's basically\n> OID-order-driven dump ordering. It works in normal cases, and adding\n> arbitrary rules to it to fix one corner case is likely to accomplish\n> little except breaking other corner cases.\n\nI can see that Lamar and I are going to have major problems dealing with\nusers who fall over these problems. There are some things that simply\ncannot be handled automatically, such as user-written functions that\nreturn opaque. Then there are issues of ordering; and finally the fact\nthat we need to use the new pg_dump with the old binaries to get a\nuseful dump.\n\nIt seems to me that I shall have to make the new package such that it\ncan exist alongside the old one for a time, or else possibly separate\n7.3 pg_dump and pg_dumpall into a separate package. It is going to be a\ntotal pain!\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"I am crucified with Christ; nevertheless I live; yet \n not I, but Christ liveth in me; and the life which I \n now live in the flesh I live by the faith of the Son \n of God, who loved me, and gave himself for me.\" \n Galatians 2:20 \n\n",
"msg_date": "11 Sep 2002 22:40:44 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: - pg_dump issues"
},
{
"msg_contents": "On Wednesday 11 September 2002 05:40 pm, Oliver Elphick wrote:\n> On Wed, 2002-09-11 at 21:19, Tom Lane wrote:\n> > In the meantime, I think that we shouldn't mess with pg_dump's basically\n> > OID-order-driven dump ordering. It works in normal cases, and adding\n> > arbitrary rules to it to fix one corner case is likely to accomplish\n> > little except breaking other corner cases.\n\n> I can see that Lamar and I are going to have major problems dealing with\n> users who fall over these problems.\n\nYes, we are. Thankfully, with RPM dependencies I can prevent blind upgrades. \nBut that doe not help the data migration issue this release is going to be. \nGuys, migration that is this shabby is, well, shabby. This _has_ to be fixed \nwhere a dump of 7.2.2 data (not 7.2.0, Tom) can be cleanly restored into 7.3. \nThat is, after all, our only migration path.\n\nI think this upgrade/migration nightmare scenario warrants upping the version \nto 8.0 to draw attention to the potential problems.\n\n> It seems to me that I shall have to make the new package such that it\n> can exist alongside the old one for a time, or else possibly separate\n> 7.3 pg_dump and pg_dumpall into a separate package. It is going to be a\n> total pain!\n\nI had planned on making just such a 'pg_dump' package -- but if the 7.3 \npg_dump isn't going to produce useful output, it seems like a waste of time.\n\nHowever, the jury is still out -- what sort of percentages are involved? That \nis, how likely are problems going to happen?\n\nBruce, I mentioned a sed/perl/awk script already to massage the dump into a \n7.3-friendly form -- but we need to gather the cases that are involved. \nMethinks every single OpenACS installation will hit this issue.\n\nHow big is the problem? It's looking bigger with each passing day, ISTM.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 11 Sep 2002 21:28:23 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: - pg_dump issues"
},
{
"msg_contents": "Lamar Owen wrote:\n> Bruce, I mentioned a sed/perl/awk script already to massage the dump into a \n> 7.3-friendly form -- but we need to gather the cases that are involved. \n> Methinks every single OpenACS installation will hit this issue.\n> \n> How big is the problem? It's looking bigger with each passing day, ISTM.\n\nThat is exactly what I want to know and document on the open items page.\nI am having trouble understanding some of the failures because no one is\nshowing the failure messages/statements, just describing them.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 11 Sep 2002 21:44:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: - pg_dump issues"
},
{
"msg_contents": "On Wednesday 11 September 2002 09:44 pm, Bruce Momjian wrote:\n> Lamar Owen wrote:\n> > Bruce, I mentioned a sed/perl/awk script already to massage the dump into\n> > a 7.3-friendly form -- but we need to gather the cases that are involved.\n> > Methinks every single OpenACS installation will hit this issue.\n\n> > How big is the problem? It's looking bigger with each passing day, ISTM.\n\n> That is exactly what I want to know and document on the open items page.\n> I am having trouble understanding some of the failures because no one is\n> showing the failure messages/statements, just describing them.\n\nWell, I am going to _try_ to lay aside an hour or two tomorrow or Friday and \ntry to import a 7.2.2 OpenACS dump into a 7.3 installation. I'll try to get \nvery verbose with the errors... :-)\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 11 Sep 2002 22:11:46 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: - pg_dump issues"
}
] |
[
{
"msg_contents": "I've put together some packages for the 7.3beta1 release. The can be\nfound here along with a tenative FreeBSD port:\n\n http://66.250.180.19/postgresql-7.3beta1/\n\nThe differences in the files are that the postgresql-7.3b1-O3.tbz has\nbeen compiled with -O3 where as the postgresql-7.3b1.tbz hasn't. See\nmy next message for details.\n\n-sc\n\n-- \nSean Chittenden",
"msg_date": "Mon, 9 Sep 2002 14:15:31 -0700",
"msg_from": "Sean Chittenden <sean@chittenden.org>",
"msg_from_op": true,
"msg_subject": "FreeBSD Packages/Port for 7.3beta1..."
},
{
"msg_contents": "Sean Chittenden writes:\n\n> I've put together some packages for the 7.3beta1 release. The can be\n> found here along with a tenative FreeBSD port:\n>\n> http://66.250.180.19/postgresql-7.3beta1/\n\nI checked out this port and made some notes that you might find useful.\n\n\n[Makefile]\n\nYou can remove --enable-locale, --enable-syslog, --with-CXX as configure\narguments. They no longer do anything.\n\nThis\n\nLDFLAGS+= -L${LOCALBASE}/lib -lgnugetopt\nCONFIGURE_ENV+= LDFLAGS=\"${LDFLAGS}\"\n\nis redundant. You already put LOCALBASE/lib into --with-libs. Also, if\nyou wish, we can automatically check for -lgnugetopt in configure. We\nalready do that for other spellings of the same library.\n\nI would like some details on the following.\n\n# if you want localized messages, make -DWITH_GETTEXT\n# WARNING: this seems to require relinking binaries depending on\n# libpq.so, including for example mod_php and tcl.\n\nThis\n\nCONFIGURE_ENV+= \"LIBS=-lintl\"\nLDFLAGS+= -L${LOCALBASE}/lib -lintl\n\nis even more redundant, because configure checks for -lintl automatically\n(and one of LIBS and LDFLAGS would have sufficed).\n\nMultibyte is no longer an option (it's the default), so you can remove\nanything that refers to it.\n\nIf you want to strip the binaries, you can use 'gmake install-strip'\ninstead of 'gmake install'. It's all automatic.\n\n\n[patch-ak]\n\nI assume you're going to fix this properly sometime...\n\n\n[files/patch-al]\n\nCan be removed for beta 2.\n\n\n[files/dot.*.in]\n\nDo you need the PATH assignment? Shouldn't the user decide for himself\nwhat he wants in the path? PostgreSQL certainly doesn't need the path\nset, if you're concerned about that.\n\nPGLIB hasn't done anything for several releases...\n\nPGDATESTYLE should now be set in the configuration file, so you can remove\nthe environment variable assignment.\n\nSetting locales (LC_ALL) is now best done as an option to initdb. Be sure\nto update pkg-message to that effect. 
Also, the encoding should be\nspecified to initdb (rather than configure --enable-multibyte=ENCODING).\n\nNot sure what the TZ assignment is supposed to accomplish. It certainly\ndoesn't alter the way the regression tests turn out, as it seems to claim.\nMight have been an ancient problem.\n\n\n[files/patch.configure]\n\nI think you should handle that through the makefiles. In fact, you\nprobably shouldn't specify an argument to the krb options if you're\nconcerned about this.\n\n\n[files/post-install-notes]\n\nBe sure to revise those, as some of the things are now shipped separately\n(such as PgAccess).\n\n\n[scripts/configure.postgresql]\n\nRemove multibyte option.\n\nPostgreSQL should work with Heimdal now.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 13 Sep 2002 00:09:13 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: FreeBSD Packages/Port for 7.3beta1..."
},
{
"msg_contents": "> Setting locales (LC_ALL) is now best done as an option to initdb. Be sure\n> to update pkg-message to that effect. Also, the encoding should be\n> specified to initdb (rather than configure --enable-multibyte=ENCODING).\n\nI guess --enable-multibyte=ENCODING does nothing with 7.3\n--\nTatsuo Ishii\n",
"msg_date": "Fri, 13 Sep 2002 10:11:58 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: FreeBSD Packages/Port for 7.3beta1..."
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> > Setting locales (LC_ALL) is now best done as an option to initdb. Be sure\n> > to update pkg-message to that effect. Also, the encoding should be\n> > specified to initdb (rather than configure --enable-multibyte=ENCODING).\n> \n> I guess --enable-multibyte=ENCODING does nothing with 7.3\n\nI have added to HISTORY:\n\nAlways enable multibyte in compile, remove --enable-multibyte option (Tatsuo)\nAlways enable locale in compile, remove --enable-locale option (Tatsuo)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 15 Sep 2002 23:36:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FreeBSD Packages/Port for 7.3beta1..."
},
{
"msg_contents": "Peter Eisentraut wrote:\n> LDFLAGS+= -L${LOCALBASE}/lib -lgnugetopt\n> CONFIGURE_ENV+= LDFLAGS=\"${LDFLAGS}\"\n> \n> is redundant. You already put LOCALBASE/lib into --with-libs. Also, if\n> you wish, we can automatically check for -lgnugetopt in configure. We\n> already do that for other spellings of the same library.\n\nI have applied the following patch to search for gnugetopt to CVS. \nAutoconf updated.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: configure.in\n===================================================================\nRCS file: /cvsroot/pgsql-server/configure.in,v\nretrieving revision 1.208\ndiff -c -c -r1.208 configure.in\n*** configure.in\t11 Sep 2002 04:27:48 -0000\t1.208\n--- configure.in\t16 Sep 2002 20:08:56 -0000\n***************\n*** 607,613 ****\n AC_CHECK_LIB(gen, main)\n AC_CHECK_LIB(PW, main)\n AC_CHECK_LIB(resolv, main)\n! AC_SEARCH_LIBS(getopt_long, [getopt])\n # QNX:\n AC_CHECK_LIB([[unix]], main)\n AC_SEARCH_LIBS(crypt, crypt)\n--- 607,613 ----\n AC_CHECK_LIB(gen, main)\n AC_CHECK_LIB(PW, main)\n AC_CHECK_LIB(resolv, main)\n! AC_SEARCH_LIBS(getopt_long, [getopt gnugetopt])\n # QNX:\n AC_CHECK_LIB([[unix]], main)\n AC_SEARCH_LIBS(crypt, crypt)",
"msg_date": "Mon, 16 Sep 2002 16:50:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FreeBSD Packages/Port for 7.3beta1..."
}
] |
[
{
"msg_contents": "In an attempt to beef up the PostgreSQL port for FreeBSD, I've added\nan option for adding additional optimization, similar to what MySQL\ndoes by compiling the server with -O6. I'm only compiling at -O3 with\nthe flag at the moment, however I wanted to ping the idea around to\nmake sure this isn't some land-mine that doesn't show up in the\nregression tests. My database hardware is in transition to a new\ndata center so I can't test this on my own at the moment. :-/\n\nThe size difference between -O and -O3 is only 200K or so... does\nanyone think that it'd be safe to head to -O6 on a wide scale? I\ndon't want to cream the FreeBSD user base with a bogus recommendation.\n\nI figure this is a database and 200KB doesn't amount to bo-diddly\ncompared to my data sizes so this seems acceptable in that dept. I'm\neven thinking about going so far as to have flex required for the\nbuild dependencies and setting -Cf or -CF for building the scanner\n(need to check the archives for which turned out to be faster).\n\nI'm also tinkering with the idea of automatically turn off fsync if\noptimize is set. Objections? -sc\n\n-- \nSean Chittenden",
"msg_date": "Mon, 9 Sep 2002 14:27:20 -0700",
"msg_from": "Sean Chittenden <sean@chittenden.org>",
"msg_from_op": true,
"msg_subject": "Optimization levels when compiling PostgreSQL..."
},
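[Editorial note: the optimization choice under discussion is just a matter of CFLAGS at configure time; an illustrative recipe (prefix path hypothetical), with the regression tests as the safety net suggested later in the thread:]

```shell
env CFLAGS='-O3' ./configure --prefix=/usr/local/pgsql
gmake                # build with the chosen optimization level
gmake check          # run the regression tests before trusting the build
```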
{
"msg_contents": "Sean Chittenden <sean@chittenden.org> writes:\n> The size difference between -O and -O3 is only 200K or so... does\n> anyone think that it'd be safe to head to -O6 on a wide scale?\n\nDunno. I'm not aware of any bits of the code that are unportable enough\nto break with max optimization of any correct compiler. But you might\nfind such a bug. Or a bug in your compiler. Are you feeling lucky\ntoday?\n\nMy feeling is that gcc -O2 is quite well tested with the PG code.\nI don't have any equivalent confidence in -O6. Give it a shot for\nbeta-testing, for sure, but I'm iffy about calling that a\nproduction-grade database release...\n\n> I'm even thinking about going so far as to have flex required for the\n> build dependencies and setting -Cf or -CF for building the scanner\n> (need to check the archives for which turned out to be faster).\n\nUm, didn't we do that stuff already in the standard build? AFAIK\nyou cannot build PG with any lexer except flex, and Peter already\nhacked the flags.\n\n> I'm also tinkering with the idea of automatically turn off fsync if\n> optimize is set.\n\nNo-bloody-way. Trusting your compiler is an entirely separate issue\nfrom whether you trust your disk hardware, power source, etc. Puh-leez\ndo not muddy the waters by introducing a port-specific variation in\nchoices that only the DBA of a particular installation should make.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 Sep 2002 22:40:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimization levels when compiling PostgreSQL... "
},
{
"msg_contents": "> > The size difference between -O and -O3 is only 200K or so... does\n> > anyone think that it'd be safe to head to -O6 on a wide scale?\n> \n> Dunno. I'm not aware of any bits of the code that are unportable\n> enough to break with max optimization of any correct compiler. But\n> you might find such a bug. Or a bug in your compiler. Are you\n> feeling lucky today?\n> \n> My feeling is that gcc -O2 is quite well tested with the PG code. I\n> don't have any equivalent confidence in -O6. Give it a shot for\n> beta-testing, for sure, but I'm iffy about calling that a\n> production-grade database release...\n\nI'm thinking about changing this from a beta port to a -devel port\nthat I'll periodically update with snapshots. I'll turn on -O6 for\nthe -devel port and -O2 for production for now. If I don't hear of\nany random bogons in the code I'll see if I can't increase it further\nto -O3 and beyond at a slow/incremental rate.\n\nHas there been any talk of doing incremental -snapshots of the code\nbase? I've really fallen inlove with the concept for development.\nHaving incremental changes is much easier to cope with than massive\nsteps forward.\n\n> > I'm even thinking about going so far as to have flex required for the\n> > build dependencies and setting -Cf or -CF for building the scanner\n> > (need to check the archives for which turned out to be faster).\n> \n> Um, didn't we do that stuff already in the standard build? AFAIK\n> you cannot build PG with any lexer except flex, and Peter already\n> hacked the flags.\n\nHrm, I should go check the archives, but I thought what was used was\none step below -C[fF] and was used because of size concerns for\nembedded databases. My memory for what happens on mailing lists seems\nto be fading though so I'll look it up.\n\n> > I'm also tinkering with the idea of automatically turn off fsync if\n> > optimize is set.\n> \n> No-bloody-way. 
Trusting your compiler is an entirely separate issue\n> from whether you trust your disk hardware, power source, etc.\n> Puh-leez do not muddy the waters by introducing a port-specific\n> variation in choices that only the DBA of a particular installation\n> should make.\n\nWhoop, guess I won't do that. :~) Thanks. -sc\n\n-- \nSean Chittenden\n",
"msg_date": "Mon, 9 Sep 2002 19:54:19 -0700",
"msg_from": "Sean Chittenden <sean@chittenden.org>",
"msg_from_op": true,
"msg_subject": "Re: Optimization levels when compiling PostgreSQL..."
},
{
"msg_contents": "Tom Lane wrote:\n> Sean Chittenden <sean@chittenden.org> writes:\n> > The size difference between -O and -O3 is only 200K or so... does\n> > anyone think that it'd be safe to head to -O6 on a wide scale?\n> \n> Dunno. I'm not aware of any bits of the code that are unportable enough\n> to break with max optimization of any correct compiler. But you might\n> find such a bug. Or a bug in your compiler. Are you feeling lucky\n> today?\n> \n> My feeling is that gcc -O2 is quite well tested with the PG code.\n> I don't have any equivalent confidence in -O6. Give it a shot for\n> beta-testing, for sure, but I'm iffy about calling that a\n> production-grade database release...\n\nAnd of course the big question is whether you will see any performance\nimprovement with -O6 vs. -O2. My guess is no.\n\n> \n> > I'm even thinking about going so far as to have flex required for the\n> > build dependencies and setting -Cf or -CF for building the scanner\n> > (need to check the archives for which turned out to be faster).\n> \n> Um, didn't we do that stuff already in the standard build? AFAIK\n> you cannot build PG with any lexer except flex, and Peter already\n> hacked the flags.\n\nYes, I thought that was a done deal too.\n\n> > I'm also tinkering with the idea of automatically turn off fsync if\n> > optimize is set.\n> \n> No-bloody-way. Trusting your compiler is an entirely separate issue\n> from whether you trust your disk hardware, power source, etc. Puh-leez\n> do not muddy the waters by introducing a port-specific variation in\n> choices that only the DBA of a particular installation should make.\n\nTom is right. Hardware/power reliability is a different issue.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 9 Sep 2002 22:55:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimization levels when compiling PostgreSQL..."
},
{
"msg_contents": "On Mon, 9 Sep 2002, Sean Chittenden wrote:\n\n> I'm thinking about changing this from a beta port to a -devel port\n> that I'll periodically update with snapshots. I'll turn on -O6 for\n> the -devel port and -O2 for production for now. If I don't hear of\n> any random bogons in the code I'll see if I can't increase it further\n> to -O3 and beyond at a slow/incremental rate.\n\nKeep in mind that, while gcc is pretty stable for i386, the higher\noptimization levels (above -O2) do tend to have bogons on other\nprocessors, that vary with which version of gcc you're running.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Tue, 10 Sep 2002 11:56:22 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: Optimization levels when compiling PostgreSQL..."
},
{
"msg_contents": "Sean Chittenden wrote:\n> Hrm, I should go check the archives, but I thought what was used was\n> one step below -C[fF] and was used because of size concerns for\n> embedded databases. My memory for what happens on mailing lists seems\n> to be fading though so I'll look it up.\n\nI see in parser/Makefile:\n\n\tFLEXFLAGS = -CF\n\nand\n\t\n\tifdef FLEX\n\t $(FLEX) $(FLEXFLAGS) -o'$@' $<\n\telse\n\t @$(missing) flex $< $@\n\tendif\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 9 Sep 2002 22:57:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimization levels when compiling PostgreSQL..."
},
{
"msg_contents": "Curt Sampson wrote:\n> On Mon, 9 Sep 2002, Sean Chittenden wrote:\n> \n> > I'm thinking about changing this from a beta port to a -devel port\n> > that I'll periodically update with snapshots. I'll turn on -O6 for\n> > the -devel port and -O2 for production for now. If I don't hear of\n> > any random bogons in the code I'll see if I can't increase it further\n> > to -O3 and beyond at a slow/incremental rate.\n> \n> Keep in mind that, while gcc is pretty stable for i386, the higher\n> optimization levels (above -O2) do tend to have bogons on other\n> processors, that vary with which version of gcc you're running.\n\nYes, last I heard, PostgreSQL doesn't work on FreeBSD/alpha if compiled\nwith -O2. You can see template/freebsd for that alpha flag override.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 9 Sep 2002 22:59:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimization levels when compiling PostgreSQL..."
},
{
"msg_contents": "> > > The size difference between -O and -O3 is only 200K or so... does\n> > > anyone think that it'd be safe to head to -O6 on a wide scale?\n> > \n> > Dunno. I'm not aware of any bits of the code that are unportable enough\n> > to break with max optimization of any correct compiler. But you might\n> > find such a bug. Or a bug in your compiler. Are you feeling lucky\n> > today?\n> > \n> > My feeling is that gcc -O2 is quite well tested with the PG code.\n> > I don't have any equivalent confidence in -O6. Give it a shot for\n> > beta-testing, for sure, but I'm iffy about calling that a\n> > production-grade database release...\n> \n> And of course the big question is whether you will see any performance\n> improvement with -O6 vs. -O2. My guess is no.\n\nAgreed, however some of the loop-unrolling might prove to have some\noptimization, but we'll see. I'd like to think that there's some\nactual value in -O6 beyond the geek appeal of being able to say it's\nbeen compiled with all the optimizations possible. ::shrug::\n\n> > I'm thinking about changing this from a beta port to a -devel port\n> > that I'll periodically update with snapshots. I'll turn on -O6 for\n> > the -devel port and -O2 for production for now. If I don't hear of\n> > any random bogons in the code I'll see if I can't increase it further\n> > to -O3 and beyond at a slow/incremental rate.\n> \n> Keep in mind that, while gcc is pretty stable for i386, the higher\n> optimization levels (above -O2) do tend to have bogons on other\n> processors, that vary with which version of gcc you're running.\n\nFully aware of these!!! I've got a few systems running GCC 3.2 and\n3.3 and it's touch and go above -O3, but most of these bogons are\nmozilla and GUI related when it comes to complex thread handling. For\nmore simple single threaded procs, the bugs get found out about pretty\nquickly and end up making their way back into the GCC src tree. 
I'm\nthinking -O6 for the -devel port should work nicely as a way of\ntesting things out. -sc\n\n-- \nSean Chittenden\n",
"msg_date": "Mon, 9 Sep 2002 20:02:19 -0700",
"msg_from": "Sean Chittenden <sean@chittenden.org>",
"msg_from_op": true,
"msg_subject": "Re: Optimization levels when compiling PostgreSQL..."
},
{
"msg_contents": "Sean Chittenden wrote:\n> > > > The size difference between -O and -O3 is only 200K or so... does\n> > > > anyone think that it'd be safe to head to -O6 on a wide scale?\n> > > \n> > > Dunno. I'm not aware of any bits of the code that are unportable enough\n> > > to break with max optimization of any correct compiler. But you might\n> > > find such a bug. Or a bug in your compiler. Are you feeling lucky\n> > > today?\n> > > \n> > > My feeling is that gcc -O2 is quite well tested with the PG code.\n> > > I don't have any equivalent confidence in -O6. Give it a shot for\n> > > beta-testing, for sure, but I'm iffy about calling that a\n> > > production-grade database release...\n> > \n> > And of course the big question is whether you will see any performance\n> > improvement with -O6 vs. -O2. My guess is no.\n> \n> Agreed, however some of the loop-unrolling might prove to have some\n> optimization, but we'll see. I'd like to think that there's some\n> actual value in -O6 beyond the geek appeal of being able to say it's\n> been compiled with all the optimizations possible. ::shrug::\n\nAnd you think the answer is ... I think we all know what the answer is.\n:-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 9 Sep 2002 23:04:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimization levels when compiling PostgreSQL..."
},
{
"msg_contents": "> > > > My feeling is that gcc -O2 is quite well tested with the PG\n> > > > code. I don't have any equivalent confidence in -O6. Give it\n> > > > a shot for beta-testing, for sure, but I'm iffy about calling\n> > > > that a production-grade database release...\n> > > \n> > > And of course the big question is whether you will see any\n> > > performance improvement with -O6 vs. -O2. My guess is no.\n> > \n> > Agreed, however some of the loop-unrolling might prove to have\n> > some optimization, but we'll see. I'd like to think that there's\n> > some actual value in -O6 beyond the geek appeal of being able to\n> > say it's been compiled with all the optimizations possible.\n> > ::shrug::\n> \n> And you think the answer is ... I think we all know what the answer\n> is. :-)\n\nI think the newbie/l33t geek appeal of being able to say something's\ncompiled and works with -O6 is probably worth more in terms of\nmarketing than it is in terms of actual technical merit. Those that\nneed 10K lookups per second should be serializing data into a bdb file\nwith a unique key and not using a relational database (or helping out\nwith pgsql-replication). :~) -sc\n\n-- \nSean Chittenden\n",
"msg_date": "Mon, 9 Sep 2002 20:18:38 -0700",
"msg_from": "Sean Chittenden <sean@chittenden.org>",
"msg_from_op": true,
"msg_subject": "Re: Optimization levels when compiling PostgreSQL..."
},
{
"msg_contents": "Sean Chittenden <sean@chittenden.org> writes:\n> Agreed, however some of the loop-unrolling might prove to have some\n> optimization, but we'll see. I'd like to think that there's some\n> actual value in -O6 beyond the geek appeal of being able to say it's\n> been compiled with all the optimizations possible. ::shrug::\n\nBTW, -O3 is the highest GCC optimization level; anything higher than\nthat is synonymous with -O3, I believe. Also, -O3 doesn't have\nanything to do with loop unrolling, AFAIK.\n\nAs for the value of enabling that flag, it depends IMHO on the\nperformance gain you see. If there is a significant difference, let\n-hackers know, and it might be worth considering enabling it by\ndefault for certain platforms. If the performance difference is\nnegligible (which is what I'd suspect), I don't think it's worth the\ncode bloat, reduced debuggability, or the potential for running into\nmore compiler bugs.\n\nAlso, if -O3 *is* a good compiler option, I dislike the idea of\nenabling it for your own packages but no one else's. IMHO distributors\nshould not futz with packages more than is strictly necessary, and a\nchange like this seems both unwarranted and potentially dangerous. If\n-O3 is a good idea, we should make the change for the appropriate\nplatforms in the official source, and let it get the widespread\ntesting it requires.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n",
"msg_date": "10 Sep 2002 00:51:12 -0400",
"msg_from": "Neil Conway <neilc@samurai.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimization levels when compiling PostgreSQL..."
},
{
"msg_contents": "Sean Chittenden <sean@chittenden.org> writes:\n> Has there been any talk of doing incremental -snapshots of the code\n> base?\n\nI don't really see the point. Snapshots of development code are\navailable from CVS anyway -- and if you're going to be running a\npre-alpha version of a relational database, I don't think that\nknowledge of CVS is an onerous requirement.\n\nAt any rate, the problem with releasing snapshots is that the system\ncatalogs would change so often that upgrading between snapshots would\nbe a headache. i.e. the changes required to upgrade from a 2 week old\ndevelopment snapshot to a current snapshot would still be non-trivial,\nsignificantly reducing the usefulness of snapshots, IMHO.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n",
"msg_date": "10 Sep 2002 00:57:53 -0400",
"msg_from": "Neil Conway <neilc@samurai.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimization levels when compiling PostgreSQL..."
},
{
"msg_contents": "Neil Conway <neilc@samurai.com> writes:\n> Sean Chittenden <sean@chittenden.org> writes:\n>> Has there been any talk of doing incremental -snapshots of the code\n>> base?\n\n> I don't really see the point. Snapshots of development code are\n> available from CVS anyway -- and if you're going to be running a\n> pre-alpha version of a relational database, I don't think that\n> knowledge of CVS is an onerous requirement.\n\nThere's also the nightly automatic snapshot tarball on the FTP server,\nif you don't want to learn CVS...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Sep 2002 10:03:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimization levels when compiling PostgreSQL... "
},
{
"msg_contents": "> > Agreed, however some of the loop-unrolling might prove to have some\n> > optimization, but we'll see. I'd like to think that there's some\n> > actual value in -O6 beyond the geek appeal of being able to say it's\n> > been compiled with all the optimizations possible. ::shrug::\n> \n> BTW, -O3 is the highest GCC optimization level; anything higher than\n> that is synonymous with -O3, I believe. Also, -O3 doesn't have\n> anything to do with loop unrolling, AFAIK.\n\nIn terms of instruction optimization, yes. Above that is where it\ndoes the loop unrolling, inlining, and other various tweaks.\n\n> As for the value of enabling that flag, it depends IMHO on the\n> performance gain you see. If there is a significant difference, let\n> -hackers know, and it might be worth considering enabling it by\n> default for certain platforms. If the performance difference is\n> negligible (which is what I'd suspect), I don't think it's worth the\n> code bloat, reduced debuggability, or the potential for running into\n> more compiler bugs.\n\nAgreed. Later today I'll thump on my good SCSI system and let you\nknow what happens.\n\n> Also, if -O3 *is* a good compiler option, I dislike the idea of\n> enabling it for your own packages but no one else's. IMHO\n> distributors should not futz with packages more than is strictly\n> necessary, and a change like this seems both unwarranted and\n> potentially dangerous. If -O3 is a good idea, we should make the\n> change for the appropriate platforms in the official source, and let\n> it get the widespread testing it requires.\n\nAgreed, but the testing's got to start someplace. :~) The -O3 is a\ntunable that you can optionally set or unset so it's not like I'm\nforcing it to be on (though it will be on by default for the -devel port).\n\n-sc\n\n-- \nSean Chittenden\n",
"msg_date": "Tue, 10 Sep 2002 08:30:30 -0700",
"msg_from": "Sean Chittenden <sean@chittenden.org>",
"msg_from_op": true,
"msg_subject": "Re: Optimization levels when compiling PostgreSQL..."
},
{
"msg_contents": "> > Has there been any talk of doing incremental -snapshots of the\n> > code base?\n> \n> I don't really see the point. Snapshots of development code are\n> available from CVS anyway -- and if you're going to be running a\n> pre-alpha version of a relational database, I don't think that\n> knowledge of CVS is an onerous requirement.\n\nAgreed, however it's nice to have landmarks along the way, such as a\npoint of stability, or once a new feature gets rolled in and needs some\nuse (e.g. schemas or auto-commit).\n\n> At any rate, the problem with releasing snapshots is that the system\n> catalogs would change so often that upgrading between snapshots\n> would be a headache. i.e. the changes required to upgrade from a 2\n> week old development snapshot to a current snapshot would still be\n> non-trivial, significantly reducing the usefulness of snapshots,\n> IMHO.\n\nDon't doubt it at all, but that reminds me: I need to add a message\nreminding the developer to re-initdb when installing this version.\nThis is for a -devel port that'd track the new features that are being\nrolled into postgresql so there's a large degree of competence assumed\nwhen someone installs this particular version from the tree. I've\nalso slapped up some big warnings to make sure that it's developers\nonly. At the moment, however, I think I'll probably roll my own\ntarballs when an island of stability has been found unless the\nsnapshot server is holding onto its snaps for several months at a\ntime. -sc\n\n-- \nSean Chittenden\n",
"msg_date": "Tue, 10 Sep 2002 08:37:50 -0700",
"msg_from": "Sean Chittenden <sean@chittenden.org>",
"msg_from_op": true,
"msg_subject": "Re: Optimization levels when compiling PostgreSQL..."
},
{
"msg_contents": "Sean Chittenden <sean@chittenden.org> writes:\n> Don't doubt it at all, but that reminds me: I need to add a message\n> reminding the developer to re-initdb when installing this version.\n\nThe catversion check isn't good enough for you?\n\nIt seems you are busily reinventing a bunch of decisions that have\nalready been made, and in most cases have stood the test of time.\nPerhaps you should be less eager to make this Sean's Own Postgres\nVersion, and more eager to be pushing out something that matches\nwhat everyone else is testing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Sep 2002 11:57:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimization levels when compiling PostgreSQL... "
},
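The catversion check Tom refers to works because initdb stamps the cluster with a catalog version number (the value pg_controldata reports) that the server compares against its compiled-in value at startup. A minimal sketch of that comparison — the version numbers below are made-up examples in the YYYYMMDDN format, not taken from any real release:

```shell
# Sketch of the catalog-version comparison the server performs at startup.
# The real check lives in the backend; this just illustrates the idea.
needs_initdb() {
    old=$1
    new=$2
    if [ "$old" != "$new" ]; then
        echo "catalog version $old != $new: re-initdb required"
    else
        echo "catalog versions match: existing data directory is usable"
    fi
}

needs_initdb 200208121 200209101
```

In practice the server simply refuses to start against a mismatched data directory, so a -devel port's users find out immediately without any extra reminder message.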
{
"msg_contents": "> > Don't doubt it at all, but that reminds me: I need to add a message\n> > reminding the developer to re-initdb when installing this version.\n> \n> The catversion check isn't good enough for you?\n\nNope, it's good enough and then some. I've gotten in the habit of\njust re-initdb'ing and figured that's what the rest of the world did:\ndidn't realize there was a way of testing the catalog versions. My\nlife seems to be spent inside the DB and not playing with it from the\nCLI.\n\n> It seems you are busily reinventing a bunch of decisions that have\n> already been made, and in most cases have stood the test of time.\n> Perhaps you should be less eager to make this Sean's Own Postgres\n> Version, and more eager to be pushing out something that matches\n> what everyone else is testing.\n\nOuch! I hope not. Testing gcc optimizations and adding a developers\nport of PostgreSQL hopefully isn't for just myself.\n\nPostgreSQL has a chunk of work that needs to happen when setting it up\nor upgrading and I am trying to smooth out as much of that as possible\nsuch that installing PostgreSQL gets to the point of having it\nreasonably tuned for the OS it's being installed on after installing\nthe port. It's not that installing PostgreSQL is hard, far from it, but\nthere's a reasonable checklist of things that need to happen and that\nrequires a certain knowledge of the database, tuning, and\nthe OS you're on: something, for better or worse, I assume most\nusers/DBA's don't have. In a typical install, I generally do some\nvariation of the following:\n\n*) setenv CFLAGS '-g -O3'\n*) make\n*) pg_dumpall > ~/db_dump\n*) ${LOCALBASE}/etc/rc.d/010.pgsql.sh stop\n*) make deinstall\n*) make install\n*) mv $PGDATA $PGDATA.old\n*) initdb\n*) diff -c $PGDATA.old/data/postgresql.conf $PGDATA/data/postgresql.conf > $PGDATA/data/postgresql.conf.patch\n*) cd $PGSQL/data; patch -p0 < postgresql.conf.patch\n*) edit postgresql.conf\n*) ${LOCALBASE}/etc/rc.d/010.pgsql.sh start\n*) psql -f ~/db_dump\n*) vacuumdb -a -f -z\n*) tweak various sysctl's to increase fd's, etc.\n*) hopefully don't have to recompile the kernel with more shmem, etc\n\nOn some hosts, I've even got a script that I run that does all of that\nfor me because it's the exact same procedure every time. :-/ Getting\nas much of that done and taken care of as possible would probably be\nappreciated and enjoyed by others. It's not fool-proof, don't get me\nwrong, but there's certainly some of that that can be automated, and\nwith tunables it's something I'd like to do, for usability's sake.\n\n::shrug:: Usability's a touchy subject though and none of this will be\non by default so as to not offend the power-users out there. -sc\n\n-- \nSean Chittenden\n",
"msg_date": "Tue, 10 Sep 2002 09:30:43 -0700",
"msg_from": "Sean Chittenden <sean@chittenden.org>",
"msg_from_op": true,
"msg_subject": "Re: Optimization levels when compiling PostgreSQL..."
},
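The checklist above lends itself to exactly the kind of script Sean mentions. A dry-run sketch of the dump/re-initdb/restore core of it — the `run` helper only echoes each step, and the paths are assumptions carried over from the list above, not a tested procedure:

```shell
#!/bin/sh
# Dry-run version of the upgrade routine described above.
# Drop the echo in run() to execute the steps for real.
PGDATA=${PGDATA:-/usr/local/pgsql/data}
DUMP=${DUMP:-$HOME/db_dump}

run() { echo "+ $*"; }

run pg_dumpall ">" "$DUMP"            # dump everything from the old server
run pg_ctl -D "$PGDATA" stop          # stop the old postmaster
run mv "$PGDATA" "$PGDATA.old"        # keep the old cluster around
run initdb -D "$PGDATA"               # fresh cluster with the new catalogs
run pg_ctl -D "$PGDATA" start
run psql -f "$DUMP" template1         # reload the dump
run vacuumdb -a -z                    # analyze the restored databases
```

The echo-first structure is deliberate: with catalog changes between snapshots, seeing the exact commands before an irreversible `mv $PGDATA` is cheap insurance.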
{
"msg_contents": "Sean Chittenden writes:\n\n> Hrm, I should go check the archives, but I thought what was used was\n> one step below -C[fF] and was used because of size concerns for\n> embedded databases. My memory for what happens on mailing lists seems\n> to be fading though so I'll look it up.\n\nThe particular decision was -CF vs. -CFa (\"a\" for alignment). The latter\nwas about 2% faster in the test case but increased the size of the\nexecutable by 80 kB.\n\nNote that the test case was extremely contrived -- parsing of about 70 MB\nof uninteresting commands with little to no other activity. For a normal\ncommand the scanner overhead is really small.\n\nOn the other hand, the test case was run on a x86 machine which is not\nknown for being sensitive to alignment. So on a different architecture\nyou might get more significant speedups. Try it if you like.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 11 Sep 2002 01:05:03 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Optimization levels when compiling PostgreSQL..."
},
{
"msg_contents": "Neil Conway writes:\n\n> Also, if -O3 *is* a good compiler option, I dislike the idea of\n> enabling it for your own packages but no one else's. IMHO distributors\n> should not futz with packages more than is strictly necessary, and a\n> change like this seems both unwarranted, and potentially dangerous. If\n> -O3 is a good idea, we should make the change for the appropriate\n> platforms in the official source, and let it get the widespread\n> testing it requires.\n\nI disagree. Choosing the compiler options is exactly the job of the\ninstaller, packager, or distributor. That's why you can specify CFLAGS on\nthe command line after all, and most distributors' build environments make\nuse of that.\n\nI don't think we're doing anyone a service if we spread wild speculations\nabout how risky certain compiler options are. If your compiler creates\nbroken code, don't use it. Packagers are expected to know about their\ncompiler. If they create broken packages and behave irresponsibly about\nit they won't be making packages much longer.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 11 Sep 2002 01:05:20 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Optimization levels when compiling PostgreSQL..."
},
{
"msg_contents": "On Wed, 11 Sep 2002, Peter Eisentraut wrote:\n\n> I disagree. Choosing the compiler options is exactly the job of the\n> installer, packager, or distributor.\n\nIf there is one, yes.\n\n> I don't think we're doing anyone a service if we spread wild speculations\n> about how risky certain compiler options are. If your compiler creates\n> broken code, don't use it. Packagers are expected to know about their\n> compiler. If they create broken packages and behave irresponsibly about\n> it they won't be making packages much longer.\n\nHowever, many users are not as knowledgable as packagers, but may\nstill be compiling from source. For those people, I don't think it's\nunreasonable to say, \"Use -O2 unless you know what you are doing.\"\n\n(I'm not sure we're actually disagreeing here, but I just wanted to make\nthis point clear.)\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Wed, 11 Sep 2002 10:31:06 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: Optimization levels when compiling PostgreSQL..."
},
{
"msg_contents": "On Tuesday 10 September 2002 09:31 pm, Curt Sampson wrote:\n> On Wed, 11 Sep 2002, Peter Eisentraut wrote:\n> > I disagree. Choosing the compiler options is exactly the job of the\n> > installer, packager, or distributor.\n\n> If there is one, yes.\n\nIf the enduser is directly compiling the source, then that user is responsible \nfor passing the flags desired -- they become their own packager.\n\n> However, many users are not as knowledgable as packagers, but may\n> still be compiling from source. For those people, I don't think it's\n> unreasonable to say, \"Use -O2 unless you know what you are doing.\"\n\nI still remember when the Alpha port _required_ -O0. And it was documented \nthat way, IIRC. \n\nCompiling from source implies certain knowledge. Automated from source \nbuilds, such as ports or linux distributions such as Gentoo can handle this \nin their own build systems. \n\nIf someone can figure out how to override the default, then they can deal with \nthe results, IMHO.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 10 Sep 2002 22:11:40 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Optimization levels when compiling PostgreSQL..."
},
{
"msg_contents": "On Tue, 10 Sep 2002, Lamar Owen wrote:\n\n> I still remember when the Alpha port _required_ -O0. And it was documented\n> that way, IIRC.\n\nGood. It would also be very nice if, in situations like this, the\nconfigure script could detect this and use -O0 when compiling on\nthe alpha.\n\n> Compiling from source implies certain knowledge.\n\nNo it doesn't. All it means is that someone's using a system for\nwhich they don't have a package handy.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Wed, 11 Sep 2002 11:18:53 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: Optimization levels when compiling PostgreSQL..."
},
{
"msg_contents": "Sean Chittenden <sean@chittenden.org> writes:\n\n> I'm thinking about changing this from a beta port to a -devel port\n> that I'll periodically update with snapshots. I'll turn on -O6 for\n> the -devel port and -O2 for production for now. If I don't hear of\n> any random bogons in the code I'll see if I can't increase it further\n> to -O3 and beyond at a slow/incremental rate.\n\n-O3 is usually slower than -O2 because of increased code size due to\nautomatic inlining. With GCC, -O4 etc. are all equivalent to -O3.\n\n-- \nFlorian Weimer \t Weimer@CERT.Uni-Stuttgart.DE\nUniversity of Stuttgart http://CERT.Uni-Stuttgart.DE/people/fw/\nRUS-CERT fax +49-711-685-5898\n",
"msg_date": "Mon, 23 Sep 2002 07:44:54 +0200",
"msg_from": "Florian Weimer <Weimer@CERT.Uni-Stuttgart.DE>",
"msg_from_op": false,
"msg_subject": "Re: Optimization levels when compiling PostgreSQL..."
}
] |
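As Neil and Florian note above, GCC treats any optimization level above 3 as -O3, so "-O6" requests no extra optimization. A throwaway sketch of that clamping behavior (the function is illustrative only, not part of any build system):

```shell
# GCC clamps -ON to -O3 for N > 3; "-O6" therefore means the same code
# generation as -O3. Illustrative only.
effective_opt_level() {
    n=$1
    if [ "$n" -gt 3 ]; then
        n=3
    fi
    echo "-O$n"
}

effective_opt_level 6   # same as -O3
effective_opt_level 2
```

To actually try a different level, the override mechanism Peter alludes to is the CFLAGS environment variable honored by configure, e.g. `CFLAGS='-O3' ./configure`.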
[
{
"msg_contents": "Dear Tom,\n\n>> <herve@elma.fr> writes:\n>> But when I try to import it inside 7.3b1 I get this :\n>> (seems that the copy command is not fully compatible with the 7.2.2\n>> pg_dumpall ?)\n>>\n>> Many things like this : (I have only copied some parts ...)\n>> Size of the dump about 1.5 Gb ...\n>>\n>> Query buffer reset (cleared).\n>> psql:/tmp/dump_mybase.txt:1015274: invalid command \\nPour\n>> Query buffer reset (cleared).\n>\n>It seems pretty clear that the COPY command itself failed, leaving psql\n>trying to interpret the following data as SQL commands. But you have\n>not shown us either the COPY command or the error message it generated,\n>so there's not a lot we can say about it...\n>\n>regards, tom lane\n\nOK, I have (I hope) found the trouble ... maybe a mistake on my part, but one \nwhich worked with v7.2.2 ... (I think I have to alter my table with \ndefault current_date ...)\n\nI have this error message :\npsql:dump.7.2.2.txt:304: ERROR: Column \"datecrea\" is of type date but \ndefault expression is of type timestamp with time zone\nYou will need to rewrite or cast the expression\n\nfor the field :\n\"datecrea\" date DEFAULT now(),\n\nSo afterwards, the import of the data produces error messages because the \nprevious table has not been created ... Am I right?\n\nI have also a strange error :\npsql:dump.7.2.2.txt:1087: ERROR: function plpgsql_call_handler() does not \nreturn type language_handler\npsql:dump.7.2.2.txt:1126: ERROR: language \"plpgsql\" does not exist\n\nfor those lines :\n--\n-- TOC Entry ID 292 (OID 2083293)\n--\n-- Name: \"plpgsql_call_handler\" () Type: FUNCTION Owner: postgres\n--\n\nCREATE FUNCTION \"plpgsql_call_handler\" () RETURNS opaque AS \n'/usr/local/pgsql/lib/plpgsql.so', 'plpgsql_call_handler' LANGUAGE 'C';\n\n--\n-- TOC Entry ID 293 (OID 2083294)\n--\n-- Name: plpgsql Type: PROCEDURAL LANGUAGE Owner:\n--\n\nCREATE TRUSTED PROCEDURAL LANGUAGE 'plpgsql' HANDLER \"plpgsql_call_handler\" \nLANCOMPILER 'PL/pgSQL';\n\nHope this helps ...\n\nRegards,\n-- \nHervé\n\n\n",
"msg_date": "Tue, 10 Sep 2002 02:19:32 +0200",
"msg_from": "=?iso-8859-1?B?SGVydukgUGllZHZhY2hl?= <herve@elma.fr>",
"msg_from_op": true,
"msg_subject": "Re: Impossible to import pg_dumpall from 7.2.2 to 7.3b1"
}
] |
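The first error in the thread above comes from 7.3's stricter coercion of column defaults: `now()` yields `timestamp with time zone`, which 7.3 no longer silently coerces to `date`. A sketch of the edit to make to the dumped schema before reloading — the column name is taken from the message, and the surrounding table is invented for illustration:

```sql
-- Original dumped definition that 7.3 rejects:
--   "datecrea" date DEFAULT now(),

-- Rewritten with a default of the right type, as the error message advises:
CREATE TABLE example (
    datecrea date DEFAULT CURRENT_DATE
);

-- or, editing the dump in place with an explicit cast:
--   "datecrea" date DEFAULT now()::date,
```

Either form satisfies the "rewrite or cast the expression" hint; once the CREATE TABLE succeeds, the follow-on COPY errors for that table disappear as well.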
[
{
"msg_contents": "Dear all,\n\nI'm currently working on my thesis and I chose psql. What I need to do\nis to define a new type in psql.\n\nIt should be a dynamic array.\n\n| 1 | 2 | 3.0 | 4.5 | 2.1 | . .. . .\n\n\n// This one is not working\ntypedef struct Myindex {\n\tdouble *indexes;\n\tint level;\n\tint size;\n} Myindex\n\nMyindex *\nMyindex_in {\n\n}\n\nMyindex *\nMyindex_out {\n\tHowever when I try to get back the data. It seems that the last\n\tinsertion always overwrite other previous insertion.\n\tIn particular, it overwrites all data from 2nd to n-1th record.\n\twhere n is the number of insertion but not the first one.\n}\n\n\n// This one works ok but the idea is to have a dynamic array.\n// This would defeat the purpose of this new structure.\n\ntypedef struct Myindex {\n double indexes[10];\n int level;\n int size;\n} Myindex;\n\n\nStandalone debugging works for both cases.\nHowever psql accepts only the static array.\n\n\nCould anybody enlighten me on this issue, please\n\n\nregards,\nVan\n\n",
"msg_date": "Tue, 10 Sep 2002 12:23:45 +1000 (EST)",
"msg_from": "Vanmunin Chea <vac@cse.unsw.EDU.AU>",
"msg_from_op": true,
"msg_subject": "If there a bug in the psql or just a feature . "
},
{
"msg_contents": "Vanmunin Chea <vac@cse.unsw.EDU.AU> writes:\n> // This one is not working\n> typedef struct Myindex {\n> \tdouble *indexes;\n> \tint level;\n> \tint size;\n> } Myindex\n\nYou cannot use a pointer inside a Postgres datatype. The system will\nhave no idea that the pointer is there and so will not copy the\npointed-to data, nor update the pointer, when the datum is copied,\nstored on disk, etc.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Sep 2002 11:10:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: If there a bug in the psql or just a feature . "
},
{
"msg_contents": "Hey Tom,\n\n\tThanks for the tips, Tom. I had that feeling from the start\n(with the two different implementations) but never actually had a chance\nto confirm it with someone.\n\n1. Is there a way to store the dynamic array at all ?\n\n\n\tI notice psql has a similar type - Single Dynamic Dimensional\nArray. However there aren't any built-in operators (<,<=,==,>,>=) for arrays\nto do sorting.\n\n2. Can I write one up ?\n\n\nregards,\nVan.\n\n\n\nOn Tue, 10 Sep 2002, Tom Lane wrote:\n\n> Vanmunin Chea <vac@cse.unsw.EDU.AU> writes:\n> > // This one is not working\n> > typedef struct Myindex {\n> > \tdouble *indexes;\n> > \tint level;\n> > \tint size;\n> > } Myindex\n>\n> You cannot use a pointer inside a Postgres datatype. The system will\n> have no idea that the pointer is there and so will not copy the\n> pointed-to data, nor update the pointer, when the datum is copied,\n> stored on disk, etc.\n>\n> \t\t\tregards, tom lane\n>\n\nVanmunin Chea\n\n",
"msg_date": "Wed, 11 Sep 2002 01:22:40 +1000 (EST)",
"msg_from": "Vanmunin Chea <vac@cse.unsw.EDU.AU>",
"msg_from_op": true,
"msg_subject": "Re: If there a bug in the psql or just a feature . "
},
{
"msg_contents": "On Tue, 2002-09-10 at 17:22, Vanmunin Chea wrote:\n> Hey Tom,\n> \n> \tThanks for the tips, Tom. I have that feeling from the start\n> (with the two different implementation) but never actually have a chance\n> to confirm with someone.\n> \n> 1. It there a way to store the dynamic array at all ?\n> \n> \n> \tI notice psql has a similar type - Single Dynamic Dimensional\n> Array. However there isn't any built in operators(<,<=,==,>,>=) for Array\n> to do sorting.\n> \n> 2. Can I write one up ?\n\nSee attachment.\n\nUnfortunately I ran out of time before figuring out how to make btree\nindex use it ;(\n\nAlso, in 7.3 there are a lot more ops in contrib/intarray than there were\nin 7.2.\n\n-------------\nHannu",
"msg_date": "17 Sep 2002 10:16:54 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: If there a bug in the psql or just a feature ."
}
] |
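For reference, the contrib/intarray module Hannu mentions ships operators along these lines. The spellings below follow the 7.x-era module as I recall it; later releases renamed some of them (`@` eventually became `@>`), so treat the exact names as an assumption to check against the installed version:

```sql
-- After installing contrib/intarray into the database:
SELECT '{1,2,3}'::int4[] && '{3,4,5}'::int4[];  -- overlap: arrays share an element
SELECT '{1,2,3}'::int4[] @  '{1,2}'::int4[];    -- left array contains right
SELECT '{1,2}'::int4[]   ~  '{1,2,3}'::int4[];  -- left array is contained in right
```

These give containment and overlap tests (and GiST index support), though not the ordering operators (<, <=, >, >=) a btree opclass would need, which is the gap Hannu's attachment was aimed at.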
[
{
"msg_contents": "\n> > Here are the proposals for solutioning the \"Return proper effected\n> > tuple count from complex commands [return]\" issue as seen on TODO.\n> >\n> > Any comments ?... This is obviously open to voting and discussion.\n> \n> We don't have a whole lot of freedom in this; this area is \n> covered by the\n> SQL standard. The major premise in the standard's point of \n> view is that\n> views are supposed to be transparent. That is, if\n> \n> SELECT * FROM my_view WHERE condition;\n> \n> return N rows, then a subsequently executed\n> \n> UPDATE my_view SET ... WHERE condition;\n> \n> returns an update count of N, no matter what happens behind the scenes. I\n> don't think this matches Tom Lane's view exactly, but it's a lot closer\n> than your proposal.\n\nYes, exactly. I think it does match Tom's proposal as best we can. But we need a \nknowledgeable DBA who creates correct rules. Since you can create a lot more powerful\nviews in pg than usual, I guess that is not such a farfetched demand.\n\nI do not know whether the above extends to inserts ? In Informix you can \ncreate views \"WITH CHECK OPTION\", then inserted and updated rows are guaranteed to \nstill be visible by the view. If you don't add that clause, inserts and updates \nmay produce rows that are not visible through the view. The affected row count still \nincludes those though.\n\nAndreas\n",
"msg_date": "Tue, 10 Sep 2002 09:00:52 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple"
}
] |
[
{
"msg_contents": "> What is the difference\n> between a trigger, a rule and an instead rule from a business process\n> oriented point of view? I think there is none at all. They are just\n> different techniques to do one and the same, implement \n> business logic in the database system.\n\nThe difference is how other db's work. They all ignore triggers and constraints\nin the sqlca.sqlerrd[2] \"number of processed rows\" count, that I see identical to our \naffected rows count. They all have views, but not many have rules :-) Pg's \"instead rules\"\nare the toolkit for views, and as such need special handling, imho.\n\nAndreas\n",
"msg_date": "Tue, 10 Sep 2002 09:11:35 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Rule updates and PQcmdstatus() issue"
}
] |
[
{
"msg_contents": "\n> Oh, this is bad news. The problem we have is that rules don't\n> distinguish the UPDATE on the underlying tables of the rule from other\n> updates that may appear in the query.\n> \n> If we go with Tom's idea and total just UPDATE's, we will get the right\n> answer when there is only one UPDATE in the ruleset.\n\nAs long as the rules don't overlap (1 row is handled by 1 instead statement, \nanother row by a different one), it is ok. Again, you can create \"non instead\"\nrules or triggers for the other work needed. \nI am still in favor of not distinguishing the different tags. The dba needs to \ntake responsibility anyway (as long as we don't autogenerate the rules for simple \ncases).\n\nAndreas\n",
"msg_date": "Tue, 10 Sep 2002 09:31:30 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: Solving the \"Return proper effected tuple"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Neil Conway [mailto:neilc@samurai.com] \n> Sent: 10 September 2002 05:58\n> To: Sean Chittenden\n> Cc: Tom Lane; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Optimization levels when compiling \n> PostgreSQL...\n> \n> \n> Sean Chittenden <sean@chittenden.org> writes:\n> > Has there been any talk of doing incremental -snapshots of the code \n> > base?\n> \n> I don't really see the point. Snapshots of development code \n> are available from CVS anyway -- and if you're going to be \n> running a pre-alpha version of a relational database, I don't \n> think that knowledge of CVS is an onerous requirement.\n> \n> At any rate, the problem with releasing snapshots is that the \n> system catalogs would change so often that upgrading between \n> snapshots would be a headache. i.e. the changes required to \n> upgrade from a 2 week old development snapshot to a current \n> snapshot would still be non-trivial, significantly reducing \n> the usefulness of snapshots, IMHO.\n\nSnapshots can be found here: ftp://ftp.postgresql.org/pub/dev/\n\nRegards, Dave.\n",
"msg_date": "Tue, 10 Sep 2002 08:46:09 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: Optimization levels when compiling PostgreSQL..."
}
] |
[
{
"msg_contents": ">OK, I have a better version at:\n\nThe script is now broken, I get:\nCollecting sizing information ...\nRunning random access timing test ...\nRunning sequential access timing test ...\nRunning null loop timing test ...\nrandom test: 14\nsequential test: 16\nnull timing test: 14\n\nrandom_page_cost = 0.000000\n\n",
"msg_date": "Tue, 10 Sep 2002 10:17:01 +0200",
"msg_from": "\"Mario Weilguni\" <mario.weilguni@icomedias.com>",
"msg_from_op": true,
"msg_subject": "Re: Script to compute random page cost"
},
{
"msg_contents": "I was attempting to measure random page cost a while ago -\nI used three programs in this archive :\n\nhttp://techdocs.postgresql.org/markir/download/benchtool/\n\nIt writes a single big file and seems to give more realistic \nmeasurements ( like 6 for a Solaris scsi system and 10 for a Linux ide \none...)\n\nHave a look and see if you can cannibalize it for your program\n\n\nCheers\n\nMark\n\n",
"msg_date": "Tue, 10 Sep 2002 22:33:19 +1200",
"msg_from": "Mark Kirkwood <markir@slingshot.co.nz>",
"msg_from_op": false,
"msg_subject": "Re: Script to compute random page cost"
}
] |
[
{
"msg_contents": "Is there any way to determine the location of files in a database\nwithout being the postgres user? Essentially i'm after the setting of\nPGDATA so i can then show disk status (df) for that partition.\n\nThe pg_database catalogue has 'datpath':\n\n If the database is stored at an alternative location then this\n records the location. It's either an environment variable name or an\n absolute path, depending how it was entered.\n\nso I'm really looking for the default location...\n\nI could knock together a C function to do this (and indeed another to\nreturn the usage stats too), but would like to check first there's no\nsimple way already!\n\nRegards, Lee Kindness.\n",
"msg_date": "Tue, 10 Sep 2002 10:47:37 +0100",
"msg_from": "Lee Kindness <lkindness@csl.co.uk>",
"msg_from_op": true,
"msg_subject": "Location of database files?"
}
] |
[
{
"msg_contents": "I realise that this has already been done, by Joe Conway I think. Indeed I was\nlooking at this just before beta1 when I happened to notice the post giving the\nplpgsql function. However, as I had started work on it and I was interested in\nseeing how things should be done I continued, only not in so much of a rush.\n\nIn the interests of finding out if I have approached this the right way, or the\nway a more experienced backend programmer would, I'd appreciate any comments on\nthe attached .c file. In particular, I'm not sure what I'm doing with regard to\nmemory contexts, I think I may have one unnecessary switch in there, and in\ngeneral I seem to be doing a lot of work just to find out tidbits of\ninformation.\n\nI based this on, i.e. started by editing, Joe Conway's tablefunc.c but I think\nthere's very little of the original left in there.\n\nI've also attached the .h, Makefile and .sql.in files to make this work if\nanyone is interested in giving it a run. The .sql.in shows the usage. I did\nthis in a directory called pggrouping, for the sake of a better name, under the\ncontrib directory in my tree, so that's probably the best place to build it.\n\nThanks, and sorry for adding to people's email and work load.\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants",
"msg_date": "Tue, 10 Sep 2002 16:51:37 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": true,
"msg_subject": "SRF and pg_group"
}
] |
[
{
"msg_contents": "Hi,\n\nI was just contacted by a customer about the SQLProcedureColumns call in\nour odbc driver. It appears this call is undefined in the standard odbc\ndriver but is available in odbcplus. Could anyone please enlighten me\nwhy this was forked and not merged into one driver? Is there a problem\nwhen I take the odbcplus code and put it into the odbc driver? \n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Tue, 10 Sep 2002 21:42:23 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "ODBC problem/question"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Michael Meskes [mailto:meskes@postgresql.org] \n> Sent: 10 September 2002 20:42\n> To: PostgreSQL Interfaces; PostgreSQL Hacker\n> Subject: [HACKERS] ODBC problem/question\n> \n> \n> Hi,\n> \n> I was just contacted by a customer about the \n> SQLProcedureColumns call in our odbc driver. It appears this \n> call is undefined in the standard odbc driver but is \n> available in odbcplus. Could anyone please enlighten me why \n> this was forked and not merged into one driver? Is there a \n> problem when I take the odbcplus code and put it into the \n> odbc driver? \n\nHi Michael,\n\nThere are currently 3 variants of the driver.\n\nPostgreSQL - This is the current ODBC 2.5 compliant driver.\nPostgreSQL+ - This is a development version that is ODBC 3.0 compliant.\nPostgreSQL+ Unicode - This is PostgreSQL+ with Unicode support.\n\nWe are aiming for PostgreSQL+ Unicode to be the only driver as soon as\npossible, but due to the way ODBC3 and Unicode support were added (ie.\nquickly, to solve immediate problems), we (Hiroshi & I) felt it was best\nto keep them separate until we were sure of their reliability.\n\nCurrently, PostgreSQL+ seems pretty good, as does the Unicode version,\nthough that is still missing some features iirc.\n\nHTH, Regards, Dave.\n",
"msg_date": "Tue, 10 Sep 2002 21:40:07 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: ODBC problem/question"
}
] |
[
{
"msg_contents": "Dann Corbit wrote:\n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:pgman@candle.pha.pa.us] \n> > Sent: Tuesday, September 10, 2002 9:10 PM\n> > To: Michael Meskes\n> > Cc: PostgreSQL Hacker; Marc G. Fournier\n> > Subject: Re: [HACKERS] 7.3beta and ecpg\n> > \n> > \n> > \n> > I think we should stop playing around with ecpg. Let's get \n> > the beta bison on postgresql.org and package the proper ecpg \n> > version for 7.3beta2. If we don't, we are going to get zero \n> > testing for 7.3 final.\n> > \n> > Marc?\n> > \n> > We will not find out if there are problems with the bison \n> > beta until we ship it as part of beta and I don't think we \n> > have to be scared of just because it is beta.\n> \n> I have a dumb idea...\n> \n> Why not just package the output of the Bison beta version?\n> \n> It may not be comprehensible, but it does not need to be generated on\n> any particular target machine does it?\n> \n> Sure, it would be nice to be able to process the original grammar on any\n> client workstation. But if it will hold up the entire project, why not\n> just ship the preprocessed output?\n\nWe do ship just the preprocessed output. We need the new bison on\npostgresql.org and we need the CVS to be updated for the new version and\nthen beta2 will hold the proper bison output.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 11 Sep 2002 00:15:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: 7.3beta and ecpg"
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us] \n> Sent: Tuesday, September 10, 2002 9:10 PM\n> To: Michael Meskes\n> Cc: PostgreSQL Hacker; Marc G. Fournier\n> Subject: Re: [HACKERS] 7.3beta and ecpg\n> \n> \n> \n> I think we should stop playing around with ecpg. Let's get \n> the beta bison on postgresql.org and package the proper ecpg \n> version for 7.3beta2. If we don't, we are going to get zero \n> testing for 7.3 final.\n> \n> Marc?\n> \n> We will not find out if there are problems with the bison \n> beta until we ship it as part of beta and I don't think we \n> have to be scared of just because it is beta.\n\nI have a dumb idea...\n\nWhy not just package the output of the Bison beta version?\n\nIt may not be comprehensible, but it does not need to be generated on\nany particular target machine does it?\n\nSure, it would be nice to be able to process the original grammar on any\nclient workstation. But if it will hold up the entire project, why not\njust ship the preprocessed output?\n",
"msg_date": "Tue, 10 Sep 2002 21:17:27 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": false,
"msg_subject": "Re: 7.3beta and ecpg"
}
] |
[
{
"msg_contents": "Hackers,\n\nIs there some documentation on TOAST? In the SGML docs there isn't even\na description of it, and in the release notes I cannot find anything but\nvery light mentions. I've seen descriptions scattered around the web\nwhile Googling, but they are very light and don't seem \"official\".\n\nAny pointers will be appreciated,\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Cuando no hay humildad las personas se degradan\" (A. Christie)\n",
"msg_date": "Wed, 11 Sep 2002 00:28:22 -0400",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": true,
"msg_subject": "TOAST docs"
},
{
"msg_contents": "Alvaro Herrera writes:\n\n> Is there some documentation on TOAST?\n\nNo. Why do you need any?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 13 Sep 2002 00:09:02 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: TOAST docs"
},
{
"msg_contents": "On Fri, 13 Sep 2002, Peter Eisentraut wrote:\n\n> Alvaro Herrera writes:\n> \n> > Is there some documentation on TOAST?\n> \n> No. Why do you need any?\n\nI think I saw some docs in the \n\n/usr/local/src/postgresql-7.2.1/src/backend/access/heap/tuptoaster.c\n\nfile on my box. :-)\n\nActually it is pretty well commented, so I'm not just being a smart ass \nhere.\n\n",
"msg_date": "Thu, 12 Sep 2002 16:26:40 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Re: TOAST docs"
},
{
"msg_contents": "On Fri, 2002-09-13 at 00:09, Peter Eisentraut wrote:\n> Alvaro Herrera writes:\n> \n> > Is there some documentation on TOAST?\n> \n> No. Why do you need any?\n\nIIRC there were some ways to tweak when TOAST gets used, when it goes\nout to toastfile and when it uses compressed/non-compressed storage.\n\nI hope this is documented someplace, no ?\n\n-------------\nHannu\n\n",
"msg_date": "13 Sep 2002 11:20:46 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: TOAST docs"
},
{
"msg_contents": "\"scott.marlowe\" wrote:\n> \n> On Fri, 13 Sep 2002, Peter Eisentraut wrote:\n> \n> > Alvaro Herrera writes:\n> >\n> > > Is there some documentation on TOAST?\n> >\n> > No. Why do you need any?\n> \n> I think I saw some docs in the\n> \n> /usr/local/src/postgresql-7.2.1/src/backend/access/heap/tuptoaster.c\n> \n> file on my box. :-)\n> \n> Actually it is pretty well commented, so I'm not just being a smart ass\n> here.\n\nInline comments are not exactly what I call \"documentation\", but\nthanks for the flowers anyway.\n\nHannu is right though, that there are ways to tweak the behavior\nby running the risk to corrupt your system catalogs (read:\nmanually updating it). Originally I had in mind to add some\nadministrative utilities that give a safe access to these\nsettings ... if the past year would just have been a bit less\nstressful ...\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being\nright. #\n# Let's break this rule - forgive\nme. #\n#==================================================\nJanWieck@Yahoo.com #\n",
"msg_date": "Fri, 13 Sep 2002 11:26:40 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: TOAST docs"
},
{
"msg_contents": "On Fri, 2002-09-13 at 17:26, Jan Wieck wrote:\n> \"scott.marlowe\" wrote:\n> > I think I saw some docs in the\n> > \n> > /usr/local/src/postgresql-7.2.1/src/backend/access/heap/tuptoaster.c\n> > \n> > file on my box. :-)\n> > \n> > Actually it is pretty well commented, so I'm not just being a smart ass\n> > here.\n> \n> Inline comments are not exactly what I call \"documentation\", but\n> thanks for the flowers anyway.\n\nBut this is how most of backend is \"documented\" ;)\n\n> Hannu is right though, that there are ways to tweak the behavior\n> by running the risk to currupt your system catalogs (read:\n> manually updating it). Originally I had in mind to add some\n> administrative utilities that give a safe access to these\n> settings ...\n\nI guess the quickest/easiest and most universal (immediately usable by\n\"other\" admin utils) would be a set of pl/pgsql or sql functions.\n\n-----------\nHannu\n",
"msg_date": "13 Sep 2002 18:44:43 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: TOAST docs"
},
{
"msg_contents": "Hannu Krosing wrote:\n> IIRC there were some ways to tweak when TOAST gets used, when it goes\n> out to toastfile and when it uses compressed/non-compressed storage.\n> \n> I hope this is documented someplace, no ?\n\nThere is a mention of it in the ALTER TABLE doc:\n\nALTER TABLE [ ONLY ] table [ * ]\n ALTER [ COLUMN ] column SET STORAGE { PLAIN | EXTERNAL | EXTENDED | MAIN }\n\nSET STORAGE\n\nThis form sets the storage mode for a column. This controls whether this \ncolumn is held inline or in a supplementary table, and whether the data should \nbe compressed or not. PLAIN must be used for fixed-length values such as \nINTEGER and is inline, uncompressed. MAIN is for inline, compressible data. \nEXTERNAL is for external, uncompressed data and EXTENDED is for external, \ncompressed data. EXTENDED is the default for all datatypes that support it. \nThe use of EXTERNAL will make substring operations on a TEXT column faster, at \nthe penalty of increased storage space.\n\nJoe\n\n",
"msg_date": "Fri, 13 Sep 2002 09:46:08 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: TOAST docs"
},
{
"msg_contents": "Joe Conway wrote:\n> \n> Hannu Krosing wrote:\n> > IIRC there were some ways to tweak when TOAST gets used, when it goes\n> > out to toastfile and when it uses compressed/non-compressed storage.\n> >\n> > I hope this is documented someplace, no ?\n> \n> There is a mention of it in the ALTER TABLE doc:\n> \n> ALTER TABLE [ ONLY ] table [ * ]\n> ALTER [ COLUMN ] column SET STORAGE { PLAIN | EXTERNAL | EXTENDED | MAIN }\n> \n> SET STORAGE\n> \n> This form ...\n\nWe have that already? Seems my memory doesn't serve as good as it\nused to ... I'm getting old, folks. Thanks to whoever did it.\n\nNow the other magic thingy is, that on CREATE TABLE the column\nstorage is taken from the pg_type entry. Someone could manipulate\nthat and default all text columns to external for example. \n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being\nright. #\n# Let's break this rule - forgive\nme. #\n#==================================================\nJanWieck@Yahoo.com #\n",
"msg_date": "Fri, 13 Sep 2002 13:52:05 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: TOAST docs"
}
] |
[
{
"msg_contents": "Here are the open items:\n\n P O S T G R E S Q L\n\n 7 . 3 O P E N I T E M S\n\n\nCurrent at ftp://candle.pha.pa.us/pub/postgresql/open_items.\n\nSource Code Changes\n-------------------\nSchema handling - ready? interfaces? client apps?\nDrop column handling - ready for all clients, apps?\nFix BeOS and QNX4 ports\nGet bison upgrade on postgresql.org\nFix vacuum btree bug (Tom)\nFix client apps for autocommit = off\nFix clusterdb to be schema-aware\nChange log_min_error_statement to be off by default\nFix return tuple counts/oid/tag for rules\n\nOn Hold\n-------\nPoint-in-time recovery\nWin32 port\nSecurity audit\n\nDocumentation Changes\n---------------------\nDocument need to add permissions to loaded functions and languages\nMove documentation to gborg for moved projects\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 11 Sep 2002 00:31:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Open items"
},
{
"msg_contents": "On Wed, 11 Sep 2002, Bruce Momjian wrote:\n\n> On Hold\n> -------\n> Point-in-time recovery\n> Win32 port\n> Security audit\n\nWhy is the security audit on hold? This is the best time to do it, while\nthe code is reasonably static :(\n\n\n",
"msg_date": "Wed, 11 Sep 2002 22:42:49 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Open items"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> On Wed, 11 Sep 2002, Bruce Momjian wrote:\n> \n> > On Hold\n> > -------\n> > Point-in-time recovery\n> > Win32 port\n> > Security audit\n> \n> Why is the security audit on hold? This is the best time to do it, while\n> the code is reasonably static :(\n\nIt is on hold in the sense it is not a item that has to be completed for\n7.3 but is in on-going, like the other items. The other items have to\nbe specifically marked as \"done\" before the 7.3 release.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 11 Sep 2002 21:45:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open items"
},
{
"msg_contents": "On Wed, 11 Sep 2002, Bruce Momjian wrote:\n\n> Marc G. Fournier wrote:\n> > On Wed, 11 Sep 2002, Bruce Momjian wrote:\n> >\n> > > On Hold\n> > > -------\n> > > Point-in-time recovery\n> > > Win32 port\n> > > Security audit\n> >\n> > Why is the security audit on hold? This is the best time to do it, while\n> > the code is reasonably static :(\n>\n> It is on hold in the sense it is not a item that has to be completed for\n> 7.3 but is in on-going, like the other items. The other items have to\n> be specifically marked as \"done\" before the 7.3 release.\n\nAh, k ... maybe put it under a 'Non Critical' or 'Ongoing' category?\n\n\n",
"msg_date": "Wed, 11 Sep 2002 22:51:21 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Open items"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> On Wed, 11 Sep 2002, Bruce Momjian wrote:\n> \n> > Marc G. Fournier wrote:\n> > > On Wed, 11 Sep 2002, Bruce Momjian wrote:\n> > >\n> > > > On Hold\n> > > > -------\n> > > > Point-in-time recovery\n> > > > Win32 port\n> > > > Security audit\n> > >\n> > > Why is the security audit on hold? This is the best time to do it, while\n> > > the code is reasonably static :(\n> >\n> > It is on hold in the sense it is not a item that has to be completed for\n> > 7.3 but is in on-going, like the other items. The other items have to\n> > be specifically marked as \"done\" before the 7.3 release.\n> \n> Ah, k ... maybe put it under a 'Non Critical' or 'Ongoing' category?\n\nGood, changed to \"On going\".\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 11 Sep 2002 22:23:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open items"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Oliver Elphick [mailto:olly@lfix.co.uk] \n> Sent: 11 September 2002 07:29\n> To: Tom Lane\n> Cc: Lamar Owen; Bruce Momjian; Philip Warner; Laurette \n> Cisneros; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS]\n> \n>\n> Let me reiterate. I got these problems dumping 7.2 data with 7.3's\n> pg_dumpall:\n\nI wonder how many people would do something more like:\n\npg_dumpall > db.sql\nmake install\npsql -e template1 < db.sql\n\nrather than manually installing pg_dumpall from 7.3 first?\n\nRegards, Dave.\n",
"msg_date": "Wed, 11 Sep 2002 08:20:22 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: "
},
{
"msg_contents": "On Wed, 2002-09-11 at 08:20, Dave Page wrote:\n> \n> \n> > -----Original Message-----\n> > From: Oliver Elphick [mailto:olly@lfix.co.uk] \n> > Sent: 11 September 2002 07:29\n> > To: Tom Lane\n> > Cc: Lamar Owen; Bruce Momjian; Philip Warner; Laurette \n> > Cisneros; pgsql-hackers@postgresql.org\n> > Subject: Re: [HACKERS]\n> > \n> >\n> > Let me reiterate. I got these problems dumping 7.2 data with 7.3's\n> > pg_dumpall:\n> \n> I wonder how many people would do something more like:\n> \n> pg_dumpall > db.sql\n> make install\n> psql -e template1 < db.sql\n> \n> rather than manually installing pg_dumpall from 7.3 first?\n\nI suppose that's what people will do unless told otherwise, but the\nintroduction of schemas means that it is much better to use 7.3's dump,\notherwise, for example, all functions will be private rather than\npublic.\n\nPerhaps a note should be added to INSTALL. At the moment it says:\n\n 2. To dump your database installation, type:\n \n pg_dumpall > outputfile\n \n ...\n \n Make sure that you use the \"pg_dumpall\" command from the version\n you are currently running. 7.2's \"pg_dumpall\" should not be used\n on older databases.\n \nBut now we should be telling people to use 7.3's pg_dumpall, at least\nfor 7.2 data. (How far back can it go?)\n\n Make sure you use pg_dumpall from the new 7.3 software to dump\n your data from 7.2. To do this, you must have the 7.2\n postmaster running and run the 7.3 pg_dumpall by using its full\n pathname. 
7.2's pg_dumpall is unsuitable because of the\n introduction of schemas in 7.3 which make it necessary to grant\n public access to features that will, if created from a 7.2 dump,\n be given access by their owner only.\n \n(Have I got that right?)\n\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"I am crucified with Christ; nevertheless I live; yet \n not I, but Christ liveth in me; and the life which I \n now live in the flesh I live by the faith of the Son \n of God, who loved me, and gave himself for me.\" \n Galatians 2:20 \n\n",
"msg_date": "11 Sep 2002 18:07:46 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "Oliver Elphick wrote:\n> But now we should be telling people to use 7.3's pg_dumpall, at least\n> for 7.2 data. (How far back can it go?)\n> \n> Make sure you use pg_dumpall from the new 7.3 software to dump\n> your data from 7.2. To do this, you must have the 7.2\n> postmaster running and run the 7.3 pg_dumpall by using its full\n> pathname. 7.2's pg_dumpall is unsuitable because of the\n> introduction of schemas in 7.3 which make it necessary to grant\n> public access to features that will, if created from a 7.2 dump,\n> be given access by their owner only.\n\nThat's a pretty big hurdle. I think we are better off giving them an\nSQL UPDATE to run.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 11 Sep 2002 13:20:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: "
}
] |
[
{
"msg_contents": "\n> Actually there is one more problem. The backend introduced the EXECUTE\n> command just recently. However, this clashes with the embedded SQL\n> EXECUTE command. Since both may be called just with EXECUTE <name>,\n> there is no way to distinguish them.\n> \n> I have no idea if there's a standard about execution of a plan but\n> couldn't/shouldn't it be named \"EXECUTE PLAN\" instead of just \n> \"EXECUTE\"?\n\nI know this is not really related, but wouldn't the plan be to make\necpg actually use the backend side \"execute ...\" now that it is available ?\n\necpg needs either 'execute :idvar' or 'execute id', so either idvar is a \ndeclared variable or id a statement id. I don't know if that is something a \nparser can check though :-(\n\nFor now, I would leave \"exec sql execute\" do the ecpg thing if that is possible. \nIf you want to use the backend side functionality you would need to:\nexec sql prepare ex1 from 'execute id';\nexec sql execute ex1;\n\nAndreas\n",
"msg_date": "Wed, 11 Sep 2002 11:23:45 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: 7.3beta and ecpg"
},
{
"msg_contents": "On Wed, Sep 11, 2002 at 11:23:45AM +0200, Zeugswetter Andreas SB SD wrote:\n> I know this is not really related, but wouldn't the plan be to make\n> ecpg actually use the backend side \"execute ...\" now that it is available ?\n\nMaybe I misunderstood something. Do you mean I could use the backend\nPREPARE/EXECUTE to prepare and execute any statement I can\nPREPARE/EXECUTE with the ecpg part? Can I use PREPARE to prepare a\ncursor? In that case I will gladly remove the ecpg stuff.\n\nI just looked into the backend any further and wonder why I didn't\nunderstand earlier. For some reason I was believing this was just an\noptimization command.\n\nIt seems I can use larger parts of this thus reducing ecpg parser's\ncomplexity as well.\n\n> ecpg needs eighter 'execute :idvar' or 'execute id', so either idvar is a \n> declared variable or id a statement id. I don't know if that is something a \n> parser can check though :-(\n\nActually ecpg needs 'execute id using ... into ...'. I did not see any\nmention of using in the backend execute command. The 'execute :idvar'\npart is easier since this correctly is named 'execute immediate :idvar'\nI think.\n\nAFAIK the standard is \"execute ID using value\" and not \"execute\nID(value)\". Please correct me if I'm wrong, but right now ecpg uses the\nfirst syntax the backend uses the second.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Wed, 11 Sep 2002 14:43:49 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: 7.3beta and ecpg"
}
] |
[
{
"msg_contents": "\n> > I know this is not really related, but wouldn't the plan be to make\n> > ecpg actually use the backend side \"execute ...\" now that it is available ?\n> \n> Maybe I misunderstood something. Do you mean I could use the backend\n> PREPARE/EXECUTE to prepare and execute any statement I can\n> PREPARE/EXECUTE with the ecpg part? Can I use PREPARE to prepare a\n> cursor? In that case I will gladly remove the ecpg stuff.\n\nThat is how I understood it so far.\n\n> I just looked into the backend a bit further and wonder why I didn't\n> understand earlier. For some reason I believed this was just an\n> optimization command.\n\nWell, yes and no. For programs that reuse a prepared statement it is \ngood; for those that only use it once it can be a loss. Simple tests in previous posts \nto this list showed that with longer data strings the parser was so slow \nthat prepare + execute actually sped up the overall exec time. (At least that was \nmy interpretation.) \n\n> \n> It seems I can use larger parts of this, thus reducing the ecpg parser's\n> complexity as well.\n\nHopefully :-)\n\n> \n> > ecpg needs either 'execute :idvar' or 'execute id', so either idvar is a \n> > declared variable or id a statement id. I don't know if that is something a \n> > parser can check though :-(\n> \n> Actually ecpg needs 'execute id using ... into ...'. I did not see any\n> mention of using in the backend execute command. The 'execute :idvar'\n> part is easier since this is correctly named 'execute immediate :idvar'\n> I think.\n\nThe \"using\" clause is optional, I just left it out. My ESQL/C precompiler\ncan also use an id variable for \"execute :idvar using ...\". That is actually \nhow we use ESQL/C here. \n\n> \n> AFAIK the standard is \"execute ID using value\" and not \"execute\n> ID(value)\". Please correct me if I'm wrong, but right now ecpg uses the\n> first syntax while the backend uses the second.\n\nI think it should be the intention to keep those identical, which would\nmean that the backend syntax is currently wrong :-(\n\nAndreas\n",
"msg_date": "Wed, 11 Sep 2002 15:42:44 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: 7.3beta and ecpg"
},
{
"msg_contents": "On Wed, Sep 11, 2002 at 03:42:44PM +0200, Zeugswetter Andreas SB SD wrote:\n> That is how I understood it so far.\n\nI will dig into this as soon as I find time, i.e. definitely for 7.3.\n\n> > Actually ecpg needs 'execute id using ... into ...'. I did not see any\n> > mention of using in the backend execute command. The 'execute :idvar'\n> > part is easier since this correctly is named 'execute immediate :idvar'\n> > I think.\n> \n> The \"using\" clause is optional, I just left it out. My ESQL/C precompiler\n\nCorrect, \"using\" is optional with ecpg as well.\n\n> can also use an id variable for \"execute :idvar using ...\". That is actually \n> how we use esql/c here. \n\nAnd how we used Pro*C when I was still working with Oracle.\n\n> > AFAIK the standard is \"execute ID using value\" and not \"execute\n> > ID(value)\". Please correct me if I'm wrong, but right now ecpg uses the\n> > first syntax the backend uses the second.\n> \n> I think it should be the intention to keep those identical, which would\n> mean, that the backend syntax is currently wrong :-(\n\nWhich of course means we should change it. :-)\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Wed, 11 Sep 2002 19:30:46 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: 7.3beta and ecpg"
},
{
"msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n> On Wed, Sep 11, 2002 at 03:42:44PM +0200, Zeugswetter Andreas SB SD wrote:\n>> I think it should be the intention to keep those identical, which would\n>> mean, that the backend syntax is currently wrong :-(\n\n> Which of course means we should change it. :-)\n\nIIRC, the conclusion of our earlier debate about backend PREPARE/EXECUTE\nsyntax was that since it was not implementing exactly the behavior\nspecified for embedded SQL (and couldn't, not being an embedded\noperation) it would be better to deliberately avoid using exactly the\nsame syntax. See thread starting at\nhttp://archives.postgresql.org/pgsql-hackers/2002-07/msg00814.php\n\nWe can revisit that decision if you like, but you must convince us that\nit was wrong, not just say \"of course we should change it\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 11 Sep 2002 16:36:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.3beta and ecpg "
},
{
"msg_contents": "On Wed, Sep 11, 2002 at 04:36:31PM -0400, Tom Lane wrote:\n> IIRC, the conclusion of our earlier debate about backend PREPARE/EXECUTE\n> syntax was that since it was not implementing exactly the behavior\n> specified for embedded SQL (and couldn't, not being an embedded\n> operation) it would be better to deliberately avoid using exactly the\n> same syntax. See thread starting at\n> http://archives.postgresql.org/pgsql-hackers/2002-07/msg00814.php\n\nI'm awfully sorry that I missed this thread. But I do not really\nunderstand the problem. If we cannot be exactly as specified, why aren't\nwe at least coming close? As it stands now I have to implement my own\nPREPARE/EXECUTE in ecpg and the syntax does clash with the backend one.\nThis would force me to not allow the backend's prepare/execute at all in\nembedded SQL but use the workaround we've been using ever since. But\nthe backend implementation certainly is better and faster, so I'd love\nto switch. \n\n> We can revisit that decision if you like, but you must convince us that\n> it was wrong, not just say \"of course we should change it\".\n\nAgain, please accept my apologies, since I missed the discussion. I'm so\nswarmed with work and emails that I have to delete some by just looking\nat the subject, and apparently I didn't see the relevance of this one.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Thu, 12 Sep 2002 10:53:57 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: 7.3beta and ecpg"
},
{
"msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n> I'm awfully sorry that I missed this thread. But I do not really\n> understand the problem. If we cannot be exactly as specified why aren't\n> we coming close? As it stands now I have to implement my own\n> PREPARE/EXECUTE in ecpg and the syntax does clash with the backend one.\n\nBut you must implement your own PREPARE/EXECUTE anyway, using ecpg\nvariables, no? If you can really embed what you need in the backend\nfacility, and only the syntax variation is getting in the way, then\nmaybe I misunderstand the problem. How do parameters of PREPAREd\nstatements work in ecpg?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Sep 2002 09:07:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.3beta and ecpg "
},
{
"msg_contents": "On Thu, Sep 12, 2002 at 09:07:20AM -0400, Tom Lane wrote:\n> Michael Meskes <meskes@postgresql.org> writes:\n> > I'm awfully sorry that I missed this thread. But I do not really\n> > understand the problem. If we cannot be exactly as specified why aren't\n> > we coming close? As it stands now I have to implement my own\n> > PREPARE/EXECUTE in ecpg and the syntax does clash with the backend one.\n> \n> But you must implement your own PREPARE/EXECUTE anyway, using ecpg\n> variables, no? If you can really embed what you need in the backend\n> facility, and only the syntax variation is getting in the way, then\n> maybe I misunderstand the problem. How do parameters of PREPAREd\n> statements work in ecpg?\n\nIn ecpg you can use a string variable or constant holding the statement\nto prepare that statement, as in \n\nexec sql prepare STMT from string;\n\nThis binds the identifier STMT to the statement in string. Later you can then\ndeclare a cursor using\n\nexec sql declare CURS cursor for STMT;\n\nor execute the statement using\n\nexec sql execute STMT;\n\nNow if you have a parameter in the prepared statement, specified by just a\n\"?\" instead of some value, you add a using clause during execution to set\nthe values. \n\nI'm not sure where you expect the ecpg variables. If you're talking\nabout C variables, they won't be seen by any statement since ecpg creates\nan ASCII string of the whole statement before sending it to the backend.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Thu, 12 Sep 2002 16:58:32 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: 7.3beta and ecpg"
},
{
"msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n> On Thu, Sep 12, 2002 at 09:07:20AM -0400, Tom Lane wrote:\n>> But you must implement your own PREPARE/EXECUTE anyway, using ecpg\n>> variables, no?\n\n> In ecpg you can use a string variable or constant holding the statement\n> to prepare that statement as in \n\n> exec sql prepare STMT from string;\n\nSure --- and that is exactly *not* what the backend facility does. In\nthe backend PREPARE you supply the statement to be prepared directly in\nthe same SQL command, not as the value of some variable.\n\n> Now if you have a parameter in the prepared statement by just specify \n> \"?\" instead some value, you add a using clause during execution to set\n> the values. \n\nAnd a plain \"?\" isn't going to fly as the parameter marker, either.\nThe backend wants to know what datatype each parameter is supposed to\nbe.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Sep 2002 15:18:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.3beta and ecpg "
},
{
"msg_contents": "On Thu, Sep 12, 2002 at 03:18:13PM -0400, Tom Lane wrote:\n> Sure --- and that is exactly *not* what the backend facility does. In\n> the backend PREPARE you supply the statement to be prepared directly in\n> the same SQL command, not as the value of some variable.\n\nThe variable will be replaced by ecpg. That's not a problem. The actual\necpg prepare function does insert the value of the variable when storing\nthe so-called prepared statement, which of course is not prepared in\nreality.\n\n> > Now if you have a parameter in the prepared statement by just specify \n> > \"?\" instead some value, you add a using clause during execution to set\n> > the values. \n> \n> And a plain \"?\" isn't going to fly as the parameter marker, either.\n> The backend wants to know what datatype each parameter is supposed to\n> be.\n\nSo, yes, this may be a problem we have to think about. But I could\nhandle that by asking the backend for the datatypes before issuing the\nPREPARE statement and thus formulating it accordingly. \n\nAnyway, we could of course keep both ways separate, but right now that\nwould mean I have to disable access to the backend functions in ecpg or\nelse the parser will be in trouble. And frankly I don't really like that.\n\nMichael\n\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Fri, 13 Sep 2002 08:44:30 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: 7.3beta and ecpg"
}
] |
[
{
"msg_contents": "Attached is a perl script called 'pgrefchk'. It checks the referential\nintegrity of foreign keys on tables in a PostgreSQL database using the\nPG_TABLES, PG_PROC, PG_CLASS and PG_TRIGGER system \"tables\".\n\nIt was created in the same vein as the pguniqchk script which checks the\nuniqueness of unique constraints on tables in a PostgreSQL database.\n\nWhy would this be useful?\n\nIf you're planning to dump and restore the database, or if you suspect\nthe state of data in your database after having hard drive issues, this\nmight be a good sanity check to run.\n\nIf nothing else, it's a good example of how to query PostgreSQL system\ntables.\n\nNOTES:\n\n- Only tested on PostgreSQL 7.1.3.\n\nDave",
"msg_date": "Wed, 11 Sep 2002 11:14:06 -0500",
"msg_from": "\"David D. Kilzer\" <ddkilzer@lubricants-oil.com>",
"msg_from_op": true,
"msg_subject": "[SCRIPT] pgrefchk -- checks referential integrity of foreign keys on\n\ttables"
}
] |
[
{
"msg_contents": "I wanted people to see a screen shot of the new pgaccess to be released\nwith 7.3:\n\n\tftp://candle.pha.pa.us/pub/postgresql/pgaccess.gif\n\nIt looks amazing.\n\nThe main pgaccess page is:\n\n\thttp://www.pgaccess.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 11 Sep 2002 12:37:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "New pgaccess"
},
{
"msg_contents": "Looks very good. Much like my qpsql (http://www.maekitalo.de/qpsql) I once started.\n\n> \n> \tftp://candle.pha.pa.us/pub/postgresql/pgaccess.gif\n> \n",
"msg_date": "Thu, 12 Sep 2002 08:56:10 +0200",
"msg_from": "tommi@hel.tm.maekitalo.de (Tommi Maekitalo)",
"msg_from_op": false,
"msg_subject": "Re: New pgaccess"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us] \n> Sent: 11 September 2002 18:21\n> To: Oliver Elphick\n> Cc: Dave Page; Tom Lane; Lamar Owen; Philip Warner; Laurette \n> Cisneros; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS]\n> \n> \n> Oliver Elphick wrote:\n> > But now we should be telling people to use 7.3's \n> pg_dumpall, at least \n> > for 7.2 data. (How far back can it go?)\n> > \n> > Make sure you use pg_dumpall from the new 7.3 \n> software to dump\n> > your data from 7.2. To do this, you must have the 7.2\n> > postmaster running and run the 7.3 pg_dumpall by \n> using its full\n> > pathname. 7.2's pg_dumpall is unsuitable because of the\n> > introduction of schemas in 7.3 which make it \n> necessary to grant\n> > public access to features that will, if created \n> from a 7.2 dump,\n> > be given access by their owner only.\n> \n> That's a pretty big hurdle. I think we are better off giving \n> them an SQL UPDATE to run.\n\nHow would that massage a dump file though? I can't think of any SQL that\nmight make 7.2 output 'language_handler' correctly, and we already know\n7.3 will barf on opaque.\n\nRegards, Dave.\n",
"msg_date": "Wed, 11 Sep 2002 22:09:59 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: "
},
{
"msg_contents": "Dave Page wrote:\n> > That's a pretty big hurdle. I think we are better off giving \n> > them an SQL UPDATE to run.\n> \n> How would that massage a dump file though? I can't think of any SQL that\n> might make 7.2 output 'language_handler' correctly, and we already know\n> 7.3 will barf on opaque.\n\nOh, I thought it was just the permissions that were the problem. Can we\ngive them a sed script?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 11 Sep 2002 17:13:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: "
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us] \n> Sent: 11 September 2002 17:38\n> To: PostgreSQL-development\n> Cc: developers@pgaccess.org\n> Subject: [HACKERS] New pgaccess\n> \n> \n> I wanted people to see a screen shot of the new pgaccess to \n> be releases with 7.3:\n> \n>\tftp://candle.pha.pa.us/pub/postgresql/pgaccess.gif\n>\n> It looks amazing.\n\nIt looks very similar to pgAdmin :-)\n\nRegards, Dave.\n",
"msg_date": "Wed, 11 Sep 2002 22:14:50 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: New pgaccess"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us] \n> Sent: 11 September 2002 22:13\n> To: Dave Page\n> Cc: Oliver Elphick; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS]\n> \n> \n> Dave Page wrote:\n> > > That's a pretty big hurdle. I think we are better off giving\n> > > them an SQL UPDATE to run.\n> > \n> > How would that massage a dump file though? I can't think of any SQL \n> > that might make 7.2 output 'language_handler' correctly, and we \n> > already know 7.3 will barf on opaque.\n> \n> Oh, I thought it was just the permissions that were the \n> problem. Can we give them a sed script?\n\nI guess so. It seems to me that upgrading to 7.3 is going to be the\nstuff of nightmares, so my first thought is to try to avoid getting\npeople to run a 7.3 utility on their 7.x database. It would be nice to\nsee such a script run on old version dump files - but what else will\nbreak? Oliver has found a couple of things, and I wouldn't be surprised\nif my main installation falls over as well. If I get a chance I'll try\nit tomorrow.\n\nRegards, Dave.\n",
"msg_date": "Wed, 11 Sep 2002 22:20:42 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: "
},
{
"msg_contents": "Dave Page wrote:\n> > Oh, I thought it was just the permissions that were the \n> > problem. Can we give them a sed script?\n> \n> I guess so. It seems to me that upgrading to 7.3 is going to be the\n> stuff of nightmares, so my first thought is to try to avoid getting\n> people to run a 7.3 utility on their 7.x database. It would be nice to\n> see such a script run on old version dump files - but what else will\n> break? Oliver has found a couple of things, and I wouldn't be surprised\n> if my main installation falls over as well. If I get a chance I'll try\n> it tomorrow.\n\nWhy can't we do the remapping in the SQL grammar and remove the\nremapping in 7.4?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 11 Sep 2002 17:27:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "On Wed, 2002-09-11 at 22:27, Bruce Momjian wrote:\n> Dave Page wrote:\n> > > Oh, I thought it was just the permissions that were the \n> > > problem. Can we give them a sed script?\n> > \n> > I guess so. It seems to me that upgrading to 7.3 is going to be the\n> > stuff of nightmares, so my first thought is to try to avoid getting\n> > people to run a 7.3 utility on their 7.x database. It would be nice to\n> > see such a script run on old version dump files - but what else will\n> > break? Oliver has found a couple of things, and I wouldn't be surprised\n> > if my main installation falls over as well. If I get a chance I'll try\n> > it tomorrow.\n> \n> Why can't we do the remapping in the SQL grammar and remove the\n> remapping in 7.4?\n\nSurely you will have to leave the remapping in for the benefit of anyone\nwho jumps from <= 7.2 to >= 7.4\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"I am crucified with Christ; nevertheless I live; yet \n not I, but Christ liveth in me; and the life which I \n now live in the flesh I live by the faith of the Son \n of God, who loved me, and gave himself for me.\" \n Galatians 2:20 \n\n",
"msg_date": "11 Sep 2002 22:42:37 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "Oliver Elphick wrote:\n> On Wed, 2002-09-11 at 22:27, Bruce Momjian wrote:\n> > Dave Page wrote:\n> > > > Oh, I thought it was just the permissions that were the \n> > > > problem. Can we give them a sed script?\n> > > \n> > > I guess so. It seems to me that upgrading to 7.3 is going to be the\n> > > stuff of nightmares, so my first thought is to try to avoid getting\n> > > people to run a 7.3 utility on their 7.x database. It would be nice to\n> > > see such a script run on old version dump files - but what else will\n> > > break? Oliver has found a couple of things, and I wouldn't be surprised\n> > > if my main installation falls over as well. If I get a chance I'll try\n> > > it tomorrow.\n> > \n> > Why can't we do the remapping in the SQL grammar and remove the\n> > remapping in 7.4?\n> \n> Surely you will have to leave the remapping in for the benefit of anyone\n> who jumps from <= 7.2 to >= 7.4\n\nWell, our whole goal was to get rid of the opaque thing entirely so I am\nnot sure if we want to keep that going. In fact, I am not sure it is\neven possible to remap opaque because it now is represented by so many\nother values.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 11 Sep 2002 17:46:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Well, our whole goal was to get rid of the opaque thing entirely so I am\n> not sure if we want to keep that going. In fact, I am not sure it is\n> even possible to remap opaque because it now is represented by so many\n> other values.\n\nWe do still allow OPAQUE for triggers and datatype I/O functions, though\nI would like to take that out by and by.\n\nThe only case where OPAQUE is rejected now but was allowed before is PL\nlanguage handlers. We could weaken that --- but since there are no\nuser-defined PL handlers in the wild (AFAIK anyway), I'd prefer not to.\n\nMy original thought about this was that people should run 7.3's\ncreatelang script to load proper 7.3 language definitions into their 7.3\ndatabase. (This would not only fix the OPAQUE business but also replace\nany remaining absolute paths for language handlers with the $libdir\nform, which is an important 7.2 change that doesn't seem to have\npropagated very well because people are just doing dumps and reloads.)\n\nBut I now see that this answer doesn't work for pg_dumpall scripts.\n\nDoes anyone see a cleaner answer than re-allowing OPAQUE for PL\nhandlers?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Sep 2002 10:31:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OPAQUE and 7.2-7.3 upgrade"
},
{
"msg_contents": "On Thu, 2002-09-12 at 15:31, Tom Lane wrote:\n> Does anyone see a cleaner answer than re-allowing OPAQUE for PL\n> handlers?\n\nCan't you just special case the language handlers when dumping <7.3 and\nchange 'RETURNS opaque' to 'RETURNS language_handler'? That's all that\nis needed to let them be restored OK into 7.3.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Let the wicked forsake his way, and the unrighteous \n man his thoughts; and let him return unto the LORD, \n and He will have mercy upon him; and to our God, for \n he will abundantly pardon.\" Isaiah 55:7 \n\n",
"msg_date": "12 Sep 2002 15:48:20 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: OPAQUE and 7.2-7.3 upgrade"
},
{
"msg_contents": "Oliver Elphick <olly@lfix.co.uk> writes:\n> On Thu, 2002-09-12 at 15:31, Tom Lane wrote:\n>> Does anyone see a cleaner answer than re-allowing OPAQUE for PL\n>> handlers?\n\n> Can't you just special case the language handlers when dumping <7.3 and\n> change 'RETURNS opaque' to 'RETURNS language_handler'? That's all that\n> is needed to let them be restored OK into 7.3.\n\nOnly if people dump their old databases with 7.3 pg_dump; which is an\nassumption I'd rather not make if we can avoid it.\n\nOTOH, if we did do such a thing we could probably fix OPAQUE triggers\nand datatype I/O ops too ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Sep 2002 10:54:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OPAQUE and 7.2-7.3 upgrade "
},
{
"msg_contents": "At 10:31 AM 12/09/2002 -0400, Tom Lane wrote:\n>Does anyone see a cleaner answer than re-allowing OPAQUE for PL\n>handlers?\n\nWhat about extending the function manager macros to know about return types \n(at least for builtin types)?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n",
"msg_date": "Fri, 13 Sep 2002 00:56:48 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: OPAQUE and 7.2-7.3 upgrade"
},
{
"msg_contents": "On Thu, 2002-09-12 at 15:54, Tom Lane wrote:\n> Oliver Elphick <olly@lfix.co.uk> writes:\n> > On Thu, 2002-09-12 at 15:31, Tom Lane wrote:\n> >> Does anyone see a cleaner answer than re-allowing OPAQUE for PL\n> >> handlers?\n> \n> > Can't you just special case the language handlers when dumping <7.3 and\n> > change 'RETURNS opaque' to 'RETURNS language_handler'? That's all that\n> > is needed to let them be restored OK into 7.3.\n> \n> Only if people dump their old databases with 7.3 pg_dump; which is an\n> assumption I'd rather not make if we can avoid it.\n\nI don't understand.\n\nThe only pg_dump we can fix is 7.3. You can't backport such a change\ninto 7.2 or it won't work for 7.2 restore. If you are using 7.3 pg_dump\nit isn't an assumption but a certainty that it is being used.\n\nIf someone restores into 7.3 with a 7.2 dump they are going to have\nother problems, such as turning all their functions private. Since they\nare going to need to edit the dump anyway, they might as well edit this\nbit too. Surely we should be advising them to use 7.3's pg_dump to do\nthe upgrade.\n\nThe alternative approach is to build a set of kludges into >=7.3 to\nchange opaque to language_handler when a language function is\ninstalled. That doesn't sound like a good idea.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Let the wicked forsake his way, and the unrighteous \n man his thoughts; and let him return unto the LORD, \n and He will have mercy upon him; and to our God, for \n he will abundantly pardon.\" Isaiah 55:7 \n\n",
"msg_date": "12 Sep 2002 18:08:37 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: OPAQUE and 7.2-7.3 upgrade"
},
{
"msg_contents": "Oliver Elphick wrote:\n> On Thu, 2002-09-12 at 15:54, Tom Lane wrote:\n> > Oliver Elphick <olly@lfix.co.uk> writes:\n> > > On Thu, 2002-09-12 at 15:31, Tom Lane wrote:\n> > >> Does anyone see a cleaner answer than re-allowing OPAQUE for PL\n> > >> handlers?\n> > \n> > > Can't you just special case the language handlers when dumping <7.3 and\n> > > change 'RETURNS opaque' to 'RETURNS language_handler'? That's all that\n> > > is needed to let them be restored OK into 7.3.\n> > \n> > Only if people dump their old databases with 7.3 pg_dump; which is an\n> > assumption I'd rather not make if we can avoid it.\n> \n> I don't understand.\n> \n> The only pg_dump we can fix is 7.3. You can't backport such a change\n> into 7.2 or it won't work for 7.2 restore. If you are using 7.3 pg_dump\n> it isn't an assumption but a certainty that it is being used.\n> \n> If someone restores into 7.3 with a 7.2 dump they are going to have\n> other problems, such as turning all their functions private. Since they\n> are going to need to edit the dump anyway, they might as well edit this\n> bit too. Surely we should be advising them to use 7.3's pg_dump to do\n> the upgrade.\n> \n> The alternative approach is to build a set of kludges into >=7.3 to\n> change opaque to language_handler when a language function is\n> installed. That doesn't sound like a good idea.\n\nIs it possible to build a standalone 7.3 dump/dump_all program that can be\nrun on a server with an existing 7.2.x installation and not be linked against\n7.3 libraries? Call it a migration agent if you will.\n\nA notice of some kind would help: Before upgrading, dump the database using\nthis program.\n",
"msg_date": "Thu, 12 Sep 2002 12:19:13 -0500",
"msg_from": "Thomas Swan <tswan@idigx.com>",
"msg_from_op": false,
"msg_subject": "Re: OPAQUE and 7.2-7.3 upgrade"
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At 10:31 AM 12/09/2002 -0400, Tom Lane wrote:\n>> Does anyone see a cleaner answer than re-allowing OPAQUE for PL\n>> handlers?\n\n> What about extending the function manager macros to know about return types \n> (at least for builtin types)?\n\nEr ... what has that got to do with this? And what sort of extension\ndo you think we need? We already have the RETURN_foo() macros.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Sep 2002 13:37:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OPAQUE and 7.2-7.3 upgrade "
},
{
"msg_contents": "Oliver Elphick <olly@lfix.co.uk> writes:\n> On Thu, 2002-09-12 at 15:54, Tom Lane wrote:\n>> Only if people dump their old databases with 7.3 pg_dump; which is an\n>> assumption I'd rather not make if we can avoid it.\n\n> I don't understand.\n\n> The only pg_dump we can fix is 7.3.\n\nCertainly. But if we hack the backend so it still accepts OPAQUE, then\nwe can still load 7.2 dump files.\n\n> If someone restores into 7.3 with a 7.2 dump they are going to have\n> other problems, such as turning all their functions private.\n\nTrue, but they can fix that after-the-fact. Not sure if there is any\ngood workaround for the PL-handler problem in a 7.2 pg_dumpall script.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Sep 2002 13:51:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OPAQUE and 7.2-7.3 upgrade "
},
{
"msg_contents": "At 01:37 PM 12/09/2002 -0400, Tom Lane wrote:\n> > What about extending the function manager macros to know about return \n> types\n> > (at least for builtin types)?\n>\n>Er ... what has that got to do with this?\n\nWhen a user issues a 'CREATE FUNCTION' call, the fmgr can check the return \ntype, and create it with the correct return type (with warning). We just \nneed to make sure that the language handlers are listed as returning the \ncorrect type.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n",
"msg_date": "Fri, 13 Sep 2002 09:59:01 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: OPAQUE and 7.2-7.3 upgrade "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At 01:37 PM 12/09/2002 -0400, Tom Lane wrote:\n>> Er ... what has that got to do with this?\n\n> When a user issues a 'CREATE FUNCTION' call, the fmgr can check the return \n> type, and create it with the correct return type (with warning). We just \n> need to make sure that the language handlers are listed as returning the \n> correct type.\n\nYou mean hardwire the names \"plpgsql_language_handler\", etc, as being\nones that should return such-and-such instead of OPAQUE?\n\nI suppose that's a possible approach, but it strikes me as mighty\nugly.\n\nIf we were going to do such a thing, I'd also want to see it force\nthe shlib path to \"$libdir\". Does that strike you as impossibly\ncrocky, or a reasonable workaround for our past sins?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Sep 2002 23:27:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OPAQUE and 7.2-7.3 upgrade "
},
{
"msg_contents": "At 11:27 PM 12/09/2002 -0400, Tom Lane wrote:\n>You mean hardwire the names \"plpgsql_language_handler\", etc, as being\n>ones that should return such-and-such instead of OPAQUE?\n\nNo; I actually mean modifying the function definition macros \n(PG_FUNCTION_INFO etc) to allow function definitions to (optionally) \ninclude return type (at least for builtin types with fixed IDs) - they \nalready define the invocation method etc, so it does not seem a big stretch \nto add a return type ID.\n\nNot all functions would need to use these, but when a user defines a \nfunction they could be checked. And in the case of the plpgsql handlers, \nthey would of course be defined.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n",
"msg_date": "Fri, 13 Sep 2002 13:42:23 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: OPAQUE and 7.2-7.3 upgrade "
},
{
"msg_contents": "At 01:42 PM 13/09/2002 +1000, Philip Warner wrote:\n\n>Not all functions would need to use these, but when a user defines a \n>function they could be checked. And in the case of the plpgsql handlers, \n>they would of course be defined.\n\nISTM that this problem comes about because we allow an external function to \nbe defined incorrectly (ie. the db says it returns type A, the function \nreally returns type B) - and we should be addressing that problem.\n\nAs I said in an earlier post, it might be good in the future to apply this \nto function args as well.\n\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n",
"msg_date": "Fri, 13 Sep 2002 13:55:01 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: OPAQUE and 7.2-7.3 upgrade "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At 11:27 PM 12/09/2002 -0400, Tom Lane wrote:\n>> You mean hardwire the names \"plpgsql_language_handler\", etc, as being\n>> ones that should return such-and-such instead of OPAQUE?\n\n> No; I actually mean modifying the function definition macros \n> (PG_FUNCTION_INFO etc) to allow function definitions to (optionally) \n> include return type (at least for builtin types with fixed IDs) - they \n> already define the invocation method etc, so it does not seem a big stretch \n> to add a return type ID.\n\nThat cannot work for user-defined functions, wherein the datatype OID is\nnot frozen at the time the code is compiled. In any case, it surely\ndoes not help for our current problem, which is forward-compatibility\nof dumps from 7.2 databases...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Sep 2002 00:09:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OPAQUE and 7.2-7.3 upgrade "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> ISTM that this problem comes about because we allow an external function to \n> be defined incorrectly (ie. the db says it returns type A, the function \n> really returns type B) - and we should be addressing that problem.\n\nWell, yeah. 7.3 is trying to tighten up on exactly that point. And our\ncurrent problem arises precisely because dumps from older database\nversions will fail to meet the tighter rules. How can we accommodate\nthose old dumps without abandoning the attempt to be tighter about\ndatatypes?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Sep 2002 00:11:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OPAQUE and 7.2-7.3 upgrade "
},
{
"msg_contents": "At 12:11 AM 13/09/2002 -0400, Tom Lane wrote:\n>How can we accommodate\n>those old dumps without abandoning the attempt to be tighter about\n>datatypes?\n\nMaybe I'm missing something, but:\n\n1. Dump from 7.2 has 'Create Function....OPAQUE'\n\n2. 7.3 installation has plpgsql library with new function info macro that \ndefines the builtin return type correctly\n\n3. Script runs 'Create Function....OPAQUE'; the backend enquires about the \nfunction in the 'plpgsql.so' library, notes that it really returns \n'language_handler', issues a NOTICE and modifies the definition \nappropriately before adding it to the database.\n\nI'm not sure it's all that valuable, but if we wanted to allow for function \nto return user-defined types, then the function manager macros would have \nto include a return type name, not number.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n",
"msg_date": "Fri, 13 Sep 2002 14:18:17 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: OPAQUE and 7.2-7.3 upgrade "
},
{
"msg_contents": "At 12:11 AM 13/09/2002 -0400, Tom Lane wrote:\n>Well, yeah. 7.3 is trying to tighten up on exactly that point.\n\nThe problem is that as implemented you have only half of the solution; you \nalso need a way for postgresql to determine the 'real' arguments and return \ntype of a function. If the building blocks for pseudo-RTTI can be put in \nplace, then I think that would be a great step forward, *and* solve the \ncurrent problem.\n\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n",
"msg_date": "Sat, 14 Sep 2002 14:02:08 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: OPAQUE and 7.2-7.3 upgrade "
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us] \n> Sent: 11 September 2002 22:28\n> To: Dave Page\n> Cc: Oliver Elphick; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS]\n> \n> \n> Why can't we do the remapping in the SQL grammar and remove \n> the remapping in 7.4?\n> \n\nI can see that working for the opaque/language_handler thing, but\nwould/should it work for tweaking casts that are no longer implicit?\n\nRegards, Dave.\n",
"msg_date": "Wed, 11 Sep 2002 22:33:58 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: "
},
{
"msg_contents": "Dave Page wrote:\n> \n> \n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:pgman@candle.pha.pa.us] \n> > Sent: 11 September 2002 22:28\n> > To: Dave Page\n> > Cc: Oliver Elphick; pgsql-hackers@postgresql.org\n> > Subject: Re: [HACKERS]\n> > \n> > \n> > Why can't we do the remapping in the SQL grammar and remove \n> > the remapping in 7.4?\n> > \n> \n> I can see that working for the opaque/language_handler thing, but\n> would/should it work for tweaking casts that are no longer implicit?\n\nOK, I am going to add these items to the open items list because I am\nhaving trouble keeping track of all the compatibility changes for\npg_dump.\n\nI have:\n\n\tLoading 7.2 pg_dumps \n \topaque language handler no longer recognized \n\nWhat else is there? \n\nDo cast problems relate to pg_dump loading or to working with the data\nafter the load? Is it casts in user functions?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 11 Sep 2002 19:53:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: "
}
] |
[
{
"msg_contents": "\nIf you define a column as:\ncol timestamp\nIn 7.2.x didn't it default to timestamp with timezone?\n\nAnd now in 7.3(b1) it defaults to timestamp without timezone?\n\nIs this right?\n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n----------------------------------\nA wiki we will go...\n\n",
"msg_date": "Wed, 11 Sep 2002 14:42:14 -0700 (PDT)",
"msg_from": "Laurette Cisneros <laurette@nextbus.com>",
"msg_from_op": true,
"msg_subject": "timestamp column default changed?"
},
{
"msg_contents": "Laurette Cisneros wrote:\n> \n> If you define a column as:\n> col timestamp\n> In 7.2.x didn't it default to timestamp with timezone?\n> \n> And now in 7.3(b1) it defaults to timestamp without timezone?\n\n/HISTORY says right at the top:\n\n * TIMESTAMP and TIME data types now default to WITHOUT TIMEZONE\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 11 Sep 2002 17:50:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timestamp column default changed?"
},
{
"msg_contents": "I'm sure you all have discussed this ad-nauseum but this sure does create a\npain in the butt when converting.\n\nOk, I had my say.\n\nThanks for all your hard work,\n\nL.\nOn Wed, 11 Sep 2002, Bruce Momjian wrote:\n\n> Laurette Cisneros wrote:\n> > \n> > If you define a column as:\n> > col timestamp\n> > In 7.2.x didn't it default to timestamp with timezone?\n> > \n> > And now in 7.3(b1) it defaults to timestamp without timezone?\n> \n> /HISTORY says right at the top:\n> \n> * TIMESTAMP and TIME data types now default to WITHOUT TIMEZONE\n> \n> \n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n----------------------------------\nA wiki we will go...\n\n",
"msg_date": "Wed, 11 Sep 2002 15:03:45 -0700 (PDT)",
"msg_from": "Laurette Cisneros <laurette@nextbus.com>",
"msg_from_op": true,
"msg_subject": "Re: timestamp column default changed?"
},
{
"msg_contents": "\nI think the SQL standards required the change.\n\n---------------------------------------------------------------------------\n\nLaurette Cisneros wrote:\n> I'm sure you all have discussed this ad-nauseum but this sure does create a\n> pain in the butt when converting.\n> \n> Ok, I had my say.\n> \n> Thanks for all your hard work,\n> \n> L.\n> On Wed, 11 Sep 2002, Bruce Momjian wrote:\n> \n> > Laurette Cisneros wrote:\n> > > \n> > > If you define a column as:\n> > > col timestamp\n> > > In 7.2.x didn't it default to timestamp with timezone?\n> > > \n> > > And now in 7.3(b1) it defaults to timestamp without timezone?\n> > \n> > /HISTORY says right at the top:\n> > \n> > * TIMESTAMP and TIME data types now default to WITHOUT TIMEZONE\n> > \n> > \n> \n> -- \n> Laurette Cisneros\n> The Database Group\n> (510) 420-3137\n> NextBus Information Systems, Inc.\n> www.nextbus.com\n> ----------------------------------\n> A wiki we will go...\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 11 Sep 2002 19:30:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timestamp column default changed?"
},
{
"msg_contents": "I understand. Thanks for pointing that out. \n\nL.\nOn Wed, 11 Sep 2002, Bruce Momjian wrote:\n\n> \n> I think the SQL standards required the change.\n> \n> ---------------------------------------------------------------------------\n> \n> Laurette Cisneros wrote:\n> > I'm sure you all have discussed this ad-nauseum but this sure does create a\n> > pain in the butt when converting.\n> > \n> > Ok, I had my say.\n> > \n> > Thanks for all your hard work,\n> > \n> > L.\n> > On Wed, 11 Sep 2002, Bruce Momjian wrote:\n> > \n> > > Laurette Cisneros wrote:\n> > > > \n> > > > If you define a column as:\n> > > > col timestamp\n> > > > In 7.2.x didn't it default to timestamp with timezone?\n> > > > \n> > > > And now in 7.3(b1) it defaults to timestamp without timezone?\n> > > \n> > > /HISTORY says right at the top:\n> > > \n> > > * TIMESTAMP and TIME data types now default to WITHOUT TIMEZONE\n> > > \n> > > \n> > \n> > -- \n> > Laurette Cisneros\n> > The Database Group\n> > (510) 420-3137\n> > NextBus Information Systems, Inc.\n> > www.nextbus.com\n> > ----------------------------------\n> > A wiki we will go...\n> > \n> > \n> \n> \n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n----------------------------------\nA wiki we will go...\n\n",
"msg_date": "Wed, 11 Sep 2002 18:18:28 -0700 (PDT)",
"msg_from": "Laurette Cisneros <laurette@nextbus.com>",
"msg_from_op": true,
"msg_subject": "Re: timestamp column default changed?"
},
{
"msg_contents": "timestamp becoming timestamp without time zone is actually the SQL\nstandard...\n\nChris\n\n> I'm sure you all have discussed this ad-nauseum but this sure\n> does create a\n> pain in the butt when converting.\n>\n> Ok, I had my say.\n>\n> Thanks for all your hard work,\n>\n> L.\n> On Wed, 11 Sep 2002, Bruce Momjian wrote:\n>\n> > Laurette Cisneros wrote:\n> > >\n> > > If you define a column as:\n> > > col timestamp\n> > > In 7.2.x didn't it default to timestamp with timezone?\n> > >\n> > > And now in 7.3(b1) it defaults to timestamp without timezone?\n> >\n> > /HISTORY says right at the top:\n> >\n> > * TIMESTAMP and TIME data types now default to WITHOUT TIMEZONE\n> >\n> >\n>\n> --\n> Laurette Cisneros\n> The Database Group\n> (510) 420-3137\n> NextBus Information Systems, Inc.\n> www.nextbus.com\n> ----------------------------------\n> A wiki we will go...\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n",
"msg_date": "Thu, 12 Sep 2002 10:10:12 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: timestamp column default changed?"
}
] |
[
{
"msg_contents": "\n> We can revisit that decision if you like, but you must convince us that\n> it was wrong, not just say \"of course we should change it\".\n\nI am sorry, but at that time I did not have time for the discussion,\nand now is also very tight for me :-(\n\nFour reasons I can give:\n\t1. execute xx(...); looks like xx is a procedure which it definitely is not.\n\t2. imho ecpg should use the backend side feature and thus the syntax should be\n\t the same. iirc the syntax was chosen to separate it from esql, but if it gets \n\t to be the same why separate it ?\n\t3. I think a close comparison is possible for dynamically prepared statements where \n\t you don't directly use host variables in the statement, but placeholders (\"?\").\n\t4. we did use the esql standard for \"declare cursor\", why not now ?\n\nAre the () mandatory for the backend side feature ? If yes, it would at least be possible\nto differentiate ecpg from it.\n\nActually \"exec sql execute\" is only for statements not returning a result set (e.g. update).\nselects would need 'declare \"curid\" cursor for ...' and fetch, but that would imho be an \nimprovement because you can then choose a named portal.\n\nAndreas\n",
"msg_date": "Wed, 11 Sep 2002 23:42:47 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: 7.3beta and ecpg "
}
] |
[
{
"msg_contents": "Hi,\n\nDoes anyone know any implementation of a fixpoint operator (recursive\nqueries) for postgreSQL?\n\nThanks,\nLuciano.\n\n",
"msg_date": "Thu, 12 Sep 2002 01:55:16 +0100 (BST)",
"msg_from": "Luciano David Gerber <gerberl@cs.man.ac.uk>",
"msg_from_op": true,
"msg_subject": "fixpoint"
}
] |
[
{
"msg_contents": "FYI, I am going to be away from Thursday night to Sunday on a retreat. \nI will be checking my email but may not be able to reply quickly.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 12 Sep 2002 00:12:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Long weekend"
}
] |
[
{
"msg_contents": "FYI, SRA, the leading PostgreSQL support company in Japan, has renewed\nmy employment contract. This will allow me to continue to devote 100%\nof my working hours to improving PostgreSQL and assisting them and their\ncustomers. It is a pleasure working for them.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 12 Sep 2002 00:21:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "SRA contract renewed"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nAm just wondering if we've ever considered adding a PGXLOG environment\nvariable that would point to the pg_xlog directory?\n\nIn a Unix environment it's not really necessary as filesystem links can be\ncreated, but in other environments (i.e. the Native windows port) it's\nlooking like it might be useful.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Thu, 12 Sep 2002 14:29:28 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "PGXLOG variable worthwhile?"
},
{
"msg_contents": "\nWe dealt with this (painfully) during 7.3 development. Some wanted a -X\nflag to initdb/postgres/postmaster that would identify the pg_xlog\ndirectory while others wanted the flag only on initdb and have initdb\ncreate a symlink.\n\nFinally, we decided to do nothing, and continue to recommend manually\nmoving pg_xlog using symlinks.\n\nAlso, I have heard symlinks are available in native Windows but the\ninterface to them isn't clearly visible. Can someone clarify that?\n\n---------------------------------------------------------------------------\n\nJustin Clift wrote:\n> Hi everyone,\n> \n> Am just wondering if we've ever considered adding a PGXLOG environment\n> variable that would point to the pg_xlog directory?\n> \n> In a Unix environment it's not real necessary as filesystem links can be\n> created, but in other environments (i.e. the Native windows port) it's\n> looking like it might be useful.\n> \n> :-)\n> \n> Regards and best wishes,\n> \n> Justin Clift\n> \n> -- \n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 12 Sep 2002 01:26:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "On Thu, 12 Sep 2002, Justin Clift wrote:\n\n> Am just wondering if we've ever considered adding a PGXLOG environment\n> variable that would point to the pg_xlog directory?\n\nIMHO, a much better way to support this is to put this information into\nthe config file. That way it can't easily change when you happen to, say,\nstart postgres in the wrong window.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Fri, 13 Sep 2002 01:28:39 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> On Thu, 12 Sep 2002, Justin Clift wrote:\n>> Am just wondering if we've ever considered adding a PGXLOG environment\n>> variable that would point to the pg_xlog directory?\n\n> IMHO, a much better way to support this is to put this information into\n> the config file. That way it can't easily change when you happen to, say,\n> start postgres in the wrong window.\n\nYes. We rejected environment-variable-based xlog location for reasons\nthat apply equally well to Windows. The xlog location *must* be stored\nin a physical file in the data directory; anything else is too unsafe.\nThe current technology for that is a symlink.\n\nWhile it doesn't have to be a symlink as opposed to some sort of config\nfile, I don't have the slightest problem with saying that we don't\nsupport relocation of xlog on older Windoid platforms.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Sep 2002 15:12:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile? "
},
{
"msg_contents": "Tom Lane wrote:\n> Curt Sampson <cjs@cynic.net> writes:\n> > On Thu, 12 Sep 2002, Justin Clift wrote:\n> >> Am just wondering if we've ever considered adding a PGXLOG environment\n> >> variable that would point to the pg_xlog directory?\n> \n> > IMHO, a much better way to support this is to put this information into\n> > the config file. That way it can't easily change when you happen to, say,\n> > start postgres in the wrong window.\n> \n> Yes. We rejected environment-variable-based xlog location for reasons\n> that apply equally well to Windows. The xlog location *must* be stored\n> in a physical file in the data directory; anything else is too unsafe.\n> The current technology for that is a symlink.\n> \n> While it doesn't have to be a symlink as opposed to some sort of config\n> file, I don't have the slightest problem with saying that we don't\n> support relocation of xlog on older Windoid platforms.\n\nAgreed. Win 4.X is pretty dead. I added this thread to TODO.detail.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 15 Sep 2002 22:49:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Tom Lane wrote:\n<snip>\n> > While it doesn't have to be a symlink as opposed to some sort of config\n> > file, I don't have the slightest problem with saying that we don't\n> > support relocation of xlog on older Windoid platforms.\n> \n> Agreed. Win 4.X is pretty dead. I added this thread to TODO.detail.\n\nHuh? You've got to be joking.\n\nMany of the *really large* enterprises around (i.e. with 40k+ PC's, etc)\nare still running WinNT 4, due to the migration issues with upgrading. \nAka, Too Many Things Break when they move to Win2k, etc.\n\nAlthough MS no longer considers WinNT 4.0 to be a supported platform,\nthere are *lots* of big places still running it.\n\nThat's part of the reason some of the bigger corporates are looking for\nMS alternatives.\n\nRegards and best wishes,\n\nJustin Clift\n\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Mon, 16 Sep 2002 13:02:08 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Justin Clift wrote:\n> Bruce Momjian wrote:\n> > \n> > Tom Lane wrote:\n> <snip>\n> > > While it doesn't have to be a symlink as opposed to some sort of config\n> > > file, I don't have the slightest problem with saying that we don't\n> > > support relocation of xlog on older Windoid platforms.\n> > \n> > Agreed. Win 4.X is pretty dead. I added this thread to TODO.detail.\n> \n> Huh? You've got to be joking.\n> \n> Many of the *really large* enterprises around (i.e. with 40k+ PC's, etc)\n> are still running WinNT 4, due to the migration issues with upgrading. \n> Aka, Too Many Things Break when they move to Win2k, etc.\n> \n> Although MS no longer considers WinNT 4.0 to be a supported platform,\n> there are *lots* of big places still running it.\n> \n> That's part of the reason some of the bigger corporates are looking for\n> MS alternatives.\n\nOh, that is bad news. Well, can we accept they will not be moving XLOG\naround?\n\nThe problem with the non-symlink solution is that it is error-prone/ugly\non all the platforms, not just NT4.X.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 15 Sep 2002 23:04:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Bruce Momjian wrote:\n<snip>\n> Oh, that is bad news. Well, can we accept they will not be moving XLOG\n> around?\n> \n> The problem with the non-symlink solution is that it is error-prone/ugly\n> on all the platforms, not just NT4.X.\n\nWhat you guys are saying isn't necessarily wrong, in that it may not\ndefinitely be very pretty.\n\nHowever, moving the WAL files to another disk has a significant\nperformance gain attached to it for loaded servers, so how about we\ntake the viewpoint that if WinNT/2k/XP are to be supported then we might\nas well let it do things properly instead of handicapping it?\n\nDoes anyone care to estimate what the coding time+issues involved would\nbe, for adding a parameter to the postgresql.conf file that allows\nPostgreSQL to directly use a different directory path for the WAL files?\n\n'wal_path'\n\nor\n\n'wal_directory'\n\nor similar. In the postgresql.conf it would probably be placed in the\n'Write-ahead log (WAL)' or 'Misc' sections.\n\nNo guarantees just yet but if it's not an extremely expensive thing to\nadd, then there might be people willing to pay for it (have a group in\nmind already).\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Mon, 16 Sep 2002 13:26:13 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Justin Clift <justin@postgresql.org> writes:\n> However, moving the WAL files to another disk has a significant\n> performance gain attached to it for loaded servers, so we how about we\n> take the viewpoint that if WinNT/2k/XP are to be supported then we might\n> as well let it do things properly instead of handicapping it?\n\nConsidering that we do not yet have support for WinAnything except via\ncygwin, this thread strikes me as mighty premature.\n\nAnd, to be blunt, I'm not likely to go out of my way to improve support\nfor WinAnything even when we do have a native port. In words of one\nsyllable: WinAnything is not, and never will be, a preferred platform\nfor Postgres. Accordingly, performance improvements for it are just a\ndistraction from our real business; a distraction which plays into the\nhands of Gates & Co. No thank you. I'm okay with providing minimal\nsupport for those who really want to run toy databases on a toy\nplatform. I will *not* buy into trying to make it a non-toy platform.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Sep 2002 23:46:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile? "
},
{
"msg_contents": "On Sun, 15 Sep 2002, Bruce Momjian wrote:\n\n> The problem with the non-symlink solution is that it is error-prone/ugly\n> on all the platforms, not just NT4.X.\n\nActually, it's really just the environment variable solution that's\nerror prone, I think. Putting it in the config file is fine. It's\njust a matter of someone coding it.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Mon, 16 Sep 2002 12:56:02 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Tom Lane wrote:\n<snip> \n> And, to be blunt, I'm not likely to go out of my way to improve support\n> for WinAnything even when we do have a native port. In words of one\n> syllable: WinAnything is not, and never will be, a preferred platform\n> for Postgres. Accordingly, performance improvements for it are just a\n> distraction from our real business; a distraction which plays into the\n> hands of Gates & Co. No thank you. I'm okay with providing minimal\n> support for those who really want to run toy databases on a toy\n> platform. I will *not* buy into trying to make it a non-toy platform.\n\nUnderstood, and that's ok.\n\nAllowing PostgreSQL to be productively used as best it can be, wherever\nit can be, makes sense, doesn't it? Especially when the real target here\nwould be to give existing MS places a lower cost of entry to the\nPostgreSQL world.\n\nFinancial example :\n\nWinNT/2k/XP costs a few hundred dollars.\n\nMS SQL Server costs a few thousand dollars.\n\nWhenever we displace MS SQL Server, we divert more revenue away from MS\nthan if we just say \"Sorry but we're not happy with making the Windows\nport perform in ways that let it compete adequately with MS SQL Server\".\n\nWe both know the arguments for and against. You're in the \"against\"\ncamp, and I'm in the \"for\" camp.\n\nPersonally I'm hoping there are some other PostgreSQL coders around in\nthe \"for\" camp too that can assist with this as we're beginning to gain\nsome good public enterprise level of interest.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n> \n> regards, tom lane\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Mon, 16 Sep 2002 13:59:26 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Justin Clift wrote:\n> \n> Bruce Momjian wrote:\n> <snip>\n> > Oh, that is bad news. Well, can we accept they will not be moving XLOG\n> > around?\n> >\n> > The problem with the non-symlink solution is that it is error-prone/ugly\n> > on all the platforms, not just NT4.X.\n> \n> What you guys are saying isn't necessarily wrong, in that it may not\n> definitely be very pretty.\n> \n> However, moving the WAL files to another disk has a significant\n> performance gain attached to it for loaded servers, so we how about we\n> take the viewpoint that if WinNT/2k/XP are to be supported then we might\n> as well let it do things properly instead of handicapping it?\n\nI just don't see why that all could become an issue. Someone\nrunning big stuff on NT4 today is not running a native PostgreSQL\nport on it. Why would someone want to do a new, big, PG\ninstallation on an old, unsupported NT4 server today?\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.                                  #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Mon, 16 Sep 2002 11:14:59 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Jan Wieck wrote:\n<snip>\n> \n> I just don't see why that all could become an issue. Someone\n> running big stuff on NT4 today is not running a native PostgreSQL\n> port on it. Why would someone want to do a new, big, PG\n> installation on an old, unsupported NT4 server today?\n\nCorporate Standards. Even if everyone *knows* that NT4 isn't the latest\nand greatest, many large companies still use NT4. Purely because so\nmuch stuff they use works with it that they haven't been able to\ngenerate sufficient business cases to migrate their base server OS to\nWin2K (or XP).\n\nIf this would be a really huge and drastic modification then sure it's\nnot necessarily an easy thing to decide. But the first thing to\nconsider is \"how much effort would be required?\".\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> Jan\n> \n> --\n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being\n> right. #\n> # Let's break this rule - forgive\n> me. #\n> #==================================================\n> JanWieck@Yahoo.com #\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Tue, 17 Sep 2002 01:25:57 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Justin Clift writes:\n\n> WinNT/2k/XP costs a few hundred dollars.\n>\n> MS SQL Server costs a few thousand dollars.\n\nThe places that run Windows can be categorized into three camps: (1)\nThose that don't have a clue. They will never run PostgreSQL. (2) Those\nthat are somehow afraid to switch to a different solution. They will be\neven more hesitant to switch to PostgreSQL. (3) Those that somehow like\nWindows. They will like MS SQL Server as well, no matter what we do.\n\nSo where is the market?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 16 Sep 2002 19:25:04 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "On Mon, 16 Sep 2002, Peter Eisentraut wrote:\n\n> Justin Clift writes:\n> \n> > WinNT/2k/XP costs a few hundred dollars.\n> >\n> > MS SQL Server costs a few thousand dollars.\n> \n> The places that run Windows can be categorized into three camps: (1)\n> Those that don't have a clue. They will never run PostgreSQL. (2) Those\n> that are somehow afraid to switch to a different solution. They will be\n> even more hesitant to switch to PostgreSQL. (3) Those that somehow like\n> Windows. They will like MS SQL Server as well, no matter what we do.\n\nI would say the only real growth market is \"Those who have a clue, and are \nlooking at migrating off of Windows / MSSQL to a different database.\"\n\nIn the case of my company, that's mostly resulted in Postgresql deployed \non Linux and Solaris. But I can see a use for Postgresql on Windows. \nHowever, for us, all our serious Windows servers have long since been \nconverted to Win2K. For all those situations, I can't imagine the \ndatabase getting big enough and hit hard enough for pg_xlog to be a \nproblem before it gets moved to a real OS.\n\nSo, by the time someone is deciding to dedicate themselves to running \nPostgresql, they've probably already decided they should run it on some \nflavor of Unix, or the slower performance of Postgresql under Windows is \nno great detriment.\n\nSupporting a sane OS like Unix is hard enough, creating more work for the \ncore developers in trying to work around a broken file system on Windows \nis not the best use of the resources available.\n\nIf and when someone running postgresql on Windows decides they REALLY need \nto move the pg_xlog somewhere else, they can either code it, or move to \nLinux. I'd recommend moving to Linux.\n\n",
"msg_date": "Mon, 16 Sep 2002 11:57:26 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Justin Clift wrote:\n> \n> Jan Wieck wrote:\n> <snip>\n> >\n> > I just don't see why that all could become an issue. Someone\n> > running big stuff on NT4 today is not running a native PostgreSQL\n> > port on it. Why would someone want to do a new, big, PG\n> > installation on an old, unsupported NT4 server today?\n> \n> Corporate Standards. Even if everyone *knows* that NT4 isn't the latest\n> and greatest, many large companies still use NT4. Purely because so\n> much stuff they use works with it that they haven't been able to\n> generate sufficient business cases to migrate their base server OS to\n> Win2K (or XP).\n\nThe word construct \"corporate standard\" is the most expensive and\ndangerous form of ignorance I've seen in the business. One of the\nbest examples I've seen actually fits very well: an SAP customer\nconverting from R/2 to R/3 a couple of years ago. They ran all their\nnon-mainframe business on HP3000 MPE/IX systems. We strongly\nrecommended using HP/UX for the SAP installation instead, but\nthey followed their \"corporate ignorance\" anyway. Two weeks\nbefore going live, SAP informed all their MPE customers that\nsupport for that operating system would be abandoned, and strongly\nrecommended converting to HP/UX soon because within a few months\nnot even hotfixes would be provided any more. Ouch!\n\nIf corporate standard means similar letterheads, similar\nappearance of public offices or advertising, that's absolutely a good\nthing and I'm all for it. But if it causes you to get stuck with old\ntechnology, then the corporate standard itself is the problem\nthat needs to be fixed first.\n\nBut ... let's put it into the damned config file and move on.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being\nright. #\n# Let's break this rule - forgive\nme. #\n#==================================================\nJanWieck@Yahoo.com #\n",
"msg_date": "Mon, 16 Sep 2002 13:57:51 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> Justin Clift writes:\n> \n> > WinNT/2k/XP costs a few hundred dollars.\n> >\n> > MS SQL Server costs a few thousand dollars.\n> \n> The places that run Windows can be categorized into three camps:\n<snip>\n\nHow about this?\n\nThe places that run Windows can be categorised a number of different\nways, depending on what you're looking for.\n\n1) Places that have in-house staff that can do or learn everything.\n\nMany of these places are really small, some are not. PostgreSQL fits\nwell here, Windows or not, as these people are prepared to learn how to\nuse it best.\n\n\n2) Companies that hire external IT services.\n\nOften the software implemented here will be dependent on outside sources\nof advice such as consultants, executives who take an interest in IT\nmags, etc.\n\nLook at Windows NT on the server in the first place. Microsoft\nleveraged the marketplace through making itself available then promoting\nthe heck out of itself into the IT press, industry mags, etc.\n\nThese places will be receptive to PostgreSQL as our reputation further\nbecomes known and they can see where PostgreSQL will be useful to them. \nPostgreSQL on Win NT/2K/XP will definitely be of use to a sizable number\nof these businesses.\n\n\n3) Companies who depend on multiple external sources of IT support. \ni.e. One reasonable sized enterprise here in Australia has over 450\n*development* companies presently working on applications for their\nenvironment. Because of the scope of standardisation needed, they\nstandardised on WinNT many years ago. It still works for them. They\ndon't even have SP6 installed on their desktops as it breaks too many of\nthe desktop applications. etc.\n\nThese people are not clueless. 
They make strategic decisions when\nthey're necessary, and it all comes down to flexibility, reliability,\nand cost.\n\nFor some things they run Unix, or Windows, or Novell, or OS/390, or any\nnumber of other stuff.\n\nBecause of the years of experience some of their support companies have\nwith WinNT, it works reliably enough for them. They don't have the\n\"need to reboot once per week\" thing with their servers.\n\nThese guys will become receptive to PostgreSQL too, and it will be in\nour favour to be able to demonstrate very good performance across all\nplatforms that we can, not just our own *personally preferred*\nplatforms.\n\nBy giving them options when it doesn't take a *whole bunch of effort* to\ndo so, we open up ways for PostgreSQL to be used that we haven't even\nthought of before. We all know this already. \n\nIt wouldn't really surprise me greatly if at some point this proved\nbeneficial to a non-Windows platform for some reason too.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> So where is the market?\n> \n> --\n> Peter Eisentraut peter_e@gmx.net\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Tue, 17 Sep 2002 04:11:26 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "It seems all of this discussion misses the point. Either it has a large\namount of impact and the idea gets rejected because of implementation\nissues, or it has little impact but it's nothing the core group wants to\nimplement. If the problem is finding someone to implement it, it sounds\nlike Justin has found such a person, so are we going to stand in his way\nwhile we wax poetic about OS religion and corporate philosophies or can\nhe start submitting patches?\n\nRobert Treat\n\nOn Mon, 2002-09-16 at 14:11, Justin Clift wrote:\n> Peter Eisentraut wrote:\n> > \n> > Justin Clift writes:\n> > \n> > > WinNT/2k/XP costs a few hundred dollars.\n> > >\n> > > MS SQL Server costs a few thousand dollars.\n> > \n> > The places that run Windows can be categorized into three camps:\n> <snip>\n> \n> How about this?\n> \n> The places that run Windows can be categorised a number of different\n> ways, depending on what you're looking for.\n> \n> 1) Places that have in-house staff that can do or learn everything.\n> \n> Many of these places are really small, some are not. PostgreSQL fits\n> well here, Windows or not, as these people are prepared to learn how to\n> use it best.\n> \n> \n> 2) Companies that hire external IT services.\n> \n> Often the software implemented here will be dependent on outside sources\n> of advice such as consultants, executives who take an interest in IT\n> mags, etc.\n> \n> Look at Windows NT on the server in the first place. Microsoft\n> leveraged the marketplace through making itself available then promoting\n> the heck out of itself into the IT press, industry mags, etc.\n> \n> These places will be receptive to PostgreSQL as our reputation further\n> becomes known and they can see where PostgreSQL will be useful to them. \n> PostgreSQL on Win NT/2K/XP will definitely be of use to a sizable number\n> of these businesses.\n> \n> \n> 3) Companies who depend on multiple external sources of IT support. \n> i.e. 
One reasonable sized enterprise here in Australia has over 450\n> *development* companies presently working on applications for their\n> environment. Because of the scope of standardisation needed, they\n> standardised on WinNT many years ago. It still works for them. They\n> don't even have SP6 installed on their desktops as it breaks too many of\n> the desktop applications. etc.\n> \n> These people are not clueless. They make strategic decisions when\n> they're necessary, and it all comes down to flexibility, reliability,\n> and cost.\n> \n> For some things they run Unix, or Windows, or Novell, or OS/390, or any\n> number of other stuff.\n> \n> Because of the years of experience some of their support companies have\n> with WinNT, it works reliably enough for them. They don't have the\n> \"need to reboot once per week\" thing with their servers.\n> \n> These guys will become receptive to PostgreSQL too, and it will be in\n> our favour to be able to demonstrate very good performance across all\n> platforms that we can, not just our own *personally preferred*\n> platforms.\n> \n> By giving them options when it doesn't take a *whole bunch of effort* to\n> do so, we open up ways for PostgreSQL to be used that we haven't even\n> thought of before. We all know this already. \n> \n> It wouldn't really surprise me greatly if at some point this proved\n> beneficial to a non-Windows platform for some reason too.\n> \n> :-)\n> \n> Regards and best wishes,\n> \n> Justin Clift\n> \n> \n> > So where is the market?\n> > \n> > --\n> > Peter Eisentraut peter_e@gmx.net\n> \n> -- \n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. 
He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n\n\n",
"msg_date": "16 Sep 2002 16:42:06 -0400",
"msg_from": "Robert Treat <xzilla@users.sourceforge.net>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "> The places that run Windows can be categorized into three camps: (1)\n> Those that don't have a clue. They will never run PostgreSQL. (2) Those\n> that are somehow afraid to switch to a different solution. They will be\n> even more hesitant to switch to PostgreSQL. (3) Those that somehow like\n> Windows. They will like MS SQL Server as well, no matter what we do.\n> \n> So where is the market?\n\nAsk MySQL - they have many, many Windows users.\n\nChris\n\n",
"msg_date": "Tue, 17 Sep 2002 10:04:26 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Robert Treat wrote:\n> It seems all of this discussion misses the point. Either it has a large\n> amount of impact and the idea gets rejected because of implementation\n> issues, or it has little impact but it's nothing the core group wants to\n> implement. If the problem is finding someone to implement it, it sounds\n> like Justin has found such a person, so are we going to stand in his way\n> while we wax poetic about OS religion and corporate philosophies or can\n> he start submitting patches?\n\nActually, the work is minimal. Look at the commit I used to remove\nPGXLOG, trim that to remove the changes to make the path name dynamic in\nsize (added too much complexity for little benefit) and hang the path\ncoding off a GUC variable rather than an environment variable.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 17 Sep 2002 01:19:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "> > It seems all of this discussion misses the point. Either it has a large\n> > amount of impact and the idea gets rejected because of implementation\n> > issues, or it has little impact but it's nothing the core group wants to\n> > implement. If the problem is finding someone to implement it, it sounds\n> > like Justin has found such a person, so are we going to stand in his way\n> > while we wax poetic about OS religion and corporate philosophies or can\n> > he start submitting patches?\n>\n> Actually, the work is minimal. Look at the commit I used to remove\n> PGXLOG, trim that to remove the changes to make the path name dynamic in\n> size (added too much complexity for little benefit) and hang the path\n> coding off a GUC variable rather than an environment variable.\n\nI personally don't see the problem with a GUC variable...that seems like the\nperfect solution to me...\n\nChris\n\n",
"msg_date": "Tue, 17 Sep 2002 13:32:23 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> > > It seems all of this discussion misses the point. Either it has a large\n> > > amount of impact and the idea gets rejected because of implementation\n> > > issues, or it has little impact but it's nothing the core group wants to\n> > > implement. If the problem is finding someone to implement it, it sounds\n> > > like Justin has found such a person, so are we going to stand in his way\n> > > while we wax poetic about OS religion and corporate philosophies or can\n> > > he start submitting patches?\n> >\n> > Actually, the work is minimal. Look at the commit I used to remove\n> > PGXLOG, trim that to remove the changes to make the path name dynamic in\n> > size (added too much complexity for little benefit) and hang the path\n> > coding off a GUC variable rather than an environment variable.\n> \n> I personally don't see the problem with a GUC variable...that seems like the\n> perfect solution to me...\n\nWell, let's see if we ever run on native NT4.X and we can decide then. \nActually, don't our Cygnus folks have a problem with moving pg_xlog\nalready?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 17 Sep 2002 01:36:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Robert Treat wrote:\n> It seems all of this discussion misses the point. Either it has a large\n> amount of impact and the idea gets rejected because of implementation\n> issues, or it has little impact but it's nothing the core group wants to\n> implement. If the problem is finding someone to implement it, it sounds\n> like Justin has found such a person, so are we going to stand in his way\n> while we wax poetic about OS religion and corporate philosophies or can\n> he start submitting patches?\n\nWell, I have Win32 patches here I am reviewing. I think I can say that\nthe changes are minimal and probably will be accepted for addition into\n7.4. I am actually surprised at how little is required.\n\nRight now, 7.4 is targeted with point-in-time recovery and Win32. And,\nin fact, both patches are almost ready for inclusion into CVS, so we\nmay find that 7.4 has a very short release cycle.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 19 Sep 2002 16:27:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
}
] |
[
{
"msg_contents": "MySQL wins Prestigious Linux Journal's Editors' Choice Award:\n\nhttp://www.mysql.com/news/article-109.html\n\nAn amusing quote from the article:\n\n\"If you're one of the people who has been saying, 'I can't use MySQL because\nit doesn't have [feature you need here]', it's time to read up on MySQL 4.0\nand try it out on a development system. Can you say, 'full support for\ntransactions and row-level locking'? 'UNION'? 'Full text search'?\"\n\n*sigh*\n\nWell, at least they have an easy and fast upgrade process ;)\n\nChris\n\n",
"msg_date": "Thu, 12 Sep 2002 13:56:19 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "MySQL wins award - makes amusing statement"
},
{
"msg_contents": "On Thu, Sep 12, 2002 at 01:56:19PM +0800, Christopher Kings-Lynne wrote:\n> \n> *sigh*\n> \n> Well, at least they have an easy and fast upgrade process ;)\n\nRight, fewer pesky features to get in the way of the upgrade ;->\n\nRoss\n",
"msg_date": "Thu, 12 Sep 2002 01:24:24 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] MySQL wins award - makes amusing statement"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us] \n> Sent: 12 September 2002 00:53\n> To: Dave Page\n> Cc: Oliver Elphick; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS]\n> \n> \n> OK, I am going to add these items to the open items list \n> because I am having trouble keeping track of all the \n> compatibility changes for pg_dump.\n> \n> I have:\n> \n> \tLoading 7.2 pg_dumps \n> \topaque language handler no longer recognized \n> \n> What else is there? \n> \n> Do cast problems related to pg_dump loading or to working \n> with the data after the load? Is it casts in user functions?\n\nOliver reported:\n\n2. The dump produced:\n CREATE TABLE cust_alloc_history (\n ...\n \"year\" integer DEFAULT date_part('year'::text,\n ('now'::text)::timestamp(6) with time zone) NOT NULL,\n ...\n ERROR: Column \"year\" is of type integer but default expression is\nof type double precision\n You will need to rewrite or cast the expression\n\nFor an original definition of:\n\n year INTEGER DEFAULT\ndate_part('year',CURRENT_TIMESTAMP)\n\nRegards, Dave.\n",
"msg_date": "Thu, 12 Sep 2002 08:23:36 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: "
},
{
"msg_contents": "Dave Page wrote:\n> Oliver reported:\n> \n> 2. The dump produced:\n> CREATE TABLE cust_alloc_history (\n> ...\n> \"year\" integer DEFAULT date_part('year'::text,\n> ('now'::text)::timestamp(6) with time zone) NOT NULL,\n> ...\n> ERROR: Column \"year\" is of type integer but default expression is\n> of type double precision\n> You will need to rewrite or cast the expression\n> \n> For an original definition of:\n> \n> year INTEGER DEFAULT\n> date_part('year',CURRENT_TIMESTAMP)\n\nWow. That is clear. Why are we returning \"year\" as a double? Yes, I\nsee now:\n\n\ttest=> \\df date_part\n\t List of functions\n\t Result data type | Schema | Name | Argument data types \n\t------------------+------------+-----------+-----------------------------------\n\t double precision | pg_catalog | date_part | text, abstime\n\t double precision | pg_catalog | date_part | text, date\n\t double precision | pg_catalog | date_part | text, interval\n\t double precision | pg_catalog | date_part | text, reltime\n\t double precision | pg_catalog | date_part | text, time with time zone\n\t double precision | pg_catalog | date_part | text, time without time zone\n\t double precision | pg_catalog | date_part | text, timestamp with time zone\n\t double precision | pg_catalog | date_part | text, timestamp without time zone\n\nI would love to say that this is related to change in casts, but that\nisn't the case. It is the new double-precision handling of dates; and\nI see no easy way to fix this, and you can't fix this after the data\nload because the table wasn't created. Yuck.\n\nI have to ask, why are we using a double here rather than a 64-bit\nvalue, if available?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 12 Sep 2002 12:15:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I would love to say that this is related to change in casts, but that\n> isn't the case. It is the new double-precision handling of dates;\n\nYou've got that exactly backwards: date_part has always returned double.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Sep 2002 15:07:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Wow. That is clear. Why are we returning \"year\" as a double?\n\nBecause we've been doing that for many years.\n\n> I would love to say that this is related to change in casts, but that\n> isn't the case.\n\nSure it is. The float=>int casts need to be made implicit, or we'll have\ntons of problems like this.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 13 Sep 2002 00:08:53 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Bruce Momjian writes:\n>> I would love to say that this is related to change in casts, but that\n>> isn't the case.\n\n> Sure it is. The float=>int casts need to be made implicit, or we'll have\n> tons of problems like this.\n\nWell, yeah. That did not seem to bother anyone last spring, when we\nwere discussing tightening the implicit-casting rules. Shall we\nabandon all that work and go back to \"any available cast can be applied\nimplicitly\"?\n\nMy vote is \"tough, time to fix your SQL code\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Sep 2002 00:46:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Bruce Momjian writes:\n> >> I would love to say that this is related to change in casts, but that\n> >> isn't the case.\n> \n> > Sure it is. The float=>int casts need to be made implicit, or \n> we'll have\n> > tons of problems like this.\n> \n> Well, yeah. That did not seem to bother anyone last spring, when we\n> were discussing tightening the implicit-casting rules. Shall we\n> abandon all that work and go back to \"any available cast can be applied\n> implicitly\"?\n> \n> My vote is \"tough, time to fix your SQL code\".\n\nWasn't the resolution back then to \"wait until beta and see who complains\"?\n\nChris\n\n",
"msg_date": "Fri, 13 Sep 2002 12:56:36 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "On Fri, 13 Sep 2002 00:46:00 -0400,\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> \n> > Sure it is. The float=>int casts need to be made implicit, or we'll have\n> > tons of problems like this.\n> \n> Well, yeah. That did not seem to bother anyone last spring, when we\n> were discussing tightening the implicit-casting rules. Shall we\n> abandon all that work and go back to \"any available cast can be applied\n> implicitly\"?\n\nImplicit float to int loses precision, so it shouldn't be implicit,\nshould it?\n\nMaybe the solution is to make 7.3 pg_dump smart enough to add explicit\ncasts where default values demand them... Is this possible? Are there\nother cases where tightening implicit casts is going to bite users?\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\nThe meaning of things does not come from the things themselves, but\nfrom the minds that apply them to their daily problems in pursuit\nof progress. (Ernesto Hernández-Novich)\n",
"msg_date": "Fri, 13 Sep 2002 00:59:59 -0400",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": ">\n> My vote is \"tough, time to fix your SQL code\".\n>\n\nSounds good to me, but please document it in the \"migration\" notes. No need \nfor a surprise.\n\nRegards,\n\tJeff\n",
"msg_date": "Thu, 12 Sep 2002 23:58:00 -0700",
"msg_from": "Jeff Davis <list-pgsql-hackers@empires.org>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "Tom Lane writes:\n\n> Shall we abandon all that work and go back to \"any available cast can be\n> applied implicitly\"?\n>\n> My vote is \"tough, time to fix your SQL code\".\n\nThat would be OK if the current behavior conformed to the SQL standard,\nwhich it doesn't. The standard says that all numerical types are mutually\nassignable, which in my mind translates directly as implicitly castable.\nAdditionally, your stance breaks the following SQL-compatible and probably\nquite common code:\n\ncreate table test ( a int default extract(year from current_date) );\n\nWe aren't abandoning \"all that work\". Plenty of casts should not be\nimplicit because they are structurally guaranteed to lose information. But\nfor casts between numerical types it depends on the content at run time.\nTherefore the SQL standard says that the check needs to be at run time.\nWe do that already, so I don't see a reason to be more strict here.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sat, 14 Sep 2002 00:37:16 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> Shall we abandon all that work and go back to \"any available cast can be\n>> applied implicitly\"?\n>> \n>> My vote is \"tough, time to fix your SQL code\".\n\n> That would be a OK if the current behavior conformed to the SQL standard,\n> which it doesn't. The standard says that all numerical types are mutually\n> assignable, which in my mind translates directly as implicitly castable.\n\nIf we take that stance then we will never make any progress at all on\nfixing our problems with poor choices of numeric operators and inability\nto choose an appropriate operator. We can *not* adopt the attitude that\nall numeric casts are equal; some have got to be more equal than others,\nor the parser will be unable to choose desirable interpretations over\nundesirable ones.\n\nAs an example, current code does the right thing with\n\tselect * from foo where numeric_col = 10.1\nwhereas 7.2 failed with\n\tERROR: Unable to identify an operator '=' for types 'numeric' and 'double precision'\nThis improvement comes precisely because the numeric->float8 cast\npathway is not treated on an even footing with the other direction.\n\n> Additionally, your stance breaks the following SQL compatible and probably\n> quite common code:\n\n> create table test ( a int extract(year from current_date) );\n\nI previously suggested that it might be okay to allow non-implicit casts\nto be used when assigning a value to a target column in INSERT and\nUPDATE (including the case where the value is a default value). If we\ndo that, then the above will work, and we haven't abandoned all hope of\nchoosing sensible cast pathways within expressions.\n\nAlternatively we could think about a three-level scheme where pg_cast\ncan declare different \"strengths\" of implicit castability for a cast\npathway; then it'd be possible to allow or disallow implicit coercion\nto a target column type on a cast-by-cast basis. 
Dunno if we need that\nmuch complexity here...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 14 Sep 2002 14:47:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Casting rules (was: an untitled thread)"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I would love to say that this is related to change in casts, but that\n> > isn't the case. It is the new double-precision handling of dates;\n> \n> You've got that exactly backwards: date_part has always returned double.\n\nWell, at least I was _exact_ about something. :-)\n\n(I am back from the retreat. I actually was back Saturday afternoon but\nmy connection to my ISP was down until today.)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 15 Sep 2002 21:03:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "\nCan someone remind me why date_part() returns a double rather than an\nint4? It is just for partial seconds?\n\n---------------------------------------------------------------------------\n\nPeter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > Shall we abandon all that work and go back to \"any available cast can be\n> > applied implicitly\"?\n> >\n> > My vote is \"tough, time to fix your SQL code\".\n> \n> That would be a OK if the current behavior conformed to the SQL standard,\n> which it doesn't. The standard says that all numerical types are mutually\n> assignable, which in my mind translates directly as implicitly castable.\n> Additionally, your stance breaks the following SQL compatible and probably\n> quite common code:\n> \n> create table test ( a int extract(year from current_date) );\n> \n> We aren't abandoning \"all that work\". Plenty of casts should not be\n> implicit because they are structurally guaranteed to lose information. But\n> for casts between numerical types it depends on the content at run time.\n> Therefore the SQL standard says that the check needs to be at run time.\n> We do that already, so I don't see a reason to be more strict here.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 15 Sep 2002 21:22:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: "
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us] \n> Sent: 12 September 2002 06:27\n> To: Justin Clift\n> Cc: PostgreSQL Hackers Mailing List\n> Subject: Re: [HACKERS] PGXLOG variable worthwhile?\n> \n> Also, I have heard symlinks are available in native Windows \n> but the interface to them isn't clearly visible. Can someone \n> clarify that?\n\nWell there are 'shortcuts' but I wouldn't want to trust my xlog\ndirectory to one.\n\nEven if I did, iirc, unless you are using the shell api, they just\nappear to be regular files anyway (for example, in Cygwin vi, I can edit\na shortcut to a directory).\n\nRegards, Dave.\n",
"msg_date": "Thu, 12 Sep 2002 08:32:28 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Dave Page wrote:\n> \n>>-----Original Message-----\n>>From: Bruce Momjian [mailto:pgman@candle.pha.pa.us] \n>>\n>>Also, I have heard symlinks are available in native Windows \n>>but the interface to them isn't clearly visible. Can someone \n>>clarify that?\n> \n> \n> Well there are 'shortcuts' but I wouldn't want to trust my xlog\n> directory to one.\n\nThese are Shell OLE links. As Dave points out, it requires the \nshell to interpret the shortcut.\n\n> \n> Even if I did, iirc, unless you are using the shell api, they just\n> appear to be regular files anyway (for example, in Cygwin vi, I can edit\n> a shortcut to a directory).\n> \n> Regards, Dave.\n\nIn Windows 2000 and Windows XP with an NTFS filesystem, \nMicrosoft has added Reparse Points, which allow for the \nimplementation of symbolic links for directories. Microsoft \ncalls them \"Junctions\". I *believe* the function used for \ncreating reparse points is DeviceIoControl() with the \nFSCTL_SET_REPARSE_POINT I/O control code. I don't have quick \naccess to 2K or XP, but it is clearly not supported by Win32 on \n95/98/ME.\n\nHere's a link discussing the features of NTFS5 and Reparse Points:\n\nhttp://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnw2kmag00/html/NTFSPart1.asp\n\nMike Mascari\nmascarm@mascari.com\n\n",
"msg_date": "Thu, 12 Sep 2002 05:34:11 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "Mike Mascari wrote:\n<snip>\n> In Windows 2000 and Windows XP with an NTFS filesystem,\n> Microsoft has added Reparse Points, which allow for the\n> implementation of symbolic links for directories. Microsoft\n> calls them \"Junctions\". I *believe* the function used for\n> creating reparse points is DeviceIoControl() with the\n> FSCTL_SET_REPARSE_POINT I/O control code. I don't have quick\n> access to 2K or XP, but it is clearly not supported by Win32 on\n> 95/98/ME.\n> \n> Here's a link discussing the features of NTFS5 and Reparse Points:\n> \n> http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnw2kmag00/html/NTFSPart1.asp\n\nThat's really useful info. Reparse points under Win2k (mount points to\nthe rest of us) are definitely something to try out in the future then. \n:)\n\nSeems like the NT4 users are left out in the cold though until we add\nsome kind of ability for PostgreSQL to not look at the filesystem for\ninfo about where to put the xlog files.\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> Mike Mascari\n> mascarm@mascari.com\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Thu, 12 Sep 2002 22:11:16 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "On Thu, 12 Sep 2002, Justin Clift wrote:\n\n> Mike Mascari wrote:\n> <snip>\n> > In Windows 2000 and Windows XP with an NTFS filesystem,\n> > Microsoft has added Reparse Points, which allow for the\n> > implementation of symbolic links for directories. Microsoft\n> > calls them \"Junctions\". I *believe* the function used for\n> > creating reparse points is DeviceIoControl() with the\n> > FSCTL_SET_REPARSE_POINT I/O control code. I don't have quick\n> > access to 2K or XP, but it is clearly not supported by Win32 on\n> > 95/98/ME.\n> > \n> > Here's a link discussing the features of NTFS5 and Reparse Points:\n> > \n> > http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnw2kmag00/html/NTFSPart1.asp\n> \n> That's really useful info. Reparse points under Win2k (mount points to\n> the rest of us) are definitely something to try out in the future then. \n> :)\n> \n> Seems like the NT4 users are left out in the cold though until we add\n> some kind of ability for PostgreSQL to not look at the filesystem for\n> info about where to put the xlog files.\n\nThis isn't true. With the resource kit, you get the gnu utils, and ln \nworks a charm under NT4 with ntfs. And not just for directories, but \nfiles as well. Unless Microsoft somehow removed that functionality in the \nintervening years since I've used NT. (wouldn't put it past them, but I \ndoubt they have.)\n\n",
"msg_date": "Thu, 12 Sep 2002 08:58:26 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "\"scott.marlowe\" wrote:\n<snip>\n> > Seems like the NT4 users are left out in the cold though until we add\n> > some kind of ability for PostgreSQL to not look at the filesystem for\n> > info about where to put the xlog files.\n> \n> This isn't true. With the resource kit, you get the gnu utils, and ln\n> works a charm under NT4 with ntfs. And not just for directories, but\n> files as well. Unless Microsoft somehow removed that functionality in the\n> intervening years since I've used NT. (wouldn't put it past them, but I\n> doubt they have.)\n\nThe reference point that I'm working from is this:\n\n - Am testing out the third beta of the Native PostgreSQL port for\nWindows, on NT4 SP6 at present.\n - Have an internal RAID array of Seagate Cheetah 10kRPM drives. When\ninstalling the PGDATA directory on one drive it gives a certain kind of\nperformance, and I'm interested in testing the performance of the Native\nPostgreSQL port for Windows with the xlog directory being located on\nanother drive.\n - Have tried doing normal shortcuts, and have also tried using the\ncygwin \"ln\" command to create the appropriate soft link. Both\napproaches create a shortcut object of the correct name pointing to the\ncorrect place on the new drive, but the only thing that appears to\nfollow this shortcut is when I click on them using Windows Explorer. 
\nThe Native PostgreSQL port for Windows doesn't, and neither do a few\nother applications I tested.\n\nWould it be correct to say that the 'ln' command in the MS Resource Kit\ncreates this kind of shortcut too, as the Reparse Points feature doesn't\nseem to be possible under NT4?\n\nCan only think of two real solutions at present, one being for us to add\na PGXLOG environment variable or similar ability (GUC parameter\nperhaps?), and the other would be for the Native PostgreSQL for Windows\nport to follow these shortcuts.\n\nNot sure if any of these is all that easy, or maybe there is another solution\nthat would work (apart from ignoring the problem).\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Fri, 13 Sep 2002 01:32:18 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "On Fri, 13 Sep 2002, Justin Clift wrote:\n\n> \"scott.marlowe\" wrote:\n> <snip>\n> > > Seems like the NT4 users are left out in the cold though until we add\n> > > some kind of ability for PostgreSQL to not look at the filesystem for\n> > > info about where to put the xlog files.\n> > \n> > This isn't true. With the resource kit, you get the gnu utils, and ln\n> > works a charm under NT4 with ntfs. And not just for directories, but\n> > files as well. Unless Microsoft somehow removed that functionality in the\n> > intervening years since I've used NT. (wouldn't put it past them, but I\n> > doubt they have.)\n> \n> The reference point that I'm working from is this:\n> \n> - Am testing out the third beta of the Native PostgreSQL port for\n> Windows, on NT4 SP6 at present.\n> - Have an internal RAID array of Seagate Cheetah 10kRPM drives. When\n> installing the PGDATA directory on one drive it gives a certain kind of\n> performance, and I'm interested in testing the performance of the Native\n> PostgreSQL port for Windows with the xlog directory being located on\n> another drive.\n> - Have tried doing normal shortcuts, and have also tried using the\n> cygwin \"ln\" command to create the appropriate soft link. Both\n> approaches create a shortcut object of the correct name pointing to the\n> correct place on the new drive, but the only thing that appears to\n> follow this shortcut is when I click on them using Windows Explorer. \n> The Native PostgreSQL port for Windows doesn't, and neither do a few\n> other applications I tested.\n> \n> Would it be correct to say that the 'ln' command in the MS Resource Kit\n> creates this kind of shortcut too, as the Reparse Points feature doesn't\n> seem to be possible under NT4?\n\nI wouldn't assume that. It's been years since I tested it, but back then, \nthe command line and all program I used could see the link created by ln \nthat came with the resource kit. 
They were distinctly different from the \nshortcut type of links, in that they seem transparent like short cuts in \nunix generally are.\n\nDo you have the resource kit or the gnu utils from it?\n\nLooking at this url:\n\nhttp://unxutils.sourceforge.net/\n\nthe part for ln.exe says it makes real hard links on ntfs (which means \nthey would be on the same drive.) So I'm not sure if ntfs supports soft \nlinks across volumes transparently or not now.\n\n",
"msg_date": "Thu, 12 Sep 2002 09:55:49 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "scott.marlowe wrote:\n> > Seems like the NT4 users are left out in the cold though until we add\n> > some kind of ability for PostgreSQL to not look at the filesystem for\n> > info about where to put the xlog files.\n> \n> This isn't true. With the resource kit, you get the gnu utils, and ln \n> works a charm under NT4 with ntfs. And not just for directories, but \n> files as well. Unless Microsoft somehow removed that functionality in the \n> intervening years since I've used NT. (wouldn't put it past them, but I \n> doubt they have.)\n\nYes, this is what I remember, that Cygwin had symlinks, and at that time\nthat was the only Win32 OS we supported. Now, with native Win32 port\ncoming, we have to figure out what is available.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 12 Sep 2002 12:04:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "scott.marlowe wrote:\n> On Fri, 13 Sep 2002, Justin Clift wrote:\n >\n>>Would it be correct to say that the 'ln' command in the MS Resource Kit\n>>creates this kind of shortcut too, as the Reparse Points feature doesn't\n>>seem to be possible under NT4?\n> \n> \n> I wouldn't assume that. It's been years since I tested it, but back then, \n> the command line and all program I used could see the link created by ln \n> that came with the resource kit. They were distinctly different from the \n> shortcut type of links, in that they seems transparent like short cuts in \n> unix generally are.\n> \n> Do you have the resource kit or the gnu utils from it?\n\nThe situation appears to be this:\n\n1. Soft links are available on NTFS 5 (2K/XP) as Reparse Points \nvia the DeviceIoControl() function for any application using the \nstandard C library routines.\n\n2. Soft links are available on any filesystem under \n95/98/ME/NT4/2K/XP as OLE streams (.lnk files) for Shell-aware \napplications.\n\n3. Hard links are available on NTFS 5 (2K/XP) via the \nCreateHardLink() API.\n\nSee:\n\nhttp://msdn.microsoft.com/library/default.asp?url=/library/en-us/fileio/base/createhardlink.asp\n\n4. Hard links are available on NTFS (NT3.1/NT4) via the \nBackupWrite() API by writing a special stream to the NTFS.\n\nExample:\n\nhttp://www.mvps.org/win32/ntfs/lnw.cpp\n\nThe cygwin implementation of link():\n\nhttp://sources.redhat.com/cgi-bin/cvsweb.cgi/src/winsup/cygwin/syscalls.cc?rev=1.149.2.23&content-type=text/x-cvsweb-markup&cvsroot=src\n\n1. Will use CreateHardLink() if on 2K/XP\n2. Will try to use the BackupWrite() method\n3. Failing #2 will just copy the file\n\nSee how fun Microsoft makes things?\n\nMike Mascari\nmascarm@mascari.com\n\n",
"msg_date": "Thu, 12 Sep 2002 13:00:55 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
},
{
"msg_contents": "I wrote:\n> scott.marlowe wrote:\n >>\n>> I wouldn't assume that. It's been years since I tested it, but back \n>> then, the command line and all program I used could see the link \n>> created by ln that came with the resource kit. They were distinctly \n>> different from the shortcut type of links, in that they seems \n>> transparent like short cuts in unix generally are.\n>>\n>> Do you have the resource kit or the gnu utils from it?\n> \n> \n> The situation appears to be this:\n> \n> 1. Soft links are available on NTFS 5 (2K/XP) as Reparse Points via the \n> DeviceIoControl() function for any application using the standard C \n> library routines.\n> \n> 2. Soft links are available on any filesystem under 95/98/ME/NT4/2K/XP \n> as OLE streams (.lnk files) for Shell-aware applications.\n> \n> 3. Hard links are available on NTFS 5 (2K/XP) via the CreateHardLink() API.\n\n<snip>\n\n> 4. Hard links are available on NTFS (NT3.1/NT4) via the BackupWrite() \n> API by writing a special stream to the NTFS.\n\nI also believe (I could be wrong) that for directories, the only \ntwo methods of links are the Soft link methods above. So PGXLOG \ncannot use soft links on a non-XP/2K machine unless it is \n\"Shell-Aware\". For example, in a cygwin bash command window:\n\nmkdir dir1\nln dir1 dir2 <- Error using Cygwin implementation\nln -s dir1 dir2 <- Creates a Shell short-cut (NT4)\necho \"Hello\" > dir1/test.txt\ncat dir2/test.txt\n\"Hello\" <- Cygwin's cat(bash?) is shell short-cut aware\n\nNow, in a Windows NT command prompt:\n\nnotepad dir2\\test.txt <- Notepad can't find file\nnotepad dir2.lnk <- Displays link contents\n\nThat means for a native port with a different PGXLOG directory \nrunning on NT4, the only choice *using links* is to make the \nnative port shell short-cut aware.\n\nI could be wrong but I don't think so.\n\nMike Mascari\nmascarm@mascari.com\n\n",
"msg_date": "Thu, 12 Sep 2002 13:29:20 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: PGXLOG variable worthwhile?"
}
] |
[
{
"msg_contents": "I've come upon a misbehaviour of drop column, where drop column\nunconditionally drops inherited column from child tables.\n\nWhat it should do is to check if the same column is not inherited from\nother parents and drop it only when it is not\n\nHere is the test case:\n\n\nhannu=# create table p1(id int, name text);\nCREATE TABLE\nhannu=# create table p2(id2 int, name text);\nCREATE TABLE\nhannu=# create table c1(age int) inherits(p1,p2);\nNOTICE: CREATE TABLE: merging multiple inherited definitions of\nattribute \"name\"\nCREATE TABLE\nhannu=# \\d c1 \n Table \"public.c1\"\n Column | Type | Modifiers \n--------+---------+-----------\n id | integer | \n name | text | \n id2 | integer | \n age | integer | \n\nhannu=# alter table p1 drop column name;\nALTER TABLE\nhannu=# \\d c1\n Table \"public.c1\"\n Column | Type | Modifiers \n--------+---------+-----------\n id | integer | \n id2 | integer | \n age | integer | \n\n\nThe column \"c1.name\" should survive the drop from p1, as it is also\ninherited from p2. \n\n--------------------\nHannu\n\n",
"msg_date": "12 Sep 2002 10:20:47 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> I've come upon a misbehaviour of drop column, where drop column\n> unconditionally drops inherited column from child tables.\n> What it should do is to check if the same column is not inherited from\n> other parents and drop it only when it is not\n\nHm. Seems like attisinherited should have been a count, not a boolean.\n\nIs anyone sufficiently excited about this issue to force an initdb to\nfix it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Sep 2002 10:14:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance "
},
{
"msg_contents": "Tom Lane dijo: \n\n> Hannu Krosing <hannu@tm.ee> writes:\n> > I've come upon a misbehaviour of drop column, where drop column\n> > unconditionally drops inherited column from child tables.\n> > What it should do is to check if the same column is not inherited from\n> > other parents and drop it only when it is not\n> \n> Hm. Seems like attisinherited should have been a count, not a boolean.\n\nI'll try to make a fix and submit.\n\n> Is anyone sufficiently excited about this issue to force an initdb to\n> fix it?\n\nIf people thinks it's important, the fix can be integrated. If not, it\ncan wait until 7.4.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Aprende a avergonzarte mas ante ti que ante los demas\" (Democrito)\n\n",
"msg_date": "Thu, 12 Sep 2002 10:41:49 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance "
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n>> Hm. Seems like attisinherited should have been a count, not a boolean.\n>> Is anyone sufficiently excited about this issue to force an initdb to\n>> fix it?\n\n> The count approach seems definitely the right way, but a check (possibly\n> a slow one) can be probably done without initdb.\n\nSlow, complicated to code, and deadlock-prone (since you'd have to\nacquire locks on the other parent tables). My feeling is we fix this\nwith a counted attisinherited field, or don't fix at all.\n\nWe can certainly do the proper fix in 7.4; do we consider this bug\nimportant enough to do an initdb for 7.3beta2? I don't have a strong\nfeeling either way about that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Sep 2002 10:41:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance "
},
{
"msg_contents": "On Thu, 2002-09-12 at 16:14, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > I've come upon a misbehaviour of drop column, where drop column\n> > unconditionally drops inherited column from child tables.\n> > What it should do is to check if the same column is not inherited from\n> > other parents and drop it only when it is not\n> \n> Hm. Seems like attisinherited should have been a count, not a boolean.\n\neither that, or some check at drop column time.\n \n> Is anyone sufficiently excited about this issue to force an initdb to\n> fix it?\n\nThe count approach seems definitely the right way, but a check (possibly\na slow one) can be probably done without initdb.\n\nThe other sad thing about the current behaviour is that in addition to\nbeing wrong it also breaks dump/reload - after dump/reload the initially\ndropped column is back in c1.\n\n-------------\nHannu\n\n",
"msg_date": "12 Sep 2002 17:23:41 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "> > The count approach seems definitely the right way, but a check (possibly\n> > a slow one) can be probably done without initdb.\n>\n> We can certainly do the proper fix in 7.4; do we consider this bug\n> important enough to do an initdb for 7.3beta2? I don't have a strong\n> feeling either way about that.\n\nI think we are too scared of doing initdb during beta... \n\nInitdb during beta should not be evaluated on a per bug basis, but keep a \nlist of all things that could be fixed and judge if the total of all the \nfixes is worth one initdb. Right now off the top of my head I can think of \nthe split function and this inherited change, are there more?\n\nmy two cents...\n",
"msg_date": "Thu, 12 Sep 2002 12:09:34 -0400",
"msg_from": "\"Matthew T. OConnor\" <matthew@zeut.net>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "On Thu, 12 Sep 2002, Matthew T. OConnor wrote:\n\n> > > The count approach seems definitely the right way, but a check (possibly\n> > > a slow one) can be probably done without initdb.\n> >\n> > We can certainly do the proper fix in 7.4; do we consider this bug\n> > important enough to do an initdb for 7.3beta2? I don't have a strong\n> > feeling either way about that.\n> \n> I think we are too scared of doing initdb during beta... \n> \n> Initdb during beta should not be evaultated on a per bug basis, but keep a \n> list of all things that could be fixed and judge if the total of all the \n> fixes is worth one initdb. Right now off the top of my head I can think of \n> the split function and this inherited change, are there more?\n> \n> my two cents...\n\nAgreed.\n\nActually, an argument could likely be made that changes that require \ninitdb should be done as early as possible since the later the change the \nmore people there will be to test the change, and there will be fewer \npeople who actually have to initdb since a lot of folks don't test beta \nreleases until the 3rd or 4th beta.\n\n",
"msg_date": "Thu, 12 Sep 2002 10:23:53 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "On Thu, 12 Sep 2002, scott.marlowe wrote:\n\n> Agreed.\n> \n> Actually, an argument could likely be made that changes that require \n> initdb should be done as early as possible since the later the change the \n> more people there will be to test the change, and there will be fewer \n> people who actually have to initdb since a lot of folks don't test beta \n> releases until the 3rd or 4th beta.\n\nMy mental dyslexia strikes again, that should be:\n\n... since the EARLIER the change the more people there will be to test the \nchange, ...\n\nsheesh. Sorry...\n\n",
"msg_date": "Thu, 12 Sep 2002 11:07:10 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "Tom Lane dijo: \n\n> Hannu Krosing <hannu@tm.ee> writes:\n> \n> > The count approach seems definitely the right way, but a check (possibly\n> > a slow one) can be probably done without initdb.\n> \n> Slow, complicated to code, and deadlock-prone (since you'd have to\n> acquire locks on the other parent tables). My feeling is we fix this\n> with a counted attisinherited field, or don't fix at all.\n\nAll right, I now have all the catalog changes in place; this is the easy\npart (is an int2 count enough?).\n\nBut when actually dropping a column, the recursion cannot be done the\nway it's done now, fetching the whole inheritor tree in one pass,\nbecause there's no way to distinguish the direct ones that have the\nattisinherited count greater than 1 from deeper ones; it has to be done\nstep by step. If this is not clear, imagine the following situation:\n\ncreate table p1(id int, name text);\ncreate table p2(id2 int, name text);\ncreate table c1(age int) inherits(p1,p2);\ncreate table gc1() inherits (c1);\n\np1 and p2 have name->attisinherited=0, while c1 has\nname->attisinherited=2. But gc1->name->attisinherited=1. If I just\nrecurse the tree the way it's done now, I will happily drop \"name\" from\ngc1 while keeping it on c1. So I have to switch from\nfind_all_inheritors() to find_inheritance_children() and keep recursing\nuntil there are no more inheritors (I still have to check if there are\nother gotchas with this approach, or optimizations to be done). I am\nalready midway with this, but wanted to let you know in case the patch\nis rejected.\n\nIs this Ok? I see this is getting away from the \"trivial fix\" camp. \n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"The Gord often wonders why people threaten never to come back after they've\nbeen told never to return\" (www.actsofgord.com)\n\n",
"msg_date": "Thu, 12 Sep 2002 21:14:14 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance "
},
{
"msg_contents": "Alvaro Herrera dijo: \n\n> All right, I now have all the catalog changes on place; this is the easy\n> part (is an int2 count enough?).\n> \n> But when actually dropping a column, the recursion cannot be done the\n> way it's done now, fetching the whole inheritor tree in one pass,\n> because there's no way to distinguish the direct ones that have the\n> attisinherited count greater than 1 from deeper ones; it has to be done\n> step by step.\n\nDone. I attach the patch. It's huge because it needs to touch\npg_attribute.h, but it is relatively simple. This passes the regression\ntests and fixes the bug reported by Hannu.\n\nPlease review and apply if OK. I didn't touch catversion.h.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Cuando mañana llegue pelearemos segun lo que mañana exija\" (Mowgli)",
"msg_date": "Thu, 12 Sep 2002 22:13:15 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DROP COLUMN misbehaviour with multiple inheritance "
},
{
"msg_contents": "En 12 Sep 2002 17:23:41 +0200\nHannu Krosing <hannu@tm.ee> escribió:\n\n> The other sad thing about the current behaviour is that in addition to\n> being wrong it also breaks dump/reload - after dump/reload the initially\n> dropped column is back in c1.\n\nI hadn't read this paragraph before. But I don't understand what\nyou're saying. If I drop the column from p1 but not from p2, how is it\nexpected that the column doesn't show in c1, that inherits both? Truth\nis that the column shouldn't have disappeared in the first place, so it\nisn't a mistake that shows up in the dump.\n\nSure, databases before and after the dump are different, but the one\nbefore dump is broken. I don't have the original pgsql version (without\nthe patch) compiled right now, but I think that if you were to select\nfrom p2, the backend would crash (or at least elog(ERROR)).\n\nAnyway, the patch I just submitted should fix this bug. Please test it\nand thanks for the report.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"La conclusion que podemos sacar de esos estudios es que\nno podemos sacar ninguna conclusion de ellos\" (Tanenbaum)\n",
"msg_date": "Thu, 12 Sep 2002 22:52:40 -0400",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> If this is not clear, imagine the following situation:\n\n> create table p1(id int, name text);\n> create table p2(id2 int, name text);\n> create table c1(age int) inherits(p1,p2);\n> create table gc1() inherits (c1);\n\n> p1 and p2 have name->attisinherited=0, while c1 has\n> name->attisinherited=2. But gc1->name->attisinherited=1.\n\nIck. I hadn't thought that far ahead.\n\nWe could probably cause gc1->name->attisinherited to be 2 in this\nscenario; does that help?\n\nActually, there might not be a problem. c1.name can't be deleted until\nboth p1.name and p2.name go away, and at that point we want both c1.name\nand gc1.name to go away. So as long as we don't *recursively* decrement\nthe inherits count when c1.name.attisinherited hasn't reached 0, this\nmight be okay. But it needs thought.\n\n> I see this is getting away from the \"trivial fix\" camp.\n\nYup. Let's step back and think carefully before we plunge into the\ncoding. What goes away when, and how do we define the inherits-count\nto make it work right?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Sep 2002 23:40:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance "
},
{
"msg_contents": "En Thu, 12 Sep 2002 23:40:21 -0400\nTom Lane <tgl@sss.pgh.pa.us> escribió:\n\n> Alvaro Herrera <alvherre@atentus.com> writes:\n> > If this is not clear, imagine the following situation:\n> \n> > create table p1(id int, name text);\n> > create table p2(id2 int, name text);\n> > create table c1(age int) inherits(p1,p2);\n> > create table gc1() inherits (c1);\n> \n> > p1 and p2 have name->attisinherited=0, while c1 has\n> > name->attisinherited=2. But gc1->name->attisinherited=1.\n> \n> We could probably cause gc1->name->attisinherited to be 2 in this\n> scenario; does that help?\n\nI'm trying to imagine a case where this is harmful, but cannot find any.\nIt would have to be proven that there is none; IMHO this is a little\ndeviating from the \"reality\".\n\n\n> Actually, there might not be a problem. c1.name can't be deleted until\n> both p1.name and p2.name go away, and at that point we want both c1.name\n> and gc1.name to go away. So as long as we don't *recursively* decrement\n> the inherits count when c1.name.attisinherited hasn't reached 0, this\n> might be okay. But it needs thought.\n\nThis is what I implemented on the patch I posted, I think. The idea is\nthat attisinherited is decremented non-recursively, i.e. only in direct\ninheritors; and when it reaches zero the column is dropped, and its\ninheritors have it decremented also.\n\nIn the cases I've tried this works, and it seems to me that it is\ncorrect; however, I haven't proven it is. Multiple inheritance and\nmultiple generations is weird.\n\nIt just occurred to me that maybe I overlooked the\nALTER TABLE ONLY ... DROP COLUMN case, but I'm now going to bed. I'll\nthink about this case tomorrow.\n\n> > I see this is getting away from the \"trivial fix\" camp.\n> \n> Yup. Let's step back and think carefully before we plunge into the\n> coding. What goes away when, and how do we define the inherits-count\n> to make it work right?\n\nHuh, I already did. Please think about my solution.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Para tener mas hay que desear menos\"\n",
"msg_date": "Fri, 13 Sep 2002 00:56:19 -0400",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Actually, there might not be a problem. c1.name can't be deleted until\n>> both p1.name and p2.name go away, and at that point we want both c1.name\n>> and gc1.name to go away. So as long as we don't *recursively* decrement\n>> the inherits count when c1.name.attisinherited hasn't reached 0, this\n>> might be okay. But it needs thought.\n\n> This is what I implemented on the patch I posted, I think. The idea is\n> that attisinherited is decremented non-recursively, i.e. only in direct\n> inheritors; and when it reaches zero the column is dropped, and its\n> inheritors have it decremented also.\n\nYeah; after marginally more thought, I'm thinking that the correct\ndefinition of attisinherited (need new name BTW) is \"number of *direct*\nancestors this table inherits this column from\". I think you are\ndescribing the same idea.\n\nGiven the obvious algorithms for updating and using such a value,\ndoes anyone see a flaw in the behavior?\n\nOne corner case is that I think we currently allow\n\n\tcreate table p (f1 int);\n\tcreate table c (f1 int) inherits(p);\n\nwhich is useless in the given example but is not useless if c\nprovides a default or constraints for column f1. ISTM f1 should\nnot go away in c if we drop it in p, in this case. Maybe we want\nnot an \"inherits count\" but a \"total sources of definitions count\",\nwhich would include 1 for each ancestral table plus 1 if declared\nlocally. When it drops to 0, okay to delete the column.\n\n> however, I haven't proven it is. Multiple inheritance and\n> multiple generations is weird.\n\nWhat he said... I'm way too tired to think this through tonight...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Sep 2002 01:41:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance "
},
{
"msg_contents": "Tom Lane wrote: \n\n> One corner case is that I think we currently allow\n> \n> \tcreate table p (f1 int);\n> \tcreate table c (f1 int) inherits(p);\n\nIn this case, c.f1.attisinherited count is 2; thus when I drop f1 from\np, it is not dropped from c.\n\nDo you have some suggestion on what the name should be? Clearly\nattisinherited is not appropriate. attinhcount maybe?\n\nThe patch submitted does what you describe. I'm leaving tomorrow and\nwon't be back until next weekend, so please do the name change yourself\nif the patch is to be applied.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\nVoy a acabar con todos los humanos / con los humanos yo acabaré\nvoy a acabar con todos / con todos los humanos acabaré (Bender)\n\n",
"msg_date": "Fri, 13 Sep 2002 20:56:32 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance "
},
{
"msg_contents": " \nI am keeping this patch so we have it to apply when we decide to force an\ninitdb:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\n---------------------------------------------------------------------------\n\n\nAlvaro Herrera wrote:\n> Alvaro Herrera wrote: \n> \n> > All right, I now have all the catalog changes on place; this is the easy\n> > part (is an int2 count enough?).\n> > \n> > But when actually dropping a column, the recursion cannot be done the\n> > way it's done now, fetching the whole inheritor tree in one pass,\n> > because there's no way to distinguish the direct ones that have the\n> > attisinherited count greater than 1 from deeper ones; it has to be done\n> > step by step.\n> \n> Done. I attach the patch. It's huge because it needs to touch\n> pg_attribute.h, but it is relatively simple. This passes the regression\n> tests and fixes the bug reported by Hannu.\n> \n> Please review and apply if OK. I didn't touch catversion.h.\n> \n> -- \n> Alvaro Herrera (<alvherre[a]atentus.com>)\n> \"Cuando mañana llegue pelearemos segun lo que mañana exija\" (Mowgli)\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 17 Sep 2002 23:15:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "[ back to thinking about this patch ]\n\nAlvaro Herrera <alvherre@atentus.com> writes:\n> Tom Lane dijo: \n>> One corner case is that I think we currently allow\n>> \n>> create table p (f1 int);\n>> create table c (f1 int) inherits(p);\n\n> In this case, c.f1.attisinherited count is 2; thus when I drop f1 from\n> p, it is not dropped from c.\n\nThat seems right, but the problem I have with it is that the resulting\nstate of c.f1 is attisinherited = 1. This means that you cannot drop\nc.f1. It seems to me that we should have this behavior:\n\ncreate table p (f1 int);\ncreate table c (f1 int not null) inherits(p);\n\ndrop column c.f1;\n-- should be rejected since c.f1 is inherited\ndrop column p.f1;\n-- c.f1 is still there, but no longer inherited\ndrop column c.f1;\n-- should succeed; but will fail with patch as given\n\nas compared to\n\ncreate table p (f1 int);\ncreate table c () inherits(p);\n\ndrop column c.f1;\n-- should be rejected since c.f1 is inherited\ndrop column p.f1;\n-- c.f1 is dropped now, since there is no local definition for it\n\nAnd if you aren't confused yet, what about non-recursive drops of p.f1\n(ie, alter table ONLY p drop column f1)? This case seems clear:\n\ncreate table p (f1 int);\ncreate table c () inherits(p);\n\ndrop column c.f1;\n-- should be rejected since c.f1 is inherited\ndrop ONLY column p.f1;\n-- c.f1 is NOT dropped, but must now be considered non-inherited\ndrop column c.f1;\n-- should succeed\n\nAnd then I think we should say\n\ncreate table p (f1 int);\ncreate table c (f1 int not null) inherits(p);\n\ndrop column c.f1;\n-- should be rejected since c.f1 is inherited\ndrop ONLY column p.f1;\n-- c.f1 is still there, but no longer inherited\ndrop column c.f1;\n-- should succeed\n\nI am not sure how to make all four of these cases work. We might need\ntwo fields :-( ... a \"locally defined\" boolean and a \"number of times\ninherited\" counter. 
This seems like overkill though.\n\nIf we don't have the \"locally defined\" boolean then I think we have to\nmake the first case work like so:\n\ncreate table p (f1 int);\ncreate table c (f1 int not null) inherits(p);\n\ndrop column p.f1;\n-- c.f1 GOES AWAY, because its inherit count went to zero\n\nIs this reasonable behavior? I'm not sure. You could probably argue\nit either way.\n\nAnother interesting case is multiple inheritance.\n\ncreate table p1 (f1 int);\ncreate table p2 (f1 int);\ncreate table c () inherits(p1, p2);\n\ndrop ONLY column p1.f1;\ndrop column p2.f1;\n\nAfter this sequence, what is the state of c.f1? Is it still there?\nShould it be? If it is still there, will it be possible to get rid of\nit with \"drop column c.f1\"? What if we did DROP ONLY on *both*\nancestors?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Sep 2002 14:06:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance "
},
{
"msg_contents": "> That seems right, but the problem I have with it is that the resulting\n> state of c.f1 is attisinherited = 1. This means that you cannot drop\n> c.f1. It seems to me that we should have this behavior:\n\nHas anyone given much thought as to perhaps we could just drop multiple\ninheritance from Postgres? There are people using single inheritance - but\nhow many actually use multiple inheritance? If we dumped it we could use\nthe proposed all-child-tables-in-one-relation idea, and everything would\nbecome very easy...\n\nChris\n\n",
"msg_date": "Fri, 20 Sep 2002 10:08:00 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance "
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> > That seems right, but the problem I have with it is that the resulting\n> > state of c.f1 is attisinherited = 1. This means that you cannot drop\n> > c.f1. It seems to me that we should have this behavior:\n> \n> Has anyone given much thought as to perhaps we could just drop multiple\n> inheritance from Postgres? There are people using single inheritance - but\n> how many actually use multiple inheritance? If we dumped it we could use\n> the proposed all-child-tables-in-one-relation idea, and everything would\n> become very easy...\n\nI am for it. Multiple inheritance is more of a mess than a help. Just\nlook at C++. Everyone is moving away from multiple inheritance for that\nreason.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 19 Sep 2002 22:27:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Christopher Kings-Lynne wrote:\n>> Has anyone given much thought as to perhaps we could just drop multiple\n>> inheritance from Postgres?\n\n> I am for it. Multiple inheritance is more of a mess than a help.\n\nI'm not agin it ... but if that's the lay of the land then we have\nno need to apply a last-minute catalog reformatting to fix a\nmultiple-inheritance bug. This patch is off the \"must fix for 7.3\"\nlist, no?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Sep 2002 22:49:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Christopher Kings-Lynne wrote:\n> >> Has anyone given much thought as to perhaps we could just drop multiple\n> >> inheritance from Postgres?\n> \n> > I am for it. Multiple inheritance is more of a mess than a help.\n> \n> I'm not agin it ... but if that's the lay of the land then we have\n> no need to apply a last-minute catalog reformatting to fix a\n> multiple-inheritance bug. This patch is off the \"must fix for 7.3\"\n> list, no?\n\nI don't think a few days before beta2 is the time to be making such\ndecisions. I think we have to keep the course and open the discussion\nin 7.4. Sorry.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 19 Sep 2002 22:52:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "> > > I am for it. Multiple inheritance is more of a mess than a help.\n> >\n> > I'm not agin it ... but if that's the lay of the land then we have\n> > no need to apply a last-minute catalog reformatting to fix a\n> > multiple-inheritance bug. This patch is off the \"must fix for 7.3\"\n> > list, no?\n\nMultiple inheritance patches should go in for 7.3, since we support multiple\ninheritance in 7.3. However, I think thought should be put into removing\nmultiple inheritance in 7.4 - after a user survey perhaps. If removing\nmultiple inheritance means we can have perfect, indexable single inheritance\nthen I think it's worth it. Unless the spec calls for multiple inheritance\nof course.\n\nChris\n\n",
"msg_date": "Fri, 20 Sep 2002 10:55:10 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> I'm not agin it ... but if that's the lay of the land then we have\n>> no need to apply a last-minute catalog reformatting to fix a\n>> multiple-inheritance bug. This patch is off the \"must fix for 7.3\"\n>> list, no?\n\n> I don't think a few days before beta2 is the time to be making such\n> decisions.\n\nThe decision at hand is whether to apply a patch. You cannot say \"we're\nnot deciding now\", because that is a decision...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Sep 2002 23:15:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance "
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Christopher Kings-Lynne wrote:\n> >> Has anyone given much thought as to perhaps we could just drop\n> >> multiple inheritance from Postgres?\n> \n> > I am for it. Multiple inheritance is more of a mess than a help.\n> \n> I'm not agin it\n\nI'm abstaining.\n\n> but if that's the lay of the land then we have no need to apply a\n> last-minute catalog reformatting to fix a multiple-inheritance bug.\n\nThe catalog format has changed since beta1 anyway due to the casting\nchanges, right? (not to mention the split -> split_part change). If\nthat's the case, I don't see a good reason not to include the fix,\nprovided it's reasonably low-risk.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n",
"msg_date": "19 Sep 2002 23:29:05 -0400",
"msg_from": "Neil Conway <neilc@samurai.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> I'm not agin it ... but if that's the lay of the land then we have\n> >> no need to apply a last-minute catalog reformatting to fix a\n> >> multiple-inheritance bug. This patch is off the \"must fix for 7.3\"\n> >> list, no?\n> \n> > I don't think a few days before beta2 is the time to be making such\n> > decisions.\n> \n> The decision at hand is whether to apply a patch. You cannot say \"we're\n> not deciding now\", because that is a decision...\n\nYes. I am saying we should not assume we are going to remove multiple\ninheritance. We should apply the patch and make things as good as they\ncan be for 7.3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 19 Sep 2002 23:32:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "> > The decision at hand is whether to apply a patch. You cannot say \"we're\n> > not deciding now\", because that is a decision...\n>\n> Yes. I am saying we should not assume we are going to remove multiple\n> inheritance. We should apply the patch and make things a good as they\n> can be for 7.3.\n\nI think the patch should be applied. That way people who are using multiple\ninheritance (if there are any) can know that they have a vaguely bug free\nimplementation in 7.3 until they redo their stuff for 7.4.\n\nChris\n\n",
"msg_date": "Fri, 20 Sep 2002 11:38:00 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "Tom Lane wrote on Fri, 20.09.2002 at 04:49:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Christopher Kings-Lynne wrote:\n> >> Has anyone given much thought as to perhaps we could just drop multiple\n> >> inheritance from Postgres?\n> \n> > I am for it. Multiple inheritance is more of a mess than a help.\n> \n> I'm not agin it ... but if that's the lay of the land then we have\n> no need to apply a last-minute catalog reformatting to fix a\n> multiple-inheritance bug. \n\nWhat I'm actually envisioning for PostgreSQL inheritance is a model\nsimilar to (my understanding of) SQL99:\n\n1. Single \"data\" inheritance for SELECT/INSERT/UPDATE/DELETE, meaning\nthat the set of inherited tables any such command operates on comes from\na single-inheritance hierarchy.\n\nThe SQL99 syntax for defining tables is\n\nCREATE TABLE child (\n...\n) UNDER parent;\n\n2. This single inheritance also applies both to \"tuple-level\"\nconstraints (not null, check) and to \"relation-level\" constraints -\nunique, primary and foreign keys.\n\n3. Multiple \"interface-only\" inheritance.\n\nThe SQL99 syntax for defining tables is\n\nCREATE TABLE child (\n ...,\n LIKE othertable,\n LIKE yetanothertable\n);\n\nwhich would behave like our current multiple inheritance, only without\naffecting SELECT/INSERT/UPDATE/DELETE and \"relation-level\" constraints.\n\n4. \"Tuple-level\" constraints should still be inherited.\n\n5. Function selection for functions defined on row types should be able\nto select both from among functions defined over \"data\" inheritance\nparents and \"interface-only\" inheritance parents.\n\n6. Such selection should be dynamic (scan-time) for queries running over\ninheritance trees (SELECT without ONLY, formerly SELECT *).\n\n> This patch is off the \"must fix for 7.3\" list, no?\n\nI still think that this should be fixed in 7.3, but the inhcount\nattribute should show all tables where the column is defined, not just\ninherited. 
The default, no-inheritance case should set the column to 1.\n\n-------------\nHannu\n\n",
"msg_date": "20 Sep 2002 15:17:10 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> I still think that this should be fixed in 7.3, but the inhcount\n> attribute should show all tables where the column is defined, not just\n> inherited. The default, no-inheritance case should set the column to 1.\n\nWell, no, because then a locally defined column is indistinguishable\nfrom a singly-inherited column, breaking the cases that the original\nattisinherited patch was supposed to fix.\n\nIt doesn't fix the ONLY problem, either. Consider\n\ncreate table p1 (f1 int);\ncreate table p2 (f1 int);\ncreate table c () inherits(p1, p2);\n--c.f1 now has definition-count 2\n\ndrop ONLY column p1.f1;\n--c.f1 now has count 1?\ndrop column p2.f1;\n--c.f1 removed because count went to 0?\n\nIt might look like we could fix this by defining DROP ONLY as not\ntouching the child-table definition-counts at all; then a DROP ONLY\neffectively makes a child column look like it's locally defined\ninstead of inherited. But that trick only works once. Consider:\n\ncreate table p1 (f1 int);\ncreate table p2 (f1 int);\ncreate table c () inherits(p1, p2);\n--c.f1 now has definition-count 2\n\ndrop ONLY column p1.f1;\n--c.f1 still has count 2?\ndrop ONLY column p2.f1;\n--c.f1 still has count 2?\ndrop column c.f1\n--fails because count>1, so there is now no way to delete c.f1\n\nI think we could make all these cases work if we replaced attisinherited\nwith *two* columns, a boolean attislocal(ly defined) and a count of\n(direct) inheritances. DROP ONLY would have the effect of decrementing\nthe count and setting attislocal to true in each direct child; recursive\nDROP would decrement the count and then drop if count is 0 *and*\nattislocal is not set. At the start of a recursion, we'd allow DROP\nonly if count is 0 (and, presumably, attislocal is true, else the column\nwould not be there...).\n\nQuestion is, is fixing these cases worth this much trouble? 
I think the\ntwo-column solution is actually free in terms of storage space in\npg_attribute, because of alignment considerations. But it's still a\nlarge reworking of the existing patch, and we have other fish to fry by\nSunday.\n\nIn any case I am inclined to reject the patch as-it-stands, because it\nfixes one problem at the cost of introducing new ones.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Sep 2002 09:57:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance "
},
{
"msg_contents": "Tom Lane wrote: \n\n> I think we could make all these cases work if we replaced attisinherited\n> with *two* columns, a boolean attislocal(ly defined) and a count of\n> (direct) inheritances. DROP ONLY would have the effect of decrementing\n> the count and setting attislocal to true in each direct child; recursive\n> DROP would decrement the count and then drop if count is 0 *and*\n> attislocal is not set. At the start of a recursion, we'd allow DROP\n> only if count is 0 (and, presumably, attislocal is true, else the column\n> would not be there...).\n\nThe cases you presented are really tricky. I'll work today on the\nattislocal and attinhcount patch; I hope to have it ready later today\nfor review and inclusion before beta2.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\nOfficer Krupke, what are we to do?\nGee, officer Krupke, Krup you! (West Side Story, \"Gee, Officer Krupke\")\n\n",
"msg_date": "Fri, 20 Sep 2002 16:37:12 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance "
},
{
"msg_contents": "On Thu, 19 Sep 2002 14:06:05 -0400\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alvaro Herrera <alvherre@atentus.com> writes:\n> > Tom Lane wrote: \n> >> One corner case is that I think we currently allow\n> >> \n> >> create table p (f1 int);\n> >> create table c (f1 int) inherits(p);\n> \n> > In this case, c.f1.attisinherited count is 2; thus when I drop f1 from\n> > p, it is not dropped from c.\n> \n> That seems right, but the problem I have with it is that the resulting\n> state of c.f1 is attisinherited = 1. This means that you cannot drop\n> c.f1. It seems to me that we should have this behavior:\n\nNew patch attached. This one should answer your concerns. This is the\nidea implemented:\n\n> We might need two fields :-( ... a \"locally defined\" boolean and a\n> \"number of times inherited\" counter.\n\n\nSome discussion:\n\n> create table p (f1 int);\n> create table c (f1 int not null) inherits(p);\n> \n> drop column p.f1;\n> -- c.f1 GOES AWAY, because its inherit count went to zero\n\nIn this case, the attached code preserves f1. It's not clear whether\nthe user wants the column to stay or not, but if he is defining it\ntwice, let him drop it twice if he wants it to go away.\n\n> Another interesting case is multiple inheritance.\n> \n> create table p1 (f1 int);\n> create table p2 (f1 int);\n> create table c () inherits(p1, p2);\n> \n> drop ONLY column p1.f1;\n> drop column p2.f1;\n> \n> After this sequence, what is the state of c.f1? Is it still there?\n> Should it be? If it is still there, will it be possible to get rid of\n> it with \"drop column c.f1\"? What if we did DROP ONLY on *both*\n> ancestors?\n\nWell, in this case the column is dropped. If the last drop is ONLY, the\ncolumn will stay (regardless of what the first drop did). This one\nseems very tricky and I don't see a way to do otherwise.\n\nOther cases (such as the set of four you posted) are handled in the\n\"natural\" way you described. 
Regression tests for all those four are\nincluded, along with another case that was the start of all this.\n\nPlease review the patch. It should be current as of your commit of\n20:30 today, but I'm not sure (anoncvs delays and all -- there are\nchanges to the same files).\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Hay dos momentos en la vida de un hombre en los que no debería\nespecular: cuando puede permitírselo y cuando no puede\" (Mark Twain)",
"msg_date": "Sat, 21 Sep 2002 22:26:17 -0400",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n>> Another interesting case is multiple inheritance.\n>> \n>> create table p1 (f1 int);\n>> create table p2 (f1 int);\n>> create table c () inherits(p1, p2);\n>> \n>> drop ONLY column p1.f1;\n>> drop column p2.f1;\n>> \n>> After this sequence, what is the state of c.f1? Is it still there?\n>> Should it be?\n\n> Well, in this case the column is dropped. If the last drop is ONLY, the\n> column will stay (regardless of what the first drop did).\n\nIt seems to me that DROP ONLY should set attislocal true on each child\nfor which it decrements the inherit count, whether the count reaches\nzero or not. This would cause the behavior in the above case to be that\nc.f1 stays around after the second drop (but can be dropped with a third\ndrop of c.f1 itself). I think this is correct, since the implication of\nDROP ONLY is that child columns are being cut loose from their parent's\napron strings and now have independent existence.\n\nThis is a minor tweak to your patch, and I'll make it work that way\nunless I hear squawks...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 22 Sep 2002 12:56:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance "
},
{
"msg_contents": "Tom Lane wrote: \n\n> It seems to me that DROP ONLY should set attislocal true on each child\n> for which it decrements the inherit count, whether the count reaches\n> zero or not. This would cause the behavior in the above case to be that\n> c.f1 stays around after the second drop (but can be dropped with a third\n> drop of c.f1 itself). I think this is correct, since the implication of\n> DROP ONLY is that child columns are being cut loose from their parent's\n> apron strings and now have independent existence.\n\nYes, I think it's more consistent the way you are proposing.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Acepta los honores y aplausos y perderas tu libertad\"\n\n",
"msg_date": "Sun, 22 Sep 2002 13:08:00 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance "
},
{
"msg_contents": "\nPatch applied by Tom.\n\n---------------------------------------------------------------------------\n\nAlvaro Herrera wrote:\n> Alvaro Herrera wrote: \n> \n> > All right, I now have all the catalog changes on place; this is the easy\n> > part (is an int2 count enough?).\n> > \n> > But when actually dropping a column, the recursion cannot be done the\n> > way it's done now, fetching the whole inheritor tree in one pass,\n> > because there's no way to distinguish the direct ones that have the\n> > attisinherited count greater than 1 from deeper ones; it has to be done\n> > step by step.\n> \n> Done. I attach the patch. It's huge because it needs to touch\n> pg_attribute.h, but it is relatively simple. This passes the regression\n> tests and fixes the bug reported by Hannu.\n> \n> Please review and apply if OK. I didn't touch catversion.h.\n> \n> -- \n> Alvaro Herrera (<alvherre[a]atentus.com>)\n> \"Cuando mañana llegue pelearemos segun lo que mañana exija\" (Mowgli)\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 22 Sep 2002 21:31:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "Hannu Krosing wrote: \n\n> Tom Lane wrote on Sun, 22.09.2002 at 18:56:\n\n> > It seems to me that DROP ONLY should set attislocal true on each child\n> > for which it decrements the inherit count, whether the count reaches\n> > zero or not.\n> \n> Would it then not produce a situation, which can't be reproduced using\n> just CREATEs ? i.e. same column in both parent (p2.f1) and child (c.f1)\n> but _not_ inherited ?? \n\nNo, you cannot do that. For example,\ncreate table p1 (f1 int, f2 int);\ncreate table p2 (f1 int, f3 int);\ncreate table c () inherits (p1, p2);\n\nalter table only p1 drop column f1;\nalter table only p2 drop column f1;\n\nIn this case, f1 is kept on c, and this situation can be recreated as:\ncreate table p1 (f2 int);\ncreate table p2 (f3 int);\ncreate table c (f1 int) inherits (p1, p2);\n\nIf you drop it on only one parent it is exactly the same.\n\nThe next question is whether pg_dump knows how to do such things. The\nanswer is that it doesn't know that it must locally define f1 on c if\nyou drop the column on only one parent. Oddly enough, the following\n\ncreate table p (f1 int);\ncreate table c (f1 int not null) inherits (p);\n\nproduces the right behavior in pg_dump, but\n\ncreate table p (f1 int);\ncreate table c () inherits (p);\nalter table c alter f1 set not null;\n\nproduces exactly the same as the former. I don't know if it's right.\n\n\n> Then there would be no way to move a field from one parent table to\n> another and still have it as an inherited column in child.\n\nYou cannot add a column to a table that is inherited by another table\nthat has a column with the same name:\n\ninhtest=# alter table p1 add column f1 int;\nERROR: ALTER TABLE: column name \"f1\" already exists in table \"c\"\ninhtest=# alter table only p1 add column f1 int;\nERROR: Attribute must be added to child tables too\ninhtest=# \n\nIOW: there's no way to \"move\" a column, unless you drop it in the whole\ninheritance tree first. 
Maybe this is a bug, and adding a column that\nexists in all children (with the same name and type) should be allowed.\n\n> It also seems bogus considering when doing SELECT * FROM p2 -- How\n> should the select behave regarding c.f1 - there is a field with the same\n> name and type but not inherited . \n\nI don't understand. Suppose table c has column f1. If I select from p2\nand it has f1 also, f1 will show up. If p2 doesn't have f1, it won't:\nthe inheritance status of the attribute doesn't matter.\n\n\n> > This would cause the behavior in the above case to be that\n> > c.f1 stays around after the second drop (but can be dropped with a third\n> > drop of c.f1 itself). \n> \n> What if you have a deeper hierarchy under c - will this make you\n> traverse them all to drop f1 ?\n\nThe recursion is always done in steps one level deep. If the column is\ninherited from somewhere else in the grandchild, it will stay. If not,\nit will disappear. If you want to drop in more than one level, but not\nall of them, you will have to drop it locally on each. This seems just\nnatural, doesn't it?\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Granting software the freedom to evolve guarantees only different results,\nnot better ones.\" (Zygo Blaxell)\n\n",
"msg_date": "Mon, 23 Sep 2002 04:06:21 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
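Editorial note: the pg_dump gap Alvaro describes above has a compact statement: a child column must be written out explicitly in the dumped CREATE TABLE whenever it is locally defined, even if it is also still inherited. The toy Python sketch below models that decision rule; the dict-based rows and the `columns_to_emit` helper are invented for illustration, and only the field names `attname`, `attislocal`, and `attinhcount` come from the real pg_attribute catalog.

```python
# Toy model (not pg_dump's real code) of the decision discussed above:
# emit a child column in the dumped CREATE TABLE ... INHERITS (...)
# whenever it is locally defined (attislocal), even if attinhcount > 0.

def columns_to_emit(columns):
    """Return names of columns pg_dump should list explicitly
    in the child table's CREATE TABLE statement."""
    return [c["attname"] for c in columns if c["attislocal"]]

# State of child c after f1 was dropped with ONLY on one parent:
# f1 is now local on c *and* still inherited once from the other parent.
c_columns = [
    {"attname": "f1", "attislocal": True,  "attinhcount": 1},
    {"attname": "f3", "attislocal": False, "attinhcount": 1},  # purely inherited
]
print(columns_to_emit(c_columns))  # ['f1']
```

Without such a rule, a dump/reload would silently lose the locally-defined f1 on c, which is exactly the hazard raised in the message above.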
{
"msg_contents": "Tom Lane kirjutas P, 22.09.2002 kell 18:56:\n> Alvaro Herrera <alvherre@atentus.com> writes:\n> >> Another interesting case is multiple inheritance.\n> >> \n> >> create table p1 (f1 int);\n> >> create table p2 (f1 int);\n> >> create table c () inherits(p1, p2);\n> >> \n> >> drop ONLY column p1.f1;\n> >> drop column p2.f1;\n> >> \n> >> After this sequence, what is the state of c.f1? Is it still there?\n> >> Should it be?\n> \n> > Well, in this case the column is dropped. If the last drop is ONLY, the\n> > column will stay (regardless of what the first drop did).\n> \n> It seems to me that DROP ONLY should set attislocal true on each child\n> for which it decrements the inherit count, whether the count reaches\n> zero or not. \n\nThis would not be what I e'd expect - if c inherited f1 twice and then\none of the parents disinherits it, then it would still be inherited from\nthe other parent\n\n> This would cause the behavior in the above case to be that\n> c.f1 stays around after the second drop (but can be dropped with a third\n> drop of c.f1 itself).\n\nI'd vote for the way Alvaro describes it - keep the attislocal=false\nwhile there exist parents from which the column was inherited.\n\n> I think this is correct, since the implication of\n> DROP ONLY is that child columns are being cut loose from their parent's\n> apron strings and now have independent existence.\n\nFor me the implication is that ONLY this parent cuts loose the strings\nfrom its side, but should not mess with anything the child inherits from\nother parties.\n\n> This is a minor tweak to your patch, and I'll make it work that way\n> unless I hear squawks...\n\nI was disconnected for the weekend, I hope this is not too late to\nsquawk ;)\n\n-----------------\nHannu\n\n",
"msg_date": "23 Sep 2002 10:23:06 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "En 23 Sep 2002 10:23:06 +0200\nHannu Krosing <hannu@tm.ee> escribi�:\n\n> Tom Lane kirjutas P, 22.09.2002 kell 18:56:\n\n> > It seems to me that DROP ONLY should set attislocal true on each child\n> > for which it decrements the inherit count, whether the count reaches\n> > zero or not. \n> \n> This would not be what I e'd expect - if c inherited f1 twice and then\n> one of the parents disinherits it, then it would still be inherited from\n> the other parent\n\nThe problem with this is that two sequences of commands only differing\nin the ordering of two clauses give different result:\n\ncreate table p1 (f1 int, f2 int);\ncreate table p2 (f1 int, f2 int);\ncreate table c () inherits (p1, p2);\nalter table only p1 drop column f1;\nalter table p2 drop column f1;\n\n\n\ncreate table p1 (f1 int, f2 int);\ncreate table p2 (f1 int, f2 int);\ncreate table c () inherits (p1, p2);\nalter table p2 drop column f1;\nalter table only p1 drop column f1;\n\nThe former drops f1 from c, while the latter does not. It's\ninconsistent.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"La Primavera ha venido. Nadie sabe como ha sido\" (A. Machado)\n",
"msg_date": "Mon, 23 Sep 2002 04:30:18 -0400",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
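Editorial note: Alvaro's ordering argument above can be traced mechanically. The sketch below is a minimal Python simulation, not backend code — the `Child` class and `drop` helper are invented for illustration, with only `attinhcount`/`attislocal` taken from the thread. It applies the rule he is arguing against (ONLY decrements the child's inherit count without setting attislocal; a recursive drop removes the child column only when the count reaches zero) to both command orders:

```python
# Simulation of the rule under debate, applied to Alvaro's two orderings.

class Child:
    """Models c.f1's pg_attribute bookkeeping (hypothetical class)."""
    def __init__(self, inhcount):
        self.inhcount = inhcount      # models pg_attribute.attinhcount
        self.islocal = False          # models pg_attribute.attislocal
        self.dropped = False

def drop(col, only):
    """One parent drops the column; `only` means ALTER TABLE ONLY."""
    col.inhcount -= 1
    if only:
        return                        # ONLY never recurses into the child
    if col.inhcount == 0 and not col.islocal:
        col.dropped = True            # recursive drop removes the child column

# Order 1: ONLY on p1 first, then recursive on p2 -> c.f1 is dropped.
a = Child(inhcount=2)
drop(a, only=True)
drop(a, only=False)

# Order 2: recursive on p2 first, then ONLY on p1 -> c.f1 survives.
b = Child(inhcount=2)
drop(b, only=False)
drop(b, only=True)

print(a.dropped, b.dropped)  # True False
```

The differing final states for the same pair of commands are exactly the inconsistency Alvaro points out; note also that in order 2 the surviving column ends up with inhcount 0 and islocal still false.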
{
"msg_contents": "Tom Lane kirjutas P, 22.09.2002 kell 18:56:\n> Alvaro Herrera <alvherre@atentus.com> writes:\n> >> Another interesting case is multiple inheritance.\n> >> \n> >> create table p1 (f1 int);\n> >> create table p2 (f1 int);\n> >> create table c () inherits(p1, p2);\n> >> \n> >> drop ONLY column p1.f1;\n> >> drop column p2.f1;\n> >> \n> >> After this sequence, what is the state of c.f1? Is it still there?\n> >> Should it be?\n> \n> > Well, in this case the column is dropped. If the last drop is ONLY, the\n> > column will stay (regardless of what the first drop did).\n> \n> It seems to me that DROP ONLY should set attislocal true on each child\n> for which it decrements the inherit count, whether the count reaches\n> zero or not.\n\nWould it then not produce a situation, which can't be reproduced using\njust CREATEs ? i.e. same column in bot parent (p2.f1) and child (c.f1)\nbut _not_ inherited ?? \n\nThen there would be no way to move a field from one parent table to\nanother and still have it as an inherited column in child.\n\nIt also seems bogus considering when doing SELECT * FROM p2 -- How\nshould the select behave regarding c.f1 - there is a field with the same\nname and type but not inherited . \n\n> This would cause the behavior in the above case to be that\n> c.f1 stays around after the second drop (but can be dropped with a third\n> drop of c.f1 itself). \n\nWhat if you have a deeper hierarchy under c - will this make you\ntraverse them all to drop f1 ?\n\n> I think this is correct, since the implication of\n> DROP ONLY is that child columns are being cut loose from their parent's\n> apron strings and now have independent existence.\n\n From (this) parent's but not from (other) parents' ;)\n\nLike In real world one should only be allowed to disinherit what _he_\nowns.\n\n--------------\nHannu\n\n",
"msg_date": "23 Sep 2002 10:36:39 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "Alvaro Herrera kirjutas E, 23.09.2002 kell 10:06:\n> Hannu Krosing dijo: \n> \n> > Tom Lane kirjutas P, 22.09.2002 kell 18:56:\n> \n> > > It seems to me that DROP ONLY should set attislocal true on each child\n> > > for which it decrements the inherit count, whether the count reaches\n> > > zero or not.\n> > \n> > Would it then not produce a situation, which can't be reproduced using\n> > just CREATEs ? i.e. same column in bot parent (p2.f1) and child (c.f1)\n> > but _not_ inherited ?? \n> \n> No, you cannot do that. For example,\n> create table p1 (f1 int, f2 int);\n> create table p2 (f1 int, f3 int);\n> create table c () inherits (p1, p2);\n> \n> alter table only p1 drop column f1;\n> alter table only p2 drop column f1;\n> \n> In this case, f1 is kept on c, and this situation can be recreated as:\n> create table p1 (f2 int);\n> create table p2 (f3 int);\n> create table c (f1 int) inherits (p2, p3);\n> \n> If you drop it on only one parent it is exactly the same.\n\n\nI meant \n\ncreate table p1 (f1 int, f2 int);\ncreate table p2 (f1 int, f3 int);\ncreate table c () inherits (p1, p2);\n \nalter table only p1 drop column f1;\n\nIf you now set c.f1.attislocal = 1 as suggested by Tom , it seems like\nyou have a local p1.f1 _and_ local c.f1 , for which there is no way to\ncreate without DROP's.\n\nIf I understand the meaning of attislocal correctly, the after the\nabove, I could do ALTER TABLE c DROP COLUMN f1, which would break \nSELECT * FROM p2.\n\n> The next question is whether pg_dump knows how to do such things. The\n> answer is that it doesn't know that it must locally define f1 on c if\n> you drop the column on only one parent. 
Oddly enough, the following\n> \n> create table p (f1 int);\n> create table c (f1 int not null);\n\nDid you mean\n\ncreate table c (f1 int not null) inherits (p);\n\n?\n\n> produces the right behavior in pg_dump, but\n> \n> create table p (f1 int);\n> create table c () inherits (p);\n> alter table c alter f1 set not null;\n> \n> produces exactly the same as the former. I don't know if it's right.\n>\n> > Then there would be no way to move a field from one parent table to\n> > another and still have it as an inherited column in child.\n> \n> You cannot add a column to a table that is inherited by another table\n> that has a column with the same name:\n> \n> inhtest=# alter table p1 add column f1 int;\n> ERROR: ALTER TABLE: column name \"f1\" already exists in table \"c\"\n> inhtest=# alter table only p1 add column f1 int;\n> ERROR: Attribute must be added to child tables too\n> inhtest=# \n> \n> IOW: there's no way to \"move\" a column, unless you drop it in the whole\n> inheritance tree first. Maybe this is a bug, and adding a column that\n> exists in all childs (with the same name and type) should be allowed.\n\nIt should be symmetric to DROP behaviour.\n\nSo we should first check if there are no children with columns with the\nsame name but different type, then add it to all children where it is\nmissing and just make it inherited where it is already present.\n\n\n-----------\nHannu\n\n",
"msg_date": "23 Sep 2002 11:37:01 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "Alvaro Herrera kirjutas E, 23.09.2002 kell 10:30:\n> En 23 Sep 2002 10:23:06 +0200\n> Hannu Krosing <hannu@tm.ee> escribió:\n> \n> > Tom Lane kirjutas P, 22.09.2002 kell 18:56:\n> \n> > > It seems to me that DROP ONLY should set attislocal true on each child\n> > > for which it decrements the inherit count, whether the count reaches\n> > > zero or not. \n> > \n> > This would not be what I e'd expect - if c inherited f1 twice and then\n> > one of the parents disinherits it, then it would still be inherited from\n> > the other parent\n> \n> The problem with this is that two sequences of commands only differing\n> in the ordering of two clauses give different result:\n\nIMHO this is the correct behaviour\n\n> create table p1 (f1 int, f2 int);\n> create table p2 (f1 int, f2 int);\n> create table c () inherits (p1, p2);\n> alter table only p1 drop column f1;\n\nHere you get rid of f1 in p1 _only_, i.e you keep it in children.\n\n> alter table p2 drop column f1;\n\nAt this point c.f1 is inherited from only p2 and should be dropped\n\n> create table p1 (f1 int, f2 int);\n> create table p2 (f1 int, f2 int);\n> create table c () inherits (p1, p2);\n> alter table p2 drop column f1;\n\nHere c.f1 is still inherited from p1 and thus will not be dropped\n\n> alter table only p1 drop column f1;\n\nIf you say ONLY you _do_ mean \"don't drop from child tables\".\n\n> The former drops f1 from c, while the latter does not. It's\n> inconsistent.\n\nBut this is what _should_ happen.\n\nIt is quite unreasonable to expect that order of commands makes no\ndifference.\n\n------------\nHannu\n\n",
"msg_date": "23 Sep 2002 11:54:41 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Alvaro Herrera kirjutas E, 23.09.2002 kell 10:30:\n>> The former drops f1 from c, while the latter does not. It's\n>> inconsistent.\n\n> But this is what _should_ happen.\n\nOn what grounds do you claim that? I agree with Alvaro: it's\ninconsistent to have ONLY produce different effects depending on\nthe order in which you issue the commands.\n\n> It is quite unreasonable to expect that order of commands makes no\n> difference.\n\nWhy?\n\nI'll agree that it's not an overriding argument, but it is something\nto shoot for if we can. And I'm not seeing the argument on the other\nside.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Sep 2002 09:41:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance "
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> I meant \n\n> create table p1 (f1 int, f2 int);\n> create table p2 (f1 int, f3 int);\n> create table c () inherits (p1, p2);\n \n> alter table only p1 drop column f1;\n\n> If you now set c.f1.attislocal = 1 as suggested by Tom , it seems like\n> you have a local p1.f1 _and_ local c.f1 , for which there is no way to\n> create without DROP's.\n\nUh, no, you don't have a p1.f1 at all.\n\n> If I understand the meaning of attislocal correctly, the after the\n> above, I could do ALTER TABLE c DROP COLUMN f1, which would break \n> SELECT * FROM p2.\n\nNo you could not, because c.f1 still has attinhcount = 1 due to the\ninheritance from p2. As long as c.f1.attinhcount > 0, you won't be\nallowed to drop c.f1. attislocal does not override that.\n\n>> The next question is whether pg_dump knows how to do such things. The\n>> answer is that it doesn't know that it must locally define f1 on c if\n>> you drop the column on only one parent.\n\nThat's a good point. It could be fixed easily though (pg_dump would\njust have to take attislocal into consideration when deciding whether\nto emit a column definition in the child table).\n\n>> ... produces the right behavior in pg_dump, but\n>> \n>> create table p (f1 int);\n>> create table c () inherits (p);\n>> alter table c alter f1 set not null;\n>> \n>> produces exactly the same as the former. I don't know if it's right.\n\nI think this is fine. Having done something to the field in c (and not\nrecursively from p) means that you are attaching special new meaning\nto c.f1; I'm okay with equating this action to \"c is now locally defined\".\nMaybe the backend should make that equation too, and actively set\nattislocal in the top level when doing an ALTER COLUMN.\n\nBTW, do we prohibit ALTER DROP NOT NULL on inherited columns? 
We\nprobably should.\n\n>> You cannot add a column to a table that is inherited by another table\n>> that has a column with the same name:\n>> \n>> inhtest=# alter table p1 add column f1 int;\n>> ERROR: ALTER TABLE: column name \"f1\" already exists in table \"c\"\n>> inhtest=# alter table only p1 add column f1 int;\n>> ERROR: Attribute must be added to child tables too\n>> inhtest=# \n>> \n>> IOW: there's no way to \"move\" a column, unless you drop it in the whole\n>> inheritance tree first. Maybe this is a bug, and adding a column that\n>> exists in all childs (with the same name and type) should be allowed.\n\nYeah, this is an implementation shortcoming in ALTER ADD COLUMN: if it\nfinds an existing column of the same name in a child table, it should\ntest whether it's okay to \"merge\" the columns (same types, no conflict\nin constraints/defaults, cf CREATE's behavior); if so, it should\nincrement the child column's attinhcount instead of failing.\n\nI had noticed that yesterday, and meant to ask Bruce to put it on TODO,\nbut got distracted with other stuff.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Sep 2002 09:53:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance "
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n>> It seems to me that DROP ONLY should set attislocal true on each child\n>> for which it decrements the inherit count, whether the count reaches\n>> zero or not.\n\n> Would it then not produce a situation, which can't be reproduced using\n> just CREATEs ? i.e. same column in bot parent (p2.f1) and child (c.f1)\n> but _not_ inherited ?? \n\nNo, because the child will still have attinhcount > 0 until you drop the\nlast matching parent column. attislocal is independent of the value of\nattinhcount (that's why we need two fields).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Sep 2002 10:01:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance "
},
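Editorial note: Tom's bookkeeping in the two messages above can be sketched the same way. In this minimal Python model (the `ChildColumn` class and its methods are hypothetical; only `attinhcount` and `attislocal` are real pg_attribute columns named in the thread), DROP ONLY on a parent both decrements the child's inherit count and marks the child column local, and a direct drop of the child column is refused while the count is nonzero:

```python
# Sketch of Tom's proposed rule: ONLY decrements attinhcount AND sets
# attislocal; attislocal never overrides a nonzero attinhcount.

class ChildColumn:
    """Models c.f1's catalog state (hypothetical class)."""
    def __init__(self, inhcount):
        self.inhcount = inhcount   # pg_attribute.attinhcount
        self.islocal = False       # pg_attribute.attislocal

    def parent_drop_only(self):
        """A parent drops this column with ALTER TABLE ONLY."""
        self.inhcount -= 1
        self.islocal = True        # child column becomes locally defined

    def can_drop_directly(self):
        # Dropping c.f1 itself is allowed only once nothing inherits it.
        return self.inhcount == 0

# create table c () inherits (p1, p2): f1 is inherited twice.
f1 = ChildColumn(inhcount=2)
f1.parent_drop_only()              # alter table only p1 drop column f1
assert not f1.can_drop_directly()  # still inherited from p2
f1.parent_drop_only()              # alter table only p2 drop column f1
print(f1.inhcount, f1.islocal, f1.can_drop_directly())  # 0 True True
```

Under this rule c.f1 survives both ONLY drops and needs a third, explicit drop on c itself, which is exactly the behavior Tom describes.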
{
"msg_contents": "On Mon, 2002-09-23 at 18:41, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > Alvaro Herrera kirjutas E, 23.09.2002 kell 10:30:\n> >> The former drops f1 from c, while the latter does not. It's\n> >> inconsistent.\n> \n> > But this is what _should_ happen.\n> \n> On what grounds do you claim that? I agree with Alvaro: it's\n> inconsistent to have ONLY produce different effects depending on\n> the order in which you issue the commands.\n\nSorry it took some time thin down my thoughts ;)\n\nAs the three following sets of commands ( should ) yield exactly the\nsame database schema (as visible to user):\n\n1) --------------------------------\ncreate table p1 (f1 int, g1 int);\ncreate table p2 (f1 int, h1 int);\ncreate table c () inherits(p1, p2);\ndrop column p2.f1; -- this DROP is in fact implicitly ONLY\n2) --------------------------------\ncreate table p1 (f1 int, g1 int);\ncreate table p2 (f1 int, h1 int);\ncreate table c () inherits(p1, p2);\ndrop only column p2.f1;\n3) --------------------------------\ncreate table p1 (f1 int, g1 int);\ncreate table p2 (h1 int);\ncreate table c () inherits(p1, p2);\n-----------------------------------\n\nFor this schema, no matter how we arrived at it\n\nDROP COLUMN p1.f1;\n\nshould be different from\n\nDROP ONLY COLUMN p1.f1;\n\n\n\nBut the ONLY modifier was implicit for all the _non-final_ DROPs\n\nWe could carve it out for users by _requiring_ ONLY if the column\ndropped is multiply inherited, but that would cut off the possibility\nthat it is multiply inherited in some children and not in some other,\ni.e you could not have drop column automatically remove c13.f1 but keep\nc12.f1 for the following schema.\n\ncreate table p1 (f1 int, g1 int);\ncreate table p2 (f1 int, h1 int);\ncreate table c12 () inherits(p1, p2);\ncreate table p3 (i1 int);\ncreate table c13 () inherits(p1, p3);\n\n\nSo I'd suggest we just postulate that for multiple inheritance dropping\nany columns still inherited from other 
peers will be implicitly \"DROP\nONLY\" _as far as it concerns this child_ .\n\nthen it would be clear why we have different behaviour for\n\ndrop ONLY column p1.f1;\ndrop column p2.f1;\n\nand\n\ndrop ONLY column p2.f1; <-- this ONLY is implicit for c by virtue of\n p1.f1 being still around\ndrop ONLY column p1.f1;\n\n\n> > It is quite unreasonable to expect that order of commands makes no\n> > difference.\n> \n> Why?\n> \n> I'll agree that it's not an overriding argument, but it is something\n> to shoot for if we can. And I'm not seeing the argument on the other\n> side.\n\nJust to reiterate:\n\n1. All ALTER TABLE MyTable DROP COLUMN commands assume implicit ONLY\nwhen dropping columns multiply inherited from MyTable.\n\n2. Making the final DROP implicitly NOT-ONLY in case there have been\nother DROPs of the same column from other parents would make it\nnon-deterministic whether columns from child tables will be dropped when\nusing DROP ONLY on a schema you don't know the full history for.\n\n2.a It will probably also not be pg_dump-transparent, i.e. doing\ndump/reload between the first and second drop column will get you different\nresults.\n\n-----------------\nHannu\n\n\n\n\n\n\n",
"msg_date": "25 Sep 2002 02:12:14 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "On Wed, 2002-09-25 at 04:13, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > 1) --------------------------------\n> > create table p1 (f1 int, g1 int);\n> > create table p2 (f1 int, h1 int);\n> > create table c () inherits(p1, p2);\n> > drop column p2.f1; -- this DROP is in fact implicitly ONLY\n> \n> Surely not? At least, I don't see why it should be thought of that way.\n> There's always a difference between DROP and DROP ONLY.\n\nWhat will be the difference in the user-visible schema ?\n\n------------\nHannu\n\n\n",
"msg_date": "25 Sep 2002 02:20:21 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "On Wed, 2002-09-25 at 04:33, Alvaro Herrera wrote:\n> Hannu Krosing dijo: \n> \n> > On Wed, 2002-09-25 at 04:13, Tom Lane wrote:\n> > > Hannu Krosing <hannu@tm.ee> writes:\n> > > > 1) --------------------------------\n> > > > create table p1 (f1 int, g1 int);\n> > > > create table p2 (f1 int, h1 int);\n> > > > create table c () inherits(p1, p2);\n> > > > drop column p2.f1; -- this DROP is in fact implicitly ONLY\n> > > \n> > > Surely not? At least, I don't see why it should be thought of that way.\n> > > There's always a difference between DROP and DROP ONLY.\n> > \n> > What will be the difference in the user-visible schema ?\n> \n> If I understand the issue correctly, this is the key point to this\n> discussion. The user will not see a difference in schemas, no matter\n> which way you look at it. But to the system catalogs there are two ways\n> of representing this situation: f1 being defined locally by c (and also\n> inherited from p1) or not (and only inherited from p1).\n\nOk, I think I'm beginning to see Tom's point. \n\nSo what Tom wants is that doing DROP ONLY will push the definition down\nthe hierarchy on first possibility only as a last resort.\n\nFor me it feels assymmetric (unless we will make attislocal also int\ninstead of boolean ;). This assymetric nature will manifest itself when\nwe will have ADD COLUMN which can put back the DROP ONLY COLUMN and it\nhas to determine weather to remove the COLUMN definition from the child.\n\nWhat does the current model do in the following case:\n\ncreate table p (f1 int, g1 int);\ncreate table c (f1 int) inherits(p);\ndrop column c.f1;\n\nWill it just set attisinh = 1 on c.f1 ?\n\nwhat would drop column p.f1; have done - would it have left c.f1 intact?\n\n> I think the difference is purely phylosophical, and there are no\n> arguments that can convince either party that it is wrong.\n\nThere seem to be actually 3 different possible behaviours for DROP\nCOLUMN for hierarchies.\n\n1. 
DROP ONLY - the weakest - drops the column and moves the \"original\"\n(or explicit, defined-here) definition down to all children if not\nalready found there too.\n\n2. DROP - midlevel - drops the column and its inherited definitions in\nchildren but stops at first foreign definition (defined locally or\ninherited from other parents). \n\n3. DROP FORCE - strongest (more or less what the current drop seems to do)\n- walks down the hierarchy and removes all definitions, whether\ninherited or local, and only leaves definitions inherited from other\nparents. Perhaps it should just fail in case of a multiply inherited field\n?\n\nMaybe it was too early to put the DROP ONLY functionality in ?\n\n> Anyway, there's always a set of commands that can make the user go from\n> one representation to the other. He just has to be careful and know\n> exactly which way the system will work. Whichever way it works, it\n> should be clearly and carefully documented in the ALTER TABLE reference.\n\nAmen.\n\n--------------\nHannu\n\n\n\n",
"msg_date": "25 Sep 2002 03:29:34 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> 1) --------------------------------\n> create table p1 (f1 int, g1 int);\n> create table p2 (f1 int, h1 int);\n> create table c () inherits(p1, p2);\n> drop column p2.f1; -- this DROP is in fact implicitly ONLY\n\nSurely not? At least, I don't see why it should be thought of that way.\nThere's always a difference between DROP and DROP ONLY.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 Sep 2002 19:13:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance "
},
{
"msg_contents": "Hannu Krosing dijo: \n\n> On Wed, 2002-09-25 at 04:13, Tom Lane wrote:\n> > Hannu Krosing <hannu@tm.ee> writes:\n> > > 1) --------------------------------\n> > > create table p1 (f1 int, g1 int);\n> > > create table p2 (f1 int, h1 int);\n> > > create table c () inherits(p1, p2);\n> > > drop column p2.f1; -- this DROP is in fact implicitly ONLY\n> > \n> > Surely not? At least, I don't see why it should be thought of that way.\n> > There's always a difference between DROP and DROP ONLY.\n> \n> What will be the difference in the user-visible schema ?\n\nIf I understand the issue correctly, this is the key point to this\ndiscussion. The user will not see a difference in schemas, no matter\nwhich way you look at it. But to the system catalogs there are two ways\nof representing this situation: f1 being defined locally by c (and also\ninherited from p1) or not (and only inherited from p1).\n\nI think the difference is purely phylosophical, and there are no\narguments that can convince either party that it is wrong.\n\nAnyway, there's always a set of commands that can make the user go from\none representation to the other. He just has to be careful and know\nexactly which way the system will work. Whichever way it works, it\nshould be clearly and carefully documented in the ALTER TABLE reference.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Cuando no hay humildad las personas se degradan\" (A. Christie)\n\n",
"msg_date": "Tue, 24 Sep 2002 19:33:16 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "Hannu Krosing dijo: \n\n> For me it feels assymmetric (unless we will make attislocal also int\n> instead of boolean ;). This assymetric nature will manifest itself when\n> we will have ADD COLUMN which can put back the DROP ONLY COLUMN and it\n> has to determine weather to remove the COLUMN definition from the child.\n\nWell, the ADD COLUMN thing is something I haven't think about. Let's\nsee: if I have a child with a local definition of the column I'm adding,\nI have to add one to its inhcount, that's clear. But do I have to reset\nits attislocal?\n\n> What does the current model do in the following case:\n> \n> create table p (f1 int, g1 int);\n> create table c (f1 int) inherits(p);\n> drop column c.f1;\n> \n> Will it just set attisinh = 1 on c.f1 ?\n\nNo, it will forbid you to drop the column. That was the intention on\nthe first place: if a column is inherited, you shouldn't be allowed to\ndrop or rename it. You can only do so at the top of the inheritance\ntree, either recursively or non-recursively. And when you do it\nnon-recursively, the first level is marked non-inherited.\n\n> There seem to be actually 3 different possible behaviours for DROP\n> COLUMN for hierarchies.\n\nWell, I'm not too eager to discuss this kind of thing: it's possible\nthat multiple inheritance goes away in a future release, and all these\nissues will possibly vanish. But I'm not sure I understand the\nimplications of \"interfaces\" (a la Java multiple inheritance).\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Acepta los honores y aplausos y perderas tu libertad\"\n\n",
"msg_date": "Tue, 24 Sep 2002 20:45:14 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "Alvaro Herrera kirjutas K, 25.09.2002 kell 02:45:\n> Hannu Krosing dijo: \n> \n> > For me it feels assymmetric (unless we will make attislocal also int\n> > instead of boolean ;). This assymetric nature will manifest itself when\n> > we will have ADD COLUMN which can put back the DROP ONLY COLUMN and it\n> > has to determine weather to remove the COLUMN definition from the child.\n> \n> Well, the ADD COLUMN thing is something I haven't think about. Let's\n> see: if I have a child with a local definition of the column I'm adding,\n> I have to add one to its inhcount, that's clear. But do I have to reset\n> its attislocal?\n\nI'd guess that it should reset attislocal if ONLY is specified (to be\nsymmetric with behaviour of drop ONLY).\n\n> > What does the current model do in the following case:\n> > \n> > create table p (f1 int, g1 int);\n> > create table c (f1 int) inherits(p);\n> > drop column c.f1;\n> > \n> > Will it just set attisinh = 1 on c.f1 ?\n> \n> No, it will forbid you to drop the column. That was the intention on\n> the first place: if a column is inherited, you shouldn't be allowed to\n> drop or rename it. You can only do so at the top of the inheritance\n> tree, either recursively or non-recursively. And when you do it\n> non-recursively, the first level is marked non-inherited.\n\nAnd my views differed from Tom's on weather to do it always or only when\nthe column was dropped the last parent providing it for inheritance. \n\nLets hope that possible move from INHERITS to (LIKE,...)UNDER will make\nthese issues clearer and thus easier to discuss and agree upon.\n\n> > There seem to be actually 3 different possible behaviours for DROP\n> > COLUMN for hierarchies.\n> \n> Well, I'm not too eager to discuss this kind of thing: it's possible\n> that multiple inheritance goes away in a future release, and all these\n> issues will possibly vanish. 
But I'm not sure I understand the\n> implications of \"interfaces\" (a la Java multiple inheritance).\n\nI don't think that issues for inheriting multiple columns will vanish\neven for the SQL99 way of doing inheritance (LIKE/UNDER), as there can be\nmultiple LIKE's and afaik they too should track changes in parent\ncolumns.\n\nBut I don't think that it is very important to reach consensus for 7.3\nas the whole inheritance area in postgres will likely be changed.\n\nI think these will be items for discussion once the 7.4 cycle starts.\n\n-------------\nHannu\n\n",
"msg_date": "25 Sep 2002 11:38:32 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "En Mon, 23 Sep 2002 09:53:08 -0400\nTom Lane <tgl@sss.pgh.pa.us> escribi�:\n\n> > You cannot add a column to a table that is inherited by another table\n> > that has a column with the same name:\n> \n> Yeah, this is an implementation shortcoming in ALTER ADD COLUMN: if it\n> finds an existing column of the same name in a child table, it should\n> test whether it's okay to \"merge\" the columns (same types, no conflict\n> in constraints/defaults, cf CREATE's behavior); if so, it should\n> increment the child column's attinhcount instead of failing.\n\nI have this almost ready. The thing I don't have quite clear yet is\nwhat to do with attislocal. IMHO it should not be touched in any case,\nbut Hannu thinks that for symmetry it should be reset in some cases.\n\nAlso, what do you mean by conflicts on defaults? I don't think the\nparent should take into consideration what the defaults are for its\nchildren. Same for constraints.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\nSi no sabes adonde vas, es muy probable que acabes en otra parte.\n",
"msg_date": "Sat, 28 Sep 2002 20:06:04 -0400",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> I have this almost ready. The thing I don't have quite clear yet is\n> what to do with attislocal. IMHO it should not be touched in any case,\n> but Hannu thinks that for symmetry it should be reset in some cases.\n\nMy feeling would be to leave it alone in all cases. If I have\n\n\tcreate table p (f1 int);\n\tcreate table c (f2 text) inherits (p);\n\nI would find it quite surprising if I could destroy c.f2 by adding\nand then dropping p.f2.\n\n> Also, what do you mean by conflicts on defaults? I don't think the\n> parent should take into consideration what the defaults are for its\n> children. Same for constraints.\n\nWell, the rules will probably have to be different for this case than\nthey are when creating a child below an existing parent. In particular,\nif the ADD COLUMN operation is trying to create constraints (including\na simple NOT NULL), I'm inclined to fail rather than merge if the\nexisting child column does not already have matching constraints.\nIt would seem surprising to me that creating a parent column in this\nway could allow the formerly free-standing child column to suddenly\nhave constraints it didn't have before. Also, you'd have to scan the\nchild rows to see whether they all meet the constraint, which would\nbe slow. For example, if you wanted to do\n\n\talter table p add column f2 text not null;\n\nin the above example, I think it is reasonable to insist that you first\ndo\n\n\talter table c alter column f2 set not null;\n\nto make it perfectly clear all 'round that you are accepting an\nalteration in the existing column.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 28 Sep 2002 22:00:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance "
},
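The merge rule Tom describes above, written out as a psql session (a sketch of the *proposed* behavior, not necessarily what any given release prints; the failure and its resolution are the point):

```sql
CREATE TABLE p (f1 int);
CREATE TABLE c (f2 text) INHERITS (p);

-- Under the proposed rule this fails: the existing child column c.f2
-- carries no NOT NULL constraint, so the parent's constrained column
-- cannot silently be merged onto it.
ALTER TABLE p ADD COLUMN f2 text NOT NULL;

-- Make the intent explicit on the child first ...
ALTER TABLE c ALTER COLUMN f2 SET NOT NULL;
-- ... after which the merge is unambiguous.
ALTER TABLE p ADD COLUMN f2 text NOT NULL;
```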
{
"msg_contents": "Tom Lane kirjutas P, 29.09.2002 kell 04:00:\n> Alvaro Herrera <alvherre@atentus.com> writes:\n> > I have this almost ready. The thing I don't have quite clear yet is\n> > what to do with attislocal. IMHO it should not be touched in any case,\n> > but Hannu thinks that for symmetry it should be reset in some cases.\n\nI'd propose that ADD ONLY would pull topmost attislocal up (reset it\nfrom the (grand)child) whereas plain ADD would leave attislocal alone.\n\nThe use of ONLY with this meaning is for the symmetry with DROP ONLY.\n\n> My feeling would be to leave it alone in all cases. If I have\n> \n> \tcreate table p (f1 int);\n> \tcreate table c (f2 text) inherits (p);\n> \n> I would find it quite surprising if I could destroy c.f2 by adding\n> and then dropping p.f2.\n\nThis should depend on weather you drop ONLY\n\nOr are you also be surprised by this behaviour of DELETE CASCADE :)\n\nhannu=# create table c(i int);\nCREATE TABLE\nhannu=# insert into c values(1);\nINSERT 41595 1\nhannu=# insert into c values(2);\nINSERT 41596 1\nhannu=# create table p (pk int primary key);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'p_pkey'\nfor table 'p'\nCREATE TABLE\nhannu=# insert into p values(1);\nINSERT 41601 1\nhannu=# insert into p values(2);\nINSERT 41602 1\nhannu=# alter table c add constraint fk foreign key (i)\nhannu-# references p on delete cascade;\nNOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY\ncheck(s)\nALTER TABLE\nhannu=# delete from p where pk=2;\nDELETE 1\nhannu=# select * from c;\n i \n---\n 1\n(1 row)\n\nSurprise: Where did i=2 go ??\n\n\nWhat you are proposing is IMHO equivalent to making FOREIGN KEYs ON\nDELETE CASCADE behaviour dependant on weather the foreign key was\ncreated initially or added afterwards.\n\n> > Also, what do you mean by conflicts on defaults? I don't think the\n> > parent should take into consideration what the defaults are for its\n> > children. 
Same for constraints.\n> \n> Well, the rules will probably have to be different for this case than\n> they are when creating a child below an existing parent. In particular,\n> if the ADD COLUMN operation is trying to create constraints (including\n> a simple NOT NULL), I'm inclined to fail rather than merge if the\n> existing child column does not already have matching constraints.\n> It would seem surprising to me that creating a parent column in this\n> way could allow the formerly free-standing child column to suddenly\n> have constraints it didn't have before. Also, you'd have to scan the\n> child rows to see whether they all meet the constraint, which would\n> be slow. For example, if you wanted to do\n> \n> \talter table p add column f2 text not null;\n> \n> in the above example, I think it is reasonable to insist that you first\n> do\n> \n> \talter table c alter column f2 set not null;\n\nTo this I strongly agree.\n\n-----------------\nHannu\n\n \n\n",
"msg_date": "29 Sep 2002 15:15:50 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> I'd propose that ADD ONLY would pull topmost attislocal up (reset it\n> from the (grand)child) whereas plain ADD would leave attislocal alone.\n\nADD ONLY? There is no such animal as ADD ONLY, and cannot be because\nit implies making a parent inconsistent with its children. (Yes, I\nknow that the code takes that combination right now, but erroring out\ninstead is on the \"must fix before release\" list. Ditto for RENAME\nONLY.)\n\n> The use of ONLY with this meaning is for the symmetry with DROP ONLY.\n\nBut it's not a symmetrical situation. The children must contain every\ncolumn in the parent; the reverse is not true. Some asymmetry in the\ncommands is therefore unavoidable.\n\n>> I would find it quite surprising if I could destroy c.f2 by adding\n>> and then dropping p.f2.\n\n> This should depend on weather you drop ONLY\n\nI disagree. Your analogy to a CASCADE foreign key is bad, because\nthe foreign key constraint is attached to the column that might lose\ndata. Thus you (presumably) know when you create the constraint what\nyou are risking. Losing existing child data because of manipulations\ndone only on the parent --- perhaps not even remembering that there\nis a conflicting child column --- strikes me as dangerous. It seems\nlike an indirect, \"action at a distance\" behavior.\n\nHere is another scenario: suppose p has many children, but only c42\nhas a column f2. If I \"alter table p add column f2\", now p and\nall the c's will have f2. Suppose I realize that was a mistake.\nCan I undo it with \"alter table p drop column f2\"? Yes, under my\nproposal; no, under yours. In yours, the only way would be to\ndo a DROP ONLY on p and then retail DROPs on each of the other\nchildren. This would be tedious and error-prone. If some random\nsubset of the children had f2, it'd be even worse --- it would\nbe difficult even to identify which children had f2 before the\nADD operation. 
IMHO this is a good example of why attislocal is\nuseful.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 29 Sep 2002 10:57:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance "
},
{
"msg_contents": "On Sun, 2002-09-29 at 19:57, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > I'd propose that ADD ONLY would pull topmost attislocal up (reset it\n> > from the (grand)child) whereas plain ADD would leave attislocal alone.\n> \n> ADD ONLY? There is no such animal as ADD ONLY, and cannot be because\n> it implies making a parent inconsistent with its children. \n\nI meant ADD ONLY to be the exact opposite of DROP ONLY - it adds parent\ncolumn and removes attislocal from children. Simple ADD would _not_\nremove attislocal from children with matching column.\n\n> > The use of ONLY with this meaning is for the symmetry with DROP ONLY.\n> \n> But it's not a symmetrical situation. The children must contain every\n> column in the parent; the reverse is not true. Some asymmetry in the\n> commands is therefore unavoidable.\n\nPerhaps some mirror command then: DROP ONLY <--> ADD ALL ?\n\n> >> I would find it quite surprising if I could destroy c.f2 by adding\n> >> and then dropping p.f2.\n> \n> > This should depend on weather you drop ONLY\n> \n> I disagree. Your analogy to a CASCADE foreign key is bad, because\n> the foreign key constraint is attached to the column that might lose\n> data. Thus you (presumably) know when you create the constraint what\n> you are risking. Losing existing child data because of manipulations\n> done only on the parent --- perhaps not even remembering that there\n> is a conflicting child column --- strikes me as dangerous. It seems\n> like an indirect, \"action at a distance\" behavior.\n\nWhat about warning the user and making him use FORCE in ambiguous cases\n(like when some children don't have that column) ?\n\n> Here is another scenario: suppose p has many children, but only c42\n> has a column f2. If I \"alter table p add column f2\", now p and\n> all the c's will have f2. Suppose I realize that was a mistake.\n> Can I undo it with \"alter table p drop column f2\"? 
Yes, under my\n> proposal; no, under yours.\n\n\"YES\" under mine, unless you did \"alter table ONLY p add column f2\" ,\nwhich would have removed the local definition from children.\n\n> In yours, the only way would be to\n> do a DROP ONLY on p and then retail DROPs on each of the other\n> children. This would be tedious and error-prone. If some random\n> subset of the children had f2, it'd be even worse --- it would\n> be difficult even to identify which children had f2 before the\n> ADD operation.\n\nYour proposal and mine are the same in case ONLY is not given. The\noption ADD ONLY is proposed just to make it easy to undo a DROP ONLY.\n\nUnder your proposal I see no easy way to undo DROP ONLY (for example to\ndo DROP instead).\n\n> IMHO this is a good example of why attislocal is useful.\n\nI don't doubt usefulness of attislocal, I just want to make sure it is\nused in a consistent manner.\n\n-------------\nHannu\n\n\n\n\n",
"msg_date": "29 Sep 2002 21:33:48 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "On Sun, 29 Sep 2002, Tom Lane wrote:\n\n> Hannu Krosing <hannu@tm.ee> writes:\n> > I'd propose that ADD ONLY would pull topmost attislocal up (reset it\n> > from the (grand)child) whereas plain ADD would leave attislocal alone.\n> \n> ADD ONLY? There is no such animal as ADD ONLY, and cannot be because\n> it implies making a parent inconsistent with its children. (Yes, I\n> know that the code takes that combination right now, but erroring out\n> instead is on the \"must fix before release\" list. Ditto for RENAME\n> ONLY.)\n\nI'm leaving right now and can't participate in the whole discussion, but\nI implemented \"ADD ONLY\" as a way to add the column only in the parent\n(all children should already have to column, errors if at least one\ndoesn't or is different atttype), while \"ADD\" adds the column to\nchildren that don't have it and merges where already exist; it errors if\nchildren have different atttype etc.\n\nShould I rip the ADD ONLY part out?\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n\"Pido que me den el Nobel por razones humanitarias\" (Nicanor Parra)\n\n",
"msg_date": "Sun, 29 Sep 2002 13:02:25 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance "
},
{
"msg_contents": "On Mon, 2002-09-30 at 00:05, Alvaro Herrera wrote:\n> On 29 Sep 2002, Hannu Krosing wrote:\n> \n> > On Sun, 2002-09-29 at 19:57, Tom Lane wrote:\n> > > Hannu Krosing <hannu@tm.ee> writes:\n> > > > I'd propose that ADD ONLY would pull topmost attislocal up (reset it\n> > > > from the (grand)child) whereas plain ADD would leave attislocal alone.\n> > > \n> > > ADD ONLY? There is no such animal as ADD ONLY, and cannot be because\n> > > it implies making a parent inconsistent with its children. \n> > \n> > I meant ADD ONLY to be the exact opposite of DROP ONLY - it adds parent\n> > column and removes attislocal from children. Simple ADD would _not_\n> > remove attislocal from children with matching column.\n> \n> Consistency requires that it be exactly the opposite.\n\nConsistency seems to mean different things to different people - an a\n\"natural\" meaning is often hard to see in a non-natural language (SQL).\nSo is it \"ADD ONLY to table\" or \"ADD the ONLY definition\" or \"ADD ONLY\ndon't reset attislocal\" or \"ADD ONLY as opposite of DROP ONLY\")\n\nBut I'd be happy with any meaning, as long as the functionality is there\nand it is clearly documented. 
\n\nYour definition of \"ADD to this table ONLY and leave other definitions\nalone\" is easy to accept.\n\n> When you ADD\n> ONLY, you want only in the \"local\" table, so children still have a local\n> definition; OTOH, when you ADD (recursively) you want all children to\n> get non-local status.\n\nPerhaps ADD should either have ONLY or ALL and function without either\nonly when there is no matching column in any of the child tables.\n\n> Suppose\n> CREATE TABLE p (f1 int);\n> CREATE TABLE c (f2 int) INHERITS (p);\n> c.f2.attislocal = true\n> \n> Now,\n> ALTER TABLE ONLY p ADD COLUMN f2 int\n> should leavy c.f2.attislocal alone, while\n> ALTER TABLE p ADD COLUMN f2 int\n> should reset it.\n> \n> This is the opposite of your proposal, and I don't think it exists in\n> Tom's proposal.\n\nI also like the ablility to undo accidental DROP ONLY, which is missing\nin Toms proposal.\n\n> I think this is also consistent with the fact that ONLY requires the\n> column to exist in all children, while non-ONLY creates it where it\n> doesn't exist, and merges (resetting attislocal if set -- it could be\n> inherited from some other parent) where it exists.\n\nFor completeness there should be a third behaviour that would work like\nONLY for existing columns in children, but add it to children where it\nis missing.\n\nThis would be needed to effectively undo a DROP COLUMN where it was\nmultiply inherited and/or locally defined in some children.\n\n----------------------\nHannu\n\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "29 Sep 2002 22:29:32 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "On 29 Sep 2002, Hannu Krosing wrote:\n\n> On Sun, 2002-09-29 at 19:57, Tom Lane wrote:\n> > Hannu Krosing <hannu@tm.ee> writes:\n> > > I'd propose that ADD ONLY would pull topmost attislocal up (reset it\n> > > from the (grand)child) whereas plain ADD would leave attislocal alone.\n> > \n> > ADD ONLY? There is no such animal as ADD ONLY, and cannot be because\n> > it implies making a parent inconsistent with its children. \n> \n> I meant ADD ONLY to be the exact opposite of DROP ONLY - it adds parent\n> column and removes attislocal from children. Simple ADD would _not_\n> remove attislocal from children with matching column.\n\nConsistency requires that it be exactly the opposite. When you ADD\nONLY, you want only in the \"local\" table, so children still have a local\ndefinition; OTOH, when you ADD (recursively) you want all children to\nget non-local status.\n\nSuppose\nCREATE TABLE p (f1 int);\nCREATE TABLE c (f2 int) INHERITS (p);\nc.f2.attislocal = true\n\nNow,\nALTER TABLE ONLY p ADD COLUMN f2 int\nshould leavy c.f2.attislocal alone, while\nALTER TABLE p ADD COLUMN f2 int\nshould reset it.\n\nThis is the opposite of your proposal, and I don't think it exists in\nTom's proposal.\n\nI think this is also consistent with the fact that ONLY requires the\ncolumn to exist in all children, while non-ONLY creates it where it\ndoesn't exist, and merges (resetting attislocal if set -- it could be\ninherited from some other parent) where it exists.\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n\"Nunca se desea ardientemente lo que solo se desea por razon\" (F. Alexandre)\n\n",
"msg_date": "Sun, 29 Sep 2002 15:05:01 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> I implemented \"ADD ONLY\" as a way to add the column only in the parent\n> (all children should already have to column, errors if at least one\n> doesn't or is different atttype), while \"ADD\" adds the column to\n> children that don't have it and merges where already exist; it errors if\n> children have different atttype etc.\n\nI fail to see the value in such a distinction. The end state is the same\nin both cases, no?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 29 Sep 2002 23:48:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance "
},
{
"msg_contents": "On Fri, 2002-10-04 at 01:00, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Where are we with this patch?\n> \n> It's done as far as I'm concerned ;-). Not sure if Hannu still wants\n> to argue that the behavior is wrong ... it seems fine to me though ...\n\nI stop arguing for now, \"ONLY\" can mean too many things ;)\n\nI can't promise that I don't bring some of it up again when we will\nstart discussing a more general overhaul of our inheritance and OO .\n\n---------------\nHannu\n\n",
"msg_date": "03 Oct 2002 23:33:25 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "\nWhere are we with this patch?\n\n---------------------------------------------------------------------------\n\nAlvaro Herrera wrote:\n> On 29 Sep 2002, Hannu Krosing wrote:\n> \n> > On Sun, 2002-09-29 at 19:57, Tom Lane wrote:\n> > > Hannu Krosing <hannu@tm.ee> writes:\n> > > > I'd propose that ADD ONLY would pull topmost attislocal up (reset it\n> > > > from the (grand)child) whereas plain ADD would leave attislocal alone.\n> > > \n> > > ADD ONLY? There is no such animal as ADD ONLY, and cannot be because\n> > > it implies making a parent inconsistent with its children. \n> > \n> > I meant ADD ONLY to be the exact opposite of DROP ONLY - it adds parent\n> > column and removes attislocal from children. Simple ADD would _not_\n> > remove attislocal from children with matching column.\n> \n> Consistency requires that it be exactly the opposite. When you ADD\n> ONLY, you want only in the \"local\" table, so children still have a local\n> definition; OTOH, when you ADD (recursively) you want all children to\n> get non-local status.\n> \n> Suppose\n> CREATE TABLE p (f1 int);\n> CREATE TABLE c (f2 int) INHERITS (p);\n> c.f2.attislocal = true\n> \n> Now,\n> ALTER TABLE ONLY p ADD COLUMN f2 int\n> should leavy c.f2.attislocal alone, while\n> ALTER TABLE p ADD COLUMN f2 int\n> should reset it.\n> \n> This is the opposite of your proposal, and I don't think it exists in\n> Tom's proposal.\n> \n> I think this is also consistent with the fact that ONLY requires the\n> column to exist in all children, while non-ONLY creates it where it\n> doesn't exist, and merges (resetting attislocal if set -- it could be\n> inherited from some other parent) where it exists.\n> \n> -- \n> Alvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n> \"Nunca se desea ardientemente lo que solo se desea por razon\" (F. 
Alexandre)\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 3 Oct 2002 15:31:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Where are we with this patch?\n\nIt's done as far as I'm concerned ;-). Not sure if Hannu still wants\nto argue that the behavior is wrong ... it seems fine to me though ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Oct 2002 16:00:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance "
},
{
"msg_contents": "On Thu, Oct 03, 2002 at 04:00:32PM -0400, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Where are we with this patch?\n> \n> It's done as far as I'm concerned ;-). Not sure if Hannu still wants\n> to argue that the behavior is wrong ... it seems fine to me though ...\n\nI still haven't submitted the ALTER TABLE/ADD COLUMN part. There's a\nlittle thing I want to change first. It's a different issue though (but\nrelated).\n\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"El que vive para el futuro es un iluso, y el que vive para el pasado,\nun imb�cil\" (Luis Adler, \"Los tripulantes de la noche\")\n",
"msg_date": "Thu, 3 Oct 2002 17:29:47 -0400",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP COLUMN misbehaviour with multiple inheritance"
}
] |
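Tom's "many children, only c42" scenario from this thread, written out as SQL (a sketch of his proposed attislocal semantics; NOTICE chatter omitted):

```sql
CREATE TABLE p (f1 int);
CREATE TABLE c1 () INHERITS (p);
CREATE TABLE c42 (f2 text) INHERITS (p);   -- only c42 defines f2 locally

-- Merges with c42.f2 (attinhcount incremented, attislocal left true)
-- and creates a brand-new, inherited-only f2 in c1.
ALTER TABLE p ADD COLUMN f2 text;

-- Undoing the mistake: f2 vanishes from p and from c1, but survives
-- in c42 because its local definition (attislocal) was never touched.
ALTER TABLE p DROP COLUMN f2;
```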
[
{
"msg_contents": "I had a similar problem with collation and case conversion in LATIN2.\n\n> Are you sure, that you've system locale (LANG variable) set to\n> something.ISO8859-2 when you've invoked initdb command?\n\nThat was a bit tricky cause I just set LC_ALL=pl_PL - for most programs\nit is enough.\n\nBut eventually it worked for iso8859-2. Now I try to get proper\ncollation for UTF-8 and I can't find what locales I should set in this\ncase (tried en_IN.UTF-8 and it did not work).\n\nZbigniew Lukasiak\n",
"msg_date": "Thu, 12 Sep 2002 14:51:25 +0200",
"msg_from": "zby@e-katalyst.pl (Zbigniew Lukasiak)",
"msg_from_op": true,
"msg_subject": "UTF-8 collation (was Re: [BUGS] LATIN2 ORDER BY)"
}
] |
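For anyone hitting the same problem later: ORDER BY follows the LC_COLLATE setting frozen in at initdb time, so UTF-8 collation needs a cluster initialized with a matching UTF-8 locale. A sketch (the SHOW parameters exist in later releases; exact locale names vary by platform):

```sql
-- Inspect what the cluster was initialized with: sorting follows
-- LC_COLLATE, while upper()/lower() follow LC_CTYPE.
SHOW lc_collate;
SHOW lc_ctype;

-- To get, e.g., Polish collation over UTF-8 data, the cluster would
-- have to be created along the lines of:
--   initdb --locale=pl_PL.UTF-8 --encoding=UNICODE
```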
[
{
"msg_contents": "\n Hi,\n\n I have file with this code:\n\n----------\n\\l\nSHOW SERVER_ENCODING;\nSHOW CLIENT_ENCODING;\n\n--- Languages table\n---\nCREATE TABLE lang\n(\n\t--- 'id' is here lang abbreviation\n\t---\n\tid\t\tvarchar(3) PRIMARY KEY,\n\tname\t\tvarchar(16) NOT NULL\t--- lang fullname\n);\n\nCOPY lang FROM stdin;\nEN\tEnglish\nDE\tGerman\nJP\tJapanese\n\\.\n\n----------\n\n and now I use latest PostgreSQL from CVS:\n\n$ psql anydb < langs.sql \n List of databases\n Name | Owner | Encoding \n-----------+----------+-----------\n anydb | zakkr | UNICODE\n template0 | postgres | SQL_ASCII\n template1 | postgres | SQL_ASCII\n test | postgres | SQL_ASCII\n(4 rows)\n\n server_encoding \n-----------------\n UNICODE\n(1 row)\n\n client_encoding \n-----------------\n LATIN1\n(1 row)\n\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'lang_pkey' for table 'lang'\nCREATE TABLE\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nlost synchronization with server, resetting connection\nconnection to server was lost\n\n In the server log file is:\n\nTRAP: FailedAssertion(\"!(len > 0)\", File: \"utf8_and_iso8859_1.c\", Line: 45)\n\n\n If I use INSERT instead COPY it's OK.\n\n Karel\n\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Thu, 12 Sep 2002 17:29:06 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "failed Assert() in utf8_and_iso8859_1.c"
},
{
"msg_contents": "Karel Zak <zakkr@zf.jcu.cz> writes:\n> In the server log file is:\n> TRAP: FailedAssertion(\"!(len > 0)\", File: \"utf8_and_iso8859_1.c\", Line: 45)\n\nHmm, looks like all the conversion_procs files have\n\n\tAssert(len > 0);\n\nSurely that should be Assert(len >= 0)?\n\nI also notice that I neglected to change PG_RETURN_INT32(0) to\nPG_RETURN_VOID() in these files. That's only cosmetic, but\nprobably it should be done.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Sep 2002 15:23:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: failed Assert() in utf8_and_iso8859_1.c "
},
{
"msg_contents": "> Hmm, looks like all the conversion_procs files have\n> \n> \tAssert(len > 0);\n> \n> Surely that should be Assert(len >= 0)?\n> \n> I also notice that I neglected to change PG_RETURN_INT32(0) to\n> PG_RETURN_VOID() in these files. That's only cosmetic, but\n> probably it should be done.\n\nFixed.\n--\nTatsuo Ishii\n",
"msg_date": "Fri, 13 Sep 2002 15:41:51 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: failed Assert() in utf8_and_iso8859_1.c "
}
] |
[
{
"msg_contents": "Hi,\n\nDoes anyone know any implementation of a fixpoint operator (recursive\nqueries) for postgreSQL?\n\nThanks,\nLuciano.\n\n",
"msg_date": "Thu, 12 Sep 2002 19:33:13 +0100",
"msg_from": "Luciano Gerber <gerberl@cs.man.ac.uk>",
"msg_from_op": true,
"msg_subject": "fixpoint"
},
{
"msg_contents": "On Thu, 2002-09-12 at 20:33, Luciano Gerber wrote:\n> Hi,\n> \n> Does anyone know any implementation of a fixpoint operator (recursive\n> queries) for postgreSQL?\n\nI'm not sure i know about fixpoint, but you may get some help with\nrecursive queries from connectby() function from contrib/tablefunc/\n\n--------------\nHannu\n",
"msg_date": "13 Sep 2002 12:03:48 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: fixpoint"
}
] |
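connectby() was the practical answer at the time; the standard fixpoint construct, WITH RECURSIVE, arrived in core later (PostgreSQL 8.4). A minimal sketch of a transitive closure computed as a fixpoint, i.e. iterating until no new rows appear:

```sql
CREATE TABLE edge (src int, dst int);
INSERT INTO edge VALUES (1, 2), (2, 3), (3, 4);

WITH RECURSIVE reach(src, dst) AS (
    SELECT src, dst FROM edge                    -- base case
  UNION                                          -- UNION (not UNION ALL) is the fixpoint test
    SELECT r.src, e.dst
    FROM reach r JOIN edge e ON r.dst = e.src    -- inductive step
)
SELECT * FROM reach ORDER BY src, dst;
```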
[
{
"msg_contents": "Hi everyone,\n\nWe're looking to get an initial \"PostgreSQL Advocacy and Marketing\" site\nup an running in the next day or so.\n\nWhilst we know of a reasonable number of large places running PostgreSQL\n(as shown on the\nhttp://techdocs.postgresql.org/techdocs/supportcontracts.php page),\nwe're still looking for further examples.\n\nSpecifically, we are looking for places that are happy to discuss it,\neither a) not publicly, or b) happy to let the world know about it.\n\nProbably about 1/3 to 1/4 of the large organisations that we know are\nusing PostgreSQL for important work aren't yet able to announce it\npublicly. Please don't let this stop you from letting us know\nprivately, as we are interested in the implementation details and will\nrespect your confidentiality.\n\nSo, if you're using PostgreSQL and haven't directly let us know, please\ndo so now if you can.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Fri, 13 Sep 2002 05:35:49 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Looking for more \"big name\" places that use PostgreSQL"
},
{
"msg_contents": "I don't know how big our name is, but IHS, or Information Handling \nServices has been in the information industry for over 40 years, starting \nwith paper indexes to catalogs, and moving into specs and standards, \nparametric data, and dozens of other catagories.\n\nWhile our big financial and online databases are all mainframe / idms / \ncics or Solaris / Oracle, we are using postgresql more and more for \ninternal projects and are quite happy with it. So far it is slowly \npushing out MSSQL server and the other smaller backend databases, mainly \nbecause it is stable, fast, and free, the three mythical qualities you can \nonly get two of most of the time.\n\nI'm not in marketing, and I don't speak for IHS as a company, I'm just one \nof the trained web monkeys / postgresql dbas working in the back shop \nhere, so it's probably best to just say we use it for internal projects \nand such on the web.\n\nScott Marlowe\n\n",
"msg_date": "Thu, 12 Sep 2002 14:58:11 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Looking for more \"big name\" places that use PostgreSQL"
},
{
"msg_contents": "\"scott.marlowe\" wrote:\n<snip>\n> I'm not in marketing, and I don't speak for IHS as a company, I'm just one\n> of the trained web monkeys / postgresql dbas working in the back shop\n> here, so it's probably best to just say we use it for internal projects\n> and such on the web.\n\nCool. Thanks heaps Scott.\n\n:)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> \n> Scott Marlowe\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Fri, 13 Sep 2002 07:11:21 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: Looking for more \"big name\" places that use PostgreSQL"
},
{
"msg_contents": "Hi Justin, Just Sports USA, my employer is the largest professional \nsports licensed merchandise retailer in the US and we use PostgreSQL as \nthe foundation of our technology solution that runs everything from our \npurchasing system to our in-store point-of-sale systems. Contact me if \nyou would like a more detailed description of our use, how it's worked \nflawlessly for us for over 3 years, or anything else related. \n\nGavin M. Roy\nCIO\nJust Sports USA\n\nJustin Clift wrote:\n\n>Hi everyone,\n>\n>We're looking to get an initial \"PostgreSQL Advocacy and Marketing\" site\n>up an running in the next day or so.\n>\n>Whilst we know of a reasonable number of large places running PostgreSQL\n>(as shown on the\n>http://techdocs.postgresql.org/techdocs/supportcontracts.php page),\n>we're still looking for further examples.\n>\n>Specifically, we are looking for places that are happy to discuss it,\n>either a) not publicly, or b) happy to let the world know about it.\n>\n>Probably about 1/3 to 1/4 of the large organisations that we know are\n>using PostgreSQL for important work aren't yet able to announce it\n>publicly. Please don't let this stop you from letting us know\n>privately, as we are interested in the implementation details and will\n>respect your confidentiality.\n>\n>So, if you're using PostgreSQL and haven't directly let us know, please\n>do so now if you can.\n>\n>:-)\n>\n>Regards and best wishes,\n>\n>Justin Clift\n>\n> \n>\n\n\n\n\n\n---------------------------------------------------------\nScanned by Sophos Anti-Virus v3.59TPOS, MIMEDefang v2.19,\nand Spam Assassin v2.31 on satchel.bteg.net\n",
"msg_date": "Thu, 12 Sep 2002 14:42:26 -0700",
"msg_from": "\"Gavin M. Roy\" <gmr@justsportsusa.com>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Looking for more \"big name\" places that use PostgreSQL"
},
{
"msg_contents": "\nYes, and I am going to start working on advocacy this week. If you are\ninterested in sharing your experience of PostgreSQL with others, please\nsubscribe to the advocacy mailing list. Some items I want to focus on\nare:\n\t\n\tquotations\n\tcompany users\n\tbeef up developers list, add companies\n\tsuccess stories\n\tBSD license\n\tfunding\n\tnon-technical papers\n\tphone/email/visit potential PostgreSQL sites\n\n---------------------------------------------------------------------------\n\nJustin Clift wrote:\n> Hi everyone,\n> \n> We're looking to get an initial \"PostgreSQL Advocacy and Marketing\" site\n> up an running in the next day or so.\n> \n> Whilst we know of a reasonable number of large places running PostgreSQL\n> (as shown on the\n> http://techdocs.postgresql.org/techdocs/supportcontracts.php page),\n> we're still looking for further examples.\n> \n> Specifically, we are looking for places that are happy to discuss it,\n> either a) not publicly, or b) happy to let the world know about it.\n> \n> Probably about 1/3 to 1/4 of the large organisations that we know are\n> using PostgreSQL for important work aren't yet able to announce it\n> publicly. Please don't let this stop you from letting us know\n> privately, as we are interested in the implementation details and will\n> respect your confidentiality.\n> \n> So, if you're using PostgreSQL and haven't directly let us know, please\n> do so now if you can.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 15 Sep 2002 23:08:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Looking for more \"big name\" places that use PostgreSQL"
}
] |