[ { "msg_contents": "\n> Per Tom's request(1000 concurrent backends), I tried current on IBM\n> AIX 5L and found that make check hungs:\n> \n> parallel group (13 tests): float4 oid varchar\n> \n> pgbench hungs too if more than 4 or so concurrent backends are\n> involved.\n\nI once had hangs during make check on AIX 4, but after make distclean\nand \nrebuild was never able to reproduce.\n\nCan you read the man page for cs(3), AIX 4 sais it is not recommended\nsuggests to use compare_and_swap, maybe AIX 5 has more to say ?\n\n> Unfortunately gdb does not work well on AIX, so I'm stucked.\n> Maybe a new locking code?\n\nUse dbx (and ddd) ?\n\nI don't have access to AIX 5.\n\nAndreas\n", "msg_date": "Mon, 1 Oct 2001 14:33:43 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Problem on AIX with current" }, { "msg_contents": "> > Per Tom's request(1000 concurrent backends), I tried current on IBM\n> > AIX 5L and found that make check hungs:\n> > \n> > parallel group (13 tests): float4 oid varchar\n> > \n> > pgbench hungs too if more than 4 or so concurrent backends are\n> > involved.\n> \n> I once had hangs during make check on AIX 4, but after make distclean\n> and \n> rebuild was never able to reproduce.\n\nI thing I did make distclean.\n\n> Can you read the man page for cs(3), AIX 4 sais it is not recommended\n> suggests to use compare_and_swap, maybe AIX 5 has more to say ?\n\n Note&#58; The cs subroutine is only provided to support binary\n compatibility with AIX Version 3 applications&#46; When writing new\n applications, it is not recommended to use this subroutine; it may cause\n reduced performance in the future&#46; Applications should use the\n compare_and_swap (compare_and_swap Subroutine) subroutine, unless they\n need to use unaligned memory locations&#46;\n\nSeems same as AIX 4?\n\n> > Unfortunately gdb does not work well on AIX, so I'm stucked.\n> > Maybe a new locking code?\n> \n> Use 
dbx (and ddd) ?\n\nHere is a stack trace using dbx.\n\nsemop(??, ??, ??) at 0xd02be73c\nIpcSemaphoreLock(??, ??, ??), line 425 in \"ipc.c\"\nLWLockAcquire(??, ??), line 270 in \"lwlock.c\"\nLockAcquire(??, ??, ??, ??, ??), line 482 in \"lock.c\"\nLockRelation(??, ??), line 153 in \"lmgr.c\"\nheap_openr(??, ??), line 512 in \"heapam.c\"\nscan_pg_rel_ind(??, ??), line 380 in \"relcache.c\"\nScanPgRelation(??, ??), line 307 in \"relcache.c\"\nIndexedAccessMethodInitialize(??, ??, ??), line 994 in \"relcache.c\"\nRelationNameGetRelation(??), line 1484 in \"relcache.c\"\nheap_openr(??, ??), line 502 in \"heapam.c\"\nsetTargetTable(??, ??, ??, ??), line 136 in \"parse_clause.c\"\ntransformUpdateStmt(??, ??), line 2416 in \"analyze.c\"\ntransformStmt(??, ??), line 228 in \"analyze.c\"\nparse_analyze(??, ??), line 92 in \"analyze.c\"\npg_analyze_and_rewrite(??), line 428 in \"postgres.c\"\nunnamed block $b1877, line 740 in \"postgres.c\"\nunnamed block $b1876, line 740 in \"postgres.c\"\nunnamed block $b1872, line 740 in \"postgres.c\"\npg_exec_query_string(??, ??, ??), line 740 in \"postgres.c\"\nPostgresMain(??, ??, ??, ??, ??), line 1943 in \"postgres.c\"\nDoBackend(??), line 2104 in \"postmaster.c\"\nBackendStartup(??), line 1837 in \"postmaster.c\"\nunnamed block $b1665, line 917 in \"postmaster.c\"\nServerLoop(), line 917 in \"postmaster.c\"\nPostmasterMain(??, ??), line 712 in \"postmaster.c\"\nmain(argc = 0, argv = (nil)), line 178 in \"main.c\"\n", "msg_date": "Tue, 02 Oct 2001 00:42:42 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Problem on AIX with current" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n>> Can you read the man page for cs(3), AIX 4 sais it is not recommended\n>> suggests to use compare_and_swap, maybe AIX 5 has more to say ?\n\n> Note&#58; The cs subroutine is only provided to support binary\n> compatibility with AIX Version 3 applications&#46; When writing new\n> 
applications, it is not recommended to use this subroutine; it may cause\n> reduced performance in the future&#46; Applications should use the\n> compare_and_swap (compare_and_swap Subroutine) subroutine, unless they\n> need to use unaligned memory locations&#46;\n\n> Seems same as AIX 4?\n\nHmm, does anyone want to produce new s_lock code for AIX that uses\ncompare_and_swap? But I'm not sure that's the problem here.\n\n> Here is a stack trace using dbx.\n\n> semop(??, ??, ??) at 0xd02be73c\n> IpcSemaphoreLock(??, ??, ??), line 425 in \"ipc.c\"\n> LWLockAcquire(??, ??), line 270 in \"lwlock.c\"\n> LockAcquire(??, ??, ??, ??, ??), line 482 in \"lock.c\"\n\nThis process is waiting to acquire the LockMgr lock. You need to look\nat the rest of the processes and try to figure out who's got the lock.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Oct 2001 12:22:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem on AIX with current " }, { "msg_contents": "> > Here is a stack trace using dbx.\n> \n> > semop(??, ??, ??) at 0xd02be73c\n> > IpcSemaphoreLock(??, ??, ??), line 425 in \"ipc.c\"\n> > LWLockAcquire(??, ??), line 270 in \"lwlock.c\"\n> > LockAcquire(??, ??, ??, ??, ??), line 482 in \"lock.c\"\n> \n> This process is waiting to acquire the LockMgr lock. You need to look\n> at the rest of the processes and try to figure out who's got the lock.\n\nStrange enough, there's no other backend (of course except stats\ncollectors) here. 
I make sure this with ps and pg_stat_activity view.\n\nBTW pg_stat_activity view shows:\n\n16556 | test | 197378 | 1 | postgres | update accounts set abalance = abalance + 406, filler = 'added amount to abalance is 406' where aid = 1447\n--\nTatsuo Ishii\n", "msg_date": "Tue, 02 Oct 2001 10:09:39 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Problem on AIX with current " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Strange enough, there's no other backend (of course except stats\n> collectors) here. I make sure this with ps and pg_stat_activity view.\n\nIf you have no better way of determining what's going on, it might help\nto recompile with LOCK_DEBUG defined, then enable trace_lwlocks in\npostgresql.conf (better turn on debug_print_query, log_timestamp, and\nlog_pid too). This will generate rather voluminous log output, perhaps\nenough to provide a clue.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 Oct 2001 10:41:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem on AIX with current " }, { "msg_contents": "> If you have no better way of determining what's going on, it might help\n> to recompile with LOCK_DEBUG defined, then enable trace_lwlocks in\n> postgresql.conf (better turn on debug_print_query, log_timestamp, and\n> log_pid too). This will generate rather voluminous log output, perhaps\n> enough to provide a clue.\n\nWhen I recompiled with LOCK_DEBUG and trace_lwlocks = true, it *works*\n(and saw lots of lock debugging messages, of course). However if I\nturn trace_lwlocks to off, the backend stucks again. Is there anything\nI can do?\n\nNote the machine has 4 processors. 
Is that related to?\n--\nTatsuo Ishii\n", "msg_date": "Thu, 04 Oct 2001 13:07:17 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Problem on AIX with current " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> When I recompiled with LOCK_DEBUG and trace_lwlocks = true, it *works*\n> (and saw lots of lock debugging messages, of course). However if I\n> turn trace_lwlocks to off, the backend stucks again.\n\nUgh ... ye classic Heisenbug ...\n\n> Is there anything I can do?\n\nApparently the problem is timing-sensitive, which is hardly surprising\nfor a lock issue. You might find that it occurs some of the time if\nyou repeat the test over and over.\n\n> Note the machine has 4 processors. Is that related to?\n\nHard to tell at this point, but considering that no one else has\nreported a problem so far, it does seem like multiple CPUs at least\nhelp to make the failure more probable. But it could just be a\nportability problem. Do you have another machine with identical OS\nand fewer processors to try for comparison?\n\nAndreas, have you tried CVS tip lately on AIX? What's your results?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Oct 2001 00:25:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem on AIX with current " } ]
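The thread above suggests replacing AIX's deprecated cs() subroutine with compare_and_swap in the spinlock code. As a rough illustration of what a compare-and-swap based test-and-set lock looks like, here is a minimal C sketch. It is not PostgreSQL's actual s_lock code: the type and function names are made up, and the GCC `__sync` builtins are used as a portable stand-in for AIX's compare_and_swap() subroutine.

```c
#include <assert.h>

/* Minimal test-and-set spinlock built on compare-and-swap.
 * Illustrative only: on AIX one would call the system
 * compare_and_swap() documented in the man page quoted above;
 * the GCC builtins below are a portable stand-in. */

typedef volatile int slock_t;

/* Try to take the lock: atomically change 0 -> 1.
 * Returns 1 if the lock was acquired, 0 if it was already held. */
int tas_try(slock_t *lock)
{
    return __sync_bool_compare_and_swap(lock, 0, 1);
}

/* Release the lock: store 0 with release semantics. */
void s_unlock(slock_t *lock)
{
    __sync_lock_release(lock);
}
```

A caller would spin (or sleep, as PostgreSQL's s_lock does) while `tas_try` returns 0.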
[ { "msg_contents": "\nIs the developer's faq still a valid document? After last nite's website\nchanges I've been tracking down missing items and this one pops up. The\nonly place I see it tho is in the cvs Attic. Isn't the attic where the\njunk goes that noone wants anymore? Is that where the faq belongs?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 1 Oct 2001 08:45:48 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "developer's faq" }, { "msg_contents": "> \n> Is the developer's faq still a valid document? After last nite's website\n> changes I've been tracking down missing items and this one pops up. The\n> only place I see it tho is in the cvs Attic. Isn't the attic where the\n> junk goes that noone wants anymore? Is that where the faq belongs?\n\nIt is in the main cvs and copied into the web directory. It is not in\nthe web cvs.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Oct 2001 11:29:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: developer's faq" }, { "msg_contents": "On Mon, 1 Oct 2001, Bruce Momjian wrote:\n\n> >\n> > Is the developer's faq still a valid document? After last nite's website\n> > changes I've been tracking down missing items and this one pops up. The\n> > only place I see it tho is in the cvs Attic. 
Isn't the attic where the\n> > junk goes that noone wants anymore? Is that where the faq belongs?\n>\n> It is in the main cvs and copied into the web directory. It is not in\n> the web cvs.\n\nWhere? This is the only place I see it.\n\n$ locate faq-dev-english\n/cvsroot/www/html/docs/Attic/faq-dev-english.html,v\n/cvsroot/www/html/docs/Attic/faq-dev-english.shtml,v\n$\n\nI'm already getting a few things out of cvs now (TODO and flowchard are\na couple) for the developer site. WebCVS will be there whenever SOMEONE\ngets around to some configuration changes I requested.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 1 Oct 2001 11:36:56 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "Re: developer's faq" }, { "msg_contents": "> On Mon, 1 Oct 2001, Bruce Momjian wrote:\n> \n> > >\n> > > Is the developer's faq still a valid document? After last nite's website\n> > > changes I've been tracking down missing items and this one pops up. The\n> > > only place I see it tho is in the cvs Attic. Isn't the attic where the\n> > > junk goes that noone wants anymore? Is that where the faq belongs?\n> >\n> > It is in the main cvs and copied into the web directory. It is not in\n> > the web cvs.\n> \n> Where? 
This is the only place I see it.\n> \n> $ locate faq-dev-english\n> /cvsroot/www/html/docs/Attic/faq-dev-english.html,v\n> /cvsroot/www/html/docs/Attic/faq-dev-english.shtml,v\n> $\n\nThe pgsql CVS, not the www CVS:\n\n $ ls pgsql/doc/src/FAQ\n CVS/ FAQ_DEV.html FAQ_japanese.html\n FAQ.html FAQ_german.html\n\nI copy them to the web directory like the platform-specific FAQ's.\n\n> \n> I'm already getting a few things out of cvs now (TODO and flowchard are\n> a couple) for the developer site. WebCVS will be there whenever SOMEONE\n> gets around to some configuration changes I requested.\n\nTODO should not be in the www CVS either. That is generated by txt2html\nand I copy that to www too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Oct 2001 12:10:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: developer's faq" }, { "msg_contents": "On Mon, 1 Oct 2001, Bruce Momjian wrote:\n\n> > On Mon, 1 Oct 2001, Bruce Momjian wrote:\n> >\n> > > >\n> > > > Is the developer's faq still a valid document? After last nite's website\n> > > > changes I've been tracking down missing items and this one pops up. The\n> > > > only place I see it tho is in the cvs Attic. Isn't the attic where the\n> > > > junk goes that noone wants anymore? Is that where the faq belongs?\n> > >\n> > > It is in the main cvs and copied into the web directory. It is not in\n> > > the web cvs.\n> >\n> > Where? 
This is the only place I see it.\n> >\n> > $ locate faq-dev-english\n> > /cvsroot/www/html/docs/Attic/faq-dev-english.html,v\n> > /cvsroot/www/html/docs/Attic/faq-dev-english.shtml,v\n> > $\n>\n> The pgsql CVS, not the www CVS:\n>\n> $ ls pgsql/doc/src/FAQ\n> CVS/ FAQ_DEV.html FAQ_japanese.html\n> FAQ.html FAQ_german.html\n>\n> I copy them to the web directory like the platform-specific FAQ's.\n\nSo you renamed it to faq-dev-english... Ok. I can get that out.\n\n> > I'm already getting a few things out of cvs now (TODO and flowchard are\n> > a couple) for the developer site. WebCVS will be there whenever SOMEONE\n> > gets around to some configuration changes I requested.\n>\n> TODO should not be in the www CVS either. That is generated by txt2html\n> and I copy that to www too.\n\nI'm trying to get everything as automated as possible without having to\ndepend on any one person which is why I'm going into cvs to get it. If\nit doesn't belong in cvs, then why is it still there? More importantly\nif it's not going to be there, where will it be? Should we put it in\nthe database?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 1 Oct 2001 12:40:33 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "Re: developer's faq" }, { "msg_contents": "> > The pgsql CVS, not the www CVS:\n> >\n> > $ ls pgsql/doc/src/FAQ\n> > CVS/ FAQ_DEV.html FAQ_japanese.html\n> > FAQ.html FAQ_german.html\n> >\n> > I copy them to the web directory like the platform-specific FAQ's.\n> \n> So you renamed it to faq-dev-english... Ok. 
I can get that out.\n\nSee copyfaq script for copy renames. I have one i my home directory\nthat I use to copy from my local CVS while you have one too.\n\n> \n> > > I'm already getting a few things out of cvs now (TODO and flowchard are\n> > > a couple) for the developer site. WebCVS will be there whenever SOMEONE\n> > > gets around to some configuration changes I requested.\n> >\n> > TODO should not be in the www CVS either. That is generated by txt2html\n> > and I copy that to www too.\n> \n> I'm trying to get everything as automated as possible without having to\n> depend on any one person which is why I'm going into cvs to get it. If\n> it doesn't belong in cvs, then why is it still there? More importantly\n> if it's not going to be there, where will it be? Should we put it in\n> the database?\n\nNot sure about TODO. I don't see it in my www CVS copy here. It is a\nfunny file because it is generated from the text TODO file in the pgsql\ncvs. I can put the script I use with txt2html to generate it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Oct 2001 12:51:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: developer's faq" }, { "msg_contents": "On Mon, 1 Oct 2001, Bruce Momjian wrote:\n\n> > > The pgsql CVS, not the www CVS:\n> > >\n> > > $ ls pgsql/doc/src/FAQ\n> > > CVS/ FAQ_DEV.html FAQ_japanese.html\n> > > FAQ.html FAQ_german.html\n> > >\n> > > I copy them to the web directory like the platform-specific FAQ's.\n> >\n> > So you renamed it to faq-dev-english... Ok. I can get that out.\n>\n> See copyfaq script for copy renames. I have one i my home directory\n> that I use to copy from my local CVS while you have one too.\n\nI'm trying to avoid renaming. 
Since the sites are going php I have more\n(better) options.\n\n> >\n> > > > I'm already getting a few things out of cvs now (TODO and flowchard are\n> > > > a couple) for the developer site. WebCVS will be there whenever SOMEONE\n> > > > gets around to some configuration changes I requested.\n> > >\n> > > TODO should not be in the www CVS either. That is generated by txt2html\n> > > and I copy that to www too.\n> >\n> > I'm trying to get everything as automated as possible without having to\n> > depend on any one person which is why I'm going into cvs to get it. If\n> > it doesn't belong in cvs, then why is it still there? More importantly\n> > if it's not going to be there, where will it be? Should we put it in\n> > the database?\n>\n> Not sure about TODO. I don't see it in my www CVS copy here. It is a\n> funny file because it is generated from the text TODO file in the pgsql\n> cvs. I can put the script I use with txt2html to generate it.\n\nIt's not in www cvs. It's in pgsql cvs. That's where I'm getting it.\nI'm trying to have php use cvs to update the copy I'm about to serve up\nbut it's complaining so I have to look and see how I did it before.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 1 Oct 2001 12:59:31 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "Re: developer's faq" }, { "msg_contents": "> On Mon, 1 Oct 2001, Bruce Momjian wrote:\n> \n> > > > The pgsql CVS, not the www CVS:\n> > > >\n> > > > $ ls pgsql/doc/src/FAQ\n> > > > CVS/ FAQ_DEV.html FAQ_japanese.html\n> > > > FAQ.html FAQ_german.html\n> > > >\n> > > > 
I copy them to the web directory like the platform-specific FAQ's.\n> > >\n> > > So you renamed it to faq-dev-english... Ok. I can get that out.\n> >\n> > See copyfaq script for copy renames. I have one i my home directory\n> > that I use to copy from my local CVS while you have one too.\n> \n> I'm trying to avoid renaming. Since the sites are going php I have more\n> (better) options.\n\nSure.\n\n> > Not sure about TODO. I don't see it in my www CVS copy here. It is a\n> > funny file because it is generated from the text TODO file in the pgsql\n> > cvs. I can put the script I use with txt2html to generate it.\n> \n> It's not in www cvs. It's in pgsql cvs. That's where I'm getting it.\n> I'm trying to have php use cvs to update the copy I'm about to serve up\n> but it's complaining so I have to look and see how I did it before.\n\nAttached is the script I use and a supporting file. You will need\ntxt2html installed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n:\ntrap \"rm -rf /tmp/$$\" 0 1 2 3 15\n\ntxt2html -m -s 100 -p 100 --title \"PostgreSQL TODO list\" \\\n\t--link /u/txt2html/txt2html.dict \\\n\t--append_head /u/txt2html/BODY /pgtop/doc/TODO |\nsed 's;\\[\\([^]]*\\)\\];[<A HREF=\"http://candle.pha.pa.us/cgi-bin/pgtodo?\\1\">\\1</A>];g' >/tmp/$$\n\nscp -q /tmp/$$ momjian@www.ca.postgresql.org:/home/projects/pgsql/ftp/www/html/docs/todo.html\n\ncd /pgtop/doc\nchown postgres TODO\npgcvs commit -m 'Update TODO list.' TODO\n \n\n<BODY BGCOLOR=\"#FFFFFF\" TEXT=\"#000000\" LINK=\"#FF0000\" VLINK=\"#A00000\" ALINK=\"#0000FF\">", "msg_date": "Mon, 1 Oct 2001 13:31:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: developer's faq" } ]
[ { "msg_contents": "Hi all,\n\nReading through the script files again, there seems to be several\ndifferent methods of doing the same thing :\n\ni.e. if [ -x \"$self_path/postmaster\" ] && [ -x \"$self_path/psql\" ];\nthen\n\nor\n\nif [[ -x \"$self_path/postmaster\" && -x \"$self_path/psql\" ]]; then\n\n\n\n\nif [ x\"$foo\" = x\"\" ]; then\n\nor\n\nif [ \"$op\" = \"\" ]; then\n\nor\n\nif [ \"$foo\" ]; then\n\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n\n", "msg_date": "Mon, 01 Oct 2001 23:37:37 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "When scripting, which is better?" }, { "msg_contents": "Justin Clift wrote:\n\n\n> if [ x\"$foo\" = x\"\" ]; then\n\nThis is the safest way. It prevents problems when $foo begins with with a\n\"-\"\n\nI don't know about your first question, though.\n\n", "msg_date": "Mon, 1 Oct 2001 10:59:10 -0400", "msg_from": "\"Ken Hirsch\" <kenhirsch@myself.com>", "msg_from_op": false, "msg_subject": "Re: When scripting, which is better?" }, { "msg_contents": ">>>>> \"Justin\" == Justin Clift <justin@postgresql.org> writes:\n\n Justin> if [ x\"$foo\" = x\"\" ]; then\n\n Justin> or\n\n Justin> if [ \"$op\" = \"\" ]; then\n\n Justin> or\n\n Justin> if [ \"$foo\" ]; then\n\nI'm not the slightest bit a shell expert, but why not :-\n\nif [ -z \"$foo\" ]; then\n\nIs this POSIX/SUS2/whatever ?\n\nSincerely,\n\nAdrian Phillips\n\n-- \nYour mouse has moved.\nWindows NT must be restarted for the change to take effect.\nReboot now? [OK]\n", "msg_date": "01 Oct 2001 17:15:14 +0200", "msg_from": "Adrian Phillips <adrianp@powertech.no>", "msg_from_op": false, "msg_subject": "Re: When scripting, which is better?" 
}, { "msg_contents": "> Hi all,\n> \n> Reading through the script files again, there seems to be several\n> different methods of doing the same thing :\n> \n> i.e. if [ -x \"$self_path/postmaster\" ] && [ -x \"$self_path/psql\" ];\n> then\n\nThe above semicolon is useless. Actually, I have never see this. The\nnormal way is:\n\n\tif [ -x \"$self_path/postmaster\" -a -x \"$self_path/psql\" ]\n\n> \n> or\n> \n> if [[ -x \"$self_path/postmaster\" && -x \"$self_path/psql\" ]]; then\n\n\nI usually do:\n\n\tif [ ... ]\n\tthen\n\nPretty simple.\n\n> \n> \n> \n> \n> if [ x\"$foo\" = x\"\" ]; then\n> \n> or\n> \n> if [ \"$op\" = \"\" ]; then\n\nThis is done if you think $op may have a leading dash.\n\n> \n> or\n> \n> if [ \"$foo\" ]; then\n> \n\nThis tests whether \"$foo\" is not equal to \"\".\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Oct 2001 12:46:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: When scripting, which is better?" }, { "msg_contents": "Sorry guys,\n\nI didn't realise I actually sent this, it was part of an email I was\nputting together to achieve consistency in the scripts, but I thought I\ncancelled it when it got late in the morning.\n\nMy apologies.\n\nRegards and best wishes,\n\nJustin Clift\n\n\nBruce Momjian wrote:\n> \n> > Hi all,\n> >\n> > Reading through the script files again, there seems to be several\n> > different methods of doing the same thing :\n> >\n> > i.e. if [ -x \"$self_path/postmaster\" ] && [ -x \"$self_path/psql\" ];\n> > then\n> \n> The above semicolon is useless. Actually, I have never see this. 
The\n> normal way is:\n> \n> if [ -x \"$self_path/postmaster\" -a -x \"$self_path/psql\" ]\n> \n> >\n> > or\n> >\n> > if [[ -x \"$self_path/postmaster\" && -x \"$self_path/psql\" ]]; then\n> \n> I usually do:\n> \n> if [ ... ]\n> then\n> \n> Pretty simple.\n> \n> >\n> >\n> >\n> >\n> > if [ x\"$foo\" = x\"\" ]; then\n> >\n> > or\n> >\n> > if [ \"$op\" = \"\" ]; then\n> \n> This is done if you think $op may have a leading dash.\n> \n> >\n> > or\n> >\n> > if [ \"$foo\" ]; then\n> >\n> \n> This tests whether \"$foo\" is not equal to \"\".\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Tue, 02 Oct 2001 18:56:10 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: When scripting, which is better?" }, { "msg_contents": "Justin Clift writes:\n\n> i.e. if [ -x \"$self_path/postmaster\" ] && [ -x \"$self_path/psql\" ];\n> then\n>\n> or\n>\n> if [[ -x \"$self_path/postmaster\" && -x \"$self_path/psql\" ]]; then\n\nI don't think the second one is a valid expression. ;-)\n\nMaybe you were wondering about [[ ]] vs [] -- In Autoconf [] are the quote\ncharacters so you have to double-quote, sort of. It's better to use\n'test' in that case because m4 quoting can be tricky. I prefer test over\n[] in general because it is more consistent and slightly clearer.\n\n> if [ x\"$foo\" = x\"\" ]; then\n\nMaximum safety for the case where $foo starts with a dash. Yes, that\nmeans all comparisons should really be done that way. 
No, I don't think\nwe should do it in all cases if we know what $foo can contain, because\nthat makes code *really* unreadable.\n\n> or\n>\n> if [ \"$op\" = \"\" ]; then\n>\n> or\n>\n> if [ \"$foo\" ]; then\n\nThese two are equivalent but the second one is arguably less clear.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 2 Oct 2001 20:53:48 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: When scripting, which is better?" } ]
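The idioms debated in this thread can be put side by side in a small POSIX sh sketch. The leading-dash case is the one the `x"$foo"` prefix guards against; `-z` is the POSIX spelling of the empty-string test; and the doubled-bracket `[[ ]]` form discussed above is deliberately avoided here because it is not POSIX.

```shell
#!/bin/sh
# Sketch of the test idioms discussed above (POSIX sh assumed).

foo="-n"    # a value with a leading dash: the historically hazardous case

# The x-prefix form is safe everywhere, even for dash-leading values:
if [ x"$foo" = x"" ]; then
    echo "foo is empty"
else
    echo "foo is non-empty"
fi

# -z is the POSIX way to test for an empty string:
if [ -z "$foo" ]; then
    echo "-z: empty"
else
    echo "-z: non-empty"
fi

# Two -x tests can be joined with separate brackets and &&,
# or with -a inside one bracket; both are seen in the scripts:
if [ -x /bin/sh ] && [ -x /bin/ls ]; then
    echo "both executable"
fi
```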
[ { "msg_contents": "\n> > Would this seem a reasonable thing to do? Does anyone rely on COPY\n> > FROM causing an ERROR on duplicate input?\n> \n> Yes. This change will not be acceptable unless it's made an optional\n> (and not default, IMHO, though perhaps that's negotiable) feature of\n> COPY.\n> \n> The implementation might be rather messy too. I don't much \n> care for the\n> notion of a routine as low-level as bt_check_unique knowing that the\n> context is or is not COPY. We might have to do some restructuring.\n> \n> > Would:\n> > WITH ON_DUPLICATE = CONTINUE|TERMINATE (or similar)\n> > need to be added to the COPY command (I hope not)?\n> \n> It occurs to me that skip-the-insert might be a useful option for\n> INSERTs that detect a unique-key conflict, not only for COPY. (Cf.\n> the regular discussions we see on whether to do INSERT first or\n> UPDATE first when the key might already exist.) Maybe a SET variable\n> that applies to all forms of insertion would be appropriate.\n\nImho yes, but:\nI thought that the problem was, that you cannot simply skip the \ninsert, because at that time the tuple (pointer) might have already \nbeen successfully inserted into an other index/heap, and thus this was \nonly sanely possible with savepoints/undo.\n\nAn idea would probably be to at once mark the new tuple dead, and\nproceed\nnormally?\n\nAndreas\n", "msg_date": "Mon, 1 Oct 2001 16:06:39 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> I thought that the problem was, that you cannot simply skip the \n> insert, because at that time the tuple (pointer) might have already \n> been successfully inserted into an other index/heap, and thus this was \n> only sanely possible with savepoints/undo.\n\nHmm, good point. 
If we don't error out the transaction then that tuple\nwould become good when we commit. This is nastier than it appears.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Oct 2001 10:09:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " }, { "msg_contents": "I said:\n> \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n>> I thought that the problem was, that you cannot simply skip the \n>> insert, because at that time the tuple (pointer) might have already \n>> been successfully inserted into an other index/heap, and thus this was \n>> only sanely possible with savepoints/undo.\n\n> Hmm, good point. If we don't error out the transaction then that tuple\n> would become good when we commit. This is nastier than it appears.\n\nOn further thought, I think it *would* be possible to do this without\nsavepoints, but it'd take some restructuring of the index AM API.\nWhat'd have to happen is that a unique index could not raise an elog\nERROR when it detects a uniqueness conflict. Instead, it'd return a\nuniqueness-conflict indication back to its caller. This would have\nto propagate up to the level of the executor. At that point we'd make\nthe choice of whether to raise an error or not. If not, we'd need to\nmodify the just-created tuple to mark it deleted by the current\ntransaction. We can't remove it, since that would leave any\nalready-created entries in other indexes pointing to nothing. But\nmarking it deleted by the same xact and command ID that inserted it\nwould leave things in a valid state until VACUUM comes along to do the\njanitorial work.\n\nTo support backoff in the case of a conflict during UPDATE, it'd also be\nnecessary to un-mark the prior version of the tuple, which we'd already\nmarked as deleted. This might create some concurrency issues in case\nthere are other updaters waiting to see if we commit or not. 
(The same\nissue would arise for savepoint-based undo, though.) We might want to\npunt on that part for now.\n\nThe effects don't stop propagating there, either. The decision not to\ninsert the tuple must be reported up still further, so that the executor\nknows not to run any AFTER INSERT/UPDATE triggers and knows not to count\nthe tuple as inserted/updated for the command completion report.\n\nIn short, quite a lot of code to touch to make this happen ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Oct 2001 13:21:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane\n>\n> I said:\n> > \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> >> I thought that the problem was, that you cannot simply skip the\n> >> insert, because at that time the tuple (pointer) might have already\n> >> been successfully inserted into an other index/heap, and thus this was\n> >> only sanely possible with savepoints/undo.\n>\n> > Hmm, good point. If we don't error out the transaction then that tuple\n> > would become good when we commit. This is nastier than it appears.\n>\n> On further thought, I think it *would* be possible to do this without\n> savepoints,\n\nIt's a very well known issue that the partial rolloback functionality is\na basis of this kind of problem and it's the reason I've mentioned that\nUNDO functionality has the highest priority. IMHO we shouldn't\nimplement a partial rolloback functionality specific to an individual\nproblem.\n\nregards,\nHiroshi Inoue\n\n", "msg_date": "Tue, 2 Oct 2001 05:58:26 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " } ]
[ { "msg_contents": "> IMHO, you should copy into a temporary table and the do a select \n> distinct from it into the table that you want.\n\nWhich would be way too slow for normal operation :-(\nWe are talking about a \"fast as possible\" data load from a flat file\nthat may have duplicates (or even data errors, but that \nis another issue).\n\nAndreas\n", "msg_date": "Mon, 1 Oct 2001 16:39:36 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates?" }, { "msg_contents": "Zeugswetter Andreas SB SD wrote:\n\n>>IMHO, you should copy into a temporary table and the do a select \n>>distinct from it into the table that you want.\n>>\n>\n>Which would be way too slow for normal operation :-(\n>We are talking about a \"fast as possible\" data load from a flat file\n>that may have duplicates (or even data errors, but that \n>is another issue).\n>\n>Andreas\n>\nThen the IGNORE_DUPLICATE would definitely be the way to go, if speed is \nthe question...\n", "msg_date": "Mon, 01 Oct 2001 09:42:56 -0500", "msg_from": "Thomas Swan <tswan@olemiss.edu>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates?" }, { "msg_contents": "Okay,\n\nIf I'm going to modify 'COPY INTO' to include 'ignore duplicates'\nfunctionality it looks like I'll have to add to the COPY syntax. The\nmost obvious way is to add:\n\n WITH IGNORE DUPLICATES\n\nto the syntax. 
I'm going to need my hand held a bit for this! The\ngrammar for COPY will need updating in gram.y and specifically the\n'WITH' keyword will have 'IGNORE DUPLICATES' as well as 'NULL AS'.\n\nAny pointers?\n\nThanks, Lee.\n", "msg_date": "Mon, 1 Oct 2001 16:31:06 +0100 (BST)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates?" }, { "msg_contents": "Lee Kindness writes:\n > If I'm going to modify 'COPY INTO' to include 'ignore duplicates'\n > functionality it looks like I'll have to add to the COPY syntax. The\n > most obvious way is to add:\n > WITH IGNORE DUPLICATES\n\nOr does it make more sense to add a 'COPY_IGNORE_DUPLICATES' SET\nparameter? \n\nLee.\n", "msg_date": "Mon, 1 Oct 2001 16:50:41 +0100 (BST)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates?" } ]
[ { "msg_contents": "Please apply attached patch to current CVS\n\nChanges:\n\n October 1, 2001\n\n 1. Implemented binary search in array\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83", "msg_date": "Mon, 1 Oct 2001 19:25:30 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "patch contrib/intarray to current CVS" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> Please apply attached patch to current CVS\n> \n> Changes:\n> \n> October 1, 2001\n> \n> 1. Implemented binary search in array\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Oct 2001 14:50:33 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: patch contrib/intarray to current CVS" }, { "msg_contents": "\nPatch applied. 
Thanks.\n\n> Please apply attached patch to current CVS\n> \n> Changes:\n> \n> October 1, 2001\n> \n> 1. Implemented binary search in array\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 4 Oct 2001 11:41:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: patch contrib/intarray to current CVS" } ]
[ { "msg_contents": "... cs(3)\n> > Seems same as AIX 4?\n\nYes, identical.\n\n> \n> Hmm, does anyone want to produce new s_lock code for AIX that uses\n> compare_and_swap? But I'm not sure that's the problem here.\n\nI did once, but performance was worse, so I discarded it.\nSince AIX 5 still has it, I see no reason to change it.\n\nStill, testing it on AIX 5 might reveal that compare_and_swap \nis now faster, Tatsuo ?\n\nAndreas\n", "msg_date": "Mon, 1 Oct 2001 18:31:29 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Problem on AIX with current " } ]
[ { "msg_contents": "I can't do `cvs update -d -P'. I get the following error:\n\ncvs server: failed to create lock directory for `/projects/cvsroot/pgsql/contrib/pgcrypto/expected' (/projects/cvsroot/pgsql/contrib/pgcrypto/expected/#cvs.lock): Permission denied\ncvs server: failed to obtain dir lock in repository `/projects/cvsroot/pgsql/contrib/pgcrypto/expected'\ncvs [server aborted]: read lock failed - giving up\n\nIan\n", "msg_date": "01 Oct 2001 12:25:08 -0700", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": true, "msg_subject": "CVS problem" } ]
[ { "msg_contents": "> The effects don't stop propagating there, either. The decision\n> not to insert the tuple must be reported up still further, so\n> that the executor knows not to run any AFTER INSERT/UPDATE\n> triggers and knows not to count the tuple as inserted/updated\n> for the command completion report.\n\nBut what about BEFORE insert/update triggers which could insert\nrecords too?\n\nVadim\n", "msg_date": "Mon, 1 Oct 2001 14:25:41 -0700 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n>> The effects don't stop propagating there, either. The decision\n>> not to insert the tuple must be reported up still further, so\n>> that the executor knows not to run any AFTER INSERT/UPDATE\n>> triggers and knows not to count the tuple as inserted/updated\n>> for the command completion report.\n\n> But what about BEFORE insert/update triggers which could insert\n> records too?\n\nWell, what about them? It's already possible for a later BEFORE trigger\nto cause the actual insertion to be suppressed, so I don't see any\ndifference from what we have now. If a BEFORE trigger takes actions\non the assumption that the insert will happen, it's busted already.\n\n\nMind you, I'm not actually advocating that we do any of this ;-).\nI was just sketching a possible implementation approach in case someone\nwants to try it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Oct 2001 17:38:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " } ]
[ { "msg_contents": "> > But what about BEFORE insert/update triggers which could\n> > insert records too?\n> \n> Well, what about them? It's already possible for a later\n> BEFORE trigger to cause the actual insertion to be suppressed,\n> so I don't see any difference from what we have now.\n> If a BEFORE trigger takes actions on the assumption that the\n> insert will happen, it's busted already.\n\nThis problem could be solved now by implementing *single* trigger.\nIn future, we could give users ability to specify trigger\nexecution order.\nBut with proposed feature ...\n\n> Mind you, I'm not actually advocating that we do any of this ;-).\n\nI understand -:)\n\n> I was just sketching a possible implementation approach in\n> case someone wants to try it.\n\nAnd I'm just sketching possible problems -:)\n\nVadim\n", "msg_date": "Mon, 1 Oct 2001 14:56:01 -0700 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " } ]
[ { "msg_contents": "Hello,\n\nBackground:\n\nI am pursuing a MS degree at Christopher Newport University. I'm currently \nlooking for a subject for my thesis plus I've also got to do a simulated \n(scaled down) thesis proposal for class. Since my degree program is applied, \nit doesn't necessarily have to be a pure research project. My goal is to do \nsomething that is useful, beneficial, and can be put to work.\n\nCurrently, I'm in the middle of an aritficial intelligence class which covered \ngenetic algorithms a couple of weeks ago. I knew that Postgresql used a geneic \nalgo. for large query optimizations and I thought that this might be a good \narea to make a contribution.\n\nNo matter what I finally do decide on for my thesis, I've still got to \nwrite the proposal and give a presentation for the artificial intelligence class.\n\nQuestions that I can think of for now:\n\nWho works on the genetic query optimizer (geqo)?\n\tFrom what I could find from the sources and the archives was that \t\tMartin S. 
Utesch \nmaintains it or did.\n\nWould my input/help be wanted?\n\tMay sound silly, since anyone could post to the list and contribute,\n\tbut I just want to be sure.\nDoes it need to be improved?\nWhat areas need to be improved?\nAny special feature or request?\nAny other ideas or recommendations?\n\nThe only recent discussion I could find was the following link which indicated \nthat there might be some room for improvement.\nhttp://archives.postgresql.org/pgsql-hackers/2000-12/msg01005.php\n\nAny feedback/information that you can provide is appreciated.\n\nThank you\nJames Hubbard\n\nP.S.\nHere is some of the stuff I've come across while looking for information:\n\nA Genetic Algorithm for Database Query Optimization\nhttp://citeseer.nj.nec.com/bennett91genetic.html\n\nGenetic Programming in Database Query Optimization\nhttp://citeseer.nj.nec.com/stillger96genetic.html\n\nGenetic Algorithms for Optimal Logical Database Design\nhttp://citeseer.nj.nec.com/vanbommel94genetic.html\n\nOptimization Of Dynamic Query Evaluation Plans\nhttp://citeseer.nj.nec.com/cole94optimization.html\n\nPhysical Database Design Using a Genetic Algorithm Approach\nhttp://citeseer.nj.nec.com/285413.html\n\n", "msg_date": "Tue, 02 Oct 2001 02:06:30 -0400", "msg_from": "\"James L. Hubbard III\" <jhubbard@mcs.uvawise.edu>", "msg_from_op": true, "msg_subject": "Genetic Query Optimizer" }, { "msg_contents": "\"James L. 
Hubbard III\" <jhubbard@mcs.uvawise.edu> writes:\n> Who works on the genetic query optimizer (geqo)?\n\nAFAIK, no one has touched the genetic algorithm itself in years --- not\nsince the original contributor, who has not been heard from in awhile.\nThe only changes to that code have been to clean up its interfaces to\nthe rest of the system (eg, make it use the new GUC mechanism to accept\nparameters).\n\nIf you want to work on it, go right ahead!\n\n> Does it need to be improved?\n\nFinding better plans in less time is always better.\n\nThere aren't that many people using GEQO at the moment, I suspect,\njust because there aren't that many people doing umpteen-way joins.\nBut I think it would be cool if it became a useful alternative to\nthe standard exhaustive optimizer at a lower crossover point.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 Oct 2001 10:06:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Genetic Query Optimizer " } ]
[ { "msg_contents": "Hi,\n\nIt's come to my attention that users of pgAdmin (the original, not pgAdmin\nII) will not be able to dump/reload their 7.1.x databases into 7.2 without\nan additional step in the upgrade procedure.\n\nThis is because pgAdmin creates a number of views on the server which\ninclude the oid column from tables such as pg_attribute - obviously\nattempting to reload these view will cause an error as there is no longer an\noid column in pg_attribute in 7.2.\n\npgAdmin II is unaffected by this problem.\n\nI will obviously try to assist anyone who suffers from this problem, but if\nthe following text can be added to the INSTALL file and anywhere else that\nmay be appropriate it might help ease the pain!\n\n{at the end of #2 under 'If you are upgrading)\n\n pgAdmin 7.x users will need to drop server side objects before dumping \n their database, otherwise the reload will fail. To do this, select\n 'Drop all pgAdmin Server Side Objects' from the 'Advanced' menu.\n pgAdmin does not support PostgreSQL 7.2, instead, please try pgAdmin II\n from http://pgadmin.postgresql.org/.\n\n\nRegards, Dave.\n\n-- \nDave Page (dpage@postgresql.org)\npgAdmin Project Leader\nhttp://pgadmin.postgresql.org/ \n", "msg_date": "Tue, 2 Oct 2001 08:45:19 +0100 ", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "pgAdmin users upgrading to PostgreSQL 7.2" }, { "msg_contents": "There seems to be a \"round trip\" problem when editing function definitions\nwithin the definition pane of the function window.\n\nIf the function text uses single quotes (') then these must be doubled up\n(because of the single quotes enclosing the entire function body). 
However\nthe doubling up is removed in a save/display loop.\n\nThe result is that _every time_ you edit the function you have to add an\nextra quote to each occurence of the quote character.\n\n--\nThomas Sandford | thomas@paradisegreen.co.uk\n\n", "msg_date": "Wed, 27 Mar 2002 15:21:08 -0000", "msg_from": "\"Thomas Sandford\" <thomas@paradisegreen.co.uk>", "msg_from_op": false, "msg_subject": "Quotes and functions" }, { "msg_contents": "Given a database created from the following SQL:\n\nCREATE TABLE \"testtable\" (\n \"id\" integer NOT NULL,\n \"mytext\" character varying(32),\n \"mytime\" timestamp with time zone,\n Constraint \"testtable_pkey\" Primary Key (\"id\")\n);\n\nCOPY \"testtable\" FROM stdin;\n1 \\N 2002-03-27 20:15:52.000000+00\n2 \\N 2002-03-27 20:16:05.187532+00\n\\.\n\nYou will find that whilst the 1st record can be edited using pgadmin, any\nattempt to edit the 2nd results in the message \"Could not locate the record\nfor updating in the database!\" when you attempt to save your changes.\n\nPresumably this is something to do with the non-integer seconds part of the\ntimestamp in the 2nd record. Unfortunately the timestamp in this record is a\n\"real\" timestamp created using the now() function, ie typical of real-world\ndata...\n\n--\nThomas Sandford | thomas@paradisegreen.co.uk\n\n", "msg_date": "Wed, 27 Mar 2002 20:31:51 -0000", "msg_from": "\"Thomas Sandford\" <thomas@paradisegreen.co.uk>", "msg_from_op": false, "msg_subject": "Can't edit tables with timestamps" }, { "msg_contents": "I am getting the error message\n\n\"Error in pgAdmin II:frmSQLOutput.LoadGrid: -2147217887 - Multiple-step OLE\nDB operation generated errors. 
Check each OLE DB status value, if available.\nNo work was done.\"\n\nwhen trying to do a basic query (select * from staff, or click the button\nthat has the same effect) on a table with the following definition:\n\nCREATE TABLE \"staff\" (\n \"sid\" int4 DEFAULT nextval('\"staff_sid_seq\"'::text) NOT NULL,\n \"lastname\" varchar(128),\n \"firstname\" varchar(128),\n \"preferred_contact_method\" int4,\n \"initials\" varchar(32),\n \"title\" varchar(64),\n \"dob\" date,\n \"emergency_contact_name\" varchar(256),\n \"e_c_relationship\" varchar(256),\n \"e_c_address_1\" varchar(256),\n \"e_c_address_2\" varchar(256),\n \"e_c_city\" varchar(256),\n \"e_c_phone_day\" varchar(256),\n \"e_c_phone_eve\" varchar(256),\n \"e_c_phone_mob\" varchar(256),\n \"e_c_email\" varchar(256),\n \"sex\" varchar(6),\n \"food_hygene_cert\" bool,\n \"food_hygene_cert_expiry_date\" date,\n \"first_aid_cert\" bool,\n \"first_aid_cert_expiry_date\" date,\n \"pgp_member\" bool,\n \"pgp_member_from\" date,\n \"smoker\" bool,\n \"medical_notes\" text,\n \"general_notes\" text,\n \"tech_experience\" int4,\n \"house_experience\" int4,\n \"cafe_experience\" int4,\n \"inactive\" bool,\n \"e_c_county\" varchar(128),\n \"e_c_post_code\" varchar(64),\n \"e_c_country\" varchar(128),\n CONSTRAINT \"staff_pkey\" PRIMARY KEY (\"sid\")\n) WITH OIDS;\nREVOKE ALL ON \"staff\" FROM PUBLIC;\nGRANT ALL ON \"staff\" TO \"user1\";\nGRANT ALL ON \"staff\" TO \"user2\";\n\nHere is the pgadmin (1.20) log at logging level \"debug\".\n\n21/06/2002 22:42:48 - Counting Records...\n21/06/2002 22:42:48 - SQL (PGP_Staff): SELECT count(*) AS count FROM \"staff\"\n21/06/2002 22:42:48 - Done - 0.05 Secs.\n21/06/2002 22:42:48 - Executing SQL Query...\n21/06/2002 22:42:49 - SQL (PGP_Staff): SELECT * FROM \"staff\"\n21/06/2002 22:42:49 - Loading Data...\n21/06/2002 22:42:49 - Done - 0.5 Secs.\n21/06/2002 22:42:49 - Error in pgAdmin\nII:frmSQLOutput.LoadGrid: -2147217887 - Multiple-step OLE DB operation\ngenerated errors. 
Check each OLE DB status value, if available. No work was\ndone.\n\nThere are 115 records in the \"staff\" table. After the error is reported a\ndisplay grid comes up, with 70-odd records displayed. I was able to view\nthis table before I had as many records. Running the query within psql on\nthe server causes no problems.\n\nAny suggestions?\n--\nThomas Sandford\n\n", "msg_date": "Fri, 21 Jun 2002 22:53:25 +0100", "msg_from": "\"Thomas Sandford\" <thomas@paradisegreen.co.uk>", "msg_from_op": false, "msg_subject": "\"Multiple-step OLE DB operation generated errors\" in pgAdmin\n\tII:frmSQLOutput.LoadGrid" } ]
[ { "msg_contents": "HI,\n\nI've seen lots of talk about anoncvs not working, but I \ncan't even find out where it is ;(\n\nThe old address gives\n\n[hannu@taru pgsql]$ ../update.cvs \ncvs update: authorization failed: server postgresql.org rejected \naccess to /home/projects/pgsql/cvsroot for user anoncvs\n\n\nthe link from developer.postgresql.org points to \nhttp://www.ca.postgresql.org/cgi/cvsweb.cgi/pgsql\n\nwhich gives \n\nNot Found\nThe requested URL /cgi/cvsweb.cgi/pgsql was not found on this server.\nApache/1.3.14 Server at www.ca.postgresql.org Port 80\n\n\nWhen I do as instructed in\nhttp://developer.postgresql.org/TODO/docs/cvs.html\nI get:\n\n[hannu@taru cvs_new]$ cvs -d\n:pserver:anoncvs@postgresql.org:/home/projects/pgsql/cvsroot login\n(Logging in to anoncvs@postgresql.org)\nCVS password: \ncvs login: authorization failed: server postgresql.org rejected access\nto /home/projects/pgsql/cvsroot for user anoncvs\n\n\nWhat should I do ?\n\n---------------------------\nHannu\n", "msg_date": "Tue, 02 Oct 2001 09:49:03 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "But _where_ is the anoncvs server ?" }, { "msg_contents": "On Tue, Oct 02, 2001 at 09:49:03AM +0200, Hannu Krosing wrote:\n> HI,\n> \n> I've seen lots of talk about anoncvs not working, but I \n> can't even find out where it is ;(\n\n:pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot\n\n-- \nmarko\n\n", "msg_date": "Tue, 2 Oct 2001 10:11:53 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": false, "msg_subject": "Re: But _where_ is the anoncvs server ?" 
}, { "msg_contents": "Marko Kreen wrote:\n> \n> On Tue, Oct 02, 2001 at 09:49:03AM +0200, Hannu Krosing wrote:\n> > HI,\n> >\n> > I've seen lots of talk about anoncvs not working, but I\n> > can't even find out where it is ;(\n> \n> :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot\n\nI got in now, but the general problems have now stuck me too:\n\ncvs server: failed to create lock directory for\n`/projects/cvsroot/pgsql/contrib/pgcrypto/expected'\n(/projects/cvsroot/pgsql/contrib/pgcrypto/expected/#cvs.lock):\nPermission denied\ncvs server: failed to obtain dir lock in repository\n`/projects/cvsroot/pgsql/contrib/pgcrypto/expected'\ncvs [server aborted]: read lock failed - giving up\n\nProbably all newly created directories become unsuitable for anoncvs, so \njust fixin perms does not hel - a more general solution is needed -\nperhaps \nsome sticky bits on directories or configuration changes on cvs server\n\n----------------\nHannu\n", "msg_date": "Tue, 02 Oct 2001 10:16:56 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: But _where_ is the anoncvs server ?" }, { "msg_contents": "\nfixed, permanently, or should be ... 
let me know ..\n\nI've also just changed the 'pull down' to be every hour at *:59 ...\n\n\nOn Tue, 2 Oct 2001, Hannu Krosing wrote:\n\n> Marko Kreen wrote:\n> >\n> > On Tue, Oct 02, 2001 at 09:49:03AM +0200, Hannu Krosing wrote:\n> > > HI,\n> > >\n> > > I've seen lots of talk about anoncvs not working, but I\n> > > can't even find out where it is ;(\n> >\n> > :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot\n>\n> I got in now, but the general problems have now stuck me too:\n>\n> cvs server: failed to create lock directory for\n> `/projects/cvsroot/pgsql/contrib/pgcrypto/expected'\n> (/projects/cvsroot/pgsql/contrib/pgcrypto/expected/#cvs.lock):\n> Permission denied\n> cvs server: failed to obtain dir lock in repository\n> `/projects/cvsroot/pgsql/contrib/pgcrypto/expected'\n> cvs [server aborted]: read lock failed - giving up\n>\n> Probably all newly created directories become unsuitable for anoncvs, so\n> just fixin perms does not hel - a more general solution is needed -\n> perhaps\n> some sticky bits on directories or configuration changes on cvs server\n>\n> ----------------\n> Hannu\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n", "msg_date": "Tue, 2 Oct 2001 08:26:24 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: But _where_ is the anoncvs server ?" }, { "msg_contents": "Marc,\n\nfix, please, permanently problem with permissions in anonymous CVS\ncvs server: Updating pgsql/contrib/pgcrypto/expected\ncvs server: failed to create lock directory for /projects/cvsroot/pgsql/contrib/pgcrypto/expected' (/projects/cvsroot/pgsql/contrib/pgcrypto/expected/#cvs.lock): Permission denied\ncvs server: failed to obtain dir lock in repository /projects/cvsroot/pgsql/contrib/pgcrypto/expected'\ncvs [server aborted]: read lock failed - giving up\n\n\tOleg\nOn Tue, 2 Oct 2001, Marc G. 
Fournier wrote:\n\n>\n> fixed, permanently, or should be ... let me know ..\n>\n> I've also just changed the 'pull down' to be every hour at *:59 ...\n>\n>\n> On Tue, 2 Oct 2001, Hannu Krosing wrote:\n>\n> > Marko Kreen wrote:\n> > >\n> > > On Tue, Oct 02, 2001 at 09:49:03AM +0200, Hannu Krosing wrote:\n> > > > HI,\n> > > >\n> > > > I've seen lots of talk about anoncvs not working, but I\n> > > > can't even find out where it is ;(\n> > >\n> > > :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot\n> >\n> > I got in now, but the general problems have now stuck me too:\n> >\n> > cvs server: failed to create lock directory for\n> > `/projects/cvsroot/pgsql/contrib/pgcrypto/expected'\n> > (/projects/cvsroot/pgsql/contrib/pgcrypto/expected/#cvs.lock):\n> > Permission denied\n> > cvs server: failed to obtain dir lock in repository\n> > `/projects/cvsroot/pgsql/contrib/pgcrypto/expected'\n> > cvs [server aborted]: read lock failed - giving up\n> >\n> > Probably all newly created directories become unsuitable for anoncvs, so\n> > just fixin perms does not hel - a more general solution is needed -\n> > perhaps\n> > some sticky bits on directories or configuration changes on cvs server\n> >\n> > ----------------\n> > Hannu\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 2 Oct 2001 15:42:12 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: 
But _where_ is the anoncvs server ?" }, { "msg_contents": "\nshould have auto-fixed itself about 10 minutes ago ...\n\nOn Tue, 2 Oct 2001, Oleg Bartunov wrote:\n\n> Marc,\n>\n> fix, please, permanently problem with permissions in anonymous CVS\n> cvs server: Updating pgsql/contrib/pgcrypto/expected\n> cvs server: failed to create lock directory for /projects/cvsroot/pgsql/contrib/pgcrypto/expected' (/projects/cvsroot/pgsql/contrib/pgcrypto/expected/#cvs.lock): Permission denied\n> cvs server: failed to obtain dir lock in repository /projects/cvsroot/pgsql/contrib/pgcrypto/expected'\n> cvs [server aborted]: read lock failed - giving up\n>\n> \tOleg\n> On Tue, 2 Oct 2001, Marc G. Fournier wrote:\n>\n> >\n> > fixed, permanently, or should be ... let me know ..\n> >\n> > I've also just changed the 'pull down' to be every hour at *:59 ...\n> >\n> >\n> > On Tue, 2 Oct 2001, Hannu Krosing wrote:\n> >\n> > > Marko Kreen wrote:\n> > > >\n> > > > On Tue, Oct 02, 2001 at 09:49:03AM +0200, Hannu Krosing wrote:\n> > > > > HI,\n> > > > >\n> > > > > I've seen lots of talk about anoncvs not working, but I\n> > > > > can't even find out where it is ;(\n> > > >\n> > > > :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot\n> > >\n> > > I got in now, but the general problems have now stuck me too:\n> > >\n> > > cvs server: failed to create lock directory for\n> > > `/projects/cvsroot/pgsql/contrib/pgcrypto/expected'\n> > > (/projects/cvsroot/pgsql/contrib/pgcrypto/expected/#cvs.lock):\n> > > Permission denied\n> > > cvs server: failed to obtain dir lock in repository\n> > > `/projects/cvsroot/pgsql/contrib/pgcrypto/expected'\n> > > cvs [server aborted]: read lock failed - giving up\n> > >\n> > > Probably all newly created directories become unsuitable for anoncvs, so\n> > > just fixin perms does not hel - a more general solution is needed -\n> > > perhaps\n> > > some sticky bits on directories or configuration changes on cvs server\n> > >\n> > > ----------------\n> > > Hannu\n> 
> >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 6: Have you searched our list archives?\n> > >\n> > > http://archives.postgresql.org\n> > >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n", "msg_date": "Tue, 2 Oct 2001 09:05:31 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: But _where_ is the anoncvs server ?" }, { "msg_contents": "Thanks, It works now\nOn Tue, 2 Oct 2001, Marc G. Fournier wrote:\n\n>\n> should have auto-fixed itself about 10 minutes ago ...\n>\n> On Tue, 2 Oct 2001, Oleg Bartunov wrote:\n>\n> > Marc,\n> >\n> > fix, please, permanently problem with permissions in anonymous CVS\n> > cvs server: Updating pgsql/contrib/pgcrypto/expected\n> > cvs server: failed to create lock directory for /projects/cvsroot/pgsql/contrib/pgcrypto/expected' (/projects/cvsroot/pgsql/contrib/pgcrypto/expected/#cvs.lock): Permission denied\n> > cvs server: failed to obtain dir lock in repository /projects/cvsroot/pgsql/contrib/pgcrypto/expected'\n> > cvs [server aborted]: read lock failed - giving up\n> >\n> > \tOleg\n> > On Tue, 2 Oct 2001, Marc G. Fournier wrote:\n> >\n> > >\n> > > fixed, permanently, or should be ... 
let me know ..\n> > >\n> > > I've also just changed the 'pull down' to be every hour at *:59 ...\n> > >\n> > >\n> > > On Tue, 2 Oct 2001, Hannu Krosing wrote:\n> > >\n> > > > Marko Kreen wrote:\n> > > > >\n> > > > > On Tue, Oct 02, 2001 at 09:49:03AM +0200, Hannu Krosing wrote:\n> > > > > > HI,\n> > > > > >\n> > > > > > I've seen lots of talk about anoncvs not working, but I\n> > > > > > can't even find out where it is ;(\n> > > > >\n> > > > > :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot\n> > > >\n> > > > I got in now, but the general problems have now stuck me too:\n> > > >\n> > > > cvs server: failed to create lock directory for\n> > > > `/projects/cvsroot/pgsql/contrib/pgcrypto/expected'\n> > > > (/projects/cvsroot/pgsql/contrib/pgcrypto/expected/#cvs.lock):\n> > > > Permission denied\n> > > > cvs server: failed to obtain dir lock in repository\n> > > > `/projects/cvsroot/pgsql/contrib/pgcrypto/expected'\n> > > > cvs [server aborted]: read lock failed - giving up\n> > > >\n> > > > Probably all newly created directories become unsuitable for anoncvs, so\n> > > > just fixin perms does not hel - a more general solution is needed -\n> > > > perhaps\n> > > > some sticky bits on directories or configuration changes on cvs server\n> > > >\n> > > > ----------------\n> > > > Hannu\n> > > >\n> > > > ---------------------------(end of broadcast)---------------------------\n> > > > TIP 6: Have you searched our list archives?\n> > > >\n> > > > http://archives.postgresql.org\n> > > >\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 4: Don't 'kill -9' the postmaster\n> > >\n> >\n> > \tRegards,\n> > \t\tOleg\n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, 
+007(095)939-23-83\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 2 Oct 2001 16:21:01 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: But _where_ is the anoncvs server ?" } ]
[ { "msg_contents": "\ncvsweb is now working! It's available from the developer's site:\n\n http://developer.postgresql.org/\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 2 Oct 2001 09:31:25 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "cvsweb" }, { "msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> cvsweb is now working! It's available from the developer's site:\n> http://developer.postgresql.org/\n\nThankyouthankyouthankyouthankyou ... I hadn't realized how much I'd come\nto depend on that service, until I didn't have it for awhile ... manual\nuse of \"cvs log\" and \"cvs diff\" is a poor substitute.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 Oct 2001 09:47:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: cvsweb " }, { "msg_contents": " .------[ Tom Lane wrote (2001/10/02 at 09:47:41) ]------\n | \n | Vince Vielhaber <vev@michvhf.com> writes:\n | > cvsweb is now working! It's available from the developer's site:\n | > http://developer.postgresql.org/\n | \n | Thankyouthankyouthankyouthankyou ... I hadn't realized how much I'd come\n | to depend on that service, until I didn't have it for awhile ... manual\n | use of \"cvs log\" and \"cvs diff\" is a poor substitute.\n | \n | \t\t\tregards, tom lane\n |\n `-------------------------------------------------\n\n Oops my original try at this only went to Tom ( sorry for the\n duplicate Tom ). 
\n\n It appears the cvs log functionality is working just fine, however\n you can't actually view the source of the file by clicking on the\n revision number. Clicking on: \n\n http://developer.postgresql.org/cvsweb.cgi/src/bin/psql/command.c?rev=1.58&content-type=text/x-cvsweb-markup\n\n To view src/bin/psql/command.c gives me the following error: \n\n Error: Unexpected output from cvs co: cvs [checkout aborted]:\n /cvsroot/pgsql/CVSROOT: No such file or directory\n\n Check whether the directory /cvsroot/pgsql/CVSROOT exists and the\n script has write-access to the CVSROOT/history file if it exists.\n The script needs to place lock files in the directory the file is in\n as well.\n\n ---------------------------------\n Frank Wiles <frank@wiles.org>\n http://frank.wiles.org\n ---------------------------------\n\n", "msg_date": "Tue, 2 Oct 2001 09:06:07 -0500", "msg_from": "Frank Wiles <frank@wiles.org>", "msg_from_op": false, "msg_subject": "Re: cvsweb" }, { "msg_contents": "On Tue, 2 Oct 2001, Frank Wiles wrote:\n\n> It appears the cvs log functionality is working just fine, however\n> you can't actually view the source of the file by clicking on the\n> revision number. Clicking on:\n\noops! 
fixed.\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 2 Oct 2001 10:14:50 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "Re: cvsweb" }, { "msg_contents": "Frank Wiles <frank@wiles.org> writes:\n> It appears the cvs log functionality is working just fine, however\n> you can't actually view the source of the file by clicking on the\n> revision number. Clicking on: \n> http://developer.postgresql.org/cvsweb.cgi/src/bin/psql/command.c?rev=1.58&content-type=text/x-cvsweb-markup\n\nSeems to work for me at the moment. Note that Vince appears to have\njust corrected the path: you need a /pgsql/ in there. Does \n\nhttp://developer.postgresql.org/cvsweb.cgi/pgsql/src/bin/psql/command.c?rev=1.58&content-type=text/x-cvsweb-markup\n\nwork for you?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 Oct 2001 10:15:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: cvsweb " }, { "msg_contents": "On Tue, 2 Oct 2001, Tom Lane wrote:\n\n> Frank Wiles <frank@wiles.org> writes:\n> > It appears the cvs log functionality is working just fine, however\n> > you can't actually view the source of the file by clicking on the\n> > revision number. Clicking on:\n> > http://developer.postgresql.org/cvsweb.cgi/src/bin/psql/command.c?rev=1.58&content-type=text/x-cvsweb-markup\n>\n> Seems to work for me at the moment. Note that Vince appears to have\n> just corrected the path: you need a /pgsql/ in there. 
Does\n\nI just updated the line too so it goes straight to pgsql\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 2 Oct 2001 10:22:24 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "Re: cvsweb " } ]
[ { "msg_contents": "Christof just told me that I overwrote Tom's patch fixing the setlocale\nproblem in ecpg. I did not notice that and for some strange reason did not\nget Tom's mail either. Anyway, the CVS problem was that I use cvsup to keep\nan up-to-date source tree. I have set up my system so that it updates\neverytime I go on-line. I also automatically update my pgsql-ecpg checkout.\n\nThis has a long history as I started using cvsup before I even was a\ndeveloper and never felt the need to change that.\n\nWhen I change some stuff I usually do this in the complete source tree as I\nneed it to even compile ecpg. Then I copy over the files I changed to the\ncvs checkout and commit them. \n\nThis works well since it does not matter if the checkout is done via cvs or\ncvsup. But now cvsup on my system stopped working because of that 10mil\nseconds bug but I failed to notice. Hey, I don't check the logfiles\neverytime. So when I made my last commit, I simply used these old files as a\nbase and removed Tom's patch. I guess I better switch to cvs. :-)\n\nMaking this long story short, Tom, I can understand if you feel angry about\nthis (and I'm sure I would) and I will try my best to never let it happen\nagain. Please take my apologies.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Tue, 2 Oct 2001 16:19:50 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "My last ECPG commit" }, { "msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n> But now cvsup on my system stopped working because of that 10mil\n> seconds bug but I failed to notice. Hey, I don't check the logfiles\n> everytime. So when I made my last commit, I simply used these old files as a\n> base and removed Tom's patch. I guess I better switch to cvs. :-)\n\nOr update to the fixed version of cvsup, anyway. 
Thomas Lockhart likes\ncvsup too, so you can be sure it will continue to be a workable means\nof working with our CVS tree.\n\n> Making this long story short, Tom, I can understand if you feel angry about\n> this (and I'm sure I would) and I will try my best to never let it happen\n> again. Please take my apologies.\n\nOf course.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 Oct 2001 10:33:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: My last ECPG commit " }, { "msg_contents": "On Tue, Oct 02, 2001 at 10:33:55AM -0400, Tom Lane wrote:\n> Or update to the fixed version of cvsup, anyway. Thomas Lockhart likes\n> cvsup too, so you can be sure it will continue to be a workable means\n> of working with our CVS tree.\n\nYes, that's a possibility. But then the actual Debian package has not been\nupdated. And why mess with two mechanisms?\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Tue, 2 Oct 2001 17:08:26 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: My last ECPG commit" }, { "msg_contents": "...\n> Yes, that's a possibility. But then the actual Debian package has not been\n> updated. And why mess with two mechanisms?\n\nIt may be that the static tarballs for RedHat will work for you (they\nwork for me on Mandrake).\n\nI use CVSup to keep a local copy of the cvs repository on my laptop, so\nI have a *full* development environment when I'm traveling or otherwise\noff line. 
I'd have a very hard time working without it...\n\n - Thomas\n", "msg_date": "Wed, 03 Oct 2001 02:27:20 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: My last ECPG commit" }, { "msg_contents": "On Wed, Oct 03, 2001 at 02:27:20AM +0000, Thomas Lockhart wrote:\n> It may be that the static tarballs for RedHat will work for you (they\n> work for me on Mandrake).\n\nMaybe. But then I could compile the sources myself.\n\n> I use CVSup to keep a local copy of the cvs repository on my laptop, so\n> I have a *full* development environment when I'm traveling or otherwise\n> off line. I'd have a very hard time working without it...\n\nYes, that was my original thinking too. But CVS could do the same.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Wed, 3 Oct 2001 11:29:18 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: My last ECPG commit" }, { "msg_contents": "...\n> > I use CVSup to keep a local copy of the cvs repository on my laptop, so\n> > I have a *full* development environment when I'm traveling or otherwise\n> > off line. I'd have a very hard time working without it...\n> Yes, that was my original thinking too. But CVS could do the same.\n\n? A local copy of the *repository*, not a checked out version of the\ntree. CVSup is too cool for words ;)\n\nBuilding CVSup from scratch is not trivial, since it requires the\ninstallation of Modula3. There are packages available for RH/Mandrake\nLinux.\n\n - Thomas\n", "msg_date": "Wed, 03 Oct 2001 17:40:08 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: My last ECPG commit" }, { "msg_contents": "On Wed, Oct 03, 2001 at 05:40:08PM +0000, Thomas Lockhart wrote:\n> ? A local copy of the *repository*, not a checked out version of the\n> tree. 
CVSup is too cool for words ;)\n\nOops, should read mail more carefully. :-)\n\n> Building CVSup from scratch is not trivial, since it requires the\n> installation of Modula3. There are packages available for RH/Mandrake\n> Linux.\n\nI know. I did it once before there was a package available.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Thu, 4 Oct 2001 08:34:01 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: My last ECPG commit" } ]
[ { "msg_contents": "Did we come to any conclusion about whether to accept Gavin Sherry's\nCREATE OR REPLACE FUNCTION patch?\nhttp://fts.postgresql.org/db/mw/msg.html?mid=1035792\n\nAFAIR, the score was that I liked it, Bruce didn't, and no one else\nhad expressed an opinion.\n\nThe patch itself needs a little bit of cleanup I think, but I'm willing\nto fix and apply it if there's a consensus that we want the feature.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 Oct 2001 10:29:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "What about CREATE OR REPLACE FUNCTION?" }, { "msg_contents": "Tom Lane wrote:\n> \n> Did we come to any conclusion about whether to accept Gavin Sherry's\n> CREATE OR REPLACE FUNCTION patch?\n> http://fts.postgresql.org/db/mw/msg.html?mid=1035792\n> \n> AFAIR, the score was that I liked it, Bruce didn't, and no one else\n> had expressed an opinion.\n> \n> The patch itself needs a little bit of cleanup I think, but I'm willing\n> to fix and apply it if there's a consensus that we want the feature.\n\nIf it enables us to for example change a trigger function without\nredefining \nthe trigger itself then surely it is needed.\n\n----------------\nHannu\n", "msg_date": "Tue, 02 Oct 2001 16:44:22 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: What about CREATE OR REPLACE FUNCTION?" }, { "msg_contents": "> Did we come to any conclusion about whether to accept Gavin Sherry's\n> CREATE OR REPLACE FUNCTION patch?\n> http://fts.postgresql.org/db/mw/msg.html?mid=1035792\n> \n> AFAIR, the score was that I liked it, Bruce didn't, and no one else\n> had expressed an opinion.\n\nI withdraw my objection. When I read it, I thought we were going to\nhave CREATE FUNCTION and REPLACE FUNCTION. I later realized it is\nliterally CREATE OR REPLACE FUNCTION. 
Looks strange, but there is no\nstandard way to do this and we usually take the Oracle syntax when the\nstandard doesn't specify it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 Oct 2001 12:27:19 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: What about CREATE OR REPLACE FUNCTION?" }, { "msg_contents": "On Tue, 2 Oct 2001, Tom Lane wrote:\n\n> Did we come to any conclusion about whether to accept Gavin Sherry's\n> CREATE OR REPLACE FUNCTION patch?\n> http://fts.postgresql.org/db/mw/msg.html?mid=1035792\n> \n> AFAIR, the score was that I liked it, Bruce didn't, and no one else\n> had expressed an opinion.\n\nSince you're asking for opinions, \nI think that something of the sort is needed. As for the naming, as long\nas we keep things that are similar to this reasonably consistantly named\nin the future (like if we decided to do a rule altering one, etc...) it\nshould be fine. :)\n\n\n", "msg_date": "Tue, 2 Oct 2001 09:41:23 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: What about CREATE OR REPLACE FUNCTION?" }, { "msg_contents": "\n> > Did we come to any conclusion about whether to accept Gavin Sherry's\n> > CREATE OR REPLACE FUNCTION patch?\n> > http://fts.postgresql.org/db/mw/msg.html?mid=1035792\n> > AFAIR, the score was that I liked it, Bruce didn't, and no one else\n> > had expressed an opinion.\n>\n>I withdraw my objection. When I read it, I thought we were going to\n>have CREATE FUNCTION and REPLACE FUNCTION. I later realized it is\n>literally CREATE OR REPLACE FUNCTION. 
Looks strange, but there is no\n>standard way to do this and we usually take the Oracle syntax when the\n>standard doesn't specify it.\n>\n>Bruce Momjian | http://candle.pha.pa.us\n>pgman@candle.pha.pa.us | (610) 853-3000\n>+ If your life is a hard drive, | 830 Blythe Avenue\n>+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n>TIP 5: Have you checked our extensive FAQ?\n>http://www.postgresql.org/users-lounge/docs/faq.html\n\nHello,\n\nDoes CREATE OR REPLACE FUNCTION preserve function OID?\nWhat is the difference with CREATE OR ALTER FUNCTION?\n\nWe would like to implement pseudo-editing of functions in pgAdmin II.\nIs there any solution which preserves function OID?\n\nBest regards,\nJean-Michel POURE\n", "msg_date": "Thu, 04 Oct 2001 15:49:50 +0200", "msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What about CREATE OR REPLACE FUNCTION?" }, { "msg_contents": "Jean-Michel POURE <jm.poure@freesurf.fr> writes:\n> Does CREATE OR REPLACE FUNCTION preserve function OID?\n\nYes. That's the whole point ...\n\n> What is the difference with CREATE OR ALTER FUNCTION?\n\nThe former exists (now), the latter doesn't exist.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Oct 2001 10:00:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] What about CREATE OR REPLACE FUNCTION? " }, { "msg_contents": "On Thu, 4 Oct 2001, Jean-Michel POURE wrote:\n\n> Hello,\n> \n> Does CREATE OR REPLACE FUNCTION preserve function OID?\n> What is the difference with CREATE OR ALTER FUNCTION?\n> \n> We would like to implement pseudo-editing of functions in pgAdmin II.\n> Is there any solution which preserves function OID?\n\nYes. 
The idea was to preserve the OID.\n\n> \n> Best regards,\n> Jean-Michel POURE\n\nGavin\n\n", "msg_date": "Fri, 5 Oct 2001 00:08:01 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What about CREATE OR REPLACE FUNCTION?" }, { "msg_contents": "Dear all,\n\n1) CREATE OR REPLACE FUNCTION\nIn pgAdmin II, we plan to use the CREATE OR REPLACE FUNCTION if the patch \nis applied. Do you know if there is any chance it be applied for beta time? \nWe would very much appreciate this feature...\n\n2) PL/pgSQL default support\nIt is sometimes tricky for Windows users to install a language remotely on \na Linux box (no access to createlang and/or no knowledge of handlers). So \nwhy not enable PL/pgSQL by default?\n\nBest regards,\nJean-Michel POURE \n", "msg_date": "Mon, 08 Oct 2001 15:42:34 +0200", "msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What about CREATE OR REPLACE FUNCTION? " }, { "msg_contents": "On Mon, 8 Oct 2001, Jean-Michel POURE wrote:\n\n> Dear all,\n\n[snip]\n\n> 2) PL/pgSQL default support\n> It is sometimes tricky for Windows users to install a language remotely on \n> a Linux box (no access to createlang and/or no knowledge of handlers). So \n> why not enable PL/pgSQL by default?\n\nCould it not just be a feature of pgadmin to allow users to enable\nplpgsql? Enabling the language is, after all, just SQL.\n\nAs for having it by default, this takes some level of control out of the\nhands of the administrator. What if you do not want to allow your users to\ncreate plpgsql functions?\n\nGavin\n\n", "msg_date": "Tue, 9 Oct 2001 00:43:51 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What about CREATE OR REPLACE FUNCTION? " }, { "msg_contents": "At 00:43 09/10/01 +1000, you wrote:\n>Could it not just be a feature of pgadmin to allow users to enable\n>plpgsql? 
Enabling the language is, after all, just SQL.\n>\n>As for having it by default, this takes some level of control out of the\n>hands of the administrator. What if you do not want to allow your users to\n>create plpgsql functions?\n>\n>Gavin\n\nHello Gavin,\n\nEnabling plpgsql language is just SQL, agreed.\n\nCREATE FUNCTION plpgsql_call_handler () RETURNS OPAQUE AS\n'/usr/local/pgsql/lib/plpgsql.so' LANGUAGE 'C';\n\nCREATE TRUSTED PROCEDURAL LANGUAGE 'plpgsql'\nHANDLER plpgsql_call_handler\nLANCOMPILER 'PL/pgSQL';\n\nBut you can never be sure that /usr/local/pgsql/lib/plpgsql.so\nis the right path. Maybe a built-in plpgsql_call_handler function would \nsuffice.\n\nCheers,\nJean-Michel POURE\n\n", "msg_date": "Mon, 08 Oct 2001 17:52:02 +0200", "msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What about CREATE OR REPLACE FUNCTION? " }, { "msg_contents": "Jean-Michel POURE writes:\n\n> It is sometimes tricky for Windows users to install a language remotely on\n> a Linux box (no access to createlang and/or no knowledge of handlers).\n\nWhy not run createlang on the host that the server runs on?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 9 Oct 2001 19:13:01 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What about CREATE OR REPLACE FUNCTION? 
" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Jean-Michel POURE writes:\n>> It is sometimes tricky for Windows users to install a language remotely on\n>> a Linux box (no access to createlang and/or no knowledge of handlers).\n\n> Why not run createlang on the host that the server runs on?\n\nI wasn't able to get excited about that argument either.\n\nI believe the primary reason why PL languages aren't installed by\ndefault is security considerations: users can easily create denial-of-\nservice conditions if given access to a PL. (Example: write infinitely\nrecursive function, invoke it to cause stack overflow crash in your\nbackend, which forces database-wide restart, thereby negating other\npeople's transactions. Repeat until DBA kicks you off...)\n\nA DBA who does want to give access to PL languages by default can\neasily do so by installing them into template1, whence they'll be\nautomatically duplicated into newly created databases. Perhaps this\noption needs to be documented more prominently.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Oct 2001 13:27:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] What about CREATE OR REPLACE FUNCTION? " }, { "msg_contents": "Tom Lane writes:\n\n> I believe the primary reason why PL languages aren't installed by\n> default is security considerations\n\nWell, that argumentation seems to be analogous to giving someone login\naccess on a multiuser computer system but not letting him execute, say,\nperl because he might write recursive functions with it. Such setups\nexist (perhaps with something else instead of perl and recursive\nfunctions) but they are not the norm and usually fine-tuned by the\nadministrator.\n\nWe have realized time and time again that giving someone access to a\nPostgreSQL server is already a security risk. 
Any person can easily crash\nthe server (select cash_out(2) is prominently documented as doing that) or\nexhaust time and space resources by writing appropriate queries.\nPrivilege systems do not guard against that. Privilege systems are for\nguarding against a reasonable user \"cheating\".\n\nNow, if a procedural language is not safe (at least as safe as the rest of\nthe system that's accessible to an ordinary user), then it shouldn't be\nmarked \"trusted\". Otherwise, the consequence of this chain of arguments\nis that createlang selectively introduces a security hole into your\nsystem. Of course, we may warn, \"Be careful when installing procedural\nlanguages, because ...\". But are users going to be careful? How do they\nknow what kind of care to exercise, and just *how* to do that?\n\nNo, I don't think this is the ideal situation. I don't want to press for\nchanging it right now because I'm not particularly bothered by it, and the\nsecond sentence of the previous paragraph might just be true. In a future\nlife, a privilege system should give finer grained control about access to\nPLs, but we might want to think about what the default should be.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 10 Oct 2001 00:36:24 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What about CREATE OR REPLACE FUNCTION? " }, { "msg_contents": "I seem to recall that Oracle has all sorts of fancy resource limits that can\nbe applied to users. 
If such resource limits were implemented, then maybe\nthe DBA could have the power to limit someone to a maximum of 20% cpu and a\nfew transactions per second or something.\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Peter Eisentraut\n> Sent: Wednesday, 10 October 2001 6:36 AM\n> To: Tom Lane\n> Cc: Jean-Michel POURE; pgsql-hackers@postgresql.org; Bruce Momjian;\n> pgadmin-hackers@postgresql.org\n> Subject: Re: [HACKERS] What about CREATE OR REPLACE FUNCTION?\n>\n>\n> Tom Lane writes:\n>\n> > I believe the primary reason why PL languages aren't installed by\n> > default is security considerations\n>\n> Well, that argumentation seems to be analogous to giving someone login\n> access on a multiuser computer system but not letting him execute, say,\n> perl because he might write recursive functions with it. Such setups\n> exist (perhaps with something else instead of perl and recursive\n> functions) but they are not the norm and usually fine-tuned by the\n> administrator.\n>\n> We have realized time and time again that giving someone access to a\n> PostgreSQL server is already a security risk. Any person can easily crash\n> the server (select cash_out(2) is prominently documented as doing that) or\n> exhaust time and space resources by writing appropriate queries.\n> Privilege systems do not guard against that. Privilege systems are for\n> guarding against a reasonable user \"cheating\".\n>\n> Now, if a procedural language is not safe (at least as safe as the rest of\n> the system that's accessible to an ordinary user), then it shouldn't be\n> marked \"trusted\". Otherwise, the consequence of this chain of arguments\n> is that createlang selectively introduces a security whole into your\n> system. Of course, we may warn, \"Be careful when installing procedural\n> languages, because ...\". But are users going to be careful? 
How do they\n> know what kind of care to exercise, and just *how* to do that?\n>\n> No, I don't think this is the ideal situation. I don't want to press for\n> changing it right now because I'm not particularly bothered by it, and the\n> second sentence of the previous paragraph might just be true. In a future\n> life, a privilege system should give finer grained control about access to\n> PLs, but we might want to think about what the default should be.\n>\n> --\n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n", "msg_date": "Wed, 10 Oct 2001 09:57:10 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What about CREATE OR REPLACE FUNCTION? " }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> \n> I seem to recall that Oracle has all sorts of fancy resource limits that can\n> be applied to users. If such resource limits were implemented, then maybe\n> the DBA could have the power to limit someone to a maximum of 20% cpu and a\n> few transactions per second or something.\n> \n> Chris\n\nI was hoping that after completing the current project I'm working\non I might be able to contribute this feature. Oracle calls them\nPROFILEs which are a set of resource limits associated with a user.\nThey can limit:\n\nNo. of simultaneous connections\nNo. of blocks read per query\nNo. of blocks read per connection\nCPU time per query\nCPU time per connection\nIdle time\n\nas well as a few more esoteric others. I haven't looked at the new\nsystem resource reporting system that Jan wrote, but I suspect some\nof the statistics he gathers might already be available. Limiting\nsimultaneous connections by a user might take a little effort.\nLimiting idle time might as well. 
Both have been a requested feature\nin the past, but have pitfalls associated with them. But right now\ndenial of service for a user with database access is easy: soak up\nall available connections. Like Jan's resource statistics collector,\nOracle's profiles must be enabled in the initSID.ora configuration\nfile since it takes a few cycles to actually account for user\nactivity.\n\nMike Mascari\nmascarm@mascari.com\n\n> > Tom Lane writes:\n> >\n> > > I believe the primary reason why PL languages aren't installed by\n> > > default is security considerations\n> >\n> > Well, that argumentation seems to be analogous to giving someone login\n> > access on a multiuser computer system but not letting him execute, say,\n> > perl because he might write recursive functions with it. Such setups\n> > exist (perhaps with something else instead of perl and recursive\n> > functions) but they are not the norm and usually fine-tuned by the\n> > administrator.\n...\n> > \n> > Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n", "msg_date": "Wed, 10 Oct 2001 02:16:11 -0400", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What about CREATE OR REPLACE FUNCTION?" }, { "msg_contents": "Gavin Sherry wrote:\n> \n> What if you do not want to allow your users to create plpgsql functions?\n\nThis should be part of access control anyway - what if you do not want\nto \nallow *some* of your users to create plpgsql functions?\n\n--------------\nHannu\n", "msg_date": "Mon, 29 Oct 2001 09:32:46 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What about CREATE OR REPLACE FUNCTION?" } ]
[ { "msg_contents": "We're about to release brand new contrib module which is\nactually is a first step of integration of OpenFTS to postgres.\nPreliminary numbers are rather impressive -\n\nzen:~/app/pgsql/GiST/tsearch_index$ time psql-dev apod -c \\\n \"select title from titles where titleidx @@ 'gist&patch';\" > /dev/null\n\nreal 0m0.070s\nuser 0m0.010s\nsys 0m0.000s\n\n\nTable 'titles' contains 377905 titles from various mailing lists we\naccumulate in our mailware projects.\n\napod=# select count(*) from titles;\n count\n--------\n 377905\n(1 row)\n\nThis contrib is based on our GiST development and will work only with\n7.2.\n\nWe need maximum a week to finish, test, benchmark and document.\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 2 Oct 2001 18:27:18 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "do we're in time to catch 7.2 Beta ?" }, { "msg_contents": "\nIt is my understanding we can add things to /contrib even during beta,\nright? We are certainly more lax with beta than with the main backend\ntree. 
/contrib does get compiled so it does need to compile cleanly.\n\n> We're about to release brand new contrib module which is\n> actually is a first step of integration of OpenFTS to postgres.\n> Preliminary numbers are rather impressive -\n> \n> zen:~/app/pgsql/GiST/tsearch_index$ time psql-dev apod -c \\\n> \"select title from titles where titleidx @@ 'gist&patch';\" > /dev/null\n> \n> real 0m0.070s\n> user 0m0.010s\n> sys 0m0.000s\n> \n> \n> Table 'titles' contains 377905 titles from various mailing lists we\n> accumulate in our mailware projects.\n> \n> apod=# select count(*) from titles;\n> count\n> --------\n> 377905\n> (1 row)\n> \n> This contrib is based on our GiST development and will work only with\n> 7.2.\n> \n> We need maximum a week to finish, test, benchmark and document.\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 Oct 2001 12:29:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: do we're in time to catch 7.2 Beta ?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> It is my understanding we can add things to /contrib even during beta,\n> right? We are certainly more lax with beta than with the main backend\n> tree. 
/contrib does get compiled so it does need to compile cleanly.\n\nThe rules for contrib are laxer, for sure.\n\n>> We need maximum a week to finish, test, benchmark and document.\n\nI'd suggest that you concentrate on making some documentation now, so\nthat you can send in a complete package before we go beta. Then worry\nabout fixing bugs and benchmarking during beta ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 Oct 2001 13:38:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: do we're in time to catch 7.2 Beta ? " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > It is my understanding we can add things to /contrib even during beta,\n> > right? We are certainly more lax with beta than with the main backend\n> > tree. /contrib does get compiled so it does need to compile cleanly.\n> \n> The rules for contrib are laxer, for sure.\n> \n> >> We need maximum a week to finish, test, benchmark and document.\n> \n> I'd suggest that you concentrate on making some documentation now, so\n> that you can send in a complete package before we go beta. Then worry\n> about fixing bugs and benchmarking during beta ;-)\n\nAh, the old get it in and fix during beta. I love it. I have used the\nstrategy many times, and was only caught a few. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 Oct 2001 13:53:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: do we're in time to catch 7.2 Beta ?" }, { "msg_contents": "\nYup, not a crucial component of the base system ... 
once we start to get\ninto the 'release candidate' mode, then it is kinda frowned upon, but\nearly beta, no probs ...\n\n\nOn Tue, 2 Oct 2001, Bruce Momjian wrote:\n\n>\n> It is my understanding we can add things to /contrib even during beta,\n> right? We are certainly more lax with beta than with the main backend\n> tree. /contrib does get compiled so it does need to compile cleanly.\n>\n> > We're about to release brand new contrib module which is\n> > actually is a first step of integration of OpenFTS to postgres.\n> > Preliminary numbers are rather impressive -\n> >\n> > zen:~/app/pgsql/GiST/tsearch_index$ time psql-dev apod -c \\\n> > \"select title from titles where titleidx @@ 'gist&patch';\" > /dev/null\n> >\n> > real 0m0.070s\n> > user 0m0.010s\n> > sys 0m0.000s\n> >\n> >\n> > Table 'titles' contains 377905 titles from various mailing lists we\n> > accumulate in our mailware projects.\n> >\n> > apod=# select count(*) from titles;\n> > count\n> > --------\n> > 377905\n> > (1 row)\n> >\n> > This contrib is based on our GiST development and will work only with\n> > 7.2.\n> >\n> > We need maximum a week to finish, test, benchmark and document.\n> >\n> > \tRegards,\n> > \t\tOleg\n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n", "msg_date": "Tue, 2 Oct 2001 14:02:13 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: do we're in time to catch 7.2 Beta ?" } ]
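[Editor's note between threads] The thread above demonstrates the forthcoming full-text contrib module with a timed query using its `@@` match operator. As a rough sketch of the usage pattern being described — the `USING gist` index syntax is standard PostgreSQL, but the column type, operator class, and index name here are assumptions, since the contrib module itself had not yet been released at this point in the thread:

```sql
-- Hypothetical setup for the timing example above. The titleidx column
-- holds the module's indexable representation of each title; the GiST
-- index is what makes the @@ match operator fast.
CREATE INDEX titles_titleidx_key ON titles USING gist (titleidx);

-- The query from the timing example: titles matching both terms
-- ('&' is assumed to be the module's AND connective).
SELECT title FROM titles WHERE titleidx @@ 'gist&patch';
```

The reported runtime (0.07 s against 377,905 rows) is consistent with an index scan rather than a sequential scan over the whole table.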
[ { "msg_contents": "Hello all,\n\nI just discovered PLpgSQL yesterday (yay!) as we began development of our \nmore robust database backend, but I'm seeing some odd behavior in the use \nof LOCK TABLE. The problem I'm seeing is that two database transactions, \ninitiated via JDBC, are able to obtain simultaneous exclusive table locks \non the same table. If I lock the table in PSQL, the JDBC calls do indeed \nblock until I end my transaction in PSQL. However, two JDBC calls don't \nappear to block each other. :(\n\nThere may indeed be a better solution than locking a table for what I'm \ndoing, so please chime in if I'm missing something there. My next attempt \nwill be to wrap the whole select-update in a loop, but I'm afraid of \ncreating a deadlock situation.\n\nI used PLpgSQL to implement a next_id function that allows IDs to be \nallocated in continuous blocks (thus I believe I cannot use sequences). The \nclient application generates its own IDs but tracks them in the database. \nTo minimize DB calls the client allocates IDs in blocks and doles them out \none-at-a-time. Thus, 20 calls to IDFactory.nextID() in Java will result in \nonly 2 calls to the database function next_id_block(10).\n\nAs well each object type (there are 6) that needs IDs gets its own \nIDFactory, *and* there are multiple Java clients accessing the same \ndatabase. Therefore, it's quite possible for two IDFactories to call \nnext_id_block(count) at the same time. Thus I'm locking the table in \nexclusive mode to force synchronous access. 
I've also tried access \nexclusive mode to no avail.\n\nHere's the table definition:\n\n create table idfactory\n (\n name varchar(20) not null primary key,\n next_id integer not null default 1,\n change_num smallint not null default 1\n ) ;\n\nThis is the pseudo-code algorithm I've implemented in PLpgSQL:\n\n next_id_block ( count )\n (1) lock idfactory table\n\n (2) read idfactory row\n (3) update idfactory row\n increment next_id by count\n increment change_num by 1\n where change_num is equal to that read in (2)\n\n (4) FAIL if (3) updated 0 rows\n\n (5) return next_id read in (2)\n\nMy intent is that by locking the idfactory table, I can assure that no one \nelse can update the row between it being read in step 2 and updated in step \n3. I've tried calling this function from JDBC with auto-commit on as well \nas with it off and the connection set to both transaction levels. The \nreality, however, is that some threads are trying to update the row \nconcurrently and failing (0 rows updated since the change_num value no \nlonger matches).\n\nI'll try to illustrate what seems to be happening in the case of two threads.\n\n Time Thread 1 Thread 2\n 1 lock\n 2 read 1, 1\n 3 lock\n 4 read 1, 1\n 5 write 11, 2\n 6 write 11, 2\n 7 return 1\n 8 FAIL\n\nIt's my understanding that thread 2 should block at T3 since thread 1 \nlocked the table at T1. Thread 2 shouldn't continue until thread 1's \ntransaction is ended (either by commit or abort). True? The truly odd part \nis that if I start up PSQL, begin a transaction, and then lock the table, \nall the threads calling the function block until I end the transaction, \njust as I'd expect. 
However, the threads won't block each other!\n\nHere's the stored function itself:\n\n create function next_id_block (\n varchar , integer\n )\n returns bigint\n as '\n DECLARE\n -- Parameters\n name_key alias for $1 ;\n block_size alias for $2 ;\n\n -- Constants\n FIRST_ID constant bigint := 1 ;\n\n -- Locals\n id_rec record ;\n new_last_id bigint ;\n num_rows integer ;\n BEGIN\n -- To avoid a retry-loop, lock the whole table for the transaction\n lock table idfactory in access exclusive mode ;\n\n -- Read the current value of next_id\n select into id_rec * from idfactory where name = name_key ;\n\n -- Increment it by block_size\n new_last_id := id_rec.next_id + block_size ;\n update idfactory\n set next_id = new_last_id,\n change_num = change_num + 1\n where name = name_key and change_num = id_rec.change_num ;\n\n -- Error if filter not found\n get diagnostics num_rows = ROW_COUNT ;\n if num_rows != 1 then\n raise exception ''Failed to update idfactory.next_id to % for % at \n%'',\n new_last_id, name_key, id_rec.change_num;\n return -1 ;\n end if ;\n\n return id_rec.next_id ;\n END ;\n ' language 'plpgsql' with (isstrict) ;\n\nFinally, here's the JDBC code I'm using to call it:\n\n protected void allocate ( int count )\n {\n PreparedStatement stmt = null;\n ResultSet result = null;\n long newNextID = INVALID_ID;\n\n try\n {\n stmt = conn.prepareStatement(\"select next_id_block(?, ?)\");\n\n stmt.setString(1, name);\n stmt.setInt(2, count);\n\n // conn.setAutoCommit(false);\n result = stmt.executeQuery();\n\n if ( ! 
result.next() )\n throw new SQLException(\"Function next_id_block failed\");\n\n // Pull out the new value and close the result set.\n newNextID = Nulls.getLong(result, 1);\n\n try { result.close(); result = null; }\n catch ( SQLException ignore ) { }\n\n // conn.commit();\n\n // Null values are not allowed.\n if ( Nulls.is(newNextID) )\n throw new SQLException(\"Function next_id_block returned null\");\n\n nextID = newNextID;\n lastID = nextID + count;\n }\n catch ( SQLException e )\n {\n e.printStackTrace();\n }\n finally\n {\n if ( result != null )\n {\n try { result.close(); }\n catch ( SQLException ignore ) { }\n }\n }\n }\n\nAnyway, this was rather long, but I wanted to provide all the information \nnecessary up front. Thank you for any thoughts or ideas you might have.\n\nPeace,\nDave\n\n--\nDavid Harkness\nMEconomy, Inc.\n\n", "msg_date": "Tue, 02 Oct 2001 13:31:27 -0700", "msg_from": "Dave Harkness <daveh@MEconomy.com>", "msg_from_op": true, "msg_subject": "LOCK TABLE oddness in PLpgSQL function called via JDBC" }, { "msg_contents": "Dave,\n\nFirst off, are you running with autocommit turned off in JDBC? By \ndefault autocommit is on, and thus your lock is removed as soon as it is \naquired.\n\nSecondly, you don't need a table lock, you just need to lock the row \nbetween the select and the update. You should use 'select for update' \nto do this. That way when you issue the select to get the current \nvalue, it will lock the row, preventing other select for update requests \nfrom completing until the lock is released. That way the select and the \nupdate can be assured that no one else is changing the data.\n\nthanks,\n--Barry\n\n\nDave Harkness wrote:\n\n> Hello all,\n> \n> I just discovered PLpgSQL yesterday (yay!) as we began development of \n> our more robust database backend, but I'm seeing some odd behavior in \n> the use of LOCK TABLE. 
The problem I'm seeing is that two database \n> transactions, initiated via JDBC, are able to obtain simultaneous \n> exclusive table locks on the same table. If I lock the table in PSQL, \n> the JDBC calls do indeed block until I end my transaction in PSQL. \n> However, two JDBC calls don't appear to block each other. :(\n> \n> There may indeed be a better solution than locking a table for what I'm \n> doing, so please chime in if I'm missing something there. My next \n> attempt will be to wrap the whole select-update in a loop, but I'm \n> afraid of creating a deadlock situation.\n> \n> I used PLpgSQL to implement a next_id function that allows IDs to be \n> allocated in continuous blocks (thus I believe I cannot use sequences). \n> The client application generates its own IDs but tracks them in the \n> database. To minimize DB calls the client allocates IDs in blocks and \n> doles them out one-at-a-time. Thus, 20 calls to IDFactory.nextID() in \n> Java will result in only 2 calls to the database function \n> next_id_block(10).\n> \n> As well each object type (there are 6) that needs IDs gets its own \n> IDFactory, *and* there are multiple Java clients accessing the same \n> database. Therefore, it's quite possible for two IDFactories to call \n> next_id_block(count) at the same time. Thus I'm locking the table in \n> exclusive mode to force synchronous access. 
I've also tried access \n> exclusive mode to no avail.\n> \n> Here's the table definition:\n> \n> create table idfactory\n> (\n> name varchar(20) not null primary key,\n> next_id integer not null default 1,\n> change_num smallint not null default 1\n> ) ;\n> \n> This is the psuedo-code algorithm I've implemented in PLpgSQL:\n> \n> next_id_block ( count )\n> (1) lock idfactory table\n> \n> (2) read idfactory row\n> (3) update idfactory row\n> increment next_id by count\n> increment change_num by 1\n> where change_num is equal to that read in (2)\n> \n> (4) FAIL if (3) updated 0 rows\n> \n> (5) return next_id read in (2)\n> \n> My intent is that by locking the idfactory table, I can assure that no \n> one else can update the row between it being read in step 2 and updated \n> in step 3. I've tried calling this function from JDBC with auto-commit \n> on as well as with it off and the connection set to both transaction \n> levels. The reality, however, is that some threads are trying to update \n> the row concurrently and failing (0 rows updated since the change_num \n> value no longer matches.\n> \n> I'll try to illustrate what seems to be happening in the case of two \n> threads.\n> \n> Time Thread 1 Thread 2\n> 1 lock\n> 2 read 1, 1\n> 3 lock\n> 4 read 1, 1\n> 5 write 11, 2\n> 6 write 11, 2\n> 7 return 1\n> 8 FAIL\n> \n> It's my understanding that thread 2 should block at T3 since thread 1 \n> locked the table at T1. Thread 2 shouldn't continue until thread 1's \n> transaction is ended (either by commit or abort). True? The truly odd \n> part is that if I start up PSQL, begin a transaction, and then lock the \n> table, all the threads calling the function block until I end the \n> transaction, just as I'd expect. 
However, the threads won't block each \n> other!\n> \n> Here's the stored function itself:\n> \n> create function next_id_block (\n> varchar , integer\n> )\n> returns bigint\n> as '\n> DECLARE\n> -- Parameters\n> name_key alias for $1 ;\n> block_size alias for $2 ;\n> \n> -- Constants\n> FIRST_ID constant bigint := 1 ;\n> \n> -- Locals\n> id_rec record ;\n> new_last_id bigint ;\n> num_rows integer ;\n> BEGIN\n> -- To avoid a retry-loop, lock the whole table for the transaction\n> lock table idfactory in access exclusive mode ;\n> \n> -- Read the current value of next_id\n> select into id_rec * from idfactory where name = name_key ;\n> \n> -- Increment it by block_size\n> new_last_id := id_rec.next_id + block_size ;\n> update idfactory\n> set next_id = new_last_id,\n> change_num = change_num + 1\n> where name = name_key and change_num = id_rec.change_num ;\n> \n> -- Error if filter not found\n> get diagnostics num_rows = ROW_COUNT ;\n> if num_rows != 1 then\n> raise exception ''Failed to update idfactory.next_id to % for % \n> at %'',\n> new_last_id, name_key, id_rec.change_num;\n> return -1 ;\n> end if ;\n> \n> return id_rec.next_id ;\n> END ;\n> ' language 'plpgsql' with (isstrict) ;\n> \n> Finally, here's the JDBC code I'm using to call it:\n> \n> protected void allocate ( int count )\n> {\n> PreparedStatement stmt = null;\n> ResultSet result = null;\n> long newNextID = INVALID_ID;\n> \n> try\n> {\n> stmt = conn.prepareStatement(\"select next_id_block(?, ?)\");\n> \n> stmt.setString(1, name);\n> stmt.setInt(2, count);\n> \n> // conn.setAutoCommit(false);\n> result = stmt.executeQuery();\n> \n> if ( ! 
result.next() )\n> throw new SQLException(\"Function next_id_block failed\");\n> \n> // Pull out the new value and close the result set.\n> newNextID = Nulls.getLong(result, 1);\n> \n> try { result.close(); result = null; }\n> catch ( SQLException ignore ) { }\n> \n> // conn.commit();\n> \n> // Null values are not allowed.\n> if ( Nulls.is(newNextID) )\n> throw new SQLException(\"Function next_id_block returned null\");\n> \n> nextID = newNextID;\n> lastID = nextID + count;\n> }\n> catch ( SQLException e )\n> {\n> e.printStackTrace();\n> }\n> finally\n> {\n> if ( result != null )\n> {\n> try { result.close(); }\n> catch ( SQLException ignore ) { }\n> }\n> }\n> }\n> \n> Anyway, this was rather long, but I wanted to provide all the \n> information necessary up front. Thank you for any thoughts or ideas you \n> might have.\n> \n> Peace,\n> Dave\n> \n> -- \n> David Harkness\n> MEconomy, Inc.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n\n", "msg_date": "Tue, 02 Oct 2001 13:45:53 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: LOCK TABLE oddness in PLpgSQL function called via JDBC" }, { "msg_contents": "At 01:45 PM 10/2/2001, Barry Lind wrote:\n>Dave,\n>\n>First off, are you running with autocommit turned off in JDBC? By default \n>autocommit is on, and thus your lock is removed as soon as it is aquired.\n\nI've tried it with auto-commit ON and OFF. With it off, I've tried it with \nREAD_COMMITTED and SERIALIZABLE. All produce the same result.\n\nHowever, my understanding is that each JDBC statement is executed within a \nsingle transaction when auto-commit is ON. 
I'm executing only one statement:\n\n select next_id_block(?, ?)\n\nWhile the function does indeed execute multiple statements itself, aren't \nthey all done inside a single transaction? If not, I must rethink our \nstrategy as I had assumed that the PLpgSQL functions I wrote would be \ntransactional.\n\n>Secondly, you don't need a table lock, you just need to lock the row \n>between the select and the update. You should use 'select for update' to \n>do this. That way when you issue the select to get the current value, it \n>will lock the row, preventing other select for update requests from \n>completing until the lock is released. That way the select and the update \n>can be assured that no one else is changing the data.\n\nTHANK YOU! That's what I thought, but the documentation was a bit light on \nthe subject of SELECT ... FOR UPDATE. So to mirror it back to you, if I do\n\n next_id_block ( count )\n (1) read idfactory row FOR UPDATE\n\n (2) update idfactory row\n increment next_id by count\n increment change_num by 1\n where change_num is equal to that read in (1)\n\n (3) return next_id read in (1)\n\nis it safe to assume that the update in (2) will ALWAYS succeed since it \nwould be impossible for any other transaction to read or update the row \nonce it was selected for update?\n\nThanks for your help.\n\nPeace,\nDave\n\n", "msg_date": "Tue, 02 Oct 2001 14:09:27 -0700", "msg_from": "Dave Harkness <daveh@MEconomy.com>", "msg_from_op": true, "msg_subject": "Re: LOCK TABLE oddness in PLpgSQL function called via JDBC" }, { "msg_contents": "Dave Harkness <daveh@MEconomy.com> writes:\n> The problem I'm seeing is that two database transactions, \n> initiated via JDBC, are able to obtain simultaneous exclusive table locks \n> on the same table.\n\nSounds to me like JDBC is feeding all your commands through a single\ndatabase connection, which means that what you think are independent\ntransactions are really not. 
Better take a closer look at what you're\ndoing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 Oct 2001 17:22:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: LOCK TABLE oddness in PLpgSQL function called via JDBC " }, { "msg_contents": "At 02:22 PM 10/2/2001, Tom Lane wrote:\n>Dave Harkness <daveh@MEconomy.com> writes:\n> > The problem I'm seeing is that two database transactions,\n> > initiated via JDBC, are able to obtain simultaneous exclusive table locks\n> > on the same table.\n>\n>Sounds to me like JDBC is feeding all your commands through a single\n>database connection, which means that what you think are independent\n>transactions are really not. Better take a closer look at what you're\n>doing.\n\nMy test code creates multiple test threads and then starts them each. Each \ntest thread (IDFactoryThread) creates its own Connection and IDFactory \n(which gets the connection). The test thread itself simply calls \nIDFactory.nextID() in a loop. I'm not using any connection pooling \nwhatsoever. 
I'm using the built-in PostgreSQL JDBC driver alone.\n\nHere's the code:\n\n public static void test ( int numThreads , String nameKey )\n {\n Thread[] threads = new Thread[numThreads];\n\n for ( int i = 0 ; i < numThreads ; i++ )\n {\n threads[i] = new IDFactoryThread(i, nameKey);\n }\n\n for ( int i = 0 ; i < numThreads ; i++ )\n {\n threads[i].start();\n }\n }\n\n class IDFactoryThread extends Thread\n {\n Connection conn = null;\n IDFactorySQL factory = null;\n\n public IDFactoryThread ( int index , String nameKey )\n {\n super(Integer.toString(index));\n init(nameKey);\n }\n\n public void init ( String nameKey )\n {\n try\n {\n conn = DriverManager.getConnection(IDFactorySQLTest.DB_URL);\n }\n catch ( SQLException e )\n {\n System.out.println(\"Could not connect to the database\");\n e.printStackTrace();\n System.exit(1);\n }\n\n factory = new IDFactorySQL(conn, nameKey, \nIDFactorySQLTest.BLOCK_SIZE);\n }\n\n public void run ( )\n {\n try\n {\n for ( int i = 0 ; i < IDFactorySQLTest.LOOP_COUNT ; i++ )\n {\n System.out.println(getName() + \" - \" + factory.next());\n }\n }\n catch ( IllegalStateException e )\n {\n e.printStackTrace();\n System.exit(1);\n }\n\n factory.close();\n }\n }\n\nThanks again!\n\nPeace,\nDave\n\n", "msg_date": "Tue, 02 Oct 2001 14:38:30 -0700", "msg_from": "Dave Harkness <daveh@MEconomy.com>", "msg_from_op": true, "msg_subject": "Re: LOCK TABLE oddness in PLpgSQL function called via" }, { "msg_contents": "Barry, Tom, et al,\n\nThanks for your help. I really appreciate it.\n\nOkay, I changed the PLpgSQL function to use select for update rather than \nlocking the table explicitly. Now I'm getting different errors. Running in \nauto-commit and read-committed modes, I am seeing the same error as before: \nthread A is updating the (locked) row between thread B selecting and then \nupdating it. 
This causes thread B's update to affect 0 rows which I'm \ntrying to avoid.\n\nRunning in serializable mode, I'm getting a Postgres exception:\n\n ERROR: Can't serialize access due to concurrent update\n\nIt seems to me that the table locks grabbed in the PLpgSQL function aren't \nactually locking the tables. They check to make sure they can *get* the \nlock, but don't actually hold the lock. Same with the select for update. It \nmakes sure it can get the lock, but still lets others get the same lock.\n\nAnyway, here's how I'm doing my transaction level setting in Java. \nIDFactorySQL gets a name key (String) and Connection object in its \nconstructor, which it passes to an internal init() method where it sets the \ntransaction handling:\n\n protected void init ( Connection conn , String name )\n {\n this.conn = conn;\n this.name = name;\n\n try\n {\n // \nconn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);\n conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);\n conn.setAutoCommit(false);\n }\n catch ( SQLException e )\n {\n invalidate();\n }\n }\n\nI've tried both transaction levels separately as well as not setting it at \nall [but still calling setAutoCommit(false)] which I understand should \nleave me with read-committed level. Then, before calling the PLpgSQL \nfunction next_id_block(), I've tried again setting auto-commit to false as \nwell as not doing so:\n\n stmt = conn.prepareStatement(\"select next_id_block(?, ?)\");\n\n stmt.setString(1, name);\n stmt.setInt(2, count);\n\n conn.setAutoCommit(false);\n result = stmt.executeQuery();\n ...\n conn.commit();\n\nI roll back in the case of any SQLException, but at that point the test \nstops as it's broken. 
Any other ideas?\n\nPeace,\nDave\n\n", "msg_date": "Tue, 02 Oct 2001 15:03:56 -0700", "msg_from": "Dave Harkness <daveh@MEconomy.com>", "msg_from_op": true, "msg_subject": "Re: LOCK TABLE oddness in PLpgSQL function called via JDBC" }, { "msg_contents": "Dave Harkness <daveh@MEconomy.com> writes:\n> Running in serializable mode, I'm getting a Postgres exception:\n> ERROR: Can't serialize access due to concurrent update\n\nWell, in that case my theory about it all being one transaction is\nwrong; you couldn't get that error without a cross-transaction conflict.\n\n> It seems to me that the table locks grabbed in the PLpgSQL function aren't \n> actually locking the tables. They check to make sure they can *get* the \n> lock, but don't actually hold the lock. Same with the select for update. It \n> makes sure it can get the lock, but still lets others get the same lock.\n\nOnce a lock has been grabbed, the *only* way it can be let go is to\nend the transaction. So my new theory is that the JDBC driver is\nissuing an auto-commit at points where you're not expecting it.\n\nI'm not familiar enough with the behavior of \"setAutoCommit\" and friends\nto be sure what's happening; but if you turn on query logging in the\nserver you'll probably see the evidence soon enough.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 Oct 2001 18:29:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: LOCK TABLE oddness in PLpgSQL function called via JDBC " }, { "msg_contents": "At 03:29 PM 10/2/2001, Tom Lane wrote:\n>Once a lock has been grabbed, the *only* way it can be let go is to end \n>the transaction.\n\nThat's my understanding as well.\n\n>So my new theory is that the JDBC driver is issuing an auto-commit at \n>points where you're not expecting it.\n\nBut I'm only issuing *one* JDBC statement:\n\n select next_id_block(?, ?)\n\nOnce it returns, I grab the single value from the ResultSet, close the \nResultSet, and commit the 
transaction.\n\nAll of the SQL magic is being done by the PLpgSQL stored function on the \nbackend. It's almost like the PLpgSQL function itself is running in \nauto-commit mode, but then I don't see how I could be getting a \nserialization error. And the docs say that the function will run in the \ncaller's transaction, so I'm just confused.\n\nMy suspicion was that JDBC was somehow interacting oddly with PLpgSQL, but \nmore and more it's looking like PLpgSQL is the culprit all on its own. I'll \ntry posing the question to the general mailing list since there are none \nspecific to stored procedure languages, or is there a more appropriate list?\n\n>but if you turn on query logging in the server you'll probably see the \n>evidence soon enough.\n\nY'know, that's a very good idea. I haven't done that before -- is it fairly \nprominent in the online documentation? I'm off to find it now... Thanks.\n\nPeace,\nDave\n\n", "msg_date": "Tue, 02 Oct 2001 15:41:13 -0700", "msg_from": "Dave Harkness <daveh@MEconomy.com>", "msg_from_op": true, "msg_subject": "Re: LOCK TABLE oddness in PLpgSQL function called via" }, { "msg_contents": "Dave,\n\nI don't know why you are seeing these problems with the lock table. But \nthe select for update should work for you. (In my product I have done \nexactly the same thing you are trying to do using select for update with \nsuccess).\n\nI would add one minor comment on your description of the behavior of \nusing select for update:\n\nThe select for update will block other 'select for updates' or \n'updates'. It does not block other simple selects. But that is fine \nfor the purposes here.\n\nthanks,\n--Barry\n\n\n\nDave Harkness wrote:\n\n> At 01:45 PM 10/2/2001, Barry Lind wrote:\n> \n>> Dave,\n>>\n>> First off, are you running with autocommit turned off in JDBC? By \n>> default autocommit is on, and thus your lock is removed as soon as it \n>> is aquired.\n> \n> \n> I've tried it with auto-commit ON and OFF. 
With it off, I've tried it \n> with READ_COMMITTED and SERIALIZABLE. All produce the same result.\n> \n> However, my understanding is that each JDBC statement is executed within \n> a single transaction when auto-commit is ON. I'm executing only one \n> statement:\n> \n> select next_id_block(?, ?)\n> \n> While the function does indeed execute multiple statements itself, \n> aren't they all done inside a single transaction? If not, I must rethink \n> our strategy as I had assumed that the PLpgSQL functions I wrote would \n> be transactional.\n> \n>> Secondly, you don't need a table lock, you just need to lock the row \n>> between the select and the update. You should use 'select for update' \n>> to do this. That way when you issue the select to get the current \n>> value, it will lock the row, preventing other select for update \n>> requests from completing until the lock is released. That way the \n>> select and the update can be assured that no one else is changing the \n>> data.\n> \n> \n> THANK YOU! That's what I thought, but the documentation was a bit light \n> on the subject of SELECT ... FOR UPDATE. So to mirror it back to you, if \n> I do\n> \n> next_id_block ( count )\n> (1) read idfactory row FOR UPDATE\n> \n> (2) update idfactory row\n> increment next_id by count\n> increment change_num by 1\n> where change_num is equal to that read in (1)\n> \n> (3) return next_id read in (1)\n> \n> is it safe to assume that the update in (2) will ALWAYS succeed since it \n> would be impossible for any other transaction to read or update the row \n> once it was selected for update?\n> \n> Thanks for your help.\n> \n> Peace,\n> Dave\n> \n\n\n", "msg_date": "Tue, 02 Oct 2001 18:03:26 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: LOCK TABLE oddness in PLpgSQL function called via JDBC" }, { "msg_contents": "Dave,\n\nI can't explain what is happening here. 
I think the best next step is \nto turn on query logging on the server and look at the actual SQL \nstatements being executed. It really looks like some extra commits or \nrollbacks are occurring that are causing the locks to be released.\n\nthanks,\n--Barry\n\nDave Harkness wrote:\n\n> Barry, Tom, et al,\n> \n> Thanks for your help. I really appreciate it.\n> \n> Okay, I changed the PLpgSQL function to use select for update rather \n> than locking the table explicitly. Now I'm getting different errors. \n> Running in auto-commit and read-committed modes, I am seeing the same \n> error as before: thread A is updating the (locked) row between thread B \n> selecting and then updating it. This causes thread B's update to affect \n> 0 rows which I'm trying to avoid.\n> \n> Running in serializable mode, I'm getting a Postgres exception:\n> \n> ERROR: Can't serialize access due to concurrent update\n> \n> It seems to me that the table locks grabbed in the PLpgSQL function \n> aren't actually locking the tables. They check to make sure they can \n> *get* the lock, but don't actually hold the lock. Same with the select \n> for update. It makes sure it can get the lock, but still lets others get \n> the same lock.\n> \n> Anyway, here's how I'm doing my transaction level setting in Java. 
\n> IDFactorySQL gets a name key (String) and Connection object in its \n> constructor, which it passes to an internal init() method where it sets \n> the transaction handling:\n> \n> protected void init ( Connection conn , String name )\n> {\n> this.conn = conn;\n> this.name = name;\n> \n> try\n> {\n> // \n> conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);\n> conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);\n> conn.setAutoCommit(false);\n> }\n> catch ( SQLException e )\n> {\n> invalidate();\n> }\n> }\n> \n> I've tried both transaction levels separately as well as not setting it \n> at all [but still calling setAutoCommit(false)] which I understand \n> should leave me with read-committed level. Then, before calling the \n> PLpgSQL function next_id_block(), I've tried again setting auto-commit \n> to false as well as not doing so:\n> \n> stmt = conn.prepareStatement(\"select next_id_block(?, ?)\");\n> \n> stmt.setString(1, name);\n> stmt.setInt(2, count);\n> \n> conn.setAutoCommit(false);\n> result = stmt.executeQuery();\n> ...\n> conn.commit();\n> \n> I roll back in the case of any SQLException, but at that point the test \n> stops as it's broken. Any other ideas?\n> \n> Peace,\n> Dave\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n\n", "msg_date": "Tue, 02 Oct 2001 18:13:14 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: LOCK TABLE oddness in PLpgSQL function called via JDBC" }, { "msg_contents": "Dave Harkness wrote:\n> \n> At 01:45 PM 10/2/2001, Barry Lind wrote:\n> >Dave,\n> >\n> >Secondly, you don't need a table lock, you just need to lock the row\n> >between the select and the update. You should use 'select for update' to\n> >do this. 
That way when you issue the select to get the current value, it\n> >will lock the row, preventing other select for update requests from\n> >completing until the lock is released. That way the select and the update\n> >can be assured that no one else is changing the data.\n> \n> THANK YOU! That's what I thought, but the documentation was a bit light on\n> the subject of SELECT ... FOR UPDATE. So to mirror it back to you, if I do\n> \n> next_id_block ( count )\n> (1) read idfactory row FOR UPDATE\n> \n> (2) update idfactory row\n> increment next_id by count\n> increment change_num by 1\n> where change_num is equal to that read in (1)\n> \n> (3) return next_id read in (1)\n\nAs far as I see, this is a stored function issue not a Java\nissue.\nI got the exact code of the function from Dave.\n\n create function next_id_block (\n varchar , integer\n )\n returns bigint\n as '\n DECLARE\n -- Parameters\n name_key alias for $1 ;\n block_size alias for $2 ;\n\n -- Locals\n id_rec record ;\n num_rows integer ;\n BEGIN\n -- To avoid a retry-loop, lock the whole table for the\ntransaction\n lock table idfactory in exclusive mode ;\n\n -- Read the current value of next_id\n select into id_rec * from idfactory where name = name_key ;\n\n -- Increment it by block_size\n update idfactory\n set next_id = next_id + block_size,\n change_num = change_num + 1\n where name = name_key and change_num = id_rec.change_num ;\n\n -- If the update failed, raise an exception\n get diagnostics num_rows = ROW_COUNT ;\n if num_rows != 1 then\n raise exception ''Update failed'' ;\n return -1 ;\n end if ;\n\n return id_rec.next_id ;\n END ;\n ' language 'plpgsql' ;\n\nThe cause is that the stored function uses a common\nsnapshot throughout the function execution. 
As I've\ncomplained many times, the current implementation is\nfar from intuitive and this case seems to show that\nit isn't proper at all either.\n\n*lock table* certainly locks idfactory table but the\nsubsequent *select* sees the table using the snapshot\ntaken before the function call. The *update* statement\nfinds the row matching the where clause using the common\nsnapshot but will find the row was already updated and\nthe updated row doesn't satisfy the condition any longer.\n\n[In the case where we remove the *lock* statement and add a\n *for update* clause to the subsequent *select* statement]\n\nThe *select .. for update* statement gets the latest\n(may be updated) change_num value. Unfortunately\nthe subsequent *update* statement has a where clause\ncontaining change_num. The *update* statement can't\nfind the row matching the where clause using the snapshot\ntaken before the function call.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Wed, 03 Oct 2001 10:36:05 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: LOCK TABLE oddness in PLpgSQL function called via JDBC" }, { "msg_contents": "At 06:36 PM 10/2/2001, Hiroshi Inoue wrote:\n>The cause is that the stored function uses a common\n>snapshot throughout the function execution. As I've\n>complained many times, the current implementation is\n>far from intuitive and this case seems to show that\n>it isn't proper at all either.\n\nBravo! That indeed seems to have been the problem. To solve it, I simply \nmoved the LOCK TABLE out of the PLpgSQL function and into the JDBC code. \nWhile this isn't *ideal* as it leaves the table locked across two JDBC \ncalls (the function and the following commit), it achieves the desired \nresult (synchronous access to the idfactory table across all clients), and \nas I said, the function won't be called very often. 
It's far more important \nthat it work as expected rather than it work in sub-millisecond time.\n\nTo illustrate then what seems to have been occurring:\n\n Time Thread A Thread B\n 1 snapshot\n 2 lock\n 3 read 1, 1\n 4 write 11, 2\n 5 snapshot\n 6 return 1\n 7 commit\n 8 lock\n 9 read 1, 1\n 10 write 11, 2\n 11 FAIL\n\nAs long as thread B takes its snapshot any time before the commit at (7), \nits write at (10) will not affect any rows because ...\n\n>The *update* statement\n>find the row matching the where clause using the common\n>snapshot but will find the row was already updated and\n>the updated row doesn't satisfy the condition any longer.\n\nOuch. So querying for select, update, delete, whatever goes against the \nsnapshot to *locate* rows, but then applies the where clause to the *new \nvalues* not seen in the snapshot? If that's the case, that's extremely \nconfusing.\n\nAnyway, many thanks to everyone for keeping me from going totally insane. \nLuckily the other stored procedures we need to write won't require such \nstrict access to table data. :)\n\nPeace,\nDave\n\n", "msg_date": "Tue, 02 Oct 2001 19:37:40 -0700", "msg_from": "Dave Harkness <daveh@MEconomy.com>", "msg_from_op": true, "msg_subject": "PROBLEM SOLVED: LOCK TABLE oddness in PLpgSQL function called via\n JDBC" } ]
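The race described in this thread can be sketched with a toy model. This is plain Python standing in for the behavior Hiroshi describes — the `idfactory`, `next_id`, and `change_num` names mirror the example, but the model itself is an illustrative assumption, not how the real executor works:

```python
# Toy model: SELECT ... FOR UPDATE returns the latest committed row,
# but the UPDATE's WHERE clause is evaluated against the snapshot taken
# at function entry, so a concurrent bump of change_num makes the row
# invisible to the UPDATE.

def next_id_block(table, snapshot, block_size):
    # What SELECT ... FOR UPDATE hands back: the current committed row.
    current = dict(table)
    # The UPDATE locates rows through the function-entry snapshot, where
    # change_num may already be stale relative to `current`.
    matched = 1 if snapshot["change_num"] == current["change_num"] else 0
    if matched:
        table["next_id"] = current["next_id"] + block_size
        table["change_num"] = current["change_num"] + 1
    return matched

# Thread B takes its snapshot, then thread A commits an update:
snapshot_b = {"next_id": 1, "change_num": 1}
table = {"next_id": 11, "change_num": 2}   # state after A's commit
print(next_id_block(table, snapshot_b, 10))   # 0 -- B's UPDATE hits no rows
```

This reproduces the timeline in Dave's message: as long as B's snapshot predates A's commit, B's write affects zero rows.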
[ { "msg_contents": "Hi!\n\n0. I think access to other databases is really important. There was\na discussion about that. Using a dot operator to specify a\ndatabase (schema) seems to be very standard and elegant.\nBut there is another way to implement it. Here is my\nsuggestion.\n\n1. First, some syntax:\n\nCREATE [ SHARED ] [ TRUSTED ] CONNECTION conn_name\n USING 'conn_string'\n [ CONNECT ON { LOGIN | USE } ]\n [ DISCONNECT ON { LOGOUT | COMMIT } ];\n\nDescription\n Creates a connection definition (Oracle: database link) to\n a remote database.\n\nSHARED\n Means only one instance of connection exists and is accessible\n to all qualified users.\n\nTRUSTED\n Only superusers can use this connection (like TRUSTED modifier\n in CREATE LANGUAGE).\n\nconn_name\n Just an identifier.\n\n'conn_string'\n Connect string in standard form accepted by libpq\n 'PQconnectdb' function.\n\nCONNECT ON { LOGIN | USE }\n Defines whether connection should be established when\n user logs in, or when references remote object for the\n first time (default).\n\nDISCONNECT ON { LOGOUT | COMMIT }\n Defines whether connection should be closed when\n user logs out (default), or when transaction is ended (COMMIT,\n ROLLBACK, but also exiting).\n\n2. Additional commands\n\nALTER CONNECTION conn_name\n USING 'conn_string'\n [ CONNECT ON { LOGIN | USE } ]\n [ DISCONNECT ON { LOGOUT | COMMIT } ];\n\nDescription\n Changes behaviour of a defined connection (same parameters\n as for CREATE CONNECTION).\n\n\nDROP CONNECTION conn_name;\n\nDescription\n Hmm... drop the connection definition?\n\n\nAlso a new privilege CONNECT should be added, so\nGRANT CONNECT ON remote_database TO SCOTT;\ncan be processed.\n\n\n3. How to use this?\n\nSELECT local.id, remote.name\n FROM orders local, emp@remote_database remote\n WHERE local.emp_id = remote.id;\n\nSELECT give_a_raise_proc@remote_database(1000);\n\n\n4. 
Some notes (in random order)\n\nIf a 'conn_string' does not contain user/password information,\nconnection is performed using current user identity. But, for SHARED\nconnection always use a 'nobody' account (remember to create\n'nobody' user on remote database). For security reasons\n'conn_string' must be stored in encrypted form.\n\nWhen CONNECT ON LOGIN is used, connection is established\nonly if user has CONNECT privilege granted on this. For TRUSTED\nconnection also superuser rights must be checked.\n\nIf first remote object is accessed within a transaction, a remote\ntransaction should be started. When transaction ends, remote\ntransaction should also be ended same way (commit or rollback).\n\nSHARED connection should be established when first user logs in\nor uses remote object (depends on CONNECT ON clause) and\nterminated when last user ends transaction or disconnects\n(depends on DISCONNECT ON clause). Of course no remote\ntransaction can be performed for SHARED connection.\n\nOf course it would require a lot of work, but it can be done in parts. The\nminimum IMHO can be a SHARED connection with\nCONNECT ON USE and DISCONNECT ON LOGOUT behaviour.\n\n5. Conclusion\n\nI know it is much easier to 'invent' a new functionality than\nto implement it. I also realize this proposal is not complete\nnor coherent. 
Still want to listen/read your opinions about it.\n\nRegards,\n\nMariusz Czulada\n\nP.S.: Is it planned to add 'auto_transaction' parameter on server\nor database levels, so events like login, commit or rollback\nautomatically start a new transaction without 'BEGIN WORK'\n(like Oracle does)?\n\n", "msg_date": "Wed, 03 Oct 2001 00:38:44 +0200", "msg_from": "<manieq@idea.net.pl>", "msg_from_op": true, "msg_subject": "RFD: access to remore databases: altername suggestion" }, { "msg_contents": "You are attacking here two things: \n\na) schemas, which should be done in 7.3, thus multiple databases on same\nhost would be unnecessary.\n\nb) connections to remote host' databases, which is partially implemented\nalready (in an ugly way, but...) see contrib/dblink\n\nWhat you described is syntactic sugar to implement b) which isn't a bad\nidea, but just consider, it is already done. sorta. \n\nOn Wed, 3 Oct 2001 manieq@idea.net.pl wrote:\n\n> Hi!\n> \n> 0. I think access to other databases is really important. There was\n> a discussion about that. Using a dot operator to specify a\n> database (schema) seems to be very standard and elegant.\n> But there is another way to implement it. Here is my\n> suggestion.\n> \n> 1. 
First, some syntax:\n> \n> CREATE [ SHARED ] [ TRUSTED ] CONNECTION conn_name\n> USING 'conn_string'\n> [ CONNECT ON { LOGIN | USE } ]\n> [ DISCONNECT ON { LOGOUT | COMMIT } ];\n> \n> Description\n> Creates a connection definition (Oracle: database link) to\n> a remote database.\n> \n> SHARED\n> Means only one instance of connection exists and is accessible\n> to all qualified users.\n> \n> TRUSTED\n> Only superusers can use this connection (like TRUSTED modifier\n> in CREATE LANGUAGE).\n> \n> conn_name\n> Just an identifier.\n> \n> 'conn_string'\n> Connect string in standard form accepted by libpq\n> 'PQconnectdb' function.\n> \n> CONNECT ON { LOGIN | USE }\n> Defines whether connection should be established when\n> user logs in, or when references remote object for the\n> first time (default).\n> \n> DISCONNECT ON { LOGOUT | COMMIT }\n> Defines whether connection should be closed when\n> user logs out (default), or when transaction is ended (COMMIT,\n> ROLLBACK, but also exiting).\n> \n> 2. Additional commands\n> \n> ALTER CONNECTION conn_name\n> USING 'conn_string'\n> [ CONNECT ON { LOGIN | USE } ]\n> [ DISCONNECT ON { LOGOUT | COMMIT } ];\n> \n> Description\n> Changes behaviour of a defined connection (same parameters\n> as for CREATE CONNECTION).\n> \n> \n> DROP CONNECTION conn_name;\n> \n> Description\n> Hmm... drop the connection definition?\n> \n> \n> Also a new privilege CONNECT should be added, so\n> GRANT CONNECT ON remote_database TO SCOTT;\n> can be processed.\n> \n> \n> 3. How to use this?\n> \n> SELECT local.id, remote.name\n> FROM orders local, emp@remote_database remote\n> WHERE local.emp_id = remote.id;\n> \n> SELECT give_a_raise_proc@rempte_database(1000);\n> \n> \n> 4. Some notes (in random order)\n> \n> If a 'conn_string' does not contain a user/password information,\n> connection is performed using current user identity. But, for SHARED\n> connection always use a 'nobody' account (remeber to create\n> 'nobody' user on remote database). 
For security reasons\n> 'conn_string' must be stored in encrypted form.\n> \n> When CONNECT ON LOGIN is used, connection is etablished\n> only if user has CONNECTprivilege granted on this. For TRUSTED\n> connection also superuser rights must be checked.\n> \n> If first remote object is accessed within a transaction, a remote\n> transaction should be started. When trancaction ends, remote\n> transaction should also be ended same way (commit or rollback).\n> \n> SHARED connection should be established when first user logs in\n> or uses remote object (depends on CONNECT ON clause) and\n> terminated when last user ends transaction or disconnects\n> (depens on DISCONNECT ON clause). Of course no remote\n> transaction can be performed for SHARED connection.\n> \n> Of course it would require lot of work, but can be parted. The\n> minimum IMHO can be a SHARED connection with\n> CONNECT ON USE and DISCONNECT ON LOGOUT behaviour.\n> \n> 5. Conclusion\n> \n> I know it is much easier to 'invent' a new functionality than\n> to implement it. I also realize this proposal is not complete\n> nor coherent. Still want to listen/read your opinions about it.\n> \n> Regards,\n> \n> Mariusz Czulada\n> \n> P.S.: Is it planned to add 'auto_transaction' parameter on server\n> or database levels, so events like login, commit or rolback\n> automaticly start a new transaction without 'BEGIN WORK'\n> (like Oracle does)?\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n> \n\n", "msg_date": "Tue, 2 Oct 2001 18:52:41 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: RFD: access to remore databases: altername suggestion" } ]
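The proposal leans on libpq-style 'conn_string' values throughout. A minimal sketch of handling such a string follows — it splits simple keyword=value pairs only; the single-quoting and backslash escapes that PQconnectdb also accepts are deliberately skipped, so this is an illustrative toy, not the real libpq parser:

```python
def parse_conninfo(conn_string):
    """Split a simple "key=value key=value" string into a dict.

    Real PQconnectdb strings also allow quoted values and escapes;
    this sketch assumes plain whitespace-separated pairs.
    """
    params = {}
    for token in conn_string.split():
        key, sep, value = token.partition("=")
        if not sep:
            raise ValueError("expected key=value, got %r" % token)
        params[key] = value
    return params

print(parse_conninfo("host=db1 port=5432 dbname=sales user=scott"))
```

A validator along these lines is also where a CREATE CONNECTION implementation could reject malformed strings before storing them in encrypted form, as the proposal requires.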
[ { "msg_contents": "In current CVS I see a failure in the btree_gist regression test.\nIt kinda looks like the test data was changed without updating the\nexpected results, but would you verify this?\n\n\t\t\tregards, tom lane\n\n\n*** ./expected/btree_gist.out\tWed Aug 22 14:27:54 2001\n--- ./results/btree_gist.out\tTue Oct 2 18:48:34 2001\n***************\n*** 17,23 ****\n select count(*) from tstmp where t < '2001-05-29 08:33:09+04';\n count \n -------\n! 7\n (1 row)\n \n -- create idx\n--- 17,23 ----\n select count(*) from tstmp where t < '2001-05-29 08:33:09+04';\n count \n -------\n! 66\n (1 row)\n \n -- create idx\n***************\n*** 34,39 ****\n select count(*) from tstmp where t < '2001-05-29 08:33:09+04';\n count \n -------\n! 7\n (1 row)\n \n--- 34,39 ----\n select count(*) from tstmp where t < '2001-05-29 08:33:09+04';\n count \n -------\n! 66\n (1 row)\n \n\n", "msg_date": "Tue, 02 Oct 2001 18:51:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "btree_gist regression test busted?" }, { "msg_contents": "You are right. Please, apply attached patch or copy result/btree_gist.out \nexpected/btree_gist.out\n\n\nTom Lane wrote:\n\n> In current CVS I see a failure in the btree_gist regression test.\n> It kinda looks like the test data was changed without updating the\n> expected results, but would you verify this?\n> \n> \t\t\tregards, tom lane\n> \n> \n> *** ./expected/btree_gist.out\tWed Aug 22 14:27:54 2001\n> --- ./results/btree_gist.out\tTue Oct 2 18:48:34 2001\n> ***************\n> *** 17,23 ****\n> select count(*) from tstmp where t < '2001-05-29 08:33:09+04';\n> count \n> -------\n> ! 7\n> (1 row)\n> \n> -- create idx\n> --- 17,23 ----\n> select count(*) from tstmp where t < '2001-05-29 08:33:09+04';\n> count \n> -------\n> ! 66\n> (1 row)\n> \n> -- create idx\n> ***************\n> *** 34,39 ****\n> select count(*) from tstmp where t < '2001-05-29 08:33:09+04';\n> count \n> -------\n> ! 
7\n> (1 row)\n> \n> --- 34,39 ----\n> select count(*) from tstmp where t < '2001-05-29 08:33:09+04';\n> count \n> -------\n> ! 66\n> (1 row)\n> \n> \n> \n> \n\n\n-- \nTeodor Sigaev\nteodor@stack.net", "msg_date": "Wed, 03 Oct 2001 13:07:07 +0400", "msg_from": "Teodor Sigaev <teodor@stack.net>", "msg_from_op": false, "msg_subject": "Re: btree_gist regression test busted?" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> You are right. Please, apply attached patch or copy result/btree_gist.out \n> expected/btree_gist.out\n> \n> \n> Tom Lane wrote:\n> \n> > In current CVS I see a failure in the btree_gist regression test.\n> > It kinda looks like the test data was changed without updating the\n> > expected results, but would you verify this?\n> > \n> > \t\t\tregards, tom lane\n> > \n> > \n> > *** ./expected/btree_gist.out\tWed Aug 22 14:27:54 2001\n> > --- ./results/btree_gist.out\tTue Oct 2 18:48:34 2001\n> > ***************\n> > *** 17,23 ****\n> > select count(*) from tstmp where t < '2001-05-29 08:33:09+04';\n> > count \n> > -------\n> > ! 7\n> > (1 row)\n> > \n> > -- create idx\n> > --- 17,23 ----\n> > select count(*) from tstmp where t < '2001-05-29 08:33:09+04';\n> > count \n> > -------\n> > ! 66\n> > (1 row)\n> > \n> > -- create idx\n> > ***************\n> > *** 34,39 ****\n> > select count(*) from tstmp where t < '2001-05-29 08:33:09+04';\n> > count \n> > -------\n> > ! 7\n> > (1 row)\n> > \n> > --- 34,39 ----\n> > select count(*) from tstmp where t < '2001-05-29 08:33:09+04';\n> > count \n> > -------\n> > ! 66\n> > (1 row)\n> > \n> > \n> > \n> > \n> \n> \n> -- \n> Teodor Sigaev\n> teodor@stack.net\n> \n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 3 Oct 2001 09:00:05 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: btree_gist regression test busted?" }, { "msg_contents": "\nAppears to have been already applied. Thanks.\n\n> You are right. Please, apply attached patch or copy result/btree_gist.out \n> expected/btree_gist.out\n> \n> \n> Tom Lane wrote:\n> \n> > In current CVS I see a failure in the btree_gist regression test.\n> > It kinda looks like the test data was changed without updating the\n> > expected results, but would you verify this?\n> > \n> > \t\t\tregards, tom lane\n> > \n> > \n> > *** ./expected/btree_gist.out\tWed Aug 22 14:27:54 2001\n> > --- ./results/btree_gist.out\tTue Oct 2 18:48:34 2001\n> > ***************\n> > *** 17,23 ****\n> > select count(*) from tstmp where t < '2001-05-29 08:33:09+04';\n> > count \n> > -------\n> > ! 7\n> > (1 row)\n> > \n> > -- create idx\n> > --- 17,23 ----\n> > select count(*) from tstmp where t < '2001-05-29 08:33:09+04';\n> > count \n> > -------\n> > ! 66\n> > (1 row)\n> > \n> > -- create idx\n> > ***************\n> > *** 34,39 ****\n> > select count(*) from tstmp where t < '2001-05-29 08:33:09+04';\n> > count \n> > -------\n> > ! 7\n> > (1 row)\n> > \n> > --- 34,39 ----\n> > select count(*) from tstmp where t < '2001-05-29 08:33:09+04';\n> > count \n> > -------\n> > ! 66\n> > (1 row)\n> > \n> > \n> > \n> > \n> \n> \n> -- \n> Teodor Sigaev\n> teodor@stack.net\n> \n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 4 Oct 2001 11:45:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: btree_gist regression test busted?" } ]
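The mismatch in this thread is exactly what a regression driver catches by diffing the expected file against the results file. A minimal sketch of that check — Python's difflib stands in for the diff(1) invocation a harness like pg_regress makes, which is an illustrative assumption, not its actual code:

```python
import difflib

def regression_diff(expected, results):
    """Return unified-diff lines; an empty list means the test passed."""
    return list(difflib.unified_diff(
        expected.splitlines(), results.splitlines(),
        "expected/btree_gist.out", "results/btree_gist.out",
        lineterm=""))

expected = "count\n-------\n7\n"
results = "count\n-------\n66\n"
for line in regression_diff(expected, results):
    print(line)
```

When the test data changes, the fix is what Teodor did here: regenerate the expected file from a known-good run so the diff comes back empty.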
[ { "msg_contents": "\nI have used Oracle SQLOADER for many years now. It has the ability to \nput rejects/discards/bad into an output file and keep on going, maybe \nthis should be added to the copy command.\n\n\nCOPY [ BINARY ] table [ WITH OIDS ]\n FROM { 'filename' | stdin }\n [ [USING] DELIMITERS 'delimiter' ]\n [ WITH NULL AS 'null string' ]\n [ DISCARDS 'filename' ] \n\nwhat do you think???\n\n\n> Tom Lane writes:\n> \n> > It occurs to me that skip-the-insert might be a useful option for\n> > INSERTs that detect a unique-key conflict, not only for COPY. (Cf.\n> > the regular discussions we see on whether to do INSERT first or\n> > UPDATE first when the key might already exist.) Maybe a SET \nvariable\n> > that applies to all forms of insertion would be appropriate.\n> \n> What we need is:\n> \n> 1. Make errors not abort the transaction.\n> \n> 2. Error codes\n> \n> Then you can make your client deal with this in which ever way you \nwant,\n> at least for single-value inserts.\n> \n> However, it seems to me that COPY ignoring duplicates can easily be \ndone\n> by preprocessing the input file.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n> \n> \n> ---------------------------(end of broadcast)-------------------------\n--\n> TIP 4: Don't 'kill -9' the postmaster\n> \n> \n\n\n", "msg_date": "Tue, 2 Oct 2001 18:59:42 -0400", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " } ]
[ { "msg_contents": "For some reason, I seem to feel as if the inserts that should be executed by \na rule are not all getting executed, or at least, they are not getting written.\n\nHow can I find out what the rule is really doing? The logs don't say much.\n\nAny help will be great at this moment of stress!!! X->\n\nSaludos... :-)\n\n-- \nPorqué usar una base de datos relacional cualquiera,\nsi podés usar PostgreSQL?\n-----------------------------------------------------------------\nMartín Marqués | mmarques@unl.edu.ar\nProgramador, Administrador, DBA | Centro de Telematica\n Universidad Nacional\n del Litoral\n-----------------------------------------------------------------\n", "msg_date": "Tue, 2 Oct 2001 20:28:07 -0300", "msg_from": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <martin@bugs.unl.edu.ar>", "msg_from_op": true, "msg_subject": "Missing inserts" }, { "msg_contents": "\nIn 7.1.X and earlier the INSERT rules are executed _before_ the INSERT. \nThis is changed to _after_ in 7.2.\n\n> For some reason, I seem to feel as if the inserts that should be executed by \n> a rule are not all getting executed, or at least, they are not getting written.\n> \n> How can I find out what the rule is really doing? The logs don't say much.\n> \n> Any help will be great at this moment of stress!!! X->\n> \n> Saludos... :-)\n> \n> -- \n> Porqué 
usar una base de datos relacional cualquiera,\n> si podés usar PostgreSQL?\n> -----------------------------------------------------------------\n> Martín Marqués | mmarques@unl.edu.ar\n> Programador, Administrador, DBA | Centro de Telematica\n> Universidad Nacional\n> del Litoral\n> -----------------------------------------------------------------\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 Oct 2001 20:59:07 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing inserts" }, { "msg_contents": "On Mar 02 Oct 2001 21:59, you wrote:\n> In 7.1.X and earlier the INSERT rules are executed _before_ the INSERT.\n> This is changed to _after_ in 7.2.\n\nThis would mean...??? 
I haven't had much trouble until now, so I can't \nunderstand why one of the 4 inserts of the rule didn't get through.\n\nIs there some logic?\n\nTIA!\n\n-- \nPorqué usar una base de datos relacional cualquiera,\nsi podés usar PostgreSQL?\n-----------------------------------------------------------------\nMartín Marqués | mmarques@unl.edu.ar\nProgramador, Administrador, DBA | Centro de Telematica\n Universidad Nacional\n del Litoral\n-----------------------------------------------------------------\n", "msg_date": "Wed, 3 Oct 2001 16:43:26 -0300", "msg_from": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <martin@bugs.unl.edu.ar>", "msg_from_op": true, "msg_subject": "Re: Missing inserts" }, { "msg_contents": "On Mié 03 Oct 2001 16:43, you wrote:\n> On Mar 02 Oct 2001 21:59, you wrote:\n> > In 7.1.X and earlier the INSERT rules are executed _before_ the INSERT.\n> > This is changed to _after_ in 7.2.\n>\n> This would mean...??? I haven't had much trouble until now, so I can't\n> understand why one of the 4 inserts of the rule didn't get through.\n\nSorry for answering my own mail, but I found some info.\n\nThis is my rule:\n\nCREATE RULE admin_insert AS ON \nINSERT TO admin_view\nDO INSTEAD (\n INSERT INTO carrera \n\t (carrera,titulo,area,descripcion,incumbencia,director,\n\t matricula,cupos,informes,nivel,requisitos,duracion,\n\t categoria)\n VALUES \n\t (new.carrera,new.titulo,new.id_subarea,new.descripcion,\n\t new.incumbencia,new.director,new.matricula,new.cupos,\n\t new.informes,new.nivel,new.requisitos,new.duracion,\n\t new.car_categ);\n\n INSERT INTO inscripcion\n\t (carrera,fecha_ini,fecha_fin,lugar)\n VALUES\n\t (currval('carrera_id_curso_seq'),new.fecha_ini,new.fecha_fin,\n\t new.lugar);\n\n INSERT INTO resol\n\t (carr,numero,year,fecha)\n VALUES\n\t (currval('carrera_id_curso_seq'),new.numero,new.year,new.fecha);\n\n INSERT INTO log_carrera (accion,tabla,id_col) VALUES \n\t ('I','carrera',currval('carrera_id_curso_seq'));\n);\n\nAll inserts to the tables are 
done to the view (so this rule is used), but I \nhave 39 of a total of 142 inserts that didn't get the second insert of the \nrule to go.\n\nThe question is why is this happening, and how can I fix it?\n\nIf you need logs or something, I have no problem at all.\n\nSaludos... :-)\n\n-- \nPorqué usar una base de datos relacional cualquiera,\nsi podés usar PostgreSQL?\n-----------------------------------------------------------------\nMartín Marqués | mmarques@unl.edu.ar\nProgramador, Administrador, DBA | Centro de Telematica\n Universidad Nacional\n del Litoral\n-----------------------------------------------------------------\n", "msg_date": "Wed, 3 Oct 2001 19:52:02 -0300", "msg_from": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <martin@bugs.unl.edu.ar>", "msg_from_op": true, "msg_subject": "Re: Missing inserts" } ]
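The rule in this thread leans on currval('carrera_id_curso_seq') being evaluated only after the first INSERT has advanced the sequence. A toy model of the nextval()/currval() session contract — plain Python, not PostgreSQL's implementation — shows why the execution order of the rule actions matters:

```python
class Sequence:
    """Toy model of nextval()/currval() session semantics."""

    def __init__(self):
        self._last = None   # no nextval() has run in this "session" yet

    def nextval(self):
        self._last = 1 if self._last is None else self._last + 1
        return self._last

    def currval(self):
        if self._last is None:
            raise RuntimeError("currval called before nextval in session")
        return self._last

seq = Sequence()
# An action that reads currval() before any nextval() has run -- the
# hazard when rule actions fire in an unexpected order -- fails:
try:
    seq.currval()
except RuntimeError as err:
    print(err)
print(seq.nextval(), seq.currval())   # 1 1
```

Under the pre-7.2 "rules run before the INSERT" ordering Bruce mentions, a later rule action can observe a currval() that does not yet reflect the row being inserted, which is one way dependent inserts can go missing or mismatch.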
[ { "msg_contents": "Could you not include characters other than ASCII in the HISTORY file,\nplease.\n\n> Python fix fetchone() (Gerhard Häring)\n--\nTatsuo Ishii\n", "msg_date": "Wed, 03 Oct 2001 11:14:31 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "HISTORY" }, { "msg_contents": "> Could you not include characters other than ASCII in the HISTORY file,\n> please.\n> \n> > Python fix fetchone() (Gerhard Häring)\n\nFixed. Thanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Oct 2001 13:46:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: HISTORY" } ]
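A quick sketch of the kind of check that would flag a non-ASCII HISTORY entry before it ships — plain Python scanning each line for characters outside the ASCII range; the sample text is illustrative:

```python
def non_ascii_lines(text):
    """Return (line_number, line) pairs containing non-ASCII characters."""
    return [(number, line)
            for number, line in enumerate(text.splitlines(), start=1)
            if any(ord(ch) > 127 for ch in line)]

history = "Python fix fetchone() (Gerhard H\u00e4ring)\nASCII-only line\n"
for number, line in non_ascii_lines(history):
    print(number, line)
```

Run over the release notes, this reports exactly the lines Tatsuo is asking about, so they can be transliterated (e.g. "Haering") before the file is distributed.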
[ { "msg_contents": "I've implemented timestamp and time precision per SQL99 spec. The syntax\nis\n\n TIMESTAMP(2) WITH TIME ZONE\nor\n TIME(0)\netc etc.\n\nOne result of this is that \"timestamp\" is no longer a valid external\nfunction name (among other things) due to parser ambiguity between\n\n TIMESTAMP(2)\n\nand, say,\n\n TIMESTAMP(date 'today')\n\n(the latter used to be supported). If you need to explicitly call a\nfunction by that name you need to surround the function name with double\nquotes, as in\n\n select \"timestamp\"(date 'today');\n\nAll regression tests pass, though we will probably need updates to the\n\"pre-1970\" regression results. The CVS notes follow...\n\n - Thomas\n\nImplement precision support for timestamp and time, both with and\nwithout time zones. \nSQL99 spec requires a default of zero (round to seconds) which is set in\ngram.y as typmod is set in the parse tree. We *could* change to a\ndefault of either 6 (for internal compatibility with previous versions)\nor 2 (for external compatibility with previous versions).\nEvaluate entries in pg_proc wrt the iscachable attribute for timestamp\nand other date/time types. Try to recognize cases where side effects\nlike the current time zone setting may have an effect on results to\ndecide whether something is cachable or not.\n", "msg_date": "Wed, 03 Oct 2001 05:41:34 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "timestamp and time now support precision" } ]
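The new typmod amounts to rounding the fractional seconds to p digits. A sketch of that arithmetic, working on an integer microseconds value — an illustrative assumption for exposition, since the server's internal representation isn't shown in the message:

```python
def round_to_precision(microseconds, p):
    """Round a 0..999999 fractional-second value to p digits (0 <= p <= 6)."""
    if not 0 <= p <= 6:
        raise ValueError("precision must be between 0 and 6")
    scale = 10 ** (6 - p)
    # round-half-up on the discarded digits
    return ((microseconds + scale // 2) // scale) * scale

# TIMESTAMP(0), the SQL99 default, rounds to whole seconds:
print(round_to_precision(678901, 0))   # 1000000, i.e. carries into seconds
# TIMESTAMP(2) keeps two fractional digits:
print(round_to_precision(123456, 2))   # 120000, i.e. .12 seconds
```

The first result shows why a precision-0 value can carry into the seconds field, which any real implementation has to propagate upward.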
[ { "msg_contents": "\n> You are attacking here two things: \n> \n> a) schemas, which should be done in 7.3,\n\nIs imho something different alltogether. (I know we have two opposed \nviews here)\n\n> thus multiple databases on same host would be unnecessary.\n\nI disagree :-)\n\n> \n> b) connections to remote host' databases, which is partially\nimplemented\n> already (in a ugly way, but...) see contrib/dblink\n> \n> What you described is a syntactic sugar to implement b) which isn't a\nbad\n> idea, but just consider, it is already done. sorta.\n\nNot in the least. True remote access needs 2 phase commit,\nwhich is nowhere near the horizon. Remote read only access would be \nsomewhat easier to implement, and would imho be a very useful \nfirst step.\n\nAndreas\n", "msg_date": "Wed, 3 Oct 2001 09:47:19 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: RFD: access to remore databases: altername suggestion" } ]
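The "true remote access needs 2 phase commit" point can be illustrated with a minimal coordinator loop. This is a textbook sketch under simplifying assumptions — no crash recovery, no write-ahead logging of the decision — and not anything PostgreSQL shipped at the time:

```python
def two_phase_commit(participants):
    """Phase 1: ask everyone to prepare; phase 2: commit only if all agreed."""
    prepared = []
    for node in participants:
        if node.prepare():
            prepared.append(node)
        else:
            # One "no" vote dooms the transaction everywhere.
            for voted in prepared:
                voted.rollback()
            node.rollback()
            return False
    for node in prepared:
        node.commit()
    return True

class Node:
    """Stand-in for a remote database connection."""
    def __init__(self, vote):
        self.vote, self.state = vote, "active"
    def prepare(self):
        return self.vote
    def commit(self):
        self.state = "committed"
    def rollback(self):
        self.state = "rolled back"

a, b = Node(True), Node(True)
print(two_phase_commit([a, b]), a.state, b.state)
```

Read-only remote access, as Andreas notes, sidesteps all of this: nothing on the remote side needs to prepare, which is why it is the easier first step.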
[ { "msg_contents": "\n> ------------- shell script -------------------\n> for i in 32 64 128 256 512 1024 2048 4096 8192\n> do\n> psql -c \"explain analyze select liketest(a,'aaa') from \n> (select substring('very_long_text' from 0 for $i) as a) as a\" test\n> done\n> ------------- shell script -------------------\n\nI don't think your search string is sufficient for a test. \nWith 'aaa' it actually knows that it only needs to look at the \nfirst three characters of a. Imho you need to try something \nlike liketest(a,'%aaa%').\n\nAndreas \n", "msg_date": "Wed, 3 Oct 2001 10:09:46 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Unicode combining characters " }, { "msg_contents": "> I don't think your search string is sufficient for a test. \n> With 'aaa' it actually knows that it only needs to look at the \n> first three characters of a. Imho you need to try something \n> like liketest(a,'%aaa%').\n\nOk. I ran the modified test (now the iteration is reduced to 100000 in\nliketest()). As you can see, there's a huge difference. MB seems up to\n~8 times slower:-< There seem to be some problems in the\nimplementation. Considering REGEX is not so slow, maybe we should\nemploy the same design as REGEX. i.e. 
using wide characters, not\nmultibyte streams...\n\nMB+LIKE\nTotal runtime: 1321.58 msec\nTotal runtime: 1718.03 msec\nTotal runtime: 2519.97 msec\nTotal runtime: 4187.05 msec\nTotal runtime: 7629.24 msec\nTotal runtime: 14456.45 msec\nTotal runtime: 17320.14 msec\nTotal runtime: 17323.65 msec\nTotal runtime: 17321.51 msec\n\nnoMB+LIKE\nTotal runtime: 964.90 msec\nTotal runtime: 993.09 msec\nTotal runtime: 1057.40 msec\nTotal runtime: 1192.68 msec\nTotal runtime: 1494.59 msec\nTotal runtime: 2078.75 msec\nTotal runtime: 2328.77 msec\nTotal runtime: 2326.38 msec\nTotal runtime: 2330.53 msec\n--\nTatsuo Ishii\n", "msg_date": "Wed, 03 Oct 2001 18:30:01 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> ... There seem to be some problems in the\n> implementation. Considering REGEX is not so slow, maybe we should\n> employ the same design as REGEX. i.e. using wide characters, not\n> multibyte streams...\n\nSeems like a good thing to put on the to-do list. In the meantime,\nwe still have the question of whether to enable multibyte in the\ndefault configuration. I'd still vote YES, as these results seem\nto me to demonstrate that there is no wide-ranging performance penalty.\nA problem confined to LIKE on long strings isn't a showstopper IMHO.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 Oct 2001 10:56:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters " }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > ... There seem to be some problems in the\n> > implementation. Considering REGEX is not so slow, maybe we should\n> > employ the same design as REGEX. i.e. using wide characters, not\n> > multibyte streams...\n> \n> Seems like a good thing to put on the to-do list. 
In the meantime,\n> we still have the question of whether to enable multibyte in the\n> default configuration. I'd still vote YES, as these results seem\n> to me to demonstrate that there is no wide-ranging performance penalty.\n> A problem confined to LIKE on long strings isn't a showstopper IMHO.\n> \n\nAdded to TODO:\n\n* Use wide characters to evaluate regular expressions, for performance\n(Tatsuo) \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 3 Oct 2001 12:05:34 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Added to TODO:\n\n> * Use wide characters to evaluate regular expressions, for performance\n> (Tatsuo) \n\nRegexes are fine; it's LIKE that's slow.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 Oct 2001 13:35:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Added to TODO:\n> \n> > * Use wide characters to evaluate regular expressions, for performance\n> > (Tatsuo) \n> \n> Regexes are fine; it's LIKE that's slow.\n\nOops, thanks. Changed to:\n\n* Use wide characters to evaluate LIKE, for performance (Tatsuo) \n \n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 3 Oct 2001 13:38:45 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> In the meantime, we still have the question of whether to enable\n>> multibyte in the default configuration.\n\n> Perhaps we could make it a release goal for 7.3\n\nYeah, that's probably the best way to proceed... it's awfully late\nin the 7.2 cycle to be deciding to do this now...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 Oct 2001 16:44:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters " }, { "msg_contents": "Tom Lane writes:\n\n> In the meantime, we still have the question of whether to enable\n> multibyte in the default configuration.\n\nThis would make more sense if all of multibyte, locale, and NLS became\ndefaults in one release. I haven't quite sold people in the second item\nyet, although I have a design how to do that (see below). And the third,\nwell who knows...\n\nPerhaps we could make it a release goal for 7.3 to\n\n* Optimize i18n stuff to have a minimal performance penalty when it's not\n used. 
(locale=C etc.)\n\n* Make i18n stuff sufficiently well-behaved to make it the default.\n (Especially, add initdb options and GUC parameters to set the locale.\n Don't rely on environment variables -- too complicated.)\n\nMeanwhile, quadratic performance penalties (or so it seems) for LIKE\nexpressions aren't exactly a \"minor\" problem.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 3 Oct 2001 22:46:55 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters " }, { "msg_contents": "> Tom Lane writes:\n> \n> > In the meantime, we still have the question of whether to enable\n> > multibyte in the default configuration.\n> \n> This would make more sense if all of multibyte, locale, and NLS became\n> defaults in one release. I haven't quite sold people in the second item\n> yet, although I have a design how to do that (see below). And the third,\n> well who knows...\n> \n> Perhaps we could make it a release goal for 7.3 to\n> \n> * Optimize i18n stuff to have a minimal performance penalty when it's not\n> used. (locale=C etc.)\n> \n> * Make i18n stuff sufficiently well-behaved to make it the default.\n> (Especially, add initdb options and GUC parameters to set the locale.\n> Don't rely on environment variables -- too complicated.)\n> \n> Meanwhile, quadratic performance penalties (or so it seems) for LIKE\n> expressions aren't exactly a \"minor\" problem.\n> \n\nAdded to TODO:\n\n* Optimize locale to have minimal performance impact when not used\n(Peter E)\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 3 Oct 2001 18:27:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters" }, { "msg_contents": "> Ok. I ran the modified test (now the iteration is reduced to 100000 in\n> liketest()). As you can see, there's huge difference. MB seems up to\n> ~8 times slower:-< There seems some problems existing in the\n> implementation. Considering REGEX is not so slow, maybe we should\n> employ the same design as REGEX. i.e. using wide charcters, not\n> multibyte streams...\n> \n> MB+LIKE\n> Total runtime: 1321.58 msec\n> Total runtime: 1718.03 msec\n> Total runtime: 2519.97 msec\n> Total runtime: 4187.05 msec\n> Total runtime: 7629.24 msec\n> Total runtime: 14456.45 msec\n> Total runtime: 17320.14 msec\n> Total runtime: 17323.65 msec\n> Total runtime: 17321.51 msec\n> \n> noMB+LIKE\n> Total runtime: 964.90 msec\n> Total runtime: 993.09 msec\n> Total runtime: 1057.40 msec\n> Total runtime: 1192.68 msec\n> Total runtime: 1494.59 msec\n> Total runtime: 2078.75 msec\n> Total runtime: 2328.77 msec\n> Total runtime: 2326.38 msec\n> Total runtime: 2330.53 msec\n\nI did some trials with wide characters implementation and saw\nvirtually no improvement. My guess is the logic employed in LIKE is\ntoo simple to hide the overhead of the multibyte and wide character\nconversion. The reason why REGEX with MB is not so slow would be the\ncomplexity of its logic, I think. As you can see in my previous\npostings, $1 ~ $2 operation (this is logically same as a LIKE '%a%')\nis, for example, almost 80 times slower than LIKE (remember that\nlikest() loops over 10 times more than regextest()).\n\nSo I decided to use a completely different approach. Now like has two\nmatching engines, one for single byte encodings (MatchText()), the\nother is for multibyte ones (MBMatchText()). 
MatchText() is identical\nto the non MB version of it, and virtually no performance penalty for\nsingle byte encodings. MBMatchText() is for multibyte encodings and is\nidentical the one used in 7.1.\n\nHere is the MB case result with SQL_ASCII encoding.\n\nTotal runtime: 901.69 msec\nTotal runtime: 939.08 msec\nTotal runtime: 993.60 msec\nTotal runtime: 1148.18 msec\nTotal runtime: 1434.92 msec\nTotal runtime: 2024.59 msec\nTotal runtime: 2288.50 msec\nTotal runtime: 2290.53 msec\nTotal runtime: 2316.00 msec\n\nTo accomplish this, I moved MatchText etc. to a separate file and now\nlike.c includes it *twice* (similar technique used in regexec()). This\nmakes like.o a little bit larger, but I believe this is worth for the\noptimization.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 04 Oct 2001 11:16:42 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> To accomplish this, I moved MatchText etc. to a separate file and now\n> like.c includes it *twice* (similar technique used in regexec()). This\n> makes like.o a little bit larger, but I believe this is worth for the\n> optimization.\n\nThat sounds great.\n\nWhat's your feeling now about the original question: whether to enable\nmultibyte by default now, or not? I'm still thinking that Peter's\ncounsel is the wisest: plan to do it in 7.3, not today. But this fix\nseems to eliminate the only hard reason we have not to do it today ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 Oct 2001 23:05:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters " }, { "msg_contents": "> What's your feeling now about the original question: whether to enable\n> multibyte by default now, or not? I'm still thinking that Peter's\n> counsel is the wisest: plan to do it in 7.3, not today. 
But this fix\n> seems to eliminate the only hard reason we have not to do it today ...\n\nIf SQL99's I18N stuff would be added in 7.3, it means both the\nmultibyte support and the locale support might disappear, then the 2 might\nbe merged into it (not sure about NLS. is that compatible with, for\nexample, per column charset?)\n\nSo, for non-multibyte users, the multibyte support would be\none-release-only-functionality: suddenly appears in 7.2 and disappears\nin 7.3. I'm afraid this would cause more troubles rather than\nusefulness.\n\nWhat do you think?\n--\nTatsuo Ishii\n", "msg_date": "Thu, 04 Oct 2001 12:43:23 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> What do you think?\n\nI think that we were supposed to go beta a month ago, and so this is\nno time to start adding new features to this release. Let's plan to\nmake this happen (one way or the other) in 7.3, instead.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 Oct 2001 23:49:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters " }, { "msg_contents": "> I think that we were supposed to go beta a month ago, and so this is\n> no time to start adding new features to this release. Let's plan to\n> make this happen (one way or the other) in 7.3, instead.\n\nAgreed.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 04 Oct 2001 13:11:21 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters " }, { "msg_contents": "> > Ok. I ran the modified test (now the iteration is reduced to 100000 in\n> > liketest()). As you can see, there's huge difference. MB seems up to\n> > ~8 times slower:-< There seems some problems existing in the\n> > implementation. Considering REGEX is not so slow, maybe we should\n> > employ the same design as REGEX. 
i.e. using wide characters, not\n> > multibyte streams...\n\nLet me add I think our regex code is very slow. It is the standard BSD\nregex library by Henry Spencer. He rewrote it a few years ago for TCL\n8.X and said he was working on a standalone library version. I have\nasked him several times via email over the years but he still has not\nreleased a standalone version of the new optimized regex code. It is on\nour TODO list.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 4 Oct 2001 00:30:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters" }, { "msg_contents": "Hi,\n\nI should have sent the patch earlier, but got delayed by other stuff.\nAnyway, here is the patch:\n\n- most of the functionality is only activated when MULTIBYTE is\n defined,\n\n- check valid UTF-8 characters, client-side only yet, and only on\n output, you still can send invalid UTF-8 to the server (so, it's\n only partly compliant to Unicode 3.1, but that's better than\n nothing).\n\n- formats with the correct number of columns (that's why I made it in\n the first place after all), but only for UNICODE. However, the code\n allows to plug-in routines for other encodings, as Tatsuo did for\n the other multibyte functions.\n\n- corrects a bit the UTF-8 code from Tatsuo to allow Unicode 3.1\n characters (characters with values >= 0x10000, which are encoded on\n four bytes).\n\n- doesn't depend on the locale capabilities of the glibc (useful for\n remote telnet).\n\nI would like somebody to check it closely, as it is my first patch to\npgsql. 
Also, I created dummy .orig files, so that the two files I\ncreated are included, I hope that's the right way.\n\nNow, a lot of functionality is NOT included here, but I will keep that\nfor 7.3 :) That includes all string checking on the server side (which\nwill have to be a bit more optimised ;) ), and the input checking on\nthe client side for UTF-8, though that should not be difficult. It's\njust to send the strings through mbvalidate() before sending them to\nthe server. Strong checking on UTF-8 strings is mandatory to be\ncompliant with Unicode 3.1+ .\n\nDo I have time to look for a patch to include iso-8859-15 for 7.2 ?\nThe euro is coming 1. january 2002 (before 7.3 !) and over 280\nmillions people in Europe will need the euro sign and only iso-8859-15\nand iso-8859-16 have it (and unfortunately, I don't think all Unices\nwill switch to Unicode in the meantime)....\n\nerr... yes, I know that this is not every single person in Europe that\nuses PostgreSql, so it's not exactly 280m, but it's just a matter of\ntime ! ;)\n\nI'll come back (on pgsql-hackers) later to ask a few questions\nregarding the full unicode support (normalisation, collation,\nregexes,...) 
on the server side :)\n\nHere is the patch !\n\nPatrice.\n\n-- \nPatrice HÉDÉ ------------------------------- patrice à islande org -----\n -- Isn't it weird how scientists can imagine all the matter of the\nuniverse exploding out of a dot smaller than the head of a pin, but they\ncan't come up with a more evocative name for it than \"The Big Bang\" ?\n -- What would _you_ call the creation of the universe ?\n -- \"The HORRENDOUS SPACE KABLOOIE !\" - Calvin and Hobbes\n------------------------------------------ http://www.islande.org/ -----", "msg_date": "Mon, 8 Oct 2001 21:35:44 +0200", "msg_from": "Patrice =?iso-8859-15?Q?H=E9d=E9?= <phede-ml@islande.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Unicode combining characters" }, { "msg_contents": "> - corrects a bit the UTF-8 code from Tatsuo to allow Unicode 3.1\n> characters (characters with values >= 0x10000, which are encoded on\n> four bytes).\n\nAfter applying your patches, do the 4-bytes UTF-8 convert to UCS-2 (2\nbytes) or UCS-4 (4 bytes) in pg_utf2wchar_with_len()? If it were 4\nbytes, we are in trouble. Current regex implementation does not handle\n4 byte width charsets.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 09 Oct 2001 23:16:56 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Unicode combining characters" }, { "msg_contents": "* Tatsuo Ishii <t-ishii@sra.co.jp> [011009 18:38]:\n> > - corrects a bit the UTF-8 code from Tatsuo to allow Unicode 3.1\n> > characters (characters with values >= 0x10000, which are encoded on\n> > four bytes).\n> \n> After applying your patches, do the 4-bytes UTF-8 convert to UCS-2 (2\n> bytes) or UCS-4 (4 bytes) in pg_utf2wchar_with_len()? If it were 4\n> bytes, we are in trouble. 
Current regex implementation does not handle\n> 4 byte width charsets.\n\n*sigh* yes, it does encode to four bytes :(\n\nThree solutions then :\n\n1) we support these supplementary characters, knowing that they won't\n work with regexes,\n\n2) I back out the change, but then anyone using these characters will\n get something weird, since the decoding would be faulty (they would\n be handled as 3 bytes UTF-8 chars, and then the fourth byte would\n become a \"faulty char\"... not very good, as the 3-byte version is\n still not a valid UTF-8 code !),\n\n3) we fix the regex engine within the next 24 hours, before the beta\n deadline is activated :/\n\nI must say that I doubt that anyone will use these characters in the\nnext few months : these are mostly chinese extended characters, with\nold italic, deseret, and gothic scripts, and Byzantine and western\nmusical symbols, as well as the mathematical alphanumerical symbols.\n\nI would prefer solution 1), as I think it is better to allow these\ncharacters, even with a temporary restriction on the regex, than to\nfail completely on them. As for solution 3), we may still work at it\nin the next few months :) [I haven't even looked at the regex engine\nyet, so I don't know the implications of what I have just said !]\n\nWhat do you think ?\n\nPatrice\n\n-- \nPatrice Hédé\nemail: patrice hede à islande org\nwww : http://www.islande.org/\n", "msg_date": "Tue, 9 Oct 2001 19:07:38 +0200", "msg_from": "Patrice =?iso-8859-15?Q?H=E9d=E9?= <phede-ml@islande.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Unicode combining characters" }, { "msg_contents": "> > After applying your patches, do the 4-bytes UTF-8 convert to UCS-2 (2\n> > bytes) or UCS-4 (4 bytes) in pg_utf2wchar_with_len()? If it were 4\n> > bytes, we are in trouble. 
Current regex implementation does not handle\n> > 4 byte width charsets.\n> \n> *sigh* yes, it does encode to four bytes :(\n> \n> Three solutions then :\n> \n> 1) we support these supplementary characters, knowing that they won't\n> work with regexes,\n> \n> 2) I back out the change, but then anyone using these characters will\n> get something weird, since the decoding would be faulty (they would\n> be handled as 3 bytes UTF-8 chars, and then the fourth byte would\n> become a \"faulty char\"... not very good, as the 3-byte version is\n> still not a valid UTF-8 code !),\n> \n> 3) we fix the regex engine within the next 24 hours, before the beta\n> deadline is activated :/\n> \n> I must say that I doubt that anyone will use these characters in the\n> next few months : these are mostly chinese extended characters, with\n> old italic, deseret, and gothic scripts, and Byzantine and western\n> musical symbols, as well as the mathematical alphanumerical symbols.\n> \n> I would prefer solution 1), as I think it is better to allow these\n> characters, even with a temporary restriction on the regex, than to\n> fail completely on them. As for solution 3), we may still work at it\n> in the next few months :) [I haven't even looked at the regex engine\n> yet, so I don't know the implications of what I have just said !]\n> \n> What do you think ?\n\nI think 2) is not very good, and we should reject these 4-bytes UTF-8\nstrings. After all, we are not ready for them.\n\nBTW, other part of your patches looks good. 
Peter, what do you think?\n--\nTatsuo Ishii\n", "msg_date": "Wed, 10 Oct 2001 10:12:01 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Unicode combining characters" }, { "msg_contents": "> > 1) we support these supplementary characters, knowing that they won't\n> > work with regexes,\n> > \n> > 2) I back out the change, but then anyone using these characters will\n> > get something weird, since the decoding would be faulty (they would\n> > be handled as 3 bytes UTF-8 chars, and then the fourth byte would\n> > become a \"faulty char\"... not very good, as the 3-byte version is\n> > still not a valid UTF-8 code !),\n> > \n> > 3) we fix the regex engine within the next 24 hours, before the beta\n> > deadline is activated :/\n> > \n> > What do you think ?\n> \n> I think 2) is not very good, and we should reject these 4-bytes UTF-8\n> strings. After all, we are not ready for them.\n\nIf we still recognise them as 4-byte UTF-8 chars (in order to parse\nthe next char correctly) and reject them as invalid chars, that should\nbe OK :)\n\n> BTW, other part of your patches looks good. Peter, what do you think?\n\nNice to hear :)\n\nPatrice\n\n-- \nPatrice Hédé\nemail: patrice hede à islande org\nwww : http://www.islande.org/\n", "msg_date": "Wed, 10 Oct 2001 19:28:19 +0200", "msg_from": "Patrice =?iso-8859-15?Q?H=E9d=E9?= <phede-ml@islande.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Unicode combining characters" }, { "msg_contents": "I have committed part of Patrice's patches with minor fixes.\nUncommitted changes are related to the backend side, and the reason\ncould be found in the previous discussions (basically this is due to\nthe fact that current regex code does not support UTF-8 chars >=\n0x10000). 
Instead pg_verifymbstr() now rejects UTF-8 chars >= 0x10000.\n--\nTatsuo Ishii\n\n> Hi,\n> \n> I should have sent the patch earlier, but got delayed by other stuff.\n> Anyway, here is the patch:\n> \n> - most of the functionality is only activated when MULTIBYTE is\n> defined,\n> \n> - check valid UTF-8 characters, client-side only yet, and only on\n> output, you still can send invalid UTF-8 to the server (so, it's\n> only partly compliant to Unicode 3.1, but that's better than\n> nothing).\n> \n> - formats with the correct number of columns (that's why I made it in\n> the first place after all), but only for UNICODE. However, the code\n> allows to plug-in routines for other encodings, as Tatsuo did for\n> the other multibyte functions.\n> \n> - corrects a bit the UTF-8 code from Tatsuo to allow Unicode 3.1\n> characters (characters with values >= 0x10000, which are encoded on\n> four bytes).\n> \n> - doesn't depend on the locale capabilities of the glibc (useful for\n> remote telnet).\n> \n> I would like somebody to check it closely, as it is my first patch to\n> pgsql. Also, I created dummy .orig files, so that the two files I\n> created are included, I hope that's the right way.\n> \n> Now, a lot of functionality is NOT included here, but I will keep that\n> for 7.3 :) That includes all string checking on the server side (which\n> will have to be a bit more optimised ;) ), and the input checking on\n> the client side for UTF-8, though that should not be difficult. It's\n> just to send the strings through mbvalidate() before sending them to\n> the server. Strong checking on UTF-8 strings is mandatory to be\n> compliant with Unicode 3.1+ .\n> \n> Do I have time to look for a patch to include iso-8859-15 for 7.2 ?\n> The euro is coming 1. january 2002 (before 7.3 !) 
and over 280\n> millions people in Europe will need the euro sign and only iso-8859-15\n> and iso-8859-16 have it (and unfortunately, I don't think all Unices\n> will switch to Unicode in the meantime)....\n> \n> err... yes, I know that this is not every single person in Europe that\n> uses PostgreSql, so it's not exactly 280m, but it's just a matter of\n> time ! ;)\n> \n> I'll come back (on pgsql-hackers) later to ask a few questions\n> regarding the full unicode support (normalisation, collation,\n> regexes,...) on the server side :)\n> \n> Here is the patch !\n> \n> Patrice.\n> \n> -- \n> Patrice HÉDÉ ------------------------------- patrice à islande org -----\n> -- Isn't it weird how scientists can imagine all the matter of the\n> universe exploding out of a dot smaller than the head of a pin, but they\n> can't come up with a more evocative name for it than \"The Big Bang\" ?\n> -- What would _you_ call the creation of the universe ?\n> -- \"The HORRENDOUS SPACE KABLOOIE !\" - Calvin and Hobbes\n> ------------------------------------------ http://www.islande.org/ -----\n", "msg_date": "Mon, 15 Oct 2001 10:26:19 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Unicode combining characters" } ]
[ { "msg_contents": "Is it a bug or CEST timezone is not supported anymore ?\nI can't import my 7.1.2 database to current development version of\npostgresql\n\n\nlis=# create table test (ts timestamp);\nCREATE\nlis=# insert into test values ('23.05.2000 09:06:59.00 CEST');\nERROR: Bad timestamp external representation '23.05.2000 09:06:59.00\nCEST'\nlis=# insert into test values ('23.05.2000 09:06:59.00 CET');\nINSERT 125614 1\nlis=# select * from test;\n ts\n--------------------------\n 23.05.2000 10:06:59 CEST\n(1 row)\nlis=# select version();\n version\n----------------------------------------------------------------\n PostgreSQL 7.2devel on i686-pc-linux-gnu, compiled by GCC 2.96\n(1 row)\n\nlis=#\n\n\nJan Varga\n\n\n", "msg_date": "Wed, 03 Oct 2001 10:24:58 +0200", "msg_from": "Jan Varga <varga@utcru.sk>", "msg_from_op": true, "msg_subject": "CEST timezone" }, { "msg_contents": "> Is it a bug or CEST timezone is not supported anymore ?\n> I can't import my 7.1.2 database to current development version of\n> postgresql\n\nafaik \"CEST\" was never supported by PostgreSQL. Can you please confirm\nthat this is the same as \"CETDST\" (Central European Time, Daylight\nSavings Time) or, perhaps, \"CET\" (Central European Standard Time)? 
We\ncan add \"CEST\" as a synonym once I understand what it is supposed to be\n;)\n\nMy guess is that your timezone database on your OS has changed; I've\nlooked back to versions from the beginning of 2000 and didn't see any\nmention of CEST in the PostgreSQL code.\n\n - Thomas\n", "msg_date": "Wed, 03 Oct 2001 18:01:07 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: CEST timezone" }, { "msg_contents": "> Thanks for your reply.\n> Yes, CEST equals to CETDST\n> please add CEST as a synonym to existing timezone code (if it is possible)\n\nDone in my sources; will be committed soon.\n\n - Thomas\n", "msg_date": "Wed, 03 Oct 2001 22:05:07 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: CEST timezone" } ]
[ { "msg_contents": "\n> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > ... There seems some problems existing in the\n> > implementation. Considering REGEX is not so slow, maybe we should\n> > employ the same design as REGEX. i.e. using wide characters, not\n> > multibyte streams...\n> \n> Seems like a good thing to put on the to-do list. In the meantime,\n> we still have the question of whether to enable multibyte in the\n> default configuration. I'd still vote YES, as these results seem\n> to me to demonstrate that there is no wide-ranging performance\npenalty.\n> A problem confined to LIKE on long strings isn't a showstopper IMHO.\n\nAs I said, with a valid non-anchored like expression the performance \ndifference was substantial, even for shorter strings it was 37%. \nThe test with \"like 'aaa'\" was not a good test case, and we should not \ndeduce anything from that.\n\nAndreas\n", "msg_date": "Wed, 3 Oct 2001 17:23:47 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Unicode combining characters " } ]
[ { "msg_contents": "\nHi,\n\n\tI am currently struggling to write a \"serialize\"-like function\nthat would dump a row of a table into a string-like object in a way that\nwould allow me to reconstruct the original row (or its individual\nelements) from this object.\n\n\tThe tentative plan I have is something like this:\n\n\t1) Write a C-function declared something like so:\n\n\t Datum serialise(PG_FUNCTION_ARGS)\n\n\t2) Inside the function, get the pointer to the row using:\n\t\n\t TupleTableSlot *row = PG_GETARGPOINTER(0);\n\n\t3) Use GetAttributeByName to get the \"Datum\" value corresponding \n\t to each of the attributes.\n\nThis is where I am stuck. What I want to do now is to use this datum value\nreturned by GetAttributeByName to get at the \"glob of memory\" occupied by\nthe attribute and \"memmove\" it into an area declared as text. I could then\nstore this text as a row in a table. Is this at all possible or am I\ntalking through my hat ;) \n\nI am sorry I don't understand the backend variable storage and \ntypes too well and would be grateful for some help. \n\nRegards and Thanks,\nGurunandan\n\n\n\n", "msg_date": "Wed, 3 Oct 2001 22:37:27 +0530 (IST)", "msg_from": "\"Gurunandan R. Bhat\" <grbhat@exocore.com>", "msg_from_op": true, "msg_subject": "Dumping variables..A sort of serialize" } ]
[ { "msg_contents": "With current CVS, I did\n\nregression=# create table foo (f1 date default current_date,\nregression(# f2 time default current_time,\nregression(# f3 timestamp default current_timestamp);\nCREATE\nregression=# \\d foo\n Table \"foo\"\n Column | Type | Modifiers\n--------+--------------------------+----------------------------------\n f1 | date | default date('now'::text)\n f2 | time | default \"time\"('now'::text)\n f3 | timestamp with time zone | default \"timestamp\"('now'::text)\n\nregression=# insert into foo default values;\nINSERT 139633 1\nregression=# insert into foo default values;\nINSERT 139634 1\nregression=# select * from foo;\n f1 | f2 | f3\n------------+----------+------------------------\n 2001-10-03 | 13:15:37 | 2001-10-03 13:15:37-04\n 2001-10-03 | 13:15:49 | 2001-10-03 13:15:50-04\n(2 rows)\n\n\nIt's fairly disconcerting that f2 and f3 don't agree, wouldn't you say?\nFurther experimentation shows that it happens about half the time, with\nthe timestamp always one second ahead of the time when they differ.\nI infer that the new sub-second-resolution transaction timestamp is\nbeing correctly rounded when stored as a timestamp, but is truncated not\nrounded when stored as a time. 
Type timetz shows the same misbehavior.\nNot sure where to look for this ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 Oct 2001 13:28:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Rounding issue with current_time" }, { "msg_contents": "Further experimentation:\n\nregression=# create table foo3 (f1 date default current_date,\nregression(# f2 time(3) default current_time,\nregression(# f3 timestamp(3) default current_timestamp);\nCREATE\nregression=# insert into foo3 default values;\n(multiple times)\nregression=# select * from foo3;\n f1 | f2 | f3\n------------+----------+-----------------------------\n 2001-10-03 | 13:32:07 | 2001-10-03 13:32:07-04\n 2001-10-03 | 13:32:08 | 2001-10-03 13:32:08.3020-04\n 2001-10-03 | 13:32:09 | 2001-10-03 13:32:09.4280-04\n 2001-10-03 | 13:32:10 | 2001-10-03 13:32:10.2530-04\n 2001-10-03 | 13:32:10 | 2001-10-03 13:32:10.8850-04\n 2001-10-03 | 13:32:11 | 2001-10-03 13:32:11.2930-04\n 2001-10-03 | 13:32:11 | 2001-10-03 13:32:11.6650-04\n 2001-10-03 | 13:32:12 | 2001-10-03 13:32:12.04-04\n 2001-10-03 | 13:32:13 | 2001-10-03 13:32:13.3730-04\n(9 rows)\n\nSo the real issue appears to be that subsecond resolution isn't\npropagating into time and timetz at all.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 Oct 2001 13:34:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Rounding issue with current_time " }, { "msg_contents": "...\n> It's fairly disconcerting that f2 and f3 don't agree, wouldn't you say?\n\n:) I'll look at it.\n\n - Thomas\n", "msg_date": "Wed, 03 Oct 2001 18:12:09 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Rounding issue with current_time" }, { "msg_contents": "BTW, would you object to my removing the macros\nIS_BUILTIN_TYPE(), IS_HIGHER_TYPE(), IS_HIGHEST_TYPE()\nfrom parse_coerce.h? 
They are used nowhere and are not\nbeing maintained --- eg, they don't seem to know about\nTIMESTAMPTZ.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 Oct 2001 14:13:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Rounding issue with current_time " }, { "msg_contents": "...\n> So the real issue appears to be that subsecond resolution isn't\n> propagating into time and timetz at all.\n\nAh. Of course it isn't, because I (probably) didn't change\nDecodeTimeOnly() to use the microsecond resolution version of the\ntransaction time. Will look at it.\n\n - Thomas\n", "msg_date": "Wed, 03 Oct 2001 18:15:35 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Rounding issue with current_time" }, { "msg_contents": "> BTW, would you object to my removing the macros\n> IS_BUILTIN_TYPE(), IS_HIGHER_TYPE(), IS_HIGHEST_TYPE()\n> from parse_coerce.h? They are used nowhere and are not\n> being maintained --- eg, they don't seem to know about\n> TIMESTAMPTZ.\n\nOK. I had already stripped out some \"#if NOT_USED\" code but must have\nmissed those.\n\n - Thomas\n", "msg_date": "Wed, 03 Oct 2001 18:20:54 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Rounding issue with current_time" } ]
[ { "msg_contents": "Bruce,\n\nI notice HISTORY in CVS doesn't mentioned any development we did\nwith GiST. Should we write some info ? Major things we did:\n\n1. Null-safe interface to GiST\n2. Support of multi-key GiST indices\n\nTODO\nAdding concurrency for GiST\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 3 Oct 2001 21:20:28 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "HISTORY for 7.2" }, { "msg_contents": "> Bruce,\n> \n> I notice HISTORY in CVS doesn't mentioned any development we did\n> with GiST. Should we write some info ? Major things we did:\n> \n> 1. Null-safe interface to GiST\n> 2. Support of multi-key GiST indices\n\nI had generic GIST improvements. Updated to:\n\nAllow GIST to handle NULLs and multi-key indexes (Oleg Bartunov, Teodor\n Sigaev, Tom)\n\n> \n> TODO\n> Adding concurrency for GiST\n\nAdded.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 3 Oct 2001 14:37:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: HISTORY for 7.2" } ]
[ { "msg_contents": "#create table t (v varchar);\n#insert into t values ('0123456789a0123456789b0123456789c0123456789d');\n\n#select v from t;\n\n v \n----------------------------------------------\n 0123456789a0123456789b0123456789c0123456789d\n(1 row)\n\nSo far, so good.\n\n#select text(v) from t;\n\n text \n---------------------------------\n 0123456789a0123456789b012345678\n(1 row)\n\nTruncation occurs.\n\nWork around:\n\n# select v::text from t;\n ?column? \n----------------------------------------------\n 0123456789a0123456789b0123456789c0123456789d\n(1 row)\n\nI couldnt figure out what happens during a text(varchar) call. I looked\naround in pg_proc, but couldnt find the function. There's probably an\nautomagic type conversion going on or something.\n\nCould someone explain what all the internal varchar-like types are (ie.\nvarchar,varchar(n),text,char,_char,bpchar) and when they're used? I\nfind it all really confusing - I'm sure others do too.\n\nIs there anyway to determine what postgresql is doing in its automagic\nfunction calls? I guess I'm asking for an EXPLAIN that describes\nfunction calls. For example, \nEXPLAIN select text(v) from t;\n\n--> {Description of conversion from varchar to whatever the text()\nfunction actually works on}\n\n\nThanks,\ndave\n", "msg_date": "Wed, 03 Oct 2001 11:39:19 -0700", "msg_from": "Dave Blasby <dblasby@refractions.net>", "msg_from_op": true, "msg_subject": "BUG: text(varchar) truncates at 31 bytes" }, { "msg_contents": "\nI can confirm this problem exists in current sources. 
Quite strange.\n\n> #create table t (v varchar);\n> #insert into t values ('0123456789a0123456789b0123456789c0123456789d');\n> \n> #select v from t;\n> \n> v \n> ----------------------------------------------\n> 0123456789a0123456789b0123456789c0123456789d\n> (1 row)\n> \n> So far, so good.\n> \n> #select text(v) from t;\n> \n> text \n> ---------------------------------\n> 0123456789a0123456789b012345678\n> (1 row)\n> \n> Truncation occurs.\n> \n> Work around:\n> \n> # select v::text from t;\n> ?column? \n> ----------------------------------------------\n> 0123456789a0123456789b0123456789c0123456789d\n> (1 row)\n> \n> I couldnt figure out what happens during a text(varchar) call. I looked\n> around in pg_proc, but couldnt find the function. There's probably an\n> automagic type conversion going on or something.\n> \n> Could someone explain what all the internal varchar-like types are (ie.\n> varchar,varchar(n),text,char,_char,bpchar) and when they're used? I\n> find it all really confusing - I'm sure others do too.\n> \n> Is there anyway to determine what postgresql is doing in its automagic\n> function calls? I guess I'm asking for an EXPLAIN that describes\n> function calls. For example, \n> EXPLAIN select text(v) from t;\n> \n> --> {Description of conversion from varchar to whatever the text()\n> function actually works on}\n> \n> \n> Thanks,\n> dave\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 3 Oct 2001 14:56:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG: text(varchar) truncates at 31 bytes" }, { "msg_contents": "\n> #select text(v) from t;\n> \n> text \n> ---------------------------------\n> 0123456789a0123456789b012345678\n> (1 row)\n> \n> Truncation occurs.\n\nLooking at the explain verbose output, it looks\nlike it may be doing a conversion to name because\nit looks like there isn't a text(varchar), but\nthere's a text(name) and a name(varchar). My \nguess is there's no text(varchar) because they're\nconsidered binary compatible.\n\n> Work around:\n> \n> # select v::text from t;\n> ?column? \n> ----------------------------------------------\n> 0123456789a0123456789b0123456789c0123456789d\n> (1 row)\n\nThese types are probably marked as binary compatible, so\nnothing major has to happen in the type conversion. Same\nthing happens in CAST(v AS text).\n\n> Is there anyway to determine what postgresql is doing in its automagic\n> function calls? I guess I'm asking for an EXPLAIN that describes\n> function calls. For example, \n> EXPLAIN select text(v) from t;\nYou can use EXPLAIN VERBOSE if you're willing to wade through the output.\n:)\n\n\n", "msg_date": "Wed, 3 Oct 2001 11:59:34 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: BUG: text(varchar) truncates at 31 bytes" }, { "msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> Looking at the explain verbose output, it looks like it may be doing a\n> conversion to name because it looks like there isn't a text(varchar),\n> but there's a text(name) and a name(varchar). 
My guess is there's no\n> text(varchar) because they're considered binary compatible.\n\nSince the truncation is to 31 characters, it seems clear that a\nconversion to \"name\" happened.\n\nI think the reason for this behavior is that the possibility of a\n\"freebie\" binary-compatible conversion is not considered until all else\nfails (see parse_func.c: it's only considered after func_get_detail\nfails). Unfortunately func_get_detail is willing to consider all sorts\nof implicit conversions, so these secondary possibilities end up being\nthe chosen alternative.\n\nPerhaps it'd be a better idea for the option of a freebie conversion\nto be checked earlier, say immediately after we discover there is no\nexact match for the function name and input type. Thomas, what do you\nthink?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 Oct 2001 18:10:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG: text(varchar) truncates at 31 bytes " }, { "msg_contents": "...\n> Perhaps it'd be a better idea for the option of a freebie conversion\n> to be checked earlier, say immediately after we discover there is no\n> exact match for the function name and input type. Thomas, what do you\n> think?\n\nWe *really* need that catalog lookup first. Otherwise, we will never be\nable to override the hardcoded compatibility assumptions in that\nmatching routine. Once we push that routine into a system catalog, we'll\nhave more flexibility to tune things after the fact.\n\nWithout the explicit function call, things would work just fine for the\nexample at hand, right?\n\nI could put in a dummy passthrough routine. 
But that seems a bit ugly.\n\n - Thomas\n", "msg_date": "Wed, 03 Oct 2001 23:38:47 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: BUG: text(varchar) truncates at 31 bytes" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> Perhaps it'd be a better idea for the option of a freebie conversion\n>> to be checked earlier, say immediately after we discover there is no\n>> exact match for the function name and input type. Thomas, what do you\n>> think?\n\n> We *really* need that catalog lookup first. Otherwise, we will never be\n> able to override the hardcoded compatibility assumptions in that\n> matching routine.\n\nSure, I said *after* we fail to find an exact match. But the \"freebie\"\nmatch is for a function name that matches a type name and is\nbinary-compatible with the source type. That's not a weak constraint.\nISTM that interpretation should take priority over interpretations that\ninvolve more than one level of transformation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 Oct 2001 22:47:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG: text(varchar) truncates at 31 bytes " }, { "msg_contents": "...\n> Sure, I said *after* we fail to find an exact match. But the \"freebie\"\n> match is for a function name that matches a type name and is\n> binary-compatible with the source type. That's not a weak constraint.\n> ISTM that interpretation should take priority over interpretations that\n> involve more than one level of transformation.\n\nAh, OK I think. 
If there is a counterexample, it is probably no less\nobscure than this one.\n\n - Thomas\n", "msg_date": "Thu, 04 Oct 2001 04:46:45 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: BUG: text(varchar) truncates at 31 bytes" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> Sure, I said *after* we fail to find an exact match. But the \"freebie\"\n>> match is for a function name that matches a type name and is\n>> binary-compatible with the source type. That's not a weak constraint.\n>> ISTM that interpretation should take priority over interpretations that\n>> involve more than one level of transformation.\n\n> Ah, OK I think. If there is a counterexample, it is probably no less\n> obscure than this one.\n\nDone. Essentially, this amounts to interchanging steps 2 and 3 of the\nfunction call resolution rules described at\nhttp://www.ca.postgresql.org/users-lounge/docs/7.1/postgres/typeconv-func.html\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Oct 2001 18:08:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG: text(varchar) truncates at 31 bytes " }, { "msg_contents": "\nI can confirm this is fixed in current sources. Thanks for the report.\n\n---------------------------------------------------------------------------\n\n> #create table t (v varchar);\n> #insert into t values ('0123456789a0123456789b0123456789c0123456789d');\n> \n> #select v from t;\n> \n> v \n> ----------------------------------------------\n> 0123456789a0123456789b0123456789c0123456789d\n> (1 row)\n> \n> So far, so good.\n> \n> #select text(v) from t;\n> \n> text \n> ---------------------------------\n> 0123456789a0123456789b012345678\n> (1 row)\n> \n> Truncation occurs.\n> \n> Work around:\n> \n> # select v::text from t;\n> ?column? 
\n> ----------------------------------------------\n> 0123456789a0123456789b0123456789c0123456789d\n> (1 row)\n> \n> I couldnt figure out what happens during a text(varchar) call. I looked\n> around in pg_proc, but couldnt find the function. There's probably an\n> automagic type conversion going on or something.\n> \n> Could someone explain what all the internal varchar-like types are (ie.\n> varchar,varchar(n),text,char,_char,bpchar) and when they're used? I\n> find it all really confusing - I'm sure others do too.\n> \n> Is there anyway to determine what postgresql is doing in its automagic\n> function calls? I guess I'm asking for an EXPLAIN that describes\n> function calls. For example, \n> EXPLAIN select text(v) from t;\n> \n> --> {Description of conversion from varchar to whatever the text()\n> function actually works on}\n> \n> \n> Thanks,\n> dave\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Oct 2001 13:47:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG: text(varchar) truncates at 31 bytes" } ]
[ { "msg_contents": "\nProblem: the external representation of time and timestamp are\n less precise than the internal representation.\n\nWe are using postgresql 7.1.3\n\nThe timestamp and time types support resolving microseconds (6 places beyond the decimal), however the output routines round the value to only 2 decimal places.\n\nThis causes data degradation, if a table with timestamps is copied out and then copied back in, as the timestamps lose precision.\n\nWe feel this is a data integrity issue. Copy out (ascii) does not maintain the consistency of the data it copies.\n\nIn our application, we depend on millisecond resolution timestamps and often need to copy out/copy back tables. The current timestamp formating in postgresql 7.1.x breaks this badly.\n\nA work around for display might be to use to_char(). But for copy the only workaround we have found is to use binary copy. Alas, binary copy does not work for server to client copies.\n\nUnfortunately, we need to copy to the client machine. The client copy does not support binary copies so we lose precision.\n\nOur suggested fix to this problem is to change the encoding of the fractional seconds part of the datetime and time types in datetime.c\n(called by timestamp_out, time_out) to represent least 6 digits beyond the decimal (ie \"%0.6f\"). A configurable format would also work.\n\nIf there is another way to force the encoding to be precise we'd love to hear about it. 
Otherwise this appears to be a silent data integrity bug with unacceptable workarounds.\n\nThanks!\n\nLaurette Cisneros (laurette@nextbus.com)\nElein Mustain\n\nNextBus Information Systems\n\n\n", "msg_date": "Wed, 3 Oct 2001 17:02:59 -0700 (PDT)", "msg_from": "Laurette Cisneros <laurette@nextbus.com>", "msg_from_op": true, "msg_subject": "Timestamp, fractional seconds problem" }, { "msg_contents": "> Problem: the external representation of time and timestamp are\n> less precise than the internal representation.\n\nFixed (as of yesterday) in the upcoming release.\n\n - Thomas\n", "msg_date": "Thu, 04 Oct 2001 04:39:19 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Timestamp, fractional seconds problem" }, { "msg_contents": "On Wed, Oct 03, 2001 at 05:02:59PM -0700, Laurette Cisneros wrote:\n\n> A work around for display might be to use to_char().\n\n In 7.2 is possible use millisecond / microsecond format:\n\n# select to_char(now()+'2s 324 ms'::interval, 'HH:MM:SS MS');\n to_char\n--------------\n 10:10:59 324\n(1 row)\n\n# select to_char(now()+'2s 324 ms 10 microsecon'::interval, 'HH:MM:SS US');\n to_char\n-----------------\n 10:10:03 324010\n(1 row)\n\n\tKarel \n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Thu, 4 Oct 2001 11:01:26 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Timestamp, fractional seconds problem" }, { "msg_contents": "This is very good news. 
Thanks to all for the response.\n\nL.\nOn Thu, 4 Oct 2001, Karel Zak wrote:\n\n> On Wed, Oct 03, 2001 at 05:02:59PM -0700, Laurette Cisneros wrote:\n>\n> > A work around for display might be to use to_char().\n>\n> In 7.2 is possible use millisecond / microsecond format:\n>\n> # select to_char(now()+'2s 324 ms'::interval, 'HH:MM:SS MS');\n> to_char\n> --------------\n> 10:10:59 324\n> (1 row)\n>\n> # select to_char(now()+'2s 324 ms 10 microsecon'::interval, 'HH:MM:SS US');\n> to_char\n> -----------------\n> 10:10:03 324010\n> (1 row)\n>\n> \tKarel\n>\n>\n\n-- \nLaurette Cisneros\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\nPassenger Information Everywhere\n\n", "msg_date": "Thu, 4 Oct 2001 09:56:10 -0700 (PDT)", "msg_from": "Laurette Cisneros <laurette@nextbus.com>", "msg_from_op": true, "msg_subject": "Re: Timestamp, fractional seconds problem" }, { "msg_contents": "Thomas,\n\nCan you explain more how this functionality has changed? I know that in \nthe JDBC driver fractional seconds are assumed to be two decimal places. \n If this is no longer true, I need to understand the new semantics so \nthat the JDBC parsing routines can be changed. Other interfaces may \nhave similar issues.\n\nthanks,\n--Barry\n\nThomas Lockhart wrote:\n\n>>Problem: the external representation of time and timestamp are\n>> less precise than the internal representation.\n>>\n> \n> Fixed (as of yesterday) in the upcoming release.\n> \n> - Thomas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n> \n\n\n", "msg_date": "Thu, 04 Oct 2001 10:36:26 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Timestamp, fractional seconds problem" }, { "msg_contents": "Hi Thomas,\n\nCould I get some more specific information on how this is fixed. 
Keep in mind that using to_char() is not an option for us in that we need to use COPY to/from the client.\n\nThanks,\n\nL.\n\nOn Thu, 4 Oct 2001, Thomas Lockhart wrote:\n\n> > Problem: the external representation of time and timestamp are\n> > less precise than the internal representation.\n>\n> Fixed (as of yesterday) in the upcoming release.\n>\n> - Thomas\n>\n\n-- \nLaurette Cisneros\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\nPassenger Information Everywhere\n\n", "msg_date": "Thu, 4 Oct 2001 10:46:10 -0700 (PDT)", "msg_from": "Laurette Cisneros <laurette@nextbus.com>", "msg_from_op": true, "msg_subject": "Re: Timestamp, fractional seconds problem" }, { "msg_contents": "Laurette Cisneros wrote:\n> \n> Could I get some more specific information on how this is fixed. 
Keep in mind that using to_char() is not an option for us in that we need to use COPY to/from the client.\n\nI'm finishing up implementing SQL99-style precision features in\ntimestamp et al, so there will no longer be an arbitrary rounding of\ntime to 2 decimal places when values are output. There will of course be\n*other* issues for you to worry about, since the default precision\nspecified by SQL99 is zero decimal places...\n\n - Thomas\n", "msg_date": "Thu, 04 Oct 2001 18:00:22 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Timestamp, fractional seconds problem" }, { "msg_contents": "Thanks Thomas...at least there will be a way to specify more than 2. we are looking forward to this release...\n\nL.\nOn Thu, 4 Oct 2001, Thomas Lockhart wrote:\n\n> Laurette Cisneros wrote:\n> >\n> > Could I get some more specific information on how this is fixed. Keep in mind that using to_char() is not an option for us in that we need to use COPY to/from the client.\n>\n> I'm finishing up implementing SQL99-style precision features in\n> timestamp et al, so there will no longer be an arbitrary rounding of\n> time to 2 decimal places when values are output. There will of course be\n> *other* issues for you to worry about, since the default precision\n> specified by SQL99 is zero decimal places...\n>\n> - Thomas\n>\n\n-- \nLaurette Cisneros\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\nPassenger Information Everywhere\n\n", "msg_date": "Thu, 4 Oct 2001 11:59:44 -0700 (PDT)", "msg_from": "Laurette Cisneros <laurette@nextbus.com>", "msg_from_op": true, "msg_subject": "Re: Timestamp, fractional seconds problem" }, { "msg_contents": "> Can you explain more how this functionality has changed? I know that in\n> the JDBC driver fractional seconds are assumed to be two decimal places.\n> If this is no longer true, I need to understand the new semantics so\n> that the JDBC parsing routines can be changed. Other interfaces may\n> have similar issues.\n\nOK. (Remember that the new behaviors can be changed if this doesn't work\nfor you).\n\nFormerly, all times had either zero or two fractional decimal places.\nNow, times are explicitly truncated to their defined precision at a few\nspecific points in processing (e.g. when reading a literal constant or\nwhen storing into a column). 
At all other points in processing, the\nvalues are allowed to take on whatever fractional digits might have come\nfrom math operations or whatever.\n\nThe output routines now write the maximum number of fractional digits\nreasonably present for a floating point number (10 for time, should be\nbut isn't less for timestamp) and then trailing zeros are hacked out,\ntwo digits at a time.\n\nThe regression tests produced basically the same results as always, once\nthe time and timestamp columns were defined to be \"time(2)\" or\n\"timestamp(2)\".\n\nBut there is definitely the possibility of more precision than before in\nthe output string for time fields.\n\n - Thomas\n", "msg_date": "Thu, 04 Oct 2001 20:13:17 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Timestamp, fractional seconds problem" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> ... then trailing zeros are hacked out,\n> two digits at a time.\n\nI was wondering why it seemed to always want to produce an even number\nof fractional digits. Why are you doing it 2 at a time and not 1?\nI should think timestamp(1) would produce 1 fractional digit, not\ntwo digits of which the second is always 0 ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Oct 2001 16:48:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Timestamp, fractional seconds problem " }, { "msg_contents": "> > ... then trailing zeros are hacked out,\n> > two digits at a time.\n> I was wondering why it seemed to always want to produce an even number\n> of fractional digits. Why are you doing it 2 at a time and not 1?\n> I should think timestamp(1) would produce 1 fractional digit, not\n> two digits of which the second is always 0 ...\n\nHmm. Good point wrt timestamp(1). 
I hack out two digits at a time to get\nconvergence on a behavior consistent with previous releases of having\n(at least) two digits of precision (not one or three). I was trying to\nminimize the impact of the other changes.\n\nNote that another \"arbitrary difference\" is that, by default, TIMESTAMP\nis actually TIMESTAMP WITH TIME ZONE. SQL99 specifies otherwise, but\nthere would seem to be fewer porting and upgrade issues for 7.2 if we\nchoose the current behavior.\n\nNot sure where pg_dump and other utilities gin up the SQL9x type names,\nbut we should fix things during beta to be consistent.\n\n - Thomas\n", "msg_date": "Thu, 04 Oct 2001 21:02:02 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Timestamp, fractional seconds problem" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> Not sure where pg_dump and other utilities gin up the SQL9x type names,\n> but we should fix things during beta to be consistent.\n\nI believe pg_dump and psql are already okay now that I fixed\nformat_type. Not sure if there are dependencies in other utilities.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Oct 2001 17:32:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Timestamp, fractional seconds problem" } ]
[ { "msg_contents": "Hi people,\n\nIs it possible for Postgresql to bind to one IP address? \n\nI'm trying to run multiple postgresql installations on one server.\n\nThe unix socket could be named accordingly:\n\nPostgresql config bound to a particular port and all IPs.\n.s.PGSQL.<portnumber> \n\nPostgresql config bound to a particular port and IP.\n.s.PGSQL.<portnumber>.<ipaddress>\n\nAny other suggestions/comments on running multiple instances of postgresql\nare welcomed.\n\nAn less desirable alternative is to keep binding to all IP, use different\nports and name the ports, but specifying the port by name in -p doesn't work. \n\nCheerio,\nLink.\n\n", "msg_date": "Thu, 04 Oct 2001 11:15:52 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": true, "msg_subject": "Feature suggestion: Postgresql binding to one IP?" }, { "msg_contents": "Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> Is it possible for Postgresql to bind to one IP address? \n\nSee 'virtual_host' GUC parameter.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 Oct 2001 23:16:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Feature suggestion: Postgresql binding to one IP? 
" }, { "msg_contents": "Hi Lincoln,\n\nNot sure why you would want to run multiple instances, since you can run\nmultiple dbs if you want to maintain separate environments but if you really\nneed to do this, the postmaster has some options which control ip/port\nbinds:\n\n[pritchma@blade pritchma]$ /usr/local/pgsql/bin/postmaster --help\n/usr/local/pgsql/bin/postmaster is the PostgreSQL server.\n\nUsage:\n /usr/local/pgsql/bin/postmaster [options...]\n\nOptions:\n -B NBUFFERS number of shared buffers (default 64)\n -c NAME=VALUE set run-time parameter\n -d 1-5 debugging level\n -D DATADIR database directory\n -F turn fsync off\n -h HOSTNAME host name or IP address to listen on\n -i enable TCP/IP connections\n -k DIRECTORY Unix-domain socket location\n -N MAX-CONNECT maximum number of allowed connections (1..1024, default\n32)\n -o OPTIONS pass 'OPTIONS' to each backend server\n -p PORT port number to listen on (default 5432)\n -S silent mode (start in background without logging output)\n\nDeveloper options:\n -n do not reinitialize shared memory after abnormal exit\n -s send SIGSTOP to all backend servers if one dies\n\nI run postgres on a box with two interfaces, and I only want it to bind to a\nsingle one:\n\n# start postgres\nnohup > /dev/null su -c '/usr/local/pgsql/bin/postmaster -h 10.4.0.1 -i -D\n/usr/local/pgsql/data > /usr/local/pgsql/log/server.log 2>&1' postgres &\n\n\nCheers,\n\nMark Pritchard\n\n", "msg_date": "Thu, 4 Oct 2001 13:25:21 +1000", "msg_from": "\"Mark Pritchard\" <mark.pritchard@tangent.net.au>", "msg_from_op": false, "msg_subject": "Re: Feature suggestion: Postgresql binding to one IP?" }, { "msg_contents": "At 11:16 PM 03-10-2001 -0400, Tom Lane wrote:\n>Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n>> Is it possible for Postgresql to bind to one IP address? 
\n>\n>See 'virtual_host' GUC parameter.\n>\n>\t\t\tregards, tom lane\n\nThanks!\n\nI'm using a redhat style postgresql init and somehow postgresql seems to\nignore the postgresql.conf file. What's the postmaster.opts file for?\n\nCheerio,\nLink.\n\n", "msg_date": "Thu, 04 Oct 2001 17:47:57 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": true, "msg_subject": "Re: Feature suggestion: Postgresql binding to one" }, { "msg_contents": "Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> I'm using a redhat style postgresql init and somehow postgresql seems to\n> ignore the postgresql.conf file.\n\nThat seems unlikely, assuming you are running a PG version recent enough\nto have a postgresql.conf file (ie 7.1 or better). What exactly are you\nputting into postgresql.conf? Also, take a look at the postmaster log\nto see if it's issuing any complaints. I believe a syntax error\nanywhere in the conf file will cause the whole file to be ignored ...\n\n> What's the postmaster.opts file for?\n\nIt's to log the command-line options you gave to the postmaster.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Oct 2001 09:39:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Feature suggestion: Postgresql binding to one " } ]
[ { "msg_contents": "hi\n\nBy default the backend process for postgresql is 32. I\nincreased the N and B value and got to 140. Iam not\nable to increase it any more. The present N value is\n1024 and B value is 2048. Can anyone help me in this.\n\ncheers\n\nbalsu\n\n____________________________________________________________\nDo You Yahoo!?\nSend a newsletter, share photos & files, conduct polls, organize chat events. Visit http://in.groups.yahoo.com\n", "msg_date": "Thu, 4 Oct 2001 05:40:22 +0100 (BST)", "msg_from": "=?iso-8859-1?q?balsu=20balsu?= <bbalsu@yahoo.com>", "msg_from_op": true, "msg_subject": "how to increase the back end process" }, { "msg_contents": "On Thu, 4 Oct 2001 05:40:22 +0100 (BST), you wrote:\n>By default the backend process for postgresql is 32. I\n>increased the N and B value and got to 140. Iam not\n>able to increase it any more. \n\nWhy not?\n\nRegards,\nRen� Pijlman <rene@lab.applinet.nl>\n", "msg_date": "Thu, 04 Oct 2001 20:51:26 +0200", "msg_from": "Rene Pijlman <rene@lab.applinet.nl>", "msg_from_op": false, "msg_subject": "Re: how to increase the back end process" } ]
[ { "msg_contents": "Is this an expected behavior? I could not see why t1 and t2 are\nshowing different time resolutions...\n\ntest=# create table t3(t1 timestamp(2), t2 timestamp(2) default current_timestamp);\nCREATE\ntest=# insert into t3 values(current_timestamp);\nINSERT 16566 1\ntest=# select * from t3;\n t1 | t2 \n------------------------+---------------------------\n 2001-10-04 13:48:34+09 | 2001-10-04 13:48:34.34+09\n(1 row)\n\n--\nTatsuo Ishii\n", "msg_date": "Thu, 04 Oct 2001 13:51:11 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "timestamp resolution?" }, { "msg_contents": "> Is this an expected behavior? I could not see why t1 and t2 are\n> showing different time resolutions...\n\nEven stranger, this only happens on the first call to CURRENT_TIMESTAMP\nafter starting a backend (example below), and stays that way if I just\ndo \"select current_timestamp\". Something must not be initialized quite\nright, but I don't know what. Any guesses?\n\n - Thomas\n\n(backend already connected and have just dropped t1)\n\nthomas=# create table t1 (d1 timestamp(2), d2 timestamp(2) default\ncurrent_timestamp);\nCREATE\nthomas=# insert into t1 values (current_timestamp);\nINSERT 16572 1\nthomas=# select * from t1;\n d1 | d2 \n---------------------------+---------------------------\n 2001-10-04 05:37:12.09+00 | 2001-10-04 05:37:12.09+00\n(1 row)\n\nthomas=# \\q\nmyst$ psql\n...\nthomas=# insert into t1 values (current_timestamp);\nINSERT 16573 1\nthomas=# select * from t1;\n d1 | d2 \n---------------------------+---------------------------\n 2001-10-04 05:37:12.09+00 | 2001-10-04 05:37:12.09+00\n 2001-10-04 05:37:40+00 | 2001-10-04 05:37:39.72+00\n(2 rows)\n\nthomas=# insert into t1 values (current_timestamp);\nINSERT 16574 1\nthomas=# select * from t1;\n d1 | d2 \n---------------------------+---------------------------\n 2001-10-04 05:37:12.09+00 | 2001-10-04 05:37:12.09+00\n 2001-10-04 05:37:40+00 | 2001-10-04 
05:37:39.72+00\n 2001-10-04 05:38:08.33+00 | 2001-10-04 05:38:08.33+00\n(3 rows)\n", "msg_date": "Thu, 04 Oct 2001 05:48:14 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: timestamp resolution?" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> Even stranger, this only happens on the first call to CURRENT_TIMESTAMP\n> after starting a backend (example below), and stays that way if I just\n> do \"select current_timestamp\". Something must not be initialized quite\n> right, but I don't know what. Any guesses?\n\nAh, I've got it. Two problems: AdjustTimestampForTypmod is one brick\nshy of a load, and the hardwired calls to timestamp_in and friends\nweren't passing all the parameters they should. (Can anyone think of\na way for DirectFunctionCall to do any checking?)\n\nPatch will be committed in a moment...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Oct 2001 10:46:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: timestamp resolution? " }, { "msg_contents": "...\n> Ah, I've got it. Two problems: AdjustTimestampForTypmod is one brick\n> shy of a load, and the hardwired calls to timestamp_in and friends\n> weren't passing all the parameters they should. (Can anyone think of\n> a way for DirectFunctionCall to do any checking?)\n\nOK, I found the second item last night, but am not sure why\nAdjustTimestampForTypmod needs more fixes.\n\nI'm going through gram.y and fixing up the implementations of\nCURRENT_TIMESTAMP et al. One point folks will run into is that\nCURRENT_TIMESTAMP *should* return time to the second, not fractions\nthereof, and CURRENT_TIMESTAMP(p) should be used to get something more\nprecise. 
Another issue I just noticed is that the result of\n\ncreate table t1 (d timestamp(2) default current_timestamp);\n\ngives me two decimal points of fractional seconds (after fixups for\nTatsuo's reported troubles) but I would think that it should round to\nthe second. Looks like we are \"type folding\" past the typmod attributes.\nComments?\n\n - Thomas\n", "msg_date": "Thu, 04 Oct 2001 17:46:38 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: timestamp resolution?" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> I'm going through gram.y and fixing up the implementations of\n> CURRENT_TIMESTAMP et al. One point folks will run into is that\n> CURRENT_TIMESTAMP *should* return time to the second, not fractions\n> thereof, and CURRENT_TIMESTAMP(p) should be used to get something more\n> precise. Another issue I just noticed is that the result of\n\n> create table t1 (d timestamp(2) default current_timestamp);\n\n> gives me two decimal points of fractional seconds (after fixups for\n> Tatsuo's reported troubles) but I would think that it should round to\n> the second. Looks like we are \"type folding\" past the typmod attributes.\n\nNo, it's just that CURRENT_TIMESTAMP doesn't presently reduce its\nprecision, as you assert it should do. However, I see nothing in SQL99\n6.19 that asserts anything about the precision of CURRENT_TIMESTAMP\nwithout a precision indicator. It just says\n\n 2) If specified, <time precision> and <timestamp precision>\n respectively determine the precision of the time or timestamp\n value returned.\n\nwhich seems to leave it up to us to choose the behavior when no\nprecision is specified. 
I'd prefer to see CURRENT_TIMESTAMP return as\nmuch precision as possible (see also previous message).\n\nBTW, CURRENT_TIME and CURRENT_TIMESTAMP should return TIMETZ and\nTIMESTAMPTZ respectively, but currently do not --- are you fixing that?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Oct 2001 14:13:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: timestamp resolution? " }, { "msg_contents": "> No, it's just that CURRENT_TIMESTAMP doesn't presently reduce its\n> precision, as you assert it should do. However, I see nothing in SQL99\n> 6.19 that asserts anything about the precision of CURRENT_TIMESTAMP\n> without a precision indicator. It just says\n> 2) If specified, <time precision> and <timestamp precision>\n> respectively determine the precision of the time or timestamp\n> value returned.\n> which seems to leave it up to us to choose the behavior when no\n> precision is specified. I'd prefer to see CURRENT_TIMESTAMP return as\n> much precision as possible (see also previous message).\n\nHmm. Somewhere else it *does* specify a precision of zero for TIME and\nTIMESTAMP; wonder why that rule wouldn't apply to CURRENT_TIME etc too?\nNot that lots of precision isn't good, but I'd like to be consistent.\n\n> BTW, CURRENT_TIME and CURRENT_TIMESTAMP should return TIMETZ and\n> TIMESTAMPTZ respectively, but currently do not --- are you fixing that?\n\nYup. Though I'm not certain that it would effectively be any different.\n\n - Thomas\n", "msg_date": "Thu, 04 Oct 2001 20:17:18 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: timestamp resolution?" }, { "msg_contents": "...\n> Hmm. 
Somewhere else it *does* specify a precision of zero for TIME and\n> TIMESTAMP; wonder why that rule wouldn't apply to CURRENT_TIME etc too?\n> Not that lots of precision isn't good, but I'd like to be consistant.\n\nAh, I'd forgotten about the 6 vs 0 behaviors (but had them in the code\njust a couple of days ago ;)\n\n - Thomas\n", "msg_date": "Thu, 04 Oct 2001 20:22:48 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: timestamp resolution?" } ]
[ { "msg_contents": "\n> Thomas Lockhart <lockhart@fourpalms.org> writes:\n> >> Perhaps it'd be a better idea for the option of a freebie\nconversion\n> >> to be checked earlier, say immediately after we discover there is\nno\n> >> exact match for the function name and input type. Thomas, what do\nyou\n> >> think?\n> \n> > We *really* need that catalog lookup first. Otherwise, we will never\nbe\n> > able to override the hardcoded compatibility assumptions in that\n> > matching routine.\n> \n> Sure, I said *after* we fail to find an exact match. But the\n\"freebie\"\n> match is for a function name that matches a type name and is\n> binary-compatible with the source type. That's not a weak constraint.\n> ISTM that interpretation should take priority over interpretations\nthat\n> involve more than one level of transformation.\n\nThat sounds very reasonable to me.\n\nAndreas\n", "msg_date": "Thu, 4 Oct 2001 12:49:03 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: BUG: text(varchar) truncates at 31 bytes " } ]
[ { "msg_contents": "\n> Andreas, have you tried CVS tip lately on AIX? What's your results?\n\nAll 77 ok, no hangs, with make check on single CPU AIX 4.3.2. \nOnly problem on AIX is, that the argv[0] stuff does not work anymore\n(I think since we don't exec() anymore), which is rather annoying.\n\nAndreas\n", "msg_date": "Thu, 4 Oct 2001 13:13:21 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Problem on AIX with current " }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> Only problem on AIX is, that the argv[0] stuff does not work anymore\n> (I think since we don't exec() anymore), which is rather annoying.\n\nHmm, perhaps we are selecting the wrong PS_STRINGS method for AIX?\nPlease look at src/backend/utils/misc/ps_status.c and see if one of\nthe other methods will work on AIX.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Oct 2001 09:28:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem on AIX with current " } ]
[ { "msg_contents": "> > Only problem on AIX is, that the argv[0] stuff does not work anymore\n> > (I think since we don't exec() anymore), which is rather annoying.\n> \n> Hmm, perhaps we are selecting the wrong PS_STRINGS method for AIX?\n> Please look at src/backend/utils/misc/ps_status.c and see if one of\n> the other methods will work on AIX.\n\nYes, I see. Quite silly that I did not look earlier. \nThe compiler does not define _AIX4 or _AIX3, no idea who thought that. \nIt only defines _AIX, _AIX32, _AIX41 and _AIX43. \n\nI am quite sure that all AIX Versions accept the CLOBBER method,\nthus I ask you to apply the following patch, to make it work.\n\nAndreas", "msg_date": "Thu, 4 Oct 2001 16:41:34 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Problem on AIX with current " }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> \n> > > Only problem on AIX is, that the argv[0] stuff does not work anymore\n> > > (I think since we don't exec() anymore), which is rather annoying.\n> > \n> > Hmm, perhaps we are selecting the wrong PS_STRINGS method for AIX?\n> > Please look at src/backend/utils/misc/ps_status.c and see if one of\n> > the other methods will work on AIX.\n> \n> Yes, I see. Quite silly that I did not look earlier. \n> The compiler does not define _AIX4 or _AIX3, no idea who thought that. \n> It only defines _AIX, _AIX32, _AIX41 and _AIX43. \n> \n> I am quite sure that all AIX Versions accept the CLOBBER method,\n> thus I ask you to apply the following patch, to make it work.\n> \n> Andreas\n\nContent-Description: ps_status.patch\n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 4 Oct 2001 14:27:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problem on AIX with current" }, { "msg_contents": "> > > Only problem on AIX is, that the argv[0] stuff does not work anymore\n> > > (I think since we don't exec() anymore), which is rather annoying.\n> > \n> > Hmm, perhaps we are selecting the wrong PS_STRINGS method for AIX?\n> > Please look at src/backend/utils/misc/ps_status.c and see if one of\n> > the other methods will work on AIX.\n> \n> Yes, I see. Quite silly that I did not look earlier. \n> The compiler does not define _AIX4 or _AIX3, no idea who thought that. \n> It only defines _AIX, _AIX32, _AIX41 and _AIX43. \n> \n> I am quite sure that all AIX Versions accept the CLOBBER method,\n> thus I ask you to apply the following patch, to make it work.\n\nCLOBBER does not work with AIX5L, nor CHANGE_ARGV. (SETPROCTITLE,\nPSTAT and PS_STRINGS can not be used since AIX5L does not have\nappropriate header files).\n--\nTatsuo Ishii\n\n", "msg_date": "Fri, 05 Oct 2001 10:45:49 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problem on AIX with current " }, { "msg_contents": "\nPatch rejected, please resubmit:\n\nCLOBBER does not work with AIX5L, nor CHANGE_ARGV. 
(SETPROCTITLE,\nPSTAT and PS_STRINGS can not be used since AIX5L does not have\nappropriate header files).\n--\nTatsuo Ishii\n\n> \n> > > Only problem on AIX is, that the argv[0] stuff does not work anymore\n> > > (I think since we don't exec() anymore), which is rather annoying.\n> > \n> > Hmm, perhaps we are selecting the wrong PS_STRINGS method for AIX?\n> > Please look at src/backend/utils/misc/ps_status.c and see if one of\n> > the other methods will work on AIX.\n> \n> Yes, I see. Quite silly that I did not look earlier. \n> The compiler does not define _AIX4 or _AIX3, no idea who thought that. \n> It only defines _AIX, _AIX32, _AIX41 and _AIX43. \n> \n> I am quite sure that all AIX Versions accept the CLOBBER method,\n> thus I ask you to apply the following patch, to make it work.\n> \n> Andreas\n\nContent-Description: ps_status.patch\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 4 Oct 2001 22:38:40 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problem on AIX with current" } ]
[ { "msg_contents": "Can we set a date for beta? If we are at least a week away, we should\nsay that so people know they can keep working.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 4 Oct 2001 11:27:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Beta time" }, { "msg_contents": "On Thu, 4 Oct 2001, Bruce Momjian wrote:\n\n> Can we set a date for beta? If we are at least a week away, we should\n> say that so people know they can keep working.\n\nIf we say the 10th I won't have to change the developer's page :)\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 4 Oct 2001 11:38:31 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Beta time" }, { "msg_contents": "> On Thu, 4 Oct 2001, Bruce Momjian wrote:\n>> Can we set a date for beta? If we are at least a week away, we should\n>> say that so people know they can keep working.\n\nI do not think we should slip it yet again, and especially not tell\npeople \"hey, send in more features\", because that will lead to further\nslip.\n\nHow about this Monday (10/8)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Oct 2001 14:17:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Beta time " }, { "msg_contents": "> > On Thu, 4 Oct 2001, Bruce Momjian wrote:\n> >> Can we set a date for beta? 
If we are at least a week away, we should\n> >> say that so people know they can keep working.\n> \n> I do not think we should slip it yet again, and especially not tell\n> people \"hey, send in more features\", because that will lead to further\n> slip.\n> \n> How about this Monday (10/8)?\n\nOK, can I get another vote for that date.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 4 Oct 2001 14:19:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Beta time" }, { "msg_contents": "...\n> OK, can I get another vote for that date.\n\nWhat was wrong with the 10th? I'm going to be tied up for most of the\ntime between now and Monday.\n\n - Thomas\n", "msg_date": "Thu, 04 Oct 2001 20:19:53 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Beta time" } ]
[ { "msg_contents": "Patch fix memory leaks in src/backend/utils/fmgr/dfmgr.c .\nThis leaks is very significant with massive update/insert tables with gist \nindexes in one transaction or with following sequence of commands:\n1. COPY in table large number of row\n2. CREATE GiST index on table\n3. VACUUM ANALYZE\nOn third step postgres eats very big number of memory.\nThis patch fix it.\n\nBTW\nTom, I want to notice that initGISTstate is called for every inserting value \n(for each row). I think it's not good, because this function called 'fmgr_info' \n7 times. 'fmgr_info' call a 'load_external_function' with execution of sequence \nsearch on library name. Any suggestion?\n\n-- \nTeodor Sigaev\nteodor@stack.net", "msg_date": "Thu, 04 Oct 2001 19:59:41 +0400", "msg_from": "Teodor Sigaev <teodor@stack.net>", "msg_from_op": true, "msg_subject": "Patch for fixing a few memory leaks" }, { "msg_contents": "\nPatch applied. Thanks.\n\n> Patch fix memory leaks in src/backend/utils/fmgr/dfmgr.c .\n> This leaks is very significant with massive update/insert tables with gist \n> indexes in one transaction or with following sequence of commands:\n> 1. COPY in table large number of row\n> 2. CREATE GiST index on table\n> 3. VACUUM ANALYZE\n> On third step postgres eats very big number of memory.\n> This patch fix it.\n> \n> BTW\n> Tom, I want to notice that initGISTstate is called for every inserting value \n> (for each row). I think it's not good, because this function called 'fmgr_info' \n> 7 times. 'fmgr_info' call a 'load_external_function' with execution of sequence \n> search on library name. Any suggestion?\n> \n> -- \n> Teodor Sigaev\n> teodor@stack.net\n> \n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 4 Oct 2001 15:16:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch for fixing a few memory leaks" }, { "msg_contents": "Teodor Sigaev <teodor@stack.net> writes:\n> Patch fix memory leaks in src/backend/utils/fmgr/dfmgr.c .\n\nApplied, thanks. (Looks like the leaks were introduced fairly\nrecently by the dynamic-search-path feature.)\n\n> Tom, I want to notice that initGISTstate is called for every inserting\n> value (for each row). I think it's not good, because this function\n> called 'fmgr_info' 7 times. 'fmgr_info' call a\n> 'load_external_function' with execution of sequence search on library\n> name. Any suggestion?\n\nfmgr_info shouldn't be all that expensive; I'm not really inclined to\nworry about it. Do you have evidence to the contrary?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Oct 2001 15:16:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch for fixing a few memory leaks " }, { "msg_contents": "Tom Lane writes:\n\n> Applied, thanks. (Looks like the leaks were introduced fairly\n> recently by the dynamic-search-path feature.)\n\nIs there some sort of a system behind which places are subject to leaks\nand which places are just too lazy to call pfree()?\n\nI know that index support procedures must not leak, hmm, I guess this\nwould include the function manager...\n\n(If that was not the right explanation, stop reading here.)\n\nWhy aren't index support procedures called with an appropriate memory\ncontext set up? 
Since the functions currently do all the cleaning\nthemselves, couldn't it work like this:\n\n1. set up memory context\n2. call index procedure\n3. clean out memory context\n\n(This could even be slightly more efficient.)\n\nThen again, I'm probably oversimplifying things...\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 5 Oct 2001 00:36:26 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Patch for fixing a few memory leaks " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Is there some sort of a system behind which places are subject to leaks\n> and which places are just too lazy to call pfree()?\n\n> I know that index support procedures must not leak, hmm, I guess this\n> would include the function manager...\n\nYeah, that's basically why there's a problem here --- if this weren't\ngetting called from the index support area, I don't think the leak would\nmatter.\n\n> Why aren't index support procedures called with an appropriate memory\n> context set up?\n\nI looked at recovering space after index operations but decided it would\ntake more work than I could invest at the time. The trouble is that\nseveral of the index AMs allocate space that they expect to stick around\nacross operations, so they'd have to be fixed to use a special context\nfor such things. Eventually it'd be nice to fix it properly, ie, run\nindex support routines with CurrentMemoryContext = a short-term context,\njust as you say.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Oct 2001 18:50:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch for fixing a few memory leaks " }, { "msg_contents": ">>Tom, I want to notice that initGISTstate is called for every inserting\n>>value (for each row). I think it's not good, because this function\n>>called 'fmgr_info' 7 times. 
'fmgr_info' call a\n>>'load_external_function' with execution of sequence search on library\n>>name. Any suggestion?\n>>\n> \n> fmgr_info shouldn't be all that expensive; I'm not really inclined to\n> worry about it. Do you have evidence to the contrary?\n\n\nTom, I make some test with this ugly patch which makes structure giststate \nstatic (pls, don't commit this patch :) ).\n\nTest:\n1. install contrib/btree_gist\n2. create 'sql.cmd' file contains:\nDROP TABLE tbl;\nBEGIN TRANSACTION;\nCREATE TABLE tbl (v INT);\nCREATE INDEX tblidx ON tbl USING GIST (v);\nCOPY tbl FROM '/tmp/data';\nEND TRANSACTION;\n3. create /tmp/data with 10000 random values.\n\nResult:\n1. Original gist.c\n% time psql wow < sql.cmd\npsql wow < sql.cmd 0.00s user 0.02s system 0% cpu 7.170 total\n2. Patched gist.c\n% time psql wow < sql.cmd\npsql wow < sql.cmd 0.02s user 0.00s system 2% cpu 0.699 total\n\nWe can see that calling fmgr_info for 70000 times may be very expensive.\n\n\n-- \nTeodor Sigaev\nteodor@stack.net", "msg_date": "Fri, 05 Oct 2001 13:01:02 +0400", "msg_from": "Teodor Sigaev <teodor@stack.net>", "msg_from_op": true, "msg_subject": "Re: Patch for fixing a few memory leaks" }, { "msg_contents": "Teodor Sigaev <teodor@stack.net> writes:\n>>> Tom, I want to notice that initGISTstate is called for every inserting\n>>> value (for each row). I think it's not good, because this function\n>>> called 'fmgr_info' 7 times. 'fmgr_info' call a\n>>> 'load_external_function' with execution of sequence search on library\n>>> name. Any suggestion?\n>> \n>> fmgr_info shouldn't be all that expensive; I'm not really inclined to\n>> worry about it. Do you have evidence to the contrary?\n\n> Result:\n> 1. Original gist.c\n> % time psql wow < sql.cmd\n> psql wow < sql.cmd 0.00s user 0.02s system 0% cpu 7.170 total\n> 2. 
Patched gist.c\n> % time psql wow < sql.cmd\n> psql wow < sql.cmd 0.02s user 0.00s system 2% cpu 0.699 total\n\n> We can see that calling fmgr_info for 70000 times may be very expensive.\n\nOkay, I've done something about this: fmgr_info results for index\nsupport functions are now kept in the relcache. I now get roughly\nthe same timings for loading 10000 tuples into either a plain\nbtree index or a btree_gist index, rather than a factor-of-7 penalty\nfor btree_gist as before ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 Oct 2001 19:25:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch for fixing a few memory leaks " } ]
[ { "msg_contents": "CVSROOT:\t/cvsroot\nModule name:\tpgsql\nChanges by:\ttgl@postgresql.org\t01/10/04 13:52:24\n\nModified files:\n\tsrc/backend/parser: parse_coerce.c \n\tsrc/backend/utils/adt: format_type.c \n\nLog message:\n\tMake the world safe for atttypmod=0 ... this didn't use to mean anything,\n\tbut timestamp now wants it to mean something.\n\n", "msg_date": "Thu, 4 Oct 2001 13:52:25 -0400 (EDT)", "msg_from": "tgl@postgresql.org", "msg_from_op": true, "msg_subject": "pgsql/src/backend parser/parse_coerce.c utils/ ..." }, { "msg_contents": "> src/backend/parser: parse_coerce.c\n> src/backend/utils/adt: format_type.c\n> Log message:\n> Make the world safe for atttypmod=0 ... this didn't use to mean anything,\n> but timestamp now wants it to mean something.\n\nWhat was the effect of this?\n\n - Thomas\n", "msg_date": "Thu, 04 Oct 2001 17:57:59 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: pgsql/src/backend parser/parse_coerce.c utils/ ..." }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> Make the world safe for atttypmod=0 ... this didn't use to mean anything,\n>> but timestamp now wants it to mean something.\n\n> What was the effect of this?\n\ncoerce_type_typemod thought that typmod=0 meant it shouldn't perform any\nlength coercion. But typmod=0 is now a valid value for timestamp ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Oct 2001 14:06:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql/src/backend parser/parse_coerce.c utils/ ... " } ]
[ { "msg_contents": "It seems to me that when there is no explicit precision notation\nattached, a time/timestamp datatype should not force a precision of\nzero, but should accept whatever it's given. This is analogous to\nthe way we do char, varchar, and numeric: there's no length limit\nif you don't specify one. For example, I think this result is quite\nunintuitive:\n\nregression=# select '2001-10-04 13:52:42.845985-04'::timestamp;\n timestamptz\n------------------------\n 2001-10-04 13:52:43-04\n(1 row)\n\nThrowing away the clearly stated precision of the literal doesn't\nseem like the right behavior to me.\n\nThe code asserts that SQL99 requires the default precision to be zero,\nbut I do not agree with that reading. What I find is in 6.1:\n\n 30) If <time precision> is not specified, then 0 (zero) is implicit.\n If <timestamp precision> is not specified, then 6 is implicit.\n\nso at the very least you'd need two different settings for TIME and\nTIMESTAMP. But we don't enforce the spec's idea of default precision\nfor char, varchar, or numeric, so why start doing so with timestamp?\n\nEssentially, what I want is for gram.y to set typmod to -1 when it\ndoesn't see a \"(N)\" decoration on TIME/TIMESTAMP. I think everything\nworks correctly after that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Oct 2001 14:03:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Unhappiness with forced precision conversion for timestamp" }, { "msg_contents": "> The code asserts that SQL99 requires the default precision to be zero,\n> but I do not agree with that reading. What I find is in 6.1:\n> 30) If <time precision> is not specified, then 0 (zero) is implicit.\n> If <timestamp precision> is not specified, then 6 is implicit.\n> so at the very least you'd need two different settings for TIME and\n> TIMESTAMP. 
But we don't enforce the spec's idea of default precision\n> for char, varchar, or numeric, so why start doing so with timestamp?\n\nSure, I'd forgotten about the 6 vs 0 differences. Easy to put back in.\nOne of course might wonder why the spec *makes* them different.\n\n\"Why start doing so with timestamp?\". SQL99 compliance for one thing ;)\n\nI'm not sure I'm comfortable with the spec behavior, but without a\ndiscussion I wasn't comfortable implementing it another way.\n\n> Essentially, what I want is for gram.y to set typmod to -1 when it\n> doesn't see a \"(N)\" decoration on TIME/TIMESTAMP. I think everything\n> works correctly after that.\n\n\"... works correctly...\" == \"... works the way we'd like...\". Right?\n\nThis is the start of the discussion I suppose. And I *expected* a\ndiscussion like this, since SQL99 seems a bit ill-tempered on this\nprecision business. We shouldn't settle on a solution with just two of\nus, and I guess I'd like to hear from folks who have applications (the\nlarger the better) who would care about this. Even better if their app\nhad been running on some *other* DBMS. Anyone?\n\n - Thomas\n", "msg_date": "Thu, 04 Oct 2001 20:30:24 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Unhappiness with forced precision conversion for timestamp" }, { "msg_contents": "We use timestamps and intervals quite a bit in our applications. We\nalso use several different databases. Unfortunately, the time/date/\ninterval area is one that is not at all consistent between databases.\nIt makes life particularly difficult when trying to re-use application\ncode.\n\nSo far, as compared to many other databases, PostgreSQL, remains\npretty close to the standard (at least for our projects). 
The only\nareas that we have had issues with is the default inclusion of the\ntimezone information when retrieving the timestamp information and the\nslightly non-standard interval literal notation (i.e., including the\nyear-month or day-time interval information inside the single quotes\nwith the literal string).\n\nMy vote on all datetime questions is to stick strictly to the\nstandard.\n\nOf course sticking to the standard is not always easy as the standard\nis not always clear or even consistent. (I'm only familiar with ANSI\n92 not ANSI 99.) Time zones in particular seem to be problematic.\n\nIn this case, I believe that it would be preferable to stick with the\nTIME(0) and TIMESTAMP(6) default precision. In our applications, we\nalways specify the precision, so, the default precision is not a real\nconcern for us, however, for portability, I still suggest sticking\nwith the standard.\n\nThanks,\nF Harvell\n\n\nOn Thu, 04 Oct 2001 20:30:24 -0000, Thomas Lockhart wrote:\n> > The code asserts that SQL99 requires the default precision to be zero,\n> > but I do not agree with that reading. What I find is in 6.1:\n> > 30) If <time precision> is not specified, then 0 (zero) is implicit.\n> > If <timestamp precision> is not specified, then 6 is implicit.\n> > so at the very least you'd need two different settings for TIME and\n> > TIMESTAMP. But we don't enforce the spec's idea of default precision\n> > for char, varchar, or numeric, so why start doing so with timestamp?\n> \n> Sure, I'd forgotten about the 6 vs 0 differences. Easy to put back in.\n> One of course might wonder why the spec *makes* them different.\n> \n> \"Why start doing so with timestamp?\". SQL99 compliance for one thing ;)\n> \n> I'm not sure I'm comfortable with the spec behavior, but without a\n> discussion I wasn't comfortable implementing it another way.\n> \n> > Essentially, what I want is for gram.y to set typmod to -1 when it\n> > doesn't see a \"(N)\" decoration on TIME/TIMESTAMP. 
I think everything\n> > works correctly after that.\n> \n> \"... works correctly...\" == \"... works the way we'd like...\". Right?\n> \n> This is the start of the discussion I suppose. And I *expected* a\n> discussion like this, since SQL99 seems a bit ill-tempered on this\n> precision business. We shouldn't settle on a solution with just two of\n> us, and I guess I'd like to hear from folks who have applications (the\n> larger the better) who would care about this. Even better if their app\n> had been running on some *other* DBMS. Anyone?\n> \n> - Thomas\n\n\n", "msg_date": "Fri, 05 Oct 2001 10:29:54 -0400", "msg_from": "F Harvell <fharvell@icgate.net>", "msg_from_op": false, "msg_subject": "Re: Unhappiness with forced precision conversion " }, { "msg_contents": "> So far, as compared to many other databases, PostgreSQL, remains\n> pretty close to the standard (at least for our projects). The only\n> areas that we have had issues with is the default inclusion of the\n> timezone information when retriving the timestamp information and the\n> slightly non-standard interval literal notation (i.e., including the\n> year-month or day-time interval information inside the single quotes\n> with the literal string).\n\nYou will be able to choose \"timestamp without time zone\" in the next\nrelease.\n\n> My vote on all datetime questions is to stick strictly to the\n> standard.\n\nHmm. It isn't at all clear that the standards guys were awake or sober\nwhen working on the date/time features. 
I assume that much of the\ncruftiness in the standard is forced by influential contributors who\nhave an existing database product, but maybe there is some other\nexplanation of why good folks can get so confused.\n\notoh, I'm not sure why people nowadays would *not* use time zones in\ntheir applications, since everyone is so much more globally aware and\ndistributed than in decades past.\n\n> Of course sticking to the standard is not always easy as the standard\n> is not always clear or even consistent. (I'm only familiar with ANSI\n> 92 not ANSI 99.) Time zones in particular seem to be problematic.\n\n:-P\n\nHave you actually used ANSI SQL9x time zones? istm that \"one offset fits\nall\" is really ineffective in supporting real applications, but I'd like\nto hear about how other folks use it.\n\n> In this case, I believe that it would be preferable to stick with the\n> TIME(0) and TIMESTAMP(6) default precision. In our applications, we\n> always specify the precision, so, the default precision is not a real\n> concern for us, however, for portability, I still suggest sticking\n> with the standard.\n\nWe are likely to use the 0/6 convention for the next release (though why\nTIME should default to zero decimal places and TIMESTAMP default to\nsomething else makes no sense).\n\n - Thomas\n", "msg_date": "Fri, 05 Oct 2001 19:35:48 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Unhappiness with forced precision conversion for timestamp" }, { "msg_contents": "Tom Lane writes:\n\n> regression=# select '2001-10-04 13:52:42.845985-04'::timestamp;\n> timestamptz\n> ------------------------\n> 2001-10-04 13:52:43-04\n> (1 row)\n>\n> Throwing away the clearly stated precision of the literal doesn't\n> seem like the right behavior to me.\n\nThat depends on the exact interpretation of '::'.\n\nRecall that the SQL syntax for a timestamp literal is actually\n\n TIMESTAMP 'YYYY-MM-DD HH:MM:SS.XXX....'\n\nwith the 
\"TIMESTAMP\" required. The rules concerning this are...\n\n 18) The declared type of a <time literal> that does not specify\n <time zone interval> is TIME(P) WITHOUT TIME ZONE, where P is\n the number of digits in <seconds fraction>, if specified, and\n 0 (zero) otherwise. The declared type of a <time literal> that\n specifies <time zone interval> is TIME(P) WITH TIME ZONE, where\n P is the number of digits in <seconds fraction>, if specified,\n and 0 (zero) otherwise.\n\nwhich is what you indicated you would expect.\n\nHowever, if you interpret X::Y as CAST(X AS Y) then the truncation is\nentirely correct.\n\nYou might expect all of\n\n'2001-10-05 22:41:00'\nTIMESTAMP '2001-10-05 22:41:00'\n'2001-10-05 22:41:00'::TIMESTAMP\nCAST('2001-10-05 22:41:00' AS TIMESTAMP)\n\nto evaluate the same (in an appropriate context), but SQL really defines\nall of these to be slightly different (or nothing at all). This\ndifference is already reflected in the parser: The first two are\n\"constants\", the latter two are \"type casts\".\n\nI think in a consistent extension of the standard, the first two should\ntake the precision as given, whereas the last two should truncate.\n\nTo make the TIMESTAMP in #2 be just a data type vs. meaning TIMESTAMP(0)\nin #3 and #4, the grammar rules would have to be beaten around a little,\nbut it seems doable.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 5 Oct 2001 23:13:49 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Unhappiness with forced precision conversion for" }, { "msg_contents": "On Fri, 05 Oct 2001 19:35:48 -0000, Thomas Lockhart wrote:\n> ...\n> \n> Have you actually used ANSI SQL9x time zones? 
istm that \"one offset fits\n> all\" is really ineffective in supporting real applications, but I'd like\n> to hear about how other folks use it.\n\n Fortunately, most of our date/time information is self-referential.\nI.e., we are usually looking at intervals between an initial date/\ntimestamp and the current date/timestamp. This has effectively\neliminated the need to deal with time zones.\n\n> > In this case, I believe that it would be preferable to stick with the\n> > TIME(0) and TIMESTAMP(6) default precision. In our applications, we\n> > always specify the precision, so, the default precision is not a real\n> > concern for us, however, for portability, I still suggest sticking\n> > with the standard.\n> \n> We are likely to use the 0/6 convention for the next release (though why\n> TIME should default to zero decimal places and TIMESTAMP default to\n> something else makes no sense).\n\n The only thing that I can think of is that originally, the DATE and\nTIME types were integer values and that when the \"new\" TIMESTAMP data\ntype was \"created\" the interest was to increase the precision. I\nwould guess, as you have also suggested, that the standards were based\nupon existing implementations (along with an interest in backwards\ncompatibility).\n\nThanks,\nF Harvell\n\n\n", "msg_date": "Fri, 05 Oct 2001 17:45:01 -0400", "msg_from": "F Harvell <fharvell@icgate.net>", "msg_from_op": false, "msg_subject": "Re: Unhappiness with forced precision conversion " } ]
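The two SQL99 clauses quoted in this thread can be made concrete. The sketch below is illustrative Python only (it is not the PostgreSQL parser): it encodes rule 6.1(30), under which an unadorned TIME defaults to precision 0 and TIMESTAMP to precision 6, and rule 6.1(18), under which a literal's declared precision is the number of digits in its seconds fraction, or 0 when there is none. The zone-offset handling is a deliberately naive assumption for the example.

```python
# Illustrative sketch of the SQL99 precision rules quoted above.
DEFAULT_PRECISION = {"TIME": 0, "TIMESTAMP": 6}

def declared_precision(type_name, literal=None):
    """Precision SQL99 assigns when none is written as TYPE(p)."""
    if literal is None:
        return DEFAULT_PRECISION[type_name]   # 6.1 rule 30: bare type name
    if "." not in literal:
        return 0                              # 6.1 rule 18: no seconds fraction
    fraction = literal.rsplit(".", 1)[1]
    digits = 0
    for ch in fraction:                       # naive: stop at a zone offset like "-04"
        if not ch.isdigit():
            break
        digits += 1
    return digits

print(declared_precision("TIME"))        # 0
print(declared_precision("TIMESTAMP"))   # 6
print(declared_precision("TIMESTAMP", "2001-10-04 13:52:42.845985-04"))  # 6
print(declared_precision("TIMESTAMP", "2001-10-05 22:41:00"))            # 0
```

Under these rules the example literal '2001-10-04 13:52:42.845985-04' carries declared precision 6, which is why silently rounding it to whole seconds looks wrong.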
[ { "msg_contents": "Hi All,\n\nI am having troubles compiling Postgresql 7.1.3 on OSX 10.1\n\nI have the following error:\n\n---- cut ----\ncc -no-cpp-precomp -g -O2 -Wall -Wmissing-prototypes\n-Wmissing-declarations -bundle -undefined suppress -bundle -undefined\nsuppress fe-auth.o fe-connect.o fe-exec.o fe-misc.o fe-print.o fe-lobj.o\npqexpbuffer.o dllist.o pqsignal.o -o libpq.so.2.1\n/usr/bin/ld: -undefined error must be used when -twolevel_namespace is\nin effect\nmake[3]: *** [libpq.so.2.1] Error 1\nmake[2]: *** [all] Error 2\nmake[1]: *** [all] Error 2\nmake: *** [all] Error 2\n\n---- cut ----\nThanks for any info,\n\nSerge\n\n\n\n", "msg_date": "Thu, 4 Oct 2001 22:36:56 +0100", "msg_from": "\"Serge Sozonoff\" <serge@globalbeach.com>", "msg_from_op": true, "msg_subject": "OSX 10.1" } ]
[ { "msg_contents": "Hi,\n\nIn trying to solve a bug in 'ALTER TABLE tbl RENAME col1 TO col2',I \nnoticed (what must be) a typo in src/interfaces/ecpg/preproc/preproc.y\npatch attached, tho it might be easier if you just look for this\nline in the file:\n\nopt_column: COLUMN { $$ = make_str(\"colmunn\"); }\n\n\nBack to that original bug... \n\n'ALTER TABLE tbl RENAME col1 TO col2' does not update any indices that\nreference the old column name. Any suggestions to get this worked out\nwould be appreciated :-) I'll have some time this weekend to dig into\nthis, and would /really/ to tackle this myself.\n\n\ncheers.\n Brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Thu, 4 Oct 2001 23:38:24 -0400", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "typo in src/interfaces/ecpg/preproc/preproc.y" }, { "msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> 'ALTER TABLE tbl RENAME col1 TO col2' does not update any indices that\n> reference the old column name.\n\nIt doesn't need to; the indexes link to column numbers, not column\nnames.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Oct 2001 09:46:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ALTER RENAME and indexes" }, { "msg_contents": "On 05 Oct 2001 at 09:46 (-0400), Tom Lane wrote:\n| Brent Verner <brent@rcfile.org> writes:\n| > 'ALTER TABLE tbl RENAME col1 TO col2' does not update any indices that\n| > reference the old column name.\n| \n| It doesn't need to; the indexes link to column numbers, not column\n| names.\n\nForgive my incorrect description of the problem... 
By example...\n\n\nbrent=# select version();\n version \n---------------------------------------------------------------\n PostgreSQL 7.1.3 on i686-pc-linux-gnu, compiled by GCC 2.95.4\n(1 row)\n\nbrent=# create table test ( id serial, col1 varchar(64) NOT NULL);\nNOTICE: CREATE TABLE will create implicit sequence 'test_id_seq' for SERIAL column 'test.id'\nNOTICE: CREATE TABLE/UNIQUE will create implicit index 'test_id_key' for table 'test'\nCREATE\nbrent=# create index idx_test_col1 on test(col1);\nCREATE\nbrent=# \\d idx_test_col1\n Index \"idx_test_col1\"\n Attribute | Type \n-----------+-----------------------\n col1 | character varying(64)\nbtree\n\nbrent=# alter table test rename col1 to col2;\nALTER\nbrent=# \\d idx_test_col1\n Index \"idx_test_col1\"\n Attribute | Type \n-----------+-----------------------\n col1 | character varying(64)\nbtree\n\nbrent=# \\d test\n Table \"test\"\n Attribute | Type | Modifier \n-----------+-----------------------+-------------------------------------------------\n id | integer | not null default nextval('\"test_id_seq\"'::text)\n col2 | character varying(64) | not null\nIndices: idx_test_col1,\n test_id_key\n\n\n\n I hit this problem using the jdbc driver, and originally thought \nit was the jdbc code, but as the above shows, the problem seems to\nbe a failure to update one (or more) of the system catalogs.\n\n Again, any pointers beyond look in src/backend/commands/rename.c\nwould be appreciated. My big question is how is the content of\nthe system tables used/affected from within PG -- I originally \nthought it would be simple enough to issue some SQL to properly\nupdate the system tables, but apparently that idea was /very/ naive.\nIs there any way to list all $things that reference the altered\nentity? 
find_all_inheritors() does not /appear/ to be getting \neverything that needs to be updated.\n\nAlso, a lot of terminology within the code is making my head spin (not\na difficult task ;-)), but then I've only spent about two hours \ndigging around this code. Is there a 'understanding internal PostgreSQL\nterminology' document that I've missed?\n\nthanks.\n Brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Fri, 5 Oct 2001 10:18:17 -0400", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "Re: ALTER RENAME and indexes" }, { "msg_contents": "On 05 Oct 2001 at 10:18 (-0400), Brent Verner wrote:\n| On 05 Oct 2001 at 09:46 (-0400), Tom Lane wrote:\n| | Brent Verner <brent@rcfile.org> writes:\n| | > 'ALTER TABLE tbl RENAME col1 TO col2' does not update any indices that\n| | > reference the old column name.\n| | \n| | It doesn't need to; the indexes link to column numbers, not column\n| | names.\n\nah, I think I see the problem... The pg_attribute.attname just needs\nupdating, right? I suspect this after noticing that the \npg_get_indexdef(Oid) function produced the correct(expected) results,\nwhile those using pg_attribute were wrong.\n\nIf this is the _wrong_ answer for this, stop me before I make a \nbig mess :-)\n\nworking...\n b\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. 
To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Sat, 6 Oct 2001 19:49:32 -0400", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "Re: ALTER RENAME and indexes" }, { "msg_contents": "On 06 Oct 2001 at 20:13 (-0400), Rod Taylor wrote:\n| Of course, in 7.1 foreign key constraints become rather confused when\n| you rename columns on them.\n| \n| create table parent (id serial);\n| create table child (id int4 references parent(id) on update cascade);\n| alter table parent rename column id to anotherid;\n| alter table child rename column id to junk;\n| insert into child values (1);\n| \n| -> ERROR: constraint <unnamed>: table child does now have an\n| attribute id\n\nok, I see where this breaks. The args to the RI_ConstraintTrigger_%d\nare written into the pg_trigger tuple like so..\n\n '<unnamed>\\000child\\000parent\\000UNSPECIFIED\\000id\\000id\\000'\n\nThere are really two approaches, AFAICS.\n\n1) modify this tgargs value to reflect the modified column name(s).\n2) modify <whatever uses these args> to use the oid instead of\n the column names, and modify CreateTrigger to reflect this change..\n\n#2 seems to be the most bulletproof approach, so I'm looking\ninto hacking this up right now. Any comments would be much \nappreciated about any (better) ways to fix this problem.\n\ncheers.\n Brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. 
To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Sat, 6 Oct 2001 22:56:14 -0400", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "Re: ALTER RENAME and indexes" }, { "msg_contents": "\nOn Sat, 6 Oct 2001, Brent Verner wrote:\n\n> On 06 Oct 2001 at 20:13 (-0400), Rod Taylor wrote:\n> | Of course, in 7.1 foreign key constraints become rather confused when\n> | you rename columns on them.\n> | \n> | create table parent (id serial);\n> | create table child (id int4 references parent(id) on update cascade);\n> | alter table parent rename column id to anotherid;\n> | alter table child rename column id to junk;\n> | insert into child values (1);\n> | \n> | -> ERROR: constraint <unnamed>: table child does now have an\n> | attribute id\n> \n> ok, I see where this breaks. The args to the RI_ConstraintTrigger_%d\n> are written into the pg_trigger tuple like so..\n> \n> '<unnamed>\\000child\\000parent\\000UNSPECIFIED\\000id\\000id\\000'\n> \n> There are really two approaches, AFAICS.\n> \n> 1) modify this tgargs value to reflect the modified column name(s).\n> 2) modify <whatever uses these args> to use the oid instead of\n> the column names, and modify CreateTrigger to reflect this change..\n> \n> #2 seems to be the most bulletproof approach, so I'm looking\n> into hacking this up right now. 
Any comments would be much \n> appreciated about any (better) ways to fix this problem.\n\n#2 also requires changes to dump/restore stuff, since AFAIK\nit currently dumps create constraint trigger statements and the\noids won't be known for the restore, but this is probably a good\nidea in general.\n\n\n", "msg_date": "Sun, 7 Oct 2001 04:03:40 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: ALTER RENAME and indexes" }, { "msg_contents": "On 07 Oct 2001 at 04:03 (-0700), Stephan Szabo wrote:\n| \n| On Sat, 6 Oct 2001, Brent Verner wrote:\n| \n| > On 06 Oct 2001 at 20:13 (-0400), Rod Taylor wrote:\n| > | Of course, in 7.1 foreign key constraints become rather confused when\n| > | you rename columns on them.\n \n| > 1) modify this tgargs value to reflect the modified column name(s).\n| > 2) modify <whatever uses these args> to use the oid instead of\n| > the column names, and modify CreateTrigger to reflect this change..\n| > \n| > #2 seems to be the most bulletproof approach, so I'm looking\n| > into hacking this up right now. Any comments would be much \n| > appreciated about any (better) ways to fix this problem.\n| \n| #2 also requires changes to dump/restore stuff, since AFAIK\n| it currently dumps create constraint trigger statements and the\n| oids won't be known for the restore, but this is probably a good\n| idea in general.\n\nAfter looking this over for a couple of hours and seeing how many \nplaces would have to be touched, combined with the pg_dump breakage\nmakes #2 an unreasonable task for me to complete before real life\nstops my party. Plus, being this close to beta, this fix/hack\nmight actually get into 7.2, since it will be a really minor \naddition to rename.c.\n\nThanks for alerting me to pg_dump's dependency on this stuff :-).\n\ncheers.\n Brent\n\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. 
To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Sun, 7 Oct 2001 08:49:10 -0400", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "Re: ALTER RENAME and indexes" } ]
[ { "msg_contents": "\n> > I am quite sure that all AIX Versions accept the CLOBBER method,\n> > thus I ask you to apply the following patch, to make it work.\n> \n> CLOBBER does not work with AIX5L, nor CHANGE_ARGV. (SETPROCTITLE,\n> PSTAT and PS_STRINGS can not be used since AIX5L does not have\n> appropreate header files).\n\nHave you actually tried my patch, and what was the effect ? \nThe previous code was wrong, since it did not do any PS magic,\nit defaulted to PS_USE_NONE.\n\nElse can you please tell me a predefine for AIX5, thanks. \n\nAndreas\n", "msg_date": "Fri, 5 Oct 2001 09:52:14 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Problem on AIX with current " }, { "msg_contents": "> > > I am quite sure that all AIX Versions accept the CLOBBER method,\n> > > thus I ask you to apply the following patch, to make it work.\n> > \n> > CLOBBER does not work with AIX5L, nor CHANGE_ARGV. (SETPROCTITLE,\n> > PSTAT and PS_STRINGS can not be used since AIX5L does not have\n> > appropreate header files).\n> \n> Have you actually tried my patch, and what was the effect ? \n> The previous code was wrong, since it did not do any PS magic,\n> it defaulted to PS_USE_NONE.\n\nTo make sure I did everything correctly, I cvsed fresh sources and\napplied your patches again. The result: It works fine! I don't know\nwhy, but I must have done something wrong.:-< Sorry for the wrong\ninfo. Bruce, please apply the patches.\n\nBTW, still I'm getting the stucking backends. 
New info: a snapshot\ndated on 10/3 works fine.\n--\nTatsuo Ishii\n", "msg_date": "Fri, 05 Oct 2001 17:17:50 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Problem on AIX with current " }, { "msg_contents": "> \n> > > I am quite sure that all AIX Versions accept the CLOBBER method,\n> > > thus I ask you to apply the following patch, to make it work.\n> > \n> > CLOBBER does not work with AIX5L, nor CHANGE_ARGV. (SETPROCTITLE,\n> > PSTAT and PS_STRINGS can not be used since AIX5L does not have\n> > appropreate header files).\n> \n> Have you actually tried my patch, and what was the effect ? \n> The previous code was wrong, since it did not do any PS magic,\n> it defaulted to PS_USE_NONE.\n> \n> Else can you please tell me a predefine for AIX5, thanks. \n\nPatch applied. Thanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 5 Oct 2001 11:48:16 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem on AIX with current" } ]
[ { "msg_contents": "\n> > ... then trailing zeros are hacked out,\n> > two digits at a time.\n> \n> I was wondering why it seemed to always want to produce an even number\n> of fractional digits. Why are you doing it 2 at a time and not 1?\n> I should think timestamp(1) would produce 1 fractional digit, not\n> two digits of which the second is always 0 ...\n\nYup, same here. I'd also prefer 1 at a time.\nIf you want compatibility, I would do it only for the first 2 digits.\n\nAndreas\n", "msg_date": "Fri, 5 Oct 2001 09:59:43 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Timestamp, fractional seconds problem " } ]
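To see why two-at-a-time trimming always yields an even number of fractional digits, here is a toy model in Python (illustrative only, not the backend's C code). A timestamp(1) value of .1 seconds, rendered with six fractional digits, survives pairwise trimming as "10", so the display shows two digits of which the second is always zero, exactly as described above; one-at-a-time trimming leaves the single digit.

```python
def trim_two_at_a_time(frac):
    """Drop trailing zeros from a fractional-seconds string in pairs."""
    while len(frac) >= 2 and frac.endswith("00"):
        frac = frac[:-2]
    return frac

def trim_one_at_a_time(frac):
    """Drop trailing zeros one digit at a time."""
    return frac.rstrip("0")

# .1 second rendered with six fractional digits:
print(trim_two_at_a_time("100000"))  # 10      -> displays as .10
print(trim_one_at_a_time("100000"))  # 1       -> displays as .1
print(trim_two_at_a_time("845985"))  # 845985  (nothing to trim)
```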
[ { "msg_contents": "\n> BTW, still I'm getting the stucking backends. New info: a snapshot\n> dated on 10/3 works fine.\n\nI allways have trouble with those different date formats. Do you\nmean, that the problem is fixed as of 3. October, or that an old\nsnapshot from 10. March still worked ?\n\nSnapshot of 1. Oct 2001 does not hang in \"make check\" on AIX 4.3.2\n4 CPU machine.\nSo it seems to be a problem on AIX5L only :-( Maybe a semaphore bug ? \n\nAndreas\n", "msg_date": "Fri, 5 Oct 2001 12:23:46 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Problem on AIX with current " }, { "msg_contents": "> > BTW, still I'm getting the stucking backends. New info: a snapshot\n> > dated on 10/3 works fine.\n> \n> I allways have trouble with those different date formats. Do you\n> mean, that the problem is fixed as of 3. October, or that an old\n> snapshot from 10. March still worked ?\n\nOf course the working source is 3rd October.\n\n> Snapshot of 1. Oct 2001 does not hang in \"make check\" on AIX 4.3.2\n> 4 CPU machine.\n\nOh, you have 4 way machine too?\n\n> So it seems to be a problem on AIX5L only :-( Maybe a semaphore bug ? \n\nMaybe. BTW, what is your compiler? I'm using xlc.\n--\nTatsuo Ishii\n", "msg_date": "Fri, 05 Oct 2001 23:09:29 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Problem on AIX with current " } ]
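Incidentally, the date-format confusion in this exchange ("a snapshot dated on 10/3": 3 October or 10 March?) is the usual month-first versus day-first ambiguity. A small Python illustration:

```python
from datetime import datetime

stamp = "10/3"
# The same string names two different days depending on convention;
# an unambiguous form like 2001-10-03 sidesteps the question.
month_first = datetime.strptime(stamp, "%m/%d").replace(year=2001)
day_first = datetime.strptime(stamp, "%d/%m").replace(year=2001)

print(month_first.strftime("%Y-%m-%d"))  # 2001-10-03
print(day_first.strftime("%Y-%m-%d"))    # 2001-03-10
```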
[ { "msg_contents": "I just finished up some work bringing a Linux/FreeBSD server project to\nWindows. Enough to test at least. (I have to find a version of getopt() for\nWindows) What I found was surprising.\n\nMy server project is like a database cache for Web servers. It is used to keep\nsession information across page views without getting database hits for each\npage. One of the reasons why I wrote it was PostgreSQL's behavior with updates.\n\nAt first I tried to use Cygwin, and it was really easy, but it was very slow.\nThen I decided that I had to write the actual Windows bits myself. (I\nprogrammed Windows and NT for over a decade, so I remember how)\n\nThe difference was astounding!!! On the machine running Windows, sitting on a\nswitch, The cygwin version could get roughly 50 session dialogs a second, the\nnative Windows version was able to get roughly 250. I am sure that this is\nmostly due to the implementation of pthread_mutex and sockets, because I make\nvery few other system calls.\n\nThe question which pops in to mind, has anyone run the PostgreSQL that comes\nwith cygwin and compared it on the same machine running Linux or FreeBSD? Is\nthe performance comparable?\n\nIf not, does anyone know if that is due to the inefficiencies between *NIX and\nWindows? Or is Cygwin generally slow all around?\n\nDoes anyone care enough about Windows to attempt a \"native Windows\" port? I\nwould be able to contribute some time. (couple hours a day, a few days a week)\n", "msg_date": "Fri, 05 Oct 2001 08:33:13 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Windows native version of PostgreSQL." } ]
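The post blames most of the roughly five-fold gap (about 250 versus 50 session dialogs a second) on the cost of pthread_mutex and socket calls under Cygwin's emulation layer. A minimal microbenchmark of the kind that isolates the mutex half of that claim might look like the following; this is an illustrative Python sketch, not the original C server, so only the shape of the measurement carries over.

```python
import threading
import time

def lock_handoffs_per_second(iterations=20000, workers=2):
    """Time how fast threads can take turns acquiring one mutex."""
    lock = threading.Lock()
    counter = [0]

    def worker():
        for _ in range(iterations):
            with lock:               # one acquire/release per iteration
                counter[0] += 1

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    assert counter[0] == iterations * workers
    return counter[0] / elapsed

print(f"{lock_handoffs_per_second():.0f} lock/unlock cycles per second")
```

Running the same loop against two threading implementations (say, a native pthread build versus an emulated one) gives a like-for-like number of the sort quoted above.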
[ { "msg_contents": "My friends,\nI'm brazilian and I am developing a project that need to develop a function that returns me the amount of users connected in the bank, and the amount of lock's in a table, and the space in disk placed for one determined database! Can you help me with the solution ? How can I get this informations? I thank you very much for that help.\n\nAtenciosamente,\n Fábio Santana\nAnalista de Sistemas\n(fabio.santana@aplicad.com.br)\n(fabio3c@terra.com.br)\nICQ 107308715\n", "msg_date": "Fri, 5 Oct 2001 10:08:32 -0300", "msg_from": "=?iso-8859-1?Q?F=E1bio_Santana?= <fabio3c@terra.com.br>", "msg_from_op": true, "msg_subject": "I NEED HELP" }, { "msg_contents": "Hi, \nI've seen no answer to your question within four days. \nHave you solved it on your own? \nI'm interested in the stuff you asked for, too. \nPlease let me know about the status of the function. \nIf you haven't got any answer, maybe we should try to get \none using a clearer mail subject. \nRegards, Christoph \n\n> My friends,\n> I'm brazilian and I am developing a project that need to develop a function\n> that returns me the amount of users connected in the bank, and the amount\n> of lock's in a table, and the space in disk placed for one determined\n> database! Can you help me with the solution ? How can I get this informations? 
I\n> thank you very much for that help.\n> \n> Atenciosamente,\n> Fábio Santana\n> Analista de Sistemas\n> (fabio.santana@aplicad.com.br)\n> (fabio3c@terra.com.br)\n> ICQ 107308715\n> \n\n", "msg_date": "Tue, 09 Oct 2001 11:01:26 METDST", "msg_from": "Haller Christoph <ch@rodos.fzk.de>", "msg_from_op": false, "msg_subject": "Re: System usage and statistics " } ]
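For the third question, the space on disk placed for one database: PostgreSQL keeps each database's relation files in a per-database subdirectory of the data directory (named after the database's OID), so summing file sizes under that subdirectory gives the allocated space. The sketch below is illustrative Python; the commented path is an assumption, adjust it for your installation. The connection-count and per-table-lock questions are what the statistics and lock views in newer releases are meant to answer.

```python
import os

def database_bytes(db_dir):
    """Sum the sizes of every file under one database's directory."""
    total = 0
    for root, _dirs, files in os.walk(db_dir):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

# Hypothetical path: $PGDATA/base/<database oid>
# print(database_bytes("/usr/local/pgsql/data/base/18719"))
```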
[ { "msg_contents": "I had a catastrophic crash on one of our webservers here and took the\nopportunity to upgrade it. Unfortunately, after the upgrade, I am unable to\ncompile either 7.1.3 or the current snapshot of postgres.\n\nThe error I get is rather opaque:\n\n/usr/bin/ld: -undefined error must be used when -twolevel_namespace is in\neffect\n\nThis is in the final stretch of compiling, on libpq. I checked google for\nit, and came up with three header files in an MIT Apple sourcetree. So this\nstrikes me as a particularly Darwinish failing. I also ran a recursive grep\non the postgres source tree and wasn't able to find -twolevel_namespace in\nany of the Makefiles. This makes me think it is something external.\n\nAnyone have an idea as to what is causing this? 
This box is down until\n>postgres comes back up. :-7\n>\n>alex\n>\n\nAlex,\n\nYou might check the Fink mailing list, i believe this has been talked \nabout but I cant remember which mail list. But I believe Fink \ninstalls postgresql just fine on 10.1\n\nhttp://sourceforge.net/projects/fink\n\nI might have also see the discussion on the Darwin mail list.\n\nhttp://lists.apple.com/archives/darwin-development/2001/Sep/19.html\n\nhas something like:\nsimply add -flat_namespace to the LDFLAGS.\n\n-- \nNeil\nneilt@gnue.org\nGNU Enterprise\nhttp://www.gnuenterprise.org/\nhttp://www.gnuenterprise.org/~neilt/sc.html\n", "msg_date": "Fri, 5 Oct 2001 09:06:53 -0500", "msg_from": "Neil Tiffin <neilt@gnue.org>", "msg_from_op": false, "msg_subject": "Re: Darwin 1.4 (OS X 10.1) Broken Compile, Snapshot and" }, { "msg_contents": "Alex Avriette <a_avriette@acs.org> writes:\n> /usr/bin/ld: -undefined error must be used when -twolevel_namespace is in\n> effect\n\nAll we know at this point is that Apple changed something between 10.0.4\nand 10.1. If you want to look into it and figure out how to fix it,\nthat'd be great.\n\nAttached is some possibly-relevant info that someone sent in recently.\n\n\t\t\tregards, tom lane\n\n\n\n> I recently asked (in pg-sql, because I did not know where to ask) for \n> help with a problem 'making' pg on mac osx. I have finally tracked the \n> problem down and have the appropriate information. It appears to be \n> important to running on Mac OSX.\n> \n> Who can (should) I send it to? I am still trying to get pg to run under \n> OSX 10.1.\n> \n> Ted\n> \n> -------------------------\n> \n> Mac OS X Developer Release Notes:\n> Two-Level Namespace Executables\n> \n> ? \tThe Problem (and the Solution)\n> ? \tUsing Two-Level Namespace Executables On Mac OS X 10.0\n> ? \tRunning Your Application as a Flat Namespace Executable\n> ? \tTroubleshooting Two-Level Namespace Builds\n> ? \tBuilding Your Application With PlugIns and Bundles\n> ? 
\tNew Dynamic Linking Functions for Use With Two-Level Namespace \n> Executables\n> ? \tNSAddImage\n> ? \tNSLookupSymbolInImage\n> ? \tNSIsSymbolNameDefinedInImage\n> \n> The Problem (and the Solution)\n> \n> The shared library implementation shipped with Mac OS X 10.0 is a \n> robust, featureful implementation, with one small problem.\n> \n> When an executable file is loaded into a program, the dynamic linker \n> (the part of the system responsible for loading code on demand) \n> attempts to find all of the unresolved symbols (functions in other \n> executable files). In order to do this, it needs to know which libraries \n> contain those symbols. So when the program is built, you must \n> specify the libraries that contain those functions to the static linker \n> (the part of the build system responsible for linking object files \n> together). The static linker places the names of those libraries into \n> your program's executable file.\n> \n> The problem is that the static linker in Mac OS X 10.0 records only \n> the names of the libraries, but not which functions are to be found in \n> each of libraries. Therefore, it's not possible to load two libraries \n> containing a definition of the same symbol name, because it's not \n> possible to determine which library you really wanted the symbol \n> from.\n> \n> For example, two libraries might both implement a \"log\" function. \n> One of the libraries might be a math library, and the other might be a \n> library for logging messages to the system console. If your program \n> calls the math library \"log,\" there is no way to guarantee that you are \n> not calling the system console \"log\".\n> \n> This isn't usually a problem on Mac OS X 10.0, because the build \n> system will give you a multiple-defined-symbols error message \n> when you attempt to build your application using libraries that \n> contain a symbol with the same name. But consider the following \n> situations.\n> \n> ? 
\tFuture versions of the system may export symbols which conflict \n> with those implemented in your application. This will prevent users \n> from being able to use your application.\n> ? \tIf your application supports third-party plugins or bundles, libraries \n> used by your third-party developers may conflict with each other.\n> \n> To solve this problem, the Mac OS X 10.1 runtime environment \n> supports a new feature that records the names of symbols expected \n> to be found in a library along with the library name. An executable file \n> built with this feature enabled is called a two-level namespace \n> executable. An executable file without this feature is called a flat \n> namespace executable.\n> \n> When your program links to a symbol defined in a subframework of \n> an umbrella framework, the linker records the name of the umbrella \n> framework as the parent library for the symbol. At runtime, the linker \n> searches the umbrella framework, including all of the \n> subframeworks it contains, to find the symbol.\n> \n> For example, if your program references the function MoveWindow, \n> which is implemented in the HIToolbox subframework of the \n> \"Carbon\" umbrella framework, \"Carbon\" will be recorded as the \n> library name. Consequentially, if a later release of the Carbon \n> umbrella framework moves the MoveWindow function to a new or \n> different subframework (for example, a WindowManager framework), \n> your program will continue to function.\n> \n> Using Two-Level Namespace Executables On Mac OS X 10.0\n> \n> The static linker enables the two-level namepace option \n> (-twolevel_namespace) by default in Mac OS X 10.1. 
Two-level \n> namespace executables are compatible with Mac OS X 10.0, where \n> the linker will treat them as flat namespace executables (although \n> your program may unexpectedly quit with a \"multiple defined \n> symbols\" error if the resulting flat namespace contains multiple \n> symbols with the same name).\n> \n> However, applications that you build as two-level namespace \n> executables will cause versions of the update_prebinding tool up to \n> and including the version installed with Mac OS X 10.0.4 to crash. \n> update_prebinding is used by the Mac OS X installer to redo the \n> prebinding optimization (see the Prebinding release note for more \n> information). If the prebinding tool crashes, the installer will \n> complete the installation, but the resulting system won't be fully \n> prebound. To work around this problem, you have two options:\n> \n> ? \tYou can use the -flat_namespace linker option to force the linker to \n> build flat namespace executables. In Project Builder, add \n> -flat_namespace to the OTHER_LDFLAGS Build Setting.\n> ? \tOnly system updates from Apple should cause update_prebinding \n> to run. You can require Mac OS X 10.0.4, on the assumption that all \n> prebinding has been completed with the final Mac OS X 10.0 system \n> update. Note, however, that some third-party developers ship tools \n> that do cause update_prebinding to run. 
Also, if Apple releases \n> another 10.0 system update, which is unlikely but possible, this \n> strategy may not work.\n> \n> Running Your Application as a Flat Namespace Executable\n> \n> If you want to test your two-level namespace as a flat namespace \n> image on Mac OS X 10.1, set the \n> DYLD_FORCE_FLAT_NAMESPACE environment variable to launch \n> your program as a flat namespace executable.\n> \n> Note that when you do this, your program may terminate with a \n> \"multiple defined symbols\" error if the resulting flat namespace \n> contains multiple symbols with the same name.\n> \n> Troubleshooting Two-Level Namespace Builds\n> \n> Some existing projects will not link without modification, for the \n> following reasons:\n> \n> ? \tThere can be no unresolved, undefined references in two-level \n> namespace executables. Using the -undefined suppress option will \n> cause this error:\n> \n> /usr/bin/ld: -undefined error must be used when \n> -twolevel_namespace is in effect\n> \n> If you recieve this error message, you need to remove the -undefined \n> suppress option and make sure you specify the path to the program \n> you are linking against with the option -bundle_loader \n> pathnameToYourProgram. If you are linking against a framework, \n> this path is already specified using the -framework option, so you \n> just need to remove the -undefined suppress option. (To be really \n> paranoid, specify the -undefined error option). See the next section \n> for more information.\n> \n> ? \tWhen building a two-level namespace executable, you must link \n> against all shared libraries containing the symbols you reference. If \n> your program is currently a flat namespace executable, this may \n> cause problems. 
When you build a flat namespace executable, the \n> dynamic linker will search all libraries for the program's undefined \n> symbols, even libraries that your code did not explicitly link against.\n> \n> For example, your application might link against \n> ApplicationServices.framework, but not CoreFoundation.framework. \n> Because ApplicationServices.framework links to \n> CoreFoundation.framework, you can, as a flat namespace \n> executable, use routines exported by CoreFoundation.framework. If \n> you build the program as a two-level namespace executable, using \n> routines exported by CoreFoundation.framework will result in \n> undefined-symbol errors, so you must explicitly add \n> CoreFoundation.framework to the list of libraries you link.\n> \n> If you use a symbol from a library that you are not explicitly linking \n> against, you will get a single error message for each such library of \n> the form:\n> \n> ld: object_file illegal reference to symbol: symbol\n> defined in indirectly referenced dynamic library: library\n> \n> If you see this error message, you must add the library library to your \n> link command.\n> \n> If library is a sub-framework of an umbrella (for example, \n> HIToolbox.framework is a subframework of Carbon.framework), you \n> will need to add the umbrella framework to your link objects (in \n> Project Builder, just drag the framework into your project). Once you \n> explicitly link against an umbrella framework, you may freely use all \n> of the symbols it contains without referencing any sub-frameworks.\n> \n> Building Your Application With PlugIns and Bundles\n> \n> If your project includes bundles that must be resolved against \n> symbols in the program that loads them, do not use the linker option \n> -undefined suppress when building your bundles. Instead, use \n> -undefined error and make sure you specify the path to the program \n> you are linking against with the option -bundle_loader \n> pathnameToYourProgramOrFramework. 
For Project Builder \n> projects, add these arguments to the the OTHER_LDFLAGS build \n> setting of the bundle's target (in the expert settings table of the Build \n> Settings tab).\n> \n> Using the -bundle_loader option instead of -undefined suppress will \n> cause an ordering problem if the bundles are located inside your \n> application's bundle. For the application to copy the bundles to itself \n> with a Copy Files build phase, the bundles must be built first. \n> However, to properly link against the app and get their symbols \n> resolved the bundles must build after the application.\n> \n> One solution for an application that loads plug-ins or other bundles \n> is to provide your plug-in API implementation in a framework that \n> bundles can link against. Your application will also link to that \n> framework. For example, Interface Builder exposes an API for people \n> who wish to write new palettes of objects. There is an \n> InterfaceBuilder.framework that implements the functionality of the \n> API. The Interface Builder application itself calls routines in this \n> framework, and so do plug-in-palettes. This organization provides \n> several benefits:\n> \n> ? \tIt solves ordering issues and does not force plug-ins to link \n> against your application's executable file.\n> ? \tIt provides good separation between routines available for plugins \n> to use and routines that are internal to the app.\n> ? \tIt gives you a good place to package the plug-in shared library and \n> all the headers and doc and so forth that developers will need to \n> develop plug-ins. 
This makes it more convenient for your \n> developers.\n> \n> Alternately, you can use a second application target called \"Foo \n> Wrapper\" to fix the dependency problem:\n> \n> 1.\tSet the application name of the new target to the same name as \n> the real application target (\"Foo\", in this example).\n> 2.\tSet the original application target to not install itself.\n> 3.\tSet the wrapper target to install itself.\n> 4.\tAdd a Copy Files phase to the wrapper target and remove all the \n> other build phases.\n> 5.\tSet up the Copy Files phase the same way as it was in the \n> original app target.\n> 6.\tMake the wrapper target depend on the app target and on all the \n> bundle targets.\n> 7.\tMake things that used to depend on the app target depend on the \n> wrapper target instead.\n> 8.\tIf the app target was the first target in your target list, make the \n> wrapper target first.\n> 9.\tFinally, because the installation is now being performed by the \n> Foo Wrapper target, but the actual application is being built by the \n> Foo target, when the install happens, Project Builder will not remove \n> debugging symbols from the application executable file. 
To fix this \n> problem, add a shell script build phase to the Foo Wrapper target \n> that is set to run only for install builds and that contains the \n> following \n> script:\n> \n> strip -S \n> ${SYMROOT}/${PRODUCT_NAME}.${WRAPPER_EXTENSION}/Cont\n> ents/MacOS/$ {PRODUCT_NAME}\n> \n> New Dynamic Linking Functions for Use With Two-Level \n> Namespace Executables\n> \n> The runtime symbol lookup routines released in Mac OS X 10.0 \n> (located in the header <mach-o/dyld.h> and listed below) perform \n> lookups in the flat, global symbol namespace, and thus, when you \n> use them to find symbols in your plugins, may not return the \n> intended symbols in for two-level namespace applications.\n> \n> The \"hint\" provided to NSLookupAndBindSymbolWithHint is a \n> performance \"hint\" for the flat namespace lookup, and is thus not \n> used as the first level name for a two-level namespace lookup. To \n> perform a lookup within a two-level namespace executable, use the \n> function NSLookupSymbolInImage, as documented below.\n> \n> Applications built as two-level namespace executables should \n> instead use the following new routines.\n> \n> ? \tNSAddImage\n> ? \tNSIsSymbolNameDefinedInImage\n> ? \tNSLookupSymbolInImage\n> \n> NSAddImage\n> \n> \n> const struct mach_header *\n> NSAddImage(\n> char *image_name,\n> unsigned long options);\n> #define NSADDIMAGE_OPTION_NONE 0x0\n> #define NSADDIMAGE_OPTION_RETURN_ON_ERROR 0x1\n> #define NSADDIMAGE_OPTION_WITH_SEARCHING 0x2\n> #define NSADDIMAGE_OPTION_RETURN_ONLY_IF_LOADED 0x4\n> \n> NSAddImage loads the shared library specified by image_name into \n> the current process, returning a pointer to the mach_header data \n> structure of the loaded image. 
Any libraries that the specified library \n> depends on are also loaded.\n> \n> If the shared library specified by image_name is already loaded, the \n> mach_header already loaded is returned.\n> \n> The image_name parameter is a pointer to a C string containing the \n> pathname to the shared library on disk. For best performance, \n> specify the full pathname of the shared library?do not specify a \n> symlink.\n> \n> The options parameter is a bit mask. Valid options are:\n> \n> ? \tNSADDIMAGE_OPTION_NONE\n> No options.\n> \n> ? \tNSADDIMAGE_OPTION_RETURN_ON_ERROR\n> If an error occurs and you have specified this option, NSAddImage \n> returns NULL. You can then use the function NSLinkEditError to \n> retrieve information about the error.\n> \n> If an error occurs, and you have not specified this option, \n> NSAddImage will call the linkEdit error handler you have installed \n> using the NSInstallLinkEditErrorHandlers function. If you have not \n> installed a linkEdit error handler, NSAddImage prints an error to \n> stderr and calls the exit function to end the program.\n> \n> ? \tNSADDIMAGE_OPTION_WITH_SEARCHING\n> With this option the image_name passed for the library and all its \n> dependents will be effected by the various DYLD environment \n> variables as if this library were linked into the program.\n> \n> ? \tNSADDIMAGE_OPTION_RETURN_ONLY_IF_LOADED\n> With this option, NSAddImage will not load a shared library that has \n> not already been loaded. 
If the specified image_name is already \n> loaded, NSAddImage will return its mach_header.\n> \n> The linkEdit error handler is documented in the NSModule(3) man \n> page.\n> \n> NSLookupSymbolInImage\n> \n> extern NSSymbol\n> NSLookupSymbolInImage(\n> const struct mach_header *image,\n> const char *symbolName\n> unsigned long options);\n> #define NSLOOKUPSYMBOLINIMAGE_OPTION_BIND 0x0\n> #define NSLOOKUPSYMBOLINIMAGE_OPTION_BIND_NOW \n> 0x1\n> #define NSLOOKUPSYMBOLINIMAGE_OPTION_BIND_FULLY \n> 0x2\n> #define \n> NSLOOKUPSYMBOLINIMAGE_OPTION_RETURN_ON_ERROR 0x4\n> \n> NSLookupSymbolInImage returns the specified symbol (as an \n> NSSymbol) from the specified image.\n> \n> Error handling for NSLookupSymbolInImage is similar to error \n> handling for NSAddImage.\n> \n> The image parameter is a pointer to a mach_header data structure. \n> You can get this pointer from a shared library by calling \n> NSAddImage.\n> \n> The symbolName parameter is a C string specifying the name of the \n> symbol to retrieve.\n> \n> The options parameter is a bit mask. The following options are valid:\n> \n> ? \tNSLOOKUPSYMBOLINIMAGE_OPTION_BIND\n> Bind the non-lazy symbols of the module in the image that defines \n> the symbolName and let all lazy symbols in the module be bound on \n> first call.\n> \n> This should be used in the case where you expect the module to \n> bind without errors (for example, a library supplied with the system). \n> If, later, you call a lazy symbol, and the lazy symbol fails to bind, the \n> runtime will call the linkEdit handler you have installed using the \n> NSInstallLinkEditErrorHandlers function. If there is no linkEdit \n> handler installed, the runtime will print a message to stderr and call \n> the exit function to end the program.\n> \n> ? 
\tNSLOOKUPSYMBOLINIMAGE_OPTION_BIND_NOW\n> Bind all the non-lazy and lazy symbols of the module in the image \n> that defines the symbolName and let all dependent symbols in the \n> needed libraries be bound as needed. This would be used for a \n> module that might not be expected bind without errors but links \n> against only system-supplied libraries that are expected to bind \n> without any errors. For example, you might use this option with a \n> third-party plug-in.\n> \n> ? \tNSLOOKUPSYMBOLINIMAGE_OPTION_BIND_FULLY\n> Bind all the symbols of the module that defines the symbolName \n> and all if the dependent symbols of all needed libraries. This should \n> only be used for things like signal handlers and linkEdit error \n> handlers that can't bind other symbols once executed.\n> \n> ? \tNSLOOKUPSYMBOLINIMAGE_OPTION_RETURN_ON_ERROR\n> Return NULL if the symbol cannot be bound. This option is similar to \n> the NSAddImage option \n> NSADDIMAGE_OPTION_RETURN_ON_ERROR. See NSAddImage \n> for more details.\n> \n> NSIsSymbolNameDefinedInImage\n> \n> extern enum DYLD_BOOL\n> NSIsSymbolNameDefinedInImage(\n> const struct mach_header *image,\n> const char *symbolName);\n> \n> NSIsSymbolNameDefinedInImage returns true if the specified \n> image (or, if the image is a framework, any of its subframeworks) \n> contains the specified symbol, false otherwise.\n> \n> The image parameter is a pointer to a mach_header data structure. \n> You can get this pointer from a shared library by calling \n> NSAddImage.\n> \n> The symbolName parameter is a C string specifying the name of the \n> symbol to retrieve.\n> \n> The image parameter for NSLookupSymbolInImage and \n> NSIsSymbolNameDefinedInImage is a pointer to the mach_header \n> data structure of a Mach-O shared library. You can obtain a pointer to \n> a mach_header data structure from:\n> \n> ? \tthe NSAddImage function\n> ? \ta linker defined symbol as defined in <mach-o/ldsym.h>, \n> described on the ld(1) man page\n> ? 
\tthe dyld(3) function _dyld_get_image_header(3)\n> ? \tthe mach_header arguments to the callback functions called from \n> _dyld_register_func_for_add_image(3).\n> \n> Copyright ? 2001 Apple Computer, Inc.\n", "msg_date": "Fri, 05 Oct 2001 10:08:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Darwin 1.4 (OS X 10.1) Broken Compile, Snapshot and 7.1.3 " }, { "msg_contents": "Alex Avriette writes:\n\n> /usr/bin/ld: -undefined error must be used when -twolevel_namespace is in effect\n\nIn src/makefiles/Makefile.darwin, add -flat_namespace to CFLAGS_SL.\n\nI'm checking in a patch to fix this in 7.2.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 5 Oct 2001 23:14:22 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Darwin 1.4 (OS X 10.1) Broken Compile, Snapshot and" } ]
[ { "msg_contents": "At 9:41 AM -0400 10/5/01, Alex Avriette wrote:\n>I had a catastrophic crash on one of our webservers here and took the\n>opportunity to upgrade it. Unfortunately, after the upgrade, I am unable to\n>compile either 7.1.3 or the current snapshot of postgres.\n>\n>The error I get is rather opaque:\n>\n>/usr/bin/ld: -undefined error must be used when -twolevel_namespace is in\n>effect\n>\n>This is in the final stretch of compiling, on libpq. I checked google for\n>it, and came up with three header files in an MIT Apple sourcetree. So this\n>strikes me as a particularly Darwinish failing. I also ran a recursive grep\n>on the postgres source tree and wasnt able to find -twolevel_namespace in\n>any of the Makefiles. This makes me think it is something external.\n>\n>Anyone have an idea as to what is causing this? This box is down until\n>postgres comes back up. :-7\n>\n>alex\n>\n\nAlex,\n\nYou might check the Fink mailing list; I believe this has been talked \nabout, though I can't remember which list. But I believe Fink \ninstalls PostgreSQL just fine on 10.1:\n\nhttp://sourceforge.net/projects/fink\n\nI might also have seen the discussion on the Darwin mailing list.\n\nhttp://lists.apple.com/archives/darwin-development/2001/Sep/19.html\n\nhas something like:\nsimply add -flat_namespace to the LDFLAGS.\n\nAnd later on the same page are the patches to make it work.\n\nNeil Tiffin\nChicago\n", "msg_date": "Fri, 5 Oct 2001 09:31:01 -0500", "msg_from": "Neil Tiffin <ntiffin@earthlink.net>", "msg_from_op": true, "msg_subject": "Re: Darwin 1.4 (OS X 10.1) Broken Compile, Snapshot and" } ]
[ { "msg_contents": "\n> > > BTW, still I'm getting the stucking backends. New info: a snapshot\n> > > dated on 10/3 works fine.\n> > \n> > I allways have trouble with those different date formats. Do you\n> > mean, that the problem is fixed as of 3. October, or that an old\n> > snapshot from 10. March still worked ?\n> \n> Of course the working source is 3rd October.\n\nTom, do you have an idea what you might have fixed to that effect ?\n\n> \n> > Snapshot of 1. Oct 2001 does not hang in \"make check\" on AIX 4.3.2\n> > 4 CPU machine.\n> \n> Oh, you have 4 way machine too?\n\nWell, the company I work for has all sorts of AIX hardware, but no AIX5\nyet.\nI usually use a 43P 150 with one 604e CPU for development and testing,\nbut \"borrowed\" another one to test the 4 CPU hang :-)\n\n> > So it seems to be a problem on AIX5L only :-( Maybe a \n> semaphore bug ? \n> \n> Maybe. BTW, what is your compiler? I'm using xlc.\n\nSame here, xlc from VisualAge C++, maybe other version: \nvac.C 5.0.1.3 COMMITTED C for AIX Compiler\n\nI made the experience, that gcc compiled code is somewhat slower.\n\nAndreas\n", "msg_date": "Fri, 5 Oct 2001 16:38:24 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Problem on AIX with current " }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n>> Of course the working source is 3rd October.\n\n> Tom, do you have an idea what you might have fixed to that effect ?\n\nNo idea. I've been fixing some portability issues in dynahash.c,\nbut AFAIK they only affected the pgstats collector process not backends.\nAlso, that breakage had existed for months...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Oct 2001 13:42:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem on AIX with current " } ]
[ { "msg_contents": "Howdy hackers,\n\nShould I file a bug or is this known (or fixed)?\n\nThanks!\n\nLaurette (laurette@nextbus.com)\n\nHere's the psql ready test and description:\n\n--\n-- BUG Description:\n-- A float with zeros on the right of the decimal\n-- fails conversion to an interval.\n-- interval ( int ) works\n-- interval ( float ) works\n-- interval ( float where 0 on the right of decimal ) fail.\n--\n-- The first set of selects labelled \"No Casting\" show this\n-- problem.\n--\n-- To further diagnose the problem, I tried each query\n-- with a cast to integer, float and float8.\n-- This diagnosis is shown in the section labelled \"With Casting\".\n-- Note that the float value cast to integer fails due to\n-- strtol called by pg_atoi(). (Stupid strtol).\n-- \n-- Possible solution:\n-- In postgresql-7.1.3:src/backend/utils/datetime.c,\n-- function ParseDateTime line 442:\n-- \n-- This if statement forces a string containing a decimal\n-- to be typed as a date. Although dates can contain decimal\n-- delimiters, it is more common for decimal delimiters to be\n-- assumed to be floats. Perhaps the algorithm could change\n-- so that 2 decimals in the string makes it a date and only\n-- one makes it a float and therefore time. 
The alternative\n-- is to force strings with decimals into floats only and therefore\n-- numeric and time only.\n--\n\\echo ===========================================\n\\echo No casting\n\\echo ===========================================\n\\echo select interval(301); expect 00:05:01\nselect interval( 301 );\n\\echo\n\\echo select interval( 301.01 ); expect 00:05:01.01\nselect interval( 301.01 );\n\\echo\n\\echo select interval( 301.00 ); expect 00:05:01 get error on interval\nselect interval( 301.00 );\n\n\\echo ===========================================\n\\echo With casting\n\\echo ===========================================\n\\echo\n\\echo select interval( 301::integer); expect 00:05:01\nselect interval( 301::integer );\n\\echo select interval( 301::float); expect 00:05:01\nselect interval( 301::float );\n\\echo select interval( 301::float8 ); expect 00:05:01\nselect interval( 301::float8 );\n\\echo\n\\echo select interval( 301.01::integer); expect 00:05:01.01; pg_atoi message caused by strtol\nselect interval( 301.01::integer );\n\\echo select interval( 301.01::float); expect 00:05:01.01\nselect interval( 301.01::float );\n\\echo select interval( 301.01::float8); expect 00:05:01.01\nselect interval( 301.01::float8 );\n\\echo\n\\echo select interval( 301.00::integer); expect 00:05:01; pg_atoi message caused by strtol\nselect interval( 301.00::integer );\n\\echo select interval( 301.00::float); expect 00:05:01\nselect interval( 301.00::float );\n\\echo select interval( 301.00::float8); expect 00:05:01\nselect interval( 301.00::float8);\n\n\n\n", "msg_date": "Fri, 5 Oct 2001 09:15:09 -0700 (PDT)", "msg_from": "Laurette Cisneros <laurette@nextbus.com>", "msg_from_op": true, "msg_subject": "Interval bug " }, { "msg_contents": "Laurette Cisneros <laurette@nextbus.com> writes:\n> Should I file a bug or is this known (or fixed)?\n\nThe problem appears to be gone in current development sources:\n\nregression=# select interval( 301.00 );\n interval\n----------\n 
00:05:01\n(1 row)\n\nThe particular code you mentioned seems unchanged, so the fix was\nevidently elsewhere.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Oct 2001 15:30:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Interval bug " } ]
[ { "msg_contents": "I have found some kind of problems with a rule I have on one of my databases, \nand after some mailing with Tom, and re-checking my logs, I find that the \ninserts look like they are getting in (if I look at the logs) but the data is not \nthere!\n\nThis is the RULE:\n\nCREATE RULE admin_insert AS ON \nINSERT TO admin_view\nDO INSTEAD (\n    INSERT INTO carrera \n\t  (carrera,titulo,area,descripcion,incumbencia,director,\n\t   matricula,cupos,informes,nivel,requisitos,duracion,\n\t   categoria)\n    VALUES \n\t  (new.carrera,new.titulo,new.id_subarea,new.descripcion,\n\t   new.incumbencia,new.director,new.matricula,new.cupos,\n\t   new.informes,new.nivel,new.requisitos,new.duracion,\n\t   new.car_categ);\n\n    INSERT INTO inscripcion\n\t  (carrera,fecha_ini,fecha_fin,lugar)\n    VALUES\n\t  (currval('carrera_id_curso_seq'),new.fecha_ini,new.fecha_fin,\n\t   new.lugar);\n\n    INSERT INTO resol\n\t  (carr,numero,year,fecha)\n    VALUES\n\t  (currval('carrera_id_curso_seq'),new.numero,new.year,new.fecha);\n\n    INSERT INTO log_carrera (accion,tabla,id_col) VALUES \n\t  ('I','carrera',currval('carrera_id_curso_seq'));\n);\n\nAs you can see, there is an insert to a log (log_carrera) table for each \ninsert to the view. \nOn the other hand, all inserts are done through the view, so for each value \nof the column id_curso of carrera (see the sequence of the other 3 inserts) \nthere should be a value in inscripcion.carrera, resol.carr and \nlog_carrera.id_col. 
This is not true for inscripcion:\n\nwebunl=> select count(id_curso) from carrera where id_curso NOT IN\nwebunl-> (select carrera from inscripcion);\n count\n-------\n 38\n(1 row)\n \nwebunl=>\n\nAnd if I check the log_carrera table for some of the values found by the last \nquery (obviously not the count but the id_curso):\n\nwebunl=> select * from log_carrera where id_col IN\nwebunl-> (87,88,90,92) AND tabla='carrera';\n id_log | usuario | horario | accion | tabla | id_col\n--------+---------+------------------------------+--------+---------+--------\n 259 | mariana | Mon 24 Sep 20:21:42 2001 GMT | I | carrera | 87\n 262 | mariana | Mon 24 Sep 20:36:26 2001 GMT | I | carrera | 88\n 269 | mariana | Mon 24 Sep 21:37:25 2001 GMT | I | carrera | 90\n 275 | mariana | Mon 24 Sep 21:53:38 2001 GMT | I | carrera | 92\n(4 rows)\n \nwebunl=>\n\nIn this case 92 is the only one of all four that is OK. The other 3 didn't \nmake the inscripcion insert.\n\nAnd the most courious thing i what the logs say:\n\n2001-09-24 17:21:42 DEBUG: StartTransactionCommand\n2001-09-24 17:21:42 DEBUG: query: INSERT INTO admin_view \n(titulo,id_subarea,descripcion,nivel,requisitos,duracion,numero,year,fecha,fecha_ini,fecha_fin,lugar,informes \n,carrera,director) VALUES ('Especialista y Magister en Gesti<F3>n \nUrbana',1,'Fase 1 Especializaci<F3>n: 3 m<F3>dulos; Introductorio; Contenidos \nEspec\n<ED>ficos; Problem<E1>ticas particularizadas. Fase 2 Maestr<ED>a: Se \nprofundizar<E1> en aspectos vinculados al desarrollo de la de la Tesis del \nMagister.',4,'Dr. 
Homero \nRondina',12,93,2001,'24/9/2001','01/02/2002','30/03/2002','Facultad de \nArquitectura, Dise<F1>o y Urbanismo.','1','Especializaci<F3>n y Maestr<ED>a \nen Gesti<F3>n Urbana, municipal y Comunal','Ser graduado Universitario' )\n2001-09-24 17:21:42 DEBUG: ProcessQuery\nINSERT @ 0/17879624: prev 0/17879584; xprev 0/0; xid 36118: Heap - insert: \nnode 102203/102530; tid 19/1\nINSERT @ 0/17880096: prev 0/17879624; xprev 0/17879624; xid 36118: Btree - \ninsert: node 102203/102573; tid 1/87\nINSERT @ 0/17880160: prev 0/17880096; xprev 0/17880096; xid 36118: Btree - \ninsert: node 102203/102600; tid 1/1\nINSERT @ 0/17880224: prev 0/17880160; xprev 0/17880160; xid 36118: Btree - \ninsert: node 102203/102603; tid 1/86\nINSERT @ 0/17880288: prev 0/17880224; xprev 0/17880224; xid 36118: Btree - \ninsert: node 102203/102606; tid 1/1\n2001-09-24 17:21:42 DEBUG: ProcessQuery\nINSERT @ 0/17880352: prev 0/17880288; xprev 0/17880288; xid 36118: Heap - \ninsert: node 102203/102724; tid 1/22\nINSERT @ 0/17880472: prev 0/17880352; xprev 0/17880352; xid 36118: Btree - \ninsert: node 102203/102743; tid 1/87\n2001-09-24 17:21:42 DEBUG: ProcessQuery\nINSERT @ 0/17880536: prev 0/17880472; xprev 0/17880472; xid 36118: Heap - \ninsert: node 102203/102635; tid 0/87\nINSERT @ 0/17880608: prev 0/17880536; xprev 0/17880536; xid 36118: Btree - \ninsert: node 102203/102654; tid 1/87\n2001-09-24 17:21:42 DEBUG: ProcessQuery\nINSERT @ 0/17880672: prev 0/17880608; xprev 0/17880608; xid 36118: Heap - \ninsert: node 102203/102902; tid 2/71\nINSERT @ 0/17880776: prev 0/17880672; xprev 0/17880672; xid 36118: Btree - \ninsert: node 102203/102922; tid 1/259\n2001-09-24 17:21:42 DEBUG: CommitTransactionCommand\n\nLooks like the 4 inserts went OK, but I can't understand why they didn't get \nto the database?\n\nwebunl=> select version();\n version\n------------------------------------------------------------------\n PostgreSQL 7.1.3 on sparc-sun-solaris2.7, compiled by GCC 2.95.2\n\n\nTIA!\n\n\n-- 
\nPorqué usar una base de datos relacional cualquiera,\nsi podés usar PostgreSQL?\n-----------------------------------------------------------------\nMartín Marqués                  |        mmarques@unl.edu.ar\nProgramador, Administrador, DBA |       Centro de Telematica\n                       Universidad Nacional\n                            del Litoral\n-----------------------------------------------------------------\n", "msg_date": "Fri, 5 Oct 2001 17:59:51 -0300", "msg_from": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <martin@bugs.unl.edu.ar>", "msg_from_op": true, "msg_subject": "Rules and missing inserts" } ]
[ { "msg_contents": "Hi,\n\nI have encountered a problem with plpgsql and I would appreciate it\nif anyone could help me with this.\n\n\nIf I have a relation, say:\n\nemp_id |  salary\n---------------------------\n1      | 40000.00\n---------------------------\n2      | 45600.00\n---------------------------\n3      | 40000.00\n---------------------------\n4      | 45600.00\n---------------------------\n5      | 40000.00\n---------------------------\n\nmy plpgsql function is to have the following format:\n\nget_emp_ids(float)\n\nIt takes in the salary amount and returns a list of\nemp_ids that have the corresponding salary.\n\nHow should I go about doing it in plpgsql?\n\nThere's no return type that allows you to return multiple rows.\n\nIs \"raise notice\" the only way to display these results?\n\n\nThanks in advance for your help.\n\nPS: please also send a cc to me when replying.\n\n\nBill\n-- \nEverything you know is wrong!\n---------------------------------------------\nBill Shui     Email: touro@capoeirabrasil.com.au\nBioinformatics Programmer\n", "msg_date": "Sat, 6 Oct 2001 14:28:22 +1000", "msg_from": "Bill Shui <touro@capoeirabrasil.com.au>", "msg_from_op": true, "msg_subject": "plpgsql question." } ]
[ { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nWhile playing around with trying to add foreign keys to the \n\\d table display in psql, I noticed that tableinfo.triggers \nis not used once it is set. I think it is meant to go here:\n\n    /* count triggers */\n    if (!error && tableinfo.hasrules)\n\nas:\n\n    /* count triggers */\n    if (!error && tableinfo.triggers)\n\n\nDoes that seem right? Since the archive links on the web page, \nand cvs appear to be down, I cannot check if this has been \nalready fixed. Also, after all the hullabaloo about cvs, is there \na definitive page for using it? This one, maybe?:\n\nhttp://developer.postgresql.org/TODO/docs/cvs.html\n\nThanks,\nGreg Sabino Mullane\ngreg@turnstep.com  PGP Key: 0x14964AC8 200110060755\n\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niQA/AwUBO77yubybkGcUlkrIEQKYCQCfW9SRNR3WqeHQyDlZ5QT67I7y0MoAn2zg\nhNqKkxYnDBU6h60TUXsaK3m0\n=h46E\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Sat, 6 Oct 2001 08:00:54 -0400 (EDT)", "msg_from": "\"Greg Sabino Mullane\" <greg@turnstep.com>", "msg_from_op": true, "msg_subject": "psql not showing triggers" }, { "msg_contents": "[ There is text before PGP section. ]\n> \n-- Start of PGP signed section.\n> While playing around with trying to add foreign keys to the \n> \\d table display in psql, I noticed that tableinfo.triggers \n> is not used once it is set. I think it is meant to go here:\n> \n>     /* count triggers */\n>     if (!error && tableinfo.hasrules)\n> \n> as:\n> \n>     /* count triggers */\n>     if (!error && tableinfo.triggers)\n> \n> \n> Does that seem right? Since the archive links on the web page, \n> and cvs appear to be down, I cannot check if this has been \n> already fixed. Also, after all the hullabaloo about cvs, is there \n> a definitive page for using it? This one, maybe?:\n> \n> http://developer.postgresql.org/TODO/docs/cvs.html\n\nGood catch. Your guess was right, and it had not been fixed. 
Patch\nattached and applied.\n\nAs far as CVS, that page should be accurate. It was updated a few days\nago with the correct information.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/bin/psql/describe.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/bin/psql/describe.c,v\nretrieving revision 1.39\ndiff -c -r1.39 describe.c\n*** src/bin/psql/describe.c\t2001/08/21 16:36:05\t1.39\n--- src/bin/psql/describe.c\t2001/10/06 14:39:07\n***************\n*** 793,799 ****\n \t\t}\n \n \t\t/* count triggers */\n! \t\tif (!error && tableinfo.hasrules)\n \t\t{\n \t\t\tsprintf(buf,\n \t\t\t\t\t\"SELECT t.tgname\\n\"\n--- 793,799 ----\n \t\t}\n \n \t\t/* count triggers */\n! \t\tif (!error && tableinfo.triggers)\n \t\t{\n \t\t\tsprintf(buf,\n \t\t\t\t\t\"SELECT t.tgname\\n\"", "msg_date": "Sat, 6 Oct 2001 10:40:46 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql not showing triggers" } ]
[ { "msg_contents": "If I try:\ncvs -d :pserver:anoncvs@anoncvs.postgresql.org:/cvsroot login \nI get a time out\n\nIf I go to the cvs repository checkout link on\ndeveloper.postgresql.org\nI get the following:\nNot Found\n\nThe requested URL /TODO/docs/stylesheet.css was not found on this\nserver.\n\n\nCan someone fix these issues?\n\nLER\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sat, 6 Oct 2001 14:14:23 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "anoncvs and CVS link off developers.postgresql.org" }, { "msg_contents": "On Sat, 6 Oct 2001, Larry Rosenman wrote:\n\n> If I try:\n> cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/cvsroot login\n> I get a time out\n\nMoi aussi. I can't reach www.postgresql.org either.\n\nIt doesn't seem obviously to be a routing problem.\n\nMatthew.\n\n", "msg_date": "Sat, 6 Oct 2001 21:28:03 +0100 (BST)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": false, "msg_subject": "Re: anoncvs and CVS link off developers.postgresql.org" }, { "msg_contents": "On Sat, 6 Oct 2001 14:14:23 -0500, you wrote:\n>If I try:\n>cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/cvsroot login \n>I get a time out\n\nSame here an hour ago, but it seems to be fixed now.\n\nRegards,\nRen� Pijlman <rene@lab.applinet.nl>\n", "msg_date": "Sat, 06 Oct 2001 23:36:26 +0200", "msg_from": "Rene Pijlman <rene@lab.applinet.nl>", "msg_from_op": false, "msg_subject": "Re: anoncvs and CVS link off developers.postgresql.org" }, { "msg_contents": "* Larry Rosenman <ler@lerctr.org> [011006 14:28]:\n> If I try:\n> cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/cvsroot login \n> I get a time out\nThis is fixed, and I needed to add a /projects before the /cvsroot.\n> \n> If I go to the cvs repository checkout link on\n> developer.postgresql.org\n> I get the following:\n> 
Not Found\n> \n> The requested URL /TODO/docs/stylesheet.css was not found on this\n> server.\n> \nThis issue is still with us....\n> \n> Can someone fix these issues?\n> \n> LER\n> \n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sat, 6 Oct 2001 18:53:29 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: anoncvs and CVS link off developers.postgresql.org" }, { "msg_contents": "\nDown server, been rebooted ...\n\nOn Sat, 6 Oct 2001, Matthew Kirkwood wrote:\n\n> On Sat, 6 Oct 2001, Larry Rosenman wrote:\n>\n> > If I try:\n> > cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/cvsroot login\n> > I get a time out\n>\n> Moi aussi. I can't reach www.postgresql.org either.\n>\n> It doesn't seem obviously to be a routing problem.\n>\n> Matthew.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Sat, 6 Oct 2001 20:24:39 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: anoncvs and CVS link off developers.postgresql.org" } ]
[ { "msg_contents": "Hi,\nI need to use some Chinese characters in charset MS950(CP950) but \nnot in Big5. Big5 and MS950 encoding are very much similiar but \ncurrently there is no support for MS950 and I will need to add it.\nI've read files in src/backend/utils/mb directory but still not\nsure what files to modify. Or can I just replace the Big5 mapping?\nThanks for your help.\n\n--\nRegards,\nZhenbang Wei\nforth@mail.net.tw\n", "msg_date": "Sun, 7 Oct 2001 07:14:39 +0800 (GMT+08:00)", "msg_from": "forth@pagic.net", "msg_from_op": true, "msg_subject": "How to add a new encoding support?" } ]
[ { "msg_contents": "The attached patch works for my case...\n\nregression=# create table test (id serial, col1 varchar(64));\nNOTICE: CREATE TABLE will create implicit sequence 'test_id_seq' for SERIAL column 'test.id'\nNOTICE: CREATE TABLE/UNIQUE will create implicit index 'test_id_key' for table 'test'\nCREATE\nregression=# create index i_t_c on test(col1);\nCREATE\nregression=# alter table test rename col1 to col2;\nALTER\nregression=# \\d test\n Table \"test\"\n Column | Type | Modifiers \n--------+-----------------------+-------------------------------------------------\n id | integer | not null default nextval('\"test_id_seq\"'::text)\n col2 | character varying(64) | \nIndexes: i_t_c\nUnique keys: test_id_key\n\nregression=# \\d itc\nDid not find any relation named \"itc\".\nregression=# \\d i_t_c\n Index \"i_t_c\"\n Column | Type \n--------+-----------------------\n col2 | character varying(64)\nbtree\n\n\nwooohoo!!! Of course, it would be best if someone else looked this\ncode over, because I get the feeling there is an easier way to get\nthis done. The only thing I can say is that it solves my problem\nand does not /appear/ to create any others.\n\ncheers.\n Brent\n\np.s., I appreciate the gurus not jumping in with the quick fix --\nThe experience has been fun :-)\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman", "msg_date": "Sat, 6 Oct 2001 21:58:33 -0400", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "[patch] ALTER RENAME and indexes" }, { "msg_contents": "It occurs to me that the real problem is not so much ALTER RENAME not\ndoing enough, as it is psql doing the wrong thing. 
The \\d display for\nindexes is almost entirely unhelpful, since it doesn't tell you such\ncritical stuff as whether the index is a functional index nor which\nindex opclasses are being used. I wonder whether we oughtn't rip out\nthe whole display and make it report the results of pg_get_indexdef(),\ninstead.\n\nregression=# create table foo(f1 int,f2 int, primary key(f1,f2));\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'foo_pkey' for table 'foo'\nCREATE\nregression=# \\d foo\n Table \"foo\"\n Column | Type | Modifiers\n--------+---------+-----------\n f1 | integer | not null\n f2 | integer | not null\nPrimary key: foo_pkey\n\nregression=# create index foofn on foo (int4pl(f1,f2));\nCREATE\nregression=# \\d foofn\n Index \"foofn\"\n Column | Type\n--------+---------\n int4pl | integer\nbtree\n\nregression=# select pg_get_indexdef(oid) from pg_class where relname = 'foofn';\n pg_get_indexdef\n--------------------------------------------------------\n CREATE INDEX foofn ON foo USING btree (int4pl(f1, f2))\n(1 row)\n\nregression=# alter table foo rename f1 to f1new;\nALTER\nregression=# select pg_get_indexdef(oid) from pg_class where relname = 'foofn';\n pg_get_indexdef\n-----------------------------------------------------------\n CREATE INDEX foofn ON foo USING btree (int4pl(f1new, f2))\n(1 row)\n\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 07 Oct 2001 10:56:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [patch] ALTER RENAME and indexes " }, { "msg_contents": "On 07 Oct 2001 at 10:56 (-0400), Tom Lane wrote:\n| It occurs to me that the real problem is not so much ALTER RENAME not\n| doing enough, as it is psql doing the wrong thing. The \\d display for\n| indexes is almost entirely unhelpful, since it doesn't tell you such\n| critical stuff as whether the index is a functional index nor which\n| index opclasses are being used. 
I wonder whether we oughtn't rip out\n| the whole display and make it report the results of pg_get_indexdef(),\n| instead.\n\nThis would solve the display problem for sure, but we'd still have\nbad data in the pg_attribute tuple for the index -- specifically, \nattname would still contain the original column name that the index\nwas created on. I'm now aware that PG does not use this attname\ndirectly/internally, but it would still be wrong if anyone happens\nto look at the system catalog.\n\ncheers.\n Brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Sun, 7 Oct 2001 19:16:29 -0400", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "Re: [patch] ALTER RENAME and indexes" }, { "msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> wooohoo!!! Of course, it would be best if someone else looked this\n> code over, because I get the feeling there is an easier way to get\n> this done.\n\nNo, that's about right, except that you forgot one step: you shouldn't\ntry to update column names of functional indexes, since their columns\nare named after the function not the column. If the function name\nhappened to match the column name then you'd have applied an erroneous\nrenaming.\n\nFixed and applied to CVS.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Oct 2001 14:43:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [patch] ALTER RENAME and indexes " }, { "msg_contents": "On 08 Oct 2001 at 14:43 (-0400), Tom Lane wrote:\n| Brent Verner <brent@rcfile.org> writes:\n| > wooohoo!!! 
Of course, it would be best if someone else looked this\n| > code over, because I get the feeling there is an easier way to get\n| > this done.\n| \n| No, that's about right, except that you forgot one step: you shouldn't\n| try to update column names of functional indexes, since their columns\n| are named after the function not the column. If the function name\n| happened to match the column name then you'd have applied an erroneous\n| renaming.\n| \n| Fixed and applied to CVS.\n\nThanks for fixing that problem. Out of curiousity, did you put that\ncode in the renameatt() function for any reason, or is it just a \nstyle thing? (I ask in hopes of getting closer to submitting \ncode/patches that you'd not have to rewrite to apply ;-) )\n\nThanks.\n Brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Mon, 8 Oct 2001 20:18:47 -0400", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": true, "msg_subject": "Re: [patch] ALTER RENAME and indexes" }, { "msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> Thanks for fixing that problem. Out of curiousity, did you put that\n> code in the renameatt() function for any reason, or is it just a \n> style thing?\n\nJust to avoid closing and reopening the target relation and pg_attribute\nrelation. Releasing and then reacquiring the lock on pg_attribute\nseemed like a not-so-good idea, although it probably shouldn't make any\ndifference.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Oct 2001 00:20:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [patch] ALTER RENAME and indexes " } ]
[ { "msg_contents": "Hello,\n\nHere are few more translated messages into Russian\nfor the PG_DUMP component.\n\nPlease apply to </src/bin/pg_dump/ru.po>\n\n--\nSerguei A. Mokhov", "msg_date": "Sun, 7 Oct 2001 14:22:13 -0400", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": true, "msg_subject": "PG_DUMP NLS (Russian)" }, { "msg_contents": "\nI was noticing that most of the translations are just being put into the \nroot of the source tree. I just thought I would point out that could get \nmessy and many projects realizing this have created a po directory off the \nsource tree to store the translations in. It might be worthwhile to \ncreate such a directory and implement this now then when more and more \ntranslations start pouring in. Just a very concrete sequential way of \nkeeping the \"real\" code visible.\n\n\nOn Sun, 7 Oct 2001, Serguei Mokhov wrote:\n\n> Hello,\n> \n> Here are few more translated messages into Russian\n> for the PG_DUMP component.\n> \n> Please apply to </src/bin/pg_dump/ru.po>\n> \n> --\n> Serguei A. Mokhov\n> \n> \n\n-- \n//========================================================\\\\\n|| D. Hageman <dhageman@dracken.com> ||\n\\\\========================================================//\n\n", "msg_date": "Sun, 7 Oct 2001 15:06:53 -0500 (CDT)", "msg_from": "\"D. Hageman\" <dhageman@dracken.com>", "msg_from_op": false, "msg_subject": "Re: PG_DUMP NLS (Russian)" }, { "msg_contents": "----- Original Message ----- \nFrom: D. Hageman <dhageman@dracken.com>\nSent: Sunday, October 07, 2001 4:06 PM\n\n\n> I was noticing that most of the translations are just being put into the \n> root of the source tree. \n\nNot exactly into the source root, but to the source\nof individual components prepared for the NLS.\n\n> I just thought I would point out that could get \n> messy and many projects realizing this have created a po directory off the \n> source tree to store the translations in. 
It might be worthwhile to \n> create such a directory and implement this now then when more and more \n> translations start pouring in. Just a very concrete sequential way of \n> keeping the \"real\" code visible.\n\nThere was discussion at some point going on AFAIR on the topic or\nclose to the topic, but I don't recall how it ended. But this idea is\nnot new, and I don't remember why this approach wasn't followed...\n\nOne could've created a /po dir as you are saying and have translations\nunder it with directories for every component. But I'm not sure how\nthe gettext and other tools work, so I it might be specifics of the\ntools. I still have to install them in order to use. But it's Peter\nEisentraut is in charge of NLS, so I guess he can clarify these points\nbetter than I.\n\n--\nS.\n\n", "msg_date": "Sun, 7 Oct 2001 19:48:48 -0400", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": true, "msg_subject": "Place of PO files for NLS (was Re: PG_DUMP NLS (Russian))" }, { "msg_contents": "Serguei Mokhov writes:\n\n> > I just thought I would point out that could get\n> > messy and many projects realizing this have created a po directory off the\n> > source tree to store the translations in. It might be worthwhile to\n> > create such a directory and implement this now then when more and more\n> > translations start pouring in. Just a very concrete sequential way of\n> > keeping the \"real\" code visible.\n>\n> There was discussion at some point going on AFAIR on the topic or\n> close to the topic, but I don't recall how it ended. But this idea is\n> not new, and I don't remember why this approach wasn't followed...\n\nI don't have a strong opinion about this other than \"it's easier the way\nit is now\". 
Once we get annoyed by it we can always change.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 9 Oct 2001 01:25:00 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Place of PO files for NLS (was Re: [PATCHES] PG_DUMP NLS\n\t(Russian))" }, { "msg_contents": "----- Original Message ----- \nFrom: Peter Eisentraut <peter_e@gmx.net>\nSent: Monday, October 08, 2001 7:25 PM\n\n> > > I just thought I would point out that could get\n> > > messy and many projects realizing this have created a po directory off the\n> > > source tree to store the translations in. It might be worthwhile to\n> > > create such a directory and implement this now then when more and more\n> > > translations start pouring in. Just a very concrete sequential way of\n> > > keeping the \"real\" code visible.\n> >\n> > There was discussion at some point going on AFAIR on the topic or\n> > close to the topic, but I don't recall how it ended. But this idea is\n> > not new, and I don't remember why this approach wasn't followed...\n> \n> I don't have a strong opinion about this other than \"it's easier the way\n> it is now\". Once we get annoyed by it we can always change.\n\nWell, it kind of make sense to keep all the po files under a separate directory,\nas a special kind of sub-tree, and it's easier to keep track of because all\ntranslations are in the same place, and there isn't much of \"distortion\"\nand \"noise\" for an eye of a developer looking for the source files in the source\ncode tree. For instance for psql we have already Czech, German, French, Russian,\nSwedish, and two Chinese translations, that is 6 po files, and one expects their\nnumber ot grow. They don't really belong to the source code... 
'me thinks'.\n\nHow I invision this is to have a /src/nls dir, and components' dirs under it:\n\n/src/nls/\n postgres/*.po\n pg_dump/*.po\n psql/*.po\n libpq/*.po\n anyothercomponent/*.po\n\nBut it's just me, and if it is easier to keep things the way they are\nnow, well you are the boss, but it might be harder to change them in the future\nthen if one will want to.\n\n--\nS.\n\n\n\n\n", "msg_date": "Mon, 8 Oct 2001 19:38:14 -0400", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": true, "msg_subject": "Re: Place of PO files for NLS (was Re: [PATCHES] PG_DUMP NLS\n\t(Russian))" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\n> Hello,\n> \n> Here are few more translated messages into Russian\n> for the PG_DUMP component.\n> \n> Please apply to </src/bin/pg_dump/ru.po>\n> \n> --\n> Serguei A. Mokhov\n> \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Oct 2001 13:54:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG_DUMP NLS (Russian)" }, { "msg_contents": "\nPatch applied. 
Thanks.\n\n---------------------------------------------------------------------------\n\n\n> Hello,\n> \n> Here are few more translated messages into Russian\n> for the PG_DUMP component.\n> \n> Please apply to </src/bin/pg_dump/ru.po>\n> \n> --\n> Serguei A. Mokhov\n> \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 13 Oct 2001 00:25:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG_DUMP NLS (Russian)" }, { "msg_contents": "----- Original Message ----- \nFrom: Bruce Momjian <pgman@candle.pha.pa.us>\nSent: Saturday, October 13, 2001 12:25 AM\n\n> Patch applied. Thanks.\n\nHere is another one :)\nAnother chunk of translated messages.\nPlease apply to the same file.\n\nThanks,\n\n--\nS.\n\n> > Hello,\n> > \n> > Here are few more translated messages into Russian\n> > for the PG_DUMP component.\n> > \n> > Please apply to </src/bin/pg_dump/ru.po>\n> > \n> > --\n> > Serguei A. 
Mokhov", "msg_date": "Sat, 13 Oct 2001 15:46:30 -0400", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": true, "msg_subject": "Re: PG_DUMP NLS (Russian)" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\n > ----- Original Message ----- \n> From: Bruce Momjian <pgman@candle.pha.pa.us>\n> Sent: Saturday, October 13, 2001 12:25 AM\n> \n> > Patch applied. Thanks.\n> \n> Here is another one :)\n> Another chunk of translated messages.\n> Please apply to the same file.\n> \n> Thanks,\n> \n> --\n> S.\n> \n> > > Hello,\n> > > \n> > > Here are few more translated messages into Russian\n> > > for the PG_DUMP component.\n> > > \n> > > Please apply to </src/bin/pg_dump/ru.po>\n> > > \n> > > --\n> > > Serguei A. Mokhov\n> \n\n[ Attachment, skipping... ]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 13 Oct 2001 15:48:57 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG_DUMP NLS (Russian)" }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\n> ----- Original Message ----- \n> From: Bruce Momjian <pgman@candle.pha.pa.us>\n> Sent: Saturday, October 13, 2001 12:25 AM\n> \n> > Patch applied. 
Thanks.\n> \n> Here is another one :)\n> Another chunk of translated messages.\n> Please apply to the same file.\n> \n> Thanks,\n> \n> --\n> S.\n> \n> > > Hello,\n> > > \n> > > Here are few more translated messages into Russian\n> > > for the PG_DUMP component.\n> > > \n> > > Please apply to </src/bin/pg_dump/ru.po>\n> > > \n> > > --\n> > > Serguei A. Mokhov\n> \n\n[ Attachment, skipping... ]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Oct 2001 22:50:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG_DUMP NLS (Russian)" } ]
[ { "msg_contents": "Hello!\n\nI would like to ask your opinion and about your intuitions\non the question: is it secure to use the cvs version\nof postgres instead of 7.1? (The more specific question\nis below...)\n\nSorry for enlarging the traffic of th elist with this\npossibly non-interesting question.\n\nTo be more precise: I would like to know if you think\nthe SUM and friends will work OK, and I do not bother\nabout possible hangups or crashes. I do not bother about\ncorrect extras at the moment, but I am very interested\nwhether the counted results can be trusted, if I use only\na very simple, basic subset of sql.\n\nThanks,\nBaldvin\n\n\n", "msg_date": "Sun, 7 Oct 2001 23:47:08 +0200 (MEST)", "msg_from": "Kovacs Baldvin <kb136@hszk.bme.hu>", "msg_from_op": true, "msg_subject": "Secure enough to use CVS version?" } ]
[ { "msg_contents": "I'm curious as to whether anybody has gotten PostgreSQL to work with a\ndatabase that lives on some sort of read-only medium...like a CD.\n\nI've looked around in the newsgroups and I've seen a comment by Bruce\nMomjian that it can't currently be done...and I've seen a different comment\nby Tom Lane that he thought that it probably could...So...I dunno.\n\nI've taken a database and set the read-only attributes on its files and\ntried to access it via psql...and couldn't...it complained about not being\nable to open pg_class.\n\nSO...I dug around through the code a little and found where the error was\ncoming from and changed the code so that if the open attempt with O_RDWR\nfails, the code tries again with O_RDONLY. This was in md.c...in the mdopen\nfunction.\n\nThis did work....I was then able to open the database and do queries and\nwhatnot. Trying to insert into the table didn't give any errors...until I\ntried to select the record back out, at which time it started giving me\nerrors such as:\n\nERROR: cannot write block 7548 of pole: Permission denied\n\nAt that point, it seems that your screwed...in that even if you shut down\npostgres and restart it, somewhere it knows that that database has data that\nneeds to be written to disk, and it refuses to continue until it does so.\n\nOTHER than that one problem...Is anyone aware of any other problems that my\nchange might cause? To be really useful, it would be necessary to go\nthrough and make additional changes so that it can recover from a failed\nwrite to the \"read-only\" database. But it seems like it would be okay as\nlong as you carefully avoid changing the database.\n\n\n", "msg_date": "Sun, 7 Oct 2001 23:20:10 -0400", "msg_from": "\"Kelly Harmon\" <kelly.harmon@byers.com>", "msg_from_op": true, "msg_subject": "Accessing Database files on a \"read-only\" medium...like a CD." }, { "msg_contents": ">\n> This did work....I was then able to open the database and do queries and\n> whatnot. 
Trying to insert into the table didn't give any errors...until I\n> tried to select the record back out, at which time it started giving me\n> errors such as:\n>\n\n\nOkay...I made a CD of a reasonably sized database....about 100MB in 3\ntables. Then I deleted the original database files from the appropriate\ndirectory and replaced the files with symbolic links to the files on the CD.\n\nTHEN I cranked up the modified PostgreSQL code and tried it out and it\nworked. I could run various select statements with no obvious troubles.\n\nSO...it is possible to run a database off of a CD...with a relatively minor\ncode change. Though, of course, you have to have to trick Postgres into it.\n\nAnd Postgres is not very forgiving if it ever figures out that it's been\ntricked...that definitely needs to be worked out.\n\nI'm very interested in hearing any other \"gotcha's\" that y'all may know of.\n\nThanks!\n\n\n", "msg_date": "Mon, 8 Oct 2001 03:46:15 -0400", "msg_from": "\"Kelly Harmon\" <kelly.harmon@byers.com>", "msg_from_op": true, "msg_subject": "Re: Accessing Database files on a \"read-only\" medium...like a CD." }, { "msg_contents": ">\n> And Postgres is not very forgiving if it ever figures out that it's been\n> tricked...that definitely needs to be worked out.\n>\n\nWhat I'm sort of leaning towards is to catch these attempted writes early\non, and not have to deal with a lot of cleaning up after the fact.\n\nI guess Postgres already supports the concept of a \"read-only\" database from\na user permissions perspective, right? So maybe take advantage of that\nexisting functionality?\n\n\n\n\n", "msg_date": "Mon, 8 Oct 2001 04:00:44 -0400", "msg_from": "\"Kelly Harmon\" <kelly.harmon@byers.com>", "msg_from_op": true, "msg_subject": "Re: Accessing Database files on a \"read-only\" medium...like a CD." }, { "msg_contents": "\nI wonder if you shut down the postmaster and restart if that would make\nit work again. 
I can't imagine where it would store table size\ninformation if the area is read-only. Adding data, full vacuum, restart\npostmaster should allow read-only databases.\n\n---------------------------------------------------------------------------\n\n> I'm curious as to whether anybody has gotten PostgreSQL to work with a\n> database that lives on some sort of read-only medium...like a CD.\n> \n> I've looked around in the newsgroups and I've seen a comment by Bruce\n> Momjian that it can't currently be done...and I've seen a different comment\n> by Tom Lane that he thought that it probably could...So...I dunno.\n> \n> I've taken a database and set the read-only attributes on its files and\n> tried to access it via psql...and couldn't...it complained about not being\n> able to open pg_class.\n> \n> SO...I dug around through the code a little and found where the error was\n> coming from and changed the code so that if the open attempt with O_RDWR\n> fails, the code tries again with O_RDONLY. This was in md.c...in the mdopen\n> function.\n> \n> This did work....I was then able to open the database and do queries and\n> whatnot. Trying to insert into the table didn't give any errors...until I\n> tried to select the record back out, at which time it started giving me\n> errors such as:\n> \n> ERROR: cannot write block 7548 of pole: Permission denied\n> \n> At that point, it seems that your screwed...in that even if you shut down\n> postgres and restart it, somewhere it knows that that database has data that\n> needs to be written to disk, and it refuses to continue until it does so.\n> \n> OTHER than that one problem...Is anyone aware of any other problems that my\n> change might cause? To be really useful, it would be necessary to go\n> through and make additional changes so that it can recover from a failed\n> write to the \"read-only\" database. 
But it seems like it would be okay as\n> long as you carefully avoid changing the database.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Oct 2001 14:21:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Accessing Database files on a \"read-only\" medium...like" } ]
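The open-with-fallback change Kelly describes for mdopen() can be sketched as follows. This is a hypothetical Python illustration of the pattern, not the actual md.c code; the `opener` parameter is added here purely so the fallback path can be exercised without a real read-only filesystem:

```python
import errno
import os

def open_maybe_readonly(path, opener=os.open):
    """Try to open read-write; if the medium refuses (EACCES for a
    read-only file, EROFS for a read-only filesystem such as a CD),
    retry read-only. Returns (fd, writable). Sketch of the mdopen()
    fallback discussed in the thread above, not PostgreSQL's code."""
    try:
        return opener(path, os.O_RDWR), True
    except OSError as exc:
        if exc.errno in (errno.EACCES, errno.EROFS):
            return opener(path, os.O_RDONLY), False
        raise
```

Note that a descriptor obtained through the fallback still fails at write() time, which is exactly the "cannot write block ... Permission denied" error quoted above — so the fallback alone does not make a database safely read-only; attempted writes have to be prevented earlier.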
[ { "msg_contents": "I'm curious as to whether anybody has gotten PostgreSQL to work with a\ndatabase that lives on some sort of read-only medium...like a CD.\n\nI've looked around in the newsgroups and I've seen a comment by Bruce\nMomjian that it can't currently be done...and I've seen a different comment\nby Tom Lane that he thought that it probably could...So...I dunno.\n\nI've taken a database and set the read-only attributes on its files and\ntried to access it via psql...and couldn't...it complained about not being\nable to open pg_class.\n\nSO...I dug around through the code a little and found where the error was\ncoming from and changed the code so that if the open attempt with O_RDWR\nfails, the code tries again with O_RDONLY. This was in md.c...in the mdopen\nfunction.\n\nThis did work....I was then able to open the database and do queries and\nwhatnot. Trying to insert into the table didn't give any errors...until I\ntried to select the record back out, at which time it started giving me\nerrors such as:\n\nERROR: cannot write block 7548 of pole: Permission denied\n\nAt that point, it seems that you're screwed...in that even if you shut down\npostgres and restart it, somewhere it knows that that database has data that\nneeds to be written to disk, and it refuses to continue until it does so.\n\nOTHER than that one problem...Is anyone aware of any other problems that my\nchange might cause? To be really useful, it would be necessary to go\nthrough and make additional changes so that it can recover from a failed\nwrite to the \"read-only\" database. But it seems like it would be okay as\nlong as you carefully avoid changing the database.\n\n\n\n\n", "msg_date": "Sun, 7 Oct 2001 23:46:14 -0400", "msg_from": "\"Kelly Harmon\" <kelly.harmon@byers.com>", "msg_from_op": true, "msg_subject": "Accessing Database files on a \"read-only\" medium...like a CD." } ]
[ { "msg_contents": "Kelly Harmon <kelly.harmon@byers.com> wrote in message news:9pr7f7$k0j$1@news.tht.net...\n> SO...I dug around through the code a little and found where the error was\n> coming from and changed the code so that if the open attempt with O_RDWR\n> fails, the code tries again with O_RDONLY. This was in md.c...in the mdopen\n> function.\n> \n> This did work....I was then able to open the database and do queries and\n> whatnot. Trying to insert into the table didn't give any errors...until I\n> tried to select the record back out, at which time it started giving me\n> errors such as:\n> \n> ERROR: cannot write block 7548 of pole: Permission denied\n> \n> At that point, it seems that your screwed...in that even if you shut down\n> postgres and restart it, somewhere it knows that that database has data that\n> needs to be written to disk, and it refuses to continue until it does so.\n\nIsn't it the WAL who 'remembers' this info?\n\n--\nSerguei A. Mokhov\n \n\n", "msg_date": "Mon, 8 Oct 2001 00:23:18 -0400", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": true, "msg_subject": "Re: Accessing Database files on a \"read-only\" medium...like a CD." }, { "msg_contents": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca> writes:\n> Kelly Harmon <kelly.harmon@byers.com> wrote in message news:9pr7f7$k0j$1@news.tht.net...\n>> At that point, it seems that your screwed...in that even if you shut down\n>> postgres and restart it, somewhere it knows that that database has data that\n>> needs to be written to disk, and it refuses to continue until it does so.\n\n> Isn't it the WAL who 'remembers' this info?\n\nBoth WAL and pg_log *must* be on writable media, so there's really no\nchance of putting the whole of a $PGDATA tree onto a CD. However one\ncould imagine putting individual databases (or even individual tables)\nonto CD. 
One thing you'd have to watch out for is that Postgres\nmay try to update on-row commit status bits even during a read-only\noperation such as SELECT. The best way to deal with that would be to\nVACUUM the table or database before moving it to read-only storage.\nVACUUM would leave the status bits all set correctly.\n\nWe've talked repeatedly about implementing a notion of tablespaces\nto allow DBAs to exercise more control over where tables are kept.\nMaybe it'd make sense to allow tablespaces to be marked read-only, too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Oct 2001 12:12:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Accessing Database files on a \"read-only\" medium...like a CD. " } ]
[ { "msg_contents": "> I've made some inroads towards adding 'ignore duplicates'\n> functionality to PostgreSQL's COPY command. I've updated the parser\n> grammar for COPY FROM to now accept:\n> \n> COPY [ BINARY ] table [ WITH OIDS ]\n> FROM { 'filename' | stdin }\n> [ [USING] DELIMITERS 'delimiter' ]\n> [ WITH [NULL AS 'null string']\n> [IGNORE DUPLICATES] ]\n\nIs there any possibility that COPY could insert a row so the old row\nwill be discarded and a new one inserted instead of the old one? It could be\nuseful for doing replication on tables with modified rows.\n\n\n\t\t\tDan\n", "msg_date": "Mon, 8 Oct 2001 14:55:49 +0200", "msg_from": "Horak Daniel <horak@sit.plzen-city.cz>", "msg_from_op": true, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates?" } ]
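Setting Daniel's replace-on-duplicate idea aside, the "ignore duplicates" behaviour the proposed grammar describes can be mimicked client-side by pre-filtering the load stream before handing it to COPY. A hypothetical sketch (the actual patch does this inside the COPY command itself; `key_index` is an assumption for the demo):

```python
def ignore_duplicates(rows, key_index=0):
    """Yield each row whose key has not been seen before, silently
    dropping later rows with the same key -- the behaviour the proposed
    COPY ... IGNORE DUPLICATES option would apply server-side.
    Client-side illustration only."""
    seen = set()
    for row in rows:
        key = row[key_index]
        if key not in seen:
            seen.add(key)
            yield row
```

Daniel's variant — discard the old row and keep the new one — is a different operation: it keeps the *last* row per key, which a single streaming pass cannot do without buffering the whole input.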
[ { "msg_contents": "> Hi,\n> \n> It apears that getting Postgres and OSX 10.1 to work is not just a\n> case of some compiler flags.\n> \n> I have attached a patch, not sure who wrote this patch, but it seems\n> to work for me!\n> \n> I am asuming that the author has submitted it to the pgsql team, but\n> if not here it is.\n> \nHave fun,\n> Serge\n> \nP.S. I give NO guarantees, like I said... I did not write this!\n\n----cut ----\n\n> diff -ru postgresql-7.1.3/src/Makefile.shlib\n> postgresql-7.1.3-posix/src/Makefile.shlib\n> --- postgresql-7.1.3/src/Makefile.shlib\tSun Apr 15 05:25:07 2001\n> +++ postgresql-7.1.3-posix/src/Makefile.shlib\tWed Sep 19 23:00:08 2001\n> @@ -113,7 +113,7 @@\n> \n> ifeq ($(PORTNAME), darwin)\n> shlib\t\t\t:=\n> lib$(NAME)$(DLSUFFIX).$(SO_MAJOR_VERSION).$(SO_MINOR_VERSION)\n> - LINK.shared\t\t= $(COMPILER) $(CFLAGS_SL)\n> + LINK.shared\t\t= $(COMPILER)\n> endif\n> \n> ifeq ($(PORTNAME), openbsd)\n> diff -ru postgresql-7.1.3/src/backend/storage/ipc/ipc.c\n> postgresql-7.1.3-posix/src/backend/storage/ipc/ipc.c\n> --- postgresql-7.1.3/src/backend/storage/ipc/ipc.c\tFri Mar 23\n> 05:49:54 2001\n> +++ postgresql-7.1.3-posix/src/backend/storage/ipc/ipc.c\tWed Sep\n> 19 23:09:06 2001\n> @@ -29,10 +29,20 @@\n> \n> #include <sys/types.h>\n> #include <sys/file.h>\n> +#define POSIX_SHARED_MEMORY\n> +#ifdef POSIX_SHARED_MEMORY\n> +#include <sys/stat.h>\n> +#include <sys/mman.h>\n> +#endif\n> #include <errno.h>\n> #include <signal.h>\n> #include <unistd.h>\n> \n> +#ifdef POSIX_SHARED_MEMORY\n> +#define IpcMemoryId unsigned int\n> +#define IpcMemoryKey unsigned int\n> +#endif\n> +\n> #include \"storage/ipc.h\"\n> #include \"storage/s_lock.h\"\n> /* In Ultrix, sem.h and shm.h must be included AFTER ipc.h */\n> @@ -77,6 +87,13 @@\n> static void *PrivateMemoryCreate(uint32 size);\n> static void PrivateMemoryDelete(int status, Datum memaddr);\n> \n> +#ifdef POSIX_SHARED_MEMORY\n> +uint32 posix_shmget(uint32 key, uint32 size, int permissions);\n> +void 
*posix_shmat(uint32 id);\n> +uint32 posix_shm_count(uint32 id);\n> +void decrement_posix_shm_count(void *address);\n> +int posix_shmrm(uint32 id);\n> +#endif\n> \n> /* ----------------------------------------------------------------\n> *\t\t\t\t\t\texit() handling stuff\n> @@ -265,6 +282,9 @@\n> * print out an error and abort. Other types of errors are not\n> recoverable.\n> * ----------------------------------------------------------------\n> */\n> +#ifdef POSIX_SHARED_MEMORY\n> +#define shmget(a, b, c) posix_shmget(a,b,c)\n> +#endif\n> static IpcSemaphoreId\n> InternalIpcSemaphoreCreate(IpcSemaphoreKey semKey,\n> \t\t\t\t\t\t int numSems, int\n> permission,\n> @@ -620,7 +640,11 @@\n> \ton_shmem_exit(IpcMemoryDelete, Int32GetDatum(shmid));\n> \n> \t/* OK, should be able to attach to the segment */\n> +#ifdef POSIX_SHARED_MEMORY\n> +\tmemAddress = posix_shmat(shmid);\n> +#else\n> \tmemAddress = shmat(shmid, 0, 0);\n> +#endif\n> \n> \tif (memAddress == (void *) -1)\n> \t{\n> @@ -646,10 +670,13 @@\n> static void\n> IpcMemoryDetach(int status, Datum shmaddr)\n> {\n> +#ifndef POSIX_SHARED_MEMORY\n> \tif (shmdt(DatumGetPointer(shmaddr)) < 0)\n> \t\tfprintf(stderr, \"IpcMemoryDetach: shmdt(%p) failed:\n> %s\\n\",\n> \t\t\t\tDatumGetPointer(shmaddr),\n> strerror(errno));\n> -\n> +#else\n> +\tdecrement_posix_shm_count(DatumGetPointer(shmaddr));\n> +#endif\n> \t/*\n> \t * We used to report a failure via elog(NOTICE), but that's\n> pretty\n> \t * pointless considering any client has long since disconnected\n> ...\n> @@ -663,10 +690,13 @@\n> static void\n> IpcMemoryDelete(int status, Datum shmId)\n> {\n> +#ifdef POSIX_SHARED_MEMORY\n> +\tif (posix_shmrm(DatumGetInt32(shmId)) == -1)\n> +#else\n> \tif (shmctl(DatumGetInt32(shmId), IPC_RMID, (struct shmid_ds *)\n> NULL) < 0)\n> +#endif\n> \t\tfprintf(stderr, \"IpcMemoryDelete: shmctl(%d, %d, 0)\n> failed: %s\\n\",\n> \t\t\t\tDatumGetInt32(shmId), IPC_RMID,\n> strerror(errno));\n> -\n> \t/*\n> \t * We used to report a failure via 
elog(NOTICE), but that's\n> pretty\n> \t * pointless considering any client has long since disconnected\n> ...\n> @@ -679,8 +709,9 @@\n> bool\n> SharedMemoryIsInUse(IpcMemoryKey shmKey, IpcMemoryId shmId)\n> {\n> +#ifndef POSIX_SHARED_MEMORY\n> \tstruct shmid_ds shmStat;\n> -\n> +#endif\n> \t/*\n> \t * We detect whether a shared memory segment is in use by seeing\n> \t * whether it (a) exists and (b) has any processes are attached\n> to it.\n> @@ -689,6 +720,9 @@\n> \t * nonexistence of the segment (most likely, because it doesn't\n> belong\n> \t * to our userid), assume it is in use.\n> \t */\n> +#ifdef POSIX_SHARED_MEMORY\n> +\treturn (posix_shm_count(DatumGetInt32(shmId)) != 0);\n> +#else\n> \tif (shmctl(shmId, IPC_STAT, &shmStat) < 0)\n> \t{\n> \n> @@ -706,6 +740,7 @@\n> \tif (shmStat.shm_nattch != 0)\n> \t\treturn true;\n> \treturn false;\n> +#endif\n> }\n> \n> \n> @@ -801,9 +836,17 @@\n> \t\tshmid = shmget(NextShmemSegID, sizeof(PGShmemHeader),\n> 0);\n> \t\tif (shmid < 0)\n> \t\t\tcontinue;\t\t\t/* failed: must\n> be some other app's */\n> +#ifdef POSIX_SHARED_MEMORY\n> +\t\tmemAddress = posix_shmat(shmid);\n> +#else\n> \t\tmemAddress = shmat(shmid, 0, 0);\n> +#endif\n> \t\tif (memAddress == (void *) -1)\n> \t\t\tcontinue;\t\t\t/* failed: must\n> be some other app's */\n> +#ifdef POSIX_SHARED_MEMORY\n> +\t\tif (memAddress == NULL) continue;\n> +\t\telse break;\n> +#else\n> \t\thdr = (PGShmemHeader *) memAddress;\n> \t\tif (hdr->magic != PGShmemMagic)\n> \t\t{\n> @@ -848,6 +891,7 @@\n> \t\t * same shmem key before we did. 
Let him have that one,\n> loop\n> \t\t * around to try next key.\n> \t\t */\n> +#endif\n> \t}\n> \n> \t/*\n> @@ -966,3 +1010,124 @@\n> \n> \treturn semId;\n> }\n> +\n> +#ifdef POSIX_SHARED_MEMORY\n> +\n> +#define PSM_MAX_SEGS 10\n> +int psm_initted = 0;\n> +struct psm_map_ent {\n> +\tint valid;\n> +\tint32 id;\n> +\tvoid *address;\n> +\tint size;\n> +\tint count;\n> +\tint fd;\n> +};\n> +\n> +struct psm_map_ent map_array[PSM_MAX_SEGS];\n> +\n> +uint32 posix_shmget(uint32 key, uint32 size, int permissions)\n> +{\n> +int i;\n> +char name[32];\n> +\n> + /* Initialize structure if not already initted */\n> + if (!psm_initted) {\n> +\tfor (i=0; i<PSM_MAX_SEGS; i++) {\n> +\t map_array[i].valid = 0;\n> +\t map_array[i].id = -1;\n> +\t map_array[i].address = NULL;\n> +\t map_array[i].count = 0;\n> +\t map_array[i].size = 0;\n> +\t map_array[i].fd = -1;\n> +\t}\n> + }\n> + for (i=0; i<PSM_MAX_SEGS; i++) {\n> +\tif (!map_array[i].valid) break;\n> + }\n> + if (map_array[i].valid) return -1;\n> +\n> + /* Here's where we do the real work */\n> + sprintf(name, \"psm_%d\", key);\n> + map_array[i].fd = shm_open(name, (O_CREAT | O_RDWR | O_TRUNC),\n> +\t(S_IRUSR | S_IWUSR | S_IWGRP | S_IRGRP));\n> + map_array[i].size = size;\n> + map_array[i].id = key;\n> +\n> + return i;\n> +}\n> +\n> +void *posix_shmat(uint32 id)\n> +{\n> + int i;\n> +\n> +#if 0\n> + for (i=0; i<PSM_MAX_SEGS; i++)\n> +\tif (map_array[i].id == id) break;\n> +\n> + if (map_array[i].id == id)\n> +\treturn NULL;\n> +#else\n> +i = id;\n> +if (i == -1) return NULL;\n> +#endif\n> +\n> + map_array[i].address = mmap(NULL, map_array[i].size,\n> +\t(PROT_READ | PROT_WRITE), \n> +\t(MAP_ANON | MAP_INHERIT | MAP_SHARED), map_array[i].fd, 0);\n> + if (map_array[i].address == -1) {\n> +\tperror(\"posix_shmat\");\n> +\treturn NULL;\n> + }\n> + if (map_array[i].address != 0) {\n> +\tmap_array[i].valid = 1;\n> +\tmap_array[i].count = 1;\n> + return (map_array[i].address);\n> + } else {\n> +\treturn NULL;\n> + }\n> +}\n> +\n> 
+uint32 posix_shm_count(uint32 id) {\n> + int i;\n> +\n> +#if 0\n> + for (i=0; i<PSM_MAX_SEGS; i++) {\n> +\tif (map_array[i].id == id)\n> +\t\treturn map_array[i].count;\n> + }\n> + return -1;\n> +#else\n> + i = id;\n> + if (i == -1) return -1;\n> + return map_array[i].count;\n> +#endif\n> +}\n> +\n> +void decrement_posix_shm_count(void *address)\n> +{\n> + int i;\n> +\n> + for (i=0; i<PSM_MAX_SEGS; i++) {\n> +\tif (map_array[i].address == address) break;\n> + }\n> +\n> + if (map_array[i].address == address) {\n> +\tmap_array[i].count --;\n> + }\n> + if (map_array[i].count < 0) {\n> +\t/* This should never happen.... */\n> +\tfprintf(stderr, \"AIEEEEEE! Map count < 0 in\n> decrement_posix_shm_count!\\n\");\n> + }\n> +}\n> +\n> +int posix_shmrm(uint32 id)\n> +{\n> +char name[32];\n> +\n> + sprintf(name, \"psm_%d\", id);\n> +\n> + return shm_unlink(name);\n> +}\n> +\n> +#endif\n> diff -ru postgresql-7.1.3/src/makefiles/Makefile.darwin\n> postgresql-7.1.3-posix/src/makefiles/Makefile.darwin\n> --- postgresql-7.1.3/src/makefiles/Makefile.darwin\tMon Dec 11\n> 01:49:52 2000\n> +++ postgresql-7.1.3-posix/src/makefiles/Makefile.darwin\tWed Sep\n> 19 22:59:24 2001\n> @@ -2,7 +2,7 @@\n> AWK= awk\n> \n> DLSUFFIX = .so\n> -CFLAGS_SL = -bundle -undefined suppress\n> +CFLAGS_SL = -bundle -flat_namespace -undefined suppress\n> \n> %.so: %.o\n> \t$(CC) $(CFLAGS) $(CFLAGS_SL) -o $@ $<\n> \n> \n\n\n\n", "msg_date": "Mon, 8 Oct 2001 14:53:43 +0100", "msg_from": "\"Serge Sozonoff\" <serge@globalbeach.com>", "msg_from_op": true, "msg_subject": "Patch for OSX 10.1 and Postgresql 7.3.1" }, { "msg_contents": "\"Serge Sozonoff\" <serge@globalbeach.com> writes:\n>> I have attached a patch, not sure who wrote this patch, but it seems\n>> to work for me!\n>> I am asuming that the author has submitted it to the pgsql team, but\n>> if not here it is.\n\nIt has not been submitted, and it certainly won't get accepted as-is\n(it appears to unconditionally insert Darwin-specific code into 
ipc.c,\nand even without that I'm leery of applying patches from unknown\nsources). Please find the author and ask him to contact us.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Oct 2001 01:15:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch for OSX 10.1 and Postgresql 7.3.1 " } ]
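The patch above swaps System V shmget()/shmat() for POSIX shm_open() plus mmap(). The property it relies on — two mappings of one descriptor referring to the same memory — can be demonstrated with a small Python sketch. An ordinary temporary file stands in for a named shared-memory segment here, so this illustrates only the MAP_SHARED semantics, not shm_open() itself. (One detail worth verifying in the patch: it passes MAP_ANON together with the shm_open() descriptor, and on many platforms MAP_ANON makes mmap() ignore the descriptor entirely, which would defeat sharing between unrelated processes.)

```python
import mmap
import os
import tempfile

def shared_mapping_demo(size=4096):
    """Write through one MAP_SHARED mapping and read the bytes back
    through a second, independent mapping of the same descriptor.
    Returns the bytes seen by the second mapping."""
    fd, path = tempfile.mkstemp()
    try:
        os.ftruncate(fd, size)
        writer = mmap.mmap(fd, size, mmap.MAP_SHARED)
        reader = mmap.mmap(fd, size, mmap.MAP_SHARED)
        writer[0:5] = b"magic"   # store through one mapping...
        seen = reader[0:5]       # ...and it is visible through the other
        writer.close()
        reader.close()
        return seen
    finally:
        os.close(fd)
        os.unlink(path)
```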
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Jean-Michel POURE [mailto:jm.poure@freesurf.fr] \n> Sent: 08 October 2001 14:43\n> To: pgsql-hackers@postgresql.org\n> Cc: Tom Lane; Bruce Momjian; pgadmin-hackers@postgresql.org\n> Subject: Re: [pgadmin-hackers] [HACKERS] What about CREATE OR \n> REPLACE FUNCTION? \n> \n> \n> Dear all,\n> \n> 1) CREATE OR REPLACE FUNCTION\n> In pgAdmin II, we plan to use the CREATE OR REPLACE FUNCTION \n> if the patch \n> is applied. Do you know if there is any chance it be applied \n> for beta time? \n> We would very much appreciate this feature...\n\nIt's already done in pgAdmin CVS (committed this morning) and I believe\nBruce committed the patch to PostgreSQL on 2nd October. I just haven't\ntested it yet as I can't find an up-to-date snapshot and I don't know the\nmagic that has to be worked on the PostgreSQL CVS version of the configure\nscript in order to make it run without barfing.\n\n> 2) PL/pgSQL default support\n> It is sometimes tricky for Windows users to install a \n> language remotely on \n> a Linux box (no access to createlang and/or no knowledge of \n> handlers). So \n> why not enable PL/pgSQL by default?\n\n2nd 'ed!\n\nRegards, Dave.\n", "msg_date": "Mon, 8 Oct 2001 15:02:17 +0100 ", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] What about CREATE OR REPLACE FUNC" }, { "msg_contents": "Dave Page <dpage@vale-housing.co.uk> writes:\n> ... I can't find an up-to-date snapshot\n\nWhere have you looked? 
I checked a couple of FTP mirrors at random and\nsee up-to-date snapshots, eg at ftp://ftp.us.postgresql.org/dev/\nftp://postgresql.wavefire.com/pub/dev/\nftp://postgresql.rmplc.co.uk/pub/postgresql/dev/\nall of which have snapshots dated Sun Oct 7 08:02:00 2001 as I write.\n\n> and I don't know the\n> magic that has to be worked on the PostgreSQL CVS version of the configure\n> script in order to make it run without barfing.\n\nNews to me that it requires any magic at all; I use it almost daily\nwithout problems. Why doesn't it work for you?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Oct 2001 10:13:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What about CREATE OR REPLACE FUNC TION? " } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 08 October 2001 15:13\n> To: Dave Page\n> Cc: 'Jean-Michel POURE'; pgsql-hackers@postgresql.org; Bruce \n> Momjian; pgadmin-hackers@postgresql.org\n> Subject: Re: [pgadmin-hackers] [HACKERS] What about CREATE OR \n> REPLACE FUNC TION? \n> \n> \n> Dave Page <dpage@vale-housing.co.uk> writes:\n> > ... I can't find an up-to-date snapshot\n> \n> Where have you looked? I checked a couple of FTP mirrors at \n> random and see up-to-date snapshots, eg at \n> ftp://ftp.us.postgresql.org/dev/ \n> ftp://postgresql.wavefire.com/pub/dev/\n> ftp://postgresql.rmplc.co.uk/pub/postgresql/dev/\n> all of which have snapshots dated Sun Oct 7 08:02:00 2001 as I write.\n\nI tried postgresql.rmplc.co.uk and got one (apparently) dated 7 Oct, however\nCREATE OR REPLACE FUNCTION didn't seem to be there (it certainly doesn't\nwork anyway - syntax error at OR). I then looked in the primary copy on\nmail.postgresql.org and found the copy there was dated 30 Sept from which I\nassumed that the 07/10/2001 date on rm's copy was actually a US date - that\nsite has been seriously out of date before.\n\n> and I don't know the\n> magic that has to be worked on the PostgreSQL CVS version of the \n> configure script in order to make it run without barfing.\n\nNews to me that it requires any magic at all; I use it almost daily without\nproblems. 
Why doesn't it work for you?\n\nI've tried it a few times and I always get something like:\n\nroot@tux1:/usr/local/src/pgsql# ./configure\nsu: ./configure: bad interpreter: No such file or directory\nroot@tux1:/usr/local/src/pgsql# sh ./configure\n: command not found\n: command not found\n: command not found\n: command not found\n: command not found\n'/configure: line 127: syntax error near unexpected token `do\n'/configure: line 127: `do\nroot@tux1:/usr/local/src/pgsql#\n\nI always assumed that something is done when the tarballs are built as the\nwork just fine on the same machine. The only odd thing I can think of is\nthat my copy of the source is maintained on my PC using WinCVS and was\nzipped/ftp'd onto a test box.\n\nRegards, Dave.\n\n", "msg_date": "Mon, 8 Oct 2001 15:27:36 +0100 ", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: [pgadmin-hackers] What about CREATE OR REPLACE FUNC" }, { "msg_contents": "Dave Page <dpage@vale-housing.co.uk> writes:\n> ... I can't find an up-to-date snapshot\n\n> I tried postgresql.rmplc.co.uk and got one (apparently) dated 7 Oct, however\n> CREATE OR REPLACE FUNCTION didn't seem to be there (it certainly doesn't\n> work anyway - syntax error at OR). 
I then looked in the primary copy on\n> mail.postgresql.org and found the copy there was dated 30 Sept from which I\n> assumed that the 07/10/2001 date on rm's copy was actually a US date - that\n> site has been seriously out of date before.\n\nI just downloaded\nftp://ftp.us.postgresql.org/dev/postgresql-snapshot.tar.gz\nwhich has a date of yesterday in the FTP archives, but actually\ncontains a snapshot from around 15 September as near as I can tell.\nLooks like something is hosed in the snapshot preparation process;\nMarc, could you take a look at it?\n\n>> and I don't know the\n>> magic that has to be worked on the PostgreSQL CVS version of the \n>> configure script in order to make it run without barfing.\n\n> I always assumed that something is done when the tarballs are built as the\n> work just fine on the same machine.\n\nNo, the tarballs should be the same as what you get from a CVS pull\nof the same date (other than not having a lot of /CVS subdirectories).\nIn fact, they're made basically by tar'ing up a CVS checkout. Please\ntry diffing configure from a tarball against one from CVS to see if you\ncan figure out what's getting munged during your CVS pull.\n\n> The only odd thing I can think of is\n> that my copy of the source is maintained on my PC using WinCVS and was\n> zipped/ftp'd onto a test box.\n\nLF vs CR/LF newlines leap to mind as a likely source of trouble...\nthough I'm not sure why that would manifest in just this way...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Oct 2001 11:42:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Daily snapshots hosed (was Re: [pgadmin-hackers] What about CREATE OR\n\tREPLACE FUNCTION?)" }, { "msg_contents": "\nokay, daily snapshots are now being generated on the new server ... right\nnow, all the mirror sites are stale while Vince does some finishing\ntouches on the mirroring scripts/cgi's ... 
once he gerts that done, then,\nfrom my perspective, we'll be ready for beta ...\n\n\nOn Mon, 8 Oct 2001, Tom Lane wrote:\n\n> Dave Page <dpage@vale-housing.co.uk> writes:\n> > ... I can't find an up-to-date snapshot\n>\n> > I tried postgresql.rmplc.co.uk and got one (apparently) dated 7 Oct, however\n> > CREATE OR REPLACE FUNCTION didn't seem to be there (it certainly doesn't\n> > work anyway - syntax error at OR). I then looked in the primary copy on\n> > mail.postgresql.org and found the copy there was dated 30 Sept from which I\n> > assumed that the 07/10/2001 date on rm's copy was actually a US date - that\n> > site has been seriously out of date before.\n>\n> I just downloaded\n> ftp://ftp.us.postgresql.org/dev/postgresql-snapshot.tar.gz\n> which has a date of yesterday in the FTP archives, but actually\n> contains a snapshot from around 15 September as near as I can tell.\n> Looks like something is hosed in the snapshot preparation process;\n> Marc, could you take a look at it?\n>\n> >> and I don't know the\n> >> magic that has to be worked on the PostgreSQL CVS version of the\n> >> configure script in order to make it run without barfing.\n>\n> > I always assumed that something is done when the tarballs are built as the\n> > work just fine on the same machine.\n>\n> No, the tarballs should be the same as what you get from a CVS pull\n> of the same date (other than not having a lot of /CVS subdirectories).\n> In fact, they're made basically by tar'ing up a CVS checkout. 
Please\n> try diffing configure from a tarball against one from CVS to see if you\n> can figure out what's getting munged during your CVS pull.\n>\n> > The only odd thing I can think of is\n> > that my copy of the source is maintained on my PC using WinCVS and was\n> > zipped/ftp'd onto a test box.\n>\n> LF vs CR/LF newlines leap to mind as a likely source of trouble...\n> though I'm not sure why that would manifest in just this way...\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Wed, 10 Oct 2001 09:05:24 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Daily snapshots hosed (was Re: [pgadmin-hackers] What" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 08 October 2001 16:43\n> To: Dave Page\n> Cc: pgsql-hackers@postgresql.org; The Hermit Hacker\n> Subject: Daily snapshots hosed (was Re: [pgadmin-hackers] \n> [HACKERS] What about CREATE OR REPLACE FUNCTION?)\n> \n> \n> Dave Page <dpage@vale-housing.co.uk> writes:\n> > ... I can't find an up-to-date snapshot\n> \n> > I tried postgresql.rmplc.co.uk and got one (apparently) \n> dated 7 Oct, \n> > however CREATE OR REPLACE FUNCTION didn't seem to be there (it \n> > certainly doesn't work anyway - syntax error at OR). I then \n> looked in \n> > the primary copy on mail.postgresql.org and found the copy \n> there was \n> > dated 30 Sept from which I assumed that the 07/10/2001 date on rm's \n> > copy was actually a US date - that site has been seriously \n> out of date \n> > before.\n> \n> I just downloaded \n> ftp://ftp.us.postgresql.org/dev/postgresql-> snapshot.tar.gz\n> \n> which has a date of yesterday in the FTP \n> archives, but actually contains a snapshot from around 15 \n> September as near as I can tell. Looks like something is \n> hosed in the snapshot preparation process; Marc, could you \n> take a look at it?\n> \n> >> and I don't know the\n> >> magic that has to be worked on the PostgreSQL CVS version of the\n> >> configure script in order to make it run without barfing.\n> \n> > I always assumed that something is done when the tarballs \n> are built as \n> > the work just fine on the same machine.\n> \n> No, the tarballs should be the same as what you get from a \n> CVS pull of the same date (other than not having a lot of \n> /CVS subdirectories). In fact, they're made basically by \n> tar'ing up a CVS checkout. 
Please try diffing configure from \n> a tarball against one from CVS to see if you can figure out \n> what's getting munged during your CVS pull.\n> \n> > The only odd thing I can think of is\n> > that my copy of the source is maintained on my PC using \n> WinCVS and was \n> > zipped/ftp'd onto a test box.\n> \n> LF vs CR/LF newlines leap to mind as a likely source of \n> trouble... though I'm not sure why that would manifest in \n> just this way...\n\nThis does appear to be the case, though where they came from I don't know!\nMy best guess is that WinCVS thought they'd be useful as I'm working on\nWindows. Actually that would explain the issue we had with the ODBC driver\nMSVC++ makefile some time ago...\n\nAnyway, thanks Tom,\n\nDave.\n", "msg_date": "Mon, 8 Oct 2001 16:51:32 +0100 ", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: Daily snapshots hosed (was Re: [pgadmin-hackers] Wh" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Dave Page \n> Sent: 08 October 2001 16:52\n> To: 'Tom Lane'; Dave Page\n> Cc: pgsql-hackers@postgresql.org; The Hermit Hacker\n> Subject: RE: Daily snapshots hosed (was Re: [pgadmin-hackers] \n> [HACKERS] What about CREATE OR REPLACE FUNCTION?)\n>\n> > -----Original Message-----\n> > From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> > Sent: 08 October 2001 16:43\n> > To: Dave Page\n> > Cc: pgsql-hackers@postgresql.org; The Hermit Hacker\n> > Subject: Daily snapshots hosed (was Re: [pgadmin-hackers] \n> > [HACKERS] What about CREATE OR REPLACE FUNCTION?)\n> > \n> > LF vs CR/LF newlines leap to mind as a likely source of\n> > trouble... though I'm not sure why that would manifest in \n> > just this way...\n> \n> This does appear to be the case, though where they came from \n> I don't know! My best guess is that WinCVS thought they'd be \n> useful as I'm working on Windows. Actually that would explain \n> the issue we had with the ODBC driver MSVC++ makefile some time ago...\n> \n\nUpdate: I just found a conveniently tucked away option in WinCVs that tells\nit to use Unix LFs - it must be off by default.\n\nCheers, Dave.\n", "msg_date": "Mon, 8 Oct 2001 16:57:16 +0100 ", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: Daily snapshots hosed (was Re: [pgadmin-hackers] Wh" } ]
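For reference, the "bad interpreter" failure reported earlier in this conversation is the classic symptom of CR/LF line endings in a shell script: the kernel reads `#!/bin/sh` followed by a carriage return and looks for an interpreter with that literal trailing character in its name. The repair that WinCVS's Unix-line-endings checkout option performs amounts to the following (hypothetical helper, not WinCVS internals):

```python
def to_unix_newlines(data: bytes) -> bytes:
    """Normalize CR/LF (and stray lone CR) line endings to bare LF --
    the conversion a Unix-line-endings checkout option applies so that
    scripts like `configure` keep a valid #! line."""
    return data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")
```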
[ { "msg_contents": "I changed the number of shared buffers to 3000 and my database locks\non a simple query. I must kill the database with pg_ctl stop -m i. \nNeither \"smart\" nor \"fast\" stops appear to succeed. One CPU gets\npinned.\n\nWhen I set the number of shared buffers to 64 everything is fine. No\ndata appears to have been corrupted, but I haven't been able to do a\nthorough check. I really didn't have a chance to do much of a\npostmortem. I only have what I can get from my logs:\n - schema\n - query\n - query plan\n - vacuum results\n - postgres log\n\nIf anybody wants more data I can reproduce the problem. It is very\nrepeatable. Also, I prefer to discover what went wrong rather than\nsimply upgrade to 7.1.3 and hope for the best. None of the\nfixes/enhancements listed in 7.1.3 seem relevant to this problem.\n\nI am running postgres 7.1.2 on Solaris 5.7 E450, 4 processors, 2 gig\nram.\n\nI added the following lines to /etc/system:\n set shmsys:shminfo_shmmax=0x10000000\n set shmsys:shminfo_shmmin=1\n set shmsys:shminfo_shmmni=256\n set shmsys:shminfo_shmseg=256\n set semsys:seminfo_semmap=256\n set semsys:seminfo_semmni=512\n set semsys:seminfo_semmsl=32\n set semsys:seminfo_semmns=512\n set rlim_fd_max=65535\n set rlim_fd_cur=65535\n\nHere is the query that locks the DB:\n select i8, i5, count(*), sum( float8mi(d2,d1) ), sum(i9)::int4 as\ndoll from\n calls_1001548800 group by i5, i8;\ni8 has about 5 unique values.\ni5 has two unique values (boolean) plus NULL.\n\nThe query was entered into psql by hand.\n\n\nHere is the table def:\n CREATE TABLE calls_1001548800 (\n pk\tint4 primary key,\n t1\ttext NOT NULL,\n t2\ttext NOT NULL,\n i1\tint4 NOT NULL,\n i2\tint4 NOT NULL,\n d1\tdouble precision NOT NULL,\n d2\tdouble precision NOT NULL,\n d3\tdouble precision NOT NULL,\n d4\tdouble precision NOT NULL,\n i3\tint4 NOT NULL,\n d5\treal NOT NULL,\n i4\tint4 NOT NULL,\n t3\ttext,\n i5\tboolean,\n i6\tint4 NOT NULL,\n i7\tint4 NOT NULL,\n i8\tint2 NOT 
NULL,\n i9\tint4,\n t4\ttext\n );\n CREATE INDEX calls_1001548800_d21 on calls_1001548800 (\nfloat8mi(d2,d1) );\n CREATE INDEX calls_1001548800_i3 on calls_1001548800 ( i3 );\n CREATE INDEX calls_1001548800_i9 on calls_1001548800 ( i9 );\n\n\nVacuum of the table (before the query):\n\n5681:DEBUG: --Relation calls_1001548800--\n5682:DEBUG: Pages 915: Changed 54, reaped 559, Empty 0, New 0; Tup\n33842: Vac 1077, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 1\n42, MaxLen 648; Re-using: Free/Avail. Space 279912/279912;\nEndEmpty/Avail. Pages 0/559. CPU 0.00s/0.11u sec.\n5683:DEBUG: Index calls_1001548800_pkey: Pages 179; Tuples 33842:\nDeleted 1077. CPU 0.00s/0.66u sec.\n5684:DEBUG: Index calls_1001548800_d21: Pages 95; Tuples 33842:\nDeleted 1077. CPU 0.00s/0.70u sec.\n5685:DEBUG: XLogWrite: new log file created - consider increasing\nWAL_FILES\n5686:DEBUG: Index calls_1001548800_i3: Pages 98; Tuples 33842:\nDeleted 1077. CPU 0.39s/0.78u sec.\n5687:DEBUG: Index calls_1001548800_i9: Pages 100; Tuples 33842:\nDeleted 1077. CPU 0.03s/0.66u sec.\n5688:DEBUG: Rel calls_1001548800: Pages: 915 --> 886; Tuple(s) moved:\n1122. CPU 0.00s/1.16u sec.\n5689:DEBUG: Index calls_1001548800_pkey: Pages 179; Tuples 33842:\nDeleted 1122. CPU 0.00s/0.38u sec.\n5690:DEBUG: Index calls_1001548800_d21: Pages 95; Tuples 33842:\nDeleted 1122. CPU 0.00s/0.40u sec.\n5691:DEBUG: Index calls_1001548800_i3: Pages 102; Tuples 33842:\nDeleted 1122. CPU 0.01s/0.39u sec.\n5692:DEBUG: Index calls_1001548800_i9: Pages 104; Tuples 33842:\nDeleted 1122. 
CPU 0.00s/0.40u sec.\n\n\nHere is a snippet of the log file showing the query plan and the\nevents after the query.\n\n7553:NOTICE: QUERY PLAN:\n7554:\n7555:Aggregate (cost=3770.44..4108.86 rows=3384 width=19)\n7556: -> Group (cost=3770.44..3939.65 rows=33842 width=19)\n7557: -> Sort (cost=3770.44..3770.44 rows=33842 width=19)\n7558: -> Seq Scan on calls_1001548800 \n(cost=0.00..1224.42 rows=33\n842 width=19)\n7559:\n7560:Smart Shutdown request at Tue Oct 2 18:34:08 2001\n7561:Fast Shutdown request at Tue Oct 2 18:35:03 2001\n7562:Aborting any active transaction...\n7563:FATAL 1: This connection has been terminated by the\nadministrator.\n7564:Immediate Shutdown request at Tue Oct 2 18:39:47 2001\n7565:NOTICE: Message from PostgreSQL backend:\n7566: The Postmaster has informed me that some other backend died\nabnormally\nand possibly corrupted shared memory.\n7567: I have rolled back the current transaction and am going\nto termina\nte your database system connection and exit.\n7568: Please reconnect to the database system and repeat your query.\n7569:DEBUG: database system was interrupted at 2001-10-02 18:11:06\nPDT\n7570:DEBUG: CheckPoint record at (39, 1811142680)\n7571:DEBUG: Redo record at (39, 1811142680); Undo record at (0, 0);\nShutdown FA\nLSE\n7572:DEBUG: NextTransactionId: 2086582; NextOid: 134710037\n7573:DEBUG: database system was not properly shut down; automatic\nrecovery in p\nrogress...\n7574:DEBUG: ReadRecord: record with zero len at (39, 1811142744)\n7575:DEBUG: redo is not required\n7576:DEBUG: database system is in production state\n\n\nRyan\n", "msg_date": "8 Oct 2001 12:28:20 -0700", "msg_from": "ryan_rs@c4.com (Ryan)", "msg_from_op": true, "msg_subject": "Postgres server locks up" }, { "msg_contents": "Tom, Bruce, any suggestions?\n\nryan_rs@c4.com (Ryan) wrote in message news:<7299ab58.0110081128.50cf8fba@posting.google.com>...\n> I changed the number of shared buffers to 3000 and my database locks\n> on a simple query. 
I must kill the database with pg_ctl stop -m i. \n> Neither \"smart\" nor \"fast\" stops appear to succeed. One CPU gets\n> pinned.\n> \n> When I set the number of shared buffers to 64 everything is fine. No\n> data appears to have been corrupted, but I haven't been able to do a\n> thorough check. I really didn't have a chance to do much of a\n> postmortem. I only have what I can get from my logs:\n> - schema\n> - query\n> - query plan\n> - vacuum results\n> - postgres log\n> \n> If anybody wants more data I can reproduce the problem. It is very\n> repeatable. Also, I prefer to discover what went wrong rather than\n> simply upgrade to 7.1.3 and hope for the best. None of the\n> fixes/enhancements listed in 7.1.3 seem relevant to this problem.\n> \n> I am running postgres 7.1.2 on Solaris 5.7 E450, 4 processors, 2 gig\n> ram.\n> \n> I added the following lines to /etc/system:\n> set shmsys:shminfo_shmmax=0x10000000\n> set shmsys:shminfo_shmmin=1\n> set shmsys:shminfo_shmmni=256\n> set shmsys:shminfo_shmseg=256\n> set semsys:seminfo_semmap=256\n> set semsys:seminfo_semmni=512\n> set semsys:seminfo_semmsl=32\n> set semsys:seminfo_semmns=512\n> set rlim_fd_max=65535\n> set rlim_fd_cur=65535\n> \n> Here is the query that locks the DB:\n> select i8, i5, count(*), sum( float8mi(d2,d1) ), sum(i9)::int4 as\n> doll from\n> calls_1001548800 group by i5, i8;\n> i8 has about 5 unique values.\n> i5 has two unique values (boolean) plus NULL.\n> \n> The query was entered into psql by hand.\n> \n> \n> Here is the table def:\n> CREATE TABLE calls_1001548800 (\n> pk\tint4 primary key,\n> t1\ttext NOT NULL,\n> t2\ttext NOT NULL,\n> i1\tint4 NOT NULL,\n> i2\tint4 NOT NULL,\n> d1\tdouble precision NOT NULL,\n> d2\tdouble precision NOT NULL,\n> d3\tdouble precision NOT NULL,\n> d4\tdouble precision NOT NULL,\n> i3\tint4 NOT NULL,\n> d5\treal NOT NULL,\n> i4\tint4 NOT NULL,\n> t3\ttext,\n> i5\tboolean,\n> i6\tint4 NOT NULL,\n> i7\tint4 NOT NULL,\n> i8\tint2 NOT NULL,\n> i9\tint4,\n> 
t4\ttext\n> );\n> CREATE INDEX calls_1001548800_d21 on calls_1001548800 (\n> float8mi(d2,d1) );\n> CREATE INDEX calls_1001548800_i3 on calls_1001548800 ( i3 );\n> CREATE INDEX calls_1001548800_i9 on calls_1001548800 ( i9 );\n> \n> \n> Vacuum of the table (before the query):\n> \n> 5681:DEBUG: --Relation calls_1001548800--\n> 5682:DEBUG: Pages 915: Changed 54, reaped 559, Empty 0, New 0; Tup\n> 33842: Vac 1077, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 1\n> 42, MaxLen 648; Re-using: Free/Avail. Space 279912/279912;\n> EndEmpty/Avail. Pages 0/559. CPU 0.00s/0.11u sec.\n> 5683:DEBUG: Index calls_1001548800_pkey: Pages 179; Tuples 33842:\n> Deleted 1077. CPU 0.00s/0.66u sec.\n> 5684:DEBUG: Index calls_1001548800_d21: Pages 95; Tuples 33842:\n> Deleted 1077. CPU 0.00s/0.70u sec.\n> 5685:DEBUG: XLogWrite: new log file created - consider increasing\n> WAL_FILES\n> 5686:DEBUG: Index calls_1001548800_i3: Pages 98; Tuples 33842:\n> Deleted 1077. CPU 0.39s/0.78u sec.\n> 5687:DEBUG: Index calls_1001548800_i9: Pages 100; Tuples 33842:\n> Deleted 1077. CPU 0.03s/0.66u sec.\n> 5688:DEBUG: Rel calls_1001548800: Pages: 915 --> 886; Tuple(s) moved:\n> 1122. CPU 0.00s/1.16u sec.\n> 5689:DEBUG: Index calls_1001548800_pkey: Pages 179; Tuples 33842:\n> Deleted 1122. CPU 0.00s/0.38u sec.\n> 5690:DEBUG: Index calls_1001548800_d21: Pages 95; Tuples 33842:\n> Deleted 1122. CPU 0.00s/0.40u sec.\n> 5691:DEBUG: Index calls_1001548800_i3: Pages 102; Tuples 33842:\n> Deleted 1122. CPU 0.01s/0.39u sec.\n> 5692:DEBUG: Index calls_1001548800_i9: Pages 104; Tuples 33842:\n> Deleted 1122. 
CPU 0.00s/0.40u sec.\n> \n> \n> Here is a snippet of the log file showing the query plan and the\n> events after the query.\n> \n> 7553:NOTICE: QUERY PLAN:\n> 7554:\n> 7555:Aggregate (cost=3770.44..4108.86 rows=3384 width=19)\n> 7556: -> Group (cost=3770.44..3939.65 rows=33842 width=19)\n> 7557: -> Sort (cost=3770.44..3770.44 rows=33842 width=19)\n> 7558: -> Seq Scan on calls_1001548800 \n> (cost=0.00..1224.42 rows=33\n> 842 width=19)\n> 7559:\n> 7560:Smart Shutdown request at Tue Oct 2 18:34:08 2001\n> 7561:Fast Shutdown request at Tue Oct 2 18:35:03 2001\n> 7562:Aborting any active transaction...\n> 7563:FATAL 1: This connection has been terminated by the\n> administrator.\n> 7564:Immediate Shutdown request at Tue Oct 2 18:39:47 2001\n> 7565:NOTICE: Message from PostgreSQL backend:\n> 7566: The Postmaster has informed me that some other backend died\n> abnormally\n> and possibly corrupted shared memory.\n> 7567: I have rolled back the current transaction and am going\n> to termina\n> te your database system connection and exit.\n> 7568: Please reconnect to the database system and repeat your query.\n> 7569:DEBUG: database system was interrupted at 2001-10-02 18:11:06\n> PDT\n> 7570:DEBUG: CheckPoint record at (39, 1811142680)\n> 7571:DEBUG: Redo record at (39, 1811142680); Undo record at (0, 0);\n> Shutdown FA\n> LSE\n> 7572:DEBUG: NextTransactionId: 2086582; NextOid: 134710037\n> 7573:DEBUG: database system was not properly shut down; automatic\n> recovery in p\n> rogress...\n> 7574:DEBUG: ReadRecord: record with zero len at (39, 1811142744)\n> 7575:DEBUG: redo is not required\n> 7576:DEBUG: database system is in production state\n> \n> \n> Ryan\n", "msg_date": "10 Oct 2001 11:07:35 -0700", "msg_from": "ryan_rs@c4.com (Ryan)", "msg_from_op": true, "msg_subject": "Re: Postgres server locks up, HELP!" }, { "msg_contents": "ryan_rs@c4.com (Ryan) writes:\n> I changed the number of shared buffers to 3000 and my database locks\n> on a simple query. 
I must kill the database with pg_ctl stop -m i. \n> If anybody wants more data I can reproduce the problem. It is very\n> repeatable.\n\nPlease attach to the stuck process with a debugger and see where\nit's stuck. Or, if you're not handy with gdb or local equivalent,\nmaybe you could give access to your system to someone who is.\n\n> Also, I prefer to discover what went wrong rather than\n> simply upgrade to 7.1.3 and hope for the best. \n\nI agree; this does not strike me as a symptom of any known problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Oct 2001 01:21:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgres server locks up " } ]
[ { "msg_contents": "Are we ready for beta on Wednesday? I don't know anything holding us up\nat this point. Seems like major work has stopped and everyone is ready\nto start testing.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 8 Oct 2001 17:02:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Beta Wednesday" }, { "msg_contents": "On Mon, 8 Oct 2001, Bruce Momjian wrote:\n\n\n> Are we ready for beta on Wednesday? I don't know anything holding us up\n> at this point. Seems like major work has stopped and everyone is ready\n> to start testing.\n\n\nWell, if that build problem I reported doesn't bother you;-)\n\nI think it's likely to recur.\n\n\n", "msg_date": "Tue, 9 Oct 2001 10:24:45 +0800 (WST)", "msg_from": "John Summerfield <pgtest@os2.ami.com.au>", "msg_from_op": false, "msg_subject": "Re: Beta Wednesday" } ]
[ { "msg_contents": "Hi,\n\nAs said in another mail, I have tried to add iso-8859-15 (Latin 9) &\niso-8859-16 (Latin 10) to PostgreSQL, I think I have done mostly all\nthat's necessary. But I miss two things :\n\n- latin92mic/mic2latin9/latin102mic/mic2latin10 in conv.c\n- the leading character value in pg_wchar.h\n\nI don't know anything about MULE except that it's some Emacs standard\n(why they didn't go for Unicode is beyond my understanding, is\noff-topic on this list, and had probably some good argument at the\ntime).\n\nCan someone point me to where I should look for that ? is it as easy\nas iso-8859-2/3/4 support, or do I need to do something as iso-8859-5 ?\n\nThank you :)\n\nPatrice.\n\n-- \nPatrice HÉDÉ ------------------------------- patrice à islande org -----\n -- Isn't it weird how scientists can imagine all the matter of the\nuniverse exploding out of a dot smaller than the head of a pin, but they\ncan't come up with a more evocative name for it than \"The Big Bang\" ?\n -- What would _you_ call the creation of the universe ?\n -- \"The HORRENDOUS SPACE KABLOOIE !\" - Calvin and Hobbes\n------------------------------------------ http://www.islande.org/ -----\n", "msg_date": "Mon, 8 Oct 2001 23:08:47 +0200", "msg_from": "Patrice =?iso-8859-15?Q?H=E9d=E9?= <patrice@idf.net>", "msg_from_op": true, "msg_subject": "Mule internal code ?" } ]
[ { "msg_contents": "Hi,\n\nAs said in another mail, I have tried to add iso-8859-15 (Latin 9) &\niso-8859-16 (Latin 10) to PostgreSQL, I think I have done mostly all\nthat's necessary. But I miss two things :\n\n- latin92mic/mic2latin9/latin102mic/mic2latin10 in conv.c\n- the leading character value in pg_wchar.h\n\nI don't know anything about MULE except that it's some Emacs standard\n(why they didn't go for Unicode is beyond my understanding, is\noff-topic on this list, and had probably some good argument at the\ntime).\n\nCan someone point me to where I should look for that ? is it as easy\nas iso-8859-2/3/4 support, or do I need to do something as iso-8859-5 ?\n\nThank you :)\n\nPatrice.\n\n-- \nPatrice HÉDÉ ------------------------------- patrice à islande org -----\n -- Isn't it weird how scientists can imagine all the matter of the\nuniverse exploding out of a dot smaller than the head of a pin, but they\ncan't come up with a more evocative name for it than \"The Big Bang\" ?\n -- What would _you_ call the creation of the universe ?\n -- \"The HORRENDOUS SPACE KABLOOIE !\" - Calvin and Hobbes\n------------------------------------------ http://www.islande.org/ -----\n", "msg_date": "Mon, 8 Oct 2001 23:20:23 +0200", "msg_from": "Patrice =?iso-8859-15?Q?H=E9d=E9?= <phede-ml@islande.org>", "msg_from_op": true, "msg_subject": "Mule internal code ?" }, { "msg_contents": "> As said in another mail, I have tried to add iso-8859-15 (Latin 9) &\n> iso-8859-16 (Latin 10) to PostgreSQL, I think I have done mostly all\n> that's necessary. But I miss two things :\n\nISO-8859-15 and 16! I don't know anything beyond ISO-8859-10. 
Can you\ngive me any pointer (URL) explaining what they are?\n\n> - latin92mic/mic2latin9/latin102mic/mic2latin10 in conv.c\n> - the leading character value in pg_wchar.h\n>\n> I don't know anything about MULE except that it's some Emacs standard\n> (why they didn't go for Unicode is beyond my understanding, is\n> off-topic on this list, and had probably some good argument at the\n> time).\n\nProbably this is because Unicode is not perfect at all. For example,\nthe concept \"encode everything in two-bytes\" is obviously broken\ndown, some charsets (for example JIS X 0213) are not supported at all,\netc. etc...\n\n> Can someone point me to where I should look for that ? is it as easy\n> as iso-8859-2/3/4 support, or do I need to do something as iso-8859-5 ?\n\nDocs for MULE internal code come with XEmacs. For example, see:\n\nftp://ftp.xemacs.org/pub/xemacs/docs/letter/internals-letter.pdf.gz\n\nhttp://www.lns.cornell.edu/public/COMP/info/xemacs/internals/internals_15.html#SEC83\n\netc.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 10 Oct 2001 12:06:04 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Mule internal code ?" }, { "msg_contents": "* Tatsuo Ishii <t-ishii@sra.co.jp> [011010 18:20]:\n> > As said in another mail, I have tried to add iso-8859-15 (Latin 9) &\n> > iso-8859-16 (Latin 10) to PostgreSQL, I think I have done mostly all\n> > that's necessary. But I miss two things :\n> \n> ISO-8859-15 and 16! I don't know anything beyond ISO-8859-10. Can you\n> give me any pointer (URL) explaining what they are?\n\nhttp://www.evertype.com/sc2wg3.html\n\nIt links to files describing iso-8859-14 to 16.\n\n14 is gaelic support, which I've never seen used (of course, I don't\nspeak irish, so that's probably why :) ), and it has nothing to do\nwith the euro.\n\n15 is a \"modernised\" version of iso-8859-1. 
It removes some\nnot-so-widely used characters (currency place-holder, fraction\ncharacters), to replace them with the euro sign, the french oe, OE,\nand Y diaeresis, and the finnish/estonian s/S caron and z/Z caron.\n\nThat's the official 8-bit charset for western europe now (btw, the\nother name is latin9, or latin0, as it's supposed to replace\niso8859-1, which is now what should be called a legacy encoding).\n\n16 is quite new. It's supposed to do the same as iso-8859-15, but for\ncentral europe countries. It had support for the euro sign, the\nromanian language (t comma below, s comma below), but I've read\nsomewhere that it has lost support for two or three other central\neurope countries... go figure...\n\n> > - latin92mic/mic2latin9/latin102mic/mic2latin10 in conv.c\n> > - the leading character value in pg_wchar.h\n> >\n> > I don't know anything about MULE except that it's some Emacs standard\n> > (why they didn't go for Unicode is beyond my understanding, is\n> > off-topic on this list, and had probably some good argument at the\n> > time).\n> \n> Probably this is because Unicode is not perfect at all. For example,\n> the concept \"encode everything in two-bytes\" is obviously broken\n> down, some charsets (for example JIS X 0213) are not supported at all,\n> etc. etc...\n\nWell, for the history iso-10646 was 32 bits from the beginning, and\nUnicode didn't say that it was only 16 bits, though, to be fair, the\nUnicode consortium said it didn't believe it would need more than 16\nbits.\n\nBTW, now, there is a statement that they wouldn't go above 0x10ffff,\nwhich gives a bit more than 1 million characters... I think it should\nbe enough this time (but who knows !?).\n\nRegarding the *main* issue with Unicode, which is support of japanese\nkanji vs chinese (in the CJK unification), I must admit I don't know\nthe details, but arguments of both sides seem to be valid. 
I must\nadmit I would say \"add the japanese version of the characters\", since\nit's not lack of space which is the problem now. But things like this\nwill get solved with time, and it really seems like Unicode will\nachieve the so much needed charset unity it's been made for :)\n\n> > Can someone point me to where I should look for that ? is it as\n> > easy as iso-8859-2/3/4 support, or do I need to do something as\n> > iso-8859-5 ?\n> \n> Docs for MULE internal code come with XEmacs. For example, see:\n> \n> ftp://ftp.xemacs.org/pub/xemacs/docs/letter/internals-letter.pdf.gz\n> \n> http://www.lns.cornell.edu/public/COMP/info/xemacs/internals/internals_15.html#SEC83\n\nUnfortunately, these explain the principles behind mule, not the way\nto encode them from/to another character set :/\n\nPatrice\n\n-- \nPatrice Hédé\nemail: patrice hede à islande org\nwww : http://www.islande.org/\n", "msg_date": "Wed, 10 Oct 2001 19:46:14 +0200", "msg_from": "Patrice =?iso-8859-15?Q?H=E9d=E9?= <phede-ml@islande.org>", "msg_from_op": true, "msg_subject": "Re: Mule internal code ?" }, { "msg_contents": "> > ISO-8859-15 and 16! I don't know anything beyond ISO-8859-10. Can you\n> > give me any pointer (URL) explaining what they are?\n> \n> http://www.evertype.com/sc2wg3.html\n> \n> It links to files describing iso-8859-14 to 16.\n[snip] \nThanks for the info.\n\n> Well, for the history iso-10646 was 32 bits from the beginning, and\n> Unicode didn't say that it was only 16 bits, though, to be fair, the\n> Unicode consortium said it didn't believe it would need more than 16\n> bits.\n> \n> BTW, now, there is a statement that they wouldn't go above 0x10ffff,\n> which gives a bit more than 1 million characters... I think it should\n> be enough this time (but who knows !?).\n> \n> Regarding the *main* issue with Unicode, which is support of japanese\n> kanji vs chinese (in the CJK unification), I must admit I don't know\n> the details, but arguments of both sides seem to be valid. 
I must\n> admit I would say \"add the japanese version of the characters\", since\n> it's not lack of space which is the problem now. But things like this\n> will get solved with time, and it really seems like Unicode will\n> achieve the so much needed charset unity it's been made for :)\n\nIMHO we should not rely on particular encodings/charsets, including\nUnicode (or ISO 10646), MULE internal code or whatever. My plan for\nsupporting CREATE CHARCTER SET etc. stuffs would be truly *neutral* to\nany encodings/charsets.\n\n> > > Can someone point me to where I should look for that ? is it as\n> > > easy as iso-8859-2/3/4 support, or do I need to do something as\n> > > iso-8859-5 ?\n> > \n> > Docs for MULE internal code come with XEmacs. For example, see:\n> > \n> > ftp://ftp.xemacs.org/pub/xemacs/docs/letter/internals-letter.pdf.gz\n> > \n> > http://www.lns.cornell.edu/public/COMP/info/xemacs/internals/internals_15.html#SEC83\n> \n> Unfortunately, these explain the principles behind mule, not the way\n> to encode them from/to another character set :/\n\nPlease take look at \"15.3.1 Internal String Encoding.\"\n--\nTatsuo Ishii\n\n", "msg_date": "Thu, 11 Oct 2001 10:03:43 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Mule internal code ?" } ]
[ { "msg_contents": "Hello,\n\nI could not execute sample code of plsql.sgml. PostgreSQL version is 7.1.3.\nThen I made patches for reference. \n\n1) cs_refresh_one_mv function sample code bug. *** 512,526 ****\n2) Singule quotation missing. *** 1152,1158 ****\n3) cs_refresh_mviews function sample code bug. *** 1305,1313 ****\n\nPlease review my patch.\nThank you.\n\n\nFollowing plsqlORG.sgml file is a current CVS file.\n\n\n*** plsqlORG.sgml\tFri Oct 5 17:19:53 2001\n--- plsql.sgml\tFri Oct 5 17:32:16 2001\n***************\n*** 512,526 ****\n WHERE sort_key=key;\n \n IF NOT FOUND THEN\n! RAISE EXCEPTION ''View '' || key || '' not found'';\n RETURN 0;\n END IF;\n \n -- The mv_name column of cs_materialized_views stores view\n -- names.\n \n! TRUNCATE TABLE table_data.mv_name;\n! INSERT INTO table_data.mv_name || '' '' || table_data.mv_query;\n \n return 1;\n end;\n--- 512,526 ----\n WHERE sort_key=key;\n \n IF NOT FOUND THEN\n! RAISE EXCEPTION ''View % not found'',key;\n RETURN 0;\n END IF;\n \n -- The mv_name column of cs_materialized_views stores view\n -- names.\n \n! EXECUTE ''TRUNCATE TABLE '' || quote_ident(table_data.mv_name);\n! EXECUTE ''INSERT INTO '' || quote_ident(table_data.mv_name) || '' '' || table_data.mv_query;\n \n return 1;\n end;\n***************\n*** 1152,1158 ****\n FOR i IN 1..10 LOOP\n -- some expressions here\n \n! RAISE NOTICE 'i is %',i;\n END LOOP;\n \n FOR i IN REVERSE 1..10 LOOP\n--- 1152,1158 ----\n FOR i IN 1..10 LOOP\n -- some expressions here\n \n! RAISE NOTICE ''i is %'',i;\n END LOOP;\n \n FOR i IN REVERSE 1..10 LOOP\n***************\n*** 1305,1313 ****\n \n -- Now \"mviews\" has one record from cs_materialized_views\n \n! PERFORM cs_log(''Refreshing materialized view '' || mview.mv_name || ''...'');\n! TRUNCATE TABLE mview.mv_name;\n! 
INSERT INTO mview.mv_name || '' '' || mview.mv_query;\n END LOOP;\n \n PERFORM cs_log(''Done refreshing materialized views.'');\n--- 1305,1313 ----\n \n -- Now \"mviews\" has one record from cs_materialized_views\n \n! PERFORM cs_log(''Refreshing materialized view '' || quote_ident(mviews.mv_name) || ''...'');\n! EXECUTE ''TRUNCATE TABLE '' || quote_ident(mviews.mv_name);\n! EXECUTE ''INSERT INTO '' || quote_ident(mviews.mv_name) || '' '' || mviews.mv_query;\n END LOOP;\n \n PERFORM cs_log(''Done refreshing materialized views.'');\n\n---\nJunichi Kobayasi\n", "msg_date": "Tue, 09 Oct 2001 10:26:08 +0900", "msg_from": "KobayashiJunichi <junichi@sra.co.jp>", "msg_from_op": true, "msg_subject": "PL/pgSQL doc sample bug? (in 7.1.3)" }, { "msg_contents": "KobayashiJunichi <junichi@sra.co.jp> writes:\n> I could not execute sample code of plsql.sgml. PostgreSQL version is 7.1.3.\n> Then I made patches for reference. \n\nThanks for the corrections!\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Oct 2001 00:56:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL doc sample bug? (in 7.1.3) " } ]
[ { "msg_contents": "Hello! \nI have a trouble with PostgreSQL 7.1.3 (and 7.1.2 too). My OS is Solaris 8x86 with russian locale. PostgreSQL was builded from sources and configured with : --enable-locale --enable-multibyte=WIN. \nMy problem with sorting lowercase russian words in the text fields (type - \"varchar\") after cyrilic letter \"P\". \ncreate table test (id serial, rubrik varchar (255));\nThen inserting words in russian (from the client written on php - encoding win1251)\nselect * from library order by rubriks - will cause this problem. \nI think that this problem is only on Solaris systems because my friend use it on Linux SuSe and he don't have any troubles. \n\n\nregards\nkosha", "msg_date": "Tue, 9 Oct 2001 08:58:46 +0400", "msg_from": "\"Korshunov Ilya\" <kosha@kp.ru>", "msg_from_op": true, "msg_subject": "Problem with cyrilic" }, { "msg_contents": "Ilya,\n\ncheck your system locale - does simple perl script works properly\n\n\tOleg\nOn Tue, 9 Oct 2001, Korshunov Ilya wrote:\n\n> Hello!\n> I have a trouble with PostgreSQL 7.1.3 (and 7.1.2 too). 
PostgreSQL was builded from sources and configured with : --enable-locale --enable-multibyte=WIN.\n> My problem with sorting lowercase russian words in the text fields (type - \"varchar\") after cyrilic letter \"P\".\n> create table test (id serial, rubrik varchar (255));\n> Then inserting words in russian (from the client written on php - encoding win1251)\n> select * from library order by rubriks - will cause this problem.\n> I think that this problem is only on Solaris systems because my friend use it on Linux SuSe and he don't have any troubles.\n>\n>\n> regards\n> kosha\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 9 Oct 2001 21:06:45 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: Problem with cyrilic" } ]
[ { "msg_contents": "I've noticed that text(bool) isnt a defined function (nor does\n'true'::bool::text work). \n\nIt would be great to have this defined.\n\nSince \"select 'false'::bool\" gives 'f' and \"select 'true'::bool\" gives\n't', perhaps text(bool) should return 't' or 'f'.\n\nA case can, also, be given for returning 'true' or 'false'.\n\ndave\nps. 7.1.2 on solaris\npps. \"select boolin(true)\" causes the backend to core.\n", "msg_date": "Tue, 09 Oct 2001 13:37:06 -0700", "msg_from": "Dave Blasby <dblasby@refractions.net>", "msg_from_op": true, "msg_subject": "text(bool) not defined" }, { "msg_contents": "Dave Blasby writes:\n\n> I've noticed that text(bool) isnt a defined function (nor does\n> 'true'::bool::text work).\n\ncase when value then 'answer if true' else 'answer if false' end\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 10 Oct 2001 00:35:11 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: text(bool) not defined" } ]
[ { "msg_contents": "Just updated...\n\npeter=# SELECT EXTRACT(DOW FROM TIMESTAMP '2001-02-16 20:38:40');\nERROR: Timestamp with time zone units 'dow' not recognized\n\nThis is documented to work.\n\npeter=# SELECT EXTRACT(DOW FROM TIME '20:38:40');\nERROR: Interval units 'dow' not recognized\n\nThe expression is nonsensical, but so is the result.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 10 Oct 2001 00:35:25 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "EXTRACT broken" }, { "msg_contents": "> Just updated...\n> peter=# SELECT EXTRACT(DOW FROM TIMESTAMP '2001-02-16 20:38:40');\n> ERROR: Timestamp with time zone units 'dow' not recognized\n> This is documented to work.\n\nAh, I broke this with some recent additions to implement more ISO\nconventions (I changed the behavior of the date/time parser so that it\ndoes not willingly ignore unrecognized fields).\n\nI see the problem and the solution, but am in the middle of a few\nchanges to SET code and can't test at the moment. Hopefully I'll get\nthis fixed in the next couple of days, and if not I'll get it done early\nnext week.\n\nWould you like to add some tests to the regression suite? Clearly this\nisn't covered there...\n\n> peter=# SELECT EXTRACT(DOW FROM TIME '20:38:40');\n> ERROR: Interval units 'dow' not recognized\n> The expression is nonsensical, but so is the result.\n\nHmm. Why is the result nonsensical? 
\"day of week\" does not have meaning\nfor intervals, so it should not be recognized, right?\n\nIt is the same result as saying\n\n SELECT timestamp_part('yabadabadoo', time '20:38:40');\n\n - Thomas\n", "msg_date": "Thu, 11 Oct 2001 06:46:12 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: EXTRACT broken" }, { "msg_contents": "Thomas Lockhart writes:\n\n> > peter=# SELECT EXTRACT(DOW FROM TIME '20:38:40');\n> > ERROR: Interval units 'dow' not recognized\n> > The expression is nonsensical, but so is the result.\n>\n> Hmm. Why is the result nonsensical? \"day of week\" does not have meaning\n> for intervals, so it should not be recognized, right?\n\nIt's the \"interval\" part that's troubling me, since it appears nowhere in\nthe original expression.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 11 Oct 2001 22:02:42 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: EXTRACT broken" }, { "msg_contents": "> > > peter=# SELECT EXTRACT(DOW FROM TIME '20:38:40');\n> > > ERROR: Interval units 'dow' not recognized\n> > > The expression is nonsensical, but so is the result.\n> > Hmm. Why is the result nonsensical? \"day of week\" does not have meaning\n> > for intervals, so it should not be recognized, right?\n> It's the \"interval\" part that's troubling me, since it appears nowhere in\n> the original expression.\n\nOh yeah. 
We don't have a date_part(units, time) function defined, so it\nis getting converted to interval (which in other contexts *does* have\nsome usefulness as a \"time equivalent\").\n\nWe could fairly easily define a date_part() for the time and timetz data\ntypes.\n\n - Thomas\n", "msg_date": "Fri, 12 Oct 2001 05:37:52 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: EXTRACT broken" }, { "msg_contents": "Thomas Lockhart writes:\n\n> Oh yeah. We don't have a date_part(units, time) function defined, so it\n> is getting converted to interval (which in other contexts *does* have\n> some usefulness as a \"time equivalent\").\n\nYou're going to have an extremely hard time convincing me of that.\n\n> We could fairly easily define a date_part() for the time and timetz data\n> types.\n\nI had figured that time would be cast to timestamp. Which is probably\nwhat it used to do.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 13 Oct 2001 20:15:39 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: EXTRACT broken" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I had figured that time would be cast to timestamp.\n\nHow would you do that? With no date available, you're short all the\nhigh-order bits ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Oct 2001 15:08:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: EXTRACT broken " }, { "msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > I had figured that time would be cast to timestamp.\n>\n> How would you do that? With no date available, you're short all the\n> high-order bits ...\n\nFor the purpose of extracting the fields that time does provide, namely\nhour, minute, and second, it wouldn't matter. 
At least it gives me a much\nbetter feeling than casting to interval, which is a completely different\nkind of quantity.\n\nOf course, a separate date_part for time and date would make the most\nsense.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 13 Oct 2001 21:26:00 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: EXTRACT broken " }, { "msg_contents": "> > Oh yeah. We don't have a date_part(units, time) function defined, so it\n> > is getting converted to interval (which in other contexts *does* have\n> > some usefulness as a \"time equivalent\").\n> You're going to have an extremely hard time convincing me of that.\n\nOK, thanks for the warning. I'll try later when I have more time...\n\n> > We could fairly easily define a date_part() for the time and timetz data\n> > types.\n> I had figured that time would be cast to timestamp. Which is probably\n> what it used to do.\n\nTom Lane pointed out the problem of inferring an appropriate date for\nthe upcast.\n\n - Thomas\n", "msg_date": "Mon, 15 Oct 2001 05:34:12 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: EXTRACT broken" } ]
[ { "msg_contents": "In my understanding below row value constructors(I hope this term is\ncorrect) exaples should return true, but PostgreSQL does not.\n\ntest=# select (1,0) > (0,0);\n ?column? \n----------\n f\n(1 row)\n\ntest=# select (0,1) > (0,0);\n ?column? \n----------\n f\n(1 row)\n\nIn my understanding, (a,b) > (c,d) is equal to:\n\n a > c or (a = c and b > d)\n--\nTatsuo Ishii\n", "msg_date": "Wed, 10 Oct 2001 10:18:42 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "row value constructor bug?" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> In my understanding below row value constructors(I hope this term is\n> correct) exaples should return true, but PostgreSQL does not.\n\nBy my reading, a \"row value constructor\" is one of the things in\nparentheses, while the whole clause is a \"comparison predicate\"\n(per section 8.2 of SQL92). But I agree that we don't seem to\nhave implemented the semantics correctly. The code currently\nresponsible for this is makeRowExpr() in gram.y ... I tend to\nagree with the comment on it that says that the functionality\nshould be pushed deeper ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Oct 2001 01:37:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: row value constructor bug? " }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > In my understanding below row value constructors(I hope this term is\n> > correct) exaples should return true, but PostgreSQL does not.\n> \n> By my reading, a \"row value constructor\" is one of the things in\n> parentheses, while the whole clause is a \"comparison predicate\"\n> (per section 8.2 of SQL92). But I agree that we don't seem to\n> have implemented the semantics correctly. The code currently\n> responsible for this is makeRowExpr() in gram.y ... 
I tend to\n> agree with the comment on it that says that the functionality\n> should be pushed deeper ...\n\nTODO item here?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Oct 2001 23:56:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: row value constructor bug?" } ]
[ { "msg_contents": "Hi,\n\nNow that postgresql doesn't have field size limits, it seems to\nme they should be good for storing large blobs, even if it means\nhaving to uuencode them to be non-binary or whatever. I don't\nlike the old large object implementation, I need to store very large\nnumbers of objects and unless this implementation has changed\nin recent times it won't cut it.\n\nSo my question is, I assume TEXT is the best data type to store\nlarge things in, what precisely is the range of characters that\nI can store in TEXT? Is it only characters ascii <= 127, or is\nit only printable characters, or everything except '\\0' or what?\n\n", "msg_date": "Wed, 10 Oct 2001 11:33:04 +1000", "msg_from": "Chris Bitmead <chris@bitmead.com>", "msg_from_op": true, "msg_subject": "TOAST and TEXT" }, { "msg_contents": "It should be noted that there is still a limit of about 1GB if I\nremember correctly.\n--\nRod Taylor\n\nThere are always four sides to every story: your side, their side, the\ntruth, and what really happened.\n\n----- Original Message -----\nFrom: \"Chris Bitmead\" <chris@bitmead.com>\nTo: <pgsql-hackers@postgresql.org>\nSent: Tuesday, October 09, 2001 9:33 PM\nSubject: [HACKERS] TOAST and TEXT\n\n\n> Hi,\n>\n> Now that postgresql doesn't have field size limits, it seems to\n> me they should be good for storing large blobs, even if it means\n> having to uuencode them to be non-binary or whatever. I don't\n> like the old large object implementation, I need to store very large\n> numbers of objects and unless this implementation has changed\n> in recent times it won't cut it.\n>\n> So my question is, I assume TEXT is the best data type to store\n> large things in, what precisely is the range of characters that\n> I can store in TEXT? 
Is it only characters ascii <= 127, or is\n> it only printable characters, or everything except '\\0' or what?\n>\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n", "msg_date": "Tue, 9 Oct 2001 21:45:23 -0400", "msg_from": "\"Rod Taylor\" <rbt@barchord.com>", "msg_from_op": false, "msg_subject": "Re: TOAST and TEXT" }, { "msg_contents": "On Wed, Oct 10, 2001 at 11:33:04AM +1000, Chris Bitmead wrote:\n> So my question is, I assume TEXT is the best data type to store\n> large things in, what precisely is the range of characters that\n> I can store in TEXT? Is it only characters ascii <= 127, or is\n> it only printable characters, or everything except '\\0' or what?\n\ntext accepts everything except \\0, and also various funtions\ntake locale/charset info into account.\n\nUse bytea, its for 0-255, binary data. When your client\nlibrary does not support it, then base64 it in client side\nand later decode() into place.\n\n-- \nmarko\n\n", "msg_date": "Wed, 10 Oct 2001 04:48:42 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": false, "msg_subject": "Re: TOAST and TEXT" }, { "msg_contents": "Rod Taylor wrote:\n> It should be noted that there is still a limit of about 1GB if I\n> remember correctly.\n\n You're right, there is still a practical limit on the size of\n a text field. And it's usually much lower than 1GB.\n\n The problem is that first, the (encoded) data has to be put\n completely into the querystring, passed to the backend and\n buffered there entirely in memory. Then it get's parsed, and\n the data copied into a const node. After rewriting and\n planning, a heap tuple is build, containing the third,\n eventually fourth in memory copy of the data. 
After that, the\n toaster kicks in, allocates another chunk of that size to try\n to compress the data and finally slices it up for storage.\n\n So the limit depends on how much swapspace you have and where\n the per process virtual memory limit of your OS is.\n\n In practice, sizes of up to 10 MB are no problem. So storing\n typical MP3s works.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 10 Oct 2001 08:52:54 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: TOAST and TEXT" }, { "msg_contents": "Chris Bitmead <chris@bitmead.com> writes:\n> ... I don't\n> like the old large object implementation, I need to store very large\n> numbers of objects and unless this implementation has changed\n> in recent times it won't cut it.\n\nHave you looked at 7.1? AFAIK it has no particular problem with\nlots of LOs.\n\nWhich is not to discourage you from going over to bytea fields instead,\nif that model happens to be more convenient for your application.\nBut your premise above seems false.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Oct 2001 00:32:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TOAST and TEXT " } ]
[ { "msg_contents": " >Use bytea, its for 0-255, binary data. When your client\n >library does not support it, then base64 it in client side\n >and later decode() into place.\n\nThanks, bytea sounds like what I need. Why no documentation on this \nimportant data type?\n\nDoes the Java client library support setting this type using\nsetBytes or setBinaryStream?\n\n", "msg_date": "Wed, 10 Oct 2001 13:52:26 +1000", "msg_from": "Chris Bitmead <chris@bitmead.com>", "msg_from_op": true, "msg_subject": "Re: TOAST and bytea JAVA" }, { "msg_contents": "Chris,\n\nCurrent sources for the jdbc driver does support the bytea type. \nHowever the driver for 7.1 does not.\n\nthanks,\n--Barry\n\n\nChris Bitmead wrote:\n\n> >Use bytea, its for 0-255, binary data. When your client\n> >library does not support it, then base64 it in client side\n> >and later decode() into place.\n> \n> Thanks, bytea sounds like what I need. Why no documentation on this \n> important data type?\n> \n> Does the Java client library support setting this type using\n> setBytes or setBinaryStream?\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n\n", "msg_date": "Tue, 09 Oct 2001 21:32:33 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TOAST and bytea JAVA" } ]
[ { "msg_contents": "Receiving a request to add ISO 8859-15 and 16, I review the multibyte\nsupport code and found several errors in it.\n\n1) There is a confusion between \"LATIN5\" and ISO 8859-5. LATIN5 is not\n ISO 8859-5, but is actually ISO 8859-9. Should we rename LATIN5 to\n \"ISO8859-5\" (or whatever) as the encoding name? I think we should.\n For your information, here are the correct mapping between ISO\n 8859-n and LATINn.\n\n ISO 8859-1\tLATIN1\n ISO 8859-2\tLATIN2\n ISO 8859-3\tLATIN3\n ISO 8859-4\tLATIN4\n ISO 8859-9\tLATIN5\n ISO 8859-10\tLATIN6\n\n2) The leading characters for some Cyrillic charsets are wrong.\n\nCurrently they are defined as:\n\n#define LC_KOI8_R\t0x8c\t/* Cyrillic KOI8-R */\n#define LC_KOI8_U\t0x8c\t/* Cyrillic KOI8-U */\n#define LC_ISO8859_5\t0x8d\t/* ISO8859 Cyrillic */\n\nThese should be:\n\n#define LC_KOI8_R\t0x8b\t/* Cyrillic KOI8-R */\n#define LC_KOI8_U\t0x8b\t/* Cyrillic KOI8-U */\n#define LC_ISO8859_5\t0x8c\t/* ISO8859 Cyrillic */\n\n The impact of correcting them would be for users who are storing\n their data into database using MULE internal code. I think they\n are quite few people using MULE internal code. So we could correct\n them for 7.2.\n\nComments?\n\nBTW, should we support ISO 8859-6 and beyond for 7.2? There have been\nsome requests to do that. Supporting them are actually trivial works,\nshould be one day job. The harder part is writing conversion function\nbetween encodings. However, there is very few demands to do that, I\nguess. If so, we could ommit the conversion capability for 7.2.\nComments?\n--\nTatsuo Ishii\n", "msg_date": "Wed, 10 Oct 2001 15:40:25 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Encoding issues" }, { "msg_contents": "> 1) There is a confusion between \"LATIN5\" and ISO 8859-5. LATIN5 is not\n> ISO 8859-5, but is actually ISO 8859-9. Should we rename LATIN5 to\n> \"ISO8859-5\" (or whatever) as the encoding name? 
I think we should.\n> For your information, here are the correct mapping between ISO\n> 8859-n and LATINn.\n> \n> ISO 8859-1\tLATIN1\n> ISO 8859-2\tLATIN2\n> ISO 8859-3\tLATIN3\n> ISO 8859-4\tLATIN4\n> ISO 8859-9\tLATIN5\n> ISO 8859-10\tLATIN6\n\nI just found additions:\n\n ISO 8859-13\tLATIN7\n ISO 8859-14\tLATIN8\n ISO 8859-15\tLATIN9\n--\nTatsuo Ishii\n\n", "msg_date": "Wed, 10 Oct 2001 16:00:59 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Encoding issues" }, { "msg_contents": "On Wed, Oct 10, 2001 at 03:40:25PM +0900, Tatsuo Ishii wrote:\n> Receiving a request to add ISO 8859-15 and 16, I review the multibyte\n> support code and found several errors in it.\n> \n> 1) There is a confusion between \"LATIN5\" and ISO 8859-5. LATIN5 is not\n> ISO 8859-5, but is actually ISO 8859-9. Should we rename LATIN5 to\n> \"ISO8859-5\" (or whatever) as the encoding name? I think we should.\n> For your information, here are the correct mapping between ISO\n> 8859-n and LATINn.\n> \n> ISO 8859-1\tLATIN1\n> ISO 8859-2\tLATIN2\n> ISO 8859-3\tLATIN3\n> ISO 8859-4\tLATIN4\n> ISO 8859-9\tLATIN5\n> ISO 8859-10\tLATIN6\n \n You are right. Now I see some old version of PostgreSQL and there\n is this confusion in some headers and comments too.\n \n> 2) The leading characters for some Cyrillic charsets are wrong.\n> \n> Currently they are defined as:\n> \n> #define LC_KOI8_R\t0x8c\t/* Cyrillic KOI8-R */\n> #define LC_KOI8_U\t0x8c\t/* Cyrillic KOI8-U */\n> #define LC_ISO8859_5\t0x8d\t/* ISO8859 Cyrillic */\n> \n> These should be:\n> \n> #define LC_KOI8_R\t0x8b\t/* Cyrillic KOI8-R */\n> #define LC_KOI8_U\t0x8b\t/* Cyrillic KOI8-U */\n> #define LC_ISO8859_5\t0x8c\t/* ISO8859 Cyrillic */\n\n Again, it's long time in sources too (interesting is that we don't \n understand some bugreport).\n\n> The impact of correcting them would be for users who are storing\n> their data into database using MULE internal code. 
I think they\n> are quite few people using MULE internal code. So we could correct\n> them for 7.2.\n> \n> Comments?\n\n I agree with you, make release with know bugs is dirty thing.\n\n> BTW, should we support ISO 8859-6 and beyond for 7.2? There have been\n> some requests to do that. Supporting them are actually trivial works,\n> should be one day job. The harder part is writing conversion function\n> between encodings. However, there is very few demands to do that, I\n> guess. If so, we could ommit the conversion capability for 7.2.\n> Comments?\n\n You will hear \"we are in the feature freeze state..\" :-)\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Wed, 10 Oct 2001 09:39:09 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Encoding issues" }, { "msg_contents": "Tatsuo Ishii writes:\n\n> BTW, should we support ISO 8859-6 and beyond for 7.2?\n\nIf possible we should. Otherwise people might spread the word that\nPostgreSQL is not ready for the Euro.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 10 Oct 2001 18:52:46 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Encoding issues" }, { "msg_contents": "* Tatsuo Ishii <t-ishii@sra.co.jp> [011010 18:21]:\n> Receiving a request to add ISO 8859-15 and 16, I review the multibyte\n> support code and found several errors in it.\n> \n> 1) There is a confusion between \"LATIN5\" and ISO 8859-5. LATIN5 is not\n> ISO 8859-5, but is actually ISO 8859-9. Should we rename LATIN5 to\n> \"ISO8859-5\" (or whatever) as the encoding name? 
I think we should.\n> For your information, here are the correct mapping between ISO\n> 8859-n and LATINn.\n> \n> ISO 8859-1 LATIN1\n> ISO 8859-2 LATIN2\n> ISO 8859-3 LATIN3\n> ISO 8859-4 LATIN4\n> ISO 8859-9 LATIN5\n> ISO 8859-10 LATIN6\n\nISO-8859-14 LATIN 8\nISO-8859-15 LATIN 9 or LATIN 0\nISO-8859-16 LATIN 10\n\n:)\n\n> 2) The leading characters for some Cyrillic charsets are wrong.\n> \n> Currently they are defined as:\n> \n> #define LC_KOI8_R\t0x8c\t/* Cyrillic KOI8-R */\n> #define LC_KOI8_U\t0x8c\t/* Cyrillic KOI8-U */\n> #define LC_ISO8859_5\t0x8d\t/* ISO8859 Cyrillic */\n> \n> These should be:\n> \n> #define LC_KOI8_R\t0x8b\t/* Cyrillic KOI8-R */\n> #define LC_KOI8_U\t0x8b\t/* Cyrillic KOI8-U */\n> #define LC_ISO8859_5\t0x8c\t/* ISO8859 Cyrillic */\n> \n> The impact of correcting them would be for users who are storing\n> their data into database using MULE internal code. I think they\n> are quite few people using MULE internal code. So we could correct\n> them for 7.2.\n> \n> Comments?\n> \n> BTW, should we support ISO 8859-6 and beyond for 7.2? There have been\n> some requests to do that. Supporting them are actually trivial works,\n> should be one day job. The harder part is writing conversion function\n> between encodings. However, there is very few demands to do that, I\n> guess. If so, we could ommit the conversion capability for 7.2.\n> Comments?\n\nI think iso-8859-15 and 16 are important, if only because they are the\nonly two encodings which support the Euro (not speaking of unicode, of\ncourse !), and at least iso-8859-15 has some official status in\nwestern europe (on Unix systems at least... Windows users have their\nown table where the Euro sign is stored somewhere else, I think at\n0x80).\n\nI have done the conversion for the mappings to and from unicode, but\nyou could get the original tables at :\n\nhttp://www.unicode.org/Public/MAPPINGS/ISO8859/\n\n(you can get iso-8859-10, 13 and 14 there as well ! 
10 is supposed to\nbe for greenlandic and sámi, 13 for the baltic rim, and 14 for gaelic)\n\nJust found on google the following link, where you can see quite a few\ncharsets (it doesn't have -16, too new probably) :\n\nhttp://www.kostis.net/charsets/\n\nPatrice\n\n-- \nPatrice Hédé\nemail: patrice hede à islande org\nwww : http://www.islande.org/\n", "msg_date": "Wed, 10 Oct 2001 20:03:07 +0200", "msg_from": "Patrice =?iso-8859-15?Q?H=E9d=E9?= <phede-ml@islande.org>", "msg_from_op": false, "msg_subject": "Re: Encoding issues" } ]
[ { "msg_contents": "Hi List,\n Iam pretty new to this list as well as PostgreSQL. I hope to find some crucial info from here.\nThnx in advance to all those who would contribute to it.\n\nIam basically an Oracle Consultant.\n\nAt first i would like to clarify how to enforce password for a user i have created.\n I use the psql client to access the database and unless and until the -U option \n(psql template1 -U <user> ) is used, iam not prompted to enter any password.\nEven thou i enter a wrong password iam still allowed to log in. \nIs there any property needs to be altered to enforce the same ?\nLooking forward for some favourable responses.\nRegards\nBalaji\n\n\n\n\n\n\n\nHi List,\n     Iam pretty new to this list as well as PostgreSQL. \nI hope to find some crucial info from here.\nThnx in advance to all those who would contribute to it.\n \nIam basically an Oracle Consultant.\n \nAt first i would like to clarify how to enforce password for a user i have \ncreated.\n I use the psql client to access the database and unless and until the \n-U option \n(psql template1 -U <user> ) is used, iam not prompted to enter any \npassword.\nEven thou i enter a wrong password iam still allowed to log in. \nIs there any property needs to be altered to enforce the same ?\nLooking forward for some favourable responses.\nRegards\nBalaji", "msg_date": "Wed, 10 Oct 2001 15:51:44 +0530", "msg_from": "\"Balaji Venkatesan\" <balaji.venkatesan@megasoft.com>", "msg_from_op": true, "msg_subject": "Setting Password" }, { "msg_contents": "You need to change the pg_hba.conf file in your PostgreSQL\ninstallation so that \"password\" authentication is used. Check out:\n\nhttp://www.postgresql.org/idocs/index.php?client-authentication.html\n\nfor details.\n\nHope that helps, \n\nMike Mascari\nmascarm@mascari.com\n\n> Balaji Venkatesan wrote:\n> \n> Hi List,\n> Iam pretty new to this list as well as PostgreSQL. 
I hope to\n> find some crucial info from here.\n> Thnx in advance to all those who would contribute to it.\n> \n> Iam basically an Oracle Consultant.\n> \n> At first i would like to clarify how to enforce password for a\n> user i have created.\n> I use the psql client to access the database and unless and until\n> the -U option\n> (psql template1 -U <user> ) is used, iam not prompted to enter any\n> password.\n> Even thou i enter a wrong password iam still allowed to log in.\n> Is there any property needs to be altered to enforce the same ?\n> Looking forward for some favourable responses.\n> Regards\n> Balaji\n", "msg_date": "Wed, 10 Oct 2001 06:29:38 -0400", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": false, "msg_subject": "Re: Setting Password" } ]
[ { "msg_contents": "Hi, \nI've done some research on your request, \nbut I could not find very much to help you. \nWhat I've found about \n1) Connections \nhttp://www.postgresql.org/idocs/index.php?runtime-config.html\nenable LOG_CONNECTIONS (boolean), LOG_PID (boolean) \nto log database users \n2) Table locks \nnothing \n3) Consumed disk space of a specific database \nAll database related files are located in \n$PGDATA/base/<database-name> \nSo, by summing all file sizes within this \ndirectory, you should have it. \nAs far as I know, the only limitation to a \ndatabase is given by the total disk capacity. \n\nI hope this helps at least a bit. \nI've looked through the FAQ list too, but \ncouldn't find anything which might help you. \nStill, I don't understand why nobody else is \nanswering. \nRegards, Christoph \n\n", "msg_date": "Wed, 10 Oct 2001 11:37:51 METDST", "msg_from": "Haller Christoph <ch@rodos.fzk.de>", "msg_from_op": true, "msg_subject": "Re: Connections, table locks, disk space " } ]
[ { "msg_contents": "Hello,\n\nIn dump file statement which grants permissions on view exists before\nstatement which create view.\nFor tables and sequences permissions dumped in correct order.\n\n--TOC Entry ID 124 (OID 150248)\nGRANT ALL on my_view to group sales;\n\n... skipped\n\n--TOC Entry ID 123 (OID 194103)\nCREATE VIEW my_view ...\n\nAny comments?\n\n\n", "msg_date": "Wed, 10 Oct 2001 19:09:03 +0400", "msg_from": "\"Dmitry Chernikov\" <cher@beltel.ru>", "msg_from_op": true, "msg_subject": "Problem in pg_dump 7.1.2 dump order" }, { "msg_contents": "On Wed, 10 Oct 2001, Dmitry Chernikov wrote:\n\n> Hello,\n>\n> In dump file statement which grants permissions on view exists before\n> statement which create view.\n> For tables and sequences permissions dumped in correct order.\n>\n> --TOC Entry ID 124 (OID 150248)\n> GRANT ALL on my_view to group sales;\n>\n> ... skipped\n>\n> --TOC Entry ID 123 (OID 194103)\n> CREATE VIEW my_view ...\n>\n> Any comments?\n\nThis bug was fixed in 7.1.3.\n\n", "msg_date": "Fri, 12 Oct 2001 10:28:43 -0400 (EDT)", "msg_from": "Joel Burton <joel@eos.scw.org>", "msg_from_op": false, "msg_subject": "Re: Problem in pg_dump 7.1.2 dump order" } ]
[ { "msg_contents": "Apologies for posting to [Hackers], have already posted to [Patches]\nwith no reply.\n\nWhen trying to pg_dump on 7.1.2 (& 7.1.3) I get the following error\nmessage:\n\nbash-2.04$ pg_dump dwh\ngetTables(): SELECT (for PRIMARY KEY NAME) failed for table nlcdmp.\nExplanation from backend: ERROR: dtoi4: integer out of range\nbash-2.04$ pg_dump -v dwh\n-- saving database definition\n-- last builtin oid is 18539\n-- reading user-defined types\n-- reading user-defined functions\n-- reading user-defined aggregates\n-- reading user-defined operators\n-- reading user-defined tables\ngetTables(): SELECT (for PRIMARY KEY NAME) failed for table nlcdmp.\nExplanation from backend: ERROR: dtoi4: integer out of range\n\n\nI have already applied the patches described by Martin Weinberg and Tom\nLane (see below), but this doesn't deem to have fixed my problem.\n---------\n--- pg_dump.cThu Sep 6 21:18:21 2001\n+++ pg_dump.c.origThu Sep 6 21:19:08 2001\n@@ -2289,7 +2289,7 @@\n\n resetPQExpBuffer(query);\n appendPQExpBuffer(query,\n- \"SELECT Oid FROM pg_index i WHERE i.indisprimary AND i.indrelid =\n'%s'::oid \",\n+ \"SELECT Oid FROM pg_index i WHERE i.indisprimary AND i.indrelid = %s\n\",\n tblinfo[i].oid);\n res2 = PQexec(g_conn, query->data);\n if (!res2 || PQresultStatus(res2) != PGRES_TUPLES_OK)\n@@ -3035,7 +3035,6 @@\n query = createPQExpBuffer();\n appendPQExpBuffer(query, \"SELECT description FROM pg_description WHERE\nobjoid = \");\n appendPQExpBuffer(query, oid);\n-appendPQExpBuffer(query, \"::oid\");\n\n /*** Execute query ***/\n\n--------\n\nSeveral of my tables have very large OIDs (over 4 billion in some cases\n! don't know why) , these are obviously also causing dtoi4 error\nmessages when entering table design in pgaccess, but one can carry on\npast the messages and continue working. I am also having problems in\nCodeCharge using the ODBC driver - Codecharge fails to get column names\nfor tables with high OIDs. 
Tables with lower OIDs in the same database\nwork fine :-)\n\nI've had no problems with any previous version of PostgreSQL much of the\n\ndata in this database has been progressively migrated over the last\ncouple of years from 6.2.\n\nMy interest in pg_dump is to dump my database without OIDs (normally I\ndump with OIDs so I've been carrying these big numbers for some time),\ndrop everything and rebuild (psql < data.out) so that I hopefully get\nnew smaller OIDs generated. Is this likely to work if I get round the\npg_dump problems?\n\nAnyway, what's needed now is suggestions as to what else I must do to\nget pg_dump working with my large OIDs, any ideas??\n\nThanks,\n\nSteve\n\n\n\n\n\n", "msg_date": "Wed, 10 Oct 2001 16:32:16 +0100", "msg_from": "steve <steve@jlajla.com>", "msg_from_op": true, "msg_subject": "pg_dump oid problems" }, { "msg_contents": "steve <steve@jlajla.com> writes:\n> When trying to pg_dump on 7.1.2 (& 7.1.3) I get the following error\n> message:\n\n> bash-2.04$ pg_dump dwh\n> getTables(): SELECT (for PRIMARY KEY NAME) failed for table nlcdmp.\n> Explanation from backend: ERROR: dtoi4: integer out of range\n\n> Several of my tables have very large OIDs (over 4 billion in some cases\n\nHmm. Okay, I think I can see how over-2-gig OIDs might lead to that\nerror message, but that doesn't really help in tracking down the specific\nlocation of the problem. Could you run pg_dump after doing\n\texport PGOPTIONS=\"-d2\"\nso that its queries get sent to the postmaster log? Then looking at the\nlog to see the last couple of queries before the failure should tell us.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Oct 2001 00:45:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump oid problems " }, { "msg_contents": "Tom,\n\nThanks for the prompt reply. 
Following is the postgresql log output:\n\nDEBUG: ProcessQuery\nDEBUG: CommitTransactionCommand\nDEBUG: StartTransactionCommand\nDEBUG: query: SELECT Oid FROM pg_index i WHERE i.indisprimary AND\ni.indrelid = '3527162388'::oid\nDEBUG: ProcessQuery\nDEBUG: CommitTransactionCommand\nDEBUG: StartTransactionCommand\nDEBUG: query: SELECT c.relname FROM pg_index i LEFT OUTER JOIN pg_class c\nON c.oid = i.indexrelid WHERE i.indrelid = 3527162388AND i.indisprimary\nERROR: dtoi4: integer out of range\nDEBUG: AbortCurrentTransaction\nDEBUG: proc_exit(0)\nDEBUG: shmem_exit(0)\nDEBUG: exit(0)\n\nThe 3527162388AND is exactly as shown in the log, with no space between the\nvalue and the AND, I guess this is the problem, wherever it's being\ngenerated in the code.\n\nHTH\n\nThanks,\n\nSteve\n\nTom Lane wrote:\n\n> steve <steve@jlajla.com> writes:\n> > When trying to pg_dump on 7.1.2 (& 7.1.3) I get the following error\n> > message:\n>\n> > bash-2.04$ pg_dump dwh\n> > getTables(): SELECT (for PRIMARY KEY NAME) failed for table nlcdmp.\n> > Explanation from backend: ERROR: dtoi4: integer out of range\n>\n> > Several of my tables have very large OIDs (over 4 billion in some cases\n>\n> Hmm. Okay, I think I can see how over-2-gig OIDs might lead to that\n> error message, but that doesn't really help in tracking down the specific\n> location of the problem. Could you run pg_dump after doing\n> export PGOPTIONS=\"-d2\"\n> so that its queries get sent to the postmaster log? 
Then looking at the\n> log to see the last couple of queries before the failure should tell us.\n>\n> regards, tom lane\n\n", "msg_date": "Thu, 11 Oct 2001 08:49:06 +0100", "msg_from": "steve <steve@jlajla.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump oid problems" }, { "msg_contents": "steve <steve@jlajla.com> writes:\n> DEBUG: query: SELECT c.relname FROM pg_index i LEFT OUTER JOIN pg_class c\n> ON c.oid = i.indexrelid WHERE i.indrelid = 3527162388AND i.indisprimary\n> ERROR: dtoi4: integer out of range\n\n> The 3527162388AND is exactly as shown in the log, with no space between the\n> value and the AND, I guess this is the problem, wherever it's being\n> generated in the code.\n\nThat's evidently coming from line 2346 of src/bin/pg_dump/pg_dump.c:\n\n \"WHERE i.indrelid = %s\"\n\nTry changing it to\n\n \"WHERE i.indrelid = '%s'::oid \"\n\n(Problem seems to be solved already in 7.2devel)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Oct 2001 12:15:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump oid problems " }, { "msg_contents": "Problem solved, 3GB dumped OK -- Thanks Tom\n\nSteve\n\nTom Lane wrote:\n\n> steve <steve@jlajla.com> writes:\n> > DEBUG: query: SELECT c.relname FROM pg_index i LEFT OUTER JOIN pg_class c\n> > ON c.oid = i.indexrelid WHERE i.indrelid = 3527162388AND i.indisprimary\n> > ERROR: dtoi4: integer out of range\n>\n> > The 3527162388AND is exactly as shown in the log, with no space between the\n> > value and the AND, I guess this is the problem, wherever it's being\n> > generated in the code.\n>\n> That's evidently coming from line 2346 of src/bin/pg_dump/pg_dump.c:\n>\n> \"WHERE i.indrelid = %s\"\n>\n> Try changing it to\n>\n> \"WHERE i.indrelid = '%s'::oid \"\n>\n> (Problem seems to be solved already in 7.2devel)\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the 
unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n", "msg_date": "Fri, 12 Oct 2001 09:24:37 +0100", "msg_from": "steve <steve@jlajla.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump oid problems" } ]
[ { "msg_contents": "peter=# select current_timestamp;\n timestamptz\n-------------------------------\n 2001-10-10 01:04:54.965162+02\n(1 row)\n\npeter=# select extract(timezone_hour from current_timestamp);\n date_part\n-----------\n -2\n(1 row)\n\nPlus or minus?\n\npeter=# select extract(timezone_hour from timestamp '2001-10-10 01:04:54.965162+02');\n date_part\n-----------\n -2\n(1 row)\n\n(Same problem)\n\npeter=# select extract(timezone_hour from timestamp '2001-10-10 01:04:54.965162+03');\n ^^\n date_part\n-----------\n -2\n(1 row)\n\nBig problem.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 10 Oct 2001 18:57:34 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "extract(timezone_hour) funny business" }, { "msg_contents": "> Plus or minus?\n\nIs there a standard for this? We are printing date/time using Posix\nconventions, which are opposite from the SQL conventions for setting\ntime zone (which we don't yet support, since it is fundamentally useless\n;) I apparently implemented one, and you expect the other.\n\n> peter=# select extract(timezone_hour from timestamp '2001-10-10 01:04:54.965162+03');\n> -----------\n> -2\n> Big problem.\n\nNot really. The timestamp you have specified is read in and internalized\nas a gmt value, then is rewritten using your current time zone settings.\nafaict the time zone on an input value should not persist with the value\nitself, so the info does not carry far enough forward to be used for an\noutput routine.\n\nNote that I did not implement \"time with time zone\" this way, but rather\nused a \"persistant time zone\". I *think* that this should be taken out,\nbut further discussion is welcome. 
The reference books are distressingly\nunclear or obviously incorrect on this topic, presumably in the\ninterests of remaining lucid.\n\n - Thomas\n", "msg_date": "Thu, 11 Oct 2001 06:39:30 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: extract(timezone_hour) funny business" } ]
[ { "msg_contents": "I've been looking a bit at the MULE encoding wrt to latin 9 and 10. It\nseems that there is no support for the Euro at all in it.\n\ne.g. when I tried to use \"recode\", which does recognise iso-8859-15\nand 16, and convert to MULE, whatever I do, I obtain \"EUR\" for the\neuro sign, OE, oe, s, S, z, Z, \"Y for the different characters which\nare specific to 15 for example, and that's even worse for 16.\n\nShould we NOT allow conversion to Mule, or restrict the support, for\nexample by pretending iso-8859-15 is iso-8859-1 (resp. 16 is 2) for\nconversion from/to mule (i.e. use the 0x81 and 0x82 octet for these\nencodings) and be done with it ?? (and MENTION it in the docs ;) ).\n\nAnyway, I don't see somebody wanting support for the euro using Mule\nto store its strings... UTF-8 is much more important (and\nstraightforward) to support in that case :)\n\nWhat do you think ?\n\nPatrice.\n\n-- \nPatrice Hédé\nemail: patrice hede à islande org\nwww : http://www.islande.org/\n", "msg_date": "Wed, 10 Oct 2001 20:39:10 +0200", "msg_from": "Patrice =?iso-8859-15?Q?H=E9d=E9?= <phede-ml@islande.org>", "msg_from_op": true, "msg_subject": "iso-8859-15/16 to MULE" }, { "msg_contents": "> e.g. when I tried to use \"recode\", which does recognise iso-8859-15\n> and 16, and convert to MULE, whatever I do, I obtain \"EUR\" for the\n> euro sign, OE, oe, s, S, z, Z, \"Y for the different characters which\n> are specific to 15 for example, and that's even worse for 16.\n\nApparently MULE currently does not support beyond ISO 8859-10 at all.\n\n> Should we NOT allow conversion to Mule, or restrict the support, for\n> example by pretending iso-8859-15 is iso-8859-1 (resp. 16 is 2) for\n> conversion from/to mule (i.e. use the 0x81 and 0x82 octet for these\n> encodings) and be done with it ?? 
(and MENTION it in the docs ;) ).\n\nI think that we could neglect MULE encoding support for beyond ISO\n8859-10, at least until MULE \"officially\" supports them.\n\n> Anyway, I don't see somebody wanting support for the euro using Mule\n> to store its strings... UTF-8 is much more important (and\n> straightforward) to support in that case :)\n> \n> What do you think ?\n\nWell, the conversion to/from UTF-8 for ISO 8859-10 or later is pretty\neasy and should be supported, I think. Actually I already have\ngenerated mapping tables for these charsets. I will make patches\nagainst current and leave it for the core's decision, whether it\nshould be included in 7.2 or not.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 11 Oct 2001 10:27:48 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: iso-8859-15/16 to MULE" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Well, the conversion to/from UTF-8 for ISO 8859-10 or later is pretty\n> easy and should be supported, I think. Actually I already have\n> generated mapping tables for these charsets. I will make patches\n> against current and leave it for the core's decision, whether it\n> should be included in 7.2 or not.\n\nIf you are comfortable with these patches then apply them. You know\nmore about multibyte issues than any of the core committee...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Oct 2001 00:58:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: iso-8859-15/16 to MULE " }, { "msg_contents": "> > Well, the conversion to/from UTF-8 for ISO 8859-10 or later is pretty\n> > easy and should be supported, I think. Actually I already have\n> > generated mapping tables for these charsets. I will make patches\n> > against current and leave it for the core's decision, whether it\n> > should be included in 7.2 or not.\n> \n> If you are comfortable with these patches then apply them. 
You know\n> more about multibyte issues than any of the core committee...\n\nOk. I have committed changes to support ISO-8859-6 to 16.\n\n1) The following are the supported ISO-8859 series encoding names. Column 1\n is the \"official\" name and column 2 is the \"alias\" name.\n\n LATIN1\tISO-8859-1\n LATIN2\tISO-8859-2\n LATIN3\tISO-8859-3\n LATIN4\tISO-8859-4\n LATIN5\tISO-8859-9\n ISO-8859-6\n ISO-8859-7\n ISO-8859-8\n ISO-8859-10\tLATIN6\n ISO-8859-13\tLATIN7\n ISO-8859-14\tLATIN8\n ISO-8859-15\tLATIN9\n ISO-8859-16\n\n These encodings all support conversions to/from UNICODE(UTF-8).\n\n2) LATIN5 no longer means ISO-8859-5; it now means ISO-8859-9. This may\n impact the LATIN5 database backward compatibility. Especially in\n case of conversion between LATIN5 and UNICODE. If you have LATIN5\n database and used UNICODE conversion capability, PLEASE CHECK YOUR\n DATABASE.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 11 Oct 2001 23:23:35 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: iso-8859-15/16 to MULE " } ]
[ { "msg_contents": "\nOur FAQ, item 4.16.2 has:\n\n\t$newSerialID = nextval('person_id_seq');\n\tINSERT INTO person (id, name) VALUES ($newSerialID, 'Blaise Pascal');\n\nIs this correct Perl? I don't see a nextval() function in Perl. Can\nyou call SQL server-side functions natively from Perl?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 10 Oct 2001 17:12:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "FAQ error" }, { "msg_contents": "On 10 Oct 2001 at 17:12 (-0400), Bruce Momjian wrote:\n| \n| Our FAQ, item 4.16.2 has:\n| \n| \t$newSerialID = nextval('person_id_seq');\n| \tINSERT INTO person (id, name) VALUES ($newSerialID, 'Blaise Pascal');\n| \n| Is this correct Perl? I don't see a nextval() function in Perl. Can\n| you call SQL server-side functions natively from Perl?\n\nno. The proper perl code would be more like...\n\nuse DBI;\nmy ($lastid,$nextid,$sql,$rv);\nmy $dbh = DBI->connect(\"perldoc DBD::Pg\");\n\n# to use the nextval\n$sql = \"SELECT nextval('person_id_seq')\";\n$nextid = ($dbh->selectrow_array($sql))[0];\n$sql = \"INSERT INTO person (id, name) VALUES ($nextid, 'Blaise Pascal')\";\n$rv = $dbh->do($sql);\n\n# or to get the currval\n$sql = \"INSERT INTO person (name) VALUES ('Blaise Pascal')\";\n$rv = $dbh->do($sql);\n$sql = \"SELECT currval('person_id_seq')\";\n$lastid = ($dbh->selectrow_array($sql))[0];\n\n\n| -- \n| Bruce Momjian | http://candle.pha.pa.us\n| pgman@candle.pha.pa.us | (610) 853-3000\n| + If your life is a hard drive, | 830 Blythe Avenue\n| + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n| \n| ---------------------------(end of broadcast)---------------------------\n| TIP 6: Have you searched our list archives?\n| \n| http://archives.postgresql.org\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Wed, 10 Oct 2001 20:20:46 -0400", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: FAQ error" }, { "msg_contents": "Bruce Momjian wrote:\n\n> $newSerialID = nextval('person_id_seq');\n> INSERT INTO person (id, name) VALUES ($newSerialID, 'Blaise Pascal');\n> \n> Is this correct Perl? I don't see a nextval() function in Perl. Can\n> you call SQL server-side functions natively from Perl?\n\nOf course not. This can be counted as 'pseudo-code'...\n\nA correct implementation using DBI (and DBD::Pg) would be\n\n$newSerialID = $dbh->selectrow_array (q{select\nnextval('person_id_seq')});\n$dbh->do (qq{INSERT INTO person (id, name) VALUES ($newSerialID, 'Blaise\nPascal')});\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n", "msg_date": "Thu, 11 Oct 2001 14:31:36 +0300", "msg_from": "Alessio Bragadini <alessio@albourne.com>", "msg_from_op": false, "msg_subject": "Re: FAQ error" }, { "msg_contents": "Bruce Momjian writes:\n\n> Our FAQ, item 4.16.2 has:\n>\n> \t$newSerialID = nextval('person_id_seq');\n> \tINSERT INTO person (id, name) VALUES ($newSerialID, 'Blaise Pascal');\n>\n> Is this correct Perl?\n\nNo. I always thought it was pseudo code. 
I think it's fine.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 11 Oct 2001 20:34:07 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: FAQ error" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > Our FAQ, item 4.16.2 has:\n> >\n> > \t$newSerialID = nextval('person_id_seq');\n> > \tINSERT INTO person (id, name) VALUES ($newSerialID, 'Blaise Pascal');\n> >\n> > Is this correct Perl?\n> \n> No. I always thought it was pseudo code. I think it's fine.\n\nIt is pseudo-code, but the assignment for nextval() is just wrong:\n\n> > \t$newSerialID = nextval('person_id_seq');\n\nI am going to flesh this out with the SELECT but not the rest.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Oct 2001 15:46:42 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: FAQ error" }, { "msg_contents": "> On 10 Oct 2001 at 17:12 (-0400), Bruce Momjian wrote:\n> | \n> | Our FAQ, item 4.16.2 has:\n> | \n> | \t$newSerialID = nextval('person_id_seq');\n> | \tINSERT INTO person (id, name) VALUES ($newSerialID, 'Blaise Pascal');\n> | \n> | Is this correct Perl? I don't see a nextval() function in Perl. Can\n> | you call SQL server-side functions natively from Perl?\n> \n> no. 
The proper perl code would be more like...\n> \n> use DBI;\n> my ($lastid,$nextid,$sql,$rv);\n> my $dbh = DBI->connect(\"perldoc DBD::Pg\");\n> \n> # to use the nextval\n> $sql = \"SELECT nextval('person_id_seq')\";\n> $nextid = ($dbh->selectrow_array($sql))[0];\n> $sql = \"INSERT INTO person (id, name) VALUES ($nextid, 'Blaise Pascal')\";\n> $rv = $dbh->do($sql);\n\nOK, new FAQ code is:\n\n $sql = \"SELECT nextval('person_id_seq')\";\n $newSerialID = ($conn->selectrow_array($sql))[0];\n INSERT INTO person (id, name) VALUES ($newSerialID, 'Blaise Pascal');\n $res = $dbh->do($sql);\n> \n> # or to get the currval\n> $sql = \"INSERT INTO person (name) VALUES ('Blaise Pascal')\";\n> $rv = $dbh->do($sql);\n> $sql = \"SELECT currval('person_id_seq')\";\n> $lastid = ($dbh->selectrow_array($sql))[0];\n\nand:\n\n INSERT INTO person (name) VALUES ('Blaise Pascal');\n $res = $conn->do($sql);\n $sql = \"SELECT currval('person_id_seq')\";\n $newSerialID = ($conn->selectrow_array($sql))[0];\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Oct 2001 23:52:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: FAQ error" }, { "msg_contents": "Bruce Momjian writes:\n\n> OK, new FAQ code is:\n>\n> $sql = \"SELECT nextval('person_id_seq')\";\n> $newSerialID = ($conn->selectrow_array($sql))[0];\n> INSERT INTO person (id, name) VALUES ($newSerialID, 'Blaise Pascal');\n> $res = $dbh->do($sql);\n\nThis code is still incorrect for any known programming language and it's\neven less clear to a person that doesn't know the programming language\nit's probably trying to imitate.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 13 Oct 2001 21:23:43 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: FAQ error" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > OK, new FAQ code is:\n> >\n> > $sql = \"SELECT nextval('person_id_seq')\";\n> > $newSerialID = ($conn->selectrow_array($sql))[0];\n> > INSERT INTO person (id, name) VALUES ($newSerialID, 'Blaise Pascal');\n> > $res = $dbh->do($sql);\n> \n> This code is still incorrect for any known programming language and it's\n> even less clear to a person that doesn't know the programming language\n> it's probably trying to imitate.\n\nOK, what suggestions do you have?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 13 Oct 2001 15:47:46 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: FAQ error" }, { "msg_contents": "Bruce Momjian writes:\n\n> > Bruce Momjian writes:\n> >\n> > > OK, new FAQ code is:\n> > >\n> > > $sql = \"SELECT nextval('person_id_seq')\";\n> > > $newSerialID = ($conn->selectrow_array($sql))[0];\n> > > INSERT INTO person (id, name) VALUES ($newSerialID, 'Blaise Pascal');\n> > > $res = $dbh->do($sql);\n> >\n> > This code is still incorrect for any known programming language and it's\n> > even less clear to a person that doesn't know the programming language\n> > it's probably trying to imitate.\n>\n> OK, what suggestions do you have?\n\nI didn't have a problem with the original version. It conveyed clearly\n(to me), \"read the nextval and insert it yourself\".\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sun, 14 Oct 2001 13:43:53 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: FAQ error" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > > Bruce Momjian writes:\n> > >\n> > > > OK, new FAQ code is:\n> > > >\n> > > > $sql = \"SELECT nextval('person_id_seq')\";\n> > > > $newSerialID = ($conn->selectrow_array($sql))[0];\n> > > > INSERT INTO person (id, name) VALUES ($newSerialID, 'Blaise Pascal');\n> > > > $res = $dbh->do($sql);\n> > >\n> > > This code is still incorrect for any known programming language and it's\n> > > even less clear to a person that doesn't know the programming language\n> > > it's probably trying to imitate.\n> >\n> > OK, what suggestions do you have?\n> \n> I didn't have a problem with the original version. It conveyed clearly\n> (to me), \"read the nextval and insert it yourself\".\n\nObviously, someone did because they tried the code and it didn't work. \nAt least the new code is closer to valid, though less clear. 
It is at\nleast a valid snippet, which the previous version was not.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Oct 2001 19:03:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: FAQ error" }, { "msg_contents": "> Obviously, someone did because they tried the code and it didn't work. \n> At least the new code is closer to valid, though less clear. It is at\n> least a valid snippet, which the previous version was not.\n\nOK, I changed it to more pseudocode:\n\n new_id = output of \"SELECT nextval('person_id_seq')\"\n INSERT INTO person (id, name) VALUES (new_id, 'Blaise Pascal');\n\nand\n\n INSERT INTO person (name) VALUES ('Blaise Pascal'); \n new_id = output of \"SELECT currval('person_id_seq')\";\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Oct 2001 19:27:16 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: FAQ error" } ]
[ { "msg_contents": " >Chris Bitmead <chris@bitmead.com> writes:\n >> ... I don't\n >> like the old large object implementation, I need to store very large\n >> numbers of objects and unless this implementation has changed\n >> in recent times it won't cut it.\n >\n >Have you looked at 7.1? AFAIK it has no particular problem with\n >lots of LOs.\n >\n >Which is not to discourage you from going over to bytea fields instead,\n >if that model happens to be more convenient for your application.\n >But your premise above seems false.\n\nI'm storing emails, which as we know are usually small but occasionally \nhuge. OK, I see in the release notes something like \"store all large\nobjects in one table\". and \"pg_dump\" of large objects. That sounds like\nmaybe LOs are now ok, although for portability with Oracle blobs it\nwould be nice if they could be embedded in any row or at least appear\nto be so from client interface side (Java client for what I'm doing).\n\nBTW, the postgres docs web pages says there is \"no limitation\" on row\nsize. Someone should probably update that with the info given in the\nlast few emails and probably integrate it in the regular doco as well.\n\n", "msg_date": "Thu, 11 Oct 2001 15:24:33 +1000", "msg_from": "Chris Bitmead <chris@bitmead.com>", "msg_from_op": true, "msg_subject": "Re: TOAST and TEXT" }, { "msg_contents": "Hi\n I have to set up PERL to interact with PGSQL.\n I have taken the following steps.\n\n 1.Installation of perl_5.6.0 under Redhat Linux 7.0\n 2.Installation of POSTGRESQL under Redhat Linux 7.0\n\n Both are working perfectly as separate modules.\n\n Now I need to interface perl with PGSQL.\n\n I need to know what's the best possible solution.\n\n I have installed the latest DBI from www.cpan.org\n\n Now I need to install DBD for PGSQL. Is\n this the driver I have to work on for pgsql?\n Or do I have any other option to connect to pgsql\n from perl. 
Indeed I've found out another way\n to use the Pg driver provided by PGSQL to interface\n perl with pgsql.\n\n I need to know exactly the difference between\n use Pg ; and use DBI ; and to know which one is\n the correct direction to proceed in, under what circumstances.\n\n\n when I tried to install DBD-Pg-0.93.tar.gz under Linux\n I get\n\n Configuring Pg\n Remember to actually read the README file !\n please set environment variables POSTGRES_INCLUDE and POSTGRES_LIB !\n\n I need to know what these variables POSTGRES_INCLUDE and POSTGRES_LIB\n should point to ...\n\n and when I tried to run perl test.pl, the program to test the\ninstallation of the module which\n comes with the tar.\n I get the error\n\n OS: linux\n install_driver(Pg) failed: Can't locate DBD/Pg.pm in @INC (@INC\ncontains: /usr/l\n ib/perl5/5.6.0/i386-linux /usr/lib/perl5/5.6.0\n/usr/lib/perl5/site_perl/5.6.0/i3\n 86-linux /usr/lib/perl5/site_perl/5.6.0 /usr/lib/perl5/site_perl .)\nat (eval 1)\n line 3.\n Perhaps the DBD::Pg perl module hasn't been fully installed,\n or perhaps the capitalisation of 'Pg' isn't right.\n Available drivers: ADO, ExampleP, Multiplex, Proxy.\n at test.pl line 51\n\n Anybody who can clarify is most welcome....\n\n with regards,\n Prassanna...\n\n", "msg_date": "Thu, 11 Oct 2001 11:48:09 +0530", "msg_from": "\"Balaji Venkatesan\" <balaji.venkatesan@megasoft.com>", "msg_from_op": false, "msg_subject": "Suitable Driver ?" }, { "msg_contents": "On Thu, 11 Oct 2001, Balaji Venkatesan wrote:\n\n> Now I need to install DBD for PGSQL. Is\n> this the driver I have to work on for pgsql?\n> Or do I have any other option to connect to pgsql\n> from perl. 
Indeed I've found out another way\n> to use the Pg driver provided by PGSQL to interface\n> perl with pgsql.\nYou need DBD::Pg, which is a DBD driver for postgres.\n\n> \n> I need to know exactly the difference between\n> use Pg ; and use DBI ; and to know which one is\n> the correct direction to proceed in, under what circumstances.\nYou need use DBI; and use DBD::Pg;\nPg by itself is a slightly lower-level module that is similar to the C interface\nto postgresql.\n\n> when I tried to install DBD-Pg-0.93.tar.gz under Linux\n> I get\n> \n> Configuring Pg\n> Remember to actually read the README file !\n> please set environment variables POSTGRES_INCLUDE and POSTGRES_LIB !\n> \n> I need to know what these variables POSTGRES_INCLUDE and POSTGRES_LIB\n> should point to ...\nTo the location of your installed postgres includes and libraries.\nFor example:\n\nexport POSTGRES_INCLUDE=/usr/local/pgsql/include\nexport POSTGRES_LIB=/usr/local/pgsql/lib\n\n-alex\n\n", "msg_date": "Thu, 11 Oct 2001 10:11:42 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: Suitable Driver ?" }, { "msg_contents": "\"Balaji Venkatesan\" <balaji.venkatesan@megasoft.com> writes:\n> I have installed the latest DBI from www.cpan.org\n> Now I need to install DBD for PGSQL. Is\n> this the driver I have to work on for pgsql?\n\nIf you want to use DBI then you should get the DBD::Pg driver from\nCPAN. (Yes, it is on CPAN, even though their index page about DBD\nmodules didn't list it last time I looked.)\n\n> I need to know exactly the difference between\n> use Pg ; and use DBI ; and to know which one is\n\nPg is an older stand-alone driver; it's not DBI-compatible,\nand it's got nothing to do with DBD::Pg.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Oct 2001 12:24:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Suitable Driver ? 
" }, { "msg_contents": "> BTW, the postgres docs web pages says there is \"no limitation\" on row\n> size. Someone should probably update that with the info given in the\n> last few emails and probably integrate it in the regular doco as well.\n\nAlthough the field length is limited to 1GB, is there a row size limit? \nI don't know of one. The FAQ does say below the list:\n\n Of course, these are not actually unlimited, but limited to\n available disk space and memory/swap space. Performance may suffer\n when these values get unusually large. \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Oct 2001 23:38:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: TOAST and TEXT" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Although the field length is limited to 1GB, is there a row size\n> limit? \n\nSure. 1Gb per field (hard limit) times 1600 fields (also hard limit).\nIn practice less, since TOAST pointers are 20bytes each at present,\nmeaning you can't have more than BLCKSZ/20 toasted fields in one row.\n\nWhether this has anything to do with real applications is debatable,\nhowever. I find it hard to visualize a table design that needs several\nhundred columns that *all* need to be GB-sized.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Oct 2001 23:56:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TOAST and TEXT " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Although the field length is limited to 1GB, is there a row size\n> > limit? \n> \n> Sure. 
1Gb per field (hard limit) times 1600 fields (also hard limit).\n> In practice less, since TOAST pointers are 20bytes each at present,\n> meaning you can't have more than BLCKSZ/20 toasted fields in one row.\n\nI read this as 409GB with 8k pages.\n\n> Whether this has anything to do with real applications is debatable,\n> however. I find it hard to visualize a table design that needs several\n> hundred columns that *all* need to be GB-sized.\n\nYes, that just makes my head hurt. Easier to just say \"unlimited\" and\nlimited by your computer's memory/disk.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 13 Oct 2001 00:13:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: TOAST and TEXT" } ]
[ { "msg_contents": "When adding an index to a (quite large, ~2 million rows) table\nPostgreSQL continues to do sequential lookups until VACUUM ANALYZE is\nrun. Naturally performance is poor.\n\nThe CREATE INDEX statement takes considerable time.\n\nSeen with 7.1.3 on Intel Linux (RedHat 7.0 & 7.1 and Solaris 2.6.\n\nIn the example below the data file (8 MB) can be found at:\n\n http://services.csl.co.uk/postgresql/obs.gz\n\nConsider the session below:\n\nlkind@elsick:~% createdb obs_test\nCREATE DATABASE\nlkind@elsick:~% psql obs_test\nobs_test=# CREATE TABLE obs (setup_id INTEGER, time REAL, value REAL, bad_data_flag SMALLINT);\nCREATE\nobs_test=# COPY obs FROM '/user/lkind/obs';\nCOPY\nobs_test=# SELECT COUNT(*) FROM obs;\n count \n---------\n 1966593\n(1 row)\n\nobs_test=# CREATE UNIQUE INDEX obs_idx ON obs USING BTREE(setup_id, time);\nCREATE\nobs_test=# EXPLAIN SELECT * FROM obs WHERE setup_id = 300 AND time = 118;\nNOTICE: QUERY PLAN:\n\nSeq Scan on obs (cost=0.00..42025.90 rows=197 width=14)\n\nEXPLAIN\nobs_test=# VACUUM ANALYZE obs ;\nVACUUM\nobs_test=# EXPLAIN SELECT * FROM obs WHERE setup_id = 300 AND time = 118;\nNOTICE: QUERY PLAN:\n\nIndex Scan using obs_idx on obs (cost=0.00..9401.60 rows=1 width=14)\n\nEXPLAIN\nobs_test=# \\q\n", "msg_date": "Thu, 11 Oct 2001 09:27:44 +0100 (BST)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": true, "msg_subject": "Btree index ignored on SELECT until VACUUM ANALYZE" } ]
[ { "msg_contents": "A customer's machine hangs from time to time. All we could find so far is\nthat postgres seems to be in state \"idle in transaction\":\n\npostgres 19317 0.0 0.3 8168 392 ? S Oct05 0:00 /usr/lib/postgresql/bin/postmaster -D /var/lib/postgres/data\npostgres 19983 0.0 0.8 8932 1020 ? S Oct05 0:01 postgres: postgres rabatt 192.168.50.222 idle in transaction\npostgres 21005 0.0 0.0 3484 4 ? S Oct06 0:00 /usr/lib/postgresql/bin/psql -t -q -d template1\npostgres 21014 0.0 0.7 8892 952 ? S Oct06 0:01 postgres: postgres rabatt [local] VACUUM waiting\npostgres 21833 0.0 0.4 3844 572 ? S Oct06 0:00 /usr/lib/postgresql/bin/pg_dump rabatt\npostgres 21841 0.0 1.2 9716 1564 ? S Oct06 0:00 postgres: postgres rabatt [local] COPY waiting\npostgres 22135 0.0 0.9 8856 1224 ? S Oct06 0:00 postgres: postgres rabatt 192.168.50.223 idle in transaction waiting\n\nI'm not sure what's happening here and I have no remote access to the\nmachine myself. Any idea what could be the reason for this?\n\nThere may be some client processes running at the time the dump and the\nvacuum commands are issued that have an open transaction doing nothing. That\nis the just issued a BEGIN command. Thinking about it run some inserts at\nthe very same time, although that's not likely.\n\nAny hints are appreciated. Thanks in advance.\n\nMichael\n\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Thu, 11 Oct 2001 11:28:30 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "Deadlock? idle in transaction" }, { "msg_contents": "On Thu, 11 Oct 2001, Michael Meskes wrote:\n\n> A customer's machine hangs from time to time. All we could find so far is\n> that postgres seems to be in state \"idle in transaction\":\n> \n> postgres 19317 0.0 0.3 8168 392 ? S Oct05 0:00 /usr/lib/postgresql/bin/postmaster -D /var/lib/postgres/data\n> postgres 19983 0.0 0.8 8932 1020 ? 
S Oct05 0:01 postgres: postgres rabatt 192.168.50.222 idle in transaction\n> postgres 21005 0.0 0.0 3484 4 ? S Oct06 0:00 /usr/lib/postgresql/bin/psql -t -q -d template1\n> postgres 21014 0.0 0.7 8892 952 ? S Oct06 0:01 postgres: postgres rabatt [local] VACUUM waiting\n> postgres 21833 0.0 0.4 3844 572 ? S Oct06 0:00 /usr/lib/postgresql/bin/pg_dump rabatt\n> postgres 21841 0.0 1.2 9716 1564 ? S Oct06 0:00 postgres: postgres rabatt [local] COPY waiting\n> postgres 22135 0.0 0.9 8856 1224 ? S Oct06 0:00 postgres: postgres rabatt 192.168.50.223 idle in transaction waiting\n> \n> I'm not sure what's happening here and I have no remote access to the\n> machine myself. Any idea what could be the reason for this?\n> \n> There may be some client processes running at the time the dump and the\n> vacuum commands are issued that have an open transaction doing nothing. That\n> is the just issued a BEGIN command. Thinking about it run some inserts at\n> the very same time, although that's not likely.\n> \n> Any hints are appreciated. Thanks in advance.\n\nWell, it'd be likely to get in this state if the first transaction grabbed\nany write locks and then sat on them without committing or doing any more\ncommands, since the vacuum would wait on that and the rest of the\ntransactions will probably wait on the vacuum. Is that a possible\nsituation?\n\n\n\n", "msg_date": "Thu, 11 Oct 2001 13:09:25 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Deadlock? idle in transaction" }, { "msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n> A customer's machine hangs from time to time. All we could find so far is\n> that postgres seems to be in state \"idle in transaction\":\n\nYou evidently have some client applications holding open transactions\nthat have locks on some tables. That's not a deadlock --- at least,\nit's not Postgres' fault. 
The VACUUM is waiting to get exclusive access\nto some table that's held by one of these clients, and the COPY is\nprobably queued up behind the VACUUM.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Oct 2001 20:26:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Deadlock? idle in transaction " }, { "msg_contents": "On Thu, Oct 11, 2001 at 01:09:25PM -0700, Stephan Szabo wrote:\n> Well, it'd be likely to get in this state if the first transaction grabbed\n> any write locks and then sat on them without committing or doing any more\n> commands, since the vacuum would wait on that and the rest of the\n> transactions will probably wait on the vacuum. Is that a possible\n> situation?\n\nMaybe. The first transaction should not sit on any lock, but I have to dig\nthrough the sources to be sure it really does not. Also I wonder if this\ncould happen through normal operation:\n\nTask 1:\n\nbegin\nacquire lock in table A\nacquire lock in table B\ncommit\n\nTask 2 (vacuum):\n\nlock table B\nlock table A\n\nCould this force the situation too?\n\nIf so the easy workaround would be to run vacuum when there is no other\nprocess accessing the DB.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Fri, 12 Oct 2001 10:31:00 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: Deadlock? idle in transaction" }, { "msg_contents": "On Thu, Oct 11, 2001 at 08:26:48PM -0400, Tom Lane wrote:\n> You evidently have some client applications holding open transactions\n\nOkay, I know where to look for that. Thanks.\n\n> that have locks on some tables. That's not a deadlock --- at least,\n\nIt is no deadlock if the transaction holding the lock remains idle and does\nnothing. But I cannot imagine how this could happen.\n\nWhat happens if there is a real deadlock, i.e. 
the transaction holding the\nlock tries to lock a table vacuum already locked? Ah, I just checked and\nrendered my last mail useless. It appears the backend does correctly detect\nthe deadlock and kill one transaction.\n\n> it's not Postgres' fault. The VACUUM is waiting to get exclusive access\n> to some table that's held by one of these clients, and the COPY is\n> probably queued up behind the VACUUM.\n\nSo the reason is that the transaction does hold a lock but does not advance\nany further?\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Fri, 12 Oct 2001 10:40:39 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: Deadlock? idle in transaction" }, { "msg_contents": "On Fri, 12 Oct 2001, Michael Meskes wrote:\n\n> On Thu, Oct 11, 2001 at 01:09:25PM -0700, Stephan Szabo wrote:\n> > Well, it'd be likely to get in this state if the first transaction grabbed\n> > any write locks and then sat on them without committing or doing any more\n> > commands, since the vacuum would wait on that and the rest of the\n> > transactions will probably wait on the vacuum. Is that a possible\n> > situation?\n> \n> Maybe. The first transaction should not sit on any lock, but I have to dig\n> through the sources to be sure it really does not. Also I wonder if this\n> could happen through normal operation:\n> \n> Task 1:\n> \n> begin\n> acquire lock in table A\n> acquire lock in table B\n> commit\n> \n> Task 2 (vacuum):\n> \n> lock table B\n> lock table A\n> \n> Could this force the situation too?\n\nDo you mean like task1 has gotten the A lock, and then task 2 gets the B\nand then task1 tries to get B and task2 tries to get A? 
I *think*\n(without ever looking at the code, and going on messages from here) that\nwould probably kick off the deadlock alert since you're trying to grab\na lock from a process which is waiting for a lock you hold.\n\n> If so the easy workaround would be to run vacuum when there is no other\n> process accessing the DB.\n\nWell, fortunately it sounds like in 7.2 we'll have much less of this in\nthe first place since the normal uses of vacuum will be happier with\nsharing.\n\n", "msg_date": "Fri, 12 Oct 2001 11:29:08 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Deadlock? idle in transaction" }, { "msg_contents": "Also note that an uncommitted select statement will lock the table and \nprevent vacuum from running. It isn't just inserts/updates that will \nlock and cause vacuum to block, but selects as well. This got me in the \npast. (Of course this is all fixed in 7.2 with the new vacuum \nfunctionality that doesn't require exclusive locks on the tables).\n\nthanks,\n--Barry\n\nMichael Meskes wrote:\n\n> On Thu, Oct 11, 2001 at 08:26:48PM -0400, Tom Lane wrote:\n> \n>>You evidently have some client applications holding open transactions\n>>\n> \n> Okay, I know where to look for that. Thanks.\n> \n> \n>>that have locks on some tables. That's not a deadlock --- at least,\n>>\n> \n> It is no deadlock if the transaction holding the lock remains idle and does\n> nothing. But I cannot imagine how this could happen.\n> \n> What happens if there is a real deadlock, i.e. the transaction holding the\n> lock tries to lock a table vacuum already locked? Ah, I just checked and\n> rendered my last mail useless. It appears the backend does correctly detect\n> the deadlock and kill one transaction.\n> \n> \n>>it's not Postgres' fault. 
The VACUUM is waiting to get exclusive access\n>>to some table that's held by one of these clients, and the COPY is\n>>probably queued up behind the VACUUM.\n>>\n> \n> So the reason is that the transaction does hold a lock but does not advance\n> any further?\n> \n> Michael\n> \n\n\n", "msg_date": "Fri, 12 Oct 2001 18:12:21 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: Deadlock? idle in transaction" }, { "msg_contents": "On Fri, Oct 12, 2001 at 11:29:08AM -0700, Stephan Szabo wrote:\n> Do you mean like task1 has gotten the A lock, and then task 2 gets the B\n> and then task1 tries to get B and task2 tries to get A? I *think*\n> (without ever looking at the code, and going on messages from here) that\n> would probably kick off the deadlock alert since you're trying to grab\n> a lock from a process which is waiting for a lock you hold.\n\nI checked it and yes, it kicks off the deadlock alert. The idle in\ntransaction problem is not a deadlock but a transaction that simply does\nnot proceed. \n\nIn our case we believe to have found the reason. There was one user who\naccessed the database via M$ Access and was allowed to write. And this user\nlooked into a table and then let this query open while doing other work.\nSince he's able to change data I would guess that the query is internally\nrealized as a cursor select for update which of course locks. With Access\ndoing nothing but displaying the data the transaction certainly is idle.\nThat's it.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Mon, 29 Oct 2001 09:00:58 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: Deadlock? idle in transaction" } ]
[ { "msg_contents": "I have a function in PL/pgSQL which needs the current time in seconds\nexpressed as an int4. In 7.1 I was able to get this (I thought) with\ndate_part(''epoch'', timestamp ''now'') . That doesn't seem to work for me\nin last week's -current.\n\nHere's the PLpgSQL:\n\n v_seed := date_part(''epoch'', timestamp ''now'');\n\nAnd here's the output:\n\nNOTICE: Error occurred while executing PL/pgSQL function\nNOTICE: line 4 at assignment\nERROR: Timestamp with time zone units 'epoch' not recognized\n\nWhat's the best way to do this?\n\nTake care,\n\nBill\n\n", "msg_date": "Thu, 11 Oct 2001 09:17:45 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "How do I get the current time in seconds in the unix epoch?" }, { "msg_contents": "Bill Studenmund <wrstuden@netbsd.org> writes:\n> In 7.1 I was able to get this (I thought) with\n> date_part(''epoch'', timestamp ''now'') . That doesn't seem to work for me\n> in last week's -current.\n\nIndeed: in 7.1 I can do\n\ntest71=# select date_part('epoch', timestamp 'now');\n date_part\n------------\n 1002946239\n(1 row)\n\nbut current sources give\n\nregression=# select date_part('epoch', timestamp 'now');\nERROR: Timestamp with time zone units 'epoch' not recognized\n\nThomas, I think you broke something.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Oct 2001 00:12:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: How do I get the current time in seconds in the unix epoch? " }, { "msg_contents": "> I have a function in PL/pgSQL which needs the current time in seconds\n> expressed as an int4. In 7.1 I was able to get this (I thought) with\n> date_part(''epoch'', timestamp ''now'') . 
That doesn't seem to work for me\n> in last week's -current.\n>\n> Here's the PLpgSQL:\n>\n> v_seed := date_part(''epoch'', timestamp ''now'');\n>\n> And here's the output:\n>\n> NOTICE: Error occurred while executing PL/pgSQL function\n> NOTICE: line 4 at assignment\n> ERROR: Timestamp with time zone units 'epoch' not recognized\n\nHmmm. I don't know why date_part isn't working, but I now only use the\nEXTRACT syntax for maximum SQL compatibility. ie. Do this instead:\n\nv_seed := EXTRACT (EPOCH FROM CURRENT_TIMESTAMP);\n\nCheers,\n\nChris\n\n", "msg_date": "Mon, 15 Oct 2001 10:42:00 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: How do I get the current time in seconds in the unix epoch?" }, { "msg_contents": "> > In 7.1 I was able to get this (I thought) with\n> > date_part(''epoch'', timestamp ''now'') . That doesn't seem to work for me\n> > in last week's -current.\n> Thomas, I think you broke something.\n\nIt was actually a side effect of changing the date/time parser to no\nlonger ignore unrecognized text fields. The previous behavior has been\nthere from the Beginning, and the new behavior meant that the search\nroutine no longer returns \"ignore\" as a status (which caused the calling\nroutine to drop into the \"special case\" tests including \"epoch\").\n\nAnyway, I've got patches, so no worries...\n\n - Thomas\n", "msg_date": "Mon, 15 Oct 2001 05:31:58 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: How do I get the current time in seconds in the unix " }, { "msg_contents": "On Mon, 15 Oct 2001, Christopher Kings-Lynne wrote:\n\n> Hmmm. I don't know why date_part isn't working, but I now only use the\n> EXTRACT syntax for maximum SQL compatibility. ie. Do this instead:\n>\n> v_seed := EXTRACT (EPOCH FROM CURRENT_TIMESTAMP);\n\nUnfortunately that gives the same error. 
I think the problem is that the\nunderlying code isn't liking the EPOCH timezone. Tom mentioned he had\npatches.\n\nTake care,\n\nBill\n\n", "msg_date": "Mon, 15 Oct 2001 06:29:15 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: How do I get the current time in seconds in the unix" } ]
[ { "msg_contents": "Zembu has decided to release the result of a recent Postgres development\nproject to the Postgres project. This project (for which I was the lead\ndeveloper) adds Oracle-like package support to Postgres. I'm in the\nprocess of making a version of the patch which is relative to the current\ncvs tree. The change is fairly encompassing, weighing in at around\n800k of unified diffs, of which about 200k are the real meat. Before I\nsend it in, though, I thought I'd see what people think of the idea. Oh,\nthis feature would definitely be a 7.3 feature, 7.2 is too close to the\ndoor for this to go in. :-)\n\nThis message is rather long. I've divided it into sections which start\nwith \"**\".\n\n** What are Packages\n\nSo what are packages? In Oracle, they are a feature which helps developers\nmake stored procedures and functions. They provide a name space for\nfunctions local to the package, session-specific package variables, and\ninitialization routines which are run before any other routine in the\npackage. Also, all parts of a package are loaded and unloaded at once -\nyou can't have it partially installed.\n\nAll of these features make life much easier for stored-procedure\ndevelopers. The name space feature means that you can name the routines in\nyour package whatever you want, and they won't conflict with the names\neither in other packages or with functions not in a package. All you need\nto do is ensure that no other package has the same name as yours.\n\n** What did I do, and what does a package declaration look like?\n\nWhat I've done is implement Oracle packages with a Postgres flair. There\nis a new command, CREATE PACKAGE <name> AS which defines a package. 
For\nthose of you up on Oracle packages, this command duplicates the Oracle\nCREATE PACKAGE BODY command - there is no Postgres equivalent of the\nOracle CREATE PACKAGE command.\n\nPackages are listed in a new system table, pg_package, and are referenced\nin other tables by the oid of the row in pg_package.\n\nThere are seven different components which can be present in a package,\nand so a CREATE PACKAGE command contains seven stanza types. A package can\nbe made up of functions, types, operators, aggregates, package-global\nvariables, initialization routines, and functions usable for type\ndeclarations. Four of the stanzas are easy to understand; to create a\nfunction, a type, an aggregate, or an operator, you include a stanza which\nis the relevant CREATE command without the CREATE keyword. Thus the\nFUNCTION stanza creates a function, the TYPE stanza creates a type,\nAGGREGATE => an aggregate, and OPERATOR => an operator.\n\nThe initializer routines and package-global variables are done a bit\ndifferently than in Oracle, reflecting Postgres's strength at adding\nlanguages. Postgres supports six procedural languages (plpgsql, pltcl,\npltclu, plperl, plperlu, and plpython) whereas I think Oracle only\nsupports two (PL/SQL and I heard they added a java PL). The main\ndifference is that the variables and the initializer routines are language\nspecific. So you can have different variables for plpgsql than for pltcl.\nLikewise for initializers.\n\nPackage-global variables are defined as:\nDECLARE <variable name> '<variable type>' [, <next name> '<next type>' ]\nLANGUAGE 'langname'\n\nThe type is delimited by single quotes so that the postgres parser didn't\nhave to learn the syntax of each procedural language's variable types.\n\nInitializer routines are declared like normal functions, except the\nfunction name and signature (number & type of arguments and return type)\nare not given. 
The name is automatically generated (it is __packinit_\nfollowed by the language name) and the function signature should not be\ndepended on. It is to take no parameters and return an int4 for now, but\nthat should probably change whenever PG supports true procedures.\nInitializer routines are declared as:\nBODY AS 'function body' LANGUAGE 'lanname' [with <with options>]\n\nI'm attaching a sample showing a package initialization routine and global\nvariable declaration. There's a syntax error in it, which I asked about in\nanother EMail.\n\nThe last component of a package are the functions usable for type\ndeclarations. They are declared as:\nBEFORE TYPE FUNCTION <standard package function declaration>\n\nThey are useful as the normal functions in a package are declared after\nthe types are declared, so that they can use a type newly-defined in a\npackage. Which is fine, except that to define a type, you have to give an\ninput and an output function. BEFORE TYPE FUNCTIONs are used to define\nthose functions. Other than exactly when they are created in package\nloading, they are just like other functions in the package.\n\nI'm attaching an example which defines the type 'myint4' (using the\ninternal int4 routines) and proceeds to declare routines using the new\ntype.\n\n** So how do I use things in a package?\n\nYou don't have to do anything special to use a type or an operator defined\nin a package - you just use it. Getting technical, operators and types in\npackages are in the same name space as are types and operators not in\npackages. To follow along with the example I attached above, the 'myint4'\ntype is usable in the typetest package, in tables, in other packages, and\nin \"normal\" functions.\n\nFor functions and aggregates, things are a little more complicated. 
First\noff, there is a package called \"standard\" which contains all types,\naggregates, operators, and functions which aren't in a specific package.\nThis includes all of the standard Postgres routines, and anything created\nwith CREATE FUNCTION, CREATE AGGREGATE, CREATE OPERATOR, and CREATE TYPE.\n\nSecondly, parsing is always done in terms of a specified package context.\nIf we are parsing an equation in a routine inside of a package, then the\npackage context is that package. If we are just typing along in psql, then\nthe package context is \"standard\".\n\nWhen you specify a function or aggregate, you have two choices. One is to\nspecify a package, and a function in that package, like\n\"nametest.process\" to specify the \"process\" function in the \"nametest\"\npackage.\n\nThe other choice is to just give the function's name. The first place\nPostgres will look is in the package context used for parsing. If it's not\nthere (and that context wasn't \"standard\"), then it will look in\n\"standard\". So for example in the type declaration example attached, the\ntype stanza uses \"myint4in\" and \"myint4out\" as the input and output\nroutines, and finds the ones declared as part of the package.\n\nI've attached a sample showing off namespaces. It has two non-package\nroutines, and one package named \"nametest\".\n\nHere's a sample session:\n\ntesting=# select standard.process(4);\n process\n------------------\n I am in standard\n(1 row)\ntesting=# select nametest.process(4);\n process\n---------------------\n I am in the package\n(1 row)\ntesting=# select nametest.docheck();\n docheck\n---------------------\n I am in the package\n(1 row)\n\nFirst we see that the standard.process() routine says it is in the\n\"standard\" package, and that the nametest.process() routine says it is in\nthe package. Then we call the nametest.docheck() routine.\n\nIt evaluates \"process(changer(4));\" in the context of the nametest\npackage. 
We find the process() routine in the package, and use it.\n\nThe changer routine is there to test how typecasting works. It verifies\nthat Postgres would typecast the return of changer into a different\ninteger and call the process() routine in the package rather than call the\nprocess() routine in standard. This behavior matches Oracle's.\n\nThe other routines in the package show off some examples of how sql will\nparse according to the above rules.\n\nInitialization routines:\n\nThere is only one recommended way to use them: call a function written in\nthe same PL in the package. That will cause the initialization routine to\nbe run. Assuming there are no errors, the routine you call won't be\nexecuted until after the initialization routine finishes.\n\nOf course the non-recommended way is to manually call __packinit_<langname>\ndirectly. The problem with that is that you are depending on\nimplementation details which might change. Like exactly how the name is\ngenerated (which probably won't change) and the calling convention (which\nhopefully will if procedures are ever supported).\n\nPackage-global variables:\n\nJust use them. Assuming that the procedural language supports global\nvariables, they just work. Note that as with Oracle, each backend will get\nits own set of variables. No effort is made to coordinate values across\nbackends. But chances are you don't want to do that, and if you did, just\nmake a table. :-)\n\n** So what is the state of the diffs?\n\nThe diffs contain changes to last week's current (I'll cvs update before\nsending out) which add package support to the backend, plpgsql, the SPI\ninterface, initdb, and pg_dump. 
The changes also include modifying the\nsystem schema to support packages (pg_package which lists packages,\npg_packglobal which lists global variables, and adding a package identifier\nto pg_aggregate, pg_operator, pg_proc and pg_type).\n\nThe big things missing are documentation, and regression tests which\nexplicitly test packages.\n\nAlso, plpgsql is the only PL with package support. Adding package support\ndoesn't make sense for the 'C' and 'internal' languages, as you can\nmanually add \"global\" variables and initialization routines yourself. It\nalso doesn't make sense for 'sql' as sql doesn't support variables. The\nother languages need to gain package support, and I'll appreciate help\nfrom their authors. :-)\n\nSo I'd better wrap up here. Monday I'll send the diffs to the patches\nlist, and also send a message talking about more of the details of the\nchanges.\n\nWhat do folks think?\n\nTake care,\n\nBill", "msg_date": "Thu, 11 Oct 2001 13:12:32 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Package support for Postgres" }, { "msg_contents": "Bill Studenmund <wrstuden@netbsd.org> writes:\n> ... operators and types in\n> packages are in the same name space as are types and operators not in\n> packages.\n\n> For functions and aggregates, things are a little more complicated. First\n> off, there is a package called \"standard\" which contains all types,\n> aggregates, operators, and functions which aren't in a specific package.\n> This includes all of the standard Postgres routines, and anything created\n> with CREATE FUNCTION, CREATE AGGREGATE, CREATE OPERATOR, and CREATE TYPE.\n\n> Secondly, parsing is always done in terms of a specified package context.\n> If we are parsing an equation in a routine inside of a package, then the\n> package context is that package. 
If we are just typing along in psql, then\n> the package context is \"standard\".\n\n> When you specify a function or aggregate, you have two choices. One is to\n> specify a package, and a function in that package, like\n> \"nametest.process\" to specify the \"process\" function in the \"nametest\"\n> package.\n\n> The other choice is to just give the function's name. The first place\n> Postgres will look is in the package context used for parsing. If it's not\n> there (and that context wasn't \"standard\"), then it will look in\n> \"standard\".\n\nHmm. How does/will all of this interact with SQL-style schemas?\n\nThe reason I'm concerned is that if we want to retain the present\nconvention that the rowtype of a table has the same name as the table,\nI think we are going to have to make type names schema-local, just\nlike table names will be. And if type names are local to schemas\nthen so must be the functions that operate on those types, and therefore\nalso operators (which are merely syntactic sugar for functions).\n\nThis seems like it will overlap and possibly conflict with the decisions\nyou've made for packages. It also seems possible that a package *is*\na schema, if schemas are defined that way --- does a package bring\nanything more to the table?\n\nI also wonder how the fixed, single-level namespace search path you\ndescribe interacts with the SQL rules for schema search. (I don't\nactually know what those rules are offhand; haven't yet read the schema\nparts of the spec in any detail...)\n\nAlso, both operators and functions normally go through ambiguity\nresolution based on the types of their inputs. How does the existence\nof a name search path affect this --- are candidates nearer the front\nof the search path preferred? 
Offhand I'm not sure if they should get\nany preference or not.\n\nI'd like to see schemas implemented per the spec in 7.3, so we need to\ncoordinate all this stuff.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Oct 2001 14:11:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Package support for Postgres " }, { "msg_contents": "Bill Studenmund writes:\n\n> So what are packages? In Oracle, they are a feature which helps developers\n> make stored procedures and functions.\n\nI think you have restricted yourself too much to functions and procedures.\nA package could/should also be able to contain views, tables, and such.\n\n> They provide a name space for functions local to the package,\n\nNamespacing is the task of schemas. I think of packages as a bunch of\nobjects that can be addressed under a common name (think RPMs).\n\nBut it seems like some of this work could be used to implement schema\nsupport.\n\n> session-specific package variables,\n\nI think this is assuming a little too much about how a PL might operate.\nSome PLs already support this in their own language-specific way, with or\nwithout packages. Thus, I don't think packages should touch this.\nActually, I think you could easily set up session variables in the package\ninitializer function.\n\n> The last component of a package are the functions usable for type\n> declarations. They are declared as:\n> BEFORE TYPE FUNCTION <standard package function declaration>\n>\n> They are useful as the normal functions in a package are declared after\n> the types are declared, so that they can use a type newly-defined in a\n> package.\n\nI think it would make much more sense to allow the creation of objects in\nthe CREATE PACKAGE command in any order. 
PostgreSQL has not so far had a\nconcept of \"functions suitable for type declarations\" and we shouldn't add\none.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 13 Oct 2001 21:22:29 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Package support for Postgres" }, { "msg_contents": "On Sat, 13 Oct 2001, Tom Lane wrote:\n\n> Bill Studenmund <wrstuden@netbsd.org> writes:\n> > For functions and aggregates, things are a little more complicated. First\n> > off, there is a package called \"standard\" which contains all types,\n> > aggregates, operators, and functions which aren't in a specific package.\n> > This includes all of the standard Postgres routines, and anything created\n> > with CREATE FUNCTION, CREATE AGGREGATE, CREATE OPERATOR, and CREATE TYPE.\n>\n> > Secondly, parsing is always done in terms of a specified package context.\n> > If we are parsing an equation in a routine inside of a package, then the\n> > package context is that package. If we are just typing along in psql, then\n> > the package context is \"standard\".\n>\n> > When you specify a function or aggregate, you have two choices. One is to\n> > specify a package, and a function in that package, like\n> > \"nametest.process\" to specify the \"process\" function in the \"nametest\"\n> > package.\n>\n> > The other choice is to just give the function's name. The first place\n> > Postgres will look is in the package context used for parsing. If it's not\n> > there (and that context wasn't \"standard\"), then it will look in\n> > \"standard\".\n>\n> Hmm. How does/will all of this interact with SQL-style schemas?\n\nIndependent as I understand it. 
Schemas (as I understand Oracle schemas)\noperate at a level above the level where packages operate.\n\n> The reason I'm concerned is that if we want to retain the present\n> convention that the rowtype of a table has the same name as the table,\n> I think we are going to have to make type names schema-local, just\n> like table names will be. And if type names are local to schemas\n> then so must be the functions that operate on those types, and therefore\n> also operators (which are merely syntactic sugar for functions).\n>\n> This seems like it will overlap and possibly conflict with the decisions\n> you've made for packages. It also seems possible that a package *is*\n> a schema, if schemas are defined that way --- does a package bring\n> anything more to the table?\n\nI don't think it conflicts. My understanding of schemas is rather\nsimplistic and practical. As I understand it, they correspond roughly to\ndatabases in PG. So with schema support, one database can essentially\nreach into another one. Package support deals with the functions (and\ntypes and in this case aggregates and operators) that schema support would\nfind in the other schemas/databases.\n\n> I also wonder how the fixed, single-level namespace search path you\n> describe interacts with the SQL rules for schema search. (I don't\n> actually know what those rules are offhand; haven't yet read the schema\n> parts of the spec in any detail...)\n\nShould be independent. The searching only happens when you are not in the\n\"standard\" package, and you give just a function name for a function.\nThe searching would only happen in the current schema. If\nyou give a schema name, then I'd expect PG to look in that schema, in\nstandard, for that function. 
How does the existence\n> of a name search path affect this --- are candidates nearer the front\n> of the search path preferred? Offhand I'm not sure if they should get\n> any preference or not.\n\nThere is no name spacing for operators in my implimentation as to have one\nstrikes me as reducing the utility of having types and operators in a\npackage. For functions (and aggregates), I tried to touch on that in the\nlatter part of my message; that's what the example with\n\"process(changer(4))\" was about. PG will try to type coerce a function in\nthe current package before it looks in standard. So yes, candidates nearer\nthe front are prefered.\n\n> I'd like to see schemas implemented per the spec in 7.3, so we need to\n> coordinate all this stuff.\n\nSounds good. I don't think it will be that hard, though. :-)\n\nTake care,\n\nBill\n\n", "msg_date": "Sat, 13 Oct 2001 16:38:20 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: Package support for Postgres " }, { "msg_contents": "On Sat, 13 Oct 2001, Bill Studenmund wrote:\n\n> On Sat, 13 Oct 2001, Tom Lane wrote:\n>\n> > I also wonder how the fixed, single-level namespace search path you\n> > describe interacts with the SQL rules for schema search. (I don't\n> > actually know what those rules are offhand; haven't yet read the schema\n> > parts of the spec in any detail...)\n>\n> Should be independent. The searching only happens when you are not in the\n> \"standard\" package, and you give just a function name for a function.\n> The searching would only happen in the current schems. If\n> you give a schema name, then I'd expect PG to look in that schema, in\n> standard, for that function. 
If you give both a schema and package name,\n> then PG would look in that package in that schema.\n\nMy description of namespaces seems to have caused a fair bit of confusion.\nLet me try again.\n\nThe ability of the package changes to automatically check standard when\nyou give an ambiguous function name while in a package context is a\nconvenience for the procedure author. Nothing more.\n\nIt means that when you want to use one of the built in functions\n(date_part, abs, floor, sqrt etc.) you don't have to prefix it with\n\"standard.\". You can just say date_part(), abs(), floor(), sqrt(), etc.\nThe only time you need to prefix a call with \"standard.\" is if you want to\nexclude any so-named routines in your own package.\n\nI've attached a copy of a package I wrote as part of testing package\ninitializers and package global variables. It is an adaptation of the\nRandom package described in Chapter 8 of _Oracle8 PL/SQL Programming_ by\nScott Urman. Other than adapting it to PostgreSQL, I also tweaked the\nRandMax routine to give a flat probability.\n\nNote the use of date_part() in the BODY AS section, and the use of rand()\nin randmax(). Both of these uses are the ambiguous sort of function naming\nwhich can trigger the multiple searching. Since they are in plpgsql code,\nthey get parsed in the context of the random package. So when each of them\ngets parsed, parse_func first looks in the random package. For rand(), it\nwill find the rand() function and use it. But for date_part(), since there\nisn't a date_part function in the package, we use the one in standard.\n\nIf we didn't have this ability, one of the two calls would need to have\nhad an explicit package with it. There are two choices (either \"standard.\"\nwould be needed for date_part(), or \"random.\" for rand()), but I think\nboth would lead to problems. Either choice makes the syntax heavy, for\nlittle gain. 
Also, if we scatter the package name throughout the package,\nif we ever want to change it, we have more occurrences to change.\n\nDoes that make more sense?\n\nTake care,\n\nBill", "msg_date": "Sun, 14 Oct 2001 05:22:36 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: namespaces, was Package support for Postgres " }, { "msg_contents": "On Sat, 13 Oct 2001, Tom Lane wrote:\n\n> Bill Studenmund <wrstuden@netbsd.org> writes:\n> > The other choice is to just give the function's name. The first place\n> > Postgres will look is in the package context used for parsing. If it's not\n> > there (and that context wasn't \"standard\"), then it will look in\n> > \"standard\".\n>\n> Hmm. How does/will all of this interact with SQL-style schemas?\n>\n> The reason I'm concerned is that if we want to retain the present\n> convention that the rowtype of a table has the same name as the table,\n> I think we are going to have to make type names schema-local, just\n> like table names will be. And if type names are local to schemas\n> then so must be the functions that operate on those types, and therefore\n> also operators (which are merely syntactic sugar for functions).\n\nAhhh... There's the operators == sugar comment.\n\nI agree with you above; types and functions need to be schema-specific.\n\n> This seems like it will overlap and possibly conflict with the decisions\n> you've made for packages. It also seems possible that a package *is*\n> a schema, if schemas are defined that way --- does a package bring\n> anything more to the table?\n\nI'm repeating myself a little. :-)\n\nPackages aren't schemas. What they bring to the table is they facilitate\nmaking stored procedures (functions). You can have twelve different\ndevelopers working on twenty different packages, with no fear of name\nconflicts. 
The package names will have to be different, so there can be\nfunctions with the same names in different packages.\n\nThis ability isn't that important in small development projects, but is\nreally important for big ones. Think about big db applications, like\nClarify. Any project with multiple procedure authors. Without something\nlike packages, you'd need to spend a lot of effort coordinating names &\nsuch so that they didn't conflict. With packages, it's rather easy.\n\nAlso, I think PostgreSQL can challenge the commercial databases for these\napplications. But to do so, changing over to PG will need to be easy.\nHaving packages there will greatly help.\n\n> I'd like to see schemas implemented per the spec in 7.3, so we need to\n> coordinate all this stuff.\n\nFor the most part, I think packages and schemas are orthogonal. I'm taking\na cue from Oracle here. 
I'll go into\nthat more below.\n\n> > They provide a name space for functions local to the package,\n>\n> Namespacing is the task of schemas. I think of packages as a bunch of\n> objects that can be addressed under a common name (think RPMs).\n\nRegrettablely Oracle beat you to it with what \"packages\" are in terms of\nOracle, and I suspect also in the minds of many DBAs.\n\nI also think that you and Tom have something different in mind about the\nnamespacing in packages. It is purely a convenience for the package\ndeveloper; whenever you want to use a function built into the database,\nyou _don't_ have to type \"standard.\" everywhere. Think what a PITA it\nwould be to have to say \"standard.abs(\" instead of \"abs(\" in your\nfunctions! I'm sorry if my explanation went abstract quickly & making that\nunclear.\n\n> But it seems like some of this work could be used to implement schema\n> support.\n\nI think the big boost this will have to schema support is that it shows\nhow to make a far-reaching change to PostgreSQL. :-) It's an internal\nschema change and more, just as schema support will be.\n\n> > session-specific package variables,\n>\n> I think this is assuming a little too much about how a PL might operate.\n> Some PLs already support this in their own language-specific way, with or\n> without packages. Thus, I don't think packages should touch this.\n> Actually, I think you could easily set up session variables in the package\n> initializer function.\n\nI agree that some PLs might do things their own way and so package\nvariables won't be as useful. If these variables are not appropriate to a\nPL, it can ignore them.\n\nPL/pgSQL is a counter-example, though, showing that something needs to be\ndone. It is not set up to support global variables; each code block\ngenerates its own namespace, and removes it on the way out. 
Thus I can\nnot see a clean way to add package global variables to say the\ninitialization routine - this routine's exit code would need to not\ndestroy the context. That strikes me as a mess.\n\n> > The last component of a package are the functions usable for type\n> > declarations. They are declared as:\n> > BEFORE TYPE FUNCTION <standard package function declaration>\n> >\n> > They are useful as the normal functions in a package are declared after\n> > the types are declared, so that they can use a type newly-defined in a\n> > package.\n>\n> I think it would make much more sense to allow the creation of objects in\n> the CREATE PACKAGE command in any order. PostgreSQL has not so far had a\n> concept of \"functions suitable for type declarations\" and we shouldn't add\n> one.\n\nI think you misread me slightly. BEFORE TYPE FUNCTION functions are\n\"usable\" for type declarations, not \"suitable\" for them. Also, I didn't\nsay one key clause, \"in this package\". The main difference is when in the\ncreation of the package the functions are created; they get created before\nthe types, rather than after.\n\nThis concept is new to PostgreSQL because PostgreSQL has never before\nchained creations together like this.\n\nThinking about it though it would be feasible to scan the list of types in\nthe package, and see if there are references to functions declared in that\npackage, and if so to create them before the types get declared. That\nwould remove the need for BEFORE TYPE FUNCTION and also make pg_dump a\nlittle simpler.\n\nTake care,\n\nBill\n\n", "msg_date": "Sun, 14 Oct 2001 09:20:25 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: Package support for Postgres" }, { "msg_contents": "On Mon, 15 Oct 2001, Tom Lane wrote:\n\n> Bill Studenmund <wrstuden@netbsd.org> writes:\n> > For the most part, I think packages and schemas are orthogonal. I'm taking\n> > a cue from Oracle here. 
Oracle considers packages to be a schema-specific\n> > object.\n>\n> Nonetheless, it's not clear to me that we need two independent concepts.\n> Given a name search path that can go through multiple schemas, it seems\n> to me that you could get all the benefits of a package from a schema.\n>\n> I'm not necessarily averse to accepting Oracle's syntax for declaring\n> packages --- if we can make it easier for Oracle users to port to Postgres,\n> that's great. But I'm uncomfortable with the notion of implementing two\n> separate mechanisms that seem to do the exact same thing, ie, control\n> name visibility.\n\nI'm at a loss as to what to say. I think that what packages do and what\nschemas do are different - they are different kinds of namespaces. That's\nwhy they should have different mechanisms. Packages are for making it\neasier to write stored procedures for large programming projects or for\ncode reuse. Schemas, well, I need to learn more. But they strike me more\nas a tool to partition entire chunks of a database.\n\nAlso, packages have a whole concept of initialization routines and global\nvariables, which strike me as having no place alongside tables and views.\n\nTake care,\n\nBill\n\n", "msg_date": "Sun, 14 Oct 2001 10:19:02 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: schema support, was Package support for Postgres " }, { "msg_contents": "On Sun, 14 Oct 2001, Peter Eisentraut wrote:\n\n> I have been pondering a little about something I called \"package\",\n> completely independent of anything previously implemented. What I would\n> like to get out of a package is the same thing I get out of package\n> systems on operating systems, namely that I can remove all the things that\n> belong to the package with one command. Typical packages on PostgreSQL\n> could be the PgAccess admin tables or the ODBC catalog extensions.\n>\n> One might think that this could also be done with schemas. 
I'm thinking\n> using schemas for this would be analogous to installing one package per\n> directory. Now since we don't have to deal with command search paths or\n> file system mount points there might be nothing wrong with that.\n>\n> Packages typically also have post-install/uninstall code, as does this\n> proposed implementation, so that would have to be fit in somewhere.\n>\n> This is basically where my thinking has stopped... ;-)\n>\n> Now I'm also confused as to what this package system really represents:\n> Is it a namespace mechanisms -- but Oracle does have schemas; or is it a\n> package manager like I had in mind -- for that it does too many things\n> that don't belong there; or is it a mechanism to set up global variables\n> -- that already exists and doesn't need \"packages\".\n\nIt is an implementation of Oracle Packages for PostgreSQL, taking\nadvantage of some of PostgreSQL's abilities (the aggregates & operators in\na package bit is new). It is a tool to help developers create large\nprojects and/or reuse code.\n\nIt is not schema support; schema support operates on a level above package\nsupport. It is also not the package support you had in mind. That support\nis different. What you describe above is packaging which primarily helps\nthe admin, while this packaging primarily helps the procedure developer.\nThat difference in emphasis is why this package support does things an\nadministrator-focused package system wouldn't.\n\nAlso, please note that while many of PostgreSQL's procedure languages\nmight not need global variable support, PL/pgSQL does.\n\nTake care,\n\nBill\n\n", "msg_date": "Sun, 14 Oct 2001 10:59:36 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: Package support for Postgres " }, { "msg_contents": "On Mon, 15 Oct 2001, Tom Lane wrote:\n\n> Bill Studenmund <wrstuden@netbsd.org> writes:\n> > For the most part, I think packages and schemas are orthogonal. 
I'm taking\n> > a cue from Oracle here. Oracle considers packages to be a schema-specific\n> > object.\n>\n> Nonetheless, it's not clear to me that we need two independent concepts.\n> Given a name search path that can go through multiple schemas, it seems\n> to me that you could get all the benefits of a package from a schema.\n\nAbout the best response to this I can come up with is that in its present\nimplementation, types and operators are not scoped as package-specific. If\nyou declare a type in a package, that type is usable anywhere; you don't\nhave to say package.type. If we did packages via schemas, as I understand\nit, you would (and should).\n\nWe both agree that types and the functions that operate on them should be\nschema-specific. Thus operators should be schema-specific. If we did\npackages via schemas, I don't see how we would get at operators in\npackages. If you create a new integer type, would you really want to have\nto type \"3 packname.< table.attr\" to do a comparison?\n\nSo I guess that's the reason; this package implementation creates types\nand operators in the same namespace as built-in types and operators. As I\nunderstand schemas, user types (and thus operators) should exist in a\nschema-specific space.\n\nI can see reasons for both, thus I think there is a place for two\nindependent concepts.\n\nTake care,\n\nBill\n\n", "msg_date": "Sun, 14 Oct 2001 11:19:38 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: schema support, was Package support for Postgres " }, { "msg_contents": "Tom Lane writes:\n\n> This seems like it will overlap and possibly conflict with the decisions\n> you've made for packages. It also seems possible that a package *is*\n> a schema, if schemas are defined that way --- does a package bring\n> anything more to the table?\n\nI have been pondering a little about something I called \"package\",\ncompletely independent of anything previously implemented. 
What I would\nlike to get out of a package is the same thing I get out of package\nsystems on operating systems, namely that I can remove all the things that\nbelong to the package with one command. Typical packages on PostgreSQL\ncould be the PgAccess admin tables or the ODBC catalog extensions.\n\nOne might think that this could also be done with schemas. I'm thinking\nusing schemas for this would be analogous to installing one package per\ndirectory. Now since we don't have to deal with command search paths or\nfile system mount points there might be nothing wrong with that.\n\nPackages typically also have post-install/uninstall code, as does this\nproposed implementation, so that would have to be fit in somewhere.\n\nThis is basically where my thinking has stopped... ;-)\n\nNow I'm also confused as to what this package system really represents:\nIs it a namespace mechanisms -- but Oracle does have schemas; or is it a\npackage manager like I had in mind -- for that it does too many things\nthat don't belong there; or is it a mechanism to set up global variables\n-- that already exists and doesn't need \"packages\".\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sun, 14 Oct 2001 22:50:57 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Package support for Postgres " }, { "msg_contents": "On Tue, 16 Oct 2001, Peter Eisentraut wrote:\n\n> Bill Studenmund writes:\n>\n> > I disagree. Views and tables are the purview of schemas, which as I\n> > mentioned to Tom, strike me as being different from packages.\n>\n> Well, obviously schemas are a namespacing mechanism for tables and views.\n> And apparently the \"packages\" you propose are (among other things) a\n> namespacing mechanism for functions. But the fact is that schemas already\n> provide a namespacing mechanism for functions. (That's what SQL says and\n> that's how it's going to happen.) 
Now perhaps you want to have a\n> namespacing mechanism *below* schemas. But then I think this could be\n> done with nested schemas, since the sub-schemas would really be the same\n> concept as a top-level schema. That would be a much more general\n> mechanism.\n\nYes, I want a namespace below schemas.\n\nThe difference between packages and schemas is that schemas encapsulate\neverything. As Tom pointed out, that includes types (and I'd assume\noperators too). Packages do not encapsulate types and operators. That's\nwhat makes them different from a sub-schema (assuming a sub-schema is a\nschema within a schema).\n\n> Obviously there is a large number of ideas that \"make life easier\". But\n> I'm still missing a clear statement what exactly the design idea behind\n> these packages is. So far I understood namespace and global variables for\n> PL/pgSQL. For the namespace thing we've already got a different design.\n> For global variables, see below.\n\nSee above.\n\n> > I agree that some PLs might do things their own way and so package\n> > variables won't be as useful. If these variables are not appropriate to a\n> > PL, it can ignore them.\n> >\n> > PL/pgSQL is a counter-example, though, showing that something needs to be\n> > done.\n>\n> Then PL/pgSQL should be fixed. But that doesn't need a such a large\n\nWhy is PL/pgSQL broken?\n\nIt has a very clean design element; you enter a code block, you get a new\nnamespace. You can declare variables in that namespace if you want. When\nyou use a variable name, PL/pgSQL looks in the current namespace, then the\nparent, and so on. You exit a code block, the namespace goes away. That's\nhow C works, for instance.\n\n> concept as \"packages\". It could be as easy as\n>\n> DECLARE GLOBAL\n> ...\n> BEGIN\n> ...\n> END\n>\n> > It is not set up to support global variables; each code block\n> > generates its own namespace, and removes it on the way out. 
Thus I can\n> > not see a clean way to add package global variables to say the\n> > initialization routine - this routine's exit code would need to not\n> > destroy the context. That strikes me as a mess.\n>\n> The language handler should have no problem creating persistent storage --\n> I don't see that as a problem. If the language is misdesigned that it\n> cannot be done (which I doubt, but consider the theoretical case) then the\n> language should be replaced by something better, but please keep in mind\n> that it's a PL/pgSQL problem only. Maybe if you're from an Oracle\n> background this separation is not quite as natural.\n\nThe problem is not creating persistent storage; the issue is that the\nlanguage was designed to not use it. What you're proposing could be done,\nbut would effectively be shoving the change in with a hammer. Also, any\nother PLs which are based on languages with strict namespaces will have\nthe same problem.\n\nLook at C for instance. What you're describing is the equivalent to\nletting a function or procedure in C declare global variables. That's not\nhow the language works, and no one seems to mind. :-)\n\n> Right, that's why I suggested allowing the CREATE statements in any order\n> so you could order them yourself to have the function before the types or\n> whatever you want.\n\nMy concern with that is that then we have to make sure to dump it in the\nsame order you entered it. Right now, in general, pg_dump dumps objects in\nstages; all of the languages are dumped, then all of the types, then the\nfunctions, and so on. Functions needed for types and languages get dumped\nright before the type or language which needs it.\n\nIf we go with strict package order mattering, then pg_dump needs to be\nable to recreate that order. That means that it has to look in pg_proc,\npg_operator, pg_type, and pg_aggregate, sort things (in the package being\ndumped) by oid, and dump things in order of increasing oid. 
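(Roughly, that per-package pass would have to look something like this. The package-membership columns shown here are hypothetical, just to show the shape of the query; 12345 stands in for the package's oid:)

```sql
-- Hypothetical: gather every member of one package from the four
-- catalogs and emit them in creation (oid) order.
SELECT oid, 'function'  AS kind FROM pg_proc      WHERE propackage = 12345
UNION ALL
SELECT oid, 'operator'  AS kind FROM pg_operator  WHERE oprpackage = 12345
UNION ALL
SELECT oid, 'type'      AS kind FROM pg_type      WHERE typpackage = 12345
UNION ALL
SELECT oid, 'aggregate' AS kind FROM pg_aggregate WHERE aggpackage = 12345
ORDER BY oid;
```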
Nothing else\nin pg_dump works like that. I'd rather not start.\n\nI have however come up with another way to make BEFORE TYPE FUNCTION go\naway. I'll just scan the types in a package (I doubt there will be many),\nget a set of candidate names, and scan the functions in the package for\nthem. If they are found, they get added before the types do. So then the\ndecision as to when a function should get added is implicit, rather than\nexplicit.\n\nI'll see about adding this before I send in the patch (it is the only\nthing left).\n\nTake care,\n\nBill\n\n", "msg_date": "Mon, 15 Oct 2001 11:44:00 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: Package support for Postgres" }, { "msg_contents": "Bill Studenmund <wrstuden@netbsd.org> writes:\n> For the most part, I think packages and schemas are orthogonal. I'm taking\n> a cue from Oracle here. Oracle considers packages to be a schema-specific\n> object.\n\nNonetheless, it's not clear to me that we need two independent concepts.\nGiven a name search path that can go through multiple schemas, it seems\nto me that you could get all the benefits of a package from a schema.\n\nI'm not necessarily averse to accepting Oracle's syntax for declaring\npackages --- if we can make it easier for Oracle users to port to Postgres,\nthat's great. 
But I'm uncomfortable with the notion of implementing two\nseparate mechanisms that seem to do the exact same thing, ie, control\nname visibility.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Oct 2001 17:02:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: schema support, was Package support for Postgres " }, { "msg_contents": "On Sat, 13 Oct 2001, Peter Eisentraut wrote:\n\n> Bill Studenmund writes:\n>\n> > session-specific package variables,\n>\n> I think this is assuming a little too much about how a PL might operate.\n> Some PLs already support this in their own language-specific way, with or\n> without packages. Thus, I don't think packages should touch this.\n> Actually, I think you could easily set up session variables in the package\n> initializer function.\n\nCould you please give me an example of how to do this, say for plperl or\nplpython? Just showing how two functions made with CREATE FUNCTION can use\nglobal variables will be fine. This example will help me understand how\nthey work.\n\nTake care,\n\nBill\n\n", "msg_date": "Tue, 16 Oct 2001 06:20:52 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: Package support for Postgres" }, { "msg_contents": "are on their way to the patches list. Given the mail delay we've been\nseeing, they'll take a while to get there. Oh, it turns out there _is_ a\nsize limit for patches, so it'll need to get approved.\n\nThere are still a few warts in the code.\n\n1) One wart is that I needed to make an identifier for the oid for the\n\"standard\" package. The oid in question is 10, and the identifier is\nSTANDARDPackageId. I think I will change it to StandardPackageId.\n\nThe question I have is in which file should I store the define defining\nit?\n\n2) Another problem is dealing with the ambiguity between\nrelation.attribute and package.functionname. 
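(For example, with "util" being a made-up name:)

```sql
-- If util is a package, this should parse as package.function:
SELECT util.next_id();

-- But the same identifier.identifier pattern can equally well be
-- relation.attribute when util is a table in the FROM list:
SELECT util.next_id FROM util;
```

The scanner alone can't tell the two apart without looking the name up.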
The present code does it by\nchanging scan.l to recognize ${identifier}\\.${identifier}, and if the\nfirst identifier isn't a key word, look to see if it is a package (scan\npg_packages for the name). If so, the scanner returns a different token,\nPACKID, than IDENT.\n\nI'll see what I can do about moving all of this into the parser, and\ndeferring the pg_packages scan until later.\n\nI think I got rid of all of the debugging comments; please let me know if\nI didn't.\n\nTake care,\n\nBill\n\n", "msg_date": "Tue, 16 Oct 2001 08:05:37 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Package support diffs" }, { "msg_contents": "On Wed, 17 Oct 2001, Peter Eisentraut wrote:\n\n> Bill Studenmund writes:\n>\n> > Yes, I want a namespace below schemas.\n> >\n> > The difference between packages and schemas is that schemas encapsulate\n> > everything. As Tom pointed out, that includes types (and I'd assume\n> > operators too). Packages do not encapsulate types and operators.\n>\n> Of course nobody is forcing you to put types into subschemas. But the\n> user would have the freedom to spread things around as he sees fit.\n\n???\n\n> > > Then PL/pgSQL should be fixed. But that doesn't need a such a large\n> >\n> > Why is PL/pgSQL broken?\n>\n> Maybe read \"fixed\" as \"enhanced\".\n>\n> > The problem is not creating persistent storage; the issue is that the\n> > langyage was designed to not use it. What you're proposing could be done,\n> > but would effectivly be shoving the change in with a hammer. Also, any\n> > other PLs which are based on languages with strict namespaces will have\n> > the same problem.\n>\n> Other PLs have shown that storing global data in a language-typical way\n> *is* possible. I read your argumentation as \"PL/pgSQL is not designed to\n> have global variables, so I'm going to implement 'packages' as a way to\n> make some anyway\". 
Either PL/pgSQL is not designed for it, then there\n> should not be any -- at all. Or it can handle them after all, but then\n> it's the business of the language handler to deal with it.\n\nDo you really think that my employer paid me for three months to come up\nwith an 800k diff _just_ to add global variables to PL/pgSQL? While part\nof it, global variables are only one part of the work. I would actually\nsay it is a minor one.\n\nHonestly, I do not understand why \"global variables\" have been such a sore\npoint for you. PLs for which they don't make sense like this don't have to\ndo it, and Oracle, on whom our PL/pgSQL was based, thinks that they make\nperfect sense for the language we copied.\n\nAlso, remember that this is an implementation of Oracle packages for\nPostgres. One of our goals was to make it so that you can mechanically\ntransform an Oracle package into a Postgres one, and vice versa. This\nimplementation does a good job of that. To make the change you suggest\nwould not.\n\n> > My concern with that is that then we have to make sure to dump it in the\n> > same order you entered it.\n>\n> pg_dump can do dependency ordering if you ask it nicely. ;-) When we\n> implement schemas we'll have to make sure it works anyway. Thinking about\n> pg_dump when designing backend features is usually not worthwhile.\n\nThe thing is what you're talking about is more than just dependency\nordering (which I taught pg_dump to do for packages). \"doing things in the\norder you list\" to me means that things get dumped in the exact same\norder. Say you added some functions and then some operators and then some\nfunctions. If order matters, then the operators should get generated in\nthe dump before the functions, even though there's no dependency-reason to\ndo so.\n\nMaybe I'm taking that a bit more literally than you mean, but how it comes\nacross to me is unnecessarily difficult. 
We can achieve the same thing\nother ways.\n\nI did however take your point that BEFORE TYPE FUNCTION should go away;\nthe patch I sent in does not have it. In the patch, stanzas in the CREATE\nPACKAGE command are gathered, and done in sequence according to kind.\nFirst the global variables are defined, then the initialization routines,\nthen functions which are needed for types in the package, then types, then\nfunctions (other than the ones already done), aggregates, and operators.\n\nTake care,\n\nBill\n\n", "msg_date": "Tue, 16 Oct 2001 10:08:24 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: Package support for Postgres" }, { "msg_contents": "Bill Studenmund writes:\n\n> I disagree. Views and tables are the purview of schemas, which as I\n> mentioned to Tom, strike me as being different from packages.\n\nWell, obviously schemas are a namespacing mechanism for tables and views.\nAnd apparently the \"packages\" you propose are (among other things) a\nnamespacing mechanism for functions. But the fact is that schemas already\nprovide a namespacing mechanism for functions. (That's what SQL says and\nthat's how it's going to happen.) Now perhaps you want to have a\nnamespacing mechanism *below* schemas. But then I think this could be\ndone with nested schemas, since the sub-schemas would really be the same\nconcept as a top-level schema. That would be a much more general\nmechanism.\n\n> Packages basically are modules which make life easier for functions\n> (and types and aggregates and operators).\n\nObviously there is a large number of ideas that \"make life easier\". But\nI'm still missing a clear statement of what exactly the design idea behind\nthese packages is. So far I understood namespace and global variables for\nPL/pgSQL. 
For the namespace thing we've already got a different design.\nFor global variables, see below.\n\n> If we really want to make tables and views and triggers part of packages,\n> we can. My big concern is that it then makes pg_dump harder. I'll go into\n> that more below.\n\nThat has never stopped us from doing anything. ;-)\n\n> Regrettably Oracle beat you to it with what \"packages\" are in terms of\n> Oracle, and I suspect also in the minds of many DBAs.\n\nOracle appears to have beaten us to define the meaning of quite a few\nthings, but that doesn't mean we have to accept them. We don't\nre-implement Oracle here. And exactly because all Oracle has is\nprocedures and PL/SQL, whereas PostgreSQL has operators, types, and such,\nand user-defined procedural languages, designs may need to be changed or\nthrown out. It wouldn't be the first time.\n\n> I agree that some PLs might do things their own way and so package\n> variables won't be as useful. If these variables are not appropriate to a\n> PL, it can ignore them.\n>\n> PL/pgSQL is a counter-example, though, showing that something needs to be\n> done.\n\nThen PL/pgSQL should be fixed. But that doesn't need such a large\nconcept as \"packages\". It could be as easy as\n\nDECLARE GLOBAL\n ...\nBEGIN\n ...\nEND\n\n> It is not set up to support global variables; each code block\n> generates its own namespace, and removes it on the way out. Thus I can\n> not see a clean way to add package global variables to say the\n> initialization routine - this routine's exit code would need to not\n> destroy the context. That strikes me as a mess.\n\nThe language handler should have no problem creating persistent storage --\nI don't see that as a problem. If the language is misdesigned such that it\ncannot be done (which I doubt, but consider the theoretical case) then the\nlanguage should be replaced by something better, but please keep in mind\nthat it's a PL/pgSQL problem only. 
Maybe if you're from an Oracle\nbackground this separation is not quite as natural.\n\n> I think you misread me slightly. BEFORE TYPE FUNCTION functions are\n> \"usable\" for type declarations, not \"suitable\" for them. Also, I didn't\n> say one key clause, \"in this package\". The main difference is when in the\n> creation of the package the functions are created; they get created before\n> the types, rather than after.\n\nRight, that's why I suggested allowing the CREATE statements in any order\nso you could order them yourself to have the function before the types or\nwhatever you want.\n\n> This concept is new to PostgreSQL because PostgreSQL has never before\n> chained creations together like this.\n\nExternally perhaps not, but internally these things happen all the time.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 16 Oct 2001 22:14:08 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Package support for Postgres" }, { "msg_contents": "On Sun, 14 Oct 2001, Bill Studenmund wrote:\n\n> On Mon, 15 Oct 2001, Tom Lane wrote:\n>\n> > Bill Studenmund <wrstuden@netbsd.org> writes:\n> > > For the most part, I think packages and schemas are orthogonal. I'm taking\n> > > a cue from Oracle here. Oracle considers packages to be a schema-specific\n> > > object.\n> >\n> > Nonetheless, it's not clear to me that we need two independent concepts.\n> > Given a name search path that can go through multiple schemas, it seems\n> > to me that you could get all the benefits of a package from a schema.\n\nI've been thinking about this. I've changed my mind. Well, I've come to\nrealize that you can have multiple schemas in one db, so that multiple\nschema support != one db reaching into another.\n\nI still think that schemas and packages are different, but I now think\nthey are interrelated. 
And that it shouldn't be too hard to leverage the\npackage work into schema support. Still a lot of work, but the package\nwork has shown how to go from one to two in a number of ways. :-)\n\nFirst off, do you (Tom) have a spec for schema support? I think that would\ndefinitely help things.\n\nSecond, can you help me with gram.y? I'm trying to get gram.y to deal with\nfiguring out if you've typed in packagename.functionname, rather than\nrelying on the lexer to notice you've typed ${identifier}\\.${identifier}\nwhere the first identifier is a package name & send a terminal saying so.\nTwelve r/r conflicts. They involve a conflict between ColId and something\nelse, and focus on not knowing what reduction to take when seeing a '[',\n',', or ')'. Thoughts?\n\nTake care,\n\nBill\n\n", "msg_date": "Tue, 16 Oct 2001 15:09:52 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: schema support, was Package support for Postgres " }, { "msg_contents": "On Thu, 18 Oct 2001, Peter Eisentraut wrote:\n\n> Bill Studenmund writes:\n>\n> > Could you please give me an example of how to do this, say for plperl or\n> > plpython? Just showing how two functions made with CREATE FUNCTION can use\n> > global variables will be fine. 
This example will help me understand how\n> > they work.\n>\n> For PL/Tcl you use regular Tcl global variables:\n>\n> create function produce(text) returns text as '\n> global foo; set foo $1;\n> ' language pltcl;\n>\n> create function consume() returns text as '\n> global foo; return $foo;\n> ' language pltcl;\n>\n> There is also a mechanism for one procedure to save private data across\n> calls.\n>\n> For PL/Python you use a global dictionary:\n>\n> create function produce(text) returns text as '\n> GD[\"key\"] = args[0]\n> ' language plpython;\n>\n> create function consume() returns text as '\n> return GD[\"key\"]\n> ' language plpython;\n>\n> There is also a dictionary for private data.\n\nPrivate to what?\n\n> For PL/Perl I'm not sure if something has been implemented. In C you can\n> use shared memory, and for PL/sh you would use temp files of course. ;-)\n\nThank you. I can now experiment with them to see how they do.\n\nI've never thought of adding package variables for C routines; there are\nother options open. :-)\n\nOh, by shared memory, do you mean SYSV Shared Memory (like how the\nbackends talk) or just memory shared between routines? I ask as part of\nthe idea with these variables is that they are backend-specific. So C\nroutines actually should NOT use SYSV Shared Mem. :-)\n\nTake care,\n\nBill\n\n", "msg_date": "Wed, 17 Oct 2001 06:48:05 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: Package support for Postgres" }, { "msg_contents": "Bill Studenmund writes:\n\n> Yes, I want a namespace below schemas.\n>\n> The difference between packages and schemas is that schemas encapsulate\n> everything. As Tom pointed out, that includes types (and I'd assume\n> operators too). Packages do not encapsulate types and operators.\n\nOf course nobody is forcing you to put types into subschemas. 
But the\nuser would have the freedom to spread things around as he sees fit.\n\n> > > I agree that some PLs might do things their own way and so package\n> > > variables won't be as useful. If these variables are not appropriate to a\n> > > PL, it can ignore them.\n> > >\n> > > PL/pgSQL is a counter-example, though, showing that something needs to be\n> > > done.\n> >\n> > Then PL/pgSQL should be fixed. But that doesn't need a such a large\n>\n> Why is PL/pgSQL broken?\n\nMaybe read \"fixed\" as \"enhanced\".\n\n> The problem is not creating persistent storage; the issue is that the\n> langyage was designed to not use it. What you're proposing could be done,\n> but would effectivly be shoving the change in with a hammer. Also, any\n> other PLs which are based on languages with strict namespaces will have\n> the same problem.\n\nOther PLs have shown that storing global data in a language-typical way\n*is* possible. I read your argumentation as \"PL/pgSQL is not designed to\nhave global variables, so I'm going to implement 'packages' as a way to\nmake some anyway\". Either PL/pgSQL is not designed for it, then there\nshould not be any -- at all. Or it can handle them after all, but then\nit's the business of the language handler to deal with it.\n\n> Look at C for instance. What you're describing is the equivalent to\n> letting a function or procedure in C declare global variables. That's not\n> now the language works, and no one seems to mind. :-)\n\nWhat you're describing is the equivalent of declaring global variables in\nC but inventing a whole new mechanism in the operating system for it\ncalled \"package\" (which also happens to be a namespace mechanism as a\nsecond job). Now, there are ways to exchange data between separate\nprograms or modules in C, such as message queues or shared memory. (If\nyou think about it, what PL/Python is doing with its global dictionary is\njust the same as shared memory.) 
This works fine, but it doesn't affect\nPostgreSQL proper in any way.\n\nSo, do something with PL/pgSQL. Implement a global dictionary, or shared\nmemory, or some other way to share data between functions. Call it\n\"package\" if you like. (Java has packages which are somewhat like that.)\n\n> My concern with that is that then we have to make sure to dump it in the\n> same order you entered it.\n\npg_dump can do dependency ordering if you ask it nicely. ;-) When we\nimplement schemas we'll have to make sure it works anyway. Thinking about\npg_dump when designing backend features is usually not worthwhile.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 17 Oct 2001 22:37:06 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Package support for Postgres" }, { "msg_contents": "On Thu, 18 Oct 2001, Peter Eisentraut wrote:\n\n> Bill Studenmund writes:\n>\n> > Honestly, I do not understand why \"global variables\" have been such a sore\n> > point for you.\n>\n> My point is that the proposed \"package support\" introduces two features\n> that are a) independent, and b) already exist, at least in design.\n> Schemas are already planned as a namespace mechanism. Global variables in\n> PLs already exist in some PLs. Others can add it if they like. There\n> aren't any other features introduced by \"package support\" that I can see\n> or that you have explicitly pointed out.\n\nThen my explanations didn't click. Please let me try again.\n\nThe main feature of package support is that it greatly facilitates\ndeveloping large, complicated db applications. Like ones which require\nmultiple full-time developers to develop. I think PostgreSQL has the\ninternals to run these apps, and it should provide a development\nenvironment to encourage them.\n\nThat's what packages are about.\n\nI have never developed an application that large. 
But I have talked to our\nDBAs who have worked with such things in Oracle, and a few who have worked\non (developed) such large applications. They all have agreed that\nsomething akin to packages is needed to make it work.\n\nThe separate namespaces (for function names and for variables) mean that\ndifferent programmers don't have to coordinate the names of functions. Or\nthat the names have to have some disambiguating prefix to make them\ndifferent. All that has to happen is that different packages have\ndifferent names. When you throw in the idea of developers releasing\nlibraries (packages) on the net, the minimality of coordination is even\nmore important.\n\nThe fact (for PostgreSQL, i.e. this implementation) that types and\noperators aren't namespaced off means that they effectively leak into the\nenclosing database (or schema when we have them) so that making and\nsupporting new types can be the aim/result of the package.\n\nFor comparison with other languages, packages strike me as comparable to\nlibraries (in C) or modules (say in Perl or Python). Neither libraries nor\nmodules really do anything that can't be achieved otherwise in the\nlanguage. Yet they are a preferred method of developing code, especially\nreused code. When you're making a program/application, you don't need to\nconcern yourself with (many) details about the code; you use the module\nand that's it. Likewise here, an application developer/integrator need\nonly load a module, and then all the routines in it are available. You\ndon't for instance have to worry if the routines have names which overlap\nones you were using, or, worse yet, ones used by another set of routines you\nwant to use.\n\nI think Jean-Michel's comments were right. While I'm not sure if things\nwill be as overwhelming as he predicted, packages (even as implemented in\nmy patch) will help people develop code libraries for PostgreSQL. 
And that\nwill make PostgreSQL applications easier.\n\nAlso, as I've come to understand what schemas are and aren't, I've\nrealized that the package work can be readily leveraged to help with\nschema support.\n\nSchemas, at least according to the SQL92 spec I have looked at (I'd love\nto see a later spec), are namespaces only for tables and views (and\ncharacter sets and a number of other things which PostgreSQL doesn't\nsupport). They don't touch on functions. Sure, PostgreSQL could decide to\ndo something with functions, but if we do, we're improvising, and I see no\nreason to improvise differently than other DBMSs have done. There may be\none, but I don't see it.\n\nAlso, as I understand schemas (which could be wrong), there is a\ndifference in emphasis between schemas and packages. Schemas are a way to\npartition your database, so that different parts of an application see\nonly a subsection of the whole database. You can have some parts only able\nto access one or another schema, while other parts can access multiple\nschemas. Packages however are designed to help you build the tools to make\nthe applications work (providing toolchests of code for instance). It's\nlike schemas are a more top-down design element, and packages are\nbottom-up.\n\nWhere I see the interaction is that if we want different schemas to have\nschema-specific functions, we can just have a package implicitly associated\nwith each schema which contains the traditional functions and aggregates\n(and types and operators) of that schema.\n\n> So the two questions I ask myself are:\n>\n> 1. Are package namespaces \"better\" than schemas? The answer to that is\n> no, because schemas are more standard and more general.\n\nSee above; I never said packages were better than schemas (nor worse). I\nsaid they were different parts of the puzzle. I think they are both\nimportant and valuable.\n\n> 2. 
Are global variables via packages \"better\" than the existing setups?\n> My answer to that is again no, because the existing setups respect\n> language conventions, maintain the separation of the backend and the\n> language handlers, and of course they are already there and used.\n\nAll package variables are, to the backend, entries in a table,\npg_packglobal, provided for the convenience of the language handler. If\nthe handler doesn't want to do anything with them, then it doesn't and\nthat's no loss.\n\n> So as a consequence we have to ask ourselves,\n>\n> 3. Do \"packages\" add anything more to the table than those two elementary\n> features? Please educate us.\n\nSee above.\n\n> 4. Would it make sense to provide \"packages\" alongside the existing\n> mechanisms that accomplish approximately the same thing. That could be\n> debated, in case we agree that they are approximately the same thing.\n\nI don't agree that they are approximately the same thing, though I agree\nthat many of the things packages do can be cobbled together (more\npainfully) without them.\n\nHopefully these explanations will come across more clearly.\n\nTake care,\n\nBill\n\n", "msg_date": "Wed, 17 Oct 2001 18:59:24 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: Package support for Postgres" }, { "msg_contents": "On 19 Oct 2001, Gunnar Rønning wrote:\n\n> * Bill Studenmund <wrstuden@netbsd.org> wrote:\n> |\n> | Packages aren't schemas. What they bring to the table is they facilitate\n> | making stored procedures (functions). You can have twelve different\n> | developers working on twenty different packages, with no fear of name\n> | conflicts. The package names will have to be different, so there can be\n> | functions with the same names in different packages.\n>\n> Hmm. But if we had schema support can't we just package those procedures\n> into a schema with a given name? 
Maybe my stored procedures need some other\n> resources as well that should not conflict with other packages, like temp\n> tables or such. It then seems to me that using schemas can solve everything\n> that packages do and more?\n\nAssuming that schema support covers functions (which Tom, I, evidently\nyou, and Oracle think it should but which isn't mentioned at least in\nSQL92), you could do that. And if you're adding tables, you probably\nshould.\n\nBut a lot of times you don't need to go to the effort of namespacing off a\nwhole new schema, and I can think of some cool things to do when you\ndon't.\n\nOne example is a large, complicated db app with multiple programmers. For\neach general area of the app, you can create a package. That way you\nmodularize the code into more manageable pieces. But since they are all in\nthe same schema, they can maintain/interact with the same tables.\n\nSo that's an argument for packages/subschemas.\n\n> | For the most part, I think packages and schemas are orthogonal. I'm taking\n> | a cue from Oracle here. Oracle considers packages to be a schema-specific\n> | object.\n>\n> What is really the difference, functionality-wise, of making a subschema and\n> package? In both cases you deal with the namespace issues.\n\nA matter of what is subspaced. I'd assume that a subschema namespaces off\neverything a schema does. A package however only namespaces off functions\nand aggregates. Packages, at least as I've implemented them, do *not*\nnamespace off types or operators they contain.\n\nTechnically, the package oid is a key in the name index for pg_proc and\npg_aggregate, while it is not for pg_type and pg_operator.\n\nI admit, I took a minor liberty here. Oracle packages do have types, but\nOracle types are not as rich as PostgreSQL's. So when I was translating\npackages, I made the types in them match PostgreSQL's. Also, since I'd\nadded aggregates and types, adding operators seemed like a reasonable\nthing. 
Both from the point of view of the parser (they are all done about\nthe same way), and from the point of utility. PostgreSQL's ability to add\ntypes is really cool, and the ability to add operators makes new types\nconvenient to use. If packages could add types and support functions but\nnot operators, that'd seem lame.\n\nThe reason that packages don't namespace off types and operators is that I\nthink it makes them more useful. Think about the complex number example in\nthe programmer's guide. I can think of scientific applications which could\nuse them. But having to say package.complex for the type would be\ncumbersome. And even worse, having to say package.+ or package.- would be\nbad. And package.* might be ambiguous to the parser!\n\nSo that's why I made packages not be subschemas. Packages were designed to\nhelp with writing stored procedures, and to do it well. :-)\n\nTake care,\n\nBill\n\n", "msg_date": "Thu, 18 Oct 2001 09:28:13 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: schema support, was Package support for Postgres" }, { "msg_contents": "On Fri, 19 Oct 2001, Tom Lane wrote:\n\n> Yeah. I am wondering whether we couldn't support Oracle-style packages\n> as a thin layer of syntactic sugar on top of schemas. I am concerned\n> about the prospect that \"foo.bar\" might mean either \"object bar in\n> schema foo\" or \"object bar in package foo\".\n\nSee my note to Gunnar for why I think packages should be inside of schemas\nrather than renamed schemas. Types and especially operators would be much\nmore useful to the enclosing schema that way (I think).\n\nYes, there is an ambiguity between schema foo and package foo. I can think\nof a few ways to deal with this.\n\n1) Do whatever Oracle does, assuming it's not grotesque. Yes, I've said\nthat a lot. But I think PostgreSQL can really take some applications away\nfrom the commercial DBMSs, and Oracle is #1 in that market. 
So Oracle\nrepresents Prior Art of least surprise. :-)\n\n2) If there is both a schema named foo and a package named foo, then\nfoo.bar should always take foo to be the schema. If we let a package in\nthe local schema named foo be found before the schema foo, then we would\nget different results in said schema and another one (which didn't have a\npackage named foo in it).\n\n3) Don't let schemas and packages have the same name. I actually believe\nthis is what Oracle does, though I haven't checked. I _have_ checked that\npackages and tables can't have the same name, and built that into the\npackages patches. I think requiring schemas to have names different from\ntables and packages is a good thing, and would reduce ambiguity.\n\nAs an aside the reason I suspect this is what Oracle does is that Oracle\nhas a system table which contains a list of named objects. Tables and\npackages show up as entries in this table, and I'd expect schemas would\ntoo.\n\nTake care,\n\nBill\n\n", "msg_date": "Thu, 18 Oct 2001 09:56:22 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: schema support, was Package support for Postgres " }, { "msg_contents": "On 19 Oct 2001, Gunnar [iso-8859-1] R�nning wrote:\n\n> * Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> |\n> | Yeah. I am wondering whether we couldn't support Oracle-style packages\n> | as a thin layer of syntactic sugar on top of schemas. I am concerned\n> | about the prospect that \"foo.bar\" might mean either \"object bar in\n> | schema foo\" or \"object bar in package foo\".\n>\n> Agreed, and in Sybase you may declare a procedure in a schema(or\n> database which is the Sybase term). If you want it global you declare it\n> in the \"master\" schema.\n\nOh cool. I knew that Oracle used \"standard\" for the name of the built-in\npackage, but I didn't know a name for the built-in schema. 
\"master\" sounds\ngood.\n\nTake care,\n\nBill\n\n", "msg_date": "Thu, 18 Oct 2001 09:57:28 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: schema support, was Package support for Postgres" }, { "msg_contents": "Bill Studenmund writes:\n\n> Could you please give me an example of how to do this, say for plperl or\n> plpython? Just showing how two functions made with CREATE FUNCTION can use\n> global variables will be fine. This example will help me understand how\n> they work.\n\nFor PL/Tcl you use regular Tcl global variables:\n\ncreate function produce(text) returns text as '\n global foo; set foo $1;\n' language pltcl;\n\ncreate function consume() returns text as '\n global foo; return $foo;\n' language pltcl;\n\nThere is also a mechanism for one procedure to save private data across\ncalls.\n\nFor PL/Python you use a global dictionary:\n\ncreate function produce(text) returns text as '\n GD[\"key\"] = args[0]\n' language plpython;\n\ncreate function consume() returns text as '\n return GD[\"key\"]\n' language plpython;\n\nThere is also a dictionary for private data.\n\nFor PL/Perl I'm not sure if something has been implemented. In C you can\nuse shared memory, and for PL/sh you would use temp files of course. ;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 18 Oct 2001 19:42:08 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Package support for Postgres" }, { "msg_contents": "On Tue, 16 Oct 2001, Bill Studenmund wrote:\n\n> I still think that schemas and packages are different, but I now think\n> they are interrelated. And that it shouldn't be too hard to leverage the\n> package work into schema support. Still a lot of work, but the package\n> work has shown how to go from one to two in a number of ways. :-)\n>\n> First off, do you (Tom) have a spec for schema support? 
I think that would\n> definitely help things.\n\nI found an on-line copy of the SQL92 spec, and I've been looking at it.\n\nI think it wouldn't be _that_ much more work to add schema support to what\nI've done for packages. Not trivial, but certainly not double the work.\n\nBut I have some questions.\n\nThe big one for now is: how should you log into one schema or another?\npsql database.schema ?\n\nHere's a plan for schema support. But first let me review what packages\nhave.\n\nRight now (in my implementation), packages have added a \"standard\" package\n(oid 10) which contains all of the built-in procedures, aggregates, types,\nand operators. Whenever you use the normal CREATE commands, you add a\nprocedure, aggregate, operator, or type in the \"standard\" package.\n\nThere is a new table, pg_package, which lists the name of each installed\npackage and its owner. \"standard\" is owned by PGUID. Packages are\nreferenced by the oid of the row describing the package in this table.\n\nWhenever you look up a function or aggregate, you give the oid of the\npackage to look in, in addition to the name (and types). Having the package\nid in the index provides the namespacing.\n\nWhenever you look up a type or operator, you don't have to give a package\nid.\n\nWhenever you call the parser to parse a command, you pass it the package\ncontext (oid) in which the parsing takes place. If you are typing in\ncommands in psql, that package id is 10, or \"standard\". Likewise for sql\nor plpgsql routines not in a package. If you are in an sql or plpgsql\nroutine which is in a package, the package's oid is passed in. That's what\nmakes package routines look in the package first.\n\nThe parser also notes if you gave a package id or not (package.foo vs\nfoo). 
If you were in a package context and were not exact (foo in a\nprocedure in a package for instance), then all of the places which look up\nfunctions will try \"standard\" if they don't find a match.\n\nThere is a table, pg_packglobal, which contains package globals for the\ndifferent PLs. It contains 5 columns. The first three are the package oid,\nthe language oid, and a sequence number. They are indexed. The two others\nare variable name and variable type (of PostgreSQL type name and text\nrespectively). PLs for which these variables don't make sense are free to\nignore them.\n\nExtending this for schema support:\n\nExecutive summary: all of the above becomes the infrastructure to let\ndifferent schemas have schema-private functions and aggregates.\n\nWe add a new table, pg_schema, which lists the schemas in this database.\nIt would contain a name column, an owner column, something to indicate\ncharacter set (?), and other stuff I don't know of. Schemas are referenced\ninternally by the oid of the entry in this table.\n\nThere is a built-in schema, \"master\". It will have a fixed oid, probably 9\nor 11.\n\nThe \"master\" schema will own the \"standard\" package oid 10, which contains\nall of the built-in functions, and ones added by create function/etc.\n\nEach new schema starts life with a \"standard\" package of its own. This\npackage is the one which holds functions & aggregates made with normal\ncommands (create function, create aggregate) when you're logged into that\nschema.\n\npg_package grows two more columns. One references the schema containing\nthe package. The other contains the oid of the \"parent\" package. The idea\nis that this oid is the next oid to look in when you are doing an inexact\nsearch. It's vaguely like \"..\" on a file system.\n\nFor master.standard, this column is 0, indicating no further searching.\nFor, say, foo.standard (foo is a schema), it would be the oid of\nmaster.standard (10). 
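To make the parent-package chain concrete, here is a toy Python model of the lookup described above. All of the oids, package names, and function names are invented for illustration; this is an editorial sketch, not PostgreSQL source code.

```python
# Toy model of the pg_package "parent" chain: each package row carries
# the oid of the next package to consult, like ".." on a file system.

MASTER_STANDARD = 10  # fixed oid of master.standard, per the plan above

# pg_package-like rows: oid -> (package name, parent package oid)
pg_package = {
    MASTER_STANDARD: ("standard", 0),   # 0 = no further searching
    20: ("standard", MASTER_STANDARD),  # foo.standard (foo is a schema)
    21: ("bup", 20),                    # hypothetical package bup in schema foo
}

# (package oid, function name) -> implementation, like the name index
functions = {
    (MASTER_STANDARD, "sqrt"): "built-in sqrt",
    (21, "frobnicate"): "bup's frobnicate",
}

def lookup(func_name, context_oid):
    """Inexact (unqualified) lookup: walk the parent chain until found."""
    oid = context_oid
    while oid != 0:
        if (oid, func_name) in functions:
            return functions[(oid, func_name)]
        oid = pg_package[oid][1]  # follow the ".." pointer
    return None

# An unqualified call inside foo.bup sees the package's own routine
# first, and falls back through foo.standard to the built-ins:
assert lookup("frobnicate", 21) == "bup's frobnicate"
assert lookup("sqrt", 21) == "built-in sqrt"
assert lookup("no_such_func", 21) is None
```

The walk terminates at master.standard because its parent column is 0, which matches the "no further searching" rule above.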
Likewise for a package baz in the master schema, it\nwould be master.standard. For a package in a schema, it would be the oid\nof the \"standard\" package of the schema. As an example, say the foo schema\nhad a package named bup. For foo.bup, this column would have the oid of\nfoo.standard.\n\nRight now I'm in the process of redoing the parser changes I made so that\nthe scanner doesn't need to recognize package names. When this is done,\nthe parser will be able to deal with schema.function and package.function.\nOh, also schema.table.attr too. schema.package.function won't be hard, but\nit will be messy.\n\nThe only other part (which is no small one) is to add namespacing to the\nrest of the backend. I expect that will mean adding a schema column to\npg_class, pg_type, and pg_operator.\n\nHmmm... We probably also need a command to create operator classes, and\nthe tables it touches would need a schema column too, and accesses will\nneed to be schema-savvy.\n\nWell, that's a lot for now. Thoughts?\n\nTake care,\n\nBill\n\n\n\n\n", "msg_date": "Thu, 18 Oct 2001 12:38:29 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: schema support, was Package support for Postgres " }, { "msg_contents": "Bill Studenmund writes:\n\n> Honestly, I do not understand why \"global variables\" have been such a sore\n> point for you.\n\nMy point is that the proposed \"package support\" introduces two features\nthat are a) independent, and b) already exist, at least in design.\nSchemas are already planned as a namespace mechanism. Global variables in\nPLs already exist in some PLs. Others can add it if they like. There\naren't any other features introduced by \"package support\" that I can see\nor that you have explicitly pointed out.\n\nSo the two questions I ask myself are:\n\n1. Are package namespaces \"better\" than schemas? The answer to that is\nno, because schemas are more standard and more general.\n\n2. 
Are global variables via packages \"better\" than the existing setups?\nMy answer to that is again no, because the existing setups respect\nlanguage conventions, maintain the separation of the backend and the\nlanguage handlers, and of course they are already there and used.\n\nSo as a consequence we have to ask ourselves,\n\n3. Do \"packages\" add anything more to the table than those two elementary\nfeatures? Please educate us.\n\n4. Would it make sense to provide \"packages\" alongside the existing\nmechanisms that accomplish approximately the same thing. That could be\ndebated, in case we agree that they are approximately the same thing.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 18 Oct 2001 23:04:22 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Package support for Postgres" }, { "msg_contents": "* Bill Studenmund <wrstuden@netbsd.org> wrote:\n|\n| Packages aren't schemas. What they bring to the table is they facilitate\n| making stored procedures (functions). You can have twelve different\n| developers working on twenty different packages, with no fear of name\n| conflicts. The package names will have to be different, so there can be\n| functions with the same names in different packages.\n\nHmm. But if we had schema support can't we just package those procedures\ninto a schema with a given name? Maybe my stored procedures need some other\nresources as well that should not conflict with other packages, like temp\ntables or such. It then seems to me that using schemas can solve everything\nthat packages do and more?\n\n| For the most part, I think packages and schemas are orthogonal. I'm taking\n| a cue from Oracle here. Oracle considers packages to be a schema-specific\n| object.\n\nWhat is really the difference, functionality-wise, of making a subschema and\na package? 
In both cases you deal with the namespace issues.\n\n-- \nGunnar Rønning - gunnar@polygnosis.com\nSenior Consultant, Polygnosis AS, http://www.polygnosis.com/\n", "msg_date": "19 Oct 2001 14:35:44 +0200", "msg_from": "Gunnar =?iso-8859-1?q?R=F8nning?= <gunnar@polygnosis.com>", "msg_from_op": false, "msg_subject": "Re: schema support, was Package support for Postgres" }, { "msg_contents": "Gunnar Rønning <gunnar@polygnosis.com> writes:\n> Hmm. But if we had schema support can't we just package those procedures\n> into a schema with a given name? Maybe my stored procedures need some other\n> resources as well that should not conflict with other packages, like temp\n> tables or such. It then seems to me that using schemas can solve everything\n> that packages do and more?\n\nYeah. I am wondering whether we couldn't support Oracle-style packages\nas a thin layer of syntactic sugar on top of schemas. I am concerned\nabout the prospect that \"foo.bar\" might mean either \"object bar in\nschema foo\" or \"object bar in package foo\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Oct 2001 10:47:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: schema support, was Package support for Postgres " }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> wrote:\n|\n| > resources as well that should not conflict with other packages, like temp\n| > tables or such. It then seems to me that using schemas can solve everything\n| > that packages do and more?\n| \n| Yeah. I am wondering whether we couldn't support Oracle-style packages\n| as a thin layer of syntactic sugar on top of schemas. I am concerned\n| about the prospect that \"foo.bar\" might mean either \"object bar in\n| schema foo\" or \"object bar in package foo\".\n\nAgreed, and in Sybase you may declare a procedure in a schema (or\ndatabase, which is the Sybase term). If you want it global you declare it\nin the \"master\" schema. 
\n\n-- \nGunnar Rønning - gunnar@polygnosis.com\nSenior Consultant, Polygnosis AS, http://www.polygnosis.com/\n", "msg_date": "19 Oct 2001 16:54:18 +0200", "msg_from": "Gunnar =?iso-8859-1?q?R=F8nning?= <gunnar@polygnosis.com>", "msg_from_op": false, "msg_subject": "Re: schema support, was Package support for Postgres" }, { "msg_contents": "On Sat, 20 Oct 2001, Peter Eisentraut wrote:\n\n> Yes, you're right. Actually, sharing data across PostgreSQL C functions\n> is trivial because you can just use global variables in your dlopen\n> modules.\n\nExactly. That's why I never envisioned \"C\" or \"internal\" functions using\npackage global variables. :-)\n\nTake care,\n\nBill\n\n", "msg_date": "Fri, 19 Oct 2001 10:59:10 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: Package support for Postgres" }, { "msg_contents": "On Sat, 20 Oct 2001, Serguei Mokhov wrote:\n\n> > It means that when you want to use one of the built in functions\n> > (date_part, abs, floor, sqrt etc.) you don't have to prefix it with\n> > \"standard.\". You can just say date_part(), abs(), floor(), sqrt(), etc.\n> > The only time you need to prefix a call with \"standard.\" is if you want to\n> > exclude any so-named routines in your own package.\n>\n> Quick question: would it be possible then to create a 'system' package\n> and 'system' (or 'master' if you will) schema (when it's implemented),\n> move over all the system tables (pg_*) into the master schema\n> and functions into the 'system' package, so that no name conflicts will arise\n> when creating types, functions, tables, etc. with the same names as system ones?\n\nYes. That is part of my plan actually. 
:-)\n\nIn the patch I sent in last week, all of the built-in functions and\naggregates are in the \"standard\" package, and you can in fact reference\nthem as standard.foo.\n\nMoving types, operators, and relations (and whatever else should go there)\ninto \"master\" was part of my plan for schemas.\n\nTake care,\n\nBill\n\n", "msg_date": "Fri, 19 Oct 2001 11:04:28 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: namespaces" }, { "msg_contents": "On Sat, 20 Oct 2001, Rod Taylor wrote:\n\n> But what if you want a C function to set a variable which can be\n> accessed using an SQL, perl, PLpgSQL or other function type?\n> Shouldn't a global variable be global between all types of functions?\n\nNo. Doing that requires that all languages have the same internal storage\nof variables. And it's more than just the fact that an int4 takes up 4\nbytes. Look in the plpgsql source, at struct PLpgSQL_var. There is a fair\namount of info about a variable.\n\nWhile we could harmonize the info storage, making globals global across\nall languages would also mean breaking down a lot of the isolation\nbetween PLs. Right now they are their own independent entities. To tie\nthem together like this would, in my opinion, make them\nfragilely interconnected.\n\nMy suggestion is to just add a get and a set routine in one language, and\nhave it store the global. 
:-)\n\nTake care,\n\nBill\n\n", "msg_date": "Fri, 19 Oct 2001 18:40:04 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: Package support for Postgres" }, { "msg_contents": "Bill Studenmund writes:\n\n> > create function produce(text) returns text as '\n> > GD[\"key\"] = args[0]\n> > ' language plpython;\n> >\n> > create function consume() returns text as '\n> > return GD[\"key\"]\n> > ' language plpython;\n> >\n> > There is also a dictionary for private data.\n>\n> Private to what?\n\nPrivate to the procedure, but saved across calls (during one session).\n\n> Oh, by shared memory, do you mean SYSV Shared Memory (like how the\n> backends talk) or just memory shared between routines? I ask because part of\n> the idea with these variables is that they are backend-specific. So C\n> routines actually should NOT use SYSV Shared Mem. :-)\n\nYes, you're right. Actually, sharing data across PostgreSQL C functions\nis trivial because you can just use global variables in your dlopen\nmodules.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 20 Oct 2001 11:27:34 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Package support for Postgres" }, { "msg_contents": "\n>I think Jean-Michel's comments were right. While I'm not sure if things\n>will be as overwhelming as he predicted, packages (even as implemented in\n>my patch) will help people develop code libraries for PostgreSQL. And that\n>will make PostgreSQL applications easier.\n\nPostgreSQL is a fantastic tool which lacks a few features to become #1. \nIMHO, these features are:\n > Beginners: ability to drop and reorganize columns. I know this sounds \nstupid for hackers, but this is the #1 need when migrating from beginner tools \nsuch as MySQL or Access. Candidates?\n > Advanced users: PACKAGE support to create and distribute software \nlibraries. 
CREATE OR REPLACE VIEW, CREATE OR REPLACE TRIGGER, etc... \nPL/pgSQL installation by default with infinite loop protection.\n > Professional users: PostgreSQL does not lack many things. Maybe \nserver-side Java would be great in terms of an object/inheritance approach. I \nrun several databases, one being hosted on a double Pentium Linux box with \nU2W discs. When using triggers, views, rules and PL/pgSQL, applications can \nbe optimized so much that you \"hardly\" reach the hardware limits.\n > Power users: load balancing, replication, tablespaces. I can't really say.\n\nI first discovered PostgreSQL when localizing Oracle8i to French. We asked \nOracle if I could use their software to help us during the translation \nprocess. They answered \"OK, but you have to pay $xx.xxx because you have a \ndouble processor box\". This was about twice the price we were getting paid. \nThat day, I understood Oracle did not care about its users and was only \ninterested in fast, short-term profit.\n\nCheers,\nJean-Michel\n", "msg_date": "Sat, 20 Oct 2001 12:36:00 +0200", "msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>", "msg_from_op": false, "msg_subject": "Re: Package support for Postgres" }, { "msg_contents": "----- Original Message ----- \nFrom: Bill Studenmund <wrstuden@netbsd.org>\nSent: Sunday, October 14, 2001 8:22 AM\n\n> My description of namespaces seems to have caused a fair bit of confusion.\n> Let me try again.\n> \n> The ability of the package changes to automatically check standard when\n> you give an ambiguous function name while in a package context is a\n> convenience for the procedure author. Nothing more.\n> \n> It means that when you want to use one of the built in functions\n> (date_part, abs, floor, sqrt etc.) you don't have to prefix it with\n> \"standard.\". 
You can just say date_part(), abs(), floor(), sqrt(), etc.\n> The only time you need to prefix a call with \"standard.\" is if you want to\n> exclude any so-named routines in your own package.\n\nQuick question: would it be possible then to create a 'system' package\nand 'system' (or 'master' if you will) schema (when it's implemented),\nmove over all the system tables (pg_*) into the master schema\nand functions into the 'system' package, so that no name conflicts will arise\nwhen creating types, functions, tables, etc. with the same names as system ones?\n\n--\nS.\n\n", "msg_date": "Sat, 20 Oct 2001 12:29:36 -0400", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: namespaces" }, { "msg_contents": "On Sun, 21 Oct 2001, Serguei Mokhov wrote:\n\n> ----- Original Message -----\n> From: Bill Studenmund <wrstuden@netbsd.org>\n> Sent: Friday, October 19, 2001 2:04 PM\n>\n> > > Quick question: would it be possible then to create a 'system' package\n> > > and 'system' (or 'master' if you will) schema (when it's implemented),\n> > > move over all the system tables (pg_*) into the master schema\n> > > and functions into the 'system' package, so that no name conflicts will arise\n> > > when creating types, functions, tables, etc. with the same names as system ones?\n> >\n> > Yes. That is part of my plan actually. :-)
You know, it never hurts to compress things:\n> then the patch would be ~10 times less in size, and you wouldn't\n> have to worry about PINE messing up with your code in the message body... :)\n> And that would reduce the bounce rate too.\n>\n> Just a kind and gentle cry to reduce the size of patches sent to\n> my mailbox and save some bandwidth on the way :)\n\nOk. :-) Next time I will either compress it or I'll mail in a URL.\n\n> > all of the built-in functions and\n> > aggregates are in the \"standard\" package, and you can in fact reference\n> > them as standard.foo.\n>\n> When you refer to it as just foo(), and you have foo() defined\n> in more than one package, how do you resolve this? Do you also have\n> a notion of a global package and sub-packages?\n\nThere is a very simple search path system. If you are in a package (in a\nfunction that is part of a package), you look for foo in that package. If\nyou don't find it there, you look in standard. If it's not there, you don't\nfind it. To look in other packages than the one you're in, you have to say\nwhich one it is. With schemas, if your package is not in \"master\" or\nwhatever it is called, you look first in your package, then in\nyour_schema.standard, then in master.standard.\n\nTake care,\n\nBill\n\n", "msg_date": "Sat, 20 Oct 2001 10:45:50 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: namespaces" }, { "msg_contents": "On Sun, 21 Oct 2001, Peter Eisentraut wrote:\n\n> Bill Studenmund writes:\n>\n> > The big one for now is how should you log into one schema or another?\n> > psql database.schema ?\n>\n> Each user has a default schema, which is by default the schema with the\n> same name as the user name, or if no such schema exists, it's the DEFAULT\n> schema (which I believe is what Oracle calls it). Then there should be\n> something like set schema path. I don't think schemas should be a\n> connection parameter. 
-- That would be my ideas anyway.\n\nI can see advantages for both; if you just connect to a database that has\nschemas, you get a schema with your name if it's there, and a default\notherwise. But I can see definite advantages to being able to specify.\n\n> > Whenever you look up a function or aggregate, you give the oid of the\n> > package to look in in addition to the name (and types). Having the package\n> > id in the index provides the namespacing.\n> >\n> > Whenever you look up a type or operator, you don't have to give a package\n> > id.\n>\n> While I understand that package.+ is silly, anything that make operators\n> and functions work fundamentally differently is suspicious. A common\n> search mechanism that works for everything in packages (or subschemas,\n> which I'd prefer) would/should/could allow you to do without those\n> prefixes.\n\nWhy? Operators are used differently than functions. That strikes me as a\ngood reason to namespace them differently.\n\nConceptually the main determiner of what function you want is the name, at\nleast as far as from what I can tell from talking with all the programmers\nI know. Yes, we make sure the types match (are part of the primary key),\nbut the name is the main concept. Operators, however, are more\nintent-based. The '+' operator means I want these two things added\ntogether. I don't care so much what types are involved, I want adding to\nhappen. That's a difference of intent. And that's the reason that I think\ndifferent namespacing rules make sense.\n\nPart of it is that I only expect a package to add operators for types it\nintroduced. So to be considering them, you had to have done something that\nties in the type in the package. 
Like you had to make a column in a table\nusing it.\n\nAnother take on that is that I expect the main user of (direct) function\ncalls calling package functions will be other functions in that package,\nwhile the main users of operators will be places which have used a type\nfrom said package. Like queries pulling things out of tables using that\ntype. So the function namespacing is a convenience/tool primarily for the\npackage developer, while the operator and type namespacing is more a\nconvenience for the end application developer.\n\nAlso, you seem to be wanting a path-search ability that is something like\nthe PATH environment variable. This pathing is fundamentally different; to\nuse unix terms, it is \".:..\". The fundamental difference is that there are\nno \"absolute\" paths. The searching is totally location (of routine)\ndependent.\n\nTo add something like an absolute path would totally break the whole\nmotivation for packages. The idea is to give a developer an area over which\ns/he has total name control, but if s/he needs built-in routines, s/he\ndoesn't need to say \"standard.\" to get at them.\n\nIf we allow something like \"absolute paths\" in the package namespacing,\nthen we totally destroy that. Because a package developer can't be sure\nwhat pathing is going on, s/he really has no clue what packages will get\nfound in what order. So then you have to be explicit in the name of all\nthe functions you use (otherwise if a user essentially puts something\nother than \".\" at the head of the path, then you don't get routines in\nyour own package), or run the risk of getting all sorts of run-time\nerrors. A feature designed to make writing packages easier now makes them\nharder. That strikes me as a step backwards.\n\n> > There is a built-in schema, \"master\". 
It will have a fixed oid, probably 9\n> or 11.\n>\n> The built-in schema is called DEFINITION_SCHEMA.\n\nWhy is it different from the \"DEFAULT\" you get when you log into a\ndatabase which doesn't have a schema whose name matches your username?\n\n> > The only other part (which is no small one) is to add namespacing to the\n> > rest of the backend. I expect that will mean adding a schema column to\n> > pg_class, pg_type, and pg_operator.\n>\n> Yup. But you can replace the owner package with the schema column,\n> because the owner property will be transferred to the schema.\n\nNot necessarily. A user other than the one who owns the schema can add a\npackage to it. It's the same thing as why we keep track of who added a\nfunction. :-)\n\nTake care,\n\nBill\n\n", "msg_date": "Sat, 20 Oct 2001 11:24:59 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: schema support, was Package support for Postgres " }, { "msg_contents": "But what if you want a C function to set a variable which can be\naccessed using an SQL, perl, PLpgSQL or other function type?\nShouldn't a global variable be global between all types of functions?\n--\nRod Taylor\n\nThere are always four sides to every story: your side, their side, the\ntruth, and what really happened.\n\n----- Original Message -----\nFrom: \"Bill Studenmund\" <wrstuden@netbsd.org>\nTo: \"Peter Eisentraut\" <peter_e@gmx.net>\nCc: \"PostgreSQL Development\" <pgsql-hackers@postgresql.org>\nSent: Friday, October 19, 2001 1:59 PM\nSubject: Re: [HACKERS] Package support for Postgres\n\n\n> On Sat, 20 Oct 2001, Peter Eisentraut wrote:\n>\n> > Yes, you're right. Actually, sharing data across PostgreSQL C\nfunctions\n> > is trivial because you can just use global variables in your\ndlopen\n> > modules.\n>\n> Exactly. That's why I never envisioned \"C\" or \"internal\" functions\nusing\n> package global variables. 
:-)\n>\n> Take care,\n>\n> Bill\n>\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n", "msg_date": "Sat, 20 Oct 2001 21:28:25 -0400", "msg_from": "\"Rod Taylor\" <rbt@barchord.com>", "msg_from_op": false, "msg_subject": "Re: Package support for Postgres" }, { "msg_contents": "Bill Studenmund writes:\n\n> The big one for now is how should you log into one schema or another?\n> psql database.schema ?\n\nEach user has a default schema, which is by default the schema with the\nsame name as the user name, or if no such schema exists, it's the DEFAULT\nschema (which I believe is what Oracle calls it). Then there should be\nsomething like set schema path. I don't think schemas should be a\nconnection parameter. -- That would be my ideas anyway.\n\n> Whenever you look up a function or aggregate, you give the oid of the\n> package to look in in addition to the name (and types). Having the package\n> id in the index provides the namespacing.\n>\n> Whenever you look up a type or operator, you don't have to give a package\n> id.\n\nWhile I understand that package.+ is silly, anything that makes operators\nand functions work fundamentally differently is suspicious. A common\nsearch mechanism that works for everything in packages (or subschemas,\nwhich I'd prefer) would/should/could allow you to do without those\nprefixes.\n\n> There is a built-in schema, \"master\". It will have a fixed oid, probably 9\n> or 11.\n\nThe built-in schema is called DEFINITION_SCHEMA.\n\n> The only other part (which is no small one) is to add namespacing to the\n> rest of the backend. I expect that will mean adding a schema column to\n> pg_class, pg_type, and pg_operator.\n\nYup. But you can replace the owner package with the schema column,\nbecause the owner property will be transferred to the schema.\n\n> Hmmm... 
We probably also need a command to create operator classes, and\n> the tables it touches would need a schema column too, and accesses will\n> need to be schema savvy.\n>\n> Well, that's a lot for now. Thoughts?\n\nThat \"lot\" was sort of the problem with tackling this until now. ;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sun, 21 Oct 2001 14:27:42 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: schema support, was Package support for Postgres " }, { "msg_contents": "----- Original Message ----- \nFrom: Bill Studenmund <wrstuden@netbsd.org>\nSent: Friday, October 19, 2001 2:04 PM\n\n> > > It means that when you want to use one of the built in functions\n> > > (date_part, abs, floor, sqrt etc.) you don't have to prefix it with\n> > > \"standard.\". You can just say date_part(), abs(), floor(), sqrt(), etc.\n> > > The only time you need to prefix a call with \"standard.\" is if you want to\n> > > exclude any so-named routines in your own package.\n> >\n> > Quick question: would it be possible then to create a 'system' package\n> > and 'system' (or 'master' if you will) schema (when it's implemented),\n> > move over all the system tables (pg_*) into the master schema\n> > and functions into the 'system' package, so that no name conflicts will arise\n> > when creating types, functions, tables, etc. with the same names as system ones?\n> \n> Yes. That is part of my plan actually. :-)\n\nHmm. I see. Then there won't be a problem of creating any DB object\nwith the system name.\n\n> In the patch I sent in last week,\n\nYeah, I remember that one. Took me a couple of minutes\nto download. You know, it never hurts to compress things:\nthen the patch would be ~10 times less in size, and you wouldn't\nhave to worry about PINE messing up with your code in the message body... 
:)\nAnd that would reduce the bounce rate too.\n\nJust a kind and gentle cry to reduce the size of patches sent to\nmy mailbox and save some bandwidth on the way :)\n\n> all of the built-in functions and\n> aggregates are in the \"standard\" package, and you can in fact reference\n> them as standard.foo.\n\nWhen you refer to it as just foo(), and you have foo() defined\nin more than one package, how do you resolve this? Do you also have\na notion of a global package and sub-packages?\n\n--\nSerguei\n\n\n", "msg_date": "Sun, 21 Oct 2001 14:40:45 -0400", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: namespaces" }, { "msg_contents": "On Tue, 23 Oct 2001, Thomas Lockhart wrote:\n\n> (I've been following the thread, at least casually ;)\n>\n> > intent-based. The '+' operator means I want these two things added\n> > together. I don't care so much what types are involved, I want adding to\n> > happen. That's a difference of intent. And that's the reason that I think\n> > different namespacing rules make sense.\n>\n> But operators *are* functions underneath the covers. So different\n> namespacing rules seem like a recipe for missed associations and\n> unexpected results.\n\nUnderneath the covers, yes. But \"those covers\" make the difference in what\nI'm thinking of. An operator isn't just one function call, it can be\nmultiple ones. And not just multiple iterations, but multiple different\nones depending on what the optimizer is doing. That's why you can give an\noperator more than just the procedure operator. You also give it a join\nproc and a restrict proc, and you tie it in with a commutator, negator,\nand two sort operators. 
When you use an operator, you're specifying an\nintent, and all of these parts of an operator's definition help make that\nintent happen.\n\nThe problem though is that if operators are namespaced the same as\nfunctions, then we destroy one of the benefits of packages - a separate\nnamespace for functions.\n\nCan you think of a specific example where this namespacing causes\nproblems? The functions and aggregates are namespaced off of the\ncontaining schema, but the types and operators aren't. Inside the package,\nyou have access to everything in the package. In the enclosing schema, you\nhave immediate access to the types and operators, and can get at the\nfunctions and aggregates by \"packname.\".\n\n> > Part of it is that I only expect a package to add operators for types it\n> > introduced. So to be considering them, you had to have done something that\n> > ties in the type in the package. Like you had to make a column in a table\n> > using it.\n>\n> I'd expect schemas/packages to have operators and functions for existing\n> types, not just new ones. That is certainly how our extensibility\n> features are used; we are extensible in several dimensions (types,\n> functions, operators) and they do not all travel together. We can't\n> guess at the future intent of a package developer, and placing\n> limitations or assumptions about what *must* be in a package just limits\n> future (unexpected or surprising) uses.\n\nPlease play with the patch and try it.\n\nThere is no restriction in the patch that operators (and functions &\naggregates) can only be for types new to the package. You can add\noperators for built-in types, and you can even add operators for other\nuser-specified types too. And pg_dump will make sure that a user-defined\ntype used in a package will get dumped before the package.\n\n> The \"absolute path\" scoping and lookup scheme is defined in SQL99. 
I'm\n> not sure I understand the issue in the last paragraph: you seem to be\n> making the point that absolute paths are Bad because package developers\n> don't know what those paths might be. But otoh allowing absolute paths\n> (and/or embedding them into a package) gives the developer *precise*\n> control over what their package calls and what the behaviors are. istm\n> that if a package developer needs to specify precisely the resources his\n> package requires, then he can do that. And if he wants to leave it\n> flexible and determined by scoping and pathing rules, then he can do\n> that too.\n>\n> afaik relative pathing is not specified in the standard, but we might\n> want to consider how we would implement that as an extension and whether\n> that gives more power to the packager or developer.\n\nI've found the spec, and am still studying it. Though what I've found so\nfar is a schema search path. My main interest is for the package itself to\nbe the first thing searched. After that, whatever search path is\nappropriate for the schema seems like the right thing to do. So, besides\nthe fact I think we should do schemas as per the spec, I think using the\nschema search path is the right thing to do.\n\n> > > The built-in schema is called DEFINITION_SCHEMA.\n> > Why is it different from the \"DEFAULT\" you get when you log into a\n> > database which doesn't have a schema whose name matches your username?\n>\n> It may not be. But SQL99 specifies the name.\n\nActually, the most interesting thing I saw was in the start of chapter 21\n(the chapter on the DEFINITION_SCHEMA) at the bottom of section 21.1.\n\n\"The specification provides only a model of the base tables that are\nrequired, and does not imply that an SQL-implementation shall provide the\nfunctionality in the manner described in this clause.\"\n\nAs I understand that, we are free to implement things as we wish. We just\nneed to have the pieces/functionality described therein. 
We *don't* have\nto use the names or exact formats used in the spec.\n\nAs a concrete example of what I mean, I believe that we are free to keep\npg_attribute as it is and still comply with section 21.7 ATTRIBUTES base\ntable. UDT_NAME we implement with attrelid, ATTRIBUTE_NAME we do with\nattname, ORDINAL_POSITION -> attnum, ATTRIBUTE_DEFAULT we do, IS_NULLABLE\n-> attnotnull (we twist the sense, but the functionality is there),\nATTRIBUTES_PRIMARY_KEY we agree (to the extent we don't support schemas\nor catalogs yet).\n\nThe main thing is that we eventually have an INFORMATION_SCHEMA full of\nviews which turn the system tables into what the standards want.\n\nTake care,\n\nBill\n\n", "msg_date": "Mon, 22 Oct 2001 06:46:16 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: schema support, was Package support for Postgres" }, { "msg_contents": "On Tue, 23 Oct 2001, Peter Eisentraut wrote:\n\n> Bill Studenmund writes:\n>\n> > Why? Operators are used differently than functions.\n>\n> I don't think so. Operators are a syntactic convenience for functions.\n> That's what they always have been and that's what they should stay.\n\nHow does what you say disagree with what I said?\n\nOperators certainly have a lot more structure to them than a function call\ndoes. That's why you give the restriction and join functions, and you hand\nthem commutation and negation operators. The optimizer uses all of these\ntools to make what you want to have happen (adding for instance) happen as\nefficiently as possible. It will re-write what you said in a different\nmanner, if that different manner holds the same intent yet is more\nefficient.\n\n> > Conceptually the main determiner of what function you want is the name, at\n> > least as far as from what I can tell from talking with all the programmers\n> > I know. Yes, we make sure the types match (are part of the primary key),\n> > but the name is the main concept. 
Operators, however, are more\n> > intent-based. The '+' operator means I want these two things added\n> > together. I don't care so much what types are involved, I want adding to\n> > happen. That's a difference of intent. And that's the reason that I think\n> > different namespacing rules make sense.\n>\n> Naive developers all program by \"intent\". If I invoke a + operator then I\n> expect it to add. If I call a sqrt() function then I expect it to\n> calculate the square root. If I execute an INSERT statement then I would\n> prefer that I did not delete anything. Designing systems to work by\n> \"intent\" can be construed as an aspect of \"user-friendliness\".\n>\n> But the more knowledgeable programmer is mildly aware of what's going on\n> behind the scenes: Both \"+\" and \"sqrt\" are just names for function code\n> that may or may not do what you think they do. So this applies to both\n> functions and operators.\n\nSo I am a \"naive\" programmer because I mention intent above? That is very\ncondescending, Peter, and strikes me as inappropriate. Are you really so\nout of things to say that you have to resort to condescension?\n\nAt what point have you tried to determine how experienced a programmer I\nam? You've never asked me for my resume, or what projects I've worked on\nbefore this. Your comments indicate to me that you have not yet tried the\npatch I sent in, and if you have, I really doubt you've made packages with\nit. So how can you judge? The fact I disagree with you?\n\nAlso, in your \"naive\" vs \"more knowledgeable\" programmer comparison, you\nmention \"user-friendliness\" in the \"naive\" part, the \"bad\" part. Do you\nreally think that \"user-friendliness\" is a bad thing? I hope not.\n\nI think that \"user-friendliness\" is an important part of programming. It\nmeans that your tools or programmatic interfaces have (or lack) an\nappropriateness to the task at hand. It's not just for command lines or\nGUIs. 
I've worked with different libraries and programming packages, and I\nhave experienced the ones where the design and layout make the\nlibrary/package useful, and ones where the design and layout get in the\nway.\n\nHmmm... Thinking about it, I now think you're right, that we should have a\nway to handle pathing. A package author should be able to set the path\nused for routines in the package, though.\n\n> > > The built-in schema is called DEFINITION_SCHEMA.\n> >\n> > Why is it different from the \"DEFAULT\" you get when you log into a\n> > database which doesn't have a schema whose name matches your username?\n>\n> Because SQL says so.\n\nActually I'm not so sure. See the note to Thomas, especially the last\nsentence of section 21.1. It seems that if we have an INFORMATION_SCHEMA\nwhich contains all the views in the spec, and our system tables have\nthe right behaviors, then we are fine.\n\nActually, the text in section 20.1, \"Introduction to Information Schema\nand Definition Schema\" is more direct:\n\n\"The views of the Information Schema are viewed tables defined in terms of\nthe base tables of the Definition Schema. The only purpose of the\nDefinition Schema is to provide a data model to support the Information\nSchema and to assist understanding. An SQL-implementation need do no more\nthan simulate the existence of the Definition Schema as viewed through the\nInformation Schema views.\"\n\nSo if we have INFORMATION_SCHEMA with the right views in it, we are fine\ndoing whatever we want.\n\n> > Not necessarily. A user other than the one who owns the schema can add a\n> > package to it. It's the same thing as why we keep track of who added a\n> > function. :-)\n>\n> Blech, I meant \"you can replace the owner column with the schema column\".\n\nThat's actually what I thought you said. :-)\n\nI still think we can't do that, since someone other than the schema owner\ncan add a package to a schema. 
:-) Or at least that's the assumption I'm\nrunning on; we allow users other than PGUID to create functions (and\noperators and aggregates and types) in the default (whatever it will be\ncalled) schema, so why shouldn't they be allowed to add packages?\n\nTake care,\n\nBill\n\n", "msg_date": "Mon, 22 Oct 2001 08:03:39 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: schema support, was Package support for Postgres " }, { "msg_contents": "On Wed, 24 Oct 2001, Peter Eisentraut wrote:\n\n> Bill Studenmund writes:\n>\n> > So I am a \"naive\" programmer because I mention intent above?\n>\n> No.\n\nSorry, that's the way it came across. As you've said that was not your\nintent, please disregard my response; I was responding to something you\ndid not mean.\n\n> > So if we have INFORMATION_SCHEMA with the right views in it, we are fine\n> > doing whatever we want.\n>\n> I think some interpretation of the SQL standard can be used to prove that\n> a new schema should not contain any objects. So you're going to have to\n> stick to the two predefined schemas to put the system catalogs in. Then\n> again, other interpretations may be used to prove other things. But to me\n> the intent of the standard is clear that system catalogs are meant to go\n> into the definition schema, and I don't see a reason why this could not be\n> so.\n\nI had been thinking that we could have the built-in objects (functions,\ntypes, operators, etc.) in whatever was the \"default.master\" package, but\nit looks like SQL99 doesn't like that. You're right that built-in things\nhave to be in a different schema than user-added things.\n\nSection 10.4 contains text:\n\nii) If RN contains a <schema name> SN, then\n\nCase:\n\n1) If SN is INFORMATION_SCHEMA, then the single candidate routine of RI is\nthe built-in function identified by <routine name>.\n\nActually 4.24 is more exact. 
It defines a built-in function as a routine\nwhich is returned from the query:\n\nSELECT DISTINCT ROUTINE_NAME\nFROM INFORMATION_SCHEMA.ROUTINES\nWHERE SPECIFIC_SCHEMA = INFORMATION_SCHEMA\n\nActually, since we have to have an INFORMATION_SCHEMA, and\n\"INFORMATION_SCHEMA\" gets thrown around a lot, I think it'd be easiest to\nmake \"INFORMATION_SCHEMA\" the schema containing built-in things. Otherwise\n(among other things) we have to replace DEFINITION_SCHEMA with\nINFORMATION_SCHEMA in the above-defined view (and in a lot of other\nplaces).\n\nThoughts?\n\n> > I still think we can't do that, since someone other than the schema owner\n> > can add a package to a schema. :-) Or at least that's the assumption I'm\n> > running on; we allow users other than PGUID to create functions (and\n> > operators and aggregates and types) in the default (whatever it will be\n> > called) schema, so why shouldn't they be allowed to add packages?\n>\n> Because SQL says so. All objects in a schema belong to the owner of the\n> schema. In simple setups you have one schema per user with identical\n> names. This has well-established use patterns in other SQL RDBMS.\n\nThen implementing schemas will cause a backwards-incompatible change\nregarding who can add/own functions (and operators and ..).\n\nMainly because when we introduce schemas, all SQL transactions will have\nto be performed in the context of *some* schema. I think \"DEFAULT\" was the\nname you mentioned for when there was no schema matching the username. As\n\"DEFAULT\" (or whatever we call it) will be made by the PG super user (it\nwill actually be added as part of initdb), then that means that only the\nsuper user will own functions. That's not how things are now, and imposing\nthat on upgrading users will likely cause pain.\n\nThink about a dump/restore upgrade from 7.2 to 7.3. Right now users other\nthan PGUID can own functions (and triggers, etc.). 
When you do the\nrestore, though, since your dump had no schema support, it all goes into\nDEFAULT. Which will be owned by PGUID. So now we either have a schema with\nthings owned by a user other than the schema owner, or we have a broken\nrestore.\n\nOr we have to special case the DEFAULT schema. Which strikes me as a bad\nthing to do.\n\nFor now, I'd suggest letting users other than a schema owner own things in\na schema, and later on add controls over who can add things to a schema.\nThen when you do a \"CREATE SCHEMA\" command, you will implicitly be adding\nrestrictions prohibiting someone other than the owner from adding things\n(including packages/subschemas).\n\n> I agree that this might not be what everyone would want, but it seems\n> extensible. However, I feel we're trying to design too many things at\n> once. Let's do schemas first the way they're in the SQL standard, and\n> then we can try to tack on ownership or subschemas or package issues.\n\nWell, the packages changes can easily be turned into schema support for\nfunctions and aggregates, so we are part way there. Also, the packages\nchanges illustrate how to make system-wide internal schema changes of the\ntype adding SQL schemas will need. Plus, packages as they are now are\nuseful w/o schema support.\n\nAnd there's the fact that schemas were wanted for 7.2, and didn't happen.\nWithout external agitation, will they happen for 7.3? Given the size of\nthe job, I understand why they didn't happen (the package changes so far\nrepresent over 3 months of full-time programming). We've got some momentum\nnow, I'd say let's run with it. :-)\n\nTake care,\n\nBill\n\n", "msg_date": "Tue, 23 Oct 2001 08:43:32 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: schema support, was Package support for Postgres" }, { "msg_contents": "(I've been following the thread, at least casually ;)\n\n> Why? Operators are used differently than functions. 
That strikes me as a\n> good reason to namespace them differently.\n> Conceptually the main determiner of what function you want is the name, at\n> least as far as from what I can tell from talking with all the programmers\n> I know. Yes, we make sure the types match (are part of the primary key),\n> but the name is the main concept. Operators, however, are more\n> intent-based. The '+' operator means I want these two things added\n> together. I don't care so much what types are involved, I want adding to\n> happen. That's a difference of intent. And that's the reason that I think\n> different namespacing rules make sense.\n\nBut operators *are* functions underneath the covers. So different\nnamespacing rules seem like a recipe for missed associations and\nunexpected results.\n\n> Part of it is that I only expect a package to add operators for types it\n> introduced. So to be considering them, you had to have done something that\n> ties in the type in the package. Like you had to make a column in a table\n> using it.\n\nI'd expect schemas/packages to have operators and functions for existing\ntypes, not just new ones. That is certainly how our extensibility\nfeatures are used; we are extensible in several dimensions (types,\nfunctions, operators) and they do not all travel together. We can't\nguess at the future intent of a package developer, and placing\nlimitations or assumptions about what *must* be in a package just limits\nfuture (unexpected or surprising) uses.\n\n> Another take on that is that I expect the main user of (direct) function\n> calls calling package functions will be other functions in that package,\n> while the main users of operators will be places which have used a type\n> from said package. Like queries pulling things out of tables using that\n> type. 
So the function namespacing is a convenience/tool primarily for the\n> package developer, while the operator and type namespacing is more a\n> convenience for the end application developer.\n\nWe are probably drawing too fine a distinction here.\n\n> Also, you seem to be wanting a path-search ability that is something like\n> the PATH environment variable. This pathing is fundamentally different; to\n> use unix terms, it is \".:..\". The fundamental difference is that there are\n> no \"absolute\" paths. The searching is totally location (of routine)\n> dependent.\n> To add something like an absolute path would totally break the whole\n> motivation for packages. The idea is to give a developer an area over which\n> s/he has total name control, but if s/he needs built-in routines, s/he\n> doesn't need to say \"standard.\" to get at them.\n> If we allow something like \"absolute paths\" in the package namespacing,\n> then we totally destroy that. Because a package developer can't be sure\n> what pathing is going on, s/he really has no clue what packages will get\n> found in what order. So then you have to be explicit in the name of all\n> the functions you use (otherwise if a user essentially puts something\n> other than \".\" at the head of the path, then you don't get routines in\n> your own package), or run the risk of getting all sorts of run-time\n> errors. A feature designed to make writing packages easier now makes them\n> harder. That strikes me as a step backwards.\n\nThe \"absolute path\" scoping and lookup scheme is defined in SQL99. 
istm\nthat if a package developer needs to specify precisely the resources his\npackage requires, then he can do that. And if he wants to leave it\nflexible and determined by scoping and pathing rules, then he can do\nthat too.\n\nafaik relative pathing is not specified in the standard, but we might\nwant to consider how we would implement that as an extension and whether\nthat gives more power to the packager or developer.\n\n> > > There is a built-in schema, \"master\". It will have a fixed oid, probably 9\n> > > or 11.\n> > The built-in schema is called DEFINITION_SCHEMA.\n> Why is it different from the \"DEFAULT\" you get when you log into a\n> database which doesn't have a schema whose name matches your username?\n\nIt may not be. But SQL99 specifies the name.\n\n - Thomas\n", "msg_date": "Tue, 23 Oct 2001 17:17:54 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: schema support, was Package support for Postgres" }, { "msg_contents": "Bill Studenmund writes:\n\n> Why? Operators are used differently than functions.\n\nI don't think so. Operators are a syntactic convenience for functions.\nThat's what they always have been and that's what they should stay.\n\n> Conceptually the main determiner of what function you want is the name, at\n> least as far as from what I can tell from talking with all the programmers\n> I know. Yes, we make sure the types match (are part of the primary key),\n> but the name is the main concept. Operators, however, are more\n> intent-based. The '+' operator means I want these two things added\n> together. I don't care so much what types are involved, I want adding to\n> happen. That's a difference of intent. And that's the reason that I think\n> different namespacing rules make sense.\n\nNaive developers all program by \"intent\". If I invoke a + operator then I\nexpect it to add. If I call a sqrt() function then I expect it to\ncalculate the square root. 
If I execute an INSERT statement then I would\nprefer that I did not delete anything. Designing systems to work by\n\"intent\" can be construed as an aspect of \"user-friendliness\".\n\nBut the more knowledgeable programmer is mildly aware of what's going on\nbehind the scenes: Both \"+\" and \"sqrt\" are just names for function code\nthat may or may not do what you think they do. So this applies to both\nfunctions and operators.\n\n> Part of it is that I only expect a package to add operators for types it\n> introduced.\n\nThis is an arbitrary restriction that you might find reasonable, but other\ndevelopers might not.\n\n> Another take on that is that I expect the main user of (direct) function\n> calls calling package functions will be other functions in that package,\n> while the main users of operators will be places which have used a type\n> from said package.\n\nSee above.\n\n> To add something like an absolute path would totally break the whole\n> motivation for packages.\n\nYes. If we add something like subschemas then something more\nsophisticated than a Unix-style path would have to be engineered.\n\n> > The built-in schemas is called DEFINITION_SCHEMA.\n>\n> Why is it different from the \"DEFAULT\" you get when you log into a\n> database which doesn't have a schema whose name matches your username?\n\nBecause SQL says so.\n\n> > > The only other part (which is no small one) is to add namespacing to the\n> > > rest of the backend. I expect that will mean adding a schema column to\n> > > pg_class, pg_type, and pg_operator.\n> >\n> > Yup. But you can replace the owner package with the schema column,\n> > because the owner property will be transferred to the schema.\n>\n> Not necessarily. A user other than the one who owns the schema can add a\n> package to it. It's the same thing as why we keep track of who added a\n> function. 
:-)\n\nBlech, I meant \"you can replace the owner column with the schema column\".\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 23 Oct 2001 20:42:31 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: schema support, was Package support for Postgres " }, { "msg_contents": "On Thu, 25 Oct 2001, Peter Eisentraut wrote:\n\n> Bill Studenmund writes:\n>\n> > Mainly because when we introduce schemas, all SQL transactions will have\n> > to be performed in the context of *some* schema. I think \"DEFAULT\" was the\n> > name you mentioned for when there was no schema matching the username. As\n> > \"DEFAULT\" (or whatever we call it) will be made by the PG super user (it\n> > will actually be added as part of initdb), then that means that only the\n> > super user will own functions.\n>\n> If you want to own the function you should create it in your schema. If\n> you want to create a function and let someone else own it, then ask\n> someone else for write access to their schema. (This should be a rare\n> operation and I don't think SQL provides for it, so we can ignore it in\n> the beginning.) If there is no schema you have write access to then you\n> cannot create things. People have been dying for that kind of feature,\n> and schemas will enable us to have it.\n\nI think I understand your descriptions of what you will be *able* to do\nwith schemas. And also that they may describe how you *should* do thing\nwith schema. I'm not disagreeing with you about that. But that's not the\nangle I'm working.\n\nI guess to get at my point, I can ask this question, \"Will schema support\ninvalidate existing PostgreSQL database designs.\"\n\nI would like the answer to be no. I would like our users to be able to\ndump a pre-schema-release db, upgrade, and then restore into a\nschema-aware PostgreSQL. 
And have their restore work.\n\nSince the admin is restoring a db which was made before schema support,\nthere are no CREATE SCHEMA commands in it (or certainly not ones which do\na real schema create - right now CREATE SCHEMA is a synonym for CREATE\nDATABASE). So the restore will create everything in the \"DEFAULT\" schema\n(The schema where creates done w/o a CREATE SCHEMA go).\n\nBut right now, we can have different users owning things in one database.\nSo there will be restores out there which will have different users owning\nthings in the same restored-to schema, which will be \"DEFAULT\".\n\nSo we have to have (or just retain) the ability to have different users\nowning things in one schema.\n\n> Think about it this way: In its simplest implementation (which is in fact\n> the Entry Level SQL92, AFAIR), a schema can only have the name of the user\n> that owns it. I suspect that this is because SQL has no CREATE USER, so\n> CREATE SCHEMA is sort of how you become a user that can do things. At the\n> same time, schemas would space off the things each user creates, and if\n> you want to access someone else's stuff you have to prefix it with the\n> user's name <user>.<table>, sort of like ~user/file. The generic\n> \"namespace\" nature of schemas only comes from the fact that in higher\n> SQL92 levels a user can own more than one schema with different names.\n>\n> (Interesting thesis: It might be that our users are in fact schemas\n> (minus the parser changes) and we can forget about the whole thing.)\n\nHmmm... I don't think so, but hmmm..\n\n> Now what does this spell for the cooperative development environments you\n> described? Difficult to tell, but perhaps some of these would do, none of\n> which are standard, AFAIK:\n>\n> * schemas owned by groups/roles\n\nI think that schemas owned by roles are part of SQL99.\n\n> * access privileges to schemas, perhaps some sort of sticky bit\n> functionality\n>\n> > Or we have to special case the DEFAULT schema. 
Which strikes me as a bad\n> > thing to do.\n>\n> I don't necessarily think of the DEFAULT schemas as a real schema. It\n> might just be there so that *some* schema context is set if you don't have\n> one set otherwise, but you don't necessarily have write access to it.\n> But it might not be necessary at all.\n\nWhile if we were starting over, we might be able to (maybe should have)\ndesign(ed) things so we don't need it, I think a \"DEFAULT\" schema would\nhelp give users of the schema-aware PostgreSQL an experience similar to\nwhat they have now.\n\nAnd getting back to where this all started, I think we do need to have the\nability to have users other than the schema owner own things in the\nschema, so we should keep the owner id column in the pg_package table. I'm\nnot against, when things are all said and done, having the default be that\nonly the schema owner can add things. But that's a policy decision. :-)\n\nTake care,\n\nBill\n\n", "msg_date": "Wed, 24 Oct 2001 07:16:55 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: schema support, was Package support for Postgres " }, { "msg_contents": "Bill Studenmund writes:\n\n> > > Why? Operators are used differently than functions.\n> >\n> > I don't think so. Operators are a syntacticaly convenience for functions.\n> > That's what they always have been and that's what they should stay.\n>\n> How does what you say disagree with what I said?\n>\n> Operators certainly have a lot more structure to them than a function call\n> does. 
That's why you give the restriction and join functions, and you hand\n> them commutation and negation operators.\n\nThese are just hints to the optimizer; they don't affect the invocation\ninterface.\n\n> So I am a \"naive\" programmer because I mention intent above?\n\nNo.\n\n> So if we have INFORMATION_SCHEMA with the right views in it, we are fine\n> doing whatever we want.\n\nI think some interpretation of the SQL standard can be used to prove that\na new schema should not contain any objects. So you're going to have to\nstick to the two predefined schemas to put the system catalogs in. Then\nagain, other interpretations may be used to prove other things. But to me\nthe intent of the standard is clear that system catalogs are meant to go\ninto the definition schema, and I don't see a reason why this could not be\nso.\n\n> > Blech, I meant \"you can replace the owner column with the schema column\".\n>\n> That's actually what I thought you said. :-)\n>\n> I still think we can't do that, since someone other than the schema owner\n> can add a package to a schema. :-) Or at least that's the assumption I'm\n> running on; we allow users other than PGUID to create functions (and\n> operators and aggregates and types) in the default (whatever it will be\n> called) schema, so why shouldn't they be allowed to add packages?\n\nBecause SQL says so. All objects in a schema belong to the owner of the\nschema. In simple setups you have one schema per user with identical\nnames. This has well-established use patterns in other SQL RDBMS.\n\nI agree that this might not be what everyone would want, but it seems\nextensible. However, I feel we're trying to design too many things at\nonce. 
Let's do schemas first the way they're in the SQL standard, and\nthen we can try to tack on ownership or subschemas or package issues.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 24 Oct 2001 20:57:25 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: schema support, was Package support for Postgres " }, { "msg_contents": "On Tue, Oct 23, 2001 at 08:43:32AM -0700, Bill Studenmund wrote:\n> \n> And there's the fact that schemas were wanted for 7.2, and didn't happen.\n> Without external agitation, will they happen for 7.3? Given the size of\n> the job, I understand why they didn't happen (the package changes so far\n> represent over 3 months of full-time programming). We've got some momentum\n> now, I'd say let's run with it. :-)\n> \n\nI feel much better about my unsuccessful attempt at a naive schema\nimplementation, last Christmas holidays: I had nowhere _near_ 3 months\nof time in on that.\n\n;-)\nRoss\n\n-- \nRoss Reedstrom, Ph.D. reedstrm@rice.edu\nExecutive Director phone: 713-348-6166\nGulf Coast Consortium for Bioinformatics fax: 713-348-6182\nRice University MS-39\nHouston, TX 77005\n", "msg_date": "Wed, 24 Oct 2001 17:27:35 -0500", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: schema support, was Package support for Postgres" }, { "msg_contents": "On Fri, 26 Oct 2001, Peter Eisentraut wrote:\n\n> Bill Studenmund writes:\n>\n> > I guess to get at my point, I can ask this question, \"Will schema support\n> > invalidate existing PostgreSQL database designs.\"\n> >\n> > I would like the answer to be no. I would like our users to be able to\n> > dump a pre-schema-release db, upgrade, and then restore into a\n> > schema-aware PostgreSQL. And have their restore work.\n>\n> I think this can work. 
Assume a database like this:\n>\n> user1: CREATE TABLE foo ( );\n> user2: CREATE TABLE bar ( );\n>\n> The dump of this would be something like:\n>\n> \\c - user1\n> CREATE TABLE foo ( );\n>\n> \\c - user2\n> CREATE TABLE bar ( );\n>\n> So the tables would be created in the appropriate schema context for each\n> user. The remaining problem then is that the two schemas user1 and user2\n> would need to be created first, but we could make this implicit somewhere.\n> For instance, a user creation would automatically create a schema for the\n> user in template1. Or at least the dump could be automatically massaged\n> to this effect.\n>\n> > But right now, we can have different users owning things in one database.\n> > So there will be restores out there which will have different users owning\n> > things in the same restored-to schema, which will be \"DEFAULT\".\n>\n> This would fundamentally undermine what an SQL schema is and doesn't help\n> interoperability a bit. If we want to implement our own namespace\n> mechanism we can call it NAMESPACE. But if we want something called\n> SCHEMA then we should implement it the way it's standardized, and there is\n> certainly a tight coupling between schemas and ownership. In fact, as\n> I've said already, a schema *is* the ownership; a user is just a weird\n> PostgreSQL invention.\n\nHmmm.... I've been looking into this, and you are right. All of the views\nin INFORMATION_SCHEMA that I looked at contain text like\n\nWHERE (SCHEMA_OWNER = CURRENT_USER OR SCHEMA_OWNER IN (SELECT ROLE_NAME\n\tFROM ENABLED_ROLES) )\n\nSo then we'll need a tool to massage old-style dumps to:\n\n1) create the schema, and\n\n2) path all of the schemas together by default.\n\nWell, at least a number of tables won't gain a new column as a result of\nthis; the owner column will become the schema_id column. 
:-)\n\nTake care,\n\nBill\n\n", "msg_date": "Thu, 25 Oct 2001 10:36:48 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: schema support, was Package support for Postgres " }, { "msg_contents": "Bill Studenmund writes:\n\n> > Because SQL says so. All objects in a schema belong to the owner of the\n> > schema. In simple setups you have one schema per user with identical\n> > names. This has well-established use patterns in other SQL RDBMS.\n>\n> Then implementing schemas will cause a backwards-incompatible change\n> regarding who can add/own functions (and operators and ..).\n>\n> Mainly because when we introduce schemas, all SQL transactions will have\n> to be performed in the context of *some* schema. I think \"DEFAULT\" was the\n> name you mentioned for when there was no schema matching the username. As\n> \"DEFAULT\" (or whatever we call it) will be made by the PG super user (it\n> will actually be added as part of initdb), then that means that only the\n> super user will own functions.\n\nIf you want to own the function you should create it in your schema. If\nyou want to create a function and let someone else own it, then ask\nsomeone else for write access to their schema. (This should be a rare\noperation and I don't think SQL provides for it, so we can ignore it in\nthe beginning.) If there is no schema you have write access to then you\ncannot create things. People have been dying for that kind of feature,\nand schemas will enable us to have it.\n\nThink about it this way: In its simplest implementation (which is in fact\nthe Entry Level SQL92, AFAIR), a schema can only have the name of the user\nthat owns it. I suspect that this is because SQL has no CREATE USER, so\nCREATE SCHEMA is sort of how you become a user that can do things. 
At the\nsame time, schemas would space off the things each user creates, and if\nyou want to access someone else's stuff you have to prefix it with the\nuser's name <user>.<table>, sort of like ~user/file. The generic\n\"namespace\" nature of schemas only comes from the fact that in higher\nSQL92 levels a user can own more than one schema with different names.\n\n(Interesting thesis: It might be that our users are in fact schemas\n(minus the parser changes) and we can forget about the whole thing.)\n\nNow what does this spell for the cooperative development environments you\ndescribed? Difficult to tell, but perhaps some of these would do, none of\nwhich are standard, AFAIK:\n\n* schemas owned by groups/roles\n\n* access privileges to schemas, perhaps some sort of sticky bit\n functionality\n\n> Or we have to special case the DEFAULT schema. Which strikes me as a bad\n> thing to do.\n\nI don't necessarily think of the DEFAULT schemas as a real schema. It\nmight just be there so that *some* schema context is set if you don't have\none set otherwise, but you don't necessarily have write access to it.\nBut it might not be necessary at all.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 25 Oct 2001 20:31:26 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: schema support, was Package support for Postgres " }, { "msg_contents": "* Bill Studenmund <wrstuden@netbsd.org> wrote:\n\n| I would like the answer to be no. I would like our users to be able to\n| dump a pre-schema-release db, upgrade, and then restore into a\n| schema-aware PostgreSQL. And have their restore work.\n\n\nImportant point. Also having a standard is fine, but by limiting ourselves\nto it we are ignoring issues that might be very useful. Draw the line. 
\n\n-- \nGunnar R�nning - gunnar@polygnosis.com\nSenior Consultant, Polygnosis AS, http://www.polygnosis.com/\n", "msg_date": "26 Oct 2001 03:01:00 +0200", "msg_from": "Gunnar =?iso-8859-1?q?R=F8nning?= <gunnar@polygnosis.com>", "msg_from_op": false, "msg_subject": "Re: schema support, was Package support for Postgres" }, { "msg_contents": "Bill Studenmund writes:\n\n> I guess to get at my point, I can ask this question, \"Will schema support\n> invalidate existing PostgreSQL database designs.\"\n>\n> I would like the answer to be no. I would like our users to be able to\n> dump a pre-schema-release db, upgrade, and then restore into a\n> schema-aware PostgreSQL. And have their restore work.\n\nI think this can work. Assume a database like this:\n\nuser1: CREATE TABLE foo ( );\nuser2: CREATE TABLE bar ( );\n\nThe dump of this would be something like:\n\n\\c - user1\nCREATE TABLE foo ( );\n\n\\c - user2\nCREATE TABLE bar ( );\n\nSo the tables would be created in the appropriate schema context for each\nuser. The remaining problem then is that the two schemas user1 and user2\nwould need to be created first, but we could make this implicit somewhere.\nFor instance, a user creation would automatically create a schema for the\nuser in template1. Or at least the dump could be automatically massaged\nto this effect.\n\n> But right now, we can have different users owning things in one database.\n> So there will be restores out there which will have different users owning\n> things in the same restored-to schema, which will be \"DEFAULT\".\n\nThis would fundamentally undermine what an SQL schema is and don't help\ninteroperability a bit. If we want to implement our own namespace\nmechanism we can call it NAMESPACE. But if we want something called\nSCHEMA then we should implement it the way it's standardized, and there is\ncertainly a tight coupling between schemas and ownership. 
In fact, as\nI've said already, a schema *is* the ownership; a user is just a weird\nPostgreSQL invention.\n\n> I think that schemas owned by roles are part of SQL99.\n\nCorrect.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 26 Oct 2001 23:03:59 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: schema support, was Package support for Postgres " }, { "msg_contents": "\nBased on this email, I am assuming we don't want to add Package support\nbut instead will do it with schemas, which Tom is already working on.\n\n\n---------------------------------------------------------------------------\n\nPeter Eisentraut wrote:\n> Bill Studenmund writes:\n> \n> > Honestly, I do not understand why \"global variables\" have been such a sore\n> > point for you.\n> \n> My point is that the proposed \"package support\" introduces two features\n> that are a) independent, and b) already exist, at least in design.\n> Schemas are already planned as a namespace mechanism. Global variables in\n> PLs already exist in some PLs. Others can add it if they like. There\n> aren't any other features introduced by \"package support\" that I can see\n> or that you have explicitly pointed out.\n> \n> So the two questions I ask myself are:\n> \n> 1. Are package namespaces \"better\" than schemas? The answer to that is\n> no, because schemas are more standard and more general.\n> \n> 2. Are global variables via packages \"better\" than the existing setups?\n> My answer to that is again no, because the existing setups respect\n> language conventions, maintain the separation of the backend and the\n> language handlers, and of course they are already there and used.\n> \n> So as a consequence we have to ask ourselves,\n> \n> 3. Do \"packages\" add anything more to the table than those two elementary\n> features? Please educate us.\n> \n> 4. 
Would it make sense to provide \"packages\" alongside the existing\n> mechanisms that accomplish approximately the same thing. That could be\n> debated, in case we agree that they are approximately the same thing.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Feb 2002 14:11:35 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Package support for Postgres" } ]
[ { "msg_contents": "i've had similar problems before. Looks like some thing is in a transaction,\nblocked on something else. Then vacuum comes in, locks half the tables, and\nthen gets stuck on a table that the transaction has modified. Now most of\nyour other transactions will block forever. Then the connection limit for\npostgres will be hit. Then you can't connect to postgres at all.\n\nBasically, its a death spiral starting from something in a transaction\nblocking forever on an external command. Nothing postgres itself can do\nabout. Of course, this is just my conjecture based on the info provided.\n\n-rchit\n\n-----Original Message-----\nFrom: Michael Meskes [mailto:meskes@postgresql.org]\nSent: Thursday, October 11, 2001 2:29 AM\nTo: PostgreSQL Hacker\nSubject: [HACKERS] Deadlock? idle in transaction\n\n\nA customer's machine hangs from time to time. All we could find so far is\nthat postgres seems to be in state \"idle in transaction\":\n\npostgres 19317 0.0 0.3 8168 392 ? S Oct05 0:00\n/usr/lib/postgresql/bin/postmaster -D /var/lib/postgres/data\npostgres 19983 0.0 0.8 8932 1020 ? S Oct05 0:01 postgres:\npostgres rabatt 192.168.50.222 idle in transaction\npostgres 21005 0.0 0.0 3484 4 ? S Oct06 0:00\n/usr/lib/postgresql/bin/psql -t -q -d template1\npostgres 21014 0.0 0.7 8892 952 ? S Oct06 0:01 postgres:\npostgres rabatt [local] VACUUM waiting\npostgres 21833 0.0 0.4 3844 572 ? S Oct06 0:00\n/usr/lib/postgresql/bin/pg_dump rabatt\npostgres 21841 0.0 1.2 9716 1564 ? S Oct06 0:00 postgres:\npostgres rabatt [local] COPY waiting\npostgres 22135 0.0 0.9 8856 1224 ? S Oct06 0:00 postgres:\npostgres rabatt 192.168.50.223 idle in transaction waiting\n\nI'm not sure what's happening here and I have no remote access to the\nmachine myself. Any idea what could be the reason for this?\n\nThere may be some client processes running at the time the dump and the\nvacuum commands are issued that have an open transaction doing nothing. 
That\nis, they just issued a BEGIN command. Thinking about it, they might run some inserts at\nthe very same time, although that's not likely.\n\nAny hints are appreciated. Thanks in advance.\n\nMichael\n\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n", "msg_date": "Thu, 11 Oct 2001 15:15:36 -0700", "msg_from": "Rachit Siamwalla <rachit@ensim.com>", "msg_from_op": true, "msg_subject": "Re: Deadlock? idle in transaction" }, { "msg_contents": "On Thu, Oct 11, 2001 at 03:15:36PM -0700, Rachit Siamwalla wrote:\n> then gets stuck on a table that the transaction has modified. Now most of\n> your other transactions will block forever. Then the connection limit for\n> postgres will be hit. Then you can't connect to postgres at all.\n\nReally? I do not know the way the backend handles locks, but couldn't it\ndetect such a deadlock and cancel a transaction? Something like this:\n\ntask 1 locks table A\ntask 2 locks table B\ntask 1 locks table B\ntask 2 tries to lock table A\n\nOf course the last call creates the deadlock. Would it be possible to just\ncancel task 2 in this case? Or do I miss something obvious?\n\nMichael\n\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Fri, 12 Oct 2001 10:34:09 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Deadlock? idle in transaction" } ]
[ { "msg_contents": "Possible job...\n\n - Thomas\n\n-------- Original Message --------\nSubject: Postgre SQL Developer - Chicago, IL\n Date: Thu, 11 Oct 2001 14:01:12 -0400\n From: \"Crystal, Jennifer\" <JCrystal@LucasCareers.com>\n To: lockhart@alumni.caltech.edu\n\n\n\nHi, my name is Jennifer Crystal and I am a Technical Recruiter with the\nLucas Group. I found your email address on the PostgreSQL website and\nwas wondering if you might know someone who would be interested in the\nfollowing position:\n\nPostgre SQL Database Developer - Chicago, IL\n\nSolid experience with PgSQL programming on PostgreSQL ORBMS on a Linux\nPlatform. Candidate will build a prototype of the CNS Drug Database\nthat will be migratable to Oracle and DB2. This database contains the\ndrug effects on glucose metabolism of both known and novel CNS (Central\nNervous System) drugs. The company expects that the database will\nprovide a valuable resource to pharmaceutical developers as a screening\ntool for therapeutic profiles, side effect profiles, and dosing regimens\nfor new drug development, based on known drug effect characteristics. To\ndate, the company has collected data in several major classes of CNS\npharmaceuticals. The database contains processed OMEI� image data mapped\nin coordinate computer space as statistical metabolic end-effect\nprofiles.\n\nIf you know someone who might be interested, please have them contact me\nat 800-466-4489 x171 or email their resume to jcrystal@lucascareers.com.\n\nI appreciate your help!\n\nSincerely,\n\nJennifer Crystal\nSenior Recruiter - IT/Engineering\nLucasGroup\nRecruiting Excellence Since 1970\n\n3384 Peachtree Road, Suite 700\nAtlanta, GA 30326\n404-239-5630 x171\n800-466-4489 x171\njcrystal@lucascareers.com\nwww.lucascareers.com\n\n\n", "msg_date": "Fri, 12 Oct 2001 13:25:57 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "[Fwd: Postgre SQL Developer - Chicago, IL]" } ]
[ { "msg_contents": "Hi,\n\nwe'd like to submit new module contrib/tsearch which\ncontains implementation of new data type txtidx -\na searchable data type (textual) with indexed access.\n\nIt's based on current CVS and will not work with earlier\nversions of PostgreSQL.\n\nArchive is available from\nhttp://www.sai.msu.su/~megera/postgres/gist/tsearch/tsearch.tar.gz\nIt's about 100Kb.\n\nFind below README.tsearch with some info.\n\nYour questions and comments are welcome !\nOleg Bartunov (oleg@sai.msu.su) and Teodor Sigaev (teodor@stack.net)\n\n\tRegards,\n\t\tOleg\n\nREADME.tsearch\n\nTsearch contrib module contains implementation of new data type txtidx -\na searchable data type (textual) with indexed access.\n\nAll work was done by Teodor Sigaev (teodor@stack.net) and Oleg Bartunov\n(oleg@sai.msu.su).\n\nIMPORTANT NOTICE:\n\nThis is a first step of our work on integration of OpenFTS\nfull text search engine (http://openfts.sourceforge.net/) into\nPostgreSQL. It's based on our recent development of GiST\n(Generalized Search Tree) for PostgreSQL 7.2 (see our GiST page\nat http://www.sai.msu.su/~megera/postgres/gist/ for info about GiST)\nand will work only for PostgreSQL version 7.2 and later.\n\nWe didn't try to implement a full-featured search engine with\nstable interfaces but rather to experiment with various approaches.\nMany issues remain (most of them just not documented or\nimplemented) but we'd like to present a working prototype\nof full text search engine fully integrated into PostgreSQL to\ncollect users' feedback and recommendations.\n\nINSTALLATION:\n\n cd contrib/tsearch\n gmake\n gmake install\n\nREGRESSION TEST:\n\n gmake installcheck\n\nUSAGE:\n\n psql DATABASE < tsearch.sql (from contrib/tsearch)\n\n\nINTRODUCTION:\n\nThis module provides an implementation of a new data type 'txtidx' which is\na string of space-separated \"words\". 
\"Words\" with spaces\nshould be enclosed in apostrophes and apostrophes inside a \"word\" should be\nescaped by backslash.\n\nThis is quite different from OpenFTS approach which uses array\nof integers (ID of lexems) and requires storing of lexem-id pairs in database.\nOne of the prominent benefit of this new approach is that it's possible now\nto perform full text search in a 'natural' way.\n\nSome examples:\n\n create table foo (\n\ttitleidx txtidx\n );\n\n 2 regular words:\n insert into foo values ( 'the are' );\n Word with space:\n insert into foo values ( 'the\\\\ are' );\n Words with apostrophe:\n insert into foo values ( 'value\\'s this' );\n Complex word with apostrophe:\n insert into foo values ( 'value\\'s this we \\'PostgreSQL site\\'' );\n\n select * from foo where titleidx @@ '\\'PostgreSQL site\\' | this';\n select * from foo where titleidx @@ 'value\\'s | this';\n select * from foo where titleidx @@ '(the|this)&!we';\n\ntest=# select 'two words'::txtidx;\n txtidx\n---------------\n 'two' 'words'\n(1 row)\n\ntest=# select 'single\\\\ word'::txtidx;\n txtidx\n---------------\n 'single word'\n(1 row)\n\n\nFULL TEXT SEARCH:\n\nThe basic idea of this data type is to use it for full text search inside\ndatabase. If you have a 'text' column title and corresponding column\ntitleidx of type 'txtidx', which contains the same information from\ntext column, then search on title could be replaced by\nsearching on titleidx which would be fast because of indexed access.\n\nAs a real life example consider database with table 'titles' containing\ntitles of mailing list postings in column 'title':\n\n create table titles (\n title text\n\n );\n\nSuppose, you already have a lot of titles and want to do full text search\non them.\n\nFirst, you need to install contrib/tsearch module (see INSTALLATION and USAGE).\nAdd column 'titleidx' of type txtidx, containing space separated words from\ntitle. 
It's possible to use the function txt2txtidx(title) to fill the 'titleidx'\ncolumn (see notice 1):\n\n -- add titleidx column of type txtidx\n alter table titles add titleidx txtidx;\n update titles set titleidx=txt2txtidx(title);\n\nCreate an index on titleidx:\n create index t_idx on titles using gist(titleidx);\n\nand now you can search all titles with the words 'patch' and 'gist':\n select title from titles where titleidx ## 'patch&gist';\n\nHere, ## is a new operator defined for type 'txtidx' which can use an index\n(if one exists) built on titleidx. This operator uses morphology to\nexpand the query, i.e.\n ## 'patches&gist' will also find titles with 'patch' and 'gist'.\nIf you want to pass the query as is, use the operator @@ instead:\n select title from titles where titleidx @@ 'patch&gist';\nbut remember that the function txt2txtidx does use morphology, so you need\nto fill the column 'titleidx' some other way. We hope to provide more\nconsistent and convenient interfaces in future releases.\n\nA query can contain the boolean operators &,|,!,() with their usual meaning,\nfor example: 'patch&gist&!cvs', 'patch|cvs'.\nEach operator ( ##, @@ ) requires the appropriate query type -\n txtidx ## mquery_txt\n txtidx @@ query_txt\n\nTo see what query will actually be used:\n\ntest=# select 'patches&gist'::mquery_txt;\n mquery_txt\n------------------\n 'patch' & 'gist'\n(1 row)\n\ntest=# select 'patches&gist'::query_txt;\n query_txt\n--------------------\n 'patches' & 'gist'\n(1 row)\n\nNotice the difference!\n\nYou can use a trigger to be sure the column 'titleidx' stays consistent\nwith any changes in the column 'title':\n\n create trigger txtidxupdate before update or insert on titles\n for each row execute procedure tsearch(titleidx, title);\n\nThis trigger uses the same parser and dictionaries as the function\ntxt2txtidx (see notice 1).\nThe current syntax allows creating a trigger for the several columns\nyou want to be searchable:\n\n create trigger txtidxupdate before update or insert on titles\n for each row 
execute procedure tsearch(titleidx, title1, title2,... );\n\nUse the function txtidxsize(titleidx) to get the number of \"words\" in column\ntitleidx. To get the total number of words in table titles:\n\ntest=# select sum(txtidxsize(titleidx)) from titles;\n sum\n---------\n 1917182\n(1 row)\n\nNOTICES:\n\n1.\nThe function txt2txtidx and the trigger use the parser and dictionaries coming with\nthis contrib module by default. The parser is mostly the same as in OpenFTS and\nthe dictionaries are simple stemmers (a sort of Lovins stemmer, which uses a\nlongest-match algorithm) for the English and Russian languages. There is a perl\nscript makedict/makedict.pl, which can be used to create specific\ndictionaries from files with endings and stop-words.\nExample files for the English and Russian languages are available\nfrom http://www.sai.msu.su/~megera/postgres/gist/tsearch/.\nRun the script without parameters to see information about arguments and options.\n\nExample:\ncd makedict\n./makedict.pl -l LOCALNAME -e FILEENDINGS -s FILESTOPWORD \\\n -o ../dict/YOURDICT.dct\n\nOther options of makedict.pl:\n-f do not execute tolower for any char\n-a stop-word checking is performed after lemmatization,\n default is before\n\nYou need to edit dict.h to use your dictionary and, probably,\nmorph.c to change the mapdict array.\n\nDon't forget to do\n make clean; make; make install\n\n2.\nAs mentioned above, we don't explicitly use IDs of lexemes\nas in OpenFTS but use a hash function (crc32) instead to map a lexeme to an\ninteger. 
Our experiments show that the probability of collision is quite small:\nfor an English text it's about 10**(-6) and 10**(-5) for a Russian collection.\nThe default installation doesn't check for collisions, but if your application\ndoes need to guarantee an exact (no collisions) search, you need\nto update the system table to mark the index as lossy:\n\n update pg_amop set amopreqcheck = true where amopclaid =\n (select oid from pg_opclass where opcname = 'gist_txtidx_ops');\n\nIf you don't care about collisions:\n\n update pg_amop set amopreqcheck = false where amopclaid =\n (select oid from pg_opclass where opcname = 'gist_txtidx_ops');\n\n3.\ntxtidx doesn't preserve word ordering (this is not critical for searching)\nfor performance reasons, for example:\n\ntest=# select 'page two'::txtidx;\n txtidx\n--------------\n 'two' 'page'\n(1 row)\n\n4.\nIndexed access provided by the txtidx data type isn't always good\nbecause of the internal data structure we use (RD-Tree). In particular,\nqueries like '!gist' will be slower than just a sequential scan,\nbecause for such queries the RD-Tree doesn't provide selectivity on internal\nnodes and all checks have to be processed at leaf nodes, i.e. a scan of the\nfull index. 
You may play with the function querytree to see how effective\nindex usage will be:\n\ntest=# select querytree( 'patch&gist'::query_txt );\n querytree\n------------------\n 'patch' & 'gist'\n(1 row)\n\nThis is an example of a \"good\" query - the index will be effective for both words.\n\ntest=# select querytree( 'patch&!gist'::query_txt );\n querytree\n-----------\n 'patch'\n(1 row)\n\nThis means that the index is effective only for searching the word 'patch'; the resulting\nrows will be checked against '!gist'.\n\ntest=# select querytree( 'patch|!gist'::query_txt );\n querytree\n-----------\n T\n(1 row)\n\ntest=# select querytree( '!gist'::query_txt );\n querytree\n-----------\n T\n(1 row)\n\nThese two queries will be processed by scanning the full index!\nVery slow!\n\n5.\nThe following selects produce the same result\n\n select title from titles where titleidx @@ 'patch&gist';\n select title from titles where titleidx @@ 'patch' and titleidx @@ 'gist';\n\nbut the former will be more efficient, because of internal optimization\nin the query executor.\n\n\nTODO:\n\nBetter configurability (as in OpenFTS)\nUser interfaces to parser, dictionaries ...\nWrite documentation\n\n\nBENCHMARKS:\n\nIn our experiments we use a test collection which contains 377905\ntitles from various mailing lists stored in our mailware\nproject.\n\nAll runs were performed on an IBM ThinkPad T21 notebook with\nPIII 733 Mhz, 256 RAM, 20 Gb HDD, Linux 2.2.19, postgresql 7.2.dev.\nWe didn't do extensive benchmarking and all\nnumbers are provided for illustration. Actual performance\nstrongly depends on many factors (query, collection, dictionaries\nand hardware).\n\nThe collection is available for download from\nhttp://www.sai.msu.su/~megera/postgres/gist/tsearch/\nas mw_titles.gz (about 3Mb).\n\n0. install contrib/tsearch module\n1. createdb test\n2. psql test < tsearch.sql (from contrib/tsearch)\n3. 
zcat mw_titles.gz | psql test\n (it will creates table, copy test data and creates index)\n\nDatabase contains one table:\n\ntest=# \\d titles\n Table \"titles\"\n Column | Type | Modifiers\n----------+------------------------+-----------\n title | character varying(256) |\n titleidx | txtidx |\nIndexes: t_idx\n\nIndex was created as:\n create index t_idx on titles using gist(titleidx);\n (notice: this operation takes about 14 minutes on my notebook)\n\nTypical select looks like:\n select title from titles where titleidx @@ 'patch&gist';\n\nTotal number of lexems in collection : 1917182\n\n1. We trust index - we consider index is exact and no\n checking against tuples is necessary.\n\n update pg_amop set amopreqcheck = false where amopclaid =\n (select oid from pg_opclass where opcname = 'gist_txtidx_ops');\n\nusing gist indices\n1: titleidx @@ 'patch&gist' 0.000u 0.000s 0m0.054s 0.00%\n2: titleidx @@ 'patch&gist' 0.020u 0.000s 0m0.045s 44.82%\n3: titleidx @@ 'patch&gist' 0.000u 0.000s 0m0.044s 0.00%\nusing gist indices (morph)\n1: titleidx ## 'patch&gist' 0.000u 0.010s 0m0.046s 21.62%\n2: titleidx ## 'patch&gist' 0.010u 0.010s 0m0.046s 43.47%\n3: titleidx ## 'patch&gist' 0.000u 0.000s 0m0.046s 0.00%\ndisable gist index\n1: titleidx @@ 'patch&gist' 0.000u 0.010s 0m1.601s 0.62%\n2: titleidx @@ 'patch&gist' 0.000u 0.000s 0m1.607s 0.00%\n3: titleidx @@ 'patch&gist' 0.010u 0.000s 0m1.607s 0.62%\ntraditional like\n1: title ~* 'gist' and title ~* 'patch' 0.010u 0.000s 0m9.206s 0.10%\n2: title ~* 'gist' and title ~* 'patch' 0.000u 0.010s 0m9.205s 0.10%\n3: title ~* 'gist' and title ~* 'patch' 0.010u 0.000s 0m9.208s 0.10%\n\n2. 
Need to check results against tuples to avoid possible hash collision.\n\n update pg_amop set amopreqcheck = true where amopclaid =\n (select oid from pg_opclass where opcname = 'gist_txtidx_ops');\n\nusing gist indices\n1: titleidx @@ 'patch&gist' 0.010u 0.000s 0m0.052s 19.26%\n2: titleidx @@ 'patch&gist' 0.000u 0.000s 0m0.045s 0.00%\n3: titleidx @@ 'patch&gist' 0.010u 0.000s 0m0.045s 22.39%\nusing gist indices (morph)\n1: titleidx ## 'patch&gist' 0.000u 0.000s 0m0.046s 0.00%\n2: titleidx ## 'patch&gist' 0.000u 0.010s 0m0.046s 21.75%\n3: titleidx ## 'patch&gist' 0.020u 0.000s 0m0.047s 42.13%\n\nThere are no visible difference between these 2 cases but your\nmileage may vary.\n\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 12 Oct 2001 18:02:50 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "New contrib/tsearch module for 7.2" }, { "msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> we'd like to submit new module contrib/tsearch which\n> contains implementation of new data type txtidx -\n> a searchable data type (textual) with indexed access.\n\nCommitted into contrib. I made an addition of a cast to unsigned char\nin the tolower() calls that didn't already have one. Without this, the\nregression test didn't pass. With it, it still didn't pass :-( ... but\nI believe your original expected file was incorrect because of the lack\nof cast. 
I committed an expected file containing the results I now get.\nWould you check this and confirm or deny that it's okay?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Oct 2001 19:22:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: New contrib/tsearch module for 7.2 " }, { "msg_contents": "Thanks Tom,\n\npatch will be submitted.\n\n\tregards,\n\n\t\tOleg\nOn Fri, 12 Oct 2001, Tom Lane wrote:\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > we'd like to submit new module contrib/tsearch which\n> > contains implementation of new data type txtidx -\n> > a searchable data type (textual) with indexed access.\n>\n> Committed into contrib. I made an addition of a cast to unsigned char\n> in the tolower() calls that didn't already have one. Without this, the\n> regression test didn't pass. With it, it still didn't pass :-( ... but\n> I believe your original expected file was incorrect because of the lack\n> of cast. I committed an expected file containing the results I now get.\n> Would you check this and confirm or deny that it's okay?\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sun, 14 Oct 2001 18:08:53 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "Re: New contrib/tsearch module for 7.2 " }, { "msg_contents": "> Committed into contrib. 
I made an addition of a cast to unsigned char\n> in the tolower() calls that didn't already have one. Without this, the\n> regression test didn't pass. With it, it still didn't pass :-( ... but\n> I believe your original expected file was incorrect because of the lack\n> of cast. I committed an expected file containing the results I now get.\n> Would you check this and confirm or deny that it's okay?\n> \n\n\nIt's my mistake, the problem is with locale, I'll send patch as soon as, but\nI can't see tsearch in contrib directory (in contrib/README tsearch exists) in \ncurrent cvs. :(\n\n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n", "msg_date": "Mon, 15 Oct 2001 12:55:58 +0400", "msg_from": "Teodor Sigaev <teodor@stack.net>", "msg_from_op": false, "msg_subject": "Re: New contrib/tsearch module for 7.2" }, { "msg_contents": "> It's my mistake, the problem is with locale, I'll send patch as soon as, \n> but\n> I can't see tsearch in contrib directory (in contrib/README tsearch \n> exists) in current cvs. :(\n\nOk now, I got.\n\nPlease apply attached patch for contrib/tsearch.\n\n\n\n-- \nTeodor Sigaev\nteodor@stack.net", "msg_date": "Mon, 15 Oct 2001 16:34:28 +0400", "msg_from": "Teodor Sigaev <teodor@stack.net>", "msg_from_op": false, "msg_subject": "Re: New contrib/tsearch module for 7.2" }, { "msg_contents": "Teodor Sigaev <teodor@stack.net> writes:\n> I can't see tsearch in contrib directory (in contrib/README tsearch\n> exists) in current cvs. :(\n\nIt's there: I see it, and so does cvsweb. Did you use -d option in\ncvs update? \n\nPersonally I run with a ~/.cvsrc containing\n\ncvs -z3\nupdate -d -P\ncheckout -P\n\nso that I don't have to remember to give the options needed to make cvs\ntrack subdirectory additions/deletions ... 
its default behavior is\nreally quite brain-dead.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Oct 2001 10:07:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: New contrib/tsearch module for 7.2 " }, { "msg_contents": "Teodor Sigaev <teodor@stack.net> writes:\n> Please apply attached patch for contrib/tsearch.\n\nI've applied the regression-test changes. I won't apply the Makefile\nchange. I took out those rules for a reason, which is that they cause\nthe module to fail to build entirely if one uses a non-gcc compiler.\n(\"-MM\" is gcc-specific.)\n\nIt should be unnecessary to have any dependency-list-building rules\nin a contrib module anyway; if you've configured with --enable-depend\nthen I'd expect Makefile.global to take care of it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Oct 2001 13:47:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: New contrib/tsearch module for 7.2 " } ]
[ { "msg_contents": "I've noticed general buggyness with ecpg on one of my source files for\na while now but it only got really annoying after setting up overnight\nbuild on Linux (output corrupt code), Solaris (output correct code),\nAIX (crashed) and HPUX (crashed).\n\nAfter comparing the output from ecpg on Linux and Solaris the\nfollowing type of statement was the root of the crash:\n\n EXEC SQL GRANT ALL ON exampletable TO PUBLIC;\n\nWhen the parser code was rebuilding the query to pass onto the server\nit was trying to include an extra, non-existent, parameter...\n\nThe bug is present in 7.1.2, 7.1.3 and the current CVS sources. The\nfollowing patch (against CVS version) corrects this bug:\n\n./interfaces/ecpg/preproc/preproc.y\n*** ./interfaces/ecpg/preproc/preproc.y.orig\tFri Oct 12 16:22:05 2001\n--- ./interfaces/ecpg/preproc/preproc.y\tFri Oct 12 16:22:09 2001\n***************\n*** 1693,1699 ****\n \n GrantStmt: GRANT privileges ON opt_table relation_name_list TO grantee_list opt_with_grant\n \t\t\t\t{\n! \t\t\t\t\t$$ = cat_str(8, make_str(\"grant\"), $2, make_str(\"on\"), $4, $5, make_str(\"to\"), $7);\n \t\t\t\t}\n \t\t;\n \n--- 1693,1699 ----\n \n GrantStmt: GRANT privileges ON opt_table relation_name_list TO grantee_list opt_with_grant\n \t\t\t\t{\n! \t\t\t\t\t$$ = cat_str(7, make_str(\"grant\"), $2, make_str(\"on\"), $4, $5, make_str(\"to\"), $7);\n \t\t\t\t}\n \t\t;\n", "msg_date": "Fri, 12 Oct 2001 16:33:00 +0100 (BST)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": true, "msg_subject": "ecpg - GRANT bug" }, { "msg_contents": "Lee Kindness <lkindness@csl.co.uk> writes:\n> ***************\n> *** 1693,1699 ****\n \n> GrantStmt: GRANT privileges ON opt_table relation_name_list TO grantee_list opt_with_grant\n> \t\t\t\t{\n> ! 
\t\t\t\t\t$$ = cat_str(8, make_str(\"grant\"), $2, make_str(\"on\"), $4, $5, make_str(\"to\"), $7);\n> \t\t\t\t}\n> \t\t;\n \n> --- 1693,1699 ----\n \n> GrantStmt: GRANT privileges ON opt_table relation_name_list TO grantee_list opt_with_grant\n> \t\t\t\t{\n> ! \t\t\t\t\t$$ = cat_str(7, make_str(\"grant\"), $2, make_str(\"on\"), $4, $5, make_str(\"to\"), $7);\n> \t\t\t\t}\n> \t\t;\n\n\nUh, isn't the correct fix\n\n! \t\t\t\t\t$$ = cat_str(8, make_str(\"grant\"), $2, make_str(\"on\"), $4, $5, make_str(\"to\"), $7, $8);\n\nISTM your patch loses the opt_with_grant clause. (Of course the backend\ndoesn't currently accept that clause anyway, but that's no reason for\necpg to drop it.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Oct 2001 13:18:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg - GRANT bug " }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\n> I've noticed general buggyness with ecpg on one of my source files for\n> a while now but it only got really annoying after setting up overnight\n> build on Linux (output corrupt code), Solaris (output correct code),\n> AIX (crashed) and HPUX (crashed).\n> \n> After comparing the output from ecpg on Linux and Solaris the\n> following type of statement was the root of the crash:\n> \n> EXEC SQL GRANT ALL ON exampletable TO PUBLIC;\n> \n> When the parser code was rebuilding the query to pass onto the server\n> it was trying to include an extra, non-existent, parameter...\n> \n> The bug is present in 7.1.2, 7.1.3 and the current CVS sources. 
The\n> following patch (against CVS version) corrects this bug:\n> \n> ./interfaces/ecpg/preproc/preproc.y\n> *** ./interfaces/ecpg/preproc/preproc.y.orig\tFri Oct 12 16:22:05 2001\n> --- ./interfaces/ecpg/preproc/preproc.y\tFri Oct 12 16:22:09 2001\n> ***************\n> *** 1693,1699 ****\n> \n> GrantStmt: GRANT privileges ON opt_table relation_name_list TO grantee_list opt_with_grant\n> \t\t\t\t{\n> ! \t\t\t\t\t$$ = cat_str(8, make_str(\"grant\"), $2, make_str(\"on\"), $4, $5, make_str(\"to\"), $7);\n> \t\t\t\t}\n> \t\t;\n> \n> --- 1693,1699 ----\n> \n> GrantStmt: GRANT privileges ON opt_table relation_name_list TO grantee_list opt_with_grant\n> \t\t\t\t{\n> ! \t\t\t\t\t$$ = cat_str(7, make_str(\"grant\"), $2, make_str(\"on\"), $4, $5, make_str(\"to\"), $7);\n> \t\t\t\t}\n> \t\t;\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 13 Oct 2001 00:14:46 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg - GRANT bug" }, { "msg_contents": "Tom Lane writes:\n > Uh, isn't the correct fix\n > ! $$ = cat_str(8, make_str(\"grant\"), $2, make_str(\"on\"), $4, $5,\n > make_str(\"to\"), $7, $8);\n > ISTM your patch loses the opt_with_grant clause. (Of course the\n > backend doesn't currently accept that clause anyway, but that's no\n > reason for ecpg to drop it.)\n\nMy patch doesn't loose the option, it's never been passed on anyway:\n\n opt_with_grant: WITH GRANT OPTION\n\t\t\t\t{\n\t\t\t\t\tmmerror(ET_ERROR, \"WITH GRANT OPTION is not supported. 
Only relation owners can set privileges\");\n\t\t\t\t }\n\t\t| /*EMPTY*/ \n\t\t;\n\nThe existing code in ecpg/preproc/preproc.y to handle the WITH option\nsimply throws an error and aborts the processing... The patch below\nprevents the segfault and also passes on the WITH option to the\nbackend, probably a better fix.\n\nRegards, Lee.\n\nIndex: interfaces/ecpg/preproc/preproc.y\n===================================================================\nRCS file: /projects/cvsroot/pgsql/src/interfaces/ecpg/preproc/preproc.y,v\nretrieving revision 1.159\ndiff -c -r1.159 preproc.y\n*** interfaces/ecpg/preproc/preproc.y\t2001/10/14 12:07:57\t1.159\n--- interfaces/ecpg/preproc/preproc.y\t2001/10/15 09:06:29\n***************\n*** 1693,1699 ****\n \n GrantStmt: GRANT privileges ON opt_table relation_name_list TO grantee_list opt_with_grant\n \t\t\t\t{\n! \t\t\t\t\t$$ = cat_str(7, make_str(\"grant\"), $2, make_str(\"on\"), $4, $5, make_str(\"to\"), $7);\n \t\t\t\t}\n \t\t;\n \n--- 1693,1699 ----\n \n GrantStmt: GRANT privileges ON opt_table relation_name_list TO grantee_list opt_with_grant\n \t\t\t\t{\n! \t\t\t\t\t$$ = cat_str(8, make_str(\"grant\"), $2, make_str(\"on\"), $4, $5, make_str(\"to\"), $7, $8);\n \t\t\t\t}\n \t\t;\n \n***************\n*** 1769,1779 ****\n \t\t| grantee_list ',' grantee \t{ $$ = cat_str(3, $1, make_str(\",\"), $3); }\n \t\t;\n \n! opt_with_grant: WITH GRANT OPTION\n! \t\t\t\t{\n! \t\t\t\t\tmmerror(ET_ERROR, \"WITH GRANT OPTION is not supported. Only relation owners can set privileges\");\n! \t\t\t\t }\n! \t\t| /*EMPTY*/ \n \t\t;\n \n \n--- 1769,1776 ----\n \t\t| grantee_list ',' grantee \t{ $$ = cat_str(3, $1, make_str(\",\"), $3); }\n \t\t;\n \n! opt_with_grant: WITH GRANT OPTION { $$ = make_str(\"with grant option\"); }\n! 
\t\t| /*EMPTY*/ { $$ = EMPTY; }\n \t\t;\n \n \n\n\n", "msg_date": "Mon, 15 Oct 2001 10:09:03 +0100 (BST)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": true, "msg_subject": "ecpg - GRANT bug " }, { "msg_contents": "On Tue, 16 Oct 2001, Lee Kindness wrote:\n\n> And the patch below corrects a pet peeve I have with ecpg, all errors\n> and warnings are output with a line number one less than reality...\n\nI think this patch is wrong. Wouldn't it be better to make the line number\nin yylineno be correct? Also, there are users of the line number in pcg.l\nwhich you didn't change.\n\nLooking at it, I don't see why the line number is off. It is initialized\nto 1 at the begining and whenever a new file is included. In the generated\ncode, it is incrimented whenever a '\\n' is found. Strange...\n\nTake care,\n\nBill\n\n> Lee.\n>\n> *** ./interfaces/ecpg/preproc/preproc.y.orig\tTue Oct 16 10:19:27 2001\n> --- ./interfaces/ecpg/preproc/preproc.y\tTue Oct 16 10:19:49 2001\n> ***************\n> *** 36,49 ****\n> switch(type)\n> {\n> \tcase ET_NOTICE:\n> ! \t\tfprintf(stderr, \"%s:%d: WARNING: %s\\n\", input_filename, yylineno, error);\n> \t\tbreak;\n> \tcase ET_ERROR:\n> ! \t\tfprintf(stderr, \"%s:%d: ERROR: %s\\n\", input_filename, yylineno, error);\n> \t\tret_value = PARSE_ERROR;\n> \t\tbreak;\n> \tcase ET_FATAL:\n> ! \t\tfprintf(stderr, \"%s:%d: ERROR: %s\\n\", input_filename, yylineno, error);\n> \t\texit(PARSE_ERROR);\n> }\n> }\n> --- 36,52 ----\n> switch(type)\n> {\n> \tcase ET_NOTICE:\n> ! \t\tfprintf(stderr, \"%s:%d: WARNING: %s\\n\", input_filename,\n> ! \t\t\tyylineno + 1, error);\n> \t\tbreak;\n> \tcase ET_ERROR:\n> ! \t\tfprintf(stderr, \"%s:%d: ERROR: %s\\n\", input_filename,\n> ! \t\t\tyylineno + 1, error);\n> \t\tret_value = PARSE_ERROR;\n> \t\tbreak;\n> \tcase ET_FATAL:\n> ! \t\tfprintf(stderr, \"%s:%d: ERROR: %s\\n\", input_filename,\n> ! 
\t\t\tyylineno + 1, error);\n> \t\texit(PARSE_ERROR);\n> }\n> }\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Mon, 15 Oct 2001 05:15:13 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": false, "msg_subject": "Re: ecpg - GRANT bug" }, { "msg_contents": "Lee Kindness <lkindness@csl.co.uk> writes:\n> The existing code in ecpg/preproc/preproc.y to handle the WITH option\n> simply throws an error and aborts the processing... The patch below\n> prevents the segfault and also passes on the WITH option to the\n> backend, probably a better fix.\n\nI agree. It shouldn't be ecpg's business to throw errors on behalf of\nthe backend, especially not \"not yet implemented\" kinds of errors.\nThat just causes ecpg to be more tightly coupled to a particular backend\nversion than it needs to be.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Oct 2001 10:10:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg - GRANT bug " }, { "msg_contents": "Tom Lane writes:\n > Lee Kindness <lkindness@csl.co.uk> writes:\n > > The existing code in ecpg/preproc/preproc.y to handle the WITH option\n > > simply throws an error and aborts the processing... The patch below\n > > prevents the segfault and also passes on the WITH option to the\n > > backend, probably a better fix.\n > I agree. 
It shouldn't be ecpg's business to throw errors on behalf of\n > the backend, especially not \"not yet implemented\" kinds of errors.\n > That just causes ecpg to be more tightly coupled to a particular backend\n > version than it needs to be.\n\nIn which case a number of other cases should be weeded out of\nparser.y and passed onto the backend:\n\n CREATE TABLE: GLOBAL TEMPORARY option.\n CREATE FUNCTION: IN/OUT/INOUT options (note there's a bug in parser.y\n there anyway, it would pass on 'oinut' for INOUT).\n COMMIT: AND [NO] CHAIN options? Where do these come from,\n it's not ANSI (i'd probably leave this one).\n\nPerhaps an ET_NOTICE should still be output however...\n\nLet me known if you want a patch for these cases too.\n\nRegards, Lee Kindness.\n \n", "msg_date": "Mon, 15 Oct 2001 15:32:27 +0100 (BST)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": true, "msg_subject": "Re: ecpg - GRANT bug " }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\n> Tom Lane writes:\n> > Uh, isn't the correct fix\n> > ! $$ = cat_str(8, make_str(\"grant\"), $2, make_str(\"on\"), $4, $5,\n> > make_str(\"to\"), $7, $8);\n> > ISTM your patch loses the opt_with_grant clause. (Of course the\n> > backend doesn't currently accept that clause anyway, but that's no\n> > reason for ecpg to drop it.)\n> \n> My patch doesn't loose the option, it's never been passed on anyway:\n> \n> opt_with_grant: WITH GRANT OPTION\n> \t\t\t\t{\n> \t\t\t\t\tmmerror(ET_ERROR, \"WITH GRANT OPTION is not supported. Only relation owners can set privileges\");\n> \t\t\t\t }\n> \t\t| /*EMPTY*/ \n> \t\t;\n> \n> The existing code in ecpg/preproc/preproc.y to handle the WITH option\n> simply throws an error and aborts the processing... 
The patch below\n> prevents the segfault and also passes on the WITH option to the\n> backend, probably a better fix.\n> \n> Regards, Lee.\n> \n> Index: interfaces/ecpg/preproc/preproc.y\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/interfaces/ecpg/preproc/preproc.y,v\n> retrieving revision 1.159\n> diff -c -r1.159 preproc.y\n> *** interfaces/ecpg/preproc/preproc.y\t2001/10/14 12:07:57\t1.159\n> --- interfaces/ecpg/preproc/preproc.y\t2001/10/15 09:06:29\n> ***************\n> *** 1693,1699 ****\n> \n> GrantStmt: GRANT privileges ON opt_table relation_name_list TO grantee_list opt_with_grant\n> \t\t\t\t{\n> ! \t\t\t\t\t$$ = cat_str(7, make_str(\"grant\"), $2, make_str(\"on\"), $4, $5, make_str(\"to\"), $7);\n> \t\t\t\t}\n> \t\t;\n> \n> --- 1693,1699 ----\n> \n> GrantStmt: GRANT privileges ON opt_table relation_name_list TO grantee_list opt_with_grant\n> \t\t\t\t{\n> ! \t\t\t\t\t$$ = cat_str(8, make_str(\"grant\"), $2, make_str(\"on\"), $4, $5, make_str(\"to\"), $7, $8);\n> \t\t\t\t}\n> \t\t;\n> \n> ***************\n> *** 1769,1779 ****\n> \t\t| grantee_list ',' grantee \t{ $$ = cat_str(3, $1, make_str(\",\"), $3); }\n> \t\t;\n> \n> ! opt_with_grant: WITH GRANT OPTION\n> ! \t\t\t\t{\n> ! \t\t\t\t\tmmerror(ET_ERROR, \"WITH GRANT OPTION is not supported. Only relation owners can set privileges\");\n> ! \t\t\t\t }\n> ! \t\t| /*EMPTY*/ \n> \t\t;\n> \n> \n> --- 1769,1776 ----\n> \t\t| grantee_list ',' grantee \t{ $$ = cat_str(3, $1, make_str(\",\"), $3); }\n> \t\t;\n> \n> ! opt_with_grant: WITH GRANT OPTION { $$ = make_str(\"with grant option\"); }\n> ! \t\t| /*EMPTY*/ { $$ = EMPTY; }\n> \t\t;\n> \n> \n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Oct 2001 14:16:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg - GRANT bug" }, { "msg_contents": "> In which case a number of other cases should be weeded out of\n> parser.y and passed onto the backend:\n> \n> CREATE TABLE: GLOBAL TEMPORARY option.\n> CREATE FUNCTION: IN/OUT/INOUT options (note there's a bug in parser.y\n> there anyway, it would pass on 'oinut' for INOUT).\n> COMMIT: AND [NO] CHAIN options? Where do these come from,\n> it's not ANSI (i'd probably leave this one).\n> \n> Perhaps an ET_NOTICE should still be output however...\n> \n> Let me known if you want a patch for these cases too.\n\nSure, send them on over.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Oct 2001 14:16:38 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg - GRANT bug" }, { "msg_contents": "Bruce Momjian writes:\n > Lee Kindness writes:\n > > In which case a number of other cases should be weeded out of\n > > parser.y and passed onto the backend:\n > > [ snip ]\n > > Let me known if you want a patch for these cases too.\n > Sure, send them on over.\n\nPatch below, it changes:\n\n 1. A number of mmerror(ET_ERROR) to mmerror(ET_NOTICE), passing on\n the (currently) unsupported options to the backend with warning.\n\n 2. Standardises warning messages in such cases.\n\n 3. Corrects typo in passing of 'CREATE FUNCTION/INOUT' parameter.\n\nPatch:\n\n? 
interfaces/ecpg/preproc/ecpg\nIndex: interfaces/ecpg/preproc/preproc.y\n===================================================================\nRCS file: /projects/cvsroot/pgsql/src/interfaces/ecpg/preproc/preproc.y,v\nretrieving revision 1.161\ndiff -c -r1.161 preproc.y\n*** interfaces/ecpg/preproc/preproc.y\t2001/10/15 20:15:09\t1.161\n--- interfaces/ecpg/preproc/preproc.y\t2001/10/16 09:15:53\n***************\n*** 1074,1084 ****\n \t\t| LOCAL TEMPORARY\t{ $$ = make_str(\"local temporary\"); }\n \t\t| LOCAL TEMP\t\t{ $$ = make_str(\"local temp\"); }\n \t\t| GLOBAL TEMPORARY\t{\n! \t\t\t\t\t mmerror(ET_ERROR, \"GLOBAL TEMPORARY TABLE is not currently supported\");\n \t\t\t\t\t $$ = make_str(\"global temporary\");\n \t\t\t\t\t}\n \t\t| GLOBAL TEMP\t\t{\n! \t\t\t\t\t mmerror(ET_ERROR, \"GLOBAL TEMPORARY TABLE is not currently supported\");\n \t\t\t\t\t $$ = make_str(\"global temp\");\n \t\t\t\t\t}\n \t\t| /*EMPTY*/\t\t{ $$ = EMPTY; }\n--- 1074,1084 ----\n \t\t| LOCAL TEMPORARY\t{ $$ = make_str(\"local temporary\"); }\n \t\t| LOCAL TEMP\t\t{ $$ = make_str(\"local temp\"); }\n \t\t| GLOBAL TEMPORARY\t{\n! \t\t\t\t\t mmerror(ET_NOTICE, \"Currently unsupported CREATE TABLE/GLOBAL TEMPORARY will be passed to backend\");\n \t\t\t\t\t $$ = make_str(\"global temporary\");\n \t\t\t\t\t}\n \t\t| GLOBAL TEMP\t\t{\n! \t\t\t\t\t mmerror(ET_NOTICE, \"Currently unsupported CREATE TABLE/GLOBAL TEMP will be passed to backend\");\n \t\t\t\t\t $$ = make_str(\"global temp\");\n \t\t\t\t\t}\n \t\t| /*EMPTY*/\t\t{ $$ = EMPTY; }\n***************\n*** 1103,1110 ****\n \t\t\t\t{\n \t\t\t\t\tif (strlen($4) > 0)\n \t\t\t\t\t{\n! \t\t\t\t\t\tsprintf(errortext, \"CREATE TABLE/COLLATE %s not yet implemented; clause ignored\", $4);\n! \t\t\t\t\t\tmmerror(ET_NOTICE, errortext);\n \t\t\t\t\t}\n \t\t\t\t\t$$ = cat_str(4, $1, $2, $3, $4);\n \t\t\t\t}\n--- 1103,1110 ----\n \t\t\t\t{\n \t\t\t\t\tif (strlen($4) > 0)\n \t\t\t\t\t{\n! 
\t\t\t\t\t\tsprintf(errortext, \"Currently unsupported CREATE TABLE/COLLATE %s will be passed to backend\", $4);\n! \t\t\t\t\t\tmmerror(ET_NOTICE, errortext);\n \t\t\t\t\t}\n \t\t\t\t\t$$ = cat_str(4, $1, $2, $3, $4);\n \t\t\t\t}\n***************\n*** 1219,1225 ****\n \t\t}\n \t\t| MATCH PARTIAL\t\t\n \t\t{\n! \t\t\tmmerror(ET_NOTICE, \"FOREIGN KEY/MATCH PARTIAL not yet implemented\");\n \t\t\t$$ = make_str(\"match partial\");\n \t\t}\n \t\t| /*EMPTY*/\n--- 1219,1225 ----\n \t\t}\n \t\t| MATCH PARTIAL\t\t\n \t\t{\n! \t\t\tmmerror(ET_NOTICE, \"Currently unsupported FOREIGN KEY/MATCH PARTIAL will be passed to backend\");\n \t\t\t$$ = make_str(\"match partial\");\n \t\t}\n \t\t| /*EMPTY*/\n***************\n*** 1614,1620 ****\n \t\t| BACKWARD\t{ $$ = make_str(\"backward\"); }\n \t\t| RELATIVE { $$ = make_str(\"relative\"); }\n | ABSOLUTE\t{\n! \t\t\t\t\tmmerror(ET_NOTICE, \"FETCH/ABSOLUTE not supported, backend will use RELATIVE\");\n \t\t\t\t\t$$ = make_str(\"absolute\");\n \t\t\t\t}\n \t\t;\n--- 1614,1620 ----\n \t\t| BACKWARD\t{ $$ = make_str(\"backward\"); }\n \t\t| RELATIVE { $$ = make_str(\"relative\"); }\n | ABSOLUTE\t{\n! \t\t\t\t\tmmerror(ET_NOTICE, \"Currently unsupported FETCH/ABSOLUTE will be passed to backend, backend will use RELATIVE\");\n \t\t\t\t\t$$ = make_str(\"absolute\");\n \t\t\t\t}\n \t\t;\n***************\n*** 1769,1775 ****\n \t\t| grantee_list ',' grantee \t{ $$ = cat_str(3, $1, make_str(\",\"), $3); }\n \t\t;\n \n! opt_with_grant: WITH GRANT OPTION { $$ = make_str(\"with grant option\"); }\n \t\t| /*EMPTY*/ { $$ = EMPTY; }\n \t\t;\n \n--- 1769,1779 ----\n \t\t| grantee_list ',' grantee \t{ $$ = cat_str(3, $1, make_str(\",\"), $3); }\n \t\t;\n \n! opt_with_grant: WITH GRANT OPTION\n! {\n! \t\t\t\t\tmmerror(ET_NOTICE, \"Currently unsupported GRANT/WITH GRANT OPTION will be passed to backend\");\n! \t\t\t\t\t$$ = make_str(\"with grant option\");\n! 
\t\t\t\t}\n \t\t| /*EMPTY*/ { $$ = EMPTY; }\n \t\t;\n \n***************\n*** 1919,1932 ****\n \n opt_arg: IN { $$ = make_str(\"in\"); }\n \t| OUT\t{ \n! \t\t mmerror(ET_ERROR, \"CREATE FUNCTION/OUT parameters are not supported\");\n \n \t \t $$ = make_str(\"out\");\n \t\t}\n \t| INOUT\t{ \n! \t\t mmerror(ET_ERROR, \"CREATE FUNCTION/INOUT parameters are not supported\");\n \n! \t \t $$ = make_str(\"oinut\");\n \t\t}\n \t;\n \n--- 1923,1936 ----\n \n opt_arg: IN { $$ = make_str(\"in\"); }\n \t| OUT\t{ \n! \t\t mmerror(ET_NOTICE, \"Currently unsupported CREATE FUNCTION/OUT will be passed to backend\");\n \n \t \t $$ = make_str(\"out\");\n \t\t}\n \t| INOUT\t{ \n! \t\t mmerror(ET_NOTICE, \"Currently unsupported CREATE FUNCTION/INOUT will be passed to backend\");\n \n! \t \t $$ = make_str(\"inout\");\n \t\t}\n \t;\n \n***************\n*** 2164,2170 ****\n \n opt_chain: AND NO CHAIN \t{ $$ = make_str(\"and no chain\"); }\n \t| AND CHAIN\t\t{\n! \t\t\t\t mmerror(ET_ERROR, \"COMMIT/CHAIN not yet supported\");\n \n \t\t\t\t $$ = make_str(\"and chain\");\n \t\t\t\t}\n--- 2168,2174 ----\n \n opt_chain: AND NO CHAIN \t{ $$ = make_str(\"and no chain\"); }\n \t| AND CHAIN\t\t{\n! \t\t\t\t mmerror(ET_NOTICE, \"Currently unsupported COMMIT/CHAIN will be passed to backend\");\n \n \t\t\t\t $$ = make_str(\"and chain\");\n \t\t\t\t}\n***************\n*** 2609,2620 ****\n \t\t\t}\n | GLOBAL TEMPORARY opt_table relation_name\n {\n! \t\t\t\tmmerror(ET_ERROR, \"GLOBAL TEMPORARY TABLE is not currently supported\");\n \t\t\t\t$$ = cat_str(3, make_str(\"global temporary\"), $3, $4);\n }\n | GLOBAL TEMP opt_table relation_name\n {\n! \t\t\t\tmmerror(ET_ERROR, \"GLOBAL TEMPORARY TABLE is not currently supported\");\n \t\t\t\t$$ = cat_str(3, make_str(\"global temp\"), $3, $4);\n }\n | TABLE relation_name\n--- 2613,2624 ----\n \t\t\t}\n | GLOBAL TEMPORARY opt_table relation_name\n {\n! 
\t\t\t\tmmerror(ET_NOTICE, \"Currently unsupported CREATE TABLE/GLOBAL TEMPORARY will be passed to backend\");\n \t\t\t\t$$ = cat_str(3, make_str(\"global temporary\"), $3, $4);\n }\n | GLOBAL TEMP opt_table relation_name\n {\n! \t\t\t\tmmerror(ET_NOTICE, \"Currently unsupported CREATE TABLE/GLOBAL TEMP will be passed to backend\");\n \t\t\t\t$$ = cat_str(3, make_str(\"global temp\"), $3, $4);\n }\n | TABLE relation_name\n", "msg_date": "Tue, 16 Oct 2001 10:16:38 +0100 (BST)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": true, "msg_subject": "Re: ecpg - GRANT bug" }, { "msg_contents": "Lee Kindness writes:\n > Patch below, it changes:\n > 1. A number of mmerror(ET_ERROR) to mmerror(ET_NOTICE), passing on\n > the (currently) unsupported options to the backend with warning.\n > 2. Standardises warning messages in such cases.\n > 3. Corrects typo in passing of 'CREATE FUNCTION/INOUT' parameter.\n\nAnd the patch below corrects a pet peeve I have with ecpg, all errors\nand warnings are output with a line number one less than reality...\n\nLee.\n\n*** ./interfaces/ecpg/preproc/preproc.y.orig\tTue Oct 16 10:19:27 2001\n--- ./interfaces/ecpg/preproc/preproc.y\tTue Oct 16 10:19:49 2001\n***************\n*** 36,49 ****\n switch(type)\n {\n \tcase ET_NOTICE: \n! \t\tfprintf(stderr, \"%s:%d: WARNING: %s\\n\", input_filename, yylineno, error); \n \t\tbreak;\n \tcase ET_ERROR:\n! \t\tfprintf(stderr, \"%s:%d: ERROR: %s\\n\", input_filename, yylineno, error);\n \t\tret_value = PARSE_ERROR;\n \t\tbreak;\n \tcase ET_FATAL:\n! \t\tfprintf(stderr, \"%s:%d: ERROR: %s\\n\", input_filename, yylineno, error);\n \t\texit(PARSE_ERROR);\n }\n }\n--- 36,52 ----\n switch(type)\n {\n \tcase ET_NOTICE: \n! \t\tfprintf(stderr, \"%s:%d: WARNING: %s\\n\", input_filename,\n! \t\t\tyylineno + 1, error); \n \t\tbreak;\n \tcase ET_ERROR:\n! \t\tfprintf(stderr, \"%s:%d: ERROR: %s\\n\", input_filename,\n! 
\t\t\tyylineno + 1, error);\n \t\tret_value = PARSE_ERROR;\n \t\tbreak;\n \tcase ET_FATAL:\n! \t\tfprintf(stderr, \"%s:%d: ERROR: %s\\n\", input_filename,\n! \t\t\tyylineno + 1, error);\n \t\texit(PARSE_ERROR);\n }\n }\n", "msg_date": "Tue, 16 Oct 2001 10:27:42 +0100 (BST)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": true, "msg_subject": "Re: ecpg - GRANT bug" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\n> Tom Lane writes:\n> > Uh, isn't the correct fix\n> > ! $$ = cat_str(8, make_str(\"grant\"), $2, make_str(\"on\"), $4, $5,\n> > make_str(\"to\"), $7, $8);\n> > ISTM your patch loses the opt_with_grant clause. (Of course the\n> > backend doesn't currently accept that clause anyway, but that's no\n> > reason for ecpg to drop it.)\n> \n> My patch doesn't loose the option, it's never been passed on anyway:\n> \n> opt_with_grant: WITH GRANT OPTION\n> \t\t\t\t{\n> \t\t\t\t\tmmerror(ET_ERROR, \"WITH GRANT OPTION is not supported. Only relation owners can set privileges\");\n> \t\t\t\t }\n> \t\t| /*EMPTY*/ \n> \t\t;\n> \n> The existing code in ecpg/preproc/preproc.y to handle the WITH option\n> simply throws an error and aborts the processing... 
The patch below\n> prevents the segfault and also passes on the WITH option to the\n> backend, probably a better fix.\n> \n> Regards, Lee.\n> \n> Index: interfaces/ecpg/preproc/preproc.y\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/interfaces/ecpg/preproc/preproc.y,v\n> retrieving revision 1.159\n> diff -c -r1.159 preproc.y\n> *** interfaces/ecpg/preproc/preproc.y\t2001/10/14 12:07:57\t1.159\n> --- interfaces/ecpg/preproc/preproc.y\t2001/10/15 09:06:29\n> ***************\n> *** 1693,1699 ****\n> \n> GrantStmt: GRANT privileges ON opt_table relation_name_list TO grantee_list opt_with_grant\n> \t\t\t\t{\n> ! \t\t\t\t\t$$ = cat_str(7, make_str(\"grant\"), $2, make_str(\"on\"), $4, $5, make_str(\"to\"), $7);\n> \t\t\t\t}\n> \t\t;\n> \n> --- 1693,1699 ----\n> \n> GrantStmt: GRANT privileges ON opt_table relation_name_list TO grantee_list opt_with_grant\n> \t\t\t\t{\n> ! \t\t\t\t\t$$ = cat_str(8, make_str(\"grant\"), $2, make_str(\"on\"), $4, $5, make_str(\"to\"), $7, $8);\n> \t\t\t\t}\n> \t\t;\n> \n> ***************\n> *** 1769,1779 ****\n> \t\t| grantee_list ',' grantee \t{ $$ = cat_str(3, $1, make_str(\",\"), $3); }\n> \t\t;\n> \n> ! opt_with_grant: WITH GRANT OPTION\n> ! \t\t\t\t{\n> ! \t\t\t\t\tmmerror(ET_ERROR, \"WITH GRANT OPTION is not supported. Only relation owners can set privileges\");\n> ! \t\t\t\t }\n> ! \t\t| /*EMPTY*/ \n> \t\t;\n> \n> \n> --- 1769,1776 ----\n> \t\t| grantee_list ',' grantee \t{ $$ = cat_str(3, $1, make_str(\",\"), $3); }\n> \t\t;\n> \n> ! opt_with_grant: WITH GRANT OPTION { $$ = make_str(\"with grant option\"); }\n> ! 
\t\t| /*EMPTY*/ { $$ = EMPTY; }\n> \t\t;\n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Oct 2001 09:55:59 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg - GRANT bug" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\n> Bruce Momjian writes:\n> > Lee Kindness writes:\n> > > In which case a number of other cases should be weeded out of\n> > > parser.y and passed onto the backend:\n> > > [ snip ]\n> > > Let me known if you want a patch for these cases too.\n> > Sure, send them on over.\n> \n> Patch below, it changes:\n> \n> 1. A number of mmerror(ET_ERROR) to mmerror(ET_NOTICE), passing on\n> the (currently) unsupported options to the backend with warning.\n> \n> 2. Standardises warning messages in such cases.\n> \n> 3. Corrects typo in passing of 'CREATE FUNCTION/INOUT' parameter.\n> \n> Patch:\n> \n> ? 
interfaces/ecpg/preproc/ecpg\n> Index: interfaces/ecpg/preproc/preproc.y\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/interfaces/ecpg/preproc/preproc.y,v\n> retrieving revision 1.161\n> diff -c -r1.161 preproc.y\n> *** interfaces/ecpg/preproc/preproc.y\t2001/10/15 20:15:09\t1.161\n> --- interfaces/ecpg/preproc/preproc.y\t2001/10/16 09:15:53\n> ***************\n> *** 1074,1084 ****\n> \t\t| LOCAL TEMPORARY\t{ $$ = make_str(\"local temporary\"); }\n> \t\t| LOCAL TEMP\t\t{ $$ = make_str(\"local temp\"); }\n> \t\t| GLOBAL TEMPORARY\t{\n> ! \t\t\t\t\t mmerror(ET_ERROR, \"GLOBAL TEMPORARY TABLE is not currently supported\");\n> \t\t\t\t\t $$ = make_str(\"global temporary\");\n> \t\t\t\t\t}\n> \t\t| GLOBAL TEMP\t\t{\n> ! \t\t\t\t\t mmerror(ET_ERROR, \"GLOBAL TEMPORARY TABLE is not currently supported\");\n> \t\t\t\t\t $$ = make_str(\"global temp\");\n> \t\t\t\t\t}\n> \t\t| /*EMPTY*/\t\t{ $$ = EMPTY; }\n> --- 1074,1084 ----\n> \t\t| LOCAL TEMPORARY\t{ $$ = make_str(\"local temporary\"); }\n> \t\t| LOCAL TEMP\t\t{ $$ = make_str(\"local temp\"); }\n> \t\t| GLOBAL TEMPORARY\t{\n> ! \t\t\t\t\t mmerror(ET_NOTICE, \"Currently unsupported CREATE TABLE/GLOBAL TEMPORARY will be passed to backend\");\n> \t\t\t\t\t $$ = make_str(\"global temporary\");\n> \t\t\t\t\t}\n> \t\t| GLOBAL TEMP\t\t{\n> ! \t\t\t\t\t mmerror(ET_NOTICE, \"Currently unsupported CREATE TABLE/GLOBAL TEMP will be passed to backend\");\n> \t\t\t\t\t $$ = make_str(\"global temp\");\n> \t\t\t\t\t}\n> \t\t| /*EMPTY*/\t\t{ $$ = EMPTY; }\n> ***************\n> *** 1103,1110 ****\n> \t\t\t\t{\n> \t\t\t\t\tif (strlen($4) > 0)\n> \t\t\t\t\t{\n> ! \t\t\t\t\t\tsprintf(errortext, \"CREATE TABLE/COLLATE %s not yet implemented; clause ignored\", $4);\n> ! \t\t\t\t\t\tmmerror(ET_NOTICE, errortext);\n> \t\t\t\t\t}\n> \t\t\t\t\t$$ = cat_str(4, $1, $2, $3, $4);\n> \t\t\t\t}\n> --- 1103,1110 ----\n> \t\t\t\t{\n> \t\t\t\t\tif (strlen($4) > 0)\n> \t\t\t\t\t{\n> ! 
\t\t\t\t\t\tsprintf(errortext, \"Currently unsupported CREATE TABLE/COLLATE %s will be passed to backend\", $4);\n> ! \t\t\t\t\t\tmmerror(ET_NOTICE, errortext);\n> \t\t\t\t\t}\n> \t\t\t\t\t$$ = cat_str(4, $1, $2, $3, $4);\n> \t\t\t\t}\n> ***************\n> *** 1219,1225 ****\n> \t\t}\n> \t\t| MATCH PARTIAL\t\t\n> \t\t{\n> ! \t\t\tmmerror(ET_NOTICE, \"FOREIGN KEY/MATCH PARTIAL not yet implemented\");\n> \t\t\t$$ = make_str(\"match partial\");\n> \t\t}\n> \t\t| /*EMPTY*/\n> --- 1219,1225 ----\n> \t\t}\n> \t\t| MATCH PARTIAL\t\t\n> \t\t{\n> ! \t\t\tmmerror(ET_NOTICE, \"Currently unsupported FOREIGN KEY/MATCH PARTIAL will be passed to backend\");\n> \t\t\t$$ = make_str(\"match partial\");\n> \t\t}\n> \t\t| /*EMPTY*/\n> ***************\n> *** 1614,1620 ****\n> \t\t| BACKWARD\t{ $$ = make_str(\"backward\"); }\n> \t\t| RELATIVE { $$ = make_str(\"relative\"); }\n> | ABSOLUTE\t{\n> ! \t\t\t\t\tmmerror(ET_NOTICE, \"FETCH/ABSOLUTE not supported, backend will use RELATIVE\");\n> \t\t\t\t\t$$ = make_str(\"absolute\");\n> \t\t\t\t}\n> \t\t;\n> --- 1614,1620 ----\n> \t\t| BACKWARD\t{ $$ = make_str(\"backward\"); }\n> \t\t| RELATIVE { $$ = make_str(\"relative\"); }\n> | ABSOLUTE\t{\n> ! \t\t\t\t\tmmerror(ET_NOTICE, \"Currently unsupported FETCH/ABSOLUTE will be passed to backend, backend will use RELATIVE\");\n> \t\t\t\t\t$$ = make_str(\"absolute\");\n> \t\t\t\t}\n> \t\t;\n> ***************\n> *** 1769,1775 ****\n> \t\t| grantee_list ',' grantee \t{ $$ = cat_str(3, $1, make_str(\",\"), $3); }\n> \t\t;\n> \n> ! opt_with_grant: WITH GRANT OPTION { $$ = make_str(\"with grant option\"); }\n> \t\t| /*EMPTY*/ { $$ = EMPTY; }\n> \t\t;\n> \n> --- 1769,1779 ----\n> \t\t| grantee_list ',' grantee \t{ $$ = cat_str(3, $1, make_str(\",\"), $3); }\n> \t\t;\n> \n> ! opt_with_grant: WITH GRANT OPTION\n> ! {\n> ! \t\t\t\t\tmmerror(ET_NOTICE, \"Currently unsupported GRANT/WITH GRANT OPTION will be passed to backend\");\n> ! \t\t\t\t\t$$ = make_str(\"with grant option\");\n> ! 
\t\t\t\t}\n> \t\t| /*EMPTY*/ { $$ = EMPTY; }\n> \t\t;\n> \n> ***************\n> *** 1919,1932 ****\n> \n> opt_arg: IN { $$ = make_str(\"in\"); }\n> \t| OUT\t{ \n> ! \t\t mmerror(ET_ERROR, \"CREATE FUNCTION/OUT parameters are not supported\");\n> \n> \t \t $$ = make_str(\"out\");\n> \t\t}\n> \t| INOUT\t{ \n> ! \t\t mmerror(ET_ERROR, \"CREATE FUNCTION/INOUT parameters are not supported\");\n> \n> ! \t \t $$ = make_str(\"oinut\");\n> \t\t}\n> \t;\n> \n> --- 1923,1936 ----\n> \n> opt_arg: IN { $$ = make_str(\"in\"); }\n> \t| OUT\t{ \n> ! \t\t mmerror(ET_NOTICE, \"Currently unsupported CREATE FUNCTION/OUT will be passed to backend\");\n> \n> \t \t $$ = make_str(\"out\");\n> \t\t}\n> \t| INOUT\t{ \n> ! \t\t mmerror(ET_NOTICE, \"Currently unsupported CREATE FUNCTION/INOUT will be passed to backend\");\n> \n> ! \t \t $$ = make_str(\"inout\");\n> \t\t}\n> \t;\n> \n> ***************\n> *** 2164,2170 ****\n> \n> opt_chain: AND NO CHAIN \t{ $$ = make_str(\"and no chain\"); }\n> \t| AND CHAIN\t\t{\n> ! \t\t\t\t mmerror(ET_ERROR, \"COMMIT/CHAIN not yet supported\");\n> \n> \t\t\t\t $$ = make_str(\"and chain\");\n> \t\t\t\t}\n> --- 2168,2174 ----\n> \n> opt_chain: AND NO CHAIN \t{ $$ = make_str(\"and no chain\"); }\n> \t| AND CHAIN\t\t{\n> ! \t\t\t\t mmerror(ET_NOTICE, \"Currently unsupported COMMIT/CHAIN will be passed to backend\");\n> \n> \t\t\t\t $$ = make_str(\"and chain\");\n> \t\t\t\t}\n> ***************\n> *** 2609,2620 ****\n> \t\t\t}\n> | GLOBAL TEMPORARY opt_table relation_name\n> {\n> ! \t\t\t\tmmerror(ET_ERROR, \"GLOBAL TEMPORARY TABLE is not currently supported\");\n> \t\t\t\t$$ = cat_str(3, make_str(\"global temporary\"), $3, $4);\n> }\n> | GLOBAL TEMP opt_table relation_name\n> {\n> ! \t\t\t\tmmerror(ET_ERROR, \"GLOBAL TEMPORARY TABLE is not currently supported\");\n> \t\t\t\t$$ = cat_str(3, make_str(\"global temp\"), $3, $4);\n> }\n> | TABLE relation_name\n> --- 2613,2624 ----\n> \t\t\t}\n> | GLOBAL TEMPORARY opt_table relation_name\n> {\n> ! 
\t\t\t\tmmerror(ET_NOTICE, \"Currently unsupported CREATE TABLE/GLOBAL TEMPORARY will be passed to backend\");\n> \t\t\t\t$$ = cat_str(3, make_str(\"global temporary\"), $3, $4);\n> }\n> | GLOBAL TEMP opt_table relation_name\n> {\n> ! \t\t\t\tmmerror(ET_NOTICE, \"Currently unsupported CREATE TABLE/GLOBAL TEMP will be passed to backend\");\n> \t\t\t\t$$ = cat_str(3, make_str(\"global temp\"), $3, $4);\n> }\n> | TABLE relation_name\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Oct 2001 09:56:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg - GRANT bug" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\n> Lee Kindness writes:\n> > Patch below, it changes:\n> > 1. A number of mmerror(ET_ERROR) to mmerror(ET_NOTICE), passing on\n> > the (currently) unsupported options to the backend with warning.\n> > 2. Standardises warning messages in such cases.\n> > 3. Corrects typo in passing of 'CREATE FUNCTION/INOUT' parameter.\n> \n> And the patch below corrects a pet peeve I have with ecpg, all errors\n> and warnings are output with a line number one less than reality...\n> \n> Lee.\n> \n> *** ./interfaces/ecpg/preproc/preproc.y.orig\tTue Oct 16 10:19:27 2001\n> --- ./interfaces/ecpg/preproc/preproc.y\tTue Oct 16 10:19:49 2001\n> ***************\n> *** 36,49 ****\n> switch(type)\n> {\n> \tcase ET_NOTICE: \n> ! 
\t\tfprintf(stderr, \"%s:%d: WARNING: %s\\n\", input_filename, yylineno, error); \n> \t\tbreak;\n> \tcase ET_ERROR:\n> ! \t\tfprintf(stderr, \"%s:%d: ERROR: %s\\n\", input_filename, yylineno, error);\n> \t\tret_value = PARSE_ERROR;\n> \t\tbreak;\n> \tcase ET_FATAL:\n> ! \t\tfprintf(stderr, \"%s:%d: ERROR: %s\\n\", input_filename, yylineno, error);\n> \t\texit(PARSE_ERROR);\n> }\n> }\n> --- 36,52 ----\n> switch(type)\n> {\n> \tcase ET_NOTICE: \n> ! \t\tfprintf(stderr, \"%s:%d: WARNING: %s\\n\", input_filename,\n> ! \t\t\tyylineno + 1, error); \n> \t\tbreak;\n> \tcase ET_ERROR:\n> ! \t\tfprintf(stderr, \"%s:%d: ERROR: %s\\n\", input_filename,\n> ! \t\t\tyylineno + 1, error);\n> \t\tret_value = PARSE_ERROR;\n> \t\tbreak;\n> \tcase ET_FATAL:\n> ! \t\tfprintf(stderr, \"%s:%d: ERROR: %s\\n\", input_filename,\n> ! \t\t\tyylineno + 1, error);\n> \t\texit(PARSE_ERROR);\n> }\n> }\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Oct 2001 09:56:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg - GRANT bug" }, { "msg_contents": "Lee Kindness <lkindness@csl.co.uk> writes:\n> And the patch below corrects a pet peeve I have with ecpg, all errors\n> and warnings are output with a line number one less than reality...\n\nHmm. yylineno *should* be the right thing. I think you are patching\na symptom rather than fixing the correct cause. 
Perhaps look into the\nway that the line number counter is initialized?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Oct 2001 11:38:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg - GRANT bug " }, { "msg_contents": "\nI agree we need to find out why the line number is off rather than\ncovering up the problem. Patch rejected.\n\n---------------------------------------------------------------------------\n\n> Lee Kindness writes:\n> > Patch below, it changes:\n> > 1. A number of mmerror(ET_ERROR) to mmerror(ET_NOTICE), passing on\n> > the (currently) unsupported options to the backend with warning.\n> > 2. Standardises warning messages in such cases.\n> > 3. Corrects typo in passing of 'CREATE FUNCTION/INOUT' parameter.\n> \n> And the patch below corrects a pet peeve I have with ecpg, all errors\n> and warnings are output with a line number one less than reality...\n> \n> Lee.\n> \n> *** ./interfaces/ecpg/preproc/preproc.y.orig\tTue Oct 16 10:19:27 2001\n> --- ./interfaces/ecpg/preproc/preproc.y\tTue Oct 16 10:19:49 2001\n> ***************\n> *** 36,49 ****\n> switch(type)\n> {\n> \tcase ET_NOTICE: \n> ! \t\tfprintf(stderr, \"%s:%d: WARNING: %s\\n\", input_filename, yylineno, error); \n> \t\tbreak;\n> \tcase ET_ERROR:\n> ! \t\tfprintf(stderr, \"%s:%d: ERROR: %s\\n\", input_filename, yylineno, error);\n> \t\tret_value = PARSE_ERROR;\n> \t\tbreak;\n> \tcase ET_FATAL:\n> ! \t\tfprintf(stderr, \"%s:%d: ERROR: %s\\n\", input_filename, yylineno, error);\n> \t\texit(PARSE_ERROR);\n> }\n> }\n> --- 36,52 ----\n> switch(type)\n> {\n> \tcase ET_NOTICE: \n> ! \t\tfprintf(stderr, \"%s:%d: WARNING: %s\\n\", input_filename,\n> ! \t\t\tyylineno + 1, error); \n> \t\tbreak;\n> \tcase ET_ERROR:\n> ! \t\tfprintf(stderr, \"%s:%d: ERROR: %s\\n\", input_filename,\n> ! \t\t\tyylineno + 1, error);\n> \t\tret_value = PARSE_ERROR;\n> \t\tbreak;\n> \tcase ET_FATAL:\n> ! \t\tfprintf(stderr, \"%s:%d: ERROR: %s\\n\", input_filename,\n> ! 
\t\t\tyylineno + 1, error);\n> \t\texit(PARSE_ERROR);\n> }\n> }\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Oct 2001 14:48:33 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg - GRANT bug" }, { "msg_contents": "Lee Kindness writes:\n\n> COMMIT: AND [NO] CHAIN options? Where do these come from,\n> it's not ANSI (i'd probably leave this one).\n\nSure is.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 16 Oct 2001 22:13:43 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: ecpg - GRANT bug " }, { "msg_contents": "Bill Studenmund writes:\n > I think this patch is wrong. Wouldn't it be better to make the line number\n > in yylineno be correct? Also, there are users of the line number in pcg.l\n > which you didn't change.\n > Looking at it, I don't see why the line number is off. It is initialized\n > to 1 at the begining and whenever a new file is included. In the generated\n > code, it is incrimented whenever a '\\n' is found. Strange...\n\nThe main reason I split the patch from the previous one for ecpg was\nfor this reason - I didn't think it was the correct patch myself.\n\nHowever after a serious hunt for the root of the problem I've found\nthat it is actually working correctly in the 7.2 sources and I was\npicking up an ecpg from a 7.1.3ish build (which only contained an ecpg\nbinary). 
Apologies for the false hunt!\n\nFor the record it was fixed in pgc.l 1.79 (meskes 13-Jun-01).\n\nRegards, Lee.\n", "msg_date": "Wed, 17 Oct 2001 10:00:31 +0100 (BST)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": true, "msg_subject": "Re: ecpg - GRANT bug" }, { "msg_contents": "[Sorry for the late replies, but I was on the road since Sunday.]\n\nOn Mon, Oct 15, 2001 at 10:10:40AM -0400, Tom Lane wrote:\n> Lee Kindness <lkindness@csl.co.uk> writes:\n> > The existing code in ecpg/preproc/preproc.y to handle the WITH option\n> > simply throws an error and aborts the processing... The patch below\n> > prevents the segfault and also passes on the WITH option to the\n> > backend, probably a better fix.\n\nYes, that of course is better. Sorry, I simply didn't see this either.\n\n> I agree. It shouldn't be ecpg's business to throw errors on behalf of\n> the backend, especially not \"not yet implemented\" kinds of errors.\n\nI beg to disagree.\n\n> That just causes ecpg to be more tightly coupled to a particular backend\n> version than it needs to be.\n\nSure, but it also makes sure you get the error message at compile time\nrather than at run time. If this is not how ecpg should work, there is no\nneed to use this complex parser at all. I could simply accept all SQL\nstatements and let the backend decide which ones it accepts. \n\nBut for the user this is not a good idea IMO. I don't like running a program\nto debug syntax.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Thu, 18 Oct 2001 12:54:41 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: ecpg - GRANT bug" }, { "msg_contents": "On Tue, Oct 16, 2001 at 10:16:38AM +0100, Lee Kindness wrote:\n> Patch below, it changes:\n> ...\n\nI just added this to my sources. 
Will commit in a few minutes.\n\nMichael\n\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Thu, 18 Oct 2001 12:58:31 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: ecpg - GRANT bug" }, { "msg_contents": "On Tue, Oct 16, 2001 at 10:27:42AM +0100, Lee Kindness wrote:\n> And the patch below corrects a pet peeve I have with ecpg, all errors\n> and warnings are output with a line number one less than reality...\n\nI wish I knew where this comes from. I've been trying to track this bug down\nfor years now, but have yet to find the reason. Okay, didn't check for quite\nsome time now, but the first time I committed a fix was March 1998. But\nsomehow I still haven't found all problems it seems.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Thu, 18 Oct 2001 13:00:04 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: ecpg - GRANT bug" }, { "msg_contents": "> On Tue, Oct 16, 2001 at 10:16:38AM +0100, Lee Kindness wrote:\n> > Patch below, it changes:\n> > ...\n> \n> I just added this to my sources. Will commit in a few minutes.\n\nMichael, I will let you apply the ecpg patches you desire.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Oct 2001 10:59:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg - GRANT bug" }, { "msg_contents": "\nMichael will apply the required patches.\n\n> Tom Lane writes:\n> > Uh, isn't the correct fix\n> > ! 
$$ = cat_str(8, make_str(\"grant\"), $2, make_str(\"on\"), $4, $5,\n> > make_str(\"to\"), $7, $8);\n> > ISTM your patch loses the opt_with_grant clause. (Of course the\n> > backend doesn't currently accept that clause anyway, but that's no\n> > reason for ecpg to drop it.)\n> \n> My patch doesn't loose the option, it's never been passed on anyway:\n> \n> opt_with_grant: WITH GRANT OPTION\n> \t\t\t\t{\n> \t\t\t\t\tmmerror(ET_ERROR, \"WITH GRANT OPTION is not supported. Only relation owners can set privileges\");\n> \t\t\t\t }\n> \t\t| /*EMPTY*/ \n> \t\t;\n> \n> The existing code in ecpg/preproc/preproc.y to handle the WITH option\n> simply throws an error and aborts the processing... The patch below\n> prevents the segfault and also passes on the WITH option to the\n> backend, probably a better fix.\n> \n> Regards, Lee.\n> \n> Index: interfaces/ecpg/preproc/preproc.y\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/interfaces/ecpg/preproc/preproc.y,v\n> retrieving revision 1.159\n> diff -c -r1.159 preproc.y\n> *** interfaces/ecpg/preproc/preproc.y\t2001/10/14 12:07:57\t1.159\n> --- interfaces/ecpg/preproc/preproc.y\t2001/10/15 09:06:29\n> ***************\n> *** 1693,1699 ****\n> \n> GrantStmt: GRANT privileges ON opt_table relation_name_list TO grantee_list opt_with_grant\n> \t\t\t\t{\n> ! \t\t\t\t\t$$ = cat_str(7, make_str(\"grant\"), $2, make_str(\"on\"), $4, $5, make_str(\"to\"), $7);\n> \t\t\t\t}\n> \t\t;\n> \n> --- 1693,1699 ----\n> \n> GrantStmt: GRANT privileges ON opt_table relation_name_list TO grantee_list opt_with_grant\n> \t\t\t\t{\n> ! \t\t\t\t\t$$ = cat_str(8, make_str(\"grant\"), $2, make_str(\"on\"), $4, $5, make_str(\"to\"), $7, $8);\n> \t\t\t\t}\n> \t\t;\n> \n> ***************\n> *** 1769,1779 ****\n> \t\t| grantee_list ',' grantee \t{ $$ = cat_str(3, $1, make_str(\",\"), $3); }\n> \t\t;\n> \n> ! opt_with_grant: WITH GRANT OPTION\n> ! \t\t\t\t{\n> ! 
\t\t\t\t\tmmerror(ET_ERROR, \"WITH GRANT OPTION is not supported. Only relation owners can set privileges\");\n> ! \t\t\t\t }\n> ! \t\t| /*EMPTY*/ \n> \t\t;\n> \n> \n> --- 1769,1776 ----\n> \t\t| grantee_list ',' grantee \t{ $$ = cat_str(3, $1, make_str(\",\"), $3); }\n> \t\t;\n> \n> ! opt_with_grant: WITH GRANT OPTION { $$ = make_str(\"with grant option\"); }\n> ! \t\t| /*EMPTY*/ { $$ = EMPTY; }\n> \t\t;\n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Oct 2001 10:59:51 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg - GRANT bug" }, { "msg_contents": "\nMichael will apply the required patches.\n\n> Bruce Momjian writes:\n> > Lee Kindness writes:\n> > > In which case a number of other cases should be weeded out of\n> > > parser.y and passed onto the backend:\n> > > [ snip ]\n> > > Let me known if you want a patch for these cases too.\n> > Sure, send them on over.\n> \n> Patch below, it changes:\n> \n> 1. A number of mmerror(ET_ERROR) to mmerror(ET_NOTICE), passing on\n> the (currently) unsupported options to the backend with warning.\n> \n> 2. Standardises warning messages in such cases.\n> \n> 3. Corrects typo in passing of 'CREATE FUNCTION/INOUT' parameter.\n> \n> Patch:\n> \n> ? 
interfaces/ecpg/preproc/ecpg\n> Index: interfaces/ecpg/preproc/preproc.y\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/interfaces/ecpg/preproc/preproc.y,v\n> retrieving revision 1.161\n> diff -c -r1.161 preproc.y\n> *** interfaces/ecpg/preproc/preproc.y\t2001/10/15 20:15:09\t1.161\n> --- interfaces/ecpg/preproc/preproc.y\t2001/10/16 09:15:53\n> ***************\n> *** 1074,1084 ****\n> \t\t| LOCAL TEMPORARY\t{ $$ = make_str(\"local temporary\"); }\n> \t\t| LOCAL TEMP\t\t{ $$ = make_str(\"local temp\"); }\n> \t\t| GLOBAL TEMPORARY\t{\n> ! \t\t\t\t\t mmerror(ET_ERROR, \"GLOBAL TEMPORARY TABLE is not currently supported\");\n> \t\t\t\t\t $$ = make_str(\"global temporary\");\n> \t\t\t\t\t}\n> \t\t| GLOBAL TEMP\t\t{\n> ! \t\t\t\t\t mmerror(ET_ERROR, \"GLOBAL TEMPORARY TABLE is not currently supported\");\n> \t\t\t\t\t $$ = make_str(\"global temp\");\n> \t\t\t\t\t}\n> \t\t| /*EMPTY*/\t\t{ $$ = EMPTY; }\n> --- 1074,1084 ----\n> \t\t| LOCAL TEMPORARY\t{ $$ = make_str(\"local temporary\"); }\n> \t\t| LOCAL TEMP\t\t{ $$ = make_str(\"local temp\"); }\n> \t\t| GLOBAL TEMPORARY\t{\n> ! \t\t\t\t\t mmerror(ET_NOTICE, \"Currently unsupported CREATE TABLE/GLOBAL TEMPORARY will be passed to backend\");\n> \t\t\t\t\t $$ = make_str(\"global temporary\");\n> \t\t\t\t\t}\n> \t\t| GLOBAL TEMP\t\t{\n> ! \t\t\t\t\t mmerror(ET_NOTICE, \"Currently unsupported CREATE TABLE/GLOBAL TEMP will be passed to backend\");\n> \t\t\t\t\t $$ = make_str(\"global temp\");\n> \t\t\t\t\t}\n> \t\t| /*EMPTY*/\t\t{ $$ = EMPTY; }\n> ***************\n> *** 1103,1110 ****\n> \t\t\t\t{\n> \t\t\t\t\tif (strlen($4) > 0)\n> \t\t\t\t\t{\n> ! \t\t\t\t\t\tsprintf(errortext, \"CREATE TABLE/COLLATE %s not yet implemented; clause ignored\", $4);\n> ! \t\t\t\t\t\tmmerror(ET_NOTICE, errortext);\n> \t\t\t\t\t}\n> \t\t\t\t\t$$ = cat_str(4, $1, $2, $3, $4);\n> \t\t\t\t}\n> --- 1103,1110 ----\n> \t\t\t\t{\n> \t\t\t\t\tif (strlen($4) > 0)\n> \t\t\t\t\t{\n> ! 
\t\t\t\t\t\tsprintf(errortext, \"Currently unsupported CREATE TABLE/COLLATE %s will be passed to backend\", $4);\n> ! \t\t\t\t\t\tmmerror(ET_NOTICE, errortext);\n> \t\t\t\t\t}\n> \t\t\t\t\t$$ = cat_str(4, $1, $2, $3, $4);\n> \t\t\t\t}\n> ***************\n> *** 1219,1225 ****\n> \t\t}\n> \t\t| MATCH PARTIAL\t\t\n> \t\t{\n> ! \t\t\tmmerror(ET_NOTICE, \"FOREIGN KEY/MATCH PARTIAL not yet implemented\");\n> \t\t\t$$ = make_str(\"match partial\");\n> \t\t}\n> \t\t| /*EMPTY*/\n> --- 1219,1225 ----\n> \t\t}\n> \t\t| MATCH PARTIAL\t\t\n> \t\t{\n> ! \t\t\tmmerror(ET_NOTICE, \"Currently unsupported FOREIGN KEY/MATCH PARTIAL will be passed to backend\");\n> \t\t\t$$ = make_str(\"match partial\");\n> \t\t}\n> \t\t| /*EMPTY*/\n> ***************\n> *** 1614,1620 ****\n> \t\t| BACKWARD\t{ $$ = make_str(\"backward\"); }\n> \t\t| RELATIVE { $$ = make_str(\"relative\"); }\n> | ABSOLUTE\t{\n> ! \t\t\t\t\tmmerror(ET_NOTICE, \"FETCH/ABSOLUTE not supported, backend will use RELATIVE\");\n> \t\t\t\t\t$$ = make_str(\"absolute\");\n> \t\t\t\t}\n> \t\t;\n> --- 1614,1620 ----\n> \t\t| BACKWARD\t{ $$ = make_str(\"backward\"); }\n> \t\t| RELATIVE { $$ = make_str(\"relative\"); }\n> | ABSOLUTE\t{\n> ! \t\t\t\t\tmmerror(ET_NOTICE, \"Currently unsupported FETCH/ABSOLUTE will be passed to backend, backend will use RELATIVE\");\n> \t\t\t\t\t$$ = make_str(\"absolute\");\n> \t\t\t\t}\n> \t\t;\n> ***************\n> *** 1769,1775 ****\n> \t\t| grantee_list ',' grantee \t{ $$ = cat_str(3, $1, make_str(\",\"), $3); }\n> \t\t;\n> \n> ! opt_with_grant: WITH GRANT OPTION { $$ = make_str(\"with grant option\"); }\n> \t\t| /*EMPTY*/ { $$ = EMPTY; }\n> \t\t;\n> \n> --- 1769,1779 ----\n> \t\t| grantee_list ',' grantee \t{ $$ = cat_str(3, $1, make_str(\",\"), $3); }\n> \t\t;\n> \n> ! opt_with_grant: WITH GRANT OPTION\n> ! {\n> ! \t\t\t\t\tmmerror(ET_NOTICE, \"Currently unsupported GRANT/WITH GRANT OPTION will be passed to backend\");\n> ! \t\t\t\t\t$$ = make_str(\"with grant option\");\n> ! 
\t\t\t\t}\n> \t\t| /*EMPTY*/ { $$ = EMPTY; }\n> \t\t;\n> \n> ***************\n> *** 1919,1932 ****\n> \n> opt_arg: IN { $$ = make_str(\"in\"); }\n> \t| OUT\t{ \n> ! \t\t mmerror(ET_ERROR, \"CREATE FUNCTION/OUT parameters are not supported\");\n> \n> \t \t $$ = make_str(\"out\");\n> \t\t}\n> \t| INOUT\t{ \n> ! \t\t mmerror(ET_ERROR, \"CREATE FUNCTION/INOUT parameters are not supported\");\n> \n> ! \t \t $$ = make_str(\"oinut\");\n> \t\t}\n> \t;\n> \n> --- 1923,1936 ----\n> \n> opt_arg: IN { $$ = make_str(\"in\"); }\n> \t| OUT\t{ \n> ! \t\t mmerror(ET_NOTICE, \"Currently unsupported CREATE FUNCTION/OUT will be passed to backend\");\n> \n> \t \t $$ = make_str(\"out\");\n> \t\t}\n> \t| INOUT\t{ \n> ! \t\t mmerror(ET_NOTICE, \"Currently unsupported CREATE FUNCTION/INOUT will be passed to backend\");\n> \n> ! \t \t $$ = make_str(\"inout\");\n> \t\t}\n> \t;\n> \n> ***************\n> *** 2164,2170 ****\n> \n> opt_chain: AND NO CHAIN \t{ $$ = make_str(\"and no chain\"); }\n> \t| AND CHAIN\t\t{\n> ! \t\t\t\t mmerror(ET_ERROR, \"COMMIT/CHAIN not yet supported\");\n> \n> \t\t\t\t $$ = make_str(\"and chain\");\n> \t\t\t\t}\n> --- 2168,2174 ----\n> \n> opt_chain: AND NO CHAIN \t{ $$ = make_str(\"and no chain\"); }\n> \t| AND CHAIN\t\t{\n> ! \t\t\t\t mmerror(ET_NOTICE, \"Currently unsupported COMMIT/CHAIN will be passed to backend\");\n> \n> \t\t\t\t $$ = make_str(\"and chain\");\n> \t\t\t\t}\n> ***************\n> *** 2609,2620 ****\n> \t\t\t}\n> | GLOBAL TEMPORARY opt_table relation_name\n> {\n> ! \t\t\t\tmmerror(ET_ERROR, \"GLOBAL TEMPORARY TABLE is not currently supported\");\n> \t\t\t\t$$ = cat_str(3, make_str(\"global temporary\"), $3, $4);\n> }\n> | GLOBAL TEMP opt_table relation_name\n> {\n> ! \t\t\t\tmmerror(ET_ERROR, \"GLOBAL TEMPORARY TABLE is not currently supported\");\n> \t\t\t\t$$ = cat_str(3, make_str(\"global temp\"), $3, $4);\n> }\n> | TABLE relation_name\n> --- 2613,2624 ----\n> \t\t\t}\n> | GLOBAL TEMPORARY opt_table relation_name\n> {\n> ! 
\t\t\t\tmmerror(ET_NOTICE, \"Currently unsupported CREATE TABLE/GLOBAL TEMPORARY will be passed to backend\");\n> \t\t\t\t$$ = cat_str(3, make_str(\"global temporary\"), $3, $4);\n> }\n> | GLOBAL TEMP opt_table relation_name\n> {\n> ! \t\t\t\tmmerror(ET_NOTICE, \"Currently unsupported CREATE TABLE/GLOBAL TEMP will be passed to backend\");\n> \t\t\t\t$$ = cat_str(3, make_str(\"global temp\"), $3, $4);\n> }\n> | TABLE relation_name\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Oct 2001 10:59:56 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg - GRANT bug" }, { "msg_contents": "> > On Tue, Oct 16, 2001 at 10:16:38AM +0100, Lee Kindness wrote:\n> > > Patch below, it changes:\n> > > ...\n> > \n> > I just added this to my sources. Will commit in a few minutes.\n> \n> Michael, I will let you apply the ecpg patches you desire.\n\nI have removed all the ecpg patches from the unapplied patches queue.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Oct 2001 12:28:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg - GRANT bug" }, { "msg_contents": "Michael Meskes wrote:\n\n> On Tue, Oct 16, 2001 at 10:27:42AM +0100, Lee Kindness wrote:\n> > And the patch below corrects a pet peeve I have with ecpg, all errors\n> > and warnings are output with a line number one less than reality...\n>\n> I wish I knew where this comes from. I've been trying to track this bug down\n> for years now, but have yet to find the reason. Okay, didn't check for quite\n> some time now, but the first time I committed a fix was March 1998. But\n> somehow I still haven't found all problems it seems.\n\nI somewhat got the impression that using C++ style comments (//) tends to\nworsen the problem. But I must confess I didn't dig deep enough to contribute\nanything substantial. Perhaps the problem is a misunderstanding of ecpg and\ncpp.\n\nI was confused by the blank lines following or preceding a #line statement\nevery time I looked at it. They should not be necessary.\n\nWhile talking about warnings: ecpg warns about NULLIF being not implemented\nyet. But actually it works (for me).\n\nChristof\n\n\n", "msg_date": "Fri, 19 Oct 2001 09:37:59 +0200", "msg_from": "Christof Petig <christof@petig-baender.de>", "msg_from_op": false, "msg_subject": "Re: ecpg - GRANT bug" }, { "msg_contents": "On Fri, Oct 19, 2001 at 09:37:59AM +0200, Christof Petig wrote:\n> I somewhat got the impression that using C++ style comments (//) tends to\n> worsen the problem. But I must confess I didn't dig deep enough to contribute\n> anything substantial. Perhaps the problem is a misunderstanding of ecpg and\n> cpp.\n\nIf I find some time I will dig into it, but that looks like a longshot right\nnow.\n\n> While talking about warnings: ecpg warns about NULLIF being not implemented\n> yet. 
But actually it works (for me).\n\nFixed.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Fri, 19 Oct 2001 16:36:49 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: ecpg - GRANT bug" }, { "msg_contents": "\nI saw that change from 8 to 7. I think in the ecpg code that the first\nparameter to cat_str is the number of arguments _after_ the first\nparameter.\n\n preproc.c:{ yyval.str = cat_str(3, yyvsp[-2].str, make_str(\",\"), yyvsp[0].str);\n\nso I think the change from 8 -> 7 is correct.\n\n\n---------------------------------------------------------------------------\n\n> Lee Kindness <lkindness@csl.co.uk> writes:\n> > ***************\n> > *** 1693,1699 ****\n> \n> > GrantStmt: GRANT privileges ON opt_table relation_name_list TO grantee_list opt_with_grant\n> > \t\t\t\t{\n> > ! \t\t\t\t\t$$ = cat_str(8, make_str(\"grant\"), $2, make_str(\"on\"), $4, $5, make_str(\"to\"), $7);\n> > \t\t\t\t}\n> > \t\t;\n> \n> > --- 1693,1699 ----\n> \n> > GrantStmt: GRANT privileges ON opt_table relation_name_list TO grantee_list opt_with_grant\n> > \t\t\t\t{\n> > ! \t\t\t\t\t$$ = cat_str(7, make_str(\"grant\"), $2, make_str(\"on\"), $4, $5, make_str(\"to\"), $7);\n> > \t\t\t\t}\n> > \t\t;\n> \n> \n> Uh, isn't the correct fix\n> \n> ! \t\t\t\t\t$$ = cat_str(8, make_str(\"grant\"), $2, make_str(\"on\"), $4, $5, make_str(\"to\"), $7, $8);\n> \n> ISTM your patch loses the opt_with_grant clause. 
(Of course the backend\n> doesn't currently accept that clause anyway, but that's no reason for\n> ecpg to drop it.)\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Oct 2001 12:44:09 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg - GRANT bug" }, { "msg_contents": "On Mon, Oct 29, 2001 at 12:44:09PM -0500, Bruce Momjian wrote:\n> I saw that change from 8 to 7. I think in the ecpg code that the first\n> parameter to cat_str is the number of arguments _after_ the first\n> parameter.\n\nIt is indeed. \n\n> so I think the change from 8 -> 7 is correct.\n\nNot exactly. Yes, the cat_str has 7 arguments besides the number, but the\nrule suggests it should have 8.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Tue, 30 Oct 2001 07:48:15 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: ecpg - GRANT bug" }, { "msg_contents": "> On Mon, Oct 29, 2001 at 12:44:09PM -0500, Bruce Momjian wrote:\n> > I saw that change from 8 to 7. I think in the ecpg code that the first\n> > parameter to cat_str is the number of arguments _after_ the first\n> > parameter.\n> \n> It is indeed. \n> \n> > so I think the change from 8 -> 7 is correct.\n> \n> Not exactly. Yes, the cat_str has 7 arguments besides the number, but the\n> rule suggests it should have 8.\n\nSorry, I am lost. The rule above says to count the number of params\nafter the first one. 
Doesn't that make it 7?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 30 Oct 2001 01:50:09 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg - GRANT bug" }, { "msg_contents": "On Tue, Oct 30, 2001 at 01:50:09AM -0500, Bruce Momjian wrote:\n> > Not exactly. Yes, the cat_str has 7 arguments besides the number, but the\n> > rule suggests it should have 8.\n> \n> Sorry, I am lost. The rule above says to count the number of params\n> after the first one. Doesn't that make it 7?\n\nSorry, I meant the bison rule that has 8 arguments.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Tue, 30 Oct 2001 08:31:53 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: ecpg - GRANT bug" }, { "msg_contents": "All in all it doesn't really matter... Michael's already applied a\nbetter fix! For some reason Tom's email has been lost in the system\nfor weeks!\n\nLee.\n\nBruce Momjian writes:\n > > On Mon, Oct 29, 2001 at 12:44:09PM -0500, Bruce Momjian wrote:\n > > > I saw that change from 8 to 7. I think in the ecpg code that the first\n > > > parameter to cat_str is the number of arguments _after_ the first\n > > > parameter.\n > > \n > > It is indeed. \n > > \n > > > so I think the change from 8 -> 7 is correct.\n > > \n > > Not exactly. Yes, the cat_str has 7 arguments besides the number, but the\n > > rule suggests it should have 8.\n > \n > Sorry, I am lost. The rule above says to count the number of params\n > after the first one. 
Doesn't that make it 7?\n > \n > -- \n > Bruce Momjian | http://candle.pha.pa.us\n > pgman@candle.pha.pa.us | (610) 853-3000\n > + If your life is a hard drive, | 830 Blythe Avenue\n > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n > \n > ---------------------------(end of broadcast)---------------------------\n > TIP 5: Have you checked our extensive FAQ?\n > \n > http://www.postgresql.org/users-lounge/docs/faq.html\n", "msg_date": "Tue, 30 Oct 2001 09:15:15 +0000 (GMT)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": true, "msg_subject": "Re: ecpg - GRANT bug" } ]
[ { "msg_contents": "Hello \n\nEasy question - does Postgres support Java stored procedures ?\n\nChris\n", "msg_date": "12 Oct 2001 10:56:56 -0500", "msg_from": "None <None@news.tht.net>", "msg_from_op": true, "msg_subject": "java" },
{ "msg_contents": "Hi List,\n I know it's not really appropriate to ask on this list whether anyone is\nworking on\nInterchange who could help me, but by coincidence I have some queries that\nneed answering.\nI am totally new to PGSQL and Interchange. As of now I don't require\nmuch info on\nPGSQL, but I do for Interchange.\n\nInterchange seems to come with endless options.\n\nBut from a database perspective I seem to be stuck in an unanswerable\nsituation:\nI couldn't find relevant information for certain fields - or rather, I\ncouldn't find certain\ntables at all.\n\nDoes anyone have a script for all the IC table structures? Or, maybe even\nbetter, field-level\ninfo on each and everything.\n\nI searched and found scripts for some 14 tables, but I guess there is a lot more\nthan what I am\nreferring to.\n\nI am currently looking for the table structures for the admin's interface; I\npresume there are some hidden tables.\nDoes anyone know their structure, and maybe a brief description of\nthem?\nAlso, to move from the existing text files to a DB like PostgreSQL, is there any\nscript provided by Interchange\nfor the tables used, etc.?\nIf so, can anyone specify where I can find them?\n\nLooking for help from this list ASAP.\n\nThanks in advance.\nCheers,\nBalaji\n\n\n", "msg_date": "Mon, 15 Oct 2001 19:36:03 +0530", "msg_from": "\"Balaji Venkatesan\" <balaji.venkatesan@megasoft.com>", "msg_from_op": false, "msg_subject": "Interchange" },
{ "msg_contents": "None <None@news.tht.net> writes:\n\n> Hello \n> \n> Easy question - does Postgres support Java stored procedures ?\n\nEasy answer: no.\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "15 Oct 2001 10:38:22 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: java" },
{ "msg_contents": "> Hello \n> \n> Easy question - does Postgres support Java stored procedures ?\n\nNo, only PL/pgSQL, Perl, TCL, and Python.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Oct 2001 11:40:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: java" }, { "msg_contents": "I'm not sure if this is the correct place for comments about the web\nsite, but....\n\nWhile discussing some other items on comp.lang.tcl, Cameron Laird made\nthis remark:\n\n While we're on the subject, I'd sure appreciate it if\n someone at www.postgresql.org would fix up the hyper-\n link anchors with meaningful \"ALT =\"-s so image-free\n browsers can make better use of the site.\n\n\nIs there any possibility of putting up some ALT tags on the site?\n\nroland\n-- \n\t\t PGP Key ID: 66 BC 3B CD\nRoland B. Roberts, PhD RL Enterprises\nroland@rlenter.com 76-15 113th Street, Apt 3B\nroland@astrofoto.org Forest Hills, NY 11375\n", "msg_date": "15 Nov 2001 10:57:53 -0500", "msg_from": "Roland Roberts <roland@astrofoto.org>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Interchange" } ]
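The ALT-text request in the last message amounts to a one-attribute change per image link. A hypothetical before/after sketch — the link target and image path are made up for illustration, not taken from the actual postgresql.org markup:

```html
<!-- Before: an image-only link renders as a blank or useless anchor
     in text browsers such as lynx or w3m. -->
<a href="/users-lounge/"><img src="/images/users.gif"></a>

<!-- After: the alt attribute gives image-free browsers a readable label. -->
<a href="/users-lounge/"><img src="/images/users.gif" alt="Users Lounge"></a>
```

The HTML 4.01 specification makes `alt` a required attribute on `IMG`, so this is also a validation fix, not just an accessibility nicety.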