[
{
"msg_contents": "How about including <sys/types.h> before including\n<grp.h> in src/backend/utils/init/findbe.c?\n\nI've just compiled 7.2 on FreeBSD-current, which has failed\nwith compilation error because the type of gr_gid in struct group\nis gid_t on FreeBSD-current.\n\ncheers,\n\nhiro hanai\n",
"msg_date": "Fri, 08 Feb 2002 19:25:11 +0900 (JST)",
"msg_from": "hiroyuki hanai <hanai@imgsrc.co.jp>",
"msg_from_op": true,
"msg_subject": "compile error of PostgreSQL 7.2 on FreeBSD-current"
},
{
"msg_contents": "On Fri, 8 Feb 2002, Bruce Momjianwrote:\n> > How about including <sys/types.h> before including\n> > <grp.h> in src/backend/utils/init/findbe.c?\n> > \n> > I've just compiled 7.2 on FreeBSD-current, which has failed\n> > with compilation error because the type of gr_gid in struct group\n> > is gid_t on FreeBSD-current.\n> \n> sys/types.h include is in 7.2. Please upgrade.\n\nBruce, I know sys/types include is in 7.2.\nI'm talking about the order to include header files.\nsrc/backend/utils/init/findbe.c in 7.2 includes <grp.h>\n*before* <sys/types.h>.\nBut, the type of gr_gid in struct group, which is defined\nin <grp.h>, is gid_t. So, <sys/types> should be inclueded\nbefore <grp.h>\n\nThe type of gr_gid in <grp.h> was `int' before 22th Jan 2002.\nIt has been changed as gid_t by Mark Murray on 22th Jan 2002.\n\nRegards,\n\nhiro hanai\n",
"msg_date": "Sat, 09 Feb 2002 01:08:57 +0900 (JST)",
"msg_from": "hiroyuki hanai <hanai@imgsrc.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: compile error of PostgreSQL 7.2 on FreeBSD-current"
},
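The fix being asked for is purely a matter of include order. A minimal standalone illustration (not the actual findbe.c, whose surrounding code is more involved; the group name is an arbitrary example):

```c
/* Minimal illustration of the include-order issue: on systems whose
 * <grp.h> uses gid_t without defining it, <sys/types.h> must come
 * first. This is a sketch, not the actual findbe.c code. */
#include <sys/types.h>          /* defines gid_t */
#include <grp.h>                /* struct group { ...; gid_t gr_gid; ... } */
#include <stdio.h>

int main(void)
{
    struct group *g = getgrnam("wheel");    /* any group name will do */

    if (g != NULL)
        printf("gid of %s is %ld\n", g->gr_name, (long) g->gr_gid);
    return 0;
}
```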
{
"msg_contents": "hiroyuki hanai <hanai@imgsrc.co.jp> writes:\n> But, the type of gr_gid in struct group, which is defined\n> in <grp.h>, is gid_t. So, <sys/types> should be inclueded\n> before <grp.h>\n\n> The type of gr_gid in <grp.h> was `int' before 22th Jan 2002.\n> It has been changed as gid_t by Mark Murray on 22th Jan 2002.\n\nOne would think this is a bug in FreeBSD's <grp.h>. Shouldn't it be\nresponsible for including the headers it depends on?\n\nWe can certainly move our header inclusion order around, but that is\nsimply an application-level workaround for a broken system header.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Feb 2002 12:12:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: compile error of PostgreSQL 7.2 on FreeBSD-current "
},
{
"msg_contents": "Tom Lane writes:\n\n> One would think this is a bug in FreeBSD's <grp.h>. Shouldn't it be\n> responsible for including the headers it depends on?\n\nThe standards specify (effectively) that sys/types.h must be included\nbefore grp.h. This can be considered stupid, but it's not really\nFreeBSD's fault.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 8 Feb 2002 14:47:23 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: compile error of PostgreSQL 7.2 on FreeBSD-current "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> The standards specify (effectively) that sys/types.h must be included\n> before grp.h. This can be considered stupid, but it's not really\n> FreeBSD's fault.\n\nPossibly I'm spoiled: HPUX gets this right.\n\nI ran the same experiment Bruce mentioned, and found that of 192 headers\nin HPUX 10.20's /usr/include directory, all but 24 compiled with no\nadditional inclusions. The failing headers were\n\nalarm.h dcnodes.h dmapi.h dumprestor.h dvio.h elog.h eucioctl.h\nexecargs.h exportent.h fbackup.h hard_reg.h initptr.h lc_core.h\nm4_frame.h m4_reg.h pfm.h ppfm.h prot.h sad.h soft_reg.h std_space.h\nterm.h xds.h xomi.h\n\nwhich are mostly not standardized headers.\n\nThe failure rate was higher in the subdirectories of /usr/include, but\nthat's not surprising. A lot of the headers underneath /usr/include/sys\ndon't look like they're even intended to be compiled in userland code.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Feb 2002 15:58:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: compile error of PostgreSQL 7.2 on FreeBSD-current "
}
]
[
{
"msg_contents": "Hello,\n\nWe have report that all GiST opclasses cause a coredump while creating index on \n64-bit Solaris. I try to make installcheck GiST's contrib modules on DEC Alpha \nand got the same result. So it may be one problem.\n\nCompiler:\ngcc -v\nReading specs from \n/usr/local/egcs/lib/gcc-lib/alphaev56-dec-osf4.0c/egcs-2.90.23/specs\ngcc version egcs-2.90.23 980102 (egcs-1.0.1 release)\n\n\nBacktrace:\n#0 0x12003d564 in gistdentryinit (giststate=0x11fffd820, nkey=0, e=0x1401b25a4, \nk=5370611216, r=0x140171ce8,\n pg=0x1401ce620 \"\", o=1, b=8, l=0 '\\000', isNull=0 '\\000') at gist.c:1754\n#1 0x12003b3b0 in gistSplit (r=0x140171ce8, buffer=-1, itup=0x140196a10, \nlen=0x11fffd598, giststate=0x11fffd820,\n res=0x0) at gist.c:1264\n#2 0x120037be4 in gistlayerinsert (r=0x140171ce8, blkno=1075407376, \nitup=0x11fffd610, len=0x11fffd618, res=0x0,\n giststate=0x11fffd820) at gist.c:515\n#3 0x1200376d8 in gistdoinsert (r=0x140171ce8, itup=0x140196718, res=0x0, \ngiststate=0x11fffd820) at gist.c:426\n#4 0x120037228 in gistbuildCallback (index=0x140171ce8, htup=0x140196590, \nattdata=0x11fffd710,\n nulls=0x11fffd790 \" О©╫\\017@\\001\", tupleIsAlive=-24 'О©╫', state=0x1401ce620) \nat gist.c:275\n#5 0x12008c4c4 in IndexBuildHeapScan (heapRelation=0x1, \nindexRelation=0x140171ce8, indexInfo=0x140196238,\n callback=0x120037020 <gistbuildCallback>, callback_state=0x11fffd820) at \nindex.c:1805\n#6 0x120036f70 in gistbuild (fcinfo=0x11fffd820) at gist.c:186\n#7 0x1201df8c4 in OidFunctionCall3 (functionId=536860704, arg1=5370215808, \narg2=5370223848, arg3=5370372664)\n at fmgr.c:1190\n...\n\nOutput to sdterr:\nUnaligned access pid=6018 <postgres> va=0x1401b25a4 pc=0x12003d560 \nra=0x12003b3b0 inst=0xb6690000\n\nSource (gist.c, around 1264 line):\n /* generate the item array */\n entryvec = (bytea *) palloc(VARHDRSZ + (*len + 1) * sizeof(GISTENTRY));\n decompvec = (bool *) palloc(VARHDRSZ + (*len + 1) * sizeof(bool));\n VARATT_SIZEP(entryvec) = (*len + 1) * sizeof(GISTENTRY) + VARHDRSZ;\n for (i = 1; i <= *len; i++)\n {\n datum = index_getattr(itup[i - 1], 1, giststate->tupdesc, &IsNull);\n gistdentryinit(giststate, 0, &((GISTENTRY *) VARDATA(entryvec))[i],\n datum, r, p, i,\n ATTSIZE(datum, giststate->tupdesc, 1, IsNull), FALSE, IsNull);\n if ((!isAttByVal(giststate, 0)) && ((GISTENTRY *) \nVARDATA(entryvec))[i].key != datum)\n decompvec[i] = TRUE;\n else\n decompvec[i] = FALSE;\n }\n\nCore dump causes on first call gistdentryinit, because pointer\n&((GISTENTRY *) VARDATA(entryvec))[1] has not 8-byte aligment. Difference \nbetween entryvec and this pointer is equal 44 bytes.\n\nCan you give some advice how make aligment? or, may be exist another way for \nsolving...\n\nBTW, on HPUX 11.0 all works fine.\n\nThank you.\n\n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n",
"msg_date": "Fri, 08 Feb 2002 18:27:59 +0300",
"msg_from": "Teodor Sigaev <teodor@stack.net>",
"msg_from_op": true,
"msg_subject": "GiST on 64-bit box"
},
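The reported 44 bytes can be reproduced standalone. In the sketch below the struct is a stand-in for the 7.2-era GISTENTRY (its exact fields are an assumption); on an LP64 machine it is 40 bytes with 8-byte alignment, so an array starting 4 bytes (VARHDRSZ) into a chunk puts element [1] at offset 44, which is not 8-byte aligned:

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define VARHDRSZ 4                      /* 4-byte varlena length word */

/* Rough stand-in for the 7.2-era GISTENTRY (fields assumed): the
 * pointer-sized members force 8-byte alignment on an LP64 machine. */
typedef struct
{
    uintptr_t key;                      /* Datum */
    void     *rel;                      /* Relation */
    void     *page;                     /* Page */
    unsigned short offset;              /* OffsetNumber */
    int       bytes;
    char      leafkey;                  /* bool */
} FakeGistEntry;                        /* 40 bytes on LP64 */

int main(void)
{
    char *chunk = malloc(VARHDRSZ + 2 * sizeof(FakeGistEntry));
    /* What the 7.2 code effectively does: start the array 4 bytes in. */
    FakeGistEntry *arr = (FakeGistEntry *) (chunk + VARHDRSZ);
    long off = (long) ((char *) &arr[1] - chunk);

    printf("arr[1] starts %ld bytes into the chunk; %ld %% 8 = %ld\n",
           off, off, off % 8);          /* 44 % 8 = 4: misaligned */
    free(chunk);
    return 0;
}
```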
{
"msg_contents": "Teodor Sigaev <teodor@stack.net> writes:\n> We have report that all GiST opclasses cause a coredump while creating\n> index on 64-bit Solaris. I try to make installcheck GiST's contrib\n> modules on DEC Alpha and got the same result. So it may be one\n> problem.\n\nYes. It looks to me like GIST is broken on any platform that has 8-byte\nDatum (or pointer) and requires 8-byte alignment of same.\n\nThe problem is that GISTENTRY will require 8-byte alignment on such a\nplatform, and this code is not honoring that: it's trying to set up an\narray of GISTENTRYs starting at offset 4 in a palloc'd memory chunk.\n\nApparently, the reason for the offset is to stick a varlena header on\nthe parameter being passed to the picksplit function. This seems like\nit might be unnecessary. Is there another way for the picksplit\nfunction to learn the length of the array?\n\nI think you have two possible ways to proceed:\n\n1. Modify the code to use MAXALIGN(VARHDRSZ) rather than just VARHDRSZ\nas the offset in the bogus bytea construct. This would be messy since\nyou couldn't use VARDATA() anymore.\n\n2. Forget the bytea header and just treat the object as a GISTENTRY\narray.\n\nEither one of these is going to require changing the picksplit functions\nas well as the calling code, so they're both bad choices from a\nmaintenance point of view. I think I lean towards #2 since it will make\nthe code less ugly rather than more so.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Feb 2002 12:00:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: GiST on 64-bit box "
},
{
"msg_contents": "My opinion is that second way is right (use GISTENTRY array). But this channge \nrequires changes in GiST API: picksplit and union functions must retrieve one \nargument more. Is it possible to make for 7.2.1 or such changes must be appyed \nin TODO for 7.3 ?\n\nTom Lane wrote:\n\n> I think you have two possible ways to proceed:\n> \n> 1. Modify the code to use MAXALIGN(VARHDRSZ) rather than just VARHDRSZ\n> as the offset in the bogus bytea construct. This would be messy since\n> you couldn't use VARDATA() anymore.\n> \n> 2. Forget the bytea header and just treat the object as a GISTENTRY\n> array.\n> \n> Either one of these is going to require changing the picksplit functions\n> as well as the calling code, so they're both bad choices from a\n> maintenance point of view. I think I lean towards #2 since it will make\n> the code less ugly rather than more so.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n",
"msg_date": "Fri, 08 Feb 2002 20:23:40 +0300",
"msg_from": "Teodor Sigaev <teodor@stack.net>",
"msg_from_op": true,
"msg_subject": "Re: GiST on 64-bit box"
},
{
"msg_contents": "Teodor Sigaev <teodor@stack.net> writes:\n> My opinion is that second way is right (use GISTENTRY array). But this\n> channge requires changes in GiST API: picksplit and union functions\n> must retrieve one argument more. Is it possible to make for 7.2.1 or\n> such changes must be appyed in TODO for 7.3 ?\n\nHmm. If we had any such functions installed in the standard system,\nthen such a change would mean an initdb, which we couldn't do for 7.2.1.\n\nWe could argue that forcing a change in a contrib module isn't an\ninitdb, but the argument will seem very thin to anyone who has a\nrunning 7.2 database with GIST indexes and wants to update to 7.2.1.\nThey will have to reinstall their GIST support modules and recreate\ntheir GIST indexes, AFAICS.\n\nOn the other hand, changing the signature would be a good thing if the\nGIST code were tweaked to check that the referenced function had the\nright signature, because that way you could raise an error at runtime\nif someone tried to use a non-updated contrib module with an updated\nbackend. Without some such check I foresee disasters in the field.\n\nSo I think I vote for changing the signature and tweaking initGISTstate\nto verify that the number of parameters each function expects is right.\n\nBut even with that, it might be argued that we should postpone the\nchange till 7.3, and just say \"sorry folks, GIST doesn't work on\n64-bit machines for now\". Is that worse than risking update problems\nfor existing users of GIST indexes? Comments anyone?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Feb 2002 12:42:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: GiST on 64-bit box "
},
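A hypothetical sketch of the kind of runtime check described here, using the syscache API of that era to compare a support function's declared argument count against what the backend expects. The function name and calling context are assumptions for illustration, not the actual initGISTstate patch:

```c
/* Hypothetical sketch only: verify that a GiST support function takes
 * the expected number of arguments before trusting it. */
#include "postgres.h"
#include "catalog/pg_proc.h"
#include "utils/syscache.h"

static void
check_gist_proc_nargs(Oid procOid, int expected_nargs)
{
    HeapTuple    tup;
    Form_pg_proc procform;

    tup = SearchSysCache(PROCOID, ObjectIdGetDatum(procOid), 0, 0, 0);
    if (!HeapTupleIsValid(tup))
        elog(ERROR, "cache lookup failed for function %u", procOid);
    procform = (Form_pg_proc) GETSTRUCT(tup);
    if (procform->pronargs != expected_nargs)
        elog(ERROR, "GiST support function %u takes %d arguments, expected %d",
             procOid, (int) procform->pronargs, expected_nargs);
    ReleaseSysCache(tup);
}
```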
{
"msg_contents": "Actually, there is a third possibility, which would fix the problem\nwithout requiring any changes in the picksplit functions. You could\ndo this:\n\n char *storage;\n\n storage = palloc(MAXALIGN(VARHDRSZ) + (*len + 1) * sizeof(GISTENTRY));\n entryvec = (bytea *) (storage + MAXALIGN(VARHDRSZ) - VARHDRSZ);\n\n use entryvec as before, except final pfree is pfree(storage)\n\nGrotty as heck, but probably the right answer for 7.2.1 to avoid the\ninitdb issues.\n\nFor 7.3 we could do it the other, cleaner way.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Feb 2002 13:07:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: GiST on 64-bit box "
},
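Expanded into a self-contained sketch (plain C, with malloc standing in for palloc and an assumed 8-byte MAXALIGN, as on Alpha), the trick works because the data area behind the 4-byte header then begins on an 8-byte boundary:

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define VARHDRSZ    4
#define MAXALIGN(x) (((uintptr_t) (x) + 7) & ~(uintptr_t) 7)  /* assume 8-byte alignment */

int main(void)
{
    size_t nentries = 4, entrysize = 40;    /* stand-in for sizeof(GISTENTRY) */
    /* Over-allocate, then place the bytea header so that the data area
     * (entryvec + VARHDRSZ) lands on an 8-byte boundary. */
    char *storage  = malloc(MAXALIGN(VARHDRSZ) + nentries * entrysize);
    char *entryvec = storage + MAXALIGN(VARHDRSZ) - VARHDRSZ;

    printf("data area 8-byte aligned: %s\n",
           ((uintptr_t) (entryvec + VARHDRSZ) % 8 == 0) ? "yes" : "no");
    free(storage);      /* free the original pointer, not entryvec */
    return 0;
}
```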
{
"msg_contents": "On Fri, Feb 08, 2002 at 12:42:18PM -0500, Tom Lane wrote:\n> Teodor Sigaev <teodor@stack.net> writes:\n> But even with that, it might be argued that we should postpone the\n> change till 7.3, and just say \"sorry folks, GIST doesn't work on\n> 64-bit machines for now\". Is that worse than risking update problems\n> for existing users of GIST indexes? Comments anyone?\n\nI would change it for 7.3 but have a patch somewhere downloadable for\nthose, who need it ASAP.\n\n-- \nHolger Krug\nhkrug@rationalizer.com\n",
"msg_date": "Fri, 8 Feb 2002 19:29:57 +0100",
"msg_from": "Holger Krug <hkrug@rationalizer.com>",
"msg_from_op": false,
"msg_subject": "Re: GiST on 64-bit box"
},
{
"msg_contents": "Hi,\n\nI have a discussion with Teodor over the phone and we agree\nthis is the best for 7.2.*. Thanks Tom for the help.\nbtw, I think it should be noticed somewhere in documentation for developers\nthat pointers to \"int\" and \"long\" which are the same on 32-bit machine,\nare different on the 64-bit machine.\n\n\tOleg\n\nPS. For more than year of our GiST development we got the first report\nfrom 64-bit machine and I think it's a good sign. This year we must\nadd concurrency support. We already had discussion with\nJoseph Hellerstein ( the 'father' of the GiST ) about concurrency support\nand perhaps we'll go along the paper by Marcel Kornacker.\n\nOn Fri, 8 Feb 2002, Tom Lane wrote:\n\n> Actually, there is a third possibility, which would fix the problem\n> without requiring any changes in the picksplit functions. You could\n> do this:\n>\n> char *storage;\n>\n> storage = palloc(MAXALIGN(VARHDRSZ) + (*len + 1) * sizeof(GISTENTRY));\n> entryvec = (bytea *) (storage + MAXALIGN(VARHDRSZ) - VARHDRSZ);\n>\n> use entryvec as before, except final pfree is pfree(storage)\n>\n> Grotty as heck, but probably the right answer for 7.2.1 to avoid the\n> initdb issues.\n>\n> For 7.3 we could do it the other, cleaner way.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 8 Feb 2002 23:44:47 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: GiST on 64-bit box "
},
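The type-width point is easy to check; on an ILP32 system all three values print as 4, while on an LP64 system (such as Alpha/OSF or 64-bit Solaris) long and pointers print as 8:

```c
#include <stdio.h>

int main(void)
{
    /* ILP32: int = long = pointer = 4 bytes.
     * LP64:  int = 4, long = pointer = 8 bytes. */
    printf("sizeof(int)    = %u\n", (unsigned) sizeof(int));
    printf("sizeof(long)   = %u\n", (unsigned) sizeof(long));
    printf("sizeof(void *) = %u\n", (unsigned) sizeof(void *));
    return 0;
}
```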
{
"msg_contents": "This patch solve the problem with unaligned access on 64-bit box. Please apply \nit for 7.2.1.\n\nTested on DEC Alpha.\n\nTom Lane wrote:\n\n> Actually, there is a third possibility, which would fix the problem\n> without requiring any changes in the picksplit functions. You could\n> do this:\n> \n> char *storage;\n> \n> storage = palloc(MAXALIGN(VARHDRSZ) + (*len + 1) * sizeof(GISTENTRY));\n> entryvec = (bytea *) (storage + MAXALIGN(VARHDRSZ) - VARHDRSZ);\n> \n> use entryvec as before, except final pfree is pfree(storage)\n> \n> Grotty as heck, but probably the right answer for 7.2.1 to avoid the\n> initdb issues.\n> \n> For 7.3 we could do it the other, cleaner way.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \nTeodor Sigaev\nteodor@stack.net",
"msg_date": "Mon, 11 Feb 2002 12:50:29 +0300",
"msg_from": "Teodor Sigaev <teodor@stack.net>",
"msg_from_op": true,
"msg_subject": "Re: GiST on 64-bit box"
},
{
"msg_contents": "We got the report about this patch on 64-bit Solaris: it's work.\n\nTeodor Sigaev wrote:\n> This patch solve the problem with unaligned access on 64-bit box. Please \n> apply it for 7.2.1.\n> \n> Tested on DEC Alpha.\n> \n> Tom Lane wrote:\n> \n>> Actually, there is a third possibility, which would fix the problem\n>> without requiring any changes in the picksplit functions. You could\n>> do this:\n>>\n>> char *storage;\n>>\n>> storage = palloc(MAXALIGN(VARHDRSZ) + (*len + 1) * \n>> sizeof(GISTENTRY));\n>> entryvec = (bytea *) (storage + MAXALIGN(VARHDRSZ) - VARHDRSZ);\n>>\n>> use entryvec as before, except final pfree is pfree(storage)\n>>\n>> Grotty as heck, but probably the right answer for 7.2.1 to avoid the\n>> initdb issues.\n>>\n>> For 7.3 we could do it the other, cleaner way.\n>>\n>> regards, tom lane\n>>\n>>\n> \n> \n> \n> ------------------------------------------------------------------------\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n",
"msg_date": "Mon, 11 Feb 2002 23:28:59 +0300",
"msg_from": "Teodor Sigaev <teodor@stack.net>",
"msg_from_op": true,
"msg_subject": "Re: GiST on 64-bit box"
},
{
"msg_contents": "Teodor Sigaev <teodor@stack.net> writes:\n> This patch solve the problem with unaligned access on 64-bit\n> box. Please apply it for 7.2.1.\n\nPatch applied, thanks.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Feb 2002 17:42:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: GiST on 64-bit box "
}
]
[
{
"msg_contents": "Dear all,\n\nI am looking at available database abstraction layers with the idea of \nporting pgAdmin2 to Linux. This is just for information, my project is not \nclear by now.\n\nGnomeDB (http://www.gnome-db.org) seems quite fantastic when used in \nconjunction with Glade. Until now, I never heard of any abstraction layer \nunder KDE. Is there any?\n\nBy the way, I looked at KDE TOra, which seems very Oracle centric. TOra seems \nto be built upon database wrappers, not abstraction layer classes. Am I wrong?\n\nWhat is the best database abstraction under Linux ? Any idea, suggestion, \netc.. are welcome. \n\nBest regards,\nJean-Michel POURE\n",
"msg_date": "Fri, 8 Feb 2002 17:23:15 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Database abstration layers"
},
{
"msg_contents": "> GnomeDB (http://www.gnome-db.org) seems quite fantastic when used in\n> conjunction with Glade. Until now, I never heard of any abstraction layer\n> under KDE. Is there any?\n\nQT3, which KDE3 uses, has database objects with postgresql support...\n\nChris\n\n\n",
"msg_date": "Sat, 9 Feb 2002 01:09:16 +0800 (WST)",
"msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Database abstration layers"
},
{
"msg_contents": "Jean-Michel POURE writes:\n\n> Until now, I never heard of any abstraction layer under KDE. Is there\n> any?\n\nIt's built into Qt.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 8 Feb 2002 12:13:22 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Database abstration layers"
},
{
"msg_contents": "Le Vendredi 8 F�vrier 2002 18:13, Peter Eisentraut a �crit :\n> >�Until now, I never heard of any abstraction layer under KDE. Is there\n> >�any?\n>\n> It's built into Qt.\n\nI am looking for an abstraction layer which gives access to all database \nobjects (tables, views, triggers, functions, rules). As far as I know, \nGnome-db and libgda provide shuch a framework.\n\nCheers,\nJean-Michel POURE\n",
"msg_date": "Fri, 8 Feb 2002 19:12:09 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: Database abstration layers"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Friday 08 February 2002 10:23 am, Jean-Michel POURE wrote:\n> Dear all,\n>\n> I am looking at available database abstraction layers with the idea of\n> porting pgAdmin2 to Linux. This is just for information, my project is not\n> clear by now.\n\nwhat is your goal? \n\nIf it's to port pgAdmin2 to Linux, then why do you need a database \nabstraction layer? Why not use direct calls to postgres? Most language have \nnative support (C, C++, PHP, Python, Perl...) Is it not true that pgAdmin \nunder windows has some limitations due to the fact that it's usind ODBC?\n\nIf it's to write a generic database management tool that can be used against \ndifferent databases then the need for an abstraction layer is obvious.\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE8ZCKG8BXvT14W9HARAoiHAJ9eJIsynNIc6GJ01NqlSN+R5v4AiACcCcUa\ncu4e5Lyd3K0c85vTAZ+2Q/o=\n=XwtN\n-----END PGP SIGNATURE-----\n",
"msg_date": "Fri, 8 Feb 2002 13:09:56 -0600",
"msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>",
"msg_from_op": false,
"msg_subject": "Re: Database abstration layers"
},
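For reference, "direct calls to postgres" from C means libpq; a minimal sketch is shown below (the connection string and query are placeholders, not anything prescribed by the thread):

```c
/* Minimal libpq sketch: connect, run a query, print the result rows.
 * Connection parameters and the query are placeholders. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn   *conn = PQconnectdb("host=localhost dbname=template1");
    PGresult *res;
    int       i;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }
    res = PQexec(conn, "SELECT relname FROM pg_class LIMIT 5");
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        for (i = 0; i < PQntuples(res); i++)
            printf("%s\n", PQgetvalue(res, i, 0));
    PQclear(res);
    PQfinish(conn);
    return 0;
}
```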
{
"msg_contents": "On Sat, 2002-02-09 at 05:23, Jean-Michel POURE wrote:\n> Dear all,\n> \n> I am looking at available database abstraction layers with the idea of \n> porting pgAdmin2 to Linux. This is just for information, my project is not \n> clear by now.\n> \n> GnomeDB (http://www.gnome-db.org) seems quite fantastic when used in \n> conjunction with Glade. Until now, I never heard of any abstraction layer \n> under KDE. Is there any?\n> \n> By the way, I looked at KDE TOra, which seems very Oracle centric. TOra seems \n> to be built upon database wrappers, not abstraction layer classes. Am I wrong?\n> \n> What is the best database abstraction under Linux ? Any idea, suggestion, \n> etc.. are welcome. \n\nSince pgAdmin2 already uses ODBC, would it not be best to leave that\nalone and just use ODBC in a Linux port as well?\n\nThe ODBC in pgAdmin2 is a good help in porting to PostgreSQL from other\ndatabases, but no doubt it has it's drawbacks. On the other hand if it\nis already existing, and should work 'as-is' then perhaps it would save\na lot of work to leave it alone.\n\nRegards,\n\t\t\t\t\tAndrew.\n-- \n--------------------------------------------------------------------\nAndrew @ Catalyst .Net.NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)916-7201 MOB: +64(21)635-694 OFFICE: +64(4)499-2267\n Are you enrolled at http://schoolreunions.co.nz/ yet?\n\n",
"msg_date": "09 Feb 2002 14:10:23 +1300",
"msg_from": "Andrew McMillan <andrew@catalyst.net.nz>",
"msg_from_op": false,
"msg_subject": "Re: Database abstration layers"
}
]
[
{
"msg_contents": "Guys, I've recently being going back over code from Ingres and porting\nit over to PostgreSQL. Heavy use was made of the IFNULL function, this\nfunction simply returns the 2nd argument if the first is\nNULL. Consider the following query:\n\n SELECT COALESCE(MAX(id), 0) + 1 from test;\n\ncan be replaced by the following PostgreSQL query:\n\n SELECT COALESCE(MAX(id), 0) + 1 from test;\n\nI've manually done this, but wouldn't this be a useful auto-tranlation\nto make in the parser? Aid to porting and all...\n\nYeah, I know i should be using a SERIAL column, that's later work...\n\nRegards, Lee.\n",
"msg_date": "Fri, 8 Feb 2002 16:37:03 +0000",
"msg_from": "Lee Kindness <lkindness@csl.co.uk>",
"msg_from_op": true,
"msg_subject": "IFNULL -> COALESCE"
},
{
"msg_contents": "> Guys, I've recently being going back over code from Ingres and porting\n> it over to PostgreSQL. Heavy use was made of the IFNULL function, this\n> function simply returns the 2nd argument if the first is\n> NULL. Consider the following query:\n>\n> SELECT COALESCE(MAX(id), 0) + 1 from test;\n>\n> can be replaced by the following PostgreSQL query:\n>\n> SELECT COALESCE(MAX(id), 0) + 1 from test;\n\nUmmm...did you make a mistake here? Those statements are identical...\n\nChris\n\n",
"msg_date": "Sat, 9 Feb 2002 01:12:10 +0800 (WST)",
"msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: IFNULL -> COALESCE"
},
{
"msg_contents": "Christopher Kings-Lynne writes:\n > lkindness@csl.co.uk writes:\n > > SELECT COALESCE(MAX(id), 0) + 1 from test;\n > > can be replaced by the following PostgreSQL query:\n > > SELECT COALESCE(MAX(id), 0) + 1 from test;\n > Ummm...did you make a mistake here? Those statements are\n > identical...\n\nOkay, lets try that again...\n\n SELECT IFNULL(MAX(id), 0) + 1 from test;\n\ncan be replaced by the following PostgreSQL query:\n\n SELECT COALESCE(MAX(id), 0) + 1 from test;\n\nThanks, Lee.\n",
"msg_date": "Fri, 8 Feb 2002 17:17:03 +0000",
"msg_from": "Lee Kindness <lkindness@csl.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: IFNULL -> COALESCE"
},
{
"msg_contents": "Lee Kindness <lkindness@csl.co.uk> writes:\n> Okay, lets try that again...\n> SELECT IFNULL(MAX(id), 0) + 1 from test;\n> can be replaced by the following PostgreSQL query:\n> SELECT COALESCE(MAX(id), 0) + 1 from test;\n\nFor any specific datatype that you might need this for, you could\nprovide a user-defined IFNULL function to avoid having to translate\nyour code. Might get a bit tedious if you are doing it for a lot\nof different datatypes, however.\n\nNot sure if it's worth adding a keyword and a grammar production\nto get Postgres to do this for you. If it were part of a full-court\npress to improve our Oracle compatibility, I wouldn't object, but\nI'm not sure I see the point of doing just the one nonstandard\nfeature.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Feb 2002 13:23:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: IFNULL -> COALESCE "
},
{
"msg_contents": "On Fri, 8 Feb 2002, Lee Kindness wrote:\n\n> Christopher Kings-Lynne writes:\n> > lkindness@csl.co.uk writes:\n> > > SELECT COALESCE(MAX(id), 0) + 1 from test;\n> > > can be replaced by the following PostgreSQL query:\n> > > SELECT COALESCE(MAX(id), 0) + 1 from test;\n> > Ummm...did you make a mistake here? Those statements are\n> > identical...\n>\n> Okay, lets try that again...\n>\n> SELECT IFNULL(MAX(id), 0) + 1 from test;\n>\n> can be replaced by the following PostgreSQL query:\n>\n> SELECT COALESCE(MAX(id), 0) + 1 from test;\n\nMight be nice to have it done automatically, but as a workaround\nwhy not just define ifnull(int, int) - or whatever types are\nnecessary.\n\ncreate function ifnull(int, int) returns int as\n'select coalesce($1, $2);' language 'sql';\nshould work for 7.1 and above unless I'm missing something.\n\n\n\n",
"msg_date": "Fri, 8 Feb 2002 10:29:00 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: IFNULL -> COALESCE"
},
{
"msg_contents": "Oh, i'd agree - it's not really worth the hassle adding the code to\nautomatically do this. Useful to have it mentioned in the archives so\nsomeone else coming up against the same issue can pick up on it\nquicker...\n\nGot me thinking about an option for ecpg to report about any\nnon-standard/user-defined functions used in the source (which of\ncourse it assumes are such and just lets them through). Also that\n'sqlca is included by default' message added for 7.2 is annoying!\n\nAnd Bruce, yeah there's a lock ;)\n\nRegards, Lee Kindness.\n\nTom Lane writes:\n > Lee Kindness <lkindness@csl.co.uk> writes:\n > > Okay, lets try that again...\n > > SELECT IFNULL(MAX(id), 0) + 1 from test;\n > > can be replaced by the following PostgreSQL query:\n > > SELECT COALESCE(MAX(id), 0) + 1 from test;\n > \n > For any specific datatype that you might need this for, you could\n > provide a user-defined IFNULL function to avoid having to translate\n > your code. Might get a bit tedious if you are doing it for a lot\n > of different datatypes, however.\n > \n > Not sure if it's worth adding a keyword and a grammar production\n > to get Postgres to do this for you. If it were part of a full-court\n > press to improve our Oracle compatibility, I wouldn't object, but\n > I'm not sure I see the point of doing just the one nonstandard\n > feature.\n > \n > \t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Feb 2002 09:44:27 +0000",
"msg_from": "Lee Kindness <lkindness@csl.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: IFNULL -> COALESCE "
},
{
"msg_contents": "Lee Kindness writes:\n\n> Got me thinking about an option for ecpg to report about any\n> non-standard/user-defined functions used in the source (which of\n> course it assumes are such and just lets them through). Also that\n> 'sqlca is included by default' message added for 7.2 is annoying!\n\nNo kidding. Can we remove that for 7.2.1?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 11 Feb 2002 11:39:21 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: IFNULL -> COALESCE "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n>> 'sqlca is included by default' message added for 7.2 is annoying!\n\n> No kidding. Can we remove that for 7.2.1?\n\nI didn't understand why it was put in in the first place. There's\nno need for it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Feb 2002 11:46:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: IFNULL -> COALESCE "
},
{
"msg_contents": "On Mon, Feb 11, 2002 at 09:44:27AM +0000, Lee Kindness wrote:\n> Oh, i'd agree - it's not really worth the hassle adding the code to\n> automatically do this. Useful to have it mentioned in the archives so\n> someone else coming up against the same issue can pick up on it\n> quicker...\n> \n> Got me thinking about an option for ecpg to report about any\n> non-standard/user-defined functions used in the source (which of\n> course it assumes are such and just lets them through). Also that\n> 'sqlca is included by default' message added for 7.2 is annoying!\n\nThat's actually something needed for FIPS (US federal gov't standard)\nalthough it's optional (not mentionec at all?) for ANSI or ISO: a\n'flagger' that reports all non-standard extensions.\n\nRoss\n\n",
"msg_date": "Mon, 11 Feb 2002 15:06:20 -0600",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: IFNULL -> COALESCE"
},
{
"msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> >> 'sqlca is included by default' message added for 7.2 is annoying!\n>\n> > No kidding. Can we remove that for 7.2.1?\n>\n> I didn't understand why it was put in in the first place. There's\n> no need for it.\n\nAs it stands, sqlca will actually be included twice, so the warning has\nsome merit. But it might be better to actually prevent the second\ninclusion.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 11 Feb 2002 16:56:00 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "sqlca warning (was Re: IFNULL -> COALESCE)"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> As it stands, sqlca will actually be included twice, so the warning has\n> some merit. But it might be better to actually prevent the second\n> inclusion.\n\nWhy? sqlca.h has an #ifndef guard, so there's no harm done. I'd vote\nfor just suppressing the check and notice completely.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Feb 2002 16:58:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: sqlca warning (was Re: IFNULL -> COALESCE) "
},
{
"msg_contents": "Peter Eisentraut writes:\n > Tom Lane writes:\n > > I didn't understand why it was put in in the first place. There's\n > > no need for it.\n > As it stands, sqlca will actually be included twice, so the warning has\n > some merit. But it might be better to actually prevent the second\n > inclusion.\n\nAs I understand it with 7.1 you HAD to have an 'EXEC SQL INCLUDE\nsqlca' line for things to work (assuming you actually access the sqlca\nstructure). With 7.2 this file is now automatically included (whether\nyou need it or not) and when you explicitly tell the precompiler what\nyou're using you get a warning! Imagine the response if a C compiler\nwas compiling the following program:\n\n #include <stdio.h>\n\n int main(int argc, char **argv)\n {\n printf(\"Hello world!\\n\");\n }\n\nand gave you a warning for including stdio.h! For reference gcc must\nbe doing something similar to ecpg - because you don't NEED to include\nstdio.h (which is bad).\n\nIt's nothing major... just annoying!\n\nBest Regards, Lee.\n",
"msg_date": "Tue, 12 Feb 2002 09:34:03 +0000",
"msg_from": "Lee Kindness <lkindness@csl.co.uk>",
"msg_from_op": true,
"msg_subject": "sqlca warning (was Re: IFNULL -> COALESCE)"
}
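For context, the shape of an ecpg program affected by this (a minimal sketch; the database and variable names are invented). Under 7.1 the INCLUDE line was required in order to touch sqlca; under 7.2 sqlca comes in automatically, and writing the line provokes the notice being complained about:

```c
#include <stdio.h>

EXEC SQL INCLUDE sqlca;         /* required in 7.1; triggers the 7.2 notice */

int main(void)
{
    EXEC SQL BEGIN DECLARE SECTION;
    int ntables;
    EXEC SQL END DECLARE SECTION;

    EXEC SQL CONNECT TO template1;
    EXEC SQL SELECT count(*) INTO :ntables FROM pg_class;
    if (sqlca.sqlcode == 0)
        printf("pg_class has %d rows\n", ntables);
    EXEC SQL DISCONNECT;
    return 0;
}
```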
]
[
{
"msg_contents": "Would it be too much to ask for that everytime a significant\nuser-visible change is checked in, the release notes are updated right\nthere as though they are documentation (which they are)? This immediately\nleads to three significant advantages:\n\n1. Users can keep track of developement.\n\n Until after the start of beta, no user really had any idea what this\n new release was going to be about. Users that started projects before\n the release notes were in readable form were wasting their time, if\n they chose PostgreSQL at all.\n\n2. Developers can keep track of development.\n\n The number of changes for 7.2 is really enough for five releases, but\n no one can be expected to keep track of that. So in the future, when\n the list gets too long, we make a release. ;-)\n\n3. The list accurately reflects the actual work.\n\n Having the list reconstructed by a single person from CVS logs months\n after the fact is just way too lossy.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 8 Feb 2002 13:01:17 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Maintaining the list of release changes"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Would it be too much to ask for that everytime a significant\n> user-visible change is checked in, the release notes are updated right\n> there as though they are documentation (which they are)?\n> Having the list reconstructed by a single person from CVS logs months\n> after the fact is just way too lossy.\n\nI agree completely, and was about to make a similar proposal. However,\nI'm not entirely sure where \"the release notes\" actually are, nor which\nis the master copy. Also, I'd prefer not to have to deal with SGML\nmarkup for them.\n\nWhat I'd suggest is that we keep a plaintext file somewhere near the\ntop of the CVS tree (in doc/ perhaps) that developers append\nrelease-notes items to as the work is completed. At the end of each\ndevelopment cycle, Bruce can prepare the \"nice looking\" notes from that\nsource and then reset the file to empty for the next cycle.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Feb 2002 13:29:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Maintaining the list of release changes "
},
{
"msg_contents": "Tom Lane writes:\n\n> I agree completely, and was about to make a similar proposal. However,\n> I'm not entirely sure where \"the release notes\" actually are, nor which\n> is the master copy. Also, I'd prefer not to have to deal with SGML\n> markup for them.\n\nThe release notes are in the file release.sgml. You don't really have to\ndeal with any markup there. Look at the source under sect2 \"Changes\"; you\nonly need to insert a line there. (Well, not there. A new section will\nbe created.) Grouping this into useful categories and creating the\n\"highlights\" at the top can be done later during the usual revising and\nediting.\n\n> What I'd suggest is that we keep a plaintext file somewhere near the\n> top of the CVS tree (in doc/ perhaps) that developers append\n> release-notes items to as the work is completed. At the end of each\n> development cycle, Bruce can prepare the \"nice looking\" notes from that\n> source and then reset the file to empty for the next cycle.\n\nThis (and later CVS log based ideas) don't really address my point #1:\nkeeping users informed. You might underestimate that. Plenty of users\nwould really like to try out development snapshots for their new projects,\nor see if they compile, verify new features early, or just to play around.\nBut that's a lot harder if they don't know what's in there.\n\nSecondly, why do double work if you can just do it right the first time?\n\nWe've been doing an excellent job lately about keeping the documentation\ncurrent. This is really the same corner, but it's even more fundamental:\nIf you know what changes have been made since the last release you can\neasily check if those changes have made it into the documentation. I'm\nnot suggesting that we become laxer about documentation updates because of\nthis, neither do I want strict \"every patch must update the release notes\"\npolicies. Just keep in mind that at some point, when you update the\ndocumentation (meaning you feel your feature is reasonably complete to the\npoint that you want to let it loose), add a line to the release notes to\nlet people know it's there. That's it.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 8 Feb 2002 18:09:24 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Maintaining the list of release changes "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> What I'd suggest is that we keep a plaintext file somewhere near the\n>> top of the CVS tree (in doc/ perhaps) that developers append\n>> release-notes items to as the work is completed. At the end of each\n>> development cycle, Bruce can prepare the \"nice looking\" notes from that\n>> source and then reset the file to empty for the next cycle.\n\n> This (and later CVS log based ideas) don't really address my point #1:\n> keeping users informed.\n\nThat's a valid complaint against CVS-log-based notes, since most people\nprobably don't have or know how to use tools like cvs2cl. I don't see\nwhy it's an argument against my idea of a dedicated text file, though.\nSuch a file would be just as readily found as release.sgml, possibly\nmore so.\n\n> Secondly, why do double work if you can just do it right the first time?\n\nA fair point.\n\nI am still concerned about the prospect of either release.sgml or\na dedicated file becoming commit bottlenecks because everyone is\nconstantly hitting them (and in approximately the same place, too).\nHowever, we probably can't know how big an annoyance that will be\nin practice unless we try it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Feb 2002 18:16:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Maintaining the list of release changes "
},
{
"msg_contents": "Tom Lane writes:\n\n> That's a valid complaint against CVS-log-based notes, since most people\n> probably don't have or know how to use tools like cvs2cl. I don't see\n> why it's an argument against my idea of a dedicated text file, though.\n> Such a file would be just as readily found as release.sgml, possibly\n> more so.\n\nDo you have a suggestion for a name and where to put the file. Will it be\nremoved or cleared before releases?\n\n> I am still concerned about the prospect of either release.sgml or\n> a dedicated file becoming commit bottlenecks because everyone is\n> constantly hitting them (and in approximately the same place, too).\n\nThere are lots of and big projects that use GNU-style ChangeLogs.\nTheoretically, they would all hit at the same place, but in practice this\nis never a problem. When an outsider submits a patch he just puts the\nchangelog entry into a separate attachment or right into the message and\nthe committer puts it into the right place.\n\nWe can make this to work as well. If someone comes along, \"hey, I just\nadded a --foo option to pg_bar to do xyx\", the patch committer simply adds\na line \"added --foo option to pg_bar to do xyz\" to the log. All the\nrelease note items are one-liners, so this can't be too big of a deal.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 11 Feb 2002 23:38:02 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Maintaining the list of release changes "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n>> Such a file would be just as readily found as release.sgml, possibly\n>> more so.\n\n> Do you have a suggestion for a name and where to put the file. Will it be\n> removed or cleared before releases?\n\nI think it should live in .../doc, same as TODO. Not picky about name.\n\nI don't think we should remove the file; removing and re-adding it will\nprobably do strange things to CVS history. One could argue about\nwhether we even need to clear it. Might be best to ship it as-is in\nthe release, and then clear it *after* the next version branch.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Feb 2002 16:29:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Maintaining the list of release changes "
},
{
"msg_contents": "Tom Lane writes:\n\n> I think it should live in .../doc, same as TODO. Not picky about name.\n>\n> I don't think we should remove the file; removing and re-adding it will\n> probably do strange things to CVS history. One could argue about\n> whether we even need to clear it. Might be best to ship it as-is in\n> the release, and then clear it *after* the next version branch.\n\nIt seems that I'm going to have a hard time convincing people to put the\nnotes into the DocBook sources right away, so in order to get this\njump-started, I'll go with the text file.\n\nNew changes go at the top. I guess we'll want to keep the one-liner,\nbarely-a-sentence format for now.\n\nNot sure if we want to ship this file, but we can think about that later\nwhen we see how it develops.\n\nAs for name, maybe CHANGES. You could think of it this way: Some changes\nmay cancel each other out along the way and may not go down into\n\"history\".\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 15 Feb 2002 16:50:44 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Maintaining the list of release changes "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> As for name, maybe CHANGES. You could think of it this way: Some changes\n> may cancel each other out along the way and may not go down into\n> \"history\".\n\nSure. Or maybe RECENTCHANGES, to point out that it's not a changelist\nback to the beginning of time. But I don't care much. Please create\nit, and I'll start making entries...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Feb 2002 17:56:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Maintaining the list of release changes "
},
{
"msg_contents": "...\n> It seems that I'm going to have a hard time convincing people to put the\n> notes into the DocBook sources right away, so in order to get this\n> jump-started, I'll go with the text file.\n\nWhy would putting notes into the DocBook sources be a problem? It is as\ntrivial as it can get, and it would eliminate one of the few remaining\ndisconnects between presentable final documentation and folks doing the\nwork.\n\nI'm happy having this in DocBook, Peter would prefer it too, and it\ndoesn't hurt or bite hard so why would it be a problem for others?\n\nI'm afraid I purged some of this thread from my mail, but are there\nissues other than \"I don't know which file to use\" or \"I don't\nunderstand DocBook markup\"? Both of those are easy to fix...\n\n - Thomas\n",
"msg_date": "Fri, 15 Feb 2002 16:30:07 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Maintaining the list of release changes"
}
]
[
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> We can certainly move our header inclusion order around, but that is\n>> simply an application-level workaround for a broken system header.\n\n> I didn't think so. I always thought there were requirements in include\n> ordering.\n\nThere are certain broken OSes that think it's okay to let applications\ndeal with that, but they are certainly broken. What is an application\nsupposed to do if two different implementations have conflicting\nrequirements for header include order? Why should it be an\napplication's responsibility to worry about it in the first place?\n\nI don't even know of any place where it could be documented that\n\"<foo.h> requires <bar.h> to be included first\" in the standard man\npage layout, because there isn't a man page per header file. And I'll\ndefinitely bet lunch that that FreeBSD developer didn't fix the man\npages to say any such thing when he made that typedef change ;-)\n\nIf all versions of Unix had identical system headers then this sort\nof thing wouldn't be a big deal, but since they don't, \"each header\ncan be included independently\" is the only reasonable approach.\n\nWe have a number of workarounds of this kind in Postgres already,\nand I don't doubt that this one will not be the last one. But that\ndoes not persuade me that system headers with this sort of problem\nare acceptable. In this case we have an opportunity to complain,\nand we should.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Feb 2002 13:40:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: compile error of PostgreSQL 7.2 on FreeBSD-current "
},
{
"msg_contents": "[ Moved to patches.]\n\nI just did:\n\t \n\tfind /usr/include -name '*.h' | while read FILE \n\tdo \n\t echo \"#include <$FILE>\" >x.c \n\t echo \"main(){}\" >>x.c \n\t cc -c x.c\n\tdone > out 2>&1\n\nand look what I got, file attached. Who's OS can pass this test? I\nrealize some are g++ headers.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026",
"msg_date": "Fri, 8 Feb 2002 14:01:38 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] compile error of PostgreSQL 7.2 on FreeBSD-current"
}
]
[
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Sure, just stick stuff at the top of the HISTORY file and I can deal\n> with it during release packaging. I will tell you that many one-line\n> release items are made up of many commits and decisions so there is\n> little likelyhood I can just cut/paste and avoid the CVS log grovel but\n> it will help new folks.\n\nWell, if you expect to have to grovel through the CVS logs anyway,\nanother possibility is to put the info into the CVS commit messages.\nPerhaps we could agree on a convention that commits that include\nrelease-note-worthy material should have something like\n\n\tRELEASE NOTE: descriptive paragraph here\n\nin the commit message. Keep basically the same process, but make it\neasier for Bruce to spot the important material.\n\nI think I prefer the file-of-notes idea, but this is a possibility worth\nsuggesting.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Feb 2002 14:15:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Maintaining the list of release changes "
}
]
[
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> I think I prefer the file-of-notes idea, but this is a possibility worth\n>> suggesting.\n\n> We can do whatever people want. I was just afraid it would be too much\n> work for people, and I am willing to continue doing it. Actually, if\n> people want to help, looking over the final is 10x more valuable than\n> having a separate file, at least for me.\n\nEven if people do review the notes, who's to say they'll remember a\nchange they made months ago? I think it's important for developers to\nprepare at least a rough-draft entry for the release notes at the time\nthe change is made. We can debate different ways to keep that info\navailable until the docs are prepared, but the real problem here is to\nnot rely on memory.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Feb 2002 16:03:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Maintaining the list of release changes "
},
{
"msg_contents": "On Fri, Feb 08, 2002 at 04:03:39PM -0500, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> I think I prefer the file-of-notes idea, but this is a possibility worth\n> >> suggesting.\n> \n> > We can do whatever people want. I was just afraid it would be too much\n> > work for people, and I am willing to continue doing it. Actually, if\n> > people want to help, looking over the final is 10x more valuable than\n> > having a separate file, at least for me.\n> \n> Even if people do review the notes, who's to say they'll remember a\n> change they made months ago? I think it's important for developers to\n> prepare at least a rough-draft entry for the release notes at the time\n> the change is made. We can debate different ways to keep that info\n> available until the docs are prepared, but the real problem here is to\n> not rely on memory.\n\nThe _really_ critical piece for making this cumulative file work: _every_\nuser visible change needs to go into it, at the time it's commited to CVS.\nBe hardnosed, so external patches _must_ touch that file, or put it in\nthe commit log. The problem with the commit log is that it puts the onus\non the CVS commiter, not the patch maker.\n\nI'm partial to a combo - a 'USER VISIBLE CHANGES: <yes|no>' line in CVS\ncommit logs (put it in the template, default to yes?) and every 'yes'\nsubmit _must_ patch the cumulative release file.\n\nRoss\n",
"msg_date": "Fri, 8 Feb 2002 16:13:46 -0600",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: Maintaining the list of release changes"
},
{
"msg_contents": "\"Ross J. Reedstrom\" <reedstrm@rice.edu> writes:\n> I'm partial to a combo - a 'USER VISIBLE CHANGES: <yes|no>' line in CVS\n> commit logs (put it in the template, default to yes?) and every 'yes'\n> submit _must_ patch the cumulative release file.\n\nI find it hard to imagine a patch that doesn't create *some* kind of\nuser-visible change, at some level. The question here is whether it\nrises to the level of needing an entry in the release notes. If we\nwanted a commit-by-commit release history we'd just tell people to read\nthe CVS logs; in practice that's no help. The point of release notes is\nto hit the high spots, and that requires a certain amount of judgment.\n\nSo I don't think a mechanical \"you must provide this\" rule will help\nmuch. We should rely on the judgment of the committer to decide whether\na release note is warranted. What we want is a reasonably simple way\nfor the committer to provide a draft note, and a mechanism to ensure\nthat Bruce doesn't miss the note later.\n\nIn the case that started this thread, I had actually provided material\nfor a release note in the CVS commit entry, but Bruce had skipped over\nit because he didn't think it important. The missing link was that\nI didn't have a way to plaster a \"this is important\" label on the\ncommit message.\n\nOh, here's another thought: including an explicit patch of the\npending-notes file in submissions won't work very well, since that part\nof the patch will surely fail to apply if it's even a few days old.\nIt'll have to come in as separate text that the committer inserts into\nthe pending-notes file when he commits. From this perspective, it might\nbe a lot easier to put the notes into CVS commit messages; there'll be\nfewer problems with commit collisions.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Feb 2002 17:31:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Maintaining the list of release changes "
}
]
[
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I suppose we all thought that the change wouldn't bite anyone.\n\nI wasn't expecting such vocal complaints, for sure.\n\n> Tom, at the time, did you think it should be mentioned in the release\n> notes?\n\nI can't recall if I thought about it in that way. If I had, I would\nhave said \"yes\", but I don't recall if I considered the point. I've\nalways made a habit of writing reasonably detailed commit messages and\nthen leaving it to you to decide whether a release note is needed.\n\nPart of the point of this discussion, I think, is just to make sure that\ncommitters consider \"should there be a release note for this change?\"\nevery time they commit. Thinking about that, and writing down the\nappropriate info immediately if a note is needed, are the critical steps.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Feb 2002 18:03:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Maintaining the list of release changes "
},
{
"msg_contents": "On Fri, Feb 08, 2002 at 06:03:53PM -0500, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I suppose we all thought that the change wouldn't bite anyone.\n> \n> I wasn't expecting such vocal complaints, for sure.\n\nI think the users that screamed about this would like a detailed \"every\nuser visible change\" list, in addition to the highpoints Release notes.\n\n> > Tom, at the time, did you think it should be mentioned in the release\n> > notes?\n> \n> I can't recall if I thought about it in that way. If I had, I would\n> have said \"yes\", but I don't recall if I considered the point. I've\n> always made a habit of writing reasonably detailed commit messages and\n> then leaving it to you to decide whether a release note is needed.\n> \n> Part of the point of this discussion, I think, is just to make sure that\n> committers consider \"should there be a release note for this change?\"\n> every time they commit. Thinking about that, and writing down the\n> appropriate info immediately if a note is needed, are the critical steps.\n\nAnd having guidelines for the developers that describe the simple\nquestions to ask themselves when answering 'does this deserve a release\nnote'. It's not about advertizing: it's about documenting changes, so the\nDBA can grep for likely words of something breaks, such as (in this case)\n\"array\".\n\nTom, you said 'every change is user visible'. I think, for this purpose,\nonly things that modify existing behavior (input or output) in kind,\nnot merely quality, are 'visible'. A patch that speeds up (or slows\ndown!) query processing by 100 fold is certainly user visible, but not\nfor the purposes of reporting release changes. New functionality is not\n'user visible'.\n\nRoss\n",
"msg_date": "Fri, 8 Feb 2002 17:15:26 -0600",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: Maintaining the list of release changes"
},
{
"msg_contents": "\"Ross J. Reedstrom\" <reedstrm@rice.edu> writes:\n> I think the users that screamed about this would like a detailed \"every\n> user visible change\" list, in addition to the highpoints Release notes.\n\nA greppable copy of the CVS logs would satisfy that, assuming that we\nmaintain a reasonable standard of quality in our commit messages.\nDavid Gould evidently managed to find my commit message about that\narray_out change; but I do not know how hard it was for him to look.\n\nPerhaps we should arrange for a nightly cvs2cl run to produce a \"CVS\nchanges since last major release\" document on the website. This might\nalso help answer Peter's concern about visibility of work-in-progress.\n\n> Tom, you said 'every change is user visible'. I think, for this purpose,\n> only things that modify existing behavior (input or output) in kind,\n> not merely quality, are 'visible'.\n\nI don't think that's a helpful criterion. \"Does it potentially break\nany application code?\" might be a helpful criterion.\n\nNew features that don't pose backwards-compatibility issues probably\nneed a different set of criteria to decide if they merit mention in\nrelease notes.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Feb 2002 18:35:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Maintaining the list of release changes "
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Tille, Andreas [mailto:TilleA@rki.de] \n> Sent: 08 February 2002 10:42\n> To: Jean-Michel POURE\n> Cc: PostgreSQL Hacker Liste\n> Subject: Re: [HACKERS] Feature request for 7.3 and pgAdmin : \n> CREATE OR ALTER VIEW,\n> \n> \n> On Fri, 8 Feb 2002, Jean-Michel POURE wrote:\n> \n> > This will help us provide a better GUI environment for pgAdmin2.\n> By the way: I'm not on any pgAdmin2 list but regarding to \n> feature requests I could add something:\n> - Portability to free operating systems (wxGTK is nice and portable\n> and you would fill a big gap in the free software world)\n> \n> Kind regards\n> \n> Andreas.\n\nRealistically, unless someone comes up with an easy way to port a VB6 app to\nLinux etc, this is unlikely to happen until I get bored and think about\npgAdmin III, by when I hope the Mono project might provide some answers. \n\nThe current code will work under Wine with a little work (and a Windows\npartition) btw.\n\nRegards, Dave.\n",
"msg_date": "Sat, 9 Feb 2002 10:57:48 -0000 ",
"msg_from": "Dave Page <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: Feature request for 7.3 and pgAdmin : CREATE OR ALT"
}
] |
[
{
"msg_contents": "Hi,\n\nI've been asked if PostgreSQL is interested in its own booth on Linuxtag in\nGermany. While I will be there I have to take care of the credativ booth so\nI cannot run a PostgreSQL booth at the same time. I will, however, be able\nto present PostgreSQL at our booth, but maybe someone's interested in\nrunning a free booth.\n\nMichael\n\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Sun, 10 Feb 2002 11:10:55 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Linuxtag"
}
] |
[
{
"msg_contents": "Dear all,\n\n Where can I find the source code of ODBC functions in pgsql(somthing like the RIGHT(),LEFT())?\n And also the source code of database functions( substr()..etc..)? I have download the 7.2.src.rpm\n but I can't find them...\n\nBest Regards'\nDean Lu\n",
"msg_date": "Mon, 11 Feb 2002 03:28:48 +0800",
"msg_from": "\"Dean@TMDT\" <dean@tmdt.com.tw>",
"msg_from_op": true,
"msg_subject": "where can I find the source code of functions?"
}
] |
[
{
"msg_contents": "Okay, I think I've seen one too many reports of corrupted pg_dump data\ncaused by Microslothish newline-to-CR/LF translation. We need to find\na way to make COPY data proof against that brain damage.\n\nThe best idea that comes to mind is as follows:\n\n1. During COPY OUT, convert carriage returns and newlines in\ndata being copied out to the two-character sequences \\ r and \\ n.\n(This will already work properly with the existing COPY IN logic,\nsince it recognizes those escape sequences.)\n\n2. During COPY IN, discard \\r if it appears just before \\n. (This\nmust occur before de-backslashing, so that the sequence \\ \\r \\n\nwill be interpreted as a quoted newline. That way, an existing\ndump file that represents a data newline as \\ \\n will be correctly\nread even if it's been Microsoft-munged to \\ \\r \\n.)\n\nThe second part of this would have a small backwards-compatibility\nproblem: when reading an old dump file (one made by COPY OUT before\nthe above change) it would be possible for it to discard a \\r that\nis legitimately part of the data. That would only happen if the \\r\nwas the last real data character on the line. Notice that if the\n\\r precedes a data \\n character, the \\n will be represented as \\ \\n\nby existing COPY OUT, or by \\ n by COPY OUT with the above change;\neither way, the \\r is followed by a backslash and will not be dropped.\nSo this change to COPY IN would not cause it to fail with old data\nvalues containing \\r\\n sequences, only with those ending with a bare \\r.\n\nWe could provide a workaround for that case by offering a SET variable\nthat enables or disables discarding of \\r, though I'd want it to default\nto ON.\n\nIf this seems like a reasonable approach, then I'd like to apply the\nCOPY OUT part of the change immediately (ie, for 7.2.1). Converting \\r\nand \\n to \\ r and \\ n will not hurt anyone in either forward or backward\ndirection, and if we do that then dumps made with 7.2.1 or later will\nwork correctly with 7.3 regardless of how one sets the discard-\\r\nvariable.\n\nA stronger change would make COPY IN regard \\n, \\r \\n, or \\r as\nequivalent representations of newline. If we do this then we'll also\nprevent Unix-to-Mac newline conversions from breaking COPY data (Mac\nrepresents newlines by just \\r). However, in this approach any\nold-style data \\r would break, not only those at end of data line.\nSo the SET variable to revert to the old COPY IN behavior would be\nneeded in many more cases.\n\nComments? Anyone see a better way?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 10 Feb 2002 21:22:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Idea for making COPY data Microsoft-proof"
},
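To make the proposed translation concrete, here is a minimal, stand-alone C sketch of the two steps described in the message above: escaping CR and LF on the way out, and dropping a CR that immediately precedes the line-ending LF on the way in. This is an illustration only, not the actual backend COPY code; all function names and the simplified line handling here are invented for the sketch.

#include <stdio.h>

/* COPY OUT side: emit one data character, escaping the dangerous ones */
static void
copy_out_char(FILE *out, int c)
{
    switch (c)
    {
        case '\r': fputs("\\r", out); break;    /* proposed new behavior */
        case '\n': fputs("\\n", out); break;    /* proposed new behavior */
        case '\\': fputs("\\\\", out); break;
        case '\t': fputs("\\t", out); break;    /* field delimiter */
        default:   fputc(c, out); break;
    }
}

/* COPY IN side: read one data line into buf, dropping a CR that
 * immediately precedes the terminating LF (newline-translation damage).
 * This runs before de-backslashing, so backslash-quoted characters are
 * copied through untouched. */
static int
copy_in_line(FILE *in, char *buf, size_t buflen)
{
    size_t n = 0;
    int    c = EOF;

    while (n + 2 < buflen && (c = fgetc(in)) != EOF)
    {
        if (c == '\\')
        {
            int next = fgetc(in);

            buf[n++] = '\\';            /* keep escape pairs intact;  */
            if (next == EOF)            /* de-escaping happens later  */
                break;
            buf[n++] = (char) next;
            continue;
        }
        if (c == '\r')
        {
            int next = fgetc(in);

            if (next == '\n' || next == EOF)
            {
                c = '\n';               /* CR/LF (or bare trailing CR): */
                break;                  /* treat as the line terminator */
            }
            ungetc(next, in);           /* lone CR followed by data:    */
            buf[n++] = '\r';            /* old-style data, keep it      */
            continue;
        }
        if (c == '\n')
            break;                      /* plain LF ends the line */
        buf[n++] = (char) c;
    }
    buf[n] = '\0';
    return (n > 0 || c == '\n');
}

int
main(void)
{
    FILE       *f = tmpfile();
    const char *data = "value with\r\nan embedded newline";
    char        line[256];

    if (f == NULL)
        return 1;

    for (const char *p = data; *p; p++)
        copy_out_char(f, (unsigned char) *p);
    fputc('\n', f);                     /* row terminator */

    rewind(f);
    if (copy_in_line(f, line, sizeof line))
        printf("read back (still escaped): %s\n", line);
    fclose(f);
    return 0;
}

With this scheme the dumped file contains no literal CR or LF inside data values; the only literal CR a munged file can contain is one some other program inserted before a line-ending LF, and copy_in_line() silently drops exactly that.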
{
"msg_contents": "At 09:22 PM 2/10/02 -0500, Tom Lane wrote:\n>\n>Comments? Anyone see a better way?\n>\n\nCan you do something akin to what you did with the binary output - but in\nthis case allow for no details. This would be *great* for column order\nproblems. eg,\n\n COPY <table> TO STDOUT WITH HEADER;\n\nor somesuch. This would result in two extra lines at the start:\n\n <Keywords indocating assumptions and translations (eg. CRLF_TO_LF) etc.\n {<col name>...}\n ...NORMAL DATA HERE...\n\nSimilary,\n\n COPY <table> FROM STDIN WITH HEADER;\n\nWould use the header to work out how to translate CRLF etc, as well as\nallow COPY to be used to load different table definitions.\n\nHow does this sound?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 11 Feb 2002 18:47:59 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Idea for making COPY data Microsoft-proof"
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> Can you do something akin to what you did with the binary output - but in\n> this case allow for no details.\n\nThis strikes me as solving an entirely different issue -- with great\nloss of backwards compatibility. Possibly these are good ideas, but for\nthe moment I'd like to keep this thread focused on the issue of coping\nwith newline translations.\n\n(In any case, I thought someone was already working on an optional\ncolumn-name-list clause for COPY, which would solve that problem in what\nseems a cleaner fashion.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Feb 2002 11:11:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Idea for making COPY data Microsoft-proof "
},
{
"msg_contents": "At 11:11 AM 2/11/02 -0500, Tom Lane wrote:\n>Philip Warner <pjw@rhyme.com.au> writes:\n>This strikes me as solving an entirely different issue \n\nWell, a related issue: you are talking about (slightly) changing the\nencoding of dumped data - why not therefor allow for an (optional) header\nwith full encoding details. Column headers are just a minor bonus.\n\n\n>- with great\n>loss of backwards compatibility.\n\nNot if anything without the 'WITH HEADERS' is treated as per current.\nAnyone having M$ problems can use the new format (and pg_dump could use it\nalways).\n\n\n>the moment I'd like to keep this thread focused on the issue of coping\n>with newline translations.\n\nNo big deal, but this seems to be an encoding issue; and it seems like a\ngood idea to formalize it somehow.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 12 Feb 2002 10:58:49 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Idea for making COPY data Microsoft-proof "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> No big deal, but this seems to be an encoding issue; and it seems like a\n> good idea to formalize it somehow.\n\nWell, currently there is a strict separation between COPY data (in the\nfile) and metadata (supplied as parameters to the COPY command). I'm\nnot eager to revisit that decision. What you seem to be suggesting is\nshoving metadata into the data file, but I think that will create more\nproblems than it solves. To take just one problem: how do I know that\nthe first line is metadata, and not data that happens to look exactly\nlike whatever my metadata layout is?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Feb 2002 19:02:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Idea for making COPY data Microsoft-proof "
},
{
"msg_contents": "At 07:02 PM 2/11/02 -0500, Tom Lane wrote:\n>To take just one problem: how do I know that\n>the first line is metadata, and not data that happens to look exactly\n>like whatever my metadata layout is?\n\nYou don't, which is why you need the 'WITH HEADER' or 'WITH ENCODING'\nclause on COPY. I guess COPY could issue a warning when you do not say WITH\nHEADER and it looks like a valid header. \n\nOther than that, it's a case of storing information about the dumped data,\nnot the database schema in the data file. I'm not particularly attached to\nthe column names being there, but it does seem usefull to store\ninstructions indicating the the file is formatted.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 12 Feb 2002 11:15:24 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Idea for making COPY data Microsoft-proof "
},
{
"msg_contents": "[2002-02-11 11:11] Tom Lane said:\n| Philip Warner <pjw@rhyme.com.au> writes:\n| > Can you do something akin to what you did with the binary output - but in\n| > this case allow for no details.\n| \n| This strikes me as solving an entirely different issue -- with great\n| loss of backwards compatibility. Possibly these are good ideas, but for\n| the moment I'd like to keep this thread focused on the issue of coping\n| with newline translations.\n| \n| (In any case, I thought someone was already working on an optional\n| column-name-list clause for COPY, which would solve that problem in what\n| seems a cleaner fashion.)\n\nYes, the work for a column list in COPY FROM is largely done. I've\nnot been able to work on COPY TO, tho.\n\nPart #1 of your original proposal is certainly the right thing to do.\n\nI've backgrounded this problem for most of the day, and although I\nknow it's a severe change, your \"stronger\" solution seems like a\nbetter change than the part #2, which just feels like something \nthat would only be undone later. Both #2 and the \"stronger\" way\nwill require a SET option for absolute correctness; why not require\nthat SETting more often than not for any old-format dumps? Yes, it\nwill affect a larger number of users, but we net a better dump \nformat for that pain.\n\ncheers.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n",
"msg_date": "Mon, 11 Feb 2002 22:54:27 -0500",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": false,
"msg_subject": "Re: Idea for making COPY data Microsoft-proof"
}
] |
[
{
"msg_contents": "I have just isolated a bug in doing a 'select for update'. I have \nincluded a test case below that reproduces the problem on a 7.2 \ndatabase. The bug is as follows: after issuing a select for update that \nblocks on a record being locked by a second process, when that second \nprocess commits, the first process begins using 100% of the cpu and \nnever completes.\n\nThe test case is as follows:\n\ndrop table test1;\ndrop table test2;\n\ncreate table test1 (cola int);\ncreate unique index test1_u1 on test1(cola);\ncreate table test2 (cola int);\ncreate unique index test2_u1 on test2(cola);\ninsert into test1 values (1);\ninsert into test2 values (1);\n\n\nThen in one psql session issue the following:\n\nbegin;\nUPDATE TEST1 SET COLA = 1\nWHERE COLA = 1;\n\n\nThen in a second psql issue the following:\n\nbegin;\nSELECT T1.COLA\nFROM TEST1 T1, TEST2 T2\nWHERE T1.COLA = T2.COLA\nAND T2.COLA = 1 FOR UPDATE;\n\nNow go back to the first psql and issue the following:\n\ncommit;\n\nNow you will see the cpu usage spike up to 100% for the second process \nand it will never return.\n\nRunning analyze on these newly created tables will cause the indexes not \nto be used and will result in the select working correctly. However in \nmy real code the indexes are necessary and thus that isn't a workaround \nfor me.\n\nthanks,\n--Barry\n\n",
"msg_date": "Sun, 10 Feb 2002 22:39:29 -0800",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": true,
"msg_subject": "bug with select for update"
},
{
"msg_contents": "Barry Lind <barry@xythos.com> writes:\n> I have just isolated a bug in doing a 'select for update'.\n\nNice example. I believe I have a fix...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Feb 2002 14:04:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: bug with select for update "
}
] |
[
{
"msg_contents": "Dear all,\n\nlibgda (http://www.gnome-db.org) offers a common interface to \nPostgreSQL, MySQL, Oracle, Sybase, ... databases. Libgda is based on Corba \nand gives access to most database and schema objects (tables, columns, views, \nfunctions are supported, triggers will soon be). It also has XML query \nsupport, unified types and connexion pooling. \n\nlibgda is an independant library with no dependency to Gnome.\n\nWhy not integrate libgda in PostgreSQL to register and access remote database \nobjects? This would allow the \"attachment\" and integration of foreign tables \n& views into PostgreSQL.\n\nBest regards,\nJean-Michel POURE\n",
"msg_date": "Mon, 11 Feb 2002 09:23:41 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Fetature enhancement request : use of libgda in PostgreSQL to access\n\tlegacy databases."
},
{
"msg_contents": "Dear Jean-Michel,\ndid you notice http://gasql.sourceforge.net in the applications section\nof the gnome-db site? It is supposed to be a PostgreSQL administration\ntool, and seems to be nice looking at the screenshots... maybe you\njust need to remove GNOME integration...\nJust my 2 cents :-)\nBest regards,\nAndrea Aime\n\nJean-Michel POURE wrote:\n> \n> Dear all,\n> \n> libgda (http://www.gnome-db.org) offers a common interface to\n> PostgreSQL, MySQL, Oracle, Sybase, ... databases. Libgda is based on Corba\n> and gives access to most database and schema objects (tables, columns, views,\n> functions are supported, triggers will soon be). It also has XML query\n> support, unified types and connexion pooling.\n> \n> libgda is an independant library with no dependency to Gnome.\n> \n> Why not integrate libgda in PostgreSQL to register and access remote database\n> objects? This would allow the \"attachment\" and integration of foreign tables\n> & views into PostgreSQL.\n> \n> Best regards,\n> Jean-Michel POURE\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n",
"msg_date": "Mon, 11 Feb 2002 10:12:18 +0100",
"msg_from": "\"Andrea Aime\" <aaime@comune.modena.it>",
"msg_from_op": false,
"msg_subject": "Re: Fetature enhancement request : use of libgda in PostgreSQL "
},
{
"msg_contents": "Le Lundi 11 F�vrier 2002 10:12, Andrea Aime a �crit :\n> did you notice http://gasql.sourceforge.net in the applications section\n> of the gnome-db site? It is supposed to be a PostgreSQL administration\n> tool, and seems to be nice looking at the screenshots... maybe you\n> just need to remove GNOME integration...\n> Just my 2 cents :-)\n\nDear Andrea,\n\nThe purpose of my last mail was to propose the integration a libgda client \ninto PostgreSQL. For example, it should be possible to attach an Oracle table \ninto PostgreSQL, with the ability to run SELECTs, UPDATEs and maybe JOIN \nqueries (???).\n\nThis is pure computer-fiction, but MySQL might well provide such feature as \nwell and query PostgreSQL objects...\n\nComing back to gasql, if we happen to port pgAdmin2 \n(http://pgadmin.postgresql.org) to Linux, why not use libgda and create a \nmulti-vendor GUI directly.\n\nPeople were discussing lately about adding SQL compatibility layers in \nPostgreSQL (i.e Oracle compatibility). IMHO, this is not the right direction \nto go first because it would demand too much investment.\n\nOn the converse, if we integrate libgda BOTH into PostgreSQL AND in a future \nGUI client, we are winning.\n\nJust my 2 cents. My opinion is that the community should concentrate on real \nissues, starting with the most needed ones (ALTER TABLE ALTER COLUMN, CREATE \nOR REPLACE VIEW, CREATE OR REPLACE TRIGGER) and then work on GUI tools and \nabstraction layers to open PostgreSQL to the world.\n\nJust my 2 cents. What do you think my friends?\n\nCheers,\nJean-Michel POURE\n",
"msg_date": "Mon, 11 Feb 2002 11:20:04 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: Fetature enhancement request : use of libgda in PostgreSQL to\n\taccess legacy databases."
},
{
"msg_contents": "On Mon, 11 Feb 2002, Jean-Michel POURE wrote:\n\n> \n> The purpose of my last mail was to propose the integration a libgda client \n> into PostgreSQL. For example, it should be possible to attach an Oracle table \n> into PostgreSQL, with the ability to run SELECTs, UPDATEs and maybe JOIN \n> queries (???).\n\nIt would be inappropriate for PostgreSQL to be made an interface to other\nRDBMSs. What would this achieve? Why would it be useful?\n\n> \n> This is pure computer-fiction, but MySQL might well provide such feature as \n> well and query PostgreSQL objects...\n\nIf that's what they want to do...\n\n> People were discussing lately about adding SQL compatibility layers in \n> PostgreSQL (i.e Oracle compatibility). IMHO, this is not the right direction \n> to go first because it would demand too much investment.\n\nPostgreSQL does need greater support of SQL99 and some extensions to SQL\nfound in other proprietary systems, but this is not the right way to go\nabout doing it. It needs to support them natively so that it can replace\nother systems, not work in conjunction with them.\n\n\n> Just my 2 cents. My opinion is that the community should concentrate on real \n> issues, starting with the most needed ones (ALTER TABLE ALTER COLUMN, CREATE \n> OR REPLACE VIEW, CREATE OR REPLACE TRIGGER) and then work on GUI tools and \n> abstraction layers to open PostgreSQL to the world.\n\nI think you are off the mark here. The addition of trigger and rule/view\nrecompilation is a convenience at best and there are alternatives to\nALTER TABLE DROP COLUMN. Take a look at the TODO list: the most urgent\nitems relate to replication/clustering, point-in-time recovery and\nrow-reuse. All in all, it is these features which are much more desirably\nto current and prospective users.\n\nGavin\n\n",
"msg_date": "Mon, 11 Feb 2002 22:33:35 +1100 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Fetature enhancement request : use of libgda in"
},
{
"msg_contents": "Le Lundi 11 F�vrier 2002 12:33, Gavin Sherry a �crit :\n> The addition of trigger and rule/view\n> recompilation is a convenience at best and there are alternatives to\n> ALTER TABLE DROP COLUMN. Take a look at the TODO list: the most urgent\n> items relate to replication/clustering, point-in-time recovery and\n> row-reuse. All in all, it is these features which are much more desirably\n> to current and prospective users.\n\nMany projects ask users to vote for priority features.\nWho can speak for end-users? Gavin, we need a pool in the to-do-list ...\nThese are hackers priorities which ***may** differ from end-user ones.\n\n1) End-user point of view\n\nMy humble and personnal opinion, shared by many end-users, is that CREATE \nTABLE AS (or whatever based on CREATE TABLE AS and UPDATE FROM) is not a \nvalid alternative. A database sysadmin with 500 tables, triggers and rules \ncannot use alternatives. We need some basic features :\n- to modify schema objects (CREATE OR REPLACE VIEW, CREATE OR REPLACE \nTRIGGER).\n- to drop schema objects (ALTER TABLE DROP COLUMN).\n\nI would be very please if some users could express themselves. What is your \nopinion as regards CREATE TABLE AS, ALTER TABLE DROP COLUMN, etc...\n\nWhat is the end-user priority for such features in 7.3 ?\n\n2) Use of libgda to query legacy databases\n\nWould it be possible to add this feature in the the to-do-list (very low \npriority = in the long run):\n\" use libgda to query legacy databases (Oracle, Sybase, MySQL) transparently \nfrom PostgreSQL in order to access both data (tables, views) and schema \nobjects (triggers, functions, rules, types, etc..)\".\n\nIs this computer fiction to attach Oracle tables in PostgreSQL using libgda? \nI can't tell and I would be happy to know the hackers' opinion.\n\nBest regards,\nJean-Michel POURE\n",
"msg_date": "Mon, 11 Feb 2002 15:58:43 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Feature enhancement request : use of libgda in"
},
{
"msg_contents": "On Mon, 2002-02-11 at 09:58, Jean-Michel POURE wrote:\n> Le Lundi 11 F�vrier 2002 12:33, Gavin Sherry a �crit :\n> > The addition of trigger and rule/view\n> > recompilation is a convenience at best and there are alternatives to\n> > ALTER TABLE DROP COLUMN. Take a look at the TODO list: the most urgent\n> > items relate to replication/clustering, point-in-time recovery and\n> > row-reuse. All in all, it is these features which are much more desirably\n> > to current and prospective users.\n> \n> Many projects ask users to vote for priority features.\n> Who can speak for end-users? Gavin, we need a pool in the to-do-list ...\n> These are hackers priorities which ***may** differ from end-user ones.\n> \n> 1) End-user point of view\n> \n> My humble and personnal opinion, shared by many end-users, is that CREATE \n> TABLE AS (or whatever based on CREATE TABLE AS and UPDATE FROM) is not a \n> valid alternative. A database sysadmin with 500 tables, triggers and rules \n> cannot use alternatives. We need some basic features :\n> - to modify schema objects (CREATE OR REPLACE VIEW, CREATE OR REPLACE \n> TRIGGER).\n> - to drop schema objects (ALTER TABLE DROP COLUMN).\n> \n> I would be very please if some users could express themselves. What is your \n> opinion as regards CREATE TABLE AS, ALTER TABLE DROP COLUMN, etc...\n\nFWIW, As a user, I still would put my priorities more like Gavin did.\nReplication/cluistering is top for me, followed by point-in-time\nrecovery. Row reuse would be good, although maybe I differ a little in\nthat I would like 'CREATE OR REPLACE' syntax a liitle more. ALTER TABLE\nDROP COLUMN doen't do much for me - it's nice, but for the few case\nwhere my DB design was not up to snuff, I just rename and carry the\ncolumn on until my next major upgrade.\n\nOf course my say-so is moot. It's been my experience that people who\nvote by suppying code tend to be weight somewhat more hevily in this\nprocess. And I can't think of any way I'd rather have it.\n\n> What is the end-user priority for such features in 7.3 ?\n> \n> 2) Use of libgda to query legacy databases\n> \n> Would it be possible to add this feature in the the to-do-list (very low \n> priority = in the long run):\n> \" use libgda to query legacy databases (Oracle, Sybase, MySQL) transparently \n> from PostgreSQL in order to access both data (tables, views) and schema \n> objects (triggers, functions, rules, types, etc..)\".\n\nEasy enough to do in middleware. Just in the fantasy world in which I\nsomehow spoke for developers' time, I still wouldn't mark this too high\non my priority list.\n\nSo there's a little user feedback for you. Hope it helps.\n\n-- \nKarl DeBisschop\nDirector, Software Engineering & Development\nLearning Network / Information Please\nwww.learningnetwork.com / www.infoplease.com\n\n",
"msg_date": "11 Feb 2002 10:18:51 -0500",
"msg_from": "Karl DeBisschop <kdebisschop@alert.infoplease.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Feature enhancement request : use of libgda"
},
{
"msg_contents": "On Mon, Feb 11, 2002 at 03:58:43PM +0100, Jean-Michel POURE wrote:\n\n> My humble and personnal opinion, shared by many end-users, is that\n> CREATE TABLE AS (or whatever based on CREATE TABLE AS and UPDATE\n> FROM) is not a valid alternative. A database sysadmin with 500\n> tables, triggers and rules cannot use alternatives. We need some\n> basic features :\n> - to modify schema objects (CREATE OR REPLACE VIEW, CREATE OR REPLACE \n> TRIGGER).\n> - to drop schema objects (ALTER TABLE DROP COLUMN).\n> \n> I would be very please if some users could express themselves. What\n> is your opinion as regards CREATE TABLE AS, ALTER TABLE DROP\n> COLUMN, etc...\n\nLow priority. You can work around these limitations without too much\ndifficulty. Point-in-time recovery is currently _impossible_. If\none is going to add features, it is better to concentrate on adding\nbig, category-killer features than refining little features that can\nbe worked around.\n\nThat said, Postgres is free. You can do what you like with it. If\nanyone wants to work on \"create or replace\", &c., s/he is free to do\nso. If it's that important to you, implement it yourself, or pay one\nof the able developers to work on a feature you want. Heck, even if\nyou pay for the feature to be implemented, it'll cost you less than\nOracle licenses.\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Mon, 11 Feb 2002 10:52:36 -0500",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Feature enhancement request : use of libgda in"
},
{
"msg_contents": "Le Lundi 11 F�vrier 2002 16:52, Andrew Sullivan a �crit :\n> If it's that important to you, implement it yourself, or pay one\n> of the able developers to work on a feature you want.\n\nThanks for the information. If you help us, I pay you a beer in Paris. Dave \nPage will probably too in Oxford, so that makes two beers. Dave Page is \nworking hard on pgAdmin2 writing thousands lines of code. As for myself, I \nwill concentrate on something comparable to pgAdmin2 in a libgda / GTK+ \nenvironment because I think Windows has no future. \n\nCREATE OR REPLACE VIEW / TRIGGER and ALTER TABLE DROP COLUMN are real \npriorities for us at pgAdmin team (http://pgadmin.postgresql.org). I don't \nknow PostgreSQL internals, but it should take a few days/weeks to an \nexperienced hacker to add these features.\n\nSo why should I do it myself ? We are a community after all. We are not \nworking for money but helping each others. If we are bringing pgAdmin to a \nlarge audience, we need more help from hackers on what we consider important :\n\n>>>>\nIt should be possible to modify or drop any schema object (with a priority on \nviews, triggers and columns). This is absolute priority for us. Can anyone \nhelp us? Can we make sure it will be added to 7.3? Thanks in advance.\n>>>>\n\nCheers,\nJean-Michel POURE\n",
"msg_date": "Mon, 11 Feb 2002 17:56:22 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Feature enhancement request : use of libgda in"
},
{
"msg_contents": "Jean-Michel POURE wrote:\n>\n> CREATE OR REPLACE VIEW / TRIGGER and ALTER TABLE DROP COLUMN are real\n> priorities for us at pgAdmin team (http://pgadmin.postgresql.org). I don't\n> know PostgreSQL internals, but it should take a few days/weeks to an\n> experienced hacker to add these features.\n\nJean-Michel,\n\n I think you underestimate the problem a little.\n\n Doing CREATE OR REPLACE is not that trivial as you appear to\n think. The existing PL handlers (for PL/Tcl and PL/pgSQL at\n least) identify functions by their pg_proc OID. The\n functions body text is parsed only on the first call to that\n function during the entire session. So changing the functions\n prosrc attribute after having called it already wouldn't take\n effect until the next \"session\". But changing the OID as well\n corrupts existing SPI plans in other functions plus rules.\n\n Now it might be possible to tell your function handler to\n recompile that function at the next call without changing the\n OID, but how do you tell the function handlers in all the\n other concurrently running backends to do so after finishing\n their current transaction?\n\n The reason for this feature not beeing implemented yet is not\n \"that just noone is in the mood for\". It is that the general\n multiuser support structures aren't in place and a little\n local sandbox-hack just wouldn't cut it.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Mon, 11 Feb 2002 14:35:57 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Feature enhancement request : use of libgda"
},
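To make the caching problem Jan describes concrete, here is a deliberately simplified, stand-alone C sketch of a per-backend function cache keyed by pg_proc OID. It is not the real PL/pgSQL handler code -- every name in it is invented, and the "catalog" is just a string -- but it shows why rewriting a function's source has no effect on a backend that has already compiled the old body under the same OID.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef unsigned int Oid;

/* stand-in for a compiled PL function, cached per backend */
typedef struct CachedFunc
{
    Oid                fn_oid;          /* pg_proc OID, the cache key */
    char              *compiled_body;   /* stand-in for the parse tree */
    struct CachedFunc *next;
} CachedFunc;

static CachedFunc *func_cache = NULL;   /* one such cache per backend */

/* stand-in for reading pg_proc.prosrc from the catalog */
static const char *current_prosrc = "old function body";

static CachedFunc *
compile_or_reuse(Oid fn_oid)
{
    CachedFunc *f;

    for (f = func_cache; f != NULL; f = f->next)
        if (f->fn_oid == fn_oid)
            return f;        /* hit: returned even if prosrc changed! */

    /* first call this session: "compile" the current catalog contents */
    f = malloc(sizeof(CachedFunc));
    f->fn_oid = fn_oid;
    f->compiled_body = strdup(current_prosrc);
    f->next = func_cache;
    func_cache = f;
    return f;
}

int
main(void)
{
    printf("call 1: %s\n", compile_or_reuse(12345)->compiled_body);

    /* another session now does the equivalent of CREATE OR REPLACE... */
    current_prosrc = "new function body";

    /* ...but this backend keeps executing its cached compilation */
    printf("call 2: %s\n", compile_or_reuse(12345)->compiled_body);
    return 0;
}

Running this prints "old function body" twice: the second call hits the cache, exactly as a backend that had already executed the function would keep running the old definition until its session ends -- and nothing here tells any *other* backend's cache to invalidate itself.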
{
"msg_contents": "Jan Wieck <janwieck@yahoo.com> writes:\n> Now it might be possible to tell your function handler to\n> recompile that function at the next call without changing the\n> OID, but how do you tell the function handlers in all the\n> other concurrently running backends to do so after finishing\n> their current transaction?\n\nThis is in fact all dealt with for CREATE OR REPLACE FUNCTION, but\nJan's point holds also for CREATE OR REPLACE other-stuff. The syntax\nchange alone is the least of one's worries when implementing such\nthings.\n\nWe were foolish enough to accept a patch for CREATE OR REPLACE FUNCTION\nthat did not deal with propagating the changes, and had to do a lot of\nwork to clean up after it. We will not be so forgiving next time...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Feb 2002 15:02:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Feature enhancement request : use of libgda "
},
{
"msg_contents": "Le Lundi 11 F�vrier 2002 20:35, Jan Wieck a �crit :\n> � � Now it might be possible to tell �your �function �handler �to\n> � � recompile that function at the next call without changing the\n> � � OID, but how do you tell the function �handlers �in �all �the\n> � � other �concurrently running backends to do so after finishing\n> � � their current transaction?\n>\n> � � The reason for this feature not beeing implemented yet is not\n> � � \"that just noone is in the mood for\". �It is that the general\n> � � multiuser support structures aren't in �place �and �a �little\n> � � local sandbox-hack just wouldn't cut it.\n\nThank you for the explaination. I feel stupid. Please, don't flame me for \nthis now :\n\nCould PostgreSQL be working in two modes : development (SET DEVELOPMENT MODE) \nand production (SET PRODUCTION MODE).\n\n1) In development mode, each object has an md5 signature showing whether the \nobject is updated or not. If the object has changed, it is reloaded. This \nwould work even in a cluster. Object modification and deletion would only be \nallowed in development mode.\n\n2) In production, object deletion and modification would not be possible. No \nneed for md5 signatures then.\n\nSwitching from production <-> development would only be possible after all \ntransactions have ended. Pretty stupid, I agree, but this would make life \neasier. Just my 0,00002 cents.\n\nCheers,\nJean-Michel\n",
"msg_date": "Mon, 11 Feb 2002 21:57:43 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Feature enhancement request : use of libgda"
},
{
"msg_contents": "> CREATE OR REPLACE VIEW / TRIGGER and ALTER TABLE DROP COLUMN are real\n> priorities for us at pgAdmin team\n> (http://pgadmin.postgresql.org). I don't\n> know PostgreSQL internals, but it should take a few days/weeks to an\n> experienced hacker to add these features.\n\nI can see the utility of CREATE OR REPLACE VIEW, however for me the DROP\nCOLUMN is way more useful. I can't begin to express how annoying it is to\nnot be able to drop a column. Sooo annoying...\n\n> So why should I do it myself ? We are a community after all. We are not\n> working for money but helping each others. If we are bringing\n> pgAdmin to a\n> large audience, we need more help from hackers on what we\n> consider important :\n\nTo a certain extent I agree. I have definitely seen times where I have\nspent hours and hours and hours of coding doing something that a core\ndeveloper can do in no time, but just isn't inclined to do.\n\nAs an aside: did anyone read my post about SET NOT NULL? I am happy to\nimplement this for 7.3, but no-one answered my questions about if it's in\nthe parser or not, and where to put the code?\n\n> It should be possible to modify or drop any schema object (with a\n> priority on\n> views, triggers and columns). This is absolute priority for us.\n> Can anyone\n> help us? Can we make sure it will be added to 7.3? Thanks in advance.\n\nThe other side of this is that you are using a completely free product coded\nby volunteers. There's no way you can make sure it will be added - all you\ncan do is to try to convince a developer to implement it. Again, if I sat\ndown for a week of coding, I might be able to do it, but someone more\nfamiliar with postgres can probably do it in a day...\n\nChris\n\n",
"msg_date": "Tue, 12 Feb 2002 14:29:30 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Feature enhancement request : use of libgda in"
},
{
"msg_contents": "Le Mardi 12 F�vrier 2002 07:29, Christopher Kings-Lynne a �crit :\n> I can see the utility of CREATE OR REPLACE VIEW, however for me the DROP\n> COLUMN is way more useful. �I can't begin to express how annoying it is to\n> not be able to drop a column. �Sooo annoying...\n\nI recieved a mail from Neil Conway. Here it is :\n\nIf ALTER TABLE DROP COLUMN is important to you guys, why not use the\nexisting code for it? Define _DROP_COLUMN_HACK__ and re-compile, it\nshould work. �I think the implementation is pretty messy: you, or\nsomeone from your time, is welcome to improve it, or suggest a better\nway to do things. �This code is also experimental, so it could\ndefinately do with some testing and QA.\n\nMy point is that, for this feature at least, there are certainly things\nthat you guys can do to increase the likelihood of ALTER TABLE DROP\nCOLUMN being in the 7.3 release.\n\nCheers,\nNeil Conway\n",
"msg_date": "Tue, 12 Feb 2002 09:57:48 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Feature enhancement request : use of libgda in"
},
{
"msg_contents": "On Tue, Feb 12, 2002 at 09:57:48AM +0100, Jean-Michel POURE wrote:\n> Le Mardi 12 F�vrier 2002 07:29, Christopher Kings-Lynne a �crit :\n> > I can see the utility of CREATE OR REPLACE VIEW, however for me the DROP\n> > COLUMN is way more useful. �I can't begin to express how annoying it is to\n> > not be able to drop a column. �Sooo annoying...\n> \n> I recieved a mail from Neil Conway. Here it is :\n> \n> If ALTER TABLE DROP COLUMN is important to you guys, why not use the\n> existing code for it? Define _DROP_COLUMN_HACK__ and re-compile, it\n> should work. �I think the implementation is pretty messy: you, or\n> someone from your time, is welcome to improve it, or suggest a better\n> way to do things. �This code is also experimental, so it could\n> definately do with some testing and QA.\n\nLOL! For ages I have been thinking that this is the obvious way of solving\nthis problem, wondering why no-one had done it yet. Well, obviously someone\ndid it and pointed out the flaws at the same time. In fact, it's even cooler\nnow with lazy vacuum, as it could clean out the column in the background,\nreplacing any values with NULL which, IIRC, don't take any space on disk,\njust a bit in a bitmap.\n\nFor anyone wanting to know more about this, see the doc/TODO.detail/drop in\nthe source tree.\n\n> My point is that, for this feature at least, there are certainly things\n> that you guys can do to increase the likelihood of ALTER TABLE DROP\n> COLUMN being in the 7.3 release.\n\nIt's certainly a nice feature, but I'm not dying for it. There are other\nfeatures I want first :).\n\n-- \nMartijn van Oosterhout <kleptog@svana.org>\nhttp://svana.org/kleptog/\n> Terrorists can only take my life. Only my government can take my freedom.\n",
"msg_date": "Tue, 12 Feb 2002 21:54:59 +1100",
"msg_from": "Martijn van Oosterhout <kleptog@svana.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Feature enhancement request : use of libgda in"
},
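On the point in the message above that a NULL costs only a bitmap bit: a heap tuple carries a null bitmap with one bit per column, and a column whose bit is clear stores no data bytes at all. Here is a hypothetical stand-alone version of that bit test -- the real macro lives in the backend's tuple header definitions, and the names below are invented for the illustration.

#include <stdio.h>

/* one bitmap bit per column; a set bit means the column has a value,
 * a clear bit means the column is NULL and stores no data bytes */
static int
column_is_null(const unsigned char *nullbitmap, int attno)
{
    return (nullbitmap[attno >> 3] & (1 << (attno & 7))) == 0;
}

int
main(void)
{
    /* a 4-column tuple: columns 0-2 present, column 3 "dropped" to NULL */
    unsigned char bitmap[1] = { 0x07 };     /* binary 00000111 */

    for (int attno = 0; attno < 4; attno++)
        printf("column %d: %s\n", attno,
               column_is_null(bitmap, attno) ? "NULL (no bytes stored)"
                                             : "present");
    return 0;
}

So a background process that "drops" a column by nulling it out would shrink each tuple to just its remaining data plus a cleared bit per dropped value.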
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> To a certain extent I agree. I have definitely seen times where I have\n> spent hours and hours and hours of coding doing something that a core\n> developer can do in no time, but just isn't inclined to do.\n\nWell, you know, there is some method in our madness. We'd like to see\nmore people develop the skills to work on Postgres, and the above is how\nyou do it. (How do you think the core developers learned?) If we did\nall the \"easy\" stuff because it was easy, there'd be no appropriate\nprojects for new developers to tackle.\n\nWhich is not to say that DROP COLUMN is easy; it's not.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Feb 2002 09:57:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Feature enhancement request : use of libgda in "
},
{
"msg_contents": "I'm new to the list but I'm going to speak up anyways. Being a core \ndeveloper on several other projects, I feel that it's important to point \nout that both comments are valid here. As a core developer, I certainly \ndon't want to implement seemingly lessor features when more pressing \nissues are at hand. At the same time, I would like to see user demand \nmet and have some of the other developers lend a hand while polishing \ntheir knowledge on the project in general. What I've found especially \nuseful has been to tutor and guide (okay, hand-hold) newer/younger \ndevelopers to my projects so that their abilities are quickly \ncomplimented. I find that using IRC or even other IM technology can go \na long way toward providing support for would-be developers. Especially \nfor projects of this complexity. I find that this helps well beyond \nthat of a mailing list as people tend to be more timid in a public \nforum. After all, it's well understood that a degree of p2p interaction \nis often very helpful and tends to be even more so as the complexity of \nthe topic grows.\n\nTutoring can not only allow developers that are less intimate with the \ncode become more useful but help ensure the effort they put forward is \nnot only accepted but implemented in an ideal manner. This is a win for \nthe developers and the project as a whole. I find it also helps build a \nlevel of trust with future submissions from the developer in question. \nOf course, it also helps build retention with newer developers as it \nmore quickly allows them to feel like they are making a difference. A \nkey ingredient for any developer that is to stay with any project for \nthe long haul.\n\nIn fact, I'm happy this came up as I recently emailed a core developer \nasking for places to start as well as any preferred documentation to \nstart with. Basically I was told read the code and go read the docs. \nWhich is exactly where I was before I emailed him. This is not to say \nthat I wasn't happy to have him reply but his response pretty much \nprovided no value and added nothing beyond what common sense tells you. \n Wouldn't it be more helpful to point would-be developers at a specific \nsection of code telling them why they'd want to start there and where \nany specific documentation is that may be of value?\n\nNow, I'm not saying we should move away from the mailing list, rather, \nI'm saying that the core developers way want to reconsider how some \nrequests for help are answered and maybe even consider other forms of \ncomplimentary communication. Doesn't a hour of a core developers time \nin trade for multiple increase in productivity of another developer seem \nlike a good trade?\n\nJust some food for thought.\n\nGreg\n\n\n\nTom Lane wrote:\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> \n>>To a certain extent I agree. I have definitely seen times where I have\n>>spent hours and hours and hours of coding doing something that a core\n>>developer can do in no time, but just isn't inclined to do.\n>>\n> \n> Well, you know, there is some method in our madness. We'd like to see\n> more people develop the skills to work on Postgres, and the above is how\n> you do it. (How do you think the core developers learned?) 
If we did\n> all the \"easy\" stuff because it was easy, there'd be no appropriate\n> projects for new developers to tackle.\n> \n> Which is not to say that DROP COLUMN is easy; it's not.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n\n\n",
"msg_date": "Tue, 12 Feb 2002 09:54:04 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Feature enhancement request : use of libgda in"
},
{
"msg_contents": "[2002-02-12 09:54] Greg Copeland said:\n\nPlease understand that I am a wannabe-developer at the bottom of\na big learning curve when reading my comments.\n\n| I'm new to the list but I'm going to speak up anyways. Being a core \n| developer on several other projects, I feel that it's important to point \n| out that both comments are valid here. As a core developer, I certainly \n| don't want to implement seemingly lessor features when more pressing \n| issues are at hand. At the same time, I would like to see user demand \n| met and have some of the other developers lend a hand while polishing \n| their knowledge on the project in general. \n\nI think the problem is the perception of \"lesser\" features. What\nan outsider may see as a little problem, may infact be a large \nproblem that cannot be suitably solved at the present time, or \nrequire other seemingly-unrelated infrastructure to accomplish.\n_Much_ of what some core developers work on is totally orthogonal\nto anything I'd be able to work on right now.\n\n| What I've found especially \n| useful has been to tutor and guide (okay, hand-hold) newer/younger \n| developers to my projects so that their abilities are quickly \n| complimented.\n\nI can assure you that if you show that you are putting forth effort,\nthere will be a developer who can and will help you when you need it.\nThis means you _will_ spend 10 times as long on a problem than the\ndeveloper who's helping. This is the way it should be. The biggest\nhurdle to postgres development is the size/complexity of the code,\nand there is only one way to overcome that; dedication, time and\nexpertise -- things that all of the core developers have invested\nand earned.\n\n| I find that using IRC or even other IM technology can go \n| a long way toward providing support for would-be developers. Especially \n| for projects of this complexity. \n\nI agree that it seems like it would be nice to get a quick answer \nfor a 'simple' problem, but you miss out on all the non-answers,\nwhich increase familiarity with the codebase. I do appreciate\nbeing pointed in the right direction when I'm wandering around\nthe wrong area, and I think that is about all that should be done.\n\n| I find that this helps well beyond \n| that of a mailing list as people tend to be more timid in a public \n| forum.\n\nAll I can suggest is to suck up the timidity. I know I've made a\nmonkey of myself on a few occasions, and I'm sure I will again until\nI learn what I need. This is part of the learning process. Also, \nhiding valuable communication on private channels does nothing to \ninform new would-be-developers of past questions/problems; not using\nthe email archives when in 'idea' mode is a sure sign that the\nwould-be-developer needs to learn to use those archives -- a sin\nI've been guilty of :-(\n\n| After all, it's well understood that a degree of p2p interaction \n| is often very helpful and tends to be even more so as the complexity of \n| the topic grows.\n\nI agree that a public, archived, irc might be useful, but you have \nto take into consideration the fact that, at least for me, this\nproject requires prolonged code gazing sessions, which would only\nbe interrupted by \"instant\" communication. 
I, and maybe the real\ndevelopers, like the fact that email can be consumed /after/ a problem\nis investigated/solved.\n\n| Tutoring can not only allow developers that are less intimate with the \n| code become more useful but help ensure the effort they put forward is \n| not only accepted but implemented in an ideal manner. This is a win for \n| the developers and the project as a whole. I find it also helps build a \n| level of trust with future submissions from the developer in question. \n| Of course, it also helps build retention with newer developers as it \n| more quickly allows them to feel like they are making a difference. A \n| key ingredient for any developer that is to stay with any project for \n| the long haul.\n\nThis already happens. There is little hand-holding, but if you \nshow that you are standing, someone will likely help you walk.\nI have never seen a more helpful, dedicated, intelligent developer\ngroup than this one -- and I have seen a few. For this reason alone,\nthe postgresql project will flourish when others wither.\n\n| Wouldn't it be more helpful to point would-be developers at a specific \n| section of code telling them why they'd want to start there and where \n| any specific documentation is that may be of value?\n\nI agree, a quick-start guide might be helpful, but given the complexity\nof this project, a quick-start guide might be more maintenance than\nit is worth. In all honesty, it took me about a month of weekends\nto get my head enough into the code that I could find my way around.\nIf a potential contributor is not willing to show that amount of\ninitiative, why should the core group think they'll have sufficient\ninterest to maintain/support any code of theirs that gets into the\ncodebase?\n\n| Now, I'm not saying we should move away from the mailing list, rather, \n| I'm saying that the core developers way want to reconsider how some \n| requests for help are answered and maybe even consider other forms of \n| complimentary communication. Doesn't a hour of a core developers time \n| in trade for multiple increase in productivity of another developer seem \n| like a good trade?\n\nAn hour of core-developer time might allow you to _not_ learn important\nother things that you'll need later.\n\nmy $.02\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n",
"msg_date": "Tue, 12 Feb 2002 12:18:53 -0500",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Feature enhancement request : use of libgda in"
},
{
"msg_contents": "On Tue, Feb 12, 2002 at 09:54:04AM -0600, Greg Copeland wrote:\n> I'm new to the list but I'm going to speak up anyways. Being a core \n> developer on several other projects, I feel that it's important to point \n\nWelcome. It might have served you better to read some of the archives,\nbefore judging how this community and it's core developers interact.\n\n> out that both comments are valid here. As a core developer, I certainly \n> don't want to implement seemingly lessor features when more pressing \n> issues are at hand. At the same time, I would like to see user demand \n> met and have some of the other developers lend a hand while polishing \n> their knowledge on the project in general. What I've found especially \n> useful has been to tutor and guide (okay, hand-hold) newer/younger \n> developers to my projects so that their abilities are quickly \n> complimented. I find that using IRC or even other IM technology can go \n> a long way toward providing support for would-be developers. Especially \n> for projects of this complexity. I find that this helps well beyond \n> that of a mailing list as people tend to be more timid in a public \n> forum. After all, it's well understood that a degree of p2p interaction \n> is often very helpful and tends to be even more so as the complexity of \n> the topic grows.\n\nWell, the advantage of the mailing list is that it _is_ a (semi) public\nforum: the core developer's time spent answering questions gets multiplied\nby the number of potenetial developers listening. And there are archives!\n\n<snip benefits of tutoring>\n\nWell, if you go check the archives (there's that word again) you'll see\nthat the core developers, and Tom Lane in particular, do a _lot_ of this\nkind of tutoring. Both at the initial stage of chosing where to start,\nand later, with good feedback of proposed patches.\n\n<tale of trying to directly contact a developer>\n\nwell, in this community, this happens on the mailing list. If you're\nshy about posting to a list, you won't get along here, anyway. Open\ndevelopment, open communication.\n\n> \n> Now, I'm not saying we should move away from the mailing list, rather, \n> I'm saying that the core developers way want to reconsider how some \n> requests for help are answered and maybe even consider other forms of \n> complimentary communication. Doesn't a hour of a core developers time \n> in trade for multiple increase in productivity of another developer seem \n> like a good trade?\n\nIsn't that hour more likely to actually get multiplied if it's spent\nresponding on the list, where multiple potential coders are listening?\n\nAnd your more likely to get an answer from _some_ core developer if you\ncontact them _all_, via the lists. It's a bit rude to go looking for help,\nand _insisiting_ on personal service: either direct email or (worse) IRC,\nwhich demands _realtime_ interaction. If the expert suggests changing the\nmode of interaction, that's fine.\n\nRoss\n-- \nRoss Reedstrom, Ph.D. reedstrm@rice.edu\nExecutive Director phone: 713-348-6166\nGulf Coast Consortium for Bioinformatics fax: 713-348-6182\nRice University MS-39\nHouston, TX 77005\n",
"msg_date": "Tue, 12 Feb 2002 11:33:15 -0600",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Feature enhancement request : use of libgda in"
},
{
"msg_contents": "\nWow! 100% agree. I have felt this way for a long time, that IRC and\nAIM are valuable technologies for helping new developers along. I am\ncurrently working with someone in India who wants to work on the buffer\nmanager. I worked with him via AIM on the TODO list items, and he will\nlook over the code and make a posting to hackers soon to describe the\nitem he wants to tackle. (In fact, I am chatting to him right now and \nhe has already posted to hackers list.)\n\nThe ability to ask a few questions in real-time is invaluable for\nimproving the output of contributors. Getting people past those few\nstumbling blocks makes a huge difference.\n\nI am bmomjian on the #postgresql channel of EfNet, and bmomjian on AOL\nchat (AIM). I look forward to helping anyone who needs it. I used to\nhave long phone conversations over code issues, and I can do that too.\n\n---------------------------------------------------------------------------\n\nGreg Copeland wrote:\n> I'm new to the list but I'm going to speak up anyways. Being a core \n> developer on several other projects, I feel that it's important to point \n> out that both comments are valid here. As a core developer, I certainly \n> don't want to implement seemingly lessor features when more pressing \n> issues are at hand. At the same time, I would like to see user demand \n> met and have some of the other developers lend a hand while polishing \n> their knowledge on the project in general. What I've found especially \n> useful has been to tutor and guide (okay, hand-hold) newer/younger \n> developers to my projects so that their abilities are quickly \n> complimented. I find that using IRC or even other IM technology can go \n> a long way toward providing support for would-be developers. Especially \n> for projects of this complexity. I find that this helps well beyond \n> that of a mailing list as people tend to be more timid in a public \n> forum. After all, it's well understood that a degree of p2p interaction \n> is often very helpful and tends to be even more so as the complexity of \n> the topic grows.\n> \n> Tutoring can not only allow developers that are less intimate with the \n> code become more useful but help ensure the effort they put forward is \n> not only accepted but implemented in an ideal manner. This is a win for \n> the developers and the project as a whole. I find it also helps build a \n> level of trust with future submissions from the developer in question. \n> Of course, it also helps build retention with newer developers as it \n> more quickly allows them to feel like they are making a difference. A \n> key ingredient for any developer that is to stay with any project for \n> the long haul.\n> \n> In fact, I'm happy this came up as I recently emailed a core developer \n> asking for places to start as well as any preferred documentation to \n> start with. Basically I was told read the code and go read the docs. \n> Which is exactly where I was before I emailed him. This is not to say \n> that I wasn't happy to have him reply but his response pretty much \n> provided no value and added nothing beyond what common sense tells you. 
\n> Wouldn't it be more helpful to point would-be developers at a specific \n> section of code telling them why they'd want to start there and where \n> any specific documentation is that may be of value?\n> \n> Now, I'm not saying we should move away from the mailing list, rather, \n> I'm saying that the core developers way want to reconsider how some \n> requests for help are answered and maybe even consider other forms of \n> complimentary communication. Doesn't a hour of a core developers time \n> in trade for multiple increase in productivity of another developer seem \n> like a good trade?\n> \n> Just some food for thought.\n> \n> Greg\n> \n> \n> \n> Tom Lane wrote:\n> > \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > \n> >>To a certain extent I agree. I have definitely seen times where I have\n> >>spent hours and hours and hours of coding doing something that a core\n> >>developer can do in no time, but just isn't inclined to do.\n> >>\n> > \n> > Well, you know, there is some method in our madness. We'd like to see\n> > more people develop the skills to work on Postgres, and the above is how\n> > you do it. (How do you think the core developers learned?) If we did\n> > all the \"easy\" stuff because it was easy, there'd be no appropriate\n> > projects for new developers to tackle.\n> > \n> > Which is not to say that DROP COLUMN is easy; it's not.\n> > \n> > \t\t\tregards, tom lane\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> > \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Feb 2002 12:39:19 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Feature enhancement request : use of libgda"
},
{
"msg_contents": "Greg Copeland <greg@CopelandConsulting.Net> writes:\n> [ several useful comments snipped ]\n\n> In fact, I'm happy this came up as I recently emailed a core developer \n> asking for places to start as well as any preferred documentation to \n> start with. Basically I was told read the code and go read the docs. \n\nI believe you are complaining about me. In my defense I'll just say\nthat your question was basically \"how does the optimizer work\", which\ncovers rather a lot of territory. I didn't see any more useful answer\nthan the one I gave you, short of writing a book which I was not about\nto do in private email.\n\nAs I said in that mail and will say again, I believe in carrying on that\nsort of discussion in the pgsql-hackers list, where other developers and\nwannabee developers have some chance of benefiting from it. Private mail\nonly teaches one person. As for IRC, personally I hate it: it discourages\ntaking the time for a considered answer, it does not work well for\npeople in vastly different timezones, and it leaves no archive trail\nthat others might learn from later. However, there are other developers\nwho think differently; I believe you can often find Bruce on IRC, for\nexample.\n\nThe PG community is big enough to support multiple interactions, and if\nsome folk want to use IRC I have no objection to it. But I feel that\nthe hub of the development activity is pgsql-hackers. There is nothing\nwrong with asking questions there.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Feb 2002 12:55:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Feature enhancement request : use of libgda in "
},
{
"msg_contents": "> I believe you are complaining about me. In my defense I'll just say\n> that your question was basically \"how does the optimizer work\", which\n> covers rather a lot of territory. I didn't see any more useful answer\n> than the one I gave you, short of writing a book which I was not about\n> to do in private email.\n> \n> As I said in that mail and will say again, I believe in carrying on that\n> sort of discussion in the pgsql-hackers list, where other developers and\n> wannabee developers have some chance of benefiting from it. Private mail\n> only teaches one person. As for IRC, personally I hate it: it discourages\n> taking the time for a considered answer, it does not work well for\n> people in vastly different timezones, and it leaves no archive trail\n> that others might learn from later. However, there are other developers\n> who think differently; I believe you can often find Bruce on IRC, for\n> example.\n> \n> The PG community is big enough to support multiple interactions, and if\n> some folk want to use IRC I have no objection to it. But I feel that\n> the hub of the development activity is pgsql-hackers. There is nothing\n> wrong with asking questions there.\n\nI agree with Tom. General questions are best asked on hackers. IRC/AIM\nis best for \"what does this variable do\" and \"do I need to call elog()\nhere\"; things that can hold you up from getting the job done. The delay\nof email can be too slow for such exchanges and lacks the interactive\nfeel of discussing an issue with someone. So I think both have their\nplace in the PostgreSQL developers support structure.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Feb 2002 13:27:03 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Feature enhancement request : use of libgda"
},
{
"msg_contents": "\n\nRoss J. Reedstrom wrote:\n> On Tue, Feb 12, 2002 at 09:54:04AM -0600, Greg Copeland wrote:\n> \n>>I'm new to the list but I'm going to speak up anyways. Being a core \n>>developer on several other projects, I feel that it's important to point \n>>\n> \n> Welcome. It might have served you better to read some of the archives,\n> before judging how this community and it's core developers interact.\n> \n\n\nSorry. Didn't mean for it to sound like I was judging the group. I was \nonly trying to state observations as I've seen them so far and offer a \nsuggestion. I thought I was being constructive.\n\n\n> \n>>out that both comments are valid here. As a core developer, I certainly \n\n\n[snip]\n\n>>forum. After all, it's well understood that a degree of p2p interaction \n>>is often very helpful and tends to be even more so as the complexity of \n>>the topic grows.\n>>\n> \n> Well, the advantage of the mailing list is that it _is_ a (semi) public\n> forum: the core developer's time spent answering questions gets multiplied\n> by the number of potenetial developers listening. And there are archives!\n> \n> <snip benefits of tutoring>\n> \n> Well, if you go check the archives (there's that word again) you'll see\n> that the core developers, and Tom Lane in particular, do a _lot_ of this\n> kind of tutoring. Both at the initial stage of chosing where to start,\n> and later, with good feedback of proposed patches.\n>\n\nActually, I have been reading the archives ALOT! Since they are not \nsearchable (searches for me result in nothing happening) it greatly \nlimits the accessibility and thusly the usability of them. As a result \nI've been having to manually browse and read various threads to see if \nthey pertain to anything I'm interested in. Sorry.\n\n\n[snip]\n\n> \n> \n>>Now, I'm not saying we should move away from the mailing list, rather, \n>>I'm saying that the core developers way want to reconsider how some \n>>requests for help are answered and maybe even consider other forms of \n>>complimentary communication. Doesn't a hour of a core developers time \n>>in trade for multiple increase in productivity of another developer seem \n>>like a good trade?\n>>\n >\n > Isn't that hour more likely to actually get multiplied if it's spent\n > responding on the list, where multiple potential coders are listening?\n >\n\n\nYes and no, depending on the topic and complexity at hand. Not to \nmention complex conversations that can take weeks to address in email \ncan often be addressed in minutes via more interactive mechanisms.\n\n\n> And your more likely to get an answer from _some_ core developer if you\n> contact them _all_, via the lists. It's a bit rude to go looking for help,\n> and _insisiting_ on personal service: either direct email or (worse) IRC,\n> which demands _realtime_ interaction. If the expert suggests changing the\n> mode of interaction, that's fine.\n\n\nHmmm. I didn't think I was asking for personalized service. In my mind \nI thought I was offering a possible suggestion to a common issue \n(supported by the archives, yes that word again, and a timely posting) \nwhich was seemingly not well received.\n\nI'll go back and hide in my cave now. :)\n\nGreg\n\n\n",
"msg_date": "Tue, 12 Feb 2002 17:11:04 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Feature enhancement request : use of libgda in"
},
{
"msg_contents": "On Tue, Feb 12, 2002 at 05:11:04PM -0600, Greg Copeland wrote:\n> \n> \n> Ross J. Reedstrom wrote:\n> >On Tue, Feb 12, 2002 at 09:54:04AM -0600, Greg Copeland wrote:\n> >\n> >>I'm new to the list but I'm going to speak up anyways. Being a core \n> >>developer on several other projects, I feel that it's important to point \n> >>\n> >\n> >Welcome. It might have served you better to read some of the archives,\n> >before judging how this community and it's core developers interact.\n> >\n> \n> \n> Sorry. Didn't mean for it to sound like I was judging the group. I was \n> only trying to state observations as I've seen them so far and offer a \n> suggestion. I thought I was being constructive.\n> \n\nHey, my fault - I came off sounding awfully harsh, myself. I guess I was\ntriggered by your opening disclaimer sating 'I haven't been here long, but...'\n\nThere's been more than one occurance of people showing up and telling the\ncommunity how it should run, without seeing how it has run in the past.\n\n> \n> Actually, I have been reading the archives ALOT! Since they are not \n> searchable (searches for me result in nothing happening) it greatly \n> limits the accessibility and thusly the usability of them. As a result \n> I've been having to manually browse and read various threads to see if \n> they pertain to anything I'm interested in. Sorry.\n\nYeah, that's not fun. There have been a number of administrative problems\nrecently with the archive search engine - looks pretty bad when a DB\nproject has DB backend problems, doesn't it? Sort of a cobbler's children\nrunning barefoot problem. It's being addressed, AFAIR.\n\n> I'll go back and hide in my cave now. :)\n\nNo, don't do that! Play out in public, with the rest of us. ;-)\n\nRoss\n",
"msg_date": "Tue, 12 Feb 2002 17:39:38 -0600",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Feature enhancement request : use of libgda in"
},
{
"msg_contents": "Greg Copeland <greg@copelandconsulting.net> writes:\n> Actually, I have been reading the archives ALOT! Since they are not \n> searchable (searches for me result in nothing happening) it greatly \n> limits the accessibility and thusly the usability of them.\n\nYeah, both fts.postgresql.org and archives.postgresql.org searches have\nbeen broken for several weeks now. Marc, can't we *please* do something\nabout that?\n\nIn the meantime, Greg, most of the key PG mail lists are archived and\nsearchable at geocrawler, http://www.geocrawler.com/lists/3/Databases/\nThere are probably other archive services that I don't know about.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Feb 2002 18:42:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Mail archives problems"
},
{
"msg_contents": "> In the meantime, Greg, most of the key PG mail lists are archived and\n> searchable at geocrawler, http://www.geocrawler.com/lists/3/Databases/\n> There are probably other archive services that I don't know about.\n\nmarc.theaimsgroup.com is my favorite.\n\ngreetings\nBjoern\n\n\n",
"msg_date": "Wed, 13 Feb 2002 01:11:11 +0100",
"msg_from": "\"Bjoern Metzdorf\" <bm@turtle-entertainment.de>",
"msg_from_op": false,
"msg_subject": "Re: Mail archives problems"
},
{
"msg_contents": "There's also:\n\nhttp://groups.google.com/groups?hl=en&group=comp.databases.postgresql\n\n(be sure to check the \"search this only\" button before searching)\n\nOn Tue, 12 Feb 2002, Tom Lane wrote:\n\n> Greg Copeland <greg@copelandconsulting.net> writes:\n> > Actually, I have been reading the archives ALOT! Since they are not\n> > searchable (searches for me result in nothing happening) it greatly\n> > limits the accessibility and thusly the usability of them.\n>\n> Yeah, both fts.postgresql.org and archives.postgresql.org searches have\n> been broken for several weeks now. Marc, can't we *please* do something\n> about that?\n>\n> In the meantime, Greg, most of the key PG mail lists are archived and\n> searchable at geocrawler, http://www.geocrawler.com/lists/3/Databases/\n> There are probably other archive services that I don't know about.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Tue, 12 Feb 2002 16:38:39 -0800 (PST)",
"msg_from": "Philip Hallstrom <philip@adhesivemedia.com>",
"msg_from_op": false,
"msg_subject": "Re: Mail archives problems"
},
{
"msg_contents": "On Tue, 12 Feb 2002, Tom Lane wrote:\n\n> Greg Copeland <greg@copelandconsulting.net> writes:\n> > Actually, I have been reading the archives ALOT! Since they are not\n> > searchable (searches for me result in nothing happening) it greatly\n> > limits the accessibility and thusly the usability of them.\n>\n> Yeah, both fts.postgresql.org and archives.postgresql.org searches have\n> been broken for several weeks now. Marc, can't we *please* do something\n> about that?\n\nI used archives.postgresql.org just the other day and found what I was\nlooking for with it ...\n\n",
"msg_date": "Tue, 12 Feb 2002 20:42:11 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Mail archives problems"
},
{
"msg_contents": "I'm sorry this is off topic but I've been looking for a long time.\n\nI'm looking for open source archiving software.\nTo run a website like the ones mentioned.\n\nThanks\n\nAt 04:38 PM 2/12/2002 -0800, you wrote:\n>There's also:\n>\n>http://groups.google.com/groups?hl=en&group=comp.databases.postgresql\n>\n>(be sure to check the \"search this only\" button before searching)\n>\n>On Tue, 12 Feb 2002, Tom Lane wrote:\n>\n> > Greg Copeland <greg@copelandconsulting.net> writes:\n> > > Actually, I have been reading the archives ALOT! Since they are not\n> > > searchable (searches for me result in nothing happening) it greatly\n> > > limits the accessibility and thusly the usability of them.\n> >\n> > Yeah, both fts.postgresql.org and archives.postgresql.org searches have\n> > been broken for several weeks now. Marc, can't we *please* do something\n> > about that?\n> >\n> > In the meantime, Greg, most of the key PG mail lists are archived and\n> > searchable at geocrawler, http://www.geocrawler.com/lists/3/Databases/\n> > There are probably other archive services that I don't know about.\n> >\n> > regards, tom lane\n\n",
"msg_date": "Tue, 12 Feb 2002 17:05:52 -0800",
"msg_from": "Tigran <tigran@usanogh.com>",
"msg_from_op": false,
"msg_subject": "Re: Mail archives problems"
},
{
"msg_contents": "On Tuesday 12 February 2002 07:42 pm, Marc G. Fournier wrote:\n> On Tue, 12 Feb 2002, Tom Lane wrote:\n> > Yeah, both fts.postgresql.org and archives.postgresql.org searches have\n> > been broken for several weeks now. Marc, can't we *please* do something\n> > about that?\n\n> I used archives.postgresql.org just the other day and found what I was\n> looking for with it ...\n\nIt just takes awhile to do the search. I just did a search on 'RPM' in \n'hackers' and it took 248.18 seconds to return 1207 matches.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 12 Feb 2002 20:21:14 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Mail archives problems"
},
{
"msg_contents": "Hi,\n\nIs anybody on this list involved in the courier, or shipping industry? \n\nPlease reply off list.\n\nDave\n\n",
"msg_date": "Tue, 19 Feb 2002 10:13:46 -0500",
"msg_from": "\"Dave Cramer\" <Dave@micro-automation.net>",
"msg_from_op": false,
"msg_subject": "Courier Industry packages"
},
{
"msg_contents": "Greg Copeland wrote:\n> I'm new to the list but I'm going to speak up anyways. Being a core \n> developer on several other projects, I feel that it's important to point \n> out that both comments are valid here. As a core developer, I certainly \n> don't want to implement seemingly lessor features when more pressing \n> issues are at hand. At the same time, I would like to see user demand \n> met and have some of the other developers lend a hand while polishing \n> their knowledge on the project in general. What I've found especially \n> useful has been to tutor and guide (okay, hand-hold) newer/younger \n> developers to my projects so that their abilities are quickly \n> complimented. I find that using IRC or even other IM technology can go \n> a long way toward providing support for would-be developers. Especially \n> for projects of this complexity. I find that this helps well beyond \n> that of a mailing list as people tend to be more timid in a public \n> forum. After all, it's well understood that a degree of p2p interaction \n> is often very helpful and tends to be even more so as the complexity of \n> the topic grows.\n\nI should add that I am now regularly on AIM, Yahoo, MSN, and ICQ chat as\nbmomjian if any coders need assistance. If there are other chat\nprotocols people use, let me know.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 21 Feb 2002 23:24:30 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Feature enhancement request : use of libgda"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm pleased to announce the release of pgbash-2.4.\nhttp://www.psn.co.jp/PostgreSQL/pgbash/index-e.html\n\n\n# What's pgbash\n\n Pgbash is a BASH shell which includes the functionality of \naccessing the database for PostgreSQL. It is possible to execute \nSQL by making a shell script in the batch processing, and execute \nSQL directly in the interactive environment.\n\n# Change Logs\n\n 1. Adjust to PostgreSQL-7.0/7.1/7.2. \n 2. Fix a bug of Ctrl+C to cancell query. \n 3. Fix a bug of Pgbash original copy command when using delimiter. \n 4. Fix a bug of parsing ';' between left and right parenthesis \n in SQL. (reported by Dias Badekas) \n 5. Change TRUE/FALSE to ON/OFF value at the 'set OPTION' statement. \n 6. Add 'set OPTION_HEADERTR/HEADERTH/BODYTAG/INPUTTAG/INPUTSIZE'\n commands. \n 7. Add the functionality of 'EXEC_SQL_PREPARE' shell variable. \n 8. Add the functionality of 'SELECT INTO :host_var' clause. \n 9. Add the functionality of reading the /etc/pgbashrc file \n if ~/.pgbashrc file does not exist. \n10. Add the functionality which displays line_feed/tab/carriage_\n return as '\\n'/'\\t'/'\\r'. \n11. Add 'pgbash_description' table for large_object functions. \n12. Modify output format for plain text table (like psql). \n13. Add \"IDENTIFIED BY | USING | /\" at the password syntax of the\n CONNECT statement. \n14. Add the client_encoding in the connection table list('?m' command)\n\n\n-- \nregards,\nSAKAIDA Masaaki\n# Sorry, I am not good at English\n\n\n",
"msg_date": "Mon, 11 Feb 2002 21:10:36 +0900",
"msg_from": "SAKAIDA <sakaida@psn.co.jp>",
"msg_from_op": true,
"msg_subject": "pgbash-2.4 released"
},
{
"msg_contents": "SAKAIDA <sakaida@psn.co.jp> wrote:\n\n> \n> I'm pleased to announce the release of pgbash-2.4.\n> http://www.psn.co.jp/PostgreSQL/pgbash/index-e.html\n\nDoes it support 8 bit characters, like the '�' in my surname?\n\nIn general: the ISO Latin 1 character set. Psql doesn't seem to do this\nautomatically.\n-- \nPer Erik R�nne\nFrederikssundsvej 308B, DK-2700 Br�nsh�j, DENMARK, EUROPEAN UNION\nTlf. + fax: +38 89 00 16, mobil +45 28 23 09 92.\nHomepage http://www.diku.dk/students/xerxes\n",
"msg_date": "Sat, 16 Feb 2002 07:49:04 +0100",
"msg_from": "serse@diku.dk (Per Erik Ronne)",
"msg_from_op": false,
"msg_subject": "Re: pgbash-2.4 released"
},
{
"msg_contents": "\nserse@diku.dk (Per Erik Ronne) wrote:\n>\n> SAKAIDA <sakaida@psn.co.jp> wrote: \n> > I'm pleased to announce the release of pgbash-2.4.\n> > http://www.psn.co.jp/PostgreSQL/pgbash/index-e.html\n> \n> Does it support 8 bit characters, like the '\u000ex\u000f' in my surname?\n> \n> In general: the ISO Latin 1 character set. Psql doesn't seem to do this\n> automatically.\n\nPlease type as follows.\n\n% /bin/bash ( or /usr/local/bin/pgbash ) \n> ch='YOUR 8 BIT CHARACTERS'\n> echo $ch\n\nIf your 8 bit characters are displayed correctly, then \nPgbash can support 8 bit characters. If not, you should \napply MULTIBYTE patch for the readline or the bash parser.\n\nOf course, PostgreSQL has to be configured with multibyte \nas you know, and it is required that your terminal environment \ncan use MULTIBYTE.\n\n--\nSAKAIDA Masaaki \n\n\n",
"msg_date": "Fri, 22 Feb 2002 11:16:14 +0900",
"msg_from": "SAKAIDA Masaaki <sakaida@psn.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pgbash-2.4 released"
}
] |
[
{
"msg_contents": "\nOr is this in the planner? Same query, same tables, one with seqscan\nenabled, one with it disabled (btw, whomever added the ANALYZE to EXPLAIN,\npure genius):\n\niwantu=# explain analyze SELECT poc.uid,headline,pictures,voice FROM orient poc JOIN clubs c ON (poc.uid = c.uid AND c.club = 3 );\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=0.00..1791417.52 rows=26566 width=72) (actual time=0.55..3345.13 rows=23510 loops=1)\n -> Index Scan using clubs_idx on clubs c (cost=0.00..1695474.62 rows=26569 width=64) (actual time=0.48..1936.95 rows=23510 loops=1)\n -> Index Scan using orient_pkey on orient poc (cost=0.00..3.60 rows=1 width=8) (actual time=0.03..0.03 rows=1 loops=23510)\nTotal runtime: 3474.93 msec\n\niwantu=# set enable_seqscan=true;\niwantu=# explain analyze SELECT poc.uid,headline,pictures,voice FROM orient poc JOIN clubs c ON (poc.uid = c.uid AND c.club = 3 );\nNOTICE: QUERY PLAN:\n\nHash Join (cost=31693.56..47033.86 rows=26566 width=72) (actual time=1044.41..11450.85 rows=23510 loops=1)\n -> Seq Scan on orient poc (cost=0.00..7718.69 rows=485969 width=8) (actual time=0.01..3484.00 rows=485969 loops=1)\n -> Hash (cost=31627.14..31627.14 rows=26569 width=64) (actual time=1034.14..1034.14 rows=0 loops=1)\n -> Seq Scan on clubs c (cost=0.00..31627.14 rows=26569 width=64) (actual time=593.80..836.72 rows=23510 loops=1)\nTotal runtime: 11583.36 msec\n\n\n\n",
"msg_date": "Mon, 11 Feb 2002 10:23:52 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Optimizer(?) off by factor of 3 ... ?"
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> -> Index Scan using clubs_idx on clubs c (cost=0.00..1695474.62 rows=26569 width=64) (actual time=0.48..1936.95 rows=23510 loops=1)\n\nThis indexscan cost estimate is completely out of whack, it would seem.\n\nHave you done an ANALYZE on this table recently? If so, what do you get\nfrom\n\tselect * from pg_stats where tablename = 'clubs';\n\tselect * from pg_class where relname = 'clubs';\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Feb 2002 11:26:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer(?) off by factor of 3 ... ? "
},
{
"msg_contents": "On Mon, 11 Feb 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > -> Index Scan using clubs_idx on clubs c (cost=0.00..1695474.62 rows=26569 width=64) (actual time=0.48..1936.95 rows=23510 loops=1)\n>\n> This indexscan cost estimate is completely out of whack, it would seem.\n>\n> Have you done an ANALYZE on this table recently? If so, what do you get\n\nYup, been doing ANALYZEs just to make sure that I did them, so have done\nseveral since this database/table was populated ...\n\n> from\n> \tselect * from pg_stats where tablename = 'clubs';\n> \tselect * from pg_class where relname = 'clubs';\n\n tablename | attname | null_frac | avg_width | n_distinct | most_common_vals | most_common_freqs | !\n histogram_bounds !\n | correlation\n-----------+---------------+-----------+-----------+------------+-----------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------!\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------!\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------\n clubs | uid | 0 | 8 | -1 | | | {13,56847,365368,432334,482114,538111,627969,683193,738091,793220,841391} !\n !\n | 0.596839\n clubs | club | 0 | 4 | 3 | {2,1,3} | {0.754,0.195333,0.0506667} | !\n !\n | 1\n clubs | hide | 0 | 2 | 2 | {1,0} | {0.950667,0.0493333} | !\n !\n | 0.810325\n clubs | last_update | 0 | 8 | 7731 | {1008005872,1009714469,1009688701,1011503100,1011330301,1009256700,1009429504,1011848704,1009885559,1010207101} | {0.735,0.004,0.00266667,0.00233333,0.002,0.00166667,0.00166667,0.00166667,0.00133333,0.00133333} | 
{1007584989,1008125462,1008569460,1009199787,1009651136,1010099882,1010466300,1010887647,1011224456,1011537512,1011853900} !\n !\n | 0.691723\n clubs | category | 0 | 30 | 7 | {\"{1,0,0,0,5}\",\"{0,0,0,4,0}\",\"{0,0,0,0,0}\",\"{0,0,0,4,5}\",\"{1,2,0,0,0}\",\"{1,2,3,4,5}\",\"{1,2,3,4,0}\"} | {0.316333,0.268667,0.169,0.139333,0.056,0.0256667,0.025} | !\n !\n |\n clubs | club_interest | 0 | 44 | 1 | {\"{0,0,0,0,0,0,0,0,0,0,0,0}\"} | {1} | !\n !\n |\n clubs | headline | 0 | 28 | 27663 | {\"\",\"looking for fun\",Hello,hi,Hi,hello,Looking,\"Looking for fun\",hey,\"Hello Ladies\"} | {0.103,0.00566667,0.004,0.004,0.00366667,0.00333333,0.00166667,0.00166667,0.00166667,0.00133333} | {\" ANGEL EYES\",\"Cum get sum\",\"Hell-o Iam hear,take me away.\",\"IF YOU LIKE A LAUGH IM YOU MAN THANX\",\"Looking for a man with a Heart\",\"Nice guy just looking to meet new people\",\"Tall Dark & Handsome\",\"come and say hi\",\"im 6 180 brown brown luv sex makeing girls feel good\",\"nice guy looking for a loving relationship with no games\",\"� looking good friend\"} !\n !\n | -0.0150456\n clubs | my_desc | 0 | 230 | 29320 | {\"\",\" \"} | {0.103,0.001} | {\"\n<BR>\n<BR> english speaking man searching for partner in norway near oslo\n<BR>\n<BR>\n<BR> are single and miss you\n<BR>\n<BR>\n<BR> lets share the lonely nights \",\"Drop me a line . U will not regret coz u just meet a chance of a lifetime ...Still thinking ?still wondering ? stop ! Write to me now and i'll get back to u !\",\"I am 35. I love good sex. I enjoy candle light, showers, baths, oil massages, give and recieve. I don't need intercourse to be satisfied. I think you need to use your imagination if you don't.\",\"I am intelligent,well-read,kind, emotional, have a good sense of humor. I dislike egoism,pettiness and dishonesty.\",\"I'm 23 in Notting Hill, looking for a woman older or younger for an int!\nimate secret encounter. I'm 5'11, dark hair, blue eyes, slim build. I regularly work out in the gym, and am ready to have a work out in you.\",\"Im a hard working man who would like a very\n<BR>sexy attractive women to share good times\n<BR>with here in South Florida. Walks by the\n<BR>Ocean, fine dining and just hanging out \n<BR>together would be nice.\",\"Not into ego trips or head games. I'm in a comfortable place in my life where I don't have to prove myself personally or professionally. I just desire to enjoy life and its many different venues.\",\"blue eye red head - thus the name foxxy ... 
Looking for you to share whatever your hearts desire might be ...\",\"i am single libyan man , friendship means more to me , so i long to do good and wide friendship , i wish all over the world , so if you want to talk with me this is my email\n<BR>abdul_zr@yahoo.com\",\"looking for fun and cassual sex with serious ladies 'your pleasure is mine also'if you want to get crazy and have real fun in a!\n safe manner call me \",�Ըеط����Ǽ�൫���Dz�С�˿�û�������ɷ��������������п��ŭû����Ů���ţ������Խ�������ò������nmcjdkf��½�ܿɿ��ˣ���ŵ�ź�ڣ������������ӣ���·��Ů�п�ϲ����ŭ�ɷ��ļ�Ŷ���ˣ�Ů�����ƣ�} | -0.0011142\n clubs | ur_desc | 0 | 4 | 1 | {\"\"} | {1} | !\n !\n | 1\n clubs | pictures | 0 | 26 | 2 | {\"{0,0,0}\",\"{1,0,0}\"} | {0.889,0.111} | !\n !\n |\n clubs | voice | 0 | 2 | 2 | {0,1} | {0.999333,0.000666667} | !\n !\n | 0.998169\n(11 rows)\n\n relname | reltype | relowner | relam | relfilenode | relpages | reltuples | reltoastrelid | reltoastidxid | relhasindex | relisshared | relkind | relnatts | relchecks | reltriggers | relukeys | relfkeys | relrefs | relhasoids | relhaspkey | relhasrules | relhassubclass | relacl\n---------+---------+----------+-------+-------------+----------+-----------+---------------+---------------+-------------+-------------+---------+----------+-----------+-------------+----------+----------+---------+------------+------------+-------------+----------------+--------\n clubs | 5535242 | 1003 | 0 | 5535241 | 25552 | 486011 | 5535243 | 0 | t | f | r | 11 | 0 | 0 | 0 | 0 | 0 | t | f | f | f |\n(1 row)\n\n\nAnd just in case it has relevance:\n\niwantu=# \\d clubs_idx\nIndex \"clubs_idx\"\n Column | Type\n--------+---------\n uid | bigint\n club | integer\nbtree\n\n\n",
"msg_date": "Mon, 11 Feb 2002 13:11:02 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Optimizer(?) off by factor of 3 ... ? "
},
{
"msg_contents": "I'm trying to work out how the planner arrived at the numbers you're\nshowing; the hashjoin cost estimate seems a little lower than I'd\nexpect. Are you using nonstandard values for any of the planner cost\nfactors? (cpu_operator_cost, etc) How about sort_mem?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Feb 2002 21:15:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer(?) off by factor of 3 ... ? "
},
{
"msg_contents": "\nthe postgresql.conf file ... for testing, I used 'set enable_seqscan'\nafter the server was live to turn on/off, this is just hte default ...\n\ntcpip_socket = true\nmax_connections = 200 # 1-1024\nport = 5434\nsort_mem = 4024\nshared_buffers = 32768\nfsync = false\nwal_buffers = 32\ndebug_pretty_print = true\nenable_seqscan = false\n\n\nOn Mon, 11 Feb 2002, Tom Lane wrote:\n\n> I'm trying to work out how the planner arrived at the numbers you're\n> showing; the hashjoin cost estimate seems a little lower than I'd\n> expect. Are you using nonstandard values for any of the planner cost\n> factors? (cpu_operator_cost, etc) How about sort_mem?\n>\n> \t\t\tregards, tom lane\n>\n\n",
"msg_date": "Mon, 11 Feb 2002 22:23:03 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Optimizer(?) off by factor of 3 ... ? "
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> [ bogus optimizer choices in 7.2 ]\n\nWell, I guess the good news is we seem to be past the old bugaboo of bad\nstatistics: the estimated row counts are all in the right ballpark. Now\nwe get to have fun with the cost models :-).\n\nIt looks to me like there are a couple of problems here. One is that\nthe default value of effective_cache_size is way too small --- it's set\nat 1000, which is probably silly when you have NBuffers set to 32768.\n(In hindsight maybe we should have expressed it as a multiple of\nNBuffers rather than an absolute size.) You could tweak that with a\npostgresql.conf change, but I'm not sure that that alone will help much.\n\nThe more difficult issue is that nestloops with inner indexscan are\nbeing seriously misestimated. We're computing the cost as though each\niteration of the inner scan were completely independent and being done\nfrom a standing start --- which is wrong, because in practice scans\nafter the first will tend to find buffer cache hits for pages already\nread in by prior scans. You can bet, for example, that the btree\nmetapage and root page aren't going to need to be re-read on each\niteration.\n\nI am thinking that the right way to do this is to cost the entire inner\nindexscan (all iterations put together) as if it were a single\nindexscan, at least for the purposes of applying the Mackert & Lohman\nformula embedded in cost_index. That would give us a more realistic\nresult for the total cost of the main-table accesses driven by the\nindex. Not sure how to adjust the cost estimate for reading the index,\nbut clearly we need to make some adjustment for repeated hits on the\nupper index pages.\n\nThis is probably a bigger change than we can hope to make in 7.2.* ...\n\nBTW, what do you get if you EXPLAIN ANALYZE that orient/clubs join\nwith seqscan enabled and hashjoin disabled? If it's a mergejoin,\nhow about if you also disable mergejoin? It seems to me that a seqscan\non clubs would be a much better way to do the nestloop join than an\nindexscan --- but it's being forced into an indexscan because you\ndisabled seqscan.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Feb 2002 22:27:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer(?) off by factor of 3 ... ? "
},
{
"msg_contents": "On Mon, 11 Feb 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > [ bogus optimizer choices in 7.2 ]\n>\n> Well, I guess the good news is we seem to be past the old bugaboo of bad\n> statistics: the estimated row counts are all in the right ballpark. Now\n> we get to have fun with the cost models :-).\n>\n> It looks to me like there are a couple of problems here. One is that\n> the default value of effective_cache_size is way too small --- it's set\n> at 1000, which is probably silly when you have NBuffers set to 32768.\n> (In hindsight maybe we should have expressed it as a multiple of\n> NBuffers rather than an absolute size.) You could tweak that with a\n> postgresql.conf change, but I'm not sure that that alone will help much.\n>\n> The more difficult issue is that nestloops with inner indexscan are\n> being seriously misestimated. We're computing the cost as though each\n> iteration of the inner scan were completely independent and being done\n> from a standing start --- which is wrong, because in practice scans\n> after the first will tend to find buffer cache hits for pages already\n> read in by prior scans. You can bet, for example, that the btree\n> metapage and root page aren't going to need to be re-read on each\n> iteration.\n>\n> I am thinking that the right way to do this is to cost the entire inner\n> indexscan (all iterations put together) as if it were a single\n> indexscan, at least for the purposes of applying the Mackert & Lohman\n> formula embedded in cost_index. That would give us a more realistic\n> result for the total cost of the main-table accesses driven by the\n> index. Not sure how to adjust the cost estimate for reading the index,\n> but clearly we need to make some adjustment for repeated hits on the\n> upper index pages.\n>\n> This is probably a bigger change than we can hope to make in 7.2.* ...\n>\n> BTW, what do you get if you EXPLAIN ANALYZE that orient/clubs join\n> with seqscan enabled and hashjoin disabled? If it's a mergejoin,\n> how about if you also disable mergejoin? 
It seems to me that a seqscan\n> on clubs would be a much better way to do the nestloop join than an\n> indexscan --- but it's being forced into an indexscan because you\n> disabled seqscan.\n\niwantu=# set enable_seqscan=true;\niwantu=# set enable_hashjoin=false;\niwantu=# explain analyze SELECT o.uid,headline,pictures,voice FROM orient o JOIN clubs c ON (o.uid = c.uid AND c.club = 1 AND ( c.hide ='1' OR c.hide='2' ) AND (o.female) );\nNOTICE: QUERY PLAN:\n\nMerge Join (cost=97750.86..100011.74 rows=78391 width=72) (actual time=17041.33..23771.57 rows=50745 loops=1)\n -> Sort (cost=53412.61..53412.61 rows=422145 width=8) (actual time=12996.56..15563.59 rows=418951 loops=1)\n -> Seq Scan on orient o (cost=0.00..7718.69 rows=422145 width=8) (actual time=0.02..3237.46 rows=418951 loops=1)\n -> Sort (cost=44338.25..44338.25 rows=90251 width=64) (actual time=4044.65..4531.18 rows=76954 loops=1)\n -> Seq Scan on clubs c (cost=0.00..34057.19 rows=90251 width=64) (actual time=0.04..1399.83 rows=76954 loops=1)\nTotal runtime: 24082.76 msec\n\niwantu=# set enable_mergejoin=false;\niwantu=# explain analyze SELECT o.uid,headline,pictures,voice FROM orient o JOIN clubs c ON (o.uid = c.uid AND c.club = 1 AND ( c.hide ='1' OR c.hide='2' ) AND (o.female) );\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=0.00..363373.00 rows=78391 width=72) (actual time=0.54..5488.15 rows=50745 loops=1)\n -> Seq Scan on clubs c (cost=0.00..34057.19 rows=90251 width=64) (actual time=0.03..1434.97 rows=76954 loops=1)\n -> Index Scan using orient_pkey on orient o (cost=0.00..3.64 rows=1 width=8) (actual time=0.03..0.03 rows=1 loops=76954)\nTotal runtime: 5769.21 msec\n\n\n\n",
"msg_date": "Tue, 12 Feb 2002 09:24:45 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Optimizer(?) off by factor of 3 ... ? "
},
{
"msg_contents": "\nOkay, I've 'saved' the dataset/schema for this if you want me to test/try\nanything further with it, as I'm going to try and redo teh schema to get\naround the issues for now ...\n\nOn Mon, 11 Feb 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > [ bogus optimizer choices in 7.2 ]\n>\n> Well, I guess the good news is we seem to be past the old bugaboo of bad\n> statistics: the estimated row counts are all in the right ballpark. Now\n> we get to have fun with the cost models :-).\n>\n> It looks to me like there are a couple of problems here. One is that\n> the default value of effective_cache_size is way too small --- it's set\n> at 1000, which is probably silly when you have NBuffers set to 32768.\n> (In hindsight maybe we should have expressed it as a multiple of\n> NBuffers rather than an absolute size.) You could tweak that with a\n> postgresql.conf change, but I'm not sure that that alone will help much.\n>\n> The more difficult issue is that nestloops with inner indexscan are\n> being seriously misestimated. We're computing the cost as though each\n> iteration of the inner scan were completely independent and being done\n> from a standing start --- which is wrong, because in practice scans\n> after the first will tend to find buffer cache hits for pages already\n> read in by prior scans. You can bet, for example, that the btree\n> metapage and root page aren't going to need to be re-read on each\n> iteration.\n>\n> I am thinking that the right way to do this is to cost the entire inner\n> indexscan (all iterations put together) as if it were a single\n> indexscan, at least for the purposes of applying the Mackert & Lohman\n> formula embedded in cost_index. That would give us a more realistic\n> result for the total cost of the main-table accesses driven by the\n> index. Not sure how to adjust the cost estimate for reading the index,\n> but clearly we need to make some adjustment for repeated hits on the\n> upper index pages.\n>\n> This is probably a bigger change than we can hope to make in 7.2.* ...\n>\n> BTW, what do you get if you EXPLAIN ANALYZE that orient/clubs join\n> with seqscan enabled and hashjoin disabled? If it's a mergejoin,\n> how about if you also disable mergejoin? It seems to me that a seqscan\n> on clubs would be a much better way to do the nestloop join than an\n> indexscan --- but it's being forced into an indexscan because you\n> disabled seqscan.\n>\n> \t\t\tregards, tom lane\n>\n\n",
"msg_date": "Wed, 13 Feb 2002 09:16:15 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Optimizer(?) off by factor of 3 ... ? "
}
] |
[
{
"msg_contents": "\nGranted that I'm still futzing with my table structures and whatnot, but\nhere is the full query, and the explain ANALYZE for it with SEQSCAN both\nenabled and disbled ... enabled, it takes 2x longer??\n\nSELECT p.uid, tB.headline, tB.pictures, tB.voice, pa.zip, tb.gender, p.profiles_handle\n FROM\n ((( SELECT ta.uid,pgf.gender,ta.headline,ta.pictures,ta.voice FROM ( SELECT poc.uid,headline,pictures,voice FROM orient poc JOIN clubs c ON (poc.uid = c.uid AND c.club = 1 AND ( c.hide ='1' OR c.hide='2' ) AND (poc.female) ) ) AS ta\n JOIN gender pgf ON ( ta.uid = pgf.uid AND (pgf.gender='M') ) ) AS tb\n JOIN iwantu_profiles p USING (uid))\n LEFT JOIN lastlogin ll USING (uid))\n LEFT JOIN location pa USING (uid) ORDER BY ll.lastlogin DESC LIMIT 25 OFFSET 0;;\n\n\nLimit (cost=2939483.38..2939483.38 rows=25 width=134) (actual time=38013.45..38013.87 rows=25 loops=1)\n -> Sort (cost=2939483.38..2939483.38 rows=63539 width=134) (actual time=38013.43..38013.58 rows=26 loops=1)\n -> Nested Loop (cost=0.00..2930209.43 rows=63539 width=134) (actual time=1.94..35774.58 rows=47441 loops=1)\n -> Nested Loop (cost=0.00..2693295.07 rows=63539 width=120) (actual time=1.50..32086.98 rows=47441 loops=1)\n -> Nested Loop (cost=0.00..2466570.37 rows=63539 width=104) (actual time=1.07..28589.72 rows=47441 loops=1)\n -> Merge Join (cost=0.00..2237377.25 rows=63539 width=85) (actual time=0.59..23263.91 rows=47441 loops=1)\n -> Merge Join (cost=0.00..1956681.26 rows=79756 width=72) (actual time=0.47..14295.90 rows=50745 loops=1)\n -> Index Scan using orient_pkey on orient poc (cost=0.00..256490.59 rows=424251 width=8) (actual time=0.04..4833.53 rows=418951 loops=1)\n -> Index Scan using clubs_idx on clubs c (cost=0.00..1697904.67 rows=91367 width=64) (actual time=0.34..5187.63 rows=76954 loops=1)\n -> Index Scan using gender_pkey on gender pgf (cost=0.00..278734.47 rows=387155 width=13) (actual time=0.03..5280.48 rows=385969 loops=1)\n -> Index Scan using iwantu_profiles_n_pkey on iwantu_profiles p (cost=0.00..3.59 rows=1 width=19) (actual time=0.07..0.08 rows=1 loops=47441)\n -> Index Scan using lastlogin_pkey on lastlogin ll (cost=0.00..3.56 rows=1 width=16) (actual time=0.04..0.04 rows=1 loops=47441)\n -> Index Scan using location_pkey on location pa (cost=0.00..3.72 rows=1 width=14) (actual time=0.04..0.05 rows=1 loops=47441)\nTotal runtime: 38059.34 msec\n\n\nLimit (cost=265574.89..265574.89 rows=25 width=134) (actual time=76911.26..76911.68 rows=25 loops=1)\n -> Sort (cost=265574.89..265574.89 rows=63539 width=134) (actual time=76911.24..76911.39 rows=26 loops=1)\n -> Merge Join (cost=254132.94..256300.95 rows=63539 width=134) (actual time=67544.75..74800.40 rows=47441 loops=1)\n -> Sort (cost=188717.25..188717.25 rows=63539 width=120) (actual time=48313.73..48656.13 rows=47441 loops=1)\n -> Hash Join (cost=129420.48..180063.31 rows=63539 width=120) (actual time=30001.67..46783.32 rows=47441 loops=1)\n -> Hash Join (cost=72389.02..116937.80 rows=63539 width=104) (actual time=22960.36..37247.98 rows=47441 loops=1)\n -> Seq Scan on iwantu_profiles p (cost=0.00..35233.69 rows=485969 width=19) (actual time=0.42..6145.64 rows=485969 loops=1)\n -> Hash (cost=71361.18..71361.18 rows=63539 width=85) (actual time=22946.01..22946.01 rows=0 loops=1)\n -> Hash Join (cost=54743.55..71361.18 rows=63539 width=85) (actual time=12332.48..22558.83 rows=47441 loops=1)\n -> Seq Scan on gender pgf (cost=0.00..9170.61 rows=387155 width=13) (actual time=0.16..3693.87 rows=385970 loops=1)\n -> Hash 
(cost=53609.16..53609.16 rows=79756 width=72) (actual time=12328.38..12328.38 rows=0 loops=1)\n -> Hash Join (cost=13562.51..53609.16 rows=79756 width=72) (actual time=6104.95..11926.93 rows=50745 loops=1)\n -> Seq Scan on clubs c (cost=0.00..34057.19 rows=91367 width=64) (actual time=0.16..2938.62 rows=76954 loops=1)\n -> Hash (cost=7718.69..7718.69 rows=424251 width=8) (actual time=6080.37..6080.37 rows=0 loops=1)\n -> Seq Scan on orient poc (cost=0.00..7718.69 rows=424251 width=8) (actual time=0.13..3144.84 rows=418951 loops=1)\n -> Hash (cost=7922.73..7922.73 rows=483973 width=16) (actual time=7010.57..7010.57 rows=0 loops=1)\n -> Seq Scan on lastlogin ll (cost=0.00..7922.73 rows=483973 width=16) (actual time=0.11..3648.06 rows=483973 loops=1)\n -> Sort (cost=65415.69..65415.69 rows=485969 width=14) (actual time=19230.90..22199.19 rows=485965 loops=1)\n -> Seq Scan on location pa (cost=0.00..9649.69 rows=485969 width=14) (actual time=0.11..4289.88 rows=485969 loops=1)\nTotal runtime: 76970.09 msec\n\n\n\n",
"msg_date": "Mon, 11 Feb 2002 10:40:16 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Yeech ... more on SEQSCAN vs having it disabled ..."
}
] |
[
{
"msg_contents": "\nHello everybody\n\nI am a Ph.D student who just innocently upgraded from 7.1 to 7.2 his home\nbrewed content management system. \nThe update went allright (datawise speaking) but once i fired up the site\nagain i was flooeded with errors.. what happened? simple\nstarting from 7.2 strings are not truncated silently anymore.\n\nWhile this might be standard SQL (i dont know, i am not an expert and ,\nquite honeslty, i dont care) i have a few notes which might be worth\nconsidering:\n\nIMHO silent truncation is a valuable feature to my sistem and i see no\neasy way to get around it, how shold i do ?\n\na)replacing Char(x) with \"text\" type? a security hazard! now every new\nuser\ncan upload up to 8 MB in string for each text field I leave open! \n\nb)checking the lenght of EACH FIELD IN EACH QUERY? that means\nspecifically querying the DB for the metadata (performance nightmare) or\nincluding metadata about the db in the application (software engineering\nnightmare).\n\nc)leave everything as it is.. maybe get those limits a bit larger and hope\nthat no user ever enters stuff that's too large .. else he'll get nasty\nerrors .. on the other hand that also requires me to handle all query\nfailures directly (put error handles) althought very sound and\nconservative coding practices (as the postgres code itself , I\nassume) require this to be done this is really not the case of web\nscripting and i dont really thing it should be required.\n\nTo make a long story short\n\nSilent truncation wasnt bad ad all.. can we have it back ? (via a switch\nor something? ) i think a good idea would be to have it \"field wise\" like\ncreate table foo(a char(120) silent_trunc). \n\nI do not foresee being able to use postgresql in a non hi-end web\ndevelopment environment otherwise.\n\nthanks for the attention\nGiovanni Tummarello\n\n\n\n",
"msg_date": "Mon, 11 Feb 2002 17:50:16 +0100 (CET)",
"msg_from": "Giovanni Tummarello <tummarel@ascu.unian.it>",
"msg_from_op": true,
"msg_subject": "Serious 7.2 issue (non quiet string truncation)"
},
{
"msg_contents": "Why can't you truncate the string yourself.\n\nTake atleast one of these actions:\n\n1. Limit the forms themselves to the length in question:\n<input type=\"text\" size=\"50\" />\n\n2. Use trim the string to length in the code (php below):\n$string = substr($string, 0, 50);\n\n3. Have the INSERT truncate the string:\nINSERT INTO table (col1) VALUES (substring('valuetoinsert', 1, 5));\n\n\nAny of the above (or all of the above) will accomplish what you\nrequire. I personally suggest both 1 and 2. But 3 can be used if\nnecessary.\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Giovanni Tummarello\" <tummarel@ascu.unian.it>\nTo: <pgsql-hackers@postgresql.org>\nSent: Monday, February 11, 2002 11:50 AM\nSubject: [HACKERS] Serious 7.2 issue (non quiet string truncation)\n\n\n>\n> Hello everybody\n>\n> I am a Ph.D student who just innocently upgraded from 7.1 to 7.2 his\nhome\n> brewed content management system.\n> The update went allright (datawise speaking) but once i fired up the\nsite\n> again i was flooeded with errors.. what happened? simple\n> starting from 7.2 strings are not truncated silently anymore.\n>\n> While this might be standard SQL (i dont know, i am not an expert\nand ,\n> quite honeslty, i dont care) i have a few notes which might be worth\n> considering:\n>\n> IMHO silent truncation is a valuable feature to my sistem and i see\nno\n> easy way to get around it, how shold i do ?\n>\n> a)replacing Char(x) with \"text\" type? a security hazard! now every\nnew\n> user\n> can upload up to 8 MB in string for each text field I leave open!\n>\n> b)checking the lenght of EACH FIELD IN EACH QUERY? that means\n> specifically querying the DB for the metadata (performance\nnightmare) or\n> including metadata about the db in the application (software\nengineering\n> nightmare).\n>\n> c)leave everything as it is.. maybe get those limits a bit larger\nand hope\n> that no user ever enters stuff that's too large .. else he'll get\nnasty\n> errors .. on the other hand that also requires me to handle all\nquery\n> failures directly (put error handles) althought very sound and\n> conservative coding practices (as the postgres code itself , I\n> assume) require this to be done this is really not the case of web\n> scripting and i dont really thing it should be required.\n>\n> To make a long story short\n>\n> Silent truncation wasnt bad ad all.. can we have it back ? (via a\nswitch\n> or something? ) i think a good idea would be to have it \"field wise\"\nlike\n> create table foo(a char(120) silent_trunc).\n>\n> I do not foresee being able to use postgresql in a non hi-end web\n> development environment otherwise.\n>\n> thanks for the attention\n> Giovanni Tummarello\n>\n>\n>\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n",
"msg_date": "Mon, 18 Feb 2002 15:28:15 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Serious 7.2 issue (non quiet string truncation)"
},
{
"msg_contents": "Giovanni Tummarello writes:\n\n> IMHO silent truncation is a valuable feature to my sistem and i see no\n> easy way to get around it, how shold i do ?\n\nAdd a trigger that truncates the value.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 18 Feb 2002 15:41:25 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Serious 7.2 issue (non quiet string truncation)"
},
{
"msg_contents": "On Mon, Feb 18, 2002 at 03:28:15PM -0500, Rod Taylor wrote:\n> Why can't you truncate the string yourself.\n> \n> Take atleast one of these actions:\n> \n> 1. Limit the forms themselves to the length in question:\n> <input type=\"text\" size=\"50\" />\n\nAn attacker could circument this by not going through the webform.\nWhile it's doubtful such an attack would cause an exploitable\ncondition in a language like PHP, it's still better to check\npost-submission...\n\n> 2. Use trim the string to length in the code (php below):\n> $string = substr($string, 0, 50);\n\nlike this.\n\n> 3. Have the INSERT truncate the string:\n> INSERT INTO table (col1) VALUES (substring('valuetoinsert', 1, 5));\n> \n> \n> Any of the above (or all of the above) will accomplish what you\n> require. I personally suggest both 1 and 2. But 3 can be used if\n> necessary.\n\n1 and 2, as you say.\n\nOtherwise some day you convert your code over to C and forget to\ntruncate, and you may be exploitable.\n\n-- \nDavid Terrell | \"Science is like sex: sometimes\ndbt@meat.net | something useful comes out, but\nNebcorp Prime Minister | that is not the reason we are\nhttp://wwn.nebcorp.com/ | doing it\" -- Richard Feynman\n",
"msg_date": "Mon, 18 Feb 2002 13:10:09 -0800",
"msg_from": "David Terrell <dbt@meat.net>",
"msg_from_op": false,
"msg_subject": "Re: Serious 7.2 issue (non quiet string truncation)"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Jean-Michel POURE [mailto:jm.poure@freesurf.fr] \n> Sent: 11 February 2002 16:56\n> To: Andrew Sullivan; PostgreSQL general list\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [GENERAL] [HACKERS] Feature enhancement request \n> : use of libgda in\n> \n> \n> Le Lundi 11 F�vrier 2002 16:52, Andrew Sullivan a �crit :\n> > If it's that important to you, implement it yourself, or pay one of \n> > the able developers to work on a feature you want.\n> \n> Thanks for the information. If you help us, I pay you a beer \n> in Paris. Dave \n> Page will probably too in Oxford, so that makes two beers. \n> Dave Page is \n> working hard on pgAdmin2 writing thousands lines of code. As \n> for myself, I \n> will concentrate on something comparable to pgAdmin2 in a \n> libgda / GTK+ \n> environment because I think Windows has no future. \n\nI wouldn't bet on that.\n\n> CREATE OR REPLACE VIEW / TRIGGER and ALTER TABLE DROP COLUMN are real \n> priorities for us at pgAdmin team \n\nUmm, didn't Gavin (Sherry?) say he was planning to implement these features\nfor 7.3 (the create/replace anyway)?\n\nRegards, Dave.\n",
"msg_date": "Mon, 11 Feb 2002 17:16:38 -0000",
"msg_from": "Dave Page <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Feature enhancement request : use of libg"
}
] |
[
{
"msg_contents": "I don't know how hard it would be to do, but I have rewritten some\nOracle code to work on PostgreSQL. While rewriting a couple functions,\nand the tedium of changing SYSDATE to CURRDATE were a pain, these things\ncan be handled with a scripting language. The big problem, which means\nthat PostgreSQL code does not go back to Oracle, is the \"join\" syntax.\n\nIf there was a way to adopt the Oracle join syntax in addition to the\nstandard join syntax. It would make a great deal of difference. The\ntedium of formatting and variable naming can be done by almost anyone.\nThe rewriting of complex queries into a completely different logical\nsyntax, can only be done by a knowledgeable person and a good QA team.\n",
"msg_date": "Mon, 11 Feb 2002 14:49:10 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Oracle compatibility"
}
] |
[
{
"msg_contents": "Hi,\n\nI am having problems with permissions in postgres. I am using version 7.1.3 of\nPostgres running on RedHat 7.2. \n\nI create the table \"accounts\" and revoke all permissions for the PUBLIC user: \naccounts | {\"=\",\"dcl=arwR\"}\n\nHowever, any user can make a select or update in the table \"accounts\".\n\nCan anybody help me?!\n\nThanks a lot.\n\n\nPS: Please send me a copy of the mails, I am not subscribed to the list.\n",
"msg_date": "Mon, 11 Feb 2002 21:29:28 +0100",
"msg_from": "noy <noyda@isoco.com>",
"msg_from_op": true,
"msg_subject": "Permissions problem"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Jan Wieck [mailto:janwieck@yahoo.com] \n> Sent: 11 February 2002 19:36\n> To: jm.poure@freesurf.fr\n> Cc: Andrew Sullivan; PostgreSQL general list; \n> pgsql-hackers@postgresql.org\n> Subject: Re: [GENERAL] [HACKERS] Feature enhancement request \n> : use of libgda\n> \n> \n> Jean-Michel POURE wrote:\n> >\n> > CREATE OR REPLACE VIEW / TRIGGER and ALTER TABLE DROP \n> COLUMN are real \n> > priorities for us at pgAdmin team \n> (http://pgadmin.postgresql.org). I \n> > don't know PostgreSQL internals, but it should take a few \n> days/weeks \n> > to an experienced hacker to add these features.\n> \n> Jean-Michel,\n> \n> I think you underestimate the problem a little.\n> \n> Doing CREATE OR REPLACE is not that trivial as you appear to\n> think. The existing PL handlers (for PL/Tcl and PL/pgSQL at\n> least) identify functions by their pg_proc OID. The\n> functions body text is parsed only on the first call to that\n> function during the entire session. So changing the functions\n> prosrc attribute after having called it already wouldn't take\n> effect until the next \"session\". But changing the OID as well\n> corrupts existing SPI plans in other functions plus rules.\n> \n> Now it might be possible to tell your function handler to\n> recompile that function at the next call without changing the\n> OID, but how do you tell the function handlers in all the\n> other concurrently running backends to do so after finishing\n> their current transaction?\n\nBearing in mind that I know nearly nothing about internals here :), how\nabout storing a version number in pg_proc? CREATE OR REPLACE updates that,\nand each backend notes it when first parsing the function. Future calls of\nthe function result in a check of the version, and re-parse if necessary.\n\nI'm sure there's far more to this than I realise... :-)\n\nRegards, Dave\n",
"msg_date": "Mon, 11 Feb 2002 20:32:56 -0000",
"msg_from": "Dave Page <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Feature enhancement request : use of libg"
}
] |
[
{
"msg_contents": "Are you sure you wanted \"current_date\" (CURRDATE) and not\n\"current_timestamp\"? \"current_date\" is not entirely synonymous with an\nOracle DATE since it doesn't have a timestamp component.\n\nAs for the outer join syntax, I agree, it's not a treat. I was toying\nwith the idea of writing a standalone parser but for now (like you) I\nbit the bullet and converted it to SQL99 syntax. This would make an\nexcellent 7.3 feature.\n\nmlw wrote:\n\n > I don't know how hard it would be to do, but I have rewritten some\n > Oracle code to work on PostgreSQL. While rewriting a couple functions,\n > and the tedium of changing SYSDATE to CURRDATE were a pain, these things\n > can be handled with a scripting language. The big problem, which means\n > that PostgreSQL code does not go back to Oracle, is the \"join\" syntax.\n >\n > If there was a way to adopt the Oracle join syntax in addition to the\n > standard join syntax. It would make a great deal of difference. The\n > tedium of formatting and variable naming can be done by almost anyone.\n > The rewriting of complex queries into a completely different logical\n > syntax, can only be done by a knowledgeable person and a good QA team.\n >\n\n-- \n01010101010101010101010101010101010101010101010101\n\nMarc P. Lavergne [wk:407-648-6996]\nProduct Development\nrichLAVA Corporation\n\n--\n\n\"Anyone who slaps a 'this page is best viewed with\nBrowser X' label on a Web page appears to be\nyearning for the bad old days, before the Web,\nwhen you had very little chance of reading a\ndocument written on another computer, another word\nprocessor, or another network.\"\n-Tim Berners-Lee (Technology Review, July 1996)\n\n01010101010101010101010101010101010101010101010101\n\n",
"msg_date": "Mon, 11 Feb 2002 18:19:32 -0500",
"msg_from": "Marc Lavergne <mlavergn@richlava.com>",
"msg_from_op": true,
"msg_subject": "[Fwd: Re: Oracle compatibility]"
}
] |
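To illustrate the distinction Marc draws:

SELECT CURRENT_DATE;       -- date only, e.g. 2002-02-11
SELECT CURRENT_TIMESTAMP;  -- date and time; the closer match for Oracle's SYSDATE
SELECT now();              -- traditional PostgreSQL spelling of current_timestamp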
[
{
"msg_contents": "BAZIN Nicolas (nbazin@ingenico.com.au) reports a bug with a severity of 1\nThe lower the number the more severe it is.\n\nShort Description\nSequence cannot be deleted\n\nLong Description\nA Sequence is created automatically with the SQL command:\nCREATE TABLE fa_ccpsholderscpt(hsc_serial SERIAL NOT NULL ,chd_serial INTEGER NOT NULL ,hsc_respcode CHAR(2) NOT NULL ,scp_code CHAR(4) NOT NULL ,imp_flag SMALLINT)\n\nbut when I try to delete it with the following command:\nDROP SEQUENCE fa_ccpsholderscpt_hsc_serial_seq\n\nI get this error:\nsequence \"fa_ccpsholderscpt_hsc_serial_se\" does not exist\n\nI work with version 7.1.3 on Openserver 5.0.5, gcc 2.95.3 and send the SQL commands through the JDBC driver.\n\n\nSample Code\n\n\nNo file was uploaded with this report\n\n",
"msg_date": "Mon, 11 Feb 2002 20:26:37 -0500 (EST)",
"msg_from": "pgsql-bugs@postgresql.org",
"msg_from_op": true,
"msg_subject": "Bug #581: Sequence cannot be deleted"
},
{
"msg_contents": "\nOn Mon, 11 Feb 2002 pgsql-bugs@postgresql.org wrote:\n\n> BAZIN Nicolas (nbazin@ingenico.com.au) reports a bug with a severity of 1\n> The lower the number the more severe it is.\n>\n> Short Description\n> Sequence cannot be deleted\n>\n> Long Description A Sequence is created automatically with the SQL\n> command:\n\n> CREATE TABLE fa_ccpsholderscpt(hsc_serial SERIAL NOT NULL ,chd_serial\n> INTEGER NOT NULL ,hsc_respcode CHAR(2) NOT NULL ,scp_code CHAR(4) NOT\n> NULL ,imp_flag SMALLINT)\n>\n> but when I try to delete it with the following command:\n> DROP SEQUENCE fa_ccpsholderscpt_hsc_serial_seq\n\nThat's not the name of the sequence in question unless you've upped\nthe number of characters in an identifier. The sequence appears to\non my machine be named \"fa_ccpsholderscp_hsc_serial_seq\" because the\nname would have ended up being too long.\n\n",
"msg_date": "Mon, 11 Feb 2002 18:14:31 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Bug #581: Sequence cannot be deleted"
},
{
"msg_contents": "pgsql-bugs@postgresql.org writes:\n> A Sequence is created automatically with the SQL command:\n> CREATE TABLE fa_ccpsholderscpt(hsc_serial SERIAL NOT NULL ,chd_serial INTEGER NOT NULL ,hsc_respcode CHAR(2) NOT NULL ,scp_code CHAR(4) NOT NULL ,imp_flag SMALLINT)\n\nOkay, let's try it ...\n\nregression=# CREATE TABLE fa_ccpsholderscpt(hsc_serial SERIAL NOT NULL ,chd_serial INTEGER NOT NULL ,hsc_respcode CHAR(2) NOT NULL ,scp_code CHAR(4) NOT NULL ,imp_flag SMALLINT);\nNOTICE: CREATE TABLE will create implicit sequence 'fa_ccpsholderscp_hsc_serial_seq' for SERIAL column 'fa_ccpsholderscpt.hsc_serial'\nNOTICE: CREATE TABLE / UNIQUE will create implicit index 'fa_ccpsholderscp_hsc_serial_key' for table 'fa_ccpsholderscpt'\nCREATE\n\n> but when I try to delete it with the following command:\n> DROP SEQUENCE fa_ccpsholderscpt_hsc_serial_seq\n> I get this error:\n> sequence \"fa_ccpsholderscpt_hsc_serial_se\" does not exist\n\nNot surprising, because that's not what it's called. Check the NOTICE\nagain.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Feb 2002 21:17:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug #581: Sequence cannot be deleted "
},
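In other words, the workaround on 7.1.x is simply to use the truncated name shown in the NOTICE:

DROP SEQUENCE fa_ccpsholderscp_hsc_serial_seq;
-- 31 characters: generated identifiers are silently truncated to NAMEDATALEN - 1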
{
"msg_contents": "2002-02-11 21:17] Tom Lane said:\n| pgsql-bugs@postgresql.org writes:\n| > A Sequence is created automatically with the SQL command:\n| > CREATE TABLE fa_ccpsholderscpt(hsc_serial SERIAL NOT NULL ,chd_serial INTEGER NOT NULL ,hsc_respcode CHAR(2) NOT NULL ,scp_code CHAR(4) NOT NULL ,imp_flag SMALLINT)\n| \n| Okay, let's try it ...\n| \n| regression=# CREATE TABLE fa_ccpsholderscpt(hsc_serial SERIAL NOT NULL ,chd_serial INTEGER NOT NULL ,hsc_respcode CHAR(2) NOT NULL ,scp_code CHAR(4) NOT NULL ,imp_flag SMALLINT);\n| NOTICE: CREATE TABLE will create implicit sequence 'fa_ccpsholderscp_hsc_serial_seq' for SERIAL column 'fa_ccpsholderscpt.hsc_serial'\n| NOTICE: CREATE TABLE / UNIQUE will create implicit index 'fa_ccpsholderscp_hsc_serial_key' for table 'fa_ccpsholderscpt'\n| CREATE\n| \n| > but when I try to delete it with the following command:\n| > DROP SEQUENCE fa_ccpsholderscpt_hsc_serial_seq\n| > I get this error:\n| > sequence \"fa_ccpsholderscpt_hsc_serial_se\" does not exist\n| \n| Not surprising, because that's not what it's called. Check the NOTICE\n| again.\n\nIf the user was not doing this via psql, he'd not ever see that\nNOTICE. The naming of sequences has appeared in a number of\nproblem reports.\n\nISTM it would make sense to expose the sequence naming logic via\na builtin function, such as pg_serialseq(table,column)?\n\n DROP SEQUENCE pg_serialseq(a_long_table_name,a_long_column_name);\n\nThis would be a fairly straightforward wrapper of \nmakeObjectName(relname,colname,\"seq\") and we could easily update it\nif (when!) the SERIAL type is reworked to guarantee a way to get at\na SERIAL type's underlying sequence[1]\n\nthanks.\n brent\n\n[1] At some point in time, I'd like to rework SERIAL such that the\n actual sequence name is not used directly. I've been thinking\n of making an optional parameter for the SERIAL type to allow\n creation of SERIAL types that feed from an previously created\n SERIAL sequence. I envision\n CREATE TABLE a ( id SERIAL );\n CREATE TABLE b ( id SERIAL(a.id) );\n In short, I'd like to see nextval() and currval() not used for \n dealing with columns declared as SERIAL, but this is a thought \n for a later date...\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n",
"msg_date": "Tue, 12 Feb 2002 15:09:28 -0500",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] Bug #581: Sequence cannot be deleted"
},
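For illustration only, a user-level approximation of the proposed function can even be faked in plpgsql today. This sketch is hypothetical: it merely mimics makeObjectName()'s habit of shortening the longer component until name_column_seq fits in NAMEDATALEN - 1 = 31 characters, and it would of course break if the backend's naming rule ever changed, which is exactly the objection raised in the next message:

CREATE FUNCTION pg_serialseq(text, text) RETURNS text AS '
DECLARE
    tab text;
    col text;
BEGIN
    tab := $1;
    col := $2;
    -- crude approximation of makeObjectName(): shave characters off the
    -- longer component until tab || ''_'' || col || ''_seq'' fits in 31 chars
    -- (the constant 5 accounts for the separator ''_'' plus the suffix ''_seq'')
    WHILE length(tab) + length(col) + 5 > 31 LOOP
        IF length(tab) >= length(col) THEN
            tab := substr(tab, 1, length(tab) - 1);
        ELSE
            col := substr(col, 1, length(col) - 1);
        END IF;
    END LOOP;
    RETURN tab || ''_'' || col || ''_seq'';
END;
' LANGUAGE 'plpgsql';

-- SELECT pg_serialseq('fa_ccpsholderscpt', 'hsc_serial');
-- yields fa_ccpsholderscp_hsc_serial_seq, matching the NOTICE above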
{
"msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> ISTM it would make sense to expose the sequence naming logic via\n> a builtin function, such as pg_serialseq(table,column)?\n\nThat might seem cleaner, but I think there's a hidden gotcha: it nails\ndown a presumption that the sequence name is a function of the table\nname, column name, and nothing else. So I think it'd actually make it\nharder rather than easier for us to make the sorts of changes we might\nwant to make in future. (F'r instance, we might add an OID into the\nname to prevent collisions.)\n\nI believe that the surprising-name problem will largely go away anyway\nas soon as we get around to increasing the default NAMEDATALEN. With\na decent name length no one would ever see truncation in practice.\n\nAlso, of course, what we really want is for SERIAL sequences to get\ndropped by themselves when the parent table is dropped, and then users\ndon't need to know what the generated sequence name is ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Feb 2002 15:48:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] Bug #581: Sequence cannot be deleted "
},
{
"msg_contents": "On Tuesday 12 February 2002 21:48, Tom Lane wrote:\n> I believe that the surprising-name problem will largely go away anyway\n> as soon as we get around to increasing the default NAMEDATALEN. With\n> a decent name length no one would ever see truncation in practice.\n\nSorry to butt in here, but I would second this suggestion. One of my databases\nhas rather long-winded table and field names (mostly in German, which doesn't\nhelp much ;-). There aren't any which exceed 31 characters on their own, but\nsequences can get scarily long, so I always build with NAMEDATALEN set to 128,\njust to be on the safe side.\n\nIs there any reason for the default value (31 characters?), or are there\nany performance issues associated with longer values?\n\n> Also, of course, what we really want is for SERIAL sequences to get\n> dropped by themselves when the parent table is dropped, and then users\n> don't need to know what the generated sequence name is ...\n\nThis would be nice.\n\nYours\n\nIan Barwick\n",
"msg_date": "Tue, 12 Feb 2002 22:58:18 +0100",
"msg_from": "Ian Barwick <barwick@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] Bug #581: Sequence cannot be deleted"
},
{
"msg_contents": "Ian Barwick <barwick@gmx.net> writes:\n> Is there any reason for the default value (31 characters?),\n\nIt's historical AFAIK.\n\n> or are there\n> any performance issues associated with longer values?\n\nLarger values would definitely waste space in the system tables (since\ntype name is fixed-width). Bigger system tables = more I/O = some\namount of slowdown. I have not heard that anyone has tried to measure\nthe cost. It might be negligible; we just don't know.\n\nI believe we'd be happy to change the number as soon as someone does the\nlegwork to quantify what it's going to cost.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Feb 2002 17:17:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] Bug #581: Sequence cannot be deleted "
},
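As a starting point for that legwork, one could at least compare system-catalog sizes between two clusters initdb'd with different NAMEDATALEN values; a sketch (relpages is only current as of the last VACUUM, hence the first line):

VACUUM ANALYZE;
SELECT relname, relpages
FROM pg_class
WHERE relname IN ('pg_class', 'pg_attribute', 'pg_proc', 'pg_type')
ORDER BY relpages DESC;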
{
"msg_contents": "I have a similar problem. Where I have both long table names and long\ncolumn names. Has the increase of the NAMEDATALEN been targeted for a\nrelease? From my perspective, I would prefer if the algorithm to determine\nthe sequence name was more of a function on the table name rather than a\ncombination of the table name and column name, as I have never created a\ntable that has more than one sequence in it.\n\nI would also welcome the ability to have the drop table command drop the\nsequence as well.\n\nTom\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Ian Barwick\nSent: February 12, 2002 4:58 PM\nTo: Tom Lane; pgsql-hackers@postgresql.org\nCc: nbazin@ingenico.com.au; Brent Verner\nSubject: Re: [HACKERS] [BUGS] Bug #581: Sequence cannot be deleted\n\nOn Tuesday 12 February 2002 21:48, Tom Lane wrote:\n> I believe that the surprising-name problem will largely go away anyway\n> as soon as we get around to increasing the default NAMEDATALEN. With\n> a decent name length no one would ever see truncation in practice.\n\nSorry to butt in here, but I would second this suggestion. One of my\ndatabases\nhas rather long-winded table and field names (mostly in German, which\ndoesn't\nhelp much ;-). There aren't any which exceed 31 characters on their own, but\nsequences can get scarily long, so I always build with NAMEDATALEN set to\n128,\njust to be on the safe side.\n\nIs there any reason for the default value (31 characters?), or are there\nany performance issues associated with longer values?\n\n> Also, of course, what we really want is for SERIAL sequences to get\n> dropped by themselves when the parent table is dropped, and then users\n> don't need to know what the generated sequence name is ...\n\nThis would be nice.\n\nYours\n\nIan Barwick\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\nhttp://www.postgresql.org/users-lounge/docs/faq.html\n\n\n",
"msg_date": "Tue, 12 Feb 2002 17:25:33 -0500",
"msg_from": "\"Tom Innes\" <tinnes@inforamp.net>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] Bug #581: Sequence cannot be deleted"
}
] |
[
{
"msg_contents": "I've recently installed Postgres 7.2, but this also applied to the 7.1.3\ninstallation I had too. I'm seeing a lot of DEBUG messages being output to\nthe logfile/syslog, both when PG starts and stops, and also during VACUUM\nANALYZE runs (w/o verbose mode turned on); I start/stop PG via something\nlike:\n\nsu -l $PGUSER -c \"pg_ctl start -D '$PGDATA' -s -l $PGLOG\"\n\nand the analyze runs are done from cron like so:\n\nsu -l $PGUSER -c \"/usr/local/pgsql/bin/vacuumdb --all --analyze\"\n\nIn postgresql.conf, the relevant section looks like:\n---\n# Debug display\n#silent_mode = false\nlog_connections = true\nlog_timestamp = true\nlog_pid = true\n#debug_level = 0 # range 0-16\n#debug_print_query = false\n#debug_print_parse = false\n#debug_print_rewritten = false\n#debug_print_plan = false\n#debug_pretty_print = false\n# requires USE_ASSERT_CHECKING\n#debug_assertions = true\n---\n\nwhich to mean says that there shouldn't be much debug output (level=0); I'm\nOK with the connection messages. I've attached some examples of the messages\nbelow if they're helpful. Is this something I have done, or something I can\nturn off? Its mainly the analyze lines that fill the log up. I'd have a\nquick grep at the code for DEBUG's but I'll have to wait till I get home.\n\nCheers,\nJoe\n\n-- DEBUG examples:\n\nFeb 12 17:13:40 www postgres[32308]: [1] DEBUG: smart shutdown request\nFeb 12 17:13:40 www postgres[32375]: [2] DEBUG: shutting down\nFeb 12 17:13:42 www postgres[32375]: [3] DEBUG: database system is shut\ndown\nFeb 12 17:14:01 www postgres[32398]: [1] DEBUG: database system was shut\ndown at 2002-02-12 17:13:42 EST\nFeb 12 17:14:01 www postgres[32398]: [2] DEBUG: checkpoint record is at\n0/2CA848\nFeb 12 17:14:01 www postgres[32398]: [3] DEBUG: redo record is at 0/2CA848;\nundo record is at 0/0; shutdown TRUE\nFeb 12 17:14:01 www postgres[32398]: [4] DEBUG: next transaction id: 1795;\nnext oid: 16557\nFeb 12 17:14:01 www postgres[32398]: [5] DEBUG: database system is ready\nFeb 12 17:14:08 www postgres[32410]: [1] DEBUG: connection: host=[local]\nuser=postgres database=template1\nFeb 12 17:14:08 www postgres[32412]: [1] DEBUG: connection: host=[local]\nuser=postgres database=postgres\nFeb 12 17:14:08 www postgres[32412]: [2] DEBUG: --Relation pg_type--\nFeb 12 17:14:08 www postgres[32412]: [3-1] DEBUG: Pages 2: Changed 0, Empty\n0; Tup 143: Vac 0, Keep 0, UnUsed 1.\nFeb 12 17:14:08 www postgres[32412]: [3-2] Total CPU 0.00s/0.00u sec\nelapsed 0.00 sec.\nFeb 12 17:14:08 www postgres[32412]: [4] DEBUG: Analyzing pg_type\nFeb 12 17:14:08 www postgres[32412]: [5] DEBUG: --Relation pg_attribute--\nFeb 12 17:14:08 www postgres[32412]: [6-1] DEBUG: Pages 11: Changed 0,\nEmpty 0; Tup 795: Vac 0, Keep 0, UnUsed 21.\nFeb 12 17:14:08 www postgres[32412]: [6-2] Total CPU 0.00s/0.00u sec\nelapsed 0.00 sec.\nFeb 12 17:14:08 www postgres[32412]: [7] DEBUG: Analyzing pg_attribute\nFeb 12 17:14:08 www postgres[32412]: [8] DEBUG: --Relation pg_class--\n...\n\n\n***********Confidentiality/Limited Liability Statement***************\n\nHave the latest business news and in depth analysis delivered to your \ndesktop. Subscribe to \"Insights\", Deloitte's fortnightly email \nbusiness bulletin . . . \n \nhttp://www.deloitte.com.au/preferences/preference.asp\n\nThis message contains privileged and confidential information intended\nonly for the use of the addressee named above. If you are not the \nintended recipient of this message, you must not disseminate, copy or \ntake any action in reliance on it. 
"msg_date": "Tue, 12 Feb 2002 17:15:02 +1100",
"msg_from": "\"Shevland, Joseph (AU - Hobart)\" <jshevland@deloitte.com.au>",
"msg_from_op": true,
"msg_subject": "DEBUG output w/o debug set"
}
] |
[
{
"msg_contents": "Hi All,\n\n(1)I am Amit Kumar Khare, I am doing MCS from UIUC USA offcampus from India.\n\n(2) We have been asked to enhance postgreSQL in one of our assignments. So I \nhave chosen to pick \"Add free-behind capability for large sequential scans\" \nfrom TODO list. Many thanks to Mr. Bruce Momjian who helped me out and \nsuggested to make a patch for this problem.\n\n(3)As explained to me by Mr. Bruce, the problem description is that if say \ncache size is 1 mb and a sequential scan is done through a 2mb file over and \nover again the cache becomes useless.Because by the time the second read of \nthe table happens the first 1mb has been forced out of the cache already.Thus \nthe idea is not to cache very large sequential scans, but to cache index scans \nsmall sequential scans.\n\n(4)what I think the problem arises because of default LRU page replacement \npolicy. So I think we have to make use of MRU or LRU-K page replacement \npolicies.\n\n(5)But I am not sure and I wish more input into the problem description from \nyou all. I have started reading the buffer manager code and I found that \nfreelist.c may be needed to be modified and may be some other too since we \nhave to identify the large sequential scans.\n\nPlease help me out\n\nRegards\nAmit Kumar Khare\n\n\n",
"msg_date": "Tue, 12 Feb 2002 01:12:57 -0600",
"msg_from": "khare <khare@students.uiuc.edu>",
"msg_from_op": true,
"msg_subject": "Add free-behind capability for large sequential scans"
}
] |
[
{
"msg_contents": "Browsing the archives, I found the latest comment about dropping columns\nabout summer 2000 closing with Hiroshi's unapplied (?) hack. What is the\ncurrent status of the implementation?\n\nRegards, Zoltan\n\n-- \n Kov\\'acs, Zolt\\'an\n kovacsz@pc10.radnoti-szeged.sulinet.hu\n http://www.math.u-szeged.hu/~kovzol\n ftp://pc10.radnoti-szeged.sulinet.hu/home/kovacsz\n\n",
"msg_date": "Tue, 12 Feb 2002 10:33:58 +0100 (CET)",
"msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>",
"msg_from_op": true,
"msg_subject": "alter table drop column status"
},
{
"msg_contents": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu> writes:\n> Browsing the archives, I found the latest comment about dropping columns\n> about summer 2000 closing with Hiroshi's unapplied (?) hack. What is the\n> current status of the implementation?\n\nIt was applied, and it's in there with #ifdef _DROP_COLUMN_HACK__,\nbut I believe Hiroshi has given up on that approach as unworkable.\n\nThe #ifdef'd code is still there (in most places anyway) because no\none has bothered to rip it out. But I doubt it would work very well\nif enabled --- the code mods in the last year or so have not taken\nany notice of _DROP_COLUMN_HACK__.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Feb 2002 10:14:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: alter table drop column status "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane\n>\n> Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu> writes:\n> > Browsing the archives, I found the latest comment about dropping columns\n> > about summer 2000 closing with Hiroshi's unapplied (?) hack. What is the\n> > current status of the implementation?\n>\n> It was applied,\n\nNo there was an unapplied hack which uses logical/physical\nattribute numbers. I have synchronized it with cvs for a\nyear or so but stop it now. Though it had some flaws It\nsolved the following TODOs.\n\n* Add ALTER TABLE DROP COLUMN feature\n* ALTER TABLE ADD COLUMN to inherited table put column in wrong place\n* Prevent column dropping if column is used by foreign key\n\nI gave up to apply the hack mainly because it may introduce\nthe maintenance headache.\n\n> and it's in there with #ifdef _DROP_COLUMN_HACK__,\n> but I believe Hiroshi has given up on that approach as unworkable.\n>\n> The #ifdef'd code is still there (in most places anyway) because no\n> one has bothered to rip it out. But I doubt it would work very well\n> if enabled --- the code mods in the last year or so have not taken\n> any notice of _DROP_COLUMN_HACK__.\n\nThe code doesn't work since long. I would remove it after 7.3 tree\nis branched.\n\nregards,\nHiroshi Inoue\n\n",
"msg_date": "Wed, 13 Feb 2002 02:08:18 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: alter table drop column status "
},
{
"msg_contents": "> No there was an unapplied hack which uses logical/physical\n> attribute numbers. I have synchronized it with cvs for a\n> year or so but stop it now. Though it had some flaws It\n> solved the following TODOs.\n>\n> * Add ALTER TABLE DROP COLUMN feature\n> * ALTER TABLE ADD COLUMN to inherited table put column in wrong place\n> * Prevent column dropping if column is used by foreign key\n\nThis seems fantastic - why can't this be committed? Surely if it's\ncommitted then the flaws will fairly quickly be ironed out? Even if it has\nflaws, then if we say 'this function is not yet stable' at least people can\nstart testing it and reporting the problems?\n\n> I gave up to apply the hack mainly because it may introduce\n> the maintenance headache.\n\nIs it a maintenance headache just for you to keep it up to date, or how\nwould it be a maintenance headache if it were committed?\n\nChris\n\n",
"msg_date": "Wed, 13 Feb 2002 13:14:59 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: alter table drop column status "
},
{
"msg_contents": "Le Mercredi 13 F�vrier 2002 06:14, Christopher Kings-Lynne a �crit :\n> This seems fantastic - why can't this be committed? �Surely if it's\n> committed then the flaws will fairly quickly be ironed out? �Even if it has\n> flaws, then if we say 'this function is not yet stable' at least people can\n> start testing it and reporting the problems?\n\n+1. What are the reasons why this hack was not applied?\n",
"msg_date": "Wed, 13 Feb 2002 09:09:41 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: alter table drop column status"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> \n> > No there was an unapplied hack which uses logical/physical\n> > attribute numbers. I have synchronized it with cvs for a\n> > year or so but stop it now. Though it had some flaws It\n> > solved the following TODOs.\n> >\n> > * Add ALTER TABLE DROP COLUMN feature\n> > * ALTER TABLE ADD COLUMN to inherited table put column in wrong place\n> > * Prevent column dropping if column is used by foreign key\n> \n> This seems fantastic - why can't this be committed? Surely if it's\n> committed then the flaws will fairly quickly be ironed out? Even if it has\n> flaws, then if we say 'this function is not yet stable' at least people can\n> start testing it and reporting the problems?\n> \n> > I gave up to apply the hack mainly because it may introduce\n> > the maintenance headache.\n> \n> Is it a maintenance headache just for you to keep it up to date, or how\n> would it be a maintenance headache if it were committed?\n\nProbably(oops I don't remember well now sorry) the main\nreason why I didn't insist to apply the patch was that\nit wasn't so clean as I had expected.\nMy trial implementation uses logical(for clients) and\nphysical (for backend internal) attribute numbers but\nthere were many places where I wasn't able to judge which\nto use immediately. I'm pretty suspicious if a developer\ncould be careful about the choise when he is implementing\nan irrevant feature. (Un)fortunately the numbers have\nthe same values mostly and he could hardly notice the\nmistake even if he chose the wrong attribute numbers.\nI'm not sure if I myself chose the right attribute numbers \neverywhere in my implementation.\nIn addtion (probably) there were some pretty essential\nflaws. I intended to manage the backend internal\nobject references without the logical attribute\nnumbers but I found it difficult in some cases\n(probably the handling of virtual(not existent \nin any real table) tuples).\n\nSorry it was more than 1 year ago when I implemented\nit and I can't remember well what I'd thougth then.\nThough I'd kept my local branch up to date for\nabout a year, it's about half a year since I touched\nthe stuff last. \n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 13 Feb 2002 17:48:33 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: alter table drop column status"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> My trial implementation uses logical(for clients) and\n> physical (for backend internal) attribute numbers but\n> there were many places where I wasn't able to judge which\n> to use immediately. I'm pretty suspicious if a developer\n> could be careful about the choise when he is implementing\n> an irrevant feature. (Un)fortunately the numbers have\n> the same values mostly and he could hardly notice the\n> mistake even if he chose the wrong attribute numbers.\n\nI think this was the thing that really scared everyone about the trial\nimplementation: the near-certainty of bugs that might remain unnoticed\nfor a long time.\n\n\nAt the last OSDB conference I had an interesting discussion with\nAnn Harrison about how Interbase (Firebird) deals with this problem.\nEssentially, they mark every tuple with an identifier for the schema\nthat it follows. Translated to Postgres terms, it'd work like this:\n\n1. Composite types (row types) could exist independently of tables;\nthis is something we've wanted for awhile anyway. A composite type\nis identified by its OID in pg_type. pg_attribute rows would have\nto be considered to belong to pg_type entries not pg_class entries.\n\n2. A relation in pg_class has a pointer to its current preferred schema\n(row type). This link exists already (reltype), but it would no longer\nbe necessarily fixed for the life of the relation. To implement ADD,\nDROP or ALTER COLUMN, you'd construct a new row type and update\npg_class.reltype to point to it. And that's all you'd do --- you'd not\ntouch the stored data.\n\n3. Tuples being inserted/updated would always be coerced to the current\npreferred schema of the relation. However, old tuples would remain\nwith their original schema, perhaps indefinitely. (Or we could offer\na special command to forcibly update all tuples to current schema.)\n\n4. Internally, we'd probably need to create a \"row type cache\" separate\nfrom the existing relcache, so that the attribute structure shown by a\ngiven tuple header could be looked up quickly, whether or not it is the\ncurrent preferred schema of the relation.\n\n5. It'd no longer be possible to identify a particular column solely\nby column number, since the column number might vary between schemas.\nNor would identification by name be reliable (think RENAME COLUMN).\nI think what we'd have to do is go back to giving OIDs to individual\npg_attribute entries ... they wouldn't be true OIDs in the current sense\nbecause not unique across all pg_attribute entries, but we could\ngenerate them using the OID counter. Perhaps call them serial numbers\nnot OIDs. When constructing a new schema, the serial number would be\ncarried over from each column that is logically the same column as some\npre-existing column --- but the physical column numbers might be quite\ndifferent. Then, initial construction of a query plan would resolve\ncolumn name to column serial number using the current schema of the\nrelation, and at runtime the serial number would have to be looked up\nin the actual schema of each tuple. If it's not found, use the default\nvalue of the column as shown in the current schema (this supports ADD\nCOLUMN). 
If it's found but does not have the same datatype as the Var\nshows that the current schema expects, perform a runtime type coercion\n(this supports ALTERing a column datatype).\n\nThe main thing that this supports that Hiroshi's trial implementation\ndidn't is altering column datatype.\n\nIt'd also considerably simplify processing of inheritance-tree table\nscans: rather than the current kluge that translates parent to child\ncolumn numbers, you'd just make sure that a child table is created with\ncolumn serial numbers matching the parent for its inherited columns.\nThen the above-described mechanism takes care of finding the child\ncolumns for you: essentially, a child-table tuple can be treated just\nlike a tuple that's not of the current schema in the parent table.\n(I'm not sure if the trial implementation could do that too.)\n\nThe weakest feature of the whole scheme is the per-tuple runtime lookups\nimplied by points 4 and 5. We could probably avoid any noticeable\nslowdown in normal cases by caching the results in Var nodes of\nexecution plans, but in cases where a relation has a wild mix of tuples\nof different vintages a single-entry cache wouldn't help much.\n\nAnother objection is the need to add an OID field to tuple headers; 4\nmore bytes per tuple adds up (and on some platforms it'd be 8 bytes due\nto alignment considerations).\n\nAnother problem is that the distinction between column positions and\ncolumn serial numbers has the same kind of potential for confusion as\nbetween logical and physical numbers in the trial implementation. It\nwouldn't be as bad, because the values would be different in most cases.\n\n\nThis'd be a sufficiently big change that I'm not at all sure we'd want\nto do it that way. But I thought I'd sketch out the idea and see if\nanyone likes it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Feb 2002 10:57:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: alter table drop column status "
},
{
"msg_contents": "> to use immediately. I'm pretty suspicious if a developer\n> could be careful about the choise when he is implementing\n> an irrevant feature. (Un)fortunately the numbers have\n\nWell, dropping a column doesn't seem to be a relevant feature. But\nunfortunately our production system requires updates/upgrades \"on the\nfly\", without stopping and dumping out/in the whole database. Currently\nit's only about 16 megs of data but it's growing... I would be satisfied\nwith a working method for dropping and recreating only one table with a\nshort shutdown (~ a few minutes). The problem for me is that the foreign\nkey constraints of all referencing tables must be recreated and I want to\ndo this automagically. It would be enough for me if I could write a\nscript which does this reasonably fast.\n\nI wanted to know if I should wait for the solution of the full ALTER TABLE\nimplementation or not. I'm afraid I shouldn't wait, should I? ;-)\n\n-- \n Kov\\'acs, Zolt\\'an\n kovacsz@pc10.radnoti-szeged.sulinet.hu\n http://www.math.u-szeged.hu/~kovzol\n ftp://pc10.radnoti-szeged.sulinet.hu/home/kovacsz\n\n",
"msg_date": "Wed, 13 Feb 2002 18:23:10 +0100 (CET)",
"msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>",
"msg_from_op": true,
"msg_subject": "Re: alter table drop column status"
},
{
"msg_contents": "Le Mercredi 13 F�vrier 2002 18:23, Kovacs Zoltan a �crit :\n> I wanted to know if I should wait for the solution of the full ALTER TABLE\n> implementation or not. I'm afraid I shouldn't wait, should I? ;-)\n\nWhat we could do using pgAdmin2 is :\n(\"table_from\" is the table to be modified, \"table_to\" is the resulting table)\n\n1) Mark objects for deletion\n* mark columns in \"table_from\" for deletion,\n* mark primary keys in \"table_from\" for deletion,\n* mark foreign keys in \"table_from\" for deletion,\n\n2) Copy schema and data\n* copy \"table_to\" structure out of \"table_from\" keeing only marked objects,\n* copy data from \"table_from\" to \"table_to\",\n\n3) Add rules and triggers, rename\n* add \"table_from\" triggers to \"table_to\",\n* add \"table_from\" rules to \"table_to\",\n* drop table \"table_from\",\n* rename \"table_to\".\n\nThe same script should also work for inherited tables.\n\nThis could be a hack until equivalent features are added natively to \nPostgreSQL. Do you think it is relevant to add this feature to pgAdmin2? Does \nHiroshi script provide the same kind of features?\n\nWhat is your opinion my dear friends? We wait for your advice.\n\nCheers,\njean-Michel POURE\n",
"msg_date": "Thu, 14 Feb 2002 10:16:32 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: alter table drop column status"
},
{
"msg_contents": "That's the only way to do it at the moment - would you like to collaborate\non the actual sql script to get this done? I wonder if it could be done\nentirely with a stored procedure? That'd be cool:\n\nselect drop_column(mytable, mycolumn);\n\nSweet :)\n\nI'd like to implement this for phpPgAdmin as well.\n\nChris\n\n> -----Original Message-----\n> From: Jean-Michel POURE [mailto:jm.poure@freesurf.fr]\n> Sent: Thursday, 14 February 2002 5:17 PM\n> To: Kovacs Zoltan; Hiroshi Inoue; dpage@pgadmin.org\n> Cc: Christopher Kings-Lynne; Tom Lane; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] alter table drop column status\n>\n>\n> Le Mercredi 13 F�vrier 2002 18:23, Kovacs Zoltan a �crit :\n> > I wanted to know if I should wait for the solution of the full\n> ALTER TABLE\n> > implementation or not. I'm afraid I shouldn't wait, should I? ;-)\n>\n> What we could do using pgAdmin2 is :\n> (\"table_from\" is the table to be modified, \"table_to\" is the\n> resulting table)\n>\n> 1) Mark objects for deletion\n> * mark columns in \"table_from\" for deletion,\n> * mark primary keys in \"table_from\" for deletion,\n> * mark foreign keys in \"table_from\" for deletion,\n>\n> 2) Copy schema and data\n> * copy \"table_to\" structure out of \"table_from\" keeing only\n> marked objects,\n> * copy data from \"table_from\" to \"table_to\",\n>\n> 3) Add rules and triggers, rename\n> * add \"table_from\" triggers to \"table_to\",\n> * add \"table_from\" rules to \"table_to\",\n> * drop table \"table_from\",\n> * rename \"table_to\".\n>\n> The same script should also work for inherited tables.\n>\n> This could be a hack until equivalent features are added natively to\n> PostgreSQL. Do you think it is relevant to add this feature to\n> pgAdmin2? Does\n> Hiroshi script provide the same kind of features?\n>\n> What is your opinion my dear friends? We wait for your advice.\n>\n> Cheers,\n> jean-Michel POURE\n>\n\n",
"msg_date": "Fri, 15 Feb 2002 09:01:53 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: alter table drop column status"
},
{
"msg_contents": "> select drop_column(mytable, mycolumn);\n\nIMHO first at least a LOCK should be executed on all tables which are in\nany reference with \"mytable\". If LOCK is not enough, the entire database\nshould be locked (in pg_hba.conf) for all users except for the maintainer.\n\n> > 1) Mark objects for deletion\n> > * mark columns in \"table_from\" for deletion,\n> > * mark primary keys in \"table_from\" for deletion,\n> > * mark foreign keys in \"table_from\" for deletion,\n* check all other tables if they have any references to the columns\n of \"table_from\" marked to be deleted; if check fails, STOP\n* lock all tables which appear in FOREIGN KEYS of \"table_from\" and\n all tables which have FOREIGN KEYS references to \"table_from\"\n\n> > 2) Copy schema and data\n> > * copy \"table_to\" structure out of \"table_from\" keeing only\n> > marked objects,\n> > * copy data from \"table_from\" to \"table_to\",\n> >\n> > 3) Add rules and triggers, rename\n> > * add \"table_from\" triggers to \"table_to\",\n> > * add \"table_from\" rules to \"table_to\",\n> > * drop table \"table_from\",\n* (postgres will automatically drop referential integrity triggers from\n all tables referencing the the dropped table \"table_from\")\n> > * rename \"table_to\".\n* recreate referential integrity triggers in all tables described above\n* unlock all locked tables\n\nI'm afraid LOCK is not available inside a PLPGSQL function (I write almost\neverything in PLPGSQL). However, a shell script should do this easily, but\nit's no so smart to call a shell script from a PLPGSQL function (although\nI do this some time), if Cristopher would like to use it with a single\nSELECT.\n\nRegards, Zoltan\n\n Kov\\'acs, Zolt\\'an\n kovacsz@pc10.radnoti-szeged.sulinet.hu\n http://www.math.u-szeged.hu/~kovzol\n ftp://pc10.radnoti-szeged.sulinet.hu/home/kovacsz\n\n",
"msg_date": "Fri, 15 Feb 2002 07:37:04 +0100 (CET)",
"msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>",
"msg_from_op": true,
"msg_subject": "Re: alter table drop column status"
},
{
"msg_contents": "> IMHO first at least a LOCK should be executed on all tables which are in\n> any reference with \"mytable\". If LOCK is not enough, the entire database\n> should be locked (in pg_hba.conf) for all users except for the maintainer.\n\nYep.\n\n> I'm afraid LOCK is not available inside a PLPGSQL function (I write almost\n> everything in PLPGSQL). However, a shell script should do this easily, but\n> it's no so smart to call a shell script from a PLPGSQL function (although\n> I do this some time), if Cristopher would like to use it with a single\n> SELECT.\n\nHmmm - can LOCKs in PLPGSQL be added in 7.3, or are there reasons it's\ndifficult?\n\nI'd love to publish a contrib of 'Chris's DDL functions' like:\n\nalter_column_null(table, column, state)\ndrop_column(table, column)\ndrop_foreign_key(table, keyname)\n\netc.\n\nSo that people can use these in lieu of them being available natively in\npostgres.\n\nI guess they could be written in C - but then you may as well implement them\nproperly!\n\nChris\n\n\n",
"msg_date": "Fri, 15 Feb 2002 15:42:38 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: alter table drop column status"
},
{
"msg_contents": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu> writes:\n> I'm afraid LOCK is not available inside a PLPGSQL function\n\nWorks fine for me ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Feb 2002 10:04:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: alter table drop column status "
},
{
"msg_contents": "> > I'm afraid LOCK is not available inside a PLPGSQL function\n> \n> Works fine for me ...\n\nHmm, it works for me, too. OK, I see no more rocks ahead writing a PLPGSQL\nfunction which drops a column. It'll be slow, but it'll work. However, a C\nfunction would be better. Unfortunately I have no experience in writing\nlibpq or ecpg functions. Is it a problem for you if I contribute a PLPGSQL\ncode?\n\nRegards, Zoltan\n\n",
"msg_date": "Tue, 19 Feb 2002 19:07:13 +0100 (CET)",
"msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>",
"msg_from_op": true,
"msg_subject": "Re: alter table drop column status "
},
{
"msg_contents": "Hi Zoltan,\n\nI'd love to see a pl/pgsql funciton to drop a column submitted to the list.\nI'll submit a set null / set not null one and maybe we can make up a little\npackage of functions for techdocs.postgres.org.\n\nIn fact, getting these pl/pgsql functions right will make it easier to write\nC versions, which might make it easier to integrate the functionality\ndirectly into postgres...\n\nChris\n\n> -----Original Message-----\n> From: Kovacs Zoltan [mailto:kovacsz@pc10.radnoti-szeged.sulinet.hu]\n> Sent: Wednesday, 20 February 2002 2:07 AM\n> To: Tom Lane\n> Cc: Christopher Kings-Lynne; jm.poure@freesurf.fr; Hiroshi Inoue;\n> dpage@pgadmin.org; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] alter table drop column status\n>\n>\n> > > I'm afraid LOCK is not available inside a PLPGSQL function\n> >\n> > Works fine for me ...\n>\n> Hmm, it works for me, too. OK, I see no more rocks ahead writing a PLPGSQL\n> function which drops a column. It'll be slow, but it'll work. However, a C\n> function would be better. Unfortunately I have no experience in writing\n> libpq or ecpg functions. Is it a problem for you if I contribute a PLPGSQL\n> code?\n>\n> Regards, Zoltan\n>\n\n",
"msg_date": "Wed, 20 Feb 2002 10:41:32 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: alter table drop column status "
}
] |
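To make the approach discussed in this thread concrete, here is a minimal, hypothetical plpgsql sketch of such a drop_column() for 7.2. It rebuilds the bare table only (indexes, defaults, constraints, rules, and triggers would all have to be recreated separately, as in Jean-Michel's outline), and it should be run inside a transaction by a user with the appropriate rights:

CREATE FUNCTION drop_column(text, text) RETURNS text AS '
DECLARE
    tab ALIAS FOR $1;
    col ALIAS FOR $2;
    collist text;
    r RECORD;
BEGIN
    -- keep other sessions out while the table is rebuilt
    EXECUTE ''LOCK TABLE '' || quote_ident(tab) || '' IN ACCESS EXCLUSIVE MODE'';
    -- build the list of surviving columns from the system catalogs
    collist := '''';
    FOR r IN SELECT a.attname
               FROM pg_attribute a, pg_class c
              WHERE a.attrelid = c.oid
                AND c.relname = tab
                AND a.attnum > 0
                AND a.attname <> col
              ORDER BY a.attnum LOOP
        IF collist <> '''' THEN
            collist := collist || '', '';
        END IF;
        collist := collist || quote_ident(r.attname);
    END LOOP;
    -- copy, drop, rename (drop_column_tmp is a hypothetical scratch name;
    -- a real version should pick one guaranteed not to collide)
    EXECUTE ''CREATE TABLE drop_column_tmp AS SELECT '' || collist
            || '' FROM '' || quote_ident(tab);
    EXECUTE ''DROP TABLE '' || quote_ident(tab);
    EXECUTE ''ALTER TABLE drop_column_tmp RENAME TO '' || quote_ident(tab);
    RETURN ''dropped column '' || col;
END;
' LANGUAGE 'plpgsql';

-- usage, as in Christopher's sketch:
-- SELECT drop_column('mytable', 'mycolumn');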
[
{
"msg_contents": "\n> I hate to sound like a broken record, but I want to re-open that\n> discussion about RTLD_LAZY binding that trailed off a week or two\n> ago. I have just noticed that the 7.0 and 7.1 versions of\n> src/backend/port/dynloader/linux.h have\n> \n> #define pg_dlopen(f)\tdlopen(f, 2)\n> \n> which in 7.2 has been changed to\n> \n> #define pg_dlopen(f)\tdlopen((f), RTLD_LAZY | RTLD_GLOBAL)\n\nOne thing to really watch out for is that old style extensions,\nthat only need a few functions from a standard library will load\nmore efficiently with RTLD_LAZY. (might not even pull in dependent \nlibs, that are not needed)\n\nNext thing to watch out for is, that RTLD_NOW will probably not load \na shared lib, that was not linked with a \"no entry\" flag. Arguably a bug\nfor a shared lib, but a recent report has shown that pg fails to supply\nsuch a flag on all ports.\n\nI am for keeping RTLD_LAZY :-)\n\nAndreas\n",
"msg_date": "Tue, 12 Feb 2002 12:00:44 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: RTLD_LAZY considered harmful (Re: pltlc and pltlcu "
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> One thing to really watch out for is that old style extensions,\n> that only need a few functions from a standard library will load\n> more efficiently with RTLD_LAZY.\n\nI thought we had disposed of that argument. The issue here is not a\nmarginal efficiency gain, it is that an unresolved symbol will lead\nto a backend crash (== database-wide restart) unless it is detected\nat dlopen time. As a wise man once said, \"I can make it arbitrarily\nfast ... if it doesn't have to work.\"\n\n> Next thing to watch out for is, that RTLD_NOW will probably not load \n> a shared lib, that was not linked with a \"no entry\" flag.\n\nSay again?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Feb 2002 09:43:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RTLD_LAZY considered harmful (Re: pltlc and pltlcu "
}
] |
[
{
"msg_contents": "I had one more frustrating exprience with the 7.1.3 optimizer handling \nindex/scan selection.\n\nHere is the schema\n\nRADIUS=# \\d attrib\n Table \"attrib\"\n Attribute | Type | Modifier \n-----------+----------------+---------------------\n user_name | character(32) | not null default ''\n attr | character(32) | not null default ''\n value | character(128) | \n op | character(2) | \nIndex: uattr\n\nRADIUS=# \\d uattr\n Index \"uattr\"\n Attribute | Type \n-----------+---------------\n user_name | character(32)\n attr | character(32)\n op | character(2)\nbtree\n\n\n(this is for use by gnu-radius).\n\nRADIUS=# select count(*) from attrib;\n count \n--------\n 396117\n(1 row)\n\nRADIUS=# select count(distinct user_name) from attrib;\n count \n-------\n 62713\n(1 row)\n\n\neach username has more or less the same number of attributes.\n\nSELECT * FROM attrib WHERE user_name = 'xyz';\n\nalways results in sequential scan.\n\nAs you can see, there is sufficient number of different user_name values - why \nthe sequential scan?\n\nNeedless to say that turning off sequential scans results is measurably faster \nindex scan.\n\nDaniel\n\n",
"msg_date": "Tue, 12 Feb 2002 14:36:11 +0200",
"msg_from": "Daniel Kalchev <daniel@digsys.bg>",
"msg_from_op": true,
"msg_subject": "again on index usage (7.1.3)"
},
{
"msg_contents": "On Tue, 12 Feb 2002, Daniel Kalchev wrote:\n\n> I had one more frustrating exprience with the 7.1.3 optimizer handling\n> index/scan selection.\n>\n> Here is the schema\n>\n> RADIUS=# \\d attrib\n> Table \"attrib\"\n> Attribute | Type | Modifier\n> -----------+----------------+---------------------\n> user_name | character(32) | not null default ''\n> attr | character(32) | not null default ''\n> value | character(128) |\n> op | character(2) |\n> Index: uattr\n>\n> RADIUS=# \\d uattr\n> Index \"uattr\"\n> Attribute | Type\n> -----------+---------------\n> user_name | character(32)\n> attr | character(32)\n> op | character(2)\n> btree\n>\n>\n> (this is for use by gnu-radius).\n>\n> RADIUS=# select count(*) from attrib;\n> count\n> --------\n> 396117\n> (1 row)\n>\n> RADIUS=# select count(distinct user_name) from attrib;\n> count\n> -------\n> 62713\n> (1 row)\n>\n>\n> each username has more or less the same number of attributes.\n>\n> SELECT * FROM attrib WHERE user_name = 'xyz';\n>\n> always results in sequential scan.\n>\n> As you can see, there is sufficient number of different user_name values - why\n> the sequential scan?\n>\n> Needless to say that turning off sequential scans results is measurably faster\n> index scan.\n\nLet's start with the standard set of things. Have you vacuum analyzed,\nwhat does explain show for the query, is there one value that is more\ncommon than all others?\n\n\n",
"msg_date": "Tue, 12 Feb 2002 08:22:17 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: again on index usage (7.1.3)"
},
{
"msg_contents": ">>>Stephan Szabo said:\n > \n > Let's start with the standard set of things. Have you vacuum analyzed,\n > what does explain show for the query, is there one value that is more\n > common than all others?\n > \n\nMy most recent 'standard' answer these days is \"it worked well before VACUUM \nANALYZE\" :-)\n\nRADIUS=# explain\nselect * from attrib where user_name = 'Paacons'RADIUS-# ;\nNOTICE: QUERY PLAN:\n\nSeq Scan on attrib (cost=0.00..16978.46 rows=17922 width=48)\n\nEXPLAIN\n\nis what explain says by default.\n\nRADIUS=# set enable_seqscan='off';\nSET VARIABLE\n\nRADIUS=# explain\nselect * from attrib where user_name = 'Paacons';\nNOTICE: QUERY PLAN:\n\nIndex Scan using uattr on attrib (cost=0.00..32861.00 rows=17922 width=48)\n\nEXPLAIN\n\nDaniel\n\n",
"msg_date": "Tue, 12 Feb 2002 18:36:55 +0200",
"msg_from": "Daniel Kalchev <daniel@digsys.bg>",
"msg_from_op": true,
"msg_subject": "Re: again on index usage (7.1.3) "
},
{
"msg_contents": "\nOn Tue, 12 Feb 2002, Daniel Kalchev wrote:\n\n> >>>Stephan Szabo said:\n> >\n> > Let's start with the standard set of things. Have you vacuum analyzed,\n> > what does explain show for the query, is there one value that is more\n> > common than all others?\n> >\n>\n> My most recent 'standard' answer these days is \"it worked well before VACUUM\n> ANALYZE\" :-)\n\nDo you have a single value that is much more common than the rest (say\napproximately 170000 of the rows?) It's estimating almost 18000 matching\nrows, but I'm guessing that that's not a reasonable estimate.\n\n>\n> RADIUS=# explain\n> select * from attrib where user_name = 'Paacons'RADIUS-# ;\n> NOTICE: QUERY PLAN:\n>\n> Seq Scan on attrib (cost=0.00..16978.46 rows=17922 width=48)\n>\n> EXPLAIN\n>\n> is what explain says by default.\n>\n> RADIUS=# set enable_seqscan='off';\n> SET VARIABLE\n>\n> RADIUS=# explain\n> select * from attrib where user_name = 'Paacons';\n> NOTICE: QUERY PLAN:\n>\n> Index Scan using uattr on attrib (cost=0.00..32861.00 rows=17922 width=48)\n>\n> EXPLAIN\n>\n> Daniel\n>\n\n",
"msg_date": "Tue, 12 Feb 2002 08:40:02 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: again on index usage (7.1.3) "
},
{
"msg_contents": ">>>Stephan Szabo said:\n > Do you have a single value that is much more common than the rest (say\n > approximately 170000 of the rows?) It's estimating almost 18000 matching\n > rows, but I'm guessing that that's not a reasonable estimate.\n\nNot likely. There are no more than 30-40 attributes per user (this is another \nstory, but I discuss there the PostgreSQL aspect, not RADIUS :). Entries with \nmost rows have up to 35 rows.\n\nHowever, there is indeed an user_name entry with 179225 values! Somehow on \nthese rows user_name is ''.\n\nTime to change my standard answer...\n\nDaniel\n\n",
"msg_date": "Tue, 12 Feb 2002 18:50:43 +0200",
"msg_from": "Daniel Kalchev <daniel@digsys.bg>",
"msg_from_op": true,
"msg_subject": "Re: again on index usage (7.1.3) "
},
{
"msg_contents": "Daniel Kalchev <daniel@digsys.bg> writes:\n> However, there is indeed an user_name entry with 179225 values! Somehow on \n> these rows user_name is ''.\n\n> Time to change my standard answer...\n\nNo, time to update to 7.2. 7.2 doesn't get fooled by single values that\nare vastly more common than anything else.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Feb 2002 12:33:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: again on index usage (7.1.3) "
}
] |
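For completeness: after the upgrade, 7.2's re-enabled partial indexes offer another way to keep the degenerate '' rows from dominating. A hedged sketch:

CREATE INDEX uattr_named ON attrib (user_name, attr, op)
    WHERE user_name <> '';

-- queries that repeat the index predicate can use the smaller index:
SELECT * FROM attrib WHERE user_name = 'xyz' AND user_name <> '';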
[
{
"msg_contents": "Recently I found an Oracle extension, START WITH .. CONNECT BY, which\ngreatly eases parsing of n-way tree structures stored in table format.\n\nI'd prefer implementing this myself, but alas, I feel I need something\nwhich is a bit easier, to get accustomed to the code base.\n\nthe following is copyrighted by Oracle Corporation. So, sue me for posting\nit. :)\n\nSTART WITH condition \n\nSpecify a condition that identifies the row(s) to be used as the\nroot(s) of a hierarchical query. Oracle uses as root(s) all rows that\nsatisfy this condition. If you omit this clause, Oracle uses all rows in\nthe table as root rows. The START WITH condition can contain a subquery. \n\nCONNECT BY condition \n\nSpecify a condition that identifies the relationship between parent rows\nand child rows of the hierarchy. condition can be any condition as\ndescribed in \"Conditions\". However, some part of the condition must use\nthe PRIOR operator to refer to the parent row. The part of the condition\ncontaining the PRIOR operator must have one of the following forms: \n\nPRIOR expr comparison_operator expr \n\nexpr comparison_operator PRIOR expr \n\nRestriction: The CONNECT BY condition cannot contain a subquery. \n\n\nOla\n\n-- \nOla Sundell\nola@miranda.org - olas@wiw.org - ola.sundell@personalchemistry.com\nhttp://miranda.org/~ola\n\n",
"msg_date": "Tue, 12 Feb 2002 09:53:49 -0500 (EST)",
"msg_from": "Ola Sundell <ola@miranda.org>",
"msg_from_op": true,
"msg_subject": "feature request START WITH ... CONNECT BY"
},
{
"msg_contents": "> Recently I found an Oracle extension, START WITH .. CONNECT BY, which\n> greatly eases parsing of n-way tree structures stored in table format.\n> \n> I'd prefer implementing this myself, but alas, I feel I need something\n> which is a bit easier, to get accustomed to the code base.\n> \n\nHi, \n\nI am currently porting business application from Oracle to PG, for one of our\ncustomers.\n\nI work hardly on migrating CONNECT BY queries, using the really good libraries\nfrom OpenACS project (version 4).\n\nIt a kind of 1 day/man to enable tree structure in PG for one table accessed\nwith a CONNECT BY query.\n\nSo I'll be really glad to you if you enable CONNECT BY statments in PG!!\n\nSo, what you want to know about connect by statment?\n\nread:\nhttp://www.arsdigita.com/books/sql/trees.html\n\nand:\nhttp://openacs.org/bboard/q-and-a-fetch-msg.tcl?msg_id=0000j6&topic_id=12&topic=OpenACS%204%2e0%20Design\n\ndownload OpenACS source code for version 4.x and :\n\nWe're using this extensively in openacs4. If you download the latest code, you\ncan find the tree encoding table and some tree-utility routines in\n openacs-4/packages/acs-kernel/sql/postgresql/postgresql.sql. For\ncorresponding trigger routines look at\n openacs-4/packages/acs-kernel/sql/postgresql/acs-objects-create.sql -\nspecifically look at the acs_objects table definition and its associated\n triggers. \n\n With regards to inheritance, we looked seriously at prior to starting the\nopeancs4 porting activities and we opted not to use it because it was\n deficient in serveral areas. \n\n -- Dan Wickstrom, September 7, 2001 \n\n\nHope this helps\n\n-- \nJean-Paul ARGUDO \t\tIDEALX S.A.S\nConsultant bases de donn�es\t\t\t15-17, av. de S�gur\nhttp://IDEALX.com/ \t\t\t\tF-75007 PARIS\n",
"msg_date": "Tue, 12 Feb 2002 18:21:14 +0100",
"msg_from": "Jean-Paul ARGUDO <jean-paul.argudo@idealx.com>",
"msg_from_op": false,
"msg_subject": "Re: feature request START WITH ... CONNECT BY"
},
{
"msg_contents": "On Tue, 2002-02-12 at 19:53, Ola Sundell wrote:\n> Recently I found an Oracle extension, START WITH .. CONNECT BY, which\n> greatly eases parsing of n-way tree structures stored in table format.\n> \n\nIt's in TODO as WITH RECURSIVE, which is the SQL3 way of doing it, but I\ndon't know if anyone is seriously working on it. \n\nI have done a little investigation, and I think that this could be\ndoable without too much changes in planner/executor by doing repeated\nmerge or hash joins. \n\nIf we want automatic checks for infinite recursion there are also two\nways of doing it:\n 1) use a has of already selected rows or\n 2) pick new rows from a realize'd table and mark them as removed\n there.\n\n-------------\nHannu\n\n",
"msg_date": "13 Feb 2002 09:21:52 +0500",
"msg_from": "Hannu Krosing <hannu@krosing.net>",
"msg_from_op": false,
"msg_subject": "Re: feature request START WITH ... CONNECT BY"
},
{
"msg_contents": "I jump again on this mail to ask for SQL help. Since I've not found a list on\npgsql-users or something like that, I apologize to post it to hackers..\n\nSo.. I'm porting from Oracle to PG. I have many CONNECT BY queries, luckyly,\nonly 2 tables are hierarchical. I've adopted OpenACS solution, since I am sure\nit is the best way to do with that problem.\n\nBut, I found a problem wich I have no brain left today to resolve.\nI port following Oracle CONNECT BY statment:\n\n--ORACLE QUERY\n--\n--select\n-- sum(t01_caf) SCAF,\n-- sum(t01_itm_cnt) SART\n--from T01_&DateData\n--start with T01_upr_lvl_typ = &TypNiv and T01_upr_lvl_nbr = &Niv\n--connect by prior T01_lvl_typ = T01_upr_lvl_typ and prior T01_lvl_nbr =\nT01_upr_lvl_nbr\n--\n-- The execution in the Oracle DB returns: \n--\n--\n-- SCAF SART\n------------ ----------\n--40164802,4 1404296\n--\n-- with variables &TypNiv = 0 et &Niv = 0\n--\n-- PG port:\n--\n\\set TypNiv 0\n\\set Niv 0\n--\nselect\n sum(t01_caf) as SCAF,\n sum(t01_itm_cnt) as SCAF\nfrom\n t01_20011231\nwhere\n strpos(t01_tree_sortkey,(select t01_tree_sortkey\n from t01_20011231\n where t01_lvl_typ = :TypNiv\n and t01_lvl_nbr = :Niv))=1\ngroup by\n ???;\n\nThe problem is that I am no longuer able to find the RIGHT group by statment :-/\n\nCan someone help me ? I'm sure it is surely kind simplistic? dunno..\n\nAh! The purpose of the query is to sum values on all nodes children of one node.\nInthis crappy customer database, a node is identifyied uniquely with couple\n(t01_lvl_typ,t01_lvl_nbr), ((couple t01_upr_lvl_typ,t01_upr_lvl_nbr identifies\nuniquely the Father of the node)) because there can be nodes at different level\n(lvl_typ) with the same identifyier (lvl_nbr). I dont want to user concat || to\ncreate a pseudo-unique-identifyer, because I think there may be perfs problems\n...\n\nThanks. Best regards & wishes.\n\n\n-- \nJean-Paul ARGUDO \t\tIDEALX S.A.S\nConsultant bases de donn�es\t\t\t15-17, av. de S�gur\nhttp://IDEALX.com/ \t\t\t\tF-75007 PARIS\n",
"msg_date": "Wed, 13 Feb 2002 13:04:29 +0100",
"msg_from": "Jean-Paul ARGUDO <jean-paul.argudo@idealx.com>",
"msg_from_op": false,
"msg_subject": "Re: feature request START WITH ... CONNECT BY"
},
{
"msg_contents": "At last, here's the solution: NO NEED to group by :-/ ???\n\nNow I ask for a WHY to hackers :)\n\nselect\n sum(t01_caf) as SCAF,\n sum(t01_itm_cnt) as SCAF\nfrom\n t01_20011231\nwhere\n strpos(t01_tree_sortkey,(select t01_tree_sortkey\n from t01_20011231\n where t01_upr_lvl_typ = :TypNiv\n and t01_upr_lvl_nbr = :Niv))=1;\n\n--\n-- Stangely, I don't really understand why, there is no need of group by clause \n-- there!\n--\n-- Here's the result:\n-- scaf | scaf \n---------------+---------\n-- 40164802.36 | 1404296\n--(1 row)\n\nThanks\n\n-- \nJean-Paul ARGUDO \t\tIDEALX S.A.S\nConsultant bases de donn�es\t\t\t15-17, av. de S�gur\nhttp://IDEALX.com/ \t\t\t\tF-75007 PARIS\n",
"msg_date": "Wed, 13 Feb 2002 16:49:23 +0100",
"msg_from": "Jean-Paul ARGUDO <jean-paul.argudo@idealx.com>",
"msg_from_op": false,
"msg_subject": "Re: feature request START WITH ... CONNECT BY"
}
] |
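The missing "why" for the last message: an aggregate query with no GROUP BY always collapses its whole result set into a single row, so one total per subtree needs no grouping at all. A minimal sketch of the sortkey technique under discussion follows; the table and column names here are hypothetical, not the actual OpenACS ones:

    CREATE TABLE node (id integer, tree_sortkey text, amount numeric);
    -- Each child's sortkey begins with its parent's sortkey, so a whole
    -- subtree is exactly the rows whose sortkey has the root's as a prefix.
    SELECT sum(amount)
    FROM node
    WHERE strpos(tree_sortkey,
                 (SELECT tree_sortkey FROM node WHERE id = 1)) = 1;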
[
{
"msg_contents": "\n> > Next thing to watch out for is, that RTLD_NOW will probably not load \n> > a shared lib, that was not linked with a \"no entry\" flag.\n> \n> Say again?\n\nOn e.g. AIX (and Sun?) the Makefile.shlib does not use a \"-bnoentry\" flag\nwhen linking shared libs. While the pg backend successfully loads\nthose libs, tclsh e.g. fails. Without reading the tcl code, I thought it\nmight be RTLD_NOW.\n\nI just tested 7.2 on AIX with RTLD_NOW though, and it works fine, \nso the issue must be something else.\n\nAndreas\n",
"msg_date": "Tue, 12 Feb 2002 17:28:06 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: RTLD_LAZY considered harmful (Re: pltlc and pltlcu "
}
] |
[
{
"msg_contents": "Hi All,\n\n(1)I am Amit Kumar Khare, I am doing MCS from UIUC USA offcampus from India.\n\n(2) We have been asked to enhance postgreSQL in one of our assignments. So I\nhave chosen to pick \"Add free-behind capability for large sequential scans\"\nfrom TODO list. Many thanks to Mr. Bruce Momjian who helped me out and\nsuggested to make a patch for this problem.\n\n(3)As explained to me by Mr. Bruce, the problem description is that if say\ncache size is 1 mb and a sequential scan is done through a 2mb file over and\nover again the cache becomes useless.Because by the time the second read of\nthe table happens the first 1mb has been forced out of the cache already.Thus\nthe idea is not to cache very large sequential scans, but to cache index scans\nsmall sequential scans.\n\n(4)what I think the problem arises because of default LRU page replacement\npolicy. So I think we have to make use of MRU or LRU-K page replacement\npolicies.\n\n(5)But I am not sure and I wish more input into the problem description from\nyou all. I have started reading the buffer manager code and I found that\nfreelist.c may be needed to be modified and may be some other too since we\nhave to identify the large sequential scans.\n\nPlease help me out\n\nRegards\nAmit Kumar Khare\n\n\n",
"msg_date": "Tue, 12 Feb 2002 11:41:11 -0600",
"msg_from": "khare <khare@students.uiuc.edu>",
"msg_from_op": true,
"msg_subject": "Add free-behind capability for large sequential scans"
}
] |
[
{
"msg_contents": "\n \n The current SET SESSION AUTHORIZATION is very good feature. What\n allows it to standard users too. For example by:\n\n SET SESSION AUTHORIZATION 'username' WITH PASSWORD 'pwd';\n\n Why? Because change identity is less expensive than reconnection.\n\n Karel\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Tue, 12 Feb 2002 18:48:09 +0100",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "SET SESSION AUTHORIZATION"
}
] |
[
{
"msg_contents": "Hi All,\n\nMy name is Sebastien FLAESCH and I am in charge of the Open\nDatabase Interface at Four J's Development Tools (www.4js.com).\n\nOur product is a Informix 4GL compatible compiler and with ODI\ndrivers you can connect to Informix, Oracle, DB2, SQL Server\nand PostgreSQL databases.\n\nI had a discussion with Bruce MOMJIAN about preparable SQL and\nhe suggested me to send such request to this mailing list.\n\nI am quite sure you guys are aware of this but I want to make\nsure that you plan to implement such feature in a near future.\n\nHere are the details:\n\nOur PostgreSQL driver is implemented via libpq API, and we would\nneed additional API functions to prepare statements, bind params\nand then execute, as you can do with other APIs like ODBC:\n\n SQLPrepare()\n SQLBindParameter()\n SQLExecute()\n\nFor now, we have to parse statements to replace ? place holders\nby actual values for each execution. This is very slow as you\ncan imagine, when compared to Informix/Oracle/DB2. This is not\nvisible in interractive programs, but batch programs can be up\nto 20 times slower with PostgreSQL...\n\nPostgreSQL can be strategically VERY important for 4gl users\nthat want to detach from IBM/Informix.\n\nThis performance problem is today the main reason why people\nwould not use PostgreSQL.\n\nI can imagine that such feature is not easy to implement, as it\nwould need major changes in the engine internals, but I think\nit is important if you want to compete with big database vendors\nlike Oracle or IBM, and I think you can.\n\nThank you for reading this.\n\nRegards,\nSebastien FLAESCH (sf@4js.com)\nOpen Database Interface Project Manager\nFour J's Development Tools (www.4js.com)\n\n\n",
"msg_date": "Tue, 12 Feb 2002 19:11:52 +0100",
"msg_from": "\"Sebastien FLAESCH\" <sf@4js.com>",
"msg_from_op": true,
"msg_subject": "Preparable SQl statements"
}
] |
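For readers unfamiliar with the feature being requested, here is a sketch of what statement-level preparation looks like at the SQL level. This exact syntax is not available in PostgreSQL 7.2, and the statement, table, and column names are illustrative only:

    PREPARE find_order (integer) AS
        SELECT * FROM orders WHERE order_id = $1;
    EXECUTE find_order(42);

The point is that the statement is parsed and planned once, at PREPARE time, and afterwards only re-executed with new parameter values, which is what makes batch programs with repeated statements fast.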
[
{
"msg_contents": "Hi,\n\nI am having problems with permissions in postgres. I am using version 7.1.3 of\nPostgres running on RedHat 7.2. \n\nI create the table \"accounts\" and revoke all permissions for the PUBLIC user: \naccounts | {\"=\",\"dcl=arwR\"}\n\nHowever, any user can make a select or update in the table \"accounts\".\n\nCan anybody help me?!\n\nThanks a lot.\n",
"msg_date": "Tue, 12 Feb 2002 19:26:07 +0100",
"msg_from": "noy <noyda@isoco.com>",
"msg_from_op": true,
"msg_subject": "Permissions problem"
},
{
"msg_contents": "noy <noyda@isoco.com> writes:\n> However, any user can make a select or update in the table \"accounts\".\n\nSurely not.\n\ntest71=# select version();\n version\n------------------------------------------------------------------\n PostgreSQL 7.1.3 on hppa2.0-hp-hpux10.20, compiled by GCC 2.95.3\n(1 row)\n\ntest71=# create user foo;\nCREATE USER\ntest71=# create user bar;\nCREATE USER\ntest71=# \\c - foo\nYou are now connected as new user foo.\ntest71=> create table accounts (f1 int);\nCREATE\ntest71=> insert into accounts values(1);\nINSERT 1587112 1\ntest71=> revoke all on accounts from public;\nCHANGE\ntest71=> \\z accounts\nAccess privileges for database \"test71\"\n Table | Access privileges\n----------+-------------------\n accounts | {\"=\",\"foo=arwR\"}\n(1 row)\n\ntest71=> select * from accounts;\n f1\n----\n 1\n(1 row)\n\ntest71=> \\c - bar\nYou are now connected as new user bar.\ntest71=> select * from accounts;\nERROR: accounts: Permission denied.\ntest71=> update accounts set f1 = 2;\nERROR: accounts: Permission denied.\ntest71=> \n\n\nPerhaps your \"any user\" is actually a superuser?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Feb 2002 14:06:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Permissions problem "
},
{
"msg_contents": "Hi,\n\nThanks for your help... I had problem with the user's permissions because I\ncreated the users using the shell scripts:\n\ncreateuser -a login -P\n\nand users created in that way have all the privileges. The man page no makes\nreferences to this. -a, --adduser Allows the new user to create other users.\n\n\nThanks.\n\n\n\nTom Lane wrote:\n> \n> noy <noyda@isoco.com> writes:\n> > However, any user can make a select or update in the table \"accounts\".\n> \n> Surely not.\n> \n> test71=# select version();\n> version\n> ------------------------------------------------------------------\n> PostgreSQL 7.1.3 on hppa2.0-hp-hpux10.20, compiled by GCC 2.95.3\n> (1 row)\n> \n> test71=# create user foo;\n> CREATE USER\n> test71=# create user bar;\n> CREATE USER\n> test71=# \\c - foo\n> You are now connected as new user foo.\n> test71=> create table accounts (f1 int);\n> CREATE\n> test71=> insert into accounts values(1);\n> INSERT 1587112 1\n> test71=> revoke all on accounts from public;\n> CHANGE\n> test71=> \\z accounts\n> Access privileges for database \"test71\"\n> Table | Access privileges\n> ----------+-------------------\n> accounts | {\"=\",\"foo=arwR\"}\n> (1 row)\n> \n> test71=> select * from accounts;\n> f1\n> ----\n> 1\n> (1 row)\n> \n> test71=> \\c - bar\n> You are now connected as new user bar.\n> test71=> select * from accounts;\n> ERROR: accounts: Permission denied.\n> test71=> update accounts set f1 = 2;\n> ERROR: accounts: Permission denied.\n> test71=>\n> \n> Perhaps your \"any user\" is actually a superuser?\n> \n> regards, tom lane\n",
"msg_date": "Wed, 13 Feb 2002 11:13:27 +0100",
"msg_from": "noy <noyda@isoco.com>",
"msg_from_op": true,
"msg_subject": "Re: Permissions problem"
},
{
"msg_contents": "noy <noyda@isoco.com> writes:\n> Thanks for your help... I had problem with the user's permissions because I\n> created the users using the shell scripts:\n> createuser -a login -P\n> and users created in that way have all the privileges. The man page no makes\n> references to this. -a, --adduser Allows the new user to create other users.\n\nGood point. It's explained on the man page for the underlying CREATE\nUSER command, but the page for the createuser script needs to say it\ntoo. I've committed a fix for 7.2.1.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Feb 2002 14:33:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Permissions problem "
}
] |
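For anyone who has already created users with createuser -a, the extra privilege can be taken back without dropping and recreating them. A small sketch, assuming a superuser session and the user name from the thread:

    ALTER USER login WITH NOCREATEUSER;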
[
{
"msg_contents": "Hi All,\n\n(1)I am Amit Kumar Khare, I am doing MCS from UIUC USA\noffcampus from India.\n\n(2) We have been asked to enhance postgreSQL in one of\nour assignments. So I\nhave chosen to pick \"Add free-behind capability for\nlarge sequential scans\"\nfrom TODO list. Many thanks to Mr. Bruce Momjian who\nhelped me out and\nsuggested to make a patch for this problem.\n\n(3)As explained to me by Mr. Bruce, the problem\ndescription is that if say\ncache size is 1 mb and a sequential scan is done\nthrough a 2mb file over and\nover again the cache becomes useless.Because by the\ntime the second read of\nthe table happens the first 1mb has been forced out of\nthe cache already.Thus\nthe idea is not to cache very large sequential scans,\nbut to cache index scans\nsmall sequential scans.\n\n(4)what I think the problem arises because of default\nLRU page replacement\npolicy. So I think we have to make use of MRU or LRU-K\npage replacement\npolicies.\n\n(5)But I am not sure and I wish more input into the\nproblem description from\nyou all. I have started reading the buffer manager\ncode and I found that\nfreelist.c may be needed to be modified and may be\nsome other too since we\nhave to identify the large sequential scans.\n\nPlease help me out\n\nRegards\nAmit Kumar Khare\n\n\n__________________________________________________\nDo You Yahoo!?\nSend FREE Valentine eCards with Yahoo! Greetings!\nhttp://greetings.yahoo.com\n",
"msg_date": "Tue, 12 Feb 2002 11:58:14 -0800 (PST)",
"msg_from": "Amit Kumar Khare <skamit2000@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Add free-behind capability for large sequential scans"
},
{
"msg_contents": "Amit Kumar Khare <skamit2000@yahoo.com> writes:\n> (4)what I think the problem arises because of default LRU page\n> replacement policy. So I think we have to make use of MRU or LRU-K\n> page replacement policies.\n\n> (5)But I am not sure and I wish more input into the problem\n> description from you all. I have started reading the buffer manager\n> code and I found that freelist.c may be needed to be modified and may\n> be some other too since we have to identify the large sequential\n> scans.\n\nI do not think it's a good idea for the buffer manager to directly try\nto recognize sequential scans; any such attempt will fall down on the\nfundamental problem that there may be more than one backend accessing\nthe same table concurrently. Plus it would introduce undesirable\ncoupling between the buffer manager and higher-level code. I like the\nidea of using LRU-K or other advanced page replacement policies,\ninstead of plain LRU.\n\nI did some experimentation with LRU-2 awhile back and didn't see any\nmeasurable performance improvement in the small number of test cases\nI tried. But I was not looking at the issue of cache-flushing caused\nby large seqscans (the test cases I tried probably didn't do any\nseqscans at all). It's quite possible that that's a sufficient reason\nto adopt LRU-2 anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Feb 2002 17:07:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add free-behind capability for large sequential scans "
},
{
"msg_contents": "\n--- Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Kumar Khare <skamit2000@yahoo.com> writes:\n> > (4)what I think the problem arises because of\n> default LRU page\n> > replacement policy. So I think we have to make use\n> of MRU or LRU-K\n> > page replacement policies.\n> \n> > (5)But I am not sure and I wish more input into\n> the problem\n> > description from you all. I have started reading\n> the buffer manager\n> > code and I found that freelist.c may be needed to\n> be modified and may\n> > be some other too since we have to identify the\n> large sequential\n> > scans.\n> \n> I do not think it's a good idea for the buffer\n> manager to directly try\n> to recognize sequential scans; any such attempt will\n> fall down on the\n> fundamental problem that there may be more than one\n> backend accessing\n> the same table concurrently. Plus it would\n> introduce undesirable\n> coupling between the buffer manager and higher-level\n> code. I like the\n> idea of using LRU-K or other advanced page\n> replacement policies,\n> instead of plain LRU.\n\nSir, What I have in my mind is some thing like what\nHong-Tai Chou and David J. DeWitt proposes in there\npaper titled \"An Evaluation of Buffer Manaagement\nStrategies for Relational Database Systems\", when talk\nabout there \"DBMIN\" algorithm.\n\nThe problem is same here(Please correct me if I am\nwrong) what they talk of. They have recognized like\nProfessor Stonebraker certain Access patterns (like\nclustered sequential, looping sequential etc.)in\nDatabase Systems and recomend a \"Composite page\nreplacement policy\". But how the buffer manager will\nknow what policy has to be applied? They say \"When a\nfile is opened, its associated locality set size and\nREPLACEMENT POLICY are given to the buffer manager\".\n\nI am thinking of implementing similar strategy. Since\nPlanner/Optimizer hand over the execution plan to the\nexecutor it can also pass the information regarding\npage replacement. Then executor can pass this\ninformation through \nheapam->relcache->smgr -> bufmgr -> finally to\nfreelist.c (I may be wrong in the sequence, this\nconclusion is from my first study of code. I know I\nhave to go through it over and over again to get hang\nof it)\n\nHence as you said we can avoid the undesirable\ncoupling between the buffer manager and higher-level\ncode.\n\nSir, is this scheme feasible? if not then can you\nguide me ?\n\nAmit khare\n> \n> I did some experimentation with LRU-2 awhile back\n> and didn't see any\n> measurable performance improvement in the small\n> number of test cases\n> I tried. But I was not looking at the issue of\n> cache-flushing caused\n> by large seqscans (the test cases I tried probably\n> didn't do any\n> seqscans at all). It's quite possible that that's a\n> sufficient reason\n> to adopt LRU-2 anyway.\n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the\n> unregister command\n> (send \"unregister YourEmailAddressHere\" to\nmajordomo@postgresql.org)\n\n\n__________________________________________________\nDo You Yahoo!?\nSend FREE Valentine eCards with Yahoo! Greetings!\nhttp://greetings.yahoo.com\n",
"msg_date": "Wed, 13 Feb 2002 00:11:48 -0800 (PST)",
"msg_from": "Amit Kumar Khare <skamit2000@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Add free-behind capability for large sequential scans "
},
{
"msg_contents": "On Wed, 2002-02-13 at 03:11, Amit Kumar Khare wrote:\n\n[clip]\n\n> The problem is same here(Please correct me if I am\n> wrong) what they talk of. They have recognized like\n> Professor Stonebraker certain Access patterns (like\n> clustered sequential, looping sequential etc.)in\n> Database Systems and recomend a \"Composite page\n> replacement policy\". But how the buffer manager will\n> know what policy has to be applied? They say \"When a\n> file is opened, its associated locality set size and\n> REPLACEMENT POLICY are given to the buffer manager\".\n\nThis is indeed one possibility. However, the problem, as pointed out in\n[1] is that in multi-user situations, getting sane results from query\nexecution analysis is hard. The real problem is -- how do you handle the\ninteraction of multiple simultaneous queries with the buffer pool? \n\nThis problem led for a search for a new approach, which in turn led to\nsimpler algorithms, like LRU-K [1] and 2Q [2]. I'd much rather we pursue\nalgorithms of this type.\n\nNeil\n\n[1] E.H. O'Neil, P.E. O'Neil, and G. Weikum. The LRU-K page replacement\nalgorithm for database disk buffering. _In Proceeedings of the 1993 ACM\nSigmod International Conference on Management of Data_, pages 297-306,\n1993.\n\n[2] Theodore Johnson and Dennis Shasha. 2Q: A Low Overhead High\nPerformace Buffer Amanagement Replacement algorithm. In _Proceedings of\nthe 20th VLDB Conference_, pages 439-450, 1994.\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: npadgett@redhat.com\n2323 Yonge Street, Suite #300, \nToronto, ON M4P 2C9\n\n",
"msg_date": "13 Feb 2002 12:44:43 -0500",
"msg_from": "Neil Padgett <npadgett@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Add free-behind capability for large sequential scans"
},
{
"msg_contents": "On Wed, 2002-02-13 at 12:44, Neil Padgett wrote:\n> \n> [1] E.H. O'Neil, P.E. O'Neil, and G. Weikum. The LRU-K page replacement\n> algorithm for database disk buffering. _In Proceeedings of the 1993 ACM\n> Sigmod International Conference on Management of Data_, pages 297-306,\n> 1993.\n> \n> [2] Theodore Johnson and Dennis Shasha. 2Q: A Low Overhead High\n> Performace Buffer Amanagement Replacement algorithm. In _Proceedings of\n> the 20th VLDB Conference_, pages 439-450, 1994.\n\nI've received some mail expressing interest in reading these papers. So,\nI've grabbed some electronic copies from CiteSeer, and I've made them\navailable at:\n\nhttp://people.redhat.com/npadgett/buffering-papers/\n\nNeil\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: npadgett@redhat.com\n2323 Yonge Street, Suite #300, \nToronto, ON M4P 2C9\n\n",
"msg_date": "13 Feb 2002 13:30:57 -0500",
"msg_from": "Neil Padgett <npadgett@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Add free-behind capability for large sequential scans"
}
] |
[
{
"msg_contents": "I'm getting itchy to get to work on 7.3 development, and with so few\nbug reports coming in for 7.2, it seems that now is not too soon to\nsplit off the 7.2 maintenance branch in CVS. Comments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Feb 2002 19:38:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Ready to branch 7.2/7.3 ?"
},
{
"msg_contents": "\nDefinitely agree on the itchiness, especially with the broken planner in\nv7.2? :)\n\nOn Tue, 12 Feb 2002, Tom Lane wrote:\n\n> I'm getting itchy to get to work on 7.3 development, and with so few\n> bug reports coming in for 7.2, it seems that now is not too soon to\n> split off the 7.2 maintenance branch in CVS. Comments?\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n",
"msg_date": "Tue, 12 Feb 2002 23:18:08 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Ready to branch 7.2/7.3 ?"
},
{
"msg_contents": "I vote for a fork - 7.2 seems to be very stable (as it should be after a\nyear!)\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Marc G. Fournier\n> Sent: Wednesday, 13 February 2002 11:18 AM\n> To: Tom Lane\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Ready to branch 7.2/7.3 ?\n>\n>\n>\n> Definitely agree on the itchiness, especially with the broken planner in\n> v7.2? :)\n>\n> On Tue, 12 Feb 2002, Tom Lane wrote:\n>\n> > I'm getting itchy to get to work on 7.3 development, and with so few\n> > bug reports coming in for 7.2, it seems that now is not too soon to\n> > split off the 7.2 maintenance branch in CVS. Comments?\n> >\n> > \t\t\tregards, tom lane\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Wed, 13 Feb 2002 12:45:44 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Ready to branch 7.2/7.3 ?"
},
{
"msg_contents": "Tom Lane writes:\n\n> I'm getting itchy to get to work on 7.3 development, and with so few\n> bug reports coming in for 7.2, it seems that now is not too soon to\n> split off the 7.2 maintenance branch in CVS. Comments?\n\nThe ecpg thing should probably be fixed first. Other than that it would\nbe nice to have a new branch by the weekend.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 14 Feb 2002 21:51:33 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Ready to branch 7.2/7.3 ?"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> The ecpg thing should probably be fixed first. Other than that it would\n> be nice to have a new branch by the weekend.\n\nI musta missed something. What ecpg thing?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Feb 2002 22:50:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Ready to branch 7.2/7.3 ? "
},
{
"msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > The ecpg thing should probably be fixed first. Other than that it would\n> > be nice to have a new branch by the weekend.\n>\n> I musta missed something. What ecpg thing?\n\nThe sqlca warning\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 14 Feb 2002 23:39:28 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Ready to branch 7.2/7.3 ? "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> I musta missed something. What ecpg thing?\n\n> The sqlca warning\n\nUm, we can probably manage to double-patch a removal of two lines ;-)\n\nHowever, since there were no objections, what are you waiting for?\nCommit it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Feb 2002 23:39:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Ready to branch 7.2/7.3 ? "
},
{
"msg_contents": "\nOkay, if the ecpg is the only issue, does everyone feel comfortable with a\nbranch going in this evening? I'll do a v7.2.1 tar-ball up Sunday night\nbased on the branch, with an announce going out on Monday?\n\nThat way, those anxious to dive into v7.3 have the whole weekend :)\n\n\n\nOn Thu, 14 Feb 2002, Tom Lane wrote:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Tom Lane writes:\n> >> I musta missed something. What ecpg thing?\n>\n> > The sqlca warning\n>\n> Um, we can probably manage to double-patch a removal of two lines ;-)\n>\n> However, since there were no objections, what are you waiting for?\n> Commit it.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Fri, 15 Feb 2002 11:17:53 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Ready to branch 7.2/7.3 ? "
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> Okay, if the ecpg is the only issue, does everyone feel comfortable with a\n> branch going in this evening? I'll do a v7.2.1 tar-ball up Sunday night\n> based on the branch, with an announce going out on Monday?\n\nI don't think it's time for 7.2.1 quite yet; we should probably wait\nanother week or two to see what comes in. I just want to branch now...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Feb 2002 11:02:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Ready to branch 7.2/7.3 ? "
},
{
"msg_contents": "On Fri, 15 Feb 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > Okay, if the ecpg is the only issue, does everyone feel comfortable with a\n> > branch going in this evening? I'll do a v7.2.1 tar-ball up Sunday night\n> > based on the branch, with an announce going out on Monday?\n>\n> I don't think it's time for 7.2.1 quite yet; we should probably wait\n> another week or two to see what comes in. I just want to branch now...\n\nAhhhh, okay ... I was going by the old 'the way we've always done it'\nmentality, sorry :)\n\nBut, this sounds cool ... most happy to do that ... I'll do it tonight, if\nnobody objects?\n\n",
"msg_date": "Fri, 15 Feb 2002 13:34:27 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Ready to branch 7.2/7.3 ? "
}
] |
[
{
"msg_contents": "Hi all,\n\nThis is the email I sent earlier - I think it was lost in the 7.2 release\nhammering the servers got.\n\nI'm hoping to implement SET NULL / SET NOT NULL for 7.3\n\nI've been searching the SQL99 docs but I still can't figure out the syntax\nfor it. (BTW, can anyone give me SQL92?)\n\nSo I guess there are two possibilties - which is correct?:\n\nALTER TABLE blah ADD CONSTRAINT \"asfd\" NOT NULL (field);\nALTER TABLE blah DROP CONSRAINT \"asdf\";\n\nor\n\nALTER TABLE\tblah ALTER COLUMN field SET [ NOT NULL | NULL ];\n\nMy question is - does the parser already support the syntax? ie. Is there\nan empty function somewhere that says 'not implemented', or do I have to\nactually add the flex code for it?\n\nIf it is there - where is it?\n\nChris\n\n",
"msg_date": "Wed, 13 Feb 2002 10:46:46 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "SET NULL / SET NOT NULL"
},
{
"msg_contents": "Christopher Kings-Lynne writes:\n\n> I'm hoping to implement SET NULL / SET NOT NULL for 7.3\n>\n> I've been searching the SQL99 docs but I still can't figure out the syntax\n> for it.\n\nThere isn't really a syntax for it. SQL only allows you to add table\nconstraints, not column constraints. A NOT NULL constraint is a shorthand\nnotation for a CHECK constraint, so to add a NOT NULL constraint you'd\nhave to recognize CHECK constraints of the form CHECK (col IS NOT NULL)\nand handle them specially. To drop NOT NULL constraints, you'd use the\nregular ALTER TABLE blah DROP CONSTRAINT foo; where foo is the name of the\nNOT NULL constraint. The drawback is that NOT NULL constraints currently\ndon't have a name stored.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 19 Feb 2002 17:50:37 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL"
}
] |
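Peter's point, spelled out as SQL; a sketch only, reusing the constraint and column names from Christopher's message:

    -- a NOT NULL constraint is shorthand for a CHECK table constraint:
    ALTER TABLE blah ADD CONSTRAINT "asdf" CHECK (field IS NOT NULL);
    -- dropping it then works by name:
    ALTER TABLE blah DROP CONSTRAINT "asdf";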
[
{
"msg_contents": "Does the JDBC driver support connection pooling?\n\nIf so, where is the documentation for this, including examples.\n\nThank-you for your time.\n\nSean Alphonse.\n\n\n\n\n\n\n\n \n\nDoes the JDBC driver support connection pooling?\n \nIf so, where is the documentation for this, including examples.\n \nThank-you for your time.\n \nSean Alphonse.",
"msg_date": "Tue, 12 Feb 2002 23:03:03 -0600",
"msg_from": "Sean Alphonse <salphonse@shaw.ca>",
"msg_from_op": true,
"msg_subject": "Connection Pooling"
},
{
"msg_contents": "Connection pooling isn't a driver function.\n\nYou may:\n\n* get it for free (j2ee container, commercial servlet/application server\netc)\n\n* write it yourself (its not too hard)\n\n* Download one from the 'net. PoolMan is a good example...Google is your\nbest bet for locating one.\n\nhttp://www.google.com/search?sourceid=navclient&q=jdbc+connection+pool\n\nCheers,\n\nMark Pritchard\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Sean Alphonse\nSent: Wednesday, 13 February 2002 4:03 PM\nTo: pgsql-hackers\nSubject: [HACKERS] Connection Pooling\n\n\n\nDoes the JDBC driver support connection pooling?\n\nIf so, where is the documentation for this, including examples.\n\nThank-you for your time.\n\nSean Alphonse.\n\n",
"msg_date": "Wed, 13 Feb 2002 16:24:56 +1100",
"msg_from": "\"Mark Pritchard\" <mark.pritchard@tangent.net.au>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling"
},
{
"msg_contents": "Sean Alphonse <salphonse@shaw.ca> writes:\n\n> \n> Does the JDBC driver support connection pooling?\n\nNo. But there are several packages available that will pool\nconnections for you above the driver level. You might check over on\nthe pgsql-jdbc list and see what people are using there.\n\nOr Google for 'jdbc connection pool' and you'll probably find some. \n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n",
"msg_date": "13 Feb 2002 09:42:57 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: Connection Pooling"
}
] |
[
{
"msg_contents": "Hello, all,\n\nI'm having a strange problem with v7.2 relating to statistics collection\nand plan calculation. I'm not sure if this relates to the problems Marc\nwas seeing, but here goes.\n\nI have a table with 1,066,673 rows. The column I'm interested in has\nthis distribution of values:\n\n tdnr_ct | ct \n---------+--------\n 16 | 1\n 4 | 1\n 3 | 58\n 2 | 68904\n 1 | 928171\n\nThis means that 'ct' records have 'tdnr_ct' duplicate values. As you can\nsee, the index I have on this column is highly selective, and should be\nused to look up records based on this column. In v7.1.3, it always does.\n\nUnder v7.2, it only sometimes does. I've looked at the statistics,\nthanks to what I learned from Tom and Marc's discussion, and I see that\nsometimes when I VACUUM ANALYZE the table, 'n_distinct' for this column\ngets a value of '-1' (desireable), and other times a value such as 59483\nor something.\n\nThis is with the default setting for the statistics.\n\nDoing a 'SET STATISTICS 40' on the column got me to '-0.106047', which is\nbetter. But even so, the values do change somewhat over subsequent runs\nof VACUUM ANALYZE. And sometimes I get the coveted '-1'.\n\nThe query I'm running is fairly complex. The difference between getting\nthe index lookup versus the sequential scan causes an order of magnitude \ndifference in run time.\n\nThe query plans are below. Same query, no changes, just the difference\nin statistics.\n\nThe desireable query plan:\n\nUnique (cost=176572.08..177673.89 rows=3673 width=176)\n -> Sort (cost=176572.08..176572.08 rows=36727 width=176)\n -> Merge Join (cost=172982.30..173787.35 rows=36727 width=176)\n -> Sort (cost=169436.41..169436.41 rows=27883 width=142)\n -> Nested Loop (cost=0.00..167377.66 rows=27883 width=142) -> Seq Scan on pprv_ticket ptk (cost=0.00..3345.83 rows=27883 width=125)\n -> Index Scan using xie01_cat24 on cat24_ticket_doc_id c24 (cost=0.00..5.87 rows=1 width=17)\n -> Sort (cost=3545.89..3545.89 rows=37048 width=34)\n -> Seq Scan on pprv_violation pe (cost=0.00..734.48 rows=37048 width=34)\n SubPlan\n -> Aggregate (cost=5.87..5.87 rows=1 width=17)\n -> Index Scan using xie01_cat24 on cat24_ticket_doc_id (cost=0.00..5.87 rows=1 width=17)\n -> Aggregate (cost=5.88..5.88 rows=1 width=17)\n -> Index Scan using xie01_cat24 on cat24_ticket_doc_id (cost=0.00..5.88 rows=1 width=17)\n\n\n\n\nThe undesireable query plan:\n\nUnique (cost=1129322.57..1187392.58 rows=193567 width=176)\n -> Sort (cost=1129322.57..1129322.57 rows=1935667 width=176)\n -> Merge Join (cost=204226.57..249046.32 rows=1935667 width=176)\n -> Merge Join (cost=200135.91..209436.90 rows=525268 width=142) -> Sort (cost=6435.89..6435.89 rows=27883 width=125)\n -> Seq Scan on pprv_ticket ptk (cost=0.00..3335.83 rows=27883 width=125)\n -> Sort (cost=193700.02..193700.02 rows=1066173 width=17) -> Seq Scan on cat24_ticket_doc_id c24 (cost=0.00..50164.73 rows=1066173 width=17)\n -> Sort (cost=4090.66..4090.66 rows=37048 width=34)\n -> Seq Scan on pprv_violation pv (cost=0.00..734.48 rows=37048 width=34)\n SubPlan\n -> Aggregate (cost=74.72..74.72 rows=1 width=17)\n -> Index Scan using xie01_cat24 on cat24_ticket_doc_id (cost=0.00..74.67 rows=19 width=17)\n -> Aggregate (cost=29.12..29.12 rows=1 width=17)\n -> Index Scan using xie07_cat24 on cat24_ticket_doc_id (cost=0.00..29.12 rows=1 width=17)\n\n\nI hope I've given enough information that it makes sense. If there's anything\nI can do my end to help figure this out, let me know. 
\n\nThanks,\n\nGordon.\n-- \n\"Far and away the best prize that life has to offer\n is the chance to work hard at work worth doing.\"\n -- Theodore Roosevelt\n",
"msg_date": "Wed, 13 Feb 2002 00:22:34 -0500",
"msg_from": "Gordon Runkle <gar@integrated-dynamics.com>",
"msg_from_op": true,
"msg_subject": "Odd statistics behaviour in 7.2"
},
{
"msg_contents": "Gordon Runkle <gar@integrated-dynamics.com> writes:\n> I have a table with 1,066,673 rows. The column I'm interested in has\n> this distribution of values:\n\n> tdnr_ct | ct \n> ---------+--------\n> 16 | 1\n> 4 | 1\n> 3 | 58\n> 2 | 68904\n> 1 | 928171\n\n> This means that 'ct' records have 'tdnr_ct' duplicate values.\n\nI'm confused. You mean that there is one value that appears 16 times,\none that appears 4 times, etc etc, and 928171 values that appear only\nonce?\n\n> Under v7.2, it only sometimes does. I've looked at the statistics,\n> thanks to what I learned from Tom and Marc's discussion, and I see that\n> sometimes when I VACUUM ANALYZE the table, 'n_distinct' for this column\n> gets a value of '-1' (desireable), and other times a value such as 59483\n> or something.\n\nThis seems quite bizarre; given those stats it's hard to see how you\ncould get anything but -1 or close to it, even with a very unlucky\nstatistical sampling. Don't suppose you'd want to trace through the\nANALYZE code and find out why it's computing a bad value?\n\nAlternatively, if you could send me a dump of just the ct column,\nI could try to reproduce the behavior here. (CREATE TABLE foo AS\nSELECT ct FROM yourtab and then pg_dump -t foo should do it.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Feb 2002 11:21:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Odd statistics behaviour in 7.2 "
},
{
"msg_contents": "On Wed, 2002-02-13 at 11:21, Tom Lane wrote:\n> Gordon Runkle <gar@integrated-dynamics.com> writes:\n> > I have a table with 1,066,673 rows. The column I'm interested in has\n> > this distribution of values:\n> \n> > tdnr_ct | ct \n> > ---------+--------\n> > 16 | 1\n> > 4 | 1\n> > 3 | 58\n> > 2 | 68904\n> > 1 | 928171\n> \n> > This means that 'ct' records have 'tdnr_ct' duplicate values.\n> \n> I'm confused. You mean that there is one value that appears 16 times,\n> one that appears 4 times, etc etc, and 928171 values that appear only\n> once?\n\nYes, exactly. I could have stated that more clearly, but probably not\nat 0-dark-thirty... ;-)\n\n\n> > Under v7.2, it only sometimes does. I've looked at the statistics,\n> > thanks to what I learned from Tom and Marc's discussion, and I see that\n> > sometimes when I VACUUM ANALYZE the table, 'n_distinct' for this column\n> > gets a value of '-1' (desireable), and other times a value such as 59483\n> > or something.\n> \n> This seems quite bizarre; given those stats it's hard to see how you\n> could get anything but -1 or close to it, even with a very unlucky\n> statistical sampling. Don't suppose you'd want to trace through the\n> ANALYZE code and find out why it's computing a bad value?\n\nI can do that. I need to build a version of PostgreSQL with debug\nenabled, right?\n\n\n> Alternatively, if you could send me a dump of just the ct column,\n> I could try to reproduce the behavior here. (CREATE TABLE foo AS\n> SELECT ct FROM yourtab and then pg_dump -t foo should do it.)\n\nI can do that too. It's pretty large, I'll email you separately\nwith a URL from which you can retrieve it. Thanks!\n\nThomas suggested in his reply that perhaps the table isn't randomly\npopulated, but if it is storing the data in the order in which it was\nCOPYed in, it's pretty random. The column's values generally trend\nupward, but pretty randomly. We load 6000-10000 new records per week.\n\nThanks again,\n\nGordon.\n-- \n\"Far and away the best prize that life has to offer\n is the chance to work hard at work worth doing.\"\n -- Theodore Roosevelt\n\n\n",
"msg_date": "13 Feb 2002 12:07:54 -0500",
"msg_from": "\"Gordon A. Runkle\" <gar@integrated-dynamics.com>",
"msg_from_op": false,
"msg_subject": "Re: Odd statistics behaviour in 7.2"
}
] |
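For reference, the 'SET STATISTICS 40' step Gordon mentions is spelled as follows in 7.2, followed by a re-analyze so the new target takes effect. The column name tdnr is a guess from the thread; substitute the real one:

    ALTER TABLE cat24_ticket_doc_id ALTER COLUMN tdnr SET STATISTICS 40;
    ANALYZE cat24_ticket_doc_id;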
[
{
"msg_contents": "Look at this: (top one)\n\nhttp://www.mysql.com/information/benchmarks.html\n\nDoes anyone feel like running the MySQL benchmark against postgres 7.2 to\nsee if there's been a real speed improvement??\n\nChris\n\n",
"msg_date": "Wed, 13 Feb 2002 14:40:12 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "benchmarking postgres"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> \n> Look at this: (top one)\n> \n> http://www.mysql.com/information/benchmarks.html\n> \n> Does anyone feel like running the MySQL benchmark against postgres 7.2 to\n> see if there's been a real speed improvement??\n> \n> Chris\n\nThese guys are just A$%$%holes. We have to come up with a benchmark which shows\nthe the difference between a stupid little file-locking single user toy, and a\nreal tansactional system.\n\nMaybe we too can put in little snide remarks about MySQL.\n",
"msg_date": "Wed, 13 Feb 2002 07:11:28 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: benchmarking postgres"
},
{
"msg_contents": "On Wed, 13 Feb 2002, mlw wrote:\n\n> Christopher Kings-Lynne wrote:\n> >\n> > Look at this: (top one)\n> >\n> > http://www.mysql.com/information/benchmarks.html\n> >\n> > Does anyone feel like running the MySQL benchmark against postgres 7.2 to\n> > see if there's been a real speed improvement??\n> >\n> > Chris\n>\n> These guys are just A$%$%holes. We have to come up with a benchmark which shows\n> the the difference between a stupid little file-locking single user toy, and a\n> real tansactional system.\n>\n> Maybe we too can put in little snide remarks about MySQL.\n\nIf someone comes up with a simple and objective comparison (preferably\nwith the nice color pictures mentioned previously :) and it's professional\nlooking (no childish slams, etc) I'll be happy to put it on the website.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 13 Feb 2002 07:54:17 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: benchmarking postgres"
},
{
"msg_contents": "On Wed, 13 Feb 2002, mlw wrote:\n\n> Christopher Kings-Lynne wrote:\n> > \n> > Look at this: (top one)\n> > \n> > http://www.mysql.com/information/benchmarks.html\n> > \n> > Does anyone feel like running the MySQL benchmark against postgres 7.2 to\n> > see if there's been a real speed improvement??\n> > \n> > Chris\n> \n> These guys are just A$%$%holes. We have to come up with a benchmark which shows\n> the the difference between a stupid little file-locking single user toy, and a\n> real tansactional system.\n> \n> Maybe we too can put in little snide remarks about MySQL.\n\nNow, let's be a bit sensible, here. MySQL is a great product, if you want\na single-user SQL interface to flat files. It is blazingly fast when it\ncomes to retrieving information in an environment where there is little or\nno data change.\n\nWe all know the strenghts of postgresql. It is a fully-featured\ntransactional database. MySQL is not, but it is neither stupid, nor a\ntoy. It has its purposes, as does postgresql.\n\nOla\n\n-- \nOla Sundell\nola@miranda.org - olas@wiw.org - ola.sundell@personalchemistry.com\nhttp://miranda.org/~ola\n\n",
"msg_date": "Wed, 13 Feb 2002 07:57:41 -0500 (EST)",
"msg_from": "Ola Sundell <ola@miranda.org>",
"msg_from_op": false,
"msg_subject": "Re: benchmarking postgres"
},
{
"msg_contents": "On Mi� 13 Feb 2002 09:57, Ola Sundell wrote:\n> On Wed, 13 Feb 2002, mlw wrote:\n> > Christopher Kings-Lynne wrote:\n> > > Look at this: (top one)\n> > >\n> > > http://www.mysql.com/information/benchmarks.html\n> > >\n> > > Does anyone feel like running the MySQL benchmark against postgres 7.2\n> > > to see if there's been a real speed improvement??\n> > >\n> > > Chris\n> >\n> > These guys are just A$%$%holes. We have to come up with a benchmark which\n> > shows the the difference between a stupid little file-locking single user\n> > toy, and a real tansactional system.\n> >\n> > Maybe we too can put in little snide remarks about MySQL.\n>\n> Now, let's be a bit sensible, here. MySQL is a great product, if you want\n> a single-user SQL interface to flat files. It is blazingly fast when it\n> comes to retrieving information in an environment where there is little or\n> no data change.\n>\n> We all know the strenghts of postgresql. It is a fully-featured\n> transactional database. MySQL is not, but it is neither stupid, nor a\n> toy. It has its purposes, as does postgresql.\n\nWhat you say is true, but in that case, they shouldn't make benchmarks \ncomparing the two.\n\nSaludos... :-)\n\n-- \nPorqu� usar una base de datos relacional cualquiera,\nsi pod�s usar PostgreSQL?\n-----------------------------------------------------------------\nMart�n Marqu�s | mmarques@unl.edu.ar\nProgramador, Administrador, DBA | Centro de Telematica\n Universidad Nacional\n del Litoral\n-----------------------------------------------------------------\n",
"msg_date": "Wed, 13 Feb 2002 10:09:20 -0300",
"msg_from": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <martin@bugs.unl.edu.ar>",
"msg_from_op": false,
"msg_subject": "Re: benchmarking postgres"
},
{
"msg_contents": "Ola Sundell wrote:\n> \n> On Wed, 13 Feb 2002, mlw wrote:\n> \n> > Christopher Kings-Lynne wrote:\n> > >\n> > > Look at this: (top one)\n> > >\n> > > http://www.mysql.com/information/benchmarks.html\n> > >\n> > > Does anyone feel like running the MySQL benchmark against postgres 7.2 to\n> > > see if there's been a real speed improvement??\n> > >\n> > > Chris\n> >\n> > These guys are just A$%$%holes. We have to come up with a benchmark which shows\n> > the the difference between a stupid little file-locking single user toy, and a\n> > real tansactional system.\n> >\n> > Maybe we too can put in little snide remarks about MySQL.\n> \n> Now, let's be a bit sensible, here. MySQL is a great product, if you want\n> a single-user SQL interface to flat files. It is blazingly fast when it\n> comes to retrieving information in an environment where there is little or\n> no data change.\n\nThe snide remarks on the page about things not working was a bit much. I was\nticked off. On a more serious note, MySQL isn't even really SQL. It supports a\nlot of the syntax, but none of the intentions. Things like sub-selects are\nvital to being able to model a problem. Transactions are vital to predictable\nbehavior. High concurrency is vital to \"real\" performance.\n\nI have said it at least a hundred times before, I have never been able to\nfinish a project started in MySQL. I always come across something that the\ndatabase *must* do, but MySQL can't.\n\nIt is clear that anyone who runs a single user benchmark against a database\nserver capable of multiple connections is not testing their system in its\nintended mode of use. They are resorting to the worst sort of microsoftian\nbenchmark FUD.\n\n> \n> We all know the strenghts of postgresql. It is a fully-featured\n> transactional database. MySQL is not, but it is neither stupid, nor a\n> toy. It has its purposes, as does postgresql.\n\nWhat purpose does MySQL fit? It isn't very good at doing the sorts of things\nSQL is supposed to do and there are faster database libraries (ala Berkeley\nDB). What would be the point of using MySQL for anything?\n",
"msg_date": "Wed, 13 Feb 2002 08:41:50 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: benchmarking postgres"
},
{
"msg_contents": "Vince Vielhaber wrote:\n> \n> On Wed, 13 Feb 2002, mlw wrote:\n> \n> > Christopher Kings-Lynne wrote:\n> > >\n> > > Look at this: (top one)\n> > >\n> > > http://www.mysql.com/information/benchmarks.html\n> > >\n> > > Does anyone feel like running the MySQL benchmark against postgres 7.2 to\n> > > see if there's been a real speed improvement??\n> > >\n> > > Chris\n> >\n> > These guys are just A$%$%holes. We have to come up with a benchmark which shows\n> > the the difference between a stupid little file-locking single user toy, and a\n> > real tansactional system.\n> >\n> > Maybe we too can put in little snide remarks about MySQL.\n> \n> If someone comes up with a simple and objective comparison (preferably\n> with the nice color pictures mentioned previously :) and it's professional\n> looking (no childish slams, etc) I'll be happy to put it on the website.\n\nHas anyone ported \"pgbench\" to MySQL? That would be the perfect tool to show\nthe differences. Tom even has some scripts to make charts from it.\n",
"msg_date": "Wed, 13 Feb 2002 08:44:10 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: benchmarking postgres"
},
{
"msg_contents": "This comes up about once every 6 months. Please take it off HACKERS to\nADVOCACY (do we have such a thing?), or some such. Various members of\nthe PostgreSQL community have tried to work with the MySQL people in\nthe past to address 'issues' with their 'benchmark': it never works out.\n\nRoss\n\nOn Wed, Feb 13, 2002 at 08:41:50AM -0500, mlw wrote:\n> Ola Sundell wrote:\n> > \n> > On Wed, 13 Feb 2002, mlw wrote:\n> > \n> > > Christopher Kings-Lynne wrote:\n> > > >\n> > > > Look at this: (top one)\n> > > >\n> > > > http://www.mysql.com/information/benchmarks.html\n> > > >\n> > > > Does anyone feel like running the MySQL benchmark against postgres 7.2 to\n> > > > see if there's been a real speed improvement??\n> > > >\n> > > > Chris\n> > >\n> > > These guys are just A$%$%holes. We have to come up with a benchmark which shows\n> > > the the difference between a stupid little file-locking single user toy, and a\n> > > real tansactional system.\n> > >\n> > > Maybe we too can put in little snide remarks about MySQL.\n> > \n> > Now, let's be a bit sensible, here. MySQL is a great product, if you want\n> > a single-user SQL interface to flat files. It is blazingly fast when it\n> > comes to retrieving information in an environment where there is little or\n> > no data change.\n> \n> The snide remarks on the page about things not working was a bit much. I was\n> ticked off. On a more serious note, MySQL isn't even really SQL. It supports a\n> lot of the syntax, but none of the intentions. Things like sub-selects are\n> vital to being able to model a problem. Transactions are vital to predictable\n> behavior. High concurrency is vital to \"real\" performance.\n> \n> I have said it at least a hundred times before, I have never been able to\n> finish a project started in MySQL. I always come across something that the\n> database *must* do, but MySQL can't.\n> \n> It is clear that anyone who runs a single user benchmark against a database\n> server capable of multiple connections is not testing their system in its\n> intended mode of use. They are resorting to the worst sort of microsoftian\n> benchmark FUD.\n> \n> > \n> > We all know the strenghts of postgresql. It is a fully-featured\n> > transactional database. MySQL is not, but it is neither stupid, nor a\n> > toy. It has its purposes, as does postgresql.\n> \n> What purpose does MySQL fit? It isn't very good at doing the sorts of things\n> SQL is supposed to do and there are faster database libraries (ala Berkeley\n> DB). What would be the point of using MySQL for anything?\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n",
"msg_date": "Wed, 13 Feb 2002 10:23:00 -0600",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: benchmarking postgres"
},
{
"msg_contents": "Christopher Kings-Lynne writes:\n\n> Does anyone feel like running the MySQL benchmark against postgres 7.2 to\n> see if there's been a real speed improvement??\n\nFor anyone looking for a real benchmark, check out the OSDB project\n(http://osdb.sourceforge.net). It's based on the fairly respected AS3AP\nbenchmark. The drawback is that you can't easily generate the test data,\nyet. I've been working on that, but I sort of ran out of algebra for a\nwhile. For now you can download some test data sets, but note that\nthey're really too small to run the benchmark accurately.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 13 Feb 2002 11:50:52 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: benchmarking postgres"
}
] |
[
{
"msg_contents": "Why not attach a hundred users and run the same test. MySQL will be\nlicking *everyone's* boots. Each and every program listed would\npositively slam them into the floor and grind their face in the dirt\n(unless they have made some monumental improvements recently).\n\nIf a single user database is wanted, MySQL is just the ticket. When it\ngets a bunch of users and has to do complex stuff, it goes into the\ntoilet.\n\nAs far as benchmarks go, TPC-C, TPC-H, TPC-W and TPC-R would be\ninteresting. Trying to beat a toy database under silly conditions is a\nwaste of time.\n\n-----Original Message-----\nFrom: Christopher Kings-Lynne [mailto:chriskl@familyhealth.com.au]\nSent: Tuesday, February 12, 2002 10:40 PM\nTo: Hackers\nSubject: [HACKERS] benchmarking postgres\n\n\nLook at this: (top one)\n\nhttp://www.mysql.com/information/benchmarks.html\n\nDoes anyone feel like running the MySQL benchmark against postgres 7.2\nto\nsee if there's been a real speed improvement??\n\nChris\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n",
"msg_date": "Tue, 12 Feb 2002 23:09:01 -0800",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: benchmarking postgres"
},
{
"msg_contents": "On Tue, 12 Feb 2002, Dann Corbit wrote:\n\n> Why not attach a hundred users and run the same test. MySQL will be\n> licking *everyone's* boots. Each and every program listed would\n> positively slam them into the floor and grind their face in the dirt\n> (unless they have made some monumental improvements recently).\n>\n>> http://www.mysql.com/information/benchmarks.html\n>>\n>> Does anyone feel like running the MySQL benchmark against postgres 7.2\n>> to\n>> see if there's been a real speed improvement??\n\nWhy don't we have some pretty graphs like that on the postgres site? I\nagree that the mysql tests against single user are useless, but users see\nthe FUD of pretty pictures and are mislead. If mysql is giving out\ninformation like that, shouldn't we also have some pretty pictures? If\nthey exist, i've never seen them on the main site.\n\nThoughts..\n\n- Brandon\n\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n",
"msg_date": "Wed, 13 Feb 2002 06:56:14 -0500 (EST)",
"msg_from": "bpalmer <bpalmer@crimelabs.net>",
"msg_from_op": false,
"msg_subject": "Re: benchmarking postgres"
}
] |
[
{
"msg_contents": "Hello, all,\n\nI'm having a strange problem with v7.2 relating to statistics collection\nand plan calculation. I'm not sure if this relates to the problems Marc\nwas seeing, but here goes.\n\nI have a table with 1,066,673 rows. The column I'm interested in has\nthis distribution of values:\n\n tdnr_ct | ct\n---------+--------\n 16 | 1\n 4 | 1\n 3 | 58\n 2 | 68904\n 1 | 928171\n\nThis means that 'ct' records have 'tdnr_ct' duplicate values. As you\ncan\nsee, the index I have on this column is highly selective, and should be\nused to look up records based on this column. In v7.1.3, it always\ndoes.\n\nUnder v7.2, it only sometimes does. I've looked at the statistics,\nthanks to what I learned from Tom and Marc's discussion, and I see that\nsometimes when I VACUUM ANALYZE the table, 'n_distinct' for this column\ngets a value of '-1' (desireable), and other times a value such as 56596\nor something.\n\nThis is with the default setting for the statistics.\n\nDoing a 'SET STATISTICS 40' on the column got me to '-0.106047', which\nis\nbetter. But even so, the values do change somewhat over subsequent runs\nof VACUUM ANALYZE. And sometimes I get the coveted '-1'.\n\nThe query I'm running is fairly complex. The difference between getting\nthe index lookup versus the sequential scan causes an order of magnitude\ndifference in run time.\n\nThe query plans are below. Same query, no changes, just the difference\nin statistics.\n\nThe desireable query plan:\n\nUnique (cost=176572.08..177673.89 rows=3673 width=176)\n -> Sort (cost=176572.08..176572.08 rows=36727 width=176)\n -> Merge Join (cost=172982.30..173787.35 rows=36727 width=176)\n -> Sort (cost=169436.41..169436.41 rows=27883 width=142)\n -> Nested Loop (cost=0.00..167377.66 rows=27883\nwidth=142) -> Seq Scan on pprv_ticket ptk \n(cost=0.00..3345.83 rows=27883 width=125)\n -> Index Scan using xie01_cat24 on\ncat24_ticket_doc_id c24 (cost=0.00..5.87 rows=1 width=17)\n -> Sort (cost=3545.89..3545.89 rows=37048 width=34)\n -> Seq Scan on pprv_violation pe \n(cost=0.00..734.48 rows=37048 width=34)\n SubPlan\n -> Aggregate (cost=5.87..5.87 rows=1 width=17)\n -> Index Scan using xie01_cat24 on\ncat24_ticket_doc_id (cost=0.00..5.87 rows=1 width=17)\n -> Aggregate (cost=5.88..5.88 rows=1 width=17)\n -> Index Scan using xie01_cat24 on\ncat24_ticket_doc_id (cost=0.00..5.88 rows=1 width=17)\n\n\n\n\nThe undesireable query plan:\n\nUnique (cost=1129322.57..1187392.58 rows=193567 width=176)\n -> Sort (cost=1129322.57..1129322.57 rows=1935667 width=176)\n -> Merge Join (cost=204226.57..249046.32 rows=1935667\nwidth=176)\n -> Merge Join (cost=200135.91..209436.90 rows=525268\nwidth=142) -> Sort (cost=6435.89..6435.89\nrows=27883 width=125)\n -> Seq Scan on pprv_ticket ptk \n(cost=0.00..3335.83 rows=27883 width=125)\n -> Sort (cost=193700.02..193700.02 rows=1066173\nwidth=17) -> Seq Scan on cat24_ticket_doc_id\nc24 (cost=0.00..50164.73 rows=1066173 width=17)\n -> Sort (cost=4090.66..4090.66 rows=37048 width=34)\n -> Seq Scan on pprv_violation pv \n(cost=0.00..734.48 rows=37048 width=34)\n SubPlan\n -> Aggregate (cost=74.72..74.72 rows=1 width=17)\n -> Index Scan using xie01_cat24 on\ncat24_ticket_doc_id (cost=0.00..74.67 rows=19 width=17)\n -> Aggregate (cost=29.12..29.12 rows=1 width=17)\n -> Index Scan using xie07_cat24 on\ncat24_ticket_doc_id (cost=0.00..29.12 rows=1 width=17)\n\n\nI hope I've given enough information that it makes sense. 
If there's\nanything\nI can do my end to help figure this out, let me know.\n\nThanks,\n\nGordon.\n-- \n\"Far and away the best prize that life has to offer\n is the chance to work hard at work worth doing.\"\n -- Theodore Roosevelt\n\n\n",
"msg_date": "13 Feb 2002 03:11:00 -0500",
"msg_from": "\"Gordon A. Runkle\" <gar@integrated-dynamics.com>",
"msg_from_op": true,
"msg_subject": "Odd statistics behaviour in 7.2"
},
{
"msg_contents": "> ... sometimes ... 'n_distinct' for this column\n> gets a value of '-1' (desireable), and other times a value such as 56596\n> or something.\n> This is with the default setting for the statistics.\n> Doing a 'SET STATISTICS 40' on the column got me to '-0.106047', which\n> is better. But even so, the values do change somewhat over subsequent\n> runs of VACUUM ANALYZE. And sometimes I get the coveted '-1'.\n\nI'm guessing that your table is not randomly populated, so that the new\n\"statistically sampled ANALYZE\" is sometimes getting a good sample (or\nat least one with the result you want) and sometimes getting a terrible\none.\n\nTom?\n\n - Thomas\n",
"msg_date": "Wed, 13 Feb 2002 14:41:34 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Odd statistics behaviour in 7.2"
}
] |
[
{
"msg_contents": "Binary package for QNX6 \n\nhttp://crystaliq.ru/files/pgsql72-qnx6.tgz?from=ideas&id=475055\n\n\nAndy Latin\n----\n http://www.rambler.ru\n",
"msg_date": "Wed, 13 Feb 2002 11:26:24 +0300 (MSK)",
"msg_from": "Andy Latin <303401@rambler.ru>",
"msg_from_op": true,
"msg_subject": "Postgre SQL 7.2 QNX6"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Jean-Michel POURE [mailto:jm.poure@freesurf.fr] \n> Sent: 13 February 2002 08:10\n> To: Christopher Kings-Lynne; Hiroshi Inoue; Tom Lane; Kovacs Zoltan\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] alter table drop column status\n> \n> \n> Le Mercredi 13 F�vrier 2002 06:14, Christopher Kings-Lynne a �crit :\n> > This seems fantastic - why can't this be committed? �Surely if it's \n> > committed then the flaws will fairly quickly be ironed out? \n> �Even if \n> > it has flaws, then if we say 'this function is not yet stable' at \n> > least people can start testing it and reporting the problems?\n> \n> +1. What are the reasons why this hack was not applied?\n\nSee /doc/TODO.detail/drop in the source tree. That pretty much explains it.\n\nRegards, Dave.\n",
"msg_date": "Wed, 13 Feb 2002 08:30:37 -0000",
"msg_from": "Dave Page <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: alter table drop column status"
}
] |
[
{
"msg_contents": "\ngeneral description:\n--------------------\n\nI've a problem in PG 7.2. If you update rows which are included in plpgsql\nRECORD , updated rows are again added to the RECORD, so you will get into\ninfinite loop.\n\ndetails:\n--------\n\n- 7.2 problem\n- problem in PLPGSQL\n- problem arise only if count in RECORD exceed some number of rows\n- problem arise only if RECORD is not sorted\n\nquestion:\n---------\nIs it feature or bug ?\n\nexample:\n--------\nIf you execute this script, you will enter to infinite loop. If you add\n\"order by a\" it will work fine.\n\n------------<< cut here << ---------------------\ncreate sequence tmp_seq;\n\ncreate table tmp_test ( a int4 default nextval('tmp_seq'), b int4 );\n\ninsert into tmp_test (b) values( 1 );\ninsert into tmp_test (b) values( 1 );\ninsert into tmp_test (b) values( 1 );\ninsert into tmp_test (b) select b from tmp_test;\ninsert into tmp_test (b) select b from tmp_test;\ninsert into tmp_test (b) select b from tmp_test;\ninsert into tmp_test (b) select b from tmp_test;\ninsert into tmp_test (b) select b from tmp_test;\nselect count(*) from tmp_test;\n\ndrop function ftmp_test();\ncreate function ftmp_test() RETURNS varchar AS'\nDECLARE\n _grp varchar;\n\n sql varchar;\n R RECORD;\n i integer;\nBEGIN\n\n i = 0;\n FOR R IN select * from tmp_test\n LOOP\n i = i + 1;\n if i%100 = 0 then\n raise notice ''% - val: %'', i, R.a;\n end if;\n UPDATE tmp_test SET a=1000 WHERE a = R.a;\n END LOOP;\n RETURN '''';\nEND;' LANGUAGE 'plpgsql';\n\nselect ftmp_test();\n\ndrop table tmp_test;\n------------<< cut here << --------------------\n\n\nVaclav Kulakovsky\nDEFINITY Systems, s.r.o.\nTyrsova 2071\n25601 Benesov\nCzech Republic\nTel: +420 301 727975,724456\nFax: +420 301 724456\nvaclav.kulakovsky@definity.cz\nhttp://www.definity.cz\n\n\n",
"msg_date": "Wed, 13 Feb 2002 09:47:49 +0100 (Central Europe Standard Time)",
"msg_from": "Vaclav Kulakovsky <vaclav.kulakovsky@definity.cz>",
"msg_from_op": true,
"msg_subject": "Postgres 7.2 - Updating rows in cursor problem"
},
{
"msg_contents": "Vaclav Kulakovsky <vaclav.kulakovsky@definity.cz> writes:\n> I've a problem in PG 7.2. If you update rows which are included in plpgsql\n> RECORD , updated rows are again added to the RECORD, so you will get into\n> infinite loop.\n\nThis is a bug in plgsql, or more precisely in SPI, I think. The FOR\nstatement needs to restore its initial value of scanCommandId each time\nit resumes execution of the SELECT. Seems like that should be done down\ninside SPI. Comments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Feb 2002 14:47:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Postgres 7.2 - Updating rows in cursor problem "
},
{
"msg_contents": "I wrote:\n> This is a bug in plgsql, or more precisely in SPI, I think. The FOR\n> statement needs to restore its initial value of scanCommandId each time\n> it resumes execution of the SELECT. Seems like that should be done down\n> inside SPI. Comments?\n\nMore specifically, the problem is that plpgsql's FOR-over-a-select now\ndepends on a SPI cursor, and both SPI cursors and regular cursors are\nbroken in this regard. Observe the following misbehavior with a plain\ncursor:\n\nregression=# select * from foo;\n f1 | f2\n----+----\n 1 | 1\n 2 | 2\n 3 | 3\n(3 rows)\n\nregression=# begin;\nBEGIN\nregression=# declare c cursor for select * from foo;\nSELECT\nregression=# fetch 2 from c;\n f1 | f2\n----+----\n 1 | 1\n 2 | 2\n(2 rows)\n\nregression=# update foo set f2 = f2 + 1;\nUPDATE 3\nregression=# fetch all from c;\n f1 | f2\n----+----\n 1 | 2\n 2 | 3\n 3 | 4\n(3 rows)\n\nIMHO the cursor should not be able to see the rows inserted by the\nsubsequent UPDATE. (Certainly it should not return the updated versions\nof rows it's already returned.) The SQL spec says that cursors declared\nINSENSITIVE shall not observe changes made after they are opened --- and\nit gives the implementation the option to make all cursors behave that\nway. I think we should choose to do so.\n\nI believe the correct fix for this is that Portal objects should store\nthe scanCommandId that was current when they were created, and restore\nthis scanCommandId whenever they are asked to run their plan. Comments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Feb 2002 17:53:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Postgres 7.2 - Updating rows in cursor problem "
},
{
"msg_contents": "Vaclav Kulakovsky <vaclav.kulakovsky@definity.cz> writes:\n> I've a problem in PG 7.2. If you update rows which are included in plpgsql\n> RECORD , updated rows are again added to the RECORD, so you will get into\n> infinite loop.\n\nThe attached patch against 7.2 appears to fix this problem, as well as\nthe cursor misbehavior that I exhibited in my followup on pghackers.\n\nIf I don't hear any complaints I will commit this soon.\n\n\t\t\tregards, tom lane\n\n*** src/backend/commands/command.c.orig\tThu Jan 3 18:19:30 2002\n--- src/backend/commands/command.c\tWed Feb 13 19:24:00 2002\n***************\n*** 103,108 ****\n--- 103,109 ----\n \tQueryDesc *queryDesc;\n \tEState\t *estate;\n \tMemoryContext oldcontext;\n+ \tCommandId\tsavedId;\n \tbool\t\ttemp_desc = false;\n \n \t/*\n***************\n*** 156,162 ****\n \t}\n \n \t/*\n! \t * tell the destination to prepare to receive some tuples.\n \t */\n \tBeginCommand(name,\n \t\t\t\t queryDesc->operation,\n--- 157,163 ----\n \t}\n \n \t/*\n! \t * Tell the destination to prepare to receive some tuples.\n \t */\n \tBeginCommand(name,\n \t\t\t\t queryDesc->operation,\n***************\n*** 169,174 ****\n--- 170,183 ----\n \t\t\t\t queryDesc->dest);\n \n \t/*\n+ \t * Restore the scanCommandId that was current when the cursor was\n+ \t * opened. This ensures that we see the same tuples throughout the\n+ \t * execution of the cursor.\n+ \t */\n+ \tsavedId = GetScanCommandId();\n+ \tSetScanCommandId(PortalGetCommandId(portal));\n+ \n+ \t/*\n \t * Determine which direction to go in, and check to see if we're\n \t * already at the end of the available tuples in that direction. If\n \t * so, do nothing.\t(This check exists because not all plan node types\n***************\n*** 213,218 ****\n--- 222,232 ----\n \t\t\t\tportal->atStart = true; /* we retrieved 'em all */\n \t\t}\n \t}\n+ \n+ \t/*\n+ \t * Restore outer command ID.\n+ \t */\n+ \tSetScanCommandId(savedId);\n \n \t/*\n \t * Clean up and switch back to old context.\n*** src/backend/executor/spi.c.orig\tThu Jan 3 15:30:47 2002\n--- src/backend/executor/spi.c\tWed Feb 13 19:23:55 2002\n***************\n*** 740,748 ****\n \t_SPI_current->processed = 0;\n \t_SPI_current->tuptable = NULL;\n \n- \t/* Make up a portal name if none given */\n \tif (name == NULL)\n \t{\n \t\tfor (;;)\n \t\t{\n \t\t\tunnamed_portal_count++;\n--- 740,748 ----\n \t_SPI_current->processed = 0;\n \t_SPI_current->tuptable = NULL;\n \n \tif (name == NULL)\n \t{\n+ \t\t/* Make up a portal name if none given */\n \t\tfor (;;)\n \t\t{\n \t\t\tunnamed_portal_count++;\n***************\n*** 755,765 ****\n \n \t\tname = portalname;\n \t}\n! \n! \t/* Ensure the portal doesn't exist already */\n! \tportal = GetPortalByName(name);\n! \tif (portal != NULL)\n! \t\telog(ERROR, \"cursor \\\"%s\\\" already in use\", name);\n \n \t/* Create the portal */\n \tportal = CreatePortal(name);\n--- 755,767 ----\n \n \t\tname = portalname;\n \t}\n! \telse\n! \t{\n! \t\t/* Ensure the portal doesn't exist already */\n! \t\tportal = GetPortalByName(name);\n! \t\tif (portal != NULL)\n! \t\t\telog(ERROR, \"cursor \\\"%s\\\" already in use\", name);\n! 
\t}\n \n \t/* Create the portal */\n \tportal = CreatePortal(name);\n***************\n*** 1228,1233 ****\n--- 1230,1236 ----\n \tQueryDesc *querydesc;\n \tEState\t *estate;\n \tMemoryContext oldcontext;\n+ \tCommandId\tsavedId;\n \tCommandDest olddest;\n \n \t/* Check that the portal is valid */\n***************\n*** 1245,1250 ****\n--- 1248,1254 ----\n \n \t/* Switch to the portals memory context */\n \toldcontext = MemoryContextSwitchTo(PortalGetHeapMemory(portal));\n+ \n \tquerydesc = PortalGetQueryDesc(portal);\n \testate = PortalGetState(portal);\n \n***************\n*** 1253,1258 ****\n--- 1257,1270 ----\n \tolddest = querydesc->dest;\n \tquerydesc->dest = dest;\n \n+ \t/*\n+ \t * Restore the scanCommandId that was current when the cursor was\n+ \t * opened. This ensures that we see the same tuples throughout the\n+ \t * execution of the cursor.\n+ \t */\n+ \tsavedId = GetScanCommandId();\n+ \tSetScanCommandId(PortalGetCommandId(portal));\n+ \n \t/* Run the executor like PerformPortalFetch and remember states */\n \tif (forward)\n \t{\n***************\n*** 1278,1283 ****\n--- 1290,1300 ----\n \t\t\t\tportal->atStart = true;\n \t\t}\n \t}\n+ \n+ \t/*\n+ \t * Restore outer command ID.\n+ \t */\n+ \tSetScanCommandId(savedId);\n \n \t/* Restore the old command destination and switch back to callers */\n \t/* memory context */\n*** src/backend/utils/mmgr/portalmem.c.orig\tThu Oct 25 01:49:51 2001\n--- src/backend/utils/mmgr/portalmem.c\tWed Feb 13 19:23:48 2002\n***************\n*** 168,173 ****\n--- 168,174 ----\n \n \tportal->queryDesc = queryDesc;\n \tportal->attinfo = attinfo;\n+ \tportal->commandId = GetScanCommandId();\n \tportal->state = state;\n \tportal->atStart = true;\t\t/* Allow fetch forward only */\n \tportal->atEnd = false;\n***************\n*** 213,218 ****\n--- 214,220 ----\n \t/* initialize portal query */\n \tportal->queryDesc = NULL;\n \tportal->attinfo = NULL;\n+ \tportal->commandId = 0;\n \tportal->state = NULL;\n \tportal->atStart = true;\t\t/* disallow fetches until query is set */\n \tportal->atEnd = true;\n*** src/include/utils/portal.h.orig\tMon Nov 5 14:44:35 2001\n--- src/include/utils/portal.h\tWed Feb 13 19:23:42 2002\n***************\n*** 31,36 ****\n--- 31,37 ----\n \tMemoryContext heap;\t\t\t/* subsidiary memory */\n \tQueryDesc *queryDesc;\t\t/* Info about query associated with portal */\n \tTupleDesc\tattinfo;\n+ \tCommandId\tcommandId;\t\t/* Command counter value for query */\n \tEState\t *state;\t\t\t/* Execution state of query */\n \tbool\t\tatStart;\t\t/* T => fetch backwards is not allowed */\n \tbool\t\tatEnd;\t\t\t/* T => fetch forwards is not allowed */\n***************\n*** 48,55 ****\n */\n #define PortalGetQueryDesc(portal)\t((portal)->queryDesc)\n #define PortalGetTupleDesc(portal)\t((portal)->attinfo)\n! #define PortalGetState(portal)\t((portal)->state)\n! #define PortalGetHeapMemory(portal) ((portal)->heap)\n \n /*\n * estimate of the maximum number of open portals a user would have,\n--- 49,57 ----\n */\n #define PortalGetQueryDesc(portal)\t((portal)->queryDesc)\n #define PortalGetTupleDesc(portal)\t((portal)->attinfo)\n! #define PortalGetCommandId(portal)\t((portal)->commandId)\n! #define PortalGetState(portal)\t\t((portal)->state)\n! #define PortalGetHeapMemory(portal)\t((portal)->heap)\n \n /*\n * estimate of the maximum number of open portals a user would have,\n",
"msg_date": "Wed, 13 Feb 2002 21:12:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Postgres 7.2 - Updating rows in cursor problem "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane\n>\n> I wrote:\n> > This is a bug in plgsql, or more precisely in SPI, I think. The FOR\n> > statement needs to restore its initial value of scanCommandId each time\n> > it resumes execution of the SELECT. Seems like that should be done down\n> > inside SPI. Comments?\n>\n> More specifically, the problem is that plpgsql's FOR-over-a-select now\n> depends on a SPI cursor, and both SPI cursors and regular cursors are\n> broken in this regard. Observe the following misbehavior with a plain\n> cursor:\n\nThis is a known issue. We should implement INSENSITIVE cursors\nto avoid this behavior. The keyword INSENSITIVE is there but isn't\nused long. I plan to implement this feature as the first step toward\ncross transaction cursors. Saving the xid and commandid in the\nportal or snapshot and restoring them at fetch(move) time would\nsolve it.\n\nregards,\nHiroshi Inoue\n\n",
"msg_date": "Fri, 15 Feb 2002 01:53:17 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Postgres 7.2 - Updating rows in cursor problem "
},
{
"msg_contents": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> This is a known issue. We should implement INSENSITIVE cursors\n> to avoid this behavior. The keyword INSENSITIVE is there but isn't\n> used long. I plan to implement this feature as the first step toward\n> cross transaction cursors. Saving the xid and commandid in the\n> portal or snapshot and restoring them at fetch(move) time would\n> solve it.\n\nFor the moment I've arranged to save commandId in portals. (xid isn't\nneeded since we don't have cross-transaction portals ... yet)\n\nIt occurs to me though that scanCommandId should not be part of the\nxact.c global status at all. It should be stored in heapscan and\nindexscan state structs, instead. I have been thinking about trying\nto clean up the API for heapscans and indexscans, and maybe I'll see\nif that can be done as part of that work.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Feb 2002 12:10:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Postgres 7.2 - Updating rows in cursor problem "
}
] |
[
{
"msg_contents": "Hi,\n\nRecently I got this error while the database was starting up. :\n\nFATAL 2: XLogWrite: writerequest is past end of log\n\nAnd the database did not startup, and finally I had to do a\npg_resetxlog. I tried going through the code which gave\nthis error but I was unable to comprehend much.\n\nCould anyone tell me how exactly this error is caused. Also I would like\nto know whether there is any detail\ndocumentation about the WAL implemenation (also pg_resetxlog\nimplemenation) of postgres.\n\nThanks in advance.\nRegards,\nAniket\n\n",
"msg_date": "Wed, 13 Feb 2002 14:31:10 +0530",
"msg_from": "Aniket Kulkarni <aniket_kulkarni@persistent.co.in>",
"msg_from_op": true,
"msg_subject": "Information about XLogWrite"
},
{
"msg_contents": "Aniket Kulkarni <aniket_kulkarni@persistent.co.in> writes:\n> Recently I got this error while the database was starting up. :\n> FATAL 2: XLogWrite: writerequest is past end of log\n> And the database did not startup, and finally I had to do a\n> pg_resetxlog. I tried going through the code which gave\n> this error but I was unable to comprehend much.\n\n> Could anyone tell me how exactly this error is caused.\n\nXLogWrite? Not XLogFlush? That'd be a new one on me. Too bad you\nflushed the broken xlog, I would have liked to look at it.\n\nThere is a known failure path (believed fixed in 7.2) whereby garbage\ndata in pg_log pages can lead to XLogFlush startup failures.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 19 Feb 2002 16:06:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Information about XLogWrite "
}
] |
[
{
"msg_contents": "Dear hackers,\n\nSorry to come back to ALTER /DROP object features, but I need some advice.\n\n1) What we did in pgAdmin1\n\nIn late version of pgAdmin 1, we used to maintain two sets of objects :\n- one set of production objects (queried from PostgreSQL schema) including \ntables, views, triggers, etc..\n- one set of development objects (which were stored in separate tables). \n\nEvery modification of objects (ex: functions, triggers, views) would apply \nfirst in the development tables. During development, productions objects were \nnot modified.\n\nAt the user request, it was possible to compile the project and generate \nproduction objects. We called this process \"Rebuilding\", but I prefer to say \n\"compiling\".\n\nBefore compiling, of course, we created a dependency list of objects to \ndefine compilation order.\n\nThe advantages of such a process were :\n- to be able to store development code on a separate server.\n- to have better team work with fewer bugs.\n- ability to modify / drop objects (YES!!!).\n\nAs a power user, I confirm PostgreSQL cannot be used in a profesionnal \nenvironment, with teams of developpers and heavy code (1000 tables, 500 \nviews, 200 triggers).\n\nAnd this is not a problem of \"replication\" as PostgreSQL can be optimized for \nheavy loads ... when you have the propers development tools: pgAdmin1.\n\nWhy replicate a server when optimization itself would suffice...\n\n2) Where we are going with pgAdmin 2\n\nBecause we are a community, we need more information to be able to get \npgAdmin2 in the right direction : \n\n- do we need to create our own \"fake\" production / development modes as we \ndid in pgAdmin 1. This would be quite a lot of work now and an immense \ndeception,\n- or are you going to provide us with real ALTER/DROP features. Please inform \nus of the work to be done : dependency tracking, md5sum stamping of objects, \nintegration of patches... I know some of you are fully booked on replication. \nAre others working on ALTER/DROP?\n\n3) Need for information / user needs\n\nIn the Gnome community, projects are clearly defined and owned by major \ndeveloppers. Who is working on the ALTER /DROP project? Who can I ask for \nadvice as regards pgAdmin2? Could there be some clear presentation of \nprojects on PostgreSQL website with ownership of projects.\n\nAlso, do not hesitate to set up a pool out of the to-do-list. Results would \nsurprise some of you. This was just my 0.000002 cents. The overall quality of \nPostgreSQL is excellent, so don't flame me for this mail.\n\nWe only need more information from hackers to make a better pgAdmin2.\n\nCheers,\nJean-Michel POURE\n",
"msg_date": "Wed, 13 Feb 2002 10:39:35 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "ALTER / DROP information for pgAdmin2"
}
] |
[
{
"msg_contents": "\n#include <stdlib.h>\n#include <limits.h>\n#include \"postgres.h\"\n#include \"fmgr.h\"\n#include \"numeric.h\"\n\n/*\n** PostgreSQL Numeric data type conversion functions\n**\n** Original implementation by Dann Corbit on 2-8-2002\n**\n** Since all of the digit groups *should* be in the range of 00 to 99,\n** it may seem strange that these arrays are all dimensioned with\n** UCHAR_MAX+1 (255+1=256 on most systems). But in having an enlarged\n** dimension, we are safeguarded against an accidental over run\n** due to bad data, etc. In any case, very little space is wasted\n** (less than 1K in total).\n** Instead of deciphering the digits one at a time, we just make the\n** look up table with gaps instead so that we can look them up two\n** digits [as encoded into one character] at a time.\n*/\n\n/*\nIf I collect a numeric entry like so:\n void *vp = PQgetvalue(result, r, c);\n int st = PQgetlength(result, r, c);\n char num_data[2000] ={0};\n int err;\n\nI then call the NumericUntangle() routine as follows:\n NumericUntangle(vp, st, num_data, sizeof num_data, &err);\n*/\n\nstatic const char *two_dig[UCHAR_MAX + 1] =\n{\n \"00\", \"01\", \"02\", \"03\", \"04\", \"05\", \"06\", \"07\", \"08\", \"09\", \"\", \"\",\n\"\", \"\", \"\", \"\",\n \"10\", \"11\", \"12\", \"13\", \"14\", \"15\", \"16\", \"17\", \"18\", \"19\", \"\", \"\",\n\"\", \"\", \"\", \"\",\n \"20\", \"21\", \"22\", \"23\", \"24\", \"25\", \"26\", \"27\", \"28\", \"29\", \"\", \"\",\n\"\", \"\", \"\", \"\",\n \"30\", \"31\", \"32\", \"33\", \"34\", \"35\", \"36\", \"37\", \"38\", \"39\", \"\", \"\",\n\"\", \"\", \"\", \"\",\n \"40\", \"41\", \"42\", \"43\", \"44\", \"45\", \"46\", \"47\", \"48\", \"49\", \"\", \"\",\n\"\", \"\", \"\", \"\",\n \"50\", \"51\", \"52\", \"53\", \"54\", \"55\", \"56\", \"57\", \"58\", \"59\", \"\", \"\",\n\"\", \"\", \"\", \"\",\n \"60\", \"61\", \"62\", \"63\", \"64\", \"65\", \"66\", \"67\", \"68\", \"69\", \"\", \"\",\n\"\", \"\", \"\", \"\",\n \"70\", \"71\", \"72\", \"73\", \"74\", \"75\", \"76\", \"77\", \"78\", \"79\", \"\", \"\",\n\"\", \"\", \"\", \"\",\n \"80\", \"81\", \"82\", \"83\", \"84\", \"85\", \"86\", \"87\", \"88\", \"89\", \"\", \"\",\n\"\", \"\", \"\", \"\",\n \"90\", \"91\", \"92\", \"93\", \"94\", \"95\", \"96\", \"97\", \"98\", \"99\", \"\", \"\",\n\"\", \"\", \"\", \"\",\n \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\"\n};\n\nstatic const char *two_p_dig[UCHAR_MAX + 1] =\n{\n \"0.0\", \"0.1\", \"0.2\", \"0.3\", \"0.4\", \"0.5\", \"0.6\", \"0.7\", \"0.8\",\n\"0.9\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"1.0\", \"1.1\", \"1.2\", \"1.3\", \"1.4\", \"1.5\", \"1.6\", \"1.7\", \"1.8\",\n\"1.9\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"2.0\", \"2.1\", \"2.2\", \"2.3\", \"2.4\", \"2.5\", \"2.6\", \"2.7\", \"2.8\",\n\"2.9\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"3.0\", \"3.1\", \"3.2\", \"3.3\", \"3.4\", \"3.5\", \"3.6\", \"3.7\", \"3.8\",\n\"3.9\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"4.0\", \"4.1\", \"4.2\", \"4.3\", \"4.4\", 
\"4.5\", \"4.6\", \"4.7\", \"4.8\",\n\"4.9\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"5.0\", \"5.1\", \"5.2\", \"5.3\", \"5.4\", \"5.5\", \"5.6\", \"5.7\", \"5.8\",\n\"5.9\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"6.0\", \"6.1\", \"6.2\", \"6.3\", \"6.4\", \"6.5\", \"6.6\", \"6.7\", \"6.8\",\n\"6.9\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"7.0\", \"7.1\", \"7.2\", \"7.3\", \"7.4\", \"7.5\", \"7.6\", \"7.7\", \"7.8\",\n\"7.9\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"8.0\", \"8.1\", \"8.2\", \"8.3\", \"8.4\", \"8.5\", \"8.6\", \"8.7\", \"8.8\",\n\"8.9\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"9.0\", \"9.1\", \"9.2\", \"9.3\", \"9.4\", \"9.5\", \"9.6\", \"9.7\", \"9.8\",\n\"9.9\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\"\n};\n\n\nstatic const char *p_two_dig[UCHAR_MAX + 1] =\n{\n \".00\", \".01\", \".02\", \".03\", \".04\", \".05\", \".06\", \".07\", \".08\",\n\".09\", \"\", \"\", \"\", \"\", \"\", \"\",\n \".10\", \".11\", \".12\", \".13\", \".14\", \".15\", \".16\", \".17\", \".18\",\n\".19\", \"\", \"\", \"\", \"\", \"\", \"\",\n \".20\", \".21\", \".22\", \".23\", \".24\", \".25\", \".26\", \".27\", \".28\",\n\".29\", \"\", \"\", \"\", \"\", \"\", \"\",\n \".30\", \".31\", \".32\", \".33\", \".34\", \".35\", \".36\", \".37\", \".38\",\n\".39\", \"\", \"\", \"\", \"\", \"\", \"\",\n \".40\", \".41\", \".42\", \".43\", \".44\", \".45\", \".46\", \".47\", \".48\",\n\".49\", \"\", \"\", \"\", \"\", \"\", \"\",\n \".50\", \".51\", \".52\", \".53\", \".54\", \".55\", \".56\", \".57\", \".58\",\n\".59\", \"\", \"\", \"\", \"\", \"\", \"\",\n \".60\", \".61\", \".62\", \".63\", \".64\", \".65\", \".66\", \".67\", \".68\",\n\".69\", \"\", \"\", \"\", \"\", \"\", \"\",\n \".70\", \".71\", \".72\", \".73\", \".74\", \".75\", \".76\", \".77\", \".78\",\n\".79\", \"\", \"\", \"\", \"\", \"\", \"\",\n \".80\", \".81\", \".82\", \".83\", \".84\", \".85\", \".86\", \".87\", \".88\",\n\".89\", \"\", \"\", \"\", \"\", \"\", \"\",\n \".90\", \".91\", \".92\", \".93\", \".94\", \".95\", \".96\", \".97\", \".98\",\n\".99\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\"\n};\n\n\nstatic const char *two_dig_p[UCHAR_MAX + 1] =\n{\n \"00.\", \"01.\", \"02.\", \"03.\", \"04.\", \"05.\", \"06.\", \"07.\", \"08.\",\n\"09.\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"10.\", \"11.\", \"12.\", \"13.\", \"14.\", \"15.\", \"16.\", \"17.\", \"18.\",\n\"19.\", \"\", \"\", \"\", \"\", 
\"\", \"\",\n \"20.\", \"21.\", \"22.\", \"23.\", \"24.\", \"25.\", \"26.\", \"27.\", \"28.\",\n\"29.\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"30.\", \"31.\", \"32.\", \"33.\", \"34.\", \"35.\", \"36.\", \"37.\", \"38.\",\n\"39.\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"40.\", \"41.\", \"42.\", \"43.\", \"44.\", \"45.\", \"46.\", \"47.\", \"48.\",\n\"49.\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"50.\", \"51.\", \"52.\", \"53.\", \"54.\", \"55.\", \"56.\", \"57.\", \"58.\",\n\"59.\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"60.\", \"61.\", \"62.\", \"63.\", \"64.\", \"65.\", \"66.\", \"67.\", \"68.\",\n\"69.\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"70.\", \"71.\", \"72.\", \"73.\", \"74.\", \"75.\", \"76.\", \"77.\", \"78.\",\n\"79.\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"80.\", \"81.\", \"82.\", \"83.\", \"84.\", \"85.\", \"86.\", \"87.\", \"88.\",\n\"89.\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"90.\", \"91.\", \"92.\", \"93.\", \"94.\", \"95.\", \"96.\", \"97.\", \"98.\",\n\"99.\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\",\n \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\", \"\"\n};\n/*\n#define NUMERIC_SIGN_MASK\t0xC000\n#define NUMERIC_POS\t\t0x0000\n#define NUMERIC_NEG\t\t0x4000\n#define NUMERIC_NAN\t\t0xC000\n#define NUMERIC_DSCALE_MASK 0x3FFF\n#define NUMERIC_SIGN(n)\t\t((n)->n_sign_dscale & NUMERIC_SIGN_MASK)\n#define NUMERIC_DSCALE(n)\t((n)->n_sign_dscale &\nNUMERIC_DSCALE_MASK)\n#define NUMERIC_IS_NAN(n)\t(NUMERIC_SIGN(n) != NUMERIC_POS &&\nNUMERIC_SIGN(n) != NUMERIC_NEG)\n*/\n\nvoid NumericUntangle(void *vp, int bytes_of_digits, char\n*num_data, size_t num_data_len, int *err)\n{\n /* It appears that the pointer to data returned by PQgetvalue() is\n * identical to a struct NumericData except that the varlen field is\n * missing. \n */\n typedef struct tag_numeric_overlay {\n short base_ten_exponent;\n short decimal_shift_right;\n short n_sign_dscale;\n unsigned char digits[1];\n } numeric_overlay;\n int i;\n numeric_overlay *pno;\n pno = (numeric_overlay *) vp;\n char *s = num_data;\n *err = 0;\n if (!NUMERIC_IS_NAN(pno)) {\n int sign = NUMERIC_SIGN(pno);\n int dscale = NUMERIC_DSCALE(pno);\n bytes_of_digits -= 3 * sizeof(short);\n /* If there is too much hamburger to shove into the can (IOW, we\n * have more bytes of data than we do of string) then we set the\n * error flag and fill the string with pound signs. 
(#)\n */\n if (num_data_len < (bytes_of_digits + 1) * 2) {\n memset(s, '#', num_data_len - 1);\n s[num_data_len - 1] = 0;\n *err = 1;\n return;\n }\n if (sign)\n strcpy(s, \"-\");\n else\n s[0] = 0;\n if (pno->base_ten_exponent < 0) {\n int slen;\n int zcount = abs(pno->base_ten_exponent) - 1;\n\n strcat(s, \"0.\");\n slen = strlen(s);\n memset(s + slen, '0', zcount);\n /* Terminate directly after the zero padding; writing the NUL at\n * slen + zcount + 1 would leave one uninitialized byte inside\n * the string and break the strcat calls below. */\n s[slen + zcount] = 0;\n for (i = 0; i < bytes_of_digits; i++)\n strcat(s, two_dig[pno->digits[i]]);\n } else {\n int texp = pno->base_ten_exponent;\n int bytes_to_go = bytes_of_digits;\n /* bytes_to_go keeps being decremented after the stored digit\n * groups run out, so test it with > 0 rather than != 0, and\n * pad the remaining positions with zeros (not digits[0]). */\n for (i = 0; texp > 1; texp -= 2, bytes_to_go--, i++) {\n if (bytes_to_go > 0)\n strcat(s, two_dig[pno->digits[i]]);\n else\n strcat(s, \"00\");\n }\n switch (texp) {\n case -1:\n if (bytes_to_go > 0) {\n strcat(s, p_two_dig[pno->digits[i]]);\n bytes_to_go--;\n } else\n strcat(s, \".00\");\n break;\n case 0:\n if (bytes_to_go > 0) {\n strcat(s, two_p_dig[pno->digits[i]]);\n bytes_to_go--;\n } else\n strcat(s, \"0.0\");\n break;\n case 1:\n if (bytes_to_go > 0) {\n strcat(s, two_dig_p[pno->digits[i]]);\n bytes_to_go--;\n } else\n strcat(s, \"00.\");\n break;\n }\n texp -= 2;\n for (++i; texp > 1 || bytes_to_go > 0; texp -= 2, bytes_to_go--, i++) {\n if (bytes_to_go > 0)\n strcat(s, two_dig[pno->digits[i]]);\n else\n strcat(s, \"00\");\n }\n }\n } else { /* This is a NAN */\n strcpy(s, \"#NAN#\");\n }\n}\n",
"msg_date": "Wed, 13 Feb 2002 02:58:14 -0800",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Numeric data type conversion form binary cursor -- Am I all wet,\n\tor is this about right?"
}
] |
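A side note on the lookup-table trick in the code above: the gaps in the tables (every index whose low nibble exceeds 9 maps to an empty string) imply that each digit byte is packed BCD-style, high nibble = tens digit, low nibble = units digit. A minimal standalone sketch of that assumed packing, for readers who want to see the decoding in isolation (illustrative only, not PostgreSQL source):

#include <stdio.h>

/* Decode one BCD-packed digit pair the same way the two_dig[]
 * table does, assuming high nibble = tens and low nibble = units. */
static void
show_pair(unsigned char b)
{
    char buf[3];

    buf[0] = (char) ('0' + (b >> 4));
    buf[1] = (char) ('0' + (b & 0x0F));
    buf[2] = '\0';
    printf("byte 0x%02X -> \"%s\"\n", b, buf);
}

int
main(void)
{
    show_pair(0x00);            /* prints "00" */
    show_pair(0x42);            /* prints "42" */
    show_pair(0x99);            /* prints "99" */
    return 0;
}

The 256-entry tables trade a little space for skipping this arithmetic on every byte, which is the point of the comment about looking up two digits at a time.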
[
{
"msg_contents": "\nMorning all ...\n\n\tCan someone explain to me how I should read this:\n\n tablename | attname | avg_width | n_distinct\n-------------------------+----------------------------+-----------+------------\n iwantu_pro_week | uid | 8 | -1\n iwantu_pro_week | pro_week_agent | 9 | 9\n iwantu_pro_week | pro_week_time | 8 | -0.976321\n iwantu_pro_week | pro_week_post | 5 | 2\n iwantu_pro_week | pro_week_club | 2 | 1\n\nn_distinct, I'm taking it, of -1 means its a UNIQUE INDEX, and the\npositive numbers are exact #'s ... but, -0.976321? :)\n\nAlso, how is the avg_width interpreted?\n\n",
"msg_date": "Wed, 13 Feb 2002 09:31:30 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "pg_stats explained ... ?"
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> n_distinct, I'm taking it, of -1 means its a UNIQUE INDEX, and the\n> positive numbers are exact #'s ... but, -0.976321? :)\n\n\"Almost unique\", evidently. See stadistinct at\nhttp://www.ca.postgresql.org/users-lounge/docs/7.2/postgres/catalog-pg-statistic.html\n\n> Also, how is the avg_width interpreted?\n\nJust what you'd expect: average stored width, in bytes, of non-NULL\nentries in the column.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Feb 2002 11:06:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_stats explained ... ? "
}
] |
[
{
"msg_contents": "\nOkay, if I'm understanding pg_stats at all, which I may not be, n_distinct\nshould represent # of distinct values in that row, no?\n\nBut, I have one field that has 5 distinct values:\n\niwantu=# select distinct(profiles_faith) from iwantu_profiles;\n profiles_faith\n----------------\n 0\n 1\n 2\n 7\n 8\n(5 rows)\n\nBut pg_stats is reporting 1:\n\n tablename | attname | avg_width | n_distinct\n-----------------+------------------------+-----------+------------\n iwantu_profiles | profiles_faith | 2 | 1\n\nSo am I reading n_distinct wrong?\n\n",
"msg_date": "Wed, 13 Feb 2002 09:53:43 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "\"Bug\" in statistics for v7.2?"
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> Okay, if I'm understanding pg_stats at all, which I may not be, n_distinct\n> should represent # of distinct values in that row, no?\n> But, I have one field that has 5 distinct values:\n> But pg_stats is reporting 1:\n\nThe pg_stats values are only, um, statistical. If 99.9% of the table is\nthe same value and the other four values appear only once or twice, it's\ncertainly possible for ANALYZE's sample to include only the common value\nand miss the rare ones. AFAIK that will not break anything; if you have\nan example where the planner seems to be fooled because of this, let's\nsee it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Feb 2002 11:11:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"Bug\" in statistics for v7.2? "
},
{
"msg_contents": "\nThat explains it ...\n\n profiles_faith | count\n----------------+--------\n 0 | 485938\n 1 | 2\n 2 | 6\n 7 | 2\n 8 | 21\n(5 rows)\n\nCool, another waste of space *sigh*\n\nthanks ...\n\n\nOn Wed, 13 Feb 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > Okay, if I'm understanding pg_stats at all, which I may not be, n_distinct\n> > should represent # of distinct values in that row, no?\n> > But, I have one field that has 5 distinct values:\n> > But pg_stats is reporting 1:\n>\n> The pg_stats values are only, um, statistical. If 99.9% of the table is\n> the same value and the other four values appear only once or twice, it's\n> certainly possible for ANALYZE's sample to include only the common value\n> and miss the rare ones. AFAIK that will not break anything; if you have\n> an example where the planner seems to be fooled because of this, let's\n> see it.\n>\n> \t\t\tregards, tom lane\n>\n\n",
"msg_date": "Wed, 13 Feb 2002 12:15:46 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: \"Bug\" in statistics for v7.2? "
}
] |
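Tom's point about sampling can be made quantitative with Marc's counts: the table has 485,969 rows, of which only 31 carry a value other than 0. Approximating the 3,000-row sample as independent draws, the chance that it misses every rare row is about (1 - 3000/485969)^31, roughly 0.83, so most ANALYZE runs will legitimately report a single distinct value. A back-of-the-envelope check (numbers taken from the thread; the independence assumption slightly overstates the true without-replacement probability):

#include <math.h>
#include <stdio.h>

int
main(void)
{
    double  N = 485969.0;   /* total rows: 485938 + 2 + 6 + 2 + 21 */
    double  n = 3000.0;     /* default ANALYZE sample size */
    double  rare = 31.0;    /* rows holding a value other than 0 */
    double  p_all_common = pow(1.0 - n / N, rare);

    printf("P(sample sees only the common value) ~= %.3f\n", p_all_common);
    return 0;
}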
[
{
"msg_contents": "we've had a crash of our postgres database last night, it seems that\nit's because we hit the max-file limit. I'm at a point where I cannot\nstart up my database, and I don't know if there is any way for me to\nrecover it without restoring from a backup. All i want to do is recover\nfrom this and be able to start back up. Any help please?\n\nWhen I try to start the postmaster, i get:\n\n[postgres@pgdb postgres]$ /usr/pg713/bin/postmaster -N 60 -i -F -B 2048\n-D/moby/pgsql\nDEBUG: database system shutdown was interrupted at 2002-02-12 21:38:11\nCST\nDEBUG: CheckPoint record at (2, 4255157328)\nDEBUG: Redo record at (2, 4255157328); Undo record at (0, 0); Shutdown\nTRUE\nDEBUG: NextTransactionId: 25164741; NextOid: 51008237\nDEBUG: database system was not properly shut down; automatic recovery\nin progress...\nDEBUG: redo starts at (2, 4255157392)\nDEBUG: ReadRecord: record with zero len at (2, 4255394508)\nDEBUG: redo done at (2, 4255394472)\nFATAL 2: XLogFlush: request is not satisfied\n/usr/pg713/bin/postmaster: Startup proc 6119 exited with status 512 -\nabort\n[postgres@pgdb postgres]$ \n\n\nI'm using RH7.2, Dual Xeon 1.8ghz 2gb memroy and postgrep 7.1.3. I've\nincluded the first error messages from the postgres log, and then the\nlast messages where the database fails to start up. \n\nERROR: Load of file /usr/pg713/lib/plpgsql.so failed:\n/usr/pg713/lib/plpgsql.so: cannot open shared object file: Too many open\nfiles in system\nFATAL 2: cannot read block 767 of pg_log: Bad file descriptor\nServer process (pid 27459) exited with status 512 at Wed Feb 13 01:29:09\n2002\nTerminating any active server processes...\nNOTICE: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend died\nabnormally and possibly corrupted shared memory.\n I have rolled back the current transaction and am going to\nterminate your database system connection and exit.\n Please reconnect to the database system and repeat your query.\n\n\n... and then later on, things got really bad ...\n\nServer processes were terminated at Wed Feb 13 01:46:40 2002\nReinitializing shared memory and semaphores\nDEBUG: database system was interrupted at 2002-02-13 01:45:33 CST\nDEBUG: CheckPoint record at (2, 4255157328)\nDEBUG: Redo record at (2, 4255157328); Undo record at (0, 0); Shutdown\nTRUE\nDEBUG: NextTransactionId: 25164741; NextOid: 51008237\nDEBUG: database system was not properly shut down; automatic recovery\nin progress...\nDEBUG: redo starts at (2, 4255157392)\nDEBUG: ReadRecord: record with zero len at (2, 4255394508)\nDEBUG: redo done at (2, 4255394472)\nFATAL 2: XLogFlush: request is not satisfied\n/usr/pg713/bin/postmaster: Startup proc 27607 exited with status 512 -\nabort\nDEBUG: database system shutdown was interrupted at 2002-02-13 01:46:40\nCST\nDEBUG: CheckPoint record at (2, 4255157328)\nDEBUG: Redo record at (2, 4255157328); Undo record at (0, 0); Shutdown\nTRUE\nDEBUG: NextTransactionId: 25164741; NextOid: 51008237\nDEBUG: database system was not properly shut down; automatic recovery\nin progress...\nDEBUG: redo starts at (2, 4255157392)\nDEBUG: ReadRecord: record with zero len at (2, 4255394508)\nDEBUG: redo done at (2, 4255394472)\nFATAL 2: XLogFlush: request is not satisfied\n/usr/pg713/bin/postmaster: Startup proc 28181 exited with status 512 -\nabort\n\n..\n..\n\n\n",
"msg_date": "13 Feb 2002 09:14:40 -0700",
"msg_from": "Brian Hirt <bhirt@mobygames.com>",
"msg_from_op": true,
"msg_subject": "FATAL 2:�� XLogFlush: request is not satisfied"
},
{
"msg_contents": "Brian Hirt <bhirt@mobygames.com> writes:\n> DEBUG: redo done at (2, 4255394472)\n> FATAL 2: XLogFlush: request is not satisfied\n\nOh, that old thing again :-(\n\nWhat I would recommend is (1) run contrib/pg_resetxlog to get back into\na state where you can start the database. Then (2) dump, initdb, reload\nto be sure you are in a good state. You might be able to get away\nwithout step 2 but I'm not sure about it.\n\nBTW, I think this particular problem is fixed in 7.2, or at least much\nameliorated. As long as you have to dump/reload anyway, might be a good\ntime to update ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Feb 2002 11:40:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] FATAL 2: XLogFlush: request is not satisfied "
},
{
"msg_contents": "Hi,\n\n> FATAL 2: XLogFlush: request is not satisfied\n\nTry pg_resetxlog from contrib.\n\ngreetings,\nBjoern\n\n\n",
"msg_date": "Wed, 13 Feb 2002 18:00:36 +0100",
"msg_from": "\"Bjoern Metzdorf\" <bm@turtle-entertainment.de>",
"msg_from_op": false,
"msg_subject": "Re: FATAL 2:XX XLogFlush: request is not satisfied"
}
] |
[
{
"msg_contents": "Gordon Runkle <gar@www.integrated-dynamics.com> writes:\n> You can retrieve the dump of my data at: [snipped]\n\nThanks. Indeed, the first time I did an ANALYZE I got:\n\n tablename | attname | null_frac | avg_width | n_distinct | most_common_vals | most_common_freqs | histogram_bounds | correlation\n-----------+---------+-----------+-----------+------------+-------------------------------+---------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------\n tom_help | tdnr | 0 | 17 | 56484 | {0557088000957,0557700369880} | {0.000666667,0.000666667} | {0551000386411,0551504108858,0557011074656,0557050633939,0557111430036,0557151012769,0557179871119,0557698138188,0557750158740,0557783053444,0558980779763} | -0.199108\n\nNow, I happen to know that the default size of ANALYZE's statistical\nsample is 3000 rows. What evidently happened here is that the\nstatistical sampling picked up two values appearing twice (namely,\n0557088000957 and 0557700369880); the given frequencies for these two\nvalues establish that they appeared twice in the 3000-row sample.\nSince no other values are mentioned in most_common_vals, the remaining\n2996 samples must have been values that appeared only once.\n\nHaving computed these raw statistics, ANALYZE has to try to extrapolate the\nnumber of distinct values in the whole table. What it's using for the\npurpose is an equation I found in\n\n\t\"Random sampling for histogram construction: how much is enough?\"\n\tby Surajit Chaudhuri, Rajeev Motwani and Vivek Narasayya, in\n\tProceedings of ACM SIGMOD International Conference on Management\n\tof Data, 1998, Pages 436-447.\n\nnamely\n\n\t\tsqrt(n/r) * max(f1,1) + f2 + f3 + ...\n\n\t where fk is the number of distinct values that occurred\n\t exactly k times in our sample of r rows (from a total of n).\n\nAnd indeed you get 56484 when you plug in these numbers. So the code is\noperating as designed, and we can't complain that the sample is an\nunreasonable sample given the true underlying distribution. We have to\nblame the equation: evidently this estimation equation doesn't apply\nvery well to nearly-unique columns.\n\nI had already modified the Chaudhuri approach slightly: if the ANALYZE\nsample contains no duplicate values at all (ie, f1=r, f2=f3=f4=...=0)\nthen their equation reduces to sqrt(n*r), but actually ANALYZE assumes\nthe column is unique (ie, n distinct values, not sqrt(n*r)), which seems\na lot better assumption in practice. The runs where you got n_distinct\nequal to -1 are presumably those where the ANALYZE sample chanced to\ncontain no duplicates.\n\nI am thinking that we need to find another estimator equation, or at\nleast shade away from their equation when f1 is close to r. Ideally the\nestimate for f1=r should be a logical extrapolation of the curve for f1\nclose to r, but right now it's quite discontinuous.\n\nThere was some previous discussion about this, cf the thread at\nhttp://archives.postgresql.org/pgsql-general/2001-10/msg01032.php\nbut nothing really emerged on how to do better. The Chaudhuri paper\npoints out that estimating total number of distinct values from a sample\nis inherently a hard problem and subject to large estimation errors,\nso it may be that we can't do a lot better. I think we should be wary\nof ad-hoc answers, anyhow. 
Something with a little math behind it would\nmake me feel more comfortable.\n\nAnyone have any thoughts on how to do better?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Feb 2002 13:02:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Odd statistics behaviour in 7.2 "
},
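For readers following along, the estimator Tom quotes transcribes into a few lines of C. This is a sketch only; the function name and the f[] layout are illustrative, not PostgreSQL's actual code:

#include <math.h>
#include <stdio.h>

/*
 * Chaudhuri/Motwani/Narasayya estimator quoted above:
 *     sqrt(n/r) * max(f1,1) + f2 + f3 + ...
 * where f[k] counts the distinct values seen exactly k times in a
 * sample of r rows from a table of n rows (f[] is indexed 1..maxk).
 */
static double
estimate_ndistinct(const int f[], int maxk, int r, double n)
{
    double  est = sqrt(n / (double) r) * (f[1] > 1 ? f[1] : 1);
    int     k;

    for (k = 2; k <= maxk; k++)
        est += f[k];
    return floor(est + 0.5);
}

int
main(void)
{
    int f[3] = {0, 2996, 2};    /* the sample above: 2996 singletons, 2 doubletons */

    printf("estimate: %.0f\n", estimate_ndistinct(f, 2, 3000, 1066673.0));
    return 0;
}

With Gordon's row count this prints an estimate in the mid-56,000s, which is how a figure like 56484 arises from a nearly unique column (the small difference from Tom's number comes from the exact total-row figure ANALYZE used).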
{
"msg_contents": "On Wed, 2002-02-13 at 13:02, Tom Lane wrote:\n\n[ Lots of enlightening info ]\n\nThanks, Tom!\n\nWould it be fair to say that the correct workaround for now would\nbe to use ALTER TABLE SET STATISTICS on columns of interest which have\nthis near-unique characteristic?\n\nDoes ALTER TABLE SET STATISTICS only increase the histogram size, or\ndoes it also cause more rows to be sampled? If not, how would one\nincrease the sample size (assuming this would be desirable)?\n\nI have quite a few tables that have columns of interest with near-unique\nvalues, so I'm keenly interested. I'm sorry I didn't have time to do all\nthis testing during the beta phase, there just wasn't time. :-(\n\nThanks again,\n\nGordon.\n-- \n\"Far and away the best prize that life has to offer\n is the chance to work hard at work worth doing.\"\n -- Theodore Roosevelt\n\n\n",
"msg_date": "13 Feb 2002 16:10:04 -0500",
"msg_from": "\"Gordon A. Runkle\" <gar@integrated-dynamics.com>",
"msg_from_op": false,
"msg_subject": "Re: Odd statistics behaviour in 7.2"
},
{
"msg_contents": "\"Gordon A. Runkle\" <gar@integrated-dynamics.com> writes:\n> Would it be fair to say that the correct workaround for now would\n> be to use ALTER TABLE SET STATISTICS on columns of interest which have\n> this near-unique characteristic?\n\nYeah, that's probably the best we can do until we can think of a better\nestimation equation.\n\n> Does ALTER TABLE SET STATISTICS only increase the histogram size, or\n> does it also cause more rows to be sampled?\n\nBoth. The Chaudhuri paper I referred to has some math purporting to\nprove that the required sample size is directly proportional to the\nhistogram size, for fixed relative error in the histogram boundaries.\nSo I made the same parameter control both.\n\nActually the sample size is driven by the largest SET STATISTICS value\nfor any column of the table. So you can pick which one you think a\nlarger histogram would be most useful for; it doesn't have to be the\nsame column that's got the bad-number-of-distinct-values problem.\nWhich columns, if any, do you do range queries on? Those would be the\nones where a bigger histogram would be useful.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Feb 2002 16:34:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Odd statistics behaviour in 7.2 "
},
{
"msg_contents": "\"Gordon A. Runkle\" <gar@integrated-dynamics.com> writes:\n> [ tale of poor statistical results in a near-unique column ]\n\nAfter some further digging in the literature I have come across a\nnumber-of-distinct-values estimator that I like better than Chaudhuri's.\nIt runs like this:\n\n n*d / (n - f1 + f1*n/N)\n where f1 is the number of distinct values that occurred\n exactly once in our sample of n rows (from a total of N),\n and d is the total number of distinct values in the sample.\n\nThe nice properties this has include:\n\n1. At f1=0 (all values in sample occurred more than once), the estimate\nreduces to d, which I had already determined to be a sensible result.\n\n2. At f1=n (all values were distinct, hence also d=n), the estimate\nreduces to N, which again is the most reasonable estimate.\n\n3. The equation is numerically stable even when n is much smaller than\nN, because the only cancellation is in the term (n - f1) which we can\ncompute exactly. A lot of the other equations I looked at depend on\nseries like (1 - n/N)**i which are going to be really nasty when n/N\nis tiny.\n\nIn particular, point 2 means that this equation should perform\nreasonably well for nearly-unique columns (f1 approaching n),\nwhich was the case you were seeing bad results for.\n\nAttached is a patch that implements this revised equation. Would\nappreciate any feedback...\n\n\t\t\tregards, tom lane\n\n*** src/backend/commands/analyze.c.orig\tSat Jan 5 19:37:44 2002\n--- src/backend/commands/analyze.c\tFri Feb 15 18:46:45 2002\n***************\n*** 1009,1018 ****\n \t\t{\n \t\t\t/*----------\n \t\t\t * Estimate the number of distinct values using the estimator\n! \t\t\t * proposed by Chaudhuri et al (see citation above). This is\n! \t\t\t *\t\tsqrt(n/r) * max(f1,1) + f2 + f3 + ...\n! \t\t\t * where fk is the number of distinct values that occurred\n! \t\t\t * exactly k times in our sample of r rows (from a total of n).\n \t\t\t * We assume (not very reliably!) that all the multiply-occurring\n \t\t\t * values are reflected in the final track[] list, and the other\n \t\t\t * nonnull values all appeared but once. (XXX this usually\n--- 1009,1023 ----\n \t\t{\n \t\t\t/*----------\n \t\t\t * Estimate the number of distinct values using the estimator\n! \t\t\t * proposed by Haas and Stokes in IBM Research Report RJ 10025:\n! \t\t\t *\t\tn*d / (n - f1 + f1*n/N)\n! \t\t\t * where f1 is the number of distinct values that occurred\n! \t\t\t * exactly once in our sample of n rows (from a total of N),\n! \t\t\t * and d is the total number of distinct values in the sample.\n! \t\t\t * This is their Duj1 estimator; the other estimators they\n! \t\t\t * recommend are considerably more complex, and are numerically\n! \t\t\t * very unstable when n is much smaller than N.\n! \t\t\t *\n \t\t\t * We assume (not very reliably!) that all the multiply-occurring\n \t\t\t * values are reflected in the final track[] list, and the other\n \t\t\t * nonnull values all appeared but once. (XXX this usually\n***************\n*** 1021,1032 ****\n \t\t\t *----------\n \t\t\t */\n \t\t\tint\t\t\tf1 = nonnull_cnt - summultiple;\n! \t\t\tdouble\t\tterm1;\n \n! \t\t\tif (f1 < 1)\n! \t\t\t\tf1 = 1;\n! \t\t\tterm1 = sqrt(totalrows / (double) numrows) * f1;\n! \t\t\tstats->stadistinct = floor(term1 + nmultiple + 0.5);\n \t\t}\n \n \t\t/*\n--- 1026,1044 ----\n \t\t\t *----------\n \t\t\t */\n \t\t\tint\t\t\tf1 = nonnull_cnt - summultiple;\n! \t\t\tint\t\t\td = f1 + nmultiple;\n! 
\t\t\tdouble\t\tnumer, denom, stadistinct;\n \n! \t\t\tnumer = (double) numrows * (double) d;\n! \t\t\tdenom = (double) (numrows - f1) +\n! \t\t\t\t(double) f1 * (double) numrows / totalrows;\n! \t\t\tstadistinct = numer / denom;\n! \t\t\t/* Clamp to sane range in case of roundoff error */\n! \t\t\tif (stadistinct < (double) d)\n! \t\t\t\tstadistinct = (double) d;\n! \t\t\tif (stadistinct > totalrows)\n! \t\t\t\tstadistinct = totalrows;\n! \t\t\tstats->stadistinct = floor(stadistinct + 0.5);\n \t\t}\n \n \t\t/*\n***************\n*** 1313,1332 ****\n \t\t{\n \t\t\t/*----------\n \t\t\t * Estimate the number of distinct values using the estimator\n! \t\t\t * proposed by Chaudhuri et al (see citation above). This is\n! \t\t\t *\t\tsqrt(n/r) * max(f1,1) + f2 + f3 + ...\n! \t\t\t * where fk is the number of distinct values that occurred\n! \t\t\t * exactly k times in our sample of r rows (from a total of n).\n \t\t\t * Overwidth values are assumed to have been distinct.\n \t\t\t *----------\n \t\t\t */\n \t\t\tint\t\t\tf1 = ndistinct - nmultiple + toowide_cnt;\n! \t\t\tdouble\t\tterm1;\n \n! \t\t\tif (f1 < 1)\n! \t\t\t\tf1 = 1;\n! \t\t\tterm1 = sqrt(totalrows / (double) numrows) * f1;\n! \t\t\tstats->stadistinct = floor(term1 + nmultiple + 0.5);\n \t\t}\n \n \t\t/*\n--- 1325,1356 ----\n \t\t{\n \t\t\t/*----------\n \t\t\t * Estimate the number of distinct values using the estimator\n! \t\t\t * proposed by Haas and Stokes in IBM Research Report RJ 10025:\n! \t\t\t *\t\tn*d / (n - f1 + f1*n/N)\n! \t\t\t * where f1 is the number of distinct values that occurred\n! \t\t\t * exactly once in our sample of n rows (from a total of N),\n! \t\t\t * and d is the total number of distinct values in the sample.\n! \t\t\t * This is their Duj1 estimator; the other estimators they\n! \t\t\t * recommend are considerably more complex, and are numerically\n! \t\t\t * very unstable when n is much smaller than N.\n! \t\t\t *\n \t\t\t * Overwidth values are assumed to have been distinct.\n \t\t\t *----------\n \t\t\t */\n \t\t\tint\t\t\tf1 = ndistinct - nmultiple + toowide_cnt;\n! \t\t\tint\t\t\td = f1 + nmultiple;\n! \t\t\tdouble\t\tnumer, denom, stadistinct;\n \n! \t\t\tnumer = (double) numrows * (double) d;\n! \t\t\tdenom = (double) (numrows - f1) +\n! \t\t\t\t(double) f1 * (double) numrows / totalrows;\n! \t\t\tstadistinct = numer / denom;\n! \t\t\t/* Clamp to sane range in case of roundoff error */\n! \t\t\tif (stadistinct < (double) d)\n! \t\t\t\tstadistinct = (double) d;\n! \t\t\tif (stadistinct > totalrows)\n! \t\t\t\tstadistinct = totalrows;\n! \t\t\tstats->stadistinct = floor(stadistinct + 0.5);\n \t\t}\n \n \t\t/*\n",
"msg_date": "Fri, 15 Feb 2002 19:09:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Odd statistics behaviour in 7.2 "
},
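[To experiment with the Duj1 estimator outside the backend, the arithmetic in the message above can be packaged as a small standalone function. This is a minimal C sketch mirroring the patch; the function name and parameter packaging are illustrative, not actual backend symbols.]

#include <math.h>

/*
 * Duj1 estimator: n*d / (n - f1 + f1*n/N), clamped the same way as
 * the patch above.
 */
static double
duj1_estimate(int numrows,       /* n: rows in sample */
              int f1,            /* distinct values seen exactly once */
              int nmultiple,     /* distinct values seen more than once */
              double totalrows)  /* N: rows in table */
{
    int     d = f1 + nmultiple;  /* distinct values in sample */
    double  numer = (double) numrows * (double) d;
    double  denom = (double) (numrows - f1) +
                    (double) f1 * (double) numrows / totalrows;
    double  stadistinct = numer / denom;

    /* Clamp to sane range in case of roundoff error */
    if (stadistinct < (double) d)
        stadistinct = (double) d;
    if (stadistinct > totalrows)
        stadistinct = totalrows;
    return floor(stadistinct + 0.5);
}

[At f1 = 0 this returns d, and at f1 = numrows (hence d = numrows) the denominator reduces to n*n/N and the result to N, matching properties 1 and 2 claimed above.]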
{
"msg_contents": "On Fri, 2002-02-15 at 19:09, Tom Lane wrote:\n> \"Gordon A. Runkle\" <gar@integrated-dynamics.com> writes:\n> > [ tale of poor statistical results in a near-unique column ]\n> [ new algorithm and patch ]\n\nHi Tom,\n\nThis works much better (I did SET STATISTICS back to 10). Not always\n'-1', but almost always negative.\n\nIs \"-0.503824\" the same as \"503824 with a predicted increase in the\nnumber of distinct values\" (as opposed to using \"-503824\")?\n\nI checked a couple of other tables for which my field is a bit less\nnearly-unique, and they have better stats now also. I've attached a\nfile with distributions and stats, if you're interested.\n\nAnd, getting to where the rubber meets the road, my queries are running\nwell again!\n\nI'm running a batch of other queries (which were unaffected by this\nproblem) overnight to see if they still work well.\n\nAre you planning to include this patch in v7.2.1, or would it require\ntoo much testing by others?\n\nThanks for all your hard work, it is greatly appreciated,\n\nGordon.\n-- \n\"Far and away the best prize that life has to offer\n is the chance to work hard at work worth doing.\"\n -- Theodore Roosevelt",
"msg_date": "16 Feb 2002 01:16:32 -0500",
"msg_from": "\"Gordon A. Runkle\" <gar@integrated-dynamics.com>",
"msg_from_op": false,
"msg_subject": "Re: Odd statistics behaviour in 7.2"
},
{
"msg_contents": "\"Gordon A. Runkle\" <gar@integrated-dynamics.com> writes:\n> Is \"-0.503824\" the same as \"503824 with a predicted increase in the\n> number of distinct values\" (as opposed to using \"-503824\")?\n\nNo, it means \"0.503824 times the number of rows in the table\".\nAlthough your table was ~ 1 million rows, so that's approximately\nright in your case.\n\nGiven the stats you cited, the exactly correct stadistinct value would\nbe -0.9348085. In testing I got -1, -0.808612, -0.678641, or once\n-0.584611 from your data, depending on whether the sample chanced to\nfind none, one, two, or three repeated values. Any of these strike me\nas plenty close enough for statistical purposes. But the Chaudhuri\nestimator was off by more than a factor of 10.\n\n> Are you planning to include this patch in v7.2.1, or would it require\n> too much testing by others?\n\nI'm going to put it in 7.2.1 unless there are objections.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 16 Feb 2002 12:17:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Odd statistics behaviour in 7.2 "
},
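[The "-0.503824" convention Tom describes can be made concrete with a short sketch. Assuming only what is stated here -- a negative stadistinct means that fraction of the current row count -- resolving it might look like this; the function name is made up for illustration.]

#include <math.h>

/*
 * Resolve a stored stadistinct to an absolute distinct-value count.
 * Negative values mean "this fraction of the current row count", so
 * -0.503824 with reltuples = 1,000,000 gives ~503,824.
 */
static double
resolved_ndistinct(double stadistinct, double reltuples)
{
    if (stadistinct < 0.0)      /* scales with table size */
        return floor(-stadistinct * reltuples + 0.5);
    return stadistinct;         /* fixed count */
}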
{
"msg_contents": "BTW, while we're thinking about this, there's another aspect of the\nnumber-of-distinct-values estimator that could use some peer review.\nThat's the decision whether to assume that the number of distinct\nvalues in a column is fixed, or will vary with the size of the\ntable. (For example, in a boolean column, ndistinct should clearly\nbe 2 no matter how large the table gets; but in any unique column\nndistinct should equal the table size.) This is important since there\nare times when we update the table size estimate (pg_class.reltuples)\nwithout recomputing the statistics in pg_statistic. The \"negative\nstadistinct\" convention in pg_statistic is used to signal which case\nANALYZE thinks applies.\n\nPresently the decision is pretty simplistic: if the estimated number\nof distinct values is more than 10% of the number of rows, then assume\nthe number of distinct values scales with the number of rows.\n\nI believe that some rule of this form is reasonable, but the 10%\nthreshold was just picked out of the air. Can anyone suggest an\nargument in favor of some other value, or a better way to look at it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 16 Feb 2002 12:57:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Odd statistics behaviour in 7.2 "
},
{
"msg_contents": "On Fri, Feb 15, 2002 at 07:09:42PM -0500,\n Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> 3. The equation is numerically stable even when n is much smaller than\n> N, because the only cancellation is in the term (n - f1) which we can\n> compute exactly. A lot of the other equations I looked at depend on\n> series like (1 - n/N)**i which are going to be really nasty when n/N\n> is tiny.\n\nYou can work with the above for n << N by using power series. For n << N,\n(1 - n/N)**i ~= 1 - in/N.\n",
"msg_date": "Sat, 16 Feb 2002 14:20:28 -0600",
"msg_from": "Bruno Wolff III <bruno@wolff.to>",
"msg_from_op": false,
"msg_subject": "Re: Odd statistics behaviour in 7.2"
},
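[For what it's worth, one standard way to get accuracy in both regimes Bruno and Tom mention, without hand-rolling a power series, is to evaluate the power through log1p(). A sketch using the C99 math library:]

#include <math.h>

/*
 * Compute (1 - n/N)^i without forming (1 - n/N) directly, so no
 * precision is lost when n/N is tiny.  For n << N this agrees with
 * the first-order expansion 1 - i*n/N; it also behaves for n/N
 * approaching 1.
 */
static double
survival_pow(double n, double N, double i)
{
    return exp(i * log1p(-n / N));
}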
{
"msg_contents": "Bruno Wolff III <bruno@wolff.to> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> A lot of the other equations I looked at depend on\n>> series like (1 - n/N)**i which are going to be really nasty when n/N\n>> is tiny.\n\n> You can work with the above for n << N by using power series. For n << N,\n> (1 - n/N)**i ~= 1 - in/N.\n\nYeah, but the pain-in-the-neck aspect comes from the fact that n/N isn't\nnecessarily tiny --- it could approach 1. So you'd need code smart\nenough to handle both cases with accuracy.\n\nWe could probably do this if it proves necessary, but I'd like to stick\nto simpler equations if at all possible.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 16 Feb 2002 15:34:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Odd statistics behaviour in 7.2 "
}
] |
[
{
"msg_contents": "NAMEDATALEN's benchmarked are 32, 64, 128 and 512. Attached is the\nshell script I used to do it.\n\nFirst row of a set is the time(1) for the pgbench -i run, second is\nthe actual benchmark. Aside from the 'real' time of 64 there is a\ndistinct increase in time required, but not significant.\n\nBenchmarks were run for 3000 transactions with scale factor of 5, but\nonly 1 client. If there is a preferred setting for pgbench I can do\nan overnight run with it. Machine is a dual 500Mhz celery with 384MB\nram and 2 IBM Deskstars in Raid 0, and a seperate system drive.\n\nAnything but 32 fails the 'name' check in the regression tests -- I\nassume this is expected?\n\nDon't know why 64 has a high 'real' time, but the system times are\nappropriate.\n\nNAMEDATALEN: 32\n\n158.97 real 1.81 user 0.14 sys\n\n80.58 real 1.30 user 3.81 sys\n\n\n\nNAMEDATALEN: 64\n\n248.40 real 1.85 user 0.10 sys\n\n96.36 real 1.44 user 3.86 sys\n\n\n\nNAMEDATALEN: 128\n\n156.74 real 1.84 user 0.10 sys\n\n94.36 real 1.47 user 4.01 sys\n\n\n\nNAMEDATALEN: 512\n\n157.99 real 1.83 user 0.12 sys\n\n101.14 real 1.47 user 4.23 sys\n\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.",
"msg_date": "Wed, 13 Feb 2002 15:07:50 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "NAMEDATALEN Changes"
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> [ some hard data ]\n\nGreat! The numbers for namedatalen = 64 seem like an outlier; perhaps\nsomething else going on on your system? Did you try more than one run?\n\n> Anything but 32 fails the 'name' check in the regression tests -- I\n> assume this is expected?\n\nRight. If you eyeball the actual diffs for the test you should see that\nthe diff is due to a long name not getting truncated where the test\nexpects it to be.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Feb 2002 16:53:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NAMEDATALEN Changes "
},
{
"msg_contents": "> > Great! The numbers for namedatalen = 64 seem like an outlier;\nperhaps\n> something else going on on your system? Did you try more than one\nrun?\n\nRan it again shortly after sending the email. It fell in line (mid\nway between 32 and 128) with Real time as would normally be expected.\nThe times for the other values and 64's system times were very close\nto the original so I won't bother posting them again.\n\n",
"msg_date": "Wed, 13 Feb 2002 17:07:54 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Re: NAMEDATALEN Changes "
},
{
"msg_contents": "On Wednesday 13 February 2002 21:07, Rod Taylor wrote:\n> NAMEDATALEN's benchmarked are 32, 64, 128 and 512. Attached is the\n> shell script I used to do it.\n\nAttached is a modified version for Linux, if anyone is interested.\n\nWill run it overnight out of quasi-scientific interest.\n\nNice to have an idea what kind of effect my very long NAMEDATALEN setting \n(128) has.\n\n\nYours\n\nIan Barwick",
"msg_date": "Wed, 13 Feb 2002 23:27:08 +0100",
"msg_from": "Ian Barwick <barwick@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: NAMEDATALEN Changes"
},
{
"msg_contents": "Rod Taylor writes:\n\n> NAMEDATALEN's benchmarked are 32, 64, 128 and 512. Attached is the\n> shell script I used to do it.\n\nThat's around a 15% performance loss for increasing it to 64 or 128.\nSeems pretty scary actually.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 13 Feb 2002 19:12:57 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: NAMEDATALEN Changes"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> That's around a 15% performance loss for increasing it to 64 or 128.\n> Seems pretty scary actually.\n\nSome of that could be bought back by fixing hashname() to not iterate\npast the first \\0 when calculating the hash of a NAME datum; and then\ncc_hashname could go away. Not sure how much this would buy though.\n\nLooking closely at Rod's script, I realize that the user+sys times it is\nreporting are not the backend's but the pgbench client's. So it's\nimpossible to tell from this how much of the extra cost is extra I/O and\nhow much is CPU. I'm actually quite surprised that the client side\nshows any CPU-time difference at all; I wouldn't think it ever sees any\nnull-padded NAME values.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Feb 2002 20:00:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NAMEDATALEN Changes "
},
{
"msg_contents": "On Wed, 2002-02-13 at 20:00, Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > That's around a 15% performance loss for increasing it to 64 or 128.\n> > Seems pretty scary actually.\n> \n> Some of that could be bought back by fixing hashname() to not iterate\n> past the first \\0 when calculating the hash of a NAME datum; and then\n> cc_hashname could go away. Not sure how much this would buy though.\n\nI've attached a pretty trivial patch that implements this. Instead of\nautomatically hashing NAMEDATALEN bytes, hashname() uses only strlen()\nbytes: this should improve both the common case (small identifers, 5-10\ncharacters long), as well as reduce the penalty when NAMEDATALEN is\nincreased. The patch passes the regression tests, FWIW. I didn't remove\ncc_hashname() -- I'll tackle that tomorrow unless anyone objects...\n\nI also did some pretty simple benchmarks; however, I'd appreciate it\nanyone could confirm these results.\n\npg_bench: scale factor 1, 1 client, 10000 transactions.\n\nhardware: p3-850, 384 MB RAM, slow laptop IDE disk\n\nRun 1: Patch applied, NAMEDATALEN = 32\n\n number of transactions actually processed: 10000/10000\n tps = 19.940020(including connections establishing)\n tps = 19.940774(excluding connections establishing)\n\nRun 2: Patch applied, NAMEDATALEN = 128\n\n number of transactions actually processed: 10000/10000\n tps = 20.849385(including connections establishing)\n tps = 20.850010(excluding connections establishing)\n\nRun 3: Vanilla CVS, NAMEDATALEN = 32\n(This is to check that the patch doesn't cause performance regressions\nfor the \"common case\")\n\n number of transactions actually processed: 10000/10000\n tps = 18.295418(including connections establishing)\n tps = 18.296191(excluding connections establishing)\n\nThe performance improvement @ NAMEDATALEN = 128 seems strange. As I\nsaid, these benchmarks may not be particularly accurate, so I'd suggest\nwaiting for others to contribute results before drawing any conclusions.\n\nOh, and this is my first \"real\" Pg patch, so my apologies if I've\nscrewed something up. ;-)\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC",
"msg_date": "14 Feb 2002 00:59:40 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: NAMEDATALEN Changes"
},
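[The core of Neil's change is just "hash strlen(name) bytes instead of all NAMEDATALEN bytes". A toy illustration, with djb2 standing in for the backend's hash_any(); nothing below is actual backend code.]

#include <string.h>

/*
 * Hash only the live bytes of a NAME, stopping at the first NUL, so
 * the trailing padding never enters the hash and the cost no longer
 * grows with NAMEDATALEN.
 */
static unsigned long
hashname_sketch(const char *name)
{
    unsigned long h = 5381;
    size_t        n = strlen(name);   /* stop at the first NUL */

    while (n-- > 0)
        h = h * 33 + (unsigned char) *name++;
    return h;
}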
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Wednesday 13 February 2002 23:59, Neil Conway wrote:\n> On Wed, 2002-02-13 at 20:00, Tom Lane wrote:\n\n[perf hit comment removed]\n\n>\n> I've attached a pretty trivial patch that implements this. Instead of\n> automatically hashing NAMEDATALEN bytes, hashname() uses only strlen()\n> bytes: this should improve both the common case (small identifers, 5-10\n> characters long), as well as reduce the penalty when NAMEDATALEN is\n> increased. The patch passes the regression tests, FWIW. I didn't remove\n> cc_hashname() -- I'll tackle that tomorrow unless anyone objects...\n>\n> I also did some pretty simple benchmarks; however, I'd appreciate it\n> anyone could confirm these results.\n>\n\nPlease bare with me on this as this is my first posting having any real \ncontent. �Please don't hang me out if I've overlooked anything and I'm \npointing out that I'm making a rather large assumption. Please correct as \nneeded. \n\nThe primary assumption is that the actual key lengths can be less than \nNAMEDATALEN. That is, if the string, \"shortkey\" is a valid input key (??) \nwhich provides a key length of 8-bytes as input to the hash_any() function \neven though NAMEDATALEN may be something like 128 or larger. If this \nassumption is correct, then wouldn't increasing the default input key size \n(NAMEDATALEN) beyond the maximum actual key length be a bug? That is to say, \nif we have a key with only 8-bytes of data and we iterrate over 128-bytes, \nwouldn't the resulting hash be arbitrary and invalid as it would be hashing \nmemory which is not reflective of the key being hashed?\n\nIf my assumptions are correct, then it sounds like using the strlen() \nimplementation (assuming input keys are valid C-strings) is really the proper \nimplementation short of using an adjusted min(NAMEDATALEN,strlen()) type \napproach.\n\n[snip - var NAMEDATALEN benchmark results]\n\n\nGreg\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE8a8Mg4lr1bpbcL6kRAlaxAJ47CO+ExL/ZMo/i6LDoetXrul9qqQCfQli3\nAvqN6RJjSuAH/p/mpZ8J4JY=\n=wnVM\n-----END PGP SIGNATURE-----\n",
"msg_date": "Thu, 14 Feb 2002 08:00:58 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: NAMEDATALEN Changes"
},
{
"msg_contents": "Greg Copeland <greg@CopelandConsulting.Net> writes:\n> if we have a key with only 8-bytes of data and we iterrate over 128-bytes, \n> wouldn't the resulting hash be arbitrary and invalid as it would be hashing \n> memory which is not reflective of the key being hashed?\n\nAs long as we do it *consistently*, we can do it either way. Using the\ntrailing nulls in the hash does alter the computed hash value --- but\nwe're only ever gonna compare the hash value against other hash values\ncomputed on other NAMEs by this same routine.\n\nThis all assumes that the inputs are valid NAMEs, viz strlen <\nNAMEDATALEN and no funny business beyond the first \\0. In practice,\nhowever, if a bogus NAME were handed to us we would just as soon ignore\nany characters beyond the first \\0, because the ordering comparison\noperators for NAME all do so (they're all based on strncmp), as do the\nI/O routines etc. So this change actually makes the system more\nself-consistent not less so.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Feb 2002 09:57:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NAMEDATALEN Changes "
},
{
"msg_contents": "On Thu, Feb 14, 2002 at 12:59:40AM -0500, Neil Conway wrote:\n> I've attached a pretty trivial patch that implements this. Instead of\n> automatically hashing NAMEDATALEN bytes, hashname() uses only strlen()\n> bytes: this should improve both the common case (small identifers, 5-10\n> characters long), as well as reduce the penalty when NAMEDATALEN is\n> increased. The patch passes the regression tests, FWIW. I didn't remove\n> cc_hashname() -- I'll tackle that tomorrow unless anyone objects...\n\nOkay, I've attached a new version that removes cc_hashname(). As with\nthe previous patch, this passes the regression tests. Feedback is welcome.\n\nCheers,\n\nNeil",
"msg_date": "Thu, 14 Feb 2002 13:37:43 -0500",
"msg_from": "nconway@klamath.dyndns.org (Neil Conway)",
"msg_from_op": false,
"msg_subject": "Re: NAMEDATALEN Changes"
},
{
"msg_contents": "On Wednesday 13 February 2002 23:27, Ian Barwick wrote:\n> On Wednesday 13 February 2002 21:07, Rod Taylor wrote:\n> > NAMEDATALEN's benchmarked are 32, 64, 128 and 512. Attached is the\n> > shell script I used to do it.\n>\n> Attached is a modified version for Linux, if anyone is interested.\n>\n> Will run it overnight out of quasi-scientific interest.\n>\n> Nice to have an idea what kind of effect my very long NAMEDATALEN setting\n> (128) has.\n\nBelow the probably quite uninformative results, run under Linux with 2.2.16 \non an AMD K2 350Mhz with 256MB RAM, EIDE HDs and other run of the mill\nhardware.\n\nI suspect some of the normal system jobs which usually run during the night\ncaused the wildly varying results. Whatever else, for my purposes at least\nany performance issues with differening NAMEDATALENgths are nothing much\nto worry about.\n\n\nNAMEDATALEN: 32\n220.73 real 3.39 user 0.10 sys\n110.03 real 2.77 user 4.42 sys\n\n\nNAMEDATALEN: 64\n205.31 real 3.55 user 0.08 sys\n109.76 real 2.53 user 4.18 sys\n\n\nNAMEDATALEN: 128\n224.65 real 3.35 user 0.10 sys\n121.30 real 2.60 user 3.89 sys\n\n\nNAMEDATALEN: 256\n209.48 real 3.62 user 0.11 sys\n118.90 real 3.00 user 3.88 sys\n\n\nNAMEDATALEN: 512\n204.65 real 3.36 user 0.14 sys\n115.12 real 2.54 user 3.88 sys\n\n\nIan Barwick\n",
"msg_date": "Thu, 14 Feb 2002 22:02:34 +0100",
"msg_from": "Ian Barwick <barwick@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: NAMEDATALEN Changes"
}
] |
[
{
"msg_contents": "With function privileges (or any other privileges we may come up with),\nthe natural default would be to grant access only to the owner. However,\nthis would probably break every application in the world.\n\nThe compatible alternative choice is to grant all privileges to the world\nby default, but that would seem to be kind of confusing to someone who is\nnot familiar with the history and compatibility concerns.\n\nDo we need a umask?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 13 Feb 2002 15:27:30 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Function privileges and backward compatibility"
}
] |
[
{
"msg_contents": "In my current implementation for function privileges, I have the function\npermission check somewhere down in the executor. (To be precise, the\npermission is determined when the fcache is initialized, and it's checked\nin ExecMakeFunctionResult.) Now I remembered the way SQL99 specifies\nfunction resolution, which has the permission check before the function\nresolution begins. See also\n\nhttp://archives.postgresql.org/pgsql-hackers/2002-01/msg01120.php\n\nfor the full details.\n\nThis makes some sense, because normally you'd want the parser to choose\nonly between the functions you have access to.\n\nI do have two concerns, however:\n\n1. It would lead to confusing error messages, i.e., always \"not found\"\ninstead of \"permission denied\".\n\n2. It would make a great deal of more sense if the table name resolution\nthat will be necessary in the new schema implementation were done the same\nway. But I have a feeling that this could get messy when rewrite rules\nare involved. (More generally, the whole schema resolution could get\nmessy when rewrite rules are involved.)\n\nAny comments?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 13 Feb 2002 15:28:24 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "When and where to check for function permissions"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Now I remembered the way SQL99 specifies\n> function resolution, which has the permission check before the function\n> resolution begins.\n\nThat may be what the spec says, but I think the spec is completely\nbrain-dead in this regard and should be ignored. We do not resolve\ntable names that way, why should we resolve function names?\n\nEven more to the point, what happens when someone adds or revokes\nprivileges that would affect already-planned queries? If I can still\ncall a function that is referenced by an already-planned query even\nthough the function's owner has revoked my right to do so, that is a\nbug. On the other hand, if the query continues to \"work\" but silently\ncalls a different function than I was expecting, that's not cool either.\n\nWe did some nontrivial work awhile back to ensure that table privileges\nwould be checked at execution time and not before. Function privileges\n*must* be handled the same way.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Feb 2002 17:01:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: When and where to check for function permissions "
},
{
"msg_contents": "Curiosity got me...\n\nUser 1: Create table; grant all on table to public;\n\nUser 2: select * from table for update;\n\nUser 1: revoke all on table from public;\n\nUser 2: update table set column = column + 1;\n\n\nAs long as User 2 holds a lock on the rows shouldn't the user have\naccess to the rows? I'd expect the revoke statement to be blocked by\nthe locks on those rows.\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Peter Eisentraut\" <peter_e@gmx.net>\nCc: \"PostgreSQL Development\" <pgsql-hackers@postgresql.org>\nSent: Wednesday, February 13, 2002 5:01 PM\nSubject: Re: [HACKERS] When and where to check for function\npermissions\n\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Now I remembered the way SQL99 specifies\n> > function resolution, which has the permission check before the\nfunction\n> > resolution begins.\n>\n> That may be what the spec says, but I think the spec is completely\n> brain-dead in this regard and should be ignored. We do not resolve\n> table names that way, why should we resolve function names?\n>\n> Even more to the point, what happens when someone adds or revokes\n> privileges that would affect already-planned queries? If I can\nstill\n> call a function that is referenced by an already-planned query even\n> though the function's owner has revoked my right to do so, that is a\n> bug. On the other hand, if the query continues to \"work\" but\nsilently\n> calls a different function than I was expecting, that's not cool\neither.\n>\n> We did some nontrivial work awhile back to ensure that table\nprivileges\n> would be checked at execution time and not before. Function\nprivileges\n> *must* be handled the same way.\n>\n> regards, tom lane\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n",
"msg_date": "Wed, 13 Feb 2002 17:24:01 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: When and where to check for function permissions "
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> I'd expect the revoke statement to be blocked by\n> the locks on those rows.\n\nWe could do it that way, but it doesn't seem like a really good idea to\nme --- in essence you'd be allowing a denial-of-service attack, no?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Feb 2002 17:29:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: When and where to check for function permissions "
},
{
"msg_contents": "Thought about that. But if you have a user run through your system\nand locks all the tables for a long time the least of my worries are\nthe permissions. Generally it's getting that damn user disconnected\nthen taken out back shot so that the system is moving forward again.\n\nAnyway, the point was that if Postgresql is going to be worried about\nusers running stored plans on functions they don't have permission on,\nthen a user shouldn't expect permission to be revoked in the middle of\na transaction if they hold a lock on the item.\n\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Rod Taylor\" <rbt@zort.ca>\nCc: \"Peter Eisentraut\" <peter_e@gmx.net>; \"PostgreSQL Development\"\n<pgsql-hackers@postgresql.org>\nSent: Wednesday, February 13, 2002 5:29 PM\nSubject: Re: [HACKERS] When and where to check for function\npermissions\n\n\n> \"Rod Taylor\" <rbt@zort.ca> writes:\n> > I'd expect the revoke statement to be blocked by\n> > the locks on those rows.\n>\n> We could do it that way, but it doesn't seem like a really good idea\nto\n> me --- in essence you'd be allowing a denial-of-service attack, no?\n>\n> regards, tom lane\n>\n\n",
"msg_date": "Wed, 13 Feb 2002 17:34:05 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: When and where to check for function permissions "
},
{
"msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Now I remembered the way SQL99 specifies\n> > function resolution, which has the permission check before the function\n> > resolution begins.\n>\n> That may be what the spec says, but I think the spec is completely\n> brain-dead in this regard and should be ignored.\n\nWhy?\n\n> We do not resolve table names that way, why should we resolve function\n> names?\n\nWe do not resolve table names at all.\n\n> Even more to the point, what happens when someone adds or revokes\n> privileges that would affect already-planned queries?\n\nThe query plans are invalidated.\n\n\nNote: I'm not convinced of this idea either. But proclaiming it\nbrain-dead isn't going to push me either way. You could say Unix shells\nare brain-dead, too, because they do the same thing.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 13 Feb 2002 19:26:22 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: When and where to check for function permissions "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n>> We do not resolve table names that way, why should we resolve function\n>> names?\n\n> We do not resolve table names at all.\n\nThe point is that a table you can't currently access isn't invisible.\n\nIt'd be even weirder if it were visible for some operation types and not\nothers. Among other things, that would break rules on views: you can\nnever insert into a view, so if we adopted the spec's viewpoint, you\ncould never see a view as target of INSERT and would thus never advance\nto the next step of looking for a rewrite rule for it.\n\nHere's another example that should give you pause: let's assume there's\nan UPDATE access right for functions that controls whether you can\nupdate the function definition (via CREATE OR REPLACE). Let's further\nsuppose that you have execute but not update access to function foo(int).\n\ntest=> select foo(2);\n\t-- works fine\n\ntest=> create or replace foo(int) ... etc etc ...;\n\nIf we follow the spec's lead, then CREATE OR REPLACE doesn't see the\nexisting definition at all (because it doesn't have the correct access\nrights), and so instead of the expected update, you get a new function\ndefinition. This might even appear to work, from your point of view,\nif the new definition gets entered into a namespace closer to the front\nof your search path than the original was. But other people will\ncontinue to see the old definition if they don't share your path.\n\nThe implications for ambiguous-function resolution would be no less \nbizarre and unhelpful.\n\nYou could *maybe* get away with visibility-depends-on-access-rights\nin a world with no name search paths and no function name overloading\n(although why you'd bother is not quite clear to me). In the presence\nof those features, doing it the spec's way is sheer folly.\n\nI would also ask for some positive reason why we should do this the\nspec's way; what actual user benefit does it provide to make uncallable\nfunctions invisible? I don't see one; all I see is user confusion.\n\n> You could say Unix shells\n> are brain-dead, too, because they do the same thing.\n\nThey do? How so? Last I checked, trying to execute a program I didn't\nhave exec rights to gave \"no permissions\", not \"not found\", and\ncertainly not \"use the next one down the PATH instead\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Feb 2002 19:44:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: When and where to check for function permissions "
},
{
"msg_contents": "On Wed, 13 Feb 2002, Tom Lane wrote:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> \n> > You could say Unix shells\n> > are brain-dead, too, because they do the same thing.\n> \n> They do? How so? Last I checked, trying to execute a program I didn't\n> have exec rights to gave \"no permissions\", not \"not found\", and\n> certainly not \"use the next one down the PATH instead\".\n> \n\nThat is only true if you give a fully qualified path to the program. \n\nFor example, if you try to run the command as \"ls\" (instead of /bin/ls),\nyou will get the first executable program \"ls\" in your path.\n\nIf an administrator happened to create a file (without execute) named\n\"ls\" near the beginning of your path (say /usr/local/bin), then it would\nbreak, which it does not.\n\nIf it was executable, it would get called instead by being first in the\npath. Granted some shells, eg. bash, will cache the path to the command\non it first lookup and need to be rehashed (hash -r).\n\n\t-rocco\n\n",
"msg_date": "Thu, 14 Feb 2002 14:27:52 -0500 (EST)",
"msg_from": "Rocco Altier <roccoa@routescape.com>",
"msg_from_op": false,
"msg_subject": "Re: When and where to check for function permissions "
}
] |
[
{
"msg_contents": "These macros are from geo_decls.h:\n#define EPSILON\t\t\t\t\t1.0E-06\n\n#ifdef EPSILON\n#define FPzero(A)\t\t\t\t(fabs(A) <= EPSILON)\n#define FPeq(A,B)\t\t\t\t(fabs((A) - (B)) <=\nEPSILON)\n#define FPne(A,B)\t\t\t\t(fabs((A) - (B)) >\nEPSILON)\n#define FPlt(A,B)\t\t\t\t((B) - (A) > EPSILON)\n#define FPle(A,B)\t\t\t\t((A) - (B) <= EPSILON)\n#define FPgt(A,B)\t\t\t\t((A) - (B) > EPSILON)\n#define FPge(A,B)\t\t\t\t((B) - (A) <= EPSILON)\n#else\n#define FPzero(A)\t\t\t\t((A) == 0)\n#define FPeq(A,B)\t\t\t\t((A) == (B))\n#define FPne(A,B)\t\t\t\t((A) != (B))\n#define FPlt(A,B)\t\t\t\t((A) < (B))\n#define FPle(A,B)\t\t\t\t((A) <= (B))\n#define FPgt(A,B)\t\t\t\t((A) > (B))\n#define FPge(A,B)\t\t\t\t((A) >= (B))\n#endif\n\nBut to compare floating point, those are simply wrong. If the values\nare both very small or both very large, the method fails.\nHere is a more reliable way to make a comparison:\n\n#include <float.h>\ndouble double_compare(double d1, double d2)\n{\n if (d1 > d2)\n if ((d1 - d2) < fabs(d1 * DBL_EPSILON))\n return 0;\n else\n return 1;\n if (d1 < d2)\n if ((d2 - d1) < fabs(d2 * DBL_EPSILON))\n return 0;\n else\n return -1;\n return 0;\n}\n\nfloat float_compare(float d1, float d2)\n{\n if (d1 > d2)\n if ((d1 - d2) < fabs(d1 * FLT_EPSILON))\n return 0;\n else\n return 1;\n if (d1 < d2)\n if ((d2 - d1) < fabs(d2 * FLT_EPSILON))\n return 0;\n else\n return -1;\n return 0;\n}\n\nAll the macros can then be defined in terms of the comparison function.\nIt's not perfect either, but at least it is better.\n",
"msg_date": "Wed, 13 Feb 2002 16:26:34 -0800",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "geo_decls.h oopsie..."
},
{
"msg_contents": "\"Dann Corbit\" <DCorbit@connx.com> writes:\n> But to compare floating point, those are simply wrong.\n\nSee previous discussions concerning the quality of the builtin geometric\ntypes. Not clear that it's worth worrying about, unless you want to go\nin for a wholesale overhaul. My own opinion is that PostGIS will\nsupersede the need for these types...\n\n> Here is a more reliable way to make a comparison:\n\nNot like that. Perhaps use a fraction of the absolute value of the\none with larger absolute value. As-is it's hard to tell how FLT_EPSILON\nis measured.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Feb 2002 20:07:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: geo_decls.h oopsie... "
},
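[Tom's suggested fix -- scale the tolerance by the larger of the two magnitudes -- might look like the sketch below. The 4*DBL_EPSILON factor is an arbitrary illustrative choice, not a recommendation from either poster.]

#include <math.h>
#include <float.h>

/*
 * Three-way compare with a tolerance proportional to the larger
 * magnitude, so it behaves sensibly for both very large and very
 * small inputs (unlike a fixed EPSILON).  Returns -1, 0, or 1.
 */
static int
fp_compare(double a, double b)
{
    double tol = 4.0 * DBL_EPSILON * fmax(fabs(a), fabs(b));

    if (fabs(a - b) <= tol)
        return 0;
    return (a > b) ? 1 : -1;
}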
{
"msg_contents": "> > But to compare floating point, those are simply wrong.\n> See previous discussions concerning the quality of the builtin geometric\n> types. Not clear that it's worth worrying about, unless you want to go\n> in for a wholesale overhaul. My own opinion is that PostGIS will\n> supersede the need for these types...\n\nOops. I think that we will welcome any updates and fixes to any of the\nbuilt in features of PostgreSQL, no matter our individual opinions of\nthe usefulness of a particular feature (right??). In this case, the\nreturn value patches would seem to be important and the second topic\nshould be addressed also.\n\n - Thomas\n",
"msg_date": "Thu, 14 Feb 2002 12:21:59 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: geo_decls.h oopsie..."
}
] |
[
{
"msg_contents": "Probably a lot more sensible to change the interface to both functions\nbelow to return int instead of double.\n\n#include <float.h>\ndouble double_compare(double d1, double d2)\n{\n if (d1 > d2)\n if ((d1 - d2) < fabs(d1 * DBL_EPSILON))\n return 0;\n else\n return 1;\n if (d1 < d2)\n if ((d2 - d1) < fabs(d2 * DBL_EPSILON))\n return 0;\n else\n return -1;\n return 0;\n}\n\nfloat float_compare(float d1, float d2)\n{\n if (d1 > d2)\n if ((d1 - d2) < fabs(d1 * FLT_EPSILON))\n return 0;\n else\n return 1;\n if (d1 < d2)\n if ((d2 - d1) < fabs(d2 * FLT_EPSILON))\n return 0;\n else\n return -1;\n return 0;\n}\n",
"msg_date": "Wed, 13 Feb 2002 16:28:38 -0800",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: geo_decls.h oopsie..."
}
] |
[
{
"msg_contents": "I have a couple wuick questions about upgrading to 7.2. \n\nFirst, are there noticable performance improvements with 7.2? I am\ncurrently using 7.1.2 and it is having a real hard time with large\ntables(about 10million records), queries take forever(even simple ones\nthat are index scans)?\n\nSecond, when compiling with gcc, what level of optimization is safe to\nuse? I know that some code when it is compiler optimized can break\nreally badly.\n\nThird, 7.2 will allow for the creation of fuinctional indexes right? In\nmy current version, it refuses to create them saying that it cannot\ncreate index on max(int4);\n\nAnd last of all, is 7.2 just a drop in replacement, or is this going to\nba a painful process?\n\n\n-- \nChris Field\nAffinity Solutions Inc.",
"msg_date": "14 Feb 2002 11:25:00 -0500",
"msg_from": "Chris Field <cfields@affinitysolutions.com>",
"msg_from_op": true,
"msg_subject": "Upgrading to 7.2"
},
{
"msg_contents": "Chris Field <cfields@affinitysolutions.com> writes:\n\n> I have a couple wuick questions about upgrading to 7.2. \n\nI'm not in any sense a guru but I'll try to give some answers...\n\n> First, are there noticable performance improvements with 7.2? I am\n> currently using 7.1.2 and it is having a real hard time with large\n> tables(about 10million records), queries take forever(even simple ones\n> that are index scans)?\n\n7.2 keeps better statistics, so it will sometimes choose smarter\nqueries than 7.1--whether this will help you is not clear. You can\nalways post your schema and EXPLAIN outout and see if someone can help\nspeed up the query (best to post those kinds of questions to GENERAL\nrather than HACKERS). \n\n> Second, when compiling with gcc, what level of optimization is safe to\n> use? I know that some code when it is compiler optimized can break\n> really badly.\n\nGoing with the optimizations that 'configure' picks for you is\nprobably safe. Databases are usually I/O bound rather than CPU bound\nanyhow. \n\n> Third, 7.2 will allow for the creation of fuinctional indexes right? In\n> my current version, it refuses to create them saying that it cannot\n> create index on max(int4);\n\nUmmm, max() is an aggregate function, how can you create an index on\nit? \n\n> And last of all, is 7.2 just a drop in replacement, or is this going to\n> ba a painful process?\n\nShould just be a pg_dump, upgrade, initdb, restore process. There are\na couple gotchas if your DB contains large objects but no\nshowstoppers. The dump/restore will probably take a while since your\ntables are large...\n\nPersonally, I'd install 7.2 in a different place from 7.1 as a test,\nand make sure you can restore into 7.2 from your 7.1 dump before doing\nthe \"real\" switchover. \n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n",
"msg_date": "14 Feb 2002 11:58:58 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: Upgrading to 7.2"
},
{
"msg_contents": "Thanks for responding, I am thinking it might be fairly beneficial to\nupgrade.\n\n> Ummm, max() is an aggregate function, how can you create an index on\n> it? \n\nIn the postgresSQL Essential Reference by Barry Stinson it specifically\nhas a index topic on functional indexes, with the given example being\n\" CREATE INDEX max_payroll_idx ON payroll (MAX(salary)); \"\nso either the book was a waste of money, or this is a 7.2 specific\nfeature.\n\n-- \nChris Field\nAffinity Solutions Inc.",
"msg_date": "14 Feb 2002 12:08:42 -0500",
"msg_from": "Chris Field <cfields@affinitysolutions.com>",
"msg_from_op": true,
"msg_subject": "Re: Upgrading to 7.2"
},
{
"msg_contents": "Chris Field <cfields@affinitysolutions.com> writes:\n\n> Thanks for responding, I am thinking it might be fairly beneficial to\n> upgrade.\n> \n> > Ummm, max() is an aggregate function, how can you create an index on\n> > it? \n> \n> In the postgresSQL Essential Reference by Barry Stinson it specifically\n> has a index topic on functional indexes, with the given example being\n> \" CREATE INDEX max_payroll_idx ON payroll (MAX(salary)); \"\n\nThis just seems wrong. MAX() is a function, not of a single value,\nbut of a set of values from a single column (ie it's an aggregate\nfunction). Think about what an index is, and I think you'll see that\nyou can't build one on based on an aggregate function. It's not a\nwell-defined concept.\n\nThink of it this way--an index is \"a list of rows, organized by the\nvalue of the index expression for each row.\" An aggregate function\nlike MAX() or SUM() doesn't have a useful value for a single row--it's\nonly meaningful in the context of a set of rows.\n\nNon-aggregate functions (ie most of them, like sqrt(), sin(), cos()\netc) can definitely be used in indexes.\n\n> so either the book was a waste of money, or this is a 7.2 specific\n> feature.\n\nThe author does seem confused about this point, but the book still\nmight be worthwhile--haven't read it myself. \n\nI might be totally out in left field here, but the reasoning above\nmakes sense to me at least. ;)\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n",
"msg_date": "14 Feb 2002 12:27:10 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: Upgrading to 7.2"
},
{
"msg_contents": "I have that book too (page 168 in my revision) definitely looks like an \nerror. Indexes cannot be applied to aggregate functions since the data \nrowset is only defined in the context of a given query. So, until \nfunctionality is added to maintain an index based on a SQL query (view \nor what not) rather than a table, this just ain't gonna happen. :)\n\nChris Field wrote:\n\n> Thanks for responding, I am thinking it might be fairly beneficial to\n> upgrade.\n> \n> \n>>Ummm, max() is an aggregate function, how can you create an index on\n>>it? \n>>\n> \n> In the postgresSQL Essential Reference by Barry Stinson it specifically\n> has a index topic on functional indexes, with the given example being\n> \" CREATE INDEX max_payroll_idx ON payroll (MAX(salary)); \"\n> so either the book was a waste of money, or this is a 7.2 specific\n> feature.\n> \n> \n\n\n-- \n01010101010101010101010101010101010101010101010101\n\nMarc P. Lavergne [wk:650-576-7978 hm:407-648-6996]\nSenior Software Developer\nGlobal Knowledge Management\nWorldwide Support Technologies\nOpenwave Systems Inc.\n\n--\n\n\"Anyone who slaps a 'this page is best viewed with\nBrowser X' label on a Web page appears to be\nyearning for the bad old days, before the Web,\nwhen you had very little chance of reading a\ndocument written on another computer, another word\nprocessor, or another network.\"\n-Tim Berners-Lee (Technology Review, July 1996)\n\n01010101010101010101010101010101010101010101010101\n\n",
"msg_date": "Thu, 14 Feb 2002 12:50:13 -0500",
"msg_from": "Marc Lavergne <mlavergne-pub@richlava.com>",
"msg_from_op": false,
"msg_subject": "Re: Upgrading to 7.2"
}
] |
[
{
"msg_contents": "On Thu, 2002-02-14 at 12:31, Bruce Momjian wrote:\n> Chris Field wrote:\n> \n> Checking application/pgp-signature: FAILURE\n> -- Start of PGP signed section.\n> > Thanks for responding, I am thinking it might be fairly beneficial to\n> > upgrade.\n> > \n> > > Ummm, max() is an aggregate function, how can you create an index on\n> > > it? \n> > \n> > In the postgresSQL Essential Reference by Barry Stinson it specifically\n> > has a index topic on functional indexes, with the given example being\n> > \" CREATE INDEX max_payroll_idx ON payroll (MAX(salary)); \"\n> > so either the book was a waste of money, or this is a 7.2 specific\n> > feature.\n> \n> Uh, MAX() is an aggregate, not really a function like the other\n> functions. It takes an entire column and returns one value, rather than\n> normal functions that take a some values and return a single value.\n> \n> In summary, I don't think we support aggregates (MAX) in functional\n> indexes.\n> \n\nSo, to put it succinctly book=wrong;\n\n-- \nChris Field\nAffinity Solutions Inc.",
"msg_date": "14 Feb 2002 12:33:55 -0500",
"msg_from": "Chris Field <cfields@affinitysolutions.com>",
"msg_from_op": true,
"msg_subject": "Re: Upgrading to 7.2"
}
] |
[
{
"msg_contents": "Hi All,\n Can anyone tell me of a freeware / opensource tool or software that can\nbe used to perform data-mining or statistical analysis of the data contained\nwithin a postgresql db. Some tool that connect efficiently with postgresql\nwould be ideal.\n\nAny Help Would Be Greatly Appreciated.\n\nMany Thanks\n\nRishabh\n\n\n",
"msg_date": "Thu, 14 Feb 2002 17:49:39 -0000",
"msg_from": "\"Rishabh Gupta\" <rg117@york.ac.uk>",
"msg_from_op": true,
"msg_subject": "data mining or statistical analysis with postgresql"
},
{
"msg_contents": "I have written a whole bunch of them. One analyzes many millions of records for\ntrends. This is used as the basis of a recommendations engine.\n\nYou can use C/C++, perl, PHP, what ever. It is really trivial.\n\nRishabh Gupta wrote:\n> \n> Hi All,\n> Can anyone tell me of a freeware / opensource tool or software that can\n> be used to perform data-mining or statistical analysis of the data contained\n> within a postgresql db. Some tool that connect efficiently with postgresql\n> would be ideal.\n> \n> Any Help Would Be Greatly Appreciated.\n> \n> Many Thanks\n> \n> Rishabh\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n",
"msg_date": "Tue, 19 Feb 2002 22:25:25 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: data mining or statistical analysis with postgresql"
},
{
"msg_contents": "On Thu, 14 Feb 2002, Rishabh Gupta wrote:\n\n> Can anyone tell me of a freeware / opensource tool or software that can\n> be used to perform data-mining or statistical analysis of the data contained\n> within a postgresql db. Some tool that connect efficiently with postgresql\n> would be ideal.\n\nRishabh,\n\n Don't know about the data mining (I work for the mineral mining industry),\nbut look for the R statistical language. It'll do whatever statistics you\nwant.\n\nRich\n\nDr. Richard B. Shepard, President\n\n Applied Ecosystem Services, Inc. (TM)\n 2404 SW 22nd Street | Troutdale, OR 97060-1247 | U.S.A.\n + 1 503-667-4517 (voice) | + 1 503-667-8863 (fax) | rshepard@appl-ecosys.com\n http://www.appl-ecosys.com\n\n",
"msg_date": "Thu, 21 Feb 2002 16:53:56 -0800 (PST)",
"msg_from": "Rich Shepard <rshepard@appl-ecosys.com>",
"msg_from_op": false,
"msg_subject": "Re: data mining or statistical analysis with postgresql"
}
] |
[
{
"msg_contents": "Hi,\n\nOpenFTS team is glad to announce a pre-alpha release of\nOpenFTS perl version, which is available for testing from\nopenfts.sourceforge.net. We've created openfts-general\nmailing list for general discussion. Details is available at\nhttp://lists.sourceforge.net/lists/listinfo/openfts-general\nWe need your feedback !\n\nWe deprecated using contrib/intarray module and switched to\ncontrib/tsearch module. This version works only with 7.2 release\nof PostgreSQL and later.\n\nAlso, we just posted generic perl interface to Snowball stemmers\n(sbowball.sourceforge.net), which was developed primarly for\nuse as OpenFTS add-on. We'll announce it's availability after\nMartin Porter accepts the module.\n\nBelow is the full list of changes:\n\n Add cygwin compatibility !\n Modify pgsql module to use in contrib directory\n Remove support working without intarray\n Change pgsql's intarray to tsearch module, this modifications\n requre some interface changes:\n 1. dictionary must have method lemms, lemmsid now unused\n 2. is_stoplemm is unused, instead use is_stoplexem\n Add optional method drop to dictionaries\n Add dropindex for remove OpenFTS instance\n Done changes of OpenFTS tables and corresponding relor/relkov function\n Some interface change (use_index_array deprecated,\n instead txtidx_field - mandated option)\n All scripts in examples/ are adapted for changes\n\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 14 Feb 2002 22:36:02 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "pre-alpha release of OpenFTS (perl version) is available for testing"
}
] |
[
{
"msg_contents": "> Note the third row in the query result below is in error. The four hour\n> interval (2300UTC - 0300UTC) does not overlap the interval 1530UTC-1627UTC).\n> Is this a bug?\n\nNo. It conforms to (my reading of) the SQL99 spec. So it is a feature,\neven if I misread the spec. Which I think I didn't ;) But if I did, then\nwe can change the implementation of course.\n\nI've included the relevant part of the spec below. It seems clause (3)\nrequires that we reorder the arguments to OVERLAPS, though perhaps\nsomeone would like to research whether TIME is allowed to be used with\nOVERLAPS at all (if not, then we could make up the rules ourselves).\n\n> It would be cool if timetz (or time) datatypes were to wrap properly\n> across day boundaries (i.e. if start time < stop time then assume start time\n> is day before) but at the very least, the overlaps functions should not lie\n> to you!\n\nSome parts of the spec aren't cool, or interfer with coolness. This may\nbe one of them. If everything conforms to the standard, then we can\nstart discussing whether that part of the standard is so brain-dead as\nto be useless or likely to directly cause damage.\n\nBut in your case, choosing to record only times but then expecting the\ncode to respect a day boundary seems to be an assumption which could\nbite you in other ways later. What happens when an interval happens to\nbe longer than a day??\n\nhth\n\n - Thomas\n\n(omit some text defining the input as \"(D1, E1) OVERLAPS (D2, E2)\" as\nthe input to the OVERLAPS operator)\n\n3) If D1 is the null value or if E1 < D1, then let S1 = E1 and let\n T1 = D1. Otherwise, let S1 = D1 and let T1 = E1.\n4) Case:\n a) If the most specific type of the second field of <row value\n expression 2> is a datetime data type, then let E2 be the\n value of the second field of <row value expression 2>.\n b) If the most specific type of the second field of <row value\n expression 2> is INTERVAL, then let I2 be the value of the\n second field of <row value expression 2>. Let E2 = D2 + I2.\n5) If D2 is the null value or if E2 < D2, then let S2 = E2 and let\n T2 = D2. Otherwise, let S2 = D2 and let T2 = E2.\n6) The result of the <overlaps predicate> is the result of the\n following expression:\n ( S1 > S2 AND NOT ( S1 >= T2 AND T1 >= T2 ) )\n OR\n ( S2 > S1 AND NOT ( S2 >= T1 AND T2 >= T1 ) )\n OR\n\n ( S1 = S2 AND ( T1 <> T2 OR T1 = T2 ) )\n",
"msg_date": "Thu, 14 Feb 2002 12:17:13 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Re: FWD: overlaps() bug?"
}
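[Steps 3 through 6 of the spec text above, for endpoints already resolved to comparable non-null values, boil down to the following sketch. Doubles stand in for datetimes, NULL handling is ignored, and clause 6's last disjunct reduces to S1 = S2 when no NULLs are involved.]

#include <stdbool.h>

/*
 * OVERLAPS per SQL99 steps 3-6: normalize each pair so S <= T (the
 * reordering described in the message), then apply the step-6
 * predicate.
 */
static bool
overlaps_sketch(double d1, double e1, double d2, double e2)
{
    double s1 = d1 < e1 ? d1 : e1, t1 = d1 < e1 ? e1 : d1;
    double s2 = d2 < e2 ? d2 : e2, t2 = d2 < e2 ? e2 : d2;

    return (s1 > s2 && !(s1 >= t2 && t1 >= t2))
        || (s2 > s1 && !(s2 >= t1 && t2 >= t1))
        || (s1 == s2);
}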
] |
[
{
"msg_contents": "-----Original Message-----\nFrom: Thomas Lockhart [mailto:lockhart@fourpalms.org]\nSent: Thursday, February 14, 2002 12:22 PM\nTo: Dann Corbit\nCc: Tom Lane; Hackers List\nSubject: Re: geo_decls.h oopsie...\n\n\n> > But to compare floating point, those are simply wrong.\n> See previous discussions concerning the quality of the builtin\ngeometric\n> types. Not clear that it's worth worrying about, unless you want to\ngo\n> in for a wholesale overhaul. My own opinion is that PostGIS will\n> supersede the need for these types...\n\nOops. I think that we will welcome any updates and fixes to any of the\nbuilt in features of PostgreSQL, no matter our individual opinions of\nthe usefulness of a particular feature (right??). In this case, the\nreturn value patches would seem to be important and the second topic\nshould be addressed also.\n---------------------------------------------------------------------\nAlso, the PostGIS is under a far more restrictive license (E.g. I \ncannot/won't use it.)\nIf fixing a bug is easy, why not fix it? For most tests, there won't\nbe any difference anyway. It is only when the numbers become large\nor small that differences start to appear.\n---------------------------------------------------------------------\n",
"msg_date": "Thu, 14 Feb 2002 12:32:15 -0800",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: geo_decls.h oopsie..."
}
] |
[
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > > More specifically, the problem is that plpgsql's FOR-over-a-select now\n> > > depends on a SPI cursor, and both SPI cursors and regular cursors are\n> > > broken in this regard. Observe the following misbehavior with a plain\n> > > cursor:\n> >\n> > This is a known issue. We should implement INSENSITIVE cursors\n> > to avoid this behavior. The keyword INSENSITIVE is there but isn't\n> > used long. I plan to implement this feature as the first step toward\n> > cross transaction cursors. Saving the xid and commandid in the\n> > portal or snapshot and restoring them at fetch(move) time would\n> > solve it.\n> \n> If I read the CVS logs correctly, I think Tom just fixed it.\n\nOh I see it now.\nI'm not sure if it's good that all cursurs are INSENSITIVE.\nNow PostgreSQL is the same as Oracle at the point. Though\nthere's even a SENSITIVE keyword in SQL statndard, INSENSITIVE\nmay be what MVCC requires.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Fri, 15 Feb 2002 08:41:25 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Postgres 7.2 - Updating rows in cursor problem"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> I'm not sure if it's good that all cursurs are INSENSITIVE.\n\nPerhaps not, but I think implementing a reasonable non-insensitive\nbehavior under MVCC will be difficult. Do you see a way to do it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Feb 2002 19:38:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Postgres 7.2 - Updating rows in cursor problem "
}
] |
[
{
"msg_contents": "I've started playing around with 7.2 on one of my development machines. \nI decided to try the pg_upgrade program, something I usually never do.\n\nAnyway, I followed the steps in the pg_upgrade (going from 7.1.3 to\n7.2), and then when I started the database up after the upgrade finished\nand vacuumed one of my tables, i get these error messages from the\npostmaster. After this point I cannot restart the postmaster without\nresetting the xlog.\n\nI've kept the PGDATA directory around incase someone thinks this is\nworth looking into, i would be more than happy to help out. \n\nIf i migrate the data over manually like a always do (pg_dump then\npg_restore), i don't have any problems. Part of the problem might be\npath names for shared libraries specified in CREATE FUNCTION; I started\nusing pg back when it was version 6 before '$libdir' was supported and I\nhaven't bothered to take the absolute path names out yet -- i've just\nupdated it with each release (each release is installed in a different\nlocation in case i need to roll back, and so i can test multiple version\nat one time). not sure if pg_upgrade even checks for this.\n\noby/pgsql@loopy pg_upgrade]$ /moby/pgsql-7.2/bin/postmaster -i -o -F -B\n256 -D/mo\nDEBUG: database system was shut down at 2002-02-14 12:20:53 MST\nDEBUG: checkpoint record is at 1/A7000010\nDEBUG: redo record is at 1/A7000010; undo record is at 1/A7000010;\nshutdown TRUE\nDEBUG: next transaction id: 589031; next oid: 19512\nDEBUG: database system is ready\n\n\n\nDEBUG: --Relation developer--\nDEBUG: Pages 669: Changed 0, Empty 0; Tup 51508: Vac 0, Keep 0, UnUsed\n0.\n\tTotal CPU 0.07s/0.03u sec elapsed 0.11 sec.\nDEBUG: Analyzing developer\nFATAL 2: read of clog file 0, offset 139264 failed: Success\nDEBUG: server process (pid 17786) exited with exit code 2\nDEBUG: terminating any other active server processes\nNOTICE: Message from PostgreSQL backend:\n\tThe Postmaster has informed me that some other backend\n\tdied abnormally and possibly corrupted shared memory.\n\tI have rolled back the current transaction and am\n\tgoing to terminate your database system connection and exit.\n\tPlease reconnect to the database system and repeat your query.\nDEBUG: all server processes terminated; reinitializing shared memory\nand semaphores\nDEBUG: database system was interrupted at 2002-02-14 12:20:58 MST\nDEBUG: checkpoint record is at 1/A7000010\nDEBUG: redo record is at 1/A7000010; undo record is at 1/A7000010;\nshutdown TRUE\nDEBUG: next transaction id: 589031; next oid: 19512\nDEBUG: database system was not properly shut down; automatic recovery\nin progress\nDEBUG: redo starts at 1/A7000050\nFATAL 2: read of clog file 0, offset 139264 failed: Success\nDEBUG: startup process (pid 17788) exited with exit code 2\nDEBUG: aborting startup due to startup process failure\n[postgres@loopy pg_upgrade]$ \n[postgres@loopy pg_upgrade]$ \n[postgres@loopy pg_upgrade]$ \n[postgres@loopy pg_upgrade]$ df -k\nFilesystem 1k-blocks Used Available Use% Mounted on\n/dev/hda8 248895 192496 43549 82% /\n/dev/hda1 31079 4988 24487 17% /boot\n/dev/hda5 24080660 6601476 17479184 28% /home\n/dev/hda6 5044156 1930892 2857032 41% /usr\n/dev/hda9 248895 133875 102170 57% /var\n/dev/hdd1 59919196 39090008 20829188 66% /disk\noby/pgsql@loopy pg_upgrade]$ /moby/pgsql-7.2/bin/postmaster -i -o -F -B\n256 -D/mo\nDEBUG: database system was interrupted being in recovery at 2002-02-14\n12:21:06 MST\n\tThis probably means that some data blocks are corrupted\n\tand you will have to use the 
last backup for recovery.\nDEBUG: checkpoint record is at 1/A7000010\nDEBUG: redo record is at 1/A7000010; undo record is at 1/A7000010;\nshutdown TRUE\nDEBUG: next transaction id: 589031; next oid: 19512\nDEBUG: database system was not properly shut down; automatic recovery\nin progress\nDEBUG: redo starts at 1/A7000050\nFATAL 2: read of clog file 0, offset 139264 failed: Success\nDEBUG: startup process (pid 17793) exited with exit code 2\nDEBUG: aborting startup due to startup process failure\n[postgres@loopy pg_upgrade]$ \n\n\n",
"msg_date": "14 Feb 2002 17:17:18 -0700",
"msg_from": "Brian Hirt <bhirt@mobygames.com>",
"msg_from_op": true,
"msg_subject": "Strange problem when upgrading to 7.2 with pg_upgrade."
},
{
"msg_contents": "Brian Hirt <bhirt@mobygames.com> writes:\n> I decided to try the pg_upgrade program, something I usually never do.\n\n> FATAL 2: read of clog file 0, offset 139264 failed: Success\n\nCould we see ls -l $PGDATA/pg_clog?\n\nI suspect that pg_upgrade has neglected to make sure the clog is long\nenough.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Feb 2002 19:54:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strange problem when upgrading to 7.2 with pg_upgrade. "
},
{
"msg_contents": "\n[root@loopy pg_clog]# pwd\n/moby/pgsql-upgrade-bad/pg_clog\n[root@loopy pg_clog]# ls -la\ntotal 9\ndrwx------ 2 postgres postgres 72 Feb 14 09:32 .\ndrwx------ 6 postgres postgres 304 Feb 14 16:02 ..\n-rw------- 1 postgres postgres 8192 Feb 14 09:34 0000\n[root@loopy pg_clog]# bzip2 < 0000 | uuencode - \nbegin 644 -\nM0EIH.3%!62936<[:PW<``#Y_\".;,1H``L!``9@!F``(`\"```\"#``V*#5/R*>\nMHT--`TT!31H`T``\"1\"(TT*:#U/4>C]8_(/$);W\"6=D0`'3$(Z9Y_D(V@K9T)\nM+,\\6\"GDBTU?,C9R[NSB.6-X6M3\\55RS<AS$:?0<,;N4/K>#.KV(E,[88LWG%\nM[:QR6B\"\\'JK2G9LB*63\"00449P7!2)#0O3IY4PT;P%DC'J$M$T3$5'RU5';2\nA*:2EB*:1)!MI,SQ%1=GE_(FY2U#027L7<D4X4)#.VL-W\n`\nend\n[root@loopy pg_clog]# \n\n\n\nOn Thu, 2002-02-14 at 17:54, Tom Lane wrote:\n> Brian Hirt <bhirt@mobygames.com> writes:\n> > I decided to try the pg_upgrade program, something I usually never do.\n> \n> > FATAL 2: read of clog file 0, offset 139264 failed: Success\n> \n> Could we see ls -l $PGDATA/pg_clog?\n> \n> I suspect that pg_upgrade has neglected to make sure the clog is long\n> enough.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n\n",
"msg_date": "14 Feb 2002 19:07:11 -0700",
"msg_from": "Brian Hirt <bhirt@mobygames.com>",
"msg_from_op": true,
"msg_subject": "Re: Strange problem when upgrading to 7.2 with pg_upgrade."
},
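The numbers in this report line up with Tom's suspicion. On the assumption that 7.2's clog stores two status bits per transaction, so one 8K page covers 32768 transactions (an assumption of this note, not something stated in the thread), the failing offset is exactly the start of the page that should hold the cluster's "next transaction id", while the listing above shows only a single page on disk:

```sh
# Assumed layout: 2 status bits per xact => 8192 * 4 = 32768 xacts per 8K page
expr 589031 / 32768              # => 17: the clog page for xid 589031
expr 17 \* 8192                  # => 139264: the offset the FATAL message reports
wc -c < $PGDATA/pg_clog/0000     # => 8192: only page 0 exists, hence the failed read
```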
{
"msg_contents": "Tom Lane wrote:\n> Brian Hirt <bhirt@mobygames.com> writes:\n> > I decided to try the pg_upgrade program, something I usually never do.\n> \n> > FATAL 2: read of clog file 0, offset 139264 failed: Success\n> \n> Could we see ls -l $PGDATA/pg_clog?\n> \n> I suspect that pg_upgrade has neglected to make sure the clog is long\n> enough.\n\nHere is the code that sets the transaction id. Tom, does pg_resetxlog\nhandle pg_clog file creation properly?\n\t\n\t# Set this so future backends don't think these tuples are their own\n\t# because it matches their own XID.\n\t# Commit status already updated by vacuum above\n\t# Set to maximum XID just in case SRC wrapped around recently and\n\t# is lower than DST's database\n\t\n\tif [ \"$SRC_XID\" -gt \"$DST_XID\" ]\n\tthen\tMAX_XID=\"$SRC_XID\"\n\telse\tMAX_XID=\"$DST_XID\"\n\tfi\n\t\n\tpg_resetxlog -x \"$MAX_XID\" \"$PGDATA\"\n\tif [ \"$?\" -ne 0 ]\n\tthen\techo \"Unable to set new XID. Exiting.\" 1>&2\n\t\texit 1\n\tfi\n\t\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Feb 2002 00:02:57 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strange problem when upgrading to 7.2 with pg_upgrade."
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> I suspect that pg_upgrade has neglected to make sure the clog is long\n>> enough.\n\n> Here is the code that sets the transaction id. Tom, does pg_resetxlog\n> handle pg_clog file creation properly?\n\npg_resetxlog doesn't know a single solitary thing about the clog.\n\nThe problem here is that if you're going to move the current xact ID\nforward, you need to be prepared to create pages of the clog\naccordingly. Or maybe the clog routines need to be less rigid in their\nassumptions, but I'm uncomfortable with relaxing their expectations\nunless it can be shown that they may fail to cope with cases that\narise in normal system operation. This isn't such a case.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Feb 2002 00:08:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strange problem when upgrading to 7.2 with pg_upgrade. "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> I suspect that pg_upgrade has neglected to make sure the clog is long\n> >> enough.\n> \n> > Here is the code that sets the transaction id. Tom, does pg_resetxlog\n> > handle pg_clog file creation properly?\n> \n> pg_resetxlog doesn't know a single solitary thing about the clog.\n> \n> The problem here is that if you're going to move the current xact ID\n> forward, you need to be prepared to create pages of the clog\n> accordingly. Or maybe the clog routines need to be less rigid in their\n> assumptions, but I'm uncomfortable with relaxing their expectations\n> unless it can be shown that they may fail to cope with cases that\n> arise in normal system operation. This isn't such a case.\n\nWe increased the xid because the old files have xid's that are greater\nthan the newly initdb'ed database. We did a vacuum, so no one is going\nto check clog, but we need to increase the transaction counter because\nold rows could be seen as matching the current transaction.\n\nCan you suggest how to create the needed clog files? I don't see any\nvalue in changing your current clog code in the backend.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Feb 2002 00:12:55 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strange problem when upgrading to 7.2 with pg_upgrade."
},
{
"msg_contents": "> We increased the xid because the old files have xid's that are greater\n> than the newly initdb'ed database. We did a vacuum, so no one is going\n> to check clog, but we need to increase the transaction counter because\n> old rows could be seen as matching the current transaction.\n> \n> Can you suggest how to create the needed clog files? I don't see any\n> value in changing your current clog code in the backend.\n\nTom, is there a way to increment the XID every 100 million and start the\npostmaster to create the needed pg_clog files to get to the XID I need?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Feb 2002 07:40:49 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strange problem when upgrading to 7.2 with pg_upgrade."
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> I suspect that pg_upgrade has neglected to make sure the clog is long\n> >> enough.\n> \n> > Here is the code that sets the transaction id. Tom, does pg_resetxlog\n> > handle pg_clog file creation properly?\n> \n> pg_resetxlog doesn't know a single solitary thing about the clog.\n> \n> The problem here is that if you're going to move the current xact ID\n> forward, you need to be prepared to create pages of the clog\n> accordingly. Or maybe the clog routines need to be less rigid in their\n> assumptions, but I'm uncomfortable with relaxing their expectations\n> unless it can be shown that they may fail to cope with cases that\n> arise in normal system operation. This isn't such a case.\n\nTom, any suggestion on how I can increase clog as part of pg_upgrade?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 9 Apr 2002 00:19:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strange problem when upgrading to 7.2 with pg_upgrade."
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom, any suggestion on how I can increase clog as part of pg_upgrade?\n\nAppend zeroes ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 09 Apr 2002 00:21:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strange problem when upgrading to 7.2 with pg_upgrade. "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom, any suggestion on how I can increase clog as part of pg_upgrade?\n> \n> Append zeroes ...\n\nOK, I can 'dd' /dev/zero to append zeros to pad out the file. How large\ndoes the clog file get, 1gb? Do I need to rename it at all?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 9 Apr 2002 00:25:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strange problem when upgrading to 7.2 with pg_upgrade."
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, I can 'dd' /dev/zero to append zeros to pad out the file. How large\n> does the clog file get, 1gb? Do I need to rename it at all?\n\n256KB per segment. Do *not* rename existing segments.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 09 Apr 2002 00:28:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strange problem when upgrading to 7.2 with pg_upgrade. "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, I can 'dd' /dev/zero to append zeros to pad out the file. How large\n> > does the clog file get, 1gb? Do I need to rename it at all?\n> \n> 256KB per segment. Do *not* rename existing segments.\n\nRight, no rename, but I will have to create additional files in 256kb\nchunks, and I assume 1gb of chunks remains in pg_clog directory?\n\nSince I have done a vacuum, I assume I just keep creating 256k chunks\nuntil I reach the max xid from the previous release, and delete the\nfiles prior to the 1gb size limit.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 9 Apr 2002 00:35:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strange problem when upgrading to 7.2 with pg_upgrade."
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Since I have done a vacuum, I assume I just keep creating 256k chunks\n> until I reach the max xid from the previous release, and delete the\n> files prior to the 1gb size limit.\n\nKeep your hands *off* the existing segments. The CLOG code will clean\nthem up when it's good and ready ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 09 Apr 2002 00:37:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strange problem when upgrading to 7.2 with pg_upgrade. "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Since I have done a vacuum, I assume I just keep creating 256k chunks\n> > until I reach the max xid from the previous release, and delete the\n> > files prior to the 1gb size limit.\n> \n> Keep your hands *off* the existing segments. The CLOG code will clean\n> them up when it's good and ready ...\n\nOK. Fill out the current clog and add additional ones to reach the\ncurrent max xid, rounded to the nearest 8k, assuming 256k file equals\n1mb of xids.\n\nWhy do you take these things so personally?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 9 Apr 2002 00:44:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strange problem when upgrading to 7.2 with pg_upgrade."
},
{
"msg_contents": "\nThis is a good bug report. I can fix pg_upgrade by adding clog files\ncontaining zeros to pad out to the proper length. However, my guess is\nthat most people have already upgrade to 7.2.X, so there isn't much\nvalue in fixing it now. I have updated pg_upgrade CVS for 7.3, and\nhopefully we will have it working and well tested by the time 7.3 is\nreleased.\n\nCompressed clog was new in 7.2, so I guess it is no surprise I missed\nthat change in pg_upgrade. In 7.3, pg_clog will be moved over from the\nold install, so this shouldn't be a problem with 7.3.\n\nThanks for the report. Sorry I don't have a fix.\n\n---------------------------------------------------------------------------\n\nBrian Hirt wrote:\n> I've started playing around with 7.2 on one of my development machines. \n> I decided to try the pg_upgrade program, something I usually never do.\n> \n> Anyway, I followed the steps in the pg_upgrade (going from 7.1.3 to\n> 7.2), and then when I started the database up after the upgrade finished\n> and vacuumed one of my tables, i get these error messages from the\n> postmaster. After this point I cannot restart the postmaster without\n> resetting the xlog.\n> \n> I've kept the PGDATA directory around incase someone thinks this is\n> worth looking into, i would be more than happy to help out. \n> \n> If i migrate the data over manually like a always do (pg_dump then\n> pg_restore), i don't have any problems. Part of the problem might be\n> path names for shared libraries specified in CREATE FUNCTION; I started\n> using pg back when it was version 6 before '$libdir' was supported and I\n> haven't bothered to take the absolute path names out yet -- i've just\n> updated it with each release (each release is installed in a different\n> location in case i need to roll back, and so i can test multiple version\n> at one time). 
not sure if pg_upgrade even checks for this.\n> \n> oby/pgsql@loopy pg_upgrade]$ /moby/pgsql-7.2/bin/postmaster -i -o -F -B\n> 256 -D/mo\n> DEBUG: database system was shut down at 2002-02-14 12:20:53 MST\n> DEBUG: checkpoint record is at 1/A7000010\n> DEBUG: redo record is at 1/A7000010; undo record is at 1/A7000010;\n> shutdown TRUE\n> DEBUG: next transaction id: 589031; next oid: 19512\n> DEBUG: database system is ready\n> \n> \n> \n> DEBUG: --Relation developer--\n> DEBUG: Pages 669: Changed 0, Empty 0; Tup 51508: Vac 0, Keep 0, UnUsed\n> 0.\n> \tTotal CPU 0.07s/0.03u sec elapsed 0.11 sec.\n> DEBUG: Analyzing developer\n> FATAL 2: read of clog file 0, offset 139264 failed: Success\n> DEBUG: server process (pid 17786) exited with exit code 2\n> DEBUG: terminating any other active server processes\n> NOTICE: Message from PostgreSQL backend:\n> \tThe Postmaster has informed me that some other backend\n> \tdied abnormally and possibly corrupted shared memory.\n> \tI have rolled back the current transaction and am\n> \tgoing to terminate your database system connection and exit.\n> \tPlease reconnect to the database system and repeat your query.\n> DEBUG: all server processes terminated; reinitializing shared memory\n> and semaphores\n> DEBUG: database system was interrupted at 2002-02-14 12:20:58 MST\n> DEBUG: checkpoint record is at 1/A7000010\n> DEBUG: redo record is at 1/A7000010; undo record is at 1/A7000010;\n> shutdown TRUE\n> DEBUG: next transaction id: 589031; next oid: 19512\n> DEBUG: database system was not properly shut down; automatic recovery\n> in progress\n> DEBUG: redo starts at 1/A7000050\n> FATAL 2: read of clog file 0, offset 139264 failed: Success\n> DEBUG: startup process (pid 17788) exited with exit code 2\n> DEBUG: aborting startup due to startup process failure\n> [postgres@loopy pg_upgrade]$ \n> [postgres@loopy pg_upgrade]$ \n> [postgres@loopy pg_upgrade]$ \n> [postgres@loopy pg_upgrade]$ df -k\n> Filesystem 1k-blocks Used Available Use% Mounted on\n> /dev/hda8 248895 192496 43549 82% /\n> /dev/hda1 31079 4988 24487 17% /boot\n> /dev/hda5 24080660 6601476 17479184 28% /home\n> /dev/hda6 5044156 1930892 2857032 41% /usr\n> /dev/hda9 248895 133875 102170 57% /var\n> /dev/hdd1 59919196 39090008 20829188 66% /disk\n> oby/pgsql@loopy pg_upgrade]$ /moby/pgsql-7.2/bin/postmaster -i -o -F -B\n> 256 -D/mo\n> DEBUG: database system was interrupted being in recovery at 2002-02-14\n> 12:21:06 MST\n> \tThis probably means that some data blocks are corrupted\n> \tand you will have to use the last backup for recovery.\n> DEBUG: checkpoint record is at 1/A7000010\n> DEBUG: redo record is at 1/A7000010; undo record is at 1/A7000010;\n> shutdown TRUE\n> DEBUG: next transaction id: 589031; next oid: 19512\n> DEBUG: database system was not properly shut down; automatic recovery\n> in progress\n> DEBUG: redo starts at 1/A7000050\n> FATAL 2: read of clog file 0, offset 139264 failed: Success\n> DEBUG: startup process (pid 17793) exited with exit code 2\n> DEBUG: aborting startup due to startup process failure\n> [postgres@loopy pg_upgrade]$ \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 9 Apr 2002 14:14:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strange problem when upgrading to 7.2 with pg_upgrade."
},
{
"msg_contents": "I wouldn't be so quick to assume that almost everyone has upgraded by now. I \nknow we have not, at least not in production.\n\nOn Tuesday 09 April 2002 02:14 pm, Bruce Momjian wrote:\n> This is a good bug report. I can fix pg_upgrade by adding clog files\n> containing zeros to pad out to the proper length. However, my guess is\n> that most people have already upgrade to 7.2.X, so there isn't much\n> value in fixing it now. I have updated pg_upgrade CVS for 7.3, and\n> hopefully we will have it working and well tested by the time 7.3 is\n> released.\n>\n> Compressed clog was new in 7.2, so I guess it is no surprise I missed\n> that change in pg_upgrade. In 7.3, pg_clog will be moved over from the\n> old install, so this shouldn't be a problem with 7.3.\n>\n> Thanks for the report. Sorry I don't have a fix.\n\n",
"msg_date": "Tue, 9 Apr 2002 15:22:32 -0400",
"msg_from": "\"Mattew T. O'Connor\" <matthew@zeut.net>",
"msg_from_op": false,
"msg_subject": "Re: Strange problem when upgrading to 7.2 with pg_upgrade."
},
{
"msg_contents": "* Mattew T. O'Connor (matthew@zeut.net) [020409 15:34]:\n> I wouldn't be so quick to assume that almost everyone has upgraded by now. I \n> know we have not, at least not in production.\n\nyeah, what he said. Test, QA and development yes, production, no.\n\n-Brad\n",
"msg_date": "Tue, 9 Apr 2002 19:25:15 -0400",
"msg_from": "Bradley McLean <brad@bradm.net>",
"msg_from_op": false,
"msg_subject": "Re: Strange problem when upgrading to 7.2 with pg_upgrade."
},
{
"msg_contents": "Bradley McLean wrote:\n> * Mattew T. O'Connor (matthew@zeut.net) [020409 15:34]:\n> > I wouldn't be so quick to assume that almost everyone has upgraded by now. I \n> > know we have not, at least not in production.\n> \n> yeah, what he said. Test, QA and development yes, production, no.\n\nThe question is anyone who has delayed installing 7.2 will be using\npg_upgrade. Odds are they will not, and clearly we can't get enough\ntesting on pg_upgrade to be sure it will work well with 7.2.X.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 9 Apr 2002 20:06:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strange problem when upgrading to 7.2 with pg_upgrade."
}
] |
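To make the fix agreed in the thread above concrete, here is a minimal Bourne-shell sketch of "fill out the current clog and add additional ones", written in the style of the pg_upgrade excerpt quoted earlier. It is a sketch under stated assumptions, not the fix that was actually committed: MAX_XID stands in for the value pg_upgrade already computes, and the layout constants (two status bits per transaction, 8K pages, 32 pages per 256KB segment, hex segment file names) are the 7.2 clog format as discussed in the exchange plus assumptions of mine.

```sh
MAX_XID=589031                          # pg_upgrade's max of SRC_XID and DST_XID
CLOG_DIR="$PGDATA/pg_clog"

# 8192 bytes * 4 xacts per byte = 32768 xact status entries per 8K page (assumed)
LAST_PAGE=`expr $MAX_XID / 32768`
# One 256KB segment holds 32 such pages
LAST_SEG=`expr $LAST_PAGE / 32`

SEG=0
while [ "$SEG" -le "$LAST_SEG" ]
do
	FILE="$CLOG_DIR/`printf %04X $SEG`"   # segments are named 0000, 0001, ... in hex
	if [ "$SEG" -lt "$LAST_SEG" ]
	then	PAGES=32                      # intermediate segments are full 256KB
	else	PAGES=`expr $LAST_PAGE % 32 + 1`  # last segment: pad only to the needed page
	fi
	CURPAGES=0
	if [ -f "$FILE" ]
	then	CURPAGES=`expr \`wc -c < "$FILE"\` / 8192`
	fi
	# Only ever append zeros; per Tom, never rename or remove existing segments.
	if [ "$CURPAGES" -lt "$PAGES" ]
	then	dd if=/dev/zero bs=8192 count=`expr $PAGES - $CURPAGES` >> "$FILE" 2>/dev/null
	fi
	SEG=`expr $SEG + 1`
done
```

With the thread's numbers (next transaction id 589031), this pads pg_clog/0000 from one 8K page out to eighteen, which covers the failing offset 139264.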
[
{
"msg_contents": "Hi guys,\n\nI've been chatting to Tom about implementing the ability to change the NULL\nstatus of a column via SQL.\n\nThis is the Oracle syntax:\n\nalter table table_name modify column1 not null;\nalter table table_name modify column1 null;\n\nThis is the MySQL syntax:\n\nALTER TABLE asfd CHANGE [COLUMN] old_col_name create_definition [FIRST |\nAFTER column_name]\nor ALTER TABLE asfd MODIFY [COLUMN] create_definition [FIRST | AFTER\ncolumn_name]\n\nCHANGE col_name, DROP col_name, and DROP INDEX are MySQL extensions to ANSI\nSQL92.\nMODIFY is an Oracle extension to ALTER TABLE.\n\nSo, the question is - what the heck is the standard syntax? Is there a\nstandard syntax? How about this syntax that I came up with:\n\nALTER TABLE blah ALTER COLUMN col SET [NULL | NOT NULL]\n\nAnyone have any ideas? Perhaps we should use some sort of 'MODIFY'-like\nsyntax to enable in the future maybe the ability to change column specs in\nmore advanced ways (such as column type and size)\n\nIf the answer is no, Postgres's parser does not have this syntax enabled,\nthen I'm going to have to ask someone to implement it for me, and then I can\nfill in the actual guts of the function - whereever that may be. (I don't\nknow parser stuff!)\n\nChris\n\n",
"msg_date": "Fri, 15 Feb 2002 09:59:29 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "SET NULL / SET NOT NULL"
},
{
"msg_contents": "At 09:59 15/02/02 +0800, Christopher Kings-Lynne wrote:\n>\n>ALTER TABLE blah ALTER COLUMN col SET [NULL | NOT NULL]\n>\n\nI'm not too fond of 'SET NULL' - the syntax implies the column is being set\nto NULL. But I agree with the rest given we already have ALTER\nTABLE...ALTER COLUMN, I'd vote for:\n\n ALTER TABLE blah ALTER COLUMN col [ALLOW NULL | NOT NULL]\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 21 Feb 2002 13:12:19 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n\n> I'm not too fond of 'SET NULL' - the syntax implies the column is being set\n> to NULL. But I agree with the rest given we already have ALTER\n> TABLE...ALTER COLUMN, I'd vote for:\n> \n> ALTER TABLE blah ALTER COLUMN col [ALLOW NULL | NOT NULL]\n\nFWIW, I like this syntax too.\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n",
"msg_date": "20 Feb 2002 21:18:49 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "> > I'm not too fond of 'SET NULL' - the syntax implies the column\n> is being set\n> > to NULL. But I agree with the rest given we already have ALTER\n> > TABLE...ALTER COLUMN, I'd vote for:\n> >\n> > ALTER TABLE blah ALTER COLUMN col [ALLOW NULL | NOT NULL]\n>\n> FWIW, I like this syntax too.\n\nLet's say, theoretically, that in the future we want to allow people to\nchange the type of their columns, plus allow them to change the nullability.\n\nShould we come up with a syntax for changing nullability that allows for the\nfuture changing of column type? If so, then a syntaxes like these might be\nthe way to go:\n\nALTER TABLE blah ALTER COLUMN col DROP DEFAULT;\nALTER TABLE blah ALTER COLUMN col SET DEFAULT 't';\nALTER TABLE blah ALTER COLUMN col NULL;\nALTER TABLE blah ALTER COLUMN col NOT NULL;\nALTER TABLE blah ALTER COLUMN col varchar(50);\nALTER TABLE blah ALTER COLUMN col int4 NULL;\nALTER TABLE blah ALTER COLUMN col text NOT NULL;\n\nIf we just allow the full col spec we could one day support this:\n\nALTER TABLE blah ALTER COLUMN col text boolean NOT NULL DEFAULT 'f';\n\nWhich would change the column to that definition (if coercion is possible)\nno matter what current definition is...\n\nIs this the eventual goal? Will this cause shift/reduce errors? will we\nneed to put the word 'SET' in after 'col'?\n\nChris\n\n",
"msg_date": "Thu, 21 Feb 2002 10:28:51 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "> > ALTER TABLE blah ALTER COLUMN col [ALLOW NULL | NOT NULL]\n> FWIW, I like this syntax too.\n\nWhat would be the drawbacks to having all portions after \"col\" in the\nexample above be *exactly* the same as the clauses allowed in CREATE\nTABLE? So, this would be\n\n ALTER TABLE tab ALTER COLUMN col [ NULL | NOT NULL ]\n\nThe syntax would then be entirely predictable if you knew what you would\nhave written if you had set the constraint during table creation. I'll\nagree (if someone points it out) that this particular example is pretty\nterse.\n\nIn that same line of thought, how about making it more closely mimic the\noriginal CREATE TABLE syntax? Something like\n\n ALTER TABLE t (c1 NULL)\n\nHmm. Or if we are going to eventually allow altering column types then\none could include the type also. That may be a bit much, but having an\nidea of what *that* syntax might be could help on manipulating other\ncolumn attributes too...\n\n - Thomas\n",
"msg_date": "Wed, 20 Feb 2002 18:34:06 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "(our mail crossed in the ether...)\n\n> Let's say, theoretically, that in the future we want to allow people to\n> change the type of their columns, plus allow them to change the nullability.\n\nRight.\n\n> Should we come up with a syntax for changing nullability that allows for the\n> future changing of column type? If so, then a syntaxes like these might be\n> the way to go:\n\nYup.\n\n> If we just allow the full col spec we could one day support this:\n> ALTER TABLE blah ALTER COLUMN col text boolean NOT NULL DEFAULT 'f';\n> Which would change the column to that definition (if coercion is possible)\n> no matter what current definition is...\n\nRight. No point in *precluding* that with a short-sighted choice of\nsyntax.\n\n> Is this the eventual goal? Will this cause shift/reduce errors? will we\n> need to put the word 'SET' in after 'col'?\n\nProbably not, if we can already do this with CREATE TABLE.\n\nAnd if we head this direction, then choosing a syntax which most closely\nmimics the current CREATE TABLE will allow altering two columns at once,\nwhich would be more efficient presumably than doing one column at a\ntime.\n\n - Thomas\n",
"msg_date": "Wed, 20 Feb 2002 18:40:36 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "At 18:34 20/02/02 -0800, Thomas Lockhart wrote:\n>> > ALTER TABLE blah ALTER COLUMN col [ALLOW NULL | NOT NULL]\n>> FWIW, I like this syntax too.\n>\n>What would be the drawbacks to having all portions after \"col\" in the\n>example above be *exactly* the same as the clauses allowed in CREATE\n>TABLE? So, this would be\n>\n> ALTER TABLE tab ALTER COLUMN col [ NULL | NOT NULL ]\n\nThis looks fine to me. The spec only talks about CHECK constraints in ALTER\nTABLE, but if I had to guess the most spec-like syntax, it would be:\n\n ALTER TABLE tab ALTER COLUMN col DROP NOT NULL\n\nwhich does not seem particularly good; preserving the syntax from table\ncreation has to be TWTG. Do we really allow:\n\n\tCREATE TABLE FOO(BAR INT NULL)\n\n?\n\n\n>In that same line of thought, how about making it more closely mimic the\n>original CREATE TABLE syntax? Something like\n\nBecause the SQL spec does have ALTER TABLE...ALTER COLUMN; so we should\nstick with the same syntax.\n\n\n>Hmm. Or if we are going to eventually allow altering column types then\n>one could include the type also.\n\nDefinitely; Chris' suggestion seems pretty good to me.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 21 Feb 2002 14:18:19 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "At 10:28 21/02/02 +0800, Christopher Kings-Lynne wrote:\n>\n>ALTER TABLE blah ALTER COLUMN col DROP DEFAULT;\n>ALTER TABLE blah ALTER COLUMN col SET DEFAULT 't';\n>ALTER TABLE blah ALTER COLUMN col NULL;\n>ALTER TABLE blah ALTER COLUMN col NOT NULL;\n>ALTER TABLE blah ALTER COLUMN col varchar(50);\n>ALTER TABLE blah ALTER COLUMN col int4 NULL;\n>ALTER TABLE blah ALTER COLUMN col text NOT NULL;\n\nLooks good.\n\n\n>will we need to put the word 'SET' in after 'col'?\n\nThe spec only uses SET for the DEFAULT clause.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 21 Feb 2002 14:21:18 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "> ALTER TABLE tab ALTER COLUMN col DROP NOT NULL\n>\n> which does not seem particularly good; preserving the syntax from\ntable\n> creation has to be TWTG. Do we really allow:\n>\n> CREATE TABLE FOO(BAR INT NULL)\n\nCertainly does. I depend on that ability to override the standard\nNULL / NOT NULL constraint that the domain may have to account for the\nexception to the rule.\n\nActually, is that proper? Equally easy to disallow overrides, but\n(since the books I have don't say) it seemed useful for people with\nfunny circumstances (like wanting to log a miss as well a hit).\n\n",
"msg_date": "Wed, 20 Feb 2002 22:38:01 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "The SQL spec will not help us here, since it doesn't define such a\ncapability AFAICT. We might do worse than to look at Or*cle's\nimplementation, which appears to involve a MODIFY keyword.\n\nI find this in the Or*cle 8i documentation examples:\n\n\tThe following statement alters the EMP table and defines and\n\tenables a NOT NULL constraint on the SAL column:\n\n\tALTER TABLE emp \n\t MODIFY (sal NUMBER CONSTRAINT nn_sal NOT NULL); \n\nThe docs are opaque enough that I can't actually figure out a BNF\ndefinition for ALTER TABLE MODIFY, and I don't have a working\ninstallation to experiment against. Can any Or*cle users here\nenlighten us?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Feb 2002 22:47:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL "
},
{
"msg_contents": "> The SQL spec will not help us here, since it doesn't define such a\n> capability AFAICT. We might do worse than to look at Or*cle's\n> implementation, which appears to involve a MODIFY keyword.\n>\n> I find this in the Or*cle 8i documentation examples:\n>\n> \tThe following statement alters the EMP table and defines and\n> \tenables a NOT NULL constraint on the SAL column:\n>\n> \tALTER TABLE emp\n> \t MODIFY (sal NUMBER CONSTRAINT nn_sal NOT NULL);\n>\n> The docs are opaque enough that I can't actually figure out a BNF\n> definition for ALTER TABLE MODIFY, and I don't have a working\n> installation to experiment against. Can any Or*cle users here\n> enlighten us?\n\nI've already posted the Oracle and MSSQL spec to the list here - just check\none of my earlier posts with this subject...\n\nA good place to ask questions is comp.databases.oracle.misc\n\nChris\n\n",
"msg_date": "Thu, 21 Feb 2002 11:52:56 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: SET NULL / SET NOT NULL "
},
{
"msg_contents": "Christopher Kings-Lynne writes:\n\n> Should we come up with a syntax for changing nullability that allows for the\n> future changing of column type? If so, then a syntaxes like these might be\n> the way to go:\n>\n> ALTER TABLE blah ALTER COLUMN col DROP DEFAULT;\n> ALTER TABLE blah ALTER COLUMN col SET DEFAULT 't';\n\nThis is standard.\n\n> ALTER TABLE blah ALTER COLUMN col NULL;\n> ALTER TABLE blah ALTER COLUMN col NOT NULL;\n\nThis is missing a verb. It can be read as \"alter table blah, in\nparticular, alter column col, (and do what with?) NULL\". Is the NULL part\nof the identity of the column?\n\nUsing the standard precedent above, how about\n\nALTER TABLE blah ALTER COLUMN col SET NOT NULL;\nALTER TABLE blah ALTER COLUMN col DROP NOT NULL;\n\nThis also avoids the confusing \"NULL constraint\", which does not say that\nthe column has to be NULL.\n\n> ALTER TABLE blah ALTER COLUMN col varchar(50);\n\nHere again, there should probably be at least one more word inserted, like\nTYPE.\n\n> If we just allow the full col spec we could one day support this:\n>\n> ALTER TABLE blah ALTER COLUMN col text boolean NOT NULL DEFAULT 'f';\n\nMaybe ... ALTER COLUMN col TO text ...\n\n> Is this the eventual goal? Will this cause shift/reduce errors? will we\n> need to put the word 'SET' in after 'col'?\n\nA shift/reduce conflict has never stopped us. ;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 21 Feb 2002 20:19:29 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Using the standard precedent above, how about\n\n> ALTER TABLE blah ALTER COLUMN col SET NOT NULL;\n> ALTER TABLE blah ALTER COLUMN col DROP NOT NULL;\n\nThis seems like a good choice if we are not too concerned about\ncompatibility with other DBMSes. (Which, for something like this,\nI'm not; how many applications will be issuing programmed commands\nlike this?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Feb 2002 21:56:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL "
},
{
"msg_contents": "Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Using the standard precedent above, how about\n> \n> > ALTER TABLE blah ALTER COLUMN col SET NOT NULL;\n> > ALTER TABLE blah ALTER COLUMN col DROP NOT NULL;\n> \n> This seems like a good choice if we are not too concerned about\n> compatibility with other DBMSes. (Which, for something like this,\n> I'm not; how many applications will be issuing programmed commands\n> like this?)\n\nYes, I like this too; the SET/DROP symetry.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 21 Feb 2002 22:02:04 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "My 2.2c worth (0.2c GST included):\nThe MySQL approach is a pain because it effectively makes you define a\nfield from scratch. You have to know and include all the field\nattributes instead of just changing the attribute you want.\n\nThe attribute definition should be the same as used in create.\n\nIf the SQL standard does not have an appropriate facility, submit\nyours as an enhancement. They can only say yes, no, or that they have\nsomething already in the pipeline and then you can implement their\nproposed standard.\n\nPeter\n\nOn Thu, 21 Feb 2002 01:15:15 +0000 (UTC), chriskl@familyhealth.com.au\n(\"Christopher Kings-Lynne\") wrote:\n\n>Hi guys,\n>\n>I've been chatting to Tom about implementing the ability to change the NULL\n>status of a column via SQL.\n>\n>This is the Oracle syntax:\n>\n>alter table table_name modify column1 not null;\n>alter table table_name modify column1 null;\n>\n>This is the MySQL syntax:\n>\n>ALTER TABLE asfd CHANGE [COLUMN] old_col_name create_definition [FIRST |\n>AFTER column_name]\n>or ALTER TABLE asfd MODIFY [COLUMN] create_definition [FIRST | AFTER\n>column_name]\n>\n>CHANGE col_name, DROP col_name, and DROP INDEX are MySQL extensions to ANSI\n>SQL92.\n>MODIFY is an Oracle extension to ALTER TABLE.\n>\n>So, the question is - what the heck is the standard syntax? Is there a\n>standard syntax? How about this syntax that I came up with:\n>\n>ALTER TABLE blah ALTER COLUMN col SET [NULL | NOT NULL]\n>\n>Anyone have any ideas? Perhaps we should use some sort of 'MODIFY'-like\n>syntax to enable in the future maybe the ability to change column specs in\n>more advanced ways (such as column type and size)\n>\n>If the answer is no, Postgres's parser does not have this syntax enabled,\n>then I'm going to have to ask someone to implement it for me, and then I can\n>fill in the actual guts of the function - whereever that may be. (I don't\n>know parser stuff!)\n>\n>Chris\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n\n",
"msg_date": "Fri, 22 Feb 2002 03:37:15 GMT",
"msg_from": "peter@helpnet_BUT_NOT_SPAM.com.au",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "Hi,\n\nI'm halfway thru implementing setting a column's nullness (I've done\nchanging to null,\nbut not changing to not null)\n\nPeter E. said:\n\n> Using the standard precedent above, how about\n>\n> ALTER TABLE blah ALTER COLUMN col SET NOT NULL;\n> ALTER TABLE blah ALTER COLUMN col DROP NOT NULL;\n\nDo we want the above syntax, or this syntax:\n\nALTER TABLE blah ALTER COLUMN col SET NOT NULL;\nALTER TABLE blah ALTER COLUMN col SET NULL;\n\nThe former sort of treats it like a contraint, where as the latter treats it\nas it is during the CREATE TABLE statement.\n\nSay in the future we want to support changing column type as well. How\nwould we work that in?\n\nALTER TABLE blah ALTER COLUMN col SET int4; ??????\n\nThen we should allow people to do this:\n\nALTER TABLE blah ALTER COLUMN col SET int4 NULL DEFAULT '3';\n\nSo they can change their entire column in one statement.\n\nSo really this implies that ALTER COLUMN/SET NULL is the correct syntax,\nrather than ALTER COLUMN/DROP NOT NULL. In fact, maybe we could support\nBOTH syntaxes...\n\nComments? Let's sort this out before I submit my patch.\n\nRegards,\n\nChris\n\n",
"msg_date": "Fri, 22 Mar 2002 13:39:45 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Say in the future we want to support changing column type as well. How\n> would we work that in?\n\n> ALTER TABLE blah ALTER COLUMN col SET int4; ??????\n\nSeems one keyword shy of a load; I'd prefer\n\nALTER TABLE blah ALTER COLUMN col SET TYPE int4;\n\nOtherwise, every keyword that might appear after SET will have to be\nfully reserved (else it couldn't be distinguished from a type name).\n\nI like the \"SET NULL\"/\"SET NOT NULL\" variant better than SET/DROP, even\nthough \"SET NULL\" is perhaps open to misinterpretation. \"DROP NOT NULL\"\nseems just as confusing for anyone who's not read the documentation :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Mar 2002 00:52:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL "
},
{
"msg_contents": "> Seems one keyword shy of a load; I'd prefer\n>\n> ALTER TABLE blah ALTER COLUMN col SET TYPE int4;\n>\n> Otherwise, every keyword that might appear after SET will have to be\n> fully reserved (else it couldn't be distinguished from a type name).\n\nI like that...\n\nSo would you then envisage something like this:\n\nALTER TABLE blah ALTER COLUMN col SET TYPE int4 DEFAULT 3 NOT NULL;\n\nor\n\nALTER TABLE blah ALTER COLUMN col SET DEFAULT 3 TYPE int4 NULL;\n\netc.\n\nie. Order wouldn't matter and you could do them all at once for convenience?\nThis seems like a cool idea to me.\n\nProblem with all this, of course, is that it's different to everyone else's\nsyntax, but then they're all different to each other. There's no standard\nfor it, but if there's a new standard - I wonder what they would specify?\nSince altering a column is a not oft used operation, I would expect that the\npunters wouldn't have a problem looking in the docs for how to do it, for\neach different DBMS they use...\n\nChris\n\n",
"msg_date": "Fri, 22 Mar 2002 14:15:35 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: SET NULL / SET NOT NULL "
},
{
"msg_contents": "Tom Lane wrote:\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > Say in the future we want to support changing column type as well. How\n> > would we work that in?\n> \n> > ALTER TABLE blah ALTER COLUMN col SET int4; ??????\n> \n> Seems one keyword shy of a load; I'd prefer\n> \n> ALTER TABLE blah ALTER COLUMN col SET TYPE int4;\n> \n> Otherwise, every keyword that might appear after SET will have to be\n> fully reserved (else it couldn't be distinguished from a type name).\n> \n> I like the \"SET NULL\"/\"SET NOT NULL\" variant better than SET/DROP, even\n> though \"SET NULL\" is perhaps open to misinterpretation. \"DROP NOT NULL\"\n> seems just as confusing for anyone who's not read the documentation :-(\n\nYes, DROP NOT NULL does have a weird twist to it. However, does SET\nNULL sound to much like you are setting all the values to NULL?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Mar 2002 01:27:23 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "Christopher Kings-Lynne writes:\n\n> Do we want the above syntax, or this syntax:\n>\n> ALTER TABLE blah ALTER COLUMN col SET NOT NULL;\n> ALTER TABLE blah ALTER COLUMN col SET NULL;\n\nMy only objection to the second command is that it's plain wrong. You\ndon't set anything to NULL, so don't make the command look like it.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 22 Mar 2002 01:31:05 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "> > Do we want the above syntax, or this syntax:\n> >\n> > ALTER TABLE blah ALTER COLUMN col SET NOT NULL;\n> > ALTER TABLE blah ALTER COLUMN col SET NULL;\n> \n> My only objection to the second command is that it's plain wrong. You\n> don't set anything to NULL, so don't make the command look like it.\n\nSo then how is it any more wrong than SET NOT NULL?\n\nIt should almost be ADD NOT NULL ...\n\nChris\n",
"msg_date": "Fri, 22 Mar 2002 14:34:57 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "On Fri, Mar 22, 2002 at 02:34:57PM +0800, Christopher Kings-Lynne wrote:\n> > > Do we want the above syntax, or this syntax:\n> > >\n> > > ALTER TABLE blah ALTER COLUMN col SET NOT NULL;\n> > > ALTER TABLE blah ALTER COLUMN col SET NULL;\n> > \n> > My only objection to the second command is that it's plain wrong. You\n> > don't set anything to NULL, so don't make the command look like it.\n> \n> So then how is it any more wrong than SET NOT NULL?\n> \n> It should almost be ADD NOT NULL ...\n \nHmm, there's this SQL92 keyword here: what do people thing of NULLABLE?\n\nSET NOT NULLABLE\nSET NULLABLE\n\nRoss\n",
"msg_date": "Fri, 22 Mar 2002 00:54:10 -0600",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "On March 22, 2002 01:31 am, Peter Eisentraut wrote:\n> Christopher Kings-Lynne writes:\n> > Do we want the above syntax, or this syntax:\n> >\n> > ALTER TABLE blah ALTER COLUMN col SET NOT NULL;\n> > ALTER TABLE blah ALTER COLUMN col SET NULL;\n>\n> My only objection to the second command is that it's plain wrong. You\n> don't set anything to NULL, so don't make the command look like it.\n\nHow about this?\n\n ALTER TABLE blah ALTER COLUMN col UNSET NOT NULL;\n\nI would almost think that it should be NOTNULL anyway to make it clear that we\nare setting (or unsetting) one thing and that it is not a weird way of saying\n\"...NOT SET NULL\" or \"NOT UNSET NULL\" but I realize that it should also look\nmore like the NOT NULL clause we already have in the CREATE TABLE query.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Fri, 22 Mar 2002 07:51:02 -0500",
"msg_from": "\"D'Arcy J.M. Cain\" <darcy@druid.net>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "Christopher Kings-Lynne writes:\n\n> So then how is it any more wrong than SET NOT NULL?\n\nYou're right.\n\n> It should almost be ADD NOT NULL ...\n\nI like that.\n\nIt also makes sense because the standard syntax is to ADD/DROP CHECK\nconstraints, to which NOT NULL constraints are equivalent.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 22 Mar 2002 11:38:27 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL"
}
] |
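For reference alongside the thread above, here is how the competing spellings would look side by side on a made-up table. The table and column names are illustrative only; the SET/DROP pair is the form the discussion kept returning to, and the comment about verifying existing data follows the MS SQL behavior cited elsewhere in these threads rather than anything decided here.

```sql
-- Existing, standard forms already in the grammar:
ALTER TABLE blah ALTER COLUMN col SET DEFAULT 't';
ALTER TABLE blah ALTER COLUMN col DROP DEFAULT;

-- The symmetric proposal for the NOT NULL property:
ALTER TABLE blah ALTER COLUMN col SET NOT NULL;   -- would need to verify no NULLs exist first
ALTER TABLE blah ALTER COLUMN col DROP NOT NULL;  -- column becomes nullable again

-- Rough equivalents in the other systems discussed:
--   Oracle:  ALTER TABLE blah MODIFY (col NOT NULL);
--   MS SQL:  ALTER TABLE blah ALTER COLUMN col int NOT NULL;  -- type must be restated
```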
[
{
"msg_contents": "(back on list; it is an interesting discussion imho)\n\n> Thanks for the enlightening reply. It seems self-evident to me that if\n> following a specification results in a mis-assertion (as demonstrated in my\n> test case) then either the specification contains an error in logic or the\n> application logic is in violation of a predicating assumption inherent in\n> the specification. In this case, I think the latter applies. The \"swap\n> inputs if E1 < D1\" logic is predicated on the assumption of an Euclidean\n> space while time-of-day data points form an essentially cylindrical space.\n> The correct\n> logic for this type of space is:\n> if E1 < D1\n> return (D2, E2) not overlap (D1,E1)\n> else\n> return (D1,E1) overlap (D2,E2)\n> where the overlap function above is the Euclidean overlap as currently\n> defined.\n\nYup. But correct and intuitive may not be the same in this case, given\nthe guidance of the SQL99 spec as I understand it.\n\n> \"But in your case, choosing to record only times but then expecting the\n> code to respect a day boundary ...\"\n> The problem I am trying to solve is completely generic. Periodic schedules\n> are quite common and are usually _not_ associated with specific dates.\n> Examples are television broadcast schedules, shipping and routing schedules,\n> maintenance schedules and the like.\n\nSure, I agree. And changing the second argument to an interval does not\nhelp, since the spec seems to call for some implicit math which turns it\ninto exactly the case you already see.\n\n> The problem you allude to (intervals that exceed 24 hours) would seem to me\n> to abuse of the time (of day) data type. As a programmer I would expect to\n> have to handle such corner cases as exceptions (although the overlap\n> function could easily handle this case since _any_ time interval overlaps an\n> interval that exceeds 24 hours!) What I don't expect is for a built-in\n> Boolean function to lie to me when used according to the published API!\n> Violating a specification's underlying assumption is the same as violating\n> the specification. One should either re-write the overlap function to\n> prperly handle time/timetz data points or eliminate the overlap function for\n> the time data altogether. As it stands, it is broken and dangerous.\n\nSorry, I haven't yet made the leap from taking the spec literally (as I\nthink we have done) to somehow violating the spec's underlying\nassumption. Clearly the spec puts TIME and TIME WITH TIME ZONE into the\nsame \"datetime data type\" category discussed in the OVERLAPS definition.\nWhat \"underlying assumption\" are you referring to? I *know* that this\nparticular case seems to lead to non-intuitive behavior, and I've made\nthe argument before that we should violate a spec if it is sufficiently\ndamaged, but I'm not sure that we should make that leap here. I'm not\nactually arguing against it, other than we should be inclined by default\nto follow the spec.\n\nComments?\n\n - Thomas\n\n> > Note the third row in the query result below is in error. The four hour\n> > interval (2300UTC - 0300UTC) does not overlap the interval\n> 1530UTC-1627UTC).\n> > Is this a bug?\n> \n> No. It conforms to (my reading of) the SQL99 spec. So it is a feature,\n> even if I misread the spec. Which I think I didn't ;) But if I did, then\n> we can change the implementation of course.\n> \n> I've included the relevant part of the spec below. 
It seems clause (3)\n> requires that we reorder the arguments to OVERLAPS, though perhaps\n> someone would like to research whether TIME is allowed to be used with\n> OVERLAPS at all (if not, then we could make up the rules ourselves).\n> \n> > It would be cool if timetz (or time) datatypes were to wrap properly\n> > across day boundaries (i.e. if start time < stop time then assume start\n> time\n> > is day before) but at the very least, the overlaps functions should not\n> lie\n> > to you!\n> \n> Some parts of the spec aren't cool, or interfer with coolness. This may\n> be one of them. If everything conforms to the standard, then we can\n> start discussing whether that part of the standard is so brain-dead as\n> to be useless or likely to directly cause damage.\n> \n> But in your case, choosing to record only times but then expecting the\n> code to respect a day boundary seems to be an assumption which could\n> bite you in other ways later. What happens when an interval happens to\n> be longer than a day??\n> \n> hth\n> \n> - Thomas\n> \n> (omit some text defining the input as \"(D1, E1) OVERLAPS (D2, E2)\" as\n> the input to the OVERLAPS operator)\n> \n> 3) If D1 is the null value or if E1 < D1, then let S1 = E1 and let\n> T1 = D1. Otherwise, let S1 = D1 and let T1 = E1.\n> 4) Case:\n> a) If the most specific type of the second field of <row value\n> expression 2> is a datetime data type, then let E2 be the\n> value of the second field of <row value expression 2>.\n> b) If the most specific type of the second field of <row value\n> expression 2> is INTERVAL, then let I2 be the value of the\n> second field of <row value expression 2>. Let E2 = D2 + I2.\n> 5) If D2 is the null value or if E2 < D2, then let S2 = E2 and let\n> T2 = D2. Otherwise, let S2 = D2 and let T2 = E2.\n> 6) The result of the <overlaps predicate> is the result of the\n> following expression:\n> ( S1 > S2 AND NOT ( S1 >= T2 AND T1 >= T2 ) )\n> OR\n> ( S2 > S1 AND NOT ( S2 >= T1 AND T2 >= T1 ) )\n> OR\n> \n> ( S1 = S2 AND ( T1 <> T2 OR T1 = T2 ) )\n",
"msg_date": "Thu, 14 Feb 2002 18:50:25 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Re: FWD: overlaps() bug?"
}
] |
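To see how the spec excerpt above produces the reported result, here is the thread's example worked through clauses (3)-(6), using illustrative plain TIME literals in place of the original timetz values; the exact literal syntax is an assumption of this sketch.

```sql
-- (D1, E1) = (23:00, 03:00), meant to wrap past midnight; (D2, E2) = (15:30, 16:27).
-- Clause 3: E1 < D1, so the pair is swapped: S1 = 03:00, T1 = 23:00.
-- Clause 5: no swap needed: S2 = 15:30, T2 = 16:27.
-- Clause 6: S2 > S1 AND NOT (S2 >= T1 AND T2 >= T1) is TRUE, because after the
-- swap the first interval is 03:00-23:00, which contains 15:30-16:27.
SELECT (TIME '23:00', TIME '03:00') OVERLAPS (TIME '15:30', TIME '16:27');
-- => true under the spec reading; the result the original poster objected to.

-- The "cylindrical" logic quoted at the top of the message, as an expression
-- over the same literals:
SELECT CASE WHEN TIME '03:00' < TIME '23:00'   -- E1 < D1: the window wraps
            THEN NOT (TIME '15:30', TIME '16:27') OVERLAPS (TIME '23:00', TIME '03:00')
            ELSE (TIME '23:00', TIME '03:00') OVERLAPS (TIME '15:30', TIME '16:27')
       END;
-- => false: 15:30-16:27 does not fall inside the wrapped 23:00-03:00 window.
```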
[
{
"msg_contents": "Hi guys,\n\nThis is like the 10th time I've tried to post this! Every time I send a\nmail with SET NULL / SET NOT NULL in the subject - it never appears on the\nlist - what's going on?????\n\nI've been chatting to Tom about implementing the ability to change the NULL\nstatus of a column via SQL.\n\nThis is the Oracle syntax:\n\nalter table table_name modify column1 not null;\nalter table table_name modify column1 null;\n\nThis is the MySQL syntax:\n\nALTER TABLE asfd CHANGE [COLUMN] old_col_name create_definition [FIRST |\nAFTER column_name]\nor ALTER TABLE asfd MODIFY [COLUMN] create_definition [FIRST | AFTER\ncolumn_name]\n\nCHANGE col_name, DROP col_name, and DROP INDEX are MySQL extensions to ANSI\nSQL92.\nMODIFY is an Oracle extension to ALTER TABLE.\n\nSo, the question is - what the heck is the standard syntax? Is there a\nstandard syntax? How about this syntax that I came up with:\n\nALTER TABLE blah ALTER COLUMN col SET [NULL | NOT NULL]\n\nAnyone have any ideas? Perhaps we should use some sort of 'MODIFY'-like\nsyntax to enable in the future maybe the ability to change column specs in\nmore advanced ways (such as column type and size)\n\nIf the answer is no, Postgres's parser does not have this syntax enabled,\nthen I'm going to have to ask someone to implement it for me, and then I can\nfill in the actual guts of the function - whereever that may be. (I don't\nknow parser stuff!)\n\nChris\n\n",
"msg_date": "Fri, 15 Feb 2002 14:32:14 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "changing the nulability of columns "
},
{
"msg_contents": "As a follow up to my previous post, this is how MS-SQL defines ALTER TABLE\nasfd ALTER COLUMN:\n\n------------------------------------------\n\nALTER COLUMN\n\nALTER TABLE table\n{ [ ALTER COLUMN column_name\n { new_data_type [ ( precision [ , scale ] ) ]\n [ COLLATE < collation_name > ]\n [ NULL | NOT NULL ]\n | {ADD | DROP } ROWGUIDCOL }\n ]\n | ADD\n { [ < column_definition > ]\n | column_name AS computed_column_expression\n } [ ,...n ]\n | [ WITH CHECK | WITH NOCHECK ] ADD\n { < table_constraint > } [ ,...n ]\n | DROP\n { [ CONSTRAINT ] constraint_name\n | COLUMN column } [ ,...n ]\n | { CHECK | NOCHECK } CONSTRAINT\n { ALL | constraint_name [ ,...n ] }\n | { ENABLE | DISABLE } TRIGGER\n { ALL | trigger_name [ ,...n ] }\n}\n\n\nSpecifies that the given column is to be changed or altered. ALTER COLUMN is\nnot allowed if the compatibility level is 65 or earlier. For more\ninformation, see sp_dbcmptlevel.\n\nThe altered column cannot be:\n\nA column with a text, image, ntext, or timestamp data type.\n\n\nThe ROWGUIDCOL for the table.\n\n\nA computed column or used in a computed column.\n\n\nA replicated column.\n\n\nUsed in an index, unless the column is a varchar, nvarchar, or varbinary\ndata type, the data type is not changed, and the new size is equal to or\nlarger than the old size.\n\n\nUsed in statistics generated by the CREATE STATISTICS statement. First\nremove the statistics using the DROP STATISTICS statement. Statistics\nautomatically generated by the query optimizer are automatically dropped by\nALTER COLUMN.\n\n\nUsed in a PRIMARY KEY or [FOREIGN KEY] REFERENCES constraint.\n\n\nUsed in a CHECK or UNIQUE constraint, except that altering the length of a\nvariable-length column used in a CHECK or UNIQUE constraint is allowed.\n\n\nAssociated with a default, except that changing the length, precision, or\nscale of a column is allowed if the data type is not changed.\nSome data type changes may result in a change in the data. For example,\nchanging an nchar or nvarchar column to char or varchar can result in the\nconversion of extended characters. For more information, see CAST and\nCONVERT. Reducing the precision and scale of a column may result in data\ntruncation.\n\ncolumn_name\n\nIs the name of the column to be altered, added, or dropped. For new columns,\ncolumn_name can be omitted for columns created with a timestamp data type.\nThe name timestamp is used if no column_name is specified for a timestamp\ndata type column.\n\nnew_data_type\n\nIs the new data type for the altered column. Criteria for the new_data_type\nof an altered column are:\n\nThe previous data type must be implicitly convertible to the new data type.\n\n\nnew_data_type cannot be timestamp.\n\n\nANSI null defaults are always on for ALTER COLUMN; if not specified, the\ncolumn is nullable.\n\n\nANSI padding is always on for ALTER COLUMN.\n\n\nIf the altered column is an identity column, new_data_type must be a data\ntype that supports the identity property.\n\n\nThe current setting for SET ARITHABORT is ignored. ALTER TABLE operates as\nif the ARITHABORT option is ON.\nprecision\n\nIs the precision for the specified data type. For more information about\nvalid precision values, see Precision, Scale, and Length.\n\nscale\n\nIs the scale for the specified data type. For more information about valid\nscale values, see Precision, Scale, and Length.\n\nCOLLATE < collation_name >\n\nSpecifies the new collation for the altered column. 
Collation name can be\neither a Windows collation name or a SQL collation name. For a list and more\ninformation, see Windows Collation Name and SQL Collation Name.\n\nThe COLLATE clause can be used to alter the collations only of columns of\nthe char, varchar, text, nchar, nvarchar, and ntext data types. If not\nspecified, the column is assigned the default collation of the database.\n\nALTER COLUMN cannot have a collation change if any of the following\nconditions apply:\n\nIf a check constraint, foreign key constraint, or computed columns reference\nthe column changed.\n\n\nIf any index, statistics, or full-text index are created on the column.\nStatistics created automatically on the column changed will be dropped if\nthe column collation is altered.\n\n\nIf a SCHEMABOUND view or function references the column.\nFor more information about the COLLATE clause, see COLLATE.\n\nNULL | NOT NULL\n\nSpecifies whether the column can accept null values. Columns that do not\nallow null values can be added with ALTER TABLE only if they have a default\nspecified. A new column added to a table must either allow null values, or\nthe column must be specified with a default value.\n\nIf the new column allows null values and no default is specified, the new\ncolumn contains a null value for each row in the table. If the new column\nallows null values and a default definition is added with the new column,\nthe WITH VALUES option can be used to store the default value in the new\ncolumn for each existing row in the table.\n\nIf the new column does not allow null values, a DEFAULT definition must be\nadded with the new column, and the new column automatically loads with the\ndefault value in the new columns in each existing row.\n\nNULL can be specified in ALTER COLUMN to make a NOT NULL column allow null\nvalues, except for columns in PRIMARY KEY constraints. NOT NULL can be\nspecified in ALTER COLUMN only if the column contains no null values. The\nnull values must be updated to some value before the ALTER COLUMN NOT NULL\nis allowed, such as:\n\nUPDATE MyTable SET NullCol = N'some_value' WHERE NullCol IS NULL\n\nALTER TABLE MyTable ALTER COLUMN NullCOl NVARCHAR(20) NOT NULL\n\nIf NULL or NOT NULL is specified with ALTER COLUMN, new_data_type\n[(precision [, scale ])] must also be specified. If the data type,\nprecision, and scale are not changed, specify the current column values.\n\n",
"msg_date": "Fri, 15 Feb 2002 14:49:21 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: changing the nulability of columns "
},
{
"msg_contents": "> So, the question is - what the heck is the standard syntax? Is there a\n> standard syntax? How about this syntax that I came up with:\n>\n> ALTER TABLE blah ALTER COLUMN col SET [NULL | NOT NULL]\n\nIf there is no standard syntax for this, I would recommend emulating oracle \nor SQL server rather than coming up a new one. Why create yet another SQL \nextension that is not compatible with anyone elses.\n",
"msg_date": "Fri, 15 Feb 2002 22:35:39 -0600",
"msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>",
"msg_from_op": false,
"msg_subject": "Re: changing the nulability of columns"
}
] |
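A minimal SQL sketch of the two syntaxes discussed in the thread above (table and column names are placeholders; the SET NULL / SET NOT NULL form is only the proposal here, not something 7.2 implements):

    -- Proposed PostgreSQL syntax from the thread:
    ALTER TABLE blah ALTER COLUMN col SET NOT NULL;
    ALTER TABLE blah ALTER COLUMN col SET NULL;

    -- MS-SQL form quoted above: the full column type must be respecified,
    -- and any remaining NULLs must be cleaned up first:
    UPDATE MyTable SET NullCol = N'some_value' WHERE NullCol IS NULL;
    ALTER TABLE MyTable ALTER COLUMN NullCol NVARCHAR(20) NOT NULL;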
[
{
"msg_contents": "I just got a question originating from our lawyers at work, concerning\nthe usage of 'free software' in our product.\n\nThe product (currently) consists of a modified Redhat 7.1 distribution,\nPostgreSQL 7.1.3 (in custom RPM packages) and some proprietary softwares.\n\nI read the license, and saw this paragraph:\n\n use, copy, modify, and distribute this software and its\n documentation for any purpose, without fee, and without\n a written agreement is hereby granted\n\nWe're charging for the SYSTEM (ie, hardware and software combination)\nwhich contains and have PostgreSQL as a fundamental part of the whole\nsystem and function.\n\nWe're not charging SPECIFICALLY for PostgreSQL, but for a complete, working\nsetup... Is this in conformance with the PostgreSQL license...?\n\n\nOrtega counter-intelligence pits security Treasury Serbian smuggle\nMossad [Hello to all my fans in domestic surveillance] Delta Force\niodine South Africa Marxist Nazi tritium\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n",
"msg_date": "15 Feb 2002 13:27:37 +0100",
"msg_from": "Turbo Fredriksson <turbo@bayour.com>",
"msg_from_op": true,
"msg_subject": "License question"
},
{
"msg_contents": "* Turbo Fredriksson <turbo@bayour.com> wrote:\n\n| We're not charging SPECIFICALLY for PostgreSQL, but for a complete, working\n| setup... Is this in conformance with the PostgreSQL license...?\n\nDisclaimer : I'm not a lawyer. \n\nYes, this is perfectly legal with the PostgreSQL license. You could even charge\nfor the source code, without having done any modifications. If you do \nmodifications you can charge for the end product without releasing source code.\nThat is the big difference between BSD style licenses such as the PostgreSQL \nlicense and GPL licenses such as the one for Linux. In both cases you \nare allowed to charge for your product, but with the latter license you would\nhave to make the source code available for anybody you distribute your \nproduct to. This is not a requirement with BSD style licenses.\n\n-- \nGunnar R�nning - gunnar@polygnosis.com\nSenior Consultant, Polygnosis AS, http://www.polygnosis.com/\n",
"msg_date": "15 Feb 2002 15:54:50 +0100",
"msg_from": "Gunnar =?iso-8859-1?q?R=F8nning?= <gunnar@polygnosis.com>",
"msg_from_op": false,
"msg_subject": "Re: License question"
},
{
"msg_contents": "...\n> I read the license, and saw this paragraph:\n> use, copy, modify, and distribute this software and its\n> documentation for any purpose, without fee, and without\n> a written agreement is hereby granted\n...\n> We're not charging SPECIFICALLY for PostgreSQL, but for a complete, working\n> setup... Is this in conformance with the PostgreSQL license...?\n\nYes. The license is saying explicitly that there are no fees for you to\nuse PostgreSQL, not that you must allow others to use something\ncontaining PostgreSQL without fees.\n\nThe license asks that you include the license *for PostgreSQL* in your\nproduct, but does not require that the PostgreSQL license cover any\nother part of your system.\n\nhth\n\n - Thomas\n",
"msg_date": "Fri, 15 Feb 2002 07:08:30 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: License question"
},
{
"msg_contents": "Turbo Fredriksson <turbo@bayour.com> writes:\n\n> The product (currently) consists of a modified Redhat 7.1 distribution,\n> PostgreSQL 7.1.3 (in custom RPM packages) and some proprietary softwares.\n> \n> I read the license, and saw this paragraph:\n> \n> use, copy, modify, and distribute this software and its\n> documentation for any purpose, without fee, and without\n> a written agreement is hereby granted\n\nThis just means you can redistribute PG without paying the copyright\nholders or obtaining explicit permission. You are free to charge for\nyour redistribution if you wish (the GPL also allows this as long as\nsource is included). \n\n> We're charging for the SYSTEM (ie, hardware and software combination)\n> which contains and have PostgreSQL as a fundamental part of the whole\n> system and function.\n> \n> We're not charging SPECIFICALLY for PostgreSQL, but for a complete, working\n> setup... Is this in conformance with the PostgreSQL license...?\n\nShould be (IANAL, of course). As long as you supply sources for your\nRed Hat distribution and derived works from it (in compliance with the\nGPL) you should be in good shape from what I know.\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n",
"msg_date": "15 Feb 2002 10:18:12 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: License question"
},
{
"msg_contents": "\ngo to town :)\n\nOn 15 Feb 2002, Turbo Fredriksson wrote:\n\n> I just got a question originating from our lawyers at work, concerning\n> the usage of 'free software' in our product.\n>\n> The product (currently) consists of a modified Redhat 7.1 distribution,\n> PostgreSQL 7.1.3 (in custom RPM packages) and some proprietary softwares.\n>\n> I read the license, and saw this paragraph:\n>\n> use, copy, modify, and distribute this software and its\n> documentation for any purpose, without fee, and without\n> a written agreement is hereby granted\n>\n> We're charging for the SYSTEM (ie, hardware and software combination)\n> which contains and have PostgreSQL as a fundamental part of the whole\n> system and function.\n>\n> We're not charging SPECIFICALLY for PostgreSQL, but for a complete, working\n> setup... Is this in conformance with the PostgreSQL license...?\n>\n>\n> Ortega counter-intelligence pits security Treasury Serbian smuggle\n> Mossad [Hello to all my fans in domestic surveillance] Delta Force\n> iodine South Africa Marxist Nazi tritium\n> [See http://www.aclu.org/echelonwatch/index.html for more about this]\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Fri, 15 Feb 2002 11:18:22 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: License question"
},
{
"msg_contents": "Turbo Fredriksson <turbo@bayour.com> writes:\n> We're charging for the SYSTEM (ie, hardware and software combination)\n> which contains and have PostgreSQL as a fundamental part of the whole\n> system and function.\n> We're not charging SPECIFICALLY for PostgreSQL, but for a complete, working\n> setup... Is this in conformance with the PostgreSQL license...?\n\nCertainly. For that matter, you could charge just for Postgres, if you\ncould get anyone to pay ;-). The license says you don't have to pay us\na fee; it doesn't say anything about what you charge for your own work.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Feb 2002 10:24:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: License question "
}
] |
[
{
"msg_contents": "\n> Another objection is the need to add an OID field to tuple headers; 4\n> more bytes per tuple adds up (and on some platforms it'd be 8 bytes due\n> to alignment considerations).\n\nHow about only allowing one version per page, this is how Informix does it.\nImho separating in memory tuple representation from on disk tuple representation\nwould be a good thing anyway. While you need to align certain things in memory\nthere is no need to align on disk stuff. This would potentially save a lot of\ndiskspace. I know a lot of people say disk space is cheap, but the issue is that\nIO is slow. It would also open the door to features like compressing datapages \nlike RDB does. We have calculated here that porting six ~750 Gb databases from \nrdb to some other db would need ~4 times the disk space.\n\nAndreas\n",
"msg_date": "Fri, 15 Feb 2002 18:13:14 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: alter table drop column status "
}
] |
[
{
"msg_contents": "\n> That explains it ...\n> \n> profiles_faith | count\n> ----------------+--------\n> 0 | 485938\n> 1 | 2\n> 2 | 6\n> 7 | 2\n> 8 | 21\n> (5 rows)\n> \n> Cool, another waste of space *sigh*\n> \n> thanks ...\n> \n> \n> On Wed, 13 Feb 2002, Tom Lane wrote:\n> \n> > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > Okay, if I'm understanding pg_stats at all, which I may not be, n_distinct\n> > > should represent # of distinct values in that row, no?\n> > > But, I have one field that has 5 distinct values:\n> > > But pg_stats is reporting 1:\n> >\n> > The pg_stats values are only, um, statistical. If 99.9% of the table is\n> > the same value and the other four values appear only once or twice, it's\n> > certainly possible for ANALYZE's sample to include only the common value\n> > and miss the rare ones. AFAIK that will not break anything; if you have\n> > an example where the planner seems to be fooled because of this, let's\n> > see it.\n\nHmm ? How about select * from xxx where profiles_faith = 7\nwould estimate all rows, no ? Instead of 2.\nThat is why I think a bin for \"very uncommon\" values could also be \nuseful sometimes.\n\nAndreas\n",
"msg_date": "Fri, 15 Feb 2002 18:18:35 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: \"Bug\" in statistics for v7.2? "
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> The pg_stats values are only, um, statistical. If 99.9% of the table is\n> the same value and the other four values appear only once or twice, it's\n> certainly possible for ANALYZE's sample to include only the common value\n> and miss the rare ones. AFAIK that will not break anything; if you have\n> an example where the planner seems to be fooled because of this, let's\n> see it.\n\n> Hmm ? How about select * from xxx where profiles_faith = 7\n> would estimate all rows, no ? Instead of 2.\n\nNot in 7.2 ... nor in previous versions AFAIR.\n\n> That is why I think a bin for \"very uncommon\" values could also be \n> useful sometimes.\n\nPerhaps you should experiment or read the code before opining...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Feb 2002 12:45:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"Bug\" in statistics for v7.2? "
}
] |
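A short SQL sketch of how the sampled statistics for such a skewed column can be inspected and improved under the 7.2 statistics system (the table name "profiles" is an assumption; only the column name appears in the thread):

    SELECT n_distinct, most_common_vals, most_common_freqs
    FROM pg_stats
    WHERE tablename = 'profiles' AND attname = 'profiles_faith';

    -- A larger per-column statistics target makes ANALYZE sample more rows,
    -- so rare values are more likely to be caught:
    ALTER TABLE profiles ALTER COLUMN profiles_faith SET STATISTICS 100;
    ANALYZE profiles;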
[
{
"msg_contents": "This may be with the pg_dumpall script from 7.1.3 but the restoration \ndied as a result of the owner not being able to create the database it \nowned...\n\n\nSteps to reproduce:\n\ncreate a user [testuser] with createdb privilege (to create a database \nthat they own)\ncreate a database testdb using testuser's account\ndrop the createdb privilege from testuser (so they can't create an more \ndatabases)\nadd a table to the testdb as testuser\n\npg_dumpall > dump.pgsql\n\nupgrade to postgresql 7.2\n\npsql -U postgres template1 < dump.pgsql\n *it dies here*\n\nCan pg_dumpall create the script where user postgres creates the \ndatabase and the assigns the owner to another user rather than assuming \nthe owner can create the database?\n\n\n\n\n",
"msg_date": "Fri, 15 Feb 2002 11:31:19 -0600",
"msg_from": "Thomas Swan <tswan@olemiss.edu>",
"msg_from_op": true,
"msg_subject": "possible pg_dumpall (7.1.3) bug"
}
] |
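The reproduction steps above, written out as SQL for clarity (names are the poster's placeholders; the \c lines mark where to reconnect as the named user):

    CREATE USER testuser CREATEDB;
    \c template1 testuser
    CREATE DATABASE testdb;
    ALTER USER testuser NOCREATEDB;
    \c testdb testuser
    CREATE TABLE t (i integer);

pg_dumpall then emits a script in which testuser itself runs CREATE DATABASE testdb, which fails because the CREATEDB privilege has since been revoked.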
[
{
"msg_contents": "Subject: Re: FWD: overlaps() bug?\n\n\n>(back on list; it is an interesting discussion imho)\n\n>> What I don't expect is for a built-in\n>> Boolean function to lie to me when used according to the published API!\n>> Violating a specification's underlying assumption is the same as\nviolating\n>> the specification. One should either re-write the overlap function to\n>> properly handle time/timetz data points or eliminate the overlap function\n>> for the time data altogether. As it stands, it is broken and dangerous.\n\n>Sorry, I haven't yet made the leap from taking the spec literally (as I\n>think we have done) to somehow violating the spec's underlying\n>assumption. Clearly the spec puts TIME and TIME WITH TIME ZONE into the\n>same \"datetime data type\" category discussed in the OVERLAPS definition.\n\nI have to disagree. The datetime data points form a non-periodic, Euclidean\nspace, extending forward and backward to what passes for forever. This is\n_not_ the case with time/timetz data points. They form a periodic, wrapped\nspace which require different operators, much the same way that\ntrigonometric functions differ from their Euclidean counterparts. \n\n>What \"underlying assumption\" are you referring to? \n\n..the assumption of a Euclidean space. It is not specifically spelled out in\nthe specification but the logic (swap inputs if end_point < start_point)is\nonly valid for a non-wrapping space. As mentioned above, TIME and TIME WITH\nTIME ZONE data points are periodic and form a cylindrical, wrapped space. \n\n>I *know* that this\n>particular case seems to lead to non-intuitive behavior,\n\nYou mean non-intuitive as in incorrect??\n\n\n>and I've made\n>the argument before that we should violate a spec if it is sufficiently\n>damaged, \nIMHO the spec is not damaged. It just doesn't cover the type of data we are\nattempting to apply it to in this case. \n\n>but I'm not sure that we should make that leap here. I'm not\n>actually arguing against it, other than we should be inclined by default\n>to follow the spec.\n\n>Comments?\n> - Thomas\n\nSpecs are a good thing and should be adhered to. We should not however\nblindly\nfollow them off a cliff. If a function can not be implemented that both\nfollows the spec and gives the right answer then IMHO the function should\nnot be implemented. At least this way the user knows he/she has to implement\nthier own. The way it stands the result is the programmers worst enemy, the\nsilent error. However, by my reading of the spec, it is silent on the\ncorrect implementation of overlap for TIME data and therefore we should be\nfree to do the right thing.\n\nJeff\n\n\n \n\n> > Note the third row in the query result below is in error. The four hour\n> > interval (2300UTC - 0300UTC) does not overlap the interval\n> 1530UTC-1627UTC).\n> > Is this a bug?\n> \n> No. It conforms to (my reading of) the SQL99 spec. So it is a feature,\n> even if I misread the spec. Which I think I didn't ;) But if I did, then\n> we can change the implementation of course.\n> \n> I've included the relevant part of the spec below. It seems clause (3)\n> requires that we reorder the arguments to OVERLAPS, though perhaps\n> someone would like to research whether TIME is allowed to be used with\n> OVERLAPS at all (if not, then we could make up the rules ourselves).\n> \n> > It would be cool if timetz (or time) datatypes were to wrap properly\n> > across day boundaries (i.e. 
if start time < stop time then assume start\n> time\n> > is day before) but at the very least, the overlaps functions should not\n> lie\n> > to you!\n> \n> Some parts of the spec aren't cool, or interfer with coolness. This may\n> be one of them. If everything conforms to the standard, then we can\n> start discussing whether that part of the standard is so brain-dead as\n> to be useless or likely to directly cause damage.\n> \n> But in your case, choosing to record only times but then expecting the\n> code to respect a day boundary seems to be an assumption which could\n> bite you in other ways later. What happens when an interval happens to\n> be longer than a day??\n> \n> hth\n> \n> - Thomas\n> \n> (omit some text defining the input as \"(D1, E1) OVERLAPS (D2, E2)\" as\n> the input to the OVERLAPS operator)\n> \n> 3) If D1 is the null value or if E1 < D1, then let S1 = E1 and let\n> T1 = D1. Otherwise, let S1 = D1 and let T1 = E1.\n> 4) Case:\n> a) If the most specific type of the second field of <row value\n> expression 2> is a datetime data type, then let E2 be the\n> value of the second field of <row value expression 2>.\n> b) If the most specific type of the second field of <row value\n> expression 2> is INTERVAL, then let I2 be the value of the\n> second field of <row value expression 2>. Let E2 = D2 + I2.\n> 5) If D2 is the null value or if E2 < D2, then let S2 = E2 and let\n> T2 = D2. Otherwise, let S2 = D2 and let T2 = E2.\n> 6) The result of the <overlaps predicate> is the result of the\n> following expression:\n> ( S1 > S2 AND NOT ( S1 >= T2 AND T1 >= T2 ) )\n> OR\n> ( S2 > S1 AND NOT ( S2 >= T1 AND T2 >= T1 ) )\n> OR\n> \n> ( S1 = S2 AND ( T1 <> T2 OR T1 = T2 ) )",
"msg_date": "Fri, 15 Feb 2002 10:41:38 -0700",
"msg_from": "\"PATTERSON,JEFF (A-Sonoma,ex1)\" <jeff_patterson@agilent.com>",
"msg_from_op": true,
"msg_subject": "Re: FWD: overlaps() bug?"
},
{
"msg_contents": "...\n> Specs are a good thing and should be adhered to. We should not however\n> blindly follow them off a cliff.\n\nRight.\n\n> If a function can not be implemented that both\n> follows the spec and gives the right answer then IMHO the function should\n> not be implemented. At least this way the user knows he/she has to implement\n> thier own. The way it stands the result is the programmers worst enemy, the\n> silent error. However, by my reading of the spec, it is silent on the\n> correct implementation of overlap for TIME data and therefore we should be\n> free to do the right thing.\n\nHmm. We are discussing this so that all viewpoints and interpretations\nare uncovered. But afaict the spec is very clear on this by the fact\nthat it does *not* call for exceptions or differences in the\nimplementation for the \"datetime data types\". And in the spec it does\nnot specify only TIMESTAMP variants for use in OVERLAPS. If the spec\ncalls for a particular behavior (which it seems to in this case) then\n*someone* is going to be disappointed here; either you because it does\nnot do what you want or someone else who knows the spec and finds that\nit does not do what they expect.\n\nExtra pairs of eyes are helpful here; can anyone see that TIME is\nexcluded from the types defined for OVERLAPS (which would free us to Do\nIt Our Way) or if the spec calls for an implementation different from\nthe part of the spec I found (which might be The Right Way)?\n\nIf we end up agreeing as a group on what the spec calls for, and if it\nturns out that it doesn't call for a wrapping behavior on time\nboundaries, then you *could* fix this with your own function which\nchecks for wrapping behavior and acts accordingly.\n\nWe could also implement a separate function for time types which Does\nThe Right Thing.\n\n - Thomas\n",
"msg_date": "Fri, 15 Feb 2002 17:53:00 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: FWD: overlaps() bug?"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> Extra pairs of eyes are helpful here; can anyone see that TIME is\n> excluded from the types defined for OVERLAPS (which would free us to Do\n> It Our Way) or if the spec calls for an implementation different from\n> the part of the spec I found (which might be The Right Way)?\n\nNo, the current implementation is correct.\n\nThe drawback with redefining the time data type to be a circular number\nline is that it leads to definitional problems in other areas of the\narithmetic. For example, what would the result of\n\ntime '3:00' - time '23:00'\n\nhave to be?\n\nA wrapping time type would probably be useful, but not when it shadows the\nstandard type.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 15 Feb 2002 22:22:02 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: FWD: overlaps() bug?"
}
] |
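A short SQL illustration of the behavior debated in this thread, using the values from the original report (the results shown are those implied by the spec clauses quoted above):

    SELECT (TIME '23:00', TIME '03:00') OVERLAPS (TIME '15:30', TIME '16:27');
    -- clause (3) swaps the endpoints because 03:00 < 23:00, so this is
    -- evaluated as (03:00, 23:00) OVERLAPS (15:30, 16:27) and yields true

    SELECT TIME '03:00' - TIME '23:00';
    -- yields '-20:00' on the linear time line; a wrapping time type would
    -- presumably have to answer '04:00', which is Peter's definitional problem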
[
{
"msg_contents": "hi,\n\ni faced this error after vaccum. please help.\n\nDEBUG: database system was interrupted being in recovery at 2002-02-15 \n12:55:51\n EST\n This propably means that some data blocks are corrupted\n and you will have to use last backup for recovery.\nDEBUG: CheckPoint record at (5, 1864040592)\nDEBUG: Redo record at (5, 1864040592); Undo record at (0, 0); Shutdown TRUE\nDEBUG: NextTransactionId: 13855178; NextOid: 26051231\nDEBUG: database system was not properly shut down; automatic recovery in \nprogre\nss...\nDEBUG: redo starts at (5, 1864040656)\nFATAL 2: heap_update_redo: no block\npostmaster: Startup proc 13311 exited with status 512 - abort\npostmaster: invalid argument -- wal_debug\nTry 'postmaster --help' for more information.\n\n\nregards\nrajan\n\n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.314 / Virus Database: 175 - Release Date: 1/11/2002",
"msg_date": "Fri, 15 Feb 2002 23:39:25 +0530",
"msg_from": "Rajan <rajan@intercept-india.com>",
"msg_from_op": true,
"msg_subject": "heap_update_redo: no block error in pgsql 7.1"
}
] |
[
{
"msg_contents": "Entertaining parser conflicts again. We would like to be able to parse:\n\n GRANT table_privileges ON tablename TO ...\n\n GRANT func_privileges ON FUNCTION funcname(...) TO ...\n\nBoth table_privileges and func_privileges can be ALL PRIVILEGES.\n\nAttempt 1:\n\nWe set up:\n\n table_privileges = SELECT | UPDATE | ... | ALL PRIVILEGES\n\n func_privileges = EXECUTE | ALL PRIVILEGES\n\nThis yields a reduce/reduce conflict.\n\nThe easy fix is not to allow ALL PRIVILEGES in func_privileges because\nit's useless as there's only one privilege anyway. But the problem is\nbound to reappear because one day there will be some kind of object that\nrequires more than one privilege. Futhermore, it would seem nice for\nconsistency and for SQL-compliance to allow ALL PRIVILEGES for any kind of\nobject.\n\nAttempt 2:\n\nWe set up\n\n privileges = SELECT | UPDATE | ... | EXECUTE | ALL PRIVILEGES\n\n grant_table = GRANT privileges ON tablename TO ...\n\n grant_func = GRANT privileges ON FUNCTION funcname(...) TO ...\n\nand worry about sorting out the correct privileges for each object\nelsewhere.\n\nThis leads to a shift/reduce conflict at the state\n\n GRANT privileges ON FUNCTION\n\nwhere FUNCTION could be a table name or introducing an actual function\nname.\n\nSo this option looks like it will require making FUNCTION a reserved word\n(ColLabel).\n\nMore generally, this approach will require that any word introducing an\nobject in the grant statement will have to be reserved. This includes\nLANGUAGE, PROCEDURAL, possibly TYPE in the future.\n\nDoing this is consistent with SQL99 but will lead to the usual annoyances.\n\nAny comments?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 15 Feb 2002 18:15:15 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Parser conflicts in extended GRANT statement"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Entertaining parser conflicts again.\n\nI agree with sorting out the privileges lists later.\n\n> Attempt 2:\n\n> We set up\n\n> privileges = SELECT | UPDATE | ... | EXECUTE | ALL PRIVILEGES\n\n> grant_table = GRANT privileges ON tablename TO ...\n\n> grant_func = GRANT privileges ON FUNCTION funcname(...) TO ...\n\n> This leads to a shift/reduce conflict at the state\n\n> GRANT privileges ON FUNCTION\n\n> where FUNCTION could be a table name or introducing an actual function\n> name.\n\nThe trick with this sort of problem is to make sure that the parser\ndoesn't have to reduce anything until it's seen enough tokens to make\nthe result unambiguous. You are losing here because the parser has to\ndecide whether or not to reduce FUNCTION to tablename before it can\nsee any further than the TO.\n\nI think it might work to do\n\n\tgrant := GRANT privileges ON grant_target TO ...\n\n\tgrant_target := tablename\n\n\t\t\t| FUNCTION funcname(...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Feb 2002 19:29:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parser conflicts in extended GRANT statement "
}
] |
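Two statements that make the attempt-2 conflict concrete (names are placeholders; these are the forms the new grammar is meant to accept, and the first is legal only while FUNCTION remains an unreserved word usable as a table name):

    GRANT ALL PRIVILEGES ON function TO someuser;          -- "function" is a table name
    GRANT EXECUTE ON FUNCTION myfunc(integer) TO someuser; -- FUNCTION is a keyword

After reading GRANT privileges ON FUNCTION the parser cannot yet tell which form it is in; Tom's grant_target rule works because it lets the parser keep shifting until the distinguishing tokens have arrived.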
[
{
"msg_contents": "On Fri, 15 Feb 2002, Bruce Momjian wrote:\n\n> Tom Lane wrote:\n> > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > Okay, if the ecpg is the only issue, does everyone feel comfortable with a\n> > > branch going in this evening? I'll do a v7.2.1 tar-ball up Sunday night\n> > > based on the branch, with an announce going out on Monday?\n> >\n> > I don't think it's time for 7.2.1 quite yet; we should probably wait\n> > another week or two to see what comes in. I just want to branch now...\n>\n> Agreed. Let's get everything we can into 7.2.1. We normally don't push\n> out a minor this quickly unless we have a major thing to fix, which we\n> don't.\n\nRight, and we never branch until we're ready for the first minor ... so\n...\n\n\n",
"msg_date": "Fri, 15 Feb 2002 20:14:40 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Ready to branch 7.2/7.3 ?"
},
{
"msg_contents": "\"Marc G. Fournier\" wrote:\n> \n> On Fri, 15 Feb 2002, Bruce Momjian wrote:\n> \n> > Tom Lane wrote:\n> > > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > > Okay, if the ecpg is the only issue, does everyone feel comfortable with a\n> > > > branch going in this evening? I'll do a v7.2.1 tar-ball up Sunday night\n> > > > based on the branch, with an announce going out on Monday?\n> > >\n> > > I don't think it's time for 7.2.1 quite yet; we should probably wait\n> > > another week or two to see what comes in. I just want to branch now...\n> >\n> > Agreed. Let's get everything we can into 7.2.1. We normally don't push\n> > out a minor this quickly unless we have a major thing to fix, which we\n> > don't.\n> \n> Right, and we never branch until we're ready for the first minor ... so\n> ...\n\nHang on here. It sounds like we're following previous methodology to\nour (possible) slight detriment here.\n\nAt present, we need 7.3 to start, so that people can begin working on\nit.\n\nWe also need to have the 7.2 branch, so stuff for the to-be 7.2.x can be\nincluded there where appropriate.\n\nSo, lets get it branched, because that's what needs to be done here and\nnow, in this instance.\n\nThen people with a burn to do 7.3 stuff can do so, and things can be\nadded to 7.2.x where needed.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Sat, 16 Feb 2002 14:16:40 +1100",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Ready to branch 7.2/7.3 ?"
},
{
"msg_contents": "\nWe're planning on ... this is a 'sore point' that I've been arguing for\nseveral releases yet ... I still feel the branch should be made *on\nrelease date* and not several weeks later ... I think this is the first\nrelease where we will do the branch prior to the minor being released, so,\nI guess, as time goes on, we'll get closer to what I've wanted for ages :)\n\n\n\nOn Sat, 16 Feb 2002, Justin Clift wrote:\n\n> \"Marc G. Fournier\" wrote:\n> >\n> > On Fri, 15 Feb 2002, Bruce Momjian wrote:\n> >\n> > > Tom Lane wrote:\n> > > > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > > > Okay, if the ecpg is the only issue, does everyone feel comfortable with a\n> > > > > branch going in this evening? I'll do a v7.2.1 tar-ball up Sunday night\n> > > > > based on the branch, with an announce going out on Monday?\n> > > >\n> > > > I don't think it's time for 7.2.1 quite yet; we should probably wait\n> > > > another week or two to see what comes in. I just want to branch now...\n> > >\n> > > Agreed. Let's get everything we can into 7.2.1. We normally don't push\n> > > out a minor this quickly unless we have a major thing to fix, which we\n> > > don't.\n> >\n> > Right, and we never branch until we're ready for the first minor ... so\n> > ...\n>\n> Hang on here. It sounds like we're following previous methodology to\n> our (possible) slight detriment here.\n>\n> At present, we need 7.3 to start, so that people can begin working on\n> it.\n>\n> We also need to have the 7.2 branch, so stuff for the to-be 7.2.x can be\n> included there where appropriate.\n>\n> So, lets get it branched, because that's what needs to be done here and\n> now, in this instance.\n>\n> Then people with a burn to do 7.3 stuff can do so, and things can be\n> added to 7.2.x where needed.\n>\n> :-)\n>\n> Regards and best wishes,\n>\n> Justin Clift\n>\n>\n>\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n>\n> --\n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n>\n\n",
"msg_date": "Sat, 16 Feb 2002 00:56:13 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Ready to branch 7.2/7.3 ?"
},
{
"msg_contents": "I guess I don't understand the issue at hand. Why wasn't the branch\ncreated when the release was made? Why can't 7.3 development start\nimmediately? After all, once the branch is made, any 7.2 patches can be\napplied to the branch and specific patches can be tagged as such on the\nbranch. Likewise, merging them back to the truck isn't exactly hard to\ndo once you're ready to migrate back into the 7.3 development process.\n\nWhile I've not checked for any tags in CVS, I have noticed that some\ncode as changes since the 7.2 release so if you tag CVS based on\nfloat/truck, 7.2 CVS != 7.2 release. Is there a document that explain's\nthe general CM approach around here?\n\nWhat gives?\n\nSorry if this is explained elsewhere.\n\nGreg\n\n\n\n\nOn Fri, 2002-02-15 at 21:16, Justin Clift wrote:\n[snip]\n> Hang on here. It sounds like we're following previous methodology to\n> our (possible) slight detriment here.\n> \n> At present, we need 7.3 to start, so that people can begin working on\n> it.\n> \n> We also need to have the 7.2 branch, so stuff for the to-be 7.2.x can be\n> included there where appropriate.\n> \n> So, lets get it branched, because that's what needs to be done here and\n> now, in this instance.\n> \n> Then people with a burn to do 7.3 stuff can do so, and things can be\n> added to 7.2.x where needed.\n[snip]\n\n\nGreg",
"msg_date": "18 Feb 2002 13:33:02 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: Ready to branch 7.2/7.3 ?"
},
{
"msg_contents": "I guess I jumped the gun in my previous message as I hadn't read this\none yet. I completely agree with Marc on this topic. What is the logic\nfor simply not snap-shoting what really is 7.2 and move on with 7.3\nwhile allowing 7.2.x to occur on the branch for migration on a later\nday?\n\nHas this been problematic in the past?\n\nGreg\n\n\nOn Fri, 2002-02-15 at 22:56, Marc G. Fournier wrote:\n> \n> We're planning on ... this is a 'sore point' that I've been arguing for\n> several releases yet ... I still feel the branch should be made *on\n> release date* and not several weeks later ... I think this is the first\n> release where we will do the branch prior to the minor being released, so,\n> I guess, as time goes on, we'll get closer to what I've wanted for ages :)\n> \n> \n> \n> On Sat, 16 Feb 2002, Justin Clift wrote:\n> \n> > \"Marc G. Fournier\" wrote:\n> > >\n> > > On Fri, 15 Feb 2002, Bruce Momjian wrote:\n> > >\n> > > > Tom Lane wrote:\n> > > > > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > > > > Okay, if the ecpg is the only issue, does everyone feel comfortable with a\n> > > > > > branch going in this evening? I'll do a v7.2.1 tar-ball up Sunday night\n> > > > > > based on the branch, with an announce going out on Monday?\n> > > > >\n> > > > > I don't think it's time for 7.2.1 quite yet; we should probably wait\n> > > > > another week or two to see what comes in. I just want to branch now...\n> > > >\n> > > > Agreed. Let's get everything we can into 7.2.1. We normally don't push\n> > > > out a minor this quickly unless we have a major thing to fix, which we\n> > > > don't.\n> > >\n> > > Right, and we never branch until we're ready for the first minor ... so\n> > > ...\n> >\n> > Hang on here. It sounds like we're following previous methodology to\n> > our (possible) slight detriment here.\n> >\n> > At present, we need 7.3 to start, so that people can begin working on\n> > it.\n> >\n> > We also need to have the 7.2 branch, so stuff for the to-be 7.2.x can be\n> > included there where appropriate.\n> >\n> > So, lets get it branched, because that's what needs to be done here and\n> > now, in this instance.\n> >\n> > Then people with a burn to do 7.3 stuff can do so, and things can be\n> > added to 7.2.x where needed.\n> >\n> > :-)\n> >\n> > Regards and best wishes,\n> >\n> > Justin Clift\n> >\n> >\n> >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 6: Have you searched our list archives?\n> > >\n> > > http://archives.postgresql.org\n> >\n> > --\n> > \"My grandfather once told me that there are two kinds of people: those\n> > who work and those who take the credit. He told me to try to be in the\n> > first group; there was less competition there.\"\n> > - Indira Gandhi\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly",
"msg_date": "18 Feb 2002 13:38:07 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: Ready to branch 7.2/7.3 ?"
},
{
"msg_contents": "Greg Copeland <greg@CopelandConsulting.Net> writes:\n\n> I guess I jumped the gun in my previous message as I hadn't read this\n> one yet. I completely agree with Marc on this topic. What is the logic\n> for simply not snap-shoting what really is 7.2 and move on with 7.3\n> while allowing 7.2.x to occur on the branch for migration on a later\n> day?\n\nIf you don't branch immediately, then bugfixes only have to be done\nonce. So it makes some sense to let things stabilize for a little\nwhile after a general release.\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n",
"msg_date": "18 Feb 2002 15:41:11 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: Ready to branch 7.2/7.3 ?"
},
{
"msg_contents": "...\n> Has this been problematic in the past?\n\nNo problem, but for most previous releases the primary developers were\ntired after pushing for the release, and a modest code slowdown after\nrelease was welcome by all. It usually lasted a month or less and\nallowed us to focus on problem reports from the new release without\ngetting those lost in the overall noise of new development.\n\nAs others have pointed out, 7.2 dragged on longer than anyone expected,\nand maybe at least some of the \"release issues\" were fixed before\nrelease.\n\nHmm, maybe one reason that things dragged out was that we *didn't*\nengage the new developers as much as we could have, otherwise y'all\nwould be tired too ;)\n\nhth\n\n - Thomas\n",
"msg_date": "Tue, 19 Feb 2002 06:48:40 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Ready to branch 7.2/7.3 ?"
}
] |
[
{
"msg_contents": "\nselect (random( )*10)::varchar(4)\nworked in 7.1.3\n\nfails in 7.2\n\nERROR: value too long for type character varying(4)\n\n",
"msg_date": "Fri, 15 Feb 2002 18:49:33 -0600",
"msg_from": "Thomas Swan <tswan-lst@ics.olemiss.edu>",
"msg_from_op": true,
"msg_subject": "Change in casting behavior?"
},
{
"msg_contents": "Thomas Swan writes:\n\n> select (random( )*10)::varchar(4)\n> worked in 7.1.3\n>\n> fails in 7.2\n>\n> ERROR: value too long for type character varying(4)\n\nSay what you really mean:\n\nselect substring(random()*10 for 4);\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 15 Feb 2002 20:35:49 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Change in casting behavior?"
},
{
"msg_contents": "\n\n\n\n\nPeter Eisentraut wrote:\n\nThomas Swan writes:\n\nselect (random( )*10)::varchar(4)worked in 7.1.3fails in 7.2ERROR: value too long for type character varying(4)\n\nSay what you really mean:select substring(random()*10 for 4);\n\nThat may be the better way to say it. But, what I wanted to point out was\nthat the behavior had changed. Previously ::varchar(4) had worked. However,\nnow since 7.2 only ::text or varchar(n) where is sufficiently large to hold\nthe digits.\n\nAs far as I was aware the correct behavior was to truncate the text representation\nto fit the field size or the 'casted' size.\n\nI simply thought I would mention as it caught me by surpris.\n\nThomas\n\n\n\n",
"msg_date": "Fri, 15 Feb 2002 22:36:06 -0600",
"msg_from": "Thomas Swan <tswan@olemiss.edu>",
"msg_from_op": false,
"msg_subject": "Re: Change in casting behavior?"
},
{
"msg_contents": "Thomas Swan writes:\n\n> As far as I was aware the correct behavior was to truncate the text representation to fit the\n> field size or the 'casted' size.\n\nThe correct behaviour has been made even more correct in the 7.2 release.\n;-)\n\n> I simply thought I would mention as it caught me by surpris.\n\nShould have read the release notes.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sat, 16 Feb 2002 00:04:48 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Change in casting behavior?"
}
] |
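A couple of equivalent spellings of the workaround above, as a sketch (both go through an explicit text cast, which sidesteps the varchar(n) length check entirely):

    SELECT substring((random() * 10)::text for 4);
    SELECT substr((random() * 10)::text, 1, 4);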
[
{
"msg_contents": "Bruce Momjian writes:\n\n> Seems there are two uses for the CHANGES file, one is to enable\n> developers to see what has been added to the tree since they last\n> looked, best solved with cvslog,\n\nNo, a cvslog is in nature different from a list of user-visible changes.\nWe're not talking about the core group of developers here that read the\ncommitters list anyway, we're talking about people that only look once in\na while and want to check development progress, try out the latest\nfeatures, and avoid sending in patches for things that are already done.\n\n> and second, to make it easier to\n> package the final HISTORY changes, which is 100x easier to do in one\n> shot from the cvs logs than piecemeal.\n\nThere is no piecemeal. If it's all in one place you just copy it over.\nIt's impossible to beat that.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 15 Feb 2002 20:43:11 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Maintaining the list of release changes"
},
{
"msg_contents": "On Friday 15 February 2002 08:43 pm, Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> > Seems there are two uses for the CHANGES file, one is to enable\n> > developers to see what has been added to the tree since they last\n> > looked, best solved with cvslog,\n\n> No, a cvslog is in nature different from a list of user-visible changes.\n> We're not talking about the core group of developers here that read the\n\nIt almost sounds like you want something like BitKeeper, which, barring its \nweird license, seems tobe just the ticker. But there's that weird license.....\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Sat, 16 Feb 2002 00:36:45 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Maintaining the list of release changes"
}
] |
[
{
"msg_contents": "CREATE TABLE t (ts timestamp);\nCREATE\nINSERT INTO t VALUES ('2465001-01-01 00:00:00');\nINSERT 16563 1\nSELECT * from t;\npsql:/home/t-ishii/tmp/datebug.sql:4: ERROR: Unable to format timestamp with time zone; internal coding error\n\nShouldn't timestamp_in detect the invalid timestamp value when it is\ninserted?\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 16 Feb 2002 12:44:12 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "7.2 and current timestamp bug?"
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Shouldn't timestamp_in detect the invalid timestamp value when it is\n> inserted?\n\nYeah. A simpler test case is\n\nregression=# select '2465001-01-01 00:00:00'::timestamp;\nERROR: Unable to format timestamp with time zone; internal coding error\n\nIMHO the IS_VALID_JULIAN() macro ought to test for out-of-range in the\nforward direction as well as rearward. The immediate problem in this\nexample is that date2j() overflows --- silently --- producing a negative\nresult which later confuses timestamp2tm. We could limit the allowed\nrange of Julian dates to prevent that.\n\nAnother possibility is to allow date2j and j2date to pass/return double\ninstead of int, but that is a larger change and probably not very safe\nto apply for 7.2.1.\n\nThomas, your thoughts?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 16 Feb 2002 14:03:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 and current timestamp bug? "
},
{
"msg_contents": "...\n> Another possibility is to allow date2j and j2date to pass/return double\n> instead of int, but that is a larger change and probably not very safe\n> to apply for 7.2.1.\n\nPretty sure that j2date() relies on integer math behaviors to work. But\nI haven't looked at it in quite a while. And it probably isn't worth the\neffort to change things around. Limiting the date range a little works\nfor me. Lots of DBs allow only four digit years...\n\n> Thomas, your thoughts?\n\nHmm. Let's yell at Tatsuo for trying silly dates ;)\n\n - Thomas\n",
"msg_date": "Tue, 19 Feb 2002 06:56:57 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 and current timestamp bug?"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> Limiting the date range a little works\n> for me. Lots of DBs allow only four digit years...\n\nFine with me. Will you make the change?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 19 Feb 2002 09:52:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 and current timestamp bug? "
}
] |
[
{
"msg_contents": "On Fri, 15 Feb 2002, Bruce Momjian wrote:\n\n> Marc G. Fournier wrote:\n> > On Fri, 15 Feb 2002, Bruce Momjian wrote:\n> >\n> > > Tom Lane wrote:\n> > > > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > > > Okay, if the ecpg is the only issue, does everyone feel comfortable with a\n> > > > > branch going in this evening? I'll do a v7.2.1 tar-ball up Sunday night\n> > > > > based on the branch, with an announce going out on Monday?\n> > > >\n> > > > I don't think it's time for 7.2.1 quite yet; we should probably wait\n> > > > another week or two to see what comes in. I just want to branch now...\n> > >\n> > > Agreed. Let's get everything we can into 7.2.1. We normally don't push\n> > > out a minor this quickly unless we have a major thing to fix, which we\n> > > don't.\n> >\n> > Right, and we never branch until we're ready for the first minor ... so\n> > ...\n>\n> Were you asking a question?\n>\n> We can branch before the minor, right? Backpatching into the first\n> minor isn't a bigger deal than backpatching into later minors, and we\n> aren't patching much of anything now anyway.\n\n*rofl* And you've been arguing *against* this for how many releases now,\nand I've tried to get you guys to go along with this for how many\nreleases? :)\n\n\n",
"msg_date": "Sat, 16 Feb 2002 00:57:03 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Ready to branch 7.2/7.3 ?"
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n>> We can branch before the minor, right? Backpatching into the first\n>> minor isn't a bigger deal than backpatching into later minors, and we\n>> aren't patching much of anything now anyway.\n\n> *rofl* And you've been arguing *against* this for how many releases now,\n> and I've tried to get you guys to go along with this for how many\n> releases? :)\n\nHey guys: there is no black or white on this. It's a tradeoff ---\ndelaying next-version development versus effort wasted to do double\npatching.\n\nIn this particular cycle, I'm for an early branch because we don't seem\nto have a lot of bugs coming in, and we've got a lot of development work\nthat people are eager to get started on (or even to apply already-done\npatches, in some cases). Both of these facts have doubtless got a lot\nto do with the horrendously long release cycle for 7.2 --- which is\nsomething I am *not* happy about.\n\nI think the fact that we want to branch so soon after release is an\nindication that we delayed the release too long. 7.2 should have been\nout months ago.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 16 Feb 2002 00:09:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Ready to branch 7.2/7.3 ? "
}
] |
[
{
"msg_contents": "The attached patch improves the command parameter checking of pg_ctl.\n\nAt present, there is nothing to check that the parameter given with a\nparameter-taking option is actually valid. For example, -l can be given\nwithout a following logfile name; on a strict POSIX shell such as ash,\nyou will get a subsequent failure because of too many shifts, but bash\nwill let it pass without showing any error. The patch checks that each\nparameter is not empty and is not another option.\n\nA consequence of this change is that no command-line parameter can begin\nwith \"-\" (except for the parameter to -o); this seems a reasonable\nrestriction.\n\nFor consistency and clarity, I have also changed every occurrence of\n\"shift ... var=$1\" to \"var=$2 ... shift\".\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"But as many as received him, to them gave he power to \n become the sons of God, even to them that believe on \n his name\" John 1:12",
"msg_date": "16 Feb 2002 11:29:11 +0000",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": true,
"msg_subject": "pg_ctl - tighten command parameter checking"
},
{
"msg_contents": "Oliver Elphick writes:\n\n> The attached patch improves the command parameter checking of pg_ctl.\n>\n> At present, there is nothing to check that the parameter given with a\n> parameter-taking option is actually valid. For example, -l can be given\n> without a following logfile name; on a strict POSIX shell such as ash,\n> you will get a subsequent failure because of too many shifts, but bash\n> will let it pass without showing any error. The patch checks that each\n> parameter is not empty and is not another option.\n\nIsn't this problem present in all of our scripts?\n\nBtw., you shouldn't use \"cut\" in portable scripts. You could probably use\n\"case\" to do the matching you want.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 18 Feb 2002 11:35:36 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl - tighten command parameter checking"
},
{
"msg_contents": "On Mon, 2002-02-18 at 16:35, Peter Eisentraut wrote:\n> Oliver Elphick writes:\n> \n> > The attached patch improves the command parameter checking of pg_ctl.\n> >\n> > At present, there is nothing to check that the parameter given with a\n> > parameter-taking option is actually valid. For example, -l can be given\n> > without a following logfile name; on a strict POSIX shell such as ash,\n> > you will get a subsequent failure because of too many shifts, but bash\n> > will let it pass without showing any error. The patch checks that each\n> > parameter is not empty and is not another option.\n> \n> Isn't this problem present in all of our scripts?\n\nPossibly, but this is the one where I had problems:-) I'll look at\nothers when I get some time.\n\n> Btw., you shouldn't use \"cut\" in portable scripts. You could probably use\n> \"case\" to do the matching you want.\n\nWhat kind of an inadequate environment doesn't have cut?\n\nOK. I'll redo it using case...esac.\n\nNB. I saw a comment in this script about dirname's being non-portable.\nBut it uses basename. Is that portable?\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"All we like sheep have gone astray; we have turned \n every one to his own way; and the LORD hath laid on \n him the iniquity of us all.\" Isaiah 53:6 \n\n",
"msg_date": "18 Feb 2002 22:52:03 +0000",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: pg_ctl - tighten command parameter checking"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nOliver Elphick wrote:\n> The attached patch improves the command parameter checking of pg_ctl.\n> \n> At present, there is nothing to check that the parameter given with a\n> parameter-taking option is actually valid. For example, -l can be given\n> without a following logfile name; on a strict POSIX shell such as ash,\n> you will get a subsequent failure because of too many shifts, but bash\n> will let it pass without showing any error. The patch checks that each\n> parameter is not empty and is not another option.\n> \n> A consequence of this change is that no command-line parameter can begin\n> with \"-\" (except for the parameter to -o); this seems a reasonable\n> restriction.\n> \n> For consistency and clarity, I have also changed every occurrence of\n> \"shift ... var=$1\" to \"var=$2 ... shift\".\n> \n> -- \n> Oliver Elphick Oliver.Elphick@lfix.co.uk\n> Isle of Wight http://www.lfix.co.uk/oliver\n> GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n> \n> \"But as many as received him, to them gave he power to \n> become the sons of God, even to them that believe on \n> his name\" John 1:12 \n\n[ text/x-patch is unsupported, treating like TEXT/PLAIN ]\n\n> *** postgresql-7.2.orig/src/bin/pg_ctl/pg_ctl.sh\tSat Sep 29 04:09:32 2001\n> --- postgresql-7.2/src/bin/pg_ctl/pg_ctl.sh\tSat Feb 16 10:50:36 2002\n> ***************\n> *** 127,156 ****\n> \t exit 0\n> \t ;;\n> \t-D)\n> - \t shift\n> \t # pass environment into new postmaster\n> ! \t PGDATA=\"$1\"\n> \t export PGDATA\n> \t ;;\n> \t-l)\n> \t logfile=\"$2\"\n> \t shift;;\n> \t-l*)\n> \t logfile=`echo \"$1\" | sed 's/^-l//'`\n> \t ;;\n> \t-m)\n> \t shutdown_mode=\"$2\"\n> \t shift;;\n> \t-m*)\n> \t shutdown_mode=`echo \"$1\" | sed 's/^-m//'`\n> \t ;;\n> \t-o)\n> \t shift\n> - \t POSTOPTS=\"$1\"\n> \t ;;\n> \t-p)\n> \t shift\n> - \t po_path=\"$1\"\n> \t ;;\n> \t-s)\n> \t silence_echo=:\n> --- 127,197 ----\n> \t exit 0\n> \t ;;\n> \t-D)\n> \t # pass environment into new postmaster\n> ! \t PGDATA=\"$2\"\n> ! \t if [ -z \"$PGDATA\" -o `echo x$PGDATA | cut -c1-2` = \"x-\" ]\n> ! \t then\n> ! \t \techo \"$CMDNAME: option '-D' specified without a data directory\"\n> ! \t\texit 1\n> ! 
\t fi\n> \t export PGDATA\n> + \t shift\n> \t ;;\n> \t-l)\n> \t logfile=\"$2\"\n> + \t if [ -z \"$logfile\" -o `echo x$logfile | cut -c1-2` = \"x-\" ]\n> + \t then\n> + \t \techo \"$CMDNAME: option '-l' specified without a logfile\"\n> + \t\texit 1\n> + \t fi\n> \t shift;;\n> \t-l*)\n> \t logfile=`echo \"$1\" | sed 's/^-l//'`\n> + \t if [ -z \"$logfile\" -o `echo x$logfile | cut -c1-2` = \"x-\" ]\n> + \t then\n> + \t \techo \"$CMDNAME: option '-l' specified without a logfile\"\n> + \t\texit 1\n> + \t fi\n> \t ;;\n> \t-m)\n> \t shutdown_mode=\"$2\"\n> + \t if [ -z \"$shutdown_mode\" -o `echo x$shutdown_mode | cut -c1-2` = \"x-\" ]\n> + \t then\n> + \t \techo \"$CMDNAME: option '-m' specified without a shutdown mode\"\n> + \t\texit 1\n> + \t fi\n> \t shift;;\n> \t-m*)\n> \t shutdown_mode=`echo \"$1\" | sed 's/^-m//'`\n> + \t if [ -z \"$shutdown_mode\" -o `echo x$shutdown_mode | cut -c1-2` = \"x-\" ]\n> + \t then\n> + \t \techo \"$CMDNAME: option '-m' specified without a shutdown mode\"\n> + \t\texit 1\n> + \t fi\n> \t ;;\n> \t-o)\n> + \t POSTOPTS=\"$2\"\n> + \t if [ -z \"$POSTOPTS\" ]\n> + \t then\n> + \t \techo \"$CMDNAME: option '-o' specified without any passed options\"\n> + \t\texit 1\n> + \t fi\n> + \t if [ `echo x$POSTOPTS | cut -c1-2` != x- ]\n> + \t then\n> + \t \techo \"$CMDNAME: option -o must be followed by one or more further options\n> + to pass to the postmaster\"\n> + \t\texit 1\n> + \tfi\n> \t shift\n> \t ;;\n> \t-p)\n> + \t po_path=\"$2\"\n> + \t if [ -z \"$po_path\" -o `echo x$po_path | cut -c1-2` = \"x-\" ]\n> + \t then\n> + \t \techo \"$CMDNAME: option '-p' specified without a path\"\n> + \t\texit 1\n> + \t fi\n> \t shift\n> \t ;;\n> \t-s)\n> \t silence_echo=:\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Feb 2002 21:32:33 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl - tighten command parameter checking"
},
{
"msg_contents": "\nNeed to use case before patch application.\n\n---------------------------------------------------------------------------\n\nPeter Eisentraut wrote:\n> Oliver Elphick writes:\n> \n> > The attached patch improves the command parameter checking of pg_ctl.\n> >\n> > At present, there is nothing to check that the parameter given with a\n> > parameter-taking option is actually valid. For example, -l can be given\n> > without a following logfile name; on a strict POSIX shell such as ash,\n> > you will get a subsequent failure because of too many shifts, but bash\n> > will let it pass without showing any error. The patch checks that each\n> > parameter is not empty and is not another option.\n> \n> Isn't this problem present in all of our scripts?\n> \n> Btw., you shouldn't use \"cut\" in portable scripts. You could probably use\n> \"case\" to do the matching you want.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Feb 2002 21:33:27 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl - tighten command parameter checking"
},
{
"msg_contents": "\nOliver, I am going to reject this. We give them the syntax for the\nparams. I don't see a need to check for leading dash to see if they\nforgot a param. I would like to see a more general solution that uses\ngetopt or something more robust, but moving all that checking to each\nparam just seems like a waste.\n\n---------------------------------------------------------------------------\n\nOliver Elphick wrote:\n> The attached patch improves the command parameter checking of pg_ctl.\n> \n> At present, there is nothing to check that the parameter given with a\n> parameter-taking option is actually valid. For example, -l can be given\n> without a following logfile name; on a strict POSIX shell such as ash,\n> you will get a subsequent failure because of too many shifts, but bash\n> will let it pass without showing any error. The patch checks that each\n> parameter is not empty and is not another option.\n> \n> A consequence of this change is that no command-line parameter can begin\n> with \"-\" (except for the parameter to -o); this seems a reasonable\n> restriction.\n> \n> For consistency and clarity, I have also changed every occurrence of\n> \"shift ... var=$1\" to \"var=$2 ... shift\".\n> \n> -- \n> Oliver Elphick Oliver.Elphick@lfix.co.uk\n> Isle of Wight http://www.lfix.co.uk/oliver\n> GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n> \n> \"But as many as received him, to them gave he power to \n> become the sons of God, even to them that believe on \n> his name\" John 1:12 \n\n[ text/x-patch is unsupported, treating like TEXT/PLAIN ]\n\n> *** postgresql-7.2.orig/src/bin/pg_ctl/pg_ctl.sh\tSat Sep 29 04:09:32 2001\n> --- postgresql-7.2/src/bin/pg_ctl/pg_ctl.sh\tSat Feb 16 10:50:36 2002\n> ***************\n> *** 127,156 ****\n> \t exit 0\n> \t ;;\n> \t-D)\n> - \t shift\n> \t # pass environment into new postmaster\n> ! \t PGDATA=\"$1\"\n> \t export PGDATA\n> \t ;;\n> \t-l)\n> \t logfile=\"$2\"\n> \t shift;;\n> \t-l*)\n> \t logfile=`echo \"$1\" | sed 's/^-l//'`\n> \t ;;\n> \t-m)\n> \t shutdown_mode=\"$2\"\n> \t shift;;\n> \t-m*)\n> \t shutdown_mode=`echo \"$1\" | sed 's/^-m//'`\n> \t ;;\n> \t-o)\n> \t shift\n> - \t POSTOPTS=\"$1\"\n> \t ;;\n> \t-p)\n> \t shift\n> - \t po_path=\"$1\"\n> \t ;;\n> \t-s)\n> \t silence_echo=:\n> --- 127,197 ----\n> \t exit 0\n> \t ;;\n> \t-D)\n> \t # pass environment into new postmaster\n> ! \t PGDATA=\"$2\"\n> ! \t if [ -z \"$PGDATA\" -o `echo x$PGDATA | cut -c1-2` = \"x-\" ]\n> ! \t then\n> ! \t \techo \"$CMDNAME: option '-D' specified without a data directory\"\n> ! \t\texit 1\n> ! 
\t fi\n> \t export PGDATA\n> + \t shift\n> \t ;;\n> \t-l)\n> \t logfile=\"$2\"\n> + \t if [ -z \"$logfile\" -o `echo x$logfile | cut -c1-2` = \"x-\" ]\n> + \t then\n> + \t \techo \"$CMDNAME: option '-l' specified without a logfile\"\n> + \t\texit 1\n> + \t fi\n> \t shift;;\n> \t-l*)\n> \t logfile=`echo \"$1\" | sed 's/^-l//'`\n> + \t if [ -z \"$logfile\" -o `echo x$logfile | cut -c1-2` = \"x-\" ]\n> + \t then\n> + \t \techo \"$CMDNAME: option '-l' specified without a logfile\"\n> + \t\texit 1\n> + \t fi\n> \t ;;\n> \t-m)\n> \t shutdown_mode=\"$2\"\n> + \t if [ -z \"$shutdown_mode\" -o `echo x$shutdown_mode | cut -c1-2` = \"x-\" ]\n> + \t then\n> + \t \techo \"$CMDNAME: option '-m' specified without a shutdown mode\"\n> + \t\texit 1\n> + \t fi\n> \t shift;;\n> \t-m*)\n> \t shutdown_mode=`echo \"$1\" | sed 's/^-m//'`\n> + \t if [ -z \"$shutdown_mode\" -o `echo x$shutdown_mode | cut -c1-2` = \"x-\" ]\n> + \t then\n> + \t \techo \"$CMDNAME: option '-m' specified without a shutdown mode\"\n> + \t\texit 1\n> + \t fi\n> \t ;;\n> \t-o)\n> + \t POSTOPTS=\"$2\"\n> + \t if [ -z \"$POSTOPTS\" ]\n> + \t then\n> + \t \techo \"$CMDNAME: option '-o' specified without any passed options\"\n> + \t\texit 1\n> + \t fi\n> + \t if [ `echo x$POSTOPTS | cut -c1-2` != x- ]\n> + \t then\n> + \t \techo \"$CMDNAME: option -o must be followed by one or more further options\n> + to pass to the postmaster\"\n> + \t\texit 1\n> + \tfi\n> \t shift\n> \t ;;\n> \t-p)\n> + \t po_path=\"$2\"\n> + \t if [ -z \"$po_path\" -o `echo x$po_path | cut -c1-2` = \"x-\" ]\n> + \t then\n> + \t \techo \"$CMDNAME: option '-p' specified without a path\"\n> + \t\texit 1\n> + \t fi\n> \t shift\n> \t ;;\n> \t-s)\n> \t silence_echo=:\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 23 Feb 2002 16:31:45 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl - tighten command parameter checking"
},
{
"msg_contents": "Oliver Elphick wrote:\n> On Sat, 2002-02-23 at 21:31, Bruce Momjian wrote:\n> > \n> > Oliver, I am going to reject this. We give them the syntax for the\n> > params. I don't see a need to check for leading dash to see if they\n> > forgot a param. I would like to see a more general solution that uses\n> > getopt or something more robust, but moving all that checking to each\n> > param just seems like a waste.\n> \n> I would certainly prefer to use getopt, but is that portable? Peter\n> wants me to use case..esac instead of cut; I would have thought getopt\n> was a lot less portable.\n\nNo, it isn't. The problem is we don't have a portable solution _and_ we\ndon't want to throw checks all over the place. I realize this is a\nnon-solution, but I guess I don't consider is a big problem.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 23 Feb 2002 16:50:08 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl - tighten command parameter checking"
},
{
"msg_contents": "Oliver Elphick wrote:\n> On Sat, 2002-02-23 at 21:31, Bruce Momjian wrote:\n> > \n> > Oliver, I am going to reject this. We give them the syntax for the\n> > params. I don't see a need to check for leading dash to see if they\n> > forgot a param. I would like to see a more general solution that uses\n> > getopt or something more robust, but moving all that checking to each\n> > param just seems like a waste.\n> \n> I would certainly prefer to use getopt, but is that portable? Peter\n> wants me to use case..esac instead of cut; I would have thought getopt\n> was a lot less portable.\n\nAdded to TODO, at least:\n\n* Add checks for missing parameters to shell script, to prevent \n over-shifting\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 23 Feb 2002 16:51:46 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl - tighten command parameter checking"
},
{
"msg_contents": "On Sat, 2002-02-23 at 21:31, Bruce Momjian wrote:\n> \n> Oliver, I am going to reject this. We give them the syntax for the\n> params. I don't see a need to check for leading dash to see if they\n> forgot a param. I would like to see a more general solution that uses\n> getopt or something more robust, but moving all that checking to each\n> param just seems like a waste.\n\nI would certainly prefer to use getopt, but is that portable? Peter\nwants me to use case..esac instead of cut; I would have thought getopt\nwas a lot less portable.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"All scripture is given by inspiration of God, and is \n profitable for doctrine, for reproof, for correction, \n for instruction in righteousness; That the man of God \n may be perfect, thoroughly furnished unto all good \n works.\" II Timothy 3:16,17 \n\n",
"msg_date": "23 Feb 2002 21:52:44 +0000",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: pg_ctl - tighten command parameter checking"
}
] |
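For reference, the case-based test Peter suggests (in place of cut) might look like the following sketch for the -l branch; it is illustrative, not part of Oliver's posted patch, though it reuses the script's existing $CMDNAME variable:

    -l)
        logfile="$2"
        # reject an empty or option-like argument without forking cut
        case "$logfile" in
            ""|-*)
                echo "$CMDNAME: option '-l' specified without a logfile" 1>&2
                exit 1
                ;;
        esac
        shift;;

The same pattern drops into each option branch that the patch above touches.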
[
{
"msg_contents": "Hello\n\nDo you find Spanish information, manuals & books online, about\nPostgreSQL ?\n\nPlease, check http://www.mundo.terra.cl/\n\nIt is new, and contains multiply information files & links in\nSpanish about of PostgreSQL\n\nIt is my effort for all Spanish spoken users around the world !!\n\nThanks,\n\nAlejandro Rivadeneira\n(PostgreSQL Information)\nhttp://www.mundo.terra.cl/\n",
"msg_date": "16 Feb 2002 08:33:09 -0800",
"msg_from": "mundo@ctcinternet.cl (Alejandro Rivadeneira)",
"msg_from_op": true,
"msg_subject": "PostgreSQL Spanish manuals , files & links"
}
] |
[
{
"msg_contents": "I've been working away at simple domain support. The simple stuff\nworks (no constraints, simple data type).\n\nCREATE DOMAIN domainname Typename;\n\nI have a few questions about how to proceed.\n\nSo.. Starting with (more) complex datatypes. varchar(), numeric(),\nand the like. There is currently no column in pg_type which stores\ninformation similar to atttypmod. So, I'd like to create one ->\npg_type.typmod. The idea is that this value will be copied across to\npg_attribute with other type information if it is not null. Since\npg_type.typeprtlen isn't used (according to docs) would it be safe to\nsimply rename and resize (int4) this column?\n\nThe second part of this is to apply constraints to the domain. That\nwill require an equivelent to pg_trigger but linked to pg_type, say\npg_domaintrigger. On column creation in a table I'm considering\ncopying the triggers from pg_domaintrigger to pg_trigger for the\ncolumn, and adding a column to pg_trigger which marks them as\noriginating from the domain. Deletes of a trigger in the domain can\ncascade to pg_trigger -- as can updates, etc. Triggers in pg_trigger\nwith domtrgid NOT NULL would not be (directly) erasable by the ALTER\nTABLE DROP CONSTRAINT stuff.\n\nGiven the above, ALTER DOMAIN may be complex. ALTER DOMAIN ADD\nCONSTRAINT may touch several hundred items -- but I wouldn't expect\nthis to be a frequent action.\n\nAnyway, patch attached (hopefully it works, I've modified my source to\nbe in a broken state then). It's against 7.2-HEAD.\n\nWith any luck I'm on the right track. Thus far its making a good\nweekend project -- but I suspect constraints are going to take alot\nlonger than that.\n\nSomething of great fun however is DROP TYPE text. Lots of neat stuff\nhappens when you do that. I want to add a RESTRICT & CASCADE\nstructure to DROP TYPE as well. Cascade may be disabled though.\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.",
"msg_date": "Sat, 16 Feb 2002 18:53:53 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Create Domain...."
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> and the like. There is currently no column in pg_type which stores\n> information similar to atttypmod. So, I'd like to create one ->\n> pg_type.typmod. The idea is that this value will be copied across to\n> pg_attribute with other type information if it is not null. Since\n> pg_type.typeprtlen isn't used (according to docs) would it be safe to\n> simply rename and resize (int4) this column?\n\nMake another column. It'll be good for you ;-) ... and you have to\nlearn how anyway, if you intend to finish out this project.\n\n> Something of great fun however is DROP TYPE text.\n\nYeah, there's not really any support presently for dealing with\ndependencies on dropped objects. IMHO it would be a mistake to solve\nthat just in the context of any one kind of object (such as types);\nit's a generic issue.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 16 Feb 2002 20:32:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Create Domain.... "
}
] |
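To make the proposal concrete, here is roughly what the stages Rod describes would allow, with domain names invented for illustration (only the first, unconstrained form works in the posted patch):

    -- works in the posted patch: a domain over a simple data type
    CREATE DOMAIN degrees float8;

    -- the pg_type.typmod question above is about supporting this form,
    -- where the length modifier must be copied into pg_attribute
    CREATE DOMAIN zipcode varchar(10);

    -- the eventual goal, once pg_domaintrigger-style constraints exist
    CREATE DOMAIN posint int4 CHECK (VALUE > 0);
    CREATE TABLE measurements (reading posint);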
[
{
"msg_contents": "Hi,\n\nI have done what exactly explains the documentation for the migration from my databases in 7.1.3 to 7.2 ...\n\nBut during the importation in Postgresql v7.2 of the data from the pg_dumpall ...\nI get sometime this message :\n\npsql:backup:24473309: ERROR: DefineIndex: index function must be marked iscachable\n\nbackup is my pg_dumpall file ...\nWhy this message ?\nMay I have lost index ? or data ?\n\nCould you explain me ?\n\nRegards,\n-- \nHerv�\n",
"msg_date": "Sun, 17 Feb 2002 01:15:40 +0100",
"msg_from": "=?iso-8859-1?B?SGVydukgUGllZHZhY2hl?= <herve@elma.fr>",
"msg_from_op": true,
"msg_subject": "Trouble with pg_dumpall import with 7.2 "
},
{
"msg_contents": "On Sun, 17 Feb 2002 01:15:40 +0100\nHerv\u0001Piedvache <herve@elma.fr> wrote:\n\n> I have done what exactly explains the documentation for the migration from my databases in 7.1.3 to 7.2 ...\n> \n> But during the importation in Postgresql v7.2 of the data from the pg_dumpall ...\n> I get sometime this message :\n> \n> psql:backup:24473309: ERROR: DefineIndex: index function must be marked iscachable\n> \n> backup is my pg_dumpall file ...\n> Why this message ?\n> May I have lost index ? or data ?\n> \n> Could you explain me ?\n\n\nHave you created indices on your functions defined without \"with (iscachable)\"\nin 7.1.3 ? If so, an error in 7.2 (see below) will occur while you're \nupgrading PG by pg_dumpall, etc. I would think you need to recreate\nindices on your function redefined with it before dumping. Instead, \nit seems to be OK that you redefine functions and create indices after\nrestoring as well. \n\n ERROR: DefineIndex: index function must be marked iscachable\n\n\nRegards,\nMasaru Sugawara\n\n",
"msg_date": "Sun, 17 Feb 2002 18:06:25 +0900",
"msg_from": "Masaru Sugawara <rk73@echna.ne.jp>",
"msg_from_op": false,
"msg_subject": "Re: Trouble with pg_dumpall import with 7.2"
},
{
"msg_contents": "Hi Masaru,\n\nOK it's a bug of postgreSQL 7.2 ??\n\nI can apply an index on the field datelog where this field is a\ntimestamp like :\n\ncreate index ix_datelog_date on datelog (date(datelog);\n\nERROR: DefineIndex: index function must be marked iscachable\n\nOr could you explain me how to set date() iscachable ?\n\nregards,\n\nMasaru Sugawara a �crit :\n> \n> On Sun, 17 Feb 2002 01:15:40 +0100\n> Herv\u0001Piedvache <herve@elma.fr> wrote:\n> \n> > I have done what exactly explains the documentation for the migration from my databases in 7.1.3 to 7.2 ...\n> >\n> > But during the importation in Postgresql v7.2 of the data from the pg_dumpall ...\n> > I get sometime this message :\n> >\n> > psql:backup:24473309: ERROR: DefineIndex: index function must be marked iscachable\n> >\n> > backup is my pg_dumpall file ...\n> > Why this message ?\n> > May I have lost index ? or data ?\n> >\n> > Could you explain me ?\n> \n> Have you created indices on your functions defined without \"with (iscachable)\"\n> in 7.1.3 ? If so, an error in 7.2 (see below) will occur while you're\n> upgrading PG by pg_dumpall, etc. I would think you need to recreate\n> indices on your function redefined with it before dumping. Instead,\n> it seems to be OK that you redefine functions and create indices after\n> restoring as well.\n> \n> ERROR: DefineIndex: index function must be marked iscachable\n> \n> Regards,\n> Masaru Sugawara\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \nHerv� Piedvache\n\nElma Ingenierie Informatique\n6, rue du Faubourg Saint-Honor�\nF-75008 - Paris - France \nhttp://www.elma.fr\nTel: +33-1-44949901\nFax: +33-1-44949902 \nEmail: herve@elma.fr\n",
"msg_date": "Wed, 20 Feb 2002 11:20:36 +0100",
"msg_from": "=?iso-8859-1?Q?Herv=E9?= Piedvache <herve@elma.fr>",
"msg_from_op": false,
"msg_subject": "Re: Trouble with pg_dumpall import with 7.2"
},
{
"msg_contents": "=?iso-8859-1?Q?Herv=E9?= Piedvache <herve@elma.fr> writes:\n> OK it's a bug of postgreSQL 7.2 ??\n> I can apply an index on the field datelog where this field is a\n> timestamp like :\n> create index ix_datelog_date on datelog (date(datelog);\n> ERROR: DefineIndex: index function must be marked iscachable\n\nIt's not a bug, and it'd not be wise of you to override the decision to\nmark date(timestamp) noncachable. The reason it's marked that way is\nthat timestamp-to-date conversion depends on the TimeZone setting,\nnot only on the input argument.\n\nIf you make a functional index as above, it will misbehave as soon as\nyou have users with different timezone settings.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Feb 2002 09:59:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Trouble with pg_dumpall import with 7.2 "
},
{
"msg_contents": "On Wed, 20 Feb 2002 11:20:36 +0100\nHerv\u0001Piedvache <herve@elma.fr> wrote:\n\n> OK it's a bug of postgreSQL 7.2 ??\n> \n> I can apply an index on the field datelog where this field is a\n> timestamp like :\n> \n> create index ix_datelog_date on datelog (date(datelog);\n> \n> ERROR: DefineIndex: index function must be marked iscachable\n> \n> Or could you explain me how to set date() iscachable ?\n\n\nUm, date() function... That sounds like an unavoidable error.\n\nRecently Brent has replied to this sort of subjects on the mailing list\nof sql, and Tom has implied to us that unexpected results might be caused\nby depending on the timezone setting. I would think that you're able to\ncreate an index easily like (2), but need to inspect the results cautiously.\n\n\n(1)create function mydate(timestamp) returns date as '\n select date($1);\n ' language 'sql' with (iscachable);\n\n(2)create index ix_datelog_date on datelog(mydate(datelog));\n\n(3)e.g.:\n select count(*) from datelog\n where mydate(datelog) >= '2002-2-1' and mydate(datelog) <= '2002-2-5' ;\n\n instead of:\n select count(*) from datelog\n where date(datelog) >= '2002-2-1' and date(datelog) <= '2002-2-5' ;\n\n\n\n\n>On Fri, 15 Feb 2002 11:00:11 -0500\n>Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> \"Nick Fankhauser\" <nickf@ontko.com> writes:\n> > staging=# create index event_day on\n> > event(date_trunc('day',event_date_time));\n> > ERROR: parser: parse error at or near \"'\"\n> \n> You missed the fine print that says the function must be applied to\n> table column name(s) only. No constants, no expressions.\n> \n> You can get around this limitation by defining a custom function that\n> fills in whatever extra baggage you need.\n> \n> My own first thought was that you could just use conversion to type\n> date, but that falls down. Not for syntax reasons though:\n> \n> regression=# create table foo (event_date_time timestamp);\n> CREATE\n> regression=# create index event_day on foo (date(event_date_time));\n> ERROR: DefineIndex: index function must be marked iscachable\n> \n> This raises a subtle point that you'd better think about before you go\n> too far in this direction: truncating a timestamp to date is not a very\n> well-defined operation, because it depends on the timezone setting.\n> Indexes on functions whose values might vary depend on who's executing\n> them are a recipe for disaster --- the index is almost certainly going\n> to wind up corrupted (out of order).\n> \n> \t\t\tregards, tom lane\n\n\nRegards,\nMasaru Sugawara\n\n",
"msg_date": "Thu, 21 Feb 2002 02:02:46 +0900",
"msg_from": "Masaru Sugawara <rk73@echna.ne.jp>",
"msg_from_op": false,
"msg_subject": "Re: Trouble with pg_dumpall import with 7.2"
},
{
"msg_contents": "Couldn't you simply index on the cast of the timestamp to date?\n\ncreate index ix_test on testtable (cast(things as date));\nERROR: parser: parse error at or near \"cast\"\n\nEvidently not...\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Masaru Sugawara\" <rk73@echna.ne.jp>\nTo: <herve@elma.fr>\nCc: <pgsql-hackers@postgresql.org>\nSent: Wednesday, February 20, 2002 12:02 PM\nSubject: Re: [HACKERS] Trouble with pg_dumpall import with 7.2\n\n\n> On Wed, 20 Feb 2002 11:20:36 +0100\n> Herv\u0001Piedvache <herve@elma.fr> wrote:\n>\n> > OK it's a bug of postgreSQL 7.2 ??\n> >\n> > I can apply an index on the field datelog where this field is a\n> > timestamp like :\n> >\n> > create index ix_datelog_date on datelog (date(datelog);\n> >\n> > ERROR: DefineIndex: index function must be marked iscachable\n> >\n> > Or could you explain me how to set date() iscachable ?\n>\n>\n> Um, date() function... That sounds like an unavoidable error.\n>\n> Recently Brent has replied to this sort of subjects on the mailing\nlist\n> of sql, and Tom has implied to us that unexpected results might be\ncaused\n> by depending on the timezone setting. I would think that you're\nable to\n> create an index easily like (2), but need to inspect the results\ncautiously.\n>\n>\n> (1)create function mydate(timestamp) returns date as '\n> select date($1);\n> ' language 'sql' with (iscachable);\n>\n> (2)create index ix_datelog_date on datelog(mydate(datelog));\n>\n> (3)e.g.:\n> select count(*) from datelog\n> where mydate(datelog) >= '2002-2-1' and mydate(datelog) <=\n'2002-2-5' ;\n>\n> instead of:\n> select count(*) from datelog\n> where date(datelog) >= '2002-2-1' and date(datelog) <= '2002-2-5'\n;\n>\n>\n>\n>\n> >On Fri, 15 Feb 2002 11:00:11 -0500\n> >Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > \"Nick Fankhauser\" <nickf@ontko.com> writes:\n> > > staging=# create index event_day on\n> > > event(date_trunc('day',event_date_time));\n> > > ERROR: parser: parse error at or near \"'\"\n> >\n> > You missed the fine print that says the function must be applied\nto\n> > table column name(s) only. No constants, no expressions.\n> >\n> > You can get around this limitation by defining a custom function\nthat\n> > fills in whatever extra baggage you need.\n> >\n> > My own first thought was that you could just use conversion to\ntype\n> > date, but that falls down. Not for syntax reasons though:\n> >\n> > regression=# create table foo (event_date_time timestamp);\n> > CREATE\n> > regression=# create index event_day on foo\n(date(event_date_time));\n> > ERROR: DefineIndex: index function must be marked iscachable\n> >\n> > This raises a subtle point that you'd better think about before\nyou go\n> > too far in this direction: truncating a timestamp to date is not a\nvery\n> > well-defined operation, because it depends on the timezone\nsetting.\n> > Indexes on functions whose values might vary depend on who's\nexecuting\n> > them are a recipe for disaster --- the index is almost certainly\ngoing\n> > to wind up corrupted (out of order).\n> >\n> > regards, tom lane\n>\n>\n> Regards,\n> Masaru Sugawara\n>\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n",
"msg_date": "Wed, 20 Feb 2002 13:49:21 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Trouble with pg_dumpall import with 7.2"
},
{
"msg_contents": "Masaru Sugawara <rk73@echna.ne.jp> writes:\n> (1)create function mydate(timestamp) returns date as '\n> select date($1);\n> ' language 'sql' with (iscachable);\n\nIf you do it that way then you are simply opening yourself up to exactly\nthe error that the noncachability check is trying to save you from\nmaking.\n\nYou could probably do it safely by hard-wiring the time zone to be used\ninto the function. I think something like this would work:\n\ncreate function mydate(timestamp with time zone) returns date as '\n select date($1 AT TIME ZONE ''EST'');\n' language 'sql' with (iscachable);\n\n(substitute time zone of your choice, of course).\n\nBTW, if the table is at all large then you'd probably be better off to\nuse a plpgsql function instead. SQL-language functions are rather\ninefficient IIRC.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Feb 2002 14:06:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Trouble with pg_dumpall import with 7.2 "
},
{
"msg_contents": "As always, wisdom personified by Tom Lane said :\n\n> > regression=# create table foo (event_date_time timestamp);\n> > CREATE\n> > regression=# create index event_day on foo (date(event_date_time));\n> > ERROR: DefineIndex: index function must be marked iscachable\n> >\n> > This raises a subtle point that you'd better think about before you go\n> > too far in this direction: truncating a timestamp to date is not a very\n> > well-defined operation, because it depends on the timezone setting.\n> > Indexes on functions whose values might vary depend on who's executing\n> > them are a recipe for disaster --- the index is almost certainly going\n> > to wind up corrupted (out of order).\n\n\nTom, I clearly understand the problem but it is your developer's (I\nshould say \"your designer's\") POV.\n\nMost of us, users of PG (app developers I mean) never have to deal\nwith timezones and that's where we conflict : we can't use (I mean as\nefficiently as could be) date indexes because of timezones which WE\ndon't care about (at least in, say, 90% of the apps that use DB).\n\nCan't we find a middle point ? I mean keep the current restrictions\nregarding timezones but be able to create, say \"noTZdate\" field types\nthat would be cachable ?\n\nToday we have only the options of :\n\n- using no date index\n- use inefficient date indexes\n- convert dates to integers (eg: Julian) and index the integer\n- convert dates to ISO strings and index the string\n\nSame restrictions for date+time fields.\n\nThere's still something I don't understand : how are timestamps stored?\n\nDon't you store :\n 1)universaltime or gmt\n 2)timezone ?\nThis way, timezones are only used to display a local date from a\nuniversal value (which can be sorted normally)\n\nIs it :\n 1)localtime\n 2)timezone\n\nI guess I should RTFM or RTFS(ources)... Got a URL for dummies like me?\n\n Oops! After re-reading my writing, I realize timezones are\n important in the US though it does not change the problem.\n\nRegards,\n-- \nHerv� Piedvache\n\nElma Ingenierie Informatique\n6, rue du Faubourg Saint-Honor�\nF-75008 - Paris - France \nhttp://www.elma.fr\nTel: +33-1-44949901\nFax: +33-1-44949902 \nEmail: herve@elma.fr\n",
"msg_date": "Thu, 21 Feb 2002 15:26:23 +0100",
"msg_from": "=?iso-8859-1?Q?Herv=E9?= Piedvache <herve@elma.fr>",
"msg_from_op": false,
"msg_subject": "Re: Trouble with pg_dumpall import with 7.2"
},
{
"msg_contents": "=?iso-8859-1?Q?Herv=E9?= Piedvache <herve@elma.fr> writes:\n> Most of us, users of PG (app developers I mean) never have to deal\n> with timezones and that's where we conflict : we can't use (I mean as\n> efficiently as could be) date indexes because of timezones which WE\n> don't care about (at least in, say, 90% of the apps that use DB).\n\nIf you don't care about timezone handling, you should be using timestamp\nwithout time zone. Observe:\n\nregression=# create table foo (tnz timestamp without time zone,\nregression(# tz timestamp with time zone);\nCREATE\nregression=# create index fooi on foo(date(tz));\nERROR: DefineIndex: index function must be marked iscachable\nregression=# create index fooi on foo(date(tnz));\nCREATE\nregression=#\n\ntimestamp-with-timezone is really GMT under the hood; it's rotated to\nyour local timezone (as shown by TimeZone) before conversion to date,\nand that's why timestamp-with-timezone-to-date is, and should be,\nnoncachable.\n\nOn the other hand, timestamp without time zone is not assumed to be\nin any particular zone, and there's never any rotation to local or to\nGMT. So that conversion to date is deterministic.\n\nSome examples (I'm in EST, ie GMT-5):\n\nregression=# select '2002-02-21 08:00-05'::timestamp with time zone;\n timestamptz\n------------------------\n 2002-02-21 08:00:00-05\n(1 row)\n\nregression=# select '2002-02-21 08:00+09'::timestamp with time zone;\n timestamptz\n------------------------\n 2002-02-20 18:00:00-05\n(1 row)\n\nregression=# select date('2002-02-21 08:00+09'::timestamp with time zone);\n date\n------------\n 2002-02-20\n(1 row)\n\nregression=# select '2002-02-21 08:00+09'::timestamp without time zone;\n timestamp\n---------------------\n 2002-02-21 08:00:00 -- the timezone indication is simply dropped\n(1 row)\n\nregression=# select date('2002-02-21 08:00+09'::timestamp without time zone);\n date\n------------\n 2002-02-21\n(1 row)\n\nBTW, 7.2 assumes plain \"timestamp\" to denote \"timestamp with time zone\";\nthis is for backwards compatibility with the behavior of previous\nreleases' timestamp datatype. However, the SQL spec says that\n\"timestamp\" should mean \"\"timestamp without time zone\", so we are\nprobably going to change over eventually.\n\n(Hey Thomas, did I get all that right?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Feb 2002 09:47:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Trouble with pg_dumpall import with 7.2 "
},
{
"msg_contents": "...\n> BTW, 7.2 assumes plain \"timestamp\" to denote \"timestamp with time zone\";\n> this is for backwards compatibility with the behavior of previous\n> releases' timestamp datatype. However, the SQL spec says that\n> \"timestamp\" should mean \"\"timestamp without time zone\", so we are\n> probably going to change over eventually.\n> (Hey Thomas, did I get all that right?)\n\nYes, including the change in default in an upcoming release. Well,\nactually I have to admit I lost concentration somewhere in the middle of\nthe \"power examples\" so didn't check those carefully ;) :))\n\n - Thomas\n",
"msg_date": "Thu, 21 Feb 2002 15:22:31 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Trouble with pg_dumpall import with 7.2"
},
{
"msg_contents": "On Wed, 20 Feb 2002 14:06:46 -0500\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Masaru Sugawara <rk73@echna.ne.jp> writes:\n> > (1)create function mydate(timestamp) returns date as '\n> > select date($1);\n> > ' language 'sql' with (iscachable);\n> \n> If you do it that way then you are simply opening yourself up to exactly\n> the error that the noncachability check is trying to save you from\n> making.\n\n Okey.\n It turned out that the setting time zone was insufficient -- but I also\n understand that users need to avoid the operations for which \n robustness/reliability is lost. \n\n> \n> You could probably do it safely by hard-wiring the time zone to be used\n> into the function. I think something like this would work:\n> \n> create function mydate(timestamp with time zone) returns date as '\n> select date($1 AT TIME ZONE ''EST'');\n> ' language 'sql' with (iscachable);\n> \n> (substitute time zone of your choice, of course).\n\n Thanks a lot. there are likely to be opportunities of making frequent\n use of it.\n\n> \n> BTW, if the table is at all large then you'd probably be better off to\n> use a plpgsql function instead. SQL-language functions are rather\n> inefficient IIRC.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n\n\n\n\nRegards,\nMasaru Sugawara\n\n",
"msg_date": "Fri, 22 Feb 2002 01:59:09 +0900",
"msg_from": "Masaru Sugawara <rk73@echna.ne.jp>",
"msg_from_op": false,
"msg_subject": "Re: Trouble with pg_dumpall import with 7.2"
},
{
"msg_contents": "On Wed, 20 Feb 2002 13:49:21 -0500\n\"Rod Taylor\" <rbt@zort.ca> wrote:\n\n> Couldn't you simply index on the cast of the timestamp to date?\n> \n> create index ix_test on testtable (cast(things as date));\n> ERROR: parser: parse error at or near \"cast\"\n> \n> Evidently not...\n\n I'm sorry.\n\n>> I would think that you're able to create an index easily like (2), but need to >> inspect the results cautiously.\n\n This means \"create an index on an function\", not \"create an index on a\n date()/cast() directly\". It seemed ambiguous.\n\n\n\nRegards,\nMasaru Sugawara\n\n",
"msg_date": "Fri, 22 Feb 2002 02:37:21 +0900",
"msg_from": "Masaru Sugawara <rk73@echna.ne.jp>",
"msg_from_op": false,
"msg_subject": "Re: Trouble with pg_dumpall import with 7.2"
}
] |
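Pulling the advice in this thread together, the two safe patterns look like this (table and function names taken from the messages above; an untested sketch):

    -- pattern 1: pin the zone inside an iscachable function, then index it
    CREATE FUNCTION mydate(timestamp with time zone) RETURNS date AS '
        SELECT date($1 AT TIME ZONE ''EST'');
    ' LANGUAGE 'sql' WITH (iscachable);

    CREATE INDEX ix_datelog_date ON datelog (mydate(datelog));

    SELECT count(*) FROM datelog
    WHERE mydate(datelog) >= '2002-02-01' AND mydate(datelog) <= '2002-02-05';

    -- pattern 2: store zone-less timestamps; date() is then deterministic
    -- and can be indexed directly
    CREATE TABLE datelog2 (datelog timestamp without time zone);
    CREATE INDEX ix_datelog2_date ON datelog2 (date(datelog));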
[
{
"msg_contents": "Hi all,\n\nI've started separating out the protocol stuff into a single module under \nsrc/backend/pgnet. I've posted some initial diffs that move the \nauthentication, cancels, and beginning/end of results out of the mainline \ncode. I still need to move the sending of rows and some miscellaneous \nerror handling things. I've posted my current code at:\n\nhttp://opendrda.sourceforge.net/pgsql/\n\nI'd appreciate it if folks wanted to take a look and give some feedback \nabout the direction and getting this ready to merge.\n\nCheers,\n\nBrian\n\n",
"msg_date": "Sat, 16 Feb 2002 22:25:29 -0500 (EST)",
"msg_from": "Brian Bruns <camber@ais.org>",
"msg_from_op": true,
"msg_subject": "making way for DRDA"
}
] |
[
{
"msg_contents": "Hello,\nI guess this question will be too easy to most of the people here, \nthough I couldn't figure this out myself. I want to recompile only psql \nwith the readline library. When I installed PG, GNU readline package was \nnot there. Now db is running fine, and I don't want to disturb it. Can \nanybody point me to the brief steps to do this.\nThanks in advance.\nRegards.\n-Samik\n\n",
"msg_date": "Sun, 17 Feb 2002 00:31:31 -0600",
"msg_from": "Samik Raychauhduri <samik@cae.wisc.edu>",
"msg_from_op": true,
"msg_subject": "Selectively Compile psql"
},
{
"msg_contents": "Samik Raychauhduri <samik@cae.wisc.edu> writes:\n> I guess this question will be too easy to most of the people here, \n> though I couldn't figure this out myself. I want to recompile only psql \n> with the readline library. When I installed PG, GNU readline package was \n> not there. Now db is running fine, and I don't want to disturb it. Can \n> anybody point me to the brief steps to do this.\n\nThere is not presently any method to recompile only psql. However, you\ncould reinstall only psql. Try this:\n\n\tcd ...postgres top directory...\n\t./configure same-options-as-you-used-before\n\tmake\n\tcd src/bin/psql\n\tmake install\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Feb 2002 22:28:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Selectively Compile psql "
}
] |
[
{
"msg_contents": "I've been thinking about replication and wanted to throw out an idea to see \nhow fast it gets torn apart. I'm sure the problem can't be this easy but I \ncan't think of why.\n\nOk... Let's say you have two fresh databases, both empty. You set up a \npostgres proxy for them. The proxy works like this:\n\n It listens on port 5432.\n It pools connections to both real databases.\n It is very simple just forwarding requests and responses back and forth\n between client and server. A client can connect to the proxy \n and not be able to tell that it is not an actual postgres \n database.\n When connections are made to it, it proxys connections to both back-end\n databases.\n If an insert/update/delete/DDL command comes, it forwards it to both\n machines.\n If a query comes down the line it forwards it to one machine or the \n other.\n If one of the machines goes offline or is not responding the proxy \n queues up all update transactions intended for it and stops \n forwarding queries to it until it comes back online and all \n queued transactions have been committed.\n A new machine can be inserted to the cluster. When the proxy is alerted\n to this, it's first communication would be to pgdumpall() one of\n the functional databases and pipe it to the new one. At that \n moment, it is considered an unreachable database and all update\n transactions are queued for when the dump/rebuild is complete.\n If a machine dies in catastrophic failure it can be removed from the\n cluster, and once the machine is fixed, re-inserted as per \n above.\n If there were some SQL command for determining the load a machine is \n experiencing the proxy could intelligently balance the load \n to the machines in the cluster that can handle it.\n If the proxy were to fail, clients could safely connect to one of the \n back end databases in read-only mode until the proxy came back \n up.\n The proxy would store a log of incomplete transactions in some kind of \n presistant storage for all the databases it's connected to, so \n should it die, it can resume right where it left off assuming \n the log is intact.\n\nWith the proxy set up like this you could connect to it as though it were a \ndatabase, upload your current data and schema and get most all the benifits \nof clustering.\n\nWith this setup could achieve load balancing, fail-over, master-master \nreplication, master-slave replication, hot swap servers, dynamic addition \nand removal of servers and HA-like clustering. The only thing it does not \ndo is partition data across servers. The only assumption I am aware of \nthat I am making is that two identical databases, given the same set of \narbitrary transactions will end up being the same. The only single point \nof failure in this system would be the proxy itself. A modification to the \npostgres client software could allow automatically fail-over to read-only \nconnections with one of the back-end databases. Also, the proxy could be \nrun on a router or other diskless system. I haven't really thought about \nit, but it may even be possible to use current HA technology and run a pool \nof failover proxy's.\n\nIf the proxy ended up NOT slowing the performance of a standalone, \nsingle-system server, it could become the default connection method to \nPostgreSQL such that a person could do an out-of-the-box install of the \ndatabase and a year later realize they really wanted a cluster, they could \nhot-add a server without even restarting the database. 
\n\nSo, long story short, I'd like to get people's comments on this. If it \nwon't/can't work or has been tried before, I want to hear about it before I \nstart coding. ;)\n\n Orion\n\n",
"msg_date": "Sun, 17 Feb 2002 01:11:15 -0800",
"msg_from": "Orion Henry <orion@trustcommerce.com>",
"msg_from_op": true,
"msg_subject": "A Replication Idea"
},
{
"msg_contents": "> It listens on port 5432.\n> It pools connections to both real databases.\n\nCheck out pgsql-replication. They're doing something more complex,\nbut will get you the multi-master model that you're proposing. It's\nusing a reliable multicast model based on the spread toolkit\n(spread.org).\n\n\n\nAs for your idea, it sounds really good, but has two pit-falls that I\nknow of:\n\n1) TCP latency could intorduce race conditions and data syncronization\n problems.\n\n2) transaction WAL log syncronization.\n\n-sc\n\n-- \nSean Chittenden\n",
"msg_date": "Wed, 20 Feb 2002 18:39:19 -0800",
"msg_from": "Sean Chittenden <sean@chittenden.org>",
"msg_from_op": false,
"msg_subject": "Re: A Replication Idea"
},
{
"msg_contents": "\n>I've been thinking about replication and wanted to throw out an idea to see \n>how fast it gets torn apart. I'm sure the problem can't be this easy but I \n>can't think of why.\n>\nI have some comments/questions to share. If you are proposing SQL based \nreplication (the statements\nget planned, parsed, and executed on all replicas) how can you guarantee \neach replica will stay synchronized?\nWhen it comes to executing a set of commands in a transactions, which \ncould kick off triggers or call\nstored procedures, or functions, how does the proxy know each data \nchange in the transaction was\nsuccessful? \n\nWhile having an advantage of being outside of the core postgres code, \nyou would not be affected\nby constant changes, so development/integration would be less intrusive. \n OTOH things like conflict\nresolution or avoidance in a multi master scenario are much more \ndifficult to handle in your proxy\napproach.\n\n> \n>So, long story short, I'd like to get people's comments on this. If it \n>won't/can't work or has been tried before, I want to hear about it before I \n>start coding. ;)\n>\nWe did some research a while back, and you might find some of the \ninformation useful within...\n\nhttp://gborg.postgresql.org/genpage?replication_research\n\n\nIf you're interested, maybe we could collaborate,\n\nDarren\n\n\n",
"msg_date": "Wed, 20 Feb 2002 23:42:02 -0500",
"msg_from": "Darren Johnson <darren@up.hrcoxmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A Replication Idea"
}
] |
[
{
"msg_contents": "I think it's important that it's actually documented that they can add\nprimary keys after the fact!\n\nAlso, we need to add regression tests for alter table / add primary key\nand alter table / drop constraint. These shouldn't be added until 7.3 tho\nmethinks...\n\nChris",
"msg_date": "Sun, 17 Feb 2002 18:54:15 +0800 (WST)",
"msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Patch to ALTER TABLE docs for 7.2.1"
},
{
"msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nChristopher Kings-Lynne wrote:\n> I think it's important that it's actually documented that they can add\n> primary keys after the fact!\n> \n> Also, we need to add regression tests for alter table / add primary key\n> and alter table / drop constraint. These shouldn't be added until 7.3 tho\n> methinks...\n> \n> Chris\n> \n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 17 Feb 2002 06:49:58 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch to ALTER TABLE docs for 7.2.1"
},
{
"msg_contents": "Awww Rats.\n\nWish I'd known a primary key could be added after table creation about 4\nhours ago. Am wrapping up the 2nd CBT now (on Referential Integrity)\nand it's a bit late for adding another section. Must read the docs more\noften.\n\nOh well. Next CBT maybe.\n\n:)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nBruce Momjian wrote:\n> \n> Patch applied. Thanks.\n> \n> ---------------------------------------------------------------------------\n> \n> Christopher Kings-Lynne wrote:\n> > I think it's important that it's actually documented that they can add\n> > primary keys after the fact!\n> >\n> > Also, we need to add regression tests for alter table / add primary key\n> > and alter table / drop constraint. These shouldn't be added until 7.3 tho\n> > methinks...\n> >\n> > Chris\n> >\n> \n> Content-Description:\n> \n> [ Attachment, skipping... ]\n> \n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Sun, 17 Feb 2002 23:19:23 +1100",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Patch to ALTER TABLE docs for 7.2.1"
},
{
"msg_contents": "Another idea. Now that we have ALTER TABLE / ADD PRIMARY KEY - we should\nmodify the pg_dump format to instead of doing this:\n\nCREATE TABLE food (\n\ta int4,\n\tPRIMARY KEY (a)\n);\n\nCOPY FROM ...\n\nIt should be like this:\n\nCREATE TABLE food (\n\ta int4\n);\n\nCOPY FROM ...\n\nALTER TABLE food ADD PRIMARY KEY (a);\n\nThis will be a lot faster. The only reason (I believe) that it was not done\nlike this previously is that it wasn't possible to recreate a PK in any\nother way without twiddling the catalogs.\n\nChris\n\n\n> -----Original Message-----\n> From: Justin Clift [mailto:justin@postgresql.org]\n> Sent: Sunday, 17 February 2002 8:19 PM\n> To: Bruce Momjian\n> Cc: Christopher Kings-Lynne; pgsql-patches@postgresql.org\n> Subject: Re: [PATCHES] Patch to ALTER TABLE docs for 7.2.1\n>\n>\n> Awww Rats.\n>\n> Wish I'd known a primary key could be added after table creation about 4\n> hours ago. Am wrapping up the 2nd CBT now (on Referential Integrity)\n> and it's a bit late for adding another section. Must read the docs more\n> often.\n>\n> Oh well. Next CBT maybe.\n>\n> :)\n>\n> Regards and best wishes,\n>\n> Justin Clift\n>\n>\n> Bruce Momjian wrote:\n> >\n> > Patch applied. Thanks.\n> >\n> >\n> ------------------------------------------------------------------\n> ---------\n> >\n> > Christopher Kings-Lynne wrote:\n> > > I think it's important that it's actually documented that they can add\n> > > primary keys after the fact!\n> > >\n> > > Also, we need to add regression tests for alter table / add\n> primary key\n> > > and alter table / drop constraint. These shouldn't be added\n> until 7.3 tho\n> > > methinks...\n> > >\n> > > Chris\n> > >\n> >\n> > Content-Description:\n> >\n> > [ Attachment, skipping... ]\n> >\n> > >\n> > > ---------------------------(end of\n> broadcast)---------------------------\n> > > TIP 6: Have you searched our list archives?\n> > >\n> > > http://archives.postgresql.org\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill,\n> Pennsylvania 19026\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n> --\n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n>\n\n",
"msg_date": "Mon, 18 Feb 2002 09:34:14 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Patch to ALTER TABLE docs for 7.2.1"
},
{
"msg_contents": "> Christopher Kings-Lynne wrote:\n> Another idea. Now that we have ALTER TABLE / ADD PRIMARY KEY - we should\n> modify the pg_dump format to instead of doing this:\n> \n> CREATE TABLE food (\n> \ta int4,\n> \tPRIMARY KEY (a)\n> );\n> \n> COPY FROM ...\n> \n> It should be like this:\n> \n> CREATE TABLE food (\n> \ta int4\n> );\n> \n> COPY FROM ...\n> \n> ALTER TABLE food ADD PRIMARY KEY (a);\n> \n> This will be a lot faster. The only reason (I believe) that it was not done\n> like this previously is that it wasn't possible to recreate a PK in any\n> other way without twiddling the catalogs.\n\nVery good point. Added to TODO:\n\n\t* Have pg_dump use ADD PRIMARY KEY after COPY, for performance\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 17 Feb 2002 21:00:03 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch to ALTER TABLE docs for 7.2.1"
},
{
"msg_contents": "Hi Christopher,\n\nForwarding this to [HACKERS] as it sounds like it belong there now.\n\n:)\n\nThis is sounds worthwhile, from the point of view that it makes it\neasier for users with large data sets, and it'll assist in the\n\"PostgreSQL loads database dump data slowly\" type problem which seems to\nshow itself in some people's testing.\n\nHow easy to implement this do you reckon?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nChristopher Kings-Lynne wrote:\n> \n> Another idea. Now that we have ALTER TABLE / ADD PRIMARY KEY - we should\n> modify the pg_dump format to instead of doing this:\n> \n> CREATE TABLE food (\n> a int4,\n> PRIMARY KEY (a)\n> );\n> \n> COPY FROM ...\n> \n> It should be like this:\n> \n> CREATE TABLE food (\n> a int4\n> );\n> \n> COPY FROM ...\n> \n> ALTER TABLE food ADD PRIMARY KEY (a);\n> \n> This will be a lot faster. The only reason (I believe) that it was not done\n> like this previously is that it wasn't possible to recreate a PK in any\n> other way without twiddling the catalogs.\n> \n> Chris\n> \n> > -----Original Message-----\n> > From: Justin Clift [mailto:justin@postgresql.org]\n> > Sent: Sunday, 17 February 2002 8:19 PM\n> > To: Bruce Momjian\n> > Cc: Christopher Kings-Lynne; pgsql-patches@postgresql.org\n> > Subject: Re: [PATCHES] Patch to ALTER TABLE docs for 7.2.1\n> >\n> >\n> > Awww Rats.\n> >\n> > Wish I'd known a primary key could be added after table creation about 4\n> > hours ago. Am wrapping up the 2nd CBT now (on Referential Integrity)\n> > and it's a bit late for adding another section. Must read the docs more\n> > often.\n> >\n> > Oh well. Next CBT maybe.\n> >\n> > :)\n> >\n> > Regards and best wishes,\n> >\n> > Justin Clift\n> >\n> >\n> > Bruce Momjian wrote:\n> > >\n> > > Patch applied. Thanks.\n> > >\n> > >\n> > ------------------------------------------------------------------\n> > ---------\n> > >\n> > > Christopher Kings-Lynne wrote:\n> > > > I think it's important that it's actually documented that they can add\n> > > > primary keys after the fact!\n> > > >\n> > > > Also, we need to add regression tests for alter table / add\n> > primary key\n> > > > and alter table / drop constraint. These shouldn't be added\n> > until 7.3 tho\n> > > > methinks...\n> > > >\n> > > > Chris\n> > > >\n> > >\n> > > Content-Description:\n> > >\n> > > [ Attachment, skipping... ]\n> > >\n> > > >\n> > > > ---------------------------(end of\n> > broadcast)---------------------------\n> > > > TIP 6: Have you searched our list archives?\n> > > >\n> > > > http://archives.postgresql.org\n> > >\n> > > --\n> > > Bruce Momjian | http://candle.pha.pa.us\n> > > pgman@candle.pha.pa.us | (610) 853-3000\n> > > + If your life is a hard drive, | 830 Blythe Avenue\n> > > + Christ can be your backup. | Drexel Hill,\n> > Pennsylvania 19026\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >\n> > --\n> > \"My grandfather once told me that there are two kinds of people: those\n> > who work and those who take the credit. He told me to try to be in the\n> > first group; there was less competition there.\"\n> > - Indira Gandhi\n> >\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Mon, 18 Feb 2002 13:40:41 +1100",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Patch to ALTER TABLE docs for 7.2.1"
},
{
"msg_contents": "On Sun, 2002-02-17 at 21:00, Bruce Momjian wrote:\n> Very good point. Added to TODO:\n> \n> \t* Have pg_dump use ADD PRIMARY KEY after COPY, for performance\n\nI have started working on this. AFAICT, it shouldn't be too difficult\n(although I haven't finished it yet, so perhaps I speak too soon ;-) ).\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "17 Feb 2002 22:37:46 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: Patch to ALTER TABLE docs for 7.2.1"
},
{
"msg_contents": "[re-sending to -hackers, as the thread seems to be moving there]\n\nOn Sun, 2002-02-17 at 21:00, Bruce Momjian wrote: \n> Very good point. Added to TODO:\n> \n> \t* Have pg_dump use ADD PRIMARY KEY after COPY, for performance\n\nI have started working on this. AFAICT, it shouldn't be too difficult\n(although I haven't finished it yet, so perhaps I speak too soon ;-) ).\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "17 Feb 2002 22:41:19 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Patch to ALTER TABLE docs for 7.2.1"
},
{
"msg_contents": "> > \t* Have pg_dump use ADD PRIMARY KEY after COPY, for performance\n>\n> I have started working on this. AFAICT, it shouldn't be too difficult\n> (although I haven't finished it yet, so perhaps I speak too soon ;-) ).\n\nDoh! I was just about to say that I'll do it this afternoon! I'll leave it\nto you then :)\n\nChris\n\n",
"msg_date": "Mon, 18 Feb 2002 12:08:28 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Patch to ALTER TABLE docs for 7.2.1"
}
] |
[
{
"msg_contents": "\nI have a problem running postgresql\nSpecifically, this is the error that Iget:\n\ncsil-sunb23|/usr/dcs/csil-projects/cs411/cs411g3/postgres/bin|[142]%\npostmaster\n -B 32 -N 16 -D /usr/dcs/csil-projects/cs411/cs411g3/postgres/data\n>FATAL 1: configuration file `postgresql.conf' has wrong permissions\n>\n>cs411ta2|csil-sunb23|/usr/dcs/csil-projects/cs411/cs411g3/postgres/bin|[143]%\n>\n\nI am logged in as \"cs411ta2\" to run postgresql created by \"cs411g3\".The\npostgresql user for cs411g3 is \"cs411g3\". I am\na have the rwx permissions on the group \"ta411\" that both \"cs411ta2\"\nand \"cs411g3\" belong to.\n\nHow can I run the postgresql created by \"cs411g3\"? is there a way to\nchange the postgres user created by \"cs411g3\", so that I can run the same\npostgresql that \"cs411g3\" has installed.\n\n\nthanks\n\n",
"msg_date": "Sun, 17 Feb 2002 05:58:12 -0600",
"msg_from": "\"D'laila Pereira\" <dpereira@students.uiuc.edu>",
"msg_from_op": true,
"msg_subject": "Problem running postgresql??"
}
] |
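No fix appears in this thread. As a hedged guess (the exact check is not quoted here), the postmaster refuses server files that are accessible to other users or not owned by the account running it, so group rwx on "ta411" is not enough. The usual ways out look like this sketch, using the paths from the message above:

    # run the server as the account that owns the data directory ...
    su - cs411g3 -c 'postmaster -B 32 -N 16 \
        -D /usr/dcs/csil-projects/cs411/cs411g3/postgres/data'

    # ... or re-own the installation to the account that should run it
    chown -R cs411ta2 /usr/dcs/csil-projects/cs411/cs411g3/postgres/data
    chmod 0700 /usr/dcs/csil-projects/cs411/cs411g3/postgres/data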
[
{
"msg_contents": "Here is a colorized CVS log showing the commit messages from the 7.2\nrelease:\n\n\thttp://www.ca.postgresql.org/docs/momjian/cvslog.html\n\nThe log is in abbeviated format, and is colorized. It is also 1.3MB,\nwhich makes it hard to view in many browsers. Perhaps it should be\nsplit up into one file per month.\n\nOf course, many would prefer a file like HISTORY, which is much more\nconcise. The problem is that I can't accurately condense and filter to\ncreate a HISTORY-like file until I have a pile of CVS commit messages at\nthe end of a release cycle. For example, look through the CVS file and\nimagine what entry you would add to CHANGES for each commit. It\nrequires thought, and it often requires you consolidate existing entries\nin the CHANGES file to add the new change.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 17 Feb 2002 08:47:56 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "CVS log, HTML format"
}
] |
[
{
"msg_contents": "Hi Bruce,\n\nOn Sat, 16 Feb 2002, Bruce Momjian wrote:\n\n> I have just a few stylistic suggestions. First, we need context diff\n> (diff -c) rather than regular diffs. \n\nWill do.\n\n> Second, I was wondering why you\n> called it pgnet. If we are already in the backend code, seems it could\n> all just be called 'net'. \n\nI wanted to avoid something that might have a namespace clash with other \nmodules or OS headers so \"net.h\" seemed to be shakey ground. I'm flexible \non this point.\n\n> Third, what is the value of this extra level\n> of abstraction? This may already have been covered but I forgot.\n\nI think eventually I'm going to try to make all calls via a set of \nfunction pointers in a structure, so there shouldn't be that much \noverhead. I'm still debating that as there could be some generic \nprocessing going on and then some protocol-specific processing.\n\nThe eventual goal though is to allow pgsql to support multiple network \nprotocols. I'm mostly interested in DRDA (the OpenGroup standard), but I \nknow others want SQL*Net (on the todo list). \n\nCheers,\n\nBrian\n\n",
"msg_date": "Sun, 17 Feb 2002 09:07:08 -0500 (EST)",
"msg_from": "Brian Bruns <camber@ais.org>",
"msg_from_op": true,
"msg_subject": "Re: making way for DRDA"
}
] |
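A minimal sketch of the function-pointer dispatch Brian describes; every name below is hypothetical rather than an actual PostgreSQL symbol:

    /* One instance of this struct per supported wire protocol
     * (native FE/BE, DRDA, SQL*Net, ...). */
    typedef struct NetProtocol
    {
        const char *name;                        /* "fe/be", "drda", ... */
        int  (*accept_conn) (int sock);          /* protocol handshake */
        int  (*read_message) (int sock, void *buf, int len);
        int  (*write_message) (int sock, const void *buf, int len);
        void (*close_conn) (int sock);
    } NetProtocol;

    /* Generic code calls through the pointers and never sees the wire
     * format, so the per-call cost is one indirect function call. */
    static int
    net_read(NetProtocol *proto, int sock, void *buf, int len)
    {
        return proto->read_message(sock, buf, len);
    }

The "generic processing plus protocol-specific processing" split Brian mentions would live in common code wrapped around these calls, not inside each protocol module.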
[
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> It would seem that if you could determine if the number of distinct\n> values is _increasing_ as you scan more rows, that an increase in table\n> size would also cause an increase, e.g. if you have X distinct values\n> looking at N rows, and 2X distinct values looking at 2N rows, that\n> clearly would show a scale.\n\n[ thinks for awhile... ] I don't think that'll help. You could not\nexpect an exact 2:1 increase, except in the case of a simple unique\ncolumn, which isn't the problem anyway. So the above would really\nhave to be coded as \"count the number of distinct values in the sample\n(d1) and the number in half of the sample (d2); then if d1/d2 >= X\nassume the number of distinct values scales\". X is a constant somewhere\nbetween 1 and 2, but where? I think you've only managed to trade one\narbitrary threshold for another one.\n\nA more serious problem is that the above could easily be fooled by a\ndistribution that contains a few very-popular values and a larger number\nof seldom-seen ones. Consider for example a column \"number of children\"\nover a database of families. In a sample of a thousand or so, you might\nwell see only values 0..4 (or so); if you double the size of the sample,\nand find a few rows with 5 to 10 kids, are you then correct to label the\ncolumn as scaling with the size of the database?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 17 Feb 2002 13:02:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Odd statistics behaviour in 7.2 "
}
] |
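For concreteness, a self-contained sketch of the heuristic Tom is critiquing (d1 over the whole sample, d2 over half of it); this is illustration code only, not PostgreSQL's analyzer:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int
    cmp_int(const void *a, const void *b)
    {
        int x = *(const int *) a;
        int y = *(const int *) b;

        return (x > y) - (x < y);
    }

    /* Number of distinct values among the first n entries. */
    static int
    count_distinct(const int *vals, int n)
    {
        int *tmp = malloc(n * sizeof(int));
        int  i, d = 0;

        memcpy(tmp, vals, n * sizeof(int));
        qsort(tmp, n, sizeof(int), cmp_int);
        for (i = 0; i < n; i++)
            if (i == 0 || tmp[i] != tmp[i - 1])
                d++;
        free(tmp);
        return d;
    }

    /* "If d1/d2 >= X, assume the number of distinct values scales with
     * the table size."  X lies somewhere in (1, 2), which is exactly
     * the arbitrary threshold Tom objects to. */
    static int
    looks_like_it_scales(const int *sample, int n, double X)
    {
        int d1 = count_distinct(sample, n);
        int d2 = count_distinct(sample, n / 2);

        return d2 > 0 && (double) d1 / d2 >= X;
    }

    int
    main(void)
    {
        /* Tom's "number of children" shape: a few popular values plus
         * an occasional rare one.  Here d2 = 4, d1 = 5, ratio 1.25, so
         * with X = 1.5 the column is not flagged as scaling. */
        int sample[] = {0, 1, 2, 2, 3, 1, 0, 2, 1, 4, 2, 1};

        printf("%d\n", looks_like_it_scales(sample, 12, 1.5));
        return 0;
    }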
[
{
"msg_contents": "\nI just created a REL7_2_STABLE branch, so that v7.3 can now begin ... have\nfun folks :)\n\n\n",
"msg_date": "Sun, 17 Feb 2002 18:35:55 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Branch created ... May v7.3 be Born!!"
}
] |
[
{
"msg_contents": "Hi All,\n\nIs there a function in the catalogs somewhere that will split this?:\n\naustralia=# select tgargs from pg_trigger where oid=3842228;\n tgargs\n\n----------------------------------------------------------------------------\n------\n--------------\n\n<unnamed>\\000users_mealplans_prefs\\000medidiets_meals\\000UNSPECIFIED\\000meal\n_id\\0\n00meal_id\\000\n(1 row)\n\nI'm hoping to do the splitting within pl/pgsql even. If there is no\nfunction to break it up, can someone please suggest how I would code taking\nthis string and breaking it into components.\n\nChris\n\n",
"msg_date": "Mon, 18 Feb 2002 13:35:14 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Function to split pg_trigger.tgargs"
},
{
"msg_contents": "http://www.zort.ca/postgresql/postgresql_autodoc_0.50.tar.gz\n\nThe below is taken from the above.\n\n my $args = $forcols->{'args'};\n my $nargs = $forcols->{'number_args'};\n\n if ($nargs == 6) {\n my ( $keyname\n , $table\n , $ftable\n , $unspecified\n , $lcolumn_name\n , $fcolumn_name\n ) = split(/\\000/, $args);\n\n $structure{$group}{$table_name}{'COLUMN'}{$lcolumn_name}{'FK'} =\n\"$ftable\"; #.$fcolumn_name\";\n\n # print \" FK $lcolumn_name -> $ftable.$fcolumn_name\\n\";\n } elsif (($nargs - 6) % 2 == 0) {\n my ( $keyname\n , $table\n , $ftable\n , $unspecified\n , $lcolumn_name\n , $fcolumn_name\n , @junk\n ) = split(/\\000/, $args);\n\n my $key_cols = \"$lcolumn_name\";\n my $ref_cols = \"$fcolumn_name\";\n\n while ($lcolumn_name = pop(@junk) and $fcolumn_name =\npop(@junk)) {\n\n $key_cols .= \", $lcolumn_name\";\n $ref_cols .= \", $fcolumn_name\";\n }\n\n $structure{$group}{$table_name}{'CONSTRAINT'}{$constraint_name}\n= \"FOREIGN KEY ($key_cols) REFERENCES $ftable($ref_cols)\";\n }\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>\nTo: \"Hackers\" <pgsql-hackers@postgresql.org>;\n<pgsql-sql@postgresql.org>\nSent: Monday, February 18, 2002 12:35 AM\nSubject: [HACKERS] Function to split pg_trigger.tgargs\n\n\n> Hi All,\n>\n> Is there a function in the catalogs somewhere that will split this?:\n>\n> australia=# select tgargs from pg_trigger where oid=3842228;\n> tgargs\n>\n> --------------------------------------------------------------------\n--------\n> ------\n> --------------\n>\n>\n<unnamed>\\000users_mealplans_prefs\\000medidiets_meals\\000UNSPECIFIED\\0\n00meal\n> _id\\0\n> 00meal_id\\000\n> (1 row)\n>\n> I'm hoping to do the splitting within pl/pgsql even. If there is no\n> function to break it up, can someone please suggest how I would code\ntaking\n> this string and breaking it into components.\n>\n> Chris\n>\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to\nmajordomo@postgresql.org\n>\n\n",
"msg_date": "Mon, 18 Feb 2002 07:40:16 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Function to split pg_trigger.tgargs"
}
] |
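A stripped-down, runnable version of the same idea, fed the literal value from Chris's query; purely illustrative:

    #!/usr/bin/perl
    # tgargs fields are separated by NUL bytes, which psql displays
    # as \000.
    my $args = "<unnamed>\0users_mealplans_prefs\0medidiets_meals\0"
             . "UNSPECIFIED\0meal_id\0meal_id\0";
    my ($keyname, $table, $ftable, $unspecified, $lcol, $fcol) =
        split(/\0/, $args);
    print "FK $table.$lcol -> $ftable.$fcol\n";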
[
{
"msg_contents": "Sorry, there is a bug in the first package I sent. You'd better change the postgres.list file with this one.",
"msg_date": "Mon, 18 Feb 2002 16:50:28 +1100",
"msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>",
"msg_from_op": true,
"msg_subject": "REf. cross-platform packaging in native format."
}
] |
[
{
"msg_contents": "Looks like it's not been sent. So I send it again whithout build.tar.gz\n\nIs there a place where I can upload build.tar.gz (~600 Kbytes)?\n\n----- Original Message ----- \nFrom: Nicolas Bazin \nTo: pgsql-hackers@posgresql.org \nSent: Monday, February 18, 2002 3:08 PM\nSubject: cross-platform packaging in native format.\n\n\nI have used the \"epm\" package from http://www.easysw.com/epm to create native packages format for the different platform this package supports in native format. Actually I have done that for Openserver and Unixware and will extend it to Linux.\nHere is how to proceed.\nCreate a new directory in your postrges-7.2 root directory called build.\nextract build.tar.gz in this new directory,\nthen start:\n$build/build_package.sh\n\nthis will recompile postgres and make an installation package.\n\nBeware that the current CVS version of epm has a bug in mkepmlist. So apply the patch that I have also submitted to the epm maintainer but is not yet included.\n\nI also have extended the transfer tool of the hsqldb (SQL DB in JAVA http://hsqldb.sourceforge.net/ ) project to transfer database from INFORMIX to postgres. It transfers table structure, indexes, primary keys, foreign keys, data and work arround bugs in INFORMIX JDBC implementation. It uses JDBC and should be easy to adapt to other sources. The hsqldb maintainer wants me to wait until the patch is applied to it's project to post it some place else, so have a look at this project or wait a little bit if you are interested.\n\nNicolas",
"msg_date": "Mon, 18 Feb 2002 17:09:31 +1100",
"msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>",
"msg_from_op": true,
"msg_subject": "Fw: cross-platform packaging in native format."
}
] |
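The steps from the message, collected into one sequence; the location of build.tar.gz is whatever you obtained separately, since the archive is not attached here:

    # From the postgresql-7.2 source root:
    mkdir build
    tar -xzf /path/to/build.tar.gz -C build  # the ~600 KB archive mentioned above
    sh build/build_package.sh                # recompiles and builds a native package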
[
{
"msg_contents": " Hello,\n\n I has pg_dump my DB in 7.1.3 and try ro pg_restore it in 7.2\nversion.\n Almost all is clear, but restore of some tables generate\nmessages\nlike this:\npsql:/.../dbdump/.dbrestore.tmp:1624094: ERROR: copy: line 1, Bad\ntimestamp external representation 'Fri 25 Jan 23:59:59 2002 KRAT'\npsql:/.../dbdump/.dbrestore.tmp:1624094: lost synchronization with\nserver, resetting connection\n.......\n And in a postmaster log I have for each pg_restore error like\nabove:\n2002-02-07 14:36:05 ERROR: copy: line 1, Bad timestamp external\nrepresentation 'Wed 06 Feb 00:00:00 2002 KRAT'\n2002-02-07 14:36:05 FATAL 1: Socket command type *** unknown\n.......\n where *** is char in (1,2,3,7,8,-,/). What this mean?\n\n I can't upgrade PostgreSQL from 7.1.3 to 7.2 since following\n incompatibles exists:\n\n 1. Function time(datetime) don't exists in 7.2?\n SELECT time('now'); processed ok in 7.1.3, but 7.2 says:\n parser: parse error at or near \"'\".\n\n 2. CREATE TABLE akka (tm TIMESTAMP WITH TIME ZONE);\n SET datestyle TO postgresql,european;\n INSERT INTO akka VALUES ('akka');\n INSERT INTO akka SELECT tm::text FROM akka; -- *\n Last SQL processed well in 7.1.3, but in 7.2 didn't:\n ERROR: Bad timestamp external representation 'Thu 07 Feb\n16:36:50.730499 2002 KRAT'\n I has tried to CREATE TABLE akka with timestamp(0) column,\nbut\nthis\ndoes not help. When I use WITHOUT TIME ZONE query(*) proceed good, but\nI can't use it since my pg_dump'ed DB saved with timezone info.\n\n Any ideas?\n\n",
"msg_date": "Mon, 18 Feb 2002 14:50:56 +0700",
"msg_from": "Ruslan A Dautkhanov <rusland@scn.ru>",
"msg_from_op": true,
"msg_subject": "date/time compatible problems in 7.2"
},
{
"msg_contents": "> I has pg_dump my DB in 7.1.3 and try ro pg_restore it in 7.2\n> version.\n> psql:/.../dbdump/.dbrestore.tmp:1624094: ERROR: copy: line 1, Bad\n> timestamp external representation 'Fri 25 Jan 23:59:59 2002 KRAT'\n> psql:/.../dbdump/.dbrestore.tmp:1624094: lost synchronization with\n> server, resetting connection\n\nNot sure why it is crashing. But \"KRAT\" is a time zone not recognized by\nthe PostgreSQL date/time parser. In fact it could be afaik (it is\nmentioned but commented-out in the parser) but it either had a screwy\ndefinition or I couldn't figure out what the definition was. It could be\nadded for 7.2.1 (and I could send a patch beforehand) if I knew the\nproper definition. Check src/backend/utils/adt/datetime.c and look for\n\"krat\".\n\n> 1. Function time(datetime) don't exists in 7.2?\n> SELECT time('now'); processed ok in 7.1.3, but 7.2 says:\n> parser: parse error at or near \"'\".\n\nRight. 7.2 implements the SQL99 feature of time precision, so \"time()\"\nnow indicates a data type, not a function call. Same for \"timestamp()\".\n\nselect time 'now'\nor\nselect cast('now' as time)\n\nis the preferred syntax for your use case anyway.\n\n> 2. CREATE TABLE akka (tm TIMESTAMP WITH TIME ZONE);\n> SET datestyle TO postgresql,european;\n> INSERT INTO akka VALUES ('akka');\n> INSERT INTO akka SELECT tm::text FROM akka; -- *\n> Last SQL processed well in 7.1.3, but in 7.2 didn't:\n> ERROR: Bad timestamp external representation 'Thu 07 Feb\n> 16:36:50.730499 2002 KRAT'\n\nAh! 7.1 and earlier was forgiving of junk strings in date/time values,\nand just ignored them on input (this was for historical reasons only,\ndating back to at least Postgres95 and probably earlier). But that would\nopen us up to unintended data if, for example, someone mistyped a time\nzone field which would then be ignored as junk. So junk is no longer\nignored except in a few specific cases. I believe that the docs cover\nthe parsing rules, including the changes for 7.2.\n\nI'm a little suprised that input completely devoid of information as in\nexample (2) above was actually accepted by 7.1. In fact it isn't:\n\nlockhart=# CREATE TABLE akka (tm TIMESTAMP WITH TIME ZONE);\nCREATE\nlockhart=# INSERT INTO akka VALUES ('akka');\nERROR: Bad timestamp external representation 'akka'\nlockhart=# select version();\n version \n-------------------------------------------------------------\n PostgreSQL 7.1.2 on i686-pc-linux-gnu, compiled by GCC 2.96\n\nBut if there is some valid info in the input then it was accepted prior\nto 7.2:\n\nlockhart=# INSERT INTO akka VALUES ('now akka');\nINSERT 26953 1\n\nhth\n\n - Thomas\n",
"msg_date": "Wed, 20 Feb 2002 18:23:53 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: date/time compatible problems in 7.2"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> psql:/.../dbdump/.dbrestore.tmp:1624094: ERROR: copy: line 1, Bad\n>> timestamp external representation 'Fri 25 Jan 23:59:59 2002 KRAT'\n>> psql:/.../dbdump/.dbrestore.tmp:1624094: lost synchronization with\n>> server, resetting connection\n\n> Not sure why it is crashing. But \"KRAT\" is a time zone not recognized by\n> the PostgreSQL date/time parser.\n\nThe \"crash\" is totally expected behavior after any error during a COPY\nIN. There isn't any other way to recover except to reset the\nconnection. Yes, this sucks, it's broken, etc, but there's no way to\nfix it except to redesign the frontend/backend COPY protocol :-(\n\nTrust me, this *will* get changed next time we have occasion to make\nincompatible changes in the FE/BE protocol. But I'm not sure that it's\na sufficient reason to create a protocol incompatibility all by itself.\n\nAs to the specific changes in datetime datatype behavior that cause\nthe error report, I bow to Thomas' superior knowledge...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Feb 2002 22:39:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: date/time compatible problems in 7.2 "
},
{
"msg_contents": "(back on list)\n\n> > Not sure why it is crashing. But \"KRAT\" is a time zone not recognized by\n> > the PostgreSQL date/time parser. In fact it could be afaik (it is\n> > mentioned but commented-out in the parser) but it either had a screwy\n> > definition or I couldn't figure out what the definition was. It could be\n> > added for 7.2.1 (and I could send a patch beforehand) if I knew the\n> > proper definition. Check src/backend/utils/adt/datetime.c and look for\n> > \"krat\".\n> KRAT,KRAST is timezone code generated by FreeBSD automatically.\n> You can check up /usr/share/zoneinfo - it have all timezones.\n> You can see timezones KRAT,KRAST in file\n> /usr/share/zoneinfo/Asia/Krasnoyarsk.\n\nNope. You will have to *please* give me more details. On my Linux\n(Mandrake) systems the zoneinfo data is included in the glibc package,\nand the Asia/Krasnoyarsk entries refer to \"Krasnoyarsk\" not to \"KRAT\" or\nany other abbreviation. They also seem to be empty of any other useful\ninformation. I'm not sure where I got the original reference to \"krat\"\nto include as a placeholder in the code.\n\n> I already break idea to pg_dump in 7.1.3 and pg_restore in 7.2 and\n> tried to remove ' KRAT' substring from all my *.dat files, created by\n> pg_dump and change schema to fields without timezone. After I tried\n> pg_restore only data from dbdump-file, but pg_restore says, that\n> can't initialize header from TOC-file, but I not even touched it.\n> TOC - is only one binary file in dbdump-file. I think that it also have\n> smth like CRC code about all other files, and this is reason why they\n> say that can't initialize TOC-file?\n\nNot sure.\n\n> How to patch datetime.c to 7.2 permit my 'KRAT' timezone?\n\nLook in src/backend/utils/adt/datetime.c and search for \"krat\". Add a\nline outside of the #if 0 block which looks like the other enabled time\nzones, including your time zone offset in *minutes* from UTC. Recompile\nand reinstall and you should be ready to go. initdb not required.\n\nSend me details on the krat time zone and another zone you see disabled\nin datetime.c and it will be in 7.2.1...\n\n - Thomas\n",
"msg_date": "Thu, 21 Feb 2002 14:30:37 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: date/time compatible problems in 7.2"
},
{
"msg_contents": "Hi Thomas,\n\n>\n> > > Not sure why it is crashing. But \"KRAT\" is a time zone not recognized by\n> > > the PostgreSQL date/time parser. In fact it could be afaik (it is\n> > > mentioned but commented-out in the parser) but it either had a screwy\n> > > definition or I couldn't figure out what the definition was. It could be\n> > > added for 7.2.1 (and I could send a patch beforehand) if I knew the\n> > > proper definition. Check src/backend/utils/adt/datetime.c and look for\n> > > \"krat\".\n> > KRAT,KRAST is timezone code generated by FreeBSD automatically.\n> > You can check up /usr/share/zoneinfo - it have all timezones.\n> > You can see timezones KRAT,KRAST in file\n> > /usr/share/zoneinfo/Asia/Krasnoyarsk.\n>\n> Nope. You will have to *please* give me more details. On my Linux\n> (Mandrake) systems the zoneinfo data is included in the glibc package,\n> and the Asia/Krasnoyarsk entries refer to \"Krasnoyarsk\" not to \"KRAT\" or\n> any other abbreviation. They also seem to be empty of any other useful\n> information. I'm not sure where I got the original reference to \"krat\"\n> to include as a placeholder in the code.\n\n Check out, please http://www.weltzeituhr.com/laender/zeitzonen_e.shtml.\n KRAT figurate in this list as Krasnoyarsk time, and KRAST as Krasnoyarsk\n Summertime.\n\n You can try also http://www.worldtimezone.com/wtz-names/timezonenames.html\n or http://www.htmlcompendium.org/reference-notes/7timzone.htm\n or (binary zoneinfo files)\nhttp://lrp1.steinkuehler.net/files/kernels/zoneinfo/ .\n\n\n Thanks,\n Ruslan A Dautkhanov",
"msg_date": "Fri, 22 Feb 2002 10:01:17 +0700",
"msg_from": "Ruslan A Dautkhanov <rusland@scn.ru>",
"msg_from_op": true,
"msg_subject": "Re: date/time compatible problems in 7.2"
}
] |
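Following Thomas's recipe, a hypothetical pair of entries for the timezone table in src/backend/utils/adt/datetime.c; the token types and value units are assumptions and must be checked against the neighboring entries (Thomas says offsets are in minutes from UTC, and Krasnoyarsk was UTC+7, UTC+8 in summer):

    /* Hypothetical additions; keep them outside the "#if 0" block and
     * in order with the rest of the table.  The offsets below assume
     * the table stores minutes from UTC, per the note above. */
    {"krast", DTZ, 8 * 60},     /* Krasnoyarsk Summer Time */
    {"krat", TZ, 7 * 60},       /* Krasnoyarsk Time */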
[
{
"msg_contents": "Hi everyone,\n\nI'm trying to build the PostgreSQL 7.2 RPMs on Mandrake 8.0 (and then on\n8.1, which isn't yet installed), from Lamar's source RPM.\n\nHave disabled kerberos (doesn't seem to be available for Mandrake), and\nchanged the spec file to accept gettext 0.10.35 which comes with\nMandrake 8.0 (by default, the spec file looks for gettext 0.10.36 or\ngreater).\n\nDid a grep for \"Xlib\" on the config.cache, config.log, and config.status\nfiles in the failed /usr/src/RPM/BUILD/postgresql-7.2 directory, but\ndidn't turn anything up.\n\nAlso did a find for Xlib.h, but it's not on the system at all. Anyone\nknow if I'm missing some X-Windows related Mandrake packages (X runs\nfine, but I could missing a -devel package).\n\nDon't really know where to start looking for getting this solved, but\nsure would appreciate some advice or direction.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n-------- Original Message --------\nSubject: Re: Errors with building PG 7.2 on Mandrake 8.0\nDate: Sun, 17 Feb 2002 23:32:07 -0500\nFrom: Lamar Owen <lamar.owen@wgcr.org>\nTo: Justin Clift <justin@postgresql.org>\nReferences: <3C707F5B.D69FF208@postgresql.org>\n\nOn Sunday 17 February 2002 11:13 pm, Justin Clift wrote:\n> PG 7.2 is still not building. :(\n> -I/usr/X11R6/include -c -o pgtkAppInit.o pgtkAppInit.c\n> In file included from pgtkAppInit.c:15:\n> /usr/include/tk.h:83:29: X11/Xlib.h: No such file or directory\n\n> Any ideas?\n\nYeah -- find Xlib.h. Somehow configure isn't finding it -- a message\nfor \npgsql-hackers. Lessee, methinks config.cache and config.log would be\nuseful \nthings to have on hand.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 18 Feb 2002 20:44:37 +1100",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "[Fwd: Re: Errors with building PG 7.2 on Mandrake 8.0]"
},
{
"msg_contents": "Justin Clift writes:\n > Also did a find for Xlib.h, but it's not on the system at all. Anyone\n > know if I'm missing some X-Windows related Mandrake packages (X runs\n > fine, but I could missing a -devel package).\n\nWell on Redhat Xlib.h is provided by XFree86-devel, I imagine Mandrake\nwill be the same.\n\nLee.\n",
"msg_date": "Mon, 18 Feb 2002 11:10:24 +0000",
"msg_from": "Lee Kindness <lkindness@csl.co.uk>",
"msg_from_op": false,
"msg_subject": "[Fwd: Re: Errors with building PG 7.2 on Mandrake 8.0]"
},
{
"msg_contents": "\n> I'm trying to build the PostgreSQL 7.2 RPMs on Mandrake 8.0 (and then on\n> 8.1, which isn't yet installed), from Lamar's source RPM.\n>\n> Have disabled kerberos (doesn't seem to be available for Mandrake), and\n> changed the spec file to accept gettext 0.10.35 which comes with\n> Mandrake 8.0 (by default, the spec file looks for gettext 0.10.36 or\n> greater).\n>\n> Did a grep for \"Xlib\" on the config.cache, config.log, and config.status\n> files in the failed /usr/src/RPM/BUILD/postgresql-7.2 directory, but\n> didn't turn anything up.\n>\n\nThis error seems to be related to the Tk client (pgtksh) build. Even\ndisabling Tk when rebuilding from\nsource causes this. I had this problem yesterday on a RedHat 7.2 (brand\nnew) installation. This was\na \"server\" install (upgrade to be more specific) and by default, nothing\nhaving to do with X is installed.\n\nIn a somewhat related issue, I can't find my \"Maximum RPM book.\" How can I\nrestart the --rebuild without\nre-untarring (<-- New Word?) the source and therefore triggering a complete\nrecompile? I want to pickup with\nthe make after getting a copy of Xlib.h but I want the RPM sequence to\nfinish (i.e., build the RPMS).\n\nlen morgan\n\n",
"msg_date": "Mon, 18 Feb 2002 07:09:57 -0600",
"msg_from": "\"Len Morgan\" <len-morgan@kttk.net>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: Re: Errors with building PG 7.2 on Mandrake 8.0]"
},
{
"msg_contents": "\"Len Morgan\" <len-morgan@kttk.net> writes:\n\n> > I'm trying to build the PostgreSQL 7.2 RPMs on Mandrake 8.0 (and then on\n> > 8.1, which isn't yet installed), from Lamar's source RPM.\n> >\n> > Have disabled kerberos (doesn't seem to be available for Mandrake), and\n> > changed the spec file to accept gettext 0.10.35 which comes with\n> > Mandrake 8.0 (by default, the spec file looks for gettext 0.10.36 or\n> > greater).\n> >\n> > Did a grep for \"Xlib\" on the config.cache, config.log, and config.status\n> > files in the failed /usr/src/RPM/BUILD/postgresql-7.2 directory, but\n> > didn't turn anything up.\n> >\n> \n> This error seems to be related to the Tk client (pgtksh) build. Even\n> disabling Tk when rebuilding from\n> source causes this. I had this problem yesterday on a RedHat 7.2 (brand\n> new) installation. This was\n> a \"server\" install (upgrade to be more specific) and by default, nothing\n> having to do with X is installed.\n> \n> In a somewhat related issue, I can't find my \"Maximum RPM book.\" How can I\n> restart the --rebuild without\n> re-untarring (<-- New Word?) the source and therefore triggering a complete\n> recompile?\n\nTry \"rpm -bc --short-circuit\"?\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n",
"msg_date": "18 Feb 2002 11:43:55 -0500",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: Re: Errors with building PG 7.2 on Mandrake 8.0]"
},
{
"msg_contents": "Hi everyone,\n\nIt's compiled fine now, after XFree86-devel was added to the system.\n\nNow it's complaining the PostgreSQL python module needs \"mx\" though, in\norder to install. Rats.\n\nWe're further along than before, but still no cigar.\n\n:-/\n\nRegards and best wishes,\n\nJustin Clift\n\n\nTrond Eivind Glomsr�d wrote:\n> \n> \"Len Morgan\" <len-morgan@kttk.net> writes:\n> \n> > > I'm trying to build the PostgreSQL 7.2 RPMs on Mandrake 8.0 (and then on\n> > > 8.1, which isn't yet installed), from Lamar's source RPM.\n> > >\n> > > Have disabled kerberos (doesn't seem to be available for Mandrake), and\n> > > changed the spec file to accept gettext 0.10.35 which comes with\n> > > Mandrake 8.0 (by default, the spec file looks for gettext 0.10.36 or\n> > > greater).\n> > >\n> > > Did a grep for \"Xlib\" on the config.cache, config.log, and config.status\n> > > files in the failed /usr/src/RPM/BUILD/postgresql-7.2 directory, but\n> > > didn't turn anything up.\n> > >\n> >\n> > This error seems to be related to the Tk client (pgtksh) build. Even\n> > disabling Tk when rebuilding from\n> > source causes this. I had this problem yesterday on a RedHat 7.2 (brand\n> > new) installation. This was\n> > a \"server\" install (upgrade to be more specific) and by default, nothing\n> > having to do with X is installed.\n> >\n> > In a somewhat related issue, I can't find my \"Maximum RPM book.\" How can I\n> > restart the --rebuild without\n> > re-untarring (<-- New Word?) the source and therefore triggering a complete\n> > recompile?\n> \n> Try \"rpm -bc --short-circuit\"?\n> \n> --\n> Trond Eivind Glomsr�d\n> Red Hat, Inc.\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Tue, 19 Feb 2002 06:19:39 +1100",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: [Fwd: Re: Errors with building PG 7.2 on Mandrake 8.0]"
}
] |
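A consolidated sketch of the fix that emerged in this thread, assuming Mandrake's urpmi tool and RPM layout; the --short-circuit trick comes from Trond's message:

    # Install the X11 headers that /usr/include/tk.h pulls in:
    urpmi XFree86-devel

    # Resume the RPM build at the compile stage instead of starting over
    # (spec file name and path are assumptions for Mandrake):
    rpm -bc --short-circuit /usr/src/RPM/SPECS/postgresql.spec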