[ { "msg_contents": "I've been thinking about how to avoid performance degradation in\nfunction and operator lookup due to the addition of namespaces.\nProbing the syscaches individually for each namespace on the\nsearch path seems like a loser, mainly because a separate indexscan\nis required to load each cache entry; even though repeated searches\nmight be fairly fast, that first one is going to be a killer.\n\nIt occurs to me though that with some improvement in the syscache\nfacility, we could keep the speed similar or even make it faster.\nThe idea is to make the syscaches able to cache the result of searches\nfor partial keys; for example, a syscache whose complete key is\n<proname, pronargs, proargtypes, pronamespace> could be asked for a\nlist of all tuples matching a particular <proname, pronargs> pair.\nThis could actually *eliminate* indexscans that occur now --- for\nexample, the search needed to resolve an ambiguous function call\ncould work on the result list of such a cache lookup, meaning that\nrepeated searches would use syscache entries instead of having to\ndo an indexscan every time.\n\nHere are my thoughts about how to do this. Any comments or better\nideas?\n\nAPI: the call SearchSysCacheList is similar to SearchSysCache but has\nan additional parameter saying how many key columns are actually being\nspecified. It returns a struct that includes a List of HeapTuple\npointers. Each such HeapTuple is a regular syscache entry. The call\nincrements the reference count of each entry in the list, so that they\nwon't disappear while being examined. The caller must call\nReleaseSysCacheList to decrement all these reference counts again when\ndone examining the list.\n\nInternals: each syscache will have a list of structs describing\ncurrently cached list-searches; this will include the search keys and\na list of member tuples, all of which are in the syscache. The member\ntuples will also have back-pointers to the list. 
Since we allow up to\nfour key columns for a syscache, at most three such back-pointers could\nbe needed in each cache entry: a tuple having key <a,b,c,d> could only\nbe a member of lists for partial search keys <a>, <a,b>, <a,b,c>.\nIn practice I think it'll be easier to have only one back-link, allowing\na given cache row to be a member of at most one list; if the same tuple\nis ever searched for using different numbers of key columns (which seems\nunlikely anyway) then we'd end up making multiple cache entries for it.\n\nIf no existing list entry matches a SearchSysCacheList request, the\nrequired data can be fetched in a single indexscan operation, so this\nshould be only marginally more expensive than loading a single cache\nentry using plain SearchSysCache.\n\nAll lists in a syscache will be invalidated whenever we receive\nnotification of *any* update to the underlying table; this seems a lot\neasier than trying to detect whether a given update has caused any\naddition or deletion in a cached list. (Note that this wouldn't work at\nall if we hadn't recently modified the cache inval logic to notify about\nboth insertions and updates/deletions in system catalogs. But we did.)\nThe list structs will have reference counts just like individual cache\nelements, so that we don't delete a list that's currently in use by some\ncaller. However, we will not provide LRU aging logic for lists; rather,\na list will drop out of cache whenever any one of its member tuples\ndoes.\n\nThis should work pretty well for searches in system catalogs that don't\nchange much; pg_proc and pg_operator probably don't change much in most\ndatabases. 
I am less sure whether it will be helpful for searches in\npg_class, which changes rather often if you do any temp table operations.\nMight be better to stick with retail probes into the cache for pg_class\nentries.\n\nAny comments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Apr 2002 21:13:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Sketch for nonunique searches in syscaches" } ]
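The list-search design sketched in the thread above can be illustrated with a small toy model. The code below is purely illustrative and invented here (names like `ToySysCache` and `search_list` are not PostgreSQL identifiers; the real mechanism would be C code in the backend, with HeapTuples, per-entry reference counts, and shared-invalidation messages). It shows only the core idea: a lookup on a key prefix loads all matching rows in one scan, caches the whole list, and any update to the underlying table invalidates every cached list.

```python
class ToySysCache:
    """Toy model of a syscache supporting partial-key list searches.

    Invented for illustration; not the PostgreSQL implementation.
    """

    def __init__(self, rows):
        self.rows = rows     # stand-in for the underlying system catalog
        self.lists = {}      # cached list-searches: key prefix -> row list
        self.scans = 0       # counts simulated index scans

    def search_list(self, *partial_key):
        """Return all rows whose leading key columns match partial_key."""
        if partial_key in self.lists:
            return self.lists[partial_key]       # cache hit: no scan needed
        self.scans += 1                          # one scan loads the whole list
        result = [r for r in self.rows if r[:len(partial_key)] == partial_key]
        self.lists[partial_key] = result
        return result

    def invalidate(self):
        """Any update to the table invalidates *all* cached lists."""
        self.lists.clear()


# Rows keyed like pg_proc: (proname, pronargs, proargtypes, pronamespace).
cache = ToySysCache([
    ("length", 1, ("text",), "pg_catalog"),
    ("length", 1, ("bytea",), "pg_catalog"),
    ("length", 2, ("text", "text"), "myschema"),
])

candidates = cache.search_list("length", 1)  # all 1-arg length() candidates
again = cache.search_list("length", 1)       # served from cache, no new scan
```

Resolving an ambiguous function call then becomes a walk over `candidates` instead of a fresh index scan per namespace, which is the performance point the proposal is making.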
[ { "msg_contents": "> I am brazilian and work with o database PostGreSql 7.1.3.\n\nBom dia!\n\n> I am upgrade PostGreSql 7.1.3 to 7.2.1 and find some bugs\n> insert dataype time . <snip>\n\nlockhart=# select time '030000';\nERROR: Bad time external representation '030000'\n\n> This is a bug or new pattern ?\n> The manuals show that it is possible insert with the format hhmmss.\n\nHmm. It certainly does not now work; I'm not sure exactly when it\nchanged. What *does* work at the moment is an ISO-8601 representation:\n\nlockhart=# select time 't030000';\n time \n----------\n 03:00:00\n\nwhich is not of course exactly what you want. I'll look at\nre-introducing the capability for time fields. Sorry for the\nincompatibility.\n\nIf you are building PostgreSQL from source, you might have time to put\nyour test case into the pgsql/src/test/regress/sql/time.sql test file to\nmake sure it gets covered in future releases. Send a patch to the list\nor to me directly and we'll include it.\n\nhth\n\n - Thomas\n", "msg_date": "Thu, 04 Apr 2002 18:30:38 -0800", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Datatype time PostGreSql 7.2.1" } ]
[ { "msg_contents": "PQescapebytea() is not multibyte aware and will produce bad multibyte\ncharacter sequences. Example:\n\nINSERT INTO t1(bytea_col) VALUES('characters produced by\nPQescapebytea');\nERROR: Invalid EUC_JP character sequence found (0x8950)\n\nI think 0x89 should be converted to '\\\\211' since 0x89 of 0x8950 is\nconsidered as \"non printable characters\".\n\nAny objection?\n--\nTatsuo Ishii\n", "msg_date": "Fri, 05 Apr 2002 15:24:53 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "PQescapeBytea is not multibyte aware" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> PQescapebytea() is not multibyte aware and will produce bad multibyte\n> character sequences. Example:\n> I think 0x89 should be converted to '\\\\211' since 0x89 of 0x8950 is\n> considered as \"non printable characters\".\n\nHmm, so essentially we'd have to convert all codes >= 0x80 to prevent\nthem from being mistaken for parts of multibyte sequences? Ugh, but\nyou're probably right. It looks to me like byteaout does the reverse\nalready.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Apr 2002 10:18:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PQescapeBytea is not multibyte aware " }, { "msg_contents": "Tom Lane wrote:\n> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> \n>>PQescapebytea() is not multibyte aware and will produce bad multibyte\n>>character sequences. Example:\n>>I think 0x89 should be converted to '\\\\211' since 0x89 of 0x8950 is\n>>considered as \"non printable characters\".\n> \n> \n> Hmm, so essentially we'd have to convert all codes >= 0x80 to prevent\n> them from being mistaken for parts of multibyte sequences? Ugh, but\n> you're probably right. It looks to me like byteaout does the reverse\n> already.\n> \n\nBut the error comes from pg_verifymbstr. 
Since bytea has no encoding \n(it's just an array of bytes afterall), why does pg_verifymbstr get \napplied at all to bytea data?\n\npg_verifymbstr is called by textin, bpcharin, and varcharin. Would it \nhelp to rewrite this as:\n\nINSERT INTO t1(bytea_col) VALUES('characters produced by\nPQescapebytea'::bytea);\n?\n\nJoe\n\n\n\n", "msg_date": "Fri, 05 Apr 2002 08:16:01 -0800", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: PQescapeBytea is not multibyte aware" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> But the error comes from pg_verifymbstr. Since bytea has no encoding \n> (it's just an array of bytes afterall), why does pg_verifymbstr get \n> applied at all to bytea data?\n\nBecause textin() is used for the initial conversion to an \"unknown\"\nconstant --- see make_const() in parse_node.c.\n\n> pg_verifymbstr is called by textin, bpcharin, and varcharin. Would it \n> help to rewrite this as:\n\n> INSERT INTO t1(bytea_col) VALUES('characters produced by\n> PQescapebytea'::bytea);\n\nProbably that would cause the error to disappear, but it's hardly a\ndesirable answer.\n\nI wonder whether this says that TEXT is not a good implementation of\ntype UNKNOWN. That choice was made on the assumption that TEXT would\nfaithfully preserve the contents of a C string ... but it seems that in\nthe multibyte world it ain't so. It would not be a huge amount of work\nto write a couple more I/O routines and give UNKNOWN its own I/O\nbehavior.\n\nOTOH, I was surprised to read your message because I had assumed the\ndamage was being done much further upstream, viz during collection of\nthe query string by pq_getstr(). 
Do we need to think twice about that\nprocessing, as well?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Apr 2002 11:32:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PQescapeBytea is not multibyte aware " }, { "msg_contents": "Tom Lane wrote:\n>>INSERT INTO t1(bytea_col) VALUES('characters produced by\n>>PQescapebytea'::bytea);\n> \n> \n> Probably that would cause the error to disappear, but it's hardly a\n> desirable answer.\n> \n> I wonder whether this says that TEXT is not a good implementation of\n> type UNKNOWN. That choice was made on the assumption that TEXT would\n> faithfully preserve the contents of a C string ... but it seems that in\n> the multibyte world it ain't so. It would not be a huge amount of work\n> to write a couple more I/O routines and give UNKNOWN its own I/O\n> behavior.\n\n\nI could take a look at this. Any guidance other than \"faithfully \npreserving the contents of a C string\"?\n\n> \n> OTOH, I was surprised to read your message because I had assumed the\n> damage was being done much further upstream, viz during collection of\n> the query string by pq_getstr(). Do we need to think twice about that\n> processing, as well?\n\nI'll take a look at this as well.\n\nJoe\n\n\n\n", "msg_date": "Fri, 05 Apr 2002 09:21:42 -0800", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: PQescapeBytea is not multibyte aware" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> I could take a look at this. Any guidance other than \"faithfully \n> preserving the contents of a C string\"?\n\nTake textin/textout, remove multibyte awareness? 
Actually the hard\npart is to figure out which of the existing hardwired calls of textin\nand textout would need to be replaced by calls to unknownin/unknownout.\nI think the assumption UNKNOWN == TEXT has crept into a fair number of\nplaces by now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Apr 2002 13:07:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PQescapeBytea is not multibyte aware " }, { "msg_contents": "Tom Lane wrote:\n> \n> OTOH, I was surprised to read your message because I had assumed the\n> damage was being done much further upstream, viz during collection of\n> the query string by pq_getstr(). Do we need to think twice about that\n> processing, as well?\n> \n\nI just looked in pq_getstr() I see:\n\n#ifdef MULTIBYTE\np = (char *) pg_client_to_server((unsigned char *) s->data, s->len);\nif (p != s->data) /* actual conversion has been done? */\n\n\nand in pg_client_to_server I see:\n\nif (ClientEncoding->encoding == DatabaseEncoding->encoding)\n\treturn s;\n\n\nSo I'm guessing that in Tatsuo's case, both client and database encoding \nare the same, and therefore the string was passed as-is downstream. I \nthink you're correct that in a client/database encoding mismatch \nscenario, there would be bigger problems. Thoughts on this?\n\nJoe\n\n\n\n", "msg_date": "Fri, 05 Apr 2002 10:33:32 -0800", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: PQescapeBytea is not multibyte aware" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> I think you're correct that in a client/database encoding mismatch \n> scenario, there would be bigger problems. Thoughts on this?\n\nThis scenario is probably why Tatsuo wants PQescapeBytea to octalize\neverything with the high bit set; I'm not sure there's any lesser way\nout. 
Nonetheless, if UNKNOWN conversion introduces additional failures\nthen it makes sense to fix that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Apr 2002 13:40:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PQescapeBytea is not multibyte aware " }, { "msg_contents": "Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> \n>>I think you're correct that in a client/database encoding mismatch \n>>scenario, there would be bigger problems. Thoughts on this?\n> \n> \n> This scenario is probably why Tatsuo wants PQescapeBytea to octalize\n> everything with the high bit set; I'm not sure there's any lesser way\n\nYuck! At that point you're no better off than converting to hex (and \nworse off than converting to base64) for storage.\n\nSQL99 actually defines BLOB as a binary string literal comprised of an \neven number of hexadecimal digits, in single quotes, preceded by \"X\", \ne.g. X'1a43fe'. Should we be looking at implementing the standard \ninstead of, or in addition to, octalizing? Maybe it is possible to do \nthis by creating a new datatype, BLOB, which uses new IN/OUT functions, \nbut otherwise uses the various bytea functions?\n\n\n> out. Nonetheless, if UNKNOWN conversion introduces additional failures\n> then it makes sense to fix that.\n\nI'll follow up on this then.\n\nJoe\n\n\n", "msg_date": "Fri, 05 Apr 2002 13:53:47 -0800", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: PQescapeBytea is not multibyte aware" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n>> This scenario is probably why Tatsuo wants PQescapeBytea to octalize\n>> everything with the high bit set; I'm not sure there's any lesser way\n\n> Yuck! 
At that point you're no better off than converting to hex (and \n> worse off than converting to base64) for storage.\n\nNo; the *storage* is still compact, it's just the I/O representation\nthat's not.\n\n> SQL99 actually defines BLOB as a binary string literal comprised of an \n> even number of hexadecimal digits, in single quotes, preceded by \"X\", \n> e.g. X'1a43fe'. Should we be looking at implementing the standard \n> instead of, or in addition to, octalizing?\n\nPerhaps we should cause the system to regard hex-strings as literals of\ntype bytea. Right now I think they're taken to be integer constants,\nwhich is clearly not per spec.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Apr 2002 17:10:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PQescapeBytea is not multibyte aware " }, { "msg_contents": "Tom Lane wrote:\n >> Yuck! At that point you're no better off than converting to hex\n >> (and worse off than converting to base64) for storage.\n >\n >\n > No; the *storage* is still compact, it's just the I/O representation\n > that's not.\n\nYeah, I realized that after I pushed send ;)\n\nBut still, doesn't that mean roughly twice as much memory usage for each\ncopy of the string? And I seem to remember Jan saying that each datum\nwinds up having 4 copies in memory. It ends up impacting the practical\nlength limit for a bytea value.\n\n >\n >\n >> SQL99 actually defines BLOB as a binary string literal comprised\n >> of an even number of hexadecimal digits, in single quotes,\n >> preceded by \"X\", e.g. X'1a43fe'. Should we be looking at\n >> implementing the standard instead of, or in addition to,\n >> octalizing?\n >\n >\n > Perhaps we should cause the system to regard hex-strings as literals\n > of type bytea. Right now I think they're taken to be integer\n > constants, which is clearly not per spec.\n\nWow. 
I didn't realize this was possible:\n\ntest=# select X'ffff';\n ?column?\n----------\n 65535\n(1 row)\n\nThis does clearly conflict with the spec, but what about backward\ncompatibility? Do you think many people use this capability?\n\n\nJoe\n\n", "msg_date": "Fri, 05 Apr 2002 14:58:41 -0800", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: PQescapeBytea is not multibyte aware" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> But still, doesn't that mean roughly twice as much memory usage for each\n> copy of the string? And I seem to remember Jan saying that each datum\n> winds up having 4 copies in memory. It ends up impacting the practical\n> length limit for a bytea value.\n\nWell, once the data actually reaches Datum form it'll be in internal\nrepresentation, hence compact. I'm not sure how many copies the parser\nwill make in the process of casting to UNKNOWN and then to bytea, but\nI'm not terribly concerned by the above argument.\n\n> Wow. I didn't realize this was possible:\n\n> test=# select X'ffff';\n> ?column?\n> ----------\n> 65535\n> (1 row)\n\n> This does clearly conflict with the spec, but what about backward\n> compatibility? Do you think many people use this capability?\n\nNo idea. I don't think it's documented anywhere, though...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Apr 2002 18:25:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PQescapeBytea is not multibyte aware " }, { "msg_contents": "> Hmm, so essentially we'd have to convert all codes >= 0x80 to prevent\n> them from being mistaken for parts of multibyte sequences?\n\nYes.\n\n> Ugh, but\n> you're probably right. It looks to me like byteaout does the reverse\n> already.\n\nAs for the new UNKNOWN data type, that seems a good thing for\nme. However, I think more aggressive soultion would be having an\nencoding info in the text data type itself. 
This would also open the\nway to implement SQL99's CREATE CHARACTER SET stuff. I have been\nthinking about this for a while and want to make an RFC in the future (I\nneed to rethink my idea to adopt the SCHEMA you introduced).\n\nBTW, for the 7.2.x tree we need a solution with lesser impact. \nFor this purpose, I would like to change PQescapeBytea as I stated in\nthe previous mail. Objection?\n--\nTatsuo Ishii\n", "msg_date": "Sat, 06 Apr 2002 09:43:08 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: PQescapeBytea is not multibyte aware " }, { "msg_contents": "Tatsuo Ishii wrote:\n> BTW, for the 7.2.x tree we need a solution with lesser impact. \n> For this purpose, I would like to change PQescapeBytea as I stated in\n> the previous mail. Objection?\n> --\n> Tatsuo Ishii\n\nNo objection here, but can we wrap the change in #ifdef MULTIBYTE so \nthere's no effect for people who don't use MULTIBYTE?\n\nJoe\n\n\n", "msg_date": "Sun, 07 Apr 2002 09:38:41 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: PQescapeBytea is not multibyte aware" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> No objection here, but can we wrap the change in #ifdef MULTIBYTE so \n> there's no effect for people who don't use MULTIBYTE?\n\nThat opens up the standard set of issues about \"what if your server is\nMULTIBYTE but your libpq is not?\" It seems risky to me.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 07 Apr 2002 13:10:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PQescapeBytea is not multibyte aware " }, { "msg_contents": "Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> \n>>I think you're correct that in a client/database encoding mismatch \n>>scenario, there would be bigger problems. 
Thoughts on this?\n> \n> \n> This scenario is probably why Tatsuo wants PQescapeBytea to octalize\n> everything with the high bit set; I'm not sure there's any lesser way\n> out. Nonetheless, if UNKNOWN conversion introduces additional failures\n> then it makes sense to fix that.\n> \n> \t\t\tregards, tom lane\n> \n\nHere's a patch to add unknownin/unknownout support. I also poked around \nlooking for places that assume UNKNOWN == TEXT. One of those was the \n\"SET\" type in pg_type.h, which was using textin/textout. This one I took \ncare of in this patch. The other suspicious place was in \nstring_to_dataum (which is defined in both selfuncs.c and indxpath.c). I \nwasn't too sure about those, so I left them be.\n\nRegression tests all pass with the exception of horology, which also \nfails on CVS tip. It looks like that is a daylight savings time issue \nthough.\n\nAlso as a side note, I can't get make check to get past initdb if I \nconfigure with --enable-multibyte on CVS tip. Is there a known problem \nor am I just being clueless . . .wait, let's qualify that -- am I being \nclueless on this one issue? ;-)\n\nJoe", "msg_date": "Sun, 07 Apr 2002 16:18:57 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "unknownin/out patch (was [HACKERS] PQescapeBytea is not multibyte\n\taware)" }, { "msg_contents": "Joe Conway wrote:\n> Here's a patch to add unknownin/unknownout support. I also poked around \n> looking for places that assume UNKNOWN == TEXT. One of those was the \n> \"SET\" type in pg_type.h, which was using textin/textout. This one I took \n> care of in this patch. The other suspicious place was in \n> string_to_dataum (which is defined in both selfuncs.c and indxpath.c). I \n> wasn't too sure about those, so I left them be.\n> \n\nI found three other suspicious spots in the source, where UNKNOWN == \nTEXT is assumed. The first looks like it needs to be changed for sure, \nthe other two I'm less sure about. 
Feedback would be most appreciated \n(on this and the patch itself).\n\n(line numbers based on CVS from earlier today)\nparse_node.c \nline 428\nparse_coerce.c \nline 85\nparse_coerce.c \nline 403\n\nJoe\n\n\n", "msg_date": "Sun, 07 Apr 2002 19:43:20 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: unknownin/out patch (was [HACKERS] PQescapeBytea is" }, { "msg_contents": "> Joe Conway <mail@joeconway.com> writes:\n> > No objection here, but can we wrap the change in #ifdef MULTIBYTE so \n> > there's no effect for people who don't use MULTIBYTE?\n> \n> That opens up the standard set of issues about \"what if your server is\n> MULTIBYTE but your libpq is not?\" It seems risky to me.\n\nI have committed changes to the current source (without MULTIBYTE\nifdefs). Will change the 7.2-stable tree soon.\n\nI also added some careful handling of memory allocation errors and\nchanged some questionable code using the direct ASCII value 92 instead\nof '\\\\', for example.\n--\nTatsuo Ishii\n", "msg_date": "Mon, 08 Apr 2002 12:52:02 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: PQescapeBytea is not multibyte aware " }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Regression tests all pass with the exception of horology, which also \n> fails on CVS tip. It looks like that is a daylight savings time issue \n> though.\n\nYup, ye olde DST-transition-makes-for-funny-day-length issue. This is\nmentioned in the docs at\nhttp://www.ca.postgresql.org/users-lounge/docs/7.2/postgres/regress-evaluation.html#AEN18363\nalthough I see the troublesome tests are now in horology not timestamp.\n(Docs fixed...)\n\n> Also as a side note, I can't get make check to get past initdb if I \n> configure with --enable-multibyte on CVS tip. 
Is there a known problem \n\nNews to me --- anyone else seeing that?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Apr 2002 00:40:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: unknownin/out patch (was [HACKERS] PQescapeBytea is not multibyte\n\taware)" }, { "msg_contents": "> Yup, ye olde DST-transition-makes-for-funny-day-length issue. This is\n> mentioned in the docs at\n> http://www.ca.postgresql.org/users-lounge/docs/7.2/postgres/regres\n> s-evaluation.html#AEN18363\n> although I see the troublesome tests are now in horology not timestamp.\n> (Docs fixed...)\n>\n> > Also as a side note, I can't get make check to get past initdb if I\n> > configure with --enable-multibyte on CVS tip. Is there a known problem\n>\n> News to me --- anyone else seeing that?\n\nI get initdb failures all the time when building CVS. You need to gmake\nclean to fix some things. Try doing a gmake clean && gmake check\n\nChris\n\n", "msg_date": "Mon, 8 Apr 2002 12:47:36 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: unknownin/out patch (was [HACKERS] PQescapeBytea is not multibyte\n\taware)" }, { "msg_contents": ">> Also as a side note, I can't get make check to get past initdb if I \n>> configure with --enable-multibyte on CVS tip. Is there a known problem \n\n> News to me --- anyone else seeing that?\n\nFWIW, CVS tip with --enable-multibyte builds and passes regression tests\nhere (modulo the horology thing). I concur with Chris' suggestion that\nyou may not have done a clean reconfiguration. 
If you're not using\n--enable-depend then a \"make clean\" is certainly needed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Apr 2002 01:25:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: unknownin/out patch (was [HACKERS] PQescapeBytea is not multibyte\n\taware)" }, { "msg_contents": "> >> Also as a side note, I can't get make check to get past initdb if I \n> >> configure with --enable-multibyte on CVS tip. Is there a known problem \n> \n> > News to me --- anyone else seeing that?\n> \n> FWIW, CVS tip with --enable-multibyte builds and passes regression tests\n> here (modulo the horology thing). I concur with Chris' suggestion that\n> you may not have done a clean reconfiguration. If you're not using\n> --enable-depend then a \"make clean\" is certainly needed.\n\nTry a multibyte encoding database. For example,\n\n$ createdb -E EUC_JP test\n$ psql -c 'SELECT SUBSTRING('1234567890' FROM 3)' test\n substring \n-----------\n 3456\n(1 row)\n\nApparently this is wrong.\n--\nTatsuo Ishii\n", "msg_date": "Mon, 08 Apr 2002 14:45:13 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: unknownin/out patch (was [HACKERS] PQescapeBytea is not" }, { "msg_contents": "Tom Lane wrote:\n> FWIW, CVS tip with --enable-multibyte builds and passes regression tests\n> here (modulo the horology thing). I concur with Chris' suggestion that\n> you may not have done a clean reconfiguration. 
If you're not using\n> --enable-depend then a \"make clean\" is certainly needed.\n> \n\n\n--enable-depend did the trick.\n\nPatch now passes all tests but horology with:\n\n./configure --enable-locale --enable-debug --enable-cassert \n--enable-multibyte --enable-syslog --enable-nls --enable-depend\n\nThanks!\n\nJoe\n\n", "msg_date": "Sun, 07 Apr 2002 23:06:43 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: unknownin/out patch (was [HACKERS] PQescapeBytea is" }, { "msg_contents": "Tatsuo Ishii wrote:\n> \n> \n> Try a multibyte encoding database. For example,\n> \n> $ createdb -E EUC_JP test\n> $ psql -c 'SELECT SUBSTRING('1234567890' FROM 3)' test\n> substring \n> -----------\n> 3456\n> (1 row)\n> \n> Apparently this is wrong.\n> --\n> Tatsuo Ishii\n\nThis problem exists in CVS tip *without* the unknownin/out patch:\n\n# psql -U postgres testjp\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? 
for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\ntestjp=# SELECT SUBSTRING('1234567890' FROM 3);\n substring\n-----------\n 3456\n(1 row)\n\ntestjp=# select * from pg_type where typname = 'unknown';\n typname | typnamespace | typowner | typlen | typprtlen | typbyval | \ntyptype | typisdefined | typdelim | typrelid | typelem | typinput | \ntypoutput | typreceive | typsend | typalign | typstorage | typnotnull | \ntypbasetype | typtypmod | typndims | typdefaultbin | typdefault\n---------+--------------+----------+--------+-----------+----------+---------+--------------+----------+----------+---------+----------+-----------+------------+---------+----------+------------+------------+-------------+-----------+----------+---------------+------------\n unknown | 11 | 1 | -1 | -1 | f | b \n | t | , | 0 | 0 | textin | \ntextout | textin | textout | i | p | f | \n 0 | -1 | 0 | |\n(1 row)\n\nThis is built from source with:\n#define CATALOG_VERSION_NO 200204031\n\n./configure --enable-locale --enable-debug --enable-cassert \n--enable-multibyte --enable-syslog --enable-nls --enable-depend\n\nJoe\n\n\n", "msg_date": "Mon, 08 Apr 2002 16:46:35 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: unknownin/out patch (was [HACKERS] PQescapeBytea is" }, { "msg_contents": "> Tatsuo Ishii wrote:\n> > \n> > \n> > Try a multibyte encoding database. For example,\n> > \n> > $ createdb -E EUC_JP test\n> > $ psql -c 'SELECT SUBSTRING('1234567890' FROM 3)' test\n> > substring \n> > -----------\n> > 3456\n> > (1 row)\n> > \n> > Apparently this is wrong.\n> > --\n> > Tatsuo Ishii\n> \n> This problem exists in CVS tip *without* the unknownin/out patch:\n\nSure. 
That has been broken for a while.\n--\nTatsuo Ishii\n\n", "msg_date": "Tue, 09 Apr 2002 10:23:42 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: unknownin/out patch (was [HACKERS] PQescapeBytea is" }, { "msg_contents": "> > Tatsuo Ishii wrote:\n> > > \n> > > \n> > > Try a multibyte encoding database. For example,\n> > > \n> > > $ createdb -E EUC_JP test\n> > > $ psql -c 'SELECT SUBSTRING('1234567890' FROM 3)' test\n> > > substring \n> > > -----------\n> > > 3456\n> > > (1 row)\n> > > \n> > > Apparently this is wrong.\n> > > --\n> > > Tatsuo Ishii\n> > \n> > This problem exists in CVS tip *without* the unknownin/out patch:\n> \n> Sure. That has been broken for a while.\n\nI guess this actually happened in 1.79 of varlena.c:\n\n---------------------------------------------------------------------------\nrevision 1.79\ndate: 2002/03/05 05:33:19; author: momjian; state: Exp; lines: +45 -42\nI attach a version of my toast-slicing patch, against current CVS\n(current as of a few hours ago.)\n\nThis patch:\n\n1. Adds PG_GETARG_xxx_P_SLICE() macros and associated support routines.\n\n2. Adds routines in src/backend/access/tuptoaster.c for fetching only\nnecessary chunks of a toasted value. (Modelled on latest changes to\nassume chunks are returned in order).\n\n3. Amends text_substr and bytea_substr to use new methods. It now\nhandles multibyte cases -and should still lead to a performance\nimprovement in the multibyte case where the substring is near the\nbeginning of the string.\n\n4. Added new command: ALTER TABLE tabname ALTER COLUMN colname SET\nSTORAGE {PLAIN | EXTERNAL | EXTENDED | MAIN} to parser and documented in\nalter-table.sgml. (NB I used ColId as the item type for the storage\nmode string, rather than a new production - I hope this makes sense!).\nAll this does is sets attstorage for the specified column.\n\n4. 
AlterTableAlterColumnStatistics is now AlterTableAlterColumnFlags and\nhandles both statistics and storage (it uses the subtype code to\ndistinguish). The previous version of my patch also re-arranged other\ncode in backend/commands/command.c but I have dropped that from this\npatch.(I plan to return to it separately).\n\n5. Documented new macros (and also the PG_GETARG_xxx_P_COPY macros) in\nxfunc.sgml. ref/alter_table.sgml also contains documentation for ALTER\nCOLUMN SET STORAGE.\n\nJohn Gray\n---------------------------------------------------------------------------\n", "msg_date": "Tue, 09 Apr 2002 14:08:56 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: unknownin/out patch (was [HACKERS] PQescapeBytea is" }, { "msg_contents": "Tatsuo Ishii wrote:\n >>> Tatsuo Ishii wrote:\n >>>\n >>>>\n >>>> Try a multibyte encoding database. For example,\n >>>>\n >>>> $ createdb -E EUC_JP test $ psql -c 'SELECT\n >>>> SUBSTRING('1234567890' FROM 3)' test substring ----------- 3456\n >>>>\n\n >>>> (1 row)\n >>>>\n >>>> Apparently this is wrong. -- Tatsuo Ishii\n >>>\n >>> This problem exists in CVS tip *without* the unknownin/out\n >>> patch:\n >>\n >> Sure. That has been broken for a while.\n >\n >\n > I guess this actually happened in 1.79 of varlena.c:\n >\nYes, I was just looking at that also. It doesn't consider the case of n \n= -1 for MB. See the lines:\n\n#ifdef MULTIBYTE\n eml = pg_database_encoding_max_length ();\n\n if (eml > 1)\n {\n sm = 0;\n sn = (m + n) * eml + 3;\n }\n#endif\n\nWhen n = -1 this does the wrong thing. And also a few lines later:\n\n#ifdef MULTIBYTE\n len = pg_mbstrlen_with_len (VARDATA (string), sn - 3);\n\nI think both places need to test for n = -1. 
Do you agree?\n\n\nJoe\n\n", "msg_date": "Mon, 08 Apr 2002 22:37:59 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: unknownin/out patch (was [HACKERS] PQescapeBytea is" }, { "msg_contents": "Joe Conway wrote:\n> Tatsuo Ishii wrote:\n> >>> Tatsuo Ishii wrote:\n> >>>\n> >>>>\n> >>>> Try a multibyte encoding database. For example,\n> >>>>\n> >>>> $ createdb -E EUC_JP test $ psql -c 'SELECT\n> >>>> SUBSTRING('1234567890' FROM 3)' test substring ----------- 3456\n> >>>>\n> \n> >>>> (1 row)\n> >>>>\n> >>>> Apparently this is wrong. -- Tatsuo Ishii\n> >>>\n> >>> This problem exists in CVS tip *without* the unknownin/out\n> >>> patch:\n> >>\n> >> Sure. That has been broken for a while.\n> >\n> >\n> > I guess this actually happened in 1.79 of varlena.c:\n> >\n> Yes, I was just looking at that also. It doesn't consider the case of n \n> = -1 for MB. See the lines:\n> \n> #ifdef MULTIBYTE\n> eml = pg_database_encoding_max_length ();\n> \n> if (eml > 1)\n> {\n> sm = 0;\n> sn = (m + n) * eml + 3;\n> }\n> #endif\n> \n> When n = -1 this does the wrong thing. And also a few lines later:\n> \n> #ifdef MULTIBYTE\n> len = pg_mbstrlen_with_len (VARDATA (string), sn - 3);\n> \n> I think both places need to test for n = -1. Do you agree?\n> \n> \n> Joe\n> \n\nThe attached patch should fix the bug reported by Tatsuo.\n\n# psql -U postgres testjp\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? 
for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\ntestjp=# SELECT SUBSTRING('1234567890' FROM 3);\n substring\n------------\n 34567890\n(1 row)\n\nJoe", "msg_date": "Mon, 08 Apr 2002 22:57:47 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: unknownin/out patch (was [HACKERS] PQescapeBytea is" }, { "msg_contents": "On Tue, 2002-04-09 at 06:57, Joe Conway wrote:\n[snipped]\n> > Yes, I was just looking at that also. It doesn't consider the case of n \n> > = -1 for MB. See the lines:\n> > \n> > #ifdef MULTIBYTE\n> > eml = pg_database_encoding_max_length ();\n> > \n> > if (eml > 1)\n> > {\n> > sm = 0;\n> > sn = (m + n) * eml + 3;\n> > }\n> > #endif\n> > \n> > When n = -1 this does the wrong thing. And also a few lines later:\n> > \n> > #ifdef MULTIBYTE\n> > len = pg_mbstrlen_with_len (VARDATA (string), sn - 3);\n> > \n> > I think both places need to test for n = -1. Do you agree?\n> > \n\nSorry folks! I hadn't thought through the logic of that in the n = -1 \nand multibyte case. The patch looks OK to me.\n\nJohn\n\n\n\n", "msg_date": "09 Apr 2002 10:11:02 +0100", "msg_from": "John Gray <jgray@azuli.co.uk>", "msg_from_op": false, "msg_subject": "Re: unknownin/out patch (was [HACKERS] PQescapeBytea is" }, { "msg_contents": "Tom Lane writes:\n\n> FWIW, CVS tip with --enable-multibyte builds and passes regression tests\n> here (modulo the horology thing). I concur with Chris' suggestion that\n> you may not have done a clean reconfiguration. If you're not using\n> --enable-depend then a \"make clean\" is certainly needed.\n\nMaybe we should turn on dependency tracking by default? 
This is about the\n(enough + 1)th time I'm seeing this.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 9 Apr 2002 14:13:19 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: unknownin/out patch (was [HACKERS] PQescapeBytea is" }, { "msg_contents": "Peter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > FWIW, CVS tip with --enable-multibyte builds and passes regression tests\n> > here (modulo the horology thing). I concur with Chris' suggestion that\n> > you may not have done a clean reconfiguration. If you're not using\n> > --enable-depend then a \"make clean\" is certainly needed.\n> \n> Maybe we should turn on dependency tracking by default? This is about the\n> (enough + 1)th time I'm seeing this.\n\nWhat is the downside to turning it on? I can't think of one.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 9 Apr 2002 14:23:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: unknownin/out patch (was [HACKERS] PQescapeBytea is" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Peter Eisentraut wrote:\n>> Maybe we should turn on dependency tracking by default? This is about the\n>> (enough + 1)th time I'm seeing this.\n\n> What is the downside to turning it on? I can't think of one.\n\nWell, we'll still see the same kinds of reports from developers using\nnon-GCC compilers (surely there are some) ... so enable-depend isn't\ngoing to magically make the issue go away.\n\nPersonally I tend to rebuild from \"make clean\" whenever I've done\nanything nontrivial, and certainly after a CVS sync; so I have no use\nfor enable-depend. 
But as long as I can turn it off, I don't object\nto changing the default.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Apr 2002 15:46:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: unknownin/out patch (was [HACKERS] PQescapeBytea is " }, { "msg_contents": "I'm about to commit your patches with a small fix.\n--\nTatsuo Ishii\n\nFrom: Joe Conway <mail@joeconway.com>\nSubject: Re: [PATCHES] unknownin/out patch (was [HACKERS] PQescapeBytea is\nDate: Mon, 08 Apr 2002 22:57:47 -0700\nMessage-ID: <3CB282DB.4050708@joeconway.com>\n\n> Joe Conway wrote:\n> > Tatsuo Ishii wrote:\n> > >>> Tatsuo Ishii wrote:\n> > >>>\n> > >>>>\n> > >>>> Try a multibyte encoding database. For example,\n> > >>>>\n> > >>>> $ createdb -E EUC_JP test $ psql -c 'SELECT\n> > >>>> SUBSTRING('1234567890' FROM 3)' test substring ----------- 3456\n> > >>>>\n> > \n> > >>>> (1 row)\n> > >>>>\n> > >>>> Apparently this is wrong. -- Tatsuo Ishii\n> > >>>\n> > >>> This problem exists in CVS tip *without* the unknownin/out\n> > >>> patch:\n> > >>\n> > >> Sure. That has been broken for a while.\n> > >\n> > >\n> > > I guess this actually happened in 1.79 of varlena.c:\n> > >\n> > Yes, I was just looking at that also. It doesn't consider the case of n \n> > = -1 for MB. See the lines:\n> > \n> > #ifdef MULTIBYTE\n> > eml = pg_database_encoding_max_length ();\n> > \n> > if (eml > 1)\n> > {\n> > sm = 0;\n> > sn = (m + n) * eml + 3;\n> > }\n> > #endif\n> > \n> > When n = -1 this does the wrong thing. And also a few lines later:\n> > \n> > #ifdef MULTIBYTE\n> > len = pg_mbstrlen_with_len (VARDATA (string), sn - 3);\n> > \n> > I think both places need to test for n = -1. Do you agree?\n> > \n> > \n> > Joe\n> > \n> \n> The attached patch should fix the bug reported by Tatsuo.\n> \n> # psql -U postgres testjp\n> Welcome to psql, the PostgreSQL interactive terminal.\n> \n> Type: \\copyright for distribution terms\n> \\h for help with SQL commands\n> \\? 
for help on internal slash commands\n> \\g or terminate with semicolon to execute query\n> \\q to quit\n> \n> testjp=# SELECT SUBSTRING('1234567890' FROM 3);\n> substring\n> ------------\n> 34567890\n> (1 row)\n> \n> Joe\n", "msg_date": "Mon, 15 Apr 2002 16:50:23 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] unknownin/out patch (was PQescapeBytea is" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> Tom Lane wrote:\n> > Joe Conway <mail@joeconway.com> writes:\n> > \n> >>I think you're correct that in a client/database encoding mismatch \n> >>scenario, there would be bigger problems. Thoughts on this?\n> > \n> > \n> > This scenario is probably why Tatsuo wants PQescapeBytea to octalize\n> > everything with the high bit set; I'm not sure there's any lesser way\n> > out. Nonetheless, if UNKNOWN conversion introduces additional failures\n> > then it makes sense to fix that.\n> > \n> > \t\t\tregards, tom lane\n> > \n> \n> Here's a patch to add unknownin/unknownout support. I also poked around \n> looking for places that assume UNKNOWN == TEXT. One of those was the \n> \"SET\" type in pg_type.h, which was using textin/textout. This one I took \n> care of in this patch. The other suspicious place was in \n> string_to_dataum (which is defined in both selfuncs.c and indxpath.c). I \n> wasn't too sure about those, so I left them be.\n> \n> Regression tests all pass with the exception of horology, which also \n> fails on CVS tip. It looks like that is a daylight savings time issue \n> though.\n> \n> Also as a side note, I can't get make check to get past initdb if I \n> configure with --enable-multibyte on CVS tip. 
Is there a known problem \n> or am I just being clueless . . .wait, let's qualify that -- am I being \n> clueless on this one issue? ;-)\n> \n> Joe\n\n> diff -Ncr pgsql.orig/src/backend/utils/adt/varlena.c pgsql/src/backend/utils/adt/varlena.c\n> *** pgsql.orig/src/backend/utils/adt/varlena.c\tSun Apr 7 10:21:25 2002\n> --- pgsql/src/backend/utils/adt/varlena.c\tSun Apr 7 11:44:54 2002\n> ***************\n> *** 228,233 ****\n> --- 228,273 ----\n> }\n> \n> \n> + /*\n> + *\t\tunknownin\t\t\t- converts \"...\" to internal representation\n> + */\n> + Datum\n> + unknownin(PG_FUNCTION_ARGS)\n> + {\n> + \tchar\t *inputStr = PG_GETARG_CSTRING(0);\n> + \tunknown\t *result;\n> + \tint\t\t\tlen;\n> + \n> + \tlen = strlen(inputStr) + VARHDRSZ;\n> + \n> + \tresult = (unknown *) palloc(len);\n> + \tVARATT_SIZEP(result) = len;\n> + \n> + \tmemcpy(VARDATA(result), inputStr, len - VARHDRSZ);\n> + \n> + \tPG_RETURN_UNKNOWN_P(result);\n> + }\n> + \n> + \n> + /*\n> + *\t\tunknownout\t\t\t- converts internal representation to \"...\"\n> + */\n> + Datum\n> + unknownout(PG_FUNCTION_ARGS)\n> + {\n> + \tunknown\t *t = PG_GETARG_UNKNOWN_P(0);\n> + \tint\t\t\tlen;\n> + \tchar\t *result;\n> + \n> + \tlen = VARSIZE(t) - VARHDRSZ;\n> + \tresult = (char *) palloc(len + 1);\n> + \tmemcpy(result, VARDATA(t), len);\n> + \tresult[len] = '\\0';\n> + \n> + \tPG_RETURN_CSTRING(result);\n> + }\n> + \n> + \n> /* ========== PUBLIC ROUTINES ========== */\n> \n> /*\n> diff -Ncr pgsql.orig/src/include/c.h pgsql/src/include/c.h\n> *** pgsql.orig/src/include/c.h\tSun Apr 7 10:21:29 2002\n> --- pgsql/src/include/c.h\tSun Apr 7 11:40:59 2002\n> ***************\n> *** 389,394 ****\n> --- 389,395 ----\n> */\n> typedef struct varlena bytea;\n> typedef struct varlena text;\n> + typedef struct varlena unknown;\n> typedef struct varlena BpChar;\t/* blank-padded char, ie SQL char(n) */\n> typedef struct varlena VarChar; /* var-length char, ie SQL varchar(n) */\n> \n> diff -Ncr pgsql.orig/src/include/catalog/pg_proc.h 
pgsql/src/include/catalog/pg_proc.h\n> *** pgsql.orig/src/include/catalog/pg_proc.h\tSun Apr 7 10:21:29 2002\n> --- pgsql/src/include/catalog/pg_proc.h\tSun Apr 7 11:56:09 2002\n> ***************\n> *** 235,240 ****\n> --- 235,245 ----\n> DATA(insert OID = 108 ( scalargtjoinsel PGNSP PGUID 12 f t t s 3 f 701 \"0 26 0\" 100 0 0 100 scalargtjoinsel - _null_ ));\n> DESCR(\"join selectivity of > and related operators on scalar datatypes\");\n> \n> + DATA(insert OID = 109 ( unknownin\t\t\t PGNSP PGUID 12 f t t i 1 f 705 \"0\" 100 0 0 100\tunknownin - _null_ ));\n> + DESCR(\"(internal)\");\n> + DATA(insert OID = 110 ( unknownout\t\t PGNSP PGUID 12 f t t i 1 f 23 \"0\" 100 0 0 100\tunknownout - _null_ ));\n> + DESCR(\"(internal)\");\n> + \n> DATA(insert OID = 112 ( text\t\t\t PGNSP PGUID 12 f t t i 1 f 25 \"23\" 100 0 0 100 int4_text - _null_ ));\n> DESCR(\"convert int4 to text\");\n> DATA(insert OID = 113 ( text\t\t\t PGNSP PGUID 12 f t t i 1 f 25 \"21\" 100 0 0 100 int2_text - _null_ ));\n> diff -Ncr pgsql.orig/src/include/catalog/pg_type.h pgsql/src/include/catalog/pg_type.h\n> *** pgsql.orig/src/include/catalog/pg_type.h\tSun Apr 7 10:21:29 2002\n> --- pgsql/src/include/catalog/pg_type.h\tSun Apr 7 11:57:36 2002\n> ***************\n> *** 302,308 ****\n> DESCR(\"array of INDEX_MAX_KEYS oids, used in system tables\");\n> #define OIDVECTOROID\t30\n> \n> ! DATA(insert OID = 32 (\tSET\t\t PGNSP PGUID -1 -1 f b t \\054 0 0 textin textout textin textout i p f 0 -1 0 _null_ _null_ ));\n> DESCR(\"set of tuples\");\n> \n> DATA(insert OID = 71 (\tpg_type\t\t PGNSP PGUID 4 4 t c t \\054 1247 0 int4in int4out int4in int4out i p f 0 -1 0 _null_ _null_ ));\n> --- 302,308 ----\n> DESCR(\"array of INDEX_MAX_KEYS oids, used in system tables\");\n> #define OIDVECTOROID\t30\n> \n> ! 
DATA(insert OID = 32 (\tSET\t\t PGNSP PGUID -1 -1 f b t \\054 0 0 unknownin unknownout unknownin unknownout i p f 0 -1 0 _null_ _null_ ));\n> DESCR(\"set of tuples\");\n> \n> DATA(insert OID = 71 (\tpg_type\t\t PGNSP PGUID 4 4 t c t \\054 1247 0 int4in int4out int4in int4out i p f 0 -1 0 _null_ _null_ ));\n> ***************\n> *** 366,372 ****\n> DATA(insert OID = 704 ( tinterval PGNSP PGUID 12 47 f b t \\054 0 0 tintervalin tintervalout tintervalin tintervalout i p f 0 -1 0 _null_ _null_ ));\n> DESCR(\"(abstime,abstime), time interval\");\n> #define TINTERVALOID\t704\n> ! DATA(insert OID = 705 ( unknown PGNSP PGUID -1 -1 f b t \\054 0 0 textin textout textin textout i p f 0 -1 0 _null_ _null_ ));\n> DESCR(\"\");\n> #define UNKNOWNOID\t\t705\n> \n> --- 366,372 ----\n> DATA(insert OID = 704 ( tinterval PGNSP PGUID 12 47 f b t \\054 0 0 tintervalin tintervalout tintervalin tintervalout i p f 0 -1 0 _null_ _null_ ));\n> DESCR(\"(abstime,abstime), time interval\");\n> #define TINTERVALOID\t704\n> ! 
DATA(insert OID = 705 ( unknown PGNSP PGUID -1 -1 f b t \\054 0 0 unknownin unknownout unknownin unknownout i p f 0 -1 0 _null_ _null_ ));\n> DESCR(\"\");\n> #define UNKNOWNOID\t\t705\n> \n> diff -Ncr pgsql.orig/src/include/fmgr.h pgsql/src/include/fmgr.h\n> *** pgsql.orig/src/include/fmgr.h\tSun Apr 7 10:21:29 2002\n> --- pgsql/src/include/fmgr.h\tSun Apr 7 12:11:30 2002\n> ***************\n> *** 185,190 ****\n> --- 185,191 ----\n> /* DatumGetFoo macros for varlena types will typically look like this: */\n> #define DatumGetByteaP(X)\t\t\t((bytea *) PG_DETOAST_DATUM(X))\n> #define DatumGetTextP(X)\t\t\t((text *) PG_DETOAST_DATUM(X))\n> + #define DatumGetUnknownP(X)\t\t\t((unknown *) PG_DETOAST_DATUM(X))\n> #define DatumGetBpCharP(X)\t\t\t((BpChar *) PG_DETOAST_DATUM(X))\n> #define DatumGetVarCharP(X)\t\t\t((VarChar *) PG_DETOAST_DATUM(X))\n> /* And we also offer variants that return an OK-to-write copy */\n> ***************\n> *** 200,205 ****\n> --- 201,207 ----\n> /* GETARG macros for varlena types will typically look like this: */\n> #define PG_GETARG_BYTEA_P(n)\t\tDatumGetByteaP(PG_GETARG_DATUM(n))\n> #define PG_GETARG_TEXT_P(n)\t\t\tDatumGetTextP(PG_GETARG_DATUM(n))\n> + #define PG_GETARG_UNKNOWN_P(n)\t\tDatumGetUnknownP(PG_GETARG_DATUM(n))\n> #define PG_GETARG_BPCHAR_P(n)\t\tDatumGetBpCharP(PG_GETARG_DATUM(n))\n> #define PG_GETARG_VARCHAR_P(n)\t\tDatumGetVarCharP(PG_GETARG_DATUM(n))\n> /* And we also offer variants that return an OK-to-write copy */\n> ***************\n> *** 239,244 ****\n> --- 241,247 ----\n> /* RETURN macros for other pass-by-ref types will typically look like this: */\n> #define PG_RETURN_BYTEA_P(x) PG_RETURN_POINTER(x)\n> #define PG_RETURN_TEXT_P(x) PG_RETURN_POINTER(x)\n> + #define PG_RETURN_UNKNOWN_P(x) PG_RETURN_POINTER(x)\n> #define PG_RETURN_BPCHAR_P(x) PG_RETURN_POINTER(x)\n> #define PG_RETURN_VARCHAR_P(x) PG_RETURN_POINTER(x)\n> \n> diff -Ncr pgsql.orig/src/include/utils/builtins.h pgsql/src/include/utils/builtins.h\n> *** 
pgsql.orig/src/include/utils/builtins.h\tSun Apr 7 10:21:29 2002\n> --- pgsql/src/include/utils/builtins.h\tSun Apr 7 12:26:17 2002\n> ***************\n> *** 414,419 ****\n> --- 414,422 ----\n> extern bool SplitIdentifierString(char *rawstring, char separator,\n> \t\t\t\t\t\t\t\t List **namelist);\n> \n> + extern Datum unknownin(PG_FUNCTION_ARGS);\n> + extern Datum unknownout(PG_FUNCTION_ARGS);\n> + \n> extern Datum byteain(PG_FUNCTION_ARGS);\n> extern Datum byteaout(PG_FUNCTION_ARGS);\n> extern Datum byteaoctetlen(PG_FUNCTION_ARGS);\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Apr 2002 09:30:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: unknownin/out patch (was [HACKERS] PQescapeBytea is not" }, { "msg_contents": "\nPatch applied. Thanks.\n\nCatalog version updated.\n\n---------------------------------------------------------------------------\n\n\n\nJoe Conway wrote:\n> Tom Lane wrote:\n> > Joe Conway <mail@joeconway.com> writes:\n> > \n> >>I think you're correct that in a client/database encoding mismatch \n> >>scenario, there would be bigger problems. Thoughts on this?\n> > \n> > \n> > This scenario is probably why Tatsuo wants PQescapeBytea to octalize\n> > everything with the high bit set; I'm not sure there's any lesser way\n> > out. Nonetheless, if UNKNOWN conversion introduces additional failures\n> > then it makes sense to fix that.\n> > \n> > \t\t\tregards, tom lane\n> > \n> \n> Here's a patch to add unknownin/unknownout support. I also poked around \n> looking for places that assume UNKNOWN == TEXT. 
One of those was the \n> \"SET\" type in pg_type.h, which was using textin/textout. This one I took \n> care of in this patch. The other suspicious place was in \n> string_to_dataum (which is defined in both selfuncs.c and indxpath.c). I \n> wasn't too sure about those, so I left them be.\n> \n> Regression tests all pass with the exception of horology, which also \n> fails on CVS tip. It looks like that is a daylight savings time issue \n> though.\n> \n> Also as a side note, I can't get make check to get past initdb if I \n> configure with --enable-multibyte on CVS tip. Is there a known problem \n> or am I just being clueless . . .wait, let's qualify that -- am I being \n> clueless on this one issue? ;-)\n> \n> Joe\n\n> diff -Ncr pgsql.orig/src/backend/utils/adt/varlena.c pgsql/src/backend/utils/adt/varlena.c\n> *** pgsql.orig/src/backend/utils/adt/varlena.c\tSun Apr 7 10:21:25 2002\n> --- pgsql/src/backend/utils/adt/varlena.c\tSun Apr 7 11:44:54 2002\n> ***************\n> *** 228,233 ****\n> --- 228,273 ----\n> }\n> \n> \n> + /*\n> + *\t\tunknownin\t\t\t- converts \"...\" to internal representation\n> + */\n> + Datum\n> + unknownin(PG_FUNCTION_ARGS)\n> + {\n> + \tchar\t *inputStr = PG_GETARG_CSTRING(0);\n> + \tunknown\t *result;\n> + \tint\t\t\tlen;\n> + \n> + \tlen = strlen(inputStr) + VARHDRSZ;\n> + \n> + \tresult = (unknown *) palloc(len);\n> + \tVARATT_SIZEP(result) = len;\n> + \n> + \tmemcpy(VARDATA(result), inputStr, len - VARHDRSZ);\n> + \n> + \tPG_RETURN_UNKNOWN_P(result);\n> + }\n> + \n> + \n> + /*\n> + *\t\tunknownout\t\t\t- converts internal representation to \"...\"\n> + */\n> + Datum\n> + unknownout(PG_FUNCTION_ARGS)\n> + {\n> + \tunknown\t *t = PG_GETARG_UNKNOWN_P(0);\n> + \tint\t\t\tlen;\n> + \tchar\t *result;\n> + \n> + \tlen = VARSIZE(t) - VARHDRSZ;\n> + \tresult = (char *) palloc(len + 1);\n> + \tmemcpy(result, VARDATA(t), len);\n> + \tresult[len] = '\\0';\n> + \n> + \tPG_RETURN_CSTRING(result);\n> + }\n> + \n> + \n> /* ========== PUBLIC ROUTINES 
========== */\n> \n> /*\n> diff -Ncr pgsql.orig/src/include/c.h pgsql/src/include/c.h\n> *** pgsql.orig/src/include/c.h\tSun Apr 7 10:21:29 2002\n> --- pgsql/src/include/c.h\tSun Apr 7 11:40:59 2002\n> ***************\n> *** 389,394 ****\n> --- 389,395 ----\n> */\n> typedef struct varlena bytea;\n> typedef struct varlena text;\n> + typedef struct varlena unknown;\n> typedef struct varlena BpChar;\t/* blank-padded char, ie SQL char(n) */\n> typedef struct varlena VarChar; /* var-length char, ie SQL varchar(n) */\n> \n> diff -Ncr pgsql.orig/src/include/catalog/pg_proc.h pgsql/src/include/catalog/pg_proc.h\n> *** pgsql.orig/src/include/catalog/pg_proc.h\tSun Apr 7 10:21:29 2002\n> --- pgsql/src/include/catalog/pg_proc.h\tSun Apr 7 11:56:09 2002\n> ***************\n> *** 235,240 ****\n> --- 235,245 ----\n> DATA(insert OID = 108 ( scalargtjoinsel PGNSP PGUID 12 f t t s 3 f 701 \"0 26 0\" 100 0 0 100 scalargtjoinsel - _null_ ));\n> DESCR(\"join selectivity of > and related operators on scalar datatypes\");\n> \n> + DATA(insert OID = 109 ( unknownin\t\t\t PGNSP PGUID 12 f t t i 1 f 705 \"0\" 100 0 0 100\tunknownin - _null_ ));\n> + DESCR(\"(internal)\");\n> + DATA(insert OID = 110 ( unknownout\t\t PGNSP PGUID 12 f t t i 1 f 23 \"0\" 100 0 0 100\tunknownout - _null_ ));\n> + DESCR(\"(internal)\");\n> + \n> DATA(insert OID = 112 ( text\t\t\t PGNSP PGUID 12 f t t i 1 f 25 \"23\" 100 0 0 100 int4_text - _null_ ));\n> DESCR(\"convert int4 to text\");\n> DATA(insert OID = 113 ( text\t\t\t PGNSP PGUID 12 f t t i 1 f 25 \"21\" 100 0 0 100 int2_text - _null_ ));\n> diff -Ncr pgsql.orig/src/include/catalog/pg_type.h pgsql/src/include/catalog/pg_type.h\n> *** pgsql.orig/src/include/catalog/pg_type.h\tSun Apr 7 10:21:29 2002\n> --- pgsql/src/include/catalog/pg_type.h\tSun Apr 7 11:57:36 2002\n> ***************\n> *** 302,308 ****\n> DESCR(\"array of INDEX_MAX_KEYS oids, used in system tables\");\n> #define OIDVECTOROID\t30\n> \n> ! 
DATA(insert OID = 32 (\tSET\t\t PGNSP PGUID -1 -1 f b t \\054 0 0 textin textout textin textout i p f 0 -1 0 _null_ _null_ ));\n> DESCR(\"set of tuples\");\n> \n> DATA(insert OID = 71 (\tpg_type\t\t PGNSP PGUID 4 4 t c t \\054 1247 0 int4in int4out int4in int4out i p f 0 -1 0 _null_ _null_ ));\n> --- 302,308 ----\n> DESCR(\"array of INDEX_MAX_KEYS oids, used in system tables\");\n> #define OIDVECTOROID\t30\n> \n> ! DATA(insert OID = 32 (\tSET\t\t PGNSP PGUID -1 -1 f b t \\054 0 0 unknownin unknownout unknownin unknownout i p f 0 -1 0 _null_ _null_ ));\n> DESCR(\"set of tuples\");\n> \n> DATA(insert OID = 71 (\tpg_type\t\t PGNSP PGUID 4 4 t c t \\054 1247 0 int4in int4out int4in int4out i p f 0 -1 0 _null_ _null_ ));\n> ***************\n> *** 366,372 ****\n> DATA(insert OID = 704 ( tinterval PGNSP PGUID 12 47 f b t \\054 0 0 tintervalin tintervalout tintervalin tintervalout i p f 0 -1 0 _null_ _null_ ));\n> DESCR(\"(abstime,abstime), time interval\");\n> #define TINTERVALOID\t704\n> ! DATA(insert OID = 705 ( unknown PGNSP PGUID -1 -1 f b t \\054 0 0 textin textout textin textout i p f 0 -1 0 _null_ _null_ ));\n> DESCR(\"\");\n> #define UNKNOWNOID\t\t705\n> \n> --- 366,372 ----\n> DATA(insert OID = 704 ( tinterval PGNSP PGUID 12 47 f b t \\054 0 0 tintervalin tintervalout tintervalin tintervalout i p f 0 -1 0 _null_ _null_ ));\n> DESCR(\"(abstime,abstime), time interval\");\n> #define TINTERVALOID\t704\n> ! 
DATA(insert OID = 705 ( unknown PGNSP PGUID -1 -1 f b t \\054 0 0 unknownin unknownout unknownin unknownout i p f 0 -1 0 _null_ _null_ ));\n> DESCR(\"\");\n> #define UNKNOWNOID\t\t705\n> \n> diff -Ncr pgsql.orig/src/include/fmgr.h pgsql/src/include/fmgr.h\n> *** pgsql.orig/src/include/fmgr.h\tSun Apr 7 10:21:29 2002\n> --- pgsql/src/include/fmgr.h\tSun Apr 7 12:11:30 2002\n> ***************\n> *** 185,190 ****\n> --- 185,191 ----\n> /* DatumGetFoo macros for varlena types will typically look like this: */\n> #define DatumGetByteaP(X)\t\t\t((bytea *) PG_DETOAST_DATUM(X))\n> #define DatumGetTextP(X)\t\t\t((text *) PG_DETOAST_DATUM(X))\n> + #define DatumGetUnknownP(X)\t\t\t((unknown *) PG_DETOAST_DATUM(X))\n> #define DatumGetBpCharP(X)\t\t\t((BpChar *) PG_DETOAST_DATUM(X))\n> #define DatumGetVarCharP(X)\t\t\t((VarChar *) PG_DETOAST_DATUM(X))\n> /* And we also offer variants that return an OK-to-write copy */\n> ***************\n> *** 200,205 ****\n> --- 201,207 ----\n> /* GETARG macros for varlena types will typically look like this: */\n> #define PG_GETARG_BYTEA_P(n)\t\tDatumGetByteaP(PG_GETARG_DATUM(n))\n> #define PG_GETARG_TEXT_P(n)\t\t\tDatumGetTextP(PG_GETARG_DATUM(n))\n> + #define PG_GETARG_UNKNOWN_P(n)\t\tDatumGetUnknownP(PG_GETARG_DATUM(n))\n> #define PG_GETARG_BPCHAR_P(n)\t\tDatumGetBpCharP(PG_GETARG_DATUM(n))\n> #define PG_GETARG_VARCHAR_P(n)\t\tDatumGetVarCharP(PG_GETARG_DATUM(n))\n> /* And we also offer variants that return an OK-to-write copy */\n> ***************\n> *** 239,244 ****\n> --- 241,247 ----\n> /* RETURN macros for other pass-by-ref types will typically look like this: */\n> #define PG_RETURN_BYTEA_P(x) PG_RETURN_POINTER(x)\n> #define PG_RETURN_TEXT_P(x) PG_RETURN_POINTER(x)\n> + #define PG_RETURN_UNKNOWN_P(x) PG_RETURN_POINTER(x)\n> #define PG_RETURN_BPCHAR_P(x) PG_RETURN_POINTER(x)\n> #define PG_RETURN_VARCHAR_P(x) PG_RETURN_POINTER(x)\n> \n> diff -Ncr pgsql.orig/src/include/utils/builtins.h pgsql/src/include/utils/builtins.h\n> *** 
pgsql.orig/src/include/utils/builtins.h\tSun Apr 7 10:21:29 2002\n> --- pgsql/src/include/utils/builtins.h\tSun Apr 7 12:26:17 2002\n> ***************\n> *** 414,419 ****\n> --- 414,422 ----\n> extern bool SplitIdentifierString(char *rawstring, char separator,\n> \t\t\t\t\t\t\t\t List **namelist);\n> \n> + extern Datum unknownin(PG_FUNCTION_ARGS);\n> + extern Datum unknownout(PG_FUNCTION_ARGS);\n> + \n> extern Datum byteain(PG_FUNCTION_ARGS);\n> extern Datum byteaout(PG_FUNCTION_ARGS);\n> extern Datum byteaoctetlen(PG_FUNCTION_ARGS);\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Apr 2002 22:13:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: unknownin/out patch (was [HACKERS] PQescapeBytea is not" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n>> Here's a patch to add unknownin/unknownout support. I also poked around \n>> looking for places that assume UNKNOWN == TEXT. One of those was the \n>> \"SET\" type in pg_type.h, which was using textin/textout. This one I took \n>> care of in this patch. The other suspicious place was in \n>> string_to_dataum (which is defined in both selfuncs.c and indxpath.c). I \n>> wasn't too sure about those, so I left them be.\n\nI do not think string_to_datum is a problem. UNKNOWN constants should\nnever get past the parse analysis stage, so the planner doesn't have to\ndeal with them. Certainly, it won't be looking at them in the context\nof making any interesting selectivity decisions.\n\n> I found three other suspicious spots in the source, where UNKNOWN == \n> TEXT is assumed. 
The first looks like it needs to be changed for sure, \n> the other two I'm less sure about. Feedback would be most appreciated \n> (on this and the patch itself).\n\n> (line numbers based on CVS from earlier today)\n> parse_node.c \n> line 428\n> parse_coerce.c \n> line 85\n> parse_coerce.c \n> line 403\n\nThe first two of these certainly need to be changed --- these are\nexactly the places where we convert literal strings to and (later)\nfrom UNKNOWN-constant representation. The third is okay as-is;\nit's a type resolution rule, not code that is touching any literal\nconstants directly. Will fix these in an upcoming commit.\n\nThe patch looks okay otherwise, except that I'm moving the typedef\nunknown and the fmgr macros for it into varlena.c. These two routines\nare the only routines that will ever need them, so there's no need to\nclutter the system-wide headers with 'em. (Also, I am uncomfortable\nwith having a globally-visible typedef with such a generic name as\n\"unknown\"; strikes me as a recipe for name conflicts.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Apr 2002 22:55:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: unknownin/out patch" } ]
[ { "msg_contents": "Hi everyone,\n\nFor anyone who's interested in the patent status of UB-Tree's, here is\nfurther info.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-------- Original Message --------\nSubject: AW: UB-Tree's and patents\nDate: Thu, 4 Apr 2002 17:41:10 +0200\nFrom: \"Rudolf Bayer\" <bayer@in.tum.de>\nTo: \"Justin Clift\" <justin@postgresql.org>\nCC: \"Christian Roth\" <roth@transaction.de>\n\nThe UB-tree has been patented in Europe (PCT) and the US;\nboth patents have been awarded.\nThe patent application has been filed in Japan, but processing is not\ncompleted yet.\nA first commercial implementation is available in the product\n\tTransBase Hypercube from\n\tTransAction SW GmbH\nContact Dr. Roth ++49 89 62709 170\nregards,\nR. Bayer\n\n*********************************************************************\nProf. Rudolf Bayer, Ph.D. \tPhone: ++49 89 48095 171\nInstitut fur Informatik\t\t\tFax: ++49 89 48095 170\nTechnische Universitat Munchen \temail: bayer@in.tum.de\nOrleansstr. 34\t\t\t\thome: http://www3.in.tum.de\n81667 Munchen\n\n> -----Original Message-----\n> From: Justin Clift [mailto:justin@postgresql.org]\n> Sent: Tuesday, 2 April 2002 22:08\n> To: bayer@informatik.tu-muenchen.de; Stefan Reich\n> Subject: UB-Tree's and patents\n>\n>\n> Hi Prof. Rudolf,\n>\n> There seems to be some confusion in the Open Source software community\n> as to whether the algorithms/concepts/etc for UB-Tree's are patented or\n> not.\n>\n> Do you know of anywhere we can find information about its patent\n> status?\n>\n> :-)\n>\n> Regards and best wishes,\n>\n> Justin Clift\n>\n> --\n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. 
He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n>\n", "msg_date": "Sat, 06 Apr 2002 01:27:59 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "[Fwd: AW: UB-Tree's and patents]" }, { "msg_contents": "On Sat, 6 Apr 2002, Justin Clift wrote:\n\n> Hi everyone,\n>\n> For anyone who's interested in the patent status of UB-Tree's, here is\n> further info.\n>\n> :-)\n\nI don't understand that. Does it mean someone has no right to\nimplement the algorithm in any form? Who has patented the idea of\nrelational databases? Donald Knuth in his \"All Questions Answered\"\nsaid \"... the word patent means 'to make public'\"\n\n\tRegards,\n\n\tOleg\n\n>\n> Regards and best wishes,\n>\n> Justin Clift\n>\n> -------- Original Message --------\n> Subject: AW: UB-Tree's and patents\n> Date: Thu, 4 Apr 2002 17:41:10 +0200\n> From: \"Rudolf Bayer\" <bayer@in.tum.de>\n> To: \"Justin Clift\" <justin@postgresql.org>\n> CC: \"Christian Roth\" <roth@transaction.de>\n>\n> the UB-tree has been patented in Europe (PCT) and the US,\n> both patents have been awarded.\n> the Patent application has been filed in Japan, but processing is not\n> compleatet yet.\n> A first commercial implementation is available in the product\n> \tTransBase Hypercube from\n> \tTransAction SW GmbH\n> contact Dr. Roth ++49 89 62709 170\n> regards,\n> R. Bayer\n>\n> *********************************************************************\n> Prof. Rudolf Bayer, Ph.D. \tPhone: ++49 89 48095 171\n> Institut fur Informatik\t\t\tFax: ++49 89 48095 170\n> Technische Universitat Munchen \temail: bayer@in.tum.de\n> Orleansstr. 34\t\t\t\thome: http://www3.in.tum.de\n> 81667 Munchen\n>\n> > -----Ursprungliche Nachricht-----\n> > Von: Justin Clift [mailto:justin@postgresql.org]\n> > Gesendet: Dienstag, 2. 
April 2002 22:08\n> > An: bayer@informatik.tu-muenchen.de; Stefan Reich\n> > Betreff: UB-Tree's and patents\n> >\n> >\n> > Hi Prof. Rudolf,\n> >\n> > There seems to be some confusion in the Open Source software community\n> > as to whether the algorithms/concepts/etc for UB-Tree's are patented or\n> > not.\n> >\n> > Do you know of anywhere we can find information about it's patent\n> > status?\n> >\n> > :-)\n> >\n> > Regards and best wishes,\n> >\n> > Justin Clift\n> >\n> > --\n> > \"My grandfather once told me that there are two kinds of people: those\n> > who work and those who take the credit. He told me to try to be in the\n> > first group; there was less competition there.\"\n> > - Indira Gandhi\n> >\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 5 Apr 2002 19:11:01 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: [Fwd: AW: UB-Tree's and patents]" }, { "msg_contents": "Hi Oleg,\n\nMy understanding of patent law (in a generalised way) means that for any\ncountry in which you're granted a patent on something, these days it\neffectively means you've got control of the usage of that particular\nthing.\n\nSo, if Prof. Rudolf Bayer has a patent on the algorithm for UB-Tree's in\nthe US and Europe, then he gets to say who can and can't use that\nalgorithm in those countries.\n\nNow, if whoever DOES hold that patent is Open Source friendly we have\nnothing to worry about as long as we get permission in writing (assuming\nUB-Tree's are something we're implementing at some point). 
If they're\nNOT Open Source friendly, then there are steps we can take, but they all\ncost money of varying amounts. :-(\n\nIf anyone has further info about this kind of thing, that would be\nuseful.\n\nAlso, I've emailed Prof. Bayer yesterday to ask him for more specifics,\nso we should hopefully have his response in a day or two.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nOleg Bartunov wrote:\n> \n> On Sat, 6 Apr 2002, Justin Clift wrote:\n> \n> > Hi everyone,\n> >\n> > For anyone who's interested in the patent status of UB-Tree's, here is\n> > further info.\n> >\n> > :-)\n> \n> I dont' understand that. Does it means someone has no rights to\n> implement the algorithm in any form ? Who has patented an idea of\n> relational databases ? Donald Knuth in his \"All Questions Answered\"\n> said \"... the word patent means \"to make public\"\n> \n> Regards,\n> \n> Oleg\n> \n> >\n> > Regards and best wishes,\n> >\n> > Justin Clift\n> >\n> > -------- Original Message --------\n> > Subject: AW: UB-Tree's and patents\n> > Date: Thu, 4 Apr 2002 17:41:10 +0200\n> > From: \"Rudolf Bayer\" <bayer@in.tum.de>\n> > To: \"Justin Clift\" <justin@postgresql.org>\n> > CC: \"Christian Roth\" <roth@transaction.de>\n> >\n> > the UB-tree has been patented in Europe (PCT) and the US,\n> > both patents have been awarded.\n> > the Patent application has been filed in Japan, but processing is not\n> > compleatet yet.\n> > A first commercial implementation is available in the product\n> > TransBase Hypercube from\n> > TransAction SW GmbH\n> > contact Dr. Roth ++49 89 62709 170\n> > regards,\n> > R. Bayer\n> >\n> > *********************************************************************\n> > Prof. Rudolf Bayer, Ph.D. Phone: ++49 89 48095 171\n> > Institut fur Informatik Fax: ++49 89 48095 170\n> > Technische Universitat Munchen email: bayer@in.tum.de\n> > Orleansstr. 
34 home: http://www3.in.tum.de\n> > 81667 Munchen\n> >\n> > > -----Ursprungliche Nachricht-----\n> > > Von: Justin Clift [mailto:justin@postgresql.org]\n> > > Gesendet: Dienstag, 2. April 2002 22:08\n> > > An: bayer@informatik.tu-muenchen.de; Stefan Reich\n> > > Betreff: UB-Tree's and patents\n> > >\n> > >\n> > > Hi Prof. Rudolf,\n> > >\n> > > There seems to be some confusion in the Open Source software community\n> > > as to whether the algorithms/concepts/etc for UB-Tree's are patented or\n> > > not.\n> > >\n> > > Do you know of anywhere we can find information about it's patent\n> > > status?\n> > >\n> > > :-)\n> > >\n> > > Regards and best wishes,\n> > >\n> > > Justin Clift\n> > >\n> > > --\n> > > \"My grandfather once told me that there are two kinds of people: those\n> > > who work and those who take the credit. He told me to try to be in the\n> > > first group; there was less competition there.\"\n> > > - Indira Gandhi\n> > >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n> \n> Regards,\n> Oleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sat, 06 Apr 2002 02:48:08 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: [Fwd: AW: UB-Tree's and patents]" } ]
[ { "msg_contents": "It would be nice if total table cardinality could be maintained live.\nSo (after the initial vacuum) we update the cardinality for each table\nin the system table (or perhaps add an entry to the table itself).\nThere are two reasons why this is an important optimization. Firstly,\nit is a psychological benefit for both benchmarks and customers when\ndoing a select count(*) from <tablename>. This is something that pops\nup all the time in benchmarks and customers do it too, in order to get a\nfeel for speed. By storing the current number and incrementing for\nevery insert and decrementing for every delete, the count(*) case with\nno where clause can return the value instantly.\n\nThe far more important reason is for optimizations. An accurate\ncardinality figure can greatly enhance the optimizer's ability to\nperform joins in the correct order.\n\nAn example of a SQL system that does this sort of thing is Microsoft\nSQL*Server. If you have 100 million rows in a table and do:\nSELECT COUNT(*) FROM table_name\nit will return the correct number instantly. The same is true for\nOracle.\n\nIt might also be possible to keep an array in memory of:\ntypedef struct tag_cardinality_list {\n\tchar *table_name;\n\tunsigned long cardinality;\n} cardinality_list;\n\nand keep the data updated there with simple interlocked exchange\noperations. The list would be loaded on Postmaster startup and saved on\nshutdown.\n\n", "msg_date": "Fri, 5 Apr 2002 11:04:13 -0800", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Suggestion for optimization" }, { "msg_contents": "\"Dann Corbit\" <DCorbit@connx.com> writes:\n\n> It would be nice if total table cardinality could be maintained live.\n> So (after the initial vacuum) we update the cardinality for each table\n> in the system table (or perhaps add an entry to the table itself).\n> There are two reasons why this is an important optimization. 
Firstly,\n> it is a psychological benefit for both benchmarks and customers when\n> doing a select count(*) from <tablename>. This is something that pops\n> up all the time in benchmarks and customers do it too, in order to get a\n> feel for speed. By storing the current number and incrementing for\n> every insert and decrementing for every delete, the count(*) case with\n> no where clause can return the value instantly.\n\nHow would this work with MVCC?\n\n-Doug\n-- \nDoug McNaught Wireboard Industries http://www.wireboard.com/\n\n Custom software development, systems and network consulting.\n Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...\n", "msg_date": "05 Apr 2002 14:30:22 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Suggestion for optimization" }, { "msg_contents": "Doug McNaught <doug@wireboard.com> writes:\n> \"Dann Corbit\" <DCorbit@connx.com> writes:\n>> It would be nice if total table cardinality could be maintained live.\n\n> How would this work with MVCC?\n\nIt wouldn't. That's why it's not there. Under MVCC, table cardinality\nis in the eye of the beholder...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Apr 2002 14:37:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Suggestion for optimization " }, { "msg_contents": "> >> It would be nice if total table cardinality could be maintained live.\n> >\n> > How would this work with MVCC?\n>\n> It wouldn't. That's why it's not there. Under MVCC, table cardinality\n> is in the eye of the beholder...\n\nThat makes me curious how oracle implements it. I was under the impression \nthat oracle managed to get the two together (MVCC and cardinality).\n\nAlso, can't triggers basically solve the problem from a functionaility \nstandpoint? 
I mean, it would be slow for inserts, but I wonder if similar \nprinciples could be applied without the full overhead of triggers?\n\nWhat I had in mind was to create a stats table (I guess much like the one \nthat already exists, or a separate attribute of the existing one) and all of \nthe tables in there, with a rowcount. Once a row is inserted, the trigger \nruns and updates the count (or decrements for delete), making the new count \nvisible to the current transaction. Then when the transaction commits, it's \nvisible everywhere at the same time as the count. \n\nIs there anything wrong with that setup, other than the obvious performance \nhit?\n\nBy the way, since this discussion has happened before, I actually read a \nsimilar idea in a previous email (if I remember correctly), but I didn't see \nmuch talk about it.\n\nRegards,\n\tJeff\n", "msg_date": "Fri, 5 Apr 2002 13:09:08 -0800", "msg_from": "Jeff Davis <list-pgsql-hackers@dynworks.com>", "msg_from_op": false, "msg_subject": "Re: Suggestion for optimization" }, { "msg_contents": "> I don't think your idea would work for a concurrent user setup where\n> people have different transactions started at different times with\n> different amounts of changes inside each transaction.\n>\n> That's why it would have to be tracked on a \"per connection\" basis for\n> all the tables.\n\nI tried it out with concurrent connections and it seemed to hold up just \nfine. I think MVCC took care of everything. Transactions got a different \ncount depending on whether they could see the inserted values or not. Once \ncommitted all transactions could see the new table count.\n\nCan you provide a case where it wouldn't?\n\nI imagine this causes some major performance issues, not to mention the dead \ntuples would pile up fast, but it seems to work just fine. 
\n\nMy SQL is below.\n\nRegards,\n\tJeff\n\njdavis=> create table tuple_count(tuples int);\nCREATE\njdavis=> create table c1(a int);\nCREATE\njdavis=> create function f1() returns opaque as '\njdavis'> BEGIN\njdavis'> UPDATE tuple_count set tuples=tuples+1;\njdavis'> RETURN NEW;\njdavis'> END;\njdavis'> ' language 'plpgsql';\nCREATE\njdavis=> create function f2() returns opaque as '\njdavis'> BEGIN\njdavis'> UPDATE tuple_count set tuples=tuples-1;\njdavis'> RETURN NEW;\njdavis'> END;\njdavis'> ' language 'plpgsql';\nCREATE\njdavis=> create trigger t1 after insert on c1 for each row execute procedure \nf1();\nCREATE\njdavis=> create trigger t2 after delete on c1 for each row execute procedure \nf2();\nCREATE\n", "msg_date": "Fri, 5 Apr 2002 15:22:50 -0800", "msg_from": "Jeff Davis <list-pgsql-hackers@dynworks.com>", "msg_from_op": false, "msg_subject": "Re: Suggestion for optimization" }, { "msg_contents": "Tom Lane wrote:\n> \n> Doug McNaught <doug@wireboard.com> writes:\n> > \"Dann Corbit\" <DCorbit@connx.com> writes:\n> >> It would be nice if total table cardinality could be maintained live.\n> \n> > How would this work with MVCC?\n> \n> It wouldn't. That's why it's not there. Under MVCC, table cardinality\n> is in the eye of the beholder...\n\nThis is true, absolutely, but keeping a running total of the number of records\nshould not change this fact. It may even make it more accurate.\n\nIf count() comes back immediately with *a* number, that number was only\naccurate at the time of the transaction. If count() does a full table scan, it\nstill only comes back with something accurate to the time of the transaction,\nbut it could be more likely less accurate on a busy/large table because many\nmore things may have changed during the time used by a full table scan.\n\nThe issue of a busy table shouldn't make a difference either. 
If we already\naccept that count() returns the known count at the beginning time of the\ntransaction, and not the count() at the end of a transaction (MVCC), then taking\na count() from a counter which is updated when delete/inserts are performed is\njust as accurate, or at least just as subject to inaccuracies.\n", "msg_date": "Sun, 07 Apr 2002 08:50:41 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Suggestion for optimization" } ]
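The trigger-maintained counter Jeff Davis demonstrates in this thread works precisely because the counter row is itself subject to MVCC: each transaction sees the last counter version committed before its snapshot, plus its own uncommitted update. A toy Python sketch of that behaviour (an illustrative model with made-up transaction ids, not PostgreSQL internals):

```python
# Toy model of a trigger-maintained row counter under MVCC.
# Each value of the counter is stored as a version tagged with the xid
# that wrote it; a reader sees the newest version that is either its own
# or was committed before its snapshot was taken. All names/xids are
# invented for illustration.

class CounterTable:
    def __init__(self):
        self.versions = [(0, 0)]   # (writer_xid, value); xid 0 = bootstrap row
        self.committed = {0}       # xids whose writes are globally visible

    def read(self, own_xid, snapshot):
        """Newest version visible to `snapshot`, or our own uncommitted one."""
        visible = [(xid, v) for xid, v in self.versions
                   if xid == own_xid or xid in snapshot]
        return max(visible)[1]     # highest xid == newest visible version

    def write(self, own_xid, value):
        self.versions.append((own_xid, value))

    def commit(self, xid):
        self.committed.add(xid)

table = CounterTable()

snap10 = set(table.committed)      # transaction 10 starts
snap11 = set(table.committed)      # transaction 11 starts at the same time

# Txn 10 inserts a row; the trigger bumps the counter (a version owned by 10).
table.write(10, table.read(10, snap10) + 1)

a = table.read(10, snap10)         # txn 10 sees its own increment -> 1
b = table.read(11, snap11)         # concurrent txn 11 still sees -> 0

table.commit(10)
snap12 = set(table.committed)      # a transaction starting after the commit
c = table.read(12, snap12)         # sees the committed increment -> 1
print(a, b, c)                     # 1 0 1
```

The cost Jeff anticipates is also visible in this model: every writing transaction creates a new version of the same counter row, which serializes writers on that row and leaves dead versions behind for vacuum.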
[ { "msg_contents": "-----Original Message-----\nFrom: Doug McNaught [mailto:doug@wireboard.com]\nSent: Friday, April 05, 2002 11:30 AM\nTo: Dann Corbit\nCc: pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Suggestion for optimization\n\n\n\"Dann Corbit\" <DCorbit@connx.com> writes:\n\n> It would be nice if total table cardinality could be maintained live.\n> So (after the initial vacuum) we update the cardinality for each table\n> in the system table (or perhaps add an entry to the table itself).\n> There are two reasons why this is an important optimization. Firstly,\n> it is a psychological benefit for both benchmarks and customers when\n> doing a select count(*) from <tablename>. This is something that pops\n> up all the time in benchmarks and customers do it too, in order to get\na\n> feel for speed. By storing the current number and incrementing for\n> every insert and decrementing for every delete, the count(*) case with\n> no where clause can return the value instantly.\n\nHow would this work with MVCC?\n>>\nWhenever a commit occurs, the pending inserts are totaled into the sum\nand the pending deletes are subtracted. It can be a list in memory or\nwhatever. Maybe you are referring to the old (expired) rows begin\nstored until vacuum? Perhaps I really don't understand your question or\nthe issues involved. Why does MVCC complicate issues?\n<<\n", "msg_date": "Fri, 5 Apr 2002 11:43:30 -0800", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: Suggestion for optimization" }, { "msg_contents": "\"Dann Corbit\" <DCorbit@connx.com> writes:\n\n> How would this work with MVCC?\n> >>\n> Whenever a commit occurs, the pending inserts are totaled into the sum\n> and the pending deletes are subtracted. It can be a list in memory or\n> whatever. Maybe you are referring to the old (expired) rows begin\n> stored until vacuum? Perhaps I really don't understand your question or\n> the issues involved. 
Why does MVCC complicate issues?\n> <<\n\nBecause the row count depends on what transactions have committed when\nyours starts. Also, you will see the count(*) reflecting INSERTs in\nyour transaction, but others won't until you commit. So there is no\nwell-defined concept of cardinality under MVCC--it depends on which\nrows are visible to which transactions.\n\n-Doug\n-- \nDoug McNaught Wireboard Industries http://www.wireboard.com/\n\n Custom software development, systems and network consulting.\n Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...\n", "msg_date": "05 Apr 2002 14:55:07 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Suggestion for optimization" } ]
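Doug's visibility rule can be sketched in a few lines: a tuple counts toward count(*) if its inserting transaction had committed before our snapshot was taken, or if we inserted it ourselves. This is a simplified toy model (it ignores deletions/xmax entirely, and the xids are invented):

```python
# Toy MVCC visibility rule for count(*): a tuple is visible if it was
# written by a transaction committed before our snapshot, or by our own
# transaction. Deletes are ignored for brevity.

def count_visible(tuples, snapshot, own_xid):
    """tuples: list of inserting xids; snapshot: committed xids at txn start."""
    return sum(1 for xmin in tuples if xmin in snapshot or xmin == own_xid)

committed = {1, 2}               # txns 1 and 2 committed three rows total
heap = [1, 1, 2]

snap_a = set(committed)          # transaction 7 starts
snap_b = set(committed)          # transaction 8 starts at the same moment

heap.append(7)                   # txn 7 inserts a row (not yet committed)

a = count_visible(heap, snap_a, own_xid=7)   # 4: sees its own insert
b = count_visible(heap, snap_b, own_xid=8)   # 3: concurrent txn does not

committed.add(7)                 # txn 7 commits
snap_c = set(committed)          # transaction 9 starts afterwards
c = count_visible(heap, snap_c, own_xid=9)   # 4

print(a, b, c)                   # 4 3 4
```

So "the" cardinality of the table is genuinely snapshot-relative: all three answers above are correct for the transaction that asked.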
[ { "msg_contents": "-----Original Message-----\nFrom: Tom Lane [mailto:tgl@sss.pgh.pa.us]\nSent: Friday, April 05, 2002 11:37 AM\nTo: Doug McNaught\nCc: Dann Corbit; pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Suggestion for optimization \n\nDoug McNaught <doug@wireboard.com> writes:\n> \"Dann Corbit\" <DCorbit@connx.com> writes:\n>> It would be nice if total table cardinality could be maintained live.\n\n> How would this work with MVCC?\n\nIt wouldn't. That's why it's not there. Under MVCC, table cardinality\nis in the eye of the beholder...\n>>-------------------------\nIf this is true (even after a commit) then MVCC is a very bad thing. No\ntransactions occur, and two people ask the same question yet get\ndifferent answers. Doesn't that scare anyone? That would mean (among\nother things) that Postgresql cannot be used for a data warehouse.\n\nOne of the primary facets of a reliable database transaction system is\nrepeatability. In fact, if there is no certain cardinality known after\ncommits, then there are no reliable database operations that can be\ntrusted.\n\nHow many accounts are 90 days overdue? Bill says 78 and Frank says 82.\nWho is right and how can we know?\n\nI have spent months working on Postgresql projects here (at CONNX\nSolutions Inc.) and talked management into using an open source\ndatabase. Please tell me I'm not going to look like a bloody idiot in\nthe near term.\n<<-------------------------\n", "msg_date": "Fri, 5 Apr 2002 11:50:07 -0800", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: Suggestion for optimization " }, { "msg_contents": "\"Dann Corbit\" <DCorbit@connx.com> writes:\n\n> If this is true (even after a commit) then MVCC is a very bad thing. No\n> transactions occur, and two people ask the same question yet get\n> different answers. Doesn't that scare anyone? 
That would mean (among\n> other things) that Postgresql cannot be used for a data warehouse.\n\nHave you read the doc chapter about MVCC? Sounds like you don't\nquite understand how it works yet.\n\n-Doug\n-- \nDoug McNaught Wireboard Industries http://www.wireboard.com/\n\n Custom software development, systems and network consulting.\n Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...\n", "msg_date": "05 Apr 2002 14:51:32 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Suggestion for optimization" }, { "msg_contents": "\"Dann Corbit\" <DCorbit@connx.com> writes:\n> How many accounts are 90 days overdue? Bill says 78 and Frank says 82.\n> Who is right and how can we know?\n\nIf Bill and Frank look at exactly the same instant (ie, from read-only\ntransactions started at the same time), they will get the same answer.\nIf they are looking from transactions started at different times, or\nfrom transactions that have themselves added/removed rows, they won't\nnecessarily get the same answer. That does not make either answer\nwrong, just a reflection of the snapshots they are seeing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Apr 2002 14:51:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Suggestion for optimization " } ]
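Tom's "Bill and Frank" example is deterministic in exactly the sense he states: identical snapshots give identical counts, and a snapshot taken after an intervening commit gives a different, equally correct answer. A minimal illustration (toy model, invented xids):

```python
# Two read-only transactions that take their snapshot at the same instant
# count exactly the same rows; a snapshot taken after a later commit differs.

def count_visible(xmins, snapshot):
    return sum(1 for x in xmins if x in snapshot)

heap = [1, 1, 2, 2, 2]           # rows tagged with their writer xids
committed = {1, 2}

bill = frozenset(committed)      # Bill starts his report
frank = frozenset(committed)     # Frank starts at the same instant

committed.add(3)                 # txn 3 inserts a row and commits meanwhile
heap.append(3)

# Bill and Frank agree, despite the concurrent insert.
assert count_visible(heap, bill) == count_visible(heap, frank)   # both 5

later = frozenset(committed)     # a transaction started after txn 3 commits
print(count_visible(heap, later))  # 6 -- different, but not "wrong"
```

Neither 5 nor 6 is an error; each is the count as of the snapshot that produced it, which is the repeatability guarantee MVCC actually makes.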
[ { "msg_contents": "-----Original Message-----\nFrom: Doug McNaught [mailto:doug@wireboard.com]\nSent: Friday, April 05, 2002 11:55 AM\nTo: Dann Corbit\nCc: pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Suggestion for optimization\n\n\n\"Dann Corbit\" <DCorbit@connx.com> writes:\n\n> How would this work with MVCC?\n> >>\n> Whenever a commit occurs, the pending inserts are totaled into the sum\n> and the pending deletes are subtracted. It can be a list in memory or\n> whatever. Maybe you are referring to the old (expired) rows begin\n> stored until vacuum? Perhaps I really don't understand your question\nor\n> the issues involved. Why does MVCC complicate issues?\n> <<\n\nBecause the row count depends on what transactions have committed when\nyours starts. Also, you will see the count(*) reflecting INSERTs in\nyour transaction, but others won't until you commit. So there is no\nwell-defined concept of cardinality under MVCC--it depends on which\nrows are visible to which transactions.\n>>-----------------------------------------------------------------\nI guess that this model can be viewed as \"everything is a snapshot\".\nIt seems plain that the repercussions for a data warehouse and for \nreporting have not been thought out very well. This is definitely\nvery, very bad in that arena. I suppose that reporting could still\nbe accomplished, but it would require pumping the data into a new\ncopy of the database that does not allow writes at all. Yuck.\n\nAt any rate, there is clearly a concept of cardinality in any case.\nPerhaps the information would have to be kept as part of the \nconnection. If (after all) you cannot even compute cardinality\nfor a single connection then the database truly is useless. In \nfact, under a scenario where cardinality has no meaning, neither does\nselect count() since that is what it measures. Might as well\nremove it from the language.\n\nI have read a couple books on Postgresql and somehow missed the\nwhole MVCC idea. 
Maybe after I understand it better the clammy \nbeads of sweat on my forehead will dry up a little.\n<<-----------------------------------------------------------------\n", "msg_date": "Fri, 5 Apr 2002 12:08:11 -0800", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: Suggestion for optimization" }, { "msg_contents": "\"Dann Corbit\" <DCorbit@connx.com> writes:\n> At any rate, there is clearly a concept of cardinality in any case.\n\nCertainly. The count(*) value is perfectly well defined within any one\ntransaction. We *could*, if we wanted to, implement bookkeeping logic\nthat would keep track of the number of rows inserted by all transactions\nand allow derivation of the count-as-seen-by-any-one-transaction at all\ntimes. The point is that that logic would be vastly more complex than\nyou thought it would be; and it would not be optional. (AFAICS, the\ncounts would have to be determined at postmaster startup and then\nmaintained faithfully by all transactions. There wouldn't be any good\nway for a transaction to initialize the bookkeeping logic on-the-fly ---\nunless you call acquiring an exclusive lock on a table good.) No one\nwho's looked at it has thought that it would be a good tradeoff for\nmaking count(*) faster.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Apr 2002 15:21:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Suggestion for optimization " }, { "msg_contents": "Not to mention it only increases the speed of:\n\nSELECT count(*) FROM foo;\n\nand not:\n\nSELECT count(*) FROM foo WHERE bar;\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. 
You cannot imagine why you ever felt\notherwise.\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Dann Corbit\" <DCorbit@connx.com>\nCc: \"Doug McNaught\" <doug@wireboard.com>;\n<pgsql-hackers@postgresql.org>\nSent: Friday, April 05, 2002 3:21 PM\nSubject: Re: [HACKERS] Suggestion for optimization\n\n\n> \"Dann Corbit\" <DCorbit@connx.com> writes:\n> > At any rate, there is clearly a concept of cardinality in any\ncase.\n>\n> Certainly. The count(*) value is perfectly well defined within any\none\n> transaction. We *could*, if we wanted to, implement bookkeeping\nlogic\n> that would keep track of the number of rows inserted by all\ntransactions\n> and allow derivation of the count-as-seen-by-any-one-transaction at\nall\n> times. The point is that that logic would be vastly more complex\nthan\n> you thought it would be; and it would not be optional. (AFAICS, the\n> counts would have to be determined at postmaster startup and then\n> maintained faithfully by all transactions. There wouldn't be any\ngood\n> way for a transaction to initialize the bookkeeping logic\non-the-fly ---\n> unless you call acquiring an exclusive lock on a table good.) No\none\n> who's looked at it has thought that it would be a good tradeoff for\n> making count(*) faster.\n>\n> regards, tom lane\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Fri, 5 Apr 2002 15:35:19 -0500", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Suggestion for optimization " }, { "msg_contents": "Dann Corbit wrote:\n> \n> I guess that this model can be viewed as \"everything is a snapshot\".\n> It seems plain that the repercussions for a data warehouse and for\n> reporting have not been thought out very well. This is definitely\n> very, very bad in that arena. 
I suppose that reporting could still\n> be accomplished, but it would require pumping the data into a new\n> copy of the database that does not allow writes at all. Yuck.\n> \n> At any rate, there is clearly a concept of cardinality in any case.\n> Perhaps the information would have to be kept as part of the\n> connection. If (after all) you cannot even compute cardinality\n> for a single connection then the database truly is useless. In\n> fact, under a scenario where cardinality has no meaning, neither does\n> select count() since that is what it measures. Might as well\n> remove it from the language.\n> \n> I have read a couple books on Postgresql and somehow missed the\n> whole MVCC idea. Maybe after I understand it better the clammy\n> beads of sweat on my forehead will dry up a little.\n\nOracle is also a MVCC database. So this notion that MVCC somehow makes\nit inappropriate for data warehousing would imply that Oracle is also\ninappropriate. However, in your defense, Oracle did apparently find\nenough customer demand for a MVCC-compatible hack of COUNT() to\nimplement a short-cut route to calculate its value...\n\nMike Mascari\nmascarm@mascari.com\n", "msg_date": "Fri, 05 Apr 2002 15:44:03 -0500", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": false, "msg_subject": "Re: Suggestion for optimization" }, { "msg_contents": "At 12:08 PM -0800 4/5/02, Dann Corbit wrote:\n>I guess that this model can be viewed as \"everything is a snapshot\".\n>It seems plain that the repercussions for a data warehouse and for \n>reporting have not been thought out very well. This is definitely\n>very, very bad in that arena. I suppose that reporting could still\n>be accomplished, but it would require pumping the data into a new\n>copy of the database that does not allow writes at all. Yuck.\n\nThat is exactly the point of MVCC. When you start your reporting cycle, you initiate a transaction. 
That transaction causes the database to _act_ as if you had \"pump[ed] the data into a new copy of the database that does not allow writes at all.\"\n\nYour transaction is isolated from ongoing activities in the database. Your transaction _is_ a snapshot of the database at some instant in time.\n\nThis is a good thing. You should probably ponder it for a while before claiming it hasn't been thought out well wrt. certain applications.\n\n\nStill, your suggestion _could_ be implemented. Your comment: \"An accurate\ncardinality figure can greatly enhance the optimizer's ability to\nperform joins in the correct order\" was intriguing, and I'd be interested in Tom's thoughts on just that bit.\n\n-pmb\n\n\n", "msg_date": "Fri, 5 Apr 2002 13:01:38 -0800", "msg_from": "Peter Bierman <bierman@apple.com>", "msg_from_op": false, "msg_subject": "Re: Suggestion for optimization" }, { "msg_contents": "Peter Bierman <bierman@apple.com> writes:\n> ... Your comment: \"An\n> accurate cardinality figure can greatly enhance the optimizer's\n> ability to perform joins in the correct order\" was intriguing, and I'd\n> be interested in Tom's thoughts on just that bit.\n\nApproximate figures are quite sufficient for the planner's purposes.\nAFAICS, making them exact would not improve the planning estimates\nat all, because there are too many other sources of error. We have\napproximate stats already via vacuum/analyze statistics gathering.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Apr 2002 18:41:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Suggestion for optimization " }, { "msg_contents": "Af far as I know Oracle doesn't have any short cut (along the lines of \nwhat is being discussed in this thread) for this operation. However \nOracle is more efficient in providing the answer than postgres currently \nis. 
While postgres needs to perform a full scan on the table, Oracle \nwill only need to perform a full index scan on the primary key if one \nexists. Since the index will likely have much less data than the full \ntable this will result in fewer IOs and be faster than what postgres \ndoes, but it still takes a while for large tables even in Oracle.\n\nthanks,\n--Barry\n\nMike Mascari wrote:\n> Dann Corbit wrote:\n> \n>>I guess that this model can be viewed as \"everything is a snapshot\".\n>>It seems plain that the repercussions for a data warehouse and for\n>>reporting have not been thought out very well. This is definitely\n>>very, very bad in that arena. I suppose that reporting could still\n>>be accomplished, but it would require pumping the data into a new\n>>copy of the database that does not allow writes at all. Yuck.\n>>\n>>At any rate, there is clearly a concept of cardinality in any case.\n>>Perhaps the information would have to be kept as part of the\n>>connection. If (after all) you cannot even compute cardinality\n>>for a single connection then the database truly is useless. In\n>>fact, under a scenario where cardinality has no meaning, neither does\n>>select count() since that is what it measures. Might as well\n>>remove it from the language.\n>>\n>>I have read a couple books on Postgresql and somehow missed the\n>>whole MVCC idea. Maybe after I understand it better the clammy\n>>beads of sweat on my forehead will dry up a little.\n> \n> \n> Oracle is also a MVCC database. So this notion that MVCC somehow makes\n> it inappropriate for data warehousing would imply that Oracle is also\n> inappropriate. 
However, in your defense, Oracle did apparently find\n> enough customer demand for a MVCC-compatible hack of COUNT() to\n> implement a short-cut route to calculate its value...\n> \n> Mike Mascari\n> mascarm@mascari.com\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n\n", "msg_date": "Fri, 05 Apr 2002 19:28:27 -0800", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: Suggestion for optimization" }, { "msg_contents": "On Fri, 5 Apr 2002, Barry Lind wrote:\n\n> Af far as I know Oracle doesn't have any short cut (along the lines of \n> what is being discussed in this thread) for this operation. However \n> Oracle is more efficient in providing the answer than postgres currently \n> is. While postgres needs to perform a full scan on the table, Oracle \n> will only need to perform a full index scan on the primary key if one \n> exists. Since the index will likely have much less data than the full \n\nUnder Postgres, a full index scan is generally more expensive than a full\ntable scan since indices, particularly btree, carry a large amount of meta\ndata and theefore consume more pages.\n\nGavin\n\n", "msg_date": "Sun, 7 Apr 2002 22:27:08 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Suggestion for optimization" }, { "msg_contents": "> > Af far as I know Oracle doesn't have any short cut (along the lines of\n> > what is being discussed in this thread) for this operation. However\n> > Oracle is more efficient in providing the answer than postgres\n> currently\n> > is. While postgres needs to perform a full scan on the table, Oracle\n> > will only need to perform a full index scan on the primary key if one\n> > exists. 
Since the index will likely have much less data than the full\n>\n> Under Postgres, a full index scan is generally more expensive than a full\n> table scan since indices, particularly btree, carry a large amount of meta\n> data and therefore consume more pages.\n\nDon't forget that Postgres also doesn't store tids in the index, so must\nalways check with the main table that a row is visible in current\ntransaction.\n\nChris\n\n", "msg_date": "Mon, 8 Apr 2002 10:05:08 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Suggestion for optimization" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > > As far as I know Oracle doesn't have any short cut (along the lines of\n> > > what is being discussed in this thread) for this operation. However\n> > > Oracle is more efficient in providing the answer than postgres\n> > currently\n> > > is. While postgres needs to perform a full scan on the table, Oracle\n> > > will only need to perform a full index scan on the primary key if one\n> > > exists. Since the index will likely have much less data than the full\n> >\n> > Under Postgres, a full index scan is generally more expensive than a full\n> > table scan since indices, particularly btree, carry a large amount of meta\n> > data and therefore consume more pages.\n> \n> Don't forget that Postgres also doesn't store tids in the index, so must\n\nI assume you mean xid here. tids are in the index or there would be no\nway to find the heap row. :-)\n\n> always check with the main table that a row is visible in current\n> transaction.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 7 Apr 2002 23:23:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Suggestion for optimization" } ]
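The thread above hinges on where visibility information lives: a PostgreSQL index entry points at a heap tuple but says nothing about which transactions can see that tuple, so COUNT(*) must touch the heap even when an index is available. The sketch below is a toy model of that constraint only; `HeapTuple`, `visible`, and the simple xid-cutoff snapshot rule are invented for illustration and are far simpler than the backend's real tuple headers and visibility logic.

```python
# Toy model of why an index-only COUNT(*) is unsafe under MVCC: the
# index holds only heap pointers, while the xmin/xmax visibility data
# lives in the heap tuple, so every index entry must be chased to the
# heap and checked against the current snapshot.
from dataclasses import dataclass
from typing import Optional

@dataclass
class HeapTuple:
    value: int
    xmin: int                   # xid that inserted the row
    xmax: Optional[int] = None  # xid that deleted it, if any

def visible(tup: HeapTuple, snapshot_xid: int) -> bool:
    """Toy rule: a row is visible if inserted before our snapshot and
    not deleted, or deleted only by a later transaction."""
    if tup.xmin >= snapshot_xid:
        return False
    return tup.xmax is None or tup.xmax >= snapshot_xid

heap = [
    HeapTuple(1, xmin=100),             # old, live row
    HeapTuple(2, xmin=100, xmax=150),   # deleted by xid 150
    HeapTuple(3, xmin=200),             # inserted after our snapshot
]
index = list(range(len(heap)))          # index entries: just heap pointers

# COUNT(*) as seen by a snapshot taken at xid 160:
naive = len(index)                                    # index-only answer
correct = sum(visible(heap[i], 160) for i in index)   # heap check per entry
print(naive, correct)  # 3 1
```

An index-organized system like Oracle can answer from the index because it keeps enough transaction information to resolve visibility there; in the toy model above, that information simply does not exist in `index`.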
[ { "msg_contents": "-----Original Message-----\nFrom: Mike Mascari [mailto:mascarm@mascari.com]\nSent: Friday, April 05, 2002 12:44 PM\nTo: Dann Corbit\nCc: Doug McNaught; pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Suggestion for optimization\n\nDann Corbit wrote:\n> \n> I guess that this model can be viewed as \"everything is a snapshot\".\n> It seems plain that the repercussions for a data warehouse and for\n> reporting have not been thought out very well. This is definitely\n> very, very bad in that arena. I suppose that reporting could still\n> be accomplished, but it would require pumping the data into a new\n> copy of the database that does not allow writes at all. Yuck.\n> \n> At any rate, there is clearly a concept of cardinality in any case.\n> Perhaps the information would have to be kept as part of the\n> connection. If (after all) you cannot even compute cardinality\n> for a single connection then the database truly is useless. In\n> fact, under a scenario where cardinality has no meaning, neither does\n> select count() since that is what it measures. Might as well\n> remove it from the language.\n> \n> I have read a couple books on Postgresql and somehow missed the\n> whole MVCC idea. Maybe after I understand it better the clammy\n> beads of sweat on my forehead will dry up a little.\n\nOracle is also a MVCC database. So this notion that MVCC somehow makes\nit inappropriate for data warehousing would imply that Oracle is also\ninappropriate. However, in your defense, Oracle did apparently find\nenough customer demand for a MVCC-compatible hack of COUNT() to\nimplement a short-cut route to calculate its value...\n>>------------------------------------------------------------------\nThat's interesting. If Oracle is a MVCC database, how did they \nmanage to perform ANSI standard Isolation Levels? 
It seems it ought\nto be impossible.\n<<------------------------------------------------------------------\n", "msg_date": "Fri, 5 Apr 2002 12:53:49 -0800", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: Suggestion for optimization" } ]
[ { "msg_contents": "-----Original Message-----\nFrom: Jon Grov [mailto:jon@linpro.no]\nSent: Friday, April 05, 2002 12:54 PM\nTo: Dann Corbit\nCc: Mike Mascari; Doug McNaught; pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Suggestion for optimization\n\n\n\"Dann Corbit\" <DCorbit@connx.com> writes:\n\n> That's interesting. If Oracle is a MVCC database, how did they \n> manage to perform ANSI standard Isolation Levels? It seems it ought\n> to be impossible.\n\nThere's an excellent introduction to MVCC and snapshot isolation in\nthe PostgreSQL docs.\n\nSee\nhttp://www2.no.postgresql.org/users-lounge/docs/7.2/postgres/mvcc.html\n>>------------------------------------------------------------------\nI have read these documents (and some others) now. It seems that \nthere is a serializable transaction level, and so the goal I was \nafter can be reached anyway. So never mind. I am at peace again\n(and breathing a heavy sigh of relief).\n\nBut I am a bit puzzled. How can a serializable transaction be\nperformed in a MVCC system? I realize the Oracle does it, and also\nPostgresql, but I can't picture how that would work.\n<<------------------------------------------------------------------\n", "msg_date": "Fri, 5 Apr 2002 13:04:34 -0800", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: Suggestion for optimization" }, { "msg_contents": "(Sorry that my previous post did not reach the pgsql-hackers list, I\nsent it from the wrong address and was thus not considered a\nsubscriber)\n\n\"Dann Corbit\" <DCorbit@connx.com> writes:\n\n> But I am a bit puzzled. How can a serializable transaction be\n> performed in a MVCC system? 
I realize the Oracle does it, and also\n> Postgresql, but I can't picture how that would work.\n\nIn short, snapshot isolation implies this (assuming all transactions\nare assigned a monotonically increasing timestamp, and that all updates\ncreate a new, unique version of the object):\n\n- A transaction reads the most recent version of an object that was\n written by a transaction committed before its beginning.\n\n- Two concurrent (i.e. at some point in time, they're both active)\n transactions need to have disjoint write sets.\n\nThis is probably best illustrated through an example:\n\nLet x and y be columns in a table a. Initially, x = 20 and y = 5 in\nthe row where k = 1 (k is the primary key).\n\n- Let T_1 be a transaction as follows:\n\n BEGIN;\n SELECT x FROM a WHERE k = 1; SELECT y FROM a WHERE k = 1;\n END;\n\n- Let T_2 be another transaction:\n\n BEGIN;\n UPDATE a SET x = 10 WHERE k = 1; UPDATE a SET y = 10 WHERE k = 1;\n END;\n\nThen, if we have the following execution:\n\n10 BEGIN; /* T_2 */\n\n20 BEGIN; /* T_1 */\n\n30 SELECT x FROM a WHERE k = 1; /* T_1 */\n\n40 UPDATE a SET x = 10 WHERE k = 1; /* T_2 */\n\n50 UPDATE a SET y = 10 WHERE k = 1; /* T_2 */\n\n60 SELECT y FROM a WHERE k = 1; /* T_1 */\n\n70 END; /* T_2 */\n\n80 END; /* T_1 */\n\nClearly, this would not be serializable in a non-versioned\ndatabase. But under MVCC and snapshot isolation, T_1 reads from a\nsnapshot, i.e. it would not (under serializable isolation level, at\nleast) be allowed to read anything T_2 has written. 
The requirement\nis that a transaction T reads values written by transactions committed\nbefore T's beginning.\n\nSo T_1 would find that x = 20 and y = 5, just as it would if T_1 and\nT_2 were executed serially with T_1 before T_2.\n\nWith two (or more) updating transactions, things get more complicated.\n\nAssume we have the following two transactions:\n\nT_1:\n\n BEGIN;\n UPDATE x SET x = x + 10;\n END;\n\nT_2:\n\n BEGIN;\n UPDATE x SET x = x - 5;\n END;\n\nand the following execution (assuming x = 20 before it starts):\n\n10 BEGIN; /* T_1 */\n20 BEGIN; /* T_2 */\n\n30 UPDATE x SET x = x + 10; /* T_1 */\n\n40 END; /* T_1 */\n\n50 UPDATE x SET x = x - 5; /* T_2 */\n\n60 END; /* T_2 */\n\nSince a transaction is only allowed to read values written by\ntransactions committed before its beginning, both T_1 and T_2 will\nread x = 20. A transaction started just after this execution would\nthen read T_2's newly written value of x, that is 15, and line 40\nwould become a lost update. To avoid this, PostgreSQL offers two\nsolutions:\n\n- Read committed isolation, where a statement of the form UPDATE\n <table> SET <column> = <column> + <value> is considered a special\n case and T_2 is allowed to read T_1's value.\n\n- Serializable isolation, where T_2 would have to be aborted.\n\nIf lines 40 and 50 were swapped, T_2 would wait to see what happens to\nT_1. If it's aborted, it can safely read x = 20 regardless of the\nisolation level; if it's committed, the result would again depend on\nthe selected isolation level.\n\nHopefully, this illustrates the basic concepts. 
An interesting article\nconcerning the subtleties of this subject was posted by Tom Lane a\ncouple of days ago:\n\nhttp://groups.google.com/groups?q=postgresql+regression&hl=en&ie=utf-8&oe=utf-8&scoring=d&selm=11556.1017860660%40sss.pgh.pa.us&rnum=4\n\nIn addition, this seems to be the \"canonical paper\" on snapshot\nisolation:\n\nhttp://citeseer.nj.nec.com/berenson95critique.html\n\n\n--\nJon Grov, Linpro as\n\n", "msg_date": "06 Apr 2002 00:24:49 +0200", "msg_from": "Jon Grov <jongr@ifi.uio.no>", "msg_from_op": false, "msg_subject": "Re: Suggestion for optimization" }, { "msg_contents": "> In addition, this seems to be the \"canonical paper\" on snapshot\n> isolation:\n>\n> http://citeseer.nj.nec.com/berenson95critique.html\n\nThere is an excellent, more recent paper, Generalized Isolation Level\nDefinitions (http://citeseer.nj.nec.com/adya00generalized.html).\n\n\n\n", "msg_date": "Fri, 5 Apr 2002 22:48:57 -0500", "msg_from": "\"Ken Hirsch\" <kenhirsch@myself.com>", "msg_from_op": false, "msg_subject": "Re: Suggestion for optimization" } ]
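Jon's two executions can be condensed into a small working model. The sketch below is a toy version store, not PostgreSQL's implementation: `Store`, `read`, and `write` are invented names, the timestamp scheme is the monotonically growing counter assumed above, a reader sees the newest version stamped before its snapshot, and a writer aborts if any version of the key was committed at or after its snapshot, which is the first-committer-wins rule that makes serializable mode abort the second updater.

```python
# Toy snapshot-isolation store: each committed write appends a new
# version of the key tagged with the counter value at commit time.
class SerializationError(Exception):
    pass

class Store:
    def __init__(self):
        self.versions = {}  # key -> list of (commit_stamp, value)
        self.clock = 0

    def begin(self):
        """Start a transaction; its snapshot is the current clock."""
        self.clock += 1
        return self.clock

    def read(self, snap, key):
        # See the newest version committed before this transaction began.
        older = [(stamp, val) for stamp, val in self.versions.get(key, [])
                 if stamp < snap]
        return max(older)[1] if older else None

    def write(self, snap, key, value):
        # First-committer-wins: a version committed at or after our
        # snapshot means a concurrent writer touched the same key.
        if any(stamp >= snap for stamp, _ in self.versions.get(key, [])):
            raise SerializationError("concurrent update detected")
        self.clock += 1
        self.versions.setdefault(key, []).append((self.clock, value))

db = Store()
db.versions["x"] = [(0, 20)]       # initial state: x = 20

t1 = db.begin()                    # T_1 (reader)
t2 = db.begin()                    # T_2 (writer)
db.write(t2, "x", 10)              # T_2: UPDATE ... SET x = 10
r1 = db.read(t1, "x")              # T_1 still sees its snapshot: 20

try:
    db.write(t1, "x", 99)          # T_1 writes over T_2's update
    aborted = False
except SerializationError:
    aborted = True                 # serializable behavior: must abort

t3 = db.begin()                    # a later transaction
r3 = db.read(t3, "x")              # sees T_2's committed value: 10
print(r1, aborted, r3)  # 20 True 10
```

Read committed mode would instead let the second writer wait for the first and proceed against its result; the model above implements only the abort path Jon describes for serializable isolation.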
[ { "msg_contents": "-----Original Message-----\nFrom: Rod Taylor [mailto:rbt@zort.ca]\nSent: Friday, April 05, 2002 12:35 PM\nTo: Dann Corbit; Tom Lane\nCc: Doug McNaught; pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Suggestion for optimization \n\n\nNot to mention it only increases the speed of:\n\nSELECT count(*) FROM foo;\n\nand not:\n\nSELECT count(*) FROM foo WHERE bar;\n>>-----------------------------------------------\nOf course. Nobody optimizes that one (that I am\naware of).\n<<-----------------------------------------------\n", "msg_date": "Fri, 5 Apr 2002 13:07:51 -0800", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: Suggestion for optimization " } ]
[ { "msg_contents": "We had discussed a while ago that it might be a good idea to compile with\ndebugging symbols by default, at least when using GCC. Personally, I\nthink that that would be a good idea, for users and developers alike.\n\nIf we go with that, I'd like to implement a new target 'install-strip'\nthat strips the binaries while they are installed, as a compensation if\nyou will. (It strips the libraries in intelligent ways, too.)\n\nThis could be a win-win situation. Developers don't need to type\n--enable-debug all the time, users don't need to recompile when we ask\nthem to trace a bug, and if you're pressed for disk space then\n'install-strip' will save you even more space than simply omitting\ndebugging symbols. (Or you can keep the unstripped binaries around\nelsewhere for debugging -- the possibilities are endless ;-) )\n\nComments?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Fri, 5 Apr 2002 17:55:30 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Debugging symbols by default" }, { "msg_contents": "On Fri, 2002-04-05 at 16:55, Peter Eisentraut wrote:\n> We had discussed a while ago that it might be a good idea to compile with\n> debugging symbols by default, at least when using GCC. Personally, I\n> think that that would be a good idea, for users and developers alike.\n> \n> If we go with that, I'd like to implement a new target 'install-strip'\n> that strips the binaries while they are installed, as a compensation if\n> you will. (It strips the libraries in intelligent ways, too.)\n> \n> This could be a win-win situation. Developers don't need to type\n> --enable-debug all the time, users don't need to recompile when we ask\n> them to trace a bug, and if you're pressed for disk space then\n> 'install-strip' will save you even more space than simply omitting\n> debugging symbols. 
(Or you can keep the unstripped binaries around\n> elsewhere for debugging -- the possibilities are endless ;-) )\n> \n> Comments?\nWith the Caldera (nee SCO) compiler -O and -g are mutually exclusive. \nIf you include both, you'll get -g. \n\nI'd recommend against this for production use with the Caldera cc and CC\ncompilers. \n\nLER\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "05 Apr 2002 16:56:52 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Debugging symbols by default" }, { "msg_contents": "Larry Rosenman writes:\n\n> On Fri, 2002-04-05 at 16:55, Peter Eisentraut wrote:\n> > We had discussed a while ago that it might be a good idea to compile with\n> > debugging symbols by default, at least when using GCC. Personally, I\n ^^^^^^^^^^^^^^\n\n> With the Caldera (nee SCO) compiler -O and -g are mutually exclusive.\n> If you include both, you'll get -g.\n>\n> I'd recommend against this for production use with the Caldera cc and CC\n> compilers.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Fri, 5 Apr 2002 18:11:18 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Debugging symbols by default" }, { "msg_contents": "> We had discussed a while ago that it might be a good idea to compile with\n> debugging symbols by default, at least when using GCC.\n\nA tricky question is what to do with the --enable-debug option. For GCC\nit would become --disable-debug (i.e., remove -g from CFLAGS), but I'm not\nsure we'd need that if we provide 'make install-strip'.\n\nFor other compilers, it's anyone's guess. We could continue to provide\n--enable-debug to add -g to CFLAGS. Some commercial vendors' compilers\nactually support various combinations of debugging and optimizing these\ndays, but with different flags. 
So if you really try to build with\ndebugging support on those platforms you'd probably want to supply the\nCFLAGS yourself.\n\nI suppose one of the less confusing choices would be to not have that\noption. Setting CFLAGS yourself is even less typing on average.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 7 Apr 2002 00:44:18 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Debugging symbols by default" }, { "msg_contents": "\nI am not sure about the idea of -g by default. I know lyx uses -g by\ndefault, and the compile/install takes forever. In fact, I have removed\n-g from my compiles here because it takes too long to compile/link and I\ndo it too often. When I need to debug, I recompile with -g. My concern\nis that we may start to look very bloated with -g and those huge\nbinaries. My question is whether it is worth the install slowness/bloat?\n\n---------------------------------------------------------------------------\n\nPeter Eisentraut wrote:\n> > We had discussed a while ago that it might be a good idea to compile with\n> > debugging symbols by default, at least when using GCC.\n> \n> A tricky question is what to do with the --enable-debug option. For GCC\n> it would become --disable-debug (i.e., remove -g from CFLAGS), but I'm not\n> sure we'd need that if we provide 'make install-strip'.\n> \n> For other compilers, it's anyone's guess. We could continue to provide\n> --enable-debug to add -g to CFLAGS. Some commercial vendors' compilers\n> actually support various combinations of debugging and optimizing these\n> days, but with different flags. So if you really try to build with\n> debugging support on those platforms you'd probably want to supply the\n> CFLAGS yourself.\n> \n> I suppose one of the less confusing choices would be to not have that\n> option. 
Setting CFLAGS yourself is even less typing on average.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 7 Apr 2002 19:39:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Debugging symbols by default" }, { "msg_contents": "Bruce Momjian writes:\n\n> I am not sure about the idea of -g by default. I know lyx uses -g by\n> default, and the compile/install takes forever. In fact, I have removed\n> -g from my compiles here because it takes too long to compile/link and I\n> do it too often. When I need to debug, I recompile with -g. My concern\n> is that we may start to look very bloated with -g and those huge\n> binaries. My question is whether it is worth the install slowness/bloat?\n\nPostgreSQL compile time is minimal compared to other packages. If you're\nworried about 30 seconds, turn off the optimization or use parallel make.\nIf you see yourself doing a full build too often, turn on dependency\ntracking. The extra time you spend building with -g is the time you save\nyourself and the users from having to recompile everything because a bug\nneeds to be tracked down. And when you rebuild, the bug might not be\nreproduceable.\n\nI don't buy the disk space argument either. 
If you're worried about a few\nmegabytes then you going to have a lot of trouble running a database.\nAnd if you're still worried, you can run install-strip, which is the\nstandard way to do it.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 8 Apr 2002 11:56:17 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Debugging symbols by default" } ]
[ { "msg_contents": "-----Original Message-----\nFrom: Tom Lane [mailto:tgl@sss.pgh.pa.us]\nSent: Friday, April 05, 2002 3:42 PM\nTo: Peter Bierman\nCc: Dann Corbit; pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Suggestion for optimization \n\n\nPeter Bierman <bierman@apple.com> writes:\n> ... Your comment: \"An\n> accurate cardinality figure can greatly enhance the optimizer's\n> ability to perform joins in the correct order\" was intriguing, and I'd\n> be interested in Tom's thoughts on just that bit.\n\nApproximate figures are quite sufficient for the planner's purposes.\nAFAICS, making them exact would not improve the planning estimates\nat all, because there are too many other sources of error. We have\napproximate stats already via vacuum/analyze statistics gathering.\n>>\nWhat happens if someone deletes 75% of a table?\nWhat happens if someone imports 30 times more rows than are already in\nthe table?\nWhat happens if one table is remarkably small or even empty and you are\nunaware?\n\nIn extreme cases, it can mean orders of magnitude performance\ndifference.\n<<\n", "msg_date": "Fri, 5 Apr 2002 15:51:00 -0800", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: Suggestion for optimization " }, { "msg_contents": "> AFAICS, making them exact would not improve the planning estimates\n> at all, because there are too many other sources of error. We have\n> approximate stats already via vacuum/analyze statistics gathering.\n> >>\n> What happens if someone deletes 75% of a table?\n> What happens if someone imports 30 times more rows than are already in\n> the table?\n> What happens if one table is remarkably small or even empty and you are\n> unaware?\n\nIf you are unaware of any of the above, you'll get poorer performance.\nJust make sure you run ANALYZE often enough. 
Anyone who does a massive\nchange in the number of rows in a table, or updates most of a table should\nalways do an ANALYZE afterwards.\n\nChris\n\n\n", "msg_date": "Sat, 6 Apr 2002 18:43:27 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Suggestion for optimization " } ]
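To make Dann's "orders of magnitude" point concrete: a cost-based planner picks a join order from its estimated row counts, and an estimate left stale by a bulk delete or load can flip the choice. The cost formula and numbers below are invented for illustration only; they are not the PostgreSQL planner's actual cost equations.

```python
import math

# Toy nested-loop cost with an index probe on the inner side:
# roughly outer_rows * (1 + log2(inner_rows)). Invented for the sketch.
def nestloop_cost(outer_rows, inner_rows):
    return outer_rows * (1 + math.log2(inner_rows))

def pick_outer(est_a, est_b):
    """Choose which table drives the loop, given ESTIMATED row counts."""
    return "A" if nestloop_cost(est_a, est_b) <= nestloop_cost(est_b, est_a) else "B"

true_a, true_b = 1_000_000, 1_000  # reality after a bulk load into A
stale_a = 100                      # stale estimate from before the load

plan_stale = pick_outer(stale_a, true_b)   # stale stats: A looks tiny -> "A"
plan_fresh = pick_outer(true_a, true_b)    # after ANALYZE -> "B"

# Cost actually paid for each choice, at the true sizes:
paid_stale = nestloop_cost(true_a, true_b)  # ~1.1e7: A driving the loop
paid_fresh = nestloop_cost(true_b, true_a)  # ~2.1e4: B driving the loop
print(plan_stale, plan_fresh)  # A B
```

Under this toy model the mis-planned order costs several hundred times more at the true table sizes, which is why the advice above is simply to run ANALYZE after any large change to a table.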
[ { "msg_contents": "I was originally planning to revise pg_aggregate along the same lines\nas pg_proc and so forth: add an aggnamespace column and fix the search\ncode to be namespace-aware. But it seemed a tad annoying that standard\nfunction lookups would thereby incur *two* namespace-aware searches:\none in pg_aggregate and one in pg_proc. Thinking about that, it\noccurred to me that there is another way. Why shouldn't aggregate\nfunctions have entries in pg_proc? Then one search would cover both\npossibilities, and we could eliminate some duplicate code in the parser.\n\nDoing things this way would mean that one could not create an aggregate\nfunction with the same name and input arguments as a regular function\n(at least, not in the same namespace). However, doing so has always\nbeen a bad idea, and it seems like it'd be a step forward not back for\nthe system to reject it as a naming conflict.\n\nA more serious objection is that this will break client applications\nthat know about the pg_aggregate table. However, 7.3 is already going\nto force a lot of reprogramming of clients that inspect system tables,\nbecause of the addition of namespaces. Restructuring pg_aggregate\ndoesn't seem like it makes life all that much worse.\n\nI would envision this working like so:\n\nIn pg_proc: add a boolean column \"proisagg\" to mark function entries\nthat are aggregates. A row for an aggregate function would contain\na pointer to a dummy C function that would just raise an error if\ncalled (which shouldn't ever happen, but just in case some bit of\ncode doesn't get updated, this would be a good safety check).\n\nIn pg_aggregate: remove the columns aggname, aggowner, aggbasetype,\naggfinaltype, and add a column aggfnoid containing the OID of the\naggregate's pg_proc row. (pg_aggregate itself doesn't need OIDs\nanymore, and its only index will be on aggfnoid.) 
Essentially this\nreduces pg_aggregate to an auxiliary extension of pg_proc, carrying\nthe fields aggtransfn, aggfinalfn, aggtranstype, agginitval for those\npg_proc rows that need them.\n\nAn interesting aspect of this is that the catalog structure would now\nbe prepared to support aggregate functions with more than one input,\nwhich is a feature we've been asked for occasionally. I am *not*\nvolunteering to make that happen right now ... but the catalog\nstructures would be ready for it.\n\nComments, objections, better ideas?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 Apr 2002 02:27:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "RFC: Restructuring pg_aggregate" }, { "msg_contents": "> A more serious objection is that this will break client applications\n> that know about the pg_aggregate table. However, 7.3 is already going\n> to force a lot of reprogramming of clients that inspect system tables,\n> because of the addition of namespaces. Restructuring pg_aggregate\n> doesn't seem like it makes life all that much worse.\n\nHow about putting a note in the 7.3 release that tells them not to rely on\nsequential attnums in pg_attribute anymore. That should make it easier\nto implement column dropping in the future?\n\nChris\n\n\n", "msg_date": "Sat, 6 Apr 2002 18:41:44 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate" }, { "msg_contents": "Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:\n> How about putting a note in the 7.3 release that tells them not to rely on\n> sequential attnums in pg_attribute anymore. That should make it easier\n> to implement column dropping in the future?\n\nThat seems like pure speculation to me, if not outright wrong. If we\ncan't renumber the attnums it'll be because the table's tuples still\nhave data at a particular column position. 
Since we'll need to know\nthe datatype of that data (if only to skip over it correctly), there\nwill still have to be a pg_attribute entry for the dropped column.\nThus, what people will more likely have to watch out for is pg_attribute\nrows marked \"deleted\" in some fashion.\n\nWe are actually not that far away from being able to do DROP COLUMN,\nif people don't mind being slow to recover the space used by a dropped\ncolumn. It'd work like this:\n\n1. Add an \"attisdropped\" boolean to pg_attribute.\n\n2. DROP COLUMN sets this flag and changes attname to something like\n\"***deleted_NNN\". (Changing attname is only necessary to allow the\nsame column name to be reused without drawing a unique-index error.)\nThat's it --- it's done.\n\n3. Column lookup, expansion of *, etc have to be taught to ignore\ncolumns marked attisdropped.\n\nThe idea is that the extant data sits there but is invisible. Inserts\nof new rows in the table would always insert a NULL in the dropped\ncolumn (which'd fall out more or less for free, there being no way\nto tell the system to insert anything else). Over time, UPDATEs of\nextant rows would also replace the dropped data with NULLs.\n\nI suspect there are only about half a dozen key places that would have\nto explicitly check attisdropped. None of the low-level executor\nmachinery would care at all, since it's dealing with \"real\" tuples where\nthe attribute is still there, at least as a NULL.\n\nHiroshi's \"DROP_COLUMN_HACK\" was essentially along this line, but\nI think he made a representational mistake by trying to change the\nattnums of dropped columns to be negative values. That means that\na lot of low-level places *do* need to know about the dropped-column\nconvention, else they can't make any sense of tuple layouts. 
The\nnegative-attnum idea might have been a little easier for clients\ninspecting pg_attribute to cope with, but in practice I think they'd\nneed to be taught about dropped columns anyway --- as evidenced by\nyour remark suggesting that gaps in the sequence of positive attnums\nwould break clients.\n\n\t\t\tregards, tom lane\n\nPS: Once you have that, try this on for size: ALTER COLUMN is\n\n\tALTER DROP COLUMN;\n\tALTER ADD COLUMN newtype;\n\tUPDATE foo SET newcol = coercion_fn(oldcol);\n\nThat last couldn't be expressed as an SQL statement because the parser\nwouldn't allow access to oldcol, but there's nothing stopping it at the\nimplementation level.\n\nThis approach changes the user-visible column ordering, which'd be\na tad annoying, so probably something based on building a new version of\nthe table would be better. But as a quick hack this would be doable.\n\nActually, given the DROP feature a user could do it for himself:\n\n\tALTER ADD COLUMN tempcol newtype;\n\tUPDATE foo SET tempcol = coercion_fn(oldcol);\n\tALTER DROP COLUMN oldcol;\n\tALTER RENAME COLUMN tempcol to oldcol;\n\nwhich seems like an okay approach, especially since it'd allow the\nUPDATE computing the new column values to be of arbitrary complexity,\nnot just a simple coercion of one existing column.\n", "msg_date": "Sat, 06 Apr 2002 11:34:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: RFC: Restructuring pg_aggregate " }, { "msg_contents": "Tom Lane writes:\n\n> Why shouldn't aggregate functions have entries in pg_proc? 
Then one\n> search would cover both possibilities, and we could eliminate some\n> duplicate code in the parser.\n\nFurthermore, we could make the new function privileges apply to aggregates\nas well.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sat, 6 Apr 2002 17:50:01 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> Why shouldn't aggregate functions have entries in pg_proc?\n\n> Furthermore, we could make the new function privileges apply to aggregates\n> as well.\n\nGood point. Another thing that would fall out for free is that the\naggregate type-coercion rules would become exactly like the function\ntype-coercion rules; right now they are a tad stupider.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 Apr 2002 20:58:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: RFC: Restructuring pg_aggregate " }, { "msg_contents": "> That seems like pure speculation to me, if not outright wrong. If we\n> can't renumber the attnums it'll be because the table's tuples still\n> have data at a particular column position. Since we'll need to know\n> the datatype of that data (if only to skip over it correctly), there\n> will still have to be a pg_attribute entry for the dropped column.\n> Thus, what people will more likely have to watch out for is pg_attribute\n> rows marked \"deleted\" in some fashion.\n\nYou know there is a way to do this and not break client compatibility.\nRename the current pg_attribute relation to pg_baseatt or something. Fix\nall references to it in the code. 
Create a system view called pg_attribute\nwhich is SELECT * (except attisdropped) FROM pg_baseattr WHERE NOT\nattisdropped.\n\nMore work though, of course.\n\n> We are actually not that far away from being able to do DROP COLUMN,\n> if people don't mind being slow to recover the space used by a dropped\n> column. It'd work like this:\n\nLogical vs. physical column numbers would still be quite handy tho. If\nyou're going to break compatibility, may as well do all breaks at once?\n\n> 1. Add an \"attisdropped\" boolean to pg_attribute.\n>\n> 2. DROP COLUMN sets this flag and changes attname to something like\n> \"***deleted_NNN\". (Changing attname is only necessary to allow the\n> same column name to be reused without drawing a unique-index error.)\n> That's it --- it's done.\n>\n> 3. Column lookup, expansion of *, etc have to be taught to ignore\n> columns marked attisdropped.\n>\n> The idea is that the extant data sits there but is invisible. Inserts\n> of new rows in the table would always insert a NULL in the dropped\n> column (which'd fall out more or less for free, there being no way\n> to tell the system to insert anything else). Over time, UPDATEs of\n> extant rows would also replace the dropped data with NULLs.\n\nWould it be possible to modify VACUUM FULL in some way so as to permanently\nremove these tuples? Surely people would like an actual space-saving column\ndrop?\n\nChris\n\n", "msg_date": "Mon, 8 Apr 2002 10:17:59 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> You know there is a way to do this and not break client compatibility.\n> Rename the current pg_attribute relation to pg_baseatt or something. Fix\n> all references to it in the code. 
Create a system view called pg_attribute\n> which is SELECT * (except attisdropped) FROM pg_baseattr WHERE NOT\n> attisdropped.\n\nWasn't your original concern that the attnums wouldn't be consecutive?\nHow is this view going to hide that?\n\n> Logical vs. physical column numbers would still be quite handy tho.\n\nBut confusing as all hell, at *all* levels of the code ... I've thought\nabout that quite a bit, and I can't see that we could expect to make it\nwork without a lot of hard-to-find bugs. Too many places where it's\nnot instantly obvious which set of numbers you should be using.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 07 Apr 2002 23:33:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: RFC: Restructuring pg_aggregate " }, { "msg_contents": "Tom Lane wrote:\n \n> Hiroshi's \"DROP_COLUMN_HACK\" was essentially along this line, but\n> I think he made a representational mistake by trying to change the\n> attnums of dropped columns to be negative values. \n\nNegative attnums had 2 advantages then. It had a big\nadvantage that initdb isn't needed. Note that it was\nonly a trial hack and there was no consensus on the way.\nIt was very easy to change the implementation to use\nattisdropped. OTOH physical/logical attnums approach\nneeded the change on pg_class, pg_attribute and so\nI've never had a chance to open the patch to public. \nIt was also more sensitive about oversights of needed \nchanges than the attisdropped flag approach. \n\n> That means that\n> a lot of low-level places *do* need to know about the dropped-column\n> convention, else they can't make any sense of tuple layouts.\n\nWhy ? 
As you already mentioned, there were not that many places\nto be changed.\n\nWell what's changed since then ?\n\nregards,\nHiroshi Inoue\n", "msg_date": "Wed, 10 Apr 2002 19:39:06 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Tom Lane wrote:\n>> That means that\n>> a lot of low-level places *do* need to know about the dropped-column\n>> convention, else they can't make any sense of tuple layouts.\n\n> Why ? As you already mentioned, there were not that many places\n> to be changed.\n\nThere are not many places to change if the implementation uses\nattisdropped, because we *only* have to hide the existence of the column\nat the parser level. The guts of the system don't know anything funny\nis going on; a dropped column looks the same as an undropped one\nthroughout the executor. But with negative attnums, even such basic\nroutines as heap_formtuple have to know about it, no?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Apr 2002 10:19:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: RFC: Restructuring pg_aggregate " }, { "msg_contents": "Hiroshi Inoue wrote:\n> Tom Lane wrote:\n> \n> > Hiroshi's \"DROP_COLUMN_HACK\" was essentially along this line, but\n> > I think he made a representational mistake by trying to change the\n> > attnums of dropped columns to be negative values. \n> \n> Negative attnums had 2 advantages then. It had a big\n> advantage that initdb isn't needed. Note that it was\n> only a trial hack and there was no consensus on the way.\n> It was very easy to change the implementation to use\n> attisdropped. OTOH physical/logical attnums approach\n> needed the change on pg_class, pg_attribute and so\n> I've never had a chance to open the patch to public. 
\n> It was also more sensitive about oversights of needed \n> changes than the attisdropped flag approach. \n> \n> > That means that\n> > a lot of low-level places *do* need to know about the dropped-column\n> > convention, else they can't make any sense of tuple layouts.\n> \n> Why ? As you already mentioned, there were not that many places\n> to be changed.\n> \n> Well what's changed since then ?\n\nHere is an old email from me that outlines the idea of having a\nphysical/logical attribute numbering system, and the advantages. For\nimplementation, I thought we could do most of the work by filtering what\nthe client saw, and let the server just worry about physical numbering,\nexcept for 'SELECT *' expansion.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026", "msg_date": "Wed, 10 Apr 2002 12:59:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate" }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Tom Lane wrote:\n> >> That means that\n> >> a lot of low-level places *do* need to know about the dropped-column\n> >> convention, else they can't make any sense of tuple layouts.\n> \n> > Why ? As you already mentioned, there were not that many places\n> > to be changed.\n> \n> There are not many places to change if the implementation uses\n> attisdropped, because we *only* have to hide the existence of the column\n> at the parser level. The guts of the system don't know anything funny\n> is going on; a dropped column looks the same as an undropped one\n> throughout the executor. 
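The attisdropped division of labor described above (hide the column at the parser level only, leave the executor-level tuple layout untouched) can be illustrated with a toy model. This is a Python sketch of the idea, not PostgreSQL code; the class and method names here are invented for illustration:

```python
# Toy model of the attisdropped scheme: name lookup and "*" expansion
# (parser-level operations) skip dropped columns, while tuple formation
# (an executor-level operation) keeps one slot per attnum regardless.

class Column:
    def __init__(self, name, attnum):
        self.name = name
        self.attnum = attnum        # on-disk position, never reused
        self.isdropped = False

class Table:
    def __init__(self, colnames):
        self.cols = [Column(n, i + 1) for i, n in enumerate(colnames)]

    def lookup(self, name):
        # parser level: dropped columns are invisible to name lookup
        for c in self.cols:
            if c.name == name and not c.isdropped:
                return c
        raise KeyError(name)

    def drop_column(self, name):
        col = self.lookup(name)
        col.isdropped = True
        col.name = "***deleted_%d" % col.attnum   # free the name for reuse

    def expand_star(self):
        # parser level: "*" expansion skips dropped columns
        return [c.name for c in self.cols if not c.isdropped]

    def form_tuple(self, values_by_name):
        # executor level: one slot per column ever defined; the dropped
        # column simply ends up NULL (None), so layout code is unchanged
        return tuple(values_by_name.get(c.name) for c in self.cols)

t = Table(["a", "b", "c"])
t.drop_column("b")
print(t.expand_star())                 # ['a', 'c']  ("b" is hidden)
print(t.form_tuple({"a": 1, "c": 3}))  # (1, None, 3)  (layout unchanged)
```

Note that `form_tuple` never consults `isdropped`; that is the sense in which low-level routines like heap_formtuple would need no special cases under this scheme.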
But with negative attnums, even such basic\n> routines as heap_formtuple have to know about it, no?\n\nWhen a tuple descriptor is made, the info of\ndropped columns is placed at (their physical\nposition - 1) index in the same way as ordinary\ncolumns. There are only a few places where conversions\nbetween negative attnums and the physical positions\nare needed.\n\nThe following is my posting more than 2 years ago.\nWhat's changed since then?\n\nregards,\nHiroshi Inoue\n\n I don't want a final implementation this time.\n What I want is to provide a quick hack for both others and me\n to judge whether this direction is good or not.\n\n My idea is essentially an invisible column implementation.\n DROP COLUMN would change the target pg_attribute tuple\n as follows..\n\t\n\tattnum -> an offset - attnum;\n\tatttypid -> 0\n\n We would be able to see where to change by tracking error/\n crashes caused by this change.\n\n I would also change attname to '*already dropped %d' for\n example to avoid duplicate attname.\n", "msg_date": "Thu, 11 Apr 2002 08:57:23 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> > Tom Lane wrote:\n> >\n> > > Hiroshi's \"DROP_COLUMN_HACK\" was essentially along this line, but\n> > > I think he made a representational mistake by trying to change the\n> > > attnums of dropped columns to be negative values.\n> >\n> > Negative attnums had 2 advantages then. It had a big\n> > advantage that initdb isn't needed. Note that it was\n> > only a trial hack and there was no consensus on the way.\n> > It was very easy to change the implementation to use\n> > attisdropped. 
OTOH physical/logical attnums approach\n> > needed the change on pg_class, pg_attribute and so\n> > I've never had a chance to open the patch to public.\n> > It was also more sensitive about oversights of needed\n> > changes than the attisdropped flag approach.\n> >\n> > > That means that\n> > > a lot of low-level places *do* need to know about the dropped-column\n> > > convention, else they can't make any sense of tuple layouts.\n> >\n> > Why ? As you already mentioned, there were not that many places\n> > to be changed.\n> >\n> > Well what's changed since then ?\n> \n> Here is an old email from me that outlines the idea of having a\n> physical/logical attribute numbering system, and the advantages. \n\nI already tried physical/logical attribute implementation \npretty long ago. Where are new ideas to solve the problems\nthat the approach has ?\n\nregards,\nHiroshi Inoue\n", "msg_date": "Thu, 11 Apr 2002 09:10:24 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate" }, { "msg_contents": "Hiroshi Inoue wrote:\n> > > Why ? As you already mentioned, there were not that many places\n> > > to be changed.\n> > >\n> > > Well what's changed since then ?\n> > \n> > Here is an old email from me that outlines the idea of having a\n> > physical/logical attribute numbering system, and the advantages. \n> \n> I already tried physical/logical attribute implementation \n> pretty long ago. Where are new ideas to solve the problems\n> that the approach has ?\n\nGood question. I am suggesting more than just the drop column fix. It\ncould be used for smaller data files to reduce padding, fix for\ninheritance problems with ADD COLUMN, and performance of moving\nvarlena's to the end of the row.\n\nAlso, my idea was to have the physical/logical mapping happen closer to\nthe client, so the backend mostly only deals with physical. 
I was\nthinking of having the libpq backend communication layer actually do the\nreordering of the return results.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 10 Apr 2002 20:13:14 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> > > > Why ? As you already mentioned, there were not that many places\n> > > > to be changed.\n> > > >\n> > > > Well what's changed since then ?\n> > >\n> > > Here is an old email from me that outlines the idea of having a\n> > > physical/logical attribute numbering system, and the advantages.\n> >\n> > I already tried physical/logical attribute implementation\n> > pretty long ago. Where are new ideas to solve the problems\n> > that the approach has ?\n> \n> Good question. I am suggesting more than just the drop column fix. 
It\n> could be used for smaller data files to reduce padding, fix for\n> inheritance problems with ADD COLUMN, and performance of moving\n> varlena's to the end of the row.\n> \n> Also, my idea was to have the physical/logical mapping happen closer to\n> the client, so the backend mostly only deals with physical.\n\nIf the client has to bear some part of the work, isn't the invisible\ncolumn approach much simpler?\n\nI've put quite a lot of time into the DROP COLUMN feature, but\nI am really disappointed to see the comments in this thread.\nWhat DROP COLUMN has brought me seems only a waste of time.\n\nPossibly I should have just pushed one of the implementations through.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Thu, 11 Apr 2002 13:01:37 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate" }, { "msg_contents": "> If the client has to bear some part of the work, isn't the invisible\n> column approach much simpler?\n>\n> I've put quite a lot of time into the DROP COLUMN feature, but\n> I am really disappointed to see the comments in this thread.\n> What DROP COLUMN has brought me seems only a waste of time.\n\nI kind of agree with Hiroshi here. All I want to be able to do is drop\ncolumns from my tables, and reclaim the space. I've got all sorts of\nproduction tables with columns just sitting there doing nothing, awaiting\nthe time that I can happily drop them. 
It seems to me that whatever we do\nwill require some kind of client breakage.\n\nChris\n\n", "msg_date": "Thu, 11 Apr 2002 12:15:02 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate" }, { "msg_contents": "Hiroshi Inoue wrote:\n> If the client has to bear some part of the work, isn't the invisible\n> column approach much simpler?\n> \n> I've put quite a lot of time into the DROP COLUMN feature, but\n> I am really disappointed to see the comments in this thread.\n> What DROP COLUMN has brought me seems only a waste of time.\n> \n> Possibly I should have just pushed one of the implementations through.\n\nI understand. I personally think maybe we have been a little too picky\nabout patches being accepted. Sometimes when something is not 100%\nperfect, we do nothing rather than accept the patch, and replace or\nimprove it later. The DROP COLUMN approach you had clearly is one of\nthem.\n\nPersonally, now that we have relfilenode, I think we should implement\ndrop of columns by just recreating the table without the column.\n\nThe big problem with DROP COLUMN was that we couldn't decide on what to\ndo, so we did nothing, which is probably worse than just choosing one\nand doing it.\n\nOur code is littered with my 80% solutions for LIKE optimization,\noptimizer statistics, BETWEEN, and lots of other solutions that have met\na need and are now being replaced with better code. My code was not\ngreat, but if I hadn't done them, PostgreSQL would have had even more\nmissing features than we do now. DROP COLUMN is clearly one where we\nmissed getting something that works and would keep people happy.\n\nAs far as my proposal, my idea was not to do it in the client, but\nrather to do it just before the data is sent from/to the client. Maybe\nthat is a stupid idea. I never really researched it. 
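Bruce's idea of applying the logical ordering only where result rows are formatted for the client could be sketched like this (Python, purely illustrative; "attlognum" is a hypothetical logical-position attribute, not an existing catalog column):

```python
# Everything inside the backend works in physical column order; only the
# layer that ships result rows to the client applies the logical order.

def to_client_order(physical_row, attlognums):
    """Reorder a physically-ordered row into the client-visible order."""
    # pair each value with its hypothetical logical position, sort by it
    return tuple(v for _, v in sorted(zip(attlognums, physical_row)))

# Suppose the row is physically stored as (id, payload, ts), say to move
# the varlena to a better spot, but the user declared (id, ts, payload).
physical_row = (42, "some text", "2002-04-11")
attlognums = (1, 3, 2)
print(to_client_order(physical_row, attlognums))
# (42, '2002-04-11', 'some text')
```

Everything upstream of this step would keep using physical numbers, which is the attraction of doing the reordering in one place; the hard part Hiroshi ran into is that many code paths are not obviously on one side of that line or the other.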
My idea was more\nto make the physical/logical column numbers distinct so certain tricks\ncould be performed. It wasn't for DROP COLUMN specifically, and in fact\nto do DROP COLUMN with my code, there would have to be more code similar\nto what you had where clients would see a column and have to skip it. I\nwas focusing more on physical/logical to enable other features.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Apr 2002 00:16:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > If the client has to bear some part of the work, isn't the invisible\n> > column approach much simpler?\n> >\n> > I've put quite a lot of time into the DROP COLUMN feature, but\n> > I am really disappointed to see the comments in this thread.\n> > What DROP COLUMN has brought me seems only a waste of time.\n> \n> I kind of agree with Hiroshi here. All I want to be able to do is drop\n> columns from my tables, and reclaim the space. I've got all sorts of\n> production tables with columns just sitting there doing nothing, awaiting\n> the time that I can happily drop them. It seems to me that whatever we do\n> will require some kind of client breakage.\n\nActually, what we need to do to reclaim space is to enable table\nrecreation without the column, now that we have relfilenode for file\nrenaming. It isn't hard to do, but no one has focused on it. I want to\nfocus on it, but have not had the time, obviously, and would be very\nexcited to assist someone else.\n\nHiroshi's fine idea of marking certain columns as unused would not have\nreclaimed the missing space, just as my idea of physical/logical column\ndistinction would not reclaim the space either. 
Again, my\nphysical/logical idea is more for fixing other problems and\noptimization, not DROP COLUMN.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Apr 2002 00:19:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate" }, { "msg_contents": "> Actually, what we need to do to reclaim space is to enable table\n> recreation without the column, now that we have relfilenode for file\n> renaming. It isn't hard to do, but no one has focused on it. I want to\n> focus on it, but have not had the time, obviously, and would be very\n> excited to assist someone else.\n\nI'm happy to help - depends if it's within my skill level or not tho. Most\nof the time the problem I have is finding where to make the changes, not\nactually making the changes themselves. So, count me in.\n\n> Hiroshi's fine idea of marking certain columns as unused would not have\n> reclaimed the missing space, just as my idea of physical/logical column\n> distinction would not reclaim the space either. Again, my\n> physical/logical idea is more for fixing other problems and\n> optimization, not DROP COLUMN.\n\nQuestion: Is it _possible_ to reclaim the space during a VACUUM FULL? I do\nnot know enough about the file format to know this. 
What happens if the\nVACUUM is stopped halfway thru reclaiming a column in a table?\n\nBruce: WRT modifying libpq to do the translation - won't this cause probs\nfor JDBC and ODBC people?\n\nChris\n\n", "msg_date": "Thu, 11 Apr 2002 12:30:08 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > Actually, what we need to do to reclaim space is to enable table\n> > recreation without the column, now that we have relfilenode for file\n> > renaming. It isn't hard to do, but no one has focused on it. I want to\n> > focus on it, but have not had the time, obviously, and would be very\n> > excited to assist someone else.\n> \n> I'm happy to help - depends if it's within my skill level or not tho. Most\n> of the time the problem I have is finding where to make the changes, not\n> actually making the changes themselves. So, count me in.\n\nOK, let me mention that I have had great success with chat sessions with\nPostgreSQL developers. They can code and ask questions and I can answer\nquickly. Seems to be speeding things along for some people. I am:\n\t\n\tAIM\tbmomjian\n\tICQ\t151255111\n\tYahoo\tbmomjian\n\tMSN\troot@candle.pha.pa.us\n\nI am also on the PostgreSQL IRC channel. As far as where to start, I\nthink the CLUSTER command would be a good start because it just reorders\nthe existing table. Then DROP COLUMN can come out of that by removing\nthe column during the copy, and removing mention of the column from\npg_attribute, and of course renumbering the gap.\n\n> > Hiroshi's fine idea of marking certain columns as unused would not have\n> > reclaimed the missing space, just as my idea of physical/logical column\n> > distinction would not reclaim the space either. 
Again, my\n> > physical/logical idea is more for fixing other problems and\n> > optimization, not DROP COLUMN.\n> \n> Question: Is it _possible_ to reclaim the space during a VACUUM FULL? I do\n> not know enough about the file format to know this. What happens if the\n> VACUUM is stopped halfway thru reclaiming a column in a table?\n\nNot really. It moves only whole tuples, and only certain ones.\n\n> Bruce: WRT modifying libpq to do the translation - won't this cause probs\n> for JDBC and ODBC people?\n\nNo, not in libpq, but rather in backend/libpq, the backend part of the\nconnection. My idea is for the user to think things are in a different\norder in the row than they actually appear on disk. I haven't really\nresearched it enough to understand its validity.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Apr 2002 00:35:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> \n> > If the client has to bear some part of the work, isn't the invisible\n> > column approach much simpler?\n> >\n> > I've put quite a lot of time into the DROP COLUMN feature, but\n> > I am really disappointed to see the comments in this thread.\n> > What DROP COLUMN has brought me seems only a waste of time.\n> \n> I kind of agree with Hiroshi here. All I want to be able to do is drop\n> columns from my tables, and reclaim the space. I've got all sorts of\n> production tables with columns just sitting there doing nothing, awaiting\n> the time that I can happily drop them.\n\n> It seems to me that whatever we do\n> will require some kind of client breakage.\n\nPhysical/logical attnum approach was mainly to not break\nclients. 
I implemented it on trial but the implementation\nwas hard to maintain unfortunately. It's pretty difficult\nto decide whether the number is physical or logical in\nmany cases.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Thu, 11 Apr 2002 13:45:22 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate" }, { "msg_contents": "Hiroshi Inoue wrote:\n> > It seems to me that whatever we do\n> > will require some kind of client breakage.\n> \n> Physical/logical attnum approach was mainly to not break\n> clients. I implemented it on trial but the implementation\n> was hard to maintain unfortunately. It's pretty difficult\n> to decide whether the number is physical or logical in\n> many cases.\n\nHow many cases do we have that use logical numbering? Hiroshi, I know\nyou are the expert on this. I know 'SELECT *' uses it, but are there\nother places that need to know about the logical ordering of the\ncolumns?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Apr 2002 00:50:15 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> > If the client has to bear some part of the work, isn't the invisible\n> > column approach much simpler?\n> >\n> > I've put quite a lot of time into the DROP COLUMN feature, but\n> > I am really disappointed to see the comments in this thread.\n> > What DROP COLUMN has brought me seems only a waste of time.\n> >\n> > Possibly I should have just pushed one of the implementations through.\n> \n> I understand. I personally think maybe we have been a little too picky\n> about patches being accepted. 
Sometimes when something is not 100%\n> perfect, we do nothing rather than accept the patch, and replace or\n> improve it later. The DROP COLUMN approach you had clearly is one of\n> them.\n\nI'm not complaining about the rejection of my patch.\nIf it has an essential flaw we had better reject it.\nWhat I'm complaining about is why it is OK now when\nnothing has changed.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Thu, 11 Apr 2002 14:01:35 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate" }, { "msg_contents": "Hiroshi Inoue wrote:\n> Bruce Momjian wrote:\n> > \n> > Hiroshi Inoue wrote:\n> > > If the client has to bear some part of the work, isn't the invisible\n> > > column approach much simpler?\n> > >\n> > > I've put quite a lot of time into the DROP COLUMN feature, but\n> > > I am really disappointed to see the comments in this thread.\n> > > What DROP COLUMN has brought me seems only a waste of time.\n> > >\n> > > Possibly I should have just pushed one of the implementations through.\n> > \n> > I understand. I personally think maybe we have been a little too picky\n> > about patches being accepted. Sometimes when something is not 100%\n> > perfect, we do nothing rather than accept the patch, and replace or\n> > improve it later. The DROP COLUMN approach you had clearly is one of\n> > them.\n> \n> I'm not complaining about the rejection of my patch.\n> If it has an essential flaw we had better reject it.\n> What I'm complaining about is why it is OK now when\n> nothing has changed.\n\nSure, I understand.\n\nMy physical/logical idea may have the same problems as your DROP COLUMN\nidea, and may be as rapidly rejected. I am just throwing it out for\ndiscussion.\n\nI am not sure I like it. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Apr 2002 01:03:33 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate" }, { "msg_contents": "> Actually, what we need to do to reclaim space is to enable table\n> recreation without the column, now that we have relfilenode for file\n> renaming. It isn't hard to do, but no one has focused on it. I want to\n> focus on it, but have not had the time, obviously, and would be very\n> excited to assist someone else.\n>\n> Hiroshi's fine idea of marking certain columns as unused would not have\n> reclaimed the missing space, just as my idea of physical/logical column\n> distinction would not reclaim the space either. Again, my\n> physical/logical idea is more for fixing other problems and\n> optimization, not DROP COLUMN.\n\nHmmm. Personally, I think that a DROP COLUMN that cannot reclaim space is\nkinda useless - you may as well just use a view!!!\n\nSo how would this occur?:\n\n1. Lock target table for writing (allow reads)\n2. Begin a table scan on target table, writing\n a new file with a particular filenode\n3. Delete the attribute row from pg_attribute\n4. Point the table in the catalog to the new filenode\n5. Release locks\n6. Commit transaction\n7. Delete orphan filenode\n\ni. Upon postmaster startup, remove any orphaned filenodes\n\nThe real problem here is the fact that there are now missing attnos in\npg_attribute. Either that's handled or we renumber the attnos - which is\nalso quite hard?\n\nThis, of course, suffers from the double size data problem - but I believe\nthat it does not matter - we just need to document it.\n\nInterestingly enough, Oracle supports\n\nALTER TABLE foo SET UNUSED col;\n\nWhich invalidates the attribute entry, and:\n\nALTER TABLE foo DROP col CHECKPOINT 1000;\n\nWhich actually reclaims the space. 
The optional CHECKPOINT [n] clause\ntells Oracle to do a checkpoint every [n] rows.\n\n\"Checkpointing cuts down the amount of undo logs accumulated during the\ndrop column operation to avoid running out of rollback segment space.\nHowever, if this statement is interrupted after a checkpoint has been\napplied, the table remains in an unusable state. While the table is\nunusable, the only operations allowed on it are DROP TABLE, TRUNCATE\nTABLE, and ALTER TABLE DROP COLUMNS CONTINUE (described below). \"\n\nChris\n\n\n", "msg_date": "Fri, 12 Apr 2002 00:00:16 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate" }, { "msg_contents": "Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:\n> The real problem here is the fact that there are now missing attnos in\n> pg_attribute. Either that's handled or we renumber the attnos - which is\n> also quite hard?\n\nUpdating pg_attribute per se is not so hard --- just store new copies of\nall the rows for the table. However, propagating the changes into other\nplaces could be quite painful (I'm thinking of column numbers in stored\nconstraints, rules, etc).\n\nIt seems to me that reducing the column to NULLs already gets you the\nmajority of the space savings. I don't think there is a case to be made\nthat getting back that last bit is worth the pain involved, either in\nimplementation effort or direct runtime costs (do you really want a DROP\nCOLUMN to force an immediate rewrite of the whole table?)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Apr 2002 12:22:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: RFC: Restructuring pg_aggregate " }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > Actually, what we need to do to reclaim space is to enable table\n> > recreation without the column, now that we have relfilenode for file\n> > renaming. 
It isn't hard to do, but no one has focused on it. I want to\n> > focus on it, but have not had the time, obviously, and would be very\n> > excited to assist someone else.\n> >\n> > Hiroshi's fine idea of marking certain columns as unused would not have\n> > reclaimed the missing space, just as my idea of physical/logical column\n> > distinction would not reclaim the space either. Again, my\n> > physical/logical idea is more for fixing other problems and\n> > optimization, not DROP COLUMN.\n> \n> Hmmm. Personally, I think that a DROP COLUMN that cannot reclaim space is\n> kinda useless - you may as well just use a view!!!\n\nYep, kind of a problem. It is a tradeoff between double diskspace/speed\nand removing column from disk. I guess that's why Oracle has both.\n\n> \n> So how would this occur?:\n> \n> 1. Lock target table for writing (allow reads)\n> 2. Begin a table scan on target table, writing\n> a new file with a particular filenode\n> 3. Delete the attribute row from pg_attribute\n> 4. Point the table in the catalog to the new filenode\n> 5. Release locks\n> 6. Commit transaction\n> 7. Delete orhpan filenode\n\nYep, something like that. CLUSTER is a good start. DROP COLUMN just\ndeals with the attno too. You would have to renumber them to fill the\ngap.\n\n> i. Upon postmaster startup, remove any orphaned filenodes\n\nActually, we don't have a good solution for finding orphaned filenodes\nright now. I do have some code that tries to do this as part of VACUUM\nbut it was not 100% perfect, so it was rejected. I am willing to open\nthe discussion to see if a perfect solution can be found.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Apr 2002 12:23:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate" }, { "msg_contents": "Tom Lane wrote:\n> Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:\n> > The real problem here is the fact that there are now missing attnos in\n> > pg_attribute. Either that's handled or we renumber the attnos - which is\n> > also quite hard?\n> \n> Updating pg_attribute per se is not so hard --- just store new copies of\n> all the rows for the table. However, propagating the changes into other\n> places could be quite painful (I'm thinking of column numbers in stored\n> constraints, rules, etc).\n> \n> It seems to me that reducing the column to NULLs already gets you the\n> majority of the space savings. I don't think there is a case to be made\n> that getting back that last bit is worth the pain involved, either in\n> implementation effort or direct runtime costs (do you really want a DROP\n> COLUMN to force an immediate rewrite of the whole table?)\n\nThat is an excellent point about having to fix all the places that refer\nto attno. In fact, we have been moving away from attname references to\nattno references for a while, so it only gets worse. Tom is also\ncorrect that setting it to NULL removes the problem of disk space usage\nquite easily.\n\nThat only leaves the problem of having gaps in the pg_attribute for that\nrelation, and as I remember, that was the problem for Hiroshi's DROP\nCOLUMN change, but at this point, after years of delay with no great\nsolution on the horizon, we may as well just get this working.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Apr 2002 12:31:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> Why shouldn't aggregate functions have entries in pg_proc? Then one\n>> search would cover both possibilities, and we could eliminate some\n>> duplicate code in the parser.\n\n> Furthermore, we could make the new function privileges apply to aggregates\n> as well.\n\nGRANT/REVOKE FUNCTION will now work on aggregate functions too (is there\nany value in making a variant syntax for aggregates?). However, I\ndidn't implement enforcement of the EXECUTE privilege yet. I was\nslightly bemused to notice that your implementation of it for regular\nfunctions tests the privilege at plan startup but doesn't actually throw\nthe error until the function is called. What's the point of that?\nSeems like we might as well throw the error in init_fcache and not\nbother with storing a boolean.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Apr 2002 17:26:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: RFC: Restructuring pg_aggregate " }, { "msg_contents": "Tom Lane writes:\n\n> I was slightly bemused to notice that your implementation of it for\n> regular functions tests the privilege at plan startup but doesn't\n> actually throw the error until the function is called. What's the\n> point of that? Seems like we might as well throw the error in\n> init_fcache and not bother with storing a boolean.\n\nYeah, it's a bit funny. 
I wanted to keep the fcache code from doing\nanything not to do with caching, and I wanted to keep the permission check\nin the executor, like it is for tables.\n\nThere were a couple of cases, which I have not fully explored yet, for\nwhich this seemed like a good idea, such as some functions being in the\nplan but not being executed, or the permission check being avoided for\nsome functions (e.g., cast functions).\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 11 Apr 2002 18:01:55 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> I was slightly bemused to notice that your implementation of it for\n>> regular functions tests the privilege at plan startup but doesn't\n>> actually throw the error until the function is called. What's the\n>> point of that? Seems like we might as well throw the error in\n>> init_fcache and not bother with storing a boolean.\n\n> Yeah, it's a bit funny. I wanted to keep the fcache code from doing\n> anything not to do with caching, and I wanted to keep the permission check\n> in the executor, like it is for tables.\n\nWell, init_fcache is part of executor startup, so that seems reasonably\nparallel to the table case. I do not buy the idea that we shouldn't\nthrow an error if the function happens not to be called --- that makes\nthe behavior dependent on both planner choices and data conditions,\nwhich seems like a bad idea. For comparison, we *will* throw a\npermission error on tables even if the actual execution of the plan\nturns out never to read a single row from that table.\n\n(After a bit of code reading --- actually, at present init_fcache\ndoesn't get called until first use of the function, so it's a\ndistinction without a difference. 
But I have been thinking about\nrevising this so that function caches are set up during plan\ninitialization, which is why I'm questioning the point now.)\n\n> There were a couple of cases, which I have not fully explored yet, for\n> which this seemed like a good idea, such as some functions being in the\n> plan but not being executed, or the permission check being avoided for\n> some functions (e.g., cast functions).\n\nCast functions? Not sure that I see the point of excluding them.\n\nWhat I'm inclined to do for aggregates is to check and throw the error\n(if any) during ExecInitAgg, ie, plan startup for the Agg plan node.\nI was just a tad startled to notice that the implementation wasn't\nparallel for plain functions ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Apr 2002 18:12:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: RFC: Restructuring pg_aggregate " }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> \n> Hmmm. Personally, I think that a DROP COLUMN that cannot reclaim space is\n> kinda useless - you may as well just use a view!!!\n> \n> So how would this occur?:\n> \n> 1. Lock target table for writing (allow reads)\n> 2. Begin a table scan on target table, writing\n> a new file with a particular filenode\n> 3. Delete the attribute row from pg_attribute\n> 4. Point the table in the catalog to the new filenode\n> 5. Release locks\n> 6. Commit transaction\n> 7. Delete orhpan filenode\n> \n> i. Upon postmaster startup, remove any orphaned filenodes\n> \n> The real problem here is the fact that there are now missing attnos in\n> pg_attribute. 
Either that's handled or we renumber the attnos - which is\n> also quite hard?\n\nThe attnos should be renumbered and it's easy.\nBut the above seems only 20% of the total implementation.\nIf the attnos are renumbered, all objects which refer to \nthe numbers must be invalidated or re-compiled ...\nFor example the parameters of foreign key constraints\ntriggers are consist of relname and colnames currently.\nThere has been a proposal that change to use relid or\ncolumn numbers instead. Certainly it makes RENAME happy\nbut DROP COLUMN unhappy. If there's a foreign key a_rel/1/3\nand the second column of the relation is dropped the\nparameter must be changed to be a_rel/1/2. If neither\nforeign key stuff nor DROP COLUMN take the other into\naccount, the consistency is easily broken. \n\nregards,\nHiroshi Inoue\n", "msg_date": "Fri, 12 Apr 2002 08:56:08 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate" }, { "msg_contents": "Hiroshi Inoue wrote:\n> Christopher Kings-Lynne wrote:\n> > \n> > Hmmm. Personally, I think that a DROP COLUMN that cannot reclaim space is\n> > kinda useless - you may as well just use a view!!!\n> > \n> > So how would this occur?:\n> > \n> > 1. Lock target table for writing (allow reads)\n> > 2. Begin a table scan on target table, writing\n> > a new file with a particular filenode\n> > 3. Delete the attribute row from pg_attribute\n> > 4. Point the table in the catalog to the new filenode\n> > 5. Release locks\n> > 6. Commit transaction\n> > 7. Delete orhpan filenode\n> > \n> > i. Upon postmaster startup, remove any orphaned filenodes\n> > \n> > The real problem here is the fact that there are now missing attnos in\n> > pg_attribute. 
Either that's handled or we renumber the attnos - which is\n> > also quite hard?\n> \n> The attnos should be renumbered and it's easy.\n> But the above seems only 20% of the total implementation.\n> If the attnos are renumbered, all objects which refer to \n> the numbers must be invalidated or re-compiled ...\n> For example the parameters of foreign key constraints\n> triggers are consist of relname and colnames currently.\n> There has been a proposal that change to use relid or\n> column numbers instead. Certainly it makes RENAME happy\n> but DROP COLUMN unhappy. If there's a foreign key a_rel/1/3\n> and the second column of the relation is dropped the\n> parameter must be changed to be a_rel/1/2. If neither\n> foreign key stuff nor DROP COLUMN take the other into\n> account, the consistency is easily broken. \n\nI think that is why Tom was suggesting making all the column values NULL\nand removing the pg_attribute row for the column. With a NULL value, it\ndoesn't take up any room in the tuple, and with the pg_attribute column\ngone, no one will see that row. The only problem is the gap in attno\nnumbering. How big a problem is that?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Apr 2002 22:26:05 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I think that is why Tom was suggesting making all the column values NULL\n> and removing the pg_attribute row for the column.\n\nThat was not my suggestion.\n\n> With a NULL value, it\n> doesn't take up any room in the tuple, and with the pg_attribute column\n> gone, no one will see that row. The only problem is the gap in attno\n> numbering. 
How big a problem is that?\n\nYou can't do it that way unless you're intending to rewrite all rows of\nthe relation before committing the ALTER; which would be the worst of\nboth worlds. The pg_attribute row *must* be retained to show the\ndatatype of the former column, so that we can correctly skip over it\nin tuples where the column isn't yet nulled out.\n\nHiroshi did this by renumbering the attnum; I propose leaving attnum\nalone and instead adding an attisdropped flag. That would avoid\ncreating a gap in the column numbers, but either way is likely to affect\nsome applications that inspect pg_attribute.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Apr 2002 22:54:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: RFC: Restructuring pg_aggregate " }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> > Christopher Kings-Lynne wrote:\n> > >\n> I think that is why Tom was suggesting making all the column values NULL\n> and removing the pg_attribute row for the column. With a NULL value, it\n> doesn't take up any room in the tuple, and with the pg_attribute column\n> gone, no one will see that row. The only problem is the gap in attno\n> numbering. How big a problem is that?\n\nThere's no problem with applications which don't inquire\nof system catalogs(pg_attribute). Unfortunately we have \nbeen very tolerant of users' access on system tables and\nthere would be pretty many applications which inquire of\npg_attribute.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Fri, 12 Apr 2002 13:09:28 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate" }, { "msg_contents": "> Updating pg_attribute per se is not so hard --- just store new copies of\n> all the rows for the table. 
However, propagating the changes into other\n> places could be quite painful (I'm thinking of column numbers in stored\n> constraints, rules, etc).\n>\n> It seems to me that reducing the column to NULLs already gets you the\n> majority of the space savings. I don't think there is a case to be made\n> that getting back that last bit is worth the pain involved, either in\n> implementation effort or direct runtime costs (do you really want a DROP\n> COLUMN to force an immediate rewrite of the whole table?)\n\nOK, sounds fair. However, is there a more aggressive way of reclaiming the\nspace? The problem with updating all the rows to null for that column is\nthat the on-disk size is doubled anyway, right? So, could a VACUUM FULL\nprocess do the nulling for us? Vacuum works outside of normal transaction\nconstraints anyway...?\n\nAlso, it seems to me that at some point we are forced to break client\ncompatibility. Either we add attisdropped field to pg_attribute, or we use\nHiroshi's (-1 * attnum - offset) idea. Both Tom and Hiroshi have good\nreasons for each of these - would it be possible for you guys to post with\nyour reasons for and against both the techniques. I just want to get to an\nimplementation specification we all agree on that can be written up and then\nthe coding can proceed. Maybe we should add it to the 'Postgres Major\nProjects' page - and remove those old ones that have already been\nimplemented.\n\nChris\n\n\n", "msg_date": "Sat, 13 Apr 2002 14:17:34 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate " }, { "msg_contents": "[ way past time to change the title of this thread ]\n\n\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> OK, sounds fair. However, is there a more aggressive way of reclaiming the\n> space? The problem with updating all the rows to null for that column is\n> that the on-disk size is doubled anyway, right? 
So, could a VACUUM FULL\n> process do the nulling for us? Vacuum works outside of normal transaction\n> constraints anyway...?\n\nNo, VACUUM has the same transactional constraints as everyone else\n(unless you'd like a crash during VACUUM to trash your table...)\n\nI do not think that we necessarily need to provide a special mechanism\nfor this at all. The docs for DROP COLUMN could simply explain that\nthe DROP itself doesn't reclaim the space, but that the space will be\nreclaimed over time as extant rows are updated or deleted. If you want\nto hurry the process along you could do\n\tUPDATE table SET othercol = othercol\n\tVACUUM FULL\nto force all the rows to be updated and then reclaim space. But given\nthe peak-space-is-twice-as-much behavior, this is not obviously a win.\nI'd sure object to an implementation that *forced* that approach on me,\nwhether during DROP itself or the next VACUUM.\n\n> Also, it seems to me that at some point we are forced to break client\n> compatibility. Either we add attisdropped field to pg_attribute, or we use\n> Hiroshi's (-1 * attnum - offset) idea. Both Tom and Hiroshi have good\n> reasons for each of these - would it be possible for you guys to post with\n> your reasons for and against both the techniques.\n\nEr, didn't we do that already?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Apr 2002 11:29:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: DROP COLUMN (was RFC: Restructuring pg_aggregate)" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n>> No, VACUUM has the same transactional constraints as everyone else\n>> (unless you'd like a crash during VACUUM to trash your table...)\n\n> But can't it do the SET TO NULL thing if it knows that the transaction\n> that dropped the column has committed. \n\nHmm, you're thinking of allowing VACUUM to overwrite tuples in-place?\nStrikes me as unsafe, but I'm not really sure.\n\nIn any case it's not that easy. 
If the column is wide enough\nthat reclaiming its space is actually worth doing, then presumably\nmost of its entries are just TOAST links, and what has to be done is\nnot just rewrite the main tuple but mark the TOAST rows deleted.\nThis is not something that VACUUM does now; I'd be rather concerned\nabout the locking implications (especially for lightweight VACUUM).\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Apr 2002 12:19:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: DROP COLUMN (was RFC: Restructuring pg_aggregate) " }, { "msg_contents": "On Sat, 2002-04-13 at 17:29, Tom Lane wrote:\n> [ way past time to change the title of this thread ]\n> \n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > OK, sounds fair. However, is there a more aggressive way of reclaiming the\n> > space? The problem with updating all the rows to null for that column is\n> > that the on-disk size is doubled anyway, right? So, could a VACUUM FULL\n> > process do the nulling for us? Vacuum works outside of normal transaction\n> > constraints anyway...?\n> \n> No, VACUUM has the same transactional constraints as everyone else\n> (unless you'd like a crash during VACUUM to trash your table...)\n\nBut can't it do the SET TO NULL thing if it knows that the transaction\nthat dropped the column has committed. \n\nThis could probably even be done in the light version of vacuum with a\nspecial flag (VACUUM RECLAIM). \n\nOf course running this makes sense only if the dropped column had\nsome significant amount of data.\n\n> I do not think that we necessarily need to provide a special mechanism\n> for this at all. The docs for DROP COLUMN could simply explain that\n> the DROP itself doesn't reclaim the space, but that the space will be\n> reclaimed over time as extant rows are updated or deleted. 
If you want\n> to hurry the process along you could do\n> \tUPDATE table SET othercol = othercol\n> \tVACUUM FULL\n\nIf only we could do it in manageable chunks:\n\nFOR i IN 0 TO (size(table)/chunk) DO\n UPDATE table SET othercol = othercol OFFSET i*chunk LIMIT chunk\n VACUUM FULL;\nEND FOR;\n\nor even better - \"VACUUM FULL OFFSET i*chunk LIMIT chunk\" and then make\nchunk == 1 :)\n\n--------------\nHannu\n\n", "msg_date": "13 Apr 2002 18:47:07 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN (was RFC: Restructuring pg_aggregate)" }, { "msg_contents": "> No, VACUUM has the same transactional constraints as everyone else\n> (unless you'd like a crash during VACUUM to trash your table...)\n\nSeriously, you can run VACUUM in a transaction and rollback the movement of\na tuple on disk? What do you mean by same transactional constraints?\n\nChris\n\n", "msg_date": "Sun, 14 Apr 2002 12:58:43 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN (was RFC: Restructuring pg_aggregate)" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n>> No, VACUUM has the same transactional constraints as everyone else\n>> (unless you'd like a crash during VACUUM to trash your table...)\n\n> Seriously, you can run VACUUM in a transaction and rollback the movement of\n> a tuple on disk? What do you mean by same transactional constraints?\n\nIn VACUUM FULL, tuples moved to compact the table aren't good until you\ncommit. In this hypothetical column-drop-implementing VACUUM, I think\nthere'd need to be some similar rule --- otherwise it's not clear what\nhappens to TOASTED data if you crash partway through. 
(In particular,\nif we tried overwriting main tuples in place as Hannu was suggesting,\nwe'd need a way of being certain the deletion of the corresponding TOAST\nrows occurs *before* we overwrite the only reference to them.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 14 Apr 2002 14:13:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: DROP COLUMN (was RFC: Restructuring pg_aggregate) " }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> \n> Also, it seems to me that at some point we are forced to break client\n> compatibility.\n\nIt's not a users' consensus at all. I'm suspicious if\nDROP COLUMN is such a significant feature to break\nclient compatibility at our ease.\n\n> Either we add attisdropped field to pg_attribute, or we use\n> Hiroshi's (-1 * attnum - offset) idea. Both Tom and Hiroshi have good\n> reasons for each of these - would it be possible for you guys to post with\n> your reasons for and against both the techniques. \n\nI don't object to adding attisdropped field. What\nI meant to say is that the differene is very small.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Mon, 15 Apr 2002 12:48:10 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate" }, { "msg_contents": "\nThread added to TODO.detail/drop.\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> > Actually, what we need to do to reclaim space is to enable table\n> > recreation without the column, now that we have relfilenode for file\n> > renaming. It isn't hard to do, but no one has focused on it. 
I want to\n> > focus on it, but have not had the time, obviously, and would be very\n> > excited to assist someone else.\n> >\n> > Hiroshi's fine idea of marking certain columns as unused would not have\n> > reclaimed the missing space, just as my idea of physical/logical column\n> > distinction would not reclaim the space either. Again, my\n> > physical/logical idea is more for fixing other problems and\n> > optimization, not DROP COLUMN.\n> \n> Hmmm. Personally, I think that a DROP COLUMN that cannot reclaim space is\n> kinda useless - you may as well just use a view!!!\n> \n> So how would this occur?:\n> \n> 1. Lock target table for writing (allow reads)\n> 2. Begin a table scan on target table, writing\n> a new file with a particular filenode\n> 3. Delete the attribute row from pg_attribute\n> 4. Point the table in the catalog to the new filenode\n> 5. Release locks\n> 6. Commit transaction\n> 7. Delete orhpan filenode\n> \n> i. Upon postmaster startup, remove any orphaned filenodes\n> \n> The real problem here is the fact that there are now missing attnos in\n> pg_attribute. Either that's handled or we renumber the attnos - which is\n> also quite hard?\n> \n> This, of course, suffers from the double size data problem - but I believe\n> that it does not matter - we just need to document it.\n> \n> Interestingly enough, Oracle support\n> \n> ALTER TABLE foo SET UNUSED col;\n> \n> Which invalidates the attribute entry, and:\n> \n> ALTER TABLE foo DROP col CHECKPOINT 1000;\n> \n> Which actually reclaims the space. The optional CHECKPOINT [n] clause\n> tells Oracle to do a checkpoint every [n] rows.\n> \n> \"Checkpointing cuts down the amount of undo logs accumulated during the\n> drop column operation to avoid running out of rollback segment space.\n> However, if this statement is interrupted after a checkpoint has been\n> applied, the table remains in an unusable state. 
While the table is\n> unusable, the only operations allowed on it are DROP TABLE, TRUNCATE\n> TABLE, and ALTER TABLE DROP COLUMNS CONTINUE (described below). \"\n> \n> Chris\n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Apr 2002 00:00:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: RFC: Restructuring pg_aggregate" } ]
[ { "msg_contents": "Sorry I don't know if this is right list.\n\nI use scripts of such a kind (I say \"SQLbang\")\n\n#!/usr/local/bin/psql -qQ\n\\a \\t \\pset fieldsep ' '\n\n\\set router '\\'' :1 '\\''\n\nSELECT ispdb_sfbsdr_allow(network(inet),niface)\n FROM ispdb_sfbsdr_riaddr, nets\n WHERE nrouter = :router \n AND int_type = index_int_type('int')\n AND network(inet) <<= network(net)\n AND nets.control\n\nParameters after sqlbang's name goes to :1, :2 so on.\nThis is patch:\n\n--- doc/src/sgml/ref/psql-ref.sgml\tSun Apr 1 23:17:30 2001\n+++ doc/src/sgml/ref/psql-ref.sgml\tThu Apr 26 05:46:20 2001\n@@ -1406,6 +1406,22 @@\n \n \n <varlistentry>\n+ <term>-Q <replaceable class=\"parameter\">filename</replaceable></term>\n+ <listitem>\n+ <para>\n+ Use the file <replaceable class=\"parameter\">filename</replaceable>\n+ as the source of queries instead of reading queries interactively.\n+ After the file is processed, <application>psql</application> terminates.\n+ This in many ways similar to the <literal>-f</literal> flag,\n+ but for use in <quote>sqlbang</quote> scripts.\n+ First script's psrameters will be assigned to\n+ <literal>:1</literal> .. <literal>:9</literal> variables.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+\n+\n+ <varlistentry>\n <term>-R, --record-separator <replaceable class=\"parameter\">separator</replaceable></term>\n <listitem>\n <para>\n--- src/bin/psql/help.c\tThu Oct 25 09:49:54 2001\n+++ src/bin/psql/help.c\tSun Mar 17 02:56:34 2002\n@@ -112,6 +112,7 @@\n \n \tputs(_(\" -P VAR[=ARG] Set printing option 'VAR' to 'ARG' (see \\\\pset command)\"));\n \tputs(_(\" -q Run quietly (no messages, only query output)\"));\n+\tputs(_(\" -Q Like -f, for scripts, arguments :1 .. 
:9\"));\n \tputs(_(\" -R STRING Set record separator (default: newline) (-P recordsep=)\"));\n \tputs(_(\" -s Single step mode (confirm each query)\"));\n \tputs(_(\" -S Single line mode (end of line terminates SQL command)\"));\n--- src/bin/psql/mainloop.c\tSun Apr 1 23:17:32 2001\n+++ src/bin/psql/mainloop.c\tThu Apr 26 05:51:46 2001\n@@ -53,7 +53,7 @@\n \tconst char *var;\n \tvolatile unsigned int bslash_count = 0;\n \n-\tint\t\t\ti,\n+\tint\t\t\ti,j,\n \t\t\t\tprevlen,\n \t\t\t\tthislen;\n \n@@ -91,7 +91,7 @@\n \n \n \t/* main loop to get queries and execute them */\n-\twhile (1)\n+\tfor(j = 0;;j++)\n \t{\n #ifndef WIN32\n \n@@ -189,6 +189,11 @@\n \t\t\t\tline = gets_fromFile(source);\n \t\t}\n \n+\t\tif (!j && line && line[0] == '#' && line[1] == '!')\n+\t\t{\n+\t\t\tfree(line);\n+\t\t\tcontinue;\n+\t\t}\n \n \t\t/* Setting this will not have effect until next line. */\n \t\tdie_on_error = GetVariableBool(pset.vars, \"ON_ERROR_STOP\");\n--- src/bin/psql/startup.c.orig\tMon Nov 5 20:46:31 2001\n+++ src/bin/psql/startup.c\tSun Mar 17 03:28:01 2002\n@@ -237,7 +237,7 @@\n \t */\n \n \t/*\n-\t * process file given by -f\n+\t * process file given by -f or -Q\n \t */\n \tif (options.action == ACT_FILE && strcmp(options.action_string, \"-\") != 0)\n \t{\n@@ -369,19 +369,19 @@\n \textern char *optarg;\n \textern int\toptind;\n \tint\t\t\tc;\n-\tbool\t\tused_old_u_option = false;\n+\tbool\t\tused_old_u_option = false, sqlbang = false;\n \n \tmemset(options, 0, sizeof *options);\n \n #ifdef HAVE_GETOPT_LONG\n-\twhile ((c = getopt_long(argc, argv, \"aAc:d:eEf:F:h:Hlno:p:P:qR:sStT:uU:v:VWxX?\", long_options, &optindex)) != -1)\n+\twhile ((c = getopt_long(argc, argv, \"aAc:d:eEf:F:h:Hlno:p:P:qQ:R:sStT:uU:v:VWxX?\", long_options, &optindex)) != -1)\n #else\t\t\t\t\t\t\t/* not HAVE_GETOPT_LONG */\n \n \t/*\n \t * Be sure to leave the '-' in here, so we can catch accidental long\n \t * options.\n \t */\n-\twhile ((c = getopt(argc, argv, 
\"aAc:d:eEf:F:h:Hlno:p:P:qR:sStT:uU:v:VWxX?-\")) != -1)\n+\twhile ((c = getopt(argc, argv, \"aAc:d:eEf:F:h:Hlno:p:P:qQ:R:sStT:uU:v:VWxX?-\")) != -1)\n #endif /* not HAVE_GETOPT_LONG */\n \t{\n \t\tswitch (c)\n@@ -464,6 +464,12 @@\n \t\t\tcase 'q':\n \t\t\t\tSetVariableBool(pset.vars, \"QUIET\");\n \t\t\t\tbreak;\n+\t\t\tcase 'Q':\n+\t\t\t\tSetVariableBool(pset.vars, \"ON_ERROR_STOP\");\n+\t\t\t\toptions->action = ACT_FILE;\n+\t\t\t\toptions->action_string = optarg;\n+\t\t\t\tsqlbang = true;\n+\t\t\t\tbreak;\n \t\t\tcase 'R':\n \t\t\t\tpset.popt.topt.recordSep = xstrdup(optarg);\n \t\t\t\tbreak;\n@@ -563,21 +569,45 @@\n \t\t}\n \t}\n \n-\t/*\n-\t * if we still have arguments, use it as the database name and\n-\t * username\n-\t */\n-\twhile (argc - optind >= 1)\n+\tif (sqlbang)\n \t{\n-\t\tif (!options->dbname)\n-\t\t\toptions->dbname = argv[optind];\n-\t\telse if (!options->username)\n-\t\t\toptions->username = argv[optind];\n-\t\telse if (!QUIET())\n-\t\t\tfprintf(stderr, gettext(\"%s: warning: extra option %s ignored\\n\"),\n-\t\t\t\t\tpset.progname, argv[optind]);\n+\t\tchar optname[] = \"1\";\n+\t\twhile (argc - optind >= 1)\n+\t\t{\n+\t\t\tif (optname[0] <= '9')\n+\t\t\t{\n+\t\t\t\tif (!SetVariable(pset.vars, optname, argv[optind]))\n+\t\t\t\t{\n+\t\t\t\t\tfprintf(stderr, \"%s: could not set variable %s\\n\",\n+\t\t\t\t\t\t\tpset.progname, optname);\n+\t\t\t\t\texit(EXIT_FAILURE);\n+\t\t\t\t}\n+\t\t\t}\n+\t\t\telse if (!QUIET())\n+\t\t\t\tfprintf(stderr, \"%s: warning: extra option %s ignored\\n\",\n+\t\t\t\t\t\tpset.progname, argv[optind]);\n+\t\t\toptname[0]++;\n+\t\t\toptind++;\n+\t\t}\n+\t}\n+\telse\n+\t{\n+\t\t/*\n+\t\t * if we still have arguments, use it as the database name and\n+\t\t * username\n+\t\t */\n+\t\twhile (argc - optind >= 1)\n+\t\t{\n+\t\t\tif (!options->dbname)\n+\t\t\t\toptions->dbname = argv[optind];\n+\t\t\telse if (!options->username)\n+\t\t\t\toptions->username = argv[optind];\n+\t\t\telse if (!QUIET())\n+\t\t\t\tfprintf(stderr, 
gettext(\"%s: warning: extra option %s ignored\\n\"),\n+\t\t\t\t\t\tpset.progname, argv[optind]);\n \n-\t\toptind++;\n+\t\t\toptind++;\n+\t\t}\n \t}\n \n \tif (used_old_u_option && !QUIET())\n\nI propose to include this feature.\nSorry for bad English.\n\n-- \n@BABOLO http://links.ru/\n", "msg_date": "Sun, 7 Apr 2002 06:31:43 +0400 (MSD)", "msg_from": "\".\"@babolo.ru", "msg_from_op": true, "msg_subject": "sqlbang" }, { "msg_contents": "\nCan someone comment on this feature?\n\n---------------------------------------------------------------------------\n\n\".\"@babolo.ru wrote:\n> Sorry I don't know if this is right list.\n> \n> I use scripts of such a kind (I say \"SQLbang\")\n> \n> #!/usr/local/bin/psql -qQ\n> \\a \\t \\pset fieldsep ' '\n> \n> \\set router '\\'' :1 '\\''\n> \n> SELECT ispdb_sfbsdr_allow(network(inet),niface)\n> FROM ispdb_sfbsdr_riaddr, nets\n> WHERE nrouter = :router \n> AND int_type = index_int_type('int')\n> AND network(inet) <<= network(net)\n> AND nets.control\n> \n> Parameters after sqlbang's name goes to :1, :2 so on.\n> This is patch:\n> \n> --- doc/src/sgml/ref/psql-ref.sgml\tSun Apr 1 23:17:30 2001\n> +++ doc/src/sgml/ref/psql-ref.sgml\tThu Apr 26 05:46:20 2001\n> @@ -1406,6 +1406,22 @@\n> \n> \n> <varlistentry>\n> + <term>-Q <replaceable class=\"parameter\">filename</replaceable></term>\n> + <listitem>\n> + <para>\n> + Use the file <replaceable class=\"parameter\">filename</replaceable>\n> + as the source of queries instead of reading queries interactively.\n> + After the file is processed, <application>psql</application> terminates.\n> + This in many ways similar to the <literal>-f</literal> flag,\n> + but for use in <quote>sqlbang</quote> scripts.\n> + First script's psrameters will be assigned to\n> + <literal>:1</literal> .. 
<literal>:9</literal> variables.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n> +\n> + <varlistentry>\n> <term>-R, --record-separator <replaceable class=\"parameter\">separator</replaceable></term>\n> <listitem>\n> <para>\n> --- src/bin/psql/help.c\tThu Oct 25 09:49:54 2001\n> +++ src/bin/psql/help.c\tSun Mar 17 02:56:34 2002\n> @@ -112,6 +112,7 @@\n> \n> \tputs(_(\" -P VAR[=ARG] Set printing option 'VAR' to 'ARG' (see \\\\pset command)\"));\n> \tputs(_(\" -q Run quietly (no messages, only query output)\"));\n> +\tputs(_(\" -Q Like -f, for scripts, arguments :1 .. :9\"));\n> \tputs(_(\" -R STRING Set record separator (default: newline) (-P recordsep=)\"));\n> \tputs(_(\" -s Single step mode (confirm each query)\"));\n> \tputs(_(\" -S Single line mode (end of line terminates SQL command)\"));\n> --- src/bin/psql/mainloop.c\tSun Apr 1 23:17:32 2001\n> +++ src/bin/psql/mainloop.c\tThu Apr 26 05:51:46 2001\n> @@ -53,7 +53,7 @@\n> \tconst char *var;\n> \tvolatile unsigned int bslash_count = 0;\n> \n> -\tint\t\t\ti,\n> +\tint\t\t\ti,j,\n> \t\t\t\tprevlen,\n> \t\t\t\tthislen;\n> \n> @@ -91,7 +91,7 @@\n> \n> \n> \t/* main loop to get queries and execute them */\n> -\twhile (1)\n> +\tfor(j = 0;;j++)\n> \t{\n> #ifndef WIN32\n> \n> @@ -189,6 +189,11 @@\n> \t\t\t\tline = gets_fromFile(source);\n> \t\t}\n> \n> +\t\tif (!j && line && line[0] == '#' && line[1] == '!')\n> +\t\t{\n> +\t\t\tfree(line);\n> +\t\t\tcontinue;\n> +\t\t}\n> \n> \t\t/* Setting this will not have effect until next line. 
*/\n> \t\tdie_on_error = GetVariableBool(pset.vars, \"ON_ERROR_STOP\");\n> --- src/bin/psql/startup.c.orig\tMon Nov 5 20:46:31 2001\n> +++ src/bin/psql/startup.c\tSun Mar 17 03:28:01 2002\n> @@ -237,7 +237,7 @@\n> \t */\n> \n> \t/*\n> -\t * process file given by -f\n> +\t * process file given by -f or -Q\n> \t */\n> \tif (options.action == ACT_FILE && strcmp(options.action_string, \"-\") != 0)\n> \t{\n> @@ -369,19 +369,19 @@\n> \textern char *optarg;\n> \textern int\toptind;\n> \tint\t\t\tc;\n> -\tbool\t\tused_old_u_option = false;\n> +\tbool\t\tused_old_u_option = false, sqlbang = false;\n> \n> \tmemset(options, 0, sizeof *options);\n> \n> #ifdef HAVE_GETOPT_LONG\n> -\twhile ((c = getopt_long(argc, argv, \"aAc:d:eEf:F:h:Hlno:p:P:qR:sStT:uU:v:VWxX?\", long_options, &optindex)) != -1)\n> +\twhile ((c = getopt_long(argc, argv, \"aAc:d:eEf:F:h:Hlno:p:P:qQ:R:sStT:uU:v:VWxX?\", long_options, &optindex)) != -1)\n> #else\t\t\t\t\t\t\t/* not HAVE_GETOPT_LONG */\n> \n> \t/*\n> \t * Be sure to leave the '-' in here, so we can catch accidental long\n> \t * options.\n> \t */\n> -\twhile ((c = getopt(argc, argv, \"aAc:d:eEf:F:h:Hlno:p:P:qR:sStT:uU:v:VWxX?-\")) != -1)\n> +\twhile ((c = getopt(argc, argv, \"aAc:d:eEf:F:h:Hlno:p:P:qQ:R:sStT:uU:v:VWxX?-\")) != -1)\n> #endif /* not HAVE_GETOPT_LONG */\n> \t{\n> \t\tswitch (c)\n> @@ -464,6 +464,12 @@\n> \t\t\tcase 'q':\n> \t\t\t\tSetVariableBool(pset.vars, \"QUIET\");\n> \t\t\t\tbreak;\n> +\t\t\tcase 'Q':\n> +\t\t\t\tSetVariableBool(pset.vars, \"ON_ERROR_STOP\");\n> +\t\t\t\toptions->action = ACT_FILE;\n> +\t\t\t\toptions->action_string = optarg;\n> +\t\t\t\tsqlbang = true;\n> +\t\t\t\tbreak;\n> \t\t\tcase 'R':\n> \t\t\t\tpset.popt.topt.recordSep = xstrdup(optarg);\n> \t\t\t\tbreak;\n> @@ -563,21 +569,45 @@\n> \t\t}\n> \t}\n> \n> -\t/*\n> -\t * if we still have arguments, use it as the database name and\n> -\t * username\n> -\t */\n> -\twhile (argc - optind >= 1)\n> +\tif (sqlbang)\n> \t{\n> -\t\tif (!options->dbname)\n> 
-\t\t\toptions->dbname = argv[optind];\n> -\t\telse if (!options->username)\n> -\t\t\toptions->username = argv[optind];\n> -\t\telse if (!QUIET())\n> -\t\t\tfprintf(stderr, gettext(\"%s: warning: extra option %s ignored\\n\"),\n> -\t\t\t\t\tpset.progname, argv[optind]);\n> +\t\tchar optname[] = \"1\";\n> +\t\twhile (argc - optind >= 1)\n> +\t\t{\n> +\t\t\tif (optname[0] <= '9')\n> +\t\t\t{\n> +\t\t\t\tif (!SetVariable(pset.vars, optname, argv[optind]))\n> +\t\t\t\t{\n> +\t\t\t\t\tfprintf(stderr, \"%s: could not set variable %s\\n\",\n> +\t\t\t\t\t\t\tpset.progname, optname);\n> +\t\t\t\t\texit(EXIT_FAILURE);\n> +\t\t\t\t}\n> +\t\t\t}\n> +\t\t\telse if (!QUIET())\n> +\t\t\t\tfprintf(stderr, \"%s: warning: extra option %s ignored\\n\",\n> +\t\t\t\t\t\tpset.progname, argv[optind]);\n> +\t\t\toptname[0]++;\n> +\t\t\toptind++;\n> +\t\t}\n> +\t}\n> +\telse\n> +\t{\n> +\t\t/*\n> +\t\t * if we still have arguments, use it as the database name and\n> +\t\t * username\n> +\t\t */\n> +\t\twhile (argc - optind >= 1)\n> +\t\t{\n> +\t\t\tif (!options->dbname)\n> +\t\t\t\toptions->dbname = argv[optind];\n> +\t\t\telse if (!options->username)\n> +\t\t\t\toptions->username = argv[optind];\n> +\t\t\telse if (!QUIET())\n> +\t\t\t\tfprintf(stderr, gettext(\"%s: warning: extra option %s ignored\\n\"),\n> +\t\t\t\t\t\tpset.progname, argv[optind]);\n> \n> -\t\toptind++;\n> +\t\t\toptind++;\n> +\t\t}\n> \t}\n> \n> \tif (used_old_u_option && !QUIET())\n> \n> I propose to include this feature.\n> Sorry for bad English.\n> \n> -- \n> @BABOLO http://links.ru/\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Apr 2002 23:49:11 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] sqlbang" }, { "msg_contents": "Nobody interested?\n\nI prepare my ISP Management System (ispms)\nto publish and want reduce number of patches\nneeded to install it.\n\nSQLbangs widely ised in it:\n0cicuta~(1)>grep -r '^#\\!/usr/local/bin/psql' /usr/local/libexec/ispms | wc -l\n 61\nThe most reason for patch are paremeters,\nbecause without parameters \n\n#!/usr/local/bin/psql -flags\n\ncan be substituted for:\n\n#!/bin/sh\n/usr/local/bin/psql -flags << \"EOF\"\n\nbut for substitute shell's ${1}, ${2} so on\nfor real parameters \"EOF\" in above example\nMUST be without quotes.\nSo all SQL text will be preprocessored\nby shell. Things are worst - some\nof SQLbangs prepare simple shell's\nprograms with some shell\nvariables and quotes in it which must\nbe escaped to be not expanded\nwhen SQL executes.\n\nYes, I have m4 build system to do such\nescaping:\n0cicuta~(2)>find w/ispms -name \\*.m4 | wc -l\n 71\nfor WWW interface mostly, but without\nSQLbangs escape level will be 1 level more,\nshell has some errors (or features, I dont\nknow) with high level escaping and\nI do not want depend on this errors\n(or features) in base ispms system\n(WWW interface has low rights)\n\nBruce Momjian writes:\n> Can someone comment on this feature?\n> \n> ---------------------------------------------------------------------------\n> \n> \".\"@babolo.ru wrote:\n> > Sorry I don't know if this is right list.\n> > \n> > I use scripts of such a kind (I say \"SQLbang\")\n> > \n> > #!/usr/local/bin/psql -qQ\n> > \\a \\t \\pset fieldsep ' '\n> > \n> > \\set router '\\'' :1 '\\''\n> > \n> > SELECT ispdb_sfbsdr_allow(network(inet),niface)\n> > FROM ispdb_sfbsdr_riaddr, nets\n> > WHERE nrouter = :router \n> > AND int_type = index_int_type('int')\n> > AND network(inet) <<= network(net)\n> > AND 
nets.control\n> > \n> > Parameters after sqlbang's name goes to :1, :2 so on.\n<patch skiped>\n> > I propose to include this feature.\n> > Sorry for bad English.\n\n-- \n@BABOLO http://links.ru/\n", "msg_date": "Sun, 21 Apr 2002 03:33:06 +0400 (MSD)", "msg_from": "\".\"@babolo.ru", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] sqlbang" }, { "msg_contents": ".@babolo.ru writes:\n\n> The most reason for patch are paremeters,\n\nParameters already exist:\n\npeter ~$ cat test.sql\n\\echo :x1\n\\echo :x2\npeter ~$ pg-install/bin/psql -f test.sql -v x1=foo -v x2=bar\nfoo\nbar\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 21 Apr 2002 00:53:28 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] sqlbang" }, { "msg_contents": "Peter Eisentraut writes:\n> .@babolo.ru writes:\n> > The most reason for patch are paremeters,\n> \n> Parameters already exist:\n> \n> peter ~$ cat test.sql\n> \\echo :x1\n> \\echo :x2\n> peter ~$ pg-install/bin/psql -f test.sql -v x1=foo -v x2=bar\n> foo\n> bar\nOK, positional parameters\n\n-- \n@BABOLO http://links.ru/\n", "msg_date": "Sun, 21 Apr 2002 23:34:41 +0400 (MSD)", "msg_from": "\".\"@babolo.ru", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] sqlbang" } ]
[ { "msg_contents": "Is there any indexing technique available I can use when joining tables\nwith a regular expression pattern in pgsql?\n\nI know one method for indexing strings that will be matched with regular\nexpression patterns, and that is using so called k-gram indexes.\nIndexing the string \"kjartan\" with k-gram index where k = 3 would\ncreate \"kja\", \"jar\", \"art\", \"rta\", \"tan\" as an index. Ofcourse it is hard to\ndecide the size of k and I'm sure in many cases mulitple k values might\nbe needed, depending on the situation.\n\nI have not done any major survey of available techniques, but I was\nhoping I could get some pointers here.\n\nI assume pgsql always uses nested loop join when joining relations which are\njoined with regular expression pattern?\n-- \nKjartan �s��rsson\nhttp://www.kjarri.net \nTel: +46 (0)730 556705\n\n\n", "msg_date": "Sun, 7 Apr 2002 12:09:36 +0200", "msg_from": "=?ISO-8859-1?B?S2phcnRhbiDBc/7zcnNzb24=?= <a98kjaas@student.his.se>", "msg_from_op": true, "msg_subject": "Indexing and regular expressions" }, { "msg_contents": "On Sun, 7 Apr 2002, [ISO-8859-1] Kjartan О©╫sО©╫О©╫rsson wrote:\n\n> Is there any indexing technique available I can use when joining tables\n> with a regular expression pattern in pgsql?\n>\n> I know one method for indexing strings that will be matched with regular\n> expression patterns, and that is using so called k-gram indexes.\n> Indexing the string \"kjartan\" with k-gram index where k = 3 would\n> create \"kja\", \"jar\", \"art\", \"rta\", \"tan\" as an index. Ofcourse it is hard to\n\nUsually, k-grams technique is used to match patterns with errors and\n3-grams produce \"__k\", \"_kj\", \"kja\", \"jar\", \"art\", \"rta\", \"ta_\", \"a__\"\nwhere leading and trailing spaces are used to compensate 'boundary' effect.\n\nBut I dont' quite understand your question. 
Are you looking for fuzzy match ?\nIf so, take a look on contrib modules.\n\n> decide the size of k and I'm sure in many cases mulitple k values might\n> be needed, depending on the situation.\n>\n> I have not done any major survey of available techniques, but I was\n> hoping I could get some pointers here.\n>\n> I assume pgsql always uses nested loop join when joining relations which are\n> joined with regular expression pattern?\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sun, 7 Apr 2002 13:54:27 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: Indexing and regular expressions" }, { "msg_contents": "Thank you for your reply.\n\nNo, I am not looking for a fuzzy match. I am simply wondering if there\nare some methods available that can speed up joining of tables when\nthe join is done with a regular expression operator (one table\ncontains regular expression patterns, and the other strings that\nshould be matched against the pattern).\n\nIf no indexes are used, a full table scan of the string table is\nneeded if we want to select all strings that matches a given pattern.\n\nMy question is if there are any indexing methods available that can\nbe used to prone the search space in the string table when using\nthe regular expression operator.\n\nI hope I made myself more clear now. 
Best regards,\nKjartan\n\n\nSunday, April 7, 2002, 12:54:27 PM, you wrote:\n\n> On Sun, 7 Apr 2002, [ISO-8859-1] Kjartan Ásþórsson wrote:\n\n>> Is there any indexing technique available I can use when joining tables\n>> with a regular expression pattern in pgsql?\n>>\n>> I know one method for indexing strings that will be matched with regular\n>> expression patterns, and that is using so called k-gram indexes.\n>> Indexing the string \"kjartan\" with k-gram index where k = 3 would\n>> create \"kja\", \"jar\", \"art\", \"rta\", \"tan\" as an index. Ofcourse it is hard to\n\n> Usually, k-grams technique is used to match patterns with errors and\n> 3-grams produce \"__k\", \"_kj\", \"kja\", \"jar\", \"art\", \"rta\", \"ta_\", \"a__\"\n> where leading and trailing spaces are used to compensate 'boundary' effect.\n\n> But I dont' quite understand your question. Are you looking for fuzzy match ?\n> If so, take a look on contrib modules.\n\n>> decide the size of k and I'm sure in many cases mulitple k values might\n>> be needed, depending on the situation.\n>>\n>> I have not done any major survey of available techniques, but I was\n>> hoping I could get some pointers here.\n>>\n>> I assume pgsql always uses nested loop join when joining relations which are\n>> joined with regular expression pattern?\n>>\n\n> Regards,\n> Oleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n\n", "msg_date": "Sun, 7 Apr 2002 13:49:08 +0200", "msg_from": "Kjartan Ásþórsson <a98kjaas@student.his.se>", "msg_from_op": false, "msg_subject": "Re: Indexing and regular 
expressions" }, { "msg_contents": "On Sun, 7 Apr 2002 13:49:08 +0200\n\"Kjartan \" <a98kjaas@student.his.se> wrote:\n> Thank you for your reply.\n> \n> No, I am not looking for a fuzzy match. I am simply wondering if there\n> are some methods available that can speed up joining of tables when\n> the join is done with a regular expression operator (one table\n> contains regular expression patterns, and the other strings that\n> should be matched against the pattern).\n> \n> If no indexes are used, a full table scan of the string table is\n> needed if we want to select all strings that matches a given pattern.\n\nIf the pattern is arbitrary, I'm not sure that any indexing technique\nwill be able to significantly improve performance (or at least, I'd\nbe *very* interested to see such a technique).\n\nIf you're storing the regular expressions in another table, you\ncould also store the pre-compiled patterns and use those for a\nseqscan, but that won't net you a large improvement, particularly\nif your set of patterns is much smaller than your set of strings\nto match against (in which case the time to compile the pattern\nbecomes insignificant).\n\nIf your regular expressions contain common sub-elements (e.g.\nmany of them include \"match string beginning with xxx\" or whatever),\nyou could perhaps use indexes to optimize those sub-elements,\nand then run the rest of the pattern on the tuples found by the\nindex. But if your patterns are truly arbitrary, this will be\nunlikely to help.\n\nTherefore, the answer is no AFAICT -- regular expressions are too\nflexible to allow for optimization in advance.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Sun, 7 Apr 2002 15:08:44 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": false, "msg_subject": "Re: Indexing and regular expressions" } ]
[ { "msg_contents": "Hello:\n\tNot sure of where to post this, it's not a bug, more of an \napplication note.. Using linux and iptables as a firewall, requests for \nservices are redirected to the machines providing those services, including\npostgress. This approach has been in place for over a year, and includes\noracle, postgress, and apache web services. It is not without its issues,\nand security is greatly enhanced. On a seperate machine behind the \nfirewall, the postgress 7.2.1 release was installed for testing and migration.\n\n Inital testing worked well. When it was decided to have applications \nnormally directed at production try the development instance, ident \nauthenication failed. All other tests passed, including hostssl \nconnections. When the firewall redirects traffic to its intended service\nprovider using the same port postgress is using ident works. When the \nports are not the same, ident authenication fails. User/password and hostssl \nconnections continue to work though.\n\n\tI do not know the interchange of communication traffic when\nident authenication is used, and postgress is the only service currently \nin use that provides ident authenication. Would anyone know if the ports\nneed to be identical for ident to function, or is it a definition of how\nident works for postgress?\n\nKen Klatt\n", "msg_date": "Sun, 7 Apr 2002 18:52:46 -0500", "msg_from": "Kenny H Klatt <kklatt@csd.uwm.edu>", "msg_from_op": true, "msg_subject": "Question on ident authorization" }, { "msg_contents": "Kenny H Klatt writes:\n\n> Inital testing worked well. When it was decided to have applications\n> normally directed at production try the development instance, ident\n> authenication failed. All other tests passed, including hostssl\n> connections. When the firewall redirects traffic to its intended service\n> provider using the same port postgress is using ident works. When the\n> ports are not the same, ident authenication fails. 
User/password and hostssl\n> connections continue to work though.\n\nI can't quite picture your setup, but two points: One, the PostgreSQL\nserver attempts ident authentication over TCP port 113. If there's no\nident server on that port on the client side then authentication fails.\nTwo, if your firewall is redirecting ident traffic to a dedicated service\nprovider host, then have it stop doing that because that's not how ident\nis supposed to work (or you will have to put in a lot of extra effort to\nmake it work).\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 7 Apr 2002 21:13:17 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Question on ident authorization" } ]
[ { "msg_contents": "hello\n\nwhen selecting in a table with a text[]\n\nthere's an sql command for looking all dimensions of the array\nsomething like 'where text[*]'\n\nor only should be implemented trought a 'for (i=0; i<maxDimension; i++) \n\n?\n\nbests from barcelona\n\njaume teixi\n", "msg_date": "Mon, 8 Apr 2002 12:35:42 +0200", "msg_from": "Jaume Teixi <teixi@6tems.com>", "msg_from_op": true, "msg_subject": "best method for select within all dimensions of an array" } ]
[ { "msg_contents": "\nIf I want a column to have a default TIMESTAMP of 'now' but not in PST\ntimezone but GMT,\nis the best way of doing it the following:\n\nchange_time TIMESTAMP DEFAULT now() AT TIME ZONE 'GMT' NOT NULL,\n\n\nRichard\n\n", "msg_date": "Mon, 08 Apr 2002 16:03:34 -0700", "msg_from": "Richard Emberson <emberson@phc.net>", "msg_from_op": true, "msg_subject": "now() AT TIME ZONE 'GMT';" }, { "msg_contents": "Richard Emberson <emberson@phc.net> writes:\n> If I want a column to have a default TIMESTAMP of 'now' but not in PST\n> timezone but GMT,\n\nIt strikes me that you have a conceptual error. Type TIMESTAMP (ie,\nTIMESTAMP WITH TIME ZONE) *is* GMT internally; it is simply displayed\nin whatever zone you've selected with SET TIMEZONE. (This is basically\nthe same design as Unix system clocks --- always GMT --- and the TZ\nenvironment variable.) If you are trying to force it to a different\ntimezone then you are misusing it.\n\nType TIMESTAMP WITHOUT TIME ZONE doesn't have any concept of time zone\n--- it's just a stated date and time with no particular zone reference.\nIf you apply the AT TIME ZONE operator to a TIMESTAMP WITH TIME ZONE\nvalue, what happens is the internal GMT value is rotated to the\nspecified zone and then the output is labeled as type TIMESTAMP WITHOUT\nTIME ZONE, preventing any further automatic zone rotations. If you\ncoerce this back to TIMESTAMP WITH TIME ZONE, the implicitly assigned\nzone is your local zone --- ie, your local zone is subtracted off again\nto produce a supposed GMT value --- with entirely nonsensical results.\n\nIt wasn't clear to me exactly what you wanted to accomplish, but\napplying AT TIME ZONE to something you are going to store in a TIMESTAMP\nalmost certainly isn't it. My guess is that what you really want is\nplain old unadorned \"TIMESTAMP DEFAULT now()\".\n\nIf you want to deliberately suppress time zone awareness, TIMESTAMP\nWITHOUT TIME ZONE is the way to go. 
If you want any awareness of zones,\nyou almost certainly want TIMESTAMP WITH TIME ZONE --- and just let the\nsystem do what it wants to do, don't try to force some other approach.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Apr 2002 22:52:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: now() AT TIME ZONE 'GMT'; " }, { "msg_contents": "(on -hackers)\n\n> If you apply the AT TIME ZONE operator to a TIMESTAMP WITH TIME ZONE\n> value, what happens is the internal GMT value is rotated to the\n> specified zone and then the output is labeled as type TIMESTAMP WITHOUT\n> TIME ZONE, preventing any further automatic zone rotations.\n\nHmm. That is how it probably *should* work, but at the moment the\ntimestamptz_zone() function actually outputs a character string! That is\na holdover from previous versions which did not have a \"no zone\"\ntimestamp; it would seem now to be more appropriate to output a no-zone\ntimestamp.\n\nI'll look at changing this in my upcoming patch set...\n\n - Thomas\n", "msg_date": "Mon, 08 Apr 2002 23:22:53 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] now() AT TIME ZONE 'GMT';" }, { "msg_contents": "Tom Lane wrote:\n\n> Richard Emberson <emberson@phc.net> writes:\n> > If I want a column to have a default TIMESTAMP of 'now' but not in PST\n> > timezone but GMT,\n>\n> It strikes me that you have a conceptual error. Type TIMESTAMP (ie,\n\nI guess so. So it is during the retrieval of a timestamp using jdbc, where\nthe timestamp\nis converted into a java.sql.Timestamp that it gets converted to a local\ntimezone.\nSo I will have to use the jdbc access method that preserves (actually where\none can\nset the desired timezone) the timezone.\n\nRichard\n\n", "msg_date": "Tue, 09 Apr 2002 08:14:08 -0700", "msg_from": "Richard Emberson <emberson@phc.net>", "msg_from_op": true, "msg_subject": "Re: now() AT TIME ZONE 'GMT';" } ]
[ { "msg_contents": "Hi all,\n\nI've attached a patch for doing BETWEEN SYM/ASYM, however it just doesn't\nwork!!!\n\ntest=# select 2 between 1 and 3;\n ?column?\n----------\n t\n(1 row)\n\ntest=# select 2 between 3 and 1;\n ?column?\n----------\n f\n(1 row)\n\ntest=# select 2 between symmetric 3 and 1;\nERROR: parser: parse error at or near \"3\"\ntest=# select 2 between asymmetric 3 and 1;\nERROR: parser: parse error at or near \"3\"\ntest=# select 2 not between 3 and 1;\n ?column?\n----------\n t\n(1 row)\n\ntest=# select 2 not between symmetric 3 and 1;\nERROR: parser: parse error at or near \"3\"\n\nCan anyone see what's wrong?\n\nChris", "msg_date": "Wed, 10 Apr 2002 11:02:30 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "BETWEEN SYMMETRIC/ASYMMETRIC" }, { "msg_contents": "On Wed, 10 Apr 2002, Christopher Kings-Lynne wrote:\n\n> Hi all,\n> \n> I've attached a patch for doing BETWEEN SYM/ASYM, however it just doesn't\n> work!!!\n> \n> test=# select 2 between 1 and 3;\n> ?column?\n> ----------\n> t\n> (1 row)\n> \n> test=# select 2 between 3 and 1;\n> ?column?\n> ----------\n> f\n> (1 row)\n> \n> test=# select 2 between symmetric 3 and 1;\n> ERROR: parser: parse error at or near \"3\"\n> test=# select 2 between asymmetric 3 and 1;\n> ERROR: parser: parse error at or near \"3\"\n\nChris,\n\nYou seem to have forgotten to update keywords.c.\n\nGavin\n\n\n", "msg_date": "Wed, 10 Apr 2002 13:21:57 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: BETWEEN SYMMETRIC/ASYMMETRIC" }, { "msg_contents": "> Chris,\n>\n> You seem to have forgotten to update keywords.c.\n\nOK - works perfectly now :)\n\nNow I'm going to play with making the SYMMERIC and ASYMMETRIC keywords less\nreserved...\n\nCan someone comment on my use of %prec BETWEEN? 
Is that still correct now\nthat we have the extra BETWEEN forms?\n\nChris\n\n", "msg_date": "Wed, 10 Apr 2002 11:38:04 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: BETWEEN SYMMETRIC/ASYMMETRIC" }, { "msg_contents": "On Wed, 10 Apr 2002, Christopher Kings-Lynne wrote:\n\n> > Chris,\n> >\n> > You seem to have forgotten to update keywords.c.\n> \n> OK - works perfectly now :)\n> \n> Now I'm going to play with making the SYMMERIC and ASYMMETRIC keywords less\n> reserved...\n> \n> Can someone comment on my use of %prec BETWEEN? Is that still correct now\n> that we have the extra BETWEEN forms?\n\nYes. Have a look at the precedence table near the top of gram.y:\n\n%left UNION EXCEPT\n%left INTERSECT\n%left JOIN UNIONJOIN CROSS LEFT FULL RIGHT INNER_P NATURAL\n%left OR\n%left AND\n%right NOT\n%right '='\n%nonassoc '<' '>'\n%nonassoc LIKE ILIKE\n%nonassoc ESCAPE\n%nonassoc OVERLAPS\n%nonassoc BETWEEN\n%nonassoc IN\n%left POSTFIXOP /* dummy for postfix Op rules */\n\n[...]\n\nThis is the order of precedence for rules which contain these\noperators. For example, if an expression contains:\n\na AND b AND c\n\nit is evaluated as:\n\n((a AND b) AND c)\n\n\nOn the other hand:\n\na OR b AND c\n\nis evaluated as:\n\n((a OR b) AND c)\n\nsince OR has a lower order of precedence. Now, consider:\n\nselect 2 between asymmetric 3 and 1\n\nWithout the %prec BETWEEN\n\n3 and 1\n\nis given precedence over between. This will break your code.\n\nGavin\n\n", "msg_date": "Wed, 10 Apr 2002 13:58:56 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: BETWEEN SYMMETRIC/ASYMMETRIC" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Can someone comment on my use of %prec BETWEEN? Is that still correct now\n> that we have the extra BETWEEN forms?\n\nLooks fine. 
AFAICS we want all these forms to have the binding\nprecedence assigned to BETWEEN. If you don't do the %prec thing\nthen the productions will have the precedence of their rightmost\nterminal symbol, ie, AND, ie, wrong.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Apr 2002 00:35:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN SYMMETRIC/ASYMMETRIC " }, { "msg_contents": "> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > Can someone comment on my use of %prec BETWEEN? Is that still\n> correct now\n> > that we have the extra BETWEEN forms?\n>\n> Looks fine. AFAICS we want all these forms to have the binding\n> precedence assigned to BETWEEN. If you don't do the %prec thing\n> then the productions will have the precedence of their rightmost\n> terminal symbol, ie, AND, ie, wrong.\n\nOK, I've proven that I cannot move the SYM/ASYM keywords anything lower than\ntotally reserved without causing shift/reduce errors. Is this acceptable?\n\nAlso, Tom (or anyone): in regards to your previous email, should I just go\nback to using opt_symmetry to shorten the number of productions, since I\nhave to make them reserved words anyway?\n\nChris\n\n", "msg_date": "Wed, 10 Apr 2002 14:09:20 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: BETWEEN SYMMETRIC/ASYMMETRIC " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Also, Tom (or anyone): in regards to your previous email, should I just go\n> back to using opt_symmetry to shorten the number of productions, since I\n> have to make them reserved words anyway?\n\nMight as well. 
No point in writing more productions if it doesn't buy\nanything.\n\nBTW, I've forgotten whether your patch is purely syntactic or not,\nbut I'd really like to see someone fix things so that BETWEEN has its\nown expression node tree type and is not expanded into some ugly\n\"A>=B and A<=C\" equivalent. This would (a) allow it to be\nreverse-listed reasonably, and (b) eliminate redundant evaluations of\nthe subexpressions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Apr 2002 11:06:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN SYMMETRIC/ASYMMETRIC " }, { "msg_contents": "> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > Also, Tom (or anyone): in regards to your previous email,\n> should I just go\n> > back to using opt_symmetry to shorten the number of productions, since I\n> > have to make them reserved words anyway?\n>\n> Might as well. No point in writing more productions if it doesn't buy\n> anything.\n\nSince it's really just two ways of writing the same thing, wouldn't bison\njust produce the exact same C code? I'll rewrite it anyway for elegance,\nbut just wondering...\n\n> BTW, I've forgotten whether your patch is purely syntactic or not,\n> but I'd really like to see someone fix things so that BETWEEN has its\n> own expression node tree type and is not expanded into some ugly\n> \"A>=B and A<=C\" equivalent. This would (a) allow it to be\n> reverse-listed reasonably, and (b) eliminate redundant evaluations of\n> the subexpressions.\n\nIt is purely syntactic. Anyone want to give me a quick hint on how to make\na new node tree type for BETWEEN? 
What does reverse-listing mean as well?\n\nChris\n\n", "msg_date": "Thu, 11 Apr 2002 11:00:11 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: BETWEEN SYMMETRIC/ASYMMETRIC " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Since it's really just two ways of writing the same thing, wouldn't bison\n> just produce the exact same C code? I'll rewrite it anyway for elegance,\n> but just wondering...\n\nThe emitted code might or might not be the same --- but duplicate or\nnear-duplicate chunks of source code are always best avoided, if only\nfrom a maintenance perspective.\n\n>> BTW, I've forgotten whether your patch is purely syntactic or not,\n>> but I'd really like to see someone fix things so that BETWEEN has its\n>> own expression node tree type and is not expanded into some ugly\n>> \"A>=B and A<=C\" equivalent. This would (a) allow it to be\n>> reverse-listed reasonably, and (b) eliminate redundant evaluations of\n>> the subexpressions.\n\n> It is purely syntactic. Anyone want to give me a quick hint on how to make\n> a new node tree type for BETWEEN?\n\nTry chasing the references to another extant expression node type,\nperhaps NullTest. It's fairly straightforward, but tedious to teach\nall the relevant places about it.\n\n> What does reverse-listing mean as well?\n\nreverse-listing is what src/backend/utils/adt/ruleutils.c does: produce\nsomething readable from an internal node tree. 
\\d for a view, pg_dump,\nand other useful things depend on this.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Apr 2002 23:09:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN SYMMETRIC/ASYMMETRIC " }, { "msg_contents": "Hi,\n\nI'm working on making the SHOW command dump its output as if it were a\nselect result.\n\nTom's declared the following as static (\"private\") methods?\n\nstatic TextOutputState *begin_text_output(CommandDest dest, char *title);\nstatic void do_text_output(TextOutputState *tstate, char *aline);\nstatic void do_text_output_multiline(TextOutputState *tstate, char *text);\nstatic void end_text_output(TextOutputState *tstate);\n\nI should really move these off somewhere else and make them a bit more\nglobal and generic. I will also use Joe's version (private email) that\nshould allow more columns in the output. What should I name these\nfunctions? I notice a tendency towards TextDoOutput, TextDoOuputMultiline,\nTextEndOutput sort of naming conventions for \"global\" functions.\n\nIs this a good idea? 
Where I should put them?\n\nRegards,\n\nChris\n\n", "msg_date": "Thu, 11 Apr 2002 17:10:59 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Make text output more generic" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> static TextOutputState *begin_text_output(CommandDest dest, char *title);\n> static void do_text_output(TextOutputState *tstate, char *aline);\n> static void do_text_output_multiline(TextOutputState *tstate, char *text);\n> static void end_text_output(TextOutputState *tstate);\n\n> I should really move these off somewhere else and make them a bit more\n> global and generic.\n\nWhat's insufficiently generic about them for you?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Apr 2002 10:45:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Make text output more generic " }, { "msg_contents": "> > I should really move these off somewhere else and make them a bit more\n> > global and generic.\n>\n> What's insufficiently generic about them for you?\n\nWell, at a _quick_ glance they're designed only for one column output...\n\nChris\n\n\n", "msg_date": "Thu, 11 Apr 2002 23:34:24 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Make text output more generic " }, { "msg_contents": "Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:\n>> What's insufficiently generic about them for you?\n\n> Well, at a _quick_ glance they're designed only for one column output...\n\nWell, exactly. These are intended to be a convenient interface for that\ncase. 
They're already sitting on top of generic tuple routines...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Apr 2002 11:41:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Make text output more generic " }, { "msg_contents": "\nTODO updated:\n\n> * Add BETWEEN ASYMMETRIC/SYMMETRIC (Christopher)\n> * Christopher is Christopher Kings-Lynne <chriskl@familyhealth.com.au>\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> > \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > > Also, Tom (or anyone): in regards to your previous email,\n> > should I just go\n> > > back to using opt_symmetry to shorten the number of productions, since I\n> > > have to make them reserved words anyway?\n> >\n> > Might as well. No point in writing more productions if it doesn't buy\n> > anything.\n> \n> Since it's really just two ways of writing the same thing, wouldn't bison\n> just produce the exact same C code? I'll rewrite it anyway for elegance,\n> but just wondering...\n> \n> > BTW, I've forgotten whether your patch is purely syntactic or not,\n> > but I'd really like to see someone fix things so that BETWEEN has its\n> > own expression node tree type and is not expanded into some ugly\n> > \"A>=B and A<=C\" equivalent. This would (a) allow it to be\n> > reverse-listed reasonably, and (b) eliminate redundant evaluations of\n> > the subexpressions.\n> \n> It is purely syntactic. Anyone want to give me a quick hint on how to make\n> a new node tree type for BETWEEN? What does reverse-listing mean as well?\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Apr 2002 23:09:30 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN SYMMETRIC/ASYMMETRIC" }, { "msg_contents": "> TODO updated:\n>\n> > * Add BETWEEN ASYMMETRIC/SYMMETRIC (Christopher)\n> > * Christopher is Christopher Kings-Lynne <chriskl@familyhealth.com.au>\n\nSo should I go ahead and submit a patch for BETWEEN that adds SYMMETRY\nsupport in the old-style code, and then at a later stage submit a patch that\nmakes BETWEEN a proper node?\n\nChris\n\n", "msg_date": "Thu, 18 Apr 2002 11:16:37 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: BETWEEN SYMMETRIC/ASYMMETRIC" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > TODO updated:\n> >\n> > > * Add BETWEEN ASYMMETRIC/SYMMETRIC (Christopher)\n> > > * Christopher is Christopher Kings-Lynne <chriskl@familyhealth.com.au>\n> \n> So should I go ahead and submit a patch for BETWEEN that adds SYMMETRY\n> support in the old-style code, and then at a later stage submit a patch that\n> makes BETWEEN a proper node?\n\nSure, I think that makes sense. The larger BETWEEN node code will be\ntricky.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Apr 2002 23:18:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN SYMMETRIC/ASYMMETRIC" }, { "msg_contents": "> > So should I go ahead and submit a patch for BETWEEN that adds SYMMETRY\n> > support in the old-style code, and then at a later stage submit\n> a patch that\n> > makes BETWEEN a proper node?\n>\n> Sure, I think that makes sense. 
The larger BETWEEN node code will be\n> tricky.\n\nQuestion: Why have you created a special case for NOT BETWEEN? Wouldn't you\njust need a BETWEEN node and the NOT node will handle the NOTing?\n\nOr is it because BETWEEN isn't a node at the moment?\n\nChris\n\n", "msg_date": "Thu, 18 Apr 2002 11:20:39 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: BETWEEN SYMMETRIC/ASYMMETRIC" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > > So should I go ahead and submit a patch for BETWEEN that adds SYMMETRY\n> > > support in the old-style code, and then at a later stage submit\n> > a patch that\n> > > makes BETWEEN a proper node?\n> >\n> > Sure, I think that makes sense. The larger BETWEEN node code will be\n> > tricky.\n> \n> Question: Why have you created a special case for NOT BETWEEN? Wouldn't you\n> just need a BETWEEN node and the NOT node will handle the NOTing?\n> \n> Or is it because BETWEEN isn't a node at the moment?\n\nWell, looking at the grammar, I don't know how I could have gotten NOT\ninto that construction. There are two parameters separated by AND and I\ndidn't know how to do it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Apr 2002 23:24:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN SYMMETRIC/ASYMMETRIC" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> So should I go ahead and submit a patch for BETWEEN that adds SYMMETRY\n> support in the old-style code, and then at a later stage submit a patch that\n> makes BETWEEN a proper node?\n\nI'd prefer to do it in one step. I have not noticed any large\ngroundswell of demand for BETWEEN SYMMETRIC ... 
so I don't see a good\nreason for implementing a stopgap version. (It would be a stopgap\nmainly because the planner wouldn't recognize it as a range query.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Apr 2002 23:24:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN SYMMETRIC/ASYMMETRIC " }, { "msg_contents": "> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > So should I go ahead and submit a patch for BETWEEN that adds SYMMETRY\n> > support in the old-style code, and then at a later stage submit \n> a patch that\n> > makes BETWEEN a proper node?\n> \n> I'd prefer to do it in one step. I have not noticed any large\n> groundswell of demand for BETWEEN SYMMETRIC ... so I don't see a good\n> reason for implementing a stopgap version. (It would be a stopgap\n> mainly because the planner wouldn't recognize it as a range query.)\n\nOK, I'll go for the whole change - just expect lots of questions :)\n\nChris\n\n", "msg_date": "Thu, 18 Apr 2002 11:27:26 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: BETWEEN SYMMETRIC/ASYMMETRIC " } ]
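An editorial aside on the thread above: the BETWEEN SYMMETRIC semantics being debated can be sketched outside the grammar. This is an illustrative model only (the function names here are invented for the sketch, not the planned parse/executor node); it shows why SYMMETRIC differs from the plain form and why the naive "A>=B and A<=C" expansion duplicates subexpression evaluation.

```python
def between_asymmetric(a, lo, hi):
    # Plain BETWEEN: equivalent to "a >= lo AND a <= hi".  Note that
    # the naive grammar expansion discussed in the thread would
    # evaluate the subexpression for 'a' twice, which is one reason
    # a dedicated node type is preferred.
    return lo <= a <= hi

def between_symmetric(a, x, y):
    # BETWEEN SYMMETRIC: the bounds are sorted first, so the result
    # does not depend on the order in which they are written.
    lo, hi = (x, y) if x <= y else (y, x)
    return lo <= a <= hi
```

For example, `between_symmetric(5, 10, 1)` holds, while the asymmetric form with the bounds in that order is vacuously false.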
[ { "msg_contents": "Hi everyone,\n\nI know we've already got a \"rough\" series of steps to follow when a new\nrelease comes out, but I feel it's worth putting out heads together and\nmaking a \"cheat sheet\" of which places to contact, and \"known good\"\ncontacts there.\n\nAm thinking this after coming across the ZDNet download page for\nPostgreSQL. It's still got version 6.5.3 as being the one to download.\n\nPerhaps we should make a list of which places have downloads like this,\nand at release time a couple of people each take care of a few and\nconfirm the changes?\n\nSound feasible?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 11 Apr 2002 00:08:23 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "A \"New Release\" list of places to contact about new releases of\n\tPostgreSQL" }, { "msg_contents": "Justin Clift wrote:\n> Hi everyone,\n> \n> I know we've already got a \"rough\" series of steps to follow when a new\n> release comes out, but I feel it's worth putting out heads together and\n> making a \"cheat sheet\" of which places to contact, and \"known good\"\n> contacts there.\n> \n> Am thinking this after coming across the ZDNet download page for\n> PostgreSQL. It's still got version 6.5.3 as being the one to download.\n> \n> Perhaps we should make a list of which places have downloads like this,\n> and at release time a couple of people each take care of a few and\n> confirm the changes?\n> \n> Sound feasible?\n\nSure, throw it into a file in src/tools. 
We already have a\nRELEASE_CHANGES file there.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 10 Apr 2002 13:04:12 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: A \"New Release\" list of places to contact about new releases" } ]
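An editorial aside: the "cheat sheet" idea above is easy to mechanize once the list of download pages exists. A minimal sketch, purely hypothetical (the function name and the idea of matching a bare version string are assumptions of this example, not an existing project tool):

```python
def stale_sites(pages, current_version):
    # pages: mapping of site name -> fetched page text.
    # Returns, sorted, the sites whose page never mentions the
    # current release version and so need a follow-up contact
    # (e.g. a page still advertising 6.5.3 after a new release).
    return sorted(name for name, text in pages.items()
                  if current_version not in text)
```

Run at release time against the fetched text of each listed download page; anything returned is assigned to a volunteer to chase up.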
[ { "msg_contents": "Hi everyone,\n\nThis is Prof. Bayer's response to the question \"is it alright to use\nUB-Tree's in Open Source projects?\".\n\nIt's a \"No, but we can discuss a licensing model\" type answer.\n\nRegards and best wishes,\n\nJustin Clift\n\n\n-------- Original Message --------\nSubject: AW: More UB-Tree patent information\nDate: Wed, 10 Apr 2002 15:26:05 +0200\nFrom: \"Prof. Rudolf Bayer\" <bayer@informatik.tu-muenchen.de>\nTo: \"Justin Clift\" <justin@postgresql.org>\n\nDear Justin,\nI am personally holder of the patents.\nconcerning your question:\n> Specifically wondering if it's alright to use UB-Tree's in\n> Open Source projects.\nthe answer is NO, unless there is a patent agreement with me.\nPlease let me know, what specifically the interests and business models\nare,\nthen we could discuss a licensing model in line with the already\nexisting\nlicense agreements,\nbest regards,\nR. Bayer\n*************************************************************************\nProf. Rudolf Bayer, Ph.D.\nInstitut fuer Informatik, Technische Universitaet Muenchen\nOrleansstr. 34, D-81667 Muenchen, Germany\ntel: ++49-89-48095 171 email: bayer@in.tum.de\nfax: ++49-89-48095 170 http://www3.informatik.tu-muenchen.de\n\n> -----Ursprungliche Nachricht-----\n> Von: Justin Clift [mailto:justin@postgresql.org]\n> Gesendet: Dienstag, 9. April 2002 23:04\n> An: Professor Rudolf Bayer\n> Cc: PostgreSQL General Mailing List\n> Betreff: More UB-Tree patent information\n>\n>\n> Hi Prof. Bayer,\n>\n> Haven't heard anything back from you regarding the patents on\n> UB-Tree's. 
Specifically wondering if it's alright to use UB-Tree's in\n> Open Source projects.\n>\n> On a related topic, in your paper \"The Universal B-Tree for\n> multidimensional Indexing\"\n> (http://mistral.in.tum.de/results/publications/TUM-I9637.pdf) you\n> mention a German \"Patent Pending\" number of \"196 35 429.3\", is this the\n> one which was approved in Europe?\n>\n> In the paper \"Bulk Loading a Data Warehouse built upon a UB-Tree\"\n> (http://mistral.in.tum.de/results/publications/FKM+00.pdf) it mentions\n> the Japanese Patent filed on 22nd May 2000, Application Number\n> 2000-149648. Is this the Japanese patent for UB-Trees which hasn't yet\n> been approved?\n>\n> :)\n>\n> Regards and best wishes,\n>\n> Justin Clift\n>\n> --\n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n>\n", "msg_date": "Thu, 11 Apr 2002 00:32:40 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "[Fwd: AW: More UB-Tree patent information]" }, { "msg_contents": "On Wed, 2002-04-10 at 16:32, Justin Clift wrote:\n> Hi everyone,\n> \n> This is Prof. Bayer's response to the question \"is it alright to use\n> UB-Tree's in Open Source projects?\".\n\nHave you found out _what_ exaclty is patented ?\n\nIs it just his concrete implementation of \"UB-Tree\" or something\nbroader, like using one multi-dimensional index instead of multiple\none-dimensional ones ?\n\n---------------------\nHannu\n\n\n", "msg_date": "10 Apr 2002 18:31:05 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: AW: More UB-Tree patent information]" }, { "msg_contents": "Hannu Krosing wrote:\n> \n> On Wed, 2002-04-10 at 16:32, Justin Clift wrote:\n> > Hi everyone,\n> >\n> > This is Prof. 
Bayer's response to the question \"is it alright to use\n> > UB-Tree's in Open Source projects?\".\n> \n> Have you found out _what_ exaclty is patented ?\n> \n> Is it just his concrete implementation of \"UB-Tree\" or something\n> broader, like using one multi-dimensional index instead of multiple\n> one-dimensional ones ?\n\nIs there any way of finding out instead of asking him directly? Maybe\nthe patent places have online info?\n\nProfessor Bayer isn't being overly informative.\n\nAnyone know?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> ---------------------\n> Hannu\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 11 Apr 2002 02:55:34 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] [Fwd: AW: More UB-Tree patent information]" }, { "msg_contents": "Patents are as much designed to confuse and dissuade someone from using \nsomething as they are to patent something. Reading a patent is often \nharder than killing the nearest chicken, strewing it's entrails allover \nthe yard, and then trying to make some sense of it.\n\nJustin Clift wrote:\n\n>Hannu Krosing wrote:\n>\n>>On Wed, 2002-04-10 at 16:32, Justin Clift wrote:\n>>\n>>>Hi everyone,\n>>>\n>>>This is Prof. Bayer's response to the question \"is it alright to use\n>>>UB-Tree's in Open Source projects?\".\n>>>\n>>Have you found out _what_ exaclty is patented ?\n>>\n>>Is it just his concrete implementation of \"UB-Tree\" or something\n>>broader, like using one multi-dimensional index instead of multiple\n>>one-dimensional ones ?\n>>\n>\n>Is there any way of finding out instead of asking him directly? 
Maybe\n>the patent places have online info?\n>\n>Professor Bayer isn't being overly informative.\n>\n>Anyone know?\n>\n>:-)\n>\n>Regards and best wishes,\n>\n>Justin Clift\n>\n>\n>>---------------------\n>>Hannu\n>>\n>\n\n\n", "msg_date": "Wed, 10 Apr 2002 10:51:06 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: AW: More UB-Tree patent information]" }, { "msg_contents": "> Hannu Krosing wrote:\n> > \n> > Have you found out _what_ exaclty is patented ?\n> > \n> > Is it just his concrete implementation of \"UB-Tree\" or something\n> > broader, like using one multi-dimensional index instead of multiple\n> > one-dimensional ones ?\n\n(I know it is OT, please reply in private, I can summarize any reactions \nto the list ...)\n\nPatents are supposed to be only applicable to an industrial application \n(with external side-effects). So ideas in themselves are not patentable.\n\nAnyway, this is once more a good example of the danger of software patents \n- you know what to reply when people say \"software patents promote \ninnovation\"\n\nIANAL, just my 0,02 Euro.\n\nsee also : http://www.gnu.org/philosophy/savingeurope.html (also \ninteresting for non-europeans, of course !)\n\n-- \nTycho Fruru\t\t\ttycho.fruru@conostix.com\n\"Prediction is extremely difficult. Especially about the future.\"\n - Niels Bohr\n\n", "msg_date": "Wed, 10 Apr 2002 20:15:20 +0200 (CEST)", "msg_from": "Tycho Fruru <tycho.fruru@conostix.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: AW: More UB-Tree patent information]" }, { "msg_contents": "On Wed, 2002-04-10 at 21:55, Justin Clift wrote:\n> Hannu Krosing wrote:\n> > \n> > On Wed, 2002-04-10 at 16:32, Justin Clift wrote:\n> > > Hi everyone,\n> > >\n> > > This is Prof. 
Bayer's response to the question \"is it alright to use\n> > > UB-Tree's in Open Source projects?\".\n> > \n> > Have you found out _what_ exaclty is patented ?\n> > \n> > Is it just his concrete implementation of \"UB-Tree\" or something\n> > broader, like using one multi-dimensional index instead of multiple\n> > one-dimensional ones ?\n> \n> Is there any way of finding out instead of asking him directly? Maybe\n> the patent places have online info?\n\nI did a quick search at USPTO at\nhttp://patft.uspto.gov/netahtml/search-bool.html\non \"UB and Tree and index and database\" and found among other things a\nUS patent no. 5,826,253 on mechanism very similar to LISTEN/NOTIFY,\nafforded to Borland on October 20, 1998 based on application from April\n19, 1996. \nWe should be safe as already Postgres95 had them ;)\n\nwhen I searched for \"UB and Tree and index and database and Bayer\"\n0 results came back.\n\nwhen I omitted UB and searched for \"Tree and index and database and\nBayer\" I got 27 results, first of them on \"Method and composition for\nimproving sexual fitness\" ;)\n\nthe one possibly related related to our Bayer was nr 6,219,662 on\n\"Supporting database indexes based on a generalized B-tree index\" \nwhich had reference to :\n\nRudolf Bayer, \"The Universal B-Tree for Multidimensional Indexing:\nGeneral Concepts\", Worldwide Computing and Its Applications,\nInternational Conference, WWCA '97, Tsukuba, Japan, (Mar. 
1997), pp.\n198-209.\n\nand German patent 0 650 131 A1 which may be also relevant\n\n----------------------\nHannu\n\n", "msg_date": "10 Apr 2002 23:45:03 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: AW: More UB-Tree patent information]" }, { "msg_contents": "\nHannu Krosing wrote:\n> \n> Have you found out _what_ exaclty is patented ?\n> \n> Is it just his concrete implementation of \"UB-Tree\" or something\n> broader, like using one multi-dimensional index instead of multiple\n> one-dimensional ones ?\n\n(I know it is OT, please reply in private, I can summarize any reactions \nto the list ...)\n \nPatents are supposed to be only applicable to an industrial application \n(with external side-effects). So ideas in themselves are not patentable.\n\nAnyway, this is once more a good example of the danger of software patents \n- you know what to reply when people say \"software patents promote \ninnovation\"\n\nIANAL, just my 0,02 Euro.\n\nsee also : http://www.gnu.org/philosophy/savingeurope.html (also \ninteresting for non-europeans, of course !)\n\n-- \nTycho Fruru\t\t\ttycho.fruru@conostix.com\n\"Prediction is extremely difficult. 
Especially about the future.\"\n - Niels Bohr\n\n\n", "msg_date": "Wed, 10 Apr 2002 23:07:49 +0200 (CEST)", "msg_from": "postgresql@fruru.com", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: AW: More UB-Tree patent information]" }, { "msg_contents": "I while ago I used xbase2pg and pg2xbase.\nYou should be able to find it on the net.\n\npostgresql@fruru.com wrote:\n> \n> Hannu Krosing wrote:\n> >\n> > Have you found out _what_ exaclty is patented ?\n> >\n> > Is it just his concrete implementation of \"UB-Tree\" or something\n> > broader, like using one multi-dimensional index instead of multiple\n> > one-dimensional ones ?\n> \n> (I know it is OT, please reply in private, I can summarize any reactions\n> to the list ...)\n> \n> Patents are supposed to be only applicable to an industrial application\n> (with external side-effects). So ideas in themselves are not patentable.\n> \n> Anyway, this is once more a good example of the danger of software patents\n> - you know what to reply when people say \"software patents promote\n> innovation\"\n> \n> IANAL, just my 0,02 Euro.\n> \n> see also : http://www.gnu.org/philosophy/savingeurope.html (also\n> interesting for non-europeans, of course !)\n> \n> --\n> Tycho Fruru tycho.fruru@conostix.com\n> \"Prediction is extremely difficult. 
Especially about the future.\"\n> - Niels Bohr\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n", "msg_date": "Wed, 10 Apr 2002 17:15:34 -0400", "msg_from": "Jean-Luc Lachance <jllachan@nsd.ca>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: AW: More UB-Tree patent information]" }, { "msg_contents": "Sorry all.\nI replied to the wrong message.\n\nJean-Luc Lachance wrote:\n> \n> I while ago I used xbase2pg and pg2xbase.\n> You should be able to find it on the net.\n> \n> postgresql@fruru.com wrote:\n> >\n> > Hannu Krosing wrote:\n> > >\n> > > Have you found out _what_ exaclty is patented ?\n> > >\n> > > Is it just his concrete implementation of \"UB-Tree\" or something\n> > > broader, like using one multi-dimensional index instead of multiple\n> > > one-dimensional ones ?\n> >\n> > (I know it is OT, please reply in private, I can summarize any reactions\n> > to the list ...)\n> >\n> > Patents are supposed to be only applicable to an industrial application\n> > (with external side-effects). So ideas in themselves are not patentable.\n> >\n> > Anyway, this is once more a good example of the danger of software patents\n> > - you know what to reply when people say \"software patents promote\n> > innovation\"\n> >\n> > IANAL, just my 0,02 Euro.\n> >\n> > see also : http://www.gnu.org/philosophy/savingeurope.html (also\n> > interesting for non-europeans, of course !)\n> >\n> > --\n> > Tycho Fruru tycho.fruru@conostix.com\n> > \"Prediction is extremely difficult. 
Especially about the future.\"\n> > - Niels Bohr\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n", "msg_date": "Wed, 10 Apr 2002 17:29:00 -0400", "msg_from": "Jean-Luc Lachance <jllachan@nsd.ca>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: AW: More UB-Tree patent information]" }, { "msg_contents": "On Wed, 10 Apr 2002 postgresql@fruru.com wrote:\n\n> Anyway, this is once more a good example of the danger of software patents \n> - you know what to reply when people say \"software patents promote \n> innovation\"\n\nWe (AEL, an association promoting freedom in every sense) have just now\nput on-line a page which contains known software patents and the behaviour\nof their respective owners wrt Free Software. (We have several\npatent-related queries outstanding right now)\n\nhttp://www.ael.be/node.php?id=52\n\nWe hope that this becomes a valuable source of information on which\npatent's implementations are available for Free Software work, and which\naren't. We also include some contact information to ease communication\nwith the patent holder.\n\nOf course, we encourage people on the \"incompatible with Free Software\" \ncategory to inform us of any licensing changes they implement which\nfacilitate Free Software implementations, so that we can promptly update\nthe page accordingly.\n\nBest Regards,\nTycho\n\n-- \nTycho Fruru\t\t\ttycho.fruru@conostix.com\n\"Prediction is extremely difficult. 
Especially about the future.\"\n - Niels Bohr\n\n", "msg_date": "Fri, 12 Apr 2002 18:41:47 +0200 (CEST)", "msg_from": "postgresql@fruru.com", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: AW: More UB-Tree patent information]" } ]
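An editorial aside for readers trying to work out what the UB-Tree patents might cover: the published papers describe linearizing a multidimensional key space with a Z-order (Morton) curve so that an ordinary one-dimensional B-tree can index it. The bit-interleaving step itself is textbook material; the sketch below is generic and is neither code from, nor a claim about, any patented implementation.

```python
def z_address(x, y, bits=16):
    # Interleave the bits of x and y into one Z-order (Morton)
    # address.  Points that are close in 2-D tend to receive nearby
    # addresses, which is what lets a one-dimensional index structure
    # answer multidimensional range queries tolerably well.
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)      # x supplies the even bits
        z |= ((y >> i) & 1) << (2 * i + 1)  # y supplies the odd bits
    return z
```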
[ { "msg_contents": "\nHello,\n\ni searched around about privileges for functions, but it seems,\nthat there is nothing available in the 7.2.x series.\n\nSo my question: Is it possible to execute a function (in this case\na C function) with permissions of the function creater instead\nof the user who's actual using function?\n\nBest regards\n\n-- \n Andreas 'ads' Scherbaum\n", "msg_date": "Wed, 10 Apr 2002 19:20:54 +0200", "msg_from": "Andreas Scherbaum <adsmail@htl.de>", "msg_from_op": true, "msg_subject": "setuid functions" } ]
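An editorial aside on the question above: running a function with the creator's permissions rather than the caller's is the classic definer-rights vs. invoker-rights distinction, and as the poster notes, the 7.2.x series offers nothing for it. The shape of the feature can still be illustrated abstractly; everything below (the `Function` class, the `security_definer` flag) is invented for illustration and is not a PostgreSQL API.

```python
class Function:
    # Toy model of a stored function that remembers its owner.
    def __init__(self, body, owner, security_definer=False):
        self.body = body
        self.owner = owner
        self.security_definer = security_definer

    def call(self, invoker):
        # Definer rights: the body runs as the function's creator.
        # Invoker rights: the body runs as whoever called it.
        effective_user = self.owner if self.security_definer else invoker
        return self.body(effective_user)
```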
[ { "msg_contents": "Hi all,\n\nI'm working on a fairly large patch (cleaning up Karel Zak's\nPREPARE/EXECUTE work), and I'm having some problems with bison (I'm\na yacc newbie). In fact, my grammar currently has an obscene\n20 shift/reduce and 4 reduce/reduce conflicts!\n\nWould someone to be kind enough to let me know what I'm doing wrong,\nand what I'll need to change? (Unfortunately, bison isn't very\nhelpful: it doesn't provide line-numbers when it warns me about\nthe # of conflicts). The patch for gram.y is below.\n\nThanks in advance,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\nIndex: gram.y\n===================================================================\nRCS file: /var/lib/cvs/pgsql/src/backend/parser/gram.y,v\nretrieving revision 2.299\ndiff -c -r2.299 gram.y\n*** gram.y\t1 Apr 2002 04:35:38 -0000\t2.299\n--- gram.y\t11 Apr 2002 01:26:21 -0000\n***************\n*** 133,144 ****\n \t\tClosePortalStmt, ClusterStmt, CommentStmt, ConstraintsSetStmt,\n \t\tCopyStmt, CreateAsStmt, CreateDomainStmt, CreateGroupStmt, CreatePLangStmt,\n \t\tCreateSchemaStmt, CreateSeqStmt, CreateStmt, CreateTrigStmt,\n! \t\tCreateUserStmt, CreatedbStmt, CursorStmt, DefineStmt, DeleteStmt,\n! \t\tDropGroupStmt, DropPLangStmt, DropSchemaStmt, DropStmt, DropTrigStmt,\n! \t\tDropUserStmt, DropdbStmt, ExplainStmt, FetchStmt,\n \t\tGrantStmt, IndexStmt, InsertStmt, ListenStmt, LoadStmt, LockStmt,\n! \t\tNotifyStmt, OptimizableStmt, ProcedureStmt, ReindexStmt,\n! \t\tRemoveAggrStmt, RemoveFuncStmt, RemoveOperStmt,\n \t\tRenameStmt, RevokeStmt, RuleActionStmt, RuleActionStmtOrEmpty,\n \t\tRuleStmt, SelectStmt, TransactionStmt, TruncateStmt,\n \t\tUnlistenStmt, UpdateStmt, VacuumStmt, VariableResetStmt,\n--- 133,145 ----\n \t\tClosePortalStmt, ClusterStmt, CommentStmt, ConstraintsSetStmt,\n \t\tCopyStmt, CreateAsStmt, CreateDomainStmt, CreateGroupStmt, CreatePLangStmt,\n \t\tCreateSchemaStmt, CreateSeqStmt, CreateStmt, CreateTrigStmt,\n! 
\t\tCreateUserStmt, CreatedbStmt, CursorStmt, DeallocatePrepareStmt,\n! \t\tDefineStmt, DeleteStmt, DropGroupStmt,\n! \t\tDropPLangStmt, DropSchemaStmt, DropStmt, DropTrigStmt,\n! \t\tDropUserStmt, DropdbStmt, ExecuteStmt, ExplainStmt, FetchStmt,\n \t\tGrantStmt, IndexStmt, InsertStmt, ListenStmt, LoadStmt, LockStmt,\n! \t\tNotifyStmt, OptimizableStmt, ProcedureStmt, PrepareStmt, prepare_query,\n! \t\tReindexStmt, RemoveAggrStmt, RemoveFuncStmt, RemoveOperStmt,\n \t\tRenameStmt, RevokeStmt, RuleActionStmt, RuleActionStmtOrEmpty,\n \t\tRuleStmt, SelectStmt, TransactionStmt, TruncateStmt,\n \t\tUnlistenStmt, UpdateStmt, VacuumStmt, VariableResetStmt,\n***************\n*** 204,210 ****\n \t\tany_name, any_name_list, expr_list, dotted_name, attrs,\n \t\ttarget_list, update_target_list, insert_column_list,\n \t\tdef_list, opt_indirection, group_clause, TriggerFuncArgs,\n! \t\tselect_limit, opt_select_limit\n \n %type <range>\tinto_clause, OptTempTableName\n \n--- 205,214 ----\n \t\tany_name, any_name_list, expr_list, dotted_name, attrs,\n \t\ttarget_list, update_target_list, insert_column_list,\n \t\tdef_list, opt_indirection, group_clause, TriggerFuncArgs,\n! \t\tselect_limit, opt_select_limit, types_list,\n! \t\ttypes_prepare_clause, execute_using\n! \n! %type <ival>\tprepare_store\n \n %type <range>\tinto_clause, OptTempTableName\n \n***************\n*** 319,325 ****\n \t\tCOALESCE, COLLATE, COLUMN, COMMIT,\n \t\tCONSTRAINT, CONSTRAINTS, CREATE, CROSS, CURRENT_DATE,\n \t\tCURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_USER, CURSOR,\n! \t\tDAY_P, DEC, DECIMAL, DECLARE, DEFAULT, DELETE, DESC,\n \t\tDISTINCT, DOUBLE, DROP,\n \t\tELSE, ENCRYPTED, END_TRANS, ESCAPE, EXCEPT, EXECUTE, EXISTS, EXTRACT,\n \t\tFALSE_P, FETCH, FLOAT, FOR, FOREIGN, FROM, FULL,\n--- 323,329 ----\n \t\tCOALESCE, COLLATE, COLUMN, COMMIT,\n \t\tCONSTRAINT, CONSTRAINTS, CREATE, CROSS, CURRENT_DATE,\n \t\tCURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_USER, CURSOR,\n! 
\t\tDAY_P, DEALLOCATE, DEC, DECIMAL, DECLARE, DEFAULT, DELETE, DESC,\n \t\tDISTINCT, DOUBLE, DROP,\n \t\tELSE, ENCRYPTED, END_TRANS, ESCAPE, EXCEPT, EXECUTE, EXISTS, EXTRACT,\n \t\tFALSE_P, FETCH, FLOAT, FOR, FOREIGN, FROM, FULL,\n***************\n*** 329,335 ****\n \t\tMATCH, MINUTE_P, MONTH_P, NAMES,\n \t\tNATIONAL, NATURAL, NCHAR, NEXT, NO, NOT, NULLIF, NULL_P, NUMERIC,\n \t\tOF, OLD, ON, ONLY, OPTION, OR, ORDER, OUTER_P, OVERLAPS,\n! \t\tPARTIAL, POSITION, PRECISION, PRIMARY, PRIOR, PRIVILEGES, PROCEDURE, PUBLIC,\n \t\tREAD, REFERENCES, RELATIVE, REVOKE, RIGHT, ROLLBACK,\n \t\tSCHEMA, SCROLL, SECOND_P, SELECT, SESSION, SESSION_USER, SET, SOME, SUBSTRING,\n \t\tTABLE, TEMPORARY, THEN, TIME, TIMESTAMP,\n--- 333,339 ----\n \t\tMATCH, MINUTE_P, MONTH_P, NAMES,\n \t\tNATIONAL, NATURAL, NCHAR, NEXT, NO, NOT, NULLIF, NULL_P, NUMERIC,\n \t\tOF, OLD, ON, ONLY, OPTION, OR, ORDER, OUTER_P, OVERLAPS,\n! \t\tPARTIAL, POSITION, PRECISION, PREPARE, PRIMARY, PRIOR, PRIVILEGES, PROCEDURE, PUBLIC,\n \t\tREAD, REFERENCES, RELATIVE, REVOKE, RIGHT, ROLLBACK,\n \t\tSCHEMA, SCROLL, SECOND_P, SELECT, SESSION, SESSION_USER, SET, SOME, SUBSTRING,\n \t\tTABLE, TEMPORARY, THEN, TIME, TIMESTAMP,\n***************\n*** 363,372 ****\n \t\tDATABASE, DELIMITERS, DO,\n \t\tEACH, ENCODING, EXCLUSIVE, EXPLAIN,\n \t\tFORCE, FORWARD, FREEZE, FUNCTION, HANDLER,\n! \t\tILIKE, INCREMENT, INDEX, INHERITS, INSTEAD, ISNULL,\n \t\tLANCOMPILER, LIMIT, LISTEN, LOAD, LOCATION, LOCK_P,\n \t\tMAXVALUE, MINVALUE, MODE, MOVE,\n! \t\tNEW, NOCREATEDB, NOCREATEUSER, NONE, NOTHING, NOTIFY, NOTNULL,\n \t\tOFFSET, OIDS, OPERATOR, OWNER, PASSWORD, PROCEDURAL,\n \t\tREINDEX, RENAME, RESET, RETURNS, ROW, RULE,\n \t\tSEQUENCE, SETOF, SHARE, SHOW, START, STATEMENT,\n--- 367,376 ----\n \t\tDATABASE, DELIMITERS, DO,\n \t\tEACH, ENCODING, EXCLUSIVE, EXPLAIN,\n \t\tFORCE, FORWARD, FREEZE, FUNCTION, HANDLER,\n! 
\t\tILIKE, INCREMENT, INDEX, INHERITS, INSTEAD, ISNULL, INTERNAL,\n \t\tLANCOMPILER, LIMIT, LISTEN, LOAD, LOCATION, LOCK_P,\n \t\tMAXVALUE, MINVALUE, MODE, MOVE,\n! \t\tNEW, NOCREATEDB, NOCREATEUSER, NONE, NOSHARE, NOTHING, NOTIFY, NOTNULL,\n \t\tOFFSET, OIDS, OPERATOR, OWNER, PASSWORD, PROCEDURAL,\n \t\tREINDEX, RENAME, RESET, RETURNS, ROW, RULE,\n \t\tSEQUENCE, SETOF, SHARE, SHOW, START, STATEMENT,\n***************\n*** 460,465 ****\n--- 464,470 ----\n \t\t| CreateTrigStmt\n \t\t| CreateUserStmt\n \t\t| ClusterStmt\n+ \t\t| DeallocatePrepareStmt\n \t\t| DefineStmt\n \t\t| DropStmt\t\t\n \t\t| DropSchemaStmt\n***************\n*** 469,474 ****\n--- 474,480 ----\n \t\t| DropPLangStmt\n \t\t| DropTrigStmt\n \t\t| DropUserStmt\n+ \t\t| ExecuteStmt\n \t\t| ExplainStmt\n \t\t| FetchStmt\n \t\t| GrantStmt\n***************\n*** 477,482 ****\n--- 483,489 ----\n \t\t| UnlistenStmt\n \t\t| LockStmt\n \t\t| NotifyStmt\n+ \t\t| PrepareStmt\n \t\t| ProcedureStmt\n \t\t| ReindexStmt\n \t\t| RemoveAggrStmt\n***************\n*** 3489,3494 ****\n--- 3496,3594 ----\n \t\t| DeleteStmt\t\t\t\t\t/* by default all are $$=$1 */\n \t\t;\n \n+ /*****************************************************************************\n+ *\n+ *\t\t\t\tPREPARE STATEMENTS\n+ *\n+ *****************************************************************************/\n+ PrepareStmt: PREPARE name AS prepare_query types_prepare_clause prepare_store\n+ \t\t\t\t{\n+ \t\t\t\t\tPrepareStmt *n = makeNode(PrepareStmt);\n+ \t\t\t\t\tn->name = $2;\n+ \t\t\t\t\tn->query = (Query *) $4;\n+ \t\t\t\t\tn->types = (List *) $5;\n+ \t\t\t\t\tn->store = $6;\n+ \t\t\t\t\t$$ = (Node *) n;\n+ \t\t\t\t}\n+ \t\t;\n+ \n+ prepare_query: SelectStmt\n+ \t\t| UpdateStmt\n+ \t\t| InsertStmt\n+ \t\t| DeleteStmt\n+ \t\t;\n+ \n+ types_list: SimpleTypename\n+ \t\t\t\t{ $$ = makeList1($1); }\n+ \t\t| types_list ',' SimpleTypename\n+ \t\t\t\t{ $$ = lappend($1, $3); }\n+ \t\t;\n+ \n+ types_prepare_clause: USING types_list\t\t{ $$ = $2; }\n+ \t\t| 
/*EMPTY*/\t\t\t\t\t\t\t{ $$ = NIL; }\n+ \t\t;\n+ \n+ prepare_store: NOSHARE\t\t{ $$ = 1; }\n+ \t\t| GLOBAL\t\t\t{ $$ = 2; }\n+ \t\t| SHARE\t\t\t\t{ $$ = 0; }\t/* default */\n+ \t\t| /* EMPTY */\t\t{ $$ = 0; }\n+ \t\t;\t\n+ \n+ /*****************************************************************************\n+ *\n+ *\t\t\t\tEXECUTE STATEMENTS\n+ *\n+ *****************************************************************************/\n+ ExecuteStmt: EXECUTE name into_clause USING execute_using prepare_store\n+ \t\t\t\t{\n+ \t\t\t\t\tExecuteStmt *n = makeNode(ExecuteStmt);\n+ \t\t\t\t\tn->name = $2;\n+ \t\t\t\t\tn->into = $3;\n+ \t\t\t\t\tn->using = $5;\n+ \t\t\t\t\tn->store = $6;\n+ \t\t\t\t\t$$ = (Node *) n;\t\t\t\t\t\n+ \t\t\t\t}\n+ \t\t;\n+ \n+ execute_using: a_expr\n+ \t\t\t\t{ $$ = makeList1($1); }\n+ \t\t| execute_using ',' a_expr\n+ \t\t\t\t{ $$ = lappend($1, $3); }\n+ \t\t;\n+ \n+ /*****************************************************************************\n+ *\n+ *\t\t\t\tDEALLOCATE PREPARE STATEMENTS\n+ *\n+ *****************************************************************************/\n+ DeallocatePrepareStmt: DEALLOCATE PREPARE ALL\n+ \t\t\t\t{\n+ \t\t\t\t\tDeallocatePrepareStmt *n = makeNode(DeallocatePrepareStmt);\n+ \t\t\t\t\tn->name = NULL;\n+ \t\t\t\t\tn->store = 0;\n+ \t\t\t\t\tn->all = TRUE;\n+ \t\t\t\t\tn->internal = FALSE;\n+ \t\t\t\t\t$$ = (Node *) n;\n+ \t\t\t\t}\n+ \t\t| DEALLOCATE PREPARE ALL INTERNAL\n+ \t\t\t\t{\n+ \t\t\t\t\tDeallocatePrepareStmt *n = makeNode(DeallocatePrepareStmt);\n+ \t\t\t\t\tn->name = NULL;\n+ \t\t\t\t\tn->store = 0;\n+ \t\t\t\t\tn->all = FALSE;\n+ \t\t\t\t\tn->internal = TRUE;\n+ \t\t\t\t\t$$ = (Node *) n;\n+ \t\t\t\t}\n+ \t\t| DEALLOCATE PREPARE name prepare_store \n+ \t\t\t\t{\n+ \t\t\t\t\tDeallocatePrepareStmt *n = makeNode(DeallocatePrepareStmt);\n+ \t\t\t\t\tn->name = $3;\n+ \t\t\t\t\tn->store = $4;\n+ \t\t\t\t\tn->all = FALSE;\n+ \t\t\t\t\tn->internal = FALSE;\n+ \t\t\t\t\t$$ = (Node *) n;\n+ \t\t\t\t}\n+ \t\t;\n \n 
/*****************************************************************************\n *\n", "msg_date": "Wed, 10 Apr 2002 21:28:53 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": true, "msg_subject": "help with bison" }, { "msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> Unfortunately, bison isn't very\n> helpful: it doesn't provide line-numbers when it warns me about\n> the # of conflicts.\n\nRun bison with -v switch (thus, \"bison -y -d -v gram.y\") and look at\nthe y.output file it produces. More detail than you really wanted ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Apr 2002 21:55:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: help with bison " }, { "msg_contents": "I don't see in this patch that you've added your new keywords to any of the\nlists of reserved words towards the bottom of gram.y. Have a look down and\nsee the lists. You need to add the keywords to the first list in the file\nthat doesn't give a shift/reduce error. (ie. make the words the least\nreserved as possible.)\n\nAlso, make sure you've put them in keywords.c as well.\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Neil Conway\n> Sent: Thursday, 11 April 2002 9:29 AM\n> To: PostgreSQL Hackers\n> Subject: [HACKERS] help with bison\n>\n>\n> Hi all,\n>\n> I'm working on a fairly large patch (cleaning up Karel Zak's\n> PREPARE/EXECUTE work), and I'm having some problems with bison (I'm\n> a yacc newbie). In fact, my grammar currently has an obscene\n> 20 shift/reduce and 4 reduce/reduce conflicts!\n>\n> Would someone to be kind enough to let me know what I'm doing wrong,\n> and what I'll need to change? (Unfortunately, bison isn't very\n> helpful: it doesn't provide line-numbers when it warns me about\n> the # of conflicts). 
The patch for gram.y is below.\n>\n> Thanks in advance,\n>\n> Neil\n>\n> --\n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n>\n> Index: gram.y\n> ===================================================================\n> RCS file: /var/lib/cvs/pgsql/src/backend/parser/gram.y,v\n> retrieving revision 2.299\n> diff -c -r2.299 gram.y\n> *** gram.y\t1 Apr 2002 04:35:38 -0000\t2.299\n> --- gram.y\t11 Apr 2002 01:26:21 -0000\n> ***************\n> *** 133,144 ****\n> \t\tClosePortalStmt, ClusterStmt, CommentStmt,\n> ConstraintsSetStmt,\n> \t\tCopyStmt, CreateAsStmt, CreateDomainStmt,\n> CreateGroupStmt, CreatePLangStmt,\n> \t\tCreateSchemaStmt, CreateSeqStmt, CreateStmt, CreateTrigStmt,\n> ! \t\tCreateUserStmt, CreatedbStmt, CursorStmt,\n> DefineStmt, DeleteStmt,\n> ! \t\tDropGroupStmt, DropPLangStmt, DropSchemaStmt,\n> DropStmt, DropTrigStmt,\n> ! \t\tDropUserStmt, DropdbStmt, ExplainStmt, FetchStmt,\n> \t\tGrantStmt, IndexStmt, InsertStmt, ListenStmt,\n> LoadStmt, LockStmt,\n> ! \t\tNotifyStmt, OptimizableStmt, ProcedureStmt, ReindexStmt,\n> ! \t\tRemoveAggrStmt, RemoveFuncStmt, RemoveOperStmt,\n> \t\tRenameStmt, RevokeStmt, RuleActionStmt,\n> RuleActionStmtOrEmpty,\n> \t\tRuleStmt, SelectStmt, TransactionStmt, TruncateStmt,\n> \t\tUnlistenStmt, UpdateStmt, VacuumStmt, VariableResetStmt,\n> --- 133,145 ----\n> \t\tClosePortalStmt, ClusterStmt, CommentStmt,\n> ConstraintsSetStmt,\n> \t\tCopyStmt, CreateAsStmt, CreateDomainStmt,\n> CreateGroupStmt, CreatePLangStmt,\n> \t\tCreateSchemaStmt, CreateSeqStmt, CreateStmt, CreateTrigStmt,\n> ! \t\tCreateUserStmt, CreatedbStmt, CursorStmt,\n> DeallocatePrepareStmt,\n> ! \t\tDefineStmt, DeleteStmt, DropGroupStmt,\n> ! \t\tDropPLangStmt, DropSchemaStmt, DropStmt, DropTrigStmt,\n> ! \t\tDropUserStmt, DropdbStmt, ExecuteStmt, ExplainStmt,\n> FetchStmt,\n> \t\tGrantStmt, IndexStmt, InsertStmt, ListenStmt,\n> LoadStmt, LockStmt,\n> ! \t\tNotifyStmt, OptimizableStmt, ProcedureStmt,\n> PrepareStmt, prepare_query,\n> ! 
\t\tReindexStmt, RemoveAggrStmt, RemoveFuncStmt, RemoveOperStmt,\n> \t\tRenameStmt, RevokeStmt, RuleActionStmt,\n> RuleActionStmtOrEmpty,\n> \t\tRuleStmt, SelectStmt, TransactionStmt, TruncateStmt,\n> \t\tUnlistenStmt, UpdateStmt, VacuumStmt, VariableResetStmt,\n> ***************\n> *** 204,210 ****\n> \t\tany_name, any_name_list, expr_list, dotted_name, attrs,\n> \t\ttarget_list, update_target_list, insert_column_list,\n> \t\tdef_list, opt_indirection, group_clause, TriggerFuncArgs,\n> ! \t\tselect_limit, opt_select_limit\n>\n> %type <range>\tinto_clause, OptTempTableName\n>\n> --- 205,214 ----\n> \t\tany_name, any_name_list, expr_list, dotted_name, attrs,\n> \t\ttarget_list, update_target_list, insert_column_list,\n> \t\tdef_list, opt_indirection, group_clause, TriggerFuncArgs,\n> ! \t\tselect_limit, opt_select_limit, types_list,\n> ! \t\ttypes_prepare_clause, execute_using\n> !\n> ! %type <ival>\tprepare_store\n>\n> %type <range>\tinto_clause, OptTempTableName\n>\n> ***************\n> *** 319,325 ****\n> \t\tCOALESCE, COLLATE, COLUMN, COMMIT,\n> \t\tCONSTRAINT, CONSTRAINTS, CREATE, CROSS, CURRENT_DATE,\n> \t\tCURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_USER, CURSOR,\n> ! \t\tDAY_P, DEC, DECIMAL, DECLARE, DEFAULT, DELETE, DESC,\n> \t\tDISTINCT, DOUBLE, DROP,\n> \t\tELSE, ENCRYPTED, END_TRANS, ESCAPE, EXCEPT,\n> EXECUTE, EXISTS, EXTRACT,\n> \t\tFALSE_P, FETCH, FLOAT, FOR, FOREIGN, FROM, FULL,\n> --- 323,329 ----\n> \t\tCOALESCE, COLLATE, COLUMN, COMMIT,\n> \t\tCONSTRAINT, CONSTRAINTS, CREATE, CROSS, CURRENT_DATE,\n> \t\tCURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_USER, CURSOR,\n> ! 
\t\tDAY_P, DEALLOCATE, DEC, DECIMAL, DECLARE, DEFAULT,\n> DELETE, DESC,\n> \t\tDISTINCT, DOUBLE, DROP,\n> \t\tELSE, ENCRYPTED, END_TRANS, ESCAPE, EXCEPT,\n> EXECUTE, EXISTS, EXTRACT,\n> \t\tFALSE_P, FETCH, FLOAT, FOR, FOREIGN, FROM, FULL,\n> ***************\n> *** 329,335 ****\n> \t\tMATCH, MINUTE_P, MONTH_P, NAMES,\n> \t\tNATIONAL, NATURAL, NCHAR, NEXT, NO, NOT, NULLIF,\n> NULL_P, NUMERIC,\n> \t\tOF, OLD, ON, ONLY, OPTION, OR, ORDER, OUTER_P, OVERLAPS,\n> ! \t\tPARTIAL, POSITION, PRECISION, PRIMARY, PRIOR,\n> PRIVILEGES, PROCEDURE, PUBLIC,\n> \t\tREAD, REFERENCES, RELATIVE, REVOKE, RIGHT, ROLLBACK,\n> \t\tSCHEMA, SCROLL, SECOND_P, SELECT, SESSION,\n> SESSION_USER, SET, SOME, SUBSTRING,\n> \t\tTABLE, TEMPORARY, THEN, TIME, TIMESTAMP,\n> --- 333,339 ----\n> \t\tMATCH, MINUTE_P, MONTH_P, NAMES,\n> \t\tNATIONAL, NATURAL, NCHAR, NEXT, NO, NOT, NULLIF,\n> NULL_P, NUMERIC,\n> \t\tOF, OLD, ON, ONLY, OPTION, OR, ORDER, OUTER_P, OVERLAPS,\n> ! \t\tPARTIAL, POSITION, PRECISION, PREPARE, PRIMARY,\n> PRIOR, PRIVILEGES, PROCEDURE, PUBLIC,\n> \t\tREAD, REFERENCES, RELATIVE, REVOKE, RIGHT, ROLLBACK,\n> \t\tSCHEMA, SCROLL, SECOND_P, SELECT, SESSION,\n> SESSION_USER, SET, SOME, SUBSTRING,\n> \t\tTABLE, TEMPORARY, THEN, TIME, TIMESTAMP,\n> ***************\n> *** 363,372 ****\n> \t\tDATABASE, DELIMITERS, DO,\n> \t\tEACH, ENCODING, EXCLUSIVE, EXPLAIN,\n> \t\tFORCE, FORWARD, FREEZE, FUNCTION, HANDLER,\n> ! \t\tILIKE, INCREMENT, INDEX, INHERITS, INSTEAD, ISNULL,\n> \t\tLANCOMPILER, LIMIT, LISTEN, LOAD, LOCATION, LOCK_P,\n> \t\tMAXVALUE, MINVALUE, MODE, MOVE,\n> ! \t\tNEW, NOCREATEDB, NOCREATEUSER, NONE, NOTHING,\n> NOTIFY, NOTNULL,\n> \t\tOFFSET, OIDS, OPERATOR, OWNER, PASSWORD, PROCEDURAL,\n> \t\tREINDEX, RENAME, RESET, RETURNS, ROW, RULE,\n> \t\tSEQUENCE, SETOF, SHARE, SHOW, START, STATEMENT,\n> --- 367,376 ----\n> \t\tDATABASE, DELIMITERS, DO,\n> \t\tEACH, ENCODING, EXCLUSIVE, EXPLAIN,\n> \t\tFORCE, FORWARD, FREEZE, FUNCTION, HANDLER,\n> ! 
\t\tILIKE, INCREMENT, INDEX, INHERITS, INSTEAD, ISNULL,\n> INTERNAL,\n> \t\tLANCOMPILER, LIMIT, LISTEN, LOAD, LOCATION, LOCK_P,\n> \t\tMAXVALUE, MINVALUE, MODE, MOVE,\n> ! \t\tNEW, NOCREATEDB, NOCREATEUSER, NONE, NOSHARE,\n> NOTHING, NOTIFY, NOTNULL,\n> \t\tOFFSET, OIDS, OPERATOR, OWNER, PASSWORD, PROCEDURAL,\n> \t\tREINDEX, RENAME, RESET, RETURNS, ROW, RULE,\n> \t\tSEQUENCE, SETOF, SHARE, SHOW, START, STATEMENT,\n> ***************\n> *** 460,465 ****\n> --- 464,470 ----\n> \t\t| CreateTrigStmt\n> \t\t| CreateUserStmt\n> \t\t| ClusterStmt\n> + \t\t| DeallocatePrepareStmt\n> \t\t| DefineStmt\n> \t\t| DropStmt\n> \t\t| DropSchemaStmt\n> ***************\n> *** 469,474 ****\n> --- 474,480 ----\n> \t\t| DropPLangStmt\n> \t\t| DropTrigStmt\n> \t\t| DropUserStmt\n> + \t\t| ExecuteStmt\n> \t\t| ExplainStmt\n> \t\t| FetchStmt\n> \t\t| GrantStmt\n> ***************\n> *** 477,482 ****\n> --- 483,489 ----\n> \t\t| UnlistenStmt\n> \t\t| LockStmt\n> \t\t| NotifyStmt\n> + \t\t| PrepareStmt\n> \t\t| ProcedureStmt\n> \t\t| ReindexStmt\n> \t\t| RemoveAggrStmt\n> ***************\n> *** 3489,3494 ****\n> --- 3496,3594 ----\n> \t\t| DeleteStmt\t\t\t\t\t/*\n> by default all are $$=$1 */\n> \t\t;\n>\n> +\n> /*****************************************************************\n> ************\n> + *\n> + *\t\t\t\tPREPARE STATEMENTS\n> + *\n> +\n> ******************************************************************\n> ***********/\n> + PrepareStmt: PREPARE name AS prepare_query\n> types_prepare_clause prepare_store\n> + \t\t\t\t{\n> + \t\t\t\t\tPrepareStmt *n =\n> makeNode(PrepareStmt);\n> + \t\t\t\t\tn->name = $2;\n> + \t\t\t\t\tn->query = (Query *) $4;\n> + \t\t\t\t\tn->types = (List *) $5;\n> + \t\t\t\t\tn->store = $6;\n> + \t\t\t\t\t$$ = (Node *) n;\n> + \t\t\t\t}\n> + \t\t;\n> +\n> + prepare_query: SelectStmt\n> + \t\t| UpdateStmt\n> + \t\t| InsertStmt\n> + \t\t| DeleteStmt\n> + \t\t;\n> +\n> + types_list: SimpleTypename\n> + \t\t\t\t{ $$ = makeList1($1); }\n> + \t\t| types_list ',' 
SimpleTypename\n> + \t\t\t\t{ $$ = lappend($1, $3); }\n> + \t\t;\n> +\n> + types_prepare_clause: USING types_list\t\t{ $$ = $2; }\n> + \t\t| /*EMPTY*/\n> \t{ $$ = NIL; }\n> + \t\t;\n> +\n> + prepare_store: NOSHARE\t\t{ $$ = 1; }\n> + \t\t| GLOBAL\t\t\t{ $$ = 2; }\n> + \t\t| SHARE\t\t\t\t{ $$ = 0; }\t/*\n> default */\n> + \t\t| /* EMPTY */\t\t{ $$ = 0; }\n> + \t\t;\n> +\n> +\n> /*****************************************************************\n> ************\n> + *\n> + *\t\t\t\tEXECUTE STATEMENTS\n> + *\n> +\n> ******************************************************************\n> ***********/\n> + ExecuteStmt: EXECUTE name into_clause USING execute_using prepare_store\n> + \t\t\t\t{\n> + \t\t\t\t\tExecuteStmt *n =\n> makeNode(ExecuteStmt);\n> + \t\t\t\t\tn->name = $2;\n> + \t\t\t\t\tn->into = $3;\n> + \t\t\t\t\tn->using = $5;\n> + \t\t\t\t\tn->store = $6;\n> + \t\t\t\t\t$$ = (Node *) n;\n>\n> + \t\t\t\t}\n> + \t\t;\n> +\n> + execute_using: a_expr\n> + \t\t\t\t{ $$ = makeList1($1); }\n> + \t\t| execute_using ',' a_expr\n> + \t\t\t\t{ $$ = lappend($1, $3); }\n> + \t\t;\n> +\n> +\n> /*****************************************************************\n> ************\n> + *\n> + *\t\t\t\tDEALLOCATE PREPARE STATEMENTS\n> + *\n> +\n> ******************************************************************\n> ***********/\n> + DeallocatePrepareStmt: DEALLOCATE PREPARE ALL\n> + \t\t\t\t{\n> + \t\t\t\t\tDeallocatePrepareStmt *n =\n> makeNode(DeallocatePrepareStmt);\n> + \t\t\t\t\tn->name = NULL;\n> + \t\t\t\t\tn->store = 0;\n> + \t\t\t\t\tn->all = TRUE;\n> + \t\t\t\t\tn->internal = FALSE;\n> + \t\t\t\t\t$$ = (Node *) n;\n> + \t\t\t\t}\n> + \t\t| DEALLOCATE PREPARE ALL INTERNAL\n> + \t\t\t\t{\n> + \t\t\t\t\tDeallocatePrepareStmt *n =\n> makeNode(DeallocatePrepareStmt);\n> + \t\t\t\t\tn->name = NULL;\n> + \t\t\t\t\tn->store = 0;\n> + \t\t\t\t\tn->all = FALSE;\n> + \t\t\t\t\tn->internal = TRUE;\n> + \t\t\t\t\t$$ = (Node *) n;\n> + \t\t\t\t}\n> + \t\t| DEALLOCATE PREPARE name 
prepare_store\n> + \t\t\t\t{\n> + \t\t\t\t\tDeallocatePrepareStmt *n =\n> makeNode(DeallocatePrepareStmt);\n> + \t\t\t\t\tn->name = $3;\n> + \t\t\t\t\tn->store = $4;\n> + \t\t\t\t\tn->all = FALSE;\n> + \t\t\t\t\tn->internal = FALSE;\n> + \t\t\t\t\t$$ = (Node *) n;\n> + \t\t\t\t}\n> + \t\t;\n>\n>\n> /*****************************************************************\n> ************\n> *\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n", "msg_date": "Thu, 11 Apr 2002 10:14:39 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: help with bison" }, { "msg_contents": "> In fact, my grammar currently has an obscene\n> 20 shift/reduce and 4 reduce/reduce conflicts!\n\nA shift/reduce conflict, IIRC, usually indicates a situation where\nthe grammar is unambiguous but may be inefficient. Eliminating them\nis nice, but not critical.\n\nA R/R conflict, in contrast, is a point where the grammar is ambiguous\nand you *must* fix it.\n\n> (Unfortunately, bison isn't very\n> helpful: it doesn't provide line-numbers when it warns me about\n> the # of conflicts).\n\nTurn on the verbose flag (-v|--verbose). Near the top it should\nlist the S/R and R/R states. You can then examine the states and\nrules and see exactly where the problem is.\n\nCutting to the chase, the R/R problems are due to \"TEMP\" and\n\"TEMPORARY\" being both \"unreserved keywords\" and part of\nOptTempTableName. 
If you comment them out in 'unreserved keywords'\nthe R/R error goes away but this may introduce errors elsewhere.\n\nWhat is probably better is to move those tokens someplace else\nwhere there's some context:\n\n into_clause : INTO OptTempTableName\n\t| /* EMPTY */\n\t;\n\nneeds to be replaced with something like\n\n into_options : /* EMPTY */\n\t| TEMPORARY\n\t| TEMP\n\t;\n\n into_clause : INTO into_options OptTempTableName\n\t| /* EMPTY */\n\t;\n\nwith the corresponding modifiers removed from the OptTempTableName.\n\nUnfortunately, when I quickly tested that, the number of S/R conflicts \nballooned to 378.\n\nAs an aside, is there any reason to treat TEMP and TEMPORARY as two\nseparate identifiers? It's common practice to have synonyms mapped\nto a single identifier in the lexical analyzer, and the grammar itself\nlooks like it could benefit from some helper rules such as:\n\n temporary\n\t: TEMPORARY { $$ = 1; }\n\t| TEMP\t { $$ = 1; }\n\t| \t { $$ = 0; }\n\t;\n\n scope\n\t: GLOBAL { $$ = 1; }\n \t| LOCAL { $$ = 2; }\n\t| { $$ = 0; }\n\t;\n\n something : scope temporary somethingelse { ... }\n\nBear\n", "msg_date": "Wed, 10 Apr 2002 20:20:12 -0600 (MDT)", "msg_from": "Bear Giles <bgiles@coyotesong.com>", "msg_from_op": false, "msg_subject": "Re: help with bison" }, { "msg_contents": "On Wed, 10 Apr 2002, Bear Giles wrote:\n\n> > In fact, my grammar currently has an obscene\n> > 20 shift/reduce and 4 reduce/reduce conflicts!\n> \n> A shift/reduce conflict, IIRC, usually indicates a situation where\n> the grammar is unambiguous but may be inefficient. Eliminating them\n> is nice, but not critical.\n\nThis is not correct. 
A shift/reduce conflict is where the grammar is\nambiguous.\n\n> \n> A R/R conflict, in contrast, is a point where the grammar is ambiguous\n> and you *must* fix it.\n\nA reduce/reduce conflict is where there is more than one rule which could\nbe used for the reduction of the grammar.\n\nGavin\n\n", "msg_date": "Thu, 11 Apr 2002 12:44:13 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: help with bison" }, { "msg_contents": "Out of interest, since the FE/BE protocol apparently doesn't support prepared\nstatements (bound variables), what does this patch actually _do_?\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Neil Conway\n> Sent: Thursday, 11 April 2002 9:29 AM\n> To: PostgreSQL Hackers\n> Subject: [HACKERS] help with bison\n>\n>\n> Hi all,\n>\n> I'm working on a fairly large patch (cleaning up Karel Zak's\n> PREPARE/EXECUTE work), and I'm having some problems with bison (I'm\n> a yacc newbie). In fact, my grammar currently has an obscene\n> 20 shift/reduce and 4 reduce/reduce conflicts!\n>\n> Would someone to be kind enough to let me know what I'm doing wrong,\n> and what I'll need to change? (Unfortunately, bison isn't very\n> helpful: it doesn't provide line-numbers when it warns me about\n> the # of conflicts). 
The patch for gram.y is below.\n>\n> Thanks in advance,\n>\n> Neil\n>\n> --\n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n\n", "msg_date": "Thu, 11 Apr 2002 10:54:14 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: help with bison" }, { "msg_contents": "On Thu, 11 Apr 2002 10:54:14 +0800\n\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> wrote:\n> Out of interest, since the FE/BE protocol apprently doesn't support prepared\n> statements (bound variables), what does this patch actually _do_?\n\nIt implements preparable statements, by adding 3 new SQL statements:\n\nPREPARE <plan> AS <query>;\nEXECUTE <plan> USING <parameters>;\nDEALLOCATE <plan>;\n\nI didn't write the original patch -- that was done by Karel Zak.\nBut since that was several years ago, I'm working on cleaning it up,\ngetting it to apply to current sources (which has taken a while),\nand fixing the remaining issues with it. Karel describes his work\nhere:\n\nhttp://groups.google.com/groups?q=query+cache+plan&hl=en&selm=8l4jua%242fo0%241%40FreeBSD.csie.NCTU.edu.tw&rnum=1\n\n(If that's messed up due to newlines, search for \"query cache plan\"\non Google Groups, it's the first result)\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Wed, 10 Apr 2002 23:03:09 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": true, "msg_subject": "Re: help with bison" }, { "msg_contents": "> PREPARE <plan> AS <query>;\n> EXECUTE <plan> USING <parameters>;\n> DEALLOCATE <plan>;\n>\n> I didn't write the original patch -- that was done by Karel Zak.\n> But since that was several years ago, I'm working on cleaning it up,\n> getting it to apply to current sources (which has taken a while),\n> and fixing the remaining issues with it. Karel describes his work\n> here:\n\nOK, fair enough. 
What I don't get is how this patch relates to the problem of the FE/BE\nprotocol not supporting variable binding. Am I getting confused here?\n\nChris\n\n", "msg_date": "Thu, 11 Apr 2002 11:21:50 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: help with bison" }, { "msg_contents": "Bear Giles <bgiles@coyotesong.com> writes:\n> As an aside, is there any reason to treat TEMP and TEMPORARY as two\n> separate identifiers?\n\nYes: if the lexer folds them together then unreserved_keyword can't\nregenerate the equivalent name properly. (Possibly this could be fixed\nby making the lexer pass the input string as the value of a keyword\ntoken, but I've not looked at details.)\n\nYou might be right that the grammar could benefit from some refactoring,\nthough I'm not at all sure if that really helps from an\nexecution-efficiency (number of states) standpoint.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Apr 2002 23:24:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: help with bison " }, { "msg_contents": "> > As an aside, is there any reason to treat TEMP and TEMPORARY as two\n> > separate identifiers?\n> \n> Yes: if the lexer folds them together then unreserved_keyword can't\n> regenerate the equivalent name properly.\n\nBut if they're synonyms, is that necessary? I'm not indifferent to the\nbenefits of being able to recreate an input string exactly when all other\nthings are equal, but things aren't equal here. 
TEMPORARY is a SQL92\nkeyword, TEMP is described as a \"Keyword for Postgres support,\" but the\ngrammar shows that one never appears without the other.\n\nSo why not deprecate TEMP and always show TEMPORARY when reconstructing\nthe query?\n\n> You might be right that the grammar could benefit from some refactoring,\n> though I'm not at all sure if that really helps from an\n> execution-efficiency (number of states) standpoint.\n\nThe goal of the refactoring wouldn't be execution efficiency, it would \nbe simplifying maintenance of the grammar. And it looks like it's the\ncommon practice elsewhere, just not in the OptTemp and OptTempTableName\nrules.\n\nBear\n", "msg_date": "Wed, 10 Apr 2002 21:52:06 -0600 (MDT)", "msg_from": "Bear Giles <bgiles@coyotesong.com>", "msg_from_op": false, "msg_subject": "Re: help with bison" }, { "msg_contents": "Bear Giles <bgiles@coyotesong.com> writes:\n>> Yes: if the lexer folds them together then unreserved_keyword can't\n>> regenerate the equivalent name properly.\n\n> But if they're synonyms, is that necessary?\n\nIf I say\n\tcreate table foo (temp int);\nI will be annoyed if the system decides that the column is named\n\"temporary\". Being synonyms in the SQL grammar does not make them\nequivalent when used as plain identifiers.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Apr 2002 00:33:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: help with bison " }, { "msg_contents": "On Wed, 10 Apr 2002, Neil Conway wrote:\n\n> Hi all,\n> \n> I'm working on a fairly large patch (cleaning up Karel Zak's\n> PREPARE/EXECUTE work), and I'm having some problems with bison (I'm\n> a yacc newbie). In fact, my grammar currently has an obscene\n> 20 shift/reduce and 4 reduce/reduce conflicts!\n\nYour first set of problems is coming from PrepareStmt:\n\n---\n\nPrepareStmt: PREPARE name AS prepare_query types_prepare_clause\nprepare_store\n\n---\n\nThere is a reasonably clear problem here. 
prepare_query encompasses much\nof the grammar of the parser so it will definitely cause shift/reduce and\nreduce/reduce conflicts with the other two productions which follow\nit. Easy solution?\n\nPrepareStmt: PREPARE name types_prepare_clause prepare_store AS\nprepare_query\n\nYour second problem is in ExecuteStmt:\n\nExecuteStmt: EXECUTE name into_clause USING execute_using prepare_store\n\nHere your problem is with execute_using and prepare_store. I am not sure\nwhy.\n\nGavin\n\n", "msg_date": "Thu, 11 Apr 2002 15:02:49 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: help with bison" }, { "msg_contents": "Neil,\n\nWill this allow you to pass bytea data as binary data in the parameters \nsection (ability to bind values to parameters) or will this still \nrequire that the data be passed as a text string that the parser needs \nto parse? When passing bytea data that is on the order of Megs in size \n(thus the insert/update statement is multiple Megs in size) it takes a \nlot of CPU cycles for the parser to chug through sql statements that \nlong. 
(In fact a posting to the jdbc mail list in the last couple of \ndays shows that postgres is 22 times slower than oracle when handling a \n1Meg value in a bytea column).\n\nthanks,\n--Barry\n\nNeil Conway wrote:\n> On Thu, 11 Apr 2002 10:54:14 +0800\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> wrote:\n> \n>>Out of interest, since the FE/BE protocol apprently doesn't support prepared\n>>statements (bound variables), what does this patch actually _do_?\n> \n> \n> It implements preparable statements, by adding 3 new SQL statements:\n> \n> PREPARE <plan> AS <query>;\n> EXECUTE <plan> USING <parameters>;\n> DEALLOCATE <plan>;\n> \n> I didn't write the original patch -- that was done by Karel Zak.\n> But since that was several years ago, I'm working on cleaning it up,\n> getting it to apply to current sources (which has taken a while),\n> and fixing the remaining issues with it. Karel describes his work\n> here:\n> \n> http://groups.google.com/groups?q=query+cache+plan&hl=en&selm=8l4jua%242fo0%241%40FreeBSD.csie.NCTU.edu.tw&rnum=1\n> \n> (If that's messed up due to newlines, search for \"query cache plan\"\n> on Google Groups, it's the first result)\n> \n> Cheers,\n> \n> Neil\n> \n\n\n", "msg_date": "Wed, 10 Apr 2002 22:36:49 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: help with bison" }, { "msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n> PrepareStmt: PREPARE name AS prepare_query types_prepare_clause\n> prepare_store\n\n> There is a reasonably clear problem here. prepare_query encompasses much\n> of the grammar of the parser so it will definately cause shift/reduce and\n> reduce/reduce conflicts with the other two productions which follow\n> it. 
Easy solution?\n\n> PrepareStmt: PREPARE name types_prepare_clause prepare_store AS\n> prepare_query\n\nIs there any existing standard to follow for the syntax of these\ncommands?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Apr 2002 11:46:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: help with bison " }, { "msg_contents": "On Wed, 10 Apr 2002 22:36:49 -0700\n\"Barry Lind\" <barry@xythos.com> wrote:\n> Neil,\n> \n> Will this allow you to pass bytea data as binary data in the parameters \n> section (ability to bind values to parameters) or will this still \n> require that the data be passed as a text string that the parser needs \n> to parse.\n\nThe patch I'm working on would require that the parameters\nare still parsed, so it would likely suffer from the same\nperformance problems.\n\nI'm unsure how to fix this without a change to the FE/BE\nprotocol (and even then, it wouldn't be trivial). Suggestions?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Thu, 11 Apr 2002 12:02:38 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": true, "msg_subject": "Re: help with bison" }, { "msg_contents": "On Thu, 11 Apr 2002 15:02:49 +1000 (EST)\n\"Gavin Sherry\" <swm@linuxworld.com.au> wrote:\n> On Wed, 10 Apr 2002, Neil Conway wrote:\n> \n> > Hi all,\n> > \n> > I'm working on a fairly large patch (cleaning up Karel Zak's\n> > PREPARE/EXECUTE work), and I'm having some problems with bison (I'm\n> > a yacc newbie). In fact, my grammar currently has an obscene\n> > 20 shift/reduce and 4 reduce/reduce conflicts!\n> \n> Your first set of problems is coming from PrepareStmt:\n> [...]\n\n> Your second problem is in ExecuteStmt:\n> [...]\n\nGreat, thanks Gavin! (I owe you a beer!)\n\nI re-arranged PrepareStmt as you suggested, as well as\nExecuteStmt, and the conflicts have disappeared. 
I'm not sure\nif the new syntax is ideal, but I'm happy to leave it as it is\nuntil I've got the patch mostly working. I'd welcome any\nsuggestions for improvements to the syntax that still allow\nfor sane parsing.\n\nThanks again,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Thu, 11 Apr 2002 12:16:34 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": true, "msg_subject": "Re: help with bison" }, { "msg_contents": "Neil Conway wrote:\n > On Wed, 10 Apr 2002 22:36:49 -0700 \"Barry Lind\" <barry@xythos.com>\n > wrote:\n >\n >> Neil,\n >>\n >> Will this allow you to pass bytea data as binary data in the\n >> parameters section (ability to bind values to parameters) or will\n >> this still require that the data be passed as a text string that\n >> the parser needs to parse.\n >\n >\n > The patch I'm working on would require that the parameters are still\n > parsed, so it would likely suffer from the same performance\n > problems.\n >\n > I'm unsure how to fix this without a change to the FE/BE protocol\n > (and even then, it wouldn't be trivial). Suggestions?\n >\n\n\nThe other day there was a discussion around the fact that X'ffff' will\nget converted into an integer constant, e.g.\n\ntest=# select X'ffff';\n ?column?\n----------\n 65535\n(1 row)\n\n, while SQL99 says that this syntax *should* be used to specify a \n\"binary string\". It looks like the hex-to-integer magic actually occurs \nin the lexer, and then the integer value of 65535 is passed to the \nparser as an ICONST. 
I'm wondering if changing the lexer to make this a \nconversion to a properly escaped bytea input string, and passing it to \nthe parser as a string constant would speed things up?\n\nJoe\n\n\n\n\n", "msg_date": "Thu, 11 Apr 2002 09:26:19 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: help with bison" }, { "msg_contents": "On Thu, 11 Apr 2002, Tom Lane wrote:\n\n> Gavin Sherry <swm@linuxworld.com.au> writes:\n> > PrepareStmt: PREPARE name AS prepare_query types_prepare_clause\n> > prepare_store\n> \n> > There is a reasonably clear problem here. prepare_query encompasses much\n> > of the grammar of the parser so it will definately cause shift/reduce and\n> > reduce/reduce conflicts with the other two productions which follow\n> > it. Easy solution?\n> \n> > PrepareStmt: PREPARE name types_prepare_clause prepare_store AS\n> > prepare_query\n> \n> Is there any existing standard to follow for the syntax of these\n> commands?\n\nSQL92 lists:\n\n <prepare statement> ::=\n PREPARE <SQL statement name> FROM <SQL statement variable>\n\nWhere <SQL statement variable> can resolve to:\n\n <preparable statement> ::=\n <preparable SQL data statement>\n | <preparable SQL schema statement>\n | <preparable SQL transaction statement>\n | <preparable SQL session statement>\n | <preparable implementation-defined statement>\n\n <preparable SQL data statement> ::=\n <delete statement: searched>\n | <dynamic single row select statement>\n | <insert statement>\n | <dynamic select statement>\n | <update statement: searched>\n | <preparable dynamic delete statement: positioned>\n | <preparable dynamic update statement: positioned>\n\n <preparable SQL schema statement> ::=\n <SQL schema statement>\n\n <preparable SQL transaction statement> ::=\n <SQL transaction statement>\n\n <preparable SQL session statement> ::=\n <SQL session statement>\n\n <dynamic select statement> ::= <cursor specification>\n\n <dynamic single row select statement> ::= 
<query specification>\n\n\n\nSo, the form is (according to the yacc code):\n\nPREPARE name FROM prepare_query\n\nThis seems a lot simpler. What is 'types_prepare_clause' used for? I\npresume storage relates to whether or not all backends can see the\nprepared statement? (As a note to those interested in statement\npreparation, this is not the same as variable binding and so requires no\nmodification of the FE/BE protocol).\n\nAs a side note, I'm not sure I really see the point of preparable SQL. The\nonly real use I can think of is if the query could be prepared on the\nclient side. This would require modification of the parser (a few\n#ifdefs), a few client side functions, some memory storage and a\nmodification of the FE/BE protocol to be able to accept parsed query\nnodes. This would allow the backend to spend as little time in the parser\nas possible, if that is desirable. It removes the ability to share\nprepared queries between backends however.\n\nGavin\n\n\n\n\n", "msg_date": "Fri, 12 Apr 2002 02:26:47 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: help with bison " }, { "msg_contents": "Joe Conway writes:\n\n> The other day there was a discussion around the fact that X'ffff' will\n> get converted into an integer constant, e.g.\n\n> , while SQL99 says that this syntax *should* be used to specify a\n> \"binary string\".\n\nActually, SQL99 is ambiguous regarding whether it represents a blob or a\nbit string. But either of these would be better than an integer.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 11 Apr 2002 14:00:28 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: help with bison" }, { "msg_contents": "> The other day there was a discussion around the fact that X'ffff' will\n> get converted into an integer constant...\n> ... while SQL99 says that this syntax *should* be used to specify a\n> \"binary string\". 
It looks like the hex-to-integer magic actually occurs\n> in the lexer, and then the integer value of 65535 is passed to the\n> parser as an ICONST. I'm wondering if changing the lexer to make this a\n> conversion to a properly escaped bytea input string, and passing it to\n> the parser as a string constant would speed things up?\n\nWhat else is described as a \"binary string\" in the spec? I would have\nguessed that this would map to a bit field type (and maybe even had\nlooked it up at one time).\n\nIs B'00010001' also described as a \"binary string\", or is it more\nexplicitly tied to bit fields?\n\n - Thomas\n", "msg_date": "Thu, 11 Apr 2002 22:29:20 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: help with bison" }, { "msg_contents": "Thomas Lockhart wrote:\n >> The other day there was a discussion around the fact that X'ffff'\n >> will get converted into an integer constant... ... while SQL99\n >> says that this syntax *should* be used to specify a \"binary\n >> string\". It looks like the hex-to-integer magic actually occurs in\n >> the lexer, and then the integer value of 65535 is passed to the parser\n >> as an ICONST. I'm wondering if changing the lexer to make this a\n >> conversion to a properly escaped bytea input string, and passing it to the\n >> parser as a string constant would speed things up?\n >\n >\n > What else is described as a \"binary string\" in the spec? I would\n > have guessed that this would map to a bit field type (and maybe even\n > had looked it up at one time).\n >\n > Is B'00010001' also described as a \"binary string\", or is it\n > more explicitly tied to bit fields?\n >\n > - Thomas\n\nIn SQL99, Section \"5.3 <literal>\", I see this:\n\n <national character string literal> ::=\n N <quote> [ <character representation>... ] <quote>\n [ { <separator> <quote> [ <character representation>... ]\n <quote> }... ]\n <bit string literal> ::=\n B <quote> [ <bit>... 
] <quote>\n [ { <separator> <quote> [ <bit>... ] <quote> }... ]\n <hex string literal> ::=\n X <quote> [ <hexit>... ] <quote>\n [ { <separator> <quote> [ <hexit>... ] <quote> }... ]\n <binary string literal> ::=\n X <quote> [ { <hexit> <hexit> }... ] <quote>\n [ { <separator> <quote> [ { <hexit> <hexit> }... ] <quote> }... ]\n <bit> ::=\n 0 | 1\n <hexit> ::=\n <digit> | A | B | C | D | E | F | a | b | c | d | e | f\n\nand further down:\n\n 11) The declared type of a <bit string literal> is fixed-length\n bit string. The length of a <bit string literal> is the number\n of bits that it contains.\n 12) The declared type of a <hex string literal> is fixed-length bit\n string. Each <hexit> appearing in the literal is equivalent to\n a quartet of bits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E,\n and F are interpreted as 0000, 0001, 0010, 0011, 0100, 0101,\n 0110, 0111, 1000, 1001, 1010, 1011, 1100, 1101, 1110, and 1111,\n respectively. The <hexit>s a, b, c, d, e, and f have respectively\n the same values as the <hexit>s A, B, C, D, E, and F.\n 13) The declared type of a <binary string literal> is binary string.\n Each <hexit> appearing in the literal is equivalent to a quartet\n of bits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F are\n interpreted as 0000, 0001, 0010, 0011, 0100, 0101, 0110, 0111,\n 1000, 1001, 1010, 1011, 1100, 1101, 1110, and 1111, respectively.\n The <hexit>s a, b, c, d, e, and f have respectively the same\n values as the <hexit>s A, B, C, D, E, and F.\n\nSo, as Peter pointed out, X'ffff' can be interpreted as a binary string \n*or* a bit string, but ISTM B'1111' is explicitly tied to a bit string.\n\nJoe\n\n", "msg_date": "Fri, 12 Apr 2002 09:21:06 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: help with bison" } ]
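The thread above quotes SQL99's rules 12 and 13, under which each <hexit> of an X'...' literal expands to a four-bit quartet — so X'ffff' denotes the 16-bit string 1111111111111111 rather than the integer 65535 that the 2002-era lexer produced. A minimal sketch of that expansion (illustrative only, not the PostgreSQL lexer code under discussion):

```python
# Sketch of SQL99 rules 12/13 quoted in the thread above: each <hexit> of an
# X'...' literal is equivalent to a quartet of bits (0 -> 0000, ..., F -> 1111),
# so X'ffff' is a 16-bit string, not the integer 65535. Illustrative only --
# this is not the PostgreSQL lexer code being discussed.

def hex_literal_to_bits(hexits: str) -> str:
    """Expand the hexits of an X'...' literal into its bit-string value."""
    quartet = {h: format(i, "04b") for i, h in enumerate("0123456789abcdef")}
    return "".join(quartet[h.lower()] for h in hexits)

print(hex_literal_to_bits("ffff"))  # 1111111111111111
print(hex_literal_to_bits("1A"))    # 00011010
```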
[ { "msg_contents": "Hi all,\n\nI got these odd messages while doing a vacuum in 7.1.3 0 - any idea what\nthey mean? I assume it's not fatal as they're just notices, but I've never\nhad them before and haven't had them since.\n\nNOTICE: RegisterSharedInvalid: SI buffer overflow\nNOTICE: InvalidateSharedInvalid: cache state reset\n\nChris\n\n", "msg_date": "Thu, 11 Apr 2002 09:38:23 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Odd error during vacuum" }, { "msg_contents": "On Thu, 11 Apr 2002, Christopher Kings-Lynne wrote:\n\n> Hi all,\n> \n> I got these odd messages while doing a vacuum in 7.1.3 0 - any idea what\n> they mean? I assume it's not fatal as they're just notices, but I've never\n> had them before and haven't had them since.\n> \n> NOTICE: RegisterSharedInvalid: SI buffer overflow\n> NOTICE: InvalidateSharedInvalid: cache state reset\n\nThis just means that the cache invalidation buffer got overloaded and was\nreset. Its not really a problem (except in terms of performance). I would\nsay that if you haven't seen this before your database is getting more\nusage and/or more data.\n\nTo fix this increase shared_buffers.\n\nGavin\n\n\n", "msg_date": "Thu, 11 Apr 2002 11:45:04 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Odd error during vacuum" }, { "msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n>> NOTICE: RegisterSharedInvalid: SI buffer overflow\n>> NOTICE: InvalidateSharedInvalid: cache state reset\n\n> To fix this increase shared_buffers.\n\nAFAIK shared_buffers has no direct effect on the rate of SI overruns.\nI suppose it might have an indirect effect just by improving overall\nperformance...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Apr 2002 21:57:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Odd error during vacuum " } ]
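The overflow behaviour described in the thread above — a backend falling too far behind the shared-invalidation message queue and having to reset its entire cache — can be modelled roughly as follows. This is a toy sketch, not the actual sinval code; the buffer size and the API names here are invented for illustration:

```python
# Toy model of the shared-invalidation (SI) buffer behaviour reported above:
# invalidation messages accumulate in a fixed-size buffer, and a backend that
# falls more than the buffer size behind "overflows" and must discard (reset)
# its whole catalog cache. BUFFER_SIZE and this API are invented for
# illustration; the real PostgreSQL sinval machinery differs.

BUFFER_SIZE = 4

class SIBuffer:
    def __init__(self):
        self.next_msg = 0     # sequence number of the next message to send
        self.reader_pos = {}  # backend name -> next message it will read

    def register(self, backend: str) -> None:
        self.reader_pos[backend] = self.next_msg

    def send(self) -> None:
        self.next_msg += 1

    def read(self, backend: str):
        """Return 'reset' on overflow, else the number of messages consumed."""
        behind = self.next_msg - self.reader_pos[backend]
        self.reader_pos[backend] = self.next_msg
        if behind > BUFFER_SIZE:
            return "reset"    # "SI buffer overflow ... cache state reset"
        return behind

buf = SIBuffer()
buf.register("vacuum backend")
for _ in range(6):            # more messages arrive than the buffer holds
    buf.send()
print(buf.read("vacuum backend"))  # reset
```

In this model the reset is a correctness fallback rather than an error, which matches the advice in the thread that the notices are harmless apart from the performance cost of rebuilding the cache.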
[ { "msg_contents": "Is anyone feeling we have the 7.3 release nearing? I certainly am not. \nI can imagine us going for several more months like this, perhaps\nthrough August.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Apr 2002 00:51:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "7.3 schedule" }, { "msg_contents": "> Is anyone feeling we have the 7.3 release nearing?\n\nNo way!\n\n> I certainly am not. \n> I can imagine us going for several more months like this, perhaps\n> through August.\n\nEasily. I think that the critical path is Tom's schema support.\n\nWe'll need a good beta period this time, because of:\n\n* Schemas\n* Prepare/Execute maybe\n* Domains\n\nChris\n\n", "msg_date": "Thu, 11 Apr 2002 13:07:21 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > Is anyone feeling we have the 7.3 release nearing?\n> \n> No way!\n\nGood.\n\n> > I certainly am not. \n> > I can imagine us going for several more months like this, perhaps\n> > through August.\n> \n> Easily. I think that the critical path is Tom's schema support.\n> \n> We'll need a good beta period this time, because of:\n> \n> * Schemas\n> * Prepare/Execute maybe\n> * Domains\n\nI guess I am hoping for even more killer features for this release.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Apr 2002 01:17:10 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n\n>>Is anyone feeling we have the 7.3 release nearing?\n>>\n>\n>No way!\n>\n>>I certainly am not. \n>>I can imagine us going for several more months like this, perhaps\n>>through August.\n>>\n>\n>Easily. I think that the critical path is Tom's schema support.\n>\n>We'll need a good beta period this time, because of:\n>\n>* Schemas\n>* Prepare/Execute maybe\n>\nWhat are the chances that the BE/FE will be altered to take advantage of \nprepare / execute? Or is it something that will \"never happen\"?\n\n>\n>* Domains\n>\n>Chris\n>\nAshley Cambrell\n\n", "msg_date": "Thu, 11 Apr 2002 16:25:24 +1000", "msg_from": "Ashley Cambrell <ash@freaky-namuh.com>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "For the next release and package it would be good to differentiate the\nrelease candidate to the proper release. (7.2.1 had the same name and it can\nbe confusing). a suffix postgresql-7.3-RCN.tar.gz is enough to make the\ndifference between different verisons of release candidates and the final\nrelease.\n\n----- Original Message -----\nFrom: \"Ashley Cambrell\" <ash@freaky-namuh.com>\nTo: \"PostgreSQL-development\" <pgsql-hackers@postgresql.org>\nSent: Thursday, April 11, 2002 4:25 PM\nSubject: Re: [HACKERS] 7.3 schedule\n\n\n> Christopher Kings-Lynne wrote:\n>\n> >>Is anyone feeling we have the 7.3 release nearing?\n> >>\n> >\n> >No way!\n> >\n> >>I certainly am not.\n> >>I can imagine us going for several more months like this, perhaps\n> >>through August.\n> >>\n> >\n> >Easily. 
I think that the critical path is Tom's schema support.\n> >\n> >We'll need a good beta period this time, because of:\n> >\n> >* Schemas\n> >* Prepare/Execute maybe\n> >\n> What are the chances that the BE/FE will be altered to take advantage of\n> prepare / execute? Or is it something that will \"never happen\"?\n>\n> >\n> >* Domains\n> >\n> >Chris\n> >\n> Ashley Cambrell\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n>\n\n\n", "msg_date": "Thu, 11 Apr 2002 16:35:51 +1000", "msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "> We'll need a good beta period this time, because of:\n\nI know it's a sore subject, but how about \"ALTER TABLE DROP COLUMN\" this\ntime around? I've been hearing about it for years now. :)\n\n- brandon\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Thu, 11 Apr 2002 06:54:42 -0400 (EDT)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "Nicolas Bazin writes:\n\n> For the next release and package it would be good to differentiate the\n> release candidate to the proper release.\n\nThey do have different names.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 11 Apr 2002 11:51:46 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "On Thu, 11 Apr 2002 16:25:24 +1000\n\"Ashley Cambrell\" <ash@freaky-namuh.com> wrote:\n> What are the chances that the BE/FE will be altered to take advantage of \n> prepare / execute? Or is it something that will \"never happen\"?\n\nIs there a need for this? 
The current patch I'm working on just\ndoes everything using SQL statements, which I don't think is\ntoo bad (the typical client programmer won't actually need to\nsee them, their interface should wrap the PREPARE/EXECUTE stuff\nfor them).\n\nOn the other hand, there are already a few reasons to make some\nchanges to the FE/BE protocol (NOTIFY messages, transaction state,\nand now possibly PREPARE/EXECUTE -- anything else?). IMHO, each of\nthese isn't worth changing the protocol by itself, but perhaps if\nwe can get all 3 in one swell foop it might be a good idea...\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Thu, 11 Apr 2002 11:54:34 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "On Thu, 11 Apr 2002, Bruce Momjian wrote:\n\n> Is anyone feeling we have the 7.3 release nearing? I certainly am not.\n> I can imagine us going for several more months like this, perhaps\n> through August.\n\nseeing as how we just released v7.2, I don't see a v7.3 even going beta\nuntil end of Summer ... I personally consider July/August to be relatively\ndead months since too much turnover of ppl going on holidays with their\nkids ... right now, I'm kinda seeing Sept 1st/Labour Day Weekend timeframe\nfrom going Beta ...\n\n", "msg_date": "Thu, 11 Apr 2002 13:00:03 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> On the other hand, there are already a few reasons to make some\n> changes to the FE/BE protocol (NOTIFY messages, transaction state,\n> and now possibly PREPARE/EXECUTE -- anything else?).\n\nPassing EXECUTE parameters without having them go through the parser\ncould possibly be done without a protocol change: use the 'fast path'\nfunction-call code to pass binary parameters to a function that is\notherwise equivalent to EXECUTE.\n\nOn the other hand, the 'fast path' protocol itself is pretty horribly\nmisdesigned, and I'm not sure I want to encourage more use of it until\nwe can get it cleaned up (see the comments in backend/tcop/fastpath.c).\nAside from lack of robustness, I'm not sure it can work at all for\nfunctions that don't have prespecified types and numbers of parameters.\n\nThe FE/BE COPY protocol is also horrible. So yeah, there are a bunch of\nthings we *could* fix if we were ready to take on a protocol change.\n\nMy own thought is this might be better held for 7.4, though. We are\nalready going to be causing application programmers a lot of pain with\nthe schema changes and ensuing system-catalog revisions. That might\nbe enough on their plates for this cycle.\n\nIn any case, for the moment I think it's fine to be working on\nPREPARE/EXECUTE support at the SQL level. 
We can worry about adding\na parser bypass for EXECUTE parameters later.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Apr 2002 12:14:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule " }, { "msg_contents": "On Thu, 2002-04-11 at 18:14, Tom Lane wrote:\n> Neil Conway <nconway@klamath.dyndns.org> writes:\n> > On the other hand, there are already a few reasons to make some\n> > changes to the FE/BE protocol (NOTIFY messages, transaction state,\n> > and now possibly PREPARE/EXECUTE -- anything else?).\n> \n> Passing EXECUTE parameters without having them go through the parser\n> could possibly be done without a protocol change: use the 'fast path'\n> function-call code to pass binary parameters to a function that is\n> otherwise equivalent to EXECUTE.\n> \n> On the other hand, the 'fast path' protocol itself is pretty horribly\n> misdesigned, and I'm not sure I want to encourage more use of it until\n> we can get it cleaned up (see the comments in backend/tcop/fastpath.c).\n> Aside from lack of robustness, I'm not sure it can work at all for\n> functions that don't have prespecified types and numbers of parameters.\n> \n> The FE/BE COPY protocol is also horrible. So yeah, there are a bunch of\n> things we *could* fix if we were ready to take on a protocol change.\n\nAlso _universal_ binary on-wire representation for types would be a good\nthing. There already are slots in pg_type for functions to do that. By\ndoing so we could also avoid parsing text representations of field data.\n\n> My own thought is this might be better held for 7.4, though. We are\n> already going to be causing application programmers a lot of pain with\n> the schema changes and ensuing system-catalog revisions. That might\n> be enough on their plates for this cycle.\n> \n> In any case, for the moment I think it's fine to be working on\n> PREPARE/EXECUTE support at the SQL level. 
We can worry about adding\n> a parser bypass for EXECUTE parameters later.\n\nIIRC someone started work on modularising the network-related parts with\na goal of supporting DRDA (DB2 protocol) and others in future.\n\n-----------------\nHannu\n\n\n", "msg_date": "11 Apr 2002 19:47:06 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "\n\nNeil Conway wrote:\n> On Thu, 11 Apr 2002 16:25:24 +1000\n> \"Ashley Cambrell\" <ash@freaky-namuh.com> wrote:\n> \n>>What are the chances that the BE/FE will be altered to take advantage of \n>>prepare / execute? Or is it something that will \"never happen\"?\n> \n> \n> Is there a need for this? The current patch I'm working on just\n> does everything using SQL statements, which I don't think is\n> too bad (the typical client programmer won't actually need to\n> see them, their interface should wrap the PREPARE/EXECUTE stuff\n> for them).\n> \n\nYes there is a need.\n\nIf you break up the query into roughly three stages of execution:\nparse, plan, and execute, each of these can be the performance \nbottleneck. The parse can be the performance bottleneck when passing \nlarge values as data to the parser (eg. inserting one row containing a \n100K value will result in a 100K+ sized statement that needs to be \nparsed, parsing will take a long time, but the planning and execution \nshould be relatively short). The planning stage can be a bottleneck for \ncomplex queries. And of course the execution stage can be a bottleneck \nfor all sorts of reasons (eg. bad plans, missing indexes, bad \nstatistics, poorly written sql, etc.).\n\nSo if you look at the three stages (parse, plan, execute) we have a lot \nof tools, tips, and techniques for making the execute faster. We have \nsome tools (at least on the server side via SPI, and plpgsql) to help \nminimize the planning costs by reusing plans. 
But there doesn't exist \nmuch to help with the parsing cost of large values (actually the \nfastpath API does help in this regard, but everytime I mention it Tom \nresponds that the fastpath API should be avoided).\n\nSo when I look at the proposal for the prepare/execute stuff:\nPREPARE <plan> AS <query>;\nEXECUTE <plan> USING <parameters>;\nDEALLOCATE <plan>;\n\nExecuting a sql statement today is the following:\ninsert into table values (<stuff>);\nwhich does one parse, one plan, one execute\n\nunder the new functionality:\nprepare <plan> as insert into table values (<stuff>);\nexecute <plan> using <stuff>;\nwhich does two parses, one plan, one execute\n\nwhich obviously isn't a win unless you end up reusing the plan many \ntimes. So lets look at the case of reusing the plan multiple times:\nprepare <plan> as insert into table values (<stuff>);\nexecute <plan> using <stuff>;\nexecute <plan> using <stuff>;\n...\nwhich does n+1 parses, one plan, n executes\n\nso this is a win if the cost of the planing stage is significant \ncompared to the costs of the parse and execute stages. If the cost of \nthe plan is not significant there is little if any benefit in doing this.\n\nI realize that there are situations where this functionality will be a \nbig win. But I question how the typical user of postgres will know when \nthey should use this functionality and when they shouldn't. Since we \ndon't currently provide any information to the user on the relative cost \nof the parse, plan and execute phases, the end user is going to be \nguessing IMHO.\n\nWhat I think would be a clear win would be if we could get the above \nsenario of multiple inserts down to one parse, one plan, n executes, and \nn binds (where binding is simply the operation of plugging values into \nthe statement without having to pipe the values through the parser). 
\nThis would be a win in most if not all circumstances where the same \nstatement is executed many times.\n\nI think it would also be nice if the new explain anaylze showed times \nfor the parsing and planning stages in addition to the execution stage \nwhich it currently shows so there is more information for the end user \non what approach they should take.\n\nthanks,\n--Barry\n\n> On the other hand, there are already a few reasons to make some\n> changes to the FE/BE protocol (NOTIFY messages, transaction state,\n> and now possibly PREPARE/EXECUTE -- anything else?). IMHO, each of\n> these isn't worth changing the protocol by itself, but perhaps if\n> we can get all 3 in one swell foop it might be a good idea...\n> \n> Cheers,\n> \n> Neil\n> \n\n\n", "msg_date": "Thu, 11 Apr 2002 11:38:33 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "Barry Lind <barry@xythos.com> writes:\n> ...\n> Since we \n> don't currently provide any information to the user on the relative cost \n> of the parse, plan and execute phases, the end user is going to be \n> guessing IMHO.\n\nYou can in fact get that information fairly easily; set \nshow_parser_stats, show_planner_stats, and show_executor_stats to 1\nand then look in the postmaster log for the results. (Although to be\nfair, this does not provide any accounting for the CPU time expended\nsimply to *receive* the query string, which might be non negligible\nfor huge queries.)\n\nIt would be interesting to see some stats for the large-BLOB scenarios\nbeing debated here. 
You could get more support for the position that\nsomething should be done if you had numbers to back it up.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Apr 2002 16:48:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule " }, { "msg_contents": "\nTom Lane wrote:\n\n > It would be interesting to see some stats for the large-BLOB scenarios\n > being debated here. You could get more support for the position that\n > something should be done if you had numbers to back it up.\n\nBelow are some stats you did a few months ago when I was asking a \nrelated question. Your summary was: \"Bottom line: feeding huge strings \nthrough the lexer is slow.\"\n\n--Barry\n\nTom Lane wrote:\n\n > Barry Lind <barry@xythos.com> writes:\n >\n >In looking at some performance issues (I was trying to look at the \n >overhead of toast) I found that large insert statements were very slow.\n > ...\n...\nI got around to reproducing this today,\nand what I find is that the majority of the backend time is going into\nsimple scanning of the input statement:\n\nEach sample counts as 0.01 seconds.\n % cumulative self self total time \nseconds seconds calls ms/call ms/call name 31.24 11.90 \n 11.90 _mcount\n 19.51 19.33 7.43 10097 0.74 1.06 base_yylex\n 7.48 22.18 2.85 21953666 0.00 0.00 appendStringInfoChar\n 5.88 24.42 2.24 776 2.89 2.89 pglz_compress\n 4.36 26.08 1.66 21954441 0.00 0.00 pq_getbyte\n 3.57 27.44 1.36 7852141 0.00 0.00 addlit\n 3.26 28.68 1.24 1552 0.80 0.81 scanstr\n 2.84 29.76 1.08 779 1.39 7.18 pq_getstring\n 2.31 30.64 0.88 10171 0.09 0.09 _doprnt\n 2.26 31.50 0.86 776 1.11 1.11 byteain\n 2.07 32.29 0.79 msquadloop\n 1.60 32.90 0.61 7931430 0.00 0.00 memcpy\n 1.18 33.35 0.45 chunks\n 1.08 33.76 0.41 46160 0.01 0.01 strlen\n 1.08 34.17 0.41 encore\n 1.05 34.57 0.40 8541 0.05 0.05 XLogInsert\n 0.89 34.91 0.34 appendStringInfo\n\n60% of the call graph time is accounted for by these two areas:\n\nindex % time self 
children called name\n 7.43 3.32 10097/10097 yylex [14]\n[13] 41.0 7.43 3.32 10097 base_yylex [13]\n 1.36 0.61 7852141/7852141 addlit [28]\n 1.24 0.01 1552/1552 scanstr [30]\n 0.02 0.03 3108/3108 ScanKeywordLookup [99]\n 0.00 0.02 2335/2335 yy_get_next_buffer [144]\n 0.02 0.00 776/781 strtol [155]\n 0.00 0.01 777/3920 MemoryContextStrdup [108]\n 0.00 0.00 1/1 base_yy_create_buffer \n[560]\n 0.00 0.00 4675/17091 isupper [617]\n 0.00 0.00 1556/1556 yy_get_previous_state \n[671]\n 0.00 0.00 779/779 yywrap [706]\n 0.00 0.00 1/2337 \nbase_yy_load_buffer_state [654]\n-----------------------------------------------\n 1.08 4.51 779/779 pq_getstr [17]\n[18] 21.4 1.08 4.51 779 pq_getstring [18]\n 2.85 0.00 21953662/21953666 appendStringInfoChar \n[20]\n 1.66 0.00 21954441/21954441 pq_getbyte [29]\n-----------------------------------------------\n\nWhile we could probably do a little bit to speed up pg_getstring and its\nchildren, it's not clear that we can do anything about yylex, which is\nflex output code not handmade code, and is probably well-tuned already.\n\nBottom line: feeding huge strings through the lexer is slow.\n\n regards, tom lane\n\n\n\n\n> It would be interesting to see some stats for the large-BLOB scenarios\n> being debated here. You could get more support for the position that\n> something should be done if you had numbers to back it up.\n> \n> \t\t\tregards, tom lane\n> \n\n\n", "msg_date": "Thu, 11 Apr 2002 13:56:12 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "Neil Conway wrote:\n\n>On Thu, 11 Apr 2002 16:25:24 +1000\n>\"Ashley Cambrell\" <ash@freaky-namuh.com> wrote:\n>\n>>What are the chances that the BE/FE will be altered to take advantage of \n>>prepare / execute? Or is it something that will \"never happen\"?\n>>\n>\n>Is there a need for this? 
The current patch I'm working on just\n>does everything using SQL statements, which I don't think is\n>too bad (the typical client programmer won't actually need to\n>see them, their interface should wrap the PREPARE/EXECUTE stuff\n>for them).\n>\nI remember an email Hannu sent (I originally thought Tome sent it but I\nfound the email*) that said postgresql spends a lot of time parsing sql\n(compared to oracle), so if the BE/FE and libpq were extended to support\npg_prepare / pg_bind, then it might make repetitive queries quicker.\n\n\"if we could save half of parse/optimise time by saving query plans, then\nthe backend performance would go up from 1097 to 100000/(91.1-16.2)=1335\nupdates/sec.\"\n \nHannu's email doesn't seem to be in google groups, but it's titled\n\"Oracle vs PostgreSQL in real life\" (2002-03-01). I can attach it if\npeople can't find it.\n\n\n>\n>On the other hand, there are already a few reasons to make some\n>changes to the FE/BE protocol (NOTIFY messages, transaction state,\n>and now possibly PREPARE/EXECUTE -- anything else?). IMHO, each of\n>these isn't worth changing the protocol by itself, but perhaps if\n>we can get all 3 in one swell foop it might be a good idea...\n>\nPassing on a possible 1/3 speed improvement doesn't sound like a bad\nthing.. :-) \n\nHannu: You mentioned that you already had an experimental patch that did\nit? 
Was that the same sort of thing as Neil's patch (SPI), or did it\ninclude a libpq patch as well?\n\n>\n>Cheers,\n>\n>Neil\n>\nAshley Cambrell\n\n", "msg_date": "Fri, 12 Apr 2002 09:35:07 +1000", "msg_from": "Ashley Cambrell <ash@freaky-namuh.com>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "On Thu, 11 Apr 2002 11:38:33 -0700\n\"Barry Lind\" <barry@xythos.com> wrote:\n> Neil Conway wrote:\n> > On Thu, 11 Apr 2002 16:25:24 +1000\n> > \"Ashley Cambrell\" <ash@freaky-namuh.com> wrote:\n> > \n> >>What are the chances that the BE/FE will be altered to take advantage of \n> >>prepare / execute? Or is it something that will \"never happen\"?\n> > \n> > Is there a need for this? The current patch I'm working on just\n> > does everything using SQL statements, which I don't think is\n> > too bad (the typical client programmer won't actually need to\n> > see them, their interface should wrap the PREPARE/EXECUTE stuff\n> > for them).\n> \n> Yes there is a need.\n\nRight -- I would agree that such functionality would be nice to have.\nWhat I meant was \"is there a need for this in order to implement\nPREPARE/EXECUTE\"? IMHO, no -- the two features are largely\northogonal.\n\n> If you break up the query into roughly three stages of execution:\n> parse, plan, and execute, each of these can be the performance \n> bottleneck. The parse can be the performance bottleneck when passing \n> large values as data to the parser (eg. inserting one row containing a \n> 100K value will result in a 100K+ sized statement that needs to be \n> parsed, parsing will take a long time, but the planning and execution \n> should be relatively short).\n\nIf you're inserting 100KB of data, I'd expect the time to insert\nthat into tables, update relevent indexes, etc. to be larger than\nthe time to parse the query (i.e. execution > parsing). 
But I\nmay well be wrong, I haven't done any benchmarks.\n \n> Executing a sql statement today is the following:\n> insert into table values (<stuff>);\n> which does one parse, one plan, one execute\n\nYou're assuming that the cost of the \"parse\" step for the EXECUTE\nstatement is the same as \"parse\" for the original query, which\nwill often not be the case (parsing the EXECUTE statement will\nbe cheaper).\n\n> so this is a win if the cost of the planing stage is significant \n> compared to the costs of the parse and execute stages. If the cost of \n> the plan is not significant there is little if any benefit in doing this.\n> \n> I realize that there are situations where this functionality will be a \n> big win. But I question how the typical user of postgres will know when \n> they should use this functionality and when they shouldn't.\n\nI would suggest using it any time you're executing the same query\nplan a large number of times. In my experience, this is very common.\nThere are already hooks for this in many client interfaces: e.g.\nPrepareableStatement in JDBC and $dbh->prepare() in Perl DBI.\n\n> What I think would be a clear win would be if we could get the above \n> senario of multiple inserts down to one parse, one plan, n executes, and \n> n binds\n\nThis behavior would be better, but I think the current solution is\nstill a \"clear win\", and good enough for now. 
I'd prefer that we\nworry about implementing PREPARE/EXECUTE for now, and deal with\nquery binding/BLOB parser-shortcuts later -- perhaps with an FE/BE\nprotocol in 7.4 as Tom suggested.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Thu, 11 Apr 2002 20:51:01 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "On 11 Apr 2002, Hannu Krosing wrote:\n\n> IIRC someone started work on modularising the network-related parts with\n> a goal of supporting DRDA (DB2 protocol) and others in future.\n\nThat was me, although I've been bogged down lately, and haven't been able \nto get back to it. DRDA, btw, is not just a DB2 protocol but an opengroup \nspec that hopefully will someday be *the* standard on the wire database \nprotocol. DRDA handles prepare/execute and is completely binary in \nrepresentation, among other advantages.\n\nBrian\n\n", "msg_date": "Thu, 11 Apr 2002 21:04:53 -0400 (EDT)", "msg_from": "Brian Bruns <camber@ais.org>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "\nNeil Conway wrote:\n> I would suggest using it any time you're executing the same query\n> plan a large number of times. In my experience, this is very common.\n> There are already hooks for this in many client interfaces: e.g.\n> PrepareableStatement in JDBC and $dbh->prepare() in Perl DBI.\n\nI'm not sure that JDBC would use this feature directly. When a \nPreparableStatement is created in JDBC there is nothing that indicates \nhow many times this statement is going to be used. Many (most IMHO) \nwill be used only once. As I stated previously, this feature is only \nuseful if you are going to end up using the PreparedStatement multiple \ntimes. 
If it only is used once, it will actually perform worse than \nwithout the feature (since you need to issue two sql statements to the \nbackend to accomplish what you were doing in one before).\n\nThus if someone wanted to use this functionality from jdbc they would \nneed to do it manually, i.e. issue the prepare and execute statements \nmanually instead of the jdbc driver doing it automatically for them.\n\nthanks,\n--Barry\n\nPS. I actually do believe that the proposed functionality is good and \nshould be added (even though it may sound from the tone of my emails in \nthis thread that that isn't the case :-) I just want to make sure that \neveryone understands that this doesn't solve the whole problem. And \nthat more work needs to be done either in 7.3 or some future release. \nMy fear is that everyone will view this work as being good enough such \nthat the rest of the issues won't be addressed anytime soon. I only \nwish I was able to work on some of this myself, but I don't have the \nskills to hack on the backend too much. 
(However if someone really \nwanted a new feature in the jdbc driver in exchange, I'd be more than \nhappy to help)\n\n", "msg_date": "Thu, 11 Apr 2002 19:24:13 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "Ashley Cambrell <ash@freaky-namuh.com> writes:\n> I remember an email Hannu sent (I originally thought Tome sent it but I\n> found the email*) that said postgresql spends a lot of time parsing sql\n> (compared to oracle), so if the BE/FE and libpq were extended to support\n> pg_prepare / pg_bind, then it might make repetitive queries quicker.\n\nI'm not sure I believe Hannu's numbers, but in any case they're fairly\nirrelevant to the argument about whether a special protocol is useful.\nHe wasn't testing textually-long queries, but rather the planning\noverhead, which is more or less independent of the length of any literal\nconstants involved (especially if they're not part of the WHERE clause).\nSaving query plans via PREPARE seems quite sufficient, and appropriate,\nto tackle the planner-overhead issue.\n\nWe do have some numbers suggesting that the per-character loop in the\nlexer is slow enough to be a problem with very long literals. That is\nthe overhead that might be avoided with a special protocol.\n\nHowever, it should be noted that (AFAIK) no one has spent any effort at\nall on trying to make the lexer go faster. 
There is quite a bit of\nmaterial in the flex documentation about performance considerations ---\nsomeone should take a look at it and see if we can get any wins by being\nsmarter, without having to introduce protocol changes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Apr 2002 23:25:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule " }, { "msg_contents": "Tom Lane wrote:\n> \n> I'm not sure I believe Hannu's numbers, but in any case they're fairly\n> irrelevant to the argument about whether a special protocol is useful.\n> He wasn't testing textually-long queries, but rather the planning\n> overhead, which is more or less independent of the length of any literal\n> constants involved (especially if they're not part of the WHERE clause).\n> Saving query plans via PREPARE seems quite sufficient, and appropriate,\n> to tackle the planner-overhead issue.\n\nJust a confirmation.\nSomeone is working on PREPARE/EXECUTE ?\nWhat about Karel's work ?\n\nregards,\nHiroshi Inoue\n", "msg_date": "Fri, 12 Apr 2002 12:58:01 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "On Fri, 12 Apr 2002 12:58:01 +0900\n\"Hiroshi Inoue\" <Inoue@tpf.co.jp> wrote:\n> Tom Lane wrote:\n> > \n> > I'm not sure I believe Hannu's numbers, but in any case they're fairly\n> > irrelevant to the argument about whether a special protocol is useful.\n> > He wasn't testing textually-long queries, but rather the planning\n> > overhead, which is more or less independent of the length of any literal\n> > constants involved (especially if they're not part of the WHERE clause).\n> > Saving query plans via PREPARE seems quite sufficient, and appropriate,\n> > to tackle the planner-overhead issue.\n> \n> Just a confirmation.\n> Someone is working on PREPARE/EXECUTE ?\n> What about Karel's work ?\n\nI am. 
My work is based on Karel's stuff -- at the moment I'm still\nbasically working on getting Karel's patch to play nicely with\ncurrent sources; once that's done I'll be addressing whatever\nissues are stopping the code from getting into CVS.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Fri, 12 Apr 2002 00:41:34 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "On Fri, Apr 12, 2002 at 12:41:34AM -0400, Neil Conway wrote:\n> On Fri, 12 Apr 2002 12:58:01 +0900\n> \"Hiroshi Inoue\" <Inoue@tpf.co.jp> wrote:\n> > \n> > Just a confirmation.\n> > Someone is working on PREPARE/EXECUTE ?\n> > What about Karel's work ?\n\n Right question :-)\n \n> I am. My work is based on Karel's stuff -- at the moment I'm still\n> basically working on getting Karel's patch to play nicely with\n> current sources; once that's done I'll be addressing whatever\n> issues are stopping the code from getting into CVS.\n\n My patch (qcache) for PostgreSQL 7.0 is available at \n ftp://ftp2.zf.jcu.cz/users/zakkr/pg/.\n \n I very much look forward to Neil's work on this. \n\n Notes:\n\n * It's an experimental patch, but usable. All the features mentioned \n below work.\n\n * PREPARE/EXECUTE is not only about SQL statements; I think a good\n idea is to create something common and robust for query-plan caching,\n because there is, for example, SPI too. The RI triggers are based \n on SPI_saveplan(). \n \n * My patch supports an EXECUTE INTO feature:\n\n PREPARE foo AS SELECT * FROM pg_class WHERE relname ~~ $1 USING text;\n\n EXECUTE foo USING 'pg%'; <-- standard select\n\n EXECUTE foo INTO TEMP newtab USING 'pg%'; <-- select into\n \n \n * The patch allows storing query plans in shared memory, making it\n possible to EXECUTE them from several backends (over the same DB);\n plans are persistent across connections. 
For this feature I created a special \n memory context subsystem (like the current aset.c, but it works with \n IPC shared memory).\n \n This is maybe too complex a solution, and (maybe) it is sufficient to\n cache queries in one backend only. I know there is some disbelief about\n this shared memory solution (Tom?). \n \n \n Karel\n \n \n My experimental patch README (excuse my English):\n\n Implementation\n ~~~~~~~~~~~~~~\n\n The qCache allows saving the queryTree and queryPlan. There are \n two spaces available for data caching. \n \n LOCAL - data are cached in the backend's non-shared memory and aren't\n available in other backends. \n \n SHARE - data are cached in shared memory and are \n visible in all backends.\n \n Because the size of the shared memory pool is limited and is set during\n postmaster start up, the qCache must remove old plans if the pool is \n full. You can mark each entry as \"REMOVEABLE\" or \"NOTREMOVEABLE\". \n \n A removeable entry is removed if the pool is full.\n \n A not-removeable entry must be removed via qCache_Remove() or \n the other routines. The qCache does not remove this entry itself.\n \n All records in the qCache are cached (in the hash table) under some key.\n The qCache knows two kinds of key --- \"KEY_STRING\" and \"KEY_BINARY\". \n \n The qCache API does not allow direct access to shared memory; all cached\n plans the API returns are copied to CurrentMemoryContext. All qCache_ \n routines lock shmem themselves (the exception is \n qCache_RemoveOldest_ShareRemoveAble()).\n\n - a spin lock is used for locking.\n\n Memory management\n ~~~~~~~~~~~~~~~~~\n For its shared pool the qCache uses a memory context independent of the\n standard aset/mcxt, but with a compatible API --- it allows use of the\n standard palloc() (this is very useful for basic plan-tree operations, \n for example copyObject()). The qCache memory management is very similar\n to the current aset.c code. 
It uses chunked blocks too, but the block is smaller - 1024b.\n\n The number of blocks can be set in the postmaster 'argv' via the option\n '-Z'.\n\n A separate MemoryContext is used to store each plan; this \n is a good idea (Hiroshi's?), because creating a new context is simple and \n inexpensive and allows easy destruction (freeing) of a cached plan. This \n method is used in my SPI overhaul instead of TopMemoryContext feeding.\n\n Postmaster\n ~~~~~~~~~~\n The query cache memory is initialized during postmaster startup. The size\n of the query cache pool is set via the '-Z <number-of-blocks>' switch ---\n the default is 100 blocks, where 1 block = 1024b; this is sufficient for\n 20-30 cached plans. One query needs somewhere around 3-10 blocks; for\n example, a query like\n\n PREPARE sel AS SELECT * FROM pg_class;\n\n needs 10Kb, because the table pg_class has very many columns. \n \n Note: for development I added an SQL function, \"SELECT qcache_state();\",\n which shows the usage of the qCache.\n\n SPI\n ~~~\n I rewrote the SPI save-plan method a little and removed the\n TopMemoryContext \"feeding\".\n\n Standard SPI:\n\n SPI_saveplan() - save each plan to a separate standard memory context.\n\n SPI_freeplan() - free a plan.\n\n By key SPI:\n\n This is an SPI interface for the query cache; it allows saving plans to\n the SHARED or LOCAL cache 'by' an arbitrary key (string or binary).\n Routines:\n\n SPI_saveplan_bykey() - save a plan to the query cache\n\n SPI_freeplan_bykey() - remove a plan from the query cache\n\n SPI_fetchplan_bykey() - fetch a plan saved in the query cache\n\n SPI_execp_bykey() - execute (via SPI) a plan saved in the query\n cache \n\n - now, users can write functions that save plans to shared memory; \n the plans are visible in all backends and are persistent across \n connections. 
\n\n Example:\n ~~~~~~~\n /* ----------\n * Save/exec query from shared cache via string key\n * ----------\n */\n int keySize = 0; \n int flag = SPI_BYKEY_SHARE | SPI_BYKEY_STRING;\n char *key = \"my unique key\";\n \n res = SPI_execp_bykey(values, nulls, tcount, key, flag, keySize);\n \n if (res == SPI_ERROR_PLANNOTFOUND) \n {\n /* --- no plan in cache - must create it --- */\n \n void *plan;\n\n plan = SPI_prepare(querystr, valnum, valtypes);\n SPI_saveplan_bykey(plan, key, keySize, flag);\n \n res = SPI_execp(plan, values, nulls, tcount);\n }\n \n elog(NOTICE, \"Processed: %d\", SPI_processed);\n\n\n PREPARE/EXECUTE\n ~~~~~~~~~~~~~~~\n * Syntax:\n \n PREPARE <name> AS <query> \n [ USING type, ... typeN ] \n [ NOSHARE | SHARE | GLOBAL ]\n \n EXECUTE <name> \n [ INTO [ TEMPORARY | TEMP ] [ TABLE ] new_table ]\n [ USING val, ... valN ]\n [ NOSHARE | SHARE | GLOBAL ]\n\n DEALLOCATE PREPARE \n [ <name> [ NOSHARE | SHARE | GLOBAL ]]\n [ ALL | ALL INTERNAL ]\n\n\n I know that this is a little outside SQL92... (use CREATE/DROP PLAN\n instead?) --- what do the SQL standard gurus say?\n\n * Where:\n \n NOSHARE --- cached in the local backend query cache - not accessible\n from other backends and not persistent across\n connections.\n\n SHARE --- cached in the shared query cache and accessible from\n all backends which work over the same database.\n\n GLOBAL --- cached in the shared query cache and accessible from\n all backends and all databases. \n\n - the default is 'SHARE'\n \n Deallocate:\n \n ALL --- deallocate all user plans\n\n ALL INTERNAL --- deallocate all internal plans, such as plans\n cached via SPI. This is needed if a user\n alters/drops a table, etc.\n\n * Parameters:\n \n The \"USING\" part of the prepare statement sets the data types of the\n parameters in the query. 
For example:\n\n PREPARE sel AS SELECT * FROM pg_class WHERE relname ~~ $1 USING text;\n\n EXECUTE sel USING 'pg%';\n \n\n * Limitations:\n \n - prepare/execute allow use of full SELECT/INSERT/DELETE/\n UPDATE statements. \n - union, subselects, limit, offset, and select-into are possible\n\n\n Performance:\n ~~~~~~~~~~~\n * the SPI\n\n - For my tests I changed the RI triggers a little to use the SPI by_key\n API and save plans to the shared qCache instead of the internal RI\n hash table.\n\n The RI triggers use queries that are very simple (for the parser), so \n the qCache benefit is not visible. It helps more if backends start up \n very often and RI always checks the same tables. In that situation \n speed goes up 10-12%. \n (This snapshot does not include this RI change.)\n\n But it all depends on how complicated the query in the trigger is \n for the parser.\n\n * PREPARE/EXECUTE\n \n - For the tests I used a query that touches no table (the executor is \n idle), but is difficult for the parser. An example:\n\n SELECT 'a text ' || (10*10+(100^2))::text || ' next text ' || cast \n (date_part('year', timestamp 'now') AS text );\n \n - (10000 * this query):\n\n standard select: 54 sec\n via prepare/execute: 4 sec (93% better)\n\n IMHO that is not bad.\n \n - For a standard query like:\n\n SELECT u.usename, r.relname FROM pg_class r, pg_user u WHERE \n r.relowner = u.usesysid;\n\n it is 10-20% faster with PREPARE/EXECUTE.\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Fri, 12 Apr 2002 09:51:16 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "On Thu, 2002-04-11 at 22:48, Tom Lane wrote:\n> Barry Lind <barry@xythos.com> writes:\n> > ...\n> > Since we \n> > don't currently provide any information to the user on the relative cost \n> > of the parse, plan and execute phases, the end user is going to be \n> > guessing IMHO.\n> \n> You can in fact get 
that information fairly easily; set \n> show_parser_stats, show_planner_stats, and show_executor_stats to 1\n> and then look in the postmaster log for the results.\n\nOne thing that seems to be missing is backend ids for query stats - if I\nset \n\nlog_timestamp = true\nlog_pid = true\n\nthen I get pid for query but _not_ for stats\n\nIf I have many long-running queries then it is impossible to know which\nstats are for which query ;(\n\n----------------\nHannu\n\n\n", "msg_date": "12 Apr 2002 11:49:38 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "Karel Zak <zakkr@zf.jcu.cz> writes:\n> * The patch allows store query-planns to shared memory and is\n> possible EXECUTE it at more backends (over same DB) and planns\n> are persistent across connetions. For this feature I create special \n> memory context subsystem (like current aset.c, but it works with \n> IPC shared memory).\n> This is maybe too complex solution and (maybe) sufficient is cache \n> query in one backend only. I know unbelief about this shared\n> memory solution (Tom?). \n\nYes, that is the part that was my sticking point last time around.\n(1) Because shared memory cannot be extended on-the-fly, I think it is\na very bad idea to put data structures in there without some well\nthought out way of predicting/limiting their size. (2) How the heck do\nyou get rid of obsoleted cached plans, if the things stick around in\nshared memory even after you start a new backend? 
(3) A shared cache\nrequires locking; contention among multiple backends to access that\nshared resource could negate whatever performance benefit you might hope\nto realize from it.\n\nA per-backend cache kept in local memory avoids all of these problems,\nand I have seen no numbers to make me think that a shared plan cache\nwould achieve significantly more performance benefit than a local one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Apr 2002 10:14:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule " }, { "msg_contents": "Tom Lane wrote:\n> Karel Zak <zakkr@zf.jcu.cz> writes:\n> > * The patch allows store query-planns to shared memory and is\n> > possible EXECUTE it at more backends (over same DB) and planns\n> > are persistent across connetions. For this feature I create special \n> > memory context subsystem (like current aset.c, but it works with \n> > IPC shared memory).\n> > This is maybe too complex solution and (maybe) sufficient is cache \n> > query in one backend only. I know unbelief about this shared\n> > memory solution (Tom?). \n> \n> Yes, that is the part that was my sticking point last time around.\n> (1) Because shared memory cannot be extended on-the-fly, I think it is\n> a very bad idea to put data structures in there without some well\n> thought out way of predicting/limiting their size. (2) How the heck do\n> you get rid of obsoleted cached plans, if the things stick around in\n> shared memory even after you start a new backend? 
(3) A shared cache\n> requires locking; contention among multiple backends to access that\n> shared resource could negate whatever performance benefit you might hope\n> to realize from it.\n> \n> A per-backend cache kept in local memory avoids all of these problems,\n> and I have seen no numbers to make me think that a shared plan cache\n> would achieve significantly more performance benefit than a local one.\n\nCertainly a shared cache would be good for apps that connect to issue a\nsingle query frequently. In such cases, there would be no local cache\nto use.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Apr 2002 12:21:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "\n\nTom Lane wrote:\n> Yes, that is the part that was my sticking point last time around.\n> (1) Because shared memory cannot be extended on-the-fly, I think it is\n> a very bad idea to put data structures in there without some well\n> thought out way of predicting/limiting their size. (2) How the heck do\n> you get rid of obsoleted cached plans, if the things stick around in\n> shared memory even after you start a new backend? (3) A shared cache\n> requires locking; contention among multiple backends to access that\n> shared resource could negate whatever performance benefit you might hope\n> to realize from it.\n> \n> A per-backend cache kept in local memory avoids all of these problems,\n> and I have seen no numbers to make me think that a shared plan cache\n> would achieve significantly more performance benefit than a local one.\n> \n\nOracle's implementation is a shared cache for all plans. This was \nintroduced in Oracle 6 or 7 (I don't remember which anymore). 
The net \neffect was that in general there was a significant performance \nimprovement with the shared cache. However, poorly written apps can now \nbring the Oracle database to its knees because of the locking issues \nassociated with the shared cache. For example, if the most frequently \nrun sql statements are coded poorly (i.e. they don't use bind variables, \ne.g. 'select bar from foo where foobar = $1' vs. 'select bar from foo \nwhere foobar = ' || somevalue (where somevalue is likely to be \ndifferent on every call)) the shared cache doesn't help and its overhead \nbecomes significant.\n\nthanks,\n--Barry\n\n\n", "msg_date": "Fri, 12 Apr 2002 09:42:36 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "Barry Lind wrote:\n> Oracle's implementation is a shared cache for all plans. This was \n> introduced in Oracle 6 or 7 (I don't remember which anymore). The net \n> effect was that in general there was a significant performance \n> improvement with the shared cache. However, poorly written apps can now \n> bring the Oracle database to its knees because of the locking issues \n> associated with the shared cache. For example, if the most frequently \n> run sql statements are coded poorly (i.e. they don't use bind variables, \n> e.g. 'select bar from foo where foobar = $1' vs. 'select bar from foo \n> where foobar = ' || somevalue (where somevalue is likely to be \n> different on every call)) the shared cache doesn't help and its overhead \n> becomes significant.\n\nThis is very interesting. We have always been concerned that shared\ncache invalidation could cause more of a performance problem than the\nshared cache gives benefit, and it sounds like you are saying exactly\nthat.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Apr 2002 12:49:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Certainly a shared cache would be good for apps that connect to issue a\n> single query frequently. In such cases, there would be no local cache\n> to use.\n\nWe have enough other problems with the single-query-per-connection\nscenario that I see no reason to believe that a shared plan cache will\nhelp materially. The correct answer for those folks will *always* be\nto find a way to reuse the connection.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Apr 2002 12:51:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule " }, { "msg_contents": "On Fri, 12 Apr 2002 12:21:04 -0400 (EDT)\n\"Bruce Momjian\" <pgman@candle.pha.pa.us> wrote:\n> Tom Lane wrote:\n> > A per-backend cache kept in local memory avoids all of these problems,\n> > and I have seen no numbers to make me think that a shared plan cache\n> > would achieve significantly more performance benefit than a local one.\n> \n> Certainly a shared cache would be good for apps that connect to issue a\n> single query frequently. In such cases, there would be no local cache\n> to use.\n\nOne problem with this kind of scenario is: what to do if the plan no\nlonger exists for some reason? (e.g. the code that was supposed to be\nPREPARE-ing your statements failed to execute properly, or the cached\nplan has been evicted from shared memory, or the database was restarted,\netc.) -- EXECUTE in and of itself won't have enough information to do\nanything useful. 
We could perhaps provide a means for an application\nto test for the existence of a cached plan (in which case the\napplication developer will need to add logic to their application\nto re-prepare the query if necessary, which could get complicated).\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Fri, 12 Apr 2002 16:24:48 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "Neil Conway wrote:\n> On Fri, 12 Apr 2002 12:21:04 -0400 (EDT)\n> \"Bruce Momjian\" <pgman@candle.pha.pa.us> wrote:\n> > Tom Lane wrote:\n> > > A per-backend cache kept in local memory avoids all of these problems,\n> > > and I have seen no numbers to make me think that a shared plan cache\n> > > would achieve significantly more performance benefit than a local one.\n> > \n> > Certainly a shared cache would be good for apps that connect to issue a\n> > single query frequently. In such cases, there would be no local cache\n> > to use.\n> \n> One problem with this kind of scenario is: what to do if the plan no\n> longer exists for some reason? (e.g. the code that was supposed to be\n> PREPARE-ing your statements failed to execute properly, or the cached\n> plan has been evicted from shared memory, or the database was restarted,\n> etc.) -- EXECUTE in and of itself won't have enough information to do\n> anything useful. We could perhaps provide a means for an application\n> to test for the existence of a cached plan (in which case the\n> application developer will need to add logic to their application\n> to re-prepare the query if necessary, which could get complicated).\n\nOh, are you thinking that one backend would do the PREPARE and another\none the EXECUTE? I can't see that working at all. 
I thought there\nwould be some way to quickly test if the submitted query was in the cache,\nbut maybe that is too much of a performance penalty to be worth it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Apr 2002 17:25:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Oh, are you thinking that one backend would do the PREPARE and another\n> one the EXECUTE? I can't see that working at all.\n\nUh, why exactly were you advocating a shared cache then? Wouldn't that\nbe exactly the *point* of a shared cache?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Apr 2002 17:36:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Oh, are you thinking that one backend would do the PREPARE and another\n> > one the EXECUTE? I can't see that working at all.\n> \n> Uh, why exactly were you advocating a shared cache then? Wouldn't that\n> be exactly the *point* of a shared cache?\n\nI thought it would somehow compare the SQL query string to the cached\nplans and if it matched, it would use that plan rather than make a new\none. Any DDL statement would flush the cache.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Apr 2002 17:38:15 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "Tom Lane writes:\n\n> We do have some numbers suggesting that the per-character loop in the\n> lexer is slow enough to be a problem with very long literals. That is\n> the overhead that might be avoided with a special protocol.\n\nWhich loop is that? Doesn't the scanner use buffered input anyway?\n\n> However, it should be noted that (AFAIK) no one has spent any effort at\n> all on trying to make the lexer go faster. There is quite a bit of\n> material in the flex documentation about performance considerations ---\n> someone should take a look at it and see if we can get any wins by being\n> smarter, without having to introduce protocol changes.\n\nMy profiles show that the work spent in the scanner is really minuscule\ncompared to everything else.\n\nThe data appears to support a suspicion that I've had many moons ago that\nthe binary search for the key words takes quite a bit of time:\n\n 0.22 0.06 66748/66748 yylex [125]\n[129] 0.4 0.22 0.06 66748 base_yylex [129]\n 0.01 0.02 9191/9191 yy_get_next_buffer [495]\n 0.02 0.00 32808/34053 ScanKeywordLookup [579]\n 0.00 0.01 16130/77100 MemoryContextStrdup [370]\n 0.00 0.00 4000/4000 scanstr [1057]\n 0.00 0.00 4637/4637 yy_get_previous_state [2158]\n 0.00 0.00 4554/4554 base_yyrestart [2162]\n 0.00 0.00 4554/4554 yywrap [2163]\n 0.00 0.00 1/1 base_yy_create_buffer [2852]\n 0.00 0.00 1/13695 base_yy_load_buffer_state [2107]\n\nI while ago I've experimented with hash functions for the key word lookup\nand got a speedup of factor 2.5, but again, this is really minor in the\noverall scheme of things.\n\n(The profile data is from a run of all the regression test files in order\nin one session.)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sat, 13 Apr 2002 02:17:06 -0400 (EDT)", "msg_from": 
"Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Scanner performance (was Re: 7.3 schedule)" }, { "msg_contents": "> > thought out way of predicting/limiting their size. (2) How the heck do\n> > you get rid of obsoleted cached plans, if the things stick around in\n> > shared memory even after you start a new backend? (3) A shared cache\n> > requires locking; contention among multiple backends to access that\n> > shared resource could negate whatever performance benefit you might hope\n> > to realize from it.\n\nI don't understand all these locking problems? Surely the only lock a\ntransaction would need on a stored query is one that prevents the cache\ninvalidation mechanism from deleting it out from under it? Surely this\nmeans that there would be tonnes of readers on the cache - none of them\nblocking each other, and the odd invalidation event that needs a complete\nlock?\n\nAlso, as for invalidation, there probably could be just two reasons to\ninvalidate a query in the cache. (1) The cache is running out of space and\nyou use LRU or something to remove old queries, or (2) someone runs ANALYZE,\nin which case all cached queries should just be flushed? If they specify an\nactual table to analyze, then just drop all queries on the table.\n\nCould this cache mechanism be used to make views fast as well? 
You could\ncache the queries that back views on first use, and then they can follow the\nabove rules for flushing...\n\nChris\n\n\n", "msg_date": "Sat, 13 Apr 2002 14:21:50 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> My profiles show that the work spent in the scanner is really minuscule\n> compared to everything else.\n\nUnder ordinary circumstances I think that's true ...\n\n> (The profile data is from a run of all the regression test files in order\n> in one session.)\n\nThe regression tests contain no very-long literals. The results I was\nreferring to concerned cases with string (BLOB) literals in the\nhundreds-of-K range; it seems that the per-character loop in the flex\nlexer starts to look like a bottleneck when you have tokens that much\nlarger than the rest of the query.\n\nSolutions seem to be either (a) make that loop quicker, or (b) find a\nway to avoid passing BLOBs through the lexer. I was merely suggesting\nthat (a) should be investigated before we invest the work implied\nby (b).\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Apr 2002 02:21:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Scanner performance (was Re: 7.3 schedule) " }, { "msg_contents": "On Fri, 2002-04-12 at 03:04, Brian Bruns wrote:\n> On 11 Apr 2002, Hannu Krosing wrote:\n> \n> > IIRC someone started work on modularising the network-related parts with\n> > a goal of supporting DRDA (DB2 protocol) and others in future.\n> \n> That was me, although I've been bogged down lately, and haven't been able \n> to get back to it.\n\nHas any of your modularisation work got into CVS yet ?\n\n> DRDA, btw, is not just a DB2 protocol but an opengroup \n> spec that hopefully will someday be *the* standard on the wire database \n> protocol. 
DRDA handles prepare/execute and is completely binary in \n> representation, among other advantages.\n\nWhat about extensibility - is there some predefined way of adding new\ntypes ?\n\nAlso, does it handle NOTIFY ?\n\n----------------\nHannu\n\n\n", "msg_date": "13 Apr 2002 15:32:38 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> thought out way of predicting/limiting their size. (2) How the heck do\n> you get rid of obsoleted cached plans, if the things stick around in\n> shared memory even after you start a new backend? (3) A shared cache\n> requires locking; contention among multiple backends to access that\n> shared resource could negate whatever performance benefit you might hope\n> to realize from it.\n\n> I don't understand all these locking problems?\n\nSearching the cache and inserting/deleting entries in the cache probably\nhave to be mutually exclusive; concurrent insertions probably won't work\neither (at least not without a remarkably intelligent data structure).\nUnless the cache hit rate is remarkably high, there are going to be lots\nof insertions --- and, at steady state, an equal rate of deletions ---\nleading to lots of contention.\n\nThis could possibly be avoided if the cache is not used for all query\nplans but only for explicitly PREPAREd plans, so that only explicit\nEXECUTEs would need to search it. 
But that approach also makes a\nsizable dent in the usefulness of the cache to begin with.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Apr 2002 11:46:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule " }, { "msg_contents": "On Sat, 13 Apr 2002 14:21:50 +0800\n\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> wrote:\n> Could this cache mechanism be used to make views fast as well?\n\nThe current PREPARE/EXECUTE code will speed up queries that use\nrules of any kind, including views: the query plan is cached after\nit has been rewritten as necessary, so (AFAIK) this should mean\nthat rules will be evaluated once when the query is PREPAREd, and\nthen cached for subsequent EXECUTE commands.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Sat, 13 Apr 2002 14:35:39 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "On Fri, Apr 12, 2002 at 12:51:26PM -0400, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Certainly a shared cache would be good for apps that connect to issue a\n> > single query frequently. In such cases, there would be no local cache\n> > to use.\n> \n> We have enough other problems with the single-query-per-connection\n> scenario that I see no reason to believe that a shared plan cache will\n> help materially. The correct answer for those folks will *always* be\n> to find a way to reuse the connection.\n\n My query cache was written for 7.0. If some future release uses a\n pre-forked backend that stays alive after a client disconnects and \n waits for a new client, the shared cache is (maybe :-) not needed.\n The current backend fork model is the killer of all possible \n caching.\n\n We have more caches. 
I hope persistent backend help will help to all \n and I'm sure that speed will grow up with persistent backend and \n persistent caches without shared memory usage. There I can agree with\n Tom :-)\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Sun, 14 Apr 2002 21:21:44 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "On 13 Apr 2002, Hannu Krosing wrote:\n\n> On Fri, 2002-04-12 at 03:04, Brian Bruns wrote:\n> > On 11 Apr 2002, Hannu Krosing wrote:\n> > \n> > > IIRC someone started work on modularising the network-related parts with\n> > > a goal of supporting DRDA (DB2 protocol) and others in future.\n> > \n> > That was me, although I've been bogged down lately, and haven't been able \n> > to get back to it.\n> \n> Has any of your modularisation work got into CVS yet ?\n\nNo, Bruce didn't like the way I did certain things, and had some qualms \nabout the value of supporting multiple wire protocols IIRC. Plus the \npatch was not really ready for primetime yet. \n\nI'm hoping to get back to it soon and sync it with the latest CVS, and \nclean up the odds and ends.\n\n> > DRDA, btw, is not just a DB2 protocol but an opengroup \n> > spec that hopefully will someday be *the* standard on the wire database \n> > protocol. DRDA handles prepare/execute and is completely binary in \n> > representation, among other advantages.\n> \n> What about extensibility - is there some predefined way of adding new\n> types ?\n\nNot really, there is some ongoing standards activity adding some new \nfeatures. The list of supported types is pretty impressive, anything in \nparticular you are looking for?\n\n> Also, does it handle NOTIFY ?\n\nI don't know the answer to this. 
The spec is pretty huge, so it may, but \nI haven't seen it.\n\nEven if it is supported as a secondary protocol, I believe there is a lot \nof value in having a single database protocol standard. (why else would I \nbe doing it!). I'm also looking into what it will take to do the same for \nMySQL and Firebird. Hopefully they will be receptive to the idea as well.\n\n> ----------------\n> Hannu\n\nCheers,\n\nBrian\n\n", "msg_date": "Sun, 14 Apr 2002 20:38:48 -0400 (EDT)", "msg_from": "Brian Bruns <camber@ais.org>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "On Thu, 11 Apr 2002, Barry Lind wrote:\n\n> I'm not sure that JDBC would use this feature directly. When a\n> PreparedStatement is created in JDBC there is nothing that indicates\n> how many times this statement is going to be used. Many (most IMHO)\n> will be used only once.\n\nWell, the particular PreparedStatement instance may be used only\nonce, yes. But it's quite likely that other, identical PreparedStatement\nobjects would be used time and time again, so it's still good if\nyou don't need to do much work on the second and subsequent\npreparations of that statement.\n\n> If it only is used once, it will actually perform worse than\n> without the feature (since you need to issue two sql statements to the\n> backend to accomplish what you were doing in one before).\n\nI'm not sure that it would be much worse unless you need to wait\nfor an acknowledgement from the back-end for the first statement.\nIf you had a back-end command along the lines of \"prepare this\nstatement and execute it with these parameters,\" it would have\npretty much the same performance as giving the statement directly\nwith the parameters already substituted in, right?\n\n> Thus if someone wanted to use this functionality from jdbc they would\n> need to do it manually, i.e. 
issue the prepare and execute statements\n> manually instead of the jdbc driver doing it automatically for them.\n\nI'd say that this is awfully frequent, anyway. I use PreparedStatements\nfor pretty much any non-constant input, because it's just not safe\nor portable to try to escape parameters yourself.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Mon, 15 Apr 2002 13:41:03 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "\n\nCurt Sampson wrote:\n> On Thu, 11 Apr 2002, Barry Lind wrote:\n> \n> \n>>I'm not sure that JDBC would use this feature directly. When a\n>>PreparedStatement is created in JDBC there is nothing that indicates\n>>how many times this statement is going to be used. Many (most IMHO)\n>>will be used only once.\n> \n> \n> Well, the particular PreparedStatement instance may be used only\n> once, yes. But it's quite likely that other, identical PreparedStatement\n> objects would be used time and time again, so it's still good if\n> you don't need to do much work on the second and subsequent\n> preparations of that statement.\n> \nBut since the syntax for prepare is: PREPARE <name> AS <statement> you \ncan't easily reuse sql prepared by other PreparedStatement objects since \nyou don't know if the sql you are about to execute has or has not yet \nbeen prepared or what <name> was used in that prepare. 
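One way a driver could work around not knowing what <name> was used is to derive the name deterministically from the SQL text itself, so that identical SQL always maps to the same server-side statement name. This is purely a hypothetical sketch of that idea; the hash and naming scheme are invented here, not anything the JDBC driver of the time actually did:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* djb2 string hash: deterministic, cheap, and good enough for
 * mapping SQL text to a statement name in this illustration. */
static unsigned long sql_hash(const char *sql)
{
    unsigned long h = 5381;

    while (*sql)
        h = h * 33 + (unsigned char) *sql++;
    return h;
}

/* Hypothetical helper: derive a PREPARE <name> from the SQL text,
 * e.g. "ps_1a2b3c4d", truncated to 32 hash bits for portability. */
void derive_stmt_name(const char *sql, char *name, size_t len)
{
    snprintf(name, len, "ps_%08lx", sql_hash(sql) & 0xffffffffUL);
}
```

Even with deterministic names, the driver would still need per-connection bookkeeping to know whether PREPARE was already issued for a given name, which is part of why doing this behind the scenes gets messy.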
(This only is true if the driver is \ntrying to automatically use PREPARE/EXECUTE, which was the scenario I was \ntalking about).\n\n> \n>>If it only is used once, it will actually perform worse than\n>>without the feature (since you need to issue two sql statements to the\n>>backend to accomplish what you were doing in one before).\n> \n> \n> I'm not sure that it would be much worse unless you need to wait\n> for an acknowledgement from the back-end for the first statement.\n> If you had a back-end command along the lines of \"prepare this\n> statement and execute it with these parameters,\" it would have\n> pretty much the same performance as giving the statement directly\n> with the parameters already substituted in, right?\n> \nI didn't say it would be much worse, but it won't be faster than not \nusing PREPARE.\n\n\n> \n>>Thus if someone wanted to use this functionality from jdbc they would\n>>need to do it manually, i.e. issue the prepare and execute statements\n>>manually instead of the jdbc driver doing it automatically for them.\n> \n> \n> I'd say that this is awfully frequent, anyway. I use PreparedStatements\n> for pretty much any non-constant input, because it's just not safe\n> or portable to try to escape parameters yourself.\n> \nI agree this is useful, and you can write user code to take advantage of \nthe functionality. I am just pointing out that I don't think the driver \ncan behind the scenes use this capability automatically.\n\n--Barry\n\n", "msg_date": "Sun, 14 Apr 2002 21:56:44 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "On Sun, 14 Apr 2002, Barry Lind wrote:\n\n> But since the syntax for prepare is: PREPARE <name> AS <statement> you\n> can't easily reuse sql prepared by other PreparedStatement objects since\n> you don't know if the sql you are about to execute has or has not yet\n> been prepared or what <name> was used in that prepare. 
Thus you will\n> always need to do a new prepare. (This only is true if the driver is\n> trying to automatically use PREPARE/EXECUTE, which was the scenario I was\n> talking about).\n\nWell, there are some ugly tricks you could build into the driver\nto allow it to effectively use a PREPAREd statement with multiple,\nidentical PreparedStatement objects (basically, via the driver\ncaching various things and identifying PreparedStatements created\nwith the same SQL), but it's messy enough and has some problems\nhard enough to resolve that I can't actually see this being practical.\n\nI was actually just wanting to point out that this is where automatic\ncaching on the server shines.\n\n> >>If it only is used once, it will actually perform worse....\n>\n> I didn't say it would be much worse, but it won't be faster than not\n> using PREPARE.\n\nWell, if it's not faster, that's fine. If it's worse, that's not\nso fine, because as you point out there's really no way for the\ndriver to know whether a PreparedStatement is being used just for\nspeed (multiple queries with one instance) or security (one query,\nbut with parameters).\n\n> I am just pointing out that I don't think the driver\n> can behind the scenes use this capability automatically.\n\nWell, if there's little or no performance impact, I would say that\nthe driver should always use this capability with PreparedStatement\nobjects. If there is a performance impact, perhaps a property could\nturn it on and off?\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Mon, 15 Apr 2002 14:45:02 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: 7.3 schedule" }, { "msg_contents": "Tom Lane writes:\n\n> The regression tests contain no very-long literals. 
The results I was\n> referring to concerned cases with string (BLOB) literals in the\n> hundreds-of-K range; it seems that the per-character loop in the flex\n> lexer starts to look like a bottleneck when you have tokens that much\n> larger than the rest of the query.\n>\n> Solutions seem to be either (a) make that loop quicker, or (b) find a\n> way to avoid passing BLOBs through the lexer. I was merely suggesting\n> that (a) should be investigated before we invest the work implied\n> by (b).\n\nI've done the following test: Ten statements of the form\n\nSELECT 1 FROM tab1 WHERE val = '...';\n\nwhere ... are literals of length 5 - 10 MB (some random base-64 encoded\nMP3 files). \"tab1\" was empty. The test ran 3:40 min wall-clock time.\n\nTop ten calls:\n\n % cumulative self self total\n time seconds seconds calls ms/call ms/call name\n 36.95 9.87 9.87 74882482 0.00 0.00 pq_getbyte\n 22.80 15.96 6.09 11 553.64 1450.93 pq_getstring\n 13.55 19.58 3.62 11 329.09 329.10 scanstr\n 12.09 22.81 3.23 110 29.36 86.00 base_yylex\n 4.27 23.95 1.14 34 33.53 33.53 yy_get_previous_state\n 3.86 24.98 1.03 22 46.82 46.83 textin\n 3.67 25.96 0.98 34 28.82 28.82 myinput\n 1.83 26.45 0.49 45 10.89 32.67 yy_get_next_buffer\n 0.11 26.48 0.03 3027 0.01 0.01 AllocSetAlloc\n 0.11 26.51 0.03 129 0.23 0.23 fmgr_isbuiltin\n\nThe string literals didn't contain any backslashes, so scanstr is\noperating in the best-case scenario here. But for arbitrary binary data we\nneed some escape mechanism, so I don't see much room for improvement\nthere.\n\nIt seems the real bottleneck is the excessive abstraction in the\ncommunications layer. 
I haven't looked closely at all, but it would seem\nbetter if pq_getstring would not use pq_getbyte and instead read the\nbuffer directly.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 16 Apr 2002 13:24:12 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Scanner performance (was Re: 7.3 schedule) " }, { "msg_contents": "Peter Eisentraut wrote:\n> The string literals didn't contain any backslashes, so scanstr is\n> operating in the best-case scenario here. But for arbitary binary data we\n> need some escape mechanism, so I don't see much room for improvement\n> there.\n> \n> It seems the real bottleneck is the excessive abstraction in the\n> communications layer. I haven't looked closely at all, but it would seem\n> better if pq_getstring would not use pq_getbyte and instead read the\n> buffer directly.\n\nI am inclined to agree with your analysis. We added abstraction to\nlibpq because the old code was quite poorly structured. Now that it is\nwell structured, removing some of the abstraction seems valuable.\n\nAny chance pq_getbyte could be made into a macro? I would be glad to\nsend you a macro version for testing. I would have to push the while\nloop into pg_recvbuf() and change the while in pg_getbyte to an if, or\nas a macro, ? :.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Apr 2002 13:55:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Scanner performance (was Re: 7.3 schedule)" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Top ten calls:\n\n> % cumulative self self total\n> time seconds seconds calls ms/call ms/call name\n> 36.95 9.87 9.87 74882482 0.00 0.00 pq_getbyte\n> 22.80 15.96 6.09 11 553.64 1450.93 pq_getstring\n> 13.55 19.58 3.62 11 329.09 329.10 scanstr\n> 12.09 22.81 3.23 110 29.36 86.00 base_yylex\n> 4.27 23.95 1.14 34 33.53 33.53 yy_get_previous_state\n> 3.86 24.98 1.03 22 46.82 46.83 textin\n> 3.67 25.96 0.98 34 28.82 28.82 myinput\n> 1.83 26.45 0.49 45 10.89 32.67 yy_get_next_buffer\n> 0.11 26.48 0.03 3027 0.01 0.01 AllocSetAlloc\n> 0.11 26.51 0.03 129 0.23 0.23 fmgr_isbuiltin\n\nInteresting. This should be taken with a grain of salt however: gprof's\ncall-counting overhead is large enough to skew the results on many\nmachines (ie, routines that are called many times tend to show more than\ntheir fair share of runtime). If your profiler does not show the\ncounter subroutine (\"mcount\" or some similar name) separately, you\nshould be very suspicious of where the overhead time is hidden.\n\nFor comparison you might want to check out some similar numbers I\nobtained awhile back:\nhttp://archives.postgresql.org/pgsql-hackers/2001-12/msg00076.php\n(thanks to Barry Lind for reminding me about that ;-)). That test\nshowed base_yylex/addlit/scanstr as costing about twice as much as\npg_getstring/pq_getbyte. 
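The macro version of pq_getbyte that Bruce suggests can be sketched roughly as follows. The names and buffer layout are made up for illustration and are not the real pqcomm.c internals; the point is only that the common buffered case becomes a plain array access with no per-byte function call:

```c
#include <assert.h>

/* Illustrative input buffer; the backend keeps something similar
 * internally, but this struct is an assumption for the sketch. */
typedef struct {
    unsigned char buf[8192];
    int pos;    /* next byte to return */
    int len;    /* number of valid bytes in buf */
} PqBuffer;

/* Slow path: in the backend this is where more data would be read
 * from the socket; here it just reports end-of-input. */
static int pq_refill(PqBuffer *b)
{
    (void) b;
    return -1;  /* EOF */
}

/* Fast path is a macro: the per-byte cost is an index, a compare,
 * and an increment -- no function-call overhead. */
#define PQ_GETBYTE(b) \
    ((b)->pos < (b)->len ? (int) (b)->buf[(b)->pos++] : pq_refill(b))

/* Consume the buffer through the macro, counting bytes until EOF. */
int pq_count_bytes(PqBuffer *b)
{
    int n = 0;

    while (PQ_GETBYTE(b) >= 0)
        n++;
    return n;
}
```

Only when the buffer is exhausted does the macro fall back to a function call, so a 75-million-call profile entry like the one above collapses into mostly inline buffer accesses.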
Probably the truth is somewhere in between\nyour measurements and mine.\n\nIn any case it does seem that some micro-optimization in the vicinity of\nthe scanner's per-character costs, ie, pq_getbyte, addlit, etc would be\nworth the trouble.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Apr 2002 01:04:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Scanner performance (was Re: 7.3 schedule) " }, { "msg_contents": "\nI have added these emails to TODO.detail/prepare.\n\n---------------------------------------------------------------------------\n\nKarel Zak wrote:\n> On Fri, Apr 12, 2002 at 12:41:34AM -0400, Neil Conway wrote:\n> > On Fri, 12 Apr 2002 12:58:01 +0900\n> > \"Hiroshi Inoue\" <Inoue@tpf.co.jp> wrote:\n> > > \n> > > Just a confirmation.\n> > > Someone is working on PREPARE/EXECUTE ?\n> > > What about Karel's work ?\n> \n> Right question :-)\n> \n> > I am. My work is based on Karel's stuff -- at the moment I'm still\n> > basically working on getting Karel's patch to play nicely with\n> > current sources; once that's done I'll be addressing whatever\n> > issues are stopping the code from getting into CVS.\n> \n> My patch (qcache) for PostgreSQL 7.0 is available at \n> ftp://ftp2.zf.jcu.cz/users/zakkr/pg/.\n> \n> I very look forward to Neil's work on this. \n> \n> Notes:\n> \n> * It's experimental patch, but usable. All features below mentioned \n> works.\n> \n> * PREPARE/EXECUTE is not only SQL statements, I think good idea is\n> create something common and robus for query-plan caching,\n> beacuse there is for example SPI too. The RI triggers are based \n> on SPI_saveplan(). 
\n> \n> * My patch knows EXECUTE INTO feature:\n> \n> PREPARE foo AS SELECT * FROM pg_class WHERE relname ~~ $1 USING text;\n> \n> EXECUTE foo USING 'pg%'; <-- standard select\n> \n> EXECUTE foo INTO TEMP newtab USING 'pg%'; <-- select into\n> \n> \n> * The patch allows store query-planns to shared memory and is\n> possible EXECUTE it at more backends (over same DB) and planns\n> are persistent across connetions. For this feature I create special \n> memory context subsystem (like current aset.c, but it works with \n> IPC shared memory).\n> \n> This is maybe too complex solution and (maybe) sufficient is cache \n> query in one backend only. I know unbelief about this shared\n> memory solution (Tom?). \n> \n> \n> Karel\n> \n> \n> My experimental patch README (excuse my English):\n> \n> Implementation\n> ~~~~~~~~~~~~~~\n> \n> The qCache allows save queryTree and queryPlan. There is available are \n> two space for data caching. \n> \n> LOCAL - data are cached in backend non-shared memory and data aren't\n> available in other backends. \n> \n> SHARE - data are cached in backend shared memory and data are \n> visible in all backends.\n> \n> Because size of share memory pool is limited and it is set during\n> postmaster start up, the qCache must remove all old planns if pool is \n> full. You can mark each entry as \"REMOVEABLE\" or \"NOTREMOVEABLE\". \n> \n> A removeable entry is removed if pool is full.\n> \n> A not-removeable entry must be removed via qCache_Remove() or \n> the other routines. The qCache not remove this entry itself.\n> \n> All records in qCache are cached (in the hash table) under some key.\n> The qCache knows two alternate of key --- \"KEY_STRING\" and \"KEY_BINARY\". \n> \n> The qCache API not allows access to shared memory, all cached planns that \n> API returns are copy to CurrentMemoryContext. 
All (qCache_ ) routines lock \n> shmem itself (exception is qCache_RemoveOldest_ShareRemoveAble()).\n> \n> - for locking is used spin lock.\n> \n> Memory management\n> ~~~~~~~~~~~~~~~~~\n> The qCache use for qCache's shared pool its memory context independent on\n> standard aset/mcxt, but use compatible API --- it allows to use standard\n> palloc() (it is very needful for basic plan-tree operations, an example \n> for copyObject()). The qCache memory management is very simular to current\n> aset.c code. It is chunk-ed blocks too, but the block is smaller - 1024b.\n> \n> The number of blocks is available set in postmaster 'argv' via option\n> '-Z'.\n> \n> For plan storing is used separate MemoryContext for each plan, it \n> is good idea (Hiroshi's ?), bucause create new context is simple and \n> inexpensive and allows easy destroy (free) cached plan. This method is \n> used in my SPI overhaul instead TopMemoryContext feeding.\n> \n> Postmaster\n> ~~~~~~~~~~\n> The query cache memory is init during potmaster startup. The size of\n> query cache pool is set via '-Z <number-of-blocks>' switch --- default \n> is 100 blocks where 1 block = 1024b, it is sufficient for 20-30 cached\n> planns. One query needs somewhere 3-10 blocks, for example query like\n> \n> PREPARE sel AS SELECT * FROM pg_class;\n> \n> needs 10Kb, because table pg_class has very much columns. \n> \n> Note: for development I add SQL function: \"SELECT qcache_state();\",\n> this routine show usage of qCache.\n> \n> SPI\n> ~~~\n> I a little overwrite SPI save plan method and remove TopMemoryContext\n> \"feeding\".\n> \n> Standard SPI:\n> \n> SPI_saveplan() - save each plan to separate standard memory context.\n> \n> SPI_freeplan() - free plan.\n> \n> By key SPI:\n> \n> It is SPI interface for query cache and allows save planns to SHARED\n> or LOCAL cache 'by' arbitrary key (string or binary). 
Routines:\n> \n> SPI_saveplan_bykey() - save plan to query cache\n> \n> SPI_freeplan_bykey() - remove plan from query cache\n> \n> SPI_fetchplan_bykey() - fetch plan saved in query cache\n> \n> SPI_execp_bykey() - execute (via SPI) plan saved in query\n> cache \n> \n> - now, users can write functions that save planns to shared memory \n> and planns are visible in all backend and are persistent arcoss \n> connection. \n> \n> Example:\n> ~~~~~~~\n> /* ----------\n> * Save/exec query from shared cache via string key\n> * ----------\n> */\n> int keySize = 0; \n> flag = SPI_BYKEY_SHARE | SPI_BYKEY_STRING;\n> char *key = \"my unique key\";\n> \n> res = SPI_execp_bykey(values, nulls, tcount, key, flag, keySize);\n> \n> if (res == SPI_ERROR_PLANNOTFOUND) \n> {\n> /* --- not plan in cache - must create it --- */\n> \n> void *plan;\n> \n> plan = SPI_prepare(querystr, valnum, valtypes);\n> SPI_saveplan_bykey(plan, key, keySize, flag);\n> \n> res = SPI_execute(plan, values, Nulls, tcount);\n> }\n> \n> elog(NOTICE, \"Processed: %d\", SPI_processed);\n> \n> \n> PREPARE/EXECUTE\n> ~~~~~~~~~~~~~~~\n> * Syntax:\n> \n> PREPARE <name> AS <query> \n> [ USING type, ... typeN ] \n> [ NOSHARE | SHARE | GLOBAL ]\n> \n> EXECUTE <name> \n> [ INTO [ TEMPORARY | TEMP ] [ TABLE ] new_table ]\n> [ USING val, ... valN ]\n> [ NOSHARE | SHARE | GLOBAL ]\n> \n> DEALLOCATE PREPARE \n> [ <name> [ NOSHARE | SHARE | GLOBAL ]]\n> [ ALL | ALL INTERNAL ]\n> \n> \n> I know that it is a little out of SQL92... (use CREATE/DROP PLAN instead\n> this?) --- what mean SQL standard guru?\n> \n> * Where:\n> \n> NOSHARE --- cached in local backend query cache - not accessable\n> from the others backends and not is persisten a across\n> conection.\n> \n> SHARE --- cached in shared query cache and accessable from\n> all backends which work over same database.\n> \n> GLOBAL --- cached in shared query cache and accessable from\n> all backends and all databases. 
\n> \n> - default is 'SHARE'\n> \n> Deallocate:\n> \n> ALL --- deallocate all users's plans\n> \n> ALL INTERNAL --- deallocate all internal plans, like planns\n> cached via SPI. It is needful if user\n> alter/drop table ...etc.\n> \n> * Parameters:\n> \n> \"USING\" part in the prepare statement is for datetype setting for\n> paremeters in the query. For example:\n> \n> PREPARE sel AS SELECT * FROM pg_class WHERE relname ~~ $1 USING text;\n> \n> EXECUTE sel USING 'pg%';\n> \n> \n> * Limitation:\n> \n> - prepare/execute allow use full statement of SELECT/INSERT/DELETE/\n> UPDATE. \n> - possible is use union, subselects, limit, ofset, select-into\n> \n> \n> Performance:\n> ~~~~~~~~~~~\n> * the SPI\n> \n> - I for my tests a little change RI triggers to use SPI by_key API\n> and save planns to shared qCache instead to internal RI hash table.\n> \n> The RI use very simple (for parsing) queries and qCache interest is \n> not visible. It's better if backend very often startup and RI check \n> always same tables. In this situation speed go up --- 10-12%. \n> (This snapshot not include this RI change.)\n> \n> But all depend on how much complicate for parser is query in \n> trigger.\n> \n> * PREPARE/EXECUTE\n> \n> - For tests I use query that not use some table (the executor is \n> in boredom state), but is difficult for the parser. 
An example:\n> \n> SELECT 'a text ' || (10*10+(100^2))::text || ' next text ' || cast \n> (date_part('year', timestamp 'now') AS text );\n> \n> - (10000 * this query):\n> \n> standard select: 54 sec\n> via prepare/execute: 4 sec (93% better)\n> \n> IMHO it is nod bad.\n> \n> - For standard query like:\n> \n> SELECT u.usename, r.relname FROM pg_class r, pg_user u WHERE \n> r.relowner = u.usesysid;\n> \n> it is with PREPARE/EXECUTE 10-20% faster.\n> \n> -- \n> Karel Zak <zakkr@zf.jcu.cz>\n> http://home.zf.jcu.cz/~zakkr/\n> \n> C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Apr 2002 00:13:21 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: 7.3 schedule" } ]
[ { "msg_contents": "I've detected that the restoring of large objects may consume huge amounts of\ndiskspace when using unusual blocksizes (e.g. 32KB). My setup is Postgresql-7.2.1\n+ 32KB blocks + LOBLKSIZE 16KB, an unusual combination I think, because this setup gave\nthe very best performance. I wanted to restore a database containing 2 gigabytes of\nlarge objects, and noticed that it took around 6 gigabytes of diskspace to finish.\nAfter it finished I ran \"VACUUM FULL VERBOSE pg_largeobject\",\nand had around 140000 live tuples and around 480000 dead tuples (I don't remember the exact\nvalues, but I think there were 3 times as many dead tuples as live tuples).\n\nI checked the pg_dump sources and found out that data is written in 4KB chunks to the large object.\nSince in my database the LO tuples are 16KB each, that would mean:\n1. write 4KB -> have 1 live 4KB tuple\n2. write 4KB -> 1 live 8KB tuple and 1 dead 4KB tuple\n3. write 4KB -> 1 live 12KB tuple and 2 dead tuples\n4. write 4KB -> 1 live 16KB tuple and 3 dead tuples\n\nSo creating a 16KB chunk took 16+12+8+4 => 40KB of diskspace, so recovering 2GB of large objects\ntakes around 40/16 * 2 => 5GB diskspace and leaves 3 times the number of dead tuples (supposing\nall LO's have sizes which are multiples of 16KB).\n\nI've written a patch which buffers LO's in 32KB blocks and tested again, and had 140000 live tuples\nand nearly no dead tuples (around 10000, I'm still not sure where they're coming from).\n\nIs there a better way to fix this? Can I post the patch to this list (~150 lines)? And I did not find out how I can detect the large object chunksize, either by getting it from the headers (include \"storage/large_object.h\" did not work) or how to get it from the database I restore to. 
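The write-amplification arithmetic above can be checked with a short sketch. Units are KB, and the assumption, taken directly from the example, is that each partial lo_write() rewrites the whole tuple, leaving the previous version dead:

```c
#include <assert.h>

/* Total KB written while filling one LO tuple of tuple_kb in pieces
 * of write_kb, when each partial write rewrites the whole tuple:
 * e.g. 4 + 8 + 12 + 16 for 4KB writes into a 16KB tuple. */
long lo_restore_cost_kb(long write_kb, long tuple_kb)
{
    long written = 0;
    long filled;

    for (filled = write_kb; filled <= tuple_kb; filled += write_kb)
        written += filled;
    return written;
}

/* Every tuple version except the final one ends up dead. */
long lo_dead_tuples_per_chunk(long write_kb, long tuple_kb)
{
    return tuple_kb / write_kb - 1;
}
```

lo_restore_cost_kb(4, 16) gives 40KB per 16KB chunk, hence roughly 40/16 * 2GB = 5GB for the restore, and lo_dead_tuples_per_chunk(4, 16) gives the observed 3-to-1 dead/live ratio; buffering writes at the full chunk size (e.g. 32KB writes into 32KB tuples) brings the amplification down to nothing, which is what the patch does.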
Any hints?\n\nBest regards,\n Mario Weilguni\n\n\n", "msg_date": "Thu, 11 Apr 2002 10:56:02 +0200", "msg_from": "\"Mario Weilguni\" <mario.weilguni@icomedias.com>", "msg_from_op": true, "msg_subject": "Inefficient handling of LO-restore + Patch" }, { "msg_contents": "\"Mario Weilguni\" <mario.weilguni@icomedias.com> writes:\n> And I did not find out how I can detect the large object\n> chunksize, either from getting it from the headers (include\n> \"storage/large_object.h\" did not work)\n\nWhy not?\n\nStill, it might make sense to move the LOBLKSIZE definition into\npg_config.h, since as you say it's of some interest to clients like\npg_dump.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Apr 2002 11:44:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Inefficient handling of LO-restore + Patch " }, { "msg_contents": "Am Donnerstag, 11. April 2002 17:44 schrieb Tom Lane:\n> \"Mario Weilguni\" <mario.weilguni@icomedias.com> writes:\n> > And I did not find out how I can detect the large object\n> > chunksize, either from getting it from the headers (include\n> > \"storage/large_object.h\" did not work)\n>\n\nYou did not answer if it's ok to post the patch, hope it's ok:\n==================================\ndiff -Nur postgresql-7.2.1-orig/src/bin/pg_dump/pg_backup_archiver.c \npostgresql-7.2.1/src/bin/pg_dump/pg_backup_archiver.c\n--- postgresql-7.2.1-orig/src/bin/pg_dump/pg_backup_archiver.c\tMon Feb 11 \n01:18:20 2002\n+++ postgresql-7.2.1/src/bin/pg_dump/pg_backup_archiver.c\tThu Apr 11 10:41:09 \n2002\n@@ -819,6 +819,9 @@\n \t\tAH->createdBlobXref = 1;\n \t}\n \n+\t/* Initialize the LO Buffer */\n+\tAH->lo_buf_used = 0;\n+\n \t/*\n \t * Start long-running TXs if necessary\n \t */\n@@ -848,6 +851,19 @@\n void\n EndRestoreBlob(ArchiveHandle *AH, Oid oid)\n {\n+ if(AH->lo_buf_used > 0) {\n+\t /* Write remaining bytes from the LO buffer */\n+ \t int res;\n+ res = lo_write(AH->connection, AH->loFd, (void *) 
AH->lo_buf, \nAH->lo_buf_used);\n+\n+\t ahlog(AH, 5, \"wrote remaining %d bytes of large object data (result = \n%d)\\n\",\n+\t \t (int)AH->lo_buf_used, res);\n+\t if (res != AH->lo_buf_used)\n+\t\tdie_horribly(AH, modulename, \"could not write to large object (result: %d, \nexpected: %d)\\n\",\n+\t\t\t res, AH->lo_buf_used);\n+ AH->lo_buf_used = 0;\n+ }\n+\n \tlo_close(AH->connection, AH->loFd);\n \tAH->writingBlob = 0;\n \n@@ -1228,14 +1244,27 @@\n \n \tif (AH->writingBlob)\n \t{\n-\t\tres = lo_write(AH->connection, AH->loFd, (void *) ptr, size * nmemb);\n-\t\tahlog(AH, 5, \"wrote %d bytes of large object data (result = %d)\\n\",\n-\t\t\t (int) (size * nmemb), res);\n-\t\tif (res != size * nmemb)\n+\t if(AH->lo_buf_used + size * nmemb > AH->lo_buf_size) {\n+\t\t /* Split LO buffer */\n+\t\t int remaining = AH->lo_buf_size - AH->lo_buf_used;\n+\t\t int slack = nmemb * size - remaining;\n+\n+\t\t memcpy(AH->lo_buf + AH->lo_buf_used, ptr, remaining);\n+\t\t res = lo_write(AH->connection, AH->loFd, AH->lo_buf, AH->lo_buf_size);\n+\t\t ahlog(AH, 5, \"wrote %d bytes of large object data (result = %d)\\n\",\n+\t\t \t AH->lo_buf_size, res);\n+\t\t if (res != AH->lo_buf_size)\n \t\t\tdie_horribly(AH, modulename, \"could not write to large object (result: %d, \nexpected: %d)\\n\",\n-\t\t\t\t\t\t res, (int) (size * nmemb));\n+\t\t\t\t\t\t res, AH->lo_buf_size);\n+\t memcpy(AH->lo_buf, ptr + remaining, slack);\n+\t\t AH->lo_buf_used = slack;\n+\t } else {\n+\t /* LO Buffer is still large enough, buffer it */\n+\t\t memcpy(AH->lo_buf + AH->lo_buf_used, ptr, size * nmemb);\n+\t\t AH->lo_buf_used += size * nmemb;\n+\t }\n \n-\t\treturn res;\n+\t return size * nmemb;\n \t}\n \telse if (AH->gzOut)\n \t{\ndiff -Nur postgresql-7.2.1-orig/src/bin/pg_dump/pg_backup_archiver.h \npostgresql-7.2.1/src/bin/pg_dump/pg_backup_archiver.h\n--- postgresql-7.2.1-orig/src/bin/pg_dump/pg_backup_archiver.h\tMon Nov 5 \n18:46:30 2001\n+++ postgresql-7.2.1/src/bin/pg_dump/pg_backup_archiver.h\tThu 
Apr 11 10:41:14 \n2002\n@@ -41,6 +41,7 @@\n #include <errno.h>\n \n #include \"pqexpbuffer.h\"\n+#define LOBBUFSIZE 32768\n \n #ifdef HAVE_LIBZ\n #include <zlib.h>\n@@ -240,6 +241,9 @@\n \n \tRestoreOptions *ropt;\t\t/* Used to check restore options in\n \t\t\t\t\t\t\t\t * ahwrite etc */\n+\tvoid *lo_buf;\n+\tint lo_buf_used;\n+\tint lo_buf_size;\n } ArchiveHandle;\n \n typedef struct _tocEntry\ndiff -Nur postgresql-7.2.1-orig/src/bin/pg_dump/pg_backup_custom.c \npostgresql-7.2.1/src/bin/pg_dump/pg_backup_custom.c\n--- postgresql-7.2.1-orig/src/bin/pg_dump/pg_backup_custom.c\tWed Nov 28 \n00:48:12 2001\n+++ postgresql-7.2.1/src/bin/pg_dump/pg_backup_custom.c\tThu Apr 11 10:42:45 \n2002\n@@ -153,6 +153,12 @@\n \tif (ctx->zp == NULL)\n \t\tdie_horribly(AH, modulename, \"out of memory\\n\");\n \n+\t/* Initialize LO buffering */\n+\tAH->lo_buf_size = LOBBUFSIZE;\n+\tAH->lo_buf = (void *)malloc(LOBBUFSIZE);\n+\tif(AH->lo_buf == NULL)\n+ die_horribly(AH, modulename, \"out of memory\\n\");\n+\n \t/*\n \t * zlibOutSize is the buffer size we tell zlib it can output to. 
We\n \t * actually allocate one extra byte because some routines want to\ndiff -Nur postgresql-7.2.1-orig/src/bin/pg_dump/pg_backup_files.c \npostgresql-7.2.1/src/bin/pg_dump/pg_backup_files.c\n--- postgresql-7.2.1-orig/src/bin/pg_dump/pg_backup_files.c\tThu Oct 25 \n07:49:52 2001\n+++ postgresql-7.2.1/src/bin/pg_dump/pg_backup_files.c\tThu Apr 11 10:43:01 \n2002\n@@ -113,6 +113,12 @@\n \tAH->formatData = (void *) ctx;\n \tctx->filePos = 0;\n \n+\t/* Initialize LO buffering */\n+\tAH->lo_buf_size = LOBBUFSIZE;\n+\tAH->lo_buf = (void *)malloc(LOBBUFSIZE);\n+\tif(AH->lo_buf == NULL)\n+ die_horribly(AH, modulename, \"out of memory\\n\");\n+\n \t/*\n \t * Now open the TOC file\n \t */\ndiff -Nur postgresql-7.2.1-orig/src/bin/pg_dump/pg_backup_null.c \npostgresql-7.2.1/src/bin/pg_dump/pg_backup_null.c\n--- postgresql-7.2.1-orig/src/bin/pg_dump/pg_backup_null.c\tWed Jun 27 23:21:37 \n2001\n+++ postgresql-7.2.1/src/bin/pg_dump/pg_backup_null.c\tThu Apr 11 10:44:53 2002\n@@ -64,7 +64,6 @@\n \t */\n \tif (AH->mode == archModeRead)\n \t\tdie_horribly(AH, NULL, \"this format cannot be read\\n\");\n-\n }\n \n /*\ndiff -Nur postgresql-7.2.1-orig/src/bin/pg_dump/pg_backup_tar.c \npostgresql-7.2.1/src/bin/pg_dump/pg_backup_tar.c\n--- postgresql-7.2.1-orig/src/bin/pg_dump/pg_backup_tar.c\tSun Oct 28 07:25:58 \n2001\n+++ postgresql-7.2.1/src/bin/pg_dump/pg_backup_tar.c\tThu Apr 11 10:44:08 2002\n@@ -157,6 +157,12 @@\n \tctx = (lclContext *) malloc(sizeof(lclContext));\n \tAH->formatData = (void *) ctx;\n \tctx->filePos = 0;\n+\t\n+\t/* Initialize LO buffering */\n+\tAH->lo_buf_size = LOBBUFSIZE;\n+\tAH->lo_buf = (void *)malloc(LOBBUFSIZE);\n+\tif(AH->lo_buf == NULL)\n+ die_horribly(AH, modulename, \"out of memory\\n\");\n \n \t/*\n \t * Now open the TOC file\n============================\n\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n", "msg_date": "Thu, 11 Apr 2002 19:15:39 +0200", "msg_from": "Mario 
Weilguni <mario.weilguni@icomedias.com>", "msg_from_op": false, "msg_subject": "Re: Inefficient handling of LO-restore + Patch" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nMario Weilguni wrote:\n> Am Donnerstag, 11. April 2002 17:44 schrieb Tom Lane:\n> > \"Mario Weilguni\" <mario.weilguni@icomedias.com> writes:\n> > > And I did not find out how I can detect the large object\n> > > chunksize, either from getting it from the headers (include\n> > > \"storage/large_object.h\" did not work)\n> >\n> \n> You did not answer if it's ok to post the patch, hope it's ok:\n> ==================================\n> diff -Nur postgresql-7.2.1-orig/src/bin/pg_dump/pg_backup_archiver.c \n> postgresql-7.2.1/src/bin/pg_dump/pg_backup_archiver.c\n> --- postgresql-7.2.1-orig/src/bin/pg_dump/pg_backup_archiver.c\tMon Feb 11 \n> 01:18:20 2002\n> +++ postgresql-7.2.1/src/bin/pg_dump/pg_backup_archiver.c\tThu Apr 11 10:41:09 \n> 2002\n> @@ -819,6 +819,9 @@\n> \t\tAH->createdBlobXref = 1;\n> \t}\n> \n> +\t/* Initialize the LO Buffer */\n> +\tAH->lo_buf_used = 0;\n> +\n> \t/*\n> \t * Start long-running TXs if necessary\n> \t */\n> @@ -848,6 +851,19 @@\n> void\n> EndRestoreBlob(ArchiveHandle *AH, Oid oid)\n> {\n> + if(AH->lo_buf_used > 0) {\n> +\t /* Write remaining bytes from the LO buffer */\n> + \t int res;\n> + res = lo_write(AH->connection, AH->loFd, (void *) AH->lo_buf, \n> AH->lo_buf_used);\n> +\n> +\t ahlog(AH, 5, \"wrote remaining %d bytes of large object data (result = \n> %d)\\n\",\n> +\t \t (int)AH->lo_buf_used, res);\n> +\t if (res != AH->lo_buf_used)\n> +\t\tdie_horribly(AH, modulename, \"could not write to large object (result: %d, \n> expected: %d)\\n\",\n> +\t\t\t res, AH->lo_buf_used);\n> + AH->lo_buf_used = 0;\n> + }\n> +\n> 
\tlo_close(AH->connection, AH->loFd);\n> \tAH->writingBlob = 0;\n> \n> @@ -1228,14 +1244,27 @@\n> \n> \tif (AH->writingBlob)\n> \t{\n> -\t\tres = lo_write(AH->connection, AH->loFd, (void *) ptr, size * nmemb);\n> -\t\tahlog(AH, 5, \"wrote %d bytes of large object data (result = %d)\\n\",\n> -\t\t\t (int) (size * nmemb), res);\n> -\t\tif (res != size * nmemb)\n> +\t if(AH->lo_buf_used + size * nmemb > AH->lo_buf_size) {\n> +\t\t /* Split LO buffer */\n> +\t\t int remaining = AH->lo_buf_size - AH->lo_buf_used;\n> +\t\t int slack = nmemb * size - remaining;\n> +\n> +\t\t memcpy(AH->lo_buf + AH->lo_buf_used, ptr, remaining);\n> +\t\t res = lo_write(AH->connection, AH->loFd, AH->lo_buf, AH->lo_buf_size);\n> +\t\t ahlog(AH, 5, \"wrote %d bytes of large object data (result = %d)\\n\",\n> +\t\t \t AH->lo_buf_size, res);\n> +\t\t if (res != AH->lo_buf_size)\n> \t\t\tdie_horribly(AH, modulename, \"could not write to large object (result: %d, \n> expected: %d)\\n\",\n> -\t\t\t\t\t\t res, (int) (size * nmemb));\n> +\t\t\t\t\t\t res, AH->lo_buf_size);\n> +\t memcpy(AH->lo_buf, ptr + remaining, slack);\n> +\t\t AH->lo_buf_used = slack;\n> +\t } else {\n> +\t /* LO Buffer is still large enough, buffer it */\n> +\t\t memcpy(AH->lo_buf + AH->lo_buf_used, ptr, size * nmemb);\n> +\t\t AH->lo_buf_used += size * nmemb;\n> +\t }\n> \n> -\t\treturn res;\n> +\t return size * nmemb;\n> \t}\n> \telse if (AH->gzOut)\n> \t{\n> diff -Nur postgresql-7.2.1-orig/src/bin/pg_dump/pg_backup_archiver.h \n> postgresql-7.2.1/src/bin/pg_dump/pg_backup_archiver.h\n> --- postgresql-7.2.1-orig/src/bin/pg_dump/pg_backup_archiver.h\tMon Nov 5 \n> 18:46:30 2001\n> +++ postgresql-7.2.1/src/bin/pg_dump/pg_backup_archiver.h\tThu Apr 11 10:41:14 \n> 2002\n> @@ -41,6 +41,7 @@\n> #include <errno.h>\n> \n> #include \"pqexpbuffer.h\"\n> +#define LOBBUFSIZE 32768\n> \n> #ifdef HAVE_LIBZ\n> #include <zlib.h>\n> @@ -240,6 +241,9 @@\n> \n> \tRestoreOptions *ropt;\t\t/* Used to check restore options in\n> \t\t\t\t\t\t\t\t 
* ahwrite etc */\n> +\tvoid *lo_buf;\n> +\tint lo_buf_used;\n> +\tint lo_buf_size;\n> } ArchiveHandle;\n> \n> typedef struct _tocEntry\n> diff -Nur postgresql-7.2.1-orig/src/bin/pg_dump/pg_backup_custom.c \n> postgresql-7.2.1/src/bin/pg_dump/pg_backup_custom.c\n> --- postgresql-7.2.1-orig/src/bin/pg_dump/pg_backup_custom.c\tWed Nov 28 \n> 00:48:12 2001\n> +++ postgresql-7.2.1/src/bin/pg_dump/pg_backup_custom.c\tThu Apr 11 10:42:45 \n> 2002\n> @@ -153,6 +153,12 @@\n> \tif (ctx->zp == NULL)\n> \t\tdie_horribly(AH, modulename, \"out of memory\\n\");\n> \n> +\t/* Initialize LO buffering */\n> +\tAH->lo_buf_size = LOBBUFSIZE;\n> +\tAH->lo_buf = (void *)malloc(LOBBUFSIZE);\n> +\tif(AH->lo_buf == NULL)\n> + die_horribly(AH, modulename, \"out of memory\\n\");\n> +\n> \t/*\n> \t * zlibOutSize is the buffer size we tell zlib it can output to. We\n> \t * actually allocate one extra byte because some routines want to\n> diff -Nur postgresql-7.2.1-orig/src/bin/pg_dump/pg_backup_files.c \n> postgresql-7.2.1/src/bin/pg_dump/pg_backup_files.c\n> --- postgresql-7.2.1-orig/src/bin/pg_dump/pg_backup_files.c\tThu Oct 25 \n> 07:49:52 2001\n> +++ postgresql-7.2.1/src/bin/pg_dump/pg_backup_files.c\tThu Apr 11 10:43:01 \n> 2002\n> @@ -113,6 +113,12 @@\n> \tAH->formatData = (void *) ctx;\n> \tctx->filePos = 0;\n> \n> +\t/* Initialize LO buffering */\n> +\tAH->lo_buf_size = LOBBUFSIZE;\n> +\tAH->lo_buf = (void *)malloc(LOBBUFSIZE);\n> +\tif(AH->lo_buf == NULL)\n> + die_horribly(AH, modulename, \"out of memory\\n\");\n> +\n> \t/*\n> \t * Now open the TOC file\n> \t */\n> diff -Nur postgresql-7.2.1-orig/src/bin/pg_dump/pg_backup_null.c \n> postgresql-7.2.1/src/bin/pg_dump/pg_backup_null.c\n> --- postgresql-7.2.1-orig/src/bin/pg_dump/pg_backup_null.c\tWed Jun 27 23:21:37 \n> 2001\n> +++ postgresql-7.2.1/src/bin/pg_dump/pg_backup_null.c\tThu Apr 11 10:44:53 2002\n> @@ -64,7 +64,6 @@\n> \t */\n> \tif (AH->mode == archModeRead)\n> \t\tdie_horribly(AH, NULL, \"this format cannot be read\\n\");\n> 
-\n> }\n> \n> /*\n> diff -Nur postgresql-7.2.1-orig/src/bin/pg_dump/pg_backup_tar.c \n> postgresql-7.2.1/src/bin/pg_dump/pg_backup_tar.c\n> --- postgresql-7.2.1-orig/src/bin/pg_dump/pg_backup_tar.c\tSun Oct 28 07:25:58 \n> 2001\n> +++ postgresql-7.2.1/src/bin/pg_dump/pg_backup_tar.c\tThu Apr 11 10:44:08 2002\n> @@ -157,6 +157,12 @@\n> \tctx = (lclContext *) malloc(sizeof(lclContext));\n> \tAH->formatData = (void *) ctx;\n> \tctx->filePos = 0;\n> +\t\n> +\t/* Initialize LO buffering */\n> +\tAH->lo_buf_size = LOBBUFSIZE;\n> +\tAH->lo_buf = (void *)malloc(LOBBUFSIZE);\n> +\tif(AH->lo_buf == NULL)\n> + die_horribly(AH, modulename, \"out of memory\\n\");\n> \n> \t/*\n> \t * Now open the TOC file\n> ============================\n> \n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Apr 2002 00:01:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Inefficient handling of LO-restore + Patch" }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nMario Weilguni wrote:\n> Am Donnerstag, 11. 
April 2002 17:44 schrieb Tom Lane:\n> > \"Mario Weilguni\" <mario.weilguni@icomedias.com> writes:\n> > > And I did not find out how I can detect the large object\n> > > chunksize, either from getting it from the headers (include\n> > > \"storage/large_object.h\" did not work)\n> >\n> \n> You did not answer if it's ok to post the patch, hope it's ok:\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Apr 2002 22:21:11 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Inefficient handling of LO-restore + Patch" }, { "msg_contents": "This patch does not compile correctly:\n\npg_backup_archiver.c: In function `ahwrite':\npg_backup_archiver.c:1252: warning: pointer of type `void *' used in arithmetic\npg_backup_archiver.c:1259: warning: pointer of type `void *' used in arithmetic\npg_backup_archiver.c:1263: warning: pointer of type `void *' used in arithmetic\nmake: *** [pg_backup_archiver.o] Error 1\n\n\nBruce Momjian writes:\n\n>\n> Patch applied. Thanks.\n>\n> ---------------------------------------------------------------------------\n>\n>\n> Mario Weilguni wrote:\n> > Am Donnerstag, 11. 
April 2002 17:44 schrieb Tom Lane:\n> > > \"Mario Weilguni\" <mario.weilguni@icomedias.com> writes:\n> > > > And I did not find out how I can detect the large object\n> > > > chunksize, either from getting it from the headers (include\n> > > > \"storage/large_object.h\" did not work)\n> > >\n> >\n> > You did not answer if it's ok to post the patch, hope it's ok:\n>\n>\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 24 Apr 2002 01:41:00 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Inefficient handling of LO-restore + Patch" }, { "msg_contents": "OK, I have applied the following patch to fix these warnings. However,\nI need Mario to confirm these are the right changes. Thanks.\n\n---------------------------------------------------------------------------\n\nPeter Eisentraut wrote:\n> This patch does not compile correctly:\n> \n> pg_backup_archiver.c: In function `ahwrite':\n> pg_backup_archiver.c:1252: warning: pointer of type `void *' used in arithmetic\n> pg_backup_archiver.c:1259: warning: pointer of type `void *' used in arithmetic\n> pg_backup_archiver.c:1263: warning: pointer of type `void *' used in arithmetic\n> make: *** [pg_backup_archiver.o] Error 1\n> \n> \n> Bruce Momjian writes:\n> \n> >\n> > Patch applied. Thanks.\n> >\n> > ---------------------------------------------------------------------------\n> >\n> >\n> > Mario Weilguni wrote:\n> > > Am Donnerstag, 11. 
April 2002 17:44 schrieb Tom Lane:\n> > > > \"Mario Weilguni\" <mario.weilguni@icomedias.com> writes:\n> > > > > And I did not find out how I can detect the large object\n> > > > > chunksize, either from getting it from the headers (include\n> > > > > \"storage/large_object.h\" did not work)\n> > > >\n> > >\n> > > You did not answer if it's ok to post the patch, hope it's ok:\n> >\n> >\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/bin/pg_dump/pg_backup_archiver.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/bin/pg_dump/pg_backup_archiver.c,v\nretrieving revision 1.43\ndiff -c -r1.43 pg_backup_archiver.c\n*** src/bin/pg_dump/pg_backup_archiver.c\t24 Apr 2002 02:21:04 -0000\t1.43\n--- src/bin/pg_dump/pg_backup_archiver.c\t24 Apr 2002 14:01:15 -0000\n***************\n*** 1249,1266 ****\n \t\t int remaining = AH->lo_buf_size - AH->lo_buf_used;\n \t\t int slack = nmemb * size - remaining;\n \n! \t\t memcpy(AH->lo_buf + AH->lo_buf_used, ptr, remaining);\n \t\t res = lo_write(AH->connection, AH->loFd, AH->lo_buf, AH->lo_buf_size);\n \t\t ahlog(AH, 5, \"wrote %d bytes of large object data (result = %d)\\n\",\n \t\t \t AH->lo_buf_size, res);\n \t\t if (res != AH->lo_buf_size)\n \t\t\tdie_horribly(AH, modulename, \"could not write to large object (result: %d, expected: %d)\\n\",\n \t\t\t\t\t\t res, AH->lo_buf_size);\n! \t memcpy(AH->lo_buf, ptr + remaining, slack);\n \t\t AH->lo_buf_used = slack;\n \t } else {\n \t /* LO Buffer is still large enough, buffer it */\n! 
\t\t memcpy(AH->lo_buf + AH->lo_buf_used, ptr, size * nmemb);\n \t\t AH->lo_buf_used += size * nmemb;\n \t }\n \n--- 1249,1266 ----\n \t\t int remaining = AH->lo_buf_size - AH->lo_buf_used;\n \t\t int slack = nmemb * size - remaining;\n \n! \t\t memcpy((char *)AH->lo_buf + AH->lo_buf_used, ptr, remaining);\n \t\t res = lo_write(AH->connection, AH->loFd, AH->lo_buf, AH->lo_buf_size);\n \t\t ahlog(AH, 5, \"wrote %d bytes of large object data (result = %d)\\n\",\n \t\t \t AH->lo_buf_size, res);\n \t\t if (res != AH->lo_buf_size)\n \t\t\tdie_horribly(AH, modulename, \"could not write to large object (result: %d, expected: %d)\\n\",\n \t\t\t\t\t\t res, AH->lo_buf_size);\n! \t memcpy(AH->lo_buf, (char *)ptr + remaining, slack);\n \t\t AH->lo_buf_used = slack;\n \t } else {\n \t /* LO Buffer is still large enough, buffer it */\n! \t\t memcpy((char *)AH->lo_buf + AH->lo_buf_used, ptr, size * nmemb);\n \t\t AH->lo_buf_used += size * nmemb;\n \t }", "msg_date": "Wed, 24 Apr 2002 10:03:07 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Inefficient handling of LO-restore + Patch" }, { "msg_contents": "Am Mittwoch, 24. April 2002 16:03 schrieb Bruce Momjian:\n> OK, I have applied the following patch to fix these warnings. However,\n> I need Mario to confirm these are the right changes. Thanks.\n\nI've checked it and works fine, but the memcpy() prototype says it should be \nvoid pointers. Will this give errors with non-gcc compilers?\n", "msg_date": "Wed, 24 Apr 2002 19:02:49 +0200", "msg_from": "Mario Weilguni <mario.weilguni@icomedias.com>", "msg_from_op": false, "msg_subject": "Re: Inefficient handling of LO-restore + Patch" }, { "msg_contents": "Mario Weilguni wrote:\n> Am Mittwoch, 24. April 2002 16:03 schrieb Bruce Momjian:\n> > OK, I have applied the following patch to fix these warnings. However,\n> > I need Mario to confirm these are the right changes. 
Thanks.\n> \n> I've checked it and works fine, but the memcpy() prototype says it should be \n> void pointers. Will this give errors with non-gcc compilers?\n\nNo, it is fine. Anything can be cast _to_ a void pointer. You just\ncan't do arithmetic on them.\n\nAre you sure you want to use 'void *' in your code. Looking at the\nbackend large object code, I see char *:\n\n extern int inv_read(LargeObjectDesc *obj_desc, char *buf, int nbytes);\n extern int inv_write(LargeObjectDesc *obj_desc, char *buf, int nbytes);\n\nI guess my point is that these are just streams of bytes, _not_ really\nstreams of items of unknown length. We know the length, and the length\nis char. This may simplify the code.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Apr 2002 22:59:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Inefficient handling of LO-restore + Patch" } ]
[ { "msg_contents": "As promised here's an example of deadlock using foreign keys.\n\ncreate table lang (\n id integer not null primary key,\n name text\n);\ninsert into lang values (1, 'English');\ninsert into lang values (2, 'German');\n\ncreate table country (\n id integer not null primary key,\n name text\n);\ninsert into country values (10, 'USA');\ninsert into country values (11, 'Austria');\n\ncreate table entry (\n id integer not null primary key,\n lang_id integer not null references lang(id),\n country integer not null references country(id),\n txt text\n);\ninsert into entry values (100, 1, 10, 'Entry 1');\ninsert into entry values (101, 2, 11, 'Entry 2');\ninsert into entry values (102, 1, 11, 'Entry 3');\n\ntransaction A:begin;\ntransaction A:update entry set txt='Entry 1.1' where id=100;\ntransaction B:begin;\ntransaction B:update entry set txt='Entry 3.1' where id=102;\ntransaction A:update entry set txt='Entry 2.1' where id=101;\ntransaction A:deadlock detected\n\nMy application has around 100 tables with a few central tables like \n\"languages\", \"users\", \"types\".... , and it deadlocked a lot before I patched \nthe postmaster (I added a test to ignore some special, central tables like \n\"languages\", and not use \"select ... 
for update\" on these tables, as they're \nnearly static and only changed during maintaince, where I'm the only user and \nnothing bad may happen)\n\nI still think that this behaviour is wrong, I asked my collegue to check what \noracle does in this case, it seems that oracle simply makes some sort of \n\"read lock\" on the referenced tables, but no such strong lock as in postgres.\n\n\nBest regards,\n\tMario Weilguni\n\n", "msg_date": "Thu, 11 Apr 2002 16:53:13 +0200", "msg_from": "Mario Weilguni <mario.weilguni@icomedias.com>", "msg_from_op": true, "msg_subject": "Deadlock situation using foreign keys (reproduceable)" }, { "msg_contents": "\nOn Thu, 11 Apr 2002, Mario Weilguni wrote:\n\n> As promised here's an example of deadlock using foreign keys.\n>\n> create table lang (\n> id integer not null primary key,\n> name text\n> );\n> insert into lang values (1, 'English');\n> insert into lang values (2, 'German');\n>\n> create table country (\n> id integer not null primary key,\n> name text\n> );\n> insert into country values (10, 'USA');\n> insert into country values (11, 'Austria');\n>\n> create table entry (\n> id integer not null primary key,\n> lang_id integer not null references lang(id),\n> country integer not null references country(id),\n> txt text\n> );\n> insert into entry values (100, 1, 10, 'Entry 1');\n> insert into entry values (101, 2, 11, 'Entry 2');\n> insert into entry values (102, 1, 11, 'Entry 3');\n>\n> transaction A:begin;\n> transaction A:update entry set txt='Entry 1.1' where id=100;\n> transaction B:begin;\n> transaction B:update entry set txt='Entry 3.1' where id=102;\n> transaction A:update entry set txt='Entry 2.1' where id=101;\n> transaction A:deadlock detected\n\nPlease see past disussions on the fact that the lock grabbed is too\nstrong. I'm going to (when I get time to work on it) try out a lower\nstrength lock that Alex Hayward made a patch for that should limit/prevent\nthese cases. 
Thanks for sending a nice simple test case to try against :)\n\n\n\n", "msg_date": "Thu, 11 Apr 2002 09:03:06 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Deadlock situation using foreign keys (reproduceable)" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: PJourdan [mailto:info@lespetitsplaisirs.com] \n> Sent: 11 April 2002 17:24\n> To: pgadmin-support@postgresql.org\n> Subject: [pgadmin-support] migration problem\n> \n> \n> I know this is not strictly a pgadmin issue, but I don't know \n> where else to \n> turn.\n> At worst, perhaps someone can steer me in the right \n> direction. :)) Dave seems terribly knowledgeable on pgsql, so I \n> thought I'd risk the question: \n\nThanks for the vote of confidence, but this is not really within my field of\nknowledge. I've CC'd this to pgsql-hackers@postgresql.org - you are more\nlikely to get good help from there.\n\nRegards, Dave.\n\n> I am trying to restore a \n> database from a gzipped file: I believe that \n> backups were done as complete files (not partial) under \n> Postgresql 7.0.3. \n> Pg_restore does not recognize the ungzipped file \"filename.psql\". The \n> command, psql -d database -f filename.psql, restores it \n> partially, but with \n> numerous errors and the database is mostly empty. As I \n> understand, this \n> command restores the file to an existing database, so I had \n> to create one \n> with the original filename. But I don't know if the newly \n> created database \n> must have the exact same permissions, ownership, etc. as the \n> original. I am told to install the earlier version of \n> Postgresql to restore, but that \n> does not work - cannot configure it. Even if that works, how can the \n> restored database be migrated to a newer version of \n> Postgresql? Does anybody out there know about this kind of \n> thing? Thanks for any help. P. Jourdan\n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to \n> majordomo@postgresql.org\n> \n", "msg_date": "Thu, 11 Apr 2002 20:29:55 +0100", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: migration problem" } ]
[ { "msg_contents": "Just to let you know that you can use the transfer tool of the hsqldb JAVA database to transfer a database from Informix to Postgres. I have used it to transfer my application from IDS 7.3 to Postgres 7.2.1. Because it relies on JDBC, it can be used/extended to support other databases.\n\nIt supports:\n- table structure creation,\n- default field values,\n- index creation (unique or not),\n- data transfer,\n- foreign key creation,\n- schemas (to read the source),\n- autoincrement fields.\n\nAs it's based on JDBC, it cannot extract VIEWs or stored procedures. User privileges are not created either, even though they are available from the JDBC. \nA further extension would then be:\nsupport of GRANT and also the possibility to transfer to a specific schema.\n\nYou can download it at http://prdownloads.sourceforge.net/hsqldb/hsqldb_1_7_0_RC4.zip\nThe transfer tool is independent of the database. It's in the src/org/hsqldb/util directory; you also need the src/org/hsqldb/lib directory.", "msg_date": "Fri, 12 Apr 2002 10:31:16 +1000", "msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>", "msg_from_op": true, "msg_subject": "Informix to PostgreSQL (JDBC)" } ]
[ { "msg_contents": "Dear all,\n\nHere are some issues:\n\n1) PostgreSQL installer (wizard)\n\nhttp://www.networksimplicity.com/openssh has released an OpenSSH minimal \nCygwin installer. They have had 300,000 tracked downloads so far. If true, \nthis number is greater than the downloads of Cygwin itself.\n\nThe installer is incompatible with Cygwin (as it installs a minimal Cygwin \nlayer). They also provide a minimal configuration GUI. Is this the solution?\n\nI got in contact with the installer maintainer to ask the pros and cons of such a \nsolution vs. a pure Cygwin layer.\n\nFor the moment, I still agree the solution is to create a wizard in pgAdmin2.\n\n2) My current work is stopped\n\nFor the moment, I am too busy to work on pgAdmin2. This might last 1 or 2 \nweeks. Here is my personal to-do list. Feel free to pick up any issue:\n- Handle two separate connections (one for schema, another for data queries). \nThe user should be able to define client_encoding for each connection.\n- Work on the PostgreSQL wizard.\n- Refine the pseudo-alter trigger code (it was removed because unfinished).\n- Add pseudo ALTER DROP COLUMN (this should not be difficult using pgSchema).\n\nI will be back soon with plenty of time.\n\n3) KDE3 is marvelous\nI now use KDE3, which is a perfect environment. I tried writing some code in \nKDevelop. It seems possible to write code fast... Therefore I am still asking \nmyself ***why*** we should continue developing pgAdmin on Windows...\n\nIn my humble opinion, KDE needs a real database abstraction layer (like \npgSchema) with a multi-vendor interface. The only solution today is Gnome \nlibgda. Unfortunately, libgda is not well-written. A good abstraction layer \nneeds inheritance (C++, not C) and XML to handle specific features of each \ndatabase provider. \n\nSecondly, KDE3, Konqueror and KDevelop would probably welcome a pgAdmin port. \nThis would immediately give us hundreds of thousands of users. 
So why bother with \nWindows?\n\nThirdly, qt3 applications can be compiled under Windows. And KDE3 is being \nported to Windows using Cygwin. You can be sure it is not a year before we \ncan use KDE3 under Windows and MacOS X.\n\nLast of all, Microsoft is like a disease: if we do not fight them now, it will \ncontinue growing. So why work for them under Windows?\n\nCheers,\nJean-Michel\n\n", "msg_date": "Fri, 12 Apr 2002 09:25:46 +0200", "msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>", "msg_from_op": true, "msg_subject": "Various issues" }, { "msg_contents": "> In my humble opinion, KDE needs a real database abstraction layer (like\n> pgSchema) with a multi-vendor interface. The only solution today is Gnome\n> libgda. Unfortunately, libgda is not well-written. A good\n> abstraction layer\n> needs inheritance (C++, not C) and XML to handle specific features of each\n> database provider.\n\nNot quite true. QT3, which KDE3 is based on, has full database support. It\nsupports MySQL, Postgres and ODBC, I think.\n\nAs for KDE DB admin software:\n\napps.kde.com, search for 'postgres'. There are at least 4 frontends already\nin progress...\n\nChris\n\n", "msg_date": "Fri, 12 Apr 2002 15:55:39 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Various issues" }, { "msg_contents": "Jean-Michel POURE wrote:\n> \n> Dear all,\n> \n> Here are some issues:\n> \n \n> 3) KDE3 is marvelous\n> I now use KDE3, which is a perfect environment. I tried writing some code in\n> KDevelop. It seems possible to write code fast... Therefore I am still asking\n> myself ***why*** we should continue developing pgAdmin on Windows...\n> \n> In my humble opinion, KDE needs a real database abstraction layer (like\n> pgSchema) with a multi-vendor interface. The only solution today is Gnome\n> libgda. Unfortunately, libgda is not well-written. 
A good abstraction layer\n> needs inheritance (C++, not C) and XML to handle specific features of each\n> database provider.\n\nJust as a side note... have you checked hk_classes \n(http://hk-classes.sourceforge.net/)?\nMoreover, the latest TOra version seems to support PostgreSQL through QT\nSQL plugins, at least to some extent...\nI would really like to see a complete PostgreSQL admin interface\nin KDE... :-)\nBest regards\nAndrea Aime\n", "msg_date": "Fri, 12 Apr 2002 09:57:38 +0200", "msg_from": "\"Andrea Aime\" <aaime@comune.modena.it>", "msg_from_op": false, "msg_subject": "Re: Various issues" } ]
[ { "msg_contents": "I was just being asked if we are working on deletion of attributes in an\nexisting table. Something like ALTER TABLE foo DROP COLUMN bar. Is\nanyone working on this, or are there design problems with it?\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Fri, 12 Apr 2002 10:14:50 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "deletion of attributes" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 12 April 2002 03:54\n> To: Bruce Momjian\n> Cc: Hiroshi Inoue; Christopher Kings-Lynne; \n> pgsql-hackers@postgresql.org\n> Subject: Re: RFC: Restructuring pg_aggregate \n> \n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I think that is why Tom was suggesting making all the column values \n> > NULL and removing the pg_attribute row for the column.\n> \n> That was not my suggestion.\n> \n> > With a NULL value, it\n> > doesn't take up any room in the tuple, and with the pg_attribute \n> > column gone, no one will see that row. The only problem is \n> the gap in \n> > attno numbering. How big a problem is that?\n> \n> You can't do it that way unless you're intending to rewrite \n> all rows of the relation before committing the ALTER; which \n> would be the worst of both worlds. The pg_attribute row \n> *must* be retained to show the datatype of the former column, \n> so that we can correctly skip over it in tuples where the \n> column isn't yet nulled out.\n> \n> Hiroshi did this by renumbering the attnum; I propose leaving \n> attnum alone and instead adding an attisdropped flag. That \n> would avoid creating a gap in the column numbers, but either \n> way is likely to affect some applications that inspect pg_attribute.\n\nApplications like pgAdmin that inspect pg_attribute are being seriously\nhacked to incorporate schema support anyway for 7.3. Personally I'd be glad\nto spend some time re-coding to allow for this, just to not have to answer\nthe numerous 'how do I drop a column' emails I get regularly.\n\nRegards, Dave.\n", "msg_date": "Fri, 12 Apr 2002 09:35:52 +0100", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: RFC: Restructuring pg_aggregate " } ]
[ { "msg_contents": "\nWhile I like the optimisation, the SQL syntax seems pretty horrible.\nCould it not be done without changing the syntax at all, except to\nchange slightly how one defines a column? Given something like\n\n\tCREATE TABLE item_name (\n\t\titem_id\t\tINT PRIMARY KEY,\n\t\titem_name\tVARCHAR(255)\n\t\t)\n\tCREATE TABLE item_set (\n\t\titem_set_id\tINT PRIMARY KEY,\n\t\titem_id\t\tINT REFERENCES item_name (item_id)\n\t\t\tON UPDATE CASCADE ON DELETE CASCADE\n\t\t)\n\nit seems to me that it would be possible for the database to\ntransparently implement this using the optimisation described.\nGiven that, maybe one could just add another keyword to the REFERENCES\nstatement that would actually do the reference with a \"pointer\"?\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Fri, 12 Apr 2002 17:59:16 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": true, "msg_subject": "Re: Bidirectional hard joins (fwd)" } ]
[ { "msg_contents": "Please disregard my last message which was intended for \npgadmin-hackers@postgresql.org.\n\nJean-Michel POURE\n", "msg_date": "Fri, 12 Apr 2002 11:12:28 +0200", "msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>", "msg_from_op": true, "msg_subject": "Disregard my last message" } ]
[ { "msg_contents": "pg_dumping a table having a primary key yields commands like\n\n--\n-- TOC Entry ID 2 (OID 139812)\n--\n-- Name: table1 Type: TABLE Owner: postgres\n--\n\nCREATE TABLE \"table1\" (\n\t\"column10\" character varying(255) NOT NULL,\n\t\"column1\" character varying(255) NOT NULL,\n\t\"column2\" smallint NOT NULL,\n\t\"column6\" numeric,\n\t\"column7\" \"char\",\n\tConstraint \"table1_pkey\" Primary Key (\"column10\", \"column1\", \"column2\")\n);\n\n[snip]\n\n--\n-- TOC Entry ID 5 (OID 139817)\n--\n-- Name: \"table1_pkey\" Type: CONSTRAINT Owner: postgres\n--\n\nAlter Table \"table1\" Add Constraint \"table1_pkey\" Primary Key (\"column10\", \"column1\", \"column2\");\n\n\nwhich on execution quite properly complains about duplicate primary\nkeys.\n\nI assume this is traceable to this patch:\n\n2002-03-06 15:48 momjian\n\n\t* src/bin/pg_dump/pg_dump.c: Enable ALTER TABLE ADD PRIMARY KEY for\n\tpg_dump, for performance reasons so index is not on table during\n\tCOPY.\n\t\n\t> > AFAICT, the patch I posted to -patches a little while to enable\n\tthe\n\t> > usage of ALTER TABLE ADD PRIMARY KEY by pg_dump hasn't been\n\tapplied, nor\n\t> > is it in the unapplied patches list. 
I was under the impression\n\tthat\n\t> > this was in the queue for application -- did it just get lost?\n\t\n\tNeil Conway <neilconway@rogers.com>\n\n\nIt would seem that more thought is needed here.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Apr 2002 13:28:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "pg_dump is broken in CVS tip" }, { "msg_contents": "On Fri, 12 Apr 2002 13:28:34 -0400\n\"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\n> pg_dumping a table having a primary key yields commands like\n> \n> --\n> -- TOC Entry ID 2 (OID 139812)\n> --\n> -- Name: table1 Type: TABLE Owner: postgres\n> --\n> \n> CREATE TABLE \"table1\" (\n> \t\"column10\" character varying(255) NOT NULL,\n> \t\"column1\" character varying(255) NOT NULL,\n> \t\"column2\" smallint NOT NULL,\n> \t\"column6\" numeric,\n> \t\"column7\" \"char\",\n> \tConstraint \"table1_pkey\" Primary Key (\"column10\", \"column1\", \"column2\")\n> );\n> \n> [snip]\n> \n> --\n> -- TOC Entry ID 5 (OID 139817)\n> --\n> -- Name: \"table1_pkey\" Type: CONSTRAINT Owner: postgres\n> --\n> \n> Alter Table \"table1\" Add Constraint \"table1_pkey\" Primary Key (\"column10\", \"column1\", \"column2\");\n> \n> which on execution quite properly complains about duplicate primary\n> keys.\n\nThanks for finding this Tom -- my apologies, this is likely my bug.\n\nHowever, when I created a table using the commands above and then\ndumped it again, I got a dump that worked properly: there was no\nConstraint within the table definition itself, just an ALTER\nTABLE at the end of the dump to add the PK (i.e. 
the patch worked\nas intended and the table could be restored properly).\n\nIf you can give me a reproduceable test-case, I'll fix the bug.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Fri, 12 Apr 2002 18:44:17 -0400", "msg_from": "Neil Conway <neilconway@rogers.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken in CVS tip" }, { "msg_contents": "Neil Conway <neilconway@rogers.com> writes:\n> However, when I created a table using the commands above and then\n> dumped it again, I got a dump that worked properly:\n> ...\n> If you can give me a reproduceable test-case, I'll fix the bug.\n\nSigh ... I should take my own advice about checking that I've described\na problem completely :-(. It looks like you also need a foreign-key\nreference to the table. This will generate the problem:\n\n\tcreate table t1 (f1 int primary key);\n\n\tcreate table t2 (f1 int references t1);\n\nThe dump of t1 will now read\n\nCREATE TABLE \"t1\" (\n \"f1\" integer NOT NULL,\n Constraint \"t1_pkey\" Primary Key (\"f1\")\n);\n\nSorry for the inadequate report.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Apr 2002 18:51:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump is broken in CVS tip " }, { "msg_contents": "I said:\n> This will generate the problem:\n\n> \tcreate table t1 (f1 int primary key);\n\n> \tcreate table t2 (f1 int references t1);\n\nActually, I find that I get the double declaration of t1_pkey even\nwithout t2. Either we're not using quite the same sources, or the\nproblem is platform-dependent. 
I can dig into it if you cannot\nreproduce it ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Apr 2002 18:59:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump is broken in CVS tip " }, { "msg_contents": "On Fri, 12 Apr 2002 18:59:33 -0400\n\"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\n> I said:\n> > This will generate the problem:\n> \n> > \tcreate table t1 (f1 int primary key);\n> \n> > \tcreate table t2 (f1 int references t1);\n> \n> Actually, I find that I get the double declaration of t1_pkey even\n> without t2. Either we're not using quite the same sources, or the\n> problem is platform-dependent. I can dig into it if you cannot\n> reproduce it ...\n\nCurious -- I was previously using ~1 week old sources, and I was\nunable to reproduce the problem (using either the original\ntest-case or the one provided above: neither has any problems).\nWhen I built the current CVS code, both test-case exhibits the\nproblem quite obviously. Therefore, it seems that the problem\nhas been introduced recently.\n\nI'll investigate...\n\nCheers,\n\nNeil\n\nP.S. Tom, would you mind adding my IP to your spam whitelist?\nYour spam-blocking software rejects my emails.\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Fri, 12 Apr 2002 19:24:21 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken in CVS tip" }, { "msg_contents": "On Fri, 12 Apr 2002 19:24:21 -0400\n\"Neil Conway\" <nconway@klamath.dyndns.org> wrote:\n> When I built the current CVS code, both test-case exhibits the\n> problem quite obviously. 
Therefore, it seems that the problem\n> has been introduced recently.\n\nThe problem was introduced here:\n\n------------------\nrevision 1.246\ndate: 2002/04/05 11:51:12; author: momjian; state: Exp; lines: +129 -3\nAdds domain dumping support to pg_dump.\n\nRod Taylor\n------------------\n\nRod's patch does what it is supposed to do, but it also includes\nsome old code to add PK constraints to CREATE TABLE. That stuff\nhad been removed as part of my original patch for pg_dump a\nlittle while ago.\n\nThe attached patch fixes this by removing (again :-) ) the\ncode in dumpTables() to perform PK creation during CREATE\nTABLE. I briefly tested it locally and it fixes both of\nTom's test cases.\n\nPlease apply.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC", "msg_date": "Fri, 12 Apr 2002 21:28:48 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken in CVS tip" }, { "msg_contents": "\nI will apply shortly because pg_dump is broken. I will give it 8 hours.\n\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nNeil Conway wrote:\n> On Fri, 12 Apr 2002 19:24:21 -0400\n> \"Neil Conway\" <nconway@klamath.dyndns.org> wrote:\n> > When I built the current CVS code, both test-case exhibits the\n> > problem quite obviously. Therefore, it seems that the problem\n> > has been introduced recently.\n> \n> The problem was introduced here:\n> \n> ------------------\n> revision 1.246\n> date: 2002/04/05 11:51:12; author: momjian; state: Exp; lines: +129 -3\n> Adds domain dumping support to pg_dump.\n> \n> Rod Taylor\n> ------------------\n> \n> Rod's patch does what it is supposed to do, but it also includes\n> some old code to add PK constraints to CREATE TABLE. 
That stuff\n> had been removed as part of my original patch for pg_dump a\n> little while ago.\n> \n> The attached patch fixes this by removing (again :-) ) the\n> code in dumpTables() to perform PK creation during CREATE\n> TABLE. I briefly tested it locally and it fixes both of\n> Tom's test cases.\n> \n> Please apply.\n> \n> Cheers,\n> \n> Neil\n> \n> -- \n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Apr 2002 21:40:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken in CVS tip" }, { "msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> Curious -- I was previously using ~1 week old sources, and I was\n> unable to reproduce the problem (using either the original\n> test-case or the one provided above: neither has any problems).\n> When I built the current CVS code, both test-case exhibits the\n> problem quite obviously. Therefore, it seems that the problem\n> has been introduced recently.\n\n[ scratches head ... ] I think most of the major recent changes\nhave been schema related, so this is quite likely my fault. Let\nme know if you need help debugging it.\n\n> P.S. Tom, would you mind adding my IP to your spam whitelist?\n> Your spam-blocking software rejects my emails.\n\n24.102.202.* whitelisted; let me know if that's not the correct\nIP range for you. 
(Although you might think I'm blocking most\nof the net, I still get a depressingly large amount of spam ---\nand that's not even counting the Klez virus that seems to be\ndeliberately targeting my jpeg-info alter ego ... grumble ...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Apr 2002 23:35:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump is broken in CVS tip " }, { "msg_contents": "Tom Lane wrote:\n> Neil Conway <nconway@klamath.dyndns.org> writes:\n> > Curious -- I was previously using ~1 week old sources, and I was\n> > unable to reproduce the problem (using either the original\n> > test-case or the one provided above: neither has any problems).\n> > When I built the current CVS code, both test-case exhibits the\n> > problem quite obviously. Therefore, it seems that the problem\n> > has been introduced recently.\n> \n> [ scratches head ... ] I think most of the major recent changes\n> have been schema related, so this is quite likely my fault. Let\n> me know if you need help debugging it.\n\nTom, did you see his patch posted a few hours ago. Did that fix it? I\ncan apply.\n\n> \n> > P.S. Tom, would you mind adding my IP to your spam whitelist?\n> > Your spam-blocking software rejects my emails.\n> \n> 24.102.202.* whitelisted; let me know if that's not the correct\n> IP range for you. (Although you might think I'm blocking most\n> of the net, I still get a depressingly large amount of spam ---\n> and that's not even counting the Klez virus that seems to be\n> deliberately targeting my jpeg-info alter ego ... grumble ...)\n\nHave you read my spam blocking article and tools:\n\n\thttp://candle.pha.pa.us/main/writings/spam\n\nIt blocks ~70-80% with no false blocks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Apr 2002 23:39:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken in CVS tip" }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\n\nNeil Conway wrote:\n> On Fri, 12 Apr 2002 19:24:21 -0400\n> \"Neil Conway\" <nconway@klamath.dyndns.org> wrote:\n> > When I built the current CVS code, both test-case exhibits the\n> > problem quite obviously. Therefore, it seems that the problem\n> > has been introduced recently.\n> \n> The problem was introduced here:\n> \n> ------------------\n> revision 1.246\n> date: 2002/04/05 11:51:12; author: momjian; state: Exp; lines: +129 -3\n> Adds domain dumping support to pg_dump.\n> \n> Rod Taylor\n> ------------------\n> \n> Rod's patch does what it is supposed to do, but it also includes\n> some old code to add PK constraints to CREATE TABLE. That stuff\n> had been removed as part of my original patch for pg_dump a\n> little while ago.\n> \n> The attached patch fixes this by removing (again :-) ) the\n> code in dumpTables() to perform PK creation during CREATE\n> TABLE. I briefly tested it locally and it fixes both of\n> Tom's test cases.\n> \n> Please apply.\n> \n> Cheers,\n> \n> Neil\n> \n> -- \n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 13 Apr 2002 15:57:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken in CVS tip" } ]
[ { "msg_contents": "-----Original Message-----\nFrom: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\nSent: Friday, April 12, 2002 2:38 PM\nTo: Tom Lane\nCc: Neil Conway; zakkr@zf.jcu.cz; pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] 7.3 schedule\n\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Oh, are you thinking that one backend would do the PREPARE and\nanother\n> > one the EXECUTE? I can't see that working at all.\n> \n> Uh, why exactly were you advocating a shared cache then? Wouldn't\nthat\n> be exactly the *point* of a shared cache?\n\nI thought it would somehow compare the SQL query string to the cached\nplans and if it matched, it would use that plan rather than make a new\none. Any DDL statement would flush the cache.\n>>-------------------------------------------------------------------\nMany applications will have similar queries coming from lots of\ndifferent end-users. Imagine an order-entry program where people are\nordering parts. Many of the queries might look like this:\n\nSELECT part_number FROM parts WHERE part_id = 12324 AND part_cost\n< 12.95\n\nIn order to cache this query, we first parse it to replace the data\nfields with paramter markers.\nThen it looks like this:\nSELECT part_number FROM parts WHERE part_id = ? AND part_cost < ?\n{in the case of a 'LIKE' query or some other query where you can use\nkey information, you might have a symbolic replacement like this:\nWHERE field LIKE '{D}%' to indicate that the key can be used}\nThen, we make sure that the case is consistent by either capitalizing\nthe whole query or changing it all into lower case:\nselect part_number from parts where part_id = ? 
and part_cost < ?\nThen, we run a checksum on the parameterized string.\nThe checksum might be used as a hash table key, where we keep some\nadditional information like how stale the entry is, and a pointer to\nthe actual parameterized SQL (in case the hash key has a collision\nit would be simply wrong to run an incorrect query for obvious enough\nreasons).\nNow, if there are a huge number of users of the same application, it \nmakes sense that the probabilities of reusing queries goes up with\nthe number of users of the same application. Therefore, I would \nadvocate that the cache be kept in shared memory.\n\nConsider a single application with 100 different queries. Now, add\none user, ten users, 100 users, ... 10,000 users and you can see\nthat the benefit would be greater and greater as we add users.\n<<-------------------------------------------------------------------\n", "msg_date": "Fri, 12 Apr 2002 14:59:15 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: 7.3 schedule" } ]
[ { "msg_contents": "Hi all,\n\nI've attached an updated version of Karel Zak's pg_qcache patch, which\nadds PREPARE/EXECUTE support to PostgreSQL (allowing prepared SQL\nstatements). It should apply cleanly against CVS HEAD, and compile\nproperly -- beyond that, cross your fingers :-)\n\nPlease take a look at the code, play around with using PREPARE and\nEXECUTE, etc. Let me know if you have any suggestions for improvement\nor if you run into any problems -- I've probably introduced some\nregressions when I ported the code from 7.0 to current sources.\n\nBTW, if you run the regression tests, I'd expect (only) the \"prepare\"\ntest to fail: I've only written partial regression tests so far. If\nany other tests fail, please let me know.\n\nThe basic syntax looks like:\n\nPREPARE <plan_name> AS <query>;\nEXECUTE <plan_name> USING <parameters>;\nDEALLOCATE PREPARE <plan_name>;\n\nTo get a look at what's being stored in the cache, try:\n\nSELECT qcache_state();\n\nFor more information on the qCache code, see the README that\nKarel posted to the list a few days ago.\n\nThere are still lots of things that need to be improved. 
Here's\na short list: (the first 3 items are the most important, any help\non those would be much appreciated)\n\n(1) It has a tendancy to core-dump when executing stored queries,\nparticularly if the EXECUTE has an INTO clause -- it will work\nthe first time, but subsequent attempts will either dump core or\nclaim that they can't find the plan in the cache.\n\n(2) Sometimes executing a PREPARE gives this warning:\n\nnconway=> prepare q1 as select * from pg_class;\nWARNING: AllocSetFree: detected write past chunk end in TransactionCommandContext 0x83087ac\nPREPARE\n\nDoes anyone know what problem this indicates?\n\n(3) Preparing queries with parameters doesn't work:\n\nnconway=> PREPARE sel USING text AS SELECT * FROM pg_class WHERE relname ~~ $1;\nERROR: Parameter '$1' is out of range\n\n(4) Add a mechanism for determining if there is already a\ncached plan with a given name.\n\n(5) Finish regression tests\n\n(6) Clean up some debugging messages, correct Karel's English,\ncode cleanup, etc.\n\n(7) IMHO, the number of qcache buffers should be configurable\nin postgresql.conf, not as a command-line switch.\n\n(8) See if the syntax can be adjusted to be more compatible\nwith the SQL92 syntax. 
Also, some of the current syntax is\nugly, in order to make parsing easier.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC", "msg_date": "Sat, 13 Apr 2002 18:47:32 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": true, "msg_subject": "experimental pg_qcache patch" }, { "msg_contents": "Does it cache all queries or just explicitly prepared ones?\n\nDoes is check for cached queries all the time or just explicitly EXECUTED\nones?\n\nChris\n\n----- Original Message -----\nFrom: \"Neil Conway\" <nconway@klamath.dyndns.org>\nTo: \"PostgreSQL Hackers\" <pgsql-hackers@postgresql.org>\nSent: Sunday, April 14, 2002 6:47 AM\nSubject: [HACKERS] experimental pg_qcache patch\n\n\n> Hi all,\n>\n> I've attached an updated version of Karel Zak's pg_qcache patch, which\n> adds PREPARE/EXECUTE support to PostgreSQL (allowing prepared SQL\n> statements). It should apply cleanly against CVS HEAD, and compile\n> properly -- beyond that, cross your fingers :-)\n>\n> Please take a look at the code, play around with using PREPARE and\n> EXECUTE, etc. Let me know if you have any suggestions for improvement\n> or if you run into any problems -- I've probably introduced some\n> regressions when I ported the code from 7.0 to current sources.\n>\n> BTW, if you run the regression tests, I'd expect (only) the \"prepare\"\n> test to fail: I've only written partial regression tests so far. If\n> any other tests fail, please let me know.\n>\n> The basic syntax looks like:\n>\n> PREPARE <plan_name> AS <query>;\n> EXECUTE <plan_name> USING <parameters>;\n> DEALLOCATE PREPARE <plan_name>;\n>\n> To get a look at what's being stored in the cache, try:\n>\n> SELECT qcache_state();\n>\n> For more information on the qCache code, see the README that\n> Karel posted to the list a few days ago.\n>\n> There are still lots of things that need to be improved. 
Here's\n> a short list: (the first 3 items are the most important, any help\n> on those would be much appreciated)\n>\n> (1) It has a tendancy to core-dump when executing stored queries,\n> particularly if the EXECUTE has an INTO clause -- it will work\n> the first time, but subsequent attempts will either dump core or\n> claim that they can't find the plan in the cache.\n>\n> (2) Sometimes executing a PREPARE gives this warning:\n>\n> nconway=> prepare q1 as select * from pg_class;\n> WARNING: AllocSetFree: detected write past chunk end in\nTransactionCommandContext 0x83087ac\n> PREPARE\n>\n> Does anyone know what problem this indicates?\n>\n> (3) Preparing queries with parameters doesn't work:\n>\n> nconway=> PREPARE sel USING text AS SELECT * FROM pg_class WHERE relname\n~~ $1;\n> ERROR: Parameter '$1' is out of range\n>\n> (4) Add a mechanism for determining if there is already a\n> cached plan with a given name.\n>\n> (5) Finish regression tests\n>\n> (6) Clean up some debugging messages, correct Karel's English,\n> code cleanup, etc.\n>\n> (7) IMHO, the number of qcache buffers should be configurable\n> in postgresql.conf, not as a command-line switch.\n>\n> (8) See if the syntax can be adjusted to be more compatible\n> with the SQL92 syntax. 
Also, some of the current syntax is\n> ugly, in order to make parsing easier.\n>\n> Cheers,\n>\n> Neil\n>\n> --\n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n>\n\n\n----------------------------------------------------------------------------\n----\n\n\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n", "msg_date": "Sun, 14 Apr 2002 12:11:22 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: experimental pg_qcache patch" }, { "msg_contents": "On Sun, 14 Apr 2002 12:11:31 +0800\n\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> wrote:\n> Does it cache all queries or just explicitly prepared ones?\n\nJust explicitly prepared ones. Caching all queries opens a can of\nworms that I'd rather not deal with at the moment (volunteers to\ntackle this problem are welcome).\n> \n> Does is check for cached queries all the time or just explicitly EXECUTED\n> ones?\n\nA cached query plan is only used for EXECUTE queries -- it is\nnot used all the time. My gut feeling WRT to caching everything\nis similar to my response to your first question.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Sun, 14 Apr 2002 00:36:38 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": true, "msg_subject": "Re: experimental pg_qcache patch" }, { "msg_contents": "> Just explicitly prepared ones. Caching all queries opens a can of\n> worms that I'd rather not deal with at the moment (volunteers to\n> tackle this problem are welcome).\n\nI definitely agree. I think that the optimisation possiblities offered to\nthe DBA for shared prepared statements are quite large enough to offer\nexciting possibilities.
Also, it will minimise the locking contentions Tom\nspeaks of.\n\n> > Does is check for cached queries all the time or just explicitly\nEXECUTED\n> > ones?\n>\n> A cached query plan is only used for EXECUTE queries -- it is\n> not used all the time. My gut feeling WRT to caching everything\n> is similar to my response to your first question.\n\nIt'll be interesting to have VIEWs automatically prepared and executed from\nthe cache...\n\nChris\n\n\n", "msg_date": "Sun, 14 Apr 2002 13:04:50 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: experimental pg_qcache patch" }, { "msg_contents": "Neil Conway wrote:\n> Hi all,\n> \n> I've attached an updated version of Karel Zak's pg_qcache patch, which\n> adds PREPARE/EXECUTE support to PostgreSQL (allowing prepared SQL\n> statements). It should apply cleanly against CVS HEAD, and compile\n> properly -- beyond that, cross your fingers :-)\n\nI want to say I am really excited about this patch. It illustrates the\ntype of major features that will appear in the coming months. In the\npast few releases, I don't think we had enough development time for our\nnew people to get up to speed. By the time they were ready to tackle\nmajor features, we were wrapping up development (or we thought we were\nand were discouraging new feature additions).\n\nWith our beta target now out at September, I am sure we will have an\nexciting summer of major feature additions that will significancy pair\ndown the TODO list and give users features they have been waiting for\nfor years.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Apr 2002 01:20:38 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: experimental pg_qcache patch" }, { "msg_contents": "On Sat, Apr 13, 2002 at 06:47:32PM -0400, Neil Conway wrote:\n> \n> I've attached an updated version of Karel Zak's pg_qcache patch, which\n> adds PREPARE/EXECUTE support to PostgreSQL (allowing prepared SQL\n> statements). It should apply cleanly against CVS HEAD, and compile\n> properly -- beyond that, cross your fingers :-)\n \n I will try it during this week.\n\n> Please take a look at the code, play around with using PREPARE and\n> EXECUTE, etc. Let me know if you have any suggestions for improvement\n\n Is needful use shared cache? This is right and cardinal question.\n (Is pre-forked backends expected in next release?) \n\n> or if you run into any problems -- I've probably introduced some\n> regressions when I ported the code from 7.0 to current sources.\n> \n> BTW, if you run the regression tests, I'd expect (only) the \"prepare\"\n> test to fail: I've only written partial regression tests so far. If\n> any other tests fail, please let me know.\n> \n> The basic syntax looks like:\n> \n> PREPARE <plan_name> AS <query>;\n> EXECUTE <plan_name> USING <parameters>;\n> DEALLOCATE PREPARE <plan_name>;\n> \n> To get a look at what's being stored in the cache, try:\n> \n> SELECT qcache_state();\n> \n> For more information on the qCache code, see the README that\n> Karel posted to the list a few days ago.\n> \n> There are still lots of things that need to be improved. 
Here's\n> a short list: (the first 3 items are the most important, any help\n> on those would be much appreciated)\n> \n> (1) It has a tendancy to core-dump when executing stored queries,\n> particularly if the EXECUTE has an INTO clause -- it will work\n> the first time, but subsequent attempts will either dump core or\n> claim that they can't find the plan in the cache.\n\n I don't know this bug :-)\n\n> (2) Sometimes executing a PREPARE gives this warning:\n> \n> nconway=> prepare q1 as select * from pg_class;\n> WARNING: AllocSetFree: detected write past chunk end in TransactionCommandContext 0x83087ac\n> PREPARE\n> \n> Does anyone know what problem this indicates?\n\n The memory managment is diffrent between 7.0 and 7.2. There is\n needful port cache shared-memory managment. I will look at it.\n\n> (3) Preparing queries with parameters doesn't work:\n> \n> nconway=> PREPARE sel USING text AS SELECT * FROM pg_class WHERE relname ~~ $1;\n> ERROR: Parameter '$1' is out of range\n\n My original syntax was:\n\n PREPARE sel AS SELECT * FROM pg_class WHERE relname ~~ $1 USING text;\n\n ... USING is behind query.\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Sun, 14 Apr 2002 22:13:17 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: experimental pg_qcache patch" }, { "msg_contents": "On Sun, Apr 14, 2002 at 10:13:17PM +0200, Karel Zak wrote:\n \n> > (2) Sometimes executing a PREPARE gives this warning:\n> > \n> > nconway=> prepare q1 as select * from pg_class;\n> > WARNING: AllocSetFree: detected write past chunk end in TransactionCommandContext 0x83087ac\n> > PREPARE\n> > \n> > Does anyone know what problem this indicates?\n> \n> The memory managment is diffrent between 7.0 and 7.2. There is\n> needful port cache shared-memory managment. 
I will look at it.\n\n Hmm, I probably found it be first look to patch file.\n\n The WARNING message is from leak detection. I'm sure that you see\n this message if you use SHARE cache type.\n\n - PREPARE_KEY_PREFIX_SIZE is 4 not 3\n\n - in the PrepareKey() is needful fix:\n\n\n+ if (store == PREPARE_STORE_SHARE) { /* shared between same DB */\n+ *flag |= QCF_SHARE_NOTREMOVEABLE;\n+ key = (char *) palloc(strlen(name) + PREPARE_KEY_PREFIX_SIZE\n+ + strlen(DatabaseName) +1);\n ^^^^^^^\n must be 3 \n\n+ sprintf(key, \"%s_%s_\", DatabaseName, PREPARE_KEY_PREFIX);\n ^^^^^^\n the space for '_' is not allocated :-(\n\n It's my bug probably, I good knew why we need leak detection :-)\n\n Karel\n\n PS. Sorry that I don't send a patch, but now I haven't my computer there. \n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Sun, 14 Apr 2002 22:39:32 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: experimental pg_qcache patch" }, { "msg_contents": "On Sun, 14 Apr 2002 22:39:32 +0200\n\"Karel Zak\" <zakkr@zf.jcu.cz> wrote:\n> - PREPARE_KEY_PREFIX_SIZE is 4 not 3\n> \n> - in the PrepareKey() is needful fix:\n> \n> \n> + if (store == PREPARE_STORE_SHARE) { /* shared between same DB */\n> + *flag |= QCF_SHARE_NOTREMOVEABLE;\n> + key = (char *) palloc(strlen(name) + PREPARE_KEY_PREFIX_SIZE\n> + + strlen(DatabaseName) +1);\n> ^^^^^^^\n> must be 3 \n> \n> + sprintf(key, \"%s_%s_\", DatabaseName, PREPARE_KEY_PREFIX);\n> ^^^^^^\n> the space for '_' is not allocated :-(\n> \n> It's my bug probably, I good knew why we need leak detection :-)\n\nThanks Karel! 
I made the changes you suggest and the warning (and\nthe accompanying memory leak) have gone away.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Sun, 14 Apr 2002 17:41:24 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": true, "msg_subject": "Re: experimental pg_qcache patch" }, { "msg_contents": "> I've attached an updated version of Karel Zak's pg_qcache patch, which\n> adds PREPARE/EXECUTE support to PostgreSQL (allowing prepared SQL\n> statements). \n\nWoah :-))\n\nThanks Neil! You may be remind of a thread in february, where I talked\nabout a survey about migrating from Oracle 8.0 / NT4 to PostgreSQL 7.2 /\nRed Hat 7.2 ...\n\nOverall performances obtained are a ratio of 1,33 on standard queries\nof the application, like on migrated CONNECT BY Oracle statements\n(thanks again to OpenACS guys for this). This ratio is very good for us\nand our customer. We felt some pride about such results.\n\nBut we faced a problem in migrating bulk plain batch in Oracle Pro*C to\nECPG: performances where 3 times slower, due to incapacity of PG to\nprepare statments (some well informed guys here in PG list gave us\ntips&hints to use SPI's prepared statment. Unfortunately, this would\nresult in a loss of functionalities from Pro*C to ECPG.. :-( so we had\nto abandon this issue).\n\nI talk here about CURSORs. \n\nI imagine that with your patch, we could prepare statments used in\ncursors. We going to test this and benchmark the application. Not sure\nit works, I think ECPG has first to take into consideration those new\nfunctialities (Michael?).\n\nBe sure to have feedback on this :-)\n\nThanks again for such initiative! I'm going to inform my co-worker (C++\nsenior) on your patch with the hope he can help you.\n\nCheers,\n\n-- \nJean-Paul ARGUDO IDEALX S.A.S\nConsultant bases de données 15-17, av.
de Ségur\nhttp://www.idealx.com F-75007 PARIS\n", "msg_date": "Tue, 16 Apr 2002 10:25:00 +0200", "msg_from": "Jean-Paul ARGUDO <jean-paul.argudo@idealx.com>", "msg_from_op": false, "msg_subject": "Re: experimental pg_qcache patch" }, { "msg_contents": "Jean-Paul ARGUDO wrote:\n> > I've attached an updated version of Karel Zak's pg_qcache patch, which\n> > adds PREPARE/EXECUTE support to PostgreSQL (allowing prepared SQL\n> > statements). \n> \n> Woah :-))\n> \n> Thanks Neil! You may be remind of a thread in february, where I talked\n> about a survey about migrating from Oracle 8.0 / NT4 to PostgreSQL 7.2 /\n> Red Hat 7.2 ...\n> \n> Overall performances obtained are a ratio of 1,33 on standard queries\n> of the application, like on migrated CONNECT BY Oracle statements\n> (thanks again to OpenACS guys for this). This ratio is very good for us\n> and our customer. We felt some pride about such results.\n\nYes, I was specifically thinking of your case to make use of this.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Apr 2002 11:51:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: experimental pg_qcache patch" } ]
[ { "msg_contents": "Can someone please fix this? Building JDBC staffs in current has been\nbroken for a while(7.2.x is ok). Maybe current JDBC build process\nrequires more recent version of ant than I have, I don't know. But if\nso, that should be stated somewhere in the docs explicitly, I think.\n\n/usr/bin/ant -buildfile ./build.xml all \\\n -Dmajor=7 -Dminor=3 -Dfullversion=7.3devel -Ddef_pgport=5432 -Denable_debug=yes\nBuildfile: ./build.xml\n\nall:\n\nprepare:\n\nBUILD FAILED\n\n/usr/local/src/pgsql/current/pgsql/src/interfaces/jdbc/./build.xml:155: Could not create task of type: condition. Common solutions are to use taskdef to declare your task, or, if this is an optional task, to put the optional.jar in the lib directory of your ant installation (ANT_HOME).\n\nFYI, my build tools are:\n\nJava: 1.3.0\nAnt: 1.3\n\n", "msg_date": "Sun, 14 Apr 2002 10:09:47 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "JDBC build fails" }, { "msg_contents": "\nI am not seeing any jdbc build failure here. I am using:\n\n\tAnt version 1.4 compiled on September 3 2001\n\n---------------------------------------------------------------------------\n\nTatsuo Ishii wrote:\n> Can someone please fix this? Building JDBC staffs in current has been\n> broken for a while(7.2.x is ok). Maybe current JDBC build process\n> requires more recent version of ant than I have, I don't know. But if\n> so, that should be stated somewhere in the docs explicitly, I think.\n> \n> /usr/bin/ant -buildfile ./build.xml all \\\n> -Dmajor=7 -Dminor=3 -Dfullversion=7.3devel -Ddef_pgport=5432 -Denable_debug=yes\n> Buildfile: ./build.xml\n> \n> all:\n> \n> prepare:\n> \n> BUILD FAILED\n> \n> /usr/local/src/pgsql/current/pgsql/src/interfaces/jdbc/./build.xml:155: Could not create task of type: condition. 
Common solutions are to use taskdef to declare your task, or, if this is an optional task, to put the optional.jar in the lib directory of your ant installation (ANT_HOME).\n> \n> FYI, my build tools are:\n> \n> Java: 1.3.0\n> Ant: 1.3\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 13 Apr 2002 23:23:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: JDBC build fails" }, { "msg_contents": "> Tatsuo,\n> \n> Yes, ant version 1.4.1 or later is required to build the driver\n\nThen it should be noted somewhere in the docs. Also, that should be\nnoted in the \"incompatibilty section\" of the release note for 7.3.\n--\nTatsuo Ishii\n\n> see http://jakarta.apache.org/ant\n> \n> Dave\n> On Sat, 2002-04-13 at 23:28, Barry Lind wrote:\n> > Dave,\n> > \n> > This was your change I believe. Can you respond?\n> > \n> > thanks,\n> > --Barry\n> > \n> > Tatsuo Ishii wrote:\n> > > Can someone please fix this? Building JDBC staffs in current has been\n> > > broken for a while(7.2.x is ok). Maybe current JDBC build process\n> > > requires more recent version of ant than I have, I don't know. But if\n> > > so, that should be stated somewhere in the docs explicitly, I think.\n> > > \n> > > /usr/bin/ant -buildfile ./build.xml all \\\n> > > -Dmajor=7 -Dminor=3 -Dfullversion=7.3devel -Ddef_pgport=5432 -Denable_debug=yes\n> > > Buildfile: ./build.xml\n> > > \n> > > all:\n> > > \n> > > prepare:\n> > > \n> > > BUILD FAILED\n> > > \n> > > /usr/local/src/pgsql/current/pgsql/src/interfaces/jdbc/./build.xml:155: Could not create task of type: condition. 
Common solutions are to use taskdef to declare your task, or, if this is an optional task, to put the optional.jar in the lib directory of your ant installation (ANT_HOME).\n> > > \n> > > FYI, my build tools are:\n> > > \n> > > Java: 1.3.0\n> > > Ant: 1.3\n> > > \n> > > \n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 6: Have you searched our list archives?\n> > > \n> > > http://archives.postgresql.org\n> > > \n> > \n> > \n> > \n> \n> \n", "msg_date": "Tue, 16 Apr 2002 10:08:52 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: JDBC build fails" } ]
[ { "msg_contents": "RPMs for 7.2.1 are immediately available for download from\nftp://ftp.postgresql.org/pub/binary/v7.2.1/RPMS\n\nBinary RPMs available are for RedHat-skipjack 7.2.93 and RedHat 6.2/SPARC, and \nthe source RPM is in SRPMS.\n\nTo rebuild on RedHat 7.x, simply rpm --rebuild if you have the necessary \ndevelopment packages installed. In particular, since tk is a build target, \nthe development libs for X are required for the full build. See \nREADME.rpm-dist, available in the source RPM, for details on the conditional \nbuild system.\n\nTo rebuild on RedHat 6.2, use 'rpm --define \"build6x 1\" --rebuild' to rebuild. \nThe build6x option disables SSL, kerberos, and NLS support, as well as tuning \nthe dependencies for Red Hat 6.2 versus 7.x. If you have gettext, krb5, \nand/or OpenSSL installed on your RedHat 6.2 box (those packages are not stock \noptions in a usable form), visit the postgresql.spec file and edit the top \nfew lines accordingly. However, since the 6.2 package dependencies are \nmodifiied by the build6x option, you still need to define it. And don't \ndefine it to 0 for non-6.x builds, as the state of being undefined or defined \nis used as a conditional as well.\n\nPlease see the changelog included in postgresql.spec in the source RPM for \ndetails on what else has changed.\n\nThere are a few patches and fixes I still need to apply from people, but these \nRPMs are stable and build on both RHL 7.2.93 (skipjack public beta) and \nRedHat 6.2/SPARC (the only RHL 6.2 machine I have available to me). I will \nbe uploading RPMs built on stock fully updated RHL 7.2 Monday. \n\nIncidentally, the 7.2.93 (skipjack) public beta is a serious improvement over \nRHL 7.2, and I personally recommend it, as KDE 3 is worth the upgrade, even \nto a beta.\n\nMy apologies for the long delay since the 7.2-1 RPM release. Since RedHat 6.2 \nsupport seemed important to many people, I took my time making sure I could \nactually rebuild on RHL 6.2. 
This required me to have a 6.2 box at my \ndisposal to build upon. So I bought a SPARCclassic for $1.25 off ebay, \noutfitted it with 64MB RAM and a 4.5GB SCSI HD, and installed RHL 6.2/sparc \non it. And it took a long time to figure out what was broken with the \nsparc32 build system on it. But I got it figured out and fixed, and it now \nbuilds. On a SPARCclassic (sun4m, MicroSPARC I @ 50MHZ) the build is very \nlong (over 1 hour), but you can't beat the price, and the reliability of the \nhardware. Plus, it's quite cute.\n\nNOTE: I will only directly support RHL 7.2 and later on Intel, and 6.2 on \nSPARC. RHL 7.1 and RHL 7.0 are not directly supported by me as I have no \nmachines running those versions at my disposal. In addition, I will only \nsupport RedHat 6.2 on SPARC directly -- and my SPARCclassic has _all_ the \nerrata installed, including the RPM 4.x packages. To use my source RPM's you \nwill need a version of RPM that understands features available to RPM 4.x.\n\nI do have access to a Caldera/SCO OpenUnix box using Linux emulation thanks to \nLarry Rosenman, even though I've not availed myself of that access as yet. 
\nOther none-RedHat RPM-based distributions are not directly supported by me, \nalthough SuSE 7.3 on UltraSparc may be supported in the future, as I have an \nUltra 5 running that dist.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 13 Apr 2002 23:48:22 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": true, "msg_subject": "PostgreSQL 7.2.1-2PGDG RPMs available for RedHat-skipjack 7.2.93 and\n\tRedHat 6.2/SPARC" }, { "msg_contents": "On Sun, 2002-04-14 at 08:48, Lamar Owen wrote:\n> \n> Incidentally, the 7.2.93 (skipjack) public beta is a serious improvement over \n> RHL 7.2, and I personally recommend it, as KDE 3 is worth the upgrade, even \n> to a beta.\n\nIs the 7.2.93 (skipjack) public beta an improvement in raw postgresql\nperformance or just in added stuff like KDE ?\n \n----------------------------\nHannu\n\n\n", "msg_date": "14 Apr 2002 10:52:00 +0500", "msg_from": "Hannu Krosing <hannu@krosing.net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL 7.2.1-2PGDG RPMs available for" }, { "msg_contents": "[Trimmed CC list]\nOn Sunday 14 April 2002 01:52 am, Hannu Krosing wrote:\n> On Sun, 2002-04-14 at 08:48, Lamar Owen wrote:\n> > Incidentally, the 7.2.93 (skipjack) public beta is a serious improvement\n> > over RHL 7.2, and I personally recommend it, as KDE 3 is worth the\n> > upgrade, even to a beta.\n\n> Is the 7.2.93 (skipjack) public beta an improvement in raw postgresql\n> performance or just in added stuff like KDE ?\n\nHmmm.\n\nRaw performance seems to be increased as well, due to an improved kernel \n(2.4.18 plus low-latency and preemptible patches, according to the kernel \nsource RPM). 
Although I am a little overwhelmed by the increased performance \nof this new Athlon 1.2+512MB RAM versus my old Celeron 650+192MB RAM, 7.2.93 \nseems to be faster on the same hardware.\n\nParticularly during the regression tests.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sun, 14 Apr 2002 14:35:13 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": true, "msg_subject": "Redhat 7.2.93 performance (was:Re: PostgreSQL 7.2.1-2PGDG RPMs\n\tavailable for RedHat-skipjack 7.2.93 and RedHat 6.2/SPARC)" }, { "msg_contents": "On Sun, Apr 14, 2002 at 02:35:13PM -0400, Lamar Owen wrote:\n> \n> Hmmm.\n> \n> Raw performance seems to be increased as well, due to an improved kernel \n> (2.4.18 plus low-latency and preemptible patches, according to the kernel \n> source RPM).\nThe low-latency and preemptible patches are not meant for performance\ngains, but for responsiveness, and are not designed to be used in servers,\nonly in workstations/desktops.\n\n> Although I am a little overwhelmed by the increased performance \n> of this new Athlon 1.2+512MB RAM versus my old Celeron 650+192MB RAM, 7.2.93 \n> seems to be faster on the same hardware.\n2.4.18 does come with a improved VM, what could justify the performance\nincrease. As could an update on the compiler (I've being using gcc 3.1 in\nmy redhat 7.2).\n\nBut I can't recomend the beta to anyone, we had problems with one\ndual pentium iii server, causing random corruption on\n/usr/include/*.h and a lock up.\n\nRegards,\nLuciano Rocha\n\n-- \nLuciano Rocha, strange@nsk.yi.org\n\nThe trouble with computers is that they do what you tell them, not what\nyou want.\n -- D. 
Cohen\n", "msg_date": "Sun, 14 Apr 2002 20:00:11 +0100", "msg_from": "Luciano Miguel Ferreira Rocha <strange@nsk.yi.org>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.2.93 performance (was:Re: PostgreSQL 7.2.1-2PGDG RPMs\n\tavailable for RedHat-skipjack 7.2.93 and RedHat 6.2/SPARC)" }, { "msg_contents": "On Sunday 14 April 2002 03:00 pm, Luciano Miguel Ferreira Rocha wrote:\n> On Sun, Apr 14, 2002 at 02:35:13PM -0400, Lamar Owen wrote:\n> > Raw performance seems to be increased as well, due to an improved kernel\n> > (2.4.18 plus low-latency and preemptible patches, according to the kernel\n> > source RPM).\n\n> The low-latency and preemptible patches are not meant for performance\n> gains, but for responsiveness, and are not designed to be used in servers,\n> only in workstations/desktops.\n\nISTM that improving interactive performance would also improve multiuser \nperformance in a server, as low latency and kernel preemption can increase \nmultiuser server responsiveness.\n\n> > Although I am a little overwhelmed by the increased performance\n> > of this new Athlon 1.2+512MB RAM versus my old Celeron 650+192MB RAM,\n> > 7.2.93 seems to be faster on the same hardware.\n\n> 2.4.18 does come with a improved VM, what could justify the performance\n> increase. As could an update on the compiler (I've being using gcc 3.1 in\n> my redhat 7.2).\n\nThe stock gcc on 7.2.93 is still the RedHat-branded 2.96, but with lots of \nfixes backported from higher versions.\n\nHowever, the improved VM may indeed be a large part of it. It sure feels \nfaster.\n\n> But I can't recomend the beta to anyone, we had problems with one\n> dual pentium iii server, causing random corruption on\n> /usr/include/*.h and a lock up.\n\nDid you happen to report it to Red Hat's Skipjack list, or to \nbugzilla.redhat.com/bugzilla? 
Helps make a better dist!\n\nI have had less problems thus far with 7.2.93 than I ever did with 7.2.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sun, 14 Apr 2002 15:15:39 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": true, "msg_subject": "Re: Redhat 7.2.93 performance (was:Re: PostgreSQL 7.2.1-2PGDG RPMs\n\tavailable for RedHat-skipjack 7.2.93 and RedHat 6.2/SPARC)" }, { "msg_contents": "Lamar Owen wrote:\n\n>>The low-latency and preemptible patches are not meant for performance\n>>gains, but for responsiveness, and are not designed to be used in servers,\n>>only in workstations/desktops.\n>>\n>\n>ISTM that improving interactive performance would also improve multiuser \n>performance in a server, as low latency and kernel preemption can increase \n>multiuser server responsiveness.\n>\nresponsiveness != performance IT works OK for a low number of \nconcurrent users/processes to increase percieved performance, but to get \nreal gains on large systems with large numebrs of users and processes \nyou actually decrease the responsiveness of individual tasks (IE make \nthe system a little less likely to context switch or pre-empt) and \nschedual in batches or clusters rather than one-at-a-time. 
For a \ndesktop/workstation this would be insane, and drive a user to kill \nsomeone, but for systems that handle several hundred users (interactive \nor not) this improves overall perfomance.\n\n2.4.18 has a lot of work done to the VM, but most importantly has work \ndone to the queue elevator code, thats probably whats doing most of the \nwork (throttling big writers) of seeing better overall system performance.", "msg_date": "Sun, 14 Apr 2002 12:59:09 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.2.93 performance (was:Re: PostgreSQL 7.2.1-2PGDG RPMs\n\tavailable for RedHat-skipjack 7.2.93 and RedHat 6.2/SPARC)" }, { "msg_contents": "On Sun, Apr 14, 2002 at 03:15:39PM -0400, Lamar Owen wrote:\n> ISTM that improving interactive performance would also improve multiuser \n> performance in a server, as low latency and kernel preemption can increase \n> multiuser server responsiveness.\nI doubt any performance will increase, either on a multiuser or on a\nsingleuser system.\n\nHaving faster response on mouse clicks or keyboard input doesn't translate\non better overall performance, the user just has the felling that it's so.\n\nAs an example, a part of those patches causes brakes in the middle of some\nloops (saving buffers to disk, etc). Then other applications that don't\ndepend on disk activity can have change to run, so the system seems\nfaster, it's more responsive. But it won't actually be faster, the system\nstill has to lock again and continue saving the buffers. Actually, in this\ncase there will be an overhead caused by checking if the kernel should\nbrake.\n\nHowever, both projects review the Linux code, and may find, if they\nhaven't already, some places were a finer locking may be used, giving a\nbetter performance in a SMP system.
But it could also break some\nintegrity.\n\nThose patches are not recomended for a server, and now I'm curious to\ncheck if the -enterprise configuration has them active.\n\n> Did you happen to report it to Red Hat's Skipjack list, or to \n> bugzilla.redhat.com/bugzilla? Helps make a better dist!\nAlas, a bug report saying: the system crashed, I can't login remotely,\ndoesn't help a lot...\n\nRegards,\nLuciano Rocha\n\n-- \nLuciano Rocha, strange@nsk.yi.org\n\nThe trouble with computers is that they do what you tell them, not what\nyou want.\n -- D. Cohen\n", "msg_date": "Sun, 14 Apr 2002 21:00:18 +0100", "msg_from": "Luciano Miguel Ferreira Rocha <strange@nsk.yi.org>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.2.93 performance (was:Re: PostgreSQL 7.2.1-2PGDG RPMs\n\tavailable for RedHat-skipjack 7.2.93 and RedHat 6.2/SPARC)" }, { "msg_contents": "Lamar\n\n > RPMs for 7.2.1 are immediately available for download from\n > ftp://ftp.postgresql.org/pub/binary/v7.2.1/RPMS\n\nIs the attached message one of the patches that has yet\nto be applied? 
Without this patch the RPM needs some\npatching to get it to compile on a MIPS machine.\n\nmake -C lmgr SUBSYS.o\nmake[4]: Entering directory \n`/ws/whunter/dev/rpm/BUILD/postgresql-7.2.1/src/backend/storage/lmgr'\n[snip]\ngcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations \n-I../../../../src/include -I/usr/kerberos/include -c -o s_lock.o s_lock.c\ns_lock.c:170: warning: `tas_dummy' defined but not used\n/tmp/ccuQB808.s: Assembler messages:\n/tmp/ccuQB808.s:173: Error: opcode not supported on this processor: \nR3000 (MIPS1) `ll $14,0($4)'\n/tmp/ccuQB808.s:175: Error: opcode not supported on this processor: \nR3000 (MIPS1) `sc $15,0($4)'\n\nWarwick\n\nPS: forgive me if this is the wrong place to send this.\n-- \nWarwick Hunter Agile TV Corporation\nVoice: +61 7 5584 5912 Fax: +61 7 5575 9550\nmailto:whunter@oz.agile.tv http://www.agile.tv\n\n-----Forwarded Message-----\n\nFrom: rmurray@debian.org\nTo: 139003@bugs.debian.org\nCc: control@bugs.debian.org\nSubject: Bug#139003: a little bit more is needed...\nDate: 27 Mar 2002 00:21:18 -0800\n\nreopen 139003\nthanks\n\nLooks like a small patch is needed as well to do the right thing on Linux.\n\nThe patch enables the mips2 ISA for the ll/sc operations, and then restores\nit when done. The kernel/libc emulation code will take over on CPUs without\nll/sc, and on CPUs with it, it'll use the operations provided by the CPU.\n\nCombined with the earlier fix (removing -mips2), postgresql builds again on\nmips and mipsel. 
The patch is against 7.2-7.\n\ndiff -urN postgresql-7.2/src/backend/storage/lmgr/s_lock.c postgresql-7.2.fixed/src/backend/storage/lmgr/s_lock.c\n--- postgresql-7.2/src/backend/storage/lmgr/s_lock.c\tMon Nov 5 18:46:28 2001\n+++ postgresql-7.2.fixed/src/backend/storage/lmgr/s_lock.c\tWed Mar 27 07:46:59 2002\n@@ -173,9 +173,12 @@\n .global\ttas\t\t\t\t\t\t\\n\\\n tas:\t\t\t\t\t\t\t\\n\\\n \t\t\t.frame\t$sp, 0, $31\t\\n\\\n+\t\t\t.set push\t\t\\n\\\n+\t\t\t.set mips2\t\t\\n\\n\n \t\t\tll\t\t$14, 0($4)\t\\n\\\n \t\t\tor\t\t$15, $14, 1\t\\n\\\n \t\t\tsc\t\t$15, 0($4)\t\\n\\\n+\t\t\t.set pop\t\t\t\\n\\\n \t\t\tbeq\t\t$15, 0, fail\\n\\\n \t\t\tbne\t\t$14, 0, fail\\n\\\n \t\t\tli\t\t$2, 0\t\t\\n\\", "msg_date": "Fri, 19 Apr 2002 15:25:58 +1000", "msg_from": "Warwick Hunter <whunter@agile.tv>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 7.2.1-2PGDG RPMs available for RedHat-skipjack" } ]
[ { "msg_contents": "In benchmarks that I have done in the past comparing performance of \nOracle and Postgres in our web application, I found that I got ~140 \nrequests/sec on Oracle and ~50 requests/sec on postgres.\n\nThe code path in my benchmark only issues one sql statement. Since I \nknow that Oracle caches query plans, I wanted to see the cost under \npostgres of the parse/plan/execute to see if the parsing and planing of \nthe sql statement would account for the difference in performance \nbetween Oracle and postgres.\n\nIn a recent mail note to hackers, Tom mentioned the existence of the \nshow_parser_stats, show_planner_stats, and show_executor_stats \nparameters in the postgresql.conf file. So I turned them on ran my \nquery a few times and here are the results:\n\naverage of 10 runs:\nparsing = .003537 sec (19.3%)*\nplanning = .009793 sec (53.5%)\nexecute = .004967 sec (27.2%)\n\nIf Oracle is only incurring the execute cost for each query then this \nwould explain the difference in performance between Oracle and Postgres.\n\nThis would lead me to conclude that the current proposed PREPARE/EXECUTE \npatch will be very useful to me. (now I just need to find the time to \ntest it).\n\nthanks,\n--Barry\n\n* show_parser_stats prints out three separate timings: parser \nstatistics, parse analysis statistics, rewriter statistics, the number \n.003537 is the sum of those three (.001086 + .002350 + .000101)", "msg_date": "Sat, 13 Apr 2002 21:16:02 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": true, "msg_subject": "cost of parse/plan/execute for one sample query" }, { "msg_contents": "In testing Neil's PREPARE/EXECUTE patch on my test query, I found the \nparser complains that this query is not valid when using current \nsources. The error I get is:\n\npsql:testorig.sql:1: ERROR: JOIN/ON clause refers to \"xf2\", which is \nnot part of JOIN\n\nI think the sql is valid (at least it has worked in 7.1 and 7.2). 
Is \nthis a bug?\n\nthanks,\n--Barry\n\nPS. I forgot to mention that the below performance numbers were done on \n7.2 (not current sources).\n\nBarry Lind wrote:\n> In benchmarks that I have done in the past comparing performance of \n> Oracle and Postgres in our web application, I found that I got ~140 \n> requests/sec on Oracle and ~50 requests/sec on postgres.\n> \n> The code path in my benchmark only issues one sql statement. Since I \n> know that Oracle caches query plans, I wanted to see the cost under \n> postgres of the parse/plan/execute to see if the parsing and planing of \n> the sql statement would account for the difference in performance \n> between Oracle and postgres.\n> \n> In a recent mail note to hackers, Tom mentioned the existence of the \n> show_parser_stats, show_planner_stats, and show_executor_stats \n> parameters in the postgresql.conf file. So I turned them on ran my \n> query a few times and here are the results:\n> \n> average of 10 runs:\n> parsing = .003537 sec (19.3%)*\n> planning = .009793 sec (53.5%)\n> execute = .004967 sec (27.2%)\n> \n> If Oracle is only incurring the execute cost for each query then this \n> would explain the difference in performance between Oracle and Postgres.\n> \n> This would lead me to conclude that the current proposed PREPARE/EXECUTE \n> patch will be very useful to me. (now I just need to find the time to \n> test it).\n> \n> thanks,\n> --Barry\n> \n> * show_parser_stats prints out three separate timings: parser \n> statistics, parse analysis statistics, rewriter statistics, the number \n> .003537 is the sum of those three (.001086 + .002350 + .000101)\n> \n>", "msg_date": "Sat, 13 Apr 2002 22:27:24 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": true, "msg_subject": "bug with current sources? 
Re: cost of parse/plan/execute for one\n\tsample query" }, { "msg_contents": "Barry Lind <barry@xythos.com> writes:\n> In testing Neil's PREPARE/EXECUTE patch on my test query, I found the \n> parser complains that this query is not valid when using current \n> sources. The error I get is:\n\n> psql:testorig.sql:1: ERROR: JOIN/ON clause refers to \"xf2\", which is \n> not part of JOIN\n\nHmm. I have an open bug with sub-SELECTs inside a JOIN, but this\nexample doesn't look like it would trigger that.\n\n> I think the sql is valid (at least it has worked in 7.1 and 7.2). Is \n> this a bug?\n\nDunno. Give me a test case (and no, I am *not* going to try to\nreverse-engineer table schemas from that SELECT).\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 14 Apr 2002 13:11:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: bug with current sources? Re: cost of parse/plan/execute for one\n\tsample query" }, { "msg_contents": "Tom,\n\nOK here is a test case:\n\ncreate table test1 (t1a int);\ncreate table test2 (t2a int);\ncreate table test3 (t3a int);\nSELECT x2.t2a\nFROM ((test1 t1 LEFT JOIN test2 t2 ON (t1.t1a = t2.t2a)) AS x1\nLEFT OUTER JOIN test3 t3 ON (x1.t2a = t3.t3a)) AS x2\nWHERE x2.t2a = 1;\n\nThe select works under 7.2, but gives the following error in 7.3:\n\nERROR: JOIN/ON clause refers to \"x1\", which is not part of JOIN\n\nthanks,\n--Barry\n\n\n\nTom Lane wrote:\n> Barry Lind <barry@xythos.com> writes:\n> \n>>In testing Neil's PREPARE/EXECUTE patch on my test query, I found the \n>>parser complains that this query is not valid when using current \n>>sources. The error I get is:\n> \n> \n>>psql:testorig.sql:1: ERROR: JOIN/ON clause refers to \"xf2\", which is \n>>not part of JOIN\n> \n> \n> Hmm. I have an open bug with sub-SELECTs inside a JOIN, but this\n> example doesn't look like it would trigger that.\n> \n> \n>>I think the sql is valid (at least it has worked in 7.1 and 7.2). 
Is \n>>this a bug?\n> \n> \n> Dunno. Give me a test case (and no, I am *not* going to try to\n> reverse-engineer table schemas from that SELECT).\n> \n> \t\t\tregards, tom lane\n> \n\n\n", "msg_date": "Sun, 14 Apr 2002 21:44:42 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": true, "msg_subject": "Re: bug with current sources? Re: cost of parse/plan/execute" }, { "msg_contents": "Barry Lind <barry@xythos.com> writes:\n> OK here is a test case:\n\nLooks like a bug, all right --- I must have introduced this when I redid\nthe handling of JOIN aliases a few weeks ago. Will fix.\n\nThanks for the report.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Apr 2002 00:49:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: bug with current sources? Re: cost of parse/plan/execute for one\n\tsample query" }, { "msg_contents": "Barry Lind <barry@xythos.com> writes:\n> The select works under 7.2, but gives the following error in 7.3:\n> ERROR: JOIN/ON clause refers to \"x1\", which is not part of JOIN\n\nI've committed a fix for this. Thanks again.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Apr 2002 02:07:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: bug with current sources? Re: cost of parse/plan/execute for one\n\tsample query" } ]
[ { "msg_contents": "James Cole (colejatmsu.edu) reports a bug with a severity of 2\nThe lower the number the more severe it is.\n\nShort Description\nCASE statement evaluation does not short-circut\n\nLong Description\nIn 7.2.1, Both the WHEN and THEN clauses of a CASE statement are evaluated, even if the WHEN clause evaluates to FALSE.\n\n(I'm not sure if this behavior is allowed by the '92 spec, but it's different than under 7.1.x)\n\nPlatform info:\njoel2=# select version();\n version \n------------------------------------------------------------------\n PostgreSQL 7.2.1 on sparc-sun-solaris2.6, compiled by GCC 2.95.2\n(1 row)\n\n\nSample Code\njoel2=# \nSELECT\n CASE\n WHEN 1 = 2 THEN 1 / 0\n WHEN 1 = 1 THEN 1.0\n END;\nERROR: floating point exception! The last floating point operation either exceeded legal ranges or was a divide by zero\n\n\nNo file was uploaded with this report\n\n", "msg_date": "Sun, 14 Apr 2002 12:52:22 -0400 (EDT)", "msg_from": "pgsql-bugs@postgresql.org", "msg_from_op": true, "msg_subject": "Bug #633: CASE statement evaluation does not short-circut" }, { "msg_contents": "pgsql-bugs@postgresql.org writes:\n> In 7.2.1, Both the WHEN and THEN clauses of a CASE statement are evaluated, even if the WHEN clause evaluates to FALSE.\n\nNot in the normal case.\n\n> SELECT\n> CASE\n> WHEN 1 = 2 THEN 1 / 0\n> WHEN 1 = 1 THEN 1.0\n> END;\n> ERROR: floating point exception! The last floating point operation either exceeded legal ranges or was a divide by zero\n\nHmm. The reason for this is that the constant-expression simplifier\nreduces all the subexpressions of the CASE before it tries to discard\nthe ones with constant-FALSE preconditions. 
This particular example\ncould be fixed by rearranging the order of the simplification operations,\nbut you'd still see a failure with, say,\n\n\tSELECT CASE WHEN boolCol THEN 1 / 0 END FROM table;\n\nsince 1/0 will be const-folded at planning time whether the table\ncontains any TRUE entries or not.\n\nI don't really consider this a bug; at least, fixing it would imply not\nconst-simplifying the result expressions of CASEs, which is a cure far\nworse than the disease IMHO. Does anyone think we *should* allow CASE\nto defeat const-simplification? Are there any real-world cases (as\nopposed to made-up examples) where this is necessary?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 14 Apr 2002 13:30:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug #633: CASE statement evaluation does not short-circuit " }, { "msg_contents": "...\n> I don't really consider this a bug; at least, fixing it would imply not\n> const-simplifying the result expressions of CASEs, which is a cure far\n> worse than the disease IMHO. Does anyone think we *should* allow CASE\n> to defeat const-simplification?\n\nNo. Constant-folding during parsing should *always* be allowed.\n\n - Thomas\n", "msg_date": "Mon, 15 Apr 2002 19:33:05 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Bug #633: CASE statement evaluation does not short-circuit" } ]
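The behavior discussed in the thread above can be condensed into a short SQL sketch. This is illustrative only: the table `tab` and its columns are hypothetical, and the plan-time failures shown reflect the 7.2-era behavior being reported, which later releases may handle differently.

```sql
-- Hypothetical table for illustration.
CREATE TABLE tab (boolcol boolean, divisor integer);

-- All-constant CASE: every arm is simplified before unreachable ones
-- are discarded, so the 1 / 0 fails at plan time even though its
-- WHEN clause can never be true.
SELECT CASE WHEN 1 = 2 THEN 1 / 0 WHEN 1 = 1 THEN 1.0 END;

-- Non-constant WHEN: 1 / 0 is still a plan-time constant and gets
-- folded regardless of what the table contains, so this fails too.
SELECT CASE WHEN boolcol THEN 1 / 0 END FROM tab;

-- Runtime evaluation does short-circuit: this division is not a
-- constant, so it is evaluated only for rows whose WHEN arm is taken.
SELECT CASE WHEN divisor <> 0 THEN 10 / divisor END FROM tab;
```

The last query is the usual workaround: keep the guarded expression non-constant so the simplifier leaves it for the executor, which evaluates CASE arms lazily.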
[ { "msg_contents": "With my pg_hba.conf changes done, I am now focusing in the next few days\non clearing out my email/patch application backlog.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Apr 2002 20:56:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "status report" } ]
[ { "msg_contents": "I'm running into a minor issue with security in regards to users being\nable to see constructs that they have no access too.\n\nThe solution? Information_Schema coupled with no direct access to\npg_catalog. Internals can use pg_catalog, possibly super users, but\nregular users shouldn't be able to do any reads / writes to it\ndirectly -- as per spec with definition_schema.\n\nAnyway, I'd like to start working on the information_schema and\nconverting psql, pg_dump and other tools over to use it. After a\ncouple of releases I'd like to block pg_catalog usage -- perhaps a GUC\noption?\n\nAny thoughts or objections? Obviously the information schema needs\n(aside from the spec) enough information to allow pg_dump to run.\n\nMy thought is that if I start now when a large rewrite of clientside\napplications is required for schema support that there won't be nearly\nas much backlash later.\n\nA number of pg_dump items will be moved into base functions. Trigger\nstatement, type formatting (various view fields).\n\nWhats the radix of the numeric, int, etc. types anyway?\n\nAs a bonus, this adds a layer between the actual system tables and the\nclients. Might allow changes to be done easier.\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n\n", "msg_date": "Sun, 14 Apr 2002 21:26:16 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Security Issue.." }, { "msg_contents": "Rod Taylor wrote:\n> I'm running into a minor issue with security in regards to users being\n> able to see constructs that they have no access too.\n> \n> The solution? Information_Schema coupled with no direct access to\n> pg_catalog. 
Internals can use pg_catalog, possibly super users, but\n> regular users shouldn't be able to do any reads / writes to it\n> directly -- as per spec with definition_schema.\n\nIs the problem that people can see system catalog columns that should be\nmore secure?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Apr 2002 21:33:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Security Issue.." }, { "msg_contents": "Yes.\n\nA number of people in the company have mentioned that our customers\ncan see tables and structures which they shouldn't know exist.\n\nNot a severe issue, but it's a checkmark for those wanting to switch\nto Oracle.\n\nRevoking read access to system catalogs causes interesting things to\noccur :)\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"Rod Taylor\" <rbt@zort.ca>\nCc: \"Hackers List\" <pgsql-hackers@postgresql.org>\nSent: Sunday, April 14, 2002 9:33 PM\nSubject: Re: [HACKERS] Security Issue..\n\n\n> Rod Taylor wrote:\n> > I'm running into a minor issue with security in regards to users\nbeing\n> > able to see constructs that they have no access too.\n> >\n> > The solution? Information_Schema coupled with no direct access to\n> > pg_catalog. 
Internals can use pg_catalog, possibly super users,\nbut\n> > regular users shouldn't be able to do any reads / writes to it\n> > directly -- as per spec with definition_schema.\n>\n> Is the problem that people can see system catalog columns that\nshould be\n> more secure?\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n>\n\n", "msg_date": "Sun, 14 Apr 2002 21:42:19 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Re: Security Issue.." }, { "msg_contents": "Rod Taylor writes:\n\n> The solution? Information_Schema coupled with no direct access to\n> pg_catalog. Internals can use pg_catalog, possibly super users, but\n> regular users shouldn't be able to do any reads / writes to it\n> directly -- as per spec with definition_schema.\n\nThe catch on this is that privileges on views don't work quite perfectly\nyet. For instance, if you create a view\n\n CREATE VIEW bar AS SELECT * FROM foo;\n\nthen the statement\n\n SELECT * FROM bar;\n\nneeds privileges to read \"foo\". The privileges would need to be changed\nto be checked at view creation time.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 14 Apr 2002 21:45:13 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Security Issue.." }, { "msg_contents": "Yeah, I was planning on blocking queries to pg_catalog for all cases.\nMake it so that it can never be done by any user directly. It would\nhave to be done in the parser before the view was evaluated, and no\nuser created views would be allowed to access pg_catalog.\n\nThe spec describes the definition schema as accessible only from the\ninformation schema.\n\nLong term goal of course. 
It would take a few releases to ensure that\neverything was setup to be done like that.\n\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n----- Original Message -----\nFrom: \"Peter Eisentraut\" <peter_e@gmx.net>\nTo: \"Rod Taylor\" <rbt@zort.ca>\nCc: \"Hackers List\" <pgsql-hackers@postgresql.org>\nSent: Sunday, April 14, 2002 9:45 PM\nSubject: Re: [HACKERS] Security Issue..\n\n\n> Rod Taylor writes:\n>\n> > The solution? Information_Schema coupled with no direct access to\n> > pg_catalog. Internals can use pg_catalog, possibly super users,\nbut\n> > regular users shouldn't be able to do any reads / writes to it\n> > directly -- as per spec with definition_schema.\n>\n> The catch on this is that privileges on views don't work quite\nperfectly\n> yet. For instance, if you create a view\n>\n> CREATE VIEW bar AS SELECT * FROM foo;\n>\n> then the statement\n>\n> SELECT * FROM bar;\n>\n> needs privileges to read \"foo\". The privileges would need to be\nchanged\n> to be checked at view creation time.\n>\n> --\n> Peter Eisentraut peter_e@gmx.net\n>\n\n", "msg_date": "Sun, 14 Apr 2002 22:12:29 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Re: Security Issue.." 
}, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> For instance, if you create a view\n> CREATE VIEW bar AS SELECT * FROM foo;\n> then the statement\n> SELECT * FROM bar;\n> needs privileges to read \"foo\".\n\nThis works just fine, thank you: the privileges are checked against the\nowner of the view.\n\n> The privileges would need to be changed\n> to be checked at view creation time.\n\nThat would be broken; privileges are and must be checked at query\ntime not view creation time.\n\n\nBut having said that, I do not foresee being able to replace direct\npg_catalog access with INFORMATION_SCHEMA views anytime soon. There\nare too many clients out there that are used to doing it that way.\n\nMoreover, pg_dump will never be able to work off INFORMATION_SCHEMA,\nbecause it needs to get at Postgres-specific information that will\nnot be available from a spec-compliant set of views. I'm fairly\ndubious about converting psql, even.\n\nRod's welcome to work on developing a set of spec-compliant\nINFORMATION_SCHEMA views ... and maybe he can even turn off public\nread access to pg_catalog in his own installation ... but he should\nnot expect us to accept a patch that makes that the default anytime\nin the foreseeable future.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 14 Apr 2002 22:33:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Security Issue.. " }, { "msg_contents": "Tom Lane wrote:\n> But having said that, I do not foresee being able to replace direct\n> pg_catalog access with INFORMATION_SCHEMA views anytime soon. There\n> are too many clients out there that are used to doing it that way.\n> \n> Moreover, pg_dump will never be able to work off INFORMATION_SCHEMA,\n> because it needs to get at Postgres-specific information that will\n> not be available from a spec-compliant set of views. 
I'm fairly\n> dubious about converting psql, even.\n> \n> Rod's welcome to work on developing a set of spec-compliant\n> INFORMATION_SCHEMA views ... and maybe he can even turn off public\n> read access to pg_catalog in his own installation ... but he should\n> not expect us to accept a patch that makes that the default anytime\n> in the foreseeable future.\n\nYes, it would be nice to have spec-compliant stuff. However, things\nlike psql really get into those catalogs and grab detailed information\nthat is probably not covered by the spec.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Apr 2002 22:38:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Security Issue.." }, { "msg_contents": "For the non-spec compliant stuff, I was going to add various pg_ views\nto accommodate it, but with the spirit of the spec. That is, users can\nonly see catalog entries which they have access to, and can only view\ndefinitions of entries that they have ownership of.\n\nAnyway, I got the feedback I wanted so I'll start puttering away at\nit. There's a number of minor things missing or slightly out of whack\nwhich I hope to add as well. Timestamps on trigger creation, access\nlevels on data types, etc.\n\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. 
You cannot imagine why you ever felt\notherwise.\n\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nCc: \"Peter Eisentraut\" <peter_e@gmx.net>; \"Rod Taylor\" <rbt@zort.ca>;\n\"Hackers List\" <pgsql-hackers@postgresql.org>\nSent: Sunday, April 14, 2002 10:38 PM\nSubject: Re: [HACKERS] Security Issue..\n\n\n> Tom Lane wrote:\n> > But having said that, I do not foresee being able to replace\ndirect\n> > pg_catalog access with INFORMATION_SCHEMA views anytime soon.\nThere\n> > are too many clients out there that are used to doing it that way.\n> >\n> > Moreover, pg_dump will never be able to work off\nINFORMATION_SCHEMA,\n> > because it needs to get at Postgres-specific information that will\n> > not be available from a spec-compliant set of views. I'm fairly\n> > dubious about converting psql, even.\n> >\n> > Rod's welcome to work on developing a set of spec-compliant\n> > INFORMATION_SCHEMA views ... and maybe he can even turn off public\n> > read access to pg_catalog in his own installation ... but he\nshould\n> > not expect us to accept a patch that makes that the default\nanytime\n> > in the foreseeable future.\n>\n> Yes, it would be nice to have spec-compliant stuff. However, things\n> like psql really get into those catalogs and grab detailed\ninformation\n> that is probably not covered the the spec.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to\nmajordomo@postgresql.org\n>\n\n", "msg_date": "Sun, 14 Apr 2002 22:52:53 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Re: Security Issue.." 
}, { "msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > For instance, if you create a view\n> > CREATE VIEW bar AS SELECT * FROM foo;\n> > then the statement\n> > SELECT * FROM bar;\n> > needs privileges to read \"foo\".\n>\n> This works just fine, thank you: the privileges are checked against the\n> owner of the view.\n\nOK, nevermind. The case I was referring to was that the CREATE VIEW\nstatement succeeds and the privileges are checked when the view is\nqueried. This is not in compliance with SQL, but it doesn't seem to\nmatter that much.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 15 Apr 2002 00:15:47 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Security Issue.. " } ]
[ { "msg_contents": "Reports missing values as bad.\n\nBAD: INSERT INTO tab (col1, col2) VALUES ('val1');\nGOOD: INSERT INTO tab (col1, col2) VALUES ('val1', 'val2');\n\nRegress tests against DEFAULT and normal values as they're managed\nslightly different.", "msg_date": "14 Apr 2002 23:07:47 -0300", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "ANSI Compliant Inserts" }, { "msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n> \t/*\n> ! \t * XXX It is possible that the targetlist has fewer entries than were\n> ! \t * in the columns list. We do not consider this an error.\tPerhaps we\n> ! \t * should, if the columns list was explicitly given?\n> \t */\n> =20=20\n> \t/* done building the range table and jointree */\n> \tqry->rtable =3D pstate->p_rtable;\n> --- 547,558 ----\n> \t}\n> =20=20\n> \t/*\n> ! \t * Ensure that the targetlist has the same number of entries\n> ! \t * that were present in the columns list. Don't do the check\n> ! \t * for select statements.\n> \t */\n> + \tif (stmt->cols !=3D NIL && (icolumns !=3D NIL || attnos !=3D NIL))\n> + \t\telog(ERROR, \"INSERT has more target columns than expressions\");\n\n\nWhat's the rationale for changing this exactly?\n\nThe code might or might not need changing (I believe the XXX comment\nquestioning it is mine, in fact) but changing behavior without any\npghackers discussion is not the way to approach this.\n\nIn general I'm suspicious of rejecting cases we used to accept for\nno good reason other than that it's not in the spec. There is a LOT\nof Postgres behavior that's not in the spec.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 14 Apr 2002 23:09:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts " }, { "msg_contents": "Tom Lane wrote:\n> Rod Taylor <rbt@zort.ca> writes:\n> > \t/*\n> > ! \t * XXX It is possible that the targetlist has fewer entries than were\n> > ! \t * in the columns list. 
We do not consider this an error.\tPerhaps we\n> > ! \t * should, if the columns list was explicitly given?\n> > \t */\n> > \n> > \t/* done building the range table and jointree */\n> > \tqry->rtable = pstate->p_rtable;\n> > --- 547,558 ----\n> > \t}\n> > \n> > \t/*\n> > ! \t * Ensure that the targetlist has the same number of entries\n> > ! \t * that were present in the columns list. Don't do the check\n> > ! \t * for select statements.\n> > \t */\n> > + \tif (stmt->cols != NIL && (icolumns != NIL || attnos != NIL))\n> > + \t\telog(ERROR, \"INSERT has more target columns than expressions\");\n> \n> \n> What's the rationale for changing this exactly?\n> \n> The code might or might not need changing (I believe the XXX comment\n> questioning it is mine, in fact) but changing behavior without any\n> pghackers discussion is not the way to approach this.\n> \n> In general I'm suspicious of rejecting cases we used to accept for\n> no good reason other than that it's not in the spec. There is a LOT\n> of Postgres behavior that's not in the spec.\n\nTODO has:\n\n o Disallow missing columns in INSERT ... VALUES, per ANSI\n\nI think it should be done because it is very easy to miss columns on\nINSERT without knowing it. I think our current behavior is too\nerror-prone. Now, if we want to just throw a NOTICE is such cases, that\nwould work too.\n\nClearly he didn't need discussion because it was already on the TODO\nlist. I guess the question is whether it should have had a question\nmark. I certainly didn't think so.\n\nAlso, I thought we were going to fix COPY to reject missing columns too.\nI just can't see a valid reason for allowing missing columns in either\ncase, except to hide errors.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Apr 2002 23:21:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> In general I'm suspicious of rejecting cases we used to accept for\n>> no good reason other than that it's not in the spec. There is a LOT\n>> of Postgres behavior that's not in the spec.\n\n> TODO has:\n> o Disallow missing columns in INSERT ... VALUES, per ANSI\n\nWhere's the discussion that's the basis of that entry? I don't recall\nany existing consensus on this (though maybe I forgot).\n\nThere are a fair number of things in the TODO list that you put there\nbecause you liked 'em, but that doesn't mean everyone else agrees.\nI certainly will not accept \"once it's on the TODO list it cannot be\nquestioned\"...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 14 Apr 2002 23:26:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> In general I'm suspicious of rejecting cases we used to accept for\n> >> no good reason other than that it's not in the spec. There is a LOT\n> >> of Postgres behavior that's not in the spec.\n> \n> > TODO has:\n> > o Disallow missing columns in INSERT ... VALUES, per ANSI\n> \n> Where's the discussion that's the basis of that entry? I don't recall\n> any existing consensus on this (though maybe I forgot).\n\nI assume someone (Peter?) looked it up and reported that our current\nbehavior was incorrect and not compliant. 
I didn't do the research on\nwhether it was compliant.\n\n> There are a fair number of things in the TODO list that you put there\n> because you liked 'em, but that doesn't mean everyone else agrees.\n> I certainly will not accept \"once it's on the TODO list it cannot be\n> questioned\"...\n\nI put it there because I didn't think there was any question. If I was\nwrong, I can add a question mark to it.\n\nDo you want to argue we should continue allowing it? Also, what about\nmissing trailing columns in COPY?\n\nIf we continue allowing missing INSERT columns, I am not sure we can\nclaim it is an extension or whether the standard requires the query to\nfail.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Apr 2002 23:37:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts" }, { "msg_contents": "A team member had a bug in their SQL code which would have been caught\nwith this, so I looked it up. Found the TODO entry indicating it was\nsomething that should be done. It was fairly simple to do, so I went\nforward with it.\n\nIf it's not wanted, then feel free to reject the patch and remove the\nTODO item -- or change the TODO item to indicate discussion is\nrequired.\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Rod Taylor\" <rbt@zort.ca>\nCc: <pgsql-patches@postgresql.org>\nSent: Sunday, April 14, 2002 11:09 PM\nSubject: Re: [PATCHES] ANSI Compliant Inserts\n\n\n> Rod Taylor <rbt@zort.ca> writes:\n> > /*\n> > ! 
* XXX It is possible that the targetlist has fewer entries than\nwere\n> > ! * in the columns list. We do not consider this an error.\nPerhaps we\n> > ! * should, if the columns list was explicitly given?\n> > */\n> > \n> > /* done building the range table and jointree */\n> > qry->rtable = pstate->p_rtable;\n> > --- 547,558 ----\n> > }\n> > \n> > /*\n> > ! * Ensure that the targetlist has the same number of entries\n> > ! * that were present in the columns list. Don't do the check\n> > ! * for select statements.\n> > */\n> > + if (stmt->cols !=\nNIL && (icolumns != NIL || attnos !=\nNIL))\n> > + elog(ERROR, \"INSERT has more target columns than expressions\");\n>\n>\n> What's the rationale for changing this exactly?\n>\n> The code might or might not need changing (I believe the XXX comment\n> questioning it is mine, in fact) but changing behavior without any\n> pghackers discussion is not the way to approach this.\n>\n> In general I'm suspicious of rejecting cases we used to accept for\n> no good reason other than that it's not in the spec. There is a LOT\n> of Postgres behavior that's not in the spec.\n>\n> regards, tom lane\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Sun, 14 Apr 2002 23:39:01 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts " }, { "msg_contents": "Rod Taylor wrote:\n> A team member had a bug in their SQL code which would have been caught\n> with this, so I looked it up. Found the TODO entry indicating it was\n> something that should be done. It was fairly simple to do, so I went\n> forward with it.\n> \n> If it's not wanted, then feel free to reject the patch and remove the\n> TODO item -- or change the TODO item to indicate discussion is\n> required.\n\nYes, that is the point. Tom thinks discussion is required on this item,\nwhile I can't imagine why. 
OK, Tom, let's discuss.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Apr 2002 23:40:59 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Do you want to argue we should continue allowing it?\n\nNo; I'm objecting that there hasn't been adequate discussion about\nthis change of behavior.\n\nBTW, if the rationale for the change is \"ANSI compliance\" then the patch\nis still wrong. SQL92 says:\n\n 3) No <column name> of T shall be identified more than once. If the\n <insert column list> is omitted, then an <insert column list>\n that identifies all columns of T in the ascending sequence of\n their ordinal positions within T is implicit.\n\n 5) Let QT be the table specified by the <query expression>. The\n degree of QT shall be equal to the number of <column name>s in\n the <insert column list>.\n\nThe patch enforces equality only for the case of an explicit <insert\ncolumn list> --- which is the behavior I suggested in the original\ncomment, but the spec clearly requires an exact match for an implicit\nlist too. How tight do we want to get?\n\nIn any case this discussion should be taking place someplace more public\nthan -patches.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 14 Apr 2002 23:49:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts " }, { "msg_contents": "I submitted a patch which would make Postgresql ANSI compliant in\nregards to INSERT with a provided column list. 
As Tom states below,\nthis is not full compliance.\n\nCREATE TABLE tab(col1 text, col2 text);\n\nINSERT INTO tab (col1, col2) VALUES ('val1'); -- bad by spec (enforced\nby patch)\nINSERT INTO tab (col1, col2) VALUES ('val1', 'val2'); -- good\n\nINSERT INTO tab VALUES ('val1'); -- bad by spec (not enforced)\nINSERT INTO tab VALUES ('val1', 'val2'); -- good\n\n\nCurrently in postgres all of the above are valid. I'd like to rule\nout the first case (as enforced by the patch) as it's obvious the user\nhad intended to have two values. Especially useful when the user\nmisses a value and inserts bad data into the table as a result.\n\nFor the latter one, it could be argued that the user understands the\ntable in question and has inserted the values they require. New\ncolumns are added at the end, and probably don't affect the operation\nin question so why should it be changed to suit new columns? But,\nautomated code should always be written with the columns explicitly\nlisted, so this may be a user who has simply forgotten to add the\nvalue -- easy to do on wide tables.\n\nThoughts?\n--\nRod Taylor\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nCc: \"Rod Taylor\" <rbt@zort.ca>; <pgsql-patches@postgresql.org>\nSent: Sunday, April 14, 2002 11:49 PM\nSubject: Re: [PATCHES] ANSI Compliant Inserts\n\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Do you want to argue we should continue allowing it?\n>\n> No; I'm objecting that there hasn't been adequate discussion about\n> this change of behavior.\n>\n> BTW, if the rationale for the change is \"ANSI compliance\" then the\npatch\n> is still wrong. SQL92 says:\n>\n> 3) No <column name> of T shall be identified more than\nonce. 
If the\n> <insert column list> is omitted, then an <insert column\nlist>\n> that identifies all columns of T in the ascending\nsequence of\n> their ordinal positions within T is implicit.\n>\n> 5) Let QT be the table specified by the <query expression>.\nThe\n> degree of QT shall be equal to the number of <column\nname>s in\n> the <insert column list>.\n>\n> The patch enforces equality only for the case of an explicit <insert\n> column list> --- which is the behavior I suggested in the original\n> comment, but the spec clearly requires an exact match for an\nimplicit\n> list too. How tight do we want to get?\n>\n> In any case this discussion should be taking place someplace more\npublic\n> than -patches.\n>\n> regards, tom lane\n>\n\n", "msg_date": "Mon, 15 Apr 2002 00:00:52 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ANSI Compliant Inserts " }, { "msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> CREATE TABLE tab(col1 text, col2 text);\n\n> INSERT INTO tab (col1, col2) VALUES ('val1'); -- bad by spec (enforced\n> by patch)\n> INSERT INTO tab (col1, col2) VALUES ('val1', 'val2'); -- good\n\n> INSERT INTO tab VALUES ('val1'); -- bad by spec (not enforced)\n> INSERT INTO tab VALUES ('val1', 'val2'); -- good\n\n> Currently in postgres all of the above are valid. I'd like to rule\n> out the first case (as enforced by the patch) as it's obvious the user\n> had intended to have two values.\n\nSeems reasonable.\n\n> For the latter one, it could be argued that the user understands the\n> table in question and has inserted the values they require.\n\nRuling out this case would break a technique that I've used a lot in the\npast, which is to put defaultable columns (eg, SERIAL columns) at the\nend, so that they can simply be left out of quick manual inserts.\nSo I agree with this part too. 
(I wouldn't necessarily write\napplication code that way, but then I believe in the theory that robust\napplication code should always specify an explicit column list.)\n\nFor the record --- I actually am in favor of this patch; but I wanted\nto see the change discussed and defended in a more widely-read mailing\nlist than -patches. If there are no objections from the assembled\nhackers, apply away ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Apr 2002 00:10:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ANSI Compliant Inserts " }, { "msg_contents": "\n[ Discussion moved to hackers.]\n\nWe are discussing TODO item:\n\n o Disallow missing columns in INSERT ... VALUES, per ANSI\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Do you want to argue we should continue allowing it?\n> \n> No; I'm objecting that there hasn't been adequate discussion about\n> this change of behavior.\n\nSo, you don't want to allow it, I don't want to allow it, the patch\nauthor doesn't want to allow it. The reason the item doesn't require\nmuch discussion is that I can't imagine anyone arguing we should allow\nit. If there is anyone out there that doesn't want the listed TODO item\ncompleted, please chime in now.\n\n\n> BTW, if the rationale for the change is \"ANSI compliance\" then the patch\n> is still wrong. SQL92 says:\n> \n> 3) No <column name> of T shall be identified more than once. If the\n> <insert column list> is omitted, then an <insert column list>\n> that identifies all columns of T in the ascending sequence of\n> their ordinal positions within T is implicit.\n> \n> 5) Let QT be the table specified by the <query expression>. 
The\n> degree of QT shall be equal to the number of <column name>s in\n> the <insert column list>.\n> \n> The patch enforces equality only for the case of an explicit <insert\n> column list> --- which is the behavior I suggested in the original\n> comment, but the spec clearly requires an exact match for an implicit\n> list too. How tight do we want to get?\n\nYes, I think we want both implicit and explicit column names to match\nthe VALUES list. We do have DEFAULT for INSERT now, so that should make\nthings somewhat easier for people wanting to insert DEFAULT values\nwithout specifying the column list.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Apr 2002 00:11:34 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ANSI Compliant Inserts" }, { "msg_contents": "Rod Taylor wrote:\n> For the latter one, it could be argued that the user understands the\n> table in question and has inserted the values they require. New\n> columns are added at the end, and probably don't affect the operation\n> in question so why should it be changed to suit new columns? But,\n> automated code should always be written with the columns explicitly\n> listed, so this may be a user who has simply forgotten to add the\n> value -- easy to do on wide tables.\n\nI think our new DEFAULT for insert allows people to properly match all\ncolumns, and I think it is too error prone to allow missing columns in\nany INSERT.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Apr 2002 00:13:46 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ANSI Compliant Inserts" }, { "msg_contents": "Tom Lane wrote:\n> \"Rod Taylor\" <rbt@zort.ca> writes:\n> > CREATE TABLE tab(col1 text, col2 text);\n> \n> > INSERT INTO tab (col1, col2) VALUES ('val1'); -- bad by spec (enforced\n> > by patch)\n> > INSERT INTO tab (col1, col2) VALUES ('val1', 'val2'); -- good\n> \n> > INSERT INTO tab VALUES ('val1'); -- bad by spec (not enforced)\n> > INSERT INTO tab VALUES ('val1', 'val2'); -- good\n> \n> > Currently in postgres all of the above are valid. I'd like to rule\n> > out the first case (as enforced by the patch) as it's obvious the user\n> > had intended to have two values.\n> \n> Seems reasonable.\n> \n> > For the latter one, it could be argued that the user understands the\n> > table in question and has inserted the values they require.\n> \n> Ruling out this case would break a technique that I've used a lot in the\n> past, which is to put defaultable columns (eg, SERIAL columns) at the\n> end, so that they can simply be left out of quick manual inserts.\n> So I agree with this part too. (I wouldn't necessarily write\n> application code that way, but then I believe in the theory that robust\n> application code should always specify an explicit column list.)\n\nYes, I understand the tempation to put the columns needing default at\nthe end and skipping them on INSERT. 
However, our new DEFAULT insert\nvalue seems to handle that nicely, certainly better than the old code\ndid, and I think the added robustness of now requiring full columns on\nINSERT is worth it.\n\nI realize this could break some apps, but with the new DEFAULT value, it\nseems like a good time to reign in this error-prone capability.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Apr 2002 00:17:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ANSI Compliant Inserts" }, { "msg_contents": "> > INSERT INTO tab VALUES ('val1'); -- bad by spec (not enforced)\n> > INSERT INTO tab VALUES ('val1', 'val2'); -- good\n>\n> I recall that this was the behavior we agreed we wanted. IMHO, it\nwould\n> be conditional on the INSERT ... VALUES (DEFAULT) capability being\n> provided. I'm not sure if that is there yet.\n\nMy patch for that was applied a couple weeks ago.\n\n", "msg_date": "Mon, 15 Apr 2002 00:24:05 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ANSI Compliant Inserts " }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> Ruling out this case would break a technique that I've used a lot in the\n>> past, which is to put defaultable columns (eg, SERIAL columns) at the\n>> end, so that they can simply be left out of quick manual inserts.\n\n> Yes, I understand the tempation to put the columns needing default at\n> the end and skipping them on INSERT. 
However, our new DEFAULT insert\n> value seems to handle that nicely, certainly better than the old code\n> did, and I think the added robustness of now requiring full columns on\n> INSERT is worth it.\n\nIf I have two or three defaultable columns (say, a SERIAL primary key\nand an insertion timestamp), it's going to be a pain in the neck to\nhave to write DEFAULT, DEFAULT, ... at the end of every insert.\n\nI feel that people who want error cross-checking on this will have used\nan explicit column list anyway. Therefore, Rod's patch tightens the\ncase that should be tight, while still being appropriately loose for\ncasual manual inserts.\n\nBTW, I do *not* agree with equating this case with COPY. COPY is mostly\nused for loading dumped data, and so it's reasonable to make different\ntradeoffs between error checking and friendliness for COPY and INSERT.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Apr 2002 00:24:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ANSI Compliant Inserts " }, { "msg_contents": "Rod Taylor writes:\n\n> I submitted a patch which would make Postgresql ANSI compliant in\n> regards to INSERT with a provided column list. As Tom states below,\n> this is not full compliance.\n>\n> CREATE TABLE tab(col1 text, col2 text);\n>\n> INSERT INTO tab (col1, col2) VALUES ('val1'); -- bad by spec (enforced\n> by patch)\n> INSERT INTO tab (col1, col2) VALUES ('val1', 'val2'); -- good\n>\n> INSERT INTO tab VALUES ('val1'); -- bad by spec (not enforced)\n> INSERT INTO tab VALUES ('val1', 'val2'); -- good\n\nI recall that this was the behavior we agreed we wanted. IMHO, it would\nbe conditional on the INSERT ... VALUES (DEFAULT) capability being\nprovided. 
I'm not sure if that is there yet.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 15 Apr 2002 00:26:15 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ANSI Compliant Inserts " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I recall that this was the behavior we agreed we wanted. IMHO, it would\n> be conditional on the INSERT ... VALUES (DEFAULT) capability being\n> provided. I'm not sure if that is there yet.\n\nThat is there now. Do you recall when this was discussed before?\nI couldn't remember if there'd been any real discussion or not.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Apr 2002 00:26:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ANSI Compliant Inserts " }, { "msg_contents": "Peter Eisentraut wrote:\n> Rod Taylor writes:\n> \n> > I submitted a patch which would make Postgresql ANSI compliant in\n> > regards to INSERT with a provided column list. As Tom states below,\n> > this is not full compliance.\n> >\n> > CREATE TABLE tab(col1 text, col2 text);\n> >\n> > INSERT INTO tab (col1, col2) VALUES ('val1'); -- bad by spec (enforced\n> > by patch)\n> > INSERT INTO tab (col1, col2) VALUES ('val1', 'val2'); -- good\n> >\n> > INSERT INTO tab VALUES ('val1'); -- bad by spec (not enforced)\n> > INSERT INTO tab VALUES ('val1', 'val2'); -- good\n> \n> I recall that this was the behavior we agreed we wanted. IMHO, it would\n> be conditional on the INSERT ... VALUES (DEFAULT) capability being\n> provided. 
I'm not sure if that is there yet.\n\nYes, it is key to have DEFAULT working before we change this, and it is\nin CVS now, committed a week or two ago.\n\nPeter, are you saying you don't want to require all columns to be\nspecified when INSERT doesn't list the columns?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Apr 2002 00:27:53 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ANSI Compliant Inserts" }, { "msg_contents": "Bruce Momjian writes:\n\n> Peter, are you saying you don't want to require all columns to be\n> specified when INSERT doesn't list the columns?\n\nYes, that's what I'm saying. Too much breakage and annoyance potential in\nthat change.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 15 Apr 2002 00:52:21 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ANSI Compliant Inserts" }, { "msg_contents": "IMHO, from a developers/users prospective I want to get atleast a NOTICE \nwhen they mismatch, and really preferably I want the query to fail \nbecause if I'm naming a column list then forget a value for it something \nis wrong...\n\nBruce Momjian wrote:\n\n>Tom Lane wrote:\n>\n>>Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>>\n>>>>In general I'm suspicious of rejecting cases we used to accept for\n>>>>no good reason other than that it's not in the spec. There is a LOT\n>>>>of Postgres behavior that's not in the spec.\n>>>>\n>>>TODO has:\n>>> o Disallow missing columns in INSERT ... VALUES, per ANSI\n>>>\n>>Where's the discussion that's the basis of that entry? I don't recall\n>>any existing consensus on this (though maybe I forgot).\n>>\n>\n>I assume someone (Peter?) 
looked it up and reported that our current\n>behavior was incorrect and not compliant. I didn't do the research in\n>whether it was compliant.\n>\n>>There are a fair number of things in the TODO list that you put there\n>>because you liked 'em, but that doesn't mean everyone else agrees.\n>>I certainly will not accept \"once it's on the TODO list it cannot be\n>>questioned\"...\n>>\n>\n>I put it there because I didn't think there was any question. If I was\n>wrong, I can add a question mark to it.\n>\n>Do you want to argue we should continue allowing it? Also, what about\n>missing trailling columns in COPY?\n>\n>If we continue allowing missing INSERT columns, I am not sure we can\n>claim it is an extension or whether the standard requires the query to\n>fail.\n>\n", "msg_date": "Sun, 14 Apr 2002 22:12:36 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts" }, { "msg_contents": "Michael Loftis <mloftis@wgops.com> writes:\n> IMHO, from a developers/users prospective I want to get atleast a NOTICE \n> when they mismatch, and really preferably I want the query to fail \n> because if I'm naming a column list then forget a value for it something \n> is wrong...\n\nSo far I think everyone agrees that if an explicit column name list is\ngiven, then it should fail if the column values don't match up. But\nwhat do you think about the case with no column name list?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Apr 2002 01:27:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts " }, { "msg_contents": "\n\nTom Lane wrote:\n\n>\n>So far I think everyone agrees that if an explicit column name list is\n>given, then it should fail if the column values don't match up. But\n>what do you think about the case with no column name list?\n>\nI'm on the fence in that situation. Though I'd lean towards a patch \nthats a sort of compromise. IIF the 'remaining' columns (IE columns \nunspecified) have some sort of default or auto-generated value (forgive \nme I'm just getting back into workign with postgresql) like a SERIAL or \nTIMESTAMP allow it, IFF any of them do not have a default value then \nfail.
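To sketch that compromise with hypothetical tables (an illustration of the\nproposed rule only, not the actual patch):\n\nCREATE TABLE logged (\n    note text,\n    id   serial,                     -- implicit default (sequence)\n    ts   timestamp DEFAULT now()     -- explicit default\n);\nINSERT INTO logged VALUES ('hi');    -- allowed: omitted id and ts have defaults\n\nCREATE TABLE bare (a text, b text);  -- b has no default of any kind\nINSERT INTO bare VALUES ('x');       -- rejected under the proposed rule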
This will make it 'do the right thing' -- it's not exactly what \nthe spec does, but it's close to the current behavior that several \nothers (including myself) see as beneficial in the case of interactive use.\n\nAs far as implementation of this sort of compromise, I'm not sure, but \nit should be possible, assuming the planner knows/flags triggers on \ncolumn inserts and can make decisions and reject the query based on that \ninformation (I don't think that information would be in the parser)\n\n>\n>\n>\t\t\tregards, tom lane\n>\n\n\n", "msg_date": "Mon, 15 Apr 2002 00:35:32 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts" }, { "msg_contents": "On Mon, 15 Apr 2002, Tom Lane wrote:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > I recall that this was the behavior we agreed we wanted. IMHO, it would\n> > be conditional on the INSERT ... VALUES (DEFAULT) capability being\n> > provided. I'm not sure if that is there yet.\n>\n> That is there now. Do you recall when this was discussed before?\n> I couldn't remember if there'd been any real discussion or not.\n\nIt has to be at least a year, Tom. I brought it up in hackers after\nI got bit by it. I had a rather long insert statement and missed a\nvalue in the middle somewhere which shifted everything by one.
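To sketch the failure mode with a hypothetical table (the old behavior\naccepts the statement, and every value after the omission lands one\ncolumn to the left):\n\nCREATE TABLE contact (id int, name text, city text, zip text);\n\n-- The city value was accidentally dropped from the middle of the list:\nINSERT INTO contact VALUES (1, 'Smith', '19026');\n-- Intended: city = 'Springfield', zip = '19026'\n-- Stored:   city = '19026',       zip = NULL   (accepted silently)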
It\nwas agreed that it shouldn't happen but I don't recall what else was\ndecided.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 15 Apr 2002 05:42:10 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ANSI Compliant Inserts " }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Tom Lane wrote:\n> \n> > There are a fair number of things in the TODO list that you put there\n> > because you liked 'em, but that doesn't mean everyone else agrees.\n> > I certainly will not accept \"once it's on the TODO list it cannot be\n> > questioned\"...\n> \n> I put it there because I didn't think there was any question.\n\nHonetsly I don't understand what TODO means.\nCan a developer solve the TODOs any way he likes ?\n\nregards,\nHiroshi Inoue\n", "msg_date": "Mon, 15 Apr 2002 19:21:33 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts" }, { "msg_contents": "Michael Loftis <mloftis@wgops.com> writes:\n> I'm on the fence in that situation. Though I'd lean towards a patch \n> thats a sort of compromise. IIF the 'remaining' columns (IE columns \n> unspecified) have some sort of default or auto-generated value (forgive \n> me I'm just getting back into workign with postgresql) like a SERIAL or \n> TIMESTAMP allow it, IFF any of them do not have a default value then \n> fail. This will make it 'do the right thing'\n\nI think the apparent security is illusory. 
Given the presence of ALTER\nTABLE ADD/DROP DEFAULT, the parser might easily accept a statement for\nwhich an end-column default has been dropped by the time the statement\ncomes to be executed. (Think about an INSERT in a rule.)\n\nAnother reason for wanting it to work as proposed is ADD COLUMN.\nConsider\n\nCREATE TABLE foo (a, b, c);\n\ncreate rule including INSERT INTO foo(a,b,c) VALUES(..., ..., ...);\n\nALTER TABLE foo ADD COLUMN d;\n\nThe rule still works, and will be interpreted as inserting the default\nvalue (NULL if unspecified) into column d.\n\nNow consider same scenario except I write the rule's INSERT without\nan explicit column list. If we follow the letter of the spec, the\nrule will now fail. How is this sensible or consistent behavior?\nThe case that should be laxer/easier is being treated *more* rigidly.\n\nIn any case, the above comparison shows that it's not very consistent\nto require explicit defaults to be available for the omitted column(s).\nINSERT with an explicit column list does not have any such requirement.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Apr 2002 10:08:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts " }, { "msg_contents": "\n\nTom Lane wrote:\n\n>Michael Loftis <mloftis@wgops.com> writes:\n>\n>>\\<snip snip -- no I'm not ignoring anything just cutting down on quoting>\n>>\n>\n>\n>Now consider same scenario except I write the rule's INSERT without\n>an explicit column list. If we follow the letter of the spec, the\n>rule will now fail. 
How is this sensible or consistent behavior?\n>The case that should be laxer/easier is being treated *more* rigidly.\n>\n>In any case, the above comparison shows that it's not very consistent\n>to require explicit defaults to be available for the omitted column(s).\n>INSERT with an explicit column list does not have any such requirement.\n>\n>\t\t\tregards, tom lane\n>\nIn the case of an implicit response it's to be treated as if all columns \nhad been specified (according to the spec). Which is why the spec says \nthat if you miss a column it's a bad INSERT unless you have specified \nonly a subset.\n\nUnless I'm misreading (which I could be)\n\nEither way, as I said I'm not wholly in favor of either way because both \nsolutions to me make equal sense, but keepign the ability to 'assume' \ndefault values (whether it's NULL or derived) is the better one in this \ncase, if the race condition is indeed an issue.\n\nBTW tom, I can't send mail directly to you -- \nblack-holes.five-ten-sg.com (or something like that) lists most of \nSpeakeasy's netblocks (As well as the rest of the world heh) as \n'dialups' and such. 
It's a liiittle over-paranoid to drop mail based on \ntheir listings, but just wanted to letcha know you are losing some \nlegitimate mail :) No biggie though I just post ot the list anyway so \nyou get it still ;)\n\nMichael\n\n", "msg_date": "Mon, 15 Apr 2002 07:33:20 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts" }, { "msg_contents": "Hiroshi Inoue wrote:\n> Bruce Momjian wrote:\n> > \n> > Tom Lane wrote:\n> > \n> > > There are a fair number of things in the TODO list that you put there\n> > > because you liked 'em, but that doesn't mean everyone else agrees.\n> > > I certainly will not accept \"once it's on the TODO list it cannot be\n> > > questioned\"...\n> > \n> > I put it there because I didn't think there was any question.\n> \n> Honetsly I don't understand what TODO means.\n> Can a developer solve the TODOs any way he likes ?\n\nI meant to say there was no question we wanted this item fixed, not that\nthere was no need for implementation discussions.\n\nIn summary, code changes have three stages:\n\n\to Do we want this feature?\n\to How do we want the feature to behave?\n\to How do we want the feature implemented?\n\nTom was complaining because the patch appeared without enough discussion\non these items. However, from my perspective, this is really trying to\nmicromanage the process. When people post patches, we aren't forced to\napply them. If people want to code things up and just send them in and\nthen are willing to get into a discussion to make sure all our questions\nlisted above are dealt with, that is fine with me. \n\nTo think that everyone is going to follow the process of going through\nthose three stages before submitting a patch isn't realistic. I think\nwe have to be a little flexible and work with these submitters so their\ninvolvment in the project is both positive for them and positive for the\nproject. 
Doing discussion sometimes in a backward order is sometimes\nrequired to accomplish this.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Apr 2002 16:00:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts" }, { "msg_contents": "Vince Vielhaber wrote:\n> On Mon, 15 Apr 2002, Tom Lane wrote:\n> \n> > Peter Eisentraut <peter_e@gmx.net> writes:\n> > > I recall that this was the behavior we agreed we wanted. IMHO, it would\n> > > be conditional on the INSERT ... VALUES (DEFAULT) capability being\n> > > provided. I'm not sure if that is there yet.\n> >\n> > That is there now. Do you recall when this was discussed before?\n> > I couldn't remember if there'd been any real discussion or not.\n> \n> It has to be at least a year, Tom. I brought it up in hackers after\n> I got bit by it. I had a rather long insert statement and missed a\n> value in the middle somewhere which shifted everything by one. It\n> was agreed that it shouldn't happen but I don't recall what else was\n> decided.\n\nYes, I do remember Vince's comment, and I do believe that is the time it\nwas added.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Apr 2002 16:03:40 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ANSI Compliant Inserts" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > Peter, are you saying you don't want to require all columns to be\n> > specified when INSERT doesn't list the columns?\n> \n> Yes, that's what I'm saying. 
Too much breakage and annoyance potential in\n> that change.\n\nOK, how about a NOTICE stating that the missing columns were filled in\nwith defaults?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Apr 2002 16:04:07 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ANSI Compliant Inserts" }, { "msg_contents": "Bruce Momjian writes:\n\n> OK, how about a NOTICE stating that the missing columns were filled in\n> with defaults?\n\nPlease not.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 15 Apr 2002 16:56:12 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ANSI Compliant Inserts" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> > Bruce Momjian wrote:\n> > >\n> > > Tom Lane wrote:\n> > >\n> > > > There are a fair number of things in the TODO list that you put there\n> > > > because you liked 'em, but that doesn't mean everyone else agrees.\n> > > > I certainly will not accept \"once it's on the TODO list it cannot be\n> > > > questioned\"...\n> > >\n> > > I put it there because I didn't think there was any question.\n> >\n> > Honetsly I don't understand what TODO means.\n> > Can a developer solve the TODOs any way he likes ?\n> \n> I meant to say there was no question we wanted this item fixed, not that\n> there was no need for implementation discussions.\n> \n> In summary, code changes have three stages:\n> \n> o Do we want this feature?\n> o How do we want the feature to behave?\n> o How do we want the feature implemented?\n> \n> Tom was complaining because the patch appeared without enough discussion\n> on these items. However, from my perspective, this is really trying to\n> micromanage the process. 
When people post patches, we aren't forced to\n> apply them. \n\nBut shouldn't someone check the patch ?\nIf the patch is small, making the patch seems\nthe simplest way for anyone but if the patch\nis big, it seems painful for anyone to check\nthe patch. If no one checks the patch, would\nwe apply the patch blindly or reject it ?\n\nregards,\nHiroshi Inoue\n", "msg_date": "Tue, 16 Apr 2002 09:15:23 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts" }, { "msg_contents": "Hiroshi Inoue wrote:\n> > I meant to say there was no question we wanted this item fixed, not that\n> > there was no need for implementation discussions.\n> > \n> > In summary, code changes have three stages:\n> > \n> > o Do we want this feature?\n> > o How do we want the feature to behave?\n> > o How do we want the feature implemented?\n> > \n> > Tom was complaining because the patch appeared without enough discussion\n> > on these items. However, from my perspective, this is really trying to\n> > micromanage the process. When people post patches, we aren't forced to\n> > apply them. \n> \n> But shouldn't someone check the patch ?\n> If the patch is small, making the patch seems\n> the simplest way for anyone but if the patch\n> is big, it seems painful for anyone to check\n> the patch. If no one checks the patch, would\n> we apply the patch blindly or reject it ?\n\nOf course, we would review any patch before application. I guess the\nfull path is:\n\n o Do we want this feature?\n o How do we want the feature to behave?\n o How do we want the feature implemented?\n\to Submit patch\n\to Review patch\n\to Apply patch\n\nI assume your point is that people shouldn't send in big patches\nwithout going through the discussion first. 
Yes, that is ideal, but if\nthey don't, we just discuss it after the patch appears, and then decide\nif we want to apply it or ask for modifications.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Apr 2002 20:47:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> \n> Of course, we would review any patch before application. I guess the\n> full path is:\n> \n> o Do we want this feature?\n> o How do we want the feature to behave?\n> o How do we want the feature implemented?\n> o Submit patch\n> o Review patch\n> o Apply patch\n> \n> I assume your point is that people shouldn't send in big patches\n> without going through the discussion first. Yes, that is ideal, but if\n> they don't, we just discuss it after the patch appears, and then decide\n> if we want to apply it or ask for modifications.\n\nFor example, I don't understand what pg_depend intends.\nIs there any consensus on it ?\n\nregards,\nHiroshi Inoue\n", "msg_date": "Tue, 16 Apr 2002 10:55:08 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts" }, { "msg_contents": "...\n> OK, how about a NOTICE stating that the missing columns were filled in\n> with defaults?\n\nYuck. 
There is a short path from that to rejecting the insert, but\nprinting the entire insert statement which would have been acceptable in\nthe error message ;)\n\n - Thomas\n", "msg_date": "Mon, 15 Apr 2002 19:45:59 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ANSI Compliant Inserts" }, { "msg_contents": "Hiroshi Inoue wrote:\n> Bruce Momjian wrote:\n> > \n> > Hiroshi Inoue wrote:\n> > \n> > Of course, we would review any patch before application. I guess the\n> > full path is:\n> > \n> > o Do we want this feature?\n> > o How do we want the feature to behave?\n> > o How do we want the feature implemented?\n> > o Submit patch\n> > o Review patch\n> > o Apply patch\n> > \n> > I assume your point is that people shouldn't send in big patches\n> > without going through the discussion first. Yes, that is ideal, but if\n> > they don't, we just discuss it after the patch appears, and then decide\n> > if we want to apply it or ask for modifications.\n> \n> For example, I don't understand what pg_depend intends.\n> Is there any consensus on it ?\n\nUh, we know we want dependency checking to fix many problems; see TODO\ndependency checking section. As far as how it is implemented, I haven't\nstudied it. I was going to wait for Tom to like it first and then give\nit a review.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Apr 2002 22:46:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> >\n> > For example, I don't understand what pg_depend intends.\n> > Is there any consensus on it ?\n> \n> Uh, we know we want dependency checking to fix many problems; see TODO\n> dependency checking section.\n\nYes I know it's a very siginificant mechanism. It would contibute\ne.g. to the DROP COLUMN implementation considerably but no one\nreferred to pg_depend in the recent discussion about DROP COLUMN.\nSo I've thought pg_depend is for something else. \n\nregards,\nHiroshi Inoue\n", "msg_date": "Tue, 16 Apr 2002 12:18:16 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts" }, { "msg_contents": "Hiroshi Inoue wrote:\n> Bruce Momjian wrote:\n> > \n> > Hiroshi Inoue wrote:\n> > >\n> > > For example, I don't understand what pg_depend intends.\n> > > Is there any consensus on it ?\n> > \n> > Uh, we know we want dependency checking to fix many problems; see TODO\n> > dependency checking section.\n> \n> Yes I know it's a very siginificant mechanism. It would contibute\n> e.g. to the DROP COLUMN implementation considerably but no one\n> referred to pg_depend in the recent discussion about DROP COLUMN.\n> So I've thought pg_depend is for something else. \n\nOh, clearly pg_depend will fix many of our problems, or make some\nproblems much easier to fix. I am excited to see it happening!\n\nWe had a pg_depend discussion about 9 months ago and hashed out a plan\nbut were just waiting for someone to do the work.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Apr 2002 23:21:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> > Bruce Momjian wrote:\n> > >\n> > > Hiroshi Inoue wrote:\n> > > >\n> > > > For example, I don't understand what pg_depend intends.\n> > > > Is there any consensus on it ?\n> > >\n> > > Uh, we know we want dependency checking to fix many problems; see TODO\n> > > dependency checking section.\n> >\n> > Yes I know it's a very siginificant mechanism. It would contibute\n> > e.g. to the DROP COLUMN implementation considerably but no one\n> > referred to pg_depend in the recent discussion about DROP COLUMN.\n> > So I've thought pg_depend is for something else.\n> \n> Oh, clearly pg_depend will fix many of our problems, or make some\n> problems much easier to fix. I am excited to see it happening!\n> \n> We had a pg_depend discussion about 9 months ago and hashed out a plan\n> but were just waiting for someone to do the work.\n\nProbably I overlooked the conclusion then.\nWhat was the conclusion of the discussion ?\n\nregards,\nHiroshi Inoue\n", "msg_date": "Tue, 16 Apr 2002 12:41:25 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts" }, { "msg_contents": "Hiroshi Inoue wrote:\n> > Oh, clearly pg_depend will fix many of our problems, or make some\n> > problems much easier to fix. 
I am excited to see it happening!\n> > \n> > We had a pg_depend discussion about 9 months ago and hashed out a plan\n> > but were just waiting for someone to do the work.\n> \n> Probably I overlooked the conclusion then.\n> What was the conclusion of the discussion ?\n\nHere is one of the threads:\n\n http://groups.google.com/groups?hl=en&threadm=Pine.NEB.4.21.0107171458030.586-100000%40candlekeep.home-net.internetconnect.net&rnum=1&prev=/groups%3Fq%3Dpg_depend%2Bgroup:comp.databases.postgresql.*%26hl%3Den%26selm%3DPine.NEB.4.21.0107171458030.586-100000%2540candlekeep.home-net.internetconnect.net%26rnum%3D1\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Apr 2002 23:51:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts" }, { "msg_contents": "Hiroshi Inoue writes:\n\n> > Oh, clearly pg_depend will fix many of our problems, or make some\n> > problems much easier to fix. I am excited to see it happening!\n> >\n> > We had a pg_depend discussion about 9 months ago and hashed out a plan\n> > but were just waiting for someone to do the work.\n>\n> Probably I overlooked the conclusion then.\n> What was the conclusion of the discussion ?\n\nPersonally, I think there wasn't any. 
Personally part 2, showing some\ncode is always a good way for new contributors to show they're for real.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 16 Apr 2002 00:20:25 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I was going to wait for Tom to like it first and then give\n> it a review.\n\nMy review is posted ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Apr 2002 00:43:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts " }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> Hiroshi Inoue writes:\n> \n> > Probably I overlooked the conclusion then.\n> > What was the conclusion of the discussion ?\n> \n> Personally, I think there wasn't any. Personally part 2, showing some\n> code is always a good way for new contributors to show they're for real.\n\nCertainly but once an accomplished fact is there, it requires\na great deal of efforts to put it back to the neutral position.\nFor example, all I've done this month are such kind of things.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Tue, 16 Apr 2002 14:23:48 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts" }, { "msg_contents": "Hiroshi Inoue wrote:\n> Peter Eisentraut wrote:\n> > \n> > Hiroshi Inoue writes:\n> > \n> > > Probably I overlooked the conclusion then.\n> > > What was the conclusion of the discussion ?\n> > \n> > Personally, I think there wasn't any. 
Personally part 2, showing some\n> > code is always a good way for new contributors to show they're for real.\n> \n> Certainly but once an accomplished fact is there, it requires\n> a great deal of efforts to put it back to the neutral position.\n> For example, all I've done this month are such kind of things.\n\nYes, certain features have different implementation possibilities. \nSometimes this information is covered in TODO.detail, but often it is\nnot.\n\nI assume your point is that once a patch appears, it is harder to argue\nto change the implementation than if they had asked for a discussion\nfirst.\n\nI guess the only thing I can say is that we shouldn't feel bad about\nasking more questions and opening up implementation issues, even if a\npatch has already been prepared. I should give a larger delay for\napplying those patches so people can ask more questions and bring up\nimplementation issues.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Apr 2002 12:54:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts" }, { "msg_contents": "\nI have added this bullet list to the developer's FAQ.\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> Hiroshi Inoue wrote:\n> > Bruce Momjian wrote:\n> > > \n> > > Hiroshi Inoue wrote:\n> > > \n> > > Of course, we would review any patch before application. 
I guess the\n> > > full path is:\n> > > \n> > > o Do we want this feature?\n> > > o How do we want the feature to behave?\n> > > o How do we want the feature implemented?\n> > > o Submit patch\n> > > o Review patch\n> > > o Apply patch\n> > > \n> > > I assume your point is that people shouldn't send in big patches\n> > > without going through the discussion first. Yes, that is ideal, but if\n> > > they don't, we just discuss it after the patch appears, and then decide\n> > > if we want to apply it or ask for modifications.\n> > \n> > For example, I don't understand what pg_depend intends.\n> > Is there any consensus on it ?\n> \n> Uh, we know we want dependency checking to fix many problems; see TODO\n> dependency checking section. As far as how it is implemented, I haven't\n> studied it. I was going to wait for Tom to like it first and then give\n> it a review.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Apr 2002 22:10:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nRod Taylor wrote:\n> Reports missing values as bad.\n> \n> BAD: INSERT INTO tab (col1, col2) VALUES ('val1');\n> GOOD: INSERT INTO tab (col1, col2) VALUES ('val1', 'val2');\n> \n> Regress tests against DEFAULT and normal values as they're managed\n> slightly different.\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Apr 2002 00:28:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts" }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nRod Taylor wrote:\n> Reports missing values as bad.\n> \n> BAD: INSERT INTO tab (col1, col2) VALUES ('val1');\n> GOOD: INSERT INTO tab (col1, col2) VALUES ('val1', 'val2');\n> \n> Regress tests against DEFAULT and normal values as they're managed\n> slightly different.\n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Apr 2002 22:22:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ANSI Compliant Inserts" } ]
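The rule Rod's patch enforces — an INSERT must supply exactly as many expressions as it names target columns, with DEFAULT counting as a supplied value — can be sketched in isolation. This is a hypothetical standalone check for illustration, not the actual parser code from the patch:

```python
DEFAULT = object()  # stand-in for the SQL DEFAULT keyword

def check_insert(columns, values):
    # Reject the mismatch the patch reports as bad:
    #   BAD:  INSERT INTO tab (col1, col2) VALUES ('val1');
    #   GOOD: INSERT INTO tab (col1, col2) VALUES ('val1', 'val2');
    if len(values) < len(columns):
        raise ValueError("INSERT has more target columns than expressions")
    if len(values) > len(columns):
        raise ValueError("INSERT has more expressions than target columns")
    # DEFAULT is an explicit value but is managed slightly differently
    # from ordinary expressions -- hence the patch's separate regression
    # tests for DEFAULT and normal values.
    return {c: v for c, v in zip(columns, values)}
```

Before the patch, the short VALUES list was silently padded rather than rejected, which is what the ANSI-compliance complaint was about.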
[ { "msg_contents": "Could this get cleaned up please?\n\nmake[4]: Entering directory `/home/postgres/pgsql/src/interfaces/ecpg/preproc'\nbison -y -d preproc.y\nmv -f y.tab.c ./preproc.c\nmv -f y.tab.h ./preproc.h\ngcc -O1 -Wall -Wmissing-prototypes -Wmissing-declarations -g -Wno-error -I./../include -I. -I../../../../src/include -DMAJOR_VERSION=2 -DMINOR_VERSION=10 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"/home/postgres/testversion/include\\\" -c -o preproc.o preproc.c\npreproc.y: In function `yyparse':\npreproc.y:3844: warning: assignment makes integer from pointer without a cast\npreproc.y:3906: warning: assignment makes integer from pointer without a cast\npreproc.y:3914: warning: assignment makes integer from pointer without a cast\npreproc.y:3922: warning: assignment makes integer from pointer without a cast\npreproc.y:3930: warning: assignment makes integer from pointer without a cast\npreproc.y:3944: warning: assignment makes integer from pointer without a cast\npreproc.y:3952: warning: assignment makes integer from pointer without a cast\npreproc.y:3960: warning: assignment makes integer from pointer without a cast\npreproc.y:4100: warning: passing arg 3 of `ECPGmake_struct_type' makes pointer from integer without a cast\npreproc.y:4102: warning: passing arg 3 of `ECPGmake_struct_type' makes pointer from integer without a cast\npreproc.y:4449: warning: assignment makes integer from pointer without a cast\npreproc.y:4540: warning: passing arg 3 of `ECPGmake_struct_type' makes pointer from integer without a cast\npreproc.y:4542: warning: passing arg 3 of `ECPGmake_struct_type' makes pointer from integer without a cast\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 14 Apr 2002 22:15:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "ecpg/preproc/preproc.y now generates lots of warnings" }, { "msg_contents": "On Sun, Apr 14, 2002 at 10:15:50PM -0400, Tom Lane wrote:\n> Could this get cleaned up please?\n\nArgh! 
Sorry, don't know how that typo made it in. I just fixed it.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Tue, 16 Apr 2002 09:10:02 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: ecpg/preproc/preproc.y now generates lots of warnings" } ]
[ { "msg_contents": "If Bruce is thinking of applying outstanding patches - whatever happened\nwith Bill Studenmund's CREATE OPERATOR CLASS patch?\n\nChris\n\n", "msg_date": "Mon, 15 Apr 2002 10:36:05 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "That CREATE OPERATOR CLASS patch" }, { "msg_contents": "\nGood question. I see the thread at:\n\n\thttp://groups.google.com/groups?hl=en&threadm=Pine.LNX.4.30.0202262002040.685-100000%40peter.localdomain&rnum=2&prev=/groups%3Fq%3Dcreate%2Boperator%2Bgroup:comp.databases.postgresql.*%26hl%3Den%26selm%3DPine.LNX.4.30.0202262002040.685-100000%2540peter.localdomain%26rnum%3D2\n\nI asked the author to resumit but did not see a reply. Perhaps someone\nelse can take it over and make the requested changes. Thanks.\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> If Bruce is thinking of applying outstanding patches - whatever happened\n> with Bill Studenmund's CREATE OPERATOR CLASS patch?\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Apr 2002 22:52:15 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: That CREATE OPERATOR CLASS patch" }, { "msg_contents": "On Sun, 14 Apr 2002, Bruce Momjian wrote:\n\n>\n> Good question. 
I see the thread at:\n>\n> \thttp://groups.google.com/groups?hl=en&threadm=Pine.LNX.4.30.0202262002040.685-100000%40peter.localdomain&rnum=2&prev=/groups%3Fq%3Dcreate%2Boperator%2Bgroup:comp.databases.postgresql.*%26hl%3Den%26selm%3DPine.LNX.4.30.0202262002040.685-100000%2540peter.localdomain%26rnum%3D2\n>\n> I asked the author to resumit but did not see a reply. Perhaps someone\n> else can take it over and make the requested changes. Thanks.\n\nYeah, the problem is:\n\n1) the author was (is) feeling a bit burnt out over successive battles\nover the command syntax. I proposed it, and Tom said, \"that syntax sucks\n[which it verily did], try this.\" So I did \"this\" and got the patch that\nis lying around. Then when 7.3 was done, someone else chimed in (I think\nit was Peter) that it should be different. While the newer-yet command\nsyntax is better, *why wasn't it proposed the first time we went through\nthis on hackers?* It's frustrating to ask, \"what should I do,\" do it, and\nthen get told, \"no, that's not right.\" I mean, now, how do I know that\nonce I get a new version ready, it won't get revised *again*?\n\n2) I now work at a new job, which is taking up lots of my time doing other\nthings. It's really cool, but PostgreSQL hacking isn't a paid part of it\n(like it was at Zembu). 
In a few weeks I can probably get time to update\nthis, but if Christopher wants to work on it, go for it.\n\nTake care,\n\nBill\n\n> ---------------------------------------------------------------------------\n>\n> Christopher Kings-Lynne wrote:\n> > If Bruce is thinking of applying outstanding patches - whatever happened\n> > with Bill Studenmund's CREATE OPERATOR CLASS patch?\n> >\n> > Chris\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n", "msg_date": "Mon, 15 Apr 2002 09:28:52 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": false, "msg_subject": "Re: That CREATE OPERATOR CLASS patch" }, { "msg_contents": "\nLooks like Tom got this in. Thanks.\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> If Bruce is thinking of applying outstanding patches - whatever happened\n> with Bill Studenmund's CREATE OPERATOR CLASS patch?\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Jul 2002 18:21:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: That CREATE OPERATOR CLASS patch" }, { "msg_contents": "\nLooks like Tom got this patch in. 
Thanks.\n\n---------------------------------------------------------------------------\n\nBill Studenmund wrote:\n> On Sun, 14 Apr 2002, Bruce Momjian wrote:\n> \n> >\n> > Good question. I see the thread at:\n> >\n> > \thttp://groups.google.com/groups?hl=en&threadm=Pine.LNX.4.30.0202262002040.685-100000%40peter.localdomain&rnum=2&prev=/groups%3Fq%3Dcreate%2Boperator%2Bgroup:comp.databases.postgresql.*%26hl%3Den%26selm%3DPine.LNX.4.30.0202262002040.685-100000%2540peter.localdomain%26rnum%3D2\n> >\n> > I asked the author to resumit but did not see a reply. Perhaps someone\n> > else can take it over and make the requested changes. Thanks.\n> \n> Yeah, the problem is:\n> \n> 1) the author was (is) feeling a bit burnt out over successive battles\n> over the command syntax. I proposed it, and Tom said, \"that syntax sucks\n> [which it verily did], try this.\" So I did \"this\" and got the patch that\n> is lying around. Then when 7.3 was done, someone else chimed in (I think\n> it was Peter) that it should be different. While the newer-yet command\n> syntax is better, *why wasn't it proposed the first time we went through\n> this on hackers?* It's frustrating to ask, \"what should I do,\" do it, and\n> then get told, \"no, that's not right.\" I mean, now, how do I know that\n> once I get a new version ready, it won't get revised *again*?\n> \n> 2) I now work at a new job, which is taking up lots of my time doing other\n> things. It's really cool, but PostgreSQL hacking isn't a paid part of it\n> (like it was at Zembu). 
In a few weeks I can probably get time to update\n> this, but if Christopher wants to work on it, go for it.\n> \n> Take care,\n> \n> Bill\n> \n> > ---------------------------------------------------------------------------\n> >\n> > Christopher Kings-Lynne wrote:\n> > > If Bruce is thinking of applying outstanding patches - whatever happened\n> > > with Bill Studenmund's CREATE OPERATOR CLASS patch?\n> > >\n> > > Chris\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 4: Don't 'kill -9' the postmaster\n> > >\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Jul 2002 18:21:42 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: That CREATE OPERATOR CLASS patch" } ]
[ { "msg_contents": "Hi,\n\nI'm thinking of doing a patch to generate foo_fkey and foo_chk names for\nfk's and checks. I know that this will make using DROP CONSTRAINT a whole\nheck of a lot easier. There have also been a few people who've complained\non the list about all the <unnamed> foreign keys, etc.\n\nI know Tom had some fears, but I don't know if they still apply, or if\nthey're any worse than the current situation?\n\nCan I go ahead?\n\nChris\n\n", "msg_date": "Mon, 15 Apr 2002 11:25:54 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "RFC: Generating useful names for foreign keys and checks" }, { "msg_contents": "\nYes! Please do something with those unnamed constraints.\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> Hi,\n> \n> I'm thinking of doing a patch to generate foo_fkey and foo_chk names for\n> fk's and checks. I know that this will make using DROP CONSTRAINT a whole\n> heck of a lot easier. There have also been a few people who've complained\n> on the list about all the <unnamed> foreign keys, etc.\n> \n> I know Tom had some fears, but I don't know if they still apply, or if\n> they're any worse than the current situation?\n> \n> Can I go ahead?\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Apr 2002 16:01:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: RFC: Generating useful names for foreign keys and checks" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> I'm thinking of doing a patch to generate foo_fkey and foo_chk names for\n> fk's and checks. I know that this will make using DROP CONSTRAINT a whole\n> heck of a lot easier. There have also been a few people who've complained\n> on the list about all the <unnamed> foreign keys, etc.\n\n> I know Tom had some fears, but I don't know if they still apply, or if\n> they're any worse than the current situation?\n\nActually I'm in favor of it. I have a proposal outstanding to require\nconstraints to have names that are unique per-table, for consistency\nwith triggers (already are that way) and rules (will become that way,\nrather than having globally unique names as now). AFAIR the only\nsignificant concern was making sure that the system wouldn't generate\nduplicate constraint names by default.\n\nActually, I was only thinking of CHECK constraints (pg_relcheck) in this\nproposal. In the long run it'd be a good idea to have a table that\nexplicitly lists all constraints --- check, unique, primary, foreign\nkey, etc --- and the index on such a table would probably enforce\nname uniqueness across all types of constraints on one table. Right now,\nthough, each type of constraint effectively has a separate namespace.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Apr 2002 16:55:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RFC: Generating useful names for foreign keys and checks" }, { "msg_contents": "> Actually I'm in favor of it. 
I have a proposal outstanding to require\n> constraints to have names that are unique per-table, for consistency\n> with triggers (already are that way) and rules (will become that way,\n> rather than having globally unique names as now). AFAIR the only\n> significant concern was making sure that the system wouldn't generate\n> duplicate constraint names by default.\n\nYeah, that's what's giving me pain - foreign key names are generated in the\nrewriter or something somewhere, so I'm not sure exactly what I have access\nto for checking duplicates...\n\nThe other interesting issue is the the little suffix we append is just in\nthe name. ie. someone can create an index called '_pkey' and cause\nconfusion.\n\nChris\n\n", "msg_date": "Tue, 16 Apr 2002 10:58:13 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: RFC: Generating useful names for foreign keys and checks" } ]
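A generated-name scheme that meets Tom's "no duplicate generated names" concern could use the usual suffix-plus-counter approach. This is only a hypothetical sketch — as Chris notes, the real difficulty is that foreign-key names are generated at a stage where the table's existing constraint names are not conveniently at hand:

```python
def make_constraint_name(table, column, suffix, existing):
    """Generate table_column_suffix, appending a counter when the name is
    already taken -- uniqueness is per table, per Tom's proposal."""
    base = f"{table}_{column}_{suffix}"
    name, n = base, 1
    while name in existing:
        name = f"{base}{n}"
        n += 1
    existing.add(name)
    return name

taken = set()
make_constraint_name("orders", "customer_id", "fkey", taken)  # orders_customer_id_fkey
make_constraint_name("orders", "customer_id", "fkey", taken)  # orders_customer_id_fkey1
```

Note that this only keeps generated names from colliding with each other; a user-chosen name like the "_pkey"-suffixed index Chris mentions can still masquerade as a generated one.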
[ { "msg_contents": "I have hit another very annoying problem with the oids being larger than \nmax_int. When tables are created under such circumstances, pg_dump cannot dump \nthe database anymore. The error is\n\ngetTables(): SELECT (for PRIMARY KEY) failed on table config_2002_03_02. \nExplanation from backend: ERROR: dtoi4: integer out of range\n\nAny idea how to fix this? This is on 7.1.3. Will the 7.2 pg_dump handle this \ndatabase?\n\nDaniel\n\n\n\n", "msg_date": "Mon, 15 Apr 2002 09:36:34 +0300", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": true, "msg_subject": "more on large oids" }, { "msg_contents": "Daniel Kalchev <daniel@digsys.bg> writes:\n> getTables(): SELECT (for PRIMARY KEY) failed on table config_2002_03_02. \n> Explanation from backend: ERROR: dtoi4: integer out of range\n\n> Any idea how to fix this? This is on 7.1.3. Will the 7.2 pg_dump handle this \n> database?\n\nYes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Apr 2002 09:55:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: more on large oids " } ]
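For context, dtoi4 is the backend's float8-to-int4 conversion, which errors once a value leaves the signed 32-bit range; presumably (this is an assumption, not stated in the thread) the 7.1 pg_dump query routed the large OID through that cast. The boundary itself, with the range check rewritten in plain Python for illustration:

```python
INT4_MIN, INT4_MAX = -2**31, 2**31 - 1

def dtoi4(x):
    """Mimic the int4 range check behind 'ERROR: dtoi4: integer out of range'."""
    if not (INT4_MIN <= x <= INT4_MAX):
        raise OverflowError("dtoi4: integer out of range")
    return int(x)

dtoi4(2**31 - 1)   # 2147483647: the largest value a signed int4 can hold
# any OID of 2**31 or above trips the error Daniel saw under 7.1.3
```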
[ { "msg_contents": "\nAppended is a message that I sent to the pgsql-general list, and\nfor which I received no useful reply. (Well, anyway, no reply that\nhas helped me to speed up my imports.) If you've read it already,\nfeel free to ignore it, but if you haven't, I'd appreciate any\nadvice on how to make this work as fast as possible.\n\nThere are some things that could be done to help optimize situations\nlike this, though I don't think any can be done in the short term.\nHere are some of my thoughts, which may or may not be useful:\n\n1. Postgres appears to have a fairly high row overhead (40 bytes\nor so according to the FAQ), which grieves me slightly, as that's\nactually larger than the size of the data in my tuples. It would\nseem that in my case some of the items in that header (the OID and\nthe NULL bitfield) are not used; would it be possible to avoid\nallocating these in this relations that don't use them?\n\nAlso, is there a good description of the on-disk tuple format\nsomewhere? include/access/htup.h seems to require a fair bit of\nknowledge about how other parts of the system work to be understood.\n\n2. Does it make sense to be able to do some operations without\nlogging? For the COPY, if the system crashes and I lose some or\nall all the tuples I'd imported so far, I don't care that much; I\ncan just restart the COPY at an appropriate point. As mentioned\nbelow, that would save half a gig of disk writes when importing 5M\ntuples.\n\n3. How about having a way to take a table off-line to work on it,\nand bring it back on-line again when done? This would get rid of\nthe logging overhead, locking overhead, and that sort of stuff,\nand in theory might be able to get you something approaching\ndisk-speed data imports.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n\n---------- Forwarded message ----------\nDate: Thu, 11 Apr 2002 17:28:13 +0900 (JST)\nFrom: Curt Sampson <cjs@cynic.net>\nTo: pgsql-general@postgresql.org\nSubject: [GENERAL] Importing Large Amounts of Data\n\nI've been asked by a client to do some testing of Postgres for what\nappears to be OLAP on a fairly large data set (about half a billion\ntuples). I'm probably going to want to try partitioning this in various\nways, but the application, not Postgres, will deal with that.\n\nI'm using PostgreSQL 7.2.1, and the schema I'm testing with is as follows:\n\n\tCREATE TABLE bigone (\n\t\trec_no\t\tINT\t\t\tPRIMARY KEY,\n\t\tday\t\t\tDATE\t\tNOT NULL,\n\t\tuser_id\t\tCHAR(5)\t\tNOT NULL,\n\t\tvalue\t\tVARCHAR(20) NOT NULL\n\t\t) WITHOUT OIDS;\n\tDROP INDEX bigone_pkey;\n\n\t[COPY is done here....]\n\n\tCREATE UNIQUE INDEX bigone_pkey ON bigone (rec_no);\n\tCREATE INDEX bigone_day ON bigone (day);\n\tCREATE INDEX bigone_user_id ON bigone (user_id);\n\nUnfortunately, the first problem I've run into is that importing is\nrather slow. With all indexes (including the bigone_pkey) dropped,\nimporting five million tuples into the above table, starting from empty,\ntakes about 921 seconds. The second 5M tuples takes about 1009 seconds.\nIf I switch to using the -F option, the first 5M takes 714 seconds and the\nsecond 5M takes 742 seconds. At the end, I have about 742 MB of data under\nthe data/base directory. (This is using a fresh database cluster.)\n\nFor comparison, the MySQL does each import in about 221 and 304 seconds,\nand the data in the end take up about 427 MB.\n\nPart of the problem here may be that Postgres appears to be logging the\nCOPY operation; I get from 27-33 \"recycled transaction log file\" messages\nfor every 5M tuple COPY that I do. 
If there were a way to do an unlogged\ncopy, that might save close to half a gig of writes to the disk.\n\nThe other part of the problem may just be the size of the data;\nwhy does Postgres take up 75% more space (about 78 bytes per tuple,\nvs. 45 bytes per tuple) for this table?\n\nAs well, index builds seem to take about 20% longer (using -F), and they\nseem to be about 10% larger as well.\n\nDoes anybody have any suggestions as to how I can improve performance\nhere, and reduce disk space requirements? If not, I'll probably have\nto suggest to the client that he move to MySQL for this particular\napplication, unless he needs any of the features that Postgres provides\nand MySQL doesn't.\n\n", "msg_date": "Mon, 15 Apr 2002 15:48:50 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": true, "msg_subject": "Importing Large Amounts of Data" }, { "msg_contents": "> 1. Postgres appears to have a fairly high row overhead (40 bytes\n> or so according to the FAQ), which grieves me slightly, as that's\n> actually larger than the size of the data in my tuples. It would\n> seem that in my case some of the items in that header (the OID and\n> the NULL bitfield) are not used; would it be possible to avoid\n> allocating these in this relations that don't use them?\n\nCREATE TABLE WITHOUT OIDS ...\n\n> As well, index builds seem to take about 20% longer (using -F), and they\n> seem to be about 10% larger as well.\n> > Does anybody have any suggestions as to how I can improve performance\n> here, and reduce disk space requirements? If not, I'll probably have\n> to suggest to the client that he move to MySQL for this particular\n> application, unless he needs any of the features that Postgres provides\n> and MySQL doesn't.\n\nThis conclusion seems to me to be remarkably shortsighted. Does the initial\ndata load into the database occur just once or quite often? 
If just once,\nthen the initial loading time doesn't matter.\n\nIt's a bit hard to say \"just turn off all the things that ensure your data\nintegrity so it runs a bit faster\", if you actually need data integrity.\n\nAnyway, from what I understand an OLTP application is all about selects and\nmemoising certain aggregate results. Since Postgres has far more advanced\nindexing and trigger support than MySQL, surely you need to take this kind\nof difference into account??? The fact that you can load stuff quicker in\nMySQL and it takes up less disk space seems totally irrelevant.\n\nJust wait until your MySQL server crashes and your client finds that half\nhis data is corrupted...\n\nChris\n\n", "msg_date": "Mon, 15 Apr 2002 15:06:00 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Importing Large Amounts of Data" }, { "msg_contents": "On Mon, 15 Apr 2002, Christopher Kings-Lynne wrote:\n\n> > ...the OID and\n> > the NULL bitfield) are not used; would it be possible to avoid\n> > allocating these in this relations that don't use them?\n>\n> CREATE TABLE WITHOUT OIDS ...\n\nAs you can see from the schema I gave later in my message, that's\nexactly what I did. But does this actually avoid allocating the\nspace in the on-disk tuples? What part of the code deals with this?\nIt looks to me like the four bytes for the OID are still allocated\nin the tuple, but not used.\n\n> This conclusion seems to me to be remarkably shortsighted. 
Does the initial\n> data load into the database occur just once or quite often?\n\nWell, I'm going to be doing the initial load (.5 billion tuples) quite\na few times, in order to test some different partitioning arrangements.\nSo I'll save quite a lot of time initially if I get a faster import.\n\nBut from the looks of it, the production system will be doing daily\nimports of fresh data ranging in size from a copule of million rows\nto a couple of tens of millions of rows.\n\n> It's a bit hard to say \"just turn off all the things that ensure your data\n> integrity so it runs a bit faster\", if you actually need data integrity.\n\nI'm not looking for \"runs a bit faster;\" five percent either way\nmakes little difference to me. I'm looking for a five-fold performance\nincrease.\n\n> Anyway, from what I understand an OLTP application is all about selects and\n> memoising certain aggregate results.\n\nI guess that was a typo, and you meant OLAP?\n\nAnyway, from the looks of it, this is going to be fairly simple\nstuff. (Unfortunately, I don't have details of the real application\nthe client has in mind, though I sure wish I did.) What I'm trying\nto indicate when I say \"OLAP\" is that it's basically selecting\nacross broad swaths of a large data set, and doing little or nothing\nin the way of updates. (Except for the daily batches of data, of\ncourse.)\n\n> The fact that you can load stuff quicker in\n> MySQL and it takes up less disk space seems totally irrelevant.\n\nYeah, everybody's telling me this. Let me try once again here:\n\n\t1. Every day, I must import millions, possibly tens of\n\tmillions, of rows of data. Thus, speed of import is indeed\n\tfairly important to me.\n\n\t2. It looks, at least at this point, as if the application\n\twill be doing only fairly simple selects out of the current\n\thalf-billion rows of data and whatever gets added in the\n\tfuture. Thus, I don't think that using MySQL would be a\n\tproblem. 
(If I did, I wouldn't be proposing it.)\n\nI don't want to start a flamewar here, because personally I don't\neven like MySQL and would prefer always to use PostgreSQL. But it\nmakes it a lot harder to do so when people keep insisting that\nimport speed is not important. Rather than say that, why don't we\njust admit that PostgreSQL is a fairly crap performer in this regard\nat the moment (at least the way I'm doing it), and work out ways\nto fix this?\n\n> Just wait until your MySQL server crashes and your client finds that half\n> his data is corrupted...\n\nIf there are no updates, why would anything be corrupted? At any\nrate, I can always restore from backup, since little or nothing\nwould be lost.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Mon, 15 Apr 2002 16:41:19 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": true, "msg_subject": "Re: Importing Large Amounts of Data" }, { "msg_contents": "> As you can see from the schema I gave later in my message, that's\n> exactly what I did. But does this actually avoid allocating the\n> space in the on-disk tuples? What part of the code deals with this?\n> It looks to me like the four bytes for the OID are still allocated\n> in the tuple, but not used.\n\nOK, well I guess in that case they are - I'm no expert on the file format.\n\n> But from the looks of it, the production system will be doing daily\n> imports of fresh data ranging in size from a couple of million rows\n> to a couple of tens of millions of rows.\n\nWell that definitely makes a difference then...\n\n> > It's a bit hard to say \"just turn off all the things that\n> > ensure your data\n> > integrity so it runs a bit faster\", if you actually need data integrity.\n>\n> I'm not looking for \"runs a bit faster;\" five percent either way\n> makes little difference to me. 
I'm looking for a five-fold performance\n> increase.\n\n> Anyway, from the looks of it, this is going to be fairly simple\n> stuff. (Unfortunately, I don't have details of the real application\n> the client has in mind, though I sure wish I did.) What I'm trying\n> to indicate when I say \"OLAP\" is that it's basically selecting\n> across broad swaths of a large data set, and doing little or nothing\n> in the way of updates. (Except for the daily batches of data, of\n> course.)\n\nOK, well now it depends on what kind of selects you're doing. Do you\nregularly select over a certain subset of the data, in which case using\npartial indices might give you significant speedup. Do you select functions\nof columns? If so, then you'll need functional indices. MySQL doesn't have\neither of these. However, if you're always doing full table scans, then\nMySQL will probably do these faster.\n\nNow, here's another scenario. Suppose you're often querying aggregate data\nover particular subsets of the data. Now instead of requerying all the\ntime, you can set up triggers to maintain your aggregates for you on the\nfly. This will give O(1) performance on select compared to O(n). MySQL's\nnew query cache might help you with this, however.\n\n> I don't want to start a flamewar here, because personally I don't\n> even like MySQL and would prefer always to use PostgreSQL. But it\n> makes it a lot harder to do so when people keep insisting that\n> import speed is not important. Rather than say that, why don't we\n> just admit that PostgreSQL is a fairly crap performer in this regard\n> at the moment (at least the way I'm doing it), and work out ways\n> to fix this?\n\nIt depends on your definition. You have to accept a certain overhead if\nyou're to have data integrity and MVCC. 
If you can't handle that overhead,\nthen you can't have data integrity and vice versa.\n\nBTW, instead of:\n\nCREATE UNIQUE INDEX bigone_pkey ON bigone (rec_no);\n\ndo:\n\nALTER TABLE bigone ADD PRIMARY KEY(rec_no);\n\nAnd remember to run \"VACUUM ANALYZE bigone;\" or just \"ANALYZE bigone;\" after\nthe COPY and before trying to use the table. I'm not sure if it's better to\nanalyze before or after the indexes are added, but it's definitely better to\nvacuum before the indexes are added.\n\nChris\n\n", "msg_date": "Mon, 15 Apr 2002 15:53:51 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Importing Large Amounts of Data" }, { "msg_contents": "On Mon, 15 Apr 2002, Christopher Kings-Lynne wrote:\n\n> OK, well now it depends on what kind of selects you're doing. Do you\n> regularly select over a certain subset of the data, in which case using\n> partial indices might give you significant speedup.\n\nI believe from the information I've been given that we will indeed\nbe regularly selecting over certain subsets, based on day. (One of\nthe test queries I've been asked to use selects based on user_id\nand a date range.) But I was intending to partition the tables\nbased on date range (to keep the index rebuild time from getting\ncompletely out of hand), so that will handily take care of that\nrequirement anyway.\n\n> Do you select functions of columns?\n\nNo.\n\n> It depends on your definition. You have to accept a certain overhead if\n> you're to have data integrity and MVCC. If you can't handle that overhead,\n> then you can't have data integrity and vice versa.\n\nWell, a few points:\n\n\t a) I am not convinced that data integrity should cost a five-fold\n\t decrease in performance,\n\n\t b) In fact, at times I don't need that data integrity. 
I'm perfectly\n\t happy to risk the loss of a table during import, if it lets me do the\n\t import more quickly, especially if I'm taking the database off line\n\t to do the import anyway. MS SQL server in fact allows me to specify\n\t relaxed integrity (with attendant risks) when doing a BULK IMPORT; it\n\t would be cool if Postgres allowed that too.\n\n> BTW, instead of:\n>\n> CREATE UNIQUE INDEX bigone_pkey ON bigone (rec_no);\n>\n> do:\n>\n> ALTER TABLE bigone ADD PRIMARY KEY(rec_no);\n>\n> And remember to run \"VACUUM ANALYZE bigone;\" or just \"ANALYZE bigone;\" after\n> the COPY and before trying to use the table. I'm not sure if it's better to\n> analyze before or after the indexes are added, but it's definitely better to\n> vacuum before the indexes are added.\n\nThanks. This is the kind of useful information I'm looking for. I\nwas doing a vacuum after, rather than before, generating the indices.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Mon, 15 Apr 2002 17:19:20 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": true, "msg_subject": "Re: Importing Large Amounts of Data" }, { "msg_contents": "> \t b) In fact, at times I don't need that data integrity. I'm\n> perfectly\n> \t happy to risk the loss of a table during import, if it\n> lets me do the\n> \t import more quickly, especially if I'm taking the database off line\n> \t to do the import anyway. MS SQL server in fact allows me to specify\n> \t relaxed integrity (with attendant risks) when doing a BULK\n> IMPORT; it\n> \t would be cool if Postgres allowed that too.\n\nWell I guess a TODO item would be to allow COPY to use relaxed constraints.\nDon't know how this would go over with the core developers tho.\n\n> Thanks. This is the kind of useful information I'm looking for. 
I\n> was doing a vacuum after, rather than before, generating the indices.\n\nThat's because the indexes themselves are cleaned out with vacuum, as well\nas the tables.\n\nChris\n\n", "msg_date": "Mon, 15 Apr 2002 16:24:36 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Importing Large Amounts of Data" }, { "msg_contents": "On Monday 15 April 2002 03:53, Christopher Kings-Lynne wrote:\n> BTW, instead of:\n>\n> CREATE UNIQUE INDEX bigone_pkey ON bigone (rec_no);\n>\n> do:\n>\n> ALTER TABLE bigone ADD PRIMARY KEY(rec_no);\n\nI am sorry, could you please elaborate more on the difference?\n\n--\nDenis\n\n", "msg_date": "Mon, 15 Apr 2002 04:59:15 -0400", "msg_from": "Denis Perchine <dyp@perchine.com>", "msg_from_op": false, "msg_subject": "Re: Importing Large Amounts of Data" }, { "msg_contents": "> On Monday 15 April 2002 03:53, Christopher Kings-Lynne wrote:\n> > BTW, instead of:\n> >\n> > CREATE UNIQUE INDEX bigone_pkey ON bigone (rec_no);\n> >\n> > do:\n> >\n> > ALTER TABLE bigone ADD PRIMARY KEY(rec_no);\n>\n> I am sorry, could you please elaborate more on the difference?\n\nThey have the same _effect_, it's just that the first syntax does not mark\nthe index as the _primary_ index on the relation.\n\nChris\n\n", "msg_date": "Mon, 15 Apr 2002 17:15:31 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Importing Large Amounts of Data" }, { "msg_contents": "On Monday 15 April 2002 05:15, Christopher Kings-Lynne wrote:\n> > On Monday 15 April 2002 03:53, Christopher Kings-Lynne wrote:\n> > > BTW, instead of:\n> > >\n> > > CREATE UNIQUE INDEX bigone_pkey ON bigone (rec_no);\n> > >\n> > > do:\n> > >\n> > > ALTER TABLE bigone ADD PRIMARY KEY(rec_no);\n> >\n> > I am sorry, could you please elaborate more on the difference?\n>\n> They have the same _effect_, it's just that the first syntax does not mark\n> the index as the 
_primary_ index on the relation.\n\nYes, I know. I mean how does this affect performance? How can this change the\nplanner's decision? Does it have any effect except a cosmetic one?\n\n--\nDenis\n\n", "msg_date": "Mon, 15 Apr 2002 05:26:25 -0400", "msg_from": "Denis Perchine <dyp@perchine.com>", "msg_from_op": false, "msg_subject": "Re: Importing Large Amounts of Data" }, { "msg_contents": "> Yes, I know. I mean how does this affect performance? How can this change the\n> planner's decision? Does it have any effect except a cosmetic one?\n\nOnly cosmetic. In the example he gave, he wanted a primary key, so I showed\nhim how to make one properly.\n\nChris\n\n", "msg_date": "Mon, 15 Apr 2002 19:06:55 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Importing Large Amounts of Data" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n>> Yes, I know. I mean how does this affect performance? How can this change the\n>> planner's decision? Does it have any effect except a cosmetic one?\n\n> Only cosmetic. In the example he gave, he wanted a primary key, so I showed\n> him how to make one properly.\n\nThe ALTER form will complain if any of the columns are not marked NOT\nNULL, so the difference isn't completely cosmetic.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Apr 2002 10:15:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Importing Large Amounts of Data " }, { "msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> On Mon, 15 Apr 2002, Christopher Kings-Lynne wrote:\n>> CREATE TABLE WITHOUT OIDS ...\n\n> As you can see from the schema I gave later in my message, that's\n> exactly what I did. But does this actually avoid allocating the\n> space in the on-disk tuples? 
What part of the code deals with this?\n> It looks to me like the four bytes for the OID are still allocated\n> in the tuple, but not used.\n\nCurt is correct: WITHOUT OIDS does not save any storage. Having two\ndifferent formats for the on-disk tuple header seemed more pain than\nthe feature was worth. Also, because of alignment considerations it\nwould save no storage on machines where MAXALIGN is 8. (Possibly my\nthinking is colored somewhat by the fact that that's so on all my\nfavorite platforms ;-).)\n\nHowever, as for the NULL values bitmap: that's already compacted out\nwhen not used, and always has been AFAIK.\n\n>> It's a bit hard to say \"just turn off all the things that ensure your data\n>> integrity so it runs a bit faster\", if you actually need data integrity.\n\n> I'm not looking for \"runs a bit faster;\" five percent either way\n> makes little difference to me. I'm looking for a five-fold performance\n> increase.\n\nYou are not going to get it from this; where in the world did you get\nthe notion that data integrity costs that much? When the WAL stuff\nwas added in 7.1, we certainly did not see any five-fold slowdown.\nIf anything, testing seemed to indicate that WAL sped things up.\nA lot would depend on your particular scenario of course.\n\nHave you tried all the usual speedup hacks? Turn off fsync, if you\nreally think you do not care about crash integrity; use COPY FROM STDIN\nto bulk-load data, not retail INSERTs; possibly drop and recreate\nindexes rather than updating them piecemeal; etc. 
You should also\nconsider not declaring foreign keys, as the runtime checks for reference\nvalidity are pretty expensive.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Apr 2002 10:26:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Importing Large Amounts of Data " }, { "msg_contents": "On Mon, 15 Apr 2002, Tom Lane wrote:\n\n> > I'm not looking for \"runs a bit faster;\" five percent either way\n> > makes little difference to me. I'm looking for a five-fold performance\n> > increase.\n>\n> You are not going to get it from this; where in the world did you get\n> the notion that data integrity costs that much?\n\nUm...the fact that MySQL imports the same data five times as fast? :-)\n\nNote that this is *only* related to bulk-importing huge amounts of\ndata. Postgres seems a little bit slower than MySQL at building\nthe indices afterwards, but this would be expected since (probably\ndue to higher tuple overhead) the size of the data once in postgres\nis about 75% larger than in MySQL: 742 MB vs 420 MB. I've not done\nany serious testing of query speed, but the bit of toying I've done\nwith it shows no major difference.\n\n> Have you tried all the usual speedup hacks? Turn off fsync, if you\n> really think you do not care about crash integrity; use COPY FROM STDIN\n> to bulk-load data, not retail INSERTs; possibly drop and recreate\n> indexes rather than updating them piecemeal; etc. You should also\n> consider not declaring foreign keys, as the runtime checks for reference\n> validity are pretty expensive.\n\nYes, I did all of the above. (This was all mentioned in my initial\nmessage, except for turning off foreign key constraints--but the\ntable has no foreign keys.)\n\nWhat I'm thinking would be really cool would be to have an \"offline\"\nway of creating tables using a stand-alone program that would write\nthe files at, one hopes, near disk speed. 
Maybe it could work by\ncreating the tables in a detached tablespace, and then you'd attach\nthe tablespace when you're done. It might even be extended to be\nable to do foreign key checks, create indices, and so on. (Foreign\nkey checks would be useful; I'm not sure that creating indices\nwould be any faster than just doing it after the tablespace is\nattached.)\n\nThis would be particularly useful for fast restores of backups.\nDowntime while doing a restore is always a huge pain for large\ndatabases.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Tue, 16 Apr 2002 10:32:14 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": true, "msg_subject": "Re: Importing Large Amounts of Data " }, { "msg_contents": "Curt Sampson wrote:\n> On Mon, 15 Apr 2002, Tom Lane wrote:\n> \n> > > I'm not looking for \"runs a bit faster;\" five percent either way\n> > > makes little difference to me. I'm looking for a five-fold performance\n> > > increase.\n> >\n> > You are not going to get it from this; where in the world did you get\n> > the notion that data integrity costs that much?\n> \n> Um...the fact that MySQL imports the same data five times as fast? :-)\n> \n> Note that this is *only* related to bulk-importing huge amounts of\n> data. Postgres seems a little bit slower than MySQL at building\n> the indices afterwards, but this would be expected since (probably\n> due to higher tuple overhead) the size of the data once in postgres\n> is about 75% larger than in MySQL: 742 MB vs 420 MB. I've not done\n> any serious testing of query speed, but the bit of toying I've done\n> with it shows no major difference.\n\nCan you check your load and see if there is a PRIMARY key on the table\nat the time it is being loaded? 
In the old days, we created indexes\nonly after the data was loaded, but when we added PRIMARY key, pg_dump\nwas creating the table with PRIMARY key then loading it, meaning the\ntable was being loaded while it had an existing index. I know we fixed\nthis recently but I am not sure if it was in 7.2 or not. \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Apr 2002 21:44:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Importing Large Amounts of Data" }, { "msg_contents": "On Mon, 15 Apr 2002 21:44:26 -0400 (EDT)\n\"Bruce Momjian\" <pgman@candle.pha.pa.us> wrote:\n> In the old days, we created indexes\n> only after the data was loaded, but when we added PRIMARY key, pg_dump\n> was creating the table with PRIMARY key then loading it, meaning the\n> table was being loaded while it had an existing index. I know we fixed\n> this recently but I am not sure if it was in 7.2 or not. \n\nIt's not in 7.2 -- but it's fixed in CVS.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Mon, 15 Apr 2002 21:49:37 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": false, "msg_subject": "Re: Importing Large Amounts of Data" }, { "msg_contents": "On Mon, 15 Apr 2002, Bruce Momjian wrote:\n\n> Can you check your load and see if there is a PRIMARY key on the table\n> at the time it is being loaded.\n\nThere is not. I create the table with a PRIMARY KEY declaration,\nbut I drop that index before doing the import, and do an ALTER\nTABLE to re-add the primary key afterwards.\n\nAt one point I tried doing a load with all indices enabled, but\nafter about eight or nine hours I gave up. (Typically the load\ntakes about 30 minutes. 
This is using about 2% of the sample data.)\n\n> In the old days, we created indexes\n> only after the data was loaded, but when we added PRIMARY key, pg_dump\n> was creating the table with PRIMARY key then loading it, meaning the\n> table was being loaded while it had an existing index. I know we fixed\n> this recently but I am not sure if it was in 7.2 or not.\n\nAh, I saw that fix. But I'm doing the load by hand, not using\npg_restore.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Tue, 16 Apr 2002 10:53:49 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": true, "msg_subject": "Re: Importing Large Amounts of Data" }, { "msg_contents": "On Tue, 16 Apr 2002, Curt Sampson wrote:\n\n[snip]\n\n> What I'm thinking would be really cool would be to have an \"offline\"\n> way of creating tables using a stand-alone program that would write\n> the files at, one hopes, near disk speed. \n\nPersonally, I think there is some merit in this. Postgres can be used\nfor large scale data mining, an application which does not need\n(usually) multi-versioning and concurrency but which can benefit from\npostgres's implementation of SQL, as well as backend extensibility. \n\nI don't see any straightforward way of modifying the code to allow a fast\npath directly to relations on-disk. However, it should be possible to\nbypass locking, RI, MVCC etc with the use of a bootstrap-like\ntool.\n\nSuch a tool would only be able to be used when the database was\noffline. 
It would read data from files passed to it in some format,\nperhaps that generated by COPY.\n\nGiven the very low parsing and 'planning' overhead, the real cost would be\nWAL (the bootstrapper could fail and render the database unusable) and the\nsubsequent updating of on-disk relations.\n\nComments?\n\nGavin\n\n", "msg_date": "Tue, 16 Apr 2002 13:58:57 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Importing Large Amounts of Data " }, { "msg_contents": "On Tue, 16 Apr 2002, Gavin Sherry wrote:\n\n> I don't see any straightforward way of modifying the code to allow a fast\n> path directly to relations on-disk. However, it should be possible to\n> bypass locking, RI, MVCC etc with the use of a bootstrap-like tool.\n\nThat was my thought. I'm not asking for disk-speed writes through the\ndatabase server itself; I can make do with taking the server off-line,\ndoing the imports, and bringing it back again, as you suggest.\n\n> Given the very low parsing and 'planning' overhead, the real cost would be\n> WAL (the bootstrapper could fail and render the database unusable) and the\n> subsequent updating of on-disk relations.\n\nMS SQL Server, when doing a BULK INSERT or BCP, can do it as a fully or\n\"minimally\" logged operation. When minimally logged, there's no ability\nto roll-forward or recover inserted data, just the ability to go back\nto the state at the beginning of the operation. This technique can work\neven on an on-line database. A bit more information is available at\n\n\thttp://msdn.microsoft.com/library/en-us/adminsql/ad_impt_bcp_9esz.asp\n\n(You may want to browse this with lynx; the javascript in it is going to\nforce your screen into a configuration with frames.) 
You can follow some\nof the links in that page for further information.\n\nAnother option, for off-line databases, might just be not to log at all.\nIf you take a backup first, it may be faster to restore the backup and\nstart again than to try to roll back the operation, or roll it forward\nto partial completion and then figure out where to restart your import.\nThis seems especially likely if you can restore only the files relating\nto the table that was actually damaged.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Tue, 16 Apr 2002 14:05:25 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": true, "msg_subject": "Re: Importing Large Amounts of Data " }, { "msg_contents": "On Tue, 16 Apr 2002, Curt Sampson wrote:\n\n> > Given the very low parsing and 'planning' overhead, the real cost would be\n> > WAL (the bootstrapper could fail and render the database unusable) and the\n> > subsequent updating of on-disk relations.\n> \n> MS SQL Server, when doing a BULK INSERT or BCP, can do it as a fully or\n> \"minimally\" logged operation. When minimally logged, there's no ability\n> to roll-forward or recover inserted data, just the ability to go back\n> to the state at the beginning of the operation. This technique can work\n> even on an on-line database. 
A bit more information is available at\n\nThe other reason I say that this bootstrap tool would still use WAL is\nthat bypassing WAL would require writing a fairly significant amount of\ncode (unless the pre-WAL heap_insert() code could be used, with relevant\nmodification).\n\nOn the other hand, I would imagine it to be very difficult to implement\nan 'interactive' rollback facility with the kind of tool I am\ndescribing.\n\nGavin\n\n", "msg_date": "Tue, 16 Apr 2002 15:17:25 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Importing Large Amounts of Data " } ]
[ { "msg_contents": ">\"Mario Weilguni\" <mario.weilguni@icomedias.com> writes:\n>> And I did not find out how I can detect the large object\n>> chunksize, either from getting it from the headers (include\n>> \"storage/large_object.h\" did not work)\n>\n>Why not?\n>\n>Still, it might make sense to move the LOBLKSIZE definition into\n>pg_config.h, since as you say it's of some interest to clients like\n>pg_dump.\n\nI tried another approach to detect the LOBLKSIZE of the destination server:\n* at restore time, create a LO large enough to be split in two parts (e.g. BLCKSZ+1)\n* select octet_length(data) from pg_largeobject where loid=OIDOFOBJECT and pageno=0\n* select lo_unlink(OIDOFOBJECT)\n\nIMO this gives the advantage that the LOBLKSIZE is taken from the database I'm restoring to, and not a constant defined at compile time. On the downside, it wastes an OID.\n\nIs there a way to get compile-time settings (such as BLCKSZ or LOBLKSIZE) via functions - e.g.\nselect pginternal('BLCKSZ') or something similar? \n\n\nI tested with and without my patch against 2 Gigabytes of LO's using MD5, and got exactly the same result on all 25000 large objects. So I think my patch is safe. If there's interest in integrating this into pg_dump, I'll prepare a patch for the current CVS version.\n\n\n", "msg_date": "Mon, 15 Apr 2002 11:24:40 +0200", "msg_from": "\"Mario Weilguni\" <mario.weilguni@icomedias.com>", "msg_from_op": true, "msg_subject": "Re: Inefficient handling of LO-restore + Patch " }, { "msg_contents": "\"Mario Weilguni\" <mario.weilguni@icomedias.com> writes:\n> * select octet_length(data) from pg_largeobject where loid=OIDOFOBJECT and pageno=0\n\nThis really should not work if you're not superuser. Right now it does,\nbut I think that's an oversight in the default permissions settings for\nsystem tables. 
Anyone object if I turn off public read access to\npg_largeobject?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Apr 2002 10:32:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Inefficient handling of LO-restore + Patch " }, { "msg_contents": "Tom Lane wrote:\n> \"Mario Weilguni\" <mario.weilguni@icomedias.com> writes:\n> > * select octet_length(data) from pg_largeobject where loid=OIDOFOBJECT and pageno=0\n> \n> This really should not work if you're not superuser. Right now it does,\n> but I think that's an oversight in the default permissions settings for\n> system tables. Anyone object if I turn off public read access to\n> pg_largeobject?\n\nPlease do whatever you can to tighten it up. I thought we needed to keep\nread access so people could get to their large objects, but maybe not.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Apr 2002 15:17:10 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Inefficient handling of LO-restore + Patch" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> Anyone object if I turn off public read access to\n>> pg_largeobject?\n\n> Please do whatever you can to tighten it up. I thought we needed to keep\n> read access so people could get to their large objects, but maybe not.\n\nYeah, right after sending that message I remembered that we had already\ndiscussed this and concluded it would break clients :-(.\n\nThere's really no security for large objects anyway, since if you know\nor can guess the OID of one you can read (or write!) 
it regardless.\nNot much point in turning off read access on pg_largeobject unless we\nrethink that ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Apr 2002 15:24:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Inefficient handling of LO-restore + Patch " } ]
[ { "msg_contents": "This one has been corrected to fit in with Tom's recent changes, as well\nas the changes with command/ restructuring.\n\nPlease accept or reject quickly, else risk conflicts.", "msg_date": "15 Apr 2002 09:25:05 -0300", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "YADP - Yet another Dependency Patch" }, { "msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n> This one has been corrected to fit in with Tom's recent changes, as well\n> as the changes with command/ restructuring.\n> Please accept or reject quickly, else risk conflicts.\n\nThis is interesting but certainly far from ready for prime time.\n\nSome random comments, in no particular order:\n\n1. I don't like the code that installs and removes ad-hoc dependencies\nfrom relations to type Oid. On its own terms it's wrong (if it were\nright, then you'd also need to be installing dependencies for Tid, Xid,\nand the types of the other system columns), but on a larger scale I\ndon't see the point of expending cycles, disk space, and code complexity\nto record these dependencies. The system *cannot* work if you drop type\nOid. So it'd seem to make more sense to wire in a notion that certain\ntypes, tables, etc are \"pinned\" and may never be dropped; then there's\nno need to create explicit dependencies on them. If you want an\nexplicit representation of pinning in the pg_depends table, perhaps it\nwould work to create a row claiming that \"table 0 / Oid 0 / subid 0\"\ndepends on a pinned object.\n\n2. Is it really necessary to treat pg_depends as a bootstrapped\nrelation? That adds a lot of complexity, as you've evidently already\nfound, and it does not seem necessary if you're going to load the system\ndependencies in a later step of the initdb process. You can just make\nthe dependency-adding routines be no-ops in bootstrap mode; then create\npg_depends as an ordinary system catalog; and finally load the entries\npost-bootstrap.\n\n3. 
Isn't there a better way to find the initial dependencies? That\nSELECT is truly ugly, and more to the point is highly likely to break\nanytime someone rearranges the catalogs. I'd like to see it generated\nautomatically (maybe using a tool like findoidjoins); or perhaps we\ncould do the discovery of the columns to look at on-the-fly.\n\n4. Do not use the parser's RESTRICT/CASCADE tokens as enumerated type\nvalues. They change value every time someone tweaks the grammar.\n(Yes, I know you copied from extant code; that code is on my hitlist.)\nDefine your own enum type instead of creating a lot of bogus\ndependencies on parser/parser.h.\n\n5. Avoid using heapscans on pg_depend; it's gonna be way too big for\nthat to give acceptable performance. Make sure you have indexes\navailable to match your searches, and use the systable_scan routines.\n\n6. The tests on relation names in dependDelete, getObjectName are (a)\nslow and (b) not schema-aware. Can you make these into OID comparisons\ninstead?\n\n7. The namespaceIdGetNspname routine you added is unsafe (it would need\nto pstrdup the name to be sure it's still valid after releasing the\nsyscache entry); but more to the point, it duplicates a routine already\npresent in lsyscache.c (which is where this sort of utility generally\nbelongs, anyway).\n\n8. Aggregate code seems unaware that aggfinalfn is optional.\n\nI have to leave, but reserve the right to make more comments later ;-)\n\nIn general though, this seems like a cool approach and definitely\nworth pursuing. Keep at it!\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Apr 2002 21:37:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: YADP - Yet another Dependency Patch " }, { "msg_contents": "[ copied to hackers ]\n\n> 1. I don't like the code that installs and removes ad-hoc dependencies\n> from relations to type Oid. 
On its own terms it's wrong (if it were\n...\n> explicit representation of pinning in the pg_depends table, perhaps it\n> would work to create a row claiming that \"table 0 / Oid 0 / subid 0\"\n> depends on a pinned object.\n\nYes, a pinned dependency makes much more sense.\n\nint4, bool, varchar, and name are in the same boat.\n\nI'll make it so dependCreate() will ignore adding any additional\ndependencies on pinned types (class 0, Oid 0, SubID 0) and\ndependDelete() will never allow deletion when that dependency exists.\n\n> 2. Is it really necessary to treat pg_depends as a bootstrapped\n> relation? That adds a lot of complexity, as you've evidently already\n> found, and it does not seem necessary if you're going to load the system\n> dependencies in a later step of the initdb process. You can just make\n> the dependency-adding routines be no-ops in bootstrap mode; then create\n> pg_depends as an ordinary system catalog; and finally load the entries\n> post-bootstrap.\n\nAck.. <sound of hand hitting head>. All that work to avoid a simple\nif statement.\n\nAhh well.. learning at its finest :)\n\n> 3. Isn't there a better way to find the initial dependencies? That\n> SELECT is truly ugly, and more to the point is highly likely to break\n> anytime someone rearranges the catalogs. I'd like to see it generated\n> automatically (maybe using a tool like findoidjoins); or perhaps we\n> could do the discovery of the columns to look at on-the-fly.\n\nI'm not entirely sure how to approach this, but it does appear that\nfindoidjoins would find all the relations.\n\nSo... I could create a pg_ function which will find all oid joins,\nand call dependCreate() for each entry it finds. That way\ndependCreate will ignore anything that was pinned (see above)\nautomagically. 
It would also make initdb quite slow, and would add a\npg_ function that one should normally avoid during normal production.\nThen again, I suppose it could be used to recreate missing\ndependencies if a user was manually fiddling with that table.\n\ninitdb would call SELECT pg_findSystemDepends(); or something.\n\n> 4. Do not use the parser's RESTRICT/CASCADE tokens as enumerated\ntype\n> values. They change value every time someone tweaks the grammar.\n> (Yes, I know you copied from extant code; that code is on my\nhitlist.)\n> Define your own enum type instead of creating a lot of bogus\n> dependencies on parser/parser.h.\n\nAll but one of those will go away once the functions are modified to\naccept the actual RESTRICT or CASCADE bit. That was going to be step\n2 of the process but I suppose I could do it now, along with a rather\nlarge regression test. The only place that RESTRICT will be used is\ndependDelete(); Nowhere else will care. It'll simply pass on what\nwas given to it by the calling function from utility.c or a cascading\ndependDelete. Of course, gram.y will be littered with the\n'opt_restrictcascade' tag.\n\nThe RESTRICT usage is more of a current placeholder. I've marked the\nincludes as /* FOR RESTRICT */ for that reason, make them easy to\nremove later.\n\n> 6. The tests on relation names in dependDelete, getObjectName are\n(a)\n> slow and (b) not schema-aware. Can you make these into OID\ncomparisons\n> instead?\n\nAhh yes. Good point.\n\n", "msg_date": "Mon, 15 Apr 2002 23:24:21 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] YADP - Yet another Dependency Patch " }, { "msg_contents": "Rod Taylor wrote:\n> [ copied to hackers ]\n> \n> > 1. I don't like the code that installs and removes ad-hoc\n> dependencies\n> > from relations to type Oid. On its own terms it's wrong (if it were\n\nLooks good to me. I realize this is a huge chunk of code. 
The only\nultra-minor thing I saw was the use of dashes for comment blocks when\nnot needed:\n\n\t/* ----\n\t * word\n\t * ----\n\t */\n\nWe use dashes like this only for comments that shouldn't be reformatted\nby pgindent.\n\nThe one thing I would like to know is what things does it track, and\nwhat does it not track? Does it complete any TODO items, or do we save\nthat for later?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Apr 2002 01:15:11 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] YADP - Yet another Dependency Patch" }, { "msg_contents": "Yes, I've removed all of those.\n\nI've submitted a list of stuff it tracked earlier, but will do so\nagain with the next patch. Basically, anything simple and\nstraightforward ;)\n\nIt doesn't manage dependencies of function code, view internals,\ndefault internals as I don't know how to find navigate an arbitrary\nparse tree for that information. Some function code isn't even\navailable (C), so its next to impossible.\n\nNo, it doesn't directly tick off any todo item other than the first in\ndependency. Make pg_depend table ;)\n\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. 
You cannot imagine why you ever felt\notherwise.\n\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"Rod Taylor\" <rbt@zort.ca>\nCc: \"Tom Lane\" <tgl@sss.pgh.pa.us>; <pgsql-patches@postgresql.org>;\n\"Hackers List\" <pgsql-hackers@postgresql.org>\nSent: Tuesday, April 16, 2002 1:15 AM\nSubject: Re: [HACKERS] [PATCHES] YADP - Yet another Dependency Patch\n\n\n> Rod Taylor wrote:\n> > [ copied to hackers ]\n> >\n> > > 1. I don't like the code that installs and removes ad-hoc\n> > dependencies\n> > > from relations to type Oid. On its own terms it's wrong (if it\nwere\n>\n> Looks good to me. I realize this is a huge chunk of code. The only\n> ultra-minor thing I saw was the use of dashes for comment blocks\nwhen\n> not needed:\n>\n> /* ----\n> * word\n> * ----\n> */\n>\n> We use dashes like this only for comments that shouldn't be\nreformatted\n> by pgindent.\n>\n> The one thing I would like to know is what things does it track, and\n> what does it not track? Does it complete any TODO items, or do we\nsave\n> that for later?\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n>\n\n", "msg_date": "Tue, 16 Apr 2002 07:32:12 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] YADP - Yet another Dependency Patch" }, { "msg_contents": "Rod Taylor wrote:\n> Yes, I've removed all of those.\n> \n> I've submitted a list of stuff it tracked earlier, but will do so\n> again with the next patch. Basically, anything simple and\n> straightforward ;)\n\nThanks for an updated list of tracked items. 
I know you posted it\nearlier, but some are complaining recently that patches are coming in\ntoo fast or with too long of a gap between discussion and patch arrival,\nand they are getting confused trying to track where we are going.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Apr 2002 12:59:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] YADP - Yet another Dependency Patch" }, { "msg_contents": "> 3. Isn't there a better way to find the initial dependencies? That\n> SELECT is truly ugly, and more to the point is highly likely to\nbreak\n> anytime someone rearranges the catalogs. I'd like to see it\ngenerated\n> automatically (maybe using a tool like findoidjoins); or perhaps we\n> could do the discovery of the columns to look at on-the-fly.\n\nI'm having a really hard time coming up with a good method for this.\n\nThe key problem with doing what findoidjoins does is recording things\nlike indexes, complex types, etc. as items that should be implicitly\ndropped when the table is dropped. Also, in the case of indices the\nwork required to 'discover' 2-value foreign keys against pg_attribute\n(attnum and attrelid) would be a royal pain in the ass -- especially\nwhen it's in the form of a vector (ugh). There are enough of those to\nwarrant something other than hardcoding the relations in C code.\n\nIt might be possible to create foreign keys for the system tables and\nuse those to discover the associations. Any foreign key in pg_catalog\nmade up of an object address (table, oid column) or (table, oid\ncolumn, int column) could potentially have each foreign key tuple set\n(join tables by key) recorded as a dependency.\n\nSo... Any thoughts on adding a no-op foreign key reference? 
A key\nthat's used for style purposes only.\n\n\n\n", "msg_date": "Wed, 17 Apr 2002 23:21:44 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] YADP - Yet another Dependency Patch " }, { "msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n>> 3. Isn't there a better way to find the initial dependencies? That\n>> SELECT is truly ugly, and more to the point is highly likely to\n>> break anytime someone rearranges the catalogs.\n\n> I'm having a really hard time coming up with a good method for this.\n\nWell, do we actually need an *accurate* representation of the\ndependencies? You seemed to like my idea of pinning essential stuff,\nand in reality all of the initial catalog structures ought to be pinned.\nMaybe it would be sufficient to just make \"pinned\" entries for\neverything that appears in the initial catalogs. Someone who's really\nintent on manually deleting, say, the \"box\" datatype could be expected\nto be bright enough to figure out how to remove the pg_depends entry\nthat's preventing him from doing so.\n\n(There are a very small number of things that are specifically intended\nto be droppable, like the \"public\" namespace, but seems like excluding\nthat short list from the pg_depends entries would be more maintainable\nthan the approach you've got now.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Apr 2002 01:24:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] YADP - Yet another Dependency Patch " }, { "msg_contents": "That's what I was going to propose if no-one could figure out a way of\nautomatically gathering system table dependencies.\n\nIt would be nice (for a minimalist db) to be able to drop a bunch of\nstuff, but a number of other things would need to be done as well\n(full system compression for example).\n\n--\nRod\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Rod Taylor\" <rbt@zort.ca>\nCc: 
\"Hackers List\" <pgsql-hackers@postgresql.org>\nSent: Thursday, April 18, 2002 1:24 AM\nSubject: Re: [HACKERS] [PATCHES] YADP - Yet another Dependency Patch\n\n\n> \"Rod Taylor\" <rbt@zort.ca> writes:\n> >> 3. Isn't there a better way to find the initial dependencies?\nThat\n> >> SELECT is truly ugly, and more to the point is highly likely to\n> >> break anytime someone rearranges the catalogs.\n>\n> > I'm having a really hard time coming up with a good method for\nthis.\n>\n> Well, do we actually need an *accurate* representation of the\n> dependencies? You seemed to like my idea of pinning essential\nstuff,\n> and in reality all of the initial catalog structures ought to be\npinned.\n> Maybe it would be sufficient to just make \"pinned\" entries for\n> everything that appears in the initial catalogs. Someone who's\nreally\n> intent on manually deleting, say, the \"box\" datatype could be\nexpected\n> to be bright enough to figure out how to remove the pg_depends entry\n> that's preventing him from doing so.\n>\n> (There are a very small number of things that are specifically\nintended\n> to be droppable, like the \"public\" namespace, but seems like\nexcluding\n> that short list from the pg_depends entries would be more\nmaintainable\n> than the approach you've got now.)\n>\n> regards, tom lane\n>\n\n", "msg_date": "Thu, 18 Apr 2002 07:33:29 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] YADP - Yet another Dependency Patch " } ]
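The pinning scheme the thread converges on can be sketched as a small model. This is a hypothetical illustration, not the actual pg_depend code — the store, method names, and object keys are invented for the example; only the rules come from the discussion above (a pin is a dependency whose referencer is "table 0 / Oid 0 / subid 0"; dependCreate() silently skips pinned objects such as int4, bool, varchar, and name; dependDelete() refuses to drop anything while a pin exists):

```python
# Hypothetical sketch of the "pinned entry" scheme discussed above.
# A pin is a dependency from the pseudo-object (classid=0, oid=0, subid=0).
PIN = (0, 0, 0)

class DependStore:
    def __init__(self):
        # maps referenced object -> set of referencing objects
        self.deps = {}

    def pin(self, obj):
        """Mark obj as a permanent, undroppable part of the system."""
        self.deps.setdefault(obj, set()).add(PIN)

    def create(self, dependent, referenced):
        """Record a dependency; a no-op when the target is pinned,
        since builtin types need no per-table dependency rows."""
        if PIN in self.deps.get(referenced, set()):
            return False
        self.deps.setdefault(referenced, set()).add(dependent)
        return True

    def delete(self, obj):
        """Refuse deletion while the pin dependency exists."""
        if PIN in self.deps.get(obj, set()):
            raise ValueError("cannot drop pinned object %r" % (obj,))
        self.deps.pop(obj, None)
```

Under this model, everything in the initial catalogs would simply get a pin entry at initdb time, and the short list of intentionally droppable objects (like the "public" namespace) would be excluded from pinning.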
[ { "msg_contents": "Is PostgreSQL broken? Or is it FreeBSD?\n\n--\n\nhttp://www.freebsd.org/cgi/query-pr.cgi?pr=ports/36954\n\nPostgreSQL does not currently check the results of mktime(). On a\n\tborder condition, mktime() fails and would populate invalid data\n\tinto the database.\n\n\tMost other *NIX's seem to automatically account for this and\n\tautomatically adjust the value when it gets passed to mktime(&tm).\n\tShould FreeBSD have its mktime() in libc updated?\n\nCREATE TABLE tt ( tt TIMESTAMP );\n\tINSERT INTO tt VALUES ('2002-4-7 2:0:0.0');\n\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n\n", "msg_date": "Mon, 15 Apr 2002 09:31:35 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Stumbled upon a time bug..." }, { "msg_contents": "> Is PostgreSQL broken? Or is it FreeBSD?\n\nBoth at most. FreeBSD at least ;)\n\nThe Posix definition for mktime() insists that the function return \"-1\"\nif it has an error, which also happens to correspond to 1 second earlier\nthan 1970-01-01, causing trouble for supporting *any* times before 1970.\n\nPostgreSQL relies on a side-effect of mktime(), that the time zone\ninformation pointed to in the tm structure *input* to mktime() gets\nupdated for output. 
I don't actually care about the function result of\ntime_t, and don't care what it is set to as long as the time zone info\ngets filled in.\n\nThat happens on most every platform I know about, with the exception of\nAIX (confirming for me its reputation as a strange relative of Unix best\nleft chained in the cellar).\n\nApparently glibc (and hence Linux) is at risk of getting this behavior\ntoo, although I *hope* that the mods to glibc will be to return the \"-1\"\n(if necessary) but still using the time zone database to fill in the\ntime zone information, even for dates before 1970.\n\nI'm not sure I still have the info to include the glibc contact in this\nthread. In any case, there is no excuse for refusing to return valid\ninfo for a DST boundary time imho. Even if it requires an arbitrary\nconvention on how to jump the time forward or backward...\n\n - Thomas\n", "msg_date": "Mon, 15 Apr 2002 19:59:19 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Stumbled upon a time bug..." } ]
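The mktime() side effect Thomas describes — the DST/zone fields of the input struct tm being resolved on return — can be observed from Python, whose time.mktime wraps the C call. A minimal sketch, assuming a Unix system with the usual tz database (the zone name is chosen only for illustration):

```python
import os
import time

# Pick a known zone so the result is predictable (illustrative choice).
os.environ["TZ"] = "America/New_York"
time.tzset()

# July 1 2002, 12:00:00 local time, with tm_isdst = -1 meaning
# "unknown -- let mktime() figure out whether DST applies".
t = time.mktime((2002, 7, 1, 12, 0, 0, -1, -1, -1))

# mktime() resolved the DST question as a side effect; round-tripping
# through localtime() shows the normalized result.
resolved = time.localtime(t)
print(resolved.tm_isdst)  # 1: mktime decided DST was in effect
```

The failure mode in the bug report is the ambiguous case: a time that falls inside a DST transition gap, where a strict implementation may return -1 instead of normalizing and filling in the zone information.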
[ { "msg_contents": "And how about getting database internals via SQL functions - e.g. getting BLCSIZE, LOBBLCSIZE?\n\n-----Original Message-----\nFrom: Tom Lane [mailto:tgl@sss.pgh.pa.us]\nSent: Monday, April 15, 2002 16:32\nTo: Mario Weilguni\nCc: pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Inefficient handling of LO-restore + Patch \n\n\n\"Mario Weilguni\" <mario.weilguni@icomedias.com> writes:\n> * select octet_length(data) from pg_largeobject where loid=OIDOFOBJECT and pageno=0\n\nThis really should not work if you're not superuser. Right now it does,\nbut I think that's an oversight in the default permissions settings for\nsystem tables. Anyone object if I turn off public read access to\npg_largeobject?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Apr 2002 16:46:41 +0200", "msg_from": "\"Mario Weilguni\" <mario.weilguni@icomedias.com>", "msg_from_op": true, "msg_subject": "Re: Inefficient handling of LO-restore + Patch " } ]
[ { "msg_contents": "After some fooling around with gram.y, I have come to the conclusion\nthat there's just no way to use a schema-qualified name for an operator\nin an expression. I was hoping we might be able to write something like\n\toperand1 schema.+ operand2\nbut I can't find any way to make this work without tons of shift/reduce\nconflicts. One counterexample suggesting it can't be done is that\n\tfoo.*\nmight be either a reference to all the columns of foo, or a qualified\noperator name.\n\nWe can still put operators into namespaces and allow qualified names in\nCREATE/DROP OPERATOR. However, lookup of operators in expressions would\nhave to be completely dependent on the search path. That's not real\ncool; among other things, pg_dump couldn't guarantee that dumped\nexpressions would be interpreted the same way when reloaded.\n\nThings we might do to reduce the uncertainty:\n\n1. Keep operators as database-wide objects, instead of putting them into\nnamespaces. This seems a bit silly though: if the types and functions\nthat underlie an operator are private to a namespace, shouldn't the\noperator be as well?\n\n2. Use a restricted, perhaps fixed search-path for searching for\noperators. For example, we might force the search path to have\npg_catalog first even when this is not true for the table name search\npath. But I'm not sure what an appropriate definition would be.\nA restricted search path might limit the usefulness of private operators\nto the point where we might as well have kept them database-wide.\n\nComments anyone? I'm really unsure what's the best way to proceed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Apr 2002 12:43:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Operators and schemas" }, { "msg_contents": "Tom Lane wrote:\n> \n> 1. Keep operators as database-wide objects, instead of putting them into\n> namespaces. 
This seems a bit silly though: if the types and functions\n> that underlie an operator are private to a namespace, shouldn't the\n> operator be as well?\n> \n\nNot necessarily. One can still create a type and functions to operate \non them. Operators are a convenience, not a necessity (except for \nindices extensions).\n\nIf some types are really important and operators are desired, it can be\ncoordinated with the DBA as operators would be a database wide resource.\n(This would be the case if indices extensions were involved anyway).\n\nI would keep operators database-wide.\n\n-- \nFernando Nasser\nRed Hat - Toronto E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n", "msg_date": "Mon, 15 Apr 2002 14:28:20 -0400", "msg_from": "Fernando Nasser <fnasser@redhat.com>", "msg_from_op": false, "msg_subject": "Re: Operators and schemas" }, { "msg_contents": "Fernando Nasser <fnasser@redhat.com> writes:\n> If some types are really important and operators are desired, it can be\n> coordinated with the DBA as operators would be a database wide resource.\n> (This would be the case if indices extensions were involved anyway).\n\nNo, there isn't any particular reason that index extensions should be\nconsidered database-wide resources; if operators are named local to\nschemas, then opclasses can be too, and that's all you need.\n\nIn practice maybe it doesn't matter; I doubt anyone would try to\nimplement an indexable datatype in anything but C, and to define\nC functions you must be superuser anyway. But this does not seem\nto me to be a good argument why operator names should be global.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Apr 2002 14:40:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Operators and schemas " }, { "msg_contents": "> 2. Use a restricted, perhaps fixed search-path for searching for\n> operators. 
For example, we might force the search path to have\n> pg_catalog first even when this is not true for the table name\nsearch\n> path. But I'm not sure what an appropriate definition would be.\n> A restricted search path might limit the usefulness of private\noperators\n> to the point where we might as well have kept them database-wide.\n\nWanting to open a bucket of worms, what would making the system create\nan operator with the schema name in it?\n\nIe. Create operator schema.+ would create:\n\n'schema.+' in pg_catalog\n'+' in schema\n\nThis would require double the operator entries, but isn't really any\nworse than the array types as related to their base type.\n\nSo, user could type:\nselect col1 + col2 from schema.tab;\nselect col1 schema.+ col2 from tab;\n\n\n", "msg_date": "Mon, 15 Apr 2002 15:51:48 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Operators and schemas" }, { "msg_contents": "Tom Lane writes:\n\n> After some fooling around with gram.y, I have come to the conclusion\n> that there's just no way to use a schema-qualified name for an operator\n> in an expression. I was hoping we might be able to write something like\n> \toperand1 schema.+ operand2\n> but I can't find any way to make this work without tons of shift/reduce\n> conflicts. One counterexample suggesting it can't be done is that\n> \tfoo.*\n> might be either a reference to all the columns of foo, or a qualified\n> operator name.\n\nWhat about foo.\"*\"?\n\n> We can still put operators into namespaces and allow qualified names in\n> CREATE/DROP OPERATOR. However, lookup of operators in expressions would\n> have to be completely dependent on the search path. 
That's not real\n> cool; among other things, pg_dump couldn't guarantee that dumped\n> expressions would be interpreted the same way when reloaded.\n\nWe could make some sort of escape syntax, like\n\n op1 myschema.operator(+) op2\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 15 Apr 2002 15:54:03 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Operators and schemas" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> We could make some sort of escape syntax, like\n\n> op1 myschema.operator(+) op2\n\nI thought a little bit about that ... the above syntax does not work\nbut it looks like we could do something along the lines of\n\n\top1 OPERATOR(myschema.+) op2\n\nwhere OPERATOR has to become a fully reserved keyword.\n\nBut: do you really want to see all dumped rules, defaults, etc in that\nstyle? Ugh... talk about loss of readability...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Apr 2002 16:31:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Operators and schemas " }, { "msg_contents": "Tom Lane writes:\n\n> But: do you really want to see all dumped rules, defaults, etc in that\n> style? Ugh... talk about loss of readability...\n\nI imagine that pg_dump could be able to figure out that certain references\nwould be \"local\", so no explicit schema qualification is necessary.\nThus, the only weird-looking operator invocations would be those that were\nalso created in weird ways. 
In general, pg_dump should try to avoid\nmaking unnecessary schema qualifications on any object so that you can\nedit the dump and only change the schema name in one place.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 15 Apr 2002 17:17:37 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Operators and schemas " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I imagine that pg_dump could be able to figure out that certain references\n> would be \"local\", so no explicit schema qualification is necessary.\n\nWell, if it makes assumptions about the path then it can do that ... or\nI guess it could explicitly set the path, and then it knows. Yeah, that\nwill probably work well enough. Okay, good ... the question of what\npg_dump should do about qualifying names was bugging me.\n\nWhat I'm now envisioning is that pg_dump will explicitly set\n\tset search_path = 'foo';\nwhen dumping or reloading schema foo. Given the present semantics of\nsearch_path, that will imply an implicit search of pg_catalog before\nfoo. Therefore, we have the following ground rules for schema\nqualification in pg_dump:\n\t* System (pg_catalog) names never need qualification.\n\t* Names in the current schema need be qualified only if they\n\t conflict with system names.\n\t* Cross-references to other schemas will always be qualified.\n\nThis seems workable. 
Thoughts?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Apr 2002 17:55:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Operators and schemas " }, { "msg_contents": "Tom Lane wrote:\n> \n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > I imagine that pg_dump could be able to figure out that certain references\n> > would be \"local\", so no explicit schema qualification is necessary.\n\n[snip]\n\n> * Names in the current schema need be qualified only if they\n\nWhat does the current schema mean ?\nOr What does \"local\" mean ?\n\nregards,\nHiroshi Inoue\n", "msg_date": "Tue, 16 Apr 2002 09:55:29 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Operators and schemas" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Tom Lane wrote:\n>> * Names in the current schema need be qualified only if they\n\n> What does the current schema mean ?\n\nIn this case, it means the one pg_dump is trying to dump.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Apr 2002 20:57:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Operators and schemas " }, { "msg_contents": "Tom Lane writes:\n\n> What I'm now envisioning is that pg_dump will explicitly set\n> \tset search_path = 'foo';\n> when dumping or reloading schema foo.\n\nI had imagined that pg_dump would emit commands such as this:\n\nCREATE SCHEMA foo\n CREATE TABLE bar ( ... )\n CREATE otherthings\n;\n\nwhich is how I read the SQL standard. Are there plans to implement the\nCREATE SCHEMA command that way? I think I recall someone from Toronto\nmentioning something along these lines.\n\nObviously, this command style would be mostly equivalent to temporarily\nsetting the search path. We'd also need alter schema, which SQL doesn't\nhave.\n\n> Given the present semantics of\n> search_path, that will imply an implicit search of pg_catalog before\n> foo.\n\nInteresting ... 
Is that only temporary? (since you say \"present\"\nsemantics)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 15 Apr 2002 22:02:10 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Operators and schemas " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I had imagined that pg_dump would emit commands such as this:\n\n> CREATE SCHEMA foo\n> CREATE TABLE bar ( ... )\n> CREATE otherthings\n> ;\n\n> which is how I read the SQL standard. Are there plans to implement the\n> CREATE SCHEMA command that way? I think I recall someone from Toronto\n> mentioning something along these lines.\n\nWe have portions of that now, but I don't think there is any serious\nintent to support *all* Postgres CREATE statements inside CREATE SCHEMA.\nBecause there are no semicolons in there, allowing random statements in\nCREATE SCHEMA tends to force promotion of keywords to full-reserved\nstatus (so you can tell where each sub-statement starts). My\ninclination is to allow the minimum necessary for SQL spec compliance.\n\n(Fernando, your thoughts here?)\n\t\t\n>> Given the present semantics of\n>> search_path, that will imply an implicit search of pg_catalog before\n>> foo.\n\n> Interesting ... Is that only temporary? (since you say \"present\"\n> semantics)\n\nOnly meant to imply \"it hasn't been seriously reviewed, so someone\nmight have a better idea\". At the moment I'm happy with it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Apr 2002 00:21:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Operators and schemas " }, { "msg_contents": "Tom Lane wrote:\n> \n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > I had imagined that pg_dump would emit commands such as this:\n> \n> > CREATE SCHEMA foo\n> > CREATE TABLE bar ( ... )\n> > CREATE otherthings\n> > ;\n> \n> > which is how I read the SQL standard. 
Are there plans to implement the\n> > CREATE SCHEMA command that way? I think I recall someone from Toronto\n> > mentioning something along these lines.\n> \n> We have portions of that now, but I don't think there is any serious\n> intent to support *all* Postgres CREATE statements inside CREATE SCHEMA.\n> Because there are no semicolons in there, allowing random statements in\n> CREATE SCHEMA tends to force promotion of keywords to full-reserved\n> status (so you can tell where each sub-statement starts). My\n> inclination is to allow the minimum necessary for SQL spec compliance.\n> \n> (Fernando, your thoughts here?)\n> \n\nI agree. And for Entry level SQL'92 we are done -- only tables, views \nand grants are required. The multiple schemas per user is already\nan intermediate SQL feature -- for intermediate SQL'92 we would still \nneed domains and a character set specification.\n\nFor SQL'99, we would have to add types, functions and triggers\n(only triggers are not part of Core SQL'99, but I would not leave them out).\n\nRegards,\nFernando\n\n\n\n-- \nFernando Nasser\nRed Hat - Toronto E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n", "msg_date": "Tue, 16 Apr 2002 14:27:17 -0400", "msg_from": "Fernando Nasser <fnasser@redhat.com>", "msg_from_op": false, "msg_subject": "Re: Operators and schemas" }, { "msg_contents": "Fernando Nasser writes:\n\n> I agree. And for Entry level SQL'92 we are done -- only tables, views\n> and grants are required. The multiple schemas per user is already\n> an intermediate SQL feature -- for intermediate SQL'92 we would still\n> need domains and a character set specification.\n>\n> For SQL'99, we would have to add types, functions and triggers\n> (only triggers are not part of Core SQL'99, but I would not leave them out).\n\nI can hardly believe that we want to implement this just to be able to\ncheck off a few boxes on the SQL-compliance test. 
Once you have the\nability to use a fixed list of statements in this context it should be\neasy to allow a more or less arbitrary list. Especially if they all start\nwith the same key word it should be possible to parse this.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 16 Apr 2002 18:33:10 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Operators and schemas" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I can hardly believe that we want to implement this just to be able to\n> check off a few boxes on the SQL-compliance test. Once you have the\n> ability to use a fixed list of statements in this context it should be\n> easy to allow a more or less arbitrary list. Especially if they all start\n> with the same key word it should be possible to parse this.\n\nIt's not the \"start\" part that creates the problem, so much as the \"end\"\npart. What we found was that we were having to reserve secondary\nkeywords. CREATE is now fully reserved, which it was not in 7.2,\nand that alone doesn't bother me. But AUTHORIZATION and GRANT are\nmore reserved than they were before, too, and it'll get worse the\nmore statements we insist on accepting inside CREATE SCHEMA.\n\nAFAICS, embedding statements inside CREATE SCHEMA adds absolutely zero\nfunctionality; you can just as easily execute them separately. Do we\nreally want to push a bunch more keywords into full-reserved status\n(and doubtless break some existing table definitions thereby) just\nto check off a box that isn't even in the SQL compliance test?\n\nTo the extent that we can allow stuff in CREATE SCHEMA without adding\nmore reserved words, it's fine with me. But I question having to add\nreserved words to do it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Apr 2002 18:58:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Operators and schemas " } ]
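The three ground rules Tom proposes for name qualification in pg_dump output can be restated as a small decision function. This is a hypothetical sketch only — pg_dump itself is C, and the function and its arguments are invented for illustration; the rules are the ones listed in the thread (system names never qualified, current-schema names qualified only on conflict with a system name, cross-schema references always qualified):

```python
SYSTEM_SCHEMA = "pg_catalog"

def qualify(name, schema, current_schema, system_names):
    """Decide how a dumped reference to schema.name should be spelled,
    assuming the dump runs under: set search_path = '<current_schema>';
    which implicitly searches pg_catalog first."""
    if schema == SYSTEM_SCHEMA:
        return name                      # system names never need qualification
    if schema == current_schema and name not in system_names:
        return name                      # found via the search path, no conflict
    return "%s.%s" % (schema, name)      # conflicting or cross-schema: qualify
```

This is also why most dumped expressions would stay readable: only the rare name that shadows a pg_catalog entry, or a genuine cross-schema reference, ever picks up an explicit schema prefix.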
[ { "msg_contents": "What would it take to make the array iterator functions a part of the\nstandard base? (contrib/array)\n\nA number of people want this type of functionality (value = [ any\nvalue of array ], and value = [ all values of array ] ).\n\nThe license needs to be changed (with authors permission), but other\nthan that?\n\n-- README BELOW --\nThis loadable module defines a new class of functions which take\nan array and a scalar value, iterate a scalar operator over the\nelements of the array and the value, and compute a result as\nthe logical OR or AND of the iteration results.\nFor example array_int4eq returns true if some of the elements\nof an array of int4 is equal to the given value:\n\n array_int4eq({1,2,3}, 1) --> true\n array_int4eq({1,2,3}, 4) --> false\n\nIf we have defined T array types and O scalar operators we can\ndefine T x O x 2 array functions, each of them has a name like\n\"array_[all_]<basetype><operation>\" and takes an array of type T\niterating the operator O over all the elements. Note however\nthat some of the possible combination are invalid, for example\nthe array_int4_like because there is no like operator for int4.\n\nWe can then define new operators based on these functions and use\nthem to write queries with qualification clauses based on the\nvalues of some of the elements of an array.\nFor example to select rows having some or all element of an array\nattribute equal to a given value or matching a regular expression:\n\n create table t(id int4[], txt text[]);\n\n -- select tuples with some id element equal to 123\n select * from t where t.id *= 123;\n\n -- select tuples with some txt element matching '[a-z]'\n select * from t where t.txt *~ '[a-z]';\n\n -- select tuples with all txt elements matching '^[A-Z]'\n select * from t where t.txt[1:3] **~ '^[A-Z]';\n\nThe scheme is quite general, each operator which operates on a base\ntype\ncan be iterated over the elements of an array. 
It seems to work well\nbut\ndefining each new operator requires writing a different C function.\nFurthermore, in each function there are two hardcoded OIDs which\nreference\na base type and a procedure. Not very portable. Can anyone suggest a\nbetter and more portable way to do it?\n\nSee also array_iterator.sql for an example on how to use this module.\n--\nRod Taylor\n\n\n", "msg_date": "Mon, 15 Apr 2002 13:39:35 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Array Iterator functions" }, { "msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> What would it take to make the array iterator functions a part of the\n> standard base? (contrib/array)\n\nTo me, the main problem with contrib/array is that it doesn't scale:\nyou need more C functions for every array datatype you want to support.\n\nAt the very least it needs a way to avoid more per-datatype C code.\nThe per-datatype operator definitions are annoying too, but perhaps\nnot quite as annoying... one could imagine CREATE TYPE automatically\nadding those along with the array type itself.\n\nI'm not sure what it would take to avoid the per-datatype C code.\nClearly we want something like array_in/array_out, but how does the\nextra information get to these functions?\n\nIt would also be good to have some idea of whether we could ever hope to\nindex queries using these functions. The GIST stuff might provide that,\nor it might not. I don't insist that this work on day one, but I'd like\nto see a road map, just to be sure that we are not shooting ourselves in\nthe foot by standardizing a not-quite-index-compatible definition.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Apr 2002 15:17:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Array Iterator functions " } ]
[ { "msg_contents": "Hi all,\n\nI'm seeing this on a fresh build from CVS:\n\n$ ./configure && make && make check\n...\n$ cd src/test/regress/tmp_check\n$ ./install/tmp/pgsql/bin/postgres -D data regression\nLOG: database system was shut down at 2002-04-15 15:03:58 EDT\nLOG: checkpoint record is at 0/160368C\nLOG: redo record is at 0/160368C; undo record is at 0/0; shutdown TRUE\nLOG: next transaction id: 4551; next oid: 139771\nLOG: database system is ready\n\nPOSTGRES backend interactive interface \n$Revision: 1.260 $ $Date: 2002/03/24 04:31:07 $\n\nbackend> create table foo (c1 int);\nERROR: invalid relation \"foo\"; system catalog modifications are currently disallowed\nbackend> create schema x;\nbackend> create table x.bar (c1 int);\nbackend>\n\nIs this the expected behavior? I haven't been following the schema\nwork very closely, but this was quite a surprise to me...\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Mon, 15 Apr 2002 15:07:19 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": true, "msg_subject": "regression in CVS HEAD" }, { "msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> POSTGRES backend interactive interface \n> $Revision: 1.260 $ $Date: 2002/03/24 04:31:07 $\n\n> backend> create table foo (c1 int);\n> ERROR: invalid relation \"foo\"; system catalog modifications are currently disallowed\n> backend> create schema x;\n> backend> create table x.bar (c1 int);\n> backend>\n\n> Is this the expected behavior?\n\nIt is at the moment but I'm planning to change it. 
Currently, a\nstandalone backend defaults to pg_catalog being the target creation\nnamespace, which is needed by initdb; but I was planning to make initdb\nexplicitly set the search_path to pg_catalog, because it seems like a\nbad idea for pg_catalog to ever be the default target.\n\nIn the meantime, try an explicit\n\tset search_path = 'public';\nthen \"create table foo\" would create public.foo which will be allowed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Apr 2002 15:56:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: regression in CVS HEAD " }, { "msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> backend> create table foo (c1 int);\n> ERROR: invalid relation \"foo\"; system catalog modifications are currently disallowed\n\nI've committed a fix for this.\n\nBTW, I dunno about other developers, but I never use standalone backends\nfor debugging. It's a lot nicer to open two windows, run a regular psql\nin one, and use the other to run gdb and \"attach\" to the backend\nprocess.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Apr 2002 18:36:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: regression in CVS HEAD " } ]
[ { "msg_contents": "In my understanding, our consensus was enabling multibyte support by\ndefault for 7.3. Any objection?\n--\nTatsuo Ishii\n", "msg_date": "Tue, 16 Apr 2002 10:20:33 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "multibyte support by default" }, { "msg_contents": "Tatsuo Ishii writes:\n\n> In my understanding, our consensus was enabling multibyte support by\n> default for 7.3. Any objection?\n\nIt was my understanding (or if I was mistaken, then it is my suggestion)\nthat the build-time option would be removed altogether and certain\nperformance-critical places (if any) would be wrapped into\n\n if (encoding_is_single_byte(current_encoding)) { }\n\nThat's basically what I did with the locale support.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 15 Apr 2002 22:07:11 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: multibyte support by default" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> In my understanding, our consensus was enabling multibyte support by\n> default for 7.3. Any objection?\n\nUh, was it? I don't recall that. Do we have any numbers on the\nperformance overhead?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Apr 2002 00:34:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: multibyte support by default " }, { "msg_contents": "> > In my understanding, our consensus was enabling multibyte support by\n> > default for 7.3. Any objection?\n> \n> Uh, was it? I don't recall that. 
Do we have any numbers on the\n> performance overhead?\n> \n> \t\t\tregards, tom lane\n\nSee below.\n\nSubject: Re: [HACKERS] Unicode combining characters \nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nTo: Tatsuo Ishii <t-ishii@sra.co.jp>\ncc: ZeugswetterA@spardat.at, pgman@candle.pha.pa.us, phede-ml@islande.org,\n pgsql-hackers@postgresql.org\nDate: Wed, 03 Oct 2001 23:05:16 -0400\nComments: In-reply-to Tatsuo Ishii <t-ishii@sra.co.jp>\tmessage dated \"Thu, 04 Oct 2001 11:16:42 +0900\"\n\nTatsuo Ishii <t-ishii@sra.co.jp> writes:\n> To accomplish this, I moved MatchText etc. to a separate file and now\n> like.c includes it *twice* (similar technique used in regexec()). This\n> makes like.o a little bit larger, but I believe this is worth for the\n> optimization.\n\nThat sounds great.\n\nWhat's your feeling now about the original question: whether to enable\nmultibyte by default now, or not? I'm still thinking that Peter's\ncounsel is the wisest: plan to do it in 7.3, not today. But this fix\nseems to eliminate the only hard reason we have not to do it today ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Apr 2002 19:41:58 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: multibyte support by default " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> In my understanding, our consensus was enabling multibyte support by\n> default for 7.3. Any objection?\n>> \n>> Uh, was it? I don't recall that. Do we have any numbers on the\n>> performance overhead?\n\n> See below.\n\nOh, okay, now I recall that thread. You're right, we did agree.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Apr 2002 09:08:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: multibyte support by default " }, { "msg_contents": "On Tue, 2002-04-16 at 03:20, Tatsuo Ishii wrote:\n> In my understanding, our consensus was enabling multibyte support by\n> default for 7.3. 
Any objection?\n\nIs there currently some agreed plan for introducing standard\nNCHAR/NVARCHAR types?\n\nWhat does ISO/ANSI say about multibyteness of simple CHAR types?\n\n--------------\nHannu\n\n\n", "msg_date": "16 Apr 2002 21:18:31 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: multibyte support by default" }, { "msg_contents": "> On Tue, 2002-04-16 at 03:20, Tatsuo Ishii wrote:\n> > In my understanding, our consensus was enabling multibyte support by\n> > default for 7.3. Any objection?\n> \n> Is there currently some agreed plan for introducing standard\n> NCHAR/NVARCHAR types?\n\nI have a kind of *personal* plan, maybe for 7.4, not for 7.3, due\nto the limitation of my free time.\n\nBTW, NCHAR/NVARCHAR is just an abbreviation of \"CHAR(n) CHARACTER SET\nfoo\" (where foo is an implementation-defined charset). So I'm not too\nimpressed by the idea of implementing NCHAR/NVARCHAR alone.\n\n> What does ISO/ANSI say about multibyteness of simple CHAR types?\n\nThere's no such idea as \"multibyteness\" in the standard. In my\nunderstanding the standard does not restrict \"normal\" CHAR types to\nhave only ASCII (more precisely \"SQL_CHARACTER\"). Moreover, CHAR types\nwithout a CHARSET specification will have a default charset of SQL_TEXT,\nand its actual charset will be defined by the implementation.\n\nIn summary, allowing any characters, including multibyte ones, in CHAR\ntypes is not against the standard at all, IMO.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 17 Apr 2002 11:09:29 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: multibyte support by default" } ]
[ { "msg_contents": "\nHere is an email I sent to patches, minus the patch. I am sending to\nhackers for comments.\n\n---------------------------------------------------------------------------\n\n> \n> The following patch adds --maxindfuncparams to configure to allow you to\n> more easily set the maximum number of function parameters and columns\n> in an index. (Can someone come up with a better name?)\n> \n> The patch also removes --def_maxbackends, which Tom reported a few weeks\n> ago he wanted to remove. Can people review this? To test it, you have\n> to run autoconf.\n> \n> Are we staying at 16 as the default? I personally think we can\n> increase it to 32 with little penalty, and that we should increase\n> NAMEDATALEN to 64.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Apr 2002 23:18:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [SQL] 16 parameter limit" } ]
[ { "msg_contents": "The Firebird guys have gotten around to releasing 1.0. If you read this\nfront page spiel, you'll notice that they use MVCC, but with an overwriting\nstorage manager.\n\nhttp://www.ibphoenix.com/ibp_act_db.html\n\nThe relevant extract:\n\n\"Multi-version concurrency control uses back versions of modified and\ndeleted records to maintain a consistent view of data for read transactions.\nEach record version is tagged with the identifier of the transaction that\ncreated it. When a record is modified, the old version of the record is\nreduced to a \"delta record\" - a set of differences from the new version -\nand written to a new location, ordinarily on the same page where it was\noriginally stored. Then the new record overwrites the old. The new record\npoints to the old record. Unless the values of indexed fields are changed,\nthere's no need to update the index. Even if the values have changed, the\nold values remain in the index to keep the record available to older\ntransactions.\n\nThe transaction identifier also permits update transactions to recognize\nupdates by concurrent transactions and allows Firebird to dispense with\nwrite locks on records. When a transaction encounters a record updated by a\nconcurrent transaction, it waits for the other transaction to complete. If\nthe competing transaction commits, the waiting transaction gets an error. If\nthe competing transaction rolls back, the waiting transaction succeeds. If\nthe competing transaction attempts to update a record that the waiting\ntransaction has modified, a deadlock exists and one or the other will\nreceive an error.\n\nMulti-version concurrency replaces a before-image (rollback) log with\nversions stored in the database. When a transaction fails, its changes\nremain in the database. The next transaction that reads that record\nrecognizes that the record version is invalid. 
Depending on the version of\nFirebird and architecture, that transaction either replaces the invalid\nrecord version with its back version or invokes a garbage collect thread. \"\n\nChris\n\n", "msg_date": "Tue, 16 Apr 2002 12:35:52 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Firebird 1.0 released" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> The Firebird guys have gotten around to releasing 1.0. If you read this\n> front page spiel, you'll notice that they use MVCC, but with an overwriting\n> storage manager.\n\nYup. I've had a couple of long chats with Ann Harrison at the recent\n\"OSDB summit\" meetings. I think we each came away enlightened about the\nother implementation, but not in any large hurry to change our own.\n\nI did steal at least one idea from her, though. (rummages in CVS logs)\nah, here's a hit:\n\n2001-09-29 19:49 tgl\n\n\t* src/backend/access/nbtree/nbtinsert.c: Tweak btree page split\n\tlogic so that when splitting a page that is rightmost on its tree\n\tlevel, we split 2/3 to the left and 1/3 to the new right page,\n\trather than the even split we use elsewhere. The idea is that when\n\tfaced with a steadily increasing series of inserted keys (such as\n\tsequence or timestamp values), we'll end up with a btree that's\n\tabout 2/3ds full not 1/2 full, which is much closer to the desired\n\tsteady-state load for a btree.\tPer suggestion from Ann Harrison of\n\tIBPhoenix.\n\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Apr 2002 01:11:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Firebird 1.0 released " }, { "msg_contents": "Hi,\n\nI was interested in this:\nFirebird's indexes are very dense because they compress both the prefix and \nthe suffix of each key. Suffix compression is simply the elimination of \ntrailing blanks or zeros, depending on the data type. 
Suffix compression is \nperformed on each segment of a segmented key. Prefix compression removes the \nleading bytes that match the previous key. Thus a duplicate value has no key \nstored at all. Dense storage in indexes minimizes the depth of the btrees, \neliminating the advantage of other index types for most data.\n\nDo we do this? How feasible is this?\n\nOn Tuesday 16 April 2002 00:35, Christopher Kings-Lynne wrote:\n> The Firebird guys have gotten around to releasing 1.0. If you read this\n> front page spiel, you'll notice that they use MVCC, but with an overwriting\n> storage manager.\n\n--\nDenis\n\n", "msg_date": "Tue, 16 Apr 2002 07:14:17 -0400", "msg_from": "Denis Perchine <dyp@perchine.com>", "msg_from_op": false, "msg_subject": "Re: Firebird 1.0 released" }, { "msg_contents": "I just had an interesting idea. It sounds too easy to believe, but hear me out\nand correct me if I'm wrong.\n\nCurrently, during an update, PostgreSQL takes the existing record, modifies it,\nand adds it as a new row. The previous record has a pointer to the new version.\nIf the row is updated twice, the original row is hit first, followed by the\nnext version, then the last version. Do I understand this correctly?\n\nNow, what if we did it another way: copy the old version of the row into the\nnew row and update the tuple in place? (Space permitting, of course.) That way,\nperformance does not degrade over time; also, Vacuum should be easier and less\ncumbersome because it simply lops off the end of the list and marks tuples\nwhich are not in any transaction path.\n\nIs this a lot of work? Is it inherently wrong?\n", "msg_date": "Tue, 16 Apr 2002 08:13:06 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Is this a better MVCC."
}, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> Now, what if we did it another way, copy the old version of the row into the\n> new row and update the tuple in place?\n\nI don't think we can get away with moving the extant tuple. If we did,\na concurrent scan that should have found the old tuple might miss it.\n(This is why VACUUM FULL needs exclusive lock to move tuples.)\n\nIt's fairly unclear whether this would actually buy any performance\ngain, anyway. In the case of a seqscan I don't see that it makes any\ndifference on average, and in the case of an indexscan what matters is\nthe index ordering not the physical location. (In this connection,\nbtree indexes already do the \"right thing\", cf comments for\n_bt_insertonpg.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Apr 2002 09:15:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is this a better MVCC. " }, { "msg_contents": "Denis Perchine <dyp@perchine.com> writes:\n> I was interested in this:\n> Firebird's indexes are very dense because they compress both the prefix and \n> the suffix of each key. Suffix compression is simply the elimination of \n> trailing blanks or zeros, depending on the data type. Suffix compression is \n> performed on each segment of a segmented key. Prefix compression removes the \n> leading bytes that match the previous key. Thus a duplicate value has no key \n> stored at all. Dense storage in indexes minimizes the depth of the btrees, \n> eliminating the advantage of other index types for most data.\n\n> Do we do this? How feasible is this?\n\nNo, and it seems very bogus to me. 
With a storage scheme like that,\nyou could not do a random-access search --- the only way to know the\ntrue value of each key is if you are doing a linear scan of the page,\nso that you can keep track of the previous key value by dead reckoning.\n(This assumes that the first key on each page is stored in full;\notherwise it's even worse.)\n\nAnother problem is that key inserts and deletes get more complex since\nyou have to recompute the following tuple. Life will get interesting\nif the following tuple expands; you might have to split the page to hold\nit. (Hmm, is it possible for a *delete* to force a page split under the\nFirebird scheme? Not sure.)\n\nThe actual value of leading-byte suppression seems very data-dependent\nto me, anyway. For example, for an integer key column on a\nlittle-endian machine, leading-byte suppression would buy you nothing at\nall. (Perhaps Firebird's implementation has enough datatype-specific\nknowledge to trim integer keys at the proper end; I don't know. But I\nwouldn't want to see us try to push datatype dependencies into our index\naccess methods.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Apr 2002 09:47:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Firebird 1.0 released " }, { "msg_contents": "On 7.1.x it definitely gets slower even for indexscans. e.g. 60 updates/sec \ndropping to 30 then to 20 over time.\n\nIs this fixed for 7.2?\n\nIf not, is it possible to make the pointer point to the latest row instead \nof the most obsolete one, and having the newer rows point to the older \nones, instead of the other way round (which seems to be happening with \n7.1)? I suppose this could make updates slower - have to update indexes? 
\nBut selects would be faster (other than cases where there are a lot of \nuncommitted updates outstanding).\n\nIf that is not possible (or updating the index too painful), how about \nhaving the first pointer point to first row which then points to latest \nrow, which then points to subsequent older rows. That way the miss penalty \nis reduced.\n\nIt seems reasonable to me that the newer rows should be more visible- \nunless more people update rows and then rollback rather than update and \nthen commit.\n\nI'm missing something out right? :)\n\nRegards,\nLink.\n\nAt 09:15 AM 4/16/02 -0400, Tom Lane wrote:\n>mlw <markw@mohawksoft.com> writes:\n> > Now, what if we did it another way, copy the old version of the row \n> into the\n> > new row and update the tuple in place?\n>\n>I don't think we can get away with moving the extant tuple. If we did,\n>a concurrent scan that should have found the old tuple might miss it.\n>(This is why VACUUM FULL needs exclusive lock to move tuples.)\n>\n>It's fairly unclear whether this would actually buy any performance\n>gain, anyway. In the case of a seqscan I don't see that it makes any\n>difference on average, and in the case of an indexscan what matters is\n>the index ordering not the physical location. (In this connection,\n>btree indexes already do the \"right thing\", cf comments for\n>_bt_insertonpg.)\n\n\n", "msg_date": "Tue, 16 Apr 2002 23:00:48 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Is this a better MVCC. " }, { "msg_contents": "While it is true that you can't do binary searches on compressed indexes you\nmay get a large payoff with compressed indexes since the index fits in fewer\npages and so may be more efficiently cached in the buffer pool. 
Even a\nsmall reduction in io load may compensate for the higher computational\ndemands of a compressed index.\n\nNote also that insertion of a key can never cause following entries to\nincrease in size, only remain the same or decrease. Deletion of an entry\nmay cause the following entry to increase in size but never more than the\nsize of the entry deleted so deletes can't cause page splits.\n-regards\nricht\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tom Lane\nSent: Tuesday, April 16, 2002 9:48 AM\nTo: Denis Perchine\nCc: Hackers\nSubject: Re: [HACKERS] Firebird 1.0 released\n\n\nDenis Perchine <dyp@perchine.com> writes:\n> I was interested in this:\n> Firebird's indexes are very dense because they compress both the prefix\nand\n> the suffix of each key. Suffix compression is simply the elimination of\n> trailing blanks or zeros, depending on the data type. Suffix compression\nis\n> performed on each segment of a segmented key. Prefix compression removes\nthe\n> leading bytes that match the previous key. Thus a duplicate value has no\nkey\n> stored at all. Dense storage in indexes minimizes the depth of the btrees,\n> eliminating the advantage of other index types for most data.\n\n> Do we do this? How feasible is this?\n\nNo, and it seems very bogus to me. With a storage scheme like that,\nyou could not do a random-access search --- the only way to know the\ntrue value of each key is if you are doing a linear scan of the page,\nso that you can keep track of the previous key value by dead reckoning.\n(This assumes that the first key on each page is stored in full;\notherwise it's even worse.)\n\nAnother problem is that key inserts and deletes get more complex since\nyou have to recompute the following tuple. Life will get interesting\nif the following tuple expands; you might have to split the page to hold\nit. 
(Hmm, is it possible for a *delete* to force a page split under the\nFirebird scheme? Not sure.)\n\nThe actual value of leading-byte suppression seems very data-dependent\nto me, anyway. For example, for an integer key column on a\nlittle-endian machine, leading-byte suppression would buy you nothing at\nall. (Perhaps Firebird's implementation has enough datatype-specific\nknowledge to trim integer keys at the proper end; I don't know. But I\nwouldn't want to see us try to push datatype dependencies into our index\naccess methods.)\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n", "msg_date": "Tue, 16 Apr 2002 19:14:28 -0400", "msg_from": "Richard Tucker <richt@nusphere.com>", "msg_from_op": false, "msg_subject": "Re: Firebird 1.0 released" } ]
[ { "msg_contents": "I remember someone mentioning on the list that we should collect a list of\nplaces that refer to postgres so that we can update them for a new release.\n\nI just submitted an update on Linux.com:\n\nhttp://software.linux.com/projects/postgresql/?topic=323,324,325\n\nThat location should be added to the list.\n\nI think ZDNet was the other place.\n\nChris\n\n", "msg_date": "Tue, 16 Apr 2002 15:17:56 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Places to update when a new version is out" }, { "msg_contents": "Hi Chris,\n\nI don't have time at the moment to start making the needed document. :(\n\nDoes anyone want to throw together the basics of it and put it somewhere\nuseful?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nChristopher Kings-Lynne wrote:\n> \n> I remember someone mentioning on the list that we should collect a list of\n> places that refer to postgres so that we can update them for a new release.\n> \n> I just submitted an update on Linux.com:\n> \n> http://software.linux.com/projects/postgresql/?topic=323,324,325\n> \n> That location should be added to the list.\n> \n> I think ZDNet was the other place.\n> \n> Chris\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Tue, 16 Apr 2002 17:22:30 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Places to update when a new version is out" }, { "msg_contents": "Hmmm...where's that file in the CVS where the release process is listed (or\nat least the places where version numbers need to be updated, etc.?)\n\nI can't find it...\n\nChris\n\n> -----Original Message-----\n> From: Justin Clift [mailto:justin@postgresql.org]\n> Sent: Tuesday, 16 April 2002 3:23 PM\n> To: Christopher Kings-Lynne\n> Cc: Hackers\n> Subject: Re: [HACKERS] Places to update when a new version is out\n>\n>\n> Hi Chris,\n>\n> I don't have time at the moment to start making the needed document. :(\n>\n> Does anyone want to throw together the basics of it and put it somewhere\n> useful?\n>\n> :-)\n>\n> Regards and best wishes,\n>\n> Justin Clift\n>\n>\n> Christopher Kings-Lynne wrote:\n> >\n> > I remember someone mentioning on the list that we should\n> collect a list of\n> > places that refer to postgres so that we can update them for a\n> new release.\n> >\n> > I just submitted an update on Linux.com:\n> >\n> > http://software.linux.com/projects/postgresql/?topic=323,324,325\n> >\n> > That location should be added to the list.\n> >\n> > I think ZDNet was the other place.\n> >\n> > Chris\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n>\n> --\n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. 
He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n>\n\n", "msg_date": "Tue, 16 Apr 2002 15:35:29 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: Places to update when a new version is out" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> Hmmm...where's that file in the CVS where the release process is listed (or\n> at least the places where version numbers need to be updated, etc.?)\n> \n> I can't find it...\n\nsrc/tools/RELEASE_CHANGES. I suggested a RELEASE_ANNOUNCEMENT file for\nURL's of place to announce our releases. Right now, I think Marc has\nit.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 Apr 2002 13:06:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Places to update when a new version is out" } ]
[ { "msg_contents": "Hi All!\n\nIs anyone else working on pl/java?\n\nWe are still working on it. In a few weeks we may have a testable version. (For the most fearless hackers, we have a test version now :)\n\nIf anyone else is working on any implementation, please contact us!\nhttp://pljava.sourceforge.net/\n\nThanks:\nLaszlo Hornyak\n", "msg_date": "Tue, 16 Apr 2002 11:30:26 +0200", "msg_from": "\"Laszlo Hornyak\" <lhornyak@ish.hu>", "msg_from_op": true, "msg_subject": "PL/JAVA" } ]
[ { "msg_contents": "I've improved the contributed vacuumlo command; now it behaves like all other \npostgres command line utilities, e.g. it supports -U, -p, -h, -?, -v, a password \nprompt, and has a \"test mode\". In test mode, no large objects are removed, \njust reported.\n\nWhat can I do now with the patch (355 lines)? Send it to this list?\n\nBest regards,\n\tMario Weilguni\n", "msg_date": "Tue, 16 Apr 2002 13:49:40 +0200", "msg_from": "Mario Weilguni <mario.weilguni@icomedias.com>", "msg_from_op": true, "msg_subject": "Improved vacuumlo" }, { "msg_contents": "Sorry, I should've read the FAQ. I've posted the patch to pgsql-patches.\n\nOn Tuesday, 16 April 2002 13:49, Mario Weilguni wrote:\n> I've improved the contributed vacuumlo command; now it behaves like all\n> other postgres command line utilities, e.g. it supports -U, -p, -h, -?, -v,\n> a password prompt, and has a \"test mode\". In test mode, no large objects are\n> removed, just reported.\n>\n> What can I do now with the patch (355 lines)? Send it to this list?\n>\n> Best regards,\n> \tMario Weilguni\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n", "msg_date": "Tue, 16 Apr 2002 14:02:38 +0200", "msg_from": "Mario Weilguni <mario.weilguni@icomedias.com>", "msg_from_op": true, "msg_subject": "Re: Improved vacuumlo" } ]
[ { "msg_contents": "\nCould some ppl test out archives.postgresql.org and let me know if they\nnotice any differences in speed?\n\nThanks ...\n\n", "msg_date": "Tue, 16 Apr 2002 09:41:19 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Testers needed ..." }, { "msg_contents": "On Tue, 16 Apr 2002, Marc G. Fournier wrote:\n\n> \n> Could some ppl test out archives.postgresql.org and let me know if they\n> notice any differences in speed?\n\nMarc,\n\nA dramatic increase in performance.\n\nGavin\n\n", "msg_date": "Tue, 16 Apr 2002 23:26:35 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Testers needed ..." }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> Could some ppl test out archives.postgresql.org and let me know if they\n> notice any differences in speed?\n\nYup. It's usable again! What did you do?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Apr 2002 10:24:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Testers needed ... " }, { "msg_contents": "On Tue, 16 Apr 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > Could some ppl test out archives.postgresql.org and let me know if they\n> > notice any differences in speed?\n>\n> Yup. It's usable again! What did you do?\n\nGot more RAM installed :) The archives have a buffer cache right now of\n1.5Gig, and will be jumped to a full 3Gig as soon as the \"special RAM\"\ngets in ... Rackspace had to special order in some RAM to bring it from\n3Gig->4gig ... 
I'm suspecting they needed to switch to low-voltage RAM for\nthat, which they didn't have in stock ...\n\nI've also moved the indices to one file system, while leaving the tables\nthemselves on the other (this box is limited to 2 hard drives,\nunfortunately), which seems to be doign a better job of keeping disk I/O\ndown a bit ...\n\nAm currently rebuilding the indices, so *everything* isn't in there yet,\nbut we're up to:\n\n Database statistics\n\n Status Expired Total\n -----------------------------\n 0 61548 61734 Not indexed yet\n 200 0 66252 OK\n 404 0 8 Not found\n -----------------------------\n Total 61548 127994\n\nand climbing fast ...\n\nOh, and, of course, its running on v7.2.1 now, which means that VACUUMs no\nlonger lock up the search ... last night, I had something like 367\nindexers pounding away at the database *and* a VACUUM running *and* did a\nsearch in <2 minutes (considering that is 367 simultaneous and active\nconnections to the database, I'm impressed) ...\n\n\n", "msg_date": "Tue, 16 Apr 2002 11:41:37 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Testers needed ... " }, { "msg_contents": "Marc G. Fournier wrote:\n> [...]\n> Oh, and, of course, its running on v7.2.1 now, which means that VACUUMs no\n> longer lock up the search ... last night, I had something like 367\n> indexers pounding away at the database *and* a VACUUM running *and* did a\n> search in <2 minutes (considering that is 367 simultaneous and active\n> connections to the database, I'm impressed) ...\n\n Who said PostgreSQL doesn't scale?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Tue, 16 Apr 2002 14:06:01 -0400 (EDT)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Testers needed ..." }, { "msg_contents": "En Tue, 16 Apr 2002 09:41:19 -0300 (ADT)\n\"Marc G. Fournier\" <scrappy@hub.org> escribió:\n\n> Could some ppl test out archives.postgresql.org and let me know if they\n> notice any differences in speed?\n\nWell, it's impressive.\n\nOne thing I don't like about archives.postgresql.org is that when it\nshow results for a search, in the space supposedly dedicated to showing\nsome lines of every result it always shows the header added in the\narchive version of mail, i.e.\n\n1.Re: [HACKERS] Re: psql and comments [2]\n Search for: Results per page: 10 20 50 Search for: Whole word Beginning Ending Substring Output format: Long Short URL Search through: Entire site PgSQL - Admin PgSQL - Announce PgSQL - Bugs PgSQL - Committers PgSQL - Cygwin PgSQL - Docs...\n\n * http://archives.postgresql.org/pgsql-hackers/1999-10/msg00153.php (text/html) Tue, 16 Apr 2002 18:23:56 EDT, 10351 bytes\n\n\n\nThis \"Search for: (etc)\" serves no purpose... what about making it skip\nthe first constant lines of HTML so it can show useful stuff?\n\nAlso, why isn't pgsql-patches in the list?\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\nSi no sabes adonde vas, es muy probable que acabes en otra parte.\n", "msg_date": "Tue, 16 Apr 2002 20:19:20 -0400", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Testers needed ..." } ]
[ { "msg_contents": "Sorry, just a question, maybe I should ask somewhere else.\n\nIs there any difference (like cost, efficiency, execution paths...) between\nthis sql:\n\nselect b\nfrom a\nwhere b is not null\n\nand this sql:\n\nselect b\nfrom a\nwhere not b is null\n\n\nJust curious.\n\nThanks for your answer\n\nBosco\n\n\n\n_________________________________________________________________\nMSN Photos is the easiest way to share and print your photos: \nhttp://photos.msn.com/support/worldwide.aspx\n\n", "msg_date": "Tue, 16 Apr 2002 13:34:10 +0000", "msg_from": "\"Bosco Ng\" <boscoklng@hotmail.com>", "msg_from_op": true, "msg_subject": "Just a Question" } ]
[ { "msg_contents": "\n\nHello,\n\nWhile trying to optimise a query I found that running VACUUM ANALYSE\nchanged all the Index Scans to Seq Scans and that the only way to revert\nto Index Scans was the add \"enable_seqscan = 0\" in postgresql.conf.\n\nSeq Scans are much slower for that specific query. Why does Postgres\nswitch to that method?\n\n PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.95.4\n(1 row)\n\nOutput with \"enable_seqscan = 0\":\n\n\tgesci5=# explain select p.id_prospect, p.position_prospect, initcap(p1.nom) as nom, initcap(p1.prenom) as prenom, a1.no_tel, a1.no_portable, p.dernier_contact, cn.id_contact, cn.id_vendeur, cn.id_operation, case when p.dernier_contact is not null then cn.date_contact::abstime::int4 else p.cree_le::abstime::int4 end as date_contact, cn.type_contact, cn.nouveau_rdv::abstime::int4 as nouveau_rdv, cn.date_echeance::abstime::int4 as date_echeance, cn.date_reponse::abstime::int4 as date_reponse from prospect p left join personne p1 on (p1.id_personne = p.id_personne1) left join adresse a1 on (a1.id_adresse = p1.id_adresse_principale) left join contact cn on (p.dernier_contact = cn.id_contact) where (p.abandon is null or p.abandon != 'O') order by cn.date_contact desc;\n\tNOTICE: QUERY PLAN:\n\n\tSort (cost=49442.99..49442.99 rows=24719 width=123)\n\t -> Hash Join (cost=14146.79..46656.05 rows=24719 width=123)\n\t\t\t-> Merge Join (cost=9761.33..40724.83 rows=24719 width=66)\n\t\t\t\t -> Sort (cost=9761.33..9761.33 rows=24719 width=49)\n\t\t\t\t\t\t-> Merge Join (cost=0.00..7485.53 rows=24719 width=49)\n\t\t\t\t\t\t\t -> Index Scan using prospect_personne1 on prospect p (cost=0.00..4322.18 rows=24719 width=22)\n\t\t\t\t\t\t\t -> Index Scan using personne_pkey on personne p1 (cost=0.00..2681.90 rows=44271 width=27)\n\t\t\t\t -> Index Scan using adresse_pkey on adresse a1 (cost=0.00..30354.16 rows=95425 width=17)\n\t\t\t-> Hash (cost=3242.09..3242.09 rows=30224 width=57)\n\t\t\t\t -> Index Scan using contact_pkey on 
contact cn (cost=0.00..3242.09 rows=30224 width=57)\n\nOutput with \"enable_seqscan = 1\": \n\n\tSort (cost=18622.67..18622.67 rows=24719 width=123)\n\t -> Hash Join (cost=10034.30..15835.73 rows=24719 width=123)\n\t\t\t-> Hash Join (cost=8074.99..12330.65 rows=24719 width=66)\n\t\t\t\t -> Hash Join (cost=2088.54..4629.65 rows=24719 width=49)\n\t\t\t\t\t\t-> Seq Scan on prospect p (cost=0.00..1289.35 rows=24719 width=22)\n\t\t\t\t\t\t-> Hash (cost=1106.71..1106.71 rows=44271 width=27)\n\t\t\t\t\t\t\t -> Seq Scan on personne p1 (cost=0.00..1106.71 rows=44271 width=27)\n\t\t\t\t -> Hash (cost=2561.25..2561.25 rows=95425 width=17)\n\t\t\t\t\t\t-> Seq Scan on adresse a1 (cost=0.00..2561.25 rows=95425 width=17)\n\t\t\t-> Hash (cost=1036.24..1036.24 rows=30224 width=57)\n\t\t\t\t -> Seq Scan on contact cn (cost=0.00..1036.24 rows=30224 width=57)\n\n\n-- \n OENONE: Rebelle à tous nos soins, sourde à tous nos discours,\n Voulez-vous sans pitié laisser finir vos jours ?\n (Phèdre, J-B Racine, acte 1, scène 3)\n", "msg_date": "Tue, 16 Apr 2002 16:03:53 +0200", "msg_from": "Louis-David Mitterrand <vindex@apartia.org>", "msg_from_op": true, "msg_subject": "Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Louis-David Mitterrand <vindex@apartia.org> writes:\n> While trying to optimise a query I found that running VACUUM ANALYSE\n> changed all the Index Scans to Seq Scans and that the only way to revert\n> to Index Scans was the add \"enable_seqscan = 0\" in postgresql.conf.\n\nEXPLAIN ANALYZE output would be more interesting than just EXPLAIN.\nAlso, what does the pg_stats view show for these tables?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Apr 2002 10:41:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": "On Tue, Apr 16, 2002 at 10:41:57AM -0400, Tom Lane wrote:\n> Louis-David Mitterrand <vindex@apartia.org> writes:\n> > 
While trying to optimise a query I found that running VACUUM ANALYSE\n> > changed all the Index Scans to Seq Scans and that the only way to revert\n> > to Index Scans was the add \"enable_seqscan = 0\" in postgresql.conf.\n> \n> EXPLAIN ANALYZE output would be more interesting than just EXPLAIN.\n> Also, what does the pg_stats view show for these tables?\n\nThanks, pg_stats output is rather big so I attached it in a separate\nfile. Here are the EXPLAIN ANALYZE ouputs:\n\n*********************************\n* 1) With \"enable_seqscan = 0\": *\n*********************************\n\ngesci5=# explain analyse select p.id_prospect, p.position_prospect, initcap(p1.nom) as nom, initcap(p1.prenom) as prenom, a1.no_tel, a1.no_portable, p.dernier_contact, cn.id_contact, cn.id_vendeur, cn.id_operation, case when p.dernier_contact is not null then cn.date_contact::abstime::int4 else p.cree_le::abstime::int4 end as date_contact, cn.type_contact, cn.nouveau_rdv::abstime::int4 as nouveau_rdv, cn.date_echeance::abstime::int4 as date_echeance, cn.date_reponse::abstime::int4 as date_reponse from prospect p left join personne p1 on (p1.id_personne = p.id_personne1) left join adresse a1 on (a1.id_adresse = p1.id_adresse_principale) left join contact cn on (p.dernier_contact = cn.id_contact) where (p.abandon is null or p.abandon != 'O') order by cn.date_contact desc;\nNOTICE: QUERY PLAN:\n\nSort (cost=49442.99..49442.99 rows=24719 width=123) (actual time=7281.98..7319.91 rows=23038 loops=1)\n -> Hash Join (cost=14146.79..46656.05 rows=24719 width=123) (actual time=2619.85..6143.47 rows=23038 loops=1)\n -> Merge Join (cost=9761.33..40724.83 rows=24719 width=66) (actual time=2061.31..3362.49 rows=23038 loops=1)\n -> Sort (cost=9761.33..9761.33 rows=24719 width=49) (actual time=1912.73..1961.61 rows=23038 loops=1)\n -> Merge Join (cost=0.00..7485.53 rows=24719 width=49) (actual time=42.98..1264.63 rows=23038 loops=1)\n -> Index Scan using prospect_personne1 on prospect p (cost=0.00..4322.18 
rows=24719 width=22) (actual time=0.28..528.42 rows=23038 loops=1)\n -> Index Scan using personne_pkey on personne p1 (cost=0.00..2681.90 rows=44271 width=27) (actual time=0.18..384.11 rows=44302 loops=1)\n -> Index Scan using adresse_pkey on adresse a1 (cost=0.00..30354.16 rows=95425 width=17) (actual time=0.44..738.99 rows=95456 loops=1)\n -> Hash (cost=3242.09..3242.09 rows=30224 width=57) (actual time=557.04..557.04 rows=0 loops=1)\n -> Index Scan using contact_pkey on contact cn (cost=0.00..3242.09 rows=30224 width=57) (actual time=0.26..457.97 rows=30224 loops=1)\nTotal runtime: 7965.74 msec\n\nEXPLAIN\n\n*********************************\n* 2) With \"enable_seqscan = 1\": *\n*********************************\n\nNOTICE: QUERY PLAN:\n\nSort (cost=18622.67..18622.67 rows=24719 width=123) (actual time=10329.09..10367.06 rows=23039 loops=1)\n -> Hash Join (cost=10034.30..15835.73 rows=24719 width=123) (actual time=1644.04..9397.53 rows=23039 loops=1)\n -> Hash Join (cost=8074.99..12330.65 rows=24719 width=66) (actual time=1110.05..6475.65 rows=23039 loops=1)\n -> Hash Join (cost=2088.54..4629.65 rows=24719 width=49) (actual time=385.33..2763.91 rows=23039 loops=1)\n -> Seq Scan on prospect p (cost=0.00..1289.35 rows=24719 width=22) (actual time=0.34..361.31 rows=23039 loops=1)\n -> Hash (cost=1106.71..1106.71 rows=44271 width=27) (actual time=381.91..381.91 rows=0 loops=1)\n -> Seq Scan on personne p1 (cost=0.00..1106.71 rows=44271 width=27) (actual time=0.15..246.32 rows=44272 loops=1)\n -> Hash (cost=2561.25..2561.25 rows=95425 width=17) (actual time=723.15..723.15 rows=0 loops=1)\n -> Seq Scan on adresse a1 (cost=0.00..2561.25 rows=95425 width=17) (actual time=0.17..452.55 rows=95427 loops=1)\n -> Hash (cost=1036.24..1036.24 rows=30224 width=57) (actual time=532.87..532.87 rows=0 loops=1)\n -> Seq Scan on contact cn (cost=0.00..1036.24 rows=30224 width=57) (actual time=2.54..302.49 rows=30225 loops=1)\nTotal runtime: 10901.85 msec\n\nEXPLAIN\n\n\n-- \n 
HIPPOLYTE: Mais quels soins désormais peuvent me retarder ?\n Assez dans les forêts mon oisive jeunesse\n Sur de vils ennemis a montré son adresse.\n (Phèdre, J-B Racine, acte 3, scène 5)", "msg_date": "Tue, 16 Apr 2002 17:04:58 +0200", "msg_from": "Louis-David Mitterrand <vindex@apartia.org>", "msg_from_op": true, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Reading all of this discussion lately about how the planner seems to \nprefer seqscan's in alot of places where indexes would be better starts \nmaking me wonder if some of the assumptions or cals made to figure costs \nare wrong...\n\n\nAnyone have any ideas?\n\nLouis-David Mitterrand wrote:\n\n>On Tue, Apr 16, 2002 at 10:41:57AM -0400, Tom Lane wrote:\n>\n>>Louis-David Mitterrand <vindex@apartia.org> writes:\n>>\n>>>While trying to optimise a query I found that running VACUUM ANALYSE\n>>>changed all the Index Scans to Seq Scans and that the only way to revert\n>>>to Index Scans was the add \"enable_seqscan = 0\" in postgresql.conf.\n>>>\n>>EXPLAIN ANALYZE output would be more interesting than just EXPLAIN.\n>>Also, what does the pg_stats view show for these tables?\n>>\n>\n>Thanks, pg_stats output is rather big so I attached it in a separate\n>file. Here are the EXPLAIN ANALYZE ouputs:\n>\n>... SNIP ...\n>\n>\n\n\n", "msg_date": "Tue, 16 Apr 2002 17:22:41 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "I know I know, replying to myself is bad and probably means I'm going \ninsane but thought of one other thing...\n\nRealistically the system should choos *ANY* index over a sequential \ntable scan. Above a fairly low number of records any indexed query \nshould be much faster than a seqscan. Am I right, or did I miss \nsomething? (wouldn't be the first time I missed something)... 
Right \nnow the planner seems to think that index queries are more expensive \nwith a larger width than doing a seqscan on (possibly) more rows with a \nnarrower width.\n\n\n\nMichael Loftis wrote:\n\n> Reading all of this discussion lately about how the planner seems to \n> prefer seqscan's in alot of places where indexes would be better \n> starts making me wonder if some of the assumptions or cals made to \n> figure costs are wrong...\n>\n>\n> Anyone have any ideas?\n>\n> Louis-David Mitterrand wrote:\n>\n>> On Tue, Apr 16, 2002 at 10:41:57AM -0400, Tom Lane wrote:\n>>\n>>> Louis-David Mitterrand <vindex@apartia.org> writes:\n>>>\n>>>> While trying to optimise a query I found that running VACUUM ANALYSE\n>>>> changed all the Index Scans to Seq Scans and that the only way to \n>>>> revert\n>>>> to Index Scans was the add \"enable_seqscan = 0\" in postgresql.conf.\n>>>>\n>>> EXPLAIN ANALYZE output would be more interesting than just EXPLAIN.\n>>> Also, what does the pg_stats view show for these tables?\n>>>\n>>\n>> Thanks, pg_stats output is rather big so I attached it in a separate\n>> file. Here are the EXPLAIN ANALYZE ouputs:\n>>\n>> ... SNIP ...\n>>\n>>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n\n\n", "msg_date": "Tue, 16 Apr 2002 17:31:13 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Tue, 16 Apr 2002, Michael Loftis wrote:\n\n> I know I know, replying to myself is bad and probably means I'm going\n> insane but thought of one other thing...\n>\n> Realistically the system should choos *ANY* index over a sequential\n> table scan. Above a fairly low number of records any indexed query\n> should be much faster than a seqscan. Am I right, or did I miss\n> something? (wouldn't be the first time I missed something)... 
Right\n\nBecause the validity information is stored with the row and not the index\nyou have to read rows for any potential hit in the index. Depending on\nthe clustering of the table, the width of the rows and the percentage of\nthe table being hit by the scan (or estimated to be hit) you may read\nmost or all of the table as well as the index and be paying a penalty for\ndoing it randomly as opposed to be sequentially. IIRC, there are some\nsettings in the configuration that let you play around with the relative\ncosts the estimator uses (the random page cost and cpu costs for dealing\nwith index entries and such).\n\n", "msg_date": "Tue, 16 Apr 2002 17:47:08 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Michael Loftis <mloftis@wgops.com> writes:\n> Reading all of this discussion lately about how the planner seems to \n> prefer seqscan's in alot of places where indexes would be better starts \n> making me wonder if some of the assumptions or cals made to figure costs \n> are wrong...\n\nCould well be. The sources are open, feel free to take a look ...\nsrc/backend/optimizer/path/costsize.c is the epicenter ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Apr 2002 23:52:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": "Michael Loftis <mloftis@wgops.com> writes:\n> Realistically the system should choos *ANY* index over a sequential \n> table scan.\n\nSorry, I do not accept that. You might as well say that we should\nrip out any attempt at cost estimation, and instead put in two or\nthree lines of brain-dead heuristics. 
If it were that simple we'd\nall be using MySQL ;-)\n\n> Above a fairly low number of records any indexed query \n> should be much faster than a seqscan.\n\nIsn't that exactly backwards?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Apr 2002 23:58:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": "Louis-David Mitterrand <vindex@apartia.org> writes:\n> While trying to optimise a query I found that running VACUUM ANALYSE\n> changed all the Index Scans to Seq Scans and that the only way to revert\n> to Index Scans was the add \"enable_seqscan = 0\" in postgresql.conf.\n>> \n>> EXPLAIN ANALYZE output would be more interesting than just EXPLAIN.\n>> Also, what does the pg_stats view show for these tables?\n\n> Thanks, pg_stats output is rather big so I attached it in a separate\n> file. Here are the EXPLAIN ANALYZE ouputs:\n\nTell you the truth, I'm having a real hard time getting excited over\na bug report that says the planner chose a plan taking 10.90 seconds\nin preference to one taking 7.96 seconds.\n\nAny time the planner's estimates are within a factor of 2 of reality,\nI figure it's done very well. The inherent unknowns are so large that\nthat really amounts to divination. We can't expect to choose a perfect\nplan every time --- if we can avoid choosing a truly stupid plan (say,\none that takes a couple orders of magnitude more time than the best\npossible plan) then we ought to be happy.\n\nBut having said that, it would be interesting to see if adjusting some\nof the planner cost parameters would yield better results in your\nsituation. The coarsest of these is random_page_cost, which is\npresently 4.0 by default. Although I have done some moderately\nextensive measurements to get that figure, other folks have reported\nthat lower numbers like 3.0 or even less seem to suit their platforms\nbetter. 
In general a lower random_page_cost will favor indexscans...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Apr 2002 00:44:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": "\nLet me add people's expections of the optimizer and the \"it isn't using\nthe index\" questions are getting very old. I have beefed up the FAQ\nitem on this a month ago, but that hasn't reduced the number of\nquestions. I almost want to require people to read a specific FAQ item\n4.8 before we will reply to anything.\n\nMaybe that FAQ item needs more info. Tom can't be running around trying\nto check all these optimizer reports when >90% are just people not\nunderstanding the basics of optimization or query performance.\n\nMaybe we need an optimizer FAQ that will answer the basic questions for\npeople.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Louis-David Mitterrand <vindex@apartia.org> writes:\n> > While trying to optimise a query I found that running VACUUM ANALYSE\n> > changed all the Index Scans to Seq Scans and that the only way to revert\n> > to Index Scans was the add \"enable_seqscan = 0\" in postgresql.conf.\n> >> \n> >> EXPLAIN ANALYZE output would be more interesting than just EXPLAIN.\n> >> Also, what does the pg_stats view show for these tables?\n> \n> > Thanks, pg_stats output is rather big so I attached it in a separate\n> > file. Here are the EXPLAIN ANALYZE ouputs:\n> \n> Tell you the truth, I'm having a real hard time getting excited over\n> a bug report that says the planner chose a plan taking 10.90 seconds\n> in preference to one taking 7.96 seconds.\n> \n> Any time the planner's estimates are within a factor of 2 of reality,\n> I figure it's done very well. The inherent unknowns are so large that\n> that really amounts to divination. 
We can't expect to choose a perfect\n> plan every time --- if we can avoid choosing a truly stupid plan (say,\n> one that takes a couple orders of magnitude more time than the best\n> possible plan) then we ought to be happy.\n> \n> But having said that, it would be interesting to see if adjusting some\n> of the planner cost parameters would yield better results in your\n> situation. The coarsest of these is random_page_cost, which is\n> presently 4.0 by default. Although I have done some moderately\n> extensive measurements to get that figure, other folks have reported\n> that lower numbers like 3.0 or even less seem to suit their platforms\n> better. In general a lower random_page_cost will favor indexscans...\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Apr 2002 01:06:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Let me add people's expections of the optimizer and the \"it isn't using\n> the index\" questions are getting very old. I have beefed up the FAQ\n> item on this a month ago, but that hasn't reduced the number of\n> questions. I almost want to require people to read a specific FAQ item\n> 4.8 before we will reply to anything.\n> \n> Maybe that FAQ item needs more info. 
Tom can't be running around trying\n> to check all these optimizer reports when >90% are just people not\n> understanding the basics of optimization or query performance.\n> \n> Maybe we need an optimizer FAQ that will answer the basic questions for\n> people.\n\nI think you are missing a huge point, people are confused by the operation of\nPostgreSQL. You admit that there are a lot of questions about this topic. This\nmeans that something is happening which is non-intuitive. Bruce, you are an\nexpert in PostgreSQL, but most people who use it are not. The unexpected\nbehavior is just that, unexpected, or a surprise.\n\nBusiness people, accountants, and engineers do not like surprises. PostgreSQL's\nbehavior on index usage is totally confusing. If I can paraphase correctly,\nPostgreSQL wants to have a good reason to use an index. Most people expect a\ndatabase to have an undeniable reason NOT to use an index. I would also say, if\na DBA created an index, there is a strong indication that there is a need for\none! (DBA knowledge vs statistics)\n\nThat is the difference, in another post Tom said he could not get excited about\n10.9 second execution time over a 7.96 execution time. Damn!!! I would. That is\nwrong.\n\nI have bitched about the index stuff for a while, and always have bumped up\nagainst this problem. If I can sway anyone's opinion, I would say, unless\n(using Tom's words) a \"factor of 2\" planner difference against, I would use an\nindex. Rather than needing clear evidence to use an index, I would say you need\nclear evidence not too.\n", "msg_date": "Wed, 17 Apr 2002 01:26:24 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> That is the difference, in another post Tom said he could not get\n> excited about 10.9 second execution time over a 7.96 execution\n> time. Damn!!! I would. That is wrong.\n\nSure. 
Show us how to make the planner's estimates 2x more accurate\n(on average) than they are now, and I'll get excited too.\n\nBut forcing indexscan to be chosen over seqscan does not count as\nmaking it more accurate. (If you think it does, then you don't\nneed to be in this thread at all; set enable_seqscan = 0 and\nstop bugging us ;-))\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Apr 2002 01:40:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": "> I have bitched about the index stuff for a while, and always have\n> bumped up\n> against this problem. If I can sway anyone's opinion, I would say, unless\n> (using Tom's words) a \"factor of 2\" planner difference against, I\n> would use an\n> index. Rather than needing clear evidence to use an index, I\n> would say you need\n> clear evidence not too.\n\nI spend a lot of time answering questions on various database forums and I\nfind that the single thing that most newbies just cannot understand is that\na sequential scan is often a lot faster than an index scan. They just\ncannot comprehend that an index can be slower. Ever. For any query. That\nis not our problem...\n\nWhat we could offer tho, is more manual control over the planner. People\ncan do this to a mild extend by disabling sequential scans, but it looks\nlike it should be extended...\n\nChris\n\n", "msg_date": "Wed, 17 Apr 2002 13:42:34 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > That is the difference, in another post Tom said he could not get\n> > excited about 10.9 second execution time over a 7.96 execution\n> > time. Damn!!! I would. That is wrong.\n> \n> Sure. 
Show us how to make the planner's estimates 2x more accurate\n> (on average) than they are now, and I'll get excited too.\n> \n> But forcing indexscan to be chosen over seqscan does not count as\n> making it more accurate. (If you think it does, then you don't\n> need to be in this thread at all; set enable_seqscan = 0 and\n> stop bugging us ;-))\n\nOh, come on Tom, surely I have been around long enough to lend credence that\nwish to have a positive affect on PostgreSQL development.\n\nenable_seqscan=0, disallows sequential scan, that is not what I am saying. This\nis a problem I (and others) have been yapping about for a long time. \n\n(Please remember, I USE PostgreSQL, I have a vested interest in it being the\nbest RDBMS available.)\n\nI just think there is sufficient evidence to suggest that if a DBA creates an\nindex, there is strong evidence (better than statistics) that the index need be\nused. In the event that an index exists, there is a strong indication that,\nwithout overwhelming evidence, that the index should be used. You have admitted\nthat statistics suck, but the existence of an index must weight (heavily) on\nthe evaluation on whether or not to use an index.\n", "msg_date": "Wed, 17 Apr 2002 01:51:18 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> \n> > I have bitched about the index stuff for a while, and always have\n> > bumped up\n> > against this problem. If I can sway anyone's opinion, I would say, unless\n> > (using Tom's words) a \"factor of 2\" planner difference against, I\n> > would use an\n> > index. 
Rather than needing clear evidence to use an index, I\n> > would say you need\n> > clear evidence not too.\n> \n> I spend a lot of time answering questions on various database forums and I\n> find that the single thing that most newbies just cannot understand is that\n> a sequential scan is often a lot faster than an index scan. They just\n> cannot comprehend that an index can be slower. Ever. For any query. That\n> is not our problem...\n\nHere is the problem, in a single paragraph.\n\nIf the DBA notices that there is a problem with a query, he adds an index, he\nnotices that there is no difference, then he notices that PostgreSQL is not\nusing his index. First and foremost he gets mad at PostgreSQL for not using his\nindex. If PostgreSQL decided to use an index which increases execution time,\nthe DBA would delete the index. If PostgreSQL does not use an index, he has to\nmodify the posgresql.conf file, which disallows PostgreSQL from using an index\nwhen it would be a clear loser.\n\nMy assertion is this: \"If a DBA creates an index, he has a basis for his\nactions.\"\n", "msg_date": "Wed, 17 Apr 2002 02:00:24 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Wed, 2002-04-17 at 06:51, mlw wrote:\n> I just think there is sufficient evidence to suggest that if a DBA creates an\n> index, there is strong evidence (better than statistics) that the index need be\n> used. In the event that an index exists, there is a strong indication that,\n> without overwhelming evidence, that the index should be used. You have admitted\n> that statistics suck, but the existence of an index must weight (heavily) on\n> the evaluation on whether or not to use an index.\n\nBut indexes are not, for the most part, there because of a specific\nchoice to have an index, but as the implementation of PRIMARY KEY and\nUNIQUE. 
Therefore the main part of your argument fails.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"But as many as received him, to them gave he power to \n become the sons of God, even to them that believe on \n his name.\" John 1:12", "msg_date": "17 Apr 2002 07:07:20 +0100", "msg_from": "Oliver Elphick <olly@lfix.co.uk>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Oliver Elphick wrote:\n> \n> On Wed, 2002-04-17 at 06:51, mlw wrote:\n> > I just think there is sufficient evidence to suggest that if a DBA creates an\n> > index, there is strong evidence (better than statistics) that the index need be\n> > used. In the event that an index exists, there is a strong indication that,\n> > without overwhelming evidence, that the index should be used. You have admitted\n> > that statistics suck, but the existence of an index must weight (heavily) on\n> > the evaluation on whether or not to use an index.\n> \n> But indexes are not, for the most part, there because of a specific\n> choice to have an index, but as the implementation of PRIMARY KEY and\n> UNIQUE. Therefore the main part of your argument fails.\n\nLet's talk about the primary key, that will not exhibit the borderline behavior\nthat we see.
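For the unique-key case the plan is the easy one; a sketch (the table and cost numbers are invented, the plan shape is the point):

```
db=# EXPLAIN SELECT * FROM customer WHERE id = 42;
NOTICE:  QUERY PLAN:

Index Scan using customer_pkey on customer  (cost=0.00..4.82 rows=1 width=36)
```

At most one row can match, so the index wins on anything but a tiny table.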
I have had first hand experience (and frustration) with PostgreSQL's\nchoice of whether to use an index.\n\nThe primary key and UNIQUE constraint will only exhibit reduced performance on\nREALLY small tables, in which case the reduced performance is minimal if not\nnonexistent.\n", "msg_date": "Wed, 17 Apr 2002 02:15:46 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Wed, 2002-04-17 at 11:00, mlw wrote:\n>\n> Here is the problem, in a single paragraph.\n> \n> If the DBA notices that there is a problem with a query, he adds an index, he\n> notices that there is no difference, then he notices that PostgreSQL is not\n> using his index. First and foremost he gets mad at PostgreSQL for not using his\n> index.\n\nPerhaps a notice from the backend:\n\nNOTICE: I see the DBA has created a useless index ...\n\n;) \n\nOr would this make the DBA even madder ;) ;)\n\n> If PostgreSQL decided to use an index which increases execution time,\n> the DBA would delete the index. If PostgreSQL does not use an index, he has to\n> modify the postgresql.conf file,\n\nOr just do\n\nset enable_seqscan to off;\n\n> which disallows PostgreSQL from using an index when it would be a clear loser.\n> \n> My assertion is this: \"If a DBA creates an index, he has a basis for his\n> actions.\"\n\nThe basis can be that \"his boss told him to\" ?\n\n------------------\nHannu\n", "msg_date": "17 Apr 2002 11:36:09 +0500", "msg_from": "Hannu Krosing <hannu@krosing.net>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "> If the DBA notices that there is a problem with a query, he adds\n> an index, he\n> notices that there is no difference, then he notices that\n> PostgreSQL is not\n> using his index. First and foremost he gets mad at PostgreSQL for\n> not using his\n> index.
If PostgreSQL decided to use an index which increases\n> execution time,\n> the DBA would delete the index. If PostgreSQL does not use an\n> index, he has to\n> modify the postgresql.conf file, which then disallows PostgreSQL\n> from using a sequential scan\n> even when the index would be a clear loser.\n>\n> My assertion is this: \"If a DBA creates an index, he has a basis for his\n> actions.\"\n\nWhat about a GUC parameter\n\nprefer_indexes = yes/no\n\nWhich, when set to yes, assumes the DBA knows what he's doing. Unless the\ntable is really small, in which case it'll still scan.\n\nBut then again, if the DBA sets up a huge table (million rows) and does a\nselect over an indexed field that will return 1/6 of all the rows, then\npostgres would be nuts to use the index...\n\nBut then if the DBA does a query to return just 1 of the rows, postgres\nwould be nuts NOT to use the index. How do you handle this situation?\n\nChris\n\n", "msg_date": "Wed, 17 Apr 2002 14:53:48 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "\n\n> > On Wed, 2002-04-17 at 06:51, mlw wrote:\n> > > I just think there is sufficient evidence to suggest that if a DBA\ncreates an\n> > > index, there is strong evidence (better than statistics) that the\nindex need be\n> > > used. In the event that an index exists, there is a strong indication\nthat,\n> > > without overwhelming evidence, that the index should be used. You have\nadmitted\n> > > that statistics suck, but the existence of an index must weight\n(heavily) on\n> > > the evaluation on whether or not to use an index.\n\nIn my own limited experience, I think this could be solved by decreasing\nrandom_page_cost: if you would prefer indexes to seq scans,\nyou can lower random_page_cost to a point at which postgres works as you\nwant.
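For example (the shipped default is 4.0; 2.0 is only an illustrative value, the right number depends on your disks and cache):

```sql
-- per-session experiment:
SET random_page_cost = 2.0;

-- or cluster-wide, as a line in postgresql.conf:
-- random_page_cost = 2.0
```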
So the planner would prefer indexes in cases where, under the default\nsettings, it would prefer seq scans.\n\nRegards\n\n", "msg_date": "Wed, 17 Apr 2002 09:30:08 +0200", "msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "My opinion.\n\nExpose some of the cost factors via run-time settings (or start-time \nsettings).\n\nThis would allow those who wanted to 'tweak' the planner to do so and \nthose that felt the defaults were fine or didn't know to leave them alone.\n\nComments?\n\n\n\n", "msg_date": "Wed, 17 Apr 2002 02:17:50 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "\n\nOliver Elphick wrote:\n\n>On Wed, 2002-04-17 at 06:51, mlw wrote:\n>\n>>I just think there is sufficient evidence to suggest that if a DBA creates an\n>>index, there is strong evidence (better than statistics) that the index need be\n>>used. In the event that an index exists, there is a strong indication that,\n>>without overwhelming evidence, that the index should be used. You have admitted\n>>that statistics suck, but the existence of an index must weight (heavily) on\n>>the evaluation on whether or not to use an index.\n>>\n>\n>But indexes are not, for the most part, there because of a specific\n>choice to have an index, but as the implementation of PRIMARY KEY and\n>UNIQUE. Therefore the main part of your argument fails.\n>\nThat is not my experience. Fully 3/4 of the indices in PeopleSoft, \nSAP, and Clarify (on top of Oracle 8 and 8i backends) are there solely \nfor performance reasons; the remaining 1/4 are there because of \nuniqueness and primary key responsibilities.\n\nIn many of the cases where it is a primary key it is also there to \nensure fast lookups when referenced as a foreign key.
Or for joins.\n\n", "msg_date": "Wed, 17 Apr 2002 02:44:46 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "> Here is the problem, in a single paragraph.\n> \n> If the DBA notices that there is a problem with a query, he adds an index, he\n> notices that there is no difference, then he notices that PostgreSQL is not\n> using his index. First and foremost he gets mad at PostgreSQL for not using his\n> index. If PostgreSQL decided to use an index which increases execution time,\n> the DBA would delete the index. If PostgreSQL does not use an index, he has to\n> modify the postgresql.conf file, which then disallows PostgreSQL from using a\n> sequential scan even when the index would be a clear loser.\n> \n> My assertion is this: \"If a DBA creates an index, he has a basis for his\n> actions.\"\n\nI agree with Mark.\n\nI am jumping into this thread to ask some questions:\n\n1) When a DBA creates an index, it is mainly to optimize. But when an\nindex is created, we need to run a VACUUM ANALYZE in order to give the PG\noptimizer (totally guessed, Tom may correct this assertion?) knowledge\nof it.
My 1st question is: couldn't we create a kind of trigger to run\nan automatic vacuum analyze on the table when a new index is created\non it?\n\nHere is an example from a practical optimisation day (taken from an\noptimisation journal I keep every time I need to optimise a\ncustomer's database):\n\n* Line 962\n\nEXPLAIN SELECT t12_bskid, t12_pnb, t12_lne, t12_tck\nFROM T12_20011231\nWHERE t12_bskid >= 1\nORDER BY t12_bskid, t12_pnb, t12_tck, t12_lne;\n\nSort (cost=1348.70..1348.70 rows=8565 width=16)\n -> Seq Scan on t12_20011231 (cost=0.00..789.20 rows=8565 width=16)\n\n\ndbkslight=# create index t12_bskid_pnb_tck_lne on t12_20011231\n(t12_bskid, t12_pnb, t12_tck, t12_lne);\nCREATE\ndbkslight=# EXPLAIN SELECT t12_bskid, t12_pnb, t12_lne, t12_tck\ndbkslight-# FROM T12_20011231\ndbkslight-# WHERE t12_bskid >= 1\ndbkslight-# ORDER BY t12_bskid, t12_pnb, t12_tck, t12_lne;\n\nNOTICE: QUERY PLAN:\n\nSort (cost=1348.70..1348.70 rows=8565 width=16)\n -> Seq Scan on t12_20011231 (cost=0.00..789.20 rows=8565 width=16)\n\nEXPLAIN\n\ndbkslight=# vacuum analyze t12_20011231;\nVACUUM\n\ndbkslight=# EXPLAIN SELECT t12_bskid, t12_pnb, t12_lne, t12_tck\nFROM T12_20011231\nWHERE t12_bskid >= 1\nORDER BY t12_bskid, t12_pnb, t12_tck, t12_lne;\nNOTICE: QUERY PLAN:\n\nIndex Scan using t12_bskid_pnb_tck_lne on t12_20011231\n(cost=0.00..2232.11 rows=25693 width=16)\n\n;-))\n\n* end of example............\n\n2) We all know that indices on small tables have to be dropped, because\nthe seq scan is always cheaper. I wonder about medium-sized tables that\nare often accessed: their data are mainly in the PG buffers, so a seq scan\nof a table whose data pages are in the buffers is always cheaper too :) \n\nThen, depending on the memory allowed to PG, can we say indices on medium\ntables also have to be dropped? I think so, because index maintenance has\na cost too.\n\n3) I have to say that queries sometimes have to be rewritten.
It is very\nwell explained in the \"PostgreSQL Developer's Handbook\" I have at home...\n(at work at the moment, will post complete references later, but surely\nyou can find this book at techdocs.postgresql.org ..).\n\nI have experienced it myself many times: joins have to be rewritten...\nThis is really true for outer joins (LEFT/RIGHT join). And it has to be\ntested with explain plans.\n\nHope this helps.\n\nRegards,\n\n\n-- \nJean-Paul ARGUDO IDEALX S.A.S\nConsultant bases de données 15-17, av. de Ségur\nhttp://www.idealx.com F-75007 PARIS\n", "msg_date": "Wed, 17 Apr 2002 12:15:03 +0200", "msg_from": "Jean-Paul ARGUDO <jean-paul.argudo@idealx.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Wed, 17 Apr 2002, Bruce Momjian wrote:\n\n> \n> Let me add that people's expectations of the optimizer and the \"it isn't using\n> the index\" questions are getting very old. I beefed up the FAQ\n> item on this a month ago, but that hasn't reduced the number of\n> questions. I almost want to require people to read a specific FAQ item\n> 4.8 before we will reply to anything.\n> \n> Maybe we need an optimizer FAQ that will answer the basic questions for\n> people.\n\nPerhaps you could also include some kind of (possibly nonsensical) keyword\nlike \"flerbage\" in the faq, asking the user to include it in his question\nto prove he has actually read the relevant section ?\n\nJust my 0,02 Euro\n\nCheers,\nTycho\n\n-- \nTycho Fruru\t\t\ttycho.fruru@conostix.com\n\"Prediction is extremely difficult.
Especially about the future.\"\n - Niels Bohr\n\n", "msg_date": "Wed, 17 Apr 2002 12:55:23 +0200 (CEST)", "msg_from": "tycho@fruru.com", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "...\n> Perhaps you could also include some kind of (possibly nonsensical) keyword\n> like \"flerbage\" in the faq, asking the user to include it in his question\n> to prove he has actually read the relevant section ?\n\n*rofl*\n\nBut now we'll have to choose some other word.\n\n - Thomas\n", "msg_date": "Wed, 17 Apr 2002 06:52:02 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Wed, Apr 17, 2002 at 12:44:24AM -0400, Tom Lane wrote:\n> Louis-David Mitterrand <vindex@apartia.org> writes:\n> > While trying to optimise a query I found that running VACUUM ANALYSE\n> > changed all the Index Scans to Seq Scans and that the only way to revert\n> > to Index Scans was to add \"enable_seqscan = 0\" in postgresql.conf.\n> >> \n> >> EXPLAIN ANALYZE output would be more interesting than just EXPLAIN.\n> >> Also, what does the pg_stats view show for these tables?\n> \n> > Thanks, pg_stats output is rather big so I attached it in a separate\n> > file. Here are the EXPLAIN ANALYZE outputs:\n> \n> Tell you the truth, I'm having a real hard time getting excited over\n> a bug report that says the planner chose a plan taking 10.90 seconds\n> in preference to one taking 7.96 seconds.\n\nNow using a reduced test query I have a huge difference in runtime\n(2317ms vs 4ms) on two almost identical queries.
In both cases the\nwhere clause uses the same table and pattern, however the slower query's\nwhere-clause table appears at the end of the join:\n\ngesci5=# explain analyse select p1.titre, p1.nom, p1.prenom, p2.titre, p2.nom, p2.prenom from personne p1 join prospect p on (p.id_personne1 = p1.id_personne) join personne p2 on (p.id_personne2 = p2.id_personne) join contact cn on (p.dernier_contact = cn.id_contact) where lower(p2.nom) like 'marl%' order by date_contact desc;\nNOTICE: QUERY PLAN:\n\nSort (cost=5944.88..5944.88 rows=137 width=82) (actual time=2317.45..2317.45 rows=5 loops=1)\n -> Nested Loop (cost=2168.52..5940.00 rows=137 width=82) (actual time=1061.58..2317.28 rows=5 loops=1)\n -> Hash Join (cost=2168.52..5208.38 rows=137 width=70) (actual time=1061.23..2316.01 rows=5 loops=1)\n -> Hash Join (cost=1406.52..4238.55 rows=27482 width=41) (actual time=355.60..2267.44 rows=27250 loops=1)\n -> Seq Scan on personne p1 (cost=0.00..1102.00 rows=44100 width=29) (actual time=0.12..303.42 rows=44100 loops=1)\n -> Hash (cost=1216.82..1216.82 rows=27482 width=12) (actual time=354.64..354.64 rows=0 loops=1)\n -> Seq Scan on prospect p (cost=0.00..1216.82 rows=27482 width=12) (actual time=0.11..257.51 rows=27482 loops=1)\n -> Hash (cost=761.45..761.45 rows=220 width=29) (actual time=0.33..0.33 rows=0 loops=1)\n -> Index Scan using personne_nom on personne p2 (cost=0.00..761.45 rows=220 width=29) (actual time=0.07..0.29 rows=16 loops=1)\n -> Index Scan using contact_pkey on contact cn (cost=0.00..5.31 rows=1 width=12) (actual time=0.22..0.23 rows=1 loops=5)\nTotal runtime: 2317.77 msec\n\nEXPLAIN\n\ngesci5=# explain analyse select p1.titre, p1.nom, p1.prenom, p2.titre, p2.nom, p2.prenom from personne p1 join prospect p on (p.id_personne1 = p1.id_personne) join personne p2 on (p.id_personne2 = p2.id_personne) join contact cn on (p.dernier_contact = cn.id_contact) where lower(p1.nom) like 'marl%' order by date_contact desc;\nNOTICE: QUERY PLAN:\n\nSort 
(cost=3446.49..3446.49 rows=137 width=82) (actual time=3.85..3.85 rows=5 loops=1)\n -> Nested Loop (cost=0.00..3441.61 rows=137 width=82) (actual time=1.86..3.55 rows=5 loops=1)\n -> Nested Loop (cost=0.00..2709.99 rows=137 width=70) (actual time=1.81..3.32 rows=5 loops=1)\n -> Nested Loop (cost=0.00..2018.40 rows=137 width=41) (actual time=0.58..2.41 rows=10 loops=1)\n -> Index Scan using personne_nom on personne p1 (cost=0.00..761.45 rows=220 width=29) (actual time=0.30..0.55 rows=16 loops=1)\n -> Index Scan using prospect_personne1 on prospect p (cost=0.00..5.69 rows=1 width=12) (actual time=0.10..0.11 rows=1 loops=16)\n -> Index Scan using personne_pkey on personne p2 (cost=0.00..5.02 rows=1 width=29) (actual time=0.08..0.08 rows=0 loops=10)\n -> Index Scan using contact_pkey on contact cn (cost=0.00..5.31 rows=1 width=12) (actual time=0.03..0.03 rows=1 loops=5)\nTotal runtime: 4.17 msec\n\n\n-- \n PHEDRE: Il n'est plus temps. Il sait mes ardeurs insensées.\n De l'austère pudeur les bornes sont passées.\n (Phèdre, J-B Racine, acte 3, scène 1)\n", "msg_date": "Wed, 17 Apr 2002 15:52:07 +0200", "msg_from": "Louis-David Mitterrand <vindex@apartia.org>", "msg_from_op": true, "msg_subject": "huge runtime difference between 2 almost identical queries (was: Re:\n\tIndex Scans become Seq Scans after VACUUM ANALYSE)" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> If the DBA notices that there is a problem with a query, he adds an\n> index, he notices that there is no difference, then he notices that\n> PostgreSQL is not using his index. First and foremost he gets mad at\n> PostgreSQL for not using his index. If PostgreSQL decided to use an\n> index which increases execution time, the DBA would delete the\n> index.\n\nI don't buy that argument at all. It might be a unique index that he\nmust have in place for data integrity reasons.
It might be an index\nthat he needs for a *different* query.\n\nIf the table has more than one index available that might be usable\nwith a particular query, how does your argument help? It doesn't.\nWe still have to trust to statistics and cost estimates. So I intend\nto proceed on the path of improving the estimator, not in the direction\nof throwing it out in favor of rules-of-thumb.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Apr 2002 09:55:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": "\n\nTom Lane wrote:\n\n>mlw <markw@mohawksoft.com> writes:\n>\n>>If the DBA notices that there is a problem with a query, he adds an\n>>index, he notices that there is no difference, then he notices that\n>>PostgreSQL is not using his index. First and foremost he gets mad at\n>>PostgreSQL for not using his index. If PostgreSQL decided to use an\n>>index which increases execution time, the DBA would delete the\n>>index.\n>>\n>\n>I don't buy that argument at all. It might be a unique index that he\n>must have in place for data integrity reasons. It might be an index\n>that he needs for a *different* query.\n>\n>If the table has more than one index available that might be usable\n>with a particular query, how does your argument help? It doesn't.\n>We still have to trust to statistics and cost estimates. So I intend\n>to proceed on the path of improving the estimator, not in the direction\n>of throwing it out in favor of rules-of-thumb.\n>\nOn this point I fully agree. A cost-estimator helps massively in cases \nwhere there are multiple candidate indices. My only current complaint \nis that the current coster needs either a little optimisation work, or \nperhaps some of its internals (like random_page_cost) exposed in a \nmalleable way.\n\nWhat is valid for one application will not (necessarily) be valid for \nanother application, or set of applications.
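Most of those internals can in fact already be flexed per session; a sketch (the values are purely illustrative, not recommendations):

```sql
-- an OLTP-ish session, biased toward index probes:
SET random_page_cost = 2.0;

-- a reporting session on a cold cache, biased toward sequential reads:
SET random_page_cost = 8.0;
SET effective_cache_size = 1000;  -- measured in 8k disk pages
```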
OLTP has entirely \ndifferent goals from Data Warehousing. Throwing out the coster would \nbe throwing the baby out with the bathwater. The coster is *not* satan. \n It may be the root of a few evils though ]:>\n\n", "msg_date": "Wed, 17 Apr 2002 07:08:27 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "OK so maybe I'm on some crack, but looking at the docs most of the \nplanner's internal cost estimator constants are exposed... It seems that \na few short FAQ entries may just be in order to point out that \nyou can flex these values a little to get the desired results....\n\nI guess I should RTFM more :)\n\n", "msg_date": "Wed, 17 Apr 2002 07:11:47 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "> Oh, come on Tom, surely I have been around long enough to lend credence to my\n> wish to have a positive effect on PostgreSQL development.\n\n:) Tom does have a way with words sometimes, eh?\n\n> enable_seqscan=0 disallows sequential scans; that is not what I am saying. This\n> is a problem I (and others) have been yapping about for a long time.\n> I just think there is sufficient evidence to suggest that if a DBA creates an\n> index, there is strong evidence (better than statistics) that the index need be\n> used. In the event that an index exists, there is a strong indication that,\n> without overwhelming evidence, the index should be used. You have admitted\n> that statistics suck, but the existence of an index must weigh (heavily) on\n> the evaluation of whether or not to use an index.\n\nTom is a mathematician by training, and is trying to balance the\noptimizer decisions right on the transition between best and next-best\npossibility.
Biasing it to one decision or another when all of his test\ncases clearly show the *other* choice would be better puts it in the\nrealm of an arbitrary choice *not* supported by the data!\n\nafaict there are *two* areas which might benefit from analysis or\nadjustments, but they have to be supported by real test cases.\n\n1) the cost estimates used for each property of the data and storage.\n\n2) the statistical sampling done on actual data during analysis.\n\nThe cost numbers have been tested on what is hopefully a representative\nset of machines. It may be possible to have a test suite which would\nallow folks to run the same data on many different platforms, and to\ncontribute other test cases for consideration. Perhaps you would like to\norganize a test suite? Tom, do you already have cases you would like to\nsee in this test?\n\nThe statistical sampling (using somewhere between 1 and 10% of the data\nafaicr) *could* get fooled by pathological storage topologies. So for\ncases which seem to have reached the \"wrong\" conclusion we should show\n*what* values for the above would have arrived at the correct result\n*without* biasing other potential results in the wrong direction.\n\nSystems which have optimizing planners can *never* be guaranteed to\ngenerate the actual lowest-cost query plan. Any impression that Oracle,\nfor example, actually does do that may come from a lack of visibility\ninto the process, and a lack of forum for discussing these edge cases.\n\nPlease don't take Tom's claim that he doesn't get excited about wrong\nplanner choices which are not too wrong as an indication that he isn't\ninterested. The point is that for *edge cases* the \"correct answer\" can\nnever be known until the query is actually run two different ways. And\nthe planner is *never* allowed to do that.
So tuning the optimizer over\ntime is the only way to improve things, and with edge cases a factor of\ntwo in timing is, statistically, an indication that the results are\nclose to optimal.\n\nWe rarely get reports that the planner made the best choice for a plan,\nbut of course people usually don't consider optimal performance to be a\nreportable problem ;)\n\n - Thomas\n", "msg_date": "Wed, 17 Apr 2002 07:15:26 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "\n\nThomas Lockhart wrote:\n\n>\n>Systems which have optimizing planners can *never* be guaranteed to\n>generate the actual lowest-cost query plan. Any impression that Oracle,\n>for example, actually does do that may come from a lack of visibility\n>into the process, and a lack of forum for discussing these edge cases.\n>\nI wholly agree... Oracle has some fairly *sick* ideas at times about \nwhat to do in the face of partial ambiguity. (I've got a small set of \nqueries that will drive any machine with PeopleSoft DBs loaded to near \ncatatonia...) :)\n\nAs far as the 'planner benchmark suite' goes, so we can start gathering \nmore statistical data about what the cost constants should be, that's an \nexcellent idea.\n", "msg_date": "Wed, 17 Apr 2002 07:24:04 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Thomas Lockhart wrote:\n> Systems which have optimizing planners can *never* be guaranteed to\n> generate the actual lowest-cost query plan. Any impression that Oracle,\n> for example, actually does do that may come from a lack of visibility\n> into the process, and a lack of forum for discussing these edge cases.\n\nAnd herein lies the crux of the problem. It isn't a purely logical/numerical\nformula. It is a probability estimate, nothing more.
Currently, the statistics\nare used to calculate a probable best query, not a guaranteed best query. The\npresence of an index should be factored into the probability of a best query,\nshould it not?\n", "msg_date": "Wed, 17 Apr 2002 10:31:21 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "...\n> I have experienced it myself many times: joins have to be rewritten...\n> This is really true for outer joins (LEFT/RIGHT join). And it has to be\n> tested with explain plans.\n\nIt is particularly true for \"join syntax\" as used in outer joins because\n*that bypasses the optimizer entirely*!!! I'd like to see that changed,\nsince the choice of syntax should have no effect on performance. And\nalthough the optimizer can adjust query plans to choose the best one\n(most of the time anyway ;) there is likely to be *only one correct way\nto write a query using join syntax*. *Every* other choice of query will\nbe wrong, from a performance standpoint.\n\nThat is a bigger \"foot gun\" than the other things we are talking about,\nimho.\n\n - Thomas\n\nfoot gun (n.)
A tool built solely for shooting oneself or another person\nin the foot.\n", "msg_date": "Wed, 17 Apr 2002 07:34:57 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Louis-David Mitterrand <vindex@apartia.org> writes:\n> gesci5=# explain analyse select p1.titre, p1.nom, p1.prenom, p2.titre, p2.nom, p2.prenom from personne p1 join prospect p on (p.id_personne1 = p1.id_personne) join personne p2 on (p.id_personne2 = p2.id_personne) join contact cn on (p.dernier_contact = cn.id_contact) where lower(p2.nom) like 'marl%' order by date_contact desc;\n\n> gesci5=# explain analyse select p1.titre, p1.nom, p1.prenom, p2.titre, p2.nom, p2.prenom from personne p1 join prospect p on (p.id_personne1 = p1.id_personne) join personne p2 on (p.id_personne2 = p2.id_personne) join contact cn on (p.dernier_contact = cn.id_contact) where lower(p1.nom) like 'marl%' order by date_contact desc;\n\nBut these aren't at *all* the same query --- the useful constraint is on\np2 in the first case, and p1 in the second. Given the way you've\nwritten the join, the constraint on p2 can't be applied until after\nthe p1/p join is formed --- see \nhttp://www.ca.postgresql.org/users-lounge/docs/7.2/postgres/explicit-joins.html\n\nI've always thought of the follow-the-join-structure rule as a stopgap\nmeasure until we think of something better; it's not intuitive that\nwriting queries using INNER JOIN/ON isn't equivalent to writing FROM/WHERE.\nOn the other hand it provides a useful \"out\" for those people who are\njoining umpteen tables and need to short-circuit the planner's search\nheuristics. 
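In other words, these two spellings ask for the same rows, but only the first leaves the planner free to choose the join order (table names are invented):

```sql
-- planner may join a, b, c in whatever order looks cheapest:
SELECT * FROM a, b, c
 WHERE a.id = b.a_id AND b.id = c.b_id;

-- planner is forced to join a to b first, then the result to c:
SELECT * FROM a JOIN b ON a.id = b.a_id
                JOIN c ON b.id = c.b_id;
```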
If I take it out, I'll get beat up by the same camp that\nthinks they should be able to override the planner's ideas about whether\nto use an index ;-)\n\nThe EXPLAINs also remind me that we don't currently have any statistics\nthat can be applied for clauses like \"lower(p2.nom) like 'marl%'\".\nWe've talked in the past about having the system gather and use stats\non the values of functional indexes --- for example, if you have an\nindex on lower(p2.nom) then this would allow a rational estimate to be\nmade about the selectivity of \"lower(p2.nom) like 'marl%'\". But I\nhaven't had any time to pursue it myself. Anyway it doesn't appear\nthat that's causing a bad choice of plan in this case.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Apr 2002 10:38:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: huge runtime difference between 2 almost identical queries (was:\n\tRe: Index Scans become Seq Scans after VACUUM ANALYSE)" }, { "msg_contents": "> > Systems which have optimizing planners can *never* be guaranteed to\n> > generate the actual lowest-cost query plan. Any impression that Oracle,\n> > for example, actually does do that may come from a lack of visibility\n> > into the process, and a lack of forum for discussing these edge cases.\n> And here in lies the crux of the problem. It isn't a purely logical/numerical\n> formula. It is a probability estimate, nothing more. Currently, the statistics\n> are used to calculate a probable best query, not a guaranteed best query. The\n> presence of an index should be factored into the probability of a best query,\n> should it not?\n\nWell, it is already. 
I *think* what you are saying is that the numbers\nshould be adjusted to bias the choice toward an index; that *choosing*\nthe index even if the statistics (and hence the average result) will\nproduce a slower query is preferred to trying to choose the lowest cost\nplan.\n\nafaict we could benefit from more test cases run on more machines.\nPerhaps we could also benefit from being able to (easily) run multiple\nversions of plans, so folks can see whether the system is actually\nchoosing the correct one. But until we get better coverage of more test\ncases on more platforms, adjusting the planner based on a small number\nof \"problem queries\" is likely to lead to \"problem queries\" which\nweren't problems before!\n\nThat is why Tom gets excited about \"factor of 10 problems\", but not\nabout factors of two. Because he knows that there are lots of queries\nwhich happen to fall on the other side of the fence, misestimating the\ncosts by a factor of two *in the other direction*, which you will not\nnotice because that happens to choose the correct plan anyway.\n\n - Thomas\n", "msg_date": "Wed, 17 Apr 2002 07:55:36 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "> Systems which have optimizing planners can *never* be guaranteed to\n> generate the actual lowest-cost query plan. Any impression that Oracle,\n> for example, actually does do that may come from a lack of visibility\n> into the process, and a lack of forum for discussing these edge cases.\n\nHmmm...with PREPARE and EXECUTE, would it be possible to somehow get the\nplanner to actually run a few different selects and then actually store the\nactual fastest plan? 
I'm being very fanciful here, of course...\n\nChris\n\n\n", "msg_date": "Wed, 17 Apr 2002 22:56:20 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Tom Lane wrote:\n> If the table has more than one index available that might be usable\n> with a particular query, how does your argument help? It doesn't.\n> We still have to trust to statistics and cost estimates. So I intend\n> to proceed on the path of improving the estimator, not in the direction\n> of throwing it out in favor of rules-of-thumb.\n\nI'm not saying ignore the statistics. The cost analyzer is trying to create a\ngood query based on the information about the table. Since the statistics are a\nsummation of table characteristics, they will never be 100% accurate.\n\nComplex systems have behaviors which are far more unpredictable in practice;\nnumbers alone will not predict behavior. Theoretically they can, of course, but\nthe amount and complexity of the information which would need to be processed\nto make a good prediction would be prohibitive. Think about the work being done\nin weather prediction. \"Rules-of-thumb\" are quite important. PostgreSQL already\nhas a number of them; what do you think cpu_tuple_cost and random_page_cost\nare?\n\nHow about a configuration option?
Something like an index_weight ratio.\n\nA setting of 1.0 would tell the optimizer that if the index and the non-index\nlookup are the same, it would use the index.\n\nA setting of 2.0 Would tell the optimizer that the index cost would need to be\ntwice that of the non-index lookup to avoid using the index.\n\nHow about that?\n", "msg_date": "Wed, 17 Apr 2002 10:57:17 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "At 07:15 AM 4/17/02 -0700, Thomas Lockhart wrote:\n\n>Tom is a mathematician by training, and is trying to balance the\n>optimizer decisions right on the transition between best and next-best\n>possibility. Biasing it to one decision or another when all of his test\n>cases clearly show the *other* choice would be better puts it in the\n>realm of an arbitrary choice *not* supported by the data!\n\nI do agree with Mark that most cases an index is added to increase \nperformance, if the index is used but doesn't improve performance DBAs will \ndrop them to improve insert/update performance (or they will leave it for \nwhen the table grows big). Thus the bias should be towards using the index \n(which may already be the case for most situations).\n\nMy guess on why one hears many complaints about Postgresql not using the \nindex is because when things work fine you don't hear complaints :). 
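[Editor's sketch] The index_weight ratio proposed above can be pictured as a tiny tie-breaking rule. No such knob exists in the planner; the function name, parameter, and numbers here are purely hypothetical illustrations of the idea:

```python
def choose_plan(index_cost, seqscan_cost, index_weight=1.0):
    """Hypothetical knob: the estimated index-scan cost must exceed
    index_weight times the seq-scan cost before the index is rejected."""
    if index_cost <= index_weight * seqscan_cost:
        return "indexscan"
    return "seqscan"

# index_weight = 1.0: when the estimates are equal, take the index.
print(choose_plan(100.0, 100.0))                    # -> indexscan
# index_weight = 2.0: tolerate an index scan costed up to twice the seq scan.
print(choose_plan(180.0, 100.0, index_weight=2.0))  # -> indexscan
print(choose_plan(250.0, 100.0, index_weight=2.0))  # -> seqscan
```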
I also \nsuspect when Postgresql wrongly uses the index instead of sequential scans \nnot as many people bother dropping the index to test for a performance \nincrease.\n\nBut it may well be that the cost of wrongly using the index is typically \nnot as high as wrongly doing a sequential scan, and that is why people \ndon't get desperate enough to drop the index and grumble about it.\n\nWeighing these factors, perhaps once we get one or two complaining about \npostgresql using an index vs 20 complaining about not using an index, then \nthe optimizer values have reached a good compromise :). But maybe the ratio \nshould be 1 vs 100?\n\nWhat do you think? ;).\n\nCheerio,\nLink.\n\n", "msg_date": "Wed, 17 Apr 2002 22:59:48 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "...\n> Weighing these factors, perhaps once we get one or two complaining about\n> postgresql using an index vs 20 complaining about not using an index, then\n> the optimizer values have reached a good compromise :). But maybe the ratio\n> should be 1 vs 100?\n\n:)\n\nSo we should work on collecting those statistics, rather than statistics\non data. What do you think Tom; should we work on a \"mailing list based\nplanner\" which adjusts numbers from, say, a web site? 
That is just too\nfunny :)))\n\n - Thomas\n", "msg_date": "Wed, 17 Apr 2002 08:01:06 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Y'all are having entirely too much fun with this :P\n\nThough the headling ...\n'PostgreSQL with its proprietary bitch-rant-rating query planner storms \nthe DB front.'\n\ndoes have a certain...entertainment value.\n\nThomas Lockhart wrote:\n\n>...\n>\n>>Weighing these factors, perhaps once we get one or two complaining about\n>>postgresql using an index vs 20 complaining about not using an index, then\n>>the optimizer values have reached a good compromise :). But maybe the ratio\n>>should be 1 vs 100?\n>>\n>\n>:)\n>\n>So we should work on collecting those statistics, rather than statistics\n>on data. What do you think Tom; should we work on a \"mailing list based\n>planner\" which adjusts numbers from, say, a web site? That is just too\n>funny :)))\n>\n> - Thomas\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n\n", "msg_date": "Wed, 17 Apr 2002 08:11:18 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es> writes:\n> On my own few experience I think this could be solved decreasing\n> random_page_cost, if you would prefer to use indexes than seq scans, then\n> you can lower random_page_cost to a point in which postgres works as you\n> want. So the planner would prefer indexes when in standard conditions it\n> would prefer seq scans.\n\nIt's entirely possible that the default value of random_page_cost is too\nhigh, at least for many modern machines. 
The experiments I did to get\nthe 4.0 figure were done a couple years ago, on hardware that wasn't\nexactly new at the time. I have not heard of anyone else trying to\nmeasure it though.\n\nI don't think I have the source code I used anymore, but the principle\nis simple enough:\n\n1. Make a large file (several times the size of your machine's RAM, to\nensure you swamp out kernel disk buffering effects). Fill with random\ndata. (NB: do not fill with zeroes, some filesystems optimize this away.)\n\n2. Time reading the file sequentially, 8K per read request.\nRepeat enough to get a statistically trustworthy number.\n\n3. Time reading randomly-chosen 8K pages from the file. Repeat\nenough to get a trustworthy number (the total volume of pages read\nshould be several times the size of your RAM).\n\n4. Divide.\n\nThe only tricky thing about this is making sure you are measuring disk\naccess times and not being fooled by re-accessing pages the kernel still\nhas cached from a previous access. (The PG planner does try to account\nfor caching effects, but that's a separate estimate; the value of\nrandom_page_cost isn't supposed to include caching effects.) 
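[Editor's sketch] The four-step recipe can be written up as a short standalone script. This is an illustrative reconstruction, not the original code; the file size here is far too small for a real measurement, so scale NBLOCKS to several times your RAM before trusting the printed ratio:

```python
import os
import random
import time

BLOCK = 8192          # 8K per read, matching PostgreSQL's page size
NBLOCKS = 4096        # 32MB here for illustration only; use several times RAM
PATH = "seqrand.dat"  # hypothetical scratch file name

# 1. Fill a scratch file with random data (not zeroes, which some
#    filesystems optimize away).
with open(PATH, "wb") as f:
    for _ in range(NBLOCKS):
        f.write(os.urandom(BLOCK))

# 2. Time reading the file sequentially, 8K per read request.
fd = os.open(PATH, os.O_RDONLY)
t0 = time.perf_counter()
while os.read(fd, BLOCK):
    pass
seq = time.perf_counter() - t0

# 3. Time reading randomly chosen 8K pages from the same file.
t0 = time.perf_counter()
for _ in range(NBLOCKS):
    os.lseek(fd, random.randrange(NBLOCKS) * BLOCK, os.SEEK_SET)
    os.read(fd, BLOCK)
rand = time.perf_counter() - t0
os.close(fd)
os.unlink(PATH)

# 4. Divide: cost of a random page fetch relative to a sequential fetch.
print("random_page_cost estimate: %.2f" % (rand / seq))
```

With a file this small the kernel caches everything and the ratio is meaningless, which is exactly the caveat about cache effects above.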
AFAIK the\nonly good way to do that is to use a large test, which means it takes\nawhile to run; and you need enough spare disk space for a big test file.\n\nIt'd be interesting to get some numbers for this across a range of\nhardware, filesystems, etc ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Apr 2002 11:16:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": "From: \n mlw <markw@mohawksoft.com>\n \n11:05\n\n Subject: \n Re: Index Scans become Seq Scans after VACUUM ANALYSE\n To: \n Thomas Lockhart <thomas@fourpalms.org>\n\n\n\n\nThomas Lockhart wrote:\n> \n> ...\n> > Weighing these factors, perhaps once we get one or two complaining about\n> > postgresql using an index vs 20 complaining about not using an index, then\n> > the optimizer values have reached a good compromise :). But maybe the ratio\n> > should be 1 vs 100?\n> \n> :)\n> \n> So we should work on collecting those statistics, rather than statistics\n> on data. What do you think Tom; should we work on a \"mailing list based\n> planner\" which adjusts numbers from, say, a web site? That is just too\n> funny :)))\n\nNo, you miss the point!\n\nOn borderline conditions, wrongly using an index does not result in as bad\nperformance as wrongly not using an index, thus usage of an index should be\nweighted higher because the risk of not using the index out weighs the risk of\nusing it.\n", "msg_date": "Wed, 17 Apr 2002 11:22:37 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Wed, Apr 17, 2002 at 10:38:15AM -0400, Tom Lane wrote:\n> \n> But these aren't at *all* the same query --- the useful constraint is on\n> p2 in the first case, and p1 in the second. 
Given the way you've\n> written the join, the constraint on p2 can't be applied until after\n> the p1/p join is formed --- see \n> http://www.ca.postgresql.org/users-lounge/docs/7.2/postgres/explicit-joins.html\n> \n> I've always thought of the follow-the-join-structure rule as a stopgap\n> measure until we think of something better; it's not intuitive that\n> writing queries using INNER JOIN/ON isn't equivalent to writing FROM/WHERE.\n> On the other hand it provides a useful \"out\" for those people who are\n> joining umpteen tables and need to short-circuit the planner's search\n> heuristics. If I take it out, I'll get beat up by the same camp that\n> thinks they should be able to override the planner's ideas about whether\n> to use an index ;-)\n\nHmm, since 7.1 was released we have religiously converted all our joins to\nthe new syntax, thinking it more politically correct ;-). But now all\nour beliefs are put into question. Back to old joins, in certain cases.\n\nHere is the rule of thumb we deduce from your message: only use explicit\njoin syntax if a left|right|full join is involved OR if the\nconditional(s) can go into the ON() clause, ELSE use the old join\nsyntax.\n\nIs that more or less correct?\n\nPreliminary tests converting the query I previously sent you to the old\nsyntax are indeed very impressive: now in both cases (comparison on p1\nor p2 takes ~ 1ms).\n\nTHANKS A LOT FOR THE HEADS UP!\n\n-- \n THESEE: Il fallait, en fuyant, ne pas abandonner\n Le fer qui dans ses mains aide à te condamner ;\n (Phèdre, J-B Racine, acte 4, scène 2)\n", "msg_date": "Wed, 17 Apr 2002 17:26:33 +0200", "msg_from": "Louis-David Mitterrand <vindex@apartia.org>", "msg_from_op": true, "msg_subject": "Re: huge runtime difference between 2 almost identical queries (was:\n\tRe: Index Scans become Seq Scans after VACUUM ANALYSE)" }, { "msg_contents": "Thomas Lockhart wrote:\n> \n> ...\n> > I experienced myself many times, joins have to be rewritten...\n> > This is really true for 
outer joins (LEFT/RIGHT join). And it has to be\n> > tested with explain plans.\n> \n> It is particularly true for \"join syntax\" as used in outer joins because\n> *that bypasses the optimizer entirely*!!! I'd like to see that changed,\n> since the choice of syntax should have no effect on performance. And\n> although the optimizer can adjust query plans to choose the best one\n> (most of the time anyway ;) there is likely to be *only one correct way\n> to write a query using join syntax*. *Every* other choice for query will\n> be wrong, from a performance standpoint.\n> \n> That is a bigger \"foot gun\" than the other things we are talking about,\n> imho.\n\nPlease keep in mind that before the explicit join syntax \"optimization\"\nwas available, my 20-way joins took PostgreSQL forever to complete,\nregardless of whether the genetic query optimizer was enabled or not.\nUsing the explicit join syntax, they return results essentially\ninstantaneously. Perhaps the optimizer can be used for LEFT/RIGHT joins,\nbut I would still like the option to use explicit join orders in order\nto prohibit the exponential growth in time spent in the optimizer. I'd\nprefer to have a gun to shoot myself in the foot with than no gun at\nall. Or rather, I'd prefer to have a \"foot gun\" than a \"server gun\"\n\nserver gun (n.) A tool built solely for shooting a PostgreSQL server\nwhen waiting for the completion of a query using a large number of joins\n\n;-)\n\nMike Mascari\nmascarm@mascari.com\n\n\n> \n> - Thomas\n> \n> foot gun (n.) 
A tool built solely for shooting oneself or another person\n> in the foot.\n", "msg_date": "Wed, 17 Apr 2002 11:30:16 -0400", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Thomas Lockhart <thomas@fourpalms.org> writes:\n> It is particularly true for \"join syntax\" as used in outer joins because\n> *that bypasses the optimizer entirely*!!! I'd like to see that changed,\n> since the choice of syntax should have no effect on performance.\n\nFor an OUTER join, we do have to join in the specified order, don't we?\n(A left join B) left join C doesn't necessarily produce the same set\nof rows as A left join (B left join C).\n\nThe fact that INNER JOIN syntax is currently treated the same way is\npartly an artifact of implementation convenience, but also partly a\nresponse to trouble reports we were receiving about the amount of\nplanning time spent on many-way joins. If we cause the planner to\ntreat A INNER JOIN B the same way it treats \"FROM A,B WHERE\", I think\nwe'll be back in the soup with the folks using dozen-table joins.\n\nWould it make sense to flatten out INNER JOINs only when the total\nnumber of tables involved is less than some parameter N? 
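[Editor's sketch] The thresholding idea in the question above can be outlined as follows. The names and the default value are purely illustrative; this shows the shape of the proposed heuristic, not actual planner code:

```python
def inner_join_strategy(tables, collapse_limit=8):
    """Sketch: fold explicit INNER JOINs into the planner's join-order
    search only while the number of base tables stays under the limit;
    beyond that (or at N=1), honor the join order as written."""
    if collapse_limit <= 1:
        return "preserve-join-order"     # N=1: always obey the query's syntax
    if len(tables) < collapse_limit:
        return "planner-searches-order"  # treat like FROM a, b WHERE ...
    return "preserve-join-order"         # short-circuit the search heuristics

# A three-way inner join would be flattened and fully optimized ...
print(inner_join_strategy(["a", "b", "c"]))
# ... while a ten-table join keeps the user's explicit ordering.
print(inner_join_strategy(list("abcdefghij")))
```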
N\naround six or eight would probably keep the complex-query crowd\nhappy, while not causing unintuitive behavior for simple queries.\nAnybody who really likes the current behavior could set N=1 to force\nthe system to obey his join order.\n\n(There already is a comparable heuristic used for deciding whether\nto pull up subqueries, but its control parameter isn't separately\nexposed at the moment.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Apr 2002 11:46:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> On borderline conditions, wrongly using an index does not result in as bad\n> performance as wrongly not using an index,\n\nYou're arguing from a completely false premise. It might be true on the\nparticular cases you've looked at, but in general an indexscan-based\nplan can be many times worse than a seqscan plan.\n\nIn particular this is likely to hold when the plan has to access most or\nall of the table. I still remember the first time I got my nose rubbed\nin this unfortunate fact. I had spent a lot of work improving the\nplanner's handling of sort ordering to the point where it could use an\nindexscan in place of seqscan-and-sort to handle ORDER BY queries.\nI proudly committed it, and immediately got complaints that ORDER BY was\nslower than before on large tables. Considering how slow a large sort\noperation is, that should give you pause.\n\nAs for \"borderline conditions\", how is the planner supposed to know what\nis borderline?\n\nI cannot see any rational justification for putting a thumb on the\nscales on the side of indexscan (or any other specific plan type)\nas you've proposed. 
Thomas correctly points out that you'll just move\nthe planner failures from one area to another.\n\nIf we can identify a reason why the planner tends to overestimate the\ncosts of indexscan vs seqscan, by all means let's fix that. But let's\nnot derive cost estimates that are the best we know how to make and\nthen ignore them.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Apr 2002 12:10:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": "Louis-David Mitterrand <vindex@apartia.org> writes:\n> Hmm, since 7.1 released we have religiously converted all our joins to\n> the new syntax, thinking it more politically correct ;-). But now all\n> our beliefs are put into question. Back to old joins, in certain cases.\n\n> Here is the rule of thumb we deduct from your message: only use explicit\n> join syntax if a left|right|full join is involved OR if the\n> conditional(s) can go into the ON() clause, ELSE use the old join\n> syntax.\n\nI don't see that the ON clause has anything to do with it. You must use\nthe JOIN syntax for any kind of outer join, of course. For an inner\njoin, the planner currently has a better shot at choosing the right plan\nif you don't use JOIN syntax.\n\nSee nearby thread for some discussion about tweaking this aspect of the\nplanner's behavior.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Apr 2002 12:14:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: huge runtime difference between 2 almost identical queries (was:\n\tRe: Index Scans become Seq Scans after VACUUM ANALYSE)" }, { "msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > On borderline conditions, wrongly using an index does not result in as bad\n> > performance as wrongly not using an index,\n> \n> You're arguing from a completely false premise. 
It might be true on the\n> particular cases you've looked at, but in general an indexscan-based\n> plan can be many times worse than a seqscan plan.\n\nOK, I'll grant you that, but I am talking about the space between when it is\nclear that an index is useful and when it is clear that it is not. For some\nreason you seem to think I am saying \"always use an index,\" when, in fact, I am\nsaying more preference should be given to using an index than it currently has.\n\n> As for \"borderline conditions\", how is the planner supposed to know what\n> is borderline?\n\nIt need not know about borderline conditions.\n\n> \n> I cannot see any rational justification for putting a thumb on the\n> scales on the side of indexscan (or any other specific plan type)\n> as you've proposed. Thomas correctly points out that you'll just move\n> the planner failures from one area to another.\n\nI don't think this is true, and you yourself had said you are not too worried\nabout a 10 vs 8 second difference. I have seen many instances of when\nPostgreSQL refuses to use an index because the data distribution is uneven.\nMaking it more difficult for the planner to ignore an index would solve\npractically all the problems I have seen, and I bet the range of instances\nwhere it would incorrectly use an index would not impact performance as badly\nas those instances where it doesn't.\n\n> \n> If we can identify a reason why the planner tends to overestimate the\n> costs of indexscan vs seqscan, by all means let's fix that. But let's\n> not derive cost estimates that are the best we know how to make and\n> then ignore them.\n\nI don't think you can solve this with statistics. It is a far more complex\nproblem than that. There are too many variables, there is no way a standardized\nsummation will accurately characterize all possible tables. 
There must be a way\nto add heuristics to the cost based analyzer.\n", "msg_date": "Wed, 17 Apr 2002 12:35:13 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Wed, 2002-04-17 at 17:16, Tom Lane wrote:\n> \n> It's entirely possible that the default value of random_page_cost is too\n> high, at least for many modern machines. The experiments I did to get\n> the 4.0 figure were done a couple years ago, on hardware that wasn't\n> exactly new at the time. I have not heard of anyone else trying to\n> measure it though.\n> \n> I don't think I have the source code I used anymore, but the principle\n> is simple enough:\n> \n> 1. Make a large file (several times the size of your machine's RAM, to\n> ensure you swamp out kernel disk buffering effects). Fill with random\n> data. (NB: do not fill with zeroes, some filesystems optimize this away.)\n\nPeople running postgres often already have large files of random data\nunder $PGDATA directory :)\n\n> 2. Time reading the file sequentially, 8K per read request.\n> Repeat enough to get a statistically trustworthy number.\n> \n> 3. Time reading randomly-chosen 8K pages from the file. Repeat\n> enough to get a trustworthy number (the total volume of pages read\n> should be several times the size of your RAM).\n> \n> 4. Divide.\n> \n> The only tricky thing about this is making sure you are measuring disk\n> access times and not being fooled by re-accessing pages the kernel still\n> has cached from a previous access. (The PG planner does try to account\n> for caching effects, but that's a separate estimate; the value of\n> random_page_cost isn't supposed to include caching effects.) 
AFAIK the\n> only good way to do that is to use a large test, which means it takes\n> awhile to run; and you need enough spare disk space for a big test file.\n\nIf you have the machine all for yourself you can usually tell it to use\nless RAM at boot time.\n\nOn linux it is append=\" mem=32M\" switch in lilo.conf or just mem=32M on\nlilo boot command line.\n\n> It'd be interesting to get some numbers for this across a range of\n> hardware, filesystems, etc ...\n\n---------------\nHannu\n\n\n", "msg_date": "17 Apr 2002 19:15:40 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Wed, 2002-04-17 at 19:15, Hannu Krosing wrote:\n> On Wed, 2002-04-17 at 17:16, Tom Lane wrote:\n> > \n> > It's entirely possible that the default value of random_page_cost is too\n> > high, at least for many modern machines. The experiments I did to get\n> > the 4.0 figure were done a couple years ago, on hardware that wasn't\n> > exactly new at the time. I have not heard of anyone else trying to\n> > measure it though.\n> > \n> > I don't think I have the source code I used anymore, but the principle\n> > is simple enough:\n> > \n> > 1. Make a large file (several times the size of your machine's RAM, to\n> > ensure you swamp out kernel disk buffering effects). Fill with random\n> > data. (NB: do not fill with zeroes, some filesystems optimize this away.)\n> \n> People running postgres often already have large files of random data\n> under $PGDATA directory :)\n\nOTOH, it is also important where the file is on disk. As seen from disk\nspeed test graphs on http://www.tomshardware.com , the speed difference\nof sequential reads is 1.5 to 2.5 between inner and outer tracks. \n\n> > 2. Time reading the file sequentially, 8K per read request.\n> > Repeat enough to get a statistically trustworthy number.\n> > \n> > 3. Time reading randomly-chosen 8K pages from the file. 
Repeat\n> > enough to get a trustworthy number (the total volume of pages read\n> > should be several times the size of your RAM).\n> > \n> > 4. Divide.\n> > \n> > The only tricky thing about this is making sure you are measuring disk\n> > access times and not being fooled by re-accessing pages the kernel still\n> > has cached from a previous access. (The PG planner does try to account\n> > for caching effects, but that's a separate estimate;\n\nWill it make the random and seq read cost equal when cache size >\ndatabase size and enough queries are performed to assume that all data\nis in cache.\n\nAlso, can it distinguish between data in pg internal cache (shared\nmemory) and data in OS filesystem cache ?\n\n> > the value of\n> > random_page_cost isn't supposed to include caching effects.) AFAIK the\n> > only good way to do that is to use a large test, which means it takes\n> > awhile to run; and you need enough spare disk space for a big test file.\n> \n> If you have the machine all for yourself you can usually tell it to use\n> less RAM at boot time.\n> \n> On linux it is append=\" mem=32M\" switch in lilo.conf or just mem=32M on\n> lilo boot command line.\n> \n> > It'd be interesting to get some numbers for this across a range of\n> > hardware, filesystems, etc ...\n> \n> ---------------\n> Hannu\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n\n", "msg_date": "17 Apr 2002 19:28:04 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> OTOH, it is also important where the file is on disk. As seen from disk\n> speed test graphs on http://www.tomshardware.com , the speed difference\n> of sequential reads is 1.5 to 2.5 between inner and outer tracks. \n\nTrue. 
But if we use the same test file for both the sequential and\nrandom-access timings, hopefully the absolute speed of access will\ncancel out. (Again, it's the sort of thing that could use some\nreal-world testing...)\n\n> (The PG planner does try to account\n> for caching effects, but that's a separate estimate;\n\n> Will it make the random and seq read cost equal when cache size >\n> database size and enough queries are performed to assume that all data\n> is in cache.\n\nThere isn't any attempt to account for the effects of data having been\nread into cache by previous queries. I doubt that it would improve the\nmodel to try to keep track of what the recent queries were --- for one\nthing, do you really want your plans changing on the basis of activity\nof other backends?\n\nOne place where this does fall down is in nestloops with inner index\nscans --- if we know that the inner query will be evaluated multiple\ntimes, then we should give it some kind of discount for cache effects.\nInvestigating this is on the todo list...\n\n> Also, can it distinguish between data in pg internal cache (shared\n> memory) and data in OS filesystem cache ?\n\nCurrently we treat those alike. Yeah, the OS cache is slower to get to,\nbut in comparison to a physical disk read I think the difference is\ninsignificant.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Apr 2002 13:43:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> ... I have seen many instances of when\n> PostgreSQL refuses to use an index because the data distribution is uneven.\n\nThis is fixed, or at least attacked, in 7.2. 
Again, I do not see this\nas an argument for making the planner stupider instead of smarter.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Apr 2002 13:45:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > ... I have seen many instances of when\n> > PostgreSQL refuses to use an index because the data distribution is uneven.\n> \n> This is fixed, or at least attacked, in 7.2. Again, I do not see this\n> as an argument for making the planner stupider instead of smarter.\n> \n\nYou completely ignored the point I was trying to make. Statistics are a\nsummation of the data, not the actual data. As such, it cannot possibly\nrepresent all possible configurations of tables.\n\nAdding heuristics, such as weighting for index scans, is not making the planner\nstupider. It is making it smarter and more flexible.\n", "msg_date": "Wed, 17 Apr 2002 13:49:16 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Wed, Apr 17, 2002 at 12:35:13PM -0400, mlw wrote:\n\n> about a 10 vs 8 second difference. I have seen many instances of when\n> PostgreSQL refuses to use an index because the data distribution is uneven.\n> Making it more difficult for the planner to ignore an index would solve\n> practically all the problems I have seen, and I bet the range of instances\n> where it would incorrectly use an index would not impact performance as badly\n> as those instances where it doesn't.\n\nYou bet, eh? Numbers, please. \n\nThe best evidence that anyone has been able to generate is _already_\nthe basis for the choices the planner makes. If you can come up with\nother cases where it consistently makes the wrong choice, good:\nthat's data to work with. 
Maybe it'll expose whatever it is that's\nwrong. But it is not a general case, anyway, so you can't draw any\nconclusion at all about other cases from your case. And Tom Lane is\nright: the repair is _not_ to use some rule of thumb that an index is\nprobably there for a reason.\n\nGiven the apparent infrequency of docs-consultation, I am\nconsiderably less sanguine than you are about the correctness of the\nchoices many DBAs make. Poking at the planner to make it use an\nindex more often strikes me as at least as likely to cause worse\nperformance.\n\n> I don't think you can solve this with statistics. It is a far more\n> complex problem than that. \n\nAw, you just need to take more stats courses ;)\n\nA\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 17 Apr 2002 14:08:58 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "mlw writes:\n\n> Adding heuristics, such as weighting for index scans, is not making the planner\n> stupider. It is making it smarter and more flexible.\n\nIf life was as simple as index or no index then this might make some\nsense. 
But in general the planner has a whole bunch of choices of join\nplans, sorts, scans, and the cost of an individual index scan is hidden\ndown somewhere in the leaf nodes, so you can't simply say that plans of\ntype X should be preferred when the cost estimates are close.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 17 Apr 2002 14:32:19 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Andrew Sullivan wrote:\n> Given the apparent infrequency of docs-consultation, I am\n> considerably less sanguine than you are about the correctness of the\n> choices many DBAs make. Poking at the planner to make it use an\n> index more often strikes me as at least as likely to cause worse\n> performance.\n\nI disagree :-)\n\n> \n> > I don't think you can solve this with statistics. It is a far more\n> > complex problem than that.\n> \n> Aw, you just need to take more stats courses ;)\n\nYou need to move a away form the view that everything calculable and\ndeterministic and move over to the more chaotic perspective where \"more likely\nthan not\" is about the best one can hope for.\n\nThe cost based optimizer is just such a system. There are so many things that\ncan affect the performance of a query that there is no way to adequately model\nthem. Disk performance, inner/outer tracks, RAID systems, concurrent system\nactivity, and so on.\n\nLook at the pgbench utility. I can't run that program without a +- 10%\nvariation from run to run, no mater how many times I run vacuum and checkpoint.\n\nWhen the estimated cost ranges of the different planner strategies overlap, I\nthink that is a case where two approximations with indeterminate precision must\nbe evaluated. In such cases, the variance between the numbers have little or no\nabsolute relevance to one another. This is where heuristics and a bit of\nfuzziness needs to be applied. 
Favoring an index scan over a sequential scan\nwould probably generate a better query.\n", "msg_date": "Wed, 17 Apr 2002 14:41:28 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> mlw writes:\n> \n> > Adding heuristics, such as weighting for index scans, is not making the planner\n> > stupider. It is making it smarter and more flexible.\n> \n> If life was as simple as index or no index then this might make some\n> sense. But in general the planner has a whole bunch of choices of join\n> plans, sorts, scans, and the cost of an individual index scan is hidden\n> down somewhere in the leaf nodes, so you can't simply say that plans of\n> type X should be preferred when the cost estimates are close.\n> \nNo doubt, no one is arguing that it is easy, but as I said in a branch of this\ndiscussion, when the planner has multiple choices, and the cost ranges\noverlap, the relative numbers are not so meaningful that heuristics would not\nimprove the algorithm.\n", "msg_date": "Wed, 17 Apr 2002 14:50:38 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "The fact that an index exists adds a choice -- so by no means is the\nindex ignored.\n\nBut just because a Freeway exists across town doesn't make it faster\nthan the sideroads. It depends on the day of week, time of day, and\nuncontrollable anomalies (accidents).\n\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. 
You cannot imagine why you ever felt\notherwise.\n\n----- Original Message -----\nFrom: "mlw" <markw@mohawksoft.com>\nTo: "Thomas Lockhart" <thomas@fourpalms.org>\nCc: "Tom Lane" <tgl@sss.pgh.pa.us>; "Bruce Momjian"\n<pgman@candle.pha.pa.us>; "Louis-David Mitterrand"\n<vindex@apartia.org>; <pgsql-hackers@postgresql.org>\nSent: Wednesday, April 17, 2002 10:31 AM\nSubject: Re: [HACKERS] Index Scans become Seq Scans after VACUUM\nANALYSE\n\n\n> Thomas Lockhart wrote:\n> > Systems which have optimizing planners can *never* be guaranteed\nto\n> > generate the actual lowest-cost query plan. Any impression that\nOracle,\n> > for example, actually does do that may come from a lack of\nvisibility\n> > into the process, and a lack of forum for discussing these edge\ncases.\n>\n> And herein lies the crux of the problem. It isn't a purely\nlogical/numerical\n> formula. It is a probability estimate, nothing more. Currently, the\nstatistics\n> are used to calculate a probable best query, not a guaranteed best\nquery. The\n> presence of an index should be factored into the probability of a\nbest query,\n> should it not?\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to\nmajordomo@postgresql.org\n>\n\n", "msg_date": "Wed, 17 Apr 2002 15:15:36 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Wed, 2002-04-17 at 22:43, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > OTOH, it is also important where the file is on disk. As seen from disk\n> > speed test graphs on http://www.tomshardware.com , the speed difference\n> > of sequential reads is 1.5 to 2.5 between inner and outer tracks. \n> \n> True. But if we use the same test file for both the sequential and\n> random-access timings, hopefully the absolute speed of access will\n> cancel out.
(Again, it's the sort of thing that could use some\n> real-world testing...)\n\nWhat I was trying to say was that if you test on one end you will get\nwrong data for the other end of the same disk.\n\n> > (The PG planner does try to account\n> > for caching effects, but that's a separate estimate;\n> \n> > Will it make the random and seq read cost equal when cache size >\n> > database size and enough queries are performed to assume that all data\n> > is in cache.\n> \n> There isn't any attempt to account for the effects of data having been\n> read into cache by previous queries. I doubt that it would improve the\n> model to try to keep track of what the recent queries were \n\nPerhaps some simple thing, like \nnumber of pages read * cache size / database size\n\nOr perhaps use some additional bookkeeping in cache logic, perhaps even\non a per-table basis. If this can be made to use the same locks as cache\nloading/invalidation it may be quite cheap. \n\nIt may even exist in some weird way already inside the LRU mechanism.\n\n>--- for one\n> thing, do you really want your plans changing on the basis of activity\n> of other backends?\n\nIf I want the best plans then yes. The other backends do affect\nperformance so the best plan would be to account for their activities.\n\nIf another backend is swapping like crazy the best plan may even be to\nwait for it to finish before proceeding :)\n\n----------------\nHannu\n\n\n\n\n", "msg_date": "18 Apr 2002 01:15:50 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Andrew Sullivan wrote: \n> You haven't shown anything except a couple of anecdotal reports as\n> evidence against his view. Anyone who asks you for more evidence\n> gets treated to a remark that statistics won't do everything in this\n> case.
\n\nI do not, currently, have access to systems which exhibit the behavior, but I\nhave worked with PostgreSQL quite a bit, and have done a number of projects\nwith it, and have seen the issue first hand and have had to work around it. I\nhave posted detailed data to this list in the past.\n\n> You'll need something stronger than an antipathy for\n> statistical methods to support your position.\n\nI do not have an antipathy for statistics at all, however statistics are a\nreduction of data. They represent a number of properties obtained from a larger\ngroup. For \"statistics\" to be useful, the trends or characteristics you\ncalculate must apply to the problem.\n\nOracle has a cost based optimizer, and they allow you to override it, offer\nhints as to what it should do, or use the rules based optimizer. They know that\na cost based optimizer can not generate the best query all the time.\n\n\n> The heuristic model\n> you propose is a fundamental shift from the current attempts to make\n> the planner choose better plans on the basis of what's in the\n> database. You're saying it can never know enough. And I say, prove\n> it.\n\nI say it is obvious it can never know enough, since statistics are a summation\nof the data set from which they were obtained, thus they can not contain all\nthe information about that data set unless they are at least as large as the\ndata set.\n> \n> > to one another. This is where heuristics and a bit of fuzziness\n> > needs to be applied. Favoring an index scan over a sequential scan\n> > would probably generate a better query.\n> \n> Tom has argued, several times, with reference to actual cases, why\n> that is false. 
Repeating your view doesn't make it so.\n\nFor some reason, it seems that Tom is under the impression that I am saying\n"always use an index" when that is not what I am saying at all.\n\nHere is the logical argument: (correct me if I am wrong)\n\n(1) The table statistics are a summation of the properties of the table which,\namong other things, are thought to affect query performance.\n\n(2) The planner uses the statistics to create estimates about how a strategy\nwill perform.\n\n(3) The "estimates" based on the table statistics have a degree of uncertainty,\nbecause they are based on the statistical information about the table, not the\ntable itself. The only way to be 100% sure you get the best query is to try all\nthe permutations of the query.\n\n(4) Since the generated estimates have a degree of uncertainty, when multiple\nquery paths are evaluated, the planner will choose a suboptimal query once in a\nwhile.\n\nNow my argument, based on my personal experience, and based on Tom's own\nstatement that +- 2 seconds on a 10 second query is not something he gets\nexcited about, is this:\n\nWhen the planner is presented with a number of possible plans, it must weigh\nthe cost estimates. If there is a choice between two plans which are within\nsome percent range of each other, we can fall back on a risk analysis.\n\nFor instance: say we have two similarly performing plans, close to one another,\nsay within 20%, one plan uses an index, and one does not.
It is unlikely that\nthe index plan will perform substantially worse than the non-index plan, right?\nThat is the point of the cost estimates, right?\n\nNow, given the choice of the two strategies on a table, both pretty close to\none another, the risk of poor performance for using the index scan is minimal\nbased on the statistics, but the risk of poor performance for using the\nsequential scan is quite high on a large table.\n\nDoes anyone disagree?\n", "msg_date": "Wed, 17 Apr 2002 16:28:03 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Wed, Apr 17, 2002 at 04:28:03PM -0400, mlw wrote:\n\n> Oracle has a cost based optimizer, and they allow you to override\n> it, offer hints as to what it should do, or use the rules based\n> optimizer. They know that a cost based optimizer can not generate\n> the best query all the time.\n\nOracle's the wrong competition to cite here. IBM's optimiser and\nplanner in DB2 is rather difficult to override; IBM actively\ndiscourages doing so. That's because it's the best there is. It's\n_far_ better than Oracle's, and has ever been so. It just about\n_always_ gets it right. Without presuming to speak for him, I'd\nsuggest that Tom probably wants to get the planner to that level,\nrather than adding band-aids.\n\n> I say it is obvious it can never know enough, since statistics are\n\nEnough for what? The idea is that the statistics will get you the\nbest-bet plan. 
You're trying to redefine what the best bet is; and\nTom and others have suggested that a simple rule of thumb, "All else\nbeing more or less equal, prefer an index," is not a good one.\n\n> Now, given the choice of the two strategies on a table, both pretty\n> close to one another, the risk of poor performance for using the\n> index scan is minimal based on the statistics, but the risk of poor\n> performance for using the sequential scan is quite high on a large\n> table.\n\nI thought that's what the various cost estimates were there to cover. \nIf this is all you're saying, then the feature is already there.\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 17 Apr 2002 16:55:26 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "mlw wrote:\n> Now, given the choice of the two strategies on a table, both pretty close to\n> one another, the risk of poor performance for using the index scan is minimal\n> based on the statistics, but the risk of poor performance for using the\n> sequential scan is quite high on a large table.\n\nWow, what did I start here?\n\nOK, let me see if I can explain why the idea of an index being present\nis not significant, and also explain why doing a sequential scan is\n_less_ risky than an index scan.\n\nFirst, if an admin creates an index, it does mean he thinks it will\nhelp, but is he right? You could say that if they create the index, use\nit, and if the admin finds it makes the query slower, he can then remove\nit, and this does give him some control over the optimizer.\n\nHowever, this assumes two things. First, it assumes the admin will\nactually check to see if the index helps, and if it doesn't, remove it;\nbut more importantly, it assumes there is only one type of query for\nthat table.
That is the biggest fallacy. If I do:\n\n\tSELECT * FROM tab WHERE col = 0;\n\nI may be selecting 70% of the table, and an index scan will take\nforever if 70% of the table (plus index pages) is significantly larger\nthan the cache size; every row lookup will have to hit the disk! \n\nHowever, if I do:\n\n\tSELECT * FROM tab WHERE col = 89823;\n\nand 89823 is a rare value, perhaps only one row in the table, then an\nindex would be good to use, so yes, indexes can be added by admins to\nimprove performance, but the admin is creating the index probably for\nthe second query, and certainly doesn't want the index used for the\nfirst query.\n\nAlso, these are simple queries. Add multiple tables and join methods,\nand the idea that an admin creating an index could in any way control\nthese cases is implausible.\n\nMy second point, that index scan is more risky than sequential scan, is\noutlined above. A sequential scan reads each page once, and uses the\nfile system read-ahead code to prefetch the disk buffers. Index scans\nare random, and could easily re-read disk pages to plow through a\nsignificant portion of the table, and because the reads are random,\nthe file system will not prefetch the rows so the index scan will have\nto wait for each non-cache-resident row to come in from disk.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Apr 2002 16:56:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Andrew Sullivan wrote:\n\n> > Now, given the choice of the two strategies on a table, both pretty\n> > close to one another, the risk of poor performance for using the\n> > index scan is minimal based on the statistics, but the risk of poor\n> > performance for using the sequential scan is quite high on a large\n> > table.\n> \n> I thought that's what the various cost estimates were there to cover.\n> If this is all you're saying, then the feature is already there.\n\nThe point is that if the index plan is < 20% more costly than the sequential\nscan, it is probably less risky.\n", "msg_date": "Wed, 17 Apr 2002 17:10:30 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "mlw wrote:\n> Andrew Sullivan wrote:\n> \n> > > Now, given the choice of the two strategies on a table, both pretty\n> > > close to one another, the risk of poor performance for using the\n> > > index scan is minimal based on the statistics, but the risk of poor\n> > > performance for using the sequential scan is quite high on a large\n> > > table.\n> > \n> > I thought that's what the various cost estimates were there to cover.\n> > If this is all you're saying, then the feature is already there.\n> \n> The point is that if the index plan is < 20% more costly than the sequential\n> scan, it is probably less risky.\n\nI just posted on this topic. Index scan is more risky, no question\nabout it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Apr 2002 17:13:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> mlw wrote:\n> > Now, given the choice of the two strategies on a table, both pretty close to\n> > one another, the risk of poor performance for using the index scan is minimal\n> > based on the statistics, but the risk of poor performance for using the\n> > sequential scan is quite high on a large table.\n\n> My second point, that index scan is more risky than sequential scan, is\n> outlined above. A sequential scan reads each page once, and uses the\n> file system read-ahead code to prefetch the disk buffers. Index scans\n> are random, and could easily re-read disk pages to plow through a\n> significant portion of the table, and because the reads are random,\n> the file system will not prefetch the rows so the index scan will have\n> to wait for each non-cache-resident row to come in from disk.\n\nThat is a very interesting point, but shouldn't that be factored into the cost\n(random_tuple_cost?) In which case my point still stands.\n", "msg_date": "Wed, 17 Apr 2002 17:16:23 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "mlw wrote:\n> Bruce Momjian wrote:\n> > \n> > mlw wrote:\n> > > Now, given the choice of the two strategies on a table, both pretty close to\n> > > one another, the risk of poor performance for using the index scan is minimal\n> > > based on the statistics, but the risk of poor performance for using the\n> > > sequential scan is quite high on a large table.\n> \n> > My second point, that index scan is more risky than sequential scan, is\n> > outlined above. 
A sequential scan reads each page once, and uses the\n> > file system read-ahead code to prefetch the disk buffers. Index scans\n> > are random, and could easily re-read disk pages to plow through a\n> > significant portion of the table, and because the reads are random,\n> > the file system will not prefetch the rows so the index scan will have\n> > to wait for each non-cache-resident row to come in from disk.\n> \n> That is a very interesting point, but shouldn't that be factored into the cost\n> (random_tuple_cost?) In which case my point still stands.\n\nYes, I see your point. I think on the high end that index scans can get\nvery expensive if you start to do lots of cache misses and have to wait\nfor i/o. I know the random cost is 4, but I think that number is not\nlinear. It can be much higher for lots of cache misses and waiting for\nI/O, and I think that is why it feels more risky to do an index scan on a\nsample size that is not perfectly known.\n\nActually, you pretty much can know sequential scan size because you know\nthe number of blocks in the table. It is index scan that is more\nunknown because you don't know how many index lookups you will need, and\nhow well they will stay in the cache.\n\nDoes that help? Wow, this _is_ confusing. I am still looking for that\nholy grail that will allow this all to be codified so others can learn\nfrom it and we don't have to rehash this repeatedly, but frankly, this\nwhole discussion is covering new ground that we haven't covered yet. \n\n(Maybe TODO.detail this discussion and point to it from the FAQ.)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Apr 2002 17:39:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Michael Loftis wrote:\n> As far as the 'planner benchmark suite' so we can start gathering more \n> statistical data about what costs should be, or are better at, that's an \n> excellent idea.\n\nPeople with different hardware have different random page costs,\nclearly. Even different workloads affect it. Added to TODO:\n\n\t* Add utility to compute accurate random_page_cost value\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Apr 2002 17:41:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Bruce Momjian wrote:\n> My second point, that index scan is more risky than sequential scan, is\n> outlined above. A sequential scan reads each page once, and uses the\n> file system read-ahead code to prefetch the disk buffers. Index scans\n> are random, and could easily re-read disk pages to plow through a\n> significant portion of the table, and because the reads are random,\n> the file system will not prefetch the rows so the index scan will have\n> to wait for each non-cache-resident row to come in from disk.\n\nIt took a bike ride to think about this one. The supposed advantage of a\nsequential read over a random read, in an active multitasking system, is a\nmyth. If you are executing one query and the system is doing only that query,\nyou may be right.\n\nExecute a number of queries at the same time, the expected benefit of a\nsequential scan goes out the window.
The OS will be fetching blocks, more or\nless, at random.\n", "msg_date": "Wed, 17 Apr 2002 17:44:13 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "mlw wrote:\n> Bruce Momjian wrote:\n> > My second point, that index scan is more risky than sequential scan, is\n> > outlined above. A sequential scan reads each page once, and uses the\n> > file system read-ahead code to prefetch the disk buffers. Index scans\n> > are random, and could easily re-read disk pages to plow through a\n> > significant portion of the table, and because the reads are random,\n> > the file system will not prefetch the rows so the index scan will have\n> > to wait for each non-cache-resident row to come in from disk.\n> \n> It took a bike ride to think about this one. The supposed advantage of a\n> sequential read over an random read, in an active multitasking system, is a\n> myth. If you are executing one query and the system is doing only that query,\n> you may be right.\n> \n> Execute a number of queries at the same time, the expected benefit of a\n> sequential scan goes out the window. The OS will be fetching blocks, more or\n> less, at random.\n\nOK, yes, sequential scan _can_ be as slow as index scan, but sometimes\nit is faster. Can you provide reasoning why index scan should be\npreferred, other than the admin created it, which I already addressed?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Apr 2002 17:54:21 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n\n> It took a bike ride to think about this one. 
The supposed advantage of a\n> sequential read over a random read, in an active multitasking system, is a\n> myth. \n\nDisagree.\n\n> Execute a number of queries at the same time, the expected benefit of a\n> sequential scan goes out the window. The OS will be fetching blocks, more or\n> less, at random.\n\nIf readahead is active (and it should be for sequential reads) there\nis still a pretty good chance that the next few disk blocks will be in\ncache next time you get scheduled.\n\nIf your disk is thrashing that badly, you need more RAM and/or more\nspindles; using an index will just put even more load on the i/o\nsystem.\n\n-Doug\n-- \nDoug McNaught Wireboard Industries http://www.wireboard.com/\n\n Custom software development, systems and network consulting.\n Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...\n", "msg_date": "17 Apr 2002 17:55:14 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> mlw wrote:\n> > Bruce Momjian wrote:\n> > >\n> > > mlw wrote:\n> > > > Now, given the choice of the two strategies on a table, both pretty close to\n> > > > one another, the risk of poor performance for using the index scan is minimal\n> > > > based on the statistics, but the risk of poor performance for using the\n> > > > sequential scan is quite high on a large table.\n> >\n> > > My second point, that index scan is more risky than sequential scan, is\n> > > outlined above. A sequential scan reads each page once, and uses the\n> > > file system read-ahead code to prefetch the disk buffers. 
Index scans\n> > > are random, and could easily re-read disk pages to plow through a\n> > > significant portion of the table, and because the reads are random,\n> > > the file system will not prefetch the rows so the index scan will have\n> > > to wait for each non-cache-resident row to come in from disk.\n> >\n> > That is a very interesting point, but shouldn't that be factored into the cost\n> > (random_tuple_cost?) In which case my point still stands.\n> \n> Yes, I see your point. I think on the high end that index scans can get\n> very expensive if you start to do lots of cache misses and have to wait\n> for i/o. I know the random cost is 4, but I think that number is not\n> linear. It can be much higher for lots of cache misses and waiting for\n> I/O, and think that is why it feels more risky to do an index scan on a\n> sample size that is not perfectly known.\n\nIn an active system, sequential scans are still OS random access to a file. Two\nor more queries running at the same time will blow out most of the expected\ngain.\n\n> \n> Actually, you pretty much can know sequential scan size because you know\n> the number of blocks in the table. It is index scan that is more\n> unknown because you don't know how many index lookups you will need, and\n> how well they will stay in the cache.\n\nAgain, shouldn't that be factored into the cost?\n\n> \n> Does that help? Wow, this _is_ confusing. I am still looking for that\n> holy grail that will allow this all to be codified so others can learn\n> from it and we don't have to rehash this repeatedly, but frankly, this\n> whole discussion is covering new ground that we haven't covered yet.\n\nPath planning by probabilities derived from statistical analysis is always big\nscience, regardless of application. 
The cost based optimizer will *never* be\nfinished because it can never be perfect.\n\nWhen all is said and done, it could very well be as good as it ever needs to\nbe, and that a method for giving hints to the optimizer, ala Oracle, is the\nanswer.\n", "msg_date": "Wed, 17 Apr 2002 17:56:15 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On April 17, 2002 05:44 pm, mlw wrote:\n> It took a bike ride to think about this one. The supposed advantage of a\n> sequential read over an random read, in an active multitasking system, is a\n> myth. If you are executing one query and the system is doing only that\n> query, you may be right.\n>\n> Execute a number of queries at the same time, the expected benefit of a\n> sequential scan goes out the window. The OS will be fetching blocks, more\n> or less, at random.\n\nIf it does you should look for another OS. A good OS will work with your \naccess requests to keep them as linear as possible. Of course it has a \nslight effect the other way as well but generally lots of sequential reads \nwill be faster than lots of random ones. If you don't believe that then just \nrun the test that Tom suggested to calculate random_tuple_cost on your own \nsystem. I bet your number is higher than 1.\n\nAnd when you are done, just plug the number into your configuration and get \nthe plans that you are looking for.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Wed, 17 Apr 2002 17:56:44 -0400", "msg_from": "\"D'Arcy J.M. 
Cain\" <darcy@druid.net>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Bruce Momjian wrote:\n\n> \n> OK, yes, sequential scan _can_ be as slow as index scan, but sometimes\n> it is faster. Can you provide reasoning why index scan should be\n> preferred, other than the admin created it, which I already addressed?\n\nIf you have a choice between two or more sub-plans, similar in cost, say within\n20% of one another. Choosing a plan which uses an index has a chance of\nimproved performance if the estimates are wrong where as choosing the\nsequential scan will always have the full cost.\n", "msg_date": "Wed, 17 Apr 2002 18:01:09 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "mlw wrote:\n> Bruce Momjian wrote:\n> \n> > \n> > OK, yes, sequential scan _can_ be as slow as index scan, but sometimes\n> > it is faster. Can you provide reasoning why index scan should be\n> > preferred, other than the admin created it, which I already addressed?\n> \n> If you have a choice between two or more sub-plans, similar in cost, say within\n> 20% of one another. Choosing a plan which uses an index has a chance of\n> improved performance if the estimates are wrong where as choosing the\n> sequential scan will always have the full cost.\n\nAnd the chance of reduced performance if the estimate was too low.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Apr 2002 18:04:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "D'Arcy J.M. 
Cain wrote:\n> On April 17, 2002 05:44 pm, mlw wrote:\n> > It took a bike ride to think about this one. The supposed advantage of a\n> > sequential read over an random read, in an active multitasking system, is a\n> > myth. If you are executing one query and the system is doing only that\n> > query, you may be right.\n> >\n> > Execute a number of queries at the same time, the expected benefit of a\n> > sequential scan goes out the window. The OS will be fetching blocks, more\n> > or less, at random.\n> \n> If it does you should look for another OS. A good OS will work with your \n> access requests to keep them as linear as possible. Of course it has a \n> slight effect the other way as well but generally lots of sequential reads \n> will be faster than lots of random ones. If you don't believe that then just \n> run the test that Tom suggested to calculate random_tuple_cost on your own \n> system. I bet your number is higher than 1.\n\nThe two backends would have to be hitting the same table at different\nspots to turn off read-ahead, but it is possible. If the backends are\nhitting different tables, then they don't turn off read-ahead. Of\ncourse, for both backends to be hitting the disk, they both would have\nnot found their data in the postgres or kernel cache.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Apr 2002 18:05:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Tom Lane wrote:\n\n>mlw <markw@mohawksoft.com> writes:\n>\n>>That is the difference, in another post Tom said he could not get\n>>excited about 10.9 second execution time over a 7.96 execution\n>>time. Damn!!! I would. That is wrong.\n>>\n>\n>Sure. 
Show us how to make the planner's estimates 2x more accurate\n>(on average) than they are now, and I'll get excited too.\n>\n>But forcing indexscan to be chosen over seqscan does not count as\n>making it more accurate. (If you think it does, then you don't\n>need to be in this thread at all; set enable_seqscan = 0 and\n>stop bugging us ;-))\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\nDo we have a tool that can analyze a table and indexes to allow the DBA \nto choose when to add an index or when not to?\n\nDB2 has an index analyzer like this. Given a specific query and the \ncurrent table stats it can tell you which indexes would be\nmost beneficial. Do we have something like this already?\n\nAt least we could point those DBA's to a utility like this and then they \nwould not be too surprised when the optimizer didn't use the index.\n\n- Bill\n\n\n", "msg_date": "Wed, 17 Apr 2002 15:20:08 -0700", "msg_from": "Bill Cunningham <billc@ballydev.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "> > ... I have seen many instances of when\n> > PostgreSQL refuses to use an index because the data distribution is uneven.\n> \n> This is fixed, or at least attacked, in 7.2. Again, I do not see\n> this as an argument for making the planner stupider instead of\n> smarter.\n\nCould someone fork out some decent criteria for these \"stats\"? That way\nsomeone could generate a small app that would recommend these values\non a per-site basis. Having them hardwired and stuffed into a system\ncatalog does no good to the newbie DBA.
Iterating over a set of SQL\nstatements, measuring the output, and then sending the user the\nresults in the form of recommended values would be huge.\n<dumb_question>Where could I look for an explanation of all of these\nvalues?</dumb_question>\n\n-sc\n\n-- \nSean Chittenden\n", "msg_date": "Wed, 17 Apr 2002 17:54:44 -0700", "msg_from": "Sean Chittenden <sean@chittenden.org>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "I threw together the attached program (compiles fine with gcc 2.95.2 on\nSolaris 2.6 and egcs-2.91.66 on RedHat Linux 6.2) and ran it a few\ntimes. Data is below. Usual disclaimers about hastily written code etc\n:)\n\nMachine = ghoul (generic intel, 384mb ram, dual p3-800, ide disk running\ndma)\n\nSequential\nBytes Read\tTime\tBytes / Sec\n536870912 27.14 19783933.74\n536870912 27.14 19783990.60\n536870912 27.11 19801872.14\n536870912 26.92 19942928.41\n536870912 27.31 19657408.43\n 19794026.66 (avg)\n\nRandom\t\t\nBytes Read\tTime\tBytes / Sec\n1073741824 519.57 2066589.21\n1073741824 517.78 2073751.44\n1073741824 516.92 2077193.23\n1073741824 513.18 2092333.29\n1073741824 510.68 2102579.88\n 2082489.41 (avg)\n\nMachine = jedi (Sun E420, 3gb ram, dual 400s, test on single scsi disk)\n\nSequential\t\t\nBytes Read\tTime\tBytes / Sec\n2097152000 65.19 32167675.28\n2097152000 65.22 32154114.65\n2097152000 65.16 32182561.99\n2097152000 65.12 32206105.12\n2097152000 64.67 32429463.26\n 32227984.06 (avg)\n\nRandom\t\t\nBytes Read\tTime\tBytes / Sec\n4194304000 1522.22 2755394.79\n4194304000 278.18 15077622.05\n4194304000 91.43 45874730.07\n4194304000 61.43 68273795.19\n4194304000 54.55 76890231.51\n 41774354.72\n\nIf I interpret Tom's \"divide\" instruction correctly, is that a factor of\n10 on the linux box?\n\nOn Thu, 2002-04-18 at 01:16, Tom Lane wrote:\n> \"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es> writes:\n> > On my own few experience I think this could be solved 
decreasing\n> > random_page_cost, if you would prefer to use indexes than seq scans, then\n> > you can lower random_page_cost to a point in which postgres works as you\n> > want. So the planner would prefer indexes when in standard conditions it\n> > would prefer seq scans.\n> \n> It's entirely possible that the default value of random_page_cost is too\n> high, at least for many modern machines. The experiments I did to get\n> the 4.0 figure were done a couple years ago, on hardware that wasn't\n> exactly new at the time. I have not heard of anyone else trying to\n> measure it though.\n> \n> I don't think I have the source code I used anymore, but the principle\n> is simple enough:\n> \n> 1. Make a large file (several times the size of your machine's RAM, to\n> ensure you swamp out kernel disk buffering effects). Fill with random\n> data. (NB: do not fill with zeroes, some filesystems optimize this away.)\n> \n> 2. Time reading the file sequentially, 8K per read request.\n> Repeat enough to get a statistically trustworthy number.\n> \n> 3. Time reading randomly-chosen 8K pages from the file. Repeat\n> enough to get a trustworthy number (the total volume of pages read\n> should be several times the size of your RAM).\n> \n> 4. Divide.\n> \n> The only tricky thing about this is making sure you are measuring disk\n> access times and not being fooled by re-accessing pages the kernel still\n> has cached from a previous access. (The PG planner does try to account\n> for caching effects, but that's a separate estimate; the value of\n> random_page_cost isn't supposed to include caching effects.) 
AFAIK the\n> only good way to do that is to use a large test, which means it takes\n> awhile to run; and you need enough spare disk space for a big test file.\n> \n> It'd be interesting to get some numbers for this across a range of\n> hardware, filesystems, etc ...\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>", "msg_date": "18 Apr 2002 11:49:03 +1000", "msg_from": "Mark Pritchard <mark@tangent.net.au>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Apologies for the naff double post, but I meant to add that obviously\nthe figures for the solaris box are bogus after the first run...imagine\na file system cache of an entire 2gb file. I tried creating a file of\n4gb on this box, but it bombed with a \"file too large error\".\nUnfortunately, I can't rip memory out of this box as I don't have\nexclusive access.\n\nOn Thu, 2002-04-18 at 11:49, Mark Pritchard wrote:\n> I threw together the attached program (compiles fine with gcc 2.95.2 on\n> Solaris 2.6 and egcs-2.91.66 on RedHat Linux 6.2) and ran it a few\n> times. Data is below. 
Usual disclaimers about hastily written code etc\n> :)\n> \n> Machine = ghoul (generic intel, 384mb ram, dual p3-800, ide disk running\n> dma)\n> \n> Sequential\n> Bytes Read\tTime\tBytes / Sec\n> 536870912 27.14 19783933.74\n> 536870912 27.14 19783990.60\n> 536870912 27.11 19801872.14\n> 536870912 26.92 19942928.41\n> 536870912 27.31 19657408.43\n> 19794026.66 (avg)\n> \n> Random\t\t\n> Bytes Read\tTime\tBytes / Sec\n> 1073741824 519.57 2066589.21\n> 1073741824 517.78 2073751.44\n> 1073741824 516.92 2077193.23\n> 1073741824 513.18 2092333.29\n> 1073741824 510.68 2102579.88\n> 2082489.41 (avg)\n> \n> Machine = jedi (Sun E420, 3gb ram, dual 400s, test on single scsi disk)\n> \n> Sequential\t\t\n> Bytes Read\tTime\tBytes / Sec\n> 2097152000 65.19 32167675.28\n> 2097152000 65.22 32154114.65\n> 2097152000 65.16 32182561.99\n> 2097152000 65.12 32206105.12\n> 2097152000 64.67 32429463.26\n> 32227984.06 (avg)\n> \n> Random\t\t\n> Bytes Read\tTime\tBytes / Sec\n> 4194304000 1522.22 2755394.79\n> 4194304000 278.18 15077622.05\n> 4194304000 91.43 45874730.07\n> 4194304000 61.43 68273795.19\n> 4194304000 54.55 76890231.51\n> 41774354.72\n> \n> If I interpret Tom's \"divide\" instruction correctly, is that a factor of\n> 10 on the linux box?\n> \n> On Thu, 2002-04-18 at 01:16, Tom Lane wrote:\n> > \"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es> writes:\n> > > On my own few experience I think this could be solved decreasing\n> > > random_page_cost, if you would prefer to use indexes than seq scans, then\n> > > you can lower random_page_cost to a point in which postgres works as you\n> > > want. So the planner would prefer indexes when in standard conditions it\n> > > would prefer seq scans.\n> > \n> > It's entirely possible that the default value of random_page_cost is too\n> > high, at least for many modern machines. The experiments I did to get\n> > the 4.0 figure were done a couple years ago, on hardware that wasn't\n> > exactly new at the time. 
I have not heard of anyone else trying to\n> > measure it though.\n> > \n> > I don't think I have the source code I used anymore, but the principle\n> > is simple enough:\n> > \n> > 1. Make a large file (several times the size of your machine's RAM, to\n> > ensure you swamp out kernel disk buffering effects). Fill with random\n> > data. (NB: do not fill with zeroes, some filesystems optimize this away.)\n> > \n> > 2. Time reading the file sequentially, 8K per read request.\n> > Repeat enough to get a statistically trustworthy number.\n> > \n> > 3. Time reading randomly-chosen 8K pages from the file. Repeat\n> > enough to get a trustworthy number (the total volume of pages read\n> > should be several times the size of your RAM).\n> > \n> > 4. Divide.\n> > \n> > The only tricky thing about this is making sure you are measuring disk\n> > access times and not being fooled by re-accessing pages the kernel still\n> > has cached from a previous access. (The PG planner does try to account\n> > for caching effects, but that's a separate estimate; the value of\n> > random_page_cost isn't supposed to include caching effects.) 
AFAIK the\n> > only good way to do that is to use a large test, which means it takes\n> > awhile to run; and you need enough spare disk space for a big test file.\n> > \n> > It'd be interesting to get some numbers for this across a range of\n> > hardware, filesystems, etc ...\n> > \n> > \t\t\tregards, tom lane\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> > \n> \n> \n> ----\n> \n\n> #include <errno.h>\n> #include <stdio.h>\n> #include <stdlib.h>\n> #include <time.h>\n> #include <sys/stat.h>\n> #include <sys/time.h>\n> \n> /**\n> * Constants\n> */\n> \n> #define BLOCK_SIZE\t\t(8192)\n> \n> /**\n> * Prototypes\n> */\n> \n> \t// Creates the test file filled with random data\n> \tvoid createTestFile(char *testFileName, long long fileSize);\n> \n> \t// Handles runtime errors by displaying the function, activity and error number\n> \tvoid handleError(char *functionName, char *activity);\n> \n> \t// Standard entry point\n> \tint main(int argc, char *args[]);\n> \n> \t// Prints correct usage and quits\n> \tvoid printUsageAndQuit();\n> \n> \t// Tests performance of random reads of the given file\n> \tvoid testRandom(char *testFileName, long long amountToRead);\n> \n> \t// Tests performance of sequential reads of the given file\n> \tvoid testSeq(char *testFileName);\n> \n> /**\n> * Definitions\n> */\n> \n> /**\n> * createTestFile()\n> */\n> void createTestFile(char *testFileName, long long fileSize)\n> {\n> \tFILE *testFile;\n> \tlong long reps, i, j, bufferReps;\n> \ttime_t timetmp;\n> \tlong long *buffer;\n> \tsize_t written;\n> \n> \t// Indicate op\n> \tprintf(\"Creating test file %s of %lld mb\\n\",testFileName,fileSize);\n> \n> \t// Adjust file size to bytes\n> \tfileSize *= (1024*1024);\n> \n> \t// Allocate a buffer for writing out random long longs\n> \tif (!(buffer = malloc(BLOCK_SIZE)))\n> \t\thandleError(\"createTestFile()\",\"malloc\");\n> \n> 
\t// Open the file for writing\n> \tif (!(testFile = fopen(testFileName, \"wb\")))\n> \t\thandleError(\"createTestFile()\",\"fopen\");\n> \n> \t// Initialise the random number generator\n> \tsrandom(time(NULL));\n> \n> \t// Write data\n> \treps \t\t= fileSize / BLOCK_SIZE;\n> \tbufferReps\t= BLOCK_SIZE / sizeof(long long);\n> \tfor (i = 0; i < reps; i++)\n> \t{\n> \t\t// Fill buffer with random data\n> \t\tfor (j = 0; j < bufferReps; j++)\n> \t\t\tbuffer[j] = random();\n> \n> \t\t// Write\n> \t\twritten = fwrite(buffer, sizeof(long long), bufferReps, testFile);\n> \t\tif (written != bufferReps)\n> \t\t\thandleError(\"createTestFile()\",\"fwrite\");\n> \t}\n> \n> \t// Flush and close\n> \tif (fflush(testFile))\n> \t\thandleError(\"createTestFile()\",\"fflush\");\n> \tif (fclose(testFile))\n> \t\thandleError(\"createTestFile()\",\"fclose\");\n> \n> \t// Free buffer\n> \tfree(buffer);\n> }\n> \n> /**\n> * handleError()\n> */\n> void handleError(char *functionName, char *activity)\n> {\n> \tfprintf(stderr, \"Error in %s while attempting %s. 
Error %d (%s)\\n\", functionName, activity, errno, strerror(errno));\n> \texit(1);\n> }\n> \n> /**\n> * main()\n> */\n> int main(int argc, char *argv[])\n> {\n> \t// Print usage and quit if argument count is definitely incorrect\n> \tif (argc < 3)\n> \t{\n> \t\t// Definitely wrong\n> \t\tprintUsageAndQuit();\n> \t}\n> \telse\n> \t{\n> \t\t// Dispatch\n> \t\tif (!strcmp(argv[1], \"create\"))\n> \t\t{\n> \t\t\tif (argc != 4)\n> \t\t\t\tprintUsageAndQuit();\n> \n> \t\t\t// Create the test file of the specified size\n> \t\t\tcreateTestFile(argv[2], atol(argv[3]));\n> \t\t}\n> \t\telse if (!strcmp(argv[1], \"seqtest\"))\n> \t\t{\n> \t\t\tif (argc != 3)\n> \t\t\t\tprintUsageAndQuit();\n> \n> \t\t\t// Test performance of sequential reads\n> \t\t\ttestSeq(argv[2]);\n> \t\t}\n> \t\telse if (!strcmp(argv[1], \"rndtest\"))\n> \t\t{\n> \t\t\tif (argc != 4)\n> \t\t\t\tprintUsageAndQuit();\n> \n> \t\t\t// Test performance of random reads\n> \t\t\ttestRandom(argv[2], atol(argv[3]));\n> \t\t}\n> \t\telse\n> \t\t{\n> \t\t\t// Unknown command\n> \t\t\tprintUsageAndQuit();\n> \t\t}\n> \t}\n> \n> \treturn 0;\n> }\n> \n> /**\n> * printUsageAndQuit()\n> */\n> void printUsageAndQuit()\n> {\n> \tputs(\"USAGE: rndpgcst [create <file> <size_in_mb>] | [seqtest <file>] | [rndtest <file> <read_in_mb>]\");\n> \n> \texit(1);\n> }\n> \n> /**\n> * testSeq()\n> */\n> void testSeq(char *testFileName)\n> {\n> \tFILE *testFile;\n> \tchar *buffer;\n> \tlong long reps, totalRead, thisRead, timeTaken;\n> \tstruct timeval startTime, endTime;\n> \tstruct timezone timezoneDiscard;\n> \n> \t// Indicate op\n> \tprintf(\"Sequential read test of %s\\n\",testFileName);\n> \n> \t// Grab a buffer\n> \tbuffer = malloc(BLOCK_SIZE);\n> \n> \t// Open the file for reading\n> \tif (!(testFile = fopen(testFileName, \"rb\")))\n> \t\thandleError(\"testSeq()\",\"fopen\");\n> \n> \t// Start timer\n> \tif (gettimeofday(&startTime, &timezoneDiscard) == -1)\n> \t\thandleError(\"testSeq()\", \"gettimeofday start\");\n> \n> \t// 
Read all data from file\n> \ttotalRead = 0;\n> \twhile ((thisRead = fread(buffer, 1, BLOCK_SIZE, testFile)) != 0)\n> \t\ttotalRead += thisRead;\n> \n> \t// End timer\n> \tif (gettimeofday(&endTime, &timezoneDiscard) == -1)\n> \t\thandleError(\"testSeq()\", \"gettimeofday start\");\n> \n> \t// Close\n> \tif (fclose(testFile))\n> \t\thandleError(\"testSeq()\",\"fclose\");\n> \n> \t// Free the buffer\n> \tfree(buffer);\n> \n> \t// Display time taken\n> \ttimeTaken = (endTime.tv_sec - startTime.tv_sec) * 1000000;\n> \ttimeTaken += (endTime.tv_usec - startTime.tv_usec);\n> \tprintf(\"%lld bytes read in %f seconds\\n\", totalRead, (double) timeTaken / (double) 1000000);\n> }\n> \n> /**\n> * testRandom()\n> */\n> void testRandom(char *testFileName, long long amountToRead)\n> {\n> \tFILE *testFile;\n> \tlong long reps, i, fileSize, timeTaken, totalRead, readPos, thisRead, offsetMax;\n> \tstruct stat fileStats;\n> \tchar *buffer;\n> \tstruct timeval startTime, endTime;\n> \tstruct timezone timezoneDiscard;\n> \n> \t// Indicate op\n> \tprintf(\"Random read test of %s for %lld mb\\n\", testFileName, amountToRead);\n> \n> \t// Initialise the random number generator\n> \tsrandom(time(NULL));\n> \n> \t// Adjust amount to read\n> \tamountToRead *= (1024*1024);\n> \n> \t// Determine file size\n> \tif (stat(testFileName, &fileStats) == -1)\n> \t\thandleError(\"testRandom()\", \"stat\");\n> \tfileSize = fileStats.st_size;\n> \n> \t// Grab a buffer\n> \tbuffer = malloc(BLOCK_SIZE);\n> \n> \t// Open the file for reading\n> \tif (!(testFile = fopen(testFileName, \"rb\")))\n> \t\thandleError(\"testRandom()\",\"fopen\");\n> \n> \t// Start timer\n> \tif (gettimeofday(&startTime, &timezoneDiscard) == -1)\n> \t\thandleError(\"testRandom()\", \"gettimeofday start\");\n> \n> \t// Read data from file\n> \treps \t\t= amountToRead / BLOCK_SIZE;\n> \toffsetMax\t= fileSize / BLOCK_SIZE;\n> \tfor (i = 0; i < reps; i++)\n> \t{\n> \t\t// Determine read position\n> \t\treadPos = (random() % offsetMax) 
* BLOCK_SIZE;\n> \n> \t\t// Seek and read\n> \t\tif (fseek(testFile, readPos, SEEK_SET) == -1)\n> \t\t\thandleError(\"testRandom()\",\"fseek\");\n> \t\tif ((thisRead = fread(buffer, 1, BLOCK_SIZE, testFile)) != BLOCK_SIZE)\n> \t\t\thandleError(\"testRandom()\",\"fread\");\n> \t}\n> \n> \t// End timer\n> \tif (gettimeofday(&endTime, &timezoneDiscard) == -1)\n> \t\thandleError(\"testRandom()\", \"gettimeofday start\");\n> \n> \t// Close\n> \tif (fclose(testFile))\n> \t\thandleError(\"testRandom()\",\"fclose\");\n> \n> \t// Free the buffer\n> \tfree(buffer);\n> \n> \t// Display time taken\n> \ttimeTaken = (endTime.tv_sec - startTime.tv_sec) * 1000000;\n> \ttimeTaken += (endTime.tv_usec - startTime.tv_usec);\n> \tprintf(\"%lld bytes read in %f seconds\\n\", amountToRead, (double) timeTaken / (double) 1000000);\n> }\n\n\n", "msg_date": "18 Apr 2002 11:51:38 +1000", "msg_from": "Mark Pritchard <mark.pritchard@tangent.net.au>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "> Would it make sense to flatten out INNER JOINs only when the total\n> number of tables involved is less than some parameter N? N\n> around six or eight would probably keep the complex-query crowd\n> happy, while not causing unintuitive behavior for simple queries.\n> Anybody who really likes the current behavior could set N=1 to force\n> the system to obey his join order.\n\nI'd like to see the \"reorder, or not to reorder\" to happen as a settable\nparameter, *not* as a side effect of choosing a particular\nshould-be-equivalent syntax for a query.\n\nIf that were exposed, then folks could have additional control over the\noptimizer no matter what syntax they prefer to use. 
And in fact could\nalter the behavior without having to completely rewrite their query.\n\nOne could also think about a threshold mechanism as you mention above,\nbut istm that allowing explicit control over reordering (fundamentally\ndifferent than, say, control over whether particular kinds of scans are\nused) is the best first step. Not solely continuing to hide that control\nbehind heuristics involving query style and numbers of tables.\n\n - Thomas\n", "msg_date": "Wed, 17 Apr 2002 19:06:45 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Thomas Lockhart wrote:\n> \n<snip> \n> If that were exposed, then folks could have additional control over the\n> optimizer no matter what syntax they prefer to use. And in fact could\n> alter the behavior without having to completely rewrite their query.\n> \n> One could also think about a threshold mechanism as you mention above,\n> but istm that allowing explicit control over reordering (fundamentally\n> different than, say, control over whether particular kinds of scans are\n> used) is the best first step. Not solely continuing to hide that control\n> behind heuristics involving query style and numbers of tables.\n\nA la Oracle... here we come....\n\n:-/\n\nIf we go down this track, although it would be beneficial in the short\nterm, is it the best long term approach?\n\nI'm of a belief that *eventually* we really can take enough of the\nvariables into consideration for planning the best query every time. I\ndidn't say it was gunna be soon, nor easy though.\n\n+ Justin\n\n \n> - Thomas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 18 Apr 2002 12:30:54 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "...\n> I'm of a belief that *eventually* we really can take enough of the\n> variables into consideration for planning the best query every time. I\n> didn't say it was gunna be soon, nor easy though.\n\nI agree. But I'd like to eliminate the optimizer variability which\ndepends solely on the syntactical differences between traditional and\n\"join syntax\" inner join queries. If the reason for these differences\nare to allow explicit control over join order, let's get another\nmechanism for doing that.\n\n - Thomas\n", "msg_date": "Wed, 17 Apr 2002 19:38:09 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Thomas Lockhart wrote:\n> \n> ...\n> > I'm of a belief that *eventually* we really can take enough of the\n> > variables into consideration for planning the best query every time. I\n> > didn't say it was gunna be soon, nor easy though.\n> \n> I agree. But I'd like to eliminate the optimizer variability which\n> depends solely on the syntactical differences between traditional and\n> \"join syntax\" inner join queries. If the reason for these differences\n> are to allow explicit control over join order, let's get another\n> mechanism for doing that.\n\nOk. I see what you mean now.\n\nThat makes more sense. :)\n\n+ Justin\n\n\n> - Thomas\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 18 Apr 2002 12:39:03 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> For instance: say we have two similarly performing plans, close to one another,\n> say within 20%, one plan uses an index, and one does not. It is unlikely that\n> the index plan will perform substantially worse than the non-index plan, right?\n\nThis seems to be the crux of the argument ... but I've really seen no\nevidence to suggest that it's true. The downside of improperly picking\nan indexscan plan is *not* any less than the downside of improperly\npicking a seqscan plan, in my experience.\n\nIt does seem (per Thomas' earlier observation) that we get more\ncomplaints about failure to use an index scan than the other case.\nPrior to 7.2 it was usually pretty obviously traceable to overestimates\nof the number of rows to be retrieved (due to inadequate data\nstatistics). In 7.2 that doesn't seem to be the bottleneck anymore.\nI think now that there may be some shortcoming in the planner's cost\nmodel or in the adjustable parameters for same. But my reaction\nto that is to try to figure out how to fix the cost model. 
I certainly\ndo not feel that we've reached a dead end in which the only answer is\nto give up and stop trusting the cost-based optimization approach.\n\n> Now, given the choice of the two strategies on a table, both pretty close to\n> one another, the risk of poor performance for using the index scan is minimal\n> based on the statistics, but the risk of poor performance for using the\n> sequential scan is quite high on a large table.\n\nYou keep asserting that, and you keep providing no evidence.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Apr 2002 00:01:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > For instance: say we have two similarly performing plans, close to one another,\n> > say within 20%, one plan uses an index, and one does not. It is unlikely that\n> > the index plan will perform substantially worse than the non-index plan, right?\n> \n> This seems to be the crux of the argument ... but I've really seen no\n> evidence to suggest that it's true. The downside of improperly picking\n> an indexscan plan is *not* any less than the downside of improperly\n> picking a seqscan plan, in my experience.\n\nOur experiences differ. I have fought with PostgreSQL on a number of occasions\nwhen it would not use an index. Inevitably, I would have to set \"enable_seqscan\n= false.\" I don't like doing that because it forces the use of an index when it\ndoesn't make sense.\n\nI don't think we will agree, we have seen different behaviors, and our\nexperiences seem to conflict. 
This however does not mean that either of us is\nin error, it just may mean that we use data with very different\ncharacteristics.\n\nThis thread is kind of frustrating for me because over the last couple years I\nhave seen this problem many times and the answer is always the same, \"The\nstatistics need to be improved.\" Tom, you and I have gone back and forth about\nthis more than once.\n\nI submit to you that the statistics will probably *never* be right. They will\nalways need improvement here and there. Perhaps instead of fighting over an\nalgorithmic solution, and forcing the users to work around problems with\nchoosing an index, should we not just allow the developer to place hints in the\nSQL, as:\n\nselect /*+ INDEX(a_id, b_id) */ * from a, b where a.id = b.id;\n\nThat way if there is a performance issue with using or not using an index, the\ndeveloper can have better control over the evaluation of the query.\n", "msg_date": "Thu, 18 Apr 2002 00:32:59 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> should we not just allow the developer to place hints in the\n> SQL, as:\n\n> select /*+ INDEX(a_id, b_id) */ * from a, b where a.id = b.id;\n\n<<itch>> People have suggested that sort of thing from time to time,\nbut I have a couple of problems with it:\n\n1. It's unobvious how to tag the source in a way that is helpful\nfor any but the most trivial queries. Moreover, reasonable sorts\nof tags would provide only partial specification of the exact\nquery plan, which is a recipe for trouble --- an upgraded optimizer\nmight make different choices, leading to a pessimized plan if some\npoints are pinned down when others aren't.\n\n2. The tag approach presumes that the query programmer is smarter\nthan the planner. 
This might be true under ideal circumstances,\nbut I have a hard time crediting that the planner looking at today's\nstats is dumber than the junior programmer who left two years ago,\nand no one's updated his query since then. The planner may not be\nvery bright, but it doesn't get bored, tired, or sick, nor move on\nto the next opportunity. It will pick the best plan it can on the\nbasis of current statistics and the specific values appearing in\nthe given query. Every time. A tag-forced query plan doesn't\nhave that adaptability.\n\nBy and large this argument reminds me of the \"compiler versus hand-\nprogrammed assembler\" argument. Which was pretty much a dead issue\nwhen I was an undergrad, more years ago than I care to admit in a\npublic forum. Yes, a competent programmer who's willing to work\nhard can out-code a compiler over small stretches of code. But no\none tries to write large systems in assembler anymore. Hand-tuned\nSQL is up against that same John-Henry-vs-the-steam-hammer logic.\nMaybe the current PG optimizer isn't quite in the steam hammer\nleague yet, but it will get there someday. I'm more interested\nin revving up the optimizer than in betting on John Henry.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Apr 2002 01:00:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": ">\n> Look at the pgbench utility. 
I can't run that program without a +- 10%\n> variation from run to run, no mater how many times I run vacuum and\ncheckpoint.\n>\n\nIt's pgbench's fault, TPC-B was replaced with TPC-C because it is not\naccurate enough, we run a pseudo TPC-H and it has almost no variations from\none run to another.\n\nRegards\n\n", "msg_date": "Thu, 18 Apr 2002 09:16:34 +0200", "msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Wed, 2002-04-17 at 19:43, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > OTOH, it is also important where the file is on disk. As seen from disk\n> > speed test graphs on http://www.tomshardware.com , the speed difference\n> > of sequential reads is 1.5 to 2.5 between inner and outer tracks. \n> \n> True. But if we use the same test file for both the sequential and\n> random-access timings, hopefully the absolute speed of access will\n> cancel out. (Again, it's the sort of thing that could use some\n> real-world testing...)\n\nNot so sure about that. Random access basically measures latency,\nsequential access measures transfer speed. I'd argue that latency is\nmore or less constant across the disk as it depends on head movement and\nthe spindle turning.\n\ncheers\n-- vbi", "msg_date": "18 Apr 2002 09:37:46 +0200", "msg_from": "Adrian 'Dagurashibanipal' von Bidder <avbidder@fortytwo.ch>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Tom Lane wrote:\n\n> By and large this argument reminds me of the \"compiler versus hand-\n> programmed assembler\" argument. Which was pretty much a dead issue\n> when I was an undergrad, more years ago than I care to admit in a\n> public forum. Yes, a competent programmer who's willing to work\n> hard can out-code a compiler over small stretches of code. 
But no\n> one tries to write large systems in assembler anymore. Hand-tuned\n> SQL is up against that same John-Henry-vs-the-steam-hammer logic.\n> Maybe the current PG optimizer isn't quite in the steam hammer\n> league yet, but it will get there someday. I'm more interested\n> in revving up the optimizer than in betting on John Henry.\n\nI am not suggesting that anyone is going to write each and every query with\nhints, but a few select queries, yes, people will want to hand tune them.\n\nYou are right no one uses assembler to create big systems, but big systems\noften have spot optimizations in assembler. Even PostgreSQL has assembler in\nit. \n\nNo generic solution can be perfect for every specific application. There will\nalways be times when hand tuning a query will produce better results, and\nsometimes that will make the difference between using PostgreSQL or use\nsomething else.\n\nFor the two years I have been subscribed to this list, this is a fairly\nconstant problem, and the answer is always the same, in effect, \"we're working\non it.\" If PostgreSQL had the ability to accept hints, one could say, \"We are\nalways working to improve it, but in your case you may want to give the\noptimizer a hint as to what you expect it to do.\"\n\nIt may not be the \"best\" solution in your mind, but speaking as a long time\nuser of PostgreSQL, it would be a huge help to me, and I'm sure I am not alone.\n", "msg_date": "Thu, 18 Apr 2002 07:52:35 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "mlw wrote:\n> I don't think we will agree, we have seen different behaviors, and our\n> experiences seem to conflict. 
This however does not mean that either of us is\n> in error, it just may mean that we use data with very different\n> characteristics.\n> \n> This thread is kind of frustrating for me because over the last couple years I\n> have seen this problem many times and the answer is always the same, \"The\n> statistics need to be improved.\" Tom, you and I have gone back and forth about\n> this more than once.\n> \n\nHave you tried reducing 'random_page_cost' in postgresql.conf. That\nshould solve most of your problems if you would like more index scans.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Apr 2002 09:35:15 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Tom Lane wrote:\n> 2. The tag approach presumes that the query programmer is smarter\n> than the planner. This might be true under ideal circumstances,\n> but I have a hard time crediting that the planner looking at today's\n> stats is dumber than the junior programmer who left two years ago,\n> and no one's updated his query since then. The planner may not be\n> very bright, but it doesn't get bored, tired, or sick, nor move on\n> to the next opportunity. It will pick the best plan it can on the\n> basis of current statistics and the specific values appearing in\n> the given query. Every time. 
A tag-forced query plan doesn't\n> have that adaptability.\n\nAdd to this that hand tuning would happem mostly queries where the two\ncost estimates are fairly close, and add the variability of a multi-user\nenvironment, a hard-coded plan may turn out to be faster only some of\nthe time, and could change very quickly into something longer if the\ntable changes.\n\nMy point is that very close cases are the ones most likely to change\nover time.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Apr 2002 09:38:42 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Adrian 'Dagurashibanipal' von Bidder wrote:\n> \n> On Wed, 2002-04-17 at 19:43, Tom Lane wrote:\n> > Hannu Krosing <hannu@tm.ee> writes:\n> > > OTOH, it is also important where the file is on disk. As seen from disk\n> > > speed test graphs on http://www.tomshardware.com , the speed difference\n> > > of sequential reads is 1.5 to 2.5 between inner and outer tracks.\n> >\n> > True. But if we use the same test file for both the sequential and\n> > random-access timings, hopefully the absolute speed of access will\n> > cancel out. (Again, it's the sort of thing that could use some\n> > real-world testing...)\n> \n> Not so sure about that. Random access basically measures latency,\n> sequential access measures transfer speed. I'd argue that latency is\n> more or less constant across the disk as it depends on head movement and\n> the spindle turning.\n\nThe days when \"head movement\" is relevant are long over. Not a single drive\nsold today, or in the last 5 years, is a simple spindle/head system. Many are\nRLE encoded, some have RAID features across the various platters inside the\ndrive. 
Many have dynamic remapping of sectors, all of them have internal\ncaching, some of them have predictive read ahead, some even use compression. \n\nThe assumption that sequentially reading a file from a modern disk drive means\nthat the head will move less often is largely bogus. Now, factor in a full RAID\nsystem where you have 8 of these disks. Random access of a drive may be slower\nthan sequential access, but this has less to do with the drive, and more to do\nwith OS level caching and I/O channel hardware.\n\nFactor in a busy multitasking system, you have no way to really predict the\nstate of a drive from one read to the next.\n\n(Rotational speed of the drive is still important in that it affects internal\nrotational alignment and data transfer.)\n", "msg_date": "Thu, 18 Apr 2002 10:06:30 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Finally someone writes down whats been itching at my brain for a while.\n\nIn a multi-tasking system it's always cheaper to fetch less blocks, no \nmatter where they are. Because, as you said, it will end up more or \nless random onf a system experiencing a larger number of queries.\n\nmlw wrote:\n\n>Bruce Momjian wrote:\n>\n>>My second point, that index scan is more risky than sequential scan, is\n>>outlined above. A sequential scan reads each page once, and uses the\n>>file system read-ahead code to prefetch the disk buffers. Index scans\n>>are random, and could easily re-read disk pages to plow through a\n>>significant portion of the table, and because the reads are random,\n>>the file system will not prefetch the rows so the index scan will have\n>>to wait for each non-cache-resident row to come in from disk.\n>>\n>\n>It took a bike ride to think about this one. The supposed advantage of a\n>sequential read over an random read, in an active multitasking system, is a\n>myth. 
If you are executing one query and the system is doing only that query,\n>you may be right.\n>\n>Execute a number of queries at the same time, the expected benefit of a\n>sequential scan goes out the window. The OS will be fetching blocks, more or\n>less, at random.\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n>http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n\n", "msg_date": "Thu, 18 Apr 2002 07:35:50 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Somethings wrong with the random numbers from the sun... re-run them, \nthat first sample is insane.... Caching looks like it's affecctign your \nresults alot...\n\nMark Pritchard wrote:\n\n>I threw together the attached program (compiles fine with gcc 2.95.2 on\n>Solaris 2.6 and egcs-2.91.66 on RedHat Linux 6.2) and ran it a few\n>times. Data is below. 
Usual disclaimers about hastily written code etc\n>:)\n>\n>Machine = ghoul (generic intel, 384mb ram, dual p3-800, ide disk running\n>dma)\n>\n>Sequential\n>Bytes Read\tTime\tBytes / Sec\n>536870912 27.14 19783933.74\n>536870912 27.14 19783990.60\n>536870912 27.11 19801872.14\n>536870912 26.92 19942928.41\n>536870912 27.31 19657408.43\n> 19794026.66 (avg)\n>\n>Random\t\t\n>Bytes Read\tTime\tBytes / Sec\n>1073741824 519.57 2066589.21\n>1073741824 517.78 2073751.44\n>1073741824 516.92 2077193.23\n>1073741824 513.18 2092333.29\n>1073741824 510.68 2102579.88\n> 2082489.41 (avg)\n>\n>Machine = jedi (Sun E420, 3gb ram, dual 400s, test on single scsi disk)\n>\n>Sequential\t\t\n>Bytes Read\tTime\tBytes / Sec\n>2097152000 65.19 32167675.28\n>2097152000 65.22 32154114.65\n>2097152000 65.16 32182561.99\n>2097152000 65.12 32206105.12\n>2097152000 64.67 32429463.26\n> 32227984.06 (avg)\n>\n>Random\t\t\n>Bytes Read\tTime\tBytes / Sec\n>4194304000 1522.22 2755394.79\n>4194304000 278.18 15077622.05\n>4194304000 91.43 45874730.07\n>4194304000 61.43 68273795.19\n>4194304000 54.55 76890231.51\n> 41774354.72\n>\n>If I interpret Tom's \"divide\" instruction correctly, is that a factor of\n>10 on the linux box?\n>\n\n\n", "msg_date": "Thu, 18 Apr 2002 07:43:21 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> mlw wrote:\n> > I don't think we will agree, we have seen different behaviors, and our\n> > experiences seem to conflict. 
This however does not mean that either of us is\n> > in error, it just may mean that we use data with very different\n> > characteristics.\n> >\n> > This thread is kind of frustrating for me because over the last couple years I\n> > have seen this problem many times and the answer is always the same, \"The\n> > statistics need to be improved.\" Tom, you and I have gone back and forth about\n> > this more than once.\n> >\n> \n> Have you tried reducing 'random_page_cost' in postgresql.conf. That\n> should solve most of your problems if you would like more index scans.\n\nMy random page cost is 1 :-)\n\nI had a database where I had to have \"enable_seqscan=false\" in the config file.\nThe nature of the data always makes the statistics bogus, and it always refused\nto use the index. \n\nIt is frustrating because sometimes it *is* a problem for some unknown number\nof users (including myself), as evidenced by the perennial \"why isn't postgres\nusing my index\" posts, and for the last two years you guys keep saying it isn't\na problem, or that the statistics just need improvement. Sorry for my tone, but\nI have pulled out my hair numerous times on this very problem.\n\nThis whole process has led me to change my mind. I don't think adding weight\nto an index scan is the answer, I think having the ability to submit hints to\nthe planner is the only way to really address this or any future issues.\n\nJust so you understand my perspective, I am not thinking of the average web\nmonkey. I am thinking of the expert DBA or architect who wants to deploy a\nsystem and needs to have real control over performance in critical areas.\n\nMy one most important experience (I've had more than one) with this whole topic\nis DMN's music database, when PostgreSQL uses the index, the query executes in\na fraction of a second. When \"enable_seqscan=true\" PostgreSQL refuses to use\nthe index, and the query takes about a minute. 
No matter how much I analyze,\nI have to disable sequential scan for the system to work correctly. \n\ncdinfo=# set enable_seqscan=false ;\nSET VARIABLE\ncdinfo=# explain select * from ztitles, zsong where ztitles.muzenbr =\nzsong.muzenbr and ztitles.artistid = 100 ;\nNOTICE: QUERY PLAN:\n\nMerge Join (cost=3134.95..242643.42 rows=32426 width=356)\n -> Sort (cost=3134.95..3134.95 rows=3532 width=304)\n -> Index Scan using ztitles_artistid on ztitles (cost=0.00..3126.62\nrows=3532 width=304)\n -> Index Scan using zsong_muzenbr on zsong (cost=0.00..237787.51\nrows=4298882 width=52)\n\nEXPLAIN\ncdinfo=# set enable_seqscan=true ;\nSET VARIABLE\ncdinfo=# explain select * from ztitles, zsong where ztitles.muzenbr =\nzsong.muzenbr and ztitles.artistid = 100 ;\nNOTICE: QUERY PLAN:\n\nHash Join (cost=3126.97..61889.37 rows=32426 width=356)\n -> Seq Scan on zsong (cost=0.00..52312.66 rows=4298882 width=52)\n -> Hash (cost=3126.62..3126.62 rows=3532 width=304)\n -> Index Scan using ztitles_artistid on ztitles (cost=0.00..3126.62\nrows=3532 width=304)\n\nEXPLAIN\ncdinfo=# select count(*) from zsong ;\n count\n---------\n 4298882\n(1 row)\n", "msg_date": "Thu, 18 Apr 2002 10:48:34 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Michael Loftis <mloftis@wgops.com> writes:\n> Somethings wrong with the random numbers from the sun... re-run them, \n> that first sample is insane.... Caching looks like it's affecctign your \n> results alot...\n\nYeah; it looks like the test case is not large enough to swamp out\ncaching effects on the Sun box. 
It is on the Linux box, evidently,\nsince the 10:1 ratio appears very repeatable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Apr 2002 10:56:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": "...\n> My one most important experience (I've had more than one) with this whole topic\n> is DMN's music database, when PostgreSQL uses the index, the query executes in\n> a fraction of a second. When \"enable_seqscan=true\" PostgreSQL refuses to use\n> the index, and the query takes a about a minute. No matter how much I analyze,\n> I have to disable sequential scan for the system to work correctly.\n\nHow about contributing the data and a query? We've all got things that\nwe would like to change or adjust in the PostgreSQL feature set. If you\ncan't contribute code, how about organizing some choice datasets for\ntesting purposes? If the accumulated set is too big for postgresql.org\n(probably not, but...) I can host them on my machine.\n\nMost folks seem to not have to manipulate the optimizer to get good\nresults nowadays. So to make more progress we need to have test cases...\n\n - Thomas\n", "msg_date": "Thu, 18 Apr 2002 08:16:52 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "mlw wrote:\n> Bruce Momjian wrote:\n> > \n> > mlw wrote:\n> > > I don't think we will agree, we have seen different behaviors, and our\n> > > experiences seem to conflict. 
This however does not mean that either of us is\n> > > in error, it just may mean that we use data with very different\n> > > characteristics.\n> > >\n> > > This thread is kind of frustrating for me because over the last couple years I\n> > > have seen this problem many times and the answer is always the same, \"The\n> > > statistics need to be improved.\" Tom, you and I have gone back and forth about\n> > > this more than once.\n> > >\n> > \n> > Have you tried reducing 'random_page_cost' in postgresql.conf. That\n> > should solve most of your problems if you would like more index scans.\n> \n> My random page cost is 1 :-)\n\nHave you tried < 1. Seems that may work well for your case.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Apr 2002 11:17:14 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> My one most important experience (I've had more than one) with this\n> whole topic is DMN's music database, when PostgreSQL uses the index,\n> the query executes in a fraction of a second. When\n> \"enable_seqscan=true\" PostgreSQL refuses to use the index, and the\n> query takes a about a minute. No matter how much I analyze, I have to\n> disable sequential scan for the system to work correctly.\n\nIt would be useful to see \"explain analyze\" not just \"explain\" for these\ncases. 
Also, what stats does pg_stats show for the variables used?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Apr 2002 11:43:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": "Numbers being run on a BSD box now...\n\nFreeBSD 4.3-p27 512MB RAM 2xPiii600 Xeon ona 4 disk RAID 5 ARRAY on a \ndedicated ICP Vortex card. Sorry no single drives on this box, I have \nan outboard Silicon Gear Mercury on a motherboard based Adaptec \ncontroller I can test as well. I'll post when the tests on the Vortex \nare done. I'm using 2Gb files ATM, I'll look at the code and see if it \ncan be made to work with large files. Atleast for FreeBSD the change \nwill be mostly taking doing s/fseek/fseeko/g s/size_t/off_t/g or \nsomething similar. FreeBSD seems ot prefer teh Open Unix standard in \nthis regard...\n\nThis will make it usable for much larger test files.\n\nTom Lane wrote:\n\n>Michael Loftis <mloftis@wgops.com> writes:\n>\n>>Somethings wrong with the random numbers from the sun... re-run them, \n>>that first sample is insane.... Caching looks like it's affecctign your \n>>results alot...\n>>\n>\n>Yeah; it looks like the test case is not large enough to swamp out\n>caching effects on the Sun box. It is on the Linux box, evidently,\n>since the 10:1 ratio appears very repeatable.\n>\n>\t\t\tregards, tom lane\n>\n\n\n", "msg_date": "Thu, 18 Apr 2002 09:10:37 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Indeed - I had a delayed post (sent from the wrong email address) which\nmentioned that the cache is obviously at play here. 
I still find it\namazing that the file system would cache 2gb :) The numbers are\ndefinitely correct though...they are actually the second set.\n\nI'm running a test with a larger file size to remove the cache effects\n(having realised that ulimit is the biz). Will post again shortly.\n\nTom - have you had a chance to look at the test prg I wrote? Is it\nworking as desired?\n\nCheers,\n\nMark\n\nOn Fri, 2002-04-19 at 00:56, Tom Lane wrote:\n> Michael Loftis <mloftis@wgops.com> writes:\n> > Somethings wrong with the random numbers from the sun... re-run them, \n> > that first sample is insane.... Caching looks like it's affecctign your \n> > results alot...\n> \n> Yeah; it looks like the test case is not large enough to swamp out\n> caching effects on the Sun box. It is on the Linux box, evidently,\n> since the 10:1 ratio appears very repeatable.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n\n\n\n", "msg_date": "19 Apr 2002 07:41:47 +1000", "msg_from": "Mark Pritchard <mark@tangent.net.au>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Got some numbers now... You'll notice the Random reads are *really* \nslow. The reason for this is the particular read sizes that are being \nused are the absolute worst-case for my particular configuration. 
(with \na 32kb or 64kb block size I generally achieve much higher performance \neven on random I/O) Sequential I/O is most likely being limited at least \nin part by the CPU power available...\n\nSequential tests:\n\n2147483648 bytes read in 39.716158 seconds 54070780.16 bytes/sec\n2147483648 bytes read in 37.836187 seconds 56757401.27 bytes/sec\n2147483648 bytes read in 38.081452 seconds 56391853.13 bytes/sec\n2147483648 bytes read in 38.122105 seconds 56331717.46 bytes/sec\n2147483648 bytes read in 38.303999 seconds 56064215.33 bytes/sec\n\nTotal: 192.059901 seconds 279615967.4 (mumble)\nAve: 38.4119802 seconds 55923193.47 bytes/sec\n\nRandom tests:\n\n2147483648 bytes read in 1744.002332 seconds 1231353.656 bytes/sec\n2147483648 bytes read in 1744.797705 seconds 1230792.339 bytes/sec\n2147483648 bytes read in 1741.577362 seconds 1233068.191 bytes/sec\n2147483648 bytes read in 1741.497690 seconds 1233124.603 bytes/sec\n2147483648 bytes read in 1739.773354 seconds 1234346.786 bytes/sec\n\nTotal: 8711.648443 seconds 6162685.575\nAve: 1742.329689 seconds 1232537.115 bytes/sec\n\nSo on this machine at that block I/O level (8kb block I believe it was) \nI have a ~55MB/sec Sequential Read rate and ~1.2MB/sec Random Read rate. \n Like I said though I'm fairly certain the random read rates were worst \ncase because of the particular block size in the configuration this \nsystem uses. But I feel that the results are respectable and valid \nnonetheless.\n\nNote how the random reads kept getting better... The ICP and drive \ncaching firmware were starting to 'catch on' that this 2gb file was a \nhot spot so were preferring to cache things a little longer and \npre-fetch in a different order than normal. 
I estimate that it would \nhave dropped as low as 1700 if allowed to keep going.\n\n\n\n\nRAW output from my script...\n\n\nmloftis@free:/mnt/rz01/ml01/rndtst$ sh PAGECOST2GB.sh\nCREATING FILE\nThu Apr 18 09:11:55 PDT 2002\nCreating test file 2gb.test of 2048 mb\n 176.23 real 22.75 user 34.72 sys\nBEGINNING SEQUENTIAL TESTS\nThu Apr 18 09:14:51 PDT 2002\nSequential read test of 2gb.test\n2147483648 bytes read in 39.716158 seconds\n 39.73 real 1.52 user 23.87 sys\nSequential read test of 2gb.test\n2147483648 bytes read in 37.836187 seconds\n 37.83 real 1.44 user 23.68 sys\nSequential read test of 2gb.test\n2147483648 bytes read in 38.081452 seconds\n 38.08 real 1.62 user 23.51 sys\nSequential read test of 2gb.test\n2147483648 bytes read in 38.122105 seconds\n 38.12 real 1.63 user 23.50 sys\nSequential read test of 2gb.test\n2147483648 bytes read in 38.303999 seconds\n 38.30 real 1.32 user 23.83 sys\nThu Apr 18 09:18:03 PDT 2002\nBEGINNING RANDOM READ TESTS\nRandom read test of 2gb.test for 2048 mb\n2147483648 bytes read in 1744.002332 seconds\n 1744.01 real 4.33 user 36.47 sys\nRandom read test of 2gb.test for 2048 mb\n2147483648 bytes read in 1744.797705 seconds\n 1744.81 real 4.38 user 36.56 sys\nRandom read test of 2gb.test for 2048 mb\n2147483648 bytes read in 1741.577362 seconds\n 1741.58 real 4.58 user 36.18 sys\nRandom read test of 2gb.test for 2048 mb\n2147483648 bytes read in 1741.497690 seconds\n 1741.50 real 4.17 user 36.57 sys\nRandom read test of 2gb.test for 2048 mb\n2147483648 bytes read in 1739.773354 seconds\n 1739.78 real 4.41 user 36.36 sys\nTESTS COMPLETED\nThu Apr 18 11:43:15 PDT 2002\n\n\n\n\n\n\nMichael Loftis wrote:\n\n> Numbers being run on a BSD box now...\n>\n> FreeBSD 4.3-p27 512MB RAM 2xPiii600 Xeon ona 4 disk RAID 5 ARRAY on a \n> dedicated ICP Vortex card. Sorry no single drives on this box, I have \n> an outboard Silicon Gear Mercury on a motherboard based Adaptec \n> controller I can test as well. 
I'll post when the tests on the Vortex \n> are done. I'm using 2Gb files ATM, I'll look at the code and see if \n> it can be made to work with large files. Atleast for FreeBSD the \n> change will be mostly taking doing s/fseek/fseeko/g s/size_t/off_t/g \n> or something similar. FreeBSD seems ot prefer teh Open Unix standard \n> in this regard...\n>\n> This will make it usable for much larger test files.\n>\n> Tom Lane wrote:\n>\n>> Michael Loftis <mloftis@wgops.com> writes:\n>>\n>>> Somethings wrong with the random numbers from the sun... re-run \n>>> them, that first sample is insane.... Caching looks like it's \n>>> affecctign your results alot...\n>>>\n>>\n>> Yeah; it looks like the test case is not large enough to swamp out\n>> caching effects on the Sun box. It is on the Linux box, evidently,\n>> since the 10:1 ratio appears very repeatable.\n>>\n>> regards, tom lane\n>>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n\n\n", "msg_date": "Thu, 18 Apr 2002 17:18:31 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Thu, 18 Apr 2002, mlw wrote:\n\n> The days when \"head movement\" is relevant are long over. Not a single drive\n> sold today, or in the last 5 years, is a simple spindle/head system. ....\n> The assumption that sequentially reading a file from a modern disk drive means\n> that the head will move less often is largely bogus.\n\nWell, oddly enough, even with the head moving just as often, sequential\nI/O has always been much faster than random I/O on every drive I've\nowned in the past five years. So I guess I/O speed doesn't have a lot\nto do with head movement or something.\n\nSome of my drives have started to \"chatter\" quite noisily during random\nI/O, too. 
I thought that this was due to the head movement, but I guess\nnot, since they're quite silent during sequential I/O.\n\nBTW, what sort of benchmarking did you do to determine that the\nhead movement is similar during random and sequential I/O on drives\nin the last five years or so?\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Fri, 19 Apr 2002 15:41:52 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Thu, 18 Apr 2002, Michael Loftis wrote:\n\n> mlw wrote:\n>\n> >The supposed advantage of a sequential read over an random read, in\n> >an active multitasking system, is a myth. If you are executing one\n> >query and the system is doing only that query, you may be right.\n> >\n> >Execute a number of queries at the same time, the expected benefit\n> >of a sequential scan goes out the window. The OS will be fetching\n> >blocks, more or less, at random.\n\nOn a system that has neither read-ahead nor sorting of I/O requests,\nyes. Which systems are you using that provide neither of these\nfacilities?\n\n> In a multi-tasking system it's always cheaper to fetch less blocks, no\n> matter where they are. Because, as you said, it will end up more or\n> less random onf a system experiencing a larger number of queries.\n\nInvariably a process or thread will lose its quantum when it submits\nan I/O request. (There's nothing left for it to do, since it's waiting\nfor its data to be read, so there's nothing for it to execute.) It\nreceives its next quantum when the data are available, and then it may\nbegin processing the data. There are two possibilities at this point:\n\n a) The process will complete its processing of the current blocks of\n data and submit an I/O request. 
In this case, you would certainly\n have seen better performance (assuming you're not CPU-bound--see\n below) had you read more, because you would have processed more in\n that quantum instead of stopping and waiting for more I/O.\n\n b) In that quantum you cannot complete processing the blocks read\n because you don't have any more CPU time left. In this case there\n are two possibilities:\n\n\ti) You're CPU bound, in which case better disk performance makes\n\tno difference anyway, or\n\n\tii) You are likely to find the blocks still in memory when you\n\tget your next quantum. (Unless you don't have enough memory in\n\tthe system, in which case, you should fix that before you spend\n\tany more time or money on tuning disk performance.)\n\nSo basically, it's only cheaper to fetch fewer blocks all the time if\nyou're doing large amounts of I/O and have relatively little memory. The\nlatter case is getting more and more rare as time goes on. I'd say at\nthis point that anybody interested in performance is likely to have at\nleast 256 MB of memory, which means you're going to need a fairly large\ndatabase and a lot of workload before that becomes the problem.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Fri, 19 Apr 2002 15:58:11 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "At 10:48 AM 4/18/02 -0400, mlw wrote:\n>Bruce Momjian wrote:\n> >\n> > Have you tried reducing 'random_page_cost' in postgresql.conf. That\n> > should solve most of your problems if you would like more index scans.\n>\n>My random page cost is 1 :-)\n\nWhat happens when you set random page cost to 1? Between an index scan of \n50% of a table and a full table scan which would the optimizer pick? 
With \nit at 1, what percentage would be the switchover point?\n\nBecause I'm thinking that for _repeated_ queries when there is caching the \nrandom page cost for \"small\" selections may be very low after the first \nvery costly select (may not be that costly for smart SCSI drives). So \nselecting 10% of a table randomly may not be that costly after the first \nselect. Whereas for sequential scans 100% of the table must fit in the \ncache. If the cache is big enough then whichever results in selecting less \nshould be faster ( noting that typically sequential RAM reads are faster \nthan random RAM reads ). If the cache is not big enough then selecting less \nmay be better up till the point where the total amount repeatedly selected \ncannot be cached, in which case sequential scans should be better. This is \nof course for queries in serial, not queries in parallel. How would one \ntake these issues into account in an optimizer?\n\nMark's problems with the optimizer seem to be something else tho: \nstatistics off.\n\n>I had a database where I had to have \"enable_seqscan=false\" in the config \n>file.\n>The nature of the data always makes the statistics bogus, and it always \n>refused\n>to use the index.\n>My one most important experience (I've had more than one) with this whole \n>topic\n>is DMN's music database, when PostgreSQL uses the index, the query executes in\n>a fraction of a second. When \"enable_seqscan=true\" PostgreSQL refuses to use\n>the index, and the query takes a about a minute. No matter how much I analyze,\n>I have to disable sequential scan for the system to work correctly.\n\nI'm just wondering why not just use enable_seqscan=false for those \nproblematic queries as a \"hint\"? 
Unless your query does need some seq scans \nas well?\n\nBy the way, are updates treated the same as selects by the optimizer?\n\nRegards,\nLink.\n\n", "msg_date": "Sat, 20 Apr 2002 12:10:23 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> ...By the way, are updates treated the same as selects by the optimizer?\n\nYeah. The writes must occur in any case, so I see no reason why the\noptimizer should worry about them. All it needs to consider are the\ncycles used by the various alternatives for fetching the data. So\nthe problem is isomorphic to a SELECT.\n\nThis assumption is really wired quite fundamentally into the optimizer,\nbut I'm not sure if it's made clear anywhere in the documentation.\nCan anyone suggest the right place to describe it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Apr 2002 00:16:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": "Hi All.\nI've been reading all the thread and I want to add a few points:\n\nYou can set enable_seqscan=off in small or easy queries, but in large\nqueries index can speed parts of the query and slow other, so I think it is\nneccesary if you want Postgres to become a Wide-used DBMS that the planner\ncould be able to decide accuratelly, in the thread there is a point that\nmight be useful, it will be very interesting that the planner could learn\nwith previous executions, even there could be a warm-up policy to let\nplanner learn about how the DB is working, this info could be stored with DB\ndata, and could statistically show how use of index or seqscan works on\nevery column of the DB.\n\nI think it will be useful hearing all users and not guiding only with our\nown experience, the main objective is to make a versatil DBMS, 
It's very\neasy to get down the need of improving indexes with single selects, but a\nlot of us are not doing single select, so I think that point needs to be\nheard.\nRegards\n\n", "msg_date": "Mon, 22 Apr 2002 12:13:39 +0200", "msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": "I will use this thread to throw a question I threw time ago but was\nunresolved, I think it is very close with this thread, cause we can't talk\nabout indexes and seqscans without talk about indexes performance, I've to\napologize if someone thinks that it's off-topic.\n\nIndex usage has a big contention problem when running on large SMP boxes,\nhere are the results from running on an 8 r10000 200MHz, I've sized the\ndatabase in order to be 100% cachable by OS in order to compare memory\nperformance with seq_scan an index_scan, lately I've reduced\nrandom_page_cost first to 0.5 and finally to 0.1 to force postgres to use\nindexes, in both executions 1 only stream(on left in the graph) is faster\nthan in random_page_cost=4, but more than one stream results in high\ncontention rate.\nThese are results from tpc-h(1 first stream of 22 queries followed for s\nparallel streams of same queries other order with refresh functions in\nprogress)\nOrange shows CPU waiting for resources, what means stopped at a sem(it's odd\nbecause all queries are read-only).\nfirst of all is rpg=4(less time 8 streams than first(no loads)),\nsecond=0.5(about twice the parallel than first stream) third=0.1(five times\nparallel than first stream).\nI've marked where the first stream ends and starts the parallel test.", "msg_date": "Mon, 22 Apr 2002 13:58:52 +0200", "msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": "Lincoln Yeoh wrote:\n> At 10:48 AM 4/18/02 
-0400, mlw wrote:\n> >Bruce Momjian wrote:\n> > >\n> > > Have you tried reducing 'random_page_cost' in postgresql.conf. That\n> > > should solve most of your problems if you would like more index scans.\n> >\n> >My random page cost is 1 :-)\n> \n> What happens when you set random page cost to 1? Between an index scan of \n> 50% of a table and a full table scan which would the optimizer pick? With \n> it at 1, what percentage would be the switchover point?\n> \n> Because I'm thinking that for _repeated_ queries when there is caching the \n> random page cost for \"small\" selections may be very low after the first \n> very costly select (may not be that costly for smart SCSI drives). So \n> selecting 10% of a table randomly may not be that costly after the first \n> select. Whereas for sequential scans 100% of the table must fit in the \n> cache. If the cache is big enough then whichever results in selecting less \n> should be faster ( noting that typically sequential RAM reads are faster \n> than random RAM reads ). If the cache is not big enough then selecting less \n> may be better up till the point where the total amount repeatedly selected \n> cannot be cached, in which case sequential scans should be better. This is \n> of course for queries in serial, not queries in parallel. How would one \n> take these issues into account in an optimizer?\n\nThis is an interesting point, that an index scan may fit in the cache\nwhile a sequential scan may not. 
I can see cases where even an index\nscan of a large percentage of the table may win over a sequential scan.\nInteresting.\n\nDetermining that, especially in a multi-user environment, is quite\ndifficult.\n\nWe do have 'effective_cache_size', which does try to determine how much\nof the I/O will have to go to disk and how much may fit in the cache,\nbut it is quite a fuzzy number.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Apr 2002 12:41:45 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Luis Alberto Amigo Navarro wrote:\n> Hi All.\n> I've been reading all the thread and I want to add a few points:\n> \n> You can set enable_seqscan=off in small or easy queries, but in large\n> queries an index can speed parts of the query and slow others, so I think it is\n> necessary, if you want Postgres to become a widely-used DBMS, that the planner\n> be able to decide accurately; in the thread there is a point that\n> might be useful: it would be very interesting if the planner could learn\n> from previous executions. There could even be a warm-up policy to let the\n> planner learn about how the DB is working; this info could be stored with DB\n> data, and could statistically show how use of an index or seqscan works on\n> every column of the DB.\n\nYes, I have always felt it would be good to feed back information from\nthe executor to the optimizer to help with later estimates. Of course,\nI never figured out how to do it. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Apr 2002 12:42:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Lincoln Yeoh wrote:\n> \n> At 10:48 AM 4/18/02 -0400, mlw wrote:\n> >Bruce Momjian wrote:\n> > >\n> > > Have you tried reducing 'random_page_cost' in postgresql.conf. That\n> > > should solve most of your problems if you would like more index scans.\n> >\n> >My random page cost is 1 :-)\n> \n> What happens when you set random page cost to 1? Between an index scan of\n> 50% of a table and a full table scan which would the optimizer pick? With\n> it at 1, what percentage would be the switchover point?\n\nI am no longer working on the project. Alas, the company is no more. Anyone\nwant to buy it? :-)\n\n> I'm just wondering why not just use enable_seqscan=false for those\n> problematic queries as a \"hint\"? Unless your query does need some seq scans\n> as well?\n\nI am the architect, thus only one of the developers. It was easier, and safer,\nto make sure sequential scans did not get executed on a global basis. It would\nbe disastrous if the development version of the database did not do a\nsequential scan, but the live version did. (This did happen to us once. Another\npoint of PostgreSQL vs Index frustration.)\n\nThe risk was minimal if a live query erroneously used an index, but the\nconsequenses, at least in our application, would be a 1~2 minute PostgreSQL\nquery.\n", "msg_date": "Tue, 23 Apr 2002 22:50:43 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Curt Sampson wrote:\n> \n> On Thu, 18 Apr 2002, Michael Loftis wrote:\n> \n> > mlw wrote:\n> >\n> > >The supposed advantage of a sequential read over an random read, in\n> > >an active multitasking system, is a myth. 
If you are executing one\n> > >query and the system is doing only that query, you may be right.\n> > >\n> > >Execute a number of queries at the same time, the expected benefit\n> > >of a sequential scan goes out the window. The OS will be fetching\n> > >blocks, more or less, at random.\n> \n> On a system that has neither read-ahead nor sorting of I/O requests,\n> yes. Which systems are you using that provide neither of these\n> facilities?\n\nThis only happens if the OS can organize the I/O requests in such a manner. It\nis a non-trivial function.\n\n> \n> > In a multi-tasking system it's always cheaper to fetch fewer blocks, no\n> > matter where they are. Because, as you said, it will end up more or\n> > less random on a system experiencing a larger number of queries.\n> \n> Invariably a process or thread will lose its quantum when it submits\n> an I/O request. (There's nothing left for it to do, since it's waiting\n> for its data to be read, so there's nothing for it to execute.) \n\nThis statement is verifiably false. What a program does after it submits an I/O\nrequest is VERY OS and state specific. If an I/O request is made for a disk\nblock which is in the read-ahead cache, a number of operating systems may return\nright away.\n", "msg_date": "Tue, 23 Apr 2002 23:02:26 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Tue, 23 Apr 2002, mlw wrote:\n\n> > On a system that has neither read-ahead nor sorting of I/O requests,\n> > yes. Which systems are you using that provide neither of these\n> > facilities?\n>\n> This only happens if the OS can organize the I/O requests in such a manner. It\n> is a non-trivial function.\n\nWell, if you call less than 200 lines of code (including lots of\ncomments), \"non-trivial,\" yes. 
Have a look at NetBSD's\nsrc/sys/kern/subr_disk.c for one example implementation.\n\nBut trivial or not, if all operating systems on which Postgres runs\nare doing this, your point is, well, pointless. So, once again, which\nsystems are you using that do *not* do this?\n\n> > Invariably a process or thread will lose its quantum when it submits\n> > an I/O request. (There's nothing left for it to do, since it's waiting\n> > for its data to be read, so there's nothing for it to execute.)\n>\n> This statement is verifiably false. What a program does after it\n> submits an I/O request is VERY OS and state specific. If an I/O\n> request is made for a disk block, which is in read-ahead cache, a\n> number of operating systems may return right away.\n\nSorry, we were working at different levels. You are thinking of\ngenerating an I/O request on the logical level, via a system call.\nI was referring to generating a physical I/O request, which a logical\nI/O request may or may not do.\n\nSo if you would please go back and tackle my argument again, based\non my clarifications above....\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Wed, 24 Apr 2002 13:38:06 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "At 12:41 PM 4/23/02 -0400, Bruce Momjian wrote:\n\n>This is an interesting point, that an index scan may fit in the cache\n>while a sequential scan may not. 
I can see cases where even a index\n>scan of a large percentage of the table may win over an sequential scan.\n>Interesting.\n\nYes and if it fits in the cache the random access costs drop by orders of \nmagnitude as shown by a recent benchmark someone posted where a Solaris box \ncached gigs of data[1].\n\nThat's why it might be useful to know what the crossover points for index \nscan vs sequential scans for various random page cost values.\n\ne.g. set random page cost to 1 means optimizer will use sequential scan if \nit thinks an index scan will return 50% or more rows. set to 0.5 for 75% or \nmore and so on.\n\nThat's probably very simplistic, but basically some idea of what the \noptimizer will do given a random page cost could be helpful.\n\nThanks,\nLink.\n\n[1] Mark Pritchard's benchmark where you can see 3rd try onwards random is \nactually faster than sequential after caching (TWICE as fast too!).\n\nMachine = jedi (Sun E420, 3gb ram, dual 400s, test on single scsi disk)\n\nSequential\nBytes Read Time Bytes / Sec\n2097152000 65.19 32167675.28\n2097152000 65.22 32154114.65\n2097152000 65.16 32182561.99\n2097152000 65.12 32206105.12\n2097152000 64.67 32429463.26\n 32227984.06 (avg)\n\nRandom\nBytes Read Time Bytes / Sec\n4194304000 1522.22 2755394.79\n4194304000 278.18 15077622.05\n4194304000 91.43 45874730.07\n4194304000 61.43 68273795.19\n4194304000 54.55 76890231.51\n 41774354.72\n\n\n\n\n\n", "msg_date": "Wed, 24 Apr 2002 15:12:46 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "At 12:41 PM 4/23/02 -0400, Bruce Momjian wrote:\n\n>This is an interesting point, that an index scan may fit in the cache\n>while a sequential scan may not.\n\nIf so, I would expect that the number of pages read is significantly\nsmaller than it was with a sequential scan. 
If that's the case,\ndoesn't that mean that the optimizer made the wrong choice anyway?\n\nBTW, I just did a quick walk down this chain of code to see what happens\nduring a sequential scan:\n\n access/heap/heapam.c\n storage/buffer/bufmgr.c\n storage/smgr/smgr.c\n storage/smgr/md.c\n\nand it looks to me like individual reads are being done in BLKSIZE\nchunks, whether we're scanning or not.\n\nDuring a sequential scan, I've heard that it's more efficient to\nread in multiples of your blocksize, say, 64K chunks rather than\n8K chunks, for each read operation you pass to the OS. Does anybody\nhave any experience to know if this is indeed the case? Has anybody\never added this to postgresql and benchmarked it?\n\nCertainly if there's a transaction based limit on disk I/O, as well\nas a throughput limit, it would be better to read in larger chunks.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Wed, 24 Apr 2002 16:51:29 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Sequential Scan Read-Ahead" }, { "msg_contents": "Curt Sampson wrote:\n> \n> On Tue, 23 Apr 2002, mlw wrote:\n> \n> > > On a system that has neither read-ahead nor sorting of I/O requests,\n> > > yes. Which systems are you using that provide neither of these\n> > > facilities?\n> >\n> > This only happens if the OS can organize the I/O requests in such a manner. It\n> > is a non-trivial function.\n> \n> Well, if you call less than 200 lines of code (including lots of\n> comments), \"non-trivial,\" yes. Have a look at NetBSD's\n> src/sys/kern/subr_disk.c for one example implementation.\n> \n> But trivial or not, if all operating systems on which Postgres runs\n> are doing this, your point is, well, pointless. 
So, once again, which\n> systems are you using that do *not* do this?\n\nI am not arguing about whether or not they do it, I am saying it is not always\npossible. I/O requests do not remain in queue waiting for reordering\nindefinitely.\n", "msg_date": "Wed, 24 Apr 2002 08:28:35 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Curt Sampson wrote:\n> At 12:41 PM 4/23/02 -0400, Bruce Momjian wrote:\n> \n> >This is an interesting point, that an index scan may fit in the cache\n> >while a sequential scan may not.\n> \n> If so, I would expect that the number of pages read is significantly\n> smaller than it was with a sequential scan. If that's the case,\n> doesn't that mean that the optimizer made the wrong choice anyway?\n> \n> BTW, I just did a quick walk down this chain of code to see what happens\n> during a sequential scan:\n> \n> access/heap/heapam.c\n> storage/buffer/bufmgr.c\n> storage/smgr/smgr.c\n> storage/smgr/md.c\n> \n> and it looks to me like individual reads are being done in BLKSIZE\n> chunks, whether we're scanning or not.\n> \n> During a sequential scan, I've heard that it's more efficient to\n> read in multiples of your blocksize, say, 64K chunks rather than\n> 8K chunks, for each read operation you pass to the OS. Does anybody\n> have any experience to know if this is indeed the case? Has anybody\n> ever added this to postgresql and benchmarked it?\n> \n> Certainly if there's a transaction based limit on disk I/O, as well\n> as a throughput limit, it would be better to read in larger chunks.\n\nWe expect the file system to do re-aheads during a sequential scan. \nThis will not happen if someone else is also reading buffers from that\ntable in another place.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Apr 2002 10:08:49 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan Read-Ahead" }, { "msg_contents": "I was thinking in something independent from the executor, simply a variable\nthat recommends or not the use of a particular index, it could be obtained\nfrom user, and so it could be improved(a factor lower than 1) on planner.\nHow about something like this?\n\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>\nCc: \"Lincoln Yeoh\" <lyeoh@pop.jaring.my>; \"Tom Lane\" <tgl@sss.pgh.pa.us>;\n\"mlw\" <markw@mohawksoft.com>; \"Andrew Sullivan\" <andrew@libertyrms.info>;\n\"PostgreSQL-development\" <pgsql-hackers@postgresql.org>\nSent: Tuesday, April 23, 2002 6:42 PM\nSubject: Re: [HACKERS] Index Scans become Seq Scans after VACUUM ANALYSE\n\n\n> Luis Alberto Amigo Navarro wrote:\n> > Hi All.\n> > I've been reading all the thread and I want to add a few points:\n> >\n> > You can set enable_seqscan=off in small or easy queries, but in large\n> > queries index can speed parts of the query and slow other, so I think it\nis\n> > neccesary if you want Postgres to become a Wide-used DBMS that the\nplanner\n> > could be able to decide accuratelly, in the thread there is a point that\n> > might be useful, it will be very interesting that the planner could\nlearn\n> > with previous executions, even there could be a warm-up policy to let\n> > planner learn about how the DB is working, this info could be stored\nwith DB\n> > data, and could statistically show how use of index or seqscan works on\n> > every column of the DB.\n>\n> Yes, I have always felt it would be good to feed back information from\n> the executor to the optimizer to help with later estimates. Of course,\n> I never figured out how to do it. 
:-)\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n", "msg_date": "Wed, 24 Apr 2002 18:32:52 +0200", "msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Wed, 24 Apr 2002, Bruce Momjian wrote:\n\n> We expect the file system to do re-aheads during a sequential scan.\n> This will not happen if someone else is also reading buffers from that\n> table in another place.\n\nRight. The essential difficulties are, as I see it:\n\n 1. Not all systems do readahead.\n\n 2. Even systems that do do it cannot always reliably detect that\n they need to.\n\n 3. Even when the read-ahead does occur, you're still doing more\n syscalls, and thus more expensive kernel/userland transitions, than\n you have to.\n\nHas anybody considered writing a storage manager that uses raw\npartitions and deals with its own buffer caching? This has the potential\nto be a lot more efficient, since the database server knows much more\nabout its workload than the operating system can guess.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Thu, 25 Apr 2002 10:40:37 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan Read-Ahead" }, { "msg_contents": "Curt Sampson wrote:\n> On Wed, 24 Apr 2002, Bruce Momjian wrote:\n> \n> > We expect the file system to do re-aheads during a sequential scan.\n> > This will not happen if someone else is also reading buffers from that\n> > table in another place.\n> \n> Right. The essential difficulties are, as I see it:\n> \n> 1. 
Not all systems do readahead.\n\nIf they don't, that isn't our problem. We expect it to be there, and if\nit isn't, the vendor/kernel is at fault.\n\n> 2. Even systems that do do it cannot always reliably detect that\n> they need to.\n\nYes, seek() in file will turn off read-ahead. Grabbing bigger chunks\nwould help here, but if you have two people already reading from the\nsame file, grabbing bigger chunks of the file may not be optimal.\n\n> 3. Even when the read-ahead does occur, you're still doing more\n> syscalls, and thus more expensive kernel/userland transitions, than\n> you have to.\n\nI would guess the performance impact is minimal.\n\n> Has anybody considered writing a storage manager that uses raw\n> partitions and deals with its own buffer caching? This has the potential\n> to be a lot more efficient, since the database server knows much more\n> about its workload than the operating system can guess.\n\nWe have talked about it, but rejected it. Look in TODO.detail in\noptimizer and performance for 'raw'. Also interesting info there about\noptimizer cost estimates we have been talking about.\n\nSpecificially see:\n\n\thttp://candle.pha.pa.us/mhonarc/todo.detail/performance/msg00009.html\n\nAlso see:\n\n\thttp://candle.pha.pa.us/mhonarc/todo.detail/optimizer/msg00011.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Apr 2002 21:56:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan Read-Ahead" }, { "msg_contents": "On Wed, 24 Apr 2002, Bruce Momjian wrote:\n\n> > 1. Not all systems do readahead.\n>\n> If they don't, that isn't our problem. 
We expect it to be there, and if\n> it isn't, the vendor/kernel is at fault.\n\nIt is your problem when another database kicks Postgres' ass\nperformance-wise.\n\nAnd at that point, *you're* at fault. You're the one who's knowingly\ndecided to do things inefficiently.\n\nSorry if this sounds harsh, but this, \"Oh, someone else is to blame\"\nattitude gets me steamed. It's one thing to say, \"We don't support\nthis.\" That's fine; there are often good reasons for that. It's a\ncompletely different thing to say, \"It's an unrelated entity's fault we\ndon't support this.\"\n\nAt any rate, relying on the kernel to guess how to optimise for\nthe workload will never work as well as well as the software that\nknows the workload doing the optimization.\n\nThe lack of support thing is no joke. Sure, lots of systems nowadays\nsupport unified buffer cache and read-ahead. But how many, besides\nSolaris, support free-behind, which is also very important to avoid\nblowing out your buffer cache when doing sequential reads? And who\nat all supports read-ahead for reverse scans? (Or does Postgres\nnot do those, anyway? I can see the support is there.)\n\nAnd even when the facilities are there, you create problems by\nusing them. Look at the OS buffer cache, for example. Not only do\nwe lose efficiency by using two layers of caching, but (as people\nhave pointed out recently on the lists), the optimizer can't even\nknow how much or what is being cached, and thus can't make decisions\nbased on that.\n\n> Yes, seek() in file will turn off read-ahead. Grabbing bigger chunks\n> would help here, but if you have two people already reading from the\n> same file, grabbing bigger chunks of the file may not be optimal.\n\nGrabbing bigger chunks is always optimal, AFICT, if they're not\n*too* big and you use the data. A single 64K read takes very little\nlonger than a single 8K read.\n\n> > 3. 
Even when the read-ahead does occur, you're still doing more\n> > syscalls, and thus more expensive kernel/userland transitions, than\n> > you have to.\n>\n> I would guess the performance impact is minimal.\n\nIf it were minimal, people wouldn't work so hard to build multi-level\nthread systems, where multiple userland threads are scheduled on\ntop of kernel threads.\n\nHowever, it does depend on how much CPU your particular application\nis using. You may have it to spare.\n\n> \thttp://candle.pha.pa.us/mhonarc/todo.detail/performance/msg00009.html\n\nWell, this message has some points in it that I feel are just incorrect.\n\n 1. It is *not* true that you have no idea where data is when\n using a storage array or other similar system. While you\n certainly ought not worry about things such as head positions\n and so on, it's been a given for a long, long time that two\n blocks that have close index numbers are going to be close\n together in physical storage.\n\n 2. Raw devices are quite standard across Unix systems (except\n in the unfortunate case of Linux, which I think has been\n remedied, hasn't it?). They're very portable, and have just as\n well--if not better--defined write semantics as a filesystem.\n\n 3. My observations of OS performance tuning over the past six\n or eight years contradict the statement, \"There's a considerable\n cost in complexity and code in using \"raw\" storage too, and\n it's not a one off cost: as the technologies change, the \"fast\"\n way to do things will change and the code will have to be\n updated to match.\" While optimizations have been removed over\n the years the basic optimizations (order reads by block number,\n do larger reads rather than smaller, cache the data) have\n remained unchanged for a long, long time.\n\n 4. \"Better to leave this to the OS vendor where possible, and\n take advantage of the tuning they do.\" Well, sorry guys, but\n have a look at the tuning they do. 
It hasn't changed in years,\n except to remove now-unnecessary complexity realated to really,\n really old and slow disk devices, and to add a few thing that\n guess workload but still do a worse job than if the workload\n generator just did its own optimisations in the first place.\n\n> \thttp://candle.pha.pa.us/mhonarc/todo.detail/optimizer/msg00011.html\n\nWell, this one, with statements like \"Postgres does have control\nover its buffer cache,\" I don't know what to say. You can interpret\nthe statement however you like, but in the end Postgres very little\ncontrol at all over how data is moved between memory and disk.\n\nBTW, please don't take me as saying that all control over physical\nIO should be done by Postgres. I just think that Posgres could do\na better job of managing data transfer between disk and memory than\nthe OS can. The rest of the things (using raw paritions, read-ahead,\nfree-behind, etc.) just drop out of that one idea.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Thu, 25 Apr 2002 12:19:14 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan Read-Ahead" }, { "msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> Grabbing bigger chunks is always optimal, AFICT, if they're not\n> *too* big and you use the data. A single 64K read takes very little\n> longer than a single 8K read.\n\nProof?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Apr 2002 23:30:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan Read-Ahead " }, { "msg_contents": "> Curt Sampson <cjs@cynic.net> writes:\n> > Grabbing bigger chunks is always optimal, AFICT, if they're not\n> > *too* big and you use the data. 
A single 64K read takes very little\n> > longer than a single 8K read.\n> \n> Proof?\n\nLong time ago I tested with the 32k block size and got 1.5-2x speed up\ncomparing ordinary 8k block size in the sequential scan case.\nFYI, if this is the case.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 25 Apr 2002 12:34:29 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan Read-Ahead " }, { "msg_contents": "\nWell, this is a very interesting email. Let me comment on some points.\n\n\n---------------------------------------------------------------------------\n\nCurt Sampson wrote:\n> On Wed, 24 Apr 2002, Bruce Momjian wrote:\n> \n> > > 1. Not all systems do readahead.\n> >\n> > If they don't, that isn't our problem. We expect it to be there, and if\n> > it isn't, the vendor/kernel is at fault.\n> \n> It is your problem when another database kicks Postgres' ass\n> performance-wise.\n> \n> And at that point, *you're* at fault. You're the one who's knowingly\n> decided to do things inefficiently.\n\nIt is just hard to imagine an OS not doing read-ahead, at least in\nsimple cases.\n\n> Sorry if this sounds harsh, but this, \"Oh, someone else is to blame\"\n> attitude gets me steamed. It's one thing to say, \"We don't support\n> this.\" That's fine; there are often good reasons for that. It's a\n> completely different thing to say, \"It's an unrelated entity's fault we\n> don't support this.\"\n\nWell, we are guilty of trying to push as much as possible on to other\nsoftware. We do this for portability reasons, and because we think our\ntime is best spent dealing with db issues, not issues then can be deal\nwith by other existing software, as long as the software is decent.\n\n> At any rate, relying on the kernel to guess how to optimise for\n> the workload will never work as well as well as the software that\n> knows the workload doing the optimization.\n\nSure, that is certainly true. 
However, it is hard to know what the\nfuture will hold even if we had perfect knowledge of what was happening\nin the kernel. We don't know who else is going to start doing I/O once\nour I/O starts. We may have a better idea with kernel knowledge, but we\nstill don't know 100% what will be cached.\n\n> The lack of support thing is no joke. Sure, lots of systems nowadays\n> support unified buffer cache and read-ahead. But how many, besides\n> Solaris, support free-behind, which is also very important to avoid\n\nWe have free-behind on our list. I think LRU-K will do this quite well\nand be a nice general solution for more than just sequential scans.\n\n> blowing out your buffer cache when doing sequential reads? And who\n> at all supports read-ahead for reverse scans? (Or does Postgres\n> not do those, anyway? I can see the support is there.)\n> \n> And even when the facilities are there, you create problems by\n> using them. Look at the OS buffer cache, for example. Not only do\n> we lose efficiency by using two layers of caching, but (as people\n> have pointed out recently on the lists), the optimizer can't even\n> know how much or what is being cached, and thus can't make decisions\n> based on that.\n\nAgain, are you going to know 100% anyway?\n\n> \n> > Yes, seek() in file will turn off read-ahead. Grabbing bigger chunks\n> > would help here, but if you have two people already reading from the\n> > same file, grabbing bigger chunks of the file may not be optimal.\n> \n> Grabbing bigger chunks is always optimal, AFICT, if they're not\n> *too* big and you use the data. A single 64K read takes very little\n> longer than a single 8K read.\n\nThere may be validity in this. It is easy to do (I think) and could be\na win.\n\n> > > 3. 
Even when the read-ahead does occur, you're still doing more\n> > > syscalls, and thus more expensive kernel/userland transitions, than\n> > > you have to.\n> >\n> > I would guess the performance impact is minimal.\n> \n> If it were minimal, people wouldn't work so hard to build multi-level\n> thread systems, where multiple userland threads are scheduled on\n> top of kernel threads.\n> \n> However, it does depend on how much CPU your particular application\n> is using. You may have it to spare.\n\nI assume those apps are doing tons of kernel calls. I don't think we\nreally do that many.\n\n> > \thttp://candle.pha.pa.us/mhonarc/todo.detail/performance/msg00009.html\n> \n> Well, this message has some points in it that I feel are just incorrect.\n> \n> 1. It is *not* true that you have no idea where data is when\n> using a storage array or other similar system. While you\n> certainly ought not worry about things such as head positions\n> and so on, it's been a given for a long, long time that two\n> blocks that have close index numbers are going to be close\n> together in physical storage.\n\nSCSI drivers, for example, are pretty smart. Not sure we can take\nadvantage of that from user-land I/O.\n\n> 2. Raw devices are quite standard across Unix systems (except\n> in the unfortunate case of Linux, which I think has been\n> remedied, hasn't it?). They're very portable, and have just as\n> well--if not better--defined write semantics as a filesystem.\n\nYes, but we are seeing some db's moving away from raw I/O. Our\nperformance numbers beat most of the big db's already, so we must be\ndoing something right. In fact, our big failing is more is missing\nfeatures and limitations of our db, rather than performance.\n\n> 3. 
My observations of OS performance tuning over the past six\n> or eight years contradict the statement, \"There's a considerable\n> cost in complexity and code in using \"raw\" storage too, and\n> it's not a one off cost: as the technologies change, the \"fast\"\n> way to do things will change and the code will have to be\n> updated to match.\" While optimizations have been removed over\n> the years the basic optimizations (order reads by block number,\n> do larger reads rather than smaller, cache the data) have\n> remained unchanged for a long, long time.\n\nYes, but do we spend our time doing that. Is the payoff worth it, vs.\nworking on other features. Sure it would be great to have all these\nfancy things, but is this where our time should be spent, considering\nother items on the TODO list?\n\n> 4. \"Better to leave this to the OS vendor where possible, and\n> take advantage of the tuning they do.\" Well, sorry guys, but\n> have a look at the tuning they do. It hasn't changed in years,\n> except to remove now-unnecessary complexity realated to really,\n> really old and slow disk devices, and to add a few thing that\n> guess workload but still do a worse job than if the workload\n> generator just did its own optimisations in the first place.\n> \n> > \thttp://candle.pha.pa.us/mhonarc/todo.detail/optimizer/msg00011.html\n> \n> Well, this one, with statements like \"Postgres does have control\n> over its buffer cache,\" I don't know what to say. You can interpret\n> the statement however you like, but in the end Postgres very little\n> control at all over how data is moved between memory and disk.\n> \n> BTW, please don't take me as saying that all control over physical\n> IO should be done by Postgres. I just think that Posgres could do\n> a better job of managing data transfer between disk and memory than\n> the OS can. The rest of the things (using raw paritions, read-ahead,\n> free-behind, etc.) 
just drop out of that one idea.\n\nYes, clearly there is benefit in these, and some of them, like\nfree-behind, have already been tested, though not committed.\n\nJumping in and doing the I/O ourselves is a big undertaking, and looking\nat our TODO list, I am not sure if it is worth it right now.\n\nOf course, if we had 4 TODO items, I would be much more interested in at\nleast trying to see how much gain we could get.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 25 Apr 2002 00:04:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan Read-Ahead" }, { "msg_contents": "On Wed, 24 Apr 2002, mlw wrote:\n\n> I am not arguing about whether or not they do it, I am saying it is\n> not always possible. I/O requests do not remain in queue waiting for\n> reordering indefinitely.\n\nIt doesn't matter. When they go out to the disk they go out in\norder. On every Unix-based OS I know of, and Novell Netware, if\nyou submit a single read request for consecutive blocks, those\nblocks *will* be read sequentially, no matter what the system load.\n\nSo to get back to the original argument:\n\n> > >The supposed advantage of a sequential read over a random read, in\n> > >an active multitasking system, is a myth. If you are executing one\n> > >query and the system is doing only that query, you may be right.\n\nNo, it's very real, because your sequential read will not be broken up.\n\nIf you think it will, let me know which operating systems this\nhappens on, and how exactly it happens.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n\n", "msg_date": "Thu, 25 Apr 2002 13:12:10 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "\n\nTom Lane wrote:\n\n>Curt Sampson <cjs@cynic.net> writes:\n>\n>>Grabbing bigger chunks is always optimal, AFICT, if they're not\n>>*too* big and you use the data. A single 64K read takes very little\n>>longer than a single 8K read.\n>>\n>\n>Proof?\n>\nI contest this statement.\n\nIt's optimal to a point. I know that my system settles into its best \nread-speeds @ 32K or 64K chunks. 8K chunks are far below optimal for my \nsystem. Most systems I work on do far better at 16K than at 8K, and \nmost don't see any degradation when going to 32K chunks. (this is \nacross numerous OSes and configs -- results are interpretations from \nbonnie disk i/o marks).\n\nDepending on what you're doing it is more efficient to read bigger \nblocks, up to a point. If you're multi-threaded or reading in non-blocking \nmode, take as big a chunk as you can handle or are ready to process in \nquick order. If you're picking up a bunch of little chunks here and \nthere and know you're not using them again, then choose a size that will \nhopefully cause some of the reads to overlap; failing that, pick the \nsmallest usable read size.\n\nThe OS can never do that stuff for you.\n\n\n", "msg_date": "Wed, 24 Apr 2002 22:43:11 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan Read-Ahead" }, { "msg_contents": "A Block-sized read will not be broken up. But if you're reading in a \nsize bigger than the underlying system's block sizes then it can get \nbroken up.\n\nSo yes, a sequential read will get broken up. A single read request for \na block may or may not get broken up. 
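To make that "may or may not" concrete: a single read() is one system call, but POSIX allows it to return fewer bytes than were requested, so careful callers loop until the request is satisfied. A minimal sketch (illustrative only, not code from this thread; the helper name is invented):

```python
import os

def read_exact(fd, nbytes):
    """Read exactly nbytes from fd, looping because a single os.read()
    may legally return fewer bytes than requested (a "short read")."""
    chunks = []
    remaining = nbytes
    while remaining > 0:
        chunk = os.read(fd, remaining)
        if not chunk:            # EOF before nbytes were available
            break
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)
```

Regular files on local disks rarely return short reads, but pipes, sockets, and interrupted calls do, so the loop costs nothing and covers the odd case.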
If you're fread()ing with set block \nsizes you'll see the set sizes of blocks come through, but what the \nunderlying OS does is undefined; same for writing. If the underlying \nblock size is 8KB and you dump 4MB down on it, the OS may (and in many \ncases does) decide to write part of it, do a read on a nearby sector, \nthen write the rest. This happens when doing long writes that end up \nspanning block groups, because the inodes must be allocated. \n\nSo the write WILL get broken up. Reads are under the same gun. It all \ndepends on how big the request is. To the application you may or may not see this \n(probably not, unless you're set non-blocking, because the kernel will \njust sleep you until your data is ready). Further, large read requests \ncan of course be re-ordered by hardware: Tagged Command Queueing on \nSCSI drives and RAIDs. The ICP Vortex cards I use in a number of \nsystems have 64MB on-board cache. They quite happily, and often, \nre-order reads and writes when queueing them to keep things moving as \nfast as possible (Intel didn't buy them for their cards--they use the \ni960 as it is; Intel swiped them for their IP rights). The OS also tags \ncommands it fires to the ICP, which can be re-ordered on block-sized chunks.\n\nCurt Sampson wrote:\n\n>On Wed, 24 Apr 2002, mlw wrote:\n>\n>>I am not arguing about whether or not they do it, I am saying it is\n>>not always possible. I/O requests do not remain in queue waiting for\n>>reordering indefinitely.\n>>\n>\n>It doesn't matter. When they go out to the disk they go out in\n>order. On every Unix-based OS I know of, and Novell Netware, if\n>you submit a single read request for consecutive blocks, those\n>blocks *will* be read sequentially, no matter what the system load.\n>\n>So to get back to the original argument:\n>\n>>>>The supposed advantage of a sequential read over a random read, in\n>>>>an active multitasking system, is a myth. 
If you are executing one\n>>>>query and the system is doing only that query, you may be right.\n>>>>\n>\n>No, it's very real, because your sequential read will not be broken up.\n>\n>If you think it will, let me know which operating systems this\n>happens on, and how exactly it happens.\n>\n>cjs\n>\n\n\n", "msg_date": "Wed, 24 Apr 2002 22:52:47 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Wed, 24 Apr 2002, Michael Loftis wrote:\n\n> A Block-sized read will not be broken up. But if you're reading in a\n> size bigger than the underlying system's block sizes then it can get\n> broken up.\n\nIn which operating systems, and under what circumstances?\n\nI'll agree that some OSs may not coalesce adjacent reads into a\nsingle read, but even so, submitting a bunch of single reads for\nconsecutive blocks is going to be much, much faster than if other,\nrandom I/O occurred between those reads.\n\n> If the underlying\n> block size is 8KB and you dump 4MB down on it, the OS may (and in many\n> cases does) decide to write part of it, do a read on a nearby sector,\n> then write the rest. This happens when doing long writes that end up\n> spanning block groups because the inodes must be allocated.\n\nUm...we're talking about 64K vs 8K reads here, not 4 MB reads. I am\ncertainly not suggesting Postgres ever submit 4 MB read requests to the OS.\n\nI agree that any single-chunk reads or writes that cause non-adjacent\ndisk blocks to be accessed may be broken up. But in my sense,\nthey're \"broken up\" anyway, in that you have no choice but to take\na performance hit.\n\n> Further large read requests can of course be re-ordered by hardware.\n> ...The OS also tags ICP, which can be re-ordered on block-sized chunks.\n\nRight. All the more reason to read in larger chunks when we know what we\nneed in advance, because that will give the OS, controllers, etc. 
more\nadvance information, and let them do the reads more efficiently.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Thu, 25 Apr 2002 15:05:48 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Wed, 24 Apr 2002, Tom Lane wrote:\n\n> Curt Sampson <cjs@cynic.net> writes:\n> > Grabbing bigger chunks is always optimal, AFICT, if they're not\n> > *too* big and you use the data. A single 64K read takes very little\n> > longer than a single 8K read.\n>\n> Proof?\n\nWell, there are various sorts of \"proof\" for this assertion. What\nsort do you want?\n\nHere are a few samples; if you're looking for something different to\nsatisfy you, let's discuss it.\n\n1. Theoretical proof: two components of the delay in retrieving a\nblock from disk are the disk arm movement and the wait for the\nright block to rotate under the head.\n\nWhen retrieving, say, eight adjacent blocks, these will be spread\nacross no more than two cylinders (with luck, only one). The worst\ncase access time for a single block is the disk arm movement plus\nthe full rotational wait; this is the same as the worst case for\neight blocks if they're all on one cylinder. If they're not on one\ncylinder, they're still on adjacent cylinders, requiring a very\nshort seek.\n\n2. Proof by others using it: SQL Server uses 64K reads when doing\ntable scans, as they say that their research indicates that the\nmajor limitation is usually the number of I/O requests, not the\nI/O capacity of the disk. BSD's FFS explicitly separates the optimum\nallocation size for storage (1K fragments) and optimum read size\n(8K blocks) because they found performance to be much better when\na larger size block was read. Most file system vendors, too, do\nread-ahead for this very reason.\n\n3. 
Proof by testing. I wrote a little ruby program to seek to a\nrandom point in the first 2 GB of my raw disk partition and read\n1-8 8K blocks of data. (This was done as one I/O request.) (Using\nthe raw disk partition I avoid any filesystem buffering.) Here are\ntypical results:\n\n 125 reads of 16x8K blocks: 1.9 sec, 66.04 req/sec. 15.1 ms/req, 0.946 ms/block\n 250 reads of 8x8K blocks: 1.9 sec, 132.3 req/sec. 7.56 ms/req, 0.945 ms/block\n 500 reads of 4x8K blocks: 2.5 sec, 199 req/sec. 5.03 ms/req, 1.26 ms/block\n1000 reads of 2x8K blocks: 3.8 sec, 261.6 req/sec. 3.82 ms/req, 1.91 ms/block\n2000 reads of 1x8K blocks: 6.4 sec, 310.4 req/sec. 3.22 ms/req, 3.22 ms/block\n\nThe ratios of data retrieval speed per read for groups of adjacent\n8K blocks, assuming a single 8K block reads in 1 time unit, are:\n\n 1 block\t1.00\n 2 blocks\t1.18\n 4 blocks\t1.56\n 8 blocks\t2.34\n 16 blocks\t4.68\n\nAt less than 20% more expensive, certainly two-block read requests\ncould be considered to cost \"very little more\" than one-block read\nrequests. Even four-block read requests are only half-again as\nexpensive. And if you know you're really going to be using the\ndata, read in 8 block chunks and your cost per block (in terms of\ntime) drops to less than a third of the cost of single-block reads.\n\nLet me put paid to comments about multiple simultaneous readers\nmaking this invalid. Here's a typical result I get with four\ninstances of the program running simultaneously:\n\n125 reads of 16x8K blocks: 4.4 sec, 28.21 req/sec. 35.4 ms/req, 2.22 ms/block\n250 reads of 8x8K blocks: 3.9 sec, 64.88 req/sec. 15.4 ms/req, 1.93 ms/block\n500 reads of 4x8K blocks: 5.8 sec, 86.52 req/sec. 11.6 ms/req, 2.89 ms/block\n1000 reads of 2x8K blocks: 10 sec, 100.2 req/sec. 9.98 ms/req, 4.99 ms/block\n2000 reads of 1x8K blocks: 18 sec, 110 req/sec. 
9.09 ms/req, 9.09 ms/block\n\nHere's the ratio table again, with another column comparing the\naggregate number of requests per second for one process and four\nprocesses:\n\n 1 block\t1.00\t\t310 : 440\n 2 blocks\t1.10\t\t262 : 401\n 4 blocks\t1.28\t\t199 : 346\n 8 blocks\t1.69\t\t132 : 260\n 16 blocks\t3.89\t\t 66 : 113\n\nNote that, here the relative increase in performance for increasing\nsizes of reads is even *better* until we get past 64K chunks. The\noverall throughput is better, of course, because with more requests\nper second coming in, the disk seek ordering code has more to work\nwith and the average seek time spent seeking vs. reading will be\nreduced.\n\nYou know, this is not rocket science; I'm sure there must be papers\nall over the place about this. If anybody still disagrees that it's\na good thing to read chunks up to 64K or so when the blocks are\nadjacent and you know you'll need the data, I'd like to see some\ntangible evidence to support that.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Thu, 25 Apr 2002 16:28:51 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan Read-Ahead " }, { "msg_contents": "On Thu, 25 Apr 2002, Bruce Momjian wrote:\n\n> Well, we are guilty of trying to push as much as possible on to other\n> software. We do this for portability reasons, and because we think our\n> time is best spent dealing with db issues, not issues then can be deal\n> with by other existing software, as long as the software is decent.\n\nThat's fine. I think that's a perfectly fair thing to do.\n\nIt was just the wording (i.e., \"it's this other software's fault\nthat blah de blah\") that got to me. To say, \"We don't do readahead\nbecase most OSes supply it, and we feel that other things would\nhelp more to improve performance,\" is fine by me. 
Or even, \"Well,\nnobody feels like doing it. You want it, do it yourself,\" I have\nno problem with.\n\n> Sure, that is certainly true. However, it is hard to know what the\n> future will hold even if we had perfect knowledge of what was happening\n> in the kernel. We don't know who else is going to start doing I/O once\n> our I/O starts. We may have a better idea with kernel knowledge, but we\n> still don't know 100% what will be cached.\n\nWell, we do if we use raw devices and do our own caching, using\npages that are pinned in RAM. That was sort of what I was aiming\nat for the long run.\n\n> We have free-behind on our list.\n\nUh...can't do it, if you're relying on the OS to do the buffering.\nHow do you tell the OS that you're no longer going to use a page?\n\n> I think LRU-K will do this quite well\n> and be a nice general solution for more than just sequential scans.\n\nLRU-K sounds like a great idea to me, as does putting pages read\nfor a table scan at the LRU end of the cache, rather than the MRU\n(assuming we do something to ensure that they stay in cache until\nread once, at any rate).\n\nBut again, great for your own cache, but doesn't work with the OS\ncache. And I'm a bit scared to crank up too high the amount of\nmemory I give Postgres, lest the OS try to too aggressively buffer\nall that I/O in what memory remains to it, and start blowing programs\n(like maybe the backend binary itself) out of RAM. But maybe this\nisn't typically a problem; I don't know.\n\n> There may be validity in this. It is easy to do (I think) and could be\n> a win.\n\nIt didn't look to difficult to me, when I looked at the code, and\nyou can see what kind of win it is from the response I just made\nto Tom.\n\n> > 1. It is *not* true that you have no idea where data is when\n> > using a storage array or other similar system. 
While you\n> > certainly ought not worry about things such as head positions\n> > and so on, it's been a given for a long, long time that two\n> > blocks that have close index numbers are going to be close\n> > together in physical storage.\n>\n> SCSI drivers, for example, are pretty smart. Not sure we can take\n> advantage of that from user-land I/O.\n\nLooking at the NetBSD ones, I don't see what they're doing that's\nso smart. (Aside from some awfully clever workarounds for stupid\nhardware limitations that would otherwise kill performance.) What\nsorts of \"smart\" are you referring to?\n\n> Yes, but we are seeing some db's moving away from raw I/O.\n\nSuch as whom? And are you certain that they're moving to using the\nOS buffer cache, too? MS SQL Server, for example, uses the filesystem,\nbut turns off all buffering on those files.\n\n> Our performance numbers beat most of the big db's already, so we must\n> be doing something right.\n\nReally? Do the performance numbers for simple, bulk operations\n(imports, exports, table scans) beat the others handily? My intuition\nsays not, but I'll happily be convinced otherwise.\n\n> Yes, but should we spend our time doing that? Is the payoff worth it, vs.\n> working on other features? Sure it would be great to have all these\n> fancy things, but is this where our time should be spent, considering\n> other items on the TODO list?\n\nI agree that these things need to be assessed.\n\n> Jumping in and doing the I/O ourselves is a big undertaking, and looking\n> at our TODO list, I am not sure if it is worth it right now.\n\nRight. I'm not trying to say this is a critical priority, I'm just\ntrying to determine what we do right now, what we could do, and\nthe potential performance increase that it would give us.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n\n", "msg_date": "Thu, 25 Apr 2002 16:55:50 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan Read-Ahead" }, { "msg_contents": "At 12:19 PM 4/25/02 +0900, Curt Sampson wrote:\n>Grabbing bigger chunks is always optimal, AFICT, if they're not\n>*too* big and you use the data. A single 64K read takes very little\n>longer than a single 8K read.\n\nYes, I agree that reading ahead helps when sequential scans are done.\n\nAnd it often doesn't cost much more: whilst waiting for the first block you \nasked for, sometimes the other blocks are going to spin past first, and often \nthe subsystems will read and cache them anyway. At least that was what a \ndisk caching program I wrote years ago did (it had a track cache and an O/S \nmetadata cache[1]); I'm sure most modern HDDs will do the track caching \namongst even more advanced stuff.\n\n\n> 3. My observations of OS performance tuning over the past six\n> or eight years contradict the statement, \"There's a considerable\n> cost in complexity and code in using \"raw\" storage too, and\n> it's not a one off cost: as the technologies change, the \"fast\"\n> way to do things will change and the code will have to be\n> updated to match.\" While optimizations have been removed over\n> the years the basic optimizations (order reads by block number,\n> do larger reads rather than smaller, cache the data) have\n> remained unchanged for a long, long time.\n\n>BTW, please don't take me as saying that all control over physical\n>IO should be done by Postgres. I just think that Postgres could do\n>a better job of managing data transfer between disk and memory than\n>the OS can. The rest of the things (using raw partitions, read-ahead,\n>free-behind, etc.) just drop out of that one idea.\n\nI think the raw partitions will be more trouble than they are worth. 
\nReading larger chunks in appropriate circumstances seems to be the \"low \nhanging fruit\".\n\nIf postgresql prefers sequential scans so much it should do them better ;) \n(just being naughty!).\n\nCheerio,\nLink.\n\n[1] The theory was the drive typically has to jump around a lot more for \nmetadata than for files. In practice it worked pretty well, if I do say so \nmyself :). Not sure if modern HDDs do specialized O/S metadata caching \n(wonder how many megabytes would typically be needed for 18GB drives :) ).\n\n", "msg_date": "Thu, 25 Apr 2002 16:33:36 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan Read-Ahead" }, { "msg_contents": "On Thu, 25 Apr 2002, Curt Sampson wrote:\n\n> Here's the ratio table again, with another column comparing the\n> aggregate number of requests per second for one process and four\n> processes:\n>\n\nJust for interest, I ran this again with 20 processes working\nsimultaneously. I did six runs at each blockread size and summed\nthe tps for each process to find the aggregate number of reads per\nsecond during the test. I dropped the highest and the lowest ones,\nand averaged the rest. Here's the new table:\n\n\t\t1 proc\t4 procs\t20 procs\n\n 1 block\t310\t440\t260\n 2 blocks\t262\t401\t481\n 4 blocks\t199\t346\t354\n 8 blocks\t132\t260\t250\n 16 blocks\t 66\t113\t116\n\nI'm not sure at all why performance gets so much *worse* with a lot of\ncontention on the single-block reads. This could have something to do with NetBSD, or\nits buffer cache, or my laptop's crappy little disk drive....\n\nOr maybe I'm just running out of CPU.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n\n", "msg_date": "Thu, 25 Apr 2002 18:19:02 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan Read-Ahead " }, { "msg_contents": "On Thu, 25 Apr 2002, Lincoln Yeoh wrote:\n\n> I think the raw partitions will be more trouble than they are worth.\n> Reading larger chunks in appropriate circumstances seems to be the \"low\n> hanging fruit\".\n\nThat's certainly a good start. I don't know if the raw partitions\nwould be more trouble than they are worth, but it certainly would\nbe a lot more work, yes. One could do pretty much as well, I think,\nby using the \"don't buffer blocks for this file\" option on those\nOSes that have it.\n\n> [1] The theory was the drive typically has to jump around a lot more for\n> metadata than for files. In practice it worked pretty well, if I do say so\n> myself :). Not sure if modern HDDs do specialized O/S metadata caching\n> (wonder how many megabytes would typically be needed for 18GB drives :) ).\n\nSure they do, though they don't necessarily read it all. Most\nunix systems have a special cache for namei lookups (turning a filename\ninto an i-node number), often one per-process as well as a system-wide\none. And on machines with a unified buffer cache for file data,\nthere's still a separate metadata cache.\n\nBut in fact, at least with BSD's FFS, there's probably not quite\nas much jumping as you'd think. An FFS filesystem is divided into\n\"cylinder groups\" (though these days the groups don't necessarily\nmatch the physical cylinder boundaries on the disk) and a file's\ni-node entry is kept in the same cylinder group as the file's data,\nor at least the first part of it.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n\n", "msg_date": "Thu, 25 Apr 2002 19:47:13 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan Read-Ahead" }, { "msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> 1. Theoretical proof: two components of the delay in retrieving a\n> block from disk are the disk arm movement and the wait for the\n> right block to rotate under the head.\n\n> When retrieving, say, eight adjacent blocks, these will be spread\n> across no more than two cylinders (with luck, only one).\n\nWeren't you contending earlier that with modern disk mechs you really\nhave no idea where the data is? You're asserting as an article of \nfaith that the OS has been able to place the file's data blocks\noptimally --- or at least well enough to avoid unnecessary seeks.\nBut just a few days ago I was getting told that random_page_cost\nwas BS because there could be no such placement.\n\nI'm getting a tad tired of sweeping generalizations offered without\nproof, especially when they conflict.\n\n> 3. Proof by testing. I wrote a little ruby program to seek to a\n> random point in the first 2 GB of my raw disk partition and read\n> 1-8 8K blocks of data. (This was done as one I/O request.) (Using\n> the raw disk partition I avoid any filesystem buffering.)\n\nAnd also ensure that you aren't testing the point at issue.\nThe point at issue is that *in the presence of kernel read-ahead*\nit's quite unclear that there's any benefit to a larger request size.\nIdeally the kernel will have the next block ready for you when you\nask, no matter what the request is.\n\nThere's been some talk of using the AIO interface (where available)\nto \"encourage\" the kernel to do read-ahead. 
I don't foresee us\nwriting our own substitute filesystem to make this happen, however.\nOracle may have the manpower for that sort of boondoggle, but we\ndon't...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Apr 2002 09:54:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan Read-Ahead " }, { "msg_contents": "Curt Sampson wrote:\n> 3. Proof by testing. I wrote a little ruby program to seek to a\n> random point in the first 2 GB of my raw disk partition and read\n> 1-8 8K blocks of data. (This was done as one I/O request.) (Using\n> the raw disk partition I avoid any filesystem buffering.) Here are\n> typical results:\n> \n> 125 reads of 16x8K blocks: 1.9 sec, 66.04 req/sec. 15.1 ms/req, 0.946 ms/block\n> 250 reads of 8x8K blocks: 1.9 sec, 132.3 req/sec. 7.56 ms/req, 0.945 ms/block\n> 500 reads of 4x8K blocks: 2.5 sec, 199 req/sec. 5.03 ms/req, 1.26 ms/block\n> 1000 reads of 2x8K blocks: 3.8 sec, 261.6 req/sec. 3.82 ms/req, 1.91 ms/block\n> 2000 reads of 1x8K blocks: 6.4 sec, 310.4 req/sec. 3.22 ms/req, 3.22 ms/block\n> \n> The ratios of data retrieval speed per read for groups of adjacent\n> 8K blocks, assuming a single 8K block reads in 1 time unit, are:\n> \n> 1 block\t1.00\n> 2 blocks\t1.18\n> 4 blocks\t1.56\n> 8 blocks\t2.34\n> 16 blocks\t4.68\n\nYou mention you are reading from a raw partition. It is my\nunderstanding that raw partition reads have no kernel read-ahead. (I\nassume this because raw devices are devices, not file system files.) If\nthis is true, could we get numbers with kernel read-ahead active?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 25 Apr 2002 10:55:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan Read-Ahead" }, { "msg_contents": "Michael Loftis wrote:\n> A Block-sized read will not be broken up. But if you're reading in a \n> size bigger than the underlying system's block sizes then it can get \n> broken up.\n> \n> So yes, a sequential read will get broken up. A single read request for \n> a block may or may not get broken up. If you're fread()ing with set block \n> sizes you'll see the set sizes of blocks come through, but what the \n> underlying OS does is undefined; same for writing. If the underlying \n> block size is 8KB and you dump 4MB down on it, the OS may (and in many \n> cases does) decide to write part of it, do a read on a nearby sector, \n> then write the rest. This happens when doing long writes that end up \n> spanning block groups, because the inodes must be allocated. \n\nAlso keep in mind most disks have 512 byte blocks, so even if the file\nsystem is 8k, the disk block sizes are different. A given 8k or 1k file\nsystem block may not even be all in the same cylinder.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 25 Apr 2002 11:01:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "\nActually, this brings up a different point. We use 8k blocks now\nbecause at the time PostgreSQL was developed, it used BSD file systems,\nand those prefer 8k blocks, and there was some concept that an 8k write\nwas atomic, though with 512 byte disk blocks, that was incorrect. 
(We\nknew that at the time too, but we didn't have any options, so we just\nhoped.)\n\nIn fact, we now write pre-modified pages to WAL specifically because we\ncan't be sure an 8k page write to disk will be atomic. Part of the page\nmay make it to disk, and part may not.\n\nNow, with larger RAM and disk sizes, it may be time to consider larger\npage sizes, like 32k pages. That reduces the granularity of the cache,\nbut it may have other performance advantages that would be worth it.\n\nWhat people are actually suggesting with the read-ahead for sequential\nscans is basically a larger block size for sequential scans than for\nindex scans. While this makes sense, it may be better to just increase\nthe block size overall.\n\n---------------------------------------------------------------------------\n\nCurt Sampson wrote:\n> On Wed, 24 Apr 2002, Michael Loftis wrote:\n> \n> > A Block-sized read will not be broken up. But if you're reading in a\n> > size bigger than the underlying system's block sizes then it can get\n> > broken up.\n> \n> In which operating systems, and under what circumstances?\n> \n> I'll agree that some OSs may not coalesce adjacent reads into a\n> single read, but even so, submitting a bunch of single reads for\n> consecutive blocks is going to be much, much faster than if other,\n> random I/O occurred between those reads.\n> \n> > If the underlying\n> > block size is 8KB and you dump 4MB down on it, the OS may (and in many\n> > cases does) decide to write part of it, do a read on a nearby sector,\n> > then write the rest. This happens when doing long writes that end up\n> > spanning block groups because the inodes must be allocated.\n> \n> Um...we're talking about 64K vs 8K reads here, not 4 MB reads. I am\n> certainly not suggesting Postgres ever submit 4 MB read requests to the OS.\n> \n> I agree that any single-chunk reads or writes that cause non-adjacent\n> disk blocks to be accessed may be broken up. 
But in my sense,\n> they're \"broken up\" anyway, in that you have no choice but to take\n> a performance hit.\n> \n> > Further large read requests can of course be re-ordered by hardware.\n> > ...The OS also tags ICP, which can be re-ordered on block-sized chunks.\n> \n> Right. All the more reason to read in larger chunks when we know what we\n> need in advance, because that will give the OS, controllers, etc. more\n> advance information, and let them do the reads more efficiently.\n> \n> cjs\n> -- \n> Curt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n> Don't you know, in this new Dark Age, we're all light. --XTC\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 25 Apr 2002 11:34:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Thu, 2002-04-25 at 12:47, Curt Sampson wrote:\n> On Thu, 25 Apr 2002, Lincoln Yeoh wrote:\n> \n> > I think the raw partitions will be more trouble than they are worth.\n> > Reading larger chunks at appropriate circumstances seems to be the \"low\n> > hanging fruit\".\n> \n> That's certainly a good start. I don't know if the raw partitions\n> would be more trouble than they are worth, but it certainly would\n> be a lot more work, yes. One could do pretty much as well, I think,\n> by using the \"don't buffer blocks for this file\" option on those\n> OSes that have it.\n\nI was on a short DB2 tuning course and was told that on Win NT turning\noff cache causes about 15-20% speedup. 
\n\n(I don't know what exactly is sped up :)\n\n> > [1] The theory was the drive typically has to jump around a lot more for\n> > metadata than for files. In practice it worked pretty well, if I do say so\n> > myself :). Not sure if modern HDDs do specialized O/S metadata caching\n> > (wonder how many megabytes would typically be needed for 18GB drives :) ).\n> \n> Sure they do, though they don't necessarily read it all. Most\n> unix systems\n\nDo modern HDD's have unix inside them ;)\n\n> have special cache for namei lookups (turning a filename\n> into an i-node number), often one per-process as well as a system-wide\n> one. And on machines with a unified buffer cache for file data,\n> there's still a separate metadata cache.\n\n-----------\nHannu\n\n\n", "msg_date": "25 Apr 2002 19:41:19 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan Read-Ahead" }, { "msg_contents": "Bruce Momjian wrote:\n> Now, with larger RAM and disk sizes, it may be time to consider larger\n> page sizes, like 32k pages. That reduces the granularity of the cache,\n> but it may have other performance advantages that would be worth it.\n> \n> What people are actually suggesting with the read-ahead for sequential\n> scans is basically a larger block size for sequential scans than for\n> index scans. While this makes sense, it may be better to just increase\n> the block size overall.\n\nI have seen performance improvements by using 16K blocks over 8K blocks in\nsequential scans of large tables. \n\nI am investigating the performance difference between 16K and 8K block sizes on\none of my systems. I'll let you know what I see. 
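For the sequential-read half of that comparison, the raw effect of request size is easy to check outside the server with a short script. A rough sketch of the shape of such a test (the scratch-file size, block sizes, and choice of Python are all arbitrary illustration choices, not anyone's actual test setup; a multi-gigabyte file on a quiet machine is needed before the numbers say anything about the disk rather than the OS cache):

```python
import os
import tempfile
import time

def seq_read_time(path, bs):
    """Sequentially read the whole file in bs-byte requests; return (seconds, bytes)."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(bs)
            if not chunk:
                break
            total += len(chunk)
    return time.perf_counter() - start, total

if __name__ == "__main__":
    # Tiny scratch file just to show the shape of the test; far too small
    # to measure the disk (everything will come from the OS cache).
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(4 * 1024 * 1024))
        path = f.name
    try:
        for bs in (4096, 8192, 16384, 32768, 65536):
            elapsed, nbytes = seq_read_time(path, bs)
            print(f"bs={bs:6d}  {elapsed:.4f}s  {nbytes} bytes")
    finally:
        os.unlink(path)
```

This measures only the application-visible request size; as the dd numbers elsewhere in this thread show, kernel read-ahead tends to flatten the differences for sequential access.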
I am using pgbench for generic\nperformance levels.\n\nIf you would like to see any extra data, just let me know.\n", "msg_date": "Thu, 25 Apr 2002 14:13:40 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Tom Lane wrote:\n> ...\n> Curt Sampson <cjs@cynic.net> writes:\n> > 3. Proof by testing. I wrote a little ruby program to seek to a\n> > random point in the first 2 GB of my raw disk partition and read\n> > 1-8 8K blocks of data. (This was done as one I/O request.) (Using\n> > the raw disk partition I avoid any filesystem buffering.)\n> \n> And also ensure that you aren't testing the point at issue.\n> The point at issue is that *in the presence of kernel read-ahead*\n> it's quite unclear that there's any benefit to a larger request size.\n> Ideally the kernel will have the next block ready for you when you\n> ask, no matter what the request is.\n> ...\n\nI have to agree with Tom. I think the numbers below show that with\nkernel read-ahead, block size isn't an issue.\n\nThe big_file1 file used below is 2.0 gig of random data, and the\nmachine has 512 mb of main memory. This ensures that we're not\njust getting cached data.\n\nforeach i (4k 8k 16k 32k 64k 128k)\n echo $i\n time dd bs=$i if=big_file1 of=/dev/null\nend\n\nand the results:\n\nbs user kernel elapsed\n4k: 0.260 7.740 1:27.25\n8k: 0.210 8.060 1:30.48\n16k: 0.090 7.790 1:30.88\n32k: 0.060 8.090 1:32.75\n64k: 0.030 8.190 1:29.11\n128k: 0.070 9.830 1:28.74\n\nso with kernel read-ahead, we have basically the same elapsed (wall\ntime) regardless of block size. Sure, user time drops to a low at 64k\nblocksize, but kernel time is increasing.\n\n\nYou could argue that this is a contrived example, no other I/O is\nbeing done. Well I created a second 2.0g file (big_file2) and did two\nsimultaneous reads from the same disk. 
Sure performance went to hell\nbut it shows blocksize is still irrelevant in a multi I/O environment\nwith sequential read-ahead.\n\nforeach i ( 4k 8k 16k 32k 64k 128k )\n echo $i\n time dd bs=$i if=big_file1 of=/dev/null &\n time dd bs=$i if=big_file2 of=/dev/null &\n wait\nend\n\nbs user kernel elapsed\n4k: 0.480 8.290 6:34.13 bigfile1\n 0.320 8.730 6:34.33 bigfile2\n8k: 0.250 7.580 6:31.75\n 0.180 8.450 6:31.88\n16k: 0.150 8.390 6:32.47\n 0.100 7.900 6:32.55\n32k: 0.190 8.460 6:24.72\n 0.060 8.410 6:24.73\n64k: 0.060 9.350 6:25.05\n 0.150 9.240 6:25.13\n128k: 0.090 10.610 6:33.14\n 0.110 11.320 6:33.31\n\n\nthe differences in read times are basically in the mud. Blocksize\njust doesn't matter much with the kernel doing readahead.\n\n-Kyle\n", "msg_date": "Thu, 25 Apr 2002 17:40:53 -0700", "msg_from": "Kyle <kaf@nwlink.com>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan Read-Ahead " }, { "msg_contents": "\nNice test. Would you test simultaneous 'dd' on the same file, perhaps\nwith a slight delay between the two so they don't read each other's\nblocks?\n\nseek() in the file will turn off read-ahead in most OS's. I am not\nsaying this is a major issue for PostgreSQL but the numbers would be\ninteresting.\n\n\n---------------------------------------------------------------------------\n\nKyle wrote:\n> Tom Lane wrote:\n> > ...\n> > Curt Sampson <cjs@cynic.net> writes:\n> > > 3. Proof by testing. I wrote a little ruby program to seek to a\n> > > random point in the first 2 GB of my raw disk partition and read\n> > > 1-8 8K blocks of data. (This was done as one I/O request.) 
(Using\n> > > the raw disk partition I avoid any filesystem buffering.)\n> > \n> > And also ensure that you aren't testing the point at issue.\n> > The point at issue is that *in the presence of kernel read-ahead*\n> > it's quite unclear that there's any benefit to a larger request size.\n> > Ideally the kernel will have the next block ready for you when you\n> > ask, no matter what the request is.\n> > ...\n> \n> I have to agree with Tom. I think the numbers below show that with\n> kernel read-ahead, block size isn't an issue.\n> \n> The big_file1 file used below is 2.0 gig of random data, and the\n> machine has 512 mb of main memory. This ensures that we're not\n> just getting cached data.\n> \n> foreach i (4k 8k 16k 32k 64k 128k)\n> echo $i\n> time dd bs=$i if=big_file1 of=/dev/null\n> end\n> \n> and the results:\n> \n> bs user kernel elapsed\n> 4k: 0.260 7.740 1:27.25\n> 8k: 0.210 8.060 1:30.48\n> 16k: 0.090 7.790 1:30.88\n> 32k: 0.060 8.090 1:32.75\n> 64k: 0.030 8.190 1:29.11\n> 128k: 0.070 9.830 1:28.74\n> \n> so with kernel read-ahead, we have basically the same elapsed (wall\n> time) regardless of block size. Sure, user time drops to a low at 64k\n> blocksize, but kernel time is increasing.\n> \n> \n> You could argue that this is a contrived example, no other I/O is\n> being done. Well I created a second 2.0g file (big_file2) and did two\n> simultaneous reads from the same disk. 
Sure performance went to hell\n> but it shows blocksize is still irrelevant in a multi I/O environment\n> with sequential read-ahead.\n> \n> foreach i ( 4k 8k 16k 32k 64k 128k )\n> echo $i\n> time dd bs=$i if=big_file1 of=/dev/null &\n> time dd bs=$i if=big_file2 of=/dev/null &\n> wait\n> end\n> \n> bs user kernel elapsed\n> 4k: 0.480 8.290 6:34.13 bigfile1\n> 0.320 8.730 6:34.33 bigfile2\n> 8k: 0.250 7.580 6:31.75\n> 0.180 8.450 6:31.88\n> 16k: 0.150 8.390 6:32.47\n> 0.100 7.900 6:32.55\n> 32k: 0.190 8.460 6:24.72\n> 0.060 8.410 6:24.73\n> 64k: 0.060 9.350 6:25.05\n> 0.150 9.240 6:25.13\n> 128k: 0.090 10.610 6:33.14\n> 0.110 11.320 6:33.31\n> \n> \n> the differences in read times are basically in the mud. Blocksize\n> just doesn't matter much with the kernel doing readahead.\n> \n> -Kyle\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 25 Apr 2002 22:18:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan Read-Ahead" }, { "msg_contents": "On Thu, 25 Apr 2002, Tom Lane wrote:\n\n> Curt Sampson <cjs@cynic.net> writes:\n> > 1. Theoretical proof: two components of the delay in retrieving a\n> > block from disk are the disk arm movement and the wait for the\n> > right block to rotate under the head.\n>\n> > When retrieving, say, eight adjacent blocks, these will be spread\n> > across no more than two cylinders (with luck, only one).\n>\n> Weren't you contending earlier that with modern disk mechs you really\n> have no idea where the data is?\n\nNo, that was someone else. 
I contend that with pretty much any\nlarge-scale storage mechanism (i.e., anything beyond ramdisks),\nyou will find that accessing two adjacent blocks is almost always\n1) close to as fast as accessing just the one, and 2) much, much\nfaster than accessing two blocks that are relatively far apart.\n\nThere will be the odd case where the two adjacent blocks are\nphysically far apart, but this is rare.\n\nIf this idea doesn't hold true, the whole idea that sequential\nreads are faster than random reads falls apart, and the optimizer\nshouldn't even have the option to make random reads cost more, much\nless have it set to four rather than one (or whatever it's set to).\n\n> You're asserting as an article of\n> faith that the OS has been able to place the file's data blocks\n> optimally --- or at least well enough to avoid unnecessary seeks.\n\nSo are you, in the optimizer. But that's all right; the OS often\ncan and does do this placement; the FFS filesystem is explicitly\ndesigned to do this sort of thing. If the filesystem isn't empty\nand the files grow a lot they'll be split into large fragments,\nbut the fragments will be contiguous.\n\n> But just a few days ago I was getting told that random_page_cost\n> was BS because there could be no such placement.\n\nI've been arguing against that point as well.\n\n> And also ensure that you aren't testing the point at issue.\n> The point at issue is that *in the presence of kernel read-ahead*\n> it's quite unclear that there's any benefit to a larger request size.\n\nI will test this.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n\n", "msg_date": "Fri, 26 Apr 2002 11:27:17 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan Read-Ahead " }, { "msg_contents": "On Thu, 25 Apr 2002, Bruce Momjian wrote:\n\n> Also keep in mind most disks have 512 byte blocks, so even if the file\n> system is 8k, the disk block sizes are different. A given 8k or 1k file\n> system block may not even be all in the same cylinder.\n\nRight. Though much of the time they will be in the same cylinder,\nsometimes they will be in adjacent cylinders if the drive manufacturer\nhas made cylinder sizes that are not multiples of 8K. I don't think\nthis is terribly frequent, but there's no way to substantiate that\nassumption without knowing the real geometries of the drive, which\ngenerally are not given out. (What is reported to the OS has not\nbeen the real geometry for years now, because drive manufacturers\nlong ago started putting more blocks on the outer cylinders than\nthe inner ones.)\n\nHowever, even that they will be in adjacent cylinders doesn't always\nhold: depending on how the disk subsystems are partitioned, you\nmight be crossing a boundary where two partitions are joined\ntogether, necessitating a seek. But this case would be quite rare.\n\nYou can always find conditions, in modern drive subsystems, where\nthe \"read close together\" idea doesn't hold, but in the vast, vast\nmajority of circumstances it does.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Fri, 26 Apr 2002 11:37:08 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Thu, 25 Apr 2002, Bruce Momjian wrote:\n\n> Actually, this brings up a different point. 
We use 8k blocks now\n> because at the time PostgreSQL was developed, it used BSD file systems,\n> and those prefer 8k blocks, and there was some concept that an 8k write\n> was atomic, though with 512 byte disk blocks, that was incorrect. (We\n> knew that at the time too, but we didn't have any options, so we just\n> hoped.)\n\nMS SQL Server has an interesting way of dealing with this. They have a\n\"torn\" bit in each 512-byte chunk of a page, and this bit is set the\nsame for each chunk. When they are about to write out a page, they first\nflip all of the torn bits and then do the write. If the write does not\ncomplete due to a system crash or whatever, this can be detected later\nbecause the torn bits won't match across the entire page.\n\n> Now, with larger RAM and disk sizes, it may be time to consider larger\n> page sizes, like 32k pages. That reduces the granularity of the cache,\n> but it may have other performance advantages that would be worth it.\n\nIt really depends on the block size your underlying layer is using.\nReading less than that is never useful as you pay for that entire\nblock anyway. (E.g., on an FFS filesystem with 8K blocks, the OS\nalways reads 8K even if you ask for only 4K.)\n\nOn the other hand, reading more does have a tangible cost, as you\nsaw from the benchmark I posted; reading 16K on my system cost 20%\nmore than reading 8K, and used twice the buffer space. If I'm doing\nlots of really random reads, this would result in a performance\nloss (due to doing more I/O, and having less chance that the next\nitem I want is in the buffer cache).\n\nFor some reason I thought we had the ability to change the block\nsize that postgres uses on a table-by-table basis, but I can't find\nanything in the docs about that. Maybe it's just because I saw some\nsupport in the code for it. 
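To put a rough number on that random-read cost: when a query fetches single tuples at random, every fetch drags one whole page through the buffer cache, so the page size sets the read amplification directly. A back-of-the-envelope sketch (the tuple size and fetch count are made-up illustration values):

```python
TUPLE_BYTES = 100      # bytes actually wanted per fetch (made-up illustration value)
FETCHES = 10_000       # random single-tuple fetches, each landing on a distinct page

def bytes_moved(page_size, fetches):
    # Each random single-tuple fetch drags one whole page through the cache.
    return page_size * fetches

for page in (8 * 1024, 32 * 1024):
    amp = bytes_moved(page, FETCHES) / (TUPLE_BYTES * FETCHES)
    print(f"{page // 1024:2d}K pages: {amp:.1f}x the useful bytes moved")
```

Whatever the exact numbers, 32K pages move four times the data of 8K pages for the same scattered fetches, and fit a quarter as many distinct hot pages in the same cache.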
But this feature would be a nice addition\nfor those cases where a larger block size would help.\n\nBut I think that 8K is a pretty good default, and I think that 32K\nblocks would result in a quite noticeable performance reduction for\napps that did a lot of random I/O.\n\n> What people are actually suggesting with the read-ahead for sequential\n> scans is basically a larger block size for sequential scans than for\n> index scans. While this makes sense, it may be better to just increase\n> the block size overall.\n\nI don't think so, because the smaller block size is definitely\nbetter for random I/O.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Fri, 26 Apr 2002 11:51:43 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Curt Sampson wrote:\n> On Thu, 25 Apr 2002, Bruce Momjian wrote:\n> \n> > Actually, this brings up a different point. We use 8k blocks now\n> > because at the time PostgreSQL was developed, it used BSD file systems,\n> > and those prefer 8k blocks, and there was some concept that an 8k write\n> > was atomic, though with 512 byte disk blocks, that was incorrect. (We\n> > knew that at the time too, but we didn't have any options, so we just\n> > hoped.)\n> \n> MS SQL Server has an interesting way of dealing with this. They have a\n> \"torn\" bit in each 512-byte chunk of a page, and this bit is set the\n> same for each chunk. When they are about to write out a page, they first\n> flip all of the torn bits and then do the write. If the write does not\n> complete due to a system crash or whatever, this can be detected later\n> because the torn bits won't match across the entire page.\n\nI was wondering, how does knowing the block is corrupt help MS SQL? 
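For concreteness, the detection scheme quoted above can be sketched in a few lines. (Where the bit actually lives inside each 512-byte sector is an assumption here for illustration; the description doesn't say, and this is only the detection half, not what SQL Server does once a torn page is found.)

```python
PAGE = 8192
SECTOR = 512
NSECTORS = PAGE // SECTOR  # 16 sectors per page

def stamp(page: bytearray, bit: int, upto: int = NSECTORS) -> None:
    """Write the torn bit into the first `upto` sectors; a crash mid-write
    is modelled by stamping only some of the sectors."""
    for i in range(upto):
        last = (i + 1) * SECTOR - 1          # assumed bit position: last byte of sector
        page[last] = (page[last] & 0xFE) | bit

def is_torn(page) -> bool:
    bits = {page[(i + 1) * SECTOR - 1] & 1 for i in range(NSECTORS)}
    return len(bits) != 1                    # sectors disagree => partial write

page = bytearray(PAGE)
stamp(page, 1)                # complete write: all 16 sectors carry the same bit
assert not is_torn(page)

stamp(page, 0, upto=5)        # simulated crash: only 5 sectors of the next write landed
assert is_torn(page)
print("torn page detected")
```

Note that the check only ever tells you the page is inconsistent; it cannot reconstruct the missing sectors by itself.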
\nRight now, we write changed pages to WAL, then later write them to disk.\nI have always been looking for a way to prevent these WAL writes. The\n512-byte bit seems interesting, but how does it help?\n\nAnd how does the bit help them with partial block writes? Is the bit at\nthe end of the block? Is that reliable?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 20 Jun 2002 21:58:15 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Thu, 20 Jun 2002, Bruce Momjian wrote:\n\n> > MS SQL Server has an interesting way of dealing with this. They have a\n> > \"torn\" bit in each 512-byte chunk of a page, and this bit is set the\n> > same for each chunk. When they are about to write out a page, they first\n> > flip all of the torn bits and then do the write. If the write does not\n> > complete due to a system crash or whatever, this can be detected later\n> > because the torn bits won't match across the entire page.\n>\n> I was wondering, how does knowing the block is corrupt help MS SQL?\n\nI'm trying to recall, but I can't off hand. I'll have to look it\nup in my Inside SQL Server book, which is at home right now,\nunfortunately. I'll bring the book into work and let you know the\ndetails later.\n\n> Right now, we write changed pages to WAL, then later write them to disk.\n\nAh. You write the entire page? MS writes only the changed tuple.\nAnd DB2, in fact, goes one better and writes only the part of the\ntuple up to the change, IIRC. Thus, if you put smaller and/or more\nfrequently changed columns first, you'll have smaller logs.\n\n> I have always been looking for a way to prevent these WAL writes. 
The\n> 512-byte bit seems interesting, but how does it help?\n\nWell, this would at least let you reduce the write to the 512-byte\nchunk that changed, rather than writing the entire 8K page.\n\n> And how does the bit help them with partial block writes? Is the bit at\n> the end of the block? Is that reliable?\n\nThe bit is somewhere within every 512 byte \"disk page\" within the\n8192 byte \"filesystem/database page.\" So an 8KB page is divided up\nlike this:\n\n | <----------------------- 8 Kb ----------------------> |\n\n | 512b | 512b | 512b | 512b | 512b | 512b | 512b | 512b |\n\nThus, the tear bits start out like this:\n\n | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n\nAfter a successful write of the entire page, you have this:\n\n | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |\n\nIf the write is unsuccessful, you end up with something like this:\n\n | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 |\n\nAnd now you know which parts of your page got written, and which\nparts didn't.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Fri, 21 Jun 2002 11:18:14 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> And now you know which parts of your page got written, and which\n> parts didn't.\n\nYes ... and what do you *do* about it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Jun 2002 10:14:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": "On Fri, 21 Jun 2002, Tom Lane wrote:\n\n> Curt Sampson <cjs@cynic.net> writes:\n> > And now you know which parts of your page got written, and which\n> > parts didn't.\n>\n> Yes ... and what do you *do* about it?\n\nOk. 
Here's the extract from _Inside Microsoft SQL Server 7.0_, page 207:\n\n    torn page detection When TRUE, this option causes a bit to be\n\tflipped for each 512-byte sector in a database page (8 KB)\n\twhenever the page is written to disk. This option allows\n\tSQL Server to detect incomplete I/O operations caused by\n\tpower failures or other system outages. If a bit is in the\n\twrong state when the page is later read by SQL Server, this\n\tmeans the page was written incorrectly; a torn page has\n\tbeen detected. Although SQL Server database pages are 8\n\tKB, disks perform I/O operations using 512-byte sectors.\n\tTherefore, 16 sectors are written per database page. A\n\ttorn page can occur if the system crashes (for example,\n\tbecause of power failure) between the time the operating\n\tsystem writes the first 512-byte sector to disk and the\n\tcompletion of the 8-KB I/O operation. If the first sector\n\tof a database page is successfully written before the crash,\n\tit will appear that the database page on disk was updated,\n\talthough it might not have succeeded. Using battery-backed\n\tdisk caches can ensure that data is [sic] successfully\n\twritten to disk or not written at all. In this case, don't\n\tset torn page detection to TRUE, as it isn't needed. If a\n\ttorn page is detected, the database will need to be restored\n\tfrom backup because it will be physically inconsistent.\n\nAs I understand it, this is not a problem for postgres because the\nentire page is written to the log. So postgres is safe, but quite\ninefficient. (It would be much more efficient to write just the\nchanged tuple, or even just the changed values within the tuple,\nto the log.)\n\nAdding these torn bits would allow postgres at least to write to\nthe log just the 512-byte sectors that have changed, rather than\nthe entire 8 KB page.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n\n", "msg_date": "Sat, 22 Jun 2002 17:41:30 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": "On Thu, 2002-06-20 at 21:58, Bruce Momjian wrote:\n> I was wondering, how does knowing the block is corrupt help MS SQL? \n> Right now, we write changed pages to WAL, then later write them to disk.\n> I have always been looking for a way to prevent these WAL writes. The\n> 512-byte bit seems interesting, but how does it help?\n> \n> And how does the bit help them with partial block writes? Is the bit at\n> the end of the block? Is that reliable?\n> \n\nMy understanding of this is as follows:\n\n1) On most commercial systems, if you get a corrupted block (from\npartial write or whatever) you need to restore the file(s) from the most\nrecent backup, and replay the log from the log archive (usually only the\ndamaged files will be written to during replay). \n\n2) If you can't deal with the downtime to recover the file, then EMC,\nSun, or IBM will sell you an expensive disk array with an NVRAM cache\nthat will do atomic writes. Some plain-vanilla SCSI disks are also\ncapable of atomic writes, though usually they don't use NVRAM to do it. \n\nThe database must then make sure that each page-write gets translated\ninto exactly one SCSI-level write. This is one reason why ORACLE and\nSybase recommend that you use raw disk partitions for high availability.\nSome operating systems support this through the filesystem, but it is OS\ndependent. I think Solaris 7 & 8 has support for this, but I'm not sure.\n\nPostgreSQL has trouble because it can neither archive logs for replay,\nnor use raw disk partitions.\n\n\nOne other point:\n\nPage pre-image logging is fundamentally the same as what Jim Grey's\nbook[1] would call \"careful writes\". 
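As a concrete aside, the syscall-level half of "the database decides when data reaches the device" can be sketched portably. (This is only an illustration: O_DSYNC availability varies by platform, os.pwrite is POSIX-only, and a synchronous write is still not an atomic one; it can tear at sector granularity, which is exactly what the NVRAM/atomic-write point is about.)

```python
import os
import tempfile

def open_sync(path):
    """Open for writing with per-write synchronization when the platform has it.

    O_DSYNC/O_SYNC are POSIX and not universal; where neither exists this
    degrades to a plain open plus an explicit fsync() after each write.
    """
    dsync = getattr(os, "O_DSYNC", 0) or getattr(os, "O_SYNC", 0)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | dsync, 0o600)
    return fd, bool(dsync)

def write_page(fd, have_dsync, offset, page):
    # Synchronous is not atomic: this 8K write can still be broken up
    # into several sector-sized writes by the OS and the drive.
    os.pwrite(fd, page, offset)              # POSIX-only positioned write
    if not have_dsync:
        os.fsync(fd)                         # fallback: force it out explicitly

path = os.path.join(tempfile.mkdtemp(), "pagefile")
fd, have_dsync = open_sync(path)
write_page(fd, have_dsync, 0, b"\x00" * 8192)
os.close(fd)
print("wrote one 8K page,", "O_DSYNC" if have_dsync else "fsync fallback")
```

Whether each such write then maps to exactly one device-level write is the part the application cannot control from here, which is the point being made above.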
I don't believe they should be in\nthe XLOG, because we never need to keep the pre-images after we're sure\nthe buffer has made it to the disk. Instead, we should have the buffer\nIO routines implement ping-pong writes of some kind if we want\nprotection from partial writes.\n\n\nDoes any of this make sense?\n\n\n\n;jrnield\n\n\n[1] Grey, J. and Reuter, A. (1993). \"Transaction Processing: Concepts\n\tand Techniques\". Morgan Kaufmann.\n\n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n", "msg_date": "22 Jun 2002 18:22:58 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "J. R. Nield wrote:\n> One other point:\n> \n> Page pre-image logging is fundamentally the same as what Jim Grey's\n> book[1] would call \"careful writes\". I don't believe they should be in\n> the XLOG, because we never need to keep the pre-images after we're sure\n> the buffer has made it to the disk. Instead, we should have the buffer\n> IO routines implement ping-pong writes of some kind if we want\n> protection from partial writes.\n\nPing-pong writes to where? We have to fsync, and rather than fsync that\narea and WAL, we just do WAL. Not sure about a win there.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 22 Jun 2002 19:17:11 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Sat, 2002-06-22 at 19:17, Bruce Momjian wrote:\n> J. R. Nield wrote:\n> > One other point:\n> > \n> > Page pre-image logging is fundamentally the same as what Jim Grey's\n> > book[1] would call \"careful writes\". 
I don't believe they should be in\n> > the XLOG, because we never need to keep the pre-images after we're sure\n> > the buffer has made it to the disk. Instead, we should have the buffer\n> > IO routines implement ping-pong writes of some kind if we want\n> > protection from partial writes.\n> \n> Ping-pong writes to where? We have to fsync, and rather than fsync that\n> area and WAL, we just do WAL. Not sure about a win there.\n> \n\nThe key question is: do we have some method to ensure that the OS\ndoesn't do the writes in parallel?\n\nIf the OS will ensure that one of the two block writes of a ping-pong\ncompletes before the other starts, then we don't need to fsync() at \nall. \n\nThe only thing we are protecting against is the possibility of both\nwrites being partial. If neither is done, that's fine because WAL will\nprotect us. If the first write is partial, we will detect that and use\nthe old data from the other, then recover from WAL. If the first is\ncomplete but the second is partial, then we detect that and use the\nnewer block from the first write. If the second is complete but the\nfirst is partial, we detect that and use the newer block from the second\nwrite.\n\nSo does anyone know a way to prevent parallel writes in one of the\ncommon unix standards? Do they say anything about this?\n\nIt would seem to me that if the same process does both ping-pong writes,\nthen there should be a cheap way to enforce a serial order. I could be\nwrong though.\n\nAs to where the first block of the ping-pong should go, maybe we could\nreserve a file with nBlocks space for them, and write the information\nabout which block was being written to the XLOG for use in recovery.\nThere are many other ways to do it.\n\n;jrnield\n\n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n", "msg_date": "23 Jun 2002 08:37:53 -0400", "msg_from": "\"J. R. 
Nield\" <jrnield@usol.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On 23 Jun 2002, J. R. Nield wrote:\n\n> On Sat, 2002-06-22 at 19:17, Bruce Momjian wrote:\n> > J. R. Nield wrote:\n> > > One other point:\n> > >\n> > > Page pre-image logging is fundamentally the same as what Jim Grey's\n> > > book[1] would call \"careful writes\". I don't believe they should be in\n> > > the XLOG, because we never need to keep the pre-images after we're sure\n> > > the buffer has made it to the disk. Instead, we should have the buffer\n> > > IO routines implement ping-pong writes of some kind if we want\n> > > protection from partial writes.\n> >\n> > Ping-pong writes to where? We have to fsync, and rather than fsync that\n> > area and WAL, we just do WAL. Not sure about a win there.\n\nPresumably the win is that, \"we never need to keep the pre-images\nafter we're sure the buffer has made it to the disk.\" So the\npre-image log can be completely ditched when we shut down the\nserver, do a full system sync, or whatever. This keeps the log file\nsize down, which means faster recovery, less to back up (when we\nstart getting transaction logs that can be backed up), etc.\n\nThis should also allow us to disable completely the ping-pong writes\nif we have a disk subsystem that we trust. (E.g., a disk array with\nbattery backed memory.) That would, in theory, produce a nice little\nperformance increase when lots of inserts and/or updates are being\ncommitted, as we have much, much less to write to the log file.\n\nAre there stats that track, e.g., the bandwidth of writes to the\nlog file? 
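Lacking such stats, one crude way to estimate it from outside the server is to sample the on-disk size of the log directory over time. (This is only a sketch: the path is installation-specific, and since log segments get recycled in place, directory growth gives a floor on log traffic rather than the true write bandwidth.)

```python
import os

def dir_bytes(path):
    """Total size of all regular files under path."""
    return sum(
        os.path.getsize(os.path.join(root, name))
        for root, _dirs, files in os.walk(path)
        for name in files
    )

def write_rate(samples):
    """samples: [(seconds, bytes), ...] in time order; returns bytes/sec over the span."""
    (t0, b0), (t1, b1) = samples[0], samples[-1]
    return (b1 - b0) / (t1 - t0)

# Usage sketch against a running server (the WAL directory path is an assumption):
#
#   import time
#   samples = []
#   for _ in range(5):
#       samples.append((time.monotonic(), dir_bytes("/usr/local/pgsql/data/pg_xlog")))
#       time.sleep(5)
#   print(write_rate(samples), "bytes/sec (lower bound)")
```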
I'd be interested in knowing just what kind of savings\none might see by doing this.\n\n> The key question is: do we have some method to ensure that the OS\n> doesn't do the writes in parallel?...\n> It would seem to me that if the same process does both ping-pong writes,\n> then there should be a cheap way to enforce a serial order. I could be\n> wrong though.\n\nWell, whether or not there's a cheap way depends on whether you consider\nfsync to be cheap. :-)\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Sun, 23 Jun 2002 22:33:07 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> This should also allow us to disable completely the ping-pong writes\n> if we have a disk subsystem that we trust.\n\nIf we have a disk subsystem we trust, we just disable fsync on the\nWAL and the performance issue largely goes away.\n\nI concur with Bruce: the reason we keep page images in WAL is to\nminimize the number of places we have to fsync, and thus the amount of\nhead movement required for a commit. Putting the page images elsewhere\ncannot be a win AFAICS.\n\n> Well, whether or not there's a cheap way depends on whether you consider\n> fsync to be cheap. 
:-)\n\nIt's never cheap :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 23 Jun 2002 11:19:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": "On Sun, 23 Jun 2002, Tom Lane wrote:\n\n> Curt Sampson <cjs@cynic.net> writes:\n> > This should also allow us to disable completely the ping-pong writes\n> > if we have a disk subsystem that we trust.\n>\n> If we have a disk subsystem we trust, we just disable fsync on the\n> WAL and the performance issue largely goes away.\n\nNo, you can't do this. If you don't fsync(), there's no guarantee\nthat the write ever got out of the computer's buffer cache and to\nthe disk subsystem in the first place.\n\n> I concur with Bruce: the reason we keep page images in WAL is to\n> minimize the number of places we have to fsync, and thus the amount of\n> head movement required for a commit.\n\nAn fsync() does not necessarially cause head movement, or any real\ndisk writes at all. If you're writing to many external disk arrays,\nfor example, the fsync() ensures that the data are in the disk array's\nnon-volatile or UPS-backed RAM, no more. The array might hold the data\nfor quite some time before it actually writes it to disk.\n\nBut you're right that it's faster, if you're going to write out changed\npages and have have the ping-pong file and the transaction log on the\nsame disk, just to write out the entire page to the transaction log.\n\nSo what we would really need to implement, if we wanted to be more\nefficient with trusted disk subsystems, would be the option of writing\nto the log only the changed row or changed part of the row, or writing\nthe entire changed page. I don't know how hard this would be....\n\n> > Well, whether or not there's a cheap way depends on whether you consider\n> > fsync to be cheap. 
:-)\n>\n> It's never cheap :-(\n\nActually, with a good external RAID system with non-volatile RAM,\nit's a good two to four orders of magnitude cheaper than writing to a\ndirectly connected disk that doesn't claim the write is complete until\nit's physically on disk. I'd say that it qualifies as at least \"not\nexpensive.\" Not that you want to do it more often than you have to\nanyway....\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n    Don't you know, in this new Dark Age, we're all light.  --XTC\n\n", "msg_date": "Mon, 24 Jun 2002 01:10:26 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": "On Sun, 2002-06-23 at 11:19, Tom Lane wrote: \n> Curt Sampson <cjs@cynic.net> writes:\n> > This should also allow us to disable completely the ping-pong writes\n> > if we have a disk subsystem that we trust.\n> \n> If we have a disk subsystem we trust, we just disable fsync on the\n> WAL and the performance issue largely goes away.\n\nIt wouldn't work because the OS buffering interferes, and we need those\nWAL records on disk up to the greatest LSN of the Buffer we will be writing.\n\n\nWe already buffer WAL ourselves. We also already buffer regular pages.\nWhenever we write a Buffer out of the buffer cache, it is because we\nreally want that page on disk and wanted to start an IO. If that's not\nthe case, then we should have more block buffers! \n\nSo since we have all this buffering designed especially to meet our\nneeds, and since the OS buffering is in the way, can someone explain to\nme why postgresql would ever open a file without the O_DSYNC flag if the\nplatform supports it? \n\n\n\n> \n> I concur with Bruce: the reason we keep page images in WAL is to\n> minimize the number of places we have to fsync, and thus the amount of\n> head movement required for a commit. 
Putting the page images elsewhere\n> cannot be a win AFAICS.\n\n\nWhy not put all the page images in a single pre-allocated file and treat\nit as a ring? How could this be any worse than flushing them in the WAL\nlog? \n\nMaybe fsync would be slower with two files, but I don't see how\nfdatasync would be, and most platforms support that. \n\nWhat would improve performance would be to have a dbflush process that\nwould work in the background flushing buffers in groups and trying to\nstay ahead of ReadBuffer requests. That would let you do the temporary\nside of the ping-pong as a huge O_DSYNC writev(2) request (or\nfdatasync() once) and then write out the other buffers. It would also\ntend to prevent the other backends from blocking on write requests. \n\nA dbflush could also support aio_read/aio_write on platforms like\nSolaris and WindowsNT that support it. \n\nAm I correct that right now, buffers only get written when they get\nremoved from the free list for reuse? So a released dirty buffer will\nsit in the buffer free list until it becomes the Least Recently Used\nbuffer, and will then cause a backend to block for IO in a call to\nBufferAlloc? \n\nThis would explain why we like using the OS buffer cache, and why our\nperformance is troublesome when we have to do synchronous IO writes, and\nwhy fsync() takes so long to complete. All of the backends block for\neach call to BufferAlloc() after a large table update by a single\nbackend, and then the OS buffers are always full of our \"written\" data. \n\nAm I reading the bufmgr code correctly? I already found an imaginary\nrace condition there once :-) \n\n;jnield \n\n\n> \n> > Well, whether or not there's a cheap way depends on whether you consider\n> > fsync to be cheap. :-)\n> \n> It's never cheap :-(\n> \n-- \nJ. R. Nield\njrnield@usol.com\n\n", "msg_date": "23 Jun 2002 13:57:19 -0400", "msg_from": "\"J. R. 
Nield\" <jrnield@usol.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On 23 Jun 2002, J. R. Nield wrote:\n\n> So since we have all this buffering designed especially to meet our\n> needs, and since the OS buffering is in the way, can someone explain to\n> me why postgresql would ever open a file without the O_DSYNC flag if the\n> platform supports it?\n\nIt's more code, if there are platforms out there that don't support\nO_DYSNC. (We still have to keep the old fsync code.) On the other hand,\nO_DSYNC could save us a disk arm movement over fsync() because it\nappears to me that fsync is also going to force a metadata update, which\nmeans that the inode blocks have to be written as well.\n\n> Maybe fsync would be slower with two files, but I don't see how\n> fdatasync would be, and most platforms support that.\n\nBecause, if both files are on the same disk, you still have to move\nthe disk arm from the cylinder at the current log file write point\nto the cylinder at the current ping-pong file write point. And then back\nagain to the log file write point cylinder.\n\nIn the end, having a ping-pong file as well seems to me unnecessary\ncomplexity, especially when anyone interested in really good\nperformance is going to buy a disk subsystem that guarantees no\ntorn pages and thus will want to turn off the ping-pong file writes\nentirely, anyway.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n\n", "msg_date": "Mon, 24 Jun 2002 03:15:01 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Sun, 2002-06-23 at 12:10, Curt Sampson wrote:\n> \n> So what we would really need to implement, if we wanted to be more\n> efficient with trusted disk subsystems, would be the option of writing\n> to the log only the changed row or changed part of the row, or writing\n> the entire changed page. I don't know how hard this would be....\n> \nWe already log that stuff. The page images are in addition to the\n\"Logical Changes\", so we could just stop logging the page images.\n\n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n", "msg_date": "23 Jun 2002 14:15:17 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "J. R. Nield wrote:\n> So since we have all this buffering designed especially to meet our\n> needs, and since the OS buffering is in the way, can someone explain to\n> me why postgresql would ever open a file without the O_DSYNC flag if the\n> platform supports it? \n\nWe sync only WAL, not the other pages, except for the sync() call we do\nduring checkpoint when we discard old WAL files.\n\n> > I concur with Bruce: the reason we keep page images in WAL is to\n> > minimize the number of places we have to fsync, and thus the amount of\n> > head movement required for a commit. Putting the page images elsewhere\n> > cannot be a win AFAICS.\n> \n> \n> Why not put all the page images in a single pre-allocated file and treat\n> it as a ring? How could this be any worse than flushing them in the WAL\n> log? \n> \n> Maybe fsync would be slower with two files, but I don't see how\n> fdatasync would be, and most platforms support that. 
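The fsync()/fdatasync() distinction at issue here can be sketched as follows (a minimal illustration in Python, whose os module exposes the same system calls being discussed; the page size and scratch file are arbitrary, and os.fdatasync is not available on every platform):

```python
import os
import tempfile

BLCKSZ = 8192  # PostgreSQL's default page size

def write_page_durably(fd, page, offset, datasync_only=False):
    """Write one page at the given offset and force it to stable storage.

    fdatasync() flushes only the file's data blocks; fsync() also flushes
    inode metadata (mtime and friends), which can mean an extra write to
    a different part of the disk.
    """
    os.lseek(fd, offset, os.SEEK_SET)
    written = os.write(fd, page)
    assert written == len(page)
    if datasync_only:
        os.fdatasync(fd)  # data blocks only
    else:
        os.fsync(fd)      # data blocks plus inode metadata

fd, path = tempfile.mkstemp()
page = b"\x00" * BLCKSZ
write_page_durably(fd, page, 0, datasync_only=True)
write_page_durably(fd, page, BLCKSZ, datasync_only=False)
os.close(fd)
os.unlink(path)
```

On platforms that support it, opening the file with O_DSYNC instead makes every write() behave like a write() followed by fdatasync(), which is what the open_datasync setting of wal_sync_method selects.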
\n\nWe have fdatasync option for WAL in postgresql.conf.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Sun, 23 Jun 2002 15:34:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Curt Sampson wrote:\n> On 23 Jun 2002, J. R. Nield wrote:\n> \n> > So since we have all this buffering designed especially to meet our\n> > needs, and since the OS buffering is in the way, can someone explain to\n> > me why postgresql would ever open a file without the O_DSYNC flag if the\n> > platform supports it?\n> \n> It's more code, if there are platforms out there that don't support\n> O_DYSNC. (We still have to keep the old fsync code.) On the other hand,\n> O_DSYNC could save us a disk arm movement over fsync() because it\n> appears to me that fsync is also going to force a metadata update, which\n> means that the inode blocks have to be written as well.\n\nAgain, see postgresql.conf:\n\n#wal_sync_method = fsync # the default varies across platforms:\n# # fsync, fdatasync, open_sync, or open_datasync\n\n> \n> > Maybe fsync would be slower with two files, but I don't see how\n> > fdatasync would be, and most platforms support that.\n> \n> Because, if both files are on the same disk, you still have to move\n> the disk arm from the cylinder at the current log file write point\n> to the cylinder at the current ping-pong file write point. 
And then back\n> again to the log file write point cylinder.\n> \n> In the end, having a ping-pong file as well seems to me unnecessary\n> complexity, especially when anyone interested in really good\n> performance is going to buy a disk subsystem that guarantees no\n> torn pages and thus will want to turn off the ping-pong file writes\n> entirely, anyway.\n\nYes, I don't see writing to two files vs. one to be any win, especially\nwhen we need to fsync both of them. What I would really like is to\navoid the double I/O of writing to WAL and to the data file; improving\nthat would be a huge win.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Sun, 23 Jun 2002 15:36:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Sun, 23 Jun 2002, Bruce Momjian wrote:\n\n> Yes, I don't see writing to two files vs. one to be any win, especially\n> when we need to fsync both of them. What I would really like is to\n> avoid the double I/O of writing to WAL and to the data file; improving\n> that would be a huge win.\n\nYou mean, the double I/O of writing the block to the WAL and data file?\n(We'd still have to write the changed columns or whatever to the WAL,\nright?)\n\nI'd just add an option to turn it off. If you need it, you need it;\nthere's no way around that except to buy hardware that is really going\nto guarantee your writes (which then means you don't need it).\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n\n\n\n", "msg_date": "Mon, 24 Jun 2002 09:09:30 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "I am getting lots of errors on pgadmin.postgresql.org\n\nDave\n\n\n\n\n", "msg_date": "23 Jun 2002 20:24:47 -0400", "msg_from": "Dave Cramer <dave@fastcrypt.com>", "msg_from_op": false, "msg_subject": "pgadmin.postgresql.org displaying errors" }, { "msg_contents": "On Sun, 2002-06-23 at 15:36, Bruce Momjian wrote:\n> Yes, I don't see writing to two files vs. one to be any win, especially\n> when we need to fsync both of them. What I would really like is to\n> avoid the double I/O of writing to WAL and to the data file; improving\n> that would be a huge win.\n> \n\nIf is impossible to do what you want. You can not protect against\npartial writes without writing pages twice and calling fdatasync between\nthem while going through a generic filesystem. The best disk array will\nnot protect you if the operating system does not align block writes to\nthe structure of the underlying device. Even with raw devices, you need\nspecial support or knowledge of the operating system and/or the disk\ndevice to ensure that each write request will be atomic to the\nunderlying hardware. \n\nAll other systems rely on the fact that you can recover a damaged file\nusing the log archive. This means downtime in the rare case, but no data\nloss. Until PostgreSQL can do this, then it will not be acceptable for\nreal critical production use. This is not to knock PostgreSQL, because\nit is a very good database system, and clearly the best open-source one.\nIt even has feature advantages over the commercial systems. 
But at the\nend of the day, unless you have complete understanding of the I/O system\nfrom write(2) through to the disk system, the only sure ways to protect\nagainst partial writes are by \"careful writes\" (in the WAL log or\nelsewhere, writing pages twice), or by requiring (and allowing) users to\ndo log-replay recovery when a file is corrupted by a partial write. As\nlong as there is a UPS, and the operating system doesn't crash, then\nthere still should be no partial writes.\n\nIf we log pages to WAL, they are useless when archived (after a\ncheckpoint). So either we have a separate \"log\" for them (the ping-pong\nfile), or we should at least remove them when archived, which makes log\narchiving more complex but is perfectly doable.\n\nFinally, I would love to hear why we are using the operating system\nbuffer manager at all. The OS is acting as a secondary buffer manager\nfor us. Why is that? What flaw in our I/O system does this reveal? I\nknow that:\n\n>We sync only WAL, not the other pages, except for the sync() call we do\n> during checkpoint when we discard old WAL files.\n\nBut this is probably not a good thing. We should only be writing blocks\nwhen they need to be on disk. We should not be expecting the OS to write\nthem \"sometime later\" and avoid blocking (as long) for the write. If we\nneed that, then our buffer management is wrong and we need to fix it.\nThe reason we are doing this is because we expect the OS buffer manager\nto do asynchronous I/O for us, but then we don't control the order. That\nis the reason why we have to call fdatasync(), to create \"sequence\npoints\".\n\nThe reason we have performance problems with either O_DSYNC or fdatasync\non the normal relations is because we have no dbflush process. This\ncauses an unacceptable amount of I/O blocking by other transactions.\n\nThe ORACLE people were not kidding when they said that they could not\ncertify Linux for production use until it supported O_DSYNC. 
Can you\nexplain why that was the case?\n\nFinally, let me apologize if the above comes across as somewhat\nbelligerent. I know very well that I can't compete with you guys for\nknowledge of the PostgreSQL system. I am still at a loss when I look at\nthe optimizer and executor modules, and it will take some time before I\ncan follow discussion of that area. Even then, I doubt my ability to\ncompare with people like Mr. Lane and Mr. Momjian in experience and\ngeneral intelligence, or in the field of database programming and\nsoftware development in particular. However, this discussion and a\nsearch of the pgsql-hackers archives reveals this problem to be the KEY\narea of PostgreSQL's failing, and general misunderstanding, when\ncompared to its commercial competitors.\n\nSincerely, \n\n\tJ. R. Nield\n\n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n\n\n", "msg_date": "23 Jun 2002 21:29:23 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Sun, 2002-06-23 at 21:29, J. R. Nield wrote:\n\n> If is impossible to do what you want. You can not protect against...\nWow. The number of typo's in that last one was just amazing. I even\nstarted with one.\n\nHave an nice weekend everybody :-)\n\n;jrnield\n\n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n\n\n", "msg_date": "23 Jun 2002 21:58:31 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "J. R. Nield wrote:\n> On Sun, 2002-06-23 at 15:36, Bruce Momjian wrote:\n> > Yes, I don't see writing to two files vs. one to be any win, especially\n> > when we need to fsync both of them. What I would really like is to\n> > avoid the double I/O of writing to WAL and to the data file; improving\n> > that would be a huge win.\n> > \n> \n> If is impossible to do what you want. 
You can not protect against\n> partial writes without writing pages twice and calling fdatasync between\n> them while going through a generic filesystem. The best disk array will\n> not protect you if the operating system does not align block writes to\n> the structure of the underlying device. Even with raw devices, you need\n> special support or knowledge of the operating system and/or the disk\n> device to ensure that each write request will be atomic to the\n> underlying hardware. \n\nYes, I suspected it was impossible, but that doesn't mean I want it any\nless. ;-)\n\n> All other systems rely on the fact that you can recover a damaged file\n> using the log archive. This means downtime in the rare case, but no data\n> loss. Until PostgreSQL can do this, then it will not be acceptable for\n> real critical production use. This is not to knock PostgreSQL, because\n> it is a very good database system, and clearly the best open-source one.\n> It even has feature advantages over the commercial systems. But at the\n> end of the day, unless you have complete understanding of the I/O system\n> from write(2) through to the disk system, the only sure ways to protect\n> against partial writes are by \"careful writes\" (in the WAL log or\n> elsewhere, writing pages twice), or by requiring (and allowing) users to\n> do log-replay recovery when a file is corrupted by a partial write. As\n> long as there is a UPS, and the operating system doesn't crash, then\n> there still should be no partial writes.\n\nYou are talking point-in-time recovery, a major missing feature right\nnext to replication, and I agree it makes PostgreSQL unacceptable for\nsome applications. 
Point taken.\n\nAnd the interesting thing you are saying is that with point-in-time\nrecovery, we don't need to write pre-write images of pages because if we\ndetect a partial page write, we then abort the database and tell the\nuser to do a point-in-time recovery, basically meaning we are using the\nprevious full backup as our pre-write page image and roll forward using\nthe logical logs. This is clearly a nice thing to be able to do because\nit lets you take a pre-write image of the page once during full backup,\nkeep it offline, and bring it back in the rare case of a full page write\nfailure. I now can see how the MSSQL tearoff-bits would be used, not\nfor recovery, but to detect a partial write and force a point-in-time\nrecovery from the administrator.\n\n\n> If we log pages to WAL, they are useless when archived (after a\n> checkpoint). So either we have a separate \"log\" for them (the ping-pong\n> file), or we should at least remove them when archived, which makes log\n> archiving more complex but is perfectly doable.\n\nYes, that is how we will do point-in-time recovery; remove the\npre-write page images and archive the rest. It is more complex, but\nhaving the fsync all in one file is too big a win.\n\n> Finally, I would love to hear why we are using the operating system\n> buffer manager at all. The OS is acting as a secondary buffer manager\n> for us. Why is that? What flaw in our I/O system does this reveal? I\n> know that:\n> \n> >We sync only WAL, not the other pages, except for the sync() call we do\n> > during checkpoint when we discard old WAL files.\n> \n> But this is probably not a good thing. We should only be writing blocks\n> when they need to be on disk. We should not be expecting the OS to write\n> them \"sometime later\" and avoid blocking (as long) for the write. 
If we\n> need that, then our buffer management is wrong and we need to fix it.\n> The reason we are doing this is because we expect the OS buffer manager\n> to do asynchronous I/O for us, but then we don't control the order. That\n> is the reason why we have to call fdatasync(), to create \"sequence\n> points\".\n\nYes. I think I understand. It is true we have to fsync WAL because we\ncan't control the individual writes by the OS.\n\n> The reason we have performance problems with either D_OSYNC or fdatasync\n> on the normal relations is because we have no dbflush process. This\n> causes an unacceptable amount of I/O blocking by other transactions.\n\nUh, that would force writes all over the disk. Why do we really care how\nthe OS writes them? If we are going to fsync, let's just do the one\nfile and be done with it. What would a separate flusher process really\nbuy us if it has to use fsync too. The main backend doesn't have to\nwait for the fsync, but then again, we can't say the transaction is\ncommitted until it hits the disk, so how does a flusher help?\n\n> The ORACLE people were not kidding when they said that they could not\n> certify Linux for production use until it supported O_DSYNC. Can you\n> explain why that was the case?\n\nI don't see O_DSYNC as very different from write/fsync(or fdatasync).\n\n> Finally, let me apologize if the above comes across as somewhat\n> belligerent. I know very well that I can't compete with you guys for\n> knowledge of the PostgreSQL system. I am still at a loss when I look at\n> the optimizer and executor modules, and it will take some time before I\n> can follow discussion of that area. Even then, I doubt my ability to\n> compare with people like Mr. Lane and Mr. Momjian in experience and\n> general intelligence, or in the field of database programming and\n> software development in particular. 
However, this discussion and a\n> search of the pgsql-hackers archives reveals this problem to be the KEY\n> area of PostgreSQL's failing, and general misunderstanding, when\n> compared to its commercial competitors.\n\nWe appreciate your ideas. Few of us are professional db folks so we are\nalways looking for good ideas.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Sun, 23 Jun 2002 22:46:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On 23 Jun 2002, J. R. Nield wrote:\n\n> If is impossible to do what you want. You can not protect against\n> partial writes without writing pages twice and calling fdatasync\n> between them while going through a generic filesystem.\n\nI agree with this.\n\n> The best disk array will not protect you if the operating system does\n> not align block writes to the structure of the underlying device.\n\nThis I don't quite understand. Assuming you're using a SCSI drive\n(and this mostly applies to ATAPI/IDE, too), you can do naught but\nalign block writes to the structure of the underlying device. When you\ninitiate a SCSI WRITE command, you start by telling the device at which\nblock to start writing and how many blocks you intend to write. Then you\nstart passing the data.\n\n(See http://www.danbbs.dk/~dino/SCSI/SCSI2-09.html#9.2.21 for parameter\ndetails for the SCSI WRITE(10) command. 
You may find the SCSI 2\nspecification, at http://www.danbbs.dk/~dino/SCSI/ to be a useful\nreference here.)\n\n> Even with raw devices, you need special support or knowledge of the\n> operating system and/or the disk device to ensure that each write\n> request will be atomic to the underlying hardware.\n\nWell, so here I guess you're talking about two things:\n\n 1. When you request, say, an 8K block write, will the OS really\n write it to disk in a single 8K or multiple of 8K SCSI write\n command?\n\n 2. Does the SCSI device you're writing to consider these writes to\n be transactional. That is, if the write is interrupted before being\n completed, does the SCSI device guarantee that the partially-sent\n data is not written, and the old data is maintained? And of course,\n does it guarantee that, when it acknowledges a write, that write is\n now in stable storage and will never go away?\n\nBoth of these are not hard to guarantee, actually. For a BSD-based OS,\nfor example, just make sure that your filesystem block size is the\nsame as or a multiple of the database block size. BSD will never write\nanything other than a block or a sequence of blocks to a disk in a\nsingle SCSI transaction (unless you've got a really odd SCSI driver).\nAnd for your disk, buy a Baydel or Clarion disk array, or something\nsimilar.\n\nGiven that it's not hard to set up a system that meets these criteria,\nand this is in fact commonly done for database servers, it would seem a\ngood idea for postgres to have the option to take advantage of the time\nand money spent and adjust its performance upward appropriately.\n\n> All other systems rely on the fact that you can recover a damaged file\n> using the log archive.\n\nNot exactly. For MS SQL Server, at any rate, if it detects a page tear\nyou cannot restore based on the log file alone. You need a full or\npartial backup that includes that entire torn block.\n\n> This means downtime in the rare case, but no data loss. 
Until\n> PostgreSQL can do this, then it will not be acceptable for real\n> critical production use.\n\nIt seems to me that it is doing this right now. In fact, it's more\nreliable than some commerial systems (such as SQL Server) because it can\nrecover from a torn block with just the logfile.\n\n> But at the end of the day, unless you have complete understanding of\n> the I/O system from write(2) through to the disk system, the only sure\n> ways to protect against partial writes are by \"careful writes\" (in\n> the WAL log or elsewhere, writing pages twice), or by requiring (and\n> allowing) users to do log-replay recovery when a file is corrupted by\n> a partial write.\n\nI don't understand how, without a copy of the old data that was in the\ntorn block, you can restore that block from just log file entries. Can\nyou explain this to me? Take, as an example, a block with ten tuples,\nonly one of which has been changed \"recently.\" (I.e., only that change\nis in the log files.)\n\n> If we log pages to WAL, they are useless when archived (after a\n> checkpoint). So either we have a separate \"log\" for them (the\n> ping-pong file), or we should at least remove them when archived,\n> which makes log archiving more complex but is perfectly doable.\n\nRight. That seems to me a better option, since we've now got only one\nwrite point on the disk rather than two.\n\n> Finally, I would love to hear why we are using the operating system\n> buffer manager at all. The OS is acting as a secondary buffer manager\n> for us. Why is that? What flaw in our I/O system does this reveal?\n\nIt's acting as a \"second-level\" buffer manager, yes, but to say it's\n\"secondary\" may be a bit misleading. On most of the systems I've set\nup, the OS buffer cache is doing the vast majority of the work, and the\npostgres buffering is fairly minimal.\n\nThere are some good (and some perhaps not-so-good) reasons to do it this\nway. 
I'll list them more or less in the order of best to worst:\n\n    1. The OS knows where the blocks physically reside on disk, and\n    postgres does not. Therefore it's in the interest of postgresql to\n    dispatch write responsibility back to the OS as quickly as possible\n    so that the OS can prioritize requests appropriately. Most operating\n    systems use an \"elevator\" algorithm to minimize disk head movement;\n    but if the OS does not have a block that it could write while the\n    head is \"on the way\" to another request, it can't write it in that\n    head pass.\n\n    2. Postgres does not know about any \"bank-switching\" tricks for\n    mapping more physical memory than it has address space. Thus, on\n    32-bit machines, postgres might be limited to mapping 2 or 3 GB of\n    memory, even though the machine has, say, 6 GB of physical RAM. The\n    OS can use all of the available memory for caching; postgres cannot.\n\n    3. A lot of work has been put into the seek algorithms, read-ahead\n    algorithms, block allocation algorithms, etc. in the OS. Why\n    duplicate all that work again in postgres?\n\nWhen you say things like the following:\n\n> We should only be writing blocks when they need to be on disk. We\n> should not be expecting the OS to write them \"sometime later\" and\n> avoid blocking (as long) for the write. If we need that, then our\n> buffer management is wrong and we need to fix it.\n\nyou appear to be making the argument that we should take the route of\nother database systems, and use raw devices and our own management of\ndisk block allocation. If so, you might want first to look back through\nthe archives at the discussion I and several others had about this a\nmonth or two ago. After looking in detail at what NetBSD, at least, does\nin terms of its disk I/O algorithms and buffering, I've pretty much come\naround, at least for the moment, to the attitude that we should stick\nwith using the OS. 
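The gain from handing dirty blocks to the OS early and forcing them out once, rather than blocking on the disk for every write, can be seen with a small timing sketch (illustrative only — the absolute numbers depend entirely on the filesystem and disk subsystem, and on a memory-backed filesystem the two cases may be nearly identical):

```python
import os
import tempfile
import time

BLCKSZ = 8192
NBLOCKS = 256

def write_blocks(fd, sync_each):
    """Write NBLOCKS pages from offset 0; either fsync after every page
    (blocking on the disk each time) or hand all pages to the OS buffer
    cache and fsync once, letting the kernel's elevator order the I/O."""
    page = b"\x01" * BLCKSZ
    os.lseek(fd, 0, os.SEEK_SET)
    for _ in range(NBLOCKS):
        os.write(fd, page)
        if sync_each:
            os.fsync(fd)
    if not sync_each:
        os.fsync(fd)

fd, path = tempfile.mkstemp()
t0 = time.perf_counter()
write_blocks(fd, sync_each=True)
per_write = time.perf_counter() - t0

t0 = time.perf_counter()
write_blocks(fd, sync_each=False)
batched = time.perf_counter() - t0

os.close(fd)
os.unlink(path)
print(f"fsync per page: {per_write:.4f}s  one fsync: {batched:.4f}s")
```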
I wouldn't mind seeing postgres be able to manage all\nof this stuff, but it's a *lot* of work for not all that much benefit\nthat I can see.\n\n> The ORACLE people were not kidding when they said that they could not\n> certify Linux for production use until it supported O_DSYNC. Can you\n> explain why that was the case?\n\nI'm suspecting it's because Linux at the time had no raw devices, so\nO_DSYNC was the only other possible method of making sure that disk\nwrites actually got to disk.\n\nYou certainly don't want to use O_DSYNC if you can use another method,\nbecause O_DSYNC still goes through the operating system's buffer\ncache, wasting memory and double-caching things. If you're doing your\nown management, you need either to use a raw device or open files with\nthe flag that indicates that the buffer cache should not be used at all\nfor reads from and writes to that file.\n\n> However, this discussion and a search of the pgsql-hackers archives\n> reveals this problem to be the KEY area of PostgreSQL's failing, and\n> general misunderstanding, when compared to its commercial competitors.\n\nNo, I think it's just that you're under a few minor misapprehensions\nhere about what postgres and the OS are actually doing. As I said, I\nwent through this whole exact argument a month or two ago, on this very\nlist, and I came around to the idea that what postgres is doing now\nworks quite well, at least on NetBSD. (Most other OSes have disk I/O\nalgorithms that are pretty much as good or better.) There might be a\nvery slight advantage to doing all one's own I/O management, but it's\na huge amount of work, and I think that much effort could be much more\nusefully applied to other areas.\n\nJust as a side note, I've been a NetBSD developer since about '96,\nand have been delving into the details of OS design since well before\nthat time, so I'm coming to this with what I hope is reasonably good\nknowledge of how disks work and how operating systems use them. 
(Not\nthat this should stop you from pointing out holes in my arguments. :-))\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n\n\n", "msg_date": "Mon, 24 Jun 2002 12:40:51 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "\nshould already be fixed ...\n\nOn 23 Jun 2002, Dave Cramer wrote:\n\n> I am getting lots of errors on pgadmin.postgresql.org\n>\n> Dave\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n>\n>\n\n\n\n", "msg_date": "Mon, 24 Jun 2002 11:02:32 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: pgadmin.postgresql.org displaying errors" }, { "msg_contents": "> On Sun, 23 Jun 2002, Bruce Momjian wrote:\n>> Yes, I don't see writing to two files vs. one to be any win, especially\n>> when we need to fsync both of them. What I would really like is to\n>> avoid the double I/O of writing to WAL and to the data file; improving\n>> that would be a huge win.\n\nI don't believe it's possible to eliminate the double I/O. Keep in mind\nthough that in the ideal case (plenty of shared buffers) you are only\npaying two writes per modified block per checkpoint interval --- one to\nthe WAL during the first write of the interval, and then a write to the\nreal datafile issued by the checkpoint process. 
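That accounting can be made concrete with a toy count of physical writes per checkpoint interval (a sketch only: it counts I/O operations, not bytes or seek distances, and both function names are purely illustrative):

```python
def writes_with_wal_page_images(n_commits):
    """Cost for a block modified by n successive transactions within one
    checkpoint interval: one full-page image in WAL on the first
    modification, one small WAL record per commit, and one datafile
    write issued by the next checkpoint."""
    return 1 + n_commits + 1

def writes_if_commits_flush_data(n_commits):
    """If each commit instead had to write the data block itself as well
    as its WAL record, the block would be written once per commit."""
    return 2 * n_commits

for n in (1, 10, 100):
    print(n, writes_with_wal_page_images(n), writes_if_commits_flush_data(n))
```

For a block touched by only one transaction in the interval the page-image scheme costs 3 writes against 2, which is why the claim is qualified to blocks modified by several successive transactions.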
Anything that requires\ntransaction commits to write data blocks will likely result in more I/O\nnot less, at least for blocks that are modified by several successive\ntransactions.\n\nThe only thing I've been able to think of that seems like it might\nimprove matters is to make the WAL writing logic aware of the layout\nof buffer pages --- specifically, to know that our pages generally\ncontain an uninteresting \"hole\" in the middle, and not write the hole.\nOptimistically this might reduce the WAL data volume by something\napproaching 50%; though pessimistically (if most pages are near full)\nit wouldn't help much.\n\nThis was not very feasible when the WAL code was designed because the\nbuffer manager needed to cope with both normal pages and pg_log pages,\nbut as of 7.2 I think it'd be safe to assume that all pages have the\nstandard layout.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jun 2002 10:06:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": "Tom Lane wrote:\n> > On Sun, 23 Jun 2002, Bruce Momjian wrote:\n> >> Yes, I don't see writing to two files vs. one to be any win, especially\n> >> when we need to fsync both of them. What I would really like is to\n> >> avoid the double I/O of writing to WAL and to the data file; improving\n> >> that would be a huge win.\n> \n> I don't believe it's possible to eliminate the double I/O. Keep in mind\n> though that in the ideal case (plenty of shared buffers) you are only\n> paying two writes per modified block per checkpoint interval --- one to\n> the WAL during the first write of the interval, and then a write to the\n> real datafile issued by the checkpoint process. 
Anything that requires\n> transaction commits to write data blocks will likely result in more I/O\n> not less, at least for blocks that are modified by several successive\n> transactions.\n> \n> The only thing I've been able to think of that seems like it might\n> improve matters is to make the WAL writing logic aware of the layout\n> of buffer pages --- specifically, to know that our pages generally\n> contain an uninteresting \"hole\" in the middle, and not write the hole.\n> Optimistically this might reduce the WAL data volume by something\n> approaching 50%; though pessimistically (if most pages are near full)\n> it wouldn't help much.\n\nGood idea. How about putting the page through our TOAST compression\nroutine before writing it to WAL? Should be pretty easy and fast and\ndoesn't require any knowledge of the page format.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Mon, 24 Jun 2002 12:40:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> The only thing I've been able to think of that seems like it might\n>> improve matters is to make the WAL writing logic aware of the layout\n>> of buffer pages --- specifically, to know that our pages generally\n>> contain an uninteresting \"hole\" in the middle, and not write the hole.\n>> Optimistically this might reduce the WAL data volume by something\n>> approaching 50%; though pessimistically (if most pages are near full)\n>> it wouldn't help much.\n\n> Good idea. How about putting the page through our TOAST compression\n> routine before writing it to WAL? 
Should be pretty easy and fast and\n> doesn't require any knowledge of the page format.\n\nEasy, maybe, but fast definitely NOT. The compressor is not speedy.\nGiven that we have to be holding various locks while we build WAL\nrecords, I do not think it's a good idea to add CPU time there.\n\nAlso, compressing already-compressed data is not a win ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jun 2002 13:11:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": "On Sun, 2002-06-23 at 23:40, Curt Sampson wrote:\n> On 23 Jun 2002, J. R. Nield wrote:\n> \n> > It is impossible to do what you want. You can not protect against\n> > partial writes without writing pages twice and calling fdatasync\n> > between them while going through a generic filesystem.\n> \n> I agree with this.\n> \n> > The best disk array will not protect you if the operating system does\n> > not align block writes to the structure of the underlying device.\n> \n> This I don't quite understand. Assuming you're using a SCSI drive\n> (and this mostly applies to ATAPI/IDE, too), you can do naught but\n> align block writes to the structure of the underlying device. When you\n> initiate a SCSI WRITE command, you start by telling the device at which\n> block to start writing and how many blocks you intend to write. Then you\n> start passing the data.\n> \n\nAll I'm saying is that the entire postgresql block write must be\nconverted into exactly one SCSI write command in all cases, and I don't\nknow a portable way to ensure this. \n\n> > Even with raw devices, you need special support or knowledge of the\n> > operating system and/or the disk device to ensure that each write\n> > request will be atomic to the underlying hardware.\n> \n> Well, so here I guess you're talking about two things:\n> \n> 1. 
When you request, say, an 8K block write, will the OS really\n> write it to disk in a single 8K or multiple of 8K SCSI write\n> command?\n> \n> 2. Does the SCSI device you're writing to consider these writes to\n> be transactional. That is, if the write is interrupted before being\n> completed, does the SCSI device guarantee that the partially-sent\n> data is not written, and the old data is maintained? And of course,\n> does it guarantee that, when it acknowledges a write, that write is\n> now in stable storage and will never go away?\n> \n> Both of these are not hard to guarantee, actually. For a BSD-based OS,\n> for example, just make sure that your filesystem block size is the\n> same as or a multiple of the database block size. BSD will never write\n> anything other than a block or a sequence of blocks to a disk in a\n> single SCSI transaction (unless you've got a really odd SCSI driver).\n> And for your disk, buy a Baydel or Clarion disk array, or something\n> similar.\n> \n> Given that it's not hard to set up a system that meets these criteria,\n> and this is in fact commonly done for database servers, it would seem a\n> good idea for postgres to have the option to take advantage of the time\n> and money spent and adjust its performance upward appropriately.\n\nI agree with this. My point was only that you need to know what\nguarantees your operating system/hardware combination provides on a\ncase-by-case basis, and there is no standard way for a program to\ndiscover this. Most system administrators are not going to know this\neither, unless databases are their main responsibility.\n\n> \n> > All other systems rely on the fact that you can recover a damaged file\n> > using the log archive.\n> \n> Not exactly. For MS SQL Server, at any rate, if it detects a page tear\n> you cannot restore based on the log file alone. 
You need a full or\n> partial backup that includes that entire torn block.\n> \n\nI should have been more specific: you need a backup of the file from\nsome time ago, plus all the archived logs from then until the current\nlog sequence number.\n\n> > This means downtime in the rare case, but no data loss. Until\n> > PostgreSQL can do this, then it will not be acceptable for real\n> > critical production use.\n> \n> It seems to me that it is doing this right now. In fact, it's more\n> reliable than some commercial systems (such as SQL Server) because it can\n> recover from a torn block with just the logfile.\n\nAgain, what I meant to say is that the commercial systems can recover\nwith an old file backup + logs. How old the backup can be depends only\non how much time you are willing to spend playing the logs forward. So\nif you do a full backup once a week, and multiplex and backup the logs,\nthen even if a backup tape gets destroyed you can still survive. It just\ntakes longer.\n\nAlso, postgreSQL can't recover from any other type of block corruption,\nwhile the commercial systems can. That's what I meant by the \"critical\nproduction use\" comment, which was sort-of unfair.\n\nSo I would say they are equally reliable for torn pages (but not bad\nblocks), and the commercial systems let you trade potential recovery\ntime for not having to write the blocks twice. You do need to back-up\nthe log archives though.\n\n> \n> > But at the end of the day, unless you have complete understanding of\n> > the I/O system from write(2) through to the disk system, the only sure\n> > ways to protect against partial writes are by \"careful writes\" (in\n> > the WAL log or elsewhere, writing pages twice), or by requiring (and\n> > allowing) users to do log-replay recovery when a file is corrupted by\n> > a partial write.\n> \n> I don't understand how, without a copy of the old data that was in the\n> torn block, you can restore that block from just log file entries. 
Can\n> you explain this to me? Take, as an example, a block with ten tuples,\n> only one of which has been changed \"recently.\" (I.e., only that change\n> is in the log files.)\n>\n> \n> > If we log pages to WAL, they are useless when archived (after a\n> > checkpoint). So either we have a separate \"log\" for them (the\n> > ping-pong file), or we should at least remove them when archived,\n> > which makes log archiving more complex but is perfectly doable.\n> \n> Right. That seems to me a better option, since we've now got only one\n> write point on the disk rather than two.\n\nOK. I agree with this now.\n\n> \n> > Finally, I would love to hear why we are using the operating system\n> > buffer manager at all. The OS is acting as a secondary buffer manager\n> > for us. Why is that? What flaw in our I/O system does this reveal?\n> \n> It's acting as a \"second-level\" buffer manager, yes, but to say it's\n> \"secondary\" may be a bit misleading. On most of the systems I've set\n> up, the OS buffer cache is doing the vast majority of the work, and the\n> postgres buffering is fairly minimal.\n> \n> There are some good (and some perhaps not-so-good) reasons to do it this\n> way. I'll list them more or less in the order of best to worst:\n> \n> 1. The OS knows where the blocks physically reside on disk, and\n> postgres does not. Therefore it's in the interest of postgresql to\n> dispatch write responsibility back to the OS as quickly as possible\n> so that the OS can prioritize requests appropriately. Most operating\n> systems use an \"elevator\" algorithm to minimize disk head movement;\n> but if the OS does not have a block that it could write while the\n> head is \"on the way\" to another request, it can't write it in that\n> head pass.\n> \n> 2. Postgres does not know about any \"bank-switching\" tricks for\n> mapping more physical memory than it has address space. 
Thus, on\n> 32-bit machines, postgres might be limited to mapping 2 or 3 GB of\n> memory, even though the machine has, say, 6 GB of physical RAM. The\n> OS can use all of the available memory for caching; postgres cannot.\n> \n> 3. A lot of work has been put into the seek algorithms, read-ahead\n> algorithms, block allocation algorithms, etc. in the OS. Why\n> duplicate all that work again in postgres?\n> \n> When you say things like the following:\n> \n> > We should only be writing blocks when they need to be on disk. We\n> > should not be expecting the OS to write them \"sometime later\" and\n> > avoid blocking (as long) for the write. If we need that, then our\n> > buffer management is wrong and we need to fix it.\n> \n> you appear to be making the argument that we should take the route of\n> other database systems, and use raw devices and our own management of\n> disk block allocation. If so, you might want first to look back through\n> the archives at the discussion I and several others had about this a\n> month or two ago. After looking in detail at what NetBSD, at least, does\n> in terms of its disk I/O algorithms and buffering, I've pretty much come\n> around, at least for the moment, to the attitude that we should stick\n> with using the OS. I wouldn't mind seeing postgres be able to manage all\n> of this stuff, but it's a *lot* of work for not all that much benefit\n> that I can see.\n\nI'll back off on that. I don't know if we want to use the OS buffer\nmanager, but shouldn't we try to have our buffer manager group writes\ntogether by files, and pro-actively get them out to disk? 
Right now, it\nlooks like all our write requests are delayed as long as possible and\nthe order in which they are written is pretty-much random, as is the\nbackend that writes the block, so there is no locality of reference even\nwhen the blocks are adjacent on disk, and the write calls are spread-out\nover all the backends.\n\nWould it not be the case that things like read-ahead, grouping writes,\nand caching written data are probably best done by PostgreSQL, because\nonly our buffer manager can understand when they will be useful or when\nthey will thrash the cache?\n\nI may likely be wrong on this, and I haven't done any performance\ntesting. I shouldn't have brought this up alongside the logging issues,\nbut there seemed to be some question about whether the OS was actually\ndoing all these things behind the scene.\n\n\n> \n> > The ORACLE people were not kidding when they said that they could not\n> > certify Linux for production use until it supported O_DSYNC. Can you\n> > explain why that was the case?\n> \n> I'm suspecting it's because Linux at the time had no raw devices, so\n> O_DSYNC was the only other possible method of making sure that disk\n> writes actually got to disk.\n> \n> You certainly don't want to use O_DSYNC if you can use another method,\n> because O_DSYNC still goes through the operating system's buffer\n> cache, wasting memory and double-caching things. If you're doing your\n> own management, you need either to use a raw device or open files with\n> the flag that indicates that the buffer cache should not be used at all\n> for reads from and writes to that file.\n\nWould O_DSYNC|O_RSYNC turn off the cache? 
\n\n> \n> > However, this discussion and a search of the pgsql-hackers archives\n> > reveals this problem to be the KEY area of PostgreSQL's failing, and\n> > general misunderstanding, when compared to its commercial competitors.\n> \n> No, I think it's just that you're under a few minor misapprehensions\n> here about what postgres and the OS are actually doing. As I said, I\n> went through this whole exact argument a month or two ago, on this very\n> list, and I came around to the idea that what postgres is doing now\n> works quite well, at least on NetBSD. (Most other OSes have disk I/O\n> algorithms that are pretty much as good or better.) There might be a\n> very slight advantage to doing all one's own I/O management, but it's\n> a huge amount of work, and I think that much effort could be much more\n> usefully applied to other areas.\n\nI will look for that discussion in the archives.\n\nThe logging issue is a key one I think. At least I would be very nervous\nas a DBA if I were running a system where any damaged file would cause\ndata loss.\n\nDoes anyone know what the major barriers to infinite log replay are in\nPostgreSQL? I'm trying to look for everything that might need to be\nchanged outside xlog.c, but surely this has come up before. Searching\nthe archives hasn't revealed much.\n\n\n\nAs to the I/O issue:\n\nSince you know a lot about NetBSD internals, I'd be interested in\nhearing about what postgresql looks like to the NetBSD buffer manager.\nAm I right that strings of successive writes get randomized? What do our\ncache-hit percentages look like? I'm going to do some experimenting with\nthis.\n\n> \n> Just as a side note, I've been a NetBSD developer since about '96,\n> and have been delving into the details of OS design since well before\n> that time, so I'm coming to this with what I hope is reasonably good\n> knowledge of how disks work and how operating systems use them. (Not\n> that this should stop you from pointing out holes in my arguments. 
:-))\n> \n\nThis stuff is very difficult to get right. Glad to know you follow this\nlist.\n\n\n> cjs\n> -- \n> Curt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n> Don't you know, in this new Dark Age, we're all light. --XTC\n> \n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n\n\n", "msg_date": "24 Jun 2002 16:49:42 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "\"J. R. Nield\" <jrnield@usol.com> writes:\n> Also, postgreSQL can't recover from any other type of block corruption,\n> while the commercial systems can.\n\nSay again?\n\n> Would it not be the case that things like read-ahead, grouping writes,\n> and caching written data are probably best done by PostgreSQL, because\n> only our buffer manager can understand when they will be useful or when\n> they will thrash the cache?\n\nI think you have been missing the point. No one denies that there will\nbe some incremental gain if we do all that. However, the conclusion of\neveryone who has thought much about it (and I see Curt has joined that\ngroup) is that the effort would be far out of proportion to the probable\ngain. There are a lot of other things we desperately need to spend time\non that would not amount to re-engineering large quantities of OS-level\ncode. Given that most Unixen have perfectly respectable disk management\nsubsystems, we prefer to tune our code to make use of that stuff, rather\nthan follow the \"conventional wisdom\" that databases need to bypass it.\n\nOracle can afford to do that sort of thing because they have umpteen\nthousand developers available. Postgres does not.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jun 2002 17:16:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": "J. R. 
Nield wrote:\n> > This I don't quite understand. Assuming you're using a SCSI drive\n> > (and this mostly applies to ATAPI/IDE, too), you can do naught but\n> > align block writes to the structure of the underlying device. When you\n> > initiate a SCSI WRITE command, you start by telling the device at which\n> > block to start writing and how many blocks you intend to write. Then you\n> > start passing the data.\n> > \n> \n> All I'm saying is that the entire postgresql block write must be\n> converted into exactly one SCSI write command in all cases, and I don't\n> know a portable way to ensure this. \n\n...\n\n> I agree with this. My point was only that you need to know what\n> guarantees your operating system/hardware combination provides on a\n> case-by-case basis, and there is no standard way for a program to\n> discover this. Most system administrators are not going to know this\n> either, unless databases are their main responsibility.\n\nYes, agreed. <1% are going to know the answer to this question so we\nhave to assume worst case.\n\n> > It seems to me that it is doing this right now. In fact, it's more\n> > reliable than some commercial systems (such as SQL Server) because it can\n> > recover from a torn block with just the logfile.\n> \n> Again, what I meant to say is that the commercial systems can recover\n> with an old file backup + logs. How old the backup can be depends only\n> on how much time you are willing to spend playing the logs forward. So\n> if you do a full backup once a week, and multiplex and backup the logs,\n> then even if a backup tape gets destroyed you can still survive. It just\n> takes longer.\n> \n> Also, postgreSQL can't recover from any other type of block corruption,\n> while the commercial systems can. 
That's what I meant by the \"critical\n> production use\" comment, which was sort-of unfair.\n> \n> So I would say they are equally reliable for torn pages (but not bad\n> blocks), and the commercial systems let you trade potential recovery\n> time for not having to write the blocks twice. You do need to back-up\n> the log archives though.\n\nYes, good tradeoff analysis. We recover from partial writes quicker,\nand don't require saving of log files, _but_ we don't recover from bad\ndisk blocks. Good summary.\n\n> I'll back off on that. I don't know if we want to use the OS buffer\n> manager, but shouldn't we try to have our buffer manager group writes\n> together by files, and pro-actively get them out to disk? Right now, it\n> looks like all our write requests are delayed as long as possible and\n> the order in which they are written is pretty-much random, as is the\n> backend that writes the block, so there is no locality of reference even\n> when the blocks are adjacent on disk, and the write calls are spread-out\n> over all the backends.\n> \n> Would it not be the case that things like read-ahead, grouping writes,\n> and caching written data are probably best done by PostgreSQL, because\n> only our buffer manager can understand when they will be useful or when\n> they will thrash the cache?\n\nThe OS should handle all of this. We are doing main table writes but no\nsync until checkpoint, so the OS can keep those blocks around and write\nthem at its convenience. It knows the size of the buffer cache and when\nstuff is forced to disk. We can't second-guess that.\n\n> I may likely be wrong on this, and I haven't done any performance\n> testing. I shouldn't have brought this up alongside the logging issues,\n> but there seemed to be some question about whether the OS was actually\n> doing all these things behind the scene.\n\nIt had better. 
Looking at the kernel source is the way to know.\n\n> Does anyone know what the major barriers to infinite log replay are in\n> PostgreSQL? I'm trying to look for everything that might need to be\n> changed outside xlog.c, but surely this has come up before. Searching\n> the archives hasn't revealed much.\n\nThis has been brought up. Could we just save WAL files and get replay? \nI believe some things have to be added to WAL to allow this, but it\nseems possible. However, the pg_dump is just a data dump and does not\nhave the file offsets and things. Somehow you would need a tar-type\nbackup of the database, and with a running db, it is hard to get a valid\nsnapshot of that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Mon, 24 Jun 2002 17:25:14 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Does anyone know what the major barriers to infinite log replay are in\n>> PostgreSQL? I'm trying to look for everything that might need to be\n>> changed outside xlog.c, but surely this has come up before. Searching\n>> the archives hasn't revealed much.\n\n> This has been brought up. Could we just save WAL files and get replay? 
\n> I believe some things have to be added to WAL to allow this, but it\n> seems possible.\n\nThe Red Hat group has been looking at this somewhat; so far there seem\nto be some minor tweaks that would be needed, but no showstoppers.\n\n> Somehow you would need a tar-type\n> backup of the database, and with a running db, it is hard to get a valid\n> snapshot of that.\n\nBut you don't *need* a \"valid snapshot\", only a correct copy of\nevery block older than the first checkpoint in your WAL log series.\nAny inconsistencies in your tar dump will look like repairable damage;\nreplaying the WAL log will fix 'em.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jun 2002 17:31:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Does anyone know what the major barriers to infinite log replay are in\n> >> PostgreSQL? I'm trying to look for everything that might need to be\n> >> changed outside xlog.c, but surely this has come up before. Searching\n> >> the archives hasn't revealed much.\n> \n> > This has been brought up. Could we just save WAL files and get replay? 
\n> > I believe some things have to be added to WAL to allow this, but it\n> > seems possible.\n> \n> The Red Hat group has been looking at this somewhat; so far there seem\n> to be some minor tweaks that would be needed, but no showstoppers.\n\n\nGood.\n\n> > Somehow you would need a tar-type\n> > backup of the database, and with a running db, it is hard to get a valid\n> > snapshot of that.\n> \n> But you don't *need* a \"valid snapshot\", only a correct copy of\n> every block older than the first checkpoint in your WAL log series.\n> Any inconsistencies in your tar dump will look like repairable damage;\n> replaying the WAL log will fix 'em.\n\nYes, my point was that you need physical file backups, not pg_dump, and\nyou have to be tricky about the files changing during the backup. You\n_can_ work around changes to the files during backup.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Mon, 24 Jun 2002 17:33:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Mon, 2002-06-24 at 17:16, Tom Lane wrote:\n \n> I think you have been missing the point... \nYes, this appears to be the case. Thanks especially to Curt for clearing\nthings up for me.\n\n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n\n\n", "msg_date": "24 Jun 2002 20:28:00 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On 24 Jun 2002, J. R. 
Nield wrote:\n\n> All I'm saying is that the entire postgresql block write must be\n> converted into exactly one SCSI write command in all cases, and I don't\n> know a portable way to ensure this.\n\nNo, there's no portable way. All you can do is give the admin who\nis able to set things up safely the ability to turn off the now-unneeded\n(and expensive) safety-related stuff that postgres does.\n\n> I agree with this. My point was only that you need to know what\n> guarantees your operating system/hardware combination provides on a\n> case-by-case basis, and there is no standard way for a program to\n> discover this. Most system administrators are not going to know this\n> either, unless databases are their main responsibility.\n\nCertainly this is true of pretty much every database system out there.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 12:32:05 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "I'm splitting off this buffer management stuff into a separate thread.\n\nOn 24 Jun 2002, J. R. Nield wrote:\n\n> I'll back off on that. I don't know if we want to use the OS buffer\n> manager, but shouldn't we try to have our buffer manager group writes\n> together by files, and pro-actively get them out to disk?\n\nThe only way the postgres buffer manager can \"get [data] out to disk\"\nis to do an fsync(). 
For data files (as opposed to log files), this can\nonly slow down overall system throughput, as this would only disrupt the\nOS's write management.\n\n> Right now, it\n> looks like all our write requests are delayed as long as possible and\n> the order in which they are written is pretty-much random, as is the\n> backend that writes the block, so there is no locality of reference even\n> when the blocks are adjacent on disk, and the write calls are spread-out\n> over all the backends.\n\nIt doesn't matter. The OS will introduce locality of reference with its\nwrite algorithms. Take a look at\n\n http://www.cs.wisc.edu/~solomon/cs537/disksched.html\n\nfor an example. Most OSes use the elevator or one-way elevator\nalgorithm. So it doesn't matter whether it's one back-end or many\nwriting, and it doesn't matter in what order they do the write.\n\n> Would it not be the case that things like read-ahead, grouping writes,\n> and caching written data are probably best done by PostgreSQL, because\n> only our buffer manager can understand when they will be useful or when\n> they will thrash the cache?\n\nOperating systems these days are not too bad at guessing what\nyou're doing. Pretty much every OS I've seen will do read-ahead when\nit detects you're doing sequential reads, at least in the forward\ndirection. And Solaris is even smart enough to mark the pages you've\nread as \"not needed\" so that they quickly get flushed from the cache,\nrather than blowing out your entire cache if you go through a large\nfile.\n\n> Would O_DSYNC|O_RSYNC turn off the cache?\n\nNo. I suppose there's nothing to stop it doing so, in some\nimplementations, but the interface is not designed for direct I/O.\n\n> Since you know a lot about NetBSD internals, I'd be interested in\n> hearing about what postgresql looks like to the NetBSD buffer manager.\n\nWell, looks like pretty much any program, or group of programs,\ndoing a lot of I/O. 
:-)\n\n> Am I right that strings of successive writes get randomized?\n\nNo; as I pointed out, they in fact get de-randomized as much as\npossible. The more proceses you have throwing out requests, the better\nthe throughput will be in fact.\n\n> What do our cache-hit percentages look like? I'm going to do some\n> experimenting with this.\n\nWell, that depends on how much memory you have and what your working\nset is. :-)\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 14:05:45 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Buffer Management" }, { "msg_contents": "On Mon, 24 Jun 2002, Tom Lane wrote:\n\n> There are a lot of other things we desperately need to spend time\n> on that would not amount to re-engineering large quantities of OS-level\n> code. Given that most Unixen have perfectly respectable disk management\n> subsystems, we prefer to tune our code to make use of that stuff, rather\n> than follow the \"conventional wisdom\" that databases need to bypass it.\n> ...\n> Oracle can afford to do that sort of thing because they have umpteen\n> thousand developers available. Postgres does not.\n\nWell, Oracle also started out, a long long time ago, on systems without\nunified buffer cache and so on, and so they *had* to write this stuff\nbecause otherwise data would not be cached. So Oracle can also afford to\nmaintain it now because the code already exists.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 14:08:59 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "Curt Sampson wrote:\n> On Mon, 24 Jun 2002, Tom Lane wrote:\n> \n> > There are a lot of other things we desperately need to spend time\n> > on that would not amount to re-engineering large quantities of OS-level\n> > code. Given that most Unixen have perfectly respectable disk management\n> > subsystems, we prefer to tune our code to make use of that stuff, rather\n> > than follow the \"conventional wisdom\" that databases need to bypass it.\n> > ...\n> > Oracle can afford to do that sort of thing because they have umpteen\n> > thousand developers available. Postgres does not.\n> \n> Well, Oracle also started out, a long long time ago, on systems without\n> unified buffer cache and so on, and so they *had* to write this stuff\n> because otherwise data would not be cached. So Oracle can also afford to\n> maintain it now because the code already exists.\n\nWell, actually, it isn't unified buffer cache that is the issue, but\nrather the older SysV file system had pretty poor performance, so\nbypassing it was a bigger win than it is today.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Tue, 25 Jun 2002 09:22:05 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "\nSo, while we're at it, what's the current state of people's thinking\non using mmap rather than shared memory for data file buffers? 
I\nsee some pretty powerful advantages to this approach, and I'm not\n(yet :-)) convinced that the disadvantages are as bad as people think.\nI think I can address most of the concerns in doc/TODO.detail/mmap.\n\nIs this worth pursuing a bit? (I.e., should I spend an hour or two\nwriting up the advantages and thoughts on how to get around the\nproblems?) Anybody got objections that aren't in doc/TODO.detail/mmap?\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 22:52:14 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Buffer Management" }, { "msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> So, while we're at it, what's the current state of people's thinking\n> on using mmap rather than shared memory for data file buffers?\n\nThere seem to be a couple of different threads in doc/TODO.detail/mmap.\n\nOne envisions mmap as a one-for-one replacement for our current use of\nSysV shared memory, the main selling point being to get out from under\nkernels that don't have SysV support or have it configured too small.\nThis might be worth doing, and I think it'd be relatively easy to do\nnow that the shared memory support is isolated in one file and there's\nprovisions for selecting a shmem implementation at configure time.\nThe only thing you'd really have to think about is how to replace the\ncurrent behavior that uses shmem attach counts to discover whether any\nold backends are left over from a previous crashed postmaster. I dunno\nif mmap offers any comparable facility.\n\nThe other discussion seemed to be considering how to mmap individual\ndata files right into backends' address space. 
I do not believe this\ncan possibly work, because of loss of control over visibility of data\nchanges to other backends, timing of write-backs, etc.\n\nBut as long as you stay away from interpretation #2 and go with\nmmap-as-a-shmget-substitute, it might be worthwhile.\n\n(Hey Marc, can one do mmap in a BSD jail?)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jun 2002 10:09:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Buffer Management " }, { "msg_contents": "On Tue, 2002-06-25 at 09:09, Tom Lane wrote:\n> Curt Sampson <cjs@cynic.net> writes:\n> > So, while we're at it, what's the current state of people's thinking\n> > on using mmap rather than shared memory for data file buffers?\n> \n[snip]\n> \n> (Hey Marc, can one do mmap in a BSD jail?)\nI believe the answer is YES. \n\nI can send you the man pages if you want. \n\n\n> \n> \t\t\tregards, tom lane\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n> \n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n\n\n", "msg_date": "25 Jun 2002 09:17:30 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Buffer Management" }, { "msg_contents": "On Tue, 25 Jun 2002, Tom Lane wrote:\n\n> The only thing you'd really have to think about is how to replace the\n> current behavior that uses shmem attach counts to discover whether any\n> old backends are left over from a previous crashed postmaster. I dunno\n> if mmap offers any comparable facility.\n\nSure. Just mmap a file, and it will be persistent.\n\n> The other discussion seemed to be considering how to mmap individual\n> data files right into backends' address space. 
I do not believe this\n> can possibly work, because of loss of control over visibility of data\n> changes to other backends, timing of write-backs, etc.\n\nI don't understand why there would be any loss of visibility of changes.\nIf two backends mmap the same block of a file, and it's shared, that's\nthe same block of physical memory that they're accessing. Changes don't\neven need to \"propagate,\" because the memory is truly shared. You'd keep\nyour locks in the page itself as well, of course.\n\nCan you describe the problem in more detail?\n\n> But as long as you stay away from interpretation #2 and go with\n> mmap-as-a-shmget-substitute, it might be worthwhile.\n\nIt's #2 that I was really looking at. :-)\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 23:20:15 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Buffer Management " }, { "msg_contents": "Tom Lane wrote:\n> Curt Sampson <cjs@cynic.net> writes:\n> > So, while we're at it, what's the current state of people's thinking\n> > on using mmap rather than shared memory for data file buffers?\n> \n> There seem to be a couple of different threads in doc/TODO.detail/mmap.\n> \n> One envisions mmap as a one-for-one replacement for our current use of\n> SysV shared memory, the main selling point being to get out from under\n> kernels that don't have SysV support or have it configured too small.\n> This might be worth doing, and I think it'd be relatively easy to do\n> now that the shared memory support is isolated in one file and there's\n> provisions for selecting a shmem implementation at configure time.\n> The only thing you'd really have to think about is how to replace the\n> current behavior that uses shmem attach counts to discover whether any\n> old backends are left over from a previous crashed postmaster. 
I dunno\n> if mmap offers any comparable facility.\n> \n> The other discussion seemed to be considering how to mmap individual\n> data files right into backends' address space. I do not believe this\n> can possibly work, because of loss of control over visibility of data\n> changes to other backends, timing of write-backs, etc.\n\nAgreed. Also, there was an interesting thread that mmap'ing /dev/zero is\nthe same as anonmap for OS's that don't have anonmap. That should cover\nmost of them. The only downside I can see is that SysV shared memory is\nlocked into RAM on some/most OS's while mmap anon probably isn't. \nLocking in RAM is good in most cases, bad in others.\n\nThis will also work well when we have non-SysV semaphore support, like\nPosix semaphores, so we would be able to run with no SysV stuff.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Tue, 25 Jun 2002 10:20:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Buffer Management" }, { "msg_contents": "Tom Lane writes:\n > There seem to be a couple of different threads in\n > doc/TODO.detail/mmap.\n > [ snip ]\n\nA place where mmap could be easily used and would offer a good\nperformance increase is for COPY FROM.\n\nLee.\n\n\n", "msg_date": "Tue, 25 Jun 2002 15:20:49 +0100", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": false, "msg_subject": "Re: Buffer Management " }, { "msg_contents": "On Tue, 25 Jun 2002, Bruce Momjian wrote:\n\n> The only downside I can see is that SysV shared memory is\n> locked into RAM on some/most OS's while mmap anon probably isn't.\n\nIt is if you mlock() it. :-)\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 23:24:44 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Buffer Management" }, { "msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> On Tue, 25 Jun 2002, Tom Lane wrote:\n>> The other discussion seemed to be considering how to mmap individual\n>> data files right into backends' address space. I do not believe this\n>> can possibly work, because of loss of control over visibility of data\n>> changes to other backends, timing of write-backs, etc.\n\n> I don't understand why there would be any loss of visibility of changes.\n> If two backends mmap the same block of a file, and it's shared, that's\n> the same block of physical memory that they're accessing.\n\nIs it? You have a mighty narrow conception of the range of\nimplementations that's possible for mmap.\n\nBut the main problem is that mmap doesn't let us control when changes to\nthe memory buffer will get reflected back to disk --- AFAICT, the OS is\nfree to do the write-back at any instant after you dirty the page, and\nthat completely breaks the WAL algorithm. (WAL = write AHEAD log;\nthe log entry describing a change must hit disk before the data page\nchange itself does.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jun 2002 10:29:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Buffer Management " }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> This will also work well when we have non-SysV semaphore support, like\n> Posix semaphores, so we would be able to run with no SysV stuff.\n\nYou do realize that we can use Posix semaphores today? The Darwin (OS X)\nport uses 'em now. 
That's one reason I am more interested in mmap as\na shmget substitute than I used to be.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jun 2002 10:32:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Buffer Management " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > This will also work well when we have non-SysV semaphore support, like\n> > Posix semaphores, so we would be able to run with no SysV stuff.\n> \n> You do realize that we can use Posix semaphores today? The Darwin (OS X)\n> port uses 'em now. That's one reason I am more interested in mmap as\n\nNo, I didn't realize we had gotten that far.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Tue, 25 Jun 2002 10:55:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Buffer Management" }, { "msg_contents": "Tom Lane wrote:\n> Curt Sampson <cjs@cynic.net> writes:\n> > On Tue, 25 Jun 2002, Tom Lane wrote:\n> >> The other discussion seemed to be considering how to mmap individual\n> >> data files right into backends' address space. I do not believe this\n> >> can possibly work, because of loss of control over visibility of data\n> >> changes to other backends, timing of write-backs, etc.\n> \n> > I don't understand why there would be any loss of visibility of changes.\n> > If two backends mmap the same block of a file, and it's shared, that's\n> > the same block of physical memory that they're accessing.\n> \n> Is it? 
You have a mighty narrow conception of the range of\n> implementations that's possible for mmap.\n> \n> But the main problem is that mmap doesn't let us control when changes to\n> the memory buffer will get reflected back to disk --- AFAICT, the OS is\n> free to do the write-back at any instant after you dirty the page, and\n> that completely breaks the WAL algorithm. (WAL = write AHEAD log;\n> the log entry describing a change must hit disk before the data page\n> change itself does.)\n\nCan we mmap WAL without problems? Not sure if there is any gain to it\nbecause we just write it and rarely read from it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Tue, 25 Jun 2002 10:56:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Buffer Management" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Can we mmap WAL without problems? Not sure if there is any gain to it\n> because we just write it and rarely read from it.\n\nPerhaps, but I don't see any point to it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jun 2002 11:00:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Buffer Management " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Can we mmap WAL without problems? Not sure if there is any gain to it\n> > because we just write it and rarely read from it.\n> \n> Perhaps, but I don't see any point to it.\n\nAgreed. 
I have been poking around google looking for an article I read\nmonths ago saying that mmap of files is slightly faster in low memory\nusage situations, but much slower in high memory usage situations\nbecause the kernel doesn't know as much about the file access in mmap as\nit does with stdio. I will find it. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Tue, 25 Jun 2002 11:02:05 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Buffer Management" }, { "msg_contents": "On Tue, 25 Jun 2002, Tom Lane wrote:\n\n> Curt Sampson <cjs@cynic.net> writes:\n>\n> > I don't understand why there would be any loss of visibility of changes.\n> > If two backends mmap the same block of a file, and it's shared, that's\n> > the same block of physical memory that they're accessing.\n>\n> Is it? You have a mighty narrow conception of the range of\n> implementations that's possible for mmap.\n\nIt's certainly possible to implement something that you call mmap\nthat is not. But if you are using the posix-defined MAP_SHARED flag,\nthe behaviour above is what you see. It might be implemented slightly\ndifferently internally, but that's no concern of postgres. And I find\nit pretty unlikely that it would be implemented otherwise without good\nreason.\n\nNote that your proposal of using mmap to replace sysv shared memory\nrelies on the behaviour I've described too. As well, if you're replacing\nsysv shared memory with an mmap'd file, you may end up doing excessive\ndisk I/O on systems without the MAP_NOSYNC option. (Without this option,\nthe update thread/daemon may ensure that every buffer is flushed to the\nbacking store on disk every 30 seconds or so. 
You might be able to get\naround this by using a small file-backed area for things that need to\npersist after a crash, and a larger anonymous area for things that don't\nneed to persist after a crash.)\n\n> But the main problem is that mmap doesn't let us control when changes to\n> the memory buffer will get reflected back to disk --- AFAICT, the OS is\n> free to do the write-back at any instant after you dirty the page, and\n> that completely breaks the WAL algorithm. (WAL = write AHEAD log;\n> the log entry describing a change must hit disk before the data page\n> change itself does.)\n\nHm. Well, we could try not to write the data to the page until\nafter we receive notification that our WAL data is committed to\nstable storage. However, the new data has to be available to all of\nthe backends at the exact time that the commit happens. Perhaps a\nshared list of pending writes?\n\nAnother option would be to just let it write, but on startup, scan\nall of the data blocks in the database for tuples that have a\ntransaction ID later than the last one we updated to, and remove\nthem. That could be pretty darn expensive on a large database, though.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 13:13:42 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Buffer Management " }, { "msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> Note that your proposal of using mmap to replace sysv shared memory\n> relies on the behaviour I've described too.\n\nTrue, but I was not envisioning mapping an actual file --- at least\non HPUX, the only way to generate an arbitrary-sized shared memory\nregion is to use MAP_ANONYMOUS and not have the mmap'd area connected\nto any file at all. 
It's not farfetched to think that this aspect\nof mmap might work differently from mapping pieces of actual files.\n\nIn practice of course we'd have to restrict use of any such\nimplementation to platforms where mmap behaves reasonably ... according\nto our definition of \"reasonably\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Jun 2002 09:21:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Buffer Management " }, { "msg_contents": "Tom Lane wrote:\n> Curt Sampson <cjs@cynic.net> writes:\n> > Note that your proposal of using mmap to replace sysv shared memory\n> > relies on the behaviour I've described too.\n> \n> True, but I was not envisioning mapping an actual file --- at least\n> on HPUX, the only way to generate an arbitrary-sized shared memory\n> region is to use MAP_ANONYMOUS and not have the mmap'd area connected\n> to any file at all. It's not farfetched to think that this aspect\n> of mmap might work differently from mapping pieces of actual files.\n> \n> In practice of course we'd have to restrict use of any such\n> implementation to platforms where mmap behaves reasonably ... according\n> to our definition of \"reasonably\".\n\nYes, I am told mapping /dev/zero is the same as the anon map.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Wed, 26 Jun 2002 13:11:19 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Buffer Management" }, { "msg_contents": "On Wed, 26 Jun 2002, Tom Lane wrote:\n\n> Curt Sampson <cjs@cynic.net> writes:\n> > Note that your proposal of using mmap to replace sysv shared memory\n> > relies on the behaviour I've described too.\n>\n> True, but I was not envisioning mapping an actual file --- at least\n> on HPUX, the only way to generate an arbitrary-sized shared memory\n> region is to use MAP_ANONYMOUS and not have the mmap'd area connected\n> to any file at all. It's not farfetched to think that this aspect\n> of mmap might work differently from mapping pieces of actual files.\n\nI find it somewhat farfetched, for a couple of reasons:\n\n 1. Memory mapped with the MAP_SHARED flag is shared memory,\n anonymous or not. POSIX is pretty explicit about how this works,\n and the \"standard\" for mmap that predates POSIX is the same.\n Anonymous memory does not behave differently.\n\n You could just as well say that some systems might exist such\n that one process can write() a block to a file, and then another\n might read() it afterwards but not see the changes. Postgres\n should not try to deal with hypothetical systems that are so\n completely broken.\n\n 2. Mmap is implemented as part of a unified buffer cache system\n on all of today's operating systems that I know of. The memory\n is backed by swap space when anonymous, and by a specified file\n when not anonymous; but the way these two are handled is\n *exactly* the same internally.\n\n Even on older systems without unified buffer cache, the behaviour\n is the same between anonymous and file-backed mmap'd memory.\n And there would be no point in making it otherwise. 
Mmap is\n designed to let you share memory; why make a broken implementation\n under certain circumstances?\n\n> In practice of course we'd have to restrict use of any such\n> implementation to platforms where mmap behaves reasonably ... according\n> to our definition of \"reasonably\".\n\nOf course. As we do already with regular I/O.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 12:37:18 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Buffer Management " } ]
[ { "msg_contents": "I want to change the date from a field in a tuple in a trigger_function\n\ncreate table example (\n my_date datetime\n ...\n);\n\nint na;\nchar select[20];\n\nna = SPI_fnumber(trigdata->tg_relation->rd_att, \"my_date\");\nmemset(select, 0, sizeof(select));\nsprintf(select, \"1/1/2002\");\nnewval = DirectFunctionCall1(date_in, CStringGetDatum(select));\nrettuple = SPI_modifytuple(trigdata->tg_relation, rettuple, 1, &na, &newval, \nNULL);\nif(rettuple == NULL) elog(ERROR, \"my_function: %d returned by \nSPI_modifytuple\", SPI_result);\n\nwhen i test that my server goes down. What I missed ?\n\n", "msg_date": "Tue, 16 Apr 2002 17:48:07 +0300", "msg_from": "Dragos Manzateanu <dragon@mail.dntis.ro>", "msg_from_op": true, "msg_subject": "date_in function" }, { "msg_contents": "On Tuesday 16 April 2002 05:48 pm, you wrote:\n> I want to change the date from a field in a tuple in a trigger_function\n>\n> create table example (\n> my_date datetime\n> ...\n> );\n>\n> int na;\n> char select[20];\n>\n> na = SPI_fnumber(trigdata->tg_relation->rd_att, \"my_date\");\n> memset(select, 0, sizeof(select));\n> sprintf(select, \"1/1/2002\");\n> newval = DirectFunctionCall1(date_in, CStringGetDatum(select));\n> rettuple = SPI_modifytuple(trigdata->tg_relation, rettuple, 1, &na,\n> &newval, NULL);\n> if(rettuple == NULL) elog(ERROR, \"my_function: %d returned by\n> SPI_modifytuple\", SPI_result);\n>\n> when i test that my server goes down. 
What I missed ?\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\nNot my function is about, but date_in.\nHave anybody worked with these ?\n", "msg_date": "Tue, 16 Apr 2002 22:32:15 +0300", "msg_from": "Dragos Manzateanu <dragon@mail.dntis.ro>", "msg_from_op": true, "msg_subject": "Re: date_in function" }, { "msg_contents": "On Tuesday 16 April 2002 10:32 pm, you wrote:\n> On Tuesday 16 April 2002 05:48 pm, you wrote:\n> > I want to change the date from a field in a tuple in a trigger_function\n> >\n> > create table example (\n> > my_date datetime\n> > ...\n> > );\n> >\n> > int na;\n> > char select[20];\n> >\n> > na = SPI_fnumber(trigdata->tg_relation->rd_att, \"my_date\");\n> > memset(select, 0, sizeof(select));\n> > sprintf(select, \"1/1/2002\");\n> > newval = DirectFunctionCall1(date_in, CStringGetDatum(select));\n> > rettuple = SPI_modifytuple(trigdata->tg_relation, rettuple, 1, &na,\n> > &newval, NULL);\n> > if(rettuple == NULL) elog(ERROR, \"my_function: %d returned by\n> > SPI_modifytuple\", SPI_result);\n> >\n> > when i test that my server goes down. 
What I missed ?\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n>\n> Not my function is about, but date_in.\n> Have anybody worked with these ?\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n", "msg_date": "Wed, 17 Apr 2002 08:48:56 +0300", "msg_from": "Dragos Manzateanu <dragon@mail.dntis.ro>", "msg_from_op": true, "msg_subject": "Re: date_in function" }, { "msg_contents": "I want to change the date from a field in a tuple in a trigger_function\n\n create table example (\n my_date datetime\n ...\n);\n\nint na;\nchar select[20];\n\nna = SPI_fnumber(trigdata->tg_relation->rd_att, \"my_date\");\nmemset(select, 0, sizeof(select));\nsprintf(select, \"1/1/2002\");\nnewval = DirectFunctionCall1(date_in, CStringGetDatum(select));\nrettuple = SPI_modifytuple(trigdata->tg_relation, rettuple, 1, &na, &newval, \nNULL);\nif(rettuple == NULL) elog(ERROR, \"my_function: %d returned by\nSPI_modifytuple\", SPI_result);\n\nwhen i test that my server goes down. What I missed ?\n\nNot my function is about, but date_in.\nHave anybody worked with this function ?\nCan somebody give me a sample to use date_in. 
Thanks.\n\n", "msg_date": "Wed, 17 Apr 2002 11:42:19 +0300", "msg_from": "Dragos Manzateanu <dragon@mail.dntis.ro>", "msg_from_op": true, "msg_subject": "date_in function" }, { "msg_contents": "Dragos Manzateanu <dragon@mail.dntis.ro> writes:\n> na = SPI_fnumber(trigdata->tg_relation->rd_att, \"my_date\");\n> memset(select, 0, sizeof(select));\n> sprintf(select, \"1/1/2002\");\n> newval = DirectFunctionCall1(date_in, CStringGetDatum(select));\n> rettuple = SPI_modifytuple(trigdata->tg_relation, rettuple, 1, &na, &newval, \n> NULL);\n> if(rettuple == NULL) elog(ERROR, \"my_function: %d returned by\n> SPI_modifytuple\", SPI_result);\n\n> when i test that my server goes down. What I missed ?\n\nI doubt that the problem is with date_in. Have you tried backtracing\nin the core dump?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Apr 2002 11:19:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: date_in function " } ]
[ { "msg_contents": "b=# create table stuff (stuff_id serial unique);\nNOTICE: CREATE TABLE will create implicit sequence\n'stuff_stuff_id_seq' for SERIAL column 'stuff.stuff_id'\nNOTICE: CREATE TABLE / UNIQUE will create implicit index\n'stuff_stuff_id_key' for table 'stuff'\nCREATE\nb=# create table stuff2 (stuff_id int4 references stuff on update\ncascade on delete cascade);\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY\ncheck(s)\nERROR: PRIMARY KEY for referenced table \"stuff\" not found\n\n\nYou'll notice there isn't a primary key at all -- which shouldn't be\nan issue as there is still the unique.\n\nNot the brightest thing to do, but surely the primary key shouldn't be\nenforced to exist before a plain old unique.\n\nIf that's the case, then unique indices need to be blocked until there\nis a primary key, or the first one should be automatically marked as\nthe primary key.\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. 
You cannot imagine why you ever felt\notherwise.\n\n\n", "msg_date": "Tue, 16 Apr 2002 12:10:56 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Foreign Key woes -- 7.2 and ~7.3" }, { "msg_contents": "On Tue, 16 Apr 2002, Rod Taylor wrote:\n\n> b=# create table stuff (stuff_id serial unique);\n> NOTICE: CREATE TABLE will create implicit sequence\n> 'stuff_stuff_id_seq' for SERIAL column 'stuff.stuff_id'\n> NOTICE: CREATE TABLE / UNIQUE will create implicit index\n> 'stuff_stuff_id_key' for table 'stuff'\n> CREATE\n> b=# create table stuff2 (stuff_id int4 references stuff on update\n> cascade on delete cascade);\n> NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY\n> check(s)\n> ERROR: PRIMARY KEY for referenced table \"stuff\" not found\n>\n>\n> You'll notice there isn't a primary key at all -- which shouldn't be\n> an issue as there is still the unique.\n>\n> Not the brightest thing to do, but surely the primary key shouldn't be\n> enforced to exist before a plain old unique.\n>\n> If thats the case, then unique indecies need to be blocked until there\n> is a primary key, or the first one should be automatically marked as\n> the primary key.\n\nIf you're not specifying the columns in the references constraint, it\nmeans specifically referencing the primary key of the table. 
If there\nis no primary key, it's an error (\"If the <referenced table and columns>\ndoes not specify a <reference column list>, then the table descriptor\nof the referenced table shall include a unique constraint that specifies\nPRIMARY KEY.\")\n\n\n", "msg_date": "Tue, 16 Apr 2002 10:18:52 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Foreign Key woes -- 7.2 and ~7.3" }, { "msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> On Tue, 16 Apr 2002, Rod Taylor wrote:\n>> You'll notice there isn't a primary key at all -- which shouldn't be\n>> an issue as there is still the unique.\n\n> If you're not specifying the columns in the references constraint, it\n> means specifically referencing the primary key of the table. If there\n> is no primary key, it's an error (\"If the <referenced table and columns>\n> does not specify a <reference column list>, then the table descriptor\n> of the referenced table shall include a unique constraint that specifies\n> PRIMARY KEY.\")\n\nNot sure if Rod got the point here, but: you *can* reference a column\nthat's only UNIQUE and not PRIMARY KEY. You just have to name it\nexplicitly, eg.\n\nregression=# create table stuff2 (stuff_id int4 references stuff(stuff_id)\nregression(# on update cascade on delete cascade);\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nCREATE\n\nThis is all per-spec, AFAIK.\n\n>> If thats the case, then unique indecies need to be blocked until there\n>> is a primary key, or the first one should be automatically marked as\n>> the primary key.\n\nThat would be contrary to spec, and I see no need for it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Apr 2002 19:19:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Foreign Key woes -- 7.2 and ~7.3 " }, { "msg_contents": "Understood. 
It's not what I was expecting to happen.\n\nNormally I always specifically state the match, so I was a little\nsurprised by the behaviour.\n\nMakes sense to match the primary key and only the primary key though.\n\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Stephan Szabo\" <sszabo@megazone23.bigpanda.com>\nCc: \"Rod Taylor\" <rbt@zort.ca>; <pgsql-bugs@postgresql.org>; \"Hackers\nList\" <pgsql-hackers@postgresql.org>\nSent: Tuesday, April 16, 2002 7:19 PM\nSubject: Re: [BUGS] [HACKERS] Foreign Key woes -- 7.2 and ~7.3\n\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > On Tue, 16 Apr 2002, Rod Taylor wrote:\n> >> You'll notice there isn't a primary key at all -- which shouldn't\nbe\n> >> an issue as there is still the unique.\n>\n> > If you're not specifying the columns in the references constraint,\nit\n> > means specifically referencing the primary key of the table. If\nthere\n> > is no primary key, it's an error (\"If the <referenced table and\ncolumns>\n> > does not specify a <reference column list>, then the table\ndescriptor\n> > of the referenced table shall include a unique constraint that\nspecifies\n> > PRIMARY KEY.\")\n>\n> Not sure if Rod got the point here, but: you *can* reference a\ncolumn\n> that's only UNIQUE and not PRIMARY KEY. 
You just have to name it\n> explicitly, eg.\n>\n> regression=# create table stuff2 (stuff_id int4 references\nstuff(stuff_id)\n> regression(# on update cascade on delete cascade);\n> NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN\nKEY check(s)\n> CREATE\n>\n> This is all per-spec, AFAIK.\n>\n> >> If thats the case, then unique indecies need to be blocked until\nthere\n> >> is a primary key, or the first one should be automatically marked\nas\n> >> the primary key.\n>\n> That would be contrary to spec, and I see no need for it...\n>\n> regards, tom lane\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Tue, 16 Apr 2002 22:29:10 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Re: [BUGS] Foreign Key woes -- 7.2 and ~7.3 " } ]
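The SQL92 rule quoted in the thread above can be modeled in a few lines: an unadorned REFERENCES binds only to the referenced table's PRIMARY KEY, while an explicit column list may bind to any unique constraint (UNIQUE or PRIMARY KEY) over exactly those columns. A toy Python sketch of that resolution rule, for illustration only (this is not PostgreSQL's implementation):

```python
# Hedged sketch of the SQL92 rule quoted earlier in the thread; a toy model,
# not PostgreSQL source. With no <reference column list>, REFERENCES must bind
# to the PRIMARY KEY; with an explicit column list, any matching unique
# constraint will do -- which is why "references stuff(stuff_id)" works on a
# bare UNIQUE column while a bare "references stuff" fails.

def resolve_fk_target(constraints, named_cols=None):
    """constraints: list of ('primary' | 'unique', tuple-of-column-names)."""
    if named_cols is None:
        # unadorned REFERENCES: only a primary key qualifies
        for kind, cols in constraints:
            if kind == "primary":
                return cols
        raise ValueError("no <reference column list> and no PRIMARY KEY")
    # explicit column list: any unique constraint over exactly those columns
    for kind, cols in constraints:
        if tuple(named_cols) == cols:
            return cols
    raise ValueError("named columns carry no unique constraint")
```

With `[("unique", ("stuff_id",))]` as the referenced table's constraints, the unadorned form raises while naming `stuff_id` succeeds, matching the behaviors reported above.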
[ { "msg_contents": "\n> I suspect that the main thing that will cause issues is removal of\n> implicit coercions to text. For example, in 7.2 and before you can do\n> \n> test72=# select 'At the tone, the time will be ' || now();\n> ?column?\n> -------------------------------------------------------------\n> At the tone, the time will be 2002-04-11 11:49:27.309181-04\n> (1 row)\n\nI have seen this coding practice extremely often and would be very unhappy if\nit were not allowed any more. Imho automatic coercions are a good thing\nand should be done where possible. Other extensible databases also allow this\nwithout a cast. Imho the main culprit is finding a \"number\" format that will not\nloose precision when implicitly propagated to.\n\nAndreas\n", "msg_date": "Tue, 16 Apr 2002 20:38:08 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Implicit coercions need to be reined in " } ]
[ { "msg_contents": "\n> The lines marked XXX are the ones that I enabled since yesterday, and\n> would like to disable again:\n> \n> implicit | result | input | prosrc\n> ----------+-------------+-------------+--------------------------------------\n\n> no | varchar | int8 | int8_text\n\nWow, I am completely at a loss why you would not allow implicit coercions\nthat do not loose any data in the process. \nCoercing an int to float would in some cases loose precision. It is thus imho \ndebateable what to do here, but not for the rest.\n\nI would think it a major step in the wrong direction.\n\nAndreas\n", "msg_date": "Tue, 16 Apr 2002 20:51:33 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Implicit coercions need to be reined in " }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> Wow, I am completely at a loss why you would not allow implicit coercions\n> that do not loose any data in the process. \n\nHaven't you read the previous threads? Implicit coercions are\ndangerous, because they cause the system to resolve operators in\nunexpected ways. See, eg, bug #484:\nhttp://archives.postgresql.org/pgsql-bugs/2001-10/msg00103.php\nhttp://archives.postgresql.org/pgsql-bugs/2001-10/msg00108.php\n\nI'm not by any means opposed to *all* implicit coercions, but\ncross-type-category ones strike me as bad news.\n\nIn particular, if all datatypes have implicit coercions to text then\ntype checking is pretty much a thing of the past :-( ... the system will\nbe able to resolve nearly anything by interpreting it as a text\noperation. See above bug.\n\nI suspect you are going to argue that you are prepared to live with such\nmisbehavior because it's too darn convenient not to have to write\n::text. Well, maybe that is indeed the community consensus, but I want\nto see a discussion about it first. 
And in any case I want a fairly\nwell-defined, circumscribed policy about which implicit coercions we\nwill have.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Apr 2002 18:24:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Implicit coercions need to be reined in " }, { "msg_contents": "...\n> Haven't you read the previous threads? Implicit coercions are\n> dangerous, because they cause the system to resolve operators in\n> unexpected ways.\n\nSure he's read the threads. The conclusion is *not* obvious, and any\nblanket statement to that effect trivializes the issues in a non-helpful\nway imho.\n\nI'd like to see a continuing discussion of this before leaping to a\nconclusion; now that we have (somewhat) more control over coersions some\nadditional tuning is certainly warranted but hopefully it will not\nrequire removing reasonable and convenient behaviors.\n\n - Thomas\n", "msg_date": "Wed, 17 Apr 2002 06:27:56 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Implicit coercions need to be reined in" }, { "msg_contents": "Thomas Lockhart <thomas@fourpalms.org> writes:\n> I'd like to see a continuing discussion of this before leaping to a\n> conclusion; now that we have (somewhat) more control over coersions some\n> additional tuning is certainly warranted but hopefully it will not\n> require removing reasonable and convenient behaviors.\n\nAbsolutely --- that's why I started this latest round of discussion.\n\nWhat I'm really looking for is a way that we can allow (some?) implicit\ntext coercions while getting rid of the sort of misbehavior exemplified\nby that bug report I keep referring to. Has anyone got any ideas about\nhow to do that? 
It's one thing to say that \"apples || oranges\" should\nbe interpreted as \"apples::text || oranges::text\", but it is quite\nanother to say that \"apples <= oranges\" should be handled that way.\n\nAlso: now that we can create non-implicit coercion functions, I would\nlike to add functions for bool<->int, numeric<->text, and other\ncoercions that would be handy to have, but previously we resisted on\nthe grounds that they'd turn the type checking system into a joke.\nBut perhaps some of these *should* be allowed as implicit coercions.\nI'd like to develop a well-thought-out policy for which coercions should\nbe implicit, rather than making ad-hoc decisions.\n\nSo far the only policy-like idea that I've had is to forbid cross-type-\ncategory implicit coercions. That would solve the comparing-apples-to-\noranges problem; but if people think it'd cause too much collateral\ndamage, how about proposing some other rule?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Apr 2002 10:14:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Implicit coercions need to be reined in " }, { "msg_contents": "...\n> What I'm really looking for is a way that we can allow (some?) implicit\n> text coercions while getting rid of the sort of misbehavior exemplified\n> by that bug report I keep referring to. Has anyone got any ideas about\n> how to do that? It's one thing to say that \"apples || oranges\" should\n> be interpreted as \"apples::text || oranges::text\", but it is quite\n> another to say that \"apples <= oranges\" should be handled that way.\n\nHmm. istm that we might need some information to travel with the\noperators, not just the coersion functions themselves. 
We have a fairly\ntype-rich system, but need to preserve the ability to add types and a\n*partial* set of functions and operators and get reasonable behaviors.\n\n - Thomas\n", "msg_date": "Wed, 17 Apr 2002 07:42:52 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Implicit coercions need to be reined in" }, { "msg_contents": "Thomas Lockhart <thomas@fourpalms.org> writes:\n>> It's one thing to say that \"apples || oranges\" should\n>> be interpreted as \"apples::text || oranges::text\", but it is quite\n>> another to say that \"apples <= oranges\" should be handled that way.\n\n> Hmm. istm that we might need some information to travel with the\n> operators, not just the coersion functions themselves. We have a fairly\n> type-rich system, but need to preserve the ability to add types and a\n> *partial* set of functions and operators and get reasonable behaviors.\n\nCould we do anything based on looking at the whole set of candidate\noperators? For example, I think that the reason \"apples || oranges\"\nis so appealing is that there really is only one way to interpret\nthe || operator; whereas of course there are lots of different <=\noperators. Perhaps we could be more forgiving of implicit coercions\nwhen there are fewer candidate operators, in some way? Again, something\nbased on type categories would make sense to me. Perhaps allow\ncross-category implicit coercions only if there are no candidate\noperators accepting the input's native category?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Apr 2002 10:52:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Implicit coercions need to be reined in " }, { "msg_contents": "...\n> Could we do anything based on looking at the whole set of candidate\n> operators? 
For example, I think that the reason \"apples || oranges\"\n> is so appealing is that there really is only one way to interpret\n> the || operator; whereas of course there are lots of different <=\n> operators. Perhaps we could be more forgiving of implicit coercions\n> when there are fewer candidate operators, in some way? Again, something\n> based on type categories would make sense to me. Perhaps allow\n> cross-category implicit coercions only if there are no candidate\n> operators accepting the input's native category?\n\nHmm. That might be a winner. Since everything can be turned into text,\nthere will always be a fallback to text available. Allow it for\noperators which are text-only operators (are there any built in besides\n\"||\"?) and disallow that fallback for the others?\n\nOne edge case which we are probably concerned about is the \"typeless\nliteral\". I'd guess that we make the right choice in *most* cases\nalready.\n\nI guess I'm worried that we may be cutting back on the allowed implicit\ncoersions, when we might really be missing only one or two explicit\ncoersion functions to fill in the set. I haven't looked at that in a\nwhile...\n\n - Thomas\n", "msg_date": "Wed, 17 Apr 2002 08:09:27 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Implicit coercions need to be reined in" }, { "msg_contents": "Tom Lane wrote:\n> Thomas Lockhart <thomas@fourpalms.org> writes:\n> >> It's one thing to say that \"apples || oranges\" should\n> >> be interpreted as \"apples::text || oranges::text\", but it is quite\n> >> another to say that \"apples <= oranges\" should be handled that way.\n> \n> > Hmm. istm that we might need some information to travel with the\n> > operators, not just the coersion functions themselves. 
We have a fairly\n> > type-rich system, but need to preserve the ability to add types and a\n> > *partial* set of functions and operators and get reasonable behaviors.\n> \n> Could we do anything based on looking at the whole set of candidate\n> operators? For example, I think that the reason \"apples || oranges\"\n> is so appealing is that there really is only one way to interpret\n> the || operator; whereas of course there are lots of different <=\n> operators. Perhaps we could be more forgiving of implicit coercions\n> when there are fewer candidate operators, in some way? Again, something\n> based on type categories would make sense to me. Perhaps allow\n> cross-category implicit coercions only if there are no candidate\n> operators accepting the input's native category?\n\nYes, I think any solution will have to consider the number of possible\nconversions for a given mix of function/args.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Apr 2002 15:06:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Implicit coercions need to be reined in" } ]
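The heuristic Tom floats in this thread ("allow cross-category implicit coercions only if there are no candidate operators accepting the input's native category") is concrete enough to sketch. The catalog below is a made-up toy, not pg_operator, and the function names are illustrative:

```python
# A runnable sketch of the proposed resolution rule. A native-category
# candidate always wins; a lone candidate in a foreign category permits the
# implicit coercion; several foreign candidates mean the call stays ambiguous.

CANDIDATES = {
    "||": ["string"],                         # concatenation: text only
    "<=": ["string", "numeric", "datetime"],  # comparison exists per category
}

def resolve_category(opname, input_category):
    """Category the operand is treated as, or None to signal an error."""
    cats = CANDIDATES.get(opname, [])
    if input_category in cats:
        return input_category        # native-category candidate wins outright
    if len(cats) == 1:
        return cats[0]               # one candidate: coercion is unambiguous
    return None                      # several foreign candidates: refuse

# Under this rule "apples || oranges" coerces to text (the only '||'), while
# "apples <= oranges" never silently picks the text comparison -- avoiding the
# bug #484-style surprise discussed above.
```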
[ { "msg_contents": "-----Original Message-----\nFrom: Peter Eisentraut [mailto:peter_e@gmx.net]\nSent: Tuesday, April 16, 2002 3:33 PM\nTo: Fernando Nasser\nCc: Tom Lane; pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Operators and schemas\n\n\nFernando Nasser writes:\n\n> I agree. And for Entry level SQL'92 we are done -- only tables,\nviews\n> and grants are required. The multiple schemas per user is already\n> an intermediate SQL feature -- for intermediate SQL'92 we would still\n> need domains and a character set specification.\n>\n> For SQL'99, we would have to add types, functions and triggers\n> (only triggers are not part of Core SQL'99, but I would not leave them\nout).\n\nI can hardly believe that we want to implement this just to be able to\ncheck off a few boxes on the SQL-compliance test. Once you have the\nability to use a fixed list of statements in this context it should be\neasy to allow a more or less arbitrary list. Especially if they all\nstart\nwith the same key word it should be possible to parse this.\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\nItems like \"schema\" are a part of the language for a reason. Being \nable to create a schema in an area called 'test' and another in an area\ncalled 'development' and yet another in an area called 'production' is\na key feature for real business usefulness.\n\nIMO-YMMV.\n<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<\n", "msg_date": "Tue, 16 Apr 2002 15:38:04 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: Operators and schemas" } ]
[ { "msg_contents": "-----Original Message-----\nFrom: Tom Lane [mailto:tgl@sss.pgh.pa.us]\nSent: Tuesday, April 16, 2002 3:58 PM\nTo: Peter Eisentraut\nCc: Fernando Nasser; pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Operators and schemas \n\n\nPeter Eisentraut <peter_e@gmx.net> writes:\n> I can hardly believe that we want to implement this just to be able to\n> check off a few boxes on the SQL-compliance test. Once you have the\n> ability to use a fixed list of statements in this context it should be\n> easy to allow a more or less arbitrary list. Especially if they all\nstart\n> with the same key word it should be possible to parse this.\n\nIt's not the \"start\" part that creates the problem, so much as the \"end\"\npart. What we found was that we were having to reserve secondary\nkeywords. CREATE is now fully reserved, which it was not in 7.2,\nand that alone doesn't bother me. But AUTHORIZATION and GRANT are\nmore reserved than they were before, too, and it'll get worse the\nmore statements we insist on accepting inside CREATE SCHEMA.\n\nAFAICS, embedding statements inside CREATE SCHEMA adds absolutely zero\nfunctionality; you can just as easily execute them separately. Do we\nreally want to push a bunch more keywords into full-reserved status\n(and doubtless break some existing table definitions thereby) just\nto check off a box that isn't even in the SQL compliance test?\n\nTo the extent that we can allow stuff in CREATE SCHEMA without adding\nmore reserved words, it's fine with me. But I question having to add\nreserved words to do it.\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\nIf the alternative is to make a permanent fork in the road that leads\naway from ANSI compiliance, then it is a very, very bad decision not\nto put in the new keywords. 
Every week that passes by will make\ncorrecting the problem become more and more expensive.\n\nIMO-YMMV.\n<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<\n", "msg_date": "Tue, 16 Apr 2002 16:07:54 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: Operators and schemas " } ]
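The parsing problem behind the reserved-word complaint in this thread can be made concrete: the `<schema element>` list in CREATE SCHEMA has no `;` separators, so the only way to find where one element ends is to spot the keyword that starts the next, which is what pushes words like CREATE and GRANT toward fully-reserved status. A deliberately naive Python stand-in for the real bison grammar:

```python
# Toy splitter for a CREATE SCHEMA element list. Elements can only be
# delimited by their leading keywords -- and a column or table literally
# named "grant" would split wrongly here, which is exactly the breakage
# that reserving those keywords causes for existing table definitions.

ELEMENT_STARTERS = {"CREATE", "GRANT"}

def split_schema_elements(tokens):
    elements, current = [], []
    for tok in tokens:
        if tok.upper() in ELEMENT_STARTERS and current:
            elements.append(current)   # previous element ends here
            current = []
    
        current.append(tok)
    if current:
        elements.append(current)
    return elements
```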
[ { "msg_contents": "I keep getting this:\n\ncvs server: failed to create lock directory for\n`/projects/cvsroot/pgsql/contrib/dbsize'\n(/projects/cvsroot/pgsql/contrib/dbsize/#cvs.lock): Permission denied\ncvs server: failed to obtain dir lock in repository\n`/projects/cvsroot/pgsql/contrib/dbsize'\ncvs [server aborted]: read lock failed - giving up\n\nChris\n\n", "msg_date": "Wed, 17 Apr 2002 10:20:05 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@houston.familyhealth.com.au>", "msg_from_op": true, "msg_subject": "problem with anoncvs?" }, { "msg_contents": "\nfrom what I can tell, shouldn't be ... permissions look okay from here ...\nI know that if you catch it partway through the update, it can have some\npermissions problems though ... can you try and let me know if its still a\nproblem?\n\nOn Wed, 17 Apr 2002, Christopher Kings-Lynne wrote:\n\n> I keep getting this:\n>\n> cvs server: failed to create lock directory for\n> `/projects/cvsroot/pgsql/contrib/dbsize'\n> (/projects/cvsroot/pgsql/contrib/dbsize/#cvs.lock): Permission denied\n> cvs server: failed to obtain dir lock in repository\n> `/projects/cvsroot/pgsql/contrib/dbsize'\n> cvs [server aborted]: read lock failed - giving up\n>\n> Chris\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n", "msg_date": "Wed, 17 Apr 2002 10:10:38 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: problem with anoncvs?" }, { "msg_contents": "Yeah, working now. Sorry about that delayed post business. Our sysadmin\nwas mucking around with the domain masquerading and my emails temporarily\ncame from a different address.\n\nChris\n\n----- Original Message -----\nFrom: \"Marc G. 
Fournier\" <scrappy@hub.org>\nTo: \"Christopher Kings-Lynne\" <chriskl@houston.familyhealth.com.au>\nCc: \"Hackers\" <pgsql-hackers@postgresql.org>\nSent: Wednesday, April 17, 2002 9:10 PM\nSubject: Re: [HACKERS] problem with anoncvs?\n\n\n>\n> from what I can tell, shouldn't be ... permissions look okay from here ...\n> I know that if you catch it partway through the update, it can have some\n> permissions problems though ... can you try and let me know if its still a\n> problem?\n>\n> On Wed, 17 Apr 2002, Christopher Kings-Lynne wrote:\n>\n> > I keep getting this:\n> >\n> > cvs server: failed to create lock directory for\n> > `/projects/cvsroot/pgsql/contrib/dbsize'\n> > (/projects/cvsroot/pgsql/contrib/dbsize/#cvs.lock): Permission denied\n> > cvs server: failed to obtain dir lock in repository\n> > `/projects/cvsroot/pgsql/contrib/dbsize'\n> > cvs [server aborted]: read lock failed - giving up\n> >\n> > Chris\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n", "msg_date": "Wed, 17 Apr 2002 22:31:30 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: problem with anoncvs?" } ]
[ { "msg_contents": "I come to an idea using dblink from a contrib directory:\n\nWhy my pl/psql function can't use common PQ stuff to connect to other database ?\nSo I wrote a wrapper around PQ functions and registered them in postgres.\nNow I can write pl/psql functions like:\n\nCREATE OR REPLACE FUNCTION TestPQ ()\n RETURNS int\nAS '\nDECLARE cSql varchar;\n cConnStr varchar;\n nConnPointer int;\n nRet int;\n cDb text;\n cUser text;\n cPass text;\n cHost text;\n cPort text;\n cTemp text;\n nPid int;\n nResPointer int;\n nResStatus int;\n cResStatus text;\n cResultError text;\n nTuples int;\n nFields int;\n nFieldCurr int;\nBEGIN\n cSql:=''SELECT * FROM pg_database'';\n cConnStr:=''user=finteh host=bart dbname=reg_master'';\n\n --Connect and get some data from connection\n nConnPointer:=PQconnectdb(cConnStr);\n cDb:=PQdb(nConnPointer);\n cUser:=PQuser(nConnPointer);\n cPass:=PQpass(nConnPointer);\n cHost:=PQhost(nConnPointer);\n cPort:=PQport(nConnPointer);\n nPid:=PQbackendPID(nConnPointer);\n RAISE NOTICE ''Connected to : %@%:% as % with password % and backend pid is: %'',cDb,chost,cPort,cUser,cPass,nPid;\n \n --Execute a query and return some data\n nResPointer:=PQexec(nConnPointer,cSql);\n nTuples:=PQntuples(nResPointer);\n nFields:=PQnfields(nResPointer);\n RAISE NOTICE ''Query : % returned % fields in % rows.'',cSql,nFields,nTuples;\n \n nFieldCurr:=0;\n cTemp:='''';\n WHILE nFieldCurr<=nFields-1 LOOP\n cTemp:=cTemp || PQfname(nResPointer,nFieldCurr) || ''|'';\n nFieldCurr:=nFieldCurr+1;\n END LOOP;\n RAISE NOTICE ''Returned field names : %'',cTemp;\n \n nFieldCurr:=PQfnumber(nResPointer,''encoding'');\n RAISE NOTICE ''Index of field \"encoding\" is : %'',nFieldCurr;\n\n \n --Variable to return connection status:\n nRet:= PQstatus(nConnPointer);\n\n PERFORM PQclear(nResPointer);\n PERFORM PQreset(nConnPointer);\n PERFORM PQfinish(nConnPointer);\n RETURN nRet;\nEND;'\nLANGUAGE 'plpgsql' ;\nSELECT TestPQ();\n\nIn other words pl/psql function become client of 
another postgres backend.\n\nimplemented functions so far:\n\nextern Datum Connectdb(PG_FUNCTION_ARGS);\nextern Datum SetdbLogin(PG_FUNCTION_ARGS);\nextern Datum Status(PG_FUNCTION_ARGS);\nextern Datum Finish(PG_FUNCTION_ARGS);\nextern Datum Reset(PG_FUNCTION_ARGS);\nextern Datum Db(PG_FUNCTION_ARGS);\nextern Datum User(PG_FUNCTION_ARGS);\nextern Datum Password(PG_FUNCTION_ARGS);\nextern Datum Host(PG_FUNCTION_ARGS);\nextern Datum Port(PG_FUNCTION_ARGS);\nextern Datum Tty(PG_FUNCTION_ARGS);\nextern Datum ErrorMessage(PG_FUNCTION_ARGS);\nextern Datum BackendPID(PG_FUNCTION_ARGS);\nextern Datum Exec(PG_FUNCTION_ARGS);\nextern Datum ResultStatus(PG_FUNCTION_ARGS);\nextern Datum ResStatus(PG_FUNCTION_ARGS);\nextern Datum ResultErrorMessage(PG_FUNCTION_ARGS);\nextern Datum Clear(PG_FUNCTION_ARGS);\nextern Datum EscapeString(PG_FUNCTION_ARGS);\nextern Datum Ntuples(PG_FUNCTION_ARGS);\nextern Datum Nfields(PG_FUNCTION_ARGS);\nextern Datum Fname(PG_FUNCTION_ARGS);\nextern Datum Fnumber(PG_FUNCTION_ARGS); \n\nThe rest will be done in few days.\n\nNow I have one problem: Is it possible to return PGresult in same way that\nSQL select statement does ? I saw the code in dblink that does it. Is it\nthe only way ? 
Anyone know where in documentation to look for structure\nof sql result ?\n\nIf anyone is interested I'll be happy to send the code.\nIs it interesting enough to put it in the /contrib maybe ?\nBruce ?", "msg_date": "Wed, 17 Apr 2002 11:14:02 +0200", "msg_from": "\"Darko Prenosil\" <Darko.Prenosil@finteh.hr>", "msg_from_op": true, "msg_subject": "plpq" } ]
[ { "msg_contents": "Does anyone know if there is a way to extract the grammar and only the\ngrammar from a Bison file.? \n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Wed, 17 Apr 2002 20:26:56 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "Bison grammer" } ]
[ { "msg_contents": "Hi all,\n\nHere's an updated version of the experimental qCache patch I\nposted a couple days ago (which is a port of Karel Zak's 7.0\nwork to CVS HEAD).\n\nChanges:\n\n- fix segfault in EXECUTE under some circumstances (reported\n by Barry Lind)\n- fix some memory leaks (thanks to Karel Zak)\n- write more regression tests (make check still won't pass)\n- re-diff against CVS HEAD\n- more code cleanup, minor tweaks\n\nHowever, I've tentatively decided that I think the best\nway to go forward is to rewrite this code. IMHO the utility of\nplans cached in shared memory is fairly limited, but the\ncode that implements this adds a lot of complex to the patch.\nI'm planning to re-implement PREPARE/EXECUTE with support only\nfor locally-prepared plans, using the existing patch as a\nguide. The result should be a simpler patch -- once it's\nin CVS we can worry about more advanced plan caching\ntechiques. Any complaints/comments on this plan?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC", "msg_date": "Wed, 17 Apr 2002 17:17:51 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": true, "msg_subject": "updated qCache" }, { "msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> I'm planning to re-implement PREPARE/EXECUTE with support only\n> for locally-prepared plans, using the existing patch as a\n> guide. The result should be a simpler patch -- once it's\n> in CVS we can worry about more advanced plan caching\n> techiques. 
Any complaints/comments on this plan?\n\nThat's what I wanted from day one ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Apr 2002 00:06:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: updated qCache " }, { "msg_contents": "> Neil Conway <nconway@klamath.dyndns.org> writes:\n> > I'm planning to re-implement PREPARE/EXECUTE with support only\n> > for locally-prepared plans, using the existing patch as a\n> > guide. The result should be a simpler patch -- once it's\n> > in CVS we can worry about more advanced plan caching\n> > techiques. Any complaints/comments on this plan?\n>\n> That's what I wanted from day one ;-)\n\nSo with this scheme, people just have to be careful to use a connection pool\n/ persistent connections?\n\nChris\n\n", "msg_date": "Thu, 18 Apr 2002 12:15:53 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: updated qCache " }, { "msg_contents": "> Neil Conway <nconway@klamath.dyndns.org> writes:\n> > I'm planning to re-implement PREPARE/EXECUTE with support only\n> > for locally-prepared plans, using the existing patch as a\n> > guide. The result should be a simpler patch -- once it's\n> > in CVS we can worry about more advanced plan caching\n> > techiques. 
Any complaints/comments on this plan?\n>\n> That's what I wanted from day one ;-)\n\nYou know, if we had a threaded backend, we wouldn't have any of these\nproblems :)\n\nChris\n\n", "msg_date": "Thu, 18 Apr 2002 12:16:55 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: updated qCache " }, { "msg_contents": "On Wed, Apr 17, 2002 at 05:17:51PM -0400, Neil Conway wrote:\n> Hi all,\n> \n> Here's an updated version of the experimental qCache patch I\n> posted a couple days ago (which is a port of Karel Zak's 7.0\n> work to CVS HEAD).\n> \n> Changes:\n> \n> - fix segfault in EXECUTE under some circumstances (reported\n> by Barry Lind)\n> - fix some memory leaks (thanks to Karel Zak)\n> - write more regression tests (make check still won't pass)\n> - re-diff against CVS HEAD\n> - more code cleanup, minor tweaks\n> \n> However, I've tentatively decided that I think the best\n> way to go forward is to rewrite this code. IMHO the utility of\n> plans cached in shared memory is fairly limited, but the\n> code that implements this adds a lot of complex to the patch.\n> I'm planning to re-implement PREPARE/EXECUTE with support only\n> for locally-prepared plans, using the existing patch as a\n> guide. The result should be a simpler patch -- once it's\n> in CVS we can worry about more advanced plan caching\n> techiques. 
Any complaints/comments on this plan?\n\n I agree too :-) I think remove the shared memory code from this patch\n is easy and local memory storage is there already done.\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Thu, 18 Apr 2002 10:34:08 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: updated qCache" }, { "msg_contents": "On Wed, Apr 17, 2002 at 05:17:51PM -0400, Neil Conway wrote:\n> Hi all,\n> \n> Here's an updated version of the experimental qCache patch I\n> posted a couple days ago (which is a port of Karel Zak's 7.0\n> work to CVS HEAD).\n\n I have a question, what the Dllist and malloc()? I think it's\n nothing nice for local-memory cache too.\n\n There is needful destory cached planns by one call of \n MemoryContextDelete(). I hope that Neil wants still use \n original \"context-per-plan\" cache.\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Thu, 18 Apr 2002 11:04:21 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: updated qCache" } ]
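The simpler design settled on in this thread — prepared plans cached only in backend-local memory, keyed by statement name, living and dying with the session — sidesteps every shared-memory question (eviction, sizing, locking). A toy Python model with illustrative names, not the actual patch:

```python
# Session-private prepared-statement store: a plain map from name to plan.
# Because it is per-backend, no locking is needed, and a server restart
# simply means an empty cache that EXECUTE must be prepared to report.

class PreparedPlans:
    def __init__(self):
        self._plans = {}                       # dies with the session

    def prepare(self, name, query):
        if name in self._plans:
            raise ValueError('prepared statement "%s" already exists' % name)
        self._plans[name] = ("PLAN", query)    # stand-in for a stored Plan tree

    def execute(self, name):
        if name not in self._plans:
            raise ValueError('prepared statement "%s" does not exist' % name)
        return self._plans[name]

    def deallocate(self, name):
        self._plans.pop(name, None)
```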
[ { "msg_contents": "-----Original Message-----\nFrom: Neil Conway [mailto:nconway@klamath.dyndns.org]\nSent: Wednesday, April 17, 2002 2:18 PM\nTo: PostgreSQL Hackers\nSubject: [HACKERS] updated qCache\n\n\nHi all,\n\nHere's an updated version of the experimental qCache patch I\nposted a couple days ago (which is a port of Karel Zak's 7.0\nwork to CVS HEAD).\n\nChanges:\n\n- fix segfault in EXECUTE under some circumstances (reported\n by Barry Lind)\n- fix some memory leaks (thanks to Karel Zak)\n- write more regression tests (make check still won't pass)\n- re-diff against CVS HEAD\n- more code cleanup, minor tweaks\n\nHowever, I've tentatively decided that I think the best\nway to go forward is to rewrite this code. IMHO the utility of\nplans cached in shared memory is fairly limited, but the\ncode that implements this adds a lot of complex to the patch.\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\nDC% Why do you imagine that the utility is limited?\n<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<\n\nI'm planning to re-implement PREPARE/EXECUTE with support only\nfor locally-prepared plans, using the existing patch as a\nguide. The result should be a simpler patch -- once it's\nin CVS we can worry about more advanced plan caching\ntechiques. Any complaints/comments on this plan?\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\nDC% Why not allow both kinds and make it configurable...\nDC% local/shared/both.\n<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<\n", "msg_date": "Wed, 17 Apr 2002 14:34:45 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: updated qCache" }, { "msg_contents": "On Wed, 17 Apr 2002 14:34:45 -0700\n\"Dann Corbit\" <DCorbit@connx.com> wrote:\n> However, I've tentatively decided that I think the best\n> way to go forward is to rewrite this code. 
IMHO the utility of\n> plans cached in shared memory is fairly limited, but the\n> code that implements this adds a lot of complex to the patch.\n> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n> DC% Why do you imagine that the utility is limited?\n\n(1) It's difficult to tell whether a given plan has already been\n prepared: the server could have been restarted in the mean-time,\n for example. We could allow app developers to check if a\n given has already been prepared, but that's inconvenient,\n and the benefits seem fairly small.\n\n(2) Shared memory is a bad storage location for variable-sized\n data, like query plans. What happens if you're asked to\n cache a plan larger than the free space in the shared cache?\n You could perhaps free up some space by removing another\n entry, but that means that every invocation of EXECUTE\n needs to be prepared for the target plan to have been\n evicted from the cache -- which is irritating.\n\n(3) Managing concurrent access to the shared cache may (or may\n not) be a performance issue.\n\nI'm not saying it's a bad idea, I just think I'd like to\nconcentrate on the locally-cached plans for now and see if\nthere is a need to add shared plans later.\n\n> <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<\n> \n> I'm planning to re-implement PREPARE/EXECUTE with support only\n> for locally-prepared plans, using the existing patch as a\n> guide. The result should be a simpler patch -- once it's\n> in CVS we can worry about more advanced plan caching\n> techiques. 
Any complaints/comments on this plan?\n> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n> DC% Why not allow both kinds and make it configurable...\n> DC% local/shared/both.\n\nThat's what the current patch does.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Wed, 17 Apr 2002 18:05:59 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": false, "msg_subject": "Re: updated qCache" }, { "msg_contents": "On Wed, Apr 17, 2002 at 06:05:59PM -0400, Neil Conway wrote:\n> On Wed, 17 Apr 2002 14:34:45 -0700\n> \n> I'm not saying it's a bad idea, I just think I'd like to\n> concentrate on the locally-cached plans for now and see if\n> there is a need to add shared plans later.\n\n Yes, later we can use shared memory buffer as \"pipe\" between\n backends:\n\n Backend A: Backend B:\n local-memory-query-plan --> shmem --> local-memory-query-plan\n \n In this idea is in the shared memory one query-plan only and backends \n use it for plan copying from \"A\" to \"B\".\n\n It require persistent backends of course.\n\n Karel\n\n PS. it's idea only and nothing other, the original qcache was idea\n only too :-)\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Thu, 18 Apr 2002 10:55:19 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: updated qCache" } ]
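The locally-cached PREPARE/EXECUTE design Neil settles on above — per-backend plans, no shared memory — sidesteps both of his objections: there is no shared cache to size or evict from, and a plan exists exactly for the lifetime of the session that prepared it. A minimal sketch of that flow (an illustration only, not the patch code; class and method names are invented):

```python
# Sketch of a session-local prepared-statement cache: PREPARE stores a
# plan under a name, EXECUTE looks it up and runs it.  Because the cache
# lives in the backend's own memory, there is no shared-memory sizing
# problem and no surprise eviction for EXECUTE to cope with.

class Session:
    def __init__(self):
        self._plans = {}                     # statement name -> cached plan

    def prepare(self, name, query):
        # stand-in for the real parse/rewrite/plan steps
        self._plans[name] = query

    def execute(self, name, params):
        if name not in self._plans:
            raise KeyError("prepared statement %r does not exist" % name)
        return "executing %s with %r" % (self._plans[name], params)

s = Session()
s.prepare("q1", "SELECT * FROM t WHERE id = $1")
result = s.execute("q1", (42,))
```

Karel's follow-up idea — shared memory as a one-plan "pipe" for copying a plan from backend A to backend B — would layer on top of exactly this local store.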
[ { "msg_contents": "-----Original Message-----\nFrom: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\nSent: Wednesday, April 17, 2002 2:39 PM\nTo: mlw\nCc: Andrew Sullivan; PostgreSQL-development; Tom Lane\nSubject: Re: [HACKERS] Index Scans become Seq Scans after VACUUM ANALYSE\n\nmlw wrote:\n> Bruce Momjian wrote:\n> > \n> > mlw wrote:\n> > > Now, given the choice of the two strategies on a table, both\npretty close to\n> > > one another, the risk of poor performance for using the index scan\nis minimal\n> > > based on the statistics, but the risk of poor performance for\nusing the\n> > > sequential scan is quite high on a large table.\n> \n> > My second point, that index scan is more risky than sequential scan,\nis\n> > outlined above. A sequential scan reads each page once, and uses\nthe\n> > file system read-ahead code to prefetch the disk buffers. Index\nscans\n> > are random, and could easily re-read disk pages to plow through a\n> > significant portion of the table, and because the reads are random,\n> > the file system will not prefetch the rows so the index scan will\nhave\n> > to wait for each non-cache-resident row to come in from disk.\n> \n> That is a very interesting point, but shouldn't that be factored into\nthe cost\n> (random_tuple_cost?) In which case my point still stands.\n\nYes, I see your point. I think on the high end that index scans can get\nvery expensive if you start to do lots of cache misses and have to wait\nfor i/o. I know the random cost is 4, but I think that number is not\nlinear. It can be much higher for lots of cache misses and waiting for\nI/O, and think that is why it feels more risky to do an index scan on a\nsample size that is not perfectly known.\n\nActually, you pretty much can know sequential scan size because you know\nthe number of blocks in the table. It is index scan that is more\nunknown because you don't know how many index lookups you will need, and\nhow well they will stay in the cache.\n\nDoes that help? 
Wow, this _is_ confusing. I am still looking for that\nholy grail that will allow this all to be codified so others can learn\nfrom it and we don't have to rehash this repeatedly, but frankly, this\nwhole discussion is covering new ground that we haven't covered yet. \n\n(Maybe TODO.detail this discussion and point to it from the FAQ.)\n>>\nGeneral rules of thumb (don't know if they apply to postgres or not):\n\nIndex scans are increasingly costly when the data is only a few types.\nFor instance, an index on a single bit makes for a very expensive scan.\nAfter 5% of the data, it would be cheaper to scan the whole table \nwithout using the index.\n\nNow, if you have a clustered index (where the data is *physically*\nordered by the index order) you can use index scans much more cheaply\nthan when the data is not ordered in this way.\n\nA golden rule of thumb when accessing data in a relational database\nis to use the unique clustered index whenever possible (even if it\nis not the primary key).\n\nThese decisions are always heuristic in nature. If a table is small\nit may be cheapest to load the whole table into memory and sort it.\n\nIf the vacuum command could categorize statistically (some RDBMS \nsystems do this) then you can look at the statistical data and make \nmuch smarter choices for how to use the index relations. The \nstatistical information saved could be as simple as the min, max, mean, \nmedian, mode, and standard deviation, or it might also have quartiles\nor deciles, or some other measure to show even more data about the\nactual distribution. You could save what is essentially a binned\nhistogram of the data that is present in the table. A bit of \nimagination will quickly show how useful this could be (some commercial \ndatabase systems actually do this).\n\nAnother notion is to super optimize some queries. 
By this, I mean\nthat if someone says that a particular query is very important, it\nmight be worthwhile to actually try a dozen or so different plans\n(of the potentially billions that are possible) against the query\nand store the best one. They could also run the super optimize\nfeature again later or even automatically if the vacuum operation\ndetects that the data distributions or cardinality have changed in\nsome significant manner.\n\nBetter yet, let them store the plan and hand edit it, if need be.\nRdb is an example of a commercial database that allows this.\nA maintenance nightmare when dimwits do it, of course, but such is\nlife.\n<<\n", "msg_date": "Wed, 17 Apr 2002 15:03:25 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" } ]
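Dann's rules of thumb above — an index on a low-cardinality column is nearly useless, and past a few percent selectivity a sequential scan wins — can be sketched as a toy cost decision driven by a binned histogram of the kind he describes. The threshold and numbers below are illustrative only; this is not PostgreSQL's actual cost model:

```python
# Toy version of the heuristic: estimate from a per-column histogram what
# fraction of the table an equality predicate matches, and prefer a
# sequential scan once the estimated selectivity passes a threshold
# (the "few percent" rule of thumb; 5% here is arbitrary).

def estimate_selectivity(histogram, total_rows, value):
    # histogram: list of (bin_value, row_count) pairs
    matched = sum(count for v, count in histogram if v == value)
    return matched / total_rows

def choose_plan(selectivity, threshold=0.05):
    return "index scan" if selectivity <= threshold else "seq scan"

# A near-boolean column: an index scan would touch ~45% of the table.
hist = [(0, 55_000), (1, 45_000)]
sel_common = estimate_selectivity(hist, 100_000, 1)    # 0.45
plan_common = choose_plan(sel_common)

# A near-unique column: the index wins easily.
hist_unique = [(i, 1) for i in range(100)]
sel_rare = estimate_selectivity(hist_unique, 100, 7)   # 0.01
plan_rare = choose_plan(sel_rare)
```

This also shows why Bruce's point about risk cuts the way it does: the seq-scan cost is known from the block count, while the index-scan estimate hinges entirely on how good the histogram is.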
[ { "msg_contents": "\nHello,\n\nI am using postgresql to house chemical informatics data which consists\nof\nseveral interlinked tables with tens of thousands (maximum) of rows.\nWhen\ndoing search queries against these tables (which always requires\nmultiple\njoins) I have noticed that the semantically equivalent SQL queries can\ndiffer\nvastly in speed performance depending on the order of clauses ANDed\ntogether ( \"WHERE cond1 AND cond2\" takes forever, but \"WHERE cond2\nAND cond1\" comes right back).\n\nSo it appears I need to do some pre-optimization of the SQL query\ngenerated\nby the user before submitting it to postgresql in order to guarantee (or\nat least\nincrease the likelihood of) the fastest results. I've tried STFW and\nRTFM but\nhaven't found any good pointers on where to start with this, although I\nfeel that\nthere must be some published algorithms or theories. Can anyone point me\nto\na URL or other source to get me on my way?\n\nAlso, I wonder if this sort of query optimization is built into other\ndatabases\nsuch as Oracle?\n\nI did find this URL: http://redbook.cs.berkeley.edu/lec7.html\nwhich seems to be interesting, but honestly I'm far from a DB expert so\nI\ncan't follow most of it, and I can't tell if it is talking about\noptimization that\ncan be done in application space (query rewrite) or something that has\nto\nbe done in the database engine itself. I'm going to try to find the book\nit\nreferences though.\n\nBasically I feel a bit in over my head, which is ok but I don't want to\nwaste\ntime paddling in the wrong direction, so I'm hoping someone can\nrecognize\nwhere I need to look and nudge me in that direction. 
Maybe I just need\nproper terminology to plug into google.\n\nThanks,\nDav\n\n", "msg_date": "Wed, 17 Apr 2002 16:28:54 -0700", "msg_from": "Dav Coleman <dav@serve.com>", "msg_from_op": true, "msg_subject": "SQL Query Optimization" }, { "msg_contents": "Dav Coleman <dav@serve.com> writes:\n> I have noticed that the semantically equivalent SQL queries can\n> differ\n> vastly in speed performance depending on the order of clauses ANDed\n> together ( \"WHERE cond1 AND cond2\" takes forever, but \"WHERE cond2\n> AND cond1\" comes right back).\n\nCould we see a specific example?\n\nIt would also be useful to know what PG version you are using, whether\nyou've VACUUM ANALYZEd the tables, and what EXPLAIN has to say about\nyour query.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Apr 2002 10:38:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL Query Optimization " } ]
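Since AND is commutative, the pre-optimization Dav is asking about amounts to reordering conjuncts so that cheap, highly selective ones run first and expensive ones see as few candidate rows as possible. A simple sketch of such a rewriter, with invented clause names and made-up cost/selectivity numbers (real optimizers use more refined rankings, e.g. cost-per-row-eliminated):

```python
# Sketch of AND-clause reordering for a query rewriter: given per-clause
# cost and selectivity estimates, evaluate the cheapest and most
# selective conjuncts first.  The clauses and numbers are hypothetical,
# loosely modeled on a chemical-informatics search.

def order_conjuncts(clauses):
    # clauses: list of (sql_text, estimated_cost, estimated_selectivity);
    # sort cheap-first, then most-selective-first as a tiebreak
    return sorted(clauses, key=lambda c: (c[1], c[2]))

clauses = [
    ("expensive_substructure_match(mol, $1)", 1000.0, 0.30),
    ("mol_weight < 500",                      1.0,    0.10),
    ("formula_contains('N')",                 5.0,    0.50),
]
ordered = [text for text, _cost, _sel in order_conjuncts(clauses)]
# the cheap range test leads; the expensive match runs last
```

Tom's reply is the right first step, though: before rewriting queries by hand, VACUUM ANALYZE and EXPLAIN will show whether the planner already had the statistics to do this itself.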
[ { "msg_contents": "No one has replied, so I worked up a patch that I will apply in a few\ndays. Let me know if you don't like it.\n\n---------------------------------------------------------------------------\n\nAndrew Johnson wrote:\n> Not sure if you're the right person to be talking to here, but the recent\n> CVS pacthes to the module belong to you, so here goes.\n> \n> pgdb.connect() seems to be broken on Python 2.0.1 (which ships with\n> Slackware 8), and perhaps on other Pythons, haven't checked. Something in\n> the _pg.connect() call isn't working. I think the problem stems from the\n> fact that 'host' is a named parameter of both _pg.connect and pgdb.connect,\n> and so Python treats it as a variable assignment, not a named parameter.\n> \n> In any case, rewriting the call without named parameters solved the problem.\n> \n> Instead of:\n> \n> cnx = _pg.connect(host = dbhost, dbname = dbbase, port = dbport,\n> opt = dbopt, tty = dbtty,\n> user = dbuser, passwd = dbpassw\n> \n> use:\n> \n> cnx = _pg.connect(dbbase, dbhost, dbport, dbopt,\n> dbtty, dbuser, dbpasswd)\n> \n> -- \n> Andrew Johnson (ajohnson@lynn.ci-n.com)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/interfaces/python/pgdb.py\n===================================================================\nRCS file: /cvsroot/pgsql/src/interfaces/python/pgdb.py,v\nretrieving revision 1.10\ndiff -c -r1.10 pgdb.py\n*** src/interfaces/python/pgdb.py\t19 Mar 2002 02:47:57 -0000\t1.10\n--- src/interfaces/python/pgdb.py\t18 Apr 2002 02:10:20 -0000\n***************\n*** 337,343 ****\n ### module interface\n \n # connects to a database\n! 
def connect(dsn = None, user = None, password = None, host = None, database = None):\n \t# first get params from DSN\n \tdbport = -1\n \tdbhost = \"\"\n--- 337,343 ----\n ### module interface\n \n # connects to a database\n! def connect(dsn = None, user = None, password = None, xhost = None, database = None):\n \t# first get params from DSN\n \tdbport = -1\n \tdbhost = \"\"\n***************\n*** 364,372 ****\n \t\tdbpasswd = password\n \tif database != None:\n \t\tdbbase = database\n! \tif host != None:\n \t\ttry:\n! \t\t\tparams = string.split(host, \":\")\n \t\t\tdbhost = params[0]\n \t\t\tdbport = int(params[1])\n \t\texcept:\n--- 364,372 ----\n \t\tdbpasswd = password\n \tif database != None:\n \t\tdbbase = database\n! \tif xhost != None:\n \t\ttry:\n! \t\t\tparams = string.split(xhost, \":\")\n \t\t\tdbhost = params[0]\n \t\t\tdbport = int(params[1])\n \t\texcept:", "msg_date": "Wed, 17 Apr 2002 22:12:10 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: PyGreSQL bug" } ]
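The patch above renames pgdb.connect's `host` parameter to `xhost` so it can no longer be confused with the `host=` keyword passed to the inner `_pg.connect` call. Whatever went wrong in Python 2.0.1, in current Python a keyword name in a call is resolved against the callee's parameter list, independent of any same-named variable in the caller, as this toy version shows (the function names here are hypothetical stand-ins for `_pg.connect` and `pgdb.connect`):

```python
# Toy stand-ins demonstrating that the keyword 'host=' in a call refers
# to the callee's parameter, even when the caller also has a parameter
# (or local) named 'host'.

def low_level_connect(dbname="", host="", port=-1):
    # stand-in for _pg.connect
    return {"dbname": dbname, "host": host, "port": port}

def connect(dsn=None, host=None):
    # stand-in for pgdb.connect; 'host' is "host:port" as in the DB-API
    dbhost, dbport = "localhost", 5432
    if host is not None:
        parts = host.split(":")
        dbhost, dbport = parts[0], int(parts[1])
    # 'host=' below names low_level_connect's parameter -- the enclosing
    # 'host' parameter does not interfere
    return low_level_connect(dbname="mydb", host=dbhost, port=dbport)

cnx = connect(host="db.example.com:5433")
```

So on modern interpreters the positional-call workaround is unnecessary, but the rename is still harmless and arguably clearer.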
[ { "msg_contents": "\nCan someone comment on this? I can't decide.\n\n---------------------------------------------------------------------------\n\nAndreas Scherbaum wrote:\n> \n> Hello,\n> \n> i have written a module for logging changes on a table (without knowing\n> the exact column names).\n> Dont know where to put it, but its ready for use in the contrib directory.\n> \n> Its available at: http://ads.ufp.de/projects/Pg/table_log.tar.gz (3k)\n> \n> Would be nice, if this can be added.\n> \n> \n> Best regards\n> \n> -- \n> \t\t\t\tAndreas 'ads' Scherbaum\n> Failure is not an option. It comes bundled with your Microsoft product.\n> (Ferenc Mantfeld)\n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Apr 2002 23:46:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: new food for the contrib/ directory" }, { "msg_contents": "Hi Bruce,\n\nHaven't looked at the code, but there's no license with it.\n\nAndreas, are you cool with having the same License as PostgreSQL for it\n(BSD license)?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nBruce Momjian wrote:\n> \n> Can someone comment on this? I can't decide.\n> \n> ---------------------------------------------------------------------------\n> \n> Andreas Scherbaum wrote:\n> >\n> > Hello,\n> >\n> > i have written a module for logging changes on a table (without knowing\n> > the exact column names).\n> > Dont know where to put it, but its ready for use in the contrib directory.\n> >\n> > Its available at: http://ads.ufp.de/projects/Pg/table_log.tar.gz (3k)\n> >\n> > Would be nice, if this can be added.\n> >\n> >\n> > Best regards\n> >\n> > --\n> > Andreas 'ads' Scherbaum\n> > Failure is not an option. 
It comes bundled with your Microsoft product.\n> > (Ferenc Mantfeld)\n> >\n> >\n> >\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 18 Apr 2002 16:36:32 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: new food for the contrib/ directory" }, { "msg_contents": "Justin Clift wrote:\n> \n> Hi Bruce,\n> \n> Haven't looked at the code, but there's no license with it.\n> \n> Andreas, are you cool with having the same License as PostgreSQL for it\n> (BSD license)?\n> \n> :-)\n> \n> Regards and best wishes,\n> \n> Justin Clift\n> \n> Bruce Momjian wrote:\n> >\n> > Can someone comment on this? I can't decide.\n> >\n> > ---------------------------------------------------------------------------\n> >\n> > Andreas Scherbaum wrote:\n> > >\n> > > Hello,\n> > >\n> > > i have written a module for logging changes on a table (without knowing\n> > > the exact column names).\n> > > Dont know where to put it, but its ready for use in the contrib directory.\n> > >\n> > > Its available at: http://ads.ufp.de/projects/Pg/table_log.tar.gz (3k)\n> > >\n> > > Would be nice, if this can be added.\n> > >\n> > >\n> > > Best regards\n\nHello,\n\nuhm, good point. 
I thought i missed something ;-)\n\nThis software is distributed under the GNU General Public License\neither version 2, or (at your option) any later version.\n\nI have updated the readme and replaced the archive with a new version.\n\n\nThanks and best regards\n\n-- \n Andreas 'ads' Scherbaum\n", "msg_date": "Thu, 18 Apr 2002 10:03:42 +0200", "msg_from": "Andreas Scherbaum <adsmail@htl.de>", "msg_from_op": false, "msg_subject": "Re: new food for the contrib/ directory" }, { "msg_contents": "Hi Bruce,\n\nDid we reach an opinion as to whether we'll include GPL'd code?\n\nMy vote is to not include this code, as it just muddies the water with\nPostgreSQL being BSD based.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\nAndreas Scherbaum wrote:\n> \n> Justin Clift wrote:\n> >\n> > Hi Bruce,\n> >\n> > Haven't looked at the code, but there's no license with it.\n> >\n> > Andreas, are you cool with having the same License as PostgreSQL for it\n> > (BSD license)?\n> >\n> > :-)\n> >\n> > Regards and best wishes,\n> >\n> > Justin Clift\n> >\n> > Bruce Momjian wrote:\n> > >\n> > > Can someone comment on this? I can't decide.\n> > >\n> > > ---------------------------------------------------------------------------\n> > >\n> > > Andreas Scherbaum wrote:\n> > > >\n> > > > Hello,\n> > > >\n> > > > i have written a module for logging changes on a table (without knowing\n> > > > the exact column names).\n> > > > Dont know where to put it, but its ready for use in the contrib directory.\n> > > >\n> > > > Its available at: http://ads.ufp.de/projects/Pg/table_log.tar.gz (3k)\n> > > >\n> > > > Would be nice, if this can be added.\n> > > >\n> > > >\n> > > > Best regards\n> \n> Hello,\n> \n> uhm, good point. 
I thought i missed something ;-)\n> \n> This software is distributed under the GNU General Public License\n> either version 2, or (at your option) any later version.\n> \n> I have updated the readme and replaced the archive with a new version.\n> \n> Thanks and best regards\n> \n> --\n> Andreas 'ads' Scherbaum\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 18 Apr 2002 22:03:28 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: new food for the contrib/ directory" }, { "msg_contents": "Justin Clift wrote:\n> \n> Hi Bruce,\n> \n> Did we reach an opinion as to whether we'll include GPL'd code?\n> \n> My vote is to not include this code, as it just muddies the water with\n> PostgreSQL being BSD based.\n> \n> :-)\n> \n\nHmm, there's enough GPL'ed stuff in contrib/ ;-)\n\n-- \n Andreas 'ads' Scherbaum\n", "msg_date": "Thu, 18 Apr 2002 14:26:49 +0200", "msg_from": "Andreas Scherbaum <adsmail@htl.de>", "msg_from_op": false, "msg_subject": "Re: new food for the contrib/ directory" }, { "msg_contents": "first comment : \n\n* a special directory with ./contrib/gpl ?\n\nsecond comment : \n\n* I don't really understand your position regarding the GNU General Public \n License. The GPL is offering multiple advantages for a big project and \n software like PostgreSQL. For example : \n\n\t* Contribution back to the main tree more easy if redistribution. \n (like HP and Samba team are doing, copyright holder remains samba team) \n\n\t* More easy to get a RF (Royalty Free) license from a patent \n owner. 
(this is guarantee for him that it will not go back to \n proprietary software where it's not a RF license) (like the \n UB-Trees)\n\n\t* A possible bigger audience.\n\nDual licensing is also an alternative but could be a real mess. \n\nIt's just idea. \n\nalx\n\n\n\nOn Thu, 18 Apr 2002, Justin Clift wrote:\n\n> Hi Bruce,\n> \n> Did we reach an opinion as to whether we'll include GPL'd code?\n> \n> My vote is to not include this code, as it just muddies the water with\n> PostgreSQL being BSD based.\n> \n> :-)\n> \n> Regards and best wishes,\n> \n> Justin Clift\n> \n> Andreas Scherbaum wrote:\n> > \n> > Justin Clift wrote:\n> > >\n> > > Hi Bruce,\n> > >\n> > > Haven't looked at the code, but there's no license with it.\n> > >\n> > > Andreas, are you cool with having the same License as PostgreSQL for it\n> > > (BSD license)?\n> > >\n> > > :-)\n> > >\n> > > Regards and best wishes,\n> > >\n> > > Justin Clift\n> > >\n> > > Bruce Momjian wrote:\n> > > >\n> > > > Can someone comment on this? I can't decide.\n> > > >\n> > > > ---------------------------------------------------------------------------\n> > > >\n> > > > Andreas Scherbaum wrote:\n> > > > >\n> > > > > Hello,\n> > > > >\n> > > > > i have written a module for logging changes on a table (without knowing\n> > > > > the exact column names).\n> > > > > Dont know where to put it, but its ready for use in the contrib directory.\n> > > > >\n> > > > > Its available at: http://ads.ufp.de/projects/Pg/table_log.tar.gz (3k)\n> > > > >\n> > > > > Would be nice, if this can be added.\n> > > > >\n> > > > >\n> > > > > Best regards\n> > \n> > Hello,\n> > \n> > uhm, good point. 
I thought i missed something ;-)\n> > \n> > This software is distributed under the GNU General Public License\n> > either version 2, or (at your option) any later version.\n> > \n> > I have updated the readme and replaced the archive with a new version.\n> > \n> > Thanks and best regards\n> > \n> > --\n> > Andreas 'ads' Scherbaum\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> > \n> > http://archives.postgresql.org\n> \n> \n\n-- \nAlexandre Dulaunoy\t\t\tadulau@conostix.com\n\t\t\t\t\thttp://www.conostix.com/\n\n", "msg_date": "Thu, 18 Apr 2002 14:36:49 +0200 (CEST)", "msg_from": "Alexandre Dulaunoy <adulau-conos@conostix.com>", "msg_from_op": false, "msg_subject": "Re: new food for the contrib/ directory" }, { "msg_contents": "Andreas Scherbaum <adsmail@htl.de> writes:\n> Justin Clift wrote:\n>> Did we reach an opinion as to whether we'll include GPL'd code?\n>> \n>> My vote is to not include this code, as it just muddies the water with\n>> PostgreSQL being BSD based.\n\n> Hmm, there's enough GPL'ed stuff in contrib/ ;-)\n\nIndeed, the core committee recently agreed that we should try to ensure\nthat the whole distribution is under the same BSD license. I have a\nTODO item to contact the authors of the existing GPL'd contrib modules,\nand if possible get them to agree to relicense. If not, those modules\nwill be removed from contrib.\n\nThere are other possible homes for contrib modules whose authors\nstrongly prefer GPL. For example, Red Hat's add-ons for Postgres will\nbe GPL (per corporate policy), and I expect that they'd be willing to\nhost contrib modules. 
But the core distribution will be straight BSD\nto avoid license confusion.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Apr 2002 10:20:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new food for the contrib/ directory " }, { "msg_contents": "Justin Clift wrote:\n> Hi Bruce,\n> \n> Did we reach an opinion as to whether we'll include GPL'd code?\n> \n> My vote is to not include this code, as it just muddies the water with\n> PostgreSQL being BSD based.\n\nYes, our current policy is to add GPL to /contrib only when we have\nlittle choice and the module is important. I am not sure if the module\nis even appropriate for /contrib.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Apr 2002 11:23:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: new food for the contrib/ directory" }, { "msg_contents": "Tom Lane wrote:\n> \n> Andreas Scherbaum <adsmail@htl.de> writes:\n> > Justin Clift wrote:\n> >> Did we reach an opinion as to whether we'll include GPL'd code?\n> >>\n> >> My vote is to not include this code, as it just muddies the water with\n> >> PostgreSQL being BSD based.\n> \n> > Hmm, there's enough GPL'ed stuff in contrib/ ;-)\n> \n> Indeed, the core committee recently agreed that we should try to ensure\n> that the whole distribution is under the same BSD license. I have a\n> TODO item to contact the authors of the existing GPL'd contrib modules,\n> and if possible get them to agree to relicense. If not, those modules\n> will be removed from contrib.\n> \n> There are other possible homes for contrib modules whose authors\n> strongly prefer GPL. 
For example, Red Hat's add-ons for Postgres will\n> be GPL (per corporate policy), and I expect that they'd be willing to\n> host contrib modules. But the core distribution will be straight BSD\n> to avoid license confusion.\n\nI have to excuse myself, because i think, i did a mistake.\nYes, my first intention was to make it GPL, but i do not stick to it.\n\nOn the other hand, i copied some parts from contrib/noupdate (there'e no\nlicence in the readme) and now i think, this is contributed under BSD\nlicence.\nI'm sure or i'm wrong? I think, i have to change the licence.\nWho is the author of the noupdate module and can anybody tell me,\nwhats in this case the right (or left) license?\n\n\nBest regards\n\n-- \n Andreas 'ads' Scherbaum\n", "msg_date": "Thu, 18 Apr 2002 18:43:58 +0200", "msg_from": "Andreas Scherbaum <adsmail@htl.de>", "msg_from_op": false, "msg_subject": "Re: new food for the contrib/ directory" }, { "msg_contents": "Andreas Scherbaum <adsmail@htl.de> writes:\n> On the other hand, i copied some parts from contrib/noupdate (there'e no\n> licence in the readme) and now i think, this is contributed under BSD\n> licence.\n> I'm sure or i'm wrong? I think, i have to change the licence.\n> Who is the author of the noupdate module and can anybody tell me,\n> whats in this case the right (or left) license?\n\nSince it was contributed to become part of the Postgres distribution,\nwe assume the author's intent was to license it under the Postgres\ndistribution license --- ie, BSD.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Apr 2002 13:00:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new food for the contrib/ directory " }, { "msg_contents": "Tom Lane wrote:\n> \n> Andreas Scherbaum <adsmail@htl.de> writes:\n> > On the other hand, i copied some parts from contrib/noupdate (there'e no\n> > licence in the readme) and now i think, this is contributed under BSD\n> > licence.\n> > I'm sure or i'm wrong? 
I think, i have to change the licence.\n> > Who is the author of the noupdate module and can anybody tell me,\n> > whats in this case the right (or left) license?\n> \n> Since it was contributed to become part of the Postgres distribution,\n> we assume the author's intent was to license it under the Postgres\n> distribution license --- ie, BSD.\n\nOk, i have changed the license part in the readme to PostgreSQL (BSD)\nlicense\nand published a new archive.\n\nhttp://ads.ufp.de/projects/Pg/table_log.tar.gz\n\nHope, this cleans some open questions.\n\n\nBest regards\n\n-- \n Andreas 'ads' Scherbaum\n", "msg_date": "Fri, 19 Apr 2002 10:13:30 +0200", "msg_from": "Andreas Scherbaum <adsmail@htl.de>", "msg_from_op": false, "msg_subject": "Re: new food for the contrib/ directory" }, { "msg_contents": "Andreas Scherbaum wrote:\n> Tom Lane wrote:\n> > \n> > Andreas Scherbaum <adsmail@htl.de> writes:\n> > > On the other hand, i copied some parts from contrib/noupdate (there'e no\n> > > licence in the readme) and now i think, this is contributed under BSD\n> > > licence.\n> > > I'm sure or i'm wrong? I think, i have to change the licence.\n> > > Who is the author of the noupdate module and can anybody tell me,\n> > > whats in this case the right (or left) license?\n> > \n> > Since it was contributed to become part of the Postgres distribution,\n> > we assume the author's intent was to license it under the Postgres\n> > distribution license --- ie, BSD.\n> \n> Ok, i have changed the license part in the readme to PostgreSQL (BSD)\n> license\n> and published a new archive.\n> \n> http://ads.ufp.de/projects/Pg/table_log.tar.gz\n\nOK, now that the license issue is cleared up, I need someone to\ncomment on its appropriateness for /contrib.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Apr 2002 13:29:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: new food for the contrib/ directory" }, { "msg_contents": "\nHello,\n\nthere's a new archive available with a bugfix for handling null values.\nThanks to Steve Head for reporting this.\n\n\nhttp://ads.ufp.de/projects/Pg/\n\n\nRegards\n\n-- \n Andreas 'ads' Scherbaum\n", "msg_date": "Thu, 25 Apr 2002 16:34:26 +0200", "msg_from": "Andreas Scherbaum <adsmail@htl.de>", "msg_from_op": false, "msg_subject": "Re: new food for the contrib/ directory" } ]
[ { "msg_contents": "Is this a bug?\n\nusa=# SELECT * FROM palm_buyers WHERE buyer_id=in('150',210) ;\nERROR: Function 'in(unknown, int4)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\n\nChris\n\n", "msg_date": "Thu, 18 Apr 2002 15:11:45 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Is this an IN bug?" } ]
[ { "msg_contents": "Ignore my previous post - for obvious reasons!!!\n\nChris\n\n", "msg_date": "Thu, 18 Apr 2002 15:12:12 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Oops!" } ]
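A side note on the retracted question in the two threads above: `buyer_id=in('150',210)` is parsed as a call to a function named `in`, which is why the error complains about `in(unknown, int4)` rather than about an IN predicate. A minimal sketch of what was presumably intended (assuming `buyer_id` is an integer column):

```sql
-- IN is predicate syntax, not a function call, so there is no "=" before it;
-- writing the literals as integers also avoids the unknown-vs-int4 mismatch.
SELECT * FROM palm_buyers WHERE buyer_id IN (150, 210);
```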
[ { "msg_contents": "On 04/17/2002 01:44:46 PM Michael Loftis wrote:\n> In many of the cases where it is a primary key it is also there to\n> ensure fast lookups when referenced as a foreign key. Or for joins.\n\nDon't know if the optimizer takes this into consideration, but a query that uses a primary and/or unique key in the where-clause, should always choose to use the related indices (assuming the table size is above a certain threshold). Since a primary key/unique index always restricts the resultset to a single row.....\n\nSomebody else mentioned that after creating an index, he still had to run analyze in order to get the optimizer to choose to use the index. I thought that 'create index' also updated pg_stats?\n\nMaarten\n\n----\n\nMaarten Boekhold, maarten.boekhold@reuters.com\n\nReuters Consulting\nDubai Media City\nBuilding 1, 5th Floor\nPO Box 1426\nDubai, United Arab Emirates\ntel:+971(0)4 3918300 ext 249\nfax:+971(0)4 3918333\nmob:+971(0)505526539\n\n\n------------------------------------------------------------- ---\n Visit our Internet site at http://www.reuters.com\n\nAny views expressed in this message are those of the individual\nsender, except where the sender specifically states them to be\nthe views of Reuters Ltd.\n\n", "msg_date": "Thu, 18 Apr 2002 11:17:34 +0400", "msg_from": "Maarten.Boekhold@reuters.com", "msg_from_op": true, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" }, { "msg_contents": "On Thu, 18 Apr 2002 Maarten.Boekhold@reuters.com wrote:\n\n> \n> On 04/17/2002 01:44:46 PM Michael Loftis wrote:\n> > In many of the cases where it is a primary key it is also there to\n> > ensure fast lookups when referenced as a foreign key. Or for joins.\n> \n> Don't know if the optimizer takes this into consideration, but a query that uses a primary and/or unique key in the where-clause, should always choose to use\n> the related indices (assuming the table size is above a certain threshold). 
Since a primary key/unique index always restricts the resultset to a single row.....\n\nI don't think so.\n\neg. table with primary key \"pk\", taking values from 1 to 1000000 (so\n1000000 records)\n\nselect * from table where pk > 5\n\nshould probably not use the index ...\n\nCheers\nTycho\n\n-- \nTycho Fruru\t\t\ttycho.fruru@conostix.com\n\"Prediction is extremely difficult. Especially about the future.\"\n - Niels Bohr\n\n", "msg_date": "Thu, 18 Apr 2002 10:41:15 +0200 (CEST)", "msg_from": "tycho@fruru.com", "msg_from_op": false, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" } ]
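The selectivity point in the reply above can be made concrete with EXPLAIN. A sketch against a hypothetical table; note that `generate_series()` is a later addition to PostgreSQL, so the population step is illustrative only:

```sql
CREATE TABLE t (pk integer PRIMARY KEY);   -- implicitly creates a unique index
INSERT INTO t SELECT * FROM generate_series(1, 1000000);
ANALYZE t;

-- At most one matching row: an index scan is the plan one would expect.
EXPLAIN SELECT * FROM t WHERE pk = 5;

-- Matches nearly every row: one sequential pass is cheaper than ~a million
-- random index probes, so a seq scan is the plan one would expect.
EXPLAIN SELECT * FROM t WHERE pk > 5;
```

So the presence of a primary-key index guarantees an index scan only for the highly selective case; for the range predicate the planner is right to prefer the sequential scan.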
[ { "msg_contents": "gistPageAddItem and gist_tuple_replacekey are commented out by GIST_PAGEADDITEM. \nThey were used up to 7.1, but are now unusable.\n\ngistPageAddItem has an interesting feature: it recompresses the entry before \nwriting it to disk, but we (with Oleg and Tom) couldn't find any reason to do \nthis. So we have left the code in place to think about later.\n\nNow gistPageAddItem is wrong, because it can work only on single-key indexes; in \n7.2, GiST supports multi-key indexes.\n\n\n\n> I haven't see any comment on this. If no one replies, would you send\n> over a patch of fixes? Thanks.\n> \n> ---------------------------------------------------------------------------\n> \n> Dmitry Tkach wrote:\n> \n>>I was trying to write a gist index extension, and, after some debugging,\n>>it looks like I found a bug somewhere in the gist.c code ...\n>>I can't be quite sure, because I am not familiar with the postgres\n>>code... but, here is what I see happenning (this is 7.1, but I compared\n>>the sources to 7.2, and did not see this fixed - although, I did not\n>>inspect it too carefully)...\n>>\n>>First of all, gistPageAddItem () calls gistdentryinit() with a pointer\n>>to what's stored in the tuple, so, 'by-value' types do not work (because\n>>gistcentryinit () would be passed the value itself, when called from\n>>gistinsert(), and then, in gistPageAddItem (), it is passed a pointer,\n>>coming from gistdentryinit () - so, it just doesn't know really how to\n>>treat the argument)...\n>>\n>>Secondly, gist_tuple_replacekey() seems to have incorrect logic figuring\n>>out if there is enough space in the tuple (it checks for '<', instead of\n>>'<=') - this causes a new tuple to get always created (this one, seems\n>>to be fixed in 7.2)\n>>\n>>Thirdly, gist_tuple_replace_key () sends a pointer to entry.pred (which\n>>is already a pointer to the actual value) to index_formtuple (), that\n>>looks at the tuple, sees that the type is 'pass-by-value', and puts that\n>>pointer directly into 
the tuple, so that, the resulting tuple now\n>>contains a pointer to a pointer to the actual value...\n>>\n>>Now, if more then one split is required, this sequence is repeated again\n>>and again and again, so that, by the time the tuple gets actually\n>>written, it contains something like a pointer to a pointer to a pointer\n>>to a pointer to the actual data :-(\n>>\n>>Once again, I've seen some comments in the 7.2 branch about gists and\n>>pass-by-value types, but brief looking at the differences in the source\n>>did not make me conveinced that it was indeed fixed...\n>>\n>>Anyone knows otherwise?\n>>\n>>Thanks a lot!\n>>\n>>Dima\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 4: Don't 'kill -9' the postmaster\n>>\n>>\n> \n\n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n", "msg_date": "Thu, 18 Apr 2002 12:08:12 +0400", "msg_from": "Teodor Sigaev <teodor@stack.net>", "msg_from_op": true, "msg_subject": "Re: [SQL] A bug in gistPageAddItem()/gist_tuple_replacekey() ???\n (fwd)" }, { "msg_contents": "\nHere is a good example of why keeping old code around causes confusion. \nI encourage the GIST guys to remove the stuff they don't feel they will\never need. I know Tom may disagree. ;-)\n\n---------------------------------------------------------------------------\n\nTeodor Sigaev wrote:\n> gistPageAddItem and gist_tuple_replacekey are commented out by GIST_PAGEADDITEM. \n> They was used up to 7.1, but now it is unusable.\n> \n> gistPageAddItem has interesting feature: recompress entry before writing to \n> disk, but we (with Oleg and Tom) couldn't find any reasons to do it. And so, we \n> leave this code for later thinking about.\n> \n> Now gistPageAddItem is wrong, because it can work only in single-key indexes. In \n> 7.2 GiST supports multy-key index.\n> \n> \n> \n> > I haven't see any comment on this. If no one replies, would you send\n> > over a patch of fixes? 
Thanks.\n> > \n> > ---------------------------------------------------------------------------\n> > \n> > Dmitry Tkach wrote:\n> > \n> >>I was trying to write a gist index extension, and, after some debugging,\n> >>it looks like I found a bug somewhere in the gist.c code ...\n> >>I can't be quite sure, because I am not familiar with the postgres\n> >>code... but, here is what I see happenning (this is 7.1, but I compared\n> >>the sources to 7.2, and did not see this fixed - although, I did not\n> >>inspect it too carefully)...\n> >>\n> >>First of all, gistPageAddItem () calls gistdentryinit() with a pointer\n> >>to what's stored in the tuple, so, 'by-value' types do not work (because\n> >>gistcentryinit () would be passed the value itself, when called from\n> >>gistinsert(), and then, in gistPageAddItem (), it is passed a pointer,\n> >>coming from gistdentryinit () - so, it just doesn't know really how to\n> >>treat the argument)...\n> >>\n> >>Secondly, gist_tuple_replacekey() seems to have incorrect logic figuring\n> >>out if there is enough space in the tuple (it checks for '<', instead of\n> >>'<=') - this causes a new tuple to get always created (this one, seems\n> >>to be fixed in 7.2)\n> >>\n> >>Thirdly, gist_tuple_replace_key () sends a pointer to entry.pred (which\n> >>is already a pointer to the actual value) to index_formtuple (), that\n> >>looks at the tuple, sees that the type is 'pass-by-value', and puts that\n> >>pointer directly into the tuple, so that, the resulting tuple now\n> >>contains a pointer to a pointer to the actual value...\n> >>\n> >>Now, if more then one split is required, this sequence is repeated again\n> >>and again and again, so that, by the time the tuple gets actually\n> >>written, it contains something like a pointer to a pointer to a pointer\n> >>to a pointer to the actual data :-(\n> >>\n> >>Once again, I've seen some comments in the 7.2 branch about gists and\n> >>pass-by-value types, but brief looking at the differences in 
the source\n> >>did not make me conveinced that it was indeed fixed...\n> >>\n> >>Anyone knows otherwise?\n> >>\n> >>Thanks a lot!\n> >>\n> >>Dima\n> >>\n> >>\n> >>---------------------------(end of broadcast)---------------------------\n> >>TIP 4: Don't 'kill -9' the postmaster\n> >>\n> >>\n> > \n> \n> \n> -- \n> Teodor Sigaev\n> teodor@stack.net\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Apr 2002 11:36:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] A bug in gistPageAddItem()/gist_tuple_replacekey()" } ]
[ { "msg_contents": "hi,\ncan anyone explain me why there are different query plans for \"select ...\nfrom ... where y!=x\" and \"select ... from ... where y<x or y>x\" for\nintegers, please?\nsee the details below...\n\nthanks,\nkuba\n\ndb_cen7=# analyze;\nANALYZE\n\ndb_cen7=# \\d ts19\n Table \"ts19\"\n Column | Type | Modifiers\n-----------+--------------------------+--------------------------------------------------------\n ts19pk___ | integer | not null default\nnextval('\"ts19_ts19pk____seq\"'::text)\n ts19datum | timestamp with time zone | not null\n ts19zavaz | integer | not null\n ts19cislo | integer | not null\n ts19text_ | character varying(65536) | not null\n ts19idpri | integer | not null\nIndexes: ts19_ts19zavaz_idx\nPrimary key: ts19_pkey\n\ndb_cen7=# explain analyze select * from ts19 where ts19zavaz != 7 order by\nts19pk___ desc limit 10;\nNOTICE: QUERY PLAN:\n\nLimit (cost=89635.63..89635.63 rows=1 width=38) (actual\ntime=50868.17..50868.18 rows=10 loops=1)\n -> Sort (cost=89635.63..89635.63 rows=1 width=38) (actual\ntime=50868.16..50868.17 rows=11 loops=1)\n -> Seq Scan on ts19 (cost=0.00..89635.62 rows=1 width=38)\n(actual time=95.99..50852.34 rows=300 loops=1)\nTotal runtime: 50868.27 msec\n\ndb_cen7=# explain analyze select * from ts19 where ts19zavaz < 7 or\nts19zavaz > 7 order by ts19pk___ desc limit 10;\nNOTICE: QUERY PLAN:\n\nLimit (cost=4.04..4.04 rows=1 width=38) (actual time=1118.28..1118.29\nrows=10 loops=1)\n -> Sort (cost=4.04..4.04 rows=1 width=38) (actual\ntime=1118.27..1118.28 rows=11 loops=1)\n -> Index Scan using ts19_ts19zavaz_idx, ts19_ts19zavaz_idx on\nts19 (cost=0.00..4.03 rows=1 width=38) (actual time=0.03..1117.58\nrows=300 loops=1)\nTotal runtime: 1118.40 msec\n\nthe runtime times depends on the machine load but generally the second\nquery is much faster...\n\nmore info:\n\ndb_cen7=# select count(*) from ts19;\n count\n---------\n 4190527\n(1 row)\n\ndb_cen7=# select distinct(ts19zavaz) from ts19;\n 
ts19zavaz\n-----------\n 3\n 7\n(2 rows)\n\ndb_cen7=# select count(*) from ts19 where ts19zavaz = 3;\n count\n-------\n 300\n(1 row)\n\ndb_cen7=# select version();\n version\n---------------------------------------------------------------\n PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.95.4\n(1 row)\n\n\n", "msg_date": "Thu, 18 Apr 2002 13:34:46 +0200 (CEST)", "msg_from": "Jakub Ouhrabka <jouh8664@ss1000.ms.mff.cuni.cz>", "msg_from_op": true, "msg_subject": "another optimizer question" }, { "msg_contents": "Jakub Ouhrabka <jouh8664@ss1000.ms.mff.cuni.cz> writes:\n> can anyone explain me why there are different query plans for \"select ...\n> from ... where y!=x\" and \"select ... from ... where y<x or y>x\" for\n> integers, please?\n\n!= isn't an indexable operation. This is not the planner's fault, but\na consequence of the index opclass design we inherited from Berkeley.\nI suppose we could make it an indexable operation --- but there are so\nfew cases where it'd be a win that I'm not excited about it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Apr 2002 10:31:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: another optimizer question " } ]
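To restate Tom's answer with the queries from the thread: `!=` belongs to no index operator class, so the first form cannot use `ts19_ts19zavaz_idx` at all, while each arm of the OR in the second form is an indexable range condition (the two forms are equivalent here because `ts19zavaz` is declared NOT NULL):

```sql
-- Not indexable: != forces a sequential scan over all ~4.2 million rows.
SELECT * FROM ts19 WHERE ts19zavaz != 7
ORDER BY ts19pk___ DESC LIMIT 10;

-- Indexable: each comparison is a range condition the index can satisfy,
-- and their union returns the same 300 rows.
SELECT * FROM ts19 WHERE ts19zavaz < 7 OR ts19zavaz > 7
ORDER BY ts19pk___ DESC LIMIT 10;
```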
[ { "msg_contents": "Using latest CVS sources with Linux 2.4 i586:\n\nComparing using domains versus traditional explicit field types.\nHere's the control test:\n\ntest=# create table t1 (f varchar(5) not null);\nCREATE\ntest=# insert into t1 values ('2');\nINSERT 16626 1\ntest=# select * from t1 where f='2';\n f\n---\n 2\n(1 row)\n\n\nIf I create a VARCHAR domain, everything works as expected.\n\ntest=# create domain typ varchar(5) not null;\nCREATE DOMAIN\ntest=# create table t2 (f typ);\nCREATE\ntest=# insert into t2 values ('2');\nINSERT 16627 1\ntest=# select * from t2 where f='2';\n f\n---\n 2\n(1 row)\n\n\nHere's a control test for the same thing, except with CHAR:\n\ntest=# create table t1 (f char(5) not null);\nCREATE\ntest=# insert into t1 values ('2');\nINSERT 16639 1\ntest=# select * from t1 where f='2';\n f\n-------\n 2\n(1 row)\n\n\nHowever, if I create a CHAR domain, I'm unable to query the value from the\ntable:\n\ntest=# create domain typ char(5) not null;\nCREATE DOMAIN\ntest=# create table t2 (f typ);\nCREATE\ntest=# insert into t2 values ('2');\nINSERT 16640 1\ntest=# select * from t2 where f='2';\n f\n---\n(0 rows)\n\n\nEven if I coerce the value to the correct domain:\n\ntest=# select * from t2 where f='2'::typ;\n f\n---\n(0 rows)\n\n\nHowever, this works:\n\ntest=# select * from t2 where f='2'::char;\n f\n-------\n 2\n(1 row)\n\n\nIs this a bug? Is this correct behavior? Am I misunderstanding this?\n\nThanks!\n\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n", "msg_date": "Thu, 18 Apr 2002 09:18:59 -0400", "msg_from": "\"Joel Burton\" <joel@joelburton.com>", "msg_from_op": true, "msg_subject": "Bug or misunderstanding w/domains in 7.3devel?" } ]
[ { "msg_contents": "first comment :\n\n* a special directory with ./contrib/gpl ?\n\nsecond comment : \n\n* I don't really understand your position regarding the GNU General Public \n License. The GPL is offering multiple advantages for a big project and \n software like PostgreSQL. For example : \n\n * Contribution back to the main tree is easier if it is redistributed. \n (like HP and the Samba team are doing; copyright holder remains the Samba team) \n\n * Easier to get an RF (Royalty Free) license from a patent \n owner. (this is a guarantee for the owner that it will not go back to \n proprietary software where it's not an RF license) (like the \n UB-Trees)\n\n * A possible bigger audience.\n\nDual licensing is also an alternative but could be a real mess. \n\nIt's just an idea. \n\nalx\n\n\n-- \nAlexandre Dulaunoy\t\t\tadulau@conostix.com\n\t\t\t\t\thttp://www.conostix.com/\n\n", "msg_date": "Thu, 18 Apr 2002 17:02:43 +0200 (CEST)", "msg_from": "Alexandre Dulaunoy <adulau@conostix.com>", "msg_from_op": true, "msg_subject": "Re: new food for the contrib/ directory" }, { "msg_contents": "Alexandre Dulaunoy <adulau@conostix.com> writes:\n\n> first comment :\n> \n> * a special directory with ./contrib/gpl ?\n\nDoesn't really change anything.\n\n> second comment : \n> \n> * I don't really understand your position regarding the GNU General Public \n> License. The GPL is offering multiple advantages for a big project and \n> software like PostgreSQL. For example : \n\nNot open for discussion. 
See the FAQ.\n\n-Doug\n-- \nDoug McNaught Wireboard Industries http://www.wireboard.com/\n\n Custom software development, systems and network consulting.\n Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...\n", "msg_date": "18 Apr 2002 11:13:08 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: new food for the contrib/ directory" }, { "msg_contents": "On 18 Apr 2002, Doug McNaught wrote:\n\n> Alexandre Dulaunoy <adulau@conostix.com> writes:\n> \n> > first comment :\n> > \n> > * a special directory with ./contrib/gpl ?\n> \n> Doesn't really change anything.\n> \n> > second comment : \n> > \n> > * I don't really understand your position regarding the GNU General Public \n> > License. The GPL is offering multiple advantages for a big project and \n> > software like PostgreSQL. For example : \n> \n> Not open for discussion. See the FAQ.\n\nI love that type of response ;-)\n\nYes, I have read the FAQ. Section 1.2 does not explain why the modified \nBerkeley-style BSD license was chosen. There is only a response: \"because \nit is like that...\" \n\nI have also read that : \nhttp://archives.postgresql.org/pgsql-general/2000-07/msg00210.php\n\nMy question is more regarding the recent issue of RF license for some \nspecific patents. As described in my previous message, a \"copyleft\"-type \nlicense has some advantages around the RF licensing issue. \n\nCould you extend the FAQ (1.2) with more arguments? \n\nThanks a lot for the excellent software. \n\n\nalx\n\n\n\n> \n> -Doug\n> \n\n-- \nAlexandre Dulaunoy\t\t\tadulau@conostix.com\n\t\t\t\t\thttp://www.conostix.com/\n\n\n", "msg_date": "Thu, 18 Apr 2002 17:47:15 +0200 (CEST)", "msg_from": "Alexandre Dulaunoy <adulau@conostix.com>", "msg_from_op": true, "msg_subject": "Re: new food for the contrib/ directory" }, { "msg_contents": "Alexandre Dulaunoy wrote:\n> > Not open for discussion. See the FAQ.\n> \n> I love that type of respond ;-)\n> \n> Yes, I have read the faq. 
The 1.2 is not responding why the modified \n> Berkeley-style BSD license was choosen. There is only a respond :\"because \n> is like that...\" \n> \n> I have also read that : \n> http://archives.postgresql.org/pgsql-general/2000-07/msg00210.php\n> \n> My question is more regarding the recent issue of RF license for some \n> specific patents. As described in my previous message, \"copyleft\" type \n> license has some advantages around the RF licensing issue. \n\nYes, GPL has advantages, but it does prevent non-source distributions. \nYou can say that is not a problem, but not everyone agrees.\n\n> Could you extend the FAQ (1.2) with more arguments ? \n\nNo. The discussion thread was painful enough. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Apr 2002 11:53:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: new food for the contrib/ directory" }, { "msg_contents": "Alexandre Dulaunoy <adulau@conostix.com> writes:\n> * I don't really understand your position regarding the GNU General Public \n> License. The GPL is offering multiple advantages for a big project and \n> software like PostgreSQL.\n\nEvery month or two a newbie pops up and asks us why Postgres isn't GPL.\nThe short answer is that we like the BSD license and that's how Berkeley\nreleased it originally. We have no interest in changing it even if we\ncould (which we can't).\n\nIf you want a longer answer, consult the mailing list archives; there\nhave been numerous extended threads on this topic. Most of us are\npretty tired of it by now :-(\n\nThe question of whether to accept GPL'd contrib modules is less\nclear-cut (obviously, since it's been done in the past). But we've\nconcluded that it just muddies the water to have GPL'd code in the\ndistribution. 
Contrib authors who really prefer GPL have other avenues\nto distribute their code.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Apr 2002 12:01:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new food for the contrib/ directory " }, { "msg_contents": "Centuries ago, Nostradamus foresaw when adulau@conostix.com (Alexandre Dulaunoy) would write:\n> On 18 Apr 2002, Doug McNaught wrote:\n>\n>> Alexandre Dulaunoy <adulau@conostix.com> writes:\n>> \n>> > first comment :\n>> > \n>> > * a special directory with ./contrib/gpl ?\n>> \n>> Doesn't really change anything.\n>> \n>> > second comment : \n>> > \n>> > I don't really understand your position regarding the GNU General\n>> > Public License. The GPL is offering multiple advantages for a big\n>> > project and software like PostgreSQL. For example :\n>> \n>> Not open for discussion. See the FAQ.\n>\n> I love that type of respond ;-)\n>\n> Yes, I have read the faq. The 1.2 is not responding why the modified \n> Berkeley-style BSD license was choosen. 
There is only a respond :\"because \n> is like that...\" \n\nDifferent people consider there to be different reasons for the\nBSD-style license to be preferable.\n\nDiscussion of the matter tends to start up flame wars, and basically\nwastes peoples' time.\n\nThose two factors are actually sufficient all by themselves to suggest\nthat \"Because the developers prefer it\" is a quite sufficient\nresponse.\n\n- There are likely some people that dislike the GPL because RMS wrote\n  it; having a discussion about that guarantees a flame war.\n\n- There are likely some people who consider the somewhat \"viral\"\n  provisions of the GPL to be a Bad Thing; having a discussion about\n  that guarantees a flame war.\n\n- There are likely people who prefer the notion that they can, if they\n  need to, integrate PostgreSQL with their own other code, and not\n  have any need to conform to the requirements of the GPL.\n\n- There are likely people who prefer not to need to conform to the\n  requirements of the GPL. \n\nAll of these are eminently \"flameworthy\" topics where different people\nlegitimately have different positions on their merits. Holding a\ndiscussion guarantees leaping into one or another of the \"flames,\" or\nperhaps others I've not thought to mention.\n\nThe simplest answer _definitely_ is to say \"See the FAQ; it says as\nmuch as needs to be said.\"\n\nIf you want to contribute code to a GPLed database system, you are\nentirely free to do so; options include:\n - MySQL (maybe, sorta)\n - SAP-DB\n - GNU SQL\n - Aubit 4GL\n - McKoi SQL\n-- \n(reverse (concatenate 'string \"moc.enworbbc@\" \"enworbbc\"))\nhttp://www3.sympatico.ca/cbbrowne/sgml.html\nRules of the Evil Overlord #209. \"I will not, under any circumstances,\nmarry a woman I know to be a faithless, conniving, back-stabbing witch\nsimply because I am absolutely desperate to perpetuate my family\nline. 
Of course, we can still date.\" <http://www.eviloverlord.com/>\n", "msg_date": "Thu, 18 Apr 2002 12:11:13 -0400", "msg_from": "Christopher Browne <cbbrowne@acm.org>", "msg_from_op": false, "msg_subject": "Re: new food for the contrib/ directory" }, { "msg_contents": "...\n> Thanks a lot for the excellent software.\n\nMy personal view is that one might consider using the same BSD license\nas PostgreSQL itself as a gesture of appreciation for the software you\nare using. Contribute or not, it is your choice. But if you are\nbenefiting from the software (and lots of folks are) then why not take\nthe \"big risk\" of contributing back with a similar license?\n\nRegards.\n\n - Thomas\n", "msg_date": "Thu, 18 Apr 2002 09:34:18 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: new food for the contrib/ directory" }, { "msg_contents": "On Thu, 18 Apr 2002, Alexandre Dulaunoy wrote:\n\n> Yes, I have read the faq. The 1.2 is not responding why the modified\n> Berkeley-style BSD license was choosen. There is only a respond :\"because\n> is like that...\"\n\nYou would have to ask the Regents of the University of California at\nBerkeley, not us. You would also have to ask them for permission to\nchange the licensing for the parts of Postgres that they contributed;\nsince they own the copyright, nobody else, not even the Postgresql\nproject, can change the licensing.\n\nIt might be good to make this a bit more clear in the FAQ. As well, you\nmight wish to add some information in light of the following:\n\nAs a NetBSD developer, I'd like to point out that the experience of the\nNetBSD project has been that having multiple licenses in a system is\nvery expensive and makes releases a nightmare, if you're really going\nto do it \"right.\" Just finding all of the licenses in the system is an\narduous and time-consuming job. 
People using Postgres in many commercial\nsituations will save real dollars if everything is under one license.\n\nNote also that one of the big problems we experienced was with clause\nthree of BSD-style licenses (the attribution clause). If you change the\nname in clause three, you have a different license, and you may have\nproblems. That was the biggest factor contributing to massive license\nproliferation in the NetBSD tree. Personally, I think clause three is\nbest left out altogether, though I doubt it's changeable for files still\nincluding Berkeley source.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Fri, 19 Apr 2002 16:14:37 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: new food for the contrib/ directory" }, { "msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> Note also that one of the big problems we experienced was with clause\n> three of BSD-style licenses (the attribution clause).\n\nFortunately, Berkeley had already stopped using the advertising clause\nwhen they tossed Postgres over the fence. Our version does not have\nit (see ~/COPYRIGHT).\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Apr 2002 09:59:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new food for the contrib/ directory " } ]
[ { "msg_contents": "I'm sorry, I realized after posting this that it went to the wrong\nlist, I resent it to pgsql-sql instead.\n\nBut basically I haven't done any ANALYZE or EXPLAIN yet because of the \nfact that the order -is- making a difference so it can't be executing\nthe same query inside the database engine. Given that, I figured I would \njust look for theories on how to rewrite the queries before submitting first.\n\nbtw, we are using postgresql 7.1.2 compiled from source on rh linux 7.0.\n\nI also might not have been very clear about the fact that the user is\nbasically constructing the query dynamically in the application, so it's\nnot a matter of just optimizing any specific query, but any possible query.\n\nTom Lane [tgl@sss.pgh.pa.us] wrote:\n> Dav Coleman <dav@serve.com> writes:\n> > I have noticed that the semantically equivalent SQL queries can\n> > differ\n> > vastly in speed performance depending on the order of clauses ANDed\n> > together ( \"WHERE cond1 AND cond2\" takes forever, but \"WHERE cond2\n> > AND cond1\" comes right back).\n> \n> Could we see a specific example?\n> \n> It would also be useful to know what PG version you are using, whether\n> you've VACUUM ANALYZEd the tables, and what EXPLAIN has to say about\n> your query.\n> \n> \t\t\tregards, tom lane\n\n-- \nDav Coleman\nhttp://www.danger-island.com/dav/\n", "msg_date": "Thu, 18 Apr 2002 09:07:09 -0700", "msg_from": "Dav Coleman <dav@danger-island.com>", "msg_from_op": true, "msg_subject": "Re: [SQL] SQL Query Optimization" }, { "msg_contents": "Dav Coleman <dav@danger-island.com> writes:\n> But basically I haven't done any ANALYZE or EXPLAIN yet because of the \n> fact that the order -is- making a difference so it can't be executing\n> the same query inside the database engine.\n\nIf you haven't ever done VACUUM ANALYZE then the planner is flying\ncompletely blind as to table sizes and data distributions. 
This would\n(among other things) very possibly allow different plans to be estimated\nas exactly the same cost --- since all the cost numbers will be based on\nexactly the same default statistics. So it's not surprising that you'd\nget an arbitrary choice of plans depending on trivial details like\nWHERE clause order.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Apr 2002 12:24:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] SQL Query Optimization " } ]
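Tom's point as a concrete sequence (the `orders` table and its WHERE clauses are hypothetical): with no statistics collected, different plans can be estimated at exactly the same cost, and trivia such as clause order then breaks the tie arbitrarily.

```sql
-- Collect table sizes and data distributions for the planner.
VACUUM ANALYZE;

-- With real statistics, these semantically equivalent orderings should be
-- costed identically and yield the same plan.
EXPLAIN SELECT * FROM orders WHERE status = 'open' AND total > 100;
EXPLAIN SELECT * FROM orders WHERE total > 100 AND status = 'open';
```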
[ { "msg_contents": "Currently, the name of the ON SELECT rule for a view is defined to be\n\t'_RET' || viewname\ntruncated if necessary to fit in a NAME.\n\nI've just committed fixes to make rule names be per-relation instead\nof global, and it occurs to me that we could now get rid of this\nconvention. The select rule name could be the same for all views ---\n\"_RETURN\", say. This would simplify life in a number of places.\n\nA quick look at psql, pgaccess, etc suggests that a lot of places know\nthat view select rule names begin with _RET, but not that many are\ndependent on the rest of it. So I think this wouldn't break clients\ntoo badly.\n\nAny thoughts pro or con? I'm leaning towards changing it, but could be\npersuaded to leave well enough alone.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Apr 2002 16:20:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Names of view select rules" }, { "msg_contents": " No problem with that. Good idea IMHO.\n\n\nJan\n\nTom Lane wrote:\n> Currently, the name of the ON SELECT rule for a view is defined to be\n> \t'_RET' || viewname\n> truncated if necessary to fit in a NAME.\n> \n> I've just committed fixes to make rule names be per-relation instead\n> of global, and it occurs to me that we could now get rid of this\n> convention. The select rule name could be the same for all views ---\n> \"_RETURN\", say. This would simplify life in a number of places.\n> \n> A quick look at psql, pgaccess, etc suggests that a lot of places know\n> that view select rule names begin with _RET, but not that many are\n> dependent on the rest of it. So I think this wouldn't break clients\n> too badly.\n> \n> Any thoughts pro or con? 
I'm leaning towards changing it, but could be\n> persuaded to leave well enough alone.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n", "msg_date": "Thu, 18 Apr 2002 18:29:25 -0400 (EDT)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Names of view select rules" } ]
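For context on the client-side dependency mentioned above, here is a sketch of the sort of catalog query a client like psql or pgaccess might use to spot views by their ON SELECT rules (assuming the era's `pg_rewrite` columns `rulename` and `ev_class`); under the proposed change, the pattern match would reduce to an equality test against `_RETURN`:

```sql
-- Find relations whose rewrite rules carry the _RET prefix that marks
-- a view's ON SELECT rule under the old naming convention.
SELECT c.relname, r.rulename
FROM pg_rewrite r, pg_class c
WHERE r.ev_class = c.oid
  AND r.rulename LIKE '\\_RET%';
```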
[ { "msg_contents": "I've been thinking about exactly what to do with access privileges for\nnamespaces (a/k/a schemas). The SQL99 spec isn't much guidance, since\nas far as I can tell it doesn't have explicit privileges for schemas\nat all --- and in any case, since it identifies schemas and ownership,\nthe really interesting cases don't arise.\n\nHere is a straw-man definition --- any comments appreciated.\n\nWe'll define two privilege bits for namespaces/schemas: \"read\" and\n\"create\" (GRANT SELECT and GRANT INSERT seem like reasonable keyword\nchoices). \"Read\" controls the ability to look up objects within\nthat namespace --- it's similar to \"execute\" permission on directories\nin Unix. \"Create\" controls the ability to create new objects within\na namespace. As usual, superusers bypass these checks.\n\nThe initial state of the database will be: pg_catalog is world readable,\nbut has no create permissions; public has world read and create\npermissions; pg_toast has no permissions (you can't explicitly inspect\ntoast tables). Newly created schemas will initially have all permissions\nfor the owner, no permissions for anyone else. Whenever a pg_temp\nnamespace is created or recycled by a fresh backend, it will be set to be\nowned by the user running that backend, with all permissions for him and\nnone for anyone else.\n\nRenaming of an object is allowed to the owner of that object regardless of\nschema permissions. While we could invent an UPDATE privilege on schemas\nto control this, leaving it with the owner seems simpler.\n\nDeletion of an object is allowed either to the owner of the object, or to\nthe owner of the containing schema. (Without the latter provision, you\ncouldn't DROP a schema containing objects created by other people; which\nseems wrong.) 
Again, I'd rather keep this based on ownership than invent,\nsay, a DELETE privilege for schemas.\n\nIt's not quite clear what should happen if User A allows User B to create\nan object in a schema owned by A, but then revokes read access on that\nschema from B. Presumably, B can no longer access the object, even though\nhe still owns it. A would have the ability to delete the object under\nthese rules, but is that enough?\n\nOne of the things I'd like this mechanism to do is answer the request\nwe've heard so often about preventing users from creating new tables.\nIf the DBA revokes write access on the public namespace from a particular\nuser, and doesn't create a personal schema for that user, then under this\nproposal that user would have noplace to create tables --- except TEMP\ntables in his temp schema. Is that sufficient, or do the folks who want\nthis also want a way to prevent TEMP table creation?\n\nAnother thing that would be needed to prevent users from creating new\ntables is to prevent them from creating schemas for themselves. I am not\nsure how to handle that --- should the right to create schemas be treated\nas a user property (a column of pg_shadow), or should it be attached\nsomehow to the database (and if the latter, how)?\n\nAs sketched so far, the schema privilege bits would be the same for all\nobject types --- whether table, type, function, or operator, either you\ncan look it up (resp. 
create it) in a given namespace, or you can't.\nOffhand I see no need to distinguish different kinds of objects for this\npurpose; does anyone think differently?\n\nShould the owner of a database (assume he's not a superuser) have the\nright to drop any schema in his database, even if he doesn't own it?\nI can see arguments either way on that one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Apr 2002 19:14:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Schema (namespace) privilege details" }, { "msg_contents": "Hi Tom,\n\n> One of the things I'd like this mechanism to do is answer the request\n> we've heard so often about preventing users from creating new tables.\n> If the DBA revokes write access on the public namespace from a particular\n> user, and doesn't create a personal schema for that user, then under this\n> proposal that user would have noplace to create tables --- except TEMP\n> tables in his temp schema. Is that sufficient, or do the folks who want\n> this also want a way to prevent TEMP table creation?\n\nI can't think of a reason that temp tables should be prevented. Being able\nto prevent a user from creating permanent objects is good IMHO.\n\n> Another thing that would be needed to prevent users from creating new\n> tables is to prevent them from creating schemas for themselves. I am not\n> sure how to handle that --- should the right to create schemas be treated\n> as a user property (a column of pg_shadow), or should it be attached\n> somehow to the database (and if the latter, how)?\n\nConnecting this right to a database sounds like the right thing to do. 
(ISP\ncase: allow a user to do with his database whatever he wants, as long as he\nstays away from other databases) But I don't know a good way to do it...\n\n> Should the owner of a database (assume he's not a superuser) have the\n> right to drop any schema in his database, even if he doesn't own it?\n> I can see arguments either way on that one.\n\nI think that if he owns it, he should be able to control it... Someone\nowning a database should be responsible enough to manage it.\n\nI hope these comments can help you,\nSander.\n\n\n", "msg_date": "Fri, 19 Apr 2002 01:33:59 +0200", "msg_from": "\"Sander Steffann\" <sander@steffann.nl>", "msg_from_op": false, "msg_subject": "Re: Schema (namespace) privilege details" }, { "msg_contents": "> Should the owner of a database (assume he's not a superuser) have\nthe\n> right to drop any schema in his database, even if he doesn't own it?\n> I can see arguments either way on that one.\n\nGiven that you've chosen to allow the owner of a schema or the table\nto drop a table, it would be consistent to allow the owner of the\ndatabase, schema or table to drop the table.\n\nMuch as I'd tend to allow the owner of a trigger, the table it's on,\nthe schema, or the database to drop the trigger.\n\n\nTechnically if the owner of a database doesn't have permission to drop\na table, do they have permission to drop the database? In which case,\npg_dump, drop create table statement, drop db, create db, restore data\nwill accomplish the same thing. 
All we've done is make the process\nlong and drawn out.\n\n", "msg_date": "Thu, 18 Apr 2002 19:34:06 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Schema (namespace) privilege details" }, { "msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n>> Should the owner of a database (assume he's not a superuser) have the\n>> right to drop any schema in his database, even if he doesn't own it?\n>> I can see arguments either way on that one.\n\n> Given that you've chosen to allow the owner of a schema or the table\n> to drop a table, it would be consistent to allow the owner of the\n> database, schema or table to drop the table.\n\n> Much as I'd tend to allow the owner of a trigger, the table it's on,\n> the schema, or the database to drop the trigger.\n\nHmm, interesting analogy. I don't much like the idea of allowing a\nnon-owner of a table to drop a trigger; that could lead directly to\ndata consistency problems, etc. I was envisioning granting the\nschema owner the right to drop another user's table in toto --- but not\nto have ownership rights to mess with its innards.\n\nThat would suggest that a database owner should be allowed to drop a\nschema in toto, but not to selectively drop objects within it. Just\nas with a table, a schema might have some consistency requirements that\nwould be broken by zapping individual elements.\n\n\n> Technically if the owner of a database doesn't have permission to drop\n> a table, do they have permission to drop the database? In which case,\n> pg_dump, drop create table statement, drop db, create db, restore data\n> will accomplish the same thing. All we've done is make the process\n> long and drawn out.\n\nIf the owner is not superuser, he does not have the privileges to do\ndump and restore --- even if he can read everything to dump it, he\nwon't be allowed to recreate objects under other people's names. 
So\nthis analogy is faulty.\n\nHowever, the database owner definitely does have the right to drop the\nwhole database, so at some level he should have the right to drop\ncontained objects. The question is, how selectively can he do it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Apr 2002 19:48:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schema (namespace) privilege details " }, { "msg_contents": "Tom Lane writes:\n\n> We'll define two privilege bits for namespaces/schemas: \"read\" and\n> \"create\" (GRANT SELECT and GRANT INSERT seem like reasonable keyword\n> choices). \"Read\" controls the ability to look up objects within\n> that namespace --- it's similar to \"execute\" permission on directories\n> in Unix. \"Create\" controls the ability to create new objects within\n> a namespace. As usual, superusers bypass these checks.\n\nI think other databases actually use GRANT CREATE.\n\nAbout the read permission, I think that other databases use the rule that\nyou can \"see\" an object if and only if you have some sort of privilege on\nit. I see little reason to create an extra privilege to just see the\nexistence of objects.\n\n> It's not quite clear what should happen if User A allows User B to create\n> an object in a schema owned by A, but then revokes read access on that\n> schema from B. Presumably, B can no longer access the object, even though\n> he still owns it. A would have the ability to delete the object under\n> these rules, but is that enough?\n\nThat concern would be eliminated by the system above. B can still access\nanything it owns. 
If A doesn't like B anymore, just delete B's stuff in\nA's schemas.\n\n> One of the things I'd like this mechanism to do is answer the request\n> we've heard so often about preventing users from creating new tables.\n> If the DBA revokes write access on the public namespace from a particular\n> user, and doesn't create a personal schema for that user, then under this\n> proposal that user would have noplace to create tables --- except TEMP\n> tables in his temp schema. Is that sufficient, or do the folks who want\n> this also want a way to prevent TEMP table creation?\n\nMaybe the temp schema should be a permanent catalog entry. That way the\nDBA can revoke create access from it as a means to disallow users to\ncreate temp tables.\n\n> Another thing that would be needed to prevent users from creating new\n> tables is to prevent them from creating schemas for themselves. I am not\n> sure how to handle that --- should the right to create schemas be treated\n> as a user property (a column of pg_shadow), or should it be attached\n> somehow to the database (and if the latter, how)?\n\nAn aclitem[] column on pg_database seems like the most flexible solution\nto me.\n\n> Offhand I see no need to distinguish different kinds of objects for this\n> purpose; does anyone think differently?\n\nNot me.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 18 Apr 2002 20:00:17 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Schema (namespace) privilege details" }, { "msg_contents": "Tom Lane wrote:\n> We'll define two privilege bits for namespaces/schemas: \"read\" and\n> \"create\" (GRANT SELECT and GRANT INSERT seem like reasonable keyword\n> choices). \"Read\" controls the ability to look up objects within\n> that namespace --- it's similar to \"execute\" permission on directories\n> in Unix. \"Create\" controls the ability to create new objects within\n> a namespace. 
As usual, superusers bypass these checks.\n\nIf user1, the owner of schema1, creates a new table tab1, will user2 \n who has \"Read\" privilege to schema1, be automatically granted SELECT \nprivilege on tab1? Or will he be able to see that tab1 exists, but not \nselect from it (continuing the analogy with directories)?\n\n\n> \n> The initial state of the database will be: pg_catalog is world readable,\n> but has no create permissions; public has world read and create\n> permissions; pg_toast has no permissions (you can't explicitly inspect\n> toast tables). Newly created schemas will initially have all permissions\n> for the owner, no permissions for anyone else. Whenever a pg_temp\n> namespace is created or recycled by a fresh backend, it will be set to be\n> owned by the user running that backend, with all permissions for him and\n> none for anyone else.\n\nThis looks good to me. I only wonder if public should default to world \nread and no create?\n\n> Renaming of an object is allowed to the owner of that object regardless of\n> schema permissions. While we could invent an UPDATE privilege on schemas\n> to control this, leaving it with the owner seems simpler.\n\nAgreed.\n\n> \n> Deletion of an object is allowed either to the owner of the object, or to\n> the owner of the containing schema. (Without the latter provision, you\n> couldn't DROP a schema containing objects created by other people; which\n> seems wrong.) Again, I'd rather keep this based on ownership than invent,\n> say, a DELETE privilege for schemas.\n\nI'd agree with other posted comments -- db owner should also be \nessentially a superuser in their own db.\n\n\n\n> \n> It's not quite clear what should happen if User A allows User B to create\n> an object in a schema owned by A, but then revokes read access on that\n> schema from B. Presumably, B can no longer access the object, even though\n> he still owns it. 
A would have the ability to delete the object under\n> these rules, but is that enough?\n\nI like this. That way I can lock out a particular user if I need to with \na single command. Would A automatically get ALL privileges on objects \ncreated in his schema by others? I think he should.\n\n\n> \n> One of the things I'd like this mechanism to do is answer the request\n> we've heard so often about preventing users from creating new tables.\n> If the DBA revokes write access on the public namespace from a particular\n> user, and doesn't create a personal schema for that user, then under this\n> proposal that user would have noplace to create tables --- except TEMP\n> tables in his temp schema. Is that sufficient, or do the folks who want\n> this also want a way to prevent TEMP table creation?\n\nI think there should be a way to prevent temp table creation, but not \nset that way as the default. Presumably you could REVOKE INSERT on the \ntemp schema?\n\n\n> \n> Another thing that would be needed to prevent users from creating new\n> tables is to prevent them from creating schemas for themselves. I am not\n> sure how to handle that --- should the right to create schemas be treated\n> as a user property (a column of pg_shadow), or should it be attached\n> somehow to the database (and if the latter, how)?\n\nI think only the database owner should be able to create schemas in \ntheir own database. That way if I want a user to be able to create \ntables, I just grant them CREATE in the public schema, or create a \nschema for them.\n\n\n> \n> As sketched so far, the schema privilege bits would be the same for all\n> object types --- whether table, type, function, or operator, either you\n> can look it up (resp. create it) in a given namespace, or you can't.\n> Offhand I see no need to distinguish different kinds of objects for this\n> purpose; does anyone think differently?\n> \n\nAgreed. 
How would it work though if say I wanted to create a view in the \npublic schema, which pointed at a table in a schema which has had SELECT \nrevoked? Same question for a public function/private table. It would be \nideal if you could do this.\n\n\n> Should the owner of a database (assume he's not a superuser) have the\n> right to drop any schema in his database, even if he doesn't own it?\n> I can see arguments either way on that one.\n> \n\nI think the database owner should be just like a superuser in his little \nworld. The db owner should be able to drop contained schemas or other \nobjects at will.\n\n\nJust my 2 cents.\n\nJoe\n\n", "msg_date": "Thu, 18 Apr 2002 17:02:05 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Schema (namespace) privilege details" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n>> We'll define two privilege bits for namespaces/schemas: \"read\" and\n>> \"create\" (GRANT SELECT and GRANT INSERT seem like reasonable keyword\n>> choices).\n\n> I think other databases actually use GRANT CREATE.\n\nOkay, I'm not picky about the keywords.\n\n> About the read permission, I think that other databases use the rule that\n> you can \"see\" an object if and only if you have some sort of privilege on\n> it. I see little reason to create an extra privilege to just see the\n> existence of objects.\n\nHm. That seems like it would not interact at all well with resolution\nof ambiguous functions and operators. In the first place, I don't want\nto execute a permission check for every candidate function/operator\nbefore I can assemble the list of candidates to be chosen among. (For\nexample, on every use of an '=' operator that would cost us seventy-three\npermissions checks, rather than one.) 
In the second place, that would\nmean that granting or revoking access to a particular operator could\nchange resolution decisions for *other* operators of the same name ---\nwhich is certainly surprising. In the third place, it's wrong to be\napplying permissions checks at parse-analysis time; they should be done\nat run-time. Otherwise rules have big problems. I realize that we have\nto apply the namespace permissions checks at parse time, but I don't\nwant to do it for ordinary objects.\n\n>> If the DBA revokes write access on the public namespace from a particular\n>> user, and doesn't create a personal schema for that user, then under this\n>> proposal that user would have noplace to create tables --- except TEMP\n>> tables in his temp schema. Is that sufficient, or do the folks who want\n>> this also want a way to prevent TEMP table creation?\n\n> Maybe the temp schema should be a permanent catalog entry. That way the\n> DBA can revoke create access from it as a means to disallow users to\n> create temp tables.\n\nHm, we could clone a prototype pg_temp schema entry as a means of\ngetting this set up, I suppose. But the first question should be is it\nworth troubling with?\n\n>> Another thing that would be needed to prevent users from creating new\n>> tables is to prevent them from creating schemas for themselves. I am not\n>> sure how to handle that --- should the right to create schemas be treated\n>> as a user property (a column of pg_shadow), or should it be attached\n>> somehow to the database (and if the latter, how)?\n\n> An aclitem[] column on pg_database seems like the most flexible solution\n> to me.\n\nYeah, I was afraid you would say that ;-). I'd prefer to avoid it\nbecause I think we'd need to have a TOAST table for pg_database then.\nAnd I'm not at all sure how to setup a shared toast table. 
Can we get\naway with constraining pg_database rows to 8K if they contain ACL lists?\n(We might get some benefit from compression of the ACL list, but\nprobably not a heck of a lot.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Apr 2002 20:10:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schema (namespace) privilege details " }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> If user1, the owner of the schema1, creates a new table tab1, will user2 \n> who has \"Read\" privilege to schema1, be automatically granted SELECT \n> privilege on tab1? Or will he be able to see that tab1 exists, but not \n> select from it (continuing the analogy with directories)?\n\nNo, and yes.\n\n> This looks good to me. I only wonder if public should default to world \n> read and no create?\n\nThat would be non-backwards-compatible. Since the main reason for\nhaving the public namespace at all is backwards compatibility of the\nout-of-the-box behavior, I think we have to let it default to world\nwrite. DBAs can revoke world write, or even remove the public namespace\naltogether, if they want to run a tighter ship.\n\n> I like this. That way I can lock out a particular user if I need to with \n> a single command. Would A automatically get ALL privileges on objects \n> created in his schema by others? I think he should.\n\nHmm, I'd argue not; see nearby messages. The analogy with Unix\ndirectory permissions seems to hold good here. 
If you are owner of\na directory you can delete files therein, but not necessarily do\nanything else with 'em.\n\n> I think only the database owner should be able to create schemas in \n> their own database.\n\nThat seems overly restrictive to me; it'd be the equivalent of getting\nrid of users that have createdb rights but aren't superusers.\n\nAlso, if a database owner is not superuser, I do not think he should be\nable to create objects that are marked as belonging to other users.\nAt least not in general. Do we need to make an exception for schemas?\n\n> Agreed. How would it work though if say I wanted to create a view in the \n> public schema, which pointed at a table in a schema which has had SELECT \n> revoked? Same question for a public function/private table. It would be \n> ideal if you could do this.\n\nAFAICS this would not be checked at creation time, but when someone\ntries to use the view; just the same as now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Apr 2002 20:17:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schema (namespace) privilege details " }, { "msg_contents": "On Fri, 2002-04-19 at 00:14, Tom Lane wrote:\n\n> It's not quite clear what should happen if User A allows User B to create\n> an object in a schema owned by A, but then revokes read access on that\n> schema from B. Presumably, B can no longer access the object, even though\n> he still owns it. A would have the ability to delete the object under\n> these rules, but is that enough?\n\nThen A should take over ownership. It would be like the expiry of a\nlease on a piece of land: any buildings erected by the lessee become the\nproperty of the landowner. (If this consequence was not desired, the\nobjects should not have been created in a database/schema outside the\nowner's control.)\n\n> Another thing that would be needed to prevent users from creating new\n> tables is to prevent them from creating schemas for themselves. 
I am not\n> sure how to handle that --- should the right to create schemas be treated\n> as a user property (a column of pg_shadow), or should it be attached\n> somehow to the database (and if the latter, how)?\n\nI think it could be both: a database owner may not want any schemas\ncreated by anyone else, or by some particular user; alternatively, the\nadministrator may not want a particular user to create any schemas\nanywhere. These are two different kinds of restriction:\n\n GRANT CREATE SCHEMA TO user | PUBLIC\n REVOKE CREATE SCHEMA FROM user | PUBLIC\n\nwould allow/disallow the user (other than the database owner) the\ntheoretical right to create a schema, whereas\n\n GRANT CREATE SCHEMA IN database TO user | PUBLIC\n REVOKE CREATE SCHEMA IN database FROM user | PUBLIC\n\nwould allow/disallow him it on a particular database. Having both gives\nmore flexibility and allows different people control for different\npurposes (suppose someone needs to pay for the privilege to create\nschemas in a variable set of databases; the general permission could be\nturned on or off according to whether the bill was paid.). A general\npermission would be needed before permission could be effective on a\nparticular database.\n\n\n> Should the owner of a database (assume he's not a superuser) have the\n> right to drop any schema in his database, even if he doesn't own it?\n> I can see arguments either way on that one.\n\nI think a database owner should be able to override the owner of a\nschema within the database; similarly a schema owner should be able to\noverride the owner of an object within the schema. 
This makes sense in\npractice, since the higher owner can delete the schema/object and\nrecreate it under his own ownership; so there is little point in not\nallowing him to change it directly.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"For I am persuaded, that neither death, nor life, nor \n angels, nor principalities, nor powers, nor things \n present, nor things to come, nor height, nor depth, \n nor any other creature, shall be able to separate us \n from the love of God, which is in Christ Jesus our \n Lord.\" Romans 8:38,39", "msg_date": "19 Apr 2002 01:25:22 +0100", "msg_from": "Oliver Elphick <olly@lfix.co.uk>", "msg_from_op": false, "msg_subject": "Re: Schema (namespace) privilege details" }, { "msg_contents": "On Fri, 2002-04-19 at 01:10, Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> \n> >> Another thing that would be needed to prevent users from creating new\n> >> tables is to prevent them from creating schemas for themselves. I am not\n> >> sure how to handle that --- should the right to create schemas be treated\n> >> as a user property (a column of pg_shadow), or should it be attached\n> >> somehow to the database (and if the latter, how)?\n> \n> > An aclitem[] column on pg_database seems like the most flexible solution\n> > to me.\n> \n> Yeah, I was afraid you would say that ;-). I'd prefer to avoid it\n> because I think we'd need to have a TOAST table for pg_database then.\n> And I'm not at all sure how to setup a shared toast table. Can we get\n> away with constraining pg_database rows to 8K if they contain ACL lists?\n> (We might get some benefit from compression of the ACL list, but\n> probably not a heck of a lot.)\n\nCreating schemas is not the kind of thing people do very frequently. \nWhy not simply normalise the relationship into another table? 
The extra\nexpense of the lookup would be insignificant in the total context of\nschema creation.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"For I am persuaded, that neither death, nor life, nor \n angels, nor principalities, nor powers, nor things \n present, nor things to come, nor height, nor depth, \n nor any other creature, shall be able to separate us \n from the love of God, which is in Christ Jesus our \n Lord.\" Romans 8:38,39", "msg_date": "19 Apr 2002 01:30:32 +0100", "msg_from": "Oliver Elphick <olly@lfix.co.uk>", "msg_from_op": false, "msg_subject": "Re: Schema (namespace) privilege details" }, { "msg_contents": "> > Another thing that would be needed to prevent users from creating\nnew\n> > tables is to prevent them from creating schemas for themselves. I\nam not\n> > sure how to handle that --- should the right to create schemas be\ntreated\n> > as a user property (a column of pg_shadow), or should it be\nattached\n> > somehow to the database (and if the latter, how)?\n>\n> I think only the database owner should be able to create schemas in\n> their own database. That way if I want a user to be able to create\n> tables, I just grant them CREATE in the public schema, or create a\n> schema for them.\n\nIf owners could be groups, I'd tend to agree. I'm tired of setting up\ngeneral admin logins and giving a group of people a single key for\ndoing system changes. Anytime someone has to leave the company we run\naround and issue new keys.\n\nI really want to allow a small group to have control of the\ndevelopment db but not in other DBs (other projects generally).\nGranting superuser status isn't appropriate. But, giving a group\ncontrol over an individual database (schema or otherwise) is extremely\nuseful. 
Production basically has the same thing but a different\ngroup -- who know enough not to touch anything without a patch and\nchange control being issued by development which has been approved by\nthe resident DBA.\n\nI'd really like to see a schema owner have full control over all\nobjects in a schema, and likewise a database owner have full control\nover their database. My POV for large systems.\n\n\n\nLet's look at small ones. Database usage in webhosting companies is on\nthe rise. With the changes to pg_hba.conf to allow specific users\naccess to specific databases it can now be easily sold as a part of a\nhosting package.\n\nFTP accounts on a server always have a master. Larger clients will\noften create a directory structure in such a way that various web\ndevelopers can work in various parts without having to worry about\naccidentally touching others' stuff. BUT the master account can still\noverride the entire set if necessary. They own the parent, they flip\npermissions to suit themselves if they're blocked by them.\n\n\nPostgresql needs something similar to be easily sold as a service.\nThe person actually paying for the DB installation would of course be\nthe owner of the DB.\n\nIn the event of a company, the buyer may allow others to do work\n(consultants? employee? friend?). They create a user, a schema and\nput the user to work. User does something they shouldn't and is\nremoved for it. Owner wants to clean up the mess or continue\nmaintenance. How do they do this? Owner isn't a superuser as\nthey're simply buying DB services from an Application hosting company.\nThey can't log in as the user as they don't have the password (user\ntook it with them). ** I forget whether changing ownership of an\nobject would require superuser access or just ownership of the parent\nobject. 
** So, they're left with calling the hosting company to clean\nup the mess for them (not something we'd want to do).\n\n\nWith Postgresql 7.3 the above is a likely scenario at the company I\nwork for as we would like to offer this type of service alongside the\nother DBs we currently host -- and it's very close to being feasible.\nWhat I need is a per DB superuser / supergroup which cannot do things\nlike drop database (even their own preferably as that ends in a tech\nsupport call to have it recreated), create untrusted procedures /\nlanguages, and other nerve-racking abilities.\n\nGiving the database owner, or better a group at the database level, an\nACL to accomplish any job within their own database (including user\ncreation -- but we can get around that with a control panel to do it\nfor them) that an otherwise untrusted user should be allowed to looks\nvery good to me.\n\n", "msg_date": "Thu, 18 Apr 2002 20:37:49 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Schema (namespace) privilege details" }, { "msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> [ how it ought to be to support hosting companies ]\n\nI'm not real comfortable with this. The design I proposed is based\nfairly firmly on the Unix directory/file protection model --- which\nis assuredly not perfect, but it's survived a lot of use and is not\nknown to have major flaws. You're suggesting that we should invent\na protection model off-the-cuff on the basis of the supposed needs\nof one class of application. I think that's a recipe for trouble...\n\n> I'd really like to see a schema owner have full control over all\n> objects in a schema, and likewise a database owner have full control\n> over their database. My POV for large systems.\n\nThose things are both easily done: just don't allow anyone else to\ncreate objects in your schema (resp. database). This is indeed what\nSQL99 envisions. 
However, in a database where there are multiple\nusers sharing schemas, I am not convinced that the notion \"the schema\nowner has ALL rights to objects within the schema\" is appropriate.\nThat seems to me to go way too far; if we are troubling to maintain\ndistinct ownership of objects within a schema, that should mean\nsomething. In particular, the guy who is not the schema owner should\nbe able to have some confidence that the guy who is can't make arbitrary\nchanges in his table. Otherwise the schema owner is effectively\nsuperuser, and what's the point of pretending he's not?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Apr 2002 21:19:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schema (namespace) privilege details " }, { "msg_contents": "Oliver Elphick wrote:\n> On Fri, 2002-04-19 at 00:14, Tom Lane wrote:\n> I think it could be both: a database owner may not want any schemas\n> created by anyone else, or by some particular user; alternatively, the\n> administrator may not want a particular user to create any schemas\n> anywhere. These are two different kinds of restriction:\n> \n> GRANT CREATE SCHEMA TO user | PUBLIC\n> REVOKE CREATE SCHEMA FROM user | PUBLIC\n> \n> would allow/disallow the user (other than the database owner) the\n> theoretical right to create a schema, whereas\n> \n> GRANT CREATE SCHEMA IN database TO user | PUBLIC\n> REVOKE CREATE SCHEMA IN database FROM user | PUBLIC\n> \n> would allow/disallow him it on a particular database. Having both gives\n> more flexibility and allows different people control for different\n> purposes (suppose someone needs to pay for the privilege to create\n> schemas in a variable set of databases; the general permission could be\n> turned on or off according to whether the bill was paid.). A general\n> permission would be needed before permission could be effective on a\n> particular database.\n\nI like this general idea and syntax. 
But it seems awkward to have to \nhave the privilege granted twice. What about:\n\n GRANT CREATE SCHEMA [IN { database | ALL }] TO user | PUBLIC\n REVOKE CREATE SCHEMA [IN { database | ALL }] FROM user | PUBLIC\n\nwhere lack of the IN clause implies the current database, and ALL \nimplies a system-wide grant/revoke. System-wide could only be issued by \na superuser, while a specific database command could be issued by the DB \nowner or a superuser.\n\n> \n>>Should the owner of a database (assume he's not a superuser) have the\n>>right to drop any schema in his database, even if he doesn't own it?\n>>I can see arguments either way on that one.\n> \n> \n> I think a database owner should be able to override the owner of a\n> schema within the database; similarly a schema owner should be able to\n> override the owner of an object within the schema. This makes sense in\n> practice, since the higher owner can delete the schema/object and\n> recreate it under his own ownership; so there is little point in not\n> allowing him to change it directly.\n\nYeah, I still feel that the owner of a \"container\" object like a \ndatabase or schema should have complete control of whatever is contained \ntherein. Anything else would strike me as surprising behavior.\n\nJoe\n\n", "msg_date": "Thu, 18 Apr 2002 18:24:18 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Schema (namespace) privilege details" }, { "msg_contents": "On Fri, 2002-04-19 at 02:24, Joe Conway wrote:\n> I like this general idea and syntax. But it seems awkward to have to \n> have the privilege granted twice. 
What about:\n> \n> GRANT CREATE SCHEMA [IN { database | ALL }] TO user | PUBLIC\n> REVOKE CREATE SCHEMA [IN { database | ALL }] FROM user | PUBLIC\n\nI would naturally interpret granting permission IN ALL to mean that the\nuser would certainly be allowed permission in all databases, whereas it\nought to be clear that the permission given is only hypothetical and\nsubject to permission's being granted for a specific database.\n\n> where lack of the IN clause implies the current database, and ALL \n> implies a system-wide grant/revoke. System-wide could only be issued by \n> a superuser, while a specific database command could be issued by the DB \n> owner or a superuser.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"For I am persuaded, that neither death, nor life, nor \n angels, nor principalities, nor powers, nor things \n present, nor things to come, nor height, nor depth, \n nor any other creature, shall be able to separate us \n from the love of God, which is in Christ Jesus our \n Lord.\" Romans 8:38,39", "msg_date": "19 Apr 2002 02:49:12 +0100", "msg_from": "Oliver Elphick <olly@lfix.co.uk>", "msg_from_op": false, "msg_subject": "Re: Schema (namespace) privilege details" }, { "msg_contents": "> I'm not real comfortable with this. The design I proposed is based\n> fairly firmly on the Unix directory/file protection model --- which\n> is assuredly not perfect, but it's survived a lot of use and is not\n> known to have major flaws. 
You're suggesting that we should invent\n\nWill we be able to accomplish the equivalent of the below?\n\n\nknight# ls -la\ntotal 3\ndrwxr-xr-x 2 rbt rbt 512 Apr 18 21:53 .\ndrwxr-xr-x 43 rbt rbt 2048 Apr 18 21:36 ..\n-rwx------ 1 root wheel 0 Apr 18 21:53 file\n\nknight# head /etc/group\n# $FreeBSD: src/etc/group,v 1.19.2.1 2001/11/24 17:22:24 gshapiro Exp\n$\n#\nwheel:*:0:root\ndaemon:*:1:daemon\nkmem:*:2:root\nsys:*:3:root\ntty:*:4:root\noperator:*:5:root\nmail:*:6:\nbin:*:7:\n\nknight# exit\nexit\n\nbash-2.05a$ whoami\nrbt\n\nbash-2.05a$ rm file\noverride rwx------ root/wheel for file? y\n\nbash-2.05a$ ls -la\ntotal 3\ndrwxr-xr-x 2 rbt rbt 512 Apr 18 21:55 .\ndrwxr-xr-x 43 rbt rbt 2048 Apr 18 21:36 ..\n\n\n> > I'd really like to see a schema owner have full control over all\n> > objects in a schema, and likewise a database owner have full\ncontrol\n> > over their database. My POV for large systems.\n\n> Those things are both easily done: just don't allow anyone else to\n> create objects in your schema (resp. database). This is indeed what\n\nYes, basically what we do now. 
I'm hoping to add the ability to\nenable a group (ROLES) to have ownership of items as well as users\nwhen I complete the other tasks I've set before myself.\n\n\n\n", "msg_date": "Thu, 18 Apr 2002 22:02:23 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Schema (namespace) privilege details " }, { "msg_contents": "> Will we be able to accomplish the equivelent of the below?\n>\n>\n> knight# ls -la\n> total 3\n> drwxr-xr-x 2 rbt rbt 512 Apr 18 21:53 .\n> drwxr-xr-x 43 rbt rbt 2048 Apr 18 21:36 ..\n> -rwx------ 1 root wheel 0 Apr 18 21:53 file\n>\n> knight# head /etc/group\n> # $FreeBSD: src/etc/group,v 1.19.2.1 2001/11/24 17:22:24 gshapiro Exp\n> $\n> #\n> wheel:*:0:root\n> daemon:*:1:daemon\n> kmem:*:2:root\n> sys:*:3:root\n> tty:*:4:root\n> operator:*:5:root\n> mail:*:6:\n> bin:*:7:\n>\n> knight# exit\n> exit\n>\n> bash-2.05a$ whoami\n> rbt\n>\n> bash-2.05a$ rm file\n> override rwx------ root/wheel for file? y\n>\n> bash-2.05a$ ls -la\n> total 3\n> drwxr-xr-x 2 rbt rbt 512 Apr 18 21:55 .\n> drwxr-xr-x 43 rbt rbt 2048 Apr 18 21:36 ..\n\nThat is, of course, a BSD-ism that would confuse a lot of the SysV people...\n:)\n\nChris\n\n", "msg_date": "Fri, 19 Apr 2002 10:07:12 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Schema (namespace) privilege details " }, { "msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> Will we be able to accomplish the equivelent of the below?\n\nI think what you're depicting is the equivalent of a schema owner\ndropping a table in his schema, right? Yes, I proposed allowing that,\nbut not granting the schema owner any other ownership rights over\ncontained tables. This is analogous to the way that ownership of a Unix\ndirectory lets you rm a contained file ... but not necessarily alter\nthat file in any way short of rm'ing it.\n\n> Yes, basically what we do now. 
I'm hoping to add the ability to\n> enable a group (ROLES) to have ownership of items as well as users\n> when I complete the other tasks I've set before myself.\n\nThat could be a good extension, but I think it's orthogonal to the\nimmediate issue...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Apr 2002 22:08:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schema (namespace) privilege details " }, { "msg_contents": "> That is, of course, a BSD-ism that would confuse a lot of the SysV\npeople...\n> :)\n\nYup.. But it's been around quite a while and I don't know of any\nhorrible problems with it -- that said I've not actually tried it on\nOpenBSD (different mindset) but would be surprised if it wasn't the\nsame.\n\nSure, it may not be the smartest thing to allow user Y to create a\ntable in my schema BUT if I decide to reverse that decision (for\nwhatever reason) I want to be able to drop the junk user Y littered\naround my schema along with the user even if I'm not allowed to look\nat it, use it or otherwise fiddle around with it.\n\nBut if I'm the only one who feels this way, so be it.\n\n", "msg_date": "Thu, 18 Apr 2002 22:22:19 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Schema (namespace) privilege details " }, { "msg_contents": "> > Will we be able to accomplish the equivelent of the below?\n>\n> I think what you're depicting is the equivalent of a schema owner\n> dropping a table in his schema, right? Yes, I proposed allowing\nthat,\n\nYes, thats what I was looking for. Sorry if I missed that in the\ninitial proposal.\n\n> > Yes, basically what we do now. 
I'm hoping to add the ability to\n> > enable a group (ROLES) to have ownership of items as well as users\n> > when I complete the other tasks I've set before myself.\n>\n> That could be a good extension, but I think it's orthogonal to the\n> immediate issue...\n\nYes it is.\n\n", "msg_date": "Thu, 18 Apr 2002 22:24:41 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Schema (namespace) privilege details " }, { "msg_contents": "Tom Lane wrote:\n>>This looks good to me. I only wonder if public should default to world \n>>read and no create?\n> \n> \n> That would be non-backwards-compatible. Since the main reason for\n> having the public namespace at all is backwards compatibility of the\n> out-of-the-box behavior, I think we have to let it default to world\n> write. DBAs can revoke world write, or even remove the public namespace\n> altogether, if they want to run a tighter ship.\n\nAh yes, I forgot about that aspect.\n\n> \n> Also, if a database owner is not superuser, I do not think he should be\n> able to create objects that are marked as belonging to other users.\n> At least not in general. Do we need to make an exception for schemas?\n> \n\nWell, I like to think of the database owner as the superuser within that \none database. This is similar to (at least) SQL Server and Oracle. But I \ndon't think either of those systems have quite this issue because the \nnotion of schema and login user are so tightly coupled, something you \nwere specifically trying to avoid ;-)\n\n\n\n> \n>>Agreed. How would it work though if say I wanted to create a view in the \n>>public schema, which pointed at a table in a schema which has had SELECT \n>>revoked? Same question for a public function/private table. 
It would be \n>>ideal if you could do this.\n> \n> \n> AFAICS this would not be checked at creation time, but when someone\n> tries to use the view; just the same as now.\n\nGreat!\n\nThanks,\n\nJoe\n\n\n\n", "msg_date": "Thu, 18 Apr 2002 21:49:44 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Schema (namespace) privilege details" }, { "msg_contents": "I said:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n>> An aclitem[] column on pg_database seems like the most flexible solution\n>> to me.\n\n> Yeah, I was afraid you would say that ;-). I'd prefer to avoid it\n> because I think we'd need to have a TOAST table for pg_database then.\n> And I'm not at all sure how to setup a shared toast table. Can we get\n> away with constraining pg_database rows to 8K if they contain ACL lists?\n\nAfter further thought, ACLs in pg_database are clearly the right way to\ngo, and we shouldn't let some possible implementation ugliness stop us.\nI think we can probably get away without a TOAST table for the time\nbeing, but if we get lots of squawks some way can be found to make one\nhappen.\n\nSo my second pass at a proposal goes like this:\n\nSchemas (namespaces) have two grantable rights: SELECT allows looking up\nobjects within the namespace, and CREATE allows creating new objects\nwithin the namespace. A newly created schema allows both rights to its\nowner and none to anyone else. The predefined schemas have rights as\npreviously stated.\n\nDatabases have two grantable rights: CREATE allows creating new regular\n(permanent) schemas within the database, while TEMP allows creation of\na temp schema (and thus temp tables). A new database will initially\nallow both these rights to world. I am inclined to think that template1\nshould have both rights turned off, however, to prevent the common\nI-created-a-lot-of-trash-in-template1 error. (Not that this will help,\nif you do it as superuser. 
So maybe it's not worth the trouble.)\n\nTo delete an object you must be either owner of that object or owner of\nits containing namespace. (Ownership of a namespace doesn't grant any\nother ownership rights over contained objects.) You will need SELECT\nrights on the namespace to look up the object in the first place, but\nthere's no specific namespace-level right associated with deletion.\n\nTo delete a namespace you must be either owner of the namespace or owner\nof the database. All contained objects are dropped. (The database\nowner can thus drop things he does not own, but only as part of deleting\na whole namespace.)\n\nRenaming an object is a right reserved to the object owner. Possibly\nwe should also check that the owner (still) has CREATE rights in the\ncontaining namespace; any thoughts there? Should we allow renaming\nto move an object from one namespace to another?\n\nSimilarly, renaming a namespace is reserved to the namespace owner,\nand perhaps should require that he (still) have schema CREATE rights.\n\n\nBTW, it occurs to me that once we have ACLs on pg_database entries,\nwe could define a CONNECT right for databases, and then eliminate\nmost of the complexity of pg_hba.conf in favor of GRANT/REVOKE CONNECT.\nBut that's a separate discussion.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Apr 2002 14:43:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schema (namespace) privilege details " }, { "msg_contents": "On Fri, 19 Apr 2002, Sander Steffann wrote:\n\n> I can't think of a reason that [creation of] temp tables should\n> be prevented.\n\nMaybe to keep hostile users from filling up your disk?\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n\n", "msg_date": "Sat, 20 Apr 2002 12:19:27 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Schema (namespace) privilege details" }, { "msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> On Fri, 19 Apr 2002, Sander Steffann wrote:\n>> I can't think of a reason that [creation of] temp tables should\n>> be prevented.\n\n> Maybe to keep hostile users from filling up your disk?\n\nThat does come to mind --- but if you've let hostile users into\nyour database, filling your disk is not exactly the smallest problem\nthey could cause. They can very easily cause DOS problems just based\non overconsumption of CPU cycles, or on crashing your server constantly.\n(Cm'on, we all know that can be done.) Even more to the point, is there\nnothing in your database that you'd not want published to the entire\nworld? There's got to be a certain amount of trust level between you\nand the persons you allow SQL-command-level access to your database.\nIf not, you ought to be interposing another level of software.\n\nMy current proposal for schema protection does include a TEMP-table-\ncreation right ... but to be honest I am not convinced that it'd be\nworth the trouble to implement it. Comments anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Apr 2002 00:06:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schema (namespace) privilege details " }, { "msg_contents": "Hi,\n\n> Curt Sampson <cjs@cynic.net> writes:\n> > On Fri, 19 Apr 2002, Sander Steffann wrote:\n> >> I can't think of a reason that [creation of] temp tables should\n> >> be prevented.\n>\n> > Maybe to keep hostile users from filling up your disk?\n>\n> That does come to mind --- but if you've let hostile users into\n> your database, filling your disk is not exactly the smallest problem\n> they could cause. 
They can very easily cause DOS problems just based\non overconsumption of CPU cycles, or on crashing your server constantly.\n(C'mon, we all know that can be done.) Even more to the point, is there\nnothing in your database that you'd not want published to the entire\nworld? There's got to be a certain amount of trust level between you\nand the persons you allow SQL-command-level access to your database.\nIf not, you ought to be interposing another level of software.\n\nMy current proposal for schema protection does include a TEMP-table-\ncreation right ... but to be honest I am not convinced that it'd be\nworth the trouble to implement it. Comments anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Apr 2002 00:06:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schema (namespace) privilege details " }, { "msg_contents": "Hi,\n\n> Curt Sampson <cjs@cynic.net> writes:\n> > On Fri, 19 Apr 2002, Sander Steffann wrote:\n> >> I can't think of a reason that [creation of] temp tables should\n> >> be prevented.\n>\n> > Maybe to keep hostile users from filling up your disk?\n>\n> That does come to mind --- but if you've let hostile users into\n> your database, filling your disk is not exactly the smallest problem\n> they could cause. 
But I think you can arrange for the\nsort files to go on another partition, to help limit the problems\nthis would cause.\n\nAnother question is about the best place to put temporary tables.\nRight now they go in the database you're connected to, right? So\nit's possible for users that can create temporary tables to stop\nall inserts into that database by filling up its partition, but\nother DBs might be on different partitions and be unaffected.\n\nAnother way to go is to do what MS SQL server does, which is to\nput temp tables in a separate database. If you put that on its own\npartition, you can limit the damage users can do to the database\nthat they're connected to, but then users can stop all other users\nfrom creating temporary tables.\n\nPersonally, I feel the Postgres approach is better for postgres at\nthis time, but there are other differences that help to make this\nso. In SQL Server, a \"database\" is really more a schema in the\npostgres sense, except that it's also a separate tablespace. So\nthe two approaches are not directly comparable.\n\nIn the end, it seems to me that there's only so much security you\ncan implement in a database. I don't think that anybody produces\na database server where I'd let random users connect directly,\nrather than going though an application that implements further\nsecurity. Thus, one probably doesn't want to spend a lot of time\ntrying to implement perfect security.\n\nAm I siding with you or Tom here? I'm not sure. :-)\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n\n", "msg_date": "Sun, 21 Apr 2002 13:59:02 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Schema (namespace) privilege details " }, { "msg_contents": "Hi,\n\n> > > > Maybe to keep hostile users from filling up your disk?\n>\n> Actually, I was serious, not sarcastic, about that \"maybe.\" Like\n> Tom, I'm not entirely sure that it's necessary to add this complexity,\n> because there are so many other ways to abuse the system.\n\nI know... But we have to start somewhere :)\n\n> > I think Curt is right... If users are always allowed\n> > to make temp tables, you can't give someone real read-only access to the\nDB.\n>\n> Well, I'm not sure you can give \"real\" read-only access anyway.\n> After all, if you've got a big enough table, all a user has to do\n> is submit a few queries that sort the entire thing and you'll be\n> eating up disk space like mad.\n\nOk. I forgot about that.\n\n> But I think you can arrange for the\n> sort files to go on another partition, to help limit the problems\n> this would cause.\n\nSounds good.\n\n> Another question is about the best place to put temporary tables.\n> Right now they go in the database you're connected to, right? So\n> it's possible for users that can create temporary tables to stop\n> all inserts into that database by filling up its partition, but\n> other DBs might be on different partitions and be unaffected.\n\nAt the moment all our DBs are on one partition. This would be a good reason\nto split them, but it also makes it difficult if someone needs more space.\n\n> Another way to go is to do what MS SQL server does, which is to\n> put temp tables in a separate database. 
If you put that on its own\n> partition, you can limit the damage users can do to the database\n> that they're connected to, but then users can stop all other users\n> from creating temporary tables.\n\nThat is true, but when I look at how many of our customers actually use temp\ntables, I think this is not a very big problem (for us!)\n\n> Personally, I feel the Postgres approach is better for postgres at\n> this time, but there are other differences that help to make this\n> so. In SQL Server, a \"database\" is really more a schema in the\n> postgres sense, except that it's also a separate tablespace. So\n> the two approaches are not directly comparable.\n>\n> In the end, it seems to me that there's only so much security you\n> can implement in a database. I don't think that anybody produces\n> a database server where I'd let random users connect directly,\n> rather than going through an application that implements further\n> security. Thus, one probably doesn't want to spend a lot of time\n> trying to implement perfect security.\n\nOnly the idea of real read-only users seems useful to me. Maybe if temp\ntables and big sorts could be limited this would be possible? Maybe a\nrestriction on CPU time... I don't know if there are any other places where\na user can eat resources, but the more I think about it, the more\ncomplicated it gets. :-(\n\n> Am I siding with you or Tom here? I'm not sure. :-)\n\nI don't really care, as long as we reach a good solution! :-)\n\n- Sander\n\n\n", "msg_date": "Sun, 21 Apr 2002 16:05:21 +0200", "msg_from": "\"Sander Steffann\" <sander@steffann.nl>", "msg_from_op": false, "msg_subject": "Re: Schema (namespace) privilege details " }, { "msg_contents": "On Sun, 21 Apr 2002, Sander Steffann wrote:\n\n> At the moment all our DBs are on one partition.\n\nNot really, no. It's easy to put in a symlink to put a database on\nanother partition. 
It's easy for any object, for that matter, so long as\nit's not the sort of thing that gets deleted and re-created by users.\n\n> That is true, but when I look at how many of our customers actually use temp\n> tables, I think this is not a very big problem (for us!)\n\nOh, of course! I was still in SQL Server mode, thinking that sorts were\ndone via temp tables. But of course Postgres doesn't do it this way.\n\n> I don't know if there are any other places where\n> a user can eat resources, but the more I think about it, the more\n> complicated it gets. :-(\n\nYeah, exactly.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Mon, 22 Apr 2002 04:03:04 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Schema (namespace) privilege details " }, { "msg_contents": "Tom Lane writes:\n\n[ All the rest looks good to me. ]\n\n> Databases have two grantable rights: CREATE allows creating new regular\n> (permanent) schemas within the database, while TEMP allows creation of\n> a temp schema (and thus temp tables).\n\nCouldn't the temp schema be permanent (and unremovable), and thus the\nprivilege to create temp tables can be handled by GRANT CREATE ON SCHEMA\ntemp. It seems to me that creating an extra type of privilege to be able\nto create one specific schema that exists by default anyway(?) is\noverkill.\n\n> A new database will initially allow both these rights to world.\n\nShould it? Shouldn't the database owner have to give out schemas\nexplicitly? 
This would be consistent with not being able to create\nsubobjects in other people's schemas by default.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 21 Apr 2002 23:14:14 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Schema (namespace) privilege details " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> Databases have two grantable rights: CREATE allows creating new regular\n>> (permanent) schemas within the database, while TEMP allows creation of\n>> a temp schema (and thus temp tables).\n\n> Couldn't the temp schema be permanent (and unremovable), and thus the\n> privilege to create temp tables can be handled by GRANT CREATE ON SCHEMA\n> temp. It seems to me that creating an extra type of privilege to be able\n> to create one specific schema that exists by default anyway(?) is\n> overkill.\n\nWell, it's not a single schema but a schema-per-backend. I suppose we\ncould do it as you suggest if we invent a \"prototype\" pg_temp schema\non which the rights can be stuck. But it doesn't seem obviously cleaner\nto do it that way than to attach the rights to the database. In\nparticular, the idea of cloning a temp schema bothers me: if someone\nsticks some tables into the prototype schema, should we clone those\ntoo upon backend startup? If not, why not?\n\n>> A new database will initially allow both these rights to world.\n\n> Should it? Shouldn't the database owner have to give out schemas\n> explicitly? This would be consistent with not being able to create\n> subobjects in other people's schemas by default.\n\nWell, I've been dithering about that. Zero public rights on creation\nwould clearly be more compatible with the way we handle other kinds\nof rights. 
It would also clearly *not* be backwards-compatible with\nour historical behavior for new databases.\n\nIt seems relevant here that existing pg_dumpall scripts will fail\nmiserably if CREATE DATABASE does not allow connect/create rights\nto world by default.\n\nUnless you see a way around that, my inclination is to allow rights as\nI suggested. We could perhaps tighten this up in a release or three,\nafter we've fixed pg_dumpall to do something appropriate.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Apr 2002 23:25:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schema (namespace) privilege details " } ]
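The second-pass proposal in the thread above reduces to a small set of grantable rights. As a rough sketch, the corresponding statements might look like the following; the privilege names follow the proposal in this thread (not any released syntax), and the schema, database, and role names are invented for illustration:

```sql
-- Schema rights per the proposal: SELECT lets you look up objects in
-- the namespace, CREATE lets you create new objects there.  A newly
-- created schema grants both to its owner and nothing to anyone else.
GRANT SELECT ON SCHEMA accounting TO PUBLIC;
GRANT CREATE ON SCHEMA accounting TO bob;

-- Database rights: CREATE governs creating permanent schemas, TEMP
-- governs creating the per-backend temp schema.  Tightening template1
-- as suggested above:
REVOKE CREATE ON DATABASE template1 FROM PUBLIC;
REVOKE TEMP ON DATABASE template1 FROM PUBLIC;
```

Whether new databases stay world-writable by default is the backwards-compatibility question debated above; the statements only illustrate the shape of the rights, not the defaults.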
[ { "msg_contents": "I was just fooling around with replacing the existing plain index on\npg_trigger.tgrelid with a unique index on (tgrelid, tgname). In theory\nthis should not affect anything --- the code already enforced that two\ntriggers on the same relation can't have the same name. The index\nshould merely provide a backup check.\n\nSo I was a tad surprised to get a regression test failure:\n\n*** ./expected/foreign_key.out\tThu Apr 11 15:13:36 2002\n--- ./results/foreign_key.out\tThu Apr 18 21:26:20 2002\n***************\n*** 899,905 ****\n ERROR: <unnamed> referential integrity violation - key in pktable still referenced from pktable\n -- fails (1,1) is being referenced (twice)\n update pktable set base1=3 where base1=1;\n! ERROR: <unnamed> referential integrity violation - key in pktable still referenced from pktable\n -- this sequence of two deletes will work, since after the first there will be no (2,*) references\n delete from pktable where base2=2;\n delete from pktable where base1=2;\n--- 899,905 ----\n ERROR: <unnamed> referential integrity violation - key in pktable still referenced from pktable\n -- fails (1,1) is being referenced (twice)\n update pktable set base1=3 where base1=1;\n! ERROR: <unnamed> referential integrity violation - key referenced from pktable not found in pktable\n -- this sequence of two deletes will work, since after the first there will be no (2,*) references\n delete from pktable where base2=2;\n delete from pktable where base1=2;\n\n======================================================================\n\nThis particular test involves a table with a foreign-key reference to\nitself, ie, it's both PK and FK. What apparently is happening is that\nthe two RI triggers are now being fired in a different order than\nbefore. While either of them would have detected an error, we now get\nthe other error first.\n\nDoes this bother anyone? 
It seems to me that the old code essentially\nhad no guarantee at all about the order in which the triggers would\nfire, and so it was pure luck that the regression test never showed\nthe other message.\n\nWith the modified code, because we load the triggers by scanning\nan index on (tgrelid, tgname), it is actually true that triggers are\nfired in name order. We've had requests in the past to provide a\nwell-defined firing order for triggers --- should we document this\nbehavior and support it, or should we pretend it ain't there?\n\nBTW, the same goes for rules: it would now be pretty easy to guarantee\nthat rules are fired in name order.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Apr 2002 22:02:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Odd(?) RI-trigger behavior" }, { "msg_contents": "On Thu, 18 Apr 2002, Tom Lane wrote:\n\n> This particular test involves a table with a foreign-key reference to\n> itself, ie, it's both PK and FK. What apparently is happening is that\n> the two RI triggers are now being fired in a different order than\n> before. While either of them would have detected an error, we now get\n> the other error first.\n>\n> Does this bother anyone? It seems to me that the old code essentially\n> had no guarantee at all about the order in which the triggers would\n> fire, and so it was pure luck that the regression test never showed\n> the other message.\n\nThat's probably a bad thing even if I doubt that it'd ever come up the\nother way barring changes to other regression tests in practice. Forcing\nan order probably helps with this case anyway.\n\n> With the modified code, because we load the triggers by scanning\n> an index on (tgrelid, tgname), it is actually true that triggers are\n> fired in name order. 
We've had requests in the past to provide a\n> well-defined firing order for triggers --- should we document this\n> behavior and support it, or should we pretend it ain't there?\n\nDidn't someone (Peter?) say that the mandated firing order was based on\ncreation order/time in SQL99?\n\n", "msg_date": "Thu, 18 Apr 2002 20:09:55 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Odd(?) RI-trigger behavior" }, { "msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> Didn't someone (Peter?) say that the mandated firing order was based on\n> creation order/time in SQL99?\n\nIt does say that:\n\n The order of execution of a set of triggers is ascending by value\n of their timestamp of creation in their descriptors, such that the\n oldest trigger executes first. If one or more triggers have the\n same timestamp value, then their relative order of execution is\n implementation-defined.\n\nHowever, this strikes me as fairly brain-dead; it's unnecessarily hard\nto control the order of trigger execution. You have to drop and\nrecreate triggers if you want to insert a new one at a desired position.\nWorse, if you create several triggers in the same transaction, they'll\nhave the same timestamp --- leaving you right back in the\nimplementation-defined case. But if you want to make your rearrangement\natomically with respect to other transactions, you have little choice\nbut to drop/recreate in one xact. Looks like a catch-22 to me.\n\nISTM we had discussed this before and concluded that name order was\na more reasonable definition. Nobody had got round to doing anything\nabout it though. (Indeed my current hack was not intended to provide\na predictable firing order, it just fell out that way...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Apr 2002 23:30:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Odd(?) 
RI-trigger behavior " }, { "msg_contents": "On Thu, 18 Apr 2002, Tom Lane wrote:\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > Didn't someone (Peter?) say that the mandated firing order was based on\n> > creation order/time in SQL99?\n>\n> It does say that:\n>\n> The order of execution of a set of triggers is ascending by value\n> of their timestamp of creation in their descriptors, such that the\n> oldest trigger executes first. If one or more triggers have the\n> same timestamp value, then their relative order of execution is\n> implementation-defined.\n>\n> However, this strikes me as fairly brain-dead; it's unnecessarily hard\n> to control the order of trigger execution. You have to drop and\n> recreate triggers if you want to insert a new one at a desired position.\n> Worse, if you create several triggers in the same transaction, they'll\n> have the same timestamp --- leaving you right back in the\n> implementation-defined case. But if you want to make your rearrangement\n> atomically with respect to other transactions, you have little choice\n> but to drop/recreate in one xact. Looks like a catch-22 to me.\n>\n> ISTM we had discussed this before and concluded that name order was\n> a more reasonable definition. Nobody had got round to doing anything\n> about it though. (Indeed my current hack was not intended to provide\n> a predictable firing order, it just fell out that way...)\n\nI agree that name is better, I wasn't sure if we'd reached a consensus on\nit or if the conversation drifted away due to the fact that no one was\nlooking at it at the time.\n\n\n", "msg_date": "Thu, 18 Apr 2002 20:43:54 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Odd(?) 
RI-trigger behavior " }, { "msg_contents": "On Thu, 18 Apr 2002 20:43:54 -0700 (PDT)\nStephan Szabo <sszabo@megazone23.bigpanda.com> wrote:\n\n> I agree that name is better, I wasn't sure if we'd reached a consensus on\n> it or if the conversation drifted away due to the fact that no one was\n> looking at it at the time.\n\nhttp://archives.postgresql.org/pgsql-general/2001-09/msg00234.php\n\nNobody opposed to the idea of name ordering in that thread.\n\nBut note that this is on TODO:\n\n* Allow user to control trigger firing order\n\nThat probably means that the user should have some reasonable way to\nchange the name, besides fiddling with system catalogs.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Siempre hay que alimentar a los dioses, aunque la tierra este seca\" (Orual)\n", "msg_date": "Fri, 19 Apr 2002 01:00:52 -0400", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": false, "msg_subject": "Re: Odd(?) RI-trigger behavior" }, { "msg_contents": "> But note that this is on TODO:\n>\n> * Allow user to control trigger firing order\n>\n> That probably means that the user should have some reasonable way to\n> change the name, besides fiddling with system catalogs.\n\nAn ALTER TRIGGER command? Of course, it should not allow modification of\nconstraint triggers...\n\nChris\n\n", "msg_date": "Fri, 19 Apr 2002 13:04:50 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Odd(?) RI-trigger behavior" }, { "msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> http://archives.postgresql.org/pgsql-general/2001-09/msg00234.php\n> Nobody opposed to the idea of name ordering in that thread.\n\nOkay, I've committed the fixes that implement this.\n\n> But note that this is on TODO:\n> * Allow user to control trigger firing order\n> That probably means that the user should have some reasonable way to\n> change the name, besides fiddling with system catalogs.\n\nYeah. 
As of CVS tip, to reshuffle the order of existing triggers you\nmust (a) do a manual UPDATE pg_trigger SET tgname = 'something' ...\nthen (b) restart your backend(s), because the relcache code does not\nnotice that you did that, so it'll keep using the trigger data it\nalready had loaded. This is pretty ugly. An ALTER TRIGGER command\nseems called for if we want to call the TODO item really done.\nI haven't got time for that at the moment; any volunteers?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Apr 2002 12:57:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Odd(?) RI-trigger behavior " }, { "msg_contents": "Tom Lane wrote:\n> \n> Yeah. As of CVS tip, to reshuffle the order of existing triggers you\n> must (a) do a manual UPDATE pg_trigger SET tgname = 'something' ...\n> then (b) restart your backend(s), because the relcache code does not\n> notice that you did that, so it'll keep using the trigger data it\n> already had loaded. This is pretty ugly. An ALTER TRIGGER command\n> seems called for if we want to call the TODO item really done.\n> I haven't got time for that at the moment; any volunteers?\n> \n\nI'll take it.\n\nJoe\n\n", "msg_date": "Fri, 19 Apr 2002 10:25:56 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Odd(?) RI-trigger behavior" }, { "msg_contents": "Joe Conway wrote:\n> Tom Lane wrote:\n> \n>>\n>> Yeah. As of CVS tip, to reshuffle the order of existing triggers you\n>> must (a) do a manual UPDATE pg_trigger SET tgname = 'something' ...\n>> then (b) restart your backend(s), because the relcache code does not\n>> notice that you did that, so it'll keep using the trigger data it\n>> already had loaded. This is pretty ugly. 
An ALTER TRIGGER command\n>> seems called for if we want to call the TODO item really done.\n>> I haven't got time for that at the moment; any volunteers?\n>>\n> \n> I'll take it.\n> \n\nThere is already a RenameStmt node which is currently only used to \nrename tables or table column names. Is there any objection to modifying \nit to handle trigger names (and possibly other things in the future) also?\n\n\nJoe\n\n\n", "msg_date": "Fri, 19 Apr 2002 13:29:33 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Odd(?) RI-trigger behavior" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> There is already a RenameStmt node which is currently only used to \n> rename tables or table column names. Is there any objection to modifying \n> it to handle trigger names (and possibly other things in the future) also?\n\nYou'd need to add a field so you could distinguish the type of rename,\nbut on the whole that seems a reasonable thing to do; probably better\nthan adding a brand new node type. We're already sharing node types\nfor DROPs, for example, so I see no reason not to do it for RENAMEs.\n(Cf 'DropPropertyStmt' in current sources)\n\nRenaming rules seems like something that should be on the list too,\nso you're right that there will be more stuff later.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Apr 2002 16:36:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Odd(?) RI-trigger behavior " }, { "msg_contents": "Tom Lane wrote:\n > Joe Conway <mail@joeconway.com> writes:\n >\n >> There is already a RenameStmt node which is currently only used to\n >> rename tables or table column names. 
Is there any objection to\n >> modifying it to handle trigger names (and possibly other things in\n >> the future) also?\n >\n >\n > You'd need to add a field so you could distinguish the type of\n > rename, but on the whole that seems a reasonable thing to do;\n > probably better than adding a brand new node type. We're already\n > sharing node types for DROPs, for example, so I see no reason not to\n > do it for RENAMEs. (Cf 'DropPropertyStmt' in current sources)\n >\n > Renaming rules seems like something that should be on the list too, so\n > you're right that there will be more stuff later.\n >\n\nAttached is a patch for ALTER TRIGGER RENAME per the above thread. I\nleft a stub for a future \"ALTER RULE RENAME\" but did not write that one\nyet. Bruce, if you want to add my name for that I'll take it and do\nit later.\n\nIt passes all regression tests on my RH box. Usage is as follows:\n\ntest=# create table foo3(f1 int references foo2(f1));\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY\ncheck(s)\nCREATE\ntest=# \\d foo3\n Table \"foo3\"\n Column | Type | Modifiers\n--------+---------+-----------\n f1 | integer |\nTriggers: RI_ConstraintTrigger_16663\n\ntest=# alter trigger \"RI_ConstraintTrigger_16663\" on foo3 rename to\n\"MyOwnConstTriggerName\";\nALTER\ntest=# \\d foo3\n Table \"foo3\"\n Column | Type | Modifiers\n--------+---------+-----------\n f1 | integer |\nTriggers: MyOwnConstTriggerName\n\nObviously there is no built-in restriction on altering the name of\nrefint triggers -- is this a problem?\n\nI'll follow up with a doc patch this weekend. If there are no \nobjections, please apply.\n\nThanks,\n\nJoe", "msg_date": "Fri, 19 Apr 2002 17:47:02 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "RENAME TRIGGER patch (was [HACKERS] Odd(?) 
RI-trigger behavior)" }, { "msg_contents": "Tom Lane wrote:\n> Alvaro Herrera <alvherre@atentus.com> writes:\n> > http://archives.postgresql.org/pgsql-general/2001-09/msg00234.php\n> > Nobody opposed to the idea of name ordering in that thread.\n> \n> Okay, I've committed the fixes that implement this.\n> \n> > But note that this is on TODO:\n> > * Allow user to control trigger firing order\n> > That probably means that the user should have some reasonable way to\n> > change the name, besides fiddling with system catalogs.\n> \n> Yeah. As of CVS tip, to reshuffle the order of existing triggers you\n> must (a) do a manual UPDATE pg_trigger SET tgname = 'something' ...\n> then (b) restart your backend(s), because the relcache code does not\n> notice that you did that, so it'll keep using the trigger data it\n> already had loaded. This is pretty ugly. An ALTER TRIGGER command\n> seems called for if we want to call the TODO item really done.\n> I haven't got time for that at the moment; any volunteers?\n\nTODO updated with:\n\n\t* -Allow user to control trigger firing order\n\t* Add ALTER TRIGGER ... RENAME\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Apr 2002 12:55:56 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Odd(?) RI-trigger behavior" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> Tom Lane wrote:\n> > Joe Conway <mail@joeconway.com> writes:\n> >\n> >> There is already a RenameStmt node which is currently only used to\n> >> rename tables or table column names. 
Is there any objection to\n> >> modifying it to handle trigger names (and possibly other things in\n> >> the future) also?\n> >\n> >\n> > You'd need to add a field so you could distinguish the type of\n> > rename, but on the whole that seems a reasonable thing to do;\n> > probably better than adding a brand new node type. We're already\n> > sharing node types for DROPs, for example, so I see no reason not to\n> > do it for RENAMEs. (Cf 'DropPropertyStmt' in current sources)\n> >\n> > Renaming rules seems like something that should be on the list too, so\n> > you're right that there will be more stuff later.\n> >\n> \n> Attached is a patch for ALTER TRIGGER RENAME per the above thread. I\n> left a stub for a future \"ALTER RULE RENAME\" but did not write that one\n> yet. Bruce, if you want to add my name for that I'll take it and do\n> it later.\n> \n> It passes all regression tests on my RH box. Usage is as follows:\n> \n> test=# create table foo3(f1 int references foo2(f1));\n> NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY\n> check(s)\n> CREATE\n> test=# \\d foo3\n> Table \"foo3\"\n> Column | Type | Modifiers\n> --------+---------+-----------\n> f1 | integer |\n> Triggers: RI_ConstraintTrigger_16663\n> \n> test=# alter trigger \"RI_ConstraintTrigger_16663\" on foo3 rename to\n> \"MyOwnConstTriggerName\";\n> ALTER\n> test=# \\d foo3\n> Table \"foo3\"\n> Column | Type | Modifiers\n> --------+---------+-----------\n> f1 | integer |\n> Triggers: MyOwnConstTriggerName\n> \n> Obviously there is no built-in restriction on altering the name of\n> refint triggers -- is this a problem?\n> \n> I'll follow up with a doc patch this weekend. 
If there are no \n> objections, please apply.\n> \n> Thanks,\n> \n> Joe\n\n> diff -cNr pgsql.cvs.orig/src/backend/commands/tablecmds.c pgsql/src/backend/commands/tablecmds.c\n> *** pgsql.cvs.orig/src/backend/commands/tablecmds.c\tFri Apr 19 10:32:50 2002\n> --- pgsql/src/backend/commands/tablecmds.c\tFri Apr 19 16:46:11 2002\n> ***************\n> *** 2851,2856 ****\n> --- 2851,2973 ----\n> }\n> \n> /*\n> + *\t\trenametrig\t\t- changes the name of a trigger on a relation\n> + *\n> + *\t\ttrigger name is changed in trigger catalog.\n> + *\t\tNo record of the previous name is kept.\n> + *\n> + *\t\tget proper relrelation from relation catalog (if not arg)\n> + *\t\tscan trigger catalog\n> + *\t\t\t\tfor name conflict (within rel)\n> + *\t\t\t\tfor original trigger (if not arg)\n> + *\t\tmodify tgname in trigger tuple\n> + *\t\tinsert modified trigger in trigger catalog\n> + *\t\tdelete original trigger from trigger catalog\n> + */\n> + extern void renametrig(Oid relid,\n> + \t\t const char *oldname,\n> + \t\t const char *newname)\n> + {\n> + \tRelation\ttargetrel;\n> + \tRelation\ttgrel;\n> + \tHeapTuple\ttuple;\n> + \tSysScanDesc\ttgscan;\n> + \tScanKeyData key;\n> + \tbool\t\tfound = FALSE;\n> + \tRelation\tidescs[Num_pg_trigger_indices];\n> + \n> + \t/*\n> + \t * Grab an exclusive lock on the target table, which we will NOT\n> + \t * release until end of transaction.\n> + \t */\n> + \ttargetrel = heap_open(relid, AccessExclusiveLock);\n> + \n> + \t/*\n> + \t * Scan pg_trigger twice for existing triggers on relation. 
We do this in\n> + \t * order to ensure a trigger does not exist with newname (The unique index\n> + \t * on tgrelid/tgname would complain anyway) and to ensure a trigger does\n> + \t * exist with oldname.\n> + \t *\n> + \t * NOTE that this is cool only because we have AccessExclusiveLock on the\n> + \t * relation, so the trigger set won't be changing underneath us.\n> + \t */\n> + \ttgrel = heap_openr(TriggerRelationName, RowExclusiveLock);\n> + \n> + \t/*\n> + \t * First pass -- look for name conflict\n> + \t */\n> + \tScanKeyEntryInitialize(&key, 0,\n> + \t\t\t\t\t\t Anum_pg_trigger_tgrelid,\n> + \t\t\t\t\t\t F_OIDEQ,\n> + \t\t\t\t\t\t ObjectIdGetDatum(relid));\n> + \ttgscan = systable_beginscan(tgrel, TriggerRelidNameIndex, true,\n> + \t\t\t\t\t\t\t\tSnapshotNow, 1, &key);\n> + \twhile (HeapTupleIsValid(tuple = systable_getnext(tgscan)))\n> + \t{\n> + \t\tForm_pg_trigger pg_trigger = (Form_pg_trigger) GETSTRUCT(tuple);\n> + \n> + \t\tif (namestrcmp(&(pg_trigger->tgname), newname) == 0)\n> + \t\t\telog(ERROR, \"renametrig: trigger %s already defined on relation %s\",\n> + \t\t\t\t newname, RelationGetRelationName(targetrel));\n> + \t}\n> + \tsystable_endscan(tgscan);\n> + \n> + \t/*\n> + \t * Second pass -- look for trigger existing with oldname and update\n> + \t */\n> + \tScanKeyEntryInitialize(&key, 0,\n> + \t\t\t\t\t\t Anum_pg_trigger_tgrelid,\n> + \t\t\t\t\t\t F_OIDEQ,\n> + \t\t\t\t\t\t ObjectIdGetDatum(relid));\n> + \ttgscan = systable_beginscan(tgrel, TriggerRelidNameIndex, true,\n> + \t\t\t\t\t\t\t\tSnapshotNow, 1, &key);\n> + \twhile (HeapTupleIsValid(tuple = systable_getnext(tgscan)))\n> + \t{\n> + \t\tForm_pg_trigger pg_trigger = (Form_pg_trigger) GETSTRUCT(tuple);\n> + \n> + \t\tif (namestrcmp(&(pg_trigger->tgname), oldname) == 0)\n> + \t\t{\n> + \t\t\t/*\n> + \t\t\t * Update pg_trigger tuple with new tgname.\n> + \t\t\t * (Scribbling on tuple is OK because it's a copy...)\n> + \t\t\t */\n> + \t\t\tnamestrcpy(&(pg_trigger->tgname), newname);\n> + 
\t\t\tsimple_heap_update(tgrel, &tuple->t_self, tuple);\n> + \n> + \t\t\t/*\n> + \t\t\t * keep system catalog indices current\n> + \t\t\t */\n> + \t\t\tCatalogOpenIndices(Num_pg_trigger_indices, Name_pg_trigger_indices, idescs);\n> + \t\t\tCatalogIndexInsert(idescs, Num_pg_trigger_indices, tgrel, tuple);\n> + \t\t\tCatalogCloseIndices(Num_pg_trigger_indices, idescs);\n> + \n> + \t\t\t/*\n> + \t\t\t * Invalidate relation's relcache entry so that other\n> + \t\t\t * backends (and this one too!) are sent SI message to make them\n> + \t\t\t * rebuild relcache entries.\n> + \t\t\t */\n> + \t\t\tCacheInvalidateRelcache(relid);\n> + \n> + \t\t\tfound = TRUE;\n> + \t\t\tbreak;\n> + \t\t}\n> + \t}\n> + \tsystable_endscan(tgscan);\n> + \n> + \theap_close(tgrel, RowExclusiveLock);\n> + \n> + \tif (!found)\n> + \t\telog(ERROR, \"renametrig: trigger %s not defined on relation %s\",\n> + \t\t\t oldname, RelationGetRelationName(targetrel));\n> + \n> + \t/*\n> + \t * Close rel, but keep exclusive lock!\n> + \t */\n> + \theap_close(targetrel, NoLock);\n> + }\n> + \n> + \n> + /*\n> * Given a trigger function OID, determine whether it is an RI trigger,\n> * and if so whether it is attached to PK or FK relation.\n> *\n> diff -cNr pgsql.cvs.orig/src/backend/nodes/copyfuncs.c pgsql/src/backend/nodes/copyfuncs.c\n> *** pgsql.cvs.orig/src/backend/nodes/copyfuncs.c\tFri Apr 19 10:32:51 2002\n> --- pgsql/src/backend/nodes/copyfuncs.c\tFri Apr 19 13:58:47 2002\n> ***************\n> *** 2137,2146 ****\n> \tRenameStmt *newnode = makeNode(RenameStmt);\n> \n> \tNode_Copy(from, newnode, relation);\n> ! \tif (from->column)\n> ! \t\tnewnode->column = pstrdup(from->column);\n> \tif (from->newname)\n> \t\tnewnode->newname = pstrdup(from->newname);\n> \n> \treturn newnode;\n> }\n> --- 2137,2147 ----\n> \tRenameStmt *newnode = makeNode(RenameStmt);\n> \n> \tNode_Copy(from, newnode, relation);\n> ! \tif (from->oldname)\n> ! 
\t\tnewnode->oldname = pstrdup(from->oldname);\n> \tif (from->newname)\n> \t\tnewnode->newname = pstrdup(from->newname);\n> + \tnewnode->renameType = from->renameType;\n> \n> \treturn newnode;\n> }\n> diff -cNr pgsql.cvs.orig/src/backend/nodes/equalfuncs.c pgsql/src/backend/nodes/equalfuncs.c\n> *** pgsql.cvs.orig/src/backend/nodes/equalfuncs.c\tFri Apr 19 10:32:51 2002\n> --- pgsql/src/backend/nodes/equalfuncs.c\tFri Apr 19 13:56:00 2002\n> ***************\n> *** 983,992 ****\n> {\n> \tif (!equal(a->relation, b->relation))\n> \t\treturn false;\n> ! \tif (!equalstr(a->column, b->column))\n> \t\treturn false;\n> \tif (!equalstr(a->newname, b->newname))\n> \t\treturn false;\n> \n> \treturn true;\n> }\n> --- 983,994 ----\n> {\n> \tif (!equal(a->relation, b->relation))\n> \t\treturn false;\n> ! \tif (!equalstr(a->oldname, b->oldname))\n> \t\treturn false;\n> \tif (!equalstr(a->newname, b->newname))\n> \t\treturn false;\n> + \tif (a->renameType != b->renameType)\n> + \t\treturn false;\n> \n> \treturn true;\n> }\n> diff -cNr pgsql.cvs.orig/src/backend/parser/gram.y pgsql/src/backend/parser/gram.y\n> *** pgsql.cvs.orig/src/backend/parser/gram.y\tFri Apr 19 10:32:51 2002\n> --- pgsql/src/backend/parser/gram.y\tFri Apr 19 14:07:35 2002\n> ***************\n> *** 2915,2922 ****\n> \t\t\t\t{\n> \t\t\t\t\tRenameStmt *n = makeNode(RenameStmt);\n> \t\t\t\t\tn->relation = $3;\n> ! \t\t\t\t\tn->column = $6;\n> \t\t\t\t\tn->newname = $8;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> \t\t;\n> --- 2915,2935 ----\n> \t\t\t\t{\n> \t\t\t\t\tRenameStmt *n = makeNode(RenameStmt);\n> \t\t\t\t\tn->relation = $3;\n> ! 
\t\t\t\t\tn->oldname = $6;\n> \t\t\t\t\tn->newname = $8;\n> + \t\t\t\t\tif ($6 == NULL)\n> + \t\t\t\t\t\tn->renameType = RENAME_TABLE;\n> + \t\t\t\t\telse\n> + \t\t\t\t\t\tn->renameType = RENAME_COLUMN;\n> + \t\t\t\t\t$$ = (Node *)n;\n> + \t\t\t\t}\n> + \t\t| ALTER TRIGGER name ON relation_expr RENAME TO name\n> + \t\t\t\t{\n> + \t\t\t\t\tRenameStmt *n = makeNode(RenameStmt);\n> + \t\t\t\t\tn->relation = $5;\n> + \t\t\t\t\tn->oldname = $3;\n> + \t\t\t\t\tn->newname = $8;\n> + \t\t\t\t\tn->renameType = RENAME_TRIGGER;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> \t\t;\n> diff -cNr pgsql.cvs.orig/src/backend/tcop/utility.c pgsql/src/backend/tcop/utility.c\n> *** pgsql.cvs.orig/src/backend/tcop/utility.c\tFri Apr 19 10:32:52 2002\n> --- pgsql/src/backend/tcop/utility.c\tFri Apr 19 15:59:13 2002\n> ***************\n> *** 377,399 ****\n> \n> \t\t\t\tCheckOwnership(stmt->relation, true);\n> \n> ! \t\t\t\tif (stmt->column == NULL)\n> \t\t\t\t{\n> ! \t\t\t\t\t/*\n> ! \t\t\t\t\t * rename relation\n> ! \t\t\t\t\t */\n> ! \t\t\t\t\trenamerel(RangeVarGetRelid(stmt->relation, false),\n> ! \t\t\t\t\t\t\t stmt->newname);\n> ! \t\t\t\t}\n> ! \t\t\t\telse\n> ! \t\t\t\t{\n> ! \t\t\t\t\t/*\n> ! \t\t\t\t\t * rename attribute\n> ! \t\t\t\t\t */\n> ! \t\t\t\t\trenameatt(RangeVarGetRelid(stmt->relation, false),\n> ! \t\t\t\t\t\t\t stmt->column,\t\t/* old att name */\n> \t\t\t\t\t\t\t stmt->newname,\t/* new att name */\n> ! \t\t\t\t\t\t\t interpretInhOption(stmt->relation->inhOpt));\t\t/* recursive? */\n> \t\t\t\t}\n> \t\t\t}\n> \t\t\tbreak;\n> --- 377,406 ----\n> \n> \t\t\t\tCheckOwnership(stmt->relation, true);\n> \n> ! \t\t\t\tswitch (stmt->renameType)\n> \t\t\t\t{\n> ! \t\t\t\t\tcase RENAME_TABLE:\n> ! \t\t\t\t\t\trenamerel(RangeVarGetRelid(stmt->relation, false),\n> ! \t\t\t\t\t\t\t\t stmt->newname);\n> ! \t\t\t\t\t\tbreak;\n> ! \t\t\t\t\tcase RENAME_COLUMN:\n> ! \t\t\t\t\t\trenameatt(RangeVarGetRelid(stmt->relation, false),\n> ! 
\t\t\t\t\t\t\t stmt->oldname,\t/* old att name */\n> \t\t\t\t\t\t\t stmt->newname,\t/* new att name */\n> ! \t\t\t\t\t\t\t interpretInhOption(stmt->relation->inhOpt));\t/* recursive? */\n> ! \t\t\t\t\t\tbreak;\n> ! \t\t\t\t\tcase RENAME_TRIGGER:\n> ! \t\t\t\t\t\trenametrig(RangeVarGetRelid(stmt->relation, false),\n> ! \t\t\t\t\t\t\t stmt->oldname,\t/* old att name */\n> ! \t\t\t\t\t\t\t stmt->newname);\t/* new att name */\n> ! \t\t\t\t\t\tbreak;\n> ! \t\t\t\t\tcase RENAME_RULE:\n> ! \t\t\t\t\t\telog(ERROR, \"ProcessUtility: Invalid target for RENAME: %d\",\n> ! \t\t\t\t\t\t\t\tstmt->renameType);\n> ! \t\t\t\t\t\tbreak;\n> ! \t\t\t\t\tdefault:\n> ! \t\t\t\t\t\telog(ERROR, \"ProcessUtility: Invalid target for RENAME: %d\",\n> ! \t\t\t\t\t\t\t\tstmt->renameType);\n> \t\t\t\t}\n> \t\t\t}\n> \t\t\tbreak;\n> diff -cNr pgsql.cvs.orig/src/include/commands/tablecmds.h pgsql/src/include/commands/tablecmds.h\n> *** pgsql.cvs.orig/src/include/commands/tablecmds.h\tFri Apr 19 10:32:55 2002\n> --- pgsql/src/include/commands/tablecmds.h\tFri Apr 19 16:06:39 2002\n> ***************\n> *** 15,20 ****\n> --- 15,21 ----\n> #define TABLECMDS_H\n> \n> #include \"nodes/parsenodes.h\"\n> + #include \"utils/inval.h\"\n> \n> extern void AlterTableAddColumn(Oid myrelid, bool inherits,\n> \t\t\t\t\t\t\t\tColumnDef *colDef);\n> ***************\n> *** 60,63 ****\n> --- 61,68 ----\n> extern void renamerel(Oid relid,\n> \t\t const char *newrelname);\n> \n> + extern void renametrig(Oid relid,\n> + \t\t const char *oldname,\n> + \t\t const char *newname);\n> + \n> #endif /* TABLECMDS_H */\n> diff -cNr pgsql.cvs.orig/src/include/nodes/parsenodes.h pgsql/src/include/nodes/parsenodes.h\n> *** pgsql.cvs.orig/src/include/nodes/parsenodes.h\tFri Apr 19 10:32:55 2002\n> --- pgsql/src/include/nodes/parsenodes.h\tFri Apr 19 14:21:21 2002\n> ***************\n> *** 1205,1221 ****\n> } RemoveOperStmt;\n> \n> /* ----------------------\n> ! 
*\t\tAlter Table Rename Statement\n> * ----------------------\n> */\n> typedef struct RenameStmt\n> {\n> \tNodeTag\t\ttype;\n> ! \tRangeVar *relation;\t\t/* relation to be altered */\n> ! \tchar\t *column;\t\t\t/* if NULL, rename the relation name to\n> ! \t\t\t\t\t\t\t\t * the new name. Otherwise, rename this\n> ! \t\t\t\t\t\t\t\t * column name. */\n> \tchar\t *newname;\t\t/* the new name */\n> } RenameStmt;\n> \n> /* ----------------------\n> --- 1205,1227 ----\n> } RemoveOperStmt;\n> \n> /* ----------------------\n> ! *\t\tAlter Object Rename Statement\n> * ----------------------\n> + * Currently supports renaming tables, table columns, and triggers.\n> + * If renaming a table, oldname is ignored.\n> */\n> + #define RENAME_TABLE\t110\n> + #define RENAME_COLUMN\t111\n> + #define RENAME_TRIGGER\t112\n> + #define RENAME_RULE\t\t113\n> + \n> typedef struct RenameStmt\n> {\n> \tNodeTag\t\ttype;\n> ! \tRangeVar *relation;\t\t/* owning relation */\n> ! \tchar\t *oldname;\t\t/* name of rule, trigger, etc */\n> \tchar\t *newname;\t\t/* the new name */\n> + \tint\t\t\trenameType;\t\t/* RENAME_TABLE, RENAME_COLUMN, etc */\n> } RenameStmt;\n> \n> /* ----------------------\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Apr 2002 14:07:19 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: RENAME TRIGGER patch (was [HACKERS] Odd(?) RI-trigger" }, { "msg_contents": "\nPatch applied. 
Thanks.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> Tom Lane wrote:\n> > Joe Conway <mail@joeconway.com> writes:\n> >\n> >> There is already a RenameStmt node which is currently only used to\n> >> rename tables or table column names. Is there any objection to\n> >> modifying it to handle trigger names (and possibly other things in\n> >> the future) also?\n> >\n> >\n> > You'd need to add a field so you could distinguish the type of\n> > rename, but on the whole that seems a reasonable thing to do;\n> > probably better than adding a brand new node type. We're already\n> > sharing node types for DROPs, for example, so I see no reason not to\n> > do it for RENAMEs. (Cf 'DropPropertyStmt' in current sources)\n> >\n> > Renaming rules seems like something that should be on the list too, so\n> > you're right that there will be more stuff later.\n> >\n> \n> Attached is a patch for ALTER TRIGGER RENAME per the above thread. I\n> left a stub for a future \"ALTER RULE RENAME\" but did not write that one\n> yet. Bruce, if you want to add my name for that I'll take it and do\n> it later.\n> \n> It passes all regression tests on my RH box. Usage is as follows:\n> \n> test=# create table foo3(f1 int references foo2(f1));\n> NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY\n> check(s)\n> CREATE\n> test=# \\d foo3\n> Table \"foo3\"\n> Column | Type | Modifiers\n> --------+---------+-----------\n> f1 | integer |\n> Triggers: RI_ConstraintTrigger_16663\n> \n> test=# alter trigger \"RI_ConstraintTrigger_16663\" on foo3 rename to\n> \"MyOwnConstTriggerName\";\n> ALTER\n> test=# \\d foo3\n> Table \"foo3\"\n> Column | Type | Modifiers\n> --------+---------+-----------\n> f1 | integer |\n> Triggers: MyOwnConstTriggerName\n> \n> Obviously there is no built-in restriction on altering the name of\n> refint triggers -- is this a problem?\n> \n> I'll follow up with a doc patch this weekend. 
If there are no \n> objections, please apply.\n> \n> Thanks,\n> \n> Joe\n\n> diff -cNr pgsql.cvs.orig/src/backend/commands/tablecmds.c pgsql/src/backend/commands/tablecmds.c\n> *** pgsql.cvs.orig/src/backend/commands/tablecmds.c\tFri Apr 19 10:32:50 2002\n> --- pgsql/src/backend/commands/tablecmds.c\tFri Apr 19 16:46:11 2002\n> ***************\n> *** 2851,2856 ****\n> --- 2851,2973 ----\n> }\n> \n> /*\n> + *\t\trenametrig\t\t- changes the name of a trigger on a relation\n> + *\n> + *\t\ttrigger name is changed in trigger catalog.\n> + *\t\tNo record of the previous name is kept.\n> + *\n> + *\t\tget proper relrelation from relation catalog (if not arg)\n> + *\t\tscan trigger catalog\n> + *\t\t\t\tfor name conflict (within rel)\n> + *\t\t\t\tfor original trigger (if not arg)\n> + *\t\tmodify tgname in trigger tuple\n> + *\t\tinsert modified trigger in trigger catalog\n> + *\t\tdelete original trigger from trigger catalog\n> + */\n> + extern void renametrig(Oid relid,\n> + \t\t const char *oldname,\n> + \t\t const char *newname)\n> + {\n> + \tRelation\ttargetrel;\n> + \tRelation\ttgrel;\n> + \tHeapTuple\ttuple;\n> + \tSysScanDesc\ttgscan;\n> + \tScanKeyData key;\n> + \tbool\t\tfound = FALSE;\n> + \tRelation\tidescs[Num_pg_trigger_indices];\n> + \n> + \t/*\n> + \t * Grab an exclusive lock on the target table, which we will NOT\n> + \t * release until end of transaction.\n> + \t */\n> + \ttargetrel = heap_open(relid, AccessExclusiveLock);\n> + \n> + \t/*\n> + \t * Scan pg_trigger twice for existing triggers on relation. 
We do this in\n> + \t * order to ensure a trigger does not exist with newname (The unique index\n> + \t * on tgrelid/tgname would complain anyway) and to ensure a trigger does\n> + \t * exist with oldname.\n> + \t *\n> + \t * NOTE that this is cool only because we have AccessExclusiveLock on the\n> + \t * relation, so the trigger set won't be changing underneath us.\n> + \t */\n> + \ttgrel = heap_openr(TriggerRelationName, RowExclusiveLock);\n> + \n> + \t/*\n> + \t * First pass -- look for name conflict\n> + \t */\n> + \tScanKeyEntryInitialize(&key, 0,\n> + \t\t\t\t\t\t Anum_pg_trigger_tgrelid,\n> + \t\t\t\t\t\t F_OIDEQ,\n> + \t\t\t\t\t\t ObjectIdGetDatum(relid));\n> + \ttgscan = systable_beginscan(tgrel, TriggerRelidNameIndex, true,\n> + \t\t\t\t\t\t\t\tSnapshotNow, 1, &key);\n> + \twhile (HeapTupleIsValid(tuple = systable_getnext(tgscan)))\n> + \t{\n> + \t\tForm_pg_trigger pg_trigger = (Form_pg_trigger) GETSTRUCT(tuple);\n> + \n> + \t\tif (namestrcmp(&(pg_trigger->tgname), newname) == 0)\n> + \t\t\telog(ERROR, \"renametrig: trigger %s already defined on relation %s\",\n> + \t\t\t\t newname, RelationGetRelationName(targetrel));\n> + \t}\n> + \tsystable_endscan(tgscan);\n> + \n> + \t/*\n> + \t * Second pass -- look for trigger existing with oldname and update\n> + \t */\n> + \tScanKeyEntryInitialize(&key, 0,\n> + \t\t\t\t\t\t Anum_pg_trigger_tgrelid,\n> + \t\t\t\t\t\t F_OIDEQ,\n> + \t\t\t\t\t\t ObjectIdGetDatum(relid));\n> + \ttgscan = systable_beginscan(tgrel, TriggerRelidNameIndex, true,\n> + \t\t\t\t\t\t\t\tSnapshotNow, 1, &key);\n> + \twhile (HeapTupleIsValid(tuple = systable_getnext(tgscan)))\n> + \t{\n> + \t\tForm_pg_trigger pg_trigger = (Form_pg_trigger) GETSTRUCT(tuple);\n> + \n> + \t\tif (namestrcmp(&(pg_trigger->tgname), oldname) == 0)\n> + \t\t{\n> + \t\t\t/*\n> + \t\t\t * Update pg_trigger tuple with new tgname.\n> + \t\t\t * (Scribbling on tuple is OK because it's a copy...)\n> + \t\t\t */\n> + \t\t\tnamestrcpy(&(pg_trigger->tgname), newname);\n> + 
\t\t\tsimple_heap_update(tgrel, &tuple->t_self, tuple);\n> + \n> + \t\t\t/*\n> + \t\t\t * keep system catalog indices current\n> + \t\t\t */\n> + \t\t\tCatalogOpenIndices(Num_pg_trigger_indices, Name_pg_trigger_indices, idescs);\n> + \t\t\tCatalogIndexInsert(idescs, Num_pg_trigger_indices, tgrel, tuple);\n> + \t\t\tCatalogCloseIndices(Num_pg_trigger_indices, idescs);\n> + \n> + \t\t\t/*\n> + \t\t\t * Invalidate relation's relcache entry so that other\n> + \t\t\t * backends (and this one too!) are sent SI message to make them\n> + \t\t\t * rebuild relcache entries.\n> + \t\t\t */\n> + \t\t\tCacheInvalidateRelcache(relid);\n> + \n> + \t\t\tfound = TRUE;\n> + \t\t\tbreak;\n> + \t\t}\n> + \t}\n> + \tsystable_endscan(tgscan);\n> + \n> + \theap_close(tgrel, RowExclusiveLock);\n> + \n> + \tif (!found)\n> + \t\telog(ERROR, \"renametrig: trigger %s not defined on relation %s\",\n> + \t\t\t oldname, RelationGetRelationName(targetrel));\n> + \n> + \t/*\n> + \t * Close rel, but keep exclusive lock!\n> + \t */\n> + \theap_close(targetrel, NoLock);\n> + }\n> + \n> + \n> + /*\n> * Given a trigger function OID, determine whether it is an RI trigger,\n> * and if so whether it is attached to PK or FK relation.\n> *\n> diff -cNr pgsql.cvs.orig/src/backend/nodes/copyfuncs.c pgsql/src/backend/nodes/copyfuncs.c\n> *** pgsql.cvs.orig/src/backend/nodes/copyfuncs.c\tFri Apr 19 10:32:51 2002\n> --- pgsql/src/backend/nodes/copyfuncs.c\tFri Apr 19 13:58:47 2002\n> ***************\n> *** 2137,2146 ****\n> \tRenameStmt *newnode = makeNode(RenameStmt);\n> \n> \tNode_Copy(from, newnode, relation);\n> ! \tif (from->column)\n> ! \t\tnewnode->column = pstrdup(from->column);\n> \tif (from->newname)\n> \t\tnewnode->newname = pstrdup(from->newname);\n> \n> \treturn newnode;\n> }\n> --- 2137,2147 ----\n> \tRenameStmt *newnode = makeNode(RenameStmt);\n> \n> \tNode_Copy(from, newnode, relation);\n> ! \tif (from->oldname)\n> ! 
\t\tnewnode->oldname = pstrdup(from->oldname);\n> \tif (from->newname)\n> \t\tnewnode->newname = pstrdup(from->newname);\n> + \tnewnode->renameType = from->renameType;\n> \n> \treturn newnode;\n> }\n> diff -cNr pgsql.cvs.orig/src/backend/nodes/equalfuncs.c pgsql/src/backend/nodes/equalfuncs.c\n> *** pgsql.cvs.orig/src/backend/nodes/equalfuncs.c\tFri Apr 19 10:32:51 2002\n> --- pgsql/src/backend/nodes/equalfuncs.c\tFri Apr 19 13:56:00 2002\n> ***************\n> *** 983,992 ****\n> {\n> \tif (!equal(a->relation, b->relation))\n> \t\treturn false;\n> ! \tif (!equalstr(a->column, b->column))\n> \t\treturn false;\n> \tif (!equalstr(a->newname, b->newname))\n> \t\treturn false;\n> \n> \treturn true;\n> }\n> --- 983,994 ----\n> {\n> \tif (!equal(a->relation, b->relation))\n> \t\treturn false;\n> ! \tif (!equalstr(a->oldname, b->oldname))\n> \t\treturn false;\n> \tif (!equalstr(a->newname, b->newname))\n> \t\treturn false;\n> + \tif (a->renameType != b->renameType)\n> + \t\treturn false;\n> \n> \treturn true;\n> }\n> diff -cNr pgsql.cvs.orig/src/backend/parser/gram.y pgsql/src/backend/parser/gram.y\n> *** pgsql.cvs.orig/src/backend/parser/gram.y\tFri Apr 19 10:32:51 2002\n> --- pgsql/src/backend/parser/gram.y\tFri Apr 19 14:07:35 2002\n> ***************\n> *** 2915,2922 ****\n> \t\t\t\t{\n> \t\t\t\t\tRenameStmt *n = makeNode(RenameStmt);\n> \t\t\t\t\tn->relation = $3;\n> ! \t\t\t\t\tn->column = $6;\n> \t\t\t\t\tn->newname = $8;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> \t\t;\n> --- 2915,2935 ----\n> \t\t\t\t{\n> \t\t\t\t\tRenameStmt *n = makeNode(RenameStmt);\n> \t\t\t\t\tn->relation = $3;\n> ! 
\t\t\t\t\tn->oldname = $6;\n> \t\t\t\t\tn->newname = $8;\n> + \t\t\t\t\tif ($6 == NULL)\n> + \t\t\t\t\t\tn->renameType = RENAME_TABLE;\n> + \t\t\t\t\telse\n> + \t\t\t\t\t\tn->renameType = RENAME_COLUMN;\n> + \t\t\t\t\t$$ = (Node *)n;\n> + \t\t\t\t}\n> + \t\t| ALTER TRIGGER name ON relation_expr RENAME TO name\n> + \t\t\t\t{\n> + \t\t\t\t\tRenameStmt *n = makeNode(RenameStmt);\n> + \t\t\t\t\tn->relation = $5;\n> + \t\t\t\t\tn->oldname = $3;\n> + \t\t\t\t\tn->newname = $8;\n> + \t\t\t\t\tn->renameType = RENAME_TRIGGER;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> \t\t;\n> diff -cNr pgsql.cvs.orig/src/backend/tcop/utility.c pgsql/src/backend/tcop/utility.c\n> *** pgsql.cvs.orig/src/backend/tcop/utility.c\tFri Apr 19 10:32:52 2002\n> --- pgsql/src/backend/tcop/utility.c\tFri Apr 19 15:59:13 2002\n> ***************\n> *** 377,399 ****\n> \n> \t\t\t\tCheckOwnership(stmt->relation, true);\n> \n> ! \t\t\t\tif (stmt->column == NULL)\n> \t\t\t\t{\n> ! \t\t\t\t\t/*\n> ! \t\t\t\t\t * rename relation\n> ! \t\t\t\t\t */\n> ! \t\t\t\t\trenamerel(RangeVarGetRelid(stmt->relation, false),\n> ! \t\t\t\t\t\t\t stmt->newname);\n> ! \t\t\t\t}\n> ! \t\t\t\telse\n> ! \t\t\t\t{\n> ! \t\t\t\t\t/*\n> ! \t\t\t\t\t * rename attribute\n> ! \t\t\t\t\t */\n> ! \t\t\t\t\trenameatt(RangeVarGetRelid(stmt->relation, false),\n> ! \t\t\t\t\t\t\t stmt->column,\t\t/* old att name */\n> \t\t\t\t\t\t\t stmt->newname,\t/* new att name */\n> ! \t\t\t\t\t\t\t interpretInhOption(stmt->relation->inhOpt));\t\t/* recursive? */\n> \t\t\t\t}\n> \t\t\t}\n> \t\t\tbreak;\n> --- 377,406 ----\n> \n> \t\t\t\tCheckOwnership(stmt->relation, true);\n> \n> ! \t\t\t\tswitch (stmt->renameType)\n> \t\t\t\t{\n> ! \t\t\t\t\tcase RENAME_TABLE:\n> ! \t\t\t\t\t\trenamerel(RangeVarGetRelid(stmt->relation, false),\n> ! \t\t\t\t\t\t\t\t stmt->newname);\n> ! \t\t\t\t\t\tbreak;\n> ! \t\t\t\t\tcase RENAME_COLUMN:\n> ! \t\t\t\t\t\trenameatt(RangeVarGetRelid(stmt->relation, false),\n> ! 
\t\t\t\t\t\t\t stmt->oldname,\t/* old att name */\n> \t\t\t\t\t\t\t stmt->newname,\t/* new att name */\n> ! \t\t\t\t\t\t\t interpretInhOption(stmt->relation->inhOpt));\t/* recursive? */\n> ! \t\t\t\t\t\tbreak;\n> ! \t\t\t\t\tcase RENAME_TRIGGER:\n> ! \t\t\t\t\t\trenametrig(RangeVarGetRelid(stmt->relation, false),\n> ! \t\t\t\t\t\t\t stmt->oldname,\t/* old att name */\n> ! \t\t\t\t\t\t\t stmt->newname);\t/* new att name */\n> ! \t\t\t\t\t\tbreak;\n> ! \t\t\t\t\tcase RENAME_RULE:\n> ! \t\t\t\t\t\telog(ERROR, \"ProcessUtility: Invalid target for RENAME: %d\",\n> ! \t\t\t\t\t\t\t\tstmt->renameType);\n> ! \t\t\t\t\t\tbreak;\n> ! \t\t\t\t\tdefault:\n> ! \t\t\t\t\t\telog(ERROR, \"ProcessUtility: Invalid target for RENAME: %d\",\n> ! \t\t\t\t\t\t\t\tstmt->renameType);\n> \t\t\t\t}\n> \t\t\t}\n> \t\t\tbreak;\n> diff -cNr pgsql.cvs.orig/src/include/commands/tablecmds.h pgsql/src/include/commands/tablecmds.h\n> *** pgsql.cvs.orig/src/include/commands/tablecmds.h\tFri Apr 19 10:32:55 2002\n> --- pgsql/src/include/commands/tablecmds.h\tFri Apr 19 16:06:39 2002\n> ***************\n> *** 15,20 ****\n> --- 15,21 ----\n> #define TABLECMDS_H\n> \n> #include \"nodes/parsenodes.h\"\n> + #include \"utils/inval.h\"\n> \n> extern void AlterTableAddColumn(Oid myrelid, bool inherits,\n> \t\t\t\t\t\t\t\tColumnDef *colDef);\n> ***************\n> *** 60,63 ****\n> --- 61,68 ----\n> extern void renamerel(Oid relid,\n> \t\t const char *newrelname);\n> \n> + extern void renametrig(Oid relid,\n> + \t\t const char *oldname,\n> + \t\t const char *newname);\n> + \n> #endif /* TABLECMDS_H */\n> diff -cNr pgsql.cvs.orig/src/include/nodes/parsenodes.h pgsql/src/include/nodes/parsenodes.h\n> *** pgsql.cvs.orig/src/include/nodes/parsenodes.h\tFri Apr 19 10:32:55 2002\n> --- pgsql/src/include/nodes/parsenodes.h\tFri Apr 19 14:21:21 2002\n> ***************\n> *** 1205,1221 ****\n> } RemoveOperStmt;\n> \n> /* ----------------------\n> ! 
*\t\tAlter Table Rename Statement\n> * ----------------------\n> */\n> typedef struct RenameStmt\n> {\n> \tNodeTag\t\ttype;\n> ! \tRangeVar *relation;\t\t/* relation to be altered */\n> ! \tchar\t *column;\t\t\t/* if NULL, rename the relation name to\n> ! \t\t\t\t\t\t\t\t * the new name. Otherwise, rename this\n> ! \t\t\t\t\t\t\t\t * column name. */\n> \tchar\t *newname;\t\t/* the new name */\n> } RenameStmt;\n> \n> /* ----------------------\n> --- 1205,1227 ----\n> } RemoveOperStmt;\n> \n> /* ----------------------\n> ! *\t\tAlter Object Rename Statement\n> * ----------------------\n> + * Currently supports renaming tables, table columns, and triggers.\n> + * If renaming a table, oldname is ignored.\n> */\n> + #define RENAME_TABLE\t110\n> + #define RENAME_COLUMN\t111\n> + #define RENAME_TRIGGER\t112\n> + #define RENAME_RULE\t\t113\n> + \n> typedef struct RenameStmt\n> {\n> \tNodeTag\t\ttype;\n> ! \tRangeVar *relation;\t\t/* owning relation */\n> ! \tchar\t *oldname;\t\t/* name of rule, trigger, etc */\n> \tchar\t *newname;\t\t/* the new name */\n> + \tint\t\t\trenameType;\t\t/* RENAME_TABLE, RENAME_COLUMN, etc */\n> } RenameStmt;\n> \n> /* ----------------------\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Apr 2002 22:48:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: RENAME TRIGGER patch (was [HACKERS] Odd(?) RI-trigger" } ]
[ { "msg_contents": "Help!\n\nCan anyone tell me what wrong with the following codesnippet. I nuke the \nserver when called (stored procedure)\n\n... some VALID spi_exec call :-) has been done ...\n\nTupleDesc tupdesc = SPI_tuptable->tupdesc;\nTupleConstr *tupconstr = SPI_tuptable->tupdesc->constr;\nConstrCheck *check = tupconstr->check;\nSPITupleTable *tuptable = SPI_tuptable;\nchar *ccbin;\n\nchar buf[8192];\nint i;\n\nfor (i = 1, buf[0] = 0; i <= tupdesc->natts; i++) { \n ccbin = check[i].ccbin;\n sprintf(buf + strlen (buf), \"%s, %s\",\n SPI_fname(tupdesc,i),\n ccbin);\n elog (NOTICE, \"%s\", buf);\n}\n\n\n\n\nI have not had any luck :-( I'm a C beginner thou, so maybe i screw up when \naccessing the structures\n\nAny help is appreciated\n\n/Steffen Nielsen\n\n", "msg_date": "Fri, 19 Apr 2002 04:15:49 +0200", "msg_from": "Steffen Nielsen <styf@cs.auc.dk>", "msg_from_op": true, "msg_subject": "Getting Constrint information..??" }, { "msg_contents": "Steffen Nielsen <styf@cs.auc.dk> writes:\n> Can anyone tell me what wrong with the following codesnippet. I nuke the \n> server when called (stored procedure)\n\n> for (i = 1, buf[0] = 0; i <= tupdesc->natts; i++) { \n> ccbin = check[i].ccbin;\n\nWell, for one thing, the number of check[] array entries is probably not\nequal to the number of attributes of the relation. tupconstr->num_check\ntells you how many there are. For another, check[] should be indexed\nfrom 0 not 1 (just like all C arrays).\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Apr 2002 22:39:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Getting Constrint information..?? " } ]
[ { "msg_contents": "Can we enable syslog support by default for 7.3?\n--\nTatsuo Ishii\n", "msg_date": "Fri, 19 Apr 2002 11:34:19 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "syslog support by default" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Can we enable syslog support by default for 7.3?\n\nAFAIR, we agreed to flip the default some time ago, we just didn't\nwant to do it late in the 7.2 cycle. Go for it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Apr 2002 22:53:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: syslog support by default " }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > Can we enable syslog support by default for 7.3?\n> \n> AFAIR, we agreed to flip the default some time ago, we just didn't\n> want to do it late in the 7.2 cycle. Go for it.\n\nOk. I'll work on this.\n--\nTatsuo Ishii\n", "msg_date": "Fri, 19 Apr 2002 11:55:16 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: syslog support by default " }, { "msg_contents": "Tom Lane writes:\n\n> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > Can we enable syslog support by default for 7.3?\n>\n> AFAIR, we agreed to flip the default some time ago, we just didn't\n> want to do it late in the 7.2 cycle. Go for it.\n\nI think if no one complains about the lack of syslog on his machine we\nshould just remove the option in 7.3+1.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 18 Apr 2002 23:28:48 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: syslog support by default " }, { "msg_contents": "> > > > Can we enable syslog support by default for 7.3?\n> > >\n> > > AFAIR, we agreed to flip the default some time ago, we just didn't\n> > > want to do it late in the 7.2 cycle. 
Go for it.\n> > \n> > I think if no one complains about the lack of syslog on his machine we\n> > should just remove the option in 7.3+1.\n> \n> My experience has been that logging to syslog makes postgres much\n> slower.\n\nWhat's the problem with this? Even if that's true, you could easily\nturn off syslog logging by tweaking postgresql.conf.\n--\nTatsuo Ishii\n\n", "msg_date": "Fri, 19 Apr 2002 15:15:49 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: syslog support by default" }, { "msg_contents": "> My experience has been that logging to syslog makes postgres much\n> slower.\n>\n> Can anyone confirm or refute this ?\nDo you use synchronous write with syslog? Try to add a dash in \n/etc/syslog.conf\n\ne.g.\ninstead of \nlocal3.* /var/log/syslog.postgres\nuse\nlocal3.* -/var/log/syslog.postgres\n\n\n", "msg_date": "Fri, 19 Apr 2002 08:20:19 +0200", "msg_from": "Mario Weilguni <mweilguni@sime.com>", "msg_from_op": false, "msg_subject": "Re: syslog support by default" }, { "msg_contents": "> On Fri, 2002-04-19 at 08:15, Tatsuo Ishii wrote:\n> > > > > > Can we enable syslog support by default for 7.3?\n> > > > >\n> > > > > AFAIR, we agreed to flip the default some time ago, we just didn't\n> > > > > want to do it late in the 7.2 cycle. Go for it.\n> > > > \n> > > > I think if no one complains about the lack of syslog on his machine we\n> > > > should just remove the option in 7.3+1.\n> > > \n> > > My experience has been that logging to syslog makes postgres much\n> > > slower.\n> > \n> > What's the problem with this? Even if that's true, you could easily\n> > turn off syslog logging by tweaking postgresql.conf.\n> \n> I was worried about the comment of removing the other options. 
in 7.3+1.\n> At least this is how i interpreted that comment.\n\nIn my understanding we are going to turn on the --enable-syslog\n*configure* option by default (or remove the configuration option\ncompletely), but not change the syslog option in postgresql.conf\n(currently default to 0: that means not output to syslog).\n--\nTatsuo Ishii\n\n", "msg_date": "Fri, 19 Apr 2002 16:03:53 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: syslog support by default" }, { "msg_contents": "On Fri, 2002-04-19 at 05:28, Peter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > > Can we enable syslog support by default for 7.3?\n> >\n> > AFAIR, we agreed to flip the default some time ago, we just didn't\n> > want to do it late in the 7.2 cycle. Go for it.\n> \n> I think if no one complains about the lack of syslog on his machine we\n> should just remove the option in 7.3+1.\n\nMy experience has been that logging to syslog makes postgres much\nslower.\n\nCan anyone confirm or refute this ?\n\n--------------\nHannu\n\n\n", "msg_date": "19 Apr 2002 09:08:18 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: syslog support by default" }, { "msg_contents": "On Fri, 2002-04-19 at 08:15, Tatsuo Ishii wrote:\n> > > > > Can we enable syslog support by default for 7.3?\n> > > >\n> > > > AFAIR, we agreed to flip the default some time ago, we just didn't\n> > > > want to do it late in the 7.2 cycle. Go for it.\n> > > \n> > > I think if no one complains about the lack of syslog on his machine we\n> > > should just remove the option in 7.3+1.\n> > \n> > My experience has been that logging to syslog makes postgres much\n> > slower.\n> \n> What's the problem with this? Even if that's true, you could easily\n> turn off syslog logging by tweaking postgresql.conf.\n\nI was worried about the comment of removing the other options. 
in 7.3+1.\nAt least this is how i interpreted that comment.\n\n-------------\nHannu\n\n\n", "msg_date": "19 Apr 2002 09:21:16 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: syslog support by default" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> In my understanding we are going to turn on the --enable-syslog\n> *configure* option by default (or remove the configuration option\n> completely), but not change the syslog option in postgresql.conf\n> (currently default to 0: that means not output to syslog).\n\nRight. We are only going to make the syslog support code be there\nin the default build; we are not forcing anyone to use it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Apr 2002 09:48:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: syslog support by default " }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > In my understanding we are going to turn on the --enable-syslog\n> > *configure* option by default (or remove the configuration option\n> > completely), but not change the syslog option in postgresql.conf\n> > (currently default to 0: that means not output to syslog).\n> \n> Right. We are only going to make the syslog support code be there\n> in the default build; we are not forcing anyone to use it.\n\nI have removed the --enable-syslog option. Now as far as the system\nhas syslog(), the syslog support code is always in the build.\nIf this seems ok, I will update the doc.\n--\nTatsuo Ishii\n", "msg_date": "Sun, 21 Apr 2002 09:22:23 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: syslog support by default " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> I have removed the --enable-syslog option. Now as far as the system\n> has syslog(), the syslog support code is always in the build.\n> If this seems ok, I will update the doc.\n\nSeems reasonable. 
It might be a good idea for configure to verify\nthat the <syslog.h> header is present, as well as the syslog() library\nroutine, before enabling HAVE_SYSLOG.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Apr 2002 20:35:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: syslog support by default " }, { "msg_contents": "Tatsuo Ishii wrote:\n> > Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > > In my understanding we are going to turn on the --enable-syslog\n> > > *configure* option by default (or remove the configuration option\n> > > completely), but not change the syslog option in postgresql.conf\n> > > (currently default to 0: that means not output to syslog).\n> > \n> > Right. We are only going to make the syslog support code be there\n> > in the default build; we are not forcing anyone to use it.\n> \n> I have removed the --enable-syslog option. Now as far as the system\n> has syslog(), the syslog support code is always in the build.\n> If this seems ok, I will update the doc.\n\nTODO updated:\n\n\t* -Compile in syslog functionaility by default (Tatsuo)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Apr 2002 12:59:16 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: syslog support by default" }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > I have removed the --enable-syslog option. Now as far as the system\n> > has syslog(), the syslog support code is always in the build.\n> > If this seems ok, I will update the doc.\n> \n> Seems reasonable. 
It might be a good idea for configure to verify\n> that the <syslog.h> header is present, as well as the syslog() library\n> routine, before enabling HAVE_SYSLOG.\n\nDone.\n--\nTatsuo Ishii\n", "msg_date": "Fri, 26 Apr 2002 22:56:21 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: syslog support by default " } ]
[ { "msg_contents": "On 04/18/2002 12:41:15 PM tycho wrote:\n> > Don't know if the optimizer takes this into consideration, but a query that \n> uses a primary and/or unique key in the where-clause, should always choose to \n> use\n> > the related indices (assuming the table size is above a certain threshold). \n> Since a primary key/unique index always restricts the resultset to a single \n> row.....\n> \n> I don't think so.\n> \n> eg. table with primary key \"pk\", taking values from 1 to 1000000 (so\n> 1000000 records)\n> \n> select * from table where pk > 5\n> \n> should probably not use the index ...\n\nOops, you're right of course. Rephrase the above as 'a query that uses a primary key to uniquely qualify a single row' (which pretty much restricts it to the = operator with a constant). Still, this is probably a fairly common case.....\n\nMaarten\n\n----\n\nMaarten Boekhold, maarten.boekhold@reuters.com\n\nReuters Consulting\nDubai Media City\nBuilding 1, 5th Floor\nPO Box 1426\nDubai, United Arab Emirates\ntel:+971(0)4 3918300 ext 249\nfax:+971(0)4 3918333\nmob:+971(0)505526539\n\n\n-------------------------------------------------------------- --\n Visit our Internet site at http://www.reuters.com\n\nAny views expressed in this message are those of the individual\nsender, except where the sender specifically states them to be\nthe views of Reuters Ltd.\n\n", "msg_date": "Fri, 19 Apr 2002 13:27:05 +0400", "msg_from": "Maarten.Boekhold@reuters.com", "msg_from_op": true, "msg_subject": "Re: Index Scans become Seq Scans after VACUUM ANALYSE" } ]
[ { "msg_contents": "would'nt it be much better to expand pg_largeobject to have another column \"src_oid\" (or similar), containing the OID of the referencing table from pg_class, and when accessing large objects take the privilieges from the referencing class?\n\n-----Ursprüngliche Nachricht-----\nVon: Damon Cokenias [mailto:lists@mtn-palace.com]\nGesendet: Freitag, 19. April 2002 11:04\nAn: pgsql-hackers\nBetreff: [HACKERS] Large object security\n\n\nHi all,\n\nI see there's a TODO item for large object security, it's a feature I'd really like to see. I'm willing to put in the time to write a patch, but know far to little about postgres internals and history to just dive in. Has there been any discussion on this list about what this feature should be or how it might be implemented? I saw a passing reference to \"LOB LOCATORs\" in the list archives, but that was all.\n\nWhat's a LOB LOCATOR ? \n\nWhat about giving each large object its own permission flags? ex:\n\nGRANT SELECT ON LARGE OBJECT 10291 TO USER webapp;\nGRANT SELECT, DELETE, UPDATE ON LARGE OBJECT 10291 TO USER admin;\n\nDefault permission flags (and INSERT permissions) would be set at the table level. All objects without specific permissions would use the table rules. This allows for backward compatibility and convenience.\n\nI think per-object security is important. A user shouldn't be able to get at another user's data just by guessing the right OID. Ideally, users without permission would not know there were objects in the database they were not allowed to see.\n\nI can also imagine a security scheme that uses rule/trigger syntax to give the user a hook to provide her own security functions. 
I haven't thought that through, though.\n\nAny thoughts?\n\n\n-Damon\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\nhttp://www.postgresql.org/users-lounge/docs/faq.html\n", "msg_date": "Fri, 19 Apr 2002 12:11:48 +0200", "msg_from": "\"Mario Weilguni\" <mario.weilguni@icomedias.com>", "msg_from_op": true, "msg_subject": "Re: Large object security" }, { "msg_contents": "At 12:11 PM +0200 4/19/02, Mario Weilguni wrote:\n>would'nt it be much better to expand pg_largeobject to have another column \"src_oid\" (or similar), containing the OID of the referencing table from pg_class, and when accessing large objects take the privilieges from the referencing class?\n\nIt's possible that several tables could reference the same object. And besides, I don't think postgres can tell the difference between a column that contains a large object id and a plain old integer.\n\nAlso, I don't think table-level permissions are flexible enough to be truly useful. What if I want certain objects to be visible only to certain users, but I want all objects to be referenced from the same table? I can enforce row-level security on the table with a view. I'd like the same level of flexibility for large objects.\n\nAnother thought: What if I want to restrict access to large objects based on size or timestamp?\n\n-Damon\n", "msg_date": "Fri, 19 Apr 2002 03:31:00 -0700", "msg_from": "Damon Cokenias <damon@cokenias.org>", "msg_from_op": false, "msg_subject": "Re: Large object security" }, { "msg_contents": "\"Mario Weilguni\" <mario.weilguni@icomedias.com> writes:\n> would'nt it be much better to expand pg_largeobject to have another\n> column \"src_oid\" (or similar), containing the OID of the referencing\n> table from pg_class,\n\nWhat referencing table? The existing LO implementation has no idea\nwhere you are keeping the reference(s). 
Nor is there any assumption\nthat there's just one link to the LO.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Apr 2002 10:02:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Large object security " }, { "msg_contents": "The problem with this is that the existing functionality of LOs allows \nyou to share a single LO across multiple tables. There may not be a \nsingle source, but multiple. Since LOs just use an OID as a FK to the \nLO, you can store that OID in multiple different tables.\n\n--Barry\n\nMario Weilguni wrote:\n> would'nt it be much better to expand pg_largeobject to have another column \"src_oid\" (or similar), containing the OID of the referencing table from pg_class, and when accessing large objects take the privilieges from the referencing class?\n> \n> -----Urspr�ngliche Nachricht-----\n> Von: Damon Cokenias [mailto:lists@mtn-palace.com]\n> Gesendet: Freitag, 19. April 2002 11:04\n> An: pgsql-hackers\n> Betreff: [HACKERS] Large object security\n> \n> \n> Hi all,\n> \n> I see there's a TODO item for large object security, it's a feature I'd really like to see. I'm willing to put in the time to write a patch, but know far to little about postgres internals and history to just dive in. Has there been any discussion on this list about what this feature should be or how it might be implemented? I saw a passing reference to \"LOB LOCATORs\" in the list archives, but that was all.\n> \n> What's a LOB LOCATOR ? \n> \n> What about giving each large object its own permission flags? ex:\n> \n> GRANT SELECT ON LARGE OBJECT 10291 TO USER webapp;\n> GRANT SELECT, DELETE, UPDATE ON LARGE OBJECT 10291 TO USER admin;\n> \n> Default permission flags (and INSERT permissions) would be set at the table level. All objects without specific permissions would use the table rules. This allows for backward compatibility and convenience.\n> \n> I think per-object security is important. 
A user shouldn't be able to get at another user's data just by guessing the right OID. Ideally, users without permission would not know there were objects in the database they were not allowed to see.\n> \n> I can also imagine a security scheme that uses rule/trigger syntax to give the user a hook to provide her own security functions. I haven't thought that through, though.\n> \n> Any thoughts?\n> \n> \n> -Damon\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n\n", "msg_date": "Fri, 19 Apr 2002 09:50:25 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: Large object security" } ]
[ { "msg_contents": "As some may know I will be talking about PostgreSQL on LinuxTag (6th of\nJune - 9th of June) in Karlsruhe, Germany. In particular I want to\naddress:\n\n- The functionality of PostgreSQL\n- Its stability and capability of handling large databases, ideally by\n some case studies.\n- Comparison to other products, be it proprietry like Oracle or more\n open like MySQL.\n- Plans for the future developmen.\n\nI wonder if we have some sort of comparison list already that I could\nuse. Also I can get most plans from the TODO list, but there is not\ninfo on for which version the features are planned. Is there some kind\nof roadmap available?\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Fri, 19 Apr 2002 15:02:01 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "My talk at Linuxtag" }, { "msg_contents": "Michael Meskes wrote:\n> As some may know I will be talking about PostgreSQL on LinuxTag (6th of\n> June - 9th of June) in Karlsruhe, Germany. In particular I want to\n> address:\n> \n> - The functionality of PostgreSQL\n> - Its stability and capability of handling large databases, ideally by\n> some case studies.\n> - Comparison to other products, be it proprietry like Oracle or more\n> open like MySQL.\n> - Plans for the future developmen.\n> \n> I wonder if we have some sort of comparison list already that I could\n> use. Also I can get most plans from the TODO list, but there is not\n> info on for which version the features are planned. Is there some kind\n> of roadmap available?\n\nSorry, no roadmap. We have to rely on volunteers. Schemas will be in\n7.3, but I don't know of other major stuff.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Apr 2002 13:46:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: My talk at Linuxtag" } ]
[ { "msg_contents": "I've been poking at the scanner a bit using the large literal test case\nfrom the other day\n\nhttp://archives.postgresql.org/pgsql-hackers/2002-04/msg00811.php\n\nI've been able to reduce the wall-clock run time of that test from 3:37\nmin to 2:13 min and the base_yylex() per-call time from 137ms to 24ms.\n\nI've used 'flex -8 -CFa' and restructured the code to avoid looping over\nand copying the input string half a dozen times. For instance, instead of\nscanstr(), the escape sequences are resolved as the input is scanned, and\ninstead of the myinput() routine I use the function yy_scan_buffer()\nprovided by flex for scanning in-memory strings. (This would make the\ncode flex-dependent, but in reality it already is anyway.)\n\nThe \"before\" profile was:\n\n % cumulative self self total\n time seconds seconds calls ms/call ms/call name\n 23.51 6.65 6.65 110 60.45 137.27 base_yylex\n 23.19 13.21 6.56 11 596.36 1089.69 pq_getstring\n 19.16 18.63 5.42 74882482 0.00 0.00 pq_getbyte\n 14.99 22.87 4.24 11 385.45 385.46 scanstr\n 9.61 25.59 2.72 23 118.26 118.26 yy_get_previous_state\n 3.78 26.66 1.07 34 31.47 31.47 myinput\n 3.64 27.69 1.03 22 46.82 46.82 textin\n 1.48 28.11 0.42 34 12.35 43.82 yy_get_next_buffer\n\nThe \"after\" profile is:\n\n % cumulative self self total\n time seconds seconds calls ms/call ms/call name\n 40.30 5.65 5.65 11 513.64 943.64 pq_getstring\n 33.74 10.38 4.73 74882482 0.00 0.00 pq_getbyte\n 18.90 13.03 2.65 110 24.09 24.09 base_yylex\n 6.85 13.99 0.96 22 43.64 43.64 textin\n 0.07 14.00 0.01 86 0.12 0.12 heap_fetch\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Fri, 19 Apr 2002 13:02:16 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Improved scanner performance" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I've used 'flex -8 -CFa' and restructured the code to avoid looping over\n> and copying the input string half a dozen times. 
For instance, instead of\n> scanstr(), the escape sequences are resolved as the input is scanned, and\n> instead of the myinput() routine I use the function yy_scan_buffer()\n> provided by flex for scanning in-memory strings. (This would make the\n> code flex-dependent, but in reality it already is anyway.)\n\nYes, we've been requiring flex-only features for years, so that aspect\nof it doesn't bother me. Any downsides to the changes? (For instance,\nI had the idea that -CF would enlarge the lexer tables quite a bit ---\nwhat's the change in executable size?)\n\n> The \"after\" profile is:\n\n> % cumulative self self total\n> time seconds seconds calls ms/call ms/call name\n> 40.30 5.65 5.65 11 513.64 943.64 pq_getstring\n> 33.74 10.38 4.73 74882482 0.00 0.00 pq_getbyte\n> 18.90 13.03 2.65 110 24.09 24.09 base_yylex\n> 6.85 13.99 0.96 22 43.64 43.64 textin\n> 0.07 14.00 0.01 86 0.12 0.12 heap_fetch\n\nLooks like inlining pq_getbyte into pq_getstring would be worth doing\ntoo.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Apr 2002 13:35:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Improved scanner performance " }, { "msg_contents": "Tom Lane writes:\n\n> I had the idea that -CF would enlarge the lexer tables quite a bit ---\n> what's the change in executable size?)\n\n+150 kB\n\nI've also looked at -CFe, which is supposedly the next slowest level, but\nit doesn't do nearly as well.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sat, 20 Apr 2002 00:26:51 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Improved scanner performance " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> I had the idea that -CF would enlarge the lexer tables quite a bit ---\n>> what's the change in executable size?)\n\n> +150 kB\n\n> I've also looked at -CFe, which is supposedly the next slowest level, but\n> it doesn't do 
nearly as well.\n\nOuch; that sounds like about a ten percent increase in the size of\nthe backend executable. That's enough to reach my threshold of pain;\nis the long-literal issue worth that much?\n\nHow much of your reported improvement is due to -CFa, and how much to\nthe coding improvements you made?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Apr 2002 01:03:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Improved scanner performance " }, { "msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Tom Lane writes:\n> >> I had the idea that -CF would enlarge the lexer tables quite a bit ---\n> >> what's the change in executable size?)\n>\n> > +150 kB\n>\n> > I've also looked at -CFe, which is supposedly the next slowest level, but\n> > it doesn't do nearly as well.\n>\n> Ouch; that sounds like about a ten percent increase in the size of\n> the backend executable. That's enough to reach my threshold of pain;\n> is the long-literal issue worth that much?\n\nHere's a breakdown of the postmaster file sizes and the wall-clock run\ntime of the long-literal test:\n\nno options\t1749912\t\t1m58.688s\n-CFe\t\t1754315\t\t1m49.223s\n-CF\t\t1817621\t\t1m43.780s\n-CFa\t\t1890197\t\t1m45.600s\n\n(These numbers are different than yesterday's because they don't have\nprofiling and debugging overhead.)\n\nSeeing this, I think -CF should be OK space and time-wise.\n\n> How much of your reported improvement is due to -CFa, and how much to\n> the coding improvements you made?\n\nAs I recall it, probably a third of the overall improvement came from\nusing -CF[a].\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sat, 20 Apr 2002 13:20:31 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Improved scanner performance " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Here's a breakdown of the postmaster file sizes
and the wall-clock run\n> time of the long-literal test:\n\n> no options\t1749912\t\t1m58.688s\n> -CFe\t\t1754315\t\t1m49.223s\n> -CF\t\t1817621\t\t1m43.780s\n> -CFa\t\t1890197\t\t1m45.600s\n\n> Seeing this, I think -CF should be OK space and time-wise.\n\nLooks like a reasonable compromise to me too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Apr 2002 13:25:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Improved scanner performance " }, { "msg_contents": "BTW, here is what I get on an HP box for the same test you described\n(a dozen trivial SELECTs using string literals between 5MB and 10MB\nlong), using latest sources:\n\n % cumulative self self total \n time seconds seconds calls ms/call ms/call name \n 47.51 8.19 8.19 chunks\n 26.16 12.70 4.51 129 34.96 35.97 base_yylex\n 12.30 14.82 2.12 1521 1.39 1.39 strlen\n 6.79 15.99 1.17 13 90.00 90.00 pq_getstring\n 4.18 16.71 0.72 chunk2\n 2.55 17.15 0.44 _recv_sys\n 0.29 17.20 0.05 _mcount\n\n\"chunks\" is the inner loop of memcpy() --- evidently all the time is\nbeing spent just copying those big literals around.\n\nWe could probably avoid some of that copying if we had a cleaner\napproach to parsetree handling, ie, no scribbling on one's input.\nThen operations like eval_const_expressions wouldn't feel compelled\nto copy parsetree nodes that they weren't modifying. But I think\nwe've gotten all the low-hanging fruit for now.\n\nAt least on the backend side. Did you notice that psql was chewing\nup three times more CPU than the backend in this test??\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Apr 2002 22:05:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Improved scanner performance " } ]
[ { "msg_contents": "COMMENT ON DATABASE db IS 'Comment';\n\nNow switch databases. Comment is gone.\n\nOf course, adding the comments to template1 will carry comments\nforward (in pg_description) to future DBs. Not fatal, but quite\nannoying.\n\nI suppose in order to add a comment field to pg_database it would need\nto be toasted or something (ton of work). Any other way to fix this?\n--\nRod\n\n", "msg_date": "Fri, 19 Apr 2002 14:01:16 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Really annoying comments..." }, { "msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> COMMENT ON DATABASE db IS 'Comment';\n> Now switch databases. Comment is gone.\n\nYeah, it's not very helpful. I'm not sure why we bothered to implement\nthat in the first place.\n\n> I suppose in order to add a comment field to pg_database it would need\n> to be toasted or something (ton of work). Any other way to fix this?\n\nI'm more inclined to rip it out ;-). I don't think it's worth the\ntrouble. Keeping database comments someplace else than pg_description\nis certainly *not* the way to go --- everything that reads them would\nhave to be tweaked too.\n\nWhat does need to be added at the moment is COMMENT ON SCHEMA (a/k/a\nnamespace).\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Apr 2002 14:53:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Really annoying comments... " } ]