[ { "msg_contents": "People have complained that we store passwords unencrypted in pg_shadow.\nLong ago we agreed to a solution and I am going to try to implement that\nnext.\n\nWhat we do now with crypt authentication is that the postmaster reads the\nplain-text password out of pg_shadow and encrypts it with a random salt.\nThat random salt is sent to the client, and the client encrypts with the\nsupplied salt and sends it to the postmaster. If they match, the client\nis authenticated.\n\nThe solution for encrypting passwords stored in pg_shadow was to encrypt\nthem when they are stored in pg_shadow. When a client wants to connect,\nthe pre-encrypted password is encrypted again with a random salt. The\npg_shadow salt and random salt are sent to the client where the client\nperforms two encryptions --- one with the pg_shadow salt and one with the\nrandom salt, and sends them back to the postmaster.\n\nIt should be pretty easy to do because the encryption code is already\nthere. \n\nThe problem is for older clients. Do I need to create a new encryption\ntype for this double-encryption? Seems we do.\n\nThe bigger problem is how passwords encrypted in pg_shadow can be used\nto perform the old 'crypt' authentication. We could send the pg_shadow\nsalt to the client each time, but that leaves snoopers able to replay\nthe dialog to gain authentication because the salt isn't random anymore.\n\nMigrating old sites to encrypted pg_shadow passwords should be easy if a\ntrigger on pg_shadow will look for unencrypted INSERTs and encrypt them.\n\nThis is unrelated to moving to MD5 encryption, which is another item on\nour list.\n\nComments? Seems like lots of old crypt-using client binaries will break\nbecause as soon as someone is encrypted in pg_shadow, we can't use\ncrypt.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup.
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 14 Jun 2001 20:16:50 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Encrypting pg_shadow passwords" }, { "msg_contents": "> The solution for encrypting passwords stored in pg_shadow was to encrypt\n> them when they are stored in pg_shadow. When a client wants to connect,\n> the pre-encrypted password is encrypted again with a random salt. The\n> pg_shadow salt and random salt are sent to the client where the client\n> performs two encryptions --- one with the pg_shadow salt and one with the\n> random salt, and sends them back to the postmaster.\n\nOnce we encrypt in pg_shadow we will be able to use secondary passwords\nwith 'crypt' or whatever we call the new authentication protocol. Prior\nto this we couldn't because secondary password files contain encrypted\npasswords.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 14 Jun 2001 20:47:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> The problem is for older clients. Do I need to create a new encryption\n> type for this double-encryption? Seems we do.\n\nHmm ... AFAIR that old discussion, backwards compatibility was not\nthought about at all :-(\n\n> The bigger problem is how passwords encrypted in pg_shadow can be used\n> to perform the old 'crypt' authentication.
We could sent the pg_shadow\n> salt to the client each time, but that leaves snoopers able to replay\n> the dialog to gain authentication because the salt isn't random anymore.\n\nClearly not a good idea.\n\n> Migrating old sites to encrypted pg_shadow passwords should be easy if a\n> trigger on pg_shadow will look for unencrypted INSERTs and encrypt them.\n\nIf encrypting pg_shadow will break the old-style crypt method, then I\nthink forcing a conversion via a trigger is unacceptable. It will have\nto be a DBA choice (at configure time, or possibly initdb?) whether to\nuse encryption or not in pg_shadow; accordingly, either crypt or \"new\ncrypt\" auth method will be supported by the server, not both. But\nclient libraries could be built to support both auth methods.\n\n> This is unrelated to moving to MD5 encryption, which is another item on\n> our list.\n\nIt may be unrelated in theory, but in practice we should do both at\nthe same time to minimize the number of client-library incompatibility\nissues that arise. I'd suggest that old-style crypt auth continue to\nuse the crypt() call forever, while the new-style should be based on\nMD5 not crypt() from the get-go.\n\nIn a release or three we could discontinue support for old-style crypt,\nbut I think we must allow a transition period for people to update their\nclients.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Jun 2001 10:03:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The problem is for older clients. Do I need to create a new encryption\n> > type for this double-encryption? Seems we do.\n> \n> Hmm ... AFAIR that old discussion, backwards compatibility was not\n> thought about at all :-(\n\nYes, clearly.\n\n> > The bigger problem is how usernames encrypted in pg_shadow can be used\n> > to perform the old 'crypt' authentication. 
We could sent the pg_shadow\n> > salt to the client each time, but that leaves snoopers able to replay\n> > the dialog to gain authentication because the salt isn't random anymore.\n> \n> Clearly not a good idea.\n\nYep, big problem because they think they are safe but they are not. \nBetter to just reject it.\n\n> > Migrating old sites to encrypted pg_shadow passwords should be easy if a\n> > trigger on pg_shadow will look for unencrypted INSERTs and encrypt them.\n> \n> If encrypting pg_shadow will break the old-style crypt method, then I\n> think forcing a conversion via a trigger is unacceptable. It will have\n> to be a DBA choice (at configure time, or possibly initdb?) whether to\n> use encryption or not in pg_shadow; accordingly, either crypt or \"new\n> crypt\" auth method will be supported by the server, not both. But\n> client libraries could be built to support both auth methods.\n\nI hate to add initdb options because it may be confusing. I wonder if\nwe should have a script that encrypts the pg_shadow entries that can be\nrun when the administrator knows that there are no old clients left\naround. That way it can be run _after_ initdb.\n\n\n> > This is unrelated to moving to MD5 encryption, which is another item on\n> > our list.\n> \n> It may be unrelated in theory, but in practice we should do both at\n> the same time to minimize the number of client-library incompatibility\n> issues that arise. 
I'd suggest that old-style crypt auth continue to\n> use the crypt() call forever, while the new-style should be based on\n> MD5 not crypt() from the get-go.\n> \n> In a release or three we could discontinue support for old-style crypt,\n> but I think we must allow a transition period for people to update their\n> clients.\n\nYes, MD5 is something that probably should be done at the same time to\nminimize disruption.\n\nAnother idea is to ship 7.2 with double-salt available to clients but\nnot enabled in pg_shadow then enable it in 7.3.\n\nI think the script idea may be best but it will have to be saved\nsomewhere so once you run it all future password changes are encrypted\nin pg_shadow.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 15 Jun 2001 10:11:53 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I think the script idea may be best but it will have to be saved\n> somewhere so once you run it all future password changes are encrypted\n> in pg_shadow.\n\nMore to the point, how does the postmaster know that it's now dealing\nwith encrypted passwords and must use the double-salt auth method?\nSeems to me that this is not a simple matter of changing the data in one\ncolumn of pg_shadow.\n\nThe thing I like about a configure option is that when it's in place you\nknow it's in place. No question of whether some rows of pg_shadow\nmanaged to escape being updated, or any silliness like that. 
Your point\nabout \"they think they are safe but they are not\" seems relevant here.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Jun 2001 10:34:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords " }, { "msg_contents": "> > This is unrelated to moving to MD5 encryption, which is another item on\n> > our list.\n>\n> It may be unrelated in theory, but in practice we should do both at\n> the same time to minimize the number of client-library incompatibility\n> issues that arise. I'd suggest that old-style crypt auth continue to\n> use the crypt() call forever, while the new-style should be based on\n> MD5 not crypt() from the get-go.\n>\n\nI don't know if this was discussed earlier, but for client authentication I\nthink SHA-1 is preferred over MD5 because of weaknesses discovered in MD5.\nAlso an HMAC using SHA-1 is preferred for authentication over just simply\nusing a hash of the password. Ideally the server sends a random session\ntoken to the client, the client creates an HMAC (using SHA-1) from the token\nand their password locally, and sends that back to the server. The server\nthen calculates the HMAC itself using the client's (server stored) password\nand the session token. If they match, the client is authenticated. If we\nwant to store a SHA-1 hash instead of the plain-text password, just do SHA-1\nover the plain-text password on the client side before using it as the HMAC\nkey.\n\nThe downside of storing the password as a SHA-1 hash versus symmetrically\nencrypting (or leaving as plain-text) it is that there's no way to recover\nthe password from the hash if the client forgets it. 
Of course, the\nsuperuser can just reset it to a known value.\n\n-- Joe\n\n\n", "msg_date": "Fri, 15 Jun 2001 07:57:37 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords " }, { "msg_contents": "On Fri, 15 Jun 2001, Bruce Momjian wrote:\n\n> > > Migrating old sites to encrypted pg_shadow passwords should be easy if a\n> > > trigger on pg_shadow will look for unencrypted INSERTs and encrypt them.\n> >\n> > If encrypting pg_shadow will break the old-style crypt method, then I\n> > think forcing a conversion via a trigger is unacceptable. It will have\n> > to be a DBA choice (at configure time, or possibly initdb?) whether to\n> > use encryption or not in pg_shadow; accordingly, either crypt or \"new\n> > crypt\" auth method will be supported by the server, not both. But\n> > client libraries could be built to support both auth methods.\n>\n> I hate to add initdb options because it may be confusing. I wonder if\n> we should have a script that encrypts the pg_shadow entries that can be\n> run when the administrator knows that there are no old clients left\n> around. That way it can be run _after_ initdb.\n\nWhich clients actually read pg_shadow? 
I always thought that only the\npostmaster read it.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 15 Jun 2001 11:00:17 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "On Fri, 15 Jun 2001, Tom Lane wrote:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I think the script idea may be best but it will have to be saved\n> > somewhere so once you run it all future password changes are encrypted\n> > in pg_shadow.\n>\n> More to the point, how does the postmaster know that it's now dealing\n> with encrypted passwords and must use the double-salt auth method?\n> Seems to me that this is not a simple matter of changing the data in one\n> column of pg_shadow.\n\nThe first three characters are md5 in the code I sent Bruce. 
The\nauth stuff starts looking at the 4th character of the password column\nin pg_shadow.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 15 Jun 2001 11:28:15 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords " }, { "msg_contents": "Bruce Momjian writes:\n\n> People have complained that we store passwords unencrypted in pg_shadow.\n> Long ago we agreed to a solution and I am going to try to implement that\n> next.\n\nWhatever you do, please wait till I've finished the \"authenticate after\nfork\" change. (this weekend?)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 15 Jun 2001 18:27:02 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "On Fri, 15 Jun 2001, Peter Eisentraut wrote:\n\n> Bruce Momjian writes:\n> \n> > People have complained that we store passwords unencrypted in pg_shadow.\n> > Long ago we agreed to a solution and I am going to try to implement that\n> > next.\n> \n> Whatever you do, please wait till I've finished the \"authenticate after\n> fork\" change. (this weekend?)\n\nIf you are going to do this this weekend, should I just wait with the PAM\npatch until then? (Patch against the new code)\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! 
Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n", "msg_date": "Fri, 15 Jun 2001 12:07:56 -0500 (CDT)", "msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "Can someone point me in the direction of any documentation related to\nclient/backend protocol. If it's use the source, that's ok too.\n\nDave\n\n", "msg_date": "Fri, 15 Jun 2001 13:16:10 -0400", "msg_from": "\"Dave Cramer\" <dave@fastcrypt.com>", "msg_from_op": false, "msg_subject": "Protocol Documentation" }, { "msg_contents": "Dave Cramer writes:\n\n> Can someone point me in the direction of any documentation related to\n> client/backend protocol. If it's use the source, that's ok too.\n\nhttp://www.ca.postgresql.org/devel-corner/docs/postgres/protocol.html\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 15 Jun 2001 19:56:46 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Protocol Documentation" }, { "msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n>> More to the point, how does the postmaster know that it's now dealing\n>> with encrypted passwords and must use the double-salt auth method?\n\n> The first three characters are md5 in the code I sent Bruce.\n\nUh ... 
so if I use a password that starts with \"md5\", it breaks?\n\nSeems like adding an additional column to pg_shadow would be a better\ntechnique.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Jun 2001 14:02:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords " }, { "msg_contents": "On Fri, 15 Jun 2001, Tom Lane wrote:\n\n> Vince Vielhaber <vev@michvhf.com> writes:\n> >> More to the point, how does the postmaster know that it's now dealing\n> >> with encrypted passwords and must use the double-salt auth method?\n>\n> > The first three characters are md5 in the code I sent Bruce.\n>\n> Uh ... so if I use a password that starts with \"md5\", it breaks?\n\nyup.\n\n> Seems like adding an additional column to pg_shadow would be a better\n> technique.\n\nI agree but that was frowned upon for reasons I don't recall now.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 15 Jun 2001 14:29:09 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords " }, { "msg_contents": "On Fri, 15 Jun 2001, Vince Vielhaber wrote:\n\n> On Fri, 15 Jun 2001, Tom Lane wrote:\n>\n> > Vince Vielhaber <vev@michvhf.com> writes:\n> > >> More to the point, how does the postmaster know that it's now dealing\n> > >> with encrypted passwords and must use the double-salt auth method?\n> >\n> > > The first three characters are md5 in the code I sent Bruce.\n> >\n> > Uh ... 
so if I use a password that starts with \"md5\", it breaks?\n>\n> yup.\n\nActually let me elaborate. If you use a password that starts with\nmd5 AND is 35 characters in length.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 15 Jun 2001 14:31:04 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Whatever you do, please wait till I've finished the \"authenticate after\n> fork\" change. (this weekend?)\n\nOh, are you doing that? I thought you weren't convinced it was a good\nidea ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Jun 2001 14:33:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords " }, { "msg_contents": "> > In a release or three we could discontinue support for old-style crypt,\n> > but I think we must allow a transition period for people to update their\n> > clients.\n> \n> Yes, MD5 is something that probably should be done at the same time to\n> minimize disruption.\n\nwhile we are on the subject of auth and password and crypt, i noticed some\ntime ago, that there was an inconsistency in the way the auth passwd/crypt\nstuff worked.\n\nwe have:\n\n host dbname x.x.x.x x.x.x.x password somefile\n\nthis method takes a clear-text password from the client, encrypts it\nand compares it against the matching second field of \"somefile\"\nie. 
somefile is a traditional /etc/passwd style file\n\ni like to think of this as \"telnet\" authentication, user/passwd in clear text.\nserver stores pre-encrypted passwords.\n\nand i use it for access from my php scripts, thus avoiding the necessity of\ngiving \"webuser\" access to my tables, and setting up some kinda secondary\nauthentication table.\n\nthe docs in pg_hba.conf lead you to believe that if you leave off \"somefile\",\nthen it does the same thing, but compares against pg_shadow.\n\nhowever, and i don't know that this was intentional, but if you leave\n\"somefile\" off, it compares the plain-text user password against the raw\nvalue of the pg_shadow passwd field.\n\ni wanted a behaviour as above, encrypt the clear text, and compare against\nthe stored pre-encrypted password in pg_shadow.\n\ngiven that there are many installations which may be using things as they\nare, i have created a set of patches which do:\n\n host dbname x.x.x.x x.x.x.x password pg_shadow\n\n(pg_shadow is a \"reserved word\", similar to the way \"sameuser\" is used with\nident)\n\nthis method takes a clear-text password from the client, encrypts it\nand compares it against the password stored in pg_shadow.\n\nthis method should not conflict with anyone, except those who actually\nwant to use a /usr/local/pgsql/data/pg_shadow as their passwd file.\n(i seem to recall previous versions actually stored the user data in\nthat specific file, although it was not in /etc/passwd format)\n\nin my opinion, this method allows you to have pgusers which are wholly\nseparate from /etc/passwd users. and allows you to manage them entirely\nwithin postgresql.\n\nyou can have a front end which does a\nCREATE USER joe WITH PASSWORD 'crypto-gunge';\n\n[ patch attached ]\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ Now with more and longer words for your reading enjoyment.
]\n", "msg_date": "Fri, 15 Jun 2001 17:42:22 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "Tom Lane writes:\n\n> > Whatever you do, please wait till I've finished the \"authenticate after\n> > fork\" change. (this weekend?)\n>\n> Oh, are you doing that? I thought you weren't convinced it was a good\n> idea ...\n\nI do think it's a win, but the point about the connection limits just\nneeded to be addressed, which it has. The code changes turned out to be\neasier than I expected them to be, so I might as well finish it now.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 15 Jun 2001 23:47:21 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords " }, { "msg_contents": "[ oops, the patch is actually attached this time ]\n\n\n> > In a release or three we could discontinue support for old-style crypt,\n> > but I think we must allow a transition period for people to update their\n> > clients.\n> \n> Yes, MD5 is something that probably should be done at the same time to\n> minimize disruption.\n\nwhile we are on the subject of auth and password and crypt, i noticed some\ntime ago, that there was an inconsistency in the way the auth passwd/crypt\nstuff worked.\n\nwe have:\n\n host dbname x.x.x.x x.x.x.x password somefile\n\nthis method takes a clear-text password from the client, encrypts it\nand compares it against the matching second field of \"somefile\"\nie. 
somefile is a traditional /etc/passwd style file\n\ni like to think of this as \"telnet\" authentication, user/passwd in clear text.\nserver stores pre-encrypted passwords.\n\nand i use it for access from my php scripts, thus avoiding the necessity of\ngiving \"webuser\" access to my tables, and setting up some kinda secondary\nauthentication table.\n\nthe docs in pg_hba.conf lead you to believe that if you leave off \"somefile\",\nthen it does the same thing, but compares against pg_shadow.\n\nhowever, and i don't know that this was intentional, but if you leave\n\"somefile\" off, it compares the plain-text user password against the raw\nvalue of the pg_shadow passwd field.\n\ni wanted a behaviour as above, encrypt the clear text, and compare against\nthe stored pre-encrypted password in pg_shadow.\n\ngiven that there are many installations which may be using things as they\nare, i have created a set of patches which do:\n\n host dbname x.x.x.x x.x.x.x password pg_shadow\n\n(pg_shadow is a \"reserved word, similar to the way \"sameuser\" is used with\nident)\n\nthis method takes a clear-text password from the client, encrypts it\nand compares it against the password stored in pg_shadow.\n\nthis method should not conflict with anyone, except those who actually\nwant to use a /usr/local/pgsql/data/pg_shadow as their passwd file.\n(i seem to recall previous versions actually stored the user data in\nthat specific file, although it was not in /etc/passwd format)\n\nin my opinion, this method allows you to have pgusers which are wholy\nseperate from /etc/passwd users. and allows you to manage them entirely\nwithin postgresql.\n\nyou can have a front end which does a\nCREATE USER joe WITH PASSWORD 'crypto-gunge';\n\n[ patch attached ]\n\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ Now with more and longer words for your reading enjoyment. 
]", "msg_date": "Fri, 15 Jun 2001 17:58:57 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": false, "msg_subject": "oops Re: Encrypting pg_shadow passwords" }, { "msg_contents": "On Fri, Jun 15, 2001 at 07:57:37AM -0700, Joe Conway wrote:\n> > > This is unrelated to moving to MD5 encryption, which is another item on\n> > > our list.\n> >\n> > It may be unrelated in theory, but in practice we should do both at\n> > the same time to minimize the number of client-library incompatibility\n> > issues that arise. I'd suggest that old-style crypt auth continue to\n> > use the crypt() call forever, while the new-style should be based on\n> > MD5 not crypt() from the get-go.\n> >\n> \n> I don't know if this was discussed earlier, but for client\n> authentication I think SHA-1 is preferred over MD5 because of\n> weaknesses discovered in MD5. Also an HMAC using SHA-1 is preferred\n> for authentication over just simply using a hash of the password.\n>\n> Ideally the server sends a random session token to the client, the\n> client creates an HMAC (using SHA-1) from the token and their password\n> locally, and sends that back to the server. The server then calculates\n> the HMAC itself using the client's (server stored) password and the\n> session token. If they match, the client is authenticated. If we want\n> to store a SHA-1 hash instead of the plain-text password, just do\n> SHA-1 over the plain-text password on the client side before using it\n> as the HMAC key.\n\nWell said. 
(I don't recall what HMAC stands for; maybe others don't also.)\n\nA diagram of the above, if you'll forgive the ASCII art, is:\n\n password random challenge (from server)\n | |\n (SHA) |\n | |\n PW hash (stored in server) |\n | |\n +--------------> + <------------+\n | \n\t (SHA)\n\t |\n\t (compare)\n\n> The downside of storing the password as a SHA-1 hash versus symmetrically\n> encrypting (or leaving as plain-text) it is that there's no way to recover\n> the password from the hash if the client forgets it. Of course, the\n> superuser can just reset it to a known value.\n\nI see no value in supporting password recovery, and good reasons not to \nsupport it. Using MD5 or SHA implies we should talk about \"password \nhashing\" rather than \"password encryption\".\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Fri, 15 Jun 2001 16:16:45 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "> Well said. (I don't recall what HMAC stands for; maybe others don't\nalso.)\n>\n\nHMAC stands for Hash based Message Authentication Code. Here's a link to the\npaper which originally introduced it, for those who are interested:\n\nhttp://www-cse.ucsd.edu/users/mihir/papers/kmd5.pdf\n\n-- Joe\n\n", "msg_date": "Fri, 15 Jun 2001 16:44:23 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "At 02:02 PM 6/15/01 -0400, Tom Lane wrote:\n>Vince Vielhaber <vev@michvhf.com> writes:\n>>> More to the point, how does the postmaster know that it's now dealing\n>>> with encrypted passwords and must use the double-salt auth method?\n>\n>> The first three characters are md5 in the code I sent Bruce.\n>\n>Uh ... so if I use a password that starts with \"md5\", it breaks?\n>\n>Seems like adding an additional column to pg_shadow would be a better\n>technique.\n\nI agree. 
It helps with migration and other things.\n\nFor my apps I have: hashed_password, hashtype, salt. I even had MSG at one\npoint ;) - MSG=Multiple Salt Grinds (the number of times you do the\nhashing), but my fellow developers didn't want that.\n\nSo if the hash type is NONE and the salt is '', the hashed password is\nactually in plaintext. This is very useful when migrating users or creating\nusers manually, then when the users next change their password (like NEVER\n;) ) it will be using the default hash method. So say you start with\nMD5BASE64 you can switch to MD5HEX or SHA1HEX later.\n\nBTW, the passing of one time passwords, and subsequently communicating in\nplaintext is a bit too '80s for me. Sure the performance is better, but I\nthink it should be deprecated. If you need to use encryption then having\n_everything_ encrypted is a better idea - SSL etc. Those >1GHz CPUs are\nhandy ;).\n\nCheerio,\nLink.\n\n", "msg_date": "Sat, 16 Jun 2001 11:20:30 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords " }, { "msg_contents": "On Sat, 16 Jun 2001, Lincoln Yeoh wrote:\n\n> At 02:02 PM 6/15/01 -0400, Tom Lane wrote:\n> >Vince Vielhaber <vev@michvhf.com> writes:\n> >>> More to the point, how does the postmaster know that it's now dealing\n> >>> with encrypted passwords and must use the double-salt auth method?\n> >\n> >> The first three characters are md5 in the code I sent Bruce.\n> >\n> >Uh ... so if I use a password that starts with \"md5\", it breaks?\n> >\n> >Seems like adding an additional column to pg_shadow would be a better\n> >technique.\n> \n> I agree. It helps with migration and other things.\n> \n> For my apps I have: hashed_password, hashtype, salt. 
I even had MSG at one\n> point ;) - MSG=Multiple Salt Grinds (the number of times you do the\n> hashing), but my fellow developers didn't want that.\n\nActually, there's more or less bsdish convention at this (implementation\nof multiple ways of hashing a password). It may be a good idea to do\npostgres' implementation to conform to this convention. (Net/Free/OpenBSD\nall store password in passwd file according to this convention).\n\n\nConvention is, crypted password looks like $algorithm_id$salt$hash\nWhere algorithm_id is a small system-dependent number, convention is that\nalgorithm_id 1 is MD5 and is supported everywhere. \n\nBenefit of adhering to this is that, assuming postgresql doesn't invent\nits own algorithms for hashing, you can exchange password data between\nsystem files and postgresql shadow table, and also, you don't need to\nmassage data coming from conformant bsd-ish crypt implementations.\n\n-alex\n\n", "msg_date": "Fri, 15 Jun 2001 23:44:41 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: Re: Encrypting pg_shadow passwords " }, { "msg_contents": "On Sat, Jun 16, 2001 at 11:20:30AM +0800, Lincoln Yeoh wrote:\n> If you need to use encryption then having _everything_ encrypted is a\n> better idea - SSL etc. 
Those >1GHz CPUs are handy ;).\n\n[ yes, i noted the smiley ]\n\nit is rather unfortunate to see the OSS community buying into the tenets\nthat allowed microsoft to get world domination based on crap quality\nsoftware.\n\n\"hardware is cheap\" is a falsehood.\n\nsome people might be surprised at the number of 486's and Pentium 100's that\nare still in active use.\n\nin some places, that is leading edge technology, mostly due to economic\nrealities.\n\nand in a sense, people who have limited resources can be some of the\nmost satisfied recipients of OSS efforts.\n\nplease, let's not have postgresql turn into an OSS behemoth like Mozilla\nor OpenOffice.\n\ni am not opposed to more new features, or even adding support for leading\nedge hardware.\n\nbut, i think it is important to retain as much backward compatibility as\npossible.\n\nas well, features should be able to be turned off such that they don't\noverly bloat the code.\n\nFreeBSD 4.3 can still be built on a 486. i'm sure if someone was patient\nenough to wait, it could be built on a 386.\n\nthis is because it very rarely drops support for something in its code base,\nand it makes most features (in the kernel) optional, to limit bloat.\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ Now with more and longer words for your reading enjoyment. ]\n", "msg_date": "Sat, 16 Jun 2001 00:04:58 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": false, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "At 12:04 AM 6/16/01 -0400, Jim Mercer wrote:\n>On Sat, Jun 16, 2001 at 11:20:30AM +0800, Lincoln Yeoh wrote:\n>> If you need to use encryption then having _everything_ encrypted is a\n>> better idea - SSL etc. 
Those >1GHz CPUs are handy ;).\n>\n>[ yes, i noted the smiley ]\n>\n>it is rather unfortunate to see the OSS community buying into the tenents\n>that allowed microsoft to get world domination based on crap quality\n>software.\n>\n>\"hardware is cheap\" is a falsehood.\n\nMy point is if you really need encryption, then your data should be\nencrypted too, otherwise it seems a waste of time or more a \"feel good\" thing.\n\nI find it hard to recommend a setup where just the authentication portion\nis encrypted but all the data is left in plaintext for everyone to see. Why\ngo to all that trouble to _fool_ yourself, when you can either do it\nsecurely (encrypt everything), or do it quick (no encryption).\n\nI'd personally put \"only authentication is encrypted\" in the \"crossing a\nchasm in two leaps\" category.\n\nYoda says it better ;).\n\nCheerio,\nLink.\n\n", "msg_date": "Sun, 17 Jun 2001 23:05:52 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "On Sun, Jun 17, 2001 at 11:05:52PM +0800, Lincoln Yeoh wrote:\n> At 12:04 AM 6/16/01 -0400, Jim Mercer wrote:\n> >On Sat, Jun 16, 2001 at 11:20:30AM +0800, Lincoln Yeoh wrote:\n> >> If you need to use encryption then having _everything_ encrypted is a\n> >> better idea - SSL etc. 
Those >1GHz CPUs are handy ;).\n> >\n> >[ yes, i noted the smiley ]\n> >\n> >it is rather unfortunate to see the OSS community buying into the tenents\n> >that allowed microsoft to get world domination based on crap quality\n> >software.\n> >\n> >\"hardware is cheap\" is a falsehood.\n> \n> My point is if you really need encryption, then your data should be\n> encrypted too, otherwise it seems a waste of time or more a \"feel good\" thing.\n\ni would agree with that.\n\ni guess my rant was more in reaction to what i saw as creeping featurism,\nwith words alluding to \"deprecating\" legacy functionality.\n\nmaybe not your words, but that was what set me off on this thread.\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ Now with more and longer words for your reading enjoyment. ]\n", "msg_date": "Sun, 17 Jun 2001 11:28:16 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": false, "msg_subject": "Re: Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n\n> My point is if you really need encryption, then your data should be\n> encrypted too, otherwise it seems a waste of time or more a \"feel\n> good\" thing.\n\nI would disagree. I think there is a level of security where it's not \na catastrophe if someone sniffs and reconstructs your traffic, but\nit's fairly important that such a person not be able to authenticate\nas you. Most of my personal email (and, I assert, most people's)\nfalls into this category. Encrypted challenge/response addresses this \nneed quite well. \n\nNaturally, if you're working at a level where intercepted traffic *is* \ncatastrophic, you should be doing end-to-end encryption and all that\ngood stuff.\n\n-Doug\n-- \nThe rain man gave me two cures; he said jump right in,\nThe first was Texas medicine--the second was just railroad gin,\nAnd like a fool I mixed them, and it strangled up my mind,\nNow people just get uglier, and I got no sense of time... \n
--Dylan\n", "msg_date": "17 Jun 2001 11:46:05 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "Dominic J. Eidson writes:\n\n> On Fri, 15 Jun 2001, Peter Eisentraut wrote:\n>\n> > Bruce Momjian writes:\n> >\n> > > People have complained that we store passwords unencrypted in pg_shadow.\n> > > Long ago we agreed to a solution and I am going to try to implement that\n> > > next.\n> >\n> > Whatever you do, please wait till I've finished the \"authenticate after\n> > fork\" change. (this weekend?)\n>\n> If you are going to do this this weekend, should I just wait with the PAM\n> patch until then? (Patch against the new code)\n\nThis is finished, more or less, so both of you can look at\nbackend/libpq/auth.c, function ClientAuthentication() and hook in whatever\nyou want, blocking however long you want.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 20 Jun 2001 20:33:06 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "At 12:20 AM 26-06-2001 -0400, Bruce Momjian wrote:\n>> \n>> at this point in time, i do not see a method of doing that without my mods\n>> or using external password files.\n>\n>We will do double-crypt and everyone will be happy, right?\n\nWow optimistic :).\n\n>> if the API as above existed, then i would be happy to see \"password\" go\naway\n>> (although it should be depreciated to a --enable option, otherwise you are\n>> going to ruin a bunch of existing code).\n>\n>Who is using it? We can continue to allow it but at some point there is\n>no purpose to it unless you have clients that are pre-7.2. 
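The double-crypt handshake being settled on in this thread can be sketched end to end. The `toy_crypt` mixer below is a stand-in invented for illustration (the real scheme would use crypt() or MD5, which was still being debated), so only the shape of the exchange is meaningful:

```c
#include <stdio.h>

/* Toy stand-in for crypt(): NOT a real algorithm, just a deterministic
 * mix of salt then key so the handshake logic can be demonstrated. */
static unsigned long toy_crypt(const char *key, const char *salt)
{
    unsigned long h = 5381;
    const char *p;

    for (p = salt; *p; p++)
        h = h * 33 + (unsigned char) *p;
    for (p = key; *p; p++)
        h = h * 33 + (unsigned char) *p;
    return h;
}

/* Client side: hash the plaintext password twice, first with the salt
 * stored alongside the pg_shadow entry, then with the per-connection
 * random salt the server sent.  Plaintext never crosses the wire. */
static unsigned long client_response(const char *password,
                                     const char *stored_salt,
                                     const char *random_salt)
{
    char once[32];

    snprintf(once, sizeof(once), "%lu", toy_crypt(password, stored_salt));
    return toy_crypt(once, random_salt);
}

/* Server side: pg_shadow already holds the once-hashed value, so the
 * server only re-hashes what is on disk with the same random salt and
 * compares.  It never needs the plaintext password at all. */
static int server_check(unsigned long stored_hash, const char *random_salt,
                        unsigned long response)
{
    char disk[32];

    snprintf(disk, sizeof(disk), "%lu", stored_hash);
    return toy_crypt(disk, random_salt) == response;
}
```

Because the random salt changes on every connection, a snooped response cannot be replayed, which is exactly what single-salt schemes lose.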
Double-crypt\n>removes the use for it, no?\n\nI'm using \"password\".\n\nIf I feel that the wire isn't safe I'll use SSL.\n\nCheerio,\nLink.\n\n", "msg_date": "Mon, 25 Jun 2001 14:07:53 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "At 12:51 AM 26-06-2001 -0400, Jim Mercer wrote:\n\n>my mods are server-side only.\n>\n>to rewind a bit.\n>\n>my mods correct this by doing:\n>\n>with an AUTH_ARGUMENT == \"pg_shadow\", the process is:\n>tmp_pwd = crypt(client->passwd, pg_shadow->passwd)\n>if strcmp(tmp_pwd, pg_shadow->passwd) == 0\n> access allowed\n>else\n> access not allowed\n>\n>this is not so much an enhancement, but a correction of what i think the\n>original \"password\" authentication scheme was supposed to allow.\n>\n\nYep it's a correction. pg_shadow shouldn't have been in plaintext in the\nfirst place.\n\n host all 127.0.0.1 255.255.255.255 password \nshould have meant check crypted passwords in pg_shadow.\n\nGiven your suggestion, what happens when someone does an ALTER USER ...\nWITH PASSWORD ....? \n\nMight it be too late to do a fix? \n\nHMAC sounds interesting. What would the impact be on stuff like Pg DBD?\n\nCheerio,\nLink.\n\n\n\n\n", "msg_date": "Mon, 25 Jun 2001 14:34:51 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "\nI am replying to the original message because it has all the relevant\ninformation. The problem with 'password' authentication is that the\npassword goes across the wire in plaintext. Now, if we start to ship\nencrypted passwords across the wire, but use the same salt for every\nauthentication request, we are really no more secure than if we sent it\nin the clear.\n\nIf a user specifies 'crypt' in pg_hba.conf, they should be assured that\nthe password is safe in case someone snoops it. 
Encrypting pg_shadow\nand comparing that with the same salt every time is not secure from\nsnooping so we don't allow it.\n\nAm I missing something?\n\n> > > In a release or three we could discontinue support for old-style crypt,\n> > > but I think we must allow a transition period for people to update their\n> > > clients.\n> > \n> > Yes, MD5 is something that probably should be done at the same time to\n> > minimize disruption.\n> \n> while we are on the subject of auth and password and crypt, i noticed some\n> time ago, that there was an inconsistency in the way the auth passwd/crypt\n> stuff worked.\n> \n> we have:\n> \n> host dbname x.x.x.x x.x.x.x password somefile\n> \n> this method takes a clear-text password from the client, encrypts it\n> and compares it against the matching second field of \"somefile\"\n> ie. somefile is a traditional /etc/passwd style file\n> \n> i like to think of this as \"telnet\" authentication, user/passwd in clear text.\n> server stores pre-encrypted passwords.\n> \n> and i use it for access from my php scripts, thus avoiding the necessity of\n> giving \"webuser\" access to my tables, and setting up some kinda secondary\n> authentication table.\n> \n> the docs in pg_hba.conf lead you to believe that if you leave off \"somefile\",\n> then it does the same thing, but compares against pg_shadow.\n> \n> however, and i don't know that this was intentional, but if you leave\n> \"somefile\" off, it compares the plain-text user password against the raw\n> value of the pg_shadow passwd field.\n> \n> i wanted a behaviour as above, encrypt the clear text, and compare against\n> the stored pre-encrypted password in pg_shadow.\n> \n> given that there are many installations which may be using things as they\n> are, i have created a set of patches which do:\n> \n> host dbname x.x.x.x x.x.x.x password pg_shadow\n> \n> (pg_shadow is a \"reserved word, similar to the way \"sameuser\" is used with\n> ident)\n> \n> this method takes a clear-text 
password from the client, encrypts it\n> and compares it against the password stored in pg_shadow.\n> \n> this method should not conflict with anyone, except those who actually\n> want to use a /usr/local/pgsql/data/pg_shadow as their passwd file.\n> (i seem to recall previous versions actually stored the user data in\n> that specific file, although it was not in /etc/passwd format)\n> \n> in my opinion, this method allows you to have pgusers which are wholy\n> seperate from /etc/passwd users. and allows you to manage them entirely\n> within postgresql.\n> \n> you can have a front end which does a\n> CREATE USER joe WITH PASSWORD 'crypto-gunge';\n> \n> [ patch attached ]\n> \n> -- \n> [ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n> [ Now with more and longer words for your reading enjoyment. ]\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 25 Jun 2001 23:27:42 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "On Mon, Jun 25, 2001 at 11:27:42PM -0400, Bruce Momjian wrote:\n> I am replying to the original message because it has all the relevant\n> information. The problem with 'password' authentication is that the\n> password goes across the wire in plaintext. Now, if we start to ship\n> encrypted passwords across the wire, but use the same salt for every\n> authentication request, we are really no more secure than if we sent it\n> in the clear.\n>\n> If a user specifies 'crypt' in pg_hba.conf, they should be assured that\n> the password is safe in case someone snoops it. 
Encrypting pg_shadow\n> and comparing that with the same salt every time is not secure from\n> snooping so we don't allow it.\n> \n> Am I missing something?\n\ni don't disagree that sending plaintext across the wire, if possible, it\nshould be avoided.\n\nhowever, i look at it this way.\n\nmany _existing_ implementations send plaintext across the wire, telnet,\nftp, .htaccess, imap and pop (non-ssl).\n\ni would much rather risk a single plain-text password being snooped on the\nwire, rather than having an entire database of plain-text passwords for\nsomeone to scoop.\n\nmany people re-use passwords for multiple purposes, thus reducing the bio-core\nrequired to keep track of a bazillion passwords.\n\nin my opinion, storing plain-text passwords in any media is just plain wrong,\nand a far greater security risk than having a password sniffed.\n\nin my applications, i have SSL covering the client->app (browser->PHP code),\nso the sniffing would need to be on the wire from the app-server -> database\nserver, which in many cases is the same machine.\n\nmy mods don't alter the operation of the server in any respect.\n\nthey do, however, allow people the choice of using a traditional\ntelnetd/binlogin authentication scheme without resorting to external password\nfiles.\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ Now with more and longer words for your reading enjoyment. ]\n", "msg_date": "Mon, 25 Jun 2001 23:42:27 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "> On Mon, Jun 25, 2001 at 11:27:42PM -0400, Bruce Momjian wrote:\n> > I am replying to the original message because it has all the relevant\n> > information. The problem with 'password' authentication is that the\n> > password goes across the wire in plaintext. 
Now, if we start to ship\n> > encrypted passwords across the wire, but use the same salt for every\n> > authentication request, we are really no more secure than if we sent it\n> > in the clear.\n> >\n> > If a user specifies 'crypt' in pg_hba.conf, they should be assured that\n> > the password is safe in case someone snoops it. Encrypting pg_shadow\n> > and comparing that with the same salt every time is not secure from\n> > snooping so we don't allow it.\n> > \n> > Am I missing something?\n> \n> i don't disagree that sending plaintext across the wire, if possible, it\n> should be avoided.\n> \n> however, i look at it this way.\n> \n> many _existing_ implementations send plaintext across the wire, telnet,\n> ftp, .htaccess, imap and pop (non-ssl).\n> \n> i would much rather risk a single plain-text password being snooped on the\n> wire, rather than having an entire database of plain-text passwords for\n> someone to scoop.\n> \n> many people re-use passwords for multiple purposes, thus reducing the bio-core\n> required to keep track of a bazillion passwords.\n> \n> in my opinion, storing plain-text passwords in any media is just plain wrong,\n> and a far greater security risk than having a password sniffed.\n> \n> in my applications, i have SSL covering the client->app (browser->PHP code),\n> so the sniffing would need to be on the wire from the app-server -> database\n> server, which in many cases is the same machine.\n> \n> my mods don't alter the operation of the server in any respect.\n> \n> they do, however, allow people the choice of using a traditional\n> telnetd/binlogin authentication scheme without resorting to external password\n> files.\n\nOK, I get you now. Why not ask the client to do a crypt and compare\nthat to pg_shadow. 
It is better than what we have now for 'password'\nauthentication because it encrypts pg_shadow.\n\nThe big problem is that you can't do 'crypt' authentication once you\nencrypt pg_shadow, unless we do the double-encription thing, and I think\nit is a bigger win for them to use crypt-authentication than to encrypt\npg_shadow. The wire is clearly less secure than pg_shadow.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 25 Jun 2001 23:48:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "> in my applications, i have SSL covering the client->app (browser->PHP code),\n> so the sniffing would need to be on the wire from the app-server -> database\n> server, which in many cases is the same machine.\n> \n> my mods don't alter the operation of the server in any respect.\n> \n> they do, however, allow people the choice of using a traditional\n> telnetd/binlogin authentication scheme without resorting to external password\n> files.\n\nOne good point you have is what do we do with 'password' authentication\nonce we encrypt pg_shadow. My guess is that we just disallow it. It is\ninsecure and was only there for clients that couldn't do crypt. They\nall have that now. It should just go away. We kept it around for the\nsecondary password file but those secondary password files are the same\nonce pg_shadow is encrypted.\n\nOne item of my plan is that you can encrypt individual users. You don't\nhave to do them all at once in case you have older clients for some\nusers but not others.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 26 Jun 2001 00:00:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "On Mon, Jun 25, 2001 at 11:48:32PM -0400, Bruce Momjian wrote:\n> OK, I get you now. Why not ask the client to do a crypt and compare\n> that to pg_shadow. It is better than what we have now for 'password'\n> authentication because it encrypts pg_shadow.\n> \n> The big problem is that you can't do 'crypt' authentication once you\n> encrypt pg_shadow, unless we do the double-encription thing, and I think\n> it is a bigger win for them to use crypt-authentication than to encrypt\n> pg_shadow.\n\nmy mods do not require encryption of pg_shadow, unless you want to use\nmy \"password pg_shadow\" extension. it is then the responsibility of the\ndbadmin to do \"CREATE USER username WITH PASSWORD '$1$xxxxxx';\n(i have a unix_crypt(text, text) function i can put in contrib, as well\nas samba_lm_crypt(text) and samba_nt_crypt(text) for anyone interested)\n\nthe current code (without my mods) requires the dbadmin to either play\nthe lottery and store all passwords in plain-text, or to manipulate\nexternal password files, which causes all manner of issues with regards\nto updating (changing) the passwords in the external files.\n\n> The wire is clearly less secure than pg_shadow.\n\nah, you've not had a client rooted lately.\n\nthe wire is far more secure than many default OS installations.\n\ni will not argue that the double-encryption stuff, and MD5 type stuff is\nbetter.\n\nhowever, forcing the dbadmin to store plain-text passwords in pg_shadow\nis at best unwise.\n\ngiving them the option of my mods is a reasonable step towards allowing\nthem to avoid that one-stop-shopping facility for crackers, without breaking\nany existing implementations for those who chose to walk what i consider\nan unsafe path.\n\n-- \n[ Jim Mercer 
jim@reptiles.org +1 416 410-5633 ]\n[ Now with more and longer words for your reading enjoyment. ]\n", "msg_date": "Tue, 26 Jun 2001 00:01:03 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "On Tue, Jun 26, 2001 at 12:00:35AM -0400, Bruce Momjian wrote:\n> One good point you have is what do we do with 'password' authentication\n> once we encrypt pg_shadow. My guess is that we just disallow it. It is\n> insecure and was only there for clients that couldn't do crypt. They\n> all have that now. It should just go away. We kept it around for the\n> secondary password file but those secondary password files are the same\n> once pg_shadow is encrypted.\n\ni would be content if the API allowed me to pass it a plain-text password,\nand that was compared against pg_shadow, where the password is stored\nencrypted.\n\nat this point in time, i do not see a method of doing that without my mods\nor using external password files.\n\nif the API as above existed, then i would be happy to see \"password\" go away\n(although it should be depreciated to a --enable option, otherwise you are\ngoing to ruin a bunch of existing code).\n\n> One item of my plan is that you can encrypt individual users. You don't\n> have to do them all at once in case you have older clients for some\n> users but not others.\n\nit would be nice (in my opinion) if you could have multiple (cascade) entries\nin pg_hba.conf.\n\nand a flag in pg_shadow to \"appoint\" a blessed scheme.\n\nie. if a user identd's ok, and the identd flag is set in pg_shadow, then\nit is ok. otherwise, move on to the next pg_hba.conf entry.\n\nthe reasoning for this is that i (and i assume others) have two classes of\naccess. some type of authenticated client/user and scripts.\n\nhardcoding passwords in scripts is just wrong.\n\ni sometimes have \"localhost\" set up on ident, and non-localhost on some\ntype of passord/crypt type thing. 
but i don't want to allow all local users\naccess via ident.\n\ni recognize that some of this can be done with the ident mapping facility,\nbut again, that is an external file, and thus presents management issues.\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ Now with more and longer words for your reading enjoyment. ]\n", "msg_date": "Tue, 26 Jun 2001 00:12:46 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "> > The wire is clearly less secure than pg_shadow.\n> \n> ah, you've not had a client rooted lately.\n\nI think most people would disagree.\n\n> the wire is far more secure than many default OS installations.\n\nMaybe time for a new OS. We run on some pretty secure OS's.\n\n> i will not argue that the double-encryption stuff, and MD5 type stuff is\n> better.\n> \n> however, forcing the dbadmin to store plain-text passwords in pg_shadow\n> is at best unwise.\n> \n> giving them the option of my mods is a reasonable step towards allowing\n> them to avoid that one-stop-shopping facility for crackers, without breaking\n> any existing implementations for those who chose to walk what i consider\n> an unsafe path.\n\nThe big problem is that when we make a change we have to also talk to\nold clients to you would have a pretty complex setup to have 'password'\nencryption passing the same crypt over the wire all the time. If not,\nwhy not use 'crypt' authentication.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 26 Jun 2001 00:17:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "> On Tue, Jun 26, 2001 at 12:00:35AM -0400, Bruce Momjian wrote:\n> > One good point you have is what do we do with 'password' authentication\n> > once we encrypt pg_shadow. My guess is that we just disallow it. It is\n> > insecure and was only there for clients that couldn't do crypt. They\n> > all have that now. It should just go away. We kept it around for the\n> > secondary password file but those secondary password files are the same\n> > once pg_shadow is encrypted.\n> \n> i would be content if the API allowed me to pass it a plain-text password,\n> and that was compared against pg_shadow, where the password is stored\n> encrypted.\n> \n> at this point in time, i do not see a method of doing that without my mods\n> or using external password files.\n\nWe will do double-crypt and everyone will be happy, right?\n\n> \n> if the API as above existed, then i would be happy to see \"password\" go away\n> (although it should be depreciated to a --enable option, otherwise you are\n> going to ruin a bunch of existing code).\n\nWho is using it? We can continue to allow it but at some point there is\nno purpose to it unless you have clients that are pre-7.2. Double-crypt\nremoves the use for it, no?\n\n> \n> > One item of my plan is that you can encrypt individual users. You don't\n> > have to do them all at once in case you have older clients for some\n> > users but not others.\n> \n> it would be nice (in my opinion) if you could have multiple (cascade) entries\n> in pg_hba.conf.\n> \n> and a flag in pg_shadow to \"appoint\" a blessed scheme.\n> \n> ie. if a user identd's ok, and the identd flag is set in pg_shadow, then\n> it is ok. 
otherwise, move on to the next pg_hba.conf entry.\n> \n> the reasoning for this is that i (and i assume others) have two classes of\n> access. some type of authenticated client/user and scripts.\n> \n> hardcoding passwords in scripts is just wrong.\n> \n> i sometimes have \"localhost\" set up on ident, and non-localhost on some\n> type of passord/crypt type thing. but i don't want to allow all local users\n> access via ident.\n> \n> i recognize that some of this can be done with the ident mapping facility,\n> but again, that is an external file, and thus presents management issues.\n\nOur authentication system is already too complex. I would prefer not to\nmake it more so. The more complex, the more mistakes admins make.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 26 Jun 2001 00:20:40 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "[ this message is not meant to be completely denigrating to linux. YMMV ]\n\nOn Tue, Jun 26, 2001 at 12:17:03AM -0400, Bruce Momjian wrote:\n> > > The wire is clearly less secure than pg_shadow.\n> > \n> > ah, you've not had a client rooted lately.\n> \n> I think most people would disagree.\n\ndepends on the crowd. i get to de-crack several linux boxes a month.\n\n> > the wire is far more secure than many default OS installations.\n> \n> Maybe time for a new OS. We run on some pretty secure OS's.\n\ni run a fairly tight ship as well.\n\nhowever, joe blow redhat 6.1 installer who is just following the recipes\nand the RPM's wouldn't know a secure OS from a hole in their head.\n\nand Solaris is just insecure by design, lets not talk about Irix.\n\nthe design should not assume that the dbadmin has a clue. 
in fact, it should\nassume they don't have a clue.\n\ni challenge you to post \"i think storing plain-text passwords on my system\nis ok.\" to NANOG. 8^)\n\n> The big problem is that when we make a change we have to also talk to\n> old clients to you would have a pretty complex setup to have 'password'\n> encryption passing the same crypt over the wire all the time. If not,\n> why not use 'crypt' authentication.\n\ni don't understand the objection to my mods.\n\ncrypt authentication requires plain-text passwords stored in pg_shadow.\n\nmy stand is that this is not a good idea.\n\nmy mods in no way break any existing code, and add another variant on the\nexisting auth schemes.\n\ni think that any evolution of the auth schemes should depreciate the older\nmethods, but that backwards compatibility needs to be maintained, even\nif the code is disabled by default, and needs a --enable to turn it back on.\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ Now with more and longer words for your reading enjoyment. ]\n", "msg_date": "Tue, 26 Jun 2001 00:33:20 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "> > The big problem is that when we make a change we have to also talk to\n> > old clients to you would have a pretty complex setup to have 'password'\n> > encryption passing the same crypt over the wire all the time. 
If not,\n> > why not use 'crypt' authentication.\n> \n> i don't understand the objection to my mods.\n> \n> crypt authentication requires plain-text passwords stored in pg_shadow.\n> \n> my stand is that this is not a good idea.\n> \n> my mods in no way break any existing code, and add another variant on the\n> existing auth schemes.\n> \n> i think that any evolution of the auth schemes should depreciate the older\n> methods, but that backwards compatibility needs to be maintained, even\n> if the code is disabled by default, and needs a --enable to turn it back on.\n\nOK, your mods are going to have to propogate to all clients. Older\nclients can't use this scheme, and once we have double-encryption, what\nadvantage does this have?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 26 Jun 2001 00:36:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "On Tue, Jun 26, 2001 at 12:20:40AM -0400, Bruce Momjian wrote:\n> We will do double-crypt and everyone will be happy, right?\n> \n> > if the API as above existed, then i would be happy to see \"password\" go away\n> > (although it should be depreciated to a --enable option, otherwise you are\n> > going to ruin a bunch of existing code).\n> \n> Who is using it? We can continue to allow it but at some point there is\n> no purpose to it unless you have clients that are pre-7.2. 
Double-crypt\n> removes the use for it, no?\n\nif the API allows a plain text password, and compares against a crypto-pg_shadow\ni would imagine that would be fine.\n\nat this point i should apologize for possibly arguing out of turn.\n\nif 7.2 has the above, that is cool.\n\ni'm sorta hoping my mods can make it into 7.1.3, if there is one.\n\n> > i recognize that some of this can be done with the ident mapping facility,\n> > but again, that is an external file, and thus presents management issues.\n> \n> Our authentication system is already too complex. I would prefer not to\n> make it more so. The more complex, the more mistakes admins make.\n\nunderstood, but you were asking for comments. 8^)\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ Now with more and longer words for your reading enjoyment. ]\n", "msg_date": "Tue, 26 Jun 2001 00:38:24 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "On Tue, Jun 26, 2001 at 12:36:43AM -0400, Bruce Momjian wrote:\n> > > The big problem is that when we make a change we have to also talk to\n> > > old clients to you would have a pretty complex setup to have 'password'\n> > > encryption passing the same crypt over the wire all the time. If not,\n> > > why not use 'crypt' authentication.\n> > \n> > i don't understand the objection to my mods.\n> > \n> > crypt authentication requires plain-text passwords stored in pg_shadow.\n> > \n> > my stand is that this is not a good idea.\n> > \n> > my mods in no way break any existing code, and add another variant on the\n> > existing auth schemes.\n> > \n> > i think that any evolution of the auth schemes should depreciate the older\n> > methods, but that backwards compatibility needs to be maintained, even\n> > if the code is disabled by default, and needs a --enable to turn it back on.\n> \n> OK, your mods are going to have to propogate to all clients. 
Older\n> clients can't use this scheme,\n\nmy mods are server-side only.\n\nto rewind a bit.\n\nthe existing implementation of:\n\nhost dbname ipaddr netmask password\n\nsays:\n\n# password: Authentication is done by matching a password supplied\n# in clear by the host. If AUTH_ARGUMENT is specified then\n# the password is compared with the user's entry in that \n# file (in the $PGDATA directory). These per-host password \n# files can be maintained with the pg_passwd(1) utility.\n# If no AUTH_ARGUMENT appears then the password is compared\n# with the user's entry in the pg_shadow table.\n\nthis description is a tad misleading.\n\nwith an AUTH_ARGUMENT, the process is:\ntmp_pwd = crypt(client->passwd, AUTH_ARGUMENT->passwd)\nif strcmp(tmp_pwd, AUTH_ARGUMENT->passwd) == 0\n access allowed\nelse\n access not allowed\n\nwithout an AUTH_ARGUMENT, the process is:\nif strcmp(client->passwd, pg_shadow->passwd) == 0\n access allowed\nelse\n access not allowed\n\nmy mods correct this by doing:\n\nwith an AUTH_ARGUMENT == \"pg_shadow\", the process is:\ntmp_pwd = crypt(client->passwd, pg_shadow->passwd)\nif strcmp(tmp_pwd, pg_shadow->passwd) == 0\n access allowed\nelse\n access not allowed\n\nthis is not so much an enhancement, but a correction of what i think the\noriginal \"password\" authentication scheme was supposed to allow.\n\n> and once we have double-encryption, what advantage does this have?\n\nonce we have it, cool. as long as the passwords are not stored plain-text.\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ Now with more and longer words for your reading enjoyment. 
]\n", "msg_date": "Tue, 26 Jun 2001 00:51:21 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "> On Tue, Jun 26, 2001 at 12:20:40AM -0400, Bruce Momjian wrote:\n> > We will do double-crypt and everyone will be happy, right?\n> > \n> > > if the API as above existed, then i would be happy to see \"password\" go away\n> > > (although it should be depreciated to a --enable option, otherwise you are\n> > > going to ruin a bunch of existing code).\n> > \n> > Who is using it? We can continue to allow it but at some point there is\n> > no purpose to it unless you have clients that are pre-7.2. Double-crypt\n> > removes the use for it, no?\n> \n> if the API allows a plain text password, and compares agains a cyrtpo-pg_shadow\n> i would imagine that would be fine.\n> \n> at this point i should apologize for possibly arguing out of turn.\n> \n> if 7.2 has the above, that is cool.\n> \n> i'm sorta hoping my mods can make it into 7.1.3, if there is one.\n\nNot a chance. Only major bug fixes in 7.1.X.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 26 Jun 2001 00:56:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "On Mon, Jun 25, 2001 at 02:34:51PM +0800, Lincoln Yeoh wrote:\n> At 12:51 AM 26-06-2001 -0400, Jim Mercer wrote:\n> >this is not so much an enhancement, but a correction of what i think the\n> >original \"password\" authentication scheme was supposed to allow.\n> \n> Yep it's a correction. 
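The comparison Jim describes above (re-hash the client's clear-text password using the salt embedded in the stored entry, then compare against the stored entry) can be sketched as follows. This is a hedged illustration only: it emulates crypt(3)'s salt-embedding behavior with an MD5-based hash so the sketch stays self-contained, and the `salt$digest` storage format and the function names are invented, not PostgreSQL's actual code.

```python
import hashlib
import secrets

def make_entry(password):
    """Build a salted one-way hash for pg_shadow-style storage.
    The 'salt$digest' format is hypothetical; crypt(3) instead embeds
    the salt in the first two characters of its output."""
    salt = secrets.token_hex(4)
    digest = hashlib.md5((salt + password).encode()).hexdigest()
    return salt + "$" + digest

def check_password(client_password, stored_entry):
    """Mirror the pseudocode from the thread:
    tmp_pwd = crypt(client->passwd, pg_shadow->passwd)
    access allowed iff strcmp(tmp_pwd, pg_shadow->passwd) == 0"""
    salt = stored_entry.split("$", 1)[0]
    digest = hashlib.md5((salt + client_password).encode()).hexdigest()
    return salt + "$" + digest == stored_entry
```

With something like this, an ALTER USER path would store `make_entry(password)` rather than the plain text, and "password pg_shadow" authentication keeps working — though, as noted elsewhere in the thread, the clear-text password still crosses the wire.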
pg_shadow shouldn't have been in plaintext in the\n> first place.\n> \n> host all 127.0.0.1 255.255.255.255 password \n> should have meant check crypted passwords in pg_shadow.\n> \n> Given your suggestion, what happens when someone does an ALTER USER ...\n> WITH PASSWORD ....? \n> \n> Might it be too late to do a fix? \n\ni didn't want to change things too much. in the case of ALTER USER, the\ncode would need to encrypt the password beforehand, either inline, or\nusing a pgsql-contrib crypt() function. (i have this if you want it)\n\nthe fix is for the authentication behaviour, not the administrative interface\n(ie. ALTER USER).\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ Now with more and longer words for your reading enjoyment. ]\n", "msg_date": "Tue, 26 Jun 2001 08:56:28 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "Jim and Bruce wrote:\n> [ a lot of stuff ]\n\nWhat this discussion seems to come down to is whether we should take a\nbackward step in one area of security (security against wire-sniffing)\nto take a forward step in another (not storing plaintext passwords).\nIt seems largely a matter of local conditions which hazard you consider\ngreater (though I would note that anyone who is able to examine the\ncontents of pg_shadow has *already* broken into your database). Anyway,\nI doubt anyone will convince anyone else to change sides on that point.\n\nMy take on the matter is that we shouldn't invest any more effort in\ncrypt-based solutions (here crypt means specifically crypt(3), it's\nnot a generic term). The future is double encryption using MD5 ---\nor s/MD5/more-modern-hash-algorithm-of-your-choice/, the exact choice\nis irrelevant to my point. We ought to get off our duffs and implement\nthat, then encourage people to migrate their clients ASAP.
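The double-encryption direction being discussed — store only a one-way hash in pg_shadow, then have the client hash that stored value again with a per-connection random salt — can be illustrated roughly as below. This is an assumption-laden sketch (MD5 via hashlib, invented function names), not the wire protocol as actually implemented:

```python
import hashlib
import secrets

def md5_hex(s):
    return hashlib.md5(s.encode()).hexdigest()

def stored_entry(password, user):
    """What pg_shadow would hold: a one-way hash, never the plain password."""
    return md5_hex(password + user)

def client_response(password, user, salt):
    """Client recomputes the stored hash from the typed password, then
    hashes again with the server's random per-connection salt."""
    return md5_hex(stored_entry(password, user) + salt)

def server_check(entry, salt, response):
    """The server needs only the stored hash to verify, and a sniffed
    response is useless once a fresh salt is issued."""
    return md5_hex(entry + salt) == response

# One hypothetical exchange:
salt = secrets.token_hex(4)               # fresh challenge each connection
entry = stored_entry("secret", "alice")   # stored earlier in pg_shadow
ok = server_check(entry, salt, client_response("secret", "alice", salt))
```

Neither the stored value nor the wire value is the plain password, which addresses both the pg_shadow concern and the wire-sniffing concern at once.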
The crypt\ncode will be supported for awhile longer, but strictly as a\nbackwards-compatibility measure for old clients. There's no reason to\nspend any additional work on it.\n\nFor the same reason I don't see any value in the idea of adding\ncrypt-based double encryption to clients. We don't really want to\nsupport that over the long run, so why put effort into it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Jun 2001 10:18:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Encrypting pg_shadow passwords " }, { "msg_contents": "\npgman@candle.pha.pa.us (Bruce Momjian) writes:\n\n: OK, I get you now. Why not ask the client to do a crypt and compare\n: that to pg_shadow. [...]\n\nYou can't trust the client to do the one-way encryption, for then the\nencrypted password becomes plaintext-equivalent. (The SMB protocol\napparently suffers or suffered from a similar flaw.)\n\n- FChE\n", "msg_date": "26 Jun 2001 10:30:43 -0400", "msg_from": "fche@redhat.com (Frank Ch. Eigler)", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "> At 12:51 AM 26-06-2001 -0400, Jim Mercer wrote:\n> \n> >my mods are server-side only.\n> >\n> >to rewind a bit.\n> >\n> >my mods correct this by doing:\n> >\n> >with an AUTH_ARGUMENT == \"pg_shadow\", the process is:\n> >tmp_pwd = crypt(client->passwd, pg_shadow->passwd)\n> >if strcmp(tmp_pwd, pg_shadow->passwd) == 0\n> > access allowed\n> >else\n> > access not allowed\n> >\n> >this is not so much an enhancement, but a correction of what i think the\n> >original \"password\" authentication scheme was supposed to allow.\n> >\n> \n> Yep it's a correction. 
pg_shadow shouldn't have been in plaintext in the\n> first place.\n> \n> host all 127.0.0.1 255.255.255.255 password \n> should have meant check crypted passwords in pg_shadow.\n\nThe issue is that when we store users in pg_shadow we don't know what\nkind of authentication is going to be used in pg_hba.conf, and in the\nold days if we stored it encrypted we couldn't use random salt in\n'crypt' authentication.\n\nThis is the first time I am hearing people are more concerned about\npg_shadow security than the wire security. I can see cases where people\nare on secure networks or are using only local users where having\npg_shadow encrypted is more important than crypt authentication. \nFortunately the new system will solve both problems.\n\n\n> Given your suggestion, what happens when someone does an ALTER USER ...\n> WITH PASSWORD ....? \n\nIt stores it encrypted in pg_shadow.\n\n> Might it be too late to do a fix? \n> \n> HMAC sounds interesting. What would the impact be on stuff like Pg DBD?\n\nNo idea.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 26 Jun 2001 11:02:15 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "> On Mon, Jun 25, 2001 at 02:34:51PM +0800, Lincoln Yeoh wrote:\n> > At 12:51 AM 26-06-2001 -0400, Jim Mercer wrote:\n> > >this is not so much an enhancement, but a correction of what i think the\n> > >original \"password\" authentication scheme was supposed to allow.\n> > \n> > Yep it's a correction. 
pg_shadow shouldn't have been in plaintext in the\n> > first place.\n> > \n> > host all 127.0.0.1 255.255.255.255 password \n> > should have meant check crypted passwords in pg_shadow.\n> > \n> > Given your suggestion, what happens when someone does an ALTER USER ...\n> > WITH PASSWORD ....? \n> > \n> > Might it be too late to do a fix? \n> \n> i didn't want to change things too much. in the case of ALTER USER, the\n> code would need to encrypt the password beforehand, either inline, or\n> using a pgsql-contrib crypt() function. (i have this if you want it)\n> \n> the fix is for the authentication behaviour, not the adminitrative interface\n> (ie. ALTER USER).\n\nBut the fix disables crypt authentication, at least until we do double\nencryption.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 26 Jun 2001 11:03:38 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "> My take on the matter is that we shouldn't invest any more effort in\n> crypt-based solutions (here crypt means specifically crypt(3), it's\n> not a generic term). The future is double encryption using MD5 ---\n> or s/MD5/more-modern-hash-algorithm-of-your-choice/, the exact choice\n> is irrelevant to my point. We ought to get off our duffs and implement\n> that, then encourage people to migrate their clients ASAP. The crypt\n> code will be supported for awhile longer, but strictly as a\n> backwards-compatibility measure for old clients. There's no reason to\n> spend any additional work on it.\n> \n> For the same reason I don't see any value in the idea of adding\n> crypt-based double encryption to clients. 
We don't really want to\n> support that over the long run, so why put effort into it?\n\nThe only reason to add double-crypt is so we can continue to use\n/etc/passwd entries on systems that use crypt() in /etc/passwd.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 26 Jun 2001 11:05:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> The only reason to add double-crypt is so we can continue to use\n> /etc/passwd entries on systems that use crypt() in /etc/passwd.\n\nIn the long run, though, we want to drop crypt(3) usage entirely.\nIt's just too much of a pain in the neck to depend on the C library's\ncrypt(), for two reasons:\n\n1. It's not in libc on all systems, leading to constant problems when\nlinking clients, particularly with shared libraries that have to have\na dependency on another shared library because of this. (Search the\narchives for problems about \"can't find crypt\". There are many such\nreports.)\n\n2. crypt() isn't guaranteed compatible across platforms, meaning that\nyour clients may be unable to log in anyway. See for example\nhttp://fts.postgresql.org/db/mw/msg.html?mid=57516\n\nUsing our own MD5 (or whatever) code will avoid these problems.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Jun 2001 11:27:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Encrypting pg_shadow passwords " }, { "msg_contents": "> In the long run, though, we want to drop crypt(3) usage entirely.\n> It's just too much of a pain in the neck to depend on the C library's\n> crypt(), for two reasons:\n> \n> 1. 
It's not in libc on all systems, leading to constant problems when\n> linking clients, particularly with shared libraries that have to have\n> a dependency on another shared library because of this. (Search the\n> archives for problems about \"can't find crypt\". There are many such\n> reports.)\n> \n> 2. crypt() isn't guaranteed compatible across platforms, meaning that\n> your clients may be unable to log in anyway. See for example\n> http://fts.postgresql.org/db/mw/msg.html?mid=57516\n> \n> Using our own MD5 (or whatever) code will avoid these problems.\n\nAgreed. If people say they want to keep crypt for /etc/passwd, we can. \nIf they don't say they want it, we can go with only MD5.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 26 Jun 2001 11:31:36 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> > My take on the matter is that we shouldn't invest any more effort in\n> > crypt-based solutions (here crypt means specifically crypt(3), it's\n> > not a generic term). The future is double encryption using MD5 ---\n> > or s/MD5/more-modern-hash-algorithm-of-your-choice/, the exact choice\n> > is irrelevant to my point. We ought to get off our duffs and implement\n> > that, then encourage people to migrate their clients ASAP. The crypt\n> > code will be supported for awhile longer, but strictly as a\n> > backwards-compatibility measure for old clients. There's no reason to\n> > spend any additional work on it.\n> > \n> > For the same reason I don't see any value in the idea of adding\n> > crypt-based double encryption to clients. 
We don't really want to\n> > support that over the long run, so why put effort into it?\n> \n> The only reason to add double-crypt is so we can continue to use\n> /etc/passwd entries on systems that use crypt() in /etc/passwd.\n\nHaven't many systems (at least Linux and FreeBSD) switched from this\nto other algorithms as default, like MD5? (and usually found in /etc/shadow)\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "26 Jun 2001 12:26:28 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "> > > For the same reason I don't see any value in the idea of adding\n> > > crypt-based double encryption to clients. We don't really want to\n> > > support that over the long run, so why put effort into it?\n> > \n> > The only reason to add double-crypt is so we can continue to use\n> > /etc/passwd entries on systems that use crypt() in /etc/passwd.\n> \n> Haven't many systems (at least Linux and FreeBSD) switched from this\n> to other algorithms as default, like MD5? (and usually found in /etc/shadow)\n\nYes, most BSD's are MD5. I wasn't sure about Linux. If it is md5 by\ndefault that would remove many sites from using crypt in secondary\npassword files already.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 26 Jun 2001 12:30:11 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> > > > For the same reason I don't see any value in the idea of adding\n> > > > crypt-based double encryption to clients. 
We don't really want to\n> > > > support that over the long run, so why put effort into it?\n> > > \n> > > The only reason to add double-crypt is so we can continue to use\n> > > /etc/passwd entries on systems that use crypt() in /etc/passwd.\n> > \n> > Haven't many systems (at least Linux and FreeBSD) switched from this\n> > to other algorithms as default, like MD5? (and usually found in /etc/shadow)\n> \n> Yes, most BSD's are MD5. I wasn't sure about Linux. \n\nMost recent (3-4 years and newer) use PAM, which can use MD5 as an\nunderlying module.\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "26 Jun 2001 12:33:00 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> > > > > For the same reason I don't see any value in the idea of adding\n> > > > > crypt-based double encryption to clients. We don't really want to\n> > > > > support that over the long run, so why put effort into it?\n> > > > \n> > > > The only reason to add double-crypt is so we can continue to use\n> > > > /etc/passwd entries on systems that use crypt() in /etc/passwd.\n> > > \n> > > Haven't many systems (at least Linux and FreeBSD) switched from this\n> > > to other algorithms as default, like MD5? (and usually found in /etc/shadow)\n> > \n> > Yes, most BSD's are MD5. I wasn't sure about Linux. \n> \n> Most recent (3-4 years and newer) use PAM, which can use MD5 as an\n> underlying module.\n\nBut what is the default? crypt or md5?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 26 Jun 2001 12:33:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > \n> > > > > > For the same reason I don't see any value in the idea of adding\n> > > > > > crypt-based double encryption to clients. We don't really want to\n> > > > > > support that over the long run, so why put effort into it?\n> > > > > \n> > > > > The only reason to add double-crypt is so we can continue to use\n> > > > > /etc/passwd entries on systems that use crypt() in /etc/passwd.\n> > > > \n> > > > Haven't many systems (at least Linux and FreeBSD) switched from this\n> > > > to other algorithms as default, like MD5? (and usually found in /etc/shadow)\n> > > \n> > > Yes, most BSD's are MD5. I wasn't sure about Linux. \n> > \n> > Most recent (3-4 years and newer) use PAM, which can use MD5 as an\n> > underlying module.\n> \n> But what is the default? crypt or md5?\n\nVaries. In Red Hat Linux, it's been user configurable during install\nfor a couple of years now - it's been default to on for most of that\ntime, AFAIR.\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "26 Jun 2001 12:43:08 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "Bruce Momjian writes:\n\n> The only reason to add double-crypt is so we can continue to use\n> /etc/passwd entries on systems that use crypt() in /etc/passwd.\n\nOn the sites that are most likely to utilize that (because they have a lot\nof users) it won't work (because they use NIS). 
There are better ways to\ndo that (e.g., PAM).\n\nAlso, see http://httpd.apache.org/docs/misc/FAQ.html#passwdauth\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 26 Jun 2001 19:28:25 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "On Tue, Jun 26, 2001 at 11:03:38AM -0400, Bruce Momjian wrote:\n> > the fix is for the authentication behaviour, not the administrative interface\n> > (ie. ALTER USER).\n> \n> But the fix disables crypt authentication, at least until we do double\n> encryption.\n\nonly if the dbadmin decides to store crypt'd passwords in pg_shadow.\n\nthe code only alters the behaviour of the way the client and server\npasswords are compared, if, and only if, the auth type is \"password pg_shadow\".\n\nthe current code does not allow a method for the client to pass clear-text\npassword, and have it compared to an encrypted pg_shadow.\n\ni consider this broken (especially given the intention of using\n\"password /some/file\").\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ Now with more and longer words for your reading enjoyment. ]\n", "msg_date": "Tue, 26 Jun 2001 14:49:13 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": false, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "On Tue, Jun 26, 2001 at 11:05:55AM -0400, Bruce Momjian wrote:\n> The only reason to add double-crypt is so we can continue to use\n> /etc/passwd entries on systems that use crypt() in /etc/passwd.\n\nas someone else noted, freebsd's crypt() function does multiple\ntypes of crypto (DES, MD5, etc) based on a tag in the salt.\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ Now with more and longer words for your reading enjoyment.
]\n", "msg_date": "Tue, 26 Jun 2001 14:50:23 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": false, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "On Tue, Jun 26, 2001 at 12:30:11PM -0400, Bruce Momjian wrote:\n> > Haven't many systems (at least Linux and FreeBSD) switched from this\n> > to other algorithms as default, like MD5? (and usually found in /etc/shadow)\n> \n> Yes, most BSD's are MD5. I wasn't sure about Linux. If it is md5 by\n> default that would remove many sites from using crypt in secondary\n> password files already.\n\nwhile freebsd is now defaulting to MD5, the core function is still crypt().\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ Now with more and longer words for your reading enjoyment. ]\n", "msg_date": "Tue, 26 Jun 2001 14:52:56 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": false, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "\nI am not sure this fits in to this discussion (I guess I think\nit does, since I am posting this message...)\n\nWe talk about how it is not good to be storing plain text\npasswords, but I don't know what people are doing about \nclients which are expected to connect without input from\nan authorized user (ie. 
web scripts, or other public\napplications with access to the database)\n\nI have been:\ncreating users with minimum possible privileges, and\nstoring password in file with minimum possible privileges\n\n\nWhat other options are there?\n", "msg_date": "Tue, 26 Jun 2001 21:11:00 +0000 (UTC)", "msg_from": "missive@frontiernet.net (Lee Harr)", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > The only reason to add double-crypt is so we can continue to use\n> > /etc/passwd entries on systems that use crypt() in /etc/passwd.\n> \n> On the sites that are most likely to utilize that (because they have a lot\n> of users) it won't work (because they use NIS). There are better ways to\n> do that (e.g., PAM).\n> \n> Also, see http://httpd.apache.org/docs/misc/FAQ.html#passwdauth\n\nThanks. That was a nice description. Seems no one is worried about\nlosing /etc/passwd capability so I will not worry about doing\ndouble-crypt and concentrate on md5. I just didn't want to remove\nfunctionality before warning people.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 26 Jun 2001 19:17:21 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "On Tue, Jun 26, 2001 at 10:18:37AM -0400, Tom Lane wrote:\n> though I would note that anyone who is able to examine the\n> contents of pg_shadow has *already* broken into your database\n\nnote: the dbadmin may not be the system administrator, but the dbadmin,\nby default (with plaintext) can scoop an entire list of \"useful\" passwords,\nsince many users (like it or not) use the same/similar passwords for\nmultiple accounts.\n\n> There's no reason to spend any additional work on it.\n\nhmmm. what is the expected date of rollout of the new code with a backwards\ncompatible API (i don't mind recompiling), which has encrypted passwords\nin pg_shadow?\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ Now with more and longer words for your reading enjoyment. ]\n", "msg_date": "Tue, 26 Jun 2001 22:16:00 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": false, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "> > There's no reason to spend any additional work on it.\n> \n> hmmm. what is the expected date of rollout of the new code with a backwards\n> compatible API (i don't mind recompiling), which has encrypted passwords\n> in pg_shadow?\n\nIt will be in 7.2, whenever that is. It will not talk to pre-7.2\nclients once you encrypt pg_shadow entries.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup.
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 26 Jun 2001 22:34:16 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "\npgman wrote:\n\n: OK, I get you now. Why not ask the client to do a crypt and compare\n: that to pg_shadow. [...]\n\nYou can't trust the client to do the one-way encryption, for then the\nencrypted password becomes plaintext-equivalent - it defeats the\npurpose. (The SMB protocol apparently suffers or suffered from a\nsimilar flaw.)\n\n\ntgl wrote:\n\n: What this discussion seems to come down to is whether we should take a\n: backward step in one area of security (security against wire-sniffing)\n: to take a forward step in another (not storing plaintext passwords).\n: [...]\n\nIt seems to me that the two issues are orthogonal. Authentication and\nconfidentiality are not mutually dependent or reinforcing, and thus\ngenerally need separate mechanisms.\n\n\n- FChE\n", "msg_date": "27 Jun 2001 09:58:08 -0400", "msg_from": "fche@redhat.com (Frank Ch. Eigler)", "msg_from_op": false, "msg_subject": "Re: Encrypting pg_shadow passwords" }, { "msg_contents": "fche@redhat.com (Frank Ch. Eigler) writes:\n> tgl wrote:\n> : What this discussion seems to come down to is whether we should take a\n> : backward step in one area of security (security against wire-sniffing)\n> : to take a forward step in another (not storing plaintext passwords).\n\n> It seems to me that the two issues are orthogonal.\n\nIn the abstract yes, but not when you have a constraint that you can't\nchange the protocol or the client-side code. 
Remember we are talking\nabout a backwards-compatibility mode.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Jun 2001 11:07:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Encrypting pg_shadow passwords " }, { "msg_contents": "Hi -\n\ntgl wrote:\n\n: [...]\n: > : What this discussion seems to come down to is whether we should take a\n: > : backward step in one area of security (security against wire-sniffing)\n: > : to take a forward step in another (not storing plaintext passwords).\n: \n: > It seems to me that the two issues are orthogonal.\n: \n: In the abstract yes, but not when you have a constraint that you can't\n: change the protocol or the client-side code. Remember we are talking\n: about a backwards-compatibility mode.\n\nHaving scanned over the discussion again, my understanding is that Jim's\nproposed changes don't affect backwards compatibility. As long as user\npasswords continue to be passed in plaintext to the server, the server\ncan store encrypted passwords in the authentication table.\n\nProtecting against wire snooping could properly be left to another\nlayer, which might indeed require client & server changes (unless\nperformed by some external system like stunnel). Wouldn't that be\nsufficient, and avoid the need to invent anything special just for\npostgresql?\n\n- FChE", "msg_date": "Wed, 27 Jun 2001 11:27:07 -0400", "msg_from": "\"Frank Ch. Eigler\" <fche@redhat.com>", "msg_from_op": false, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "\"Frank Ch. Eigler\" <fche@redhat.com> writes:\n> Having scanned over the discussion again, my understanding is that Jim's\n> proposed changes don't affect backwards compatibility. 
As long as user\n> passwords continue to be passed in plaintext to the server, the server\n> can store encrypted passwords in the authentication table.\n\nThe 'passwd' mode wouldn't be affected, but the 'crypt' mode would be;\nit would become less secure than it is now, because the server would be\nforced to send the same salt always, and so a captured encrypted\npassword would be just as useful as a captured plaintext one. That's\nthe step backwards.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Jun 2001 11:34:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Encrypting pg_shadow passwords " }, { "msg_contents": "Hi -\n\ntgl wrote:\n\n: The 'passwd' mode wouldn't be affected, but the 'crypt' mode would be;\n: it would become less secure than it is now, because the server would be\n: forced to send the same salt always, and so a captured encrypted\n: password would be just as useful as a captured plaintext one. That's\n: the step backwards.\n\nOh, I see finally. You already put a custom little\nchallenge/response authentication scheme into postgresql,\nand want to keep that working. (May I ask when/why that\nwent in at all? Was lower-layer encryption not an option?)\n\nAt least, it looks like the choice of authentication protocol is a\nserver-side decision. Backward-compatibility for old clients can\nbe forced by the adminstrator, whether the server switches to\nencrypted password storage, and/or to lower-level encryption.\n\n- FChE", "msg_date": "Wed, 27 Jun 2001 12:27:08 -0400", "msg_from": "\"Frank Ch. Eigler\" <fche@redhat.com>", "msg_from_op": false, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "\"Frank Ch. Eigler\" <fche@redhat.com> writes:\n> Oh, I see finally. You already put a custom little\n> challenge/response authentication scheme into postgresql,\n> and want to keep that working. 
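Tom's replay concern can be made concrete with a toy example: if the server is forced to reissue the same salt every time (to stay compatible with a pre-hashed pg_shadow), a response sniffed off the wire verifies again later; a fresh random salt makes the captured value worthless. A hedged sketch, using an MD5-style hash purely for illustration:

```python
import hashlib
import secrets

def respond(secret_hash, salt):
    """What a legitimate client sends for a given challenge salt."""
    return hashlib.md5((secret_hash + salt).encode()).hexdigest()

secret_hash = hashlib.md5(b"secret").hexdigest()

# An eavesdropper records one challenge/response pair off the wire.
captured_salt = "fixedsalt"
captured_response = respond(secret_hash, captured_salt)

# Server reuses the same salt: the recorded response authenticates again.
replay_works = respond(secret_hash, captured_salt) == captured_response

# Server issues a fresh random salt: the recorded response no longer matches.
fresh_salt = secrets.token_hex(4)
replay_fails = respond(secret_hash, fresh_salt) != captured_response
```

This is exactly why a fixed-salt backwards-compatibility mode degrades 'crypt' to the security level of plain 'password' authentication.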
(May I ask when/why that\n> went in at all?\n\nLong before any of the current generation of developers, AFAIK.\n\n> Was lower-layer encryption not an option?)\n\nWhat lower layer? This code predates SSL by a good bit.\n\nIn any case, as several people have pointed out, one may well want to\nguard one's password more carefully than one guards the entire session\ncontents. Running SSL on a session that may transfer many megabytes\nis a lot of overhead.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Jun 2001 12:33:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Encrypting pg_shadow passwords " }, { "msg_contents": "Hi -\n\ntgl wrote:\n: > Oh, I see finally. You already put a custom little\n: > challenge/response authentication scheme into postgresql,\n: [...]\n: Long before any of the current generation of developers, AFAIK.\n\nOkay. (Sorry about misinferring \"You\" above!)\n\n\n: In any case, as several people have pointed out, one may well want to\n: guard one's password more carefully than one guards the entire session\n: contents. Running SSL on a session that may transfer many megabytes\n: is a lot of overhead.\n\nSure, but that's a separate performance question that shouldn't affect\nthe logical layering of the mechanisms. With SSL, for example, methinks\nit's possible to renegotiate a connection to turn off encryption after\na certain point.\n\n\n- FChE", "msg_date": "Wed, 27 Jun 2001 12:41:09 -0400", "msg_from": "\"Frank Ch. Eigler\" <fche@redhat.com>", "msg_from_op": false, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "On Tue, Jun 26, 2001 at 11:02:15AM -0400, Bruce Momjian wrote:\n> This is the first time I am hearing people are more concerned about\n> pg_shadow security than the wire security. I can see cases where people\n> are on secure networks or are using only local users where having\n> pg_shadow encrypted is more important than crypt authentication. 
\n> Fortunately the new system will solve both problems.\n\nThe crypt authentication currently used offers _no_ security. If I can\nsniff on the wire, I can hijack the tcp stream, and trick the client\ninto doing password authentication.\n\nAlso, the double crypt authentication offers no advantage over the wire.\n\nYou're better off just doing an md5crypt() on the server side, and just\npassing the password in the clear. At least you're not confusing users\ninto thinking that they're secure.\n\nOf course, SSL *if done correctly with certificate verification* is the\ncorrect fix. If no certificate verification is done, you fall victim to\na man-in-the-middle attack.\n\n-- \nMichael Samuel <michael@miknet.net>\n", "msg_date": "Wed, 11 Jul 2001 13:24:53 +1000", "msg_from": "michael@miknet.net (Michael Samuel)", "msg_from_op": false, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "> On Tue, Jun 26, 2001 at 11:02:15AM -0400, Bruce Momjian wrote:\n> > This is the first time I am hearing people are more concerned about\n> > pg_shadow security than the wire security. I can see cases where people\n> > are on secure networks or are using only local users where having\n> > pg_shadow encrypted is more important than crypt authentication. \n> > Fortunately the new system will solve both problems.\n> \n> The crypt authentication currently used offers _no_ security. If I can\n> sniff on the wire, I can hijack the tcp stream, and trick the client\n> into doing password authentication.\n\nIt is my understanding that sniffing is much easier than hijacking. If\nhijacking is a concern, you have to use SSL.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 10 Jul 2001 23:32:00 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "On Tue, Jul 10, 2001 at 11:32:00PM -0400, Bruce Momjian wrote:\n> > On Tue, Jun 26, 2001 at 11:02:15AM -0400, Bruce Momjian wrote:\n> > > This is the first time I am hearing people are more concerned about\n> > > pg_shadow security than the wire security. I can see cases where people\n> > > are on secure networks or are using only local users where having\n> > > pg_shadow encrypted is more important than crypt authentication. \n> > > Fortunately the new system will solve both problems.\n> > \n> > The crypt authentication currently used offers _no_ security. If I can\n> > sniff on the wire, I can hijack the tcp stream, and trick the client\n> > into doing password authentication.\n> \n> It is my understanding that sniffing is much easier than hijacking. If\n> hijacking is a concern, you have to use SSL.\n\nThat is not true. The internet happily allows for active attacks. In\nfact, active attacks are easier on the internet than passive ones.\n\nMy concern is, that by having something that we proclaim to be secure, we\nneed for it to really be secure.\n\nAn HMAC would be a better alternative to the current crypt scheme, as\nit would provide integrity, without the overhead of having privacy.\n\nOf course, HMAC would require the postgres protocol to talk in \"packets\",\nas it can't accept the data as being valid until it verifies the MAC. I'm\nnot familiar with the protocol yet.\n\nI suggest these authentication options:\n\n* password - The current meaning of password, but with passwords hashed\n using md5crypt() or something. 
(The usual crypt unneccessarily limits\n passwords to 8 characters)\n* HMAC - Wrap all postgres data in an HMAC (I believe this requires an\n plaintext-like password on the server as does crypt and the double\n crypt scheme)\n* Public Key (RSA/DSA) - Use public key cryptography to negotiate a\n connection. (When I'm not busy, I may decide to do this myself)\n\nAlso, I think we should add to the client API the ability to only accept\ncertain authentication schemes, to avoid active attacks tricking your\nsoftware from sending the HMAC password in cleartext.\n\n-- \nMichael Samuel <michael@miknet.net>\n", "msg_date": "Wed, 11 Jul 2001 19:02:22 +1000", "msg_from": "michael@miknet.net (Michael Samuel)", "msg_from_op": false, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "> That is not true. The internet happily allows for active attacks. In\n> fact, active attacks are easier on the internet than passive ones.\n> \n> My concern is, that by having something that we proclaim to be secure, we\n> need for it to really be secure.\n> \n> An HMAC would be a better alternative to the current crypt scheme, as\n> it would provide integrity, without the overhead of having privacy.\n> \n> Of course, HMAC would require the postgres protocol to talk in \"packets\",\n> as it can't accept the data as being valid until it verifies the MAC. I'm\n> not familiar with the protocol yet.\n> \n> I suggest these authentication options:\n> \n> * password - The current meaning of password, but with passwords hashed\n> using md5crypt() or something. (The usual crypt unneccessarily limits\n> passwords to 8 characters)\n\nOnce I do crypting of pg_shadow/double-crypt for 7.2, we don't need\npassword anymore. 
It is around only for very old clients and for\nsecondary password files but wWe will not need that workaround with\ndouble-crypt.\n\n> * HMAC - Wrap all postgres data in an HMAC (I believe this requires an\n> plaintext-like password on the server as does crypt and the double\n> crypt scheme)\n\nNo, double-crypt has the passwords stored encrypted.\n\n> * Public Key (RSA/DSA) - Use public key cryptography to negotiate a\n> connection. (When I'm not busy, I may decide to do this myself)\n\nSSL?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 11 Jul 2001 13:00:42 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "> Also, I think we should add to the client API the ability to only accept\n> certain authentication schemes, to avoid active attacks tricking your\n> software from sending the HMAC password in cleartext.\n\nThis is an interesting point. We have kept 'password' authentication\naround for secondary password files and for very old clients, but now\nsee that having it around can be a security problem because you can ask\nthe client to send you cleartext passwords.\n\nComments?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 11 Jul 2001 13:02:49 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "On Wed, Jul 11, 2001 at 01:24:53PM +1000, Michael Samuel wrote:\n> The crypt authentication currently used offers _no_ security. 
...\n> Of course, SSL *if done correctly with certificate verification* is the\n> correct fix. If no certificate verification is done, you fall victim to\n> a man-in-the-middle attack.\n\nIt seems worth noting here that you don't have to depend on\nSSL authentication; PG can do its own authentication over SSL\nand avoid the man-in-the-middle attack that way. \n\nOf course, PG would have to do its authentication properly, e.g. \nwith the HMAC method. That seems better than depending on SSL \nauthentication, because SSL certification seems to be universally\nmisconfigured.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Wed, 11 Jul 2001 13:48:21 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" }, { "msg_contents": "On Wed, Jul 11, 2001 at 01:00:42PM -0400, Bruce Momjian wrote:\n> > * HMAC - Wrap all postgres data in an HMAC (I believe this requires an\n> > plaintext-like password on the server as does crypt and the double\n> > crypt scheme)\n> \n> No, double-crypt has the passwords stored encrypted.\n\nYou missed my point. If I can get hold of the encrypted password in\nthe database, I can hack up a client library to use the encrypted\npassword to log in. Therefore, encrypting the password in pg_shadow\noffers no advantage.\n\n> > * Public Key (RSA/DSA) - Use public key cryptography to negotiate a\n> > connection. (When I'm not busy, I may decide to do this myself)\n> \n> SSL?\n\nI'd use the OpenSSL libraries to implement it, but we're talking about\npublic key authentication here, not connection encryption.\n\n-- \nMichael Samuel <michael@miknet.net>\n", "msg_date": "Thu, 12 Jul 2001 16:20:35 +1000", "msg_from": "michael@miknet.net (Michael Samuel)", "msg_from_op": false, "msg_subject": "Re: Re: Encrypting pg_shadow passwords" } ]
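The double-encryption handshake debated in this thread can be illustrated in a few lines. This is an editorial sketch, not code from the thread: `pwhash` is a hypothetical stand-in (MD5 over salt plus secret) for the crypt(3) call a real implementation would use, and all the function names here are invented.

```python
import hashlib


def pwhash(secret: str, salt: str) -> str:
    """Stand-in for crypt(3): a salted one-way hash (assumption, not the real algorithm)."""
    return hashlib.md5((salt + secret).encode()).hexdigest()


def store_password(password: str, db_salt: str) -> str:
    """Server side: pg_shadow keeps only the salted hash, never the plaintext."""
    return pwhash(password, db_salt)


def client_response(password: str, db_salt: str, session_salt: str) -> str:
    """Client side: encrypt twice, once with the pg_shadow salt and once
    with the random per-connection salt the server sent."""
    return pwhash(pwhash(password, db_salt), session_salt)


def server_check(stored: str, session_salt: str, response: str) -> bool:
    """Server side: re-encrypt the stored hash with the session salt and compare."""
    return pwhash(stored, session_salt) == response
```

Note that Michael Samuel's objection in the thread still applies to this sketch: the stored hash is password-equivalent to a modified client, so the scheme protects the contents of pg_shadow but does not make a captured pg_shadow harmless.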
[ { "msg_contents": "Don't know about JDBC, but couldn't you just use UPDATE <xxx> SET\n<yyy>=<zzz> WHERE xmin=<stored/old xmin> AND primarykey=<stored/old pk> and\nget the number of altered records? (if it's zero then you know something's\nwrong and can investigate further)\n- Stuart\n\n> -----Original Message-----\n> From:\tDave Cramer [SMTP:dave@fastcrypt.com]\n> Sent:\tThursday, June 14, 2001 4:34 AM\n> To:\tpgsql-hackers@postgresql.org\n> Subject:\tRow Versioning, for jdbc updateable result sets\n> \n> In order to be able to implement updateable result sets there needs to be\n> a mechanism for determining if the underlying data has changed since the\n> resultset was fetched. Short of retrieving the current data and comparing\n> the entire row, can anyone think of a way possibly using the row version\n> to determine if the data has been concurrently changed?\n> \n> Dave\n", "msg_date": "Fri, 15 Jun 2001 10:41:37 +0100", "msg_from": "\"Henshall, Stuart - WCP\" <SHenshall@westcountrypublications.co.uk>", "msg_from_op": true, "msg_subject": "RE: Row Versioning, for jdbc updateable result sets" }, { "msg_contents": "Stuart,\n\nI had no idea that xmin even existed, but having a quick look I think this\nis what I am looking for. Can I assume that if xmin has changed, then\nanother process has changed the underlying data ?\n\nDave\n----- Original Message -----\nFrom: \"Henshall, Stuart - WCP\" <SHenshall@westcountrypublications.co.uk>\nTo: \"'Dave Cramer'\" <dave@fastcrypt.com>\nCc: <pgsql-hackers@postgresql.org>\nSent: Friday, June 15, 2001 5:41 AM\nSubject: [HACKERS] RE: Row Versioning, for jdbc updateable result sets\n\n\n> Don't know about JDBC, but couldn't you just use UPDATE <xxx> SET\n> <yyy>=<zzz> WHERE xmin=<stored/old xmin> AND primarykey=<stored/old pk>\nand\n> get the number of altered records? (if it's zero then you know something's\n> wrong and can investigate further)\n> - Stuart\n>\n> > -----Original Message-----\n> > From: Dave Cramer [SMTP:dave@fastcrypt.com]\n> > Sent: Thursday, June 14, 2001 4:34 AM\n> > To: pgsql-hackers@postgresql.org\n> > Subject: Row Versioning, for jdbc updateable result sets\n> >\n> > In order to be able to implement updateable result sets there needs to\nbe\n> > a mechanism for determining if the underlying data has changed since the\n> > resultset was fetched. Short of retrieving the current data and\ncomparing\n> > the entire row, can anyone think of a way possibly using the row version\n> > to determine if the data has been concurrently changed?\n> >\n> > Dave\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n>\n\n", "msg_date": "Fri, 15 Jun 2001 07:48:48 -0400", "msg_from": "\"Dave Cramer\" <dave@fastcrypt.com>", "msg_from_op": false, "msg_subject": "Re: RE: Row Versioning, for jdbc updateable result sets" }, { "msg_contents": "\"Dave Cramer\" <dave@fastcrypt.com> writes:\n> I had no idea that xmin even existed, but having a quick look I think this\n> is what I am looking for. Can I assume that if xmin has changed, then\n> another process has changed the underlying data ?\n\nxmin is a transaction ID, not a process ID, but looking at it should\nwork for your purposes at present.\n\nThere has been talk of redefining xmin as part of a solution to the\nXID-overflow problem: what would happen is that all \"sufficiently old\"\ntuples would get relabeled with the same special xmin, so that only\nrecent transactions would need to have distinguishable xmin values.\nIf that happens then your code would break, at least if you want to\ncheck for changes just at long intervals.\n\nA hack that comes to mind is that when relabeling an old tuple this way,\nwe could copy its original xmin into cmin while setting xmin to the\npermanently-valid XID. Then, if you compare both xmin and cmin, you\nhave only about a 1 in 2^32 chance of being fooled. (At least if we\nuse a wraparound style of allocating XIDs. I think Vadim is advocating\nresetting the XID counter to 0 at each system restart, so the active\nrange of XIDs might be a lot smaller than 2^32 in that scenario.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Jun 2001 10:21:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RE: Row Versioning, for jdbc updateable result sets " }, { "msg_contents": "Tom,\n\nI am considering coding this into postgres's jdbc driver, as there are a lot\nof requests for updateable rowsets. I really don't want to code this in;\nonly to have it break later. Is there a way to do this? Can the version # of\nthe row be made available to the client?\n\nDave\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Dave Cramer\" <dave@fastcrypt.com>\nCc: \"Henshall, Stuart - WCP\" <SHenshall@westcountrypublications.co.uk>;\n<pgsql-hackers@postgresql.org>\nSent: Friday, June 15, 2001 10:21 AM\nSubject: Re: [HACKERS] RE: Row Versioning, for jdbc updateable result sets\n\n\n> \"Dave Cramer\" <dave@fastcrypt.com> writes:\n> > I had no idea that xmin even existed, but having a quick look I think\nthis\n> > is what I am looking for. Can I assume that if xmin has changed, then\n> > another process has changed the underlying data ?\n>\n> xmin is a transaction ID, not a process ID, but looking at it should\n> work for your purposes at present.\n>\n> There has been talk of redefining xmin as part of a solution to the\n> XID-overflow problem: what would happen is that all \"sufficiently old\"\n> tuples would get relabeled with the same special xmin, so that only\n> recent transactions would need to have distinguishable xmin values.\n> If that happens then your code would break, at least if you want to\n> check for changes just at long intervals.\n>\n> A hack that comes to mind is that when relabeling an old tuple this way,\n> we could copy its original xmin into cmin while setting xmin to the\n> permanently-valid XID. Then, if you compare both xmin and cmin, you\n> have only about a 1 in 2^32 chance of being fooled. (At least if we\n> use a wraparound style of allocating XIDs. I think Vadim is advocating\n> resetting the XID counter to 0 at each system restart, so the active\n> range of XIDs might be a lot smaller than 2^32 in that scenario.)\n>\n> regards, tom lane\n>\n>\n\n", "msg_date": "Fri, 15 Jun 2001 11:50:32 -0400", "msg_from": "\"Dave Cramer\" <dave@fastcrypt.com>", "msg_from_op": false, "msg_subject": "Re: RE: Row Versioning, for jdbc updateable result sets " }, { "msg_contents": "\"Dave Cramer\" <dave@fastcrypt.com> writes:\n> Can the version # of\n> the row be made available to the client?\n\nThere is no \"version # of the row\" in postgres, unless you set up such a\nthing for yourself (which could certainly be done, using triggers).\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Jun 2001 14:27:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RE: Row Versioning, for jdbc updateable result sets " }, { "msg_contents": "On Fri, Jun 15, 2001 at 10:21:37AM -0400, Tom Lane wrote:\n> \"Dave Cramer\" <dave@fastcrypt.com> writes:\n> > I had no idea that xmin even existed, but having a quick look I think this\n> > is what I am looking for. Can I assume that if xmin has changed, then\n> > another process has changed the underlying data ?\n> \n> xmin is a transaction ID, not a process ID, but looking at it should\n> work for your purposes at present.\n> \n> There has been talk of redefining xmin as part of a solution to the\n> XID-overflow problem: what would happen is that all \"sufficiently old\"\n> tuples would get relabeled with the same special xmin, so that only\n> recent transactions would need to have distinguishable xmin values.\n> If that happens then your code would break, at least if you want to\n> check for changes just at long intervals.\n\nA simpler alternative was to change all \"sufficiently old\" tuples to have \nan xmin value, N, equal to the oldest that would need to be distinguished. \nxmin values could then be compared using normal arithmetic: less(xminA, \nxminB) is just ((xminA - N) < (xminB - N)), with no special cases.\n\n> A hack that comes to mind is that when relabeling an old tuple this way,\n> we could copy its original xmin into cmin while setting xmin to the\n> permanently-valid XID. Then, if you compare both xmin and cmin, you\n> have only about a 1 in 2^32 chance of being fooled. (At least if we\n> use a wraparound style of allocating XIDs. I think Vadim is advocating\n> resetting the XID counter to 0 at each system restart, so the active\n> range of XIDs might be a lot smaller than 2^32 in that scenario.)\n\nThat assumes a pretty frequent system restart. Many of us prefer\nto code to the goal of a system that could run for decades.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Fri, 15 Jun 2001 16:31:06 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: RE: Row Versioning, for jdbc updateable result sets" } ]
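Stuart's suggestion at the top of this thread — issue the UPDATE with the remembered xmin in the WHERE clause and inspect the affected-row count — can be sketched with an in-memory stand-in for the table. The dict layout, field names, and XID values below are invented for illustration; they are not PostgreSQL internals.

```python
def optimistic_update(table, pk, expected_xmin, new_value, current_xid):
    """Mirror of: UPDATE t SET v = %s WHERE i = %s AND xmin = %s

    Returns the affected-row count; 0 means another transaction changed
    (and re-stamped) the row since our result set was fetched.
    """
    row = table.get(pk)
    if row is None or row["xmin"] != expected_xmin:
        return 0  # stale snapshot: report a conflict instead of clobbering
    row["value"] = new_value
    row["xmin"] = current_xid  # the updating transaction stamps a new xmin
    return 1
```

As Tom notes, this works because every committed update gives the row's new version a different xmin; a driver relying on it would need the xmin-plus-cmin comparison he describes if old tuples were ever relabeled with one special XID.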
[ { "msg_contents": "Marc,\n\nwhen I try to reach http://fts.postgresql.org/ I see\nhttp://www.hub.org/\n\nwhat's happens ?\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 15 Jun 2001 13:06:14 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "fts.postgresql.org ?" }, { "msg_contents": "\nalready fixed ...\n\nOn Fri, 15 Jun 2001, Oleg Bartunov wrote:\n\n> Marc,\n>\n> when I try to reach http://fts.postgresql.org/ I see\n> http://www.hub.org/\n>\n> what's happens ?\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Fri, 15 Jun 2001 10:05:49 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: fts.postgresql.org ?" } ]
[ { "msg_contents": "Hi,\n\nI'm currently using PostgreSQL v7.0.3 and I have tried to upgrade to\nv7.1.2.\n\nThe installation (./configure, gmake, gmake install) of v7.0.3 doesn't\nprovide any error message,\nbut I'm not able to use the pg_dumpall binary, even with -p option it\nsays it's not able to connect to postmaster (see message below).\n\nCommand line: pg_dumpall -p6947 > /home/postgres/data/postgres.out\n\nError message: \n\npsql: connectDBStart() -- connect() failed: No such file or directory\n Is the postmaster running at 'localhost'\n and accepting connections on Unix socket '5432'?\npsql: connectDBStart() -- connect() failed: No such file or directory\n Is the postmaster running at 'localhost'\n and accepting connections on Unix socket '5432'?\npsql: connectDBStart() -- connect() failed: No such file or directory\n Is the postmaster running at 'localhost'\n and accepting connections on Unix socket '5432'?\npsql: connectDBStart() -- connect() failed: No such file or directory\n Is the postmaster running at 'localhost'\n and accepting connections on Unix socket '5432'?\n\n\nI have checked and the daemon is running accepting connections on\nTCP/IP. I'm also able to use pg_dump (I use it to backup the database\nevery night).\n\nIt's a major problem for me because I'm not able to \"migrate\" the DB to\nthe new version.\n\nI provide you below the configure option I have used:\n\nConfigure line: ./configure --prefix=$PGDIR --with-pgport=6947\n--without-CXX (where $PGDIR=/usr/local/pgsql)\n\n\nThanks for your help\n\n\n-- \nFlorian Gossin\n\nTechnology Development Dpt.\n\nMNC S.A.\n1003 Lausanne \n(Switzerland)\n\nhttp://www.mnc.ch\n", "msg_date": "Fri, 15 Jun 2001 10:19:53 +0000", "msg_from": "Le Gourou qui fait le support <flo@mnc.ch>", "msg_from_op": true, "msg_subject": "pg_dumpall problem" } ]
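The error output in this report shows psql still trying the default port 5432. One general workaround worth noting: libpq programs read the port from the PGPORT environment variable, so pinning it in the environment makes the port reach every helper a wrapper like pg_dumpall spawns, even where a -p flag is not forwarded. A hypothetical sketch (the function name is made up):

```python
import os


def env_with_port(port):
    """Return a copy of the current environment with the libpq port pinned.

    Any child process (psql, pg_dump, ...) started with this environment
    will connect to `port` by default, no -p flag required.
    """
    env = dict(os.environ)
    env["PGPORT"] = str(port)
    return env
```

Usage would be something like passing `env=env_with_port(6947)` to the call that launches pg_dumpall.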
[ { "msg_contents": "Could anyone please tell me what the following mean:\n\nNOTICE: Index pg_type_typname_index: NUMBER OF INDEX' TUPLES (124) IS NOT THE SAME AS HEAP' (114) \nNOTICE: Index pg_attribute_attrelid_index: NUMBER OF INDEX' TUPLES (883) IS NOT THE SAME AS HEAP' (422)\n\nI've never seen them before.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Fri, 15 Jun 2001 14:42:57 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "NOTICE messages" }, { "msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n> Could anyone please tell me what the following mean:\n> NOTICE: Index pg_type_typname_index: NUMBER OF INDEX' TUPLES (124) IS NOT THE SAME AS HEAP' (114) \n> NOTICE: Index pg_attribute_attrelid_index: NUMBER OF INDEX' TUPLES (883) IS NOT THE SAME AS HEAP' (422)\n\n> I've never seen them before.\n\nWhat PG version? Do you have any open transactions that might have\ncreated or deleted as-yet-uncommitted tables?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Jun 2001 10:22:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: NOTICE messages " }, { "msg_contents": "On Fri, Jun 15, 2001 at 10:22:53AM -0400, Tom Lane wrote:\n> What PG version? Do you have any open transactions that might have\n> created or deleted as-yet-uncommitted tables?\n\nI'm not sure since this has not happened on my system. Sorry, I wasn't\nprecise enough. It happens on the system of a co-worker of mine. It seems to\nbe related to him having installed phpgroupware which uses postgresql as a\nbackend. But since phpgroupware does not know about transactions I would\nassume he will never have open transactions unless you count the time PG\nneeds to execute a statement.\n\nAlso I assume he's using 7.1. But I will try to get some details.\n\nMichael\n\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Fri, 15 Jun 2001 17:28:54 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: NOTICE messages" } ]
[ { "msg_contents": "Hi All,\n\nI'm running some benchmarking tests on mysql, postgres and interbase\ndatabase servers.\n\ndoes anyone know the reasons why or know where I can find out some\ntechnical reasons why the postgres database server is particularly good\nin relation to the others or just by itself e.g. use of query\noptimisation or indexing etc. I'm only looking at creates, selects,\ninserts, update and delete statements. I've noticed it is slow at\ninserting data into tables, but especially quick at doing complex\nselects (i.e. containing many joins). Why is this so?\n\nI've tried to look around for this on the net, but all I seem to get is\na big argument consisting of the features lacking in mysql that are in\nother databases e.g. ACID!!\n\nThanks,\n\nDipesh\n\n\n\n", "msg_date": "Fri, 15 Jun 2001 14:46:22 +0100", "msg_from": "Dip <dds98@doc.ic.ac.uk>", "msg_from_op": true, "msg_subject": "Postgres Internals" }, { "msg_contents": "> Hi All,\n> \n> I'm running some benchmarking tests on mysql, postgres and interbase\n> database servers.\n> \n> does anyone know the reasons why or know where I can find out some\n> technical reasons why the postgres database server is particularly good\n> in relation to the others or just by itself e.g. use of query\n> optimisation or indexing etc. I'm only looking at creates, selects,\n> inserts, update and delete statements. I've noticed it is slow at\n> inserting data into tables, but especially quick at doing complex\n> selects (i.e. containing many joins). Why is this so?\n\nOur optimizer is very good compared to MySQL. Not sure about Interbase.\nWe even have GEQO for joins of >10 tables.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 20 Jun 2001 21:00:59 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgres Internals" } ]
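A rough, editorial illustration of why the optimizer work mentioned in this thread matters: the number of possible join orders grows factorially with the number of tables, which is why exhaustive planning has to give way to a heuristic search like GEQO beyond the roughly-ten-table threshold Bruce cites. The helper below is just this arithmetic, not PostgreSQL code.

```python
from math import factorial


def left_deep_join_orders(n):
    """Number of left-deep join orders for n tables: n! permutations.

    A cost-based optimizer explores (a pruned subset of) these; once n
    gets large the space explodes, motivating a genetic search (GEQO).
    """
    return factorial(n)
```

For example, 3 tables admit 6 orders, while 10 tables already admit over 3.6 million.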
[ { "msg_contents": "\nIt seems that our current way of enforcing uniqueness knows nothing \nabout transactions ;(\n\nwhen you \n\ncreate table t(\n i int4 primary key\n);\n\nand then run the following query \n\nbegin;\n delete from t where i=1;\n insert into t(i) values(1);\nend;\n\nin a loop from two parallel processes, then one of them will \nalmost instantaneously err out with \n\nERROR: Cannot insert a duplicate key into unique index t_pkey\n\nI guess this can be classified as a bug, but I'm not sure how easy it \nis to fix it.\n\n-------------\nHannu\n\n\nI tested it with the following python script\n\n#!/usr/bin/python\n\nsql_reinsert_item = \"\"\"\\\nbegin;\n delete from t where i=1;\n insert into t(i) values(1);\nend;\n\"\"\"\n\ndef main():\n import _pg\n con = _pg.connect('test')\n for i in range(500):\n print '%d. update' % (i+1)\n con.query(sql_reinsert_item)\n\nif __name__=='__main__':\n main()\n", "msg_date": "Fri, 15 Jun 2001 15:48:31 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "UNIQUE INDEX unaware of transactions" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n\n> It seems that our current way of enforcing uniqueness knows nothing \n> about transactions ;(\n> \n> when you \n> \n> create table t(\n> i int4 primary key\n> );\n> \n> and then run the following query \n> \n> begin;\n> delete from t where i=1;\n> insert into t(i) values(1);\n> end;\n> \n> in a loop from two parallel processes, then one of them will \n> almost instantaneously err out with \n> \n> ERROR: Cannot insert a duplicate key into unique index t_pkey\n\nHave you tried running this test with transaction isolation set to\nSERIALIZABLE? \n\n-Doug\n-- \nThe rain man gave me two cures; he said jump right in,\nThe first was Texas medicine--the second was just railroad gin,\nAnd like a fool I mixed them, and it strangled up my mind,\nNow people just get uglier, and I got no sense of time... --Dylan\n", "msg_date": "15 Jun 2001 11:40:41 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: UNIQUE INDEX unaware of transactions" }, { "msg_contents": "Hi,\n\nA bit theoretical question (sorry for spelling and maybe OT).\n\n...\n> > It seems that our current way of enforcing uniqueness knows nothing\n> > about transactions ;(\n...\n> > create table t(i int4 primary key);\n...\n> > begin;\n> > delete from t where i=1;\n> > insert into t(i) values(1);\n> > end;\n> >\n> > in a loop from two parallel processes, then one of them will\n> > almost instantaneously err out with\n> >\n> > ERROR: Cannot insert a duplicate key into unique index t_pkey\n\n*I think* this is correct behaviour, ie all that one transaction does should\nbe visible to other transactions.\n\nBut then a question: How is this handled by PostgreSQL? (two parallel\nthreads, a row where t=1 already exists):\n\nbegin; // << Thread 1\n\tdelete from t where i=1;\n\n\t// Now thread 1 does a lot of other stuff...\n\t// and while its working another thread starts doing its stuff\n\nbegin; // << Thread 2\n\tinsert into t(i) values(1);\ncommit; // << Thread 2 is done, and all should be swell\n\n\t// What happens here ????????????\nrollback; // << Thread 1 regrets its delete???????????\n\n// Jarmo\n\n", "msg_date": "Sat, 16 Jun 2001 09:56:39 +0200", "msg_from": "\"Jarmo Paavilainen\" <netletter@comder.com>", "msg_from_op": false, "msg_subject": "UNIQUE INDEX unaware of transactions (a spin of question)" }, { "msg_contents": "Jarmo Paavilainen writes:\n\n> *I think* this is correct behaviour, ie all that one transaction does should\n> be visible to other transactions.\n\nOnly in the \"read uncommitted\" transaction isolation level, which\nPostgreSQL does not provide and isn't really that useful.\n\n> But then a question: How is this handled by PostgreSQL? (two parallel\n> threads, a row where t=1 already exists):\n>\n> begin; // << Thread 1\n> \tdelete from t where i=1;\n>\n> \t// Now thread 1 does a lot of other stuff...\n> \t// and while its working another thread starts doing its stuff\n>\n> begin; // << Thread 2\n> \tinsert into t(i) values(1);\n> commit; // << Thread 2 is done, and all should be swell\n>\n> \t// What happens here ????????????\n> rollback; // << Thread 1 regrets its delete???????????\n\nYou can try yourself how PostgreSQL handles this, which is probably not\nthe right thing since unique constraints are not correctly transaction\naware.\n\nWhat *should* happen is this: In \"read committed\" isolation level, the\ninsert in the second thread would fail with a constraint violation because\nthe delete in the first thread is not yet visible to it. In\n\"serializable\" isolation level, the thread 2 transaction would be aborted\nwhen the insert is executed because of a serialization failure.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 16 Jun 2001 16:38:02 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: UNIQUE INDEX unaware of transactions (a spin of\n question)" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> Jarmo Paavilainen writes:\n> \n> > *I think* this is correct behaviour, ie all that one transaction does should\n> > be visible to other transactions.\n> \n> Only in the \"read uncommitted\" transaction isolation level, which\n> PostgreSQL does not provide and isn't really that useful.\n> \n\n...\n\n> \n> You can try yourself how PostgreSQL handles this, which is probably not\n> the right thing since unique constraints are not correctly transaction\n> aware.\n\nIs there any way to make unique indexes transaction-aware ?\n\nAre competing updates on unique indexes transaction-aware ?\n\nI.e. can I be sure that if I do \n\nbegin;\nif select where key=1 result exists\nthen update where key=1\nelse insert(key,...)values(1,...)\nend;\n\nthen this will have the expected behaviour in presence of multiple \nconcurrent updaters?\n\n------------------\nHannu\n", "msg_date": "Mon, 18 Jun 2001 12:24:16 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: UNIQUE INDEX unaware of transactions (a spin of question)" }, { "msg_contents": "Hannu Krosing writes:\n\n> Is there any way to make unique indexes transaction-aware ?\n> Are competing updates on unique indexes transaction-aware ?\n\nAFAIK, indexes are not transaction-aware at all, they only provide\ninformation that there might be a visible row at the pointed-to location\nin the table. (This is also the reason that you cannot simply fetch the\ndata from the index, you always need to look at the table, too.)\n\nPersonally, I think that to support proper transaction-aware and\ndeferrable unique constraints, this needs to be done with triggers,\nsomewhat like the foreign keys.\n\n> I.e. can I be sure that if I do\n>\n> begin;\n> if select where key=1 result exists\n> then update where key=1\n> else insert(key,...)values(1,...)\n> end;\n>\n> then this will have the expected behaviour in presence of multiple\n> concurrent updaters?\n\nI guess not.\n\nThe classical example is\n\nupdate t set x = x + 1;\n\nwhich won't work if x is constrained to be unique.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 18 Jun 2001 18:24:36 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: UNIQUE INDEX unaware of transactions (a spin of question)" } ]
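Peter's closing example in this thread — `update t set x = x + 1` on a unique column — can be simulated to show the difference between checking uniqueness after every row (roughly how a unique index behaves) and checking only the statement's final state (a deferred, statement-level constraint). This is an illustrative model written for this archive, not PostgreSQL code.

```python
def increment_all(rows, check_each_row=True):
    """Simulate UPDATE t SET x = x + 1 over `rows` (distinct ints, ascending).

    With row-at-a-time checking, incrementing the first row collides with
    the not-yet-updated second row and the statement fails. A check run
    only on the final state succeeds, since the result is still unique.
    """
    result = list(rows)
    for i in range(len(result)):
        result[i] += 1
        # ...a unique index effectively checks after each single-row update:
        if check_each_row and len(set(result)) != len(result):
            raise ValueError("Cannot insert a duplicate key into unique index")
    # ...whereas a deferred constraint checks only the end state:
    if len(set(result)) != len(result):
        raise ValueError("unique constraint violated")
    return result
```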
[ { "msg_contents": "Guys,\n\nJust installed a new data base in my server and while running vacuum\nanalyze postgres dies with the following message:\n\n[...]\nNOTICE: Index pg_rewrite_oid_index: Pages 2; Tuples 16. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_rewrite_rulename_index: Pages 2; Tuples 16. CPU 0.00s/0.00u sec.\nNOTICE: --Relation pg_toast_17058--\nNOTICE: Pages 4: Changed 0, reaped 0, Empty 0, New 0; Tup 17: Vac 0, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 219, MaxLen 2034; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_toast_17058_idx: Pages 2; Tuples 17. CPU 0.00s/0.00u sec.\nNOTICE: Analyzing...\npqReadData() -- backend closed the channel unexpectedly.\n\tThis probably means the backend terminated abnormally\n\tbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n!# \n\nThe postgres version is 7.1.2 and the data base was initialized with\n\n$ LANG=es_MX /usr/bin/initdb -D /var/lib/pgsql/data -E latin1 \n\nIt is running on Redhat Linux 7.1 i686 with 2.4.2-2 kernel.\nHere is the back trace from gdb\n\n(gdb) bt\n#0 strcoll () at strcoll.c:229\n#1 0x081348e7 in varstr_cmp () at eval.c:41\n#2 0x0813493f in varstr_cmp () at eval.c:41\n#3 0x08134b7c in text_gt () at eval.c:41\n#4 0x08148ca2 in FunctionCall2 () at eval.c:41\n#5 0x080b3b09 in analyze_rel () at eval.c:41\n#6 0x080b3795 in analyze_rel () at eval.c:41\n#7 0x080afa76 in vacuum () at eval.c:41\n#8 0x080af9c7 in vacuum () at eval.c:41\n#9 0x0810a3ca in ProcessUtility () at eval.c:41\n#10 0x0810808b in pg_exec_query_string () at eval.c:41\n#11 0x081091ce in PostgresMain () at eval.c:41\n#12 0x080f208b in PostmasterMain () at eval.c:41\n#13 0x080f1c45 in PostmasterMain () at eval.c:41\n#14 0x080f0d0c in PostmasterMain () at eval.c:41\n#15 0x080f0684 in PostmasterMain () at eval.c:41\n#16 0x080cf3c8 in main () at eval.c:41\n#17 0x401e2177 in __libc_start_main (main=0x80cf260 <main>, argc=3, 
ubp_av=0xbffffa7c, init=0x8065c20 <_init>, \n fini=0x8154bb0 <_fini>, rtld_fini=0x4000e184 <_dl_fini>, stack_end=0xbffffa6c) at ../sysdeps/generic/libc-start.c:129\n(gdb) \n\nSeems like a problem with my locale settings. The \nstrange thing is that postgres dies while analyzing a system\ntable; however I'm able to vacuum my tables individually:\n\n$ for t in `psql dep dep -c '\\dt' -t -A | cut -d\\| -f1`; do psql dep -c \"vacuum analyze $t\"; done\n\nAny ideas?\n\nbest regards,\nManuel.\n", "msg_date": "15 Jun 2001 14:05:51 -0500", "msg_from": "Manuel Sugawara <masm@fciencias.unam.mx>", "msg_from_op": true, "msg_subject": "postgres dies while doing vacuum analyze" }, { "msg_contents": "Manuel Sugawara wrote:\n\n> Guys,\n> \n> Just installed a new data base in my server and while running vacuum\n> analyze postgres dies with the following message:\n> \n> [...]\n> NOTICE: Index pg_rewrite_oid_index: Pages 2; Tuples 16. CPU 0.00s/0.00u\n> sec.\n> NOTICE: Index pg_rewrite_rulename_index: Pages 2; Tuples 16. CPU\n> 0.00s/0.00u sec.\n> NOTICE: --Relation pg_toast_17058--\n> NOTICE: Pages 4: Changed 0, reaped 0, Empty 0, New 0; Tup 17: Vac 0,\n> Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 219, MaxLen 2034; Re-using:\n> Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. CPU 0.00s/0.00u sec.\n> NOTICE: Index pg_toast_17058_idx: Pages 2; Tuples 17. CPU 0.00s/0.00u\n> sec.\n> NOTICE: Analyzing...\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. 
Attempting reset: Failed.\n> !#\n> \n> The postgres version is 7.1.2 and the data base was initialized with\n> \n> $ LANG=es_MX /usr/bin/initdb -D /var/lib/pgsql/data -E latin1\n> \n> It is running on Redhat Linux 7.1 i686 with 2.4.2-2 kernel.\n> Here is the back trace from gdb\n\nTry 2.4.5 Kernel, I have the same problem with Suse 7.1 2.4.2 Kernel, since \nupdate, no more problems \n\n", "msg_date": "Fri, 15 Jun 2001 21:21:18 +0200", "msg_from": "mordicus <mordicus@free.fr>", "msg_from_op": false, "msg_subject": "Re: postgres dies while doing vacuum analyze" }, { "msg_contents": "Manuel Sugawara <masm@fciencias.unam.mx> writes:\n> [ vacuum analyze dies ]\n> It is running on Redhat Linux 7.1 i686 with 2.4.2-2 kernel.\n> Here is the back trace from gdb\n\n> (gdb) bt\n> #0 strcoll () at strcoll.c:229\n\nWe've heard reports before of strcoll() crashing on apparently valid\ninput. It seems to be a Red Hat-specific problem; the three reports\nI have in my notes are from people running RH 7.0 (check the archives\nfrom 1/1/01, 1/24/01, 3/1/01 if you want to see the prior reports).\n\nIt's possible that Postgres is doing something that confuses RH's\nlocale library, but I dunno what. Since no other platform is reporting\nit, it could also be a plain old bug in that locale library.\n\nWe need some RH-er to burrow in with a debugger and figure out what's\ngoing wrong. 
The previous reporters don't seem to have done anything;\nare you the man to fix it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Jun 2001 16:04:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postgres dies while doing vacuum analyze " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Manuel Sugawara <masm@fciencias.unam.mx> writes:\n> > [ vacuum analyze dies ]\n> > It is running on Redhat Linux 7.1 i686 with 2.4.2-2 kernel.\n> > Here is the back trace from gdb\n> \n> > (gdb) bt\n> > #0 strcoll () at strcoll.c:229\n> \n> We've heard reports before of strcoll() crashing on apparently valid\n> input. \n\nWe haven't AFAIK, but would be very interested if it can be reproduced.\n\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "15 Jun 2001 18:38:43 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: postgres dies while doing vacuum analyze" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Manuel Sugawara <masm@fciencias.unam.mx> writes:\n> > [ vacuum analyze dies ]\n> > It is running on Redhat Linux 7.1 i686 with 2.4.2-2 kernel.\n> > Here is the back trace from gdb\n> \n> > (gdb) bt\n> > #0 strcoll () at strcoll.c:229\n> \n> We've heard reports before of strcoll() crashing on apparently valid\n> input. It seems to be a Red Hat-specific problem; the three reports\n> I have in my notes are from people running RH 7.0 (check the archives\n> from 1/1/01, 1/24/01, 3/1/01 if you want to see the prior reports).\n> \n> It's possible that Postgres is doing something that confuses RH's\n> locale library, but I dunno what. Since no other platform is reporting\n> it, it could also be a plain old bug in that locale library.\n\nAfter a look into strcoll I found the bug. Attached is a tarball\nincluding a patch for strcoll, glibc.spec and an small program that\nshows the bug. 
Hopefully Trond can address this to the glibc and rpm\nexperts.\n\nbest regards,\nManuel.\n\n> \n> We need some RH-er to burrow in with a debugger and figure out what's\n> going wrong. The previous reporters don't seem to have done anything;\n> are you the man to fix it?\n> \n> \t\t\tregards, tom lane", "msg_date": "15 Jun 2001 23:52:22 -0500", "msg_from": "Manuel Sugawara <masm@fciencias.unam.mx>", "msg_from_op": true, "msg_subject": "Re: postgres dies while doing vacuum analyze" }, { "msg_contents": "Manuel Sugawara <masm@fciencias.unam.mx> writes:\n\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n> \n> > Manuel Sugawara <masm@fciencias.unam.mx> writes:\n> > > [ vacuum analyze dies ]\n> > > It is running on Redhat Linux 7.1 i686 with 2.4.2-2 kernel.\n> > > Here is the back trace from gdb\n> > \n> > > (gdb) bt\n> > > #0 strcoll () at strcoll.c:229\n> > \n> > We've heard reports before of strcoll() crashing on apparently valid\n> > input. It seems to be a Red Hat-specific problem; the three reports\n> > I have in my notes are from people running RH 7.0 (check the archives\n> > from 1/1/01, 1/24/01, 3/1/01 if you want to see the prior reports).\n> > \n> > It's possible that Postgres is doing something that confuses RH's\n> > locale library, but I dunno what. Since no other platform is reporting\n> > it, it could also be a plain old bug in that locale library.\n> \n> After a look into strcoll I found the bug. Attached is a tarball\n> including a patch for strcoll, glibc.spec and an small program that\n> shows the bug.\n\nWill do... what is the expected result of the testcase? It seems to\nwork alright for me, but I'm running a slightly newer version than we\nhave released yet... 
(glibc-2.2.3-11, look in rawhide).\n\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "16 Jun 2001 09:41:37 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: postgres dies while doing vacuum analyze" }, { "msg_contents": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> Will do... what is the expected result of the testcase?\n\nGiven a sufficiently large discrepancy between the string lengths,\na core dump is the likely result. Try increasing the \"16k\" numbers\nif it doesn't crash for you.\n\nGood work, Manuel! I'm surprised this hasn't been found before, because\nyou'd think it'd be biting lots of people ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 16 Jun 2001 11:46:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postgres dies while doing vacuum analyze " }, { "msg_contents": "teg@redhat.com (Trond Eivind Glomsrød) writes:\n\n> Will do... what is the expected result of the testcase? 
It seems to\n> work alright for me, but I'm running a slightly newer version than we\n> have released yet... (glibc-2.2.3-11, look in rawhide).\n\na core dump, at least on glibc-2.2.2-10. Try with some locale\ndifferent than C or POSIX.\n\nmasm@dep1$ LC_COLLATE=es_MX ./strcoll-bug\nes_MX\nzsh: 25041 segmentation fault (core dumped) LC_COLLATE=es_MX ./strcoll-bug\nmasm@dep1$ LC_COLLATE=C ./strcoll-bug\nC\nstrcoll returned -1\nmasm@dep1$ \n\nregards,\nManuel. \n\n> \n> \n> -- \n> Trond Eivind Glomsrød\n> Red Hat, Inc.\n", "msg_date": "16 Jun 2001 12:46:05 -0500", "msg_from": "Manuel Sugawara <masm@fciencias.unam.mx>", "msg_from_op": true, "msg_subject": "Re: postgres dies while doing vacuum analyze" }, { "msg_contents": "Manuel Sugawara <masm@fciencias.unam.mx> writes:\n\n> teg@redhat.com (Trond Eivind Glomsrød) writes:\n> \n> > Will do... what is the expected result of the testcase? 
I'll\n> > take a look at the glibc sources to verify that, but it looks like\n> > this was fixed by drepper@redhat.com and included in glibc 2.2.3:\n> > https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=36539\n>\n> yes, is already fixed on glibc-2.2.3. It's safe to install this\n> version on my 7.1 systems\n\nThe 2.2.3-11 should be safe, we would be very interested to hear\nothwerwise.\n\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n\n", "msg_date": "Sat, 16 Jun 2001 16:49:33 -0400 (EDT)", "msg_from": "=?ISO-8859-1?Q?Trond_Eivind_Glomsr=F8d?= <teg@redhat.com>", "msg_from_op": false, "msg_subject": "Re: postgres dies while doing vacuum analyze" } ]
[ { "msg_contents": "OpenBSD lacks RTLD_GLOBAL, so it should be removed from\ndynloader/openbsd.h\n\n(netbsd and freebsd do have it, so its kosher there)\n\n-alex\n\n", "msg_date": "Fri, 15 Jun 2001 18:41:27 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "RTLD_GLOBAL on openbsd" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n> OpenBSD lacks RTLD_GLOBAL, so it should be removed from\n> dynloader/openbsd.h\n\nI agree with this and my regression tests now pass. Can someone please\napply the attached patch?\n\n- - Brandon\n\n- ----------------------------------------------------------------------------\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: PGPfreeware 5.0i for non-commercial use\nCharset: noconv\n\niQA/AwUBOyuO0PYgmKoG+YbuEQK+agCfde8j3y0KxbwhUqIyVNpEjEqCp74AoPyt\nb7+D94bTAw/yXYIValtBNB7d\n=eisL\n-----END PGP SIGNATURE-----", "msg_date": "Sat, 16 Jun 2001 12:52:22 -0400 (EDT)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": false, "msg_subject": "Re: RTLD_GLOBAL on openbsd" }, { "msg_contents": "Alex Pilosov writes:\n\n> OpenBSD lacks RTLD_GLOBAL, so it should be removed from\n> dynloader/openbsd.h\n\nOkay.\n\nBtw., couldn't we replace dynloader/openbsd.h with something more like,\nsay, dynloader/sunos4.h and get rid of the BSD44_derived_dl*() functions?\nAFAICT, the issue was that on earlier (Net|Free)BSD systems you needed to\nprepend an '_' to the looked-up symbol, but I could not find any reference\nto this in the OpenBSD documentation.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 16 Jun 2001 19:05:53 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: RTLD_GLOBAL on openbsd" }, { "msg_contents": "On Sat, 16 Jun 2001, Peter Eisentraut wrote:\n\n> Alex Pilosov writes:\n> \n> > OpenBSD lacks 
RTLD_GLOBAL, so it should be removed from\n> > dynloader/openbsd.h\n> \n> Okay.\n> \n> Btw., couldn't we replace dynloader/openbsd.h with something more like,\n> say, dynloader/sunos4.h and get rid of the BSD44_derived_dl*() functions?\n> AFAICT, the issue was that on earlier (Net|Free)BSD systems you needed to\n> prepend an '_' to the looked-up symbol, but I could not find any reference\n> to this in the OpenBSD documentation.\nOn OpenBSD, you still need to do that. I haven't checked Free yet, but I\nthink that's still true also.\n-alex\n\t\n\n", "msg_date": "Sat, 16 Jun 2001 20:10:11 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: RTLD_GLOBAL on openbsd" }, { "msg_contents": "Alex Pilosov writes:\n\n> OpenBSD lacks RTLD_GLOBAL, so it should be removed from\n> dynloader/openbsd.h\n\nRTLD_GLOBAL removed. You better hope it acts like it by default, though.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 20 Jun 2001 20:36:09 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: RTLD_GLOBAL on openbsd" } ]
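The flag at issue in the thread above, RTLD_GLOBAL, asks dlopen() to add a loaded object's symbols to the process-global namespace so that subsequently loaded objects can resolve against them — which is exactly what a backend needs when one loadable module's symbols must be visible to another. A small ctypes sketch of the same dlopen flag (assuming a glibc-style platform where `CDLL(None)` opens the running process itself; this is illustration, not the postmaster's actual loader code):

```python
import ctypes

# dlopen(NULL, RTLD_NOW | RTLD_GLOBAL) via ctypes: load the process's own
# symbol table and publish it globally. RTLD_GLOBAL is the flag OpenBSD's
# dlfcn lacked at the time of the thread above.
libc = ctypes.CDLL(None, mode=ctypes.RTLD_GLOBAL)

libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t
print(libc.strlen(b"dynloader"))
```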
[ { "msg_contents": "If USE_READLINE exists, but HAVE_RL_FILENAME_COMPLETION_FUNCTION does not,\n-current build breaks:\ngcc -O2 -pipe -Wall -Wmissing-prototypes -Wmissing-declarations\n-I../../../src/interfaces/libpq -I../../../src/include -c tab-complete.c\n-o tab-complete.o\ntab-complete.c: In function `psql_completion':\ntab-complete.c:754: `filename_completion_function' undeclared (first use\nin this function)\ntab-complete.c:754: (Each undeclared identifier is reported only once\ntab-complete.c:754: for each function it appears in.)\n\nI am not sure what should it be defined as when RL_FILENAME_COMPLETION is\nnot available...?\n\n-alex\n\n", "msg_date": "Fri, 15 Jun 2001 18:54:59 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "[current] readline breakage " }, { "msg_contents": "Alex Pilosov writes:\n\n> If USE_READLINE exists, but HAVE_RL_FILENAME_COMPLETION_FUNCTION does not,\n> -current build breaks:\n> gcc -O2 -pipe -Wall -Wmissing-prototypes -Wmissing-declarations\n> -I../../../src/interfaces/libpq -I../../../src/include -c tab-complete.c\n> -o tab-complete.o\n> tab-complete.c: In function `psql_completion':\n> tab-complete.c:754: `filename_completion_function' undeclared (first use\n> in this function)\n> tab-complete.c:754: (Each undeclared identifier is reported only once\n> tab-complete.c:754: for each function it appears in.)\n>\n> I am not sure what should it be defined as when RL_FILENAME_COMPLETION is\n> not available...?\n\nIt should be in the readline.h header file. 
Is this yet another case of a\nbroken OpenBSD readline installation?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 16 Jun 2001 01:15:07 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [current] readline breakage " }, { "msg_contents": "On Sat, 16 Jun 2001, Peter Eisentraut wrote:\n\n> Alex Pilosov writes:\n> \n> > If USE_READLINE exists, but HAVE_RL_FILENAME_COMPLETION_FUNCTION does not,\n> > -current build breaks:\n> > gcc -O2 -pipe -Wall -Wmissing-prototypes -Wmissing-declarations\n> > -I../../../src/interfaces/libpq -I../../../src/include -c tab-complete.c\n> > -o tab-complete.o\n> > tab-complete.c: In function `psql_completion':\n> > tab-complete.c:754: `filename_completion_function' undeclared (first use\n> > in this function)\n> > tab-complete.c:754: (Each undeclared identifier is reported only once\n> > tab-complete.c:754: for each function it appears in.)\n> >\n> > I am not sure what should it be defined as when RL_FILENAME_COMPLETION is\n> > not available...?\n> \n> It should be in the readline.h header file. Is this yet another case of a\n> broken OpenBSD readline installation?\n\nYeah, sorry, my fault. OpenBSD 2.8 ships with a broken readline (dated\n1996). 2.9 has a recent GNU readline. I don't think its worth supporting\n2.8 readline either, but I forgot to check the archives.\n\n-alex\n\n", "msg_date": "Fri, 15 Jun 2001 22:13:51 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: [current] readline breakage " }, { "msg_contents": "Alex Pilosov writes:\n\n> > > tab-complete.c: In function `psql_completion':\n> > > tab-complete.c:754: `filename_completion_function' undeclared (first use\n> > > in this function)\n\n> Yeah, sorry, my fault. OpenBSD 2.8 ships with a broken readline (dated\n> 1996). 2.9 has a recent GNU readline. 
I don't think its worth supporting\n> 2.8 readline either, but I forgot to check the archives.\n\nWe used to declare filename_completion_function explicitly, but it seems\nto have been removed recently by a Cygwin-related patch from Jason\nTishler. Jason, was this required to work on Cygwin or was it merely a\n\"cleaning up\" issue?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 16 Jun 2001 18:51:59 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [current] readline breakage " }, { "msg_contents": "Peter,\n\n[This may not make it to pgsql-hackers since I'm not subscribed, feel\nfree to forward it on my behalf...]\n\nOn Sat, Jun 16, 2001 at 06:51:59PM +0200, Peter Eisentraut wrote:\n> Alex Pilosov writes:\n> \n> > > > tab-complete.c: In function `psql_completion':\n> > > > tab-complete.c:754: `filename_completion_function' undeclared (first use\n> > > > in this function)\n> \n> > Yeah, sorry, my fault. OpenBSD 2.8 ships with a broken readline (dated\n> > 1996). 2.9 has a recent GNU readline. I don't think its worth supporting\n> > 2.8 readline either, but I forgot to check the archives.\n> \n> We used to declare filename_completion_function explicitly, but it seems\n> to have been removed recently by a Cygwin-related patch from Jason\n> Tishler. Jason, was this required to work on Cygwin or was it merely a\n> \"cleaning up\" issue?\n\nYes, this change is required under Cygwin when building against\nreadline 4.2. Note that my patch is modeled after the one that you\ndid for completion_matches():\n\n http://www.ca.postgresql.org/~petere/rl42-pg.patch\n\nShouldn't the declaration for filename_completion_function() be picked up\nvia readline.h? 
IMO, redeclaring functions especially from an external\nlibrary (i.e., readline) is generally not considered good programming\npractice.\n\nThanks,\nJason\n\n-- \nJason Tishler\nDirector, Software Engineering Phone: 732.264.8770 x235\nDot Hill Systems Corp. Fax: 732.264.8798\n82 Bethany Road, Suite 7 Email: Jason.Tishler@dothill.com\nHazlet, NJ 07730 USA WWW: http://www.dothill.com\n", "msg_date": "Sat, 16 Jun 2001 22:10:27 -0400", "msg_from": "Jason Tishler <Jason.Tishler@dothill.com>", "msg_from_op": false, "msg_subject": "Re: [current] readline breakage" }, { "msg_contents": "Jason Tishler writes:\n\n> Shouldn't the declaration for filename_completion_function() be picked up\n> via readline.h? IMO, redeclaring functions especially from an external\n> library (i.e., readline) is generally not considered good programming\n> practice.\n\nIt should, but on some systems it evidently isn't. But since on Cygwin a\ncorrect import/export decorated declaration should be in the header files,\nwould a second declaration without those attributes override or otherwise\ninterfere with that? Otherwise I might have to stick it back covered by\nsome #ifdef's.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sun, 17 Jun 2001 12:08:59 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [current] readline breakage" }, { "msg_contents": "Peter,\n\nOn Sun, Jun 17, 2001 at 12:08:59PM +0200, Peter Eisentraut wrote:\n> Jason Tishler writes:\n> \n> > Shouldn't the declaration for filename_completion_function() be picked up\n> > via readline.h? IMO, redeclaring functions especially from an external\n> > library (i.e., readline) is generally not considered good programming\n> > practice.\n> \n> It should, but on some systems it evidently isn't. 
But since on Cygwin a\n> correct import/export decorated declaration should be in the header files,\n> would a second declaration without those attributes override or otherwise\n> interfere with that?\n\nNo, after applying the attached patch, Cygwin psql built against readline\n4.2 without any problems. After some reflection, this outcome should\nhave been obvious to me since my previous distributions built fine with\nthe duplicate (but different) filename_completion_function() declarations.\n\nNote that I did not test this patch against the other three configuration\nthat I used to test my original patch but I would not anticipate any issues\nwith these configurations either.\n\n> Otherwise I might have to stick it back covered by some #ifdef's.\n\nThe #ifdefs do not appear to be necessary.\n\nDo you want to submit this patch to pgsql-patches or should I?\n\nThanks,\nJason\n\n-- \nJason Tishler\nDirector, Software Engineering Phone: 732.264.8770 x235\nDot Hill Systems Corp. Fax: 732.264.8798\n82 Bethany Road, Suite 7 Email: Jason.Tishler@dothill.com\nHazlet, NJ 07730 USA WWW: http://www.dothill.com", "msg_date": "Sun, 17 Jun 2001 20:50:23 -0400", "msg_from": "Jason Tishler <Jason.Tishler@dothill.com>", "msg_from_op": false, "msg_subject": "Re: [current] readline breakage" }, { "msg_contents": "I have checked this in:\n\n*** tab-complete.c 2001/06/11 22:12:48 1.33\n--- tab-complete.c 2001/06/20 18:37:09\n***************\n*** 62,67 ****\n--- 62,70 ----\n\n #ifdef HAVE_RL_FILENAME_COMPLETION_FUNCTION\n #define filename_completion_function rl_filename_completion_function\n+ #else\n+ /* missing in some header files */\n+ extern char *filename_completion_function();\n #endif\n\n #ifdef HAVE_RL_COMPLETION_MATCHES\n\nI hope it still works for both of you. 
;-)\n\n\nJason Tishler writes:\n\n> Peter,\n>\n> On Sun, Jun 17, 2001 at 12:08:59PM +0200, Peter Eisentraut wrote:\n> > Jason Tishler writes:\n> >\n> > > Shouldn't the declaration for filename_completion_function() be picked up\n> > > via readline.h? IMO, redeclaring functions especially from an external\n> > > library (i.e., readline) is generally not considered good programming\n> > > practice.\n> >\n> > It should, but on some systems it evidently isn't. But since on Cygwin a\n> > correct import/export decorated declaration should be in the header files,\n> > would a second declaration without those attributes override or otherwise\n> > interfere with that?\n>\n> No, after applying the attached patch, Cygwin psql built against readline\n> 4.2 without any problems. After some reflection, this outcome should\n> have been obvious to me since my previous distributions built fine with\n> the duplicate (but different) filename_completion_function() declarations.\n>\n> Note that I did not test this patch against the other three configuration\n> that I used to test my original patch but I would not anticipate any issues\n> with these configurations either.\n>\n> > Otherwise I might have to stick it back covered by some #ifdef's.\n>\n> The #ifdefs do not appear to be necessary.\n>\n> Do you want to submit this patch to pgsql-patches or should I?\n>\n> Thanks,\n> Jason\n>\n>\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 20 Jun 2001 20:42:06 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [current] readline breakage" }, { "msg_contents": "Peter,\n\nOn Wed, Jun 20, 2001 at 08:42:06PM +0200, Peter Eisentraut wrote:\n> I have checked this in:\n> \n> *** tab-complete.c 2001/06/11 22:12:48 1.33\n> --- tab-complete.c 2001/06/20 18:37:09\n> ***************\n> *** 62,67 ****\n> --- 62,70 ----\n> \n> #ifdef HAVE_RL_FILENAME_COMPLETION_FUNCTION\n> #define 
filename_completion_function rl_filename_completion_function\n> + #else\n> + /* missing in some header files */\n> + extern char *filename_completion_function();\n> #endif\n> \n> #ifdef HAVE_RL_COMPLETION_MATCHES\n> \n> I hope it still works for both of you. ;-)\n\nI just tried the above with Cygwin/readline 4.2 (which is more important\nthan Cygwin/readline 4.1 -- at least to me) and it still works.\n\nThanks,\nJason\n\n-- \nJason Tishler\nDirector, Software Engineering Phone: 732.264.8770 x235\nDot Hill Systems Corp. Fax: 732.264.8798\n82 Bethany Road, Suite 7 Email: Jason.Tishler@dothill.com\nHazlet, NJ 07730 USA WWW: http://www.dothill.com\n", "msg_date": "Thu, 21 Jun 2001 08:56:11 -0400", "msg_from": "Jason Tishler <Jason.Tishler@dothill.com>", "msg_from_op": false, "msg_subject": "Re: [current] readline breakage" } ]
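The fix that closes the thread above is a classic compatibility shim: prefer the new symbol name (`rl_filename_completion_function`) when the headers advertise it, otherwise fall back to an explicit declaration of the old name. The same feature-detection pattern, sketched in Python terms (`SimpleNamespace` stands in for the readline module, and the attribute names simply mirror the C identifiers — this is an illustration of the pattern, not real readline bindings):

```python
from types import SimpleNamespace

def resolve_completion_func(rl):
    # HAVE_RL_FILENAME_COMPLETION_FUNCTION case: the new name exists.
    func = getattr(rl, "rl_filename_completion_function", None)
    if func is not None:
        return func
    # Fallback: an older library only exports the pre-4.2 name,
    # analogous to the guarded "extern char *..." declaration.
    return rl.filename_completion_function

new_rl = SimpleNamespace(rl_filename_completion_function=lambda: "new")
old_rl = SimpleNamespace(filename_completion_function=lambda: "old")
print(resolve_completion_func(new_rl)())  # new
print(resolve_completion_func(old_rl)())  # old
```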
[ { "msg_contents": "This is second take at indexability of << operator for inet types.\n\nPlease take a look at it. \n\nAlso, I have a question: I put in a regression test to check that the type\ncan be indexed, by doing 'explain select ...'. However, the expected\nresult may vary when the optimizer is tweaked. \n\nI am not sure if its a good idea to check for that, so feel free to not\ncommit the regression test part of this patch...If there's a better way to\ncheck that the query will use the index in regression test, I'd like to\nknow too.\n\n-alex", "msg_date": "Fri, 15 Jun 2001 23:13:07 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "[PATCH] inet << indexability" }, { "msg_contents": "Augh. Previous patch had some garbage changes in it. Sorry. This one is\nclean...I promise, I'll get better at this.\n\n-alex\n\nOn Fri, 15 Jun 2001, Alex Pilosov wrote:\n\n> This is second take at indexability of << operator for inet types.\n> \n> Please take a look at it. \n> \n> Also, I have a question: I put in a regression test to check that the type\n> can be indexed, by doing 'explain select ...'. However, the expected\n> result may vary when the optimizer is tweaked. \n> \n> I am not sure if its a good idea to check for that, so feel free to not\n> commit the regression test part of this patch...If there's a better way to\n> check that the query will use the index in regression test, I'd like to\n> know too.\n> \n> -alex\n>", "msg_date": "Fri, 15 Jun 2001 23:33:42 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "(Really) Re: [PATCH] inet << indexability" }, { "msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> Also, I have a question: I put in a regression test to check that the type\n> can be indexed, by doing 'explain select ...'. However, the expected\n> result may vary when the optimizer is tweaked. 
\n\nYes, I'd noted that already in looking at your prior version. I think\nit's best not to do an EXPLAIN in the regress test, because I don't want\nto have to touch the tests every time the cost functions are tweaked.\nHowever, we can certainly check to make sure that the results of an\nindexscan are what we expect. Is the table set up so that this is a\nuseful test case? For example, it'd be nice to check boundary\nconditions (eg, try both << and <<= on a case where they should give\ndifferent results).\n\nDo you have any thought of making network_scan_first and\nnetwork_scan_last user-visible functions? (Offhand I see no use for\nthem to a user, but maybe you do.) If not, I'd suggest not using the\nfmgr call protocol for them, but just making them pass and return inet*,\nor possibly Datum. No need for the extra notational cruft of\nDirectFunctionCall.\n\nAnother minor stylistic gripe is that you should use bool/true/false\nwhere appropriate, not int/1/0. Otherwise it looks pretty good.\n\nOh, one more thing: those dynloader/openbsd.h and psql/tab-complete.c\nchanges don't belong in this patch...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 16 Jun 2001 12:10:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] inet << indexability " }, { "msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> I didn't want to make them user-visible, however, the alternative, IMHO,\n> is worse, since these functions rely on network_broadcast and\n> network_network to do the work, calling sequence would be:\n> a) indxpath casts Datum to inet, passes to network_scan*\n> b) network_scan will create new Datum, pass it to network_broadcast\n> c) network_scan will extract inet from Datum returned\n> d) indxpath will then cast inet back to Datum :)\n> Which, I think, is pretty messy :)\n\nSure, but you could make them look like\n\n\tDatum network_scan_first(Datum networkaddress)\n\nwithout incurring any of that overhead. 
(Anyway, Datum <-> inet* is\nonly a cast.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 16 Jun 2001 14:43:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] inet << indexability " }, { "msg_contents": "On Sat, 16 Jun 2001, Tom Lane wrote:\n\n> Alex Pilosov <alex@pilosoft.com> writes:\n> > Also, I have a question: I put in a regression test to check that the type\n> > can be indexed, by doing 'explain select ...'. However, the expected\n> > result may vary when the optimizer is tweaked. \n> \n> Yes, I'd noted that already in looking at your prior version. I think\n> it's best not to do an EXPLAIN in the regress test, because I don't want\n> to have to touch the tests every time the cost functions are tweaked.\nI'll remove it with resubmitted patch.\n\n> However, we can certainly check to make sure that the results of an\n> indexscan are what we expect. Is the table set up so that this is a\n> useful test case? For example, it'd be nice to check boundary\n> conditions (eg, try both << and <<= on a case where they should give\n> different results).\nI'll do that too.\n\n> Do you have any thought of making network_scan_first and\n> network_scan_last user-visible functions? (Offhand I see no use for\n> them to a user, but maybe you do.) If not, I'd suggest not using the\n> fmgr call protocol for them, but just making them pass and return inet*,\n> or possibly Datum. 
No need for the extra notational cruft of\n> DirectFunctionCall.\nI didn't want to make them user-visible, however, the alternative, IMHO,\nis worse, since these functions rely on network_broadcast and\nnetwork_network to do the work, calling sequence would be:\na) indxpath casts Datum to inet, passes to network_scan*\nb) network_scan will create new Datum, pass it to network_broadcast\nc) network_scan will extract inet from Datum returned\nd) indxpath will then cast inet back to Datum :)\n\nWhich, I think, is pretty messy :)\n\n> Another minor stylistic gripe is that you should use bool/true/false\n> where appropriate, not int/1/0. Otherwise it looks pretty good.\nI'll clean it up and resubmit. \n\n> Oh, one more thing: those dynloader/openbsd.h and psql/tab-complete.c\n> changes don't belong in this patch...\nSorry, my fault \n\n-alex\n\n", "msg_date": "Sat, 16 Jun 2001 14:49:04 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] inet << indexability " }, { "msg_contents": "On Sat, 16 Jun 2001, Tom Lane wrote:\n\n> Alex Pilosov <alex@pilosoft.com> writes:\n> > I didn't want to make them user-visible, however, the alternative, IMHO,\n> > is worse, since these functions rely on network_broadcast and\n> > network_network to do the work, calling sequence would be:\n> > a) indxpath casts Datum to inet, passes to network_scan*\n> > b) network_scan will create new Datum, pass it to network_broadcast\n> > c) network_scan will extract inet from Datum returned\n> > d) indxpath will then cast inet back to Datum :)\n> > Which, I think, is pretty messy :)\n> \n> Sure, but you could make them look like\n> \n> \tDatum network_scan_first(Datum networkaddress)\n> \n> without incurring any of that overhead.
(Anyway, Datum <-> inet* is\n> only a cast.)\nGotcha, I misunderstood you the first time.\nThanks\n\n-alex\n\n\n", "msg_date": "Sat, 16 Jun 2001 15:20:38 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] inet << indexability " }, { "msg_contents": "Your patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> Augh. Previous patch had some garbage changes in it. Sorry. This one is\n> clean...I promise, I'll get better at this.\n> \n> -alex\n> \n> On Fri, 15 Jun 2001, Alex Pilosov wrote:\n> \n> > This is second take at indexability of << operator for inet types.\n> > \n> > Please take a look at it. \n> > \n> > Also, I have a question: I put in a regression test to check that the type\n> > can be indexed, by doing 'explain select ...'. However, the expected\n> > result may vary when the optimizer is tweaked. \n> > \n> > I am not sure if its a good idea to check for that, so feel free to not\n> > commit the regression test part of this patch...If there's a better way to\n> > check that the query will use the index in regression test, I'd like to\n> > know too.\n> > \n> > -alex\n> > \n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 18 Jun 2001 13:34:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: (Really) Re: [PATCH] inet << indexability" }, { "msg_contents": "> Tom already merged [latest version of this patch] it in, so you can delete\n> this one from patch list.\n\nOh, OK.
I thought he had done that but I couldn't find the commit\nmessage in my mailbox.\n\n\n> \n> Thanks\n> -alex\n> \n> \n> On Mon, 18 Jun 2001, Bruce Momjian wrote:\n> \n> > Your patch has been added to the PostgreSQL unapplied patches list at:\n> > \n> > \thttp://candle.pha.pa.us/cgi-bin/pgpatches\n> > \n> > I will try to apply it within the next 48 hours.\n> > \n> > > Augh. Previous patch had some garbage changes in it. Sorry. This one is\n> > > clean...I promise, I'll get better at this.\n> > > \n> > > -alex\n> > > \n> > > On Fri, 15 Jun 2001, Alex Pilosov wrote:\n> > > \n> > > > This is second take at indexability of << operator for inet types.\n> > > > \n> > > > Please take a look at it. \n> > > > \n> > > > Also, I have a question: I put in a regression test to check that the type\n> > > > can be indexed, by doing 'explain select ...'. However, the expected\n> > > > result may vary when the optimizer is tweaked. \n> > > > \n> > > > I am not sure if its a good idea to check for that, so feel free to not\n> > > > commit the regression test part of this patch...If there's a better way to\n> > > > check that the query will use the index in regression test, I'd like to\n> > > > know too.\n> > > > \n> > > > -alex\n> > > > \n> > \n> > Content-Description: \n> > \n> > [ Attachment, skipping... ]\n> > \n> > > \n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 4: Don't 'kill -9' the postmaster\n> > \n> > \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup.
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 18 Jun 2001 14:14:00 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: (Really) Re: [PATCH] inet << indexability" }, { "msg_contents": "Tom already merged [latest version of this patch] it in, so you can delete\nthis one from patch list.\n\nThanks\n-alex\n\n\nOn Mon, 18 Jun 2001, Bruce Momjian wrote:\n\n> Your patch has been added to the PostgreSQL unapplied patches list at:\n> \n> \thttp://candle.pha.pa.us/cgi-bin/pgpatches\n> \n> I will try to apply it within the next 48 hours.\n> \n> > Augh. Previous patch had some garbage changes in it. Sorry. This one is\n> > clean...I promise, I'll get better at this.\n> > \n> > -alex\n> > \n> > On Fri, 15 Jun 2001, Alex Pilosov wrote:\n> > \n> > > This is second take at indexability of << operator for inet types.\n> > > \n> > > Please take a look at it. \n> > > \n> > > Also, I have a question: I put in a regression test to check that the type\n> > > can be indexed, by doing 'explain select ...'. However, the expected\n> > > result may vary when the optimizer is tweaked. \n> > > \n> > > I am not sure if its a good idea to check for that, so feel free to not\n> > > commit the regression test part of this patch...If there's a better way to\n> > > check that the query will use the index in regression test, I'd like to\n> > > know too.\n> > > \n> > > -alex\n> > > \n> \n> Content-Description: \n> \n> [ Attachment, skipping... ]\n> \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> \n> \n\n", "msg_date": "Mon, 18 Jun 2001 14:23:08 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: (Really) Re: [PATCH] inet << indexability" }, { "msg_contents": "> ...
best not to do an EXPLAIN in the regress test, because I don't want\n> to have to touch the tests every time the cost functions are tweaked...\n\nAt some point we really should have an \"optimizer\" regression test, so\ny'all *do* have to touch some regression test when the cost functions\nare tweaked. If it were isolated into a single test, with appropriate\ncomments to keep it easy to remember *why* a result should be a certain\nway, then it should be manageable and make it easier to proactively\nevaluate changes.\n\nIt likely would have have full coverage, but at least some over/under\ntest cases would help...\n\n - Thomas\n", "msg_date": "Mon, 18 Jun 2001 19:38:33 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: [PATCH] inet << indexability" } ]
[ { "msg_contents": "\nHi to list,\n\nI had my database installed in my Slackware 7.0 (kernel 2.2.6). I was\nusing postgres 7.1 version. Now the database got corrupted. I had no clue\nhow it got corrupted. Basically, i did found that the data of various\nusers existing (in /usr/local/pgsql/data directory). But there were no\ndata existing in the following system catalogs.\n\npg_database. (except template0 & template1)\npg_shadow. (except postgres user. Previously 15 users were there.)\npg_tables. (Not showing any user created tables. Only system tables\nget displayed).\n\nBut the data structure of individual tables exist somewhere. Suppose if i\ndo 'select * from tablename' then field names gets displayed but says 'no\nrows'. At the same time, i am unable to get the structure through\n\\d tablename. \n\nIs it possible to recover the data in any way. I am having a old dump of\ndata which i can use to reinstall but i wanted to know how it happened in\nthe first place. Atleast, i can take some steps so that it doesn't happen\nagain. \n\nCould some one there help me out to recover the data.\nAny help would be appreciated.\n\nRegards,\nGuru Prasad\n\n\n", "msg_date": "Sat, 16 Jun 2001 11:10:54 +0530 (IST)", "msg_from": "Guru Prasad <pnguruji@yahoo.com>", "msg_from_op": true, "msg_subject": "Postgres" }, { "msg_contents": "Guru Prasad <pnguruji@yahoo.com> writes:\n> using postgres 7.1 version. Now the database got corrupted. I had no clue\n> how it got corrupted. Basically, i did found that the data of various\n> users existing (in /usr/local/pgsql/data directory). But there were no\n> data existing in the following system catalogs.\n\n> pg_database. (except template0 & template1)\n> pg_shadow. (except postgres user. Previously 15 users were there.)\n> pg_tables. (Not showing any user created tables. Only system tables\n> get displayed).\n\n> But the data structure of individual tables exist somewhere.
Suppose if i\n> do 'select * from tablename' then field names gets displayed but says 'no\n> rows'. At the same time, i am unable to get the structure through\n> \\d tablename. \n\nIf there's no entry for your database in pg_database, how were you able\nto connect to do a 'select * from tablename'?\n\nI'd like to see exactly what you did and exactly what results you got,\nnot your interpretations about whether there's data in tables or not.\nWhatever's going on here is probably more subtle than that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 16 Jun 2001 12:54:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres " } ]
[ { "msg_contents": "Hi All,\n\nHere is a nice article comparing OSea to run hardcore net apps:\n\nhttp://www.sysadminmag.com/articles/2001/0107/0107a/0107a.htm\n\nThe article compares Linux (RH), FreeBSD, Solaris (Intel), and Windows 2000.\nThe tests were performed against the implementation TCP/IP Architecture\non these platforms with different system calls, file systems tests (EXT2 for Linux, UFS for FreeBSD\nand Solaris, and NTFS for Windows 2000) for creating writing, and reading\n10,000 files, so some apps like different DBMS or other networks apps\ndepending on heavy disk I/O would perform on the OSes, and test of various network\napplications based on number of simultaneous connections, process-based vs. thread-based\nand sync vs. async connection handling architectures.\n\nHope it might be helpful for you :)\n\nSerguei\n\n\n\n", "msg_date": "Sat, 16 Jun 2001 13:45:20 -0400", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": true, "msg_subject": "Article: Network performance by OS" } ]
[ { "msg_contents": "Well, after persuading cvsup and cvs that it _is_ possible to have local\nmodifiable repositories, I have a clean untrusted plperl patch to offer\nyou :)\n\nHighlights:\n* There's one perl interpreter used for both trusted and untrusted\nprocedures. I do think its unnecessary to keep two perl\ninterpreters around. If someone can break out from trusted \"Safe\" perl \nmode, well, they can do what they want already. If someone disagrees, I\ncan change this.\n\n* Opcode is not statically loaded anymore. Instead, we load Dynaloader,\nwhich then can grab Opcode (and anything else you can 'use') on its own.\n\n* Checked to work on FreeBSD 4.3 + perl 5.5.3 , OpenBSD 2.8 + perl5.6.1,\nRedHat 6.2 + perl 5.5.3\n\n* Uses ExtUtils::Embed to find what options are necessary to link with\nperl shared libraries\n\n* createlang is also updated, it can create untrusted perl using 'plperlu'\n\n* Example script (assuming you have Mail::Sendmail installed):\ncreate function foo() returns text as '\n use Mail::Sendmail;\n\n %mail = ( To => q(you@yourname.com),\n From => q(me@here.com),\n Message => \"This is a very short message\"\n );\n sendmail(%mail) or die $Mail::Sendmail::error;\nreturn \"OK.
Log says:\\n\", $Mail::Sendmail::log;\n' language 'plperlu';\n\n\n(well, change the name in the To: line :)\n\n\nHope someone finds that useful and maybe even merged :)\n\n-alex", "msg_date": "Sat, 16 Jun 2001 20:02:25 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "[PATCH] untrusted plperl" }, { "msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> Hope someone finds that useful and maybe even merged :)\n\nIt looks great to me (except you forgot the documentation updates ;)).\nBut it'd be nice to get a Perl expert to comment on the thing,\nparticularly on the safe/unsafe-in-one-interpreter business.\n\nOne thought that comes to mind: seems like it'd be possible to\ncommunicate via Perl global variables, functions, etc between\nsafe and unsafe functions. This might be good, or it might be\na vehicle for bypassing the safety restrictions. We should\nthink hard about that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 16 Jun 2001 23:20:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] untrusted plperl " }, { "msg_contents": "On Sat, 16 Jun 2001, Tom Lane wrote:\n\n> Alex Pilosov <alex@pilosoft.com> writes:\n> > Hope someone finds that useful and maybe even merged :)\n> \n> It looks great to me (except you forgot the documentation updates ;)).\nMy bad! I'll find whereever plperl is mentioned and note plperlu\nexistance.\n\n> But it'd be nice to get a Perl expert to comment on the thing,\n> particularly on the safe/unsafe-in-one-interpreter business.\nI'm no expert, and biased since I wrote it this way, but here's the\nskinny:\n\n1) safe functions has a unique namespace, and may not escape from it.\n(or should not, if Safe module works right). \n\n2) there were attacks on Safe module that resulted in ability to set\nvariables outside of your namespace.
None known now.\n\n3) There's an existing problem with AUTOLOAD and Safe, which doesn't apply\nto us, since you can't 'use' a module in a Safe compartment.\n\nTo be truly paranoid, one must have separate interpreters, but that kills\nthe idea of sharing variables. (Actually, when PgSPI is done (see next\nemail), it would be possible to do so via SPI).\n\nI'm awaiting opinion of a real perl expert, tho ;)\n\n> One thought that comes to mind: seems like it'd be possible to\n> communicate via Perl global variables, functions, etc between\n> safe and unsafe functions. This might be good, or it might be\n> a vehicle for bypassing the safety restrictions. We should\n> think hard about that.\nYeah. I thought about that. Thing is, you have to predeclare all variables\nyou want to share with safe functions. I think it would make sense to have\na global hash, named $safe_info (well, $main::safe_info) which would be\nshared. Unfortunately, there's no way to have 'readonly' share, so\nsafe functions should not rely on $safe_info, as it could be corrupted by\nunsafe functions...\n\n-alex\n\n", "msg_date": "Sun, 17 Jun 2001 09:52:53 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] untrusted plperl " }, { "msg_contents": "\nAlex, seems you have done sufficient research to be sure this is OK.\n\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> On Sat, 16 Jun 2001, Tom Lane wrote:\n> \n> > Alex Pilosov <alex@pilosoft.com> writes:\n> > > Hope someone finds that useful and maybe even merged :)\n> > \n> > It looks great to me (except you forgot the documentation updates ;)).\n> My bad!
I'll find whereever plperl is mentioned and note plperlu\n> existance.\n> \n> > But it'd be nice to get a Perl expert to comment on the thing,\n> > particularly on the safe/unsafe-in-one-interpreter business.\n> I'm no expert, and biased since I wrote it this way, but here's the\n> skinny:\n> \n> 1) safe functions has a unique namespace, and may not escape from it.\n> (or should not, if Safe module works right). \n> \n> 2) there were attacks on Safe module that resulted in ability to set\n> variables outside of your namespace. None known now.\n> \n> 3) There's an existing problem with AUTOLOAD and Safe, which doesn't apply\n> to us, since you can't 'use' a module in a Safe compartment.\n> \n> To be truly paranoid, one must have separate interpreters, but that kills\n> the idea of sharing variables. (Actually, when PgSPI is done (see next\n> email), it would be possible to do so via SPI).\n> \n> I'm awaiting opinion of a real perl expert, tho ;)\n> \n> > One thought that comes to mind: seems like it'd be possible to\n> > communicate via Perl global variables, functions, etc between\n> > safe and unsafe functions. This might be good, or it might be\n> > a vehicle for bypassing the safety restrictions. We should\n> > think hard about that.\n> Yeah. I thought about that. Thing is, you have to predeclare all variables\n> you want to share with safe functions. I think it would make sense to have\n> a global hash, named $safe_info (well, $main::safe_info) which would be\n> shared. Unfortunately, there's no way to have 'readonly' share, so\n> safe functions should not rely on $safe_info, as it could be corrupted by\n> unsafe functions...\n> \n> -alex\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup.
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 18 Jun 2001 13:56:05 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] untrusted plperl" }, { "msg_contents": "\nPatch applied. Thanks. Waiting for doc updates.\n\n> Well, after persuading cvsup and cvs that it _is_ possible to have local\n> modifiable repositories, I have a clean untrusted plperl patch to offer\n> you :)\n> \n> Highlights:\n> * There's one perl interpreter used for both trusted and untrusted\n> procedures. I do think its unnecessary to keep two perl\n> interpreters around. If someone can break out from trusted \"Safe\" perl \n> mode, well, they can do what they want already. If someone disagrees, I\n> can change this.\n> \n> * Opcode is not statically loaded anymore. Instead, we load Dynaloader,\n> which then can grab Opcode (and anything else you can 'use') on its own.\n> \n> * Checked to work on FreeBSD 4.3 + perl 5.5.3 , OpenBSD 2.8 + perl5.6.1,\n> RedHat 6.2 + perl 5.5.3\n> \n> * Uses ExtUtils::Embed to find what options are necessary to link with\n> perl shared libraries\n> \n> * createlang is also updated, it can create untrusted perl using 'plperlu'\n> \n> * Example script (assuming you have Mail::Sendmail installed):\n> create function foo() returns text as '\n> use Mail::Sendmail;\n> \n> %mail = ( To => q(you@yourname.com),\n> From => q(me@here.com),\n> Message => \"This is a very short message\"\n> );\n> sendmail(%mail) or die $Mail::Sendmail::error;\n> return \"OK. Log says:\\n\", $Mail::Sendmail::log;\n> ' language 'plperlu';\n> \n> \n> (well, change the name in the To: line :)\n> \n> \n> Hope someone finds that useful and maybe even merged :)\n> \n> -alex\n\nContent-Description: plperlu.diff\n\n[ Attachment, skipping...
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 18 Jun 2001 17:40:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] untrusted plperl" } ]
[ { "msg_contents": "I think this addresses all Tom's concerns. Tom? :)\n\n--\nAlex Pilosov | http://www.acecape.com/dsl\nCTO - Acecape, Inc. | AceDSL:The best ADSL in Bell Atlantic area\n325 W 38 St. Suite 1005 | (Stealth Marketing Works! :)\nNew York, NY 10018 |", "msg_date": "Sat, 16 Jun 2001 20:42:09 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "[PATCH] inet << indexability (take 3)" }, { "msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> I think this addresses all Tom's concerns. Tom? :)\n\nChecked and applied ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 16 Jun 2001 22:06:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] inet << indexability (take 3) " } ]
[ { "msg_contents": "Sorry, due to a misconfiguration I lost the mail I send, so I cannot reply\nto my last message.\n\nAnyway, I checked again and found out the guy is still using 6.5.3 so this\nis probably an old problem.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Sun, 17 Jun 2001 14:40:14 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "The NOTICE messages" } ]
[ { "msg_contents": "Hello, All!\n\n\nIs there in pgsql something like\nselect @@rowcount, i. e.\nis it possible to get in sql number of rows affected by the sql last insert,\nupdate or delete statement??\n\nThanks\n\nRegards,\nSergiy.\n\n\n", "msg_date": "Sun, 17 Jun 2001 16:42:45 +0300", "msg_from": "\"s\" <s_e_r_g_o@hotmail.com>", "msg_from_op": true, "msg_subject": "how to get number of rows affected by the sql last statement?" } ]
[ { "msg_contents": "Just wanted to share with y'all what I wanna do with plperl:\n\n1) I want to implement database access from plperl script by providing a\nperl module DBD::PgSPI which instead of using libpq interface to talk to\ndatabase would use SPI. Thus, certain client-side scripts could become\nstored procedures with no change of code.\n\n2) When that's done, it'll be possible to have an 'application server'\nrunning as an frontend to postgresql. An external program would take SOAP\n(or CORBA or something) calls and translate them to calls of plperl stored\nprocedures. Since perl procedures are able to return complex data\nstructures, and methods to marshal them are easily available, such\nprocedure can encapsulate business logic in one place. Also, an neat\npossibility is client providing perl code to process things to the server\n(where SQL just won't do).\n\n3) Possibly, later, the 'external program' described above could be merged\ninto postgresql proper, to achieve additional speedup.\n\nLet me know if this all makes sense.\n\n-alex\n\n\n\n", "msg_date": "Sun, 17 Jun 2001 10:02:49 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "plperl direction" }, { "msg_contents": "> Just wanted to share with y'all what I wanna do with plperl:\n> \n> 1) I want to implement database access from plperl script by providing a\n> perl module DBD::PgSPI which instead of using libpq interface to talk to\n> database would use SPI. Thus, certain client-side scripts could become\n> stored procedures with no change of code.\n> \n> 2) When that's done, it'll be possible to have an 'application server'\n> running as an frontend to postgresql. An external program would take SOAP\n> (or CORBA or something) calls and translate them to calls of plperl stored\n> procedures.
Since perl procedures are able to return complex data\n> structures, and methods to marshal them are easily available, such\n> procedure can encapsulate business logic in one place. Also, an neat\n> possibility is client providing perl code to process things to the server\n> (where SQL just won't do).\n> \n> 3) Possibly, later, the 'external program' described above could be merged\n> into postgresql proper, to achieve additional speedup.\n\nThis all sounds good to me.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 18 Jun 2001 13:59:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: plperl direction" } ]
[ { "msg_contents": "I have finished a first pass at the planner statistics and cost\nestimation changes that I want to do for 7.2. It's now time to see\nhow well the new code does in the real world. Accordingly, if you've\nhad problems in the past with silly choices of plans, I'd like to ask\nyou to load your data into a test installation of current sources and\nsee if the planner is any brighter than before.\n\nI realize that loading a bunch of data into a temporary installation is\na pain in the neck, but it'd be really great to get some feedback about\nperformance of the new code now, while we're still early enough in the\n7.2 development cycle to do something about any problems that turn up.\n\nIf you're willing to help out, you can get current sources from the\nCVS server, or from the nightly snapshot tarball (see the dev/\ndirectory on your favorite Postgres FTP mirror).\n\nSome highlights of the new code include:\n\n* ANALYZE is now available as a separate command; you can run it without\nalso doing a VACUUM. (Of course, VACUUM ANALYZE still works.)\n\n* On large tables, ANALYZE uses a random sample of rows rather than\nexamining every row, so that it should take a reasonably short time\neven on very large tables. Possible downside: inaccurate stats.\nWe need to find out if the sample size is large enough.\n\n* Statistics now include the \"top ten\" most common values, not just\nthe single most common value, plus an estimate of the total number of\ndistinct values in a column. This should mean that selectivity\nestimates for \"column = something\" estimates are a lot better than\nbefore, especially for highly skewed data distributions.\n\n* Statistics also include (for scalar datatypes) a histogram that\ngives the boundary values dividing the data into ten\nroughly-equal-population bins. This should allow much better estimation\nfor inequality and range queries, again especially for skewed data\ndistributions.
(Note that \"range queries\" include such things as\nanchored LIKE and regexp searches, plus now inet subnet searches thanks\nto Alex Pilosov.)\n\n* The magic number \"ten\" mentioned above is controllable via\nALTER TABLE tab ALTER COLUMN col SET STATISTICS statstarget.\nAdjusting it gives a tradeoff between estimation accuracy and\ntime/space taken by ANALYZE. We need to find out if ten is a good\ndefault or not ... it might be too high or too low.\n\n* There's also a physical-order-correlation statistic that should help\nthe planner deal with clustered indices better. Whether it's good\nenough, and whether the costs are correctly interpolated using it,\nremain to be seen.\n\nFor more info see my original proposal at\nhttp://fts.postgresql.org/db/mw/msg.html?mid=112714\nand followup discussion.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 17 Jun 2001 16:47:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Call for alpha testing: planner statistics revisions" }, { "msg_contents": "On Sun, 17 Jun 2001, Tom Lane wrote:\n> * On large tables, ANALYZE uses a random sample of rows rather than\n> examining every row, so that it should take a reasonably short time\n> even on very large tables. Possible downside: inaccurate stats.\n> We need to find out if the sample size is large enough.\nHow about letting the user specify how much of the table should be\nexamined? So if you know the data is homogenous you can just specify a\nsmaller sample than normal and if you have time you can have the whole\ntable scanned.\n\n- Einar Karttunen\n\n", "msg_date": "Thu, 21 Jun 2001 14:38:56 +0300 (EEST)", "msg_from": "Einar Karttunen <ekarttun@cs.Helsinki.FI>", "msg_from_op": false, "msg_subject": "Re: Call for alpha testing: planner statistics revisions" }, { "msg_contents": "Tom Lane wrote:\n> \n> I have finished a first pass at the planner statistics and cost\n> estimation changes that I want to do for 7.2.
It's now time to see\n> how well the new code does in the real world...\n> \n> Some highlights of the new code include:\n> \n> * ANALYZE is now available as a separate command; you can run it without\n> also doing a VACUUM. (Of course, VACUUM ANALYZE still works.)\n\nWhat is the impact of this newly isolated ANALYZE command on the need\nand/or frequency for VACUUMs? \n\nI'd like to reduce the frequency of VACUUMs given my understanding has\nbeen that updates/inserts/deletes are a no-no during VACUUMs (that was\n6.5.x era hearsay), and we lock people out during VACUUMs (VACUUM\nANALYZE, that is). \n\nRegards,\nEd Loehr\n", "msg_date": "Thu, 21 Jun 2001 14:05:12 -0500", "msg_from": "Ed Loehr <efl@pobox.com>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Call for alpha testing: planner statistics revisions" }, { "msg_contents": "Ed Loehr <efl@pobox.com> writes:\n> Tom Lane wrote:\n>> * ANALYZE is now available as a separate command; you can run it without\n>> also doing a VACUUM. (Of course, VACUUM ANALYZE still works.)\n\n> What is the impact of this newly isolated ANALYZE command on the need\n> and/or frequency for VACUUMs? \n\nNone really. By the time 7.2 is out, I expect we will also have a\nmore lightweight form of VACUUM, and so running VACUUM ANALYZE as a\nreasonably frequent background operation will still be the norm.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Jun 2001 16:02:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Call for alpha testing: planner statistics revisions " } ]
[ { "msg_contents": "Let's switch 'timestamp with time zone' back to 'timestamp'. This just\nmakes no sense.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 18 Jun 2001 00:58:34 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "timestamp with/without time zone" }, { "msg_contents": "> Let's switch 'timestamp with time zone' back to 'timestamp'. This just\n> makes no sense.\n\nI wasn't following that discussion. Why would we have a timestamp with\nno timezone anyway?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 18 Jun 2001 14:04:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: timestamp with/without time zone" }, { "msg_contents": "> We don't have it. Its just that it is misleading to have 'timestamp with\n> time zone' as description of this type, when 'timestamp without time zone' \n> does not exist. (Actually, if you try to create a field as 'timestamp\n> without time zone', you get timestamp anyway). Since you can't have\n> 'without', why mention that the type has a time zone?\n> \n\nTotally agree. I see in psql's \\dT:\n\n time with time zone | hh:mm:ss, ANSI SQL time\n timestamp with time zone | date and time\n\nLooks like 'time' has a similar problem. Let me know if I can help.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 18 Jun 2001 15:00:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: timestamp with/without time zone" }, { "msg_contents": "We don't have it.
Its just that it is misleading to have 'timestamp with\ntime zone' as description of this type, when 'timestamp without time zone' \ndoes not exist. (Actually, if you try to create a field as 'timestamp\nwithout time zone', you get timestamp anyway). Since you can't have\n'without', why mention that the type has a time zone?\n\n-alex\n \nOn Mon, 18 Jun 2001, Bruce Momjian wrote:\n\n> > Let's switch 'timestamp with time zone' back to 'timestamp'. This just\n> > makes no sense.\n> \n> I wasn't following that discussion. Why would we have a timestamp with\n> no timezone anyway?\n> \n> \n\n\n", "msg_date": "Mon, 18 Jun 2001 15:03:29 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: timestamp with/without time zone" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Let's switch 'timestamp with time zone' back to 'timestamp'. This just\n>> makes no sense.\n\n> I wasn't following that discussion. Why would we have a timestamp with\n> no timezone anyway?\n\nThe discussion isn't about what the datatype *does*, but only about what\nit's *called*.\n\nWe currently transform requests for \"timestamp\", \"timestamp with\ntimezone\", and \"timestamp without timezone\" into the same \"timestamp\"\ndatatype. This is fine by me. However, I think that that datatype\nshould display as just \"timestamp\" in psql displays and pg_dump output.\nThe datatype does not act exactly the same as SQL92's \"timestamp with\ntimezone\", so it seems to me that displaying it that way just confuses\npeople.\n\nHowever, Thomas disagreed when last heard from...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Jun 2001 15:49:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: timestamp with/without time zone " }, { "msg_contents": "> >> Let's switch 'timestamp with time zone' back to 'timestamp'. This just\n> >> makes no sense.\n\n\n> > I wasn't following that discussion. 
Why would we have a timestamp with\n> > no timezone anyway?\n\nTo be \"compatible\" with the SQL9x brain damage. Support of standards in\nthis area is a big step backwards in functionality.\n\n> The discussion isn't about what the datatype *does*, but only about what\n> it's *called*.\n\nI'd be supportive (and willing to consider doing the work ;) for one of\nseveral options:\n\n1) implement \"timestamp without timezone\", moving the current\nimplementation to be \"timestamp with time zone\".\n\n2) implement true SQL9x \"timestamp with time zone\", and move the current\nimplementation back to \"datetime\". I'd consider this a real pita since\nthe only reason I moved it from that originally is that *no one*, over a\nperiod of years, was willing to take responsibility for a compliant\n\"timestamp\" implementation. We all agreed to the original change, and I\nraised this as an issue back then. Oh, and SQL9x \"timestamp with\ntimezone\" is truly brain damaged, and you should be sure that this is\nwhat you would really want. Really really want.\n\n3) continue the status quo, with modest relabeling of the current\ncapabilities.\n\n> We currently transform requests for \"timestamp\", \"timestamp with\n> timezone\", and \"timestamp without timezone\" into the same \"timestamp\"\n> datatype. This is fine by me. However, I think that that datatype\n> should display as just \"timestamp\" in psql displays and pg_dump output.\n> The datatype does not act exactly the same as SQL92's \"timestamp with\n> timezone\", so it seems to me that displaying it that way just confuses\n> people. However, Thomas disagreed when last heard from...\n\nHmm. Any solution will have some confusion, so it really is not clear\nwhich labeling path is preferable. Maybe even to those who think it is\nclear ;)\n\nSQL9x \"timestamp\" has no notion of time zones. PostgreSQL \"timestamp\"\ndoes. This is likely the reason for the current labeling scheme (at\nleast in pgdump). 
This also lays the groundwork for more seamless\nupgrade paths later when a \"time zone free\" timestamp type might be\navailable.\n\n - Thomas\n", "msg_date": "Tue, 19 Jun 2001 00:53:29 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: timestamp with/without time zone" }, { "msg_contents": "> 3) continue the status quo, with modest relabeling of the current\n> capabilities.\n> \n> Hmm. Any solution will have some confusion, so it really is not clear\n> which labeling path is preferable. Maybe even to those who think it is\n> clear ;)\n> \n> SQL9x \"timestamp\" has no notion of time zones. PostgreSQL \"timestamp\"\n> does. This is likely the reason for the current labeling scheme (at\n> least in pgdump). This also lays the groundwork for more seamless\n> upgrade paths later when a \"time zone free\" timestamp type might be\n> available.\n\nVery few people know the standards stuff so it seems we should just call\nit timestamp and do the best we can. Basically by mentioning \"with\ntimezone\" we are making the standards people happy but confusing our\nusers.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 18 Jun 2001 21:10:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: timestamp with/without time zone" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Very few people know the standards stuff so it seems we should just call\n> it timestamp and do the best we can. 
Basically by mentioning \"with\n> timezone\" we are making the standards people happy but confusing our\n> users.\n\nI don't believe we're making any standards-lovers happy either, because\nthe datatype in question *is* *not* SQL9x's TIMESTAMP WITH TIME ZONE.\nGiven that no one actually wants to change its behavior to conform to\neither of the standard's datatypes, ISTM that calling it something\ndifferent from either of those two is the appropriate path.\n\nAt some point (if someone is foolish enough to want to implement the\nspec's semantics) we might have three distinct datatypes called\ntimestamp, timestamp with time zone, and timestamp without time zone,\nwith the first of these (the existing type) being the recommended\nchoice. What we have at the moment is that lacking implementations\nfor the last two, we map them into the first one. That doesn't seem\nunreasonable to me. But to have a clean upgrade path from one to three\ntypes, we need to be sure we call the existing type what it is, and not\nmislabel it as one of the spec-compliant types.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Jun 2001 21:25:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: timestamp with/without time zone " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Very few people know the standards stuff so it seems we should just call\n> > it timestamp and do the best we can. 
Basically by mentioning \"with\n> > timezone\" we are making the standards people happy but confusing our\n> > users.\n> \n> I don't believe we're making any standards-lovers happy either, because\n> the datatype in question *is* *not* SQL9x's TIMESTAMP WITH TIME ZONE.\n> Given that no one actually wants to change its behavior to conform to\n> either of the standard's datatypes, ISTM that calling it something\n> different from either of those two is the appropriate path.\n> \n> At some point (if someone is foolish enough to want to implement the\n> spec's semantics) we might have three distinct datatypes called\n> timestamp, timestamp with time zone, and timestamp without time zone,\n> with the first of these (the existing type) being the recommended\n> choice. What we have at the moment is that lacking implementations\n> for the last two, we map them into the first one. That doesn't seem\n> unreasonable to me. But to have a clean upgrade path from one to three\n> types, we need to be sure we call the existing type what it is, and not\n> mislabel it as one of the spec-compliant types.\n\nI am confused what you are suggesting here.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 18 Jun 2001 21:28:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: timestamp with/without time zone" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I am confused what you are suggesting here.\n\n*** src/backend/utils/adt/format_type.c.orig\tWed May 23 18:10:19 2001\n--- src/backend/utils/adt/format_type.c\tMon Jun 18 21:41:53 2001\n***************\n*** 178,184 ****\n \t\t\tbreak;\n \n \t\tcase TIMESTAMPOID:\n! 
\t\t\tbuf = pstrdup(\"timestamp with time zone\");\n \t\t\tbreak;\n \n \t\tcase VARBITOID:\n--- 178,184 ----\n \t\t\tbreak;\n \n \t\tcase TIMESTAMPOID:\n! \t\t\tbuf = pstrdup(\"timestamp\");\n \t\t\tbreak;\n \n \t\tcase VARBITOID:\n\n\nClear enough?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Jun 2001 21:44:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: timestamp with/without time zone " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I am confused what you are suggesting here.\n> \n> *** src/backend/utils/adt/format_type.c.orig\tWed May 23 18:10:19 2001\n> --- src/backend/utils/adt/format_type.c\tMon Jun 18 21:41:53 2001\n> ***************\n> *** 178,184 ****\n> \t\t\tbreak;\n> \n> \t\tcase TIMESTAMPOID:\n> ! \t\t\tbuf = pstrdup(\"timestamp with time zone\");\n> \t\t\tbreak;\n> \n> \t\tcase VARBITOID:\n> --- 178,184 ----\n> \t\t\tbreak;\n> \n> \t\tcase TIMESTAMPOID:\n> ! \t\t\tbuf = pstrdup(\"timestamp\");\n> \t\t\tbreak;\n\nYes, this is exactly what I would suggest. In fact, \\dT shows this long\ntext and it is making some of the lines too long.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 18 Jun 2001 22:59:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: timestamp with/without time zone" }, { "msg_contents": "Thomas Lockhart writes:\n\n> SQL9x \"timestamp\" has no notion of time zones. PostgreSQL \"timestamp\"\n> does.\n\nAFAICT, it does not. The value is stored in UTC (more or less) and is\nconverted to the local time zone for display. But a data type is defined\nin terms of storage, not display. 
In fact, if you use a language binding\nthat converts PostgreSQL values directly to native data types, then the\ntime zone never appears.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 20 Jun 2001 23:42:49 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: timestamp with/without time zone" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Thomas Lockhart writes:\n>> SQL9x \"timestamp\" has no notion of time zones. PostgreSQL \"timestamp\"\n>> does.\n\n> AFAICT, it does not. The value is stored in UTC (more or less) and is\n> converted to the local time zone for display. But a data type is defined\n> in terms of storage, not display.\n\nI think Thomas' point is mainly a syntactic one, that our timestamp type\nwill accept and display timezones --- which makes it compatible at the\nI/O level with SQL-style TIMESTAMP WITH TIME ZONE. But I don't find\nthat argument very persuasive. An app that is expecting SQL-compliant\nhandling of the zone info will still be broken, only in subtle\nhard-to-find ways instead of nice simple obvious ways. IMHO we don't\nsupport TIMESTAMP WITH TIME ZONE, and we really oughtn't give people the\nimpression that we do. Whether what we have is better than the spec's\ndefinition is irrelevant here; the point is that it's different.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Jun 2001 01:41:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: timestamp with/without time zone " }, { "msg_contents": "\nThomas, can we change the description to just 'timestamp'?\n\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Thomas Lockhart writes:\n> >> SQL9x \"timestamp\" has no notion of time zones. PostgreSQL \"timestamp\"\n> >> does.\n> \n> > AFAICT, it does not. The value is stored in UTC (more or less) and is\n> > converted to the local time zone for display. 
But a data type is defined\n> > in terms of storage, not display.\n> \n> I think Thomas' point is mainly a syntactic one, that our timestamp type\n> will accept and display timezones --- which makes it compatible at the\n> I/O level with SQL-style TIMESTAMP WITH TIME ZONE. But I don't find\n> that argument very persuasive. An app that is expecting SQL-compliant\n> handling of the zone info will still be broken, only in subtle\n> hard-to-find ways instead of nice simple obvious ways. IMHO we don't\n> support TIMESTAMP WITH TIME ZONE, and we really oughtn't give people the\n> impression that we do. Whether what we have is better than the spec's\n> definition is irrelevant here; the point is that it's different.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 21 Jun 2001 18:20:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: timestamp with/without time zone" }, { "msg_contents": "> Thomas, can we change the description to just 'timestamp'?\n\nSure, we can do anything we want. I don't agree with all of the points\nraised, and in particular disagree with the characterization of our\ncurrent \"timestamp\" type as having no concept of time zones, although it\nis true that it has no concept of \"sticky time zones\" which travel with\nthe data value. \n\nThe description in pg_dump was chosen to assist with a transition in the\nnext version of PostgreSQL to having available a true \"no time zone\"\ntimestamp, leaving the current implementation as the \"time zone aware\"\ntype. 
I'm concerned about changing the current choice in the absence of\nthought about this issue.\n\nistm that if we dump timestamps which have time zone fields (which at\nthe moment all do), trying to read them back in as plain-vanilla SQL92\n\"timestamp\" might result in an error condition, leading to dump/restore\nproblems. Or maybe it would be appropriate for a \"time zone free\" type\nto just ignore time zone info in input? If so, upgrading wouldn't be as\nbig an issue, and just the upgraded schema would need to be\nconsidered...\n\nOn a related note, I've been thinking about removing the following\nfeatures from our current \"timestamp\":\n\no timestamp 'invalid' - an interesting concept which might actually be\nuseful for the original abstime type since it has such a limited range,\nbut not generally useful for timestamp. I'd suggesting leaving it in for\nabstime, at least for now.\n\no timestamp 'current' - another interesting concept not likely used by\nanyone, and causing conniptions for our optimizer (one cannot cache\nresults for datasets containing this value).\n\nComments?\n\n - Thomas\n", "msg_date": "Fri, 22 Jun 2001 15:20:51 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: timestamp with/without time zone" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> The description in pg_dump was chosen to assist with a transition in the\n> next version of PostgreSQL to having available a true \"no time zone\"\n> timestamp, leaving the current implementation as the \"time zone aware\"\n> type. 
I'm concerned about changing the current choice in the absence of\n> thought about this issue.\n\nI already commented what I thought about this: the current type is not\neither of the SQL-compatible timestamp types, and if we want to support\nthe SQL-compatible semantics then we need three types, not two.\n\n> On a related note, I've been thinking about removing the following\n> features from our current \"timestamp\":\n\n> o timestamp 'invalid' - an interesting concept which might actually be\n> useful for the original abstime type since it has such a limited range,\n> but not generally useful for timestamp. I'd suggesting leaving it in for\n> abstime, at least for now.\n\n> o timestamp 'current' - another interesting concept not likely used by\n> anyone, and causing conniptions for our optimizer (one cannot cache\n> results for datasets containing this value).\n\nI believe everyone already agreed that 'current' should be removed.\n'invalid' seems somewhat redundant with NULL, so I wouldn't object to\ntaking it out; on the other hand, is it hurting anything? Also, it\nseems a bad idea to remove it from timestamp if we leave it in abstime;\nyou shouldn't have to worry that casting abstime up to timestamp might\nfail.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Jun 2001 11:29:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: timestamp with/without time zone " }, { "msg_contents": "> I already commented what I thought about this: the current type is not\n> either of the SQL-compatible timestamp types, and if we want to support\n> the SQL-compatible semantics then we need three types, not two.\n\nRight, that was clear even to me ;)\n\nWe were on that path for quite some time. I volunteered to move the\ndatetime type to become timestamp since *no one* was interested in\nimplementing timestamp properly. 
There was extensive (or at least\ncomplete) discussion at the time.\n\nPer Date and Darwen (and common sense) the SQL9x date/time time zone\nsupport is fundamentally flawed, and clearly leads to deep trouble in\ntrying to operate a database across time zones or national boundaries.\nPostgreSQL had strongly influenced SQL standards in the past (e.g. data\ntype extensibility) and imho our current implementation is the way the\nstandard should have read.\n\n> I believe everyone already agreed that 'current' should be removed.\n> 'invalid' seems somewhat redundant with NULL, so I wouldn't object to\n> taking it out; on the other hand, is it hurting anything? Also, it\n> seems a bad idea to remove it from timestamp if we leave it in abstime;\n> you shouldn't have to worry that casting abstime up to timestamp might\n> fail.\n\nI wouldn't worry about that, since we can now return NULL in the\ntranslation of abstime to timestamp. otoh we could choose to do the same\nfor abstime itself, so 'invalid' is not fundamentally necessary for that\ntype anymore either.\n\n - Thomas\n", "msg_date": "Fri, 22 Jun 2001 15:42:44 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: timestamp with/without time zone" }, { "msg_contents": "> > I already commented what I thought about this: the current type is not\n> > either of the SQL-compatible timestamp types, and if we want to support\n> > the SQL-compatible semantics then we need three types, not two.\n> \n> Right, that was clear even to me ;)\n> \n> We were on that path for quite some time. I volunteered to move the\n> datetime type to become timestamp since *no one* was interested in\n> implementing timestamp properly. 
There was extensive (or at least\n> complete) discussion at the time.\n> \n> Per Date and Darwen (and common sense) the SQL9x date/time time zone\n> support is fundamentally flawed, and clearly leads to deep trouble in\n> trying to operate a database across time zones or national boundaries.\n> PostgreSQL had strongly influenced SQL standards in the past (e.g. data\n> type extensibility) and imho our current implementation is the way the\n> standard should have read.\n\nI believe the issue here was how do we describe the TIMESTAMP data type,\nas TIMESTAMP or TIMESTAMP WITH TIMEZONE. I thought people were\nproposing the former. Thomas, did you express a preference?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 10 Jul 2001 17:34:51 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: timestamp with/without time zone" }, { "msg_contents": "> > I believe everyone already agreed that 'current' should be removed.\n> > 'invalid' seems somewhat redundant with NULL, so I wouldn't object to\n> > taking it out; on the other hand, is it hurting anything? Also, it\n> > seems a bad idea to remove it from timestamp if we leave it in abstime;\n> > you shouldn't have to worry that casting abstime up to timestamp might\n> > fail.\n> \n> I wouldn't worry about that, since we can now return NULL in the\n> translation of abstime to timestamp. otoh we could choose to do the same\n> for abstime itself, so 'invalid' is not fundamentally necessary for that\n> type anymore either.\n\nIs this a TODO item?\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup.
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 10 Jul 2001 17:38:51 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: timestamp with/without time zone" }, { "msg_contents": "> Is this a TODO item?\n\nSure, but I'd hate to have all of these individual items showing up as\nseparate things in some ToDo list, since it won't paint a coherent\npicture of where things are headed.\n\nI'm planning on doing some work on timestamp, which will include:\n\no support for \"ISO variants\" on input, including embedded \"T\" preceeding\nthe time fields\n\no deprecation of 'current' (holdover from Original Postgres)\n\no deprecation of 'invalid' for timestamp at least (holdover from\nOriginal Postgres)\n\no (probably) deprecation of \"ignored fields\" if the value not explicitly\nrecognized (holdover from Original Postgres)\n\no resolution of the SQL99 vs SQL-useful timestamp and timestamp with\ntime zone issues\n\nThe latter has two possible outcomes (maybe more):\n\na) we keep the current timestamp implementation as either timestamp or\ntimestamp with time zone, and implement the other as a new type with\nmuch common underlying code\n\nb) we roll back decisions made a few years ago, and have \"SQL-useful\ntimestamp\" become datetime, leaving timestamp with time zone and\ntimestamp with slavish SQL99 compliance as undersupported, ineffective\nand near-useless data types (an overstatement for simple timestamp, but\nnot for timestamp with time zone).\n\nFor those who haven't used a fully compliant timestamp with time zone\n(and most haven't, since it is brain damaged) the time zone is specified\nas a single offset from GMT. No provisions for DST, etc etc.\n\nThe current identification of timestamp as \"timestamp with time zone\"\nwas to prepare for implementation of a \"no time zone anywhere\" timestamp\nin 7.2. 
The current timestamp would become \"timestamp with time zone\",\nwith time zone support substantially enhanced from SQL99 specs. I'll\nspeak for the silent majority to claim that these enhancements are\nuseful. They are likely compatible enough (or could be) to pass SQL9x\ncompliance testing, unless that testing includes cases which try to\nenforce the worst aspects of the standard.\n\nHmm, now that I look at it again, SQL99 timestamp with time zone may not\nbe too far away from our current timestamp, except for issues concerning\nUTC vs local time and probably some other details of formatting and\nbehavior (e.g. allowed date ranges; we allow more).\n\nIt appears that SQL99 timestamp with time zone outputs as UTC (which is\nhow it is stored internally already) so the standard is missing the\nnotion of representing time zones in the output of a timestamp or\ntimestamp with time zone type. This is not as horrendous as SQL92 or as\ndescribed in some draft standard docs, but... Comments?\n\n - Thomas\n", "msg_date": "Wed, 11 Jul 2001 01:35:34 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: timestamp with/without time zone" }, { "msg_contents": "\nThomas, any status on this? 
If not, I should add it to the TODO list.\n\n\n> > Is this a TODO item?\n> \n> Sure, but I'd hate to have all of these individual items showing up as\n> separate things in some ToDo list, since it won't paint a coherent\n> picture of where things are headed.\n> \n> I'm planning on doing some work on timestamp, which will include:\n> \n> o support for \"ISO variants\" on input, including embedded \"T\" preceeding\n> the time fields\n> \n> o deprecation of 'current' (holdover from Original Postgres)\n> \n> o deprecation of 'invalid' for timestamp at least (holdover from\n> Original Postgres)\n> \n> o (probably) deprecation of \"ignored fields\" if the value not explicitly\n> recognized (holdover from Original Postgres)\n> \n> o resolution of the SQL99 vs SQL-useful timestamp and timestamp with\n> time zone issues\n> \n> The latter has two possible outcomes (maybe more):\n> \n> a) we keep the current timestamp implementation as either timestamp or\n> timestamp with time zone, and implement the other as a new type with\n> much common underlying code\n> \n> b) we roll back decisions made a few years ago, and have \"SQL-useful\n> timestamp\" become datetime, leaving timestamp with time zone and\n> timestamp with slavish SQL99 compliance as undersupported, ineffective\n> and near-useless data types (an overstatement for simple timestamp, but\n> not for timestamp with time zone).\n> \n> For those who haven't used a fully compliant timestamp with time zone\n> (and most haven't, since it is brain damaged) the time zone is specified\n> as a single offset from GMT. No provisions for DST, etc etc.\n> \n> The current identification of timestamp as \"timestamp with time zone\"\n> was to prepare for implementation of a \"no time zone anywhere\" timestamp\n> in 7.2. The current timestamp would become \"timestamp with time zone\",\n> with time zone support substantially enhanced from SQL99 specs. I'll\n> speak for the silent majority to claim that these enhancements are\n> useful. 
They are likely compatible enough (or could be) to pass SQL9x\n> compliance testing, unless that testing includes cases which try to\n> enforce the worst aspects of the standard.\n> \n> Hmm, now that I look at it again, SQL99 timestamp with time zone may not\n> be too far away from our current timestamp, except for issues concerning\n> UTC vs local time and probably some other details of formatting and\n> behavior (e.g. allowed date ranges; we allow more).\n> \n> It appears that SQL99 timestamp with time zone outputs as UTC (which is\n> how it is stored internally already) so the standard is missing the\n> notion of representing time zones in the output of a timestamp or\n> timestamp with time zone type. This is not as horrendous as SQL92 or as\n> described in some draft standard docs, but... Comments?\n> \n> - Thomas\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 5 Sep 2001 00:28:00 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: timestamp with/without time zone" }, { "msg_contents": "> Thomas, any status on this? If not, I should add it to the TODO list.\n\nWell, sure, there is *always* status ;)\n\nI started coding a couple of days ago. So far, no showstoppers.\n\nThere are two related issues:\n\n1) I should recode TIME WITH TIME ZONE to conform to SQL99. I had done\nit originally with a \"persistant time zone\" since the SQL9x definition\nis weaker than I imagined at the time. afaict it is still a minimally\nuseful data type.\n\n2) upgrading from 7.1.x to 7.2 will likely require a schema change,\nsince 7.1.x TIMESTAMP should become TIMESTAMP WITH TIME ZONE, but afaik\nsomeone took out that feature during the 7.1.x series of releases.
So a\n7.1.x dump will give columns labeled as TIMESTAMP but with values\ncontaining time zones. We *might* want to accept (and ignore?) time zone\nfields in TIMESTAMP values for input for 7.2, but that would still leave\nfolks expecting a data type which recognizes time zones needing to\nadjust their schemas during the upgrade.\n\n - Thomas\n", "msg_date": "Thu, 06 Sep 2001 05:56:08 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: timestamp with/without time zone" } ]
[ { "msg_contents": "Hi, some PostgreSQL users in Japan have been translating 7.1 docs into\nJapanese. I hope the work would finish within 1-2 months. My question\nis how the translated docs could be merged into the doc source tree\nonce it is done. Maybe doc/ja/src/sgml?\n--\nTatsuo Ishii\n", "msg_date": "Mon, 18 Jun 2001 10:02:53 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Doc translation" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Hi, some PostgreSQL users in Japan have been translating 7.1 docs into\n> Japanese. I hope the work would finish within 1-2 months. My question\n> is how the translated docs could be merged into the doc source tree\n> once it is done. Maybe doc/ja/src/sgml?\n\nHmm, *should* they be merged into the source tree, or distributed as\na separate tarball? I'm concerned that they'd always be out of sync\nwith the English docs :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 17 Jun 2001 22:11:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Doc translation " }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > Hi, some PostgreSQL users in Japan have been translating 7.1 docs into\n> > Japanese. I hope the work would finish within 1-2 months. My question\n> > is how the translated docs could be merged into the doc source tree\n> > once it is done. Maybe doc/ja/src/sgml?\n> \n> Hmm, *should* they be merged into the source tree, or distributed as\n> a separate tarball? I'm concerned that they'd always be out of sync\n> with the English docs :-(\n\nRight. However, it would be greater for Japanese users to have the\nJapanese docs in the *official* distribution of PostgreSQL, than\ngetting them from other places. 
What about setting up a new CVS module\nfor the Japanese docs, isolated from the source and English doc\nmodule(pgsql)?\n--\nTatsuo Ishii\n", "msg_date": "Mon, 18 Jun 2001 12:06:09 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Doc translation " }, { "msg_contents": "\n----- Original Message ----- \nFrom: Tatsuo Ishii <t-ishii@sra.co.jp>\nSent: Sunday, June 17, 2001 11:06 PM\n\n\n> > Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > > Hi, some PostgreSQL users in Japan have been translating 7.1 docs into\n> > > Japanese. I hope the work would finish within 1-2 months. My question\n> > > is how the translated docs could be merged into the doc source tree\n> > > once it is done. Maybe doc/ja/src/sgml?\n> > \n> > Hmm, *should* they be merged into the source tree, or distributed as\n> > a separate tarball? I'm concerned that they'd always be out of sync\n> > with the English docs :-(\n> \n> Right. However, it would be greater for Japanese users to have the\n> Japanese docs in the *official* distribution of PostgreSQL, than\n> getting them from other places. What about setting up a new CVS module\n> for the Japanese docs, isolated from the source and English doc\n> module(pgsql)?\n\nWhat about the possibility of having docs translated into another\nlanguages? Would you then include all those translations into the\n*official* PostgresSQL distribution? There're many languages out there,\nand maybe eventually the PG docs will get translated to these (some of these)\nlanguages. Wouldn't it be too much of the documentation per\ndistribution? I guess that the setting up a separate CVS module \nper language might be a good idea. People just grab the distribution and docs in\ndesired language, and plug the docs in, and here you go. 
And the docs are almost always\n\"out-of-sync\" with the distribution anyway :), even the English ones...\n\nSerguei\n\n\n\n", "msg_date": "Mon, 18 Jun 2001 01:08:13 -0400", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: Doc translation " }, { "msg_contents": "> distribution? I guess that the setting up a separate CVS module \n> per language might be a good idea. People just grab the distribution and docs in\n> desired language, and plug the docs in, and here you go. And the docs are almost always\n> \"out-of-sync\" with the distribution anyway :), even the English ones...\n\nSounds good to me too.\n\nFor example, For 7.1 Japanese docs, people could grab\n\"postgresql-7.1-doc-ja.tar.gz\" or whatever...\n--\nTatsuo Ishii\n", "msg_date": "Mon, 18 Jun 2001 14:26:00 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Doc translation " }, { "msg_contents": "Tatsuo Ishii wrote:\n\n> Hi, some PostgreSQL users in Japan have been translating 7.1 docs into\n> Japanese. I hope the work would finish within 1-2 months. My question\n> is how the translated docs could be merged into the doc source tree\n> once it is done. 
Maybe doc/ja/src/sgml?\n> --\n> Tatsuo Ishii\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\nI think it's better to setup another cvs module, thus maintain\nall other languages' docs except English( the English one act as\nstandard one), and may be we could also maintian a auto-generated\nhtml (or other format if there are no other problems) version on web site\nlike\ntechdoc.postgresql.org or some other places for users to browse.\n(I think most of people would first go to the main site to search methods\nto resolve their problem.)\n\n regards laser\n\n", "msg_date": "Mon, 18 Jun 2001 13:55:05 +0800", "msg_from": "He Weiping <laser@zhengmai.com.cn>", "msg_from_op": false, "msg_subject": "Re: Doc translation" }, { "msg_contents": "\n----- Original Message ----- \nFrom: Tatsuo Ishii <t-ishii@sra.co.jp>\nSent: Monday, June 18, 2001 1:26 AM\n\n\n> > distribution? I guess that the setting up a separate CVS module \n> > per language might be a good idea. People just grab the distribution and docs in\n> > desired language, and plug the docs in, and here you go. And the docs are almost always\n> > \"out-of-sync\" with the distribution anyway :), even the English ones...\n> \n> Sounds good to me too.\n> \n> For example, For 7.1 Japanese docs, people could grab\n> \"postgresql-7.1-doc-ja.tar.gz\" or whatever...\n\n... and also one can put translations not only for the docs in that language CVS module/tarball,\nbut also for all PG messaging, especially when now PG goes international (see Peter Eisentraut's\npost and the follow up thread of June 4 entitled \"FYI: status of native language support\").\n\nSerguei\n\n\n", "msg_date": "Mon, 18 Jun 2001 02:15:28 -0400", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: Doc translation " }, { "msg_contents": "\nTatsuo ...
setting up a seperate CVS module for this does sound like a\ngreat idea ... you already have access to the CVS repository, right? Can\nyou send me a tar file containing what you have so far, and I'll get it\ninto CVS and then you'll be able to update that at will?\n\nIf we set it up as:\n\npgsql-docs/ja\n\nwe could move the english docs out of pgsql itself and into this module\ntoo, as:\n\npgsql-docs/en\n\nand any other language too ...\n\nOn Mon, 18 Jun 2001, Tatsuo Ishii wrote:\n\n> > Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > > Hi, some PostgreSQL users in Japan have been translating 7.1 docs into\n> > > Japanese. I hope the work would finish within 1-2 months. My question\n> > > is how the translated docs could be merged into the doc source tree\n> > > once it is done. Maybe doc/ja/src/sgml?\n> >\n> > Hmm, *should* they be merged into the source tree, or distributed as\n> > a separate tarball? I'm concerned that they'd always be out of sync\n> > with the English docs :-(\n>\n> Right. However, it would be greater for Japanese users to have the\n> Japanese docs in the *official* distribution of PostgreSQL, than\n> getting them from other places. What about setting up a new CVS module\n> for the Japanese docs, isolated from the source and English doc\n> module(pgsql)?\n> --\n> Tatsuo Ishii\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n>\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Mon, 18 Jun 2001 09:51:32 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Doc translation " }, { "msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> we could move the english docs out of pgsql itself and into this module\n> too, as:\n> pgsql-docs/en\n\nHmm, I'm not sure that that's a good idea; seems it would lose the\ncoupling between versions of the source and versions of the\ndocumentation.\n\nI quite agree that we should have an official distribution of\nnon-English documentation if possible. I'm just wondering how best to\nkeep track of which set of docs goes with which Postgres release.\nSince the English docs are (we hope) kept up to date with the sources,\nit seems best to keep those as part of the master CVS tree.\n\nWe could imagine keeping non-English docs in the same tree, but that\nwould require lots of attention to branch management --- for example,\nwe'd have to be careful to commit these Japanese translations of 7.1\ndocs into the REL7_1_STABLE branch. OTOH maybe that's just as true\nif there's a separate CVS tree for docs; you'd still want to deal with\na new version per source release. So maybe a single tree is the right\nanswer after all.\n\nAnyone have experience with managing this sort of situation under CVS?\nIs separate tree or combined tree better?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Jun 2001 10:42:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Doc translation " }, { "msg_contents": "Tatsuo Ishii writes:\n\n> Hi, some PostgreSQL users in Japan have been translating 7.1 docs into\n> Japanese. I hope the work would finish within 1-2 months. My question\n> is how the translated docs could be merged into the doc source tree\n> once it is done. 
Maybe doc/ja/src/sgml?\n\nA while ago I sent a proposal to -docs about how to handle this (basically\ndoc/src/sgml-<lang>). No one protested so I was going to implement it; in\nfact, I already have in some private branch I have lying around here. It\neven includes some nice side-effects, such as fallback to English for\nincomplete translations (so you can look at the result while translation\nis still going on) and the integration of the translated SQL reference\npages with the internationalized psql that is currently taking shape.\n(Someone else is working on a French translation and has been very anxious\nfor this to happen, too.)\n\nI would not be in favour of a separate CVS module, for several reasons:\nFirst, it will marginalize the efforts. I bet there are a sufficient\nnumber of people how would be willing to track documentation upgrades and\nkeep their language up-to-date. Second, the build machinery would get\ngratuitously complicated and spread around (makefiles, stylesheets,\ngraphics, URL strings, etc.). Third, the (undoubtedly real) problem of\nkeeping these translations up to date would not be helped by this at all.\nThe maintainers of these translations will simply have to be honest to not\nlabel their documentation set as corresponding to version X.Y when the\ncontent is still based on the original documentation for X.(Y-2).\n\nAbout \"I don't want to download all this stuff I can't read\": We already\nhave chunked distribution tarballs. 
It would be possible to \"chunk out\"\nthe files pertaining to a particular language (which would include the\nmessage catalogs as well).\n\nWhile other open source projects sometimes keep their entire documentation\nin a separate cvs module, they generally keep all languages together for\nthe reasons given above.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 18 Jun 2001 17:47:34 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Doc translation" }, { "msg_contents": "On Mon, 18 Jun 2001, Peter Eisentraut wrote:\n\n> Tatsuo Ishii writes:\n>\n> > Hi, some PostgreSQL users in Japan have been translating 7.1 docs into\n> > Japanese. I hope the work would finish within 1-2 months. My question\n> > is how the translated docs could be merged into the doc source tree\n> > once it is done. Maybe doc/ja/src/sgml?\n>\n> A while ago I sent a proposal to -docs about how to handle this (basically\n> doc/src/sgml-<lang>). No one protested so I was going to implement it; in\n> fact, I already have in some private branch I have lying around here. It\n> even includes some nice side-effects, such as fallback to English for\n> incomplete translations (so you can look at the result while translation\n> is still going on) and the integration of the translated SQL reference\n> pages with the internationalized psql that is currently taking shape.\n> (Someone else is working on a French translation and has been very anxious\n> for this to happen, too.)\n>\n> I would not be in favour of a separate CVS module, for several reasons:\n> First, it will marginalize the efforts. I bet there are a sufficient\n> number of people how would be willing to track documentation upgrades and\n> keep their language up-to-date. Second, the build machinery would get\n> gratuitously complicated and spread around (makefiles, stylesheets,\n> graphics, URL strings, etc.). 
Third, the (undoubtedly real) problem of\n> keeping these translations up to date would not be helped by this at all.\n> The maintainers of these translations will simply have to be honest to not\n> label their documentation set as corresponding to version X.Y when the\n> content is still based on the original documentation for X.(Y-2).\n>\n> About \"I don't want to download all this stuff I can't read\": We already\n> have chunked distribution tarballs. It would be possible to \"chunk out\"\n> the files pertaining to a particular language (which would include the\n> message catalogs as well).\n>\n> While other open source projects sometimes keep their entire documentation\n> in a separate cvs module, they generally keep all languages together for\n> the reasons given above.\n\nI definitely have no problems with this ... one comment about Tom's \"how\nto keep releases together\" point though ... that is what Tags/Branches are\nfor ... as long as we tag all modules, things shouldn't \"fall out of sync\"\n...\n\nBut, if you already have a clean method of doign this, please, by all\nmeans, continue ...\n\n", "msg_date": "Mon, 18 Jun 2001 13:57:12 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Doc translation" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> About \"I don't want to download all this stuff I can't read\": We already\n> have chunked distribution tarballs. It would be possible to \"chunk out\"\n> the files pertaining to a particular language\n\nThat works for picking up tarballs, but (AFAIK) not for people who\nupdate from the CVS server. 
However, seeing that doc/src/sgml is\npresently only about a tenth of the total CVS tree bulk, it'll probably\nbe a long while before the docs form an overwhelming load on CVS users.\n\nYour other arguments seem good to me, so I agree with keeping the\ntranslated documents in the main tree, at least for now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Jun 2001 13:05:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Doc translation " }, { "msg_contents": "> > we could move the english docs out of pgsql itself and into this module\n> > too, as:\n> > pgsql-docs/en\n> Hmm, I'm not sure that that's a good idea; seems it would lose the\n> coupling between versions of the source and versions of the\n> documentation.\n\nWe could (and should, imho) leave the English docs where they are, but\nthey could be included as a module in the pgsql-docs repository.\n\n> Anyone have experience with managing this sort of situation under CVS?\n\nYes.\n\n> Is separate tree or combined tree better?\n\nI would suggest defining a \"pgsql-docs\" module, which might contain the\nactual code for non-English docs. We can define a logical module\n\"pgsql-en\" which is included in the *logical* pgsql-docs module (the\nlatter also contains the *physical* pgsql-docs module).\n\nWe've done this on other projects. I can help set up the module\ndefinitions (which are fairly simple when worked out, but perhaps not\ntrivial to derive from first principles. I'll steal the solution from\nwork I've done earlier ;)\n\n - Thomas\n", "msg_date": "Mon, 18 Jun 2001 22:15:32 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: Doc translation" } ]
[ { "msg_contents": "PHP HOWTO v24.0 released: ADODB connects oracle, pgsql, mysql, sybase..\n\nHello everyone:\n\nPHP Howto version 24.0 is released to public, which contains\ndetails about the ADODB to connect all types of SQL servers\nlike oracle, sybase, PostgreSQL, MySQL, MS SQLserver, MS Access.\nThere is ADODB driver for more than 30 types of SQL servers.\nYou name it, it is there, whether it is Interbase, Foxbase, DB2 etc..\n\nIt really does not matter what type of SQL server you use, as long as\nyou write your web applications in PHP. Changing the SQL from Oracle to\nPostgreSQL and vice-versa is a snap. Current trend is - you write all\nyour front-ends in Web pages using PHP...\n\nPHP is a simple object-oriented and most popular, powerful server-side\nweb\nscripting language in the world. PHP had an \"astronomical\" growth in\nuser-base and developer base in about a short span of time (less than 2\nyears).\n\nThe doc is at http://www.linuxdoc.org/HOWTO/PHP-HOWTO.html\n\nAlso be aware of the upcoming international conference on PHP\n\"PHP2001 conference\" in Europe, see details at http://php.net\n\n\n", "msg_date": "Mon, 18 Jun 2001 01:42:41 GMT", "msg_from": "alavoor <alavoor@yahoo.com>", "msg_from_op": true, "msg_subject": "PHP HOWTO v24.0 released: ADODB connects oracle, pgsql, mysql,\n\tsybase.." }, { "msg_contents": "On Mon, Jun 18, 2001 at 01:42:41AM +0000, alavoor wrote:\n> PHP HOWTO v24.0 released: ADODB connects oracle, pgsql, mysql, sybase..\n> \n> Hello everyone:\n> \n> PHP Howto version 24.0 is released to public, which contains\n> details about the ADODB to connect all types of SQL servers\n> like oracle, sybase, PostgreSQL, MySQL, MS SQLserver, MS Access.\n> There is ADODB driver for more than 30 types of SQL servers.\n> You name it, it is there, whether it is Interbase, Foxbase, DB2 etc..\n> \n> It really does not matter what type of SQL server you use, as long as\n> you write your web applications in PHP.
Changing the SQL from Oracle to\n> PostgreSQL and vice-versa is a snap. Current trend is - you write all\n> your front-ends in Web pages using PHP...\n> \n> (...)\n\nI'm using it (ADODB) with postgresql 7.1.2 / MySQL, works wonderfully.\n\nSergio Bruder\n-- \n ( \t\t\n )) (tm)\thttp://sergio.bruder.net\n|\"\"|-. \t\thttp://pontobr.org\n|__|-' \t\tbruder@conectiva.com.br, sergio@bruder.net\nwe are concerned about the GNU General Public License (GPL)\n-- Microsoft press release\n------------------------------------------------------------------------------\npub 1024D/0C7D9F49 2000-05-26 Sergio Devojno Bruder <bruder@conectiva.com.br>\n Key fingerprint = 983F DBDF FB53 FE55 87DF 71CA 6B01 5E44 0C7D 9F49\nsub 1024g/138DF93D 2000-05-26\n", "msg_date": "Wed, 20 Jun 2001 17:30:58 -0300", "msg_from": "Sergio Bruder <bruder@conectiva.com.br>", "msg_from_op": false, "msg_subject": "Re: PHP HOWTO v24.0 released: ADODB connects oracle, pgsql, mysql,\n\tsybase.." } ]
[ { "msg_contents": "\n> \"Dave Cramer\" <dave@fastcrypt.com> writes:\n> > Can the version # of\n> > the row be made available to the client?\n> \n> There is no \"version # of the row\" in postgres, unless you \n> set up such a\n> thing for yourself (which could certainly be done, using triggers).\n\nAnd in addition there is no row version in SQL in general.\nSo I have the question whether it is actually intended to solve\nupdateable result sets with proprietary row versions, or whether\nsomeone implemented it that way to optimize concurrent access for \nanother db system, that blocks readers ?\n\nAndreas \n", "msg_date": "Mon, 18 Jun 2001 10:35:33 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: RE: Row Versioning, for jdbc updateable result sets\t " } ]
[ { "msg_contents": "First of all thanks for the great effort, it will surely be appreciated :-)\n\n> * On large tables, ANALYZE uses a random sample of rows rather than\n> examining every row, so that it should take a reasonably short time\n> even on very large tables. Possible downside: inaccurate stats.\n> We need to find out if the sample size is large enough.\n\nImho that is not optimal :-) ** ducks head, to evade flying hammer **\n1. the random sample approach should be explicitly requested with some \nsyntax extension\n2. the sample size should also be tuneable with some analyze syntax \nextension (the dba chooses the tradeoff between accuracy and runtime)\n3. if at all, an automatic analyze should do the samples on small tables,\nand accurate stats on large tables\n\nThe reasoning behind this is, that when the optimizer does a \"mistake\"\non small tables the runtime penalty is small, and probably even beats\nthe cost of accurate statistics lookup. (3 page table --> no stats \nexcept table size needed)\n\nWhen on the other hand the optimizer does a \"mistake\" on a huge table\nthe difference is easily a matter of hours, thus you want accurate stats.\n\nBecause we do not want the dba to decide which statistics are optimal,\nthere should probably be an analyze helper application that is invoked\nwith \"vacuum analyze database optimal\" or some such, that also decides \nwhether a table was sufficiently altered to justify new stats gathering\nor vacuum. The decision, what to do may also be based on a runtime limit, \nthat the dba specifies (\"do the most important stats/vacuums you can do \nwithin ~3 hours\").
These points are also based on experience with huge SAP/R3 installations\nand the way statistics are gathered there.\n\nAndreas\n", "msg_date": "Mon, 18 Jun 2001 11:43:17 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Call for alpha testing: planner statistics revision\n\ts" }, { "msg_contents": "On Mon, 18 Jun 2001, Zeugswetter Andreas SB wrote:\n\n> First of all thanks for the great effort, it will surely be appreciated :-)\n> \n> > * On large tables, ANALYZE uses a random sample of rows rather than\n> > examining every row, so that it should take a reasonably short time\n> > even on very large tables. Possible downside: inaccurate stats.\n> > We need to find out if the sample size is large enough.\n> \n> Imho that is not optimal :-) ** ducks head, to evade flying hammer **\n> 1. the random sample approach should be explicitly requested with some \n> syntax extension\n> 2. the sample size should also be tuneable with some analyze syntax \n> extension (the dba chooses the tradeoff between accuracy and runtime)\n> 3. if at all, an automatic analyze should do the samples on small tables,\n> and accurate stats on large tables\n> \n> The reasoning behind this is, that when the optimizer does a \"mistake\"\n> on small tables the runtime penalty is small, and probably even beats\n> the cost of accurate statistics lookup. (3 page table --> no stats \n> except table size needed)\nI disagree.\n\nAs monte carlo method shows, _as long as you_ query random rows, your\nresult will be sufficiently close to the real statistics.
I'm not sure if\nI can find math behind this, though...\n\n-alex\n\n", "msg_date": "Mon, 18 Jun 2001 09:16:22 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: AW: Call for alpha testing: planner statistics revision\n s" }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> Imho that is not optimal :-) ** ducks head, to evade flying hammer **\n> 1. the random sample approach should be explicitly requested with some \n> syntax extension\n\nI don't think so ... with the current implementation you *must* do\napproximate ANALYZE for large tables, or face memory overflow.\nWe can debate where the threshold should be, but you can't get around\nthe fact that approximation is essential with large tables.\n\n> 2. the sample size should also be tuneable with some analyze syntax \n> extension (the dba chooses the tradeoff between accuracy and runtime)\n\nThe sample size is already driven by the largest SET STATISTICS value\nfor any of the columns of the table being analyzed. I'm not sure if we\nneed a user-tweakable multiplier or not. The current multiplier is 300\n(ie, 3000 sample rows with the default SET STATISTICS target of 10).\nThis is not a random choice; there is some theory behind it:\n\n * The following choice of minrows is based on the paper\n * \"Random sampling for histogram construction: how much is enough?\"\n * by Surajit Chaudhuri, Rajeev Motwani and Vivek Narasayya, in\n * Proceedings of ACM SIGMOD International Conference on Management\n * of Data, 1998, Pages 436-447.
Their Corollary 1 to Theorem 5\n * says that for table size n, histogram size k, maximum relative\n * error in bin size f, and error probability gamma, the minimum\n * random sample size is\n * r = 4 * k * ln(2*n/gamma) / f^2\n * Taking f = 0.5, gamma = 0.01, n = 1 million rows, we obtain\n * r = 305.82 * k\n * Note that because of the log function, the dependence on n is\n * quite weak; even at n = 1 billion, a 300*k sample gives <= 0.59\n * bin size error with probability 0.99. So there's no real need to\n * scale for n, which is a good thing because we don't necessarily\n * know it at this point.\n\n> 3. if at all, an automatic analyze should do the samples on small tables,\n> and accurate stats on large tables\n\nOther way 'round, surely? It already does that: if your table has fewer\nrows than the sampling target, they all get used.\n\n> When on the other hand the optimizer does a \"mistake\" on a huge table\n> the difference is easily a matter of hours, thus you want accurate stats.\n\nNot if it takes hours to get the stats. I'm more interested in keeping\nANALYZE cheap and encouraging DBAs to run it frequently, so that the\nstats stay up-to-date. It doesn't matter how perfect the stats were\nwhen they were made, if the table has changed since then.\n\n> Because we do not want the dba to decide which statistics are optimal,\n> there should probably be an analyze helper application that is invoked\n> with \"vacuum analyze database optimal\" or some such, that also decides \n> whether a table was sufficiently altered to justify new stats gathering\n> or vacuum.\n\nAnd on what are you going to base \"sufficiently altered\"?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Jun 2001 10:35:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Call for alpha testing: planner statistics revision s " } ]
[ { "msg_contents": "\n> Let's switch 'timestamp with time zone' back to 'timestamp'. This just\n> makes no sense.\n\nImho it only makes no sense, since the impl does not conform to standard :-(\nThe \"with time zone\" requests, that the client timezone be stored in the row.\nThe \"timestamp\" wants no timezone arithmetic/input or output at all. \n\nAndreas\n", "msg_date": "Mon, 18 Jun 2001 11:51:49 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: timestamp with/without time zone" } ]
[ { "msg_contents": "I tried to setup postgresql from current cvs and got\ninitdb failure:\n\nDEBUG: database system is ready\nERROR: TypeCreate: function 'int4in(opaque)' does not exist\n\ninitdb failed.\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 18 Jun 2001 12:52:40 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "initdb from current cvs failed" }, { "msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> I tried to setup postgresql from current cvs and got\n> initdb failure:\n\nYou may need to do a full recompile; I've been altering some in-memory\ndata structures recently. If you don't enable dependency tracking,\nyou definitely need \"make clean\" and rebuild after every cvs update.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Jun 2001 11:00:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: initdb from current cvs failed " }, { "msg_contents": "On Mon, 18 Jun 2001, Tom Lane wrote:\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > I tried to setup postgresql from current cvs and got\n> > initdb failure:\n>\n> You may need to do a full recompile; I've been altering some in-memory\n> data structures recently. If you don't enable dependency tracking,\n> you definitely need \"make clean\" and rebuild after every cvs update.\n\nThanks.
It works now\n\n>\n> \t\t\tregards, tom lane\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 18 Jun 2001 20:52:40 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "Re: initdb from current cvs failed " } ]
[ { "msg_contents": "Hi all!\n\nI'm thinking about starting a (serious) project to\nbring a good graphical interface to the administration\nand operation of postgresql, similar to what other\ndatabases have.\n\nThe goal is to be able to do all the administrative\nwork of postgresql in a single program. It would\ninclude manage databases, tables, users, privileges\n..., see/alter tables' contents, start/stop the\nserver(s), configure postgres, manage backups, have a\ngood SQL console, monitor the backends, ...\n\nAs you can see it would be a superset of the pgaccess\ncapabilities but backwards compatible with it.\nThe program would be done in Java/Swing, being\nautomatically portable. The license would be the\nlicense of PostgreSQL itself.\n\nThe question is: Is there any possibility of such a\nbeast being ever included in the standard distribution\nof PostgreSQL, if it proves useful, well designed and\nrock-solid, or it would not be accepted like a\nstandard add-on by the PostgreSQL hacker community?\n(perhaps Java/Swing not acceptable, ...)\n\nPlease CC to me, since I'm not subscribed to this\nlist.\n\nThank you all.\n\nPedro Abelleira Seco\n\n_______________________________________________________________\nDo You Yahoo!?\nYahoo! Messenger: Comunicación instantánea gratis con tu gente -\nhttp://messenger.yahoo.es\n", "msg_date": "Mon, 18 Jun 2001 13:17:19 +0200 (CEST)", "msg_from": "=?iso-8859-1?q?Pedro=20Abelleira=20Seco?= <pedroabelleira@yahoo.es>", "msg_from_op": true, "msg_subject": "Project: Java administration center" } ]
[ { "msg_contents": "Write very optimized statements and run them infrequently ;)\n\nI don't really think it's possible. You need to understand how your \napplication will be used, what the resource costs are, and plan \naccordingly, (load balance, etc.)\n\n-r\n\nAt 05:00 PM 6/18/01 +0000, gabriel wrote:\n\n\n>Hello All.\n>\n>How can i limit how much of cpu the postmaster can use?\n>\n>thanks\n>Gabriel...\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: Have you searched our list archives?\n>\n>http://www.postgresql.org/search.mpl\n>\n>\n>\n>---\n>Incoming mail is certified Virus Free.\n>Checked by AVG anti-virus system (http://www.grisoft.com).\n>Version: 6.0.251 / Virus Database: 124 - Release Date: 4/26/01\n\n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.251 / Virus Database: 124 - Release Date: 4/26/01", "msg_date": "Mon, 18 Jun 2001 14:22:17 +0100", "msg_from": "Ryan Mahoney <ryan@paymentalliance.net>", "msg_from_op": true, "msg_subject": "Re: POSTMASTER" }, { "msg_contents": "\nHello All.\n\nHow can i limit how much of cpu the postmaster can use?\n\nthanks \nGabriel...\n", "msg_date": "18 Jun 2001 17:00:41 -0000", "msg_from": "\"gabriel\" <gabriel@workingnetsp.com.br>", "msg_from_op": false, "msg_subject": "POSTMASTER" }, { "msg_contents": "On 18 Jun 2001 17:00:41 -0000, you wrote:\n\n>\n>Hello All.\n>\n>How can i limit how much of cpu the postmaster can use?\n\nMaybe your host OS can limit the resource usage of the userid that\npostmaster runs under?\n-- \n__________________________________________________\n\"Nothing is as subjective as reality\"\nReinoud van Leeuwen reinoud@xs4all.nl\nhttp://www.xs4all.nl/~reinoud\n__________________________________________________\n", "msg_date": "Mon, 18 Jun 2001 19:25:11 GMT", "msg_from": "reinoud@xs4all.nl (Reinoud van Leeuwen)", "msg_from_op": false, "msg_subject": "Re: POSTMASTER" } ]
[ { "msg_contents": "\nHow can I check that all functions references (OIDs)\nare OK ?\n\n\nBest Regards,\n\nJean-Francois Leveque\n\n\n____________________________________________________________________\n- http://www.WebMailSPro.com - >> \nVOTRE service d'email sans pub avec VOTRE nom de domaine\n\n", "msg_date": "Mon, 18 Jun 2001 14:25:22 +0100", "msg_from": "\"Jean-Francois Leveque\" <leveque@webmails.com>", "msg_from_op": true, "msg_subject": "Function checking: referenced functions (OIDs)" } ]
[ { "msg_contents": "\n> > Because we do not want the dba to decide which statistics are optimal,\n> > there should probably be an analyze helper application that is invoked\n> > with \"vacuum analyze database optimal\" or some such, that also decides \n> > whether a table was sufficiently altered to justify new stats gathering\n> > or vacuum.\n> \n> And on what are you going to base \"sufficiently altered\"?\n\nProbably current table size vs table size in statistics and maybe timestamp\nwhen statistics were last updated. Good would also be the active row count, but \nwe don't have cheap access to the current value.\n\nThe point is, that if the combined effort of all \"hackers\" (with the help of \nsome large scale users) cannot come to a more or less generally adequate answer, \nthe field dba most certainly won't eighter.\n\nAndreas\n", "msg_date": "Mon, 18 Jun 2001 17:21:09 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: Call for alpha testing: planner statistics revi\n\tsion s" }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n>> And on what are you going to base \"sufficiently altered\"?\n\n> Probably current table size vs table size in statistics and maybe\n> timestamp when statistics were last updated. Good would also be the\n> active row count, but we don't have cheap access to the current value.\n\nOnce we get done with online VACUUM and internal free space re-use\n(which is next on my to-do list), growth of the physical file will be\na poor guide to number of updated tuples, too.
So the above proposal\nreduces to \"time since last update\", for which we do not need any\nbackend support: people already run VACUUM ANALYZE from cron tasks.\n\n> The point is, that if the combined effort of all \"hackers\" (with the\n> help of some large scale users) cannot come to a more or less\n> generally adequate answer, the field dba most certainly won't eighter.\n\nTrue, but I regard your \"if\" as unproven. The reason for this call for\nalpha testing is to find out whether we have a good enough solution or\nnot. I feel no compulsion to assume that it's not good enough on the\nbasis of no evidence.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Jun 2001 11:31:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: AW: Call for alpha testing: planner statistics revi sion s " } ]
[ { "msg_contents": "\n> > 3. if at all, an automatic analyze should do the samples on small tables,\n> > and accurate stats on large tables\n> \n> Other way 'round, surely? It already does that: if your table has fewer\n> rows than the sampling target, they all get used.\n\nI mean, that it is probably not useful to maintain distribution statistics \nfor a table that is that small at all (e.g. <= 3000 rows and less than 512 k size). \nSo let me reword: do the samples for medium sized tables.\n\n> > When on the other hand the optimizer does a \"mistake\" on a huge table\n> > the difference is easily a matter of hours, thus you want accurate stats.\n> \n> Not if it takes hours to get the stats. I'm more interested in keeping\n> ANALYZE cheap and encouraging DBAs to run it frequently, so that the\n> stats stay up-to-date. It doesn't matter how perfect the stats were\n> when they were made, if the table has changed since then.\n\nThat is true, but this is certainly a tradeoff situation. For a huge table\nthat is quite static you would certainly want most accurate statistics even\nif it takes hours to compute once a month.\n\nMy comments are based on praxis and not theory :-) Of course current \nstate of the art optimizer implementations might lag well behind state of\nthe art theory from ACM SIGMOD :-)\n\nAndreas\n", "msg_date": "Mon, 18 Jun 2001 17:41:31 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: Call for alpha testing: planner statistics revi\n\tsion s" }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> I mean, that it is probably not useful to maintain distribution statistics \n> for a table that is that small at all (e.g. <= 3000 rows and less than\n> 512 k size). \n\nActually, stats are quite interesting for smaller tables too. 
Maybe not\nso much for the table itself (ie, deciding between seq and index scan is\nnot so critical), but to estimate sizes of joins against other tables.\n\n>> Not if it takes hours to get the stats. I'm more interested in keeping\n>> ANALYZE cheap and encouraging DBAs to run it frequently, so that the\n>> stats stay up-to-date. It doesn't matter how perfect the stats were\n>> when they were made, if the table has changed since then.\n\n> That is true, but this is certainly a tradeoff situation. For a huge table\n> that is quite static you would certainly want most accurate statistics even\n> if it takes hours to compute once a month.\n\nSure. My thought is that one would do this by increasing the SET\nSTATISTICS targets for such tables, thus yielding more detailed stats\nthat take longer to compute. What we need now is experimentation to\nfind out how well this works in practice. It might well be that more\nknobs will turn out to be useful, but let's not add complexity until\nwe've proven it to be necessary ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Jun 2001 12:26:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: AW: Call for alpha testing: planner statistics revi sion s " } ]
[ { "msg_contents": "\n> > The point is, that if the combined effort of all \"hackers\" (with the\n> > help of some large scale users) cannot come to a more or less\n> > generally adequate answer, the field dba most certainly won't eighter.\n> \n> True, but I regard your \"if\" as unproven. The reason for this call for\n> alpha testing is to find out whether we have a good enough solution or\n> not. I feel no compulsion to assume that it's not good enough on the\n> basis of no evidence.\n\nYes, sure, sorry. I certainly don't mean to be offensive. I am just \nvery interested in this area, and the reasoning behind your decisions.\nTime to start reading all your code comments, and doing test cases :-)\n\nAndreas\n", "msg_date": "Mon, 18 Jun 2001 17:49:43 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: AW: Call for alpha testing: planner statistics revi sion s " } ]
[ { "msg_contents": "I think there will be a race condition around the time of a database\nshutdown. After a new child process is created, the system may go into\nshutdown mode. However, there is a window where the child won't know\nthis, because the signal is blocked and/or not set up yet. Note that we\ncannot check for shutdown before forking the child because we won't know\nyet whether the incoming packet is actually a connection request. We\nwouldn't want to send back an error in response to a cancel request, for\nexample.\n\nAny ideas?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 18 Jun 2001 18:56:44 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Problem with reading startup packet after fork" }, { "msg_contents": "Peter Eisentraut writes:\n\n> I think there will be a race condition around the time of a database\n> shutdown. After a new child process is created, the system may go into\n> shutdown mode. However, there is a window where the child won't know\n> this, because the signal is blocked and/or not set up yet. Note that we\n> cannot check for shutdown before forking the child because we won't know\n> yet whether the incoming packet is actually a connection request. We\n> wouldn't want to send back an error in response to a cancel request, for\n> example.\n\nWell, cancel that. The child will see the shutdown because the signal is\nblocked around the fork. 
So unless anyone has issues with the latest\npatch I posted I will check it in in slightly clean-up form.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 19 Jun 2001 20:41:12 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Problem with reading startup packet after fork" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I think there will be a race condition around the time of a database\n> shutdown. After a new child process is created, the system may go into\n> shutdown mode. However, there is a window where the child won't know\n> this, because the signal is blocked and/or not set up yet.\n\nThe child will hold off the signal until it can do something about it.\nThe postmaster will wait for the child to exit. Where's the problem?\n\nNote the following comment in postgres.c:\n\n /*\n * Set up signal handlers and masks.\n *\n * Note that postmaster blocked all signals before forking child process,\n * so there is no race condition whereby we might receive a signal\n * before we have set up the handler.\n *\n * Also note: it's best not to use any signals that are SIG_IGNored in\n * the postmaster. If such a signal arrives before we are able to\n * change the handler to non-SIG_IGN, it'll get dropped. If\n * necessary, make a dummy handler in the postmaster to reserve the\n * signal.\n */\n\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Jun 2001 23:28:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem with reading startup packet after fork " } ]
[ { "msg_contents": "\nMorning ...\n\n\tI'm trying to wrack my brain over something here, and no matter\nhow I try and look at it, I'm drawing a blank ...\n\n\tI have two tables that are dependent on each other:\n\n\tnotes (86736 tuples) and note_links (173473 tuples)\n\n\tThe relationship is that one note can have several 'ppl' link'd to\nit ...\n\n\tI have a third table: calendar (11014 tuples) ... those calendar\nentries link to a note.\n\n\tSo you have something like:\n\n\tpersonA ---\n personB --|--> note_links --> notes --[maybe]--> calendar entry\n personC ---\n\n\tnow, the query I'm workign with is:\n\nSELECT n.note, n.nid, n.type, c.act_type, c.status, nl.contact_lvl,\n CASE WHEN c.act_start IS NULL\n THEN date_part('epoch', n.added)\n ELSE date_part('epoch', c.act_start)\n END AS start\n FROM note_links nl, notes n LEFT JOIN calendar c ON (n.nid = c.nid)\n WHERE (n.type = 'A' OR n.type = 'N' OR n.type = 'H' OR n.type = 'C')\n AND (nl.id = 15748 AND contact_lvl = 'company')\n AND n.nid = nl.nid\n ORDER BY start DESC;\n\nWhich explains out as:\n\nNOTICE: QUERY PLAN:\n\nSort (cost=7446.32..7446.32 rows=1 width=88)\n -> Nested Loop (cost=306.52..7446.31 rows=1 width=88)\n -> Index Scan using note_links_id on note_links nl (cost=0.00..3.49 rows=1 width=16)\n -> Materialize (cost=6692.63..6692.63 rows=60015 width=72)\n -> Hash Join (cost=306.52..6692.63 rows=60015 width=72)\n -> Seq Scan on notes n (cost=0.00..2903.98 rows=60015 width=36)\n -> Hash (cost=206.22..206.22 rows=10122 width=36)\n -> Seq Scan on calendar c (cost=0.00..206.22 rows=10122 width=36)\n\nEXPLAIN\n\nand takes forever to run ...\n\nNow, if I eliminate the LEFT JOIN part of the above, *one* tuple is\nreturned ... 
so even with the LEFT JOIN, only *one* tuple is going to be\nreturned ...\n\nIs there some way to write the above so that it evaluates:\n\n WHERE (n.type = 'A' OR n.type = 'N' OR n.type = 'H' OR n.type = 'C')\n AND (nl.id = 15748 AND contact_lvl = 'company')\n AND n.nid = nl.nid\n\nfirst, so that it only has to do the LEFT JOIN on the *one* n.nid that is\nreturned, instead of the 86736 that are in the table?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n\n", "msg_date": "Mon, 18 Jun 2001 14:26:18 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "LEFT JOIN ..." }, { "msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> FROM note_links nl, notes n LEFT JOIN calendar c ON (n.nid = c.nid)\n> WHERE (n.type = 'A' OR n.type = 'N' OR n.type = 'H' OR n.type = 'C')\n> AND (nl.id = 15748 AND contact_lvl = 'company')\n> AND n.nid = nl.nid\n> ORDER BY start DESC;\n\n> Is there some way to write the above so that it evaluates:\n> first, so that it only has to do the LEFT JOIN on the *one* n.nid that is\n> returned, instead of the 86736 that are in the table?\n\nTry adding ... AND n.nid = 15748 ... to the WHERE. It's not very\nbright about making that sort of transitive-equality deduction for\nitself...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Jun 2001 14:07:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: LEFT JOIN ... 
" }, { "msg_contents": "\nI think that using INNER JOIN between nl and n (on n.nid=nl.nid) or\njoining those tables in a subquery might work.\n\nOn Mon, 18 Jun 2001, The Hermit Hacker wrote:\n\n> Is there some way to write the above so that it evaluates:\n> \n> WHERE (n.type = 'A' OR n.type = 'N' OR n.type = 'H' OR n.type = 'C')\n> AND (nl.id = 15748 AND contact_lvl = 'company')\n> AND n.nid = nl.nid\n> \n> first, so that it only has to do the LEFT JOIN on the *one* n.nid that is\n> returned, instead of the 86736 that are in the table?\n\n", "msg_date": "Mon, 18 Jun 2001 11:18:49 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: LEFT JOIN ..." }, { "msg_contents": "On Mon, 18 Jun 2001, Tom Lane wrote:\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n> > FROM note_links nl, notes n LEFT JOIN calendar c ON (n.nid = c.nid)\n> > WHERE (n.type = 'A' OR n.type = 'N' OR n.type = 'H' OR n.type = 'C')\n> > AND (nl.id = 15748 AND contact_lvl = 'company')\n> > AND n.nid = nl.nid\n> > ORDER BY start DESC;\n>\n> > Is there some way to write the above so that it evaluates:\n> > first, so that it only has to do the LEFT JOIN on the *one* n.nid that is\n> > returned, instead of the 86736 that are in the table?\n>\n> Try adding ... AND n.nid = 15748 ... to the WHERE. It's not very\n> bright about making that sort of transitive-equality deduction for\n> itself...\n\nn.nid is the note id ... nl.id is the contact id ...\n\nI'm trying to pull out all notes for the company with an id of 15748:\n\nsepick=# select * from note_links where id = 15748;\n nid | id | contact_lvl | owner\n-------+-------+-------------+-------\n 84691 | 15748 | company | f\n(1 row)\n\n\n", "msg_date": "Mon, 18 Jun 2001 15:26:41 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: LEFT JOIN ... " }, { "msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n>> Try adding ... 
AND n.nid = 15748 ... to the WHERE.\n\n> n.nid is the note id ... nl.id is the contact id ...\n\nOoops, I misread \"n.nid = nl.nid\" as \"n.nid = nl.id\". Sorry for the\nbogus advice.\n\nTry rephrasing as\n\nFROM (note_links nl JOIN notes n ON (n.nid = nl.nid))\n LEFT JOIN calendar c ON (n.nid = c.nid)\nWHERE ...\n\nThe way you were writing it forced the LEFT JOIN to be done first,\nwhereas what you want is for the note_links-to-notes join to be done\nfirst. See\nhttp://www.ca.postgresql.org/users-lounge/docs/7.1/postgres/explicit-joins.html\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Jun 2001 15:56:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: LEFT JOIN ... " }, { "msg_contents": "\nPerfect, thank you ... i knew I was overlooking something obvious ... the\nquery just flies now ...\n\nOn Mon, 18 Jun 2001, Tom Lane wrote:\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n> >> Try adding ... AND n.nid = 15748 ... to the WHERE.\n>\n> > n.nid is the note id ... nl.id is the contact id ...\n>\n> Ooops, I misread \"n.nid = nl.nid\" as \"n.nid = nl.id\". Sorry for the\n> bogus advice.\n>\n> Try rephrasing as\n>\n> FROM (note_links nl JOIN notes n ON (n.nid = nl.nid))\n> LEFT JOIN calendar c ON (n.nid = c.nid)\n> WHERE ...\n>\n> The way you were writing it forced the LEFT JOIN to be done first,\n> whereas what you want is for the note_links-to-notes join to be done\n> first. See\n> http://www.ca.postgresql.org/users-lounge/docs/7.1/postgres/explicit-joins.html\n>\n> \t\t\tregards, tom lane\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Mon, 18 Jun 2001 17:17:00 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: LEFT JOIN ... " } ]
[ { "msg_contents": "\nI have a table that records events, one per row. There is\na timestamp column, and an index on that timestamp. Every\nso often a process checks the number of rows in the table,\nand then deletes old events until the number of rows is below\nsome pre-set limit.\n\nThe size of the data file stays roughly constant over time\n(a vacuum is performed after deletions), but the size of\nthe index file seems to grow forever! Vacuum doesn't seem\nto help, even when I type the command slowly and hit the \nkeys very hard.\n\nHas anyone seen this? I know I can drop/recreate the index but\nI'd rather understand what is going on first.\n\nI am running PSQL V 7.0.3 on Linux. Is this fixed in 7.1\n\nBonus question: I'm also worried about the pg_log file.\n\n-- cary\n\n(I tried pgsql-bugs to no avail)\n\n\n\n", "msg_date": "Mon, 18 Jun 2001 16:05:53 -0400 (EDT)", "msg_from": "\"Cary O'Brien\" <cobrien@Radix.Net>", "msg_from_op": true, "msg_subject": "Index files grow forever?" } ]
[ { "msg_contents": "Hi,\n\nWhen I am trying to import a file(named 'myfile') that is present in\nmy home directory into the the postgresql database using the\n*pgaccess* gui tool, it is giving the following error.\n\nERROR: COPY command, running in backend with effective uid 26, could\nnot open file 'myfile' for reading. Errno=Permission denied\n\nCan any one please tell me what I am missing here. Is it necessary for\nthe 'postgres' user to own the file named 'myfile' under my home\ndirectory.\n\nTIA\nChakravarthy K Sannedhi\n", "msg_date": "18 Jun 2001 19:59:28 -0700", "msg_from": "san_kalyan@yahoo.com (Chakravarthy K Sannedhi)", "msg_from_op": true, "msg_subject": "Copy Error" } ]
[ { "msg_contents": "Is anyone else seeing this with current CVS, or is it my own breakage?\n\n*** ./expected/alter_table.out\tWed May 30 12:38:38 2001\n--- ./results/alter_table.out\tTue Jun 19 00:45:22 2001\n***************\n*** 340,347 ****\n ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references pktable(ptest1);\n NOTICE: ALTER TABLE ... ADD CONSTRAINT will create implicit trigger(s) for FOREIGN KEY check(s)\n DROP TABLE pktable;\n! NOTICE: DROP TABLE implicitly drops referential integrity trigger from table \"fktable\"\n! NOTICE: DROP TABLE implicitly drops referential integrity trigger from table \"fktable\"\n DROP TABLE fktable;\n CREATE TEMP TABLE PKTABLE (ptest1 int, ptest2 text,\n PRIMARY KEY(ptest1, ptest2));\n--- 340,347 ----\n ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references pktable(ptest1);\n NOTICE: ALTER TABLE ... ADD CONSTRAINT will create implicit trigger(s) for FOREIGN KEY check(s)\n DROP TABLE pktable;\n! NOTICE: DROP TABLE implicitly drops referential integrity trigger from table \"pg_temp_15818_3\"\n! NOTICE: DROP TABLE implicitly drops referential integrity trigger from table \"pg_temp_15818_3\"\n DROP TABLE fktable;\n CREATE TEMP TABLE PKTABLE (ptest1 int, ptest2 text,\n PRIMARY KEY(ptest1, ptest2));\n\n======================================================================\n\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jun 2001 00:49:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "temp-table-related failure in regression tests" }, { "msg_contents": "I wrote:\n> Is anyone else seeing this with current CVS, or is it my own breakage?\n\nAh, the problem is RelationGetRelationName didn't know about the\nnew temprel naming convention.\n\nI quick-hacked rel.h to fix this, but we need a better solution.\nI don't much like having rel.h include temprel.h --- seems like the\ninclude should go the other way. 
Should is_temp_relname get moved\nto rel.h?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jun 2001 01:24:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: temp-table-related failure in regression tests " }, { "msg_contents": "> I wrote:\n> > Is anyone else seeing this with current CVS, or is it my own breakage?\n> \n> Ah, the problem is RelationGetRelationName didn't know about the\n> new temprel naming convention.\n\nSorry, yes, I missed changing that. I thought I had all the pg_temp\nmapped to defines but I missed that one and hence didn't change to\nunderscore names.\n\n> \n> I quick-hacked rel.h to fix this, but we need a better solution.\n> I don't much like having rel.h include temprel.h --- seems like the\n> include should go the other way. Should is_temp_relname get moved\n> to rel.h?\n\nYou will see near the top of rel.h:\n\n\t/* added to prevent circular dependency. bjm 1999/11/15 */\n\textern char *get_temp_rel_by_physicalname(const char *relname);\n\nso it seems there is an interdependency between rel.h and temprel.h. \nTrying to include temprel.h in rel.h causes a compile problem. Going\nthe other way I assume would cause the same problem.\n\nWe can move the is_temp_relname define if you wish but with one hack\nalready in rel.h for get_temp_rel_by_physicalname(), I am not excited\nabout adding another to rel.h. In fact, is_temp_relname needs\nPG_TEMP_REL_PREFIX so we would have to move it too. However, I don't\nsee any other solution so I moved both to from temprel.h to rel.h.\n\nI am attaching a fix that I have committed to CVS. If you don't like\nit, feel free to reverse it out and try something else. Seems to\ncompile fine.\n\nI will be speaking at Red Hat HQ today so will not be available to\nreverse it out myself.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\nIndex: src/include/utils/rel.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/utils/rel.h,v\nretrieving revision 1.47\ndiff -c -r1.47 rel.h\n*** src/include/utils/rel.h\t2001/06/19 05:11:50\t1.47\n--- src/include/utils/rel.h\t2001/06/19 12:00:59\n***************\n*** 188,230 ****\n #define RelationGetFile(relation) ((relation)->rd_fd)\n \n /*\n- * RelationGetRelationName\n- *\n- *\t Returns the relation's logical name (as seen by the user).\n- *\n- * If the rel is a temp rel, the temp name will be returned. Therefore,\n- * this name is not unique. But it is the name to use in heap_openr(),\n- * for example.\n- */\n- #define RelationGetRelationName(relation) \\\n- (\\\n- \t(strncmp(RelationGetPhysicalRelationName(relation), \\\n- \t\t\t \"pg_temp\", 7) != 0) \\\n- \t? \\\n- \t\tRelationGetPhysicalRelationName(relation) \\\n- \t: \\\n- \t\tget_temp_rel_by_physicalname( \\\n- \t\t\tRelationGetPhysicalRelationName(relation)) \\\n- )\n- \n- \n- /*\n- * RelationGetPhysicalRelationName\n- *\n- *\t Returns the rel's physical name, ie, the name appearing in pg_class.\n- *\n- * While this name is unique across all rels in the database, it is not\n- * necessarily useful for accessing the rel, since a temp table of the\n- * same name might mask the rel. 
It is useful mainly for determining if\n- * the rel is a shared system rel or not.\n- *\n- * The macro is rather unfortunately named, since the pg_class name no longer\n- * has anything to do with the file name used for physical storage of the rel.\n- */\n- #define RelationGetPhysicalRelationName(relation) \\\n- \t(NameStr((relation)->rd_rel->relname))\n- \n- /*\n * RelationGetNumberOfAttributes\n *\n *\t Returns the number of attributes.\n--- 188,193 ----\n***************\n*** 253,257 ****\n--- 216,264 ----\n extern void RelationSetIndexSupport(Relation relation,\n \t\t\t\t\t\tIndexStrategy strategy,\n \t\t\t\t\t\tRegProcedure *support);\n+ \n+ /*\n+ * Handle temp relations\n+ */\n+ #define PG_TEMP_REL_PREFIX \"pg_temp\"\n+ \n+ #define is_temp_relname(relname) \\\n+ \t\t(strncmp(relname, PG_TEMP_REL_PREFIX, strlen(PG_TEMP_REL_PREFIX)) == 0)\n+ \n+ /*\n+ * RelationGetPhysicalRelationName\n+ *\n+ *\t Returns the rel's physical name, ie, the name appearing in pg_class.\n+ *\n+ * While this name is unique across all rels in the database, it is not\n+ * necessarily useful for accessing the rel, since a temp table of the\n+ * same name might mask the rel. It is useful mainly for determining if\n+ * the rel is a shared system rel or not.\n+ *\n+ * The macro is rather unfortunately named, since the pg_class name no longer\n+ * has anything to do with the file name used for physical storage of the rel.\n+ */\n+ #define RelationGetPhysicalRelationName(relation) \\\n+ \t(NameStr((relation)->rd_rel->relname))\n+ \n+ /*\n+ * RelationGetRelationName\n+ *\n+ *\t Returns the relation's logical name (as seen by the user).\n+ *\n+ * If the rel is a temp rel, the temp name will be returned. Therefore,\n+ * this name is not unique. But it is the name to use in heap_openr(),\n+ * for example.\n+ */\n+ #define RelationGetRelationName(relation) \\\n+ (\\\n+ \t!is_temp_relname(relation) \\\n+ \t? 
\\\n+ \t\tRelationGetPhysicalRelationName(relation) \\\n+ \t: \\\n+ \t\tget_temp_rel_by_physicalname( \\\n+ \t\t\tRelationGetPhysicalRelationName(relation)) \\\n+ )\n+ \n \n #endif\t /* REL_H */\nIndex: src/include/utils/temprel.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/utils/temprel.h,v\nretrieving revision 1.16\ndiff -c -r1.16 temprel.h\n*** src/include/utils/temprel.h\t2001/06/18 16:13:21\t1.16\n--- src/include/utils/temprel.h\t2001/06/19 12:00:59\n***************\n*** 16,26 ****\n \n #include \"access/htup.h\"\n \n- #define PG_TEMP_REL_PREFIX \"pg_temp\"\n- \n- #define is_temp_relname(relname) \\\n- \t\t(strncmp(relname, PG_TEMP_REL_PREFIX, strlen(PG_TEMP_REL_PREFIX)) == 0)\n- \n extern void create_temp_relation(const char *relname,\n \t\t\t\t\t HeapTuple pg_class_tuple);\n extern void remove_temp_rel_by_relid(Oid relid);\n--- 16,21 ----", "msg_date": "Tue, 19 Jun 2001 08:03:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: temp-table-related failure in regression tests" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> We can move the is_temp_relname define if you wish but with one hack\n> already in rel.h for get_temp_rel_by_physicalname(), I am not excited\n> about adding another to rel.h. In fact, is_temp_relname needs\n> PG_TEMP_REL_PREFIX so we would have to move it too. However, I don't\n> see any other solution so I moved both to from temprel.h to rel.h.\n\nYeah, I didn't see any other way either. With luck this whole issue\nwill go away when we do schemas... in the meantime at least the ugliness\nis pretty localized.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jun 2001 10:06:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: temp-table-related failure in regression tests " } ]
[ { "msg_contents": "\n> -- If I have interpreted SQL92 correctly UNKNOWN IS TRUE should return\n> FALSE, and UNKNOWN IS NOT TRUE is equivalent to NOT (UNKNOWN IS TRUE) ==>\n> TRUE. Is this correct?\n\nNo, I do not think it is valid to say \"should return true|false\"\nI think they should return UNKNOWN. Only when it comes to evaluating the\n\"... WHERE UNKNOWN;\" can you translate it to \"... WHERE FALSE;\", or in the \noutput function.\n\nMy interpretation would be:\nUNKNOWN IS TRUE \t\t--> FALSE\nUNKNOWN IS NOT TRUE \t--> FALSE\nNOT (UNKNOWN IS TRUE)\t--> FALSE\n\nAndreas\n", "msg_date": "Tue, 19 Jun 2001 09:10:25 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: Re: [SQL] behavior of ' = NULL' vs. MySQL vs. S\n\ttand ards" } ]
[ { "msg_contents": "\n> > -- If I have interpreted SQL92 correctly UNKNOWN IS TRUE should return\n> > FALSE, and UNKNOWN IS NOT TRUE is equivalent to NOT (UNKNOWN IS TRUE) ==>\n> > TRUE. Is this correct?\n> \n> No, I do not think it is valid to say \"should return true|false\"\n> I think they should return UNKNOWN. Only when it comes to evaluating the\n> \"... WHERE UNKNOWN;\" can you translate it to \"... WHERE FALSE;\", or in the \n> output function.\n\nForget it, sorry. I am confusing the \"IS\" with \"=\" :-( \nI am not used to the new SQL99 syntax yet.\nI think your interpretation is correct.\n\nAndreas\n", "msg_date": "Tue, 19 Jun 2001 09:21:07 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: Re: [SQL] behavior of ' = NULL' vs. MySQL vs. S\n\ttand ards" } ]
[ { "msg_contents": "I am seeing this error in CVS current:\n\ngmake -f Makefile all\ngmake[1]: Entering directory `/usr/var/local/src/gen/pgsql/CURRENT/pgsql/src/pl/plperl'\nLD_RUN_PATH=\"\" ld -o blib/arch/auto/plperl/plperl.so -shared -x -L/usr/X11/lib -L/usr/local/lib plperl.o eloglvl.o SPI.o -rdynamic -Wl,-rpath,/usr/libdata/perl5/5.00503/i386-bsdos/CORE -L/usr/X11/lib -L/usr/local/lib /usr/libdata/perl5/5.00503/i386-bsdos/auto/DynaLoader/DynaLoader.a -L/usr/libdata/perl5/5.00503/i386-bsdos/CORE -lperl -ldl -lm -lc \n\nld: -r and -shared may not be used together\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\ngmake[1]: *** [blib/arch/auto/plperl/plperl.so] Error 1\ngmake[1]: Leaving directory `/usr/var/local/src/gen/pgsql/CURRENT/pgsql/src/pl/plperl'\ngmake: *** [all] Error 2\n\nCan anyone suggest a fix? Everything else compiles fine. I am on\nBSD/OS.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 19 Jun 2001 08:06:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "PlPerl compile failure" }, { "msg_contents": "Output attached.\n\n\n> That's me fault. 
I'm reading up on bsdos documentation to see what's the\n> right way to run ld on it.\n> \n> Can you give me output of 'perl -MExtUtils::Embed -e ldopts' and 'perl -V'\n> on your machine?\n> \n> Thanks\n> \n> \n> \n> On Tue, 19 Jun 2001, Bruce Momjian wrote:\n> \n> > I am seeing this error in CVS current:\n> > \n> > gmake -f Makefile all\n> > gmake[1]: Entering directory `/usr/var/local/src/gen/pgsql/CURRENT/pgsql/src/pl/plperl'\n> >\n> > LD_RUN_PATH=\"\" ld -o blib/arch/auto/plperl/plperl.so -shared -x\n> > -L/usr/X11/lib -L/usr/local/lib plperl.o eloglvl.o SPI.o -rdynamic\n> > -Wl,-rpath,/usr/libdata/perl5/5.00503/i386-bsdos/CORE -L/usr/X11/lib\n> > -L/usr/local/lib\n> > /usr/libdata/perl5/5.00503/i386-bsdos/auto/DynaLoader/DynaLoader.a\n> > -L/usr/libdata/perl5/5.00503/i386-bsdos/CORE -lperl -ldl -lm -lc\n> > \n> > ld: -r and -shared may not be used together\n> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > gmake[1]: *** [blib/arch/auto/plperl/plperl.so] Error 1\n> > gmake[1]: Leaving directory `/usr/var/local/src/gen/pgsql/CURRENT/pgsql/src/pl/plperl'\n> > gmake: *** [all] Error 2\n> > \n> > Can anyone suggest a fix? Everything else compiles fine. I am on\n> > BSD/OS.\n> > \n> > \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n-rdynamic -Wl,-rpath,/usr/libdata/perl5/5.00503/i386-bsdos/CORE -L/usr/X11/lib -L/usr/local/lib /usr/libdata/perl5/5.00503/i386-bsdos/auto/DynaLoader/DynaLoader.a -L/usr/libdata/perl5/5.00503/i386-bsdos/CORE -lperl -ldl -lm -lc\n\n\nSummary of my perl5 (5.0 patchlevel 5 subversion 3) configuration:\n Platform:\n osname=bsdos, osvers=4.2, archname=i386-bsdos\n uname='bsdos bsdi.com 4.2 :bsdos_distribution defined: i386'\n hint=recommended, useposix=true, d_sigaction=define\n usethreads=undef useperlio=undef d_sfio=undef\n Compiler:\n cc='cc', optimize='-O2', gccversion=2.95.2 19991024 (release)\n cppflags='-I/usr/local/include'\n ccflags ='-I/usr/local/include'\n stdchar='char', d_stdstdio=undef, usevfork=false\n intsize=4, longsize=4, ptrsize=4, doublesize=8\n d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12\n alignbytes=4, usemymalloc=n, prototype=define\n Linker and Libraries:\n ld='ld', ldflags =' -L/usr/X11/lib -L/usr/local/lib'\n libpth=/usr/local/lib /usr/shlib /shlib /lib /usr/lib /usr/X11/lib\n libs=-ldl -lm -lc\n libc=/shlib/libc.so, so=so, useshrplib=true, libperl=libperl.so\n Dynamic Linking:\n dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-rdynamic -Wl,-rpath,/usr/libdata/perl5/5.00503/i386-bsdos/CORE'\n cccdlflags='-fPIC', lddlflags='-shared -x -L/usr/X11/lib -L/usr/local/lib'\n\n\nCharacteristics of this binary (from libperl): \n Built under bsdos\n Compiled at Oct 8 2000 21:58:54\n @INC:\n /usr/libdata/perl5/5.00503/i386-bsdos\n /usr/libdata/perl5/5.00503\n /usr/libdata/perl5/site_perl/i386-bsdos\n /usr/libdata/perl5/site_perl\n /usr/libdata/perl5/site_perl/i386-bsdos/include\n .", "msg_date": "Tue, 19 Jun 2001 08:31:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: PlPerl compile failure" }, { "msg_contents": "That's me fault. I'm reading up on bsdos documentation to see what's the\nright way to run ld on it.\n\nCan you give me output of 'perl -MExtUtils::Embed -e ldopts' and 'perl -V'\non your machine?\n\nThanks\n\n\n\nOn Tue, 19 Jun 2001, Bruce Momjian wrote:\n\n> I am seeing this error in CVS current:\n> \n> gmake -f Makefile all\n> gmake[1]: Entering directory `/usr/var/local/src/gen/pgsql/CURRENT/pgsql/src/pl/plperl'\n>\n> LD_RUN_PATH=\"\" ld -o blib/arch/auto/plperl/plperl.so -shared -x\n> -L/usr/X11/lib -L/usr/local/lib plperl.o eloglvl.o SPI.o -rdynamic\n> -Wl,-rpath,/usr/libdata/perl5/5.00503/i386-bsdos/CORE -L/usr/X11/lib\n> -L/usr/local/lib\n> /usr/libdata/perl5/5.00503/i386-bsdos/auto/DynaLoader/DynaLoader.a\n> -L/usr/libdata/perl5/5.00503/i386-bsdos/CORE -lperl -ldl -lm -lc\n> \n> ld: -r and -shared may not be used together\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> gmake[1]: *** [blib/arch/auto/plperl/plperl.so] Error 1\n> gmake[1]: Leaving directory `/usr/var/local/src/gen/pgsql/CURRENT/pgsql/src/pl/plperl'\n> gmake: *** [all] Error 2\n> \n> Can anyone suggest a fix? Everything else compiles fine. I am on\n> BSD/OS.\n> \n> \n\n", "msg_date": "Tue, 19 Jun 2001 08:40:24 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: PlPerl compile failure" }, { "msg_contents": "Bruce, try the following patch, and let me know.\n\nApparently, on some systems, ExtUtils::Embed and MakeMaker are slightly\nbroken, and its impossible to make a shared library when compiling with\nboth CCDLFLAGS and LDDLFAGS, you have to pick one or the other.\n\nIndex: Makefile.PL\n===================================================================\nRCS file: /home/cvs/pgsql/pgsql/src/pl/plperl/Makefile.PL,v\nretrieving revision 1.12.1000.1\ndiff --unified -r1.12.1000.1 Makefile.PL\n--- Makefile.PL 2001/06/16 23:18:04 1.12.1000.1\n+++ Makefile.PL 2001/06/19 14:10:33\n@@ -29,8 +29,11 @@\n exit(0);\n }\n \n+my $ldopts=ldopts();\n+$ldopts=~s/$Config{ccdlflags}//;\n+\n WriteMakefile( 'NAME' => 'plperl', \n- dynamic_lib => { 'OTHERLDFLAGS' => ldopts() } ,\n+ dynamic_lib => { 'OTHERLDFLAGS' => $ldopts } ,\n INC => \"$ENV{EXTRA_INCLUDES}\",\n XS => { 'SPI.xs' => 'SPI.c' },\n OBJECT => 'plperl.o eloglvl.o SPI.o',\n\n\nOn Tue, 19 Jun 2001, Bruce Momjian wrote:\n\n> \n> Output attached.\n> \n> \n> > That's me fault. I'm reading up on bsdos documentation to see what's the\n> > right way to run ld on it.\n> > \n> > Can you give me output of 'perl -MExtUtils::Embed -e ldopts' and 'perl -V'\n> > on your machine?\n> > \n> > Thanks\n> > \n> > \n> > \n> > On Tue, 19 Jun 2001, Bruce Momjian wrote:\n> > \n> > > I am seeing this error in CVS current:\n> > > \n> > > gmake -f Makefile all\n> > > gmake[1]: Entering directory `/usr/var/local/src/gen/pgsql/CURRENT/pgsql/src/pl/plperl'\n> > >\n> > > LD_RUN_PATH=\"\" ld -o blib/arch/auto/plperl/plperl.so -shared -x\n> > > -L/usr/X11/lib -L/usr/local/lib plperl.o eloglvl.o SPI.o -rdynamic\n> > > -Wl,-rpath,/usr/libdata/perl5/5.00503/i386-bsdos/CORE -L/usr/X11/lib\n> > > -L/usr/local/lib\n> > > /usr/libdata/perl5/5.00503/i386-bsdos/auto/DynaLoader/DynaLoader.a\n> > > -L/usr/libdata/perl5/5.00503/i386-bsdos/CORE -lperl -ldl -lm -lc\n> > > \n> > > ld: -r and -shared may not be used together\n> > > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > > gmake[1]: *** [blib/arch/auto/plperl/plperl.so] Error 1\n> > > gmake[1]: Leaving directory `/usr/var/local/src/gen/pgsql/CURRENT/pgsql/src/pl/plperl'\n> > > gmake: *** [all] Error 2\n> > > \n> > > Can anyone suggest a fix? Everything else compiles fine. I am on\n> > > BSD/OS.\n> > > \n> > > \n> > \n> > \n> \n> \n\n", "msg_date": "Tue, 19 Jun 2001 10:26:19 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: PlPerl compile failure" }, { "msg_contents": "\nThe fix worked. Thanks. Patch applied.\n\n> Bruce, try the following patch, and let me know.\n> \n> Apparently, on some systems, ExtUtils::Embed and MakeMaker are slightly\n> broken, and its impossible to make a shared library when compiling with\n> both CCDLFLAGS and LDDLFAGS, you have to pick one or the other.\n> \n> Index: Makefile.PL\n> ===================================================================\n> RCS file: /home/cvs/pgsql/pgsql/src/pl/plperl/Makefile.PL,v\n> retrieving revision 1.12.1000.1\n> diff --unified -r1.12.1000.1 Makefile.PL\n> --- Makefile.PL 2001/06/16 23:18:04 1.12.1000.1\n> +++ Makefile.PL 2001/06/19 14:10:33\n> @@ -29,8 +29,11 @@\n> exit(0);\n> }\n> \n> +my $ldopts=ldopts();\n> +$ldopts=~s/$Config{ccdlflags}//;\n> +\n> WriteMakefile( 'NAME' => 'plperl', \n> - dynamic_lib => { 'OTHERLDFLAGS' => ldopts() } ,\n> + dynamic_lib => { 'OTHERLDFLAGS' => $ldopts } ,\n> INC => \"$ENV{EXTRA_INCLUDES}\",\n> XS => { 'SPI.xs' => 'SPI.c' },\n> OBJECT => 'plperl.o eloglvl.o SPI.o',\n> \n> \n> On Tue, 19 Jun 2001, Bruce Momjian wrote:\n> \n> > \n> > Output attached.\n> > \n> > \n> > > That's me fault. I'm reading up on bsdos documentation to see what's the\n> > > right way to run ld on it.\n> > > \n> > > Can you give me output of 'perl -MExtUtils::Embed -e ldopts' and 'perl -V'\n> > > on your machine?\n> > > \n> > > Thanks\n> > > \n> > > \n> > > \n> > > On Tue, 19 Jun 2001, Bruce Momjian wrote:\n> > > \n> > > > I am seeing this error in CVS current:\n> > > > \n> > > > gmake -f Makefile all\n> > > > gmake[1]: Entering directory `/usr/var/local/src/gen/pgsql/CURRENT/pgsql/src/pl/plperl'\n> > > >\n> > > > LD_RUN_PATH=\"\" ld -o blib/arch/auto/plperl/plperl.so -shared -x\n> > > > -L/usr/X11/lib -L/usr/local/lib plperl.o eloglvl.o SPI.o -rdynamic\n> > > > -Wl,-rpath,/usr/libdata/perl5/5.00503/i386-bsdos/CORE -L/usr/X11/lib\n> > > > -L/usr/local/lib\n> > > > /usr/libdata/perl5/5.00503/i386-bsdos/auto/DynaLoader/DynaLoader.a\n> > > > -L/usr/libdata/perl5/5.00503/i386-bsdos/CORE -lperl -ldl -lm -lc\n> > > > \n> > > > ld: -r and -shared may not be used together\n> > > > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > > > gmake[1]: *** [blib/arch/auto/plperl/plperl.so] Error 1\n> > > > gmake[1]: Leaving directory `/usr/var/local/src/gen/pgsql/CURRENT/pgsql/src/pl/plperl'\n> > > > gmake: *** [all] Error 2\n> > > > \n> > > > Can anyone suggest a fix? Everything else compiles fine. I am on\n> > > > BSD/OS.\n> > > > \n> > > > \n> > > \n> > > \n> > \n> > \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 19 Jun 2001 20:26:10 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: PlPerl compile failure" } ]
[ { "msg_contents": "\nI've added checks in the triggers for checking to\nmake sure the row on insert/update to fk is still\nactually there at check time and that on noaction \nupdate/delete to pk that there wasn't a row added \nwith this value.\n\nWith commenting out the triggered data change check\nin the trigger manager this makes sequences like:\nbegin;\ndelete from pktable where key=<blah>;\ninsert into pktable (key) values (<blah>);\nend;\nwork correctly for deferred triggers.\n\nThis brings up the question however of what to do\nabout other referential actions. In the case above,\nshould we delete an fk row that was referenced by <blah>?\nDeleting it seems wierd (although seemed to be what\noracle did when I tried it), but not deleting it seems\ndangerous if you're using them to link sub-items to\na record (you re-use the number for someone new in that\ntransaction, do they get all the sub-items?) And I don't \nhave an SQL99 spec to see how restrict is supposed to act\nalthough by the comments in the code, it looks like the\nabove should fail with restrict.\n\nLast time this was brought up, I don't remember there \nbeing a consensus on what should be done with these cases\nso I thought I should bring it up before doing any coding.\n\n", "msg_date": "Tue, 19 Jun 2001 06:44:12 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": true, "msg_subject": "Fixing deferred foreign key checks" } ]
[ { "msg_contents": "Hi all!\n\nI'm thinking about starting a (serius) project to\nbring a good graphical interface to the administration\nand operation of postgresql, similar to what other\ndatabases have.\n\nThe goal is to be able to do all the administrative\nwork of postgres in a single program. It would\ninclude manage databases, tables, users, privileges\n..., see/alter table contents, start/stop the\nserver(s), configure postgres, manage backups, have a\ngood SQL console, monitorize the backends, ...\n\nAs you can see it would be a superset of the pgaccess\ncapabilities but backwards compatible with it.\nThe program would be done in Java/Swing, being\nautomaticaly portable. The license would be the\nlicense of PostgreSQL itself.\n\nThe cuestion is: Is there any posibility of such a\nbeast being ever included in the standard distribution\nof PostgreSQL, if it prove util, well designed and\nrock-solid, or it would not be accepted like a\nstandard add-on by the PostgreSQL hacker comunity?\n(perhaps Java/Swing not acceptable, ...)\n\nThank you.\n\nPedro Abelleira Seco\n\n_______________________________________________________________\nDo You Yahoo!?\nYahoo! Messenger: Comunicación instantánea gratis con tu gente -\nhttp://messenger.yahoo.es\n", "msg_date": "Tue, 19 Jun 2001 17:48:21 +0200 (CEST)", "msg_from": "=?iso-8859-1?q?Pedro=20Abelleira=20Seco?= <pedroabelleira@yahoo.es>", "msg_from_op": true, "msg_subject": "Universal admin frontend" }, { "msg_contents": "Well, there is something called druid which almost works with postgres, some\nof the problems are related to the jdbc driver not being complete.\n\nAlso there is pgAdmin which is windows specific.\n\nDave\n----- Original Message -----\nFrom: \"Pedro Abelleira Seco\" <pedroabelleira@yahoo.es>\nTo: <pgsql-hackers@postgresql.org>\nSent: Tuesday, June 19, 2001 11:48 AM\nSubject: [HACKERS] Universal admin frontend\n\n\n> Hi all!\n>\n> I'm thinking about starting a (serius) project to\n> bring a good graphical interface to the administration\n> and operation of postgresql, similar to what other\n> databases have.\n>\n> The goal is to be able to do all the administrative\n> work of postgres in a single program. It would\n> include manage databases, tables, users, privileges\n> ..., see/alter table contents, start/stop the\n> server(s), configure postgres, manage backups, have a\n> good SQL console, monitorize the backends, ...\n>\n> As you can see it would be a superset of the pgaccess\n> capabilities but backwards compatible with it.\n> The program would be done in Java/Swing, being\n> automaticaly portable. The license would be the\n> license of PostgreSQL itself.\n>\n> The cuestion is: Is there any posibility of such a\n> beast being ever included in the standard distribution\n> of PostgreSQL, if it prove util, well designed and\n> rock-solid, or it would not be accepted like a\n> standard add-on by the PostgreSQL hacker comunity?\n> (perhaps Java/Swing not acceptable, ...)\n>\n> Thank you.\n>\n> Pedro Abelleira Seco\n>\n> _______________________________________________________________\n> Do You Yahoo!?\n> Yahoo! Messenger: Comunicación instantánea gratis con tu gente -\n> http://messenger.yahoo.es\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n>\n\n", "msg_date": "Tue, 19 Jun 2001 11:55:30 -0400", "msg_from": "\"Dave Cramer\" <dave@fastcrypt.com>", "msg_from_op": false, "msg_subject": "Re: Universal admin frontend" }, { "msg_contents": "\nsomething like this, web based, would be most cool ... have to be able to\nmonitor multiple port/backends too ...\n\nOn Tue, 19 Jun 2001, [iso-8859-1] Pedro Abelleira Seco wrote:\n\n> Hi all!\n>\n> I'm thinking about starting a (serius) project to\n> bring a good graphical interface to the administration\n> and operation of postgresql, similar to what other\n> databases have.\n>\n> The goal is to be able to do all the administrative\n> work of postgres in a single program. It would\n> include manage databases, tables, users, privileges\n> ..., see/alter table contents, start/stop the\n> server(s), configure postgres, manage backups, have a\n> good SQL console, monitorize the backends, ...\n>\n> As you can see it would be a superset of the pgaccess\n> capabilities but backwards compatible with it.\n> The program would be done in Java/Swing, being\n> automaticaly portable. The license would be the\n> license of PostgreSQL itself.\n>\n> The cuestion is: Is there any posibility of such a\n> beast being ever included in the standard distribution\n> of PostgreSQL, if it prove util, well designed and\n> rock-solid, or it would not be accepted like a\n> standard add-on by the PostgreSQL hacker comunity?\n> (perhaps Java/Swing not acceptable, ...)\n>\n> Thank you.\n>\n> Pedro Abelleira Seco\n>\n> _______________________________________________________________\n> Do You Yahoo!?\n> Yahoo! Messenger: Comunicación instantánea gratis con tu gente -\n> http://messenger.yahoo.es\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Tue, 19 Jun 2001 13:04:17 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Universal admin frontend" }, { "msg_contents": "On Tue, Jun 19, 2001 at 05:48:21PM +0200, Pedro Abelleira Seco wrote:\n> I'm thinking about starting a (serius) project to\n> bring a good graphical interface to the administration\n> and operation of postgresql, similar to what other\n> databases have.\n> \n> The goal is to be able to do all the administrative\n> work of postgres in a single program. It would\n> include manage databases, tables, users, privileges\n> ...\n\nHow about phppgadmin? I haven't checked in depth but it seems to be able to\ndo quite a lot of these.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Tue, 19 Jun 2001 21:07:29 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Universal admin frontend" } ]
[ { "msg_contents": ">>>>> \"LT\" == Lamar Thomas <lamart@home.com> writes:\n\nLT> While I know that this is not a MySQL group I was hopping for a little\nLT> help. (I could only find one MySQL group and it was in German).\n\nTry their mailing list. You can find it at www.mysql.com of all\nplaces...\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: khera@kciLink.com Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "19 Jun 2001 11:53:57 -0400", "msg_from": "Vivek Khera <khera@kcilink.com>", "msg_from_op": true, "msg_subject": "Re: MySQL Question" } ]
[ { "msg_contents": "Given the following CREATE TABLE instructions...\n\n1)\nCREATE TABLE message\n(\n int4 msgid PRIMARY KEY NOT NULL,\n text msgtext\n);\n\n2)\nCREATE TABLE message\n(\n int4 msgid not null,\n text msgtext,\n PRIMARY KEY (msgid)\n);\n\n3)\nCREATE TABLE message\n(\n int4 msgid not null,\n text msgtext,\n CONSTRAINT cons_001_pk PRIMARY KEY on (msgid)\n);\n\nThe first two actually create a PRIMARY KEY on msgid. The third seems\nto have a PRIMARY KEY on 'oid', not 'msgid', though it does create a\nunique index on 'msgid'. One of the applications I'm using (Cold\nFusion) looks for the PRIMARY KEY and checks that I have included that\ncolumn(s) in my data statement.\n\nThe first two work, the third does not. Cold Fusion reports that I did\nnot provide 'oid' as one of the data elements.\n\nCold Fusion is accessing the database using ODBC.\nDatabase is Postgres v7.1.1 on Red Hat Linux 7.0\n\nI'm not looking for a fix as I can create the table using the syntax\nthat gives the expected results, but just wanted to alert someone that\nthere is some inconsistency in the way a PRIMARY KEY is used or\ndesignated.\n\nBTW, I did not try the COLUMN CONSTRAINT syntax.\n\nThanks\n\n", "msg_date": "Tue, 19 Jun 2001 14:26:56 -0400", "msg_from": "\"P. Dwayne Miller\" <dmiller@espgroup.net>", "msg_from_op": true, "msg_subject": "Primary Key" }, { "msg_contents": "\"P. Dwayne Miller\" <dmiller@espgroup.net> writes:\n> CREATE TABLE message\n> (\n> int4 msgid not null,\n> text msgtext,\n> CONSTRAINT cons_001_pk PRIMARY KEY on (msgid)\n> );\n\n> The first two actually create a PRIMARY KEY on msgid. The third seems\n> to have a PRIMARY KEY on 'oid', not 'msgid', though it does create a\n> unique index on 'msgid'.\n\nAfter fixing the several obvious syntax errors, it works fine for me:\n\nregression=# CREATE TABLE message\nregression-# (\nregression(# msgid int4 not null,\nregression(# msgtext text,\nregression(# CONSTRAINT cons_001_pk PRIMARY KEY (msgid)\nregression(# );\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'cons_001_pk' for table 'message'\nCREATE\nregression=# \\d message\n Table \"message\"\n Attribute | Type | Modifier\n-----------+---------+----------\n msgid | integer | not null\n msgtext | text |\nPrimary Key: cons_001_pk\n\nregression=#\n\nIs Cold Fusion perhaps doing strange things to the query behind your\nback? None of those CREATE TABLE commands are legal SQL according\nto my references.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jun 2001 17:34:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Primary Key " }, { "msg_contents": "My bad on the syntax of all three. I used your syntax (which is what I had originally used) and\ngot the same results with the \\d command that you show.\n\nI'm only using Cold Fusion to read data from the resulting table, not create the table... and I\nstill get an error when I have created the primary key using the table constraint syntax. Cold\nFusion is reporting that the primary key has been defined for the column oid. Using the correct\nsyntax with the first two CREATE TABLE statements, Cold Fusion reports the primary key field as\nmsgid.\n\nThanks for your reply,\nDwayne\n\nTom Lane wrote:\n\n> \"P. Dwayne Miller\" <dmiller@espgroup.net> writes:\n> > CREATE TABLE message\n> > (\n> > int4 msgid not null,\n> > text msgtext,\n> > CONSTRAINT cons_001_pk PRIMARY KEY on (msgid)\n> > );\n>\n> > The first two actually create a PRIMARY KEY on msgid. The third seems\n> > to have a PRIMARY KEY on 'oid', not 'msgid', though it does create a\n> > unique index on 'msgid'.\n>\n> After fixing the several obvious syntax errors, it works fine for me:\n>\n> regression=# CREATE TABLE message\n> regression-# (\n> regression(# msgid int4 not null,\n> regression(# msgtext text,\n> regression(# CONSTRAINT cons_001_pk PRIMARY KEY (msgid)\n> regression(# );\n> NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'cons_001_pk' for table 'message'\n> CREATE\n> regression=# \\d message\n> Table \"message\"\n> Attribute | Type | Modifier\n> -----------+---------+----------\n> msgid | integer | not null\n> msgtext | text |\n> Primary Key: cons_001_pk\n>\n> regression=#\n>\n> Is Cold Fusion perhaps doing strange things to the query behind your\n> back? None of those CREATE TABLE commands are legal SQL according\n> to my references.\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n", "msg_date": "Tue, 19 Jun 2001 18:11:12 -0400", "msg_from": "\"P. Dwayne Miller\" <dmiller@espgroup.net>", "msg_from_op": true, "msg_subject": "Re: Primary Key" }, { "msg_contents": "Tom Lane wrote:\n\n>After fixing the several obvious syntax errors, it works fine for me:\n>\n>regression=# CREATE TABLE message\n>regression-# (\n>regression(# msgid int4 not null,\n>regression(# msgtext text,\n>regression(# CONSTRAINT cons_001_pk PRIMARY KEY (msgid)\n>regression(# );\n>NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'cons_001_pk' for table 'message'\n>CREATE\n>regression=# \\d message\n> Table \"message\"\n> Attribute | Type | Modifier\n>-----------+---------+----------\n> msgid | integer | not null\n> msgtext | text |\n>Primary Key: cons_001_pk\n>\n>regression=#\n>\n>Is Cold Fusion perhaps doing strange things to the query behind your\n>back? None of those CREATE TABLE commands are legal SQL according\n>to my references.\n>\nI've been using the syntax \"PRIMARY KEY (/column_name/ [, /column_name/ \n]),\" without the constraint name, and the \"/COLUMN_NAME TYPE/ PRIMARY \nKEY\" syntax for sometime now. I may be admitting to SQL heresy in \nsaying that; but, that's the syntax I've seen in MySQL and in quite a \nfew SQL/database books.\n\nAFIAK, it's a legal table creation statement.\n\n\n\n\n\n\n\nTom Lane wrote:\nAfter fixing the several obvious syntax errors, it works fine for me:regression=# CREATE TABLE messageregression-# (regression(# msgid int4 not null,regression(# msgtext text,regression(# CONSTRAINT cons_001_pk PRIMARY KEY (msgid)regression(# );NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'cons_001_pk' for table 'message'CREATEregression=# \\d message Table \"message\" Attribute | Type | Modifier-----------+---------+---------- msgid | integer | not null msgtext | text |Primary Key: cons_001_pkregression=#Is Cold Fusion perhaps doing strange things to the query behind yourback? None of those CREATE TABLE commands are legal SQL accordingto my references.\n\nI've been using the syntax \"PRIMARY KEY (column_name [, column_name\n]),\" without the constraint name, and the \"COLUMN_NAME TYPE PRIMARY\nKEY\" syntax for sometime now.   I may be admitting to SQL heresy in saying\nthat; but, that's the syntax I've seen in MySQL and in quite a few SQL/database\nbooks.\n\nAFIAK, it's a legal table creation statement.", "msg_date": "Tue, 19 Jun 2001 17:37:58 -0500", "msg_from": "Thomas Swan <tswan-lst@ics.olemiss.edu>", "msg_from_op": false, "msg_subject": "Re: Primary Key" }, { "msg_contents": "Thomas Swan <tswan-lst@ics.olemiss.edu> writes:\n> AFIAK, it's a legal table creation statement.\n\nThe variant I showed is. The original one had an extraneous \"ON\" in the\nFOREIGN KEY clause, and even more to the point all the column\ndeclarations had column name and type name reversed. That's why I was\nquestioning the syntax ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jun 2001 18:50:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Primary Key " }, { "msg_contents": "I'm new to this list, and Postgresql, and could use some advice from you \nexperienced users. We are very interested in making postrgresql work for \nour project, but its missing one big feature, that is absolutely necessary \nfor a true OLTP shop.\n\nEven more important that uptime to us, is to never put ourselves in a \nposition where we could lose data. I understand I can do a hot backup with \npg_dumpall. What we need on top of that is the ability to replay the \ntransaction logs against the previous database archive. Without such a \nfeature, even if I did a full backup a few times a day, we would be \nvulnerable to losing hours of data (which would not be acceptable to our \nusers).\n\nI can tell this has been designed to do exactly that, because its really \nclose. What would be needed is a hook to write the logs to disk/tape, when \nthey are full (and not overwrite them until they go elsewhere), and, the \nability to actually play back the logs, exactly at the right place, tied to \na specific archive.\n\nI'm sure this is something that would benefit all our lives. Other than \njust hiring a consultant to do so, is there some way to make this \nhappen? Other than eliminating all my single points of failover in the \nhardware, is there some other way to solve this problem?\n\nThanks,\nNaomi\n--\nNaomi Walker\nChief Information Officer\nEldorado Computing, Inc.\n602-604-3100 ext 242 \n\n", "msg_date": "Tue, 19 Jun 2001 16:04:17 -0700", "msg_from": "Naomi Walker <nwalker@eldocomp.com>", "msg_from_op": false, "msg_subject": "Backup and Recovery" }, { "msg_contents": "\"P. Dwayne Miller\" wrote:\n> \n> My bad on the syntax of all three. I used your syntax (which is what I had originally used) and\n> got the same results with the \\d command that you show.\n> \n> I'm only using Cold Fusion to read data from the resulting table, not create the table... and I\n> still get an error when I have created the primary key using the table constraint syntax. Cold\n> Fusion is reporting that the primary key has been defined for the column oid. Using the correct\n> syntax with the first two CREATE TABLE statements, Cold Fusion reports the primary key field as\n> msgid.\n> \n\nSQLPrimaryKey() in the current psqlodbc driver doesn't\nreport the Primary key other than tablename_pkey.\nIt seems the cause.\nI would change the implementatin of SQLPrimaryKey().\nDwayne, could you try the modified driver ?\n\nregards,\nHiroshi Inoue\n", "msg_date": "Wed, 20 Jun 2001 08:16:26 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Re: Primary Key" }, { "msg_contents": "I can try it. Where do I get it.\n\nMy question would be why, if SQLPrimaryKey() only reported tablename_pkey, then why does my front end\nreturn oid as the primary key?\n\nThanks,\nDwayne\n\nHiroshi Inoue wrote:\n\n> \"P. Dwayne Miller\" wrote:\n> >\n> > My bad on the syntax of all three. I used your syntax (which is what I had originally used) and\n> > got the same results with the \\d command that you show.\n> >\n> > I'm only using Cold Fusion to read data from the resulting table, not create the table... and I\n> > still get an error when I have created the primary key using the table constraint syntax. Cold\n> > Fusion is reporting that the primary key has been defined for the column oid. Using the correct\n> > syntax with the first two CREATE TABLE statements, Cold Fusion reports the primary key field as\n> > msgid.\n> >\n>\n> SQLPrimaryKey() in the current psqlodbc driver doesn't\n> report the Primary key other than tablename_pkey.\n> It seems the cause.\n> I would change the implementatin of SQLPrimaryKey().\n> Dwayne, could you try the modified driver ?\n>\n> regards,\n> Hiroshi Inoue\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n", "msg_date": "Tue, 19 Jun 2001 23:34:57 -0400", "msg_from": "\"P. Dwayne Miller\" <dmiller@espgroup.net>", "msg_from_op": true, "msg_subject": "Re: Primary Key" }, { "msg_contents": "\n\n\"P. Dwayne Miller\" wrote:\n> \n> I can try it. Where do I get it.\n> \n> My question would be why, if SQLPrimaryKey() only reported tablename_pkey, then why does my front end\n> return oid as the primary key?\n> \n> Thanks,\n> Dwayne\n> \n> Hiroshi Inoue wrote:\n> \n> > \"P. Dwayne Miller\" wrote:\n> > >\n> > > My bad on the syntax of all three. I used your syntax (which is what I had originally used) and\n> > > got the same results with the \\d command that you show.\n> > >\n> > > I'm only using Cold Fusion to read data from the resulting table, not create the table... and I\n> > > still get an error when I have created the primary key using the table constraint syntax. Cold\n> > > Fusion is reporting that the primary key has been defined for the column oid. Using the correct\n> > > syntax with the first two CREATE TABLE statements, Cold Fusion reports the primary key field as\n> > > msgid.\n> > >\n> >\n> > SQLPrimaryKey() in the current psqlodbc driver doesn't\n> > report the Primary key other than tablename_pkey.\n> > It seems the cause.\n> > I would change the implementatin of SQLPrimaryKey().\n> > Dwayne, could you try the modified driver ?\n> >\n> > regards,\n> > Hiroshi Inoue\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n", "msg_date": "Wed, 20 Jun 2001 13:14:34 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Re: Primary Key" }, { "msg_contents": "\"P. Dwayne Miller\" wrote:\n> \n> I can try it. Where do I get it.\n> \n\nI would send you the dll though I don't test it by myself.\nOK ?\n\n> My question would be why, if SQLPrimaryKey() only reported tablename_pkey, then why does my front end\n> return oid as the primary key?\n> \n\nDon't you turn on the FAKEOIDINDEX option ?\n\nregards,\nHiroshi Inoue\n", "msg_date": "Wed, 20 Jun 2001 13:17:04 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Re: Primary Key" }, { "msg_contents": "On Tue, 19 Jun 2001, Naomi Walker wrote:\n\n> Even more important that uptime to us, is to never put ourselves in a\n> position where we could lose data. I understand I can do a hot backup\n> with pg_dumpall. What we need on top of that is the ability to replay\n> the transaction logs against the previous database archive. Without\n> such a feature, even if I did a full backup a few times a day, we\n> would be vulnerable to losing hours of data (which would not be\n> acceptable to our users).\n\nThis is what I'd like too (though I'm not that bothered about\nrolling forward from a dump if I can just do it by replaying\nlogs onto real datafiles).\n\nI mentioned it a while ago:\n\nhttp://fts.postgresql.org/db/mw/msg.html?mid=114397\n\nbut got no response.\n\nYou are aware that you can still lose up to (by default) 16Mb\nworth of transactions in this scheme, I presume?\n\nMatthew.\n\n", "msg_date": "Wed, 20 Jun 2001 12:26:22 +0100 (BST)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": false, "msg_subject": "Re: Backup and Recovery" }, { "msg_contents": "Please send it. And yes, I do have the Fake OID Index turned on. Although I have no idea what it does.\n\nThanks,\nDwayne\n\nHiroshi Inoue wrote:\n\n> \"P. Dwayne Miller\" wrote:\n> >\n> > I can try it. Where do I get it.\n> >\n>\n> I would send you the dll though I don't test it by myself.\n> OK ?\n>\n> > My question would be why, if SQLPrimaryKey() only reported tablename_pkey, then why does my front end\n> > return oid as the primary key?\n> >\n>\n> Don't you turn on the FAKEOIDINDEX option ?\n>\n> regards,\n> Hiroshi Inoue\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n\n", "msg_date": "Wed, 20 Jun 2001 08:02:11 -0400", "msg_from": "\"P. Dwayne Miller\" <dmiller@espgroup.net>", "msg_from_op": true, "msg_subject": "Re: Primary Key" }, { "msg_contents": "Hmm, your using ColdFusion, so that goes through the ODBC driver, which\npicks up the 'primary key' by looking for an index named 'foo_pkey',\nI think. Ah, here it is:\n\nin interfaces/odbc/info.c:\n\nsprintf(tables_query, \"select ta.attname, ia.attnum\"\n\t \" from pg_attribute ta, pg_attribute ia, pg_class c, pg_index i\"\n\t\t\" where c.relname = '%s_pkey'\"\n\t\t\" AND c.oid = i.indexrelid\"\n\t\t\" AND ia.attrelid = i.indexrelid\"\n\t\t\" AND ta.attrelid = i.indrelid\"\n\t\t\" AND ta.attnum = i.indkey[ia.attnum-1]\"\n\t\t\" order by ia.attnum\", pktab);\n\nSo, don't name the primary key constraint, or name it 'something_pkey'\nand you should be fine. Something's falling back to trying to use\noid if it can't find a primary key: I'm note sure if that's inside the\nODBC driver, or in ColdFusion.\n\nHmm, seems we have other Access specific hacks in the ODBC driver: \n\n/*\n * I have to hide the table owner from Access, otherwise it\n * insists on referring to the table as 'owner.table'. (this\n * is valid according to the ODBC SQL grammar, but Postgres\n * won't support it.)\n *\n * set_tuplefield_string(&row->tuple[1], table_owner);\n */\n\nI bet PgAdmin would like to have that info.\n\nRoss\n\nOn Tue, Jun 19, 2001 at 06:11:12PM -0400, P. Dwayne Miller wrote:\n> My bad on the syntax of all three. I used your syntax (which is what I had originally used) and\n> got the same results with the \\d command that you show.\n> \n> I'm only using Cold Fusion to read data from the resulting table, not create the table... and I\n> still get an error when I have created the primary key using the table constraint syntax. Cold\n> Fusion is reporting that the primary key has been defined for the column oid. Using the correct\n> syntax with the first two CREATE TABLE statements, Cold Fusion reports the primary key field as\n> msgid.\n> \n> Thanks for your reply,\n> Dwayne\n> \n", "msg_date": "Wed, 20 Jun 2001 10:11:58 -0500", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: Re: Primary Key" }, { "msg_contents": "At 12:26 PM 6/20/01 +0100, Matthew Kirkwood wrote:\n>On Tue, 19 Jun 2001, Naomi Walker wrote:\n>\n> > Even more important that uptime to us, is to never put ourselves in a\n> > position where we could lose data. I understand I can do a hot backup\n> > with pg_dumpall. What we need on top of that is the ability to replay\n> > the transaction logs against the previous database archive. Without\n> > such a feature, even if I did a full backup a few times a day, we\n> > would be vulnerable to losing hours of data (which would not be\n> > acceptable to our users).\n>\n>This is what I'd like too (though I'm not that bothered about\n>rolling forward from a dump if I can just do it by replaying\n>logs onto real datafiles).\n>\n>I mentioned it a while ago:\n>\n>http://fts.postgresql.org/db/mw/msg.html?mid=114397\n>\n>but got no response.\n\nWell, so now there is at least TWO of us....\n\nWe should start the thread again.\n\n\n>You are aware that you can still lose up to (by default) 16Mb\n>worth of transactions in this scheme, I presume?\n\nI'm just starting with Postgresql, but, I thought with fsync on this was \nnot the case. Is that not true or what else did I miss?\n\n\n>Matthew.\n>\n>\n\n--\nNaomi Walker\nChief Information Officer\nEldorado Computing, Inc.\n602-604-3100 ext 242 \n\n", "msg_date": "Wed, 20 Jun 2001 13:41:23 -0700", "msg_from": "Naomi Walker <nwalker@eldocomp.com>", "msg_from_op": false, "msg_subject": "Re: Backup and Recovery" }, { "msg_contents": "At 13:41 20/06/01 -0700, Naomi Walker wrote:\n>\n>Well, so now there is at least TWO of us....\n>\n>We should start the thread again.\n>\n\nWAL based backup & recovery is something I have been trying to do in\nbackground, but unfortunately I have no time at the moment. I do plan to\nget back to it as soon as I can.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 21 Jun 2001 11:37:10 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Backup and Recovery" }, { "msg_contents": "On Wed, 20 Jun 2001, Naomi Walker wrote:\n\n> >You are aware that you can still lose up to (by default) 16Mb\n> >worth of transactions in this scheme, I presume?\n>\n> I'm just starting with Postgresql, but, I thought with fsync on this\n> was not the case. Is that not true or what else did I miss?\n\nI suppose that it rather depends on how you expected to\nmove the logs over. My approach was to archive the redo\nwhen PG is done with them and only then to roll them\nforward.\n\nIf a catastrophe occurs, then I wouldn't be able to do\nanything with a half-full log.\n\nOur Oracle setups use redo logs of only 1Mb for this\nreason, and it doesn't seem to hurt too much (though\nOracle's datafile formats seem a fair bit denser than\nPostgres's).\n\nMatthew.\n\n", "msg_date": "Thu, 21 Jun 2001 11:01:29 +0100 (BST)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": false, "msg_subject": "Re: Backup and Recovery" }, { "msg_contents": "On Thu, Jun 21, 2001 at 11:01:29AM +0100, Matthew Kirkwood wrote:\n> On Wed, 20 Jun 2001, Naomi Walker wrote:\n> \n> > >You are aware that you can still lose up to (by default) 16Mb\n> > >worth of transactions in this scheme, I presume?\n> >\n> > I'm just starting with Postgresql, but, I thought with fsync on this\n> > was not the case. 
Is that not true or what else did I miss?\n> \n> I suppose that it rather depends on how you expected to\n> move the logs over. My approach was to archive the redo\n> when PG is done with them and only then to roll them\n> forward.\n> \n> If a catastrophe occurs, then I wouldn't be able to do\n> anything with a half-full log.\n>\n> Our Oracle setups use redo logs of only 1Mb for this\n> reason, and it doesn't seem to hurt too much (though\n> Oracle's datafile formats seem a fair bit denser than\n> Postgres's).\n\nThe above makes no sense to me. A hot recovery that discards some \nrandom number of committed transactions is a poor sort of recovery.\n\nMs. Walker might be able to adapt one of the several replication\ntools available for PG to do replayable logging, instead. \n\nIt seems to me that for any replication regime (symmetric or not, \nsynchronous or not, global or not), and also any hot-backup/recovery\napproach, an update-log mechanism that produces a high-level \ndescription of changes is essential. Using triggers to produce \nsuch a log seems to me to be too slow and too dependent on finicky \nadministrative procedures.\n\nIIUC, the regular WAL records are optimized for a different purpose: \nspeeding up normal operation. Also IIUC, the WAL cannot be applied \nto a database reconstructed from a dump. If augmented to enable such\nreconstruction, the WAL might be too bulky to serve well in that role; \nit currently only needs to keep enough data to construct the current \ndatabase from a recent checkpoint, so compactness has not been \ncrucial. But there's much to be said for having just a single \nsynchronous log mechanism. 
A high-level log mixed into the WAL, to \nbe extracted asynchronously to a much more compact replay log, might \nbe the ideal compromise.\n\nThe same built-in high-level logging mechanism could make all the \nvarious kinds of disaster prevention, disaster recovery, and load \nsharing much easier to implement, because they all need much the\nsame thing.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Thu, 21 Jun 2001 16:03:17 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Backup and Recovery" }, { "msg_contents": "On Thu, 21 Jun 2001, Nathan Myers wrote:\n\n> > I suppose that it rather depends on how you expected to\n> > move the logs over. My approach was to archive the redo\n> > when PG is done with them and only then to roll them\n> > forward.\n\n> The above makes no sense to me. A hot recovery that discards some\n> random number of committed transactions is a poor sort of recovery.\n\nAgreed. Nevertheless it's at least db_size - 1Mb better\nthan the current options.\n\nRecovering the rest manually from log files is good enough\nfor us (indeed, much better than the potential loss of\nperformance or reliability from \"real\" replication).\n\nIf it horrifies you that much, think of it as 15-minutely\nincremental backups.\n\nMatthew.\n\n", "msg_date": "Tue, 26 Jun 2001 16:44:36 +0100 (BST)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": false, "msg_subject": "Re: Backup and Recovery" }, { "msg_contents": "matthew@hairy.beasts.org (Matthew Kirkwood) wrote in message news:<Pine.LNX.4.33.0106201212240.25630-100000@sphinx.mythic-beasts.com>...\n> On Tue, 19 Jun 2001, Naomi Walker wrote:\n> \n> > Even more important than uptime to us, is to never put ourselves in a\n> > position where we could lose data. I understand I can do a hot backup\n> > with pg_dumpall. What we need on top of that is the ability to replay\n> > the transaction logs against the previous database archive. 
Without\n> > such a feature, even if I did a full backup a few times a day, we\n> > would be vulnerable to losing hours of data (which would not be\n> > acceptable to our users).\n> \n> This is what I'd like too (though I'm not that bothered about\n> rolling forward from a dump if I can just do it by replaying\n> logs onto real datafiles).\n\nWith stock PostgreSQL... how many committed transactions can one lose\non a simple system crash/reboot? With Oracle or Informix, the answer\nis zero. Is that true with PostgreSQL in fsync mode? If not, does it\nlose all in the log, or just those not yet written to the DB?\n\nThanks\n\nJohn\n", "msg_date": "28 Jun 2001 08:33:45 -0700", "msg_from": "nj7e@yahoo.com (John Moore)", "msg_from_op": false, "msg_subject": "Re: Backup and Recovery" }, { "msg_contents": "> With stock PostgreSQL... how many committed transactions can one lose\n> on a simple system crash/reboot? With Oracle or Informix, the answer\n> is zero. Is that true with PostgreSQL in fsync mode? If not, does it\n> lose all in the log, or just those not yet written to the DB?\n\nWith WAL the theory is that it will not lose a committed transaction.\nBugs have plagued previous versions (7.1.2 looks clean) and none\n(Oracle, Informix, Postgres) can protect against coding errors in\ncertain cases, but from general power failure it's fine.\n\nThis assumes adequate hardware too. 
Some hard drives claim to have\nwritten when they haven't, among other things, but Postgres itself\nwon't lose the information -- your hardware might (do that silently\nthough).\n", "msg_date": "Tue, 3 Jul 2001 15:40:56 -0400", "msg_from": "\"Rod Taylor\" <rbt@barchord.com>", "msg_from_op": false, "msg_subject": "Re: Re: Backup and Recovery" }, { "msg_contents": "On Thu, Jun 28, 2001 at 08:33:45AM -0700, John Moore wrote:\n> matthew@hairy.beasts.org (Matthew Kirkwood) wrote in message news:<Pine.LNX.4.33.0106201212240.25630-100000@sphinx.mythic-beasts.com>...\n> > On Tue, 19 Jun 2001, Naomi Walker wrote:\n> > \n> > > Even more important than uptime to us, is to never put ourselves in a\n> > > position where we could lose data. I understand I can do a hot backup\n> > > with pg_dumpall. What we need on top of that is the ability to replay\n> > > the transaction logs against the previous database archive. Without\n> > > such a feature, even if I did a full backup a few times a day, we\n> > > would be vulnerable to losing hours of data (which would not be\n> > > acceptable to our users).\n> > \n> > This is what I'd like too (though I'm not that bothered about\n> > rolling forward from a dump if I can just do it by replaying\n> > logs onto real datafiles).\n> \n> With stock PostgreSQL... how many committed transactions can one lose\n> on a simple system crash/reboot? With Oracle or Informix, the answer\n> is zero. Is that true with PostgreSQL in fsync mode? If not, does it\n> lose all in the log, or just those not yet written to the DB?\n\nThe answer is zero for PG as well. However, what happens if the\ndatabase becomes corrupted (e.g. because of bad RAM or bad disk)?\n\nWith Informix and Oracle, you can restore from a snapshot backup\nand replay the \"redo\" logs since that backup, if you kept them. \n\nAlternatively, you can keep a \"failover\" server that is up to date \nwith the last committed transaction. If it matters, you do both. 
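The restore-a-snapshot-then-roll-the-logs-forward idea can be illustrated with a deliberately tiny sketch (every file name and the key=value "log format" below are invented for illustration; this has nothing to do with PostgreSQL's actual WAL or dump formats):

```shell
# Toy model of snapshot-plus-redo-log recovery (all names invented).
printf 'a=1\nb=2\n' > snapshot.txt   # the last full backup
echo 'c=3' > redo.log                # change committed since that backup
# ... disaster destroys the live database ...
cp snapshot.txt restored.txt         # step 1: restore the snapshot
cat redo.log >> restored.txt         # step 2: roll the archived log forward
grep -c '=' restored.txt             # all three committed rows survive: prints 3
```

The only point of the sketch is the ordering: a snapshot alone loses everything committed after it, while snapshot plus archived logs recovers up to the last log record kept.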
\n(If you're lucky, the disk or memory failure won't have corrupted \nall your backups and failover servers before you notice.)\n\nThere is currently no builtin support for either in PG. Of course\nboth can be simulated in the client. Also, for any particular \ncollection of tables, a redo or replication log may be produced with \ntriggers; that's how the currently available replication add-ons \nfor PG work. Something built in could be much faster and much less \nfragile.\n\nI imagine a daemon extracting redo log entries from WAL segments, \nasynchronously. Mixing redo log entries into the WAL allows the WAL \nto be the only synchronous disk writer in the system, a Good Thing.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Tue, 3 Jul 2001 13:17:40 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Re: Backup and Recovery" } ]
[ { "msg_contents": "Bruce Momjian:\n I am a beginner. The question is: does PgSQL support full entity integrity \nand referential integrity? For example, does it support \"Restricted Delete, NULLIFIES-delete, default-delete....\"? I read your book, but cannot find the detail. Where can I find it?\n\n\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n   lilixin@cqu.edu.cn\n\n", "msg_date": "Wed, 20 Jun 2001 09:20:36 +0800", "msg_from": "=?ISO-8859-1?Q?=C0=EE=C1=A2=D0=C2?= <lilixin@cqu.edu.cn>", "msg_from_op": true, "msg_subject": "Re: Re: [PATCHES] [PATCH] Contrib C source for casting MONEY\n\tto INT[248] and FLOAT[48]" }, { "msg_contents": "I'm guessing you are asking for support of referential integrity constraints. 
It exists in Bruce's book under http://www.ca.postgresql.org/docs/aw_pgsql_book/node131.html (ON DELETE NO ACTION/SET NULL/SET DEFAULT)\n\ncheers,\nthalis\n\n\nOn Wed, 20 Jun 2001, [ISO-8859-1] lilixin@cqu.edu.cn wrote:\n\n> Bruce Momjian:\n> I am a beginner. The question is: does PgSQL support full entity integrity \n> and referential integrity? For example, does it support \"Restricted Delete, NULLIFIES-delete, default-delete....\"? I read your book, but cannot find the detail. Where can I find it?\n> \n> \n> >---------------------------(end of broadcast)---------------------------\n> >TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n> lilixin@cqu.edu.cn\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n", "msg_date": "Wed, 20 Jun 2001 11:10:21 -0400 (EDT)", "msg_from": "\"Thalis A. 
Kalfigopoulos\" <thalis@cs.pitt.edu>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] [PATCH] Contrib C source for casting\n\tMONEY to INT[248] and FLOAT[48]" }, { "msg_contents": "If PostgreSQL is run on a system that has a file size limit (2 gig?), what \nmight cause us to hit the limit?\n--\nNaomi Walker\nChief Information Officer\nEldorado Computing, Inc.\n602-604-3100 ext 242 \n\n", "msg_date": "Fri, 06 Jul 2001 15:51:44 -0700", "msg_from": "Naomi Walker <nwalker@eldocomp.com>", "msg_from_op": false, "msg_subject": "2 gig file size limit" }, { "msg_contents": "* Naomi Walker <nwalker@eldocomp.com> [010706 17:57]:\n> If PostgreSQL is run on a system that has a file size limit (2 gig?), what \n> might cause us to hit the limit?\nPostgreSQL is smart, and breaks the table files up at ~1GB per each,\nso it's transparent to you. \n\nYou shouldn't have to worry about it. \nLER\n\n> --\n> Naomi Walker\n> Chief Information Officer\n> Eldorado Computing, Inc.\n> 602-604-3100 ext 242 \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Fri, 6 Jul 2001 19:12:05 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 2 gig file size limit" }, { "msg_contents": "On Friday 06 July 2001 18:51, Naomi Walker wrote:\n> If PostgreSQL is run on a system that has a file size limit (2 gig?), what\n> might cause us to hit the limit?\n\nSince PostgreSQL automatically segments its internal data files to get around \nsuch limits, the only place you will hit this limit will be when making \nbackups using pg_dump or pg_dumpall. 
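For illustration, one hedged sketch of such a dump-and-split round trip (the database name "mydb" and the chunk sizes are invented; the last four commands demonstrate the same split/reassemble mechanics on ordinary data):

```shell
# Illustrative only -- "mydb" and the 1000m chunk size are invented:
#   pg_dump mydb | gzip | split -b 1000m - mydb.dump.gz.
#   cat mydb.dump.gz.* | gunzip | psql mydb
#
# The same split/reassemble mechanics, shown on ordinary data:
seq 1 100000 > original.txt
split -b 64k original.txt chunk.
cat chunk.* > reassembled.txt
cmp -s original.txt reassembled.txt && echo "round trip OK"
```

Note that `split`'s default alphabetical suffixes (chunk.aa, chunk.ab, ...) mean `cat chunk.*` reassembles the pieces in the right order.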
You may need to pipe the output of \nthose commands into a file splitting utility, and then you'll have to pipe \nthrough a reassembly utility to restore.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 7 Jul 2001 10:33:40 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 2 gig file size limit" }, { "msg_contents": "Lamar Owen wrote:\n> \n> On Friday 06 July 2001 18:51, Naomi Walker wrote:\n> > If PostgreSQL is run on a system that has a file size limit (2 gig?), what\n> > might cause us to hit the limit?\n> \n> Since PostgreSQL automatically segments its internal data files to get around\n> such limits, the only place you will hit this limit will be when making\n> backups using pg_dump or pg_dumpall. You may need to pipe the output of\n\nSpeaking of which.\n\nDoing a dumpall for a backup is taking a long time, and a restore from\nthe dump files doesn't leave the database in its original state. Could\na command be added that locks all the files, quickly tars them up, then\nreleases the lock?\n\n-- \nJoseph Shraibman\njks@selectacast.net\nIncrease signal to noise ratio. http://www.targabot.com\n", "msg_date": "Mon, 09 Jul 2001 19:48:41 -0400", "msg_from": "Joseph Shraibman <jks@selectacast.net>", "msg_from_op": false, "msg_subject": "Re: Backups WAS: 2 gig file size limit" }, { "msg_contents": "[HACKERS removed from CC: list]\n\nJoseph Shraibman <jks@selectacast.net> writes:\n\n\n> Doing a dumpall for a backup is taking a long time, and a restore from\n> the dump files doesn't leave the database in its original state. Could\n> a command be added that locks all the files, quickly tars them up, then\n> releases the lock?\n\nAs I understand it, pg_dump runs inside a transaction, so the output\nreflects a consistent snapshot of the database as of the time the dump \nstarts (thanks to MVCC); restoring will put the database back to where \nit was at the start of the dump. 
Could\n> a command be added that locks all the files, quickly tars them up, then\n> releases the lock?\n\nAs I understand it, pg_dump runs inside a transaction, so the output\nreflects a consistent snapshot of the database as of the time the dump \nstarts (thanks to MVCC); restoring will put the database back to where \nit was at the start of the dump.\n\nHave you observed otherwise?\n\n-Doug\n-- \nThe rain man gave me two cures; he said jump right in,\nThe first was Texas medicine--the second was just railroad gin,\nAnd like a fool I mixed them, and it strangled up my mind,\nNow people just get uglier, and I got no sense of time... --Dylan\n", "msg_date": "09 Jul 2001 20:51:47 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Re: Backups WAS: 2 gig file size limit" }, { "msg_contents": "Doug McNaught wrote:\n> \n> [HACKERS removed from CC: list]\n> \n> Joseph Shraibman <jks@selectacast.net> writes:\n> \n> > Doing a dumpall for a backup is taking a long time, the a restore from\n> > the dump files doesn't leave the database in its original state. Could\n> > a command be added that locks all the files, quickly tars them up, then\n> > releases the lock?\n> \n> As I understand it, pg_dump runs inside a transaction, so the output\n> reflects a consistent snapshot of the database as of the time the dump\n> starts (thanks to MVCC); restoring will put the database back to where\n> it was at the start of the dump.\n> \nIn theory.\n\n> Have you observed otherwise?\n\nYes. Specifically timestamps are dumped in a way that (1) they lose\npercision (2) sometimes have 60 in the seconds field which prevents the\ndump from being restored.\n\nAnd I suspect any statistics generated by VACUUM ANALYZE are lost.\n\nIf a database got corrupted somehow in order to restore from the dump\nthe database would have to delete the original database then restore\nfrom the dump. Untarring would be much easier (especially as the\ndatabase grows). 
Obviously this won't replace dumps but for quick\nbackups it would be great.\n\n-- \nJoseph Shraibman\njks@selectacast.net\nIncrease signal to noise ratio. http://www.targabot.com\n", "msg_date": "Mon, 09 Jul 2001 20:59:59 -0400", "msg_from": "Joseph Shraibman <jks@selectacast.net>", "msg_from_op": false, "msg_subject": "Re: Re: Backups WAS: 2 gig file size limit" }, { "msg_contents": "On Mon, Jul 09, 2001 at 08:59:59PM -0400, Joseph Shraibman wrote:\n> If a database got corrupted somehow in order to restore from the dump\n> the database would have to delete the original database then restore\n> from the dump. Untarring would be much easier (especially as the\n\nYou could always shut the system down and tar on your own.\n\nOf course, tarring up several gigabytes is going to take a while.\n\nBetter to fix the dump/restore process than to hack in a work around that\nhas very limited benefit.\n\nmrc\n-- \n Mike Castle dalgoda@ix.netcom.com www.netcom.com/~dalgoda/\n We are all of us living in the shadow of Manhattan. -- Watchmen\nfatal (\"You are in a maze of twisty compiler features, all different\"); -- gcc\n", "msg_date": "Mon, 9 Jul 2001 18:46:10 -0700", "msg_from": "Mike Castle <dalgoda@ix.netcom.com>", "msg_from_op": false, "msg_subject": "Re: Re: Backups WAS: 2 gig file size limit" }, { "msg_contents": "Joseph Shraibman <jks@selectacast.net> writes:\n> Could a command be added that locks all the files, quickly tars them\n> up, then releases the lock?\n\npg_ctl stop\ntar cfz - $PGDATA >someplace\npg_ctl start\n\nThere is no possibility of anything less drastic, if you want to ensure\nthat the database files are consistent and not changing. Don't even\nthink about doing a partial dump of the $PGDATA tree, either. 
If you\ndon't have a pg_log that matches your data files, you've got nothing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 09 Jul 2001 22:07:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Backups WAS: 2 gig file size limit " }, { "msg_contents": "> > Have you observed otherwise?\n> Yes. Specifically timestamps are dumped in a way that (1) they lose\n> precision (2) sometimes have 60 in the seconds field which prevents the\n> dump from being restored.\n\nThe loss of precision for timestamp data stems from conservative\nattempts to get consistent behavior from the data type. It is certainly\nnot entirely successful, but changes would have to solve some of these\nproblems without introducing more.\n\nI've only seen the \"60 seconds problem\" with earlier Mandrake distros\nwhich combined normal compiler optimizations with a \"fast math\"\noptimization, against the apparent advice of the gcc developers. What\nkind of system are you on, and how did you build PostgreSQL?\n\nRegards.\n\n - Thomas\n", "msg_date": "Tue, 10 Jul 2001 02:27:06 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Re: Backups WAS: 2 gig file size limit" }, { "msg_contents": "Tom Lane wrote:\n> \n> Joseph Shraibman <jks@selectacast.net> writes:\n> > Could a command be added that locks all the files, quickly tars them\n> > up, then releases the lock?\n> \n> pg_ctl stop\n> tar cfz - $PGDATA >someplace\n> pg_ctl start\n> \nBut that would mean I would have to have all my programs detect that the\ndatabase went down and make new connections. I would rather that\npostgres just lock all the files and do the tar.\n\n\n\n-- \nJoseph Shraibman\njks@selectacast.net\nIncrease signal to noise ratio. 
http://www.targabot.com\n", "msg_date": "Tue, 10 Jul 2001 15:33:29 -0400", "msg_from": "Joseph Shraibman <jks@selectacast.net>", "msg_from_op": false, "msg_subject": "Re: Re: Backups WAS: 2 gig file size limit" }, { "msg_contents": "I mentioned this on general a while ago.\n\nI had the problem when I dumped my 7.0.3 db to upgrade to 7.1. I had to\nmodify the dump because there were some 60 seconds in there. It was\nobvious in the code in backend/utils/adt/datetime that it was using\nsprintf to do the formatting, and sprintf was taking the float that\nrepresented the seconds and rounding it.\n\n select '2001-07-10 15:39:59.999'::timestamp;\n ?column? \n---------------------------\n 2001-07-10 15:39:60.00-04\n(1 row)\n\n\n\nThomas Lockhart wrote:\n> \n> > > Have you observed otherwise?\n> > Yes. Specifically timestamps are dumped in a way that (1) they lose\n> > precision (2) sometimes have 60 in the seconds field which prevents the\n> > dump from being restored.\n> \n> The loss of precision for timestamp data stems from conservative\n> attempts to get consistent behavior from the data type. It is certainly\n> not entirely successful, but changes would have to solve some of these\n> problems without introducing more.\n> \n> I've only seen the \"60 seconds problem\" with earlier Mandrake distros\n> which combined normal compiler optimizations with a \"fast math\"\n> optimization, against the apparent advice of the gcc developers. What\n> kind of system are you on, and how did you build PostgreSQL?\n> \n> Regards.\n> \n> - Thomas\n\n-- \nJoseph Shraibman\njks@selectacast.net\nIncrease signal to noise ratio. http://www.targabot.com\n", "msg_date": "Tue, 10 Jul 2001 15:40:12 -0400", "msg_from": "Joseph Shraibman <jks@selectacast.net>", "msg_from_op": false, "msg_subject": "Re: Re: Backups WAS: 2 gig file size limit" }, { "msg_contents": "(This question was answered several days ago on this list; please check \nthe list archives before posting. 
I believe it's also in the FAQ.)\n\n> If PostgreSQL is run on a system that has a file size limit (2\n> gig?), what might cause us to hit the limit?\n\nPostgres will never internally use files (e.g. for tables, indexes, \netc) larger than 1GB -- at that point, the file is split.\n\nHowever, you might run into problems when you export the data from Pg \nto another source, such as if you pg_dump the contents of a database > \n2GB. In that case, filter pg_dump through gzip or bzip2 to reduce the \nsize of the dump. If that's still not enough, you can dump individual \ntables (with -t) or use 'split' to divide the dump into several files.\n\nCheers,\n\nNeil\n\n", "msg_date": "Tue, 10 Jul 2001 19:17:05 -0400 (EDT)", "msg_from": "\"Neil Conway\" <nconway@klamath.dyndns.org>", "msg_from_op": false, "msg_subject": "Re: 2 gig file size limit" }, { "msg_contents": "> (This question was answered several days ago on this list; please check \n> the list archives before posting. I believe it's also in the FAQ.)\n> \n> > If PostgreSQL is run on a system that has a file size limit (2\n> > gig?), what might cause us to hit the limit?\n> \n> Postgres will never internally use files (e.g. for tables, indexes, \n> etc) larger than 1GB -- at that point, the file is split.\n> \n> However, you might run into problems when you export the data from Pg \n> to another source, such as if you pg_dump the contents of a database > \n> 2GB. In that case, filter pg_dump through gzip or bzip2 to reduce the \n> size of the dump. If that's still not enough, you can dump individual \n> tables (with -t) or use 'split' to divide the dump into several files.\n\nI just added the second part of this sentence to the FAQ to try and make\nit more visible:\n\n The maximum table size of 16TB does not require large file\n support from the operating system. 
Large tables are stored as \n multiple 1GB files so file system size limits are not important.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 10 Jul 2001 21:01:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 2 gig file size limit" }, { "msg_contents": "> I mentioned this on general a while ago.\n\nI'm not usually there/here, but subscribed recently to avoid annoying\nbounce messages from replies to messages cross posted to -hackers. I may\nnot stay long, since the volume is hard to keep up with.\n\n> I had the problem when I dumped my 7.0.3 db to upgrade to 7.1. I had to\n> modify the dump because there were some 60 seconds in there. It was\n> obvious in the code in backend/utils/adt/datetime that it was using\n> sprintf to do the formatting, and sprintf was taking the the float the\n> represented the seconds and rounding it.\n> \n> select '2001-07-10 15:39:59.999'::timestamp;\n> ?column?\n> ---------------------------\n> 2001-07-10 15:39:60.00-04\n> (1 row)\n\nAh, right. I remember that now. Will continue to look at it...\n\n - Thomas\n", "msg_date": "Wed, 11 Jul 2001 01:41:26 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: Re: Backups WAS: 2 gig file size limit" }, { "msg_contents": "Can a single database be split over multiple filesystems, or does the\nfilesystem size under e.g. Linux (whatever it is these days) constrain\nthe database size?\n\n-- \nMark Morgan Lloyd\nmarkMLl .AT. telemetry.co .DOT. 
uk\n\n[Opinions above are the author's, not those of his employers or\ncolleagues]\n", "msg_date": "Wed, 11 Jul 2001 10:00:10 +0000", "msg_from": "markMLl.pgsql-general@telemetry.co.uk", "msg_from_op": false, "msg_subject": "Re: 2 gig file size limit" }, { "msg_contents": "Ian Willis wrote:\n> \n> Postgresql transparently breaks the db into 1G chunks.\n\nYes, but presumably these are still in the directory tree that was\ncreated by initdb, i.e. normally on a single filesystem.\n\n> The main concern is during dumps. A 10G db can't be dumped if the\n> filesystem has a 2G limit.\n\nWhich is why somebody suggested piping into tar or whatever.\n\n> Linux no longer has a filesystem file size limit (or at least not one\n> that you'll hit easily)\n\nI'm not concerned with \"easily\". 
Telling one of our customers that we\nchose a particular server because they won't easily hit limits is a\nnon-starter.\n\n-- \nMark Morgan Lloyd\nmarkMLl .AT. telemetry.co .DOT. uk\n\n[Opinions above are the author's, not those of his employers or\ncolleagues]\n", "msg_date": "Wed, 11 Jul 2001 12:06:05 +0000", "msg_from": "markMLl.pgsql-general@telemetry.co.uk", "msg_from_op": false, "msg_subject": "Re: 2 gig file size limit" }, { "msg_contents": "On Wed, Jul 11, 2001 at 12:06:05PM +0000, markMLl.pgsql-general@telemetry.co.uk wrote:\n> > Linux no longer has a filesystem file size limit (or at least not one\n> > that you'll hit easily)\n> \n> I'm not concerned with \"easily\". 
My procedure is a simple get\n> country name from table countries where contry code = $1 copied from\n> Bruces book.\n>\n> Ultradev is giving me \"Error calling GetProcedures: An unidentified\n> error has occured\"\n>\n> Just thought I would ask here first if I am up against a brick wall?\n>\n> Cheers\n>\n> Tony Grant\n>\n> --\n> RedHat Linux on Sony Vaio C1XD/S\n> http://www.animaproductions.com/linux2.html\n> Macromedia UltraDev with PostgreSQL\n> http://www.animaproductions.com/ultra.html\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n", "msg_date": "Wed, 11 Jul 2001 10:20:29 -0400", "msg_from": "Dave Cramer <Dave@micro-automation.net>", "msg_from_op": false, "msg_subject": "Re: [JDBC] JDBC and stored procedures" }, { "msg_contents": "On 11 Jul 2001 10:20:29 -0400, Dave Cramer wrote:\n\n> The GetProcedures function in the driver does not work.\n\nOK. I bet it is on the todo list =:-D\n\n> You should be able to a simple select of the stored proc however\n\nYes! 
thank you very much!!!\n\nSELECT getcountryname(director.country)\n\ndid the trick where getcountryname is the function (or stored procedure)\n\nCheers\n\nTony\n\n--\nRedHat Linux on Sony Vaio C1XD/S\nhttp://www.animaproductions.com/linux2.html\nMacromedia UltraDev with PostgreSQL\nhttp://www.animaproductions.com/ultra.html\n\n", "msg_date": "11 Jul 2001 17:15:31 +0200", "msg_from": "Tony Grant <tony@animaproductions.com>", "msg_from_op": false, "msg_subject": "Re: [JDBC] JDBC and stored procedures" }, { "msg_contents": "The getProcedures api is on the todo list, but I don't think it returns\nstored procs.\n\nDave\n\n-----Original Message-----\nFrom: pgsql-jdbc-owner@postgresql.org\n[mailto:pgsql-jdbc-owner@postgresql.org] On Behalf Of Tony Grant\nSent: July 11, 2001 11:16 AM\nTo: Dave@micro-automation.net\nCc: pgsql-jdbc@PostgreSQL.org; pgsql-general@PostgreSQL.org\nSubject: Re: [JDBC] JDBC and stored procedures\n\n\nOn 11 Jul 2001 10:20:29 -0400, Dave Cramer wrote:\n\n> The GetProcedures function in the driver does not work.\n\nOK. I bet it is on the todo list =:-D\n\n> You should be able to a simple select of the stored proc however\n\nYes! thank you very much!!!\n\nSELECT getcountryname(director.country)\n\ndid the trick where getcountryname is the function (or stored procedure)\n\nCheers\n\nTony\n\n--\nRedHat Linux on Sony Vaio C1XD/S\nhttp://www.animaproductions.com/linux2.html\nMacromedia UltraDev with PostgreSQL\nhttp://www.animaproductions.com/ultra.html\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\nhttp://www.postgresql.org/users-lounge/docs/faq.html\n\n\n", "msg_date": "Wed, 11 Jul 2001 12:14:58 -0400", "msg_from": "\"Dave Cramer\" <Dave@micro-automation.net>", "msg_from_op": false, "msg_subject": "RE: JDBC and stored procedures" }, { "msg_contents": "Martijn van Oosterhout wrote:\n\n> What the limit on NT?\n\nI'm told 2^64 bytes. 
Frankly, I'd be surprised if MS has tested it :-)\n\n-- \nMark Morgan Lloyd\nmarkMLl .AT. telemetry.co .DOT. uk\n\n[Opinions above are the author's, not those of his employers or\ncolleagues]\n", "msg_date": "Thu, 12 Jul 2001 08:02:03 +0000", "msg_from": "markMLl.pgsql-general@telemetry.co.uk", "msg_from_op": false, "msg_subject": "Re: 2 gig file size limit" } ]
[ { "msg_contents": "> How about phppgadmin? I haven't checked in depth but\n> it seems to be able to\n> do quite a lot of these.\n\nYes, and pgaccess too, but I see cons in these two\ntools:\n\n- Pgaccess looks a bit poor. Tcl/Tk is not the\ndefinitive user interface toolkit. Yes, it is enough\nfor many things, but not for all.\n\n- Phppgadmin is a web based tool. You need a php\nenabled web server. Most end users/admins don't want\nto have to configure a web server, PHP (\"what is\nPHP?\") and to have a poor interface (I'm talking about\nweb based interfaces in general, not the phppgadmin in\nparticular).\n\n- Both of them have limitations of what they can\nmanage. You can't use them to backup/restore the\ndatabase, to edit/see the postgresql configuration, to\nmonitor the server(s), to start/stop server(s), ...\nIt's dificult to take an _employer_, who only wants to\ndo his job and go home, and say to him that we are\ngoing to replace the Oracle and SQLServer databases\nwith Postgresql databases. In fact there are more\nreasons that the interface, but you are working\nalready in the other problems and solving they fine.\nTo say it briefly if an average IT manager asks you to\n\"show him PostgreSQL\" and you open pgsql or pgaccess\nyou are done. Sad but true.\n\n\n> Michael\n> -- \n> Michael Meskes\n> Michael@Fam-Meskes.De\n> Go SF 49ers! Go Rhein Fire!\n> Use Debian GNU/Linux! Use PostgreSQL!\n\nPedro\n\n\n_______________________________________________________________\nDo You Yahoo!?\nYahoo! Messenger: Comunicación instantánea gratis con tu gente -\nhttp://messenger.yahoo.es\n", "msg_date": "Wed, 20 Jun 2001 09:13:13 +0200 (CEST)", "msg_from": "=?iso-8859-1?q?Pedro=20Abelleira=20Seco?= <pedroabelleira@yahoo.es>", "msg_from_op": true, "msg_subject": "RE: Universal admin frontend" }, { "msg_contents": "> - Both of them have limitations of what they can\n> manage. 
You can't use them to backup/restore the\n> database, to edit/see the postgresql configuration, to\n> monitor the server(s), to start/stop server(s), ...\n> It's dificult to take an _employer_, who only wants to\n> do his job and go home, and say to him that we are\n> going to replace the Oracle and SQLServer databases\n> with Postgresql databases. In fact there are more\n> reasons that the interface, but you are working\n> already in the other problems and solving they fine.\n> To say it briefly if an average IT manager asks you to\n> \"show him PostgreSQL\" and you open pgsql or pgaccess\n> you are done. Sad but true.\n\nWhat about a KDE or Gnome piece of software? In fact, I believe that such a\nproject may already be in its infancy...\n\nChris\n\n", "msg_date": "Wed, 20 Jun 2001 16:23:57 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: RE: Universal admin frontend" }, { "msg_contents": "On Wed, Jun 20, 2001 at 09:13:13AM +0200, Pedro Abelleira Seco wrote:\n> - Phppgadmin is a web based tool. You need a php\n> enabled web server. Most end users/admins don't want\n> to have to configure a web server, PHP (\"what is\n> PHP?\") and to have a poor interface (I'm talking about\n> web based interfaces in general, not the phppgadmin in\n> particular).\n\nMaybe, but then you are platform independent.\n\n> - Both of them have limitations of what they can\n> manage. You can't use them to backup/restore the\n> database, to edit/see the postgresql configuration, to\n> monitor the server(s), to start/stop server(s), ...\n\nCorrect. You need another sort of tool for that. If you go web based webmin\ncan do most of this, or at least aims at this.\n\n> To say it briefly if an average IT manager asks you to\n> \"show him PostgreSQL\" and you open pgsql or pgaccess\n> you are done. Sad but true.\n\n[I assume you mean psql with pgsql.]\n\nYes, but could show him that there are such tools with Oracle too. 
Sqlplus\nis no better than psql for that matter. If you want to add your commands\ninside a graphic tool you could use mpsql/kpsql which are unfortunately not\nmaintained anymore. Note, that I do not say the tools are there for all your\nneeds, but that I think there are quite some tools worth extending. I don't\nthink what we need is another tool that does parts of the job, but an effort\nto build the one tool you can use for all of this. If this has to start from\nscratch so be it. But maybe it's good idea to improve some other tool.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Wed, 20 Jun 2001 10:43:35 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: RE: Universal admin frontend" }, { "msg_contents": "> What about a KDE or Gnome piece of software? In\n> fact, I believe that such a\n> project may already be in its infancy...\n\nYes, I could be, but no all systems have one of them\ninstalled or even installable (think not only about\nLinux, but all the platforms in wich Postgres run,\nWindows too)\nOther advantage of the Java/Swing aproach, apart from\nits portability, is that is easy to program in that\nplatform. I have programed for KDE and have done a\nlittle test against both gtk+, gtk--, but Java is\nanother level. Suddenly all is easy, you can do what\nyou can imagine. I don't want to start a flamewar\nabout the best programming language. I only say that\nfor this kind of task Java is the only platform in\nwich I feel sure about what can be done and when it\ncan be.\n\n> \n> Chris\n> \n\nPedro\n\n_______________________________________________________________\nDo You Yahoo!?\nYahoo! 
Messenger: Comunicación instantánea gratis con tu gente -\nhttp://messenger.yahoo.es\n", "msg_date": "Wed, 20 Jun 2001 12:04:08 +0200 (CEST)", "msg_from": "=?iso-8859-1?q?Pedro=20Abelleira=20Seco?= <pedroabelleira@yahoo.es>", "msg_from_op": true, "msg_subject": "RE: RE: Universal admin frontend" }, { "msg_contents": "On Wed, 20 Jun 2001, Christopher Kings-Lynne wrote:\n\n> > - Both of them have limitations of what they can\n> > manage. You can't use them to backup/restore the\n> > database, to edit/see the postgresql configuration, to\n> > monitor the server(s), to start/stop server(s), ...\n> > It's dificult to take an _employer_, who only wants to\n> > do his job and go home, and say to him that we are\n> > going to replace the Oracle and SQLServer databases\n> > with Postgresql databases. In fact there are more\n> > reasons that the interface, but you are working\n> > already in the other problems and solving they fine.\n> > To say it briefly if an average IT manager asks you to\n> > \"show him PostgreSQL\" and you open pgsql or pgaccess\n> > you are done. Sad but true.\n>\n> What about a KDE or Gnome piece of software? In fact, I believe that such a\n> project may already be in its infancy...\n\nPlease, dont' depend on those things. A lot of people dont' like\nKDE, Gnome. 
Pure Gtk would be enough for such a project.\n\n>\n> Chris\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 20 Jun 2001 13:26:56 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "RE: RE: Universal admin frontend" }, { "msg_contents": "+1 for implementation in java. While it isn't the fastest it is platform\nindependant.\n\nI would also suggest starting with something like druid, and making it\npostgres specific.\n\nDave\n----- Original Message -----\nFrom: \"Oleg Bartunov\" <oleg@sai.msu.su>\nTo: \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>\nCc: \"Pedro Abelleira Seco\" <pedroabelleira@yahoo.es>;\n<pgsql-hackers@postgresql.org>\nSent: Wednesday, June 20, 2001 6:26 AM\nSubject: RE: [HACKERS] RE: Universal admin frontend\n\n\n> On Wed, 20 Jun 2001, Christopher Kings-Lynne wrote:\n>\n> > > - Both of them have limitations of what they can\n> > > manage. You can't use them to backup/restore the\n> > > database, to edit/see the postgresql configuration, to\n> > > monitor the server(s), to start/stop server(s), ...\n> > > It's dificult to take an _employer_, who only wants to\n> > > do his job and go home, and say to him that we are\n> > > going to replace the Oracle and SQLServer databases\n> > > with Postgresql databases. 
In fact there are more\n> > > reasons that the interface, but you are working\n> > > already in the other problems and solving they fine.\n> > > To say it briefly if an average IT manager asks you to\n> > > \"show him PostgreSQL\" and you open pgsql or pgaccess\n> > > you are done. Sad but true.\n> >\n> > What about a KDE or Gnome piece of software? In fact, I believe that\nsuch a\n> > project may already be in its infancy...\n>\n> Please, dont' depend on those things. A lot of people dont' like\n> KDE, Gnome. Pure Gtk would be enough for such a project.\n>\n> >\n> > Chris\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >\n>\n> Regards,\n> Oleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n>\n\n", "msg_date": "Wed, 20 Jun 2001 07:16:21 -0400", "msg_from": "\"Dave Cramer\" <dave@fastcrypt.com>", "msg_from_op": false, "msg_subject": "Re: RE: Universal admin frontend" }, { "msg_contents": "On Wed, 20 Jun 2001, Oleg Bartunov wrote:\n\n> > What about a KDE or Gnome piece of software? In fact, I believe that such a\n> > project may already be in its infancy...\n>\n> Please, dont' depend on those things. A lot of people dont' like\n> KDE, Gnome. Pure Gtk would be enough for such a project.\n\nI think people go ahead and throw the KDE and Gnome functionality into a\npiece of software just because they can (maybe because they like it as a\nbuzzword?). 
Pure GTK would be best if that is the type of app you are\nwanting to code and if you do decide to include gnome functionality -\ndon't make it some intwined with the rest of the code that their can't be\na --without-gnome switch to the configure program.\n\nI do agree that java would be the best way to go. I kinda cringe to say\nthat because most java apps I have seen are pretty lousy and buggy\n(personal opinion - no need for flames or holy wars here). I will say the\nonly java application I really did like was the java tools that Oracle\nships with their product. Very nice, very fast and fairly stable (in my\nexperience) - so I know that a nice product can be made if the right\npeople worked on it and enough thought went into the project.\n\n-- \n//========================================================\\\\\n|| D. Hageman <dhageman@dracken.com> ||\n\\\\========================================================//\n\n", "msg_date": "Wed, 20 Jun 2001 10:22:36 -0500 (CDT)", "msg_from": "\"D. Hageman\" <dhageman@dracken.com>", "msg_from_op": false, "msg_subject": "RE: RE: Universal admin frontend" }, { "msg_contents": "Michael Meskes wrote:\n\n>On Wed, Jun 20, 2001 at 09:13:13AM +0200, Pedro Abelleira Seco wrote:\n>\n>>- Phppgadmin is a web based tool. You need a PHP\n>>enabled web server. Most end users/admins don't want\n>>to have to configure a web server, PHP (\"what is\n>>PHP?\") and to have a poor interface (I'm talking about\n>>web based interfaces in general, not the phppgadmin in\n>>particular).\n>>\n>\n>Maybe, but then you are platform independent.\n>\n\nFirst, we need a set of tasks that the software would need to be able to \ndo. These tasks, may answer your questions or at least help decide which \nenvironment would best suit your admin tool.\n\nAFIAA, there exists a port of Java for just about every OS that \nPostgreSQL supports, not that it should be the only reason for choosing \nit. 
Not that my vote counts, but I'd go for the java approach and be \nwilling to code a lot on the interface, anyone else interested?\n\nTo start this list off, the Good Idea (tm):\n\n * User Management\n * Create\n * List\n * Modify\n * Change Password\n * Grant permissions\n * Group Membership\n * Delete\n * Database Management\n * Create\n * List\n * Modify\n * Tables\n * Constraints\n * Rules\n * Owners/Permissions\n * Delete\n * Maintenance\n * Vacuum\n * Analyze\n * Monitoring\n * Statistics\n\n\nThis is one of the big things that PostgreSQL has been missing for \nsometime. Personally, I believe that it would benefit both developers \nand users.\n\nRegardless, that's my two bits...", "msg_date": "Wed, 20 Jun 2001 10:28:22 -0500", "msg_from": "Thomas Swan <tswan@olemiss.edu>", "msg_from_op": false, "msg_subject": "Re: Universal admin frontend" }, { "msg_contents": "On Wed, Jun 20, 2001 at 09:13:13AM +0200, Pedro Abelleira Seco wrote:\n> To say it briefly if an average IT manager asks you to\n> \"show him PostgreSQL\" and you open pgsql or pgaccess\n> you are done. Sad but true.\n> \n\nWell, then open Access. What do you do when the same manager wants to\nsee your web server? Open IE?\n\nRoss\n", "msg_date": "Wed, 20 Jun 2001 10:40:07 -0500", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: RE: Universal admin frontend" }, { "msg_contents": "AFIAA, there exists a port of Java for just about every OS that PostgreSQL supports, not that it should be the only reason for choosing it. Not that my vote counts, but I'd go for the java approach and be willing to code a lot on the interface, anyone else interested?\n\nAnyone thought about wxPython? Much faster then java, can be distributed as a standalone executable on Windows. Supports Unix / Mac / Windows. Don't know if it supports more or less PG relevant platforms than Java. 
I have been thinking about working on this type of tool myself.", "msg_date": "Wed, 20 Jun 2001 23:21:31 -0500", "msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>", "msg_from_op": false, "msg_subject": "Re: Re: Universal admin frontend" }, { "msg_contents": "Hello!\n\nWhy not go back to the roots of postgres?\n\nPostgreSQL is written completely in C. The development community has\nshown that it is\npossible to write efficient code for different platforms with pure C.\n\nThe administration task can be separated in 2 different tasks:\nA server (in C) which is really doing the administrative work.\nA client programm written in what so ever (C + X11, Java, Perl, TCL/Tk,\n....) which\nperforms the user interface.\n\nI know that this a not the easiest way to do the job but the most\nflexible (in my opinion).\n\n--\nMit freundlichen Gruessen / With best regards\n Reiner Dassing\n", "msg_date": "Thu, 21 Jun 2001 08:13:21 +0200", "msg_from": "Reiner Dassing <dassing@wettzell.ifag.de>", "msg_from_op": false, "msg_subject": "Re: RE: Universal admin frontend" } ]
[ { "msg_contents": "\n> > Personally I'd rather take out the =NULL conversion anyway...\n> \n> I'd second that.\n\nOk, from my yesterday's mistake, and mails that M$ doesn't support it \neither you convinced me. If this was a vote, I would also now vote \nfor removing it.\n\nAndreas\n", "msg_date": "Wed, 20 Jun 2001 09:22:55 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: Re: [SQL] behavior of ' = NULL' vs. MySQL vs. Standards" } ]
[ { "msg_contents": "I have seen problems with extremely many concurrent users.\nI run pgbench:\n\npgbench -c 1000 -t 1 test\n\nAnd I get stuck spin lock errors. This is 100% reproducable (i.e. I\nhave nerver succeeded in pgbench -c 1000).\n\nThis is Linux kernel 2.2.18. Followings are some resource settings\nthat seem crytical to me.\n\nkernel.shmall = 134217728\nkernel.shmmax = 134217728\nfs.file-max = 65536\n\nThere are 1GB physical memory and 2GB swap space. Note that same\nthings happen in Sparc/Solaris with much less users (500 concurrent\nusers).\n--\nTatsuo Ishii\n", "msg_date": "Wed, 20 Jun 2001 18:28:00 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "stuck spin lock with many concurrent users" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> I have seen problems with extremely many concurrent users.\n> I run pgbench:\n\n> pgbench -c 1000 -t 1 test\n\n> And I get stuck spin lock errors. This is 100% reproducable (i.e. I\n> have nerver succeeded in pgbench -c 1000).\n\nIs it actually stuck, or just timing out due to huge contention?\nYou could try increasing the timeout intervals in s_lock.c to\nmake sure. If it is stuck, on which lock(s)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Jun 2001 01:48:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: stuck spin lock with many concurrent users " }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > I have seen problems with extremely many concurrent users.\n> > I run pgbench:\n> \n> > pgbench -c 1000 -t 1 test\n> \n> > And I get stuck spin lock errors. This is 100% reproducable (i.e. I\n> > have nerver succeeded in pgbench -c 1000).\n> \n> Is it actually stuck, or just timing out due to huge contention?\n> You could try increasing the timeout intervals in s_lock.c to\n> make sure. \n\nI believe it's an actual stuck. 
From s_lock.c:\n\n#define DEFAULT_TIMEOUT (100*1000000)\t/* default timeout: 100 sec */\n\nSo even if there are 1000 contentions, 100 sec should be enough (100\nmsec for each backend).\n\n> If it is stuck, on which lock(s)?\n\nHow can I check it? In that situation, it's very hard to attacth a\ndebugger to the backend process. 1000 backends consum all CPU time.\n--\nTatsuo Ishii\n\n", "msg_date": "Thu, 21 Jun 2001 18:59:49 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: stuck spin lock with many concurrent users " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n>> If it is stuck, on which lock(s)?\n\n> How can I check it?\n\nThe 'stuck' message should at least give you a code location...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Jun 2001 10:07:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: stuck spin lock with many concurrent users " }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> >> If it is stuck, on which lock(s)?\n> \n> > How can I check it?\n> \n> The 'stuck' message should at least give you a code location...\n\nHere is the actual message:\n\nFATAL: s_lock(0x2ac2d016) at spin.c:158, stuck spinlock. Aborting.\n\nLast several queries before stuck spinlock are:\n\nDEBUG: query: update branches set bbalance = bbalance + 436 where bid = 1\nDEBUG: query: update tellers set tbalance = tbalance + 230 where tid = 17\n\nDEBUG: query: update tellers set tbalance = tbalance + 740 where tid = 7\n\nDEBUG: query: update tellers set tbalance = tbalance + 243 where tid = 13\n\nDEBUG: query: select abalance from accounts where aid = 177962\nDEBUG: query: update tellers set tbalance = tbalance + 595 where tid = 18\n\nDEBUG: query: update branches set bbalance = bbalance + 595 where bid = 1\nDEBUG: query: update tellers set tbalance = tbalance + 252 where tid = 15\n\nI'm trying now is increasing the timeout to 10 times longer. 
Will\nreport in next email...\n--\nTatsuo Ishii\n", "msg_date": "Fri, 22 Jun 2001 11:31:05 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: stuck spin lock with many concurrent users " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n>>> How can I check it?\n>> \n>> The 'stuck' message should at least give you a code location...\n\n> FATAL: s_lock(0x2ac2d016) at spin.c:158, stuck spinlock. Aborting.\n\nHmm, that's SpinAcquire, so it's one of the predefined spinlocks\n(and not, say, a buffer spinlock). You could try adding some\ndebug logging here, although the output would be voluminous.\nBut what would really be useful is a stack trace for the stuck\nprocess. Consider changing the s_lock code to abort() when it\ngets a stuck spinlock --- then you could gdb the coredump.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Jun 2001 23:01:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: stuck spin lock with many concurrent users " }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> >>> How can I check it?\n> >> \n> >> The 'stuck' message should at least give you a code location...\n> \n> > FATAL: s_lock(0x2ac2d016) at spin.c:158, stuck spinlock. Aborting.\n> \n> Hmm, that's SpinAcquire, so it's one of the predefined spinlocks\n> (and not, say, a buffer spinlock). You could try adding some\n> debug logging here, although the output would be voluminous.\n> But what would really be useful is a stack trace for the stuck\n> process. Consider changing the s_lock code to abort() when it\n> gets a stuck spinlock --- then you could gdb the coredump.\n\nNice idea. 
I will try that.\n--\nTatsuo Ishii\n", "msg_date": "Fri, 22 Jun 2001 12:24:22 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: stuck spin lock with many concurrent users " }, { "msg_contents": "> > > FATAL: s_lock(0x2ac2d016) at spin.c:158, stuck spinlock. Aborting.\n> > \n> > Hmm, that's SpinAcquire, so it's one of the predefined spinlocks\n> > (and not, say, a buffer spinlock). You could try adding some\n> > debug logging here, although the output would be voluminous.\n> > But what would really be useful is a stack trace for the stuck\n> > process. Consider changing the s_lock code to abort() when it\n> > gets a stuck spinlock --- then you could gdb the coredump.\n> \n> Nice idea. I will try that.\n\nI got an interesting result. If I compile backend with -g (and without\n-O2), I get no stuck spin lock errors. However, if s_lock.c is\ncompiled with -O2 enabled, I got the error again. It seems only\ns_lock.c is related to this phenomenon.\n--\nTatsuo Ishii\n", "msg_date": "Sun, 24 Jun 2001 20:37:52 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: stuck spin lock with many concurrent users " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> I got an interesting result. If I compile backend with -g (and without\n> -O2), I get no stuck spin lock errors. However, if s_lock.c is\n> compiled with -O2 enabled, I got the error again. It seems only\n> s_lock.c is related to this phenomenon.\n\nThat's very interesting. Could optimization be breaking the TAS\nsequence on your platform? 
What is your platform, anyway?\nMight need to burrow into the assembly code to see just what's\nhappening.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Jun 2001 11:01:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: stuck spin lock with many concurrent users " }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > I got an interesting result. If I compile backend with -g (and without\n> > -O2), I get no stuck spin lock errors. However, if s_lock.c is\n> > compiled with -O2 enabled, I got the error again. It seems only\n> > s_lock.c is related to this phenomenon.\n> \n> That's very interesting. Could optimization be breaking the TAS\n> sequence on your platform? What is your platform, anyway?\n> Might need to burrow into the assembly code to see just what's\n> happening.\n\nAs I said, it's a x86 Linux (more precisely, kernel 2.2.18 with 2\nprocessors, egcs 2.91). I suspect that the inlined TAS code might be\nincompatible with the caller, s_lock.\n--\nTatsuo Ishii\n", "msg_date": "Mon, 25 Jun 2001 10:03:05 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: stuck spin lock with many concurrent users " }, { "msg_contents": "> > Tatsuo Ishii <t-ishii@sra.co.jp> writes\n> > >>> How can I check it?\n> > >> \n> > >> The 'stuck' message should at least give you a code location...\n> > \n> > > FATAL: s_lock(0x2ac2d016) at spin.c:158, stuck spinlock. Aborting.\n> > \n> > Hmm, that's SpinAcquire, so it's one of the predefined spinlocks\n> > (and not, say, a buffer spinlock). You could try adding some\n> > debug logging here, although the output would be voluminous.\n> > But what would really be useful is a stack trace for the stuck\n> > process. Consider changing the s_lock code to abort() when it\n> > gets a stuck spinlock --- then you could gdb the coredump.\n> \n> Nice idea. 
I will try that.\n\nIt appeared that the deadlock checking timer seems to be the source of\nthe problem. With the default settings, it checks deadlocks every 1\nsecond PER backend. So if there are 1000 backends, every 1 msec\nthere's a signal and a shared memory locking in average. That would be\ntoo much. If increase the dealock_timeout to , say 100000, the problem\nseems gone. Also the performance increased SIGNIFICANTLY. Before that\nI got only 1-2 TPS, but now I get ~20 TPS using pgbench -c 1000.\n\nHere is the backtrace:\n\n#0 0x2ab56d21 in __kill () from /lib/libc.so.6\n#1 0x2ab56996 in raise (sig=6) at ../sysdeps/posix/raise.c:27\n#2 0x2ab580b8 in abort () at ../sysdeps/generic/abort.c:88\n#3 0x80ece1a in s_lock_stuck (lock=0x2ac2d016 \"\\001\", \n file=0x816e7bc \"spin.c\", line=158) at s_lock.c:70\n#4 0x80ecf3e in s_lock_sleep (spins=20001, timeout=100000000, microsec=5000, \n lock=0x2ac2d016 \"\\001\", file=0x816e7bc \"spin.c\", line=158) at s_lock.c:109\n#5 0x80ecfa3 in s_lock (lock=0x2ac2d016 \"\\001\", file=0x816e7bc \"spin.c\", \n line=158) at s_lock.c:136\n#6 0x80efb4d in SpinAcquire (lockid=6) at spin.c:158\n#7 0x80f2305 in HandleDeadLock (postgres_signal_arg=14) at proc.c:819\n#8 <signal handler called>\n#9 0x2abeb134 in semop (semid=32786, sops=0x7fffeebc, nsops=1)\n at ../sysdeps/unix/sysv/linux/semop.c:34\n#10 0x80ee460 in IpcSemaphoreLock (semId=32786, sem=13, interruptOK=1 '\\001')\n at ipc.c:426\n#11 0x80f217f in ProcSleep (lockMethodTable=0x81c1708, lockmode=6, \n lock=0x2ce0ab18, holder=0x2ce339b0) at proc.c:666\n#12 0x80f14ff in WaitOnLock (lockmethod=1, lockmode=6, lock=0x2ce0ab18, \n holder=0x2ce339b0) at lock.c:955\n#13 0x80f1298 in LockAcquire (lockmethod=1, locktag=0x7fffeffc, xid=130139, \n lockmode=6) at lock.c:739\n#14 0x80f0a23 in LockPage (relation=0x2dbeb9d0, blkno=0, lockmode=6)\n#15 0x8071ceb in RelationGetBufferForTuple (relation=0x2dbeb9d0, len=132)\n at hio.c:97\n#16 0x8070293 in heap_update (relation=0x2dbeb9d0, 
otid=0x7ffff114, \n newtup=0x82388c8, ctid=0x7ffff0b0) at heapam.c:1737\n#17 0x80b6825 in ExecReplace (slot=0x823af60, tupleid=0x7ffff114, \n estate=0x8238a58) at execMain.c:1450\n#18 0x80b651e in ExecutePlan (estate=0x8238a58, plan=0x8238d00, \n operation=CMD_UPDATE, numberTuples=0, direction=ForwardScanDirection, \n destfunc=0x823b680) at execMain.c:1125\n#19 0x80b5af3 in ExecutorRun (queryDesc=0x8239080, estate=0x8238a58, \n feature=3, count=0) at execMain.c:233\n#20 0x80f6d93 in ProcessQuery (parsetree=0x822bc18, plan=0x8238d00, \n dest=Remote) at pquery.c:295\n#21 0x80f599b in pg_exec_query_string (\n query_string=0x822b8c0 \"update accounts set abalance = abalance + 277 where aid = 41148\\n\", dest=Remote, parse_context=0x81fc850) at postgres.c:810\n#22 0x80f68c6 in PostgresMain (argc=4, argv=0x7ffff380, real_argc=3, \n real_argv=0x7ffffc94, username=0x81cd981 \"t-ishii\") at postgres.c:1908\n#23 0x80e1ee3 in DoBackend (port=0x81cd718) at postmaster.c:2120\n#24 0x80e1acc in BackendStartup (port=0x81cd718) at postmaster.c:1903\n#25 0x80e0e26 in ServerLoop () at postmaster.c:995\n#26 0x80e0853 in PostmasterMain (argc=3, argv=0x7ffffc94) at postmaster.c:685\n#27 0x80c4865 in main (argc=3, argv=0x7ffffc94) at main.c:175\n#28 0x2ab509cb in __libc_start_main (main=0x80c4750 <main>, argc=3, \n argv=0x7ffffc94, init=0x80656c4 <_init>, fini=0x81395ac <_fini>, \n rtld_fini=0x2aab5ea0 <_dl_fini>, stack_end=0x7ffffc8c)\n at ../sysdeps/generic/libc-start.c:92\n", "msg_date": "Tue, 26 Jun 2001 17:53:04 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: stuck spin lock with many concurrent users " }, { "msg_contents": "Tatsuo Ishii wrote:\n> \n> > > Tatsuo Ishii <t-ishii@sra.co.jp> writes\n> > > >>> How can I check it?\n> > > >>\n> > > >> The 'stuck' message should at least give you a code location...\n> > >\n> > > > FATAL: s_lock(0x2ac2d016) at spin.c:158, stuck spinlock. 
Aborting.\n> > >\n> > > Hmm, that's SpinAcquire, so it's one of the predefined spinlocks\n> > > (and not, say, a buffer spinlock). You could try adding some\n> > > debug logging here, although the output would be voluminous.\n> > > But what would really be useful is a stack trace for the stuck\n> > > process. Consider changing the s_lock code to abort() when it\n> > > gets a stuck spinlock --- then you could gdb the coredump.\n> >\n> > Nice idea. I will try that.\n> \n> It appeared that the deadlock checking timer seems to be the source of\n> the problem. With the default settings, it checks deadlocks every 1\n> second PER backend. \n\nIIRC deadlock check was called only once per backend.\nIt seems to have been changed between 7.0 and 7.1.\nDoes it take effect to disable timer at the beginging of\nHandleDeadLock() ?\n\nregards,\nHiroshi Inoue\n", "msg_date": "Tue, 26 Jun 2001 18:41:08 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: stuck spin lock with many concurrent users" }, { "msg_contents": "> Tatsuo Ishii wrote:\n> > \n> > > > Tatsuo Ishii <t-ishii@sra.co.jp> writes\n> > > > >>> How can I check it?\n> > > > >>\n> > > > >> The 'stuck' message should at least give you a code location...\n> > > >\n> > > > > FATAL: s_lock(0x2ac2d016) at spin.c:158, stuck spinlock. Aborting.\n> > > >\n> > > > Hmm, that's SpinAcquire, so it's one of the predefined spinlocks\n> > > > (and not, say, a buffer spinlock). You could try adding some\n> > > > debug logging here, although the output would be voluminous.\n> > > > But what would really be useful is a stack trace for the stuck\n> > > > process. Consider changing the s_lock code to abort() when it\n> > > > gets a stuck spinlock --- then you could gdb the coredump.\n> > >\n> > > Nice idea. I will try that.\n> > \n> > It appeared that the deadlock checking timer seems to be the source of\n> > the problem. 
With the default settings, it checks deadlocks every 1\n> > second PER backend. \n> \n> IIRC deadlock check was called only once per backend.\n\nIn my understanding the deadlock check is performed every time the\nbackend aquires lock. Once the it aquires, it kill the timer. However,\nunder heavy transactions such as pgbench generates, chances are that\nthe checking fires, and it tries to aquire a spin lock. That seems the\nsituation.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 26 Jun 2001 18:50:38 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: stuck spin lock with many concurrent users" }, { "msg_contents": "Tatsuo Ishii wrote:\n> \n> > Tatsuo Ishii wrote:\n> > >\n> > > > > Tatsuo Ishii <t-ishii@sra.co.jp> writes\n> > > > > >>> How can I check it?\n> > > > > >>\n> > > > > >> The 'stuck' message should at least give you a code location...\n> > > > >\n> > > > > > FATAL: s_lock(0x2ac2d016) at spin.c:158, stuck spinlock. Aborting.\n> > > > >\n> > > > > Hmm, that's SpinAcquire, so it's one of the predefined spinlocks\n> > > > > (and not, say, a buffer spinlock). You could try adding some\n> > > > > debug logging here, although the output would be voluminous.\n> > > > > But what would really be useful is a stack trace for the stuck\n> > > > > process. Consider changing the s_lock code to abort() when it\n> > > > > gets a stuck spinlock --- then you could gdb the coredump.\n> > > >\n> > > > Nice idea. I will try that.\n> > >\n> > > It appeared that the deadlock checking timer seems to be the source of\n> > > the problem. With the default settings, it checks deadlocks every 1\n> > > second PER backend.\n> >\n> > IIRC deadlock check was called only once per backend.\n> \n> In my understanding the deadlock check is performed every time the\n> backend aquires lock. Once the it aquires, it kill the timer. 
\n\nYes, but deadlock check is needed only once and timer should\nbe disabled then also.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Tue, 26 Jun 2001 19:02:42 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: stuck spin lock with many concurrent users" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> It appeared that the deadlock checking timer seems to be the source of\n> the problem. With the default settings, it checks deadlocks every 1\n> second PER backend.\n\nI don't believe it. setitimer with it_interval = 0 should produce one\ninterrupt, no more.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Jun 2001 09:42:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: stuck spin lock with many concurrent users " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> In my understanding the deadlock check is performed every time the\n> backend aquires lock. Once the it aquires, it kill the timer. However,\n> under heavy transactions such as pgbench generates, chances are that\n> the checking fires, and it tries to aquire a spin lock. That seems the\n> situation.\n\nIt could be that with ~1000 backends all waiting for the same lock, the\ndeadlock-checking code just plain takes too long to run. It might have\nan O(N^2) or worse behavior in the length of the queue; I don't think\nthe code was ever analyzed for such problems.\n\nDo you want to try adding some instrumentation to HandleDeadlock to see\nhow long it runs on each call?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Jun 2001 09:47:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: stuck spin lock with many concurrent users " }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > In my understanding the deadlock check is performed every time the\n> > backend aquires lock. Once the it aquires, it kill the timer. 
However,\n> > under heavy transactions such as pgbench generates, chances are that\n> > the checking fires, and it tries to aquire a spin lock. That seems the\n> > situation.\n> \n> It could be that with ~1000 backends all waiting for the same lock, the\n> deadlock-checking code just plain takes too long to run. It might have\n> an O(N^2) or worse behavior in the length of the queue; I don't think\n> the code was ever analyzed for such problems.\n> \n> Do you want to try adding some instrumentation to HandleDeadlock to see\n> how long it runs on each call?\n\nI added some codes into HandleDeadLock to measure how long\nLockLockTable and DeadLOckCheck calls take. Followings are the result\nin running pgbench -c 1000 (it failed with stuck spin lock\nerror). \"real time\" shows how long they actually run (using\ngettimeofday). \"user time\" and \"system time\" are measured by calling\ngetrusage. The time unit is milli second.\n\n LockLockTable: real time\n\n min | max | avg \n-----+--------+-------------------\n 0 | 867873 | 152874.9015151515\n\n LockLockTable: user time\n\n min | max | avg \n-----+-----+--------------\n 0 | 30 | 1.2121212121\n\n LockLockTable: system time\n\n min | max | avg \n-----+------+----------------\n 0 | 2140 | 366.5909090909\n\n\n DeadLockCheck: real time\n\n min | max | avg \n-----+-------+-----------------\n 0 | 87671 | 3463.6996197719\n\n DeadLockCheck: user time\n\n min | max | avg \n-----+-----+---------------\n 0 | 330 | 14.2205323194\n\n DeadLockCheck: system time\n\n min | max | avg \n-----+-----+--------------\n 0 | 100 | 2.5095057034\n", "msg_date": "Wed, 27 Jun 2001 22:50:19 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: stuck spin lock with many concurrent users " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> I added some codes into HandleDeadLock to measure how long\n> LockLockTable and DeadLOckCheck calls take. 
Followings are the result\n> in running pgbench -c 1000 (it failed with stuck spin lock\n> error). \"real time\" shows how long they actually run (using\n> gettimeofday). \"user time\" and \"system time\" are measured by calling\n> getrusage. The time unit is milli second.\n\n> LockLockTable: real time\n\n> min | max | avg \n> -----+--------+-------------------\n> 0 | 867873 | 152874.9015151515\n\n> LockLockTable: user time\n\n> min | max | avg \n> -----+-----+--------------\n> 0 | 30 | 1.2121212121\n\n> LockLockTable: system time\n\n> min | max | avg \n> -----+------+----------------\n> 0 | 2140 | 366.5909090909\n\n\n> DeadLockCheck: real time\n\n> min | max | avg \n> -----+-------+-----------------\n> 0 | 87671 | 3463.6996197719\n\n> DeadLockCheck: user time\n\n> min | max | avg \n> -----+-----+---------------\n> 0 | 330 | 14.2205323194\n\n> DeadLockCheck: system time\n\n> min | max | avg \n> -----+-----+--------------\n> 0 | 100 | 2.5095057034\n\nHm. It doesn't seem that DeadLockCheck is taking very much of the time.\nI have to suppose that the problem is (once again) our inefficient\nspinlock code.\n\nIf you think about it, on a typical platform where processes waiting for\na time delay are released at a clock tick, what's going to be happening\nis that a whole lot of spinblocked processes will all be awoken in the\nsame clock tick interrupt. The first one of these that gets to run will\nacquire the spinlock, if it's free, and the rest will go back to sleep\nand try again at the next tick. 
This could be highly unfair depending\non just how the kernel's scheduler works --- for example, one could\neasily believe that the waiters might be awoken in process-number order,\nin which case backends with high process numbers might never get to\nacquire the spinlock, or at least would have such low probability of\nwinning that they are prone to \"stuck spinlock\" timeout.\n\nWe really need to look at replacing the spinlock mechanism with\nsomething more efficient.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Jul 2001 15:01:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: stuck spin lock with many concurrent users " }, { "msg_contents": "Tom Lane wrote:\n> \n> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > I added some codes into HandleDeadLock to measure how long\n> > LockLockTable and DeadLOckCheck calls take. Followings are the result\n> > in running pgbench -c 1000 (it failed with stuck spin lock\n> > error). \"real time\" shows how long they actually run (using\n> > gettimeofday). \"user time\" and \"system time\" are measured by calling\n> > getrusage. The time unit is milli second.\n> \n> > LockLockTable: real time\n> \n> > min | max | avg\n> > -----+--------+-------------------\n> > 0 | 867873 | 152874.9015151515\n> \n\n[snip]\n\n> \n> > DeadLockCheck: real time\n> \n> > min | max | avg\n> > -----+-------+-----------------\n> > 0 | 87671 | 3463.6996197719\n> \n> > DeadLockCheck: user time\n> \n> > min | max | avg\n> > -----+-----+---------------\n> > 0 | 330 | 14.2205323194\n> \n> > DeadLockCheck: system time\n> \n> > min | max | avg\n> > -----+-----+--------------\n> > 0 | 100 | 2.5095057034\n> \n> Hm. 
It doesn't seem that DeadLockCheck is taking very much of the time.\n\nIsn't the real time big ?\nIsn't 14.22msec big enough for the spinlocking process to\npass the time slice to other processes ?\n\nregards,\nHiroshi Inoue\n", "msg_date": "Wed, 04 Jul 2001 08:30:16 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: stuck spin lock with many concurrent users" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n\n> DeadLockCheck: real time\n>> \n> min | max | avg\n> -----+-------+-----------------\n> 0 | 87671 | 3463.6996197719\n>> \n> DeadLockCheck: user time\n>> \n> min | max | avg\n> -----+-----+---------------\n> 0 | 330 | 14.2205323194\n>> \n> DeadLockCheck: system time\n>> \n> min | max | avg\n> -----+-----+--------------\n> 0 | 100 | 2.5095057034\n>> \n>> Hm. It doesn't seem that DeadLockCheck is taking very much of the time.\n\n> Isn't the real time big ?\n\nYes, it sure is, but remember that the guy getting useful work done\n(DeadLockCheck) is having to share the CPU with 999 other processes\nthat are waking up on every clock tick for just long enough to fail\nto get the spinlock. I think it's those useless process wakeups that\nare causing the problem.\n\nIf you estimate that a process dispatch cycle is ~ 10 microseconds,\nthen waking 999 useless processes every 10 msec is just about enough\nto consume 100% of the CPU doing nothing useful... 
so what should be\na few-millisecond check takes a long time, which makes things worse\nbecause the 999 wannabees are spinning for that much more time.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Jul 2001 19:38:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: stuck spin lock with many concurrent users " }, { "msg_contents": "> Yes, it sure is, but remember that the guy getting useful work done\n> (DeadLockCheck) is having to share the CPU with 999 other processes\n> that are waking up on every clock tick for just long enough to fail\n> to get the spinlock. I think it's those useless process wakeups that\n> are causing the problem.\n> \n> If you estimate that a process dispatch cycle is ~ 10 microseconds,\n> then waking 999 useless processes every 10 msec is just about enough\n> to consume 100% of the CPU doing nothing useful... so what should be\n> a few-millisecond check takes a long time, which makes things worse\n> because the 999 wannabees are spinning for that much more time.\n\nDon't we back off the sleeps or was that code removed?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 3 Jul 2001 21:04:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: stuck spin lock with many concurrent users" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> If you estimate that a process dispatch cycle is ~ 10 microseconds,\n>> then waking 999 useless processes every 10 msec is just about enough\n>> to consume 100% of the CPU doing nothing useful...\n\n> Don't we back off the sleeps or was that code removed?\n\nNot enough to affect this calculation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Jul 2001 23:16:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: stuck spin lock with many concurrent users " }, { "msg_contents": "> >> Hm. It doesn't seem that DeadLockCheck is taking very much of the time.\n> \n> > Isn't the real time big ?\n> \n> Yes, it sure is, but remember that the guy getting useful work done\n> (DeadLockCheck) is having to share the CPU with 999 other processes\n> that are waking up on every clock tick for just long enough to fail\n> to get the spinlock. I think it's those useless process wakeups that\n> are causing the problem.\n> \n> If you estimate that a process dispatch cycle is ~ 10 microseconds,\n> then waking 999 useless processes every 10 msec is just about enough\n> to consume 100% of the CPU doing nothing useful... 
so what should be\n> a few-millisecond check takes a long time, which makes things worse\n> because the 999 wannabees are spinning for that much more time.\n\nIf so, what about increase the dead lock timer proportional to the\nlength of the waiting holder queue?\n\nHere are the patches against current to increase the dealock timer by\nqueue_length secconds.\n\n*** proc.c.orig\tThu Jul 5 16:12:32 2001\n--- proc.c\tThu Jul 5 16:20:22 2001\n***************\n*** 506,511 ****\n--- 506,512 ----\n \tint\t\t\tmyHeldLocks = MyProc->heldLocks;\n \tPROC\t *proc;\n \tint\t\t\ti;\n+ \tint\tmsec;\n \n #ifndef __BEOS__\n \tstruct itimerval timeval,\n***************\n*** 625,638 ****\n \t * Need to zero out struct to set the interval and the microseconds\n \t * fields to 0.\n \t */\n #ifndef __BEOS__\n \tMemSet(&timeval, 0, sizeof(struct itimerval));\n! \ttimeval.it_value.tv_sec = DeadlockTimeout / 1000;\n! \ttimeval.it_value.tv_usec = (DeadlockTimeout % 1000) * 1000;\n \tif (setitimer(ITIMER_REAL, &timeval, &dummy))\n \t\telog(FATAL, \"ProcSleep: Unable to set timer for process wakeup\");\n #else\n! \ttime_interval = DeadlockTimeout * 1000000;\t/* usecs */\n \tif (set_alarm(time_interval, B_ONE_SHOT_RELATIVE_ALARM) < 0)\n \t\telog(FATAL, \"ProcSleep: Unable to set timer for process wakeup\");\n #endif\n--- 626,642 ----\n \t * Need to zero out struct to set the interval and the microseconds\n \t * fields to 0.\n \t */\n+ \n+ \tmsec = DeadlockTimeout + waitQueue->size * 1000;\n+ \n #ifndef __BEOS__\n \tMemSet(&timeval, 0, sizeof(struct itimerval));\n! \ttimeval.it_value.tv_sec = msec / 1000;\n! \ttimeval.it_value.tv_usec = (msec % 1000) * 1000;\n \tif (setitimer(ITIMER_REAL, &timeval, &dummy))\n \t\telog(FATAL, \"ProcSleep: Unable to set timer for process wakeup\");\n #else\n! 
\ttime_interval = msec * 1000000;\t/* usecs */\n \tif (set_alarm(time_interval, B_ONE_SHOT_RELATIVE_ALARM) < 0)\n \t\telog(FATAL, \"ProcSleep: Unable to set timer for process wakeup\");\n #endif\n", "msg_date": "Thu, 05 Jul 2001 16:28:24 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: stuck spin lock with many concurrent users " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> If so, what about increase the dead lock timer proportional to the\n> length of the waiting holder queue?\n\nI don't think that's a good idea; it's not solving the problem, only\nreducing performance, and in a fairly arbitrary way at that. (The\nlength of the particular wait queue you happen to be on is no measure\nof the total number of processes waiting for locks.)\n\nThe real problem is in the spinlock implementation --- deadlock checking\nis only one place where lots of processes might gang up on the same\nspinlock. The bufmgr lock is another one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Jul 2001 09:47:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: stuck spin lock with many concurrent users " } ]
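Tatsuo's patch above computes `msec = DeadlockTimeout + waitQueue->size * 1000` and loads it into a one-shot `setitimer()` request — `it_interval` stays zeroed, which is why (as Tom notes earlier in the thread) the timer fires only once. A standalone sketch of just that arithmetic; the helper names are mine, PostgreSQL does this inline in `ProcSleep()`:

```c
#include <assert.h>
#include <sys/time.h>

/*
 * Hypothetical helper mirroring the patch: base deadlock timeout in
 * milliseconds plus one extra second per process already waiting.
 */
static int
deadlock_timer_ms(int deadlock_timeout_ms, int wait_queue_size)
{
	return deadlock_timeout_ms + wait_queue_size * 1000;
}

/*
 * Split a millisecond value into the it_value fields of a setitimer()
 * request; it_interval stays zeroed so the timer fires exactly once.
 */
static struct itimerval
one_shot_timer(int msec)
{
	struct itimerval t = {{0, 0}, {0, 0}};

	t.it_value.tv_sec = msec / 1000;
	t.it_value.tv_usec = (msec % 1000) * 1000;
	return t;
}
```

Tom's follow-up objection applies, though: the length of one wait queue is only a crude proxy for total lock contention, so scaling the timer this way mitigates the symptom rather than fixing the spinlock behavior itself.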
[ { "msg_contents": "\n> > What about a KDE or Gnome piece of software? In\n> > fact, I believe that such a\n> > project may already be in its infancy...\n> \n> Yes, I could be, but no all systems have one of them\n> installed or even installable (think not only about\n> Linux, but all the platforms in wich Postgres run,\n> Windows too)\n> Other advantage of the Java/Swing aproach, apart from\n> its portability, is that is easy to program in that\n> platform. I have programed for KDE and have done a\n> little test against both gtk+, gtk--, but Java is\n> another level. Suddenly all is easy, you can do what\n> you can imagine. I don't want to start a flamewar\n> about the best programming language. I only say that\n> for this kind of task Java is the only platform in\n> wich I feel sure about what can be done and when it\n> can be.\n\nSeems IBM and Informix have also come to that conclusion,\nsince their new DB admin frontends are also in Java.\n\nAndreas\n", "msg_date": "Wed, 20 Jun 2001 12:25:34 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: RE: Universal admin frontend" } ]
[ { "msg_contents": "Hello\n\nI'm using version 6.5.2 of PostgreSQL\n\nSometimes , I have trouble running some pl/PGSQL functions through my PHP\nWeb application. Message error is :\n\nExecOpenScanR : failed to open relation 27223\n\nAnother trouble is that i make a dump of my database every night and\nsometimes a table, and always this one , becomes empty. The schema of the\ntable is right but there's no data anymore.\n\nHave you some explanations for that kind of trouble.\n\nThanks for your future advise.\n\nOlivier HAIES\n\nSQLI : http://www.sqli.fr", "msg_date": "Wed, 20 Jun 2001 12:47:11 +0200", "msg_from": "\"Olivier Haies\" <ohaies@sqli.com>", "msg_from_op": true, "msg_subject": "Error messages" }, { "msg_contents": "\"Olivier Haies\" <ohaies@sqli.com> writes:\n> I'm using version 6.5.2 of PostgreSQL\n\nTime to update... 7.1.2 is current...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Jun 2001 23:20:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error messages " } ]
[ { "msg_contents": ">> - Phppgadmin is a web based tool. You need a php\n>> enabled web server. Most end users/admins don't\nwant\n>> to have to configure a web server, PHP (\"what is\n>> PHP?\") and to have a poor interface (I'm talking\nabout\n>> web based interfaces in general, not the phppgadmin\nin\n>> particular).\n\n> Maybe, but then you are platform independent.\n\nWith Java too. Well, you need Java (instead of PHP).\nJava is not free and PHP is. I know there are people\nfor who this can be a problem. In my original post I\nasked this, I'd like to know what developers think\nabout it.\n\n>> - Both of them have limitations of what they can\n>> manage. You can't use them to backup/restore the\n>> database, to edit/see the postgresql configuration,\nto\n>> monitor the server(s), to start/stop server(s), ...\n\n> Correct. You need another sort of tool for that. If\nyou go web based webmin\n> can do most of this, or at least aims at this.\n\nI'm looking for a complete solution. A tool that you\ncan use as your only interface to Postgres, if you\nlike. And I'm talking about a tool designed only for\nPostgres. There are things that become bad if they\nwant too much generality. A Postgres admin tool.\n\n>> To say it briefly if an average IT manager asks you\nto\n>> \"show him PostgreSQL\" and you open pgsql or\npgaccess\n>> you are done. Sad but true.\n\n> [I assume you mean psql with pgsql.]\n\nRight\n\n>\n> Yes, but could show him that there are such tools\nwith Oracle too. Sqlplus\n> is no better than psql for that matter. If you want\nto add your commands\n> inside a graphic tool you could use mpsql/kpsql\nwhich are unfortunately not\n> maintained anymore. Note, that I do not say the\ntools are there for all your\n> needs, but that I think there are quite some tools\nworth extending. 
I don't\n> think what we need is another tool that does parts\nof the job, but an effort\n> to build the one tool you can use for all of this.\nIf this has to start from\n> scratch so be it. But maybe it's good idea to\nimprove some other tool.\n\nWell, it would be great to pick code and ideas from\ndiferent projects, and to collaborate with them, but\nmy idea is:\n\n1.- Have a tool for the administration of Postgres,\ntotally capable and integrated with the server.\n\nIn particular this means not to have to\ndownload/compile hot-test a miriad of utilities, which\ncan have diferent dependencies and support for\nversions of server or libs... Yes, a competent\nsysadmin with enough time can do this right, but both\nof this things are rare. And don't forget that a\nsysadmin have to upgrade things from time to time, and\nall of us know under what pressure you can be to do\nit. If a sysadmin A with SQLServer can upgrade the\nsystem in one day, only pressing \"next\" buttons and\nyou (B) are compiling, configuring, ..., your boss can\nthink how much is really costing that free database...\nNot to mention what could he think about how much it\nwould cost to find another so competent guy if you\nleft.\n\n2.- Have a beautiful and frienly face for those who\nare used to other enviroments.\n\nThe manager and workers should feel confident in their\ntools. This is a very subjective thing, not only\ninvolving WAL's and transaction integrity. A blinking\ncursor in a black screen is too much for many people.\nMany, many people.\nI'm yet impressed in how much difficult can be for\nsome people editing a config file. The world out the\nUniversity is very weird. And the corporative world\nis, well, interesting.\n\n3.- Have a well designed, extensible, interoperable,\nmantainable, standard tool with a one to one relation\nwith Postgresql versions, easy to download precompiled\nand ready to run.\n\nA bit ambitious, and definitly I'm not that good. But\nothers are.
We only need a clear goal about what is\nneeded in this area.\n\n4.- Have a test case where the developers can inspire\nabout what things would be done in the server to easy\nadministration. \n\nThings like a complete interface for monitorizing\nactivity, have a configuration API, ... This things\ncan be used for other tools, if they want. The idea is\nthink about what is needed to provide a programming\ninterface for GUI's. The concrete tool is less\nimportant.\n\nSorry for my English, and thanks for your atention.\nPedro\n\n>\n> Michael\n> --\n> Michael Meskes\n> Michael@Fam-Meskes.De\n> Go SF 49ers! Go Rhein Fire!\n> Use Debian GNU/Linux! Use PostgreSQL!\n\n_______________________________________________________________\nDo You Yahoo!?\nYahoo! Messenger: Comunicación instantánea gratis con tu gente -\nhttp://messenger.yahoo.es\n", "msg_date": "Wed, 20 Jun 2001 12:58:15 +0200 (CEST)", "msg_from": "=?iso-8859-1?q?Pedro=20Abelleira=20Seco?= <pedroabelleira@yahoo.es>", "msg_from_op": true, "msg_subject": "None" }, { "msg_contents": "On Wed, Jun 20, 2001 at 12:58:15PM +0200, Pedro Abelleira Seco wrote:\n> With Java too. Well, you need Java (instead of PHP).\n> Java is not free and PHP is. I know there are people\n> for who this can be a problem. In my original post I\n\nI do not care that much about the freeness of java, but be aware that due to\nthe lack thereof the newer java releases are not available on all platforms. \n\nI for one do not really like Java, because I have yet so Java developed\nprogram to run stable and fast on my machine.\n\n> 1.- Have a tool for the administration of Postgres,\n> totally capable and integrated with the server.\n> \n> In particular this means not to have to\n> download/compile hot-test a miriad of utilities, which\n> can have diferent dependencies and support for\n> versions of server or libs... Yes, a competent\n\nBTW how about adding some other tools that are available to the PostgreSQL\ntree?
The very same problem you have with all these tools will happen with\nyour tool unless you develop it as part of the PostgreSQL distribution. So\nmaybe we shoul try to merge in other tools too.\n\n> I'm yet impressed in how much difficult can be for\n> some people editing a config file. The world out the\n> University is very weird. And the corporative world\n> is, well, interesting.\n\nDon't get me wrong, we really need that GUI so even the user we call DAU\n(dümmster anzunehmender User = silliest user imaginable) can operate it\nsomehow. I'm all for your project. My problem is just that we need this tool\nnow or even better two days ago.\n\nMaybe this sounds bitter, but I recently did a search for a groupware tool\nto use in my company and also for a customer. We decided to go web based to\nbe independent of the OS. And of course we wanted open source. So I scanned\nthrough sourceforge just to find at least a dozen such projects, but not\na single one finished enough to fill all our needs. \n\nWhile as a open source software developer I can understand why people set up\ndifferent projects, as a user I think this is a terrible waste of resources.\n\nI hope you now understand what I meant to say.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Wed, 20 Jun 2001 13:38:03 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: " } ]
[ { "msg_contents": "I love the fact that temp tables do not exist in every PostgreSQL session,\ndon't get me wrong. \n\nThe issue is this: most \"web environments\" have the idea of a session. A\nsession management scheme based on PostgreSQL exposes PostgreSQL's worst\nbehavior. Small amount of records, high update/delete rate for each record. So\nmuch so, that it probably isn't realistic to replace something like Oracle with\nPostgreSQL in this environment.\n\nDo \"temp tables\" suffer the same delete/update behavior of marking the row as\ndeleted and adding another row? Thus requiring vacuum periodically. \n\nIf not, should/could there be a way to create a temp table that is globally\nvisible?\n", "msg_date": "Wed, 20 Jun 2001 07:47:31 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "shared temp tables" }, { "msg_contents": "> I love the fact that temp tables do not exist in every PostgreSQL session,\n> don't get me wrong. \n> \n> The issue is this: most \"web environments\" have the idea of a session. A\n> session management scheme based on PostgreSQL exposes PostgreSQL's worst\n> behavior. Small amount of records, high update/delete rate for each record. So\n> much so, that it probably isn't realistic to replace something like Oracle with\n> PostgreSQL in this environment.\n> \n> Do \"temp tables\" suffer the same delete/update behavior of marking the row as\n> deleted and adding another row? Thus requiring vacuum periodically. \n> \n> If not, should/could there be a way to create a temp table that is globally\n> visible?\n\nTemp tables are the same as real tables and have the same update\nbehavior.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup.
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 20 Jun 2001 22:23:07 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: shared temp tables" } ]
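Bruce's answer above — temp tables get the same update behavior as real tables — refers to the non-overwriting updates mlw describes: the old row version is marked deleted and a new version is appended, leaving dead tuples for a later vacuum to reclaim. A deliberately simplified toy model in C of that effect (this is not PostgreSQL's actual tuple layout; the struct and names are illustrative only):

```c
#include <assert.h>
#include <string.h>

#define MAX_TUPLES 64

struct tuple
{
	int		key;
	int		value;
	int		dead;		/* set when a newer version supersedes this one */
};

struct table
{
	struct tuple rows[MAX_TUPLES];
	int		n;
};

/* UPDATE: mark the live version dead and append the new version. */
static void
mvcc_update(struct table *t, int key, int value)
{
	for (int i = 0; i < t->n; i++)
		if (!t->rows[i].dead && t->rows[i].key == key)
			t->rows[i].dead = 1;
	t->rows[t->n].key = key;
	t->rows[t->n].value = value;
	t->rows[t->n].dead = 0;
	t->n++;
}

static int
dead_tuples(const struct table *t)
{
	int		d = 0;

	for (int i = 0; i < t->n; i++)
		d += t->rows[i].dead;
	return d;
}

/* VACUUM: compact away dead versions, reclaiming the space. */
static void
vacuum(struct table *t)
{
	int		j = 0;

	for (int i = 0; i < t->n; i++)
		if (!t->rows[i].dead)
			t->rows[j++] = t->rows[i];
	t->n = j;
}
```

Every update leaves one more dead version behind, which is exactly why a high-churn session table needs frequent vacuuming.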
[ { "msg_contents": "I was thinking that a solution to problems with all dynamically linked PLs\n(such as perl and python, which must be compiled as shared library to be\nusable), is to link them statically into postgres.\n\nAnyone disagrees? If not, I'll write some patches, probably by hacking\ndynloader to understand when things were already linked in...\n\n-alex\n\n", "msg_date": "Wed, 20 Jun 2001 07:58:08 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "statically linked PL's" }, { "msg_contents": "Alex Pilosov wrote:\n> I was thinking that a solution to problems with all dynamically linked PLs\n> (such as perl and python, which must be compiled as shared library to be\n> usable), is to link them statically into postgres.\n>\n> Anyone disagrees? If not, I'll write some patches, probably by hacking\n> dynloader to understand when things were already linked in...\n\n No need to change the dynamic linker. The PL's handler\n function must be defined in language 'internal' instead of\n 'C' then and this must be done at initdb time. Just the\n CREATE LANGUAGE would then control if a DB has it or not.\n\n Since it should IMHO be a compile time config option, we'd\n need config dependant entries in .bki files for initdb, and\n createlang/droplang must check if there is an 'internal'\n handler in pg_proc before mucking with it.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 20 Jun 2001 08:27:16 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: statically linked PL's" } ]
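Alex's idea — teach the loader that some modules are already linked in — amounts to consulting a static table of built-in handlers before falling back to `dlopen()`, which dovetails with Jan's point that such handlers could simply be registered as `internal` functions. A rough sketch of that lookup, with invented handler names standing in for the real PL entry points:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

typedef int (*pl_handler) (void);

/*
 * Stand-ins for handlers compiled into the server binary; the names
 * and return values here are illustrative, not the real PL symbols.
 */
static int plperl_handler(void)		{ return 1; }
static int plpython_handler(void)	{ return 2; }

static const struct
{
	const char *name;
	pl_handler	fn;
}			builtin_pls[] = {
	{"plperl_call_handler", plperl_handler},
	{"plpython_call_handler", plpython_handler},
};

/*
 * Resolve a handler by name: check the static table first.  A real
 * implementation would fall back to dlopen()/dlsym() on a miss
 * instead of returning NULL.
 */
static pl_handler
resolve_handler(const char *name)
{
	for (size_t i = 0; i < sizeof(builtin_pls) / sizeof(builtin_pls[0]); i++)
		if (strcmp(builtin_pls[i].name, name) == 0)
			return builtin_pls[i].fn;
	return NULL;
}
```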
[ { "msg_contents": "> Problem can be demonstrated by following example\n> \n> create table a (a numeric primary key);\n> insert into a values (1);\n> insert into a values (2);\n> insert into a values (3);\n> insert into a values (4);\n> update a set a=a+1 where a>2;\n> ERROR: Cannot insert a duplicate key into unique index a_pkey\n\nWe use uniq index for UK/PK but shouldn't. Jan?\n\nVadim\n", "msg_date": "Wed, 20 Jun 2001 09:43:05 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Update is not atomic" }, { "msg_contents": "Mikheev, Vadim wrote:\n> > Problem can be demonstrated by following example\n> >\n> > create table a (a numeric primary key);\n> > insert into a values (1);\n> > insert into a values (2);\n> > insert into a values (3);\n> > insert into a values (4);\n> > update a set a=a+1 where a>2;\n> > ERROR: Cannot insert a duplicate key into unique index a_pkey\n>\n> We use uniq index for UK/PK but shouldn't. Jan?\n\n What else can you use than an index? A \"deferred until\n statement end\" trigger checking for duplicates? Think it'd\n have a real bad performance impact.\n\n Whatever the execution order might be, the update of '3' to\n '4' will see the other '4' as existent WRT the scan commandId\n and given snapshot - right? If we at the time we now fire up\n the ERROR add the key, the index and heap to a list of\n \"possible dupkeys\", that we'll check at the end of the actual\n command, the above would work. The check at statement end\n would have to increment the commandcounter and for each entry\n do an index scan with the key, counting the number of found,\n valid heap tuples.\n\n Well, with some million rows doing a \"set a = a + 1\" could\n run out of memory. So this would be something that'd work in\n the sandbox and for non-broken applications (tm).
Maybe at\n some level (when we escalate the lock to a full table lock?)\n we simply forget about single keys, but have a new index\n access function that checks the entire index for uniqueness.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 20 Jun 2001 17:27:20 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: Update is not atomic" } ]
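[Editorial aside] Until the unique check can be deferred as discussed in this thread, the usual client-side workaround for the failing example is to update in two passes so that no transient duplicate ever reaches the index — a sketch that assumes the key values stay positive:

```sql
BEGIN;
-- Pass 1: park the affected keys in an unoccupied (negative) range.
UPDATE a SET a = -(a + 1) WHERE a > 2;
-- Pass 2: flip them to their final values; no key collides mid-statement.
UPDATE a SET a = -a WHERE a < 0;
COMMIT;
```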
[ { "msg_contents": "> > update a set a=a+1 where a>2;\n> > ERROR: Cannot insert a duplicate key into unique index a_pkey\n> \n> This is a known problem with unique contraints, but it's not\n> easy to fix it.\n\nYes, it requires dirty reads.\n\nVadim\n", "msg_date": "Wed, 20 Jun 2001 09:50:46 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: [BUGS] Update is not atomic" } ]
[ { "msg_contents": "\n> I think that application people would probably prefer the delete trigger,\n> insert trigger. It makes more sense, because I would interpret replace\n> as \"get rid of the old if it exists\" and \"put in a new item\". If people\n> wanted\n> to make sure code is run on delete, and they have to put it into a\n> delete trigger and a replace trigger, it would be two places for them.\n> \n> Frankly, I'm not sure why this is being seen as a weak approach.\n> My indended semantic was atomic delete (ignoring error) and insert.\n\nAdding another trigger event \"replace\" is imho not acceptable, since\npeople guarding their data integrity with standards defined triggers \nfor insert update and delete would open the door to inconsistency \nbecause they have not defined a replace trigger.\n\nFire the delete then the insert trigger is imho not a straightforward answer,\nsince a second possible interpretation would be to fire eighter the insert trigger \nor the update trigger if a row already existed.\n\nAndreas\n", "msg_date": "Wed, 20 Jun 2001 19:21:14 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Re: Re: REPLACE INTO table a la mySQL" }, { "msg_contents": "Zeugswetter Andreas SB wrote:\n> \n> > I think that application people would probably prefer the delete trigger,\n> > insert trigger. It makes more sense, because I would interpret replace\n> > as \"get rid of the old if it exists\" and \"put in a new item\".
If people\n> > wanted\n> > to make sure code is run on delete, and they have to put it into a\n> > delete trigger and a replace trigger, it would be two places for them.\n> >\n> > Frankly, I'm not sure why this is being seen as a weak approach.\n> > My indended semantic was atomic delete (ignoring error) and insert.\n> \n> Adding another trigger event \"replace\" is imho not acceptable, since\n> people guarding their data integrity with standards defined triggers\n> for insert update and delete would open the door to inconsistency\n> because they have not defined a replace trigger.\n> \n> Fire the delete then the insert trigger is imho not a straightforward answer,\n> since a second possible interpretation would be to fire eighter the insert trigger\n> or the update trigger if a row already existed.\n\nImho the second one is also the only correct one, as the definition of\nREPLACE INTO \nis \"update if the row is there, else insert\". The problem is just that \nthe test must \nnot fire any triggers and that test+(insert|update) must be atomic and\nmust fire the \nrespective trigger for insert|update. This just implies that it can't be\ndone by \nsimple rewrite, not that it is undoable.\n\nOTOH, I think that our non-transactional UNIQUE constraint\nimplementation is a bigger \nproblem than REPLACE INTO (i.e. one is BUG the other is ENCHANCEMENT).\n\n-----------------\nHannu\n", "msg_date": "Mon, 25 Jun 2001 10:39:04 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: AW: Re: Re: REPLACE INTO table a la mySQL" } ]
[ { "msg_contents": "I would like to get the unixODBC fixs to the pgsql\nODBC driver into the pgsql distro and then remove the\nunixODBC version.\n\nI know how to do this and I am prepared to do this\nASAP...\n\nCan someone advise?\n\nPeter Harvey\n\n\n__________________________________________________\nDo You Yahoo!?\nGet personalized email addresses from Yahoo! Mail\nhttp://personal.mail.yahoo.com/\n", "msg_date": "Wed, 20 Jun 2001 10:47:11 -0700 (PDT)", "msg_from": "Peter Harvey <peteralexharvey@yahoo.com>", "msg_from_op": true, "msg_subject": "ODBC" }, { "msg_contents": "\nUsually the best thing is to send along a patch against the current CVS\ncopy of the ODBC driver.\n\n\n> I would like to get the unixODBC fixs to the pgsql\n> ODBC driver into the pgsql distro and then remove the\n> unixODBC version.\n> \n> I know how to do this and I am prepared to do this\n> ASAP...\n> \n> Can someone advise?\n> \n> Peter Harvey\n> \n> \n> __________________________________________________\n> Do You Yahoo!?\n> Get personalized email addresses from Yahoo! Mail\n> http://personal.mail.yahoo.com/\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 20 Jun 2001 22:40:30 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ODBC" } ]
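[Editorial aside] For reference, the update-if-present-else-insert reading that Hannu favors can be approximated from the client today, though without the atomicity (or concurrency safety) a built-in REPLACE would need — a sketch against a hypothetical table t(k, v) with a unique key on k:

```sql
BEGIN;
-- Fires the ordinary UPDATE trigger if the row exists.
UPDATE t SET v = 'new value' WHERE k = 42;
-- Only if the UPDATE reported zero rows does the client then issue
-- the INSERT, which fires the ordinary INSERT trigger.
INSERT INTO t (k, v) VALUES (42, 'new value');
COMMIT;
```

A concurrent inserter can still slip in between the two statements, which is why the thread treats a built-in with an atomic existence test as the real fix.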
[ { "msg_contents": "hello all,\n\nI am using an xml -> sql generator where foreign keys are specified\nafter the columns are. The sql is written as the xml file is being\nparsed so, the foreign key stuff must be written after the fact.\n\nThe problem I need to solve is adding the on Delete/on Update constraints.\n\nhere's what I would like to do (e.g.):\n\nALTER TABLE league\n ADD CONSTRAINT leagueOwnerId FOREIGN KEY (leagueOwnerId)\n REFERENCES TeamOwner (id) ON DELETE CASCADE;\n\ntable teamOwner (id)\ntable league (id, leagueOwnerId)\nwhere leagueOwnerId is a FKey to owner(id)\n\n\nthanks\n\nmike\n\n\n-- \n-------------------------------------------------\nI am Vinz, Vinz Clortho. Keymaster of Gozer,\nVolguus Zildrohar, Lord of the Sebouillia.\nAre you the Gatekeeper?\n-------------------------------------------------\n", "msg_date": "Wed, 20 Jun 2001 15:32:26 -0500", "msg_from": "Mike Haberman <mikeh@ncsa.uiuc.edu>", "msg_from_op": true, "msg_subject": "help with add constraint syntax needed" }, { "msg_contents": "\nOn Wed, 20 Jun 2001, Mike Haberman wrote:\n\n> hello all,\n> \n> I am using an xml -> sql generator where foreign keys are specified\n> after the columns are. The sql is written as the xml file is being\n> parsed so, the foreign key stuff must be written after the fact.\n> \n> The problem I need to solve is adding the on Delete/on Update constraints.\n> \n> here's what I would like to do (e.g.):\n> \n> ALTER TABLE league\n> ADD CONSTRAINT leagueOwnerId FOREIGN KEY (leagueOwnerId)\n> REFERENCES TeamOwner (id) ON DELETE CASCADE;\n> \n> table teamOwner (id)\n> table league (id, leagueOwnerId)\n> where leagueOwnerId is a FKey to owner(id)\n\nWhat problem are you having?
The above works for me assuming teamowner\nhas a unique constraint on id on current sources.\n\n", "msg_date": "Fri, 22 Jun 2001 12:51:08 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: help with add constraint syntax needed" } ]
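[Editorial aside] Putting Stephan's caveat together with the original example, a version that should work end to end (note that ON DELETE takes CASCADE) might be:

```sql
-- The referenced column must carry a unique constraint;
-- PRIMARY KEY satisfies that requirement.
CREATE TABLE teamowner (id integer PRIMARY KEY);
CREATE TABLE league (id integer PRIMARY KEY, leagueownerid integer);

ALTER TABLE league
    ADD CONSTRAINT leagueownerid FOREIGN KEY (leagueownerid)
    REFERENCES teamowner (id) ON DELETE CASCADE;
```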
[ { "msg_contents": "> > > update a set a=a+1 where a>2;\n> > > ERROR: Cannot insert a duplicate key into unique index a_pkey\n> >\n> > We use uniq index for UK/PK but shouldn't. Jan?\n> \n> What else can you use than an index? A \"deferred until\n> statement end\" trigger checking for duplicates? Think it'd\n> have a real bad performance impact.\n\nAFAIR, standard requires \"deferred\" (until statement/transaction(?)\nend) as default behaviour for RI (all?) constraints. But no matter\nwhat is default, \"deferred\" *must* be available => uniq indices\nmust not be used.\n\n> Whatever the execution order might be, the update of '3' to\n> '4' will see the other '4' as existent WRT the scan commandId\n> and given snapshot - right? If we at the time we now fire up\n> the ERROR add the key, the index and heap to a list of\n> \"possible dupkeys\", that we'll check at the end of the actual\n> command, the above would work. The check at statement end\n> would have to increment the commandcounter and for each entry\n> do an index scan with the key, counting the number of found,\n> valid heap tuples.\n\nIncrementing command counter is not enough - dirty reads are required\nto handle concurrent PK updates.\n\n> Well, with some million rows doing a \"set a = a + 1\" could\n> run out of memory. So this would be something that'd work in\n> the sandbox and for non-broken applications (tm). Maybe at\n\nHow is this different from (deferred) updates of million FK we allow\nright now? Let the user decide what behaviour (deferred/immediate) he\nneeds. The point is that now user has no ability to choose what's\nright for him.\n\n> some level (when we escalate the lock to a full table lock?)\n> we simply forget about single keys, but have a new index\n> access function that checks the entire index for uniqueness.\n\nI wouldn't bother to implement this.
User always has ability to excl.\nlock table, drop constraints, update whatever he want and recreate\nconstraints again.\n\nVadim\n", "msg_date": "Wed, 20 Jun 2001 17:10:45 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: RE: [BUGS] Update is not atomic" } ]
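[Editorial aside] The behaviour Vadim is arguing for is what SQL92 spells as a deferrable constraint. For unique keys the syntax would look like the following — shown only to illustrate the proposal, since at this point PostgreSQL honours DEFERRABLE for foreign keys but not for unique constraints:

```sql
ALTER TABLE a ADD CONSTRAINT a_key UNIQUE (a)
    DEFERRABLE INITIALLY IMMEDIATE;

BEGIN;
SET CONSTRAINTS a_key DEFERRED;
UPDATE a SET a = a + 1 WHERE a > 2;  -- duplicates checked only at commit
COMMIT;
```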
[ { "msg_contents": "Anyone know who looks after ODBC source code -\nincluding the GNUmakefile stuff in the odbc dir of the\ndistro?\n\nPeter\n\n\n\n\n__________________________________________________\nDo You Yahoo!?\nGet personalized email addresses from Yahoo! Mail\nhttp://personal.mail.yahoo.com/\n", "msg_date": "Wed, 20 Jun 2001 18:42:53 -0700 (PDT)", "msg_from": "Peter Harvey <peteralexharvey@yahoo.com>", "msg_from_op": true, "msg_subject": "ODBC" } ]
[ { "msg_contents": "How far off is 7.2? Ages? I want to add the rest of the ALTER TABLE\nfunctionality for 7.2, but I've just been busy - don't worry I haven't\nforgotten!\n\nThis is my personal TODO list:\n\n* ALTER TABLE ADD PRIMARY KEY\n\t- Done, except code that detects whether or not a pk already exists\n* ALTER TABLE ADD UNIQUE\n\t- Done, except code that detects whether or not a unique key already exists\nover the specified fields\n* PSQL - SHOW FOREIGN KEYS\n\t- Still working on a query. If I come up with a good one - would a catalog\nview of them be useful?\n* -ALTER TABLE DROP CHECK\n\t- Already committed\n* ALTER TABLE DROP PRIMARY KEY\n\t- Done, will need review\n* ALTER TABLE DROP UNIQUE\n\t- Done, will need review\n* ALTER TABLE DROP FOREIGN KEY\n\t- Harder than I thought :) Working on it.\n* Check that pgclass.relfkeys is being set correctly.\n\t- Is pgclass.relfkeys being used at the moment?\n* PG_DUMP DUMP CONSTRAINTS AS ALTER TABLE STATEMENTS\n\t- Would be nice, once the alter statements above work.\n* FIX 'RESTRICT' IN DROP CONSTRAINT DOCS\n\t- It would be nice to have restrict/cascade as optional keywords at the\nmoment? At the moment, the grammar forces people to put the word 'restrict'\nin, even though it does nothing.\n* REGRESSION TESTS\n\t- For all of the above\n* WILDCARDS IN PG_DUMP\n\t- It would be nice to be able to dump tables via wildcards, or once schemas\nexist to dump an entire schema I guess.\n* CHECK CREATING DUPLICATE NAMED FOREIGN KEYS\n\t- I seem to be able to create duplicate named fk's, plus I think the\n'<unnamed>' ones should be given auto name to make dropping constraint\neasier...\n* DOCUMENT PG_TRIGGER\n\t- Doesn't seem to be in the system catalog documentation...\n* MOVE ALTER CODE FROM heap.c/command.c INTO alter.c\n\t- I get the feeling I'm filling up heap.c with lots of alter table crud\nthat is beginning to need its own file?\n\nIf anyone is super-interested in seeing my unposted code, feel free to ask\nfor it.
(Or better yet, wants to finish the work ;) )\n\nChris\n\n", "msg_date": "Thu, 21 Jun 2001 11:18:09 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "7.2 stuff" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> How far off is 7.2? Ages?\n\nHopefully not. I'd like to see us get back on a reasonably short\nrelease cycle, like every six months or less --- the last couple\nmajor release cycles have been painfully long.\n\nSo, maybe beta around Aug-Sep?\n\nNot speaking on behalf of core here; we haven't discussed release\nschedule at all yet. Just my personal $0.02.\n\n\n> * Check that pgclass.relfkeys is being set correctly.\n> \t- Is pgclass.relfkeys being used at the moment?\n\nA quick glimpse shows not. I have a personal todo item to fix\nrelhaspkey, which isn't implemented either. Feel free to fix this \none if it bugs you. (Note: it might be harder than it looks; think\nabout race conditions when different backends are adding/dropping\nkeys concurrently.)\n\n> * MOVE ALTER CODE FROM heap.c/command.c INTO alter.c\n> \t- I get the feeling I'm filling up heap.c with lots of alter table crud\n> that is beginning to need its own file?\n\nCode beautification efforts are always worthwhile IMHO.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Jun 2001 00:25:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 stuff " }, { "msg_contents": "On Thu, 21 Jun 2001, Tom Lane wrote:\n\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > How far off is 7.2? Ages?\n>\n> Hopefully not. I'd like to see us get back on a reasonably short\n> release cycle, like every six months or less --- the last couple\n> major release cycles have been painfully long.\n>\n> So, maybe beta around Aug-Sep?\n>\n> Not speaking on behalf of core here; we haven't discussed release\n> schedule at all yet.
Just my personal $0.02.\n\nThat's what I was seeing/hoping for also ...\n\n\n", "msg_date": "Thu, 21 Jun 2001 02:52:34 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: 7.2 stuff " }, { "msg_contents": "\nSo you have at ~2 months.\n\n\n> How far off is 7.2? Ages? I want to add the rest of the ALTER TABLE\n> functionality for 7.2, but I've just been busy - don't worry I haven't\n> forgotten!\n> \n> This is my personal TODO list:\n> \n> * ALTER TABLE ADD PRIMARY KEY\n> \t- Done, except code that detects whether or not a pk already exists\n> * ALTER TABLE ADD UNIQUE\n> \t- Done, except code that detects whether or not a unique key already exists\n> over the specified fields\n> * PSQL - SHOW FOREIGN KEYS\n> \t- Still working on a query. If I come up with a good one - would a catalog\n> view of them be useful?\n> * -ALTER TABLE DROP CHECK\n> \t- Already committed\n> * ALTER TABLE DROP PRIMARY KEY\n> \t- Done, will need review\n> * ALTER TABLE DROP UNIQUE\n> \t- Done, will need review\n> * ALTER TABLE DROP FOREIGN KEY\n> \t- Harder than I thought :) Working on it.\n> * Check that pgclass.relfkeys is being set correctly.\n> \t- Is pgclass.relfkeys being used at the moment?\n> * PG_DUMP DUMP CONSTRAINTS AS ALTER TABLE STATEMENTS\n> \t- Would be nice, once the alter statements above work.\n> * FIX 'RESTRICT' IN DROP CONSTRAINT DOCS\n> \t- It would be nice to have restrict/cascade as optional keywords at the\n> moment?
At the moment, the grammar forces people to put the word 'restrict'\n> in, even though it does nothing.\n> * REGRESSION TESTS\n> \t- For all of the above\n> * WILDCARDS IN PG_DUMP\n> \t- It would be nice to be able to dump tables via wildcards, or once schemas\n> exist to dump an entire schema I guess.\n> * CHECK CREATING DUPLICATE NAMED FOREIGN KEYS\n> \t- I seem to be able to create duplicate named fk's, plus I think the\n> '<unnamed>' ones should be given auto name to make dropping constraint\n> easier...\n> * DOCUMENT PG_TRIGGER\n> \t- Doesn't seem to be in the system catalog documentation...\n> * MOVE ALTER CODE FROM heap.c/command.c INTO alter.c\n> \t- I get the feeling I'm filling up heap.c with lots of alter table crud\n> that is beginning to need its own file?\n> \n> If anyone is super-interested in seeing my unposted code, feel free to ask\n> for it. (Or better yet, wants to finish the work ;) )\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 21 Jun 2001 09:57:50 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 stuff" }, { "msg_contents": "\nThis is a nice list and all are good items. People should be able to\nhelp you if you encounter problems.\n\n> How far off is 7.2? Ages?
I want to add the rest of the ALTER TABLE\n> functionality for 7.2, but I've just been busy - don't worry I haven't\n> forgotten!\n> \n> This is my personal TODO list:\n> \n> * ALTER TABLE ADD PRIMARY KEY\n> \t- Done, except code that detects whether or not a pk already exists\n> * ALTER TABLE ADD UNIQUE\n> \t- Done, except code that detects whether or not a unique key already exists\n> over the specified fields\n> * PSQL - SHOW FOREIGN KEYS\n> \t- Still working on a query. If I come up with a good one - would a catalog\n> view of them be useful?\n> * -ALTER TABLE DROP CHECK\n> \t- Already committed\n> * ALTER TABLE DROP PRIMARY KEY\n> \t- Done, will need review\n> * ALTER TABLE DROP UNIQUE\n> \t- Done, will need review\n> * ALTER TABLE DROP FOREIGN KEY\n> \t- Harder than I thought :) Working on it.\n> * Check that pgclass.relfkeys is being set correctly.\n> \t- Is pgclass.relfkeys being used at the moment?\n> * PG_DUMP DUMP CONSTRAINTS AS ALTER TABLE STATEMENTS\n> \t- Would be nice, once the alter statements above work.\n> * FIX 'RESTRICT' IN DROP CONSTRAINT DOCS\n> \t- It would be nice to have restrict/cascade as optional keywords at the\n> moment?
At the moment, the grammar forces people to put the word 'restrict'\n> in, even though it does nothing.\n> * REGRESSION TESTS\n> \t- For all of the above\n> * WILDCARDS IN PG_DUMP\n> \t- It would be nice to be able to dump tables via wildcards, or once schemas\n> exist to dump an entire schema I guess.\n> * CHECK CREATING DUPLICATE NAMED FOREIGN KEYS\n> \t- I seem to be able to create duplicate named fk's, plus I think the\n> '<unnamed>' ones should be given auto name to make dropping constraint\n> easier...\n> * DOCUMENT PG_TRIGGER\n> \t- Doesn't seem to be in the system catalog documentation...\n> * MOVE ALTER CODE FROM heap.c/command.c INTO alter.c\n> \t- I get the feeling I'm filling up heap.c with lots of alter table crud\n> that is beginning to need its own file?\n> \n> If anyone is super-interested in seeing my unposted code, feel free to ask\n> for it. (Or better yet, wants to finish the work ;) )\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Jun 2001 22:59:57 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 stuff" }, { "msg_contents": "\nChristopher, do you want any of this added to the TODO?\n\n---------------------------------------------------------------------------\n\n> How far off is 7.2? Ages?
I want to add the rest of the ALTER TABLE\n> functionality for 7.2, but I've just been busy - don't worry I haven't\n> forgotten!\n> \n> This is my personal TODO list:\n> \n> * ALTER TABLE ADD PRIMARY KEY\n> \t- Done, except code that detects whether or not a pk already exists\n> * ALTER TABLE ADD UNIQUE\n> \t- Done, except code that detects whether or not a unique key already exists\n> over the specified fields\n> * PSQL - SHOW FOREIGN KEYS\n> \t- Still working on a query. If I come up with a good one - would a catalog\n> view of them be useful?\n> * -ALTER TABLE DROP CHECK\n> \t- Already committed\n> * ALTER TABLE DROP PRIMARY KEY\n> \t- Done, will need review\n> * ALTER TABLE DROP UNIQUE\n> \t- Done, will need review\n> * ALTER TABLE DROP FOREIGN KEY\n> \t- Harder than I thought :) Working on it.\n> * Check that pgclass.relfkeys is being set correctly.\n> \t- Is pgclass.relfkeys being used at the moment?\n> * PG_DUMP DUMP CONSTRAINTS AS ALTER TABLE STATEMENTS\n> \t- Would be nice, once the alter statements above work.\n> * FIX 'RESTRICT' IN DROP CONSTRAINT DOCS\n> \t- It would be nice to have restrict/cascade as optional keywords at the\n> moment?
At the moment, the grammar forces people to put the word 'restrict'\n> in, even though it does nothing.\n> * REGRESSION TESTS\n> \t- For all of the above\n> * WILDCARDS IN PG_DUMP\n> \t- It would be nice to be able to dump tables via wildcards, or once schemas\n> exist to dump an entire schema I guess.\n> * CHECK CREATING DUPLICATE NAMED FOREIGN KEYS\n> \t- I seem to be able to create duplicate named fk's, plus I think the\n> '<unnamed>' ones should be given auto name to make dropping constraint\n> easier...\n> * DOCUMENT PG_TRIGGER\n> \t- Doesn't seem to be in the system catalog documentation...\n> * MOVE ALTER CODE FROM heap.c/command.c INTO alter.c\n> \t- I get the feeling I'm filling up heap.c with lots of alter table crud\n> that is beginning to need its own file?\n> \n> If anyone is super-interested in seeing my unposted code, feel free to ask\n> for it. (Or better yet, wants to finish the work ;) )\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 26 Nov 2001 21:01:31 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 stuff" }, { "msg_contents": "Well, it was just a bunch of stuff I wanted to work on, feel free to add it\nto the TODO list. Some comments are below.\n\n> > * ALTER TABLE ADD PRIMARY KEY\n> > \t- Done, except code that detects whether or not a pk already exists\n> > * ALTER TABLE ADD UNIQUE\n> > \t- Done, except code that detects whether or not a unique\n\nThe ADD UNIQUE stuff is in 7.2, however Tom Lane has suggested that there\nare some stylistic deficiencies in the code that should be improved.
I\nwon't be able to correct these before 7.2 release, as it involves me sitting\ndown for hours searching the source code for function definitions, figuring\nout how they work, etc. In fact, I'm sure a more experienced developer could\nperform the fixes in 10 mins...\n\nThis problem is also what's stopped me submitting the ALTER TABLE / ADD\nPRIMARY stuff. Once the ADD UNIQUE bit is correct, ADD PRIMARY is trivial.\n\n(See: http://fts.postgresql.org/db/mw/msg.html?mid=1035632) I suggest\nreading the complete thread. I have fixed some of the problems in my\nprivate cvs, but no patch has been sent in...\n\nSome of the issues perhaps I should send in a patch for ASAP??\n\n> key already exists\n> > over the specified fields\n> > * PSQL - SHOW FOREIGN KEYS\n> > \t- Still working on a query. If I come up with a good one -\n> would a catalog\n> > view of them be useful?\n\nIs there a pg_get_* function for getting foreign key definitions yet?\n\n> > * -ALTER TABLE DROP CHECK\n> > \t- Already committed\n\nYeah, committed.\n\n> > * ALTER TABLE DROP PRIMARY KEY\n> > \t- Done, will need review\n> > * ALTER TABLE DROP UNIQUE\n> > \t- Done, will need review\n\nWrote them, but they're uncommitted. Don't worry about them until 7.3.\n\n> > * ALTER TABLE DROP FOREIGN KEY\n> > \t- Harder than I thought :) Working on it.\n\nThis is a toughie this one!\n\n> > * Check that pgclass.relfkeys is being set correctly.\n> > \t- Is pgclass.relfkeys being used at the moment?\n\nIt looked to me that pgclass.relfkeys wasn't ever being set or updated. Is\nthis true/correct?\n\n> > * PG_DUMP DUMP CONSTRAINTS AS ALTER TABLE STATEMENTS\n> > \t- Would be nice, once the alter statements above work.\n> > * FIX 'RESTRICT' IN DROP CONSTRAINT DOCS\n> > \t- It would be nice to have restrict/cascade as optional\n> keywords at the\n> > moment?
At the moment, the grammar forces people to put the\n> word 'restrict'\n> > in, even though it does nothing.\n\nDon't bother about this - it's been documented.\n\n> > * REGRESSION TESTS\n> > \t- For all of the above\n\nI've comment a regression test for ADD UNIQUE, but I don't think the DROP\nCONSTRAINT stuff has a regression test yet.\n\n> > * WILDCARDS IN PG_DUMP\n> > \t- It would be nice to be able to dump tables via wildcards,\n> or once schemas\n> > exist to dump an entire schema I guess.\n\nThat was just one of my little wish lists. I have a database with about a\nhundred tables in it and related sets of tables all share the same prefix.\nFor instance, I would like to be able to dump all the diary tables in one\ngo.\n\nie. pg_dump -t diary_\\* audb > dump.sql\n\nDon't know if there would be widespread enough demand for this feature\ntho...\n\n> > * CHECK CREATING DUPLICATE NAMED FOREIGN KEYS\n> > \t- I seem to be able to create duplicate named fk's, plus I think the\n> > '<unnamed>' ones should be given auto name to make dropping constraint\n> > easier...\n\nPretty clear.\n\n> > * DOCUMENT PG_TRIGGER\n> > \t- Doesn't seem to be in the system catalog documentation...\n\nYeah, pg_trigger does not appear on this page:\n\nhttp://postgresql.planetmirror.com/devel-corner/docs/postgres/catalogs.html\n\nThought it should be documented.
I noticed this while I was doing the\nimprovements on the contrib/fulltextindex code.\n\n> > * MOVE ALTER CODE FROM heap.c/command.c INTO alter.c\n> > \t- I get the feeling I'm filling up heap.c with lots of\n> alter table crud\n> > that is beginning to need its own file?\n\nBasically I was getting the impression that the command.c was getting big\nand fat and that it might be nice to split all the ALTER* commands into an\nalter.c or something.\n\nTell me what I should do for 7.2...\n\nRegards,\n\nChris\n\n", "msg_date": "Tue, 27 Nov 2001 11:17:02 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: 7.2 stuff" }, { "msg_contents": "> Basically I was getting the impression that the command.c was getting big\n> and fat and that it might be nice to split all the ALTER* commands into an\n> alter.c or something.\n> \n> Tell me what I should do for 7.2...\n\nI think 7.2 is fine. We can start on 7.3 in a few weeks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 26 Nov 2001 23:12:42 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 stuff" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Tell me what I should do for 7.2...\n\nAt this point, none of these are on the radar screen for 7.2; we are in\n\"get the release out\" mode, and anything that's not a critical bug fix\nneed not apply. But here are some comments for 7.3 and beyond.\n\n> Is there a pg_get_* function for getting foreign key definitions yet?\n\nNo, but it seems like possibly a good idea.
We should try to move away\nfrom applications looking directly at the system catalogs, and introduce\nsome layer of indirection so that catalog changes don't break so many\nthings. pg_get_xxx functions are one approach. Peter E. has suggested\nthat implementing the SQL92 informational views might be a better (more\nstandards-compliant) way of providing that indirection. That's cool to\nthe extent that it works, but I wonder whether we won't find that the\nSQL92 views omit important Postgres extensions. Anyway, this is a\nlong-term project.\n\n> It looked to me that pgclass.relfkeys wasn't ever being set or updated. Is\n> this true/correct?\n\nI cannot find any references to it in the code, either.\n\n> For instance, I would like to be able to dump all the diary tables in one\n> go.\n> ie. pg_dump -t diary_\\* audb > dump.sql\n> Don't know if there would be widespread enough demand for this feature\n> tho...\n\nI've seen requests for that before ... and I don't think they were all\nfrom you ;-). Seems like a reasonable wishlist item to me.\n\n> * DOCUMENT PG_TRIGGER\n> - Doesn't seem to be in the system catalog documentation...\n> Yeah, pg_trigger does not appear on this page:\n\nIt's in the current sources. Perhaps you're looking at an obsolete\nmirror?\n\n> Basically I was getting the impression that the command.c was getting big\n> and fat and that it might be nice to split all the ALTER* commands into an\n> alter.c or something.\n\nCool with me. We often fail to spend enough effort on code\nbeautification projects; that hurts the maintainability of the project\nin the long run. Feel free to devise and implement a better division\nof the ALTER code. (And as I think we already talked about, it'd also\nbe cool to try to merge the common infrastructure of the ALTER commands\nsomehow.
I don't like umpteen copied-and-pasted versions of the same\ncode, either ...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Nov 2001 23:21:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 stuff " }, { "msg_contents": "At 23:21 26/11/01 -0500, Tom Lane wrote:\n>\n>> For instance, I would like to be able to pg_dump all the diary tables in\none\n>> go.\n>> ie. pg_dump -t diary_\\* audb > dump.sql\n>> Don't know if there would be widespread enough demand for this feature\n>> tho...\n>\n>I've seen requests for that before ... and I don't think they were all\n>from you ;-). Seems like a reasonable wishlist item to me.\n>\n\nI have been sent patches for this kind of thing, but I would like to see\nthem generalized to some extent. Not sure of the syntax, but I'd like to be\nable to dump *any* selected pg_dump TOC entry type by name, or partial name\nmatch. ie. tables, functions, indexes, etc.\n\nAny suggestions as to how this is best done within unix-like commands? eg.\n\n pg_dump/restore --select=table:<regexp> ? \n pg_dump/restore --select=index:<regexp> ? \n\nAny ideas?\n\n\n \n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 27 Nov 2001 15:38:55 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: 7.2 stuff " }, { "msg_contents": "In article <17242.1006834868@sss.pgh.pa.us>, \"Tom Lane\"\n<tgl@sss.pgh.pa.us> wrote:\n\n>> Basically I was getting the impression that the command.c was getting\n>> big and fat and that it might be nice to split all the ALTER* commands\n>> into an alter.c or something.\n> \n> Cool with me. We often fail to spend enough effort on code\n> beautification projects; that hurts the maintainability of the project\n> in the long run. Feel free to devise and implement a better division of\n> the ALTER code. (And as I think we already talked about, it'd also be\n> cool to try to merge the common infrastructure of the ALTER commands\n> somehow. I don't like umpteen copied-and-pasted versions of the same\n> code, either ...)\n> \n\nI'd started a little of this with my TOAST slicing patch -moving some\ncommon code around in command.c (standard permissions checking and\nrecursing over children) to eliminate duplicates. 
\n\nI'm still sitting on the patch and maintaining separately because it is\nfor 7.3, but I am quite interested in some further tidying, but I don't\nwant to load too much into a single patch.\n\nRegards\n\nJohn\n\n\n\n-- \nJohn Gray\nAzuli IT http://www.azuli.co.uk +44 121 693 3397\njgray@azuli.co.uk\n", "msg_date": "Tue, 27 Nov 2001 11:25:55 +0000", "msg_from": "\"John Gray\" <jgray@azuli.co.uk>", "msg_from_op": false, "msg_subject": "Re: 7.2 stuff" }, { "msg_contents": "\nAre there any TODO items here?\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> Well, it was just a bunch of stuff I wanted to work on, feel free to add it\n> to the TODO list. Some comments are below.\n> \n> > > * ALTER TABLE ADD PRIMARY KEY\n> > > \t- Done, except code that detects whether or not a pk already exists\n> > > * ALTER TABLE ADD UNIQUE\n> > > \t- Done, except code that detects whether or not a unique\n> \n> The ADD UNIQUE stuff is in 7.2, however Tom Lane has suggested that there\n> are some stylistic deficiencies in the code that should be improved. I\n> won't be able to correct these before 7.2 release, as it involves me sitting\n> down for hours searching the souce code for function definitions, figuring\n> out how the work, etc. In fact, I'm sure a more experienced developer could\n> perform the fixes in 10 mins...\n> \n> This problem is also what's stopped me submitting the ALTER TABLE / ADD\n> PRIMARY stuff. Once the ADD UNIQUE bit is correct, ADD PRIMARY is trivial.\n> \n> (See: http://fts.postgresql.org/db/mw/msg.html?mid=1035632) I suggest\n> reading the complete thread. I have fixed some of the problems in my\n> private cvs, but no patch has been sent in...\n> \n> Some of the issues perhaps I should send in a patch for ASAP??\n> \n> > key already exists\n> > > over the specified fields\n> > > * PSQL - SHOW FOREIGN KEYS\n> > > \t- Still working on a query. 
If I come up with a good one -\n> > would a catalog\n> > > view of them be useful?\n> \n> Is there a pg_get_* function for getting foreign key definitions yet?\n> \n> > > * -ALTER TABLE DROP CHECK\n> > > \t- Already committed\n> \n> Yeah, committed.\n> \n> > > * ALTER TABLE DROP PRIMARY KEY\n> > > \t- Done, will need review\n> > > * ALTER TABLE DROP UNIQUE\n> > > \t- Done, will need review\n> \n> Wrote them, but they're uncommitted. Don't worry about them until 7.3.\n> \n> > > * ALTER TABLE DROP FOREIGN KEY\n> > > \t- Harder than I thought :) Working on it.\n> \n> This is a toughie this one!\n> \n> > > * Check that pgclass.relfkeys is being set correctly.\n> > > \t- Is pgclass.relfkeys being used at the moment?\n> \n> It looked to me that pgclass.relfkeys wasn't ever being set or updated. Is\n> this true/correct?\n> \n> > > * PG_DUMP DUMP CONSTRAINTS AS ALTER TABLE STATEMENTS\n> > > \t- Would be nice, once the alter statements above work.\n> > > * FIX 'RESTRICT' IN DROP CONSTRAINT DOCS\n> > > \t- It would be nice to have restrict/cascade as optional\n> > keywords at the\n> > > moment? At the moment, the grammar forces people to put the\n> > word 'restrict'\n> > > in, even though it does nothing.\n> \n> Don't bother about this - it's been documented.\n> \n> > > * REGRESSION TESTS\n> > > \t- For all of the above\n> \n> I've comment a regression test for ADD UNIQUE, but I don't think the DROP\n> CONSTRAINT stuff has a regression test yet.\n> \n> > > * WILDCARDS IN PG_DUMP\n> > > \t- It would be nice to be able to dump tables via wildcards,\n> > or once schemas\n> > > exist to dump an entire schema I guess.\n> \n> That was just one of my little wish lists. I have a database with about a\n> hundred tables in it and related sets of tables all share the same prefix.\n> For instance, I would like to be able to pg_dump all the diary tables in one\n> go.\n> \n> ie. 
pg_dump -t diary_\\* audb > dump.sql\n> \n> Don't know if there would be widespread enough demand for this feature\n> tho...\n> \n> > > * CHECK CREATING DUPLICATE NAMED FOREIGN KEYS\n> > > \t- I seem to be able to create duplicate named fk's, plus I think the\n> > > '<unnamed>' ones should be given auto name to make dropping constraint\n> > > easier...\n> \n> Pretty clear.\n> \n> > > * DOCUMENT PG_TRIGGER\n> > > \t- Doesn't seem to be in the system catalog documentation...\n> \n> Yeah, pg_trigger does not appear on this page:\n> \n> http://postgresql.planetmirror.com/devel-corner/docs/postgres/catalogs.html\n> \n> Thought it should be documented. I noticed this while I was doing the\n> improvements on the contrib/fulltextindex code.\n> \n> > > * MOVE ALTER CODE FROM heap.c/command.c INTO alter.c\n> > > \t- I get the feeling I'm filling up heap.c with lots of\n> > alter table crud\n> > > that is beginning to need its own file?\n> \n> Basically I was getting the impression that the command.c was getting big\n> and fat and that it might be nice to split all the ALTER* commands into an\n> alter.c or something.\n> \n> Tell me what I should do for 7.2...\n> \n> Regards,\n> \n> Chris\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Feb 2002 18:31:23 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 stuff" }, { "msg_contents": "\nReminder that code cleanup can be done in commands/*.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> > Basically I was getting the impression that the command.c was getting big\n> > and fat and that it might be nice to split all the ALTER* commands into an\n> > alter.c or something.\n> \n> Cool with me. 
We often fail to spend enough effort on code\n> beautification projects; that hurts the maintainability of the project\n> in the long run. Feel free to devise and implement a better division\n> of the ALTER code. (And as I think we already talked about, it'd also\n> be cool to try to merge the common infrastructure of the ALTER commands\n> somehow. I don't like umpteen copied-and-pasted versions of the same\n> code, either ...)\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Feb 2002 18:32:58 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 stuff" }, { "msg_contents": "Hi,\n\nTom - there's a comment for you further down...\n\n> > This problem is also what's stopped me submitting the ALTER TABLE / ADD\n> > PRIMARY stuff. Once the ADD UNIQUE bit is correct, ADD PRIMARY\n> is trivial.\n> >\n> > (See: http://fts.postgresql.org/db/mw/msg.html?mid=1035632) I suggest\n> > reading the complete thread. I have fixed some of the problems in my\n> > private cvs, but no patch has been sent in...\n> >\n> > Some of the issues perhaps I should send in a patch for ASAP??\n\nAll of the above is a non-issue - it was implemented by tom in the parser,\nand my code was removed. Howver, add primary key needs a regression test.\n\n> > > key already exists\n> > > > over the specified fields\n> > > > * PSQL - SHOW FOREIGN KEYS\n> > > > \t- Still working on a query. 
If I come up with a good one -\n> > > would a catalog\n> > > > view of them be useful?\n> >\n> > Is there a pg_get_* function for getting foreign key definitions yet?\n\nI still want to do the above - however Stephen Sazbo has ideas about\nchanging all the fk stuff...\n\n> > > > * -ALTER TABLE DROP CHECK\n> > > > \t- Already committed\n> >\n> > Yeah, committed.\n\nCommitted - it needs a regression test tho.\n\n> > > > * ALTER TABLE DROP PRIMARY KEY\n> > > > \t- Done, will need review\n> > > > * ALTER TABLE DROP UNIQUE\n> > > > \t- Done, will need review\n> >\n> > Wrote them, but they're uncommitted. Don't worry about them until 7.3.\n\nI'll dredge this up again if I can. All it does is add a standards\ncompliant alternative syntax for dropping those constraints. Tom - can you\njust do this in the parser, like you did it for the ADD constraints???\n\n> > > > * ALTER TABLE DROP FOREIGN KEY\n> > > > \t- Harder than I thought :) Working on it.\n> >\n> > This is a toughie this one!\n\nAnd also depends on future fk changes.\n\n> > > > * Check that pgclass.relfkeys is being set correctly.\n> > > > \t- Is pgclass.relfkeys being used at the moment?\n> >\n> > It looked to me that pgclass.relfkeys wasn't ever being set or\n> updated. Is\n> > this true/correct?\n\nCorrect.\n\n> > > > * WILDCARDS IN PG_DUMP\n> > > > \t- It would be nice to be able to dump tables via wildcards,\n> > > or once schemas\n> > > > exist to dump an entire schema I guess.\n> >\n> > That was just one of my little wish lists. I have a database\n> with about a\n> > hundred tables in it and related sets of tables all share the\n> same prefix.\n> > For instance, I would like to be able to pg_dump all the diary\n> tables in one\n> > go.\n> >\n> > ie. 
pg_dump -t diary_\\* audb > dump.sql\n> >\n> > Don't know if there would be widespread enough demand for this feature\n> > tho...\n\nThe above would be a nice feature...\n\n> > > > * CHECK CREATING DUPLICATE NAMED FOREIGN KEYS\n> > > > \t- I seem to be able to create duplicate named fk's,\n> plus I think the\n> > > > '<unnamed>' ones should be given auto name to make dropping\n> constraint\n> > > > easier...\n> >\nPretty clear.\n\nChris\n\n", "msg_date": "Mon, 25 Feb 2002 10:44:12 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: 7.2 stuff" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> * ALTER TABLE DROP PRIMARY KEY\n> - Done, will need review\n> * ALTER TABLE DROP UNIQUE\n> - Done, will need review\n\n> I'll dredge this up again if I can. All it does is add a standards\n> compliant alternative syntax for dropping those constraints. Tom - can you\n> just do this in the parser, like you did it for the ADD constraints???\n\nI don't foresee it falling out of other parser work, if that's what you\nmean. If you want it done in the parser you'll have to do it yourself.\n\nThere are some semantic issues, eg: what does it mean to do ALTER TABLE\nDROP PRIMARY KEY in an inheritance hierarchy? 
Does every child lose its\nprimary key (if any), even if it's not inherited from the parent?\nI could see doing the \"where's the primary key\" lookup either at\nexecution time (separately for each table) or at parse time (lookup once\nat the parent table) depending on which behavior you want.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Feb 2002 21:53:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 stuff " }, { "msg_contents": "On Mon, 2002-02-25 at 07:53, Tom Lane wrote:\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > * ALTER TABLE DROP PRIMARY KEY\n> > - Done, will need review\n> > * ALTER TABLE DROP UNIQUE\n> > - Done, will need review\n> \n> > I'll dredge this up again if I can. All it does is add a standards\n> > compliant alternative syntax for dropping those constraints. Tom - can you\n> > just do this in the parser, like you did it for the ADD constraints???\n> \n> I don't foresee it falling out of other parser work, if that's what you\n> mean. If you want it done in the parser you'll have to do it yourself.\n> \n> There are some semantic issues, eg: what does it mean to do ALTER TABLE\n> DROP PRIMARY KEY in an inheritance hierarchy? Does every child lose its\n> primary key (if any), even if it's not inherited from the parent?\n\nProbably not as the primary key is currently not inherited when doing\ncreate. \n\n------------\nHannu\n", "msg_date": "25 Feb 2002 09:39:51 +0500", "msg_from": "Hannu Krosing <hannu@krosing.net>", "msg_from_op": false, "msg_subject": "Re: 7.2 stuff" }, { "msg_contents": "On Mon, 2002-02-25 at 07:44, Christopher Kings-Lynne wrote:\n> \n> > > > key already exists\n> > > > > over the specified fields\n> > > > > * PSQL - SHOW FOREIGN KEYS\n> > > > > \t- Still working on a query. 
If I come up with a good one -\n> > > > would a catalog\n> > > > > view of them be useful?\n> > >\n> > > Is there a pg_get_* function for getting foreign key definitions yet?\n> \n> I still want to do the above - however Stephen Sazbo has ideas about\n> changing all the fk stuff...\n\nThat makes it almost mandatory - how else will we be able to dump/reload\nfk's for the new fk implementation?\n\n---------\nHannu\n\n", "msg_date": "25 Feb 2002 09:43:27 +0500", "msg_from": "Hannu Krosing <hannu@krosing.net>", "msg_from_op": false, "msg_subject": "Re: 7.2 stuff" }, { "msg_contents": "\nOn Sun, 24 Feb 2002, Tom Lane wrote:\n\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > * ALTER TABLE DROP PRIMARY KEY\n> > - Done, will need review\n> > * ALTER TABLE DROP UNIQUE\n> > - Done, will need review\n>\n> > I'll dredge this up again if I can. All it does is add a standards\n> > compliant alternative syntax for dropping those constraints. Tom - can you\n> > just do this in the parser, like you did it for the ADD constraints???\n>\n> I don't foresee it falling out of other parser work, if that's what you\n> mean. If you want it done in the parser you'll have to do it yourself.\n>\n> There are some semantic issues, eg: what does it mean to do ALTER TABLE\n> DROP PRIMARY KEY in an inheritance hierarchy? Does every child lose its\n> primary key (if any), even if it's not inherited from the parent?\n\nApart from the fact that currently pkeys don't inherit, does it make\nsense that the child can have a separate primary key since it should\nreally be inheriting from the parent and you can't have two, right?\n\n", "msg_date": "Sun, 24 Feb 2002 23:41:01 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: 7.2 stuff " } ]
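The wishlist item discussed above, `pg_dump -t diary_\* audb > dump.sql`, boils down to expanding a shell-style pattern against the list of table names and invoking pg_dump with one `-t` flag per match. A minimal Python sketch of that expansion follows; the hardcoded table list is a stand-in for a query against pg_class, and `expand_dump_args` is a hypothetical helper, not part of any pg_dump release of that era:

```python
import fnmatch

def expand_dump_args(pattern, tables, dbname):
    """Expand one shell-style table pattern into repeated pg_dump -t flags."""
    matches = sorted(t for t in tables if fnmatch.fnmatchcase(t, pattern))
    if not matches:
        raise ValueError("no tables match pattern %r" % pattern)
    argv = ["pg_dump"]
    for t in matches:
        argv += ["-t", t]              # one -t per matched table
    argv.append(dbname)
    return argv

# Stand-in for the result of querying pg_class for the user's table names:
tables = ["diary_entries", "diary_users", "invoices"]
print(expand_dump_args("diary_*", tables, "audb"))
```

(Later pg_dump releases eventually grew native pattern matching in `-t` itself, which makes this kind of wrapper unnecessary.)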
[ { "msg_contents": "I'd certainly be keen on helping out with a Java/Swing approach. It would be\nan excellent test of the meta-data stuff in the JDBC driver and to catch any\nomissions. I'm willing to provide a homepage etc if wanted (and for some\nreason the main postgres site isn't used), maybe Sourceforge could be a goer\ntoo?. A lot of the code will end up being useful too I suspect for other\nJava/Postgres projects.\n\nRegards,\nJoe\n\n-----Original Message-----\nFrom: Matthew T. O'Connor [mailto:matthew@zeut.net]\nSent: Thursday, 21 June 2001 2:22 PM\nTo: Thomas Swan; Michael Meskes\nCc: pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Re: Universal admin frontend\n\n\nAFIAA, there exists a port of Java for just about every OS that PostgreSQL\nsupports, not that it should be the only reason for choosing it. Not that\nmy vote counts, but I'd go for the java approach and be willing to code a\nlot on the interface, anyone else interested?\n\nAnyone thought about wxPython? Much faster than java, can be distributed as\na standalone executable on Windows. Supports Unix / Mac / Windows. Don't\nknow if it supports more or less PG relevant platforms than Java. I have\nbeen thinking about working on this type of tool myself.\n", "msg_date": "Thu, 21 Jun 2001 14:32:59 +1000", "msg_from": "Joe Shevland <J.Shevland@eclipsegroup.com.au>", "msg_from_op": true, "msg_subject": "RE: Re: Universal admin frontend" }, { "msg_contents": "On Thu, Jun 21, 2001 at 02:32:59PM +1000, Joe Shevland wrote:\n> I'd certainly be keen on helping out with a Java/Swing approach. It would be\n> an excellent test of the meta-data stuff in the JDBC driver and to catch any\n> omissions. I'm willing to provide a homepage etc if wanted (and for some\n> reason the main postgres site isn't used), maybe Sourceforge could be a goer\n> too?. 
A lot of the code will end up being useful too I suspect for other\n> Java/Postgres projects.\n\n\nI'd suggest hosting it at www.greatbridge.org, for that very reason:\nother Java/Postgres projects are more likely to find you there. And\nthat's where the existing admin type projects (phppgadmin, pgadmin)\nare hosted, so you all can steal ideas from one another. ;-)\n\nRoss\n\n", "msg_date": "Fri, 22 Jun 2001 10:21:42 -0500", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: Re: Universal admin frontend" } ]
[ { "msg_contents": ">Hello!\n\n>Why not go back to the roots of postgres?\n\n>PostgreSQL is written completely in C. The\ndevelopment community has\n>shown that it is\n>possible to write efficient code for different\n>platforms with pure C.\n\n>The administration task can be separated in 2\ndifferent tasks:\n>A server (in C) which is really doing the\nadministrative work.\n>A client programm written in what so ever (C + X11,\nJava, Perl, TCL/Tk, \n>....) which performs the user interface.\n\nI think you are totally right.\n\n>I know that this a not the easiest way to do the job\nbut the most\n>flexible (in my opinion).\n\nIn fact I believe that it is also the easiest.\n\nPedro\n\n_______________________________________________________________\nDo You Yahoo!?\nYahoo! Messenger: Comunicaci�n instant�nea gratis con tu gente -\nhttp://messenger.yahoo.es\n", "msg_date": "Thu, 21 Jun 2001 10:17:30 +0200 (CEST)", "msg_from": "=?iso-8859-1?q?Pedro=20Abelleira=20Seco?= <pedroabelleira@yahoo.es>", "msg_from_op": true, "msg_subject": "RE: Universal admin frontend" }, { "msg_contents": "> >PostgreSQL is written completely in C. The development community\n> >has shown that it is\n[snip]\n> >The administration task can be separated in 2 different tasks:\n[snip]\n\nIsn't this essentially the split between postmaster/workers and a client?\ni.e. I don't know how much value is added by introducing another\ncommunication protocol when JDBC would work fine. From my understanding of\nthe users API you can handle pretty much everything other than pg_dump. 
Eg\nCREATE USER, CREATE DATABASE etc can all be issued from a client using\nstandard PostgreSQL SQL.\n\nI'd have to cast my vote on the Java frontend.\n\nCheers,\n\nMark Pritchard\n\n", "msg_date": "Thu, 21 Jun 2001 19:23:09 +1000", "msg_from": "\"Mark Pritchard\" <mark@tangent.net.au>", "msg_from_op": false, "msg_subject": "RE: Universal admin frontend" }, { "msg_contents": "On 21 Jun 2001 19:23:09 +1000, Mark Pritchard wrote:\n> > >PostgreSQL is written completely in C. The development community\n> > >has shown that it is\n> [snip]\n> > >The administration task can be separated in 2 different tasks:\n> [snip]\n> \n> Isn't this essentially the split between postmaster/workers and a client?\n> i.e. I don't know how much value is added by introducing another\n> communication protocol when JDBC would work fine. From my understanding of\n> the users API you can handle pretty much everything other than pg_dump. Eg\n> CREATE USER, CREATE DATABASE etc can all be issued from a client using\n> standard PostgreSQL SQL.\n> \n> I'd have to cast my vote on the Java frontend.\n> \n> Cheers,\n> \n> Mark Pritchard\n> \n\n\n\nFor an admin tool you might want to display OS info , server load,\ndatabase file sizes, logfile viewing etc. \nI have been working on such a tool for my own use , ( GTK+ based front\nend) and decided that a client / server model would be the most useful\napproach. 
I was probably going to write a separate daemon rather than\nintegrate new stuff into the backend.\n\n\n-- \nColin M Strickland perl -e'print \"\\n\",map{chr(ord()-3)}(reverse split \n //,\"\\015%vhlwlqxpprF#ir#uhzrS#hkw#jqlvvhqudK%#\\015\\015nx\".\n\"1rf1wilv1zzz22=swwk###369<#84<#:44#77.={di##339<#84<#:44#77.=ohw\\015]\".\n\"K9#4VE#/ORWVLUE#/whhuwV#dlurwflY#334#/wilV\\015uhsrohyhG#ehZ#urlqhV\");'\n", "msg_date": "21 Jun 2001 10:55:59 +0100", "msg_from": "Colin Strickland <cms@sift.co.uk>", "msg_from_op": false, "msg_subject": "RE: Universal admin frontend" }, { "msg_contents": "Colin Strickland wrote:\n> \n> On 21 Jun 2001 19:23:09 +1000, Mark Pritchard wrote:\n> > > >PostgreSQL is written completely in C. The development community\n> > > >has shown that it is\n> > [snip]\n> > > >The administration task can be separated in 2 different tasks:\n> > [snip]\n> >\n> > Isn't this essentially the split between postmaster/workers and a client?\n> > i.e. I don't know how much value is added by introducing another\n> > communication protocol when JDBC would work fine. From my understanding of\n> > the users API you can handle pretty much everything other than pg_dump. Eg\n> > CREATE USER, CREATE DATABASE etc can all be issued from a client using\n> > standard PostgreSQL SQL.\n> >\n> > I'd have to cast my vote on the Java frontend.\n> >\n> > Cheers,\n> >\n> > Mark Pritchard\n> >\n> \n> For an admin tool you might want to display OS info , server load,\n> database file sizes, logfile viewing etc.\n> I have been working on such a tool for my own use , ( GTK+ based front\n> end) and decided that a client / server model would be the most useful\n> approach. 
I was probably going to write a separate daemon rather than\n> integrate new stuff into the backend.\n\nActually we could start it by offering the standard SQL92 views for\nsystem \ntables and PG-standard PL/PGSQL functions for things that can't currently\nbe \ndone in SQL standard way, like removing primary key or dropping a\ncolumn.\n\nHaving _one_ _documented_ way for even the things that psql's \\d\ncommands can \nshow would make writing other system tools much easier.\n\nAnd yes, I think a separate daemon approach will be needed for many\nthings \nanyway (like changing pg_hba.conf or firewall rules, showing system\nlogs, ...).\n\nseems that something using some standard protocol (I would favour\nXML-RPC (for \nsimplicity) over https (for security)) to expose actions would be a good \ncandidate for the daemon thingie.\n\n--------------\nHannu\n", "msg_date": "Thu, 21 Jun 2001 19:48:32 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Universal admin frontend" } ]
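The XML-RPC admin daemon Hannu floats in the thread above can be sketched with nothing but (much later) Python's standard library, which at least shows the shape of the idea. Both actions and their return values here are made-up stubs: a real daemon would wrap pg_ctl, edit pg_hba.conf, tail the actual server log, and authenticate callers.

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Stub "admin actions"; every value below is invented for the demo.
def server_status():
    return {"running": True, "datadir": "/var/lib/pgsql"}

def tail_log(n):
    fake_log = ["postmaster started", "connection received"]
    return fake_log[-n:]

srv = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)  # port 0: pick a free port
srv.register_function(server_status)
srv.register_function(tail_log)
threading.Thread(target=srv.serve_forever, daemon=True).start()

# Any front end (GTK+, Swing, web, ...) would make the same calls over the wire:
proxy = ServerProxy("http://127.0.0.1:%d/" % srv.server_address[1])
status = proxy.server_status()
last = proxy.tail_log(1)
print(status, last)
srv.shutdown()
```

The point of the split is exactly the one made in the thread: the protocol, not the GUI toolkit, is the contract, so a C, Java, or Tk client can all drive the same daemon.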
[ { "msg_contents": "On Thu, Jun 21, 2001 at 01:39:38PM +0200, RISKO Gergely wrote:\n> Hello!\n> \n> I saaw your patch for 7.0.2, but it is hard to port to 7.1.2 for me,\n> because I haven't got any knowlendge in postgresql programming.\n> Can you give me a nocreatetable patch for postgres 7.1.2?\n\n I'd like, but I unsure with my time -- may be later (3 weeks?).\n\n> Will be the new permission system in 7.2?\n\n Probably not :-(\n\nPS. ...may be someone in hackers list port it to 7.1 (see CC)\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Thu, 21 Jun 2001 14:00:03 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": true, "msg_subject": "Re: nocreatetable for 7.1.2" }, { "msg_contents": "Hello!\n\nI continued to port your patch, and now it works,only one error,\nif I create ANY table in ANY database:\nNOTICE: Cache reference leak: cache pg_shadow (24), tuple 17201 has \ncount 1\n\nBut everything work nice (the nocreatetable thing also :))) ).\nOh, and the help isn't fixed to show nocreatetable and nolocktable for\n\\h create user.\n\nSo I attach my patch.\n\nGergely", "msg_date": "Fri, 22 Jun 2001 13:13:50 +0200", "msg_from": "RISKO Gergely <risko@atom.hu>", "msg_from_op": false, "msg_subject": "Re: nocreatetable for 7.1.2 [patch]" }, { "msg_contents": "On Fri, Jun 22, 2001 at 01:13:50PM +0200, RISKO Gergely wrote:\n\n Hi,\n\n> I continued to port your patch, and now it works,only one error,\n\n You are clever, I knew that is better wait sometime :-)\n\n> So I attach my patch.\n\n I try see it next week. Thanks.\n\n\t\t\tKarel\n\nPS. This solution isn't acceptable for standard and official sources. 
\n Probably is better no send this patch to pgsql-patches@postgresql.org, \n somebody could be foggy from it.\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Fri, 22 Jun 2001 14:01:27 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": true, "msg_subject": "Re: nocreatetable for 7.1.2 [patch]" }, { "msg_contents": "\nWere are we on this? We keep talking about NOCREATETABLE permission for\nevery release but can't seem to get it in there because people want a\nredesign of permissions. My feeling is that it should be added to 7.2.\n\n> Hello!\n> \n> I continued to port your patch, and now it works,only one error,\n> if I create ANY table in ANY database:\n> NOTICE: Cache reference leak: cache pg_shadow (24), tuple 17201 has \n> count 1\n> \n> But everything work nice (the nocreatetable thing also :))) ).\n> Oh, and the help isn't fixed to show nocreatetable and nolocktable for\n> \\h create user.\n> \n> So I attach my patch.\n> \n> Gergely\n> \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 24 Aug 2001 16:05:57 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: nocreatetable for 7.1.2 [patch]" }, { "msg_contents": "On Fri, Aug 24, 2001 at 04:05:57PM -0400, Bruce Momjian wrote:\n> \n> Were are we on this? We keep talking about NOCREATETABLE permission for\n> every release but can't seem to get it in there because people want a\n> redesign of permissions. 
My feeling is that it should be added to 7.2.\n\n I know very well it:-) It was 1day in official CVS, but we remove it \nafter discussion with Peter and me. The result was that we need new \nprivilege system that handle it. If good remember Peter has some \n(IMHO good) proposal about it (and Jan has an own proposal \nabout it too).\n \n\t\t\tKarel\n\nPS. L. Ellison has still better GRANT/REVOKE stuff than PG:-(\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Mon, 27 Aug 2001 09:54:50 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Re: Re: nocreatetable for 7.1.2 [patch]" }, { "msg_contents": "\nCan I assume we will have schemas in 7.3 and will not need this patch?\n\n\n---------------------------------------------------------------------------\n\nKarel Zak wrote:\n> On Fri, Jun 22, 2001 at 01:13:50PM +0200, RISKO Gergely wrote:\n> \n> Hi,\n> \n> > I continued to port your patch, and now it works,only one error,\n> \n> You are clever, I knew that is better wait sometime :-)\n> \n> > So I attach my patch.\n> \n> I try see it next week. Thanks.\n> \n> \t\t\tKarel\n> \n> PS. This solution isn't acceptable for standard and official sources. \n> Probably is better no send this patch to pgsql-patches@postgresql.org, \n> somebody could be foggy from it.\n> \n> -- \n> Karel Zak <zakkr@zf.jcu.cz>\n> http://home.zf.jcu.cz/~zakkr/\n> \n> C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Feb 2002 08:08:11 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Re: nocreatetable for 7.1.2 [patch]" } ]
[ { "msg_contents": ">For an admin tool you might want to display OS info ,\n>server load,\n>database file sizes, logfile viewing etc. \n>I have been working on such a tool for my own use , (\n>GTK+ based front\n>end) and decided that a client / server model would\nbe >the most useful\n>approach. I was probably going to write a separate\n>daemon rather than\n>integrate new stuff into the backend.\n\nI agree. In fact one important thing is to be able to\nedit the configuration of Postgres (of the different\nservers) from the app. Administration should include\nconfiguration. And backups, too (with a register of\nthem).\nI also think that the separate daemon would be\nnecessary (you should be able to start/stop/restart\nservers).\nIn a first approximation one could write a totally\nindependent daemon able to manage the tasks not\naccessible by the standard interface. I mean, no table,\nuser, ... management; this is solved. But it's clear\nthat it would be easier (and more reliable,\nmaintainable, elegant) not to have to duplicate things\nlike the config file parsing. One could pick code from\nPostgres itself, but a copy/paste strategy is bad in\nthe long run (not so long, in fact).\nIf one could access some things of the Postgres\nengine, it would be easy to have the daemon. But you\nshould compile such code in the postgres build\nprocess, or have an API to ask Postgres this kind of\nthings. Both approaches should have approval/support by\nthe developers.\nOf course at the start one can develop this as a\npatch/addon to the Postgres code, but in the long run\nit should be in the core distribution (with an option,\nsure) for use by all the admin tools.\nThe only hard part is the communication protocol. It\nhas to be secure. Secure by design, so simple. And\nusable for many different languages. 
But I think that,\nfor a start, one should abstract the communication\nprotocol behind an interface, to help us concentrate on\nthe general problem, and plug in a communication\nlib afterwards.\nBecause one cannot simply start coding something that has\nno use case, my idea is to develop a GUI tool,\nconcentrating on the aspects _not_ covered by other\ntools (pgaccess, phpPgAdmin, pgAdmin, ...) and with\nrough support for the usual aspects (user and table\nmanagement, SQL queries, ...) to have a good prototype\nin which to experiment with what is really needed in the\nbackend. Once this is done you can mature the tool\nindependently. The other configuration tools can\nextend to use the new capabilities if they want.\n\nPedro\n\n_______________________________________________________________\nDo You Yahoo!?\nYahoo! Messenger: Free instant messaging with your friends -\nhttp://messenger.yahoo.es\n", "msg_date": "Thu, 21 Jun 2001 14:03:11 +0200 (CEST)", "msg_from": "=?iso-8859-1?q?Pedro=20Abelleira=20Seco?= <pedroabelleira@yahoo.es>", "msg_from_op": true, "msg_subject": "RE: Universal admin frontend" } ]
[ { "msg_contents": "Mikheev, Vadim wrote:\n> > > > update a set a=a+1 where a>2;\n> > > > ERROR: Cannot insert a duplicate key into unique index a_pkey\n> > >\n> > > We use uniq index for UK/PK but shouldn't. Jan?\n> >\n> > What else can you use than an index? A \"deferred until\n> > statement end\" trigger checking for duplicates? Think it'd\n> > have a real bad performance impact.\n>\n> AFAIR, standard requires \"deffered\" (until statement/transaction(?)\n> end) as default behaviour for RI (all?) constraints. But no matter\n> what is default, \"deffered\" *must* be available => uniq indices\n> must not be used.\n\n Right.\n\n>\n> > Whatever the execution order might be, the update of '3' to\n> > '4' will see the other '4' as existent WRT the scan commandId\n> > and given snapshot - right? If we at the time we now fire up\n> > the ERROR add the key, the index and heap to a list of\n> > \"possible dupkeys\", that we'll check at the end of the actual\n> > command, the above would work. The check at statement end\n> > would have to increment the commandcounter and for each entry\n> > do an index scan with the key, counting the number of found,\n> > valid heap tuples.\n>\n> Incrementing comand counter is not enough - dirty reads are required\n> to handle concurrent PK updates.\n\n What's that with you and dirty reads? Every so often you tell\n me that something would require them - you really like to\n read dirty things - no? :-)\n\n So let me get it straight: I execute the entire UPDATE SET\n A=A+1, then increment the command counter and don't see my\n own results? So an index scan with heap tuple check will\n return OLD (+NEW?) rows? Last time I fiddled around with\n Postgres it didn't, but I could be wrong.\n\n>\n> > Well, with some million rows doing a \"set a = a + 1\" could\n> > run out of memory. So this would be something that'd work in\n> > the sandbox and for non-broken applications (tm). 
Maybe at\n>\n> How is this different from (deffered) updates of million FK we allow\n> right now? Let's user decide what behaviour (deffered/immediate) he\n> need. The point is that now user has no ability to choose what's\n> right for him.\n\n It isn't and I could live with that. I just wanted to point\n out before we implement it and get complaints.\n\n>\n> > some level (when we escalate the lock to a full table lock?)\n> > we simply forget about single keys, but have a new index\n> > access function that checks the entire index for uniqueness.\n>\n> I wouldn't bother to implement this. User always has ability to excl.\n> lock table, drop constraints, update whatever he want and recreate\n> constraints again.\n\n It'd be easy to implement for btree. Just do an entire index\n scan - returns every index entry in sort order. Check if the\n heap tuple is alive and if the key is equal to the previous\n one found alive, abort with a dupkey error. Well, not really\n super performant, but we where about to run out of memory, so\n it's not a performance question any more, it's a question of\n survival.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Thu, 21 Jun 2001 10:12:08 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": true, "msg_subject": "Re: RE: [BUGS] Update is not atomic" } ]
[ { "msg_contents": "Hi all,\n\nSorry for crossposting. As probleme may occur from both sides, I thouth it\nwould be quicker.\n\nWhat I have:\nopenssl 0.9.6a\nopenssh 2.9p1\npostgresql 7.1.2\nunixware 711\n\nAfter install of openssl and openssh , I can successfully ssh and sft,\nslogin etc between the two unixware machines\n\nAfter compiling postgresql with openssl enabled and accepting ssl\nconnections , psql failes connect to the backend complaining PRNG is not\nseeded.\n\nAccording to docs and sources, this is because unixware lacks a\n/dev/urandom device.\n\nI tried prngd but it did'nt help.\n\nIs there any way eithr in openssl or postgresql to have a kludge that\nseeds the PRNG??\n\nMany thanks for your time\n\nRegards,\n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Thu, 21 Jun 2001 16:55:13 +0200 (MET DST)", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": true, "msg_subject": "openssl+postgresql+unixware" } ]
[ { "msg_contents": "Here is a session I had on a system.\n\nxdb=# select * from chart where glaccount = '1100-0000';\n glaccount | gldesc | gllevel_id \n-----------+--------+------------\n(0 rows)\n(Note: There is a matching row that this failed to find.)\n\nxdb=# select * from chart where glaccount <= '1100-0000';\n glaccount | gldesc | gllevel_id \n-----------+--------+------------\n 1000-0000 | ASSETS | H\n 1100-0000 | Bank | A\n(2 rows)\n(Note: See, there it is.)\n\nxdb=# insert into chart values ('1100-0000', 'TEST', 'A');\nINSERT 149240 1\n(This should have failed because glaccount is the primary key.)\n\nxdb=# select * from chart where glaccount = '1100-0000';\n glaccount | gldesc | gllevel_id \n-----------+--------+------------\n 1100-0000 | TEST | A\n(1 row)\n\nxdb=# drop index chart_pkey;\nDROP\nxdb=# select * from chart where glaccount = '1100-0000';\n glaccount | gldesc | gllevel_id \n-----------+--------+------------\n 1100-0000 | Bank | A\n 1100-0000 | TEST | A\n(2 rows)\n(And magically the missing one appears.)\n\nxdb=# create unique index chart_pkey on chart (glaccount);\nERROR: Cannot create unique index. Table contains non-unique values\n(As expected.)\n\nxdb=# delete from chart where gldesc = 'TEST';\nDELETE 1\nxdb=# create unique index chart_pkey on chart (glaccount);\nCREATE\nxdb=# select * from chart where glaccount = '1100-0000';\n glaccount | gldesc | gllevel_id \n-----------+--------+------------\n(0 rows)\n(And there is is, gone.)\n\nI followed the instructions on interfacing user defined types as per\nhttp://www.ca.postgresql.org/devel-corner/docs/programmer/xindex.html.\nIn fact I helped write that page so I am pretty sure I got it right.\nThis code worked fine before. The only change I did was in the C code\nto use PG_FUNCTION_INFO_V1() style functions. I put in a lot of debug\nstatements and I am positive that the code is doing the right thing.\nI made no changes to the SQL which does what is described on that web\npage. 
Is it possible that that page is now outdated and needs to be\nrewritten for PG_FUNCTION_INFO_V1() style interfaces?\n\nOddly enough this seems to be working on another system with the same\nversion but it is in production and I can't play with it as much.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 21 Jun 2001 13:02:27 -0400 (EDT)", "msg_from": "darcy@druid.net (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "COPY vs. INSERT" }, { "msg_contents": "darcy@druid.net (D'Arcy J.M. Cain) writes:\n> I followed the instructions on interfacing user defined types as per\n> http://www.ca.postgresql.org/devel-corner/docs/programmer/xindex.html.\n> In fact I helped write that page so I am pretty sure I got it right.\n> This code worked fine before. The only change I did was in the C code\n> to use PG_FUNCTION_INFO_V1() style functions. I put in a lot of debug\n> statements and I am positive that the code is doing the right thing.\n\nObviously it isn't. Care to show us the code?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Jun 2001 13:26:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: COPY vs. INSERT " }, { "msg_contents": "Thus spake Tom Lane\n> darcy@druid.net (D'Arcy J.M. Cain) writes:\n> > I followed the instructions on interfacing user defined types as per\n> > http://www.ca.postgresql.org/devel-corner/docs/programmer/xindex.html.\n> > In fact I helped write that page so I am pretty sure I got it right.\n> > This code worked fine before. The only change I did was in the C code\n> > to use PG_FUNCTION_INFO_V1() style functions. I put in a lot of debug\n> > statements and I am positive that the code is doing the right thing.\n> \n> Obviously it isn't. Care to show us the code?\n\nSure. 
ftp://ftp.vex.net/pub/glaccount.\n\nBy \"right thing\" I mean that when it gets a comparison it returns -1, 0 or\n+1 depending on the comparison. The problem appears to be that the functions\njust don't get called. That's why I suspect the SQL that sets up the\nindexing instead.\n\nAnd then there is the other 7.1.2 system that it works on.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 21 Jun 2001 15:07:14 -0400 (EDT)", "msg_from": "darcy@druid.net (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "Re: COPY vs. INSERT" }, { "msg_contents": "darcy@druid.net (D'Arcy J.M. Cain) writes:\n>> Obviously it isn't. Care to show us the code?\n\n> Sure. ftp://ftp.vex.net/pub/glaccount.\n\nPG_FUNCTION_INFO_V1(glaccount_cmp);\nDatum\nglaccount_cmp(PG_FUNCTION_ARGS)\n{\n glaccount *a1 = (glaccount *) PG_GETARG_POINTER(0);\n glaccount *a2 = (glaccount *) PG_GETARG_POINTER(1);\n\n PG_RETURN_BOOL(do_cmp(a1, a2));\n}\n\n\nThe btree comparison function needs to return 1/0/-1, not boolean.\nTry PG_RETURN_INT32().\n\n\nPG_FUNCTION_INFO_V1(glaccount_eq);\nDatum\nglaccount_eq(PG_FUNCTION_ARGS)\n{\n glaccount *a1 = (glaccount *) PG_GETARG_POINTER(0);\n glaccount *a2 = (glaccount *) PG_GETARG_POINTER(1);\n\n PG_RETURN_BOOL (!do_cmp(a1, a2));\n}\n\nPG_FUNCTION_INFO_V1(glaccount_ne);\nDatum\nglaccount_ne(PG_FUNCTION_ARGS)\n{\n glaccount *a1 = (glaccount *) PG_GETARG_POINTER(0);\n glaccount *a2 = (glaccount *) PG_GETARG_POINTER(1);\n\n PG_RETURN_BOOL (!!do_cmp(a1, a2));\n}\n\n\nWhile these two are not actually wrong, that sort of coding always\nmakes me itch. Seems like\n\n\tPG_RETURN_BOOL (do_cmp(a1, a2) == 0);\n\n\tPG_RETURN_BOOL (do_cmp(a1, a2) != 0);\n\nrespectively would be cleaner, more readable, and more like the other\ncomparison functions. 
I've always thought that C's lack of distinction\nbetween booleans and integers was a bad design decision; indeed, your\ncmp bug kinda proves the point, no?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Jun 2001 16:28:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: COPY vs. INSERT " }, { "msg_contents": "Thus spake Tom Lane\n> darcy@druid.net (D'Arcy J.M. Cain) writes:\n> >> Obviously it isn't. Care to show us the code?\n> \n> > Sure. ftp://ftp.vex.net/pub/glaccount.\n> \n> PG_FUNCTION_INFO_V1(glaccount_cmp);\n> Datum\n> glaccount_cmp(PG_FUNCTION_ARGS)\n> {\n> glaccount *a1 = (glaccount *) PG_GETARG_POINTER(0);\n> glaccount *a2 = (glaccount *) PG_GETARG_POINTER(1);\n> \n> PG_RETURN_BOOL(do_cmp(a1, a2));\n> }\n> \n> \n> The btree comparison function needs to return 1/0/-1, not boolean.\n> Try PG_RETURN_INT32().\n\nDoh! I converted all the ints to booleans and got carried away. Now\nI just have to figure out why this worked on another system.\n\n> PG_RETURN_BOOL (!!do_cmp(a1, a2));\n> \n> While these two are not actually wrong, that sort of coding always\n> makes me itch. Seems like\n> \n> \tPG_RETURN_BOOL (do_cmp(a1, a2) == 0);\n> \n> \tPG_RETURN_BOOL (do_cmp(a1, a2) != 0);\n> \n> respectively would be cleaner, more readable, and more like the other\n> comparison functions. I've always thought that C's lack of distinction\n> between booleans and integers was a bad design decision; indeed, your\n> cmp bug kinda proves the point, no?\n\nI agree with you about the lack of a true boolean type and it certainly\nwas the root of my error but I don't think that the other follows. I\ndon't think that using the paradigms is wrong. Kernighan gives a nice\ntalk on that subject. He argues that you don't have to write for people\nthat don't know the language. Rather you should strive for clarity but\nuse what programmers (in the same language) are used to. 
For example;\n\n for (i = 1; i <= count; i++)\n\nis correct but the equivalent (assuming i is a counter and is not used\nin the loop itself)\n\n for (i = 0; i < count; i++)\n\nis the right way to code not because it is more correct but because it\nis what a future maintainer who is comfortable with C would understand\nfaster.\n\nIn this case there might be an argument for one or the other as I have\nseen both styles used about equally.\n\nOK, I guess I can work on the docs for the indexing too as there are some\ndifferences with the new methods.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 21 Jun 2001 22:14:44 -0400 (EDT)", "msg_from": "darcy@druid.net (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "Re: COPY vs. INSERT" } ]
[ { "msg_contents": "> > Incrementing comand counter is not enough - dirty reads are required\n> > to handle concurrent PK updates.\n> \n> What's that with you and dirty reads? Every so often you tell\n> me that something would require them - you really like to\n> read dirty things - no? :-)\n\nDirty things occure - I like to handle them -:)\nAll MVCC stuff is just ability to handle dirties, unlike old,\nlocking, behaviour when transaction closed doors to table while\ndoing its dirty things. \"Welcome to open world but be ready to\nhandle dirty things\" -:)\n\n> So let me get it straight: I execute the entire UPDATE SET\n> A=A+1, then increment the command counter and don't see my\n> own results? So an index scan with heap tuple check will\n> return OLD (+NEW?) rows? Last time I fiddled around with\n> Postgres it didn't, but I could be wrong.\n\nHow are you going to see concurrent PK updates without dirty reads?\nIf two transactions inserted same PK and perform duplicate check at\nthe same time - how will they see duplicates if no one committed yet?\nLook - there is very good example of using dirty reads in current\nsystem: uniq indices, from where we started this thread. So, how uniq\nbtree handles concurrent (and own!) duplicates? Btree calls heap_fetch\nwith SnapshotDirty to see valid and *going to be valid* tuples with\nduplicate key. If VALID --> ABORT, if UNCOMMITTED (going to be valid)\n--> wait for concurrent transaction commit/abort (note that for\nobvious reasons heap_fetch(SnapshotDirty) doesn't return OLD rows\nmodified by current transaction). I had to add all this SnapshotDirty\nstuff right to get uniq btree working with MVCC. All what I propose now\nis to add ability to perform dirty scans to SPI (and so to PL/*), to be\nable make right decisions in SPI functions and triggers, and make those\ndecisions *at right time*, unlike uniq btree which makes decision\ntoo soon. 
Is it clear now how to use dirty reads for PK *and* FK?\n\nYou proposed using share *row* locks for FK before. I objected then and\nobject now. It will not work for PK because of PK rows \"do not exist\"\nfor concurrent transactions. What would work here is *key* locks (locks\nplaced for some key in a table, no matter does row with this key exist\nor not). This is what good locking systems, like Informix, use. But\nPG is not locking system, no reasons to add key lock overhead, because\nof PG internals are able to handle dirties and we need just add same\nabilities to externals.\n\nVadim\n \n", "msg_date": "Thu, 21 Jun 2001 10:50:31 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: RE: [BUGS] Update is not atomic" }, { "msg_contents": "Mikheev, Vadim wrote:\n>\n> Dirty things occure - I like to handle them -:)\n>\n\n Got it now - you're right. Thanks for your patience.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Thu, 21 Jun 2001 14:13:53 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: RE: [BUGS] Update is not atomic" }, { "msg_contents": "Jan Wieck wrote:\n> \n> Mikheev, Vadim wrote:\n> >\n> > Dirty things occure - I like to handle them -:)\n> >\n> \n> Got it now - you're right. 
Thanks for your patience.\n\nBut can we count on you (or Vadim) to actually fix this anytime soon ?\n\n-------------\nHannu\n", "msg_date": "Mon, 25 Jun 2001 14:52:02 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: RE: [BUGS] Update is not atomic" } ]
[ { "msg_contents": "HELP!\n\nI am converting an app from Oracle to Postgresql and running into a\nsignificant difference in the behavior of a connection after an SQLException\nhas been asserted. I am looking for the \"correct\" way to deal with the\nissue.\n\n From a number of experiments, it appears that the only way I can re-use a\nconnection after it has asserted an SQLException is to issue a rollback()\ncall on the connection.\n\nI am doing transactional work, with multiple statements and then a commit().\nI am also doing my own connection pooling, so it is important that I be able\nto reliably re-use connections.\n\nMy questions:\n What is the best way (in Postgressql, or even better, in a portable\nmanner) to deal with SQLExceptions in a pooled connection environment?\n\n If I pull a connection out of my pool, is there any way I can tell if it\nwill work? Should I always do a rollback on it just in case? Will that have\na performance impact?\n\nIn the case of Postgresql, I cannot find a way to tell if the connection is\nin the state of having had an SQL Exception exerted and no rollback called,\nother than keeping track of it myself! Is there any way to determine that\nconnection state other than by doing a test query?\n\nA non-working trace (that I think should work but doesn't) is below. 
Note\nthat a \"Done\" means the SQL operation did NOT produce an SQLException\n------------------------------------ cut\nhere --------------------------------------\n\n...Drop Table Testtable\nSQL Error (Allowed):java.sql.SQLException: ERROR: table \"testtable\" does\nnot exist\n\n......commit()\n...Select from TestTable after drop\nSQL Error (Allowed):No results were returned by the query.\nResult set:null\n\n...Create Table Testtable\n......Done\n...Insert into Testtable\n......Done\n...Insert into Testtable\n......Done\n......commit()\n...Insert into Testtable\nSQL Error (Allowed):java.sql.SQLException: ERROR: Relation 'testtable' does\nnot\n exist\n\n......commit()\n...Select from Testtable\nSQL Error (Allowed):No results were returned by the query.\nResult set:null\n\n......commit()\n\n\n\nA working trace (added rollbacks) is here:\n------------------------------------ cut\nhere --------------------------------------\n...Drop Table Testtable\n......Done\n......commit()\n...Select from TestTable after drop\nSQL Error (Allowed):java.sql.SQLException: ERROR: Relation 'testtable' does\nnot\n exist\n\n......Rollback\nResult set:null\n\n...Create Table Testtable\n......Done\n...Insert into Testtable\n......Done\n...Insert into Testtable\n......Done\n......commit()\n...Insert into BOGUSTABLE\nSQL Error (Allowed):java.sql.SQLException: ERROR: Relation 'bogustable'\ndoes no\nt exist\n\n......Rollback\n......commit()\n...Insert into Testtable\n......Done\n......commit()\n...Select from Testtable\n......done\nResult set:org.postgresql.jdbc2.ResultSet@653108\n\n......commit()\n\nThanks in advance\n\nJohn Moore\nNOSPAMjohn@NOSPAMtinyvital.com\n\n\n", "msg_date": "Thu, 21 Jun 2001 18:32:05 GMT", "msg_from": "\"John Moore\" <NOSPAMnews@NOSPAMtinyvital.com>", "msg_from_op": true, "msg_subject": "JDBC Connection State Management with SQL Exceptions (esp Postgresql)" }, { "msg_contents": "\n\nJohn Moore wrote:\n> \n> HELP!\n> \n> I am converting an app from Oracle to Postgresql 
and running into a\n> significant difference in the behavior of a connection after an SQLException\n> has been asserted. I am looking for the \"correct\" way to deal with the\n> issue.\n> \n> From a number of experiments, it appears that the only way I can re-use a\n> connection after it has asserted an SQLException is to issue a rollback()\n> call on the connection.\n> \n> I am doing transactional work, with multiple statements and then a commit().\n> I am also doing my own connection pooling, so it is important that I be able\n> to reliably re-use connections.\n\nHi. There is a lot of state that can be left with a connection, and a good\npooling system should do a bunch of cleanup on the connection when it is\nreturned to the pool, so it will be ready for the next user. This would include\nclosing all statements and result sets that the previous user may have created\nbut not closed. This is crucial because you don't want retained references\nto these objects to allow a 'previous user' to affect anything the next user\ndoes. You should clear theconnection warnings that accrue. You should\nroll back any hanging transactional context, by doing a rollback if\nautoCommit() is false, and you should then reset the connection to autoCommit(true),\nwhich is the standard condition for a new JDBC connection.\nJoe\n\n> \n> My questions:\n> What is the best way (in Postgressql, or even better, in a portable\n> manner) to deal with SQLExceptions in a pooled connection environment?\n> \n> If I pull a connection out of my pool, is there any way I can tell if it\n> will work? Should I always do a rollback on it just in case? Will that have\n> a performance impact?\n> \n> In the case of Postgresql, I cannot find a way to tell if the connection is\n> in the state of having had an SQL Exception exerted and no rollback called,\n> other than keeping track of it myself! 
Is there any way to determine that\n> connection state other than by doing a test query?\n> \n> A non-working trace (that I think should work but doesn't) is below. Note\n> that a \"Done\" means the SQL operation did NOT produce an SQLException\n> ------------------------------------ cut\n> here --------------------------------------\n> \n> ...Drop Table Testtable\n> SQL Error (Allowed):java.sql.SQLException: ERROR: table \"testtable\" does\n> not exist\n> \n> ......commit()\n> ...Select from TestTable after drop\n> SQL Error (Allowed):No results were returned by the query.\n> Result set:null\n> \n> ...Create Table Testtable\n> ......Done\n> ...Insert into Testtable\n> ......Done\n> ...Insert into Testtable\n> ......Done\n> ......commit()\n> ...Insert into Testtable\n> SQL Error (Allowed):java.sql.SQLException: ERROR: Relation 'testtable' does\n> not\n> exist\n> \n> ......commit()\n> ...Select from Testtable\n> SQL Error (Allowed):No results were returned by the query.\n> Result set:null\n> \n> ......commit()\n> \n> A working trace (added rollbacks) is here:\n> ------------------------------------ cut\n> here --------------------------------------\n> ...Drop Table Testtable\n> ......Done\n> ......commit()\n> ...Select from TestTable after drop\n> SQL Error (Allowed):java.sql.SQLException: ERROR: Relation 'testtable' does\n> not\n> exist\n> \n> ......Rollback\n> Result set:null\n> \n> ...Create Table Testtable\n> ......Done\n> ...Insert into Testtable\n> ......Done\n> ...Insert into Testtable\n> ......Done\n> ......commit()\n> ...Insert into BOGUSTABLE\n> SQL Error (Allowed):java.sql.SQLException: ERROR: Relation 'bogustable'\n> does no\n> t exist\n> \n> ......Rollback\n> ......commit()\n> ...Insert into Testtable\n> ......Done\n> ......commit()\n> ...Select from Testtable\n> ......done\n> Result set:org.postgresql.jdbc2.ResultSet@653108\n> \n> ......commit()\n> \n> Thanks in advance\n> \n> John Moore\n> NOSPAMjohn@NOSPAMtinyvital.com\n\n-- \n\nPS: Folks: BEA 
WebLogic is expanding rapidly, with both entry and advanced positions\nfor people who want to work with Java, XML, SOAP and E-Commerce infrastructure products.\nWe have jobs at Nashua NH, Liberty Corner NJ, San Francisco and San Jose CA.\nSend resumes to joe@bea.com\n", "msg_date": "Thu, 21 Jun 2001 15:40:06 -0700", "msg_from": "Joseph Weinstein <joe@bea.com>", "msg_from_op": false, "msg_subject": "Re: JDBC Connection State Management with SQL Exceptions (esp\n\tPostgresql)" }, { "msg_contents": "\n\"Joseph Weinstein\" <joe@bea.com> wrote in message\nnews:3B3277C6.4C9BCA9@bea.com...\n>\n>\n> John Moore wrote:\n.....\n> > I am doing transactional work, with multiple statements and then a\ncommit().\n> > I am also doing my own connection pooling, so it is important that I be\nable\n> > to reliably re-use connections.\n>\n> Hi. There is a lot of state that can be left with a connection, and a good\n> pooling system should do a bunch of cleanup on the connection when it is\n> returned to the pool, so it will be ready for the next user. This would\ninclude\n> closing all statements and result sets that the previous user may have\ncreated\n> but not closed.\n\nWhat about PreparedConnection pooling?\nWhat is your oppinion on the following code\n[design] for such caching within a connection :\n( getUsedPstmts() is imaginary method of imaginary\nMyConnection interface )\n\npublic void returnConnection (Connection con) {\n Connection local_con = con;\n con = null;\n PreparedStatement [] used_pstmt = (MyConnection) local_con.getUsedPstmts()\n for (int i =0 ; i < used_con.length ; i++) {\n PreparedStatement new_pstmt = used_con[i];\n used_con[i] = null;\n cached_pstmt_HashMap.put( new_pstmt.getSql(), new_pstmt );\n }\n... some other cleaning steps....\n...set connection as available...\n}\n\nAlexV\n\n> This is crucial because you don't want retained references\n> to these objects to allow a 'previous user' to affect anything the next\nuser\n> does. 
......\n\n\n", "msg_date": "Thu, 21 Jun 2001 23:52:50 -0400", "msg_from": "\"AV\" <avek_nospam_@videotron.ca>", "msg_from_op": false, "msg_subject": "Re: JDBC Connection State Management with SQL Exceptions (esp\n\tPostgresql)" }, { "msg_contents": "\n\nAV wrote:\n> \n> \"Joseph Weinstein\" <joe@bea.com> wrote in message\n> news:3B3277C6.4C9BCA9@bea.com...\n> >\n> > John Moore wrote:\n> .....\n> > > I am doing transactional work, with multiple statements and then a\n> commit().\n> > > I am also doing my own connection pooling, so it is important that I be\n> able\n> > > to reliably re-use connections.\n> >\n> > Hi. There is a lot of state that can be left with a connection, and a good\n> > pooling system should do a bunch of cleanup on the connection when it is\n> > returned to the pool, so it will be ready for the next user. This would\n> include\n> > closing all statements and result sets that the previous user may have\n> created\n> > but not closed.\n> \n> What about PreparedConnection pooling?\n> What is your oppinion on the following code\n> [design] for such caching within a connection :\n> ( getUsedPstmts() is imaginary method of imaginary\n> MyConnection interface )\n> \n> public void returnConnection (Connection con) {\n> Connection local_con = con;\n> con = null;\n> PreparedStatement [] used_pstmt = (MyConnection) local_con.getUsedPstmts()\n> for (int i =0 ; i < used_con.length ; i++) {\n> PreparedStatement new_pstmt = used_con[i];\n> used_con[i] = null;\n> cached_pstmt_HashMap.put( new_pstmt.getSql(), new_pstmt );\n> }\n> ... some other cleaning steps....\n> ...set connection as available...\n> }\n> \n> AlexV\n\nHi Alex. I think I understand this... The basis of caching/re-using a PreparedStatment\nis via the SQL used to create it, but I see no actual statement-level cleanup here.\nYou should be clearing any warnings the statement may have accrued. 
Another example\nis that you should do something to cover the possibility some user code called setMaxRows(1)\non the statement. You don't want this condition to remain and silently truncate the results\nof any subsequent user... This code also doesn't allow for multiple statements with the\nsame SQL. There will be some 'utility' statements that might be used at several levels\nin a user's stack, and you want to allow for caching multiple identical statements *and*\nmaking sure that no two methods in the same caller stack get the *same* statement,\neven if it is the same SQL.\n\nJoe\n\n\n> \n> > This is crucial because you don't want retained references\n> > to these objects to allow a 'previous user' to affect anything the next\n> user\n> > does. ......\n\n-- \n\nPS: Folks: BEA WebLogic is expanding rapidly, with both entry and advanced positions\nfor people who want to work with Java, XML, SOAP and E-Commerce infrastructure products.\nWe have jobs at Nashua NH, Liberty Corner NJ, San Francisco and San Jose CA.\nSend resumes to joe@bea.com\n", "msg_date": "Fri, 22 Jun 2001 12:53:29 -0700", "msg_from": "Joseph Weinstein <joe@bea.com>", "msg_from_op": false, "msg_subject": "Re: JDBC Connection State Management with SQL Exceptions (esp \n\tPostgresql)" }, { "msg_contents": "\n\"Joseph Weinstein\" <joe@bea.com> wrote in message\nnews:3B3277C6.4C9BCA9@bea.com...\n> Hi. There is a lot of state that can be left with a connection, and a good\n> pooling system should do a bunch of cleanup on the connection when it is\n> returned to the pool, so it will be ready for the next user.\n\n>This would include\n> closing all statements and result sets that the previous user may have\ncreated\n> but not closed. This is crucial because you don't want retained references\n> to these objects to allow a 'previous user' to affect anything the next\nuser\n> does.\n\nArgh... Does this mean that my connection pooler needs to keep track of all\nstatements and result\nsets the user creates. 
I assume this means I also need to wrap the\nstatements so that I can\ncapture the returned result sets by overriding the execute method. Is this\ncorrect?\n\nDo you know of any source out there that implements connection pooling in a\nportable manner so I could use it with both Oracle and Postgresql?\n\n>You should clear theconnection warnings that accrue.\n\nOkway\n\n >You should\n> roll back any hanging transactional context, by doing a rollback if\n> autoCommit() is false, and you should then reset the connection to\nautoCommit(true),\n> which is the standard condition for a new JDBC connection.\n\nIt also appears that once a non-autoCommit transaction has sustained an\nSQLException, it is\nuseless until a rollback is done - at least in PostgreSQL. Is this correct?\n\nThe following question is still outstanding...\n\n> > In the case of Postgresql, I cannot find a way to tell if the connection\nis\n> > in the state of having had an SQL Exception exerted and no rollback\ncalled,\n> > other than keeping track of it myself! Is there any way to determine\nthat\n> > connection state other than by doing a test query?\n\nThanks\n\nJohn\n\n\n", "msg_date": "Sun, 24 Jun 2001 21:38:58 GMT", "msg_from": "\"John Moore\" <NOSPAMnews@NOSPAMtinyvital.com>", "msg_from_op": true, "msg_subject": "Re: JDBC Connection State Management with SQL Exceptions (esp\n\tPostgresql)" } ]
[ { "msg_contents": "Hi all,\n\nWhile testing postgresql with openssl on Unixware, I had this problem that\npsql alaways replied \"PRGN not seeded\".\n\nThat because psql does'nt not seed it in anyway. That's all right on\nsystems that have /dev/urandom (or whatever is ok for openssl)\n\nThe hack is simple: install prngd then add -DEGD='\"/var/run/prngd-pool\"'\nto\nCFLAGS in src/makefiles/unixware'CFLAGS\n\nthen add \n#ifdef EDG\n\tRAND_egd(EGD);\n#endif\n\nif src/interfaces/libpq/fe-connect.c near line 965 (#ifdef USE_SSL)\n\nThis done, openssl is doing all right.\n\nI'm sorry I don't have a clue how to make a clean patch. I guess\nreal patch would involve configure testing for /dev/?random then all\n\"standard places\" according to openssl for prng sockets then isse\neventually RAND_egd.\n\nThanks you for your attention.\n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Thu, 21 Jun 2001 22:32:13 +0200 (MET DST)", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": true, "msg_subject": "psql+openssl+uniware7" }, { "msg_contents": "Olivier PRENANT writes:\n\n> I'm sorry I don't have a clue how to make a clean patch. 
I guess\n> real patch would involve configure testing for /dev/?random then all\n> \"standard places\" according to openssl for prng sockets then isse\n> eventually RAND_egd.\n\nShouldn't this be handled by the OpenSSL configuration?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 21 Jun 2001 23:30:25 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: psql+openssl+uniware7" }, { "msg_contents": "On Thu, 21 Jun 2001, Peter Eisentraut wrote:\n\n> Olivier PRENANT writes:\n> \n> > I'm sorry I don't have a clue how to make a clean patch. I guess\n> > real patch would involve configure testing for /dev/?random then all\n> > \"standard places\" according to openssl for prng sockets then isse\n> > eventually RAND_egd.\n> \n> Shouldn't this be handled by the OpenSSL configuration?\nNot yet, opensl-0.9.7 will detect egd.
Until then, client has to seed\n> prng.\n\nI think we shouldn't patch our code to work around an openssl bug that\nwill go away soon anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Jun 2001 10:32:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql+openssl+uniware7 " }, { "msg_contents": "I was afraid you would say that.\n\nAs a user of postgresql for many years, one thing I love is that it's\nmulti-platform.\n\nUnfortunatly, not all platforms have /dev/urandom.\n\nhere is part of openssl doc (RAND_add.pod)\n\nOpenSSL makes sure that the PRNG state is unique for each thread. On\nsystems that provide C</dev/urandom>, the randomness device is used\nto seed the PRNG transparently. However, on all other systems, the\napplication is responsible for seeding the PRNG by calling RAND_add(),\nL<RAND_egd(3)|RAND_egd(3)>\nor L<RAND_load_file(3)|RAND_load_file(3)>.\n\nIt clearly states that THE APPLICATION (psql) is responsible for seedinf\nthe PRNG. ISTM, saying it's a bug of openssl when it's IN THE DOC seems a\nbit \"unnice\".\n\nEven openssh (widely used) seeds PRNG itself.\n\nI'm not trying to start a war, I love Postgresql too much for that, but\njust say, I'll TRY to come up with a patch.\n\nRegards,\n\n\nOn Fri, 22 Jun 2001, Tom Lane wrote:\n\n> Olivier PRENANT <ohp@pyrenet.fr> writes:\n> >> Shouldn't this be handled by the OpenSSL configuration?\n> \n> > Not yet, opensl-0.9.7 will detect egd. Until then, client has to seed\n> > prng.\n> \n> I think we shouldn't patch our code to work around an openssl bug that\n> will go away soon anyway.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. 
(St Exupery)\n\n", "msg_date": "Sat, 23 Jun 2001 18:49:38 +0200 (MET DST)", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": true, "msg_subject": "Re: psql+openssl+uniware7 " }, { "msg_contents": "Olivier PRENANT writes:\n\n> It clearly states that THE APPLICATION (psql) is responsible for seedinf\n> the PRNG. ISTM, saying it's a bug of openssl when it's IN THE DOC seems a\n> bit \"unnice\".\n\nMight be better if libpq would handle this.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 23 Jun 2001 19:28:55 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: psql+openssl+uniware7 " }, { "msg_contents": "On Sat, 23 Jun 2001, Peter Eisentraut wrote:\n\n> Olivier PRENANT writes:\n> \n> > It clearly states that THE APPLICATION (psql) is responsible for seedinf\n> > the PRNG. ISTM, saying it's a bug of openssl when it's IN THE DOC seems a\n> > bit \"unnice\".\n> \n> Might be better if libpq would handle this.\nI can't agree more. That's why I changes fe-connect.c (it works ok) The\nonly thing if to write a propper patch!!\n\nRegards,\n> \n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Sat, 23 Jun 2001 20:07:36 +0200 (MET DST)", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": true, "msg_subject": "Re: psql+openssl+uniware7 " } ]
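The configure-time probe Olivier sketches in the thread above — use a kernel randomness device if present, otherwise look for a PRNG-daemon socket in "standard places" and seed from it (what `RAND_egd()` does on the C side) — amounts to a simple detection order. A minimal sketch; apart from `/var/run/prngd-pool` from the thread, the socket paths are illustrative guesses, not an authoritative list.

```python
# Sketch of the proposed entropy-source probe.  On hosts with /dev/urandom
# OpenSSL seeds itself; otherwise the client must find an EGD/PRNGD socket
# and seed explicitly.  Paths other than /var/run/prngd-pool are examples.
import os

RANDOM_DEVICES = ["/dev/urandom", "/dev/random"]
EGD_SOCKETS = ["/var/run/prngd-pool", "/var/run/egd-pool", "/etc/entropy"]

def find_entropy_source(exists=os.path.exists):
    for dev in RANDOM_DEVICES:          # OpenSSL handles these transparently
        if exists(dev):
            return ("device", dev)
    for sock in EGD_SOCKETS:            # otherwise: RAND_egd(sock) territory
        if exists(sock):
            return ("egd", sock)
    return (None, None)                 # "PRNG not seeded" awaits

# Simulate a Unixware-like host running prngd but lacking /dev/urandom:
kind, path = find_entropy_source(exists=lambda p: p == "/var/run/prngd-pool")
print(kind, path)   # -> egd /var/run/prngd-pool
```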
[ { "msg_contents": "Awhile ago I said that I wanted to create a new flavor of table-level\nlock for concurrent VACUUM to get on a table. RowExclusiveLock is\nnot the right thing because it is not self-exclusive, whereas we don't\nwant more than one VACUUM mangling a table at a time. But anything\nhigher locks out concurrent writers, which we don't want either.\nSo we need an intermediate lock type that will conflict with itself\nas well as with ShareLock and above. (It must conflict with ShareLock\nsince we don't want new indexes being created during VACUUM either...)\n\nI'm having a hard time coming up with a name, though. I originally\ncalled it \"VacuumLock\" but naming it after its primary use seems bogus.\nSome other possibilities that I don't much like either:\n\n\tSchemaLock\t--- basically we're locking down the table schema\n\tWriteShareLock\t--- sharing access with writers\n\nAny better ideas out there? Where did the existing lock type names\ncome from, anyway? (Not SQL92 or SQL99, for sure.)\n\nBTW, I'm assuming that I should make the new lock type available\nat the user level as a LOCK TABLE option. Any objections to that?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Jun 2001 17:15:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Good name for new lock type for VACUUM?" }, { "msg_contents": "Tom Lane wrote:\n\n>Awhile ago I said that I wanted to create a new flavor of table-level\n>lock for concurrent VACUUM to get on a table. RowExclusiveLock is\n>not the right thing because it is not self-exclusive, whereas we don't\n>want more than one VACUUM mangling a table at a time. But anything\n>higher locks out concurrent writers, which we don't want either.\n>So we need an intermediate lock type that will conflict with itself\n>as well as with ShareLock and above. 
(It must conflict with ShareLock\n>since we don't want new indexes being created during VACUUM either...)\n>\n*snip*\n\n>\n>BTW, I'm assuming that I should make the new lock type available\n>at the user level as a LOCK TABLE option. Any objections to that?\n>\nI think that type of lock would best be kept to the system level. \n\n*thinking out loud*\nIf your goal is to have it used more often, then user level might \nprovide more opportunities for testing. However, I can't really think \nof any situation where it would be beneficial to a user. The rest of \nthe locks seem to take care of everything else.\n\nIs it going to timeout? If a connection is dropped by a user, will the \nlock release?\n\n\n", "msg_date": "Thu, 21 Jun 2001 18:40:16 -0500", "msg_from": "Thomas Swan <tswan@olemiss.edu>", "msg_from_op": false, "msg_subject": "Re: Good name for new lock type for VACUUM?" }, { "msg_contents": "Thomas Swan <tswan@olemiss.edu> writes:\n> I think that type of lock would best be kept to the system level. \n\nWhy?\n\nI don't have a scenario offhand where it'd be useful, but if we've\ndiscovered it's useful for VACUUM then there may be cases where a lock\nwith these properties would be useful to users as well. Besides, we\nhave several lock types that are exposed to users even though we've\nfound no uses for them at the system level.\n\n> Is it going to timeout? If a connection is dropped by a user, will the \n> lock release?\n\nNo, and yes, same as any other lock.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Jun 2001 20:10:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: Good name for new lock type for VACUUM? " }, { "msg_contents": "Tom Lane writes:\n\n> Awhile ago I said that I wanted to create a new flavor of table-level\n> lock for concurrent VACUUM to get on a table.\n\n> I'm having a hard time coming up with a name, though. 
I originally\n> called it \"VacuumLock\" but naming it after its primary use seems bogus.\n\nNot that a name like \"share row exclusive\" is any less bogus. ;-)\n\nI've been staring at the lock names for an hour now and the best name I've\ncome up with is SHARE UPDATE EXCLUSIVE, as in \"share update, otherwise\nexclusive\" (the implication being that update would allow select as well),\nor some permutation thereof.\n\nAny other constructs that follow the existing patterns lead to\nsignificantly less desirable names like\n\nEXCLUSIVE ROW EXCLUSIVE == like ROW EXCLUSIVE, but self-exclusive, or\n\nROW EXCLUSIVE SHARE == like SHARE, but allows ROW EXCLUSIVE\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 22 Jun 2001 21:58:20 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Good name for new lock type for VACUUM?" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I've been staring at the lock names for an hour now and the best name I've\n> come up with is SHARE UPDATE EXCLUSIVE, as in \"share update, otherwise\n> exclusive\" (the implication being that update would allow select as well),\n> or some permutation thereof.\n\nOkay with me, unless someone else comes up with a better idea...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Jun 2001 15:59:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Good name for new lock type for VACUUM? " }, { "msg_contents": "> Tom Lane writes:\n> \n> > Awhile ago I said that I wanted to create a new flavor of table-level\n> > lock for concurrent VACUUM to get on a table.\n> \n> > I'm having a hard time coming up with a name, though. I originally\n> > called it \"VacuumLock\" but naming it after its primary use seems bogus.\n> \n> Not that a name like \"share row exclusive\" is any less bogus. 
;-)\n> \n> I've been staring at the lock names for an hour now and the best name I've\n> come up with is SHARE UPDATE EXCLUSIVE, as in \"share update, otherwise\n> exclusive\" (the implication being that update would allow select as well),\n> or some permutation thereof.\n> \n> Any other constructs that follow the existing patterns lead to\n> significantly less desirable names like\n> \n> EXCLUSIVE ROW EXCLUSIVE == like ROW EXCLUSIVE, but self-exclusive, or\n> \n> ROW EXCLUSIVE SHARE == like SHARE, but allows ROW EXCLUSIVE\n\nSounds good. I documented the lock types as best I could in the LOCK\nmanual page. I think that is as good as we can do to explain them.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Jun 2001 16:58:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Good name for new lock type for VACUUM?" }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane\n> \n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > I've been staring at the lock names for an hour now and the \n> best name I've\n> > come up with is SHARE UPDATE EXCLUSIVE, as in \"share update, otherwise\n> > exclusive\" (the implication being that update would allow \n> select as well),\n> > or some permutation thereof.\n> \n> Okay with me, unless someone else comes up with a better idea...\n> \n\nI have no better idea but I hope to leave VacuumLock as an alias\nbecause I can't remember others.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Sat, 23 Jun 2001 11:53:39 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "RE: Good name for new lock type for VACUUM? 
" }, { "msg_contents": "> -----Original Message-----\n> From: Hiroshi Inoue\n> > -----Original Message-----\n> > From: Tom Lane\n> >\n> > Peter Eisentraut <peter_e@gmx.net> writes:\n> > > I've been staring at the lock names for an hour now and the\n> > best name I've\n> > > come up with is SHARE UPDATE EXCLUSIVE, as in \"share update, otherwise\n> > > exclusive\" (the implication being that update would allow\n> > select as well),\n> > > or some permutation thereof.\n> >\n> > Okay with me, unless someone else comes up with a better idea...\n> >\n>\n> I have no better idea but I hope to leave VacuumLock as an alias\n> because I can't remember others.\n>\n\nIsn't it a better idea to have a separate 'SELF EXCLUSIVE' lock\nwhich conflicts with only itself ?\n\nregards,\nHIroshi Inoue\n\n", "msg_date": "Sat, 23 Jun 2001 23:39:39 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "RE: Good name for new lock type for VACUUM? " }, { "msg_contents": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> Isn't it a better idea to have a separate 'SELF EXCLUSIVE' lock\n> which conflicts with only itself ?\n\n*Only* itself? What would that be useful for? Not for locking\ntables, anyway --- if you don't conflict with AccessExclusiveLock\nthen it's possible for someone to drop the table while you're\nworking on it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 Jun 2001 11:51:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Good name for new lock type for VACUUM? " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> \n> \"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> > Isn't it a better idea to have a separate 'SELF EXCLUSIVE' lock\n> > which conflicts with only itself ?\n> \n> *Only* itself? 
What would that be useful for?\n\nIsn't VacuumLock = RowExclusiveLock + SelfExclusiveLock \nfor the table ?\n\nregards,\nHiroshi Inoue\n", "msg_date": "Sun, 24 Jun 2001 06:11:01 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "RE: Good name for new lock type for VACUUM? " }, { "msg_contents": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> Isn't it a better idea to have a separate 'SELF EXCLUSIVE' lock\n> which conflicts with only itself ?\n>> \n>> *Only* itself? What would that be useful for?\n\n> Isn't VacuumLock = RowExclusiveLock + SelfExclusiveLock \n> for the table ?\n\nOh, I see, you're suggesting acquiring two separate locks on the table.\nHmm. There would be a risk of deadlock if two processes tried to\nacquire these locks in different orders. That's not a big problem for\nVACUUM, since all processes would presumably be executing the same\nVACUUM code. But it raises questions about just how useful this lock\ntype would be in general-purpose use. You could never acquire *only*\nthis lock type, it'd have to be combined with something else, so it\nseems like any usage would have to be carefully examined for deadlocks.\n\nStill, it's an interesting alternative. Comments anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 Jun 2001 17:29:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Good name for new lock type for VACUUM? " }, { "msg_contents": "> \"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> > Isn't it a better idea to have a separate 'SELF EXCLUSIVE' lock\n> > which conflicts with only itself ?\n> >> \n> >> *Only* itself? What would that be useful for?\n> \n> > Isn't VacuumLock = RowExclusiveLock + SelfExclusiveLock \n> > for the table ?\n> \n> Oh, I see, you're suggesting acquiring two separate locks on the table.\n> Hmm. There would be a risk of deadlock if two processes tried to\n> acquire these locks in different orders. 
That's not a big problem for\n> VACUUM, since all processes would presumably be executing the same\n> VACUUM code. But it raises questions about just how useful this lock\n> type would be in general-purpose use. You could never acquire *only*\n> this lock type, it'd have to be combined with something else, so it\n> seems like any usage would have to be carefully examined for deadlocks.\n> \n> Still, it's an interesting alternative. Comments anyone?\n\nSelfExclusiveLock is clear and can't be confused with other lock types.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 23 Jun 2001 18:11:07 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Good name for new lock type for VACUUM?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Still, it's an interesting alternative. Comments anyone?\n\n> SelfExclusiveLock is clear and can't be confused with other lock types.\n\nIt could possibly be made a little less dangerous if \"SelfExclusiveLock\"\nwere defined to conflict with itself and AccessExclusiveLock (and\nnothing else). That would at least mean that holding SelfExclusiveLock\nwould guarantee the table not go away under you; so there might be\nscenarios where holding just that lock would make sense.\n\nStill, I'm not sure that this lock type is as flexible as it seems at\nfirst glance. What you'd probably really want it to do is guarantee\nthat no other instance of your same operation (whatever it is) is\nrunning, and then you'd acquire another lock type to lock out other\noperations that you couldn't run in parallel with. Sounds great,\nexcept there'd only be one SelfExclusiveLock per table. 
So, for\nexample, your operation would conflict with VACUUM whether you wanted\nit to or not.\n\nBetween that and the need-two-locks hazards, I'm unconvinced that this\nis a better idea.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 Jun 2001 18:26:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Good name for new lock type for VACUUM? " }, { "msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Still, it's an interesting alternative. Comments anyone?\n> \n> > SelfExclusiveLock is clear and can't be confused with other lock types.\n> \n> It could possibly be made a little less dangerous if \"SelfExclusiveLock\"\n> were defined to conflict with itself and AccessExclusiveLock (and\n> nothing else). That would at least mean that holding SelfExclusiveLock\n> would guarantee the table not go away under you; so there might be\n> scenarios where holding just that lock would make sense.\n> \n> Still, I'm not sure that this lock type is as flexible as it seems at\n> first glance. \n\nI don't think \"SelfExclusiveLock\" is an excellent lock either.\nHowever it seems to point out the reason why we couldn't\nplace(name) \"VacuumLock\" properly in our locking system.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Mon, 25 Jun 2001 10:42:40 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Good name for new lock type for VACUUM?" } ]
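The semantics agreed in the thread above — a lock that conflicts with itself and with SHARE and above, while still admitting ordinary writers — can be written out as an explicit table-level conflict matrix. This is a sketch of one consistent reading, with each mode's conflict set spelled out rather than quoted from any authoritative source.

```python
# Sketch: table-level lock modes, weakest to strongest, with the new
# SHARE UPDATE EXCLUSIVE mode slotted between ROW EXCLUSIVE and SHARE.
MODES = ["ACCESS SHARE", "ROW SHARE", "ROW EXCLUSIVE",
         "SHARE UPDATE EXCLUSIVE", "SHARE", "SHARE ROW EXCLUSIVE",
         "EXCLUSIVE", "ACCESS EXCLUSIVE"]

CONFLICTS = {
    "ACCESS SHARE":           {"ACCESS EXCLUSIVE"},
    "ROW SHARE":              {"EXCLUSIVE", "ACCESS EXCLUSIVE"},
    "ROW EXCLUSIVE":          {"SHARE", "SHARE ROW EXCLUSIVE",
                               "EXCLUSIVE", "ACCESS EXCLUSIVE"},
    # the new VACUUM lock: self-exclusive, conflicts with SHARE and above,
    # but still lets ordinary writers (ROW EXCLUSIVE) through
    "SHARE UPDATE EXCLUSIVE": {"SHARE UPDATE EXCLUSIVE", "SHARE",
                               "SHARE ROW EXCLUSIVE", "EXCLUSIVE",
                               "ACCESS EXCLUSIVE"},
    "SHARE":                  {"ROW EXCLUSIVE", "SHARE UPDATE EXCLUSIVE",
                               "SHARE ROW EXCLUSIVE", "EXCLUSIVE",
                               "ACCESS EXCLUSIVE"},
    "SHARE ROW EXCLUSIVE":    {"ROW EXCLUSIVE", "SHARE UPDATE EXCLUSIVE",
                               "SHARE", "SHARE ROW EXCLUSIVE",
                               "EXCLUSIVE", "ACCESS EXCLUSIVE"},
    "EXCLUSIVE":              set(MODES) - {"ACCESS SHARE"},
    "ACCESS EXCLUSIVE":       set(MODES),
}

def conflicts(a, b):
    return b in CONFLICTS[a]

# the matrix must be symmetric
assert all(conflicts(a, b) == conflicts(b, a) for a in MODES for b in MODES)
# VACUUM excludes another VACUUM and index builds, but not plain writers
assert conflicts("SHARE UPDATE EXCLUSIVE", "SHARE UPDATE EXCLUSIVE")
assert conflicts("SHARE UPDATE EXCLUSIVE", "SHARE")
assert not conflicts("SHARE UPDATE EXCLUSIVE", "ROW EXCLUSIVE")
```

A single self-conflicting mode like this also sidesteps the two-lock deadlock hazard Tom raises against the RowExclusiveLock + SelfExclusiveLock composition: one acquisition, one ordering.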
[ { "msg_contents": "> Any better ideas out there?\n\nNames were always hard for me -:)\n\n> Where did the existing lock type names\n> come from, anyway? (Not SQL92 or SQL99, for sure.)\n\nOracle. Except for Access Exclusive/Share Locks.\n\nVadim\n", "msg_date": "Thu, 21 Jun 2001 14:26:09 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Good name for new lock type for VACUUM?" } ]
[ { "msg_contents": "Short description:\n\n PostgreSQL allows perl stored procedures to be created. This module\n will provide access to the PostgreSQL database from inside your stored\n procedure.\n\nFor longer description, see http://www.formenos.org/PgSPI/README\n\nIt is available at http://formenos.org/PgSPI/DBD-PgSPI-0.01.tar.gz\n\nAt this point, I am not looking to have it merged anywhere or distribute\nit on CPAN, this is a release for hardened people only. \n\nenjoy\n-alex\n\n\n\n\n", "msg_date": "Thu, 21 Jun 2001 23:16:14 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "[ANNOUNCE] DBD::PgSPI" } ]
[ { "msg_contents": "At 05:56 PM 22-06-2001 -0400, Bruce Momjian wrote:\n>> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Since 64 is already too much to let 7.1 fit in SHMMAX = 1MB, I think\n>> the original rationale for using 64 is looking pretty broken anyway.\n>> Comments?\n>\n>BSD/OS has a 4MB max but we document how to increase it by recompiling\n>the kernel. Maybe if we fail the startup we can tell them how to\n>decrease the buffers in postgresql.conf file. Seems quite clear.\n>\n\nWhy is SHMMAX so low on some O/Ses? What are the advantages?\n\nMy guess is it's a minimum vs median/popular situation. Get the same thing\nlooking at the default www.kernel.org linux kernel settings vs the Redhat\nkernel settings.\n\nI'd personally prefer the popular situation. But would that mean the\nminimum case can't even boot up to recompile? Maybe the BSD guys should\nship with two kernels then. FreeBSD esp, since it's easy to recompile the\nkernel, just do two, during installation default to \"Regular\", with an\noption for \"Tiny\".\n\nIt's more fair that the people trying the extraordinary (16MB 386) should\nbe the ones doing the extra work.\n\nCheerio,\nLink.\n\n", "msg_date": "Fri, 22 Jun 2001 12:13:52 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": true, "msg_subject": "Re: Multiple Indexing, performance impact" }, { "msg_contents": "I just made i simple little test application inserting 50'000 'pwgen 8' data into a table with only a primary key id and a text column.\n\nIn every run, it is all deleted and the tables are vacuumed.\n\nHaving one separate index on name it took 36 seconds\nHaving an additional index, also on name, it took 69 seconds.\nFurthermore:\n3 indexes: 97 seconds\n4 indexes: 131 seconds\n5 indexes: 163 seconds\n6 indexes: 210 seconds\n7 indexes: 319 seconds\n8 indexes: 572 seconds\n9 indexes: 831 seconds\n10 indexes: 1219 seconds\n\nAnyone know what causes the signifacant performance decrease after 7 indexes?\n\nDaniel 
Åkerud\n\n\n\n\n\n\n\n\n\nI just made i simple little test application \ninserting 50'000 'pwgen 8' data into a table with only a primary key id and a \ntext column.\n \nIn every run, it is all deleted and the tables are \nvacuumed.\n \nHaving one separate index on name it took 36 \nseconds\nHaving an additional index, also on name, it took \n69 seconds.\nFurthermore:\n3 indexes: 97 seconds\n4 indexes: 131 seconds\n5 indexes: 163 seconds\n6 indexes: 210 seconds\n7 indexes: 319 seconds\n8 indexes: 572 seconds\n9 indexes: 831 seconds\n10 indexes: 1219 seconds\n \nAnyone know what causes the signifacant performance \ndecrease after 7 indexes?\n \nDaniel Åkerud", "msg_date": "Fri, 22 Jun 2001 20:06:03 +0200", "msg_from": "=?iso-8859-1?Q?Daniel_=C5kerud?= <zilch@home.se>", "msg_from_op": false, "msg_subject": "Multiple Indexing, performance impact" }, { "msg_contents": "=?iso-8859-1?Q?Daniel_=C5kerud?= <zilch@home.se> writes:\n> Anyone know what causes the signifacant performance decrease after 7 indexe=\n> s?\n\nI'd bet that somewhere around there, you are starting to see thrashing\nof the buffer pool due to needing to touch too many different pages to\ninsert each tuple. What is your -B setting? If you increase it,\ndoes the performance improve?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Jun 2001 14:48:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple Indexing, performance impact " }, { "msg_contents": "I did a ps ax | postmaster but found no -B, and concluded that it uses the\nvalue specified in /etc/postgrelsql/postgresql.conf on shared_buffers (I\nsaw -B was shared buffer doing a man postmaster). I'll change this to 256\nand rerun the test!\n\nWill post the results here later. 
Please tell if this was a too puny\nincrease!\n\nDaniel �kerud\n\n> =?iso-8859-1?Q?Daniel_=C5kerud?= <zilch@home.se> writes:\n> > Anyone know what causes the signifacant performance decrease after 7\nindexe=\n> > s?\n>\n> I'd bet that somewhere around there, you are starting to see thrashing\n> of the buffer pool due to needing to touch too many different pages to\n> insert each tuple. What is your -B setting? If you increase it,\n> does the performance improve?\n>\n> regards, tom lane\n>\n\n", "msg_date": "Fri, 22 Jun 2001 21:16:57 +0200", "msg_from": "=?iso-8859-1?Q?Daniel_=C5kerud?= <zilch@home.se>", "msg_from_op": false, "msg_subject": "Re: Multiple Indexing, performance impact " }, { "msg_contents": "=?iso-8859-1?Q?Daniel_=C5kerud?= <zilch@home.se> writes:\n> I did a ps ax | postmaster but found no -B, and concluded that it uses the\n> value specified in /etc/postgrelsql/postgresql.conf on shared_buffers (I\n> saw -B was shared buffer doing a man postmaster). I'll change this to 256\n> and rerun the test!\n\n> Will post the results here later. Please tell if this was a too puny\n> increase!\n\nThat should be enough to see if there's a performance change, but for\nfuture reference, yes you should go higher. On modern machines with\nmany megs of RAM, you should probably be using -B on the order of a few\nthousand, at least for production installations. 
The reason the default\nis so low is that we hope the system will still be able to fire up on\nmachines where the kernel enforces a SHMMAX limit of only a meg or so.\nThis hope is possibly in vain anymore anyway, since the system's\nnon-buffer shared-memory usage keeps creeping up; I think 7.1 is well\npast 1MB shmem even with 64 buffers...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Jun 2001 16:09:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple Indexing, performance impact " }, { "msg_contents": "Holy ultra-violet-active macaronies :)\n\nFirst I changed it to 256, then I changed it to 1024.\n\n-B 128 is A\n-B 256 is B\n-B 1024 is C\n\nNew multiple-index performance data):\n\n1. A: 36 B: 32 C: 35\n2. A: 69 B: 53 C: 38\n3. A: 97 B: 79 C: 40\n4. A: 131 B: 98 C: 48\n5. A: 163 B: 124 C: 52\n6. A: 210 B: 146 C: 66\n7. A: 319 B: 233 C: 149\n8. A: 572 B: 438 C: 268\n9. A: 831 B: 655 C:\n10. A: 1219 B: 896 C:\n\nThe last test hasn't finished yet, but THANKS! I know the reson now, at\nleast... i'll try\n2048 also.\n\n-B equals --brutal-performance ? ;)\n\nThanks,\nDaniel �kerud\n\n> That should be enough to see if there's a performance change, but for\n> future reference, yes you should go higher. On modern machines with\n> many megs of RAM, you should probably be using -B on the order of a few\n> thousand, at least for production installations. 
The reason the default\n> is so low is that we hope the system will still be able to fire up on\n> machines where the kernel enforces a SHMMAX limit of only a meg or so.\n> This hope is possibly in vain anymore anyway, since the system's\n> non-buffer shared-memory usage keeps creeping up; I think 7.1 is well\n> past 1MB shmem even with 64 buffers...\n>\n> regards, tom lane\n>\n\n", "msg_date": "Fri, 22 Jun 2001 22:47:08 +0200", "msg_from": "=?iso-8859-1?Q?Daniel_=C5kerud?= <zilch@home.se>", "msg_from_op": false, "msg_subject": "Re: Multiple Indexing, performance impact " }, { "msg_contents": "> Holy ultra-violet-active macaronies :)\n> \n> First I changed it to 256, then I changed it to 1024.\n> \n> -B 128 is A\n> -B 256 is B\n> -B 1024 is C\n> \n> New multiple-index performance data):\n> \n> 1. A: 36 B: 32 C: 35\n> 2. A: 69 B: 53 C: 38\n> 3. A: 97 B: 79 C: 40\n> 4. A: 131 B: 98 C: 48\n> 5. A: 163 B: 124 C: 52\n> 6. A: 210 B: 146 C: 66\n> 7. A: 319 B: 233 C: 149\n> 8. A: 572 B: 438 C: 268\n> 9. A: 831 B: 655 C:\n> 10. A: 1219 B: 896 C:\n> \n> The last test hasn't finished yet, but THANKS! I know the reson now, at\n> least... i'll try\n> 2048 also.\n\nStrange that even at 1024 performance still drops off at 7. Seems it\nmay be more than buffer thrashing.\n\n\n> -B equals --brutal-performance ? ;)\n\nSee my performance article on techdocs.postgresql.org.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Jun 2001 17:15:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple Indexing, performance impact" }, { "msg_contents": "Tried with 2048 also, it complete took away the strange steep after 7:\n\nD is now 2048\n\n1. A: 36 B: 32 C: 35 D: 31\n2. A: 69 B: 53 C: 38 D: 38\n3. A: 97 B: 79 C: 40 D: 40\n4. 
A: 131 B: 98 C: 48 D: 43\n5. A: 163 B: 124 C: 52 D: 49\n6. A: 210 B: 146 C: 66 D: 50\n7. A: 319 B: 233 C: 149 D: 58\n8. A: 572 B: 438 C: 268 D: 65\n9. A: 831 B: 655 C: 437 D: 76\n10. A: 1219 B: 896 C: 583 D: 79\n\nWhat is the program called that flushes the buffers every 30 seconds on a\nlinux 2.2.x system?\n\nDaniel �kerud\n\n> > Holy ultra-violet-active macaronies :)\n> >\n> > First I changed it to 256, then I changed it to 1024.\n> >\n> > -B 128 is A\n> > -B 256 is B\n> > -B 1024 is C\n> >\n> > New multiple-index performance data):\n> >\n> > 1. A: 36 B: 32 C: 35\n> > 2. A: 69 B: 53 C: 38\n> > 3. A: 97 B: 79 C: 40\n> > 4. A: 131 B: 98 C: 48\n> > 5. A: 163 B: 124 C: 52\n> > 6. A: 210 B: 146 C: 66\n> > 7. A: 319 B: 233 C: 149\n> > 8. A: 572 B: 438 C: 268\n> > 9. A: 831 B: 655 C:\n> > 10. A: 1219 B: 896 C:\n> >\n> > The last test hasn't finished yet, but THANKS! I know the reson now, at\n> > least... i'll try\n> > 2048 also.\n>\n> Strange that even at 1024 performance still drops off at 7. Seems it\n> may be more than buffer thrashing.\n>\n>\n> > -B equals --brutal-performance ? ;)\n>\n> See my performance article on techdocs.postgresql.org.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n", "msg_date": "Fri, 22 Jun 2001 23:37:58 +0200", "msg_from": "=?iso-8859-1?Q?Daniel_=C5kerud?= <zilch@home.se>", "msg_from_op": false, "msg_subject": "Re: Multiple Indexing, performance impact" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Strange that even at 1024 performance still drops off at 7. Seems it\n> may be more than buffer thrashing.\n\nYeah, if anything the knee in the curve seems to be worse at 1024\nbuffers. Curious. 
Deserves more investigation, perhaps.\n\nThis does remind me that I'd been thinking of suggesting that we\nraise the default -B to something more reasonable, maybe 1000 or so\n(yielding an 8-meg-plus shared memory area). This wouldn't prevent\npeople from setting it small if they have a small SHMMAX, but it's\nprobably time to stop letting that case drive our default setting.\nSince 64 is already too much to let 7.1 fit in SHMMAX = 1MB, I think\nthe original rationale for using 64 is looking pretty broken anyway.\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Jun 2001 17:52:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple Indexing, performance impact " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Strange that even at 1024 performance still drops off at 7. Seems it\n> > may be more than buffer thrashing.\n> \n> Yeah, if anything the knee in the curve seems to be worse at 1024\n> buffers. Curious. Deserves more investigation, perhaps.\n> \n> This does remind me that I'd been thinking of suggesting that we\n> raise the default -B to something more reasonable, maybe 1000 or so\n> (yielding an 8-meg-plus shared memory area). This wouldn't prevent\n> people from setting it small if they have a small SHMMAX, but it's\n> probably time to stop letting that case drive our default setting.\n> Since 64 is already too much to let 7.1 fit in SHMMAX = 1MB, I think\n> the original rationale for using 64 is looking pretty broken anyway.\n> Comments?\n\nBSD/OS has a 4MB max but we document how to increase it by recompiling\nthe kernel. Maybe if we fail the startup we can tell them how to\ndecrease the buffers in postgresql.conf file. Seems quite clear.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Jun 2001 17:56:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple Indexing, performance impact" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> This does remind me that I'd been thinking of suggesting that we\n>> raise the default -B to something more reasonable, maybe 1000 or so\n>> (yielding an 8-meg-plus shared memory area).\n\n> BSD/OS has a 4MB max but we document how to increase it by recompiling\n> the kernel.\n\nHmm. Anyone like the idea of a platform-specific default established\nby configure? We could set it in the template file on platforms where\nthe default SHMMAX is too small to allow 1000 buffers.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Jun 2001 18:02:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple Indexing, performance impact " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> This does remind me that I'd been thinking of suggesting that we\n> >> raise the default -B to something more reasonable, maybe 1000 or so\n> >> (yielding an 8-meg-plus shared memory area).\n> \n> > BSD/OS has a 4MB max but we document how to increase it by recompiling\n> > the kernel.\n> \n> Hmm. Anyone like the idea of a platform-specific default established\n> by configure? We could set it in the template file on platforms where\n> the default SHMMAX is too small to allow 1000 buffers.\n\nTemplate file seems like a good idea for platforms that can't handle the\ndefault. I don't think configure should be doing such tests because the\ntarget could be a different kernel. \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Jun 2001 18:06:38 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple Indexing, performance impact" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Hmm. Anyone like the idea of a platform-specific default established\n>> by configure? We could set it in the template file on platforms where\n>> the default SHMMAX is too small to allow 1000 buffers.\n\n> Template file seems like a good idea for platforms that can't handle the\n> default. I don't think configure should be doing such tests because the\n> target could be a different kernel. \n\nRight, I wasn't thinking of an actual run-time test in configure, just\nthat we could use it to let the OS-specific template file override the\nnormal default.\n\nWe could offer a --with switch to manually choose the default, too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Jun 2001 18:12:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple Indexing, performance impact " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Hmm. Anyone like the idea of a platform-specific default established\n> >> by configure? We could set it in the template file on platforms where\n> >> the default SHMMAX is too small to allow 1000 buffers.\n> \n> > Template file seems like a good idea for platforms that can't handle the\n> > default. I don't think configure should be doing such tests because the\n> > target could be a different kernel. \n> \n> Right, I wasn't thinking of an actual run-time test in configure, just\n> that we could use it to let the OS-specific template file override the\n> normal default.\n> \n> We could offer a --with switch to manually choose the default, too.\n\nGood idea, yes. 
Not sure if we need a --with switch because they can\njust edit the postgresql.conf or postgresql.conf.sample file.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Jun 2001 18:22:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple Indexing, performance impact" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> We could offer a --with switch to manually choose the default, too.\n\n> Good idea, yes. Not sure if we need a --with switch because they can\n> just edit the postgresql.conf or postgresql.conf.sample file.\n\nWell, we have a --with switch for DEF_MAXBACKENDS, so one for the\ndefault number of buffers doesn't seem too unreasonable. I wouldn't\nbother with it if configure didn't have to touch the value anyway...\nbut it's just another line or two in configure.in...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Jun 2001 19:13:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple Indexing, performance impact " }, { "msg_contents": "Tom Lane writes:\n\n> This does remind me that I'd been thinking of suggesting that we\n> raise the default -B to something more reasonable, maybe 1000 or so\n> (yielding an 8-meg-plus shared memory area).\n\nOn Modern(tm) systems, 8 MB is just as arbitrary and undersized as 1 MB.\nSo while for real use, manual tuning will still be necessary, on test\nsystems we'd use significant amounts of memory for nothing, or not start\nup at all.\n\nMaybe we could look around what the default limit is these days, but\nraising it to arbitrary values will just paint over the fact that user\nintervention is still required and that there is almost no documentation\nfor this.\n\n-- \nPeter Eisentraut peter_e@gmx.net 
http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 23 Jun 2001 01:21:19 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Multiple Indexing, performance impact " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> This does remind me that I'd been thinking of suggesting that we\n>> raise the default -B to something more reasonable, maybe 1000 or so\n>> (yielding an 8-meg-plus shared memory area).\n\n> On Modern(tm) systems, 8 MB is just as arbitrary and undersized as 1 MB.\n\nA fair complaint, but at least it's within an order of magnitude of\nbeing reasonable; you don't *have* to tune it before you get something\napproaching reasonable performance. 64 is two or more orders of\nmagnitude off.\n\n> So while for real use, manual tuning will still be necessary, on test\n> systems we'd use significant amounts of memory for nothing, or not start\n> up at all.\n\nThe thought of test postmasters was what kept me from proposing\nsomething even higher than 1000. 
8Mb is small enough that you can\nstill expect to run several postmasters without problems, on most\nmachines where you might contemplate the idea of multiple postmasters\nat all.\n\nWould you suggest that we have no default at all, and make users pick\nsomething?\n\n> Maybe we could look around what the default limit is these days, but\n> raising it to arbitrary values will just paint over the fact that user\n> intervention is still required and that there is almost no documentation\n> for this.\n\nWe do need to have a section in the administrator's guide about tuning.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Jun 2001 19:29:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple Indexing, performance impact " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> We could offer a --with switch to manually choose the default, too.\n> \n> > Good idea, yes. Not sure if we need a --with switch because they can\n> > just edit the postgresql.conf or postgresql.conf.sample file.\n> \n> Well, we have a --with switch for DEF_MAXBACKENDS, so one for the\n> default number of buffers doesn't seem too unreasonable. I wouldn't\n> bother with it if configure didn't have to touch the value anyway...\n> but it's just another line or two in configure.in...\n\nYes, we could add that too, but now that we have postgresql.conf should\nwe even be mentioning stuff like that in configure. In the old days we\nhad a compiled-in limit but that is not true anymore, right?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Jun 2001 20:04:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Multiple Indexing, performance impact" }, { "msg_contents": "Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> Why is SHMMAX so low on some O/Ses?\n\nHistorical artifact, I think: the SysV IPC code was developed on\nmachines that were tiny by current standards. Unfortunately, vendors\nhaven't stopped to review their kernel parameters and scale them up\nappropriately.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 Jun 2001 00:53:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple Indexing, performance impact " }, { "msg_contents": "Tom Lane writes:\n\n> Would you suggest that we have no default at all, and make users pick\n> something?\n\nNo. I'm concerned that PostgreSQL should work out of the box for\neveryone. And I would prefer that PostgreSQL works the same on every\nplatform out of the box. Obviously we've already lost this on systems\nwhere the default shmmax is 512kB (SCO OpenServer, Unixware) or 1 MB\n(Solaris), and reducing the parameters is clearly not an option. But if a\nplurality of systems have the default set at 4 MB or 8 MB then we should\nstop there so we don't upset a large fraction of users.\n\nBtw., do we have any data on how appropriate wal_buffers = 8 is?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 23 Jun 2001 18:32:17 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Multiple Indexing, performance impact " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> No. 
I'm concerned that PostgreSQL should work out of the box for\n> everyone.\n\nAgreed.\n\n> And I would prefer that PostgreSQL works the same on every\n> platform out of the box.\n\nWell, I'm not sure that we need to take that as far as saying that\ndefault NBuffers can't vary across platforms. It's not like we're\nadding or subtracting functionality. All I want is to have the default\nsetup tuned a little better than it is now.\n\n> Obviously we've already lost this on systems\n> where the default shmmax is 512kB (SCO OpenServer, Unixware) or 1 MB\n> (Solaris), and reducing the parameters is clearly not an option. But if a\n> plurality of systems have the default set at 4 MB or 8 MB then we should\n> stop there so we don't upset a large fraction of users.\n\nMaking sure that default NBuffers stays under the platform's default\nSHMMAX would accomplish that goal at least as well, probably better\nthan trying to have a one-size-fits-all default; especially if we've\nalready failed to do the latter.\n\n> Btw., do we have any data on how appropriate wal_buffers = 8 is?\n\nNot that I've seen. It looks like a rather ad-hoc choice to me...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 Jun 2001 13:17:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Multiple Indexing, performance impact " }, { "msg_contents": "> At 05:56 PM 22-06-2001 -0400, Bruce Momjian wrote:\n> >> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Since 64 is already too much to let 7.1 fit in SHMMAX = 1MB, I think\n> >> the original rationale for using 64 is looking pretty broken anyway.\n> >> Comments?\n> >\n> >BSD/OS has a 4MB max but we document how to increase it by recompiling\n> >the kernel. Maybe if we fail the startup we can tell them how to\n> >decrease the buffers in postgresql.conf file. Seems quite clear.\n> >\n> \n> Why is SHMMAX so low on some O/Ses? 
What are the advantages?\n> \n> My guess is it's a minimum vs median/popular situation. Get the same thing\n> looking at the default www.kernel.org linux kernel settings vs the Redhat\n> kernel settings.\n> \n> I'd personally prefer the popular situation. But would that mean the\n> minimum case can't even boot up to recompile? Maybe the BSD guys should\n> ship with two kernels then. FreeBSD esp, since it's easy to recompile the\n> kernel, just do two, during installation default to \"Regular\", with an\n> option for \"Tiny\".\n> \n> It's more fair that the people trying the extraordinary (16MB 386) should\n> be the ones doing the extra work.\n\nI think the problem is that with a default-sized kernel, the little guys\ncouldn't even boot the OS. Also, some of the OS's hard-wire things into\nthe kernel for performance reasons.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 11 Jul 2001 13:35:56 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Multiple Indexing, performance impact" } ]
[ { "msg_contents": "Is there a way to call SPI_prepare in case the precise datatype is not\nknown? \n\nExample, having a statement like 'select count(*) from foo where\nfieldname=$1' where I know that $1 is a stringish type and it _should_ be\nconvertable using xxx_in (CString-to-datum conversion functions), however,\nI do not know the precise type (could be name or varchar or text).\n\nI understand that SPI_execute uses pg_parse_and_rewrite to do the actual\nparsing and binding of function OIDs to the argument types. I'm looking at\ntransformExpr in parser/parse_expr.c, it seems to be the only place that\nactually does it. \n\nThere, I'm not sure how would it handle the case where paramtype is\nspecified as 'char', but it actually may need to be cast to 'text' for\nexecution. I guess I can find out by just trying it, just wanted to ask\nfirst. :)\n\nIdeally, I would like to be able to say \"I don't know what this parameter\nis like, treat it like it would be treated coming in a fully-formed query\n(i.e. using CSTRING conversions)\", but I'm not sure if its possible.\n\nThanks to anyone who can offer some help.\n\n-alex\n\n", "msg_date": "Fri, 22 Jun 2001 00:33:19 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "SPI_prepare for semi-unknown types" }, { "msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> Example, having a statement like 'select count(*) from foo where\n> fieldname=$1' where I know that $1 is a stringish type and it _should_ be\n> convertable using xxx_in (CString-to-datum conversion functions), however,\n> I do not know the precise type (could be name or varchar or text).\n\nDeclare the parameter as text, and then coerce what you are given to\ntext before you execute the statement. 
You don't get any free\nadaptation to new datatypes in an already-completed plan.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Jun 2001 09:38:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SPI_prepare for semi-unknown types " } ]
[ { "msg_contents": "Tom,\n\nI've read that 7.2 release is planned in 2 months.\nDo you have a plan to work with index_formtuple ?\nThis is a stopper for our work on implementation of\nB-tree using GiST. We have implemented B-Tree for int4\nand timestamp/datetime types but with some hack\n(all keys are of type varlena and pass-by-reference) and\nwaiting for resolution of known problem with\nindex_formtuple. We badly need it in our real life projects\nin anyway.\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 22 Jun 2001 18:13:25 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "7.2 release and index_formtuple" }, { "msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> I've read that 7.2 release is planned in 2 months.\n> Do you have a plan to work with index_formtuple ?\n\nAt the moment I'm trying to concentrate on fixing VACUUM.\nWhen that's done, maybe I can spend some time on GIST issues.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Jun 2001 11:17:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 release and index_formtuple " }, { "msg_contents": "On Fri, 22 Jun 2001, Tom Lane wrote:\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > I've read that 7.2 release is planned in 2 months.\n> > Do you have a plan to work with index_formtuple ?\n>\n> At the moment I'm trying to concentrate on fixing VACUUM.\n> When that's done, maybe I can spend some time on GIST issues.\n\nwell, hope you will be success\n\n>\n> \t\t\tregards, tom lane\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of 
AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 22 Jun 2001 18:44:43 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "Re: 7.2 release and index_formtuple " } ]
[ { "msg_contents": "If someone was interested in joining the development team, where would\nthey...\n\n- Find a description of the open source development process used by the\nPostgreSQL team.\n\n- Find the development environment (OS, system, compilers, etc)\nrequired to develop code.\n\n- Find an area or two that needs some support.\n\n\nThanks\n", "msg_date": "Fri, 22 Jun 2001 11:55:46 -0400", "msg_from": "\"P. Dwayne Miller\" <dmiller@espgroup.net>", "msg_from_op": true, "msg_subject": "Joining the team" }, { "msg_contents": "I can't speak for the PostgreSQL team (well, is there such a thing?), but\nin general, it doesn't quite work this way.\n\nYou usually start out as a user, find out the development environment,\netc, etc, use it for your project, then you find that there are few things\nabout a project that really make you itch. Then you scratch that itch, and\nhopefully send your patch to the team. :)\n\nOn Fri, 22 Jun 2001, P. Dwayne Miller wrote:\n\n> If someone was interested in joining the development team, where would\n> they...\n> \n> - Find a description of the open source development process used by the\n> PostgreSQL team.\n> \n> - Find the development environment (OS, system, compilers, etc)\n> required to develop code.\n> \n> - Find an area or two that needs some support.\n> \n> \n> Thanks\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n> \n\n", "msg_date": "Fri, 22 Jun 2001 12:53:20 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: Joining the team" }, { "msg_contents": "On Friday 22 June 2001 11:55, P. 
Dwayne Miller wrote:\n> If someone was interested in joining the development team, where would\n> they...\n> - Find a description of the open source development process used by the\n> PostgreSQL team.\n\nRead HACKERS for six months (or a full release cycle, whichever is longer). \nReally. HACKERS _is_ the process. The process is not well documented (AFAIK \n-- it may be somewhere that I am not aware of) -- and it changes continually.\n\n> - Find the development environment (OS, system, compilers, etc)\n> required to develop code.\n\nDevelopers Corner on the website has links to this information. The \ndistribution tarball itself includes all the extra tools and documents that \ngo beyond a good Unix-like development environment. In general, a modern \nunix with a modern gcc, GNU make or equivalent, autoconf (of a particular \nversion), and good working knowledge of those tools are required.\n\n> - Find an area or two that needs some support.\n\nThe TODO list.\n\nYou've made the first step, by finding and subscribing to HACKERS. Once you \nfind an area to look at in the TODO, and have read the documentation on the \ninternals, etc, then you check out a current CVS, write what you are going to \nwrite (keeping your CVS checkout up to date in the process), and make up a \npatch (as a context diff only) and send to the PATCHES list, preferably. \n\nDiscussion on the patch typically happens here.
Typically, you \nwould be added as a developer on the list on the website when one of the \nother developers recommends it. Membership on the steering committee is by \ninvitation only, by the other steering committee members, from what I have \ngathered watching froma distance.\n\nI make these statements from having watched the process for over two years.\n\nTo see a good example of how one goes about this, search the archives for the \nname 'Tom Lane' and see what his first post consisted of, and where he took \nthings. In particular, note that this hasn't been _that_ long ago -- and his \nbugfixing and general deep knowledge with this codebase is legendary. Take a \nfew days to read after him. And pay special attention to both the sheer \nquantity as well as the painstaking quality of his work. Both are in high \ndemand.\n\nHope that helps!\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 22 Jun 2001 12:59:03 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Joining the team" }, { "msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> You usually start out as a user, find out the development environment,\n> etc, etc, use it for your project, then you find that there are few things\n> about a project that really make you itch. Then you scratch that itch, and\n> hopefully send your patch to the team. :)\n\nRight. Another comment is that there is no \"required development\nenvironment\". Ideally Postgres should run on pretty nearly any Unix-ish\noperating system and pretty nearly any ANSI-C-ish compiler and C\nlibrary. So use whatever floats your boat. If things don't work\nnicely, then you've got your first project: port to your preferred\nplatform. 
I know one of the first things I had to do when I started\nusing Postgres was clean up some problems with its portability to HPUX.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Jun 2001 13:21:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Joining the team " }, { "msg_contents": "\nI have added this to the developer's FAQ.\n\n---------------------------------------------------------------------------\n\n> On Friday 22 June 2001 11:55, P. Dwayne Miller wrote:\n> > If someone was interested in joining the development team, where would\n> > they...\n> > - Find a description of the open source development process used by the\n> > PostgreSQL team.\n> \n> Read HACKERS for six months (or a full release cycle, whichever is longer). \n> Really. HACKERS _is_the process. The process is not well documented (AFAIK \n> -- it may be somewhere that I am not aware of) -- and it changes continually.\n> \n> > - Find the development environment (OS, system, compilers, etc)\n> > required to develop code.\n> \n> Developers Corner on the website has links to this information. The \n> distribution tarball itself includes all the extra tools and documents that \n> go beyond a good Unix-like development environment. In general, a modern \n> unix with a modern gcc, GNU make or equivalent, autoconf (of a particular \n> version), and good working knowledge of those tools are required.\n> \n> > - Find an area or two that needs some support.\n> \n> The TODO list.\n> \n> You've made the first step, by finding and subscribing to HACKERS. Once you \n> find an area to look at in the TODO, and have read the documentation on the \n> internals, etc, then you check out a current CVS,write what you are going to \n> write (keeping your CVS checkout up to date in the process), and make up a \n> patch (as a context diff only) and send to the PATCHES list, prefereably. \n> \n> Discussion on the patch typically happens here. 
If the patch adds a major \n> feature, it would be a good idea to talk about it first on the HACKERS list, \n> in order to increase the chances of it being accepted, as well as toavoid \n> duplication of effort. Note that experienced developers with a proven track \n> record usually get the big jobs -- for more than one reason. Also note that \n> PostgreSQL is highly portable -- nonportable code will likely be dismissed \n> out of hand. \n> \n> Once your contributions get accepted, things move from there. Typically, you \n> would be added as a developer on the list on the website when one of the \n> other developers recommends it. Membership on the steering committee is by \n> invitation only, by the other steering committee members, from what I have \n> gathered watching froma distance.\n> \n> I make these statements from having watched the process for over two years.\n> \n> To see a good example of how one goes about this, search the archives for the \n> name 'Tom Lane' and see what his first post consisted of, and where he took \n> things. In particular, note that this hasn't been _that_ long ago -- and his \n> bugfixing and general deep knowledge with this codebase is legendary. Take a \n> few days to read after him. And pay special attention to both the sheer \n> quantity as well as the painstaking quality of his work. Both are in high \n> demand.\n> \n> Hope that helps!\n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 27 Nov 2001 13:26:56 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Joining the team" }, { "msg_contents": "On Tuesday 27 November 2001 01:26 pm, Bruce Momjian wrote:\n> I have added this to the developer's FAQ.\n> > Developers Corner on the website has links to this information. The\n> > distribution tarball itself includes all the extra tools and documents\n\nThat would be developers.postgresql.org now, right?\n\n> > Hope that helps!\n\nMust've helped somebody.... :-)\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 27 Nov 2001 18:03:39 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Joining the team" }, { "msg_contents": "> On Tuesday 27 November 2001 01:26 pm, Bruce Momjian wrote:\n> > I have added this to the developer's FAQ.\n> > > Developers Corner on the website has links to this information. The\n> > > distribution tarball itself includes all the extra tools and documents\n> \n> That would be developers.postgresql.org now, right?\n\nYes. I added some http links to the text.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 27 Nov 2001 18:40:47 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Joining the team" } ]
[ { "msg_contents": "Can one of you knowledgeable people tell me why current CVS as of \na week ago would have the backend running this query grow to \n600 meg+?\n\n\nINSERT into traffic_summary\nSELECT asn,protocol,\ncast(sum(pkts_src) as float) as pkts_src,\ncast(sum(pkts_dst) as float) as pkts_dst,\ncast(sum(bytes_src) as float) as bytes_src,\ncast(sum(bytes_dst) as float) as bytes_dst,\ncast(sum(secs_src) as float) as secs_src,\ncast(sum(secs_dst) as float) as secs_dst,\nmin(early) as early,\nmax(late) as late \nFROM traffic \nWHERE early between '2001-06-01 00:00:00'::timestamp and\n '2001-06-18 23:59:59'::timestamp \nGROUP BY asn,protocol,date_part('epoch',early)/60/60;\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Fri, 22 Jun 2001 11:03:41 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Why would this use 600Meg of VM?" }, { "msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> Can one of you knowledgeable people tell me why current CVS as of \n> a week ago would have the backend running this query grow to \n> 600 meg+?\n\nSounds like there's still a memory leak in there somewhere, but the\nquery looks fairly harmless. Could we see enough info to reproduce\nthis? (Table declarations, explain output, etc) Another useful\nattack would be to let the query run awhile, then set a breakpoint\nat sbrk(). Stack traces from the first few hits of the breakpoint\nwould give a pretty good indication of where the leak is, probably.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Jun 2001 12:55:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why would this use 600Meg of VM? " }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010622 11:55]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> > Can one of you knowledgeable people tell me why current CVS as of \n> > a week ago would have the backend running this query grow to \n> > 600 meg+?\n> \n> Sounds like there's still a memory leak in there somewhere, but the\n> query looks fairly harmless. Could we see enough info to reproduce\n> this? (Table declarations, explain output, etc) Another useful\n> attack would be to let the query run awhile, then set a breakpoint\n> at sbrk(). Stack traces from the first few hits of the breakpoint\n> would give a pretty good indication of where the leak is, probably.\n> \n> \t\t\tregards, tom lane\n\nneteng@tide.iadfw.net$ psql traffic_analysis\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\ntraffic_analysis=# analyze traffic;\nANALYZE\ntraffic_analysis=# \\i traffic_sum.sql \npsql:traffic_sum.sql:15: NOTICE: QUERY PLAN:\n\nSubquery Scan *SELECT* (cost=8471740.01..8994414.10 rows=1900633\nwidth=72)\n -> Aggregate (cost=8471740.01..8994414.10 rows=1900633 width=72)\n -> Group (cost=8471740.01..8614287.49 rows=19006331\nwidth=72)\n -> Sort (cost=8471740.01..8471740.01 rows=19006331\nwidth=72)\n -> Seq Scan on traffic (cost=0.00..615601.86\nrows=19006331 width=72)\n\nEXPLAIN\ntraffic_analysis=# \n\n\nneteng@tide.iadfw.net$ cat traffic_sum.sql\nEXPLAIN\nINSERT into traffic_summary\nSELECT asn,protocol,\ncast(sum(pkts_src) as float) as pkts_src,\ncast(sum(pkts_dst) as float) as pkts_dst,\ncast(sum(bytes_src) as float) as bytes_src,\ncast(sum(bytes_dst) as float) as bytes_dst,\ncast(sum(secs_src) as float) as secs_src,\ncast(sum(secs_dst) as float) as secs_dst,\nmin(early) as early,\nmax(late) as late \nFROM traffic \nWHERE early between '2001-06-01 00:00:00'::timestamp and\n '2001-06-18 23:59:59'::timestamp \nGROUP BY asn,protocol,date_part('epoch',early)/60/60;\nneteng@tide.iadfw.net$ \n\nWhat else? \n\nFailing a way to actually get this query to run, how would you suggest\naggregating the data down to 1 hour summaries?\n\nneteng@tide.iadfw.net$ psql traffic_analysis\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\ntraffic_analysis=# \\d traffic\n Table \"traffic\"\n Attribute | Type | Modifier \n-----------+--------------------------+----------\n asn | integer | \n protocol | integer | \n pkts_src | bigint | \n pkts_dst | bigint | \n bytes_src | bigint | \n bytes_dst | bigint | \n secs_src | bigint | \n secs_dst | bigint | \n early | timestamp with time zone | \n late | timestamp with time zone | \nIndex: traffic_early\n\ntraffic_analysis=# \\d traffic_summary\n Table \"traffic_summary\"\n Attribute | Type | Modifier \n-----------+--------------------------+----------\n asn | integer | \n protocol | integer | \n pkts_src | double precision | \n pkts_dst | double precision | \n bytes_src | double precision | \n bytes_dst | double precision | \n secs_src | double precision | \n secs_dst | double precision | \n early | timestamp with time zone | \n late | timestamp with time zone | \n\ntraffic_analysis=# \ntraffic_analysis=# \\d traffic_early\n Index \"traffic_early\"\n Attribute | Type \n-----------+--------------------------\n early | timestamp with time zone\nbtree\n\ntraffic_analysis=# \n\nLER\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Fri, 22 Jun 2001 12:25:10 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: Why would this use 600Meg of VM?" }, { "msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> What else? \n\nIf you don't want to do the debugger work yourself, could you send me\nenough of the data to let me reproduce the problem?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Jun 2001 13:31:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why would this use 600Meg of VM? " }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010622 12:31]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> > What else? \n> \n> If you don't want to do the debugger work yourself, could you send me\n> enough of the data to let me reproduce the problem?\nhow much data do you need? It's multi hundred megs. I can probably\nget permission to give you a login on tide if that would be easier?\n\nLER\n\n> \n> \t\t\tregards, tom lane\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Fri, 22 Jun 2001 12:40:35 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: Why would this use 600Meg of VM?" }, { "msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> Can one of you knowledgeable people tell me why current CVS as of \n> a week ago would have the backend running this query grow to \n> 600 meg+?\n\nThe answer: the query has nothing to do with it. However, the\ndeferred triggers you have on the target relation have a lot to do\nwith it. It's all deferred-trigger-event storage.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Jun 2001 01:06:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why would this use 600Meg of VM? " }, { "msg_contents": "At 01:06 24/06/01 -0400, Tom Lane wrote:\n>\n>The answer: the query has nothing to do with it. However, the\n>deferred triggers you have on the target relation have a lot to do\n>with it. It's all deferred-trigger-event storage.\n\nWould it be worth using a local (system) temporary table for this sort of\nthing?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sun, 24 Jun 2001 15:45:52 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Why would this use 600Meg of VM? " }, { "msg_contents": "* Philip Warner <pjw@rhyme.com.au> [010624 00:46]:\n> At 01:06 24/06/01 -0400, Tom Lane wrote:\n> >\n> >The answer: the query has nothing to do with it. However, the\n> >deferred triggers you have on the target relation have a lot to do\n> >with it. It's all deferred-trigger-event storage.\n> \n> Would it be worth using a local (system) temporary table for this sort of\n> thing?\nIf this is an FK check, that I can probably turn off. \n\nI wonder if there is a better way? \n> \n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.B.N. 75 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 24 Jun 2001 06:05:29 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: Why would this use 600Meg of VM?" }, { "msg_contents": "> At 01:06 24/06/01 -0400, Tom Lane wrote:\n> >\n> >The answer: the query has nothing to do with it. However, the\n> >deferred triggers you have on the target relation have a lot to do\n> >with it. It's all deferred-trigger-event storage.\n> \n> Would it be worth using a local (system) temporary table for this sort of\n> thing?\n\nJan initially wanted to store large FK events in a file when they got too\nbig but never completed it.\n\nThe TODO list has:\n\n\t* Add deferred trigger queue file (Jan)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 24 Jun 2001 17:08:10 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why would this use 600Meg of VM?" } ]
[ { "msg_contents": "Attached is documentation describing plperlu differences from plperl.\n\nPlease apply :)\n\n--\nAlex Pilosov | http://www.acedsl.com/home.html\nCTO - Acecape, Inc. | AceDSL:The best ADSL in the world\n325 W 38 St. Suite 1005 | (Stealth Marketing Works! :)\nNew York, NY 10018 |", "msg_date": "Fri, 22 Jun 2001 16:44:46 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "plperl doc" }, { "msg_contents": "\nThanks. Doc patch applied. (No reason to delay most doc patches, I\nthink.)\n\n> Attached is documentation describing plperlu differences from plperl.\n> \n> Please apply :)\n> \n> --\n> Alex Pilosov | http://www.acedsl.com/home.html\n> CTO - Acecape, Inc. | AceDSL:The best ADSL in the world\n> 325 W 38 St. Suite 1005 | (Stealth Marketing Works! :)\n> New York, NY 10018 |\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Jun 2001 17:37:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: plperl doc" } ]
[ { "msg_contents": "\nI make queries on catalog tables in order to get metadata about table\nattributes. I need this metadata in order to help me controlling the data\nthat users enter using html forms dynamically generated with PHP.\n\nThe problem I've found is that the attribute that stores the info about data\nlength (attribute atttypmod of catalog table pg_attribute) is some kind of\ninternal coding. For example, for an attribute varchar(100) atttypmod value\nis 104; for an attribute numeric(6,0) atttypmod value is 393220.\n\nI guess I would need some kind of function in order to get the actual length\nfor the attributes. Does this function exist? Where can I find it?\n\nAny help will be appreciated.\n\n--\nBernardo Pons\n\n\nP.S.\n\nFor example, typical output of \\d <tablename> in psql is:\n\n Attribute | Type | Modifier\n-----------------+--------------+----------\n CustomerId | numeric(6,0) | not null\n Name | varchar(100) |\n Series | numeric(2,0) | not null\n Number | numeric(6,0) | not null\n ObjectId | numeric(6,0) |\n ObjectType | numeric(3,0) |\n Quantity | numeric(8,2) | not null\n Price | numeric(8,2) | not null\n\nUsing a query like\n\nSELECT a.attname, t.typname, a.atttypmod, a.attnum FROM pg_class c,\npg_attribute a, pg_type t WHERE c.relname = <tablename> AND a.attnum > 0 AND\na.attrelid = c.oid AND a.atttypid = t.oid ORDER BY a.attnum;\n\non system catalog tables I get:\n\n attname | typname | atttypmod | attnum\n-----------------+---------+-----------+--------\n CustomerId | numeric | 393220 | 1\n Name | varchar | 104 | 2\n Series | numeric | 131076 | 1\n Number | numeric | 393220 | 2\n ObjectId | numeric | 393220 | 3\n ObjectType | numeric | 196612 | 4\n Quantity | numeric | 524294 | 7\n Price | numeric | 524294 | 8\n\n\n", "msg_date": "Fri, 22 Jun 2001 23:07:09 +0200", "msg_from": "\"Bernardo Pons\" <bernardo@atlas-iap.es>", "msg_from_op": true, "msg_subject": "Extracting metadata about attributes from catalog" }, { "msg_contents": "Do 'psql -E ...', it will display actual queries used by psql.\n\nYour particular query is:\nSELECT a.attname, t.typname, a.attlen, a.atttypmod, a.attnotnull,\na.atthasdef, a.attnum\nFROM pg_class c, pg_attribute a, pg_type t\nWHERE c.relname = '...tablename...'\n AND a.attnum > 0 AND a.attrelid = c.oid AND a.atttypid = t.oid\nORDER BY a.attnum\n\nAnd pg_type has all information you need.\n\nOn Fri, 22 Jun 2001, Bernardo Pons wrote:\n\n> \n> I make queries on catalog tables in order to get metadata about table\n> attributes. I need this metadata in order to help me controlling the data\n> that users enter using html forms dynamically generated with PHP.\n> \n> The problem I've found is that the attribute that stores the info about data\n> length (attribute atttypmod of catalog table pg_attribute) is some kind of\n> internal coding. For example, for an attribute varchar(100) atttypmod value\n> is 104; for an attribute numeric(6,0) atttypmod value is 393220.\n> \n> I guess I would need some kind of function in order to get the actual length\n> for the attributes. Does this function exist? Where can I find it?\n> \n> Any help will be appreciated.\n> \n> --\n> Bernardo Pons\n> \n> \n> P.S.\n> \n> For example, typical output of \\d <tablename> in psql is:\n> \n> Attribute | Type | Modifier\n> -----------------+--------------+----------\n> CustomerId | numeric(6,0) | not null\n> Name | varchar(100) |\n> Series | numeric(2,0) | not null\n> Number | numeric(6,0) | not null\n> ObjectId | numeric(6,0) |\n> ObjectType | numeric(3,0) |\n> Quantity | numeric(8,2) | not null\n> Price | numeric(8,2) | not null\n> \n> Using a query like\n> \n> SELECT a.attname, t.typname, a.atttypmod, a.attnum FROM pg_class c,\n> pg_attribute a, pg_type t WHERE c.relname = <tablename> AND a.attnum > 0 AND\n> a.attrelid = c.oid AND a.atttypid = t.oid ORDER BY a.attnum;\n> \n> on system catalog tables I get:\n> \n> attname | typname | atttypmod | attnum\n> -----------------+---------+-----------+--------\n> CustomerId | numeric | 393220 | 1\n> Name | varchar | 104 | 2\n> Series | numeric | 131076 | 1\n> Number | numeric | 393220 | 2\n> ObjectId | numeric | 393220 | 3\n> ObjectType | numeric | 196612 | 4\n> Quantity | numeric | 524294 | 7\n> Price | numeric | 524294 | 8\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> \n\n", "msg_date": "Fri, 22 Jun 2001 17:45:50 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: Extracting metadata about attributes from catalog" }, { "msg_contents": "\"Bernardo Pons\" <bernardo@atlas-iap.es> writes:\n> The problem I've found is that the attribute that stores the info about data\n> length (attribute atttypmod of catalog table pg_attribute) is some kind of\n> internal coding. For example, for an attribute varchar(100) atttypmod value\n> is 104; for an attribute numeric(6,0) atttypmod value is 393220.\n\nYup.\n\n> I guess I would need some kind of function in order to get the actual length\n> for the attributes. Does this function exist? Where can I find it?\n\nIn 7.1, \"format_type(typeoid, typmod)\" is what produces the type\ndisplays seen in psql. This may or may not be exactly what you want,\nbut that's how the knowledge of typmod encoding is exported at the\nmoment.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Jun 2001 17:58:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Extracting metadata about attributes from catalog " }, { "msg_contents": "\n> Do 'psql -E ...', it will display actual queries used by psql.\n\nI already do it. At the end of my first message there was an example with\nexactly the query you suggested.\n\n> Your particular query is:\n> SELECT a.attname, t.typname, a.attlen, a.atttypmod, a.attnotnull,\n> a.atthasdef, a.attnum\n> FROM pg_class c, pg_attribute a, pg_type t\n> WHERE c.relname = '...tablename...'\n> AND a.attnum > 0 AND a.attrelid = c.oid AND a.atttypid = t.oid\n> ORDER BY a.attnum\n>\n> And pg_type has all information you need.\n\nBut, I'm afraid pg_type has not the information I need.\n\nJust in case I missed something you have seen I wrote down a query showing\nall attributes of the pg_type\n\nSELECT a.attname, t.typname, t.typowner, t.typlen, t.typprtlen, t.typbyval,\nt.typtype, t.typisdefined, t.typdelim, t.typrelid, t.typelem, t.typinput,\nt.typoutput, t.typreceive, t.typsend, t.typalign, t.typdefault, a.atttypmod,\na.attnum FROM pg_class c, pg_attribute a, pg_type t WHERE c.relname =\n..TABLENAME.. AND a.attnum > 0 AND a.attrelid = c.oid AND a.atttypid = t.oid\nORDER BY a.attnum;\n\nbut there's neither a field showing me, for example, a value 100 for a\nvarchar(100) field nor two fields showing value 6 and 2 for a numeric(6,2)\nfield.\n\nMaybe I'm missing something from your answer?\n\nRegards,\n\n--\nBernardo Pons\n\n", "msg_date": "Sun, 24 Jun 2001 12:05:12 +0200", "msg_from": "\"Bernardo Pons\" <bernardo@atlas-iap.es>", "msg_from_op": true, "msg_subject": "RE: Extracting metadata about attributes from catalog" }, { "msg_contents": "\n> > I guess I would need some kind of function in order to get the\n> actual length\n> > for the attributes. Does this function exist? Where can I find it?\n>\n> In 7.1, \"format_type(typeoid, typmod)\" is what produces the type\n> displays seen in psql. This may or may not be exactly what you want,\n> but that's how the knowledge of typmod encoding is exported at the\n> moment.\n\nThere's 957 functions in psql (output of \\df).\n\nWould I be so lucky that none of these functions is the one that you\nsuggested? :-(\n\nIs \"format_type(typeoid, typmod)\" an internal C language function of the\nPostgres backend? (please... please... say no :-)\n\nIf so (I'm afraid it will be) the only way to extract the actual length of a\nvarchar field or length of integer/fractional part of a numeric field would\nbe implementing the same functions the backend uses in my PHP modules. Any\nother suggestion?\n\nRegards,\n\n--\nBernardo Pons\n\n", "msg_date": "Sun, 24 Jun 2001 12:05:13 +0200", "msg_from": "\"Bernardo Pons\" <bernardo@atlas-iap.es>", "msg_from_op": true, "msg_subject": "RE: Extracting metadata about attributes from catalog " }, { "msg_contents": "\"Bernardo Pons\" <bernardo@atlas-iap.es> writes:\n>> In 7.1, \"format_type(typeoid, typmod)\" is what produces the type\n>> displays seen in psql. This may or may not be exactly what you want,\n>> but that's how the knowledge of typmod encoding is exported at the\n>> moment.\n\n> There's 957 functions in psql (output of \\df).\n\n> Would I be so lucky that none of these functions is the one that you\n> suggested? :-(\n\nregression=# \\df format_type\n List of functions\n Result | Function | Arguments\n--------+-------------+--------------\n text | format_type | oid, integer\n(1 row)\n\nI did say 7.1, however. What version are you using?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Jun 2001 10:59:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Extracting metadata about attributes from catalog " }, { "msg_contents": "On Sun, 24 Jun 2001, Bernardo Pons wrote:\n\n> \n> > Do 'psql -E ...', it will display actual queries used by psql.\n> \n> I already do it. At the end of my first message there was an example with\n> exactly the query you suggested.\n> \n> > Your particular query is:\n> > SELECT a.attname, t.typname, a.attlen, a.atttypmod, a.attnotnull,\n> > a.atthasdef, a.attnum\n> > FROM pg_class c, pg_attribute a, pg_type t\n> > WHERE c.relname = '...tablename...'\n> > AND a.attnum > 0 AND a.attrelid = c.oid AND a.atttypid = t.oid\n> > ORDER BY a.attnum\nSorry about that. For parameterized types (like numeric, varchar),\natttypmod contains specific information. For varchar-like parameters, its\nlength of the field+4 (54 means varchar(50), for example). For numeric\nparameter (numeric(a,b)), its 327680*b+a\n\nI'm not sure if there's a better (and more documented) way to decode those\nnumbers, though.....\n\n", "msg_date": "Sun, 24 Jun 2001 11:08:01 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "RE: Extracting metadata about attributes from catalog" } ]
[ { "msg_contents": "Attached is a patch (including documentation this time :) containing two\nfunctions, base64_encode(bytea) and base64_decode(text) with obvious\nfunctionality.\n\nCode was initially taken from public-domain base64.c by John Walker but\nmuch simplified (such as, breaking up long string into multiple lines is\nnot done, EBCDIC support removed). \n\n--\nAlex Pilosov | http://www.acedsl.com/home.html\nCTO - Acecape, Inc. | AceDSL:The best ADSL in the world\n325 W 38 St. Suite 1005 | (Stealth Marketing Works! :)\nNew York, NY 10018 |", "msg_date": "Fri, 22 Jun 2001 17:26:56 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "[PATCH] by request: base64 for bytea" }, { "msg_contents": "Your patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> Attached is a patch (including documentation this time :) containing two\n> functions, base64_encode(bytea) and base64_decode(text) with obvious\n> functionality.\n> \n> Code was initially taken from public-domain base64.c by John Walker but\n> much simplified (such as, breaking up long string into multiple lines is\n> not done, EBCDIC support removed). \n> \n> --\n> Alex Pilosov | http://www.acedsl.com/home.html\n> CTO - Acecape, Inc. | AceDSL:The best ADSL in the world\n> 325 W 38 St. Suite 1005 | (Stealth Marketing Works! :)\n> New York, NY 10018 |\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Jun 2001 21:55:46 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] by request: base64 for bytea" }, { "msg_contents": "On Fri, Jun 22, 2001 at 09:55:46PM -0400, Bruce Momjian wrote:\n> Your patch has been added to the PostgreSQL unapplied patches list at:\n\n> > Attached is a patch (including documentation this time :) containing two\n> > functions, base64_encode(bytea) and base64_decode(text) with obvious\n> > functionality.\n\nBtw, there are functions in form encode(data, 'base64'),\ndecode(data, 'base64') in contrib/pgcrypto. They do also\nencode(data, 'hex'). In the future I need to do probably\nencode(data, 'pgp-armor') too...\n\nI agree those functionality should be in core code, and if\nthe Alex ones get there, maybe he could use same interface?\n\nOr I can extract it out from pgcrypto and submit to core ;)\nI simply had not a need for it because I used those with\npgcrypto, but Alex seems to hint that there would be bit of\ninterest otherwise too.\n\n-- \nmarko\n\n", "msg_date": "Sat, 23 Jun 2001 13:11:34 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": false, "msg_subject": "Re: [PATCH] by request: base64 for bytea" }, { "msg_contents": "On Sat, 23 Jun 2001, Marko Kreen wrote:\n\n> On Fri, Jun 22, 2001 at 09:55:46PM -0400, Bruce Momjian wrote:\n> > Your patch has been added to the PostgreSQL unapplied patches list at:\n> \n> > > Attached is a patch (including documentation this time :) containing two\n> > > functions, base64_encode(bytea) and base64_decode(text) with obvious\n> > > functionality.\n> \n> Btw, there are functions in form encode(data, 'base64'),\n> decode(data, 'base64') in contrib/pgcrypto. They do also\n> encode(data, 'hex'). In the future I need to do probably\n> encode(data, 'pgp-armor') too...\n> \n> I agree those functionality should be in core code, and if\n> the Alex ones get there, maybe he could use same interface?\nOy, I didn't notice them in contrib/pgcrypt.\n\nBruce, you can take my patch out of queue, stuff in pgcrypt is far more\ncomprehensive than what I done.\n\n> Or I can extract it out from pgcrypto and submit to core ;)\n> I simply had not a need for it because I used those with\n> pgcrypto, but Alex seems to hint that there would be bit of\n> interest otherwise too.\nI think encode/decode should be part of core, as they are basic functions\nto manipulate bytea data...\n\n-alex\n\n", "msg_date": "Sat, 23 Jun 2001 08:42:46 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] by request: base64 for bytea" }, { "msg_contents": "On Sat, Jun 23, 2001 at 08:42:46AM -0400, Alex Pilosov wrote:\n> On Sat, 23 Jun 2001, Marko Kreen wrote:\n> > Or I can extract it out from pgcrypto and submit to core ;)\n> > I simply had not a need for it because I used those with\n> > pgcrypto, but Alex seems to hint that there would be bit of\n> > interest otherwise too.\n> I think encode/decode should be part of core, as they are basic functions\n> to manipulate bytea data...\n\nOk, I think I look into it. I am anyway preparing a big update\nto pgcrypto.\n\nQuestion to -hackers: currently there is not possible to cast\nbytea to text and vice-versa. Is this intentional or bug?\n\nIt is weird because internal representation is exactly the same.\nAs I want my functions to operate on both, do I need to create\nseparate function entries to every combination of parameters?\nIt gets crazy on encrypt_iv(data, key, iv, type) which has 3\nparameters that can be both bytea or text...\n\n\n-- \nmarko\n\n", "msg_date": "Sat, 23 Jun 2001 15:43:33 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": false, "msg_subject": "Re: [PATCH] by request: base64 for bytea" }, { "msg_contents": "> On Sat, 23 Jun 2001, Marko Kreen wrote:\n> \n> > On Fri, Jun 22, 2001 at 09:55:46PM -0400, Bruce Momjian wrote:\n> > > Your patch has been added to the PostgreSQL unapplied patches list at:\n> > \n> > > > Attached is a patch (including documentation this time :) containing two\n> > > > functions, base64_encode(bytea) and base64_decode(text) with obvious\n> > > > functionality.\n> > \n> > Btw, there are functions in form encode(data, 'base64'),\n> > decode(data, 'base64') in contrib/pgcrypto. They do also\n> > encode(data, 'hex'). In the future I need to do probably\n> > encode(data, 'pgp-armor') too...\n> > \n> > I agree those functionality should be in core code, and if\n> > the Alex ones get there, maybe he could use same interface?\n> Oy, I didn't notice them in contrib/pgcrypt.\n> \n> Bruce, you can take my patch out of queue, stuff in pgcrypt is far more\n> comprehensive than what I done.\n\nSure. Done. Funny we didn't need them as much for crypto but we do\nneed them for binary insertion into the database.\n\n> \n> > Or I can extract it out from pgcrypto and submit to core ;)\n> > I simply had not a need for it because I used those with\n> > pgcrypto, but Alex seems to hint that there would be bit of\n> > interest otherwise too.\n> I think encode/decode should be part of core, as they are basic functions\n> to manipulate bytea data...\n\nAgreed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 23 Jun 2001 09:48:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] by request: base64 for bytea" }, { "msg_contents": "> On Sat, Jun 23, 2001 at 08:42:46AM -0400, Alex Pilosov wrote:\n> > On Sat, 23 Jun 2001, Marko Kreen wrote:\n> > > Or I can extract it out from pgcrypto and submit to core ;)\n> > > I simply had not a need for it because I used those with\n> > > pgcrypto, but Alex seems to hint that there would be bit of\n> > > interest otherwise too.\n> > I think encode/decode should be part of core, as they are basic functions\n> > to manipulate bytea data...\n> \n> Ok, I think I look into it. I am anyway preparing a big update\n> to pgcrypto.\n> \n> Question to -hackers: currently there is not possible to cast\n> bytea to text and vice-versa. Is this intentional or bug?\n> \n> It is weird because internal representation is exactly the same.\n> As I want my functions to operate on both, do I need to create\n> separate function entries to every combination of parameters?\n> It gets crazy on encrypt_iv(data, key, iv, type) which has 3\n> parameters that can be both bytea or text...\n\nWe just need to mark them as binary compatible. I will do that now and\ncommit. We really weren't sure what bytea was for in the past (or\nforgot) so I am sure it was an oversight.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 23 Jun 2001 09:49:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] by request: base64 for bytea" }, { "msg_contents": "\nPatch removed.\n \n> Attached is a patch (including documentation this time :) containing two\n> functions, base64_encode(bytea) and base64_decode(text) with obvious\n> functionality.\n> \n> Code was initially taken from public-domain base64.c by John Walker but\n> much simplified (such as, breaking up long string into multiple lines is\n> not done, EBCDIC support removed). \n> \n> --\n> Alex Pilosov | http://www.acedsl.com/home.html\n> CTO - Acecape, Inc. | AceDSL:The best ADSL in the world\n> 325 W 38 St. Suite 1005 | (Stealth Marketing Works! :)\n> New York, NY 10018 |\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 23 Jun 2001 09:49:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] by request: base64 for bytea" }, { "msg_contents": "> On Sat, Jun 23, 2001 at 08:42:46AM -0400, Alex Pilosov wrote:\n> > On Sat, 23 Jun 2001, Marko Kreen wrote:\n> > > Or I can extract it out from pgcrypto and submit to core ;)\n> > > I simply had not a need for it because I used those with\n> > > pgcrypto, but Alex seems to hint that there would be bit of\n> > > interest otherwise too.\n> > I think encode/decode should be part of core, as they are basic functions\n> > to manipulate bytea data...\n> \n> Ok, I think I look into it. I am anyway preparing a big update\n> to pgcrypto.\n> \n> Question to -hackers: currently there is not possible to cast\n> bytea to text and vice-versa. Is this intentional or bug?\n> \n> It is weird because internal representation is exactly the same.\n> As I want my functions to operate on both, do I need to create\n> separate function entries to every combination of parameters?\n> It gets crazy on encrypt_iv(data, key, iv, type) which has 3\n> parameters that can be both bytea or text...\n\nI have committed code to CVS to make bytea binary compatible with text.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 23 Jun 2001 22:37:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] by request: base64 for bytea" }, { "msg_contents": "> On Sat, Jun 23, 2001 at 08:42:46AM -0400, Alex Pilosov wrote:\n> > On Sat, 23 Jun 2001, Marko Kreen wrote:\n> > > Or I can extract it out from pgcrypto and submit to core ;)\n> > > I simply had not a need for it because I used those with\n> > > pgcrypto, but Alex seems to hint that there would be bit of\n> > > interest otherwise too.\n> > I think encode/decode should be part of core, as they are basic functions\n> > to manipulate bytea data...\n> \n> Ok, I think I look into it. I am anyway preparing a big update\n> to pgcrypto.\n> \n> Question to -hackers: currently there is not possible to cast\n> bytea to text and vice-versa. Is this intentional or bug?\n> \n> It is weird because internal representation is exactly the same.\n> As I want my functions to operate on both, do I need to create\n> separate function entries to every combination of parameters?\n> It gets crazy on encrypt_iv(data, key, iv, type) which has 3\n> parameters that can be both bytea or text...\n\nSorry, backed out bytea binary compatibility code. Tom says it will not\nwork.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 23 Jun 2001 22:41:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] by request: base64 for bytea" }, { "msg_contents": "Marko Kreen <marko@l-t.ee> writes:\n> Question to -hackers: currently there is not possible to cast\n> bytea to text and vice-versa. Is this intentional or bug?\n\nIntentional. text and friends do not like embedded nulls.\n\nIf there were a cast it would have to be one that implies\nan I/O conversion, just like any other type that contains\nnon-textual data.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 Jun 2001 22:46:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] by request: base64 for bytea " }, { "msg_contents": "On Sat, Jun 23, 2001 at 10:46:46PM -0400, Tom Lane wrote:\n> Marko Kreen <marko@l-t.ee> writes:\n> > Question to -hackers: currently there is not possible to cast\n> > bytea to text and vice-versa. Is this intentional or bug?\n> \n> Intentional. text and friends do not like embedded nulls.\n> \n> If there were a cast it would have to be one that implies\n> an I/O conversion, just like any other type that contains\n> non-textual data.\n\nWell, I have functions that should work on both - encode(),\n> digest(), hmac(). 
Probably should do then several entries. Ok.\n> \n> But what should be return type of decrypt()? I imagine well\n> situations where user wants to crypt both bytea and text data.\n> When there is even not a way to cast them to each other, then\n> he is stuck for no good reason.\nThere SHOULD be a text_bytea function to cast a text as bytea, as it is\nalways safe. (It doesn't exist yet, but its a trivial patch)\n\nFunction to cast bytea as text, I think, should do proper checking that\ninput did not contain nulls, and return text data back.\n\nYour encrypt/decrypt should take bytea and return bytea. Its user's\nresponsibility to cast the things to bytea when needed.\n\n", "msg_date": "Sun, 24 Jun 2001 11:13:04 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] by request: base64 for bytea" }, { "msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> Function to cast bytea as text, I think, should do proper checking that\n> input did not contain nulls, and return text data back.\n\nThat is most definitely not good enough. In MULTIBYTE installations\nyou'd have to also check that there were no illegal multibyte sequences.\n\nThe whole approach seems misguided to me anyway. bytea isn't equivalent\nto text and conversion functions based on providing incomplete binary\nequivalence are fundamentally wrong. hex or base64 encode/decode\nfunctions seem like reasonable conversion paths, or you could provide\na function that mimics the existing I/O conversions for bytea, ugly as\nthey are.\n\nIn the case that Marko is describing, it seems to me he is providing\ntwo independent sets of encryption functions, one for text and one\nfor bytea. That they happen to share code under the hood is an\nimplementation detail of his code, not a reason to contort the type\nsystem. 
If someone wanted to add functions to encrypt, say, polygons,\nwould you start looking for ways to create a binary equivalence between\npolygon and text? I sure hope not.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Jun 2001 11:22:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] by request: base64 for bytea " }, { "msg_contents": "Marko Kreen <marko@l-t.ee> writes:\n> But what should be return type of decrypt()?\n\nYou'll need more than one name: decrypt to text, decrypt to bytea, etc.\nThink about what happens when you need to support additional types.\nRelying on implicit conversions or binary equivalence will not scale.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Jun 2001 11:26:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] by request: base64 for bytea " }, { "msg_contents": "> Alex Pilosov <alex@pilosoft.com> writes:\n> > Function to cast bytea as text, I think, should do proper checking that\n> > input did not contain nulls, and return text data back.\n> \n> That is most definitely not good enough. In MULTIBYTE installations\n> you'd have to also check that there were no illegal multibyte sequences.\n> \n> The whole approach seems misguided to me anyway. bytea isn't equivalent\n> to text and conversion functions based on providing incomplete binary\n> equivalence are fundamentally wrong. 
hex or base64 encode/decode\n> functions seem like reasonable conversion paths, or you could provide\n> a function that mimics the existing I/O conversions for bytea, ugly as\n> they are.\n\nHe can create an output function just to text, and varchar, etc will work\nOK, right?\n\nI think the main issue is that char(), varchar(), text all input/output\nstrings of the same format while bytea has special backslash handling\nfor binary/null values.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 24 Jun 2001 17:05:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] by request: base64 for bytea" }, { "msg_contents": "On Sun, 24 Jun 2001, Tom Lane wrote:\n\n> Alex Pilosov <alex@pilosoft.com> writes:\n> > Function to cast bytea as text, I think, should do proper checking that\n> > input did not contain nulls, and return text data back.\n> \n> That is most definitely not good enough. In MULTIBYTE installations\n> you'd have to also check that there were no illegal multibyte sequences.\nTrue, but see below.\n\n> The whole approach seems misguided to me anyway. bytea isn't equivalent\n> to text and conversion functions based on providing incomplete binary\n> equivalence are fundamentally wrong. hex or base64 encode/decode\n> functions seem like reasonable conversion paths, or you could provide\n> a function that mimics the existing I/O conversions for bytea, ugly as\n> they are.\n>\n> In the case that Marko is describing, it seems to me he is providing\n> two independent sets of encryption functions, one for text and one\n> for bytea. That they happen to share code under the hood is an\n> implementation detail of his code, not a reason to contort the type\n> system. 
If someone wanted to add functions to encrypt, say, polygons,\n> would you start looking for ways to create a binary equivalence between\n> polygon and text? I sure hope not.\n\nWell, encrypt/decrypt are special kinds of functions. When the data is\ndecrypted, its type is not known, as it is not stored anywhere in the\ndata. Caller is responsible to casting the result to whatever he needs to,\nthus, there must be some way to cast output of decrypted data to any type.\n\nI may be going a bit too far, but, if you think about it, if one wanted to\nencrypt a generic type t, these ar e the alternatives: \n\na) to encrypt, caller must use encrypt(t_out(val)) and to decrypt\nt_in(decrypt(val)).\n\nProblem with that is non-existance of CSTRING datatype as of yet, and a\npossible inefficiency of it compared to b).\n\nb) make encrypt operate on 'opaque' type, and just encrypt raw data in\nmemory, as many as there are, and store the original varlen separately.\n(most encrypt-decrypt algorithms do not preserve data length anyway, they\noperate in blocks of n bytes). Question in this situation what to do with\ndecrypt, options are:\n\nb1) make decrypt return opaque and to allow conversion from opaque to any\ndatatype, (by blindly setting the oid of return type), I'm not sure how\nhard is this one to do with current type system, and do not like safety of\nthis since an ordinary user would be able to put garbage data into type\nthat may not be prepared to handle it.\n\nb2) make encrypt store the name of original type in encrypted data. make\ndecrypt return opaque which would contain (type,data,length) triple, and\nallow to cast opaque into any type but _checking_ that opaque has correct\nformat and that type stored in opaque matches type its being cast to.\n\nThis has additional benefit of being able to serialize/deserialize data,\npreserving type, which may be used by something else...\n\nIn my opinion, a) is probably the easiest option to implement. 
b2) is\n(IMHO) the most correct one, but it may be a bit too much work for not\nthat much of benefit?\n\nThis may be going a bit too far, since original question only dealt with\ntext-bytea conversions, but maybe its time to look at 'generic' functions\nwhich return generic types.\n\n", "msg_date": "Sun, 24 Jun 2001 18:20:39 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] by request: base64 for bytea " }, { "msg_contents": "On Sun, Jun 24, 2001 at 06:20:39PM -0400, Alex Pilosov wrote:\n> On Sun, 24 Jun 2001, Tom Lane wrote:\n> > In the case that Marko is describing, it seems to me he is providing\n> > two independent sets of encryption functions, one for text and one\n> > for bytea. That they happen to share code under the hood is an\n> > implementation detail of his code, not a reason to contort the type\n> > system. If someone wanted to add functions to encrypt, say, polygons,\n> > would you start looking for ways to create a binary equivalence between\n> > polygon and text? I sure hope not.\n> \n> Well, encrypt/decrypt are special kinds of functions. When the data is\n> decrypted, its type is not known, as it is not stored anywhere in the\n> data. Caller is responsible to casting the result to whatever he needs to,\n> thus, there must be some way to cast output of decrypted data to any type.\n> \n> I may be going a bit too far, but, if you think about it, if one wanted to\n> encrypt a generic type t, these ar e the alternatives: \n\n[ ... bunch of good ideas ... ]\n\nI do not want to go that far and imagine current encrypt() as\nsomething low-level, that encrypts a unstructured array of 8bit\nvalues. That makes bytea as 'natural' type to use for it.\nI now took the Tom suggestion that all functions do not operate\nwell on 8bit values - so now I declared that all my funtions\nthat _do_ operate on 8bit values, get data as bytea.\nBtw, the length is preserved - I use padding if needed. 
But no\nadditional info is preserved.\n\nNow, if you want to do something higher-level, in POV of\nPostgreSQL - to attach type data or something else, you can\nvery well build some higher-level functions on encrypt() that\nadd some additional structure for it. This is easy - you can\ndo it in SQL level if you want, but I also tried to make\nall crypto stuff accesible from C level too. I do not think it\nbelongs to current encrypt() - this is 'next level'. So I do\nnot worry about encrypting polygons yet.\n\nTho' current encrypt() has some 'negative' points on crypto POV.\nAs it does basically pure cipher, and has no structure I cant\nuse some higher features as key generation, attaching algorithm\ninfo to data and checksums. (Actually it _does_ support\nattaching a MD or HMAC to encrypted data, but I consider it as\ntoo hackish). So, ee, someday, when I have more time I would like\nto use current code as building block and do a minimal OpenPGP\nimplementation that does support all of it.\n\nThis again does not offer anything for 'generic types', but\nagain I do not consider it job for that level.\n\n> This may be going a bit too far, since original question only dealt with\n> text-bytea conversions, but maybe its time to look at 'generic' functions\n> which return generic types.\n\nI did want to encrypt() etc. to operate on 'text' too, as it\nwould be _very_ convinient, and they really are similar on POV\nof encrypt().\n\nHmm, on the other hand -\n\nIdea for 'generic types', taking account of PostgreSQL current\ntype system - functions:\n\n\tpack(data::whatever)::bytea,\n\tunpack_text(data::bytea)::text,\n\tunpack_polygon(data::bytea)::polygon\n\t...\n\npack() does a compact representation of data, with type attached\nunpack*() checks if it is of correct type and sane. It may be\ntextual but this takes much room, binary is probably not\nportable. Eg. 
it could be done using *in(), *out() functions,\nmaybe even keep the '\0', and prepend type info (oid/name).\nSo later it could be given to encrypt()... ?\n\n-- \nmarko\n\n", "msg_date": "Tue, 26 Jun 2001 00:16:35 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": false, "msg_subject": "Re: [PATCH] by request: base64 for bytea" } ]
[ { "msg_contents": "Thank you all for your feedback. Now I know there is\ninterest for this tool. I'm going to do it. I will be\nbusy until the last week of July (including a little\nvacation) so I'm going to start the work in August.\nWhen the basic design/schedule is ready and I have some\ncode I will announce it, but you have to wait at least\nuntil September.\n\nSee you soon!\n\nPedro\n\n_______________________________________________________________\nDo You Yahoo!?\nYahoo! Messenger: Comunicación instantánea gratis con tu gente -\nhttp://messenger.yahoo.es\n", "msg_date": "Sat, 23 Jun 2001 01:01:42 +0200 (CEST)", "msg_from": "=?iso-8859-1?q?Pedro=20Abelleira=20Seco?= <pedroabelleira@yahoo.es>", "msg_from_op": true, "msg_subject": "RE: Universal admin frontend" } ]
[ { "msg_contents": "has anyone thought about applying the write-ahead log\nto the results of a vacuum that allows both readers and writers?\nin other words, couldn't vacuum just record the highest committed \ntransaction id, say \"TX\", run the vacuum in a private space,\nthen apply the write-ahead log for all transactions after \"TX\"\nto the private space, and, finally, update the appropriate system \ntables to point to the new version of the tables.\n\nseems to me that this approach, while disk intensive, would work well in\nenvironments where a 24x7 db is critical AND slow periods are acceptable.\n\nof course, this approach is lossy, since the wal could\nbe active throughout the sync by the vacuum process.\nin other words, the vacuum could perpetually chase the tail \nof an active wal. so the vacuum just gives up after a certain number of\ntries.\n\notherwise, do any fundamental problems exist with this approach?\n\ncheers-john scott\n\n\n=====\nJohn Scott\nSenior Partner\nAugust Associates\n\nemail: john@august.com\n web: http://www.august.com/~jmscott\n\n__________________________________________________\nDo You Yahoo!?\nGet personalized email addresses from Yahoo! Mail\nhttp://personal.mail.yahoo.com/\n", "msg_date": "Fri, 22 Jun 2001 16:43:47 -0700 (PDT)", "msg_from": "John Scott <jmscott@yahoo.com>", "msg_from_op": true, "msg_subject": "using WAL in a VACUUM?" } ]
[ { "msg_contents": "http://news.cnet.com/news/0-1003-200-6354466.html\n\n\n", "msg_date": "Sat, 23 Jun 2001 10:24:02 +0700", "msg_from": "\"Andy Samuel\" <andysamuel@geocities.com>", "msg_from_op": true, "msg_subject": "Red Hat DB = PostgreSQL confirmed !" } ]
[ { "msg_contents": "There is a lot of going back and forth about shared memory buffers and postgres\nworking \"out of the box.\"\n\nThis is sort of a discussion I'd like to see. I've been using Postgres for a\nlong time, several years and in a few jobs, it is an excellent product, with a\ndepth of configurability that most new users do not know. For these users,\nthis lack of knowledge sometimes prevents them from using Postgres because they\ndo not perceive that it takes a little configuration adjustment to work right\nfor an application.\n\nPackages like Oracle and MS SQL are a nightmare to set up, but they do REQUIRE\nthat you are familiar enough with some settings. More to the point, they\nREQUIRE that you answer some questions about how you will be using it during\nsetup.\n\nI'm not suggesting that Postgres adopt the user-hostile stance of Oracle, but\nthe \"out of box experience\" of simply working, may not, from a user interface\nperspective, be the right one.\n", "msg_date": "Sat, 23 Jun 2001 18:12:46 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Working out of the box" } ]
[ { "msg_contents": "CVSROOT:\t/home/projects/pgsql/cvsroot\nModule name:\tpgsql\nChanges by:\tpetere@hub.org\t01/06/23 19:29:48\n\nModified files:\n\tsrc/bin/initdb : initdb.sh \n\nLog message:\n\tDon't use a temp file. It was created insecurely and was easy to do without.\n\n", "msg_date": "Sat, 23 Jun 2001 19:29:48 -0400 (EDT)", "msg_from": "Peter Eisentraut - PostgreSQL <petere@hub.org>", "msg_from_op": true, "msg_subject": "pgsql/src/bin/initdb initdb.sh" }, { "msg_contents": "> CVSROOT:\t/home/projects/pgsql/cvsroot\n> Module name:\tpgsql\n> Changes by:\tpetere@hub.org\t01/06/23 19:29:48\n> \n> Modified files:\n> \tsrc/bin/initdb : initdb.sh \n> \n> Log message:\n> \tDon't use a temp file. It was created insecurely and was easy to do without.\n\nThis brings up a question. If I have pid 333 and someone creates a file\nworld-writable called /tmp/333, and I go and do:\n\n\tcat file >/tmp/$$\n\nisn't another user now able to modify those temp file contents. Is that\nthe insecurity you mentioned Peter, and if so, how do you prevent this?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 23 Jun 2001 19:50:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql/src/bin/initdb initdb.sh" }, { "msg_contents": "Bruce Momjian writes:\n\n> This brings up a question. If I have pid 333 and someone creates a file\n> world-writable called /tmp/333, and I go and do:\n>\n> \tcat file >/tmp/$$\n>\n> isn't another user now able to modify those temp file contents. Is that\n> the insecurity you mentioned Peter, and if so, how do you prevent this?\n\nThat is one possibility. Another exploit is with a symlink from /tmp/333\nto a file you want to overwrite. 
This is more fun with root, but it's\nstill not a good idea here.\n\nTo securely create a temp file in shell you need to use mktemp(1), or do\nsomething like (umask 077 && mkdir $TMPDIR/$$) to create a subdirectory.\nNeedless to say, it's tricky.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sun, 24 Jun 2001 13:25:12 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: pgsql/src/bin/initdb initdb.sh" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > This brings up a question. If I have pid 333 and someone creates a file\n> > world-writable called /tmp/333, and I go and do:\n> >\n> > \tcat file >/tmp/$$\n> >\n> > isn't another user now able to modify those temp file contents. Is that\n> > the insecurity you mentioned Peter, and if so, how do you prevent this?\n> \n> That is one possibility. Another exploit is with a symlink from /tmp/333\n> to a file you want to overwrite. This is more fun with root, but it's\n> still not a good idea here.\n> \n> To securely create a temp file in shell you need to use mktemp(1), or do\n> something like (umask 077 && mkdir $TMPDIR/$$) to create a subdirectory.\n> Needless to say, it's tricky.\n\nWow, that symlink is a bad one. I don't see mktemp(1) on bsd/os, only\nmktemp(3). I do see it on FreeBSD.\n\nGood thing I don't have other shell users on my system. I do cat\n>/tmp/$$ all the time in scripts.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 24 Jun 2001 17:18:34 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/src/bin/initdb initdb.sh" }, { "msg_contents": "Bruce Momjian writes:\n\n> > To securely create a temp file in shell you need to use mktemp(1), or do\n> > something like (umask 077 && mkdir $TMPDIR/$$) to create a subdirectory.\n> > Needless to say, it's tricky.\n>\n> Wow, that symlink is a bad one. I don't see mktemp(1) on bsd/os, only\n> mktemp(3). I do see it on FreeBSD.\n>\n> Good thing I don't have other shell users on my system. I do cat\n> >/tmp/$$ all the time in scripts.\n\nI see we have temp file vulnerabilities in genbki.sh and Gen_fmgrtab.sh as\nwell. I'll try to fix them.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 25 Jun 2001 19:01:15 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/src/bin/initdb initdb.sh" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > > To securely create a temp file in shell you need to use mktemp(1), or do\n> > > something like (umask 077 && mkdir $TMPDIR/$$) to create a subdirectory.\n> > > Needless to say, it's tricky.\n> >\n> > Wow, that symlink is a bad one. I don't see mktemp(1) on bsd/os, only\n> > mktemp(3). I do see it on FreeBSD.\n> >\n> > Good thing I don't have other shell users on my system. I do cat\n> > >/tmp/$$ all the time in scripts.\n> \n> I see we have temp file vulnerabilities in genbki.sh and Gen_fmgrtab.sh as\n> well. I'll try to fix them.\n\nWhat is the vulnerability? I see:\n\n\t- if [ \"$TMPDIR\" ]; then\n\t- TEMPFILE=\"$TMPDIR/initdb.$$\"\n\t- else\n\t- TEMPFILE=\"/tmp/initdb.$$\"\n\t- fi\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 25 Jun 2001 15:01:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/src/bin/initdb initdb.sh" } ]
[ { "msg_contents": "Hi all,\n\nI've been trying to get PostgreSQL to work with Apple's \nWebObjects application server. WebObjects uses JDBC as an \ninterface to back-end databases, translating between SQL and a \npure object model.\n\nI had a problem with incorrect SQL being generated and sent to \nthe PostgreSQL back end. After some work, I tracked it down. I \nhave a fix, but the fix has ramifications for the way that \nothers use PostgreSQL, so I decided to post here and see what \npeople think.\n\nIt turns out that WebObjects uses the \nPreparedStatement.setCharacterStream method in order to set the \nvalues of some character parameters in prepared statements, and \nthus the generated SQL. It's not at all clear why it does this \nfor some parameters but not others; the reason doesn't seem to \nhave anything to do with the declared length of the parameters. \nThis seems odd, because setCharacterStream is a very \nhigh-overhead operation, but in any case, that's what it does.\n\nThe PostgreSQL JDBC driver, however, makes the assumption that \nany JDBC client class that's using the set/get...stream methods \nwants to exchange information with a field that's been \nexplicitly typed as a BLOB. It therefore does what PostgreSQL \nrequires: it creates a new object containing the data, then uses \nthe object ID of the new object as the value to stuff into the \nquery. This has the effect of generating queries like\n\n SELECT ...\n WHERE some_text_field = 57909 ...\n\n57909 is an object ID. The comparison doesn't work because \nsome_text_field is an ordinary char or varchar, not a BLOB.\n\nIt's kind of hard to figure out the \"right\" solution to this \nproblem. I've patched the PostgreSQL JDBC implementation of \nPreparedStatement.setCharacterStream to treat any stream smaller \nthan 8190 bytes as a string. 
I chose 8190 because of the old \nlimit of 8192 bytes per tuple in versions prior to 7.1, so this \nchange is least likely to cause compatibility problems with \nsystems using setCharacterStream the way that the PostgreSQL \ndevelopers anticipated. I can provide the patch to anyone who \nneeds it.\n\nThe WebObjects use of JDBC is in line with the JDBC 2.0 \nspecification; that spec does not place any restrictions on the \ntypes of fields that can be accessed via get/set...stream. \nWhether it's a good use is a different question, of course, but \nit's still legal. My little kludge with an 8190-byte \"switch\" to \nthe old behavior really can't be the last word.\n\nI was hoping that someone could look at the PostgreSQL back end \nto see if there's any reason to keep the 8190-byte limiting \nbehavior in the JDBC driver. The limit needs to be removed so \nthat character streams and strings are symmetric in order to \ncomply with JDBC 2.0. The effect of switching will simply be the \npossibility that the back end will have to deal with very long \n(>8k) quoted strings. I got the impression from reading TOAST \nproject documents that all such limitations had been removed, \nbut I wanted to check before submitting my patch for inclusion \nin the distribution.\n\nThanks,\n-- Bruce\n\n--------------------------------------------------------------------------\nBruce Toback Tel: (602) 996-8601| My candle burns at both ends;\nOPT, Inc. (800) 858-4507| It will not last the night;\n11801 N. Tatum Blvd. Ste. 142 | But ah, my foes, and oh, my \nfriends -\nPhoenix AZ 85028 | It gives a lovely light.\nbtoback@optc.com | -- Edna St. Vincent Millay\n", "msg_date": "Sat, 23 Jun 2001 16:32:53 -0700", "msg_from": "btoback@mac.com", "msg_from_op": true, "msg_subject": "JDBC adaptor issue" }, { "msg_contents": "\nOn Sunday, June 24, 2001, at 10:32 PM, Barry Lind wrote:\n\n> This is an interesting problem. And I can't think a any easy \n> solution. 
But given TOAST in 7.1 the existing implementation \n> doesn't make sense IMHO My suggestion would be that the \n> get/setXXXStream methods work on TOASTed data types and \n> get/setBlob be used for Blobs.\n>\n\nThat would be my preference as well.\n\n> As far as your patch, I don't see that as a generic solution. \n> It is equally likely that a Blob could contain less than 8190 \n> characters, or a varchar could contain more that 8190 \n> characters in 7.1.\n\nIt's certainly not a generic solution. I was looking for a \nsolution that would break fewer of the applications that rely on \nthe current nonstandard behavior. I'd much prefer to simply have \nget/set...stream just implement the standard behavior. But not \nknowing the Postgres developers' preferences when it comes to \nthese questions, I chose the break-fewer-existing-apps approach.\n\nIf the answer is that the Postgres developers are willing to \ntell current JDBC users to switch to the Blob/Clob methods when \nthat's what they really mean, I'll remove the switch before \nsubmitting the patch.\n\n-- Bruce\n", "msg_date": "Sun, 24 Jun 2001 22:55:40 -0700", "msg_from": "Bruce Toback <btoback@mac.com>", "msg_from_op": false, "msg_subject": "Re: JDBC adaptor issue" }, { "msg_contents": "\n\nBarry Lind wrote:\n\n> This is an interesting problem. And I can't think a any easy solution. \n> But given TOAST in 7.1 the existing implementation doesn't make sense \n> IMHO My suggestion would be that the get/setXXXStream methods work on \n> TOASTed data types and get/setBlob be used for Blobs.\n> \n> As far as your patch, I don't see that as a generic solution. It is \n> equally likely that a Blob could contain less than 8190 characters, or a \n> varchar could contain more that 8190 characters in 7.1. 
Using this \n> number as a magic switch to decide whether the driver uses the BLOB API \n> or not just won't work in the general case.\n> \n> thanks,\n> --Barry\n> \n> \n> btoback@mac.com wrote:\n> \n>> Hi all,\n>>\n>> I've been trying to get PostgreSQL to work with Apple's WebObjects \n>> application server. WebObjects uses JDBC as an interface to back-end \n>> databases, translating between SQL and a pure object model.\n>>\n>> I had a problem with incorrect SQL being generated and sent to the \n>> PostgreSQL back end. After some work, I tracked it down. I have a fix, \n>> but the fix has ramifications for the way that others use PostgreSQL, \n>> so I decided to post here and see what people think.\n>>\n>> It turns out that WebObjects uses the \n>> PreparedStatement.setCharacterStream method in order to set the values \n>> of some character parameters in prepared statements, and thus the \n>> generated SQL. It's not at all clear why it does this for some \n>> parameters but not others; the reason doesn't seem to have anything to \n>> do with the declared length of the parameters. This seems odd, because \n>> setCharacterStream is a very high-overhead operation, but in any case, \n>> that's what it does.\n>>\n>> The PostgreSQL JDBC driver, however, makes the assumption that any \n>> JDBC client class that's using the set/get...stream methods wants to \n>> exchange information with a field that's been explicitly typed as a \n>> BLOB. It therefore does what PostgreSQL requires: it creates a new \n>> object containing the data, then uses the object ID of the new object \n>> as the value to stuff into the query. This has the effect of \n>> generating queries like\n>>\n>> SELECT ...\n>> WHERE some_text_field = 57909 ...\n>>\n>> 57909 is an object ID. The comparison doesn't work because \n>> some_text_field is an ordinary char or varchar, not a BLOB.\n>>\n>> It's kind of hard to figure out the \"right\" solution to this problem. 
\n>> I've patched the PostgreSQL JDBC implementation of \n>> PreparedStatement.setCharacterStream to treat any stream smaller than \n>> 8190 bytes as a string. I chose 8190 because of the old limit of 8192 \n>> bytes per tuple in versions prior to 7.1, so this change is least \n>> likely to cause compatibility problems with systems using \n>> setCharacterStream the way that the PostgreSQL developers anticipated. \n>> I can provide the patch to anyone who needs it.\n>>\n>> The WebObjects use of JDBC is in line with the JDBC 2.0 specification; \n>> that spec does not place any restrictions on the types of fields that \n>> can be accessed via get/set...stream. Whether it's a good use is a \n>> different question, of course, but it's still legal. My little kludge \n>> with an 8190-byte \"switch\" to the old behavior really can't be the \n>> last word.\n>>\n>> I was hoping that someone could look at the PostgreSQL back end to see \n>> if there's any reason to keep the 8190-byte limiting behavior in the \n>> JDBC driver. The limit needs to be removed so that character streams \n>> and strings are symmetric in order to comply with JDBC 2.0. The effect \n>> of switching will simply be the possibility that the back end will \n>> have to deal with very long (>8k) quoted strings. I got the impression \n>> from reading TOAST project documents that all such limitations had \n>> been removed, but I wanted to check before submitting my patch for \n>> inclusion in the distribution.\n>>\n>> Thanks,\n>> -- Bruce\n>>\n>> -------------------------------------------------------------------------- \n>>\n>> Bruce Toback Tel: (602) 996-8601| My candle burns at both ends;\n>> OPT, Inc. (800) 858-4507| It will not last the night;\n>> 11801 N. Tatum Blvd. Ste. 142 | But ah, my foes, and oh, my \n>> friends -\n>> Phoenix AZ 85028 | It gives a lovely light.\n>> btoback@optc.com | -- Edna St. 
Vincent Millay\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 3: if posting/reading through Usenet, please send an appropriate\n>> subscribe-nomail command to majordomo@postgresql.org so that your\n>> message can get through to the mailing list cleanly\n>>\n> \n> \n\n\n", "msg_date": "Mon, 25 Jun 2001 08:43:10 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] JDBC adaptor issue" }, { "msg_contents": "\n This is an interesting problem. And I can't think of any easy \n solution. But given TOAST in 7.1 the existing implementation doesn't \n make sense IMHO. My suggestion would be that the get/setXXXStream \n methods work on TOASTed data types and get/setBlob be used for Blobs.\n\n As far as your patch, I don't see that as a generic solution. It is \n equally likely that a Blob could contain less than 8190 characters, or \n a varchar could contain more than 8190 characters in 7.1. Using this \n number as a magic switch to decide whether the driver uses the BLOB \n API or not just won't work in the general case.\n\n thanks,\n --Barry\n\n>>\n>> btoback@mac.com wrote:\n>>\n>>> Hi all,\n>>>\n>>> I've been trying to get PostgreSQL to work with Apple's WebObjects \n>>> application server. WebObjects uses JDBC as an interface to back-end \n>>> databases, translating between SQL and a pure object model.\n>>>\n>>> I had a problem with incorrect SQL being generated and sent to the \n>>> PostgreSQL back end. After some work, I tracked it down. I have a \n>>> fix, but the fix has ramifications for the way that others use \n>>> PostgreSQL, so I decided to post here and see what people think.\n>>>\n>>> It turns out that WebObjects uses the \n>>> PreparedStatement.setCharacterStream method in order to set the \n>>> values of some character parameters in prepared statements, and thus \n>>> the generated SQL. 
It's not at all clear why it does this for some \n>>> parameters but not others; the reason doesn't seem to have anything \n>>> to do with the declared length of the parameters. This seems odd, \n>>> because setCharacterStream is a very high-overhead operation, but in \n>>> any case, that's what it does.\n>>>\n>>> The PostgreSQL JDBC driver, however, makes the assumption that any \n>>> JDBC client class that's using the set/get...stream methods wants to \n>>> exchange information with a field that's been explicitly typed as a \n>>> BLOB. It therefore does what PostgreSQL requires: it creates a new \n>>> object containing the data, then uses the object ID of the new object \n>>> as the value to stuff into the query. This has the effect of \n>>> generating queries like\n>>>\n>>> SELECT ...\n>>> WHERE some_text_field = 57909 ...\n>>>\n>>> 57909 is an object ID. The comparison doesn't work because \n>>> some_text_field is an ordinary char or varchar, not a BLOB.\n>>>\n>>> It's kind of hard to figure out the \"right\" solution to this problem. \n>>> I've patched the PostgreSQL JDBC implementation of \n>>> PreparedStatement.setCharacterStream to treat any stream smaller than \n>>> 8190 bytes as a string. I chose 8190 because of the old limit of 8192 \n>>> bytes per tuple in versions prior to 7.1, so this change is least \n>>> likely to cause compatibility problems with systems using \n>>> setCharacterStream the way that the PostgreSQL developers \n>>> anticipated. I can provide the patch to anyone who needs it.\n>>>\n>>> The WebObjects use of JDBC is in line with the JDBC 2.0 \n>>> specification; that spec does not place any restrictions on the types \n>>> of fields that can be accessed via get/set...stream. Whether it's a \n>>> good use is a different question, of course, but it's still legal. 
My \n>>> little kludge with an 8190-byte \"switch\" to the old behavior really \n>>> can't be the last word.\n>>>\n>>> I was hoping that someone could look at the PostgreSQL back end to \n>>> see if there's any reason to keep the 8190-byte limiting behavior in \n>>> the JDBC driver. The limit needs to be removed so that character \n>>> streams and strings are symmetric in order to comply with JDBC 2.0. \n>>> The effect of switching will simply be the possibility that the back \n>>> end will have to deal with very long (>8k) quoted strings. I got the \n>>> impression from reading TOAST project documents that all such \n>>> limitations had been removed, but I wanted to check before submitting \n>>> my patch for inclusion in the distribution.\n>>>\n>>> Thanks,\n>>> -- Bruce\n>>>\n>>> -------------------------------------------------------------------------- \n>>>\n>>> Bruce Toback Tel: (602) 996-8601| My candle burns at both ends;\n>>> OPT, Inc. (800) 858-4507| It will not last the night;\n>>> 11801 N. Tatum Blvd. Ste. 142 | But ah, my foes, and oh, my \n>>> friends -\n>>> Phoenix AZ 85028 | It gives a lovely light.\n>>> btoback@optc.com | -- Edna St. Vincent Millay\n>>>\n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 3: if posting/reading through Usenet, please send an appropriate\n>>> subscribe-nomail command to majordomo@postgresql.org so that your\n>>> message can get through to the mailing list cleanly\n>>>\n>>\n>>\n> \n> \n\n\n", "msg_date": "Mon, 25 Jun 2001 15:49:01 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] JDBC adaptor issue" } ]
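The threshold heuristic debated in the thread above is easy to sketch. The class below is a hypothetical illustration, not the actual driver code: a character-stream parameter shorter than the old pre-7.1 tuple limit is read in full and inlined as an escaped SQL literal, while anything larger would take the large-object path. The class name, the `renderParameter` helper, and the placeholder for the large-object branch are assumptions introduced for illustration.

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

public class StreamParamSketch {
    static final int SWITCH_LIMIT = 8190; // pre-7.1 tuple-size heuristic discussed above

    /** Reads the whole stream; short values are inlined as escaped SQL literals. */
    static String renderParameter(Reader stream, int length) {
        char[] buf = new char[length];
        int filled = 0;
        try {
            int n;
            while (filled < length && (n = stream.read(buf, filled, length - filled)) > 0) {
                filled += n;
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        String value = new String(buf, 0, filled);
        if (filled < SWITCH_LIMIT) {
            return "'" + value.replace("'", "''") + "'"; // inline literal, quotes doubled
        }
        // A real driver would create a large object here and substitute its OID;
        // that branch is elided because it needs a live backend.
        return "<large-object OID placeholder>";
    }

    public static void main(String[] args) {
        System.out.println(renderParameter(new StringReader("it's short"), 10)); // prints 'it''s short'
    }
}
```

Barry's objection is visible right in the branch condition: nothing about length distinguishes a short BLOB from a long varchar, so a fixed switch misclassifies both edge cases.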
[ { "msg_contents": "Hi all,\n\nOne more question/comment.\n\nIn order to track down the problem with the interaction between \nWebObjects and the PostgreSQL JDBC driver, I had to insert a \nfair amount of logging. This logging will be useful for anyone \nelse who's in a similar position, trying to get some piece of \nmiddleware to work with PostgreSQL. If I switch to using log4j \n(see http://jakarta.apache.org/log4j for information), would it \nbe useful to submit the logging calls as a patch?\n\nI think it would be extremely useful, but I don't know the \nphilosophies or mindset of the PostgreSQL developers, so I \nthought I'd ask.\n\n-- Bruce\n\n--------------------------------------------------------------------------\nBruce Toback Tel: (602) 996-8601| My candle burns at both ends;\nOPT, Inc. (800) 858-4507| It will not last the night;\n11801 N. Tatum Blvd. Ste. 142 | But ah, my foes, and oh, my \nfriends -\nPhoenix AZ 85028 | It gives a lovely light.\nbtoback@optc.com | -- Edna St. Vincent Millay\n", "msg_date": "Sat, 23 Jun 2001 16:37:37 -0700", "msg_from": "btoback@mac.com", "msg_from_op": true, "msg_subject": "Instrumenting and Logging in JDBC" }, { "msg_contents": "I would like to see the calls, please submit them\n\nDave\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of btoback@mac.com\nSent: June 23, 2001 7:38 PM\nTo: pgsql-hackers@postgresql.org\nSubject: [HACKERS] Instrumenting and Logging in JDBC\n\nHi all,\n\nOne more question/comment.\n\nIn order to track down the problem with the interaction between \nWebObjects and the PostgreSQL JDBC driver, I had to insert a \nfair amount of logging. This logging will be useful for anyone \nelse who's in a similar position, trying to get some piece of \nmiddleware to work with PostgreSQL. 
If I switch to using log4j \n(see http://jakarta.apache.org/log4j for information), would it \nbe useful to submit the logging calls as a patch?\n\nI think it would be extremely useful, but I don't know the \nphilosophies or mindset of the PostgreSQL developers, so I \nthought I'd ask.\n\n-- Bruce\n\n------------------------------------------------------------------------\n--\nBruce Toback Tel: (602) 996-8601| My candle burns at both ends;\nOPT, Inc. (800) 858-4507| It will not last the night;\n11801 N. Tatum Blvd. Ste. 142 | But ah, my foes, and oh, my \nfriends -\nPhoenix AZ 85028 | It gives a lovely light.\nbtoback@optc.com | -- Edna St. Vincent Millay\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\nhttp://www.postgresql.org/users-lounge/docs/faq.html\n\n\n", "msg_date": "Sat, 23 Jun 2001 23:46:41 -0400", "msg_from": "\"Dave Cramer\" <dave@fastcrypt.com>", "msg_from_op": false, "msg_subject": "RE: Instrumenting and Logging in JDBC" }, { "msg_contents": "\nOn Sunday, June 24, 2001, at 09:49 PM, Barry Lind wrote:\n\n> First I would ask what kind of logging you are talking about? \n> I find that simply turning on debug output on the server to \n> print out the sql statements being executed is generally all I \n> need for logging, and the server already supports that.\n\nThe problem is that the SQL sent to the backend is sometimes the \nend product of a lot of interaction between the JDBC driver and \nthe client program. This is frequently the case with \ngeneral-purpose programs like report writers and application \nservers.\n\nIf the generated SQL is bad, or if the data the client program \nreceives back is bad, it's necessary to figure out exactly what \nthe client program is doing in order to solve the problem. For \nexample, the client may use some kinds of row metadata and not \nothers, or may be using an unusual sequence of calls to place \ndata into a PreparedStatement. 
Logging is the only way to figure \nout what the client is doing if you don't have the client source.\n\n> While logging is a good idea, having yet another non-postgresql \n> component that needs to be installed in order to build and/or \n> run the jdbc driver is in my opinion a bad idea. I already \n> dislike the fact that I have to install ant just to build the \n> driver. It was so much easier under 7.0 when make was all that \n> was required.\n\nAgreed -- especially given what it takes to get a Java program \nto work, since there are no standards for where the various \ncomponents should live. Making ant work wasn't a pleasant \nexperience: it took more effort to build the 7.1 JDBC driver \nalone than to build the entire 7.0 Postgres suite.\n\nOn the other hand, logging *is* useful in making sure that the \nJDBC driver works with the widest possible variety of client \nsoftware, including all kinds of proprietary middleware \nproducts. If the logging is set up so that log4j is loaded \ndynamically, would that be a satisfactory solution to the build \nproblem?\n\nActually, given the purpose for including logging, log4j is \nprobably more than what's required to do the job -- essentially \njust tracing client call activity.\n\n-- Bruce\n\n--------------------------------------------------------------------------\nBruce Toback Tel: (602) 996-8601| My candle burns at both ends;\nOPT, Inc. (800) 858-4507| It will not last the night;\n11801 N. Tatum Blvd. Ste. 142 | But ah, my foes, and oh, my \nfriends -\nPhoenix AZ 85028 | It gives a lovely light.\nbtoback@optc.com | -- Edna St. Vincent Millay\n", "msg_date": "Sun, 24 Jun 2001 22:16:47 -0700", "msg_from": "Bruce Toback <btoback@mac.com>", "msg_from_op": false, "msg_subject": "Re: Instrumenting and Logging in JDBC" }, { "msg_contents": "\n\nBarry Lind wrote:\n\n> First I would ask what kind of logging you are talking about? 
I find \n> that simply turning on debug output on the server to print out the sql \n> statements being executed is generally all I need for logging, and the \n> server already supports that.\n> \n> If your proposal is to use log4j for logging, then I would be opposed. \n> While logging is a good idea, having yet another non-postgresql \n> component that needs to be installed in order to build and/or run the \n> jdbc driver is in my opinion a bad idea. I already dislike the fact \n> that I have to install ant just to build the driver. It was so much \n> easier under 7.0 when make was all that was required.\n> \n> thanks,\n> --Barry\n> \n> \n> btoback@mac.com wrote:\n> \n>> Hi all,\n>>\n>> One more question/comment.\n>>\n>> In order to track down the problem with the interaction between \n>> WebObjects and the PostgreSQL JDBC driver, I had to insert a fair \n>> amount of logging. This logging will be useful for anyone else who's \n>> in a similar position, trying to get some piece of middleware to work \n>> with PostgreSQL. If I switch to using log4j (see \n>> http://jakarta.apache.org/log4j for information), would it be useful \n>> to submit the logging calls as a patch?\n>>\n>> I think it would be extremely useful, but I don't know the \n>> philosophies or mindset of the PostgreSQL developers, so I thought I'd \n>> ask.\n>>\n>> -- Bruce\n>>\n>> -------------------------------------------------------------------------- \n>>\n>> Bruce Toback Tel: (602) 996-8601| My candle burns at both ends;\n>> OPT, Inc. (800) 858-4507| It will not last the night;\n>> 11801 N. Tatum Blvd. Ste. 142 | But ah, my foes, and oh, my \n>> friends -\n>> Phoenix AZ 85028 | It gives a lovely light.\n>> btoback@optc.com | -- Edna St. 
Vincent Millay\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 5: Have you checked our extensive FAQ?\n>>\n>> http://www.postgresql.org/users-lounge/docs/faq.html\n>>\n> \n> \n\n\n", "msg_date": "Mon, 25 Jun 2001 08:42:47 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Instrumenting and Logging in JDBC" }, { "msg_contents": "\n First I would ask what kind of logging you are talking about? I find \n that simply turning on debug output on the server to print out the sql \n statements being executed is generally all I need for logging, and the \n server already supports that.\n\n If your proposal is to use log4j for logging, then I would be opposed. \n While logging is a good idea, having yet another non-postgresql \n component that needs to be installed in order to build and/or run the \n jdbc driver is in my opinion a bad idea. I already dislike the fact \n that I have to install ant just to build the driver. It was so much \n easier under 7.0 when make was all that was required.\n\n thanks,\n --Barry\n\n>>\n>> btoback@mac.com wrote:\n>>\n>>> Hi all,\n>>>\n>>> One more question/comment.\n>>>\n>>> In order to track down the problem with the interaction between \n>>> WebObjects and the PostgreSQL JDBC driver, I had to insert a fair \n>>> amount of logging. This logging will be useful for anyone else who's \n>>> in a similar position, trying to get some piece of middleware to work \n>>> with PostgreSQL. 
If I switch to using log4j (see \n>>> http://jakarta.apache.org/log4j for information), would it be useful \n>>> to submit the logging calls as a patch?\n>>>\n>>> I think it would be extremely useful, but I don't know the \n>>> philosophies or mindset of the PostgreSQL developers, so I thought \n>>> I'd ask.\n>>>\n>>> -- Bruce\n>>>\n>>> -------------------------------------------------------------------------- \n>>>\n>>> Bruce Toback Tel: (602) 996-8601| My candle burns at both ends;\n>>> OPT, Inc. (800) 858-4507| It will not last the night;\n>>> 11801 N. Tatum Blvd. Ste. 142 | But ah, my foes, and oh, my \n>>> friends -\n>>> Phoenix AZ 85028 | It gives a lovely light.\n>>> btoback@optc.com | -- Edna St. Vincent Millay\n>>>\n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 5: Have you checked our extensive FAQ?\n>>>\n>>> http://www.postgresql.org/users-lounge/docs/faq.html\n>>>\n>>\n>>\n> \n> \n\n\n", "msg_date": "Mon, 25 Jun 2001 15:48:15 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Instrumenting and Logging in JDBC" } ]
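The "load log4j dynamically" compromise floated in this thread can be sketched without adding a build-time dependency: look the logger class up by reflection at run time and fall back to plain stderr tracing when it is absent. Everything below — the class name, the logger category, the line format — is a hypothetical sketch, not the real driver's API; only `org.apache.log4j.Logger.getLogger(String)` and `debug(Object)` are taken from log4j itself.

```java
public class DriverLog {
    private static final Object LOG4J = findLog4j();

    // Probe for log4j at run time only, so the driver builds and runs without it.
    private static Object findLog4j() {
        try {
            Class<?> logger = Class.forName("org.apache.log4j.Logger");
            return logger.getMethod("getLogger", String.class)
                         .invoke(null, "org.postgresql.jdbc");
        } catch (Exception notInstalled) {
            return null; // fall back to stderr tracing
        }
    }

    /** Formats one trace line; split out so the formatting is testable. */
    static String format(String method, String detail) {
        return "[pgjdbc] " + method + "(" + detail + ")";
    }

    public static void debug(String method, String detail) {
        String line = format(method, detail);
        if (LOG4J == null) {
            System.err.println(line);
            return;
        }
        try {
            LOG4J.getClass().getMethod("debug", Object.class).invoke(LOG4J, line);
        } catch (Exception e) {
            System.err.println(line); // logging must never break a query
        }
    }

    public static void main(String[] args) {
        debug("setCharacterStream", "index=1, length=42");
    }
}
```

This keeps the build make-simple (no new compile-time jar) while still letting a site that does deploy log4j capture the client-call trace Bruce describes.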
[ { "msg_contents": "Had trouble getting ERWin to work with postgres (even with the Oracle\nlook alike views).\n\nSo, I wrote a small perl script which outputs XML for Dia to import.\nWorks pretty good.\n\nMore details can be found at: www.zort.ca/postgresql\n\n<a\nhref=\"http://www.zort.ca/postgresql/\">http://www.zort.ca/postgresql/</\na>\n\nBSD License, released with permission from InQuent Technologies (my\ncurrent employer).\n\n\nPlease send me your thoughts!\n\nNotes: Don't forget to change the database connection settings, and\nfiddle with the StereoType groups.\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.", "msg_date": "Sat, 23 Jun 2001 21:18:01 -0400", "msg_from": "\"Rod Taylor\" <rod.taylor@inquent.com>", "msg_from_op": true, "msg_subject": "Postgres to Dia UML" }, { "msg_contents": "What is an accsql.pm required to run your script ?\n\n\tOleg\nOn Sat, 23 Jun 2001, Rod Taylor wrote:\n\n> Had trouble getting ERWin to work with postgres (even with the Oracle\n> look alike views).\n>\n> So, I wrote a small perl script which outputs XML for Dia to import.\n> Works pretty good.\n>\n> More details can be found at: www.zort.ca/postgresql\n>\n> <a\n> href=\"http://www.zort.ca/postgresql/\">http://www.zort.ca/postgresql/</\n> a>\n>\n> BSD License, released with permission from InQuent Technologies (my\n> current employer).\n>\n>\n> Please send me your thoughts!\n>\n> Notes: Don't forget to change the database connection settings, and\n> fiddle with the StereoType groups.\n> --\n> Rod Taylor\n>\n> Your eyes are weary from staring at the CRT. You feel sleepy. Notice\n> how restful it is to watch the cursor blink. Close your eyes. The\n> opinions stated above are yours. 
You cannot imagine why you ever felt\n> otherwise.\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 29 Jun 2001 16:47:07 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: Postgres to Dia UML" }, { "msg_contents": "Ack.. I normally use that for internal code to simplify database\naccess. I've put it up on the website, although it doesn't actually\ndo a lot, I suppose it is required.\n\nI'll remove it when I get a chance, but for now you're welcome to use\nit.\n\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n----- Original Message -----\nFrom: \"Oleg Bartunov\" <oleg@sai.msu.su>\nTo: \"Rod Taylor\" <rod.taylor@inquent.com>\nCc: \"Hackers List\" <pgsql-hackers@postgresql.org>\nSent: Friday, June 29, 2001 9:47 AM\nSubject: Re: [HACKERS] Postgres to Dia UML\n\n\n> What is an accsql.pm required to run your script ?\n>\n> Oleg\n> On Sat, 23 Jun 2001, Rod Taylor wrote:\n>\n> > Had trouble getting ERWin to work with postgres (even with the\nOracle\n> > look alike views).\n> >\n> > So, I wrote a small perl script which outputs XML for Dia to\nimport.\n> > Works pretty good.\n> >\n> > More details can be found at: www.zort.ca/postgresql\n> >\n> > <a\n> >\nhref=\"http://www.zort.ca/postgresql/\">http://www.zort.ca/postgresql/</\n> > a>\n> >\n> > BSD License, released with permission from InQuent Technologies\n(my\n> > current employer).\n> >\n> >\n> > Please send me your thoughts!\n> >\n> > Notes: Don't forget to change the database connection 
settings,\nand\n> > fiddle with the StereoType groups.\n> > --\n> > Rod Taylor\n> >\n> > Your eyes are weary from staring at the CRT. You feel sleepy.\nNotice\n> > how restful it is to watch the cursor blink. Close your eyes. The\n> > opinions stated above are yours. You cannot imagine why you ever\nfelt\n> > otherwise.\n> >\n> >\n>\n> Regards,\n> Oleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>\n\n", "msg_date": "Fri, 29 Jun 2001 10:13:36 -0400", "msg_from": "\"Rod Taylor\" <rod.taylor@inquent.com>", "msg_from_op": true, "msg_subject": "Re: Postgres to Dia UML" }, { "msg_contents": "> So, I wrote a small perl script which outputs XML for Dia to import.\n> Works pretty good.\n\nThanks for posting a helpful script. \n\nI haven't kept up on perl packages, but I recall that the \"docbook2man\"\nscripts which we use to convert our DocBook sgml reference pages to man\npage format use a \"SGMLS\" package to help organize the sgml parse tree.\n\nIs that package usable for XML, or are there other XML packages to help\nwith dealing with XML-based information? Just curious...\n\n - Thomas\n", "msg_date": "Fri, 29 Jun 2001 15:06:19 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Postgres to Dia UML" }, { "msg_contents": "Hello, I have a nightly cron script that runs: vacuumdb -a -z. The output\nof the cron script is emailed to me every night. For the last several\nweeks, about 50% of the time I get the following error at least once but\nsometimes more:\n\nNOTICE: RegisterSharedInvalid: SI buffer overflow\nNOTICE: InvalidateSharedInvalid: cache state reset\n\nI am wondering if this is just something that will happen sometimes, or if\nit implies some type of problem. 
I have not noticed any problems in using\nthe system. Any input would be appreciated.\n\nI am running PG7.0.3 from RPMs (I plan on upgrading to 7.1.2 soon) on RedHat\n7.0, 800Mhz Athlon w/ 256M. Most of the databases are reasonably small (a\nfew Meg) but some of them are over 1Gig.\n\nThe Postmaster command is: /usr/bin/postmaster -i -B 5000 -N 48 -o -S 16384\n\nThe output of the vacuum command looks like:\n\nVacuuming template1\nVACUUM\n... < Several more databases > ...\nVacuuming postgres\nNOTICE: RegisterSharedInvalid: SI buffer overflow\nNOTICE: InvalidateSharedInvalid: cache state reset\nVACUUM\nVacuuming jpplanning\nVACUUM\n... < Several more databases> ...\nVacuuming OEA\nNOTICE: RegisterSharedInvalid: SI buffer overflow\nNOTICE: InvalidateSharedInvalid: cache state reset\nVACUUM\nVacuuming nutil\nVACUUM\nVacuuming feedback\nVACUUM\nVacuuming ctlno\nVACUUM\n0.21user 0.08system 21:37.58elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k\n0inputs+0outputs (5074major+1093minor)pagefaults 0swaps\n\nMatt O'Connor\n\n", "msg_date": "Fri, 29 Jun 2001 10:34:17 -0500", "msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>", "msg_from_op": false, "msg_subject": "Help with SI buffer overflow error" }, { "msg_contents": "On Sat, 23 Jun 2001, Rod Taylor wrote:\n\nAhoi Rod,\n\nI suggest you remove the dependency on accsql.pm, since it doesn't do\nanything terribly useful, comes from an unknown source, and can be made\ninline pretty easily with straight calls to DBI.\n\nI'm currently preparing a much better abstraction module for DB connection\nparameters using configuration files and the like. Contact me if you\nwant to take a look.\n\nBest regards,\n\nAlex Perel\nTechnical Project Leader\nT. 416.341.8950 ext. 238\nF. 416.341.8959\nE. 
aperel@verticalscope.com\nwww.verticalscope.com\n\n> Had trouble getting ERWin to work with postgres (even with the Oracle\n> look alike views).\n> \n> So, I wrote a small perl script which outputs XML for Dia to import.\n> Works pretty good.\n> \n> More details can be found at: www.zort.ca/postgresql\n> \n> <a\n> href=\"http://www.zort.ca/postgresql/\">http://www.zort.ca/postgresql/</\n> a>\n> \n> BSD License, released with permission from InQuent Technologies (my\n> current employer).\n> \n> \n> Please send me your thoughts!\n> \n> Notes: Don't forget to change the database connection settings, and\n> fiddle with the StereoType groups.\n> --\n> Rod Taylor\n> \n> Your eyes are weary from staring at the CRT. You feel sleepy. Notice\n> how restful it is to watch the cursor blink. Close your eyes. The\n> opinions stated above are yours. You cannot imagine why you ever felt\n> otherwise.\n> \n> \n\n", "msg_date": "Fri, 29 Jun 2001 12:09:05 -0400 (EDT)", "msg_from": "Alex Perel <aperel@verticalscope.com>", "msg_from_op": false, "msg_subject": "Re: Postgres to Dia UML" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n\n> > So, I wrote a small perl script which outputs XML for Dia to import.\n> > Works pretty good.\n> \n> Thanks for posting a helpful script. \n> \n> I haven't kept up on perl packages, but I recall that the \"docbook2man\"\n> scripts which we use to convert our DocBook sgml reference pages to man\n> page format use a \"SGMLS\" package to help organize the sgml parse tree.\n> \n> Is that package usable for XML, or are there other XML packages to help\n> with dealing with XML-based information? 
Just curious...\n\nThere is a module called XML::Parser, which is a lower level interface\nto James Clark's expat library.\n\nRegards,\nManuel.\n", "msg_date": "29 Jun 2001 12:35:14 -0500", "msg_from": "Manuel Sugawara <masm@fciencias.unam.mx>", "msg_from_op": false, "msg_subject": "Re: Re: Postgres to Dia UML" }, { "msg_contents": "Hi:\n\nOn 29 Jun 2001, Manuel Sugawara wrote:\n\n> Thomas Lockhart <lockhart@fourpalms.org> writes:\n> \n> > > So, I wrote a small perl script which outputs XML for Dia to import.\n> > > Works pretty good.\n> > \n> > Thanks for posting a helpful script. \n> > \n> > I haven't kept up on perl packages, but I recall that the \"docbook2man\"\n> > scripts which we use to convert our DocBook sgml reference pages to man\n> > page format use a \"SGMLS\" package to help organize the sgml parse tree.\n> > \n> > Is that package usable for XML, or are there other XML packages to help\n> > with dealing with XML-based information? Just curious...\n> \n> There is a module called XML::Parser, which is a lower level interface\n> to James Clark's expat library.\n> \n> Regards,\n> Manuel.\n\nI have made a script using XML::Parser to convert a dia file to a SQL script:\n\nhttp://freshmeat.net/projects/eros/\n\nSaludos,\n\nRoberto Andrade Fonseca\nrandrade@abl.com.mx\n\n", "msg_date": "Fri, 29 Jun 2001 13:16:29 -0500 (CDT)", "msg_from": "\"Ing. Roberto Andrade Fonseca\" <randrade@abl.com.mx>", "msg_from_op": false, "msg_subject": "Re: Re: Postgres to Dia UML" } ]
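On the XML question that closes this thread: the same kind of tree-walking that XML::Parser/expat gives Perl is available from the parser bundled with Java. Below is a minimal sketch that counts typed objects in Dia-style markup. The `dia:object` element and `type` attribute follow Dia's file format as commonly described, but treat the names — and the class and method names — as illustrative assumptions.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class DiaObjectCount {
    /** Counts dia:object elements whose type attribute matches exactly. */
    static int countObjectsOfType(String xml, String type) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)));
            // The default factory is not namespace-aware, so the qualified
            // name "dia:object" is matched as a plain tag name.
            NodeList objects = doc.getElementsByTagName("dia:object");
            int count = 0;
            for (int i = 0; i < objects.getLength(); i++) {
                Element e = (Element) objects.item(i);
                if (type.equals(e.getAttribute("type"))) {
                    count++;
                }
            }
            return count;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String xml = "<dia:diagram xmlns:dia=\"http://www.lysator.liu.se/~alla/dia/\">"
                   + "<dia:object type=\"UML - Class\"/>"
                   + "<dia:object type=\"UML - Association\"/>"
                   + "</dia:diagram>";
        System.out.println(countObjectsOfType(xml, "UML - Class")); // prints 1
    }
}
```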
[ { "msg_contents": "I recently ran across this (i think) bug relating to constraints and\nfunctions written in plpgsql. It seems that I'm getting erroneous foreign\nkey violations. I've included two scripts which create the simplest test\ncase I can reproduce. One script has a foreign key defined and the other\none doesn't. Other than that, they are identical. From the data in the\nscripts, it's obvious there aren't any violations of keys.\n\n[postgres@loopy postgres]$ diff /tmp/good.sql /tmp/bad.sql\n18c18\n< create table c2 ( id int, value_sum int);\n---\n> create table c2 ( id int references c(id), value_sum int);\n\n[postgres@loopy postgres]$ psql test < /tmp/good.sql\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'c_pkey' for\ntable 'c'\nCREATE\nINSERT 19107 1\nINSERT 19108 1\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY\ncheck(s)\nCREATE\nINSERT 19126 1\nINSERT 19127 1\nINSERT 19128 1\nINSERT 19129 1\nINSERT 19130 1\nINSERT 19131 1\nCREATE\nCREATE\nCREATE\nUPDATE 6\n id | value_sum\n----+-----------\n 1 | 6\n 2 | 17\n(2 rows)\n\n id\n----\n 1\n 2\n(2 rows)\n\n id\n----\n 1\n 2\n(2 rows)\n\n id\n----\n 1\n 2\n(2 rows)\n[postgres@loopy postgres]$ psql test < /tmp/bad.sql\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'c_pkey' for\ntable 'c'\nCREATE\nINSERT 19164 1\nINSERT 19165 1\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY\ncheck(s)\nCREATE\nINSERT 19183 1\nINSERT 19184 1\nINSERT 19185 1\nINSERT 19186 1\nINSERT 19187 1\nINSERT 19188 1\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY\ncheck(s)\nCREATE\nCREATE\nCREATE\nERROR: triggered data change violation on relation \"c2\"\n id | value_sum\n----+-----------\n(0 rows)\n\n id\n----\n 1\n 2\n(2 rows)\n\n id\n----\n 1\n 2\n(2 rows)\n\n id\n----\n(0 rows)", "msg_date": "Sun, 24 Jun 2001 01:12:38 -0600", "msg_from": "\"Brian Hirt\" <bhirt@berkhirt.com>", "msg_from_op": true, "msg_subject": "Problem with plpgsql 
functions and foreign key constraints. " } ]
[ { "msg_contents": "I forgot to mention that this is happening on 7.0.3 and 7.1.1 -- and I'm\nrunning on a RedHat 7.0 machine.\n\n----- Original Message -----\nFrom: \"Brian Hirt\" <bhirt@berkhirt.com>\nTo: \"Postgres Hackers\" <pgsql-hackers@postgresql.org>\nCc: <bhirt@mobygames.com>\nSent: Sunday, June 24, 2001 1:12 AM\nSubject: Problem with plpgsql functions and foreign key constraints.\n\n\n> I recently ran across this (i think) bug relating to constraints and\n> functions written in plpgsql. It seems that I'm getting erroneous foreign\n> key violations. I've included two scripts which create the simplest test\n> case I can reproduce. One script has a foreign key defined and the other\n> one doesn't. Other than that, they are identical. From the data in the\n> scripts, it's obvious there aren't any violations of keys.\n>\n> [postgres@loopy postgres]$ diff /tmp/good.sql /tmp/bad.sql\n> 18c18\n> < create table c2 ( id int, value_sum int);\n> ---\n> > create table c2 ( id int references c(id), value_sum int);\n>\n> [postgres@loopy postgres]$ psql test < /tmp/good.sql\n> NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'c_pkey' for\n> table 'c'\n> CREATE\n> INSERT 19107 1\n> INSERT 19108 1\n> NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY\n> check(s)\n> CREATE\n> INSERT 19126 1\n> INSERT 19127 1\n> INSERT 19128 1\n> INSERT 19129 1\n> INSERT 19130 1\n> INSERT 19131 1\n> CREATE\n> CREATE\n> CREATE\n> UPDATE 6\n> id | value_sum\n> ----+-----------\n> 1 | 6\n> 2 | 17\n> (2 rows)\n>\n> id\n> ----\n> 1\n> 2\n> (2 rows)\n>\n> id\n> ----\n> 1\n> 2\n> (2 rows)\n>\n> id\n> ----\n> 1\n> 2\n> (2 rows)\n> [postgres@loopy postgres]$ psql test < /tmp/bad.sql\n> NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'c_pkey' for\n> table 'c'\n> CREATE\n> INSERT 19164 1\n> INSERT 19165 1\n> NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY\n> check(s)\n> CREATE\n> INSERT 19183 1\n> INSERT 19184 1\n> INSERT 19185 1\n> INSERT 
19186 1\n> INSERT 19187 1\n> INSERT 19188 1\n> NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY\n> check(s)\n> CREATE\n> CREATE\n> CREATE\n> ERROR: triggered data change violation on relation \"c2\"\n> id | value_sum\n> ----+-----------\n> (0 rows)\n>\n> id\n> ----\n> 1\n> 2\n> (2 rows)\n>\n> id\n> ----\n> 1\n> 2\n> (2 rows)\n>\n> id\n> ----\n> (0 rows)\n>\n>\n>", "msg_date": "Sun, 24 Jun 2001 01:17:08 -0600", "msg_from": "\"Brian Hirt\" <bhirt@mobygames.com>", "msg_from_op": true, "msg_subject": "Fw: Problem with plpgsql functions and foreign key constraints. " } ]
[ { "msg_contents": "Bruce,\n\nI agree that log4j is probably overkill. I also understand the need for \nbetter logging. I have been fortunate that I can run through a debugger \nso that I have been able to track down any problems I have had when the \nserver sql statment log isn't sufficient.\n\nThe one good thing about postgresql (unlike other databases I use) is \nthat at least you have access to the source code so that you can add \nprints as needed.\n\n\nthanks,\n--Barry\n\n\nBruce Toback wrote:\n\n> \n> On Sunday, June 24, 2001, at 09:49 PM, Barry Lind wrote:\n> \n>> First I would ask what kind of logging you are talking about? I find \n>> that simply turning on debug output on the server to print out the sql \n>> statements being executed is generally all I need for logging, and the \n>> server already supports that.\n> \n> \n> The problem is that the SQL sent to the backend is sometimes the end \n> product of a lot of interaction between the JDBC driver and the client \n> program. This is frequently the case with general-purpose programs like \n> report writers and application servers.\n> \n> If the generated SQL is bad, or if the data the client program receives \n> back is bad, it's necessary to figure out exactly what the client \n> program is doing in order to solve the problem. For example, the client \n> may use some kinds of row metadata and not others, or may be using an \n> unusual sequence of calls to place data into a PreparedStatement. \n> Logging is the only way to figure out what the client is doing if you \n> don't have the client source.\n> \n>> While logging is a good idea, having yet another non-postgresql \n>> component that needs to be installed in order to build and/or run the \n>> jdbc driver is in my opionion a bad idea. I already dislike the fact \n>> that I have to install ant just to build the driver. 
It was so much \n>> easier under 7.0 when make was all that was required.\n> \n> \n> Agreed -- especially given what it takes to get a Java program to work, \n> since there are no standards for where the various components should \n> live. Making ant work wasn't a pleasant experience: it took more effort \n> to build the 7.1 JDBC driver alone than to build the entire 7.0 Postgres \n> suite.\n> \n> On the other hand, logging *is* useful in making sure that the JDBC \n> driver works with the widest possible variety of client software, \n> including all kinds of proprietary middleware products. If the logging \n> is set up so that log4j is loaded dynamically, would that be a \n> satisfactory solution to the build problem?\n> \n> Actually, given the purpose for including logging, log4j is probably \n> more than what's required to do the job -- essentially just tracing \n> client call activity.\n> \n> -- Bruce\n> \n> --------------------------------------------------------------------------\n> Bruce Toback Tel: (602) 996-8601| My candle burns at both ends;\n> OPT, Inc. (800) 858-4507| It will not last the night;\n> 11801 N. Tatum Blvd. Ste. 142 | But ah, my foes, and oh, my friends -\n> Phoenix AZ 85028 | It gives a lovely light.\n> btoback@optc.com | -- Edna St. Vincent Millay\n> \n\n\n", "msg_date": "Sun, 24 Jun 2001 22:40:42 -0700", "msg_from": "Barry Lind <blind@xythos.com>", "msg_from_op": true, "msg_subject": "Re: Instrumenting and Logging in JDBC" }, { "msg_contents": "\n\nBarry Lind wrote:\n\n> Bruce,\n> \n> I agree that log4j is probably overkill. I also understand the need for \n> better logging. 
I have been fortunate that I can run through a debugger \n> so that I have been able to track down any problems I have had when the \n> server sql statment log isn't sufficient.\n> \n> The one good thing about postgresql (unlike other databases I use) is \n> that at least you have access to the source code so that you can add \n> prints as needed.\n> \n> \n> thanks,\n> --Barry\n> \n> \n> Bruce Toback wrote:\n> \n>>\n>> On Sunday, June 24, 2001, at 09:49 PM, Barry Lind wrote:\n>>\n>>> First I would ask what kind of logging you are talking about? I find \n>>> that simply turning on debug output on the server to print out the \n>>> sql statements being executed is generally all I need for logging, \n>>> and the server already supports that.\n>>\n>>\n>>\n>> The problem is that the SQL sent to the backend is sometimes the end \n>> product of a lot of interaction between the JDBC driver and the client \n>> program. This is frequently the case with general-purpose programs \n>> like report writers and application servers.\n>>\n>> If the generated SQL is bad, or if the data the client program \n>> receives back is bad, it's necessary to figure out exactly what the \n>> client program is doing in order to solve the problem. For example, \n>> the client may use some kinds of row metadata and not others, or may \n>> be using an unusual sequence of calls to place data into a \n>> PreparedStatement. Logging is the only way to figure out what the \n>> client is doing if you don't have the client source.\n>>\n>>> While logging is a good idea, having yet another non-postgresql \n>>> component that needs to be installed in order to build and/or run the \n>>> jdbc driver is in my opionion a bad idea. I already dislike the fact \n>>> that I have to install ant just to build the driver. 
It was so much \n>>> easier under 7.0 when make was all that was required.\n>>\n>>\n>>\n>> Agreed -- especially given what it takes to get a Java program to \n>> work, since there are no standards for where the various components \n>> should live. Making ant work wasn't a pleasant experience: it took \n>> more effort to build the 7.1 JDBC driver alone than to build the \n>> entire 7.0 Postgres suite.\n>>\n>> On the other hand, logging *is* useful in making sure that the JDBC \n>> driver works with the widest possible variety of client software, \n>> including all kinds of proprietary middleware products. If the logging \n>> is set up so that log4j is loaded dynamically, would that be a \n>> satisfactory solution to the build problem?\n>>\n>> Actually, given the purpose for including logging, log4j is probably \n>> more than what's required to do the job -- essentially just tracing \n>> client call activity.\n>>\n>> -- Bruce\n>>\n>> -------------------------------------------------------------------------- \n>>\n>> Bruce Toback Tel: (602) 996-8601| My candle burns at both ends;\n>> OPT, Inc. (800) 858-4507| It will not last the night;\n>> 11801 N. Tatum Blvd. Ste. 142 | But ah, my foes, and oh, my \n>> friends -\n>> Phoenix AZ 85028 | It gives a lovely light.\n>> btoback@optc.com | -- Edna St. Vincent Millay\n>>\n> \n> \n\n\n", "msg_date": "Mon, 25 Jun 2001 08:43:36 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Instrumenting and Logging in JDBC" }, { "msg_contents": "\n Bruce,\n\n I agree that log4j is probably overkill. I also understand the need \n for better logging. 
I have been fortunate that I can run through a \n debugger so that I have been able to track down any problems I have \n had when the server sql statment log isn't sufficient.\n\n The one good thing about postgresql (unlike other databases I use) is \n that at least you have access to the source code so that you can add \n prints as needed.\n\n\n thanks,\n --Barry\n\n\n>> Bruce Toback wrote:\n>>\n>>>\n>>> On Sunday, June 24, 2001, at 09:49 PM, Barry Lind wrote:\n>>>\n>>>> First I would ask what kind of logging you are talking about? I \n>>>> find that simply turning on debug output on the server to print out \n>>>> the sql statements being executed is generally all I need for \n>>>> logging, and the server already supports that.\n>>>\n>>>\n>>>\n>>>\n>>> The problem is that the SQL sent to the backend is sometimes the end \n>>> product of a lot of interaction between the JDBC driver and the \n>>> client program. This is frequently the case with general-purpose \n>>> programs like report writers and application servers.\n>>>\n>>> If the generated SQL is bad, or if the data the client program \n>>> receives back is bad, it's necessary to figure out exactly what the \n>>> client program is doing in order to solve the problem. For example, \n>>> the client may use some kinds of row metadata and not others, or may \n>>> be using an unusual sequence of calls to place data into a \n>>> PreparedStatement. Logging is the only way to figure out what the \n>>> client is doing if you don't have the client source.\n>>>\n>>>> While logging is a good idea, having yet another non-postgresql \n>>>> component that needs to be installed in order to build and/or run \n>>>> the jdbc driver is in my opionion a bad idea. I already dislike the \n>>>> fact that I have to install ant just to build the driver. 
It was so \n>>>> much easier under 7.0 when make was all that was required.\n>>>\n>>>\n>>>\n>>>\n>>> Agreed -- especially given what it takes to get a Java program to \n>>> work, since there are no standards for where the various components \n>>> should live. Making ant work wasn't a pleasant experience: it took \n>>> more effort to build the 7.1 JDBC driver alone than to build the \n>>> entire 7.0 Postgres suite.\n>>>\n>>> On the other hand, logging *is* useful in making sure that the JDBC \n>>> driver works with the widest possible variety of client software, \n>>> including all kinds of proprietary middleware products. If the \n>>> logging is set up so that log4j is loaded dynamically, would that be \n>>> a satisfactory solution to the build problem?\n>>>\n>>> Actually, given the purpose for including logging, log4j is probably \n>>> more than what's required to do the job -- essentially just tracing \n>>> client call activity.\n>>>\n>>> -- Bruce\n>>>\n>>> -------------------------------------------------------------------------- \n>>>\n>>> Bruce Toback Tel: (602) 996-8601| My candle burns at both ends;\n>>> OPT, Inc. (800) 858-4507| It will not last the night;\n>>> 11801 N. Tatum Blvd. Ste. 142 | But ah, my foes, and oh, my \n>>> friends -\n>>> Phoenix AZ 85028 | It gives a lovely light.\n>>> btoback@optc.com | -- Edna St. Vincent Millay\n>>>\n>>\n>>\n> \n> \n\n\n", "msg_date": "Mon, 25 Jun 2001 15:49:51 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Instrumenting and Logging in JDBC" } ]
[ { "msg_contents": "Actually the problem is worse than I thought. Not only do all the \nget/setXXXStream methods assume the datatype is a BLOB, but also the \nget/setBytes methods. This means that it isn't possible to support \nbytea as the binary datatype without also breaking some backward \ncompatability.\n\nIn looking at the CVS log, it appears that the stream methods were only \nintroduced in the 7.1 JDBC driver, since 7.1 has only been out \n(production) a few months, the number of people affected will be \nsmaller, the setBytes() method that assumed a blob was there in 7.0, so \nit is likely more people will be impacted by any change there.\n\nthanks,\n--Barry\n\nBruce Toback wrote:\n\n> \n> On Sunday, June 24, 2001, at 10:32 PM, Barry Lind wrote:\n> \n>> This is an interesting problem. And I can't think a any easy \n>> solution. But given TOAST in 7.1 the existing implementation doesn't \n>> make sense IMHO My suggestion would be that the get/setXXXStream \n>> methods work on TOASTed data types and get/setBlob be used for Blobs.\n>>\n> \n> That would be my preference as well.\n> \n>> As far as your patch, I don't see that as a generic solution. It is \n>> equally likely that a Blob could contain less than 8190 characters, or \n>> a varchar could contain more that 8190 characters in 7.1.\n> \n> \n> It's certainly not a generic solution. I was looking for a solution that \n> would break fewer of the applications that rely on the current \n> nonstandard behavior. I'd much prefer to simply have get/set...stream \n> just implement the standard behavior. 
But not knowing the Postgres \n> developers' preferences when it comes to these questions, I chose the \n> break-fewer-existing-apps approach.\n> \n> If the answer is that the Postgres developers are willing to tell \n> current JDBC users to switch to the Blob/Clob methods when that's what \n> they really mean, I'll remove the switch before submitting the patch.\n> \n> -- Bruce\n> \n\n\n", "msg_date": "Sun, 24 Jun 2001 23:08:33 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] JDBC adaptor issue" }, { "msg_contents": "> Actually the problem is worse than I thought. Not only do all the \n> get/setXXXStream methods assume the datatype is a BLOB, but also the \n> get/setBytes methods. This means that it isn't possible to support \n> bytea as the binary datatype without also breaking some backward \n> compatability.\n> \n> In looking at the CVS log, it appears that the stream methods were only \n> introduced in the 7.1 JDBC driver, since 7.1 has only been out \n> (production) a few months, the number of people affected will be \n> smaller, the setBytes() method that assumed a blob was there in 7.0, so \n> it is likely more people will be impacted by any change there.\n\nIf you are looking for votes, you can break backward compatibility here.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 11 Jul 2001 13:43:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] JDBC adaptor issue" } ]
[ { "msg_contents": "\n> > This patch will implement the \"ENABLE PRIVILEGE\" and \"DISABLE PRIVILEGE\"\n> > commands in PL/pgSQL, which, respectively, change the effective uid to that\n> > of the function owner and back. It doesn't break security (I hope). The\n> > commands can be abbreviated as \"ENABLE\" and \"DISABLE\" for the poor saps that\n\nAnybody else want to object to this abbreviation idea ? Seems \nreading ENABLE; or DISABLE; is very hard to interpret in source code\n(enable what ?) and should thus not be allowed (or allow \"ENABLE PRIV\").\n\nAndreas\n", "msg_date": "Mon, 25 Jun 2001 12:12:42 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: [PATCH] Re: Setuid functions" }, { "msg_contents": "Actually, I liked the SET AUTHORIZATION { DEFINER | INVOKER } terminology\nmentioned earlier.\n\nMark\n\nZeugswetter Andreas SB wrote:\n> \n> > > This patch will implement the \"ENABLE PRIVILEGE\" and \"DISABLE PRIVILEGE\"\n> > > commands in PL/pgSQL, which, respectively, change the effective uid to that\n> > > of the function owner and back. It doesn't break security (I hope). The\n> > > commands can be abbreviated as \"ENABLE\" and \"DISABLE\" for the poor saps that\n> \n> Anybody else want to object to this abbreviation idea ? Seems\n> reading ENABLE; or DISABLE; is very hard to interpret in source code\n> (enable what ?) 
and should thus not be allowed (or allow \"ENABLE PRIV\").\n> \n> Andreas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n", "msg_date": "Mon, 25 Jun 2001 09:19:27 -0400", "msg_from": "Mark Volpe <volpe.mark@epa.gov>", "msg_from_op": false, "msg_subject": "Re: AW: [PATCH] Re: Setuid functions" }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> Anybody else want to object to this abbreviation idea ?\n\nI thought we already agreed to change the names per Peter's suggestion.\n\nI didn't like the original names whether abbreviated or not ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Jun 2001 09:41:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: [PATCH] Re: Setuid functions " } ]
[ { "msg_contents": "\n> > Still, it's an interesting alternative. Comments anyone?\n> \n> SelfExclusiveLock is clear and can't be confused with other \n> lock types.\n\nHow about giving it a label ? SelfExclusive would somehow suggest,\nthat you can have more than one self exclusive lock.\nLike:\n\tlock table atab in self exclusive mode for \"vacuum\";\n\tdoes not conflict with:\n\tlock table atab in self exclusive mode for \"account balancing\";\n\nAndreas \n", "msg_date": "Mon, 25 Jun 2001 12:17:10 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Good name for new lock type for VACUUM?" } ]
[ { "msg_contents": "\nhttp://www.redhat.com/about/presscenter/2001/press_database.html\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Mon, 25 Jun 2001 09:51:35 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": true, "msg_subject": "RH announcement is there" } ]
[ { "msg_contents": "\n> > Anybody else want to object to this abbreviation idea ?\n> \n> I thought we already agreed to change the names per Peter's suggestion.\n> \n> I didn't like the original names whether abbreviated or not ...\n\nGood. I have not seen that agreement, maybe it was implied.\nI am not sure whether the feature does not actually present a security \nhole ? Two collaborating users can pass each other their privileges.\nI think we might need to guard that feature with a special privilege that \nthe function creator needs during creation( e.g. dba).\n\nAnd why not use the existing \"set session authorization ...\" syntax?\nBecause it would remain after function exit? Because it needs dba to execute ? \n\nDon't misunderstand, I like the feature, but this probably has to be considered.\n\nAndreas\n", "msg_date": "Mon, 25 Jun 2001 16:18:25 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: [PATCH] Re: Setuid functions " }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> I am not sure whether the feature does not actually present a security \n> hole ? Two collaborating users can pass each other their privileges.\n\nI don't see any (new) security risk here. Code written by one user can\nbe executed with the privileges of another --- so what? That's the\nsituation now, with non-setuid functions.\n\n> And why not use the existing \"set session authorization ...\" syntax?\n\nThat syntax implies setting authorization permanently (for the rest of\nthe session). If we take over that syntax to mean local privilege\nchange inside a function, then it'd be impossible to let a function do a\nglobal change in the future. 
Not sure if we ever want that, but I don't\nthink we should foreclose the possibility by using the same syntax to\nmean two different things.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Jun 2001 10:47:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: AW: [PATCH] Re: Setuid functions " } ]
[ { "msg_contents": "\n> > I am not sure whether the feature does not actually present a security \n> > hole ? Two collaborating users can pass each other their privileges.\n> \n> I don't see any (new) security risk here. Code written by one user can\n> be executed with the privileges of another --- so what? That's the\n> situation now, with non-setuid functions.\n\nHmm? A non-setuid function can execute code written by another user,\nbut is only allowed to do things the \"invoker\" has privileges for.\nThus it is a convenience, but does not allow the invoker to do anything\nhe could not type himself.\nNot so with setuid functions, that is exactly why they are handy.\nWithout making the \"definer\" need an additional grant for creating such \na function, it would be like giving him all the privs he has \n\"with grant option\".\n\n> \n> > And why not use the existing \"set session authorization ...\" syntax?\n> \n> That syntax implies setting authorization permanently (for the rest of\n> the session). If we take over that syntax to mean local privilege\n> change inside a function, then it'd be impossible to let a function do a\n> global change in the future. Not sure if we ever want that, but I don't\n> think we should foreclose the possibility by using the same syntax to\n> mean two different things.\n\nYes, I was not sure about the intended standards scope of a \"set session auth...\" \nwhen called inside a function.\n\nAndreas\n", "msg_date": "Mon, 25 Jun 2001 17:00:37 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: AW: [PATCH] Re: Setuid functions " }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> Without making the \"definer\" need an additional grant for creating such \n> a function, it would be like giving him all the privs he has \n> \"with grant option\".\n\nHmm ... interesting analogy, but does it hold water? 
The GRANT OPTION\nstuff implies the right to pass on your privileges to someone else\n*permanently*. A setuid function only lets someone else do the same\nthings you can do at the time it is called. There's nothing there that\ncouldn't be done by having the one user ask the other to do something\nusing an outside-the-database communication channel. So I really don't\nsee a security issue.\n\nI also don't see any privilege of this type in SQL92 (which does have\nthe concept of setuid functions, in the form of modules).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Jun 2001 11:25:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: AW: AW: [PATCH] Re: Setuid functions " }, { "msg_contents": "Tom Lane writes:\n\n> I also don't see any privilege of this type in SQL92 (which does have\n> the concept of setuid functions, in the form of modules).\n\nSQL99 has setuid functions in the form of setuid functions, with a syntax\nlike CREATE FUNCTION .... SECURITY { INVOKER | DEFINER } (too lazy to look\nup the details). There were some peculiar differences IIRC, such as\ntrigger functions executing with the permission of the trigger creator\n(which is yet different).\n\nModules are more like \"packages\", AFAICT.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 25 Jun 2001 18:14:59 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: AW: AW: AW: [PATCH] Re: Setuid functions " }, { "msg_contents": "Zeugswetter Andreas SB writes:\n\n> Hmm? 
A non-setuid function can execute code written by another user,\n> but is only allowed to do things the \"invoker\" has privileges for.\n> Thus it is a convenience, but does not allow the invoker to do anything\n> he could not type himself.\n> Not so with setuid functions, that is exactly why they are handy.\n> Without making the \"definer\" need an additional grant for creating such\n> a function, it would be like giving him all the privs he has\n> \"with grant option\".\n\nSQL99 has an answer for this:\n\n[11.49 GR1]\n\n 1) If R is a schema-level routine, then a privilege descriptor\n is created that defines the EXECUTE privilege on R to the\n <authorization identifier> that owns the schema that includes R.\n The grantor for the privilege descriptor is set to the special\n grantor value \"_SYSTEM\". This privilege is grantable if and only\n if one of the following is satisfied:\n\n a) R is an SQL routine and all of the privileges necessary\n for the <authorization identifier> to successfully execute\n the <SQL procedure statement> contained in the <routine\n body> are grantable. The necessary privileges include the\n EXECUTE privilege on every subject routine of every <routine\n invocation> contained in the <SQL procedure statement>.\n\nWhat this means (to me) is that unless you have grantable privileges for\nall the things that your function does, you can't grant the EXECUTE\nprivilege to anyone.\n\nThis rule, while logical, isn't exactly pleasant, since we can hardly\nevaluate statically what a function will do (shades of the halting\nproblem).\n\nI think your concern is valid. Maybe we can do this:\n\n1. The proposed commands only work in \"setuid\" functions (like in Unix)\n\n2. To create a setuid function you need some privilege.\n\nPart 2 will be a problem, but both the implementation process and the\nimplementation itself might terminate in finite time. 
;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 25 Jun 2001 18:34:03 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: AW: AW: AW: [PATCH] Re: Setuid functions " }, { "msg_contents": "I updated the patch to use the SET AUTHORIZATION { INVOKER | DEFINER }\nterminology. Also, the function owner is now determined and saved at compile\ntime (no gotchas here, right?). It is located at\n\nhttp://volpe.home.mindspring.com/pgsql/set_auth.patch\n\nMark\n", "msg_date": "Mon, 25 Jun 2001 14:26:17 -0400", "msg_from": "Mark Volpe <volpe.mark@epa.gov>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Re: Setuid functions" }, { "msg_contents": "> I updated the patch to use the SET AUTHORIZATION { INVOKER | DEFINER }\n> terminology. Also, the function owner is now determined and saved at compile\n> time (no gotchas here, right?). It is located at\n> \n> http://volpe.home.mindspring.com/pgsql/set_auth.patch\n\nOK, patch applied. Can I have some docs with that burger? :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/pl/plpgsql/src/gram.y\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/pl/plpgsql/src/gram.y,v\nretrieving revision 1.21\ndiff -c -r1.21 gram.y\n*** src/pl/plpgsql/src/gram.y\t2001/06/06 18:54:41\t1.21\n--- src/pl/plpgsql/src/gram.y\t2001/07/11 18:37:07\n***************\n*** 122,132 ****\n %type <stmts>\tproc_sect, proc_stmts, stmt_else, loop_body\n %type <stmt>\tproc_stmt, pl_block\n %type <stmt>\tstmt_assign, stmt_if, stmt_loop, stmt_while, stmt_exit\n! 
%type <stmt>\tstmt_return, stmt_raise, stmt_execsql, stmt_fori\n %type <stmt>\tstmt_fors, stmt_select, stmt_perform\n %type <stmt>\tstmt_dynexecute, stmt_dynfors, stmt_getdiag\n %type <stmt>\tstmt_open, stmt_fetch, stmt_close\n \n %type <intlist>\traise_params\n %type <ival>\traise_level, raise_param\n %type <str>\t\traise_msg\n--- 122,134 ----\n %type <stmts>\tproc_sect, proc_stmts, stmt_else, loop_body\n %type <stmt>\tproc_stmt, pl_block\n %type <stmt>\tstmt_assign, stmt_if, stmt_loop, stmt_while, stmt_exit\n! %type <stmt>\tstmt_return, stmt_raise, stmt_execsql, stmt_fori, stmt_setauth\n %type <stmt>\tstmt_fors, stmt_select, stmt_perform\n %type <stmt>\tstmt_dynexecute, stmt_dynfors, stmt_getdiag\n %type <stmt>\tstmt_open, stmt_fetch, stmt_close\n \n+ %type <ival>\tauth_level\n+ \n %type <intlist>\traise_params\n %type <ival>\traise_level, raise_param\n %type <str>\t\traise_msg\n***************\n*** 172,177 ****\n--- 174,183 ----\n %token\tK_PERFORM\n %token\tK_ROW_COUNT\n %token\tK_RAISE\n+ %token\tK_SET\n+ %token\tK_AUTHORIZATION\n+ %token\tK_INVOKER\n+ %token\tK_DEFINER\n %token\tK_RECORD\n %token\tK_RENAME\n %token\tK_RESULT_OID\n***************\n*** 726,731 ****\n--- 732,739 ----\n \t\t\t\t\t\t{ $$ = $1; }\n \t\t\t\t| stmt_raise\n \t\t\t\t\t\t{ $$ = $1; }\n+ \t\t\t\t| stmt_setauth\n+ \t\t\t\t\t\t{ $$ = $1; }\n \t\t\t\t| stmt_execsql\n \t\t\t\t\t\t{ $$ = $1; }\n \t\t\t\t| stmt_dynexecute\n***************\n*** 1242,1247 ****\n--- 1250,1278 ----\n \t\t\t\t\t\t$$ = (PLpgSQL_stmt *)new;\n \t\t\t\t\t}\n \t\t\t\t;\n+ \n+ stmt_setauth\t\t: K_SET K_AUTHORIZATION auth_level lno ';'\n+ \t\t\t\t{\n+ \t\t\t\t\tPLpgSQL_stmt_setauth *new;\n+ \n+ \t\t\t\t\tnew=malloc(sizeof(PLpgSQL_stmt_setauth));\n+ \n+ \t\t\t\t\tnew->cmd_type = PLPGSQL_STMT_SETAUTH;\n+ \t\t\t\t\tnew->auth_level = $3;\n+ new->lineno = $4;\n+ \n+ \t\t\t\t\t$$ = (PLpgSQL_stmt *)new;\n+ \t\t\t\t}\n+ \n+ auth_level : K_DEFINER\n+ \t\t{\n+ \t\t\t$$=PLPGSQL_AUTH_DEFINER;\n+ }\n+ \t | K_INVOKER\n+ \t{\n+ 
\t$$=PLPGSQL_AUTH_INVOKER;\n+ }\n+ ;\n \n stmt_raise\t\t: K_RAISE lno raise_level raise_msg raise_params ';'\n \t\t\t\t\t{\nIndex: src/pl/plpgsql/src/pl_comp.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/pl/plpgsql/src/pl_comp.c,v\nretrieving revision 1.31\ndiff -c -r1.31 pl_comp.c\n*** src/pl/plpgsql/src/pl_comp.c\t2001/05/21 14:22:18\t1.31\n--- src/pl/plpgsql/src/pl_comp.c\t2001/07/11 18:37:07\n***************\n*** 169,174 ****\n--- 169,175 ----\n \n \tfunction->fn_functype = functype;\n \tfunction->fn_oid = fn_oid;\n+ function->definer_uid = procStruct->proowner;\n \tfunction->fn_name = strdup(DatumGetCString(DirectFunctionCall1(nameout,\n \t\t\t\t\t\t\t\t NameGetDatum(&(procStruct->proname)))));\n \nIndex: src/pl/plpgsql/src/pl_exec.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/pl/plpgsql/src/pl_exec.c,v\nretrieving revision 1.44\ndiff -c -r1.44 pl_exec.c\n*** src/pl/plpgsql/src/pl_exec.c\t2001/05/28 19:33:24\t1.44\n--- src/pl/plpgsql/src/pl_exec.c\t2001/07/11 18:37:07\n***************\n*** 47,52 ****\n--- 47,53 ----\n #include \"plpgsql.h\"\n #include \"pl.tab.h\"\n \n+ #include \"miscadmin.h\"\n #include \"access/heapam.h\"\n #include \"catalog/pg_proc.h\"\n #include \"catalog/pg_type.h\"\n***************\n*** 105,110 ****\n--- 106,113 ----\n \t\t\t PLpgSQL_stmt_exit * stmt);\n static int exec_stmt_return(PLpgSQL_execstate * estate,\n \t\t\t\t PLpgSQL_stmt_return * stmt);\n+ static int exec_stmt_setauth(PLpgSQL_execstate * estate,\n+ \t\t\t\tPLpgSQL_stmt_setauth * stmt);\n static int exec_stmt_raise(PLpgSQL_execstate * estate,\n \t\t\t\tPLpgSQL_stmt_raise * stmt);\n static int exec_stmt_execsql(PLpgSQL_execstate * estate,\n***************\n*** 226,231 ****\n--- 229,237 ----\n \t\t\t\t\tcase PLPGSQL_STMT_RETURN:\n \t\t\t\t\t\tstmttype = \"return\";\n \t\t\t\t\t\tbreak;\n+ \t\t\t\t\tcase PLPGSQL_STMT_SETAUTH:\n+ 
\t\t\t\t\t\tstmttype = \"setauth\";\n+ \t\t\t\t\t\tbreak;\n \t\t\t\t\tcase PLPGSQL_STMT_RAISE:\n \t\t\t\t\t\tstmttype = \"raise\";\n \t\t\t\t\t\tbreak;\n***************\n*** 277,283 ****\n \testate.retistuple = func->fn_retistuple;\n \testate.retisset = func->fn_retset;\n \testate.exitlabel = NULL;\n! \n \testate.found_varno = func->found_varno;\n \testate.ndatums = func->ndatums;\n \testate.datums = palloc(sizeof(PLpgSQL_datum *) * estate.ndatums);\n--- 283,292 ----\n \testate.retistuple = func->fn_retistuple;\n \testate.retisset = func->fn_retset;\n \testate.exitlabel = NULL;\n! \testate.invoker_uid = GetUserId();\n! \testate.definer_uid = func->definer_uid;\n! \testate.auth_level = PLPGSQL_AUTH_INVOKER;\n! \n \testate.found_varno = func->found_varno;\n \testate.ndatums = func->ndatums;\n \testate.datums = palloc(sizeof(PLpgSQL_datum *) * estate.ndatums);\n***************\n*** 397,402 ****\n--- 406,414 ----\n \t\telog(ERROR, \"control reaches end of function without RETURN\");\n \t}\n \n+ \tif (estate.auth_level!=PLPGSQL_AUTH_INVOKER)\n+ \t\tSetUserId(estate.invoker_uid);\n+ \n \t/*\n \t * We got a return value - process it\n \t */\n***************\n*** 577,582 ****\n--- 589,597 ----\n \testate.retistuple = func->fn_retistuple;\n \testate.retisset = func->fn_retset;\n \testate.exitlabel = NULL;\n+ \testate.invoker_uid = GetUserId();\n+ \testate.definer_uid = func->definer_uid;\n+ \testate.auth_level = PLPGSQL_AUTH_INVOKER;\n \n \testate.found_varno = func->found_varno;\n \testate.ndatums = func->ndatums;\n***************\n*** 760,765 ****\n--- 775,783 ----\n \t\telog(ERROR, \"control reaches end of trigger procedure without RETURN\");\n \t}\n \n+ \tif (estate.auth_level!=PLPGSQL_AUTH_INVOKER)\n+ \t\tSetUserId(estate.invoker_uid);\n+ \n \t/*\n \t * Check that the returned tuple structure has the same attributes,\n \t * the relation that fired the trigger has.\n***************\n*** 1022,1027 ****\n--- 1040,1049 ----\n \t\t\trc = exec_stmt_return(estate, 
(PLpgSQL_stmt_return *) stmt);\n\t\t\tbreak;\n \n+ \t\tcase PLPGSQL_STMT_SETAUTH:\n+ \t\t\trc = exec_stmt_setauth(estate, (PLpgSQL_stmt_setauth *) stmt);\n+ \t\t\tbreak;\n+ \n \t\tcase PLPGSQL_STMT_RAISE:\n \t\t\trc = exec_stmt_raise(estate, (PLpgSQL_stmt_raise *) stmt);\n \t\t\tbreak;\n***************\n*** 1643,1648 ****\n--- 1665,1693 ----\n \t\t\t\t\t\t\t\t\t&(estate->rettype));\n \n \treturn PLPGSQL_RC_RETURN;\n+ }\n+ \n+ /* ----------\n+ * exec_stmt_setauth Changes user ID to/from\n+ * that of the function owner's\n+ * ----------\n+ */\n+ \n+ static int\n+ exec_stmt_setauth(PLpgSQL_execstate * estate, PLpgSQL_stmt_setauth * stmt)\n+ {\n+ \tswitch(stmt->auth_level)\n+ {\n+ \tcase PLPGSQL_AUTH_DEFINER:\n+ \tSetUserId(estate->definer_uid);\n+ break;\n+ case PLPGSQL_AUTH_INVOKER:\n+ \tSetUserId(estate->invoker_uid);\n+ break;\n+ \t}\n+ \n+ \testate->auth_level=stmt->auth_level;\n+ \treturn PLPGSQL_RC_OK;\n }\n \n \nIndex: src/pl/plpgsql/src/pl_funcs.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/pl/plpgsql/src/pl_funcs.c,v\nretrieving revision 1.13\ndiff -c -r1.13 pl_funcs.c\n*** src/pl/plpgsql/src/pl_funcs.c\t2001/05/21 14:22:19\t1.13\n--- src/pl/plpgsql/src/pl_funcs.c\t2001/07/11 18:37:08\n***************\n*** 382,387 ****\n--- 382,388 ----\n static void dump_select(PLpgSQL_stmt_select * stmt);\n static void dump_exit(PLpgSQL_stmt_exit * stmt);\n static void dump_return(PLpgSQL_stmt_return * stmt);\n+ static void dump_setauth(PLpgSQL_stmt_setauth * stmt);\n static void dump_raise(PLpgSQL_stmt_raise * stmt);\n static void dump_execsql(PLpgSQL_stmt_execsql * stmt);\n static void dump_dynexecute(PLpgSQL_stmt_dynexecute * stmt);\n***************\n*** 438,443 ****\n--- 439,447 ----\n \t\tcase PLPGSQL_STMT_RETURN:\n \t\t\tdump_return((PLpgSQL_stmt_return *) stmt);\n \t\t\tbreak;\n+ \t\tcase PLPGSQL_STMT_SETAUTH:\n+ \t\t\tdump_setauth((PLpgSQL_stmt_setauth *) stmt);\n+ \t\t\tbreak;\n \t\tcase 
PLPGSQL_STMT_RAISE:\n \t\t\tdump_raise((PLpgSQL_stmt_raise *) stmt);\n \t\t\tbreak;\n***************\n*** 719,724 ****\n--- 723,743 ----\n \t\t\tdump_expr(stmt->expr);\n \t}\n \tprintf(\"\\n\");\n+ }\n+ \n+ static void\n+ dump_setauth(PLpgSQL_stmt_setauth * stmt)\n+ {\n+ \tdump_ind();\n+ switch (stmt->auth_level)\n+ {\n+ \tcase PLPGSQL_AUTH_DEFINER:\n+ \tprintf(\"SET AUTHORIZATION DEFINER\\n\");\n+ break;\n+ case PLPGSQL_AUTH_INVOKER:\n+ \tprintf(\"SET AUTHORIZATION INVOKER\\n\");\n+ break;\n+ }\n }\n \n static void\nIndex: src/pl/plpgsql/src/plpgsql.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/pl/plpgsql/src/plpgsql.h,v\nretrieving revision 1.14\ndiff -c -r1.14 plpgsql.h\n*** src/pl/plpgsql/src/plpgsql.h\t2001/05/21 14:22:19\t1.14\n--- src/pl/plpgsql/src/plpgsql.h\t2001/07/11 18:37:08\n***************\n*** 95,100 ****\n--- 95,101 ----\n \tPLPGSQL_STMT_DYNEXECUTE,\n \tPLPGSQL_STMT_DYNFORS,\n \tPLPGSQL_STMT_GETDIAG,\n+ \tPLPGSQL_STMT_SETAUTH,\n \tPLPGSQL_STMT_OPEN,\n \tPLPGSQL_STMT_FETCH,\n \tPLPGSQL_STMT_CLOSE\n***************\n*** 112,117 ****\n--- 113,128 ----\n \tPLPGSQL_RC_RETURN\n };\n \n+ /* ---------\n+ * Authorization levels\n+ * ---------\n+ */\n+ enum\n+ {\n+ \tPLPGSQL_AUTH_INVOKER,\n+ PLPGSQL_AUTH_DEFINER,\n+ };\n+ \n /* ----------\n * GET DIAGNOSTICS system attrs\n * ----------\n***************\n*** 425,430 ****\n--- 436,447 ----\n \tint\t\t\tretrecno;\n }\t\t\tPLpgSQL_stmt_return;\n \n+ typedef struct\n+ { /* SET AUTHORIZATION statement */\n+ int cmd_type;\n+ int lineno;\n+ int\t\tauth_level;\n+ } PLpgSQL_stmt_setauth;\n \n typedef struct\n {\t\t\t\t\t\t\t\t/* RAISE statement\t\t\t*/\n***************\n*** 480,485 ****\n--- 497,503 ----\n \tint\t\t\ttg_nargs_varno;\n \n \tint\t\t\tndatums;\n+ Oid\t\t\tdefiner_uid;\n \tPLpgSQL_datum **datums;\n \tPLpgSQL_stmt_block *action;\n \tstruct PLpgSQL_function *next;\n***************\n*** 502,507 ****\n--- 520,528 ----\n 
\tint\t\t\tfound_varno;\n \tint\t\t\tndatums;\n \tPLpgSQL_datum **datums;\n+ \tOid\t\tinvoker_uid;\n+ \tOid\t\tdefiner_uid;\n+ int\t\tauth_level;\n }\t\t\tPLpgSQL_execstate;\n \n \nIndex: src/pl/plpgsql/src/scan.l\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/pl/plpgsql/src/scan.l,v\nretrieving revision 1.12\ndiff -c -r1.12 scan.l\n*** src/pl/plpgsql/src/scan.l\t2001/05/21 14:22:19\t1.12\n--- src/pl/plpgsql/src/scan.l\t2001/07/11 18:37:08\n***************\n*** 121,126 ****\n--- 121,130 ----\n open\t\t\t{ return K_OPEN;\t\t\t}\n perform\t\t\t{ return K_PERFORM;\t\t\t}\n raise\t\t\t{ return K_RAISE;\t\t\t}\n+ set\t\t\t{ return K_SET;\t\t\t\t}\n+ authorization\t\t{ return K_AUTHORIZATION;\t\t}\n+ invoker\t\t\t{ return K_INVOKER;\t\t\t}\n+ definer\t\t\t{ return K_DEFINER;\t\t\t}\n record\t\t\t{ return K_RECORD;\t\t\t}\n rename\t\t\t{ return K_RENAME;\t\t\t}\n result_oid\t\t{ return K_RESULT_OID;\t\t}", "msg_date": "Wed, 11 Jul 2001 14:53:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [PATCH] Re: Setuid functions" }, { "msg_contents": "Bruce Momjian writes:\n\n> > I updated the patch to use the SET AUTHORIZATION { INVOKER | DEFINER }\n> > terminology. Also, the function owner is now determined and saved at compile\n> > time (no gotchas here, right?). It is located at\n> >\n> > http://volpe.home.mindspring.com/pgsql/set_auth.patch\n>\n> OK, patch applied. Can I have some docs with that burger? 
:-)\n\nI think we concluded that this feature introduced a security hole.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 11 Jul 2001 21:54:02 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] [PATCH] Re: Setuid functions" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > > I updated the patch to use the SET AUTHORIZATION { INVOKER | DEFINER }\n> > > terminology. Also, the function owner is now determined and saved at compile\n> > > time (no gotchas here, right?). It is located at\n> > >\n> > > http://volpe.home.mindspring.com/pgsql/set_auth.patch\n> >\n> > OK, patch applied. Can I have some docs with that burger? :-)\n> \n> I think we concluded that this feature introduced a security hole.\n\nI thought that was addressed in the patch with the mention of:\n\n> > > Also, the function owner is now determined and saved at compile\n> > > time (no gotchas here, right?).\n\nDoes anyone remember?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 11 Jul 2001 15:58:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] [PATCH] Re: Setuid functions" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > > I updated the patch to use the SET AUTHORIZATION { INVOKER | DEFINER }\n> > > terminology. Also, the function owner is now determined and saved at compile\n> > > time (no gotchas here, right?). It is located at\n> > >\n> > > http://volpe.home.mindspring.com/pgsql/set_auth.patch\n> >\n> > OK, patch applied. Can I have some docs with that burger? :-)\n> \n> I think we concluded that this feature introduced a security hole.\n\nPeter, I see this in the archives. 
Is this it?\n\n\thttp://fts.postgresql.org/db/mw/msg.html?mid=1022748\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 11 Jul 2001 16:10:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] [PATCH] Re: Setuid functions" }, { "msg_contents": "Peter might be referring to this:\n\nhttp://fts.postgresql.org/db/mw/msg.html?mid=1022775\n\nThere was some discussion afterward, but I don't think a definite conclusion\nwas reached.\n\nMark\n\n> Peter, I see this in the archives. Is this it?\n> \n> http://fts.postgresql.org/db/mw/msg.html?mid=1022748\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 11 Jul 2001 16:23:17 -0400", "msg_from": "Mark Volpe <volpe.mark@epa.gov>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] [PATCH] Re: Setuid functions" }, { "msg_contents": "> Peter might be referring to this:\n> \n> http://fts.postgresql.org/db/mw/msg.html?mid=1022775\n> \n> There was some discussion afterward, but I don't think a definite conclusion\n> was reached.\n\nBut I see Tom Lane saying he doesn't see a security issue:\n\n\thttp://fts.postgresql.org/db/mw/msg.html?mid=1022758\n\nI don't pretend to understand it. Just tell me what to do with the\npatch. :-) \n\nMaybe back it out until we discuss it more.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 11 Jul 2001 16:26:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] [PATCH] Re: Setuid functions" }, { "msg_contents": "I'm going to say apply it, of course! But then, I'd just like to tell the boss\nthat my app is running on an unmodified PostgreSQL :-)\n\nMark\n\nBruce Momjian wrote:\n> \n> I don't pretend to understand it. Just tell me what to do with the\n> patch. :-)\n>\n", "msg_date": "Wed, 11 Jul 2001 16:46:32 -0400", "msg_from": "Mark Volpe <volpe.mark@epa.gov>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] [PATCH] Re: Setuid functions" }, { "msg_contents": "Bruce Momjian writes:\n\n> > Peter might be referring to this:\n> >\n> > http://fts.postgresql.org/db/mw/msg.html?mid=1022775\n> >\n> > There was some discussion afterward, but I don't think a definite conclusion\n> > was reached.\n>\n> But I see Tom Lane saying he doesn't see a security issue:\n>\n> \thttp://fts.postgresql.org/db/mw/msg.html?mid=1022758\n>\n> I don't pretend to understand it. Just tell me what to do with the\n> patch. 
:-)\n\nThe problem with setuid functions in general is that a database user can\neffectively re-grant privileges to which he has no grant privileges.\nE.g.,\n\nuser1=> create table table1 (id int, secret_content text);\nuser1=> grant select on test to user2;\n\n/* made up the syntax */\nuser2=> create function testfunc (int) returns text as '\nuser2'> begin\nuser2'> set authorization definer;\nuser2'> return select secret_content from table1 where id = $1;\nuser2'> end;' as 'plpgsql';\n\nuser3=> select * from table1 where id = 5;\n(fails)\nuser3=> select testfunc(5);\n(succeeds)\n\nTom has a point that as soon as user2 has the select privilege, he can\nmake a private copy of table1 and send it to user3.\n\nBut if you take this attitude you might as well get rid of the\nfine-grained privilege system, you'd just need 'select to public'. Also,\nthere may be other security or at least auditing mechanisms to supervise\nthe communication between user2 and user3. Or maybe user2 and user3 are\njust pseudo-users implementing some sort of \"least privilege\" paranoid\ndesign.\n\nAt least we should discuss whether we'd eventually like to have grantable\nprivileges, and if so, how this would fit in.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 11 Jul 2001 23:53:28 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Re: [PATCH] Re: Setuid functions" }, { "msg_contents": "Good point. Would the issue be resolved by either:\n\n- Only allowing the database superuser to use this mechanism?\n- Allowing it only in trigger functions? 
(That way a user has to actually own\none of the tables)\n\nMark\n\nPeter Eisentraut wrote:\n> \n> Bruce Momjian writes:\n> \n> > > Peter might be referring to this:\n> > >\n> > > http://fts.postgresql.org/db/mw/msg.html?mid=1022775\n> > >\n> > > There was some discussion afterward, but I don't think a definite conclusion\n> > > was reached.\n> >\n> > But I see Tom Lane saying he doesn't see a security issue:\n> >\n> > http://fts.postgresql.org/db/mw/msg.html?mid=1022758\n> >\n> > I don't pretend to understand it. Just tell me what to do with the\n> > patch. :-)\n> \n> The problem with setuid functions in general is that a database user can\n> effectively re-grant privileges to which he has no grant privileges.\n> E.g.,\n> \n> user1=> create table table1 (id int, secret_content text);\n> user1=> grant select on test to user2;\n> \n> /* made up the syntax */\n> user2=> create function testfunc (int) returns text as '\n> user2'> begin\n> user2'> set authorization definer;\n> user2'> return select secret_content from table1 where id = $1;\n> user2'> end;' as 'plpgsql';\n> \n> user3=> select * from table1 where id = 5;\n> (fails)\n> user3=> select testfunc(5);\n> (succeeds)\n> \n> Tom has a point that as soon as user2 has the select privilege, he can\n> make a private copy of table1 and send it to user3.\n> \n> But if you take this attitude you might as well get rid of the\n> fine-grained privilege system, you'd just need 'select to public'. Also,\n> there may be other security or at least auditing mechanisms to supervise\n> the communication between user2 and user3. 
Or maybe user2 and user3 are\n> just pseudo-users implementing some sort of \"least privilege\" paranoid\n> design.\n> \n> At least we should discuss whether we'd eventually like to have grantable\n> privileges, and if so, how this would fit in.\n> \n> --\n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n", "msg_date": "Wed, 11 Jul 2001 18:06:56 -0400", "msg_from": "Mark Volpe <volpe.mark@epa.gov>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Re: [PATCH] Re: Setuid functions" }, { "msg_contents": "Mark Volpe writes:\n\n> Good point. Would the issue be resolved by either:\n>\n> - Only allowing the database superuser to use this mechanism?\n\nIf you mean \"only allow a superuser do define functions using this\nmechanism\", that could work. But it would probably make this feature a\nlot less attractive, because any setuid function would have to run with\nsuper powers.\n\n> - Allowing it only in trigger functions? (That way a user has to actually own\n> one of the tables)\n\nYour premise is no longer correct in 7.2devel.\n\n>\n> Mark\n>\n> Peter Eisentraut wrote:\n> >\n> > Bruce Momjian writes:\n> >\n> > > > Peter might be referring to this:\n> > > >\n> > > > http://fts.postgresql.org/db/mw/msg.html?mid=1022775\n> > > >\n> > > > There was some discussion afterward, but I don't think a definite conclusion\n> > > > was reached.\n> > >\n> > > But I see Tom Lane saying he doesn't see a security issue:\n> > >\n> > > http://fts.postgresql.org/db/mw/msg.html?mid=1022758\n> > >\n> > > I don't pretend to understand it. Just tell me what to do with the\n> > > patch. 
:-)\n> >\n> > The problem with setuid functions in general is that a database user can\n> > effectively re-grant privileges to which he has no grant privileges.\n> > E.g.,\n> >\n> > user1=> create table table1 (id int, secret_content text);\n> > user1=> grant select on test to user2;\n> >\n> > /* made up the syntax */\n> > user2=> create function testfunc (int) returns text as '\n> > user2'> begin\n> > user2'> set authorization definer;\n> > user2'> return select secret_content from table1 where id = $1;\n> > user2'> end;' as 'plpgsql';\n> >\n> > user3=> select * from table1 where id = 5;\n> > (fails)\n> > user3=> select testfunc(5);\n> > (succeeds)\n> >\n> > Tom has a point that as soon as user2 has the select privilege, he can\n> > make a private copy of table1 and send it to user3.\n> >\n> > But if you take this attitude you might as well get rid of the\n> > fine-grained privilege system, you'd just need 'select to public'. Also,\n> > there may be other security or at least auditing mechanisms to supervise\n> > the communication between user2 and user3. Or maybe user2 and user3 are\n> > just pseudo-users implementing some sort of \"least privilege\" paranoid\n> > design.\n> >\n> > At least we should discuss whether we'd eventually like to have grantable\n> > privileges, and if so, how this would fit in.\n> >\n> > --\n> > Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n>\n>\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 12 Jul 2001 16:51:39 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Re: [PATCH] Re: Setuid functions" }, { "msg_contents": "> Mark Volpe writes:\n> \n> > Good point. Would the issue be resolved by either:\n> >\n> > - Only allowing the database superuser to use this mechanism?\n> \n> If you mean \"only allow a superuser do define functions using this\n> mechanism\", that could work. 
But it would probably make this feature a\n> lot less attractive, because any setuid function would have to run with\n> super powers.\n\nPatch backed out until we resolve the security issues. It remains in\nthe patch queue.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 12 Jul 2001 13:44:49 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Re: [PATCH] Re: Setuid functions" }, { "msg_contents": "OK, I finally got around to adding the superuser check to my patch. So I try\nto test it and...\n\nERROR: Only users with Postgres superuser privilege are permitted to create a\nfunction in the 'plpgsql' language.\n\nD'oh! So, if this is the case, then the last patch should be fine after all.\n\nMark\n\nPeter Eisentraut wrote:\n> \n> If you mean \"only allow a superuser do define functions using this\n> mechanism\", that could work. But it would probably make this feature a\n> lot less attractive, because any setuid function would have to run with\n> super powers.\n>\n", "msg_date": "Thu, 19 Jul 2001 12:40:13 -0400", "msg_from": "Mark Volpe <volpe.mark@epa.gov>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Re: [PATCH] Re: Setuid functions" }, { "msg_contents": "Mark Volpe <volpe.mark@epa.gov> writes:\n> ERROR: Only users with Postgres superuser privilege are permitted to create a\n> function in the 'plpgsql' language.\n\n> D'oh! 
So, if this is the case, then the last patch should be fine after all.\n\nNo, evidently you broke something, or your plpgsql is installed wrong.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 19 Jul 2001 13:08:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Re: [PATCH] Re: Setuid functions " }, { "msg_contents": "You are right; I have forgotten to create a \"trusted\" language.\n\nMark\n\nTom Lane wrote:\n> \n> Mark Volpe <volpe.mark@epa.gov> writes:\n> > ERROR: Only users with Postgres superuser privilege are permitted to create a\n> > function in the 'plpgsql' language.\n> \n> > D'oh! So, if this is the case, then the last patch should be fine after all.\n> \n> No, evidently you broke something, or your plpgsql is installed wrong.\n> \n> regards, tom lane\n", "msg_date": "Thu, 19 Jul 2001 13:13:44 -0400", "msg_from": "Mark Volpe <volpe.mark@epa.gov>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Re: [PATCH] Re: Setuid functions" } ]
[ { "msg_contents": "(Sorry if you receive this twice, but I sent it Saturday evening and it\nnever made it to the list)\n\nThanks for your thorough review and comments, Tom.\n\nHere's a new patch for review. 
Summary of changes/response to earlier\n> comments:\n> - add a routine for NullTest nodes -- done.\n> - declare selectivity functions without fmgr notation -- done.\n> - create selfuncs.h for declarations -- done, but I didn't move anything\n> else out of builtins.h\n> - use DatumGetBool() and adjust style -- done\n> - create better default selectivities -- done:\n> - DEFAULT_UNK_SEL = 0.005\n> - DEFAULT_NOT_UNK_SEL = 1 - DEFAULT_UNK_SEL\n> - DEFAULT_BOOL_SEL = 0.5\n> - recurse clause_selectivity() for non-Var input -- done\n> - simplify MCV logic -- done, used 2nd approach (always use the first most\n> common val's frequency)\n> \n> Questions:\n> - I added a debug define (BOOLTESTDEBUG) to selfuncs.h, and a corresponding\n> ifdef/elog NOTICE to clause_selectivity(). This was to help me debug/verify\n> the calculations. Should this be left in the code when I create a patch (it\n> is in this one), and if so, is there a preferred \"standard\" approach to this\n> type of debug code?\n> - Using the debug code mentioned above, I noted that clause_selectivity()\n> did not seem to get called at all for clauses like \"where myfield = 0\" or\n> \"where myfield > 0\". I haven't looked too closely at it yet, but I was\n> wondering if this is expected behavior?\n> \n> Thanks,\n> \n> -- Joe\n> \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 25 Jun 2001 19:04:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fw: AW: Re: [SQL] behavior of ' = NULL' vs. MySQL vs. Stand ards" } ]