[ { "msg_contents": "John Gray wrote:\n> \n> Firstly, I appreciate this may be a hare-brained scheme, but I've been\n> thinking about indexes in which the tuple pointer is not unique.\n> \n> The reason for my interest is storing XML documents in text fields in the\n> database. (It could also help with particular kinds of full-text search?)\n\nAFAIK this is what is known as an inverted index. This type of index is\nmost \noften used in full-text indexes.\n\nSomething of similar nature is realised for \"sets of integers\" using\nGiST \nindexes and is available as \"intarray\" in contrib.\n\n-------------------\nHannu\n", "msg_date": "Mon, 25 Jun 2001 22:19:25 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: Multi-entry indexes (with a view to XPath queries)" }, { "msg_contents": "Firstly, I appreciate this may be a hare-brained scheme, but I've been\nthinking about indexes in which the tuple pointer is not unique.\n\nThe reason for my interest is storing XML documents in text fields in the\ndatabase. (It could also help with particular kinds of full-text search?)\n\nI would like to be able to construct indexes on a collection of XML\ndocuments, based on the \"value\" of certain \"fields\" within the document.\n(In jargon terms, producing an index whose key is the CDATA content of a\nparticular XML element). 
This could tie in with the Xpath and XQuery\nproposals\n\nA simplified example (from an archaeological site classification system):\n\n<site>\n\t<name>Glebe Farm, Long Itchington</name>\n\t<location scheme=\"osgb\">SU41793684</location>\n\t<feature>\n\t\t<type>Agricultural:Stock Control</type>\n\t\t<date scheme=\"code\">med</date>\n\t</feature>\n\t<feature>\n\t\t<type>Unassigned:Ditch</type>\n\t\t<size type=\"depth\" unit=\"m\">1.5</size>\n\t</feature>\n</site> \n\nI'd like to produce an index on feature types so that I could type\n(roughly):\n\nSELECT siteid, xpath(doc,'//site/name'), xpath(doc,'//site/location') FROM\ndocuments WHERE xpath(doc,'feature/type') = 'Agricultural: Stock Control';\n\n[create table documents (integer siteid, text doc)]\n\nObviously I need to write a basic XML parser that can support such an\nxpath function, but it would also be good to index by the results of that\nfunction-i.e. to have an index containing feature type values. As each\ndocument could have any number of these instances, the number of index\ntuples would differ from the number of heap tuples.\n\nAs far as I can see, there is no particular reason why a btree index could\nnot be used[*]. However, vacuum.c makes assertions about number of index\ntuples == number of heap tuples. I realise this is a useful consistency\ncheck, but would it be possible to have a field in pg_index\n(indnoidentity?) that indicates that a given index hasn't got a 1:1\nindex:heap relationship.\n\nI have tried the approach of decomposing documents into cdata, element and\nattribute tables, and I can use joins to extract a list of feature types\netc. 
(and could use triggers to update this) but the idea of not having to\nparse a document to enter it into the database and not requiring\napplication logic to reconstruct it again seems a potential win for a\nsystem which might store complex documents but usually searches on limited\ncriteria.\n", "msg_date": "Mon, 25 Jun 2001 20:17:38 +0000", "msg_from": "\"John Gray\" <jgray@beansindustry.co.uk>", "msg_from_op": false, "msg_subject": "Multi-entry indexes (with a view to XPath queries)" }, { "msg_contents": "\"John Gray\" <jgray@beansindustry.co.uk> writes:\n> Firstly, I appreciate this may be a hare-brained scheme, but I've been\n> thinking about indexes in which the tuple pointer is not unique.\n\nIt sounds pretty hare-brained to me all right ;-). What's wrong with\nthe normal approach of one index tuple per heap tuple, ie, multiple\nindex tuples with the same key? It seems to me that your idea will just\nmake index maintenance a lot more difficult. For example, what happens\nwhen one of the referenced rows is deleted? We'd have to actually\nchange, not just remove, the index tuple, since it'd also be pointing at\nundeleted rows. That'll create a whole bunch of concurrency problems.\n\n> Obviously I need to write a basic XML parser that can support such an\n> xpath function, but it would also be good to index by the results of that\n> function-i.e. to have an index containing feature type values. As each\n> document could have any number of these instances, the number of index\n> tuples would differ from the number of heap tuples.\n\nWhy would you want multiple index entries for the same key (never mind\nwhether they are in a single index tuple or multiple tuples) pointing to\nthe same row?\n\nActually, after thinking a little more, I suspect the idea you are\nreally trying to describe here is index entries with finer-than-tuple\ngranularity. 
This is not silly, but it is sufficiently outside the\nnormal domain of SQL that I think you are fighting an uphill battle.\nYou'd be *much* better off creating a table that has one row per\nindexable entity, whatever that is.\n\n> I have tried the approach of decomposing documents into cdata, element and\n> attribute tables, and I can use joins to extract a list of feature types\n> etc. (and could use triggers to update this) but the idea of not having to\n> parse a document to enter it into the database\n\nHow do you expect that to happen, when you will have to parse it to get\nthe index terms?\n\nYou might be able to address your problem with two tables, one holding\noriginal documents and one with a row for each indexable entity\n(document section). This second one would then have the field index\nbuilt on it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Jun 2001 16:48:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multi-entry indexes (with a view to XPath queries) " }, { "msg_contents": "In article <28692.993502132@sss.pgh.pa.us>, tgl@sss.pgh.pa.us (Tom Lane)\nwrote:\n> \"John Gray\" <jgray@beansindustry.co.uk> writes:\n>> Firstly, I appreciate this may be a hare-brained scheme, but I've been\n>> thinking about indexes in which the tuple pointer is not unique.\n> \n> It sounds pretty hare-brained to me all right ;-). What's wrong with\n> the normal approach of one index tuple per heap tuple, ie, multiple\n> index tuples with the same key? It seems to me that your idea will just\n> make index maintenance a lot more difficult. For example, what happens\n> when one of the referenced rows is deleted? We'd have to actually\n> change, not just remove, the index tuple, since it'd also be pointing at\n> undeleted rows. That'll create a whole bunch of concurrency problems.\n> \n\nSorry, I fear my explanation has not been very clear. 
I'm not suggesting\nthat one index entry will point to more than one heap tuple, but that more\nthan one index entry may point to the same heap tuple. In this respect it\nis the same as a full-text index (or an 'inverted index' as described by \nHannu). i.e. the data format and algorithm of the btree index need not \nchange.\n\nIf the tuple is changed, then a set of x index tuples which point to it will\nbecome invalid, (and a new set of y index tuples will be created pointing \nto then new version) but VACUUM will dispose of the outdated index tuples \nreadily enough.\n\nIn fact, maybe my question should be: Full-text indexing in contrib is\nprovided by an auxiliary table. Is there a reason why it couldn't be\nperformed using a functional btree index with (broadly) the same \nformat?:\n\nword(key)\tdocument (heaptuplepointer)\napple\t\t2 \napple\t\t3 \nBob\t\t1 \ncow\t\t2 \ncoward\t\t1\n\n\n>> Obviously I need to write a basic XML parser that can support such an\n>> xpath function, but it would also be good to index by the results of\n>> that function-i.e. to have an index containing feature type values. As\n>> each document could have any number of these instances, the number of\n>> index tuples would differ from the number of heap tuples.\n> \n> Why would you want multiple index entries for the same key (never mind\n> whether they are in a single index tuple or multiple tuples) pointing to\n> the same row?\n> \n\nMuddled thinking :). I was trying to decide what to do if two identical \nitems appeared in the same record: should there be two index entries \nor just one?\n\nI had thought that this might help queries where you want to count the \nnumber of instances of a particular element -but on reflection, the index \nentries aren't a useful way to achieve that. \n\n> Actually, after thinking a little more, I suspect the idea you are\n> really trying to describe here is index entries with finer-than-tuple\n> granularity. 
This is not silly, but it is sufficiently outside the\n> normal domain of SQL that I think you are fighting an uphill battle.\n> You'd be *much* better off creating a table that has one row per\n> indexable entity, whatever that is.\n> \n\nI accept that the offset (and its use) is not so straightforward. It would \nhave benefits for indexing documents, but as SQL isn't built around \ndocument entities, it may be something best left. \n\nIn general, I was trying to formulate something where the functionality \ndidn't rely on extra tables (because I reckoned the choice of extra tables \nwould depend on the type of documents I was trying to index. HOWEVER, \nI realise that this is not true: I can use a table with\n\npath \t\t\tstring document\n/feature/type\t\tMoat\t\t3\n/feature/type\t\tCowshed\t 3\n\netc. and use a two-column index on path and string instead)\n\n>> I have tried the approach of decomposing documents into cdata, element\n>> and attribute tables, and I can use joins to extract a list of feature\n>> types etc. (and could use triggers to update this) but the idea of not\n>> having to parse a document to enter it into the database\n> \n> How do you expect that to happen, when you will have to parse it to get\n> the index terms?\n> \n\nMy feeling was that I wanted the database side to be document-\nstructure agnostic. In other words, I could use the database as a plain \ndocument store and I would develop the xpath function which could be \nused with a sequential scan at first. Then I started to ask whether I could \ncreate a functional index using it -the point being that the xpath \nfunction is like a 'words' function which returns all the words from a \nstring -it returns more than one entry for a given row. 
An index based on \nthis would need to have more than one entry relating to a given heap \ntuple.\n\nThanks for your comments: given that actions speak louder than words \nmaybe I'll try and implement my fabled xpath operator first, then worry\nabout indexing or performance improvements :)\n\nRegards\n\nJohn\n\n", "msg_date": "Tue, 26 Jun 2001 11:24:12 +0000", "msg_from": "\"John Gray\" <jgray@beansindustry.co.uk>", "msg_from_op": false, "msg_subject": "Re: Multi-entry indexes (with a view to XPath queries)" } ]
[ { "msg_contents": "I have written a simple bourne shell script which performs all backup\nfunctions, but no restore at this point. I personally find it's very\ngood and keeps each daily backup (gzipped) in it's own dated directory\nunder the parent directory being the current month. It's available at:\n\nhttp://database.sourceforge.net\n\nI am open to suggestions and if anyone would like to critique the code to\nmake it simpler or more powerful then by all means go ahead.\n\nThanks.\n\n", "msg_date": "Tue, 26 Jun 2001 11:47:24 +1000 (EST)", "msg_from": "Grant <grant@conprojan.com.au>", "msg_from_op": true, "msg_subject": "Announcing Postgresql backup script." }, { "msg_contents": "On Tue, Jun 26, 2001 at 11:47:24AM +1000, Grant wrote:\n> I have written a simple bourne shell script which performs all backup\n> functions, but no restore at this point.\n\nah, isn't that like building a really fast car with no brakes?\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ Now with more and longer words for your reading enjoyment. ]\n", "msg_date": "Mon, 25 Jun 2001 22:22:28 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": false, "msg_subject": "Re: Announcing Postgresql backup script." }, { "msg_contents": "> > I have written a simple bourne shell script which performs all backup\n> > functions, but no restore at this point.\n> \n> ah, isn't that like building a really fast car with no brakes?\n\nThis car has brakes, but they're not so easy to use just yet. \n\nVolunteering? 
:)\n\nTo restore just do the following:\n\n[postgres@linux 26-06-2001]$ /server/pgsql/bin/dropdb -h localhost binary_data\nDROP DATABASE\n[postgres@linux 26-06-2001]$ /server/pgsql/bin/createdb -h localhost binary_data\nCREATE DATABASE\n[postgres@linux 26-06-2001]$ gunzip 12:00-postgresql_database-binary_data-backup.gz\n[postgres@linux 26-06-2001]$ psql -h localhost binary_data < 12:00-postgresql_database-binary_data-backup\nYou are now connected as new user postgres.\nCREATE...\n\netc etc.\n\n", "msg_date": "Tue, 26 Jun 2001 13:04:41 +1000 (EST)", "msg_from": "Grant <grant@conprojan.com.au>", "msg_from_op": true, "msg_subject": "Re: Announcing Postgresql backup script." }, { "msg_contents": "On Tue, Jun 26, 2001 at 01:04:41PM +1000, Grant wrote:\n> > > I have written a simple bourne shell script which performs all backup\n> > > functions, but no restore at this point.\n> > \n> > ah, isn't that like building a really fast car with no brakes?\n> \n> This car has brakes, but they're not so easy to use just yet. \n\ni'm glad you caught the humour intended. i forgot the 8^)\n\n> Volunteering? 
:)\n\nnot at this point.\n\n> To restore just do the following:\n> \n> [postgres@linux 26-06-2001]$ /server/pgsql/bin/dropdb -h localhost binary_data\n> DROP DATABASE\n> [postgres@linux 26-06-2001]$ /server/pgsql/bin/createdb -h localhost binary_data\n> CREATE DATABASE\n> [postgres@linux 26-06-2001]$ gunzip 12:00-postgresql_database-binary_data-backup.gz\n> [postgres@linux 26-06-2001]$ psql -h localhost binary_data < 12:00-postgresql_database-binary_data-backup\n> You are now connected as new user postgres.\n> CREATE...\n\ni haven't looked at the scripts (and probably should before commenting further)\nbut, alas, a few beers makes me bold.\n\nthis looks alot like what i would do with:\n\nbackup:\n\npg_dump dbname | gzip > /backup/`date +%Y-%m-%d`.gz\n\nrestore:\n\ndropdb dbname\ncreatedb dbname\nzcat /some/YYYY-MM-DD.gz | psql -q dbname\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ Now with more and longer words for your reading enjoyment. ]\n", "msg_date": "Mon, 25 Jun 2001 23:24:45 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": false, "msg_subject": "Re: Announcing Postgresql backup script." } ]
[ { "msg_contents": "Here is my proposal to fix the security problem of storing cleartext\npasswords in pg_shadow. The solution is somewhat complex because we\nhave to allow 7.2 servers to communicate with 7.1 clients, at least for\na short while.\n\nHere is a summary of what we currently do and proposed solutions.\n\n\nPG_HBA.CONF\n-----------\npg_hba.conf has three authentication options of interest to this\ndiscussion:\n\ntrust: no authentication required\n\npassword: plaintext password is sent over network from client\n\t to server\n\ncrypt: random salt is sent to client; client encrypts using that salt\nand returns encrypted password to server. Server encrypts pg_shadow\npassword with same random salt and compares. This is why current\npg_shadow password is cleartext. (Call this \"crypt authentication\".)\n\n\nDOUBLE ENCRYPTION\n-----------------\nThe solution for encrypting pg_shadow passwords is to encrypt using a\nsalt when stored in pg_shadow, and to generate a random salt for each\nauthentication request. Send _both_ salts to the client, let the client\ndouble encrypt using the pg_shadow salt first, then the random salt, and\nsend it back. The server encrypt using only the random salt and\ncompares.\n\nAs soon as we encrypt pg_shadow passwords, we can't communicate with\npre-7.2 clients using crypt-authentication. Actually, we could, but we\nwould have to send the same pg_shadow salt every time, which is insecure\nbecause someone snooping the wire could just play back the same reply so\nit is better to just fail such authentications.\n\n\nUSER INTERFACE\n--------------\nSo, my idea is to add an option to CREATE/ALTER USER:\n\n\tCREATE USER WITH ENCRYPTED PASSWORD 'fred';\n\tCREATE USER WITH UNENCRYPTED PASSWORD 'fred';\n\tALTER USER WITH ENCRYPTED PASSWORD 'fred';\n\tALTER USER WITH UNENCRYPTED PASSWORD 'fred';\n\nKeep in mind ENCRYPTED/UNENCRYPTED controls how it is stored in\npg_shadow, not wither \"fred\" is a cleartext or preencrypted password. 
\nWe plan to prefix md5 passwords with \"md5\" to handle this issue. (Md5\npasswords are also 35-characters in length.)\n\nAlso add a new GUC config option:\n\n\tSET password_encrypted_default TO 'OFF';\n\nIt would ship as OFF in 7.2 and can be removed in a later release. Once\nall clients are upgraded to 7.2, you can change the default to ON and do\nALTER USER WITH PASSWORD 'fred' to encrypt the pg_shadow passwords. The\npasswords are in cleartext in pg_shadow so it is easy to do.\n\n\nMD5\n---\nI assume we will use MD5 for encryption of pg_shadow passwords. The\nletters \"md5\" will appear at the start of the password string and it\nwill be exactly 35 characters. Vince sent me the code. We will need to\nadd MD5 capability to libpq, ODBC, and JDBC. (I hope JDBC will not be a\nproblem.) When using CREATE/ALTER user, the system will automatically\nconsider a 35-character string that starts with \"md5\" to be a\npre-md5-encrypted password, while anything else will be md5 encrypted.\n\n\nSECONDARY PASSWORD FILES\n------------------------\nTo add complexity to this, we also support secondary password files. \n(See pg_hba.conf.sample and pg_password manual in CVS for updated\ndescriptions.) These password files allow encrypted passwords in the\nsame format as they appear in traditional /etc/passwd. (Call this\ncrypt-style passwords.) I realize most BSD's use MD5 in /etc/shadow\nnow.\n\nRight now we can use passwords from the file only if we use\npassword-authentication. We can't use crypt-authentication because the\npasswords already have a salt and we don't want to sent the same salt\nevery time. One nice feature of secondary passwords is you can copy\n/etc/passwd or /etc/shadow and use that as your secondary password file\nfor PostgreSQL. I don't know how many people use that but it is nice\nfeature. 
Remember the secondary password files sit in /data which is\nreadable only by the PostgreSQL install user.\n\n\nDOUBLE-CRYPT ENCRYPTION\n-----------------------\nSo, we are going to add a new double-MD5 encryption protocol to allow\npg_shadow passwords to be encrypted. Do we also add a\ndouble-crypt-style-password protocol to allow crypt-authentication with\nsecondary password files that use crypt-style passwords or just require\nthe secondary password files to use MD5?\n\n\nComments?\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 25 Jun 2001 23:04:15 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Proposal for encrypting pg_shadow passwords" }, { "msg_contents": "On Mon, Jun 25, 2001 at 11:04:15PM -0400, Bruce Momjian wrote:\n> password: plaintext password is sent over network from client\n> \t to server\n> \n> crypt: random salt is sent to client; client encrypts using that salt\n> and returns encrypted password to server. Server encrypts pg_shadow\n> password with same random salt and compares. This is why current\n> pg_shadow password is cleartext. 
(Call this \"crypt authentication\".)\n\ndid you see my post of a week or so ago?\n\nhost dbname ipaddr netmask password /some/file\n - uses second field of /some/file, as per /etc/passwd\n - compares second field of /some/file with crypt(clear-text)\n\nhost dbname ipaddr netmask crypt (no file specified)\n - as above\n\nhost dbname ipaddr netmask password (no file specified)\n - same as if the line was s/password/crypt/g\n\ni have mods that allow (in a completely backward compatible fashion)\n\nhost dbname ipaddr netmask password pg_shadow\n - uses password from pg_shadow\n - compares pg_shadow->password with crypt(clear-text)\n\nwhile i applaud the dual-crypt enhancements for the newer versions,\ni think these patches allow storage of encrypted passwords in pg_shadow\nwithout any substantial changes (or possible damage to existing code).\n\ni am using these mods in conjuction with php scripts, and as such i need\nnot give \"webuser\" or \"nobody\" any privs on my tables.\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ Now with more and longer words for your reading enjoyment. ]\n", "msg_date": "Mon, 25 Jun 2001 23:18:20 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": false, "msg_subject": "Re: Proposal for encrypting pg_shadow passwords" }, { "msg_contents": "On Mon, Jun 25, 2001 at 11:18:20PM -0400, Jim Mercer wrote:\n> host dbname ipaddr netmask crypt (no file specified)\n> - as above\n(meaning bruce's description)\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ Now with more and longer words for your reading enjoyment. ]\n", "msg_date": "Mon, 25 Jun 2001 23:25:47 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": false, "msg_subject": "Re: Proposal for encrypting pg_shadow passwords" }, { "msg_contents": "\nOn Monday, June 25, 2001, at 08:04 PM, Bruce Momjian wrote:\n\n> I assume we will use MD5 for encryption of pg_shadow passwords. 
The\n> letters \"md5\" will appear at the start of the password string and it\n> will be exactly 35 characters. Vince sent me the code. We \n> will need to\n> add MD5 capability to libpq, ODBC, and JDBC. (I hope JDBC will \n> not be a\n> problem.)\n\nJDK 1.1 and later has MD5 available; this shouldn't be a problem.\n\n-- Bruce\n\n--------------------------------------------------------------------------\nBruce Toback Tel: (602) 996-8601| My candle burns at both ends;\nOPT, Inc. (800) 858-4507| It will not last the night;\n11801 N. Tatum Blvd. Ste. 142 | But ah, my foes, and oh, my \nfriends -\nPhoenix AZ 85028 | It gives a lovely light.\nbtoback@optc.com | -- Edna St. Vincent Millay\n", "msg_date": "Mon, 25 Jun 2001 20:38:17 -0700", "msg_from": "Bruce Toback <btoback@mac.com>", "msg_from_op": false, "msg_subject": "Re: Proposal for encrypting pg_shadow passwords" }, { "msg_contents": "> DOUBLE ENCRYPTION\n> -----------------\n> The solution for encrypting pg_shadow passwords is to encrypt using a\n> salt when stored in pg_shadow, and to generate a random salt for each\n> authentication request. Send _both_ salts to the client, let the client\n> double encrypt using the pg_shadow salt first, then the random salt, and\n> send it back. The server encrypt using only the random salt and\n> compares.\n>\n\nI posted something on this a few weeks ago. See\nhttp://fts.postgresql.org/db/mw/msg.html?mid=1021155 for details, but the\nsummary is that it would be better (IMHO) to use HMAC for authentication.\nHMAC has\nbeen mathematically proven to be as secure as the underlying hash algorithm\nused.\nHere's the reference for HMAC --\nhttp://www-cse.ucsd.edu/users/mihir/papers/kmd5.pdf.\n\nIt would actually work almost identically to what you've described. Store\nthe password as a hash using MD5 and some salt. Send the password salt and a\nrandom salt to the client. 
The client uses the password salt with MD5 (and\nlocal knowledge of the plaintext password) to reproduce the stored password,\nthen calculates an HMAC of the random salt and sends it back. The server\nalso calculates the HMAC of the random salt using the stored hashed\npassword, and compares.\n\nJust my 2 cents . . .\n\n-- Joe\n\n\n", "msg_date": "Mon, 25 Jun 2001 21:30:43 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Proposal for encrypting pg_shadow passwords" }, { "msg_contents": "> > DOUBLE ENCRYPTION\n> > -----------------\n> > The solution for encrypting pg_shadow passwords is to encrypt using a\n> > salt when stored in pg_shadow, and to generate a random salt for each\n> > authentication request. Send _both_ salts to the client, let the client\n> > double encrypt using the pg_shadow salt first, then the random salt, and\n> > send it back. The server encrypt using only the random salt and\n> > compares.\n> >\n> \n> I posted something on this a few weeks ago. See\n> http://fts.postgresql.org/db/mw/msg.html?mid=1021155 for details, but the\n> summary is that it would be better (IMHO) to use HMAC for authentication.\n> HMAC has\n> been mathematically proven to be as secure as the underlying hash algorithm\n> used.\n> Here's the reference for HMAC --\n> http://www-cse.ucsd.edu/users/mihir/papers/kmd5.pdf.\n> \n> It would actually work almost identically to what you've described. Store\n> the password as a hash using MD5 and some salt. Send the password salt and a\n> random salt to the client. The client uses the password salt with MD5 (and\n> local knowledge of the plaintext password) to reproduce the stored password,\n> then calculates an HMAC of the random salt and sends it back. The server\n> also calculates the HMAC of the random salt using the stored hashed\n> password, and compares.\n\nYes, I remember that. I figured MD5 was standard and secure enough for\nour purposes. 
Newer stuff sometimes has problems because it has not\nbeen tested long enough and I would hate to change this if a problem is\nfound.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 26 Jun 2001 00:34:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Proposal for encrypting pg_shadow passwords" } ]
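Bruce's double-encryption exchange and Joe's HMAC alternative share the same overall shape, which a short sketch makes concrete. Everything below is an assumption-laden illustration of the proposal, not the eventual wire protocol: the salt sizes, hex encoding, and exact message layout are guesses for demonstration.

```python
import hashlib
import hmac
import os

def md5_hex(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

# --- storage: pg_shadow keeps md5(salt + password), prefixed "md5" ---
def store_password(password: str, pw_salt: bytes) -> str:
    return "md5" + md5_hex(pw_salt + password.encode())

# --- Bruce's double-encryption exchange (sketch) ---
# Server sends both salts; the client hashes with the pg_shadow salt
# first, then with the per-session random salt; the server repeats only
# the second step on the stored hash and compares.
def client_response(password: str, pw_salt: bytes, session_salt: bytes) -> str:
    stored_equivalent = store_password(password, pw_salt)
    return md5_hex(session_salt + stored_equivalent.encode())

def server_check(stored: str, session_salt: bytes, response: str) -> bool:
    return md5_hex(session_salt + stored.encode()) == response

# --- Joe's HMAC variant: key the MAC with the stored hash instead ---
def hmac_response(stored: str, session_salt: bytes) -> str:
    return hmac.new(stored.encode(), session_salt, hashlib.md5).hexdigest()

pw_salt, session_salt = b"pgsalt", os.urandom(8)
stored = store_password("fred", pw_salt)
resp = client_response("fred", pw_salt, session_salt)
print(server_check(stored, session_salt, resp))  # -> True
```

Note that `store_password` yields a 35-character string ("md5" plus 32 hex digits), matching the length Bruce quotes for pre-encrypted passwords.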
[ { "msg_contents": "> \n> On Monday, June 25, 2001, at 08:04 PM, Bruce Momjian wrote:\n> \n> > I assume we will use MD5 for encryption of pg_shadow passwords. The\n> > letters \"md5\" will appear at the start of the password string and it\n> > will be exactly 35 characters. Vince sent me the code. We \n> > will need to\n> > add MD5 capability to libpq, ODBC, and JDBC. (I hope JDBC will \n> > not be a\n> > problem.)\n> \n> JDK 1.1 and later has MD5 available; this shouldn't be a problem.\n\nWell, that is certainly good to hear. I will need help.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 25 Jun 2001 23:40:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Proposal for encrypting pg_shadow passwords" } ]
[ { "msg_contents": "\n\n\tI need to check the scalability of a machine with postgresql and Im doing it \nwith pgbench but Im getting values with a variation of a 40% with the same \npgbench call...\n\n\tJust the same variation if I restart posgresql or overwrite the db...\n\n\tSo just wondering if theres another benchmarking tool for postgres...\n\n\tPerhaps should I write my own one?\n", "msg_date": "Tue, 26 Jun 2001 19:41:28 +0200", "msg_from": "=?iso-8859-1?q?V=EDctor=20Romero?= <romero@kde.org>", "msg_from_op": true, "msg_subject": "Benchmarking" }, { "msg_contents": "> \tI need to check the scalability of a machine with postgresql and Im doing it \n> with pgbench but Im getting values with a variation of a 40% with the same \n> pgbench call...\n\nYou might be looking at the effect of the kernel buffer cache. Try run\npgbench several times with same settings. Another point is how many\ntransactions pgbench runs (-t option). More transactions would give\nmore statble results. Here is my small script to run pgbench. I\nusually run it 2 or 3 times and take only the last run result.\n\n#! /bin/sh\npgbench -i -s 2 test\nfor i in 1 2 4 8 16 32 64 128\ndo\n\tt=`expr 640 / $i`\n\tpgbench -t $t -c $i test\n\techo \"===== sync ======\"\n\tsync;sync;sync;sleep 10\n\techo \"===== sync done ======\"\ndone\n", "msg_date": "Wed, 27 Jun 2001 10:11:23 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Benchmarking" } ]
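The 40% swings reported above, together with Tatsuo's advice (re-run with the same settings, sync between runs, take only the last result, use more transactions), suggest automating the repetition and reporting the spread. A hypothetical Python wrapper in that spirit — the pgbench invocation and the "tps = …" output parsing are assumptions that may not match a given pgbench version:

```python
import statistics
import subprocess

def run_pgbench(clients: int, transactions: int) -> float:
    """Run one pgbench round and return the reported tps.

    Hypothetical parsing: assumes a line like 'tps = 123.45 ...' in the
    output, which may differ between pgbench versions.
    """
    out = subprocess.run(
        ["pgbench", "-c", str(clients), "-t", str(transactions), "test"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if line.startswith("tps = "):
            return float(line.split()[2])
    raise RuntimeError("no tps line found")

def summarize(samples, discard=1):
    """Drop the first `discard` warm-up runs (cold buffer cache), then
    report mean tps and run-to-run variation as a fraction of the mean."""
    kept = samples[discard:]
    mean = statistics.mean(kept)
    spread = statistics.stdev(kept) / mean if len(kept) > 1 else 0.0
    return mean, spread

# With a cold first run discarded, the spread figure shows whether the
# remaining variation is the 40% reported above or ordinary noise:
mean, spread = summarize([60.0, 98.0, 102.0, 100.0])
print(round(mean, 1), round(spread, 3))  # -> 100.0 0.02
```

Discarding the first run mirrors Tatsuo's habit of running the script two or three times and keeping only the last result.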
[ { "msg_contents": "Hi, All!\n\nI've developed new data type for PostgreSQL - uniqueidentifier - 128-bit\nvalue claims to be unique across Universe. It depends on libuuid from\ne2fsprogs by Theodore Ts'o. Now I use it in my project. Everybody can grab\nit from\n http://taurussoft.chat.ru/uniqueidentifier-0.1.9.tar.gz\n\nBefore announce this new type through pgsql-announce I want to clear for\nmyself some things.\nI've marked \"=\" operator with HASH clause (and planner has started to use\nhash jons). But as I understand the right way is to create special hash\nfunction (may be wrapper for hash_any(), isn't it?) and register it for hash\nas for btree method.\nSo is it desirable to mark \"=\" as HASH for this type (seems internal 16 byte\nrepresentation will be hash well) and if yes how can I create hash sort\nmethod for uniqueidentifier?\n\nregards,\nDmitry\n\nPS. If you decide to install uniqueidentifier look at the date of\nuuid/uuid.h somewhere in INCLUDE path. Sometimes it's necessary to manualy\nenter \"make install\" in lib/uuid directory of e2fsprogs.\n\n", "msg_date": "Tue, 26 Jun 2001 23:59:06 +0400", "msg_from": "\"Dmitry G. Mastrukov\" <dmitry@taurussoft.org>", "msg_from_op": true, "msg_subject": "New data type: uniqueidentifier" }, { "msg_contents": "On Tue, 26 Jun 2001, Dmitry G. Mastrukov wrote:\n\n> myself some things.\n> I've marked \"=\" operator with HASH clause (and planner has started to use\n> hash jons). But as I understand the right way is to create special hash\n> function (may be wrapper for hash_any(), isn't it?) and register it for hash\n> as for btree method.\n\nNo. 
Currently, there's no way to specify a hash function for a given\noperator, it always uses a builtin function that operates on memory\nrepresentation of a value.\n\nThere's no need (or possibility) to register a hash with btree method.\n\n> So is it desirable to mark \"=\" as HASH for this type (seems internal 16 byte\n> representation will be hash well) and if yes how can I create hash sort\n> method for uniqueidentifier?\nYou can mark it hashable, since two identical uuid values would have\nidentical memory representation and thus the same hash value. \n\nI'd look at your code, but that is URL too slow, in 5 minutes downloaded\n1000 bytes...\n\n> regards,\n> Dmitry\n> \n> PS. If you decide to install uniqueidentifier look at the date of\n> uuid/uuid.h somewhere in INCLUDE path. Sometimes it's necessary to manualy\n> enter \"make install\" in lib/uuid directory of e2fsprogs.\n\n", "msg_date": "Tue, 26 Jun 2001 18:02:46 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: New data type: uniqueidentifier" }, { "msg_contents": "Alex Pilosov <alex@pilosoft.com> wrote:\n> \n> I'd look at your code, but that is URL too slow, in 5 minutes downloaded\n> 1000 bytes...\n> \nIt's possible now to grab from another location\n\nhttp://fitmark.net/taurussoft/uniqueidentifier-0.1.9.tar.gz\n\nregards,\nDmitry\n\n", "msg_date": "Wed, 27 Jun 2001 05:48:36 +0400", "msg_from": "\"Dmitry G. Mastrukov\" <dmitry@taurussoft.org>", "msg_from_op": true, "msg_subject": "Re: New data type: uniqueidentifier" }, { "msg_contents": "Dmitry G. Mastrukov writes:\n\n> I've developed new data type for PostgreSQL - uniqueidentifier - 128-bit\n> value claims to be unique across Universe. 
It depends on libuuid from\n> e2fsprogs by Theodore Ts'o.\n\nISTM that this should be a function, not a data type.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 27 Jun 2001 16:54:20 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: New data type: uniqueidentifier" }, { "msg_contents": "Alex Pilosov <alex@pilosoft.com> wrote:\n> On Tue, 26 Jun 2001, Dmitry G. Mastrukov wrote:\n>\n> > myself some things.\n> > I've marked \"=\" operator with HASH clause (and planner has started to\nuse\n> > hash joins). But as I understand the right way is to create special hash\n> > function (may be wrapper for hash_any(), isn't it?) and register it for\nhash\n> > as for btree method.\n>\n> No. Currently, there's no way to specify a hash function for a given\n> operator, it always uses a builtin function that operates on memory\n> representation of a value.\n>\n> There's no need (or possibility) to register a hash with btree method.\n>\nStrange. When I execute the following query (slightly modified from the User's\nGuide, chapter 7.6)\n\nSELECT am.amname AS acc_name,\n opc.opcname AS ops_name,\n opr.oprname AS ops_comp\n FROM pg_am am, pg_amop amop,\n pg_opclass opc, pg_operator opr\n WHERE amop.amopid = am.oid AND\n amop.amopclaid = opc.oid AND\n amop.amopopr = opr.oid\n ORDER BY ops_name, ops_comp;\n\nI see both hash and btree amname entries for builtin opclasses. For example:\n\n acc_name | ops_name | ops_comp\n----------+----------+----------\n btree | int4_ops | <\n btree | int4_ops | <=\n btree | int4_ops | =\n hash | int4_ops | =\n btree | int4_ops | >\n btree | int4_ops | >=\n\nBut the new type has no hash for \"=\". Plus, I saw hash functions for builtin\ntypes in the source code. So can I achieve for a created type the same integration\nwith Postgres as for builtin types? 
Or am I understanding something wrong?\n\nregards,\nDmitry\n\n", "msg_date": "Thu, 28 Jun 2001 02:21:11 +0400", "msg_from": "\"Dmitry G. Mastrukov\" <dmitry@taurussoft.org>", "msg_from_op": true, "msg_subject": "Re: New data type: uniqueidentifier" }, { "msg_contents": "\"Dmitry G. Mastrukov\" <dmitry@taurussoft.org> writes:\n> Alex Pilosov <alex@pilosoft.com> wrote:\n>> On Tue, 26 Jun 2001, Dmitry G. Mastrukov wrote:\n> I've marked \"=\" operator with HASH clause (and planner has started to\n> use\n> hash jons). But as I understand the right way is to create special hash\n> function (may be wrapper for hash_any(), isn't it?) and register it for\n> hash\n> as for btree method.\n>> \n>> No. Currently, there's no way to specify a hash function for a given\n>> operator, it always uses a builtin function that operates on memory\n>> representation of a value.\n\n> Strange. When I execute following query (slightly modified query from User's\n> Guide chapter 7.6)\n\nYou're looking at support for hash indexes, which have nothing to do\nwith hash joins.\n\n*Why* they have nothing to do with hash joins, I dunno. You'd think\nthat using the same hash functions for both would be a good idea.\nBut that's not how it's set up at the moment.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Jun 2001 11:12:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: New data type: uniqueidentifier " }, { "msg_contents": " Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \"Dmitry G. Mastrukov\" <dmitry@taurussoft.org> writes:\n> > Alex Pilosov <alex@pilosoft.com> wrote:\n> >> On Tue, 26 Jun 2001, Dmitry G. Mastrukov wrote:\n> > I've marked \"=\" operator with HASH clause (and planner has started to\n> > use\n> > hash jons). But as I understand the right way is to create special hash\n> > function (may be wrapper for hash_any(), isn't it?) and register it for\n> > hash\n> > as for btree method.\n> >>\n> >> No. 
Currently, there's no way to specify a hash function for a given\n> >> operator, it always uses a builtin function that operates on memory\n> >> representation of a value.\n>\n> > Strange. When I execute following query (slightly modified query from\nUser's\n> > Guide chapter 7.6)\n>\n> You're looking at support for hash indexes, which have nothing to do\n> with hash joins.\n>\n> *Why* they have nothing to do with hash joins, I dunno. You'd think\n> that using the same hash functions for both would be a good idea.\n> But that's not how it's set up at the moment.\n>\nOK, it's clear to me now. Thanks.\nBut should I create support for hash indexes? Since builtin types have such\nsupport, I want it too for uniqueidentifier :) How can I make it?\n\nregards,\nDmitry\n\n", "msg_date": "Fri, 29 Jun 2001 05:48:19 +0400", "msg_from": "\"Dmitry G. Mastrukov\" <dmitry@taurussoft.org>", "msg_from_op": true, "msg_subject": "Re: New data type: uniqueidentifier " }, { "msg_contents": "Peter Eisentraut wrote:\n\n>Dmitry G. Mastrukov writes:\n>\n>>I've developed new data type for PostgreSQL - unique identifier - 128-bit\n>>value claims to be unique across Universe. It depends on libuuid from\n>>e2fsprogs by Theodore Ts'o.\n>>\n>\n>ISTM that this should be a function, not a data type.\n>\nI'd second the function idea: function uuid( ) returns an int8 value; \ndon't create a bazillion datatypes. Besides, 128 bit numbers are 7 byte \nintegers. PostgreSQL has an int8 (8 byte integer) datatype. While I \nlike the UUID function idea, I'd recommend a better solution to creating \nan \"unique\" identifier. Why not create a serial8 datatype: int8 with an \nint8 sequence = 256bit \"unique\" number. {Yes, I know I'm violating my \nfirst sentence.} Then, you'd have the same thing (or better) AND you're \nnot relying on randomness. \n\n", "msg_date": "Mon, 02 Jul 2001 11:13:41 -0500", "msg_from": "Thomas Swan <tswan@olemiss.edu>", "msg_from_op": false, "msg_subject": "Re: New data type: uniqueidentifier" }, { "msg_contents": "On Mon, 2 Jul 2001, Thomas Swan wrote:\n\n> Peter Eisentraut wrote:\n> \n> >Dmitry G. Mastrukov writes:\n> >\n> >>I've developed new data type for PostgreSQL - unique identifier - 128-bit\n> >>value claims to be unique across Universe. It depends on libuuid from\n> >>e2fsprogs by Theodore Ts'o.\n> >>\n> >\n> >ISTM that this should be a function, not a data type.\n> >\n> I'd second the function idea: function uuid( ) returns an int8 value; \n> don't create a bazillion datatypes. Besides, 128 bit numbers are 7 byte \n> integers. PostgreSQL has an int8 (8 byte integer) datatype. While I \n> like the UUID function idea, I'd recommend a better solution to creating \n> an \"unique\" identifier. Why not create a serial8 datatype: int8 with an \n> int8 sequence = 256bit \"unique\" number. {Yes, I know I'm violating my \n> first sentence.} Then, you'd have the same thing (or better) AND you're \n> not relying on randomness. \n\nI don't think you know what UUID is. It is NOT just a unique random\nnumber. 
There are specific rules for the construction of such a number, and specific\nrules for comparison of the numbers (no, it's not bit-by-bit), thus a datatype\nis the most appropriate answer. \n\n-alex\n\n", "msg_date": "Mon, 2 Jul 2001 14:39:23 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: Re: New data type: uniqueidentifier" }, { "msg_contents": "Alex Pilosov writes:\n\n> I don't think you know what UUID is. It is NOT just a unique random\n> number. There are specific rules for the construction of such a number, and specific\n> rules for comparison of the numbers (no, it's not bit-by-bit), thus a datatype\n> is the most appropriate answer.\n\nA data type may be appropriate for storing these values, but not for\ngenerating them. 
Functions generate stuff, data types store stuff.\n\nSorry, apparently we misunderstood each other but are really in full\nagreement.\n\nDmitry's stuff contains both datatype (uniqueidentifier), a function to\ngenerate a new object of that type (newid), and a set of functions to\nimplement comparison operators for that type.\n\nI don't see anything wrong with that setup, but maybe I'm still missing\nsomething?\n\n-alex\n\n", "msg_date": "Mon, 2 Jul 2001 17:27:37 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: Re: New data type: uniqueidentifier" }, { "msg_contents": "> don't create a bazillion datatypes. Besides, 128 bit numbers are 7\n> byte integers.\n\nHang on: 128 div 8 = 16 byte integer\n\n> PostgreSQL has an int8 (8 byte integer) datatype.\n\nAnd therefore it is a _64_ bit integer and you can't have a 256bit unique\nnumber in it...\n\n> While I like the UUID function idea, I'd recommend a better solution to\n> creating an \"unique\" identifier. Why not create a serial8 datatype:\n> int8 with an int8 sequence = 256bit \"unique\" number. {Yes, I know\n> violating my first sentence.} Then, you'd have the same thing (or\n> better) AND your not relying on randomness.\n\nChris\n\n", "msg_date": "Tue, 3 Jul 2001 09:37:33 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: Re: New data type: uniqueidentifier" }, { "msg_contents": "I sit corrected. \n\n*slightly humbled*\n\nWhy not do an unsigned int16 to hold your UUID generated numbers. \nUltimately, this would seem to be a more general solution and accomplish \nyour goals at the sametime. Or, am I completely missing something.\n\nChristopher Kings-Lynne wrote:\n\n>>don't create a bazillion datatypes. 
Besides, 128 bit numbers are 7\n>>byte integers.\n>>\n>\n>Hang on: 128 div 8 = 16 byte integer\n>\n>>PostgreSQL has an int8 (8 byte integer) datatype.\n>>\n>\n>And therefore it is a _64_ bit integer and you can't have a 256bit unique\n>number in it...\n>\n>>While I like the UUID function idea, I'd recommend a better solution to\n>>creating an \"unique\" identifier. Why not create a serial8 datatype:\n>>int8 with an int8 sequence = 256bit \"unique\" number. {Yes, I know\n>>violating my first sentence.} Then, you'd have the same thing (or\n>>better) AND your not relying on randomness.\n>>\n>\n>Chris\n", "msg_date": "Mon, 02 Jul 2001 20:54:01 -0500", "msg_from": "Thomas Swan <tswan@olemiss.edu>", "msg_from_op": false, "msg_subject": "Re: Re: New data type: uniqueidentifier" }, { "msg_contents": "On Mon, 2 Jul 2001, Thomas Swan wrote:\n\n> I sit corrected. \n> \n> *slightly humbled*\n> \n> Why not do an unsigned int16 to hold your UUID generated numbers. 
\nNot a good idea, since the rules for comparison of UUIDs are weird and are\n_definitely_ not the same as for comparison of an int16.\n\n> Ultimately, this would seem to be a more general solution and accomplish \n> your goals at the sametime. Or, am I completely missing something.\n\n", "msg_date": "Mon, 2 Jul 2001 22:30:50 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: Re: New data type: uniqueidentifier" }, { "msg_contents": "Where can I find some more information on it? I'm curious now.\n\nAlex Pilosov wrote:\n\n>On Mon, 2 Jul 2001, Thomas Swan wrote:\n>\n>>I sit corrected. \n>>\n>>*slightly humbled*\n>>\n>>Why not do an unsigned int16 to hold your UUID generated numbers. \n>>\n>Not a good idea, since the rules for comparison of UUIDs are weird and are\n>_definitely_ not the same as for comparison of an int16.\n>\n>>Ultimately, this would seem to be a more general solution and accomplish \n>>your goals at the sametime. Or, am I completely missing something.\n>>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n", "msg_date": "Tue, 03 Jul 2001 10:05:18 -0500", "msg_from": "Thomas Swan <tswan-lst@ics.olemiss.edu>", "msg_from_op": false, "msg_subject": "Re: New data type: uniqueidentifier" }, { "msg_contents": "Alex Pilosov writes:\n\n> Dmitry's stuff contains both datatype (uniqueidentifier), a function to\n> generate a new object of that type (newid), and a set of functions to\n> implement comparison operators for that type.\n>\n> I don't see anything wrong with that setup, but maybe I'm still missing\n> something?\n\nIt would be much simpler if you stored the unique id in varchar or text.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 3 Jul 2001 17:33:09 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Re: New data type: uniqueidentifier" }, { "msg_contents": "If you mean the [proposed?] standard itself, here is a good description of\nit:\nhttp://www.ics.uci.edu/pub/ietf/webdav/uuid-guid/draft-leach-uuids-guids-01.txt\n\nIt was a proposed IETF standard; IETF standardization failed\nbecause ISO had already ratified it as DCE/RPC standard ISO 11578. However,\nthe above URL provides a far better description of UUIDs than the ISO standard\nitself.\n\n On Tue, 3 Jul 2001, Thomas Swan wrote:\n\n> Where can I find some more information on it? 
I'm curious now.\n\n", "msg_date": "Tue, 3 Jul 2001 11:37:20 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: New data type: uniqueidentifier" }, { "msg_contents": "On Tue, 3 Jul 2001, Peter Eisentraut wrote:\n\n\n>> Dmitry's stuff contains both datatype (uniqueidentifier), a function to\n>> generate a new object of that type (newid), and a set of functions to\n>> implement comparison operators for that type.\n\n> It would be much simpler if you stored the unique id in varchar or text.\nPeter,\n\nUUIDs have specific rules for comparing them. It's so much easier to\ncompare them via a<b than uuid_lt(a,b). If one wanted to make a meaningful\nindex on a uuid value, the normal ordering of varchar would not suffice...\n\n-alex\n\n\n", "msg_date": "Tue, 3 Jul 2001 11:52:31 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: Re: New data type: uniqueidentifier" }, { "msg_contents": "Peter Eisentraut wrote:\n\n>Alex Pilosov writes:\n>\n>>Dmitry's stuff contains both datatype (uniqueidentifier), a function to\n>>generate a new object of that type (newid), and a set of functions to\n>>implement comparison operators for that type.\n>>\n>>I don't see anything wrong with that setup, but maybe I'm still missing\n>>something?\n>>\n>\n>It would be much simpler if you stored the unique id in varchar or text.\n>\nAre you sure varchar comparison will be quicker than the current \nimplementation? Next, varchar will need 36 bytes; uniqueidentifier takes \n16. Next, indexing - IMHO the current stuff is more suitable for indexes. Some \ntime ago I saw some stuff which deals with uniqueidentifiers for \nInterbase. It uses your scheme with chars. But it strips the \"-\" from the string \nand reverses it to use indexes efficiently (a uid sometimes uses the \nMAC-address as part of itself, so the MAC should go first in the string). 
Weird \nscheme for me!\n\nregards,\nDmitry\n\n\n", "msg_date": "Wed, 04 Jul 2001 00:19:23 +0400", "msg_from": "\"Dmitry G. Mastrukov\" <dmitry@taurussoft.org>", "msg_from_op": true, "msg_subject": "Re: Re: New data type: uniqueidentifier" } ]
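The size arithmetic disputed in the thread above (is a UUID an int8? an int16?) is easy to check: a UUID is 128 bits, i.e. 16 bytes, twice the width of PostgreSQL's 8-byte int8, and its text form is 36 characters. A minimal sketch using Python's standard `uuid` module (an assumption for illustration only — the patch under discussion is C code built on libuuid, not Python):

```python
import uuid

# Generate a version-1 UUID (time + node based), roughly what
# libuuid's uuid_generate_time() produces.
u = uuid.uuid1()

# A UUID is 128 bits = 16 bytes -- too wide for an 8-byte int8 column.
assert len(u.bytes) == 16
assert u.int < 2 ** 128

# Comparison is defined on the value, not on the 36-character text form,
# so ordering does not depend on where the '-' separators fall.
a = uuid.UUID("00000000-0000-0000-0000-000000000001")
b = uuid.UUID("00000000-0000-0000-0000-000000000002")
assert a < b
assert len(str(a)) == 36  # the 36-byte text representation Dmitry mentions
```

Note that this sketch orders by the plain 128-bit value; the DCE/RPC rules Alex refers to additionally define field-by-field comparison semantics, which a dedicated datatype can implement but a raw varchar ordering cannot.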
[ { "msg_contents": "I started thinking about Tom's idea to implement functions as table\nsource.\n\nTo me, it seems that a very few changes are necessary:\na) parser must be changed to allow functioncall to be a table_ref\n(easy)\n\nb) when a Query node is generated out of such a call \"select * from foo()\"\nit should be almost identical to one generated out of \"select * from\n(select * from foo)\" with one distinction: list of query attributes should\nbe completed based on return type of foo().\n\nc) executor should support execution of such Query node, properly\nextracting things out of function's return value and placing them into\nresult attributes.\n\n\nIf I'm wrong, please correct me.\n\n-alex\n\n", "msg_date": "Tue, 26 Jun 2001 17:11:47 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "functions returning records" }, { "msg_contents": "On Tue, 26 Jun 2001 17:11:47 -0400 (EDT), you wrote:\n\n>I started thinking about Tom's idea to implement functions as table\n>source.\n>\n>To me, it seems that a very few changes are necessary:\n>a) parser must be changed to allow functioncall to be a table_ref\n>(easy)\n>\n>b) when a Query node is generated out of such a call \"select * from foo()\"\n>it should be almost identical to one generated out of \"select * from\n>(select * from foo)\" with one distinction: list of query attributes should\n>be completed based on return type of foo().\n>\n>c) executor should support execution of such Query node, properly\n>extracting things out of function's return value and placing them into\n>result attributes.\n\nComing from a Sybase environment I would love to have functions return\na result set. A few things to think of:\n1: will it be possible to return multiple result sets? (in Sybase any\nselect statement that is not redirected to variables or a table goes\nto the client, so it is quite common to do multiple selects). 
Does the\npostgresql client library support this?\n\n2: will it be possible to put a single result set in a table?\nSomething like \"resultfunction (argument) INTO TABLENAME\" or \"INSERT\nINTO TABLENAME resultfunction(argument)\"\n\n-- \n__________________________________________________\n\"Nothing is as subjective as reality\"\nReinoud van Leeuwen reinoud@xs4all.nl\nhttp://www.xs4all.nl/~reinoud\n__________________________________________________\n", "msg_date": "Tue, 26 Jun 2001 23:25:23 GMT", "msg_from": "reinoud@xs4all.nl (Reinoud van Leeuwen)", "msg_from_op": false, "msg_subject": "Re: functions returning records" }, { "msg_contents": "On Tue, 26 Jun 2001, Reinoud van Leeuwen wrote:\n\n> Coming from a Sybase environment I would love to have functions return\n> a result set. A few things to think of:\n> 1: will it be possible to return multiple result sets? (in Sybase any\n> select statement that is not redirected to variables or a table goes\n> to the client, so it is quite common to do multiple selects). Does the\n> postgresql client library support this?\nNo, the libpq protocol cannot support that. This is really a Sybase-ism; as good\nas it is, no other database supports anything like that.\n\n> 2: will it be possible to put a single result set in a table.\n> Something like \"resultfunction (argument) INTO TABLENAME\" or \"INSERT\n> INTO TABLENAME resultfunction(argument)\n\nIt will be, but the syntax will be:\nselect * into tablename from resultfunction(arg)\ninsert into tablename select * from resultfunction(arg)\n\n(I.E. 
resultfunction must be in the 'from' clause)\n\n-alex\n\n", "msg_date": "Tue, 26 Jun 2001 22:09:39 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: functions returning records" }, { "msg_contents": "On Tue, Jun 26, 2001 at 05:11:47PM -0400, Alex Pilosov wrote:\n> I started thinking about Tom's idea to implement functions as table\n> source.\n> \n> To me, it seems that a very few changes are necessary:\n> a) parser must be changed to allow functioncall to be a table_ref\n> (easy)\n> \n> b) when a Query node is generated out of such a call \"select * from foo()\"\n> it should be almost identical to one generated out of \"select * from\n> (select * from foo)\" with one distinction: list of query attributes should\n> be completed based on return type of foo().\n\n For the result from foo() you must somewhere define attributes (names). \nWhere? In the CREATE FUNCTION statement? It must be possible to write:\n\n select name1, name2 from foo() where name1 > 10;\n\n What returns foo()? 
...a pointer to a HeapTuple or something like this, or a\npointer to some temp table?\n\n> c) executor should support execution of such Query node, properly\n> extracting things out of function's return value and placing them into\n> result attributes.\n\n d) changes in fmgr\n\n e) SPI support for table building/filling inside foo()\n\n\n IMHO a very cool and nice feature, but not easy for implementation.\n\n\t\t\tKarel \n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Wed, 27 Jun 2001 09:10:52 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: functions returning records" }, { "msg_contents": "On Wed, 27 Jun 2001, Karel Zak wrote:\n\n> On Tue, Jun 26, 2001 at 05:11:47PM -0400, Alex Pilosov wrote:\n> > I started thinking about Tom's idea to implement functions as table\n> > source.\n> > \n> > To me, it seems that a very few changes are necessary:\n> > a) parser must be changed to allow functioncall to be a table_ref\n> > (easy)\n> > \n> > b) when a Query node is generated out of such a call \"select * from foo()\"\n> > it should be almost identical to one generated out of \"select * from\n> > (select * from foo)\" with one distinction: list of query attributes should\n> > be completed based on return type of foo().\n> \n> For the result from foo() you must somewhere define attributes (names). \n> Where? In CREATE FUNCTION statement? Possible must be:\nThe function must return an existing reltype. I understand it's a major\nrestriction, but I can't think of a better way. \n\n> select name1, name2 from foo() where name1 > 10;\n> \n> What returns foo()? ...the pointer to HeapTuple or something like this or\n> pointer to some temp table?\nPointer to heaptuple. 
We can get to tupdesc for that tuple by looking up\nits prorettype.\n\n> > c) executor should support execution of such Query node, properly\n> > extracting things out of function's return value and placing them into\n> > result attributes.\n> \n> d) changes in fmgr\nDon't think that's necessary, but I guess I'll find out when I try it :)\n\n> e) SPI support for table building/filling inside foo()\n\nAs far as SPI is concerned, its the same as current: function returning\nrecords must return pointer to HeapTuple containing the record.\n\n", "msg_date": "Wed, 27 Jun 2001 06:29:32 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: functions returning records" }, { "msg_contents": "Alex Pilosov wrote:\n> \n> On Tue, 26 Jun 2001, Reinoud van Leeuwen wrote:\n> \n> > Coming from a Sybase environment I would love to have functions return\n> > a result set. A few things to think of:\n> > 1: will it be possible to return multiple result sets? (in Sybase any\n> > select statement that is not redirected to variables or a table goes\n> > to the client, so it is quite common to do multiple selects). Does the\n> > postgresql client library support this?\n> No, libpq protocol cannot support that. This is really a sybasism, as good\n> as it is, no other database supports anything like that.\n\nIIRC the _protocol_ should support it all right, but the current libpq \nimplementation does not (and the sql queries in functions are not sent\nto \nclient either)\n\n---------------\nHannu\n", "msg_date": "Wed, 27 Jun 2001 15:24:33 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: functions returning records" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Alex Pilosov wrote:\n>> On Tue, 26 Jun 2001, Reinoud van Leeuwen wrote:\n> 1: will it be possible to return multiple result sets? 
(in Sybase any\n> select statement that is not redirected to variables or a table goes\n> to the client, so it is quite common to do multiple selects). Does the\n> postgresql client library support this?\n\n>> No, libpq protocol cannot support that. This is really a sybasism, as good\n>> as it is, no other database supports anything like that.\n\n> IIRC the _protocol_ should support it all right, but the current libpq \n> implementation does not (and the sql queries in functions are not sent\n> to client either)\n\nActually, libpq supports it just fine too, but most clients don't.\nYou have to use PQsendQuery() and a PQgetResult() loop to deal with\nmultiple resultsets out of one query. It is possible to see this\nhappening even today:\n\n\tPQsendQuery(conn, \"SELECT * FROM foo ; SELECT * FROM bar\");\n\n\twhile ((res = PQgetResult(conn)))\n\t{\n\t\t...\n\nWhether it would be a *good idea* to allow standalone SELECTs in\nfunctions to be handled that way is another question. I've got strong\ndoubts about it. The main problem is that the function call would be\nnested inside another SELECT, which means you'd have the problem of\nsuspending a resultset transmission already in progress. That's *not*\nin the protocol, much less libpq, and you wouldn't really want clients\nforced to buffer incomplete resultsets anyway. 
But it could be\nsupported in procedures (not functions) that are called by some kind of\nPERFORM statement, so that there's not a SELECT already in progress when\nthey are invoked.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Jun 2001 10:35:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: functions returning records " }, { "msg_contents": "Karel Zak wrote:\n> On Tue, Jun 26, 2001 at 05:11:47PM -0400, Alex Pilosov wrote:\n> > I started thinking about Tom's idea to implement functions as table\n> > source.\n> >\n> > To me, it seems that a very few changes are necessary:\n> > a) parser must be changed to allow functioncall to be a table_ref\n> > (easy)\n> >\n> > b) when a Query node is generated out of such a call \"select * from foo()\"\n> > it should be almost identical to one generated out of \"select * from\n> > (select * from foo)\" with one distinction: list of query attributes should\n> > be completed based on return type of foo().\n>\n> For the result from foo() you must somewhere define attributes (names).\n> Where? In CREATE FUNCTION statement? Possible must be:\n>\n> select name1, name2 from foo() where name1 > 10;\n>\n> What returns foo()? ...the pointer to HeapTuple or something like this or\n> pointer to some temp table?\n>\n> > c) executor should support execution of such Query node, properly\n> > extracting things out of function's return value and placing them into\n> > result attributes.\n>\n> d) changes in fmgr\n>\n> e) SPI support for table building/filling inside foo()\n>\n>\n> IMHO very cool and nice feature, but not easy for imlementation.\n\n Good questions - must be because I asked them myself before.\n :-)\n\n My idea on that is as follows:\n\n 1. Adding a new relkind that means 'record'. So we use\n pg_class, pg_attribute and pg_type as we do for tables\n and views to describe a structure.\n\n 2. 
A function that RETURNS SETOF record/table/view is\n expected to return a refcursor (which is basically a\n portal name - SPI support already in 7.2), whose tupdesc\n matches the structure.\n\n 3. The Func node for such a function invocation will call\n the function with the appropriate arguments to get the\n portal, receive the tuples with an internal fetch method\n one per invocation (I think another destination is\n basically enough) and close the portal at the end.\n\n 4. Enhancement of the portal capabilities. A new function\n with a tuple descriptor as argument creates a special\n portal that simply opens a tuple sink. Another function\n stores a tuple there and a third one rewinds the sink and\n switches the portal into read mode, so that fetches will\n return the tuples again. One format of the tuple sink is\n capable of backward moves too, so it'll be totally\n transparent.\n\n 5. Enhancement of procedural languages that aren't\n implemented as state machines (currently all of them) to\n use the tuple-sink-portals and implement RETURN AND\n RESUME.\n\n This plan reuses a lot of existing code and gains IMHO the\n most functionality. All portals are implicitly closed at the\n end of a transaction. This form of internal portal usage\n doesn't require explicit transaction blocks (as of current\n 7.2 tree). All the neat buffering, segmenting of the tuple\n sink code for materializing the result set comes into play.\n From the executor's POV there is no difference between a\n function returning a portal that's a real SELECT, collecting\n the data on the fly, or a function materializing the result\n set first with RETURN AND RESUME. The tuple structure\n returned by a function is not only known at parsetime, but\n can be used in other places like for %ROWTYPE in PL/pgSQL.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. 
#\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 27 Jun 2001 11:05:58 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: functions returning records" }, { "msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> 1. Adding a new relkind that means 'record'. So we use\n> pg_class, pg_attribute and pg_type as we do for tables\n> and views to describe a structure.\n\nIt seems fairly ugly to have a pg_class entry for something that\nisn't a table or even a table-like entity. It would be nice if\nwe could describe a record type with only pg_type and pg_attribute\nentries. I haven't thought about it in detail, but seems like it\ncould be done if pg_attribute entries are changed to reference\npg_type, not pg_class, rows as their parent. However, this would\nbreak so many existing queries in psql and other clients that it'd\nprobably be unacceptable :-(\n\n> 2. A function that RETURNS SETOF record/table/view is\n> expected to return a refcursor (which is basically a\n> portal name - SPI support already in 7.2), who's tupdesc\n> matches the structure.\n\nOtherwise this proposal sounds good. Jan and I talked about it earlier;\none point I recall is that the portal/cursor based approach can\ninternally support the existing multiple-call implementation of\nfunctions returning sets. 
That is, when you call the portal to get the\nnext tuple, it might hand you back a tuple saved from a previous\nfunction call, or it might turn around and call the function again to\nget the next tuple.\n\nBTW, once we've had this for a release or two, I'd like to rip out the\nexisting support for calling functions-returning-sets during SELECT list\nevaluation, so that expression evaluation could be simplified and sped\nup. But we can wait for people to change over their existing uses\nbefore we do that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Jun 2001 11:26:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: functions returning records " }, { "msg_contents": "On Wed, 27 Jun 2001, Jan Wieck wrote:\n\n> My idea on that is as follows:\n> \n> 1. Adding a new relkind that means 'record'. So we use\n> pg_class, pg_attribute and pg_type as we do for tables\n> and views to describe a structure.\nOkay\n\n> 2. A function that RETURNS SETOF record/table/view is\n> expected to return a refcursor (which is basically a\n> portal name - SPI support already in 7.2), who's tupdesc\n> matches the structure.\nOkay, but that will break whatever currently written functions which\nreturn setof. Although it could be considered a good thing, as its too\nugly now :)\n\n> 3. The Func node for such a function invocation will call\n> the function with the appropriate arguments to get the\n> portal, receive the tuples with an internal fetch method\n> one per invocation (I think another destination is\n> basically enough) and close the portal at the end.\nOK\n\n> 4. Enhancement of the portal capabilities. A new function\n> with a tuple descriptor as argument creates a special\n> portal that simply opens a tuple sink. Another function\n> stores a tuple there and a third one rewinds the sink and\n> switches the portal into read mode, so that fetches will\n> return the tuples again. 
One format of the tuple sink is\n> capable of backward moves too, so it'll be totally\n> transparent.\nOK\n\n> 5. Enhancement of procedural languages that aren't\n> implemented as state machines (currently all of them) to\n> use the tuple-sink-portals and implement RETURN AND\n> RESUME.\nI'm not sure I understand this one correctly. Could you explain what \nyou mean here by 'use'?\n\nWhat is \"RETURN AND RESUME\"? Do you mean a function that precomputes\nentire result set before stuffing it into portal?\n\n> This plan reuses alot of existing code and gains IMHO the\n> most functionality. All portals are implicitly closed at the\n> end of a transaction. This form of internal portal usage\n> doesn't require explicit transaction blocks (as of current\n> 7.2 tree). All the neat buffering, segmenting of the tuple\n> sink code for materializing the result set comes into play.\n> From the executors POV there is no difference between a\n> function returning a portal that's a real SELECT, collecting\n> the data on the fly, or a function materializing the result\n> set first with RETURN AND RESUME. The tuple structure\n> returned by a function is not only known at parsetime, but\n> can be used in other places like for %ROWTYPE in PL/pgSQL.\n\nI think I once again got myself in over my head :) But I'm going to try to\ncode this thing anyway, with great suggestions from Karel and you....\n\n-alex\n\n", "msg_date": "Wed, 27 Jun 2001 11:31:09 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: functions returning records" }, { "msg_contents": "Tom Lane wrote:\n> Jan Wieck <JanWieck@Yahoo.com> writes:\n> > 1. Adding a new relkind that means 'record'. So we use\n> > pg_class, pg_attribute and pg_type as we do for tables\n> > and views to describe a structure.\n>\n> It seems fairly ugly to have a pg_class entry for something that\n> isn't a table or even a table-like entity. 
It would be nice if\n> we could describe a record type with only pg_type and pg_attribute\n> entries. I haven't thought about it in detail, but seems like it\n> could be done if pg_attribute entries are changed to reference\n> pg_type, not pg_class, rows as their parent. However, this would\n> break so many existing queries in psql and other clients that it'd\n> probably be unacceptable :-(\n\n It's not THAT ugly for me, and the fact that it's named\n \"pg_class\" instead of \"pg_relation\" makes some sense all of\n the sudden.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 27 Jun 2001 12:14:54 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: functions returning records" }, { "msg_contents": "Alex Pilosov wrote:\n> On Wed, 27 Jun 2001, Jan Wieck wrote:\n>\n> > My idea on that is as follows:\n> >\n> > 1. Adding a new relkind that means 'record'. So we use\n> > pg_class, pg_attribute and pg_type as we do for tables\n> > and views to describe a structure.\n> Okay\n>\n> > 2. A function that RETURNS SETOF record/table/view is\n> > expected to return a refcursor (which is basically a\n> > portal name - SPI support already in 7.2), who's tupdesc\n> > matches the structure.\n> Okay, but that will break whatever currently written functions which\n> return setof. Although it could be considered a good thing, as its too\n> ugly now :)\n\n Not necessarily. We could as well (as Tom mentioned already)\n add another portal enhancement, so that the current \"SETOF\n tuple\" function behaviour is wrapped by a portal. 
So if you\n call a \"SETOF tuple\" function, the function pointer get's\n stored in the portal and the function called on FETCH (or the\n internal fetch methods). The distinction on the SQL level\n could be done as \"RETURNS CURSOR OF ...\", don't know how to\n layer that into pg_proc yet, but would make it even clearer.\n\n> I'm not sure I understand this one correctly. Could you explain what\n> you mean here by 'use'?\n>\n> What is \"RETURN AND RESUME\"? Do you mean a function that precomputes\n> entire result set before stuffing it into portal?\n\n On the PL/pgSQL level such a function could look like\n\n ...\n FOR row IN SELECT * FROM mytab LOOP\n RETURN (row.a, row.b + row.c) AND RESUME;\n END LOOP;\n RETURN;\n\n Poor example and could be done better, but you get the idea.\n The language handler opens a tuple sink portal for it. On\n every loop invocation, one tuple is stuffed into it and on\n the final return, the tuple sink is rewound and prepared to\n return the tuples. The portal around it controls when to get\n rid of the sink, wherever it resides.\n\n These sinks are the place where the sorter for example piles\n it's tuples. For small numbers of tuples, they are just held\n in main memory. Bigger collections get stuffed into a\n tempfile and huge ones even in segmented tempfiles. What's\n considered \"small\" is controlled by the -S option (sort\n buffer size). So it's already a runtime option.\n\n> I think I once again got myself in over my head :) But I'm going to try to\n> code this thing anyway, with great suggestions from Karel and you....\n\n Hard training causes sore muscles, unfortunately it's the\n only way to gain muscle power - but take a break before you\n have a cramp :-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 27 Jun 2001 12:40:46 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: functions returning records" }, { "msg_contents": "Tom Lane wrote:\n> \n> Jan Wieck <JanWieck@Yahoo.com> writes:\n> > 1. Adding a new relkind that means 'record'. So we use\n> > pg_class, pg_attribute and pg_type as we do for tables\n> > and views to describe a structure.\n> \n> It seems fairly ugly to have a pg_class entry for something that\n> isn't a table or even a table-like entity. \n\nI dont think that sequence is any more table-like than record.\n\nAnd difference between type and class ia also quite debatable in \nmost languages ;)\n\nAlso there seems to be more existing creative use of pg_class - what \ndoes relkind='s' record for pg_variable stand for ?\n\n> Otherwise this proposal sounds good. Jan and I talked about it earlier;\n> one point I recall is that the portal/cursor based approach can\n> internally support the existing multiple-call implementation of\n> functions returning sets. That is, when you call the portal to get the\n> next tuple, it might hand you back a tuple saved from a previous\n> function call, or it might turn around and call the function again to\n> get the next tuple.\n> \n> BTW, once we've had this for a release or two, I'd like to rip out the\n> existing support for calling functions-returning-sets during SELECT list\n> evaluation, so that expression evaluation could be simplified and sped\n> up. But we can wait for people to change over their existing uses\n> before we do that.\n\nHow hard would it be to turn this around and implement RETURN AND\nCONTINUE\nfor at least PL/PGSQL, and possibly C/Perl/Python ... 
?\n\n---------------\nHannu\n", "msg_date": "Thu, 28 Jun 2001 00:47:19 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: functions returning records" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Tom Lane wrote:\n>> It seems fairly ugly to have a pg_class entry for something that\n>> isn't a table or even a table-like entity. \n\n> I dont think that sequence is any more table-like than record.\n\nOh? It's got storage, it's got columns, you can select from it.\n\ntest71=# create sequence myseq;\nCREATE\ntest71=# select * from myseq;\n sequence_name | last_value | increment_by | max_value | min_value | cache_value | log_cnt | is_cycled | is_called\n---------------+------------+--------------+------------+-----------+-------------+---------+-----------+-----------\n myseq | 1 | 1 | 2147483647 | 1 | 1 | 1 | f | f\n(1 row)\n\nLooks pretty table-ish to me.\n\n> Also there seems to be more existing creative use of pg_class - what \n> does relkind='s' record for pg_variable stand for ?\n\nSpecial system relation. Again, there's storage behind it (at least for\npg_log, I suppose pg_xactlock is a bit of a cheat... but there doesn't\nreally need to be a pg_class entry for pg_xactlock anyway, and I'm not\nsure pg_log needs one either).\n\nHowever, this is fairly academic considering the backwards-compatibility\ndownside of changing pg_attribute.attrelid to pg_attribute.atttypid :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Jun 2001 19:03:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: functions returning records " }, { "msg_contents": "On Thu, 28 Jun 2001, Hannu Krosing wrote:\n\n> Tom Lane wrote:\n> > \n> > Jan Wieck <JanWieck@Yahoo.com> writes:\n> > > 1. Adding a new relkind that means 'record'. 
So we use\n> > > pg_class, pg_attribute and pg_type as we do for tables\n> > > and views to describe a structure.\n> > \n> > It seems fairly ugly to have a pg_class entry for something that\n> > isn't a table or even a table-like entity. \n> \n> I dont think that sequence is any more table-like than record.\n> \n> And difference between type and class ia also quite debatable in \n> most languages ;)\n> \n> Also there seems to be more existing creative use of pg_class - what \n> does relkind='s' record for pg_variable stand for ?\n> \n> > Otherwise this proposal sounds good. Jan and I talked about it earlier;\n> > one point I recall is that the portal/cursor based approach can\n> > internally support the existing multiple-call implementation of\n> > functions returning sets. That is, when you call the portal to get the\n> > next tuple, it might hand you back a tuple saved from a previous\n> > function call, or it might turn around and call the function again to\n> > get the next tuple.\n> > \n> > BTW, once we've had this for a release or two, I'd like to rip out the\n> > existing support for calling functions-returning-sets during SELECT list\n> > evaluation, so that expression evaluation could be simplified and sped\n> > up. But we can wait for people to change over their existing uses\n> > before we do that.\n> \n> How hard would it be to turn this around and implement RETURN AND\n> CONTINUE\n> for at least PL/PGSQL, and possibly C/Perl/Python ... ?\nCannot talk about plpgsql, but for c this would be probably implemented\nwith setjmp and with perl with goto. 
Probably not very complex.\n\n-alex\n\n", "msg_date": "Wed, 27 Jun 2001 20:10:12 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: functions returning records" }, { "msg_contents": "Alex Pilosov wrote:\n> On Thu, 28 Jun 2001, Hannu Krosing wrote:\n> >\n> > How hard would it be to turn this around and implement RETURN AND\n> > CONTINUE\n> > for at least PL/PGSQL, and possibly C/Perl/Python ... ?\n> Cannot talk about plpgsql, but for c this would be probably implemented\n> with setjmp and with perl with goto. Probably not very complex.\n\n Don't think so. When the function returns, the call stack\n gets destroyed. Jumping back to there - er - the core dump\n is not even useful any more. 
Or did I miss something?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 27 Jun 2001 21:08:38 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: functions returning records" }, { "msg_contents": "On Wed, 27 Jun 2001, Jan Wieck wrote:\n\n> Alex Pilosov wrote:\n> > On Thu, 28 Jun 2001, Hannu Krosing wrote:\n> > >\n> > > How hard would it be to turn this around and implement RETURN AND\n> > > CONTINUE\n> > > for at least PL/PGSQL, and possibly C/Perl/Python ... ?\n> > Cannot talk about plpgsql, but for c this would be probably implemented\n> > with setjmp and with perl with goto. Probably not very complex.\n> \n> Don't think so. When the function returns, the call stack\n> gets destroyed. Jumping back to there - er - the core dump\n> is not even useful any more. Or did I miss something?\n\nWell, it shouldn't return, but instead save the location and longjmp to\nthe SPI_RESUME_jmp location. On the next call, instead of a function call, it\nshould longjmp back to the saved location. I have to admit it's more complex\nthan I originally thought, but probably doable.\n\n-alex\n\n", "msg_date": "Wed, 27 Jun 2001 21:24:09 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: functions returning records" }, { "msg_contents": "\n The other thing:\n\n 1/\tSELECT a, b, c FROM foo();\n 2/\tSELECT a FROM foo();\n\n How should function foo() build and return its result in examples 1/ and 2/ ?\n\n It's a bad function if it returns the same result for both queries -- because \n in example 2/ only one column is wanted. IMHO a function returning \n records needs information about the wanted result (number of columns, etc).\n\n For example, trigger functions have specific information available via the\n \"CurrentTriggerData\" struct. For functions returning records we can \n create a special struct too. What?\n\n\t\t\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Thu, 28 Jun 2001 09:41:11 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: functions returning records" }, { "msg_contents": "Alex Pilosov wrote:\n> On Wed, 27 Jun 2001, Jan Wieck wrote:\n>\n> > Alex Pilosov wrote:\n> > > On Thu, 28 Jun 2001, Hannu Krosing wrote:\n> > > >\n> > > > How hard would it be to turn this around and implement RETURN AND\n> > > > CONTINUE\n> > > > for at least PL/PGSQL, and possibly C/Perl/Python ... ?\n> > > Cannot talk about plpgsql, but for c this would be probably implemented\n> > > with setjmp and with perl with goto. Probably not very complex.\n> >\n> > Don't think so. When the function returns, the call stack\n> > gets destroyed. 
Jumping back to there - er - the core dump\n> > is not even useful any more. Or did I miss something?\n>\n> Well, it shouldn't return, but instead save the location and longjmp to\n> SPI_RESUME_jmp location. On a next call, instead of a function call, it\n> should longjmp back to saved location. I have to admit its more complex\n> than I originally thought, but probably doable.\n\n OK, let's screw it up some more:\n\n SELECT F.a, B.b FROM foo() F, bar() B\n WHERE F.a = B.a;\n\n This should normally result in a merge join, so you might get\n away with longjmp's. But you get the idea.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Thu, 28 Jun 2001 07:50:26 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: functions returning records" }, { "msg_contents": "\nalex@pilosoft.com (Alex Pilosov) writes:\n\n: [...]\n: Well, it shouldn't return, but instead save the location and longjmp to\n: SPI_RESUME_jmp location. On a next call, instead of a function call, it\n: should longjmp back to saved location. I have to admit its more complex\n: than I originally thought, but probably doable.\n\nImplementing (what are in effect) co-routines or continuations by\nsetjmp/longjmp is an inherently non-portable practice. (Think about\nhow at all SPI_RESUME_jmp *and* the user-defined-function's saved\nlocation could both be valid places to longjmp to at, the same time.)\nAt the least, you would need some assembly language code, and\nheap-allocated stacks. 
Take a look into what user-level threading\nlibraries do.\n\nIf you went down this avenue, you might decide that a reasonable way\nto do this is in fact to rely on first-class threads to contain the\nexecution context of user-defined functions. You wouldn't have the\nconcurrency problems normally associated with threads (since the\nserver would still only activate one thread at a time).\n\n- FChE\n", "msg_date": "28 Jun 2001 08:27:26 -0400", "msg_from": "fche@redhat.com (Frank Ch. Eigler)", "msg_from_op": false, "msg_subject": "Re: functions returning records" }, { "msg_contents": "Jan Wieck wrote:\n> Alex Pilosov wrote:\n> > On Wed, 27 Jun 2001, Jan Wieck wrote:\n> >\n> > > Alex Pilosov wrote:\n> > > > On Thu, 28 Jun 2001, Hannu Krosing wrote:\n> > > > >\n> > > > > How hard would it be to turn this around and implement RETURN AND\n> > > > > CONTINUE\n> > > > > for at least PL/PGSQL, and possibly C/Perl/Python ... ?\n> > > > Cannot talk about plpgsql, but for c this would be probably implemented\n> > > > with setjmp and with perl with goto. Probably not very complex.\n> > >\n> > > Don't think so. When the function returns, the call stack\n> > > get's destroyed. Jumping back to there - er - the core dump\n> > > is not even useful any more. Or did I miss something?\n> >\n> > Well, it shouldn't return, but instead save the location and longjmp to\n> > SPI_RESUME_jmp location. On a next call, instead of a function call, it\n> > should longjmp back to saved location. I have to admit its more complex\n> > than I originally thought, but probably doable.\n>\n> OK, let's screw it up some more:\n>\n> SELECT F.a, B.b FROM foo() F, bar() B\n> WHERE F.a = B.a;\n>\n> This should normally result in a merge join, so you might get\n> away with longjmp's. But you get the idea.\n\n On a third thought, you don't get anywhere with longjmp's.\n You have a call stack, do a setjmp() saving the stack\n pointer. 
Then you call the function, do another setjmp() here\n and do the longjmp() to #1. This restores the saved stack\n pointer, so at the very first time you do any other function\n call (lib calls included), you corrupt the stack frame at the\n current stack pointer position. If you later jump back to\n setjmp() #2 location, you'll not be able to return.\n\n You can only drop stack frames safely, you can't add them\n back, they aren't saved.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Thu, 28 Jun 2001 08:40:29 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: functions returning records" }, { "msg_contents": "On Thu, 28 Jun 2001, Jan Wieck wrote:\n\n> \n> On a third thought, you don't get anywhere with longjmp's.\n> You have a call stack, do a setjmp() saving the stack\n> pointer. Then you call the function, do another setjmp() here\n> and do the longjmp() to #1. This restores the saved stack\n> pointer, so at the very first time you do any other function\n> call (lib calls included), you corrupt the stack frame at the\n> current stack pointer position. If you later jump back to\n> setjmp() #2 location, you'll not be able to return.\n> \n> You can only drop stack frames safely, you can't add them\n> back, they aren't saved.\nTrue. I withdraw the idea. 
\n\nSee this for a s[l]ick implementation of coroutines in C:\n\nhttp://www.chiark.greenend.org.uk/~sgtatham/coroutines.html\n\n(essentially a replacement for set of gotos)\n\nTis ugly, but it should work (tm).\n\n\n", "msg_date": "Thu, 28 Jun 2001 13:24:32 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: functions returning records" } ]
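[Editor's note: the switch-based trick on the page Alex links sidesteps the stack-corruption problem Jan describes, because the function genuinely returns on every yield -- only a resume point (a case label number) and any surviving locals are kept in static storage. A minimal sketch of that technique follows, using the crBegin/crReturn/crFinish names from Tatham's article; next_value() and the values it yields are made up for illustration and this is not how the backend implements set-returning functions.]

```c
/*
 * Sketch of the switch-based coroutine trick from Simon Tatham's
 * "Coroutines in C" (the page linked above).  crBegin/crReturn/crFinish
 * are the names used in that article; next_value() is a hypothetical
 * example, not backend code.
 */
#define crBegin(state)      switch (state) { case 0:
#define crReturn(state, x)  do { state = __LINE__; return (x); \
                                 case __LINE__:; } while (0)
#define crFinish            }

/* A "function returning a set": each call returns the next value,
 * resuming right after the previous crReturn on the next call. */
static int
next_value(void)
{
    static int state = 0;   /* saved resume point (a case label number) */
    static int i;           /* must be static: the stack frame is gone
                             * between calls */

    crBegin(state);
    for (i = 1; i <= 3; i++)
        crReturn(state, i * 10);
    crFinish;
    return -1;              /* set exhausted */
}
```

Calling next_value() repeatedly yields 10, 20 and 30, then -1 from there on. The price of the trick is that the coroutine is not reentrant: everything that must survive across yields has to live in statics (or be moved into a caller-supplied context struct), which is roughly the per-call state a portal-based implementation would carry around explicitly.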
[ { "msg_contents": "I have another question for you.\n\nIs it a known bug [in PostgreSQL 7.1.2] in the createlang command that it\nasks for my password 4 times. If I type any of them wrong, it partially\ncreates the language [plpgsql] for the given database, but fails.\n\n[veldy@fuggle veldy]$ createlang plpgsql homebrew\nPassword:\nPassword:\nPassword:\nPassword:\n[veldy@fuggle veldy]$\n\nWhy does it ask 4 times?\n\nTom Veldhouse\nveldy@veldy.net\n\n", "msg_date": "Tue, 26 Jun 2001 20:27:03 -0500", "msg_from": "\"Thomas T. Veldhouse\" <veldy@veldy.net>", "msg_from_op": true, "msg_subject": "Bug in createlang?" }, { "msg_contents": "\"Thomas T. Veldhouse\" wrote:\n> \n> Is it a known bug [in PostgreSQL 7.1.2] in the createlang command that it\n> asks for my password 4 times. If I type any of them wrong, it partially\n> creates the language [plpgsql] for the given database, but fails.\n \n> Why does it ask 4 times?\n\ncreatelang is just a script - it basically runs \"/path/to/psql $QUERY\" -\neach query connects a separate time.\n\n- Richard Huxton\n", "msg_date": "Wed, 27 Jun 2001 08:00:01 +0100", "msg_from": "Richard Huxton <dev@archonet.com>", "msg_from_op": false, "msg_subject": "Re: Bug in createlang?" }, { "msg_contents": "Richard Huxton <dev@archonet.com> writes:\n> \"Thomas T. Veldhouse\" wrote:\n>> Why does it ask 4 times?\n\n> createlang is just a script - it basically runs \"/path/to/psql $QUERY\" -\n> each query connects a separate time.\n\nNote that running a setup that requires password auth for the DBA will\nalso be a major pain in the rear when running pg_dumpall: one password\nprompt per database, IIRC. We have other scripts that make more than\none database connection, too.\n\nI'd counsel using a setup that avoids passwords for local connections.\nOne way to do this is to run an ident daemon and use IDENT authorization\nfor connections from 127.0.0.1. This allows \"psql -h localhost\" to work\nwithout a password. 
(IDENT authorization is quite properly discouraged\nfor remote connections, but it's trustworthy enough on your own machine,\nif you control the ident daemon or trust the person who does.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Jun 2001 10:24:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in createlang? " }, { "msg_contents": "Awesome. That is what I am looking for. I have been having a problem\nrestoring a database without changing the security options and restarting\nthe server. Real hassle. This could be what I am looking for. phpPgAdmin\nis running on the same machine, should I just tell it to use the \"public\"\naddress instead of localhost so that authentication is still required for it\n(without trying to use ident)?\n\nThanks,\n\nTom Veldhouse\nveldy@veldy.net\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Richard Huxton\" <dev@archonet.com>\nCc: \"Thomas T. Veldhouse\" <veldy@veldy.net>; <pgsql-general@postgresql.org>\nSent: Wednesday, June 27, 2001 9:24 AM\nSubject: Re: [GENERAL] Bug in createlang?\n\n\n> Richard Huxton <dev@archonet.com> writes:\n> > \"Thomas T. Veldhouse\" wrote:\n> >> Why does it ask 4 times?\n>\n> > createlang is just a script - it basically runs \"/path/to/psql $QUERY\" -\n> > each query connects a separate time.\n>\n> Note that running a setup that requires password auth for the DBA will\n> also be a major pain in the rear when running pg_dumpall: one password\n> prompt per database, IIRC. We have other scripts that make more than\n> one database connection, too.\n>\n> I'd counsel using a setup that avoids passwords for local connections.\n> One way to do this is to run an ident daemon and use IDENT authorization\n> for connections from 127.0.0.1. This allows \"psql -h localhost\" to work\n> without a password. 
(IDENT authorization is quite properly discouraged\n> for remote connections, but it's trustworthy enough on your own machine,\n> if you control the ident daemon or trust the person who does.)\n>\n> regards, tom lane\n>\n\n", "msg_date": "Wed, 27 Jun 2001 09:32:42 -0500", "msg_from": "\"Thomas T. Veldhouse\" <veldy@veldy.net>", "msg_from_op": true, "msg_subject": "Re: Bug in createlang? " }, { "msg_contents": "\"Thomas T. Veldhouse\" <veldy@veldy.net> writes:\n> Real hassle. This could be what I am looking for. phpPgAdmin\n> is running on the same machine, should I just tell it to use the \"public\"\n> address instead of localhost so that authentication is still required for it\n> (without trying to use ident)?\n\nSure, that would work, or use Unix-socket connection that way. (BTW,\nthe reason ident doesn't work for Unix sockets is that standard IDENT\ndaemons only know how to get the info for IP-based connections. Too\nbad...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Jun 2001 10:59:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in createlang? " }, { "msg_contents": "> Richard Huxton <dev@archonet.com> writes:\n> > \"Thomas T. Veldhouse\" wrote:\n> >> Why does it ask 4 times?\n> \n> > createlang is just a script - it basically runs \"/path/to/psql $QUERY\" -\n> > each query connects a separate time.\n> \n> Note that running a setup that requires password auth for the DBA will\n> also be a major pain in the rear when running pg_dumpall: one password\n> prompt per database, IIRC. We have other scripts that make more than\n> one database connection, too.\n\nThis brings up an issue I am concerned about. Right now, when we\ninstall the database with initdb, we basically are wide-opened to any\nlocal user who wants to connect to the database as superuser. 
In fact,\nsomeone could easily install a function in template1 that bypasses\ndatabase security so even after you put a password on the superuser and\nothers, they could bypass security.\n\nDo people have a good solution for this problem? Should be be\ninstalling a password for the super-user at initdb time? I see initdb\nhas this option:\n\n --pwprompt\n\n -W Makes initdb prompt for a password of the database\n superuser. If you don't plan on using password\n authentication, this is not important. Otherwise\n you won't be able to use password authentication\n until you have a password set up.\n\nDo people know they should be using this initdb option if they don't\ntrust their local users? I see no mention of it in the INSTALL file.\n\nI see it does:\n\n# set up password\nif [ \"$PwPrompt\" ]; then\n $ECHO_N \"Enter new superuser password: \"$ECHO_C\n stty -echo > /dev/null 2>&1\n read FirstPw\n stty echo > /dev/null 2>&1\n echo\n $ECHO_N \"Enter it again: \"$ECHO_C\n stty -echo > /dev/null 2>&1\n read SecondPw\n stty echo > /dev/null 2>&1\n echo\n if [ \"$FirstPw\" != \"$SecondPw\" ]; then\n echo \"Passwords didn't match.\" 1>&2\n exit_nicely\n fi\n echo \"ALTER USER \\\"$POSTGRES_SUPERUSERNAME\\\" WITH PASSWORD '$FirstPw'\" \\\n | \"$PGPATH\"/postgres $PGSQL_OPT template1 > /dev/null || exit_nicely\n if [ ! -f $PGDATA/global/pg_pwd ]; then\n echo \"The password file wasn't generated. Please report this problem.\" 1>&2\n exit_nicely\n fi\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 11 Jul 2001 15:02:53 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in createlang?" }, { "msg_contents": "> Richard Huxton <dev@archonet.com> writes:\n> > \"Thomas T. 
Veldhouse\" wrote:\n> >> Why does it ask 4 times?\n> \n> > createlang is just a script - it basically runs \"/path/to/psql $QUERY\" -\n> > each query connects a separate time.\n> \n> Note that running a setup that requires password auth for the DBA will\n> also be a major pain in the rear when running pg_dumpall: one password\n> prompt per database, IIRC. We have other scripts that make more than\n> one database connection, too.\n> \n> I'd counsel using a setup that avoids passwords for local connections.\n> One way to do this is to run an ident daemon and use IDENT authorization\n> for connections from 127.0.0.1. This allows \"psql -h localhost\" to work\n> without a password. (IDENT authorization is quite properly discouraged\n> for remote connections, but it's trustworthy enough on your own machine,\n> if you control the ident daemon or trust the person who does.)\n\nI just applied a diff to better document the use of ident for localhost.\nI think it is a good idea, and in some ways a better use of ident than\nfor remote machines. If I missed a spot that could be better\ndocumented, let me know.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: doc/src/sgml/client-auth.sgml\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/client-auth.sgml,v\nretrieving revision 1.11\ndiff -c -r1.11 client-auth.sgml\n*** doc/src/sgml/client-auth.sgml\t2001/05/12 22:51:34\t1.11\n--- doc/src/sgml/client-auth.sgml\t2001/07/11 20:27:07\n***************\n*** 242,248 ****\n of the connecting user. <productname>Postgres</productname>\n then verifies whether the so identified operating system user\n is allowed to connect as the database user that is requested.\n! 
\t This is only available for TCP/IP connections.\n The <replaceable>authentication option</replaceable> following\n the <literal>ident</> keyword specifies the name of an\n <firstterm>ident map</firstterm> that specifies which operating\n--- 242,251 ----\n of the connecting user. <productname>Postgres</productname>\n then verifies whether the so identified operating system user\n is allowed to connect as the database user that is requested.\n! \t This is only available for TCP/IP connections. It can be used\n! \t on the local machine by specifying the localhost address 127.0.0.1.\n! </para>\n! <para>\n The <replaceable>authentication option</replaceable> following\n the <literal>ident</> keyword specifies the name of an\n <firstterm>ident map</firstterm> that specifies which operating\n***************\n*** 553,559 ****\n <attribution>RFC 1413</attribution>\n <para>\n The Identification Protocol is not intended as an authorization\n! or access control protocol.\n </para>\n </blockquote>\n </para>\n--- 556,563 ----\n <attribution>RFC 1413</attribution>\n <para>\n The Identification Protocol is not intended as an authorization\n! or access control protocol. You must trust the machine running the\n! ident server.\n </para>\n </blockquote>\n </para>\nIndex: src/backend/libpq/pg_hba.conf.sample\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/libpq/pg_hba.conf.sample,v\nretrieving revision 1.19\ndiff -c -r1.19 pg_hba.conf.sample\n*** src/backend/libpq/pg_hba.conf.sample\t2001/07/11 19:36:36\t1.19\n--- src/backend/libpq/pg_hba.conf.sample\t2001/07/11 20:27:08\n***************\n*** 1,5 ****\n # \n! # PostgreSQL HOST-BASED ACCESS (HBA) CONTROL FILE\n # \n # \n # This file controls:\n--- 1,5 ----\n # \n! 
#\t\t PostgreSQL HOST-BASED ACCESS (HBA) CONTROL FILE\n # \n # \n # This file controls:\n***************\n*** 101,109 ****\n # \t\tbe use only for machines where all users are truested.\n # \n # password:\tAuthentication is done by matching a password supplied\n! # \t\tin clear by the host. If no AUTH_ARGUMENT is used, the\n! # \t\tpassword is compared with the user's entry in the\n! # \t\tpg_shadow table.\n # \n # \t\tIf AUTH_ARGUMENT is specified, the username is looked up\n # \t\tin that file in the $PGDATA directory. If the username\n--- 101,109 ----\n # \t\tbe use only for machines where all users are truested.\n # \n # password:\tAuthentication is done by matching a password supplied\n! #\t\tin clear by the host. If no AUTH_ARGUMENT is used, the\n! #\t\tpassword is compared with the user's entry in the\n! #\t\tpg_shadow table.\n # \n # \t\tIf AUTH_ARGUMENT is specified, the username is looked up\n # \t\tin that file in the $PGDATA directory. If the username\n***************\n*** 118,147 ****\n # \t\tpasswords.\n # \n # crypt: \tSame as \"password\", but authentication is done by\n! # \t\tencrypting the password sent over the network. This is\n! # \t\talways preferable to \"password\" except for old clients\n! # \t\tthat don't support \"crypt\". Also, crypt can use\n! # \t\tusernames stored in secondary password files but not\n! # \t\tsecondary passwords.\n! # \n! # ident: Authentication is done by the ident server on the local \n! # \t\tor remote host. AUTH_ARGUMENT is required and maps names \n! #\t\tfound in the $PGDATA/pg_ident.conf file. The connection\n! # \t\tis accepted if the file contains an entry for this map\n! # \t\tname with the ident-supplied username and the requested\n! # \t\tPostgreSQL username. The special map name \"sameuser\"\n! # \t\tindicates an implied map (not in pg_ident.conf) that\n! # \t\tmaps each ident username to the identical PostgreSQL \n #\t\tusername.\n # \n! # krb4: \tKerberos V4 authentication is used.\n # \n! 
# krb5: \tKerberos V5 authentication is used.\n # \n # reject: \tReject the connection. This is used to reject certain hosts\n! # \t\tthat are part of a network specified later in the file.\n! # \t\tTo be effective, \"reject\" must appear before the later\n! # \t\tentries.\n # \n # Local UNIX-domain socket connections support only the AUTH_TYPEs of\n # \"trust\", \"password\", \"crypt\", and \"reject\".\n--- 118,147 ----\n # \t\tpasswords.\n # \n # crypt: \tSame as \"password\", but authentication is done by\n! #\t\tencrypting the password sent over the network. This is\n! #\t\talways preferable to \"password\" except for old clients\n! #\t\tthat don't support \"crypt\". Also, crypt can use\n! #\t\tusernames stored in secondary password files but not\n! #\t\tsecondary passwords.\n! # \n! # ident:\tAuthentication is done by the ident server on the local\n! #\t\t(127.0.0.1) or remote host. AUTH_ARGUMENT is required and\n! #\t\tmaps names found in the $PGDATA/pg_ident.conf file. The \n! #\t\tconnection is accepted if the file contains an entry for\n! #\t\tthis map name with the ident-supplied username and the \n! #\t\trequested PostgreSQL username. The special map name \n! #\t\t\"sameuser\" indicates an implied map (not in pg_ident.conf)\n! #\t\tthat maps each ident username to the identical PostgreSQL \n #\t\tusername.\n # \n! # krb4:\tKerberos V4 authentication is used.\n # \n! # krb5:\tKerberos V5 authentication is used.\n # \n # reject: \tReject the connection. This is used to reject certain hosts\n! #\t\tthat are part of a network specified later in the file.\n! #\t\tTo be effective, \"reject\" must appear before the later\n! #\t\tentries.\n # \n # Local UNIX-domain socket connections support only the AUTH_TYPEs of\n # \"trust\", \"password\", \"crypt\", and \"reject\".", "msg_date": "Wed, 11 Jul 2001 16:33:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Bug in createlang?" 
}, { "msg_contents": "Bruce Momjian writes:\n\n> I just applied a diff to better document the use of ident for localhost.\n> I think it is a good idea, and in some ways a better use of ident than\n> for remote machines. If I missed a spot that could be better\n> documented, let me know.\n\n<blockquote> means it's a quote. What you added there is not part of the\noriginal source. Also, *please* make the pg_hba.conf.sample file shorter,\nnot longer.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 11 Jul 2001 23:12:00 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] Bug in createlang?" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > I just applied a diff to better document the use of ident for localhost.\n> > I think it is a good idea, and in some ways a better use of ident than\n> > for remote machines. If I missed a spot that could be better\n> > documented, let me know.\n> \n> <blockquote> means it's a quote. What you added there is not part of the\n\nMoved out of blockquote.\n\n> original source. Also, *please* make the pg_hba.conf.sample file shorter,\n> not longer.\n\nI have the idea of having the postmaster load the non-comment lines from\npg_hba.conf in as a List of character strings and have the postmaster\nread through the strings, reloading on sighup. That way, we don't have\nto read the file at all for each connection.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 11 Jul 2001 17:27:15 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] Bug in createlang?" }, { "msg_contents": "\nDoes anyone have a comment on this? 
I wrote it a month ago.\n\n> > Richard Huxton <dev@archonet.com> writes:\n> > > \"Thomas T. Veldhouse\" wrote:\n> > >> Why does it ask 4 times?\n> > \n> > > createlang is just a script - it basically runs \"/path/to/psql $QUERY\" -\n> > > each query connects a separate time.\n> > \n> > Note that running a setup that requires password auth for the DBA will\n> > also be a major pain in the rear when running pg_dumpall: one password\n> > prompt per database, IIRC. We have other scripts that make more than\n> > one database connection, too.\n> \n> This brings up an issue I am concerned about. Right now, when we\n> install the database with initdb, we basically are wide-opened to any\n> local user who wants to connect to the database as superuser. In fact,\n> someone could easily install a function in template1 that bypasses\n> database security so even after you put a password on the superuser and\n> others, they could bypass security.\n> \n> Do people have a good solution for this problem? Should be be\n> installing a password for the super-user at initdb time? I see initdb\n> has this option:\n> \n> --pwprompt\n> \n> -W Makes initdb prompt for a password of the database\n> superuser. If you don't plan on using password\n> authentication, this is not important. Otherwise\n> you won't be able to use password authentication\n> until you have a password set up.\n> \n> Do people know they should be using this initdb option if they don't\n> trust their local users? 
I see no mention of it in the INSTALL file.\n> \n> I see it does:\n> \n> # set up password\n> if [ \"$PwPrompt\" ]; then\n> $ECHO_N \"Enter new superuser password: \"$ECHO_C\n> stty -echo > /dev/null 2>&1\n> read FirstPw\n> stty echo > /dev/null 2>&1\n> echo\n> $ECHO_N \"Enter it again: \"$ECHO_C\n> stty -echo > /dev/null 2>&1\n> read SecondPw\n> stty echo > /dev/null 2>&1\n> echo\n> if [ \"$FirstPw\" != \"$SecondPw\" ]; then\n> echo \"Passwords didn't match.\" 1>&2\n> exit_nicely\n> fi\n> echo \"ALTER USER \\\"$POSTGRES_SUPERUSERNAME\\\" WITH PASSWORD '$FirstPw'\" \\\n> | \"$PGPATH\"/postgres $PGSQL_OPT template1 > /dev/null || exit_nicely\n> if [ ! -f $PGDATA/global/pg_pwd ]; then\n> echo \"The password file wasn't generated. Please report this problem.\" 1>&2\n> exit_nicely\n> fi\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 5 Sep 2001 00:42:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in createlang?" }, { "msg_contents": "Bruce Momjian writes:\n\n> Does anyone have a comment on this? I wrote it a month ago.\n\nThe fact that the database server is wide-open in the default installation\nis surely not good, but the problem is that we don't have a universally\naccepted way to lock it down. We could make password authentication the\ndefault, but that would annoy a whole lot of people. 
Another option would\nbe to set the unix domain socket permissions to 0200 by default, so only\nthe user that's running the server can get in. I could live with that;\nnot sure about others.\n\n\n> > > Richard Huxton <dev@archonet.com> writes:\n> > > > \"Thomas T. Veldhouse\" wrote:\n> > > >> Why does it ask 4 times?\n> > >\n> > > > createlang is just a script - it basically runs \"/path/to/psql $QUERY\" -\n> > > > each query connects a separate time.\n> > >\n> > > Note that running a setup that requires password auth for the DBA will\n> > > also be a major pain in the rear when running pg_dumpall: one password\n> > > prompt per database, IIRC. We have other scripts that make more than\n> > > one database connection, too.\n> >\n> > This brings up an issue I am concerned about. Right now, when we\n> > install the database with initdb, we basically are wide-opened to any\n> > local user who wants to connect to the database as superuser. In fact,\n> > someone could easily install a function in template1 that bypasses\n> > database security so even after you put a password on the superuser and\n> > others, they could bypass security.\n> >\n> > Do people have a good solution for this problem? Should be be\n> > installing a password for the super-user at initdb time? I see initdb\n> > has this option:\n> >\n> > --pwprompt\n> >\n> > -W Makes initdb prompt for a password of the database\n> > superuser. If you don't plan on using password\n> > authentication, this is not important. Otherwise\n> > you won't be able to use password authentication\n> > until you have a password set up.\n> >\n> > Do people know they should be using this initdb option if they don't\n> > trust their local users? 
I see no mention of it in the INSTALL file.\n> >\n> > I see it does:\n> >\n> > # set up password\n> > if [ \"$PwPrompt\" ]; then\n> > $ECHO_N \"Enter new superuser password: \"$ECHO_C\n> > stty -echo > /dev/null 2>&1\n> > read FirstPw\n> > stty echo > /dev/null 2>&1\n> > echo\n> > $ECHO_N \"Enter it again: \"$ECHO_C\n> > stty -echo > /dev/null 2>&1\n> > read SecondPw\n> > stty echo > /dev/null 2>&1\n> > echo\n> > if [ \"$FirstPw\" != \"$SecondPw\" ]; then\n> > echo \"Passwords didn't match.\" 1>&2\n> > exit_nicely\n> > fi\n> > echo \"ALTER USER \\\"$POSTGRES_SUPERUSERNAME\\\" WITH PASSWORD '$FirstPw'\" \\\n> > | \"$PGPATH\"/postgres $PGSQL_OPT template1 > /dev/null || exit_nicely\n> > if [ ! -f $PGDATA/global/pg_pwd ]; then\n> > echo \"The password file wasn't generated. Please report this problem.\" 1>&2\n> > exit_nicely\n> > fi\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://www.postgresql.org/search.mpl\n> >\n>\n>\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 6 Sep 2001 12:21:58 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Bug in createlang?" }, { "msg_contents": "On Wed, 5 Sep 2001, Bruce Momjian wrote:\n\n> Does anyone have a comment on this? I wrote it a month ago.\n\nI'm no authority on security, but it seems to me that PostgreSQL ought to\nrequire the setting of an administrative password. IIRC, some of the\ncommercial database products have default passwords, but those are a\nfrequent security problem, since admins oft forget to change them. 
But I\ndon't mind setting the root password when I install my RedHat or Yellow\nDog system, so I don't think I'll mind setting it when I install Postgres.\n\nJust MHO.\n\nRegards,\n\nDavid\n\n-- \nDavid Wheeler AIM: dwTheory\nDavid@Wheeler.net ICQ: 15726394\n Yahoo!: dew7e\n Jabber: Theory@jabber.org\n\n", "msg_date": "Thu, 6 Sep 2001 06:22:22 -0700 (PDT)", "msg_from": "David Wheeler <David@Wheeler.net>", "msg_from_op": false, "msg_subject": "Re: Bug in createlang?" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> The fact that the database server is wide-open in the default installation\n> is surely not good, but the problem is that we don't have a universally\n> accepted way to lock it down. We could make password authentication the\n> default, but that would annoy a whole lot of people.\n\nYes, particularly for pg_dumpall scripts...\n\n> Another option would be to set the unix domain socket permissions to\n> 0200 by default, so only the user that's running the server can get\n> in. I could live with that; not sure about others.\n\nFor my purposes this would be acceptable, but I wouldn't actually want\nto use 0200. So it'd be nicer if the default socket permission were\ntrivially configurable (ideally as a configure switch). Given that,\nI wouldn't mind if the default were 0200.\n\nNote that locking down the unix socket is little help if one is using a\nstartup script that helpfully supplies -i by default. I am not sure\nwhat the score is with all the startup scripts that are in various RPMs\nand other platform-specific distributions; does anyone know if there are\nany that ship with -i enabled?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Sep 2001 11:44:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in createlang? " }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > Does anyone have a comment on this? 
I wrote it a month ago.\n> \n> The fact that the database server is wide-open in the default installation\n> is surely not good, but the problem is that we don't have a universally\n> accepted way to lock it down. We could make password authentication the\n> default, but that would annoy a whole lot of people. Another option would\n> be to set the unix domain socket permissions to 0200 by default, so only\n> the user that's running the server can get in. I could live with that;\n> not sure about others.\n\nWhatever you suggest. We basically create a world-writeable\nsocket/database when we do initdb. It is similar to a product\ninstalling in a world-writable directory.\n\nI realize you can lock it down later, but it seems people need to lock\nit down _before_ doing initdb or somehow keep it locked down until they\nset security. Our new SO_PEERCRED/SCM_CREDS gives us a lockdown option\non Linux/BSD platforms, but not on the others.\n\nIf we do the socket permissions thing for initdb, when do we start\nsetting the socket permissions properly?\n\nI realize there is no easy answer. I just wanted people to know this is\na security hole.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Sep 2001 11:49:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in createlang?" 
}, { "msg_contents": "[redirect to -hackers]\n\nTom Lane writes:\n\n> > The fact that the database server is wide-open in the default installation\n> > is surely not good, but the problem is that we don't have a universally\n> > accepted way to lock it down.\n\n> > Another option would be to set the unix domain socket permissions to\n> > 0200 by default, so only the user that's running the server can get in.\n\n> For my purposes this would be acceptable, but I wouldn't actually want\n> to use 0200. So it'd be nicer if the default socket permission were\n> trivially configurable (ideally as a configure switch). Given that,\n> I wouldn't mind if the default were 0200.\n\nIt is configurable already (unix_socket_permissions in postgresql.conf).\nIf we make this change then we just need to make it really clear in the\ndocumentation somewhere, because the error message will say \"Connection\nrefused\", and the permission of the socket file is the last thing people\nwill think of.\n\n> Note that locking down the unix socket is little help if one is using a\n> startup script that helpfully supplies -i by default. I am not sure\n> what the score is with all the startup scripts that are in various RPMs\n> and other platform-specific distributions; does anyone know if there are\n> any that ship with -i enabled?\n\nThe last count is that none that I can see the source code for do. In\ngeneral, I don't think this is our problem. If people change the default\nconfiguration in their packages without knowing better, they cannot be\nhelped. They will just as quickly change the default unix socket\npermissions back to 0777 if they want to.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 6 Sep 2001 19:43:23 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Bug in createlang? 
" }, { "msg_contents": "Believe Debian sid does.\n\n -- Matt\n\n-----Original Message-----\nFrom: pgsql-general-owner@postgresql.org\n[mailto:pgsql-general-owner@postgresql.org] On Behalf Of Tom Lane\nSent: Thursday, September 06, 2001 11:44 AM\nTo: Peter Eisentraut\nCc: Bruce Momjian; Richard Huxton; Thomas T. Veldhouse;\npgsql-general@postgresql.org\nSubject: Re: [GENERAL] Bug in createlang? \n\nNote that locking down the unix socket is little help if one is using a\nstartup script that helpfully supplies -i by default. I am not sure\nwhat the score is with all the startup scripts that are in various RPMs\nand other platform-specific distributions; does anyone know if there are\nany that ship with -i enabled?\n\n", "msg_date": "Thu, 6 Sep 2001 14:33:24 -0400", "msg_from": "\"Matt Block\" <matt@blockdev.net>", "msg_from_op": false, "msg_subject": "Re: Bug in createlang? " }, { "msg_contents": "\nTo address this issue, I have added the following paragraph to the\ninstallation instructions:\n\n However, while the directory contents are secure, the default\n <filename>pg_hba.conf</filename> authentication of\n <literal>trust</literal> allows any local user to become the\n superuser and connect to the database. If you don't trust your local\n users, we recommend you use the <command>initdb</command> option\n <option>-W</option> or <option>--pwprompt</option> to assign a\n password to the superuser and modify your\n <filename>pg_hba.conf</filename> accordingly. (Another option:\n Your operating system may support <literal>ident</literal> for\n local connections.)\n\n\n---------------------------------------------------------------------------\n\n> > Richard Huxton <dev@archonet.com> writes:\n> > > \"Thomas T. 
Veldhouse\" wrote:\n> > >> Why does it ask 4 times?\n> > \n> > > createlang is just a script - it basically runs \"/path/to/psql $QUERY\" -\n> > > each query connects a separate time.\n> > \n> > Note that running a setup that requires password auth for the DBA will\n> > also be a major pain in the rear when running pg_dumpall: one password\n> > prompt per database, IIRC. We have other scripts that make more than\n> > one database connection, too.\n> \n> This brings up an issue I am concerned about. Right now, when we\n> install the database with initdb, we basically are wide-opened to any\n> local user who wants to connect to the database as superuser. In fact,\n> someone could easily install a function in template1 that bypasses\n> database security so even after you put a password on the superuser and\n> others, they could bypass security.\n> \n> Do people have a good solution for this problem? Should be be\n> installing a password for the super-user at initdb time? I see initdb\n> has this option:\n> \n> --pwprompt\n> \n> -W Makes initdb prompt for a password of the database\n> superuser. If you don't plan on using password\n> authentication, this is not important. Otherwise\n> you won't be able to use password authentication\n> until you have a password set up.\n> \n> Do people know they should be using this initdb option if they don't\n> trust their local users? 
I see no mention of it in the INSTALL file.\n> \n> I see it does:\n> \n> # set up password\n> if [ \"$PwPrompt\" ]; then\n> $ECHO_N \"Enter new superuser password: \"$ECHO_C\n> stty -echo > /dev/null 2>&1\n> read FirstPw\n> stty echo > /dev/null 2>&1\n> echo\n> $ECHO_N \"Enter it again: \"$ECHO_C\n> stty -echo > /dev/null 2>&1\n> read SecondPw\n> stty echo > /dev/null 2>&1\n> echo\n> if [ \"$FirstPw\" != \"$SecondPw\" ]; then\n> echo \"Passwords didn't match.\" 1>&2\n> exit_nicely\n> fi\n> echo \"ALTER USER \\\"$POSTGRES_SUPERUSERNAME\\\" WITH PASSWORD '$FirstPw'\" \\\n> | \"$PGPATH\"/postgres $PGSQL_OPT template1 > /dev/null || exit_nicely\n> if [ ! -f $PGDATA/global/pg_pwd ]; then\n> echo \"The password file wasn't generated. Please report this problem.\" 1>&2\n> exit_nicely\n> fi\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 27 Nov 2001 16:01:55 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in createlang?" } ]
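Pulling together the suggestions from the createlang/security thread above (set a superuser password with initdb -W, and use ident rather than passwords for local connections), a locked-down configuration might look like the sketch below. It follows the 7.x-era pg_hba.conf record layout quoted earlier in the thread; the exact lines are an illustrative assumption by the editor, not a configuration posted by any of the participants.

```
# pg_hba.conf -- illustrative sketch only (7.x-era record format)

# Unix-domain socket connections must supply the user's password;
# this assumes a superuser password was set via "initdb -W" (or
# ALTER USER), since "crypt" compares against pg_shadow.
local all                                   crypt

# TCP connections from this machine go through the local ident
# daemon, so "psql -h localhost" needs no password prompt.
host  all 127.0.0.1 255.255.255.255         ident sameuser

# Reject everything else.
host  all 0.0.0.0   0.0.0.0                 reject
```

With the "sameuser" map, scripts that open many connections (pg_dumpall, createlang) can run via psql -h localhost without one password prompt per connection, which was the original complaint in the thread.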
[ { "msg_contents": "Most intelligent database technique: For PostgreSQL and MySQL\n\nMan invented computer because his brain is very slow - today Intel\ncpu runs at 1.5 gigahertz!!.\nThe speed is doubling every 15 months .. this will go on for the next 40\nyears.\n\nBut software is the most time consuming effort - it requires millions of\npersons\ncoding and banging on the computer keyboards.\n\nEach and every person in the world is asking - \"How come Al Dev (that is\nme), single\nhanded is checking the software quality of pgsql and mysql????!!!!\nPostgreSQL (and MySQL) is written by millions of persons from\nover 300 countries in the world!!\n\nAnswer is very simple - I (Al Dev0 simply use the very high speed linux\nbox and\nrun the regression test package of PostgreSQL and\nsee if it runs ok and results tally!! And do you know - I \"hire\"\nthousands\nof guys from internet to write the regression test package!! Bigger the\nregression package\nthe better it is. Make the regression package about 10 million lines of\ncode.\nRegression test package is the most important package in PostgreSQL.\n\nRegression test package is a very \"SOLID TECHNICAL DOCUMENT\" mutually\nagreed\nbetween millions of developers and millions of users of pgsql!!\n\nWhatever commands pass in regression test is supported by pgsql and\nothers\nmay run (if they run you add it to test package!)\n\nIn hindi - \"database bahooth mainga padega, jee haan!! Yeh apne se nahin\nhota\".\n(Developing a SQL database is very expensive. Yes sir!! 
It is impossible\nfor India to create a SQL server\nIt takes 25 years to create PostgreSQL).\n\nNow, I tell all the countries - \"USA, Russia, Germany, France, Japan,\nUK, Italy, China, India, Malaysia\nall others must work together and create just ONE GOOD SQL server\"\n\nEvery minute of your time is costs about $3 US dollars.\n\nIf Larry Ellison and Bill Gates reads this message they will get\nvery alarmed!!\n\nRead the article - \"How can I trust PostgreSQL\" in\nhttp://www.aldev.8m.com and click on PostgreSQL howto\n\n\n\n", "msg_date": "Wed, 27 Jun 2001 02:31:19 GMT", "msg_from": "alavoor <alavoor@yahoo.com>", "msg_from_op": true, "msg_subject": "Most intelligent database technique: For PostgreSQL and MySQL" } ]
[ { "msg_contents": "Jan,\n\nwe're thinking about the possibility of integrating our full-text search\ninto postgres. There are several problems we should be thinking about,\nbut for now we have a question about the rewrite system.\n\nIs it possible to rewrite an SQL query and execute it? Currently we build\nthe sql query outside of postgres using perl.\n\nLet's consider a simple example:\n\ncreate table tst ( a int4, b int4, c int4);\n\n select * from tst where a=2 and c=0;\n\nwe need something like:\n\n select * from tst where str and c=0;\n\nwhere str is a string resulting from the call ourfunc(table.a, 2)\nand looks like 'b=2*2 or b=(2-1)'\n\ni.e. instead of the original select we need to execute the rewritten select\n\n select * from tst where (b=2*2 or b=(2-1)) and c=0;\n\nin other words we need to know whether it's possible to recognise\n(operator, field, table) and rewrite part of the sql with the\nresult of calling ourfunc().\n\nWe're not sure if it's a question for the rewrite system though.\n\nAny pointers on where to go would be very nice.\n\n\tRegards,\n\n\t\tOleg\n\n", "msg_date": "Wed, 27 Jun 2001 12:17:26 +0400 (MSD)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "Non-trivial rewriting sql query " }, { "msg_contents": "I believe (while I'm not an expert on this) that the rewrite system cannot\ncope with dynamically-rewritten queries (i.e. rewrite rules where a\nfunction must be called to obtain the result of the rewrite rule).\n\nA better possibility for you is to return a refcursor, and use on the client\nside \"FETCH ALL from rc\", if possible.\n\nI.e., the client would do:\nselect setup_query('c=0', 'rc');\nfetch all from rc;\n\ncreate function setup_query(text, refcursor) returns int4 as '\ndeclare\nqry alias for $1;\ncur alias for $2;\nbegin\nexecute ''declare '' || cur || '' cursor for select ... 
'' || qry ||\nourfunc(....)\n\n\n\n\n-alex\nOn Wed, 27 Jun 2001, Oleg Bartunov wrote:\n\n> Jan,\n> \n> we're thinking about possibility to integrate our full-text search\n> into postgres. There are several problems we should thinking about\n> but for now we have a question about rewrite system.\n> \n> Is't possible to rewrite SQL query and execute it. Currently we build\n> sql query outside of postgres using perl.\n> \n> Let's consider some simple example:\n> \n> create table tst ( a int4, b int4, c int4);\n> \n> select * from tst where a=2 and c=0;\n> \n> we need something like:\n> \n> select * from tst where str and c=0;\n> \n> where str is a string resulting by call ourfunc(table.a, 2)\n> and looks like 'b=2*2 or b=(2-1)'\n> \n> i.e. instead of original select we need to execute rewritten select\n> \n> select * from tst where (b=2*2 or b=(2-1)) and c=0;\n> \n> in other words we need to know is't possible to recognise\n> (operator, field,table) and rewrite part of sql by\n> result of calling of ourfunc().\n> \n> We're not sure if it's a question of rewrite system though.\n> \n> Any pointers where to go would be very nice.\n> \n> \tRegards,\n> \n> \t\tOleg\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n> \n\n", "msg_date": "Wed, 27 Jun 2001 06:42:14 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: Non-trivial rewriting sql query " }, { "msg_contents": "Oleg Bartunov wrote:\n> Jan,\n>\n> we're thinking about possibility to integrate our full-text search\n> into postgres. There are several problems we should thinking about\n> but for now we have a question about rewrite system.\n>\n> Is't possible to rewrite SQL query and execute it. 
Currently we build\n> sql query outside of postgres using perl.\n>\n> Let's consider some simple example:\n>\n> create table tst ( a int4, b int4, c int4);\n>\n> select * from tst where a=2 and c=0;\n>\n> we need something like:\n>\n> select * from tst where str and c=0;\n>\n> where str is a string resulting by call ourfunc(table.a, 2)\n> and looks like 'b=2*2 or b=(2-1)'\n>\n> i.e. instead of original select we need to execute rewritten select\n>\n> select * from tst where (b=2*2 or b=(2-1)) and c=0;\n>\n> in other words we need to know is't possible to recognise\n> (operator, field,table) and rewrite part of sql by\n> result of calling of ourfunc().\n>\n> We're not sure if it's a question of rewrite system though.\n>\n> Any pointers where to go would be very nice.\n\n The problem I see is that this is not the way how the\n rewriter works. The rewriter works on querytree structures,\n after ALL parsing is done (the one for the rewriting rules\n long time ago). Inside of a querytree, the attributes are Var\n nodes, pointing to a rangetable entry by index and an\n attribute number in that rangetable. Creating additional\n qualification expressions could be possible, but I doubt you\n really want to go that far.\n\n In the current v7.2 development tree, there is support for\n reference cursors in PL/pgSQL. And this support integrates\n dynamic queries as well, so you could do it as:\n\n CREATE FUNCTION myfunc(refcursor, text, text, integer)\n RETURNS refcursor AS '\n DECLARE\n cur ALIAS FOR $1;\n t_qry ALIAS FOR $2;\n t_val ALIAS FOR $3;\n i_val ALIAS FOR $4;\n BEGIN\n t_qry := t_qry || '' ('' || ourfunc(t_val, i_val) || '')'';\n OPEN cur FOR EXECUTE t_qry;\n RETURN cur;\n END;'\n LANGUAGE 'plpgsql';\n\n I think at least that's the syntax - did't check so if you\n have problems with it, let me know. 
Anyway, invocation from\n the application level then would look like this:\n\n BEGIN;\n SELECT myfunc('c1', 'select * from tst where c = 0 and', table.a, 2);\n FETCH ALL IN c1;\n CLOSE c1;\n COMMIT;\n\n You could as well invoke this function inside of another\n function, storing it's return value in a refcursor variable\n and do fetches inside of the caller.\n\n Would that help?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 27 Jun 2001 12:07:47 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Non-trivial rewriting sql query" } ]
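Pulling Jan's pieces together, the round trip can be sketched end-to-end with a stub standing in for the full-text machinery. This is a hedged illustration only: the ourfunc() body below is a hypothetical placeholder that returns the rewritten qualification as a fixed string, and the syntax assumes the 7.2-development-tree refcursor support described in the message.

```sql
-- Hypothetical stand-in for ourfunc(): a real implementation would derive
-- the rewritten qualification from its arguments; the fixed string just
-- makes the example self-contained.
CREATE FUNCTION ourfunc(text, integer) RETURNS text AS '
BEGIN
    RETURN ''b = 2*2 OR b = (2-1)'';
END;' LANGUAGE 'plpgsql';

-- With myfunc() defined as in the message above, a client session runs:
BEGIN;
SELECT myfunc('c1', 'SELECT * FROM tst WHERE c = 0 AND', 'a', 2);
FETCH ALL IN c1;   -- rows of tst matching the rewritten qualification
CLOSE c1;
COMMIT;
```

The point of the pattern is that the qualification never passes through the rewriter at all; it is assembled as a string and only parsed and planned when the cursor is opened.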
[ { "msg_contents": "\n> For the result from foo() you must somewhere define attributes (names). \n> Where? In CREATE FUNCTION statement? Possible must be:\n> \n> select name1, name2 from foo() where name1 > 10;\n\nYes, optimal would imho also be if the foo() somehow had access to\nthe where restriction, so it could only produce output, that the\nhigher level is interested in, very cool. This would be extremely \nuseful for me. Very hard to implement, or even find an appropriate \ninterface for though.\n\nAndreas\n", "msg_date": "Wed, 27 Jun 2001 10:35:23 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: functions returning records" }, { "msg_contents": "> \n>> For the result from foo() you must somewhere define attributes\n>> (names). \n>> Where? In CREATE FUNCTION statement? Possible must be:\n>> \n>> select name1, name2 from foo() where name1 > 10;\n> \n> Yes, optimal would imho also be if the foo() somehow had access to the\n> where restriction, so it could only produce output, that the\n> higher level is interested in, very cool. This would be extremely \n> useful for me. Very hard to implement, or even find an appropriate \n> interface for though.\n\nYou could easily implement it *in* the function foo IMHO. Since the \nfunction does some black magic to create the result set to begin with, you \ncan change it to use parameters:\n\nselect name1, name2 from foo(10, NULL, NULL) where name1 > 10;\n\n", "msg_date": "Wed, 27 Jun 2001 10:56:43 +0200 (CEST)", "msg_from": "\"Reinoud van Leeuwen\" <reinoud@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: AW: functions returning records" }, { "msg_contents": "On Wed, Jun 27, 2001 at 10:56:43AM +0200, Reinoud van Leeuwen wrote:\n> > \n> >> For the result from foo() you must somewhere define attributes\n> >> (names). \n> >> Where? In CREATE FUNCTION statement? 
Possible must be:\n> >> \n> >> select name1, name2 from foo() where name1 > 10;\n> > \n> > Yes, optimal would imho also be if the foo() somehow had access to the\n> > where restriction, so it could only produce output, that the\n> > higher level is interested in, very cool. This would be extremely \n> > useful for me. Very hard to implement, or even find an appropriate \n> > interface for though.\n> \n> You could easily implement it *in* the function foo IMHO. Since the \n> function does some black magic to create the result set to begin with, you \n> can change it to use parameters:\n> \n> select name1, name2 from foo(10, NULL, NULL) where name1 > 10;\n\n The function execution (data reading, etc) is almost last thing in the \npath-of-query. The parser, planner and others parts of PG must already \nknows enough information about a \"relation\" foo(). I don't know how much \nis intimate idea about this (Tom?), but somewhere in the pg_class / \npg_attribute must be something about foo() result. (*IMHO* of course:) \n\n I can't imagine that foo() builts on-the-fly arbitrary attributes.\n\n By the way, what permissions? For select (view) we can do GRANT/REVOKE, \nand for select * from foo()? For standard tables it's in the \npg_class.relacl. IMHO solution is add foo() to pg_class and mark here\noid of function foo() from pg_proc, and attributes definition store\nto pg_attribute -- everything as for standard table. The source for \nthis information must be from CREATE FUNCTION statement, like:\n\n CREATE FUNCTION foo RETURNS( name1 int, name2 text) ....;\n\nIf the foo is in the pg_class you can do \"GRANT ... 
ON foo\";\n \n\n\t\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Wed, 27 Jun 2001 11:39:57 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: AW: functions returning records" }, { "msg_contents": "On Wed, 27 Jun 2001, Karel Zak wrote:\n\n> On Wed, Jun 27, 2001 at 10:56:43AM +0200, Reinoud van Leeuwen wrote:\n> > > \n> > >> For the result from foo() you must somewhere define attributes\n> > >> (names). \n> > >> Where? In CREATE FUNCTION statement? Possible must be:\n> > >> \n> > >> select name1, name2 from foo() where name1 > 10;\n> > > \n> > > Yes, optimal would imho also be if the foo() somehow had access to the\n> > > where restriction, so it could only produce output, that the\n> > > higher level is interested in, very cool. This would be extremely \n> > > useful for me. Very hard to implement, or even find an appropriate \n> > > interface for though.\n> > \n> > You could easily implement it *in* the function foo IMHO. Since the \n> > function does some black magic to create the result set to begin with, you \n> > can change it to use parameters:\n> > \n> > select name1, name2 from foo(10, NULL, NULL) where name1 > 10;\n> \n> The function execution (data reading, etc) is almost last thing in the \n> path-of-query. The parser, planner and others parts of PG must already \n> knows enough information about a \"relation\" foo(). I don't know how much \n> is intimate idea about this (Tom?), but somewhere in the pg_class / \n> pg_attribute must be something about foo() result. (*IMHO* of course:) \n> \n> I can't imagine that foo() builts on-the-fly arbitrary attributes.\n> \n> By the way, what permissions? For select (view) we can do GRANT/REVOKE, \n> and for select * from foo()? For standard tables it's in the \n> pg_class.relacl. 
IMHO solution is add foo() to pg_class and mark here\n> oid of function foo() from pg_proc, and attributes definition store\n> to pg_attribute -- everything as for standard table. The source for \n> this information must be from CREATE FUNCTION statement, like:\n> \n> CREATE FUNCTION foo RETURNS( name1 int, name2 text) ....;\n> \n> If the foo is in the pg_class you can do \"GRANT ... ON foo\";\n\nI'm planning to require return type to be a existing pg_type already. The\nproblem with your idea is question if you have two functions (for example)\nfoo(timestamp) and foo(int4), you must embed the types into relname, and\nthat's ugly.\n\nOnce its possible to control permission to execute a function via GRANT,\nit solves the grant problem for function-as-tablesource\n\n-alex\n\n", "msg_date": "Wed, 27 Jun 2001 06:54:27 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: AW: functions returning records" }, { "msg_contents": "On Wed, Jun 27, 2001 at 06:54:27AM -0400, Alex Pilosov wrote:\n> On Wed, 27 Jun 2001, Karel Zak wrote:\n> \n> > On Wed, Jun 27, 2001 at 10:56:43AM +0200, Reinoud van Leeuwen wrote:\n> > > > \n> > > >> For the result from foo() you must somewhere define attributes\n> > > >> (names). \n> > > >> Where? In CREATE FUNCTION statement? Possible must be:\n> > > >> \n> > > >> select name1, name2 from foo() where name1 > 10;\n> > > > \n> > > > Yes, optimal would imho also be if the foo() somehow had access to the\n> > > > where restriction, so it could only produce output, that the\n> > > > higher level is interested in, very cool. This would be extremely \n> > > > useful for me. Very hard to implement, or even find an appropriate \n> > > > interface for though.\n> > > \n> > > You could easily implement it *in* the function foo IMHO. 
Since the \n> > > function does some black magic to create the result set to begin with, you \n> > > can change it to use parameters:\n> > > \n> > > select name1, name2 from foo(10, NULL, NULL) where name1 > 10;\n> > \n> > The function execution (data reading, etc) is almost last thing in the \n> > path-of-query. The parser, planner and others parts of PG must already \n> > knows enough information about a \"relation\" foo(). I don't know how much \n> > is intimate idea about this (Tom?), but somewhere in the pg_class / \n> > pg_attribute must be something about foo() result. (*IMHO* of course:) \n> > \n> > I can't imagine that foo() builts on-the-fly arbitrary attributes.\n> > \n> > By the way, what permissions? For select (view) we can do GRANT/REVOKE, \n> > and for select * from foo()? For standard tables it's in the \n> > pg_class.relacl. IMHO solution is add foo() to pg_class and mark here\n> > oid of function foo() from pg_proc, and attributes definition store\n> > to pg_attribute -- everything as for standard table. The source for \n> > this information must be from CREATE FUNCTION statement, like:\n> > \n> > CREATE FUNCTION foo RETURNS( name1 int, name2 text) ....;\n> > \n> > If the foo is in the pg_class you can do \"GRANT ... ON foo\";\n> \n> I'm planning to require return type to be a existing pg_type already. The\n\n Sure, nobody wants to works with something other than is in the \npg_type.\n\n> problem with your idea is question if you have two functions (for example)\n> foo(timestamp) and foo(int4), you must embed the types into relname, and\n> that's ugly.\n\n Good point. First, you needn't work with types, bacause function oid \nis unique for foo(timestamp) and foo(int4). You can work with function\noid. But this is not important. \n\n The important thing is that in the PostgreSQL is already resolved very \nsimular problem. We can define function with same names, unique must\nbe function_name + arguments_types. 
Why not add same thing for tables and \nallows to define as unique table_name + table_type (where table_type\nis 'standard table', 'foo() table' and in future may be some other \nspecial type of table). \n The parser detect type of table very easy -- 'foo' vs. 'foo()'. \n\n IMHO very important is how add new feature and use it together with\nold feature.\n \n> Once its possible to control permission to execute a function via GRANT,\n> it solves the grant problem for function-as-tablesource\n \n The permissions system was an example only. If you add \"foo()-tables\"\nas something what needs special usage and care you probably found more\nproblems. For example, what show command '\\d' in the psql client, how\nrelation show pg_access ..etc? \n\n\t\t\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Wed, 27 Jun 2001 14:09:38 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: AW: functions returning records" }, { "msg_contents": "On Wed, 27 Jun 2001, Karel Zak wrote:\n\n> Sure, nobody wants to works with something other than is in the \n> pg_type.\n> \n> > problem with your idea is question if you have two functions (for example)\n> > foo(timestamp) and foo(int4), you must embed the types into relname, and\n> > that's ugly.\n> \n> Good point. First, you needn't work with types, bacause function oid \n> is unique for foo(timestamp) and foo(int4). You can work with function\n> oid. But this is not important. \nThat's not nice. GRANT ALL ON FOO_231234 where 231234 is OID of foo(int4)?\new.\n\n> The important thing is that in the PostgreSQL is already resolved very \n> simular problem. We can define function with same names, unique must\n> be function_name + arguments_types. 
Why not add same thing for tables and \n> allows to define as unique table_name + table_type (where table_type\n> is 'standard table', 'foo() table' and in future may be some other \n> special type of table). \n> The parser detect type of table very easy -- 'foo' vs. 'foo()'. \nThis is a little bit better, but, results in following syntax:\nGRANT SELECT ON FOO(int4). I'm not sure if this really makes sense. Its\nnot a select permission, its an execute permission on a function, and\nshould be handled when/where execute permission is checked.\n\nIts not hard to implement (just change what parser thinks relation is),\nbut I'm sure will conflict with _something_.\n\n> IMHO very important is how add new feature and use it together with\n> old feature.\n> \n> > Once its possible to control permission to execute a function via GRANT,\n> > it solves the grant problem for function-as-tablesource\n> \n> The permissions system was an example only. If you add \"foo()-tables\"\n> as something what needs special usage and care you probably found more\n> problems. For example, what show command '\\d' in the psql client, how\n> relation show pg_access ..etc? \n\\df\n\nIts a function, not a relation. You can do a lot of things to a relation\n(such as define rules, triggers, constraints), which do not make any sense\nfor a function. The function may be used as a table-source, but it does\nnot make it a table. \n\nIf you can give me a better example than permissions system, I'll surely\nreconsider, but currently, I see no use for it...\n\n-alex\n\n", "msg_date": "Wed, 27 Jun 2001 08:42:07 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: AW: functions returning records" }, { "msg_contents": "On Wed, Jun 27, 2001 at 08:42:07AM -0400, Alex Pilosov wrote:\n> On Wed, 27 Jun 2001, Karel Zak wrote:\n\n> This is a little bit better, but, results in following syntax:\n> GRANT SELECT ON FOO(int4). I'm not sure if this really makes sense. 
Its\n> not a select permission, its an execute permission on a function, and\n\n And if we will have select permission for columns? \n\n> should be handled when/where execute permission is checked.\n> \n> Its not hard to implement (just change what parser thinks relation is),\n> but I'm sure will conflict with _something_.\n> \n> > IMHO very important is how add new feature and use it together with\n> > old feature.\n> > \n> > > Once its possible to control permission to execute a function via GRANT,\n> > > it solves the grant problem for function-as-tablesource\n> > \n> > The permissions system was an example only. If you add \"foo()-tables\"\n> > as something what needs special usage and care you probably found more\n> > problems. For example, what show command '\\d' in the psql client, how\n> > relation show pg_access ..etc? \n> \\df\n\n And list of attributes of foo()?\n \n> Its a function, not a relation. You can do a lot of things to a relation\n> (such as define rules, triggers, constraints), which do not make any sense\n\n Say with me: it isn't a function, its a function that returning records\nand we will use it in same possition as standard table only. The other\nusage donsn't exist for this.\n\n I want wring out from foo()-tables most what is possible (like \npermissions, rules, views). IMHO it's correct requirement :-)\n\n\t\t\t\t\tKarel\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Wed, 27 Jun 2001 15:06:27 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: AW: functions returning records" }, { "msg_contents": "On Wed, 27 Jun 2001, Karel Zak wrote:\n\n> On Wed, Jun 27, 2001 at 08:42:07AM -0400, Alex Pilosov wrote:\n> > On Wed, 27 Jun 2001, Karel Zak wrote:\n> \n> > This is a little bit better, but, results in following syntax:\n> > GRANT SELECT ON FOO(int4). I'm not sure if this really makes sense. 
Its\n> > not a select permission, its an execute permission on a function, and\n> \n> And if we will have select permission for columns? \nFunction returns a tuple. To me, it really makes no sense \"this user can\nsee this attribute of a tuple, but not the other one\". \n\n> > > \n> > > The permissions system was an example only. If you add \"foo()-tables\"\n> > > as something what needs special usage and care you probably found more\n> > > problems. For example, what show command '\\d' in the psql client, how\n> > > relation show pg_access ..etc? \n> > \\df\n> \n> And list of attributes of foo()?\nFoo returns type x. \\dt x.\n> \n> > Its a function, not a relation. You can do a lot of things to a relation\n> > (such as define rules, triggers, constraints), which do not make any sense\n> \n> Say with me: it isn't a function, its a function that returning records\n> and we will use it in same possition as standard table only. The other\n> usage donsn't exist for this.\n> \n> I want wring out from foo()-tables most what is possible (like \n> permissions, rules, views). IMHO it's correct requirement :-)\npermissions -- see above\nrules -- how? 'create rule blah on select from foo(int4) do instead select\nfrom baz()'? Sorry, that's just too strange for me :)\nviews -- why not. Create view bar as select * from foo() [1]\n\nActually, now that I think about it, your idea is essentially creation of\na view automatically when the function returning setof record is created. \nI don't think its a good idea. If you really want to pretend its a\ntable/view, then create such a view [1]. \n\n-alex\n\n\n", "msg_date": "Wed, 27 Jun 2001 09:30:04 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: AW: functions returning records" } ]
[ { "msg_contents": "\n> Let's consider some simple example:\n> \n> create table tst ( a int4, b int4, c int4);\n> \n> select * from tst where a=2 and c=0;\n> \n> we need something like:\n> \n> select * from tst where str and c=0;\n> \n> where str is a string resulting by call ourfunc(table.a, 2)\n> and looks like 'b=2*2 or b=(2-1)'\n> \n> i.e. instead of original select we need to execute rewritten select\n> \n> select * from tst where (b=2*2 or b=(2-1)) and c=0;\n\nCan you give us a real life example ? For me this is too abstract to \nunderstand.\n\nProblem with the rewriter is, that it currently has no access to the\nwhere restriction, and can thus only add restrictions without knowledge\nof the where clause at hand. Of course you would also need to create\na view and replace the standard \"on select\" rule, and do your selects\non the view (unless the rewriter is extended to be invoked by a certain \nwhere clause (here a=2) and the rewritten query does not contain this \nclause).\n\nAndreas\n", "msg_date": "Wed, 27 Jun 2001 11:01:38 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Non-trivial rewriting sql query " } ]
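Andreas' view route can be made concrete with the old trick of attaching the select rule by hand. This is a hedged sketch; whether the rule system copes with this for a dynamically rewritten qualification is exactly the open question above, and "_RETURN" is the internal name the rewriter expects for a view's ON SELECT rule:

```sql
-- Start from a plain table so its select rule is entirely ours; adding a
-- "_RETURN" rule is what turns a table into a view internally:
CREATE TABLE tst_v (a int4, b int4, c int4);
CREATE RULE "_RETURN" AS ON SELECT TO tst_v DO INSTEAD
    SELECT a, b, c FROM tst WHERE (b = 2*2 OR b = (2-1)) AND c = 0;

SELECT * FROM tst_v;   -- rewritten to the qualified select before planning
```

The limitation stated in the message still holds: the rule is fixed at creation time and cannot inspect the incoming WHERE clause.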
[ { "msg_contents": "\n> On Tue, Jun 26, 2001 at 10:18:37AM -0400, Tom Lane wrote:\n> > though I would note that anyone who is able to examine the\n> > contents of pg_shadow has *already* broken into your database\n> \n> note: the dbadmin may not be the system administrator, but the dbadmin,\n> by default (with plaintext) can scoop an entire list of \"useful\" passwords,\n> since many users (like it or not) use the same/similar passwords for\n> multiple accounts.\n\nI fully agree with this statement and think it is a valid concern.\nWould it help here to introduce some poor man's encryption that is \nreversible ? Then the admin would need to intentionally decrypt the \npg_shadow entry to see that plain password, and not see it if he just \naccidentally select'ed * from pg_shadow. \n\nIf an admin intentionally wants to crack a password he will always \nhave means to do that (e.g. send well chosen salts).\n\nAndreas\n", "msg_date": "Wed, 27 Jun 2001 11:04:38 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Re: Encrypting pg_shadow passwords" } ]
[ { "msg_contents": "\n> #! /bin/sh\n> pgbench -i -s 2 test\n> for i in 1 2 4 8 16 32 64 128\n> do\n> \tt=`expr 640 / $i`\n> \tpgbench -t $t -c $i test\n> \techo \"===== sync ======\"\n\nWith 7.1 you will probably want a checkpoint instead of sync here:\n\tpsql -c \"checkpoint;\" template1 ; sleep 10\nThe sync does not help, since the pages are not yet written. \n\n> \tsync;sync;sync;sleep 10\n> \techo \"===== sync done ======\"\n> done\n\nAndreas\n", "msg_date": "Wed, 27 Jun 2001 11:08:06 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Benchmarking" }, { "msg_contents": "> > #! /bin/sh\n> > pgbench -i -s 2 test\n> > for i in 1 2 4 8 16 32 64 128\n> > do\n> > \tt=`expr 640 / $i`\n> > \tpgbench -t $t -c $i test\n> > \techo \"===== sync ======\"\n> \n> With 7.1 you will probably want a checkpoint instead of sync here:\n> \tpsql -c \"checkpoint;\" template1 ; sleep 10\n> The sync does not help, since the pages are not yet written. \n\nGood point. I have been used it since 7.0 and forgot to update for\n7.1.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 27 Jun 2001 22:08:51 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: AW: Benchmarking" } ]
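The correction in this thread is worth spelling out: under 7.1's WAL, dirty pages sit in shared buffers until a checkpoint, so sync(2) has nothing to flush yet, and the between-run flush becomes a single SQL command:

```sql
-- Issued between pgbench runs, e.g.:  psql -c "CHECKPOINT;" template1
-- Forces dirty shared buffers out through the checkpoint machinery,
-- which sync(2) alone cannot reach.
CHECKPOINT;
```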
[ { "msg_contents": "\n> >> For the result from foo() you must somewhere define attributes (names). \n> >> Where? In CREATE FUNCTION statement? Possible must be:\n> >> \n> >> select name1, name2 from foo() where name1 > 10;\n> > \n> > Yes, optimal would imho also be if the foo() somehow had access to the\n> > where restriction, so it could only produce output, that the\n> > higher level is interested in, very cool. This would be extremely \n> > useful for me. Very hard to implement, or even find an appropriate \n> > interface for though.\n> \n> You could easily implement it *in* the function foo IMHO. Since the \n> function does some black magic to create the result set to begin with, you \n> can change it to use parameters:\n> \n> select name1, name2 from foo(10, NULL, NULL) where name1 > 10;\n\nYes, but this is only an answer to a limited scope of the problem at hand,\nand the user who types the select (or uses a warehouse tool) needs substantial \nadditional knowledge on how to efficiently construct such a query.\n\nIn my setup the function would be hidden by a view.\n\nAndreas\n", "msg_date": "Wed, 27 Jun 2001 11:17:57 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: functions returning records" }, { "msg_contents": "On Wed, 27 Jun 2001, Zeugswetter Andreas SB wrote:\n\n> \n> > >> For the result from foo() you must somewhere define attributes (names). \n> > >> Where? In CREATE FUNCTION statement? Possible must be:\n> > >> \n> > >> select name1, name2 from foo() where name1 > 10;\n> > > \n> > > Yes, optimal would imho also be if the foo() somehow had access to the\n> > > where restriction, so it could only produce output, that the\n> > > higher level is interested in, very cool. This would be extremely \n> > > useful for me. Very hard to implement, or even find an appropriate \n> > > interface for though.\n> > \n> > You could easily implement it *in* the function foo IMHO. 
Since the \n> > function does some black magic to create the result set to begin with, you \n> > can change it to use parameters:\n> > \n> > select name1, name2 from foo(10, NULL, NULL) where name1 > 10;\n> \n> Yes, but this is only an answer to a limited scope of the problem at hand,\n> and the user who types the select (or uses a warehouse tool) needs substantial \n> additional knowledge on how to efficiently construct such a query.\n> \n> In my setup the function would be hidden by a view.\nIts a different problem. Functions returning tables do just that, return\ntables, they won't care just what from that table you need. Exposing\npieces of optimizer to your function doesn't seem to me like a great\nidea...\n\n", "msg_date": "Wed, 27 Jun 2001 06:45:58 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: AW: AW: functions returning records" }, { "msg_contents": "Zeugswetter Andreas SB wrote:\n> \n> > >> For the result from foo() you must somewhere define attributes (names).\n> > >> Where? In CREATE FUNCTION statement? Possible must be:\n> > >>\n> > >> select name1, name2 from foo() where name1 > 10;\n> > >\n> > > Yes, optimal would imho also be if the foo() somehow had access to the\n> > > where restriction, so it could only produce output, that the\n> > > higher level is interested in, very cool. This would be extremely\n> > > useful for me. Very hard to implement, or even find an appropriate\n> > > interface for though.\n> >\n> > You could easily implement it *in* the function foo IMHO. 
Since the\n> > function does some black magic to create the result set to begin with, you\n> > can change it to use parameters:\n> >\n> > select name1, name2 from foo(10, NULL, NULL) where name1 > 10;\n> \n> Yes, but this is only an answer to a limited scope of the problem at hand,\n> and the user who types the select (or uses a warehouse tool) needs substantial\n> additional knowledge on how to efficiently construct such a query.\n> \n> In my setup the function would be hidden by a view.\n\nI have done a lot of playing around with this sort of thing to get my search\nengine working.\n\nWhile functions returning rows would be cool, and something I'd like to see. I\nthink the functionality, if not the syntax, you are looking for is already in\npostgres 7.1.x. Here is an example: (Actual code at bottom of message)\n\nselect n1, n2 from (select foo1(10) as n1, foo2() as n2) as fubar ;\n\nThe trick seems to be, to have the first function return a 'setof' results.\nHave the foo2() function return the next column of foo1()'s current result. 
\n\nHere is the output:\n\nmarkw=# select foo1(10) as n1, foo2() as n2;\n n1 | n2\n----+----\n 1 | 1\n 2 | 2\n 3 | 3\n 4 | 4\n 5 | 5\n 6 | 6\n 7 | 7\n 8 | 8\n 9 | 9\n 10 | 10\n(10 rows)\n\nOr you can create a synthetic table at query time, called fubar:\n\nmarkw=# select * from (select foo1(10) as n1, foo2() as n2) as fubar;\n n1 | n2\n----+----\n 1 | 1\n 2 | 2\n 3 | 3\n 4 | 4\n 5 | 5\n 6 | 6\n 7 | 7\n 8 | 8\n 9 | 9\n 10 | 10\n(10 rows)\n\n\nNow, I'm not sure if it is documented that the first function gets called\nfirst, or that next functions get called after each result of a result \"setof\"\nbut it seem logical that they should, and I would like to lobby that this\nbecomes an \"official\" behavior of the function manager and the execution\nprocessing.\n\n\n<<<<<<<<<<<<< code >>>>>>>>>>>>>>\n\n\nstatic int count;\nstatic int curr;\n \nDatum foo1(PG_FUNCTION_ARGS);\nDatum foo2(PG_FUNCTION_ARGS);\n \nDatum foo1(PG_FUNCTION_ARGS)\n{\n if(!fcinfo->resultinfo)\n {\n elog(ERROR, \"Not called with fcinfo\");\n PG_RETURN_NULL();\n }\n if(!count)\n {\n count = PG_GETARG_INT32(0);\n curr = 1;\n }\n else\n curr++;\n \n if(curr <= count)\n {\n ReturnSetInfo *rsi = (ReturnSetInfo *)fcinfo->resultinfo;\n rsi->isDone = ExprMultipleResult;\n PG_RETURN_INT32(curr);\n }\n else\n {\n ReturnSetInfo *rsi ;\n curr=0;\n count=0;\n rsi = (ReturnSetInfo *)fcinfo->resultinfo;\n rsi->isDone = ExprEndResult ;\n }\n PG_RETURN_NULL();\n}\n \nDatum foo2(PG_FUNCTION_ARGS)\n{\n if(curr <= count)\n PG_RETURN_INT32(curr);\n else\n PG_RETURN_INT32(42);\n}\n\nSQL:\n\ncreate function foo1( int4)\n returns setof int4\n as '/usr/local/lib/templ.so', 'foo1'\n language 'c' ;\n \ncreate function foo2()\n returns int4\n as '/usr/local/lib/templ.so', 'foo2'\n language 'c' ;\n", "msg_date": "Wed, 27 Jun 2001 07:22:48 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: functions returning records" }, { "msg_contents": "On Wed, 27 Jun 2001, mlw wrote:\n\n> While 
functions returning rows would be cool, and something I'd like\n> to see. I think the functionality, if not the syntax, you are looking\n> for is already in postgres 7.1.x. Here is an example: (Actual code at\n> bottom of message)\nYes, its already possible, but its extremely ugly and nontransparent. I\ndon't want to create 5 functions to return 5-row tuple, or have to deal\nwith C SPI to do that. It needs a minor cleanup which is all I'm trying to\ndo :)\n\n> select n1, n2 from (select foo1(10) as n1, foo2() as n2) as fubar ;\n> \n> The trick seems to be, to have the first function return a 'setof' results.\n> Have the foo2() function return the next column of foo1()'s current result. \n> \n> Here is the output:\n> \n> markw=# select foo1(10) as n1, foo2() as n2;\n> n1 | n2\n> ----+----\n> 1 | 1\n> 2 | 2\n> 3 | 3\n> 4 | 4\n> 5 | 5\n> 6 | 6\n> 7 | 7\n> 8 | 8\n> 9 | 9\n> 10 | 10\n> (10 rows)\n> \n> Or you can create a synthetic table at query time, called fubar:\n> \n> markw=# select * from (select foo1(10) as n1, foo2() as n2) as fubar;\n> n1 | n2\n> ----+----\n> 1 | 1\n> 2 | 2\n> 3 | 3\n> 4 | 4\n> 5 | 5\n> 6 | 6\n> 7 | 7\n> 8 | 8\n> 9 | 9\n> 10 | 10\n> (10 rows)\n> \n> \n> Now, I'm not sure if it is documented that the first function gets called\n> first, or that next functions get called after each result of a result \"setof\"\n> but it seem logical that they should, and I would like to lobby that this\n> becomes an \"official\" behavior of the function manager and the execution\n> processing.\n> \n> \n> <<<<<<<<<<<<< code >>>>>>>>>>>>>>\n> \n> \n> static int count;\n> static int curr;\n> \n> Datum foo1(PG_FUNCTION_ARGS);\n> Datum foo2(PG_FUNCTION_ARGS);\n> \n> Datum foo1(PG_FUNCTION_ARGS)\n> {\n> if(!fcinfo->resultinfo)\n> {\n> elog(ERROR, \"Not called with fcinfo\");\n> PG_RETURN_NULL();\n> }\n> if(!count)\n> {\n> count = PG_GETARG_INT32(0);\n> curr = 1;\n> }\n> else\n> curr++;\n> \n> if(curr <= count)\n> {\n> ReturnSetInfo *rsi = (ReturnSetInfo 
*)fcinfo->resultinfo;\n> rsi->isDone = ExprMultipleResult;\n> PG_RETURN_INT32(curr);\n> }\n> else\n> {\n> ReturnSetInfo *rsi ;\n> curr=0;\n> count=0;\n> rsi = (ReturnSetInfo *)fcinfo->resultinfo;\n> rsi->isDone = ExprEndResult ;\n> }\n> PG_RETURN_NULL();\n> }\n> \n> Datum foo2(PG_FUNCTION_ARGS)\n> {\n> if(curr <= count)\n> PG_RETURN_INT32(curr);\n> else\n> PG_RETURN_INT32(42);\n> }\n> \n> SQL:\n> \n> create function foo1( int4)\n> returns setof int4\n> as '/usr/local/lib/templ.so', 'foo1'\n> language 'c' ;\n> \n> create function foo2()\n> returns int4\n> as '/usr/local/lib/templ.so', 'foo2'\n> language 'c' ;\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n> \n\n", "msg_date": "Wed, 27 Jun 2001 08:46:36 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: Re: functions returning records" } ]
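Restated as one query, the trick in this thread works like this: foo1() drives the result set, foo2() is evaluated once per emitted row against the shared static state, and the subselect gives the pair a table-like face that an outer WHERE can then filter. The lock-step evaluation order is exactly the undocumented behaviour mlw asks to have blessed, so treat this as a sketch of observed behaviour, not a guarantee:

```sql
-- foo1/foo2 registered as in the message above (C functions sharing
-- static state):
SELECT n1, n2
FROM (SELECT foo1(10) AS n1, foo2() AS n2) AS fubar
WHERE n1 > 5;   -- filters the synthetic table after it is produced
```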
[ { "msg_contents": "\n> > In my setup the function would be hidden by a view.\n> Its a different problem. Functions returning tables do just that, return\n> tables, they won't care just what from that table you need. Exposing\n> pieces of optimizer to your function doesn't seem to me like a great\n> idea...\n\nOk, I think i need to go into a little more detail to explain.\nMy function needs to construct a table from the where condition.\nIf no where condition is present the result set would be near infinite\nin size (all possible permutations of all possible field values\ne.g. 2^32 for a table with one int column).\n\nThe function answers queries about rows that are not in the table,\nbut the result is based on rows that are in the table and computed\nby a neural net.\n\nAndreas\n", "msg_date": "Wed, 27 Jun 2001 13:03:46 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: AW: functions returning records" }, { "msg_contents": "On Wed, 27 Jun 2001, Zeugswetter Andreas SB wrote:\n\n> \n> > > In my setup the function would be hidden by a view.\n> > Its a different problem. Functions returning tables do just that, return\n> > tables, they won't care just what from that table you need. Exposing\n> > pieces of optimizer to your function doesn't seem to me like a great\n> > idea...\n> \n> Ok, I think i need to go into a little more detail to explain.\n> My function needs to construct a table from the where condition.\n> If no where condition is present the result set would be near infinite\n> in size (all possible permutations of all possible field values\n> e.g. 2^32 for a table with one int column).\n> \n> The function answers queries about rows that are not in the table,\n> but the result is based on rows that are in the table and computed\n> by a neural net.\n\nThis is pretty s[l]ick. 
Unfortunately, SQL doesn't know about\nlazy-evaluation for functions, and it's kind of a different problem from\none I would like to solve, but I agree, maybe some day, there could be a\n[documented] way for an SPI function to peek at the query conditions in\nthe context it was called from.\n\nIt is _probably_ already possible to do that by looking up the execution\nstack somehow, but it's definitely not a documented way, and you must be\nable to extract your information from a (Query *) node...\n\n-alex\n\n", "msg_date": "Wed, 27 Jun 2001 07:21:17 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: AW: AW: AW: functions returning records" } ]
[ { "msg_contents": "\n> > For the result from foo() you must somewhere define attributes (names). \n> > Where? In CREATE FUNCTION statement? Possible must be:\n> Function must be returning an existing reltype. I understand its a major\n> restriction, but I can't think of a better way.\n\nYup, that's how Informix does it. It has a \"create row type\" command, \nso you don't actually need a table.\n \n> \n> > select name1, name2 from foo() where name1 > 10;\n> > \n> > What returns foo()? ...the pointer to HeapTuple or something like this or\n> > pointer to some temp table?\n> Pointer to heaptuple. We can get to tupdesc for that tuple by looking up\n> its prorettype.\n\nBut the question is how you get the next row. Do you return a null terminated \narray of heaptuples ?\n\nImho to allow this to be efficient, there would need to be some mechanism, \nthat would allow the function to return the result in small blocks (e.g. each row)\n(similar to a heap access), else you would be limited to return \nvalues, that fit into memory, or fit on temporary disk storage, and do \nwork that might not even be required, because the client only fetches the \nfirst row.\n\nAndreas\n", "msg_date": "Wed, 27 Jun 2001 13:17:13 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: functions returning records" }, { "msg_contents": "On Wed, 27 Jun 2001, Zeugswetter Andreas SB wrote:\n\n> > > select name1, name2 from foo() where name1 > 10;\n> > > \n> > > What returns foo()? ...the pointer to HeapTuple or something like this or\n> > > pointer to some temp table?\n> > Pointer to heaptuple. We can get to tupdesc for that tuple by looking up\n> > its prorettype.\n> \n> But the question is how you get the next row. Do you return a null terminated \n> array of heaptuples ?\n> \n> Imho to allow this to be efficient, there would need to be some mechanism, \n> that would allow the function to return the result in small blocks (e.g. 
each row)\n> (similar to a heap access), else you would be limited to return \n> values, that fit into memory, or fit on temporary disk storage, and do \n> work that might not even be required, because the client only fetches the \n> first row.\nI haven't thought of this yet, but it's a good point. I think I'll find out\nwhat's involved when I write code for it. :)\n\n-alex\n\n", "msg_date": "Wed, 27 Jun 2001 08:30:42 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: AW: functions returning records" } ]
[ { "msg_contents": "hello,\n\ntrying to migrate production servers form 7.0.2 to 7.1.1 now I need to\ncompile an function written in c and when compiling using following\nincludes:\n\n#include <postgres.h>\n#include <utils/builtins.h>\n#include <utils/palloc.h>\n#include <string.h>\n\nI get following compile errors:\n\nc:82: warning: passing arg 1 of `textout' from incompatible pointer type\n\nline 82: src = textout( src_string );\n\nc:238: warning: passing arg 1 of `textin' from incompatible pointer type\n\nline 238: return( (text *)textin( ret ) );\n\nfunction I'm trying to port is this one:\nhttp://www.ca.postgresql.org/mhonarc/pgsql-sql/1998-06/msg00119.html\n\n\nany points to convert textout and textin to 7.1 ?\n\nthanks from barcelona!\n\njaume.\n", "msg_date": "Wed, 27 Jun 2001 15:42:14 +0200", "msg_from": "Jaume Teixi <teixi@6tems.com>", "msg_from_op": true, "msg_subject": "postgresql 7.1.1 and textout and textin" }, { "msg_contents": "> trying to migrate production servers form 7.0.2 to 7.1.1 now I need to\n> compile an function written in c\n...\n> I get following compile errors:\n> c:82: warning: passing arg 1 of `textout' from incompatible pointer type\n> line 82: src = textout( src_string );\n...\n> any points to convert textout and textin to 7.1 ?\n\nLook in src/backend/utils/adt/ for examples of functions called from\nwithin other functions. You will want to upgrade to the new calling\nconvention for functions, and will need to use some macros and \"direct\ncall\" wrappers to accomplish this.\n\nIt is easy, you just need to match up an example with what you want. 
Let\nus know if you don't find one and we can do some searching to suggest a\nspecific example...\n\n - Thomas\n", "msg_date": "Wed, 27 Jun 2001 14:03:21 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: postgresql 7.1.1 and textout and textin" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> any points to convert textout and textin to 7.1 ?\n\n> Look in src/backend/utils/adt/ for examples of functions called from\n> within other functions. You will want to upgrade to the new calling\n> convention for functions, and will need to use some macros and \"direct\n> call\" wrappers to accomplish this.\n\nThere are also some useful examples in contrib/. Several contrib\nmodules have macros like\n\n#define _textin(str) DirectFunctionCall1(textin, CStringGetDatum(str))\n\n#define _textout(str) DatumGetPointer(DirectFunctionCall1(textout, PointerGetDatum(str)))\n\nwhich work pretty much the same as the old textin() and textout()\nfunctions did.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Jun 2001 11:04:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: postgresql 7.1.1 and textout and textin " } ]
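Putting the two replies above together, a minimal sketch of the ported function (hypothetical function name; the _textin/_textout macros are the contrib/ ones Tom Lane quotes) might look like this under the 7.1 "version 1" fmgr calling convention:

```c
/* Hypothetical sketch of porting an old-style text-mangling function
 * to the PostgreSQL 7.1 fmgr (V1) calling convention. */
#include "postgres.h"
#include "fmgr.h"
#include "utils/builtins.h"

#define _textin(str)  DirectFunctionCall1(textin, CStringGetDatum(str))
#define _textout(txt) DatumGetPointer(DirectFunctionCall1(textout, PointerGetDatum(txt)))

PG_FUNCTION_INFO_V1(my_text_func);

Datum
my_text_func(PG_FUNCTION_ARGS)
{
    text *src_string = PG_GETARG_TEXT_P(0);    /* detoasted input */
    char *src = (char *) _textout(src_string); /* text -> C string */

    /* ... transform the palloc'd C string in place here ... */

    PG_RETURN_DATUM(_textin(src));             /* C string -> text */
}
```

The corresponding CREATE FUNCTION statement stays the same as before; only the C side changes.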
[ { "msg_contents": "Hi,\n\nOn UnixWare 711, I've noticed that when using openssl, neither postmaster\nnor psql would start without LD_LIBRARY_PATH=/usr/local/ssl.\n\nNow Makefile uses the -R option to force the load from\n/usr/local/pgsql/lib (or whatever). According to the unixware 711b docs,\none can specifie multiple libraries concatening them with ':';\n\nTweaking the makefile to do this\n(-R/usr/local/pgsql/lib:/usr/local/ssl/lib) did not help and the linker\nstill doesn't find libssl.so and libcrypto.so if LD_LIBRARY_PATH is not\nset.\n\nEven forcing LD_RUNT_PATH before make didn't help.\n\nAny ideas???\n\nRegards,\n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Wed, 27 Jun 2001 17:07:21 +0200", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": true, "msg_subject": "Library searching problems" } ]
[ { "msg_contents": "Phillip Jansen <pfj@ucs.co.za> writes:\n> <!doctype html public \"-//w3c//dtd html 4.0 transitional//en\">\n> <html>\n> Hi\n> <p>I trying to compare oracle vs postgresql , so I&nbsp;have a table containing\n> about 7.6 mil records but every time I&nbsp;try to do a simple select on\n> the table it throws a core dump and segmentation fault. At first it returns\n> \"backend returns D&nbsp; before T\" and then after a while it crashes.\n> <br>I&nbsp;did run VACUUM ANALYZE and rebuild my indexes after I&nbsp;loaded\n> the data.\n> <p>Any ideas??\n> <p>Phillip</html>\n\n(1) Please don't send HTML-ified mail to the lists.\n\n(2) You're running out of memory for the SELECT result on the client\nside. libpq is not presently very graceful about that, unfortunately.\nBut did you really want to retrieve all 7.6 million rows at once?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Jun 2001 11:10:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Data " }, { "msg_contents": "\nHi\nI trying to compare oracle vs postgresql , so I have a table containing\nabout 7.6 mil records but every time I try to do a simple select on\nthe table it throws a core dump and segmentation fault. At first it returns\n\"backend returns D  before T\" and then after a while it crashes.\nI did run VACUUM ANALYZE and rebuild my indexes after I loaded\nthe data.\nAny ideas??\nPhillip\n", "msg_date": "Wed, 27 Jun 2001 22:09:40 +0200", "msg_from": "Phillip Jansen <pfj@ucs.co.za>", "msg_from_op": false, "msg_subject": "Data" } ]
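One standard way around the client-side memory blowup Tom describes (a sketch, with hypothetical table and batch size) is to pull the result through a cursor, so libpq only ever has to buffer one batch of rows at a time:

```sql
-- Hypothetical table name; fetch the result incrementally instead of
-- materializing all 7.6 million rows in the client.
BEGIN;
DECLARE bigscan CURSOR FOR SELECT * FROM big_table;
FETCH 1000 FROM bigscan;   -- repeat until no rows come back
CLOSE bigscan;
COMMIT;
```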
[ { "msg_contents": "I propose that initdb should do\n\tREVOKE ALL on pg_largeobject FROM public\nsame as it does already for pg_shadow and pg_statistic. This would\nprevent non-superusers from examining or modifying large objects\nexcept through the LO operations.\n\nThis is only security through obscurity, of course, since any user can\nstill read or modify another user's LO if he can guess its OID. But\nsecurity through obscurity is better than no security at all. (Perhaps\nsomeday the LO operations will be enhanced to perform access-rights\nchecks. I have no interest in doing that work right now, however.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Jun 2001 12:27:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "pg_largeobject is a security hole" }, { "msg_contents": "At 12:27 27/06/01 -0400, Tom Lane wrote:\n>I propose that initdb should do\n>\tREVOKE ALL on pg_largeobject FROM public\n>\n\nMay have an issue with PG_DUMP, which does a 'select oid from\npg_largeobject', I think.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 28 Jun 2001 09:17:42 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_largeobject is a security hole" }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At 12:27 27/06/01 -0400, Tom Lane wrote:\n>> I propose that initdb should do\n>> REVOKE ALL on pg_largeobject FROM public\n\n> May have an issue with PG_DUMP, which does a 'select oid from\n> pg_largeobject', I think.\n\nHmm. [sound of grepping] So does psql's \\lo_list command. That's\nannoying ... 
the list of large object OIDs is *exactly* what you'd want\nto hide from the unwashed masses. Oh well, I'll leave bad enough alone\nfor now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Jun 2001 19:49:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_largeobject is a security hole " }, { "msg_contents": "At 19:49 27/06/01 -0400, Tom Lane wrote:\n>\n>Hmm. [sound of grepping] So does psql's \\lo_list command. That's\n>annoying ... the list of large object OIDs is *exactly* what you'd want\n>to hide from the unwashed masses. Oh well, I'll leave bad enough alone\n>for now.\n>\n\nI suspect this would be cleaned up when/if we implement LOB LOCATORs: they\nhave a limited lifetime, should be the only way to retrieve LOBs, and could\nhide the underlying OID (which would never be used by external interfaces).\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 28 Jun 2001 09:54:14 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_largeobject is a security hole " } ]
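For reference, the lockdown proposed at the top of the thread is a one-liner; per the follow-ups it was shelved because both pg_dump and psql's \lo_list SELECT from pg_largeobject directly, so revoking access would break them for non-superusers:

```sql
-- Proposed in the thread above; note that pg_dump and psql's
-- \lo_list read pg_largeobject directly, so ordinary users would
-- lose those facilities.
REVOKE ALL ON pg_largeobject FROM PUBLIC;
```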
[ { "msg_contents": "Please apply attached patch to current CVS.\nIt fixes problem if query contains field (with GiST index) more than 1 time.\nCorrensponding patch for 7.1.2 is available from\nhttp://www.sai.msu.su/~megera/postgres/gist\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83", "msg_date": "Thu, 28 Jun 2001 15:22:03 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "Patch for multi-key GiST (current CVS)" }, { "msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> Please apply attached patch to current CVS.\n> It fixes problem if query contains field (with GiST index) more than 1 time.\n\nApplied, thanks.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Jun 2001 12:00:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch for multi-key GiST (current CVS) " }, { "msg_contents": "\nPatch applied by Tom. Thanks.\n\n\n> Please apply attached patch to current CVS.\n> It fixes problem if query contains field (with GiST index) more than 1 time.\n> Corrensponding patch for 7.1.2 is available from\n> http://www.sai.msu.su/~megera/postgres/gist\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 28 Jun 2001 12:53:12 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch for multi-key GiST (current CVS)" } ]
[ { "msg_contents": "Hi,\n\n\n I need to know if PostgreSQL has a utility like the one in Oracle Release\n8.1.6, which contains a new package called \n STATSPACK that improves on the UTLBSTAT/UTLESTAT process (or like the\nUTLBSTAT/UTLESTAT).\n This information is very important since we are testing the PostgreSQL\ndatabase \n and I do not know how to monitor/examine the database.\n Thanks,\n Ilan\n_________________________________________\nILAN FAIT\nTel: 972-9-9519133 Ex.247 iWeb Technologies\nFax: 972-9-9519134 91 Medinat Ha'Yehudim St.\n Herzliya 46120 IL\nmailto:ilan@iweb.com www.iweb.com", "msg_date": "Thu, 28 Jun 2001 17:45:49 +0300", "msg_from": "Ilan Fait <ilan@iweb.com>", "msg_from_op": true, "msg_subject": "examine the PostgreSQL database" } ]
[ { "msg_contents": "Hi,\n \n I am testing the PostgreSQL database and I would like to know if there are any\nbenchmarking tools\n to examine the database that can give a report about the database\nactivity.\n In Oracle you have the Utlbstat/utlestat and statspack (available from\n8.1.6): \n These two bundled with the Oracle Server generate complete reports of the\ndatabase activity. \n The new STATSPACK utility bundled with Oracle 8.1.6 and up allows more\nflexibility in \n managing statistical snapshots. \n Thanks,\n Ilan\n\n_________________________________________\nILAN FAIT\nTel: 972-9-9519133 Ex.247 iWeb Technologies\nFax: 972-9-9519134 91 Medinat Ha'Yehudim St.\n Herzliya 46120 IL\nmailto:ilan@iweb.com www.iweb.com", "msg_date": "Thu, 28 Jun 2001 18:11:45 +0300", "msg_from": "Ilan Fait <ilan@iweb.com>", "msg_from_op": true, "msg_subject": "how to monitor/examine the database" } ]
[ { "msg_contents": "Hello!\n\nI found a funny bug in postgres with c functions. (or feature??)\nLet's say we have got an function like this:\nCREATE FUNCTION hupper(text)\nRETURNS text\nAS '/fun.so'\nLANGUAGE 'c';\n\nand fun.c:\n#include <postgresql/postgres.h>\n#include <postgresql/utils/elog.h>\n#include <postgresql/libpq/libpq-fs.h>\n\ntext *hupper (text *a) {\n int hossz,i;\n\n hossz=a->vl_len;\n for (i=0;i<hossz;i++)\n {\n char ch;\n ch=a->vl_dat[i];\n if ((ch>=97)&(ch<=122)) ch=ch-32;\n else if (ch=='�') ch='�';\n else if (ch=='�') ch='�';\n else if (ch=='�') ch='�';\n else if (ch=='�') ch='�';\n else if (ch=='�') ch='�';\n else if (ch=='�') ch='�';\n else if (ch=='�') ch='�';\n else if (ch=='�') ch='�';\n else if (ch=='�') ch='�';\n a->vl_dat[i]=ch;\n }\n\n return a;\n}\n\nWe use this to make hungarian upper (=Hupper).\nAnd two select:\ngergo=> select mire from mamapenz;\n mire\n-------\n betet\n ebed\n ebed\n ebed\n ebed\n ebed\n(6 rows)\n\ngergo=> select hupper(mire) from mamapenz;\n hupper\n--------\n BETET\n EBED\n EBED\n EBED\n EBED\n EBED\n(6 rows)\n\nthis is good, and now:\ngergo=> select mire from mamapenz;\n ^^^^^^^^^^^^^^^^^^^^^\n mire\n-------\n BETET\n EBED\n EBED\n EBED\n EBED\n EBED\n(6 rows)\n\nAfter once hupper run on the table it will be upper case even I don't use hupper.\nIt can be fixed with a postgres restart or with 10-20 minutes of waiting.\n\nIf this is documented, sorry (but please point out where).\n\nThanks,\nRISKO Gergely\n\n", "msg_date": "Thu, 28 Jun 2001 21:39:18 +0200", "msg_from": "RISKO Gergely <risko@atom.hu>", "msg_from_op": true, "msg_subject": "funny (cache (?)) bug in postgres (7.x tested)" }, { "msg_contents": "RISKO Gergely <risko@atom.hu> writes:\n\n> text *hupper (text *a) {\n> int hossz,i;\n> \n> hossz=a->vl_len;\n> for (i=0;i<hossz;i++)\n> {\n> char ch;\n> ch=a->vl_dat[i];\n> if ((ch>=97)&(ch<=122)) ch=ch-32;\n> else if (ch=='�') ch='�';\n> else if (ch=='�') ch='�';\n> else if (ch=='�') ch='�';\n> else if 
(ch=='�') ch='�';\n> else if (ch=='�') ch='�';\n> else if (ch=='�') ch='�';\n> else if (ch=='�') ch='�';\n> else if (ch=='�') ch='�';\n> else if (ch=='�') ch='�';\n> a->vl_dat[i]=ch;\n> }\n> \n> return a;\n> }\n\nI think you need to allocate a new TEXT datum and return it. You're\nmodifying the cached data in place, which is a no-no AFAIK.\n\n-Doug\n-- \nThe rain man gave me two cures; he said jump right in,\nThe first was Texas medicine--the second was just railroad gin,\nAnd like a fool I mixed them, and it strangled up my mind,\nNow people just get uglier, and I got no sense of time... --Dylan\n", "msg_date": "03 Jul 2001 16:01:30 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: funny (cache (?)) bug in postgres (7.x tested)" }, { "msg_contents": "On Thu, 28 Jun 2001, RISKO Gergely wrote:\n\n> text *hupper (text *a) {\n> int hossz,i;\n> \n> hossz=a->vl_len;\n> for (i=0;i<hossz;i++)\n> {\n> char ch;\n> ch=a->vl_dat[i];\n> if ((ch>=97)&(ch<=122)) ch=ch-32;\n> else if (ch=='�') ch='�';\n> else if (ch=='�') ch='�';\n> else if (ch=='�') ch='�';\n> else if (ch=='�') ch='�';\n> else if (ch=='�') ch='�';\n> else if (ch=='�') ch='�';\n> else if (ch=='�') ch='�';\n> else if (ch=='�') ch='�';\n> else if (ch=='�') ch='�';\n> a->vl_dat[i]=ch;\n> }\n> \n> return a;\n> }\n(Rest snipped).\n\nYou are not supposed to write directly to the argument you are given. You\nmust construct a new text value (by allocating space via palloc) and\ncopy your string there. By overwriting existing values, you potentially\ncorrupt postgres' cache, resulting in your behaviour.\n\n", "msg_date": "Tue, 3 Jul 2001 16:26:38 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: funny (cache (?)) bug in postgres (7.x tested)" }, { "msg_contents": "RISKO Gergely wrote:\n[Charset iso-8859-1,iso- unsupported, skipping...]\n\n> Hello!\n>\n> I found a funny bug in postgres with c functions. 
(or feature??)\n> Let's say we have got an function like this:\n> CREATE FUNCTION hupper(text)\n> RETURNS text\n> AS '/fun.so'\n> LANGUAGE 'c';\n\n This is actually neither a feature, nor a bug in Postgres.\n Your function is violating some coding rules.\n\n The text argument handed into is actually residing inside the\n shared buffer cache. So you're not supposed to change it!\n\n Your function isn't safe for compressed or toasted values. If\n you accidentially have big data in that column, it might\n crash the backend completely.\n\n A function returning text has to allocate the return value\n with palloc().\n\n Look into utils/adt/*.c for examples how to deal correctly\n with text attributes.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Tue, 3 Jul 2001 17:03:57 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: funny (cache (?)) bug in postgres (7.x tested)" }, { "msg_contents": "RISKO Gergely <risko@atom.hu> writes:\n> I found a funny bug in postgres with c functions. (or feature??)\n\nScribbling on your input datum is verboten. palloc a new value\nto return.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Jul 2001 17:14:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: funny (cache (?)) bug in postgres (7.x tested) " } ]
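Pulling the replies above together, a sketch of the corrected function: it pallocs a new text value and leaves the input (which may live in the shared buffer cache) untouched. Note the original also looped over vl_len bytes, which overruns the data by the 4-byte varlena header; the sketch fixes that too. The accented-character branches are elided here:

```c
/* Sketch of the fix described in the replies: never scribble on the
 * input datum; allocate a copy with palloc() and return it.  As Jan
 * notes, a TOASTed/compressed input would additionally need
 * detoasting, which the old-style calling convention makes awkward. */
text *
hupper(text *a)
{
    int   len = a->vl_len;               /* includes 4-byte header */
    text *result = (text *) palloc(len);
    int   i;

    result->vl_len = len;
    for (i = 0; i < len - VARHDRSZ; i++) /* don't overrun the data */
    {
        char ch = a->vl_dat[i];

        if (ch >= 'a' && ch <= 'z')
            ch = ch - 32;
        /* ... accented-character mappings as in the original ... */
        result->vl_dat[i] = ch;
    }
    return result;
}
```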
[ { "msg_contents": "\tIs it possible to specify some order that multiple triggers should\ntrigger? We have a few triggers that are kicked off after an insert, and\nthey need to be done in a specific order.\n\nThanks for any info,\nMike\n\n", "msg_date": "Thu, 28 Jun 2001 18:16:58 -0700", "msg_from": "Mike Cianflone <mcianflone@littlefeet-inc.com>", "msg_from_op": true, "msg_subject": "Order of triggers" } ]
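The question above goes unanswered in this archive. A hedged note: in releases of this era the firing order of multiple triggers on the same event was effectively their creation order, while later releases (7.3 onward) fire them in alphabetical order by trigger name, so encoding the intended order in the names is the portable idiom. A hypothetical example:

```sql
-- Hypothetical table and functions; the numeric prefix makes the
-- intended order explicit and matches the alphabetical firing order
-- of later releases.  Create them in this sequence as well.
CREATE TRIGGER trg_10_audit AFTER INSERT ON mytab
    FOR EACH ROW EXECUTE PROCEDURE audit_fn();
CREATE TRIGGER trg_20_notify AFTER INSERT ON mytab
    FOR EACH ROW EXECUTE PROCEDURE notify_fn();
```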
[ { "msg_contents": "Hi,\n\n OK, all the high-frequently called functions of the pgstat\n stuff are macros (will commit that later today).\n\n Now about the per database configuration. The thing is that I\n don't know if it is worth doing it too detailed. #ifdef'ing\n out the functionality I have the following wallclock runtimes\n for the regression test on a 500MHz P-III:\n\n Backend does nothing: 1:03\n Backend sends per table\n scan and block IO: 1:05\n Backend sends per table\n info plus querystring: 1:10\n\n If somebody wants to see an applications querystring (at\n least the first 512 bytes) just in case something goes wrong\n and the client hangs, he'd have to run querystring reporting\n all the time either way.\n\n So I can see value in a per database default in pg_database\n plus the ability to switch it on/off via statement to analyze\n single commands.\n\n What do others think?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Fri, 29 Jun 2001 09:20:47 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": true, "msg_subject": "Configuration of statistical views" }, { "msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> So I can see value in a per database default in pg_database\n> plus the ability to switch it on/off via statement to analyze\n> single commands.\n\nDo you even need a per-database default? Why not an installation-wide\ndefault in postgresql.conf plus on/off commands? The great advantage\nof doing it that way is that it's simply a GUC variable or three, and\nyou don't need to expend any work on developing infrastructure. 
So\nI'd recommend doing it that way to get started, even if you later decide\nthat something more complex is warranted.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Jun 2001 10:49:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Configuration of statistical views " }, { "msg_contents": "Tom Lane wrote:\n> Jan Wieck <JanWieck@Yahoo.com> writes:\n> > So I can see value in a per database default in pg_database\n> > plus the ability to switch it on/off via statement to analyze\n> > single commands.\n>\n> Do you even need a per-database default? Why not an installation-wide\n> default in postgresql.conf plus on/off commands? The great advantage\n> of doing it that way is that it's simply a GUC variable or three, and\n> you don't need to expend any work on developing infrastructure. So\n> I'd recommend doing it that way to get started, even if you later decide\n> that something more complex is warranted.\n\n Personally, I can live with no options at all, because I\n think that amount of performance loss is worth it beeing able\n to look at a query in case. You know, if it's a config option\n it tends to allways being off when the errors happen.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Fri, 29 Jun 2001 11:51:11 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": true, "msg_subject": "Re: Configuration of statistical views" }, { "msg_contents": "> If somebody wants to see an applications querystring (at\n> least the first 512 bytes) just in case something goes wrong\n> and the client hangs, he'd have to run querystring reporting\n> all the time either way.\n\nAgreed. That should be on all the time.\n\n> So I can see value in a per database default in pg_database\n> plus the ability to switch it on/off via statement to analyze\n> single commands.\n\nSounds fine. You may be able to just to a GUC/SET option and not do a\nper-database field. GUC doesn't do per-database and having a database\nflag and GUC would be confusing. Let's roll with just GUC/SET and see\nhow it goes.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 29 Jun 2001 11:53:15 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Configuration of statistical views" }, { "msg_contents": "Bruce Momjian wrote:\n> > If somebody wants to see an applications querystring (at\n> > least the first 512 bytes) just in case something goes wrong\n> > and the client hangs, he'd have to run querystring reporting\n> > all the time either way.\n>\n> Agreed. That should be on all the time.\n>\n> > So I can see value in a per database default in pg_database\n> > plus the ability to switch it on/off via statement to analyze\n> > single commands.\n>\n> Sounds fine. 
You may be able to just to a GUC/SET option and not do a\n> per-database field. GUC doesn't do per-database and having a database\n> flag and GUC would be confusing. Let's roll with just GUC/SET and see\n> how it goes.\n\n No per backend on/off statement - is that what you mean?\n That'd be easiest to get started.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Fri, 29 Jun 2001 12:10:09 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": true, "msg_subject": "Re: Configuration of statistical views" }, { "msg_contents": "> Bruce Momjian wrote:\n> > > If somebody wants to see an applications querystring (at\n> > > least the first 512 bytes) just in case something goes wrong\n> > > and the client hangs, he'd have to run querystring reporting\n> > > all the time either way.\n> >\n> > Agreed. That should be on all the time.\n> >\n> > > So I can see value in a per database default in pg_database\n> > > plus the ability to switch it on/off via statement to analyze\n> > > single commands.\n> >\n> > Sounds fine. You may be able to just to a GUC/SET option and not do a\n> > per-database field. GUC doesn't do per-database and having a database\n> > flag and GUC would be confusing. Let's roll with just GUC/SET and see\n> > how it goes.\n> \n> No per backend on/off statement - is that what you mean?\n> That'd be easiest to get started.\n\nGUC as the default, and SET for per-backend. 
I am liking GUC more and\nmore.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 29 Jun 2001 12:21:05 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Configuration of statistical views" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> If somebody wants to see an applications querystring (at\n>> least the first 512 bytes) just in case something goes wrong\n>> and the client hangs, he'd have to run querystring reporting\n>> all the time either way.\n\n> Agreed. That should be on all the time.\n\n\"On by default\", sure. \"On all the time\", I'm not sold on.\n\nBut anyway, we seem to be converging on the conclusion that setting\nup a GUC variable will do fine, at least until there is definite\nevidence that it won't.\n\nProbably there need to be at least 2 variables: (a) a PGC_POSTMASTER\nvariable that controls whether the stats collector is even started,\nand (b) PGC_USERSET variable(s) that enable a particular backend to\nsend particular kinds of data to the collector. Note that, for example,\nbackend start/stop events probably need to be reported whenever the\npostmaster variable is set, even if all the USERSET variables are off.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Jun 2001 13:58:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Configuration of statistical views " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> If somebody wants to see an applications querystring (at\n> >> least the first 512 bytes) just in case something goes wrong\n> >> and the client hangs, he'd have to run querystring reporting\n> >> all the time either way.\n> \n> > Agreed. That should be on all the time.\n> \n> \"On by default\", sure. 
\"On all the time\", I'm not sold on.\n\nSo we will have GUC for stats and query string. Fine. Set query\nstring on by default and stats off by default. Good.\n\n> \n> But anyway, we seem to be converging on the conclusion that setting\n> up a GUC variable will do fine, at least until there is definite\n> evidence that it won't.\n> \n> Probably there need to be at least 2 variables: (a) a PGC_POSTMASTER\n> variable that controls whether the stats collector is even started,\n> and (b) PGC_USERSET variable(s) that enable a particular backend to\n> send particular kinds of data to the collector. Note that, for example,\n> backend start/stop events probably need to be reported whenever the\n> postmaster variable is set, even if all the USERSET variables are off.\n\nAnd another one to control whether the daemon is even running. OK.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 29 Jun 2001 14:05:59 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Configuration of statistical views" }, { "msg_contents": "Jan Wieck <JanWieck@yahoo.com> writes:\n>> backend start/stop events probably need to be reported whenever the\n>> postmaster variable is set, even if all the USERSET variables are off.\n\n> I don't consider backend start/stop messages to be critical,\n> although we get some complaints already about connection\n> slowness - well, this is somewhere in the microseconds.
And\n> it'd be a little messy because the start message is sent by\n> the backend while the stop message is sent by the postmaster.\n> So where exactly to put it?\n\nThis is exactly why I think they should be sent unconditionally.\nIt doesn't matter if a particular backend turns its reporting on and\noff while it runs (I hope), but I'd think the stats collector would\nget confused if it saw, say, a start and no stop message for a\nparticular backend.\n\nOTOH, given that we need to treat the transmission channel as\nunreliable, it would be a bad idea anyway if the stats collector got\nseriously confused by not seeing the start or the stop message.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Jun 2001 14:17:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Configuration of statistical views " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> If somebody wants to see an applications querystring (at\n> >> least the first 512 bytes) just in case something goes wrong\n> >> and the client hangs, he'd have to run querystring reporting\n> >> all the time either way.\n>\n> > Agreed. That should be on all the time.\n>\n> \"On by default\", sure. \"On all the time\", I'm not sold on.\n>\n> But anyway, we seem to be converging on the conclusion that setting\n> up a GUC variable will do fine, at least until there is definite\n> evidence that it won't.\n\n Up to now, only three fulltime PG-developers spoke up. Maybe\n someone else likes to comment on it too and hasn't had the\n time yet. Let's be a little patient.\n\n> Probably there need to be at least 2 variables: (a) a PGC_POSTMASTER\n> variable that controls whether the stats collector is even started,\n> and (b) PGC_USERSET variable(s) that enable a particular backend to\n> send particular kinds of data to the collector.
Note that, for example,\n> backend start/stop events probably need to be reported whenever the\n> postmaster variable is set, even if all the USERSET variables are off.\n\n I don't consider backend start/stop messages to be critical,\n although we get some complaints already about connection\n slowness - well, this is somewhere in the microseconds. And\n it'd be a little messy because the start message is sent by\n the backend while the stop message is sent by the postmaster.\n So where exactly to put it?\n\n\nJan\n\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Fri, 29 Jun 2001 14:19:54 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": true, "msg_subject": "Re: Configuration of statistical views" }, { "msg_contents": "Bruce Momjian wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > >> If somebody wants to see an applications querystring (at\n> > >> least the first 512 bytes) just in case something goes wrong\n> > >> and the client hangs, he'd have to run querystring reporting\n> > >> all the time either way.\n> >\n> > > Agreed. That should be on all the time.\n> >\n> > \"On by default\", sure. \"On all the time\", I'm not sold on.\n>\n> So we will have GUC for stats and query string. Fine. Set query\n> string on by default and stats off by default.
Good.\n>\n> >\n> > But anyway, we seem to be converging on the conclusion that setting\n> > up a GUC variable will do fine, at least until there is definite\n> > evidence that it won't.\n> >\n> > Probably there need to be at least 2 variables: (a) a PGC_POSTMASTER\n> > variable that controls whether the stats collector is even started,\n> > and (b) PGC_USERSET variable(s) that enable a particular backend to\n> > send particular kinds of data to the collector. Note that, for example,\n> > backend start/stop events probably need to be reported whenever the\n> > postmaster variable is set, even if all the USERSET variables are off.\n>\n> And another one to control whether the daemon is even running. OK.\n\n Forcing the other two to stay off if no daemon present.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Fri, 29 Jun 2001 14:36:13 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": true, "msg_subject": "Re: Configuration of statistical views" }, { "msg_contents": "Tom Lane wrote:\n> Jan Wieck <JanWieck@yahoo.com> writes:\n> >> backend start/stop events probably need to be reported whenever the\n> >> postmaster variable is set, even if all the USERSET variables are off.\n>\n> > I don't consider backend start/stop messages to be critical,\n> > although we get some complaints already about connection\n> > slowness - well, this is somewhere in the microseconds.
And\n> > it'd be a little messy because the start message is sent by\n> > the backend while the stop message is sent by the postmaster.\n> > So where exactly to put it?\n>\n> This is exactly why I think they should be sent unconditionally.\n> It doesn't matter if a particular backend turns its reporting on and\n> off while it runs (I hope), but I'd think the stats collector would\n> get confused if it saw, say, a start and no stop message for a\n> particular backend.\n>\n> OTOH, given that we need to treat the transmission channel as\n> unreliable, it would be a bad idea anyway if the stats collector got\n> seriously confused by not seeing the start or the stop message.\n\n Hmmm - that's a good point. Right now, the collector is\n totally lax on all of that. Missing start packet - no\n problem, we create the backend slot on the fly. Missing stats\n packet - well, the counters aren't 100% correct, so be it.\n But OTOH it causes him to remember the dead backend for\n postmaster lifetime in case of a missing stop. Except a PID\n wraparound causes a fix someday. Maybe it should\n periodically (every 10 minutes or even longer) check with a\n zero-kill if all the backends it knows about are really\n alive.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Fri, 29 Jun 2001 14:44:12 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": true, "msg_subject": "Re: Configuration of statistical views" }, { "msg_contents": "Tom Lane writes:\n\n> Probably there need to be at least 2 variables: (a) a PGC_POSTMASTER\n> variable that controls whether the stats collector is even started,\n> and (b) PGC_USERSET variable(s) that enable a particular backend to\n> send particular kinds of data to the collector. Note that, for example,\n> backend start/stop events probably need to be reported whenever the\n> postmaster variable is set, even if all the USERSET variables are off.\n\nI'm not familiar with the kinds of statistics that are supposed to be\ngathered here, but I suppose their usefulness would be greatly increased\nif they were gathered across all data/actions, not only the ones that the\nusers turned them on for.
So I think ordinary users have no business\n> controlling these settings.\n\nOkay, the per-backend GUC variables should be SUSET instead of USERSET.\nI don't have a problem with that ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Jun 2001 17:33:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Configuration of statistical views " } ]
[ { "msg_contents": "Well, I'm on my way to implement what was discussed on list before.\n\nI am doing it the way Karel and Jan suggested: creating a\npg_class/pg_attribute tuple[s] for a function that returns a setof. \n\nI have a special RELKIND_FUNC for parser, and it seems to go through fine,\nand the final query plan has 'Seq Scan on rel####', which I think is a\ngood sign, as the function should pretend to be a relation. \n\nNow, more interesting question, what's the best way to interface ExecScan\nto function-executing machinery:\n\nOptions are:\n\n1) Create a special scan node type, T_FuncSeqScan and deal with it there.\n\n2) Keep the T_SeqScan, explain to nodeSeqScan special logic when dealing\nwith RELKIND_FUNC relations. \n(I prefer this one, but I would like a validation of it)\n\n3) explain to heap_getnext special logic. \n\n\n", "msg_date": "Fri, 29 Jun 2001 14:46:40 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "functions returning sets" }, { "msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> Well, I'm on my way to implement what was discussed on list before.\n> I am doing it the way Karel and Jan suggested: creating a\n> pg_class/pg_attribute tuple[s] for a function that returns a setof. \n\nWhat? You shouldn't need pg_class entries for functions unless they\nreturn *tuples*. setof has nothing to do with that. Moreover, the\npg_class entry should be thought of as a record type independent of\nthe existence of any particular function returning it.\n\n> I have a special RELKIND_FUNC for parser,\n\nThis seems totally wrong.\n\n> Options are:\n> 1) Create a special scan node type, T_FuncSeqScan and deal with it there.\n> 2) Keep the T_SeqScan, explain to nodeSeqScan special logic when dealing\n> with RELKIND_FUNC relations. \n> (I prefer this one, but I would like a validation of it)\n> 3) explain to heap_getnext special logic. \n\nI prefer #1.
#2 or #3 will imply slowing down normal execution paths\nwith extra clutter to deal with functions.\n\nBTW, based on Jan's sketch, I'd say it should be more like\nT_CursorSeqScan where the object being scanned is a cursor/portal.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Jun 2001 16:25:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: functions returning sets " }, { "msg_contents": "Alex Pilosov wrote:\n> Well, I'm on my way to implement what was discussed on list before.\n>\n> I am doing it the way Karel and Jan suggested: creating a\n> pg_class/pg_attribute tuple[s] for a function that returns a setof.\n\n That's not exactly what I suggested. I meant having a\n separate\n\n CREATE TYPE <typname> IS RECORD OF (<atttyplist>);\n\n and then\n\n CREATE FUNCTION ...\n RETURNS SETOF <typname>|<tablename>|<viewname> ...\n\n Note that we need a pg_type entry too as we currently do for\n tables and views. The only thing missing is a file underneath\n and of course, the ability to use it directly for INSERT,\n UP... operations.\n\n This way, you have the functions returned tuple structure\n available elsewhere too, like in PL/pgSQL for %ROWTYPE,\n because it's a named type declaration.\n\n> Now, more interesting question, what's the best way to interface ExecScan\n> to function-executing machinery:\n>\n> Options are:\n>\n> 1) Create a special scan node type, T_FuncSeqScan and deal with it there.\n>\n> 2) Keep the T_SeqScan, explain to nodeSeqScan special logic when dealing\n> with RELKIND_FUNC relations.\n> (I prefer this one, but I would like a validation of it)\n>\n> 3) explain to heap_getnext special logic.\n\n My idea was to change the expected return Datum of a function\n returning SETOF <rowtype> being a refcursor or portal\n directly. Portals are an abstraction of a resultset and used\n in Postgres to implement cursors. So the executor node would\n be T_PortalScan.
Whatever a function needs (callback per\n tuple, tuple sink to stuff, an executor like now) will be\n hidden in the portal.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Fri, 29 Jun 2001 17:09:15 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: functions returning sets" }, { "msg_contents": "On Fri, 29 Jun 2001, Tom Lane wrote:\n\n> Alex Pilosov <alex@pilosoft.com> writes:\n> > Well, I'm on my way to implement what was discussed on list before.\n> > I am doing it the way Karel and Jan suggested: creating a\n> > pg_class/pg_attribute tuple[s] for a function that returns a setof. \n> \n> What? You shouldn't need pg_class entries for functions unless they\n> return *tuples*. setof has nothing to do with that. Moreover, the\n> pg_class entry should be thought of as a record type independent of\n> the existence of any particular function returning it.\n\nWell, a lot of things (planner for ex) need to know relid of the relation\nbeing returned. If a function returns setof int4, for example, what relid\nshould be filled in? \n\nVariables (for example) have to be bound to relid and attno.
If a function\nreturns setof int4, what should be variables' varno be?\n\nAssigning 'fake' relids valid for length of query (from a low range) may\nbe a solution if you agree?\n\n> > I have a special RELKIND_FUNC for parser,\n> \n> This seems totally wrong.\nProbably :)\n\n> > Options are:\n> > 1) Create a special scan node type, T_FuncSeqScan and deal with it there.\n> > 2) Keep the T_SeqScan, explain to nodeSeqScan special logic when dealing\n> > with RELKIND_FUNC relations. \n> > (I prefer this one, but I would like a validation of it)\n> > 3) explain to heap_getnext special logic. \n> \n> I prefer #1. #2 or #3 will imply slowing down normal execution paths\n> with extra clutter to deal with functions.\n> \n> BTW, based on Jan's sketch, I'd say it should be more like\n> T_CursorSeqScan where the object being scanned is a cursor/portal.\n\nOkay. So the logic should support 'select * from foo' where foo is portal,\nright? Then I _do_ have to deal with a problem of unknown relid to bind\nvariables to...\n\n-alex\n\n", "msg_date": "Fri, 29 Jun 2001 17:26:51 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: functions returning sets " }, { "msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> Well, a lot of things (planner for ex) need to know relid of the relation\n> being returned.\n\nOnly if there *is* a relid. Check out the handling of\nsub-SELECT-in-FROM for a more reasonable model.\n\nIt's quite likely that you'll need another variant of RangeTblEntry to\nrepresent a function call. I've been thinking that RangeTblEntry should\nhave an explicit type code (plain rel, subselect, inheritance tree top,\nand join were the variants I was thinking about at the time; add\n\"function returning tupleset\" to that) and then there could be a union\nfor the fields that apply to only some of the variants.\n\n> Variables (for example) have to be bound to relid and attno.
If a function\n> returns setof int4, what should be variables' varno be?\n\nI'd say that such a function's output will probably be implicitly\nconverted to single-column tuples in order to store it in the portal\nmechanism. So the varno is 1. Even if the execution-time mechanism\ndoesn't need to do that, the parser has to consider it that way to allow\na column name to be assigned to the result. Example:\n\n\tselect x+1 from funcreturningsetofint4();\n\nWhat can I write for \"x\" to make this work? There isn't anything.\nI have to assign a column alias to make it legal:\n\n\tselect x+1 from funcreturningsetofint4() as f(x);\n\nHere, x must clearly be regarded as the first (and only) column of the\nrangetable entry for \"f\".\n\n> Okay. So the logic should support 'select * from foo' where foo is portal,\n> right?\n\nYeah, that was what I had up my sleeve ... then\n\n\tselect * from mycursor limit 1;\n\nwould be more or less equivalent to\n\n\tfetch 1 from mycursor;\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Jun 2001 17:32:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: functions returning sets " }, { "msg_contents": "On Fri, 29 Jun 2001, Tom Lane wrote:\n\n> Alex Pilosov <alex@pilosoft.com> writes:\n> > Well, a lot of things (planner for ex) need to know relid of the relation\n> > being returned.\n> \n> Only if there *is* a relid. Check out the handling of\n> sub-SELECT-in-FROM for a more reasonable model.\nThank you!\n> \n> It's quite likely that you'll need another variant of RangeTblEntry to\n> represent a function call.
I've been thinking that RangeTblEntry should\n> have an explicit type code (plain rel, subselect, inheritance tree top,\n> and join were the variants I was thinking about at the time; add\n> \"function returning tupleset\" to that) and then there could be a union\n> for the fields that apply to only some of the variants.\n\nI don't think I've got the balls to do this one, cuz it'd need to be\nmodified in many places. I'll just add another field there for my use and\nlet someone clean it up later. :)\n\n> > Variables (for example) have to be bound to relid and attno. If a function\n> > returns setof int4, what should be variables' varno be?\n> \n> I'd say that such a function's output will probably be implicitly\n> converted to single-column tuples in order to store it in the portal\n> mechanism. So the varno is 1. Even if the execution-time mechanism\n> doesn't need to do that, the parser has to consider it that way to allow\n> a column name to be assigned to the result. Example:\n> \n> \tselect x+1 from funcreturningsetofint4();\n> \n> What can I write for \"x\" to make this work? There isn't anything.\n> I have to assign a column alias to make it legal:\n> \n> \tselect x+1 from funcreturningsetofint4() as f(x);\n> \n> Here, x must clearly be regarded as the first (and only) column of the\n> rangetable entry for \"f\".\nmore fun for grammar, but I'll try.\n\n> > Okay. So the logic should support 'select * from foo' where foo is portal,\n> > right?\n> \n> Yeah, that was what I had up my sleeve ... then\n> \n> \tselect * from mycursor limit 1;\n> \n> would be more or less equivalent to\n> \n> \tfetch 1 from mycursor;\nNeat possibilities.\n\n\n", "msg_date": "Fri, 29 Jun 2001 18:46:46 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: functions returning sets " }, { "msg_contents": "On Fri, 29 Jun 2001, Alex Pilosov wrote:\n\n> > \n> > Yeah, that was what I had up my sleeve ...
then\n> > \n> > \tselect * from mycursor limit 1;\n> > \n> > would be more or less equivalent to\n> > \n> > \tfetch 1 from mycursor;\nHmm, how would this be resolved if there's a (for example) table foo\nand a cursor named foo? Warning? Error? \n\nMaybe syntax like 'select * from cursor foo' should be required syntax?\n\n-alex\n\n", "msg_date": "Sat, 30 Jun 2001 12:33:28 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: functions returning sets " } ]
[ { "msg_contents": "Hello, I have a nightly cron script that runs: vacuumdb -a -z. The output\nof the cron script is emailed to me every night. For the last several\nweeks, about 50% of the time I get the following error at least once but\nsometimes more:\n\nNOTICE: RegisterSharedInvalid: SI buffer overflow\nNOTICE: InvalidateSharedInvalid: cache state reset\n\nI am wondering if this is just something that will happen sometimes, or if\nit implies some type of problem. I have not noticed any problems in using\nthe system. Any input would be appreciated.\n\nI am running PG7.0.3 from RPMs (I plan on upgrading to 7.1.2 soon) on RedHat\n7.0, 800Mhz Athlon w/ 256M. Most of the databases are reasonably small (a\nfew Meg) but some of them are over 1Gig.\n\nThe Postmaster command is: /usr/bin/postmaster -i -B 5000 -N 48 -o -S 16384\n\nThe output of the vacuum command looks like:\n\nVacuuming template1\nVACUUM\n... < Several more databases > ...\nVacuuming postgres\nNOTICE: RegisterSharedInvalid: SI buffer overflow\nNOTICE: InvalidateSharedInvalid: cache state reset\nVACUUM\nVacuuming jpplanning\nVACUUM\n...
< Several more databases> ...\nVacuuming OEA\nNOTICE: RegisterSharedInvalid: SI buffer overflow\nNOTICE: InvalidateSharedInvalid: cache state reset\nVACUUM\nVacuuming nutil\nVACUUM\nVacuuming feedback\nVACUUM\nVacuuming ctlno\nVACUUM\n0.21user 0.08system 21:37.58elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k\n0inputs+0outputs (5074major+1093minor)pagefaults 0swaps\n\nMatt O'Connor\n\n", "msg_date": "Fri, 29 Jun 2001 17:00:59 -0500", "msg_from": "Matthew <matt@ctlno.com>", "msg_from_op": true, "msg_subject": "Help with SI buffer overflow error" }, { "msg_contents": "Matthew <matt@ctlno.com> writes:\n> NOTICE: RegisterSharedInvalid: SI buffer overflow\n> NOTICE: InvalidateSharedInvalid: cache state reset\n\nThese are normal; at most they suggest that you've got another backend\nsitting around doing nothing (but in an open transaction) while VACUUM\nruns.\n\nI think we finally got around to downgrading them to DEBUG messages\nfor 7.2.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Jul 2001 17:17:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Help with SI buffer overflow error " } ]
[ { "msg_contents": "Idea for a new SQL Data Type:\n\n RECURRINGCHAR\n\nThe idea with RECURRINGCHAR is treated exactly like a VARCHAR in its\nusage. However, it's designed for table columns that store a small set of\nrepeated values (<=256 values). This allows for a great deal of savings in\nthe storage of the data.\n\nExample:\n\n Query:\n select count(*) from order\n Returns:\n 100,000\n\n Query:\n select distinct status from order\n Returns:\n OPEN\n REWORK\n PLANNED\n RELEASED\n FINISHED\n SHIPPED\n\nIt's apparent that there is a lot of duplicate space used in the storage\nof this information. The idea is if order.status was stored as a\nRECURRINGCHAR\nthen the only data stored for the row would be a reference to the value of\nthe column. The actual values would be stored in a separate lookup table.\n\nAdvantages:\n\n - Storage space is optimized.\n\n - a query like:\n\n select distinct {RECURRINGCHAR} from {table} \n\n can be radically optimized\n\n - Eliminates use of joins and extended knowledge of data relationships\n for adhoc users.\n\nThis datatype could be extended to allow for larger sets of repeated\nvalues:\n\n RECURRINGCHAR1 (8-bit) up to 256 unique column values\n RECURRINGCHAR2 (16-bit) up to 65536 unique column values\n\nReasoning behind using 'long reference values':\n\nIt is often an advantage to actually store an entire word representing a\nbusiness meaning as the value of a column (as opposed to a reference\nnumber or mnemonic abbreviation ). This helps to make the system \n'self documenting' and adds value to users who are performing adhoc\nqueries on the database.\n\n----\nDavid Bennett\nPresident - Bensoft\n912 Baltimore, Suite 200\nKansas City, MO 64105\n\n\n\n", "msg_date": "Fri, 29 Jun 2001 17:05:35 -0500 (CDT)", "msg_from": "<dbennett@jade.bensoft.com>", "msg_from_op": true, "msg_subject": "New SQL Datatype RECURRINGCHAR" }, { "msg_contents": "This is rather like MySQL's enum.
I still opt for the join, and if\nyou like make a view for those who don't want to know the data\nstructure.\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n----- Original Message -----\nFrom: <dbennett@jade.bensoft.com>\nTo: <pgsql-hackers@postgresql.org>\nSent: Friday, June 29, 2001 6:05 PM\nSubject: [HACKERS] New SQL Datatype RECURRINGCHAR\n\n\n> Idea for a new SQL Data Type:\n>\n> RECURRINGCHAR\n>\n> The idea with RECURRINGCHAR is treated exactly like a VARCHAR in\nit's\n> usage. However, it's designed for table columns that store a small\nset of\n> repeated values (<=256 values). This allows for a great deal of\nsavings in\n> the storage of the data.\n>\n> Example:\n>\n> Query:\n> select count(*) from order\n> Returns:\n> 100,000\n>\n> Query:\n> select distinct status from order\n> Returns:\n> OPEN\n> REWORK\n> PLANNED\n> RELEASED\n> FINISHED\n> SHIPPED\n>\n> It's apparent that there is a lot of duplicate space used in the\nstorage\n> of this information. The idea is if order.status was stored as a\n> RECURRINGCHAR\n> then the only data stored for the row would be a reference to the\nvalue of\n> the column.
The actual values would be stored in a separate lookup\ntable.\n>\n> Advantages:\n>\n> - Storage space is optimized.\n>\n> - a query like:\n>\n> select distinct {RECURRINGCHAR} from {table}\n>\n> can be radically optimized\n>\n> - Eliminates use of joins and extended knowledge of data\nrelationships\n> for adhoc users.\n>\n> This datatype could be extended to allow for larger sets of repeated\n> values:\n>\n> RECURRINGCHAR1 (8-bit) up to 256 unique column values\n> RECURRINGCHAR2 (16-bit) up to 65536 unique column values\n>\n> Reasoning behind using 'long reference values':\n>\n> It is often an advantage to actually store an entire word\nrepresenting a\n> business meaning as the value of a column (as opposed to a reference\n> number or mnemonic abbreviation ). This helps to make the system\n> 'self documenting' and adds value to users who are performing adhoc\n> queries on the database.\n>\n> ----\n> David Bennett\n> President - Bensoft\n> 912 Baltimore, Suite 200\n> Kansas City, MO 64105\n>\n>\n>\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Tue, 3 Jul 2001 16:07:01 -0400", "msg_from": "\"Rod Taylor\" <rbt@barchord.com>", "msg_from_op": false, "msg_subject": "Re: New SQL Datatype RECURRINGCHAR" }, { "msg_contents": "This is not a good idea. You are probably coming from mysql background (no\noffense :).\n\nSee comments inline.\n\nOn Fri, 29 Jun 2001 dbennett@jade.bensoft.com wrote:\n\n> Idea for a new SQL Data Type:\n> \n> It's apparent that there is a lot of duplicate space used in the storage\n> of this information. The idea is if order.status was stored as a\n> RECURRINGCHAR\n> then the only data stored for the row would be a reference to the value of\n> the column.
The actual values would be stored in a separate lookup table.\nYou should instead have another table with two columns, order_status_id\nand order_status_desc, and join with it to get your data.\n> \n> Advantages:\n> \n> - Storage space is optimized.\n> \n> - a query like:\n> \n> select distinct {RECURRINGCHAR} from {table} \n> \n> can be radically optimized\nselect distinct order_status_desc from order_status_lookup\n\n> - Eliminates use of joins and extended knowledge of data relationships\n> for adhoc users.\nFor adhoc users, you can create a view so they won't be aware of joins. \n\n> It is often an advantage to actually store an entire word representing a\n> business meaning as the value of a column (as opposed to a reference\n> number or mnemonic abbreviation ). This helps to make the system \n> 'self documenting' and adds value to users who are performing adhoc\n> queries on the database.\nNo, that is against good database design and any database normalization.\n\n-alex\n\n", "msg_date": "Tue, 3 Jul 2001 16:32:21 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: New SQL Datatype RECURRINGCHAR" }, { "msg_contents": ">> It's apparent that there is a lot of duplicate space used in the storage\n>> of this information. The idea is if order.status was stored as a\n>> RECURRINGCHAR\n>> then the only data stored for the row would be a reference to the value\nof\n>> the column. The actual values would be stored in a separate lookup table.\n\n>You should instead have another table with two columns, order_status_id\n>and order_status_desc, and join with it to get your data.\n\nThe idea is to simplify the process of storing and accessing the data.\nJoins require\na deeper knowledge of the relational structure.
This also complicates\napplication\nprogramming, two tables must be maintained instead of just one.\n\n>> select distinct {RECURRINGCHAR} from {table}\n>>\n>> can be radically optimized\n\n> select distinct order_status_desc from order_status_lookup\n\nAgain the idea is to simplify. Reduce the number of tables required to\nrepresent a business model.\n\n>> - Eliminates use of joins and extended knowledge of data relationships\n>> for adhoc users.\n\n> For adhoc users, you can create a view so they won't be aware of joins.\n\nNow we have a master table, a lookup table AND a view?\neven more complication....\n\n>> It is often an advantage to actually store an entire word representing a\n>> business meaning as the value of a column (as opposed to a reference\n>> number or mnemonic abbreviation ). This helps to make the system\n>> 'self documenting' and adds value to users who are performing adhoc\n>> queries on the database.\n\n> No, that is against good database design and any database normalization.\n\nI would like to hear your argument on this. I don't see how optimizing\nthe storage of reference value breaks a normalization rule.\n\n--Dave\n\n", "msg_date": "Tue, 3 Jul 2001 16:23:40 -0500", "msg_from": "\"David Bennett\" <dbennett@bensoft.com>", "msg_from_op": false, "msg_subject": "RE: New SQL Datatype RECURRINGCHAR" }, { "msg_contents": "\"Rod Taylor\" <rbt@barchord.com> writes:\n> This is rather like MySQL's enum.\n\nYes. If we were going to do anything like this, I'd vote for stealing\nthe \"enum\" API, lock stock and barrel --- might as well be compatible.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Jul 2001 17:49:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: New SQL Datatype RECURRINGCHAR " }, { "msg_contents": "On Tue, 3 Jul 2001, David Bennett wrote:\n\n> The idea is to simplify the process of storing and accessing the data.\n> Joins require a deeper knowledge of the relational structure.
This\n> also complicates application programming, two tables must be\n> maintained instead of just one.\nSometimes, to maintain correctness, its necessary to have complex designs. \n\n\"All problems have simple, easy-to-understand, incorrect solutions\".\n\n> Again the idea is to simplify. Reduce the number of tables required to\n> represent a business model.\nWhy? You should normalize your data, which _increases_ number of tables.\n\n\n> >> - Eliminates use of joins and extended knowledge of data relationships\n> >> for adhoc users.\n> \n> > For adhoc users, you can create a view so they won't be aware of joins.\n> \n> Now we have a master table, a lookup table AND a view?\n> even more complication....\nWell, that's called software development. If you don't want complications,\nyou can use MS-Access *:)\n\n> \n> >> It is often an advantage to actually store an entire word representing a\n> >> business meaning as the value of a column (as opposed to a reference\n> >> number or mnemonic abbreviation ). This helps to make the system\n> >> 'self documenting' and adds value to users who are performing adhoc\n> >> queries on the database.\n> \n> > No, that is against good database design and any database normalization.\n> \n> I would like to hear your argument on this. I don't see how optimizing\n> the storage of reference value breaks a normalization rule.\nWhat if tomorrow you will need to change text name for \"OPEN\" status to\n\"OPEN_PENDING_SOMETHING\"? With your design, you will need to update all\nrows in the table changing it. With normalized design, you just update the\nlookup table. Etc, etc.\n\n-alex\n\n\n\n", "msg_date": "Tue, 3 Jul 2001 18:16:08 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "RE: New SQL Datatype RECURRINGCHAR" }, { "msg_contents": "The only problem with 'enum' is that all of the possible values must be\nspecified at CREATE time. 
A logical extension to this would be to allow for\n'dynamic extensions' to the list.\n\n-----Original Message-----\nFrom: Tom Lane [mailto:tgl@sss.pgh.pa.us]\nSent: Tuesday, July 03, 2001 4:49 PM\nTo: Rod Taylor\nCc: dbennett@jade.bensoft.com; pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] New SQL Datatype RECURRINGCHAR\n\n\n\"Rod Taylor\" <rbt@barchord.com> writes:\n> This is rather like MySQL's enum.\n\nYes. If we were going to do anything like this, I'd vote for stealing\nthe \"enum\" API, lock stock and barrel --- might as well be compatible.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 6 Jul 2001 17:44:21 -0500", "msg_from": "\"David Bennett\" <dbennett@bensoft.com>", "msg_from_op": false, "msg_subject": "RE: New SQL Datatype RECURRINGCHAR " }, { "msg_contents": "> various disagreements and \"quotes\"...\n\nI agree that you disagree.... :)\n\nRECURRINGCHAR does not break normal form. It simply optimizes the storage\nof reference values (recurring keys). This allows for the use of 'long\nwords' as reference values with a great deal of system storage savings and a\nboost in performance in certain circumstances. This is more a form of\n'compression' then anything else, as a matter of fact, this is very similar\nto the LZ78 family of substitutional compressors.\n\n http://www.faqs.org/faqs/compression-faq/part2/section-1.html\n\nThe advantage here is that we are targeting a normalized value in it's\natomic state, The recurrence rate of this these values is extremely high\nwhich allows us to store this data in a very small space and optimize the\naccess to this data by using the 'dictionary' that we create.\n\n>What if tomorrow you will need to change text name for \"OPEN\" status to\n>\"OPEN_PENDING_SOMETHING\"? With your design, you will need to update all\n>rows in the table changing it. With normalized design, you just update the\n>lookup table. 
Etc, etc.\n\nIn either model you would:\n\n\tupdate master_table set status='OPEN_PENDING_SOMETHING' where status='OPEN'\n\nThis would not change, in fact, even in a normalized design you wouldn't\nchange the lookup table (parent) key. Perhaps you are misunderstanding my\ninitial concept. The MySQL 'enum' is close. However, it is static and\nrequires you to embed business data (your key list) in the DDL. The idea I\nhave here is to dynamically extend this list as needed. I am not saying\nthat the value can't relate to a parent (lookup) table. It's just not\nnecessary if the value is all that is needed.\n\n--Dave (Hoping some other SQL developers are monitoring this thread :)\n\n", "msg_date": "Fri, 6 Jul 2001 18:24:48 -0500", "msg_from": "\"David Bennett\" <dbennett@bensoft.com>", "msg_from_op": false, "msg_subject": "RE: New SQL Datatype RECURRINGCHAR" }, { "msg_contents": "On Fri, 6 Jul 2001, David Bennett wrote:\n\n<rest snipped>\n> In either model you would:\n> \n> \tupdate master_table set status='OPEN_PENDING_SOMETHING' where status='OPEN'\n> \n> This would not change, in fact, even in a normalized design you\n> wouldn't change the lookup table (parent) key. Perhaps you are\n> misunderstanding my initial concept. The MySQL 'enum' is close. \n> However, it is static and requires you to embed business data (your\n> key list) in the DDL. The idea I have here is to dynamically extend\n> this list as needed. I am not saying that the value can't relate to a\n> parent (lookup) table. It's just not necessary if the value is all\n> that is needed.\nYou are making absolutely no sense. \n\nLet me break it down:\n\n\na) To do an update of a key to a different value, you would need to do\nfollowing:\n1) look up the new value in entire table, find if its already exists\n2) If it exists, good.\n3) if it doesn't, pick a next number. (out of some sequence, I suppose) to\nrepresent the key.\n4) do the actual update.\n\nStep 1 without an index is a killer. 
Then, you need to have a certain\n'table' to map the existing key values to their numerical representations.\n\nHow would this 'table' get populated? On startup? On select?\n\nIts one thing to take 'enum' datatype, which I wouldn't disagree too\nmuch with. Its another thing to suggest this kind of a scheme, which\nshould be really done with views and rules.\n\nI.E. instead of (as you would have) table a(..., x recurringchar), \nyou must have two things:\n\ntable a_real (..., x int4)\ntable lookup (x int4, varchar value)\n\nThen, have a view:\ncreate view a as select ..., value from a_real, lookup where\na_real.x=lookup.x\n\nThen create a rule on insert: (syntax may be rusty)\ncreate rule foo\non insert on table a\ndo instead\n...whatever magic you need to do the actual inserton, lookup, etc.\n\n\n> --Dave (Hoping some other SQL developers are monitoring this thread :)\n\n\n", "msg_date": "Fri, 6 Jul 2001 21:25:09 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "RE: New SQL Datatype RECURRINGCHAR" }, { "msg_contents": "Alex,\n\nI think I fully understand your position. 
Let me wrap up our\nconversation so far.\n\n\nGiven the application requirements:\n\n 1) contacts have a type.\n\n 2) new types must be added on the fly as needed.\n\n 3) type names rarely change.\n\n 4) the number of contacts should scale to support millions of records.\n\n 5) the number of types will be limited to under 64k\n\n 6) Users must be able to easily query contacts with readable types.\n\n\n-----\nIn a nutshell you are recommending:\n-----\n\n create table contact_type (\n code\t int2,\n type char(16),\n PRIMARY KEY ( code )\n );\n\n create table contact (\n number \tserial,\n name \tchar(32),\n type int2,\n PRIMARY KEY ( number ),\n FOREIGN KEY ( type ) REFERENCES contact_type ( code )\n );\n\n create view contact_with_readble_type as (\n select c.number as number,\n c.name as name,\n t.type as type\n from\n contact c,\n contact_type t\n where\n c.type = t.code\n );\n\n* To build a type lookup table:\n\n 1) Select type and code from contact_type\n 2) Build UI object which displays type and returns code\n\n* In order to insert a new record with this model:\n\n 1) Look up to see if type exists\n 2) Insert new type\n 3) Get type ID\n 4) Insert contact record\n\n* The adhoc query user is now faced with\n the task of understanding 3 data tables.\n\n-----\nWith recurringchar you could do this easily as:\n-----\n\n create table contact (\n number \tserial,\n name \tchar(32),\n type recurringchar1,\n PRIMARY KEY ( number )\n );\n\n* To build a type lookup table:\n\n 1) Select distinct type from contact (optimized access to recurringchar\ndictionary)\n 2) Build UI object which displays and returns type.\n\n* In order to insert a new record with this model:\n\n 1) Insert contact record\n\n* The adhoc query user has one data table.\n\n-----\n\nGranted, changing the value of contact_type.type would require edits to the\ncontact records.\nIt may be possible to add simple syntax to allow editing of a 'recurringchar\ndictionary' to\nget around an isolated problem which would only exist in 
certain applications.\n\nActually, maybe 'dictionary' or 'dictref' would be a better name for the\ndatatype.\n\n\n", "msg_date": "Sat, 7 Jul 2001 13:02:55 -0500", "msg_from": "\"David Bennett\" <dbennett@bensoft.com>", "msg_from_op": false, "msg_subject": "RE: New SQL Datatype RECURRINGCHAR" }, { "msg_contents": "On Sat, 7 Jul 2001, David Bennett wrote:\n\n> -----\n> In a nutshell you are recommending:\n> -----\n> \n> create table contact_type (\n> code\t int2,\n> type char(16),\n> PRIMARY KEY ( code )\n> );\n> \n> create table contact (\n> number \tserial,\n> name \tchar(32),\n> type int2,\n> PRIMARY KEY ( number ),\n> FOREIGN KEY ( type ) REFERENCES contact_type ( code )\n> );\n> \n> create view contact_with_readble_type as (\n> select c.number as number,\n> c.name as name,\n> t.type as type\n> from\n> contact c,\n> contact_type t\n> );\n> \n> * To build a type lookup table:\n> \n> 1) Select type and code from contact_type\n> 2) Build UI object which displays type and returns code\nJust 'select distinct' on a view should do just fine. \n\n> * In order to insert a new record with this model:\n> \n> 1) Look up to see if type exists\n> 2) Insert new type\n> 3) Get type ID\n> 4) Insert contact record\nThis can be encapsulated with \"ON INSERT\" rule on a view.\n\n> * The adhoc query user is now faced with\n> the task of understanding 3 data tables.\nNo, only one view. 
All the logic is encapsulated there.\n\n> \n> -----\n> With recurringchar you could do this easily as:\n> -----\n> \n> create table contact (\n> number \tserial,\n> name \tchar(32),\n> type recurringchar1,\n> PRIMARY KEY ( number ),\n> );\n> \n> * To build a type lookup table:\n> \n> 1) Select distinct type from contact (optimized access to recurringchar\n> dictionary)\n> 2) Build UI object which displays and returns type.\n> \n> * In order to insert a new record with this model:\n> \n> 1) Insert contact record\n> \n> * The adhoc query user has one data table.\n> \n> -----\n> \n> Granted, changing the value of contact_type.type would require edits\n> to the contact records. It may be possible to add simple syntax to\n> allow editing of a 'recurringchar dictionary' to get around isolated\n> problem which would only exist in certain applications.\n> \n> Actually, maybe 'dictionary' or 'dictref' would be a better name for\n> the datatype.\nThese things belong in application or middleware (AKA views/triggers), not\nin database server itself.\n\nThere are multiple problems with your implementation, for example,\ntransaction handling, assume this situation:\n\nTran A inserts a new contact with new type \"foo\", but does not commit.\nDictionary assigns value of N to 'foo'.\n\nTran B inserts a new contact with type foo. What value should be entered\nin the dictionary? N? A new value? \n\nIf a type disappears from database, does its dictionary ID get reused?\n\nAll these questions are not simple questions, and its not up to database\nto decide it. Your preferred solution belongs in your triggers/views, not\nin core database.\n\n\n", "msg_date": "Sat, 7 Jul 2001 21:24:08 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "RE: New SQL Datatype RECURRINGCHAR" }, { "msg_contents": "This would be a potential feature of being able to insert into views\nin general. 
Reversing the CREATE VIEW statement to accept inserts,\ndeletes and updates.\n\nIf true, focus on that. There's lots of views that cannot be reversed\nproperly -- unions come to mind -- but perhaps this type of simple\njoin could be a first step in the package. I believe this is on the\nTODO list already.\n\nDifferent attack, but accomplishes the same thing within SQL standards\nas I seem to recall views are supposed to do this where reasonable.\n\n\nFailing that, implement this type of action the same way as foreign\nkeys. Via the described method with automagically created views,\ntables, etc. Though I suggest leaving it in contrib for some time.\nEnum functionality isn't particularly useful to the majority whose\napplications tend to pull out the numbers for states when the\napplication is opened (with the assumption they're generally static).\n\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. 
You cannot imagine why you ever felt\notherwise.\n\n----- Original Message -----\nFrom: \"Alex Pilosov\" <alex@pilosoft.com>\nTo: \"David Bennett\" <dbennett@bensoft.com>\nCc: <pgsql-hackers@postgresql.org>\nSent: Saturday, July 07, 2001 9:24 PM\nSubject: RE: [HACKERS] New SQL Datatype RECURRINGCHAR\n\n\n> On Sat, 7 Jul 2001, David Bennett wrote:\n>\n> > -----\n> > In a nutshell you are recommending:\n> > -----\n> >\n> > create table contact_type (\n> > code int2,\n> > type char(16),\n> > PRIMARY KEY ( code )\n> > );\n> >\n> > create table contact (\n> > number serial,\n> > name char(32),\n> > type int2,\n> > PRIMARY KEY ( number ),\n> > FOREIGN KEY ( type ) REFERENCES contact_type ( code )\n> > );\n> >\n> > create view contact_with_readble_type as (\n> > select c.number as number,\n> > c.name as name,\n> > t.type as type\n> > from\n> > contact c,\n> > contact_type t\n> > );\n> >\n> > * To build a type lookup table:\n> >\n> > 1) Select type and code from contact_type\n> > 2) Build UI object which displays type and returns code\n> Just 'select distinct' on a view should do just fine.\n>\n> > * In order to insert a new record with this model:\n> >\n> > 1) Look up to see if type exists\n> > 2) Insert new type\n> > 3) Get type ID\n> > 4) Insert contact record\n> This can be encapsulated with \"ON INSERT\" rule on a view.\n>\n> > * The adhoc query user is now faced with\n> > the task of understanding 3 data tables.\n> No, only one view. 
All the logic is encapsulated there.\n>\n> >\n> > -----\n> > With recurringchar you could do this easily as:\n> > -----\n> >\n> > create table contact (\n> > number serial,\n> > name char(32),\n> > type recurringchar1,\n> > PRIMARY KEY ( number ),\n> > );\n> >\n> > * To build a type lookup table:\n> >\n> > 1) Select distinct type from contact (optimized access to\nrecurringchar\n> > dictionary)\n> > 2) Build UI object which displays and returns type.\n> >\n> > * In order to insert a new record with this model:\n> >\n> > 1) Insert contact record\n> >\n> > * The adhoc query user has one data table.\n> >\n> > -----\n> >\n> > Granted, changing the value of contact_type.type would require\nedits\n> > to the contact records. It may be possible to add simple syntax to\n> > allow editing of a 'recurringchar dictionary' to get around\nisolated\n> > problem which would only exist in certain applications.\n> >\n> > Actually, maybe 'dictionary' or 'dictref' would be a better name\nfor\n> > the datatype.\n> These things belong in application or middleware (AKA\nviews/triggers), not\n> in database server itself.\n>\n> There are multiple problems with your implementation, for example,\n> transaction handling, assume this situation:\n>\n> Tran A inserts a new contact with new type \"foo\", but does not\ncommit.\n> Dictionary assigns value of N to 'foo'.\n>\n> Tran B inserts a new contact with type foo. What value should be\nentered\n> in the dictionary? N? A new value?\n>\n> If a type disappears from database, does its dictionary ID get\nreused?\n>\n> All these questions are not simple questions, and its not up to\ndatabase\n> to decide it. 
Your preferred solution belongs in your\ntriggers/views, not\n> in core database.\n>\n>\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Sat, 7 Jul 2001 22:59:41 -0400", "msg_from": "\"Rod Taylor\" <rbt@barchord.com>", "msg_from_op": false, "msg_subject": "Re: New SQL Datatype RECURRINGCHAR" }, { "msg_contents": "\nOn Sat, 7 Jul 2001, Rod Taylor wrote:\n\n> This would be a potential feature of being able to insert into views\n> in general. Reversing the CREATE VIEW statement to accept inserts,\n> deletes and updates.\nDefinitely not a 'potential' feature, but a existing and documented one.\nRead up on rules, especially 'ON INSERT DO INSTEAD' stuff. Its not\nautomatic, though.\n\n> If true, focus on that. Theres lots of views that cannot be reversed\n> properly -- unions come to mind -- but perhaps this type of simple\n> join could be a first step in the package. I believe this is on the\n> TODO list already.\nOn TODO list are updatable views in SQL sense of word, [i.e. automatic\nupdateability of a view which matches certain criteria].\n\n> Different attack, but accomplishes the same thing within SQL standards\n> as I seem to recall views are supposed to do this where reasonable.\n> \n> \n> Failing that, implement this type of action the same way as foreign\n> keys. Via the described method with automagically created views,\n> tables, etc. Though I suggest leaving it in contrib for sometime.\n> Enum functionality isn't particularly useful to the majority whose\n> applications tend to pull out the numbers for states when the\n> application is opened (with the assumption they're generally static).\n\nOriginal suggestion was not for an enum type, it was for _dynamically\nextensible_ data dictionary type. 
\n\nENUM is statically defined, and it wouldn't be too hard to implement, with\none exception: one more type-specific field needs to be added to\npg_attribute table, where would be stored argument for the type (such as,\nlength for a char/varchar types, length/precision for numeric type, and\npossible values for a enum type). \n\nThis just needs a pronouncement that this addition is a good idea, and\nthen its a trivial thing to implement enum.\n\n-alex\n\n", "msg_date": "Sat, 7 Jul 2001 23:42:28 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: New SQL Datatype RECURRINGCHAR" }, { "msg_contents": "> > This would be a potential feature of being able to insert into\nviews\n> > in general. Reversing the CREATE VIEW statement to accept\ninserts,\n> > deletes and updates.\n> Definitely not a 'potential' feature, but a existing and documented\none.\n> Read up on rules, especially 'ON INSERT DO INSTEAD' stuff. Its not\n> automatic, though.\n\nTrust me, I know how to go about doing those kinds of things manually.\nI was referring to the automated reveral -- which creates this\nfeatures in a very simple manner without any additions or changes to\nsystem tables -- aside from reverse rules themselves which is a more\ngeneric feature.\n\n> > If true, focus on that. Theres lots of views that cannot be\nreversed\n> > properly -- unions come to mind -- but perhaps this type of simple\n> > join could be a first step in the package. I believe this is on\nthe\n> > TODO list already.\n> On TODO list are updatable views in SQL sense of word, [i.e.\nautomatic\n> updateability of a view which matches certain criteria].\n>\n> > Different attack, but accomplishes the same thing within SQL\nstandards\n> > as I seem to recall views are supposed to do this where\nreasonable.\n> >\n> >\n> > Failing that, implement this type of action the same way as\nforeign\n> > keys. 
Via the described method with automagically created views,\n> > tables, etc. Though I suggest leaving it in contrib for sometime.\n> > Enum functionality isn't particularly useful to the majority whose\n> > applications tend to pull out the numbers for states when the\n> > application is opened (with the assumption they're generally\nstatic).\n>\n> Original suggestion was not for an enum type, it was for\n_dynamically\n> extensible_ data dictionary type.\n\nENUMs from my memory are easily redefined. And since the database\nthey're implemented in requires table locks for everything, they can\nappear dynamic (nothing is transaction safe in that thing anyway).\n\n", "msg_date": "Sat, 7 Jul 2001 23:48:57 -0400", "msg_from": "\"Rod Taylor\" <rbt@barchord.com>", "msg_from_op": false, "msg_subject": "Re: New SQL Datatype RECURRINGCHAR" }, { "msg_contents": "> > > This would be a potential feature of being able to insert into\n> views\n> > > in general. Reversing the CREATE VIEW statement to accept\n> inserts,\n> > > deletes and updates.\n> > Definitely not a 'potential' feature, but a existing and\ndocumented\n> one.\n> > Read up on rules, especially 'ON INSERT DO INSTEAD' stuff. Its not\n> > automatic, though.\n>\n> Trust me, I know how to go about doing those kinds of things\nmanually.\n> I was referring to the automated reveral -- which creates this\n> features in a very simple manner without any additions or changes to\n> system tables -- aside from reverse rules themselves which is a more\n> generic feature.\n\nHmm. My above statement lost all credibility in poor grammer and\nspeeling. Time for bed I suppose.\n\nAnyway, the point is that some of the simple views should be straight\nforward to reversing automatically if someone has the will and the\ntime it can be done. 
A while back a list of 'views which cannot be\nreversed' was created and included things such as Unions,\nIntersections, exclusions, aggregates, CASE statements, and a few more\nitems.\n\n", "msg_date": "Sun, 8 Jul 2001 00:03:12 -0400", "msg_from": "\"Rod Taylor\" <rbt@barchord.com>", "msg_from_op": false, "msg_subject": "Re: New SQL Datatype RECURRINGCHAR" }, { "msg_contents": "\"Rod Taylor\" <rbt@barchord.com> writes:\n> Anyway, the point is that some of the simple views should be straight\n> forward to reversing automatically if someone has the will and the\n> time it can be done. A while back a list of 'views which cannot be\n> reversed' was created and included things such as Unions,\n> Intersections, exclusions, aggregates, CASE statements, and a few more\n> items.\n\nSQL92 has a notion that certain simple views are \"updatable\", while the\nrest are not. In our terms this means that we should automatically\ncreate ON INSERT/UPDATE/DELETE rules if the view is updatable according\nto the spec. I have not bothered to chase down all the exact details\nof the spec's \"updatableness\" restrictions, but they're along the same\nlines you mention --- only one referenced table, no aggregation, no\nset operations, all view outputs are simple column references, etc.\n\nMy feeling is that the restrictions are stringent enough to eliminate\nmost of the interesting uses of views, and hence an automatic rule\ncreation feature is not nearly as useful/important as it appears at\nfirst glance. In real-world applications you'll have to expend some\nthought on manual rule creation anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 08 Jul 2001 00:47:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: New SQL Datatype RECURRINGCHAR " }, { "msg_contents": "David Bennett wrote:\n> Alex,\n>\n> I think I fully understand your position. 
Let me put wrap up our\n> conversation so far.\n> [lots of arguments for recurringchar]\n\n All I've seen up to now is that you continue to mix up\n simplification on the user side with data and content control\n on the DB designer side. Do the users create all the tables\n and would have to create the views, or is that more the job\n of someone who's educated enough?\n\n And about the multiple lookups and storage of new types, we\n have procedural languages and database triggers.\n\n This is no personal offense, but I just don't see why we\n should adopt non-standard MySQLism for functionality that is\n available through standard mechanisms with alot more\n flexibility and control.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 11 Jul 2001 08:19:46 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: New SQL Datatype RECURRINGCHAR" } ]
[ { "msg_contents": "Baby girl on Jun 27.\n\nVadim\n\n\n", "msg_date": "Fri, 29 Jun 2001 23:57:43 -0700", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": true, "msg_subject": "Now it's my turn..." }, { "msg_contents": "\n----- Original Message ----- \nFrom: Vadim Mikheev <vmikheev@sectorbase.com>\nSent: Saturday, June 30, 2001 2:57 AM\n\n\n> Baby girl on Jun 27.\n\nCongrats, daddy! :)\n\nSoon, very soon... the babies will replace you guys,\nand will hack PG in the way you never dreamed about! :)\n\nAnybody else?\n\nS.\n\n\n\n\n", "msg_date": "Sat, 30 Jun 2001 03:10:03 -0400", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: Now it's my turn..." }, { "msg_contents": "\"Vadim Mikheev\" <vmikheev@sectorbase.com> writes:\n> Baby girl on Jun 27.\n\nCongratulations!\n\nI suppose now you'll have no time for hacking PG for awhile ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 30 Jun 2001 10:42:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Now it's my turn... " }, { "msg_contents": "Hi:\n\nOn Fri, 29 Jun 2001, Vadim Mikheev wrote:\n\n> Baby girl on Jun 27.\n \nCongratulations to Vadim, his wife and his daughter!!!\n\nSaludos,\n\nRoberto Andrade Fonseca\nrandrade@abl.com.mx\n\n", "msg_date": "Sat, 30 Jun 2001 12:03:41 -0500 (CDT)", "msg_from": "\"Ing. Roberto Andrade Fonseca\" <randrade@abl.com.mx>", "msg_from_op": false, "msg_subject": "Re: Now it's my turn..." 
}, { "msg_contents": "On Fri, 29 Jun 2001, Vadim Mikheev wrote:\n\n> Baby girl on Jun 27.\n\nCongrats Papa!!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sun, 1 Jul 2001 19:27:25 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Now it's my turn..." }, { "msg_contents": "Vadim Mikheev wrote:\n[Charset Windows-1252 unsupported, skipping...]\n\n Congrats! Greetings and all the best from here.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Mon, 2 Jul 2001 09:11:58 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Now it's my turn..." },
{ "msg_contents": "[Congratulations message originally in Russian; the text was lost to a character-encoding error in the archive.]\n\nOn Fri, 29 Jun 2001, Vadim Mikheev wrote:\n\n> Baby girl on Jun 27.\n>\n> Vadim\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 2 Jul 2001 16:42:38 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: Now it's my turn..." }, { "msg_contents": "On Saturday 30 June 2001 02:57, Vadim Mikheev wrote:\n> Baby girl on Jun 27.\n\nMany congratulations!\n\nAll I said to Bruce with his youngun applies equally!\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 2 Jul 2001 11:50:14 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Now it's my turn..." } ]
[ { "msg_contents": "With addition of Portal as RangeTblEntry, we'll have three very distinct\nkinds of RTEs: Relation, Subquery and Portal.\n\nThey share some fields, but there are specific fields for each kind.\n\nTo encapsulate that (save on storage and gain cleanness), there are two\noptions:\n\n1) Create three kinds of RTE, RangeTblEntryRelation, RangeTblEntryPortal\nwhich will have these type fields. Then, code that wants to access\nspecific fields, will need to cast RTE to specific RTE type for access to\nthe fields. Code would look like\n\nRangeTblEntry *rte; \nAssert(e->rkind == RTE_RELATION);\n(RangeTblEntryRelation rte)->relname\n\n2) Keep one type, but unionize the fields. RTE definition would be:\n\ntypedef struct RangeTblEntry\n{\n NodeTag type;\n RTEType rtype;\n\n /*\n * Fields valid in all RTEs:\n */\n Attr *alias; /* user-written alias clause, if any */\n Attr *eref; /* expanded reference names */\n bool inh; /* inheritance requested? */\n bool inFromCl; /* present in FROM clause */\n bool checkForRead; /* check rel for read access */\n bool checkForWrite; /* check rel for write access */\n Oid checkAsUser; /* if not zero, check access as this user\n*/\n \n union {\n struct {\n /* Fields valid for a plain relation RTE (else NULL/zero): */\n char *relname; /* real name of the relation */\n Oid relid; /* OID of the relation */\n } rel;\n \n struct {\n /* Fields valid for a subquery RTE (else NULL) */\n Query *subquery; /* the sub-query */\n } sq;\n\n struct {\n /* Fields valid for function-as portal RTE */\n char *portal;\n TupleDesc tupleDesc;\n } func;\n } un;\n}\n\n\nAnd access would be: \n\nRangeTblEntry *rte; \nAssert(e->rkind == RTE_RELATION);\nrte->un.rel.relname\n\nI'm not sure which method is less ugly. 
I'm leaning towards 2).\nBut now I think that I'm always leaning in wrong direction :)\n\n\n-alex\n\n", "msg_date": "Sat, 30 Jun 2001 17:39:35 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "RangeTblEntry modifications" }, { "msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> 2) Keep one type, but unionize the fields. RTE definition would be:\n> I'm not sure which method is less ugly. I'm leaning towards 2).\n\nI like that better, also, mainly because it would force code updates\neveryplace that the RTE-type-specific fields are accessed; less chance\nof incorrect code getting overlooked that way.\n\nNote that some of the comments would be obsolete, eg\n\n> /* Fields valid for a plain relation RTE (else NULL/zero): */\n\nThey're not null/0 for a non-relation RTE, they're just not defined at\nall.\n\n> struct {\n> /* Fields valid for function-as portal RTE */\n> char *portal;\n> TupleDesc tupleDesc;\n> } func;\n\nI'm not sure I buy this, however. You'll need an actual\nRTE-as-function-call variant that has the function OID and compiled list\nof arguments. Remember that the parsetree may live a lot longer than\nany specific Portal. For a function call, we'll need to build the\nPortal from the function-call-describing RTE at the start of execution,\nnot during parsing.\n\nThis point may also make \"select from cursor\" rather harder than it\nappears at first glance. You may want to back off on that part for now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 30 Jun 2001 18:49:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RangeTblEntry modifications " }, { "msg_contents": "On Sat, 30 Jun 2001, Tom Lane wrote:\n\n> Alex Pilosov <alex@pilosoft.com> writes:\n> > 2) Keep one type, but unionize the fields. RTE definition would be:\n> > I'm not sure which method is less ugly. 
I'm leaning towards 2).\n> \n> I like that better, also, mainly because it would force code updates\n> everyplace that the RTE-type-specific fields are accessed; less chance\n> of incorrect code getting overlooked that way.\n\nI decided to go first way, it also forces code updates in all the places\n(boy there's a lot of them), but actually it forced some cleanup of code.\n\nInstead of checking whether relid is not null or relname is not null, it\nnow does IsA(rte, RangeTblEntryRelation). Certain functions do take only\nRangeTblEntryRelation, and thus its possible to typecheck more, etc.\n\nIn parsenodes, I have:\n\n#define RTE_COMMON_FIELDS \\\n NodeTag type; \\\n /* \\\n * Fields valid in all RTEs: \\\n */ \\\n Attr *alias; /* user-written alias clause, if any */ \\\n Attr *eref; /* expanded reference names */ \\\n bool inh; /* inheritance requested? */ \\\n bool inFromCl; /* present in FROM clause */ \\\n bool checkForRead; /* check rel for read access */ \\\n bool checkForWrite; /* check rel for write access */ \\\n Oid checkAsUser; /* if not zero, check access as this user*/ \\\n\ntypedef struct RangeTblEntry\n{\n RTE_COMMON_FIELDS\n} RangeTblEntry;\n\ntypedef struct RangeTblEntryRelation\n{\n RTE_COMMON_FIELDS\n /* Fields valid for a plain relation RTE (else NULL/zero): */\n char *relname; /* real name of the relation */\n Oid relid; /* OID of the relation */\n} RangeTblEntryRelation;\n\ntypedef struct RangeTblEntrySubSelect\n{\n RTE_COMMON_FIELDS\n /* Fields valid for a subquery RTE (else NULL) */\n Query *subquery; /* the sub-query */\n} RangeTblEntrySubSelect;\n\n\n> Note that some of the comments would be obsolete, eg\n> \n> > /* Fields valid for a plain relation RTE (else NULL/zero): */\n> \n> They're not null/0 for a non-relation RTE, they're just not defined at\n> all.\nRight...\n\n> \n> > struct {\n> > /* Fields valid for function-as portal RTE */\n> > char *portal;\n> > TupleDesc tupleDesc;\n> > } func;\n> \n> I'm not sure I buy this, however. 
You'll need an actual\n> RTE-as-function-call variant that has the function OID and compiled list\n> of arguments. Remember that the parsetree may live a lot longer than\n> any specific Portal. For a function call, we'll need to build the\n> Portal from the function-call-describing RTE at the start of execution,\n> not during parsing.\nYes, this is only a first try, just trying to get parser to work straight.\nI may need to add more fields.\n\n> This point may also make \"select from cursor\" rather harder than it\n> appears at first glance. You may want to back off on that part for now.\nIt is harder than I anticipated ;)\n\nUnfortunately (fortunately?) I decided to do 'select from cursor foo'\nfirst. (I hope you don't mind this syntax, it forces user to explicitly\nchoose that he's accessing cursor). \n\nThis will get me where I need to be, if a function returns a refcursor, I\ncan do following:\n\ndeclare foo cursor for func()\n\nselect * from cursor foo;\n\nThank you so much for the help, Tom.\n\n-alex\n\n", "msg_date": "Sat, 30 Jun 2001 19:29:19 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: RangeTblEntry modifications " }, { "msg_contents": "On Sat, 30 Jun 2001, Tom Lane wrote:\n\n> I'm not sure I buy this, however. You'll need an actual\n> RTE-as-function-call variant that has the function OID and compiled list\n> of arguments. Remember that the parsetree may live a lot longer than\n> any specific Portal. 
For a function call, we'll need to build the\n> Portal from the function-call-describing RTE at the start of execution,\n> not during parsing.\n\nI think I just understood what you were saying: having tupleDesc in RTE is\nnot kosher, because RTE can last longer than a given tupleDesc?\n\nSo, essentially I will need to extract attnames/atttypes from TupleDesc\nand store them separately...\n\nOr I'm totally off my rocker?\n\n", "msg_date": "Sat, 30 Jun 2001 20:01:54 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: RangeTblEntry modifications " }, { "msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> I think I just understood what you were saying: having tupleDesc in RTE is\n> not kosher, because RTE can last longer than a given tupleDesc?\n\nDepends where you got the tupleDesc from --- if you copy it into the\nparse context then it's OK in terms of not disappearing. However,\nthat doesn't mean it's still *valid*. 
Consider\n> \n> \tbegin;\n> \tdeclare foo cursor for select * from bar;\n> \tcreate view v1 as select * from cursor foo;\n> \tend;\n> \n> Now the cursor foo is no more, but v1 still exists ... what happens\n> when we try to select from v1?\nI believe it will fail in the executor trying to open the portal...That's\nthe expected behaviour, right?\n\n-alex \n\n", "msg_date": "Sun, 1 Jul 2001 08:58:28 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: RangeTblEntry modifications " } ]
[ { "msg_contents": "abstime regression test started failing this morning.\n\nSELECT '' AS six, ABSTIME_TBL.*\n WHERE ABSTIME_TBL.f1 < abstime 'Jun 30, 2001';\n\nThis used to select 'current', but doesn't anymore. Ooops.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 30 Jun 2001 17:54:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Another regression test fails to stand the test of time" } ]
[ { "msg_contents": "The description of the FE/BE protocol says:\n\n| The postmaster uses this info and the contents of the pg_hba.conf file\n| to determine what authentication method the frontend must use. The\n| postmaster then responds with one of the following messages:\n[...]\n| If the frontend does not support the authentication method requested by\n| the postmaster, then it should immediately close the connection.\n\nHowever, libpq doesn't do that. Instead, it leaves the connection open\nand returns CONNECTION_BAD to the client. The client would then\npresumably call something like PQfinish(), which sends a Terminate message\nand closes the connection. This happened to not confuse the <=7.1\npostmasters because they were waiting for 4 bytes and treated the early\nconnection close appropriately.\n\nOn this occasion let me also point out that\n\n pqPuts(\"X\", conn);\n\nis *not* the way to send a single byte 'X' to the server.\n\nIn current sources the backends do the authentication and use the pqcomm.c\nfunctions for communication, but those aren't that happy about the early\nconnection close:\n\n pq_recvbuf: unexpected EOF on client connection\n FATAL 1: Password authentication failed for user 'peter'\n pq_flush: send() failed: Broken pipe\n\nSo I figured I would sneak in a check for connection close before reading\nthe authentication response in the server, but since the frontends seems\nto be doing what they want I don't really know what to check for.\n\nShould I fix libpq to follow the docs in this and on the server's end make\nanything that's not either a connection close or a valid authentication\nresponse as \"unexpected EOF\"? 
That way old clients would produce a bit of\nnoise in the server log.\n\nDoes anyone know how the ODBC and JDBC drivers handle this situation?\n\nBtw., is recv(sock, x, 1, MSG_PEEK) == 0 an appropriate way to check for a\nclosed connection without reading anything?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sun, 1 Jul 2001 01:50:08 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "FE/BE protocol oddity" } ]
[ { "msg_contents": "Can someone tell me what do I need to do to get 'tab '(cr tab key =\nCREATE) working in psql. I previously ran postgres on Redhat 6.2 and now\nI am om running Mandrake 8 as my gui interface. I installed postgres\nagain but this feture did not work.\nCan somebody give me some help with this?\n\nThank you.\n", "msg_date": "Mon, 02 Jul 2001 18:36:04 +0200", "msg_from": "Phillip F Jansen <pfj@ucs.co.za>", "msg_from_op": true, "msg_subject": "tab" }, { "msg_contents": "Phillip F Jansen writes:\n\n> Can someone tell me what do I need to do to get 'tab '(cr tab key =\n> CREATE) working in psql. I previously ran postgres on Redhat 6.2 and now\n> I am om running Mandrake 8 as my gui interface. I installed postgres\n> again but this feture did not work.\n\nInstall readline and build again.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 2 Jul 2001 19:42:08 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: tab" } ]
[ { "msg_contents": "Is it documented somewhere how to do on-line backups and then how to\nrecover from some event using those backups with no (or minimum) loss of\ndata?\n", "msg_date": "Mon, 02 Jul 2001 13:24:00 -0400", "msg_from": "\"P. Dwayne Miller\" <dmiller@espgroup.net>", "msg_from_op": true, "msg_subject": "Production Backup and Recovery" }, { "msg_contents": "\"P. Dwayne Miller\" <dmiller@espgroup.net> writes:\n\n> Is it documented somewhere how to do on-line backups and then how to\n> recover from some event using those backups with no (or minimum) loss of\n> data?\n\nRight now there is no PIT recovery. All you can do is run pg_dump\nwhenever you want to make a backup. This will take a consistent\nsnapshot of the database and write it out as SQL code that will\nrecreate the database.. \n\nThere has been talk of using the new WAL (write-ahead log) feature to\nenable point-in-time recovery, but it is not currently implemented.\n\n-Doug\n-- \nThe rain man gave me two cures; he said jump right in,\nThe first was Texas medicine--the second was just railroad gin,\nAnd like a fool I mixed them, and it strangled up my mind,\nNow people just get uglier, and I got no sense of time... --Dylan\n", "msg_date": "02 Jul 2001 13:37:50 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Production Backup and Recovery" }, { "msg_contents": "\n\"Doug McNaught\" <doug@wireboard.com> wrote in message\nnews:m38zi7dyfl.fsf@belphigor.mcnaught.org...\n> \"P. Dwayne Miller\" <dmiller@espgroup.net> writes:\n>\n> > Is it documented somewhere how to do on-line backups and then how to\n> > recover from some event using those backups with no (or minimum) loss of\n> > data?\n>\n> Right now there is no PIT recovery. All you can do is run pg_dump\n> whenever you want to make a backup. This will take a consistent\n> snapshot of the database and write it out as SQL code that will\n> recreate the database..\n\nBy consistent... 
does that mean that committed transactions are completely\nthere, and uncommitted are not there at all?\n\n\n\n\n", "msg_date": "Mon, 02 Jul 2001 22:23:43 GMT", "msg_from": "\"John Moore\" <NOSPAMnews@NOSPAMtinyvital.com>", "msg_from_op": false, "msg_subject": "Re: Production Backup and Recovery" } ]
[ { "msg_contents": "I hope I'm not taking too much time from people here...\n\nHere's where I'm now (and don't beat me too hard :)\n\nI'm done with change of RangeTblEntry into three different node types:\nRangeTblEntryRelation,RangeTblEntrySubSelect,RangeTblEntryPortal which\nhave different fields. All the existing places instead of using\nrte->subquery to determine type now use IsA(rte, RangeTblEntrySubSelect),\nand later access fields after casting ((RangeTblEntrySubSelect *)rte)->xxx\n\nSome functions that always work on Relation RTEs are declared to accept\nRangeTblEntryRelation. Asserts are added everywhere before casting of RTE\ninto specific type. (Unless there was an IsA before, then I didn't put an\nAssert).\n\nLet me know if that is an acceptable way of doing things, or casting makes\nthings too ugly. (I believe its the best way, unions are more dangerous\nin this context).\n\nLet me know if it would make sense to rename these types into\nRTERelation,RTESubSelect,RTEPortal to cut down on their length ...\n\nNow, I'm going to more interesting stuff, planner:\n\nSELECT blah FROM CURSOR FOO, list_of_tables\n\nThis really should place CURSOR FOO at the very top of join list, since\n(generally), you don't know what cursor is opened for, and you cannot\nReScan a portal. \n\nI'm still trying to find out how to explain that to optimizer, but if\nsomeone gives me a hint, I'd appreciate it.\n\nAlternatively, we can store everything we retrieved from portal for future\nReScan purposes, but I really don't like this idea. \n\nBy the same token, I think, optimizer must reject queries of type:\n\"SELECT ... 
FROM CURSOR FOO, CURSOR BAR\" because such thing will lead to a\nnested loop scan, which is impossible because, again, you can't ReScan a\ncursor.\n\nLet me know if I'm very off track with these assumptions, and if you have\nto throw rocks at me, pick little ones ;)\n\n-alex\n\n\n\n", "msg_date": "Mon, 2 Jul 2001 18:31:50 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "selecting from cursor" }, { "msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> I'm done with change of RangeTblEntry into three different node types:\n> RangeTblEntryRelation,RangeTblEntrySubSelect,RangeTblEntryPortal which\n> have different fields. All the existing places instead of using\n> rte->subquery to determine type now use IsA(rte, RangeTblEntrySubSelect),\n> and later access fields after casting ((RangeTblEntrySubSelect *)rte)->xxx\n\n> Some functions that always work on Relation RTEs are declared to accept\n> RangeTblEntryRelation. Asserts are added everywhere before casting of RTE\n> into specific type. (Unless there was an IsA before, then I didn't put an\n> Assert).\n\n> Let me know if that is an acceptable way of doing things, or casting makes\n> things too ugly. (I believe its the best way, unions are more dangerous\n> in this context).\n\nAnd what are you doing with the places that don't care which kind of RTE\nthey are dealing with (which is most of them IIRC)? While you haven't\nshown us the proposed changes, I really suspect that a union would be\ncleaner, because it'd avoid ugliness in those places. Bear in mind that\nthe three RTE types that you have are going to become five or six real\nsoon now, because I have other things to fix that need to be done that\nway --- so the notational advantage of a union is going to increase.\n\n> ... you cannot ReScan a portal. \n\nThat's gonna have to be fixed. If you're not up for it, don't implement\nthis. 
Given that cursors (are supposed to) support FETCH BACKWARDS,\nI really don't see why they shouldn't be expected to handle ReScan...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Jul 2001 21:51:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: selecting from cursor " }, { "msg_contents": "On Mon, 2 Jul 2001, Tom Lane wrote:\n\n> Alex Pilosov <alex@pilosoft.com> writes:\n> > I'm done with change of RangeTblEntry into three different node types:\n> > RangeTblEntryRelation,RangeTblEntrySubSelect,RangeTblEntryPortal which\n> > have different fields. All the existing places instead of using\n> > rte->subquery to determine type now use IsA(rte, RangeTblEntrySubSelect),\n> > and later access fields after casting ((RangeTblEntrySubSelect *)rte)->xxx\n> \n> > Some functions that always work on Relation RTEs are declared to accept\n> > RangeTblEntryRelation. Asserts are added everywhere before casting of RTE\n> > into specific type. (Unless there was an IsA before, then I didn't put an\n> > Assert).\n> \n> > Let me know if that is an acceptable way of doing things, or casting makes\n> > things too ugly. (I believe its the best way, unions are more dangerous\n> > in this context).\n> \n> And what are you doing with the places that don't care which kind of RTE\n> they are dealing with (which is most of them IIRC)? While you haven't\n\nThey just have things declared as RangeTblEntry *, and as long as they\ndon't access type-specific fields, they are fine.\n\n> shown us the proposed changes, I really suspect that a union would be\n> cleaner, because it'd avoid ugliness in those places. Bear in mind that\nI have attached the diff of what I have now. Please take a look through\nit. 
Its not exactly done (only the parser and pieces of executor parts of\n\"FROM CURSOR\" are there, and I didn't attach gram.y diff because its just\ntoo ugly now), so its definitely not ready for application, but with these\nchanges, it compiles and 'make check' goes through fine ;)\n\n> the three RTE types that you have are going to become five or six real\n> soon now, because I have other things to fix that need to be done that\n> way --- so the notational advantage of a union is going to increase.\n> \n> > ... you cannot ReScan a portal. \n> \n> That's gonna have to be fixed. If you're not up for it, don't implement\n> this. Given that cursors (are supposed to) support FETCH BACKWARDS,\n> I really don't see why they shouldn't be expected to handle ReScan...\nI thought only scrollable cursors can do that. What if cursor isn't\nscrollable? Should it error during the execution?\n\nFor scrollable cursors, Rescan should be implemented as 'scroll backwards\nuntil you can't scroll no more', correct?\n\n-alex\n\n", "msg_date": "Mon, 2 Jul 2001 22:27:31 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: selecting from cursor " }, { "msg_contents": "Erm, forgot to attach the patch. Here it is.\n\nOn Mon, 2 Jul 2001, Alex Pilosov wrote:\n\n> They just have things declared as RangeTblEntry *, and as long as they\n> don't access type-specific fields, they are fine.\n> \n> > shown us the proposed changes, I really suspect that a union would be\n> > cleaner, because it'd avoid ugliness in those places. Bear in mind that\n> I have attached the diff of what I have now. Please take a look through\n> it. 
Its not exactly done (only the parser and pieces of executor parts of\n> \"FROM CURSOR\" are there, and I didn't attach gram.y diff because its just\n> too ugly now), so its definitely not ready for application, but with these\n> changes, it compiles and 'make check' goes through fine ;)", "msg_date": "Mon, 2 Jul 2001 22:32:01 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: selecting from cursor " }, { "msg_contents": "On Mon, 2 Jul 2001, Alex Pilosov wrote:\n\n> > shown us the proposed changes, I really suspect that a union would be\n> > cleaner, because it'd avoid ugliness in those places. Bear in mind that\n> I have attached the diff of what I have now. Please take a look through\n> it. Its not exactly done (only the parser and pieces of executor parts of\n> \"FROM CURSOR\" are there, and I didn't attach gram.y diff because its just\n> too ugly now), so its definitely not ready for application, but with these\n> changes, it compiles and 'make check' goes through fine ;)\nApparently the diff was too large for mailserver. I have posted it at \nhttp://www.formenos.org/pg/rte.diff\n\n95% of that diff is obvious changes. 5% are not-so-obvious :)\n\n-alex\n\n", "msg_date": "Tue, 3 Jul 2001 09:30:49 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: selecting from cursor " }, { "msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n>> And what are you doing with the places that don't care which kind of RTE\n>> they are dealing with (which is most of them IIRC)? While you haven't\n\n> They just have things declared as RangeTblEntry *, and as long as they\n> don't access type-specific fields, they are fine.\n\nSo you have four (soon to be six or seven) different structs that *must*\nhave the same fields? 
I don't think that's cleaner than a union ...\nat the very least, declare it as structs containing RangeTblEntry,\nsimilar to the way the various Plan node types work (see plannodes.h).\n\n> For scrollable cursors, Rescan should be implemented as 'scroll backwards\n> until you can't scroll no more', correct?\n\nNo, it should be implemented as Rescan. The portal mechanism needs to\nexpose the Rescan call for the contained querytree.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Jul 2001 10:02:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: selecting from cursor " }, { "msg_contents": "On Tue, 3 Jul 2001, Tom Lane wrote:\n\n> Alex Pilosov <alex@pilosoft.com> writes:\n> >> And what are you doing with the places that don't care which kind of RTE\n> >> they are dealing with (which is most of them IIRC)? While you haven't\n> \n> > They just have things declared as RangeTblEntry *, and as long as they\n> > don't access type-specific fields, they are fine.\n> \n> So you have four (soon to be six or seven) different structs that *must*\n> have the same fields? I don't think that's cleaner than a union ...\n> at the very least, declare it as structs containing RangeTblEntry,\n> similar to the way the various Plan node types work (see plannodes.h).\nPlease see my diffs. Its implemented via #define to declare all common\nfields. \n\nI.E.:\n#define RTE_COMMON_FIELDS \\\n NodeTag type; \\\n /* \\\n * Fields valid in all RTEs: \\\n */ \\\n Attr *alias; /* user-written alias clause, if any */ \\\n Attr *eref; /* expanded reference names */ \\\n bool inh; /* inheritance requested? 
*/ \\\n bool inFromCl; /* present in FROM clause */ \\\n bool checkForRead; /* check rel for read access */ \\\n bool checkForWrite; /* check rel for write access */ \\\n Oid checkAsUser; /* if not zero, check access as this user\n*/ \\\n\ntypedef struct RangeTblEntry\n{\n RTE_COMMON_FIELDS\n} RangeTblEntry;\n\ntypedef struct RangeTblEntryRelation\n{\n RTE_COMMON_FIELDS\n /* Fields valid for a plain relation RTE */\n char *relname; /* real name of the relation */\n Oid relid; /* OID of the relation */\n} RangeTblEntryRelation;\n\n\nIf RTEs are done the way plan nodes done, the syntax would be pretty much\nthe same, only with one more indirection to access common fields.\n\nThis is how code looks with my changes:\nRangeTblEntry *rte=rt_fetch(..)\n\nFor common fields\nrte->eref\n\nFor type-specific fields\n((RangeTblEntryRelation *) rte)->relid\n\nThis is how it would look if it was done like Plan nodes are done:\nRangeTblEntry *rte=rt_fetch(..)\n\nFor common fields:\nrte->common->eref\n\nFor type-specific fields:\n((RangeTblEntryRelation *)rte)->relid \n\n> > For scrollable cursors, Rescan should be implemented as 'scroll backwards\n> > until you can't scroll no more', correct?\n> \n> No, it should be implemented as Rescan. The portal mechanism needs to\n> expose the Rescan call for the contained querytree.\nOk.\n\n\n", "msg_date": "Tue, 3 Jul 2001 11:20:33 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: selecting from cursor " }, { "msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> On Tue, 3 Jul 2001, Tom Lane wrote:\n>> So you have four (soon to be six or seven) different structs that *must*\n>> have the same fields? I don't think that's cleaner than a union ...\n\n> Please see my diffs. Its implemented via #define to declare all common\n> fields. 
\n> #define RTE_COMMON_FIELDS \\\n> NodeTag type; \\\n> [etc]\n\nI don't think that technique is cleaner than a union, either ;-).\nThe macro definition is a pain in the neck: you have to play games with\nsemicolon placement, most tools won't autoindent it nicely, etc etc.\n\nBut the main point is that I think NodeType = RangeTblEntry with\na separate subtype field is a better way to go than making a bunch of\ndifferent NodeType values. When most of the fields are common, as in\nthis case, it's going to be true that many places only want to know\n\"is it a rangetable entry or not?\"\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Jul 2001 13:34:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: selecting from cursor " }, { "msg_contents": "On Tue, 3 Jul 2001, Tom Lane wrote:\n\n> Alex Pilosov <alex@pilosoft.com> writes:\n> > On Tue, 3 Jul 2001, Tom Lane wrote:\n> >> So you have four (soon to be six or seven) different structs that *must*\n> >> have the same fields? I don't think that's cleaner than a union ...\n> \n> > Please see my diffs. Its implemented via #define to declare all common\n> > fields. \n> > #define RTE_COMMON_FIELDS \\\n> > NodeTag type; \\\n> > [etc]\n> \n> I don't think that technique is cleaner than a union, either ;-).\n> The macro definition is a pain in the neck: you have to play games with\n> semicolon placement, most tools won't autoindent it nicely, etc etc.\n\nTrue true. On other hand, unlike union, its automatically typechecked, you\ncannot by mistake reference a field you shouldn't be referencing.\n\nStrict typechecking allows one to explicitly declare which type your\nfunction takes if you want, and force errors if you miscast something. 
I\nthink discipline is a good thing here...\n\nBut really its your call, no point in arguing :)\n\n> But the main point is that I think NodeType = RangeTblEntry with\n> a separate subtype field is a better way to go than making a bunch of\n> different NodeType values. When most of the fields are common, as in\n> this case, it's going to be true that many places only want to know\n> \"is it a rangetable entry or not?\"\n\nThat's why I have IsA_RTE(node) macro. (Same as we have IsA_Join and\nIsA_JoinPath already). It \n\n\n", "msg_date": "Tue, 3 Jul 2001 14:00:41 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: selecting from cursor " }, { "msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> True true. On other hand, unlike union, its automatically typechecked, you\n> cannot by mistake reference a field you shouldn't be referencing.\n\nOnly true to the extent that you have cast a generic pointer to the\ncorrect type to begin with. However, we've probably wasted more time\narguing the point than it's really worth.\n\nI would suggest leaving off the final semicolon in the macro definition\nso that you can write\n\ntypedef struct RangeTblEntryRelation\n{\n RTE_COMMON_FIELDS;\n /* Fields valid for a plain relation RTE */\n char *relname; /* real name of the relation */\n Oid relid; /* OID of the relation */\n\nWithout this, tools like pgindent will almost certainly mess up these\nstruct declarations (I know emacs' C mode will get it wrong...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Jul 2001 14:38:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: selecting from cursor " }, { "msg_contents": "On Mon, 2 Jul 2001, Alex Pilosov wrote:\n\n> Erm, forgot to attach the patch. Here it is.\n(yow) don't even bother looking at this patch. 
mail server delayed this\nmessage by almost a week, and by now, the code is totally changed.\n\nI took Tom's suggestion and made RTE a union. So, the below is a new\ndefinition of RTE:\n\nI have most of portal-related code working, only executor needs some more\nfixes. Code properly makes PortalScan Path entry, PortalScan Plan nodes,\netc. I have added PortalReScan to tell portal it needs to rescan itself. \n\nI'll post a correct patch next week. Thank you to everyone and especially\nTom for bearing with my often stupid questions.\n\n--cut here--rte definition--\ntypedef enum RTEType {\n RTE_RELATION,\n RTE_SUBSELECT,\n RTE_PORTAL\n} RTEType;\n\ntypedef struct RangeTblEntry\n{\n NodeTag type;\n RTEType rtetype;\n /*\n * Fields valid in all RTEs:\n */\n Attr *alias; /* user-written alias clause, if any */\n Attr *eref; /* expanded reference names */\n bool inh; /* inheritance requested? */\n bool inFromCl; /* present in FROM clause */\n bool checkForRead; /* check rel for read access */\n bool checkForWrite; /* check rel for write access */\n Oid checkAsUser; /* if not zero, check access as this user\n*/\n \n union {\n struct {\n /* Fields for a plain relation RTE (rtetype=RTE_RELATION) */\n char *relname; /* real name of the relation */\n Oid relid; /* OID of the relation */\n } rel;\n struct {\n /* Fields for a subquery RTE (rtetype=RTE_SUBSELECT) */\n Query *subquery; /* the sub-query */\n } sub;\n struct {\n /* fields for portal RTE (rtetype=RTE_PORTAL) */\n char *portalname; /* portal's name */\n } portal;\n } u;\n} RangeTblEntry;\n\n\n", "msg_date": "Sat, 7 Jul 2001 23:52:31 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: selecting from cursor " } ]
[ { "msg_contents": "I have been making some notes about the rules for accessing shared disk\nbuffers, since they aren't spelled out anywhere now AFAIK. In process\nI found what seems to be a nasty bug in the code that tries to build\nbtree indexes that include already-dead tuples. (If memory serves,\nHiroshi added that code awhile back to help suppress the \"heap tuples\n!= index tuples\" complaint from VACUUM.)\n\nWould people look this over and see if they agree with my deductions?\n\n\t\t\tregards, tom lane\n\n\nNotes about shared buffer access rules\n--------------------------------------\n\nThere are two separate access control mechanisms for shared disk buffers:\nreference counts (a/k/a pin counts) and buffer locks. (Actually, there's\na third level of access control: one must hold the appropriate kind of\nlock on a relation before one can legally access any page belonging to\nthe relation. Relation-level locks are not discussed here.)\n\nPins: one must \"hold a pin on\" a buffer (increment its reference count)\nbefore being allowed to do anything at all with it. An unpinned buffer is\nsubject to being reclaimed and reused for a different page at any instant,\nso touching it is unsafe. Typically a pin is acquired by ReadBuffer and\nreleased by WriteBuffer (if one modified the page) or ReleaseBuffer (if not).\nIt is OK and indeed common for a single backend to pin a page more than\nonce concurrently; the buffer manager handles this efficiently. It is\nconsidered OK to hold a pin for long intervals --- for example, sequential\nscans hold a pin on the current page until done processing all the tuples\non the page, which could be quite a while if the scan is the outer scan of\na join. Similarly, btree index scans hold a pin on the current index\npage. This is OK because there is actually no operation that waits for a\npage's pin count to drop to zero. 
(Anything that might need to do such a\nwait is instead handled by waiting to obtain the relation-level lock,\nwhich is why you'd better hold one first.) Pins may not be held across\ntransaction boundaries, however.\n\nBuffer locks: there are two kinds of buffer locks, shared and exclusive,\nwhich act just as you'd expect: multiple backends can hold shared locks\non the same buffer, but an exclusive lock prevents anyone else from\nholding either shared or exclusive lock. (These can alternatively be\ncalled READ and WRITE locks.) These locks are relatively short term:\nthey should not be held for long. They are implemented as per-buffer\nspinlocks, so another backend trying to acquire a competing lock will\nspin as long as you hold yours! Buffer locks are acquired and released by\nLockBuffer(). It will *not* work for a single backend to try to acquire\nmultiple locks on the same buffer. One must pin a buffer before trying\nto lock it.\n\nBuffer access rules:\n\n1. To scan a page for tuples, one must hold a pin and either shared or\nexclusive lock. To examine the commit status (XIDs and status bits) of\na tuple in a shared buffer, one must likewise hold a pin and either shared\nor exclusive lock.\n\n2. Once one has determined that a tuple is interesting (visible to the\ncurrent transaction) one may drop the buffer lock, yet continue to access\nthe tuple's data for as long as one holds the buffer pin. This is what is\ntypically done by heap scans, since the tuple returned by heap_fetch\ncontains a pointer to tuple data in the shared buffer. Therefore the\ntuple cannot go away while the pin is held (see rule #5). Its state could\nchange, but that is assumed not to matter after the initial determination\nof visibility is made.\n\n3. To add a tuple or change the xmin/xmax fields of an existing tuple,\none must hold a pin and an exclusive lock on the containing buffer.\nThis ensures that no one else might see a partially-updated state of the\ntuple.\n\n4. 
It is considered OK to update tuple commit status bits (ie, OR the\nvalues HEAP_XMIN_COMMITTED, HEAP_XMIN_INVALID, HEAP_XMAX_COMMITTED, or\nHEAP_XMAX_INVALID into t_infomask) while holding only a shared lock and\npin on a buffer. This is OK because another backend looking at the tuple\nat about the same time would OR the same bits into the field, so there\nis little or no risk of conflicting update; what's more, if there did\nmanage to be a conflict it would merely mean that one bit-update would\nbe lost and need to be done again later.\n\n5. To physically remove a tuple or compact free space on a page, one\nmust hold a pin and an exclusive lock, *and* observe while holding the\nexclusive lock that the buffer's shared reference count is one (ie,\nno other backend holds a pin). If these conditions are met then no other\nbackend can perform a page scan until the exclusive lock is dropped, and\nno other backend can be holding a reference to an existing tuple that it\nmight expect to examine again. Note that another backend might pin the\nbuffer (increment the refcount) while one is performing the cleanup, but\nit won't be able to actually examine the page until it acquires shared\nor exclusive lock.\n\n\nCurrently, the only operation that removes tuples or compacts free space\nis VACUUM. It does not have to implement rule #5 directly, because it\ninstead acquires exclusive lock at the relation level, which ensures\nindirectly that no one else is accessing tuples of the relation at all.\nTo implement concurrent VACUUM we will need to make it obey rule #5.\n\n\nI believe that nbtree.c's btbuild() code is currently in violation of\nthese rules, because it calls HeapTupleSatisfiesNow() while holding a\npin but no lock on the containing buffer. Not only does this risk\nreturning a wrong answer if it sees an intermediate state of the tuple\nxmin/xmax fields, but HeapTupleSatisfiesNow() may try to update the\ntuple's infomask.
This could produce a wrong state of the infomask if\nthere is a concurrent change of the infomask by an exclusive-lock holder.\n(Even if that catastrophe does not happen, SetBufferCommitInfoNeedsSave\nis not called as it should be when the status bits change.)\n\nI am also disturbed that several places in heapam.c call\nHeapTupleSatisfiesUpdate without checking for infomask change and calling\nSetBufferCommitInfoNeedsSave. In most paths through the code, it seems\nthe buffer will get marked dirty anyway, but wouldn't it be better to make\nthis check?\n", "msg_date": "Mon, 02 Jul 2001 21:40:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Buffer access rules, and a probable bug" }, { "msg_contents": "Tom Lane wrote:\n> \n> I have been making some notes about the rules for accessing shared disk\n> buffers, since they aren't spelled out anywhere now AFAIK. In process\n> I found what seems to be a nasty bug in the code that tries to build\n> btree indexes that include already-dead tuples. (If memory serves,\n> Hiroshi added that code awhile back to help suppress the \"heap tuples\n> != index tuples\" complaint from VACUUM.)\n> \n\n[snip]\n\n> \n> I believe that nbtree.c's btbuild() code is currently in violation of\n> these rules, because it calls HeapTupleSatisfiesNow() while holding a\n> pin but no lock on the containing buffer.\n\nOK, we had better avoid using heapam routines in btbuild() ? 
\n\nregards,\n\nHiroshi Inoue\n", "msg_date": "Tue, 03 Jul 2001 12:27:58 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Buffer access rules, and a probable bug" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Tom Lane wrote:\n>> I believe that nbtree.c's btbuild() code is currently in violation of\n>> these rules, because it calls HeapTupleSatisfiesNow() while holding a\n>> pin but no lock on the containing buffer.\n\n> OK, we had better avoid using heapam routines in btbuild() ? \n\nOn further thought, btbuild is not that badly broken at the moment,\nbecause CREATE INDEX acquires ShareLock on the relation, so there can be\nno concurrent writers at the page level. Still, it seems like it'd be a\ngood idea to do \"LockBuffer(buffer, BUFFER_LOCK_SHARE)\" here, and\nprobably also to invoke HeapTupleSatisfiesNow() via the\nHeapTupleSatisfies() macro so that infomask update is checked for.\nVadim, what do you think?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Jul 2001 13:06:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: Buffer access rules, and a probable bug " }, { "msg_contents": "On Mon, Jul 02, 2001 at 09:40:25PM -0400, Tom Lane wrote:\n> \n> 4. It is considered OK to update tuple commit status bits (ie, OR the\n> values HEAP_XMIN_COMMITTED, HEAP_XMIN_INVALID, HEAP_XMAX_COMMITTED, or\n> HEAP_XMAX_INVALID into t_infomask) while holding only a shared lock and\n> pin on a buffer. This is OK because another backend looking at the tuple\n> at about the same time would OR the same bits into the field, so there\n> is little or no risk of conflicting update; what's more, if there did\n> manage to be a conflict it would merely mean that one bit-update would\n> be lost and need to be done again later.\n\nWithout looking at the code, this seems mad. 
Are you sure?\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Tue, 3 Jul 2001 12:36:19 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Buffer access rules, and a probable bug" }, { "msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n> On Mon, Jul 02, 2001 at 09:40:25PM -0400, Tom Lane wrote:\n>> 4. It is considered OK to update tuple commit status bits (ie, OR the\n>> values HEAP_XMIN_COMMITTED, HEAP_XMIN_INVALID, HEAP_XMAX_COMMITTED, or\n>> HEAP_XMAX_INVALID into t_infomask) while holding only a shared lock and\n>> pin on a buffer. This is OK because another backend looking at the tuple\n>> at about the same time would OR the same bits into the field, so there\n>> is little or no risk of conflicting update; what's more, if there did\n>> manage to be a conflict it would merely mean that one bit-update would\n>> be lost and need to be done again later.\n\n> Without looking at the code, this seems mad. Are you sure?\n\nYes. Those status bits aren't ground truth, only hints. They cache the\nresults of looking up transaction status in pg_log; if they get dropped,\nthe only consequence is the next visitor to the tuple has to do the\nlookup over again.\n\nChanging any other bits in t_infomask requires exclusive lock, however.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Jul 2001 17:11:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Buffer access rules, and a probable bug " }, { "msg_contents": "On Tue, Jul 03, 2001 at 05:11:46PM -0400, Tom Lane wrote:\n> ncm@zembu.com (Nathan Myers) writes:\n> > On Mon, Jul 02, 2001 at 09:40:25PM -0400, Tom Lane wrote:\n> >> 4. It is considered OK to update tuple commit status bits (ie, OR the\n> >> values HEAP_XMIN_COMMITTED, HEAP_XMIN_INVALID, HEAP_XMAX_COMMITTED, or\n> >> HEAP_XMAX_INVALID into t_infomask) while holding only a shared lock and\n> >> pin on a buffer. 
This is OK because another backend looking at the tuple\n> >> at about the same time would OR the same bits into the field, so there\n> >> is little or no risk of conflicting update; what's more, if there did\n> >> manage to be a conflict it would merely mean that one bit-update would\n> >> be lost and need to be done again later.\n> \n> > Without looking at the code, this seems mad. Are you sure?\n> \n> Yes. Those status bits aren't ground truth, only hints. They cache the\n> results of looking up transaction status in pg_log; if they get dropped,\n> the only consequence is the next visitor to the tuple has to do the\n> lookup over again.\n> \n> Changing any other bits in t_infomask requires exclusive lock, however.\n\nHmm, look:\n\n A B\n\n1 load t_infomask\n\n2 or XMIN_INVALID\n\n3 lock\n\n4 load t_infomask\n\n5 or MOVED_IN\n\n6 store t_infomask\n\n7 unlock\n\n8 store t_infomask\n\nHere, backend B is a good citizen and locks while it makes its change. \nBackend A only means to touch the \"hint\" bits, but it ends up clobbering\nthe MOVED_IN bit too.\n\nAlso, as hints, would it be Bad(tm) if an attempt to clear one failed?\nThat seems possible as well.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Tue, 3 Jul 2001 15:59:34 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Buffer access rules, and a probable bug" }, { "msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n> Here, backend B is a good citizen and locks while it makes its change.\n\nNo, backend B wasn't a good citizen: it should have been holding\nexclusive lock on the buffer.\n\n> Also, as hints, would it be Bad(tm) if an attempt to clear one failed?\n\nClearing hint bits is also an exclusive-lock-only operation. 
Notice\nI specified that *setting* them is the only case allowed to be done\nwith shared lock.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Jul 2001 23:29:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Buffer access rules, and a probable bug " }, { "msg_contents": "Sorry for the delay.\n\nOn Tue, 3 Jul 2001, Tom Lane wrote:\n\n> ncm@zembu.com (Nathan Myers) writes:\n> \n> > Also, as hints, would it be Bad(tm) if an attempt to clear one failed?\n> \n> Clearing hint bits is also an exclusive-lock-only operation. Notice\n> I specified that *setting* them is the only case allowed to be done\n> with shared lock.\n\nOne problem though is that if you don't have a spin lock around the flag,\nyou can end up clearing it inadvertently. i.e. two backends go to update\n(different) bit flags. They each load the current value, and each set the\n(different) bit they want to set. They then store the new value they each\nhave come up with. The second store will effectively clear the bit set in\nthe first store.\n\n??\n\nTake care,\n\nBill\n\n", "msg_date": "Tue, 10 Jul 2001 13:21:02 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@zembu.com>", "msg_from_op": false, "msg_subject": "Re: Buffer access rules, and a probable bug " }, { "msg_contents": "Bill Studenmund <wrstuden@zembu.com> writes:\n> One problem though is that if you don't have a spin lock around the flag,\n> you can end up clearing it inadvertently. i.e. two backends go to update\n> (different) bit flags. They each load the current value, and each set the\n> (different) bit they want to set. They then store the new value they each\n> have come up with. The second store will effectively clear the bit set in\n> the first store.\n\nYes, this is the scenario we're talking about. The point is that losing\nthe bit is harmless, in this particular situation, because it will be\nset again the next time the tuple is visited. It's only a hint.
That\nbeing the case, we judge that avoiding the necessity for an exclusive\nlock during read is well worth a (very infrequent) need to do an extra\npg_log lookup due to loss of a hint bit.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jul 2001 16:38:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Buffer access rules, and a probable bug " } ]
[ { "msg_contents": "Hi there,\n\n we are developing reporting software and want to support at least one open \nsource alternative to oracle. Things were going very smoothly with postgresql \nand the only problem we are facing is that I get an exception stating that \nwith the current jdbc driver prepared statements are not supported (yet). Are \nthere any plans, probably even kind of time frames, when to support prepared \nstatements? \n\n Even if postgresql wouldn't support preparing the statement it would be \nnice if it would at least fake it, so that the API can be used. We are \nrelying on a third party component, which just can do prepared statements.\n\nThanks,\nMariano\n", "msg_date": "Tue, 3 Jul 2001 09:06:05 +0200", "msg_from": "Mariano Kamp <mkamp@codamax.com>", "msg_from_op": true, "msg_subject": "JDBC Support - prepared Statements?" }, { "msg_contents": "> ... and the only problem we are facing is that I get an exception stating\nthat \n> with the current jdbc driver prepared statements are not supported (yet).\n\nDid you try the latest 7.1.2 release? I did so recently and did not\nencounter any problems with PreparedStatements. (There are still other bugs in the\nJDBC driver, but that's another story.)\n\n-- \nBest regards\nRainer Klute\n\n Dipl.-Inform. E-Mail: rainer.klute@epost.de\n Rainer Klute Tel.: +49 172 2324824\n Körner Grund 24 Fax: +49 231 511809\nD-44143 Dortmund\n\nGMX - The communication platform on the Internet.\nhttp://www.gmx.net\n\nGMX tip:\n\nTurn your hobby into money with our partner 1&1!\nhttp://profiseller.de/info/index.php3?ac=OM.PS.PS003K00596T0409a\n\n", "msg_date": "Tue, 3 Jul 2001 10:02:31 +0200 (MEST)", "msg_from": "Rainer Klute <rainer.klute@gmx.de>", "msg_from_op": false, "msg_subject": "Re: JDBC Support - prepared Statements?" }, { "msg_contents": "Hello Rainer,\n\n yes I did so.
The name of the rpm I recently downloaded is: \npostgresql-jdbc-7.1.2-4PGDG.i386.rpm and it is throwing the exception when \ncalling the prepareStatement() Method.\n \n But when I get back some air to breathe I will go and double-double-check \nthat my classpath is set up properly ;-) \n\nMariano\n\n\n\n\nOn Tuesday 03 July 2001 10:02 am, Rainer Klute wrote:\n> > ... and the only problem we are facing is that I get an exception stating\n>\n> that\n>\n> > with the current jdbc driver prepared statements are not supported (yet).\n>\n> Did you try the latest 7.1.2 release? I did so recently and did not\n> encounter any problems with PreparedStatements. (There are still other bugs\n> in the JDBC driver, but that's another story.)\n", "msg_date": "Tue, 3 Jul 2001 10:34:07 +0200", "msg_from": "Mariano Kamp <mkamp@codamax.com>", "msg_from_op": true, "msg_subject": "Re: JDBC Support - prepared Statements?" }, { "msg_contents": "> yes I did so. The name of the rpm I recently downloaded is: \n> postgresql-jdbc-7.1.2-4PGDG.i386.rpm and it is throwing the exception when\n> calling the prepareStatement() Method.\n\nHm, I build everything from source. Perhaps the RPM is buggy.\n\n-- \nBest regards\nRainer Klute\n\n Dipl.-Inform. E-Mail: rainer.klute@epost.de\n Rainer Klute Tel.: +49 172 2324824\n Körner Grund 24 Fax: +49 231 511809\nD-44143 Dortmund\n\nGMX - The communication platform on the Internet.\nhttp://www.gmx.net\n\nGMX tip:\n\nTurn your hobby into money with our partner 1&1!\nhttp://profiseller.de/info/index.php3?ac=OM.PS.PS003K00596T0409a\n\n", "msg_date": "Tue, 3 Jul 2001 12:25:31 +0200 (MEST)", "msg_from": "Rainer Klute <rainer.klute@gmx.de>", "msg_from_op": false, "msg_subject": "Re: JDBC Support - prepared Statements?" }, { "msg_contents": "Mariano,\n\nThe JDBC driver does support PreparedStatements and has for a while.
\nCan you provide more information and preferably a code example showing \nthe problem you are seeing?\n\nthanks,\n--Barry\n\nMariano Kamp wrote:\n\n> Hi there,\n> \n> we are developing reporting software and want to support at least one open \n> source alternative to oracle. Things were going very smoothly with postgresql \n> and the only problem we are facing is the I get an exception stating that \n> with the current jdbc driver prepared statements are not supported (yet). Are \n> there any plans, probably even kind of time frames, when to support prepared \n> statements? \n> \n> Even if postgresql wouldn't support preparing the statement it would be \n> nice if it would at least fake it, so that the API can be used. We are \n> relying on a third party component, which just can do prepared statements.\n> \n> Thanks,\n> Mariano\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> \n\n\n", "msg_date": "Tue, 03 Jul 2001 08:04:48 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] JDBC Support - prepared Statements?" }, { "msg_contents": "I would get the newest Jdbc from:\n\n\thttp://jdbc.fastcrypt.com\n\nIt has more bug fixes than 7.1.X.\n\n> Hello Rainer,\n> \n> yes I did so. The name of the rpm I recently downloaded is: \n> postgresql-jdbc-7.1.2-4PGDG.i386.rpm and it is throwing the exception when \n> calling the prepareStatement() Method.\n> \n> But when I get back some air to breathe I will go and double-double-check \n> that my classpath is setup properly ;-) \n> \n> Mariano\n> \n> \n> \n> \n> On Tuesday 03 July 2001 10:02 am, Rainer Klute wrote:\n> > > ... and the only problem we are facing is the I get an exception stating\n> >\n> > that\n> >\n> > > with the current jdbc driver prepared statements are not supported (yet).\n> >\n> > Did you try the latest 7.1.2 release? 
I did so recently and did not\n> > encounter any problems with PreparedStatements. (There are still other bugs\n> > in the JDBC driver, but that's another story.)\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 3 Jul 2001 12:41:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: JDBC Support - prepared Statements?" }, { "msg_contents": "> > yes I did so. The name of the rpm I recently downloaded is: \n> > postgresql-jdbc-7.1.2-4PGDG.i386.rpm and it is throwing the exception when\n> > calling the prepareStatement() Method.\n> \n> Hm, I build everything from source. Perhaps the RPM is buggy.\n\nI would get the JAR from:\n\n\thttp://jdbc.fastcrypt.com\n\nThis is a copy of the CVS tree, which has some bugs fixed from 7.1.2.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 10 Jul 2001 17:21:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: JDBC Support - prepared Statements?" }, { "msg_contents": "The release postgresql-jdbc-7.1.2-4PGDG.i386.rpm is definitely buggy. Or at\nleast there has been a confusion with versioning.
I have had a number of\nsmall issues, but when I downloaded the source and compiled the JDBC drivers\nthrough ANT they worked fine.\n\naddBatch() and executeBatch() should work also in the RPM but they throw\n\"method not yet implemented errors\"...\n\njon folland\n\nhead of technology\n_ tripledash /\n\nphone\t +44 020 7377 07 75\nfax \t +44 020 7247 69 30\nmobile\t +44 07974 324 260\n\nwww.tripledash.co.uk\n\n\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\nSent: 10 July 2001 22:21\nTo: Rainer Klute\nCc: Mariano Kamp; pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] JDBC Support - prepared Statements?\n\n\n> > yes I did so. The name of the rpm I recently downloaded is:\n> > postgresql-jdbc-7.1.2-4PGDG.i386.rpm and it is throwing the exception\nwhen\n> > calling the prepareStatement() Method.\n>\n> Hm, I build everything from source. Perhaps the RPM is buggy.\n\nI would get the JAR from:\n\n\thttp://jdbc.fastcrypt.com\n\nThis is a copy of the CVS tree, which has some bugs fixed from 7.1.2.\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\nhttp://www.postgresql.org/search.mpl", "msg_date": "Wed, 11 Jul 2001 11:42:24 +0100", "msg_from": "\"Jon Folland\" <jon@hot-computers.com>", "msg_from_op": false, "msg_subject": "RE: JDBC Support - prepared Statements?" } ]
[ { "msg_contents": "> > That's gonna have to be fixed. If you're not up for it, don't implement\n> > this. Given that cursors (are supposed to) support FETCH BACKWARDS,\n> > I really don't see why they shouldn't be expected to handle ReScan...\n> I thought only scrollable cursors can do that. What if cursor isn't\n> scrollable? Should it error during the execution?\n\nIn PostgreSQL, all cursors are scrollable. The allowed grammar keyword is\nsimply ignored. I am actually not sure that this is optimal, since there\nare a few very effective optimizations, that you can do if you know, that \nReScan is not needed (like e.g. not storing the result temporarily).\n\nAndreas\n", "msg_date": "Tue, 3 Jul 2001 10:36:43 +0200 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: selecting from cursor " }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> this. Given that cursors (are supposed to) support FETCH BACKWARDS,\n> I really don't see why they shouldn't be expected to handle ReScan...\n>> I thought only scrollable cursors can do that. What if cursor isn't\n>> scrollable? Should it error during the execution?\n\n> In PostgreSQL, all cursors are scrollable. The allowed grammar keyword is\n> simply ignored. I am actually not sure that this is optimal, since there\n> are a few very effective optimizations, that you can do if you know, that \n> ReScan is not needed (like e.g. not storing the result temporarily).\n\nIt's worse than that: we don't distinguish plans for cursors from plans\nfor any other query, hence *all* query plans are supposed to be able to\nrun backwards. (In practice, a lot of them don't work :-(.) Someday\nthat needs to be improved. 
It would be good if the system understood\nwhether a particular plan node would ever be asked to rescan itself or\nrun backwards, and could optimize things on that basis.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Jul 2001 10:12:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: selecting from cursor " } ]
[ { "msg_contents": "Jean-Francois Leveque wrote:\n>\n> I want to check something before\n> a delete is made.\n>\n> I made a before delete trigger that\n> calls a procedure.\n>\n> The procedure raises an exception\n> when I don't want the delete to be\n> made (I could also have returned NULL,\n> but wouldn't have got much information\n> from it).\n>\n>\n> The question is :\n> What do I return when I want the delete to be made ?\n>\n> If I return OLD (known when deleting), maybe that\n> cancels the delete too.\n>\n> I don't have NEW (known only on insert/update).\n>\n> I couldn't find the answer in the docs.\n\n Returning OLD from a BEFORE ROW trigger lets the delete\n happen - no \"maybe\" here. On AFTER ROW triggers it doesn't\n matter what you return, the delete happened already (well,\n RAISE EXCEPTION will rollback of course).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Tue, 3 Jul 2001 09:46:56 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": true, "msg_subject": "Re: Trigger Procedures question" }, { "msg_contents": "\nI want to check something before\na delete is made.\n\nI made a before delete trigger that\ncalls a procedure.\n\nThe procedure raises an exception\nwhen I don't want the delete to be\nmade (I could also have returned NULL,\nbut wouldn't have got much information\nfrom it).\n\n\nThe question is :\nWhat do I return when I want the delete to be made ?\n\nIf I return OLD (known when deleting), maybe that\ncancels the delete too.\n\nI don't have NEW (known only on insert/update).\n\nI couldn't find the answer in the docs.\n\n\nBest regards,\n\nJean-Francois Leveque\n\n\n______________________________________________________________________\nOn WebMailS.com, my free e-mail address.\nMultilingual, secure and permanent service. http://www.webmails.com/\n", "msg_date": "Tue, 3 Jul 2001 14:59:1 +0100", "msg_from": "\"Jean-Francois Leveque\" <leveque@webmails.com>", "msg_from_op": false, "msg_subject": "Trigger Procedures question" } ]
[ { "msg_contents": "Hi\nThis patch against postgresql 7.1.2 allows you to control access based on the\nvirtual host address only (virtualhost access type), or both the remote\naddress and the local address (connection access type).\n\nFor example:\n\nconnection all 192.168.42.0 255.255.255.0 192.168.1.42 255.255.255.255 trust\n\n\nThis patch also allows keyword \"samehost\", similar to \"sameuser\" but for\nhosts.\n\nFor example:\n\nvirtualhost sameuser samehost.sql.domain.com 255.255.255.255 trust\n\nwill prevent you from doing 1 entry per user, all you need is a\n(local) dns entry for each host (user foo needs foo.sql.domain.com).\nIf the dns entry is not found, the line is dropped, so rejecting with\nsamehost is not a good idea for the moment.\n\nAny comments are welcome.\nPlease note that I'm not on the list.\n\n---\nDamien Clermonte", "msg_date": "Tue, 03 Jul 2001 17:17:08 +0200", "msg_from": "Damien =?ISO-8859-1?Q?Clermont=E9?= <damien.clermonte@free.fr>", "msg_from_op": true, "msg_subject": "[PATCH] Patch to make pg_hba.conf handle virtualhost access control\n\tand samehost keyword" }, { "msg_contents": "Damien =?ISO-8859-1?Q?Clermont=E9?= <damien.clermonte@free.fr> writes:\n> Any comments are welcome.\n\nFor one thing: a documentation patch is needed to go with this.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Jul 2001 15:45:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Patch to make pg_hba.conf handle virtualhost access\n\tcontrol and samehost keyword" }, { "msg_contents": "Hi\n\n\nNew try, with a (probably bad english) documentation patch this time :)\n\n\nThis patch against postgresql 7.1.2 allows you to control access based on the\nvirtual host address only (virtualhost access type), or both the remote\naddress and the local address (connection access type).\n\nFor example:\n\nconnection all 192.168.42.0 255.255.255.0 192.168.1.42 255.255.255.255 trust\n\n\nThis patch also allows
keyword \"samehost\", similar to \"sameuser\" but for\nhosts.\n\nFor example:\n\nvirtualhost sameuser samehost.sql.domain.com 255.255.255.255 trust\n\nwill prevent you from doing 1 entry per user, all you need is a\n(local) dns entry for each host (user foo needs foo.sql.domain.com).\nIf the dns entry is not found, the line is dropped, so rejecting with\nsamehost is not a good idea for the moment.\n\nAny comments are welcome.\nPlease note that I'm not on the list.\n\n---\nDamien Clermonte", "msg_date": "Wed, 04 Jul 2001 14:51:49 +0200", "msg_from": "Damien =?ISO-8859-1?Q?Clermont=E9?= <damien.clermonte@free.fr>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Patch to make pg_hba.conf handle virtualhost access\n\tcontrol and samehost keyword" }, { "msg_contents": "Damien Clermonté writes:\n\n> This patch against postgresql 7.1.2 allows you to control access based on the\n> virtual host address only (virtualhost access type), or both the remote\n> address and the local address (connection access type).\n>\n> For example:\n>\n> connection all 192.168.42.0 255.255.255.0 192.168.1.42 255.255.255.255 trust\n\nI completely fail to understand what this does. What is the expression\nthat will be evaluated based on these four numbers?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sun, 8 Jul 2001 09:47:45 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Patch to make pg_hba.conf handle virtualhost\n\taccess control and samehost keyword" }, { "msg_contents": "> Damien Clermonté
writes:\n> \n> > This patch against postgresql 7.1.2 allows you to control access based on the\n> > virtual host address only (virtualhost access type), or both the remote\n> > address and the local address (connection access type).\n> >\n> > For example:\n> >\n> > connection all 192.168.42.0 255.255.255.0 192.168.1.42 255.255.255.255 trust\n> \n> I completely fail to understand what this does. What is the expression\n> that will be evaluated based on these four numbers?\n\nThe killer for me is the added complexity to an already complex file,\npg_hba.conf.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 8 Jul 2001 10:44:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Patch to make pg_hba.conf handle virtualhost access\n\tcontrol and samehost keyword" }, { "msg_contents": "Bruce Momjian writes:\n\n> The killer for me is the added complexity to an already complex file,\n> pg_hba.conf.\n\nI never figured pg_hba.conf was complex. It's one of the simplest\nconfiguration files I've seen.
What makes it look complex is that it\nbegins with 700 lines explaining it, obscuring the actual content.\n\n-- \nPeter Eisentraut   peter_e@gmx.net   http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sun, 8 Jul 2001 17:38:50 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Patch to make pg_hba.conf handle virtualhost\n\taccess control and samehost keyword" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > The killer for me is the added complexity to an already complex file,\n> > pg_hba.conf.\n> \n> I never figured pg_hba.conf was complex.  It's one of the simplest\n> configuration files I've seen.
Or\nsomething.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 08 Jul 2001 16:49:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Patch to make pg_hba.conf handle virtualhost access\n\tcontrol and samehost keyword" }, { "msg_contents": "Tom Lane writes:\n\n> Since pg_hba.conf is re-read on every connection, I've always thought\n> it was pretty bogus to bulk it up with that much internal documentation.\n\nMaybe it should be cached in memory and only be re-read on request\n(SIGHUP).
Parsing that file every time is undoubtedly a large\n> fraction of\n> > the total connection startup time.\n>\n> Okay with me if someone wants to do it ... but that'd be a lot more work\n> than just moving the documentation ...\n\nOr cache the information and just do a file modification timestamp check on\neach connection to see if it needs to be reread. (Best of both worlds??)\n\nChris\n\n", "msg_date": "Mon, 9 Jul 2001 09:32:18 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Patch to make pg_hba.conf handle virtualhost access\n\tcontrol and samehost keyword" }, { "msg_contents": "How about moving the documentation to a pg_hba.conf.README file?\n\nPeople shouldn't be able to miss that very easily.\n\n+ Justin\n\nChristopher Kings-Lynne wrote:\n> \n> > > Maybe it should be cached in memory and only be re-read on request\n> > > (SIGHUP). Parsing that file every time is undoubtedly a large\n> > fraction of\n> > > the total connection startup time.\n> >\n> > Okay with me if someone wants to do it ... but that'd be a lot more work\n> > than just moving the documentation ...\n> \n> Or cache the information and just do a file modification timestamp check on\n> each connection to see if it needs to be reread. 
(Best of both worlds??)\n> \n> Chris\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n", "msg_date": "Mon, 09 Jul 2001 12:44:52 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Patch to make pg_hba.conf handle virtualhost\n\taccess control and samehost keyword" }, { "msg_contents": "Tom Lane writes:\n\n> Since pg_hba.conf is re-read on every connection, I've always thought\n> it was pretty bogus to bulk it up with that much internal documentation.\n> I've not tried to measure how much time it takes the postmaster to skip\n> over those 200 comment lines, but it can't be completely negligible.\n\nI've run a simplistic test for this. I've let psql start 10000 times\nsequentially and timed the backend startup. All times are wall clock\n(gettimeofday). The first checkpoint is after the accept(), the second\nbefore the backend loop begins. The machine had a load average of 1.00 to\n1.50 and wasn't running anything else besides \"infrastructure\".\n\ndefault pg_hba.conf\n\n count | min | max | avg | stddev\n-------+----------+----------+--------------+---------------------\n 10000 | 0.024667 | 0.060208 | 0.0298723081 | 0.00746411804719077\n\npg_hba.conf, all comments removed\n\n count | min | max | avg | stddev\n-------+----------+---------+--------------------+---------------------\n 10000 | 0.022364 | 0.05946 | 0.0262477744000001 | 0.00570493964559965\n\nSo we're looking at a possible 12% win. 
I suggest we remove the comments\nand direct the user to the Admin Guide.\n\nBtw., in case someone wants to go optimizing, more than 75% of the backend\nstartup time is spent in InitPostgres():\n\n count | min | max | avg | stddev\n-------+---------+----------+--------------+---------------------\n 10000 | 0.01953 | 0.368216 | 0.0222271679 | 0.00629838985852663\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 10 Jul 2001 21:47:47 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Time to read pg_hba.conf (Re: [PATCHES] [PATCH] Patch to make...)" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> So we're looking at a possible 12% win.\n\nMany thanks for doing this legwork.\n\nThe possible win from not having to read the file at all is probably\nsomewhat higher than that, but not vastly higher. Accordingly, I'd\nsay that pre-parsing the file is not worth the development time needed\nto make it happen. However, moving the comments out is clearly worth\nthe (very small) amount of effort needed to make that happen. Any\nobjections?\n\n> Btw., in case someone wants to go optimizing, more than 75% of the backend\n> startup time is spent in InitPostgres():\n\nNo surprise, that's where all the initial database access happens. 
We'd\nneed to break it down more to learn anything useful, but I'd bet that\nthe initial loading of required catalog cache entries is a big chunk.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jul 2001 17:42:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Time to read pg_hba.conf (Re: [PATCHES] [PATCH] Patch to make...)" }, { "msg_contents": "> Peter Eisentraut <peter_e@gmx.net> writes:\n> > So we're looking at a possible 12% win.\n> \n> Many thanks for doing this legwork.\n> \n> The possible win from not having to read the file at all is probably\n> somewhat higher than that, but not vastly higher. Accordingly, I'd\n> say that pre-parsing the file is not worth the development time needed\n> to make it happen. However, moving the comments out is clearly worth\n> the (very small) amount of effort needed to make that happen. Any\n> objections?\n\nLet me see if I can cache the contents. I hate to make things harder to\nset us up, even if it is a small thing.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 10 Jul 2001 19:36:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Time to read pg_hba.conf (Re: [PATCHES] [PATCH] Patch\n\tto make...)" }, { "msg_contents": "> > > Maybe it should be cached in memory and only be re-read on request\n> > > (SIGHUP). Parsing that file every time is undoubtedly a large\n> > fraction of\n> > > the total connection startup time.\n> >\n> > Okay with me if someone wants to do it ... but that'd be a lot more work\n> > than just moving the documentation ...\n> \n> Or cache the information and just do a file modification timestamp check on\n> each connection to see if it needs to be reread. 
(Best of both worlds??)\n\nRather than having to create data structures for the complex pg_hba.conf\nformat, I am going to do a quick-and-dirty and load the non-comment\nlines into a List of strings and have the postmaster read that. It will\nreload from the file on sighup, just like we do for postgresql.conf.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 11 Jul 2001 18:19:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Patch to make pg_hba.conf handle virtualhost access\n\tcontrol and samehost keyword" }, { "msg_contents": "\nI assume you are aware that 7.1.X postmaster can control which addresses\nit accepts connections from with -h:\n\n -h hostname\n Specifies the TCP/IP hostname or address on which\n the postmaster is to listen for connections from\n client applications. 
Defaults to listening on all\n configured addresses (including localhost).\n\nMy question is why virtualhosts are useful in the pg_hba.conf file?\n\n\n> Hi\n> This patch againsts postgresql 7.1.2 allows you to control access based on the\n> virtual host address only (virtualhost access type), or both the remote\n> address and the local address (connection access type).\n> \n> For example:\n> \n> connection all 192.168.42.0 255.255.255.0 192.168.1.42 255.255.255.255 trust\n> \n> \n> This patch also allows keyword \"samehost\", similar to \"sameuser\" but for\n> hosts.\n> \n> For example:\n> \n> virtualhost sameuser samehost.sql.domain.com 255.255.255.255 trust\n> \n> will prevent you from doing 1 entry per user, all you need is a\n> (local) dns entry for each host (user foo needs foo.sql.domain.com).\n> If the dns entry is not found, the line is dropped, so rejecting with\n> samehost is not a good idea for the moment.\n> \n> Any comments are welcome.\n> Please not that I'm not on the list.\n> \n> ---\n> Damien Clermonte\n\n> --- postgresql-7.1.2.orig/src/backend/libpq/hba.c\tMon Jul 2 16:25:56 2001\n> +++ postgresql-7.1.2/src/backend/libpq/hba.c\tTue Jul 3 14:01:20 2001\n> @@ -17,6 +17,7 @@\n> #include <netinet/in.h>\n> #include <arpa/inet.h>\n> #include <unistd.h>\n> +#include <netdb.h>\n> \n> #include \"postgres.h\"\n> \n> @@ -31,6 +32,9 @@\n> #define IDENT_USERNAME_MAX 512\n> /* Max size of username ident server can return */\n> \n> +#define MAX_HOSTNAME 1024\n> + /* Max size of hostname */\n> +\n> \n> /* Some standard C libraries, including GNU, have an isblank() function.\n> Others, including Solaris, do not. 
So we have our own.\n> @@ -256,8 +260,38 @@\n> \n> \t\tif (!inet_aton(buf, &file_ip_addr))\n> \t\t{\n> +\t\t if (!strncmp(buf, \"samehost\", 8)) /* samehost.somedomain */\n> +\t\t {\n> +\t\t struct hostent* he;\n> +\t\t char host[MAX_HOSTNAME];\n> +\t\t \n> +\t\t strcpy(host, port->user);\n> +\t\t if(strlen(buf) > 8)\n> +\t\t {\n> +\t\t strncat(host, buf + 8, MAX_HOSTNAME - 1 - strlen(host));\n> +\t\t host[MAX_HOSTNAME - 1] = '\\0';\n> +\t\t }\n> +\t\t he = gethostbyname(host);\n> +\t\t \n> +\t\t if(he != NULL)\n> +\t\t {\n> +\t\t file_ip_addr.s_addr = *(int*)he->h_addr;\n> +\t\t }\n> +\t\t else /* Error or Host not found */\n> +\t\t {\n> +\t\t read_through_eol(file);\n> +\t\t snprintf(PQerrormsg, PQERRORMSG_LENGTH,\n> +\t\t\t\t \"process_hba_record: samehost '%s' not found in pg_hba.conf file\\n\", host);\n> +\t\t fputs(PQerrormsg, stderr);\n> +\t\t pqdebug(\"%s\", PQerrormsg);\n> +\t\t return;\n> +\t\t }\n> +\t\t }\n> +\t\t else\n> +\t\t {\n> \t\t\tread_through_eol(file);\n> \t\t\tgoto syntax;\n> +\t\t }\n> \t\t}\n> \n> \t\t/* Read the mask field. */\n> @@ -299,6 +333,301 @@\n> \t\t\t (strcmp(db, \"sameuser\") != 0 || strcmp(port->database, port->user) != 0)) ||\n> \t\t\tport->raddr.sa.sa_family != AF_INET ||\n> \t\t\t((file_ip_addr.s_addr ^ port->raddr.in.sin_addr.s_addr) & mask.s_addr) != 0x0000)\n> +\t\t\treturn;\n> +\t}\n> +\telse if (strcmp(buf, \"virtualhost\") == 0 || strcmp(buf, \"virtualhostssl\") == 0)\n> +\t{\n> +\t\tstruct in_addr file_ip_addr,\n> +\t\t\t\t\tmask;\n> +\t\tbool\t\tdiscard = 0;/* Discard this entry */\n> +\n> +#ifdef USE_SSL\n> +\t\t/* If SSL, then check that we are on SSL */\n> +\t\tif (strcmp(buf, \"virtualhostssl\") == 0)\n> +\t\t{\n> +\t\t\tif (!port->ssl)\n> +\t\t\t\tdiscard = 1;\n> +\n> +\t\t\t/* Placeholder to require specific SSL level, perhaps? 
*/\n> +\t\t\t/* Or a client certificate */\n> +\n> +\t\t\t/* Since we were on SSL, proceed as with normal 'host' mode */\n> +\t\t}\n> +#else\n> +\t\t/* If not SSL, we don't support this */\n> +\t\tif (strcmp(buf, \"virtualhostssl\") == 0)\n> +\t\t\tgoto syntax;\n> +#endif\n> +\n> +\t\t/* Get the database. */\n> +\n> +\t\tnext_token(file, db, sizeof(db));\n> +\n> +\t\tif (db[0] == '\\0')\n> +\t\t\tgoto syntax;\n> +\n> +\t\t/* Read the IP address field. */\n> +\n> +\t\tnext_token(file, buf, sizeof(buf));\n> +\n> +\t\tif (buf[0] == '\\0')\n> +\t\t\tgoto syntax;\n> +\n> +\t\t/* Remember the IP address field and go get mask field. */\n> +\n> +\t\tif (!inet_aton(buf, &file_ip_addr))\n> +\t\t{\n> +\t\t if (!strncmp(buf, \"samehost\", 8)) /* samehost.somedomain */\n> +\t\t {\n> +\t\t struct hostent* he;\n> +\t\t char host[MAX_HOSTNAME];\n> +\t\t \n> +\t\t strcpy(host, port->user);\n> +\t\t if(strlen(buf) > 8)\n> +\t\t {\n> +\t\t strncat(host, buf + 8, MAX_HOSTNAME - 1 - strlen(host));\n> +\t\t host[MAX_HOSTNAME - 1] = '\\0';\n> +\t\t }\n> +\t\t he = gethostbyname(host);\n> +\t\t \n> +\t\t if(he != NULL)\n> +\t\t {\n> +\t\t file_ip_addr.s_addr = *(int*)he->h_addr;\n> +\t\t }\n> +\t\t else /* Error or Host not found */\n> +\t\t {\n> +\t\t read_through_eol(file);\n> +\t\t snprintf(PQerrormsg, PQERRORMSG_LENGTH,\n> +\t\t\t\t \"process_hba_record: samehost '%s' not found in pg_hba.conf file\\n\", host);\n> +\t\t fputs(PQerrormsg, stderr);\n> +\t\t pqdebug(\"%s\", PQerrormsg);\n> +\t\t return;\n> +\t\t }\n> +\t\t }\n> +\t\t else\n> +\t\t {\n> +\t\t\tread_through_eol(file);\n> +\t\t\tgoto syntax;\n> +\t\t }\n> +\t\t}\n> +\n> +\t\t/* Read the mask field. */\n> +\n> +\t\tnext_token(file, buf, sizeof(buf));\n> +\n> +\t\tif (buf[0] == '\\0')\n> +\t\t\tgoto syntax;\n> +\n> +\t\tif (!inet_aton(buf, &mask))\n> +\t\t{\n> +\t\t\tread_through_eol(file);\n> +\t\t\tgoto syntax;\n> +\t\t}\n> +\n> +\t\t/*\n> +\t\t * This is the record we're looking for. 
Read the rest of the\n> +\t\t * info from it.\n> +\t\t */\n> +\n> +\t\tread_hba_entry2(file, &port->auth_method, port->auth_arg, error_p);\n> +\n> +\t\tif (*error_p)\n> +\t\t\tgoto syntax;\n> +\n> +\t\t/*\n> +\t\t * If told to discard earlier. Moved down here so we don't get\n> +\t\t * \"out of sync\" with the file.\n> +\t\t */\n> +\t\tif (discard)\n> +\t\t\treturn;\n> +\n> +\t\t/*\n> +\t\t * If this record isn't for our database, or this is the wrong\n> +\t\t * sort of connection, ignore it.\n> +\t\t */\n> +\n> +\t\tif ((strcmp(db, port->database) != 0 && strcmp(db, \"all\") != 0 &&\n> +\t\t\t (strcmp(db, \"sameuser\") != 0 || strcmp(port->database, port->user) != 0)) ||\n> +\t\t\tport->laddr.sa.sa_family != AF_INET ||\n> +\t\t\t((file_ip_addr.s_addr ^ port->laddr.in.sin_addr.s_addr) & mask.s_addr) != 0x0000)\n> +\t\t\treturn;\n> +\t}\n> +\telse if (strcmp(buf, \"connection\") == 0 || strcmp(buf, \"connectionssl\") == 0)\n> +\t{\n> +\t\tstruct in_addr file_ip_raddr,\n> +\t\t\t\t\trmask;\n> +\t\tstruct in_addr file_ip_laddr,\n> +\t\t\t\t\tlmask;\n> +\t\tbool\t\tdiscard = 0;/* Discard this entry */\n> +\n> +#ifdef USE_SSL\n> +\t\t/* If SSL, then check that we are on SSL */\n> +\t\tif (strcmp(buf, \"connectionssl\") == 0)\n> +\t\t{\n> +\t\t\tif (!port->ssl)\n> +\t\t\t\tdiscard = 1;\n> +\n> +\t\t\t/* Placeholder to require specific SSL level, perhaps? */\n> +\t\t\t/* Or a client certificate */\n> +\n> +\t\t\t/* Since we were on SSL, proceed as with normal 'host' mode */\n> +\t\t}\n> +#else\n> +\t\t/* If not SSL, we don't support this */\n> +\t\tif (strcmp(buf, \"connectionssl\") == 0)\n> +\t\t\tgoto syntax;\n> +#endif\n> +\n> +\t\t/* Get the database. */\n> +\n> +\t\tnext_token(file, db, sizeof(db));\n> +\n> +\t\tif (db[0] == '\\0')\n> +\t\t\tgoto syntax;\n> +\n> +\t\t/* Read the remote IP address field. 
*/\n> +\n> +\t\tnext_token(file, buf, sizeof(buf));\n> +\n> +\t\tif (buf[0] == '\\0')\n> +\t\t\tgoto syntax;\n> +\n> +\t\t/* Remember the IP address field and go get mask field. */\n> +\n> +\t\tif (!inet_aton(buf, &file_ip_raddr))\n> +\t\t{\n> +\t\t if (!strncmp(buf, \"samehost\", 8)) /* samehost.somedomain */\n> +\t\t {\n> +\t\t struct hostent* he;\n> +\t\t char host[MAX_HOSTNAME];\n> +\t\t \n> +\t\t strcpy(host, port->user);\n> +\t\t if(strlen(buf) > 8)\n> +\t\t {\n> +\t\t strncat(host, buf + 8, MAX_HOSTNAME - 1 - strlen(host));\n> +\t\t host[MAX_HOSTNAME - 1] = '\\0';\n> +\t\t }\n> +\t\t he = gethostbyname(host);\n> +\t\t \n> +\t\t if(he != NULL)\n> +\t\t {\n> +\t\t file_ip_raddr.s_addr = *(int*)he->h_addr;\n> +\t\t }\n> +\t\t else /* Error or Host not found */\n> +\t\t {\n> +\t\t read_through_eol(file);\n> +\t\t snprintf(PQerrormsg, PQERRORMSG_LENGTH,\n> +\t\t\t\t \"process_hba_record: samehost '%s' not found in pg_hba.conf file\\n\", host);\n> +\t\t fputs(PQerrormsg, stderr);\n> +\t\t pqdebug(\"%s\", PQerrormsg);\n> +\t\t return;\n> +\t\t }\n> +\t\t }\n> +\t\t else\n> +\t\t {\n> +\t\t\tread_through_eol(file);\n> +\t\t\tgoto syntax;\n> +\t\t }\n> +\t\t}\n> +\n> +\t\t/* Read the remote mask field. */\n> +\n> +\t\tnext_token(file, buf, sizeof(buf));\n> +\n> +\t\tif (buf[0] == '\\0')\n> +\t\t\tgoto syntax;\n> +\n> +\t\tif (!inet_aton(buf, &rmask))\n> +\t\t{\n> +\t\t\tread_through_eol(file);\n> +\t\t\tgoto syntax;\n> +\t\t}\n> +\n> +\t\t/* Read the local IP address field. */\n> +\n> +\t\tnext_token(file, buf, sizeof(buf));\n> +\n> +\t\tif (buf[0] == '\\0')\n> +\t\t\tgoto syntax;\n> +\n> +\t\t/* Remember the IP address field and go get mask field. 
*/\n> +\n> +\t\tif (!inet_aton(buf, &file_ip_laddr))\n> +\t\t{\n> +\t\t if (!strncmp(buf, \"samehost\", 8)) /* samehost.somedomain */\n> +\t\t {\n> +\t\t struct hostent* he;\n> +\t\t char host[MAX_HOSTNAME];\n> +\t\t \n> +\t\t strcpy(host, port->user);\n> +\t\t if(strlen(buf) > 8)\n> +\t\t {\n> +\t\t strncat(host, buf + 8, MAX_HOSTNAME - 1 - strlen(host));\n> +\t\t host[MAX_HOSTNAME - 1] = '\\0';\n> +\t\t }\n> +\t\t he = gethostbyname(host);\n> +\t\t \n> +\t\t if(he != NULL)\n> +\t\t {\n> +\t\t file_ip_laddr.s_addr = *(int*)he->h_addr;\n> +\t\t }\n> +\t\t else /* Error or Host not found */\n> +\t\t {\n> +\t\t read_through_eol(file);\n> +\t\t snprintf(PQerrormsg, PQERRORMSG_LENGTH,\n> +\t\t\t\t \"process_hba_record: samehost '%s' not found in pg_hba.conf file\\n\", host);\n> +\t\t fputs(PQerrormsg, stderr);\n> +\t\t pqdebug(\"%s\", PQerrormsg);\n> +\t\t return;\n> +\t\t }\n> +\t\t }\n> +\t\t else\n> +\t\t {\n> +\t\t\tread_through_eol(file);\n> +\t\t\tgoto syntax;\n> +\t\t }\n> +\t\t}\n> +\n> +\t\t/* Read the source mask field. */\n> +\n> +\t\tnext_token(file, buf, sizeof(buf));\n> +\n> +\t\tif (buf[0] == '\\0')\n> +\t\t\tgoto syntax;\n> +\n> +\t\tif (!inet_aton(buf, &lmask))\n> +\t\t{\n> +\t\t\tread_through_eol(file);\n> +\t\t\tgoto syntax;\n> +\t\t}\n> +\n> +\t\t/*\n> +\t\t * This is the record we're looking for. Read the rest of the\n> +\t\t * info from it.\n> +\t\t */\n> +\n> +\t\tread_hba_entry2(file, &port->auth_method, port->auth_arg, error_p);\n> +\n> +\t\tif (*error_p)\n> +\t\t\tgoto syntax;\n> +\n> +\t\t/*\n> +\t\t * If told to discard earlier. 
Moved down here so we don't get\n> +\t\t * \"out of sync\" with the file.\n> +\t\t */\n> +\t\tif (discard)\n> +\t\t\treturn;\n> +\n> +\t\t/*\n> +\t\t * If this record isn't for our database, or this is the wrong\n> +\t\t * sort of connection, ignore it.\n> +\t\t */\n> +\n> +\t\tif ((strcmp(db, port->database) != 0 && strcmp(db, \"all\") != 0 &&\n> +\t\t\t (strcmp(db, \"sameuser\") != 0 || strcmp(port->database, port->user) != 0)) ||\n> +\t\t\tport->laddr.sa.sa_family != AF_INET ||\n> +\t\t\t((file_ip_raddr.s_addr ^ port->raddr.in.sin_addr.s_addr) & rmask.s_addr) != 0x0000 ||\n> +\t\t\t((file_ip_laddr.s_addr ^ port->laddr.in.sin_addr.s_addr) & lmask.s_addr) != 0x0000)\n> \t\t\treturn;\n> \t}\n> \telse\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 12 Jul 2001 15:29:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Patch to make pg_hba.conf handle virtualhost access\n\tcontrol and samehost keyword" } ]
[ { "msg_contents": "I was reading the WAL documentation, and found out that UNDO is going to let \nPostgreSQL make partial rollbacks on invalid transactions.\n\nWhat does this exactly mean? Does it mean that if I have 6 Inserts in between \na begin work - commit work, some will get through and the ones with errors won't?\n\nRegards... :-)\n\n-- \nAnyone can administer an NT.\nThat's the problem: that anyone can.\n-----------------------------------------------------------------\nMartin Marques                  |        mmarques@unl.edu.ar\nProgrammer, Administrator       |       Centro de Telematica\n                       Universidad Nacional\n                            del Litoral\n-----------------------------------------------------------------\n", "msg_date": "Tue, 3 Jul 2001 18:36:47 +0300", "msg_from": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <martin@bugs.unl.edu.ar>", "msg_from_op": true, "msg_subject": "UNDO and partially committed transactions" }, { "msg_contents": "> I was reading the WAL documentation, and found out that UNDO is going to let \n> PostgreSQL make partial rollbacks on invalid transactions.\n\nWe already \"undo\" aborted transactions, but without using the WAL log. \nNot sure if we will ever add that feature to WAL.\n\n> What does this exactly mean? Does it mean that if I have 6 Inserts in between \n> a begin work - commit work, some will get through and the ones with errors won't?\n\nNo, they all fail.  No worries.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 3 Jul 2001 20:58:42 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: UNDO and partially committed transactions" } ]
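Bruce's "they all fail" answer can be seen directly from a psql session. Assuming a hypothetical scratch table t with a single integer column, one failing statement aborts the whole transaction, so even the inserts that were accepted never become visible:

```sql
BEGIN;
INSERT INTO t VALUES (1);        -- accepted, but only provisionally
INSERT INTO t VALUES ('oops');   -- error: bad input aborts the transaction
INSERT INTO t VALUES (2);        -- rejected: transaction already aborted
COMMIT;                          -- ends the transaction as a rollback
SELECT count(*) FROM t;          -- 0: none of the inserts survived
```

There is no partial commit: every statement after the error is refused until the transaction block ends, and the COMMIT is effectively a ROLLBACK.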
[ { "msg_contents": "On Mar 03 Jul 2001 23:13, you wrote:\n> Bill Studenmund <wrstuden@zembu.com> wrote:\n> > sourceforge for one. They were using MySQL, then changed.\n>\n> Did they really switch though? Tim Perdue's articles\n> on the subject are really good, but I wasn't sure if he\n> really did the switch, or just investigated it and never\n> got to it for some reason. If you look at the \"About\"\n> information up on the sourceforge site, they still seem\n> to be talking about MySQL.\n\nYes! He did switch to PostgreSQL.\n\n> (Of course, it could just be that the page is out of date.\n> If you look at the \"Projects Page\" on postgresql.org, you'd\n> get the impression that the \"Tuple-Toaster\" hasn't been\n> finished...)\n\nNot only that, but lots of links, like the one to the phpPgAdmin page that is \nnow in Greatbridges.\n\nRegards... :-)\n\n-- \nAnyone can administer an NT.\nThat's the problem: that anyone can.\n-----------------------------------------------------------------\nMartin Marques                  |        mmarques@unl.edu.ar\nProgrammer, Administrator       |       Centro de Telematica\n                       Universidad Nacional\n                            del Litoral\n-----------------------------------------------------------------\n", "msg_date": "Tue, 3 Jul 2001 19:30:12 +0300", "msg_from": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <martin@bugs.unl.edu.ar>", "msg_from_op": true, "msg_subject": "Re: [OT] Any major users of postgresql?" }, { "msg_contents": "\nSorry to bug the list with something a bit off topic, but\nI've been scrounging around for some examples of someone \ndoing some fairly serious work with postgresql, and haven't \nyet been able to turn any up. Someone here must know a few \noff the top of their head... \n\nThe reason I'm asking is that the place that I work is\nactually contemplating reverting from Oracle's expensive\nbugs to MySQL's (supposedly) cheap ones.
They'd consider\npostgresql, but they figure that with MySQL they can at\nleast point to sites that pump a fair amount of data with it\n(e.g. mp3.com).\n\nPlease help save me from a life without referential\nintegrity... \n\n", "msg_date": "Tue, 03 Jul 2001 10:44:04 -0700", "msg_from": "Joe Brenner <doom@kzsu.stanford.edu>", "msg_from_op": false, "msg_subject": "[OT] Any major users of postgresql? " }, { "msg_contents": "On Tue, 3 Jul 2001, Joe Brenner wrote:\n\n> The reason I'm asking is that the place that I work is\n> actually contemplating reverting from Oracle's expensive\n> bugs to MySQL's (supposedly) cheap ones. They'd consider\n> postgresql, but they figure that with MySQL they can at\n> least point to sites that pump a fair amount of data with it\n> (e.g. mp3.com).\n> \n> Please help save me from a life without referential\n> integrity... \n\nsourceforge for one. They were using MySQL, then changed. Also, look at\nthe postgres web site - there is an article there where someone did a speed\ncomparison between PG & MySQL. Postgres came out on top, even in places\nwhere folks thought MySQL would win.\n\nAlso, it depends on what your application is. If there is any amount of DB\nupdates, PG will easily be the best choice. :-)\n\nTake care,\n\nBill\n\n", "msg_date": "Tue, 3 Jul 2001 11:16:21 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@zembu.com>", "msg_from_op": false, "msg_subject": "Re: [OT] Any major users of postgresql? " }, { "msg_contents": "Joe,\n\nWe were also in the same predicament. Faced with Oracle's steep cost and \nlearning curve (I know Informix), we chose to move ahead with \nPostgresql. It's not in production yet, but I think it will do the trick \nfor at least the short term.\n\nThe only possible showstopper at the moment is the current inability to \nreplay transactions from the logical logs into a previous backup. I've \nbeen snooping around, and that ability is not far off.
It is clear that \nthe roots to make that happen are in place (WAL).\n\nI'd also love to hear from some large-ish users, but everything I've read \nso far is extremely promising. I do love your comment about Oracle's \nexpensive bugs (so true). Anyone out there want to chime in?\n\nAt 10:44 AM 7/3/01 -0700, Joe Brenner wrote:\n\n>Sorry to bug the list with something a bit off topic, but\n>I've been scrounging around for some examples of someone\n>doing some fairly serious work with postgresql, and haven't\n>yet been able to turn any up. Someone here must know a few\n>off the top of their head...\n>\n>The reason I'm asking is that the place that I work is\n>actually contemplating reverting from Oracle's expensive\n>bugs to MySQL's (supposedly) cheap ones. They'd consider\n>postgresql, but they figure that with MySQL they can at\n>least point to sites that pump a fair amount of data with it\n>(e.g. mp3.com).\n>\n>Please help save me from a life without referential\n>integrity...\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n>http://www.postgresql.org/users-lounge/docs/faq.html\n\n--\nNaomi Walker\nChief Information Officer\nEldorado Computing, Inc.\n602-604-3100 ext 242 \n\n", "msg_date": "Tue, 03 Jul 2001 11:16:28 -0700", "msg_from": "Naomi Walker <nwalker@eldocomp.com>", "msg_from_op": false, "msg_subject": "Re: [OT] Any major users of postgresql? " }, { "msg_contents": "\nBill Studenmund <wrstuden@zembu.com> wrote: \n\n> sourceforge for one. They were using MySQL, then changed. \n\nDid they really switch though? Tim Perdue's articles \non the subject are really good, but I wasn't sure if he \nreally did the switch, or just investigated it and never \ngot to it for some reason. If you look at the \"About\" \ninformation up on the sourceforge site, they still seem \nto be talking about MySQL.
\n\n(Of course, it could just be that the page is out of date.\nIf you look at the \"Projects Page\" on postgresql.org, you'd\nget the impression that the \"Tuple-Toaster\" hasn't been\nfinished...)\n\n", "msg_date": "Tue, 03 Jul 2001 13:13:08 -0700", "msg_from": "Joe Brenner <doom@kzsu.stanford.edu>", "msg_from_op": false, "msg_subject": "Re: [OT] Any major users of postgresql? " }, { "msg_contents": "On Tue, Jul 03, 2001 at 01:13:08PM -0700, Joe Brenner wrote:\n> \n> Bill Studenmund <wrstuden@zembu.com> wrote: \n> \n> > sourceforge for one. They were using MySQL, then changed. \n> \n> Did they really switch though? Tim Perdue's articles \n> on the subject are really good, but I wasn't sure if he \n> really did the switch, or just investigated it and never \n> got to it for some reason. If you look at the \"About\" \n> information up on the sourceforge site, they still seem \n> to be talking about MySQL. \n\nCurrent sources to SourceForge use features not supported in MySQL. \n(I gather that SourceForge also has been made to work with Oracle.)\nIn fact, at last report the web site was running on a beta version\nof 7.1, and that they were not planning to move to the released\nversion until a bug shows up that might affect them.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Tue, 3 Jul 2001 15:39:37 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: [OT] Any major users of postgresql?" }, { "msg_contents": "> \n> Bill Studenmund <wrstuden@zembu.com> wrote: \n> \n> > sourceforge for one. They were using MySQL, then changed. \n> \n> Did they really switch though? Tim Perdue's articles \n> on the subject are really good, but I wasn't sure if he \n> really did the switch, or just investigated it and never \n> got to it for some reason. If you look at the \"About\" \n> information up on the sourceforge site, they still seem \n> to be talking about MySQL. 
\n\nSourceforge converted to PostgreSQL November, 2000.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 3 Jul 2001 18:44:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [OT] Any major users of postgresql?" }, { "msg_contents": "Try:\n\nwww.sourceforge.net (A big, big PGsql based site)\n or\nwww.calorieking.com (A site I've worked on)\n or\nhttp://www.pgsql.com/user_gallery/ (The pgsql user gallery)\n\nLet me tell you right now that MySQL initially sounds sexy, but it is a\nNIGHTMARE without referential integrity and transaction support.\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Joe Brenner\n> Sent: Wednesday, 4 July 2001 1:44 AM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] [OT] Any major users of postgresql?\n>\n>\n>\n> Sorry to bug the list with something a bit off topic, but\n> I've been scrounging around for some examples of someone\n> doing some fairly serious work with postgresql, and haven't\n> yet been able to turn any up. Someone here must know a few\n> off the top of their head...\n>\n> The reason I'm asking is that the place that I work is\n> actually contemplating reverting from Oracle's expensive\n> bugs to MySQL's (supposedly) cheap ones. They'd consider\n> postgresql, but they figure that with MySQL they can at\n> least point to sites that pump a fair amount of data with it\n> (e.g. mp3.com).\n\n", "msg_date": "Wed, 4 Jul 2001 09:52:08 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: [OT] Any major users of postgresql? 
" }, { "msg_contents": "On 04 Jul 2001 09:52:08 +0800, Christopher Kings-Lynne wrote:\n> Try:\n> \n> www.sourceforge.net (A big, big PGsql based site)\n> or\n> www.calorieking.com (A site I've worked on)\n> or\n> http://www.pgsql.com/user_gallery/ (The pgsql user gallery)\n> \n> Let me tell you right now that MySQL initially sounds sexy, but it is a\n> NIGHTMARE without referential integrity and transaction support.\n> \n\n\nI can offer up \nwww.accountingweb.co.uk\nwww.crm-forum.com\nwww.lawzone.co.uk\nwww.travelmole.com \nand about half a dozen others. All of these sites are backed by a single\npostgresql 7.1 database\n\nBy the end of this month all of the communities based around our\nSiftGroups platform ( www.sift.co.uk ) will be backed by postgresql.\n\nWe have been migrating from MySQL and the initial results so far are\nextremely promising. ( It has only been live for 4 days, mind ).\nAs the database developer I can say that in many ways we had reached a\npoint with MySQL where we were struggling to scale it efficiently. We\nexpect these problems to be overcome by using postgresql.\n\n\n-- \nColin M Strickland perl -e'print \"\\n\",map{chr(ord()-3)}(reverse split \n //,\"\\015%vhlwlqxpprF#ir#uhzrS#hkw#jqlvvhqudK%#\\015\\015nx\".\n\"1rf1wilv1zzz22=swwk###369<#84<#:44#77.={di##339<#84<#:44#77.=ohw\\015]\".\n\"K9#4VE#/ORWVLUE#/whhuwV#dlurwflY#334#/wilV\\015uhsrohyhG#ehZ#urlqhV\");'\n", "msg_date": "04 Jul 2001 10:39:50 +0100", "msg_from": "Colin Strickland <cms@sift.co.uk>", "msg_from_op": false, "msg_subject": "RE: [OT] Any major users of postgresql?" }, { "msg_contents": "* Colin Strickland <cms@sift.co.uk> wrote:\n|\n| and about half a dozen others. All of these sites are backed by a single\n| postgresql 7.1 database\n| \n| By the end of this month all of the communities based around our\n| SiftGroups platform ( www.sift.co.uk ) will be backed by postgresql.\n\nCool ! 
Do you have any data on the total traffic ?\nHow many pageviews a day and also how dynamic each page view is \non average. By \"dynamic\" I mean the average number of\nselect/update/delete/insert operations for a pageview.\n\nregards, \n\n Gunnar\n\n-- \nGunnar Rønning - gunnar@polygnosis.com\nSenior Consultant, Polygnosis AS, http://www.polygnosis.com/\n", "msg_date": "04 Jul 2001 12:02:21 +0200", "msg_from": "Gunnar Rønning <gunnar@polygnosis.com>", "msg_from_op": false, "msg_subject": "Re: [OT] Any major users of postgresql?" }, { "msg_contents": "Joe Brenner wrote:\n> \n> Please help save me from a life without referential\n> integrity...\n\nHow major is major? I have several sites that get between 50,000 and 200,000 page\nviews per day using Apache/PHP/PostgreSQL on single processor Linux servers.\n\nParticularly http://newsroom.co.nz/ which has a news database of around 60,000\narticles.\n\nCheers,\n\t\t\t\t\tAndrew.\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: Andrew@catalyst.net.nz\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64(27)246-7091, Fax:+64(4)499-5596, Office: +64(4)499-2267xtn709\n", "msg_date": "Wed, 04 Jul 2001 23:27:31 +1200", "msg_from": "Andrew McMillan <andrew@catalyst.net.nz>", "msg_from_op": false, "msg_subject": "Re: [OT] Any major users of postgresql?" }, { "msg_contents": "On 04 Jul 2001 12:02:21 +0200, Gunnar Rønning wrote:\n\n> Cool ! Do you have any data on the total traffic ?\n> How many pageviews a day and also how dynamic each page view is \n> on average. 
By \"dynamic\" I mean the average number of\n> select/update/delete/insert operations for a pageview.\n> \n> regards, \n> \n> Gunnar\n> \n\nI'm afraid that I'm not at liberty to divulge specific traffic\ninformation(sorry).\nI can safely say that some of the sites do grow very busy and almost all\npages are dynamic.\n\n-- \nColin M Strickland perl -e'print \"\\n\",map{chr(ord()-3)}(reverse split \n //,\"\\015%vhlwlqxpprF#ir#uhzrS#hkw#jqlvvhqudK%#\\015\\015nx\".\n\"1rf1wilv1zzz22=swwk###369<#84<#:44#77.={di##339<#84<#:44#77.=ohw\\015]\".\n\"K9#4VE#/ORWVLUE#/whhuwV#dlurwflY#334#/wilV\\015uhsrohyhG#ehZ#urlqhV\");'\n", "msg_date": "04 Jul 2001 13:00:27 +0100", "msg_from": "Colin Strickland <cms@sift.co.uk>", "msg_from_op": false, "msg_subject": "Re: [OT] Any major users of postgresql?" }, { "msg_contents": "Joe Brenner wrote:\n\n> Sorry to bug the list with something a bit off topic, but\n> I've been scrounging around for some examples of someone\n> doing some fairly serious work with postgresql, and haven't\n> yet been able to turn any up. Someone here must know a few\n> off the top of their head...\n\nIt probably depends on what you call \"serious\". Anyway, the project I am\nworking on is a online community for alternate investments and is built\naround a PostgreSQL (first 7.0, now 7.1) database: it's\n<http://village.albourne.com> but unfortunately most of it is limited\nonly to subscribers so there is not a lot db-related to see. It's\nPostreSQL + Apache + mod_perl on Digital Unix.\n\nAnother site is <http://www.animalhouse.it>, an Italian web site about\npet news and tips made by some friends of mine. It runs on a custom\ncontent management system, again on the PostgreSQL + Apache + mod_perl\ncombination on Red Hat Linux. Being sponsored by a major portal, a lot\nof hits are expected, but the site is brand new so no data is available\nnow.\n\nApart from the obvious wins of data integrity, transactions, etc. 
both\nsites benefit from a number of SQL functions like triggers, foreign keys\nwith cascade behaviour, custom functions, and especially views, that\nwere simply not available under MySql. From all these, I would pick\nviews: I simply cannot see how you can arrange a coherent db design\n(especially for a web project) without using views.\n\nHope it helps.\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n", "msg_date": "Thu, 05 Jul 2001 10:32:47 +0300", "msg_from": "Alessio Bragadini <alessio@albourne.com>", "msg_from_op": false, "msg_subject": "Re: [OT] Any major users of postgresql?" }, { "msg_contents": "> It probably depends on what you call \"serious\". Anyway, the project I am\n> working on is a online community for alternate investments and is built\n> around a PostgreSQL (first 7.0, now 7.1) database: it's\n> <http://village.albourne.com> but unfortunately most of it is limited\n> only to subscribers so there is not a lot db-related to see. It's\n> PostreSQL + Apache + mod_perl on Digital Unix.\n>\nI would define \"serious use\" as use in transactional applications where the\nloss of data input by users is a very bad thing, and the uptime requirements\nare 24x7, with availability requirements overall of >99%.\n\nAs an example, and where most of my past experience has been, consider a\nreservation system for an airline or hotel chain. Such a system may have\nhundreds to thousands of transactions per second. More importantly, tens per\nsecond of those transactions which must not be lost - i.e.\nreservations/changes/cancellations - and which are worth real money. Losing,\nsay, 15 minutes of these is a catastrophe. 
Note also that transactions are\nnot equivalent to page views - in this case a single \"page view\" would\nresult in a series of many database operations to generate a single\nresponse.\n\nAn even tougher example would be an online financial system such as an ATM\ndebit system. In that case, you can hand someone a lot of money as a result\nof a transaction. Loss of that data is exactly loss of the money!\n\nIn the case of PostgreSQL, as far as I can tell, one could lose all data\nsince the previous dump if one lost the database media. In Oracle or\nInformix, that is *not* true, because they can do a point-in-time restore\nfrom the last full save, based on the WAL's.\n\n\n\n", "msg_date": "Thu, 05 Jul 2001 16:48:35 GMT", "msg_from": "\"John Moore\" <NOSPAMnews@NOSPAMtinyvital.com>", "msg_from_op": false, "msg_subject": "Re: [OT] Any major users of postgresql?" }, { "msg_contents": "\"John Moore\" <NOSPAMnews@NOSPAMtinyvital.com> writes:\n> In the case of PostgreSQL, as far as I can tell, one could lose all data\n> since the previous dump if one lost the database media. In Oracle or\n> Informix, that is *not* true, because they can do a point-in-time restore\n> from the last full save, based on the WAL's.\n\nIf you are archiving the WAL logs, then in theory you could recover\nfrom those in Postgres as well. In practice, I consider this argument\nirrelevant, because no one is going to want to work that way.\n(Nigh-infinite offline storage for the logs, plus huge recovery time if\nyou do suffer a crash ... I don't think so.)\n\nA more reasonable approach to getting better-than-hardware reliability\nis replicated servers. We have some crude ways of replicating data now,\nand should have much better ways in a release or two. (See\nhttp://www.greatbridge.org/genpage?replication_top for some info on\nstuff that will likely get rolled into the standard distribution\neventually. 
I consider Postgres-R the most promising approach.)\n\nAs of today, I wouldn't try to run an airline reservation system on\nPostgres either. But check back in a year or so.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jul 2001 19:56:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: [OT] Any major users of postgresql? " } ]
[ { "msg_contents": "> On further thought, btbuild is not that badly broken at the moment,\n> because CREATE INDEX acquires ShareLock on the relation, so\n> there can be no concurrent writers at the page level. Still, it\n> seems like it'd be a good idea to do \"LockBuffer(buffer,\nBUFFER_LOCK_SHARE)\"\n> here, and probably also to invoke HeapTupleSatisfiesNow() via the\n> HeapTupleSatisfies() macro so that infomask update is checked for.\n> Vadim, what do you think?\n\nLooks like there is no drawback in locking buffer so let's lock it.\n\nVadim\n", "msg_date": "Tue, 3 Jul 2001 10:41:53 -0700 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Re: Buffer access rules, and a probable bug " }, { "msg_contents": "Okay, on to the next concern. I've been thinking some more about the\nrestrictions needed to make the world safe for concurrent VACUUM.\nI previously said:\n\n> 5. To physically remove a tuple or compact free space on a page, one\n> must hold a pin and an exclusive lock, *and* observe while holding the\n> exclusive lock that the buffer's shared reference count is one (ie,\n> no other backend holds a pin). If these conditions are met then no other\n> backend can perform a page scan until the exclusive lock is dropped, and\n> no other backend can be holding a reference to an existing tuple that it\n> might expect to examine again. Note that another backend might pin the\n> buffer (increment the refcount) while one is performing the cleanup, but\n> it won't be able to actually examine the page until it acquires shared\n> or exclusive lock.\n\nThis is OK when considering a page in isolation, but it does not get the\njob done when one is deleting related index tuples and heap tuples. It\nseems to me that there *must* be some cross-page coupling to make that\nwork safely. Otherwise you could have this scenario:\n\n1. 
Indexscanning process visits an index tuple, decides to access the\n corresponding heap tuple, drops its lock on the index buffer page.\n\n2. VACUUMing process visits the index buffer page and marks the index\n tuple dead. (It won't try to delete the tuple yet, since it sees\n the pin still held on the page by process #1, but it can acquire\n exclusive lock and mark the tuple dead anyway.)\n\n3. VACUUMing process is the first to acquire pin and lock on the heap\n buffer page. It sees no other pin, so it deletes the tuple.\n\n4. Indexscanning process finally acquires pin and lock on the heap page,\n tries to access what is now a gone tuple. Ooops. (Even if we\n made that not an error condition, it could be worse: what if a\n third process already reused the line pointer for a new tuple?)\n\nIt does not help to postpone the actual cleaning of an index or heap\npage until its own pin count drops to zero --- the problem here is that\nan indexscanner has acquired a reference into a heap page from the\nindex, but does not yet hold a pin on the heap page to ensure that\nthe reference stays good. So we can't just postpone the cleaning of\nthe index page till it has pin count zero, we have to make the related\nheap page(s)' cleanup wait for that to happen too.\n\nI can think of two ways of guaranteeing that this problem cannot happen.\n\nOne is for an indexscanning process to retain its shared lock on the\nindex page until it has acquired at least a pin on the heap page.\nThis is very bad for concurrency --- it means that we'd typically\nbe holding indexpage shared locks for the time needed to read in a\nrandomly-accessed disk page. And it's very complicated, since we still\nneed all the other rules, plus the mechanism for postponing cleanups\nuntil pin count goes to zero. 
It'd cause considerable changes to the\nindex access method API, too.\n\nThe other is to forget about asynchronous cleaning, and instead have the\nVACUUM process directly do the wait for pin count zero, then clean the\nindex page. Then when it does the same for the heap page, we know for\nsure there are no indexscanners in transit to the heap page. This would\nbe logically a lot simpler, it seems to me. Another advantage is that\nwe need only one WAL entry per cleaned page, not two (one for the\ninitial mark-dead step and one for the postponable compaction step),\nand there's no need for an intermediate \"gone but not forgotten\" state\nfor index tuples.\n\nWe could implement this in pretty nearly the same way as the \"mark for\ncleanup\" facility that you partially implemented awhile back:\nessentially, the cleanup callback would send a signal or semaphore\nincrement to the waiting process, which would then try to acquire pin\nand exclusive lock on the buffer. If it succeeded in observing pin\ncount 1 with exclusive lock, it could proceed with cleanup, else loop\nback and try again. Eventually it'll get the lock. (It might take\nawhile, but for a background VACUUM I think that's OK.)\n\nWhat I'm wondering is if you had any other intended use for \"mark for\ncleanup\" than VACUUM. The cheapest implementation would allow only\none process to be waiting for cleanup on a given buffer, which is OK\nfor VACUUM because we'll only allow one VACUUM at a time on a relation\nanyway. 
But if you had some other uses in mind, maybe the code needs\nto support multiple waiters.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Jul 2001 18:27:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Buffer access rules, and a probable bug " }, { "msg_contents": "> -----Original Message-----\n> From: Mikheev, Vadim [mailto:vmikheev@SECTORBASE.COM]\n> \n> > On further thought, btbuild is not that badly broken at the moment,\n> > because CREATE INDEX acquires ShareLock on the relation, so\n> > there can be no concurrent writers at the page level. Still, it\n> > seems like it'd be a good idea to do \"LockBuffer(buffer,\n> BUFFER_LOCK_SHARE)\"\n> > here, and probably also to invoke HeapTupleSatisfiesNow() via the\n> > HeapTupleSatisfies() macro so that infomask update is checked for.\n> > Vadim, what do you think?\n> \n> Looks like there is no drawback in locking buffer so let's lock it.\n> \n\nOK I would fix it.\nAs for HeapTupleSatisfies() there seems to be another choice to\nlet HeapTupleSatisfiesAny() be equivalent to HeapTupleSatisfiesNow()\nother than always returning true.\n\nComments ?\n\nregards,\nHiroshi Inoue \n", "msg_date": "Wed, 4 Jul 2001 19:23:22 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "RE: Re: Buffer access rules, and a probable bug " }, { "msg_contents": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> As for HeapTupleSatisfies() there seems to be another choice to\n> let HeapTupleSatisfiesAny() be equivalent to HeapTupleSatisfiesNow()\n> other than always returning true.\n\nWouldn't that break the other uses of SnapshotAny? 
I'm not sure\nit's what nbtree.c wants, either, because then the heap_getnext\ncall wouldn't return recently-dead tuples at all.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Jul 2001 13:09:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Buffer access rules, and a probable bug " }, { "msg_contents": "Tom Lane wrote:\n> \n> \"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> > As for HeapTupleSatisfies() there seems to be another choise to\n> > let HeapTupleSatisfiesAny() be equivalent to HeapTupleSatisfiesNow()\n> > other than always returning true.\n> \n> Wouldn't that break the other uses of SnapshotAny? \n\nIn theory no because HeapTupleSatisfies...() only touches\nhint bits. What I mean is to implement a new function\nHeapTupleSatisfiesAny() as\n\nbool\nHeapTupleSatisfiesAny(HeapTupleHeader tuple)\n{\n\tHeapTupleSatisfiesNow(tuple);\n\treturn true;\n}\n.\n\n> I'm not sure\n> it's what nbtree.c wants, either, because then the heap_getnext\n> call wouldn't return recently-dead tuples at all.\n> \n\nnbtree.c has to see all(including dead) tuples and judge\nif the tuples are alive, dead or removable via unified\ntime qualification.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Thu, 05 Jul 2001 08:43:24 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Re: Buffer access rules, and a probable bug" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> What I mean is to implement a new function\n> HeapTupleSatisfiesAny() as\n\n> bool\n> HeapTupleSatisfiesAny(HeapTupleHeader tuple)\n> {\n> \tHeapTupleSatisfiesNow(tuple);\n> \treturn true;\n> }\n\nOh, I see: so that HeapTupleSatisfies would update the hint bits even\nwhen called with snapshot = SnapShotAny. Hmm. This might be a good\nidea on its own merits, but I don't think it simplifies nbtree.c at\nall --- you'd still have to go through the full LockBuffer and hint\nupdate procedure there. 
(If the other transaction committed meanwhile,\nthe call from nbtree.c could try to update hint bits that hadn't been\nupdated during heap_fetch.)\n\nBTW, I don't really think that this code belongs in nbtree.c at all.\nIf it lives there, then we need to duplicate the logic in each index\naccess method. At some point we ought to fix the index build process\nso that the loop that scans the source relation is outside the access-\nmethod-specific code.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Jul 2001 21:04:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Buffer access rules, and a probable bug " }, { "msg_contents": "> In theory no because HeapTupleSatisfies...() only touches\n> hint bits. What I mean is to implement a new function\n> HeapTupleSatisfiesAny() as\n\nCan someone add comments to the Satisfies code to explain each\nvisibility function? Some are documented and some are not. Seems it\nwould be good to get this stuff in there for future coders.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 4 Jul 2001 23:16:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Buffer access rules, and a probable bug" }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > What I mean is to implement a new function\n> > HeapTupleSatisfiesAny() as\n> \n> > bool\n> > HeapTupleSatisfiesAny(HeapTupleHeader tuple)\n> > {\n> > HeapTupleSatisfiesNow(tuple);\n> > return true;\n> > }\n> \n> Oh, I see: so that HeapTupleSatisfies would update the hint bits even\n> when called with snapshot = SnapShotAny. Hmm. 
This might be a good\n> idea on its own merits, but I don't think it simplifies nbtree.c at\n> all --- you'd still have to go through the full LockBuffer and hint\n> update procedure there. (If the other transaction committed meanwhile,\n> the call from nbtree.c could try to update hint bits that hadn't been\n> updated during heap_fetch.)\n> \n\nDead(HEAP_XMAX_COMMITTED || HEAP_XMIN_INVALID) tuples never\nrevive. Live (not dead) tuples never die in Share Lock mode.\nSo I don't have to call HeapTupleSatisfies() again though I\nseem to have to lock the buffer so as to see t_infomask and\nt_xmax.\n\n> BTW, I don't really think that this code belongs in nbtree.c at all.\n> If it lives there, then we need to duplicate the logic in each index\n> access method. At some point we ought to fix the index build process\n> so that the loop that scans the source relation is outside the access-\n> method-specific code.\n> \n\nAgreed. \n\nregards,\nHiroshi Inoue\n", "msg_date": "Thu, 05 Jul 2001 13:45:13 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Re: Buffer access rules, and a probable bug" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Dead(HEAP_XMAX_COMMITTED || HEAP_XMIN_INVALID) tuples never\n> revive. Live (not dead) tuples never die in Share Lock mode.\n\nHmm ... so you're relying on the ShareLock to ensure that the state of\nthe tuple can't change between when heap_fetch checks it and when\nnbtree looks at it.\n\nOkay, but put in some comments documenting what this stuff is doing\nand why.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Jul 2001 09:54:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Buffer access rules, and a probable bug " } ]
[ { "msg_contents": "> With stock PostgreSQL... how many committed transactions can one lose\n> on a simple system crash/reboot? With Oracle or Informix, the answer\n> is zero. Is that true with PostgreSQL in fsync mode? If not, does it\n\nIt's true or better say should be, keeping in mind probability of bugs.\n\n> lose all in the log, or just those not yet written to the DB?\n\nBAR is not for \"simple crash\" but for the disk crashes. In this case\none will lose as much as WAL files lost.\n\nVadim\n", "msg_date": "Tue, 3 Jul 2001 12:55:47 -0700 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Re: Backup and Recovery" } ]
[ { "msg_contents": "Has anyone thought about adding SNMP support to PostgreSQL? RFC1697\ncontains the MIB file for RDBMSs. Has anyone worked with a RDBMS that\nsupports SNMP and, if so, did they find it useful? I'm just learning\nabout SNMP, but it seems that it has the potential to be useful.\nThoughts?\n\nNew to the list...\n--Kevin\n\n", "msg_date": "Tue, 03 Jul 2001 17:27:04 -0400", "msg_from": "Kevin <kbryan@webmachines.com>", "msg_from_op": true, "msg_subject": "SNMP support" } ]
[ { "msg_contents": "Hi Dear:\n\n I am using Postgres 7.0 on RedHat Linux 6.3 right now. I wanna know\nis there transaction log in PostGres 7.0? I know there is WAL in 7.1. I\nam also thinking to upgrade to 7.1.2 too. I have read a lot of message\nin the PostgreSQL mailing list archive which is about the WAL.\n The following is some conclusion of what I learn from there:\n\n * checkpoint is made according to the config setting of\n checkpoint_segment and checkpoint_timeout.\n * after a checkpoint has been made, the log flush to the disk drive.\n * after a checkpoint has been made, any log segments written before\n the redo record ( the record in the log which the REDO operation\n begin) are REMOVED to free disk space in WAL directory.\n * seems to me, everything is automatic.\n\nPlease, point me out if there anything wrong.\n However, there is still something really confusing me. In PostgreSQL\n\n7.1 Documentation Chapter 9 WAL , 9.2.1 Database Recovery with WAL, it\ntell me about the flow of the process to recover from WAL: 1. reads\npg_control, 2. reads checkpoint record, 3. reads redo record, and then\nfinally 4. REDO operation. I wonder do we need to run all the above\nprocedures manually, after a crash?? or They will REALLY be\nautomatically run when re-start the postgres after a crash?? It sounds\nlike a miracle for me!!\n\nThank You very Much\nHarry Yau\n\n\n\n\n\n\n", "msg_date": "Wed, 04 Jul 2001 09:43:22 +0800", "msg_from": "Harry Yau <harry@regaltronic.com>", "msg_from_op": true, "msg_subject": "WAL Question" } ]
[ { "msg_contents": "\n> I imagine a daemon extracting redo log entries from WAL segments, \n> asynchronously. Mixing redo log entries into the WAL allows the WAL \n> to be the only synchronous disk writer in the system, a Good Thing.\n\nThis comes up periodically now. WAL currently already has all the info\nthat would be needed for redo (it actually has to). All that is missing \nis a program, that can take a consistent physical snapshot (as it was after \na particular checkpoint) and would replay the WAL after a restore of such a \nsnapshot. This replay after a consistent snapshot is probably as simple\nas making the WAL files available to the standard startup rollforward (redo) \nmechanism, that is already implemented.\n\nActually it might even be possible to do the WAL redo based on\nan inconsistent backup (e.g. done with tar/cpio), but that probably needs \nmore thought and testing than above. At the least, the restore would need to \ngenerate a valid pg_control before starting redo.\n\nAndreas\n", "msg_date": "Wed, 4 Jul 2001 10:37:57 +0200 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Re: Backup and Recovery" }, { "msg_contents": "On Wed, Jul 04, 2001 at 10:37:57AM +0200, Zeugswetter Andreas SB wrote:\n> \n> > I imagine a daemon extracting redo log entries from WAL segments, \n> > asynchronously. Mixing redo log entries into the WAL allows the WAL \n> > to be the only synchronous disk writer in the system, a Good Thing.\n> \n> This comes up periodically now. WAL currently already has all the info\n> that would be needed for redo (it actually has to). 
\n\nThe WAL has the information needed to take a binary table image\nfrom the checkpoint state to the last committed transaction.\nIIUC, it is meaningless in relation to a pg_dump image.\n\n> All that is missing is a program, that can take a consistent physical\n> snapshot (as it was after a particular checkpoint) and would replay\n> the WAL after a restore of such a snapshot. This replay after a\n> consistent snapshot is probably as simple as making the WAL files\n> available to the standard startup rollforward (redo) mechanism, that\n> is already implemented.\n\nHow would you take a physical snapshot without interrupting database \noperation? Is a physical/binary snapshot a desirable backup format? \nPeople seem to want to be able to restore from ASCII dumps.\n\nAlso, isn't the WAL format rather bulky to archive hours and hours of?\nI would expect high-level transaction redo records to be much more compact;\nmixed into the WAL, such records shouldn't make the WAL grow much faster.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Wed, 4 Jul 2001 04:46:20 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Re: Backup and Recovery" }, { "msg_contents": "> > All that is missing is a program, that can take a consistent physical\n> > snapshot (as it was after a particular checkpoint) and would replay\n> > the WAL after a restore of such a snapshot. This replay after a\n> > consistent snapshot is probably as simple as making the WAL files\n> > available to the standard startup rollforward (redo) mechanism, that\n> > is already implemented.\n> \n> How would you take a physical snapshot without interrupting database \n> operation? Is a physical/binary snapshot a desirable backup format? 
\n> People seem to want to be able to restore from ASCII dumps.\n> \n> Also, isn't the WAL format rather bulky to archive hours and hours of?\n> I would expect high-level transaction redo records to be much more compact;\n> mixed into the WAL, such records shouldn't make the WAL grow much faster.\n\nThe page images are not needed and can be thrown away once the page is\ncompletely sync'ed to disk or a checkpoint happens. \n\nThe row images aren't that large. I think any solution would have to\nhandle page images and row images differently.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 4 Jul 2001 11:04:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Backup and Recovery" }, { "msg_contents": "> On Wed, Jul 04, 2001 at 10:37:57AM +0200, Zeugswetter Andreas SB wrote:\n> > \n> > > I imagine a daemon extracting redo log entries from WAL segments, \n> > > asynchronously. Mixing redo log entries into the WAL allows the WAL \n> > > to be the only synchronous disk writer in the system, a Good Thing.\n> > \n> > This comes up periodically now. WAL currently already has all the info\n> > that would be needed for redo (it actually has to). \n> \n> The WAL has the information needed to take a binary table image\n> from the checkpoint state to the last committed transaction.\n> IIUC, it is meaningless in relation to a pg_dump image.\n> \n\nAlso, I expect this will be completed for 7.2.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 4 Jul 2001 11:04:56 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Backup and Recovery" }, { "msg_contents": "At 11:04 AM 7/4/01 -0400, Bruce Momjian wrote:\n> > > All that is missing is a program, that can take a consistent physical\n> > > snapshot (as it was after a particular checkpoint) and would replay\n> > > the WAL after a restore of such a snapshot. This replay after a\n> > > consistent snapshot is probably as simple as making the WAL files\n> > > available to the standard startup rollforward (redo) mechanism, that\n> > > is already implemented.\n> >\nYou would also have to have some triggering method that would not allow \nWAL's to be overwritten, and make sure they are sent to some device before \nthat happen. In addition, you'd have to anchor the LOGS against the dumped \ndatabase, so it would know exactly which log to start with when replaying \nagainst any given archive.\n--\nNaomi Walker\nChief Information Officer\nEldorado Computing, Inc.\n602-604-3100 ext 242 \n\n", "msg_date": "Thu, 05 Jul 2001 09:13:40 -0700", "msg_from": "Naomi Walker <nwalker@eldocomp.com>", "msg_from_op": false, "msg_subject": "Re: Re: Backup and Recovery" } ]
[ { "msg_contents": "Hi all,\n\nwhen a new table or field is created without quotes, it is assumed to be \ncase-insensitive. About this I have some questions:\n\n- Is it SQL-92-conformant to handle \"test\" like test without quotes, or \nshouldn't it be test forced to lowercase?\n\n- Oracle returns this no_matter_what-case_it_is-fields with \nuppercase-letters. Is it possible for Postgresql to imitate this behaviour?\n\n- How is the handling of case-sensitivity handled in the system-catalogs? Is \nthere any flag, or does it depend on the name of the object only?\n\nThank you very much in advance!\n\nKlaus\n\n-- \nVisit WWWdb at\nhttp://wwwdb.org\n", "msg_date": "Wed, 4 Jul 2001 11:06:38 +0200", "msg_from": "Klaus Reger <K.Reger@wwwdb.de>", "msg_from_op": true, "msg_subject": "Get table/field-identifiers in uppercase" } ]
[ { "msg_contents": "Hi,\n\nCan someone explain why the following query takes 1 second when using \nLIKE and takes 30 seconds when replacing LIKE by = in the WHERE ?\n- instance_Attribute has 45 rows and Influence has 5 rows.\n- Postgresql 7.1\n\nRegards\n=====================================\nSELECT\nE1A1.nameInstance AS inste,\nE1A1.nameClass AS classe,\nE1A1.value AS dx,\nE1A2.value AS dy,\nE1A3.value AS dz,\nE1A4.value AS v,\nI0.value AS ix,\nI1.value AS iy,\nI2.value AS iz,\nI3.value AS iv\nFROM\ninstance_Attribute AS E1A1,\ninstance_Attribute AS E1A2,\ninstance_Attribute AS E1A3,\ninstance_Attribute AS E1A4,\nInfluence AS I0,\nInfluence AS I1,\nInfluence AS I2,\nInfluence AS I3\nWHERE\nE1A1.nameAttribute LIKE 'directionx' AND\nE1A2.nameInstance LIKE E1A1.nameInstance AND\nE1A2.nameClass LIKE E1A1.nameClass AND\nE1A2.nameAttribute LIKE 'directiony' AND\nE1A3.nameInstance LIKE E1A1.nameInstance AND\nE1A3.nameClass LIKE E1A1.nameClass AND\nE1A3.nameAttribute LIKE 'directionz' AND\nE1A4.nameInstance LIKE E1A1.nameInstance AND\nE1A4.nameClass LIKE E1A1.nameClass AND\nE1A4.nameAttribute LIKE 'vitesse' AND\nI0.nameClass LIKE E1A1.nameClass AND\nI0.nameInstance LIKE E1A1.nameInstance AND\nI0.nameInfluence LIKE 'inf_directionx' AND\nI1.nameClass LIKE E1A1.nameClass AND\nI1.nameInstance LIKE E1A1.nameInstance AND\nI1.nameInfluence LIKE 'inf_directiony' AND\nI2.nameClass LIKE E1A1.nameClass AND\nI2.nameInstance LIKE E1A1.nameInstance AND\nI2.nameInfluence LIKE 'inf_directionz' AND\nI3.nameClass LIKE E1A1.nameClass AND\nI3.nameInstance LIKE E1A1.nameInstance AND\nI3.nameInfluence LIKE 'inf_vitesse' ;\n\nMichel Soto\n----------------------------------------------------------------------------\nUniversite Pierre et Marie Curie TEL: +33 1 44 27 88 30\nLaboratoire LIP6-CNRS +33 1 44 55 35 23\n8, rue du Capitaine Scott FAX: +33 1 44 27 53 53\n75015 PARIS mailto:Michel.Soto@lip6.fr\nFrance\n\nAcc�s: http://www.mappy.fr/PlanPerso/7438/1\n\nHi,\nCan  someone explain why the 
following query takes  1 second\nwhen using LIKE and takes 30 seconds when replacing LIKE by = in the\nWHERE ?\n- instance_Attribute  has 45 rows and Influence has 5 rows.\n- Postgresql 7.1\nRegards\n=====================================\nSELECT \nE1A1.nameInstance AS inste, \nE1A1.nameClass AS classe, \nE1A1.value AS dx, \nE1A2.value AS dy, \nE1A3.value AS dz, \nE1A4.value AS v, \nI0.value AS ix, \nI1.value AS iy, \nI2.value AS iz, \nI3.value AS iv \nFROM \ninstance_Attribute AS E1A1, \ninstance_Attribute AS E1A2, \ninstance_Attribute AS E1A3, \ninstance_Attribute AS E1A4, \nInfluence AS I0, \nInfluence AS I1, \nInfluence AS I2, \nInfluence AS I3 \nWHERE\nE1A1.nameAttribute LIKE 'directionx' AND \nE1A2.nameInstance LIKE E1A1.nameInstance  AND \nE1A2.nameClass LIKE E1A1.nameClass  AND \nE1A2.nameAttribute LIKE 'directiony' AND \nE1A3.nameInstance LIKE E1A1.nameInstance  AND \nE1A3.nameClass LIKE E1A1.nameClass  AND \nE1A3.nameAttribute LIKE 'directionz' AND \nE1A4.nameInstance LIKE E1A1.nameInstance  AND \nE1A4.nameClass LIKE E1A1.nameClass  AND \nE1A4.nameAttribute LIKE 'vitesse' AND \nI0.nameClass LIKE E1A1.nameClass AND \nI0.nameInstance LIKE E1A1.nameInstance AND \nI0.nameInfluence LIKE 'inf_directionx' AND \nI1.nameClass LIKE E1A1.nameClass AND \nI1.nameInstance LIKE E1A1.nameInstance AND \nI1.nameInfluence LIKE 'inf_directiony' AND \nI2.nameClass LIKE E1A1.nameClass AND \nI2.nameInstance LIKE E1A1.nameInstance AND \nI2.nameInfluence LIKE 'inf_directionz' AND \nI3.nameClass LIKE E1A1.nameClass AND \nI3.nameInstance LIKE E1A1.nameInstance AND \nI3.nameInfluence LIKE 'inf_vitesse' ;\n\nMichel Soto\n----------------------------------------------------------------------------\nUniversite Pierre et Marie Curie     TEL: +33 1 44 27\n88 30\nLaboratoire\nLIP6-CNRS                         \n+33 1 44 55 35 23\n8, rue du Capitaine\nScott               \nFAX: +33 1 44 27 53 53 \n75015\nPARIS                                 \nmailto:Michel.Soto@lip6.fr\n\nFrance\nAcc�s: 
http://www.mappy.fr/PlanPerso/7438/1", "msg_date": "Wed, 04 Jul 2001 14:12:51 +0200", "msg_from": "Michel Soto <Michel.Soto@lip6.fr>", "msg_from_op": true, "msg_subject": "Strange query execution time" }, { "msg_contents": "\nWhat does explain show for the two queries?\n\nOn Wed, 4 Jul 2001, Michel Soto wrote:\n\n> Hi,\n> \n> Can someone explain why the following query takes 1 second when using \n> LIKE and takes 30 seconds when replacing LIKE by = in the WHERE ?\n> - instance_Attribute has 45 rows and Influence has 5 rows.\n> - Postgresql 7.1\n\n", "msg_date": "Tue, 10 Jul 2001 14:36:03 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Strange query execution time" } ]
[ { "msg_contents": "Issuing the following ( admittedly bogus ) statement against 7.1.1\n\nCREATE TABLE dir_suppliers_var_prodtype (\n dir_suppliers_var_prodtype_id INTEGER ,\n dir_suppliers_var_id integer DEFAULT 0 NOT NULL,\n prodtype_id smallint DEFAULT 0 NOT NULL,\n PRIMARY KEY\n(dir_suppliers_var_prodtype_id,dir_suppliers_var_prodtype_id)\n);\n\ngives the following , initially slightly cryptic response.\n\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index\n'dir_suppliers_var_prodtype_pkey' for table 'dir_suppliers_var_prodtype'\n\nERROR: Cannot insert a duplicate key into unique index\npg_attribute_relid_attnam_index\n\nThis is obviously because of the broken primary key definition.\n \nMy question is, should this not raise a parser error ? It took me a\nlittle while to actually spot the problem with the users statement.\n\n\n\n-- \nColin M Strickland perl -e'print \"\\n\",map{chr(ord()-3)}(reverse split \n //,\"\\015%vhlwlqxpprF#ir#uhzrS#hkw#jqlvvhqudK%#\\015\\015nx\".\n\"1rf1wilv1zzz22=swwk###369<#84<#:44#77.={di##339<#84<#:44#77.=ohw\\015]\".\n\"K9#4VE#/ORWVLUE#/whhuwV#dlurwflY#334#/wilV\\015uhsrohyhG#ehZ#urlqhV\");'\n", "msg_date": "04 Jul 2001 14:28:46 +0100", "msg_from": "Colin Strickland <cms@sift.co.uk>", "msg_from_op": true, "msg_subject": "CREATE TABLE .. PRIMARY KEY quirk" }, { "msg_contents": "Colin Strickland <cms@sift.co.uk> writes:\n> PRIMARY KEY\n> (dir_suppliers_var_prodtype_id,dir_suppliers_var_prodtype_id)\n \n> My question is, should this not raise a parser error ?\n\nYes, it should. SQL92 saith\n\n 4) Each <column name> in the <unique column list> shall identify\n a column of T, and the same column shall not be identified more\n than once.\n\nLooks like we neglect to make that check during initial processing of\nthe PRIMARY KEY clause.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Jul 2001 13:12:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE .. 
PRIMARY KEY quirk " } ]
[ { "msg_contents": "Hi,\n\nCan someone explain why the following query takes 1 second when using \nLIKE and takes 30 seconds when replacing LIKE by = in the WHERE ?\n- instance_Attribute has 45 rows and Influence has 5 rows.\n- Postgresql 7.1\n\nRegards\n=====================================\nSELECT\nE1A1.nameInstance AS inste,\nE1A1.nameClass AS classe,\nE1A1.value AS dx,\nE1A2.value AS dy,\nE1A3.value AS dz,\nE1A4.value AS v,\nI0.value AS ix,\nI1.value AS iy,\nI2.value AS iz,\nI3.value AS iv\nFROM\ninstance_Attribute AS E1A1,\ninstance_Attribute AS E1A2,\ninstance_Attribute AS E1A3,\ninstance_Attribute AS E1A4,\nInfluence AS I0,\nInfluence AS I1,\nInfluence AS I2,\nInfluence AS I3\nWHERE\nE1A1.nameAttribute LIKE 'directionx' AND\nE1A2.nameInstance LIKE E1A1.nameInstance AND\nE1A2.nameClass LIKE E1A1.nameClass AND\nE1A2.nameAttribute LIKE 'directiony' AND\nE1A3.nameInstance LIKE E1A1.nameInstance AND\nE1A3.nameClass LIKE E1A1.nameClass AND\nE1A3.nameAttribute LIKE 'directionz' AND\nE1A4.nameInstance LIKE E1A1.nameInstance AND\nE1A4.nameClass LIKE E1A1.nameClass AND\nE1A4.nameAttribute LIKE 'vitesse' AND\nI0.nameClass LIKE E1A1.nameClass AND\nI0.nameInstance LIKE E1A1.nameInstance AND\nI0.nameInfluence LIKE 'inf_directionx' AND\nI1.nameClass LIKE E1A1.nameClass AND\nI1.nameInstance LIKE E1A1.nameInstance AND\nI1.nameInfluence LIKE 'inf_directiony' AND\nI2.nameClass LIKE E1A1.nameClass AND\nI2.nameInstance LIKE E1A1.nameInstance AND\nI2.nameInfluence LIKE 'inf_directionz' AND\nI3.nameClass LIKE E1A1.nameClass AND\nI3.nameInstance LIKE E1A1.nameInstance AND\nI3.nameInfluence LIKE 'inf_vitesse' ;\n\nMichel Soto\n----------------------------------------------------------------------------\nUniversite Pierre et Marie Curie TEL: +33 1 44 27 88 30\nLaboratoire LIP6-CNRS +33 1 44 55 35 23\n8, rue du Capitaine Scott FAX: +33 1 44 27 53 53\n75015 PARIS mailto:Michel.Soto@lip6.fr\nFrance\n\nAcc�s: http://www.mappy.fr/PlanPerso/7438/1\n\nHi,\nCan  someone explain why the 
following query takes  1 second\nwhen using LIKE and takes 30 seconds when replacing LIKE by = in the\nWHERE ?\n- instance_Attribute  has 45 rows and Influence has 5 rows.\n- Postgresql 7.1\nRegards\n=====================================\nSELECT \nE1A1.nameInstance AS inste, \nE1A1.nameClass AS classe, \nE1A1.value AS dx, \nE1A2.value AS dy, \nE1A3.value AS dz, \nE1A4.value AS v, \nI0.value AS ix, \nI1.value AS iy, \nI2.value AS iz, \nI3.value AS iv \nFROM \ninstance_Attribute AS E1A1, \ninstance_Attribute AS E1A2, \ninstance_Attribute AS E1A3, \ninstance_Attribute AS E1A4, \nInfluence AS I0, \nInfluence AS I1, \nInfluence AS I2, \nInfluence AS I3 \nWHERE\nE1A1.nameAttribute LIKE 'directionx' AND \nE1A2.nameInstance LIKE E1A1.nameInstance  AND \nE1A2.nameClass LIKE E1A1.nameClass  AND \nE1A2.nameAttribute LIKE 'directiony' AND \nE1A3.nameInstance LIKE E1A1.nameInstance  AND \nE1A3.nameClass LIKE E1A1.nameClass  AND \nE1A3.nameAttribute LIKE 'directionz' AND \nE1A4.nameInstance LIKE E1A1.nameInstance  AND \nE1A4.nameClass LIKE E1A1.nameClass  AND \nE1A4.nameAttribute LIKE 'vitesse' AND \nI0.nameClass LIKE E1A1.nameClass AND \nI0.nameInstance LIKE E1A1.nameInstance AND \nI0.nameInfluence LIKE 'inf_directionx' AND \nI1.nameClass LIKE E1A1.nameClass AND \nI1.nameInstance LIKE E1A1.nameInstance AND \nI1.nameInfluence LIKE 'inf_directiony' AND \nI2.nameClass LIKE E1A1.nameClass AND \nI2.nameInstance LIKE E1A1.nameInstance AND \nI2.nameInfluence LIKE 'inf_directionz' AND \nI3.nameClass LIKE E1A1.nameClass AND \nI3.nameInstance LIKE E1A1.nameInstance AND \nI3.nameInfluence LIKE 'inf_vitesse' ;\n\nMichel Soto\n----------------------------------------------------------------------------\nUniversite Pierre et Marie Curie     TEL: +33 1 44 27\n88 30\nLaboratoire\nLIP6-CNRS                         \n+33 1 44 55 35 23\n8, rue du Capitaine\nScott               \nFAX: +33 1 44 27 53 53 \n75015\nPARIS                                 \nmailto:Michel.Soto@lip6.fr\n\nFrance\nAcc�s: 
http://www.mappy.fr/PlanPerso/7438/1", "msg_date": "Wed, 04 Jul 2001 16:30:18 +0200", "msg_from": "Michel Soto <Michel.Soto@lip6.fr>", "msg_from_op": true, "msg_subject": "Strange query execution time" } ]
[ { "msg_contents": "\n> Can someone explain why the following query takes 1 second when using \n> LIKE and takes 30 seconds when replacing LIKE by = in the WHERE ? \n\nBecause there is no optimization built in, that notices, that your\nstring does not contain a wildcard and would translate the restriction \ncorrespondingly. It is currently executed more or less like below:\n\t(I3.nameInfluence >= 'inf_vitesse' and I3.nameInfluence < 'inf_vitessf'\n \t and I3.nameInfluence LIKE 'inf_vitesse')\nI already wanted to try to add this optimization myself, but lacked the time so far.\nAnybody want to volunteer ?\n\nTODO: add LIKE optimization to use = if constant does not contain any wildcards\n\nAndreas\n", "msg_date": "Thu, 5 Jul 2001 09:55:04 +0200 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Strange query execution time" }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> TODO: add LIKE optimization to use = if constant does not contain any wildcards\n\nYour information is obsolete ... try looking at the EXPLAIN VERBOSE\noutput for such a query.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Jul 2001 09:57:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Strange query execution time " } ]
[ { "msg_contents": "\n> > > > I imagine a daemon extracting redo log entries from WAL segments, \n> > > > asynchronously. Mixing redo log entries into the WAL allows the WAL \n> > > > to be the only synchronous disk writer in the system, a Good Thing.\n> > > \n> > > This comes up periodically now. WAL currently already has all the info\n> > > that would be needed for redo (it actually has to). \n> > \n> > The WAL has the information needed to take a binary table image\n> > from the checkpoint state to the last committed transaction.\n\nIt actually even contains the information needed to take an \ninconsistent binary table image from any point in time to as far as you\nhave WAL records for. The only prerequisite is, that you apply\nat least all those WAL records that where created during the time window\nfrom start of reading the binary table image until end of reading\nfor backup.\n\nIf you want to restore a whole instance from an inconsistent physical dump,\nyou need to apply at least all WAL records from the time window from start of \nwhole physical dump to end of your whole physical dump. This would have the \nadvantage of not needing any synchronization with the backend for doing \nthe physical dump. (It doesen't even matter if you read inconsistent pages,\nsince those will be restored by \"physical log\" in WAL later.) If you start \nthe physical dump with pg_control, you might not need to reconstruct one for\nrollforward later.\n\n> > IIUC, it is meaningless in relation to a pg_dump image.\n\nYes, pg_dump produces a \"logical snapshot\". You cannot use a data content\ndump for later rollforward.\n\nAndreas\n", "msg_date": "Thu, 5 Jul 2001 11:42:37 +0200 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Re: Backup and Recovery" } ]
[ { "msg_contents": "\n> > Also, isn't the WAL format rather bulky to archive hours and hours of?\n\nIf it were actually too bulky, then it needs to be made less so, since that\ndirectly affects overall performance :-) \n\n> > I would expect high-level transaction redo records to be much more compact;\n> > mixed into the WAL, such records shouldn't make the WAL grow much faster.\n\nAll redo records have to be at the tuple level, so what higher-level are you talking \nabout ? (statement level redo records would not be able to reproduce the same\nresulting table data (keyword: transaction isolation level)) \n\n> The page images are not needed and can be thrown away once the page is\n> completely sync'ed to disk or a checkpoint happens.\n\nActually they should at least be kept another few seconds to allow \"stupid\"\ndisks to actually write the pages :-) But see previous mail, they can also \nhelp with various BAR restore solutions.\n\nAndreas\n", "msg_date": "Thu, 5 Jul 2001 14:27:01 +0200 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Re: Backup and Recovery" }, { "msg_contents": "> > The page images are not needed and can be thrown away once the page is\n> > completely sync'ed to disk or a checkpoint happens.\n> \n> Actually they should at least be kept another few seconds to allow \"stupid\"\n> disks to actually write the pages :-) But see previous mail, they can also \n> help with various BAR restore solutions.\n\nAgreed. They have to be kept a few seconds.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 5 Jul 2001 12:05:49 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Re: Backup and Recovery" }, { "msg_contents": "On Thu, Jul 05, 2001 at 02:27:01PM +0200, Zeugswetter Andreas SB wrote:\n> \n> > Also, isn't the WAL format rather bulky to archive hours and hours of?\n> \n> If it were actually too bulky, then it needs to be made less so, since\n> that directly affects overall performance :-) \n\nISTM that WAL record size trades off against lots of things, including \n(at least) complexity of recovery code, complexity of WAL generation \ncode, usefulness in fixing corrupt table images, and processing time\nit would take to produce smaller log entries. \n\nComplexity is always expensive, and CPU time spent \"pre-sync\" is a lot\nmore expensive than time spent in background. That is, time spent\ngenerating the raw log entries affects latency and peak capacity, \nwhere time in background mainly affects average system load.\n\nFor a WAL, the balance seems to be far to the side of simple-and-bulky.\nFor other uses, the balance is sure to be different.\n\n> > > I would expect high-level transaction redo records to be much more\n> > > compact; mixed into the WAL, such records shouldn't make the WAL\n> > > grow much faster.\n> \n> All redo records have to be at the tuple level, so what higher-level\n> are you talking about ? (statement level redo records would not be\n> able to reproduce the same resulting table data (keyword: transaction\n> isolation level)) \n\nStatement-level redo records would be nice, but as you note they are \nrarely practical if done by the database.\n\nRedo records that contain that contain whole blocks may be much bulkier\nthan records of whole tuples. 
Redo records of whole tuples may be much \nbulkier than those that just identify changed fields.\n\nBulky logs mean more-frequent snapshot backups, and bulky log formats \nare less suitable for network transmission, and therefore less useful \nfor replication. Smaller redo records take more processing to generate, \nbut that processing can be done off-line, and the result saves other \ncosts.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Thu, 5 Jul 2001 16:52:50 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Re: Backup and Recovery" }, { "msg_contents": "> > > > I would expect high-level transaction redo records to be much more\n> > > > compact; mixed into the WAL, such records shouldn't make the WAL\n> > > > grow much faster.\n> > \n> > All redo records have to be at the tuple level, so what higher-level\n> > are you talking about ? (statement level redo records would not be\n> > able to reproduce the same resulting table data (keyword: transaction\n> > isolation level)) \n> \n> Statement-level redo records would be nice, but as you note they are \n> rarely practical if done by the database.\n> \n> Redo records that contain that contain whole blocks may be much bulkier\n> than records of whole tuples. Redo records of whole tuples may be much \n> bulkier than those that just identify changed fields.\n> \n> Bulky logs mean more-frequent snapshot backups, and bulky log formats \n> are less suitable for network transmission, and therefore less useful \n> for replication. Smaller redo records take more processing to generate, \n> but that processing can be done off-line, and the result saves other \n> costs.\n\nTom has identified that VACUUM generates hug WAL traffic because of the\nwriting of page preimages in case the page is partially written to disk.\nIt would be nice to split those out into a separate WAL file _except_ it\nwould require two fsyncs() for commit (bad), so we are stuck. 
Once the\npage is flushed to disk after checkpoint, we don't really need those\npre-images anymore, hence the spliting of WAL page images and row\nrecords for recovery purposes.\n\nIn other words, we keep the page images and row records in one file so\nwe can do one fsync, but once we have written the page, we don't want to\nstore them for later point-in-time recovery.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 5 Jul 2001 20:15:36 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Backup and Recovery" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> In other words, we keep the page images and row records in one file so\n> we can do one fsync, but once we have written the page, we don't want to\n> store them for later point-in-time recovery.\n\nWhat we'd want to do is strip the page images from the version of the\nlogs that's archived for recovery purposes. Ideally the archiving\nprocess would also discard records from aborted transactions, but I'm\nnot sure how hard that'd be to do.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Jul 2001 21:33:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Backup and Recovery " }, { "msg_contents": "On Thu, Jul 05, 2001 at 09:33:17PM -0400, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > In other words, we keep the page images and row records in one file so\n> > we can do one fsync, but once we have written the page, we don't want to\n> > store them for later point-in-time recovery.\n> \n> What we'd want to do is strip the page images from the version of the\n> logs that's archived for recovery purposes. 
\n\nAm I correct in believing that the remaining row images would have to \nbe applied to a clean table-image snapshot? Maybe you can produce a \nclean table-image snapshot by making a dirty image copy, and then\nreplaying the WAL from the time you started copying up to the time\nwhen you finish copying. \n\nHow hard would it be to turn these row records into updates against a \npg_dump image, assuming access to a good table-image file?\n\n> Ideally the archiving process would also discard records from aborted\n> transactions, but I'm not sure how hard that'd be to do.\n\nA second pass over the WAL file -- or the log-picker daemon's \nfirst-pass output -- could eliminate the dead row images. Handling \nWAL file boundaries might be tricky if one WAL file has dead row-images \nand the next has the abort-or-commit record. Maybe the daemon has to \nlook ahead into the next WAL file to know what to discard from the \ncurrent file. \n\nWould it be useful to mark points in a WAL file where there are no \ntransactions with outstanding writes?\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Thu, 5 Jul 2001 19:54:43 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Re: Backup and Recovery" }, { "msg_contents": "> On Thu, Jul 05, 2001 at 09:33:17PM -0400, Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > In other words, we keep the page images and row records in one file so\n> > > we can do one fsync, but once we have written the page, we don't want to\n> > > store them for later point-in-time recovery.\n> > \n> > What we'd want to do is strip the page images from the version of the\n> > logs that's archived for recovery purposes. \n> \n> Am I correct in believing that the remaining row images would have to \n> be applied to a clean table-image snapshot? 
Maybe you can produce a \n> clean table-image snapshot by making a dirty image copy, and then\n> replaying the WAL from the time you started copying up to the time\n> when you finish copying. \n\nGood point. You are going to need a tar image of the data files to\nrestore via WAL and skip all WAL records from before the tar image. WAL\ndoes some of the tricky stuff now as part of crash recovery but it gets\nmore complited for a point-in-time recovery because the binary images\nwas taken over time, not at a single point in time like crash recovery.\n\n> \n> How hard would it be to turn these row records into updates against a \n> pg_dump image, assuming access to a good table-image file?\n\npg_dump is very hard because WAL contains only tids. No way to match\nthat to pg_dump-loaded rows.\n\n\n> > Ideally the archiving process would also discard records from aborted\n> > transactions, but I'm not sure how hard that'd be to do.\n> \n> A second pass over the WAL file -- or the log-picker daemon's \n> first-pass output -- could eliminate the dead row images. Handling \n> WAL file boundaries might be tricky if one WAL file has dead row-images \n> and the next has the abort-or-commit record. Maybe the daemon has to \n> look ahead into the next WAL file to know what to discard from the \n> current file. \n> \n> Would it be useful to mark points in a WAL file where there are no \n> transactions with outstanding writes?\n\nI think CHECKPOINT is as good as we are going to get in that area, but\nof course there are outstanding transactions that are not going to be\npicked up because they weren't committed before the checkpoint\ncompleted.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 6 Jul 2001 06:52:49 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Backup and Recovery" }, { "msg_contents": "On Fri, Jul 06, 2001 at 06:52:49AM -0400, Bruce Momjian wrote:\n> Nathan wrote:\n> > How hard would it be to turn these row records into updates against a \n> > pg_dump image, assuming access to a good table-image file?\n> \n> pg_dump is very hard because WAL contains only tids. No way to match\n> that to pg_dump-loaded rows.\n\nMaybe pg_dump can write out a mapping of TIDs to line numbers, and the\nback-end can create a map of inserted records' line numbers when the dump \nis reloaded, so that the original TIDs can be traced to the new TIDs.\nI guess this would require a new option on IMPORT. I suppose the\nmappings could be temporary tables.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Mon, 9 Jul 2001 13:54:32 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Re: Backup and Recovery" } ]
[ { "msg_contents": "Hi,\n I have compiled with --with-multibytes,\n but the JDBC really can not recognize chinese character.\n\n Any suggestion is kind for me.\n\n malix \n shanghai china\n\n\n_____________________________________________\n[�㲻���� ��������] ����ר�úţ�95963���û���/���룺263\n��ױƷ�����ػݣ����ۿ񳱣� http://shopping.263.net/category04.htm\n", "msg_date": "Thu, 5 Jul 2001 22:44:19 +0800 (CST)", "msg_from": "\"ma li\" <malix@263.net>", "msg_from_op": true, "msg_subject": "i need help for JDBC" }, { "msg_contents": "> I have compiled with --with-multibytes,\n> but the JDBC really can not recognize chinese character.\n> \n> Any suggestion is kind for me.\n\nHard to tell unless provided PostgreSQL version given and what kind of\nencoding you use.\n\n> [������������������ ������������������������] ������������������������95963��������������������/������������263\n\nDo not include non-ASCII contents, please.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 11 Jul 2001 10:03:48 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: i need help for JDBC" } ]
[ { "msg_contents": "\n\n Hello, hackers!\n\n\n I am running postgresql 7.1 on a SMP Linux box. It runs, but it never pass a \nloadavg of 0.4, no matter how I try to overload the system.\n\n The same configuration, the same executable, the same test on a non-SMP \nmachine gives a loadavg of 19.\n\n That means that a Xeon SMP box with 1Gb of RAM goes slowlier than a weak \nCPU, with the same postgres, the same configuration, and the same test.\n\n Anybody knows what is happening? Is there something to do on a SMP machine \nfor it to run? I tried with lots of shared memory, and with \"commit_delay=0\", \nbut nothing worked.\n\n Yours:\n\n\n\n\n", "msg_date": "Thu, 5 Jul 2001 16:51:35 +0200", "msg_from": "=?iso-8859-1?q?V=EDctor=20Romero?= <romero@kde.org>", "msg_from_op": true, "msg_subject": "Pg on SMP half-powered" }, { "msg_contents": "\nWhat is the postgres process doing? what does iostat show for disk I/O?\nfrom reading this, you are comparing apples->oranges ... are the drives\nthe same on the non-SMP as the SMP? amount of RAM? speed of CPUs? hard\ndrive controllers with same amount of cache on them? etc, etc, etc ...\n\nOn Thu, 5 Jul 2001, [iso-8859-1] V�ctor Romero wrote:\n\n>\n>\n> Hello, hackers!\n>\n>\n> I am running postgresql 7.1 on a SMP Linux box. It runs, but it never pass a\n> loadavg of 0.4, no matter how I try to overload the system.\n>\n> The same configuration, the same executable, the same test on a non-SMP\n> machine gives a loadavg of 19.\n>\n> That means that a Xeon SMP box with 1Gb of RAM goes slowlier than a weak\n> CPU, with the same postgres, the same configuration, and the same test.\n>\n> Anybody knows what is happening? Is there something to do on a SMP machine\n> for it to run? 
I tried with lots of shared memory, and with \"commit_delay=0\",\n> but nothing worked.\n>\n> Yours:\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Thu, 5 Jul 2001 14:34:26 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Pg on SMP half-powered" }, { "msg_contents": "El Jueves 05 Julio 2001 19:34, The Hermit Hacker escribi�:\n> What is the postgres process doing? what does iostat show for disk I/O?\n> from reading this, you are comparing apples->oranges ... are the drives\n> the same on the non-SMP as the SMP? amount of RAM? speed of CPUs? hard\n> drive controllers with same amount of cache on them? etc, etc, etc ...\n\n\tThe postgres is doing inserts, but it does not matter. Other tests and \nbenchs do the same weird results. All the process stay the most of the time \nsleeping.\n\n\tThe disk of the non-SMP is a cheap IDE. The disk of the SMP is a SCSI-2 \ndisk. Bonnie and other benchs give to the SMP machine better results (that is \nevident). Thay is what does strange that postgres run fater on the non-SMP \nmachine.\n\n\t(about comparing apples with oranges) Yes, I know. More exactly, I am \ncomparing a F-1 car with a bike, bike run faster, and I ask on the F-1 \nexperts mailing list why.\n\n \tRAM: 1Gb on the SMP machine. 128Mb on the non-SMP machine.\n\n \tSpeed: 400MHz on the non-SMP machine. 550 on the SMP one.\n\n The SMP machine is far away better than the non-SMP one. Same OS, \nsame distro, same Postgres, same test, and a cheap non-SMP machine \noutperforms a very expensive HP SMP server. 
It looks a interblocking stuff, \ndue to all the postmasters are sleeping but one, meanwhile on the non-SMP \nthey runs concurently. Does anybody knows what is happening?\n\n\n Yours:\n\n \n", "msg_date": "Fri, 6 Jul 2001 10:52:33 +0200", "msg_from": "=?iso-8859-1?q?V=EDctor=20Romero?= <romero@kde.org>", "msg_from_op": true, "msg_subject": "Re: Pg on SMP half-powered" }, { "msg_contents": "On 06 Jul 2001 10:52:33 +0200, V�ctor Romero wrote:\n\n> (about comparing apples with oranges) Yes, I know. More exactly, I am \n> comparing a F-1 car with a bike, bike run faster, and I ask on the F-1 \n> experts mailing list why.\n> \n> RAM: 1Gb on the SMP machine. 128Mb on the non-SMP machine.\n> \n> Speed: 400MHz on the non-SMP machine. 550 on the SMP one.\n> \n> The SMP machine is far away better than the non-SMP one. Same OS, \n> same distro, same Postgres, same test, and a cheap non-SMP machine \n> outperforms a very expensive HP SMP server. It looks a interblocking stuff, \n> due to all the postmasters are sleeping but one, meanwhile on the non-SMP \n> they runs concurently. Does anybody knows what is happening?\n> \n> \n> Yours:\n\n\nWhat kernel version? 2.4.x has had problems with mtrr on SMP Xeon\nsystems and we had to upgrade to 2.2.19 to get 2.2.x to use the L2 cache\non newer Xeon CPU's. The default redhat kernel had no idea what they\nwere, according to our sysadmin here and they ended up running with L2\ncahce disabled. 
And then we ran into problems with bigmem with RedHat's\n2.2.19 upgrade, and decided that the better route was to compile our own,\nensuring all of the hardware gets explicitly supported properly.\n\n\n-- \nColin M Strickland perl -e'print \"\\n\",map{chr(ord()-3)}(reverse split \n //,\"\\015%vhlwlqxpprF#ir#uhzrS#hkw#jqlvvhqudK%#\\015\\015nx\".\n\"1rf1wilv1zzz22=swwk###369<#84<#:44#77.={di##339<#84<#:44#77.=ohw\\015]\".\n\"K9#4VE#/ORWVLUE#/whhuwV#dlurwflY#334#/wilV\\015uhsrohyhG#ehZ#urlqhV\");'\n", "msg_date": "06 Jul 2001 10:36:38 +0100", "msg_from": "Colin Strickland <cms@sift.co.uk>", "msg_from_op": false, "msg_subject": "Re: Pg on SMP half-powered" }, { "msg_contents": "On Thursday 05 July 2001 10:51, Víctor Romero wrote:\n> I am running postgresql 7.1 on a SMP Linux box. It runs, but it never passes\n> a loadavg of 0.4, no matter how I try to overload the system.\n\n> The same configuration, the same executable, the same test on a non-SMP\n> machine gives a loadavg of 19.\n\nSounds like a kernel issue.\n\nHowever, the load average numbers alone are not enough information to go on. \nYou need to use a benchmark that can generate enough \ntraffic to load both machines and get good time results for the run of the \nstandard benchmark queries.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 7 Jul 2001 10:36:29 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Pg on SMP half-powered" }, { "msg_contents": "On Saturday 07 July 2001 16:36, Lamar Owen wrote:\n> On Thursday 05 July 2001 10:51, Víctor Romero wrote:\n> > I am running postgresql 7.1 on a SMP Linux box. 
It runs, but it never\n> > passes a loadavg of 0.4, no matter how I try to overload the system.\n> >\n> > The same configuration, the same executable, the same test on a non-SMP\n> > machine gives a loadavg of 19.\n>\n> Sounds like a kernel issue.\n>\n> However, the load average numbers alone are not enough information to go\n> on. You need to use a benchmark that can generate\n> enough traffic to load both machines and get good time results for the run\n> of the standard benchmark queries.\n\n\tNow I'm at home... as soon as I get to the office again I'll send you the little benchmark scripts I wrote. I think they generate enough traffic, because if I throw more than 400 threads at it the postmasters start to fail... \n\n\t\n", "msg_date": "Sun, 8 Jul 2001 13:37:26 +0200", "msg_from": "=?iso-8859-1?q?V=EDctor=20Romero?= <romero@kde.org>", "msg_from_op": true, "msg_subject": "Re: Pg on SMP half-powered" } ]
[ { "msg_contents": "After porting a database from FoxPro to PostgreSQL, one of the basic\nqueries that is used throughout the application now takes ~ 4 secs to\nrun, where it took ~ 40 usecs to run in FoxPro. I've been trying to use\nEXPLAIN to give me optimization hints, but I'm not sure what I'm looking\nat. Any places to look that might explain the results of explain?\n\nTks\n\n", "msg_date": "Thu, 05 Jul 2001 11:22:05 -0400", "msg_from": "\"P. Dwayne Miller\" <dmiller@espgroup.net>", "msg_from_op": true, "msg_subject": "Using Explain" }, { "msg_contents": "\"P. Dwayne Miller\" <dmiller@espgroup.net> writes:\n> Any places to look that might explain the results of explain?\n\nhttp://www.ca.postgresql.org/users-lounge/docs/7.1/postgres/performance-tips.html\n\nIf you're still confused, feel free to post the query, table schemas,\nand EXPLAIN output.\n\nBTW, have you run VACUUM ANALYZE? If not, you're unlikely to get a\ngood plan.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Jul 2001 12:10:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Using Explain " } ]
[ { "msg_contents": "I don't recall, has it ever been considered to compare the number of\nactual result rows against the estimate computed by the optimizer and then\ndraw some conclusions from it? Both numbers should be easily available.\nPossible \"conclusions\" might be suggesting running analyze, suggesting\ntweaking \"M\" and \"K\", assigning an out-of-whack factor to the statistics\n(once it goes to infinity (or zero) you do one of the previous things), in\nsome future life possibly automatically switching to an alternative\nstatistics model.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 5 Jul 2001 19:37:09 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Checking query results against selectivity estimate" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I don't recall, has it ever been considered to compare the number of\n> actual result rows against the estimate computed by the optimizer and then\n> draw some conclusions from it? Both numbers should be easily available.\n\nIt's been suggested, but doing anything with the knowledge that you\nguessed wrong seems to be an AI project, the more so as the query gets\nmore complex. I haven't been able to think of anything very productive\nto do with such a comparison (no, I don't like any of your suggestions\n;-)). Which parameter should be tweaked on the basis of a bad result?\nIf the real problem is not a bad parameter but a bad model, will the\ntweaker remain sane, or will it drive the parameters to completely\nridiculous values?\n\nThe one thing that we *can* recommend unreservedly is running ANALYZE\nmore often, but that's just a DB administration issue, not something you\nneed deep study of the planner results to discover. 
In 7.2, both VACUUM\nand ANALYZE should be sufficiently cheap/noninvasive that people can\njust run them in background every hour-or-so...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Jul 2001 14:36:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Checking query results against selectivity estimate " } ]
[ { "msg_contents": "> What I'm wondering is if you had any other intended use for \"mark for\n> cleanup\" than VACUUM. The cheapest implementation would allow only\n> one process to be waiting for cleanup on a given buffer, which is OK\n> for VACUUM because we'll only allow one VACUUM at a time on a relation\n> anyway. But if you had some other uses in mind, maybe the code needs\n> to support multiple waiters.\n\nI was going to use it for UNDO but it seems that UNDO w/o OSMGR is not\npopular and OSMGR will require different approaches anyway, so -\ndo whatever you want.\n\nVadim\n", "msg_date": "Thu, 5 Jul 2001 12:46:44 -0700 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Re: Buffer access rules, and a probable bug " }, { "msg_contents": "\"Mikheev, Vadim\" wrote:\n> \n> > What I'm wondering is if you had any other intended use for \"mark for\n> > cleanup\" than VACUUM. The cheapest implementation would allow only\n> > one process to be waiting for cleanup on a given buffer, which is OK\n> > for VACUUM because we'll only allow one VACUUM at a time on a relation\n> > anyway. But if you had some other uses in mind, maybe the code needs\n> > to support multiple waiters.\n> \n> I was going to use it for UNDO but it seems that UNDO w/o OSMGR is not\n> popular and OSMGR will require different approaches anyway, so -\n> do whatever you want.\n> \n\nHow is UNDO doing now?\nI've wanted partial rollback functionality for a long\ntime, and I've never seen a practical solution other\nthan UNDO.\n\nI'm thinking of the following restricted UNDO.\n\nUNDO is never invoked for an entire rollback of a transaction.\nThe implicit savepoint at the beginning of a transaction isn't\nneeded. If there are no savepoints, UNDO is never invoked.\nIn particular, UNDO is never invoked for commands outside a\ntransaction block. WAL logs before savepoints could be \ndiscarded at CheckPoint.\n\nComments? 
\n\nregards,\nHiroshi Inoue\n", "msg_date": "Fri, 06 Jul 2001 10:22:38 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Re: Buffer access rules, and a probable bug" } ]
[ { "msg_contents": "[Repost from July 1. Hasn't made it to the list yet.]\n\nThe description of the FE/BE protocol says:\n\n| The postmaster uses this info and the contents of the pg_hba.conf file\n| to determine what authentication method the frontend must use. The\n| postmaster then responds with one of the following messages:\n[...]\n| If the frontend does not support the authentication method requested by\n| the postmaster, then it should immediately close the connection.\n\nHowever, libpq doesn't do that. Instead, it leaves the connection open\nand returns CONNECTION_BAD to the client. The client would then\npresumably call something like PQfinish(), which sends a Terminate message\nand closes the connection. This happened to not confuse the <=7.1\npostmasters because they were waiting for 4 bytes and treated the early\nconnection close appropriately.\n\nOn this occasion let me also point out that\n\n pqPuts(\"X\", conn);\n\nis not the way to send a single byte 'X' to the server. I see the JDBC\ndriver goes to some trouble to make the same mistake.\n\nIn current sources the backends do the authentication and use the pqcomm.c\nfunctions for communication, but those aren't that happy about the early\nconnection close:\n\n pq_recvbuf: unexpected EOF on client connection\n FATAL 1: Password authentication failed for user 'peter'\n pq_flush: send() failed: Broken pipe\n\nSo I figured I would sneak in a check for connection close before reading\nthe authentication response in the server, but since the frontends seem\nto be doing what they want I don't really know what to check for.\n\nShould I fix libpq to follow the docs in this and, on the server's end,\ntreat anything that's not either a connection close or a valid\nauthentication response as \"unexpected EOF\"? 
That way old clients would produce a bit of\nnoise in the server log.\n\nDoes anyone know how the ODBC and JDBC drivers handle this situation?\n\nBtw., is recv(sock, x, 1, MSG_PEEK) == 0 an appropriate way to check for a\nclosed connection without reading anything?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 5 Jul 2001 21:49:18 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "FE/BE protocol oddity" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> However, libpq doesn't do that. Instead, it leaves the connection open\n> and returns CONNECTION_BAD to the client. The client would then\n> presumably call something like PQfinish(), which sends a Terminate message\n> and closes the connection. This happened to not confuse the <=7.1\n> postmasters because they were waiting for 4 bytes and treated the early\n> connection close appropriately.\n\nGood point. Probably, PQfinish should only send the X message if the\nconnection has gotten past the authentication stage. A separate but\nalso useful change would be to do immediate socket close on detecting\nauth failure, before returning to the client application.\n\n> On this occasion let me also point out that\n\n> pqPuts(\"X\", conn);\n\n> is not the way to send a single byte 'X' to the server.\n\nHuh? Oh, the trailing null byte. You're right, it should be\n\tpqPutnchar(\"X\", 1, conn);\n\n> So I figured I would sneak in a check for connection close before reading\n> the authentication response in the server, but since the frontends seems\n> to be doing what they want I don't really know what to check for.\n\nSeems reasonable, with the understanding that we'll still generate the\nsilly log messages when talking to an old client. 
However...\n\n> Btw., is recv(sock, x, 1, MSG_PEEK) == 0 an appropriate way to check for a\n> closed connection without reading anything?\n\nSeems a little risky as far as portability goes; is MSG_PEEK supported\non BeOS, Darwin, Cygwin, etc? Might be better to fix the backend libpq\nroutines to understand whether a connection-close event is expected or\nnot, and only emit a complaint to the log when it's not. Not sure how\nfar such a change would need to propagate though...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Jul 2001 16:38:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FE/BE protocol oddity " }, { "msg_contents": "Tom Lane writes:\n\n> Good point. Probably, PQfinish should only send the X message if the\n> connection has gotten past the authentication stage. A separate but\n> also useful change would be to do immediate socket close on detecting\n> auth failure, before returning to the client application.\n\nFurther investigation shows that all of this used to work correctly until\nthe infamous async connection patch. The comment now reads:\n(fe-connect.c)\n\n\t/*\n\t * We used to close the socket at this point, but that makes it\n\t * awkward for those above us if they wish to remove this socket from\n\t * their own records (an fd_set for example). We'll just have this\n\t * socket closed when PQfinish is called (which is compulsory even\n\t * after an error, since the connection structure must be freed).\n\t */\n\nI guess there is sort of a point there. So I'm leaning towards adding a\n\"startup complete\" flag somewhere in PGconn and simply fix up\nclosePGconn().\n\n> > Btw., is recv(sock, x, 1, MSG_PEEK) == 0 an appropriate way to check for a\n> > closed connection without reading anything?\n>\n> Seems a little risky as far as portability goes; is MSG_PEEK supported\n> on BeOS, Darwin, Cygwin, etc?\n\nDarwin is FreeBSD, so yes. Cygwin says it supports recv() and MSG_PEEK is\ntrivial from there. 
BeOS is going bankrupt before the next release\nanyway. ;-) Seriously, in the worst case we'll get EINVAL.\n\n> Might be better to fix the backend libpq routines to understand\n> whether a connection-close event is expected or not, and only emit a\n> complaint to the log when it's not. Not sure how far such a change\n> would need to propagate though...\n\nDeep...\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 6 Jul 2001 18:10:37 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: FE/BE protocol oddity " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I guess there is sort of a point there. So I'm leaning towards adding a\n> \"startup complete\" flag somewhere in PGconn and simply fix up\n> closePGconn().\n\nI think you can use the conn->status field; you shouldn't need a new\nflag, just test whether status is CONNECTION_OK or not.\n\n> Seriously, in the worst case we'll get EINVAL.\n\nSo you'll just ignore an error? Okay, that'll probably work.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Jul 2001 14:33:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FE/BE protocol oddity " }, { "msg_contents": "Tom Lane writes:\n\n> I think you can use the conn->status field; you shouldn't need a new\n> flag, just test whether status is CONNECTION_OK or not.\n\nYou're right. I was confused by the comment\n\n/* Non-blocking mode only below here */\n\nin the definition of ConnStatusType, thinking that the non-blocking mode\nwas using some of the intermediate states. I'll change it momentarily.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 6 Jul 2001 21:04:43 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: FE/BE protocol oddity " } ]
[ { "msg_contents": "I have purchased the Solaris source code from Sun for $80. (I could\nhave downloaded it for free after faxing them an 11 page contract, but I\ndecided I wanted the CD's.) See the slashdot story at:\n\n\thttp://slashdot.org/article.pl?sid=01/06/30/1224257&mode=thread\n\nMy hope is that I can use the source code to help debug Solaris\nPostgreSQL problems. It includes source for the kernel and all user\nprograms. The code is similar to *BSD kernels. It is basically Unix\nSvR4 with Sun's enhancements. It has both AT&T and Sun copyrights on\nthe files.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 5 Jul 2001 16:30:40 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Solaris source code" }, { "msg_contents": "At 04:30 PM 7/5/01 -0400, Bruce Momjian wrote:\n>I have purchased the Solaris source code from Sun for $80. (I could\n>have downloaded it for free after faxing them an 11 page contract, but I\n>decided I wanted the CD's.) See the slashdot story at:\n>\n> http://slashdot.org/article.pl?sid=01/06/30/1224257&mode=thread\n>\n>My hope is that I can use the source code to help debug Solaris\n>PostgreSQL problems. It includes source for the kernel and all user\n>programs. The code is similar to *BSD kernels. It is basically Unix\n>SvR4 with Sun's enhancements. It has both AT&T and Sun copyrights on\n>the files.\n\nBruce,\n\nWe are about to roll out PostgreSQL on Solaris, and I am interested in any \nSolaris specific gotcha's. 
Do you have some specifics in mind, or was this \njust general preventive maintenance type steps?\n--\nNaomi Walker\nChief Information Officer\nEldorado Computing, Inc.\n602-604-3100 ext 242 \n\n", "msg_date": "Thu, 05 Jul 2001 14:03:31 -0700", "msg_from": "Naomi Walker <nwalker@eldocomp.com>", "msg_from_op": false, "msg_subject": "Re: Solaris source code" }, { "msg_contents": "> At 04:30 PM 7/5/01 -0400, Bruce Momjian wrote:\n> >I have purchased the Solaris source code from Sun for $80. (I could\n> >have downloaded it for free after faxing them an 11 page contract, but I\n> >decided I wanted the CD's.) See the slashdot story at:\n> >\n> > http://slashdot.org/article.pl?sid=01/06/30/1224257&mode=thread\n> >\n> >My hope is that I can use the source code to help debug Solaris\n> >PostgreSQL problems. It includes source for the kernel and all user\n> >programs. The code is similar to *BSD kernels. It is basically Unix\n> >SvR4 with Sun's enhancements. It has both AT&T and Sun copyrights on\n> >the files.\n> \n> Bruce,\n> \n> We are about to roll out PostgreSQL on Solaris, and I am interested in any \n> Solaris specific gotcha's. Do you have some specifics in mind, or was this \n> just general preventive maintenance type steps?\n\nPreventative. I have heard Solaris has higher context-switching overhead,\nand that may affect us because we use processes instead of threads.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 5 Jul 2001 17:27:38 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Solaris source code" }, { "msg_contents": "On Thu, Jul 05, 2001 at 02:03:31PM -0700, Naomi Walker wrote:\n> We are about to roll out PostgreSQL on Solaris, and I am interested\n> in any Solaris specific gotcha's. 
Do you have some specifics in mind,\n> or was this just general preventive maintenance type steps?\n\nThere have been reports of trouble with Unix sockets on Solaris.\nYou can use TCP sockets, which might be slower; or change, in \nsrc/backend/libpq/pqcomm.c, the line \n\n listen(fd, SOMAXCONN);\n\nto\n\n listen(fd, 1024);\n\n(Cf. Stevens, \"Unix Network Programming, Volume 1\", pp. 96 and 918.)\n\nI don't know (and Stevens doesn't hint) of any reason not to fold \nthis change into the mainline sources. However, we haven't heard \nfrom the people who had had trouble with Unix sockets whether this \nchange actually fixes their problems.\n\nThe effect of the change is to make it much less likely for a \nconnection request to be rejected when connections are being opened \nvery frequently.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Thu, 5 Jul 2001 14:48:18 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Solaris source code" }, { "msg_contents": "On Thu, Jul 05, 2001 at 04:30:40PM -0400, Bruce Momjian allegedly wrote:\n> I have purchased the Solaris source code from Sun for $80. (I could\n> have downloaded it for free after faxing them an 11 page contract, but I\n> decided I wanted the CD's.) See the slashdot story at:\n> \n> \thttp://slashdot.org/article.pl?sid=01/06/30/1224257&mode=thread\n> \n> My hope is that I can use the source code to help debug Solaris\n> PostgreSQL problems. It includes source for the kernel and all user\n> programs. The code is similar to *BSD kernels. It is basically Unix\n> SvR4 with Sun's enhancements. It has both AT&T and Sun copyrights on\n> the files.\n\nCool. 
It would be nice to know why the regression tests fail on Solaris when\nusing a UNIX socket.\n\nCheers,\n\nMathijs\n", "msg_date": "Mon, 9 Jul 2001 14:19:05 +0200", "msg_from": "Mathijs Brands <mathijs@ilse.nl>", "msg_from_op": false, "msg_subject": "Re: Solaris source code" }, { "msg_contents": "On Thu, Jul 05, 2001 at 02:03:31PM -0700, Naomi Walker allegedly wrote:\n> At 04:30 PM 7/5/01 -0400, Bruce Momjian wrote:\n> >I have purchased the Solaris source code from Sun for $80. (I could\n> >have downloaded it for free after faxing them an 11 page contract, but I\n> >decided I wanted the CD's.) See the slashdot story at:\n> >\n> > http://slashdot.org/article.pl?sid=01/06/30/1224257&mode=thread\n> >\n> >My hope is that I can use the source code to help debug Solaris\n> >PostgreSQL problems. It includes source for the kernel and all user\n> >programs. The code is similar to *BSD kernels. It is basically Unix\n> >SvR4 with Sun's enhancements. It has both AT&T and Sun copyrights on\n> >the files.\n> \n> Bruce,\n> \n> We are about to roll out PostgreSQL on Solaris, and I am interested in any \n> Solaris specific gotcha's. Do you have some specifics in mind, or was this \n> just general preventive maintenance type steps?\n\nPostgreSQL 7.1 fails the regression tests when using a UNIX socket,\nwhich is faster than a TCP/IP socket (when both the client and the\nserver are running on the same machine). We're running a few small\nPostgreSQL databases on Solaris and we're going to implement a bigger\none in the near future. If you connect via TCP/IP sockets, you should be\nsafe. We're using JDBC to connect to the database and JDBC always uses\na TCP/IP socket. So far we haven't run into any real problems, although\nPostgreSQL did crash once, for unknown reasons (probably because someone\nwas messing with it).\n\nNot really helpful, I guess. 
Doing some testing of your own is highly\nrecommended ;)\n\nCheers,\n\nMathijs\n", "msg_date": "Mon, 9 Jul 2001 14:24:17 +0200", "msg_from": "Mathijs Brands <mathijs@ilse.nl>", "msg_from_op": false, "msg_subject": "Re: Solaris source code" }, { "msg_contents": "On Mon, Jul 09, 2001 at 02:03:16PM -0700, Nathan Myers allegedly wrote:\n> On Mon, Jul 09, 2001 at 02:24:17PM +0200, Mathijs Brands wrote:\n> > On Thu, Jul 05, 2001 at 02:03:31PM -0700, Naomi Walker allegedly wrote:\n> > > At 04:30 PM 7/5/01 -0400, Bruce Momjian wrote:\n> > > >I have purchased the Solaris source code from Sun for $80. (I could\n> > > >have downloaded it for free after faxing them an 11 page contract, but I\n> > > >decided I wanted the CD's.) See the slashdot story at:\n> > > >\n> > > > http://slashdot.org/article.pl?sid=01/06/30/1224257&mode=thread\n> > > >\n> > > >My hope is that I can use the source code to help debug Solaris\n> > > >PostgreSQL problems. It includes source for the kernel and all user\n> > > >programs. The code is similar to *BSD kernels. It is basically Unix\n> > > >SvR4 with Sun's enhancements. It has both AT&T and Sun copyrights on\n> > > >the files.\n> > > \n> > > Bruce,\n> > > \n> > > We are about to roll out PostgreSQL on Solaris, and I am interested in any \n> > > Solaris specific gotcha's. Do you have some specifics in mind, or was this \n> > > just general preventive maintenance type steps?\n> > \n> > PostgreSQL 7.1 fails the regression tests when using a UNIX socket,\n> > which is faster than a TCP/IP socket (when both the client and the\n> > server are running on the same machine). \n> \n> Have you tried increasing the argument to listen in libpq/pqcomm.c\n> from SOMAXCONN to 1024? I think many people would be very interested\n> in your results.\n\nOK, I tried using 1024 (and later 128) instead of SOMAXCONN (defined to\nbe 5 on Solaris) in src/backend/libpq/pqcomm.c and ran a few regression\ntests on two different Sparc boxes (Solaris 7 and 8). 
The regression\ntest still fails, but for a different reason. The abstime test fails;\nnot only on Solaris but also on FreeBSD (4.3-RELEASE).\n\n*** ./expected/abstime.out Thu May 3 21:00:37 2001\n--- ./results/abstime.out Tue Jul 10 10:34:18 2001\n***************\n*** 47,56 ****\n | Sun Jan 14 03:14:21 1973 PST\n | Mon May 01 00:30:30 1995 PDT\n | epoch\n- | current\n | -infinity\n | Sat May 10 23:59:12 1947 PST\n! (6 rows)\n\n SELECT '' AS six, ABSTIME_TBL.*\n WHERE ABSTIME_TBL.f1 > abstime '-infinity';\n--- 47,55 ----\n | Sun Jan 14 03:14:21 1973 PST\n | Mon May 01 00:30:30 1995 PDT\n | epoch\n | -infinity\n | Sat May 10 23:59:12 1947 PST\n! (5 rows)\n\n SELECT '' AS six, ABSTIME_TBL.*\n WHERE ABSTIME_TBL.f1 > abstime '-infinity';\n\n======================================================================\n\nI've checked the FreeBSD and Linux headers and they've got SOMAXCONN set\nto 128.\n\nHere's a snippet from the linux listen(2) manpage:\n\nBUGS\n If the socket is of type AF_INET, and the backlog argument\n is greater than the constant SOMAXCONN (128 in Linux 2.0 &\n 2.2), it is silently truncated to SOMAXCONN. Don't rely\n on this value in portable applications since BSD (and some\n BSD-derived systems) limit the backlog to 5.\n\nI've checked Solaris 2.6, 7 and 8 and the kernels have a default value\nof 128 for the number of backlog connections. This number can be\nincreased to 1000 (maybe even larger). On Solaris 2.4 and 2.5 it is\napparently set to 32. Judging from Adrian Cockcroft's Solaris tuning\nguide, Sun has been using a default value of 128 from Solaris 2.5.1\non. You do need some patches for 2.5.1: patches 103582 & 103630 (SPARC)\nor patches 103581 & 10361 (X86). Later versions of Solaris don't need\nany patches. 
You can check (and set) the number of backlog connections\nby using the following commands:\n\nSolaris 2.3, 2.4, 2.5 and unpatched 2.5.1:\n /usr/sbin/ndd /dev/tcp tcp_conn_req_max (untested)\n\nSolaris 2.5.1 (patched), 2.6, 7 and 8:\n /usr/sbin/ndd /dev/tcp tcp_conn_req_max_q\n\nIt'd probably be a good idea to use a value of 128 for the number of\nbacklog connections and not SOMAXCONN. If the requested number of\nbacklog connections is bigger than the number the kernel allows, it\nshould be truncated. Of course, there's no guarantee that this won't\ncause problems on arcane platforms such as Ultrix (if it is still\nsupported).\n\nThe Apache survival guide has more info on TCP/IP tuning for several\nplatforms and includes information on the listen backlog.\n\nCheers,\n\nMathijs\n\nPS. Just checking IRIX 6.5 - it's got the backlog set to 1000\nconnections.\n-- \nAnd the beast shall be made legion. Its numbers shall be increased a\nthousand thousand fold. The din of a million keyboards like unto a great\nstorm shall cover the earth, and the followers of Mammon shall tremble.\n", "msg_date": "Tue, 10 Jul 2001 11:37:55 +0200", "msg_from": "Mathijs Brands <mathijs@ilse.nl>", "msg_from_op": false, "msg_subject": "Re: Solaris source code" }, { "msg_contents": "Mathijs Brands <mathijs@ilse.nl> writes:\n> OK, I tried using 1024 (and later 128) instead of SOMAXCONN (defined to\n> be 5 on Solaris) in src/backend/libpq/pqcomm.c and ran a few regression\n> tests on two different Sparc boxes (Solaris 7 and 8). The regression\n> test still fails, but for a different reason. The abstime test fails;\n> not only on Solaris but also on FreeBSD (4.3-RELEASE).\n\nThe abstime diff is to be expected (if you look closely, the test is\ncomparing 'current' to 'June 30, 2001'. Ooops). 
If that's the only\ndiff then you are in good shape.\n\n\nBased on this and previous discussions, I am strongly tempted to remove\nthe use of SOMAXCONN and instead use, say,\n\n\t#define PG_SOMAXCONN\t1000\n\ndefined in config.h.in. That would leave room for configure to twiddle\nit, if that proves necessary. Does anyone know of a platform where this\nwould cause problems? AFAICT, all versions of listen(2) are claimed to\nbe willing to reduce the passed parameter to whatever they can handle.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jul 2001 13:51:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "SOMAXCONN (was Re: Solaris source code)" }, { "msg_contents": "> Mathijs Brands <mathijs@ilse.nl> writes:\n> > OK, I tried using 1024 (and later 128) instead of SOMAXCONN (defined to\n> > be 5 on Solaris) in src/backend/libpq/pqcomm.c and ran a few regression\n> > tests on two different Sparc boxes (Solaris 7 and 8). The regression\n> > test still fails, but for a different reason. The abstime test fails;\n> > not only on Solaris but also on FreeBSD (4.3-RELEASE).\n> \n> The abstime diff is to be expected (if you look closely, the test is\n> comparing 'current' to 'June 30, 2001'. Ooops). If that's the only\n> diff then you are in good shape.\n> \n> \n> Based on this and previous discussions, I am strongly tempted to remove\n> the use of SOMAXCONN and instead use, say,\n> \n> \t#define PG_SOMAXCONN\t1000\n> \n> defined in config.h.in. That would leave room for configure to twiddle\n> it, if that proves necessary. Does anyone know of a platform where this\n> would cause problems? 
AFAICT, all versions of listen(2) are claimed to\n> be willing to reduce the passed parameter to whatever they can handle.\n\nCould we test SOMAXCONN and set PG_SOMAXCONN to 1000 only if SOMAXCONN\nis less than 1000?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 10 Jul 2001 17:06:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: SOMAXCONN (was Re: Solaris source code)" }, { "msg_contents": "On Tue, Jul 10, 2001 at 05:06:28PM -0400, Bruce Momjian wrote:\n> > Mathijs Brands <mathijs@ilse.nl> writes:\n> > > OK, I tried using 1024 (and later 128) instead of SOMAXCONN (defined to\n> > > be 5 on Solaris) in src/backend/libpq/pqcomm.c and ran a few regression\n> > > tests on two different Sparc boxes (Solaris 7 and 8). The regression\n> > > test still fails, but for a different reason. The abstime test fails;\n> > > not only on Solaris but also on FreeBSD (4.3-RELEASE).\n> > \n> > The abstime diff is to be expected (if you look closely, the test is\n> > comparing 'current' to 'June 30, 2001'. Ooops). If that's the only\n> > diff then you are in good shape.\n> > \n> > \n> > Based on this and previous discussions, I am strongly tempted to remove\n> > the use of SOMAXCONN and instead use, say,\n> > \n> > \t#define PG_SOMAXCONN\t1000\n> > \n> > defined in config.h.in. That would leave room for configure to twiddle\n> > it, if that proves necessary. Does anyone know of a platform where this\n> > would cause problems? AFAICT, all versions of listen(2) are claimed to\n> > be willing to reduce the passed parameter to whatever they can handle.\n> \n> Could we test SOMAXCONN and set PG_SOMAXCONN to 1000 only if SOMAXCONN\n> is less than 1000?\n\nAll the OSes we know of fold it to 128, currently. 
We can jump it \nto 10240 now, or later when there are 20GHz CPUs.\n\nIf you want to make it more complicated, it would be more useful to \nbe able to set the value lower for runtime environments where PG is \ncompeting for OS resources with another daemon that deserves higher \npriority.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Tue, 10 Jul 2001 14:15:46 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: SOMAXCONN (was Re: Solaris source code)" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Could we test SOMAXCONN and set PG_SOMAXCONN to 1000 only if SOMAXCONN\n> is less than 1000?\n\nWhy bother?\n\nIf you've got some plausible scenario where 1000 is too small, we could\njust as easily make it 10000. I don't see the need for yet another\nconfigure test for this.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jul 2001 17:24:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SOMAXCONN (was Re: Solaris source code) " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Could we test SOMAXCONN and set PG_SOMAXCONN to 1000 only if SOMAXCONN\n> > is less than 1000?\n> \n> Why bother?\n> \n> If you've got some plausible scenario where 1000 is too small, we could\n> just as easily make it 10000. I don't see the need for yet another\n> configure test for this.\n\nI was thinking:\n\n\t#if SOMAXCONN >= 1000\n\t#define PG_SOMAXCONN SOMAXCONN\n\t#else\n\t#define PG_SOMAXCONN 1000\n\t#endif\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 10 Jul 2001 17:25:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: SOMAXCONN (was Re: Solaris source code)" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I was thinking:\n\n> \t#if SOMAXCONN >= 1000\n> \t#define PG_SOMAXCONN SOMAXCONN\n> \t#else\n> \t#define PG_SOMAXCONN 1000\n> \t#endif\n\nNot in config.h, you don't. Unless you want <sys/socket.h> (or\nwhichever header defines SOMAXCONN; how consistent is that across\nplatforms, anyway?) to be included by everything in the system ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jul 2001 17:47:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SOMAXCONN (was Re: Solaris source code) " }, { "msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n> All the OSes we know of fold it to 128, currently. We can jump it \n> to 10240 now, or later when there are 20GHz CPUs.\n\n> If you want to make it more complicated, it would be more useful to \n> be able to set the value lower for runtime environments where PG is \n> competing for OS resources with another daemon that deserves higher \n> priority.\n\nHmm, good point. Does anyone have a feeling for the amount of kernel\nresources that are actually sucked up by an accept-queue entry? If 128\nis the customary limit, is it actually worth worrying about whether\nwe are setting it to 128 vs. something smaller?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jul 2001 18:36:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SOMAXCONN (was Re: Solaris source code) " }, { "msg_contents": "On Tue, Jul 10, 2001 at 06:36:21PM -0400, Tom Lane wrote:\n> ncm@zembu.com (Nathan Myers) writes:\n> > All the OSes we know of fold it to 128, currently. 
We can jump it \n> > to 10240 now, or later when there are 20GHz CPUs.\n> \n> > If you want to make it more complicated, it would be more useful to \n> > be able to set the value lower for runtime environments where PG is \n> > competing for OS resources with another daemon that deserves higher \n> > priority.\n> \n> Hmm, good point. Does anyone have a feeling for the amount of kernel\n> resources that are actually sucked up by an accept-queue entry? If 128\n> is the customary limit, is it actually worth worrying about whether\n> we are setting it to 128 vs. something smaller?\n\nI don't think the issue is the resources that are consumed by the \naccept-queue entry. Rather, it's a tuning knob to help shed load \nat the entry point to the system, before significant resources have \nbeen committed. An administrator would tune it according to actual\nsystem and traffic characteristics.\n\nIt is easy enough for somebody to change, if they care, that it seems \nto me we have already devoted it more time than it deserves right now.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Tue, 10 Jul 2001 17:49:36 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: SOMAXCONN (was Re: Solaris source code)" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> ncm@zembu.com (Nathan Myers) writes:\n> > If you want to make it more complicated, it would be more useful to \n> > be able to set the value lower for runtime environments where PG is \n> > competing for OS resources with another daemon that deserves higher \n> > priority.\n> \n> Hmm, good point. Does anyone have a feeling for the amount of kernel\n> resources that are actually sucked up by an accept-queue entry? If 128\n> is the customary limit, is it actually worth worrying about whether\n> we are setting it to 128 vs. something smaller?\n\nNot much in the way of kernel resources is required by an entry on the\naccept queue. 
Basically a socket structure and maybe a couple of\naddresses, typically about 200 bytes or so.\n\nBut I wouldn't worry about it, and I wouldn't worry about Nathan's\nsuggestion for making the limit configurable, because Postgres\nconnections don't spend time on the queue. The postgres server will\nbe picking them off as fast as it can. If the server can't pick\nprocesses off fast enough, then your system has other problems;\nreducing the size of the queue won't help those problems. A large\nqueue will help when a large number of connections arrives\nsimultaneously--it will permit Postgres to deal them appropriately,\nrather than causing the system to discard them on its terms.\n\n(Matters might be different if the Postgres server were written to not\ncall accept when it had the maximum number of connections active, and\nto just leave connections on the queue in that case. But that's not\nhow it works today.)\n\nIan\n\n---------------------------(end of broadcast)---------------------------\nTIP 842: \"When the only tool you have is a hammer, you tend to treat\neverything as if it were a nail.\"\n-- Abraham Maslow\n", "msg_date": "10 Jul 2001 17:51:38 -0700", "msg_from": "Ian Lance Taylor <ian@zembu.com>", "msg_from_op": false, "msg_subject": "Re: SOMAXCONN (was Re: Solaris source code)" }, { "msg_contents": "> ncm@zembu.com (Nathan Myers) writes:\n> > All the OSes we know of fold it to 128, currently. We can jump it \n> > to 10240 now, or later when there are 20GHz CPUs.\n> \n> > If you want to make it more complicated, it would be more useful to \n> > be able to set the value lower for runtime environments where PG is \n> > competing for OS resources with another daemon that deserves higher \n> > priority.\n> \n> Hmm, good point. Does anyone have a feeling for the amount of kernel\n> resources that are actually sucked up by an accept-queue entry? If 128\n> is the customary limit, is it actually worth worrying about whether\n> we are setting it to 128 vs. 
something smaller?\n\nAll I can say is keep in mind that Solaris uses SVr4 streams, which are\nquite a bit heavier than the BSD-based sockets. I don't know any\nnumbers.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 10 Jul 2001 20:59:42 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: SOMAXCONN (was Re: Solaris source code)" }, { "msg_contents": "\nQuick rundown of our configuration:\nRed Hat 7.1 (no changes or extras added by us)\nPostgresql 7.1.2 and CVS HEAD from 07/10/2001\n3.8 gb database size\n\nI included two pgsql versions because this happens on both.\n\nHere's the problem we're having:\n\nWe run a vacuumdb from the server on the entire database. Some large tables \nare vacuumed very quickly, but the vacuum process hangs or takes more than a \nfew hours on a specific table (we haven't let it finish before). The vacuum \nprocess works quickly on a table (loginhistory) with 2.8 million records, but \nis extremely slow on a table (inbox) with 1.1 million records (the table with \n1.1 million records is actually larger in kb size than the other table).\n\nWe've tried to vacuum the inbox table separately ('vacuum inbox' within \npsql), but this still takes hours (again we have never let it complete, we \nneed to use the database for development as well).\n\nWe noticed 2 things that are significant to this situation:\nThe server logs the following:\n\n\nDEBUG: --Relation msginbox--\nDEBUG: Pages 129921: Changed 26735, reaped 85786, Empty 0, New 0; Tup \n1129861: Vac 560327, Keep/VTL 0/0, Crash 0, UnUsed 51549, MinLen 100,\nMaxLen 2032; Re-using: Free/Avail. Space 359061488/359059332;\nEndEmpty/Avail. Pages 0/85785. 
CPU 11.18s/5.32u sec.\nDEBUG: Index msginbox_pkey: Pages 4749; Tuples 1129861: Deleted 76360.\nCPU 0.47s/6.70u sec.\nDEBUG: Index msginbox_fromto: Pages 5978; Tuples 1129861: Deleted 0.\nCPU 0.37s/6.15u sec.\nDEBUG: Index msginbox_search: Pages 4536; Tuples 1129861: Deleted 0.\nCPU 0.32s/6.30u sec.\nDEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\nDEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\nDEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\nDEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\nDEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\nDEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\nDEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\nDEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\nDEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\nDEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\nDEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\nDEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\n\nthe last few lines (XLogWrite .....) repeat for ever and ever and ever. With \n7.1.2 this never stops unless we run out of disk space or cancel the query. \nWith CVS HEAD this still continues, but the log files don't consume all disk \nspace, but we still have to cancel it or it might run forever.\n\nPerhaps we need to let it run until it completes, but we thought that we \nmight be doing something wrong or have some data (we're converting data from \nMS SQL Server) that isn't friendly.\n\nThe major issue we're facing with this is that any read or write access to \nthe table being vacuumed times out (obviously because the table is still \nlocked). 
We plan to use PostgreSQL in our production service, but we can't \nuntil we get this resolved.\n\nWe're at a loss, not being familiar enough with PostgreSQL and it's source \ncode. Can anyone please offer some advice or suggestions?\n\nThanks,\n\nMark\n", "msg_date": "Wed, 11 Jul 2001 09:16:05 -0600", "msg_from": "Mark <mark@ldssingles.com>", "msg_from_op": false, "msg_subject": "vacuum problems" }, { "msg_contents": "Ian Lance Taylor <ian@zembu.com> writes:\n> But I wouldn't worry about it, and I wouldn't worry about Nathan's\n> suggestion for making the limit configurable, because Postgres\n> connections don't spend time on the queue. The postgres server will\n> be picking them off as fast as it can. If the server can't pick\n> processes off fast enough, then your system has other problems;\n\nRight. Okay, it seems like just making it a hand-configurable entry\nin config.h.in is good enough for now. When and if we find that\nthat's inadequate in a real-world situation, we can improve on it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Jul 2001 11:24:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: SOMAXCONN (was Re: Solaris source code) " }, { "msg_contents": "Tom Lane writes:\n\n> Right. Okay, it seems like just making it a hand-configurable entry\n> in config.h.in is good enough for now. When and if we find that\n> that's inadequate in a real-world situation, we can improve on it...\n\nWould anything computed from the maximum number of allowed connections\nmake sense?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 11 Jul 2001 18:15:13 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Re: SOMAXCONN (was Re: Solaris source code) " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> Right. 
Okay, it seems like just making it a hand-configurable entry\n>> in config.h.in is good enough for now. When and if we find that\n>> that's inadequate in a real-world situation, we can improve on it...\n\n> Would anything computed from the maximum number of allowed connections\n> make sense?\n\n[ looks at code ... ] Hmm, MaxBackends is indeed set before we arrive\nat the listen(), so it'd be possible to use MaxBackends to compute the\nparameter. Offhand I would think that MaxBackends or at most\n2*MaxBackends would be a reasonable value.\n\nQuestion, though: is this better than having a hardwired constant?\nThe only case I can think of where it might not be is if some platform\nout there throws an error from listen() when the parameter is too large\nfor it, rather than silently reducing the value to what it can handle.\nA value set in config.h.in would be simpler to adapt for such a platform.\n\nBTW, while I'm thinking about it: why doesn't pqcomm.c test for a\nfailure return from the listen() call? Is this just an oversight,\nor is there a good reason to ignore errors?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Jul 2001 12:26:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: SOMAXCONN (was Re: Solaris source code) " }, { "msg_contents": "> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Tom Lane writes:\n> >> Right. Okay, it seems like just making it a hand-configurable entry\n> >> in config.h.in is good enough for now. When and if we find that\n> >> that's inadequate in a real-world situation, we can improve on it...\n> \n> > Would anything computed from the maximum number of allowed connections\n> > make sense?\n> \n> [ looks at code ... ] Hmm, MaxBackends is indeed set before we arrive\n> at the listen(), so it'd be possible to use MaxBackends to compute the\n> parameter. 
Offhand I would think that MaxBackends or at most\n> 2*MaxBackends would be a reasonable value.\n\nDon't we have maxbackends configurable at runtime. If so, any constant\nwe put in config.h will be inaccurate. Seems we have to track\nmaxbackends.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 11 Jul 2001 13:13:45 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: SOMAXCONN (was Re: Solaris source code)" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Don't we have maxbackends configurable at runtime.\n\nNot after postmaster start, so passing it to the initial listen()\nshouldn't be a problem.\n\nThe other concern I had could be addressed by making the listen\nparameter be MIN(MaxBackends, PG_SOMAXCONN) where PG_SOMAXCONN\nis set in config.h --- but now we could make the default value\nreally large, say 10000. The only reason to change it would be\nif you had a kernel that barfed on large listen() parameters.\n\nHave we beat this issue to death yet, or is it still twitching?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Jul 2001 13:18:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: SOMAXCONN (was Re: Solaris source code) " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Don't we have maxbackends configurable at runtime.\n> \n> Not after postmaster start, so passing it to the initial listen()\n> shouldn't be a problem.\n> \n> The other concern I had could be addressed by making the listen\n> parameter be MIN(MaxBackends, PG_SOMAXCONN) where PG_SOMAXCONN\n> is set in config.h --- but now we could make the default value\n> really large, say 10000. 
The only reason to change it would be\n> if you had a kernel that barfed on large listen() parameters.\n\nSounds good to me.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 11 Jul 2001 13:29:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: SOMAXCONN (was Re: Solaris source code)" }, { "msg_contents": "Tom Lane writes:\n\n> The other concern I had could be addressed by making the listen\n> parameter be MIN(MaxBackends, PG_SOMAXCONN) where PG_SOMAXCONN\n> is set in config.h --- but now we could make the default value\n> really large, say 10000. The only reason to change it would be\n> if you had a kernel that barfed on large listen() parameters.\n\nWe'll never find that out if we don't try it. If you're concerned about\ncooperating with other listen()ing processes, set it to MaxBackends * 2,\nif you're not, set it to INT_MAX and watch.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 11 Jul 2001 19:46:08 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Re: SOMAXCONN (was Re: Solaris source code) " }, { "msg_contents": "\nWe increased shared memory in the linux kernel, which decreased the vacuumdb \ntime from 40 minutes to 14 minutes on a 450 mhz processor. We calculate that \non our dual 1ghz box with ghz ethernet san connection this will go down to \nunder 5 minutes. This is acceptable to us. 
Sorry about the unnecessary post.\n\nOn Wednesday 11 July 2001 09:16, Mark wrote:\n> Quick rundown of our configuration:\n> Red Hat 7.1 (no changes or extras added by us)\n> Postgresql 7.1.2 and CVS HEAD from 07/10/2001\n> 3.8 gb database size\n>\n> I included two pgsql versions because this happens on both.\n>\n> Here's the problem we're having:\n>\n> We run a vacuumdb from the server on the entire database. Some large\n> tables are vacuumed very quickly, but the vacuum process hangs or takes\n> more than a few hours on a specific table (we haven't let it finish\n> before). The vacuum process works quickly on a table (loginhistory) with\n> 2.8 million records, but is extremely slow on a table (inbox) with 1.1\n> million records (the table with 1.1 million records is actually larger in\n> kb size than the other table).\n>\n> We've tried to vacuum the inbox table seperately ('vacuum inbox' within\n> psql), but this still takes hours (again we have never let it complete, we\n> need to use the database for development as well).\n>\n> We noticed 2 things that are significant to this situatoin:\n> The server logs the following:\n>\n>\n> DEBUG: --Relation msginbox--\n> DEBUG: Pages 129921: Changed 26735, reaped 85786, Empty 0, New 0; Tup\n> 1129861: Vac 560327, Keep/VTL 0/0, Crash 0, UnUsed 51549, MinLen 100,\n> MaxLen 2032; Re-using: Free/Avail. Space 359061488/359059332;\n> EndEmpty/Avail. Pages 0/85785. 
CPU 11.18s/5.32u sec.\n> DEBUG: Index msginbox_pkey: Pages 4749; Tuples 1129861: Deleted 76360.\n> CPU 0.47s/6.70u sec.\n> DEBUG: Index msginbox_fromto: Pages 5978; Tuples 1129861: Deleted 0.\n> CPU 0.37s/6.15u sec.\n> DEBUG: Index msginbox_search: Pages 4536; Tuples 1129861: Deleted 0.\n> CPU 0.32s/6.30u sec.\n> DEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\n> DEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\n> DEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\n> DEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\n> DEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\n> DEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\n> DEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\n> DEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\n> DEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\n> DEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\n> DEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\n> DEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\n>\n> the last few lines (XLogWrite .....) repeat for ever and ever and ever. \n> With 7.1.2 this never stops unless we run out of disk space or cancel the\n> query. With CVS HEAD this still continues, but the log files don't consume\n> all disk space, but we still have to cancel it or it might run forever.\n>\n> Perhaps we need to let it run until it completes, but we thought that we\n> might be doing something wrong or have some data (we're converting data\n> from MS SQL Server) that isn't friendly.\n>\n> The major issue we're facing with this is that any read or write access to\n> the table being vacuumed times out (obviously because the table is still\n> locked). 
We plan to use PostgreSQL in our production service, but we can't\n> until we get this resolved.\n>\n> We're at a loss, not being familiar enough with PostgreSQL and it's source\n> code. Can anyone please offer some advice or suggestions?\n>\n> Thanks,\n>\n> Mark\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n", "msg_date": "Wed, 11 Jul 2001 15:55:08 -0600", "msg_from": "Mark <mark@ldssingles.com>", "msg_from_op": false, "msg_subject": "Re: vacuum problems" }, { "msg_contents": "On Wed, Jul 11, 2001 at 12:26:43PM -0400, Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Tom Lane writes:\n> >> Right. Okay, it seems like just making it a hand-configurable entry\n> >> in config.h.in is good enough for now. When and if we find that\n> >> that's inadequate in a real-world situation, we can improve on it...\n> \n> > Would anything computed from the maximum number of allowed connections\n> > make sense?\n> \n> [ looks at code ... ] Hmm, MaxBackends is indeed set before we arrive\n> at the listen(), so it'd be possible to use MaxBackends to compute the\n> parameter. Offhand I would think that MaxBackends or at most\n> 2*MaxBackends would be a reasonable value.\n>\n> Question, though: is this better than having a hardwired constant?\n> The only case I can think of where it might not be is if some platform\n> out there throws an error from listen() when the parameter is too large\n> for it, rather than silently reducing the value to what it can handle.\n> A value set in config.h.in would be simpler to adapt for such a platform.\n\nThe question is really whether you ever want a client to get a\n\"rejected\" result from an open attempt, or whether you'd rather they \ngot a report from the back end telling them they can't log in. The \nsecond is more polite but a lot more expensive. 
That expense might \nreally matter if you have MaxBackends already running.\n\nI doubt most clients have tested either failure case more thoroughly \nthan the other (or at all), but the lower-level code is more likely \nto have been cut-and-pasted from well-tested code. :-)\n\nMaybe PG should avoid accept()ing connections once it has MaxBackends\nback ends already running (as hinted at by Ian), so that the listen()\nparameter actually has some meaningful effect, and excess connections \ncan be rejected more cheaply. That might also make it easier to respond \nmore adaptively to true load than we do now.\n\n> BTW, while I'm thinking about it: why doesn't pqcomm.c test for a\n> failure return from the listen() call? Is this just an oversight,\n> or is there a good reason to ignore errors?\n\nThe failure of listen() seems impossible. In the Linux, NetBSD, and \nSolaris man pages, none of the error returns mentioned are possible \nwith PG's current use of the function. It seems as if the most that \nmight be needed now would be to add a comment to the call to socket() \nnoting that if any other address families are supported (besides \nAF_INET and AF_LOCAL aka AF_UNIX), the call to listen() might need to \nbe looked at. AF_INET6 (which PG will need to support someday)\ndoesn't seem to change matters.\n\nProbably if listen() did fail, then one or other of bind(), accept(),\nand read() would fail too.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Wed, 11 Jul 2001 15:28:21 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Re: SOMAXCONN (was Re: Solaris source code)" } ]
[ { "msg_contents": "Greetings,\n\nI am going over the use of select() for a server I'm writing and I \n*thought* I understood the man page's description for the use of the first \nparameter, nfds.\n\n From MAN:\n\nThe first nfds descriptors are checked in each set; i.e., the descriptors \nfrom 0 through nfds-1 in the descriptor sets are examined.\n\n\nI take this to mean that each descriptor set contains n descriptors and I \nam interested in examining the first nfds descriptors referenced in my \nsets. I also understood it to mean that nfds has absolutely nothing to do \nwith the actual *value* of a descriptor, i.e. the value returned by \nfopen(), socket(), etc.. Is this correct thinking? What got me \nsecond-guessing myself was a use of select() that seems to indicate that \nyou have to make sure nfds is larger than the value of the largest \ndescriptor you want checked. Here is the select() from the questionable \ncode (I can provide the whole function if necessary, it's not very big):\n\nif (select(conn->sock + 1, &input_mask, &output_mask, &except_mask,\n (struct timeval *) NULL) < 0)\n\n\nIs this improper use? conn->sock is set like this:\n\n/* Open a socket */\nif ((conn->sock = socket(family, SOCK_STREAM, 0)) < 0)\n\n\nAny clarification on how nfds should be set would be greatly appreciated.\n\nThanks,\nMatthew\n\n", "msg_date": "Fri, 06 Jul 2001 01:23:06 -0400", "msg_from": "Matthew Hagerty <mhagerty@voyager.net>", "msg_from_op": true, "msg_subject": "Proper use of select() parameter nfds?" }, { "msg_contents": "Matthew Hagerty <mhagerty@voyager.net> writes:\n> I take this to mean that each descriptor set contains n descriptors\n\nNo. nfds is the length (in bits) of the bit arrays passed to select().\nTherefore, it is possible to inquire about descriptors numbered between\n0 and nfds-1. 
One sets the bits corresponding to the interesting\ndescriptors before calling select, and then examines those bits to see\nif they're still set on return.\n\nThe code you quoted is perfectly correct, for code that is only\ninterested in one descriptor.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Jul 2001 09:29:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proper use of select() parameter nfds? " }, { "msg_contents": "Matthew Hagerty <mhagerty@voyager.net> writes:\n\n> From MAN:\n> \n> The first nfds descriptors are checked in each set; i.e., the\n> descriptors from 0 through nfds-1 in the descriptor sets are\n> examined.\n> \n> \n> \n> I take this to mean that each descriptor set contains n descriptors and I am\n> interested in examining the first nfds descriptors referenced in my sets. I\n> also understood it to mean that nfds has absolutely nothing to do with the\n> actual *value* of a descriptor, i.e. the value returned by fopen(), socket(),\n> etc.. Is this correct thinking? \n\nNo. Unix always gives you the lowest available descriptor value\n(unless you ask for a value explicitly with dup2(), which is rare).\nSince by default stdin/out/err are 0,1,2, you will get new descriptors\nstarting at 3. You keep track of the highest descriptor value that\nyou're interested in, and pass that value +1 to select().\n\nThe reason for this is that FD_SETSIZE is often large (1024 by default\nin glibc) and you save the system some work by telling select() how\nmuch of each set it needs to scan.\n\n> if (select(conn->sock + 1, &input_mask, &output_mask, &except_mask,\n> (struct timeval *) NULL) < 0)\n> \n> \n> Is this improper use? 
conn->sock is set like this:\n\nAs long as conn->sock is the highest descriptor value you have (last\ndescriptor opened) this looks right.\n\nYou might want to get hold of _Unix Network Programming, Vol 1_ by\nStevens if you're going to do a lot of this stuff.\n\n-Doug\n-- \nThe rain man gave me two cures; he said jump right in,\nThe first was Texas medicine--the second was just railroad gin,\nAnd like a fool I mixed them, and it strangled up my mind,\nNow people just get uglier, and I got no sense of time... --Dylan\n", "msg_date": "06 Jul 2001 09:44:04 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Proper use of select() parameter nfds?" }, { "msg_contents": "On Fri, 6 Jul 2001, Matthew Hagerty wrote:\n\n> I take this to mean that each descriptor set contains n descriptors and I \n> am interested in examining the first nfds descriptors referenced in my \n> sets. I also understood it to mean that nfds has absolutely nothing to do \n> with the actual *value* of a descriptor, i.e. the value returned by \n> fopen(), socket(), etc.. Is this correct thinking? What got me \n> second-guessing myself was a use of select() that seems to indicate that \n> you have to make sure nfds is larger than the value of the largest \nCorrect.\n<snip>\n\n> Any clarification on how nfds should be set would be greatly appreciated.\n\nJust like you said:\n\"you have to make sure nfds is larger than the value of the largest\nfiledescriptor\". \n\n\nReason being: kernel has to know how large is the mask passed to it, and\nhow far does it need to look.\n\n-alex\n\n\n\n", "msg_date": "Fri, 6 Jul 2001 09:45:22 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: Proper use of select() parameter nfds?" }, { "msg_contents": "Well, it proves that it was too late for me when I wrote this. I apologize, \nthis was supposed to go to FreeBSD hackers, not pgHackers. 
It does kind of \nrelate however, because the code snippet is from the pgWait function in the \nfe-misc.c file of the pqlib interface.\n\nI appreciate the response. I had initially understood the correct behavior \nbut the man page is a bit confusing. I suppose digging into the kernel's \nselect() would have resolved my suspicions as well, but it is more fun to \ntalk to all of you! :)\n\nWhat irks me about this call is that I have to know what file descriptors \nare and what the largest one I want to use is. \"Technically\", for \nin/out/err you are supposed to use defines from a lib supplied with your \nOS, and for other files you make a var of type FILE and assign the return \nresult from fopen() (or socket, etc.) to that. I guess my point is that by \nhaving to pass a parameter like nfds, it completely removes all abstraction \nand forces me to know something about the kernel internals. This is not a \nproblem for me, but makes for possibly *very* un-portable code. What if a \nfile descriptor is a structure on another OS? I read somewhere once that \nthe use of 0,1,2 for the in/out/err was something being frowned on since \nthat could change one day, but select() is not helping things \neither. Also, I know of no function that returns the highest file \ndescriptor I have open for my process.\n\nThanks for the clarification.\n\nMatthew\n\n\n\nAt 09:45 AM 7/6/2001 -0400, Alex Pilosov wrote:\n>On Fri, 6 Jul 2001, Matthew Hagerty wrote:\n>\n> > I take this to mean that each descriptor set contains n descriptors and I\n> > am interested in examining the first nfds descriptors referenced in my\n> > sets. I also understood it to mean that nfds has absolutely nothing to do\n> > with the actual *value* of a descriptor, i.e. the value returned by\n> > fopen(), socket(), etc.. Is this correct thinking? 
What got me\n> > second-guessing myself was a use of select() that seems to indicate that\n> > you have to make sure nfds is larger than the value of the largest\n>Correct.\n><snip>\n>\n> > Any clarification on how nfds should be set would be greatly appreciated.\n>\n>Just like you said:\n>\"you have to make sure nfds is larger than the value of the largest\n>filedescriptor\".\n>\n>\n>Reason being: kernel has to know how large is the mask passed to it, and\n>how far does it need to look.\n>\n>-alex\n\n", "msg_date": "Fri, 06 Jul 2001 10:18:50 -0400", "msg_from": "Matthew Hagerty <mhagerty@voyager.net>", "msg_from_op": true, "msg_subject": "Re: Proper use of select() parameter nfds?" }, { "msg_contents": "Matthew Hagerty <mhagerty@voyager.net> writes:\n\n> What irks me about this call is that I have to know what file descriptors are\n> and what the largest one I want to use is. \"Technically\", for in/out/err you\n> are supposed to use defines from a lib supplied with your OS, and for other\n> files you make a var of type FILE and assign the return result from fopen()\n> (or socket, etc.) to that.\n\nYou are confused. fopen() returns a FILE *, while socket() returns a\ndescriptor. \n\nDon't use any stdio calls (fopen(), fprintf(), etc) with select() as\nthe stdio buffering can screw you up. \n\n> I guess my point is that by having to pass a\n> parameter like nfds, it completely removes all abstraction and forces me to\n> know something about the kernel internals.\n\nFile descriptors are not \"kernel internals\". They are part of the\nUnix/POSIX API and will not go away. They're simply a bit lower level \nthan the stdio FILE * interface.\n\n> This is not a problem for me, but\n> makes for possibly *very* un-portable code. What if a file\n> descriptor is a structure on another OS? I read somewhere once that\n> the use of 0,1,2 for the in/out/err was something being frowned on\n> since that could change one day, but select() is not helping things\n> either. 
\n\nI consider it extremely unlikely that the 0,1,2 convention will ever\nchange, as it would break almost every Unix program out there.\n\n> Also, I know of no function that returns the highest file\n> descriptor I have open for my process.\n\nYes, you have to keep track of it yourself. It's a bit annoying.\n\n-Doug\n-- \nThe rain man gave me two cures; he said jump right in,\nThe first was Texas medicine--the second was just railroad gin,\nAnd like a fool I mixed them, and it strangled up my mind,\nNow people just get uglier, and I got no sense of time... --Dylan\n", "msg_date": "06 Jul 2001 11:42:08 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Proper use of select() parameter nfds?" } ]
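The rule Alex and Doug settle on, that nfds must be one more than the largest descriptor *value* placed in any set rather than a count of descriptors, can be sketched in C. This `wait_readable()` helper is hypothetical (it is not the pgWait routine from fe-misc.c); it only illustrates the `fd + 1` convention:

```c
#include <assert.h>
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>

/* Wait until fd is readable or timeout_ms elapses.
 * Returns 1 if readable, 0 on timeout, -1 on error.
 * Note nfds is NOT "how many descriptors am I watching": the kernel
 * scans descriptor values 0 .. nfds-1 in the bitmask, so it must be
 * one more than the largest descriptor value in any of the sets. */
static int wait_readable(int fd, int timeout_ms)
{
    fd_set rfds;
    struct timeval tv;

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    tv.tv_sec = timeout_ms / 1000;
    tv.tv_usec = (timeout_ms % 1000) * 1000;

    return select(fd + 1, &rfds, NULL, NULL, &tv);
}
```

poll() later sidesteps this bookkeeping by taking an explicit array of descriptors, but for select() the caller has to track the maximum value itself, as Doug notes.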
[ { "msg_contents": "\n> > In other words, we keep the page images and row records in one file so\n> > we can do one fsync, but once we have written the page, we don't want to\n> > store them for later point-in-time recovery.\n> \n> What we'd want to do is strip the page images from the version of the\n> logs that's archived for recovery purposes. Ideally the archiving\n> process would also discard records from aborted transactions, but I'm\n> not sure how hard that'd be to do.\n\nUnless we have UNDO we also need to roll forward the physical changes of \naborted transactions, or later redo records will \"sit on a wrong physical image\".\n\nAndreas\n", "msg_date": "Fri, 6 Jul 2001 10:22:24 +0200 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Re: Backup and Recovery " }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n>> Ideally the archiving\n>> process would also discard records from aborted transactions, but I'm\n>> not sure how hard that'd be to do.\n\n> Unless we have UNDO we also need to roll forward the physical changes of \n> aborted transactions, or later redo records will \"sit on a wrong physical image\".\n\nWouldn't it be the same as the case where we *do* have UNDO? How is a\nremoved tuple different from a tuple that was never there?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Jul 2001 09:14:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Re: Backup and Recovery " } ]
[ { "msg_contents": " \n> > > Also, isn't the WAL format rather bulky to archive hours and hours of?\n> > \n> > If it were actually too bulky, then it needs to be made less so, since\n> > that directly affects overall performance :-) \n> \n> ISTM that WAL record size trades off against lots of things, including \n> (at least) complexity of recovery code, complexity of WAL generation \n> code, usefulness in fixing corrupt table images, and processing time\n> it would take to produce smaller log entries. \n> \n> Complexity is always expensive, and CPU time spent \"pre-sync\" is a lot\n> more expensive than time spent in background. That is, time spent\n> generating the raw log entries affects latency and peak capacity, \n> where time in background mainly affects average system load.\n> \n> For a WAL, the balance seems to be far to the side of simple-and-bulky.\n> For other uses, the balance is sure to be different.\n\nI do not agree with the conclusions you make above.\nThe limiting factor on the WAL is almost always the IO bottleneck.\nHow long startup rollforward takes after a crash is mainly influenced \nby the checkpoint interval and IO. Thus you can spend enough additional\nCPU to reduce WAL size if that leads to a substantial reduction.\nKeep in mind, though, that because of TOAST, long column values that do not \nchange already do not need to be written to the WAL. Thus the potential is \nnot as large as it might seem.\n\n> > > > I would expect high-level transaction redo records to be much more\n> > > > compact; mixed into the WAL, such records shouldn't make the WAL\n> > > > grow much faster.\n> > \n> > All redo records have to be at the tuple level, so what higher-level\n> > are you talking about ? 
(statement level redo records would not be\n> > able to reproduce the same resulting table data (keyword: transaction\n> > isolation level)) \n> \n> Statement-level redo records would be nice, but as you note they are \n> rarely practical if done by the database.\n\nThe point is that the database cannot do it, unless it only allows \nserializable access and allows no user defined functions with external \nor runtime dependencies.\n\n> \n> Redo records that contain whole blocks may be much bulkier\n> than records of whole tuples.\n\nWhat is written in whole pages is the physical log, and yes those pages can \nbe stripped before the log is copied to the backup location. \n\n> Redo records of whole tuples may be much bulkier than those that just \n> identify changed fields.\n\nYes, that might help in some cases, but as I said above, if it actually\nmakes a substantial difference it would be best already done before the WAL \nis written.\n\n> Bulky logs mean more-frequent snapshot backups, and bulky log formats \n> are less suitable for network transmission, and therefore less useful \n> for replication.\n\nAny reasonably flexible replication that is based on the WAL will need to \npreprocess the WAL files (or buffers) before transmission anyway.\n\nAndreas\n", "msg_date": "Fri, 6 Jul 2001 11:04:32 +0200 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Re: Backup and Recovery" } ]
[ { "msg_contents": "\nComing from Oracle, I was disappointed that\nthe users were not \"per individual database\".\n\nI had to promote my database owner to superuser\nto make it able to create users (I didn't want\nto su to postgres for user creation).\n\nIs there any chance that this will change in\nthe future ?\n\n\nRegarding privileges, I was also disappointed to\nsee that the object owner rights (ALL) had to\nbe stored when grants were made on those objects\nto other users. I remember reading something\nabout changes in privileges storing.\n\nIs there a change regarding this in the TODO list ?\n\n\nBest regards,\n\nJean-Francois Leveque\n\n\n______________________________________________________________________\nOn WebMailS.com, my free e-mail address.\nA multilingual, secure, and permanent service. http://www.webmails.com/\n", "msg_date": "Fri, 6 Jul 2001 10:37:50 +0100", "msg_from": "\"Jean-Francois Leveque\" <leveque@webmails.com>", "msg_from_op": true, "msg_subject": "Database Users Management and Privileges" }, { "msg_contents": "Jean-Francois Leveque writes:\n\n> Coming from Oracle, I was disappointed that\n> the users were not \"per individual database\".\n\n> Is there any chance that this will change in\n> the future ?\n\nMost likely not. For one thing, it would be a problem to assign owners to\ndatabases.\n\n> Regarding privileges, I was also disappointed to\n> see that the object owner rights (ALL) had to\n> be stored when grants were made on those objects\n> to other users. I remember reading something\n> about changes in privileges storing.\n\nThis has been corrected in 7.1.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 6 Jul 2001 16:17:04 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Database Users Management and Privileges" }, { "msg_contents": "* Peter Eisentraut <peter_e@gmx.net> wrote:\n|\n| Jean-Francois Leveque writes:\n| \n| > Coming from Oracle, I was disappointed that\n| > the users were not \"per individual database\".\n| \n| > Is there any chance that this will change in\n| > the future ?\n| \n| Most likely not. For one thing, it would be a problem to assign owners to\n| databases.\n| \n\nWhy ? Better user management and policy delegations would be important for\npostgresql to succeed in enterprise environments. Maybe one should \nstart distinguishing logins from users like Sybase does. Logins are global\nto all databases, and you can create a user for a given database and assign\nit to a login. It would also be nice to be able to assign users to \ngroups (which in turn define access rights within the database). \n\nregards,\n\n Gunnar\n\n-- \nGunnar Rønning - gunnar@polygnosis.com\nSenior Consultant, Polygnosis AS, http://www.polygnosis.com/\n", "msg_date": "06 Jul 2001 17:08:05 +0200", "msg_from": "Gunnar =?iso-8859-1?q?R=F8nning?= <gunnar@polygnosis.com>", "msg_from_op": false, "msg_subject": "Re: Database Users Management and Privileges" }, { "msg_contents": "Gunnar Rønning writes:\n\n> Better user management and policy delegations would be important for\n> postgresql to succeed in enterprise environments.\n\nKeeping compatibility is also important.\n\n> Maybe one should\n> start distinguishing logins from users like Sybase does. Logins are global\n> to all databases, and you can create a user for a given database and assign\n> it to a login.\n\nThat doesn't strike me as terribly better. 
Operating system\nadministrators tend to unify user management across the whole network.\nYou're essentially suggesting making separate users per file system.\nUgh.\n\n> It would also be nice to be able to assign users to\n> groups(which in turn define access rights within the database).\n\nThat would indeed be nice. That's why we have already implemented it.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 6 Jul 2001 17:41:23 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Database Users Management and Privileges" }, { "msg_contents": "* Peter Eisentraut <peter_e@gmx.net> wrote:\n\n| > Better user management and policy delegations would be important\n| > postgresql to succeed in enterprise environments.\n| \n| Keeping compatibility is also important.\n\nWell nobody said you can't get both ;-)\n\n| > to all databases, and you can create a user for a given database and assign\n| > it to a login.\n| \n| That doesn't strike me as terribly better. Operating system\n| administrators tend to unify user management across the whole network.\n| You're essentially suggesting making separate users per file system.\n| Ugh.\n\nWell, it is important for some networks to have the ability to create users \nlocal to a subset of the network. Let the sub networks manage themselves. \nMatter of policy of course.\n\n| > It would also be nice to be able to assign users to\n| > groups(which in turn define access rights within the database).\n| \n| That would indeed be nice. That's why we have already implemented it.\n\nOops, sorry. RTFM.... But the set of permissions you can assign to a group is\nfairly limited. E.g. I can't see that you are able to grant a user/group \ncreate/drop table permissions for a database. Does that mean any user can \ncreate/drop tables ? I think this is an example of a permission a DBA would \nlike to grant to users per database. 
\n\ncreateuser/createdb are rights assigned to a user directly. Wouldn't it make \nsense to be able to assign these rights to a group of users ?\n\nregards, \n\n Gunnar\n\n-- \nGunnar Rønning - gunnar@polygnosis.com\nSenior Consultant, Polygnosis AS, http://www.polygnosis.com/\n", "msg_date": "06 Jul 2001 20:53:05 +0200", "msg_from": "Gunnar =?iso-8859-1?q?R=F8nning?= <gunnar@polygnosis.com>", "msg_from_op": false, "msg_subject": "Re: Database Users Management and Privileges" } ]
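Part of what Gunnar asks for can already be spelled with the group machinery mentioned in the thread. A sketch in SQL; the group, user, and table names are invented for illustration:

```sql
-- Give a group of users, rather than individuals, access to a table.
CREATE GROUP writers;
ALTER GROUP writers ADD USER alice, bob;

REVOKE ALL ON accounts FROM PUBLIC;
GRANT SELECT, INSERT, UPDATE ON accounts TO GROUP writers;
```

What this cannot express is the database-level case Gunnar raises: there is no per-database grant of create/drop-table rights, and createuser/createdb remain per-user attributes rather than grantable privileges.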
[ { "msg_contents": "CVSROOT:\t/home/projects/pgsql/cvsroot\nModule name:\tpgsql\nChanges by:\twieck@hub.org\t01/07/06 09:40:47\n\nModified files:\n\tsrc/backend/rewrite: rewriteHandler.c \n\nLog message:\n\tFire rule actions ON INSERT after original statement (if not INSTEAD).\n\t\n\tJan\n\n", "msg_date": "Fri, 6 Jul 2001 09:40:47 -0400 (EDT)", "msg_from": "Jan Wieck <wieck@hub.org>", "msg_from_op": true, "msg_subject": "pgsql/src/backend/rewrite rewriteHandler.c" }, { "msg_contents": "\nTODO updated.\n\n> CVSROOT:\t/home/projects/pgsql/cvsroot\n> Module name:\tpgsql\n> Changes by:\twieck@hub.org\t01/07/06 09:40:47\n> \n> Modified files:\n> \tsrc/backend/rewrite: rewriteHandler.c \n> \n> Log message:\n> \tFire rule actions ON INSERT after original statement (if not INSTEAD).\n> \t\n> \tJan\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 6 Jul 2001 14:23:16 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql/src/backend/rewrite rewriteHandler.c" }, { "msg_contents": "Jan Wieck <wieck@hub.org> writes:\n> \tFire rule actions ON INSERT after original statement (if not INSTEAD).\n\nIt seems to me that this change of ordering should apply to\nqual_products (ie, original statement with negation of a conditional\nINSTEAD rule's condition) as well as to the unvarnished original\nstatement in the non-INSTEAD case. Otherwise it's just about impossible\nto give a coherent statement of what the behavior is.\n\nHowever, when I tried making that change, a whole bunch of differences\npopped up in the rules regression test. 
They seemed to come from this\nexample:\n\ncreate rule rtest_nothn_r1 as on insert to rtest_nothn1\n\twhere new.a >= 10 and new.a < 20 do instead (select 1);\n\nIn the old regime, the SELECT got done before the INSERT, so psql throws\naway the SELECT result and you see no output. In the new regime, the\nSELECT gets done last and you see its output. What might be even more\nconfusing to newbies, you see SELECT output of zero rows when the rule's\nWHERE condition fails (since the select is done anyway, but with a false\ncondition).\n\nMy feeling is that I should make the change and adjust the rule test's\nexpected output (probably by changing this rule to DO INSTEAD NOTHING).\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Jul 2001 14:29:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Rule action ordering" } ]
[ { "msg_contents": "\n> >> Ideally the archiving\n> >> process would also discard records from aborted transactions, but I'm\n> >> not sure how hard that'd be to do.\n\n> > Unless we have UNDO we also need to roll forward the physical changes of \n> > aborted transactions, or later redo records will \"sit on a \n> wrong physical image\".\n\n> Wouldn't it be the same as the case where we *do* have UNDO? How is a\n> removed tuple different from a tuple that was never there?\n\nHiHi, the problem is a subtle one. What if a previously aborted txn \nproduced a btree page split, that would otherwise not have happened ?\nAnother issue is \"physical log\" if first modification after checkpoint\nwas from an aborted txn. Now because you need to write that physical log\npage you will also need to write the abort to pg_log ...\n\nI guess you can however discard heap tuple *column values* from aborted \ntxns, but I am not sure that is worth it.\n\nAndreas\n", "msg_date": "Fri, 6 Jul 2001 16:00:34 +0200 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: Re: Backup and Recovery " }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n>> Wouldn't it be the same as the case where we *do* have UNDO? How is a\n>> removed tuple different from a tuple that was never there?\n\n> HiHi, the problem is a subtle one. What if a previously aborted txn \n> produced a btree page split, that would otherwise not have happened ?\n\nGood point. We'd have to recognize btree splits (and possibly some\nother operations) as things that must be done anyway, even if their\noriginating transaction is aborted.\n\nThere already is a mechanism for doing that: xlog entries can be written\nwithout any transaction identifier (see XLOG_NO_TRAN). 
Seems to me that\nbtree split XLOG records should be getting written that way now --- Vadim,\ndon't you agree?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Jul 2001 10:20:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: AW: Re: Backup and Recovery " } ]
[ { "msg_contents": "\n> Another issue is \"physical log\" if first modification after checkpoint\n> was from an aborted txn. Now because you need to write that physical log\n> page you will also need to write the abort to pg_log ...\n\nForget that part, sorry, how stupid. We were getting rid of those pages anyway.\n\nAndreas\n", "msg_date": "Fri, 6 Jul 2001 16:08:34 +0200 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: Re: Backup and Recovery " } ]
[ { "msg_contents": "Hi all,\n\nI am having the following problems using Postgres 7.1.2 with clients which\nhave long transactions:\n\nIf there is a transaction running when 'vacuumdb -a -z' is run (as a cron\njob) it stops running at that database till the transaction completes. That\nis not so much of a problem until a new client tries to connect to the\ndatabase. This new connection hangs, waiting for the vacuum to complete.\nThis situation is not all that helpful and means I have to be careful at\nwhat time I run vacuum so it does not interfere with new clients. Is this a\nbug or the standard way in which postgres works and are there any plans to\nchange this?\n\nRegards\n\nBen\n\n\n*****************************************************************************\nThis email and any attachments transmitted with it are confidential\nand intended solely for the use of the individual or entity to whom\nthey are addressed. If you have received this email in error please\nnotify the sender and do not store, copy or disclose the content\nto any other person.\n\nIt is the responsibility of the recipient to ensure that opening this\nmessage and/or any of its attachments will not adversely affect\nits systems. No responsibility is accepted by the Company.\n*****************************************************************************\n", "msg_date": "Fri, 6 Jul 2001 15:25:49 +0100 ", "msg_from": "\"Trewern, Ben\" <Ben.Trewern@mowlem.com>", "msg_from_op": true, "msg_subject": "Vacuum and Transactions" }, { "msg_contents": "Trewern, Ben writes:\n\n> If there is a transaction running when 'vacuumdb -a -z' is run (as a cron\n> job) it stops running at that database till the transaction completes. That\n> is not so much of a problem until a new client tries to connect to the\n> database. This new connection hangs, waiting for the vacuum to complete.\n\nThere are plans to make vacuum less intrusive in the next major release,\nbut until then this is what you have to deal with. 
Unless you really need\nto run vacuum all the time you should schedule it for low activity times.\nYes, that means 24/7 100% uptime is not *really* feasible with PostgreSQL.\n\n> This email and any attachments transmitted with it are confidential\n\nIf the email is confidential you shouldn't send it to public mailing\nlists.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 6 Jul 2001 17:45:17 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Vacuum and Transactions" }, { "msg_contents": "In 7.2, VACUUM will not require an exclusive lock.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Jul 2001 12:02:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vacuum and Transactions " }, { "msg_contents": "From: \"Trewern, Ben\" <Ben.Trewern@mowlem.com>\n\n> If there is a transaction running when 'vacuumdb -a -z' is run (as a cron\n> job) it stops running at that database till the transaction completes.\nThat\n> is not so much of a problem until a new client tries to connect to the\n> database. This new connection hangs, waiting for the vacuum to complete.\n> This situation is not all that helpful and means I have to be careful at\n> what time I run vacuum so it does not interfere with new clients. Is this\na\n> bug or the standard way in which postgres works and are there any plans\n> change this?\n\nWould vacuuming the tables one at a time not help here? It'd mean a small\nscript to read a list of databases/tables out of PG but should reduce the\nimpact on your clients (if I'm thinking straight here)\n\n- Richard Huxton\n\n", "msg_date": "Fri, 6 Jul 2001 17:42:23 +0100", "msg_from": "\"Richard Huxton\" <dev@archonet.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum and Transactions" }, { "msg_contents": "> In 7.2, VACUUM will not require an exclusive lock.\n\nCare to elaborate on that? 
How are you going to do it?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 6 Jul 2001 14:42:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vacuum and Transactions" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> In 7.2, VACUUM will not require an exclusive lock.\n\n> Care to elaborate on that? How are you going to do it?\n\nUh, have you not been paying attention to pg-hackers for the\nlast two months?\n\nI am assuming here that concurrent VACUUM will become the default\nkind of vacuum, and the old style will be invoked by some other\nsyntax (VACUUM FULL ..., maybe).\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Jul 2001 14:45:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vacuum and Transactions " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> In 7.2, VACUUM will not require an exclusive lock.\n> \n> > Care to elaborate on that? How are you going to do it?\n> \n> Uh, have you not been paying attention to pg-hackers for the\n> last two months?\n> \n> I am assuming here that concurrent VACUUM will become the default\n> kind of vacuum, and the old style will be invoked by some other\n> syntax (VACUUM FULL ..., maybe).\n\nBy concurrent vacuum, do you mean the auto-vacuum you are doing? I\nrealize that will not need a lock. Are you changing default VACUUM so\nit only moves rows inside existing blocks too?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 6 Jul 2001 14:49:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vacuum and Transactions" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Are you changing default VACUUM\n\nOnly to the extent of not being the default.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Jul 2001 14:50:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vacuum and Transactions " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> In 7.2, VACUUM will not require an exclusive lock.\n> \n> > Care to elaborate on that? How are you going to do it?\n> \n> Uh, have you not been paying attention to pg-hackers for the\n> last two months?\n> \n> I am assuming here that concurrent VACUUM will become the default\n> kind of vacuum, and the old style will be invoked by some other\n> syntax (VACUUM FULL ..., maybe).\n\nOK, I just talked to Tom on the phone and here is his idea for 7.2. He\nsays he already posted this, but I missed it.\n\nHis idea is that in 7.2 VACUUM will only move rows within pages. It\nwill also store unused space locations into shared memory to be used by\nbackends needing to add rows to tables. Actual disk space compaction\nwill be performed by a new VACUUM FULL(?) command.\n\nThe default VACUUM will not lock the table but only prevent the table\nfrom being dropped.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 6 Jul 2001 17:59:46 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Vacuum and Transactions" }, { "msg_contents": "At 05:59 PM 7/6/01 -0400, Bruce Momjian wrote:\n>\n>OK, I just talked to Tom on the phone and here is his idea for 7.2. He\n>says he already posted this, but I missed it.\n>\n>His idea is that in 7.2 VACUUM will only move rows within pages. It\n>will also store unused space locations into shared memory to be used by\n>backends needing to add rows to tables. Actual disk space compaction\n>will be performed by a new VACUUM FULL(?) command.\n>\n>The default VACUUM will not lock the table but only prevent the table\n>from being dropped.\n\nWould 7.2 maintain performance when updating a row repeatedly (update,\ncommit)? Right now performance goes down in a somewhat 1/x manner. It\nstill performs ok but it's nice to have things stay blazingly fast.\n\nIf not will the new vacuum restore the performance? \n\nOr will we have to use the VACUUM FULL?\n\nThanks,\nLink.\n\n", "msg_date": "Sat, 07 Jul 2001 16:17:57 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Vacuum and Transactions" }, { "msg_contents": "Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> Would 7.2 maintain performance when updating a row repeatedly (update,\n> commit)?\n\nYou'll still need to VACUUM to get rid of the obsoleted versions of the\nrow. 
The point of the planned 7.2 changes is to make VACUUM cheap and\nnonintrusive enough so that you can run it frequently on tables that are\nseeing continual updates.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 07 Jul 2001 15:15:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] Vacuum and Transactions " }, { "msg_contents": "> You'll still need to VACUUM to get rid of the obsoleted versions of the\n> row. The point of the planned 7.2 changes is to make VACUUM cheap and\n> nonintrusive enough so that you can run it frequently on tables that are\n> seeing continual updates.\n\nIf it becomes non-intrusive, then why not have PostgreSQL run VACUUM\nautomatically when certain conditions (user-configurable, load, changes per\ntable, etc.) are met.\n\nAll the sys admin would need to do is put the VACUUM FULL in a cron job.\n\nChris\n\n", "msg_date": "Mon, 9 Jul 2001 09:29:43 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: Re: [GENERAL] Vacuum and Transactions " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> If it becomes non-intrusive, then why not have PostgreSQL run VACUUM\n> automatically\n\nThat might happen eventually, but I'm not all that eager to convert\nthe postmaster into a (half-baked) substitute for cron. My experience\nas a dbadmin is that you need various sorts of routinely-run maintenance\ntasks anyway; VACUUM is only one of them. So you're gonna need some\ncron tasks no matter what. 
If we try to make the postmaster responsible\nfor this sort of thing, we're going to end up reimplementing cron.\nI think that's a waste of effort.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 08 Jul 2001 21:46:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] Vacuum and Transactions " }, { "msg_contents": "> That might happen eventually, but I'm not all that eager to convert\n> the postmaster into a (half-baked) substitute for cron. My experience\n> as a dbadmin is that you need various sorts of routinely-run maintenance\n> tasks anyway; VACUUM is only one of them. So you're gonna need some\n> cron tasks no matter what. If we try to make the postmaster responsible\n> for this sort of thing, we're going to end up reimplementing cron.\n> I think that's a waste of effort.\n\nExcept that you can only set cron jobs to run every hour, etc. The DBA\nmight want to set it to run after say 5% of the rows in a table are\nupdated/deleted, etc. It is an esoteric feature, I know, but it'd be cool.\n\nChris\n\n", "msg_date": "Mon, 9 Jul 2001 09:46:40 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: Re: [GENERAL] Vacuum and Transactions " }, { "msg_contents": "> > That might happen eventually, but I'm not all that eager to convert\n> > the postmaster into a (half-baked) substitute for cron. My experience\n> > as a dbadmin is that you need various sorts of routinely-run maintenance\n> > tasks anyway; VACUUM is only one of them. So you're gonna need some\n> > cron tasks no matter what. If we try to make the postmaster responsible\n> > for this sort of thing, we're going to end up reimplementing cron.\n> > I think that's a waste of effort.\n> \n> Except that you can only set cron jobs to run every hour, etc. The DBA\n> might want to set it to run after say 5% of the rows in a table are\n> updated/deleted, etc. 
It is an esoteric feature, I know, but it'd be cool.\n\nChris\n\n", "msg_date": "Mon, 9 Jul 2001 09:46:40 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: Re: [GENERAL] Vacuum and Transactions " }, { "msg_contents": "> > That might happen eventually, but I'm not all that eager to convert\n> > the postmaster into a (half-baked) substitute for cron. My experience\n> > as a dbadmin is that you need various sorts of routinely-run maintenance\n> > tasks anyway; VACUUM is only one of them. So you're gonna need some\n> > cron tasks no matter what. If we try to make the postmaster responsible\n> > for this sort of thing, we're going to end up reimplementing cron.\n> > I think that's a waste of effort.\n> \n> Except that you can only set cron jobs to run every hour, etc. The DBA\n> might want to set it to run after say 5% of the rows in a table are\n> updated/deleted, etc. It is an esoteric feature, I know, but it'd be cool.\n\nI don't think it is esoteric. If I UPDATE all the rows in a table,\nCOMMIT, and all transactions viewing old versions of my table are gone,\nit would be nice for VACUUM-light to come along and gather up my free\ntuple space for later use. \n\nOnly the database knows when this has happened, not cron.\n\nI also think we have to leave VACUUM alone and come up with a new name\nfor our light VACUUM. That way, people who do VACUUM at night when no\none is on the system can keep doing that, and just add something to run\nlight vacuum periodically during the day. I also believe eventually we\nwill remove VACUUM-light and come up with some automatic solution.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 12 Jul 2001 15:53:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] Vacuum and Transactions" } ]
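Richard Huxton's per-table suggestion in the thread above can be sketched as a small shell script. Everything here is illustrative: the database name is a placeholder, and the table list is stubbed out so the sketch stands alone; a real script would fetch it with the psql query shown in the comment.

```shell
#!/bin/sh
# Vacuum tables one at a time instead of 'vacuumdb -a', so no single
# long-running vacuum holds its lock for the whole database pass.
DB="mydb"   # placeholder database name

list_tables() {
    # Stand-in for the real query:
    #   psql -At -d "$DB" -c "SELECT tablename FROM pg_tables WHERE tablename NOT LIKE 'pg_%'"
    printf '%s\n' accounts orders invoices
}

list_tables | while read -r tab; do
    # Print the command that would run; drop the 'echo' to execute it.
    echo vacuumdb -d "$DB" -z -t "$tab"
done
```

Run from cron during a quiet period, this keeps each table's vacuum short; it does not change how intrusive each individual vacuum is, which is what the 7.2 work addresses.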
[ { "msg_contents": "> Good point. We'd have to recognize btree splits (and possibly some\n> other operations) as things that must be done anyway, even if their\n> originating transaction is aborted.\n> \n> There already is a mechanism for doing that: xlog entries can\n> be written without any transaction identifier (see XLOG_NO_TRAN).\n> Seems to me that btree split XLOG records should be getting written\n> that way now --- Vadim, don't you agree?\n\nWe would have to write two records per split instead of one as now.\nAnother way is a new xlog AM method: we have XXX_redo, XXX_undo (unfunctional)\nand XXX_desc (for debug output) now - add XXX_compact (or whatever)\nable to modify record somehow for BAR. For heap, etc. this method could\nbe {return} (or NULL) and for btree it could remove inserted tuple\nfrom record (for aborted TX).\n\nVadim\n\n", "msg_date": "Fri, 6 Jul 2001 08:54:27 -0700 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: AW: AW: Re: Backup and Recovery " } ]
[ { "msg_contents": "Does anybody know about amiint?\n\nThe pg_log relation is created by this thing and I need to\nknow what it is.\n\n*\t first open the log and time relations\n*\t (these are created by amiint so they are guaranteed\nto exist)\n\nRegards, J-P\n\n\n\n__________________________________________________\nDo You Yahoo!?\nGet personalized email addresses from Yahoo! Mail\nhttp://personal.mail.yahoo.com/\n", "msg_date": "Fri, 6 Jul 2001 09:16:47 -0700 (PDT)", "msg_from": "J-P Guguy <thejeeman@yahoo.com>", "msg_from_op": true, "msg_subject": "amiint" } ]
[ { "msg_contents": "Gunnar Rønning\twrote:\n> \n> * Peter Eisentraut <peter_e@gmx.net> wrote:\n> |\n> | Jean-Francois Leveque writes:\n> | \n> | > Coming from Oracle, I was disappointed that\n> | > the users were not \"per individual database\".\n> | \n> | > Is there any chance that this will change in\n> | > the future?\n> | \n> | Most likely not. For one thing, it would be a problem to assign\n> | owners to databases.\n\nWhy can't database owners be referenced in one table\nand database users (not owners) be referenced in\nanother table with the corresponding database\nreferenced?\n\nThey're not the same kind of users, are they?\n\nMaybe I used Oracle too much in the past.\n\n> Why? Better user management and policy delegations would be\n> important for postgresql to succeed in enterprise environments. Maybe one should \n> start distinguishing logins from users like Sybase does. Logins are\n> global to all databases, and you can create a user for a given database\n> and assign it to a login. It would also be nice to be able to assign users to \n> groups (which in turn define access rights within the database). \n\nI created database user groups and I'm satisfied\nwith the assignment of users to groups (See CREATE GROUP\nand ALTER GROUP).\n\nRegarding privileges, I was thinking about\nthe content of \\z \"Access permissions for database\"\nresults. We have a lot of \"=arwR\" for the object\nowner when we granted permissions to others. The\nowner obviously has all rights on his objects and\nI see no reason to revoke those rights. So, I think\nthey don't have to be stored in the access permissions\nif the PostgreSQL code can check whether it's the owner\nasking.
We wouldn't then need the '\"=\"' anymore for\nnot granting anything to PUBLIC.\n\nWe then wouldn't need to have:\n\"REVOKE ALL on <object> from PUBLIC;\"\n\"GRANT ALL on <object> to <owner>;\"\nin pg_dump output.\n\nI'm not able to help on this because I'm no\npgsql-hacker, but I think PostgreSQL will be\nbetter with such an alteration.\n\nMaybe it's already on someone's list but I\ncouldn't find information about such work in progress.\n\n\nMaybe those two changes are too much for 7.1.3,\nbut I think they would be good candidates for 8.0.\n\nPlease tell me if I'm pushing too far, I'm not much\nused to this list's etiquette.\n\nPostgreSQL is good, I just want it to be better.\n\n\nregards,\n\nJean-Francois Leveque\n\n\n______________________________________________________________________\nOn WebMailS.com, my free e-mail address.\nA multilingual, secure, and permanent service. http://www.webmails.com/\n", "msg_date": "Fri, 6 Jul 2001 18:11:13 +0100", "msg_from": "\"Jean-Francois Leveque\" <leveque@webmails.com>", "msg_from_op": true, "msg_subject": "Re: Database Users Management and Privileges" } ]
[ { "msg_contents": "Vitalino writes:\n\n> I have a problem with the Postgres user authentication. A user\n> with password was created by me, but when I tried to enter in psql using\n> that user I received a failure message.\n>\n> $ psql -W template1 pagano\n> Password:\n> psql: Peer authentication failed for user 'pagano'\n\nOfficial PostgreSQL sources don't have peer authentication. You should\ncontact the provider of your package (Debian?).\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 6 Jul 2001 19:53:14 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Problem with authentication in psql." }, { "msg_contents": "Hi everybody,\n\n I have a problem with the Postgres user authentication. A user\nwith password was created by me, but when I tried to enter in psql using\nthat user I received a failure message.\n\n$ psql -W template1 pagano\nPassword:\npsql: Peer authentication failed for user 'pagano'\n\n Does anybody know what is happening?\n\nThanks for all.\n Vitalino.\n\n", "msg_date": "Fri, 06 Jul 2001 19:12:28 +0000", "msg_from": "Vitalino <vitalino@supercable.es>", "msg_from_op": false, "msg_subject": "Problem with authentication in psql." } ]
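For anyone hitting the same error on a packaged build, the usual workaround is to change the authentication method for local connections in pg_hba.conf. A hypothetical 7.1-era fragment follows — the file's location and the exact line being replaced depend on the package, so treat this as a sketch rather than the packager's documented fix; the postmaster generally needs a reload/restart before the change takes effect:

```
# pg_hba.conf -- location varies by package (often under the data directory
# or /etc/postgresql/). Pre-7.4 record format shown.
#
# TYPE    DATABASE    [IP-ADDRESS  MASK]             AUTH_TYPE
local     all                                        password
host      all         127.0.0.1    255.255.255.255   password
```

With `password` in place of the package's ident/peer-style method, psql's `-W` prompt will actually be checked against the password set at CREATE USER time.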
[ { "msg_contents": "Greetings,\n\nI'm working with libpq in asynchronous mode and I have a question about \nPQgetResult. I have this situation:\n\nsubmit a query via PQsendQuery()\nflush to the backend with PQflush()\n\nset my read descriptor on the socket and enter a select()\n\nselect returns read_ready on the socket, so call PQconsumeInput()\nPQisBusy() returns zero, so call PQgetResult()\nPQgetResult() returns a pointer so do whatever with the result\ncall PQclear() on the result\n\nNow what do I do? The docs say that in async mode PQgetResult() must \nbe called until it returns NULL. But how do I know that calling \nPQgetResult() a second, third, fourth, etc. time will not block? When \nPQisBusy() returns zero, does that mean that PQgetResult() is guaranteed \nnot to block for all results, i.e. until it returns NULL and a new query is \nissued?\n\nThanks,\nMatthew\n\n", "msg_date": "Fri, 06 Jul 2001 23:09:40 -0400", "msg_from": "Matthew Hagerty <mhagerty@voyager.net>", "msg_from_op": true, "msg_subject": "Async PQgetResult() question." }, { "msg_contents": "Matthew Hagerty <mhagerty@voyager.net> writes:\n> I'm working with libpq in asynchronous mode and I have a question about \n> PQgetResult. I have this situation:\n\n> submit a query via PQsendQuery()\n> flush to the backend with PQflush()\n\nI think the flush is not necessary; PQsendQuery should do it.\n\n> set my read descriptor on the socket and enter a select()\n\n> select returns read_ready on the socket, so call PQconsumeInput()\n> PQisBusy() returns zero, so call PQgetResult()\n> PQgetResult() returns a pointer so do whatever with the result\n> call PQclear() on the result\n\nSo far so good.\n\n> Now what do I do?\n\nLoop back to the PQisBusy() step. In practice, you are probably doing\nthis in an event-driven application, and the select() is the place in\nthe outer loop that blocks for an event.
PQconsumeInput followed by\na PQisBusy/PQgetResult/process result loop are just your response\nsubroutine for an input-ready-on-the-database-connection event.\n\n> The docs say that in async mode that PQgetResult() must \n> be called until it returns NULL. But, how do I know that calling \n> PQgetResult() a second, third, fourth, etc. time will not block?\n\nWhen PQisBusy says you can.\n\n> When \n> PQisBusy() returns zero, does that mean that PQgetResult() is guaranteed \n> not to block for all results, i.e. until it returns NULL and a new query is \n> issued?\n\nNo, it means *one* result is available (or that a NULL is available, ie,\nlibpq has detected the end of the query cycle). Its answer will\nprobably change after you read the result.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 07 Jul 2001 12:35:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Async PQgetResult() question. " }, { "msg_contents": "Thanks for the response Tom! Does anyone ever tell you how much you are \nappreciated? If not, I will. When I post to pgHackers I know I will get a \nresponse (usually) from many knowledgeable people, and for that I thank \neveryone. But I *always* receive a response from Tom, and for that I am \ntruly and greatly thankful!\n\nNow, on with asynchronous query processing! WooHoo! :)\n\nAt 12:35 PM 7/7/2001 -0400, Tom Lane wrote:\n>Matthew Hagerty <mhagerty@voyager.net> writes:\n> > I'm working with pqlib in asynchronous mode and I have a question about\n> > PQgetResult. I have this situation:\n>\n> > submit a query via PQsendQuery()\n> > flush to the backend with PQflush()\n>\n>I think the flush is not necessary; PQsendQuery should do it.\n\n\nThe docs warn that PQflush() must be called, here is what is says:\n\n\"PQflush needs to be called on a non-blocking connection before calling \nselect to determine if a response has arrived. 
If 0 is returned it ensures \nthat there is no data queued to the backend that has not actually been \nsent. Only applications that have used PQsetnonblocking have a need for this.\"\n\nSince I use PQsetnonblocking(), I included PQflush().\n\n\n\n> > set my read descriptor on the socket and enter a select()\n>\n> > select returns read_ready on the socket, so call PGconsumeInput()\n> > PQisBusy() returns zero, so call PQgetResult()\n> > PQgetResult() returns a pointer so do whatever with the result\n> > call PQclear() on the result\n>\n>So far so good.\n>\n> > Now what do I do?\n>\n>Loop back to the PQisBusy() step. In practice, you are probably doing\n>this in an event-driven application, and the select() is the place in\n>the outer loop that blocks for an event. PQconsumeInput followed by\n>a PQisBusy/PQgetResult/process result loop are just your response\n>subroutine for an input-ready-on-the-database-connection event.\n\nThis is my primary concern. I'm actually writing a server and I have other \nevents going on. I wrote a small test program and at this point (after \nreading the first result) I looped back to my select, but the socket never \nwent read-ready again, so the last \nPQconsumeInput()/PQisBusy()/PQgetResults() was never called to receive the \nNULL response from PQgetResult(), which is how the docs say I know the \nquery is done.\n\nBut if I loop back to PQconsumeInput()/PQisBusy(), then I am effectively \nblocking since I have no way to know that PQconsumeInput() won't block or \nthat the PQisBusy() will ever return zero again.\n\nIf there is more input to read, i.e. the NULL from PQgetResult(), then how \ncome the socket never goes read-ready again, as in my test program \nabove? Or can I call PQconsumeInput() repeatedly and be guaranteed that it \nwill not block and that I will be able to process the remaining results?\n\n> > The docs say that in async mode that PQgetResult() must\n> > be called until it returns NULL. 
But, how do I know that calling\n> > PQgetResult() a second, third, fourth, etc. time will not block?\n>\n>When PQisBusy says you can.\n>\n> > When\n> > PQisBusy() returns zero, does that mean that PQgetResult() is guaranteed\n> > not to block for all results, i.e. until it returns NULL and a new \n> query is\n> > issued?\n>\n>No, it means *one* result is available (or that a NULL is available, ie,\n>libpq has detected the end of the query cycle). Its answer will\n>probably change after you read the result.\n\nMy main problem is that sockets are slow devices and I have some other disk \nI/O I have to be doing as well. I know that another call to PQgetResult() \nwill probably return NULL, but if that data has not come from the backend \nyet then PQgetResult() will block, and I can't sit in a loop calling \nPQconsumeInput()/PQisBusy() becuse that is just like blocking and I may as \nwell just let PQgetResult() block.\n\nIt seems that when the query cycle has ended and PQgetResult() would return \nNULL, the socket should be set read-ready again, but that never \nhappens... Well, at least I did not see it happen. I can send my test \ncode if need be.\n\nThanks,\nMatthew\n\n> regards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: Have you searched our list archives?\n>\n>http://www.postgresql.org/search.mpl\n\n", "msg_date": "Sat, 07 Jul 2001 13:40:57 -0400", "msg_from": "Matthew Hagerty <mhagerty@voyager.net>", "msg_from_op": true, "msg_subject": "Re: Async PQgetResult() question. " }, { "msg_contents": "Matthew Hagerty <mhagerty@voyager.net> writes:\n> Only applications that have used PQsetnonblocking have a need for this.\"\n\n> Since I use PQsetnonblocking(), I included PQflush().\n\nHmm. My opinions about the PQsetnonblocking patch are on record:\nit's broken and needs fundamental redesign before it has any chance\nof operating reliably. 
Unless you are sending queries whose text is\nmany kB (more than your kernel will buffer for one send() call),\nI recommend you not use it.\n\nHowever, that only affects output, not input.\n\n> I wrote a small test program and at this point (after \n> reading the first result) I looped back to my select, but the socket never \n> went read-ready again, so the last \n> PQconsumeInput()/PQisBusy()/PQgetResults() was never called to receive the \n> NULL response from PQgetResult(), which is how the docs say I know the \n> query is done.\n\n> But if I loop back to PQconsumeInput()/PQisBusy(), then I am effectively \n> blocking since I have no way to know that PQconsumeInput() won't block or \n> that the PQisBusy() will ever return zero again.\n\n(1) No, you don't need to repeat the PQconsumeInput, unless select still\nsays read-ready. (You could call it again, but there's no point.)\n\n(2) You should repeat PQisBusy && PQgetResult until one of them fails,\nhowever. What you're missing here is that a single TCP packet might\nprovide zero, one, or more than one PQgetResult result. You want to\nloop until you've gotten all the results you can get from the current\ninput packet. Then you go back to select(), and eventually you'll see\nmore backend input and you do another consumeInput and another isBusy/\ngetResult loop.\n\n(3) PQconsumeInput never blocks. Period. PQgetResult can block, but\nit promises not to if an immediately prior PQisBusy returned 0.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 07 Jul 2001 14:13:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Async PQgetResult() question. " }, { "msg_contents": "At 02:13 PM 7/7/2001 -0400, Tom Lane wrote:\n>Matthew Hagerty <mhagerty@voyager.net> writes:\n> > Only applications that have used PQsetnonblocking have a need for this.\"\n>\n> > Since I use PQsetnonblocking(), I included PQflush().\n>\n>Hmm. 
My opinions about the PQsetnonblocking patch are on record:\n>it's broken and needs fundamental redesign before it has any chance\n>of operating reliably. Unless you are sending queries whose text is\n>many kB (more than your kernel will buffer for one send() call),\n>I recommend you not use it.\n>\n>However, that only affects output, not input.\n\nIf I don't call PQsetnonblocking() will that affect any of the async \nfunctions I'm dealing with? I might have insert queries that are rather \nlarge and I'm not sure how big my kernel's buffers are (and surely it will \nbe different on other OSes.)\n\n\n\n> > I wrote a small test program and at this point (after\n> > reading the first result) I looped back to my select, but the socket never\n> > went read-ready again, so the last\n> > PQconsumeInput()/PQisBusy()/PQgetResults() was never called to receive the\n> > NULL response from PQgetResult(), which is how the docs say I know the\n> > query is done.\n>\n> > But if I loop back to PQconsumeInput()/PQisBusy(), then I am effectively\n> > blocking since I have no way to know that PQconsumeInput() won't block or\n> > that the PQisBusy() will ever return zero again.\n>\n>(1) No, you don't need to repeat the PQconsumeInput, unless select still\n>says read-ready. (You could call it again, but there's no point.)\n>\n>(2) You should repeat PQisBusy && PQgetResult until one of them fails,\n>however. What you're missing here is that a single TCP packet might\n>provide zero, one, or more than one PQgetResult result. You want to\n>loop until you've gotten all the results you can get from the current\n>input packet. Then you go back to select(), and eventually you'll see\n>more backend input and you do another consumeInput and another isBusy/\n>getResult loop.\n\nYup, I think that is what I was misunderstanding. I'll modify my loop and \nsee how it goes.\n\n\n>(3) PQconsumeInput never blocks. Period. 
PQgetResult can block, but\n>it promises not to if an immediately prior PQisBusy returned 0.\n>\n> regards, tom lane\n\nThanks,\nMatthew\n\n", "msg_date": "Sat, 07 Jul 2001 15:02:30 -0400", "msg_from": "Matthew Hagerty <mhagerty@voyager.net>", "msg_from_op": true, "msg_subject": "Re: Async PQgetResult() question. " }, { "msg_contents": "Matthew Hagerty <mhagerty@voyager.net> writes:\n> If I don't call PQsetnonblocking() will that affect any of the async \n> functions I'm dealing with?\n\nPQsetnonblocking has nothing to do with the\nPQconsumeInput/PQisBusy/PQgetResult family of functions. The point of\nthe latter is to avoid blocking while waiting for input from the\ndatabase. The point of PQsetnonblocking is to avoid blocking while\nsending stuff to the database.\n\nNow in a TCP environment, the only way send() is going to block is if\nyou send more stuff than there's currently room for in your kernel's\nnetworking buffers --- which typically are going to be hundreds of K.\nI could see needing PQsetnonblocking if you need to avoid blocking\nwhile transmitting COPY IN data to the database ... but for queries\nit's harder to credit. Also, unless you are sending more than one\nquery in a query string, the backend is going to be absorbing the\ndata as fast as it can anyway; so even if you do block it's only\ngoing to be for a network transit delay, not for database processing.\n\nPersonally I've done quite a bit of asynchronous-application coding with\nPQconsumeInput &friends, but never felt the need for PQsetnonblocking.\nThis is why I've not been motivated to try to fix its problems...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 07 Jul 2001 15:46:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Async PQgetResult() question. 
" }, { "msg_contents": "I said:\n> Also, unless you are sending more than one\n> query in a query string, the backend is going to be absorbing the\n> data as fast as it can anyway; so even if you do block it's only\n> going to be for a network transit delay, not for database processing.\n\nActually, forget the \"unless\" part -- the backend won't start parsing\nthe querystring until it's got it all. It just reads the query into\nmemory as fast as it can, semicolons or no.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 07 Jul 2001 16:14:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Async PQgetResult() question. " }, { "msg_contents": "At 03:46 PM 7/7/2001 -0400, Tom Lane wrote:\n>Matthew Hagerty <mhagerty@voyager.net> writes:\n> > If I don't call PQsetnonblocking() will that affect any of the async\n> > functions I'm dealing with?\n>\n>PQsetnonblocking has nothing to do with the\n>PQconsumeInput/PQisBusy/PQgetResult family of functions. The point of\n>the latter is to avoid blocking while waiting for input from the\n>database. The point of PQsetnonblocking is to avoid blocking while\n>sending stuff to the database.\n>\n>Now in a TCP environment, the only way send() is going to block is if\n>you send more stuff than there's currently room for in your kernel's\n>networking buffers --- which typically are going to be hundreds of K.\n>I could see needing PQsetnonblocking if you need to avoid blocking\n>while transmitting COPY IN data to the database ... but for queries\n>it's harder to credit. 
Also, unless you are sending more than one\n>query in a query string, the backend is going to be absorbing the\n>data as fast as it can anyway; so even if you do block it's only\n>going to be for a network transit delay, not for database processing.\n>\n>Personally I've done quite a bit of asynchronous-application coding with\n>PQconsumeInput &friends, but never felt the need for PQsetnonblocking.\n>This is why I've not been motivated to try to fix its problems...\n\nSo then how would I code for the exception, i.e. the backend goes down just \nbefore or during my call to PQsendQuery()? If I am non-blocking then I can \ndetermine that my query did not go (PQsendQuery() or PQflush() returns an \nerror) and attempt to recover. Otherwise, my server could block until \nPQsendQuery() times out and returns an error. I guess it would depend on \nhow long PQsendQuery() waits to return if there is an error or problem with \nthe backend or the connection to the backend?\n\nMatthew\n\n", "msg_date": "Sat, 07 Jul 2001 16:27:07 -0400", "msg_from": "Matthew Hagerty <mhagerty@voyager.net>", "msg_from_op": true, "msg_subject": "Re: Async PQgetResult() question. " }, { "msg_contents": "Matthew Hagerty <mhagerty@voyager.net> writes:\n> So then how would I code for the exception, i.e. the backend goes down just \n> before or during my call to PQsendQuery()? If I am non-blocking then I can \n> determine that my query did not go (PQsendQuery() or PQflush() returns an \n> error) and attempt to recover.\n\nThis is the nasty part of any async client, all right. The case of a\nbackend crash doesn't bother me particularly: in the first place, you'll\nget back a \"connection closed\" failure quickly, and in the second place,\nbackend crashes while absorbing query text (as opposed to while\nexecuting a query) are just about unheard of. 
However, the possibility\nof loss of network connectivity is much more dire: it's plausible, and\nin most cases you're looking at a very long timeout before the kernel\nwill decide that the connection is toast and report an error to you.\n\nI'm unconvinced, however, that using PQsetnonblocking improves the\npicture very much. Unless the database operations are completely\nnoncritical to what your app is doing, you're going to be pretty\nmuch dead in the water anyway with a lost connection :-(\n\nIn the end you pays your money and you takes your choice. I do\nrecommend reading my past rants about why PQsetnonblocking is broken\n(circa Jan 2000, IIRC) before you put any faith in it. If you end\nup deciding that it really is something you gotta have, maybe you'll\nbe the one to do the legwork to make it reliable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 07 Jul 2001 23:44:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Async PQgetResult() question. " }, { "msg_contents": "At 11:44 PM 7/7/2001 -0400, Tom Lane wrote:\n>Matthew Hagerty <mhagerty@voyager.net> writes:\n> > So then how would I code for the exception, i.e. the backend goes down \n> just\n> > before or during my call to PQsendQuery()? If I am non-blocking then I \n> can\n> > determine that my query did not go (PQsendQuery() or PQflush() returns an\n> > error) and attempt to recover.\n>\n>This is the nasty part of any async client, all right. The case of a\n>backend crash doesn't bother me particularly: in the first place, you'll\n>get back a \"connection closed\" failure quickly, and in the second place,\n>backend crashes while absorbing query text (as opposed to while\n>executing a query) are just about unheard of. 
However, the possibility\n>of loss of network connectivity is much more dire: it's plausible, and\n>in most cases you're looking at a very long timeout before the kernel\n>will decide that the connection is toast and report an error to you.\n>\n>I'm unconvinced, however, that using PQsetnonblocking improves the\n>picture very much. Unless the database operations are completely\n>noncritical to what your app is doing, you're going to be pretty\n>much dead in the water anyway with a lost connection :-(\n>\n>In the end you pays your money and you takes your choice. I do\n>recommend reading my past rants about why PQsetnonblocking is broken\n>(circa Jan 2000, IIRC) before you put any faith in it. If you end\n>up deciding that it really is something you gotta have, maybe you'll\n>be the one to do the legwork to make it reliable.\n>\n> regards, tom lane\n\n\nWell, I guess sending a query will have to be my weak link for the moment, \nheck, that's why we have version releases, right? ;) I'll take your advise \nand disable PQsetnonblocking for now, but I would like to read your rants \nand maybe (if I think I can muster the courage), look into fixing \nPQsetnonblocking. I have never dug around for an IIRC archive before, \nmight you recommend one that contains your rants?\n\nThanks,\nMatthew\n\n", "msg_date": "Sun, 08 Jul 2001 12:03:26 -0400", "msg_from": "Matthew Hagerty <mhagerty@voyager.net>", "msg_from_op": true, "msg_subject": "Re: Async PQgetResult() question. " }, { "msg_contents": "Uh oops! I misread IIRC as (IRC, i.e. Internet Relay Chat or something \nsimilar.) It is too early! ;) I'll dig in the archives.\n\nThanks,\nMatthew\n\nAt 12:03 PM 7/8/2001 -0400, Matthew Hagerty wrote:\n>At 11:44 PM 7/7/2001 -0400, Tom Lane wrote:\n>>Matthew Hagerty <mhagerty@voyager.net> writes:\n>> > So then how would I code for the exception, i.e. the backend goes down \n>> just\n>> > before or during my call to PQsendQuery()? 
If I am non-blocking then \n>> I can\n>> > determine that my query did not go (PQsendQuery() or PQflush() returns an\n>> > error) and attempt to recover.\n>>\n>>This is the nasty part of any async client, all right. The case of a\n>>backend crash doesn't bother me particularly: in the first place, you'll\n>>get back a \"connection closed\" failure quickly, and in the second place,\n>>backend crashes while absorbing query text (as opposed to while\n>>executing a query) are just about unheard of. However, the possibility\n>>of loss of network connectivity is much more dire: it's plausible, and\n>>in most cases you're looking at a very long timeout before the kernel\n>>will decide that the connection is toast and report an error to you.\n>>\n>>I'm unconvinced, however, that using PQsetnonblocking improves the\n>>picture very much. Unless the database operations are completely\n>>noncritical to what your app is doing, you're going to be pretty\n>>much dead in the water anyway with a lost connection :-(\n>>\n>>In the end you pays your money and you takes your choice. I do\n>>recommend reading my past rants about why PQsetnonblocking is broken\n>>(circa Jan 2000, IIRC) before you put any faith in it. If you end\n>>up deciding that it really is something you gotta have, maybe you'll\n>>be the one to do the legwork to make it reliable.\n>>\n>> regards, tom lane\n>\n>\n>Well, I guess sending a query will have to be my weak link for the moment, \n>heck, that's why we have version releases, right? ;) I'll take your \n>advise and disable PQsetnonblocking for now, but I would like to read your \n>rants and maybe (if I think I can muster the courage), look into fixing \n>PQsetnonblocking. 
I have never dug around for an IIRC archive before, \n>might you recommend one that contains your rants?\n>\n>Thanks,\n>Matthew\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n", "msg_date": "Sun, 08 Jul 2001 12:14:35 -0400", "msg_from": "Matthew Hagerty <mhagerty@voyager.net>", "msg_from_op": true, "msg_subject": "Re: Async PQgetResult() question. " } ]
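The protocol Tom lays out in this thread — one PQconsumeInput per read-ready event, then drain with PQisBusy/PQgetResult until PQisBusy reports busy again — can be sketched as a toy state machine. This is a simulation, not real libpq: FakeConn and its method names merely mirror the C calls so the control flow can be exercised without a server, and None stands in for the NULL that ends a query cycle.

```python
# A minimal simulation of the async drain loop. Only the control flow
# is the point: after each "read-ready" event, consume input once, then
# keep fetching results while is_busy() says it is safe, since one TCP
# packet may carry zero, one, or several results.

class FakeConn:
    """Stub connection: 'packets' is a list of result batches, one batch
    delivered per select() wakeup; None marks the end of the query."""
    def __init__(self, packets):
        self.packets = list(packets)
        self.ready = []          # results already buffered client-side

    def consume_input(self):     # like PQconsumeInput: never blocks
        if self.packets:
            self.ready.extend(self.packets.pop(0))
        return 1

    def is_busy(self):           # like PQisBusy: 0 -> get_result won't block
        return 0 if self.ready else 1

    def get_result(self):        # like PQgetResult after is_busy() == 0
        return self.ready.pop(0)

def handle_read_ready(conn, sink):
    """Response routine for one input-ready event on the socket.
    Returns True once the terminating None (NULL) has been seen."""
    conn.consume_input()                 # once per event is enough
    done = False
    while not conn.is_busy():            # drain everything buffered
        r = conn.get_result()
        if r is None:                    # NULL: query cycle is over
            done = True
            break
        sink.append(r)                   # process one result here
    return done

# One packet carries two results, the next carries the terminating NULL.
results = []
conn = FakeConn([["res1", "res2"], [None]])
assert handle_read_ready(conn, results) is False   # more to come
assert handle_read_ready(conn, results) is True    # saw the NULL
assert results == ["res1", "res2"]
```

The middle case Tom warns about — a wakeup that delivers zero results — simply falls straight through the while loop and returns to select(), which is exactly why the outer loop must not assume one result per event.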
[ { "msg_contents": "I suppose few people have remembered that today is what could be\nconsidered the 5th anniversary of the PostgreSQL project. Cheers for\nanother five years!\n\n\nhttp://www.ca.postgresql.org/mhonarc/pgsql-hackers/1999-10/msg00552.html\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sun, 8 Jul 2001 22:50:21 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Happy Anniversary" }, { "msg_contents": "> I suppose few people have remembered that today is what could be\n> considered the 5th anniversary of the PostgreSQL project. Cheers for\n> another five years!\n> \n> \n> http://www.ca.postgresql.org/mhonarc/pgsql-hackers/1999-10/msg00552.html\n\nGood catch! Yes, you are right.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 8 Jul 2001 17:20:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Happy Anniversary" }, { "msg_contents": "Does postgresql have any sort of fast bulk loader?\n--\nNaomi Walker\nChief Information Officer\nEldorado Computing, Inc.\n602-604-3100 ext 242 \n\n", "msg_date": "Tue, 10 Jul 2001 17:05:12 -0700", "msg_from": "Naomi Walker <nwalker@eldocomp.com>", "msg_from_op": false, "msg_subject": "Postgresql bulk fast loader" }, { "msg_contents": "Naomi Walker wrote:\n> \n> Does postgresql have any sort of fast bulk loader?\n\nIt has a very cool SQL extension called COPY. 
Super fast.\n\nCommand: COPY\nDescription: Copies data between files and tables\nSyntax:\nCOPY [ BINARY ] table [ WITH OIDS ]\n FROM { 'filename' | stdin }\n [ [USING] DELIMITERS 'delimiter' ]\n [ WITH NULL AS 'null string' ]\nCOPY [ BINARY ] table [ WITH OIDS ]\n TO { 'filename' | stdout }\n [ [USING] DELIMITERS 'delimiter' ]\n [ WITH NULL AS 'null string' ]\n", "msg_date": "Tue, 10 Jul 2001 20:41:21 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Postgresql bulk fast loader" }, { "msg_contents": "> Does postgresql have any sort of fast bulk loader?\n\nCOPY command.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 10 Jul 2001 21:02:36 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgresql bulk fast loader" }, { "msg_contents": "Avoid doing this with indexes on the table, though. I learned the hard way!\n\nMark\n\nmlw wrote:\n> \n> Naomi Walker wrote:\n> >\n> > Does postgresql have any sort of fast bulk loader?\n> \n> It has a very cool SQL extension called COPY. 
Super fast.\n> \n> Command: COPY\n> Description: Copies data between files and tables\n> Syntax:\n> COPY [ BINARY ] table [ WITH OIDS ]\n> FROM { 'filename' | stdin }\n> [ [USING] DELIMITERS 'delimiter' ]\n> [ WITH NULL AS 'null string' ]\n> COPY [ BINARY ] table [ WITH OIDS ]\n> TO { 'filename' | stdout }\n> [ [USING] DELIMITERS 'delimiter' ]\n> [ WITH NULL AS 'null string' ]\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n", "msg_date": "Wed, 11 Jul 2001 09:38:57 -0400", "msg_from": "Mark Volpe <volpe.mark@epa.gov>", "msg_from_op": false, "msg_subject": "Re: Postgresql bulk fast loader" }, { "msg_contents": "Mark Volpe wrote:\n> \n> Avoid doing this with indexes on the table, though. I learned the hard way!\n> \n> Mark\n> \n> mlw wrote:\n> >\n> > Naomi Walker wrote:\n> > >\n> > > Does postgresql have any sort of fast bulk loader?\n> >\n> > It has a very cool SQL extension called COPY. Super fast.\n> >\n> > Command: COPY\n> > Description: Copies data between files and tables\n> > Syntax:\n> > COPY [ BINARY ] table [ WITH OIDS ]\n> > FROM { 'filename' | stdin }\n> > [ [USING] DELIMITERS 'delimiter' ]\n> > [ WITH NULL AS 'null string' ]\n> > COPY [ BINARY ] table [ WITH OIDS ]\n> > TO { 'filename' | stdout }\n> > [ [USING] DELIMITERS 'delimiter' ]\n> > [ WITH NULL AS 'null string' ]\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\nHi\n\nOn a daily basis I have an automated procedure that that bulk copies\ninformation into a \"holding\" table. I scan for duplicates and put the\nOID for the first unique record into a temporary table. 
Using the OID\nand other information I do an INSERT with SELECT to move the unique\ndata into its appropriate table. Then I remove the unique records and\nmove the duplicates into a debugging table. After that I remove the\nremaining records and drop the temporary tables. Once this is done I\nvacuum the tables and regenerate the indexes.\n\nThis sounds complicated but by doing things in quick simple transactions\nthe database is able to run continuously without disruption. I am able\nto import 30+ MB of data every day with only a small disruption when\nupdating the the summary tables.\n\nGuy Fraser\n\n-- \nThere is a fine line between genius and lunacy, fear not, walk the\nline with pride. Not all things will end up as you wanted, but you\nwill certainly discover things the meek and timid will miss out on.\n", "msg_date": "Thu, 12 Jul 2001 17:18:59 -0600", "msg_from": "Guy Fraser <guy@incentre.net>", "msg_from_op": false, "msg_subject": "Re: Re: Postgresql bulk fast loader" } ]
[ { "msg_contents": "When adding unique keys:\n\n* If you do this, you get two unique keys (7.0.3):\n\ncreate table test (a int4, b int4);\ncreate unique index indx1 on test(a, b);\ncreate unique index indx2 on test(a, b);\n\nThen you get this:\n\n Table \"test\"\n Attribute |  Type   | Modifier\n-----------+---------+----------\n a         | integer |\n b         | integer |\nIndices: indx1,\n         indx2\n\n* If you do this, you also get two unique keys (7.0.3):\n\ncreate table test (a int4, b int4, unique(a, b), unique(a, b));\n\nThen you get this:\n\n Table \"test\"\n Attribute |  Type   | Modifier\n-----------+---------+----------\n a         | integer |\n b         | integer |\nIndices: test_a_key,\n         test_a_key1\n\n* So, does this mean that my ALTER TABLE/ADD CONSTRAINT code should happily\nlet people define multiple unique indices over the same columns?\n\n* As a corollary, should it prevent people from adding more than one primary\nkey constraint?\n\nChris\n\nps. I know I only tested these on 7.0.3 - but I assume HEAD has similar\nbehaviour?\n\n", "msg_date": "Mon, 9 Jul 2001 09:26:33 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "ADD CONSTRAINT behaviour question" }, { "msg_contents": "> You assume wrong.\n>\n> It's a bad idea to try to develop backend code against back releases.\n\nMy bad. I'm at work at the moment and I tried it out here to rejig my\nmemory before posting. I do remember testing it on HEAD at home and the\ncreate table (.., unique, unique) doesn't duplicate.\n\nDon't worry - it's being developed on HEAD - I was just trying to get out of\nfiguring out how to detect an already existing index across specified\ncolumns.
It's complicated.\n\nChris\n\n", "msg_date": "Mon, 9 Jul 2001 09:49:03 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "RE: ADD CONSTRAINT behaviour question " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> ps. I know I only tested these on 7.0.3 - but I assume HEAD has similar\n> behaviour?\n\nYou assume wrong.\n\nIt's a bad idea to try to develop backend code against back releases.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 08 Jul 2001 21:50:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ADD CONSTRAINT behaviour question " } ]
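Detecting an already existing index across specified columns — the problem mentioned above — reduces to comparing column lists, with the open question of whether column order matters. A minimal sketch under the invented assumption that indexes are represented as tuples of column names (this is not catalog code):

```python
def index_conflict(existing, proposed, ignore_order=False):
    """Return True if `proposed` (a tuple of column names) duplicates
    an index in `existing` (a list of column-name tuples).
    With ignore_order=True, (b, a) is treated as matching (a, b);
    by default the match is exact-order, since (a, b) and (b, a)
    are different indexes for planning purposes."""
    if ignore_order:
        return any(frozenset(cols) == frozenset(proposed) for cols in existing)
    return any(tuple(cols) == tuple(proposed) for cols in existing)
```

The `ignore_order` flag captures the combination-vs-permutation choice debated later in the thread.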
[ { "msg_contents": " \n> My feeling is that the restrictions are stringent enough to eliminate\n> most of the interesting uses of views, and hence an automatic rule\n> creation feature is not nearly as useful/important as it appears at\n> first glance.\n\nThe most prominent of the \"interesting uses\" probably beeing when the views\nare part of the authorization system, since views are the only standardized\nmechanism to restrict access at the row level. Imho not to be neglected.\n(user xxx is only allowed to manipulate rows that belong to his department,\nso he is only granted access to a view, not the main table)\n\nAndreas\n", "msg_date": "Mon, 9 Jul 2001 12:31:23 +0200 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: New SQL Datatype RECURRINGCHAR " }, { "msg_contents": "Zeugswetter Andreas SB wrote:\n> \n> \n> > My feeling is that the restrictions are stringent enough to eliminate\n> > most of the interesting uses of views, and hence an automatic rule\n> > creation feature is not nearly as useful/important as it appears at\n> > first glance.\n> \n> The most prominent of the \"interesting uses\" probably beeing when the views\n> are part of the authorization system, since views are the only standardized\n> mechanism to restrict access at the row level.\n\nTrue, and often the views can be restricted to insert only data that\nwill be \nvisible using this view.\n\n> Imho not to be neglected.\n> (user xxx is only allowed to manipulate rows that belong to his department,\n> so he is only granted access to a view, not the main table)\n\nThis seems to be a little more complicated that Tom described (I.e. it\nhas \nprobably more than one relation involved or uses a function to get\nCURRENT_USER's \ndepartment id)\n\nIIRC MS Access has much broader repertoire of updatable views than\ndescribed \nby Tom. 
Can be it's an extension to standard SQL though.\n\n-----------------\nHannu\n", "msg_date": "Mon, 09 Jul 2001 16:39:19 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: AW: New SQL Datatype RECURRINGCHAR" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Zeugswetter Andreas SB wrote:\n>> The most prominent of the \"interesting uses\" probably beeing when the views\n>> are part of the authorization system, since views are the only standardized\n>> mechanism to restrict access at the row level.\n\n> True, and often the views can be restricted to insert only data that\n> will be visible using this view.\n\nRight. The interesting question is whether an automatic rule creator\ncould be expected to derive the correct restrictions on\ninsert/update/delete given the WHERE clause of the view. Insert/delete\nmight not be too bad (at first thought, requiring the inserted/deleted\nrows to pass the WHERE condition would do), but I'm not so sure about\nupdate. Is it sufficient to require both the old and new states of the\nrow to pass the WHERE condition?\n\nSQL92 gives this restriction on WHERE clauses for updatable views:\n\n d) If the <table expression> immediately contained in QS imme-\n diately contains a <where clause> WC, then no leaf generally\n underlying table of QS shall be a generally underlying table\n of any <query expression> contained in WC.\n\nwhich conveys nothing to my mind :-(, except that they're restricting\nsub-SELECTs in WHERE somehow. 
Can anyone translate that into English?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 09 Jul 2001 10:50:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: New SQL Datatype RECURRINGCHAR " }, { "msg_contents": "Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > Zeugswetter Andreas SB wrote:\n> >> The most prominent of the \"interesting uses\" probably beeing when the views\n> >> are part of the authorization system, since views are the only standardized\n> >> mechanism to restrict access at the row level.\n>\n> > True, and often the views can be restricted to insert only data that\n> > will be visible using this view.\n>\n> Right. The interesting question is whether an automatic rule creator\n> could be expected to derive the correct restrictions on\n> insert/update/delete given the WHERE clause of the view. Insert/delete\n> might not be too bad (at first thought, requiring the inserted/deleted\n> rows to pass the WHERE condition would do), but I'm not so sure about\n> update. Is it sufficient to require both the old and new states of the\n> row to pass the WHERE condition?\n\n Yes, no other chance. Remember that the rule on SELECT is\n allways applied to the scan that looks for the rows to\n update, so you'd never have a chance to hit other rows\n through the view.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Mon, 9 Jul 2001 13:09:03 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: AW: New SQL Datatype RECURRINGCHAR" }, { "msg_contents": "Tom Lane writes:\n\n> SQL92 gives this restriction on WHERE clauses for updatable views:\n>\n> d) If the <table expression> immediately contained in QS imme-\n> diately contains a <where clause> WC, then no leaf generally\n> underlying table of QS shall be a generally underlying table\n> of any <query expression> contained in WC.\n>\n> which conveys nothing to my mind :-(, except that they're restricting\n> sub-SELECTs in WHERE somehow. Can anyone translate that into English?\n\nNo table mentioned in the FROM-clause (in PG even implicitly) of the query\nexpression (or view definition) is allowed to be mentioned in a subquery\nin the WHERE clause of the query expression (or view definition).\n\nThe phrasing \"leaf\" and \"generally\" underlying is only to make this\nstatement theoretically pure because you can create generally underlying\ntables that look different but do the same thing (different join syntax),\nwhereas a leaf generally underlying table is guaranteed to be a real base\ntable.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 9 Jul 2001 23:44:27 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: AW: New SQL Datatype RECURRINGCHAR " } ]
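The question raised above — whether requiring both the old and new states of an updated row to pass the view's WHERE condition is sufficient — can be illustrated with a toy predicate check. This is only a model: as noted in the thread, the real rewriter applies the rule to the scan that finds the rows, so rows outside the view are never reached in the first place.

```python
def view_allows_update(predicate, old_row, new_row):
    # Both the row being modified and its proposed replacement must
    # remain visible through the view, or the update is rejected.
    return predicate(old_row) and predicate(new_row)

# Toy view: CREATE VIEW v AS SELECT * FROM t WHERE dept = 'sales'
dept_view = lambda row: row["dept"] == "sales"
```

An update that moves a row out of the view (new state fails the predicate) is the case the check has to catch.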
[ { "msg_contents": "Anyone maintaining generic autoconf scripts for linking against libpq, \ni.e., returns path to libpq-fe.h and proper link options?\n\nTim\n\n-- \nTimothy H. Keitt\nDepartment of Ecology and Evolution\nState University of New York at Stony Brook\nStony Brook, New York 11794 USA\nPhone: 631-632-1101, FAX: 631-632-7626\nhttp://life.bio.sunysb.edu/ee/keitt/\n\n", "msg_date": "Mon, 09 Jul 2001 17:23:22 -0400", "msg_from": "\"Timothy H. Keitt\" <Timothy.Keitt@StonyBrook.Edu>", "msg_from_op": true, "msg_subject": "libpq autoconf scripts?" }, { "msg_contents": "Timothy H. Keitt writes:\n\n> Anyone maintaining generic autoconf scripts for linking against libpq,\n> i.e., returns path to libpq-fe.h and proper link options?\n\npg_config since 7.1\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 11 Jul 2001 16:44:07 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: libpq autoconf scripts?" } ]
[ { "msg_contents": "Ok, i've been told to bring this up on this mailing list, so, I do so:\n\nrather than kill myself re-explaining, i'll just cut&paste my email\ncorrespondence.\n\nI said:\ncan't create timestamp field (only timestamp with time zone)\n\nthe response was:\n\nThose are the same data types.\n\nthen I said:\nxxiii writes:\n\n> Well, please document it as such then, as SQL definately implies that\n> they are not the same (admittedly, i'm using SQL92 and not SQL99,\nwhich I\n> don't have a copy of), as does postgres's documentation, and also the\n> fact that one can create a time field with and without timezone.\n>\n>\nhttp://postgresql.crimelabs.net/users-lounge/docs/7.1/user/datatype-datetime.html\n\n>\n> This is very confusing, as is the fact that pre 7.1 postgres shows\n> \"timestamp\" and 7.1 shows \"timestamp with time zone\", neither version\n> seems to be willing to create the other variant (presumably because\nthey\n> are really the same as far as postgres (but not its documentation) are\n\n> concerned). 
I definately need the \"without time zone\" behaviour and\n> range.\n>\n> If postgres is really using the same type internally to implement both\n\n> behaviours that should be documented, along with how it works.\n>\n> I've just done some additional testing, and find that the timestamp\ntype\n> does appear to support the wider range, and is acting like the\n\"without\n> time zone\" version, in spite of \"\\d table\" saying \"with time zone\".\n> However its output incorrectly, when years exceed 10000.\n>\n> insert into test values('05-05-12080', '05-05-12080 1:1:1-7:00');\n> insert into test values('05-05-12080', '05-05-12080 1:1:1+7:00');\n>\n> select * from test;\n> w | o\n> ---------------------+---------------------\n> 2080-05-05 00:00:00 | 2080-05-05 00:00:00\n> 2080-05-05 00:00:00 | 2080-05-05 08:01:01\n> 12080-05-05 00:0000 | 12080-05-05 08:0101\n> 12080-05-05 00:0000 | 12080-05-04 18:0101\n> (4 rows)\n\nthen I got told to bring it up here.\n", "msg_date": "Mon, 09 Jul 2001 17:26:55 -0600", "msg_from": "Dave Martin <xxiii@cyberdude.com>", "msg_from_op": true, "msg_subject": "timestamp not consistent with documentation or standard" }, { "msg_contents": "Dave Martin <xxiii@cyberdude.com> writes:\n> Ok, i've been told to bring this up on this mailing list, so, I do so:\n> rather than kill myself re-explaining, i'll just cut&paste my email\n> correspondence.\n\nActually, what you should have done was consult the archives of this\nlist. You will find that you have wandered into the no man's land\nof an armed conflict :-(. 
Unless you have some new argument that will\npersuade one camp or the other to concede, it's unlikely that the\nnaming of the timestamp type (there is only one, and no visible interest\nin implementing more) will change soon.\n\n\n>> However its output incorrectly, when years exceed 10000.\n>> \n>> insert into test values('05-05-12080', '05-05-12080 1:1:1-7:00');\n>> insert into test values('05-05-12080', '05-05-12080 1:1:1+7:00');\n>> \n>> select * from test;\n>> w | o\n>> ---------------------+---------------------\n>> 2080-05-05 00:00:00 | 2080-05-05 00:00:00\n>> 2080-05-05 00:00:00 | 2080-05-05 08:01:01\n>> 12080-05-05 00:0000 | 12080-05-05 08:0101\n>> 12080-05-05 00:0000 | 12080-05-04 18:0101\n\nThis is definitely a bug --- looks like EncodeDateTime fails to consider\nthe possibility that the output of sprintf will be longer than \"normal\".\nWill fix.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 09 Jul 2001 20:07:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: timestamp not consistent with documentation or standard " } ]
[ { "msg_contents": "When someone issues this command:\n\nALTER TABLE test ADD UNIQUE (a, b);\n\nWhat happens when:\n\n1. A non-unique index is already defined over (a, b)\n\n\t- Either add new index or promote existing one to unique?\n\n2. A non-unique index is already defined over (b, a)\n\n\t- As above?\n\n3. A primary index is already defined over (a, b)\n\n\t- ERROR: unique already implied by primary?\n\n4. A primary index is already defined over (b, a)\n\n\t- As above?\n\n5. A unique index is already defined over (a, b)\n\n\t- ERROR: unique index already exists over keys?\n\n6. A unique index is already defined over (b, a)\n\n\t- As above. Technically a different index, but effect\n\t as far as uniqueness is concerned is identical?\n\n7. No index exists over (a, b) or (b, a)\n\n\t- Create a new unique index over (a, b)?\n\nChris\n\n\n", "msg_date": "Tue, 10 Jul 2001 09:39:11 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "More ADD CONSTRAINT behaviour questions" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> 6. A unique index is already defined over (b, a)\n\n> \t- As above. Technically a different index, but effect\n> \t as far as uniqueness is concerned is identical?\n\nThis case *must not* be an error IMHO: it's perfectly reasonable to have\nindexes on both (a,b) and (b,a), and if the column pair happens to be\nunique, there's no reason why they shouldn't both be marked unique.\n\nBecause of that, I'm not too excited about raising an error in any case\nexcept where you have an absolutely identical pre-existing index, ie,\nthere's already a unique index on (a,b) --- doesn't matter much whether\nit's marked primary or not.\n\nFor ADD PRIMARY KEY, there mustn't be any pre-existing primary index,\nof course. 
I can see promoting an extant matching unique index to\nprimary status, though, rather than making another index.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 09 Jul 2001 22:31:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: More ADD CONSTRAINT behaviour questions " }, { "msg_contents": "> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > 6. A unique index is already defined over (b, a)\n> \n> > \t- As above. Technically a different index, but effect\n> > \t as far as uniqueness is concerned is identical?\n> \n> This case *must not* be an error IMHO: it's perfectly reasonable to have\n> indexes on both (a,b) and (b,a), and if the column pair happens to be\n> unique, there's no reason why they shouldn't both be marked unique.\n> \n> Because of that, I'm not too excited about raising an error in any case\n> except where you have an absolutely identical pre-existing index, ie,\n> there's already a unique index on (a,b) --- doesn't matter much whether\n> it's marked primary or not.\n> \n> For ADD PRIMARY KEY, there mustn't be any pre-existing primary index,\n> of course. I can see promoting an extant matching unique index to\n> primary status, though, rather than making another index.\n> \n\nYea, I agree with Tom. Usually we let the person do whatever they want\nexcept in cases that clearly make no sense or where we can improve it.\n\nGood questions, though.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 9 Jul 2001 23:32:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: More ADD CONSTRAINT behaviour questions" }, { "msg_contents": "On Tue, 10 Jul 2001, Christopher Kings-Lynne wrote:\n\n> When someone issues this command:\n> \n> ALTER TABLE test ADD UNIQUE (a, b);\n> \n> What happens when:\n> \n> 1. A non-unique index is already defined over (a, b)\n> \n> \t- Either add new index or promote existing one to unique?\n\nWell, either works, but if you promote, you should have a way\nto keep track of the fact you did so, because dropping the\nconstraint shouldn't drop the index then but demote it.\nI'm less sure about what the correct behavior would be for adding\nprimary keys (if you added a primary key on a unique index and\nthen dropped the primary key, do you end up with a normal \nunique at the end?)\n\n> 2. A non-unique index is already defined over (b, a)\n> \n> \t- As above?\nI agree with Tom for 2/4/6, since the indexes are different\nfor planning purposes.\n\n> 3. A primary index is already defined over (a, b)\n> \n> \t- ERROR: unique already implied by primary?\n\nSeems reasonable. 
Maybe errors like:\n ERROR: Primary key <name> already defined on test(a,b)\n ERROR: Unique constraint <name> already defined on test(a,b)\n\n", "msg_date": "Mon, 9 Jul 2001 20:33:49 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: More ADD CONSTRAINT behaviour questions" }, { "msg_contents": "OK, so just to summarize:\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Christopher\n> Kings-Lynne\n> Sent: Tuesday, 10 July 2001 9:39 AM\n> To: Hackers\n> Subject: [HACKERS] More ADD CONSTRAINT behaviour questions\n>\n>\n> When someone issues this command:\n>\n> ALTER TABLE test ADD UNIQUE (a, b);\n>\n> What happens when:\n>\n> 1. A non-unique index is already defined over (a, b)\n>\n> \t- Either add new index or promote existing one to unique?\n\nPromoting is in my too-hard basket, so I will simply add a new unique index?\nToo bad if it slows them down, as they should know better? Should I issue a\nNOTICE warning them that they have overlapping indices?\n\n> 2. A non-unique index is already defined over (b, a)\n>\n> \t- As above?\n\nIrrelevant as (a,b) will be handled independently of (b,a). Basically\nproblem ignored?\n\n> 3. A primary index is already defined over (a, b)\n>\n> \t- ERROR: unique already implied by primary?\n\nDone. Implemented.\n\n> 4. A primary index is already defined over (b, a)\n>\n> \t- As above?\n\nAs per (2).\n\n> 5. A unique index is already defined over (a, b)\n>\n> \t- ERROR: unique index already exists over keys?\n\nDone. Implemented.\n\n> 6. A unique index is already defined over (b, a)\n>\n> \t- As above. Technically a different index, but effect\n> \t as far as uniqueness is concerned is identical?\n\nAs per (2).\n\n> 7. 
No index exists over (a, b) or (b, a)\n>\n> \t- Create a new unique index over (a, b)?\n\nDone.\n\nMy current code does all of the above, plus will auto-generate constraint\nnames, save it only looks at combinations of keys, not permutations - so if\na unique key exists on (a,b), you can't add one over (b,a). I should be\nable to fix this in my next hack session tho. After that I'll check my use\nof locking, then I'll submit a patch.\n\nThe other issue is that I'm not sure how much argument checking I should do\nin my code, and how much I should leave for DefineIndex?\n\nFor example, if you have a table with cols 'a' and 'b' and you go ADD UNIQUE\n(c), you get something like this:\n\nERROR: DefineIndex: Attribute 'c' does not exist.\n\nHowever, this could be slightly odd error for the user of the ALTER\nfunction. But I guess this kind of thing happens all thru the postgres\ncode... Another thing that I let DefineIndex handle is the ADD UNIQUE (a,a)\nkind of thing.\n\nChris\n\n", "msg_date": "Mon, 23 Jul 2001 10:01:36 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "RE: More ADD CONSTRAINT behaviour questions" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n>> 1. A non-unique index is already defined over (a, b)\n>> \n>> - Either add new index or promote existing one to unique?\n\n> Promoting is in my too-hard basket, so I will simply add a new unique index?\n> Too bad if it slows them down, as they should know better? Should I issue a\n> NOTICE warning them that they have overlapping indices?\n\nSeems reasonable. 
I suppose dropping the old index wouldn't be a good\nidea ;-)\n\n> The other issue is that I'm not sure how much argument checking I should do\n> in my code, and how much I should leave for DefineIndex?\n\nI'd say there's no value in expending code space on duplicated error\nchecks --- *unless* you can give a more specific/appropriate error\nmessage than DefineIndex would.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 22 Jul 2001 23:04:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: More ADD CONSTRAINT behaviour questions " } ]
[ { "msg_contents": "Here is a message about finding a target date for Mozilla's 1.0 release.\nI found the article sobering:\n\n\thttp://www.mozillazine.org/articles/article1958.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 9 Jul 2001 22:26:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Mozilla 1.0 release soon?" }, { "msg_contents": "...\n> I found the article sobering:\n...\n\nIt is commonly thought that one should be sober at least part of every\nday, so it isn't entirely clear whether you consider this good or bad.\n\nOh, maybe that isn't what you meant? ;)\n\n - Thomas\n", "msg_date": "Tue, 10 Jul 2001 02:42:52 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Mozilla 1.0 release soon?" }, { "msg_contents": "> ...\n> > I found the article sobering:\n> ...\n> \n> It is commonly thought that one should be sober at least part of every\n> day, so it isn't entirely clear whether you consider this good or bad.\n> \n> Oh, maybe that isn't what you meant? ;)\n\nI read it and thought, \"Wow, that seems like a royal mess.\"\n\nI am sure someone will chime in and say it isn't, and perhaps it isn't. \nIt just sounded that way to me.\n\nI know we have a challenge to put out each release, but their challenges\nseem almost overwhelming to me.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 9 Jul 2001 23:41:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: Mozilla 1.0 release soon?" } ]
[ { "msg_contents": "\nAnyone know why I could possibly get this error? This doesn't happen\ndeterministically.\n\nWaitOnLock: error on wakeup - Aborting this transaction\n\nI also got this notice:\n\nNOTICE: Deadlock detected -- See the lock(l) manual page \n\n---\n\nActually, what I'm looking for in this mail is a possible way for me to\ndeterministically reproduce this by hand, to see if I can create this\nsituation and then look in my code to see where I could possibly be doing\nthe wrong thing. I'm not using anything fancy in my queries, Just foreign\nkey constraints (all initially deferred), Selects, inserts, updates, views,\ntransactions. No explicit lock or select for updates or triggers or notifiys\nor rules.\n\nI'm using Postgres 7.0.3.\n\nBTW, i tried searching the mailing list and turned up nothing interesting. I\ndidn't search super carefully, because the search site is extremely slow.\n\nThanx!\n\n-rchit\n", "msg_date": "Mon, 9 Jul 2001 21:30:38 -0700 ", "msg_from": "Rachit Siamwalla <rachit@ensim.com>", "msg_from_op": true, "msg_subject": "WaitOnLock: error on wakeup" }, { "msg_contents": "Rachit Siamwalla <rachit@ensim.com> writes:\n> Anyone know why I could possibly get this error? This doesn't happen\n> deterministically.\n\nIt wouldn't, because the problem arises from the interaction of multiple\nclients --- AFAIK it is not possible to get that error with only a\nsingle client, no matter what it does.\n\nA good bet is that your code is written to acquire the same locks in\ndifferent orders in different cases. 
Then you can get cases like\n\n\tClient A\t\t\tClient B;\n\n\tbegin;\n\tlock table a;\n\n\t...\t\t\t\tbegin;\n\t...\t\t\t\tlock table b;\n\n\tlock table b;\n\t-- now A is waiting for B\n\n\t...\t\t\t\tlock table a;\n\t\t\t\t\t-- deadlock\n\nB's second lock attempt will be rejected with\n\ntest71=# lock table a;\nERROR: Deadlock detected.\n See the lock(l) manual page for a possible cause.\n\n\n> WaitOnLock: error on wakeup - Aborting this transaction\n> NOTICE: Deadlock detected -- See the lock(l) manual page \n\nApparently you're running an older PG release; that's what the\nerror report used to look like.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jul 2001 10:16:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WaitOnLock: error on wakeup " } ]
[ { "msg_contents": "Hi all,\n\nbecause no one noticed my last question, please forgive me this repost, but I \npromised my boss to get an answer to this questions.\n\nHere they are:\n\nWhen a new table or field is created without quotes, it is assumed to be\ncase-insensitive. Herefore I have some questions:\n\n- Is it SQL-92-conform to handle >\"test\"< like >test< without quotes, or\nshouldn't it be >test< forced to lowercase?\n\n- Oracle returns this no_matter_what-case_it_is-fields with\nuppercase-letters. Is it possible for Postgresql, to imitate this behaviour?\n\n- How is the handling of case-sensitivity handled in the system-catalogs? Is\nther any flag or depends it on the name of the object only?\n\nThank you very much in advance!\n\nKlaus\n\n-- \nVisit WWWdb at\nhttp://wwwdb.org\n", "msg_date": "Tue, 10 Jul 2001 07:01:23 +0200", "msg_from": "Klaus Reger <K.Reger@wwwdb.de>", "msg_from_op": true, "msg_subject": "Repost: Get table/field-identifiers in uppercase" } ]
[ { "msg_contents": "Hi:\nA really simple question: I wanna set up some WAL related config\nparameter. I am wondering is it the same to add those parameter on\npostgresql.conf as to add them on /etc/rc.d/init.d/postgres.init??\n\n", "msg_date": "Tue, 10 Jul 2001 17:14:55 +0800", "msg_from": "Harry Yau <harry@regaltronic.com>", "msg_from_op": true, "msg_subject": "Quick Question!" }, { "msg_contents": "Hi,\n\nI am about to put a 7.1.2 server into production on RedHat 7.1\nThe server will be dedicated to PostgreSQL, running a bare minimum of \nadditional services.\nIf anyone has already tuned the configurable parameters on a dual PIII w/ \n1GB RAM then I\nwill have a great place to start for my performance tuning!\nWhen I'm done I'll be posting my results here for the next first timer that \ncomes along.\n\nThanks in advance,\n\nAdam\n\n", "msg_date": "Tue, 10 Jul 2001 07:44:34 -0400", "msg_from": "Adam Manock <abmanock@planetcable.net>", "msg_from_op": false, "msg_subject": "Performance tuning for linux, 1GB RAM, dual CPU?" }, { "msg_contents": "Am Dienstag, 10. Juli 2001 13:44 schrieb Adam Manock:\n> Hi,\n>\n> I am about to put a 7.1.2 server into production on RedHat 7.1\n> The server will be dedicated to PostgreSQL, running a bare minimum of\n> additional services.\n> If anyone has already tuned the configurable parameters on a dual PIII w/\n> 1GB RAM then I\n> will have a great place to start for my performance tuning!\n\ni am running the same hardware and postgresql 7.1.2\nbut i didnt tuned it because its fast enough for my purpose. But i am very \ninterested in your investigations. 
Could you please pm me, if you have \nsomething of interest?\n\nthanks\njanning\n\n> When I'm done I'll be posting my results here for the next first timer that\n> comes along.\n>\n> Thanks in advance,\n>\n> Adam\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n\n-- \nPlanwerk 6 /websolutions\nHerzogstra���e 86\n40215 D���sseldorf\n\nfon 0211-6015919\nfax 0211-6015917\nhttp://www.planwerk6.de\n", "msg_date": "Tue, 10 Jul 2001 15:21:37 +0200", "msg_from": "Janning Vygen <vygen@planwerk6.de>", "msg_from_op": false, "msg_subject": "Re: Performance tuning for linux, 1GB RAM, dual CPU?" }, { "msg_contents": "On Tue, Jul 10, 2001 at 07:44:34AM -0400, Adam Manock wrote:\n: Hi,\n: \n: I am about to put a 7.1.2 server into production on RedHat 7.1\n: The server will be dedicated to PostgreSQL, running a bare minimum of \n: additional services.\n: If anyone has already tuned the configurable parameters on a dual PIII w/ \n: 1GB RAM then I\n: will have a great place to start for my performance tuning!\n: When I'm done I'll be posting my results here for the next first timer that \n: comes along.\n\nI have a similar system. It's a dual PII-450MHz Xeon with 512MB of RAM\nrunning RH7.1 and 7.1.2. As far as performance tuning goes, here's the\nrelevant lines from the postgresql.conf file we're using:\n\n max_connections = 64 # 1-1024\n sort_mem = 8192\n shared_buffers = 192\n fsync = false\n\nObviously, depending on your needs, you can adjust those. If you've\ngot a 1GB of RAM, I'd set everything high and not worry about it.\n\n* Philip Molter\n* DataFoundry.net\n* http://www.datafoundry.net/\n* philip@datafoundry.net\n", "msg_date": "Tue, 10 Jul 2001 09:06:07 -0500", "msg_from": "Philip Molter <philip@datafoundry.net>", "msg_from_op": false, "msg_subject": "Re: Performance tuning for linux, 1GB RAM, dual CPU?" 
}, { "msg_contents": "On Tue, 10 Jul 2001, Adam Manock wrote:\n\n> Hi,\n> \n> I am about to put a 7.1.2 server into production on RedHat 7.1\n> The server will be dedicated to PostgreSQL, running a bare minimum of \n> additional services.\n> If anyone has already tuned the configurable parameters on a dual PIII w/ \n> 1GB RAM then I\n> will have a great place to start for my performance tuning!\n> When I'm done I'll be posting my results here for the next first timer that \n> comes along.\n\nI've just did a quick read of Bruce's article in Linux Journal. My take\nis this is more than a hardware issue. Table size, indexes (indices?)\nverses sequential scans will come into play. I don't think you can even\nreally test without some relevant data, queries, etc.\n Again this was a very quick read of the article.\n\n\nThanks Bruce. It looks enlightening.\n\nCheers,\nRod\n-- \n Remove the word 'try' from your vocabulary ... \n Don't try. Do it or don't do it ...\n Steers try!\n\n Don Aslett\n\n", "msg_date": "Tue, 10 Jul 2001 13:12:05 -0700 (PDT)", "msg_from": "\"Roderick A. Anderson\" <raanders@tincan.org>", "msg_from_op": false, "msg_subject": "Re: Performance tuning for linux, 1GB RAM, dual CPU?" 
}, { "msg_contents": "Hi all,\n\nJust added these figures to a new document :\n\nhttp://techdocs.postgresql.org/techdocs/perftuningfigures.php\n\nAs people provide/submit figures and details of the equipment they use,\nI'll post them onto this page.\n\nHopefully it will become a good set of real world guidelines for people,\netc.\n\nStandardised benchmarks would be nice to though.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\nPhilip Molter wrote:\n> \n> On Tue, Jul 10, 2001 at 07:44:34AM -0400, Adam Manock wrote:\n> : Hi,\n> :\n> : I am about to put a 7.1.2 server into production on RedHat 7.1\n> : The server will be dedicated to PostgreSQL, running a bare minimum of\n> : additional services.\n> : If anyone has already tuned the configurable parameters on a dual PIII w/\n> : 1GB RAM then I\n> : will have a great place to start for my performance tuning!\n> : When I'm done I'll be posting my results here for the next first timer that\n> : comes along.\n> \n> I have a similar system. It's a dual PII-450MHz Xeon with 512MB of RAM\n> running RH7.1 and 7.1.2. As far as performance tuning goes, here's the\n> relevant lines from the postgresql.conf file we're using:\n> \n> max_connections = 64 # 1-1024\n> sort_mem = 8192\n> shared_buffers = 192\n> fsync = false\n> \n> Obviously, depending on your needs, you can adjust those. If you've\n> got a 1GB of RAM, I'd set everything high and not worry about it.\n> \n> * Philip Molter\n> * DataFoundry.net\n> * http://www.datafoundry.net/\n> * philip@datafoundry.net\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n", "msg_date": "Wed, 11 Jul 2001 13:32:10 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Performance tuning for linux, 1GB RAM, dual CPU?" 
}, { "msg_contents": "At 01:32 PM 7/11/01 +1000, Justin Clift wrote:\n\n > Standardised benchmarks would be nice to though.\n\tSure would be nice....\n\nHere's some preliminary results of my testing...\n\nSystem:\tRedHat Linux 7.1, kernel 2.4.3smp (i686)\n\t\tDual PIII 733 / 133FSB 1024MB RAM\n\t\t2 18Gb 10000rpm U160 SCSI drives, hardware RAID1\n\t\t\n\t\tadded to /etc/sysctl.conf:\n\t\t\tkernel.shmmax = 268435456\n\t\t\tkernel.shmall = 268435456\n\nPG 7.1.2\t\n\npostgresql-7.1.2-4PGDG binary rpm set\n\t\t\n\trelevant entries from postgresql.conf:\n\t\tmax_connections = 32 \t\n\t\tsort_mem = 4096\n\t\tshared_buffers = 32000\n\t\tfsync = true\n\nThe only benchmark I have is almost useless but I'll put it here anyway...\n\ntime ./pg_regress --schedule=parallel_schedule\n(using postmaster on Unix socket, default port)\n...\n...\n======================\n All 76 tests passed.\n======================\n\n\nreal 0m40.784s\nuser 0m1.980s\nsys 0m1.220s\n\n\nAdam.\n\n", "msg_date": "Thu, 12 Jul 2001 20:32:10 -0400", "msg_from": "Adam Manock <abmanock@planetcable.net>", "msg_from_op": false, "msg_subject": "Re: Performance tuning for linux, 1GB RAM, dual CPU?" } ]
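A quick arithmetic check ties the figures in this thread together: with PostgreSQL's default 8 kB block size, Adam's `shared_buffers = 32000` claims roughly 32000 × 8192 bytes of shared memory, which has to fit under his `kernel.shmmax` setting. A minimal sketch (the 8 kB block size is the compile-time default; real shared memory use runs somewhat higher because of per-buffer bookkeeping, so leave headroom):

```sql
-- Approximate shared memory claimed by the buffer cache alone:
SELECT 32000 * 8192 AS approx_shm_bytes;
-- 262144000 bytes, just under the kernel.shmmax = 268435456 set above.
```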
[ { "msg_contents": "\n> When a new table or field is created without quotes, it is assumed to be\n> case-insensitive. Herefore I have some questions:\n> \n> - Is it SQL-92-conform to handle >\"test\"< like >test< without quotes, or\n> shouldn't it be >test< forced to lowercase?\n\nI do not understand this question. If you want case sensitivity, you need\nto quote your identifiers. Unquoted identifiers are case insensitive.\nI do not think the standard states what should happen when you start mixing \nquoted and unquoted identifiers for the same object.\n\n> \n> - Oracle returns this no_matter_what-case_it_is-fields with\n> uppercase-letters. Is it possible for Postgresql, to imitate this behaviour?\n\nNo. PostgreSQL stores them in all lower case (Informix also).\n\n> \n> - How is the handling of case-sensitivity handled in the system-catalogs? Is\n> ther any flag or depends it on the name of the object only?\n\nThe unquoted identifier is converted to all lower case, no flag.\nThe quoted identifier is taken as is. 
\n\nAndreas\n", "msg_date": "Tue, 10 Jul 2001 11:45:06 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Repost: Get table/field-identifiers in uppercase" }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> I do not think the standard states what should happen when you start mixing \n> quoted and unquoted identifiers for the same object.\n\nActually, it does:\n\n 13)A <regular identifier> and a <delimited identifier> are equiva-\n lent if the <identifier body> of the <regular identifier> (with\n every letter that is a lower-case letter replaced by the equiva-\n lent upper-case letter or letters) and the <delimited identifier\n body> of the <delimited identifier> (with all occurrences of\n <quote> replaced by <quote symbol> and all occurrences of <dou-\n blequote symbol> replaced by <double quote>), considered as\n the repetition of a <character string literal> that specifies a\n <character set specification> of SQL_TEXT and an implementation-\n defined collation that is sensitive to case, compare equally\n according to the comparison rules in Subclause 8.2, \"<comparison\n predicate>\".\n\nThe spec expects unquoted identifiers to be made case-insensitive by\nfolding them to upper case. We do it by folding to lower case, instead.\nWhile this isn't 100% standard, it's unlikely to be changed. Too many\napplications would break...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jul 2001 10:34:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Repost: Get table/field-identifiers in uppercase " }, { "msg_contents": "On Tuesday, 10. July 2001 11:45, you wrote:\n> > When a new table or field is created without quotes, it is assumed to be\n> > case-insensitive. 
Herefore I have some questions:\n> >\n> > - Is it SQL-92-conform to handle >\"test\"< like >test< without quotes, or\n> > shouldn't it be >test< forced to lowercase?\n>\n> I do not understand this question. If you want case sensitivity, you need\n> to quote your identifiers. Unquoted identifiers are case insensitive.\n> I do not think the standard states what should happen when you start mixing\n> quoted and unquoted identifiers for the same object.\n\nOK, lets assume I want the field test explicitly in lowercase (and ONLY in \nlowercase). I can see no way to make this with the current implementation.\n\nBy the way in Oracle, we have the same problem, but everything is in \nuppercase.\n\nRegards & thanks\n\nKlaus\n\n-- \nVisit WWWdb at\nhttp://wwwdb.org\n", "msg_date": "Thu, 12 Jul 2001 07:05:23 +0200", "msg_from": "Klaus Reger <K.Reger@wwwdb.org>", "msg_from_op": false, "msg_subject": "Re: Repost: Get table/field-identifiers in uppercase" } ]
[ { "msg_contents": "If you have time, take a quick look at\n\nhttp://acidlab.sourceforge.net/perf/acid_perf.html\n\nPostgreSQL has serious scalability problems with snort + acid. Any\nadvices?\n\n(Now I'm using MySQL with my SNORT/ACID setup, but I'm willing to\nchange to PostgreSQL if more tests are needed)\n\nSergio Bruder\n\n-- \nCoordena��o de Desenvolvimento - Projetos Especiais, Conectiva\nhttp://www.conectiva.com.br, http://sergio.bruder.net, http://pontobr.org\n-----------------------------------------------------------------------------\npub 1024D/0C7D9F49 2000-05-26 Sergio Devojno Bruder <bruder@conectiva.com.br>\n Key fingerprint = 983F DBDF FB53 FE55 87DF 71CA 6B01 5E44 0C7D 9F49\nsub 1024g/138DF93D 2000-05-26\n", "msg_date": "Tue, 10 Jul 2001 09:40:21 -0300", "msg_from": "Sergio Bruder <bruder@conectiva.com.br>", "msg_from_op": true, "msg_subject": "Any tips for this particular performance problem?" }, { "msg_contents": "On Tue, Jul 10, 2001 at 04:04:43PM +0200, Hannu Krosing wrote:\n> Sergio Bruder wrote:\n> > \n> > If you have time, take a quick look at\n> > \n> > http://acidlab.sourceforge.net/perf/acid_perf.html\n> > \n> > PostgreSQL has serious scalability problems with snort + acid. 
Any\n> > advices?\n> \n> Usually porting from MySQL to PostgreSQL needs some rewrite of \n> queries and process logic if good performance is required.\n\nACID is using ADODB SQL interface, thus using the same queries for\nMySQL and PostgreSQL.\n\n> \n> > (Now I'm using MySQL with my SNORT/ACID setup, but I'm willing to\n> > change to PostgreSQL if more tests are needed)\n> \n> Are these tests run using BSDDB backend (for transaction support) or \n> the old/fast MySQL storage ?\n\nDunno (These tests arent executed by me), but probably using the\nold transaction-less format of MySQL.\n\nSergio Bruder\n\n-- \nCoordena��o de Desenvolvimento - Projetos Especiais, Conectiva\nhttp://www.conectiva.com.br, http://sergio.bruder.net, http://pontobr.org\n-----------------------------------------------------------------------------\npub 1024D/0C7D9F49 2000-05-26 Sergio Devojno Bruder <bruder@conectiva.com.br>\n Key fingerprint = 983F DBDF FB53 FE55 87DF 71CA 6B01 5E44 0C7D 9F49\nsub 1024g/138DF93D 2000-05-26\n", "msg_date": "Tue, 10 Jul 2001 11:01:59 -0300", "msg_from": "Sergio Bruder <bruder@conectiva.com.br>", "msg_from_op": true, "msg_subject": "Re: Any tips for this particular performance problem?" }, { "msg_contents": "Sergio Bruder wrote:\n> \n> If you have time, take a quick look at\n> \n> http://acidlab.sourceforge.net/perf/acid_perf.html\n> \n> PostgreSQL has serious scalability problems with snort + acid. 
Any\n> advices?\n> \n> Usually porting from MySQL to PostgreSQL needs some rewrite of \n> queries and process logic if good performance is required.\n\nACID is using ADODB SQL interface, thus using the same queries for\nMySQL and PostgreSQL.\n\n> \n> > (Now I'm using MySQL with my SNORT/ACID setup, but I'm willing to\n> > change to PostgreSQL if more tests are needed)\n> \n> Are these tests run using BSDDB backend (for transaction support) or \n> the old/fast MySQL storage ?\n\nDunno (These tests arent executed by me), but probably using the\nold transaction-less format of MySQL.\n\nSergio Bruder\n\n-- \nCoordenação de Desenvolvimento - Projetos Especiais, Conectiva\nhttp://www.conectiva.com.br, http://sergio.bruder.net, http://pontobr.org\n-----------------------------------------------------------------------------\npub 1024D/0C7D9F49 2000-05-26 Sergio Devojno Bruder <bruder@conectiva.com.br>\n      Key fingerprint = 983F DBDF FB53 FE55 87DF 71CA 6B01 5E44 0C7D 9F49\nsub 1024g/138DF93D 2000-05-26\n", "msg_date": "Tue, 10 Jul 2001 11:01:59 -0300", "msg_from": "Sergio Bruder <bruder@conectiva.com.br>", "msg_from_op": true, "msg_subject": "Re: Any tips for this particular performance problem?" }, { "msg_contents": "Sergio Bruder wrote:\n> \n> If you have time, take a quick look at\n> \n> http://acidlab.sourceforge.net/perf/acid_perf.html\n> \n> PostgreSQL has serious scalability problems with snort + acid. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Tue, 10 Jul 2001 10:54:54 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Any tips for this particular performance problem?" }, { "msg_contents": "On Tue, 10 Jul 2001, Sergio Bruder wrote:\n\n> If you have time, take a quick look at\n> \n> http://acidlab.sourceforge.net/perf/acid_perf.html\n> \n> PostgreSQL has serious scalability problems with snort + acid. Any\n> advices?\n> \n> (Now I'm using MySQL with my SNORT/ACID setup, but I'm willing to\n> change to PostgreSQL if more tests are needed)\n\nIt might be handy to see schema and query examples for the system.\nThere may be obvious things in the queries such that we'll at least\nbe able to tell you why things seem to be slow.\n\n\n", "msg_date": "Tue, 10 Jul 2001 09:46:07 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Any tips for this particular performance problem?" }, { "msg_contents": "On Tue, Jul 10, 2001 at 09:46:07AM -0700, Stephan Szabo wrote:\n> On Tue, 10 Jul 2001, Sergio Bruder wrote:\n> \n> > If you have time, take a quick look at\n> > \n> > http://acidlab.sourceforge.net/perf/acid_perf.html\n> > \n> > PostgreSQL has serious scalability problems with snort + acid. 
Any\n> > advices?\n> > \n> > (Now I'm using MySQL with my SNORT/ACID setup, but I'm willing to\n> > change to PostgreSQL if more tests are needed)\n> \n> It might be handy to see schema and query examples for the system.\n> There may be obvious things in the queries such that we'll at least\n> be able to tell you why things seem to be slow.\n\n\nThe web page says:\n\nHost: Intel Mobile 800Mhz, 256 MB RAM\nOS: Linux 2.2.16-22\nApache: 1.3.19\nPHP: 4.0.5\nMySQL: 3.23.32 (MyISAM tables, Unix socket)\nPostgreSQL: 7.1.2 (Unix socket, fsync disabled, vacuum analyzed between runs)\nDB schema: v102 (indexed as per create_mysql/postgresl in Snort v1.8b5 build 24)\nACID: 0.9.6b10 - 0.9.6b13\n\nAll I can find online are v. 1.7 and 1.8-RELEASE. In 1.7, the mysql script\nhas a lot more indices than the postgresql one. In the 1.8-RELEASE,\nthey both seem to have the same set. If those indices went in between\nb5 and release, there's your problem!\n\nHmm, I've pulled the appropriate file from CVS, now. 
Seems that v102\nhas most the indices, so Stephan's request of example queries is the only\nway we're going to be able to help.\n\nHmm, on third look, I've grovelled through the PHP for ACID 0.9.6b11\n(since that was in the snort CVS) and I see that ACID creates some tables,\nas well, one of which is missing an index that MySQL gets:\n\nMySQL:\n\nCREATE TABLE acid_ag_alert( ag_id INT UNSIGNED NOT NULL,\n ag_sid INT UNSIGNED NOT NULL,\n ag_cid INT UNSIGNED NOT NULL, \n\n PRIMARY KEY (ag_id, ag_sid, ag_cid),\n INDEX (ag_id),\n INDEX (ag_sid),\n INDEX (ag_cid),\n INDEX (ag_sid, ag_cid));\n\nPgsql:\n\nCREATE TABLE acid_ag_alert( ag_id INT8 NOT NULL,\n ag_sid INT4 NOT NULL,\n ag_cid INT8 NOT NULL, \n\n PRIMARY KEY (ag_id, ag_sid, ag_cid) );\n\nCREATE INDEX acid_ag_alert_id_idx ON acid_ag_alert (ag_sid, ag_cid);\n\n\nThis isn't as extreme as it looks, since pgsql knows how to use the\nmulti-key indices in place of some of the single key indices the MySQL\ntable has, so the only one completely missing, from the pgsql point of\nview, is an index on ag_cid alone. From grepping the PHP sources, it\nseems that this this a common join key, so missing that index might hurt.\n\nIf ag_id is used a lot, having only a triplekey isn't the best, since\nthe index entries will be much larger, so fewer will fit in a page.\n\nAs Stephan said, the only way to know for sure what's happening is to see the\nactual queries (and explains on them for the actual test dataset). Turn\non logging, and grab the queries from the postgresql logs, seems the\nway to go.\n\nRoss\n", "msg_date": "Tue, 10 Jul 2001 17:40:43 -0500", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: Any tips for this particular performance problem?" }, { "msg_contents": "\"Ross J. Reedstrom\" wrote:\n> \n> Hmm, I've pulled the appropriate file from CVS, now. 
Seems that v102\n> has most the indices, so Stephan's request of example queries is the only\n> way we're going to be able to help.\n> \n> Hmm, on third look, I've grovelled through the PHP for ACID 0.9.6b11\n> (since that was in the snort CVS) and I see that ACID creates some tables,\n> as well, one of which is missing an index that MySQL gets:\n\nAlso, do they run VACUUM ANALYZE after filling the table ?\n\nPostgreSQL could choose very poor plans without it.\n\n-------------\nHannu\n", "msg_date": "Wed, 11 Jul 2001 14:11:26 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Any tips for this particular performance problem?" } ]
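Putting Ross's missing-index observation and Hannu's VACUUM ANALYZE point together, the fix on the PostgreSQL side would look roughly like this (the table and column names come from the schema quoted in the thread; the index name is invented):

```sql
-- Add the single-column index that the MySQL script creates but the
-- PostgreSQL script omits (ag_cid is a common join key in ACID):
CREATE INDEX acid_ag_alert_cid_idx ON acid_ag_alert (ag_cid);

-- Refresh planner statistics after (re)filling the tables:
VACUUM ANALYZE acid_ag_alert;
```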
[ { "msg_contents": "Hi,\n\nI try postgresql V 7.1.2 under solaris 2.8 ( patch + the last version ) and i use directio implementation\nfor ufs . \nImproved UFS Direct I/O Concurrency (Quick I/O Equivalent) \nSolaris 8 1/01 update release allows concurrent read and write access to regular UFS files. As databases generally pre-allocate files and seldom extend them thereafter, the effects of this enhancement are seen during the normal database operations. The improvement brings I/O-bound database performance on a UFS file system to about 90% of raw partition access speeds. \n\n\n \nWhen you mount an ufs partition,\njust try this command in order to test directio:\nmount -F ufs -o forcedirectio /dev/dsk/XXX /testdb\n\nI try on the same machine 2 databases location :\nOne under partition with directio \nOne under normal ufs partition\n\nI use the same postgresql.conf and with pgbench i obtain\nthis resulats:\n\nPgbench -c 4 -v -t 100 testdb ( directio ufs )\ntps = 13.425330\ntps = 13.626090\n\n\nPgbench -c 4 -v -t 100 testdb ( ufs )\ntps = 30.052012\ntps = 30.630632\n\nIf you interest with directio try this links :\n\nhttp://gecitsolutions.systemnews.com/system-news/jobdir/submitted/2001.03/3076/3076.html\nhttp://www.idg.net/crd_solaris_452714_102.html\n\n\nCheers,\n\nPEJAC Pascal\n", "msg_date": "Tue, 10 Jul 2001 14:52:24 +0200 (CEST)", "msg_from": "<pejac@altern.org>", "msg_from_op": true, "msg_subject": "Tips performance under solaris" }, { "msg_contents": "> Hi,\n> \n> I try postgresql V 7.1.2 under solaris 2.8 ( patch + the last version ) and i use directio implementation\n> for ufs . \n> Improved UFS Direct I/O Concurrency (Quick I/O Equivalent) \n> Solaris 8 1/01 update release allows concurrent read and write access to regular UFS files. As databases generally pre-allocate files and seldom extend them thereafter, the effects of this enhancement are seen during the normal database operations. 
The improvement brings I/O-bound database performance on a UFS file system to about 90% of raw partition access speeds. \n> \n> \n> \n> When you mount an ufs partition,\n> just try this command in order to test directio:\n> mount -F ufs -o forcedirectio /dev/dsk/XXX /testdb\n> \n> I try on the same machine 2 databases location :\n> One under partition with directio \n> One under normal ufs partition\n> \n> I use the same postgresql.conf and with pgbench i obtain\n> this resulats:\n> \n> Pgbench -c 4 -v -t 100 testdb ( directio ufs )\n> tps = 13.425330\n> tps = 13.626090\n> \n> \n> Pgbench -c 4 -v -t 100 testdb ( ufs )\n> tps = 30.052012\n> tps = 30.630632\n> \n> If you interest with directio try this links :\n> \n> http://gecitsolutions.systemnews.com/system-news/jobdir/submitted/2001.03/3076/3076.html\n> http://www.idg.net/crd_solaris_452714_102.html\n\nI looked around and found that directio is:\n\n\tO_DIRECT\n\n If set, all reads and writes on the resulting file descriptor will\n be performed directly to or from the user program buffer, provided\n appropriate size and alignment restrictions are met. Refer to the\n F_SETFL and F_DIOINFO commands in the fcntl(2) manual entry for\n information about how to determine the alignment constraints.\n O_DIRECT is a Silicon Graphics extension and is only supported on\n local EFS file systems.\n\nSo it does I/O directly from the user buffer to disk, bypassing the\nsystem cache. I am not sure if that is a good idea because you are not\nusing the system buffer cache nor is it allowing writes to be re-ordered\nfor optimial performance. 
It does prevent copying the buffer into\nkernel space, which I suppose is the major advantage for that feature.\n\nI see discussion at:\n\n\thttp://groups.google.com/groups?q=solaris+direct+ufs&hl=en&safe=off&rnum=1&ic=1&selm=Dy1sx9.378%40baerlap.north.de\n\nand\n\n\thttp://groups.google.com/groups?q=solaris+direct+ufs&hl=en&safe=off&rnum=2&ic=1&selm=0cosks09u834jipekdh4r9sr8tb17liokj%404ax.com\n\nSpecifically, the users say that sometimes it makes Oracle slower too. \nYou might try increasing the number of PostgreSQL shared buffers and see\nif you can increase that enough so this option is a win.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 10 Jul 2001 10:17:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Tips performance under solaris" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> So it does I/O directly from the user buffer to disk, bypassing the\n> system cache. I am not sure if that is a good idea because you are not\n> using the system buffer cache nor is it allowing writes to be re-ordered\n> for optimial performance.\n\n... and, more than likely, the user program is blocked for the whole\nphysical write operation, not just for long enough to memcpy the data\ninto a kernel buffer. Given that info, I find it completely\nunsurprising that this \"feature\" makes Postgres a lot slower. It seems\nthat Sun's idea of what a database does has little connection to what\nPostgres does.\n\nIt might possibly make sense to set this bit on WAL writes, but not on\nwrites to data files.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jul 2001 10:47:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Tips performance under solaris " } ]
[ { "msg_contents": "\n> > > > > > > Can someone tell me what we use indislossy for?\n> > > >\n> > > > Ok, so the interpretation of this field is:\n> > > > \tA match in the index needs to be reevaluated in the heap tuple data,\n> > > > \tsince a match in the index does not necessarily mean, that the heap tuple\n> > > > \tmatches.\n> > > > \tIf the heap tuple data matches, the index must always match.\n> > \n> > AFAIK, this is true for all indexes in PostgreSQL, because index rows\n> > don't store the transactions status. Of course those are two different\n> > underlying reasons why a heap lookup is always necessary, but there\n> > shouldn't be any functional difference in the current implementation.\n> \n> Seems it is something they added for the index abstraction and not for\n> practical use by PostgreSQL.\n\nWhy, you do not need to call the comparison function on the heap data\nif the index is not lossy, saves some CPU cycles.\n\nAndreas\n", "msg_date": "Tue, 10 Jul 2001 17:06:12 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: pg_index.indislossy" }, { "msg_contents": "> \n> > > > > > > > Can someone tell me what we use indislossy for?\n> > > > >\n> > > > > Ok, so the interpretation of this field is:\n> > > > > \tA match in the index needs to be reevaluated in the heap tuple data,\n> > > > > \tsince a match in the index does not necessarily mean, that the heap tuple\n> > > > > \tmatches.\n> > > > > \tIf the heap tuple data matches, the index must always match.\n> > > \n> > > AFAIK, this is true for all indexes in PostgreSQL, because index rows\n> > > don't store the transactions status. 
Of course those are two different\n> > > underlying reasons why a heap lookup is always necessary, but there\n> > > shouldn't be any functional difference in the current implementation.\n> > \n> > Seems it is something they added for the index abstraction and not for\n> > practical use by PostgreSQL.\n> \n> Why, you do not need to call the comparison function on the heap data\n> if the index is not lossy, saves some CPU cycles.\n\nBecause we don't know of the tuples expired status until we check the\nheap.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 10 Jul 2001 13:04:16 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: AW: pg_index.indislossy" } ]
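For reference, the flag under discussion is visible in the system catalogs; a sketch against the 7.x-era catalog layout (the indislossy column is version-specific and not guaranteed to exist in later releases):

```sql
-- Which indexes are marked lossy, i.e. an index match must be rechecked
-- against the heap tuple's actual data?
SELECT c.relname AS index_name, i.indislossy
FROM pg_index i, pg_class c
WHERE c.oid = i.indexrelid
ORDER BY c.relname;
```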
[ { "msg_contents": "I have 'select ... from cursor foo, tables ...' working, and halfway done\nwith functions-as-result sets.\n\nA few questions to gurus:\n\na) Currently, I have a shift-reduce conflict, because cursor is a valid\nTokenId. It can be resolved by removing it from list of TokenId, but if\nsomeone was naming one of their columns or tables \"CURSOR\", this will bite\nthem.\n\nSQL92 says that 'CURSOR' is a reserved word. Is it OK to remove it from\nTokenId list?\n\nOn other hand, its fairly simple to allow syntax 'select * from foo' and\ntreat foo as a cursor IF there is such a cursor, but I'm not sure if its\nthe right thing, it may be counterintuitive when your cursor disappears?\n\nb) Next issue is ReScan of a cursor. To have consistent results, cursor\nmust be set to the beginning of underlying query every time. On other\nhand, it may be surprising to do\n\ndeclare foo cursor for select * from test; \nfetch 5 from foo; \nselect * from cursor foo;\n\nand have cursor rewind back to the beginning of 'test' table. \n\nAdvice?\n\nc) Here's how I'm implementing a function-as-table-source, i.e.:\nselect * from func(args)\n\nQuery plan is transformed into multiple queries: \n\nfirst, 'open portal' query for \"SELECT func(args)\" (however, with\ntupledesc having all the fields function returns (if it returns a tuple),\nnot one field).\n\nsecond, original query itself, but referring to portal instead of\nfunction,\n\nthird, 'close portal' query.\n\nNow, small questions:\n\n1) Currently, there's a calling convention for functions-that-return-sets:\n* fn_extra in fcinfo is set to null on first call, and can be set by\n function to distinguish a next call from first call\n* ResultSetInfo must be set to 'done' by function when its done returning\n values.\n\nI think it all makes sense so far. \n\nHowever, what should be the calling convention for returning a record from\na function? 
Currently, (correct me if I'm wrong), a function returning a\nrecord (well, only a SQL function is capable of doing that meaningfully),\nwill return a TupleTableSlot *. \n\nPrevious suggestion from Jan Wieck was to make functions that return\nrecords put them into a portal, and return refcursor of that portal, but I\ndon't see utility of that. \n\nIs it a bad style for functions to create slots and return TupleTableSlot\n*?\n\nThanks for all advice\n-alex\n\n\n\n\n\n", "msg_date": "Tue, 10 Jul 2001 14:27:20 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "selecting from cursor/function" } ]
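For readers following the thread, the existing cursor commands this work builds on look like this (the `test` table is from Alex's own example; the commented-out line is the proposed syntax being implemented here, not something already released):

```sql
BEGIN;
DECLARE foo CURSOR FOR SELECT * FROM test;
FETCH 5 FROM foo;            -- cursor is now positioned after row 5

-- Proposed in this thread (question b): should this rewind foo or not?
-- SELECT * FROM cursor foo;

CLOSE foo;
COMMIT;
```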
[ { "msg_contents": "\tIs it possible to trick pg/sql to allow passing of the NEW record\ninto a function? I've got a trigger that gets kicked off before an insert\nand I need to call another function and pass that record in, but doing a\nperform activate_event(NEW); /* my function is activate_event(OPAQUE) */\n\nonly spits out \"ERROR: NEW used in non-rule query\". Is there a way to trick\nit into passing the NEW record into my function?\n\nThanks for any pointers,\nMike\n\n", "msg_date": "Tue, 10 Jul 2001 14:22:34 -0700", "msg_from": "Mike Cianflone <mcianflone@littlefeet-inc.com>", "msg_from_op": true, "msg_subject": "way to pass NEW into function" }, { "msg_contents": "What I did in a similar trigger was set a variable (of type RECORD) to \nNEW and then use that. \n\n(I actually used the appropriate fields, but record should... work)\n\nLER\n\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 7/10/01, 4:22:34 PM, Mike Cianflone <mcianflone@littlefeet-inc.com> \nwrote regarding [HACKERS] way to pass NEW into function:\n\n\n> Is it possible to trick pg/sql to allow passing of the NEW record\n> into a function? I've got a trigger that gets kicked off before an insert\n> and I need to call another function and pass that record in, but doing a\n> perform activate_event(NEW); /* my function is activate_event(OPAQUE) */\n\n> only spits out \"ERROR: NEW used in non-rule query\". Is there a way to \ntrick\n> it into passing the NEW record into my function?\n\n> Thanks for any pointers,\n> Mike\n\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n", "msg_date": "Tue, 10 Jul 2001 21:38:38 GMT", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: way to pass NEW into function" } ]
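A sketch of the workaround Larry describes — assigning NEW to a local RECORD variable and passing individual fields — might look like this in PL/pgSQL. The table, the trigger, and the two-argument variant of `activate_event` are all hypothetical; the original `activate_event(OPAQUE)` cannot be invoked this way:

```sql
CREATE FUNCTION events_bi_trig() RETURNS opaque AS '
DECLARE
    r RECORD;
BEGIN
    r := NEW;                              -- copy the row being inserted
    PERFORM activate_event(r.id, r.name);  -- pass fields, not the record
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER events_bi BEFORE INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE events_bi_trig();
```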
[ { "msg_contents": "\nIs there any good reason to use VARCHAR over TEXT for a string field? ie.\nperformance hits, etc.\n\nOther than running into the row size limit problem, are there any large\nstorage / performance penalties of using TEXT for virtually all strings?\n\nFor ex. A phone number. This field probably wouldn't be bigger that 40\ncharacters, but I can use TEXT and be sure that nothing gets truncated. Same\nwith a \"name\" field.\n\n-rchit\n", "msg_date": "Tue, 10 Jul 2001 14:37:22 -0700", "msg_from": "Rachit Siamwalla <rachit@ensim.com>", "msg_from_op": true, "msg_subject": "varchar vs. text" }, { "msg_contents": "Rachit Siamwalla <rachit@ensim.com> writes:\n> Is there any good reason to use VARCHAR over TEXT for a string field?\n\nThe only reason to use VARCHAR is if you *want* the data to be truncated\nat a specific length. If you don't have a well-defined upper limit in\nmind, I'd recommend TEXT.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jul 2001 19:37:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: varchar vs. text " }, { "msg_contents": "Rachit Siamwalla wrote:\n>\n> Is there any good reason to use VARCHAR over TEXT for a string field? ie.\n> performance hits, etc.\n>\n> Other than running into the row size limit problem, are there any large\n> storage / performance penalties of using TEXT for virtually all strings?\n\n Er - what kind of \"row size limit\"? I remember vaguely that\n there was something the like in ancient releases, but forgot\n the specific restrictions.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 11 Jul 2001 09:56:27 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: varchar vs. text" }, { "msg_contents": "On Wed, Jul 11, 2001 at 09:56:27AM -0400, Jan Wieck wrote:\n> Rachit Siamwalla wrote:\n> >\n> > Is there any good reason to use VARCHAR over TEXT for a string field? ie.\n> > performance hits, etc.\n> >\n> > Other than running into the row size limit problem, are there any large\n> > storage / performance penalties of using TEXT for virtually all strings?\n> \n> Er - what kind of \"row size limit\"? I remember vaguely that\n> there was something the like in ancient releases, but forgot\n> the specific restrictions.\n\n<FX: Sound of Jan whistling, looking around innocently>\n\nVery good Jan. Yes, PostgreSQL certainly develops on Internet time, and\nwhile TOAST may seem ancient news to you, it was only in the 7.1 release\n(2001-04-13). Three months is a little early to start the 'Problem? What\nproblem?' campaign. Especially since some of the client libs (OBDC)\njust caught up, last week. :-)\n\nWhat Jan is so innocently not saying is described here:\n\nhttp://www.ca.postgresql.org/projects/devel-toast.html\n\nJan not only solved the 'row size limit', he did it in a more general\nway, solving lots of the follow on problems that come from putting large\nfields into a table. Details at the above URL.\n\nRoss\n", "msg_date": "Wed, 11 Jul 2001 09:56:29 -0500", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: varchar vs. 
text" }, { "msg_contents": "Still can't index those large toasted items -- not that I want to.\nOne interesting aspect is versioning of text documents where you want\nthem to be UNIQUE in regards to book development otherwise you have\nthe same document with 2 or more entries (more than a single version\nnumber). Poor example; I know.\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n----- Original Message -----\nFrom: \"Ross J. Reedstrom\" <reedstrm@rice.edu>\nTo: \"Jan Wieck\" <JanWieck@Yahoo.com>\nCc: \"Rachit Siamwalla\" <rachit@ensim.com>;\n<pgsql-hackers@postgresql.org>\nSent: Wednesday, July 11, 2001 10:56 AM\nSubject: Re: [HACKERS] varchar vs. text\n\n\n> On Wed, Jul 11, 2001 at 09:56:27AM -0400, Jan Wieck wrote:\n> > Rachit Siamwalla wrote:\n> > >\n> > > Is there any good reason to use VARCHAR over TEXT for a string\nfield? ie.\n> > > performance hits, etc.\n> > >\n> > > Other than running into the row size limit problem, are there\nany large\n> > > storage / performance penalties of using TEXT for virtually all\nstrings?\n> >\n> > Er - what kind of \"row size limit\"? I remember vaguely that\n> > there was something the like in ancient releases, but forgot\n> > the specific restrictions.\n>\n> <FX: Sound of Jan whistling, looking around innocently>\n>\n> Very good Jan. Yes, PostgreSQL certainly develops on Internet time,\nand\n> while TOAST may seem ancient news to you, it was only in the 7.1\nrelease\n> (2001-04-13). Three months is a little early to start the 'Problem?\nWhat\n> problem?' campaign. Especially since some of the client libs (OBDC)\n> just caught up, last week. 
:-)\n>\n> What Jan is so innocently not saying is described here:\n>\n> http://www.ca.postgresql.org/projects/devel-toast.html\n>\n> Jan not only solved the 'row size limit', he did it in a more\ngeneral\n> way, solving lots of the follow on problems that come from putting\nlarge\n> fields into a table. Details at the above URL.\n>\n> Ross\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n", "msg_date": "Wed, 11 Jul 2001 11:14:32 -0400", "msg_from": "\"Rod Taylor\" <rbt@barchord.com>", "msg_from_op": false, "msg_subject": "Re: varchar vs. text" }, { "msg_contents": "Ross J. Reedstrom wrote:\n> On Wed, Jul 11, 2001 at 09:56:27AM -0400, Jan Wieck wrote:\n> > Rachit Siamwalla wrote:\n> > >\n> > > Is there any good reason to use VARCHAR over TEXT for a string field? ie.\n> > > performance hits, etc.\n> > >\n> > > Other than running into the row size limit problem, are there any large\n> > > storage / performance penalties of using TEXT for virtually all strings?\n> > \n> > Er - what kind of \"row size limit\"? I remember vaguely that\n> > there was something the like in ancient releases, but forgot\n> > the specific restrictions.\n> \n> <FX: Sound of Jan whistling, looking around innocently>\n> \n> Very good Jan. Yes, PostgreSQL certainly develops on Internet time, and\n> while TOAST may seem ancient news to you, it was only in the 7.1 release\n> (2001-04-13). Three months is a little early to start the 'Problem? What\n> problem?' campaign. Especially since some of the client libs (ODBC)\n> just caught up, last week. :-)\n\n You're absolutely right, including the whistling ;-)\n\n I just couldn't resist, was too tempting. 
And since I was sure\n there'll be more informative responses either way, why not?\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 11 Jul 2001 12:17:12 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: varchar vs. text" } ]
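A minimal sketch of the distinction Tom describes above, with invented table and column names (note that while VARCHAR(n) silently truncated over-length input in the releases discussed here, later releases raise an error instead unless the value is cast explicitly):

```sql
CREATE TABLE contacts (
    phone varchar(40),  -- declared upper limit: input is cut off at 40 chars
    notes text          -- no declared limit; TOAST (7.1+) stores long values
);

-- With varchar, anything past the limit is lost (truncated in this era,
-- rejected with an error in later releases):
INSERT INTO contacts (phone)
    VALUES ('555-0100 ext. 42, ask for the night desk clerk');

-- With text, the whole string is kept, however long it is:
INSERT INTO contacts (notes)
    VALUES ('no length limit to worry about here');
```

In short: declare VARCHAR(n) only when a hard upper bound is genuinely part of the data's definition; otherwise TEXT avoids surprise data loss.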
[ { "msg_contents": "Hi,\n\nWe (SRA) have done the translation of PostgreSQL 7.1 docs into\nJapanese. They can be freely available at\nhttp://osb.sra.co.jp/PostgreSQL/Manual/.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 11 Jul 2001 13:15:35 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "docs Japanese translation" }, { "msg_contents": "> > We (SRA) have done the translation of PostgreSQL 7.1 docs into\n> > Japanese. They can be freely available at\n> > http://osb.sra.co.jp/PostgreSQL/Manual/.\n> \n> How long did that take, and with howmany people? :)\n\nIt took 4 months. Two of our employees worked for that plus several\npeople did some checkings. It was definitely lots of effort.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 12 Jul 2001 10:58:18 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: docs Japanese translation" } ]
[ { "msg_contents": "Hello.\n\nI was just curious if you guys would accept a feature which would allow\nfor the generation of non-standard messages for the violation of index,\ncheck, and referential integrity constraints. I understand that Peter\nE's proposal regarding error messages would allow clients to determine\nin greater detail the cause of an elog(). However, I think it might be\nof value to implement something which would allow the user to override\nthe default message sent by the backend. An example situation would be\nlike this:\n\nCREATE TABLE employees (\nemployeeid integer not null,\nssnumber text not null\n);\n\nCREATE UNIQUE INDEX i_employees on employees(ssnumber);\n\nMESSAGE ON INDEX i_employees IS \n'An employee with a matching Social Security number already exists';\n\nThen, when the UNIQUE constraint of the index is violated, instead of\nthe message:\n\n'Cannot insert a duplicate key into a unique index i_test1'\n\nthe client application would receive:\n\n'An employee with a matching Social Security number already exists'\n\nThe benefit to a feature like this is that each client application\ndoesn't need to handle the generation of the appropriate error messages\nthemselves, but instead can rely on the database to do so. In fact, it\nwouldn't be too hard to have a SET command to set the client language\n(much like CLIENT_ENCODING) that would return the message appropriate\nfor the language of the client. \n\nAnother example:\n\nCREATE TABLE cars (\nmodel integer not null,\nmake integer not null,\ncolor text not null,\nconstraint check_color check (color = 'Red' or color = 'Blue')\n);\n\nMESSAGE ON CONSTRAINT check_color IS\n'Only Red or Blue cars are valid. Please refer to page 12 of the User''s\nGuide';\n\nOf course, it's quite probable that all of this belongs in each of the\nclients, but it seems trivial to do, much like pg_description and\nCOMMENT ON. 
This is obviously an informal suggestion to determine if the\nidea should be rejected out-of-hand.\n\nMike Mascari\nmascarm@mascari.com\n", "msg_date": "Wed, 11 Jul 2001 07:20:22 -0400", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": true, "msg_subject": "Possible feature?" }, { "msg_contents": "Mike Mascari writes:\n\n> MESSAGE ON INDEX i_employees IS\n> 'An employee with a matching Social Security number already exists';\n>\n> Then, when the UNIQUE constraint of the index is violated, instead of\n> the message:\n>\n> 'Cannot insert a duplicate key into a unique index i_test1'\n>\n> the client application would receive:\n>\n> 'An employee with a matching Social Security number already exists'\n\nI think what you're after is\n\nTRY\n BEGIN\n INSERT ...\n END\nCATCH SQLCODE 12345 -- made up\n BEGIN\n RAISE 'your message here'\n END\n\nI'm positive people would kill for that kind of feature.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 11 Jul 2001 17:28:58 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Possible feature?" }, { "msg_contents": "On Wednesday, 11. July 2001 17:28, you wrote:\n> Mike Mascari writes:\n> > MESSAGE ON INDEX i_employees IS\n> > 'An employee with a matching Social Security number already exists';\n> >\n> > Then, when the UNIQUE constraint of the index is violated, instead of\n> > the message:\n> >\n> > 'Cannot insert a duplicate key into a unique index i_test1'\n> >\n> > the client application would receive:\n> >\n> > 'An employee with a matching Social Security number already exists'\n>\n> I think what you're after is\n>\n> TRY\n> BEGIN\n> INSERT ...\n> END\n> CATCH SQLCODE 12345 -- made up\n> BEGIN\n> RAISE 'your message here'\n> END\n>\n> I'm positive people would kill for that kind of feature.\n\nThen we should use this syntax (like Oracle does):\n\nBEGIN\n INSERT ....\n\nEXCEPTION WHEN .... 
THEN\n RAISE 'your message here'\nEND\n\n\nRegards, \nKlaus\n\n-- \nVisit WWWdb at\nhttp://wwwdb.org\n\n", "msg_date": "Thu, 12 Jul 2001 07:05:56 +0200", "msg_from": "Klaus Reger <K.Reger@wwwdb.de>", "msg_from_op": false, "msg_subject": "Re: Possible feature?" } ]
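The TRY/CATCH and Oracle-style EXCEPTION syntax sketched in this thread did eventually appear in PL/pgSQL (from PostgreSQL 8.0 onward) as block-level exception clauses. A hedged sketch of the pattern, reusing the employees example from the original proposal (the function name and wrapper are illustrative, not part of any actual proposal here):

```sql
CREATE FUNCTION add_employee(p_id integer, p_ssn text) RETURNS void AS $$
BEGIN
    INSERT INTO employees (employeeid, ssnumber) VALUES (p_id, p_ssn);
EXCEPTION
    WHEN unique_violation THEN
        -- Trap the duplicate-key error and re-raise an application-level message
        RAISE EXCEPTION
            'An employee with a matching Social Security number already exists';
END;
$$ LANGUAGE plpgsql;
```

This keeps the friendly message in the database, as the original proposal wanted, without new DDL syntax.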
[ { "msg_contents": "\n> Then, when the UNIQUE constraint of the index is violated, instead of\n> the message:\n> \n> 'Cannot insert a duplicate key into a unique index i_test1'\n> \n> the client application would receive:\n> \n> 'An employee with a matching Social Security number already exists'\n\nI would only allow this text to be output in addition to the standard\ntext. Else confusion would imho be too great for the unwary admin.\n\nThus following would be returned:\nERROR 03005 'Cannot insert a duplicate key into a unique index i_test1'\nDESCRIPTION 'An employee with a matching Social Security number already exists'\n\nOn the other hand, what hinders you from using a \"speaking\" name for the \nconstraint ?\n\npostgres=# create table aa (id int, constraint \"for Social Security number\" unique (id));\nNOTICE: CREATE TABLE/UNIQUE will create implicit index 'for Social Security number' for table 'aa'\nCREATE\npostgres=# insert into aa values (1);\nINSERT 23741 1\npostgres=# insert into aa values (1);\nERROR: Cannot insert a duplicate key into unique index for Social Security number\npostgres=# :-O\n\nAndreas\n", "msg_date": "Wed, 11 Jul 2001 14:25:08 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Possible feature?" 
}, { "msg_contents": "Zeugswetter Andreas SB wrote:\n> \n> On the other hand, what hinders you from using a \"speaking\" name for the\n> constraint ?\n> \n> postgres=# create table aa (id int, constraint \"for Social Security number\" unique (id));\n> NOTICE: CREATE TABLE/UNIQUE will create implicit index 'for Social Security number' for table 'aa'\n> CREATE\n> postgres=# insert into aa values (1);\n> INSERT 23741 1\n> postgres=# insert into aa values (1);\n> ERROR: Cannot insert a duplicate key into unique index for Social Security number\n\nI might want the message to be in some other language ...\n\nI might even want the language to depend on CURRENT_USER.\n\n-------------\nHannu\n", "msg_date": "Wed, 11 Jul 2001 14:58:59 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: AW: Possible feature?" } ]
[ { "msg_contents": "Has it ever been considered to (optionally) use the iconv interface for\ncharacter set conversion instead of rolling our own? It seems to be a lot\nmore flexible, has pluggable conversion modules (depending on the\nimplementation), supports more character sets. It seems to be available\non quite a few systems, too.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 11 Jul 2001 20:15:25 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "iconv?" }, { "msg_contents": "> Has it ever been considered to (optionally) use the iconv interface for\n> character set conversion instead of rolling our own? It seems to be a lot\n> more flexible, has pluggable conversion modules (depending on the\n> implementation), supports more character sets. It seems to be available\n> on quite a few systems, too.\n\nI have not checked iconv seriously since it's not very portable among\nour supported platforms.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 12 Jul 2001 12:04:25 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: iconv?" }, { "msg_contents": "On Thu, Jul 12, 2001 at 12:04:25PM +0900, Tatsuo Ishii wrote:\n> > Has it ever been considered to (optionally) use the iconv interface for\n> > character set conversion instead of rolling our own? It seems to be a lot\n> > more flexible, has pluggable conversion modules (depending on the\n> > implementation), supports more character sets. It seems to be available\n> > on quite a few systems, too.\n> \n> I have not checked iconv seriously since it's not very portable among\n> our supported platforms.\n\nJust FYI, in the mutt readme:\n\n- Mutt needs an implementation of the iconv API for character set\n conversions. 
A free one can be found under the following URL:\n\n http://clisp.cons.org/~haible/packages-libiconv.html\n\n\nCheers,\n\nPatrick\n", "msg_date": "Fri, 13 Jul 2001 17:25:08 +0100", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": false, "msg_subject": "Re: iconv?" }, { "msg_contents": "Tatsuo Ishii writes:\n\n> I have not checked iconv seriously since it's not very portable among\n> our supported platforms.\n\nAllow me to bring you up to date:\n\naix\t\tyes\nbeos\nbsdi\ndarwin\nfreebsd\t\tin ports\nhpux\t\tyes\nirix5\t\tyes\nlinux\t\tyes\nnetbsd\t\tin ports\nopenbsd\t\tin ports\nosf\t\tyes\nqnx4\nsco\t\tyes\nsolaris\t\tyes\nsunos4\nunixware\tyes\nwin\n\nIn addition there's a free portable libiconv library available.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 13 Jul 2001 21:18:47 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: iconv?" }, { "msg_contents": "> Tatsuo Ishii writes:\n> \n> > I have not checked iconv seriously since it's not very portable among\n> > our supported platforms.\n> \n> Allow me to bring you up to date:\n> \n> aix\t\tyes\n> beos\n> bsdi\n\nCount bsdi as yes. I have it working here. Compiled fine.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 13 Jul 2001 16:20:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: iconv?" 
}, { "msg_contents": "> Tatsuo Ishii writes:\n> \n> > I have not checked iconv seriously since it's not very portable among\n> > our supported platforms.\n> \n> Allow me to bring you up to date:\n> \n> aix\t\tyes\n> beos\n> bsdi\n> darwin\n> freebsd\t\tin ports\n> hpux\t\tyes\n> irix5\t\tyes\n> linux\t\tyes\n> netbsd\t\tin ports\n> openbsd\t\tin ports\n> osf\t\tyes\n> qnx4\n> sco\t\tyes\n> solaris\t\tyes\n> sunos4\n> unixware\tyes\n> win\n\nOK. Can people run iconv --list to see what kind of encodings are\nsupported on each platform?\n\nMy iconv (verison 2.1.3 on Linux) shows followings:\n\n 437, 500, 500V1, 850, 851, 852, 855, 857, 860, 861, 862, 863, 864, 865, 866,\n 869, 874, 904, 1026, 1047, 8859_1, 8859_2, 8859_3, 8859_4, 8859_5, 8859_6,\n 8859_7, 8859_8, 8859_9, 10646-1:1993, 10646-1:1993/UCS4, ANSI_X3.4-1968,\n ANSI_X3.4-1986, ANSI_X3.4, ANSI_X3.110-1983, ANSI_X3.110, ARABIC, ARABIC7,\n ASCII, ASMO-708, ASMO_449, BALTIC, BIG-5, BIG-FIVE, BIG5, BIGFIVE, BS_4730,\n CA, CN-BIG5, CN-GB, CN, CP-AR, CP-GR, CP-HU, CP037, CP038, CP273, CP274,\n CP275, CP278, CP280, CP281, CP284, CP285, CP290, CP297, CP367, CP420, CP423,\n CP424, CP437, CP500, CP737, CP775, CP819, CP850, CP851, CP852, CP855, CP857,\n CP860, CP861, CP862, CP863, CP864, CP865, CP866, CP868, CP869, CP870, CP871,\n CP874, CP875, CP880, CP891, CP903, CP904, CP905, CP918, CP932, CP949, CP1004,\n CP1026, CP1047, CP1250, CP1251, CP1252, CP1253, CP1254, CP1255, CP1256,\n CP1257, CP1258, CP1361, CPIBM861, CSA7-1, CSA7-2, CSASCII, CSA_T500-1983,\n CSA_T500, CSA_Z243.4-1985-1, CSA_Z243.4-1985-2, CSDECMCS, CSEBCDICATDE,\n CSEBCDICATDEA, CSEBCDICCAFR, CSEBCDICDKNO, CSEBCDICDKNOA, CSEBCDICES,\n CSEBCDICESA, CSEBCDICESS, CSEBCDICFISE, CSEBCDICFISEA, CSEBCDICFR,\n CSEBCDICIT, CSEBCDICPT, CSEBCDICUK, CSEBCDICUS, CSEUCKR, CSEUCPKDFMTJAPANESE,\n CSGB2312, CSHPROMAN8, CSIBM037, CSIBM038, CSIBM273, CSIBM274, CSIBM275,\n CSIBM277, CSIBM278, CSIBM280, CSIBM281, CSIBM284, CSIBM285, CSIBM290,\n CSIBM297, CSIBM420, CSIBM423, 
CSIBM424, CSIBM599, CSIBM851, CSIBM855,\n CSIBM857, CSIBM860, CSIBM863, CSIBM864, CSIBM865, CSIBM866, CSIBM868,\n CSIBM869, CSIBM870, CSIBM871, CSIBM880, CSIBM891, CSIBM903, CSIBM904,\n CSIBM905, CSIBM918, CSIBM1026, CSISO4UNITEDKINGDOM, CSISO10SWEDISH,\n CSISO11SWEDISHFORNAMES, CSISO14JISC6220RO, CSISO15ITALIAN, CSISO16PORTUGESE,\n CSISO17SPANISH, CSISO18GREEK7OLD, CSISO19LATINGREEK, CSISO21GERMAN,\n CSISO25FRENCH, CSISO27LATINGREEK1, CSISO49INIS, CSISO50INIS8,\n CSISO51INISCYRILLIC, CSISO58GB1988, CSISO60DANISHNORWEGIAN,\n CSISO60NORWEGIAN1, CSISO61NORWEGIAN2, CSISO69FRENCH, CSISO84PORTUGUESE2,\n CSISO85SPANISH2, CSISO86HUNGARIAN, CSISO88GREEK7, CSISO89ASMO449, CSISO90,\n CSISO92JISC62991984B, CSISO99NAPLPS, CSISO103T618BIT, CSISO111ECMACYRILLIC,\n CSISO121CANADIAN1, CSISO122CANADIAN2, CSISO139CSN369103, CSISO141JUSIB1002,\n CSISO143IECP271, CSISO150, CSISO150GREEKCCITT, CSISO151CUBA,\n CSISO153GOST1976874, CSISO646DANISH, CSISO2022JP, CSISO2022JP2, CSISO2022KR,\n CSISO2033, CSISO5427CYRILLIC, CSISO5427CYRILLIC1981, CSISO5428GREEK,\n CSISO10367BOX, CSISOLATIN1, CSISOLATIN2, CSISOLATIN3, CSISOLATIN4,\n CSISOLATIN5, CSISOLATIN6, CSISOLATINARABIC, CSISOLATINCYRILLIC,\n CSISOLATINGREEK, CSISOLATINHEBREW, CSKOI8R, CSKSC5636, CSMACINTOSH,\n CSNATSDANO, CSNATSSEFI, CSN_369103, CSPC8CODEPAGE437, CSPC775BALTIC,\n CSPC850MULTILINGUAL, CSPC862LATINHEBREW, CSPCP852, CSSHIFTJIS, CUBA, CWI-2,\n CWI, CYRILLIC, DE, DEC-MCS, DEC, DIN_66003, DK, DS2089, DS_2089, E13B,\n EBCDIC-AT-DE-A, EBCDIC-AT-DE, EBCDIC-BE, EBCDIC-BR, EBCDIC-CA-FR,\n EBCDIC-CP-AR1, EBCDIC-CP-AR2, EBCDIC-CP-BE, EBCDIC-CP-CA, EBCDIC-CP-CH,\n EBCDIC-CP-DK, EBCDIC-CP-ES, EBCDIC-CP-FI, EBCDIC-CP-FR, EBCDIC-CP-GB,\n EBCDIC-CP-GR, EBCDIC-CP-HE, EBCDIC-CP-IS, EBCDIC-CP-IT, EBCDIC-CP-NL,\n EBCDIC-CP-NO, EBCDIC-CP-ROECE, EBCDIC-CP-SE, EBCDIC-CP-TR, EBCDIC-CP-US,\n EBCDIC-CP-WT, EBCDIC-CP-YU, EBCDIC-CYRILLIC, EBCDIC-DK-NO-A, EBCDIC-DK-NO,\n EBCDIC-ES-A, EBCDIC-ES-S, EBCDIC-ES, EBCDIC-FI-SE-A, EBCDIC-FI-SE, EBCDIC-FR,\n 
EBCDIC-GREEK, EBCDIC-INT, EBCDIC-INT1, EBCDIC-IS-FRISS, EBCDIC-IT,\n EBCDIC-JP-E, EBCDIC-JP-KANA, EBCDIC-PT, EBCDIC-UK, EBCDIC-US, ECMA-114,\n ECMA-118, ECMA-CYRILLIC, ELOT_928, ES, ES2, EUC-CN, EUC-JP, EUC-KR, EUC-TW,\n EUCCN, EUCJP, EUCKR, EUCTW, FI, FR, GB, GB2312, GB_1988-80, GOST_19768-74,\n GOST_19768, GREEK-CCITT, GREEK, GREEK7-OLD, GREEK7, GREEK8, HEBREW,\n HP-ROMAN8, HU, IBM037, IBM038, IBM256, IBM273, IBM274, IBM275, IBM277,\n IBM278, IBM280, IBM281, IBM284, IBM285, IBM290, IBM297, IBM367, IBM420,\n IBM423, IBM424, IBM437, IBM500, IBM775, IBM819, IBM850, IBM851, IBM852,\n IBM855, IBM857, IBM860, IBM861, IBM862, IBM863, IBM864, IBM865, IBM866,\n IBM868, IBM869, IBM870, IBM871, IBM874, IBM875, IBM880, IBM891, IBM903,\n IBM904, IBM905, IBM918, IBM1004, IBM1026, IBM1047, IEC_P27-1, INIS-8,\n INIS-CYRILLIC, INIS, ISIRI-3342, ISO-2022-JP-2, ISO-2022-JP, ISO-2022-KR,\n ISO-8859-1, ISO-8859-2, ISO-8859-3, ISO-8859-4, ISO-8859-5, ISO-8859-6,\n ISO-8859-7, ISO-8859-8, ISO-8859-9, ISO-8859-10, ISO-8859-11, ISO-8859-13,\n ISO-8859-14, ISO-8859-15, ISO-10646, ISO-10646/UCS2, ISO-10646/UCS4,\n ISO-10646/UTF-?8, ISO-10646/UTF8, ISO-IR-4, ISO-IR-6, ISO-IR-8-1, ISO-IR-9-1,\n ISO-IR-10, ISO-IR-11, ISO-IR-14, ISO-IR-15, ISO-IR-16, ISO-IR-17, ISO-IR-18,\n ISO-IR-19, ISO-IR-21, ISO-IR-25, ISO-IR-27, ISO-IR-37, ISO-IR-49, ISO-IR-50,\n ISO-IR-51, ISO-IR-54, ISO-IR-55, ISO-IR-57, ISO-IR-60, ISO-IR-61, ISO-IR-69,\n ISO-IR-84, ISO-IR-85, ISO-IR-86, ISO-IR-88, ISO-IR-89, ISO-IR-90, ISO-IR-92,\n ISO-IR-98, ISO-IR-99, ISO-IR-100, ISO-IR-101, ISO-IR-103, ISO-IR-109,\n ISO-IR-110, ISO-IR-111, ISO-IR-121, ISO-IR-122, ISO-IR-126, ISO-IR-127,\n ISO-IR-138, ISO-IR-139, ISO-IR-141, ISO-IR-143, ISO-IR-144, ISO-IR-148,\n ISO-IR-150, ISO-IR-151, ISO-IR-153, ISO-IR-155, ISO-IR-156, ISO-IR-157,\n ISO-IR-166, ISO-IR-179, ISO-IR-193, ISO-IR-197, ISO-IR-199, ISO-IR-203,\n ISO646-CA, ISO646-CA2, ISO646-CN, ISO646-CU, ISO646-DE, ISO646-DK, ISO646-ES,\n ISO646-ES2, ISO646-FI, ISO646-FR, ISO646-FR1, 
ISO646-GB, ISO646-HU,\n ISO646-IT, ISO646-JP-OCR-B, ISO646-JP, ISO646-KR, ISO646-NO, ISO646-NO2,\n ISO646-PT, ISO646-PT2, ISO646-SE, ISO646-SE2, ISO646-US, ISO646-YU, ISO6937,\n ISO_646.IRV:1991, ISO_2033-1983, ISO_2033, ISO_5427-EXT, ISO_5427,\n ISO_5427:1981, ISO_5428, ISO_5428:1980, ISO_6937-2, ISO_6937-2:1983,\n ISO_6937, ISO_6937:1992, ISO_8859-1, ISO_8859-1:1987, ISO_8859-2,\n ISO_8859-2:1987, ISO_8859-3, ISO_8859-3:1988, ISO_8859-4, ISO_8859-4:1988,\n ISO_8859-5, ISO_8859-5:1988, ISO_8859-6, ISO_8859-6:1987, ISO_8859-7,\n ISO_8859-7:1987, ISO_8859-8, ISO_8859-8:1988, ISO_8859-9, ISO_8859-9:1989,\n ISO_8859-10, ISO_8859-10:1992, ISO_8859-14:1998, ISO_8859-15:1998, ISO_9036,\n ISO_10367-BOX, IT, JIS_C6220-1969-RO, JIS_C6229-1984-B, JOHAB, JP-OCR-B, JP,\n JS, JUS_I.B1.002, KOI-7, KOI-8, KOI8-R, KOI8-U, KSC5636, L1, L2, L3, L4, L5,\n L6, L7, L8, LATIN-GREEK-1, LATIN-GREEK, LATIN1, LATIN2, LATIN3, LATIN4,\n LATIN5, LATIN6, LATIN7, LATIN8, MAC-IS, MAC-UK, MAC, MACINTOSH, MS-ANSI,\n MS-ARAB, MS-CYRL, MS-EE, MS-GREEK, MS-HEBR, MS-TURK, MSCP949, MSCP1361,\n MSZ_7795.3, MS_KANJI, NAPLPS, NATS-DANO, NATS-SEFI, NC_NC00-10,\n NC_NC00-10:81, NF_Z_62-010, NF_Z_62-010_(1973), NF_Z_62-010_1973, NO, NO2,\n NS_4551-1, NS_4551-2, OS2LATIN1, OSF00010001, OSF00010002, OSF00010003,\n OSF00010004, OSF00010005, OSF00010006, OSF00010007, OSF00010008, OSF00010009,\n OSF0001000A, OSF00010020, OSF00010100, OSF00010101, OSF00010102, OSF00010104,\n OSF00010105, OSF00010106, OSF00030010, OSF0004000A, OSF0005000A, OSF05010001,\n OSF100201A4, OSF100201A8, OSF100201B5, OSF100201F4, OSF100203B5, OSF1002011C,\n OSF1002011D, OSF1002035D, OSF1002035E, OSF1002035F, OSF1002036B, OSF1002037B,\n OSF10010001, OSF10020025, OSF10020111, OSF10020115, OSF10020116, OSF10020118,\n OSF10020122, OSF10020129, OSF10020352, OSF10020354, OSF10020357, OSF10020359,\n OSF10020360, OSF10020364, OSF10020365, OSF10020366, OSF10020367, OSF10020370,\n OSF10020387, OSF10020388, OSF10020396, OSF10020402, OSF10020417, PT, 
PT2, R8,\n ROMAN8, SE, SE2, SEN_850200_B, SEN_850200_C, SHIFT-JIS, SHIFT_JIS, SJIS,\n SS636127, ST_SEV_358-88, T.61-8BIT, T.61, TIS-620, TIS620-0, TIS620.2529-1,\n TIS620.2533-0, TIS620, UCS-2, UCS-4, UCS2, UCS4, UHC, UJIS, UK, UNICODE,\n UNICODEBIG, UNICODELITTLE, US-ASCII, US, UTF-8, UTF-16, UTF8, UTF16,\n WIN-SAMI-2, WINBALTRIM, WS2, YU\n", "msg_date": "Sat, 14 Jul 2001 12:22:37 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: iconv?" }, { "msg_contents": "* Tatsuo Ishii <t-ishii@sra.co.jp> [010713 22:29]:\n> > Tatsuo Ishii writes:\n> > \n> > > I have not checked iconv seriously since it's not very portable among\n> > > our supported platforms.\n> > \n> > Allow me to bring you up to date:\n> > \n> > aix\t\tyes\n> > beos\n> > bsdi\n> > darwin\n> > freebsd\t\tin ports\n> > hpux\t\tyes\n> > irix5\t\tyes\n> > linux\t\tyes\n> > netbsd\t\tin ports\n> > openbsd\t\tin ports\n> > osf\t\tyes\n> > qnx4\n> > sco\t\tyes\n> > solaris\t\tyes\n> > sunos4\n> > unixware\tyes\n> > win\n> \n> OK. 
Can people run iconv --list to see what kind of encodings are\n> supported on each platform?\n> \n> My iconv (verison 2.1.3 on Linux) shows followings:\n> \n> 437, 500, 500V1, 850, 851, 852, 855, 857, 860, 861, 862, 863, 864, 865, 866,\n> 869, 874, 904, 1026, 1047, 8859_1, 8859_2, 8859_3, 8859_4, 8859_5, 8859_6,\n> 8859_7, 8859_8, 8859_9, 10646-1:1993, 10646-1:1993/UCS4, ANSI_X3.4-1968,\n> ANSI_X3.4-1986, ANSI_X3.4, ANSI_X3.110-1983, ANSI_X3.110, ARABIC, ARABIC7,\n> ASCII, ASMO-708, ASMO_449, BALTIC, BIG-5, BIG-FIVE, BIG5, BIGFIVE, BS_4730,\n> CA, CN-BIG5, CN-GB, CN, CP-AR, CP-GR, CP-HU, CP037, CP038, CP273, CP274,\n> CP275, CP278, CP280, CP281, CP284, CP285, CP290, CP297, CP367, CP420, CP423,\n> CP424, CP437, CP500, CP737, CP775, CP819, CP850, CP851, CP852, CP855, CP857,\n> CP860, CP861, CP862, CP863, CP864, CP865, CP866, CP868, CP869, CP870, CP871,\n> CP874, CP875, CP880, CP891, CP903, CP904, CP905, CP918, CP932, CP949, CP1004,\n> CP1026, CP1047, CP1250, CP1251, CP1252, CP1253, CP1254, CP1255, CP1256,\n> CP1257, CP1258, CP1361, CPIBM861, CSA7-1, CSA7-2, CSASCII, CSA_T500-1983,\n> CSA_T500, CSA_Z243.4-1985-1, CSA_Z243.4-1985-2, CSDECMCS, CSEBCDICATDE,\n> CSEBCDICATDEA, CSEBCDICCAFR, CSEBCDICDKNO, CSEBCDICDKNOA, CSEBCDICES,\n> CSEBCDICESA, CSEBCDICESS, CSEBCDICFISE, CSEBCDICFISEA, CSEBCDICFR,\n> CSEBCDICIT, CSEBCDICPT, CSEBCDICUK, CSEBCDICUS, CSEUCKR, CSEUCPKDFMTJAPANESE,\n> CSGB2312, CSHPROMAN8, CSIBM037, CSIBM038, CSIBM273, CSIBM274, CSIBM275,\n> CSIBM277, CSIBM278, CSIBM280, CSIBM281, CSIBM284, CSIBM285, CSIBM290,\n> CSIBM297, CSIBM420, CSIBM423, CSIBM424, CSIBM599, CSIBM851, CSIBM855,\n> CSIBM857, CSIBM860, CSIBM863, CSIBM864, CSIBM865, CSIBM866, CSIBM868,\n> CSIBM869, CSIBM870, CSIBM871, CSIBM880, CSIBM891, CSIBM903, CSIBM904,\n> CSIBM905, CSIBM918, CSIBM1026, CSISO4UNITEDKINGDOM, CSISO10SWEDISH,\n> CSISO11SWEDISHFORNAMES, CSISO14JISC6220RO, CSISO15ITALIAN, CSISO16PORTUGESE,\n> CSISO17SPANISH, CSISO18GREEK7OLD, CSISO19LATINGREEK, CSISO21GERMAN,\n> 
CSISO25FRENCH, CSISO27LATINGREEK1, CSISO49INIS, CSISO50INIS8,\n> CSISO51INISCYRILLIC, CSISO58GB1988, CSISO60DANISHNORWEGIAN,\n> CSISO60NORWEGIAN1, CSISO61NORWEGIAN2, CSISO69FRENCH, CSISO84PORTUGUESE2,\n> CSISO85SPANISH2, CSISO86HUNGARIAN, CSISO88GREEK7, CSISO89ASMO449, CSISO90,\n> CSISO92JISC62991984B, CSISO99NAPLPS, CSISO103T618BIT, CSISO111ECMACYRILLIC,\n> CSISO121CANADIAN1, CSISO122CANADIAN2, CSISO139CSN369103, CSISO141JUSIB1002,\n> CSISO143IECP271, CSISO150, CSISO150GREEKCCITT, CSISO151CUBA,\n> CSISO153GOST1976874, CSISO646DANISH, CSISO2022JP, CSISO2022JP2, CSISO2022KR,\n> CSISO2033, CSISO5427CYRILLIC, CSISO5427CYRILLIC1981, CSISO5428GREEK,\n> CSISO10367BOX, CSISOLATIN1, CSISOLATIN2, CSISOLATIN3, CSISOLATIN4,\n> CSISOLATIN5, CSISOLATIN6, CSISOLATINARABIC, CSISOLATINCYRILLIC,\n> CSISOLATINGREEK, CSISOLATINHEBREW, CSKOI8R, CSKSC5636, CSMACINTOSH,\n> CSNATSDANO, CSNATSSEFI, CSN_369103, CSPC8CODEPAGE437, CSPC775BALTIC,\n> CSPC850MULTILINGUAL, CSPC862LATINHEBREW, CSPCP852, CSSHIFTJIS, CUBA, CWI-2,\n> CWI, CYRILLIC, DE, DEC-MCS, DEC, DIN_66003, DK, DS2089, DS_2089, E13B,\n> EBCDIC-AT-DE-A, EBCDIC-AT-DE, EBCDIC-BE, EBCDIC-BR, EBCDIC-CA-FR,\n> EBCDIC-CP-AR1, EBCDIC-CP-AR2, EBCDIC-CP-BE, EBCDIC-CP-CA, EBCDIC-CP-CH,\n> EBCDIC-CP-DK, EBCDIC-CP-ES, EBCDIC-CP-FI, EBCDIC-CP-FR, EBCDIC-CP-GB,\n> EBCDIC-CP-GR, EBCDIC-CP-HE, EBCDIC-CP-IS, EBCDIC-CP-IT, EBCDIC-CP-NL,\n> EBCDIC-CP-NO, EBCDIC-CP-ROECE, EBCDIC-CP-SE, EBCDIC-CP-TR, EBCDIC-CP-US,\n> EBCDIC-CP-WT, EBCDIC-CP-YU, EBCDIC-CYRILLIC, EBCDIC-DK-NO-A, EBCDIC-DK-NO,\n> EBCDIC-ES-A, EBCDIC-ES-S, EBCDIC-ES, EBCDIC-FI-SE-A, EBCDIC-FI-SE, EBCDIC-FR,\n> EBCDIC-GREEK, EBCDIC-INT, EBCDIC-INT1, EBCDIC-IS-FRISS, EBCDIC-IT,\n> EBCDIC-JP-E, EBCDIC-JP-KANA, EBCDIC-PT, EBCDIC-UK, EBCDIC-US, ECMA-114,\n> ECMA-118, ECMA-CYRILLIC, ELOT_928, ES, ES2, EUC-CN, EUC-JP, EUC-KR, EUC-TW,\n> EUCCN, EUCJP, EUCKR, EUCTW, FI, FR, GB, GB2312, GB_1988-80, GOST_19768-74,\n> GOST_19768, GREEK-CCITT, GREEK, GREEK7-OLD, GREEK7, GREEK8, HEBREW,\n> HP-ROMAN8, 
HU, IBM037, IBM038, IBM256, IBM273, IBM274, IBM275, IBM277,\n> IBM278, IBM280, IBM281, IBM284, IBM285, IBM290, IBM297, IBM367, IBM420,\n> IBM423, IBM424, IBM437, IBM500, IBM775, IBM819, IBM850, IBM851, IBM852,\n> IBM855, IBM857, IBM860, IBM861, IBM862, IBM863, IBM864, IBM865, IBM866,\n> IBM868, IBM869, IBM870, IBM871, IBM874, IBM875, IBM880, IBM891, IBM903,\n> IBM904, IBM905, IBM918, IBM1004, IBM1026, IBM1047, IEC_P27-1, INIS-8,\n> INIS-CYRILLIC, INIS, ISIRI-3342, ISO-2022-JP-2, ISO-2022-JP, ISO-2022-KR,\n> ISO-8859-1, ISO-8859-2, ISO-8859-3, ISO-8859-4, ISO-8859-5, ISO-8859-6,\n> ISO-8859-7, ISO-8859-8, ISO-8859-9, ISO-8859-10, ISO-8859-11, ISO-8859-13,\n> ISO-8859-14, ISO-8859-15, ISO-10646, ISO-10646/UCS2, ISO-10646/UCS4,\n> ISO-10646/UTF-?8, ISO-10646/UTF8, ISO-IR-4, ISO-IR-6, ISO-IR-8-1, ISO-IR-9-1,\n> ISO-IR-10, ISO-IR-11, ISO-IR-14, ISO-IR-15, ISO-IR-16, ISO-IR-17, ISO-IR-18,\n> ISO-IR-19, ISO-IR-21, ISO-IR-25, ISO-IR-27, ISO-IR-37, ISO-IR-49, ISO-IR-50,\n> ISO-IR-51, ISO-IR-54, ISO-IR-55, ISO-IR-57, ISO-IR-60, ISO-IR-61, ISO-IR-69,\n> ISO-IR-84, ISO-IR-85, ISO-IR-86, ISO-IR-88, ISO-IR-89, ISO-IR-90, ISO-IR-92,\n> ISO-IR-98, ISO-IR-99, ISO-IR-100, ISO-IR-101, ISO-IR-103, ISO-IR-109,\n> ISO-IR-110, ISO-IR-111, ISO-IR-121, ISO-IR-122, ISO-IR-126, ISO-IR-127,\n> ISO-IR-138, ISO-IR-139, ISO-IR-141, ISO-IR-143, ISO-IR-144, ISO-IR-148,\n> ISO-IR-150, ISO-IR-151, ISO-IR-153, ISO-IR-155, ISO-IR-156, ISO-IR-157,\n> ISO-IR-166, ISO-IR-179, ISO-IR-193, ISO-IR-197, ISO-IR-199, ISO-IR-203,\n> ISO646-CA, ISO646-CA2, ISO646-CN, ISO646-CU, ISO646-DE, ISO646-DK, ISO646-ES,\n> ISO646-ES2, ISO646-FI, ISO646-FR, ISO646-FR1, ISO646-GB, ISO646-HU,\n> ISO646-IT, ISO646-JP-OCR-B, ISO646-JP, ISO646-KR, ISO646-NO, ISO646-NO2,\n> ISO646-PT, ISO646-PT2, ISO646-SE, ISO646-SE2, ISO646-US, ISO646-YU, ISO6937,\n> ISO_646.IRV:1991, ISO_2033-1983, ISO_2033, ISO_5427-EXT, ISO_5427,\n> ISO_5427:1981, ISO_5428, ISO_5428:1980, ISO_6937-2, ISO_6937-2:1983,\n> ISO_6937, ISO_6937:1992, ISO_8859-1, 
ISO_8859-1:1987, ISO_8859-2,\n> ISO_8859-2:1987, ISO_8859-3, ISO_8859-3:1988, ISO_8859-4, ISO_8859-4:1988,\n> ISO_8859-5, ISO_8859-5:1988, ISO_8859-6, ISO_8859-6:1987, ISO_8859-7,\n> ISO_8859-7:1987, ISO_8859-8, ISO_8859-8:1988, ISO_8859-9, ISO_8859-9:1989,\n> ISO_8859-10, ISO_8859-10:1992, ISO_8859-14:1998, ISO_8859-15:1998, ISO_9036,\n> ISO_10367-BOX, IT, JIS_C6220-1969-RO, JIS_C6229-1984-B, JOHAB, JP-OCR-B, JP,\n> JS, JUS_I.B1.002, KOI-7, KOI-8, KOI8-R, KOI8-U, KSC5636, L1, L2, L3, L4, L5,\n> L6, L7, L8, LATIN-GREEK-1, LATIN-GREEK, LATIN1, LATIN2, LATIN3, LATIN4,\n> LATIN5, LATIN6, LATIN7, LATIN8, MAC-IS, MAC-UK, MAC, MACINTOSH, MS-ANSI,\n> MS-ARAB, MS-CYRL, MS-EE, MS-GREEK, MS-HEBR, MS-TURK, MSCP949, MSCP1361,\n> MSZ_7795.3, MS_KANJI, NAPLPS, NATS-DANO, NATS-SEFI, NC_NC00-10,\n> NC_NC00-10:81, NF_Z_62-010, NF_Z_62-010_(1973), NF_Z_62-010_1973, NO, NO2,\n> NS_4551-1, NS_4551-2, OS2LATIN1, OSF00010001, OSF00010002, OSF00010003,\n> OSF00010004, OSF00010005, OSF00010006, OSF00010007, OSF00010008, OSF00010009,\n> OSF0001000A, OSF00010020, OSF00010100, OSF00010101, OSF00010102, OSF00010104,\n> OSF00010105, OSF00010106, OSF00030010, OSF0004000A, OSF0005000A, OSF05010001,\n> OSF100201A4, OSF100201A8, OSF100201B5, OSF100201F4, OSF100203B5, OSF1002011C,\n> OSF1002011D, OSF1002035D, OSF1002035E, OSF1002035F, OSF1002036B, OSF1002037B,\n> OSF10010001, OSF10020025, OSF10020111, OSF10020115, OSF10020116, OSF10020118,\n> OSF10020122, OSF10020129, OSF10020352, OSF10020354, OSF10020357, OSF10020359,\n> OSF10020360, OSF10020364, OSF10020365, OSF10020366, OSF10020367, OSF10020370,\n> OSF10020387, OSF10020388, OSF10020396, OSF10020402, OSF10020417, PT, PT2, R8,\n> ROMAN8, SE, SE2, SEN_850200_B, SEN_850200_C, SHIFT-JIS, SHIFT_JIS, SJIS,\n> SS636127, ST_SEV_358-88, T.61-8BIT, T.61, TIS-620, TIS620-0, TIS620.2529-1,\n> TIS620.2533-0, TIS620, UCS-2, UCS-4, UCS2, UCS4, UHC, UJIS, UK, UNICODE,\n> UNICODEBIG, UNICODELITTLE, US-ASCII, US, UTF-8, UTF-16, UTF8, UTF16,\n> WIN-SAMI-2, 
WINBALTRIM, WS2, YU\n> \nHere is UnixWare:\n$ ls /usr/lib/iconv\n113.88595.b 857.88599.p 88591.646DK.p 88591.vt220.e\n113.88595.d 860.646.b 88591.646ES.b 88592.852.b\n113.88595.e 860.646.d 88591.646ES.d 88592.852.d\n1252.88591.b 860.646.e 88591.646ES.e 88592.852.e\n1252.88591.d 860.88591.b 88591.646ES.p 88592.852.p\n1252.88591.e 860.88591.d 88591.646FR.b 88592.cpz\n1252.88591.p 860.88591.e 88591.646FR.d 88595.113.b\n1254.88599.b 860.cpz 88591.646FR.e 88595.113.d\n1254.88599.d 860.dk_pc 88591.646FR.p 88595.113.e\n1254.88599.e 863.646.b 88591.646GB.b 88595.113.p\n1254.88599.p 863.646.d 88591.646GB.d 88595.866.b\n437.646.b 863.646.e 88591.646GB.e 88595.866.d\n437.646.d 863.88591.b 88591.646GB.p 88595.866.e\n437.646.e 863.88591.d 88591.646IT.b 88595.866.p\n437.88591.b 863.88591.e 88591.646IT.d 88597.737.b\n437.88591.d 863.cpz 88591.646IT.e 88597.737.d\n437.88591.e 863.dk_pc 88591.646IT.p 88597.737.e\n437.88591.p 865.646.b 88591.646PT.b 88597.737.p\n437.cpz 865.646.d 88591.646PT.d 88597.869.b\n437.dk_pc 865.646.e 88591.646PT.e 88597.869.d\n437.ebcdic 865.88591.b 88591.646PT.p 88597.869.e\n437.ibm_ebcdic 865.88591.d 88591.646SE.b 88597.869.p\n646DE.88591.d 865.88591.e 88591.646SE.d 88599.1254.b\n646DK.88591.d 865.cpz 88591.646SE.e 88599.1254.d\n646ES.88591.d 865.dk_pc 88591.646SE.p 88599.1254.e\n646FR.88591.d 866.88595.b 88591.6937.d 88599.1254.p\n646GB.88591.d 866.88595.d 88591.850.b 88599.857.b\n646IT.88591.d 866.88595.e 88591.850.d 88599.857.d\n646PT.88591.d 869.88597.b 88591.850.e 88599.857.e\n646SE.88591.d 869.88597.d 88591.850.p 88599.857.p\n6937.88591.d 869.88597.e 88591.860.b asc.88591.b\n737.88597.b 869.88597.p 88591.860.d asc.88591.d\n737.88597.d 88591.1252.b 88591.860.e asc.88591.e\n737.88597.e 88591.1252.d 88591.860.p asc.88591.p\n737.88597.p 88591.1252.e 88591.863.b asc.ebcdic\n850.646.b 88591.1252.p 88591.863.d asc.ibm_ebcdic\n850.646.d 88591.437.b 88591.863.e codesets\n850.646.e 88591.437.d 88591.863.p dos.unicode.so\n850.88591.b 88591.437.e 88591.865.b 
ebcdic.437\n850.88591.d 88591.437.p 88591.865.d ebcdic.asc\n850.88591.e 88591.646.b 88591.865.e eucJP.NWsjis.so\n850.88591.p 88591.646.d 88591.865.p eucJP.sjis.so\n850.cpz 88591.646.e 88591.asc.b iconv_data\n850.dk_pc 88591.646.p 88591.asc.d kmods\n852.88592.b 88591.646DE.b 88591.asc.e unicode.88591.so\n852.88592.d 88591.646DE.d 88591.asc.p unicode.eucJP.so\n852.88592.e 88591.646DE.e 88591.cpz unicode.sjis.so\n852.88592.p 88591.646DE.p 88591.dk_pc unicode.utf.so\n857.88599.b 88591.646DK.b 88591.roman8.d vt220.88591.b\n857.88599.d 88591.646DK.d 88591.vt220.b vt220.88591.d\n857.88599.e 88591.646DK.e 88591.vt220.d vt220.88591.e\n$ uname -a\nUnixWare lerami 5 7.1.1 i386 x86at SCO UNIX_SVR5\n$ \n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 15 Jul 2001 17:45:21 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: iconv?" }, { "msg_contents": "> Here is UnixWare:\n[snip]\n\nHum. I'm not sure what each file represents, but it looks like no\nAsian language is supported except Japanese on UnixWare.\n--\nTatsuo Ishii\n", "msg_date": "Mon, 16 Jul 2001 10:02:16 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: iconv?" }, { "msg_contents": "* Tatsuo Ishii <t-ishii@sra.co.jp> [010715 20:02]:\n> > Here is UnixWare:\n> [snip]\n> \n> Hum. 
I'm not sure what each file represents, but it looks like no\n> Asian language is supported except Japanese on UnixWare.\nMay need to load something from the CD....\n\nLER\n\n> --\n> Tatsuo Ishii\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 15 Jul 2001 20:13:48 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: iconv?" } ]
[ { "msg_contents": "\nhello all.\nIs there a way to configure the postmaster\nto log in a file all connections and queries to each database?\n\nthanks...\n", "msg_date": "11 Jul 2001 18:28:17 -0000", "msg_from": "\"gabriel\" <gabriel@workingnetsp.com.br>", "msg_from_op": true, "msg_subject": "LOG PgSql .." }, { "msg_contents": "gabriel writes:\n\n> Is there a way to configure the postmaster\n> to log in a file all connections and queries to each database?\n\nhttp://www.de.postgresql.org/users-lounge/docs/7.1/postgres/runtime-config.html#LOGGING\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 11 Jul 2001 21:06:29 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: LOG PgSql .." } ]
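Expanding on the documentation link Peter gives: a hedged sketch of the relevant postgresql.conf settings. The names follow the 7.1-era runtime-config page being linked (later releases replaced `debug_print_query` with `log_statement`, so check the LOGGING section for your version), and they apply to the whole postmaster rather than to a single database:

```ini
# postgresql.conf -- logging-related settings (7.1-era names; illustrative)
log_connections = on     # one log line per accepted connection
log_timestamp = on       # prefix log lines with a timestamp
debug_print_query = on   # log the text of every query
syslog = 2               # 2 = send server log output only to syslog
```

Restart or signal the postmaster after editing the file so the new settings take effect.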
[ { "msg_contents": "I must say, I am having more trouble keeping up with the email traffic. \nIt is taking days just to catch up to current emails because some of the\nemails need special attention. I am going as fast as I can. I think I\nhave one more week of emails to go through before I am current. \nHopefully today...\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 11 Jul 2001 16:20:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "I can't keep up" }, { "msg_contents": "> I must say, I am having more trouble keeping up with the email traffic. \n> It is taking days just to catch up to current emails because some of the\n> emails need special attention. I am going as fast as I can. I think I\n> have one more week of emails to go through before I am current. \n> Hopefully today...\n\nI deal with one email and 10 more show up! I know I am not helping by\nsending out emails myself. I guess this is how we progress so quickly.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 11 Jul 2001 16:34:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: I can't keep up" } ]