[
{
"msg_contents": "[snip]\n> Very likely, it is only my limited understanding not really grasping \n> what it is that you are trying to do. Even so, I don't think it\nreally\n> helps even for read only queries, unless it is exactly the same query\n> with the same parameter markers and everything that was issued before.\n> That is very unusual. Normally, you won't have the entire query hard-\n> wired, but with allow the customer to do some sort of filtering of the\n> data.\n\nHmmm... the more I think about it, the more unusual it would be for\n_exactly_ the same query to be repeated a lot. However, the article\nreported a significant performance gain when this feature was enabled.\nThat could mean that:\n\n (a) the performance measurements/benchmarks used by the article were\nsynthetic and don't reflect real database applications\n\n (b) the feature MySQL implements is different than the one I am\ndescribing\n\nWhen I get a chance I'll investigate further the technique used by MySQL\nto see if (b) is the case. However, it is beginning to look like this\nisn't a good idea, overall.\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\nI did not read the article at all, but I am familiar with query cache \nand in fact, I do it a lot (I work for a database company). Here is \nhow the algorithm works:\n\nYou intercept every incoming query and parse it. Any physical data \ngets replaced with parameter markers. A 64 bit hash is formed from the\nparsed query with the parameter markers removed. The hash is used as\nan index into a skiplist which also stores the original query. After \nall, if a client has a million dollar request, he won't be happy that\nthe unbelievably rare thing happened and the checksums agreed.\n\nYou can add a counter to the data in the skiplist so that you know how\noften the query happens. The parsed query will only be useful to a \nsystem that can save time from having a query prepared (most systems\ncall it preparing the query). 
I was kind of surprised to see that \nPostgreSQL does not have a prepare stage in libpq. This can be a\nvery large speedup in query execution (for obvious reasons).\n<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<\n",
"msg_date": "Tue, 26 Feb 2002 16:39:56 -0800",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: eWeek Poll: Which database is most critical to your"
},
{
"msg_contents": "Here is the documentation about MySQL's new Query Cache. I think that it\nwould be helpful, as they indicate, for dynamic web sites, such as Slashdot.\nThere are hundreds or maybe thousands of queries in between added comments\nand there are probably only a few common combinations of\nthreshold/nesting/sort.\n\nMySQL Query Cache\n=================\n\n From version 4.0.1, `MySQL server' features a `Query Cache'. When in\nuse, the query cache stores the text of a `SELECT' query together with\nthe corresponding result that is sent to a client. If another\nidentical query is received, the server can then retrieve the results\nfrom the query cache rather than parsing and executing the same query\nagain.\n\nThe query cache is extremely useful in an environment where (some)\ntables don't change very often and you have a lot of identical queries.\nThis is a typical situation for many web servers that use a lot of\ndynamic content.\n\nFollowing are some performance data for the query cache (We got these\nby running the MySQL benchmark suite on a Linux Alpha 2x500 MHz with\n2GB RAM and a 64MB query cache):\n\n * If you want to disable the query cache code set\n `query_cache_size=0'. By disabling the query cache code there is\n no noticeable overhead.\n\n * If all of the queries you're preforming are simple (such as\n selecting a row from a table with one row); but still differ so\n that the queries can not be cached, the overhead for having the\n query cache active is 13%. This could be regarded as the worst\n case scenario. However, in real life, queries are much more\n complicated than our simple example so the overhead is normally\n significantly lower.\n\n * Searches after one row in a one row table is 238% faster. 
This\n can be regarded as close to the minimum speedup to be expected for\n a query that is cached.\n\nHow The Query Cache Operates\n----------------------------\n\nQueries are compared before parsing, thus\n\n SELECT * FROM TABLE\n\nand\n\n Select * from table\n\nare regarded as different queries by the query cache, so queries need to\nbe exactly the same (byte for byte) to be seen as identical. In\naddition, a query may be seen as different if, for instance, one client\nis using a different communication protocol format or a different\ncharacter set than another client.\n\nQueries that use different databases, different protocol versions, or\ndifferent default character sets are considered different queries and\nare cached separately.\n\nThe cache does work for `SELECT CALC_ROWS ...' and `SELECT FOUND_ROWS()\n...' type queries because the number of found rows is also stored in\nthe cache.\n\nIf a table changes (`INSERT', `UPDATE', `DELETE', `TRUNCATE', `ALTER'\nor `DROP TABLE|DATABASE'), then all cached queries that used this table\n(possibly through a `MRG_MyISAM' table!) become invalid and are removed\nfrom the cache.\n\nCurrently all `InnoDB' tables are invalidated on `COMMIT'; in the\nfuture this will be changed so that only tables changed in the\ntransaction cause the corresponding cache entries to be invalidated.\n\nA query cannot be cached if it contains one of the functions:\n*Function* *Function* *Function* *Function*\n`User Defined `CONNECTION_ID' `FOUND_ROWS' `GET_LOCK'\nFunctions'\n`RELEASE_LOCK' `LOAD_FILE' `MASTER_POS_WAIT' `NOW'\n`SYSDATE' `CURRENT_TIMESTAMP'`CURDATE' `CURRENT_DATE'\n`CURTIME' `CURRENT_TIME' `DATABASE' `ENCRYPT' (with\n one parameter)\n`LAST_INSERT_ID' `RAND' `UNIX_TIMESTAMP' `USER'\n (without\n parameters)\n`BENCHMARK'\n\nNor can a query be cached if it contains user variables, or if it is of\nthe form `SELECT ... 
IN SHARE MODE' or of the form `SELECT * FROM\nAUTOINCREMENT_FIELD IS NULL' (to retrieve the last insert id - ODBC\nworkaround).\n\nHowever, `FOUND_ROWS()' will return the correct value, even if the\npreceding query was fetched from the cache.\n\nQueries that don't use any tables, or for which the user has a column\nprivilege on any of the involved tables, are not cached.\n\nBefore a query is fetched from the query cache, MySQL will check that\nthe user has SELECT privilege to all the involved databases and tables.\nIf this is not the case, the cached result will not be used.\n\nQuery Cache Configuration\n-------------------------\n\nThe query cache adds a few `MySQL' system variables for `mysqld' which\nmay be set in a configuration file or on the command line when starting\n`mysqld'.\n\n * `query_cache_limit' Don't cache results that are bigger than this.\n (Default 1M).\n\n * `query_cache_size' The memory allocated to store results from old\n queries. If this is 0, the query cache is disabled (default).\n\n * `query_cache_startup_type' This may be set (only numeric) to\n *Option* *Description*\n 0 (OFF, don't cache or retrieve results)\n 1 (ON, cache all results except `SELECT\n SQL_NO_CACHE ...' queries)\n 2 (DEMAND, cache only `SELECT SQL_CACHE ...'\n queries)\n\nInside a thread (connection), the behaviour of the query cache can be\nchanged from the default. The syntax is as follows:\n\n`SQL_QUERY_CACHE_TYPE = OFF | ON | DEMAND' `SQL_QUERY_CACHE_TYPE = 0\n| 1 | 2'\n\n*Option* *Description*\n0 or OFF Don't cache or retrieve results.\n1 or ON Cache all results except `SELECT SQL_NO_CACHE\n ...' queries.\n2 or DEMAND Cache only `SELECT SQL_CACHE ...' 
queries.\n\nBy default `SQL_QUERY_CACHE_TYPE' depends on the value of\n`query_cache_startup_type' when the thread was created.\n\nQuery Cache Options in `SELECT'\n-------------------------------\n\nThere are two possible query cache related parameters that may be\nspecified in a `SELECT' query:\n\n*Option* *Description*\n`SQL_CACHE' If `SQL_QUERY_CACHE_TYPE' is `DEMAND', allow the\n query to be cached. If `SQL_QUERY_CACHE_TYPE'\n is `ON', this is the default. If\n `SQL_QUERY_CACHE_TYPE' is `OFF', do nothing.\n`SQL_NO_CACHE' Make this query non-cachable, don't allow this\n query to be stored in the cache.\n\nQuery Cache Status and Maintenance\n----------------------------------\n\nWith the `FLUSH QUERY CACHE' command you can defragment the query cache\nto better utilise its memory. This command will not remove any queries\nfrom the cache. `FLUSH TABLES' also flushes the query cache.\n\nThe `RESET QUERY CACHE' command removes all query results from the\nquery cache.\n\nYou can monitor query cache performance in `SHOW STATUS':\n\n*Variable* *Description*\n`Qcache_queries_in_cache'Number of queries registered in the cache.\n`Qcache_inserts' Number of queries added to the cache.\n`Qcache_hits' Number of cache hits.\n`Qcache_not_cached' Number of non-cached queries (not cachable, or\n due to `SQL_QUERY_CACHE_TYPE').\n`Qcache_free_memory' Amount of free memory for query cache.\n`Qcache_total_blocks' Total number of blocks in query cache.\n`Qcache_free_blocks' Number of free memory blocks in query cache.\n\nTotal number of queries = `Qcache_inserts' + `Qcache_hits' +\n`Qcache_not_cached'.\n\nThe query cache uses variable length blocks, so `Qcache_total_blocks'\nand `Qcache_free_blocks' may indicate query cache memory fragmentation.\nAfter `FLUSH QUERY CACHE' only a single (big) free block remains.\n\nNote: Every query needs a minimum of two blocks (one for the query text\nand one or more for the query results). 
Also, every table that is used\nby a query needs one block, but if two or more queries use the same\ntable only one block needs to be allocated.\n\n\n\n\n",
"msg_date": "Wed, 27 Feb 2002 00:28:09 -0500",
"msg_from": "\"Ken Hirsch\" <kenhirsch@myself.com>",
"msg_from_op": false,
"msg_subject": "Re: eWeek Poll: Which database is most critical to your"
}
]
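The interception scheme Dann describes above (parse each query, replace literals with parameter markers, form a 64-bit hash, and keep the query text alongside it so a match can be confirmed exactly) can be sketched roughly as follows. This is a toy illustration in Python, not anyone's actual implementation: the class and method names are invented, a dict of buckets stands in for the skiplist, and the literal-stripping regexes are deliberately naive.

```python
import hashlib
import re

class QueryCache:
    """Toy query-shape cache: normalize, hash, verify text on lookup."""

    def __init__(self):
        # 64-bit hash -> list of entries (a dict stands in for the skiplist)
        self.buckets = {}

    @staticmethod
    def normalize(sql):
        # Replace "physical data" (literals) with parameter markers.
        sql = re.sub(r"'(?:[^']|'')*'", "?", sql)      # string literals
        sql = re.sub(r"\b\d+(?:\.\d+)?\b", "?", sql)   # numeric literals
        return sql

    @staticmethod
    def hash64(text):
        # 64-bit hash of the normalized query text.
        return int.from_bytes(hashlib.sha256(text.encode()).digest()[:8], "big")

    def lookup(self, sql):
        norm = self.normalize(sql)
        for entry in self.buckets.get(self.hash64(norm), []):
            # Direct comparison of the stored text guards against hash
            # collisions: the "million dollar request" must never match
            # on the checksum alone.
            if entry["text"] == norm:
                entry["hits"] += 1          # counter for query frequency
                return entry["payload"]
        return None

    def store(self, sql, payload):
        norm = self.normalize(sql)
        self.buckets.setdefault(self.hash64(norm), []).append(
            {"text": norm, "payload": payload, "hits": 0})
```

Because the stored normalized text is compared on every lookup, the hash only narrows the search and can never cause a wrong entry to be returned. Note this caches per query *shape* (closer to a prepared-plan cache); MySQL's result cache, per the documentation quoted above, instead requires the query text to match byte for byte.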
[
{
"msg_contents": "We had a discussion about renaming some files in src/commands. Can't we\njust 'mv' the CVS file to keep the log during a file rename? I sure\nthought that would work.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 27 Feb 2002 00:29:01 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Renaming files"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> We had a discussion about renaming some files in src/commands. Can't we\n> just 'mv' the CVS file to keep the log during a file rename? I sure\n> thought that would work.\n\nSure, if your idea of \"work\" does not include being able to extract\ncorrect copies of past releases anymore.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Feb 2002 00:57:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Renaming files "
}
]
[
{
"msg_contents": "pgman wrote:\n> Peter Eisentraut wrote:\n> > John Gray writes:\n> > \n> > > dbcommands.c\trename to database.c (see below)\n> > > indexcmds.c\trename to index.c (see below)\n> > \n> > Might as well keep these. They don't hurt anyone just because they spell\n> > a little differently.\n> \n> I disagree. If we are cleaning, let's clean. *cmd* is redundant. The\n> contents of the files will be quite different anyway.\n\nGood point about keeping CVS logs. Can't we just move the CVS file to\nanother name, create an empty file in its place, then delete it. I\nthought that would move the history. We have never done it but I would\nthink there is a way to do it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 27 Feb 2002 00:31:34 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring of command.c"
},
{
"msg_contents": "A simple copy works quite well. Bit of a waste of space but you\npreserve the history in both locations.\n\nAn empty file will not work. If someone were to check out an older\nversion it would be broken.\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: <pgman@candle.pha.pa.us>\nCc: \"Peter Eisentraut\" <peter_e@gmx.net>; \"John Gray\"\n<jgray@azuli.co.uk>; <pgsql-hackers@postgresql.org>\nSent: Wednesday, February 27, 2002 12:31 AM\nSubject: Re: [HACKERS] Refactoring of command.c\n\n\n> pgman wrote:\n> > Peter Eisentraut wrote:\n> > > John Gray writes:\n> > >\n> > > > dbcommands.c rename to database.c (see below)\n> > > > indexcmds.c rename to index.c (see below)\n> > >\n> > > Might as well keep these. They don't hurt anyone just because\nthey spell\n> > > a little differently.\n> >\n> > I disagree. If we are cleaning, let's clean. *cmd* is redundant.\nThe\n> > contents of the files will be quite different anyway.\n>\n> Good point about keeping CVS logs. Can't we just move the CVS file\nto\n> another name, create an empty file in its place, then delete it. I\n> thought that would move the history. We have never done it but I\nwould\n> think there is a way to do it.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to\nmajordomo@postgresql.org)\n>\n\n",
"msg_date": "Wed, 27 Feb 2002 06:50:38 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring of command.c"
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> A simple copy works quite well. Bit of a waste of space but you\n> preserve the history in both locations.\n\nIf you do that, the copied file will appear to CVS to be part of older\nversions, no?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Feb 2002 10:13:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring of command.c "
},
{
"msg_contents": "It will, but I've always been able to make the assumption it won't be\ncompiled in as the make files always specified exactly which ones to\ncompile (and would ignore the new 'stray').\n\nPerhaps the best way to approach this would be to ask the FreeBSD\n'repo' people what they do -- as they do this kind of stuff quite\nfrequently.\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Rod Taylor\" <rbt@zort.ca>\nCc: \"Bruce Momjian\" <pgman@candle.pha.pa.us>; \"Peter Eisentraut\"\n<peter_e@gmx.net>; \"John Gray\" <jgray@azuli.co.uk>;\n<pgsql-hackers@postgresql.org>\nSent: Wednesday, February 27, 2002 10:13 AM\nSubject: Re: [HACKERS] Refactoring of command.c\n\n\n> \"Rod Taylor\" <rbt@zort.ca> writes:\n> > A simple copy works quite well. Bit of a waste of space but you\n> > preserve the history in both locations.\n>\n> If you do that, the copied file will appear to CVS to be part of\nolder\n> versions, no?\n>\n> regards, tom lane\n>\n\n",
"msg_date": "Wed, 27 Feb 2002 10:21:27 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring of command.c "
},
{
"msg_contents": "Dug the below out of googles cache -- it's what the BSDs do for moving\nfiles in cvs.\n\nWhat is a repo-copy?\nA repo-copy (which is a short form of ``repository copy'') refers to\nthe direct copying of files within the CVS repository.\n\nWithout a repo-copy, if a file needed to be copied or moved to another\nplace in the repository, the committer would run cvs add to put the\nfile in its new location, and then cvs rm on the old file if the old\ncopy was being removed.\n\nThe disadvantage of this method is that the history (i.e. the entries\nin the CVS logs) of the file would not be copied to the new location.\nAs the FreeBSD Project considers this history very useful, a\nrepository copy is often used instead. This is a process where one of\nthe repository meisters will copy the files directly within the\nrepository, rather than using the cvs program.\n\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Rod Taylor\" <rbt@zort.ca>\nCc: \"Bruce Momjian\" <pgman@candle.pha.pa.us>; \"Peter Eisentraut\"\n<peter_e@gmx.net>; \"John Gray\" <jgray@azuli.co.uk>;\n<pgsql-hackers@postgresql.org>\nSent: Wednesday, February 27, 2002 10:13 AM\nSubject: Re: [HACKERS] Refactoring of command.c\n\n\n> \"Rod Taylor\" <rbt@zort.ca> writes:\n> > A simple copy works quite well. Bit of a waste of space but you\n> > preserve the history in both locations.\n>\n> If you do that, the copied file will appear to CVS to be part of\nolder\n> versions, no?\n>\n> regards, tom lane\n>\n\n",
"msg_date": "Wed, 27 Feb 2002 10:24:26 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring of command.c "
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> Dug the below out of googles cache -- it's what the BSDs do for moving\n> files in cvs.\n\n> What is a repo-copy?\n> A repo-copy (which is a short form of ``repository copy'') refers to\n> the direct copying of files within the CVS repository.\n\nYeah, I think that's what we discussed the last time the question came\nup.\n\nIt seems awfully wrongheaded to me. IMHO, the entire point of a CVS\nrepository is to store past states of your software, not only the\ncurrent state. Destroying the accurate representation of your historical\nreleases is a poor tradeoff for making it a little easier to find the\nlog entries for code that's been moved around. What's the point\nof having history, if it's not accurate?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Feb 2002 14:06:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring of command.c "
},
{
"msg_contents": "> > What is a repo-copy?\n> > A repo-copy (which is a short form of ``repository copy'') refers to\n> > the direct copying of files within the CVS repository.\n>\n> Yeah, I think that's what we discussed the last time the question came\n> up.\n>\n> It seems awfully wrongheaded to me. IMHO, the entire point of a CVS\n> repository is to store past states of your software, not only the\n> current state. Destroying the accurate representation of your historical\n> releases is a poor tradeoff for making it a little easier to find the\n> log entries for code that's been moved around. What's the point\n> of having history, if it's not accurate?\n\nSounds like it's time to move to using 'arch':\n\nhttp://www.regexps.com/#arch\n\nSupports everything that CVS doesn't, including rename events...\n\nBTW - I'm not _seriously_ suggesting this change - but it would be cool,\nwouldn't it?\n\nPeople could start their own local branches which are part of the global\nnamespace, easily merge them in, etc...\n\nChris\n\n",
"msg_date": "Thu, 28 Feb 2002 09:18:39 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Arch (was RE: Refactoring of command.c )"
},
{
"msg_contents": "On Thu, 28 Feb 2002, Christopher Kings-Lynne wrote:\n\n[Shame on Christopher for breaking attributions]\n\n> > > What is a repo-copy?\n> > > A repo-copy (which is a short form of ``repository copy'') refers to\n> > > the direct copying of files within the CVS repository.\n> >\n> > Yeah, I think that's what we discussed the last time the question came\n> > up.\n> >\n> > It seems awfully wrongheaded to me. IMHO, the entire point of a CVS\n> > repository is to store past states of your software, not only the\n> > current state. Destroying the accurate representation of your historical\n> > releases is a poor tradeoff for making it a little easier to find the\n> > log entries for code that's been moved around. What's the point\n> > of having history, if it's not accurate?\n>\n> Sounds like it's time to move to using 'arch':\n\nI see this going down the road of a religious debate, and to prove the\npoint, I bring up BitKeeper:\n\nhttp://www.bitkeeper.com\n\n> http://www.regexps.com/#arch\n>\n> Supports everything that CVS doesn't, including rename events...\n\nSo does BitKeeper :)\n\n> BTW - I'm not _seriously_ suggesting this change - but it would be cool,\n> wouldn't it?\n>\n> People could start their own local branches which are part of the global\n> namespace, easily merge them in, etc...\n\nThis seems quite pointless for PostgreSQL's development.\n\nC'est la vie.\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n",
"msg_date": "Wed, 27 Feb 2002 22:25:31 -0600 (CST)",
"msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>",
"msg_from_op": false,
"msg_subject": "Re: Arch (was RE: Refactoring of command.c )"
},
{
"msg_contents": "\"Dominic J. Eidson\" <sauron@the-infinite.org> writes:\n> On Thu, 28 Feb 2002, Christopher Kings-Lynne wrote:\n>> Sounds like it's time to move to using 'arch':\n\n> I see this going down the road of a religious debate, and to prove the\n> point, I bring up BitKeeper:\n\nHmm. I'd surely be the last to claim that CVS is the be-all and end-all\nof software archiving systems. But is BitKeeper, or arch, or anything\nelse enough better to justify the pain of switching? This is intended\nas an honest question, not flamebait --- I haven't looked closely at\nthe alternatives.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Feb 2002 23:44:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Arch (was RE: Refactoring of command.c ) "
},
{
"msg_contents": "> I see this going down the road of a religious debate, and to prove the\n> point, I bring up BitKeeper:\n>\n> http://www.bitkeeper.com\n\nI admit I don't know much about bitkeeper, except its license is a bit\nweird...\n\n> > http://www.regexps.com/#arch\n> >\n> > Supports everything that CVS doesn't, including rename events...\n>\n> So does BitKeeper :)\n>\n> > BTW - I'm not _seriously_ suggesting this change - but it would be cool,\n> > wouldn't it?\n> >\n> > People could start their own local branches which are part of the global\n> > namespace, easily merge them in, etc...\n>\n> This seems quite pointless for PostgreSQL's development.\n\nNOT TRUE!!!\n\nImagine you want to develop a massive new feature for Postgres. You just\ncreate a branch on your own machine, do all your changes, commits, etc. and\nkeep it current with the main branch. Then, you can merge it back into the\nmain tree... That way you can have a history of commits on your own branch\nof the repo!\n\nDisclaimer: Have only read docs, not actually _used_ 'arch'... :(\n\nChris\n\n",
"msg_date": "Thu, 28 Feb 2002 17:00:00 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Arch (was RE: Refactoring of command.c )"
},
{
"msg_contents": "On Thu, 2002-02-28 at 06:44, Tom Lane wrote:\n\n> Hmm. I'd surely be the last to claim that CVS is the be-all and end-all\n> of software archiving systems. But is BitKeeper, or arch, or anything\n> else enough better to justify the pain of switching? This is intended\n> as an honest question, not flamebait --- I haven't looked closely at\n> the alternatives.\n\nThere was a discussion on Slashdot a couple of weeks ago, with (as\nusual) some interesting comments out of the lot. It's at\nhttp://slashdot.org/comments.pl?sid=27540&threshold=4&mode=flat\nincluding comments from Subversion's Karl Fogel.\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-22-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n\n",
"msg_date": "28 Feb 2002 12:14:32 +0200",
"msg_from": "Alessio Bragadini <alessio@albourne.com>",
"msg_from_op": false,
"msg_subject": "Re: Arch (was RE: Refactoring of command.c )"
},
{
"msg_contents": "> Imagine you want to develop a massive new feature for Postgres. You\njust\n> create a branch on your own machine, do all your changes, commits,\netc. and\n> keep it current with the main branch. Then, you can merge it back\ninto the\n> main tree... That way you can have a history of commits on your own\nbranch\n> of the repo!\n\nThe same thing can be accomplished with CVS as well -- it's just not\nas pretty. There is a reason that the FreeBSD group uses $FreeBSD$\nand leaves $Id$ untouched.\n\nBasically, check out of one, drop CVS directories, check into the\nsecond, check out of the second, and when doing work with either\nrepository you specify which repo with the -D flag. Coupled with\nthe -j (merge) flag you can accomplish most tasks.\n\nThat said, if the work was thought through and beneficial you may be\nable to obtain a branch in postgresql cvs to work with.\n\n",
"msg_date": "Thu, 28 Feb 2002 08:56:36 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Arch (was RE: Refactoring of command.c )"
}
]
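The "repo-copy" Rod quotes boils down to duplicating the RCS history file (`<name>,v`) inside $CVSROOT by hand. A rough sketch of the file operation (Python, with an invented helper name and directory layout; the real FreeBSD procedure is performed by repository meisters directly on the server, not through any script like this):

```python
import shutil
from pathlib import Path

def repo_copy(cvsroot, module, old_name, new_name):
    """Duplicate a CVS history file inside the repository (a "repo-copy").

    CVS keeps each file's entire log in an RCS file named `<file>,v`
    under $CVSROOT/<module>.  Copying that file carries the full history
    to the new name -- at the cost, as Tom Lane notes above, that the
    new file then appears to exist in every historical tag and branch.
    """
    src = Path(cvsroot) / module / f"{old_name},v"
    dst = Path(cvsroot) / module / f"{new_name},v"
    if dst.exists():
        raise FileExistsError(f"refusing to overwrite {dst}")
    shutil.copy2(src, dst)
    # The old name is then retired with an ordinary `cvs rm` from a
    # working copy, which moves its ,v file into the Attic so that old
    # checkouts still find it.
    return dst
```

This is why the thread's "empty file" trick fails: only a copy of the `,v` file preserves the log, and only keeping the old `,v` (in the Attic) keeps old releases checkable-out.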
[
{
"msg_contents": "On Wed, 2002-02-27 at 13:09, Mike Mascari wrote:\n> On general a discussion has been taking place regarding cached query\n> plans and how MySQL invented them.\n\nIMHO the discussion was about cached queries not query plans.\n\n> Of course, this is totally false. I\n> remembered a nice paragraph in the Oracle docs as to the process by\n> which Oracle uses shared SQL areas to share the execution plan of\n> identical statements, flushing the area whenever a dependent object was\n> modified. In searching for the reference, however, I stumbled an\n> interesting fact. Unlike normal queries where blocks are added to the\n> MRU end of an LRU list, full table scans add the blocks to the LRU end\n> of the LRU list.\n\nThis seems really elegant solution , much better than not caching at all\nand much better than flushing the whole cache by a large table scan\n\n> I was wondering, in the light of the discussion of\n> using LRU-K, if PostgreSQL does, or if anyone has tried, this technique?\n\n------------\nHannu\n",
"msg_date": "27 Feb 2002 10:43:11 +0500",
"msg_from": "Hannu Krosing <hannu@krosing.net>",
"msg_from_op": true,
"msg_subject": "Re: LRU and full table scans"
},
{
"msg_contents": "On general a discussion has been taking place regarding cached query\nplans and how MySQL invented them. Of course, this is totally false. I\nremembered a nice paragraph in the Oracle docs as to the process by\nwhich Oracle uses shared SQL areas to share the execution plan of\nidentical statements, flushing the area whenever a dependent object was\nmodified. In searching for the reference, however, I stumbled an\ninteresting fact. Unlike normal queries where blocks are added to the\nMRU end of an LRU list, full table scans add the blocks to the LRU end\nof the LRU list. I was wondering, in the light of the discussion of\nusing LRU-K, if PostgreSQL does, or if anyone has tried, this technique?\n\nMike Mascari\nmascarm@mascari.\n",
"msg_date": "Wed, 27 Feb 2002 03:09:45 -0500",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "LRU and full table scans"
},
{
"msg_contents": "Hannu Krosing wrote:\n> \n> On Wed, 2002-02-27 at 13:09, Mike Mascari wrote:\n> > On general a discussion has been taking place regarding cached query\n> > plans and how MySQL invented them.\n> \n> IMHO the discussion was about cached queries not query plans.\n\nYou're right, of course. It would be interesting though to compare the\nspeed of a cached query against a cache query plan + cached data blocks.\nIf the cached query got a hit, the cached query plan + cached data\nblocks would lose by the number of cycles spent in the executor.\nAlternatively, the cost of a cache miss in the caching of a query means\nwasted memory that could have been used for cached data blocks...\n\n> \n> > Of course, this is totally false. I\n> > remembered a nice paragraph in the Oracle docs as to the process by\n> > which Oracle uses shared SQL areas to share the execution plan of\n> > identical statements, flushing the area whenever a dependent object was\n> > modified. In searching for the reference, however, I stumbled an\n> > interesting fact. Unlike normal queries where blocks are added to the\n> > MRU end of an LRU list, full table scans add the blocks to the LRU end\n> > of the LRU list.\n> \n> This seems really elegant solution , much better than not caching at all\n> and much better than flushing the whole cache by a large table scan\n\nYes. And Oracle has a CACHE keyword option on its CREATE TABLE/ALTER\nTABLE statement to allow full table scans of small lookup tables to\nfollow normal MRU caching, if necessary.\n\n> \n> > I was wondering, in the light of the discussion of\n> > using LRU-K, if PostgreSQL does, or if anyone has tried, this technique?\n> \n> Hannu\n\nMike Mascari\nmascarm@mascari.com\n",
"msg_date": "Wed, 27 Feb 2002 07:22:08 -0500",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: LRU and full table scans"
},
{
"msg_contents": "Mike Mascari wrote:\n> On general a discussion has been taking place regarding cached query\n> plans and how MySQL invented them. Of course, this is totally false. I\n> remembered a nice paragraph in the Oracle docs as to the process by\n> which Oracle uses shared SQL areas to share the execution plan of\n> identical statements, flushing the area whenever a dependent object was\n> modified. In searching for the reference, however, I stumbled an\n> interesting fact. Unlike normal queries where blocks are added to the\n> MRU end of an LRU list, full table scans add the blocks to the LRU end\n> of the LRU list. I was wondering, in the light of the discussion of\n> using LRU-K, if PostgreSQL does, or if anyone has tried, this technique?\n\nYes, someone from India has a project to test LRU-K and MRU for large\ntable scans and report back the results. He will implement whichever is\nbest. He posted a week ago, see \"Implementation Proposal For Add Free\nBehind Capability For Large Sequential Scan\", Amit Kumar Khare\n<skamit2000@yahoo.com>.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 27 Feb 2002 22:42:21 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LRU and full table scans"
},
{
"msg_contents": "\"Mike Mascari\" <mascarm@mascari.com> escribi� en el mensaje\nnews:3C7C9449.B747532B@mascari.com...\n> On general a discussion has been taking place regarding cached query\n> plans and how MySQL invented them. Of course, this is totally false. I\n> remembered a nice paragraph in the Oracle docs as to the process by\n> which Oracle uses shared SQL areas to share the execution plan of\n> identical statements, flushing the area whenever a dependent object was\n> modified. In searching for the reference, however, I stumbled an\n> interesting fact. Unlike normal queries where blocks are added to the\n> MRU end of an LRU list, full table scans add the blocks to the LRU end\n> of the LRU list. I was wondering, in the light of the discussion of\n> using LRU-K, if PostgreSQL does, or if anyone has tried, this technique?\n>\n> Mike Mascari\n> mascarm@mascari.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n\nFirst Hello, Second...Yes, I'm doing some studies on the buffer manager and\nalso studying and implementing different policys, one of them is the LRU-K.\n\nBy the time I don't have any results, I'm just preparing to run the TPC-H\nbenchmark to test all the policys that I have implemented.\nI have implemented some policys for the buffer manager, some better and some\nworst than LRU (FIFO, LFU, LRD, FBR, LRU-K, LRFU, 4 CLOCK policys (or second\nchance), CORRELATED REFERENCES, and different combinations of them like\n(LFU+CLOCK+AGING(by division or by substraction)+CORRELATED REFERENCES)),\nthere are seven principal policys and seven \"add-on's\" that could be applyed\nto them, resulting in 25 combinations of policys.\n\n\n\n\n",
"msg_date": "Wed, 6 Mar 2002 23:09:51 +0100",
"msg_from": "\"Roque Bonilla\" <roque.bonilla@wanadoo.es>",
"msg_from_op": false,
"msg_subject": "Re: LRU and full table scans"
},
{
"msg_contents": "Roque Bonilla wrote:\n> First Hello, Second...Yes, I'm doing some studies on the buffer manager and\n> also studying and implementing different policys, one of them is the LRU-K.\n> \n> By the time I don't have any results, I'm just preparing to run the TPC-H\n> benchmark to test all the policys that I have implemented.\n> I have implemented some policys for the buffer manager, some better and some\n> worst than LRU (FIFO, LFU, LRD, FBR, LRU-K, LRFU, 4 CLOCK policys (or second\n> chance), CORRELATED REFERENCES, and different combinations of them like\n> (LFU+CLOCK+AGING(by division or by substraction)+CORRELATED REFERENCES)),\n> there are seven principal policys and seven \"add-on's\" that could be applyed\n> to them, resulting in 25 combinations of policys.\n\nWe do have someone working on LRU-K but we haven't seen any test results\nfrom him yet. Please let us know what you find.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 13 Mar 2002 13:27:33 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LRU and full table scans"
}
] |
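The thread above discusses LRU-K as a buffer-replacement policy that resists cache pollution from large sequential scans. As a rough illustration only (this is not PostgreSQL's buffer manager; the class and its shape are invented for this sketch), an LRU-2 cache evicts the page whose second-most-recent reference is oldest, so pages a scan touches exactly once are evicted before pages referenced repeatedly:

```python
class LRU2Cache:
    """Minimal LRU-2 sketch (LRU-K with K=2): the eviction victim is the
    page whose second-most-recent reference is oldest, so pages touched
    only once by a big sequential scan go first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.clock = 0
        self.pages = {}  # page -> (last_ref, second_last_ref) logical times

    def access(self, page):
        self.clock += 1
        hit = page in self.pages
        if hit:
            last, _ = self.pages[page]
            self.pages[page] = (self.clock, last)
        else:
            if len(self.pages) >= self.capacity:
                # evict the page with the oldest second-most-recent
                # reference; 0 stands for "never referenced twice"
                victim = min(self.pages, key=lambda p: self.pages[p][1])
                del self.pages[victim]
            self.pages[page] = (self.clock, 0)
        return hit

# a page referenced twice survives a one-pass scan over s1..s3
cache = LRU2Cache(2)
cache.access('hot'); cache.access('hot')
for p in ['s1', 's2', 's3']:
    cache.access(p)
print('hot' in cache.pages)  # True: the scan pages evicted each other
```

A plain LRU cache of the same size would have evicted 'hot' partway through the scan; under LRU-2 it is the only page with two recorded references, so the scan pages displace each other instead.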
[
{
"msg_contents": "So that's it. A completely lame \"benchmark faker\" tool. Useful for\nonly the dumb benchmark they create.\n",
"msg_date": "Tue, 26 Feb 2002 22:03:33 -0800",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: eWeek Poll: Which database is most critical to your"
}
] |
[
{
"msg_contents": "On Wed, 2002-02-27 at 14:48, Jean-Paul ARGUDO wrote:\n> Ok, \n> \n> I'm working on query analysis for a program in ecpg for business puposes. Look\n> at what I found on with PG 7.2: Please be cool with my french2english processor,\n> I got few bogomips in my brain dedicated to english (should have listen more in\n> class..):\n> ----\n> \n> line 962 (in the ecpg source..)\n> \n> EXPLAIN SELECT t12_bskid, t12_pnb, t12_lne, t12_tck\n> FROM T12_20011231\n> WHERE t12_bskid >= 1 \n> ORDER BY t12_bskid, t12_pnb, t12_tck, t12_lne;\n> \n...\n\n> \n> \n> => Uh? Seq scan cheaper than index??? \n> \n> => let's disable seqscan to read cost of index:\n> postgresql.conf : enable_seqscan = false\n\nYou could just do \n\nset enable_seqscan to 'off'\n\nin sql\n\n> Sort (cost=3126.79..3126.79 rows=25693 width=46)\n> -> Index Scan using t12_idx_bskid_20011231 on t12_20011231\n> (cost=0.00..1244.86 rows=25693 width=46)\n> \n> => Uh? seq scan'cost is lower than index scan?? => mailto hackers\n\nIt often is. Really.\n\n> ----\n> \n> What's your opinion? \n\nWhat are the real performance numbers ?\n\nIf they are other than what postgresql optimiser thinks you can change\nthem in system table.\n\n----------------\nHannu\n",
"msg_date": "27 Feb 2002 12:25:15 +0500",
"msg_from": "Hannu Krosing <hannu@krosing.net>",
"msg_from_op": true,
"msg_subject": "Re: Yet again on indices..."
},
{
"msg_contents": "Ok, \n\nI'm working on query analysis for a program in ecpg for business puposes. Look\nat what I found on with PG 7.2: Please be cool with my french2english processor,\nI got few bogomips in my brain dedicated to english (should have listen more in\nclass..):\n----\n\nline 962 (in the ecpg source..)\n\nEXPLAIN SELECT t12_bskid, t12_pnb, t12_lne, t12_tck\nFROM T12_20011231\nWHERE t12_bskid >= 1 \nORDER BY t12_bskid, t12_pnb, t12_tck, t12_lne;\n\nNOTICE: QUERY PLAN:\n \nSort (cost=3006.13..3006.13 rows=25693 width=46)\n -> Seq Scan on t12_20011231 (cost=0.00..1124.20 rows=25693 width=46)\n\n=> not good, table t12_20011231 as 26K tuples :-(\n\n=> create index t12_idx_bskid_20011231 on t12_20011231 (t12_bskid);\n\nSort (cost=3006.13..3006.13 rows=25693 width=46)\n -> Seq Scan on t12_20011231 (cost=0.00..1124.20 rows=25693 width=46)\n\n=> probably statistic refresh to be done: \n$ /usr/local/pgsql/bin/vacuumdb --analyze dbks\n\nSort (cost=3006.13..3006.13 rows=25693 width=46)\n -> Seq Scan on t12_20011231 (cost=0.00..1124.20 rows=25693 width=46)\n\n\n=> Uh? Seq scan cheaper than index??? \n\n=> let's disable seqscan to read cost of index:\npostgresql.conf : enable_seqscan = false\n\nSort (cost=3126.79..3126.79 rows=25693 width=46)\n -> Index Scan using t12_idx_bskid_20011231 on t12_20011231\n(cost=0.00..1244.86 rows=25693 width=46)\n\n=> Uh? seq scan'cost is lower than index scan?? => mailto hackers\n\n----\n\nWhat's your opinion? \n\nI have to tell that this select opperates in a forloop statment . \nI hardly believe reading 26K tuples is cheaper thant index reading, but maybe\nyou'll ask me about buffers that should store de 26K tuples?...\n\nBut just after this query, there is another one that maybe will put data in\nbuffers, kicking t12_20011231 data blocks...\n\nWell I feel a little stuck there. I'll continue with enable_scans=false, but\nI feel bad beeing forced to do so... 
and still asking myself if this is good\nidea.\n\nThanks for support, best regards.\n\n-- \nJean-Paul ARGUDO\n\n",
"msg_date": "Wed, 27 Feb 2002 10:48:15 +0100",
"msg_from": "Jean-Paul ARGUDO <jean-paul.argudo@idealx.com>",
"msg_from_op": false,
"msg_subject": "Yet again on indices..."
},
{
"msg_contents": "> > postgresql.conf : enable_seqscan = false\n> You could just do \n> set enable_seqscan to 'off'\n> in sql\n\nthanks for the tip :-)\n \n> > => Uh? seq scan'cost is lower than index scan?? => mailto hackers\n> It often is. Really.\n \n> > What's your opinion? \n> What are the real performance numbers ?\n\nFinally, testing and testing again shows the choice of table scan is faster than\nindex scan on this 26K tuples table. really impresive.\n \nI posted another mail about Oracle vs PG results in a comparative survey I'm\ncurrently working on for 1 month. Please read it, I feel a bit disapointed with\nOracle's 1200 tps..\n\nThanks for your support Hannu!\n\n-- \nJean-Paul ARGUDO\n\n",
"msg_date": "Wed, 27 Feb 2002 15:59:00 +0100",
"msg_from": "Jean-Paul ARGUDO <jean-paul.argudo@idealx.com>",
"msg_from_op": false,
"msg_subject": "Re: Yet again on indices..."
},
{
"msg_contents": "On Wed, 27 Feb 2002, Jean-Paul ARGUDO wrote:\n\n> EXPLAIN SELECT t12_bskid, t12_pnb, t12_lne, t12_tck\n> FROM T12_20011231\n> WHERE t12_bskid >= 1\n> ORDER BY t12_bskid, t12_pnb, t12_tck, t12_lne;\n>\n> NOTICE: QUERY PLAN:\n>\n> Sort (cost=3006.13..3006.13 rows=25693 width=46)\n> -> Seq Scan on t12_20011231 (cost=0.00..1124.20 rows=25693 width=46)\n>\n> => not good, table t12_20011231 as 26K tuples :-(\n\n>\n> => create index t12_idx_bskid_20011231 on t12_20011231 (t12_bskid);\n>\n> Sort (cost=3006.13..3006.13 rows=25693 width=46)\n> -> Seq Scan on t12_20011231 (cost=0.00..1124.20 rows=25693 width=46)\n>\n> => probably statistic refresh to be done:\n> $ /usr/local/pgsql/bin/vacuumdb --analyze dbks\n>\n> Sort (cost=3006.13..3006.13 rows=25693 width=46)\n> -> Seq Scan on t12_20011231 (cost=0.00..1124.20 rows=25693 width=46)\n>\n>\n> => Uh? Seq scan cheaper than index???\n>\n> => let's disable seqscan to read cost of index:\n> postgresql.conf : enable_seqscan = false\n>\n> Sort (cost=3126.79..3126.79 rows=25693 width=46)\n> -> Index Scan using t12_idx_bskid_20011231 on t12_20011231\n> (cost=0.00..1244.86 rows=25693 width=46)\n>\n> => Uh? seq scan'cost is lower than index scan?? => mailto hackers\n>\n> ----\n>\n\n> What's your opinion?\n\nWell you didn't send the schema, or explain analyze results to show\nwhich is actually faster, but...\n\nSequence scan *can be* faster than index scan when a large portion of the\ntable is going to be read. If the data is randomly distributed,\neventually you end up reading most/all of the table blocks anyway to get\nthe validity information for the rows and you're doing it in random order,\nplus you're reading parts of the index as well. How many rows are in\nthe table, and how many match t12_bskid >=1?\n\n",
"msg_date": "Wed, 27 Feb 2002 07:03:37 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Yet again on indices..."
},
{
"msg_contents": "Jean-Paul ARGUDO <jean-paul.argudo@idealx.com> writes:\n> EXPLAIN SELECT t12_bskid, t12_pnb, t12_lne, t12_tck\n> FROM T12_20011231\n> WHERE t12_bskid >= 1 \n> ORDER BY t12_bskid, t12_pnb, t12_tck, t12_lne;\n\n> Sort (cost=3006.13..3006.13 rows=25693 width=46)\n> -> Seq Scan on t12_20011231 (cost=0.00..1124.20 rows=25693 width=46)\n\n> => Uh? Seq scan cheaper than index??? \n\nFor that kind of query, very probably. How much of the table is\nactually selected by \"WHERE t12_bskid >= 1\"?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Feb 2002 10:18:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Yet again on indices... "
},
{
"msg_contents": "\n\n\n\n\nTom Lane wrote:\n\nJean-Paul ARGUDO <jean-paul.argudo@idealx.com> writes:\n\nEXPLAIN SELECT t12_bskid, t12_pnb, t12_lne, t12_tckFROM T12_20011231WHERE t12_bskid >= 1 ORDER BY t12_bskid, t12_pnb, t12_tck, t12_lne;\n\n\n\nSort (cost=3006.13..3006.13 rows=25693 width=46) -> Seq Scan on t12_20011231 (cost=0.00..1124.20 rows=25693 width=46)\n\n\n\nTry the following:\n\nEXPLAIN ANALYZE SELECT t12_bskid, t12_pnb, t12_lne, t12_tck FROM T12_20011231\nWHERE t12_bskid >= 1 ORDER BY t12_bskid, t12_pnb, t12_tck, t12_lne;\n\nand see what the actual results are. Then turn the seq_scans off and do\nthe same thing.\n\n\n\n=> Uh? Seq scan cheaper than index??? \n\nFor that kind of query, very probably. How much of the table isactually selected by \"WHERE t12_bskid >= 1\"?\t\t\tregards, tom lane---------------------------(end of broadcast)---------------------------TIP 2: you can get off all lists at once with the unregister command (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n\n\n\n\n",
"msg_date": "Wed, 27 Feb 2002 12:37:32 -0600",
"msg_from": "Thomas Swan <tswan-lst@ics.olemiss.edu>",
"msg_from_op": false,
"msg_subject": "Re: Yet again on indices..."
}
] |
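The planner's preference for a sequential scan in the thread above follows from simple arithmetic: when nearly every row matches the predicate, an index scan can pay a random heap-page fetch per row, while a seq scan reads each page once. A back-of-the-envelope sketch (the cost constants, page counts, and function shapes here are illustrative assumptions, not PostgreSQL's real cost model):

```python
def seq_scan_cost(pages, rows, seq_page_cost=1.0, cpu_tuple_cost=0.01):
    # read every heap page once, sequentially, and examine every tuple
    return pages * seq_page_cost + rows * cpu_tuple_cost

def index_scan_cost(matching_rows, index_pages, random_page_cost=4.0,
                    cpu_tuple_cost=0.01):
    # pessimistic model: each matching row costs one random heap page fetch
    return index_pages * 1.0 + matching_rows * (random_page_cost + cpu_tuple_cost)

# roughly the shape of t12_20011231: ~25693 rows, assume ~800 heap pages
print(seq_scan_cost(800, 25693))    # whole-table seq scan: ~1e3
print(index_scan_cost(25693, 100))  # index scan returning every row: ~1e5
print(index_scan_cost(50, 100))     # index scan returning 50 rows: ~3e2
```

With `WHERE t12_bskid >= 1` selecting essentially all 25693 rows, the index path comes out roughly two orders of magnitude more expensive in this toy model; it only wins when the predicate is selective.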
[
{
"msg_contents": "I see following in the manual:\n\n-------------------------------------------------------------------\nThe seconds field, including fractional parts, multiplied by\n1000. Note that this includes full seconds.\n ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n SELECT EXTRACT(MILLISECONDS FROM TIME '17:12:28.5');\n Result: 28500\n-------------------------------------------------------------------\n\nAnd I see:\n\ntest=# select current_timestamp,extract(milliseconds from current_timestamp);\n timestamptz | date_part \n-------------------------------+-----------\n 2002-02-27 14:45:53.945529+09 | 945.529\n(1 row)\n\nApparently there's an inconsistency among manuals, timestamp(tz)_part\nand timetz_part. Does anybody know which one is correct?\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 27 Feb 2002 17:07:50 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "timestamp_part() bug?"
},
{
"msg_contents": "On Wed, Feb 27, 2002 at 05:07:50PM +0900, Tatsuo Ishii wrote:\n> I see following in the manual:\n> \n> -------------------------------------------------------------------\n> The seconds field, including fractional parts, multiplied by\n> 1000. Note that this includes full seconds.\n> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> SELECT EXTRACT(MILLISECONDS FROM TIME '17:12:28.5');\n> Result: 28500\n> -------------------------------------------------------------------\n> \n> And I see:\n> \n> test=# select current_timestamp,extract(milliseconds from current_timestamp);\n> timestamptz | date_part \n> -------------------------------+-----------\n> 2002-02-27 14:45:53.945529+09 | 945.529\n> (1 row)\n> \n> Apparently there's an inconsistency among manuals, timestamp(tz)_part\n> and timetz_part. Does anybody know which one is correct?\n\n I hope bug is in the manual -- for example minutes the \"extract\" returns \n without hours. Is any matter why returs millisecons with seconds?\n If somebody wants milliseconds with seconds:\n\n# select extract(SECONDS from '14:45:53.945529'::time) * 1000;\n ?column?\n------------------\n 53945.5289999969\n(1 row)\n\n \n BTW, to_char() retuns milliseconds without seconds too:\n\ntest=# select to_char('2002-02-27 14:45:53.945529+09'::timestamp, 'MS');\n to_char\n---------\n 946\n(1 row)\n\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Wed, 27 Feb 2002 09:52:51 +0100",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: timestamp_part() bug?"
},
{
"msg_contents": "> I see following in the manual:\n> \n> -------------------------------------------------------------------\n> The seconds field, including fractional parts, multiplied by\n> 1000. Note that this includes full seconds.\n> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> SELECT EXTRACT(MILLISECONDS FROM TIME '17:12:28.5');\n> Result: 28500\n> -------------------------------------------------------------------\n> \n> And I see:\n> \n> test=# select current_timestamp,extract(milliseconds from current_timestamp);\n> timestamptz | date_part \n> -------------------------------+-----------\n> 2002-02-27 14:45:53.945529+09 | 945.529\n> (1 row)\n> \n> Apparently there's an inconsistency among manuals, timestamp(tz)_part\n> and timetz_part. Does anybody know which one is correct?\n\nAs far as I know, allowing MILLISECONDS etc. for the first arugument\nof EXTARCT is a PostgreSQL extention and we should decide what to do\nby ourselves.\n\nMy proposal is fixing timestamp(tz)_part so that it returns \"the\nseconds field, including fractional parts, multiplied by > 1000. Note\nthat this includes full seconds\" as the manual stats, since this would\nkeep the consistency and also have the least impact for existing\napplications.\n\nOpinion?\n--\nTatsuo Ishi\n\n",
"msg_date": "Sat, 02 Mar 2002 11:29:53 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: timestamp_part() bug?"
},
{
"msg_contents": "> > I see following in the manual:\n> > \n> > -------------------------------------------------------------------\n> > The seconds field, including fractional parts, multiplied by\n> > 1000. Note that this includes full seconds.\n> > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> > SELECT EXTRACT(MILLISECONDS FROM TIME '17:12:28.5');\n> > Result: 28500\n> > -------------------------------------------------------------------\n> > \n> > And I see:\n> > \n> > test=# select current_timestamp,extract(milliseconds from current_timestamp);\n> > timestamptz | date_part \n> > -------------------------------+-----------\n> > 2002-02-27 14:45:53.945529+09 | 945.529\n> > (1 row)\n> > \n> > Apparently there's an inconsistency among manuals, timestamp(tz)_part\n> > and timetz_part. Does anybody know which one is correct?\n> \n> As far as I know, allowing MILLISECONDS etc. for the first arugument\n> of EXTARCT is a PostgreSQL extention and we should decide what to do\n> by ourselves.\n> \n> My proposal is fixing timestamp(tz)_part so that it returns \"the\n> seconds field, including fractional parts, multiplied by > 1000. Note\n> that this includes full seconds\" as the manual stats, since this would\n> keep the consistency and also have the least impact for existing\n> applications.\n\nFix committed into both current and 7.2-stable.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 05 Mar 2002 12:47:10 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: timestamp_part() bug?"
},
{
"msg_contents": "There is a problem with epoch as well that was not in the 7.1.3\n\n\n7.1.3# select extract(epoch from '00:00:34'::time), now();\n7.1.3# 34 2002-03-05 22:13:16 +01\n\n7.2# select extract(epoch from '00:00:34'::time), now();\n7.2# 3634 2002-03-05 22:13:16 +01\n\n7.2# select extract(epoch from '00:00:34'::time without time zone), now();\n7.2# 3634 2002-03-05 22:13:16 +01\n\nIs that a bug or I didn't understand the new date/time types ?\n",
"msg_date": "5 Mar 2002 13:22:25 -0800",
"msg_from": "domingo@dad-it.com (Domingo Alvarez Duarte)",
"msg_from_op": false,
"msg_subject": "Re: timestamp_part() bug?"
},
{
"msg_contents": "> There is a problem with epoch as well that was not in the 7.1.3\n> 7.1.3# select extract(epoch from '00:00:34'::time), now();\n> 7.1.3# 34 2002-03-05 22:13:16 +01\n> Is that a bug or I didn't understand the new date/time types ?\n\nLooks like a bug (or at least it looks like it behaves differently than\nI would expect). Thanks for the report; I'll look at it asap.\n\n - Thomas\n",
"msg_date": "Fri, 15 Mar 2002 08:19:15 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: timestamp_part() bug?"
},
{
"msg_contents": "> There is a problem with epoch as well that was not in the 7.1.3\n\nHmm. 7.1.x did not implement any date_part() functions for time types.\nSo the results were obtained from a conversion to interval before\ncalling date_part()!\n\n7.2 implements date_part() for time with time zone, and converts time\nwithout time zone to time with time zone when executing your query. The\nbehavior is likely to be somewhat different. But...\n\n\nI think that your problem report now has two parts:\n\n1) extract(epoch from time with time zone '00:00:34') should return\nsomething \"reasonable\". I'll claim that it does that currently, since\n(if you were trying that query) you are one hour away from GMT and get\n3600+34 seconds back, which is consistant with same instant in GMT. If\nthe epoch is relative to GMT, then this may be The Right Thing To Do.\n\n2) extract(epoch from time '00:00:34') should return something which\ndoes not involve a time zone of any kind if it were following the\nconventions used for timestamp without time zone. So we should have an\nexplicit function to do that, rather than relying on converting to \"time\nwith time zone\" before extracting the \"epoch\".\n\nUnfortunately, I can't put a new function into 7.2.x due to the\nlong-standing rule of not modifying system tables in minor upgrades. 
So\nsolving (2) completely needs to wait for 7.3.\n\nYou can work around this mis-feature for now by patching 7.2.x,\nreplacing one of the definitions for date_part in\nsrc/include/catalog/pg_proc.h, oid = 1385 with the following:\n\nselect date_part($1, cast((cast($2 as text) || ''+00'') as time with\ntime zone));\n\nOr, it seems that you can actually drop and replace this built-in\nfunction (I vaguely recall that there used to be problems with doing\nthis, but it sure looks like it works!):\n\nthomas=# drop function date_part(text,time);\nDROP\nthomas=# create function date_part(text,time) returns double precision\nas '\nthomas'# select date_part($1, cast((cast($2 as text) || ''+00'') as time\nwith time zone));\nthomas'# ' language 'sql';\nCREATE\nthomas=# select extract(epoch from time '00:00:34');\n date_part \n-----------\n 34\n\n\nIn looking at this issue I did uncover a bug in moving time with time\nzones to other time zones:\n\nthomas=# select timetz(interval '01:00', time with time zone\n'08:09:10-08');\n timetz \n----------------\n 00:00:00.00+01\n\nafter repairing the offending code in timetz_izone() it seems to do the\nright thing:\n\nthomas=# select timetz(interval '01:00', time with time zone\n'08:09:10-08');\n timetz \n-------------\n 17:09:10+01\n\nThis last issue will be fixed in 7.2.1. And the function will be renamed\nto \"timezone()\" in 7.3 to be consistant with similar functions for other\ndata types.\n\n - Thomas\n",
"msg_date": "Fri, 15 Mar 2002 15:20:17 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: timestamp_part() bug?"
}
] |
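The semantics Tatsuo settles on above — the milliseconds field is the seconds field, fractional part included, times 1000, with no minutes or hours mixed in — and the epoch-for-bare-time behaviour Domingo reports can both be stated in a few lines of arithmetic (plain Python here purely to pin down the rules; the function names are invented for this sketch):

```python
def extract_milliseconds(seconds_field):
    # documented rule: the seconds field, including fractional parts,
    # multiplied by 1000 -- full seconds are included, minutes are not
    return seconds_field * 1000

def extract_epoch_from_time(hh, mm, ss):
    # for a zone-less time value: seconds since midnight, with no
    # time-zone offset folded in (the reported 7.2 bug effectively
    # added a zone offset, turning 34 into 3634)
    return hh * 3600 + mm * 60 + ss

print(extract_milliseconds(28.5))          # TIME '17:12:28.5' -> 28500.0
print(extract_epoch_from_time(0, 0, 34))   # TIME '00:00:34'   -> 34
```

The first function reproduces the manual's `EXTRACT(MILLISECONDS FROM TIME '17:12:28.5') = 28500` example; the second gives the 7.1.3 answer (34) that the thread agrees is the correct one for a time without time zone.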
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 27 February 2002 05:20\n> To: Dave Page\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Rename sequence bug/feature \n> \n> \n> Dave Page <dpage@vale-housing.co.uk> writes:\n> > I noticed in a post recently that it was possible to rename objects \n> > other than tables in pg_class using ALTER TABLE RENAME. I've now \n> > implemented this in pgAdmin II for views, sequences and indexes.\n> \n> > Today I've had cause to dump my test database and found a minor \n> > problem:\n> \n> > dumping database \"helpdesk\"...\n> > pg_dump: query to get data of sequence \"cat\" returned name \"dog\"\n> \n> Well, we could either add code to ALTER RENAME to hack the \n> sequence name stored in sequences, or we could remove that \n> check from pg_dump. I kinda lean to the latter myself; it \n> seems pretty useless.\n\nThat could potentially break any user apps that (for whatever bizarre\nreason) do a select sequence_name of course, though I can't imagine why\nanyone would do that. pgAdmin certainly doesn't.\n\nEither fix would be fine for me though...\n\nThanks, Dave.\n",
"msg_date": "Wed, 27 Feb 2002 08:34:28 -0000",
"msg_from": "Dave Page <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: Rename sequence bug/feature "
}
] |
[
{
"msg_contents": "\n> One of the things we've agreed to do in 7.3 is change COPY IN to remove\n> that assumption --- a line with too few fields (too few tabs) will draw\n> an error report instead of silently doing what's likely the wrong thing.\n\nBut there will be new syntax for COPY, that allows missing trailing columns. \nI hope.\n\nAndreas\n",
"msg_date": "Wed, 27 Feb 2002 10:19:40 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: COPY incorrectly uses null instead of an empty string in last\n\tfield"
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n>> One of the things we've agreed to do in 7.3 is change COPY IN to remove\n>> that assumption --- a line with too few fields (too few tabs) will draw\n>> an error report instead of silently doing what's likely the wrong thing.\n\n> But there will be new syntax for COPY, that allows missing trailing columns. \n> I hope.\n\nWhy?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Feb 2002 09:54:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: COPY incorrectly uses null instead of an empty string in last\n\tfield"
}
] |
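The 7.3 change discussed above — rejecting a COPY line with too few fields instead of silently NULL-padding it — amounts to a strictness switch in the per-line parser. A toy sketch of both behaviours (the function name and shape are invented for illustration; this is not PostgreSQL's actual copy code):

```python
def parse_copy_line(line, ncols, strict=True):
    """Split one COPY text-format line into fields.

    strict=True models the behaviour proposed for 7.3: a short line is an
    error. strict=False models the old behaviour: missing trailing columns
    silently become NULL (None)."""
    fields = line.rstrip("\n").split("\t")
    if len(fields) > ncols:
        raise ValueError("extra data after last expected column")
    if len(fields) < ncols:
        if strict:
            raise ValueError(
                "missing data for column %d" % (len(fields) + 1))
        fields += [None] * (ncols - len(fields))
    # \N is the COPY text-format spelling of NULL
    return [None if f == r"\N" else f for f in fields]
```

With `strict=False`, `"a\tb"` against a three-column table yields `["a", "b", None]`; with `strict=True` the same line raises, which is exactly the distinction between guessing NULL for the last field and reporting the likely data error.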
[
{
"msg_contents": "Okay...\n\nI'm very sceptic today.\n\nI'm making a survey on Oracle 8.0 on NT4 remplacement with a RedHat 7.2/PG 7.2\n\nThe customer gave me stuff to migrate, like scripts in Pro*C Oracle that I\nmigrated successfully with ECPG. Other stuff with Connect by statments, thanks\nto OpenACS guys, I migrated this Connect by statments too.\n\nBut finaly, with all my mind I explained all queries, made all good, I hope\neverything has be done. \n\nThe \"test\" is a big batch that computes stuffs in the database. Here are the\ntimings of both Oracle and PG (7.2) :\n\nOracle on NT 4 : 45 minuts to go , 1200 tps (yes one thousand and two hundred\ntps)\n\nLinux Red Hat 7.2 with PostgreSQL 7.2 : hours to go (statistically, 45 hours),\n80 tps (eighty tps).\n\n\nTests were made on the same machine, a pentium 3 600 MHz with 256 Megs RAM and\nRAID 5.\n\nWe formatted the server and insstalled linux stuff after..\n\nSo what you think of SUCH difference between Oracle/NT and Linux/PG ?\n\nI feel very bad in front of the customer, to tell there is a 1:15 ratio between\nOracle / Nt and Linux / PostgreSQL, since I'm real PG fan and DBA in french Open\nSouce company...\n\nThanks a lot for support.\n\n:-(((\n\n-- \nJean-Paul ARGUDO \n",
"msg_date": "Wed, 27 Feb 2002 15:46:18 +0100",
"msg_from": "Jean-Paul ARGUDO <jean-paul.argudo@idealx.com>",
"msg_from_op": true,
"msg_subject": "Oracle vs PostgreSQL in real life"
},
{
"msg_contents": "Jean-Paul ARGUDO wrote:\n> \n> Okay...\n> \n> I'm very sceptic today.\n> \n> I'm making a survey on Oracle 8.0 on NT4 remplacement with a RedHat 7.2/PG 7.2\n> \n> The customer gave me stuff to migrate, like scripts in Pro*C Oracle that I\n> migrated successfully with ECPG. Other stuff with Connect by statments, thanks\n> to OpenACS guys, I migrated this Connect by statments too.\n> \n> But finaly, with all my mind I explained all queries, made all good, I hope\n> everything has be done.\n> \n> The \"test\" is a big batch that computes stuffs in the database. Here are the\n> timings of both Oracle and PG (7.2) :\n> \n> Oracle on NT 4 : 45 minuts to go , 1200 tps (yes one thousand and two hundred\n> tps)\n> \n> Linux Red Hat 7.2 with PostgreSQL 7.2 : hours to go (statistically, 45 hours),\n> 80 tps (eighty tps).\n\nWow! That is huge. Ok, let me ask some questions:\n\nDid you do a \"vacuum analyze\" on the tables?\nIf you did not analyze the tables, it may be using table scans instead of\nindexes. That would make a huge difference. Also, it may choose poorly between\nhash joins and merge joins.\n\n\nDid you tune \"buffers\" in postgresql.conf?\nIf you have too few buffers, you will get no caching effect on the queries.\n",
"msg_date": "Wed, 27 Feb 2002 11:07:19 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
{
"msg_contents": "Hi Jean-Paul,\n\nI know you've probably done this, but I'll ask just in case.\n\nDid you tune the memory of the PostgreSQL server configuration?\n\ni.e. the postgresql.conf file?\n\nIf so, what are the values you changed from default?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nJean-Paul ARGUDO wrote:\n> \n> Okay...\n> \n> I'm very sceptic today.\n> \n> I'm making a survey on Oracle 8.0 on NT4 remplacement with a RedHat 7.2/PG 7.2\n> \n> The customer gave me stuff to migrate, like scripts in Pro*C Oracle that I\n> migrated successfully with ECPG. Other stuff with Connect by statments, thanks\n> to OpenACS guys, I migrated this Connect by statments too.\n> \n> But finaly, with all my mind I explained all queries, made all good, I hope\n> everything has be done.\n> \n> The \"test\" is a big batch that computes stuffs in the database. Here are the\n> timings of both Oracle and PG (7.2) :\n> \n> Oracle on NT 4 : 45 minuts to go , 1200 tps (yes one thousand and two hundred\n> tps)\n> \n> Linux Red Hat 7.2 with PostgreSQL 7.2 : hours to go (statistically, 45 hours),\n> 80 tps (eighty tps).\n> \n> Tests were made on the same machine, a pentium 3 600 MHz with 256 Megs RAM and\n> RAID 5.\n> \n> We formatted the server and insstalled linux stuff after..\n> \n> So what you think of SUCH difference between Oracle/NT and Linux/PG ?\n> \n> I feel very bad in front of the customer, to tell there is a 1:15 ratio between\n> Oracle / Nt and Linux / PostgreSQL, since I'm real PG fan and DBA in french Open\n> Souce company...\n> \n> Thanks a lot for support.\n> \n> :-(((\n> \n> --\n> Jean-Paul ARGUDO\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Thu, 28 Feb 2002 04:09:53 +1100",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
{
"msg_contents": "Jean-Paul ARGUDO <jean-paul.argudo@idealx.com> writes:\n\n> Okay...\n> \n> I'm very sceptic today.\n\nDid you adjust the shared buffers and other tuning settings for\nPostgres?\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n",
"msg_date": "27 Feb 2002 12:12:27 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
{
"msg_contents": "There is probably an explanation but \"computes stuffs\" doesn't provide \nmuch information to go with. Do you think you could boil this down to a \ntest case? Also, expand on what the batch file does, the size\ndatabase, and which interface you are using. I'm sure people would like \nto help, but there simply isn't enough information do derive and \nconclusions here.\n\nCheers,\n\nMarc\n\nJean-Paul ARGUDO wrote:\n\n> Okay...\n> \n> I'm very sceptic today.\n> \n> I'm making a survey on Oracle 8.0 on NT4 remplacement with a RedHat 7.2/PG 7.2\n> \n> The customer gave me stuff to migrate, like scripts in Pro*C Oracle that I\n> migrated successfully with ECPG. Other stuff with Connect by statments, thanks\n> to OpenACS guys, I migrated this Connect by statments too.\n> \n> But finaly, with all my mind I explained all queries, made all good, I hope\n> everything has be done. \n> \n> The \"test\" is a big batch that computes stuffs in the database. Here are the\n> timings of both Oracle and PG (7.2) :\n> \n> Oracle on NT 4 : 45 minuts to go , 1200 tps (yes one thousand and two hundred\n> tps)\n> \n> Linux Red Hat 7.2 with PostgreSQL 7.2 : hours to go (statistically, 45 hours),\n> 80 tps (eighty tps).\n> \n> \n> Tests were made on the same machine, a pentium 3 600 MHz with 256 Megs RAM and\n> RAID 5.\n> \n> We formatted the server and insstalled linux stuff after..\n> \n> So what you think of SUCH difference between Oracle/NT and Linux/PG ?\n> \n> I feel very bad in front of the customer, to tell there is a 1:15 ratio between\n> Oracle / Nt and Linux / PostgreSQL, since I'm real PG fan and DBA in french Open\n> Souce company...\n> \n> Thanks a lot for support.\n> \n> :-(((\n> \n> \n\n\n",
"msg_date": "Wed, 27 Feb 2002 12:40:59 -0500",
"msg_from": "Marc Lavergne <mlavergne-pub@richlava.com>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
{
"msg_contents": "\nThe batch is originally wrotten in Pro*C for Oracle under Windows NT.\n\nWe transalted it thanks to fabulous ecpg client interface.\n\nI posted details as asked before, hope this will help on some deeper analysis.\n\nThanks for your remarks.\n\n\n\n-- \nJean-Paul ARGUDO\n\n",
"msg_date": "Wed, 27 Feb 2002 18:46:50 +0100",
"msg_from": "Jean-Paul ARGUDO <jean-paul.argudo@idealx.com>",
"msg_from_op": true,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
{
"msg_contents": "On Wed, 2002-02-27 at 16:46, Jean-Paul ARGUDO wrote:\n> Okay...\n> \n> I'm very sceptic today.\n> \n> I'm making a survey on Oracle 8.0 on NT4 remplacement with a RedHat 7.2/PG 7.2\n> \n> The customer gave me stuff to migrate, like scripts in Pro*C Oracle that I\n> migrated successfully with ECPG. Other stuff with Connect by statments, thanks\n> to OpenACS guys, I migrated this Connect by statments too.\n> \n> But finaly, with all my mind I explained all queries, made all good, I hope\n> everything has be done. \n\nWhat was the postgresql.conf set to ?\n\n> \n> The \"test\" is a big batch that computes stuffs in the database. \n\nCould you run this batch in smaller chunks to see if PG is slow from the\nstart or does it slow down as it goes ?\n\n> Here are the timings of both Oracle and PG (7.2) :\n> \n> Oracle on NT 4 : 45 minuts to go , 1200 tps (yes one thousand and two hundred\n> tps)\n> \n> Linux Red Hat 7.2 with PostgreSQL 7.2 : hours to go (statistically, 45 hours),\n> 80 tps (eighty tps).\n\nWhat kind of tps are these ? \n\nI.e. what does each t do ?\n\n-------------\nHannu\n\n\n\n",
"msg_date": "27 Feb 2002 20:09:42 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
{
"msg_contents": "> What was the postgresql.conf set to ?\n\nI put parameters in another mail, please watch for it.\n\n> > The \"test\" is a big batch that computes stuffs in the database. \n> Could you run this batch in smaller chunks to see if PG is slow from the\n> start or does it slow down as it goes ?\n\nThe batch starts really fast and past 2 minuts, begins to slow down dramatically\nand never stops to get slower and slower\n\n \n> > Linux Red Hat 7.2 with PostgreSQL 7.2 : hours to go (statistically, 45 hours),\n> > 80 tps (eighty tps).\n> What kind of tps are these ? \n\nHere's what we have in output:\n\nThis is the WINDOWS NT4 / Oracle 8.0 ouput when the batch is totally finished:\n\nTime : 00:47:50\n\nTransaction : 25696\nItem : 344341\nTransaction (in milliseconds) : 111\nItem (in milliseconds) : 8\n\nErrors : 0\nWarnings : 0\nPLU not found : 0\nNOM not found : 0\nAlloc NOM : 739\nFree NOM : 739\nError 1555 : 0\n\nRead : 2093582\nWrite : 1772364\nRead/Write : 3865946\n\nFree memory (RAM) : 117396 Ko / 261548 Ko\n\nPLU SELECT : 344341\nNOM SELECT : 1377364\nT04 SELECT : 1840\nT01 INSERT : 593\nT01 UPDATE : 1376771\nT02 INSERT : 28810\nT02 UPDATE : 315531\nT03 INSERT : 41199\nT13 INSERT : 9460\nRJT INSERT : 0\nRJT SELECT : 0\n\n-------------------- \nBeware \"Transaction\" does not mean transaction.. a \"transaction\" here contains one ore\nmore \"item\", in the context of the application/database.\n\nWhat for real DML orders: 3.865.946 queries done in 47 min 50 secs. (the queries\nare reparted in many tables, look for detail couting under \"Free memory...\"\nline.. (a table name is 3 letters long)\n\nThats 1 347 queries per second... 
-ouch!\n\nThis is the Linux Red Hat 7.2 / PostgreSQL 7.2 port of the Pro*C program\nproducing the output.\n\nAs you'll understand, it is not the COMPLETE batch, we had to stop it..:\n\n\nTime : 00:16:26\n\nTransaction : 750\nItem : 7391\nTransaction (ms) : 1314\nItem (ms) : 133\n\nErrors : 1\nWarnings : 0\nPLU not found : 0\nNOM not found : 0\nAlloc NOM : 739\nFree NOM : 0\nError 1555 : 0\n\nRead : 45127.000\nWrite : 37849.000\nRead/Write : 82976.000\n\nPLU SELECT : 7391\nNOM SELECT : 29564\nT04 SELECT : 31\nT01 INSERT : 378\nT01 UPDATE : 29186\nT02 INSERT : 3385\nT02 UPDATE : 4006\nT03 INSERT : 613\nT13 INSERT : 281\nRJT INSERT : 0\nRJT SELECT : 0\n\n---------------- you see\n\nwe have 82,976 queries in 16 min 26 seconds, that's\n\n84 queries per second\n\n--\n\nnowhere near Oracle :-((\n\nVery bad for us, since if this customer drops Oracle for PG it could be really\nfantastic; this customer has much influence on the business....\n\n\nThanks to all of you for helping me so much.\n-- \nJean-Paul ARGUDO\n",
"msg_date": "Wed, 27 Feb 2002 19:21:46 +0100",
"msg_from": "Jean-Paul ARGUDO <jean-paul.argudo@idealx.com>",
"msg_from_op": true,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
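The throughput figures quoted in the message above can be checked arithmetically; a minimal sketch (plain Python, no database needed — the totals are the Read/Write counters from the two reported runs):

```python
# Sanity-check the reported throughput: total Read/Write queries divided
# by the batch wall-clock time, for the Oracle and PostgreSQL runs.
oracle_queries = 3865946            # Read/Write total, Oracle 8.0 / NT4 run
oracle_secs = 47 * 60 + 50          # Time : 00:47:50
pg_queries = 82976                  # Read/Write total, PostgreSQL 7.2 run
pg_secs = 16 * 60 + 26              # Time : 00:16:26

print(round(oracle_queries / oracle_secs))  # ~1347 queries/s, as reported
print(round(pg_queries / pg_secs))          # ~84 queries/s, as reported
```

Both computed rates match the figures given in the thread (1 347 vs 84 queries per second), a roughly 16:1 gap before any tuning.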
{
"msg_contents": "On Wed, 2002-02-27 at 23:21, Jean-Paul ARGUDO wrote:\n> > What was the postgresql.conf set to ?\n> \n> I put parameters in another mail, please watch for it.\n> \n> > > The \"test\" is a big batch that computes stuffs in the database. \n> > Could you run this batch in smaller chunks to see if PG is slow from the\n> > start or does it slow down as it goes ?\n> \n> The batch starts really fast and past 2 minuts, begins to slow down dramatically\n\nThis usually means that it is a good time to pause the batch and do a\n\"vacuum analyze\" (or just \"analyze\" for 7.2)\n\nIn 7.2 you can probably do the vacuum analyze in parallel, but it will\nlikely run faster when other backends are stopped.\n\n> and never stops to get slower and slower\n> > > Linux Red Hat 7.2 with PostgreSQL 7.2 : hours to go (statistically, 45 hours),\n> > > 80 tps (eighty tps).\n> > What kind of tps are these ? \n> \n> Here's what we have in output:\n> \n> This is the WINDOWS NT4 / Oracle 8.0 ouput when the batch is totally finished:\n> \n> Time : 00:47:50\n> \n> Transaction : 25696\n> Item : 344341\n> Transaction (in milliseconds) : 111\n> Item (in milliseconds) : 8\n> \n> Errors : 0\n> Warnings : 0\n> PLU not found : 0\n> NOM not found : 0\n> Alloc NOM : 739\n> Free NOM : 739\n> Error 1555 : 0\n> \n> Read : 2093582\n> Write : 1772364\n> Read/Write : 3865946\n> \n> Free memory (RAM) : 117396 Ko / 261548 Ko\n\nAssuming these must be interpreted as TABLE COMMAND : COUNT\n\n> PLU SELECT : 344341\n> NOM SELECT : 1377364\n> T04 SELECT : 1840\n> T01 INSERT : 593\n> T01 UPDATE : 1376771\n\nThis means that for postgres, with no vacuum in between, you will in\nfact have a 1.3M row table to search for 593 actual rows.\n\nRunning a (parallel) vacuum or vacuum full and possibly even reindex\nwill help a lot.\n\n\n\n\n> T02 INSERT : 28810\n> T02 UPDATE : 315531\n\nhere we have a 10/1 ratio of deleted to live records, all of which\nunfortunately have to be checked for visibility in postgres.\n\n> T03 
INSERT : 41199\n> T13 INSERT : 9460\n> RJT INSERT : 0\n> RJT SELECT : 0\n> \n> -------------------- \n> Beware \"Transaction\" does not mean transaction.. a \"transaction\" here contains one ore\n> more \"item\", in the context of the application/database.\n\nI dont know ECPG very well, but are you sure that you are not running in\nautocommit mode, i.e. that each command is not run in its own\ntransaction.\n\nOn the other end of spectrum - are you possibly running all the queries\nin one transaction ?\n\n> What for real DML orders: 3.865.946 queries done in 47 min 50 secs. (the queries\n> are reparted in many tables, look for detail couting under \"Free memory...\"\n> line.. (a table name is 3 letters long)\n> \n> Thats 1 347 queries per second... -ouch!\n\nHow complex are these queries ?\n\nIf much time is spent by backend on optimizing (vs. executing), then you\ncould win by rewriting some of these as PL/SQL or C procedures that do a\nprepare/execute using SPI and use a stored plan.\n\n> This is the Linux Red Hat 7.2 / PostgreSQL 7.2 port of the Pro*C program\n> producing the output\n> \n> As you'll understand, it is not the COMPLETE batch, we had to stop it..:\n\nCan you run VACUUM ANALYZE and continue ?\n\n> Time : 00:16:26\n> \n> Transaction : 750\n> Item : 7391\n> Transaction (ms) : 1314\n> Item (ms) : 133\n> \n> Errors : 1\n> Warnings : 0\n> PLU not found : 0\n> NOM not found : 0\n> Alloc NOM : 739\n> Free NOM : 0\n> Error 1555 : 0\n> \n> Read : 45127.000\n> Write : 37849.000\n> Read/Write : 82976.000\n> \n> PLU SELECT : 7391\n> NOM SELECT : 29564\n> T04 SELECT : 31\n> T01 INSERT : 378\n\nWas the T01 table empty at the start (does it have 378 rows) ?\n\n> T01 UPDATE : 29186\n\ncould you get a plan for an update on T01 at this point\n\ndoes it look ok ?\n\ncan you make it faster by manipulating enable_xxx variables ?\n\n> T02 INSERT : 3385\n> T02 UPDATE : 4006\n> T03 INSERT : 613\n> T13 INSERT : 281\n> RJT INSERT : 0\n> RJT SELECT : 0\n> \n> ---------------- 
you see\n> \n> we have 82.976 queries in 16 min 26 seconds thats a \n> \n> 84 queries per second\n> \n> --\n> \n> definitely nothing to do with Oracle :-((\n\nWas oracle out-of-box or did you (or someone else) tune it too ?\n\n> Very bad for us since if this customers kicks Oracle to get PG, it can be really\n> fantastic, this customer has much influence on the business....\n\n--------------\nHannu\n\n",
"msg_date": "28 Feb 2002 01:48:17 +0500",
"msg_from": "Hannu Krosing <hannu@krosing.net>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
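The dead-tuple arithmetic in the message above can be made explicit. In PostgreSQL's MVCC, every UPDATE writes a new row version and leaves the old one behind until VACUUM reclaims it, so without a vacuum the T01 table accumulates one version per update:

```python
# Rough arithmetic for the T01 analysis above: each UPDATE leaves a
# dead row version behind until VACUUM runs, so the table a sequential
# scan must wade through is inserts + updates, not just the live rows.
t01_inserts = 593          # live rows at the end of the batch
t01_updates = 1376771      # updates performed against them

row_versions = t01_inserts + t01_updates
print(row_versions)                    # ~1.38M row versions on disk
print(row_versions // t01_inserts)     # ~2322 versions per live row
```

This is why the batch "never stops to get slower and slower": every query against T01 pays for all the dead versions until a vacuum removes them.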
{
"msg_contents": "On Wed, 2002-02-27 at 23:21, Jean-Paul ARGUDO wrote:\n> > What was the postgresql.conf set to ?\n> \n> I put parameters in another mail, please watch for it.\n> \n> > > The \"test\" is a big batch that computes stuffs in the database. \n> > Could you run this batch in smaller chunks to see if PG is slow from the\n> > start or does it slow down as it goes ?\n> \n> The batch starts really fast and past 2 minuts, begins to slow down dramatically\n> and never stops to get slower and slower\n> \n\nI did a small test run on my home computer (Celeron 350, IDE disks,\nuntuned 7.2 on RH 7.2) \n\nI made a small table (int,text) with primary key on int and filled it\nwith values 1-512 for int.\n\nthen I ran a python script that updated 10000 random rows in batches of\n10 updates.\n\nthe first run took \n\na) 1.28 - 112 tps \n\nas it used seq scans\n\nthen I ran VACUUM ANALYZE and the next runs were\n\n1. 24 sec - 416 tps\n2. 43 sec - 232 tps\n3. 71 sec - 140 tps\n\nthen I tried the same query and ran vacuum manually in another window\nevery 5 sec.\n\nthe result was similar to 1 - 24.5 sec\n\nrunning vacuum every 10 sec slowed it to 25.1 sec, running every 3 sec\nto 24.3 sec. Running vacuum in a tight loop slowed the test down to 30.25\nsec.\n\n\n-------------------------\nHannu\n\n\n\n",
"msg_date": "28 Feb 2002 02:08:37 +0500",
"msg_from": "Hannu Krosing <hannu@krosing.net>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
{
"msg_contents": "Jean-Paul ARGUDO wrote:\n> This is the Linux Red Hat 7.2 / PostgreSQL 7.2 port of the Pro*C program\n> producing the output\n> \n> As you'll understand, it is not the COMPLETE batch, we had to stop it..:\n> \n> Time : 00:16:26\n> \n> Transaction : 750\n> Item : 7391\n> Transaction (ms) : 1314\n> Item (ms) : 133\n> \n> Errors : 1\n> Warnings : 0\n> PLU not found : 0\n> NOM not found : 0\n> Alloc NOM : 739\n> Free NOM : 0\n> Error 1555 : 0\n> \n> Read : 45127.000\n> Write : 37849.000\n> Read/Write : 82976.000\n> \n> PLU SELECT : 7391\n> NOM SELECT : 29564\n> T04 SELECT : 31\n> T01 INSERT : 378\n> T01 UPDATE : 29186\n\nAre you updating 29186 records in a table here? If so, is this table used in\nthe following queries?\n\n\n> T02 INSERT : 3385\n> T02 UPDATE : 4006\n\nDitto here, is T02 updated and then used in subsequent queries?\n\n> T03 INSERT : 613\n> T13 INSERT : 281\n> RJT INSERT : 0\n> RJT SELECT : 0\n\nAre these queries run in this order, or are the inserts/updates/selects\nintermingled?\n\nA judicial vacuum on a couple of the tables may help.\n\nAlso, I noticed you had 19000 buffers. I did some experimentation with buffers\nand found more is not always better. Depending on the nature of your database,\n2048~4096 seem to be a sweet spot for some of he stuff that I do.\n\nAgain, have you \"analyzed\" the database? PostgreSQL will do badly if you have\nnot analyzed. (Oracle also benefits from analyzing, depending on the nature of\nthe data.)\n\nHave you done an \"explain\" on the queries used in your batch? You may be able\nto see what's going on.\n",
"msg_date": "Wed, 27 Feb 2002 16:44:21 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
{
"msg_contents": "Hi all,\n\nAs I wrote it before there, it is an ECPG script that runs with bad perfs.\nI put back trace/ notices/debug mode on the server.\n\nHere is an example of what does the debug doesnt stop to do:\n\nc... stuffs are CURSORS\n\nit seems that on every commit, the cursor is closed\n\n[... snip ...]\nNOTICE: Closing pre-existing portal \"csearcht04\"\nNOTICE: Closing pre-existing portal \"csearcht30\"\nNOTICE: Closing pre-existing portal \"cfindplu\"\nNOTICE: Closing pre-existing portal \"cfindplu\"\nNOTICE: Closing pre-existing portal \"cfindplu\"\nNOTICE: Closing pre-existing portal \"cfindplu\"\nNOTICE: Closing pre-existing portal \"cfindplu\"\nNOTICE: Closing pre-existing portal \"cfindplu\"\nNOTICE: Closing pre-existing portal \"cfindplu\"\nNOTICE: Closing pre-existing portal \"cfindplu\"\nNOTICE: Closing pre-existing portal \"csearcht04\"\nNOTICE: Closing pre-existing portal \"csearcht30\"\nNOTICE: Closing pre-existing portal \"cfindplu\"\nNOTICE: Closing pre-existing portal \"cfindplu\"\nNOTICE: Closing pre-existing portal \"cfindplu\"\nNOTICE: Closing pre-existing portal \"cfindplu\"\nNOTICE: Closing pre-existing portal \"cfindplu\"\nNOTICE: Closing pre-existing portal \"cfindplu\"\nNOTICE: Closing pre-existing portal \"cfindplu\"\n[... snip ...]\n\nc... stuffs are CURSORS\n\nit seems that on every commit, the cursor is closed... and re-opened with new\nvariables'values \n\nbtw, as many asked me, queries are VERY simple, there is only a few queries.\nEach query works on one table at a time. no joins for example. Only massive bulk\nwork with CURSORS.\n\nAny way to avoid closing/opening of cursors? \nAny tip on porting the best way cursors?;.\n\nthanks in advance.\n\nPS: I am currently testing vacuums between the script to pause the data\nmanipulation, make a vacuum analyze and continue the treatments.\n\n-- \nJean-Paul ARGUDO\n",
"msg_date": "Thu, 28 Feb 2002 10:32:48 +0100",
"msg_from": "Jean-Paul ARGUDO <jean-paul.argudo@idealx.com>",
"msg_from_op": true,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
{
"msg_contents": "\n----- Original Message -----\nFrom: \"Jean-Paul ARGUDO\" <jean-paul.argudo@idealx.com>\n>\n> it seems that on every commit, the cursor is closed... and re-opened with\nnew\n> variables'values\n\nI think that currently the only way to reuse query plans would be migrating\nsome\nof your logic to the backend and using SPI prepared statements.\n\n> btw, as many asked me, queries are VERY simple, there is only a few\nqueries.\n> Each query works on one table at a time. no joins for example. Only\nmassive bulk\n> work with CURSORS.\n\nAgain, can't some of it be moved to the backend, either using PL/PgSQL or C (or\npltcl, plperl, plpython ;)\n\n> PS: I am currently testing vacuums between the script to pause the data\n> manipulation, make a vacuum analyze and continue the treatments.\n\nStarting with 7.2 you can also run both analyze and simple vacuum in\nparallel to the main app.\n\nYou will most likely need to run analyze once after tables are more or less\nfilled and then a parallel\nvacuum every 5-30 sec to avoid tables growing too big. You could limit\nvacuum to only those\ntables that see a lot of updating (or delete/insert).\n\n-----------\nHannu\n\n",
"msg_date": "Thu, 28 Feb 2002 14:46:38 +0200",
"msg_from": "\"Hannu Krosing\" <hannu@itmeedia.ee>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
{
"msg_contents": "----- Original Message -----\nFrom: \"Jean-Paul ARGUDO\" <jean-paul.argudo@idealx.com>\n>\n>\n> PS: I am currently testing vacuums between the script to pause the data\n> manipulation, make a vacuum analyze and continue the treatments.\n>\n\nI ran a small test (Pg 7.2, RH 7.1, Athlon 850, 512Mb) that created a small\ntable of 2 fields with primary key on first,\nfilled it with 768 values and then run the following script:\n\n--------------------------------\n#!/usr/bin/python\n\nimport pg, random\ncon = pg.connect()\nq = \"update t01 set val='bum' where i = %s\"\nfor trx in range(5000):\n con.query('begin')\n for cmd in range(20):\n rn = random.randint(1,768)\n con.query(q % rn)\n con.query('commit')\n--------------------------------\n\nwhen run as is it made average of 152 updates/sec\n[hannu@taru hannu]$ time ./abench.py\n\nreal 10m55.034s\nuser 0m27.270s\nsys 0m4.700s\n\nafter doing a vacuum full i run it together with a parallel process\nthat was a simple loop sleeping 5 sec and then doing vacuum\n\n--------------------------------\n#!/usr/bin/python\n\nimport time, pg\n\ncon = pg.connect()\n\nfor trx in range(5000):\n for cmd in range(20):\n time.sleep(5)\n print 'vacuum'\n con.query('vacuum')\n print 'done!'\n--------------------------------\n\nThe same script runs now at average 917\n[hannu@taru hannu]$ time ./abench.py\n\nreal 1m48.416s\nuser 0m16.840s\nsys 0m3.300s\n\nSo here we have a case where the new vacuum can really save a day !\n\nI also tried other vacuuming intervals and it seems that ~4 sec is the best\nfor this case\n\nhere are test results\ninterval - time - updates per sec\n\n2 sec - 1.53.5 - 881\n3 sec - 1.49.6 - 912\n4 sec - 1.48.0 - 925\n5 sec - 1.48.4 - 922\n6 sec - 1.49.7 - 911\n10 sec - 1.56.8 - 856\nno vac - 10.55.0 - 152\n--------------\nHannu\n\n\n\n\n\n\n\n",
"msg_date": "Thu, 28 Feb 2002 15:24:15 +0200",
"msg_from": "\"Hannu Krosing\" <hannu@itmeedia.ee>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
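Hannu's interval table above is easy to verify: his script performs 5000 transactions of 20 updates each (100 000 updates total), so updates/sec is just that total divided by the wall-clock time. A minimal sketch (plain Python, timings transcribed from the table as minutes and seconds):

```python
# Reproduce the updates/sec column of the vacuum-interval table above.
# Each run is 5000 transactions x 20 updates = 100000 updates.
TOTAL_UPDATES = 5000 * 20

runs = {          # vacuum interval -> (minutes, seconds) wall-clock
    "2 sec":  (1, 53.5),
    "3 sec":  (1, 49.6),
    "4 sec":  (1, 48.0),
    "5 sec":  (1, 48.4),
    "6 sec":  (1, 49.7),
    "10 sec": (1, 56.8),
    "no vac": (10, 55.0),
}

for interval, (m, s) in runs.items():
    secs = m * 60 + s
    print(interval, int(TOTAL_UPDATES // secs))
```

The computed rates (881, 912, 925, 922, 911, 856, 152) match the table, with the ~4 sec interval as the sweet spot and a ~6x gain over the no-vacuum run.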
{
"msg_contents": "> > it seems that on every commit, the cursor is closed... and re-opened with\n> > new\n> > variables'values\n> \n> I think that currently the only way to reuse query plans would be migrating\n> some\n> of your logic to backend and using SPI prepared statements.\n> \n> > btw, as many asked me, queries are VERY simple, there is only a few\n> queries.\n> > Each query works on one table at a time. no joins for example. Only\n> massive bulk\n> > work with CURSORS.\n> \n> Again, can't some of it be moved to backend, either using PL/PgSQL or C (or\n> pltcl, plperl, plpython ;)\n> \n\n\nOK.\n\n\nWe read all of \"Chapter 21. Server Programming Interface\" in the SPI docs.\n\nThis seems _really_ interesting; it reminds me of outline statements in\nOracle.\n\nSo:\n\n1) Where can we find some sample code? Can SPI statements be called from\n/into ECPG?\n\n2) If prepared statements and stored execution plans exist, why can't those be used\nfrom any client interface or simple SQL?\n\n3) You tell us we can \"move to the backend\" some queries: do you mean we would\nget better performance with stored functions in plpgsql? \n\nThanks a lot Hannu, I promise to stop soon with the questions :-)\n\nThis is _so_ important for us..\n\nBest regards & wishes.\n\n--\nJean-Paul ARGUDO\n",
"msg_date": "Thu, 28 Feb 2002 14:39:47 +0100",
"msg_from": "Jean-Paul ARGUDO <jean-paul.argudo@idealx.com>",
"msg_from_op": true,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
{
"msg_contents": "On Thu, Feb 28, 2002 at 02:39:47PM +0100, Jean-Paul ARGUDO wrote:\n\n> 2) if prepared statments and stored execution plan exist, why can't thos be used\n> from any client interface or simple sql?\n\n \"Execute an already-parsed query plan\" exists in the SPI layer only. \n PostgreSQL has no SQL interface for this -- except my experimental \n patch for 7.0 (I sometimes think about porting it to the latest PostgreSQL\n release, but I haven't had the motivation to do it...)\n\n> 3) You tell us we can \"move to the backend\" some queries: do you mean we would\n> have better performances with stored functions in plpgsql? \n\n You needn't use plpgsql only. You can use C/C++, Tcl, Perl, Python.\n IMHO the best performance comes from C + SPI + a stored execution plan.\n\n But don't forget that the path of a query through PostgreSQL is not the query \n parser only. \"Execute an already-parsed query plan\" pays off\n if you run some query really often and that query spends a long time\n in the parser....\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Thu, 28 Feb 2002 14:58:22 +0100",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
{
"msg_contents": "On Thu, 2002-02-28 at 15:58, Karel Zak wrote:\n> On Thu, Feb 28, 2002 at 02:39:47PM +0100, Jean-Paul ARGUDO wrote:\n> \n> > 2) if prepared statments and stored execution plan exist, why can't thos be used\n> > from any client interface or simple sql?\n> \n> There is \"execute already parsed query plan\" in SPI layout only. \n> The PostgreSQL hasn't SQL interface for this -- except my experimental \n> patch for 7.0 (I sometime think about port it to latest PostgreSQL\n> releace, but I haven't relevant motivation do it...)\n\nI did some testing \n\n5000*20 runs of update on non-existing key\n\n(send query+parse+optimise+update 0 rows)\n\n[hannu@taru abench]$ time ./abench.py 2>/dev/null \n\nreal 0m38.992s\nuser 0m6.590s\nsys 0m1.860s\n\n5000*20 runs of update on random existing key\n\n(send query+parse+optimise+update 1 row)\n\n[hannu@taru abench]$ time ./abench.py 2>/dev/null \n\nreal 1m48.380s\nuser 0m17.330s\nsys 0m2.940s\n\n\nthe backend wallclock time for first is 39.0 - 6.6 = 32.4\nthe backend wallclock time for second is 108.4 - 17.3 = 91.1\n\nso roughly 1/3 of time is spent on \n\ncommunication+parse+optimize+locate\n\nand 2/3 on actually updating the tuples\n\nif we could save half of parse/optimise time by saving query plans, then\nthe backend performance would go up from 1097 to 100000/(91.1-16.2)=1335\nupdates/sec.\n\n------------------\n\nAs an ad hoc test for parsing-planning-optimising costs I did the\nfollowing\n\nbackend time for \"explain update t01 set val='bum'\"\n30.0 - 5.7 = 24.3\n\n[hannu@taru abench]$ time ./abench.py 2>/dev/null \n\nreal 0m30.038s\nuser 0m5.660s\nsys 0m2.800s\n\n\nbackend time for \"explain update t01 set val='bum' where i = %s\"\n39.8 - 8.0 = 31.8\n\n[hannu@taru abench]$ time ./abench.py 2>/dev/null \n\nreal 0m39.883s\nuser 0m8.000s\nsys 0m2.620s\n\n\nso adding \"where i=n\" to a query made\n(parse+plan+show plan) run 1.3 times slower\n\nsome of it must be communication overhead, but sure \nsome is 
parsing/planning/optimising time.\n\n--------------\nHannu\n\n\n",
"msg_date": "28 Feb 2002 18:21:34 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
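The decomposition in Hannu's message above can be reproduced arithmetically; a minimal sketch (plain Python) of the same reasoning, where wall-clock "real" minus client "user" time approximates time spent in the backend:

```python
# Reproduce the backend-time arithmetic from the analysis above
# (all times in seconds, 100000 updates per run).
no_row_backend = 39.0 - 6.6     # update matching 0 rows: mostly parse+plan
one_row_backend = 108.4 - 17.3  # update matching 1 row: parse+plan+execute

current_tps = 100000 / one_row_backend
# Hypothetical saving: shave off half of the ~32.4s parse/plan share.
projected_tps = 100000 / (one_row_backend - no_row_backend / 2)

print(round(no_row_backend, 1), round(one_row_backend, 1))  # 32.4 91.1
print(int(current_tps), int(projected_tps))                 # 1097 1335
```

So roughly a third of backend time goes to communication + parse + plan, and halving the parse/plan share would lift throughput from ~1097 to ~1335 updates/sec — the motivation for the stored-plan discussion in this thread.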
{
"msg_contents": " Jean-Paul ARGUDO wrote:\n\n>As I wrote it before there, it is an ECPG script that runs with bad perfs.\n>I put back trace/ notices/debug mode on the server.\n>\n>Here is an example of what does the debug doesnt stop to do:\n>\n>c... stuffs are CURSORS\n>\n>it seems that on every commit, the cursor is closed\n>\n>[... snip ...]\n>NOTICE: Closing pre-existing portal \"csearcht04\"\n>\n...\n\n>NOTICE: Closing pre-existing portal \"cfindplu\"\n>NOTICE: Closing pre-existing portal \"cfindplu\"\n>NOTICE: Closing pre-existing portal \"cfindplu\"\n>[... snip ...]\n>\n>c... stuffs are CURSORS\n>\n>it seems that on every commit, the cursor is closed... and re-opened with new\n>variables'values \n>\nBy default, Postgres executes transactions in autocommit mode.\nThis means that each statement is executed in its own transaction and a \ncommit is performed\nat the end of the statement, which is much slower than executing all \nstatements inside a\nbegin ... commit block.\nTo disable the autocommit mode you have to compile the ECPG script with \nthe -t option.\nI hope that helps.\n\nRegards,\n\nAntonio Sergio\n\n",
"msg_date": "Thu, 28 Feb 2002 13:18:29 -0500",
"msg_from": "Antonio Sergio de Mello e Souza <asergioz@bol.com.br>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
{
"msg_contents": "On Thu, Feb 28, 2002 at 10:32:48AM +0100, Jean-Paul ARGUDO wrote:\n> As I wrote it before there, it is an ECPG script that runs with bad perfs.\n> ...\n> it seems that on every commit, the cursor is closed\n\nCursors shouldn't be closed, but prepared statements are deallocated on each\ncommit. AFAIK this is what the standard says.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Fri, 1 Mar 2002 09:05:07 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
{
"msg_contents": "On Thu, Feb 28, 2002 at 01:18:29PM -0500, Antonio Sergio de Mello e Souza wrote:\n> By default, Postgres executes transactions in autocommit mode.\n\nThat of course is true.\n\n> To disable the autocommit mode you have to compile the ECPG script with \n> the -t option.\n\nThat unfortunately is not. It's just the opposite way. ecpg per default uses\nthe Oracle way and issues a BEGIN after each commit automatically. Thus you\nonly have to specify COMMIT every now and then to end the transaction. If\nyou use \"-t\" or SET AUTOCOMMIT ON, then you run in the normal PostgreSQL\nenvironment and get each command inside its own transaction. To manually\nstart and end transactions you have to use \"-t\" resp. EXEC SQL SET AUTOCOMMIT ON\nand then issue a BEGIN.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Fri, 1 Mar 2002 09:12:41 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
{
"msg_contents": "On Thu, Feb 28, 2002 at 06:21:34PM +0200, Hannu Krosing wrote:\n> On Thu, 2002-02-28 at 15:58, Karel Zak wrote:\n> > On Thu, Feb 28, 2002 at 02:39:47PM +0100, Jean-Paul ARGUDO wrote:\n> > \n> > > 2) if prepared statments and stored execution plan exist, why can't thos be used\n> > > from any client interface or simple sql?\n> > \n> > There is \"execute already parsed query plan\" in SPI layout only. \n> > The PostgreSQL hasn't SQL interface for this -- except my experimental \n> > patch for 7.0 (I sometime think about port it to latest PostgreSQL\n> > releace, but I haven't relevant motivation do it...)\n> \n> I did some testing \n> \n> 5000*20 runs of update on non-existing key\n> \n> (send query+parse+optimise+update 0 rows)\n> \n> [hannu@taru abench]$ time ./abench.py 2>/dev/null \n> \n> real 0m38.992s\n> user 0m6.590s\n> sys 0m1.860s\n> \n> 5000*20 runs of update on random existing key\n> \n> (send query+parse+optimise+update 1 row)\n> \n> [hannu@taru abench]$ time ./abench.py 2>/dev/null \n> \n> real 1m48.380s\n> user 0m17.330s\n> sys 0m2.940s\n> \n> \n> the backend wallclock time for first is 39.0 - 6.6 = 32.4\n> the backend wallclock time for second is 108.4 - 17.3 = 91.1\n> \n> so roughly 1/3 of time is spent on \n> \n> communication+parse+optimize+locate\n> \n> and 2/3 on actually updating the tuples\n> \n> if we could save half of parse/optimise time by saving query plans, then\n> the backend performance would go up from 1097 to 100000/(91.1-16.2)=1335\n> updates/sec.\n\n It depend on proportion between time-in-parser and time-in-executor. If\n your query spend a lot of time in parser and optimizer is a query plan \n cache interesting for you. Because the PostgreSQL has dynamic functions \n and operators the time in parser can be for some queries very interesting.\n \n We have good notion about total queries time now (for example from\n bench tests), but we haven't real time statistics about path-of-query\n in backend. 
How much time does a query spend in the parser, how long in the \n optimizer or executor? (... maybe use profiling, but I'm not sure\n about it). All my suggestions for memory management were based on the results\n of debug messages I wrote into mmgr. And for example Tom was \n surprised by the frequent realloc usage. What I want to say is: we need more and more\n data from the code, else we can't optimize it well ;-)\n \n suggestion: \"TODO: solid path-of-query time profiling for developers\" :-)\n \n Karel\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Fri, 1 Mar 2002 10:28:46 +0100",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
{
"msg_contents": "Michael Meskes wrote:\n\n>On Thu, Feb 28, 2002 at 01:18:29PM -0500, Antonio Sergio de Mello e Souza wrote:\n>\n>>By default, Postgres executes transactions in autocommit mode.\n>>\n>That of course is true.\n>\n>>To disable the autocommit mode you have to compile the ECPG script with \n>>the -t option.\n>>\n>That unfortunately is not. It's just the opposite way. ecpg per default uses\n>the Oracle way and issues a BEGIN after each commit automatically. Thus you\n>only have to specify COMMIT every now and then to end the transaction. If\n>you use \"-t\" or SET AUTOCOMMIT ON, then you run in the normal PostgreSQL\n>environment and get each command inside its own transaction. To manually\n>start and end transactions you have to use \"-t\" resp. EXEC SQL SET AUTOCOMMIT ON\n>and then issue a BEGIN.\n>\nMany thanks for the explanation! Sorry for the wrong advice... :-(\n\nRegards,\n\nAntonio Sergio\n\n\n\n\n",
"msg_date": "Fri, 01 Mar 2002 10:14:34 -0500",
"msg_from": "Antonio Sergio de Mello e Souza <asergioz@bol.com.br>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
{
"msg_contents": "> > Oracle on NT 4 : 45 minuts to go , 1200 tps (yes one thousand and two hundred\n> > tps)\n> > \n> > Linux Red Hat 7.2 with PostgreSQL 7.2 : hours to go (statistically, 45 hours),\n> > 80 tps (eighty tps).\n\nWell... Where to start?\n\nWe work as a team of two. The other one is a senior C/C++ coder. He\nmailed me a remark about the datatypes in the database. Here is what he sent\nme:\n\nOur database has different datatypes; here is a count of distinct\ndatatypes in all tables:\n\n197 numeric(x)\n 19 numeric(x,2)\n 2 varchar(x)\n 61 char(x)\n 36 datetime\n\nHe asked me about numeric(x) and questioned me about how PG manages\nthe NUMERIC types. \n\nI gave him a pointer to \"numeric.c\" in the PG sources.\n\nI analyzed this source and found that NUMERIC types are much more\nexpensive than simple INTEGERs.\n\nI really fell off my chair.. :-( I was sure that, given how good PG is,\nwhen NUMERIC(x) columns are declared they would be translated to INTEGER\n(int2, 4 or 8, whatever...).\n\nSo, I made a pg_dump of the current database and did some Perl\nreplacements of NUMERIC(x,0) with INTEGER.\n\nI loaded the database and launched the batch: the results are REALLY\nIMPRESSIVE; here is what I have:\n\n((it is a port of Oracle/WinNT stuff to PostgreSQL/Red Hat 7.2)):\n\n\t\tOracle\t\tPG72 with NUMERICs\tPG72 with INTEGERS\n--------------------------------------------------------------------------\nsample\nconnect by\nquery ported 350ms 750ms 569ms \nto PG\n(thanks to \nOpenACS code!)\n--------------------------------------------------------------------------\nsample \"big\"\nquery with\nconnect bys\t 3 min 30s 8 min 40s 5 min 1s\nand many \nsub-queries\n--------------------------------------------------------------------------\nBig Batch \ntreatment 1300 queries/s 80 queries/s 250 queries/s\nqueries\n\nPRO*C to\t 45 min to go ~4 to 6 DAYS not yet \nECPG\t to go tested fully\n\nRatio 1:1 1:21 not yet ..\n 21 times 
slower!\n\n--------------------------------------------------------------------------\n((but this batch will still be rewritten in pure C + libpq + SPI,\n so we think we'll have better results again))\n\n\nSo as you see, DATA TYPES are REALLY important, as I wrote in a\ntechdocs article (I should have thought of this earlier)\n\nThen?\n\nI'll keep you informed of what's going on with this Oracle/WinNT to PG/Linux port :-))\n\nAnd we thank you _very_ much for all the help you gave us.\n\nBest regards and wishes,\n\n-- \nJean-Paul ARGUDO\n",
"msg_date": "Fri, 1 Mar 2002 19:44:10 +0100",
"msg_from": "Jean-Paul ARGUDO <jean-paul.argudo@IDEALX.com>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life : NEWS!!!"
},
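The cost gap reported above comes from NUMERIC being arbitrary-precision arithmetic on digit arrays (see numeric.c) while INTEGER is a native machine word. As a rough analogy only — this is Python's `decimal` module vs native `int`, not PostgreSQL's actual code — the same arithmetic can be timed with both representations:

```python
import time
from decimal import Decimal

def bench(values):
    # Sum the same sequence of numbers 200 times and time it.
    start = time.perf_counter()
    total = values[0] - values[0]   # a zero of the right type
    for _ in range(200):
        for v in values:
            total += v
    return total, time.perf_counter() - start

ints = list(range(1000))
decs = [Decimal(i) for i in range(1000)]

int_total, int_time = bench(ints)
dec_total, dec_time = bench(decs)

# Same answer either way; the arbitrary-precision path just pays more per op.
assert int_total == dec_total == 200 * sum(range(1000))
print(f"int: {int_time:.4f}s  decimal: {dec_time:.4f}s")
```

On a typical build the decimal run is several times slower, which mirrors (very loosely) why swapping NUMERIC(x,0) columns for INTEGER moved the batch from 80 to 250 queries/s.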
{
"msg_contents": "On Fri, Mar 01, 2002 at 10:14:34AM -0500, Antonio Sergio de Mello e Souza wrote:\n> Many thanks for the explanation! Sorry for the wrong advice... :-(\n\nNo problem. I have to apologize for the lack of docs. :-)\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Fri, 1 Mar 2002 20:13:39 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
{
"msg_contents": "On March 1, 2002 01:44 pm, Jean-Paul ARGUDO wrote:\n> I analyzed this source and found that NUMERIC types are much most\n> expensive than simple INTEGER.\n>\n> I really fall on the floor.. :-( I was sure with as good quality PG is,\n> when NUMERIC(x) columns are declared, It would be translated in INTEGER\n> (int2, 4 or 8, whatever...).\n>\n> So, I made a pg_dump of the current database, made some perl\n> remplacements NUMERIC(x,0) to INTEGER.\n>\n> I loaded the database and launched treatments: the results are REALLY\n> IMPRESIVE: here what I have:\n\nAny chance you can try it with the MONEY type? It does use integers to\nstore the data. It isn't really designed for general numeric use but it\nwould be interesting to see how it fares.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sat, 2 Mar 2002 08:19:23 -0500",
"msg_from": "\"D'Arcy J.M. Cain\" <darcy@druid.net>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life : NEWS!!!"
},
{
"msg_contents": "\"D'Arcy J.M. Cain\" <darcy@druid.net> writes:\n\n> Any chance you can try it with the MONEY type? It does use integers to\n> store the data. It isn't really designed for general numeric use but it\n> would be interesting to see how it fares.\n\nI think the MONEY type is deprecated...\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n",
"msg_date": "02 Mar 2002 09:02:12 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life : NEWS!!!"
},
{
"msg_contents": "On March 2, 2002 09:02 am, Doug McNaught wrote:\n> \"D'Arcy J.M. Cain\" <darcy@druid.net> writes:\n> > Any chance you can try it with the MONEY type? It does use integers to\n> > store the data. It isn't really designed for general numeric use but it\n> > would be interesting to see how it fares.\n>\n> I think the MONEY type is deprecated...\n\nI keep hearing that but I use it heavily and I hope it never goes away. The\nNUMERIC type is nice but I still think that the MONEY type works well for\ncertain things. I bet it will be shown to be more efficient. Certainly\nit has limitations but within those limitations it works well.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sat, 2 Mar 2002 10:47:36 -0500",
"msg_from": "\"D'Arcy J.M. Cain\" <darcy@druid.net>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life : NEWS!!!"
},
{
"msg_contents": "On Fri, 2002-03-01 at 23:44, Jean-Paul ARGUDO wrote:\n> > > Oracle on NT 4 : 45 minuts to go , 1200 tps (yes one thousand and two hundred\n> > > tps)\n> > > \n> > > Linux Red Hat 7.2 with PostgreSQL 7.2 : hours to go (statistically, 45 hours),\n> > > 80 tps (eighty tps).\n> \n> Well... Where to start?\n> \n> We work on a team of two. The other one is a C/C++ senior coder. He\n> mailed me a remark about datatypes on the database. Here is what he sent\n> me:\n> \n> Our database has different datatypes, here are a count of distinct\n> datatypes in all tables:\n> \n> 197 numeric(x)\n> 19 numeric(x,2)\n> 2 varchar(x)\n> 61 char(x)\n> 36 datetime\n> \n> He asked me about numeric(x) and he questioned my about how PG managed\n> the NUMERIC types. \n> \n> I gave him a pointer on \"numeric.c\" in the PG srcs.\n> \n> I analyzed this source and found that NUMERIC types are much most\n> expensive than simple INTEGER.\n> \n> I really fall on the floor.. :-( I was sure with as good quality PG is,\n> when NUMERIC(x) columns are declared, It would be translated in INTEGER\n> (int2, 4 or 8, whatever...).\n\nPostgres does not do any silent type replacing based on data type max\nlength.\n\n> So, I made a pg_dump of the current database, made some perl\n> remplacements NUMERIC(x,0) to INTEGER.\n> \n> I loaded the database and launched treatments: the results are REALLY\n> IMPRESIVE: here what I have:\n> \n> ((it is a port of Oracle/WinNT stuff to PostgreSQL/Red Hat 7.2)):\n> \n> \t\tOracle\t\tPG72 with NUMERICs\tPG72 with INTEGERS\n> --------------------------------------------------------------------------\n> sample\n> connect by\n> query ported 350ms 750ms 569ms \n> to PG\n> (thanks to \n> OpenACS code!)\n\nDid you rewrite your CONNECT BY queries as recursive functions or did\nyou use varbit tree position pointers ?\n\n> --------------------------------------------------------------------------\n> sample \"big\"\n> query with\n> connect bys\t 3 min 30s 8 min 40s 5 
min 1s\n> and many \n> sub-queries\n\nCould you give more information on this query - i suspect this can be\nmade at least as fast as Oracle :)\n\n> --------------------------------------------------------------------------\n> Big Batch \n> treatment 1300 queries/s 80 queries/s 250 queries/s\n> queries\n> \n> PRO*C to\t 45 min to go ~4 to 6 DAYS not yet \n> ECPG\t to go tested fully\n> \n> Ratio 1:1 1:21 not yet ..\n> 21 times slower!\n\nDid you run concurrent vacuum for both PG results ?\n\n From my limited testing it seems that such vacuum is absolutely needed\nfor big batches of mostly updates.\n\nAnd btw 45min * 21 = 15h45 not 4-6 days :)\n\n> --------------------------------------------------------------------------\n> ((but this batch will be yet re-writen in pure C + libpq + SPI,\n> so we think we'll have better results again))\n\nYou probably will get better results :)\n\nI rerun my test (5000 transactions of 20 updates on random unique key\nbetween 1 and 768, with concurrent vacuum running every 4 sec) moving\nthe inner loop of 20 random updates to server, both without SPI prepared\nstatements and then using prepared statements.\n \nTest hardware - Athlon 859, IDE, 512MB ram\n\nupdate of random row i=1..768 \nall queries sent from client 2:02 = 820 updates sec\n[hannu@taru abench]$ time ./abench.py \nreal 2m1.522s\nuser 0m20.260s\nsys 0m3.320s\n[hannu@taru abench]$ time ./abench.py \nreal 2m2.320s\nuser 0m19.830s\nsys 0m3.490s\n\nusing plpython without prepared statements 1:35 = 1052 updates/sec\n[hannu@taru abench]$ time ./abenchplpy2.py \nreal 1m34.587s\nuser 0m1.280s\nsys 0m0.400s\n[hannu@taru abench]$ time ./abenchplpy2.py \nreal 1m36.919s\nuser 0m1.350s\nsys 0m0.450s\n\nusing plpython with SPI prepared statements 1:06.30 = 1503 updates/sec\n[hannu@taru abench]$ time ./abenchplpy.py \nreal 1m6.134s\nuser 0m1.400s\nsys 0m0.720s\n[hannu@taru abench]$ time ./abenchplpy.py \nreal 1m7.186s\nuser 0m1.580s\nsys 0m0.570s\n\nplpython non-functional with SPI 
prepared \nstatements - update where i=1024 0:17.65 = 5666 non-updates sec\n[hannu@taru abench]$ time ./abenchplpy.py \nreal 0m17.650s\nuser 0m0.990s\nsys 0m0.290s\n\n\n> So as you see, DATA TYPES are REALLY important, as I did write on a\n> techdocs article ( I should have tought in this earlier )\n\nYes they are.\n\nBut running concurrent vacuum is _much_ more important if the number of\nupdates is much bigger than number of records (thanks Tom!)\n\n------------------\nHannu\n\n",
"msg_date": "03 Mar 2002 03:21:56 +0500",
"msg_from": "Hannu Krosing <hannu@krosing.net>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life : NEWS!!!"
},
{
"msg_contents": "On Fri, Mar 01, 2002 at 07:44:10PM +0100, Jean-Paul ARGUDO wrote:\n> ((but this batch will be yet re-writen in pure C + libpq + SPI,\n> so we think we'll have better results again))\n\nYou mean instead of using ecpg? I'd really be interested in the results of\nthis.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Sun, 3 Mar 2002 09:51:41 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life : NEWS!!!"
}
] |
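The thread above attributes a roughly 3x batch speedup to replacing NUMERIC(x,0) columns with INTEGER. The root cause is that NUMERIC does arbitrary-precision arithmetic on digit groups in software, while INTEGER maps onto native machine arithmetic. A hedged illustration of that cost gap, using Python's decimal module as a stand-in (analogous to, not the same as, PostgreSQL's NUMERIC implementation):

```python
# Illustrative only: Python's decimal module, like PostgreSQL's NUMERIC,
# does arbitrary-precision arithmetic on digit groups in software, while
# plain ints use cheap machine arithmetic. The gap sketched here is
# analogous to (not a measurement of) the NUMERIC-vs-INTEGER gap above.
from decimal import Decimal
import timeit

values = list(range(10000))
dec_values = [Decimal(v) for v in values]

int_total = sum(values)                   # hardware integer adds
dec_total = sum(dec_values, Decimal(0))   # software digit-group adds
assert int_total == int(dec_total)        # same answer, different cost

int_t = timeit.timeit(lambda: sum(values), number=50)
dec_t = timeit.timeit(lambda: sum(dec_values, Decimal(0)), number=50)
print(f"int: {int_t:.4f}s  decimal: {dec_t:.4f}s  ratio: {dec_t / int_t:.1f}x")
```

On typical hardware the decimal loop runs several times slower than the int loop, the same order of difference the thread reports, though the exact ratio depends on the runtime.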
[
{
"msg_contents": "> -----Original Message-----\n> From: Neil Conway [mailto:nconway@klamath.dyndns.org]\n> Sent: 27 February 2002 00:24\n> \n> On Tue, 2002-02-26 at 19:08, Dann Corbit wrote:\n> > Statistical tools are a good idea, because they can tell us where\n> > indexes should be added. However, you cannot simply return \n> the result\n> > of the previous query, because the contents may have \n> changed since the\n> > last time it was executed. It is simply invalid to do \n> that. If some\n> > other system is doing that, then it isn't a relational database.\n> \n> No -- as I said, any inserts, updates or deletes that affect the table\n> in question will cause a full cache flush.\n> \n> > How do you know whether or not someone has affected the row that you\n> > are reading? If you do not know, then every single update, \n> insert or\n> > delete will mean that you have to refresh.\n> \n> Yes, that is true.\n> \n> > And not only that, you will\n> > also have to track it. For sure, it will make the whole system run \n> > more slowly rather than faster.\n> \n> I don't think tracking changes imposes a lot of overhead -- it is\n> relatively simple to determine if a query affects a given table.\n> \n> > Very likely, it is only my limited understanding not really \n> grasping \n> > what it is that you are trying to do. Even so, I don't \n> think it really\n> > helps even for read only queries, unless it is exactly the \n> same query\n> > with the same parameter markers and everything that was \n> issued before.\n> > That is very unusual. Normally, you won't have the entire \n> query hard-\n> > wired, but with allow the customer to do some sort of \n> filtering of the\n> > data.\n> \n> Hmmm... the more I think about it, the more unusual it would be for\n> _exactly_ the same query to be repeated a lot. 
However, the article\n> reported a significant performance gain when this feature was enabled.\n> That could mean that:\n> \n> (a) the performance measurements/benchmarks used by the \n> article were\n> synthetic and don't reflect real database applications\n> \n> (b) the feature MySQL implements is different than the one I am\n> describing\n> \n> When I get a chance I'll investigate further the technique \n> used by MySQL\n> to see if (b) is the case. However, it is beginning to look like this\n> isn't a good idea, overall.\n> \n> Cheers,\n> \n> Neil\n> \n> -- \n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n> \n>\nIf I understand correctly you'll be matching on the original \nquery string, then pulling out the previous plan, rather than doing all \nthe planning again? Or were you thinking of storing the resultant tuples \n(seems far more difficult to do efficiently)?\nEither way would be handy for me though as I have a number of clients who \nall basically ask the same query and then ask it again every few minutes \nto update themselves. Therefore this sounds like something that would \nimprove performance for me.\nHope I've understood correctly,\n- Stuart\n",
"msg_date": "Wed, 27 Feb 2002 15:51:14 -0000",
"msg_from": "\"Henshall, Stuart - WCP\" <SHenshall@westcountrypublications.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: eWeek Poll: Which database is most critical to your"
},
{
"msg_contents": "On Wed, 2002-02-27 at 10:51, Henshall, Stuart - WCP wrote:\n> If I understand correctly you'll be taking matching the on the original \n> query string, then pulling out the previous plan, rather than doing all \n> the planning again? Or where you thinking of storing the resultant tuples \n> (seems far more diffcult to do effciently)?\n\nWell, those are really two different features. The second (caching\nentire result sets based upon the _exact_ query string) is implemented\nby MySQL, and is probably the more exotic feature of the two. There is\nsome debate about whether this is even worthwhile, or just results in\nbetter benchmark results...\n\nAs Tom points out, the first feature (caching query plans) is probably\nbetter implemented by allowing application developers to prepare queries\nand specify their own parameters. This is a fairly conventional RDBMS\nfeature and it is already on the TODO list.\n\n> Either way would be handy for me though as I have a number of clients who \n> all basically ask the same query and then ask it again every few minutes \n> to update themselves. Therefore this sounds like something that would \n> improve performance for me.\n\nGood to know...\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "27 Feb 2002 17:34:58 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: eWeek Poll: Which database is most critical to your"
},
{
"msg_contents": "Hello,\n\n> If I understand correctly you'll be matching on the original\n> query string, then pulling out the previous plan, rather than doing all\n> the planning again?\n\nNo.\n\n> Or were you thinking of storing the resultant tuples (seems far more\n> difficult to do efficiently)?\n\nYes. They store the result of the query, not the plan. Storing a plan is not \nas big a win as storing the results of queries that return a relatively small \nnumber of tuples.\n\n--\nDenis\n",
"msg_date": "Thu, 28 Feb 2002 11:41:03 +0600",
"msg_from": "Denis Perchine <dyp@perchine.com>",
"msg_from_op": false,
"msg_subject": "Re: eWeek Poll: Which database is most critical to your"
}
] |
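For readers wondering what the MySQL-style cache under discussion would look like, here is a minimal Python sketch of the scheme Neil describes: results keyed on the exact query string, with invalidation of affected entries on any write. All class and method names are illustrative, and a cryptographic hash stands in for whatever hash a real server would use; this is not code from either server.

```python
# A minimal sketch (not MySQL's or PostgreSQL's actual code) of the
# result cache discussed above: results are keyed on the *exact* query
# string, and any write that touches a table flushes every cached entry
# that references it.
import hashlib

class ResultCache:
    def __init__(self):
        self._cache = {}          # query hash -> (query, tables, rows)

    @staticmethod
    def _key(query):
        return hashlib.sha256(query.encode()).hexdigest()

    def get(self, query):
        entry = self._cache.get(self._key(query))
        # Compare the stored query text too, so a hash collision can
        # never return the wrong result set.
        if entry and entry[0] == query:
            return entry[2]
        return None

    def put(self, query, tables, rows):
        self._cache[self._key(query)] = (query, frozenset(tables), rows)

    def invalidate(self, table):
        # Called on every INSERT/UPDATE/DELETE against `table`.
        self._cache = {k: v for k, v in self._cache.items()
                       if table not in v[1]}
```

Note that because the key is the exact query text, even a one-character difference (whitespace, literal values) misses the cache, which is why the thread doubts its value outside benchmarks.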
[
{
"msg_contents": "\n> >> One of the things we've agreed to do in 7.3 is change COPY IN to remove\n> >> that assumption --- a line with too few fields (too few tabs) will draw\n> >> an error report instead of silently doing what's likely the wrong thing.\n> \n> > But there will be new syntax for COPY, that allows missing trailing columns. \n> > I hope.\n> \n> Why?\n\nWell, good question. For one for backwards compatibility.\nI guess I would prefer COPY syntax that allows you to specify columns\nas has been previously discussed. Having that, that would be sufficient\nand safer than only a switch.\n\nCOPY 'afile' to atab (a1, a3, a5, a2)\n\nAndreas\n",
"msg_date": "Wed, 27 Feb 2002 17:13:53 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: COPY incorrectly uses null instead of an empty string in last\n\tfield"
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> But there will be new syntax for COPY, that allows missing trailing columns. \n> I hope.\n>> \n>> Why?\n\n> Well, good question. For one for backwards compatibility.\n\nIt's an undocumented feature. How many people are likely to be using it?\n\n> I guess I would prefer COPY syntax that allows you to specify columns\n> as has been previously discussed.\n\nYes, that's on the to-do list as well. But no matter what the expected\nset of columns is, COPY ought to complain if a line is missing some\nfields.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Feb 2002 11:21:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: COPY incorrectly uses null instead of an empty string in last\n\tfield"
}
] |
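The stricter COPY behaviour Tom describes, erroring out on a line with too few fields instead of silently supplying NULLs, can be sketched as follows. `parse_copy_line` is a hypothetical helper for illustration, not PostgreSQL's actual parser:

```python
# A sketch (not PostgreSQL's implementation) of the stricter behaviour
# proposed above: a COPY input line with the wrong number of
# tab-separated fields raises an error instead of silently supplying
# NULLs for the missing trailing columns.
def parse_copy_line(line, expected_fields, lineno=0):
    fields = line.rstrip("\n").split("\t")
    if len(fields) != expected_fields:
        raise ValueError(
            f"line {lineno}: expected {expected_fields} fields, "
            f"got {len(fields)}")
    # PostgreSQL's COPY text format spells NULL as \N
    return [None if f == r"\N" else f for f in fields]
```

With a column list in the COPY syntax (as Andreas proposes), `expected_fields` would simply be the length of that list rather than the full table width.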
[
{
"msg_contents": "During my coding of the per-user/database settings, it occurred to me one\nmore time that arrays are evil. Basically, the initial idea was to have a\ncolumn pg_database.datconfig that contains, say,\n'{\"geqo_threshold=55\",\"enable_seqscan=off\"}'. Just inserting and deleting\nin arrays is terrible, let alone querying them in a reasonable manner.\nWe're getting killed by this every day in the privileges and groups case.\n\nWhat are people's thoughts on where (variable-length) arrays are OK in\nsystem catalogs, and where a new system catalog should be created?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 27 Feb 2002 12:13:22 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Arrays vs separate system catalogs"
},
{
"msg_contents": "Hello.\n\nIt's a more general problem, I think. Working with arrays in postgres is \nvery difficult and uncomfortable. Any array type needs methods \nlike push, pop, length, splice, etc., and these methods could be implemented \ngenerically for every array type. BTW, the aggregates MAX and MIN work only \nfor built-in types, but such aggregates could be defined for any type which \nsupports a compare function.\n\nPeter Eisentraut wrote:\n> During my coding of the per-user/database settings, it occurred to me one\n> more time that arrays are evil. Basically, the initial idea was to have a\n> column pg_database.datconfig that contains, say,\n> '{\"geqo_threshold=55\",\"enable_seqscan=off\"}'. Just inserting and deleting\n> in arrays is terrible, let alone querying them in a reasonable manner.\n> We're getting killed by this every day in the privileges and groups case.\n> \n> What are people's thoughts on where (variable-length) arrays are OK in\n> system catalogs, and where a new system catalog should be created?\n> \n> \n\n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n",
"msg_date": "Thu, 28 Feb 2002 00:41:20 +0300",
"msg_from": "Teodor Sigaev <teodor@stack.net>",
"msg_from_op": false,
"msg_subject": "Re: Arrays vs separate system catalogs"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> During my coding of the per-user/database settings, it occurred to me one\n> more time that arrays are evil. Basically, the initial idea was to have a\n> column pg_database.datconfig that contains, say,\n> '{\"geqo_threshold=55\",\"enable_seqscan=off\"}'. Just inserting and deleting\n> in arrays is terrible, let alone querying them in a reasonable manner.\n> We're getting killed by this every day in the privileges and groups case.\n\n> What are people's thoughts on where (variable-length) arrays are OK in\n> system catalogs, and where a new system catalog should be created?\n\nSeems like an array is a perfectly fine representation, and what's\nlacking are suitable operators. Maybe we should think about inventing\nsome operators, rather than giving up on arrays.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Feb 2002 17:03:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Arrays vs separate system catalogs "
},
{
"msg_contents": "Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> \n>>During my coding of the per-user/database settings, it occurred to me one\n>>more time that arrays are evil. Basically, the initial idea was to have a\n>>column pg_database.datconfig that contains, say,\n>>'{\"geqo_threshold=55\",\"enable_seqscan=off\"}'. Just inserting and deleting\n>>in arrays is terrible, let alone querying them in a reasonable manner.\n>>We're getting killed by this every day in the privileges and groups case.\n>>\n> \n>>What are people's thoughts on where (variable-length) arrays are OK in\n>>system catalogs, and where a new system catalog should be created?\n>>\n> \n> Seems like an array is a perfectly fine representation, and what's\n> lacking are suitable operators. Maybe we should think about inventing\n> some operators, rather than giving up on arrays.\n\nIMHO making arrays and relations equivalent is a real challenge. But \nthis would give the full power of SQL to arrays (subselects, aggregates, \n easy insertion, deletion, selection, updates).\n\nBut if you manage to make an array accessible as a relation this would \nbe a big step for mankind ;-)\n\n(e.g. select * from pg_class.relacl where pg_class.relname='pg_stats';\n\ninsert into pg_class.relacl values 'christof=r' where \npg_class.relname='pg_stats';\n\nBut at least the second example looks unSQLish to me\n(I doubt the syntax \"insert ... where\" is legal))\n\nSeemed a good idea first ... but I don't know whether it is worth the \n(syntactic, planning, non-standard) trouble.\n Christof Petig\n\n",
"msg_date": "Fri, 01 Mar 2002 09:17:03 +0100",
"msg_from": "Christof Petig <christof@petig-baender.de>",
"msg_from_op": false,
"msg_subject": "Re: Arrays vs separate system catalogs"
}
] |
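Tom's suggestion of inventing operators rather than abandoning arrays might look like the following, using the datconfig example from the thread. The function names are illustrative only, not actual PostgreSQL operators:

```python
# A sketch of the kind of array operators Tom suggests inventing, applied
# to a datconfig-style array of "name=value" settings. Each helper works
# on a plain list, the way an array operator would work on a catalog
# array column.
def array_set(settings, name, value):
    """Add or replace name=value, as ALTER DATABASE ... SET would need."""
    kept = [s for s in settings if s.split("=", 1)[0] != name]
    return kept + [f"{name}={value}"]

def array_unset(settings, name):
    """Remove one setting, as a RESET would need."""
    return [s for s in settings if s.split("=", 1)[0] != name]

def array_get(settings, name):
    """Look up one setting, the 'querying in a reasonable manner' case."""
    for s in settings:
        key, _, value = s.partition("=")
        if key == name:
            return value
    return None
```

The point of the sketch is that each of these is a single pass over the array; the pain Peter describes comes from having to express the same passes in SQL against an array-valued column without such operators.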
[
{
"msg_contents": "The \"test\" is a big batch that computes stuffs in the database. Here are the\ntimings of both Oracle and PG (7.2) :\n\nOracle on NT 4 : 45 minuts to go , 1200 tps (yes one thousand and two\nhundred\ntps)\n\nLinux Red Hat 7.2 with PostgreSQL 7.2 : hours to go (statistically, 45\nhours),\n80 tps (eighty tps).\n\n---\n\nJean-Paul, I think the problem here is not having postgres configured\nproperly. I am in a similar situation here where we are migrating data from\npostgres into oracle. Postgres has been as much as 40x faster than Oracle in\nmany situations here. Note also that our oracle instance is on a quad\nprocessor Sun 280R, and our postgres 'instance' is on a p3/1ghz. Iterating\nover 440,000 xml 'text' fields in oracle takes about 4 days. In postgres it\ntakes 8 hours. Iterating over a 3.5M row table is just inconceivable for\noracle, and I do it in postgres all the time.\n\nMy suspicion is that our oracle instance is not tuned very well, and the\ncode that is manipulating the database (in this case perl) is much smarter\nfor postgres (we have separate developers to do perl-oracle interfaces).\n\nPostgres is a fantastic, fast database. But you really must configure it,\nand code intelligently to use it.\n\n-alex\n",
"msg_date": "Wed, 27 Feb 2002 12:32:14 -0500",
"msg_from": "Alex Avriette <a_avriette@acs.org>",
"msg_from_op": true,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
{
"msg_contents": "Okay,\n\nTo answer many replies (thanks!), I'll try to put more details:\n\n* DELL server\n\tP3 600 MHZ \n\t256 M ram\n\tRAID 5 \n\n* kernel\n\nLinux int2412 2.4.9-21SGI_XFS_1.0.2 #1 Thu Feb 7 16:50:37 CET 2002 i686 unknown\t\n\nwith aacraid-cox because aacraid had poor perfs with this server (at first we\nthought about raid5 problems)\n\n* postgresql.conf : here are _all_ uncommented parameters:\n\ntcpip_socket = true\nmax_connections = 16\nport = 5432\n\nshared_buffers = 19000 # 2*max_connections, min 16\nmax_fsm_relations = 200 # min 10, fsm is free space map\nmax_fsm_pages = 12000 # min 1000, fsm is free space map\nmax_locks_per_transaction = 256 # min 10\nwal_buffers = 24 # min 4\n\nsort_mem = 8192 # min 32\nvacuum_mem = 8192 # min 1024\n\n\nwal_debug = 0 # range 0-16\n\nfsync = true\n\nsilent_mode = true\nlog_connections = false\nlog_timestamp = false\nlog_pid = false\n\ndebug_level = 0 # range 0-16\n\ndebug_print_query = false\ndebug_print_parse = false\ndebug_print_rewritten = false\ndebug_print_plan = false\ndebug_pretty_print = false\nshow_parser_stats = false\nshow_planner_stats = false\nshow_executor_stats = false\nshow_query_stats = false\n\ntransform_null_equals = true\n\n* /proc parameters:\n\nproc/sys/kernel/shmall => 184217728 (more than 130M)\nproc/sys/kernel/shmall => 184217728 \n\n* we made a bunch of vmstat logs too, we made graphics to understand, all in a\n postscript file, with gun graph ... this is very interesting, but as I don't\nknow if attachments are authorized here, please tell me if I can post it too. It\nshows swap in/out, memory, I/O, etc.. \n\n\nThanks for your support!\n\n\n-- \nJean-Paul ARGUDO\n",
"msg_date": "Wed, 27 Feb 2002 18:44:53 +0100",
"msg_from": "Jean-Paul ARGUDO <jean-paul.argudo@idealx.com>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
{
"msg_contents": "> shared_buffers = 19000 # 2*max_connections, min 16\n\nThis number sounds too high. If you only have 256M RAM, this is using close to \n150M of it. Are you swapping a lot? What is the load on the server while it's \nrunning?\n",
"msg_date": "Wed, 27 Feb 2002 17:35:33 -0600",
"msg_from": "\"Mattew T. O'Connor\" <matthew@rh71.postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
{
"msg_contents": "> many situations here. Note also that our oracle instance is on a quad\n> processor Sun 280R, and our postgres 'instance' is on a p3/1ghz. Iterating\n\nA 280r is a 2 way system, not 4 way (hence the 2 in 280).\n\n- Brandon\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n",
"msg_date": "Thu, 28 Feb 2002 09:00:05 -0500 (EST)",
"msg_from": "bpalmer <bpalmer@crimelabs.net>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
{
"msg_contents": "On Wed, Feb 27, 2002 at 06:44:53PM +0100, Jean-Paul ARGUDO wrote:\n> To answer many replies (thanks!), I'll try to put more details:\n> ... \n> Linux int2412 2.4.9-21SGI_XFS_1.0.2 #1 Thu Feb 7 16:50:37 CET 2002 i686 unknown\t\n\nBut you know that kernels up to 2.4.10 had huge problems with virtual\nmemory, don't you? I'd recommend testing it either on 2.4.17 (which seems to\nrun stable for me) or, if you want to be sure and do not need SMP, use\n2.2.20.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Fri, 1 Mar 2002 09:03:12 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
},
{
"msg_contents": "The number 2.4.9-21 corresponds to the (Red Hat) kernel I'm running right now. \n Yes, 2.4.X as released from kernel.org had huge problems with virtual memory \n(for virtually all values of X), but many of these problems have been addressed \nby keeping the kernel relatively frozen and just working on VM problems (which \nis one of the things we've been doing at Red Hat). I'm not saying we've got it \ntotally nailed just yet, but I want to present the view that some branches of \nthe Linux kernel *have* been given the attention they need to avoid some of the \nwell-known problems that linux.org kernels are (essentially--through Linus's \nlaw) designed to find.\n\nM\n\nMichael Meskes wrote:\n> On Wed, Feb 27, 2002 at 06:44:53PM +0100, Jean-Paul ARGUDO wrote:\n> \n>>To answer many replies (thanks!), I'll try to put more details:\n>>... \n>>Linux int2412 2.4.9-21SGI_XFS_1.0.2 #1 Thu Feb 7 16:50:37 CET 2002 i686 unknown\t\n>>\n> \n> But you know that kernels up to 2.4.10 had huge problems with virtual\n> memory, don't you? I'd recommend testing it either on 2.4.17 (which seems to\n> run stable for me) or, if you want to be sure and do not need SMP, use\n> 2.2.20.\n> \n> Michael\n> \n\n\n",
"msg_date": "Fri, 01 Mar 2002 07:19:23 -0500",
"msg_from": "Michael Tiemann <tiemann@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
}
] |
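A quick back-of-the-envelope check of the configuration in this thread supports Matthew's concern. Each PostgreSQL shared buffer is one page, 8 KB with the default BLCKSZ, so the arithmetic is:

```python
# Back-of-the-envelope check of the shared_buffers setting above.
BLCKSZ = 8192            # bytes per buffer page (PostgreSQL default)
shared_buffers = 19000   # from the posted postgresql.conf
total_ram_mb = 256       # from the posted server specs

buffer_mb = shared_buffers * BLCKSZ / 2**20
print(f"{buffer_mb:.0f} MB of {total_ram_mb} MB RAM")   # ~148 MB

# Pinning well over half of a 256 MB machine in shared buffers leaves
# little room for sort_mem, the kernel page cache, and the backends
# themselves -- consistent with the swapping suspected in the replies.
```

So "close to 150M" is right: about 148 MB, or roughly 58% of physical memory, before any per-backend sort_mem is counted.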
[
{
"msg_contents": "Tom,\n\nAFAIK, it's required to write min(),max() aggregate functions for\nuser-defined types even if a compare function is already defined for\nthis type. Isn't it possible to get these functions working for\nsuch user-defined types automatically?\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 27 Feb 2002 20:44:15 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "min,max aggregate functions"
},
{
"msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> AFAIK, it's required to write min(),max() aggregate functions for\n> user-defined types even if compare function is already defined for\n> this type. Is't possible to get these functions working for\n> such user-defined types automatically ?\n\nDoesn't really seem worth the trouble. We'd need some sort of notion\nof a generic aggregate function; and even then, you'd have to tell the\nsystem what comparison function to use for your datatype.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Feb 2002 12:47:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: min,max aggregate functions "
},
{
"msg_contents": "On Wed, 27 Feb 2002, Tom Lane wrote:\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > AFAIK, it's required to write min(),max() aggregate functions for\n> > user-defined types even if compare function is already defined for\n> > this type. Is't possible to get these functions working for\n> > such user-defined types automatically ?\n>\n> Doesn't really seem worth the trouble. We'd need some sort of notion\n> of a generic aggregate function; and even then, you'd have to tell the\n> system what comparison function to use for your datatype.\n\nI was just curious. It's always possible to use 'order by desc limit 1'\nfor max() function.\n\nSo, nobody is working on aggregates.\n\n\n>\n> \t\t\tregards, tom lane\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 27 Feb 2002 20:52:13 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Re: min,max aggregate functions "
}
] |
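The "generic aggregate" Tom mentions would essentially derive min and max mechanically from a three-way compare function, which is all the type author would have to supply. A sketch with illustrative names (not PostgreSQL internals):

```python
# Given only a three-way compare function for a user-defined type,
# min and max fall out mechanically as aggregate transition functions:
# state is the best value seen so far (None before any input).
def make_min_max(cmp):
    def agg_min(state, value):
        if state is None or cmp(value, state) < 0:
            return value
        return state

    def agg_max(state, value):
        if state is None or cmp(value, state) > 0:
            return value
        return state

    return agg_min, agg_max

# Example user-defined ordering: version strings compared numerically
# per component, so "7.10" sorts after "7.2".
def version_cmp(a, b):
    ka = [int(p) for p in a.split(".")]
    kb = [int(p) for p in b.split(".")]
    return (ka > kb) - (ka < kb)
```

This also shows why Oleg's `ORDER BY ... DESC LIMIT 1` workaround is equivalent: both reduce the column through the same compare function, though the aggregate form needs no sort.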
[
{
"msg_contents": "Just found \"pocketSQL - compact SQL database\"\nhttp://www.sgpr.net/pocketsql/\n\nLooks like it's a postgresql incarnation for pocket computers\n\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 27 Feb 2002 21:17:29 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "PocketSQL"
},
{
"msg_contents": "On Wed, 2002-02-27 at 20:17, Oleg Bartunov wrote:\n> Just found \"pocketSQL - compact SQL database\"\n> http://www.sgpr.net/pocketsql/\n> \n> Looks like it's postgresql incarnation for pocket computers\n\nNot likely:\n\npocketSQL tables are managed as ordinary tabular ascii data files. \nThese files may be created or modified using a text editor, \nby external programs, and so on. Data files have one \"record\" \nper line, and each record has one or more \"fields\". Fields are \nseparated by a single space. ....\n> \n-------\nHannu\n\n",
"msg_date": "04 Mar 2002 18:43:11 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: PocketSQL"
},
{
"msg_contents": "\nYou sure about that? It doesn't appear to be based on the PostgreSQL code \nbase, and it doesn't sound like it's for pocket computers. From the web \npage:\n \n\"pocketSQL is a compact multiuser database system that can be embedded in \napplications.\"\n\nSounds more like a little database library than a pocket PC implementation \nof PgSQL.\n\nJ\n\n\n\nOleg Bartunov wrote:\n\n> Just found \"pocketSQL - compact SQL database\"\n> http://www.sgpr.net/pocketsql/\n> \n> Looks like it's postgresql incarnation for pocket computers\n> \n> \n> Regards,\n> Oleg\n\n",
"msg_date": "Mon, 04 Mar 2002 12:03:36 -0500",
"msg_from": "J Smith <dark_panda@hushmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PocketSQL"
},
{
"msg_contents": "Hannu,\n\nI was confused by the 'psql' command.\nBtw, it seems the hackers list isn't working. I don't see any messages posted to\nthe hackers list.\n\n\tOleg\nOn 4 Mar 2002, Hannu Krosing wrote:\n\n> On Wed, 2002-02-27 at 20:17, Oleg Bartunov wrote:\n> > Just found \"pocketSQL - compact SQL database\"\n> > http://www.sgpr.net/pocketsql/\n> >\n> > Looks like it's postgresql incarnation for pocket computers\n>\n> Not likely:\n>\n> pocketSQL tables are managed as ordinary tabular ascii data files.\n> These files may be created or modified using a text editor,\n> by external programs, and so on. Data files have one \"record\"\n> per line, and each record has one or more \"fields\". Fields are\n> separated by a single space. ....\n> >\n> -------\n> Hannu\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 4 Mar 2002 21:43:02 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Re: PocketSQL"
}
] |
[
{
"msg_contents": "[snip]\nIf I understand correctly you'll be matching on the original\nquery string, then pulling out the previous plan, rather than doing all\nthe planning again? Or were you thinking of storing the resultant tuples\n(seems far more difficult to do efficiently)?\nEither way would be handy for me though as I have a number of clients who\nall basically ask the same query and then ask it again every few minutes\nto update themselves. Therefore this sounds like something that would\nimprove performance for me.\nHope I've understood correctly,\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\nIf they expect the information to change, then caching the query data\nis not going to help. Unless you have a read-only database, it is my\nopinion that caching of the actual query data is a very bad idea. It\nwill complicate the code and (in the end) not end up being any faster.\n\nTwo other aspects of the approach (gathering statistical information\non the frequency and usage of queries and storing the prepared query)\nare both excellent ideas and ought to be pursued. Maintaining an LRU\ncache of prepared queries is also a good idea.\n\nIf you have a read only database or table, then you can make a lot of \nsafe, simplifying assumptions about queries against that table. I \nthink that you can cache the data for a read-only table or database.\n\nBut for those instances, will it really be worth it? In other words,\nif I have some read only tables and they really are frequently \naccessed by user queries, the data will be in memory anyway. How much\nwill you actually gain by saving the data somewhere?\n\nBenchmarks are great, but I think it makes a lot more sense to focus\non real applications. If we want to improve web server performance \n(here is a tangible, real goal) then why not try with a real, useful\nweb server application? If we want to perform well in benchmarks, use\nreal benchmarks like the TPC-X benchmarks. 
\"The Benchmark Factory\"\nhas some useful stuff along these lines.\n\nFor sure, I can write a benchmark that makes any tool look good. But\ndoes it indicate that the tool is actually useful for getting real\nwork done?\n\nInstead of trying to win benchmarks, we should imagine how we can\nmake improvements that will solve real scientific and business \nproblems.\n\nIMO-YMMV\n<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<\n",
"msg_date": "Wed, 27 Feb 2002 11:15:42 -0800",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: eWeek Poll: Which database is most critical to your"
}
] |
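The plan-caching idea discussed in this thread (normalize literals out of the query text, key a cache on the result, keep hit counts, evict with LRU) can be sketched as follows. This is an illustrative Python model with invented names; the crude regex normalization stands in for a real parser, and the "plan" is just whatever the callback returns, not a PostgreSQL plan:

```python
import re
from collections import OrderedDict

class PlanCache:
    """Toy LRU cache keyed by a literal-normalized query string.

    Models caching of *prepared plans* (as discussed in the thread),
    not caching of result data.
    """

    def __init__(self, capacity=128):
        self.capacity = capacity
        self.entries = OrderedDict()  # normalized query -> [hit_count, plan]

    @staticmethod
    def normalize(query):
        # Replace string and numeric literals with '?' parameter markers.
        # A real implementation would use the parser, not regexes.
        q = re.sub(r"'[^']*'", "?", query)
        q = re.sub(r"\b\d+\b", "?", q)
        return " ".join(q.split())

    def lookup(self, query, plan_fn):
        key = self.normalize(query)
        if key in self.entries:
            self.entries.move_to_end(key)       # mark most recently used
            self.entries[key][0] += 1           # bump usage statistics
            return self.entries[key][1]
        plan = plan_fn(key)                     # "prepare" the query once
        self.entries[key] = [1, plan]
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)    # evict least recently used
        return plan
```

Two queries differing only in their literals then share one cache entry, which is exactly the case the thread debates: a win only when the shared plan is actually valid for both literal values.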
[
{
"msg_contents": "-----Original Message-----\nFrom: Tom Lane [mailto:tgl@sss.pgh.pa.us]\nSent: Wednesday, February 27, 2002 9:47 AM\nTo: Oleg Bartunov\nCc: Pgsql Hackers\nSubject: Re: [HACKERS] min,max aggregate functions \n\n\nOleg Bartunov <oleg@sai.msu.su> writes:\n> AFAIK, it's required to write min(),max() aggregate functions for\n> user-defined types even if compare function is already defined for\n> this type. Is't possible to get these functions working for\n> such user-defined types automatically ?\n\nDoesn't really seem worth the trouble. We'd need some sort of notion\nof a generic aggregate function; and even then, you'd have to tell the\nsystem what comparison function to use for your datatype.\n>>---------------------------------------------------------------------\nHere is a C++ template summation tool I created, and made public domain:\nftp://cap.connx.com/pub/tournament_software/Kahan.Hpp\n\nIt is a far more effective technique than just adding the numbers up.\n\nYou can have different types for the input data and the accumulator.\nSo (for instance) you can sum floats into a double or doubles into a\nlong double or whatever. This allows a simple way to prevent overflows\nand also greatly increases accuracy with very little additional cost\nin computation.\n\nNow, PostgreSQL is C and not C++, but the idea can be translated into\nordinary C code.\n\nIf you adopt some fundamental large type for the accumulator, and the\nuser type has a conversion into that type, then you could make a\ngeneric accumulator very easily.\n\nThis template does all kinds of statistics, and might also prove useful:\nftp://cap.connx.com/pub/tournament_software/STATS.HPP\n<<---------------------------------------------------------------------\n",
"msg_date": "Wed, 27 Feb 2002 13:05:52 -0800",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: min,max aggregate functions "
}
] |
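The compensated-summation technique behind the Kahan.Hpp template mentioned above can be sketched in Python. This is an illustrative transcription of the classic Kahan algorithm, not the linked C++ code; the same idea carries over to summing floats into a double, as the message suggests:

```python
def kahan_sum(values):
    """Compensated (Kahan) summation: carry the rounding error forward."""
    total = 0.0
    compensation = 0.0
    for x in values:
        y = x - compensation            # apply the correction from last step
        t = total + y                   # big + small: low-order bits of y lost
        compensation = (t - total) - y  # algebraically recover what was lost
        total = t
    return total
```

Summing 0.1 a million times shows the effect: naive accumulation drifts by roughly a microsecond's worth of ulps, while the compensated sum stays within a few ulps of the exact value.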
[
{
"msg_contents": "-----Original Message-----\nFrom: Marc Lavergne [mailto:mlavergne-pub@richlava.com]\nSent: Wednesday, February 27, 2002 9:41 AM\nTo: Jean-Paul ARGUDO\nCc: pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Oracle vs PostgreSQL in real life\n\n\nThere is probably an explanation but \"computes stuffs\" doesn't provide \nmuch information to go with. Do you think you could boil this down to a \ntest case? Also, expand on what the batch file does, the size of the\ndatabase, and which interface you are using. I'm sure people would like \nto help, but there simply isn't enough information to derive any \nconclusions here.\n>>----------------------------------------------------------------------\nThis seems a very important test case. If possible, and the client will\nallow it, perhaps the relevant pieces of the schema could be published\nto some ftp site along with the relevant C code. Then, we could\npopulate\nthe tables with dummy data and run the same tests. One of two things\nwill happen (I predict).\n\n1. Someone will find a way to make it run fast.\nOR\n2. Someone will offer an improvement to PostgreSQL so that it can do as\nwell or better than Oracle for this application.\n\nWithout understanding the problem, we end up guessing.\n<<----------------------------------------------------------------------\n",
"msg_date": "Wed, 27 Feb 2002 13:10:32 -0800",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
}
] |
[
{
"msg_contents": "-----Original Message-----\nFrom: Tom Lane [mailto:tgl@sss.pgh.pa.us]\nSent: Wednesday, February 27, 2002 1:25 PM\nTo: F Harvell\nCc: Dann Corbit; Neil Conway; pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] eWeek Poll: Which database is most critical to \n\n\nF Harvell <fharvell@fts.net> writes:\n> The query plan is not going to be interested at all in\n> the literal value of the parameters and therefore will be the same for\n> any query of the same form.\n\nUnfortunately, this is completely false.\n\n> For example, from above:\n\n> SELECT shirt, color, backorder_qty FROM garments WHERE color like \n> 'BLUE%'\n\n> should become something on the order of:\n\n> SELECT shirt, color, backorder_qty FROM garments WHERE color like \n> '{param0}%'\n\nYou managed to pick an example that's perfectly suited to demolish your\nassertion. The query with \"color like 'BLUE%'\" can be optimized into an\nindexscan (using index quals of the form \"color >= 'BLUE' and color <\n'BLUF'), at least in C locale. The parameterized query cannot be\noptimized at all, because the planner cannot know whether the\nsubstituted parameter string will provide a left-anchored pattern.\nWhat if param0 contains '_FOO' at runtime? An indexscan will be\nuseless in that case.\n\nIn general, Postgres' query plans *do* depend on the values of\nconstants, and it's not always possible to produce an equally good plan\nthat doesn't assume anything about constants. This is why I think it's\na lousy idea for the system to try to automatically abstract a\nparameterized query plan from the actual queries it sees. 
On the other\nhand, an application programmer will have a very good idea of which\nparts of a repeated query are really constant and which are parameters.\nSo what we really need is preparable parameterized queries, wherein the\napplication tells us what to parameterize, rather than having to guess\nabout it.\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\nUsing the data to enhance the plan is quite a brilliant strategy.\nI was not aware that PostgreSQL could do that.\n\nRdb has a very nice feature -- it allows you to *edit* the plan.\nObviously, you can get some real disasters that way, but for advanced\nusers, it is very nice.\n<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<\n",
"msg_date": "Wed, 27 Feb 2002 13:33:40 -0800",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: eWeek Poll: Which database is most critical to "
}
] |
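Tom Lane's point about 'BLUE%' can be illustrated with a small sketch: a left-anchored pattern yields range bounds usable by an index scan, while a pattern beginning with a wildcard yields none. This is a simplified Python model (C locale and single-byte characters assumed, escape sequences ignored), not the planner's actual code:

```python
def like_prefix_bounds(pattern):
    """Derive index-range bounds from a LIKE pattern, if left-anchored.

    Returns (low, high) such that `low <= col AND col < high` covers the
    fixed prefix, or None when the pattern starts with a wildcard
    ('%' or '_') and an index scan cannot help.
    """
    prefix = []
    for ch in pattern:
        if ch in ("%", "_"):
            break
        prefix.append(ch)
    if not prefix:
        return None  # e.g. '_FOO' or '%FOO': no left-anchored prefix
    low = "".join(prefix)
    # "Increment" the last character to form an exclusive upper bound,
    # as in BLUE -> BLUF (valid in C locale, as the message notes; a
    # production version must handle the top character code specially).
    high = low[:-1] + chr(ord(low[-1]) + 1)
    return (low, high)
```

The planner can only do this transformation when it sees the literal; with an opaque parameter marker the choice between indexscan and seqscan cannot be made at plan time, which is exactly Tom's objection.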
[
{
"msg_contents": "The current WAL recovery implementation does not recover newly created\nobjects such as tables. My suggested patch is:\n\nWhen XLogOpenRelation fails to open the relation file, if errno is\nENOENT (no file or directory) we should attempt to recreate the file\nusing smgrcreate.\n\nThis seems to work fine for tables, indexes and sequences but can anyone\nsee any potential problems? I have not tried this with Toast tables;\nare these handled any differently?\n\nIs it reasonable to assume that recreating the file in this way is\nsafe? It seems OK to me as we only recreate the file if it does not\nalready exist, so we are not in danger of making a bad situation worse.\n\nIf no-one tells me this is a bad idea, I will submit a patch.\n\n-- \nMarc\t\tmarc@bloodnok.com\n",
"msg_date": "27 Feb 2002 16:39:58 -0800",
"msg_from": "Marc Munro <marc@bloodnok.com>",
"msg_from_op": true,
"msg_subject": "Point in time recovery: recreating relation files"
},
{
"msg_contents": "Marc Munro <marc@bloodnok.com> writes:\n> The current WAL recovery implementation does not recover newly created\n> objects such as tables. My suggested patch is:\n\n> When XLogOpenRelation fails to open the relation file, if errno is\n> ENOENT (no file or directory) we shuld attempt to recreate the file\n> using smgrcreate.\n\nNo, that's wrong. The missing ingredient is that the WAL log should\nexplicitly log table creations. (And also table drops.) If you look\nyou will find some comments showing the places where code is missing.\n\nIf you try to do it as you suggest above, then you will erroneously\nrecreate files that have been dropped.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Feb 2002 22:44:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Point in time recovery: recreating relation files "
},
{
"msg_contents": "On Wed, 2002-02-27 at 19:44, Tom Lane wrote:\n> No, that's wrong. The missing ingredient is that the WAL log should\n> explicitly log table creations. (And also table drops.) If you look\n> you will find some comments showing the places where code is missing.\n> \n> If you try to do it as you suggest above, then you will erroneously\n> recreate files that have been dropped.\n\nOK, that makes sense. I will take another look. Thanks.\n\n-- \nMarc\t\tmarc@bloodnok.com\n",
"msg_date": "28 Feb 2002 08:21:55 -0800",
"msg_from": "Marc Munro <marc@bloodnok.com>",
"msg_from_op": true,
"msg_subject": "Re: Point in time recovery: recreating relation files"
},
{
"msg_contents": "> No, that's wrong. The missing ingredient is that the WAL log should\n> explicitly log table creations. (And also table drops.) If you look\n> you will find some comments showing the places where code is missing.\n\nI'm wondering where we could record the LSN when creating or dropping\ntables.\n\n> If you try to do it as you suggest above, then you will erroneously\n> recreate files that have been dropped.\n\nYes, but I think we need to compare the log's LSN and the table's LSN.\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 07 Mar 2002 09:22:00 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Point in time recovery: recreating relation files "
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n>> No, that's wrong. The missing ingredient is that the WAL log should\n>> explicitly log table creations. (And also table drops.) If you look\n>> you will find some comments showing the places where code is missing.\n\n> I'm wondering where we could record the LSN when creating or dropping\n> tables.\n\nUm, why would that matter?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Mar 2002 23:04:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Point in time recovery: recreating relation files "
},
{
"msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> >> No, that's wrong. The missing ingredient is that the WAL log should\n> >> explicitly log table creations. (And also table drops.) If you look\n> >> you will find some comments showing the places where code is missing.\n> \n> > I'm wondering where we could record the LSN when creating or dropping\n> > tables.\n> \n> Um, why would that matter?\n\nIn my understanding to prevent redo-ing two or more times while in the\nrecovery process, we need to compare LSN in the object against the LSN\nin the WAL log.\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 07 Mar 2002 13:56:38 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Point in time recovery: recreating relation files "
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> I'm wondering where we could record the LSN when creating or dropping\n> tables.\n>> \n>> Um, why would that matter?\n\n> In my understanding to prevent redo-ing two or more times while in the\n> recovery process, we need to compare LSN in the object against the LSN\n> in the WAL log.\n\nBut undo/redo checking on file creation or deletion is trivial: either\nthe kernel has the file or it doesn't. We do not need any other check\nAFAICS.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Mar 2002 00:00:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Point in time recovery: recreating relation files "
},
{
"msg_contents": "> > In my understanding to prevent redo-ing two or more times while in the\n> > recovery process, we need to compare LSN in the object against the LSN\n> > in the WAL log.\n> \n> But undo/redo checking on file creation or deletion is trivial: either\n> the kernel has the file or it doesn't. We do not need any other check\n> AFAICS.\n\nAre you saying that the table creation log record would contain a\nrelfilenode? I'm not sure the relfilenode is the same before and after\nrecovery if we consider point-in-time recovery.\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 07 Mar 2002 14:11:10 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Point in time recovery: recreating relation files "
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n>> But undo/redo checking on file creation or deletion is trivial: either\n>> the kernel has the file or it doesn't. We do not need any other check\n>> AFAICS.\n\n> Are you saying that the table creation log record would contain a\n> relfilenode?\n\nSure. What else would it contain?\n\n> I'm not sure the relfilenode is same before and after the\n> recovery if we consider the point time recovery.\n\nConsidering that all the WAL entries concerning updates to the table\nwill name it by relfilenode, we'd better be prepared to ensure that\nthe relfilenode doesn't change over recovery.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Mar 2002 00:29:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Point in time recovery: recreating relation files "
},
{
"msg_contents": "Could someone explain to this poor newbie (who is hoping to implement\nthis) exactly what the issue is here? Like Tom, I could originally see\nno reason to worry about the LSN for file creation but I am very\nconcerned that I have failed to grasp Tatsuo's concerns.\n\nIs there some reason why the relfilenode might change either during or\nas a result of recovery? Unless I have missed the point again, during\nrecovery we must recreate files with exactly the same path, name and\nrelfilenode as they would have originally been created, and in the same\norder relative to the creation of the relation. I see no scope for\nanything to be different.\n\n\nOn Wed, 2002-03-06 at 21:29, Tom Lane wrote:\n> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> >> But undo/redo checking on file creation or deletion is trivial: either\n> >> the kernel has the file or it doesn't. We do not need any other check\n> >> AFAICS.\n> \n> > Are you saying that the table creation log record would contain a\n> > relfilenode?\n> \n> Sure. What else would it contain?\n> \n> > I'm not sure the relfilenode is same before and after the\n> > recovery if we consider the point time recovery.\n> \n> Considering that all the WAL entries concerning updates to the table\n> will name it by relfilenode, we'd better be prepared to ensure that\n> the relfilenode doesn't change over recovery.\n> \n> \t\t\tregards, tom lane\n-- \nMarc\t\tmarc@bloodnok.com\n",
"msg_date": "07 Mar 2002 08:03:05 -0800",
"msg_from": "Marc Munro <marc@bloodnok.com>",
"msg_from_op": true,
"msg_subject": "Re: Point in time recovery: recreating relation files"
},
{
"msg_contents": "> Could someone explain to this poor newbie (who is hoping to implement\n> this) exactly what the issue is here? Like Tom, I could originally see\n> no reason to worry about the LSN for file creation but I am very\n> concerned that I have failed to grasp Tatsuo's concerns.\n> \n> Is there some reason why the relfilenode might change either during or\n> as a result of recovery? Unless I have missed the point again, during\n> recovery we must recreate files with exactly the same path, name and\n> relfilenode as they would have originally been created, and in the same\n> order relative to the creation of the relation. I see no scope for\n> anything to be different.\n\nSorry for the confusion. I'm not very familiar with other DBMSs, and I\njust don't know what kinds of point-in-time recovery features\nthey could provide. One scenario I could imagine is recovering a\nsingle table with a different name. I'm not sure this is implemented by\nother DBMSs though.\n\nBTW, the next issue would be TRUNCATE and CREATE/DROP DATABASE.\nI believe this is not currently supported by WAL.\n--\nTatsuo Ishii\n",
"msg_date": "Fri, 08 Mar 2002 09:52:57 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Point in time recovery: recreating relation files"
}
] |
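Tom Lane's observation that redo of file creation and deletion needs no LSN check ("either the kernel has the file or it doesn't") can be modeled in a few lines: replaying the log any number of times converges on the same state. This is a toy Python simulation with invented record shapes, using a set as a stand-in for the file system, not the WAL code:

```python
def replay_wal(records, files):
    """Replay create/drop records against a simulated file system (a set).

    Redo is idempotent here for exactly the reason stated in the thread:
    file existence is the only state, so an already-applied record is a
    no-op. Record shapes and names are invented for illustration.
    """
    for action, relfilenode in records:
        if action == "create":
            files.add(relfilenode)       # creating an existing file: no-op
        elif action == "drop":
            files.discard(relfilenode)   # dropping a missing file: no-op
    return files
```

Note the relfilenode is the replay key, which is why (as Tom says) it must not change across recovery: every later WAL record for the table names it by relfilenode.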
[
{
"msg_contents": "We need to archive WAL files and I am unsure of the right approach. \nWhat is the right way to do this without completely blocking the backend\nthat gets the task?\n\nI can see a number of options but lack the depth of PostgreSQL knowledge\nto be able to choose between them. No doubt some of you will see other\noptions.\n\n1) Just let the backend get on with it.\n This will effectively stop the user's session while the copy occurs. \nBad idea.\n\n2) Have the backend spawn a child process to do this.\n Will the backend wait for its child before closing down? Will\naborting the backend kill the archiving child? This just seems wrong to\nme.\n\n3) Have the backend spawn a disconnected (nohup) process.\n This seems dangerous to me but I can't put my finger on why.\n\n4) Have the backend tell the postmaster to archive the file. The\npostmaster will spawn a dedicated process to make it happen.\n I think I like this but I don't know how to do it yet.\n\n5) Have a dedicated archiver process. Have backends tell it to get on\nwith the job.\n This is Oracle's approach. I see no real benefit over option 4\nexcept that we don't have to keep spawning new processes. On a personal\nlevel I want to be different from Oracle.\n\n6) I have completely missed the point about backends\n Please be gentle.\n\nAny and all feedback welcomed. Thanks.\n\n\n-- \nMarc\t\tmarc@bloodnok.com\n",
"msg_date": "27 Feb 2002 16:40:07 -0800",
"msg_from": "Marc Munro <marc@bloodnok.com>",
"msg_from_op": true,
"msg_subject": "Point in time recovery: archiving WAL files"
}
] |
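Option 5 above (a dedicated archiver that backends hand file names to, so no backend ever blocks on the copy) can be sketched with a queue and a worker. This is a toy Python model with invented names, using a thread in place of a separate process and a list append in place of the actual file copy:

```python
import queue
import threading

def run_archiver(requests, archived, stop):
    """Dedicated archiver loop (option 5 in the message above).

    Backends enqueue WAL segment names and return immediately; this
    single worker drains the queue. Appending to `archived` stands in
    for copying the segment to the archive location.
    """
    while True:
        try:
            name = requests.get(timeout=0.1)
        except queue.Empty:
            if stop.is_set():
                return            # shut down once told to and drained
            continue
        archived.append(name)     # stand-in for the actual file copy
        requests.task_done()
```

A backend's side of the protocol is then just `requests.put(segment_name)`; the queue decouples the two, which is the whole point of options 4 and 5 over option 1.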
[
{
"msg_contents": "Here is a patch to clean up elog():\n\n\tftp://candle.pha.pa.us/pub/postgresql/mypatches/elog\n\nHere is the detail:\n\n\nREALLYFATAL => PANIC\nSTOP => PANIC\nNew INFO level that prints to the client by default\nNew LOG level that prints to the server log by default\nCause VACUUM information to print only to the client in verbose mode\nVACUUM doesn't output to server logs\nNOTICE => INFO where purely informational messages are sent\nDEBUG => LOG for purely server status messages\nDEBUG removed, kept as backward compatible (will be added near 7.3)\nDEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1 added\nDebugLvl removed in favor of new DEBUG[1-5] symbols\nNew server_min_messages GUC parameter with values DEBUG[5-1], INFO, LOG, ...\nNew client_min_messages GUC parameter with values DEBUG[5-1], LOG, INFO, ...\nServer startup now logged with LOG instead of DEBUG\nPostmaster -d flag affects only postmaster messages, not backend messages\nRemove debug_level GUC parameter\nelog() numbers now start at 10\nAdd test to print error message if older elog() values are passed to elog()\nBootstrap mode now has a -d that requires an argument, like postmaster\nThis clears the -d debug level on backend start. Is that done correctly?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 28 Feb 2002 00:10:24 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "elog() patch"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> REALLYFATAL => PANIC\n> STOP => PANIC\n\nThe annoying thing about the choice PANIC is that while the previous\nsuggestions may not give you the most accurate idea about what the action\nreally is, PANIC is just about the worst possible choice, because \"panic\"\nis *no* action at all, it's just a state of mind.\n\n> New INFO level the prints to client by default\n\nI doubt this idea. NOTICE should really print to the client only. This\nagain comes down to the user-level errors vs. server-side errors issue.\nBut INFO doesn't convey either of these meanings.\n\n> DEBUG removed, kept as backward compatible (will be added near 7.3)\n> DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1 added\n> DebugLvl removed in favor of new DEBUG[1-5] symbols\n\nSince you've made us stick with 1-5, are there any meanings attached to\nthose numbers?\n\n> New server_min_messages GUC parameter with values DEBUG[5-1], INFO, LOG, ...\n> New client_min_messages GUC parameter with values DEBUG[5-1], LOG, INFO, ...\n\nNow that is *really* confusing. Two different ways to number the same\nthings.\n\n> Postmaster -d flag effects only postmaster message, not backend messages\n\nWhy?\n\n> Remove debug_level GUC parameter\n\nWhy?\n\n> Bootstrap mode now has a -d that requires an argument, like postmaster\n\nOK\n\n> This clears the -d debug level on backend start. Is that done correctly?\n\nWhy?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 28 Feb 2002 00:43:49 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > REALLYFATAL => PANIC\n> > STOP => PANIC\n> \n> The annoying thing about the choice PANIC is that while the previous\n> suggestions may not give you the most accurate idea about what the action\n> really is, PANIC is just about the worst possible choice, because \"panic\"\n> is *no* action at all, it's just a state of mind.\n\nYes, but PANIC was chosen by vote, and it does match the kernel-level\ndescription.\n\n> > New INFO level the prints to client by default\n> \n> I doubt this idea. NOTICE should really print to the client only. This\n> again comes down to the user-level errors vs. server-side errors issue.\n> But INFO doesn't convey either of these meanings.\n\nWe could call it TIP or something like that. I think INFO is used\nbecause it isn't a NOTICE or ERROR or something major. It is only INFO.\nIt is neutral information.\n\n> > DEBUG removed, kept as backward compatible (will be added near 7.3)\n> > DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1 added\n> > DebugLvl removed in favor of new DEBUG[1-5] symbols\n> \n> Since you've made us stick with 1-5, are there any meanings attached to\n> those numbers?\n\n5 is max, 1 is for higher level messages. I just followed what was\nalready there. We can adjust these.\n\n> > New server_min_messages GUC parameter with values DEBUG[5-1], INFO, LOG, ...\n> > New client_min_messages GUC parameter with values DEBUG[5-1], LOG, INFO, ...\n> \n> Now that is *really* confusing. Two different ways to number the same\n> things.\n\nSure is, but it was agreed to by the group discussing it as the cleanest\nsolution. postgresql.conf has these levels documented, as do the SGML\ndocs.\n\n> > Postmaster -d flag effects only postmaster message, not backend messages\n> \n> Why?\n\nThis allows you to see postmaster connection-level debug stuff without\nthe query debug stuff from the backend. If you want both, you have to\nset the postgres -d flag too. 
Seemed clearer but I can remove it if\npeople don't want it.\n\n> > Remove debug_level GUC parameter\n> \n> Why?\n\nNo longer needed with new DEBUG* levels.\n\n> > This clears the -d debug level on backend start. Is that done correctly?\n> \n> Why?\n\nAgain, seemed clearer. The way things are in the patch, you can't do -d\n0 in the backend to turn off debug on the backend, so you have to\nexplicitly enable it. Of course, with these new GUC parameters, the need\nfor -d is less anyway, and you can see all the messages in your client\nif you wish.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 28 Feb 2002 10:58:16 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Yes, but PANIC was chosen by vote, and it does match the kernel-level\n> description.\n\nWhat is the kernel-level description?\n\n> > I doubt this idea. NOTICE should really print to the client only. This\n> > again comes down to the user-level errors vs. server-side errors issue.\n> > But INFO doesn't convey either of these meanings.\n>\n> We could call it TIP or something like that. I think INFO is used\n> because it isn't a NOTICE or ERROR or something major. It is only INFO.\n> It is neutral information.\n\nThat's what NOTICE is. NOTICE is only neutral information. NOTICE could\ngo to the client by default, whereas if you want something in the server\nlog you use LOG. I doubt an extra level is needed.\n\n> > > New server_min_messages GUC parameter with values DEBUG[5-1], INFO, LOG, ...\n> > > New client_min_messages GUC parameter with values DEBUG[5-1], LOG, INFO, ...\n> >\n> > Now that is *really* confusing. Two different ways to number the same\n> > things.\n>\n> Sure is, but it was agreed to by the group discussing it as the cleanest\n> solution. postgresql.conf has these levels documented, as does the SGML\n> docs.\n\nI doubt that agreement.\n\nConsider, what and how much I want to debug is really quite independent of\nwhat amount of regular \"neutral\" messages I want to see where. The latter\nis a rather permanent administrative decision, whereas the former is a\ntemporary decision to isolate problems. A \"debug level\" is really a\nuniversal concept in any package, and I hate to see it go.\n\nSecondly, once I have decided how much debugging I want to do, it is\nunlikely that I want to do a different amount of debugging on the client\nand on the server. I can see users becoming confused by this: \"I already\nset the debugging level to 5, but I still only see the same messages in\nthe client\". 
I think that the current debug_level, plus a new Boolean\nsetting \"debug_to_client\" or such is sufficient and much clearer.\n\nAs far as the non-debug levels go, there isn't much choice. ERROR and\nabove really needs to be communicated to the client anyway. So you might\nbe able to tune which one of LOG, INFO, NOTICE goes where. But that's\nabout all.\n\n> > > Postmaster -d flag effects only postmaster message, not backend messages\n> >\n> > Why?\n>\n> This allows you to see postmaster connection-level debug stuff without\n> the query debug stuff from the backend. If you want both, you have to\n> set the postgres -d flag too. Seemed clearer but I can remove it if\n> people don't want it.\n\nWe had wanted to get rid of the discrepancy between postmaster and\npostgres flag, not add new ones.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 28 Feb 2002 14:20:08 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > Yes, but PANIC was chosen by vote, and it does match the kernel-level\n> > description.\n> \n> What is the kernel-level description?\n\nKernel stops, can't continue. kernel panic.\n\n> > > I doubt this idea. NOTICE should really print to the client only. This\n> > > again comes down to the user-level errors vs. server-side errors issue.\n> > > But INFO doesn't convey either of these meanings.\n> >\n> > We could call it TIP or something like that. I think INFO is used\n> > because it isn't a NOTICE or ERROR or something major. It is only INFO.\n> > It is neutral information.\n> \n> That's what NOTICE is. NOTICE is only neutral information. NOTICE could\n> go to the client by default, whereas if you want something in the server\n> log you use LOG. I doubt an extra level is needed.\n\nNotice isn't as neutral. It is for truncation of long identifiers and\nstuff like that.\n\n> > > > New server_min_messages GUC parameter with values DEBUG[5-1], INFO, LOG, ...\n> > > > New client_min_messages GUC parameter with values DEBUG[5-1], LOG, INFO, ...\n> > >\n> > > Now that is *really* confusing. Two different ways to number the same\n> > > things.\n> >\n> > Sure is, but it was agreed to by the group discussing it as the cleanest\n> > solution. postgresql.conf has these levels documented, as does the SGML\n> > docs.\n> \n> I doubt that agreement.\n\nWell, we discussed it on the lists, and no one objected after this\nresult was found.\n\n\n> Consider, what and how much I want to debug is really quite independent of\n> what amount of regular \"neutral\" messages I want to see where. The latter\n> is a rather permanent administrative decision, whereas the former is a\n> temporary decision to isolate problems. 
A \"debug level\" is really a\n> universal concept in any package, and I hate to see it go.\n\nWell, yes, but unless we want a really complicated setup, we may as well\njust emit the neutral messages with the debug. It has to be simple. \nThere is usually one neutral message per query, if any, so I don't see\nthe point.\n\n\n> Secondly, once I have decided how much debugging I want to do, it is\n> unlikely that I want to do a different amount of debugging on the client\n> and on the server. I can see users becoming confused by this: \"I already\n> set the debugging level to 5, but I still only see the same messages in\n> the client\". I think that the current debug_level, plus a new Boolean\n> setting \"debug_to_client\" or such is sufficient and much clearer.\n\nI disagree. The patch gives us clear control over the various levels. \nTom wanted the DebugLvl variable stuff rolled into elog, and I think\nthis is a very clean solution.\n\nI think Tom's point was that we want to control them independently, and\nthat is what this does. I think it is a valid use.\n\n> As far as the non-debug levels go, there isn't much choice. ERROR and\n> above really needs to be communicated to the client anyway. So you might\n> be able to tune which one of LOG, INFO, NOTICE goes where. But that's\n> about all.\n\nYes, that is all the options you have for client, ERROR and below.\n\n> > > > Postmaster -d flag effects only postmaster message, not backend messages\n> > >\n> > > Why?\n> >\n> > This allows you to see postmaster connection-level debug stuff without\n> > the query debug stuff from the backend. If you want both, you have to\n> > set the postgres -d flag too. 
Seemed clearer but I can remove it if\n> > people don't want it.\n> \n> We had wanted to get rid of the discrepancy between postmaster and\n> postgres flag, not add new ones.\n\nWe now separate them so they act independently.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 28 Feb 2002 16:35:05 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "> > Postmaster -d flag effects only postmaster message, not backend messages\n> \n> Why?\n\nOK, how about this? What if we add a new GUC parameter,\npostmaster_min_messages, and that controls the postmaster printing to\nthe server logs. (Postmaster has no client to print to.) Then, we can\npropagate the -d flag to the backends, and if someone wants just\npostmaster logging, they can use the GUC parameter.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 28 Feb 2002 18:07:20 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "\nOK, I have talked with Peter via phone and he is OK with the patch\nassuming the changes I have outlined below. This is not a total\noverhaul of elog() but rather a major cleanup. These changes are in the\ndirection of where we want to head.\n\nPeter is also concerned if allowing clients to see elog() messages is a\nsecurity problem. Clients can't see postmaster messages because there\nis no client at the time, but backend messages will be visible. I can't\nthink of any server log messages that shouldn't be seen by the client. \nDoes anyone else?\n\n> Here is a patch to clean up elog():\n> \n> \tftp://candle.pha.pa.us/pub/postgresql/mypatches/elog\n> \n> Here is the detail:\n> \n> \n> REALLYFATAL => PANIC\n> STOP => PANIC\n> New INFO level the prints to client by default\n> New LOG level the prints to server log by default\n> Cause VACUUM information to print only to the client in verbose mode\n> VACUUM doesn't output to server logs\n> NOTICE => INFO where purely information messages are sent\n> DEBUG => LOG for purely server status messages\n> DEBUG removed, kept as backward compatible (will be added near 7.3)\n> DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1 added\n> DebugLvl removed in favor of new DEBUG[1-5] symbols\n> New server_min_messages GUC parameter with values DEBUG[5-1], INFO, LOG, ...\n> New client_min_messages GUC parameter with values DEBUG[5-1], LOG, INFO, ...\n> Server startup now logged with LOG instead of DEBUG\n> Postmaster -d flag effects only postmaster message, not backend messages\n\nChanged. Postmaster -d propogates to backends, like current. New -d 0\npostgres parameter allows this propogation to be turned off.\n\n> Remove debug_level GUC parameter\n> elog() numbers now start at 10\n> Add test to print error message if older elog() values are passed to elog()\n> Bootstrap mode now has a -d that requires an argument, like postmaster\n\n> This clears the -d debug level on backend start. 
Is that done correctly?\n\nI cleared this up with Peter.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 1 Mar 2002 00:00:23 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Peter is also concerned if allowing clients to see elog() messages is a\n> security problem. Clients can't see postmaster messages because there\n> is no client at the time, but backend messages will be visible. I can't\n> think of any server log messages that shouldn't be seen by the client. \n\nThe only thing I can think of is the detailed authorization-failure\nmessages that the postmaster has traditionally logged but not sent to\nthe client. We need to be sure that the client cannot change that\nbehavior by setting PGOPTIONS. I *think* this is OK, since client\noptions aren't processed till after the auth cycle finishes --- but\ncheck it. If you are using IsUnderPostmaster to control things then\nyou might have a problem, because that gets set too soon.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Mar 2002 00:46:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Peter is also concerned if allowing clients to see elog() messages is a\n> > security problem. Clients can't see postmaster messages because there\n> > is no client at the time, but backend messages will be visible. I can't\n> > think of any server log messages that shouldn't be seen by the client. \n> \n> The only thing I can think of is the detailed authorization-failure\n> messages that the postmaster has traditionally logged but not sent to\n> the client. We need to be sure that the client cannot change that\n> behavior by setting PGOPTIONS. I *think* this is OK, since client\n> options aren't processed till after the auth cycle finishes --- but\n> check it. If you are using IsUnderPostmaster to control things then\n> you might have a problem, because that gets set too soon.\n\nIs this what you were looking for? I set client_min_messages to the max\nof debug5 and the output is attached.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nDEBUG: ./bin/postmaster child[10023]: starting with (\nDEBUG: postgres \nDEBUG: -v131072 \nDEBUG: -p \nDEBUG: test \nDEBUG: )\n\nDEBUG: InitPostgres\nDEBUG: StartTransactionCommand\nDEBUG: ProcessQuery\nDEBUG: CommitTransactionCommand\nDEBUG: StartTransactionCommand\nDEBUG: ProcessQuery\nDEBUG: CommitTransactionCommand\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\ntest=> show client_min_messages;\nDEBUG: StartTransactionCommand\nDEBUG: ProcessUtility\nINFO: client_min_messages is debug5\nDEBUG: CommitTransactionCommand\nSHOW VARIABLE\ntest=> \\q",
"msg_date": "Sat, 2 Mar 2002 18:00:43 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Is this what you were looking for? I set client_min_messages to the max\n> of debug5 and the output is attached.\n\nIf the DBA wants to do that, I don't have a problem with it. I'm\nwondering what happens if an unprivileged user tries to do it,\nvia either PGOPTIONS or Peter's new user/database-local options.\n\nPlease note also that I'm wondering about the messages emitted during\nan authorization *failure*, not a successful connection.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 02 Mar 2002 18:19:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Is this what you were looking for? I set client_min_messages to the max\n> > of debug5 and the output is attached.\n> \n> If the DBA wants to do that, I don't have a problem with it. I'm\n> wondering what happens if an unprivileged user tries to do it,\n> via either PGOPTIONS or Peter's new user/database-local options.\n> \n> Please note also that I'm wondering about the messages emitted during\n> an authorization *failure*, not a successful connection.\n\nYou ask a very good question here. I never tested authentication with\ndebug sent to the client. The answer is that it doesn't work without\nthe attached patch. Now, I am not about to apply this because it does\nchange getNotice() to an extern and moves its prototype to libpq-int.h. \nThis is necessary because I now use getNotice() in fe-connect.c.\n\nThe second issue is that this isn't going to work for pre-7.2 clients\nbecause the protocol doesn't expect 'N' messages during the\nauthentication phase. I think we can live with a client_min_messages\nlevel of debug* not working on old clients, though we should make a\nmention of it in the release notes.\n\nAnd finally, here is the output from a failed password login with the\npatch applied:\n\n\t$ psql test\n\tPassword: \n\tDEBUG: received password packet with len=12, pw=lkjasdf\n\t\n\tDEBUG: received password packet with len=12, pw=lkjasdf\n\t\n\tpsql: FATAL: Password authentication failed for user \"postgres\"\n\nBasically it echoes the failed password back to the user. Again, this\nis only with client_min_messages set to debug1-5. I don't know how to\nfix this because we specifically set things up so the client could see\neverything the server logs see. I wonder if echoing the failed password\ninto the logs is a good idea either. I don't think so.\n\nSomeone please advise on patch application. 
Are there other places that\ndon't expect a NOTICE in the middle of a protocol handshake?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/interfaces/libpq/fe-connect.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/interfaces/libpq/fe-connect.c,v\nretrieving revision 1.182\ndiff -c -r1.182 fe-connect.c\n*** src/interfaces/libpq/fe-connect.c\t2 Mar 2002 00:49:22 -0000\t1.182\n--- src/interfaces/libpq/fe-connect.c\t3 Mar 2002 02:33:51 -0000\n***************\n*** 1296,1301 ****\n--- 1296,1310 ----\n \t\t\t\t\treturn PGRES_POLLING_READING;\n \t\t\t\t}\n \n+ \t\t\t\t/* Grab NOTICE/INFO/DEBUG and discard them. */\n+ \t\t\t\twhile (beresp == 'N')\n+ \t\t\t\t{\n+ \t\t\t\t\tif (getNotice(conn))\n+ \t\t\t\t\t\treturn PGRES_POLLING_READING;\n+ \t\t\t\t\tif (pqGetc(&beresp, conn))\n+ \t\t\t\t\t\treturn PGRES_POLLING_READING;\n+ \t\t\t\t}\n+ \n \t\t\t\t/* Handle errors. */\n \t\t\t\tif (beresp == 'E')\n \t\t\t\t{\n***************\n*** 1314,1319 ****\n--- 1323,1337 ----\n \t\t\t\t\t */\n \t\t\t\t\tappendPQExpBufferChar(&conn->errorMessage, '\\n');\n \t\t\t\t\tgoto error_return;\n+ \t\t\t\t}\n+ \n+ \t\t\t\t/* Grab NOTICE/INFO/DEBUG and discard them. */\n+ \t\t\t\twhile (beresp == 'N')\n+ \t\t\t\t{\n+ \t\t\t\t\tif (getNotice(conn))\n+ \t\t\t\t\t\treturn PGRES_POLLING_READING;\n+ \t\t\t\t\tif (pqGetc(&beresp, conn))\n+ \t\t\t\t\t\treturn PGRES_POLLING_READING;\n \t\t\t\t}\n \n \t\t\t\t/* Otherwise it should be an authentication request. 
*/\nIndex: src/interfaces/libpq/fe-exec.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/interfaces/libpq/fe-exec.c,v\nretrieving revision 1.113\ndiff -c -r1.113 fe-exec.c\n*** src/interfaces/libpq/fe-exec.c\t25 Oct 2001 05:50:13 -0000\t1.113\n--- src/interfaces/libpq/fe-exec.c\t3 Mar 2002 02:33:52 -0000\n***************\n*** 54,60 ****\n static int\tgetRowDescriptions(PGconn *conn);\n static int\tgetAnotherTuple(PGconn *conn, int binary);\n static int\tgetNotify(PGconn *conn);\n- static int\tgetNotice(PGconn *conn);\n \n /* ---------------\n * Escaping arbitrary strings to get valid SQL strings/identifiers.\n--- 54,59 ----\n***************\n*** 1379,1385 ****\n * Exit: returns 0 if successfully consumed Notice message.\n *\t\t returns EOF if not enough data.\n */\n! static int\n getNotice(PGconn *conn)\n {\n \t/*\n--- 1378,1384 ----\n * Exit: returns 0 if successfully consumed Notice message.\n *\t\t returns EOF if not enough data.\n */\n! int\n getNotice(PGconn *conn)\n {\n \t/*\nIndex: src/interfaces/libpq/libpq-fe.h\n===================================================================\nRCS file: /cvsroot/pgsql/src/interfaces/libpq/libpq-fe.h,v\nretrieving revision 1.80\ndiff -c -r1.80 libpq-fe.h\n*** src/interfaces/libpq/libpq-fe.h\t8 Nov 2001 20:37:52 -0000\t1.80\n--- src/interfaces/libpq/libpq-fe.h\t3 Mar 2002 02:33:56 -0000\n***************\n*** 252,257 ****\n--- 252,258 ----\n extern size_t PQescapeString(char *to, const char *from, size_t length);\n extern unsigned char *PQescapeBytea(unsigned char *bintext, size_t binlen,\n \t\t\t size_t *bytealen);\n+ extern int\tgetNotice(PGconn *conn);\n \n /* Simple synchronous query */\n extern PGresult *PQexec(PGconn *conn, const char *query);",
"msg_date": "Sat, 2 Mar 2002 21:46:05 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "> Basically it echoes the failed password back to the user. Again, this\n> is only with client_min_messages set to debug1-5. I don't know how to\n> fix this because we specifically set things up so the client could see\n> everything the server logs see. I wonder if echoing the failed password\n> into the logs is a good idea either. I don't think so.\n\nCrypt/MD5 authentication does output the password encrypted:\n\n DEBUG: received password packet with len=40, pw=md515e315f11670d4ba385d0c1615476780\n\n DEBUG: received password packet with len=40, pw=md515e315f11670d4ba385d0c1615476780\n\n psql: FATAL: Password authentication failed for user \"postgres\"\n\nHowever, I still don't think we should be echoing this to the server\nlogs or the client. There is just little value to it and potential\nproblems, especially with 'password' authentication.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 3 Mar 2002 00:08:41 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Here is a better patch I am inclined to apply. I fixes the debug\nmessages during authentication problem in a cleaner way, and removes\npassword echo to server logs and client.\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Is this what you were looking for? I set client_min_messages to the max\n> > > of debug5 and the output is attached.\n> > \n> > If the DBA wants to do that, I don't have a problem with it. I'm\n> > wondering what happens if an unprivileged user tries to do it,\n> > via either PGOPTIONS or Peter's new user/database-local options.\n> > \n> > Please note also that I'm wondering about the messages emitted during\n> > an authorization *failure*, not a successful connection.\n> \n> You ask a very good question here. I never tested authentication with\n> debug sent to the client. The answer is that it doesn't work without\n> the attached patch. Now, I am not about to apply this because it does\n> change getNotice() to an extern and moves its prototype to libpq-int.h. \n> This is necessary because I now use getNotice() in fe-connect.c.\n> \n> The second issue is that this isn't going to work for pre-7.2 clients\n> because the protocol doesn't expect 'N' messages during the\n> authentication phase. I think we can live with a client_min_messages\n> level of debug* not working on old clients, though we should make a\n> mention of it in the release notes.\n> \n> And finally, here is the output from a failed password login with the\n> patch applied:\n> \n> \t$ psql test\n> \tPassword: \n> \tDEBUG: received password packet with len=12, pw=lkjasdf\n> \t\n> \tDEBUG: received password packet with len=12, pw=lkjasdf\n> \t\n> \tpsql: FATAL: Password authentication failed for user \"postgres\"\n> \n> Basically it echoes the failed password back to the user. 
Again, this\n> is only with client_min_messages set to debug1-5. I don't know how to\n> fix this because we specifically set things up so the client could see\n> everything the server logs see. I wonder if echoing the failed password\n> into the logs is a good idea either. I don't think so.\n> \n> Someone please advise on patch application. Are there other places that\n> don't expect a NOTICE in the middle of a protocol handshake?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/libpq/auth.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/libpq/auth.c,v\nretrieving revision 1.76\ndiff -c -r1.76 auth.c\n*** src/backend/libpq/auth.c\t2 Mar 2002 21:39:25 -0000\t1.76\n--- src/backend/libpq/auth.c\t3 Mar 2002 21:39:40 -0000\n***************\n*** 854,861 ****\n \t\treturn STATUS_EOF;\n \t}\n \n! \telog(DEBUG5, \"received password packet with len=%d, pw=%s\\n\",\n! \t\tlen, buf.data);\n \n \tresult = checkPassword(port, port->user, buf.data);\n \tpfree(buf.data);\n--- 854,861 ----\n \t\treturn STATUS_EOF;\n \t}\n \n! \t/* For security reasons, do not output contents of password packet */\n! 
\telog(DEBUG5, \"received password packet\");\n \n \tresult = checkPassword(port, port->user, buf.data);\n \tpfree(buf.data);\nIndex: src/interfaces/libpq/fe-connect.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/interfaces/libpq/fe-connect.c,v\nretrieving revision 1.182\ndiff -c -r1.182 fe-connect.c\n*** src/interfaces/libpq/fe-connect.c\t2 Mar 2002 00:49:22 -0000\t1.182\n--- src/interfaces/libpq/fe-connect.c\t3 Mar 2002 21:39:42 -0000\n***************\n*** 1296,1301 ****\n--- 1296,1310 ----\n \t\t\t\t\treturn PGRES_POLLING_READING;\n \t\t\t\t}\n \n+ \t\t\t\t/* Grab NOTICE/INFO/DEBUG and discard them. */\n+ \t\t\t\twhile (beresp == 'N')\n+ \t\t\t\t{\n+ \t\t\t\t\tif (getNotice(conn))\n+ \t\t\t\t\t\treturn PGRES_POLLING_READING;\n+ \t\t\t\t\tif (pqGetc(&beresp, conn))\n+ \t\t\t\t\t\treturn PGRES_POLLING_READING;\n+ \t\t\t\t}\n+ \n \t\t\t\t/* Handle errors. */\n \t\t\t\tif (beresp == 'E')\n \t\t\t\t{\nIndex: src/interfaces/libpq/fe-exec.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/interfaces/libpq/fe-exec.c,v\nretrieving revision 1.113\ndiff -c -r1.113 fe-exec.c\n*** src/interfaces/libpq/fe-exec.c\t25 Oct 2001 05:50:13 -0000\t1.113\n--- src/interfaces/libpq/fe-exec.c\t3 Mar 2002 21:39:44 -0000\n***************\n*** 54,60 ****\n static int\tgetRowDescriptions(PGconn *conn);\n static int\tgetAnotherTuple(PGconn *conn, int binary);\n static int\tgetNotify(PGconn *conn);\n- static int\tgetNotice(PGconn *conn);\n \n /* ---------------\n * Escaping arbitrary strings to get valid SQL strings/identifiers.\n--- 54,59 ----\n***************\n*** 1379,1385 ****\n * Exit: returns 0 if successfully consumed Notice message.\n *\t\t returns EOF if not enough data.\n */\n! static int\n getNotice(PGconn *conn)\n {\n \t/*\n--- 1378,1384 ----\n * Exit: returns 0 if successfully consumed Notice message.\n *\t\t returns EOF if not enough data.\n */\n! 
int\n getNotice(PGconn *conn)\n {\n \t/*\nIndex: src/interfaces/libpq/libpq-int.h\n===================================================================\nRCS file: /cvsroot/pgsql/src/interfaces/libpq/libpq-int.h,v\nretrieving revision 1.44\ndiff -c -r1.44 libpq-int.h\n*** src/interfaces/libpq/libpq-int.h\t5 Nov 2001 17:46:38 -0000\t1.44\n--- src/interfaces/libpq/libpq-int.h\t3 Mar 2002 21:39:44 -0000\n***************\n*** 305,310 ****\n--- 305,311 ----\n extern void *pqResultAlloc(PGresult *res, size_t nBytes, bool isBinary);\n extern char *pqResultStrdup(PGresult *res, const char *str);\n extern void pqClearAsyncResult(PGconn *conn);\n+ extern int\tgetNotice(PGconn *conn);\n \n /* === in fe-misc.c === */",
"msg_date": "Sun, 3 Mar 2002 17:27:00 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Here is a better patch I am inclined to apply.\n\nPlease do not ... I am about to commit a patch that fixes it properly,\nie, suppresses all non-error reports to the client during\nauthentication. I do not think that we can assume that clients will\nbe prepared to handle notice messages in the auth cycle.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 03 Mar 2002 17:38:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Here is a better patch I am inclined to apply.\n> \n> Please do not ... I am about to commit a patch that fixes it properly,\n> ie, suppresses all non-error reports to the client during\n> authentication. I do not think that we can assume that clients will\n> be prepared to handle notice messages in the auth cycle.\n\nOK, great. Can you take care of the echo of entered password too, or I\nwill fix it once you are done?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 3 Mar 2002 17:48:11 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Can you take care of the echo of entered password too,\n\nI'm unconvinced that that's wrong, and will not change it without\nmore discussion. (1) The reason it was put in was to allow debugging\nof \"that's the wrong password\" mistakes. (2) The postmaster log\ninherently contains a great deal of sensitive information, so anyone\nwho runs with it world-readable has a problem already. (3) The password\nis not emitted unless the message level is a lot lower than anyone would\nroutinely use. (4) If you're using the recommended MD5 encryption\napproach, then what's logged is encrypted; it seems no more dangerous\nthan having encrypted passwords in pg_shadow.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 03 Mar 2002 17:53:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Can you take care of the echo of entered password too,\n> \n> I'm unconvinced that that's wrong, and will not change it without\n> more discussion. (1) The reason it was put in was to allow debugging\n> of \"that's the wrong password\" mistakes. (2) The postmaster log\n> inherently contains a great deal of sensitive information, so anyone\n> who runs with it world-readable has a problem already. (3) The password\n> is not emitted unless the message level is a lot lower than anyone would\n> routinely use. (4) If you're using the recommended MD5 encryption\n> approach, then what's logged is encrypted; it seems no more dangerous\n> than having encrypted passwords in pg_shadow.\n\nThat's a good point, particularly that MD5 echos the MD5 string and not\nthe actual password. We can leave it and wait to see if anyone\ncomplains.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 3 Mar 2002 17:55:36 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Can you take care of the echo of entered password too,\n> \n> I'm unconvinced that that's wrong, and will not change it without\n> more discussion. (1) The reason it was put in was to allow debugging\n> of \"that's the wrong password\" mistakes. (2) The postmaster log\n> inherently contains a great deal of sensitive information, so anyone\n> who runs with it world-readable has a problem already. (3) The password\n> is not emitted unless the message level is a lot lower than anyone would\n> routinely use. (4) If you're using the recommended MD5 encryption\n> approach, then what's logged is encrypted; it seems no more dangerous\n> than having encrypted passwords in pg_shadow.\n\nI assume with your changes that the password will no longer be echoed to\nthe client on failure at debug5, right?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 3 Mar 2002 18:04:47 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I assume with your changes that the password will no longer be echoed to\n> the client on failure at debug5, right?\n\nNope.\n\nI've noticed a bunch of breakage with this patch at high (or is it low?)\ndebug output levels; for example client disconnection leaves this in the\nlog:\n\tDEBUG: proc_exit(0)\n\tDEBUG: shmem_exit(0)\n\tLOG: pq_flush: send() failed: Broken pipe\n\tDEBUG: exit(0)\n\tLOG: pq_flush: send() failed: Bad file number\n\tDEBUG: reaping dead processes\n\tDEBUG: child process (pid 12462) exited with exit code 0\nThe problem is that elog is still trying to send stuff to the client\nafter client disconnection. I propose to fix that by resetting\nwhereToSendOutput to None as soon as we detect client disconnect.\n\nA more serious problem has to do with error reports for communication\nproblems, for example this fragment in pqcomm.c:\n\n /*\n * Careful: an elog() that tries to write to the client\n * would cause recursion to here, leading to stack overflow\n * and core dump! This message must go *only* to the postmaster\n * log. elog(LOG) is presently safe.\n */\n elog(LOG, \"pq_recvbuf: recv() failed: %m\");\n\nelog(LOG) is NOT safe anymore :-(.\n\nI am thinking of inventing an additional elog level, perhaps called\nCOMMERR, to be used specifically for reports of client communication\ntrouble. This could be treated the same as LOG as far as output to\nthe server log goes, but we would hard-wire it to never be reported\nto the client (for fear of recursive failure).\n\nComments?\n\nBTW, I am also looking at normalizing all the backend/libpq reports\nthat look like\n\t\tsnprintf(PQerrormsg, ...);\n\t\tfputs(PQerrormsg, stderr);\n\t\tpqdebug(\"%s\", PQerrormsg);\nto just use elog. As-is this code is fairly broken since it doesn't\nhonor the syslog option. Any objections?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 03 Mar 2002 18:21:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I assume with your changes that the password will no longer be echoed to\n> > the client on failure at debug5, right?\n> \n> Nope.\n> \n> I've noticed a bunch of breakage with this patch at high (or is it low?)\n> debug output levels; for example client disconnection leaves this in the\n> log:\n> \tDEBUG: proc_exit(0)\n> \tDEBUG: shmem_exit(0)\n> \tLOG: pq_flush: send() failed: Broken pipe\n> \tDEBUG: exit(0)\n> \tLOG: pq_flush: send() failed: Bad file number\n> \tDEBUG: reaping dead processes\n> \tDEBUG: child process (pid 12462) exited with exit code 0\n> The problem is that elog is still trying to send stuff to the client\n> after client disconnection. I propose to fix that by resetting\n> whereToSendOutput to None as soon as we detect client disconnect.\n\nThat is exactly the solution I would recommend. Clearly we are\nstressing elog() and the client by forcing more information than we used\nto.\n\n> A more serious problem has to do with error reports for communication\n> problems, for example this fragment in pqcomm.c:\n> \n> /*\n> * Careful: an elog() that tries to write to the client\n> * would cause recursion to here, leading to stack overflow\n> * and core dump! This message must go *only* to the postmaster\n> * log. elog(LOG) is presently safe.\n> */\n> elog(LOG, \"pq_recvbuf: recv() failed: %m\");\n> \n> elog(LOG) is NOT safe anymore :-(.\n\nSure isn't. We know we can always output to the server logs, but not\nalways to the client.\n\n\n> I am thinking of inventing an additional elog level, perhaps called\n> COMMERR, to be used specifically for reports of client communication\n> trouble. 
This could be treated the same as LOG as far as output to\n> the server log goes, but we would hard-wire it to never be reported\n> to the client (for fear of recursive failure).\n> \n> Comments?\n\nCouldn't we just set whereToSendOutput to None to fix this, or is there\na sense that we may be able to send messages later.\n\nIf needed, we can put COMMERR into the existing numbering. I would be\nglad to add it for you if you wish. There would be no mention of it in\nthe docs because it is just like LOG but only to server logs.\n\n> BTW, I am also looking at normalizing all the backend/libpq reports\n> that look like\n> \t\tsnprintf(PQerrormsg, ...);\n> \t\tfputs(PQerrormsg, stderr);\n> \t\tpqdebug(\"%s\", PQerrormsg);\n> to just use elog. As-is this code is fairly broken since it doesn't\n> honor the syslog option. Any objections?\n\nYes, I never liked this stuff. Please remove it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 3 Mar 2002 18:27:43 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> I am thinking of inventing an additional elog level, perhaps called\n>> COMMERR, to be used specifically for reports of client communication\n>> trouble. This could be treated the same as LOG as far as output to\n>> the server log goes, but we would hard-wire it to never be reported\n>> to the client (for fear of recursive failure).\n\n> Couldn't we just set whereToSendOutput to None to fix this, or is there\n> a sense that we may be able to send messages later.\n\nWe might as well just do proc_exit() as do that: once you reset\nwhereToSendOutput, you are effectively done talking to the client\n(because SELECT won't send results to the client anymore). The\nerrors that libpq notices might or might not be hard failures, but\nI don't want to take the approach of changing global state in order\nto report them.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 03 Mar 2002 18:40:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> I am thinking of inventing an additional elog level, perhaps called\n> >> COMMERR, to be used specifically for reports of client communication\n> >> trouble. This could be treated the same as LOG as far as output to\n> >> the server log goes, but we would hard-wire it to never be reported\n> >> to the client (for fear of recursive failure).\n> \n> > Couldn't we just set whereToSendOutput to None to fix this, or is there\n> > a sense that we may be able to send messages later.\n> \n> We might as well just do proc_exit() as do that: once you reset\n> whereToSendOutput, you are effectively done talking to the client\n> (because SELECT won't send results to the client anymore). The\n> errors that libpq notices might or might not be hard failures, but\n> I don't want to take the approach of changing global state in order\n> to report them.\n\nOh, I thought whereToSendOutput only affected elog(). I now see it is\nused in many places. Sure new log-only code is fine.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 3 Mar 2002 18:42:17 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Can you take care of the echo of entered password too,\n> \n> I'm unconvinced that that's wrong, and will not change it without\n> more discussion. (1) The reason it was put in was to allow debugging\n> of \"that's the wrong password\" mistakes. (2) The postmaster log\n> inherently contains a great deal of sensitive information, so anyone\n> who runs with it world-readable has a problem already. (3) The password\n> is not emitted unless the message level is a lot lower than anyone would\n> routinely use. (4) If you're using the recommended MD5 encryption\n> approach, then what's logged is encrypted; it seems no more dangerous\n> than having encrypted passwords in pg_shadow.\n\nOK, I have thought about how we display invalid passwords in the server\nlogs. This isn't an issue if the password is the same as stored in\npg_shadow. However, if the invalid password was incorrect because it\nwas their Unix password or a password on another machine, I think we do\nhave an issue storing it in the server logs. I can't think of any unix\nutility that stores invalid passwords in the log, no matter what the\ndebugging level, and I don't think we should be doing it either.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 5 Mar 2002 01:43:43 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, I have thought about how we display invalid passwords in the server\n> logs. This isn't an issue if the password is the same as stored in\n> pg_shadow. However, if the invalid password was incorrect because it\n> was their Unix password or a password on another machine, I think we do\n> have an issue storing it in the server logs.\n\nGood point. Okay, yank it out ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Mar 2002 01:47:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, I have thought about how we display invalid passwords in the server\n> > logs. This isn't an issue if the password is the same as stored in\n> > pg_shadow. However, if the invalid password was incorrect because it\n> > was their Unix password or a password on another machine, I think we do\n> > have an issue storing it in the server logs.\n> \n> Good point. Okay, yank it out ...\n\nDone.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 5 Mar 2002 01:52:02 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: elog() patch"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Michael Meskes [mailto:meskes@postgresql.org] \n> Sent: 28 February 2002 07:04\n> To: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] eWeek Poll: Which database is most \n> critical to your\n> \n> \n> On Wed, Feb 27, 2002 at 12:39:16AM -0500, Tom Lane wrote:\n> > I cannot believe that caching results for \n> literally-identical queries \n> > is a win, except perhaps for the most specialized (read brain dead)\n> \n> I don't think they are brain dead. Well that is at first I \n> thought so too, but then thinking some more it made sense. \n> After all MySQL is used mostly for web pages and even your \n> dynamic content doesn't change that often. But in between \n> there are thousands of concurrent access that all execute the \n> very same statement. This feature makes no sense IMO for the \n> \"normal\" use we both probably had in mind when first reading, \n> but for this web usage I see a benefit if it's implementable.\n\nEverytime someone browses http://pgadmin.postgresql.org/ this is exactly\nwhat's going on, as well as (I imagine) with Vince's interactive docs.\n\nRegards, Dave.\n",
"msg_date": "Thu, 28 Feb 2002 09:03:41 -0000",
"msg_from": "Dave Page <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: eWeek Poll: Which database is most critical to your"
}
] |
[
{
"msg_contents": "First of all, sorry for posting directly to -hackers without trying it in \nthe other lists, but I'm pretty sure that this is the place to ask this.\n\nSecond, the embarrassing thing, in summary: deleted table, no backup at \nall, and no vacuum, so I ended up with a pretty file with all my data but no \nclever way to access it.\n\nI've searched for this question in the archives and only found a reference \nto \"The Tao of Backup\" which enlightened me, and will be very useful for \nfuture reference, but doesn't solve my present problem. I also found some \nreferences for a \"not so hard to make\" recovery utility so...\n\nI'm not a bad C and Perl hacker, so I'm trying to contribute this utility \nbut haven't found any doc about this file format. I've also scanned \nsome of the source code but haven't been able to find any information \nthat helps me (I've only studied some parts of the fti contrib module \nbefore), and also tried to do some rev. eng., but this is not the way to go, so \nI would be grateful if someone can:\n\n- Point me to some developer doc about this file format.\n\n- Point me to some part of the source code that could be of help; please \nbe precise because I'm not very familiar with the tree.\n\n- Or even explain, in summary, something about this file format, or some \nof the bytes needed for me to write this tool.\n\n- Of course, if someone has made or started this tool I will be very \nhappy to receive a URL to a .tar.gz ;-) .\n\nOn another point, I've been thinking about this tool, and will accept \nhints about this process:\n\n- The tool will receive the schema of the table and a copy of the file \nwhich stores this table; as far as I know the schema is not stored in the \ntable data file.\n\n- After scanning the file, it will print a (sorted or filtered by \ntransaction id or number of altered rows) list of transactions.\n\n- Ask for a transaction id and store all the data in a new .sql file much \nlike the one generated by pg_dump.\n\nI think 
that this is fairly feasible and practical; am I wrong?\n\n\nSo, thanks in advance for your help, and thank you all for this killer \napplication.\n\nP.S.: Please also answer to my email, because I only read the pg lists \nby nntp.\n\nOops, forgot something: I'm talking about PG 7.1; after that I will port \nit to 7.2 if someone finds it useful.\n\nMiguel A. Arévalo\nmarevalo@marevalo.net\n____________________________________________\nBiblios.Org: All your Book Are Belong to Us.\n\n\n\n\n",
"msg_date": "Thu, 28 Feb 2002 10:40:13 +0100",
"msg_from": "\"Miguel A. =?ISO-8859-1?Q?Ar=E9valo?=\" <marevalo@marevalo.net>",
"msg_from_op": true,
"msg_subject": "Clues about tables fileformat"
}
] |
[
{
"msg_contents": "Hi all,\nI have this error message:\n\nNOTICE: mdopen: couldn't open tmp_pg_shadow: No such file or directory\nNOTICE: RelationIdBuildRelation: smgropen(tmp_pg_shadow): No such file or\ndirectory\nNOTICE: --Relation tmp_pg_shadow--\nNOTICE: mdopen: couldn't open tmp_pg_shadow: No such file or directory\nERROR: cannot open relation tmp_pg_shadow\nvacuumdb: vacuum failed\nhelp\n\n--\n_______________________________\nFouad Fezzi\nIngenieur Réseau\nIUP Institut Universitaire Professionnalisé\nUniversite d'Avignon et des Pays de Vaucluse\n339 ch. des Meinajaries\ntel : (+33/0) 4 90 84 35 50\nBP 1228 - 84911 AVIGNON CEDEX 9\nfax : (+33/0) 4 90 84 35 01\nhttp://www.iup.univ-avignon.fr\n_________________________________\n\n\n",
"msg_date": "Thu, 28 Feb 2002 15:37:22 +0100",
"msg_from": "\"Fouad Fezzi\" <fezzi@iup.univ-avignon.fr>",
"msg_from_op": true,
"msg_subject": "problem with vacuumdb"
}
] |
[
{
"msg_contents": "\nI have developed a function to help me with escaping strings more easily. \nIt kind of behaves like printf and is very crude. Before I do anymore\nwork, I was hoping to get some comments or notice if someone has already\ndone this.\n\nI was also thinking there could be a function call PQprintfExec that would\nbuild the sql from the printf and call PQexec in one step.\n\nComments Please!\n\nRegards, \nAdam\n\n\n/*\n * PQprintf\n *\n * This function acts kind of like printf. It takes care of escaping\n * strings and bytea for you, then runs PQexec. The format string\n * defintion is as follows:\n *\n * %i = integer\n * %f = float\n * %s = normal string\n * %e = escape the string\n * %b = escape the bytea\n * \n * When you use %b, you must add another argument just after the\n * variable holding the binary data with its length.\n *\n */\nchar *\nPQprintf(const char *format, ...)\n{\n va_list arg;\n char *sql = NULL;\n char *parse = (char*)strdup(format);\n char *p;\n char buff[256];\n char *str;\n char *to;\n size_t length;\n size_t size;\n size_t esize;\n char* s_arg;\n float f_arg;\n int i_arg;\n int i;\n\n va_start(arg, format);\n\n p = (char*)strtok(parse, \"%\");\n sql = (char*)strdup(p);\n size = strlen(sql);\n\n while (p)\n {\n\t if ((p = (char*)strtok(NULL, \"%\")))\n\t {\n\t switch (*p) \n\t {\n\t\t /* integer */\n\t case 'i':\n\t\t i_arg = va_arg(arg, int);\n\t\t sprintf(buff, \"%i\", i_arg);\n\t\t size += strlen(buff);\n\t\t sql = (char*)realloc(sql, size + 1);\n\t\t strcat(sql, buff);\n\t\t break;\n\n\t\t /* float */\n\t case 'f':\n\t\t f_arg = va_arg(arg, float);\n\t\t sprintf(buff, \"%f\", f_arg);\n\t\t size += strlen(buff);\n\t\t sql = (char*)realloc(sql, size + 1);\n\t\t strcat(sql, buff);\n\t\t break;\n\n\t\t /* string */\n\t case 's':\n\t\t s_arg = va_arg(arg, char*);\n\t\t puts(s_arg);\n\t\t size += strlen(s_arg);\n\t\t sql = (char*)realloc(sql, size + 1);\n\t\t strcat(sql, s_arg);\n\t\t break;\n\n\t\t /* escape string */\n\t case 
'e':\n\t\t s_arg = va_arg(arg, char*);\n\t\t to = (char*)malloc((2 * strlen(s_arg)) + 1);\n\t\t PQescapeString(to, s_arg, strlen(s_arg));\n\t\t size += strlen(to);\n\t\t sql = (char*)realloc(sql, size + 1);\n\t\t strcat(sql, to);\n\t\t free(to);\n\t\t break;\n\n\t\t /* escape bytea */\n\t case 'b':\n\t\t s_arg = va_arg(arg, char*);\n\t\t length = va_arg(arg, int);\n\t\t str = PQescapeBytea(s_arg, length, &esize);\n\t\t size += esize;\n\t\t sql = (char*)realloc(sql, size + 1);\n\t\t strcat(sql, str);\n\t\t free(str);\n\t\t break;\n\t }\n\n\t size += strlen(++p);\n\t sql = (char*)realloc(sql, size + 1);\n\t strcat(sql, p);\n\t }\n }\n\n va_end(arg);\n\n free(parse);\n\n return sql;\n}\n\n\n\n",
"msg_date": "Thu, 28 Feb 2002 12:23:58 -0500 (EST)",
"msg_from": "Adam Siegel <adam@sycamorehq.com>",
"msg_from_op": true,
"msg_subject": "PQprintf"
}
] |
[
{
"msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\nI had to make a relatively long drive yesterday, so I had lots of free time \nto do some thinking...and my thoughts were turning to caching and databases. \nThe following is what I came up with: forgive me if it seems to be just an \nobvious ramble...\n\nWhy does a database need caching?\n\nNormally, when one thinks of a database (or to be precise, a RDBMS) the \nACID acronym comes up. This is concerned with having a stable database that \ncan reliably be used by many users at the same time. Caching a query is \nunintuitive because it involves sharing information from transactions that \nmay be separated by a great amount of time and/or by different users. \nHowever, from the standpoint of the database server, caching increases \nefficiency enormously. If 800 users all make the same query, then caching \ncan help the database server backend (hereafter simply \"database\") to \nsave part or all of the work it performs so it doesn't have to repeat the \nentire sequence of steps 800 times.\n\nWhat is caching?\n\nCaching basically means that we want to save frequently-used information \ninto an easy to get to area. Usually, this means storing it into memory. \nCaching has three main goals: reducing disk access, reducing computation \n(i.e. CPU utilization), and speeding up the time as measured by how long \nit takes a user to see a result. It does all this at the expense of RAM, \nand the tradeoff is almost always worth it.\n\nIn a database, there are three basic types of caching: query results, \nquery plans, and relations.\n\nThe first, query result caching, simply means that we store into memory \nthe exact output of a SELECT query for the next time that somebody performs \nthat exact same SELECT query. Thus, if 800 people do a \"SELECT * FROM foo\", \nthe database runs it for the first person, saves the results, and simply \nreads the cache for the next 799 requests. 
This saves the database from doing \nany disk access, practically removes CPU usage, and speeds up the query.\n\nThe second, query plan caching, involves saving the results of the optimizer, \nwhich is responsible for figuring out exactly \"how\" the database is going to \nfetch the requested data. This type of caching usually involves a \"prepared\" \nquery, which has almost all of the information needed to run the query with \nthe exception of one or more \"placeholders\" (spots that are populated with \nvariables at a later time). The query could also involve non-prepared \nstatements as well. Thus, if someone prepares the query \"SELECT flavor FROM \nfoo WHERE size=?\", and then executes it by sending in 300 different values \nfor \"size\", the prepared statement is run through the optimizer, the\nresulting path is stored into the query plan cache, and the stored path is \nused for the 300 execute requests. Because the path is already known, the \noptimizer does not need to be called, which saves the database CPU and time.\n\nThe third, relation caching, simply involves putting the entire relation \n(usually a table or index) into memory so that it can be read quickly. \nThis saves disk access, which basically means that it saves time. (This type \nof caching also can occur at the OS level, which caches files, but that will \nnot be discussed here).\n\nThose are the three basic types of caching; ways of implementing each are \ndiscussed below. Each one should complement the other, and a query may be \nable to use one, two, or all three of the caches.\n\nI. Query result caching:\n\nA query result cache is only used for SELECT queries that involve a \nrelation (i.e. not for \"SELECT version\"). Each cache entry has the following \nfields: the query itself, the actual results, a status, an access time, an \naccess number, and a list of all included columns. (The column list actually \ntells as much information as needed to uniquely identify it, i.e. 
schema, \ndatabase, table, and column). The status is merely an indicator of whether or \nnot this cached query is valid. It may not be, because it may be invalidated \nfor a user within a transaction but still be of use to others. \n\nWhen a select query is processed, it is first parsed apart into a basic common \nform, stripping whitespace, standardizing case, etc., in order to facilitate \nan accurate match. Note that no other pre-processing is really required, \nsince we are only interested in exact matches that produce the exact same \noutput. An advanced version of this would ideally be able to use the cached \noutput of \"SELECT bar,baz FROM foo\" when it receives the query \"SELECT \nbaz,bar FROM foo\", but that will require some advanced parsing. Possible, \nbut probably not something to attempt in the first iteration of a query \ncaching function. :) If there *is* a match (via a simple strcmp at first), \nand the status is marked as \"valid\", then the database simply uses the \nstored output, updates the access time and count, and exits. This should be \nextremely fast, as no disk access is needed, and almost no CPU. The \ncomplexity of the query will not matter either: a simple query will run just \nas fast as something with 12 sorts and 28 joins.\n\nIf a query is *not* already in the cache, then after the results are found \nand delivered to the user, the database will try and store them for the \nnext appearance of that query. First, the size of the cache will be compared \nto the size of the query+output, to see if there is room for it. If there \nis, the query will be saved, with a status of valid, a time of 'now', a count \nof 1, a list of all affected columns found by parsing the query, and the total \nsize of the query+output. If there is no room, then it will try to delete one \nor more to make room. Deleting can be done based on the oldest access time, \nsmallest access count, or size of the query+output. 
Some balance of the first \ntwo would probably work best, with the access time being the most important. \nEverything will be configurable, of course.\n\nWhenever a table is changed, the cache must be checked as well. A list of \nall columns that were actually changed is computed and compared against \nthe list of columns for each query. At the first sign of a match, the \nquery is marked as \"invalid.\" This should happen before the changes are made \nto the table itself. We do not delete the query immediately since this may \nbe inside of a transaction, and subject to rollback. However, we do need \nto mark it as invalid for the current user inside the current transaction: \nthus, the status flag. When the transaction is committed, all queries that have \nan \"invalid\" flag are deleted, then the tables are changed. Since the only \ntime a query can be flagged as \"invalid\" is inside your own transaction, \nthe deletion can be done very quickly.\n\n\nII. Query plan caching\n\nIf a query is not cached, then it \"falls through\" to the next level of \ncaching, the query plan. This can either be automatic or strictly on a \nuser-requested format (i.e. through the prepare-execute paradigm). The latter \nis probably better, but it also would not hurt much to store non-explicitly \nprepared queries in this cache as long as there is room. This cache has a \nfield for the query itself, the plan to be followed (i.e. scan this table, \nthat index, sort the results, then group them), the columns used, the access \ntime, the access count, and the total size. It may also want a simple flag \nof \"prepared or non-prepared\", where prepared indicates an explicitly \nprepared statement that has placeholders for future values. A good optimizer \nwill actually change the plan based on the values plugged into the prepared \nqueries, so that information should become a part of the query itself as \nneeded, and multiple queries may exist to handle different inputs. 
In \ngeneral, most of the inputs will be similar enough to use the same path (e.g. \n\"SELECT flavor FROM foo WHERE size=?\" will most usually result in a simple \nnumeric value for the executes). If a match *is* found, then the database \ncan use the stored path, and not have to bother calling up the optimizer \nto figure it out. It then updates the access time, the access count, and \ncontinues as normal. If a match was *not* found, then it might possibly \nwant to be cached. Certainly, explicit prepares should always be cached. \nNon-explicitly prepared queries (those without placeholders) can also be \ncached. In theory, some of this will also be in the result cache, so that \nshould be checked as well: if it is there, there is no reason to put it here. Prepared \nqueries should always have priority over non-prepared, and the rest of the \nrules above for the result query should also apply, with a caveat that things \nthat would affect the output of the optimizer (e.g. vacuuming) should also \nbe taken into account when deleting entries.\n\n\nIII. Relation caching\n\nThe final cache is the relation itself, and simply involves putting the entire \nrelation into memory. This cache has a field for the name of the relation, \nthe table info itself, the type (indexes should ideally be cached more than \ntables, for example), the access time, and the access number. Loading could \nbe done automatically, but most likely should be done according to a flag \non the table itself or as an explicit command by the user.\n\n\nNotes:\n\nThe \"strcmp\" used may seem rather crude, as it misses all but the exact \nsame query, but it does cover most of the everyday cases. Queries are \nusually called through an application that keeps it in the same format, \ntime after time, so the queries are very often exactly identical. 
A better \nparser would help, of course, but it would get rather complicated quickly.\nTwo quick examples: a web page that is read from a database is a query that \nis called many times with exactly the same syntax; a user doing a \"refresh\" \nto constantly check if something has changed since they last looked.\n\nSometimes a query may jump back to a previous type of cache, especially \nfor things like subselects. The entire subselect query may not match, \nbut the inner query should also be checked against the query result cache.\n\nEach cache should have some configurable parameters, including the size in \nRAM, the maximum number of entries, and rules for adding and deleting.\nThey should also be directly viewable through a system table, so a DBA \ncan quickly see exactly which queries are being cached and how often \nthey are being used. There should be a command to quickly flush the cache, \nremove \"old\" entries, and to populate the query plan cache via a prepare \nstatement. It should also be possible to do table changes without stopping \nto check the cache: perhaps flushing the cache and setting a global \n\"cache is empty\" flag would suffice.\n\nAnother problem is the prepare and execute: you are not always guaranteed \nto get a cached prepare if you do an execute, as it may have expired or \nthere may simply be no more room. Those prepared statements inside a \ntransaction should probably be flagged as \"non-deletable\" until the \ntransaction is ended.\n\nStoring the results of an execute in the query result cache is another \nproblem. When a prepare is made, the database returns a link to that \nexact prepared statement in cache, so that all the client has to say is \n\"run the query at 0x4AB3 with the value 12\". Ideally, the database \nshould be able to check these against the query result cache as well. 
It \ncan do this by reconstructing the resulting query (by plugging the value into \nthe prepared statement) or it can store the execute request as a type of \nquery itself; instead of \"SELECT baz FROM bar WHERE size=12\" it would \nstore \"p0x4aB3:12\".\n\n\nThe result cache would probably be the easiest to implement, and also gives \nthe most \"bang for the buck.\" The path cache may be a bit harder, but is \na very necessary feature. I don't know about the relation caching: it looks \nto be fairly easy, and I don't trust that, so I am guessing it is actually \nquite difficult.\n\nGreg Sabino Mullane greg@turnstep.com\nPGP Key: 0x14964AC8 200202281132\n\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niD8DBQE8fqybvJuQZxSWSsgRAps9AKDwCkIH7GKSBjflyYSA0F7mQqD1MwCeJLCw\nhqE1SxJ2Z7RxFGCu3UwIBrI=\n=jlBy\n-----END PGP SIGNATURE-----\n\n\n",
"msg_date": "Thu, 28 Feb 2002 22:23:46 -0000",
"msg_from": "\"Greg Sabino Mullane\" <greg@turnstep.com>",
"msg_from_op": true,
"msg_subject": "Database Caching"
},
{
"msg_contents": "\"Greg Sabino Mullane\" <greg@turnstep.com> writes:\n> III. Relation caching\n\n> The final cache is the relation itself, and simply involves putting the entire \n> relation into memory. This cache has a field for the name of the relation, \n> the table info itself, the type (indexes should ideally be cached more than \n> tables, for example), the access time, and the acccess number. Loading could \n> be done automatically, but most likely should be done according to a flag \n> on the table itself or as an explicit command by the user.\n\nThis would be a complete waste of time; the buffer cache (both Postgres'\nown, and the kernel's disk cache) serves the purpose already.\n\nAs I've commented before, I have deep misgivings about the idea of a\nquery-result cache, too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 28 Feb 2002 18:27:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Database Caching "
},
{
"msg_contents": "On Thu, Feb 28, 2002 at 10:23:46PM -0000, Greg Sabino Mullane wrote:\n\n> The first, query result caching, simply means that we store into memory \n> the exact output of a SELECT query for the next time that somebody performs \n> that exact same SELECT query. Thus, if 800 people do a \"SELECT * FROM foo\", \n> the database runs it for the first person, saves the results, and simply \n> reads the cache for the next 799 requests. This saves the database from doing \n> any disk access, practically removes CPU usage, and speeds up the query.\n\n How expensive is keeping the cache in a consistent state? Maybe maintaining\n the result cache is similar to reading raw data from the disk/buffer cache.\n\n The result cache may speed up SELECT, but it probably slows down\n UPDATE/INSERT.\n\n> The second, query plan caching, involves saving the results of the optimizer, \n> which is responsible for figuring out exactly \"how\" the databse is going to \n> fetch the requested data. This type of caching usually involves a \"prepared\" \n> query, which has almost all of the information needed to run the query with \n> the exception of one or more \"placeholders\" (spots that are populated with \n> variables at a later time). The query could also involve non-prepared \n> statments as well. Thus, if someone prepares the query \"SELECT flavor FROM \n> foo WHERE size=?\", and then executes it by sending in 300 different values \n> for \"size\", the prepared statement is run through the optimizer, the r\n> esulting path is stored into the query plan cache, and the stored path is \n> used for the 300 execute requests. 
Because the path is already known, the \n> optimizer does not need to be called, which saves the database CPU and time.\n\n IMHO a query plan cache maintained by the user's PREPARE/EXECUTE/DEALLOCATE \n statements is sufficient, because the user knows best what changes in the DB\n schema (drop function, relation...).\n\n The \"transparent\" query cache for each query that goes into the backend\n will (IMHO) be too expensive, because you must check/analyze each query. \n I mean it is more effective to keep fragments of the query in memory, for example\n operator and relation descriptions -- a cache like this PostgreSQL already\n has (syscache).\n\n> The third, relation caching, simply involves putting the entire relation \n> (usually a table or index) into memory so that it can be read quickly. \n\n Already done by buffers :-)\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Fri, 1 Mar 2002 11:02:05 +0100",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: Database Caching"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Greg Sabino Mullane\" <greg@turnstep.com> writes:\n> > III. Relation caching\n> \n> > The final cache is the relation itself, and simply involves putting the entire\n> > relation into memory. This cache has a field for the name of the relation,\n> > the table info itself, the type (indexes should ideally be cached more than\n> > tables, for example), the access time, and the acccess number. Loading could\n> > be done automatically, but most likely should be done according to a flag\n> > on the table itself or as an explicit command by the user.\n> \n> This would be a complete waste of time; the buffer cache (both Postgres'\n> own, and the kernel's disk cache) serves the purpose already.\n> \n> As I've commented before, I have deep misgivings about the idea of a\n> query-result cache, too.\n\nI appreciate your position, and I can see your point. However, where caching is\na huge win is when you have a database which is largely static, something like\nthe back end of a web server.\n\nThe content is updated regularly, but not constantly. There is a window, say 4\nhours, where the entire database is static, and all it is doing is running the\nsame 100 queries, over and over again.\n\nMy previous company, www.dmn.com, has a music database system. We logged all\nthe backend info; most of the queries were duplicated many times. This can be\nexplained by multiple users interested in the same thing or the same user\nhitting \"next page\".\n\nIf you could cache the \"next page\" or similar hit results, you could really\nincrease throughput and capacity of a website.\n",
"msg_date": "Fri, 01 Mar 2002 07:49:24 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Database Caching"
},
{
"msg_contents": "Tom Lane wrote:\n> \"Greg Sabino Mullane\" <greg@turnstep.com> writes:\n> > III. Relation caching\n>\n> > The final cache is the relation itself, and simply involves putting the entire\n> > relation into memory. This cache has a field for the name of the relation,\n> > the table info itself, the type (indexes should ideally be cached more than\n> > tables, for example), the access time, and the acccess number. Loading could\n> > be done automatically, but most likely should be done according to a flag\n> > on the table itself or as an explicit command by the user.\n>\n> This would be a complete waste of time; the buffer cache (both Postgres'\n> own, and the kernel's disk cache) serves the purpose already.\n>\n> As I've commented before, I have deep misgivings about the idea of a\n> query-result cache, too.\n\n I wonder how this sort of query result caching could work in\n our MVCC and visibility world at all. Multiple concurrent\n running transactions see different snapshots of the table,\n hence different result sets for exactly one and the same\n querystring at the same time ... er ... yeah, one cache set\n per query/snapshot combo, great!\n\n To really gain some speed with this sort of query cache, we'd\n have to adopt the #1 MySQL design rule \"speed over precision\"\n and ignore MVCC for query-cached relations, or what?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Fri, 1 Mar 2002 09:22:29 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Database Caching"
},
{
"msg_contents": "On Fri, 1 Mar 2002, Jan Wieck wrote:\n\n> Tom Lane wrote:\n> > \"Greg Sabino Mullane\" <greg@turnstep.com> writes:\n> > > III. Relation caching\n> >\n> > > The final cache is the relation itself, and simply involves putting the entire\n> > > relation into memory. This cache has a field for the name of the relation,\n> > > the table info itself, the type (indexes should ideally be cached more than\n> > > tables, for example), the access time, and the acccess number. Loading could\n> > > be done automatically, but most likely should be done according to a flag\n> > > on the table itself or as an explicit command by the user.\n> >\n> > This would be a complete waste of time; the buffer cache (both Postgres'\n> > own, and the kernel's disk cache) serves the purpose already.\n> >\n> > As I've commented before, I have deep misgivings about the idea of a\n> > query-result cache, too.\n>\n> I wonder how this sort of query result caching could work in\n> our MVCC and visibility world at all. Multiple concurrent\n> running transactions see different snapshots of the table,\n> hence different result sets for exactly one and the same\n> querystring at the same time ... er ... yeah, one cache set\n> per query/snapshot combo, great!\n>\n> To really gain some speed with this sort of query cache, we'd\n> have to adopt the #1 MySQL design rule \"speed over precision\"\n> and ignore MVCC for query-cached relations, or what?\n\nActually, you are missing, I think, as is everyone, the 'semi-static'\ndatabase ... you know? the one where data gets dumped to it by a script\nevery 5 minutes, but between dumps, there are hundreds of queries per\nsecond/minute between the updates that are the same query repeated each\ntime ...\n\nAs soon as there is *any* change to the data set, the query cache should\nbe marked dirty and reloaded ... mark it dirty on any update, delete or\ninsert ...\n\nSo, if I have 1000 *pure* SELECTs, the cache is fine ... 
as soon as one\nU/I/D pops up, its invalidated ...\n\n\n\n",
"msg_date": "Fri, 1 Mar 2002 11:07:28 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Database Caching"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> My previous company, www.dmn.com, has a music database system. We logged all\n> the backed info, most of the queries were duplicated many times. This can be\n> explained by multiple users interested in the same thing or the same user\n> hitting \"next page\"\n\n> If you could cache the \"next page\" or similar hit results, you could really\n> increase throughput and capaciy of a website.\n\nSure, but the most appropriate place to do that sort of thing is in the\napplication (in this case, probably a cgi/php-ish layer). Only the\napplication can know what its requirements are. In the case you\ndescribe, it'd be perfectly okay for a \"stale\" cache result to be\ndelivered that's a few minutes out of date. Maybe a few hours out of\ndate would be good enough too, or maybe not. But if we do this at the\ndatabase level then we have to make sure it won't break *any*\napplications, and that means the most conservative validity assumptions.\n(Thus all the angst about how to invalidate cache entries on-the-fly.)\n\nLikewise, the application has a much better handle than the database on\nthe issue of which query results are likely to be worth caching.\n\nI think that reports of \"we sped up this application X times by caching\nquery results on the client side\" are interesting, but they are not good\nguides to what would happen if we tried to put a query-result cache into\nthe database.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Mar 2002 10:15:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Database Caching "
},
{
"msg_contents": "Marc G. Fournier wrote:\n> On Fri, 1 Mar 2002, Jan Wieck wrote:\n>\n> > Tom Lane wrote:\n> > > \"Greg Sabino Mullane\" <greg@turnstep.com> writes:\n> > > > III. Relation caching\n> > >\n> > > > The final cache is the relation itself, and simply involves putting the entire\n> > > > relation into memory. This cache has a field for the name of the relation,\n> > > > the table info itself, the type (indexes should ideally be cached more than\n> > > > tables, for example), the access time, and the acccess number. Loading could\n> > > > be done automatically, but most likely should be done according to a flag\n> > > > on the table itself or as an explicit command by the user.\n> > >\n> > > This would be a complete waste of time; the buffer cache (both Postgres'\n> > > own, and the kernel's disk cache) serves the purpose already.\n> > >\n> > > As I've commented before, I have deep misgivings about the idea of a\n> > > query-result cache, too.\n> >\n> > I wonder how this sort of query result caching could work in\n> > our MVCC and visibility world at all. Multiple concurrent\n> > running transactions see different snapshots of the table,\n> > hence different result sets for exactly one and the same\n> > querystring at the same time ... er ... yeah, one cache set\n> > per query/snapshot combo, great!\n> >\n> > To really gain some speed with this sort of query cache, we'd\n> > have to adopt the #1 MySQL design rule \"speed over precision\"\n> > and ignore MVCC for query-cached relations, or what?\n>\n> Actually, you are missing, I think, as is everyone, the 'semi-static'\n> database ... you know? the one where data gets dumped to it by a script\n> every 5 minutes, but between dumps, there are hundreds of queries per\n> second/minute between the updates that are the same query repeated each\n> time ...\n>\n> As soon as there is *any* change to the data set, the query cache should\n> be marked dirty and reloaded ... 
mark it dirty on any update, delete or\n> insert ...\n>\n> So, if I have 1000 *pure* SELECTs, the cache is fine ... as soon as one\n> U/I/D pops up, its invalidated ...\n\n    But in that case, why not cache the entire HTML result for\n    the URL or search request? That'd save some wasted cycles in\n    Tomcat as well.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.                                  #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Fri, 1 Mar 2002 10:17:57 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Database Caching"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > My previous company, www.dmn.com, has a music database system. We logged all\n> > the backed info, most of the queries were duplicated many times. This can be\n> > explained by multiple users interested in the same thing or the same user\n> > hitting \"next page\"\n> \n> > If you could cache the \"next page\" or similar hit results, you could really\n> > increase throughput and capaciy of a website.\n> \n> Sure, but the most appropriate place to do that sort of thing is in the\n> application (in this case, probably a cgi/php-ish layer). Only the\n> application can know what its requirements are. In the case you\n> describe, it'd be perfectly okay for a \"stale\" cache result to be\n> delivered that's a few minutes out of date. Maybe a few hours out of\n> date would be good enough too, or maybe not. But if we do this at the\n> database level then we have to make sure it won't break *any*\n> applications, and that means the most conservative validity assumptions.\n> (Thus all the angst about how to invalidate cache entries on-the-fly.)\n> \n> Likewise, the application has a much better handle than the database on\n> the issue of which query results are likely to be worth caching.\n> \n> I think that reports of \"we sped up this application X times by caching\n> query results on the client side\" are interesting, but they are not good\n> guides to what would happen if we tried to put a query-result cache into\n> the database.\n\nI would like to respectfully differ with you here. If query results are cached\nin an ACID safe way, then many things could be improved.\n\nThe problem with applications caching is that they do not have intimate\nknowledge of the database, and thus do not know when their cache is invalid. On\ntop of that, many web sites have multiple web servers connected to a single\ndatabase. The caching must sit between the web software and the DB. 
The logical\nplace for caching is in the database.\n\nIf we went even further, and cached multiple levels of query, i.e. the result\nof the sub-select within the whole query, then things like views and more\ncomplex queries could get an increase in performance.\n\nTake this query:\n\nselect * from (select * from T1 where field = 'fubar') as Z right outer join\n (select alt from T2, (select * from T1 where field = 'fubar') as X where\nT2.key = X.key) as Y \n\ton T3.key = Y.key) on (Z.key = Y.alt) where Z.key = NULL;\n\n\nForgive this query, it is probably completely wrong, the actual query it is\nintended to represent is quite a bit larger. The intention is to select a set\nof alternate values based on a set of initial values, but also eliminating any\nalternate values which may also be in the initial set. Anyway, we have to query\n\"Select * from T1 where field = 'fubar'\" twice.\n\nIf that subselect could be cached, it could speed up the query a bit. Right now\nI use a temp table, which is a hassle.\n\nCaching results can and does speed up duplicate queries, there can really be no\nargument about it. The argument is about the usefulness of the feature and the\ncost of implementing it. If maintaining the cache costs more than the benefit\nof having it, obviously it is a loser. If implementing it takes up the\nbiological CPU cycles of the development team that would be spent doing more\nimportant things, then it is also a loser. If, however, it is relatively \"easy\"\n(hehe) to do, and doesn't affect performance greatly, is there any harm in\ndoing so?\n",
"msg_date": "Fri, 01 Mar 2002 10:46:48 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Database Caching"
},
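[Editorial note: mlw's duplicated subselect ("SELECT * FROM T1 WHERE field = 'fubar'" appears twice in his outer query) is the easy case for a result cache: memoize the inner result for the duration of the outer query. A toy Python sketch, with the hypothetical `run_subselect` standing in for the real executor.]

```python
# Memoize a repeated subselect so the second occurrence inside the same
# outer query is served from memory instead of re-executing.
from functools import lru_cache

executions = []

@lru_cache(maxsize=None)
def run_subselect(sql):
    executions.append(sql)        # pretend this hits the disk
    return (("k1",), ("k2",))     # frozen, hashable result rows

inner = "SELECT * FROM T1 WHERE field = 'fubar'"
left = run_subselect(inner)       # first use: executed
right = run_subselect(inner)      # second use in the same query: cached
assert left is right
assert len(executions) == 1
```

A per-statement memo like this sidesteps mlw's temp-table workaround without the cross-transaction invalidation problems of a global cache.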
{
"msg_contents": "> > I wonder how this sort of query result caching could work in\n> > our MVCC and visibility world at all. Multiple concurrent\n> > running transactions see different snapshots of the table,\n> > hence different result sets for exactly one and the same\n> > querystring at the same time ... er ... yeah, one cache set\n> > per query/snapshot combo, great!\n> >\n> > To really gain some speed with this sort of query cache, we'd\n> > have to adopt the #1 MySQL design rule \"speed over precision\"\n> > and ignore MVCC for query-cached relations, or what?\n>\n> Actually, you are missing, I think, as is everyone, the 'semi-static'\n> database ... you know? the one where data gets dumped to it by a script\n> every 5 minutes, but between dumps, there are hundreds of queries per\n> second/minute between the updates that are the same query repeated each\n> time ...\n>\n> As soon as there is *any* change to the data set, the query cache should\n> be marked dirty and reloaded ... mark it dirty on any update, delete or\n> insert ...\n>\n> So, if I have 1000 *pure* SELECTs, the cache is fine ... as soon as one\n> U/I/D pops up, its invalidated ...\n\nThe question is, when it's invalidated, how does it become valid again?\nI don't see that there's a way to do it only by query string that doesn't\nresult in meaning that the cache cannot cache a query again until any\ntransactions that can see the prior state are finished since otherwise\nyou'd be providing the incorrect results to that transaction. But I\nhaven't spent much time thinking about it either.\n\n",
"msg_date": "Fri, 1 Mar 2002 08:47:12 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Database Caching"
},
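[Editorial note: Stephan's MVCC objection amounts to this: the query string alone is not a sufficient cache key, because concurrent transactions can see different snapshots of the same table. One hedge is to key on (query, snapshot); `snapshot_id` in this illustrative Python sketch is a hypothetical stand-in for Postgres's real snapshot identity.]

```python
# Key the result cache on (query string, snapshot id) so that two
# transactions with different visible states never share an entry.
cache = {}

def cached_result(query, snapshot_id, run_query):
    key = (query, snapshot_id)
    if key not in cache:
        cache[key] = run_query(query)
    return cache[key]

# Two transactions with different snapshots get independent results:
r1 = cached_result("SELECT count(*) FROM t", snapshot_id=100,
                   run_query=lambda q: 10)
r2 = cached_result("SELECT count(*) FROM t", snapshot_id=101,
                   run_query=lambda q: 11)
assert (r1, r2) == (10, 11)
```

The cost, as the thread notes, is that entries are only reusable by transactions sharing a snapshot, which sharply limits the hit rate under heavy write traffic.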
{
"msg_contents": "Hi guys,\n\nStephan Szabo wrote:\n<snip> \n> The question is, when it's invalidated, how does it become valid again?\n> I don't see that there's a way to do it only by query string that doesn't\n> result in meaning that the cache cannot cache a query again until any\n> transactions that can see the prior state are finished since otherwise\n> you'd be providing the incorrect results to that transaction. But I\n> haven't spent much time thinking about it either.\n\nIt seems like a good idea to me, but only if it's optional. It could\nget in the way for systems that don't need it, but would be really\nbeneficial for some types of systems which are read-only or mostly\nread-only (with consistent queries) in nature.\n\ni.e. Let's take a web page where clients can look up which of 10,000\nrecords are either .biz, .org, .info, or .com.\n\nSo, we have a database query of simply:\n\nSELECT name FROM sometable WHERE tld = 'biz';\n\nAnd let's say 2,000 records come back, which are cached.\n\nThen the next query comes in, which is:\n\nSELECT name FROM sometable WHERE tld = 'info';\n\nAnd let's say 3,000 records come back, which are also cached.\n\nNow, both of these queries are FULLY cached. So, if either query\nhappens again, it's a straight memory read and dump, no disk activity\ninvolved, etc. (very fast in comparison).\n\nNow, let's say a transaction which involves a change to \"sometable\"\nCOMMITs. This should invalidate these results in the cache, as the\nviewpoint of the transaction could now be incorrect (there might now be\nfewer or more or different results for .info or .biz). The next queries\nwill be cached too, and will keep on being cached until the next\ntransaction involving a change to \"sometable\" COMMITs.\n\nIn this type of database access, this looks like a win.\n\nBut caching results in this manner could be a memory killer for those\napplications which aren't so predictable in their queries, or are not so\nread-only. 
That's why I feel it should be optional, but I also feel it\nshould be added due to what looks like massive wins without data\nintegrity or reliability issues.\n\nHope this helps.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n   - Indira Gandhi\n",
"msg_date": "Sat, 02 Mar 2002 04:47:50 +1100",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Database Caching"
},
{
"msg_contents": "On Sat, 2 Mar 2002, Justin Clift wrote:\n\n> Hi guys,\n>\n> Stephan Szabo wrote:\n> <snip>\n> > The question is, when it's invalidated, how does it become valid again?\n> > I don't see that there's a way to do it only by query string that doesn't\n> > result in meaning that the cache cannot cache a query again until any\n> > transactions that can see the prior state are finished since otherwise\n> > you'd be providing the incorrect results to that transaction. But I\n> > haven't spent much time thinking about it either.\n>\n> i.e. Lets take a web page where clients can look up which of 10,000\n> records are either .biz, .org, .info, or .com.\n>\n> So, we have a database query of simply:\n>\n> SELECT name FROM sometable WHERE tld = 'biz';\n>\n> And lets say 2,000 records come back, which are cached.\n>\n> Then the next query comes in, which is :\n>\n> SELECT name FROM sometable WHERE tld = 'info';\n>\n> And lets say 3,000 records come back, which are also cached.\n>\n> Now, both of these queries are FULLY cached. So, if either query\n> happens again, it's a straight memory read and dump, no disk activity\n> involved, etc (very fast in comparison).\n>\n> Now, lets say a transaction which involves a change of \"sometable\"\n> COMMITs. This should invalidate these results in the cache, as the\n> viewpoint of the transaction could now be incorrect (there might now be\n> less or more or different results for .info or .biz). The next queries\n> will be cached too, and will keep upon being cached until the next\n> transaction involving a change to \"sometable\" COMMITs.\n\nBut, if there's a transaction that started before the change committed,\nthen you may have two separate sets of possible results for the same query\nstring so query string doesn't seem unique enough to describe a set of\nresults. Maybe I haven't read carefully enough, but most of the proposals\nseem to gloss over this point.\n\n\n",
"msg_date": "Fri, 1 Mar 2002 10:19:56 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Database Caching"
},
{
"msg_contents": "I'm sneaking out of my cave here.. ;)\n\nmlw wrote:\n> Tom Lane wrote:\n> \n>>mlw <markw@mohawksoft.com> writes:\n>>\n>>>My previous company, www.dmn.com, has a music database system. We logged all\n>>>the backed info, most of the queries were duplicated many times. This can be\n>>>explained by multiple users interested in the same thing or the same user\n>>>hitting \"next page\"\n>>>\n>>>If you could cache the \"next page\" or similar hit results, you could really\n>>>increase throughput and capaciy of a website.\n\nWell, I'm going to assume that the records in question have been \ncompletely cached in cases like this. So isn't the primary source of \nimprovement the query parse and plan generation? As Tom seems to think, \nwouldn't it make more sense to optimize the parse/plan generation rather \nthan caching the result set? After all, if the plan can be pinned how \nmuch of a performance boost do you expect to get from processing a \ncached plan versus returning a cached result set?\n\nSeriously, I am curious as to what the expected return is? Still a \nmultiple or simply some minor percent?\n\n\n>>>\n>>Sure, but the most appropriate place to do that sort of thing is in the\n>>application (in this case, probably a cgi/php-ish layer). Only the\n>>application can know what its requirements are. In the case you\n>>describe, it'd be perfectly okay for a \"stale\" cache result to be\n>>delivered that's a few minutes out of date. Maybe a few hours out of\n>>date would be good enough too, or maybe not. 
But if we do this at the\n>>database level then we have to make sure it won't break *any*\n>>applications, and that means the most conservative validity assumptions.\n>>(Thus all the angst about how to invalidate cache entries on-the-fly.)\n>>\n>>Likewise, the application has a much better handle than the database on\n>>the issue of which query results are likely to be worth caching.\n>>\n>>I think that reports of \"we sped up this application X times by caching\n>>query results on the client side\" are interesting, but they are not good\n>>guides to what would happen if we tried to put a query-result cache into\n>>the database.\n>>\n> \n> I would like to respectfully differ with you here. If query results are cached\n> in an ACID safe way, then many things could be improved.\n> \n> The problem with applications caching is that they do not have intimate\n> knowledge of the database, and thus do not know when their cache is invalid. On\n> top of that, many web sites have multiple web servers connected to a single\n> database. The caching must sit between the web software and the DB. The logical\n> place for caching is in the database.\n> \n\n\nBut hybrid application cache designs can mostly if not completely \naddress this and also get some added benefits in many cases. If you \nhave a \"cache\" table which denotes the tables which are involved in the \ncached results that you desire, you can then update its state via \ntriggers or even exposed procedures accordingly to reflect whether the client \nside cache has been invalidated or not. This means that a client need \nonly query the cache table first to determine if its cache is clean or \ndirty. When it's dirty, it merely needs to query the result set again.\n\nLet's also not forget that client side caching can also yield \nsignificant networking performance improvements over a result set that \nis able to be cached on the server. Why? 
Well, let's say a query has a \nresult set of 10,000 rows which are being cached on the server. A \nremote client queries and fetches 10,000 results over the network. Now \nthen, even though the result set is cached by the database, it is still \nbeing transferred over the wire for each and every query. Now then, \nlet's assume that 10 other people perform this same query. That's \n100,000 rows which get transferred across the wire. With the client side \ncaching scheme, you have 10,010 rows (the initial 10,000 result set plus a \nsingle row result set which indicates the status of the cache) returned \nacross the wire which tells the client that its cache is clean or dirty.\n\nLet's face it, in order for the cache to make sense, the same result set \nneeds to be used over and over again. In these cases, it would seem \nthat in real world situations, a strong client side hybrid caching \nscheme wins in most cases.\n\nI'd also like to toss out that I'd expect somewhere there would be a \ntrade off between data cache and result set cache. On systems without \ninfinite memory, where's the line of delineation? It seems somewhere \nyou may be limiting the size of the generalized cache at the expense of \nthe cached result sets. If this happens, cached \nresult sets may be improved, but refreshing those result sets may be hindered, \nas might all other queries on the system.\n\n\n> If we went even further, and cached multiple levels of query, i.e. 
the result\n> of the sub-select within the whole query, then things like views and more\n> complex queries would could get an increase in performance.\n> \n> Take this query:\n> \n> select * from (select * from T1 where field = 'fubar') as Z right outer join\n>  (select alt from T2, (select * from T1 where field = 'fubar') as X where\n> T2.key = X.key) as Y \n> \ton T3.key = Y.key) on (Z.key = Y.alt) where Z.key = NULL;\n> \n> \n> Forgive this query, it is probably completely wrong, the actual query it is\n> intended to represent is quite a bit larger. The intention is to select a set\n> of alternate values based on a set of initial values, but also eliminating any\n> alternate values which may also be in the initial set. Anyway, we have to query\n> \"Select * from T1 where field = 'fubar'\" twice.\n> \n> If that subselect could be cached, it could speed up the query a bit. Right now\n> I use a temp table, which is a hassle.\n> \n\nIt's funny you say that because I was thinking that should server side \nresult set caching truly be desired, wouldn't the use of triggers, a \nprocedure for the client interface and a temporary table be a poor-man's \nimplementation yielding almost the same results? Though, I must say I'm \nassuming that the queries will *be* the same and *not nearly* the same. \n  But in the web world, isn't this really the situation we're trying to \naddress? That is, give me the front page?\n\n
If however, it is relatively \"easy\"\n> (hehe) to do, and doesn't affect performance greatly, is there any harm in\n> doing so?\n> \n\n\nAre any of the ideas that I put forth a viable substitute?\n\nGreg\n\n",
"msg_date": "Fri, 01 Mar 2002 17:06:20 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: Database Caching"
},
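[Editorial note: the hybrid client-side scheme Greg outlines (a trigger-maintained "cache" table that clients check before trusting their local copy) can be sketched as follows. This is an illustrative Python toy; `server_versions` stands in for the trigger-maintained table, and all names are hypothetical.]

```python
# The server keeps a small per-table version counter (bumped by triggers
# in Greg's scheme). The client re-runs a big query only when the version
# it cached against has changed; on a hit, only the one-row version check
# crosses the wire.
server_versions = {"sometable": 1}

class ClientCache:
    def __init__(self):
        self._entries = {}  # query -> (version, rows)

    def fetch(self, query, table, run_query):
        current = server_versions[table]      # cheap one-row round trip
        entry = self._entries.get(query)
        if entry and entry[0] == current:
            return entry[1]                   # cache clean: no refetch
        rows = run_query(query)               # cache cold or dirty
        self._entries[query] = (current, rows)
        return rows

client = ClientCache()
calls = []
run = lambda q: calls.append(q) or [("row",)] * 3
client.fetch("SELECT * FROM sometable", "sometable", run)
client.fetch("SELECT * FROM sometable", "sometable", run)
assert len(calls) == 1                # second fetch served from client cache
server_versions["sometable"] += 1     # a write commits; trigger bumps version
client.fetch("SELECT * FROM sometable", "sometable", run)
assert len(calls) == 2                # refetched after invalidation
```

This matches Greg's networking argument: 10 clients repeating a 10,000-row query transfer the rows once each plus one-row status checks, rather than the full result set every time.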
{
"msg_contents": "\nDo we want to add \"query caching\" to the TODO list, perhaps with a\nquestion mark?\n\n---------------------------------------------------------------------------\n\nGreg Sabino Mullane wrote:\n> \n> I had to make a relatively long drive yesterday, so I had lots of free time \n> to do some thinking...and my thoughts were turning to caching and databases. \n> The following is what I came up with: forgive me if it seems to be just an \n> obvious ramble...\n> \n> Why does a database need caching?\n> \n> Normally, when one thinks of a database (or to be precise, an RDBMS) the \n> ACID acronym comes up. This is concerned with having a stable database that \n> can reliably be used by many users at the same time. Caching a query is \n> unintuitive because it involves sharing information from transactions that \n> may be separated by a great amount of time and/or by different users. \n> However, from the standpoint of the database server, caching increases \n> efficiency enormously. If 800 users all make the same query, then caching \n> can help the database server backend (hereafter simply \"database\") to \n> save part or all of the work it performs so it doesn't have to repeat the \n> entire sequence of steps 800 times.\n> \n> What is caching?\n> \n> Caching basically means that we want to save frequently-used information \n> into an easy to get to area. Usually, this means storing it into memory. \n> Caching has three main goals: reducing disk access, reducing computation \n> (i.e. CPU utilization), and speeding up the time as measured by how long \n> it takes a user to see a result. 
It does all this at the expense of RAM, \n> and the tradeoff is almost always worth it.\n> \n> In a database, there are three basic types of caching: query results, \n> query plans, and relations.\n> \n> The first, query result caching, simply means that we store into memory \n> the exact output of a SELECT query for the next time that somebody performs \n> that exact same SELECT query. Thus, if 800 people do a \"SELECT * FROM foo\", \n> the database runs it for the first person, saves the results, and simply \n> reads the cache for the next 799 requests. This saves the database from doing \n> any disk access, practically removes CPU usage, and speeds up the query.\n> \n> The second, query plan caching, involves saving the results of the optimizer, \n> which is responsible for figuring out exactly \"how\" the database is going to \n> fetch the requested data. This type of caching usually involves a \"prepared\" \n> query, which has almost all of the information needed to run the query with \n> the exception of one or more \"placeholders\" (spots that are populated with \n> variables at a later time). The query could also involve non-prepared \n> statements as well. Thus, if someone prepares the query \"SELECT flavor FROM \n> foo WHERE size=?\", and then executes it by sending in 300 different values \n> for \"size\", the prepared statement is run through the optimizer, the \n> resulting path is stored into the query plan cache, and the stored path is \n> used for the 300 execute requests. Because the path is already known, the \n> optimizer does not need to be called, which saves the database CPU and time.\n> \n> The third, relation caching, simply involves putting the entire relation \n> (usually a table or index) into memory so that it can be read quickly. \n> This saves disk access, which basically means that it saves time. 
(This type \n> of caching also can occur at the OS level, which caches files, but that will \n> not be discussed here).\n> \n> Those are the three basic types of caching, ways of implementing each are \n> discussed below. Each one should complement the other, and a query may be \n> able to use one, two, or all three of the caches.\n> \n> I. Query result caching:\n> \n> A query result cache is only used for SELECT queries that involve a \n> relation (i.e. not for \"SELECT version\") Each cache entry has the following \n> fields: the query itself, the actual results, a status, an access time, an \n> access number, and a list of all included columns. (The column list actually \n> tells as much information as needed to uniquely identify it, i.e. schema, \n> database, table, and column). The status is merely an indicator of whether or \n> not this cached query is valid. It may not be, because it may be invalidated \n> for a user within a transaction but still be of use to others. \n> \n> When a select query is processed, it is first parsed apart into a basic common \n> form, stripping whitespace, standardizing case, etc., in order to facilitate \n> an accurate match. Note that no other pre-processing is really required, \n> since we are only interested in exact matches that produce the exact same \n> output. An advanced version of this would ideally be able to use the cached \n> output of \"SELECT bar,baz FROM foo\" when it receives the query \"SELECT \n> baz,bar FROM foo\", but that will require some advanced parsing. Possible, \n> but probably not something to attempt in the first iteration of a query \n> caching function. :) If there *is* a match (via a simple strcmp at first), \n> and the status is marked as \"valid\", then the database simply uses the \n> stored output, updates the access time and count, and exits. This should be \n> extremely fast, as no disk access is needed, and almost no CPU. 
The \n> complexity of the query will not matter either: a simple query will run just \n> as fast as something with 12 sorts and 28 joins.\n> \n> If a query is *not* already in the cache, then after the results are found \n> and delivered to the user, the database will try to store them for the \n> next appearance of that query. First, the size of the cache will be compared \n> to the size of the query+output, to see if there is room for it. If there \n> is, the query will be saved, with a status of valid, a time of 'now', a count \n> of 1, a list of all affected columns found by parsing the query, and the total \n> size of the query+output. If there is no room, then it will try to delete one \n> or more to make room. Deleting can be done based on the oldest access time, \n> smallest access count, or size of the query+output. Some balance of the first \n> two would probably work best, with the access time being the most important. \n> Everything will be configurable, of course.\n> \n> Whenever a table is changed, the cache must be checked as well. A list of \n> all columns that were actually changed is computed and compared against \n> the list of columns for each query. At the first sign of a match, the \n> query is marked as \"invalid.\" This should happen before the changes are made \n> to the table itself. We do not delete the query immediately since this may \n> be inside of a transaction, and subject to rollback. However, we do need \n> to mark it as invalid for the current user inside the current transaction: \n> thus, the status flag. When the transaction is committed, all queries that have \n> an \"invalid\" flag are deleted, then the tables are changed. Since the only \n> time a query can be flagged as \"invalid\" is inside your own transaction, \n> the deletion can be done very quickly.\n> \n> \n> II. Query plan caching\n> \n> If a query is not cached, then it \"falls through\" to the next level of \n> caching, the query plan. 
This can either be automatic or strictly on a \n> user-requested format (i.e. through the prepare-execute paradigm). The latter \n> is probably better, but it also would not hurt much to store non-explicitly \n> prepared queries in this cache as long as there is room. This cache has a \n> field for the query itself, the plan to be followed (i.e. scan this table, \n> that index, sort the results, then group them), the columns used, the access \n> time, the access count, and the total size. It may also want a simple flag \n> of \"prepared or non-prepared\", where prepared indicates an explicitly \n> prepared statement that has placeholders for future values. A good optimizer \n> will actually change the plan based on the values plugged into the prepared \n> queries, so that information should become a part of the query itself as \n> needed, and multiple queries may exist to handle different inputs. In \n> general, most of the inputs will be similar enough to use the same path (e.g. \n> \"SELECT flavor FROM foo WHERE size=?\" will most usually result in a simple \n> numeric value for the executes). If a match *is* found, then the database \n> can use the stored path, and not have to bother calling up the optimizer \n> to figure it out. It then updates the access time, the access count, and \n> continues as normal. If a match was *not* found, then it might possibly \n> want to be cached. Certainly, explicit prepares should always be cached. \n> Non-explicitly prepared queries (those without placeholders) can also be \n> cached. In theory, some of this will also be in the result cache, so that \n> should be checked as well: if it is there, no reason to put it here. Prepared \n> queries should always have priority over non-prepared, and the rest of the \n> rules above for the result query should also apply, with a caveat that things \n> that would affect the output of the optimizer (e.g. 
vacuuming) should also \n> be taken into account when deleting entries.\n> \n> \n> III. Relation caching\n> \n> The final cache is the relation itself, and simply involves putting the entire \n> relation into memory. This cache has a field for the name of the relation, \n> the table info itself, the type (indexes should ideally be cached more than \n> tables, for example), the access time, and the access number. Loading could \n> be done automatically, but most likely should be done according to a flag \n> on the table itself or as an explicit command by the user.\n> \n> \n> Notes:\n> \n> The \"strcmp\" used may seem rather crude, as it misses all but the exact \n> same query, but it does cover most of the everyday cases. Queries are \n> usually called through an application that keeps them in the same format, \n> time after time, so the queries are very often exactly identical. A better \n> parser would help, of course, but it would get rather complicated quickly.\n> Two quick examples: a web page that is read from a database is a query that \n> is called many times with exactly the same syntax; a user doing a \"refresh\" \n> to constantly check if something has changed since they last looked.\n> \n> Sometimes a query may jump back to a previous type of cache, especially \n> for things like subselects. The entire subselect query may not match, \n> but the inner query should also be checked against the query result cache.\n> \n> Each cache should have some configurable parameters, including the size in \n> RAM, the maximum number of entries, and rules for adding and deleting.\n> They should also be directly viewable through a system table, so a DBA \n> can quickly see exactly which queries are being cached and how often \n> they are being used. There should be a command to quickly flush the cache, \n> remove \"old\" entries, and to populate the query plan cache via a prepare \n> statement. 
It should also be possible to do table changes without stopping \n> to check the cache: perhaps flushing the cache and setting a global \n> \"cache is empty\" flag would suffice.\n> \n> Another problem is the prepare and execute: you are not always guaranteed \n> to get a cached prepare if you do an execute, as it may have expired or \n> there may simply be no more room. Those prepared statements inside a \n> transaction should probably be flagged as \"non-deletable\" until the \n> transaction is ended.\n> \n> Storing the results of an execute in the query result cache is another \n> problem. When a prepare is made, the database returns a link to that \n> exact prepared statement in cache, so that all the client has to say is \n> \"run the query at 0x4AB3 with the value \"12\". Ideally, the database \n> should be able to check these against the query result cache as well. It \n> can do this by reconstructing the resulting query (by plugging the value into \n> the prepared statement) or it can store the execute request as a type of \n> query itself; instead of \"SELECT baz FROM bar WHERE size=12\" it would \n> store \"p0x4aB3:12\".\n> \n> \n> The result cache would probably be the easiest to implement, and also gives \n> the most \"bang for the buck.\" The path cache may be a bit harder, but is \n> a very necessary feature. 
I don't know about the relation caching: it looks \n> to be fairly easy, and I don't trust that, so I am guessing it is actually \n> quite difficult.\n> \n> Greg Sabino Mullane greg@turnstep.com\n> PGP Key: 0x14964AC8 200202281132\n> \n> -----BEGIN PGP SIGNATURE-----\n> Comment: http://www.turnstep.com/pgp.html\n> \n> iD8DBQE8fqybvJuQZxSWSsgRAps9AKDwCkIH7GKSBjflyYSA0F7mQqD1MwCeJLCw\n> hqE1SxJ2Z7RxFGCu3UwIBrI=\n> =jlBy\n> -----END PGP SIGNATURE-----\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n[ Decrypting message... End of raw data. ]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 25 Aug 2002 20:15:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Database Caching"
},
{
"msg_contents": "I'm not sure about query result caching or 'relation caching', since the\nfirst would seem to run into problems with concurrent updates, and the\nsecond is sort-of what the buffer cache does.\n\nQuery plan caching sounds like a really good idea though. Neil Conway's\nPREPARE patch already does this for an individual backend. Do you think\nit would be hard to make it use shared memory, and check if a query has\nalready been prepared by another backend? Maybe it could use something\nlike a whitespace insensitive checksum for a shared hash key.\n\nRegards,\n\n\tJohn Nield\n\nOn Sun, 2002-08-25 at 20:15, Bruce Momjian wrote:\n> \n> Do we want to add \"query caching\" to the TODO list, perhaps with a\n> question mark?\n> \n> ---------------------------------------------------------------------------\n> \n> Greg Sabino Mullane wrote:\n[snip]\n> \n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n",
"msg_date": "25 Aug 2002 21:35:24 -0400",
"msg_from": "\"J. R. Nield\" <jrnield@usol.com>",
"msg_from_op": false,
"msg_subject": "Re: Database Caching"
},
{
"msg_contents": "On Sun, 25 Aug 2002, Bruce Momjian wrote:\n\n> Do we want to add \"query caching\" to the TODO list, perhaps with a\n> question mark?\n\nI'd love to have query plans cached, preferably across backends.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Mon, 26 Aug 2002 13:17:45 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: Database Caching"
},
{
"msg_contents": "On Sun, Aug 25, 2002 at 09:35:24PM -0400, J. R. Nield wrote:\n> I'm not sure about query result caching or 'relation caching', since the\n> first would seem to run into problems with concurrent updates, and the\n> second is sort-of what the buffer cache does.\n> \n> Query plan caching sounds like a really good idea though. Neil Conway's\n> PREPARE patch already does this for an individual backend. Do you think\n> it would be hard to make it use shared memory, and check if a query has\n> already been prepared by another backend? Maybe it could use something\n> like a whitespace insensitive checksum for a shared hash key.\n\n The original version of query plan cache allows exactly this. But\n after some discussion the shared memory usage in qcache was remove.\n\n I think better and more robus solution is store cached planns in\n backend memory and allows to run backend as persistent (means not\n startup/stop for each client connection).\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Mon, 26 Aug 2002 10:39:10 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: Database Caching"
}
] |
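The fingerprinting scheme sketched in the thread above (replace literal values with parameter markers, hash the normalized query, and keep the original text alongside the hash so a collision can never return the wrong plan) can be illustrated in Python. This is a minimal sketch, not the qcache implementation: the regex-based `normalize` is a crude stand-in for a real SQL parser, and `PlanCache` is a hypothetical name.

```python
import hashlib
import re


def normalize(sql):
    """Replace literal values with '?' markers and collapse whitespace.

    A crude textual stand-in for a real parser: actual systems would
    normalize the parse tree, not the query string.
    """
    out = re.sub(r"'(?:[^']|'')*'", "?", sql)    # string literals
    out = re.sub(r"\b\d+(\.\d+)?\b", "?", out)   # numeric literals
    return re.sub(r"\s+", " ", out).strip().lower()


def fingerprint(sql):
    """Return a 64-bit hash of the normalized query plus the normalized text."""
    norm = normalize(sql)
    h = int.from_bytes(hashlib.sha256(norm.encode()).digest()[:8], "big")
    return h, norm


class PlanCache:
    """Maps fingerprint -> (normalized text, stored plan, hit count).

    The normalized text is compared on lookup, so the unbelievably rare
    hash collision still cannot hand back someone else's plan.
    """

    def __init__(self):
        self._slots = {}

    def lookup(self, sql):
        h, norm = fingerprint(sql)
        entry = self._slots.get(h)
        if entry and entry["query"] == norm:   # collision guard
            entry["hits"] += 1
            return entry["plan"]
        return None

    def store(self, sql, plan):
        h, norm = fingerprint(sql)
        self._slots[h] = {"query": norm, "plan": plan, "hits": 0}
```

With this, `SELECT flavor FROM foo WHERE size=12` and `... size=99` normalize to the same fingerprint and share one cached plan, which is exactly the case Greg's "strcmp" approach would miss.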
[
{
"msg_contents": "-----Original Message-----\nFrom: Tom Lane [mailto:tgl@sss.pgh.pa.us]\nSent: Thursday, February 28, 2002 3:27 PM\nTo: Greg Sabino Mullane\nCc: pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Database Caching \n\n\n\"Greg Sabino Mullane\" <greg@turnstep.com> writes:\n> III. Relation caching\n\n> The final cache is the relation itself, and simply involves putting\nthe entire \n> relation into memory. This cache has a field for the name of the\nrelation, \n> the table info itself, the type (indexes should ideally be cached more\nthan \n> tables, for example), the access time, and the acccess number. Loading\ncould \n> be done automatically, but most likely should be done according to a\nflag \n> on the table itself or as an explicit command by the user.\n\nThis would be a complete waste of time; the buffer cache (both Postgres'\nown, and the kernel's disk cache) serves the purpose already.\n\nAs I've commented before, I have deep misgivings about the idea of a\nquery-result cache, too.\n>>\nI certainly agree with Tom on both counts.\n\nThink of the extra machinery that would be needed to retain full\nrelational integrity with a result cache...\n\nThen think of how easy it is to write your own application that caches\nresults if that is what you are after and you know (for some reason)\nthat it won't matter if the database gets updated.\n\nI don't see how result caching can be a win, since it can be done when\nneeded anyway, without adding complexity to the database engine. Just\nhave the application cache the result set. Certainly a web server could\ndo this, if needed.\n\nIf there were a way to mark a database as read only (and this could not\nbe changed unless the entire database is shut down and restarted in\nread/write mode) then there might be some utility to result set cache.\nOtherwise, I think it will be wasted effort. 
It might be worthwhile to\ndo the same for individual tables (with the same sort of restrictions).\nBut think of all the effort that would be needed to do this properly,\nand what sort of payback would be received from it?\n\nAgain, the same goals can easily be accomplished without having to\nperform major surgery on the database system. I suspect that there is\nsome logical reason that Oracle/Sybase/IBM/Microsoft have not bothered\nwith it.\n\nI am ready to be convinced otherwise if I see a logical reason for it.\nBut with the current evidence, I don't see any compelling reason to put\neffort in that direction.\n<<\n",
"msg_date": "Thu, 28 Feb 2002 15:44:54 -0800",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: Database Caching "
},
{
"msg_contents": "On Fri, 2002-03-01 at 04:44, Dann Corbit wrote:\n> As I've commented before, I have deep misgivings about the idea of a\n> query-result cache, too.\n> >>\n> I certainly agree with Tom on both counts.\n> \n> Think of the extra machinery that would be needed to retain full\n> relational integrity with a result cache...\n> \n> Then think of how easy it is to write your own application that caches\n> results if that is what you are after and you know (for some reason)\n> that it won't matter if the database gets updated.\n\nThat would be trivial indeed.\n\nThe tricky case is when you dont know when and how the database will be\nupdated. That would need an insert/update/delete trigger on each and\nevery table that contributes to the query, either explicitly ot through\nrule expansion. Doing that from client side would a) be difficult and b)\nprobably too slow to be of any use. To do it in a general fashion wopuld\nalso need a way to get the expanded query tree for a query to see which\ntables the query depends on.\n\n> I don't see how result caching can be a win, since it can be done when\n> needed anyway, without adding complexity to the database engine. Just\n> have the application cache the result set. Certainly a web server could\n> do this, if needed.\n\nMore advanced application server can do it all right. But you still need\nsound cache invalidation mechanisms.\n\n> If there were a way to mark a database as read only (and this could not\n> be changed unless the entire database is shut down and restarted in\n> read/write mode) then there might be some utility to result set cache.\n> Otherwise, I think it will be wasted effort. 
It might be worthwhile to\n> do the same for individual tables (with the same sort of restrictions).\n> But think of all the effort that would be needed to do this properly,\n> and what sort of payback would be received from it?\n\nThe payback will be blazingly fast slashdot type applications with very\nlittle effort from the end application programmer.\n\n> Again, the same goals can easily be accomplished without having to\n> perform major surgery on the database system. I suspect that there is\n> some logical reason that Oracle/Sybase/IBM/Microsoft have not bothered\n> with it.\n\nI think they were designed/developed when the WWW was nonexistent, and\nclient-server meant a system where client and server were separated by\na (slow) network connection that would negate most of the benefit from\nserver side caching. In today's application server scenario the client\n(AS) and server (DB) are usually very close, if not on the same computer,\nand thus an effectively managed cache can be kept on the DB to avoid all the\ncache invalidation logic going through the DB-AS link.\n\n> I am ready to be convinced otherwise if I see a logical reason for it.\n> But with the current evidence, I don't see any compelling reason to put\n> effort in that direction.\n\nIn what direction are _you_ planning to put your effort ?\n\n------------\nHannu\n\n",
"msg_date": "01 Mar 2002 10:24:39 +0500",
"msg_from": "Hannu Krosing <hannu@krosing.net>",
"msg_from_op": false,
"msg_subject": "Re: Database Caching"
},
{
"msg_contents": "> The tricky case is when you dont know when and how the database will\nbe\n> updated. That would need an insert/update/delete trigger on each and\n> every table that contributes to the query, either explicitly ot\nthrough\n> rule expansion. Doing that from client side would a) be difficult\nand b)\n> probably too slow to be of any use. To do it in a general fashion\nwopuld\n> also need a way to get the expanded query tree for a query to see\nwhich\n> tables the query depends on.\n\nRather than result caching, I'd much rather see an asynchronous NOTICE\ntelling my webservers which have RULES firing them off when a table is\nmodified.\n\nLet the webserver hold the cache (as they do now in my case, and in\nslashdots) but it gives a way that the database can tell all those\ninvolved to drop the cache and rebuild. Currently I accomplish this\nwith a timestamp on a single row table. Could probably accomplish it\nwith a periodic SELECT TRUE and watch for the notice -- but in my case\nI need to support other dbs as well.\n\n",
"msg_date": "Mon, 4 Mar 2002 16:05:16 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Database Caching"
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> Rather than result caching, I'd much rather see an asynchronous NOTICE\n> telling my webservers which have RULES firing them off when a table is\n> modified.\n\nLISTEN/NOTIFY?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Mar 2002 16:50:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Database Caching "
},
{
"msg_contents": "Sorry, NOTIFY -- not NOTICE (damn keyboard...)\n\nRight now we're required to do a select against the database\nperiodically which means a test is required before hitting any cached\nelements. By the time you select true, might as well do the real\nselect anyway (normally simple index lookup).\n\nThe ability to receive one without making a query first would be very\nadvantageous.\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Rod Taylor\" <rbt@zort.ca>\nCc: \"Hannu Krosing\" <hannu@krosing.net>; \"Dann Corbit\"\n<DCorbit@connx.com>; \"Greg Sabino Mullane\" <greg@turnstep.com>;\n<pgsql-hackers@postgresql.org>\nSent: Monday, March 04, 2002 4:50 PM\nSubject: Re: [HACKERS] Database Caching\n\n\n> \"Rod Taylor\" <rbt@zort.ca> writes:\n> > Rather than result caching, I'd much rather see an asynchronous\nNOTICE\n> > telling my webservers which have RULES firing them off when a\ntable is\n> > modified.\n>\n> LISTEN/NOTIFY?\n>\n> regards, tom lane\n>\n\n",
"msg_date": "Mon, 4 Mar 2002 16:56:35 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Database Caching "
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> Sorry, NOTIFY -- not NOTICE (damn keyboard...)\n> Right now we're required to do a select against the database\n> periodically\n\nCertainly not; I fixed that years ago (I think it was the first\nnontrivial thing I ever did with the PostgreSQL code). You can\nexecute PQnotifies without any PQexecs, and you can even have your\napplication sleep waiting for a notify to come in. See the libpq\ndocs under the heading of \"Asynchronous Notification\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Mar 2002 17:05:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Database Caching "
},
{
"msg_contents": "On Mon, 2002-03-04 at 23:50, Tom Lane wrote:\n> \"Rod Taylor\" <rbt@zort.ca> writes:\n> > Rather than result caching, I'd much rather see an asynchronous NOTICE\n> > telling my webservers which have RULES firing them off when a table is\n> > modified.\n> \n> LISTEN/NOTIFY?\n\nBut is there an easy way to see which tables affect the query result,\nsomething like machine-readable EXPLAIN ?\n\nAnother thing that I have thought about Is adding a parameter to notify,\nso that you can be told _what_ is changed (there is big difference\nbetween being told that \"somebody called\" and \"Bob called\")\n\nThere are two ways of doing it \n\n1) the \"wire protocol compatible\" way , where the argument to LISTEN is\ninterpreted as a regular expression (or LIKE expression), so that you\ncan do\n\nLISTEN 'ITEM_INVALID:%';\n\nand the receive all notifies for\n\nNOTIFY 'ITEM_INVALID:' || ITEM_ID;\n\nand\n\nNOTIFY 'ITEM_INVALID:ALL';\n\nwhere the notify comes in as one string\n\n2) the more general way where you listen on exact \"relation\" and notify\nhas an argument at both syntax and protocol level, i.e\n\nLISTEN ITEM_INVALID;\n\nand\n\nNOTIFY 'ITEM_INVALID',ITEM_ID;\n\nNOTIFY 'ITEM_INVALID','ALL';\n\n------------------\nHannu\n\n\n",
"msg_date": "05 Mar 2002 09:45:26 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Cache invalidation notification (was: Database Caching)"
},
{
"msg_contents": "You could accomplish that with rules.\n\nMake a rule with the where clause you like, then NOTIFY the client.\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Hannu Krosing\" <hannu@tm.ee>\nTo: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nCc: \"Rod Taylor\" <rbt@zort.ca>; \"Hannu Krosing\" <hannu@krosing.net>;\n\"Dann Corbit\" <DCorbit@connx.com>; \"Greg Sabino Mullane\"\n<greg@turnstep.com>; <pgsql-hackers@postgresql.org>\nSent: Tuesday, March 05, 2002 2:45 AM\nSubject: [HACKERS] Cache invalidation notification (was: Database\nCaching)\n\n\n> On Mon, 2002-03-04 at 23:50, Tom Lane wrote:\n> > \"Rod Taylor\" <rbt@zort.ca> writes:\n> > > Rather than result caching, I'd much rather see an asynchronous\nNOTICE\n> > > telling my webservers which have RULES firing them off when a\ntable is\n> > > modified.\n> >\n> > LISTEN/NOTIFY?\n>\n> But is there an easy way to see which tables affect the query\nresult,\n> something like machine-readable EXPLAIN ?\n>\n> Another thing that I have thought about Is adding a parameter to\nnotify,\n> so that you can be told _what_ is changed (there is big difference\n> between being told that \"somebody called\" and \"Bob called\")\n>\n> There are two ways of doing it\n>\n> 1) the \"wire protocol compatible\" way , where the argument to LISTEN\nis\n> interpreted as a regular expression (or LIKE expression), so that\nyou\n> can do\n>\n> LISTEN 'ITEM_INVALID:%';\n>\n> and the receive all notifies for\n>\n> NOTIFY 'ITEM_INVALID:' || ITEM_ID;\n>\n> and\n>\n> NOTIFY 'ITEM_INVALID:ALL';\n>\n> where the notify comes in as one string\n>\n> 2) the more general way where you listen on exact \"relation\" and\nnotify\n> has an argument at both syntax and protocol level, i.e\n>\n> LISTEN ITEM_INVALID;\n>\n> and\n>\n> NOTIFY 'ITEM_INVALID',ITEM_ID;\n>\n> NOTIFY 'ITEM_INVALID','ALL';\n>\n> ------------------\n> Hannu\n>\n>\n>\n> ---------------------------(end 
of\nbroadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n",
"msg_date": "Tue, 5 Mar 2002 08:10:50 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Cache invalidation notification (was: Database Caching)"
},
{
"msg_contents": "On 01 Mar 2002 10:24:39 +0500, Hannu Krosing wrote:\n> ...\n> \n> > I don't see how result caching can be a win, since it can be done when\n> > needed anyway, without adding complexity to the database engine. Just\n> > have the application cache the result set. Certainly a web server could\n> > do this, if needed.\n> \n> More advanced application server can do it all right. But you still need\n> sound cache invalidation mechanisms.\n> \n> > If there were a way to mark a database as read only (and this could not\n> > be changed unless the entire database is shut down and restarted in\n> > read/write mode) then there might be some utility to result set cache.\n> > Otherwise, I think it will be wasted effort. It might be worthwhile to\n> > do the same for individual tables (with the same sort of restrictions).\n> > But think of all the effort that would be needed to do this properly,\n> > and what sort of payback would be received from it?\n> \n> The payback will be blazingly fast slashdot type applications with very\n> little effort from end application programmer.\n> \n> > Again, the same goals can easily be accomplished without having to\n> > perform major surgery on the database system. I suspect that there is\n> > some logical reason that Oracle/Sybase/IBM/Microsoft have not bothered\n> > with it.\n> \n> I think they were designed/developed when WWW was nonexistent, and\n> client-server meant a system where client and server where separated by\n> a (slow) network connection that would negate most of the benefit from\n> server side cacheing. On todays application server scenario the client\n> (AS) and server (DB) are usually very close, if not on the same computer\n> and thus effectively managed cache can be kept on DB to avoid all the\n> cache invalidation logic going through DB-AS link.\n> \n> ...\n\nYou have identified one of the most valuable reasons for query/result\ncaching. 
As I read the threads going on about caching, it reminds me\nof the arguments within MySQL about the need for referential integrity\nand transactions. Yes, the application can do it, however, by having\nthe database do it, it frees the application programmer to perform\nmore important tasks (e.g., more features).\n\nYour insights about the caching, etc. in the older, legacy databases\nare very likely the case. With the development of high speed\nnetworking, etc., it is now very feasible to move the caching closer\nto the sources of change/invalidation.\n\nQuite frankly, while certainly there are literally millions of\napplications that would find minimal value in query/results caching, I\nknow that, at least for web applications, the query/results\ncaching _would_ be very valuable.\n\nCertainly, as has been pointed out several times, the application can\ndo the caching, however, utilizing today's \"generic\" tools such as\nApache and PHP, it is relatively difficult. By moving caching to the\ndatabase server, there is little that needs to be done by the\napplication to realize the benefits.\n\nFor example, with my typical application, as a straight dynamic\napplication, the process would be approximately:\n\n 1) receive the request \"http://www.xyz.com/index.php\"\n\n 2) query the current page contents from the database:\n\n\tselect * from table where dateline = '2002-03-05';\n\n 3) begin the HTML output\n\n 4) loop through the select results printing the contents\n\n 5) complete the HTML output\n\nWith caching within the database, this would likely achieve\nperformance good enough to serve millions of requests per day. (This\nis the case for some of our sites using Intersystems Cache which does\ndo query caching and serves over 2 million page views per day.)\n\nWithout the database caching, it is necessary to have the application\nperform this function. 
Because of the short life of an Apache/PHP\nprocess, caching needs to be performed externally to the application:\n\n 1) receive the request \"http://www.xyz.com/index.php\"\n\n 2) check the age of the cache file (1min, 5min ???):\n\n 3) if the cache is not fresh:\n\n 3.1) lock the cache file\n\n 3.2) query the database\n\n 3.3) begin HTML output to the cache file\n\n 3.4) loop through select results\n\n 3.5) complete the HTML output to the cache file\n\n 3.6) unlock the cache file\n\n 4) open the cache file (i.e., wait for locked file)\n\n 5) read/passthrough cache contents\n\n(Please note that this is one, simplified way to do the caching. It\nassumes that the data becomes stale over time and needs to be\nrefreshed. It also uses a file cache instead of a memory cache.\nCertainly things could be made more efficient through the use of\nnotifications from the database and the use of shared memory. Another\npath would be to have an external program generating the pages and\nthen placing the \"static\" pages into service (which requires changes\nto apache and/or the OS due to cache file invalidation during the\ngeneration process). Both paths make the application more complex\nrequiring more programming time.)\n\nIt is possible to have an application server do this caching for you.\nOf course, that in turn has its own complications. For applications\nthat do not need the added \"features\" of an application server,\ndatabase caching can be a big win in both processing/response time as\nwell as programmer time.\n\nThanks,\nF Harvell\n\n\n",
"msg_date": "Tue, 05 Mar 2002 11:35:32 -0500",
"msg_from": "F Harvell <fharvell@fts.net>",
"msg_from_op": false,
"msg_subject": "Re: Database Caching "
},
{
"msg_contents": "> Certainly, as has been pointed out several times, the application\ncan\n> do the caching, however, utilizing todays \"generic\" tools such as\n> Apache and PHP, it is relatively difficult. By moving caching to\nthe\n> database server, there is little that needs to be done by the\n> application to realize the benefits.\n\nPHP has an amazingly easy to use Shared Memory resource.\n\nCreate a semaphore, and an index hash in position 0 which points to\nthe rest of the resources.\n\nOn query lock memory, lookup in index, pull values if it exists\notherwise run db query and stuff values in.\n\nThe tricky part is expiring the cache. I store a time in position 1\nand a table in the database. To see if cache should be expired I do a\nselect against the 'timer' table in the db. If it's expired, clear\ncache and let it be rebuilt. A trigger updates the time on change.\n\nDoesn't work well at all for frequent updates -- but thats not what\nit's for.\n\n\nTook about 15 minutes to write the caching mechanism in a generic\nfashion for all my applications. Step 2 of my process was to cache\nthe generated HTML page itself so I generate it once. I store it gzip\ncompressed, and serve directly to browsers which support gzip\ncompression (most). Pages are stored in shared memory using the above\nmechanism. This took about 40 minutes (php output buffer) and is\nbased on the uniqueness of the pages requested. Transparent to the\napplication too (prepend and append code elements). (Zend cache does\nthis too I think).\n\nAnyway, I don't feel like rewriting it as non-private code, but the\nconcept is simple enough. Perhaps Zend or PEAR could write the above\ndata lookup wrappers to shared memory. 
Although you don't even have\nto worry about structures or layout, just the ID that represents the\nlocation of your object in memory.\n\nIt can serve a couple million per hour using this on an E220 with lots\nof ram -- bandwidth always runs out first.\n\n\nAnyway, first suggestion is to buy Zend Cache (if available) to cache\nentire HTML pages, not to cache db queries. Even the great Slashdot\n(which everyone appears to be comparing this to) uses a page cache to\nreduce load and serves primarily static HTML pages which were\npregenerated.\n\nI agree with the solution, I don't agree with the proposed location as\ncaching DB queries only solves about 1/4 of the whole problem. The\nother being network (millions of queries across 100mbit isn't fast\neither), and generation of the non-static page from the static\n(cached) query. Build a few large (10k row) tables to see what I mean\nabout where the slowdown really is.\n\n",
"msg_date": "Tue, 5 Mar 2002 12:46:27 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Database Caching "
},
{
"msg_contents": "> > > I don't see how result caching can be a win, since it can be done when\n> > > needed anyway, without adding complexity to the database engine. Just\n> > > have the application cache the result set. Certainly a web server\ncould\n> > > do this, if needed.\n\n There are a couple of catches with that idea. First, we have thirty+\napplications that we've written, and trying to go back and patch the caching\ninto all of them would be much more work than just doing it in the DB.\nSecondly, changes to the data can be made from psql, other apps, or from\ntriggers and hence, there is no reliable way to deal with cache expiration.\nNot only that, but if you have a pool of web servers, applications on each\none can be inserting/updating data, and none of the other machines have any\nclue about that.\n\n Really, doing it in PG makes a lot of sense. Doing it outside of PG has a\nlot of problems. At one point, I was resolved that I was going to do it in\nmiddleware. I sat down and planned out the entire thing, including a really\nnifty hash structure that would make cache expiration and invalidtion\nlightning-fast... but I had focused entirely on the implementation, and when\nI realized all of the drawbacks to doing it outside of the backend, I\nscrapped it. That's the first time I've ever really wished that I was a C\nprogrammer....\n\n In the worst-case scenario (never repeating a query), result caching\nwould have a very small overhead. In any semi-realistic scenario, the\nbenefits are likely to be significant to extraordinary.\n\nsteve\n\n\n\n\n",
"msg_date": "Tue, 5 Mar 2002 12:13:15 -0700",
"msg_from": "\"Steve Wolfe\" <steve@iboats.com>",
"msg_from_op": false,
"msg_subject": "Re: Database Caching "
},
{
"msg_contents": "On Tue, 05 Mar 2002 12:46:27 EST, \"Rod Taylor\" wrote:\n> \n> PHP has an amazingly easy to use Shared Memory resource.\n> \n> ...\n> \n> Anyway, first suggestion is to buy Zend Cache (if available) to cache\n> entire HTML pages, not to cache db queries. Even the great Slashdot\n> (which everone appears to be comparing this to) uses a page cache to\n> reduce load and serves primarily static HTML pages which were\n> pregenerated.\n\n Thanks for the information about PHP. It looks very useful.\n\n Just an FYI, Zend cache (which we use) only caches the prepared PHP\n\"code\" in a shared memory buffer. The code is still executed (just\nnot parsed) for each request. \"Finished\" HTML pages are not cached.\nThis is still a terrific gain as the majority of the overhead with PHP\nis the parsing/preparation. We have seen 8 page/second executions\njump to 40 pages/second.\n\n> I agree with the solution, I don't agree with the proposed location as\n> caching DB queries only solves about 1/4 of the whole problem. The\n> other being network (millions of queries across 100mbit isn't fast\n> either), and genereation of the non-static page from the static\n> (cached) query. Build a few large (10k row) tables to see what I mean\n> about where the slowdown really is.\n\n Certainly I agree that query caching does not provide the most\nefficient solution for webpage caching. Neither does DB based\nreferential integrity provide the most efficient solution for\nintegrity either. Application driven referential integrity can have\norders of magnitude better performance (application, i.e., programmer,\nknows what needs to happen, etc.).\n\n What query caching does (potentially) provide is a direct, data\ndriven solution to caching that relieves the programmer from having to\ndeal with the issue. 
The solution brings the caching down to the data\nlayer where invalidation/caching occurs automatically and \"correctly\".\n\n This works well for most applications, at least until the provided\n\"solution\" becomes/drives the next bottleneck. This can occur with\nreferential integrity, transactions, data typing, etc. even now. When\nit does, it becomes incumbent upon the programmer to find a higher\nlevel solution to the issue.\n\n It also likely would provide a measured level of benefit to most\napplications. I know that many, many people on this list have said\nthat after thinking about it they didn't think so, but, it has been my\nexperience (as a database engine user not a database engine\nprogrammer) that people using any application tend to ask the same\nquery over and over and over again. This has been true for both the human\ninteraction as well as the application code interaction. Because I\nhave been buried in the web application domain for the last 5 years, I\nwill defer to others about applicability to other domains.\n\nThanks,\nF Harvell\n\n\n",
"msg_date": "Tue, 05 Mar 2002 14:59:45 -0500",
"msg_from": "F Harvell <fharvell@fts.net>",
"msg_from_op": false,
"msg_subject": "Re: Database Caching "
}
] |
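Rod Taylor's invalidation scheme in the thread above (results cached in shared memory, a one-row "timer" table that a trigger bumps on every change, and readers comparing that timestamp against the time their cache was built) boils down to a small piece of logic. A sketch, with the database change time passed in explicitly so it is testable without a running server; `InvalidatingCache` is a hypothetical name:

```python
class InvalidatingCache:
    """Result cache invalidated by a single last-change timestamp.

    In the scheme described above, a trigger bumps a one-row 'timer'
    table on every insert/update/delete; clients compare that value
    (or receive it via LISTEN/NOTIFY) against the time their cache
    was built. Here db_change_time stands in for that lookup.
    """

    def __init__(self):
        self._built_at = None
        self._data = {}

    def get(self, key, db_change_time, compute):
        # Any write newer than the cache makes the whole cache stale.
        if self._built_at is None or db_change_time > self._built_at:
            self._data.clear()
            self._built_at = db_change_time
        if key not in self._data:
            self._data[key] = compute()   # run the real query once
        return self._data[key]
```

As the thread notes, this coarse whole-cache expiry works poorly for frequently updated data; Hannu's parameterized NOTIFY proposal (`NOTIFY 'ITEM_INVALID', ITEM_ID`) would allow evicting individual keys instead of clearing everything.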
[
{
"msg_contents": "Here is what I have implemented for database and user-specific run-time\nconfiguration settings. Unless there are desired changes, I plan to check\nthis in tomorrow.\n\n* Syntax\n\nALTER DATABASE dbname SET variable TO value;\nALTER DATABASE dbname SET variable TO DEFAULT;\nALTER DATABASE dbname RESET variable;\nALTER DATABASE dbname RESET ALL;\n\nAnd analogously for USER. The second and the third syntax are equivalent.\n\nAFAIK, there's no precedent for this general feature, but the syntax was\nsort of extrapolated from a leading vendor's ALTER USER name SET TIME ZONE\nstatement. (Which, ironically, this patch won't provide, because the time\nzone setting is handled separately. I hope to fix this soon.)\n\n* Semantics\n\nSettings are processed just before the session loop starts, thus giving\nthe intended impression of a SET statement executed manually just before\nthe session start. This naturally restricts the allowable set of options\nto SUSET and USERSET. However, these settings do establish a new default\nfor the purpose of RESET ALL.\n\n(The truth is, these settings are processed somewhere in the middle of\nInitPostgres(). This way we only have to read pg_shadow and pg_database\nonce per session startup. This keeps the performance impact negligible.)\n\n* Privileges\n\nUsers can change their own defaults. Database owners can change the\ndefaults for the databases they own. Superusers can change anything.\n\nIf the executing user is a superuser, he can change SUSET settings. Else\none can only change USERSET settings. This means that superusers can\n\"force\" SUSET settings onto other people who can't unset them.\n\n* Escape hatch\n\nIf you severely mess up your settings so that you can't log in you can\nstill start up a standalone backend. These settings are not processed in\nthis case.\n\nIn the standalone backend we don't even read pg_shadow now, because we set\nthe uid to 1 in any case. 
So the reasoning was that if we don't read\npg_shadow, we can't process the user options, and if we don't process the\nuser options we shouldn't process the database options either.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 28 Feb 2002 20:59:16 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Final spec on per-database/per-user settings"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Here is what I have implemented for database and user-specific run-time\n> configuration settings. Unless there are desired changes, I plan to check\n> this in tomorrow.\n\nSemantics seem fine, but an implementation detail:\n\n> (The truth is, these settings are processed somewhere in the middle of\n> InitPostgres(). This way we only have to read pg_shadow and pg_database\n> once per session startup. This keeps the performance impact negligible.)\n\nI trust the pg_database row is read during ReverifyMyDatabase, and isn't\nthe one fetched by GetRawDatabaseInfo...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 28 Feb 2002 22:57:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Final spec on per-database/per-user settings "
}
] |
[
{
"msg_contents": "While writing about the database and user-specific session defaults, it\noccurred to me, there's a more general feature lurking. Users or admins\ncould register an SQL-realm function that is executed before the session\nstarts. That way you could, say, implement a connection audit with\narbitrary alert actions.\n\nI don't mean this to replace the proposed system of session defaults,\nsince this new idea would probably be a lot more complex and slow, but\nit's something to think about.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 28 Feb 2002 21:08:28 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "More general configurability of session startup actions"
}
] |
[
{
"msg_contents": "Andrew McMillan (andrew@catalyst.net.nz) reports a bug with a severity of 3\nThe lower the number the more severe it is.\n\nShort Description\ntimestamp(timestamp('a timestamp)) no longer works\n\nLong Description\nIn version 7.2 it seems that I can't reduntantly cast value to timestamp if it is already a timestamp.\n\nI do this reasonably often in my code by way of being paranoid that I might have a date, or a time, where I for sure _really_ want it to be a timestamp...\n\nIt's cleaning up some bugs in my code, I suppose, but I kind of like making it explicit to people who might come along after me :-)\n\n\nSample Code\nHere's the broken query:\n\npcnz=# select timestamp('2002-03-01'::timestamp);\nERROR: parser: parse error at or near \"'\"\npcnz=# select version();\n version \n-------------------------------------------------------------\n PostgreSQL 7.2 on i686-pc-linux-gnu, compiled by GCC 2.95.4\n(1 row)\n\nI notice that int4(int4()) still works:\n\npcnz=# select int4( '777'::int4 );\n int4 \n------\n 777\n(1 row)\n\nA couple of older versions where this worked:\n\npcnz=# select timestamp('2002-03-01'::timestamp);\n timestamp \n------------------------\n 2002-03-01 00:00:00+13\n(1 row)\n\npcnz=# select version();\n version \n---------------------------------------------------------------\n PostgreSQL 7.0.3 on i686-pc-linux-gnu, compiled by gcc 2.95.2\n(1 row)\n\nstimulus=# select timestamp('2002-03-01'::timestamp);\n timestamp \n------------------------\n 2002-03-01 00:00:00+13\n(1 row)\n\nstimulus=# select version();\n version \n---------------------------------------------------------------\n PostgreSQL 7.1.3 on i686-pc-linux-gnu, compiled by GCC 2.95.4\n(1 row)\n\n\n\nNo file was uploaded with this report\n\n",
"msg_date": "Fri, 1 Mar 2002 04:50:26 -0500 (EST)",
"msg_from": "pgsql-bugs@postgresql.org",
"msg_from_op": true,
"msg_subject": "Bug #605: timestamp(timestamp('a timestamp)) no longer works"
},
{
"msg_contents": "pgsql-bugs@postgresql.org writes:\n> timestamp(timestamp('a timestamp)) no longer works\n\ntimestamp(x) is a type name now. In place of timestamp(foo) use\n\n\t\"timestamp\"(foo)\n\tfoo::timestamp\n\tCAST(foo AS timestamp)\n\nAnd yes, this is pointed out in the migration notes...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Mar 2002 10:03:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug #605: timestamp(timestamp('a timestamp)) no longer works "
},
{
"msg_contents": "> timestamp(timestamp('a timestamp)) no longer works\n> I do this reasonably often in my code by way of being paranoid\n> that I might have a date, or a time, where I for sure _really_\n> want it to be a timestamp...\n> pcnz=# select timestamp('2002-03-01'::timestamp);\n> ERROR: parser: parse error at or near \"'\"\n\nYou *can* coerce timestamps to be timestamps, but in 7.2 non-standard\nsyntax no longer works to do this. The reason is that \"timestamp(p)\" now\nfollows the SQL9x usage of defining a timestamp type with precision \"p\".\nSo trying to call a function \"timestamp()\" no longer works as it did.\n\nYou can use SQL9x syntax for the type coersion:\n\n select cast('2002-03-01'::timestamp as timestamp);\n\nor (not recommended) you can cheat and force the call to the function by\nsurrounding it in double-quotes:\n\n select \"timestamp\"('2002-03-01'::timestamp);\n\nhth\n\n - Thomas\n",
"msg_date": "Fri, 01 Mar 2002 15:16:23 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Bug #605: timestamp(timestamp('a timestamp)) no longer works"
},
{
"msg_contents": "How would I go about clearing this \"bug\" from the bug database? I\nhaven't looked at the bugtool in quite some time, but I'm not seeing a\nreference to it on the web site...\n\n - Thomas\n",
"msg_date": "Fri, 01 Mar 2002 15:21:01 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Bug #605: timestamp(timestamp('a timestamp)) no longer works"
},
{
"msg_contents": "On Sat, 2002-03-02 at 04:16, Thomas Lockhart wrote:\n> > timestamp(timestamp('a timestamp)) no longer works\n> > I do this reasonably often in my code by way of being paranoid\n> > that I might have a date, or a time, where I for sure _really_\n> > want it to be a timestamp...\n> > pcnz=# select timestamp('2002-03-01'::timestamp);\n> > ERROR: parser: parse error at or near \"'\"\n> \n> You *can* coerce timestamps to be timestamps, but in 7.2 non-standard\n> syntax no longer works to do this. The reason is that \"timestamp(p)\" now\n> follows the SQL9x usage of defining a timestamp type with precision \"p\".\n> So trying to call a function \"timestamp()\" no longer works as it did.\n> \n> You can use SQL9x syntax for the type coersion:\n> \n> select cast('2002-03-01'::timestamp as timestamp);\n> \n> or (not recommended) you can cheat and force the call to the function by\n> surrounding it in double-quotes:\n> \n> select \"timestamp\"('2002-03-01'::timestamp);\n\nThanks Thomas,\n\nI wasn't aware of that SQL9x timestamp precision, which was why it\nseemed like a strange change to me.\n\nSorry to have not read the migration issues before filing this - I\nthought from following these mailing lists that I knew them already :-)\n\nCheers,\n\t\t\t\t\t\tAndrew.\n-- \n--------------------------------------------------------------------\nAndrew @ Catalyst .Net.NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)916-7201 MOB: +64(21)635-694 OFFICE: +64(4)499-2267\n Are you enrolled at http://schoolreunions.co.nz/ yet?\n\n",
"msg_date": "02 Mar 2002 12:56:22 +1300",
"msg_from": "Andrew McMillan <andrew@catalyst.net.nz>",
"msg_from_op": false,
"msg_subject": "Re: Bug #605: timestamp(timestamp('a timestamp)) no longer works"
}
] |
[
{
"msg_contents": "\n> > We could call it TIP or something like that. I think INFO is used\n> > because it isn't a NOTICE or ERROR or something major. It is only INFO.\n> > It is neutral information.\n> \n> That's what NOTICE is. NOTICE is only neutral information. NOTICE could\n> go to the client by default, whereas if you want something in the server\n> log you use LOG. I doubt an extra level is needed.\n\nSQL92 has WARNING, would that be a suitable addition to NOTICE ?\nINFO would not be added since it is like old NOTICE which would stay.\nSo, instead of introducing a lighter level we would introduce a \nstronger level. (WARNING more important than NOTICE) \nIf we change, we might as well adopt some more SQL'ism. \n\ne.g. string truncation is defined to return SUCCESS with WARNING.\n\nI guess it would be a horror for existing client code though :-(\n\nAndreas\n",
"msg_date": "Fri, 1 Mar 2002 12:15:17 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Zeugswetter Andreas SB SD wrote:\n> \n> > > We could call it TIP or something like that. I think INFO is used\n> > > because it isn't a NOTICE or ERROR or something major. It is only INFO.\n> > > It is neutral information.\n> > \n> > That's what NOTICE is. NOTICE is only neutral information. NOTICE could\n> > go to the client by default, whereas if you want something in the server\n> > log you use LOG. I doubt an extra level is needed.\n> \n> SQL92 has WARNING, would that be a suitable addition to NOTICE ?\n> INFO would not be added since it is like old NOTICE which would stay.\n> So, instead of introducing a lighter level we would introduce a \n> stronger level. (WARNING more important than NOTICE) \n> If we change, we might as well adopt some more SQL'ism. \n> \n> e.g. string truncation is defined to return SUCCESS with WARNING.\n> \n> I guess it would be a horror for existing client code though :-(\n\nThat is a good point. We don't have tons of NOTICE messages, and\nWARNING does better describe the new functionality of NOTICE, because\nall those informative messages like sequence creation are now doing INFO\ninstead of NOTICE.\n\nI can make a followup patch to do this. The current patch doesn't touch\nthe NOTICE messages that are left alone, so a separate patch makes\nsense. How about WARN as a tag? Seems shorter than WARNING. Or maybe\nWARNING is fine. It is just one more letter than NOTICE.\n\nWe can keep a NOTICE define for backward compatibility for 7.3, as I\nhave done with DEBUG.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 1 Mar 2002 10:15:29 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Zeugswetter Andreas SB SD wrote:\n> \n> > > We could call it TIP or something like that. I think INFO is used\n> > > because it isn't a NOTICE or ERROR or something major. It is only INFO.\n> > > It is neutral information.\n> > \n> > That's what NOTICE is. NOTICE is only neutral information. NOTICE could\n> > go to the client by default, whereas if you want something in the server\n> > log you use LOG. I doubt an extra level is needed.\n> \n> SQL92 has WARNING, would that be a suitable addition to NOTICE ?\n> INFO would not be added since it is like old NOTICE which would stay.\n> So, instead of introducing a lighter level we would introduce a \n> stronger level. (WARNING more important than NOTICE) \n> If we change, we might as well adopt some more SQL'ism. \n> \n> e.g. string truncation is defined to return SUCCESS with WARNING.\n> \n> I guess it would be a horror for existing client code though :-(\n\nActually, an interesting idea would be to leave NOTICE alone and make\nthe more serious messages WARNING. The problem with that is I think\nINFO is clearer as something for client/user, and LOG something for the\nlogs. I don't think NOTICE has the same conotation. I just thought I\nwould mention that possibility.\n\nSo, with WARNING, NOTICE would go away and become INFO or WARNING, and\nDEBUG goes away to become DEBUG1-5. With DEBUG gone, our need to add\nPG_* to the beginning of the elog symbols may not be necessary.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 1 Mar 2002 11:42:39 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Zeugswetter Andreas SB SD writes:\n\n> SQL92 has WARNING, would that be a suitable addition to NOTICE ?\n> INFO would not be added since it is like old NOTICE which would stay.\n> So, instead of introducing a lighter level we would introduce a\n> stronger level. (WARNING more important than NOTICE)\n> If we change, we might as well adopt some more SQL'ism.\n\nAt the client side SQL knows two levels, namely a \"completion condition\"\nand an \"exception condition\". In the PostgreSQL client protocol, these\nare distinguished as N and E message packets. The tags of the messages\nare irrelevant, they just serve as a guide to the user reading the\nmessage.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 1 Mar 2002 12:09:56 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Zeugswetter Andreas SB SD writes:\n> \n> > SQL92 has WARNING, would that be a suitable addition to NOTICE ?\n> > INFO would not be added since it is like old NOTICE which would stay.\n> > So, instead of introducing a lighter level we would introduce a\n> > stronger level. (WARNING more important than NOTICE)\n> > If we change, we might as well adopt some more SQL'ism.\n> \n> At the client side SQL knows two levels, namely a \"completion condition\"\n> and an \"exception condition\". In the PostgreSQL client protocol, these\n> are distinguished as N and E message packets. The tags of the messages\n> are irrelevant, they just serve as a guide to the user reading the\n> message.\n\nYes, both INFO and NOTICE/WARNING will come to the client as N. Only\nthe message tags will be different.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 1 Mar 2002 13:09:35 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch"
}
] |
[
{
"msg_contents": "\n> > As I wrote it before there, it is an ECPG script that runs with bad perfs.\n> > ...\n> > it seems that on every commit, the cursor is closed\n> \n> Cursors shouldn't be closed, but prepared statements are deallocated on each\n> commit. AFAIK this is what the standard says.\n\nWow, this sure sounds completely bogus to me. \nImho CURSORS opened inside a transanction (after BEGIN WORK) are supposed to\nbe closed (at least with autocommit=yes). \n\nI do not think COMMIT is supposed to do anything with a prepared \nstatement. That is what EXEC SQL FREE :statementid is for.\n\nThat would then match e.g. Informix esql/c.\n\nAndreas\n",
"msg_date": "Fri, 1 Mar 2002 12:34:08 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Oracle vs PostgreSQL in real life"
}
] |
[
{
"msg_contents": "\n> New LOG level the prints to server log by default\n\n> Cause VACUUM information to print only to the client in verbose mode\n> VACUUM doesn't output to server logs\n\nWhy that ? For me vacuum was one of the more useful messages in the log.\nOr have you added a separate elog(LOG,... ?\nIt told me whether the interval between vacuums was good or not. \n\nAndreas\n",
"msg_date": "Fri, 1 Mar 2002 12:38:31 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Zeugswetter Andreas SB SD wrote:\n> \n> > New LOG level the prints to server log by default\n> \n> > Cause VACUUM information to print only to the client in verbose mode\n> > VACUUM doesn't output to server logs\n> \n> Why that ? For me vacuum was one of the more useful messages in the log.\n> Or have you added a separate elog(LOG,... ?\n> It told me whether the interval between vacuums was good or not. \n\nIf you set server_min_messages to DEBUG1, you will get those in the log.\nVacuum was unusual because it was doing DEBUG to the log and not to the\nuser. I can change this so it does LOG to the log all the time. I\nchanged it to DEBUG1 because I was not sure every VACUUM show be spewing\nto the log file. It didn't see like a standard log message.\n\nAnother option is to set server_min_messages to INFO and do a VACUUM\nVERBOSE. That will force those to the log without other DEBUG1\nmessages. As you can see, this is going to allow more control over\nsending information to different places.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 1 Mar 2002 10:07:31 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Vacuum was unusual because it was doing DEBUG to the log and not to the\n> user. I can change this so it does LOG to the log all the time.\n\nI think that VACUUM's messages should not appear at the default logging\nlevel (which is going to be LOG, no?). DEBUG1 seems a reasonable level\nfor VACUUM.\n\nWhat exactly will VACUUM VERBOSE do?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Mar 2002 12:34:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Vacuum was unusual because it was doing DEBUG to the log and not to the\n> > user. I can change this so it does LOG to the log all the time.\n> \n> I think that VACUUM's messages should not appear at the default logging\n> level (which is going to be LOG, no?). DEBUG1 seems a reasonable level\n> for VACUUM.\n\nRight.\n\n> What exactly will VACUUM VERBOSE do?\n\nVACUUM VERBOSE will send INFO vacuum stats to client, and if you set\nserver_min_messages to INFO, you will get them in the logs too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 1 Mar 2002 13:07:59 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> What exactly will VACUUM VERBOSE do?\n\n> VACUUM VERBOSE will send INFO vacuum stats to client, and if you set\n> server_min_messages to INFO, you will get them in the logs too.\n\nSo VACUUM will use either INFO or DEBUG1 level depending on VERBOSE,\nand from there it works as usual. Seems okay.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Mar 2002 14:06:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> What exactly will VACUUM VERBOSE do?\n> \n> > VACUUM VERBOSE will send INFO vacuum stats to client, and if you set\n> > server_min_messages to INFO, you will get them in the logs too.\n> \n> So VACUUM will use either INFO or DEBUG1 level depending on VERBOSE,\n> and from there it works as usual. Seems okay.\n\nYep.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 1 Mar 2002 14:11:40 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch"
}
] |
[
{
"msg_contents": "Here's an excerpt from a database comparison between Oracle, DB2, MySQL, \nSQLserver, and Sybase. (I just asked the author why postgres wasn't used.)\n\nMySQL's great performance was due mostly to our use of an in-memory query \nresults cache that is new in MySQL 4.0.1. When we tested without this cache, \nMySQL's performance fell by two-thirds.\n\nAnyway, this confirms an earlier message suggesting that for web servers that \nhave relatively constant queries, query caching can be a Big Deal.\n\n<http://www.eweek.com/article/0,3658,s=708&a=23115,00.asp>\n\n",
"msg_date": "Fri, 01 Mar 2002 07:44:30 -0500",
"msg_from": "Michael Tiemann <tiemann@redhat.com>",
"msg_from_op": true,
"msg_subject": "Server Databases Clash"
},
{
"msg_contents": "On Fri, 1 Mar 2002, Michael Tiemann wrote:\n\n> Here's an excerpt from a database comparison between Oracle, DB2, MySQL,\n> SQLserver, and Sybase. (I just asked the author why postgres wasn't used.)\n>\n> MySQL's great performance was due mostly to our use of an in-memory query\n> results cache that is new in MySQL 4.0.1. When we tested without this cache,\n> MySQL's performance fell by two-thirds.\n>\n> Anyway, this confirms an earlier message suggesting that for web servers that\n> have relatively constant queries, query caching can be a Big Deal.\n>\n> <http://www.eweek.com/article/0,3658,s=708&a=23115,00.asp>\n\nIf the use of a database on a webserver is to keep serving up the same\ndata over and over again that the database caches it, why not just serve\nup a static page? You can keep the content of an entire website in a\ndatabase and generate static pages as the content changes. The PostgreSQL\nwebsite does this. The only exception being the iDocs, but that's not\nhit enough to worry about with caching or making some of the pages static.\n\nI have a number of webservers that are database driven and I'd be\nsurprized if any of them saw the same queries even twice in the same day.\nAnything I know that will get requested that often will be made static -\nand since I designed the site, I know what's going to be requested\nrepeatedly.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Fri, 1 Mar 2002 08:40:36 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Server Databases Clash"
},
{
"msg_contents": "> MySQL's great performance was due mostly to our use of an in-memory\nquery\n> results cache that is new in MySQL 4.0.1. When we tested without\nthis cache,\n> MySQL's performance fell by two-thirds.\n>\n> Anyway, this confirms an earlier message suggesting that for web\nservers that\n> have relatively constant queries, query caching can be a Big Deal.\n\nI'd be willing to bet that they would have been around 15 to 20%\nfaster if their JSP code did the caching as there is no protocol or\ntransfer overhead.\n\nFor that matter, if they were to have used static webpages with no JSP\ncode (updating those as needed) they probably could have been several\norders of magnitude higher in their serving speed.\n\nIf the information in the database is truely that consistent, it makes\nthe most sense to go with the last option as you don't want the\noverhead of PHP, JSP, ASP and friends either. Afterall, why waste all\nthat time generating the same HTML pages time and time again. The\nproblem really has nothing to do with the database. PHP may wish to\n(and I think there is a project) add caching of a generated page for\ncommon request variables to avoid generation of the page entirely.\n\nHigh volume 'dynamic' websites often use this method when they expect\na low number of changes. Slashdot is a good example. Static\nfrontpage and main page of chat rooms, but once you go in a couple of\nlevels it's generated on the fly due to high level of change compared\nto requests for the information.\n\n",
"msg_date": "Fri, 1 Mar 2002 08:42:24 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Server Databases Clash"
}
] |
[
{
"msg_contents": "My current gig has just gone south.\n\nAnyone know of any employment/contract opportunities in the Boston, MA USA area\nusing PostgreSQL, Linux, etc?\n\nhttp://www.mohawksoft.com/mlwresume.html\n\n(If it is inappropriate for me to post this sort of message here, I apologize\nin advance. I just thought you guys would know best.)\n",
"msg_date": "Fri, 01 Mar 2002 08:01:59 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "PostgreSQL employment/contract "
}
] |
[
{
"msg_contents": "\nI have developed a function to help me with escaping strings more easily. \nIt kind of behaves like printf and is very crude. Before I do anymore\nwork, I was hoping to get some comments or notice if someone has already\ndone this.\n\nI was also thinking there could be a function call PQprintfExec that would\nbuild the sql from the printf and call PQexec in one step.\n\nComments Please!\n\nRegards, \nAdam\n\n\n/*\n * PQprintf\n *\n * This function acts kind of like printf. It takes care of escaping\n * strings and bytea for you, then runs PQexec. The format string\n * defintion is as follows:\n *\n * %i = integer\n * %f = float\n * %s = normal string\n * %e = escape the string\n * %b = escape the bytea\n * \n * When you use %b, you must add another argument just after the\n * variable holding the binary data with its length.\n *\n */\nchar *\nPQprintf(const char *format, ...)\n{\n va_list arg;\n char *sql = NULL;\n char *parse = (char*)strdup(format);\n char *p;\n char buff[256];\n char *str;\n char *to;\n size_t length;\n size_t size;\n size_t esize;\n char* s_arg;\n float f_arg;\n int i_arg;\n int i;\n\n va_start(arg, format);\n\n p = (char*)strtok(parse, \"%\");\n sql = (char*)strdup(p);\n size = strlen(sql);\n\n while (p)\n {\n\t if ((p = (char*)strtok(NULL, \"%\")))\n\t {\n\t switch (*p) \n\t {\n\t\t /* integer */\n\t case 'i':\n\t\t i_arg = va_arg(arg, int);\n\t\t sprintf(buff, \"%i\", i_arg);\n\t\t size += strlen(buff);\n\t\t sql = (char*)realloc(sql, size + 1);\n\t\t strcat(sql, buff);\n\t\t break;\n\n\t\t /* float */\n\t case 'f':\n\t\t f_arg = va_arg(arg, float);\n\t\t sprintf(buff, \"%f\", f_arg);\n\t\t size += strlen(buff);\n\t\t sql = (char*)realloc(sql, size + 1);\n\t\t strcat(sql, buff);\n\t\t break;\n\n\t\t /* string */\n\t case 's':\n\t\t s_arg = va_arg(arg, char*);\n\t\t puts(s_arg);\n\t\t size += strlen(s_arg);\n\t\t sql = (char*)realloc(sql, size + 1);\n\t\t strcat(sql, s_arg);\n\t\t break;\n\n\t\t /* escape string */\n\t case 
'e':\n\t\t s_arg = va_arg(arg, char*);\n\t\t to = (char*)malloc((2 * strlen(s_arg)) + 1);\n\t\t PQescapeString(to, s_arg, strlen(s_arg));\n\t\t size += strlen(to);\n\t\t sql = (char*)realloc(sql, size + 1);\n\t\t strcat(sql, to);\n\t\t free(to);\n\t\t break;\n\n\t\t /* escape bytea */\n\t case 'b':\n\t\t s_arg = va_arg(arg, char*);\n\t\t length = va_arg(arg, int);\n\t\t str = PQescapeBytea(s_arg, length, &esize);\n\t\t size += esize;\n\t\t sql = (char*)realloc(sql, size + 1);\n\t\t strcat(sql, str);\n\t\t free(str);\n\t\t break;\n\t }\n\n\t size += strlen(++p);\n\t sql = (char*)realloc(sql, size + 1);\n\t strcat(sql, p);\n\t }\n }\n\n va_end(arg);\n\n free(parse);\n\n return sql;\n}\n\n\n\n\n",
"msg_date": "Fri, 1 Mar 2002 08:45:22 -0500 (EST)",
"msg_from": "Adam Siegel <adam@sycamorehq.com>",
"msg_from_op": true,
"msg_subject": "PQprintf"
},
{
"msg_contents": "Adam Siegel <adam@sycamorehq.com> writes:\n> I have developed a function to help me with escaping strings more easily. \n> It kind of behaves like printf and is very crude. Before I do anymore\n> work, I was hoping to get some comments or notice if someone has already\n> done this.\n\nSeems like the start of a good idea, though I agree it's crude yet.\nOne thing you definitely need is more control over %f (precision\narguments).\n\nOne suggestion: use libpq's \"pqexpbuffer.h\" routines to manipulate the\nexpansible string buffer, instead of reinventing that wheel yet again.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Mar 2002 10:25:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PQprintf "
}
] |
[
{
"msg_contents": "> Go to: http://www.ca.postgresql.org/bugs/admin/managebugs.php and\n> you should be able to select and remove/update/whatever it.\n\nWarning: Unable to connect to PostgreSQL server:\n No pg_hba.conf entry for host 64.49.215.8,\n user vev, database postgresql in\n /usr/local/www/www/html/bugs/admin/opendb.inc on line 3\nUnable to access database\n",
"msg_date": "Fri, 01 Mar 2002 16:06:46 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Re: Bug #605: timestamp(timestamp('a timestamp)) no longer works"
}
] |
[
{
"msg_contents": "\n> Actually, an interesting idea would be to leave NOTICE alone and make\n> the more serious messages WARNING. The problem with that is I think\n> INFO is clearer as something for client/user, and LOG something for the\n> logs. I don't think NOTICE has the same conotation. I just thought I\n> would mention that possibility.\n> \n> So, with WARNING, NOTICE would go away and become INFO or WARNING, and\n> DEBUG goes away to become DEBUG1-5. With DEBUG gone, our need to add\n> PG_* to the beginning of the elog symbols may not be necessary.\n\nNow I am verwirrt (== brain all knots) :-)\n\nMy take was to have WARNING and NOTICE, yours is WARNING and INFO ?\nFor me INFO is also better to understand than NOTICE.\nNot sure that alone is worth the change though, since lots of \nclients will currently parse \"NOTICE\".\n\nI also like LOG, since I don't like the current NOTICES in the log.\nImho INFO and WARNING would be nothing for the log per default.\nLOG would be things that are only of concern to the DBA.\nMy preferred client level would prbbly be WARNING (no INFO).\n\nAndreas\n",
"msg_date": "Fri, 1 Mar 2002 18:01:12 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Zeugswetter Andreas SB SD wrote:\n> \n> > Actually, an interesting idea would be to leave NOTICE alone and make\n> > the more serious messages WARNING. The problem with that is I think\n> > INFO is clearer as something for client/user, and LOG something for the\n> > logs. I don't think NOTICE has the same conotation. I just thought I\n> > would mention that possibility.\n> > \n> > So, with WARNING, NOTICE would go away and become INFO or WARNING, and\n> > DEBUG goes away to become DEBUG1-5. With DEBUG gone, our need to add\n> > PG_* to the beginning of the elog symbols may not be necessary.\n> \n> Now I am verwirrt (== brain all knots) :-)\n> \n> My take was to have WARNING and NOTICE, yours is WARNING and INFO ?\n> For me INFO is also better to understand than NOTICE.\n> Not sure that alone is worth the change though, since lots of \n> clients will currently parse \"NOTICE\".\n\nOK, now that the current elog() patch seems to be OK with everyone, we\ncan discuss if we want to change the remaining non-INFO NOTICE messages\nto WARNING. Seems to more closely match the SQL standard. All messages\nwill continue using the 'N' protocol type so this shouldn't be an issue.\nI don't know any clients that parse the NOTICE: tag, but they are going\nto have to change things for INFO: anyway so we might as well make the\nchange during 7.3 too.\n\nComments?\n\n> I also like LOG, since I don't like the current NOTICES in the log.\n\nGood, that was one of my goals.\n\n> Imho INFO and WARNING would be nothing for the log per default.\n> LOG would be things that are only of concern to the DBA.\n> My preferred client level would prbbly be WARNING (no INFO).\n\nWell, that is interesting. 
Currently we would send WARNING/NOTICE to\nthe logs because it is an exceptional condition, though not as serious\nas error.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 1 Mar 2002 13:15:33 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "...\n> My take was to have WARNING and NOTICE, yours is WARNING and INFO ?\n> For me INFO is also better to understand than NOTICE.\n> Not sure that alone is worth the change though, since lots of\n> clients will currently parse \"NOTICE\".\n\nfwiw, I find the connotations of these terms to be, in increasing order\nof severity:\n\n INFO, NOTICE, WARNING\n\nthough the distinction between INFO and NOTICE is not so great that one\nabsolutely could not replace the other.\n\n - Thomas\n",
"msg_date": "Fri, 01 Mar 2002 11:53:53 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> ...\n> > My take was to have WARNING and NOTICE, yours is WARNING and INFO ?\n> > For me INFO is also better to understand than NOTICE.\n> > Not sure that alone is worth the change though, since lots of\n> > clients will currently parse \"NOTICE\".\n> \n> fwiw, I find the connotations of these terms to be, in increasing order\n> of severity:\n> \n> INFO, NOTICE, WARNING\n> \n> though the distinction between INFO and NOTICE is not so great that one\n> absolutely could not replace the other.\n\nYes, that was my thought too. I am not sure we have any need for NOTICE\nwhen we have INFO and WARNING. The NOTICE sort of kept both meanings,\nand we don't need that anymore.\n\nOn a humorous note, when we got the code from Berkeley, WARN was the\noriginal tag to error out a query.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 1 Mar 2002 15:37:00 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch"
}
]
[
{
"msg_contents": "Peter writes:\n> > SQL92 has WARNING, would that be a suitable addition to NOTICE ?\n> > INFO would not be added since it is like old NOTICE which would stay.\n> > So, instead of introducing a lighter level we would introduce a\n> > stronger level. (WARNING more important than NOTICE)\n> > If we change, we might as well adopt some more SQL'ism.\n> \n> At the client side SQL knows two levels, namely a \"completion condition\"\n> and an \"exception condition\". In the PostgreSQL client protocol, these\n> are distinguished as N and E message packets. The tags of the messages\n> are irrelevant, they just serve as a guide to the user reading the\n> message.\n\nI am referring to \"completion condition\" messages according to SQLSTATE:\n\n00xxx:\tSuccess\n01xxx:\tSuccess with Warning\n02xxx:\tSuccess but no rows found\n03 and > :\tFailure\n\nI see that there is no notion of INFO, thus I agree that INFO should not be\nsomething normally sent to the user. INFO could be the first DEBUG Level,\nor completely skipped.\n\nI think that LOG would be more worth the trouble than INFO.\n\nAndreas\n",
"msg_date": "Fri, 1 Mar 2002 18:22:14 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Zeugswetter Andreas SB SD writes:\n\n> I am referring to \"completion condition\" messages according to SQLSTATE:\n>\n> 00xxx:\tSuccess\n\nThis is an INFO (or no message at all). The idea was that things like the\nautomatic index creation for a PK would be INFO, and you could easily turn\noff INFO somehow.\n\n> 01xxx:\tSuccess with Warning\n\nThis is a NOTICE.\n\n> 02xxx:\tSuccess but no rows found\n\nThis is nothing special.\n\n> 03 and > :\tFailure\n\nThis is ERROR or above.\n\n> I see that there is no notion of INFO, thus I agree that INFO should not be\n> something normally sent to the user. INFO could be the first DEBUG Level,\n> or completely skipped.\n\nIt's sort of the \"tip\" level. A lot of people don't like to see them, so\nit's reasonable to separate them from NOTICE. You could think of them as\nfirst debug level, if you like.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 1 Mar 2002 13:06:03 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch"
}
]
[
{
"msg_contents": "\n> > My take was to have WARNING and NOTICE, yours is WARNING and INFO ?\n> > For me INFO is also better to understand than NOTICE.\n> > Not sure that alone is worth the change though, since lots of \n> > clients will currently parse \"NOTICE\".\n> \n> OK, now that the current elog() patch seems to be OK with everyone, we\n> can discuss if we want to change the remaining non-INFO NOTICE messages\n> to WARNING. Seems to more closely match the SQL standard. All messages\n> will continue using the 'N' protocol type so this shouldn't be an issue.\n\nYes, I think that would be good.\n\n> I don't know any clients that parse the NOTICE: tag, but they are going\n> to have to change things for INFO: anyway so we might as well make the\n> change during 7.3 too.\n\nGood point.\n\n> > I also like LOG, since I don't like the current NOTICES in the log.\n> \n> Good, that was one of my goals.\n> \n> > Imho INFO and WARNING would be nothing for the log per default.\n> > LOG would be things that are only of concern to the DBA.\n> > My preferred client level would probably be WARNING (no INFO).\n> \n> Well, that is interesting. Currently we would send WARNING/NOTICE to\n> the logs because it is an exceptional condition, though not as serious\n> as error.\n\nWell, string truncation is imho not for the log, might interest the app\nprogrammer but probably not the dba ? And if your point was to get rid \nof the notices in the log (as is mine) you would have to not log Warning, \nno ?\n\nAndreas\n",
"msg_date": "Fri, 1 Mar 2002 19:37:33 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "> > > I also like LOG, since I don't like the current NOTICES in the log.\n> > \n> > Good, that was one of my goals.\n> > \n> > > Imho INFO and WARNING would be nothing for the log per default.\n> > > LOG would be things that are only of concern to the DBA.\n> > > My preferred client level would probably be WARNING (no INFO).\n> > \n> > Well, that is interesting. Currently we would send WARNING/NOTICE to\n> > the logs because it is an exceptional condition, though not as serious\n> > as error.\n> \n> Well, string truncation is imho not for the log, might interest the app\n> programmer but probably not the dba ? And if your point was to get rid \n> of the notices in the log (as is mine) you would have to not log Warning, \n> no ?\n\nOK, now things could get complicated. If we don't want to see\nNOTICE/WARNING in the log, you could say you don't want to see ERROR in\nthe log, particularly syntax errors. Where do we go from here? You\ncould set your server_min_messages to FATAL, but then you don't see LOG\nmessages about server startup.\n\nI have considered allowing individual message types to be specified,\nparticularly for the server logs, but the user API for that,\nparticularly as SET from psql, becomes very complicated.\n\nMaybe I have the ordering wrong for server_min_messages. Perhaps it\nshould be:\n\n\tDEBUG5-1, INFO, NOTICE/WARNING, ERROR, LOG, FATAL, PANIC\n\nNothing prevents us from doing that. Well, anyway, not sure how much I\nlike it but I throw it out as an idea.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 1 Mar 2002 13:50:03 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Zeugswetter Andreas SB SD wrote:\n> \n> > > My take was to have WARNING and NOTICE, yours is WARNING and INFO ?\n> > > For me INFO is also better to understand than NOTICE.\n> > > Not sure that alone is worth the change though, since lots of \n> > > clients will currently parse \"NOTICE\".\n> > \n> > OK, now that the current elog() patch seems to be OK with everyone, we\n> > can discuss if we want to change the remaining non-INFO NOTICE messages\n> > to WARNING. Seems to more closely match the SQL standard. All messages\n> > will continue using the 'N' protocol type so this shouldn't be an issue.\n> \n> Yes, I think that would be good.\n\nOK, now that the elog() patch is in, we can discuss NOTICE. I know\nPeter wants to keep NOTICE to reduce the number of changes, but I\nalready have a few votes that the existing NOTICE messages should be\nchanged to a tag of WARNING. I have looked through the NOTICE elog\ncalls, and they all appear as warnings to me --- of course with the\ninformative messages now INFO, that is no surprise.\n\nI realize the change will be large, ~240 entries, but we are doing\nelog() changes right now, and I think this is a good time to do it. It\nis more informative, and closer to SQL standard. The priority of NOTICE\nwill not change, it will just be called WARNING in the code and as sent\nto the user. I did look at keeping both NOTICE and WARNING but didn't\nsee any value in both of them. NOTICE was sufficiently generic to\nhandle info and warning message, but now that we have INFO, NOTICE just\ndoesn't seem right and WARNING makes more sense.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 3 Mar 2002 18:03:41 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, now that the elog() patch is in, we can discuss NOTICE. I know\n> Peter wants to keep NOTICE to reduce the number of changes, but I\n> already have a few votes that the existing NOTICE messages should be\n> changed to a tag of WARNING.\n\nIf you're taking a vote, I vote with Peter. I don't much care for the\nthought of EXPLAIN results coming out tagged WARNING ;-)\n\nIn any case, simple renamings like this ought to be carried out as part\nof the prefix-tagging of elog names that we intend to do late in 7.3,\nno? I see no value in having two rounds of widespread changes instead\nof just one.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 03 Mar 2002 21:02:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch "
},
{
"msg_contents": "EXPLAIN would come out as INFO would it not?\n\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nCc: \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>;\n<pgsql-hackers@postgresql.org>\nSent: Sunday, March 03, 2002 9:02 PM\nSubject: Re: [HACKERS] elog() patch\n\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, now that the elog() patch is in, we can discuss NOTICE. I\nknow\n> > Peter wants to keep NOTICE to reduce the number of changes, but I\n> > already have a few votes that the existing NOTICE messages should\nbe\n> > changed to a tag of WARNING.\n>\n> If you're taking a vote, I vote with Peter. I don't much care for\nthe\n> thought of EXPLAIN results coming out tagged WARNING ;-)\n>\n> In any case, simple renamings like this ought to be carried out as\npart\n> of the prefix-tagging of elog names that we intend to do late in\n7.3,\n> no? I see no value in having two rounds of widespread changes\ninstead\n> of just one.\n>\n> regards, tom lane\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to\nmajordomo@postgresql.org\n>\n\n",
"msg_date": "Sun, 3 Mar 2002 21:18:09 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch "
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> EXPLAIN would come out as INFO would it not?\n\nIf we make it INFO it won't come out at all, at the message level that\na lot of more-advanced users will prefer to use. That's no solution.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 03 Mar 2002 21:23:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, now that the elog() patch is in, we can discuss NOTICE. I know\n> > Peter wants to keep NOTICE to reduce the number of changes, but I\n> > already have a few votes that the existing NOTICE messages should be\n> > changed to a tag of WARNING.\n> \n> If you're taking a vote, I vote with Peter. I don't much care for the\n> thought of EXPLAIN results coming out tagged WARNING ;-)\n\nEXPLAIN now comes out as INFO.\n\n> In any case, simple renamings like this ought to be carried out as part\n> of the prefix-tagging of elog names that we intend to do late in 7.3,\n> no? I see no value in having two rounds of widespread changes instead\n> of just one.\n\nAgreed. So you think WARNING makes sense, but let's do it at the right\ntime. Perhaps that is what Peter was saying anyway.\n\nHowever, with DEBUG symbol gone, or at least gone after 7.3, I don't see\na big need to add PG_ to the beginning of every elog() symbol. Can I\nget some votes on that? DEBUG was our big culprit of conflict with\nother interfaces, specifically Perl. With that split into DEBUG1-5, do\nwe need to prefix the remaining symbols?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 3 Mar 2002 21:49:07 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Tom Lane wrote:\n> \"Rod Taylor\" <rbt@zort.ca> writes:\n> > EXPLAIN would come out as INFO would it not?\n> \n> If we make it INFO it won't come out at all, at the message level that\n> a lot of more-advanced users will prefer to use. That's no solution.\n\nWell, right now it is INFO. It is certainly not a notice/warning or\nerror. Seems we will have to address this. Let me look at the existing\nINFO message and see if there are any others that _have_ to be emitted. \nI will add a new symbol INFOFORCE which will always be sent to the\nclient no matter what the client_min_messages level. Seems this is the\nonly good way to fix this. Even if we make that NOTICE/WARNING, people\nmay turn that off too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 3 Mar 2002 21:52:52 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I will add a new symbol INFOFORCE which will always be sent to the\n> client no matter what the client_min_messages level.\n\nI was thinking along the same lines, but I hate that name.\nINFOALWAYS maybe?\n\nAlso, should it be different from INFO as far as the server log\ngoes? Not sure.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 03 Mar 2002 22:22:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I will add a new symbol INFOFORCE which will always be sent to the\n> > client no matter what the client_min_messages level.\n> \n> I was thinking along the same lines, but I hate that name.\n> INFOALWAYS maybe?\n\nLove it. :-) Sounds like a song. Info-always on my mind, ...\n\n> Also, should it be different from INFO as far as the server log\n> goes? Not sure.\n\nNo, I think we add too much confusion doing that. It should behave like\ninfo to the server or we have to add an additional server_min_messages\nlevel. My feeling is that they would want INFOALWAYS in server logs in\nthe same cases they would want INFO.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 3 Mar 2002 22:24:24 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> However, with DEBUG symbol gone, or at least gone after 7.3, I don't see\n> a big need to add PG_ to the beginning of every elog() symbol. Can I\n> get some votes on that? DEBUG was our big culprit of conflict with\n> other interfaces, specifically Perl.\n\nWrong: ERROR is a problem too. And if you think that INFO or WARNING\nor PANIC will not create new problems, you are a hopeless optimist.\n\nWe are already forcing renamings of a lot of elog symbols (eg, NOTICE\nand DEBUG), so we may as well go the extra step and actually solve the\nproblem, rather than continue to ignore it.\n\nBTW, I'd go with PGERROR etc, not PG_ERROR, just to save typing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 03 Mar 2002 22:26:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > However, with DEBUG symbol gone, or at least gone after 7.3, I don't see\n> > a big need to add PG_ to the beginning of every elog() symbol. Can I\n> > get some votes on that? DEBUG was our big culprit of conflict with\n> > other interfaces, specifically Perl.\n> \n> Wrong: ERROR is a problem too. And if you think that INFO or WARNING\n> or PANIC will not create new problems, you are a hopeless optimist.\n> \n> We are already forcing renamings of a lot of elog symbols (eg, NOTICE\n> and DEBUG), so we may as well go the extra step and actually solve the\n> problem, rather than continue to ignore it.\n> \n> BTW, I'd go with PGERROR etc, not PG_ERROR, just to save typing.\n\nOK, unless someone else responds, we will go with PG* and\nNOTICE->WARNING near 7.3 beta start when tree is quiet. We will keep\nold symbols around for 7.3 release.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 3 Mar 2002 22:28:55 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I will add a new symbol INFOFORCE which will always be sent to the\n> > client no matter what the client_min_messages level.\n> \n> I was thinking along the same lines, but I hate that name.\n> INFOALWAYS maybe?\n> \n> Also, should it be different from INFO as far as the server log\n> goes? Not sure.\n\nIn going over the existing INFO messages, I now see that there are\nseveral places that must send messages to the client no matter what\nclient_min_messages is set to. The areas are EXPLAIN, VACUUM, ANALYZE,\nSHOW, and various \"unsupported\" messages.\n\nSo, I am now thinking that INFOALWAYS is not the proper way to handle\nthese cases. While I saw no value in splitting the current NOTICE\nmessages into WARNING and NOTICE (they all seem pretty much the same),\nI now see a value in splitting INFO into INFO for \"always to client\" and\nNOTICE which will be things like automatic sequence creation.\n\nSo, based on current CVS, NOTICE -> WARNING, and some INFO will be\nchanged to NOTICE and remaining INFO will always be sent to the client.\n\nIf I don't create a new tag, then people who set the client_min_messages\nto ERROR will be confused to see INFO messages in some cases and not\nothers. In the final code, client_min_messages will not have an INFO\nlevel option because INFO will always go to the client. \nserver_min_messages will have an INFO option.\n\nComments?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 4 Mar 2002 14:32:25 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "OK, I talked to Tom on the phone and here is what I would like to do:\n\no Change all current CVS messages of NOTICE to WARNING. We were going\nto do this just before 7.3 beta but it has to be done now, as you will\nsee below.\n\no Change current INFO messages that should be controlled by\nclient_min_messages to NOTICE.\n\no Force remaining INFO messages, like from EXPLAIN, VACUUM VERBOSE, etc.\nto always go to the client.\n\no Remove INFO from the client_min_messages options and add NOTICE.\n\nSeems we do need three non-ERROR elog levels to handle the various\nbehaviors we need for these messages.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 4 Mar 2002 15:46:56 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "\nChanges committed. Regression passes. We now have three elog levels to\nclient, INFO for always to client, NOTICE for tips, and WARNING for\nnon-error problems. See elog.h for description.\n\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> OK, I talked to Tom on the phone and here is what I would like to do:\n> \n> o Change all current CVS messages of NOTICE to WARNING. We were going\n> to do this just before 7.3 beta but it has to be done now, as you will\n> see below.\n> \n> o Change current INFO messages that should be controlled by\n> client_min_messages to NOTICE.\n> \n> o Force remaining INFO messages, like from EXPLAIN, VACUUM VERBOSE, etc.\n> to always go to the client.\n> \n> o Remove INFO from the client_min_messages options and add NOTICE.\n> \n> Seems we do need three non-ERROR elog levels to handle the various\n> behaviors we need for these messages.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 6 Mar 2002 01:11:20 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch"
}
]
[
{
"msg_contents": "\nPeter writes:\n> > I am referring to \"completion condition\" messages according \n> to SQLSTATE:\n> >\n> > 00xxx:\tSuccess\n> \n> This is an INFO (or no message at all). The idea was that things like the\n> automatic index creation for a PK would be INFO, and you could easily turn\n> off INFO somehow.\n> \n> > 01xxx:\tSuccess with Warning\n> \n> This is a NOTICE.\n> \n> > 02xxx:\tSuccess but no rows found\n> \n> This is nothing special.\n> \n> > 03 and > :\tFailure\n> \n> This is is ERROR or above.\n> \n> > I see that there is no notion of INFO, thus I agree that INFO should not be\n> > something normally sent to the user. INFO could be the first DEBUG Level,\n> > or completely skipped.\n> \n> It's sort of the \"tip\" level. A lot of people don't like to see them, so\n> it's reasonable to separate them from NOTICE. You could think of them as\n> first debug level, if you like.\n\nAll agreed, matches what I was trying to say. Only I like the keyword\nWARNING more than NOTICE.\n\nAndreas\n",
"msg_date": "Fri, 1 Mar 2002 19:39:31 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: elog() patch"
}
]
[
{
"msg_contents": "\n> Maybe I have the ordering wrong for server_min_messages. Perhaps it\n> should be:\n> \n> \tDEBUG5-1, INFO, NOTICE/WARNING, ERROR, LOG, FATAL, PANIC\n> \n> Nothing prevents us from doing that. Well, anyway, not sure how much I\n> like it but I throw it out as an idea.\n\nAh, yes, that sounds optimal to me. \n\nAndreas\n",
"msg_date": "Fri, 1 Mar 2002 20:05:19 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Zeugswetter Andreas SB SD wrote:\n> \n> > Maybe I have the ordering wrong for server_min_messages. Perhaps it\n> > should be:\n> > \n> > \tDEBUG5-1, INFO, NOTICE/WARNING, ERROR, LOG, FATAL, PANIC\n> > \n> > Nothing prevents us from doing that. Well, anyway, not sure how much I\n> > like it but I throw it out as an idea.\n> \n> Ah, yes, that sounds optimal to me. \n\nWhat do others think of this. I can easily do it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 1 Mar 2002 15:42:04 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Zeugswetter Andreas SB SD wrote:\n> >\n> > > Maybe I have the ordering wrong for server_min_messages. Perhaps it\n> > > should be:\n> > >\n> > > \tDEBUG5-1, INFO, NOTICE/WARNING, ERROR, LOG, FATAL, PANIC\n> > >\n> > > Nothing prevents us from doing that. Well, anyway, not sure how much I\n> > > like it but I throw it out as an idea.\n> >\n> > Ah, yes, that sounds optimal to me.\n>\n> What do others think of this. I can easily do it.\n\nI'd rather keep NOTICE instead of WARNING. It's just to keep things\nlooking familiar a bit.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 1 Mar 2002 22:21:22 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: elog() patch"
}
]
[
{
"msg_contents": "[snip]\n>>----------------------------------------------------------------------\n---------------\nThought for the day:\nCaching this stuff could very easily exhaust the entire server disk\nspace in a jiffy.\nAny lists stashed in memory will take away from database performance for\nnormal sorts\nof queries. Any lists stashed on disk will have to be read again.\nWhere is the big\nsavings?\n<<----------------------------------------------------------------------\n---------------\nSure. It would \n\na) make many queries faster\n\nb) make client libs (ODBC/JDBC/ECPG) faster and simpler by not forcing\nthem to fake it.\n\nBut there is also a big class of applications that would benefit much\nmore from caching exact queries.\n\nAnd it will make us as fast as MySQL for 100000 consecutive calls of\nSELECT MAX(N) FROM T ;)\n>>----------------------------------------------------------------------\n---------------\nTry as I might, I can't think of anything that could possibly be more\nuseless than that.\n;-)\n\nI agree that some web apps might benefit mightily with a careful plan\nfor this notion.\nMaybe a special function/procedure call is in order.\n\nIf someone wants to explore this stuff, I'm all for it. But I think it\nneeds careful\nthought. I suspect that:\n1. The work for retaining ACID properties of the database is ten times\nharder than\nanyone cares to imagine with changes of this nature.\n2. The benefit for most apps will be small, and will be negative if not\nimplemented\nproperly.\n3. The functionality can *already* be achieved by user applications\nsimply by holding\nopen your own cursors and reusing them.\n\nFor whoever wants to tackle this thing, I say, \"GO for it!\"\nBut go for it with both eyes open.\n\nI don't think it should be anyone's priority. But that is just because\nit isn't very\ninteresting to me. 
Maybe other people are a lot more keen on it.\n<<----------------------------------------------------------------------\n---------------\n",
"msg_date": "Fri, 1 Mar 2002 12:08:57 -0800",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: eWeek Poll: Which database is most critical to your"
}
]
[
{
"msg_contents": "-----Original Message-----\nFrom: Hannu Krosing [mailto:hannu@krosing.net]\nSent: Tuesday, February 26, 2002 9:54 PM\nTo: Oleg Bartunov\nCc: Pgsql Hackers\nSubject: Re: [HACKERS] single task postgresql\n\n\nOn Tue, 2002-02-26 at 17:52, Oleg Bartunov wrote:\n> Having frustrated with performance on Windows box I'm wondering if\nit's\n> possible to get postgresql optimized for working without shared\nmemory,\n> say in single-task mode. It looks like it's shared memory emulation on\ndisk\n> (by cygipc daemon) is responsible for performance degradation.\n> In our project we have to use Windows for desktop application and it's\n> single task, so we don't need shared memory. In principle, it's\npossible\n> to hack cygipc, so it wouldn't emulate shared memory and address calls\n> to normal memory, but I'm wondering if it's possible from postgres\nside.\n\nYou might also be interested in hacking the threaded verion to run on\nnative windows without cygwin.\n\nOn Sun, 2002-01-27 at 02:19, mkscott@sacadia.com wrote:\n> For anyone interested, I have posted a new version of multi-threaded\n> Postgres 7.0.2 here:\n> \n> http://sourceforge.net/projects/mtpgsql\n> \n>>\nI did a modification of a port of POSIX threads to NT so that I can run\nUNIX threaded chess programs on NT. I don't know if it will work with\nthe threaded PostgreSQL server, but you are welcome to try.\nHere is the stuff I did, you are free to use it however you like:\nftp://cap.connx.com/pub/amy/pthreads.ZIP\n<<\n",
"msg_date": "Fri, 1 Mar 2002 12:20:23 -0800",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: single task postgresql"
}
]
[
{
"msg_contents": "In datatype.sgml:\n\n The type numeric can store numbers of practically\n unlimited size and precision,...\n\nI think this is simply wrong since the current implementation of\nnumeric and decimal data types limit the precision up to 1000.\n\n#define NUMERIC_MAX_PRECISION\t\t1000\n\nComments?\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 02 Mar 2002 23:14:23 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "numeric/decimal docs bug?"
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> In datatype.sgml:\n> The type numeric can store numbers of practically\n> unlimited size and precision,...\n\n> I think this is simply wrong since the current implementation of\n> numeric and decimal data types limit the precision up to 1000.\n\n> #define NUMERIC_MAX_PRECISION\t\t1000\n\nI was thinking just the other day that there's no reason for that\nlimit to be so low. Jan, couldn't we bump it up to 8 or 16K or so?\n\n(Not that I'd care to do heavy arithmetic on such numbers, or that\nI believe there's any practical use for them ... but why set the\nlimit lower than we must?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 02 Mar 2002 12:23:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug? "
},
{
"msg_contents": "Are there other cases where the pgsql docs may say unlimited where it might \nnot be?\n\nI remember when the FAQ stated unlimited columns per table (it's been \ncorrected now so that's good).\n\nNot asking for every limit to be documented but while documentation is \nwritten if one does not yet know (or remember) the actual (or even \nrough/estimated) limit it's better to skip it for later than to falsely say \n\"unlimited\". Better to have no signal than noise in this case.\n\nRegards,\nLink.\n\nAt 11:14 PM 02-03-2002 +0900, Tatsuo Ishii wrote:\n>In datatype.sgml:\n>\n> The type numeric can store numbers of practically\n> unlimited size and precision,...\n>\n>I think this is simply wrong since the current implementation of\n>numeric and decimal data types limit the precision up to 1000.\n>\n>#define NUMERIC_MAX_PRECISION 1000\n>\n>Comments?\n\n\n",
"msg_date": "Tue, 05 Mar 2002 11:39:41 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug?"
},
{
"msg_contents": "Tom Lane writes:\n\n> > #define NUMERIC_MAX_PRECISION\t\t1000\n>\n> I was thinking just the other day that there's no reason for that\n> limit to be so low. Jan, couldn't we bump it up to 8 or 16K or so?\n\nWhy have an arbitrary limit at all? Set it to INT_MAX, or whatever the\nindex variables have for a type.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 10 Mar 2002 20:10:17 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug? "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n> #define NUMERIC_MAX_PRECISION\t\t1000\n>> \n>> I was thinking just the other day that there's no reason for that\n>> limit to be so low. Jan, couldn't we bump it up to 8 or 16K or so?\n\n> Why have an arbitrary limit at all? Set it to INT_MAX,\n\nThe hard limit is certainly no more than 64K, since we store these\nnumbers in half of an atttypmod. In practice I suspect the limit may\nbe less; Jan would be more likely to remember...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 10 Mar 2002 20:12:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug? "
},
{
"msg_contents": "Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Tom Lane writes:\n> > #define NUMERIC_MAX_PRECISION 1000\n> >>\n> >> I was thinking just the other day that there's no reason for that\n> >> limit to be so low. Jan, couldn't we bump it up to 8 or 16K or so?\n>\n> > Why have an arbitrary limit at all? Set it to INT_MAX,\n>\n> The hard limit is certainly no more than 64K, since we store these\n> numbers in half of an atttypmod. In practice I suspect the limit may\n> be less; Jan would be more likely to remember...\n\n It is arbitrary of course. I don't recall completely, have to\n dig into the code, but there might be some side effect when\n mucking with it.\n\n The NUMERIC code increases the actual internal precision when\n doing multiply and divide, what happens a gazillion times\n when doing higher functions like trigonometry. I think there\n was some connection between the max precision and how high\n this internal precision can grow, so increasing the precision\n might affect the computational performance of such higher\n functions significantly.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Mon, 11 Mar 2002 16:04:44 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug?"
},
{
"msg_contents": "Jan Wieck wrote:\n> > The hard limit is certainly no more than 64K, since we store these\n> > numbers in half of an atttypmod. In practice I suspect the limit may\n> > be less; Jan would be more likely to remember...\n> \n> It is arbitrary of course. I don't recall completely, have to\n> dig into the code, but there might be some side effect when\n> mucking with it.\n> \n> The NUMERIC code increases the actual internal precision when\n> doing multiply and divide, what happens a gazillion times\n> when doing higher functions like trigonometry. I think there\n> was some connection between the max precision and how high\n> this internal precision can grow, so increasing the precision\n> might affect the computational performance of such higher\n> functions significantly.\n\nOh, interesting, maybe we should just leave it alone.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Mar 2002 17:02:25 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Jan Wieck wrote:\n> > > The hard limit is certainly no more than 64K, since we store these\n> > > numbers in half of an atttypmod. In practice I suspect the limit may\n> > > be less; Jan would be more likely to remember...\n> >\n> > It is arbitrary of course. I don't recall completely, have to\n> > dig into the code, but there might be some side effect when\n> > mucking with it.\n> >\n> > The NUMERIC code increases the actual internal precision when\n> > doing multiply and divide, what happens a gazillion times\n> > when doing higher functions like trigonometry. I think there\n> > was some connection between the max precision and how high\n> > this internal precision can grow, so increasing the precision\n> > might affect the computational performance of such higher\n> > functions significantly.\n>\n> Oh, interesting, maybe we should just leave it alone.\n\n As said, I have to look at the code. I'm pretty sure that it\n currently will not use hundreds of digits internally if you\n use only a few digits in your schema. So changing it isn't\n that dangerous.\n\n But who's going to write and run a regression test, ensuring\n that the new high limit can really be supported. I didn't\n even run the numeric_big test lately, which tests with 500\n digits precision at least ... and therefore takes some time\n (yawn). Increasing the number of digits used you first have\n to have some other tool to generate the test data (I\n originally used bc(1) with some scripts). 
Based on that we\n still claim that our system deals correctly with up to 1,000\n digits precision.\n\n I don't like the idea of bumping up that number to some\n higher nonsense, claiming we support 32K digits precision on\n exact numeric, and noone ever tested if natural log really\n returns it's result in that precision instead of a 30,000\n digit precise approximation.\n\n I missed some of the discussion, because I considered the\n 1,000 digits already beeing complete nonsense and dropped the\n thread. So could someone please enlighten me what the real\n reason for increasing our precision is? AFAIR it had\n something to do with the docs. If it's just because the docs\n and the code aren't in sync, I'd vote for changing the docs.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Mon, 11 Mar 2002 17:32:22 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug?"
},
{
"msg_contents": "> Jan Wieck wrote:\n> > > The hard limit is certainly no more than 64K, since we store these\n> > > numbers in half of an atttypmod. In practice I suspect the limit may\n> > > be less; Jan would be more likely to remember...\n> > \n> > It is arbitrary of course. I don't recall completely, have to\n> > dig into the code, but there might be some side effect when\n> > mucking with it.\n> > \n> > The NUMERIC code increases the actual internal precision when\n> > doing multiply and divide, what happens a gazillion times\n> > when doing higher functions like trigonometry. I think there\n> > was some connection between the max precision and how high\n> > this internal precision can grow, so increasing the precision\n> > might affect the computational performance of such higher\n> > functions significantly.\n> \n> Oh, interesting, maybe we should just leave it alone.\n\nSo are we going to just fix the docs?\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 12 Mar 2002 11:01:17 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: numeric/decimal docs bug?"
},
{
"msg_contents": "Jan Wieck wrote:\n> Bruce Momjian wrote:\n> > Jan Wieck wrote:\n> > > > The hard limit is certainly no more than 64K, since we store these\n> > > > numbers in half of an atttypmod. In practice I suspect the limit may\n> > > > be less; Jan would be more likely to remember...\n> > >\n> > > It is arbitrary of course. I don't recall completely, have to\n> > > dig into the code, but there might be some side effect when\n> > > mucking with it.\n> > >\n> > > The NUMERIC code increases the actual internal precision when\n> > > doing multiply and divide, what happens a gazillion times\n> > > when doing higher functions like trigonometry. I think there\n> > > was some connection between the max precision and how high\n> > > this internal precision can grow, so increasing the precision\n> > > might affect the computational performance of such higher\n> > > functions significantly.\n> >\n> > Oh, interesting, maybe we should just leave it alone.\n> \n> As said, I have to look at the code. I'm pretty sure that it\n> currently will not use hundreds of digits internally if you\n> use only a few digits in your schema. So changing it isn't\n> that dangerous.\n> \n> But who's going to write and run a regression test, ensuring\n> that the new high limit can really be supported. I didn't\n> even run the numeric_big test lately, which tests with 500\n> digits precision at least ... and therefore takes some time\n> (yawn). Increasing the number of digits used you first have\n> to have some other tool to generate the test data (I\n> originally used bc(1) with some scripts). 
Based on that we\n> still claim that our system deals correctly with up to 1,000\n> digits precision.\n> \n> I don't like the idea of bumping up that number to some\n> higher nonsense, claiming we support 32K digits precision on\n> exact numeric, and noone ever tested if natural log really\n> returns it's result in that precision instead of a 30,000\n> digit precise approximation.\n> \n> I missed some of the discussion, because I considered the\n> 1,000 digits already beeing complete nonsense and dropped the\n> thread. So could someone please enlighten me what the real\n> reason for increasing our precision is? AFAIR it had\n> something to do with the docs. If it's just because the docs\n> and the code aren't in sync, I'd vote for changing the docs.\n\nI have done a little more research on this. If you create a numeric\nwith no precision:\n\n\tCREATE TABLE test (x numeric);\n\nYou can insert numerics that are greater in length that 1000 digits:\n\n\tINSERT INTO test values ('1111(continues 1010 times)');\n\nYou can even do computations on it:\n\n\tSELECT x+1 FROM test;\n\n1000 is pretty arbitrary. If we can handle 1000, I can't see how larger\nvalues somehow could fail.\n\nAlso, the numeric regression tests takes much longer than the other\ntests. I don't see why a test of that length is required, compared to\nthe other tests. Probably time to pair it back a little.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 11 Apr 2002 17:39:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug?"
},
{
"msg_contents": "...\n> Also, the numeric regression tests takes much longer than the other\n> tests. I don't see why a test of that length is required, compared to\n> the other tests. Probably time to pair it back a little.\n\nThe numeric types are inherently slow. You might look at what effect you\ncan achieve by restructuring that regression test to more closely\nresemble the other tests. In particular, it defines several source\ntables, each one of which containing similar initial values. And it\ndefines a results table, into which intermediate results are placed,\nwhich are then immediately queried for display and comparison to obtain\na test result. If handling the values is slow, we could certainly remove\nthese intermediate steps and still get most of the test coverage.\n\nOn another related topic:\n\nI've been wanting to ask: we have in a few cases moved aggregate\ncalculations from small, fast data types to using numeric as the\naccumulator. It would be nice imho to allow, say, an int8 accumulator\nfor an int4 data type, rather than requiring numeric.\n\nBut not all platforms (I assume) have an int8 data type. So we would\nneed to be able to fall back to numeric for those platforms which need\nto use it. What would it take to make some of the catalogs configurable\nor sensitive to configuration results?\n\n - Thomas\n",
"msg_date": "Fri, 12 Apr 2002 06:20:36 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Jan Wieck wrote:\n> >\n> > I missed some of the discussion, because I considered the\n> > 1,000 digits already beeing complete nonsense and dropped the\n> > thread. So could someone please enlighten me what the real\n> > reason for increasing our precision is? AFAIR it had\n> > something to do with the docs. If it's just because the docs\n> > and the code aren't in sync, I'd vote for changing the docs.\n>\n> I have done a little more research on this. If you create a numeric\n> with no precision:\n>\n> CREATE TABLE test (x numeric);\n>\n> You can insert numerics that are greater in length that 1000 digits:\n>\n> INSERT INTO test values ('1111(continues 1010 times)');\n>\n> You can even do computations on it:\n>\n> SELECT x+1 FROM test;\n>\n> 1000 is pretty arbitrary. If we can handle 1000, I can't see how larger\n> values somehow could fail.\n\n And I can't see what more than 1,000 digits would be good\n for. Bruce, your research is neat, but IMHO wasted time.\n\n Why do we need to change it now? Is the more important issue\n (doing the internal storage representation in base 10,000,\n done yet? If not, we can open up for unlimited precision at\n that time.\n\n Please, adjust the docs for now, drop the issue and let's do\n something useful.\n\n> Also, the numeric regression tests takes much longer than the other\n> tests. I don't see why a test of that length is required, compared to\n> the other tests. Probably time to pair it back a little.\n\n What exactly do you mean with \"pair it back\"? Shrinking the\n precision of the test or reducing it's coverage of\n functionality?\n\n For the former, it only uses 10 of the possible 1,000 digits\n after the decimal point. Run the numeric_big test (which\n uses 800) at least once and you'll see what kind of\n difference precision makes.\n\n And on functionality, it is absolutely insufficient for\n numerical functionality that has possible carry, rounding\n etc. 
issues, to check a function just for one single known\n value, and if it computes that result correctly, consider it\n OK for everything.\n\n I thought the actual test is sloppy already ... but it's\n still too much for you ... hmmmm.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n",
"msg_date": "Fri, 12 Apr 2002 09:46:59 -0400 (EDT)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug?"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> I've been wanting to ask: we have in a few cases moved aggregate\n> calculations from small, fast data types to using numeric as the\n> accumulator.\n\nWhich ones are you concerned about? As of 7.2, the only ones that use\nnumeric accumulators for non-numeric input types are\n\n aggname | basetype | aggtransfn | transtype\n----------+-------------+---------------------+-------------\n avg | int8 | int8_accum | _numeric\n sum | int8 | int8_sum | numeric\n stddev | int2 | int2_accum | _numeric\n stddev | int4 | int4_accum | _numeric\n stddev | int8 | int8_accum | _numeric\n variance | int2 | int2_accum | _numeric\n variance | int4 | int4_accum | _numeric\n variance | int8 | int8_accum | _numeric\n\nAll of these seem to have good precision/range arguments for using\nnumeric accumulators, or to be enough off the beaten track that it's\nnot worth much angst to optimize them.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 12 Apr 2002 10:32:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug? "
},
{
"msg_contents": "> Which ones are you concerned about? As of 7.2, the only ones that use\n> numeric accumulators for non-numeric input types are\n...\n\nOK, I did imply that I've been wanting to ask this for some time. I\nshould have asked during the 7.1 era, when this was true for more cases.\n:)\n\n> All of these seem to have good precision/range arguments for using\n> numeric accumulators, or to be enough off the beaten track that it's\n> not worth much angst to optimize them.\n\nWell, they *are* on the beaten track for someone, just not you! ;)\n\nI'd think that things like stddev might be OK with 52 bits of\naccumulation, so could be done with doubles. Were they implemented that\nway at one time? Do we have a need to provide precision greater than\nthat, or to guard against the (unlikely) case of having so many values\nthat a double-based accumulator overflows its ability to see the next\nvalue?\n\nI'll point out that for the case of accumulating so many integers that\nthey can't work with a double, the alternative implementation of using\nnumeric may approach infinite computation time.\n\nBut in any case, I can ask the same question, only reversed:\n\nWe now have some aggregate functions which use, say, int4 to accumulate\nint4 values, if the target platform does *not* support int8. What would\nit take to make the catalogs configurable or able to respond to\nconfiguration results so that, for example, platforms without int8\nsupport could instead use numeric or double values as a substitute?\n\n - Thomas\n",
"msg_date": "Fri, 12 Apr 2002 07:51:05 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug?"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> All of these seem to have good precision/range arguments for using\n>> numeric accumulators, or to be enough off the beaten track that it's\n>> not worth much angst to optimize them.\n\n> Well, they *are* on the beaten track for someone, just not you! ;)\n\n> I'd think that things like stddev might be OK with 52 bits of\n> accumulation, so could be done with doubles.\n\nISTM that people who are willing to have it done in a double can simply\nwrite stddev(x::float8). Of course you will rejoin that if they want\nit done in a numeric, they can write stddev(x::numeric) ... but since\nwe are talking about exact inputs, I would prefer that the default\nbehavior be to carry out the summation without loss of precision.\nThe stddev calculation *is* subject to problems if you don't do the\nsummation as accurately as you can.\n\n> Do we have a need to provide precision greater than\n> that, or to guard against the (unlikely) case of having so many values\n> that a double-based accumulator overflows its ability to see the next\n> value?\n\nYou don't see the cancellation problems inherent in N*sum(x^2) - sum(x)^2?\nYou're likely to be subtracting bignums even with not all that many\ninput values; they just have to be large input values.\n\n> But in any case, I can ask the same question, only reversed:\n\n> We now have some aggregate functions which use, say, int4 to accumulate\n> int4 values, if the target platform does *not* support int8. What would\n> it take to make the catalogs configurable or able to respond to\n> configuration results so that, for example, platforms without int8\n> support could instead use numeric or double values as a substitute?\n\nHaven't thought hard about it. 
I will say that I don't like the idea\nof changing the declared output type of the aggregates across platforms.\nChanging the internal implementation (ie, transtype) would be acceptable\n--- but I doubt it's worth the trouble. In most other arguments that\ntouch on this point, I seem to be one of the few holdouts for insisting\nthat we worry about int8-less platforms anymore at all ;-). For those\nfew old platforms, the 7.2 behavior of avg(int) and sum(int) is no worse\nthan it was for everyone in all pre-7.1 versions; I am not excited about\nexpending significant effort to make it better.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 12 Apr 2002 12:41:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug? "
},
{
"msg_contents": "Jan Wieck wrote:\n> Bruce Momjian wrote:\n> > Jan Wieck wrote:\n> > >\n> > > I missed some of the discussion, because I considered the\n> > > 1,000 digits already beeing complete nonsense and dropped the\n> > > thread. So could someone please enlighten me what the real\n> > > reason for increasing our precision is? AFAIR it had\n> > > something to do with the docs. If it's just because the docs\n> > > and the code aren't in sync, I'd vote for changing the docs.\n> >\n> > I have done a little more research on this. If you create a numeric\n> > with no precision:\n> >\n> > CREATE TABLE test (x numeric);\n> >\n> > You can insert numerics that are greater in length that 1000 digits:\n> >\n> > INSERT INTO test values ('1111(continues 1010 times)');\n> >\n> > You can even do computations on it:\n> >\n> > SELECT x+1 FROM test;\n> >\n> > 1000 is pretty arbitrary. If we can handle 1000, I can't see how larger\n> > values somehow could fail.\n> \n> And I can't see what more than 1,000 digits would be good\n> for. Bruce, your research is neat, but IMHO wasted time.\n> \n> Why do we need to change it now? Is the more important issue\n> (doing the internal storage representation in base 10,000,\n> done yet? If not, we can open up for unlimited precision at\n> that time.\n\nI certainly would like the 10,000 change done, but few of us are\ncapable of doing it. :-(\n\n> Please, adjust the docs for now, drop the issue and let's do\n> something useful.\n\nThats how I got started. The problem is that the limit isn't 1,000. \nLooking at NUMERIC_MAX_PRECISION, I see it used in gram.y to prevent\ncreation of NUMERIC columns that exceed the maximum length, and I see it\nused in numeric.c to prevent exponients that exceed the maximum length,\nbut I don't see other cases that would actually enforce the limit in\nINSERT and other cases.\n\nRemember how people complained when I said \"unlimited\" in the FAQ for\nsome items that actually had a limit. 
Well, in this case, we have a\nlimit that is only enforced in some places. I would like to see this\ncleared up on way or the other so the docs would be correct.\n\nJan, any chance on doing the 10,000 change in your spare time? ;-)\n\n\n> > Also, the numeric regression tests takes much longer than the other\n> > tests. I don't see why a test of that length is required, compared to\n> > the other tests. Probably time to pair it back a little.\n> \n> What exactly do you mean with \"pair it back\"? Shrinking the\n> precision of the test or reducing it's coverage of\n> functionality?\n> \n> For the former, it only uses 10 of the possible 1,000 digits\n> after the decimal point. Run the numeric_big test (which\n> uses 800) at least once and you'll see what kind of\n> difference precision makes.\n> \n> And on functionality, it is absolutely insufficient for\n> numerical functionality that has possible carry, rounding\n> etc. issues, to check a function just for one single known\n> value, and if it computes that result correctly, consider it\n> OK for everything.\n> \n> I thought the actual test is sloppy already ... but it's\n> still too much for you ... hmmmm.\n\nWell, our regression tests are not intended to test every possible\nNUMERIC combination, just a resonable subset. As it is now, I often\nthink the regression tests have hung because numeric takes so much\nlonger than any of the other tests. We have had this code in there for\na while now, and it is not OS-specific stuff, so I think we should just\npair it back so we know it is working. We already have bignumeric for a\nlarger test.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 12 Apr 2002 12:47:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Well, our regression tests are not intended to test every possible\n> NUMERIC combination, just a resonable subset. As it is now, I often\n> think the regression tests have hung because numeric takes so much\n> longer than any of the other tests. We have had this code in there for\n> a while now, and it is not OS-specific stuff, so I think we should just\n> pair it back so we know it is working. We already have bignumeric for a\n> larger test.\n\nBruce,\n\n have you even taken one single look at the test? It does 100\n of each add, sub, mul and div, these are the fast operations\n that don't really take much time.\n\n Then it does 10 of each sqrt(), ln(), log10(), pow10() and 10\n combined power(ln()). These are the time consuming\n operations, working iterative alas Newton, Taylor and\n McLaurin. All that is done with 10 digits after the decimal\n point only!\n\n So again, WHAT exactly do you mean with \"pair it back\"?\n Sorry, I don't get it. Do you want to remove the entire test?\n Reduce it to an INSERT, one SELECT (so that we know the\n input- and output functions work) and the four basic\n operators used once? Well, that's a hell of a test, makes me\n really feel comfortable. Like the mechanic kicking against\n the tire then saying \"I ain't see noth'n wrong with the\n brakes, ya sure can make a trip in the mountains\". Yeah, at\n least once!\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n",
"msg_date": "Fri, 12 Apr 2002 13:57:28 -0400 (EDT)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug?"
},
{
"msg_contents": "Jan Wieck wrote:\n> Bruce Momjian wrote:\n> > Well, our regression tests are not intended to test every possible\n> > NUMERIC combination, just a resonable subset. As it is now, I often\n> > think the regression tests have hung because numeric takes so much\n> > longer than any of the other tests. We have had this code in there for\n> > a while now, and it is not OS-specific stuff, so I think we should just\n> > pair it back so we know it is working. We already have bignumeric for a\n> > larger test.\n> \n> Bruce,\n> \n> have you even taken one single look at the test? It does 100\n> of each add, sub, mul and div, these are the fast operations\n> that don't really take much time.\n> \n> Then it does 10 of each sqrt(), ln(), log10(), pow10() and 10\n> combined power(ln()). These are the time consuming\n> operations, working iterative alas Newton, Taylor and\n> McLaurin. All that is done with 10 digits after the decimal\n> point only!\n> \n> So again, WHAT exactly do you mean with \"pair it back\"?\n> Sorry, I don't get it. Do you want to remove the entire test?\n> Reduce it to an INSERT, one SELECT (so that we know the\n> input- and output functions work) and the four basic\n> operators used once? Well, that's a hell of a test, makes me\n> really feel comfortable. Like the mechanic kicking against\n> the tire then saying \"I ain't see noth'n wrong with the\n> brakes, ya sure can make a trip in the mountains\". Yeah, at\n> least once!\n\nJan, regression is not a test of the level a developer would use to make\nsure his code works. It is merely to make sure the install works on a\nlimited number of cases. Having seen zero reports of any numeric\nfailures since we installed it, and seeing it takes >10x times longer\nthan the other tests, I think it should be paired back. Do we really\nneed 10 tests of each complex function? 
I think one would do the trick.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 12 Apr 2002 17:30:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Jan Wieck wrote:\n> > Bruce Momjian wrote:\n> > > Well, our regression tests are not intended to test every possible\n> > > NUMERIC combination, just a resonable subset. As it is now, I often\n> > > think the regression tests have hung because numeric takes so much\n> > > longer than any of the other tests. We have had this code in there for\n> > > a while now, and it is not OS-specific stuff, so I think we should just\n> > > pair it back so we know it is working. We already have bignumeric for a\n> > > larger test.\n> >\n> > Bruce,\n> >\n> > have you even taken one single look at the test? It does 100\n> > of each add, sub, mul and div, these are the fast operations\n> > that don't really take much time.\n> >\n> > Then it does 10 of each sqrt(), ln(), log10(), pow10() and 10\n> > combined power(ln()). These are the time consuming\n> > operations, working iterative alas Newton, Taylor and\n> > McLaurin. All that is done with 10 digits after the decimal\n> > point only!\n> >\n> > So again, WHAT exactly do you mean with \"pair it back\"?\n> > Sorry, I don't get it. Do you want to remove the entire test?\n> > Reduce it to an INSERT, one SELECT (so that we know the\n> > input- and output functions work) and the four basic\n> > operators used once? Well, that's a hell of a test, makes me\n> > really feel comfortable. Like the mechanic kicking against\n> > the tire then saying \"I ain't see noth'n wrong with the\n> > brakes, ya sure can make a trip in the mountains\". Yeah, at\n> > least once!\n>\n> Jan, regression is not a test of the level a developer would use to make\n> sure his code works. It is merely to make sure the install works on a\n> limited number of cases. Having seen zero reports of any numeric\n> failures since we installed it, and seeing it takes >10x times longer\n> than the other tests, I think it should be paired back. Do we really\n> need 10 tests of each complex function? 
I think one would do the trick.\n\n You forgot who wrote that code originally. I feel alot\n better WITH the tests in place :-)\n\n And if it's merely to make sure the install worked, man who\n is doing source installations these days and runs the\n regression tests anyway? Most people throw in a RPM or the\n like, only a few serious users install from sources, and only\n a fistfull of them then runs regression.\n\n Aren't it mostly developers and distro-maintainers who use\n that directory? I think your entire point isn't just weak,\n IMNSVHO you don't really have a point.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n",
"msg_date": "Fri, 12 Apr 2002 18:15:29 -0400 (EDT)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug?"
},
{
"msg_contents": "Jan Wieck wrote:\n> You forgot who wrote that code originally. I feel alot\n> better WITH the tests in place :-)\n> \n> And if it's merely to make sure the install worked, man who\n> is doing source installations these days and runs the\n> regression tests anyway? Most people throw in a RPM or the\n> like, only a few serious users install from sources, and only\n> a fistfull of them then runs regression.\n> \n> Aren't it mostly developers and distro-maintainers who use\n> that directory? I think your entire point isn't just weak,\n> IMNSVHO you don't really have a point.\n\nIt is my understanding that RPM does run that test. My main issue is\nwhy does numeric have to be so much larger than the other tests? I have\nnot heard that explained.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 12 Apr 2002 18:23:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug?"
},
{
"msg_contents": "...\n> Jan, regression is not a test of the level a developer would use to make\n> sure his code works. It is merely to make sure the install works on a\n> limited number of cases. Having seen zero reports of any numeric\n> failures since we installed it, and seeing it takes >10x times longer\n> than the other tests, I think it should be paired back. Do we really\n> need 10 tests of each complex function? I think one would do the trick.\n\nWhoops. We rely on the regression tests to make sure that previous\nbehaviors continue to be valid behaviors. Another use is to verify that\na particular installation can reproduce this same test. But regression\ntesting is a fundamental and essential development tool, precisely\nbecause it covers cases outside the range you might be thinking of\ntesting as you do development.\n\nAs a group, we might tend to underestimate the value of this, which\ncould be evidenced by the fact that our regression test suite has not\ngrown substantially more than it has over the years. It could have many\nmore tests within each module, and bug reports *could* be fed back into\nregression updates to make sure that failures do not reappear.\n\nAll imho of course ;)\n\n - Thomas\n",
"msg_date": "Fri, 12 Apr 2002 15:25:13 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug?"
},
{
"msg_contents": "...\n> It is my understanding that RPM does run that test. My main issue is\n> why does numeric have to be so much larger than the other tests? I have\n> not heard that explained.\n\nafaict it is not larger. It *does* take more time, but the number of\ntests is relatively small, or at least compatible with the number of\ntests which appear, or should appear, in other tests of data types\ncovering a large problem space (e.g. date/time).\n\nIt does illustrate that BCD-like encodings are expensive, and that\nmachine-supported math is usually a win. If it is a big deal, jump in\nand widen the internal math operations!\n\n - Thomas\n",
"msg_date": "Fri, 12 Apr 2002 15:33:42 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Jan Wieck wrote:\n> > You forgot who wrote that code originally. I feel alot\n> > better WITH the tests in place :-)\n> >\n> > And if it's merely to make sure the install worked, man who\n> > is doing source installations these days and runs the\n> > regression tests anyway? Most people throw in a RPM or the\n> > like, only a few serious users install from sources, and only\n> > a fistfull of them then runs regression.\n> >\n> > Aren't it mostly developers and distro-maintainers who use\n> > that directory? I think your entire point isn't just weak,\n> > IMNSVHO you don't really have a point.\n>\n> It is my understanding that RPM does run that test. My main issue is\n> why does numeric have to be so much larger than the other tests? I have\n> not heard that explained.\n\n Well, I heard Thomas commenting that it's horribly slow\n implemented (or so, don't recall his exact wording). But\n he's right.\n\n I think the same test done with float8 would run in less than\n a tenth of that time. This is only an explanation \"why it\n takes so long\"? It is no argument pro or con the test itself.\n\n I think I made my point clear enough, that I consider calling\n these functions just once is plain sloppy. But that's just\n my opinion. What do others think?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n",
"msg_date": "Fri, 12 Apr 2002 18:48:51 -0400 (EDT)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug?"
},
{
"msg_contents": "Jan Wieck <janwieck@yahoo.com> writes:\n> I think I made my point clear enough, that I consider calling\n> these functions just once is plain sloppy. But that's just\n> my opinion. What do others think?\n\nI don't have a problem with the current length of the numeric test.\nThe original form of it (now shoved over to bigtests) did seem\nexcessively slow to me ... but I can live with this one.\n\nI do agree that someone ought to reimplement numeric using base10k\narithmetic ... but it's not bugging me so much that I'm likely\nto get around to it anytime soon myself ...\n\nBruce, why is there no TODO item for that project?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 12 Apr 2002 19:04:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug? "
},
{
"msg_contents": "Thomas Lockhart wrote:\n> ...\n> > It is my understanding that RPM does run that test. My main issue is\n> > why does numeric have to be so much larger than the other tests? I have\n> > not heard that explained.\n> \n> afaict it is not larger. It *does* take more time, but the number of\n> tests is relatively small, or at least compatible with the number of\n> tests which appear, or should appear, in other tests of data types\n> covering a large problem space (e.g. date/time).\n> \n> It does illustrate that BCD-like encodings are expensive, and that\n> machine-supported math is usually a win. If it is a big deal, jump in\n> and widen the internal math operations!\n\nOK, as long as everyone else is fine with the tests, we can leave it\nalone. The concept that the number of tests is realisitic, and that\nthey are just slower than other data types, makes sense.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 12 Apr 2002 20:12:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug?"
},
{
"msg_contents": "Tom Lane wrote:\n> Jan Wieck <janwieck@yahoo.com> writes:\n> > I think I made my point clear enough, that I consider calling\n> > these functions just once is plain sloppy. But that's just\n> > my opinion. What do others think?\n> \n> I don't have a problem with the current length of the numeric test.\n> The original form of it (now shoved over to bigtests) did seem\n> excessively slow to me ... but I can live with this one.\n> \n> I do agree that someone ought to reimplement numeric using base10k\n> arithmetic ... but it's not bugging me so much that I'm likely\n> to get around to it anytime soon myself ...\n> \n> Bruce, why is there no TODO item for that project?\n\nNot sure. I was aware of it for a while. Added:\n\n\t* Change NUMERIC data type to use base 10,000 internally\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 12 Apr 2002 20:13:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug?"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> > Jan Wieck wrote:\n> > > > The hard limit is certainly no more than 64K, since we store these\n> > > > numbers in half of an atttypmod. In practice I suspect the limit may\n> > > > be less; Jan would be more likely to remember...\n> > > \n> > > It is arbitrary of course. I don't recall completely, have to\n> > > dig into the code, but there might be some side effect when\n> > > mucking with it.\n> > > \n> > > The NUMERIC code increases the actual internal precision when\n> > > doing multiply and divide, what happens a gazillion times\n> > > when doing higher functions like trigonometry. I think there\n> > > was some connection between the max precision and how high\n> > > this internal precision can grow, so increasing the precision\n> > > might affect the computational performance of such higher\n> > > functions significantly.\n> > \n> > Oh, interesting, maybe we should just leave it alone.\n> \n> So are we going to just fix the docs?\n\nOK, I have updated the docs. Patch attached.\n\nI have also added this to the TODO list:\n\n\t* Change NUMERIC to enforce the maximum precision, and increase it\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: datatype.sgml\n===================================================================\nRCS file: /cvsroot/pgsql/doc/src/sgml/datatype.sgml,v\nretrieving revision 1.87\ndiff -c -r1.87 datatype.sgml\n*** datatype.sgml\t3 Apr 2002 05:39:27 -0000\t1.87\n--- datatype.sgml\t13 Apr 2002 01:26:54 -0000\n***************\n*** 506,518 ****\n <title>Arbitrary Precision Numbers</title>\n \n <para>\n! The type <type>numeric</type> can store numbers of practically\n! unlimited size and precision, while being able to store all\n! numbers and carry out all calculations exactly. It is especially\n! 
recommended for storing monetary amounts and other quantities\n! where exactness is required. However, the <type>numeric</type>\n! type is very slow compared to the floating-point types described\n! in the next section.\n </para>\n \n <para>\n--- 506,517 ----\n <title>Arbitrary Precision Numbers</title>\n \n <para>\n! The type <type>numeric</type> can store numbers with up to 1,000\n! digits of precision and perform calculations exactly. It is\n! especially recommended for storing monetary amounts and other\n! quantities where exactness is required. However, the\n! <type>numeric</type> type is very slow compared to the\n! floating-point types described in the next section.\n </para>\n \n <para>",
"msg_date": "Fri, 12 Apr 2002 21:37:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug?"
},
{
"msg_contents": "Jan Wieck wrote:\n> > Oh, interesting, maybe we should just leave it alone.\n> \n> As said, I have to look at the code. I'm pretty sure that it\n> currently will not use hundreds of digits internally if you\n> use only a few digits in your schema. So changing it isn't\n> that dangerous.\n> \n> But who's going to write and run a regression test, ensuring\n> that the new high limit can really be supported. I didn't\n> even run the numeric_big test lately, which tests with 500\n> digits precision at least ... and therefore takes some time\n> (yawn). Increasing the number of digits used you first have\n> to have some other tool to generate the test data (I\n> originally used bc(1) with some scripts). Based on that we\n> still claim that our system deals correctly with up to 1,000\n> digits precision.\n> \n> I don't like the idea of bumping up that number to some\n> higher nonsense, claiming we support 32K digits precision on\n> exact numeric, and noone ever tested if natural log really\n> returns it's result in that precision instead of a 30,000\n> digit precise approximation.\n> \n> I missed some of the discussion, because I considered the\n> 1,000 digits already beeing complete nonsense and dropped the\n> thread. So could someone please enlighten me what the real\n> reason for increasing our precision is? AFAIR it had\n> something to do with the docs. If it's just because the docs\n> and the code aren't in sync, I'd vote for changing the docs.\n\nJan, if the numeric code works on 100 or 500 digits, could it break with\n10,000 digits. Is there a reason to believe longer digits could cause\nproblems not present in shorter tests?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 12 Apr 2002 21:39:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug?"
},
{
"msg_contents": "> Jan, regression is not a test of the level a developer would use to make\n> sure his code works. It is merely to make sure the install works on a\n> limited number of cases.\n\nNews to me! If anything, I don't think a lot of the current regression\ntests are comprehensive enough! For the SET/DROP NOT NULL patch I\nsubmitted, I included a regression test that tests every one of the\npreconditions in my code - that way if anything gets changed or broken,\nwe'll find out very quickly.\n\nI personally don't have a problem with the time taken to regression test -\nand I think that trimming the numeric test _might_ be a false economy. Who\nknows what's going to turn around and bite us oneday?\n\n> Having seen zero reports of any numeric\n> failures since we installed it, and seeing it takes >10x times longer\n> than the other tests, I think it should be paired back. Do we really\n> need 10 tests of each complex function? I think one would do the trick.\n\nA good point tho, I didn't submit a regression test that tries to ALTER 3\ndifferent non-existent tables to check for failures - one test was enough...\n\nChris\n\n\n",
"msg_date": "Sat, 13 Apr 2002 14:31:35 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug?"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> > Having seen zero reports of any numeric\n> > failures since we installed it, and seeing it takes >10x times longer\n> > than the other tests, I think it should be paired back. Do we really\n> > need 10 tests of each complex function? I think one would do the trick.\n> \n> A good point tho, I didn't submit a regression test that tries to ALTER 3\n> different non-existent tables to check for failures - one test was enough...\n\nThat was my point. Is there much value in testing each function ten\ntimes. Anyway, seems only I care so I will drop it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 13 Apr 2002 10:34:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Christopher Kings-Lynne wrote:\n> > > Having seen zero reports of any numeric\n> > > failures since we installed it, and seeing it takes >10x times longer\n> > > than the other tests, I think it should be paired back. Do we really\n> > > need 10 tests of each complex function? I think one would do the trick.\n> >\n> > A good point tho, I didn't submit a regression test that tries to ALTER 3\n> > different non-existent tables to check for failures - one test was enough...\n>\n> That was my point. Is there much value in testing each function ten\n> times. Anyway, seems only I care so I will drop it.\n\n Yes there is value in it. There is conditional code in it\n that depends on the values. I wrote that before (I said there\n are possible carry, rounding etc. issues), and it looked to\n me that you simply ignored these facts.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n",
"msg_date": "Sat, 13 Apr 2002 12:50:22 -0400 (EDT)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: numeric/decimal docs bug?"
}
] |
[
{
"msg_contents": "Is there any other way to accomplish NEW.TG_ARGV[0] in plpgsql, or am\nI stuck writing a C function which will allow:\n_value := getRecordValue(NEW, TG_ARGV[0]).\n\n\ncreate table test(col1 int4);\n\ncreate or replace function test() returns opaque as '\ndeclare\n _value int4;\nbegin\n RAISE NOTICE ''Args: %'', TG_NARGS;\n RAISE NOTICE ''Column: %'', TG_ARGV[0];\n\n -- PARSE ERROR\n _value := NEW.TG_ARGV[0];\n\n RAISE NOTICE ''Data: %'', _value;\nend;\n' language 'plpgsql';\n\ncreate trigger name BEFORE INSERT ON test\n FOR EACH ROW\n EXECUTE PROCEDURE test('col1');\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n\n\n",
"msg_date": "Sat, 2 Mar 2002 22:15:16 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "plpgsql Field of Record issue"
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> Is there any other way to accomplish NEW.TG_ARGV[0] in plpgsql,\n\nYou could do something involving EXECUTE.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 03 Mar 2002 15:33:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: plpgsql Field of Record issue "
},
{
"msg_contents": "> \"Rod Taylor\" <rbt@zort.ca> writes:\n> > Is there any other way to accomplish NEW.TG_ARGV[0] in plpgsql,\n>\n> You could do something involving EXECUTE.\n\nThen it tells me:: ERROR: NEW used in non-rule query\n\nTried a few other things, FOR .. IN EXECUTE .. which gives the same\nerror as above.\n\nThe below query gives me the same error as well -- though I'd be\nsurprised if it worked anyway:\nCREATE TEMP TABLE test AS SELECT * FROM (NEW);\n\nCreating a record and assigning NEW to it gives a parse error.\n\n",
"msg_date": "Sun, 3 Mar 2002 16:40:23 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Re: plpgsql Field of Record issue "
}
] |
[
{
"msg_contents": "Hi,\n\nI've implemented Bob Jenkin's hash function for PostgreSQL; more\ninformation on the hash function can be found at\nhttp://burtleburtle.net/bob/hash/doobs.html\n\nI'm posting this to -hackers because I'd like to get some feedback on\nwhether this actually improves performance. I've tested 2 situations\nlocally:\n\n\t(1) pgbench, btree indexes (to test typical performance)\n\n\t(2) pgbench, hash indexes\n\nI haven't looked at the implementation of hash joins; if they happen to\nuse this hash function as well, that would be another informative\nsituation to benchmark.\n\nIn my local tests, the performance in (1) is basically the same, while\nthe performance in (2) is increased by 4% to 8%. If you could let me\nknow the results on your local setup, it should become clear whether\nthis patch is a performance win overall.\n\nNote that to test case (2) properly, you'll need to drop and re-create\nyour pgbench indexes after you apply the patch (so that when the new\nindexes are created, they use the new hash function). Also, it would be\na good idea to use concurrency level 1; at higher concurrency levels,\nhash indexes have a tendancy to deadlock (yes, I'm currently trying to\nfix that too ;-) ).\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC",
"msg_date": "03 Mar 2002 01:02:51 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": true,
"msg_subject": "new hashing function"
},
{
"msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> I haven't looked at the implementation of hash joins; if they happen to\n> use this hash function as well, that would be another informative\n> situation to benchmark.\n\nHash joins use some chosen-at-random hashing code of their own; see\nhashFunc() in src/backend/executor/nodeHash.c. One of the things on my\nto-do list has been to replace that with the datatype-specific hash\nfunctions used for indexing/caching, since the latter seem better\nengineered (even before your improvements).\n\nBTW, I don't particularly approve of the parts of this patch that\nsimply remove unused arguments from various routines. You aren't\ngoing to save very many cycles that way, and you are reducing\nflexibility (eg, the changes to remove nkeys would interfere with\ntrying to make hash indexes support multiple columns).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 03 Mar 2002 12:31:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: new hashing function "
},
{
"msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> I've implemented Bob Jenkin's hash function for PostgreSQL; more\n> information on the hash function can be found at\n> http://burtleburtle.net/bob/hash/doobs.html\n\nOne other thought --- presently, catcache.c is careful to use a prime\nsize (257) for its hash tables, so that reducing the raw hash value\nmod 257 will allow all bits of the hash to contribute to determining\nthe hash bucket number. This is necessary because of the relatively\npoor randomness of the hash functions. Perhaps with Jenkins' function\nwe could dispense with that, and use a fixed power-of-2 size so that the\ndivision becomes a simple bit masking. On machines with slow integer\ndivision, this could be a nice speedup. Wouldn't help for hash indexes\nor joins though, since they don't use constant hashtable sizes.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 03 Mar 2002 13:48:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: new hashing function "
},
{
"msg_contents": "On Sun, 2002-03-03 at 12:31, Tom Lane wrote:\n> Neil Conway <nconway@klamath.dyndns.org> writes:\n> > I haven't looked at the implementation of hash joins; if they happen to\n> > use this hash function as well, that would be another informative\n> > situation to benchmark.\n> \n> Hash joins use some chosen-at-random hashing code of their own; see\n> hashFunc() in src/backend/executor/nodeHash.c. One of the things on my\n> to-do list has been to replace that with the datatype-specific hash\n> functions used for indexing/caching, since the latter seem better\n> engineered (even before your improvements).\n\nOkay, I'll implement this.\n\n> BTW, I don't particularly approve of the parts of this patch that\n> simply remove unused arguments from various routines. You aren't\n> going to save very many cycles that way, and you are reducing\n> flexibility (eg, the changes to remove nkeys would interfere with\n> trying to make hash indexes support multiple columns).\n\nHmmm... I had viewed removing extra, unused functions to arguments as\nbasically code cleanup. But I see your point -- although really, the\nfuture purpose behind the extra arguments should probably be\ndocumented... I'll review my changes and remove the ones that seem to\nhave some future benefit.\n\nThanks for the feedback Tom.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "03 Mar 2002 14:17:54 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": true,
"msg_subject": "Re: new hashing function"
}
] |
[
{
"msg_contents": "I've now committed changes to make the system caches store negative as\nwell as positive entries, per the discussion from about a month ago:\nhttp://archives.postgresql.org/pgsql-hackers/2002-01/msg01314.php\n\nI have not done very much performance testing, but preliminary results\nsay that the change had the expected effect. On a 100-query pgbench\nrun, I had these results a month ago:\nCatcache totals: 43 tup, 14519 srch, 13976 hits, 43 loads, 500 not found\nwhile with CVS tip I now get:\nCatcache totals: 48 tup, 15019 srch, 14476+495=14971 hits, 43+5=48 loads, 0 invals, 0 discards\n(Hits and new-entry loads are both expressed in the form pos+neg=total.)\nThis confirms my guess that the failing searches were all looking for\nthe same five tuples. The net number of catalog indexscans has been\nreduced from 543 to 48. The improvement is much less spectacular on the\nregression tests, which don't have such a repetitive query structure;\nbut several of the regression tests show factor-of-2 reductions in\ncatalog probes.\n\ninval.c has gotten a lot simpler since it no longer needs to distinguish\ninsert and delete operations. It now just keeps two lists: one of inval\nevents caused in the current command (and not yet reflected to the\ncache) and one of events in prior commands of the current transaction\n(which have been reflected to this backend's cache, but not yet reported\nto other backends). At CommandCounterIncrement time we process the\ncurrent-command list against the local caches and then nconc it onto the\nother list. The total time and memory costs of storing the inval lists\nshould actually be less than before, since no event needs to be stored\nin more than one list.\n\nI did observe some interesting breakage: creation of tables that inherit\nconstraints or column defaults from a parent table failed with the new\ncode. 
On investigation, it turned out that StoreDefaults() was doing\nCommandCounterIncrement at a time when we'd created pg_class and\npg_attribute rows claiming the new table had constraints or defaults,\nbut we hadn't yet made any pg_relcheck or pg_attrdef entries for them.\nIn the new code, the CommandCounterIncrement causes a rebuild of the\nnew table's relcache entry, and various sanity checks were failing.\nThe fix was to treat inherited constraints/defaults more like the normal\ncase: we first create pg_class and pg_attribute rows without any mention\nof constraints or defaults, and then update those rows when we've stored\nthe auxiliary info.\n\nIt's possible that there are similar bugs lurking elsewhere. I did some\ncasual exercising of ALTER TABLE but didn't turn up any problems.\nAnyway one should be wary of doing CommandCounterIncrement partway\nthrough a set of changes to a relation.\n\nOne other change is that inval.c now exposes a routine that can be used\nto queue a relcache flush event without necessarily having modified the\nrelation's pg_class row. I changed heap.c and index.c to use this in\nplace of doing no-op pg_class updates, but there may be some other\nplaces that could be improved as well.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 03 Mar 2002 14:32:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Followup: Syscaches should store negative entries, too"
}
] |
[
{
"msg_contents": "Just to verify, I'm getting the same message when trying the examples\nin the following docs:\n\nPostgreSQL 7.2 Documentation \nChapter 23. PL/pgSQL - SQL Procedural Language \n23.2. Structure of PL/pgSQL\n",
"msg_date": "3 Mar 2002 12:19:04 -0800",
"msg_from": "egon.phillips@sympatico.ca (egon.phillips)",
"msg_from_op": true,
"msg_subject": "Re: Problems with 7.2 on Solaris 2.6"
}
] |
[
{
"msg_contents": "In order to add items to an array:\nie. array[2] := variable;\n\nUse the below works, but is quite slow (due to the execute):\n\nDECLARE\n data ALIAS FOR $1;\n arr text[];\n\n -- Required due to bad arrays\n query text;\n rec RECORD;\nBEGIN\n\n -- WHAT WE WANT:\n -- arr[1] := data;\n\n\n -- HERE''S HOW WE DO IT:\n query := ''SELECT cast(''''{\"'' || data || ''\"}'''' as _text) as\narr'';\n\n -- Expecting a single loop only.\n FOR rec IN EXECUTE query LOOP\n arr := rec.arr;\n END LOOP\n\nEND;\n\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n\n",
"msg_date": "Sun, 3 Mar 2002 21:49:05 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "plpgsql nitpicking -- possible TODO item?"
}
] |
[
{
"msg_contents": "$ java -version\njava version \"1.3.0_02\"\nJava(TM) 2 Runtime Environment, Standard Edition (build 1.3.0_02)\nJava HotSpot(TM) Client VM (build 1.3.0_02, mixed mode)\n\nOn this environment, 7.1 and 7.2 have been build without any problem,\nbut today I got:\n\n/usr/bin/ant -buildfile ./build.xml install \\\n -Dinstall.directory=/usr/local/src/pgsql/current/share/java -Dmajor=7 -Dminor=3 -Dfullversion=7.3devel -Ddef_pgport=5432\nBuildfile: ./build.xml\n\nall:\n\nprepare:\n\ncheck_versions:\n\nBUILD FAILED\n\n/usr/local/src/pgsql/current/pgsql/src/interfaces/jdbc/./build.xml:33: Could not create task of type: condition. Common solutions are to use taskdef to declare your task, or, if this is an optional task, to put the optional.jar in the lib directory of your ant installation (ANT_HOME).\n",
"msg_date": "Mon, 04 Mar 2002 12:33:43 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "JDBC build failed on current"
}
] |
[
{
"msg_contents": "Hello!\n\nNeed advice: where is the right place to get eurodates \"by default\"?\n\nAFAIK there is 3 opportunity:\n\n1) to change default assignments from Eurodates=false to =true in\nbackend/utils/init/globals.c (IMHO by configure option --with-eurodates)\n2) to add an extra option to postmaster (just for convenience, -o\"-e\"\nlooks quite ugly, it will be transparently passed to all postgres on\nthose start-up)\n3) to add ConfigOption \"eurodates\" alike \"fsync\".\n\nThe last is one the less familar to me but it seems to be good point in\ncommon tendention to move command-line arguments to postgresql.conf .\n\n-- \nWBR, Yury Bokhoncovich, Senior System Administrator, NOC of F1 Group.\nPhone: +7 (3832) 106228, ext.140, E-mail: byg@center-f1.ru.\nUnix is like a wigwam -- no Gates, no Windows, and an Apache inside.\n\n\n",
"msg_date": "Mon, 4 Mar 2002 13:11:05 +0600 (NOVT)",
"msg_from": "Yury Bokhoncovich <byg@center-f1.ru>",
"msg_from_op": true,
"msg_subject": "permanent EuroDates"
},
{
"msg_contents": "On Mon, 2002-03-04 at 07:11, Yury Bokhoncovich wrote:\n> Need advice: where is the right place to get eurodates \"by default\"?\n\nexport PGDATESTYLE=Iso,European\npg_ctl start\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"Give, and it will be given to you. A good measure, \n pressed down, taken together and running over, \n will be poured into your lap. For with the same \n measure that you use, it will be measured to \n you.\" Luke 6:38 \n\n",
"msg_date": "04 Mar 2002 08:36:08 +0000",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: permanent EuroDates"
},
{
"msg_contents": "On 4 Mar 2002, Oliver Elphick wrote:\n\n> On Mon, 2002-03-04 at 07:11, Yury Bokhoncovich wrote:\n> > Need advice: where is the right place to get eurodates \"by default\"?\n>\n> export PGDATESTYLE=Iso,European\n> pg_ctl start\n\nMmmm...but if I wanna POSTGRES-like time format with eurodates?\nOption -e of postgres works fine, but its absence in postmaster\nis a pain IMHO. Messing with environment variables is bad IMHO.\n\n-- \nWBR, Yury Bokhoncovich, Senior System Administrator, NOC of F1 Group.\nPhone: +7 (3832) 106228, ext.140, E-mail: byg@center-f1.ru.\nUnix is like a wigwam -- no Gates, no Windows, and an Apache inside.\n\n\n",
"msg_date": "Mon, 4 Mar 2002 15:25:29 +0600 (NOVT)",
"msg_from": "Yury Bokhoncovich <byg@center-f1.ru>",
"msg_from_op": true,
"msg_subject": "Re: permanent EuroDates"
},
{
"msg_contents": "On Mon, 2002-03-04 at 09:25, Yury Bokhoncovich wrote:\n> On 4 Mar 2002, Oliver Elphick wrote:\n> \n> > On Mon, 2002-03-04 at 07:11, Yury Bokhoncovich wrote:\n> > > Need advice: where is the right place to get eurodates \"by default\"?\n> >\n> > export PGDATESTYLE=Iso,European\n> > pg_ctl start\n> \n> Mmmm...but if I wanna POSTGRES-like time format with eurodates?\n\nexport PGDATESTYLE=Postgres,European\n\n> Option -e of postgres works fine but its absence in postmaster\n> is a pain IMHO. Messing with environement variable is bad IMHO.\n\nYou can limit it to pg_ctl itself:\n\n PGDATESTYLE=Postgres,European pg_ctl...\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"Give, and it will be given to you. A good measure, \n pressed down, taken together and running over, \n will be poured into your lap. For with the same \n measure that you use, it will be measured to \n you.\" Luke 6:38 \n\n",
"msg_date": "04 Mar 2002 10:06:51 +0000",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: permanent EuroDates"
},
{
"msg_contents": "Yury Bokhoncovich <byg@center-f1.ru> writes:\n> 3) to add ConfigOption \"eurodates\" alike \"fsync\".\n\nNot eurodates per se. There should be a postgresql.conf option for\nDateStyle, which'd allow you to set what you want. Not sure why it's\nnot there already :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Mar 2002 10:02:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: permanent EuroDates "
},
{
"msg_contents": "Tom Lane wrote:\n> Yury Bokhoncovich <byg@center-f1.ru> writes:\n> > 3) to add ConfigOption \"eurodates\" alike \"fsync\".\n> \n> Not eurodates per se. There should be a postgresql.conf option for\n> DateStyle, which'd allow you to set what you want. Not sure why it's\n> not there already :-(\n> \n\nAdded to TODO:\n\n\t* Add GUC parameter for eurodates\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 4 Mar 2002 11:19:11 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: permanent EuroDates"
}
] |
[
{
"msg_contents": "Hi all,\n\nI'm in the process of switching from 7.1.3 to 7.2.\n\nThe only problem I have so far is the definition of an index.\n\nThe table contains a timestamp column. In 7.1.3 an index is defined as\nsuch:\n\nCREATE INDEX deb ON xxxx USING btree (date(timestamp coll) date_ops);\n\non 7.2 I have an error message saying that functional indexes must be\nmade ISCACHABLE.\n\nNow, should date() be made iscachable, or do I have another way to create\nthis index?\n\nCAST didn't work.\n\nTIA\n\nRegards,\n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n",
"msg_date": "Mon, 4 Mar 2002 11:34:24 +0100 (MET)",
"msg_from": "Olivier PRENANT <ohp@pyrenet.fr>",
"msg_from_op": true,
"msg_subject": "Bug or Feature?"
},
{
"msg_contents": "Olivier PRENANT <ohp@pyrenet.fr> writes:\n> CREATE INDEX deb ON xxxx USING btree (date(timestamp coll) date_ops);\n> on 7.2 I have an error message saying that functrional indexes must but\n> made ISCACHABLE.\n\nSee previous discussion of this identical problem. The fact is that\nsuch an index is dangerous, because it depends on the timezone setting.\n\nYou might want to make the underlying column be timestamp without time\nzone.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Mar 2002 10:08:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug or Feature? "
},
{
"msg_contents": "Hi, Tom\n\nThanks for your reply; I don't need timezone in this application so it's\nokay.\n\nHowever, I've been trying to change the table on 7.1.3 to get ready for\nthe swap but there's no timestamp without timezone on 7.1.3; \n\ndo I really have to edit the dump before reloading? ISTM that it's the\nbest way to generate errors...\n\nRegards,\nOn Mon, 4 Mar 2002, Tom Lane wrote:\n\n> Olivier PRENANT <ohp@pyrenet.fr> writes:\n> > CREATE INDEX deb ON xxxx USING btree (date(timestamp coll) date_ops);\n> > on 7.2 I have an error message saying that functrional indexes must but\n> > made ISCACHABLE.\n> \n> See previous discussion of this identical problem. The fact is that\n> such an index is dangerous, because it depends on the timezone setting.\n> \n> You might want to make the underlying column be timestamp without time\n> zone.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n",
"msg_date": "Mon, 4 Mar 2002 17:59:13 +0100",
"msg_from": "Olivier PRENANT <ohp@pyrenet.fr>",
"msg_from_op": true,
"msg_subject": "Re: Bug or Feature?"
},
{
"msg_contents": "Hi, Tom,\n\nThanks for replying.\nYou were right (of course), I don't need timestamp on this app...\nHowever I tried to change the column type on 7.1.3 to ease the\nswitch... AFAIK, there's no timestamp without timezone on 7.1.3; Please\ntell me there's another way other than editing pg_dump by hand... That\nwould lose time and would call for errors...\n\nRegards,\n On Mon, 4 Mar 2002, Tom Lane wrote:\n\n> Olivier PRENANT <ohp@pyrenet.fr> writes:\n> > CREATE INDEX deb ON xxxx USING btree (date(timestamp coll) date_ops);\n> > on 7.2 I have an error message saying that functrional indexes must but\n> > made ISCACHABLE.\n> \n> See previous discussion of this identical problem. The fact is that\n> such an index is dangerous, because it depends on the timezone setting.\n> \n> You might want to make the underlying column be timestamp without time\n> zone.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n",
"msg_date": "Mon, 4 Mar 2002 22:22:18 +0100",
"msg_from": "Olivier PRENANT <ohp@pyrenet.fr>",
"msg_from_op": true,
"msg_subject": "Re: Bug or Feature?"
},
{
"msg_contents": "> You were right (of course) I don't need timestamp on this app...\n> However I tried to change the column type on 7.1.3 to ease the\n> swith... AFAIK, there's no timestamp without timezone on 7.1.3; Please\n> tell me there's another way other than editing pg_dump by hand... That\n> would loose time and would call for errors...\n\nThen import the data into your new database and use temporary tables and\ndrop/create to convert your existing data to the different schema. Then\ncreate the function index you want.\n\nThere are probably other ways (as there are other schemas, such as using\nDATE rather than TIMESTAMP), but at some point you should choose one and\njust do it.\n\n - Thomas\n",
"msg_date": "Mon, 04 Mar 2002 14:00:28 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Bug or Feature?"
}
] |
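Tom's point in the thread above, that such an index "depends on the timezone setting", can be demonstrated outside PostgreSQL entirely. The following standalone C sketch (an illustration only, not backend code) shows that one absolute instant maps to different calendar dates under different TZ settings, which is exactly why date(timestamp with time zone) is not a pure, cachable function of its argument and cannot safely drive a functional index:

```c
#include <stdlib.h>
#include <time.h>

/* Illustration only (not PostgreSQL source): the calendar date of an
 * absolute point in time depends on the timezone in effect.  The same
 * time_t value yields different dates under different TZ settings. */
static int
local_yday(const char *tz, time_t t)
{
	struct tm	tmres;

	setenv("TZ", tz, 1);	/* POSIX TZ spec, e.g. "UTC0" or "ABC12" */
	tzset();
	localtime_r(&t, &tmres);
	return tmres.tm_yday;	/* day of year of that instant, locally */
}
```

Epoch second 0 is 1970-01-01 in UTC but still 1969-12-31 in a zone twelve hours west; an index built while one setting was active would give wrong answers under the other, hence the 7.2 refusal to build the index unless the function is declared iscachable.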
[
{
"msg_contents": "Hello,\n\nI am working on the port of PostgreSQL to NetWare.\n\nDuring my work I've found the following changes that need to be added to\nthe sources in order to compile PostgreSQL on NetWare. There\nmay be some more changes I've not seen until today, but if so\nI'll post the upcoming issues here.\n\n\n1. The following additions to main.c:\n#ifdef N_PLAT_NLM\n\t/* NetWare-specific actions on startup */\n NWInit();\n#endif /* N_PLAT_NLM */\n\nand\n\n#if !defined(__BEOS__) && !defined(N_PLAT_NLM)\nif (geteuid() == 0)\n{\n...\n\n\n2. in bootstrap.c/cleanup()\nstatic void\ncleanup()\n{\n\tstatic int\tbeenhere = 0;\n\n\tif (!beenhere)\n\t\tbeenhere = 1;\n\telse\n\t{\n\t\telog(FATAL, \"Memory manager fault: cleanup called twice.\\n\");\n\t\tproc_exit(1);\n\t}\n\tif (reldesc != (Relation) NULL)\n\t\theap_close(reldesc, NoLock);\n\tCommitTransactionCommand();\n\n#ifdef N_PLAT_NLM\n NWCleanUp();\n#endif\n\n\tproc_exit(Warnings);\n}\n\n\n3. in xlog.c\n#if !defined(__BEOS__) && !defined(N_PLAT_NLM)\n\tif (link(tmppath, path) < 0)\n\t\telog(STOP, \"link from %s to %s (initialization of log file %u, segment %u) failed: %m\",\n\t\t\t tmppath, path, log, seg);\n\tunlink(tmppath);\n\n\n4. in fd.c/filepath()\nstatic char *\nfilepath(const char *filename)\n{\n\tchar\t *buf;\n\n#ifdef N_PLAT_NLM\n\tbuf = (char *) palloc(strlen(filename) + 1);\n\tstrcpy(buf, filename);\n#else\n\t/* Not an absolute path name? Then fill in with database path... */\n\tif (*filename != '/')\n\t{\n\t...\n\n5. in dfmgr.c/load_external_function()\n#ifdef N_PLAT_NLM\n\t\tfile_scanner->handle = dlopen(filename, RTLD_LAZY);\n#else\n\t\tfile_scanner->handle = pg_dlopen(filename);\n#endif\n\n\n6. in datetime.h\n#if !defined(__CYGWIN__) && !defined(N_PLAT_NLM)\n#define TIMEZONE_GLOBAL timezone\n#else\n#define TIMEZONE_GLOBAL _timezone\n#endif\n\n\n7. in dynamic_loader.h\n#ifndef N_PLAT_NLM\nextern void *pg_dlopen(char *filename);\nextern PGFunction pg_dlsym(void *handle, char *funcname);\nextern void pg_dlclose(void *handle);\nextern char *pg_dlerror(void);\n#endif\n\n\n8. a directory backend/port/netware for some OS-specific files I want to maintain.\nPlease give me some information on how I can get access to such a directory and how\nI can check in files to this directory. If you have another way, please let me know.\n\n\nAs you see these are only some simple #ifdefs that need to be added to the code.\n\n\nbest regards\n\n\nUlrich Neumann\nNovell Worldwide Developer Support\n",
"msg_date": "Mon, 4 Mar 2002 15:02:00 +0200",
"msg_from": "Ulrich Neumann <u_neumann@gne.de>",
"msg_from_op": true,
"msg_subject": "Some additions/#ifdefs to target new OS NetWare"
},
{
"msg_contents": "Ulrich Neumann <u_neumann@gne.de> writes:\n> During my work i�ve found the following that need to be added to\n> the sources in order to compile postgreSQL on NetWare.\n\nWe'd appreciate a patch (diff -c format), not random snippets of code.\n\n> 2. in bootstrap.c/cleanup()\n> #ifdef N_PLAT_NLM\n> NWCleanUp();\n> #endif\n\nUnlikely to be the right place for it, if it's needed at all which I\ndoubt. (Surely NetWare can manage to provide a *standard* C execution\nenvironment, in which any platform-specific startup and cleanup stuff\nis done in the C library?)\n\n> 4. in fd.c/filepath()\n\n> #ifdef N_PLAT_NLM\n> \tbuf = (char *) palloc(strlen(filename) + 1);\n> \tstrcpy(buf, filename);\n> #else\n\nI don't believe this either.\n\n> 7. in dynamic_loader.h\n> #ifndef N_PLAT_NLM\n> extern void *pg_dlopen(char *filename);\n> extern PGFunction pg_dlsym(void *handle, char *funcname);\n> extern void pg_dlclose(void *handle);\n> extern char *pg_dlerror(void);\n> #endif\n\nNope. Make a platform-specific implementation of pg_dlopen and friends,\njust like all the other platforms have done.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Mar 2002 10:27:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Some additions/#ifdefs to target new OS NetWare "
}
] |
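For readers without the source tree at hand, the fd.c logic that change no. 4 would bypass looks roughly like the sketch below. This is simplified and partly hypothetical: malloc stands in for the backend's palloc, and the database path is passed as a parameter here rather than read from the backend's global state, so names and signatures differ from the real function.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Simplified sketch of the fd.c filepath() logic under discussion:
 * relative file names get the database path prepended; absolute names
 * are returned verbatim.  The proposed NetWare #ifdef skips the
 * prefixing entirely and always returns the name as given. */
static char *
filepath(const char *database_path, const char *filename)
{
	char	   *buf;

#ifdef N_PLAT_NLM
	/* NetWare branch from the proposed patch: name used verbatim. */
	buf = malloc(strlen(filename) + 1);
	strcpy(buf, filename);
#else
	if (*filename != '/')
	{
		/* Not an absolute path name?  Then fill in with database path. */
		buf = malloc(strlen(database_path) + strlen(filename) + 2);
		sprintf(buf, "%s/%s", database_path, filename);
	}
	else
	{
		buf = malloc(strlen(filename) + 1);
		strcpy(buf, filename);
	}
#endif
	return buf;
}
```

The NetWare branch drops the database-path prefixing for relative names, which is likely why Tom doubts the change is right: even on a platform without Unix-style absolute paths, relative names still need to resolve inside the database directory.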
[
{
"msg_contents": "Hi Tom,\n\nthanks for your quick response.\n\nI'll do it with a patch, and I'll add some code to the OS-specific stuff so that I\ndon't need nos. 4 and 7. If I can't avoid no. 4 I'll let you know with some\nexplanation.\n\nOn NetWare there is a standard ANSI/POSIX C library with startup and\ncleanup possibilities. PostgreSQL is based on a very new one that isn't\ncompletely finished, so I need NWInit and NWCleanUp at the moment.\nThis will change in the next few weeks, so I won't need NWInit and NWCleanUp\non NetWare then.\n\nAnother question: I'll present PostgreSQL at Novell's BrainShare this month\nto many Novell customers. Do you have some material that I can use?\n(If you want to follow BrainShare you can look at http://www.novell.com/brainshare.)\n\nregards\n\nUlrich\n",
"msg_date": "Mon, 4 Mar 2002 16:58:00 +0200",
"msg_from": "Ulrich Neumann <u_neumann@gne.de>",
"msg_from_op": true,
"msg_subject": "Re: Some additions/#ifdefs to target new OS NetWare"
}
] |
[
{
"msg_contents": "Sorry to bug, but its been a week since I sent this message with an\nupdated Domain patch without any feedback (and the initial patch\nreceived suggestions quite quick).\n\nAre there any further changes recommended or is it good to go?\nSupported SQL constructs are at the bottom of the message. Once the\nbase is approved I'll make alterations to psql, pg_dump and friends to\nhandle the external elements.\n\nhttp://archives.postgresql.org/pgsql-patches/2002-02/msg00252.php\n\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n\n",
"msg_date": "Mon, 4 Mar 2002 13:45:03 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": ""
},
{
"msg_contents": "\nI am working the patches now. I will send feedback in a few hours. \nThanks. Sorry for the delay.\n\n---------------------------------------------------------------------------\n\nRod Taylor wrote:\n> Sorry to bug, but its been a week since I sent this message with an\n> updated Domain patch without any feedback (and the initial patch\n> received suggestions quite quick).\n> \n> Are there any further changes recommended or is it good to go?\n> Supported SQL constructs are at the bottom of the message. Once the\n> base is approved I'll make alterations to psql, pg_dump and friends to\n> handle the external elements.\n> \n> http://archives.postgresql.org/pgsql-patches/2002-02/msg00252.php\n> \n> --\n> Rod Taylor\n> \n> Your eyes are weary from staring at the CRT. You feel sleepy. Notice\n> how restful it is to watch the cursor blink. Close your eyes. The\n> opinions stated above are yours. You cannot imagine why you ever felt\n> otherwise.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 4 Mar 2002 15:43:44 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: "
}
] |
[
{
"msg_contents": "Currently we have a rather confusing mishmash of behaviors for the names\nof rules, constraints, and triggers. I'd like to unify the rules\nso that these objects all have the same naming behavior; and the only\nbehavior that makes sense to me now is that of triggers.\n\nThe current behavior is:\n\n1. Rules are required to have a name that is unique within the current\ndatabase. The rule can be named without reference to the table it is\non. Dropping a rule is done with \"DROP RULE name\".\n\n2. Constraints are not required to have any unique name at all.\nDropping constraints is done with \"ALTER TABLE tablename DROP CONSTRAINT\nconstraintname\", which will drop all constraints on that table that\nmatch the given name.\n\n3. Triggers are required to have names that are unique among the\ntriggers on a given table. Dropping a trigger is done with \"DROP\nTRIGGER name ON table\".\n\nThe SQL spec is not a great deal of help on this, since it doesn't\nhave rules or triggers at all. For constraints, it requires\ndatabase-wide uniqueness of constraint names --- a rule I doubt\nanyone is going to favor adopting for Postgres.\n\nI think that all three object types should have names that are unique\namong the objects associated with a particular table, but not unique\nacross a whole database. So, triggers are okay already, but rules\nand constraints need work.\n\nFor rules, we'd need to change the syntax of DROP RULE to be \"DROP RULE\nname ON table\", much like DROP TRIGGER. This seems unlikely to cause\nproblems for existing applications, since I doubt rule-dropping is done\nmuch by application code.\n\nFor constraints, we'd need to change the code to be more careful to\ngenerate unique names for unnamed constraints. That doesn't seem\ndifficult, but I'm a little worried about the possibility of errors\nin loading schemas from existing databases, where there might be\nnon-unique constraint names. Perhaps it'd be safer to maintain the\ncurrent behavior (no uniqueness required for constraint names).\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Mar 2002 14:24:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Uniqueness of rule, constraint, and trigger names"
},
{
"msg_contents": "On 4 Mar 2002 at 14:24, Tom Lane wrote:\n\n> For constraints, we'd need to change the code to be more careful to\n> generate unique names for unnamed constraints. That doesn't seem\n> difficult, but I'm a little worried about the possibility of errors\n> in loading schemas from existing databases, where there might be\n> non-unique constraint names.\n\nCreate a tool to generate unique constraint names during a dump. Sounds \nlike a pg_dump[all] switch to me.\n\n> Perhaps it'd be safer to maintain the\n> current behavior (no uniqueness required for constraint names).\n\nI would rather see a simple and unambiguous way to maintain a single \nconstraint. Perhaps a \"unique/not-unique\" knob is appropriate.\n-- \nDan Langille\nThe FreeBSD Diary - http://freebsddiary.org/ - practical examples\n\n",
"msg_date": "Mon, 4 Mar 2002 15:04:45 -0500",
"msg_from": "\"Dan Langille\" <dan@langille.org>",
"msg_from_op": false,
"msg_subject": "Re: Uniqueness of rule, constraint, and trigger names"
},
{
"msg_contents": "\"Dan Langille\" <dan@langille.org> writes:\n> On 4 Mar 2002 at 14:24, Tom Lane wrote:\n>> ... but I'm a little worried about the possibility of errors\n>> in loading schemas from existing databases, where there might be\n>> non-unique constraint names.\n\n> Create a tool to generate unique constraint names during a dump.\n\nAnd then all we need is a time machine, so we can make existing\ninstances of pg_dump contain the tool? It's not that easy ...\n\nI am not sure that there's really a problem here, because I don't\nthink duplicate constraint names will be generated during plain\nCREATE operations. However, an ALTER TABLE might leave you with\na problem. Hard to tell if this is critical enough to worry about.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Mar 2002 15:38:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Uniqueness of rule, constraint, and trigger names "
},
{
"msg_contents": "I agree with your statement: \"I think that all three object types should\nhave names that are unique\namong the objects associated with a particular table, but not unique across\na whole database.\"\n\nTo address the potential problem of \"loading schemas of existing databases,\"\nwhy not let the new proposed behavior be the default behavior and provide a\nconfiguration option and/or command line option that would enable the old\nbehavior - at least for some period of time.\n\nTim\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: <pgsql-hackers@postgreSQL.org>; <pgsql-sql@postgreSQL.org>\nSent: Monday, March 04, 2002 11:24 AM\nSubject: [SQL] Uniqueness of rule, constraint, and trigger names\n\n\n> Currently we have a rather confusing mismash of behaviors for the names\n> of rules, constraints, and triggers. I'd like to unify the rules\n> so that these objects all have the same naming behavior; and the only\n> behavior that makes sense to me now is that of triggers.\n>\n> The current behavior is:\n>\n> 1. Rules are required to have a name that is unique within the current\n> database. The rule can be named without reference to the table it is\n> on. Dropping a rule is done with \"DROP RULE name\".\n>\n> 2. Constraints are not required to have any unique name at all.\n> Dropping constraints is done with \"ALTER TABLE tablename DROP CONSTRAINT\n> constraintname\", which will drop all constraints on that table that\n> match the given name.\n>\n> 3. Triggers are required to have names that are unique among the\n> triggers on a given table. Dropping a trigger is done with \"DROP\n> TRIGGER name ON table\".\n>\n> The SQL spec is not a great deal of help on this, since it doesn't\n> have rules or triggers at all. 
For constraints, it requires\n> database-wide uniqueness of constraint names --- a rule I doubt\n> anyone is going to favor adopting for Postgres.\n>\n> I think that all three object types should have names that are unique\n> among the objects associated with a particular table, but not unique\n> across a whole database. So, triggers are okay already, but rules\n> and constraints need work.\n>\n> For rules, we'd need to change the syntax of DROP RULE to be \"DROP RULE\n> name ON table\", much like DROP TRIGGER. This seems unlikely to cause\n> problems for existing applications, since I doubt rule-dropping is done\n> much by application code.\n>\n> For constraints, we'd need to change the code to be more careful to\n> generate unique names for unnamed constraints. That doesn't seem\n> difficult, but I'm a little worried about the possibility of errors\n> in loading schemas from existing databases, where there might be\n> non-unique constraint names. Perhaps it'd be safer to maintain the\n> current behavior (no uniqueness required for constraint names).\n>\n> Comments?\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n",
"msg_date": "Mon, 4 Mar 2002 12:41:42 -0800",
"msg_from": "\"Tim Barnard\" <tbarnard@povn.com>",
"msg_from_op": false,
"msg_subject": "Re: Uniqueness of rule, constraint, and trigger names"
},
{
"msg_contents": "On 4 Mar 2002 at 15:38, Tom Lane wrote:\n\n> \"Dan Langille\" <dan@langille.org> writes:\n> > On 4 Mar 2002 at 14:24, Tom Lane wrote:\n> >> ... but I'm a little worried about the possibility of errors\n> >> in loading schemas from existing databases, where there might be\n> >> non-unique constraint names.\n> \n> > Create a tool to generate unique constraint names during a dump.\n> \n> And then all we need is a time machine, so we can make existing\n> instances of pg_dump contain the tool? It's not that easy ...\n\n*sigh*\n\nYou don't have to modify all previous versions of pg_dump. They use this \ntool if they want to upgrade to this version. There is no perfect \nsolution. Even a script to check for uniqueness might help. I'm just \ntrying to help the best way I can; by providing suggestions. As \nrequested.\n\n> I am not sure that there's really a problem here, because I don't\n> think duplicate constraint names will be generated during plain\n> CREATE operations. However, an ALTER TABLE might leave you with\n> a problem. Hard to tell if this is critical enough to worry about.\n\nIt would be easily detected at load time. Then you direct the user to the \ntool mentioned above.\n-- \nDan Langille\nThe FreeBSD Diary - http://freebsddiary.org/ - practical examples\n\n",
"msg_date": "Mon, 4 Mar 2002 15:43:38 -0500",
"msg_from": "\"Dan Langille\" <dan@langille.org>",
"msg_from_op": false,
"msg_subject": "Re: Uniqueness of rule, constraint, and trigger names "
},
{
"msg_contents": "\"Dan Langille\" <dan@langille.org> writes:\n> It would be easily detected at load time. Then you direct the user to the \n> tool mentioned above.\n\nBut if the user has already dumped and flushed his old installation,\nhe's still in need of a time machine.\n\nHmm, maybe what we need is a tool that can be applied during load.\nEssentially, alter incoming constraint names as needed to make them\nunique. We could have this enabled by a SET switch...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Mar 2002 15:47:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Uniqueness of rule, constraint, and trigger names "
},
{
"msg_contents": "On 4 Mar 2002 at 15:47, Tom Lane wrote:\n\n> \"Dan Langille\" <dan@langille.org> writes:\n> > It would be easily detected at load time. Then you direct the user to\n> > the tool mentioned above.\n> \n> But if the user has already dumped and flushed his old installation,\n> he's still in need of a time machine.\n\nAgreed. I had that same problem with upgrading to 7.2. My solution was \nto install 7.1.3 on another box, load, massage, dump. Next time, I'll \nkeep the old version around a bit longer.\n\n> Hmm, maybe what we need is a tool that can be applied during load.\n> Essentially, alter incoming constraint names as needed to make them\n> unique. We could have this enabled by a SET switch...\n\nI think that is an eloquent solution.\n-- \nDan Langille\nThe FreeBSD Diary - http://freebsddiary.org/ - practical examples\n\n",
"msg_date": "Mon, 4 Mar 2002 15:50:39 -0500",
"msg_from": "\"Dan Langille\" <dan@langille.org>",
"msg_from_op": false,
"msg_subject": "Re: Uniqueness of rule, constraint, and trigger names "
},
{
"msg_contents": "If pgupgrade can be fixed to work, perhaps it could set off warnings\non items that need to be corrected in a 'schema upgradability test'\nwhich will ensure that the user can upgrade it properly -- it\nshouldn't upgrade if it can't guarantee an upgrade will succeed.\n\nThis should include a full schema test (that whole bad schema entry\nstuff that pg_dump is supposed to work around) too.\n\nSomething I'm thinking about digging into.\n\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: <dan@langille.org>\nCc: <pgsql-hackers@postgreSQL.org>; <pgsql-sql@postgreSQL.org>\nSent: Monday, March 04, 2002 3:38 PM\nSubject: Re: [HACKERS] [SQL] Uniqueness of rule, constraint, and\ntrigger names\n\n\n> \"Dan Langille\" <dan@langille.org> writes:\n> > On 4 Mar 2002 at 14:24, Tom Lane wrote:\n> >> ... but I'm a little worried about the possibility of errors\n> >> in loading schemas from existing databases, where there might be\n> >> non-unique constraint names.\n>\n> > Create a tool to generate unique constraint names during a dump.\n>\n> And then all we need is a time machine, so we can make existing\n> instances of pg_dump contain the tool? It's not that easy ...\n>\n> I am not sure that there's really a problem here, because I don't\n> think duplicate constraint names will be generated during plain\n> CREATE operations. However, an ALTER TABLE might leave you with\n> a problem. Hard to tell if this is critical enough to worry about.\n>\n> regards, tom lane\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Mon, 4 Mar 2002 16:40:16 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Uniqueness of rule, constraint, and trigger names "
},
{
"msg_contents": "\nOn Mon, 4 Mar 2002, Tom Lane wrote:\n\n> For constraints, we'd need to change the code to be more careful to\n> generate unique names for unnamed constraints. That doesn't seem\n\nAnother question would be what to do with inherited constraints that\nconflict in multiple inheritance cases. It'd probably be safe to rename\nthose on the child table to be unique, but then drop constraint may\nbecome more involved (and the error messages no longer use the name\ngiven by the user for either constraint).\n\n\n",
"msg_date": "Mon, 4 Mar 2002 14:13:18 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Uniqueness of rule, constraint, and trigger names"
},
{
"msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> On Mon, 4 Mar 2002, Tom Lane wrote:\n>> For constraints, we'd need to change the code to be more careful to\n>> generate unique names for unnamed constraints. That doesn't seem\n\n> Another question would be what to do with inherited constraints that\n> conflict in multiple inheritance cases. It'd probably be safe to rename\n> those on the child table to be unique,\n\nI'd just raise an error, I think, unless perhaps the constraints are\nidentical (for some definition of identical). We don't allow\nconflicting column definitions to be inherited, so why constraints?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Mar 2002 17:18:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Uniqueness of rule, constraint, and trigger names "
},
{
"msg_contents": "On Mon, 4 Mar 2002, Tom Lane wrote:\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > On Mon, 4 Mar 2002, Tom Lane wrote:\n> >> For constraints, we'd need to change the code to be more careful to\n> >> generate unique names for unnamed constraints. That doesn't seem\n>\n> > Another question would be what to do with inherited constraints that\n> > conflict in multiple inheritance cases. It'd probably be safe to rename\n> > those on the child table to be unique,\n>\n> I'd just raise an error, I think, unless perhaps the constraints are\n> identical (for some definition of identical). We don't allow\n> conflicting column definitions to be inherited, so why constraints?\n\nGood point. That's probably better than autorenaming them.\n\n\n",
"msg_date": "Mon, 4 Mar 2002 14:51:16 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Uniqueness of rule, constraint, and trigger names "
},
{
"msg_contents": "Tom,\n\n> Currently we have a rather confusing mismash of behaviors for the\n> names\n> of rules, constraints, and triggers. I'd like to unify the rules\n> so that these objects all have the same naming behavior; and the only\n> behavior that makes sense to me now is that of triggers.\n\nI agree. \n\nRegarding prioritization: As a heavy user of constraints and triggers,\n on two commercial projects, I have yet to have constraint names\n overlap. What's more of a problem for me is those pesky <unnamed>\n constraints.\n\n\n-Josh Berkus\n",
"msg_date": "Mon, 04 Mar 2002 15:05:07 -0800",
"msg_from": "\"Josh Berkus\" <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: Uniqueness of rule, constraint, and trigger names"
},
{
"msg_contents": "> 2. Constraints are not required to have any unique name at all.\n> Dropping constraints is done with \"ALTER TABLE tablename DROP CONSTRAINT\n> constraintname\", which will drop all constraints on that table that\n> match the given name.\n\nPersonally, I'd like to see CHECK contraints given decent names (eg.\nmytable_fullname_chk) instead of '$1', '$2', etc. This makes it much easier\nto use DROP CONSTRAINT...\n\nAlso, when it comes time to let people drop FOREIGN KEY constraints, it\nmight be a problem that they're all generated as '<unnamed>' at the\nmoment...\n\nChris\n\n",
"msg_date": "Tue, 5 Mar 2002 10:36:10 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Uniqueness of rule, constraint, and trigger names"
},
{
"msg_contents": "Tom Lane writes:\n\n> The SQL spec is not a great deal of help on this, since it doesn't\n> have rules or triggers at all.\n\nThe SQL spec has triggers, and their names are supposed to be\nglobally unique.\n\n> For constraints, it requires database-wide uniqueness of constraint\n> names --- a rule I doubt anyone is going to favor adopting for\n> Postgres.\n\nThis should probably be schema-wide, which poses much less of a problem.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 10 Mar 2002 20:33:43 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Uniqueness of rule, constraint, and trigger names"
},
{
"msg_contents": "> > The SQL spec is not a great deal of help on this, since it doesn't\n> > have rules or triggers at all.\n> The SQL spec has triggers, and their names are supposed to be\n> globally unique.\n\nSpeaking of which, I was looking at\n\n CREATE ASSERTION/DROP ASSERTION\n\nin (what I think is) SQL99. It has enough restrictions on use that I'm\nnot yet sure what exactly it is supposed to do, and am not sure if it is\nclose to equivalent to something we already have. Anyone know?\n\n - Thomas\n",
"msg_date": "Sun, 10 Mar 2002 17:37:43 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Uniqueness of rule, constraint, and trigger names"
}
] |
[
{
"msg_contents": "Hackers,\n\nI get the following error (see below) when I attempt to insert a row into\none of our tables. I'm using postgres 6.4 with java/jdbc 1.2.1.\n\nAny ideas on what it means? I can't seem to find documentation\non this particular exception anywhere? \n\nThanks,\n\nEric\n\n----------------------------------------------\n\nINSERT INTO bad_urls(ID,DATA,ATTEMPTS,REASON) VALUES \n(3376,'http://www.oit.itd.umich.edu/projects/adw2k/chordata/aves.html',0,'Unknown \nHost')\n\njava.sql.SQLException: results returned\n at java.lang.Throwable.fillInStackTrace(Native Method)\n at java.lang.Throwable.fillInStackTrace(Compiled Code)\n at java.lang.Throwable.<init>(Compiled Code)\n at java.lang.Exception.<init>(Compiled Code)\n at java.sql.SQLException.<init>(SQLException.java:82)\n at postgresql.Statement.executeUpdate(Compiled Code)\n at postgresql.PreparedStatement.executeUpdate(Compiled Code)\n at InsertError.record(Compiled Code)\n at InsertError.record(Compiled Code)\n at wbCheckUrl$CheckThread.run(Compiled Code)\n\n",
"msg_date": "Mon, 04 Mar 2002 13:28:57 -0700",
"msg_from": "Eric Scroger <escroger@carl.org>",
"msg_from_op": true,
"msg_subject": "JDBC: java.sql.SQLException: results returned"
},
{
"msg_contents": "Eric Scroger <escroger@carl.org> writes:\n\n> Hackers,\n> \n> I get the following error (see below) when I attempt to insert a row into\n> one of our tables. I'm using postgres 6.4 with java/jdbc 1.2.1.\n\n6.4!!!!!! Jeez, upgrade already. \n\n> Any ideas on what it means? I can't seem to find documentation\n> on this particular exception anywhere? Thanks,\n\nRun a version of PG that isn't 5 years old and maybe someone will help\nyou out. ;)\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n",
"msg_date": "04 Mar 2002 19:04:45 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: JDBC: java.sql.SQLException: results returned"
},
{
"msg_contents": "UPDATE:\n\nSo now I've got postgres 7.1 running. I upgraded the postgres JDBC jar\nfile as well, yet I'm still getting the exception, \njava.sql.SQLException: results returned.\n\nAny ideas?\n\nEric\n\n\nEric Scroger wrote:\n\n> Hackers,\n>\n> I get the following error (see below) when I attempt to insert a row into\n> one of our tables. I'm using postgres 6.4 with java/jdbc 1.2.1.\n>\n> Any ideas on what it means? I can't seem to find documentation\n> on this particular exception anywhere? \n> Thanks,\n>\n> Eric\n>\n> ----------------------------------------------\n>\n> INSERT INTO bad_urls(ID,DATA,ATTEMPTS,REASON) VALUES \n> (3376,'http://www.oit.itd.umich.edu/projects/adw2k/chordata/aves.html',0,'Unknown \n> Host')\n>\n> java.sql.SQLException: results returned\n> at java.lang.Throwable.fillInStackTrace(Native Method)\n> at java.lang.Throwable.fillInStackTrace(Compiled Code)\n> at java.lang.Throwable.<init>(Compiled Code)\n> at java.lang.Exception.<init>(Compiled Code)\n> at java.sql.SQLException.<init>(SQLException.java:82)\n> at postgresql.Statement.executeUpdate(Compiled Code)\n> at postgresql.PreparedStatement.executeUpdate(Compiled Code)\n> at InsertError.record(Compiled Code)\n> at InsertError.record(Compiled Code)\n> at wbCheckUrl$CheckThread.run(Compiled Code)\n>\n\n\n",
"msg_date": "Mon, 04 Mar 2002 18:18:00 -0700",
"msg_from": "Eric Scroger <escroger@carl.org>",
"msg_from_op": true,
"msg_subject": "Re: JDBC: java.sql.SQLException: results returned"
},
{
"msg_contents": "Eric,\n\nCan you send the new stack trace, and the lines of code that are causing\nit?\n\nDave\n\n-----Original Message-----\nFrom: pgsql-jdbc-owner@postgresql.org\n[mailto:pgsql-jdbc-owner@postgresql.org] On Behalf Of Eric Scroger\nSent: Monday, March 04, 2002 8:18 PM\nTo: pgsql-jdbc@postgresql.org\nCc: pgsql hackers\nSubject: Re: [JDBC] JDBC: java.sql.SQLException: results returned\n\n\nUPDATE:\n\nSo now I've got postgres 7.1 running. I upgraded the postgres JDBC jar\nfile as well, yet I'm still getting the exception, \njava.sql.SQLException: results returned.\n\nAny ideas?\n\nEric\n\n\nEric Scroger wrote:\n\n> Hackers,\n>\n> I get the following error (see below) when I attempt to insert a row \n> into one of our tables. I'm using postgres 6.4 with java/jdbc 1.2.1.\n>\n> Any ideas on what it means? I can't seem to find documentation on this\n\n> particular exception anywhere? Thanks,\n>\n> Eric\n>\n> ----------------------------------------------\n>\n> INSERT INTO bad_urls(ID,DATA,ATTEMPTS,REASON) VALUES\n>\n(3376,'http://www.oit.itd.umich.edu/projects/adw2k/chordata/aves.html',0\n,'Unknown \n> Host')\n>\n> java.sql.SQLException: results returned\n> at java.lang.Throwable.fillInStackTrace(Native Method)\n> at java.lang.Throwable.fillInStackTrace(Compiled Code)\n> at java.lang.Throwable.<init>(Compiled Code)\n> at java.lang.Exception.<init>(Compiled Code)\n> at java.sql.SQLException.<init>(SQLException.java:82)\n> at postgresql.Statement.executeUpdate(Compiled Code)\n> at postgresql.PreparedStatement.executeUpdate(Compiled Code)\n> at InsertError.record(Compiled Code)\n> at InsertError.record(Compiled Code)\n> at wbCheckUrl$CheckThread.run(Compiled Code)\n>\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n\n",
"msg_date": "Mon, 4 Mar 2002 20:27:24 -0500",
"msg_from": "\"Dave Cramer\" <Dave@micro-automation.net>",
"msg_from_op": false,
"msg_subject": "Re: JDBC: java.sql.SQLException: results returned"
},
{
"msg_contents": "Are there any rules or triggers on this particular table?\n\n--Barry\n\nEric Scroger wrote:\n> UPDATE:\n> \n> So now I've got postgres 7.1 running. I upgraded the postgres JDBC jar\n> file as well, yet I'm still getting the exception, \n> java.sql.SQLException: results returned.\n> \n> Any ideas?\n> \n> Eric\n> \n> \n> Eric Scroger wrote:\n> \n>> Hackers,\n>>\n>> I get the following error (see below) when I attempt to insert a row into\n>> one of our tables. I'm using postgres 6.4 with java/jdbc 1.2.1.\n>>\n>> Any ideas on what it means? I can't seem to find documentation\n>> on this particular exception anywhere? Thanks,\n>>\n>> Eric\n>>\n>> ----------------------------------------------\n>>\n>> INSERT INTO bad_urls(ID,DATA,ATTEMPTS,REASON) VALUES \n>> (3376,'http://www.oit.itd.umich.edu/projects/adw2k/chordata/aves.html',0,'Unknown \n>> Host')\n>>\n>> java.sql.SQLException: results returned\n>> at java.lang.Throwable.fillInStackTrace(Native Method)\n>> at java.lang.Throwable.fillInStackTrace(Compiled Code)\n>> at java.lang.Throwable.<init>(Compiled Code)\n>> at java.lang.Exception.<init>(Compiled Code)\n>> at java.sql.SQLException.<init>(SQLException.java:82)\n>> at postgresql.Statement.executeUpdate(Compiled Code)\n>> at postgresql.PreparedStatement.executeUpdate(Compiled Code)\n>> at InsertError.record(Compiled Code)\n>> at InsertError.record(Compiled Code)\n>> at wbCheckUrl$CheckThread.run(Compiled Code)\n>>\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n\n",
"msg_date": "Mon, 04 Mar 2002 17:32:46 -0800",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] JDBC: java.sql.SQLException: results returned"
}
] |
[
{
"msg_contents": "On the way to supporting schemas, I am thinking about trying to make the\nparsing of attributes a little more intelligible. The Attr node type\nseems overused to mean several different things. I'd like to do the\nfollowing:\n\nFor column references:\n\nSplit \"Attr\" into three node types for different uses:\n\nAlias: for AS clauses. Carries a \"char *aliasname\" and a List of column\nalias names. The current uses of Attr in range table entries would\nbecome Alias.\n\nColumnRef: for referencing a column (possibly qualified, possibly with\narray subscripts) in the raw grammar output. Carries a List of names\nwhich correspond to the dotted names (eg, a.b.c), plus a List of array\nsubscripting info (currently called \"indirection\" in Attr, but I wonder\nif \"subscripts\" wouldn't be a more useful name).\n\nParamRef: for referencing a parameter. Carries parameter number,\npossibly-empty list of field names to qualify the param, and a subscript\nlist. The ParamNo node type goes away, to be merged into this.\n\nThe Ident node type is not semantically distinct from ColumnRef with a\none-element name list. Probably should retire it.\n\nPerhaps indirection should be split out as a separate node type, with an eye\nto allowing (arbitrary-expression)[42] someday.\n\nFor table references:\n\nCurrently, the table name associated with an unparsed statement is typically\njust a string. I propose replacing this with a RelationRef node type,\ncarrying a List of names corresponding to the dotted names of the reference\n(1 to 3 names). Alternatively, we could just use the raw List of names and\nnot bother with an explicit node; any preferences?\n\n\nAlso, I think we could retire the notion of \"relation vs. column\nprecedence\" in the parser. 
AFAICS the only place where transformExpr is\ntold EXPR_RELATION_FIRST is for processing an Attr's paramNo --- but\nthe ParamNo path through transformExpr never looks at the precedence!\nAccordingly, only the EXPR_COLUMN_FIRST cases are ever actually used\nanywhere, and there's no need for the notational cruft of passing\nprecedence around.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Mar 2002 17:52:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Planned cleanups in attribute parsing"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> On the way to supporting schemas, I am thinking about trying to make the\n> parsing of attributes a little more intelligible. The Attr node type\n> seems overused to mean several different things. I'd like to do the\n> following:\n> \n> For column references:\n> \n> Split \"Attr\" into three node types for different uses:\n> \n> Alias: for AS clauses. Carries a \"char *aliasname\" and a List of column\n> alias names. The current uses of Attr in range table entries would\n> become Alias.\n> \n> ColumnRef: for referencing a column (possibly qualified, possibly with\n> array subscripts) in the raw grammar output. Carries a List of names\n> which correspond to the dotted names (eg, a.b.c), plus a List of array\n> subscripting info (currently called \"indirection\" in Attr, but I wonder\n> if \"subscripts\" wouldn't be a more useful name).\n> \n> ParamRef: for referencing a parameter. Carries parameter number,\n> possibly-empty list of field names to qualify the param, and a subscript\n> list. The ParamNo node type goes away, to be merged into this.\n> \n> The Ident node type is not semantically distinct from ColumnRef with a\n> one-element name list. Probably should retire it.\n> \n\nThese sound good to me.\n\n> Perhaps indirection should be split out as a separate node type, with an eye\n> to allowing (arbitrary-expression)[42] someday.\n> \n> For table references:\n> \n> Currently, the table name associated with an unparsed statement is typically\n> just a string. I propose replacing this with a RelationRef node type,\n> carrying a List of names corresponding to the dotted names of the reference\n> (1 to 3 names). Alternatively, we could just use the raw List of names and\n> not bother with an explicit node; any preferences?\n> \n\nWe can handle most cases with RangeVar (+ the ones you've proposed\nabove).\nThe schema name will have to go into RangeVar anyway.\n\n\n> Also, I think we could retire the notion of \"relation vs. 
column\n> precedence\" in the parser. AFAICS the only place where transformExpr is\n> told EXPR_RELATION_FIRST is for processing an Attr's paramNo --- but\n> the ParamNo path through transformExpr never looks at the precedence!\n> Accordingly, only the EXPR_COLUMN_FIRST cases are ever actually used\n> anywhere, and there's no need for the notational cruft of passing\n> precedence around.\n> \n> Comments?\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Wed, 06 Mar 2002 15:16:26 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Planned cleanups in attribute parsing"
},
{
"msg_contents": "Fernando Nasser <fnasser@redhat.com> writes:\n> Tom Lane wrote:\n>> Currently, the table name associated with an unparsed statement is typically\n>> just a string. I propose replacing this with a RelationRef node type,\n>> carrying a List of names corresponding to the dotted names of the reference\n>> (1 to 3 names). Alternatively, we could just use the raw List of names and\n>> not bother with an explicit node; any preferences?\n\n> We can handle most cases with RangeVar (+ the ones you've proposed\n> above).\n\nRight, I had not noticed there was already a suitable node type.\nRangeVar will do fine, no need to invent RelationRef ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Mar 2002 15:28:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Planned cleanups in attribute parsing "
},
{
"msg_contents": "> On the way to supporting schemas, I am thinking about trying to make the\n> parsing of attributes a little more intelligible. The Attr node type\n> seems overused to mean several different things.\n\nRight.\n\n> For column references:\n> Split \"Attr\" into three node types for different uses:\n> Alias: for AS clauses. Carries a \"char *aliasname\" and a List of column\n> alias names. The current uses of Attr in range table entries would\n> become Alias.\n\nIs there a one-to-one relationship between the alias and the column? Or\ndoes the list of column names actually have more than one entry? Or is\nthe \"list of column alias names\" the qualified name of the column?\n\n> ColumnRef: for referencing a column (possibly qualified, possibly with\n> array subscripts) in the raw grammar output. Carries a List of names\n> which correspond to the dotted names (eg, a.b.c), plus a List of array\n> subscripting info (currently called \"indirection\" in Attr, but I wonder\n> if \"subscripts\" wouldn't be a more useful name).\n\nWould it be helpful to separate the name itself from the qualifying\nprefixes? istm that most use cases would require this anyway...\n\n> ParamRef: for referencing a parameter. Carries parameter number,\n> possibly-empty list of field names to qualify the param, and a subscript\n> list. The ParamNo node type goes away, to be merged into this.\n\nOK.\n\n> The Ident node type is not semantically distinct from ColumnRef with a\n> one-element name list. Probably should retire it.\n> Perhaps indirection should be split out as a separate node type, with an eye\n> to allowing (arbitrary-expression)[42] someday.\n\nOK.\n\n> Currently, the table name associated with an unparsed statement is typically\n> just a string. I propose replacing this with a RelationRef node type,\n> carrying a List of names corresponding to the dotted names of the reference\n> (1 to 3 names). 
Alternatively, we could just use the raw List of names and\n> not bother with an explicit node; any preferences?\n\nNodes are better imho.\n\n> Also, I think we could retire the notion of \"relation vs. column\n> precedence\" in the parser. AFAICS the only place where transformExpr is\n> told EXPR_RELATION_FIRST is for processing an Attr's paramNo --- but\n> the ParamNo path through transformExpr never looks at the precedence!\n> Accordingly, only the EXPR_COLUMN_FIRST cases are ever actually used\n> anywhere, and there's no need for the notational cruft of passing\n> precedence around.\n\nHmm. I can't think of a case where either columns *or* tables could be\nmentioned and where a table name would take precedence. otoh we should\ndecide pretty carefully that this will *never* happen before ripping too\nmuch out.\n\n - Thomas\n",
"msg_date": "Wed, 06 Mar 2002 14:53:25 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Planned cleanups in attribute parsing"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> Alias: for AS clauses. Carries a \"char *aliasname\" and a List of column\n>> alias names. The current uses of Attr in range table entries would\n>> become Alias.\n\n> Is there a one-to-one relationship between the alias and the column? Or\n> does the list of column names actually have more than one entry? Or is\n> the \"list of column alias names\" the qualified name of the column?\n\nBasically type Alias represents an AS clause, which can come in two\nflavors: just \"AS foo\", or \"AS foo(bar1,bar2,bar3)\" for renaming a\nFROM-list item along with its columns. So the list of names in this\ncase represents individual column names, *not* a qualified name.\nOne reason that I want to separate this from Attr is that the list\nof names has a totally different meaning from what it has in Attr.\n\n>> ColumnRef: for referencing a column (possibly qualified, possibly with\n>> array subscripts) in the raw grammar output. Carries a List of names\n>> which correspond to the dotted names (eg, a.b.c), plus a List of array\n>> subscripting info (currently called \"indirection\" in Attr, but I wonder\n>> if \"subscripts\" wouldn't be a more useful name).\n\n> Would it be helpful to separate the name itself from the qualifying\n> prefixes? istm that most use cases would require this anyway...\n\nRemember this is raw parsetree output; the grammar does not have a real\ngood idea which names are qualifiers and which are field names and/or\nfunction names. The parse analysis phase will rip the list apart and\ndetermine what's what. The output of that will be some other node type\n(eg, a Var).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Mar 2002 18:11:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Planned cleanups in attribute parsing "
}
] |
[
{
"msg_contents": "From what I read in the recent exchanges in the PostgreSQL vs ORACLE thread it would seem a good idea for the backend to keep track of the number of update performed on a database and after a certain threshold start a vacuum in a separate process by itself. \nAny comments?\n\n\n\n\n\n\n\n\nFrom what I read in the recent exchanges in the \nPostgreSQL vs ORACLE thread it would seem a good idea for the backend to keep \ntrack of the number of update performed on a database and after a certain \nthreshold start a vacuum in a separate process by itself. \nAny comments?",
"msg_date": "Tue, 5 Mar 2002 12:14:42 +1100",
"msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>",
"msg_from_op": true,
"msg_subject": "Postgresql backend to perform vacuum automatically"
},
{
"msg_contents": "Nicolas Bazin wrote:\n> >From what I read in the recent exchanges in the PostgreSQL vs ORACLE thread it would seem a good idea for the backend to keep track of the number of update performed on a database and after a certain threshold start a vacuum in a separate process by itself.\n> Any comments?\n\nYes, makes sense to me, especially now that we have a nolocking vacuum. \nTom, do you have any ideas on this?\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 4 Mar 2002 20:53:52 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Nicolas Bazin wrote:\n> From what I read in the recent exchanges in the PostgreSQL vs ORACLE thread it would seem a good idea for the backend to keep track of the number of update performed on a database and after a certain threshold start a vacuum in a separate process by itself.\n>> Any comments?\n\n> Yes, makes sense to me, especially now that we have a nolocking vacuum. \n> Tom, do you have any ideas on this?\n\nThere's a TODO item about this already.\n\n* Provide automatic scheduling of background vacuum (Tom)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Mar 2002 00:33:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically "
},
{
"msg_contents": "> > Yes, makes sense to me, especially now that we have a nolocking vacuum.\n> > Tom, do you have any ideas on this?\n>\n> There's a TODO item about this already.\n>\n> * Provide automatic scheduling of background vacuum (Tom)\n\nI think an good system would have some parameters something like this in\npostgresql.conf:\n\nvacuum_update_threshold\n= num of operations that cause frags on a table before a vacuum is run, 0 =\nno requirement, if followed by a percent (%) sign, indicates percentage of\nrows changed, rather than an absolute number\n\nvacuum_idle_threshold\n= system load below which the system must be before a vacuum can be\nperformed. 0 = no requirement\n\nvacuum_time_threshold\n= seconds since last vacuum before which another vacuum cannot occur. 0 = no\nrequirement.\n\nIf all 3 are 0, then no auto-vacuuming is performed at all. There's\nprobably trouble if only the idle threshold is set to zero.\n\nAnd the same for the 'analyze' command?\n\nIf they want it on a per-table basis, then they can just do it themselves\nwith a cronjob!\n\nChris\n\n\n",
"msg_date": "Tue, 5 Mar 2002 14:03:52 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically "
},
{
"msg_contents": "> vacuum_idle_threshold\n> = system load below which the system must be before a vacuum can be\n> performed. 0 = no requirement\n> \n> vacuum_time_threshold\n> = seconds since last vacuum before which another vacuum cannot occur. 0 = no\n> requirement.\n> \n> If all 3 are 0, then no auto-vacuuming is performed at all. There's\n> probably trouble if only the idle threshold is set to zero.\n> \n> And the same for the 'analyze' command?\n> \n> If they want it on a per-table basis, then they can just do it themselves\n> with a cronjob!\n\nYes, and Jan's statistics stuff has stats on table activity.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 5 Mar 2002 01:05:06 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Nicolas Bazin wrote:\n> > From what I read in the recent exchanges in the PostgreSQL vs ORACLE thread it would seem a good idea for the backend to keep track of the number of update performed on a database and after a certain threshold start a vacuum in a separate process by itself.\n> >> Any comments?\n> \n> > Yes, makes sense to me, especially now that we have a nolocking vacuum.\n> > Tom, do you have any ideas on this?\n> \n> There's a TODO item about this already.\n> \n> * Provide automatic scheduling of background vacuum (Tom)\n\nI have been thinking about this. I like the idea, but it may be problematic.\n\nI suggested running a vacuum process on a constant low priority in the\nbackground, and it was pointed out that this may cause some deadlock issues.\n\nFor vacuum to run in the background, it needs to be more regulated or targeted.\n\nIt needs to be able to know which tables need it. Many tables are static and\nnever get updated, vacuuming them would be pointless. It needs to be sensitive\nto database load, and be tunable to vacuum only when safe or necessary to\nreduce load.\n\nAre there tables that track information that would be useful for guiding\nvacuum? Could I write a program which queries some statistical tables and and\nknows which tables need to be vacuumed?\n\nIf the info is around, I could whip up something pretty easily, I think.\n",
"msg_date": "Tue, 05 Mar 2002 09:15:02 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically"
},
{
"msg_contents": "> Are there tables that track information that would be useful for guiding\n> vacuum? Could I write a program which queries some statistical tables and and\n> knows which tables need to be vacuumed?\n> \n> If the info is around, I could whip up something pretty easily, I think.\n\nSure, Jan's new statistics tables in 7.2 had just that info.\n\t\n\ttest=> \\d pg_stat_user_tables\n\t View \"pg_stat_user_tables\"\n\t Column | Type | Modifiers \n\t---------------+---------+-----------\n\t relid | oid | \n\t relname | name | \n\t seq_scan | bigint | \n\t seq_tup_read | bigint | \n\t idx_scan | numeric | \n\t idx_tup_fetch | numeric | \n\t n_tup_ins | bigint | \n\t n_tup_upd | bigint | \n\t n_tup_del | bigint | \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 5 Mar 2002 13:06:30 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Are there tables that track information that would be useful for guiding\n> > vacuum? Could I write a program which queries some statistical tables and and\n> > knows which tables need to be vacuumed?\n> >\n> > If the info is around, I could whip up something pretty easily, I think.\n> \n> Sure, Jan's new statistics tables in 7.2 had just that info.\n> \n> test=> \\d pg_stat_user_tables\n> View \"pg_stat_user_tables\"\n> Column | Type | Modifiers\n> ---------------+---------+-----------\n> relid | oid |\n> relname | name |\n> seq_scan | bigint |\n> seq_tup_read | bigint |\n> idx_scan | numeric |\n> idx_tup_fetch | numeric |\n> n_tup_ins | bigint |\n> n_tup_upd | bigint |\n> n_tup_del | bigint |\n\nI have a system running 7.2b2, does it update these fields? Is it an option?\nAll I am getting is zeros.\n\n relid | relname | seq_scan | seq_tup_read | idx_scan |\nidx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del\n-----------+----------------------+----------+--------------+----------+---------------+-----------+-----------+-----------\n 16559 | snf_cache | 0 | 0 | 0\n| 0 | 0 | 0 | 0\n 8689760 | fav_stat | 0 | 0 | \n| | 0 | 0 | 0\n 19174244 | mbr_art_aff | 0 | 0 | 0\n| 0 | 0 | 0 | 0\n 20788376 | artist_affinity | 0 | 0 | 0\n| 0 | 0 | 0 | 0\n 83345144 | saffweak | 0 | 0 | \n| | 0 | 0 | 0\n 94811871 | pga_queries | 0 | 0 | \n| | 0 | 0 | 0\n 94812980 | pga_forms | 0 | 0 | \n| | 0 | 0 | 0\n 94813425 | pga_scripts | 0 | 0 | \n| | 0 | 0 | 0\n 94813869 | pga_reports | 0 | 0 | \n| | 0 | 0 | 0\n 94814245 | pga_schema | 0 | 0 | \n| | 0 | 0 | 0\n 116675008 | int_song_affinity | 0 | 0 | \n| | 0 | 0 | 0\n 166147508 | favorites | 0 | 0 | 0\n| 0 | 0 | 0 | 0\n 173869647 | song_affinity_orig_t | 0 | 0 | \n| | 0 | 0 | 0\n 176339567 | song_affinity | 0 | 0 | 0\n| 0 | 0 | 0 | 0\n 178658941 | song_affinity_array | 0 | 0 | 0\n| 0 | 0 | 0 | 0\n 186403716 | mbr_art_aff_t | 0 | 0 | \n| | 0 | 0 | 0\n",
"msg_date": "Tue, 05 Mar 2002 13:16:25 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically"
},
{
"msg_contents": "\nYes, I am seeing only zeros too. Jan?\n\n---------------------------------------------------------------------------\n\nmlw wrote:\n> Bruce Momjian wrote:\n> > \n> > > Are there tables that track information that would be useful for guiding\n> > > vacuum? Could I write a program which queries some statistical tables and and\n> > > knows which tables need to be vacuumed?\n> > >\n> > > If the info is around, I could whip up something pretty easily, I think.\n> > \n> > Sure, Jan's new statistics tables in 7.2 had just that info.\n> > \n> > test=> \\d pg_stat_user_tables\n> > View \"pg_stat_user_tables\"\n> > Column | Type | Modifiers\n> > ---------------+---------+-----------\n> > relid | oid |\n> > relname | name |\n> > seq_scan | bigint |\n> > seq_tup_read | bigint |\n> > idx_scan | numeric |\n> > idx_tup_fetch | numeric |\n> > n_tup_ins | bigint |\n> > n_tup_upd | bigint |\n> > n_tup_del | bigint |\n> \n> I have a system running 7.2b2, does it update these fields? Is it an option?\n> All I am getting is zeros.\n> \n> relid | relname | seq_scan | seq_tup_read | idx_scan |\n> idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del\n> -----------+----------------------+----------+--------------+----------+---------------+-----------+-----------+-----------\n> 16559 | snf_cache | 0 | 0 | 0\n> | 0 | 0 | 0 | 0\n> 8689760 | fav_stat | 0 | 0 | \n> | | 0 | 0 | 0\n> 19174244 | mbr_art_aff | 0 | 0 | 0\n> | 0 | 0 | 0 | 0\n> 20788376 | artist_affinity | 0 | 0 | 0\n> | 0 | 0 | 0 | 0\n> 83345144 | saffweak | 0 | 0 | \n> | | 0 | 0 | 0\n> 94811871 | pga_queries | 0 | 0 | \n> | | 0 | 0 | 0\n> 94812980 | pga_forms | 0 | 0 | \n> | | 0 | 0 | 0\n> 94813425 | pga_scripts | 0 | 0 | \n> | | 0 | 0 | 0\n> 94813869 | pga_reports | 0 | 0 | \n> | | 0 | 0 | 0\n> 94814245 | pga_schema | 0 | 0 | \n> | | 0 | 0 | 0\n> 116675008 | int_song_affinity | 0 | 0 | \n> | | 0 | 0 | 0\n> 166147508 | favorites | 0 | 0 | 0\n> | 0 | 0 | 0 | 0\n> 173869647 | song_affinity_orig_t | 0 | 0 | \n> | | 0 | 
0 | 0\n> 176339567 | song_affinity | 0 | 0 | 0\n> | 0 | 0 | 0 | 0\n> 178658941 | song_affinity_array | 0 | 0 | 0\n> | 0 | 0 | 0 | 0\n> 186403716 | mbr_art_aff_t | 0 | 0 | \n> | | 0 | 0 | 0\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 5 Mar 2002 13:23:53 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Yes, I am seeing only zeros too. Jan?\n\nDid you turn on statistics gathering? See the Admin Guide's discussion\nof database monitoring.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Mar 2002 13:36:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Yes, I am seeing only zeros too. Jan?\n> \n> Did you turn on statistics gathering? See the Admin Guide's discussion\n> of database monitoring.\n\nOops, now I remember. Those are off by default and only the query\nstring is on by default. Thanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 5 Mar 2002 13:37:24 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Yes, I am seeing only zeros too. Jan?\n> >\n> > Did you turn on statistics gathering? See the Admin Guide's discussion\n> > of database monitoring.\n> \n> Oops, now I remember. Those are off by default and only the query\n> string is on by default. Thanks.\n\nThis raises the question, by turning these on, does that affect database\nperformance?\n\nIf so, it may not be the answer for a selective vacuum.\n\nIf they do not affect performance, then why have them off?\n",
"msg_date": "Tue, 05 Mar 2002 14:21:22 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically"
},
{
"msg_contents": "mlw wrote:\n> Bruce Momjian wrote:\n> > \n> > Tom Lane wrote:\n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > Yes, I am seeing only zeros too. Jan?\n> > >\n> > > Did you turn on statistics gathering? See the Admin Guide's discussion\n> > > of database monitoring.\n> > \n> > Oops, now I remember. Those are off by default and only the query\n> > string is on by default. Thanks.\n> \n> This raises the question, by turning these on, does that affect database\n> performance?\n> \n> If so, it may not be the answer for a selective vacuum.\n> \n> If they do not affect performance, then why have them off?\n\nI think Jan said 2-3%. If we can get autovacuum from it, it would be a\nwin to keep it on all the time, perhaps.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 5 Mar 2002 14:28:14 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> mlw wrote:\n> > Bruce Momjian wrote:\n> > >\n> > > Tom Lane wrote:\n> > > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > > Yes, I am seeing only zeros too. Jan?\n> > > >\n> > > > Did you turn on statistics gathering? See the Admin Guide's discussion\n> > > > of database monitoring.\n> > >\n> > > Oops, now I remember. Those are off by default and only the query\n> > > string is on by default. Thanks.\n> >\n> > This raises the question, by turning these on, does that affect database\n> > performance?\n> >\n> > If so, it may not be the answer for a selective vacuum.\n> >\n> > If they do not affect performance, then why have them off?\n> \n> I think Jan said 2-3%. If we can get autovacuum from it, it would be a\n> win to keep it on all the time, perhaps.\n\nAssuming that the statistics get updated:\n\nHow often should the sats table be queried?\nWhat sort of configurability would be needed?\n",
"msg_date": "Tue, 05 Mar 2002 15:49:30 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically"
},
{
"msg_contents": "> > > If they do not affect performance, then why have them off?\n> > \n> > I think Jan said 2-3%. If we can get autovacuum from it, it would be a\n> > win to keep it on all the time, perhaps.\n> \n> Assuming that the statistics get updated:\n> \n> How often should the sats table be queried?\n> What sort of configurability would be needed?\n\nYou could wake up every few minutes and see how the values have changed.\nI don't remember if there is a way to clear that stats so you can see\njust the changes in the past five minutes. Vacuum the table that had\nactivity.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 5 Mar 2002 15:59:08 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically"
},
{
"msg_contents": "On Tue, 2002-03-05 at 15:59, Bruce Momjian wrote:\n> > > > If they do not affect performance, then why have them off?\n> > > \n> > > I think Jan said 2-3%. If we can get autovacuum from it, it would be a\n> > > win to keep it on all the time, perhaps.\n> > \n> > Assuming that the statistics get updated:\n> > \n> > How often should the sats table be queried?\n> > What sort of configurability would be needed?\n> \n> You could wake up every few minutes and see how the values have changed.\n> I don't remember if there is a way to clear that stats so you can see\n> just the changes in the past five minutes. Vacuum the table that had\n> activity.\n\nIck -- polling. The statistics process should be able to wake somebody\nup / notify the postmaster when the statistics change such that a vacuum\nis required.\n\nNeil\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: npadgett@redhat.com\n2323 Yonge Street, Suite #300, \nToronto, ON M4P 2C9\n\n",
"msg_date": "05 Mar 2002 16:42:25 -0500",
"msg_from": "Neil Padgett <npadgett@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically"
},
{
"msg_contents": "Neil Padgett wrote:\n> On Tue, 2002-03-05 at 15:59, Bruce Momjian wrote:\n> > > > > If they do not affect performance, then why have them off?\n> > > > \n> > > > I think Jan said 2-3%. If we can get autovacuum from it, it would be a\n> > > > win to keep it on all the time, perhaps.\n> > > \n> > > Assuming that the statistics get updated:\n> > > \n> > > How often should the sats table be queried?\n> > > What sort of configurability would be needed?\n> > \n> > You could wake up every few minutes and see how the values have changed.\n> > I don't remember if there is a way to clear that stats so you can see\n> > just the changes in the past five minutes. Vacuum the table that had\n> > activity.\n> \n> Ick -- polling. The statistics process should be able to wake somebody\n> up / notify the postmaster when the statistics change such that a vacuum\n> is required.\n\nYes, that would tie that stats collector closer to auto-vacuum, but it\ncertainly could be done.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 5 Mar 2002 16:54:47 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Neil Padgett wrote:\n> > On Tue, 2002-03-05 at 15:59, Bruce Momjian wrote:\n> > > > > > If they do not affect performance, then why have them off?\n> > > > >\n> > > > > I think Jan said 2-3%. If we can get autovacuum from it, it would be a\n> > > > > win to keep it on all the time, perhaps.\n> > > >\n> > > > Assuming that the statistics get updated:\n> > > >\n> > > > How often should the sats table be queried?\n> > > > What sort of configurability would be needed?\n> > >\n> > > You could wake up every few minutes and see how the values have changed.\n> > > I don't remember if there is a way to clear that stats so you can see\n> > > just the changes in the past five minutes. Vacuum the table that had\n> > > activity.\n> >\n> > Ick -- polling. The statistics process should be able to wake somebody\n> > up / notify the postmaster when the statistics change such that a vacuum\n> > is required.\n> \n> Yes, that would tie that stats collector closer to auto-vacuum, but it\n> certainly could be done.\n\nUsing an alert can be done, but polling is easier for a proof of concept. I\ndont see too much difficulty there. We could use notify.\n",
"msg_date": "Tue, 05 Mar 2002 16:56:24 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically"
},
{
"msg_contents": ">>>>> \"Bruce\" == Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n >> Are there tables that track information that would be useful\n >> for guiding vacuum? Could I write a program which queries some\n >> statistical tables and and knows which tables need to be\n >> vacuumed?\n >> \n >> If the info is around, I could whip up something pretty easily,\n >> I think.\n\n Bruce> Sure, Jan's new statistics tables in 7.2 had just that\n Bruce> info.\n\t\n Bruce> \ttest=> \\d pg_stat_user_tables\n\nI don't have that table. Only 'pg_statistic'... ?\n-- \nNoriega Peking smuggle Iran jihad spy domestic disruption Cocaine CIA\nkibo Waco, Texas toluene iodine Qaddafi FSF\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n",
"msg_date": "06 Mar 2002 10:06:29 +0100",
"msg_from": "Turbo Fredriksson <turbo@bayour.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically"
},
{
"msg_contents": "Turbo Fredriksson wrote:\n> >>>>> \"Bruce\" == Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> >> Are there tables that track information that would be useful\n> >> for guiding vacuum? Could I write a program which queries some\n> >> statistical tables and and knows which tables need to be\n> >> vacuumed?\n> >> \n> >> If the info is around, I could whip up something pretty easily,\n> >> I think.\n> \n> Bruce> Sure, Jan's new statistics tables in 7.2 had just that\n> Bruce> info.\n> \t\n> Bruce> \ttest=> \\d pg_stat_user_tables\n> \n> I don't have that table. Only 'pg_statistic'... ?\n\nUpgrade to 7.2.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 6 Mar 2002 09:35:17 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically"
},
{
"msg_contents": ">>>>> \"Bruce\" == Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n Bruce> Turbo Fredriksson wrote:\n >> >>>>> \"Bruce\" == Bruce Momjian <pgman@candle.pha.pa.us> writes:\n >> \n >> >> Are there tables that track information that would be useful\n >> >> for guiding vacuum? Could I write a program which queries\n >> some >> statistical tables and and knows which tables need to\n >> be >> vacuumed?\n >> >> \n >> >> If the info is around, I could whip up something pretty\n >> easily, >> I think.\n >> \n Bruce> Sure, Jan's new statistics tables in 7.2 had just that\n Bruce> info.\n >>\n Bruce> test=> \\d pg_stat_user_tables\n >> I don't have that table. Only 'pg_statistic'... ?\n\n Bruce> Upgrade to 7.2.\n\nDoh! I have :)\n-- \nFSF Peking AK-47 Nazi arrangements quiche explosion jihad ammunition\nNorth Korea toluene spy Treasury FBI congress\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n",
"msg_date": "06 Mar 2002 17:18:15 +0100",
"msg_from": "Turbo Fredriksson <turbo@bayour.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically"
}
] |
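The thread above settles on polling pg_stat_user_tables every few minutes and vacuuming tables whose counters moved. A minimal C sketch of one possible decision rule for such a poller; the function name and the 5%-of-live-rows threshold are illustrative assumptions, not anything PostgreSQL defines:

```c
#include <stdbool.h>

/* Decide whether a table is worth vacuuming, given the change in its
 * pg_stat_user_tables counters (n_tup_upd, n_tup_del) since the last
 * poll.  Updates and deletes both leave dead tuples behind; the 5%
 * threshold (dead > live/20) is an arbitrary illustrative choice. */
static bool
table_needs_vacuum(long n_live_tuples, long tup_upd_delta, long tup_del_delta)
{
    long dead_estimate = tup_upd_delta + tup_del_delta;

    if (n_live_tuples <= 0)
        return dead_estimate > 0;

    /* vacuum once dead tuples exceed ~5% of the live rows */
    return dead_estimate * 20 > n_live_tuples;
}
```

A polling daemon would snapshot the counters on each wakeup, feed the deltas to a rule like this, and issue VACUUM for each table that passes.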
[
{
"msg_contents": "I'd like to take a stab at getting dependency tracking going. Have a\nweek of evenings ahead of me and a couple of weekends...\n\nMy short term goal is simply to have all items which can hold comments\n(most) store what they depend on during their creation, and to remove\nthose associations on their destruction. No visible changes to the\nfrontend or users in any shape or form, but will allow stuff to be\nbuilt on it.\n\nLong term goal is to implement RESTRICT and CASCADE on all of those\nitems (in particular those damn SERIALs, which should have the side\neffect of allowing me to shrink my NAMEDATALEN value (avoids naming\nconflicts) ).\n\nTo accomplish this I'm going to create a table called pg_depends which\nlooks very similar to pg_description doubed.\n\nclassoid - Class of the depender\nobjoid - ID of the depender object\nobjsubid - SubID (See pg_description)\ndepclassoid - Class of the dependee\ndepobjoid - ID of the dependee object\ndepobjsubid - SubID of the dependee\nWITHOUT OIDS\n\nIt's assummed there is only one kind of dependency, that which is\nrequired. No optional relations. Dependencies are also direct, that\nis the smallest step possible to the immediate item. A CHECK\nconstraint may depend on 2 columns of a table, but never on the table\nitself. The table can be modified without affecting the check -- add\ncolumn -- but the columns cannot.\n\n\nThe postgresql backend will require a few functions to operate\ndependencies.\n\ndependCreate(depender, dependee)\n - Create a dependency between a depender and dependee.\n - Called during the creation of any objects -- possibly several\ntimes -- to show dependencies on various items (columns on types for\nexample, or types on functions\n - Assumes both the depender and dependee exist (does no lookups to\nensure they do).\n\ndependUpdate(old, new)\n - Enables mapping an old Object Address (classid, objid, objsubid)\nto a new one\n - Is this even required? 
I'm thinking CREATE OR REPLACE type\nstuff -- but those shouldn't change the IDs, right?\n\ndependDelete(object being dropped, RestrictOrCascade)\n - Drop a dependency between a depender and dependee.\n - Enable Restrictions (abort transaction on found dependency\nduring a delete)\n - Enables cascades (drop everything in our way via the big switch\nstatement)\n - Would it be better to drill down to the end points and start\ndropping or start from the top in the event of an object which cannot\nbe dropped?\n\nNOTES:\n- Speed hit. All drops and creations will take a small speed hit.\nWon't affect selects, inserts, deletes or updates though.\n- Syntax changes in a lot of places to add RESTRICT OR CASCADE type\nelements. Do we go with restrict by default and not enforce? I'd\nprefer to enforce the user to choose as that's spec.\n- Creation of src/backend/catalogs/depend.c for the above functions.\n- Did I miss anything?\n- pg_depends will be preloaded with association data between\nfunctions, types and attributes (not looking forward to this step)\n- Although items may reference others directly (tables to columns) I'm\ngoing to add it to pg_depends anyway as a second copy HOWEVER when\nprocessing with dependDelete() it will skip testing the columns, and\nmove directly to the columns' dependencies. Is this silly? Kind of a\nspecial case. Can anything depend on a table directly (rather than a\ncolumn within a table)? Types to functions should definitely be in\nhere (drop the function, lose the type on cascade)\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n\n",
"msg_date": "Mon, 4 Mar 2002 21:16:43 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Dependencies with pg_depends"
}
] |
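Rod's dependDelete above distinguishes RESTRICT (abort as soon as any dependency is found) from CASCADE (drop everything in the way, dependers first). A toy C model of that walk over an in-memory dependency table; the fixed-size matrix, object ids, and names are illustrative only, not the proposed pg_depends layout:

```c
#include <stdbool.h>

#define MAX_OBJECTS 16

/* depends_on[a][b] means object a depends on object b, so dropping b
 * must either fail (RESTRICT) or also drop a (CASCADE).  Assumes the
 * dependency graph is acyclic, as catalog dependencies are. */
static bool depends_on[MAX_OBJECTS][MAX_OBJECTS];
static bool dropped[MAX_OBJECTS];

typedef enum { DROP_RESTRICT, DROP_CASCADE } DropBehavior;

/* Returns true if the drop succeeded.  RESTRICT fails on the first
 * live depender found; CASCADE recursively drops dependers first. */
static bool
drop_object(int obj, DropBehavior behavior)
{
    for (int d = 0; d < MAX_OBJECTS; d++)
    {
        if (depends_on[d][obj] && !dropped[d])
        {
            if (behavior == DROP_RESTRICT)
                return false;
            if (!drop_object(d, behavior))
                return false;
        }
    }
    dropped[obj] = true;
    return true;
}
```

For example, with a table (0), a column on it (1), and a CHECK constraint on the column (2), a RESTRICT drop of the table fails, while a CASCADE drop removes all three.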
[
{
"msg_contents": "I've attached a patch which implements Bob Jenkin's hash function for\nPostgreSQL. This hash function replaces the one used by hash indexes and\nthe catalog cache. Hash joins use a different, relatively poor-quality\nhash function, but I'll fix that later.\n\nAs suggested by Tom Lane, this patch also changes the size of the fixed\nhash table used by the catalog cache to be a power-of-2 (instead of a\nprime: I chose 256 instead of 257). This allows the catcache to lookup\nhash buckets using a simple bitmask. This should improve the performance\nof the catalog cache slightly, since the previous method (modulo a\nprime) was slow.\n\nIn my tests, this improves the performance of hash indexes by between 4%\nand 8%; the performance when using btree indexes or seqscans is\nbasically unchanged.\n\nUnless anyone seems a problem, please apply.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC",
"msg_date": "04 Mar 2002 21:36:09 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": true,
"msg_subject": "new hash function"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nNeil Conway wrote:\n> I've attached a patch which implements Bob Jenkin's hash function for\n> PostgreSQL. This hash function replaces the one used by hash indexes and\n> the catalog cache. Hash joins use a different, relatively poor-quality\n> hash function, but I'll fix that later.\n> \n> As suggested by Tom Lane, this patch also changes the size of the fixed\n> hash table used by the catalog cache to be a power-of-2 (instead of a\n> prime: I chose 256 instead of 257). This allows the catcache to lookup\n> hash buckets using a simple bitmask. This should improve the performance\n> of the catalog cache slightly, since the previous method (modulo a\n> prime) was slow.\n> \n> In my tests, this improves the performance of hash indexes by between 4%\n> and 8%; the performance when using btree indexes or seqscans is\n> basically unchanged.\n> \n> Unless anyone seems a problem, please apply.\n> \n> Cheers,\n> \n> Neil\n> \n> -- \n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 5 Mar 2002 01:35:50 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: new hash function"
},
{
"msg_contents": "...\n> In my tests, this improves the performance of hash indexes by between 4%\n> and 8%; the performance when using btree indexes or seqscans is\n> basically unchanged.\n\nThat's great! Picking up 5% performance is not something we do every\nday...\n\n - Thomas\n",
"msg_date": "Tue, 05 Mar 2002 05:44:47 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] new hash function"
},
{
"msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nNeil Conway wrote:\n> I've attached a patch which implements Bob Jenkin's hash function for\n> PostgreSQL. This hash function replaces the one used by hash indexes and\n> the catalog cache. Hash joins use a different, relatively poor-quality\n> hash function, but I'll fix that later.\n> \n> As suggested by Tom Lane, this patch also changes the size of the fixed\n> hash table used by the catalog cache to be a power-of-2 (instead of a\n> prime: I chose 256 instead of 257). This allows the catcache to lookup\n> hash buckets using a simple bitmask. This should improve the performance\n> of the catalog cache slightly, since the previous method (modulo a\n> prime) was slow.\n> \n> In my tests, this improves the performance of hash indexes by between 4%\n> and 8%; the performance when using btree indexes or seqscans is\n> basically unchanged.\n> \n> Unless anyone seems a problem, please apply.\n> \n> Cheers,\n> \n> Neil\n> \n> -- \n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 6 Mar 2002 15:49:37 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: new hash function"
}
] |
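The power-of-2 sizing Neil mentions pays off because taking a hash modulo a power of 2 reduces to a single bitwise AND against one less than the bucket count. A minimal C sketch (the function name is illustrative):

```c
#include <stdint.h>

/* With a power-of-2 number of buckets, "hash % NUM_BUCKETS" is the
 * same as "hash & (NUM_BUCKETS - 1)", replacing an integer division
 * with a single AND.  256 matches the catcache size in the patch. */
#define NUM_BUCKETS 256   /* must be a power of 2 */

static uint32_t
bucket_index(uint32_t hash)
{
    return hash & (NUM_BUCKETS - 1);
}
```

With 256 buckets, the mask 255 keeps only the low 8 bits of the hash, which is exactly the remainder modulo 256. This only works for power-of-2 sizes; a prime bucket count like 257 forces a real modulo.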
[
{
"msg_contents": "-----Original Message-----\nFrom: pgsql-jdbc-owner@postgresql.org\n[mailto:pgsql-jdbc-owner@postgresql.org]On Behalf Of Dave Cramer\nSent: Monday, March 04, 2002 4:58 PM\nTo: 'Hany Ziad'; pgsql-jdbc@postgresql.org\nSubject: Re: [JDBC] DB mirroring\n\n\nHany,\n\nActually IMHO the best way to do this is with database mirroring at the\nbackend. There is a project underway to provide mirroring but it is not\nfinished. Try on the hackers list to see the status, or\ngborg.postgresql.org\n\nDave\n\n-----Original Message-----\nFrom: pgsql-jdbc-owner@postgresql.org\n[mailto:pgsql-jdbc-owner@postgresql.org] On Behalf Of Hany Ziad\nSent: Wednesday, February 27, 2002 2:09 PM\nTo: pgsql-jdbc@postgresql.org\nSubject: [JDBC] DB mirroring\n\n\nHi everyone,\n\n I am new to the PostGres and I am writing in Java and JDBC.\n\n My application consists of several sites, each with a DB server with\nthin clients. When the user finishes work in a site, he moves towards\nanother site with the same architecture.\n The problem I am facing is that the user needs to find his DB updated\nin each site he logs into. He needs to find even the newest updates he\ndid in the previous site.\n So, I thought about making the recent changes in the DB available on\nan authenticated web site, that can be accessed when the user starts a\nsession and then the changes are downloaded and then reflected on to\nthe DB. When the user terminates the session, the updates he made are\nuploaded to the web site for future use and so on.\n\n Am I on the right track? If so, how can I monitor these changes?\n\n How can I update the older DB?\n\n Can \"Batch updates\" do the job?\n\n\nHelp pls,\n\nH. ZIAD\nincode co.\n\n",
"msg_date": "Tue, 5 Mar 2002 12:08:14 +0200",
"msg_from": "\"info\" <info@incode.com.eg>",
"msg_from_op": true,
"msg_subject": "FW: Re: [JDBC] DB mirroring"
}
] |
[
{
"msg_contents": "Hello help wanted asap for this one please.\nI have installed POstgres on my own machine and i am\nin charge of its settings.\n\nthe problem is when i try and run a small piece of c\ncode i get an error which states the following:\ncc testing_connection.c -L/usr/local/pgsql/lib -lpq \n-I/usr/local/pgsql/include -o fred -O0 \n$ ./fred\n./fred: error in loading shared libraries: libpq.so.2:\ncannot open shared object file: No such file or\ndirectory\n\n\nthe program followed all the steps set out in the\npostgres documentation on building Libpq programs.\nBut i still get the above??\nAny suggestions please\n\n__________________________________________________\nDo You Yahoo!?\nTry FREE Yahoo! Mail - the world's greatest free email!\nhttp://mail.yahoo.com/\n",
"msg_date": "Tue, 5 Mar 2002 03:26:46 -0800 (PST)",
"msg_from": "Pa O'Clerigh <switswoo79@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Help Wanted for running C code"
},
{
"msg_contents": "Hi,\n\nYou should probably post this on -general or some other list, but I believe\nthe answer is that the postgres libraries cannot be found by the dynamic\nlibrary loader. You can fix this by:\n\n* Including the PGINSTALLDIR/lib directory in your LD_LIBRARY_PATH\n\nor\n\n* Editing your /etc/ld.so.conf to include the above path and running\nldconfig\n\nor\n\n* symlinking from an existing library path (/usr/lib etc) to this file.\n\nRegards,\n\nMark\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Pa O'Clerigh\n> Sent: Tuesday, 5 March 2002 10:27 PM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] Help Wanted for running C code\n>\n>\n> Hello help wanted asap for this one please.\n> I have installed POstgres on my own machine and i am\n> in charge of its settings.\n>\n> the problem is when i try and run a small piece of c\n> code i get an error which states the following:\n> cc testing_connection.c -L/usr/local/pgsql/lib -lpq\n> -I/usr/local/pgsql/include -o fred -O0\n> $ ./fred\n> ./fred: error in loading shared libraries: libpq.so.2:\n> cannot open shared object file: No such file or\n> directory\n>\n>\n> the program followed all the steps set out in the\n> postgres documentation on building Libpq programs.\n> But i still get the above??\n> Any suggestions please\n>\n> __________________________________________________\n> Do You Yahoo!?\n> Try FREE Yahoo! Mail - the world's greatest free email!\n> http://mail.yahoo.com/\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n",
"msg_date": "Wed, 13 Mar 2002 11:07:53 +1100",
"msg_from": "\"Mark Pritchard\" <mark@tangent.net.au>",
"msg_from_op": false,
"msg_subject": "Re: Help Wanted for running C code"
},
{
"msg_contents": "On Tue, Mar 05, 2002 at 03:26:46AM -0800, Pa O'Clerigh wrote:\n... \n> the problem is when i try and run a small piece of c\n> code i get an error which states the following:\n> cc testing_connection.c -L/usr/local/pgsql/lib -lpq \n> -I/usr/local/pgsql/include -o fred -O0 \n> $ ./fred\n> ./fred: error in loading shared libraries: libpq.so.2:\n> cannot open shared object file: No such file or\n> directory\n...\n\nWhat does\n\n ldd fred\n file fred\n\ntell you? Maybe\n\n cc testing_connection -L/usr/local/pgsql/lib -Wl,-R/usr/local/pgsql/lib -lpq\\\n -I/usr/local/pgsql/include -o fred -O0\n\n?\n\nPatrick\n",
"msg_date": "Wed, 13 Mar 2002 00:10:12 +0000",
"msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>",
"msg_from_op": false,
"msg_subject": "Re: Help Wanted for running C code"
},
{
"msg_contents": "Pa O'Clerigh writes:\n\n> the problem is when i try and run a small piece of c\n> code i get an error which states the following:\n> cc testing_connection.c -L/usr/local/pgsql/lib -lpq\n> -I/usr/local/pgsql/include -o fred -O0\n> $ ./fred\n> ./fred: error in loading shared libraries: libpq.so.2:\n> cannot open shared object file: No such file or\n> directory\n\nAdd -Wl,-rpath,/usr/local/pgsql/lib, or set and export\nLD_LIBRARY_PATH=/usr/local/psgql/lib, or modify /etc/ld.so.conf and run\nldconfig.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 12 Mar 2002 19:36:10 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Help Wanted for running C code"
}
] |
[
{
"msg_contents": "> I guess you are inserting correct EUC Traditional\n> Chinese (EUC-TW)\n> characters but hard to tell what is happening unless\n> you are showing\n> us the character sequences in hexa decimal format.\n> --\n> Tatsuo Ishii\n> ===============================\n> Many thanks! Tatsuo,\n> \n> Please see below. Best Regards,\n> \n> CN\n> ---------------\n> linux:~$ cat /tmp/tt\n> 1111\n> ¦¨¥\\\n> ³\\\n> 2222\n> linux:~$ od -t x /tmp/tt\n> 0000000 31313131 a5a8a60a 5cb30a5c 3232320a\n> 0000020 00000a32\n> 0000022\n\nAre you sure that they are EUC-TW? Considering the byte swapping, they\nare actually like this:\n\n0x31,0x31,0x31,0x31,0x0a,\n0xa6,0xa8,0xa5,0x5c,0x0a,\n0xb3,0x5c,0x0a,\n0x32,0x32,0x32,0x32,0x0a\n\nHere we see a55c and b35c, which should never happen in EUC-TW, since\nthe each second byte is lower than 0x80.\nI guess they are BIG5. If my guess is correct, you could set the\nclient encoding to BIG5 (\"\\encoding BIG5\" in psql) and get correct\nresult.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 06 Mar 2002 00:01:27 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Rep:Re: [BUGS] Encoding Problem?"
}
] |
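Tatsuo's diagnosis above rests on a byte-level property: in EUC-TW every byte of a multibyte character has its high bit set, while BIG5 allows ASCII-range second bytes such as 0x5c. A C sketch of that check as a heuristic only — real EUC-TW validation must also handle the 4-byte SS2 (0x8e) sequences this ignores:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Returns false if any high-bit lead byte is followed by an
 * ASCII-range byte -- impossible in EUC encodings, common in BIG5.
 * Heuristic sketch, not a full EUC-TW validator. */
static bool
could_be_euc(const uint8_t *s, size_t len)
{
    for (size_t i = 0; i < len; i++)
    {
        if (s[i] >= 0x80)
        {
            /* lead byte: the trail byte must also have its high bit set */
            if (i + 1 >= len || s[i + 1] < 0x80)
                return false;
            i++;                /* skip the trail byte */
        }
    }
    return true;
}
```

Fed the bytes from the thread, the a5 5c and b3 5c pairs fail this check, which is exactly why the data must be BIG5 rather than EUC-TW.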
[
{
"msg_contents": "I've uploaded Mandrake RPMs built using Lamar Owen's source RPM\n(unchanged btw) to\n\n ftp.postgresql.org:/pub/binary/RPMS/mandrake-8.1/\n\nLamar, is /var/spool/ftp the right location on that machine, or is it\nrsync'd (or something) from another machine or location?\n\n - Thomas\n",
"msg_date": "Tue, 05 Mar 2002 08:46:52 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Mandrake RPMs uploaded"
},
{
"msg_contents": "> I've uploaded Mandrake RPMs built using Lamar Owen's source RPM\n> (unchanged btw)...\n\nLamar, the python package has \"mx\" as a prerequisite to RPM\ninstallation. I see no such package available on my Mandrake box, and\nthe python code seems to build and install without it. What is the\npackage and why might it be required? If it is a RH-specific feature,\nshould we put in a test to make it optional?\n\n - Thomas\n",
"msg_date": "Tue, 05 Mar 2002 10:54:31 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Re: Mandrake RPMs uploaded"
},
{
"msg_contents": "On Tuesday 05 March 2002 01:54 pm, Thomas Lockhart wrote:\n> > I've uploaded Mandrake RPMs built using Lamar Owen's source RPM\n> > (unchanged btw)...\n\n> Lamar, the python package has \"mx\" as a prerequisite to RPM\n> installation. I see no such package available on my Mandrake box, and\n> the python code seems to build and install without it. What is the\n> package and why might it be required? If it is a RH-specific feature,\n> should we put in a test to make it optional?\n\nThe mx package is required by the new python client code. It will indeed \nbuild without mx, but it will not RUN without it. See rpmfind.net for \nsources -- the name 'mx' is a RedHatism, and the same package goes by another \nname, which I don't remember right off.\n\nOh, BTW: /var/spool/ftp on ftp.postgresql.org is correct.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 5 Mar 2002 14:59:27 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Mandrake RPMs uploaded"
},
{
"msg_contents": "> > ... the python package has \"mx\" as a prerequisite to RPM\n> > installation. I see no such package available on my Mandrake box, and\n> > the python code seems to build and install without it. What is the\n> > package and why might it be required? If it is a RH-specific feature,\n> > should we put in a test to make it optional?\n> The mx package is required by the new python client code. It will indeed\n> build without mx, but it will not RUN without it. See rpmfind.net for\n> sources -- the name 'mx' is a RedHatism, and the same package goes by another\n> name, which I don't remember right off.\n\nWhat in the package requires \"mx\"? Ah, I see in the release notes that\n\"mxDateTime\" is required for running the DBI-compatible interface.\n\nThe general \"mx\" set of packages is a mix of free and non-free software,\nthough mxDateTime seems to be covered in the former. License wording at\nthe end, in case anyone cares.\n\nIn the meantime, I'll build these packages for Mandrake, since only one\nor two other distros seem to bother with them at all. And since\nmxDateTime (and Distutils, required by RH's mx package build) *could* be\ninstalled without RPMs, istm that it should be a configure test rather\nthan an RPM prerequisite. Comments?\n\n - Thomas\n\nThe Public License is very similar to the Python 2.0 license and covers\nthe open source software made available by eGenix.com which is free of\ncharge even for commercial\n use. \n\n The Commercial License is intended for covering commercial\neGenix.com software, notably the mxODBC package. Only private and\nnon-commercial use is free of charge.\n",
"msg_date": "Tue, 05 Mar 2002 14:58:57 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Python client code (was, Mandrake RPMs uploaded)"
},
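Thomas's suggestion that mxDateTime should be a configure test rather than an RPM prerequisite amounts to probing whether the module is importable at build time. A minimal sketch of such a probe (the helper name is hypothetical, not part of the actual PostgreSQL build):

```python
import importlib.util

def has_module(name: str) -> bool:
    """True if `name` is importable in the current Python environment."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # find_spec imports parent packages of dotted names; a missing
        # parent (e.g. "mx" itself) also means "not installed".
        return False

if __name__ == "__main__":
    if not has_module("mx.DateTime"):
        print("warning: mxDateTime not found; "
              "the DB-API-compatible interface will not run")
```

A configure-style check like this would let the python client build everywhere and only warn when the optional runtime dependency is absent, whether mxDateTime was installed via RPM or by hand.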
{
    "msg_contents": "Hi Thomas,\n\nCool.\n\nI don't have an 8.1 machine, and was having difficulties getting it to\ncompile on an 8.0 machine.\n\nThis at least gives some people some relief!\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nThomas Lockhart wrote:\n> \n> I've uploaded Mandrake RPMs built using Lamar Owen's source RPM\n> (unchanged btw) to\n> \n> ftp.postgresql.org:/pub/binary/RPMS/mandrake-8.1/\n> \n> Lamar, is /var/spool/ftp the right location on that machine, or is it\n> rsync'd (or something) from another machine or location?\n> \n> - Thomas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Wed, 06 Mar 2002 11:52:51 +1100",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Mandrake RPMs uploaded"
}
] |
[
{
    "msg_contents": "A few questions on the parser -> planner stage.\n\nI'm currently pursuing a line of thought on resource limiting, and I'd like\nsome opinions on whether it's possible/probable.\n\n\nI need to give community access directly to the database, so I need to\nimpose some sane controls. As I can't hold any of that class of users\naccountable, I need to impose the limits in the software.\n\nI'd like to try hooking in right after the parser produces its tree and\nmodifying limitCount based on a few rules, then handing it back to normal\nflow. After that I'd also like to hook in before the planner hands the plan\nto the executor, evaluate estimated cost, and accept/deny the query based on\nthat.\n\nI realise cost is just simply a number for comparison, but I'm only looking\nto cap excessively high costs due to inexperience (lots of cartesian\nproducts by accident) or malicious intent. It would be set based on a\nreference set of queries run on the individual system.\n\nProcesses will also be monitored (probably using Bruce's tool),\nkilling anything that might slip by.\n\nThe concept (rewriting the query and limiting cost) seems to work well. At\npresent though it's horribly expensive and buggy as I'm rewriting the query\nusing regexps (no grammar rules), running an explain on it, parsing and\nevaluating the explain output for cost, then finally running the query.\n\nAs a related issue I've been hunting about for ways to limit classes of\nusers to certain commands (such as only allow SELECT on a database). I've\nonly begun to play with the debug output but so far it's my understanding\nthat the :command node from the parse tree identifies the operation being\nperformed.
Since I plan to be intervening after the parser anyway, I\nthought it would be opportune to check a permissions table and see if that\nuser/group has permission to run that command class on the database.\n\n\n\nAt the moment I'm just looking for opinions on the attempt and, if it's not\nan obvious dead end, a few pointers on where to start. This is a learning\nproject (as my C skills are horrid) so any suggestions are appreciated.\n\n\n",
"msg_date": "Tue, 5 Mar 2002 12:57:50 -0500",
"msg_from": "\"Arguile\" <arguile@lucentstudios.com>",
"msg_from_op": true,
"msg_subject": "Intervening in Parser -> Planner Stage"
},
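The poster's current approach — running EXPLAIN and scraping the estimated cost before deciding whether to run the query — can be sketched without the regex fragility he mentions. This is a hypothetical helper, not part of any PostgreSQL tool; it only assumes the planner's standard `(cost=startup..total rows=... width=...)` annotation:

```python
import re

# Matches the planner's cost annotation on a plan node, e.g.
# "Seq Scan on foo  (cost=0.00..22.50 rows=1000 width=4)".
# Group 1 captures the total (upper-bound) cost estimate.
COST_RE = re.compile(r"cost=\d+\.\d+\.\.(\d+\.\d+)")

def total_cost(explain_output: str) -> float:
    """Return the top plan node's total cost estimate from EXPLAIN output."""
    m = COST_RE.search(explain_output)
    if m is None:
        raise ValueError("no cost estimate found in EXPLAIN output")
    return float(m.group(1))

def allow_query(explain_output: str, max_cost: float) -> bool:
    """Accept or deny a query based on its estimated cost cap."""
    return total_cost(explain_output) <= max_cost
```

The cap itself would still have to be calibrated against a reference set of queries on the individual system, as the poster describes.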
{
"msg_contents": "\"Arguile\" <arguile@lucentstudios.com> writes:\n> I'm currently pursuing a line of thought on resource limiting, and I'd like\n> some opinions on whether it's possible/probable.\n\nThese seem like fairly indirect ways of limiting resource usage. Why\nnot instead set a direct limit on the amount of runtime allowed?\n\nPostgres doesn't use the ITIMER_VIRTUAL (CPU time) timer, so about all\nyou'd need is\n\tsetitimer(ITIMER_VIRTUAL, &itimer, NULL);\nto a suitable limit before starting query execution, and cancel it again\nat successful query end. Set the SIGVTALRM signal handler to\nQueryCancelHandler (same as SIGINT), and voila. A direct solution in\nabout ten lines of code ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Mar 2002 13:47:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Intervening in Parser -> Planner Stage "
},
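Tom's ten-line suggestion is a C change inside the backend, but the mechanism — an ITIMER_VIRTUAL timer that counts CPU time and delivers SIGVTALRM, whose handler cancels the query — can be demonstrated from Python's `signal` module, which exposes the same POSIX calls. A sketch (the wrapper name and exception are illustrative, not anything in PostgreSQL):

```python
import signal

class QueryCancelled(Exception):
    pass

def _cancel(signum, frame):
    # Stand-in for PostgreSQL's QueryCancelHandler.
    raise QueryCancelled("statement CPU-time limit exceeded")

def run_with_cpu_limit(func, seconds):
    """Run func() but abort it once it has consumed `seconds` of CPU time.

    ITIMER_VIRTUAL counts process CPU time (not wall-clock) and delivers
    SIGVTALRM when it expires -- the timer Tom notes PostgreSQL leaves unused.
    """
    old = signal.signal(signal.SIGVTALRM, _cancel)
    signal.setitimer(signal.ITIMER_VIRTUAL, seconds)
    try:
        return func()
    finally:
        # Cancel the timer at query end, as Tom describes.
        signal.setitimer(signal.ITIMER_VIRTUAL, 0)
        signal.signal(signal.SIGVTALRM, old)
```

A runaway loop passed to `run_with_cpu_limit` is interrupted once its CPU budget is spent, while a query that finishes early returns normally with the timer cancelled.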
{
"msg_contents": "> I need to give community access directly to the database, so I need to\n> impose some sane controls. As I can't hold any of that class of users\n> accountable, I need to impose the limits in the software.\n\nFor comparison, I know that MySQL allows you to set the max queries per\nhour for a particular user...\n\nChris\n\n",
"msg_date": "Wed, 6 Mar 2002 08:22:17 +0800 (WST)",
"msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Intervening in Parser -> Planner Stage"
},
{
    "msg_contents": "> > I need to give community access directly to the database, so I\nneed to\n> > impose some sane controls. As I can't hold any of that class of\nusers\n> > accountable, I need to impose the limits in the software.\n>\n> For comparison, I know that MySQL allows you to set the max queries\nper\n> hour for a particular user...\n\nWhich is completely useless. A single table with a few thousand\nrecords joined to itself 40 or so times as a cartesian product will\ncompletely wipe out most MySQL boxes and that's only 1 query :)\nPostgreSQL has the same problem..\n\nWhat would be interesting is the ability to limit a query to a maximum\nof X cost. This requires knowing the cost of a function call which is\ncurrently evaluated as nothing.\n\nPerhaps even 'nice' processes running for certain users / groups.\nOffline or background processes could be niced to 20 to allow the\ninteractive sessions to run unnoticed. Not so good if the background\nprocess locks a bunch of stuff though -- so that may be more trouble\nthan good.\n\nEither way the user can simply attempt to make thousands of database\nconnections which would effectively block anyone else from connecting\nat all.\n\n",
"msg_date": "Tue, 5 Mar 2002 20:03:23 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Intervening in Parser -> Planner Stage"
}
] |
[
{
    "msg_contents": "-----Original Message-----\nFrom: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\nSent: Tuesday, March 05, 2002 12:59 PM\nTo: mlw\nCc: Tom Lane; Nicolas Bazin; pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Postgresql backend to perform vacuum\nautomatically\n\n\n> > > If they do not affect performance, then why have them off?\n> > \n> > I think Jan said 2-3%. If we can get autovacuum from it, it would\nbe a\n> > win to keep it on all the time, perhaps.\n> \n> Assuming that the statistics get updated:\n> \n> How often should the stats table be queried?\n> What sort of configurability would be needed?\n\nYou could wake up every few minutes and see how the values have changed.\nI don't remember if there is a way to clear the stats so you can see\njust the changes in the past five minutes. Vacuum the table that had\nactivity.\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n>>>>\nHow long does it take to vacuum a table with 12 indexes and 100 million \nrows in it? This idea is making me very nervous. Suppose (for\ninstance)\nthat we have regular updates to some table, and it is constantly getting\nvacuum attempts thrown at it.\n\nNow imagine a large system with many large tables which are frequently\nreceiving updates. Would 100 simultaneous vacuum operations be a good\nthing when .0001% of the table has changed [on average] for each of\nthem?\n\nI know for sure \"update statistics\" at regular intervals on some of the\nSQL systems I have used would be sheer suicide.\n\nBetter make it configurable, that's for sure.\n<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<\n<<<<\n",
"msg_date": "Tue, 5 Mar 2002 13:16:28 -0800",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically"
}
] |
[
{
"msg_contents": "JDBC-ers,\n\nWe are doing an INSERT and occasionally Postgres throws a SQLException\nbecause there are unexpected results (see stacktrace below), maybe 10% \nof the time,\nAny idea why this would happen when it works over 90% of the time?\nHowever, it appears the insert is completed successfully.\n\nWe have looked at the source code for ResultSet.java and noticed\nthe method reallyResultSet() returns true if any of the fields are non-null.\nI hope that helps in debugging. \n\nAlso, we are running Postgres 7.1 with JDK1.2.1.\n\nRegards,\n\nEric\n\n--------------------------------------------------------------------------\n\nIINSERT INTO bad_urls(ID,DATA,ATTEMPTS,REASON) VALUES \n(3375,'http://www.oit.itd.umich.edu/projects/adw2k/chordata/aves.html',0,'Unknown \nHost')\n\nA result was returned by the statement, when none was expected.\n at java.lang.Throwable.fillInStackTrace(Native Method)\n at java.lang.Throwable.fillInStackTrace(Compiled Code)\n at java.lang.Throwable.<init>(Compiled Code)\n at java.lang.Exception.<init>(Compiled Code)\n at java.sql.SQLException.<init>(SQLException.java:98)\n at org.postgresql.util.PSQLException.<init>(PSQLException.java:23)\n at org.postgresql.jdbc2.Statement.executeUpdate(Statement.java:80)\n at \norg.postgresql.jdbc2.PreparedStatement.executeUpdate(PreparedStatement.java:122)\n at InsertError.record(InsertError.java:98)\n at InsertError.record(InsertError.java:69)\n at wbCheckUrl$CheckThread.run(wbCheckUrl.java:307)\n\n",
"msg_date": "Tue, 05 Mar 2002 15:27:59 -0700",
"msg_from": "Eric Scroger <escroger@carl.org>",
"msg_from_op": true,
"msg_subject": "A result was returned by the statement, when none was expected"
},
{
    "msg_contents": "Eric,\n\nWhich version of the driver are you using? Also do you have logs from\nthe postgres backend?\n\nMy suspicion is something in the data is causing the problem, what I\ndon't know, but that's my guess.\n\nDave\n\n-----Original Message-----\nFrom: pgsql-jdbc-owner@postgresql.org\n[mailto:pgsql-jdbc-owner@postgresql.org] On Behalf Of Eric Scroger\nSent: Tuesday, March 05, 2002 5:28 PM\nTo: pgsql hackers; pgsql-jdbc@postgresql.org\nCc: ed@carl.org\nSubject: [JDBC] A result was returned by the statement, when none was\nexpected\n\n\nJDBC-ers,\n\nWe are doing an INSERT and occasionally Postgres throws a SQLException\nbecause there are unexpected results (see stacktrace below), maybe 10% \nof the time,\nAny idea why this would happen when it works over 90% of the time?\nHowever, it appears the insert is completed successfully.\n\nWe have looked at the source code for ResultSet.java and noticed the\nmethod reallyResultSet() returns true if any of the fields are non-null.\nI hope that helps in debugging.
\n\nAlso, we are running Postgres 7.1 with JDK1.2.1.\n\nRegards,\n\nEric\n\n------------------------------------------------------------------------\n--\n\nIINSERT INTO bad_urls(ID,DATA,ATTEMPTS,REASON) VALUES \n(3375,'http://www.oit.itd.umich.edu/projects/adw2k/chordata/aves.html',0\n,'Unknown \nHost')\n\nA result was returned by the statement, when none was expected.\n at java.lang.Throwable.fillInStackTrace(Native Method)\n at java.lang.Throwable.fillInStackTrace(Compiled Code)\n at java.lang.Throwable.<init>(Compiled Code)\n at java.lang.Exception.<init>(Compiled Code)\n at java.sql.SQLException.<init>(SQLException.java:98)\n at org.postgresql.util.PSQLException.<init>(PSQLException.java:23)\n at org.postgresql.jdbc2.Statement.executeUpdate(Statement.java:80)\n at \norg.postgresql.jdbc2.PreparedStatement.executeUpdate(PreparedStatement.j\nava:122)\n at InsertError.record(InsertError.java:98)\n at InsertError.record(InsertError.java:69)\n at wbCheckUrl$CheckThread.run(wbCheckUrl.java:307)\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n\n",
"msg_date": "Tue, 5 Mar 2002 17:56:52 -0500",
"msg_from": "\"Dave Cramer\" <Dave@micro-automation.net>",
"msg_from_op": false,
"msg_subject": "Re: A result was returned by the statement, when none was expected"
},
{
    "msg_contents": "Dave,\n\n>Eric,\n>\n>Which version of the driver are you using? Also do you have logs from\n>the postgres backend?\n>\n\nWe are using driver jar file jdbc7.1-1.2.jar. \n\nI reproduced the problem with postgresql's debug setting configured to 16.\nThe backend debug logs are attached to this as they are kinda long \n(postback.log).\n\nI'm not sure it is data directly, but it does always fail on the same data.\nOne thing I do know is the problem occurs when we are using \nPreparedStatements. \nIf I put the same SQL statement into a separate statement it works fine.\nFor your info, here is the code related to the PreparedStatements.\n\n----------------------------------------------------\n\nPreparedStatement psInsertError = null;\n\n psInsertError = conn.prepareStatement(\n \"INSERT INTO \" + tabName\n + \"(ID,DATA,ATTEMPTS,REASON) VALUES (?,?,?,?)\" );\n\n psInsertError.setLong(1,id);\n psInsertError.setString(2,data);\n psInsertError.setInt(3,attempts);\n psInsertError.setString(4,reason);\n psInsertError.executeUpdate();\n\n----------------------------\n\nThanks for your suggestions.\n\nEric\n\n\n>\n>\n>My suspicion is something in the data is causing the problem, what I\n>don't know, but that's my guess.\n>\n>Dave\n>\n\n>\n>-----Original Message-----\n>From: pgsql-jdbc-owner@postgresql.org\n>[mailto:pgsql-jdbc-owner@postgresql.org] On Behalf Of Eric Scroger\n>Sent: Tuesday, March 05, 2002 5:28 PM\n>To: pgsql hackers; pgsql-jdbc@postgresql.org\n>Cc: ed@carl.org\n>Subject: [JDBC] A result was returned by the statement, when none was\n>expected\n>\n>\n>JDBC-ers,\n>\n>We are doing an INSERT and occasionally Postgres throws a SQLException\n>because there are unexpected results (see stacktrace below), maybe 10% \n>of the time,\n>Any idea why this would happen when it works over 90% of the time?\n>However, it appears the insert is completed successfully.\n>\n>We have looked at the source code for ResultSet.java and noticed the\n>method reallyResultSet()
returns true if any of the fields are non-null.\n>I hope that helps in debugging. \n>\n>Also, we are running Postgres 7.1 with JDK1.2.1.\n>\n>Regards,\n>\n>Eric\n>\n>------------------------------------------------------------------------\n>--\n>\n>IINSERT INTO bad_urls(ID,DATA,ATTEMPTS,REASON) VALUES \n>(3375,'http://www.oit.itd.umich.edu/projects/adw2k/chordata/aves.html',0\n>,'Unknown \n>Host')\n>\n>A result was returned by the statement, when none was expected.\n> at java.lang.Throwable.fillInStackTrace(Native Method)\n> at java.lang.Throwable.fillInStackTrace(Compiled Code)\n> at java.lang.Throwable.<init>(Compiled Code)\n> at java.lang.Exception.<init>(Compiled Code)\n> at java.sql.SQLException.<init>(SQLException.java:98)\n> at org.postgresql.util.PSQLException.<init>(PSQLException.java:23)\n> at org.postgresql.jdbc2.Statement.executeUpdate(Statement.java:80)\n> at \n>org.postgresql.jdbc2.PreparedStatement.executeUpdate(PreparedStatement.j\n>ava:122)\n> at InsertError.record(InsertError.java:98)\n> at InsertError.record(InsertError.java:69)\n> at wbCheckUrl$CheckThread.run(wbCheckUrl.java:307)\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n>",
"msg_date": "Tue, 05 Mar 2002 16:31:46 -0700",
"msg_from": "Eric Scroger <escroger@carl.org>",
"msg_from_op": true,
"msg_subject": "Re: A result was returned by the statement, when none was expected"
}
] |
[
{
    "msg_contents": "\n----- Original Message -----\nFrom: \"Nicolas Bazin\" <nbazin@ingenico.com.au>\nTo: \"mlw\" <markw@mohawksoft.com>\nSent: Wednesday, March 06, 2002 10:21 AM\nSubject: Re: [HACKERS] Postgresql backend to perform vacuum automatically\n\n\n> Also I think that each database should be vacuumed sequentially and also\n> vacuum should check whether another instance is already at work.\n>\n> I don't know whether a time delay should be used anyway. From the tests in\n> the ORACLE vs POSTGRESQL thread it appears that 5 sec is the best delay.\n> Vacuum should be smart enough to assess the performance gain / processing\n> penalty ratio and decide to go forth. So maybe the number of updates\n> threshold should be refined.\n> The idea would be to assess how much the vacuum will penalize access to a\n> specific database and then what performance improvement we can expect. The\n> DBA can then set a ratio to start vacuum.\n> The statistic should compute the ratio and set a flag when a vacuum is\n> needed. A trigger can be set on the flag field update to start vacuuming.\n>\n> ----- Original Message -----\n> From: \"mlw\" <markw@mohawksoft.com>\n> To: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\n> Cc: \"Neil Padgett\" <npadgett@redhat.com>; \"Tom Lane\" <tgl@sss.pgh.pa.us>;\n> \"Nicolas Bazin\" <nbazin@ingenico.com.au>; <pgsql-hackers@postgresql.org>\n> Sent: Wednesday, March 06, 2002 8:56 AM\n> Subject: Re: [HACKERS] Postgresql backend to perform vacuum automatically\n>\n>\n> > Bruce Momjian wrote:\n> > >\n> > > Neil Padgett wrote:\n> > > > On Tue, 2002-03-05 at 15:59, Bruce Momjian wrote:\n> > > > > > > > If they do not affect performance, then why have them off?\n> > > > > > >\n> > > > > > > I think Jan said 2-3%.
If we can get autovacuum from it, it\n> would be a\n> > > > > > > win to keep it on all the time, perhaps.\n> > > > > >\n> > > > > > Assuming that the statistics get updated:\n> > > > > >\n> > > > > > How often should the stats table be queried?\n> > > > > > What sort of configurability would be needed?\n> > > > >\n> > > > > You could wake up every few minutes and see how the values have\n> changed.\n> > > > > I don't remember if there is a way to clear the stats so you can\n> see\n> > > > > just the changes in the past five minutes. Vacuum the table that\n> had\n> > > > > activity.\n> > > >\n> > > > Ick -- polling. The statistics process should be able to wake\nsomebody\n> > > > up / notify the postmaster when the statistics change such that a\n> vacuum\n> > > > is required.\n> > >\n> > > Yes, that would tie that stats collector closer to auto-vacuum, but it\n> > > certainly could be done.\n> >\n> > Using an alert can be done, but polling is easier for a proof of\n> > concept. I don't see too much difficulty there. We could use notify.\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >\n>\n\n\n",
"msg_date": "Wed, 6 Mar 2002 10:22:14 +1100",
"msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>",
"msg_from_op": true,
"msg_subject": "Fw: Postgresql backend to perform vacuum automatically"
}
] |
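Nicolas's "compute the ratio and set a flag when a vacuum is needed" idea can be sketched as a simple predicate the statistics poller would evaluate per table. The threshold values here are illustrative, not tuned numbers from the thread:

```python
def needs_vacuum(dead_tuples: int, total_tuples: int,
                 min_dead: int = 500, ratio: float = 0.15) -> bool:
    """Flag a table for vacuum.

    A base count (min_dead) keeps the flag from firing constantly on
    nearly empty tables; above it, trigger on the dead-tuple ratio.
    """
    if total_tuples == 0 or dead_tuples < min_dead:
        return False
    return dead_tuples / total_tuples >= ratio
```

The daemon (or a trigger on the flag field, per Nicolas) would then start the actual vacuum for any table where this returns true.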
[
{
"msg_contents": "Today I made some benchmarks using Tatsuo's scripts to test the\nperformance of ext2 vs ext3 vs software RAID5. Attached is the gnuplot\noutput. Note that the RAID5 setup has the WAL in another partition\nwith RAID1 and both are spread across three SCSI disks. The machine is\na dual pentium III 1 Ghz with 2Gb in RAM running RedHat 7.2 and\nPostgreSQL 7.2.\n\nRegards,\nManuel.",
"msg_date": "05 Mar 2002 19:01:24 -0600",
"msg_from": "Manuel Sugawara <masm@fciencias.unam.mx>",
"msg_from_op": true,
"msg_subject": "ext2 vs ext3 vs RAID5 (software) benchmark"
},
{
"msg_contents": "On Tue, 2002-03-05 at 20:01, Manuel Sugawara wrote:\n> Today I made some benchmarks using Tatsuo's scripts to test the\n> performance of ext2 vs ext3 vs software RAID5. Attached is the gnuplot\n> output. Note that the RAID5 setup has the WAL in another partition\n> with RAID1 and both are spread across three SCSI disks. The machine is\n> a dual pentium III 1 Ghz with 2Gb in RAM running RedHat 7.2 and\n> PostgreSQL 7.2.\n\nThe poor performance of ext3 is interesting. You could try mounting ext3\nwith \"data=writeback\", as there is no benefit to preserving write\nordering with Postgres (because it has WAL, anyway). That should improve\nperformance slightly, but certainly not by a factor of 3.\n\nIn fact, I can reproduce these results locally (for ext2 & ext3): ext3\ntops out at around 11 TPS, ext2 at 21 to 29 TPS, and ext3 with\ndata=writeback at 16 TPS (running pgbench, 100 transactions). I wasn't\naware that the speed penalty for ext3 was this high...\n\nFor the very simple benchmarks I was running, ext3's poor performance\nmight be explained by the more conservative buffering used by ext3\n(flushing buffers to disk every 5 seconds, not every 30 seconds like\nwith ext2) -- but I'm sure that your benchmarks were running for a much\nlonger period of time, right? BTW, what are \"Tatsuo's scripts\", which\nyou said you were using?\n\nIt would be interesting to see the results using other filesystems, like\nXFS or ReiserFS. If you can gather some more data, you could send this\noff to the linux kernel list, they would probably be interested in ext3\nperformance under \"real\" load.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "05 Mar 2002 20:27:54 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: ext2 vs ext3 vs RAID5 (software) benchmark"
},
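Neil's "data=writeback" suggestion is an ext3 mount option set in /etc/fstab (or at mount time). A possible fstab entry, assuming a hypothetical device /dev/sda3 mounted at /var/lib/pgsql for the data directory:

```shell
# /etc/fstab -- journal only metadata on the PostgreSQL partition;
# data write ordering is already protected by PostgreSQL's WAL.
/dev/sda3  /var/lib/pgsql  ext3  defaults,data=writeback  1 2
```

The other ext3 journaling modes are data=ordered (the default) and data=journal, which trade progressively more write overhead for stronger data-journaling guarantees.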
{
"msg_contents": "...\n> It would be interesting to see the results using other filesystems, like\n> XFS or ReiserFS.\n\nHmm, since you mention XFS: are y'all aware of kernel and XFS troubles\nunder Linux? I've seen disk corruption troubles using Mandrake 2.4.8-xx\nkernel and whatever XFS shipped with Mdk8.1. I'm not sure if the\ntroubles have been fixed in more recent kernels and XFS packages...\n\n - Thomas\n",
"msg_date": "Tue, 05 Mar 2002 20:07:46 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: ext2 vs ext3 vs RAID5 (software) benchmark"
}
] |
[
{
"msg_contents": "Hi all,\n\nOne of the things which the AS3AP benchmark does is have multiple users\naccess a table with hash indexes on it.\n\nWith the OSDB (Open Source Database Benchmark: http://osdb.sf.net) we've\nfound on PG 7.1 that multiple clients hitting a table using a hash index\ngenerates locking problems. I remember Tom mentioning that this is a\nknown thing, but I'm not sure if this has been fixed since then.\n\nDoes anyone have any ideas? If not, would someone be willing to take\nthe time to fix it?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Wed, 06 Mar 2002 13:20:21 +1100",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Do we still have locking problems with concurrent users of hash\n\ttables?"
},
{
"msg_contents": "Justin Clift wrote:\n> Hi all,\n> \n> One of the things which the AS3AP benchmark does is have multiple users\n> access a table with hash indexes on it.\n> \n> With the OSDB (Open Source Database Benchmark: http://osdb.sf.net) we've\n> found on PG 7.1 that multiple clients hitting a table using a hash index\n> generates locking problems. I remember Tom mentioning that this is a\n> known thing, but I'm not sure if this has been fixed since then.\n> \n> Does anyone have any ideas? If not, would someone be willing to take\n> the time to fix it?\n\nIt has not been fixed. One TODO item is to either stop mentioning hash\nat all or get it improved. We have been sitting on the fence for too\nlong.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 5 Mar 2002 21:59:51 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Do we still have locking problems with concurrent users"
},
{
"msg_contents": "On Tue, 2002-03-05 at 21:59, Bruce Momjian wrote:\n> Justin Clift wrote:\n> > Hi all,\n> > \n> > One of the things which the AS3AP benchmark does is have multiple users\n> > access a table with hash indexes on it.\n> > \n> > With the OSDB (Open Source Database Benchmark: http://osdb.sf.net) we've\n> > found on PG 7.1 that multiple clients hitting a table using a hash index\n> > generates locking problems. I remember Tom mentioning that this is a\n> > known thing, but I'm not sure if this has been fixed since then.\n\nNo, it hasn't been fixed yet.\n\n> > Does anyone have any ideas? If not, would someone be willing to take\n> > the time to fix it?\n> \n> It has not been fixed. One TODO item is to either stop mentioning hash\n> at all or get it improved. We have been sitting on the fence for too\n> long.\n\nI'll be working on fixing this. I'm also going to try to add more\nfeatures to the hash index implementation: for example, allow UNIQUE\nhash indexes, hash indexes over multiple keys, etc. My first improvement\nto the hash code, replacing the hash function with a better one, is on\nthe unapplied patches list and should be in CVS soon. Bruce, can you add\nmy name to the TODO list next to this item?\n\nBTW, does anyone have any tips for debugging deadlock conditions? I\nnormally debug problems by running a backend under gdb in standalone\nmode, but that obviously won't help for this problem. Any further advice\non improving hash index concurrency would be very welcome...\n\nJustin: \n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "05 Mar 2002 22:13:25 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: Do we still have locking problems with concurrent"
},
{
"msg_contents": "> > > Does anyone have any ideas? If not, would someone be willing to take\n> > > the time to fix it?\n> > \n> > It has not been fixed. One TODO item is to either stop mentioning hash\n> > at all or get it improved. We have been sitting on the fence for too\n> > long.\n> \n> I'll be working on fixing this. I'm also going to try to add more\n> features to the hash index implementation: for example, allow UNIQUE\n> hash indexes, hash indexes over multiple keys, etc. My first improvement\n> to the hash code, replacing the hash function with a better one, is on\n> the unapplied patches list and should be in CVS soon. Bruce, can you add\n> my name to the TODO list next to this item?\n\nTODO updated. Done.\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 5 Mar 2002 22:15:09 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Do we still have locking problems with concurrent users"
},
{
"msg_contents": "> It has not been fixed. One TODO item is to either stop mentioning hash\n> at all or get it improved. We have been sitting on the fence for too\n> long.\n\nCould someone give me a quick rundown on where using a hash index would be\nadvantageous over using a btree index?\n\nThanks,\n\nChris\n\n",
"msg_date": "Wed, 6 Mar 2002 11:16:50 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Do we still have locking problems with concurrent users"
},
{
    "msg_contents": "Awesome!\n\nThanks Neil.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nNeil Conway wrote:\n> \n> On Tue, 2002-03-05 at 21:59, Bruce Momjian wrote:\n> > Justin Clift wrote:\n> > > Hi all,\n> > >\n> > > One of the things which the AS3AP benchmark does is have multiple users\n> > > access a table with hash indexes on it.\n> > >\n> > > With the OSDB (Open Source Database Benchmark: http://osdb.sf.net) we've\n> > > found on PG 7.1 that multiple clients hitting a table using a hash index\n> > > generates locking problems. I remember Tom mentioning that this is a\n> > > known thing, but I'm not sure if this has been fixed since then.\n> \n> No, it hasn't been fixed yet.\n> \n> > > Does anyone have any ideas? If not, would someone be willing to take\n> > > the time to fix it?\n> >\n> > It has not been fixed. One TODO item is to either stop mentioning hash\n> > at all or get it improved. We have been sitting on the fence for too\n> > long.\n> \n> I'll be working on fixing this. I'm also going to try to add more\n> features to the hash index implementation: for example, allow UNIQUE\n> hash indexes, hash indexes over multiple keys, etc. My first improvement\n> to the hash code, replacing the hash function with a better one, is on\n> the unapplied patches list and should be in CVS soon. Bruce, can you add\n> my name to the TODO list next to this item?\n> \n> BTW, does anyone have any tips for debugging deadlock conditions? I\n> normally debug problems by running a backend under gdb in standalone\n> mode, but that obviously won't help for this problem. Any further advice\n> on improving hash index concurrency would be very welcome...\n> \n> Justin:\n> \n> Cheers,\n> \n> Neil\n> \n> --\n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit.
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Wed, 06 Mar 2002 14:17:24 +1100",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: Do we still have locking problems with concurrentusers"
},
{
    "msg_contents": "Christopher Kings-Lynne wrote:\n> > It has not been fixed. One TODO item is to either stop mentioning hash\n> > at all or get it improved. We have been sitting on the fence for too\n> > long.\n> \n> Could someone give me a quick rundown on where using a hash index would be\n> advantageous over using a btree index?\n\nThat is the issue: right now, there is little or no advantage. If we\ncan improve it, it may become better than btree for cases where you are\nonly doing equality comparisons, rather than > which only btree can do.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 5 Mar 2002 22:17:53 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Do we still have locking problems with concurrent users"
},
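Bruce's point — a hash index can only answer equality probes, while a btree keeps keys ordered and can also answer range predicates like `>` — can be illustrated with a toy analogy. This is not PostgreSQL internals, just the two access patterns side by side:

```python
import bisect

# Toy analogy: a dict plays the hash index (equality only);
# a sorted list plays the btree (equality plus ordered range scans).
keys = [15, 3, 42, 8, 23]

hash_index = {k: i for i, k in enumerate(keys)}   # key -> "tuple position"
btree_keys = sorted(keys)                         # ordered key space

def hash_lookup(k):
    """Equality probe -- O(1) expected, the case where hash could win."""
    return hash_index.get(k)

def btree_range(lo):
    """All keys > lo -- a scan a hash index cannot perform at all."""
    return btree_keys[bisect.bisect_right(btree_keys, lo):]
```

If the hash implementation were improved as the thread discusses, the equality case is the only one where it could beat btree; the range case is structurally out of reach.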
{
"msg_contents": "Justin Clift writes:\n\n> One of the things which the AS3AP benchmark does is have multiple users\n> access a table with hash indexes on it.\n\nIn a closed-source system, we could get away with making hash and B-tree\nindexes the same internally and tell onlookers that they're different, so\nas to satisfy the AS3AP requirements. ;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 10 Mar 2002 20:53:02 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Do we still have locking problems with concurrent"
}
] |
[
{
    "msg_contents": "(for background, see conversation: \"Postgresql backend to perform vacuum\nautomatically\" )\n\nIn the idea phase 1, brainstorm\n\nCreate a table for the defaults in template1\nCreate a table in each database for state information.\n\nShould have a maximum duty cycle for vacuum vs non-vacuum on a per table basis.\nIf a vacuum takes 3 minutes, and a duty cycle is no more than 10%, the next\nvacuum can not take place for another 30 minutes. Is this a table or database\nsetting? I am thinking table. Anyone have good arguments for database?\n\nMust have a trigger point of number of total tuples vs number of dirty tuples.\nUnfortunately some tuples are more important than others, but I don't know\nhow to really detect that. We should be able to keep track of the number of\ndirty tuples in a table. Is it known how many tuples are in a table at any\npoint? (if so, on a side note, can we use this for a count()?) How about dirty\ntuples?\n\nIs the number of deleted tuples sufficient to decide priority on vacuum? My\nthinking is that the tables with the most deleted tuples are the tables which\nneed vacuum most. Should it be the ratio of deleted tuples to total tuples, or\njust the count of deleted tuples? I am thinking ratio, but maybe it needs to\nbe tunable.\n\n\nHere is the program flow:\n\n(1) Startup (Do this for each database.)\n(2) Get all the information from a vacuumd table.\n(3) If the table does not exist, perform a vacuum on all tables, and initialize\nthe table to current state. \n(4) Check which tables can be vacuumed based on their duty cycle and current\ntime.\n(5) If the tables eligible to be vacuumed have deleted tuples which exceed\nacceptable limits, vacuum them.\n(6) Wait a predefined time, loop (2)\n\nThis is my basic idea, what do you all think?\n\nI plan to work on this in the next couple weeks. Any suggestions, notes,\nconcerns, features would be welcome.\n",
"msg_date": "Tue, 05 Mar 2002 21:21:42 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Vacuum daemon (pgvacuumd ?)"
},
{
"msg_contents": "> Is the number of deleted tuples sufficient to decide priority on vacuum? My\n> thinking is that the tables with the most deleted tuples is the table which\n> need most vacuum. Should ratio of deleted tuples vs total tuples or just count\n> of deleted tuples. I am thinking ratio, but maybe it need be tunable.\n\nDeleted or updated. Both expire tuples. Also, the old tuples can't be\nvacuumed until no other transaction is viewing them as active.\n\n> (4) If the tables eligible to be vacuumed have deleted tuples which exceed\n> acceptable limits, vacuum them.\n\nSeems you will measure in percentages, right?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 5 Mar 2002 22:01:46 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum daemon (pgvacuumd ?)"
},
{
"msg_contents": "> (for background, see conversation: \"Postgresql backend to perform\nvacuum\n> automatically\" )\n>\n> In the idea phase 1, brainstorm\n>\n> Create a table for the defaults in template1\n> Create a table in each database for state inforation.\n>\n> Should have a maximum duty cycle for vacuum vs non-vacuum on a per\ntable basis.\n> If a vacuum takes 3 minutes, and a duty cycle is no more than 10%,\nthe next\n> vacuum can not take place for another 30 minutes. Is this a table or\ndatabase\n> setting? I am thinking table. Anyone have good arguments for\ndatabase?\n\nI'd vote for database (or even system) settings personally, as those\ntables which don't get updated simply won't have vacuum run on them.\nThose that do will. Vacuum anywhere will degrade performance as it's\nadditional disk work. To top that off, if it's a per table duty cycle\nyou need to add additional checks to prevent vacuum from running on\nall or several tables at the same time. Duty cycle per DB (single\nvacuum tracking per db) will limit to a single instance of vacuum.\n\nI'm a little concerned about duty cycle. Why limit? If a tables\naccess speed could be increased enough to outweight the cost of the\nvacuum it should always be done. Perhaps a generic cost > 500 + (15%\ntuples updated / deleted) would work. That is, a %age dead tuples,\nplus a base to keep it from constantly firing on nearly empty tables.\n\nDo the table, and pick the next worse off (if there are more than one\nrequring vacuum). Perhaps frequency of selects weighs in here too.\n15% dead in a table recieving 99% selects is worse than 100% dead in a\ntable receiving 99% updates as the former will have more long term\naffect by doing it now. Table with updates is probably constantly\nputting up requests anyway.\n\nI'd suggest making the base and %age dead tuple numbers GUCable rather\nthan stored in a system table. 
It's probably not something we want\npeople playing with easily -- especially when they can still run\nvacuum manually.\n\n\nFinally, are the stats your collecting based on completed transactions\nor do they include ones that are rolled back as well? 100 updates\nrolled back is just as evil as 100 that completed -- speed wise\nanyway.\n\n",
"msg_date": "Tue, 5 Mar 2002 22:12:03 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum daemon (pgvacuumd ?)"
},
{
"msg_contents": "\n----- Original Message -----\nFrom: \"mlw\" <markw@mohawksoft.com>\nTo: \"PostgreSQL-development\" <pgsql-hackers@postgresql.org>\nSent: Wednesday, March 06, 2002 1:21 PM\nSubject: [HACKERS] Vacuum daemon (pgvacuumd ?)\n\n\n> (for background, see conversation: \"Postgresql backend to perform vacuum\n> automatically\" )\n>\n> In the idea phase 1, brainstorm\n>\n> Create a table for the defaults in template1\n> Create a table in each database for state inforation.\n>\n> Should have a maximum duty cycle for vacuum vs non-vacuum on a per table\nbasis.\n> If a vacuum takes 3 minutes, and a duty cycle is no more than 10%, the\nnext\n> vacuum can not take place for another 30 minutes. Is this a table or\ndatabase\n> setting? I am thinking table. Anyone have good arguments for database?\n>\n> Must have a trigger point of number of total tuples vs number of dirty\ntuples.\n> Unfortunately some tuples are more important than others, but that I don't\nknow\n> how to really detect that. We should be able to keep track of the number\nof\n> dirty tuples in a table. Is it known how many tuples are in a table at any\n> point? (if so, on a side note, can we use this for a count()?) How about\ndirty\n> tuples?\nThis parameters are certainly correct in a lot of cases, but why not use a\nstored proc to decide when to start a vacuum. The system table can maintain\nraw data related to vacuum: last vacuum timestamp, previous vacuum duration,\ntable priority, .... Then a parameter can be used let say from 0 to 9 (0 for\nno vacuum) as a vacuum profile. The stored proc. 
would translate this\nprofile to thresholds adapted to its algorithm that can use the per-table\nstatistic that already exist.\nObviously a standard proc can be installed but it lets the DBA the\npossibility to adapt the criteria to its DB whith no modification to the\ncode.\n\n>\n> Is the number of deleted tuples sufficient to decide priority on vacuum?\nMy\n> thinking is that the tables with the most deleted tuples is the table\nwhich\n> need most vacuum. Should ratio of deleted tuples vs total tuples or just\ncount\n> of deleted tuples. I am thinking ratio, but maybe it need be tunable.\n>\n>\n> Here is the program flow:\n>\n> (1) Startup (Do this for each database.)\n> (2) Get all the information from a vacuumd table.\n> (2) If the table does not exist, perform a vacuum on all tables, and\ninitialize\n> the table to current state.\n> (3) Check which tables can be vacuumed based on their duty cycle and\ncurrent\n> time.\n> (4) If the tables eligible to be vacuumed have deleted tuples which exceed\n> acceptable limits, vacuum them.\n> (5) Wait a predefined time, loop (2)\n>\n> This is my basic idea, what do you all think?\n>\n> I plan to work on this in the next couple weeks. Any suggestions, notes,\n> concerns, features would be welcome.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n\n",
"msg_date": "Wed, 6 Mar 2002 14:14:02 +1100",
"msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum daemon (pgvacuumd ?)"
},
{
"msg_contents": "I'm thinking that unless the vacuum daemon needs backend info (stats?) it \ncould be a totally separate entity (reading from the same config file \nperhaps). However that would make it slightly harder to start up automatically.\n\nI'm not fond of the duty cycle idea. My guess is in too many cases if you \ndelay the vacuum, it tends to take longer, then you delay even more, then \nit takes even longer...\n\nIt should be related to the number of updates and deletes on a table or \ndatabase.\n\nMaybe you don't need the duty cycle, just check stats every X minutes, if \nenough invalid rows, do vacuum. Issue: you could end up doing vacuum \ncontinuously, would this impact performance drastically?\n\nIf there's vacuuming to be done, is it better to do it later than now? My \nassumption is that lazy vacuum no longer has such a severe impact and so it \nmight be better to just do it ASAP. So actually a simple vacuum daemon may \nbe good enough.\n\nIs there a danger of high file fragmentation with frequent lazy vacuums?\n\nRegards,\nLink.\n\nAt 09:21 PM 05-03-2002 -0500, mlw wrote:\n>(for background, see conversation: \"Postgresql backend to perform vacuum\n>automatically\" )\n>\n>In the idea phase 1, brainstorm\n>\n>Create a table for the defaults in template1\n>Create a table in each database for state inforation.\n>\n>Should have a maximum duty cycle for vacuum vs non-vacuum on a per table \n>basis.\n>If a vacuum takes 3 minutes, and a duty cycle is no more than 10%, the next\n>vacuum can not take place for another 30 minutes. Is this a table or database\n>setting? I am thinking table. Anyone have good arguments for database?\n>\n>Must have a trigger point of number of total tuples vs number of dirty tuples.\n>Unfortunately some tuples are more important than others, but that I don't \n>know\n>how to really detect that. We should be able to keep track of the number of\n>dirty tuples in a table. Is it known how many tuples are in a table at any\n>point? 
(if so, on a side note, can we use this for a count()?) How about dirty\n>tuples?\n>\n>Is the number of deleted tuples sufficient to decide priority on vacuum? My\n>thinking is that the tables with the most deleted tuples is the table which\n>need most vacuum. Should ratio of deleted tuples vs total tuples or just count\n>of deleted tuples. I am thinking ratio, but maybe it need be tunable.\n>\n>\n>Here is the program flow:\n>\n>(1) Startup (Do this for each database.)\n>(2) Get all the information from a vacuumd table.\n>(2) If the table does not exist, perform a vacuum on all tables, and \n>initialize\n>the table to current state.\n>(3) Check which tables can be vacuumed based on their duty cycle and current\n>time.\n>(4) If the tables eligible to be vacuumed have deleted tuples which exceed\n>acceptable limits, vacuum them.\n>(5) Wait a predefined time, loop (2)\n>\n>This is my basic idea, what do you all think?\n>\n>I plan to work on this in the next couple weeks. Any suggestions, notes,\n>concerns, features would be welcome.\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n>subscribe-nomail command to majordomo@postgresql.org so that your\n>message can get through to the mailing list cleanly\n\n\n",
"msg_date": "Wed, 06 Mar 2002 13:14:56 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum daemon (pgvacuumd ?)"
}
] |
[
{
"msg_contents": "In my logs I have this:\n\n2002-03-05 23:59:26 DEBUG: MoveOfflineLogs: remove 0000000200000054\n2002-03-05 23:59:26 DEBUG: MoveOfflineLogs: remove 0000000200000055\n2002-03-05 23:59:26 DEBUG: MoveOfflineLogs: remove 0000000200000053\n2002-03-05 23:59:38 DEBUG: XLogWrite: new log file created - consider\nincreasing WAL_FILES\n2002-03-06 00:00:10 DEBUG: XLogWrite: new log file created - consider\nincreasing WAL_FILES\n2002-03-06 00:00:23 DEBUG: XLogWrite: new log file created - consider\nincreasing WAL_FILES\n2002-03-06 00:00:25 DEBUG: MoveOfflineLogs: remove 0000000200000057\n2002-03-06 00:00:25 DEBUG: MoveOfflineLogs: remove 0000000200000058\n2002-03-06 00:00:25 DEBUG: MoveOfflineLogs: remove 0000000200000056\n2002-03-06 00:01:38 DEBUG: XLogWrite: new log file created - consider\nincreasing WAL_FILES\n\nNotice the '2' in the middle of the WAL file name? It was '1' for quite a\nwhile. I get the feeling it increments when an overflow occurs, just as\ngoing from 'FF' to '100' sort of thing.\n\nie. Is there an endian problem here? Is it purely cosmetic? Is it\ndeliberate?\n\nusa=# select version();\n version\n--------------------------------------------------------------\n PostgreSQL 7.1.3 on i386--freebsd4.3, compiled by GCC 2.95.2\n(1 row)\n\nuname -a\nFreeBSD xxx 4.4-RELEASE FreeBSD 4.4-RELEASE #4: Mon Jan 21 0\n7:14:44 GMT 2002 xxx:/usr/obj/usr/src/sys/CANAVERAL i386\n\nChris\n\n",
"msg_date": "Wed, 6 Mar 2002 16:08:07 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Weird WAL numbering"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Notice the '2' in the middle of the WAL file name?\n\nYeah, that's just how the WAL file-naming code works. The upper and\nlower halves of the 64-bit WAL address are handled independently.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Mar 2002 11:08:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Weird WAL numbering "
}
] |
[
{
"msg_contents": "We're using a function to insert some information into the database.\nThis information is later (within seconds) retrieved from a program,\nthat does the actual processing of the information. It is then\ndeleted from the database when we're done with it.\n\n\nWe see a MAJOR performance loss the longer the time. It starts out\nfrom around 28 'data chunks' per second (inserts in a couple tables),\nand drops down to below 10/s...\n\nIf doing 'VACUUM ANALYZE' every 20 minutes improves the performance,\nwith the expected drop when the VACUUM is done, but in general the\nperformance is steady...\n\nInvestigation have shown that it's the actual DELETE that's slow,\nany idea how to find WHERE (and hopefully WHY :) this is so?\n-- \nattack Serbian radar PLO ammonium toluene Legion of Doom congress DES\npits Ft. Bragg KGB Honduras kibo World Trade Center\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n",
"msg_date": "06 Mar 2002 09:19:45 +0100",
"msg_from": "Turbo Fredriksson <turbo@bayour.com>",
"msg_from_op": true,
"msg_subject": "Drop in performance for each INSERT/DELETE combo"
},
{
"msg_contents": "Could you send the schema of the table , the definition of the index on it and\nthe SQL query?\n\nIt is hard to help you without this info :-/\n\nCheers,\n\n-- \nJean-Paul ARGUDO\n",
"msg_date": "Wed, 6 Mar 2002 10:24:37 +0100",
"msg_from": "Jean-Paul ARGUDO <jean-paul.argudo@idealx.com>",
"msg_from_op": false,
"msg_subject": "Re: Drop in performance for each INSERT/DELETE combo"
},
{
"msg_contents": ">>>>> \"Jean-Paul\" == Jean-Paul ARGUDO <jean-paul.argudo@idealx.com> writes:\n\n Jean-Paul> Could you send the schema of the table , the definition\n Jean-Paul> of the index on it and the SQL query?\n\nI can't do that at the moment, it's a closed-source (ie, commercial)\nproduct, and I'll need official aprovement etc :)\n\n Jean-Paul> It is hard to help you without this info :-/\n\nI know, that's why I formulated the mail like a question on how to\nprocreed, not how _YOU_ (ie, the mailinglist) could solve my problem :)\n\nThanx anyway.\n-- \nSaddam Hussein SEAL Team 6 congress strategic ammonium arrangements\nNoriega DES SDI FBI nuclear domestic disruption attack Marxist Delta\nForce\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n",
"msg_date": "06 Mar 2002 17:29:47 +0100",
"msg_from": "Turbo Fredriksson <turbo@bayour.com>",
"msg_from_op": true,
"msg_subject": "Re: Drop in performance for each INSERT/DELETE combo"
},
{
"msg_contents": "IANAD (I am not a developer) but deleted rows are not removed till \nvacuuming occurs. They are just marked so.\n\nAre you deleting specific rows? If you are then you have to keep vacuuming \nto keep it going at about 30/sec. This can be more viable with 7.2. \nPostgresql often has to go through relevant deleted rows in order to find \nthe valid rows.\n\nIf you want to delete everything, truncating might be faster. Unfortunately \ntruncating can't work in a transaction block.\n\nLink.\n\nAt 09:19 AM 06-03-2002 +0100, Turbo Fredriksson wrote:\n>We're using a function to insert some information into the database.\n>This information is later (within seconds) retrieved from a program,\n>that does the actual processing of the information. It is then\n>deleted from the database when we're done with it.\n>\n>\n>We see a MAJOR performance loss the longer the time. It starts out\n>from around 28 'data chunks' per second (inserts in a couple tables),\n>and drops down to below 10/s...\n>\n>If doing 'VACUUM ANALYZE' every 20 minutes improves the performance,\n>with the expected drop when the VACUUM is done, but in general the\n>performance is steady...\n>\n>Investigation have shown that it's the actual DELETE that's slow,\n>any idea how to find WHERE (and hopefully WHY :) this is so?\n\n\n",
"msg_date": "Thu, 07 Mar 2002 11:36:15 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: Drop in performance for each INSERT/DELETE combo"
},
{
"msg_contents": "----- Original Message -----\nFrom: \"Turbo Fredriksson\" <turbo@bayour.com>\nTo: <pgsql-hackers@postgresql.org>\nSent: Wednesday, March 06, 2002 10:19 AM\nSubject: [HACKERS] Drop in performance for each INSERT/DELETE combo\n\n\n> We're using a function to insert some information into the database.\n> This information is later (within seconds) retrieved from a program,\n> that does the actual processing of the information. It is then\n> deleted from the database when we're done with it.\n>\n>\n> We see a MAJOR performance loss the longer the time. It starts out\n> from around 28 'data chunks' per second (inserts in a couple tables),\n> and drops down to below 10/s...\n>\n> If doing 'VACUUM ANALYZE' every 20 minutes improves the performance,\n> with the expected drop when the VACUUM is done, but in general the\n> performance is steady...\n\nWhat version of PG are you running ?\n\nOn PG 7.2 vacuum itself does not incur very big performance hit. And you\ndon't need to run VACUUM ANALYZE that often, just plain VACUUM will do\nnicely.\n\nYou can also restrict VACUUMING to your table only by doing VACUUM TABLENAME\n\nIf the total size of your table is small I'd recommend running VACUUM\nTABLENAME\neven more often, up to every few seconds.\n\n> Investigation have shown that it's the actual DELETE that's slow,\n\nDo you have any foreign keys on that table ?\n\nOr even an ON DELETE trigger.\n\n> any idea how to find WHERE (and hopefully WHY :) this is so?\n\nNope :)\n\n-------------\nHannu\n\n\n\n\n\n\n",
"msg_date": "Thu, 7 Mar 2002 10:41:16 +0200",
"msg_from": "\"Hannu Krosing\" <hannu@itmeedia.ee>",
"msg_from_op": false,
"msg_subject": "Re: Drop in performance for each INSERT/DELETE combo"
}
] |
[
{
"msg_contents": "\n> > > > If they do not affect performance, then why have them off?\n> > > \n> > > I think Jan said 2-3%. If we can get autovacuum from it, it would be a\n> > > win to keep it on all the time, perhaps.\n> > \n> > Assuming that the statistics get updated:\n> > \n> > How often should the sats table be queried?\n> > What sort of configurability would be needed?\n> \n> You could wake up every few minutes and see how the values have changed.\n> I don't remember if there is a way to clear that stats so you can see\n> just the changes in the past five minutes. Vacuum the table that had\n> activity.\n\nI cannot envision querying the stats every 4 seconds, especially if the stats \nthread already has most of the info in hand. \n\nI still think, that for best results the vacuums should happen continuously\nfor single pages based on a hook in wal or the buffer manager. Do I remember \ncorrectly, that the active page (the one receiving the next row) already has \na strategy for slot reuse ? Maybe this strategy should be the followed more\naggressively ?\n\nSeems the worst case is a few row table that permanently get updated,\nit should be possible to harness this situation with above method.\n\nAndreas\n",
"msg_date": "Wed, 6 Mar 2002 09:35:16 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically"
},
{
"msg_contents": "Zeugswetter Andreas SB SD wrote:\n> \n> > > > > If they do not affect performance, then why have them off?\n> > > >\n> > > > I think Jan said 2-3%. If we can get autovacuum from it, it would be a\n> > > > win to keep it on all the time, perhaps.\n> > >\n> > > Assuming that the statistics get updated:\n> > >\n> > > How often should the sats table be queried?\n> > > What sort of configurability would be needed?\n> >\n> > You could wake up every few minutes and see how the values have changed.\n> > I don't remember if there is a way to clear that stats so you can see\n> > just the changes in the past five minutes. Vacuum the table that had\n> > activity.\n> \n> I cannot envision querying the stats every 4 seconds, especially if the stats\n> thread already has most of the info in hand.\n> \n> I still think, that for best results the vacuums should happen continuously\n> for single pages based on a hook in wal or the buffer manager. Do I remember\n> correctly, that the active page (the one receiving the next row) already has\n> a strategy for slot reuse ? Maybe this strategy should be the followed more\n> aggressively ?\n> \n> Seems the worst case is a few row table that permanently get updated,\n> it should be possible to harness this situation with above method.\n\nPerhaps the statistics thread can trigger a semaphore when it sees that vacuum\nneeds to be run?\n",
"msg_date": "Wed, 06 Mar 2002 07:12:09 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically"
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> I still think, that for best results the vacuums should happen continuously\n> for single pages based on a hook in wal or the buffer manager.\n\nNot possible unless you are willing to have SELECTs grab much stronger\nlocks than they do now (viz, the same kind of lock that VACUUM does).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Mar 2002 11:11:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically "
},
{
"msg_contents": "Tom Lane wrote:\n> \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> > I still think, that for best results the vacuums should happen continuously\n> > for single pages based on a hook in wal or the buffer manager.\n> \n> Not possible unless you are willing to have SELECTs grab much stronger\n> locks than they do now (viz, the same kind of lock that VACUUM does).\n\nRemember, you can't recycle the page until all transactions are done\nseeing it as active. I think something periodic will do the job just\nfine.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 6 Mar 2002 12:44:29 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically"
}
] |
[
{
"msg_contents": "Did you try this using temporary tables?\nI've noticed a better performance on one of our apps that used to do\njust that (insert some records and delete some records from a sctrach\ntable)\n\nWe recoded it to basically create a temp table, insert records, do\nwhatever with them and than drop the temp table.\n\nThis is easily achieved with CREATE TEMP TABLE\n\n\nHope this helps\n\ndali\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Turbo\nFredriksson\nSent: Wednesday, 6 March 2002 21:20\nTo: pgsql-hackers@postgresql.org\nSubject: [HACKERS] Drop in performance for each INSERT/DELETE combo\n\n\nWe're using a function to insert some information into the database.\nThis information is later (within seconds) retrieved from a program,\nthat does the actual processing of the information. It is then\ndeleted from the database when we're done with it.\n\n\nWe see a MAJOR performance loss the longer the time. It starts out\nfrom around 28 'data chunks' per second (inserts in a couple tables),\nand drops down to below 10/s...\n\nIf doing 'VACUUM ANALYZE' every 20 minutes improves the performance,\nwith the expected drop when the VACUUM is done, but in general the\nperformance is steady...\n\nInvestigation have shown that it's the actual DELETE that's slow,\nany idea how to find WHERE (and hopefully WHY :) this is so?\n-- \nattack Serbian radar PLO ammonium toluene Legion of Doom congress DES\npits Ft. Bragg KGB Honduras kibo World Trade Center\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n",
"msg_date": "Wed, 6 Mar 2002 22:59:19 +1300",
"msg_from": "\"Dalibor Andzakovic\" <dalibor.andzakovic@swerve.co.nz>",
"msg_from_op": true,
"msg_subject": "Re: Drop in performance for each INSERT/DELETE combo"
}
] |
[
{
"msg_contents": "Glad to hear that. Enjoy PostgreSQL!\n--\nTatsuo Ishii\n\nFrom: cnliou@eurosport.com\nSubject: Rep: [BUGS] Encoding Problem?\nDate: Wed, 6 Mar 2002 10:40:48 GMT\nMessage-ID: <200203061040.30a8@th00.opsion.fr>\n\n> Hello! Tatsuo,\n> \n> Please pardon my ignorance! I did not understand the\n> basic concept: the client encoding being different\n> from server encoding.\n> \n> It works perfectly after I rest my client from EUC_TW\n> and MULE_INTERNAL to BIG5.\n> \n> Best Regards,\n> \n> CN\n> \n> > linux:~$ od -t x /tmp/tt\n> > 0000000 31313131 a5a8a60a 5cb30a5c 3232320a\n> > 0000020 00000a32\n> > 0000022\n> \n> Are you sure that they are EUC-TW? Considering the\n> byte swapping, they\n> are actually like this:\n> \n> 0x31,0x31,0x31,0x31,0x0a,\n> 0xa6,0xa8,0xa5,0x5c,0x0a,\n> 0xb3,0x5c,0x0a,\n> 0x32,0x32,0x32,0x32,0x0a\n> \n> Here we see a55c and b35c, which should never happen\n> in EUC-TW, since\n> the each second byte is lower than 0x80.\n> I guess they are BIG5. If my guess is correct, you\n> could set the\n> client encoding to BIG5 (\"\\encoding BIG5\" in psql)\n> and get correct\n> result.\n> --\n> Tatsuo Ishii\n> \n> --------------------------------------------------------\n> You too can have your own email address from Eurosport.\n> http://www.eurosport.com\n> \n> \n> \n> \n> \n",
"msg_date": "Wed, 06 Mar 2002 21:59:56 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Rep: [BUGS] Encoding Problem?"
}
] |
[
{
"msg_contents": "I am recreating my development environent, and tuning my PostgreSQL system.\n\nHere is the question:\n\nOK, I am running pgbench against my new server. How do I know it is doing well\nor not? I understand it is all subjective, but what sort of numbers are you\nguys getting?\n\nI have a dual PIII, 1G RAM, two disks one dedicated to /pg_xlog and system, one\nto /base.\n\nUsing a benchmark with a scale of 100, I hover around a tps of 110 (106~114)\nfor 25,50, and 100 concurrent users. Now, is that good? What do you guys get?\n",
"msg_date": "Wed, 06 Mar 2002 09:50:10 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Subjective question: What is a good TPS for pgbench?"
}
] |
[
{
"msg_contents": "I've reposted RPMs for Mandake 8.1 at\n\n ftp://ftp.postgresql.org/pub/binary/v7.2/RPMS/mandrake-8.1/\n\nwith one additional RPM built for the python \"mx\" package.\n\nThe postgresql RPMs themselves are built using the same spec file as\nused by Lamar for the RH RPMS. The mx RPM is massaged from the RH source\nRPM to remove gratuitous RH-isms and to remove references to the\nDistutils package, which on Mandrake is included in the normal python\npackages.\n\n - Thomas\n",
"msg_date": "Wed, 06 Mar 2002 07:42:53 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Mandrake RPMs rebuilt"
},
{
"msg_contents": "On Wednesday 06 March 2002 10:42 am, Thomas Lockhart wrote:\n> with one additional RPM built for the python \"mx\" package.\n\n> The postgresql RPMs themselves are built using the same spec file as\n> used by Lamar for the RH RPMS. The mx RPM is massaged from the RH source\n> RPM to remove gratuitous RH-isms and to remove references to the\n> Distutils package, which on Mandrake is included in the normal python\n> packages.\n\nThank you, Thomas.\n\nI try to nix RedHatisms from my spec -- it is not always possible since I \nonly run RedHat on my Intel machines (I run SuSE 7.3 on my UltraSPARC, but \nhave not had time to work on SuSEifying the 7.2 set yet), and I don't always \nknow what the RedHatisms are. \n\nBack in the 7.0.x days I found a number of RedHatisms while putting together \nGreatBridge's RPMsets, but that knowledge isn't as applicable today as it was \nthen.\n\nI do know that Mandrake is just as guilty in applying 'Mandrakisms' -- but at \nleast they are documented. :-)\n\nIt does make it difficult to maintain a 'generic' RPM.....\n\nSpeaking of, I may be able to get another release out this week to fix some \nproblems that have been brought to my attention. We'll see how I recuperate \nfrom my last 18 16-hour days..... (18 _consecutive_ days.....)\n\nI'll look at your mx RPM's spec and compare to RedHat's -- if a finer grained \ndependency can be determined, then we can go that route, as the eGenix mx \nstuff goes by more names than just 'mx'.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 6 Mar 2002 11:02:30 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Mandrake RPMs rebuilt"
},
{
"msg_contents": "(dropped -general from the recipients)\n\n> I try to nix RedHatisms from my spec -- it is not always possible since I\n> only run RedHat on my Intel machines (I run SuSE 7.3 on my UltraSPARC, but\n> have not had time to work on SuSEifying the 7.2 set yet), and I don't always\n> know what the RedHatisms are.\n\nSo far, the only things I had to do were to generate an \"mx\" RPM for my\nmachine, since the eGenix modules are not apparently available from Mdk\nsites.\n\nMaybe this is one reason:\n\n WARNING: Using this file is only recommended if you really must\n use it for some reason. It is not being actively maintained !\n\n(from the header of the mxDateTime module :(\n\n> I'll look at your mx RPM's spec and compare to RedHat's -- if a finer grained\n> dependency can be determined, then we can go that route, as the eGenix mx\n> stuff goes by more names than just 'mx'.\n\nRight. The RH spec seems to have called it \"mx2\" sometimes, and who\nknows what else it is called once it is packaged (the tarballs have the\neGenix name in them).\n\nThe RH spec file did weird things like explicitly reference\n\"/usr/bin/python2.2\" as the python executable rather than\n/usr/bin/python, perhaps because of problems with their build of\npython-2.2. Mandrake 8.1 has python-2.1, and I was able to use your\ntechniques for version detection from the Pg spec file instead.\n\nAnyway, I'm not sure what the future of the mx package is, or whether\nthere is an alternative in the python package set. It is a fairly simple\nbuild, so occasionally grabbing the RH package and rebuilding isn't a\nbig deal.\n\n - Thomas\n",
"msg_date": "Wed, 06 Mar 2002 08:35:51 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Re: Mandrake RPMs rebuilt"
},
{
"msg_contents": "Thomas Lockhart <thomas@fourpalms.org> writes:\n\n> (dropped -general from the recipients)\n> So far, the only things I had to do were to generate an \"mx\" RPM for my\n> machine, since the eGenix modules are not apparently available from Mdk\n> sites.\n> \n> Maybe this is one reason:\n> \n> WARNING: Using this file is only recommended if you really must\n> use it for some reason. It is not being actively maintained !\n> \n> (from the header of the mxDateTime module :(\n\nIt is a standard part of the Python DB API, for better or worse.\n \n> > I'll look at your mx RPM's spec and compare to RedHat's -- if a finer grained\n> > dependency can be determined, then we can go that route, as the eGenix mx\n> > stuff goes by more names than just 'mx'.\n> \n> Right. The RH spec seems to have called it \"mx2\" sometimes, and who\n> knows what else it is called once it is packaged (the tarballs have the\n> eGenix name in them).\n\nThey're set up so you can create mx or mx2, in order to have python\n1.5 and python 2.2 installed in parallel.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "06 Mar 2002 16:43:18 -0500",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: Mandrake RPMs rebuilt"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n\n> \n> Speaking of, I may be able to get another release out this week to fix some \n> problems that have been brought to my attention.\n\nYou really want the last set of changes I did... the initial one is\nbroken wrt. contrib (the paths in it broken for 7.1, fixed, then\nbroken differently on 7.2)\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "06 Mar 2002 16:45:34 -0500",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: Mandrake RPMs rebuilt"
},
{
"msg_contents": "On Wednesday 06 March 2002 04:45 pm, Trond Eivind Glomsrød wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > Speaking of, I may be able to get another release out this week to fix\n> > some problems that have been brought to my attention.\n\n> You really want the last set of changes I did... the initial one is\n> broken wrt. contrib (the paths in it broken for 7.1, fixed, then\n> broken differently on 7.2)\n\nAre the latest changes in 7.2-1.72? Or do you have newer ones?\n\nLove the changelog comments.... Particularly the one about my changelog entry \nthat didn't escape a macro... :-)\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 6 Mar 2002 22:59:01 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Mandrake RPMs rebuilt"
}
] |
[
{
"msg_contents": "[let's keep this thread on the list please]\n\n>>>>> \"Nikolay\" == Nikolay Mihaylov <pg@nmmm.nu> writes:\n\n Nikolay> Why you do not use UPDATE instead DELETE ? (e.g. flag if\n Nikolay> the operation is finished)\n\nThat was my first response when the test crew said that 'they found\nthat the problem seemed to be in the DELETE, not the INSERT' (their\nexact words :).\n\nMy idea was that that would decrease the fragmentation of the database...\n\nThe difference was minor, (yet again) according to the test crew...\n\n Nikolay> We had similar problems, but a VACUUM once per 2-3 mo,\n Nikolay> helps us (the database is not so big ~ 20 - 30MB).\n\nIs this database constantly changing? Or is it more or less static?\n\nThe database won't be bigger than 10Mb at any time (and that's an\nexaggeration). The real issue seems to be the constant changing of\nthe content...\n-- \nUzi Ortega 767 class struggle Clinton counter-intelligence\narrangements toluene PLO AK-47 Ft. Meade Soviet quiche Khaddafi\ncracking\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n",
"msg_date": "06 Mar 2002 17:27:59 +0100",
"msg_from": "Turbo Fredriksson <turbo@bayour.com>",
"msg_from_op": true,
"msg_subject": "Re: Drop in performance for each INSERT/DELETE combo"
}
] |
[
{
"msg_contents": "Current CVS of postgres is completely broken.\n\ninitdb fails with SIG11 during the 'creation of template1'\n\n--debug doesn't show anything being written.\n\nSeveral regression tests have postmaster crashing.\n\nThese appeared somewhere between 4 days ago and yesterday.\n\nI'm afraid I really don't know where to start debugging the problem.\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n\n",
"msg_date": "Wed, 6 Mar 2002 11:39:28 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Bad Build"
},
{
"msg_contents": "I get this from initdb:\n\n[gcope@mouse pgsql]$ initdb\nThe files belonging to this database system will be owned by user\n\"gcope\".\nThis user must also own the server process.\n\nFixing permissions on existing directory /usr/local/src/pgsql/data... ok\ncreating directory /usr/local/src/pgsql/data/base... ok\ncreating directory /usr/local/src/pgsql/data/global... ok\ncreating directory /usr/local/src/pgsql/data/pg_xlog... ok\ncreating directory /usr/local/src/pgsql/data/pg_clog... ok\ncreating template1 database in /usr/local/src/pgsql/data/base/1...\n/usr/bin/initdb: line 473: 1234 Broken pipe cat\n\"$POSTGRES_BKI\"\n 1235 | sed -e\n\"s/POSTGRES/$POSTGRES_SUPERUSERNAME/g\" -e \"s/ENCODING/$MULTIBYTEID/g\"\n 1236 Segmentation fault | \"$PGPATH\"/postgres -boot -x1\n$PGSQL_OPT $BACKEND_TALK_ARG template1\n\ninitdb failed.\n\n\nOn Wed, 2002-03-06 at 10:39, Rod Taylor wrote:\n> Current CVS of postgres is completely broken.\n> \n> initdb fails with SIG11 during the 'creation of template1'\n> \n> --debug doesn't show anything being written.\n> \n> Several regression tests have postmaster crashing.\n> \n> These appeared somewhere between 4 days ago and yesterday.\n> \n> I'm afraid I really don't know where to start debugging the problem.\n> --\n> Rod Taylor\n> \n> Your eyes are weary from staring at the CRT. You feel sleepy. Notice\n> how restful it is to watch the cursor blink. Close your eyes. The\n> opinions stated above are yours. You cannot imagine why you ever felt\n> otherwise.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org",
"msg_date": "06 Mar 2002 17:44:38 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bad Build"
},
{
"msg_contents": "On Wed, 2002-03-06 at 11:39, Rod Taylor wrote:\n> Current CVS of postgres is completely broken.\n\nYes, I see this as well. The likely culprit seems to be applying patches\nthat conflict with one another...\n\n> I'm afraid I really don't know where to start debugging the problem.\n\nWell, there is a shift/reduce conflict in gram.y; there may be other\nproblems, but that's the first one I saw.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "06 Mar 2002 23:26:54 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: Bad Build"
},
{
"msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> Yes, I see this as well.\n\nBy my count the breakage from the DOMAIN patch is:\n\t1 shift/reduce conflict in gram.y\n\t3 gcc warnings (at least one being obviously a bug)\n\t1 core dump during regression tests\n\nBruce, what in the heck were you doing applying this patch? You knew\ndarn well it had not been meaningfully reviewed. Not bothering to check\nfor compile problems or regression failures before applying is\nunforgivably sloppy work. I don't blame the submitter; I blame you,\nwho should have known better.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Mar 2002 00:19:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bad Build "
},
{
"msg_contents": "Tom Lane wrote:\n> Neil Conway <nconway@klamath.dyndns.org> writes:\n> > Yes, I see this as well.\n> \n> By my count the breakage from the DOMAIN patch is:\n> \t1 shift/reduce conflict in gram.y\n> \t3 gcc warnings (at least one being obviously a bug)\n> \t1 core dump during regression tests\n> \n> Bruce, what in the heck were you doing applying this patch? You knew\n> darn well it had not been meaningfully reviewed. Not bothering to check\n> for compile problems or regression failures before applying is\n> unforgivably sloppy work. I don't blame the submitter; I blame you,\n> who should have known better.\n\nYou can blame me all you want. That was in the queue, and no one\nobjected, so I did my best. If you don't want to forgive me, don't.\n\nIn fact, this whole indignation thing is starting to tire me. This is an\nopen-source project. People do the best they can. Just make the best of\nit and move on.\n\nIf this is the worst that has happened from my applying all those back\npatches, I am happy.\n\nI do compile tests regularly, and regression tests periodically. I do\nnot do it for every patch but more for every batch of patches.\n\nNow, if people would like the patch backed out, it can be easily done.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 7 Mar 2002 00:53:26 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bad Build"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Tom Lane wrote:\n> > Neil Conway <nconway@klamath.dyndns.org> writes:\n> > > Yes, I see this as well.\n> >\n> > By my count the breakage from the DOMAIN patch is:\n> > 1 shift/reduce conflict in gram.y\n> > 3 gcc warnings (at least one being obviously a bug)\n> > 1 core dump during regression tests\n> >\n> > Bruce, what in the heck were you doing applying this patch? You knew\n> > darn well it had not been meaningfully reviewed. Not bothering to check\n> > for compile problems or regression failures before applying is\n> > unforgivably sloppy work. I don't blame the submitter; I blame you,\n> > who should have known better.\n>\n> You can blame me all you want. That was in the queue, and no one\n> objected, so I did my best. If you don't want to forgive me, don't.\n>\n> In fact, this whole indignation thing is starting to tire me. This is an\n> open-source project. People do the best they can. Just make the best of\n> it and move on.\n>\n> If this is the worst that has happened from my applying all those back\n> patches, I am happy.\n\n Sorry Bruce, but just because your patch queue is very long\n due to the delays in the 7.2 release cycle is no excuse to\n work sloppy now. Rushing things in is not the solution.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 7 Mar 2002 10:02:52 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Bad Build"
},
{
"msg_contents": "In GDB we have this notion of a \"write after approval\" list,\nso more people can be given write access to the repository with\nthe condition that they can only check things in after someone\n(which is entitled to do so) approves the patch. This reduces\nthe burden on the person who has to check in things for everyone else.\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Thu, 07 Mar 2002 12:48:38 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Bad Build"
},
{
"msg_contents": "Jan Wieck wrote:\n> Bruce Momjian wrote:\n> > Tom Lane wrote:\n> > > By my count the breakage from the DOMAIN patch is:\n> > > 1 shift/reduce conflict in gram.y\n> > > 3 gcc warnings (at least one being obviously a bug)\n> > > 1 core dump during regression tests\n> > >\n> > > Bruce, what in the heck were you doing applying this patch? You knew\n> > > darn well it had not been meaningfully reviewed. Not bothering to check\n> > > for compile problems or regression failures before applying is\n> > > unforgivably sloppy work. I don't blame the submitter; I blame you,\n> > > who should have known better.\n> >\n> > You can blame me all you want. That was in the queue, and no one\n> > objected, so I did my best. If you don't want to forgive me, don't.\n> >\n> > In fact, this whole indignation thing is starting to tire me. This is an\n> > open-source project. People do the best they can. Just make the best of\n> > it and move on.\n> >\n> > If this is the worst that has happened from my applying all those back\n> > patches, I am happy.\n> \n> Sorry Bruce, but just because your patch queue is very long\n> due to the delays in the 7.2 release cycle is no excuse to\n> work sloppy now. Rushing things in is not the solution.\n\nNo, this was not in a very old patch. First patch appeared late\nFebruary, there was discussion, a second patch appeared early March, and\nthere was no discussion on it. The patch had a file of sample queries,\nand someone else had even submitted a psql patch based on the feature. \nSo, actually, it looked pretty good.\n\nIn fact, the patch did have a compile problem when applied because it\nused our commandTag that isn't used anymore in that place, so I fixed\nit.\n\nHowever, that wasn't really my issue. I am happy to back out anything,\nand have done so for both patches listed above. 
I will contact the\nauthors and get an updated version that we can discuss and request more\ntesting.\n\nMy issue is that trying to blame someone isn't the proper way to address\nthese issues.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 7 Mar 2002 13:05:51 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bad Build"
},
{
"msg_contents": "> In fact, the patch did have a compile problem when applied because\nit\n> used our commandTag that isn't used anymore in that place, so I\nfixed\n> it.\n\nSpeaking of which, what's the proper way to do this? I get ??? after\nall commands now.\n\n",
"msg_date": "Thu, 7 Mar 2002 13:18:02 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Re: Bad Build"
},
{
"msg_contents": "Rod Taylor wrote:\n> > In fact, the patch did have a compile problem when applied because\n> it\n> > used our commandTag that isn't used anymore in that place, so I\n> fixed\n> > it.\n> \n> Speaking of which, whats the proper way to do this? I get ??? after\n> all commands now.\n\nGood question. I see commandTag set to \"CREATE\" in many places in\npostgres.c. I believe you need to add an additional 'case' for the\ndomain stuff.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 7 Mar 2002 13:21:45 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bad Build"
},
{
"msg_contents": "If I do \"initdb\" with the \"-d\" option, postgres will crash during the\nbootstrap. I think it's because postgres is attempting to strlen(NULL)\nwhile trying to build a string during it's option parsing.\n\nThis happen for anyone else?\n\nGreg\n\n\n\nOn Wed, 2002-03-06 at 22:26, Neil Conway wrote:\n> On Wed, 2002-03-06 at 11:39, Rod Taylor wrote:\n> > Current CVS of postgres is completely broken.\n> \n> Yes, I see this as well. The likely culprit seems to be applying patches\n> that conflict with one another...\n> \n> > I'm afraid I really don't know where to start debugging the problem.\n> \n> Well, there is a shift/reduce conflict in gram.y; there may be other\n> problems, but that's the first one I saw.\n> \n> Cheers,\n> \n> Neil\n> \n> -- \n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly",
"msg_date": "07 Mar 2002 16:19:21 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: Bad Build"
},
{
"msg_contents": "Greg Copeland wrote:\n\nChecking application/pgp-signature: FAILURE\n-- Start of PGP signed section.\n> If I do \"initdb\" with the \"-d\" option, postgres will crash during the\n> bootstrap. I think it's because postgres is attempting to strlen(NULL)\n> while trying to build a string during it's option parsing.\n> \n> This happen for anyone else?\n\nYes, I see that here. Let me look at it. It may have to do with the\nelog() changes.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 7 Mar 2002 17:37:49 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bad Build"
},
{
"msg_contents": "Greg Copeland wrote:\n> \n> If I do \"initdb\" with the \"-d\" option, postgres will crash during the\n> bootstrap. I think it's because postgres is attempting to strlen(NULL)\n> while trying to build a string during it's option parsing.\n> \n> This happen for anyone else?\n> \n> Greg\n> \n\nYes, but it is fixed now. cvs update -d -P should solve it.\n\n\n> On Wed, 2002-03-06 at 22:26, Neil Conway wrote:\n> > On Wed, 2002-03-06 at 11:39, Rod Taylor wrote:\n> > > Current CVS of postgres is completely broken.\n> >\n> > Yes, I see this as well. The likely culprit seems to be applying patches\n> > that conflict with one another...\n> >\n> > > I'm afraid I really don't know where to start debugging the problem.\n> >\n> > Well, there is a shift/reduce conflict in gram.y; there may be other\n> > problems, but that's the first one I saw.\n> >\n> > Cheers,\n> >\n> > Neil\n> >\n> > --\n> > Neil Conway <neilconway@rogers.com>\n> > PGP Key ID: DB3C29FC\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> \n> ------------------------------------------------------------------------\n> Name: signature.asc\n> signature.asc Type: application/pgp-signature\n> Description: This is a digitally signed message part\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Thu, 07 Mar 2002 17:38:50 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Bad Build"
},
{
"msg_contents": "Guess I failed to mention that I had just done an update and rebuild\njust minutes prior to reporting it.\n\nGreg\n\n\nOn Thu, 2002-03-07 at 16:37, Bruce Momjian wrote:\n> Greg Copeland wrote:\n> \n> Checking application/pgp-signature: FAILURE\n> -- Start of PGP signed section.\n> > If I do \"initdb\" with the \"-d\" option, postgres will crash during the\n> > bootstrap. I think it's because postgres is attempting to strlen(NULL)\n> > while trying to build a string during it's option parsing.\n> > \n> > This happen for anyone else?\n> \n> Yes, I see that here. Let me look at it. It may have to do with the\n> elog() changes.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026",
"msg_date": "07 Mar 2002 16:48:11 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: Bad Build"
},
{
"msg_contents": "Greg Copeland wrote:\n\nChecking application/pgp-signature: FAILURE\n-- Start of PGP signed section.\n> Guess I failed to mention that I had just done an update and rebuild\n> just minutes prior to reporting it.\n\nYes, I see initdb -d failures here with current CVS. Investigating.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 7 Mar 2002 18:28:27 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bad Build"
},
{
"msg_contents": "Greg Copeland <greg@CopelandConsulting.Net> writes:\n> If I do \"initdb\" with the \"-d\" option, postgres will crash during the\n> bootstrap. I think it's because postgres is attempting to strlen(NULL)\n> while trying to build a string during it's option parsing.\n\nFixed --- problem was bad option information passed to getopt().\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Mar 2002 19:47:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bad Build "
},
{
"msg_contents": "Tom Lane wrote:\n> Greg Copeland <greg@CopelandConsulting.Net> writes:\n> > If I do \"initdb\" with the \"-d\" option, postgres will crash during the\n> > bootstrap. I think it's because postgres is attempting to strlen(NULL)\n> > while trying to build a string during it's option parsing.\n> \n> Fixed --- problem was bad option information passed to getopt().\n\nYes, I suspected it was somewhere in there but had to run out for an\nhour. Was just looking at it. Before bootstrap just took -d while it\nnow takes '-d level'. I was sure I missed something and I see now it\nwas getopt(). Thanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 7 Mar 2002 19:51:06 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bad Build"
}
] |
[
{
"msg_contents": "Help me please!! Here I describe my problem.\nThank you!!\n\nBug using ADODB and Visual Basic for accessing PSQL\n\nWe have a problem using ADODB (from Visual Basic) and POSTGRESQL when trying \nto use an unconnected recordset whose property \"locktype\" is set to \n\"adLockBatchOptimistic\" (Rs.locktype=adLockBatchOptimistic). When trying to \naccess a database through ODBC, using ADODB, we have noticed an important \ndelay in comparison to other databases such as SQLServer. Investigating this \nproblem, we have detected where that delay is produced. The problem is \nproduced when we are trying to get an unconnected recordset with the \n\"adLockBatchOptimistic\" lock type. When it is trying to execute the query \n(tracing it) we have seen that three select results arrive, but only one of \nthem corresponds to the sentence of the query; the other two do not.\nWe conclude that the last two results are important to recover the recordset \nstructure that will be disconnected later.\nThe problem is that one of these two extra queries runs a complete select \nover the table that we are accessing with the written select. This table has \na lot of records, and it makes the answer really slow because it brings all \nthe table records. But these records are not kept in the recordset. 
It \nonly brings the data requested by the written sentence. Now, we present the \nVisual Basic code used to access the database and, after that, a log \nobtained through POSTGRE ODBC.\n\n\nVisual Basic Code\n\n\n Dim objRs As New ADODB.Recordset\n Dim objConn As New ADODB.Connection\n Dim strSql As String\n Dim strCadenaConexion As String\n\n strCadenaConexion = _\n \"DSN=contabilidad_psql;DATABASE=contabilidad;\" & _\n \"SERVER=192.168.1.41;PORT=5432;UID=credito;PWD=;\"\n\n objConn.Open strCadenaConexion\n\n strSql = \"select * from agentes where id_sucursal = 7 and id_agente = 100\"\n\n objRs.CursorLocation = adUseClient\n objRs.CursorType = adOpenStatic\n objRs.LockType = adLockBatchOptimistic\n\n Set objRs.ActiveConnection = objConn\n objRs.Open strSql\n\n objRs.ActiveConnection = Nothing\n objConn.Close\n\n MsgBox \"Cant: \" & objRs.RecordCount\n\n\n\nLog ODBC POSTGRE\n\n\nconn=409745072, query='select * from agentes where id_sucursal = 7 and \nid_agente = 100'\n[ fetched 1 rows ]\nconn=409745072, query='SELECT * FROM agentes'\n[ fetched 10773 rows ]\nconn=409745072, query='select ta.attname, ia.attnum from pg_attribute ta, \npg_attribute ia, pg_class c, pg_index i where c.relname = 'agentes' AND \nc.oid = i.indrelid AND i.indisprimary = 't' AND ia.attrelid = i.indexrelid \nAND ta.attrelid = i.indrelid AND ta.attnum = i.indkey[ia.attnum-1] order by \nia.attnum'\n[ fetched 2 rows ]\nconn=409745072, PGAPI_Disconnect\n\nI hope you can help us solve this problem.\nThank you!\nGaston.-\n\n_________________________________________________________________\nMSN Photos is the easiest way to share and print your photos: \nhttp://photos.latam.msn.com/Support/WorldWide.aspx\n\n",
"msg_date": "Wed, 06 Mar 2002 20:19:23 -0300",
"msg_from": "\"Gaston Micheri\" <ggmsistemas@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Odbc, postgresql and disconnected recordsets"
}
] |
[
{
"msg_contents": "Please apply patch. \n----- Original Message ----- \nFrom: Nicolas Bazin \nTo: pgsql-patches@postgresql.org \nSent: Tuesday, March 05, 2002 12:08 PM\nSubject: Re: BUG#599 & BUG 606 correction\n\n\nThe end of a define section is tested too soon during the parsing of ecpg. This patch makes sure that the parser only do the test at the end of the file being parsed.\n\nPlease apply this patch to src/interface/ecpg/preproc/pgc.l\n\nNicolas BAZIN",
"msg_date": "Thu, 7 Mar 2002 10:24:07 +1100",
"msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>",
"msg_from_op": true,
"msg_subject": "fix for BUG#599 & BUG 606"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nNicolas Bazin wrote:\n> \n> Please apply patch. \n> ----- Original Message ----- \n> From: Nicolas Bazin \n> To: pgsql-patches@postgresql.org \n> Sent: Tuesday, March 05, 2002 12:08 PM\n> Subject: Re: BUG#599 & BUG 606 correction\n> \n> \n> The end of a define section is tested too soon during the parsing of ecpg. This patch makes sure that the parser only do the test at the end of the file being parsed.\n> \n> Please apply this patch to src/interface/ecpg/preproc/pgc.l\n> \n> Nicolas BAZIN\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 6 Mar 2002 18:31:44 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fix for BUG#599 & BUG 606"
},
{
"msg_contents": "\nPatch applied by Michael Meskes. Thanks.\n\n---------------------------------------------------------------------------\n\n\nNicolas Bazin wrote:\n> \n> Please apply patch. \n> ----- Original Message ----- \n> From: Nicolas Bazin \n> To: pgsql-patches@postgresql.org \n> Sent: Tuesday, March 05, 2002 12:08 PM\n> Subject: Re: BUG#599 & BUG 606 correction\n> \n> \n> The end of a define section is tested too soon during the parsing of ecpg. This patch makes sure that the parser only do the test at the end of the file being parsed.\n> \n> Please apply this patch to src/interface/ecpg/preproc/pgc.l\n> \n> Nicolas BAZIN\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Mar 2002 10:21:42 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fix for BUG#599 & BUG 606"
}
] |
[
{
"msg_contents": "Greetings,\n\nSome more info in case anyone may have some clues. \n\nThe main info is we are using PreparedStatements when the problem occurs.\nThe same SQL works normally. Any ideas.\n\nThanks in advance,\n\nEric\n\n\n\n\n\n-------- Original Message --------\nMessage-ID: <3C855943.2010007@carl.org>\nDate: Tue, 05 Mar 2002 16:48:19 -0700\nFrom: Eric Scroger <escroger@carl.org>\nUser-Agent: Mozilla/5.0 (Windows; U; WinNT4.0; en-US; rv:0.9.4) \nGecko/20011019 Netscape6/6.2\nX-Accept-Language: en-us\nMIME-Version: 1.0\nTo: Dave@micro-automation.net\nCC: ed@carl.org\nSubject: Re: [HACKERS] [JDBC] A result was returned by the statement, \nwhen none was expected\nReferences: <00f701c1c49e$8bc624e0$8201a8c0@inspiron>\nContent-Type: text/plain; charset=us-ascii; format=flowed\nContent-Transfer-Encoding: 7bit\n\n\n\nDave,\n\nI just installed the driver. Unfortunately, we're still seeing the \nexception.\nSince we know the exception message, we can suppress it from our logs.\n\nThanks again for the idea. Keep'em coming if you have more.\n\nEric\n\n>Eric,\n>\n>The current 7.2 driver will work fine with your database, and may just\n>fix your problem.\n>\n>Dave\n>\n>-----Original Message-----\n>From: pgsql-hackers-owner@postgresql.org\n>[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Eric Scroger\n>Sent: Tuesday, March 05, 2002 6:32 PM\n>To: Dave@micro-automation.net\n>Cc: 'pgsql hackers'; pgsql-jdbc@postgresql.org; ed@carl.org\n>Subject: Re: [HACKERS] [JDBC] A result was returned by the statement,\n>when none was expected\n>\n>\n>Dave,\n>\n>>Eric,\n>>\n>>Which version of the driver are you using? Also do you have logs from \n>>the postgres backend?\n>>\n>\n>We are using driver jar file jdbc7.1-1.2.jar. \n>\n>I reproduced the problem with postgresql configured the debug setting to\n>16. The backend debug logs are attached to this as they are kinda long \n>(postback.log).\n>\n>I'm not sure it is data directly, but it does always fail on the same\n>data. 
One thing I do know is the problem occurs when we are using \n>PreparedStatments. \n>If I put the same SQL statement into a separate statement it works fine.\n>For your info, here is the code related to the PreparedStatements.\n>\n>----------------------------------------------------\n>\n>PreparedStatement psInsertError = null;\n>\n> psInsertError = conn.prepareStatement(\n> \"INSERT INTO \" + tabName\n> + \"(ID,DATA,ATTEMPTS,REASON) VALUES (?,?,?,?)\" );\n>\n> psInsertError.setLong(1,id);\n> psInsertError.setString(2,data);\n> psInsertError.setInt(3,attempts);\n> psInsertError.setString(4,reason);\n> psInsertError.executeUpdate();\n>\n>----------------------------\n>\n>Thanks for your suggestions.\n>\n>Eric\n>\n>\n>>\n>>My suspicion is something in the data is causing the problem, what I \n>>don't know, but that's my guess.\n>>\n>>Dave\n>>\n>\n>>-----Original Message-----\n>>From: pgsql-jdbc-owner@postgresql.org \n>>[mailto:pgsql-jdbc-owner@postgresql.org] On Behalf Of Eric Scroger\n>>Sent: Tuesday, March 05, 2002 5:28 PM\n>>To: pgsql hackers; pgsql-jdbc@postgresql.org\n>>Cc: ed@carl.org\n>>Subject: [JDBC] A result was returned by the statement, when none was \n>>expected\n>>\n>>\n>>JDBC-ers,\n>>\n>>We are doing an INSERT and occasionally Postgres throws a SQLException \n>>because there are unexpected results (see stacktrace below), maybe 10% \n>>of the time, Any idea why this would happen when it works over 90% of \n>>the time? However, it appears the insert is completed successfully.\n>>\n>>We have looked at the source code for ResultSet.java and noticed the \n>>method reallyResultSet() returns true if any of the fields are \n>>non-null. 
I hope that helps in debugging.\n>>\n>>Also, we are running Postgres 7.1 with JDK1.2.1.\n>>\n>>Regards,\n>>\n>>Eric\n>>\n>>-----------------------------------------------------------------------\n>>-\n>>--\n>>\n>>IINSERT INTO bad_urls(ID,DATA,ATTEMPTS,REASON) VALUES\n>>(3375,'http://www.oit.itd.umich.edu/projects/adw2k/chordata/aves.html',\n>>\n>0\n>\n>>,'Unknown \n>>Host')\n>>\n>>A result was returned by the statement, when none was expected.\n>> at java.lang.Throwable.fillInStackTrace(Native Method)\n>> at java.lang.Throwable.fillInStackTrace(Compiled Code)\n>> at java.lang.Throwable.(Compiled Code)\n>> at java.lang.Exception.(Compiled Code)\n>> at java.sql.SQLException.(SQLException.java:98)\n>> at org.postgresql.util.PSQLException.(PSQLException.java:23)\n>> at org.postgresql.jdbc2.Statement.executeUpdate(Statement.java:80)\n>> at\n>>org.postgresql.jdbc2.PreparedStatement.executeUpdate(PreparedStatement.\n>>\n>j\n>\n>>ava:122)\n>> at InsertError.record(InsertError.java:98)\n>> at InsertError.record(InsertError.java:69)\n>> at wbCheckUrl$CheckThread.run(wbCheckUrl.java:307)\n>>\n>>\n>>---------------------------(end of \n>>broadcast)---------------------------\n>>TIP 2: you can get off all lists at once with the unregister command\n>> (send \"unregister YourEmailAddressHere\" to\n>>\n>majordomo@postgresql.org)\n>\n>>\n>\n>\n\n\n\n",
"msg_date": "Wed, 06 Mar 2002 18:26:24 -0700",
"msg_from": "Eric Scroger <escroger@carl.org>",
"msg_from_op": true,
"msg_subject": "[Fwd: Re: [HACKERS] A result was returned by the statement,\n\twhen none was expected]"
},
{
"msg_contents": "Eric,\n\nThe one thing that I don't understand is that the log from the backend\nlooks like it should work?\n\nOther than that PreparedStatements do attempt to escape out some\ncharacters, that might be a clue\n\nDave\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Eric Scroger\nSent: Wednesday, March 06, 2002 8:26 PM\nTo: pgsql-hackers@postgresql.org; pgsql-jdbc@postgresql.org\nCc: ed@carl.org\nSubject: [Fwd: Re: [HACKERS] [JDBC] A result was returned by the\nstatement, when none was expected]\n\n\nGreetings,\n\nSome more info in case anyone may have some clues. \n\nThe main info is we are using PreparedStatements when the problem\noccurs. The same SQL works normally. Any ideas.\n\nThanks in advance,\n\nEric\n\n\n\n\n\n-------- Original Message --------\nMessage-ID: <3C855943.2010007@carl.org>\nDate: Tue, 05 Mar 2002 16:48:19 -0700\nFrom: Eric Scroger <escroger@carl.org>\nUser-Agent: Mozilla/5.0 (Windows; U; WinNT4.0; en-US; rv:0.9.4) \nGecko/20011019 Netscape6/6.2\nX-Accept-Language: en-us\nMIME-Version: 1.0\nTo: Dave@micro-automation.net\nCC: ed@carl.org\nSubject: Re: [HACKERS] [JDBC] A result was returned by the statement, \nwhen none was expected\nReferences: <00f701c1c49e$8bc624e0$8201a8c0@inspiron>\nContent-Type: text/plain; charset=us-ascii; format=flowed\nContent-Transfer-Encoding: 7bit\n\n\n\nDave,\n\nI just installed the driver. Unfortunately, we're still seeing the \nexception.\nSince we know the exception message, we can suppress it from our logs.\n\nThanks again for the idea. 
Keep'em coming if you have more.\n\nEric\n\n>Eric,\n>\n>The current 7.2 driver will work fine with your database, and may just \n>fix your problem.\n>\n>Dave\n>\n>-----Original Message-----\n>From: pgsql-hackers-owner@postgresql.org\n>[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Eric Scroger\n>Sent: Tuesday, March 05, 2002 6:32 PM\n>To: Dave@micro-automation.net\n>Cc: 'pgsql hackers'; pgsql-jdbc@postgresql.org; ed@carl.org\n>Subject: Re: [HACKERS] [JDBC] A result was returned by the statement, \n>when none was expected\n>\n>\n>Dave,\n>\n>>Eric,\n>>\n>>Which version of the driver are you using? Also do you have logs from\n>>the postgres backend?\n>>\n>\n>We are using driver jar file jdbc7.1-1.2.jar.\n>\n>I reproduced the problem with postgresql configured the debug setting \n>to 16. The backend debug logs are attached to this as they are kinda \n>long (postback.log).\n>\n>I'm not sure it is data directly, but it does always fail on the same \n>data. One thing I do know is the problem occurs when we are using \n>PreparedStatments.\n>If I put the same SQL statement into a separate statement it works\nfine.\n>For your info, here is the code related to the PreparedStatements.\n>\n>----------------------------------------------------\n>\n>PreparedStatement psInsertError = null;\n>\n> psInsertError = conn.prepareStatement(\n> \"INSERT INTO \" + tabName\n> + \"(ID,DATA,ATTEMPTS,REASON) VALUES (?,?,?,?)\" );\n>\n> psInsertError.setLong(1,id);\n> psInsertError.setString(2,data);\n> psInsertError.setInt(3,attempts);\n> psInsertError.setString(4,reason);\n> psInsertError.executeUpdate();\n>\n>----------------------------\n>\n>Thanks for your suggestions.\n>\n>Eric\n>\n>\n>>\n>>My suspicion is something in the data is causing the problem, what I\n>>don't know, but that's my guess.\n>>\n>>Dave\n>>\n>\n>>-----Original Message-----\n>>From: pgsql-jdbc-owner@postgresql.org\n>>[mailto:pgsql-jdbc-owner@postgresql.org] On Behalf Of Eric Scroger\n>>Sent: Tuesday, March 05, 
2002 5:28 PM\n>>To: pgsql hackers; pgsql-jdbc@postgresql.org\n>>Cc: ed@carl.org\n>>Subject: [JDBC] A result was returned by the statement, when none was \n>>expected\n>>\n>>\n>>JDBC-ers,\n>>\n>>We are doing an INSERT and occasionally Postgres throws a SQLException\n>>because there are unexpected results (see stacktrace below), maybe 10%\n\n>>of the time, Any idea why this would happen when it works over 90% of \n>>the time? However, it appears the insert is completed successfully.\n>>\n>>We have looked at the source code for ResultSet.java and noticed the\n>>method reallyResultSet() returns true if any of the fields are \n>>non-null. I hope that helps in debugging.\n>>\n>>Also, we are running Postgres 7.1 with JDK1.2.1.\n>>\n>>Regards,\n>>\n>>Eric\n>>\n>>----------------------------------------------------------------------\n>>-\n>>-\n>>--\n>>\n>>IINSERT INTO bad_urls(ID,DATA,ATTEMPTS,REASON) VALUES \n>>(3375,'http://www.oit.itd.umich.edu/projects/adw2k/chordata/aves.html'\n>>,\n>>\n>0\n>\n>>,'Unknown\n>>Host')\n>>\n>>A result was returned by the statement, when none was expected.\n>> at java.lang.Throwable.fillInStackTrace(Native Method)\n>> at java.lang.Throwable.fillInStackTrace(Compiled Code)\n>> at java.lang.Throwable.(Compiled Code)\n>> at java.lang.Exception.(Compiled Code)\n>> at java.sql.SQLException.(SQLException.java:98)\n>> at org.postgresql.util.PSQLException.(PSQLException.java:23)\n>> at org.postgresql.jdbc2.Statement.executeUpdate(Statement.java:80)\n>> at \n>>org.postgresql.jdbc2.PreparedStatement.executeUpdate(PreparedStatement\n>>.\n>>\n>j\n>\n>>ava:122)\n>> at InsertError.record(InsertError.java:98)\n>> at InsertError.record(InsertError.java:69)\n>> at wbCheckUrl$CheckThread.run(wbCheckUrl.java:307)\n>>\n>>\n>>---------------------------(end of\n>>broadcast)---------------------------\n>>TIP 2: you can get off all lists at once with the unregister command\n>> (send \"unregister YourEmailAddressHere\" 
to\n>>\n>majordomo@postgresql.org)\n>\n>>\n>\n>\n\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\nsubscribe-nomail command to majordomo@postgresql.org so that your\nmessage can get through to the mailing list cleanly\n\n\n",
"msg_date": "Wed, 6 Mar 2002 21:52:06 -0500",
"msg_from": "\"Dave Cramer\" <Dave@micro-automation.net>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: Re: [HACKERS] A result was returned by the statement,\n\twhen none was expected]"
}
] |
[
{
"msg_contents": "Me again, I have some more details on my storage location patch \n\n\n\nThis patch would allow the system admin (DBA) to specify the location of\ndatabases, tables/indexes and temporary objects (temp tables and temp sort\nspace) independent of the database/system default location. This patch would\nreplace the current \"LOCATION\" code.\n\nPlease let me know if you have any questions/comments. I would like to see\nthis feature make 7.3. I believe it will take about 1 month of coding and\ntesting after I get started.\n\nThanks\nJim\n\n==============================================================================\nStorage Location Patch (Try 3)\n\n\n(If people like TABLESPACE instead of LOCATION then s/LOCATION/TABLESPACE/g\nbelow)\n\n\nThis patch would add the following NEW commands\n----------------------------------------------------\n CREATE LOCATION name PATH 'dbpath';\n DROP LOCATION name;\n\nwhere dbpath is any directory that the postgresql backend can write to.\n(I know this is how Oracle works, don't know about the other major db systems)\n\nThe following NEW GLOBAL system table would be added. 
\n-----------------------------------------------------\nPG_LOCATION\n( \n LOC_NAME name,\n LOC_PATH text -- This should be able to take any path name.\n);\n(initdb would add (PGDATA,'/usr/local/pgsql/data')\n\nThe following system tables would need to be modified\n-----------------------------------------------------\nPG_DATABASE drop datpath \n add DATA_LOC_NAME name or DATA_LOC_OID OID \n add INDEX_LOC_NAME name or INDEX_LOC_OID OID\n add TEMP_LOC_NAME name or TEMP_LOC_OID OID\nPG_CLASS to add LOC_NAME name or LOC_OID OID \n\nDATA_LOC_* and INDEX_LOC_* would default to PGDATA if not specified.\n\n(I like *LOC_NAME better but I believe the rest of the systems tables use OID)\n\n\nThe following command syntax would be modified\n------------------------------------------------------\nCREATE DATABASE WITH DATA_LOCATION=XXX INDEX_LOCATION=YYY TEMP_LOCATION=ZZZ\nCREATE TABLE aaa (...) WITH LOCATION=XXX;\nCREATE TABLE bbb (c1 text primary key location CCC) WITH LOCATION=XXX;\nCREATE TABLE ccc (c2 text unique location CCC) WITH LOCATION=XXX;\nCREATE INDEX XXX on SAMPLE (C2) WITH LOCATION BBB;\n\n\n\nNow for an example\n------------------------------------------------------\nFirst:\n postgresql is installed at /usr/local/pgsql\n userid postgres\n the postgres user also is the owner of /pg01 /pg02 /pg03\n\nthe dba executes the following script\nCREATE LOCATION pg01 PATH '/pg01';\nCREATE LOCATION pg02 PATH '/pg02';\nCREATE LOCATION pg03 PATH '/pg03';\nCREATE LOCATION bigdata PATH '/bigdata';\nCREATE LOCATION bigidx PATH '/bigidx';\n\\q\n\nPG_LOCATION now has\npg01 | /pg01\npg02 | /pg02\npg03 | /pg03\nbigdata | /bigdata\nbigidx | /bigidx\n\nNow the following command is run\nCREATE DATABASE jim1 WITH DATA_LOCATION='pg01' INDEX_LOCATION='pg02'\nTEMP_LOCATION='pg03'\n-- OID of 'jim1' tuple is 1786146\n\non disk the directories look like this \n/pg01/1786146 <<-- Default DATA Location \n/pg02/1786146 <<-- Default INDEX Location\n/pg03/1786146 <<-- Default Temp Location \n\nAll 
files from the above directories will have symbolic links to\n/usr/local/pgsql/data/base/1786146/ \n\n\n\nNow the system will have 1 BIG table that will get its own disk for data and\nits own disk for index\ncreate table big (a text,b text ..., primary key (a,b) location 'bigidx');\n\noid of big table is 1786150\noid of big table primary key index is 1786151\n\non disk directories look like this\n/bigdata/1786146/1786150\n/bigidx/1786146/1786151\n/usr/local/pgsql/data/base/1786146/1786150 symbolic link to\n/bigdata/1786146/1786150\n/usr/local/pgsql/data/base/1786146/1786151 symbolic link to\n/bigdata/1786146/1786151\n\n\n\nThe symbolic links will enable the rest of the software to be location\nindependent.\n\n\n",
"msg_date": "Wed, 6 Mar 2002 20:44:43 -0500",
"msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>",
"msg_from_op": true,
"msg_subject": "Storage Location / Tablespaces (try 3)"
}
] |
[
{
"msg_contents": "This is on a FreeBSD/Alpha machine. Using latest CVS.\n\n/bin/sh\n./pg_regress --temp-install --top-builddir=../../.. --schedule=./parallel_\nschedule --multibyte=\n============== creating temporary installation ==============\n============== initializing database system ==============\n============== starting postmaster ==============\nrunning on port 65432 with pid 30866\n============== creating database \"regression\" ==============\nCREATE DATABASE\n============== dropping regression test user accounts ==============\n============== installing PL/pgSQL ==============\n============== running regression test queries ==============\nparallel group (13 tests): text oid int2 name varchar char float4 int8\nboolean fl\noat8 int4 bit numeric\n boolean ... ok\n char ... ok\n name ... ok\n varchar ... ok\n text ... ok\n int2 ... ok\n int4 ... ok\n int8 ... ok\n oid ... ok\n float4 ... ok\n float8 ... ok\n bit ... ok\n numeric ... ok\ntest strings ... ok\ntest numerology ... ok\nparallel group (20 tests): comments path lseg box polygon tinterval circle\nabstim\ne reltime time interval point inet type_sanity timestamp timestamptz timetz\ndate o\npr_sanity oidjoins\n point ... ok\n lseg ... ok\n box ... ok\n path ... ok\n polygon ... ok\n circle ... ok\n date ... ok\n time ... ok\n timetz ... ok\n timestamp ... ok\n timestamptz ... ok\n interval ... ok\n abstime ... ok\n reltime ... ok\n tinterval ... ok\n inet ... ok\n comments ... ok\n oidjoins ... ok\n type_sanity ... ok\n opr_sanity ... ok\ntest geometry ... ok\ntest horology ... ok\ntest create_function_1 ... FAILED\ntest create_type ... FAILED\ntest create_table ... FAILED\ntest create_function_2 ... FAILED\ntest copy ... FAILED\nparallel group (7 tests): create_misc constraints triggers create_index\ninherit c\nreate_operator create_aggregate\n constraints ... FAILED\n triggers ... FAILED\n create_misc ... FAILED\n create_aggregate ... FAILED\n create_operator ... FAILED\n create_index ... 
FAILED\n inherit ... FAILED\ntest create_view ... FAILED\ntest sanity_check ... FAILED\ntest errors ... FAILED\ntest select ... FAILED\nparallel group (16 tests): select_distinct_on select_into random\nselect_distinct\nbtree_index aggregates transactions select_having arrays subselect\nhash_index port\nals select_implicit case union join\n select_into ... FAILED\n select_distinct ... FAILED\n select_distinct_on ... FAILED\n select_implicit ... ok\n select_having ... ok\n subselect ... ok\n union ... ok\n case ... ok\n join ... ok\n aggregates ... FAILED\n transactions ... FAILED\n random ... failed (ignored)\n portals ... FAILED\n arrays ... ok\n btree_index ... FAILED\n hash_index ... FAILED\ntest privileges ... ok\ntest misc ... FAILED\nparallel group (5 tests): select_views portals_p2 alter_table rules\nforeign_key\n select_views ... FAILED\n alter_table ... FAILED\n portals_p2 ... FAILED\n rules ... FAILED\n foreign_key ... ok\nparallel group (3 tests): limit temp plpgsql\n limit ... FAILED\n plpgsql ... ok\n temp ... ok\n============== shutting down postmaster ==============\n\n=====================================================\n 31 of 79 tests failed, 1 of these failures ignored.\n=====================================================\n\nThe differences that caused some tests to fail can be viewed in the\nfile `./regression.diffs'. A copy of the test summary that you see\nabove is saved in the file `./regression.out'.\n\nrm regress.o\ngmake[2]: Leaving directory `/home/chriskl/pgsql/src/test/regress'\ngmake[1]: Leaving directory `/home/chriskl/pgsql/src/test'",
"msg_date": "Thu, 7 Mar 2002 11:12:43 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "CVS regression test failures"
}
] |
[
{
"msg_contents": "I'd like to confirm I'm doing this in an acceptable manner as running\nthrough the dependency tree is kinda heavy -- then again how often do\nyou drop something?\n\nSource patch (against 7.2 release) can be found at:\nhttp://www.zort.ca/~rbt/patches/postgresql_dependencies/depend.patch\n\nTracking table pg_depend is similar to pg_description in regards to\napplying an address to an object. It's quite simple to create and\ndelete dependencies between objects.\n\n\nThe bad part is that the depend...() functions rely on the name of\ntheir 'class' (pg_class.relname) to align with those recorded in\ncatname.h. In order to figure out what the object is and how to deal\nwith it, the code is required to run through a large number of if /\nelseif statements. Then the specific removal function is called; like\nremoveType(). I'd much prefer to use a switch statement which doesn't\nappear to work with strings, so I'm looking for suggestions.\nOtherwise those if / else combinations will have as many entries as\nthere are system tables -- well, a couple less.\n\nA completed example on types is a part of the patch, simple output\nbelow. 
System structures do not have entries in pg_depend yet, but\nthose will be done before submission of the patch.\n\ntemplate1=# create database d;\nCREATE DATABASE\ntemplate1=# \\c d\nYou are now connected to database d.\nd=# select * from pg_depend;\n classid | objid | objsubid | depclassid | depobjid | depobjsubid\n---------+-------+----------+------------+----------+-------------\n(0 rows)\nd=# create type typ1 (input = int2in,output = int2out);\nCREATE\nd=# select * from pg_depend;\n classid | objid | objsubid | depclassid | depobjid | depobjsubid\n---------+-------+----------+------------+----------+-------------\n 1247 | 41155 | 0 | 1255 | 38 | 0\n 1247 | 41155 | 0 | 1255 | 39 | 0\n 1247 | 41155 | 0 | 1255 | 38 | 0\n 1247 | 41155 | 0 | 1255 | 39 | 0\n 1247 | 41156 | 0 | 1255 | 750 | 0\n 1247 | 41156 | 0 | 1255 | 751 | 0\n 1247 | 41156 | 0 | 1255 | 750 | 0\n 1247 | 41156 | 0 | 1255 | 751 | 0\n(8 rows)\n\nd=# drop type typ1;\nDROP\nd=# select * from pg_depend;\n classid | objid | objsubid | depclassid | depobjid | depobjsubid\n---------+-------+----------+------------+----------+-------------\n(0 rows)\n\n\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n\n",
"msg_date": "Wed, 6 Mar 2002 23:24:09 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Dependencies"
}
] |
[
{
"msg_contents": "> > I still think, that for best results the vacuums should happen continuously\n> > for single pages based on a hook in wal or the buffer manager.\n> \n> Not possible unless you are willing to have SELECTs grab much stronger\n> locks than they do now (viz, the same kind of lock that VACUUM does).\n\nI am talking about slots, that are marked deleted before oldest tx in progress.\nIt should be possible to overwrite those only with the \"pin\" on the page.\nA pageread for a select does wait (spinlock) while a new txinfo is written,\nthis is necessary since the txinfo is more than one byte, no ?\nIt should be more or less the same situation like using a slot from the \nfreelist, no ?\nUpdate and delete would need to check the \"old page\" for %free and add it to \nthe freelist, like vacuum does.\n\nThis would avoid the (imho large) overhead vacuum imposes of reading static \npages that have not been modified in ages.\n\nAndreas\n",
"msg_date": "Thu, 7 Mar 2002 08:30:48 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically "
},
{
"msg_contents": "\nIf I may pipe in to this discussion, I have some experience with this.\nWith threaded postgres, the thread that is reponsible for writing out\nbuffer pages keeps track of the number of writes on a particular relation\n(implied update, insert, or delete but not always). That is multiplied by\na tolerance factor and accumulated until a limit is reached. After much\ntrial and error I came up with this formula for the tolerance factor.\n\nlive_tuple_count of previous vacuum * 0.01 \n+\ndead_tuple_count of previous vacuum * 0.1\n+\n100 to force vacuums on small relations\n\nall divided by live_tuple_count + 1\n\nwith bounds on the change in tolerance factor from \nthe last vacuum (3.0x max increase or 80% max decrease)\n\nLastly, the second phase of vacuum_lazy only gets triggered \nif more than 10% of the relation is dead.\n\nAll this seems to keep relations pretty trim without getting\nin the way. Tables that have a large ratio of dead to live tuples \nget vacuumed more frequently which actually helps performance by clearing \nout old tuples which don't have to be scanned by other threads doing\nupdates.\n\nMyron Scott\nmkscott@sacadia.com\n\nOn Thu, 7 Mar 2002, Zeugswetter Andreas SB SD wrote:\n\n> > > I still think, that for best results the vacuums should happen continuously\n> > > for single pages based on a hook in wal or the buffer manager.\n> > \n> > Not possible unless you are willing to have SELECTs grab much stronger\n> > locks than they do now (viz, the same kind of lock that VACUUM does).\n> \n> I am talking about slots, that are marked deleted before oldest tx in progress.\n> It should be possible to overwrite those only with the \"pin\" on the page.\n> A pageread for a select does wait (spinlock) while a new txinfo is written,\n> this is necessary since the txinfo is more than one byte, no ?\n> It should be more or less the same situation like using a slot from the \n> freelist, no ?\n> Update and delete would need to check the \"old 
page\" for %free and add it to \n> the freelist, like vacuum does.\n> \n> This would avoid the (imho large) overhead vacuum imposes of reading static \n> pages that have not been modified in ages.\n> \n> Andreas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n",
"msg_date": "Thu, 7 Mar 2002 00:51:22 -0800 (PST)",
"msg_from": "<mkscott@sacadia.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically "
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n>> Not possible unless you are willing to have SELECTs grab much stronger\n>> locks than they do now (viz, the same kind of lock that VACUUM does).\n\n> I am talking about slots, that are marked deleted before oldest tx in progress.\n> It should be possible to overwrite those only with the \"pin\" on the page.\n\nNo, it is not.\n\nYou can't physically remove a tuple before you've removed all index\nentries that point to it. That requires, at minimum, a table lock that\nensures that no new indexes are being created meanwhile. Which is what\nVACUUM uses.\n\nThere's also a serious question about whether this would really improve\nperformance. \"Retail\" deletion of index tuples would be fairly\ninefficient over the long run, compared to the bulk deletion technique\nused by VACUUM.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Mar 2002 10:01:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql backend to perform vacuum automatically "
}
] |
[
{
"msg_contents": "\n> (If people like TABLESPACE instead of LOCATION then \n> s/LOCATION/TABLESPACE/g\n> below)\n\nI like \"tablespace\" :-)\n\n> This patch would add the following NEW commands\n> ----------------------------------------------------\n> CREATE LOCATION name PATH 'dbpath';\n> DROP LOCATION name;\n\n> The following command syntax would be modified\n> ------------------------------------------------------\n> CREATE DATABASE WITH DATA_LOCATION=XXX INDEX_LOCATION=YYY \n> TEMP_LOCATION=ZZZ\n> CREATE TABLE aaa (...) WITH LOCATION=XXX;\n> CREATE TABLE bbb (c1 text primary key location CCC) WITH LOCATION=XXX;\n> CREATE TABLE ccc (c2 text unique location CCC) WITH LOCATION=XXX;\n> CREATE INDEX XXX on SAMPLE (C2) WITH LOCATION BBB;\n\nSounds great, but shouldn't we use syntax that is already around,\nlike Oracle's or DB2's or ...\n\n> The symbolic links will enable the rest of the software to be location\n> independent.\n\nI see, that this is the least intrusive way, but I am not sure this is the \nbest way to do it. It would probably be better to pass the Tablespace oid\naround (or look it up).\n\nThat would also leave the door open for other \"Tablespace types\" (currently\n\"Filesystem directory\" an OS managed tablespace :-).\n\nAndreas\n",
"msg_date": "Thu, 7 Mar 2002 09:28:34 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Storage Location / Tablespaces (try 3)"
},
{
"msg_contents": "Andreas,\n\nMy first try passed the tablespace OID arround but someone pointed out the the\nWAL code doesn't know what the tablespace OID is or what it's location is. \nThis is why I would like to use the symbolic links. \n\nTom do you have any ideas on this?\n\nJim\n\n> > (If people like TABLESPACE instead of LOCATION then \n> > s/LOCATION/TABLESPACE/g\n> > below)\n> \n> I like \"tablespace\" :-)\n> \n> > This patch would add the following NEW commands\n> > ----------------------------------------------------\n> > CREATE LOCATION name PATH 'dbpath';\n> > DROP LOCATION name;\n> \n> > The following command syntax would be modified\n> > ------------------------------------------------------\n> > CREATE DATABASE WITH DATA_LOCATION=XXX INDEX_LOCATION=YYY \n> > TEMP_LOCATION=ZZZ\n> > CREATE TABLE aaa (...) WITH LOCATION=XXX;\n> > CREATE TABLE bbb (c1 text primary key location CCC) WITH LOCATION=XXX;\n> > CREATE TABLE ccc (c2 text unique location CCC) WITH LOCATION=XXX;\n> > CREATE INDEX XXX on SAMPLE (C2) WITH LOCATION BBB;\n> \n> Sounds great, but shouldn't we use syntax that is already around,\n> like Oracle's or DB2's or ...\n> \n> > The symbolic links will enable the rest of the software to be location\n> > independent.\n> \n> I see, that this is the least intrusive way, but I am not sure this \n> is the best way to do it. It would probably be better to pass the \n> Tablespace oid around (or look it up).\n> \n> That would also leave the door open for other \"Tablespace types\" (currently\n> \"Filesystem directory\" an OS managed tablespace :-).\n> \n> Andreas\n\n\n\n",
"msg_date": "Thu, 7 Mar 2002 16:05:19 -0500",
"msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>",
"msg_from_op": false,
"msg_subject": "Re: Storage Location / Tablespaces (try 3)"
},
{
"msg_contents": "\"Jim Buttafuoco\" <jim@buttafuoco.net> writes:\n> My first try passed the tablespace OID arround but someone pointed out the the\n> WAL code doesn't know what the tablespace OID is or what it's location is. \n\nThe low-level file access code (including WAL references) names tables\nby two OIDs, which currently are database OID and relfilenode (the\nlatter is NOT to be considered equivalent to table OID, even though it\npresently always is equal).\n\nI believe that the correct implementation approach is to revise things\nso that the low-level name of a table is tablespace OID + relfilenode;\nthis physical table name would in concept be completely distinct from\nthe logical table identification (database OID + table OID). The file\nreference path would become something like\n\"$PGDATA/base/tablespaceoid/relfilenode\", where tablespaceoid might\nreference a symlink to a directory instead of a plain directory.\nTablespace management then consists of setting up those symlinks\ncorrectly, and there is essentially zero impact on the low-level access\ncode.\n\nThe hard part of this is that we are probably being sloppy in some\nplaces about the difference between physical and logical table\nidentifications. Those places will need to be found and fixed.\nThis needs to happen anyway, of course, since the point of introducing\nrelfilenode was to allow table versioning, which we still want.\n\nVadim suggested long ago that bufmgr, smgr, and below should have\nnothing to do with referencing files by relcache entries; they should\nonly deal in physical file identifiers. That requires some tedious but\n(in principle) straightforward API changes.\n\nBTW, if tablespaces can be shared by databases then DROP DATABASE\nbecomes rather tricky: how do you zap the correct files out of a shared\ntablespace, keeping in mind that you are not logged into the doomed\ndatabase and can't look at its catalogs? The best idea I've seen for\nthis so far is:\n\n1. 
Access path for tables is really\n\t$PGDATA/base/databaseoid/tablespaceoid/relfilenode.\n(BTW, we could save some work if we chdir'd into\n$PGDATA/base/databaseoid at backend start and then used only relative\ntablespaceoid/relfilenode paths. Right now we tend to use absolute\npaths because the bootstrap code doesn't do that chdir; which seems\nlike a stupid solution...)\n\n2. A shared tablespace directory contains a subdirectory for each database\nthat has files in the tablespace. Thus, the actual filesystem location\nof a table is something like\n\t<tablespace>/databaseoid/relfilenode\nThe symlink from a database's $PGDATA/base/databaseoid/ directory to\nthe tablespace points at <tablespace>/databaseoid. The first attempt to\ncreate a table in a tablespace from a particular database will create\nthe hard subdirectory and set up the symlink; or perhaps that should be\ndone by an explicit tablespace management operation to \"connect\" the\ndatabase to the tablespace.\n\n3. To drop a database, we examine the symlinks in its\n$PGDATA/base/databaseoid/ and rm -rf each referenced tablespace\nsubdirectory before rm -rf'ing $PGDATA/base/databaseoid.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Mar 2002 17:46:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Storage Location / Tablespaces (try 3) "
}
] |
[
{
"msg_contents": "Avoid problems when one of the pointer values is NULL (or both).\n\n_equalVariableSetStmt() dumps core without this one.\n\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9",
"msg_date": "Thu, 07 Mar 2002 07:38:26 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": true,
"msg_subject": "Small fix for _equalValue()"
},
{
"msg_contents": "Fernando Nasser <fnasser@redhat.com> writes:\n> *************** _equalValue(Value *a, Value *b)\n> *** 1771,1777 ****\n> \t\tcase T_Float:\n> \t\tcase T_String:\n> \t\tcase T_BitString:\n> ! \t\t\treturn strcmp(a->val.str, b->val.str) == 0;\n> \t\tdefault:\n> \t\t\tbreak;\n> \t}\n> --- 1771,1780 ----\n> \t\tcase T_Float:\n> \t\tcase T_String:\n> \t\tcase T_BitString:\n> ! \t\t\tif ((a->val.str != NULL) && (b->val.str != NULL))\n> ! \t\t\t\treturn strcmp(a->val.str, b->val.str) == 0;\n> ! \t\t\telse\n> ! \t\t\t\treturn a->val.ival == b->val.ival; /* true if both are NULL */\n> \t\tdefault:\n> \t\t\tbreak;\n> \t}\n\nSeveral comments here:\n\nThis is not the idiomatic way to do it; there is an equalstr() macro\nin equalfuncs.c that does this pushup for you. So \"return\nequalstr(a->val.str, b->val.str)\" would be the appropriate fix.\n\nPossibly a more interesting question, though, is *why* equalValue is\nseeing Values with null pointer parts. I cannot think of any good\nreason to consider that a legal data structure. Do you know where this\nconstruct is coming from? I'd be inclined to consider the source at\nfault, not equalValue.\n\nOn the other fixes: as a rule, a field-typing bug in copyfuncs suggests\nan equivalent bug over in equalfuncs, and vice versa; as well as\npossible errors in readfuncs/outfuncs. Did you look?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Mar 2002 10:14:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Small fix for _equalValue() "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Fernando Nasser <fnasser@redhat.com> writes:\n> > *************** _equalValue(Value *a, Value *b)\n> > *** 1771,1777 ****\n> > case T_Float:\n> > case T_String:\n> > case T_BitString:\n> > ! return strcmp(a->val.str, b->val.str) == 0;\n> > default:\n> > break;\n> > }\n> > --- 1771,1780 ----\n> > case T_Float:\n> > case T_String:\n> > case T_BitString:\n> > ! if ((a->val.str != NULL) && (b->val.str != NULL))\n> > ! return strcmp(a->val.str, b->val.str) == 0;\n> > ! else\n> > ! return a->val.ival == b->val.ival; /* true if both are NULL */\n> > default:\n> > break;\n> > }\n> \n> Several comments here:\n> \n> This is not the idiomatic way to do it; there is an equalstr() macro\n> in equalfuncs.c that does this pushup for you. So \"return\n> equalstr(a->val.str, b->val.str)\" would be the appropriate fix.\n> \n\nI see. Thanks, I will fix the patch and resubmit it (see below).\n\n\n> Possibly a more interesting question, though, is *why* equalValue is\n> seeing Values with null pointer parts. I cannot think of any good\n> reason to consider that a legal data structure. Do you know where this\n> construct is coming from? 
I'd be inclined to consider the source at\n> fault, not equalValue.\n> \n\nSomeone is using NULL strings in gram.y, like in:\n\nVariableSetStmt: SET ColId TO var_value\n\t\t\t\t{\n\t\t\t\t\tVariableSetStmt *n = makeNode(VariableSetStmt);\n\t\t\t\t\tn->name = $2;\n\t\t\t\t\tn->args = makeList1(makeStringConst($4, NULL));\n\t\t\t\t\t$$ = (Node *) n;\n\t\t\t\t}\n\nthere are several instances of it, all related to variable set.\n\nWell, NULL is a valid value for a (char *) so this seems legal\nenough to me.\n\nI still think we should handle NULL pointer values in equalValue.\n(we can throw an ERROR if we decide to disallow NULL pointers in\nValue -- we must go after whoever added it to VariableSet or revert that\nchange though).\n\n\n\n> On the other fixes: as a rule, a field-typing bug in copyfuncs suggests\n> an equivalent bug over in equalfuncs, and vice versa; as well as\n> possible errors in readfuncs/outfuncs. Did you look?\n> \n\nYes, but I will double check all the same.\n\nThanks for the comments.\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Thu, 07 Mar 2002 10:35:45 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": true,
"msg_subject": "Re: Small fix for _equalValue()"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> This is not the idiomatic way to do it; there is an equalstr() macro\n> in equalfuncs.c that does this pushup for you. So \"return\n> equalstr(a->val.str, b->val.str)\" would be the appropriate fix.\n> \n\nHere it is. Thanks again.\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9",
"msg_date": "Thu, 07 Mar 2002 10:43:25 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": true,
"msg_subject": "Re: Small fix for _equalValue() REPOST"
},
{
"msg_contents": "Fernando Nasser <fnasser@redhat.com> writes:\n> Tom Lane wrote:\n>> Possibly a more interesting question, though, is *why* equalValue is\n>> seeing Values with null pointer parts. I cannot think of any good\n>> reason to consider that a legal data structure.\n\n> Someone is using NULL strings in gram.y, like in:\n\nAh, and the DEFAULT case returns a NULL.\n\nIMHO this gram.y code is the broken part, not copyfuncs/equalfuncs.\nThere isn't any reason to build a Value with a null pointer --- and\nthere are probably a lot more places that will crash on one than just\ncopyfuncs/equalfuncs.\n\nI note that SET DEFAULT was not done that way in 7.1, which IIRC was\nthe last time we had COPY_PARSE_PLAN_TREES on by default during a\ndevelopment cycle. Might be time to turn it on again for a while ;-).\n(The reason we don't keep it on always is that that case can mask\nbugs too. I like to flip the default setting every few months, but\nI think I forgot to do it anytime during the 7.2 cycle.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Mar 2002 11:00:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Small fix for _equalValue() "
},
{
"msg_contents": "...\n> Someone is using NULL strings in gram.y, like in:\n> n->args = makeList1(makeStringConst($4, NULL));\n> there are several instances of it, all related to variable set.\n> Well, NULL is a valid value for a (char *) so this seems legal\n> enough to me.\n\nAh, that was me, to allow comma-delimited lists of parameters to be sent\nto the SET handling code. In previous versions, multi-parameter\narguments had to be enclosed in quotes. For most of the SET variables,\nlists aren't indicated, but I wanted to use the list for all cases to\nminimize the special cases downstream.\n\nIf this should be done differently I'm happy for suggestions...\n\n - Thomas\n",
"msg_date": "Thu, 07 Mar 2002 08:36:39 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Small fix for _equalValue()"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> If this should be done differently I'm happy for suggestions...\n\nI think DEFAULT should probably be represented by a NULL, not by\na Value node containing a null string pointer.\n\nI'm willing to do the work if no one else feels strongly about it ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Mar 2002 11:39:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Small fix for _equalValue() "
},
{
"msg_contents": "> > If this should be done differently I'm happy for suggestions...\n> I think DEFAULT should probably be represented by a NULL, not by\n> a Value node containing a null string pointer.\n> I'm willing to do the work if no one else feels strongly about it ;-)\n\nOK. I can't think of a case where we would want to represent multiple\nDEFAULT placeholders in the context of SET. \n\nOr if we are going to pick up on the recent proposal to allow\ncolumn-specific DEFAULT values perhaps we should use a common\nrepresentation for the solution here?\n\nIn either case, I won't feel stepped on if you implement the solution,\nbut I can do so if desired.\n\n - Thomas\n",
"msg_date": "Thu, 07 Mar 2002 08:50:09 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Small fix for _equalValue()"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> Or if we are going to pick up on the recent proposal to allow\n> column-specific DEFAULT values perhaps we should use a common\n> representation for the solution here?\n\nHuh? I recall someone working to allow expressions for type-specific\ndefault values, but I didn't think anything was happening for columns.\n\n> In either case, I won't feel stepped on if you implement the solution,\n> but I can do so if desired.\n\nYour choice ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Mar 2002 11:55:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Small fix for _equalValue() "
},
{
"msg_contents": "> > Possibly a more interesting question, though, is *why* equalValue is\n> > seeing Values with null pointer parts. I cannot think of any good\n> > reason to consider that a legal data structure. Do you know where this\n> > construct is coming from? I'd be inclined to consider the source at\n> > fault, not equalValue.\n> Someone is using NULL strings in gram.y, like in:\n...\n> there are several instances of it, all related to variable set.\n> Well, NULL is a valid value for a (char *) so this seems legal\n> enough to me.\n\nOK, I've committed a patch to the stable and development trees which\nadds a guard check in three places in gram.y, matching the guards\nalready in place for other cases in the same area.\n\nFernando, can you please check this (preferably without your patches to\nguard the output functions, since those mask the upstream problem)?\n\nOr, can you give me the complete test case you are using to demonstrate\nthe problem to make sure I'm testing the thing you are seeing?\n\nWhile I'm looking at it, the \"SET key=val\" area is somewhat \"kludge on\nkludge\" to enable lists of values, at least partly because it was at the\nend of the development cycle and I didn't want to ripple breakage into\nparts of the code I wasn't trying to touch. I'll go through and try to\nrationalize it sometime soon (actually, I've already started but haven't\nfinished).\n\n - Thomas\n",
"msg_date": "Sat, 09 Mar 2002 09:47:19 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Small fix for _equalValue()"
},
{
"msg_contents": "Thomas Lockhart <thomas@fourpalms.org> writes:\n> Fernando, can you please check this (preferably without your patches to\n> guard the output functions, since those mask the upstream problem)?\n> Or, can you give me the complete test case you are using to demonstrate\n> the problem to make sure I'm testing the thing you are seeing?\n\nI believe the test case is just to compile tcop/postgres.c with\nCOPY_PARSE_PLAN_TREES #defined, and see if the regression tests pass...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 09 Mar 2002 13:33:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Small fix for _equalValue() "
},
{
"msg_contents": "> I believe the test case is just to compile tcop/postgres.c with\n> COPY_PARSE_PLAN_TREES #defined, and see if the regression tests pass...\n\nOK, that works, afaik without having or requiring Fernando's patches.\n\nI'll plan on going through the parser code a bit more in the next week\nor so.\n\n - Thomas\n",
"msg_date": "Sat, 09 Mar 2002 14:26:25 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Small fix for _equalValue()"
},
{
"msg_contents": "Thomas Lockhart <thomas@fourpalms.org> writes:\n>> I believe the test case is just to compile tcop/postgres.c with\n>> COPY_PARSE_PLAN_TREES #defined, and see if the regression tests pass...\n\n> OK, that works, afaik without having or requiring Fernando's patches.\n\nThat's because I already committed the other changes he pointed out ;-).\nBut yeah, we seem to be copy-clean again.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 09 Mar 2002 17:46:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Small fix for _equalValue() "
},
{
"msg_contents": "...\n> That's because I already committed the other changes he pointed out ;-).\n> But yeah, we seem to be copy-clean again.\n\nI had thought that you objected to the guard code in the copy functions\nsince nodes should not have had the content they did. And afaik I have\nnow fixed the upstream problems with the content.\n\nHad you changed your mind about the necessity for the guard code? Why did\nthose patches get applied if the only feedback in the thread was that\nthe problem did not lie there?\n\nOr are we talking about two different parts of the patch submission? I'm\na bit confused as to the current state of the code tree...\n\n - Thomas\n",
"msg_date": "Sat, 09 Mar 2002 15:43:16 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Small fix for _equalValue()"
},
{
"msg_contents": "Thomas Lockhart <thomas@fourpalms.org> writes:\n>> That's because I already committed the other changes he pointed out ;-).\n>> But yeah, we seem to be copy-clean again.\n\n> I had thought that you objected to the guard code in the copy functions\n> since nodes should not have had the content they did. And afaik I have\n> now fixed the upstream problems with the content.\n\nRight, the SET DEFAULT problem is fixed that way. Fernando had pointed\nout a couple of problems in unrelated constructs (GRANT and something\nelse I forget now) that also needed to be fixed. Those fixes did get\ncommitted.\n\n> Had you changed you mind about the necessity for the guard code?\n\nNo. I think Value is fine as-is.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 09 Mar 2002 18:58:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Small fix for _equalValue() "
},
{
"msg_contents": "> No. I think Value is fine as-is.\n\nAh, great.\n\n - Thomas\n",
"msg_date": "Sat, 09 Mar 2002 16:28:51 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Small fix for _equalValue()"
}
] |
[
{
"msg_contents": "I was just toying around with things, and you know, running vacuum in the\nbackground doesn't work. It slows things down too much.\n\nThe worst case scenario is when one does this:\n\nupdate accounts set abalance = abalance + 1 ;\n\nThis takes forever to run and doubles the size of the table.\n\nIs there a way that a separate thread managing the freelist can perform a \"per\nrow\" vacuum concurrently? Maybe I am stating the problem incorrectly, but we\nneed to be able to recover rows already in memory for performance.\n",
"msg_date": "Thu, 07 Mar 2002 10:15:43 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "a vacuum thread is not the answer"
},
{
"msg_contents": "On Thu, 2002-03-07 at 20:15, mlw wrote:\n> I was just toying around with things, and you know, running vacuum in the\n> background doesn't work. It slows things down too much.\n> \n> The worst case senario is when one does this:\n> \n> update accounts set abalance = abalance + 1 ;\n> \n> This takes forever to run and doubles the size of the table.\n\nHow is this related to running vacuum in background ?\n\nDoes it run fast when vacuum is not running ?\n\n> Is there a way that a separate thread managing the freelist can perform a \"per\n> row\" vacuum concurrently? Maybe I am stating the problem incorrectly, but we\n> need to be able to recover rows already in memory for performance.\n\nWhat could be possibly done (and is probably not very useful anyway) is\nreusing the row modified _in_the_same_transaction_ so that \n\nbegin;\nabalance = abalance + 1 ;\nabalance = abalance + 1 ;\nabalance = abalance + 1 ;\nend;\n\nwould consume just 2x the tablespace and not 4x. But this does not\nrequire a separate thread, just some changes in update logic.\n\nOTOH, this will probably interfere with some transaction modes that make\nuse of command ids.\n\n--------------\nHannu\n\n",
"msg_date": "08 Mar 2002 01:49:41 +0500",
"msg_from": "Hannu Krosing <hannu@krosing.net>",
"msg_from_op": false,
"msg_subject": "Re: a vacuum thread is not the answer"
},
{
"msg_contents": "mlw wrote:\n> I was just toying around with things, and you know, running vacuum in the\n> background doesn't work. It slows things down too much.\n>\n> The worst case senario is when one does this:\n>\n> update accounts set abalance = abalance + 1 ;\n>\n> This takes forever to run and doubles the size of the table.\n>\n> Is there a way that a separate thread managing the freelist can perform a \"per\n> row\" vacuum concurrently? Maybe I am stating the problem incorrectly, but we\n> need to be able to recover rows already in memory for performance.\n\n So you want to reuse space from rows before your transaction\n committed? Fine, I'm all for it, as long as\n\n begin ;\n update accounts set abalance = abalance + 1 ;\n rollback ;\n\n still works.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 7 Mar 2002 16:16:16 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: a vacuum thread is not the answer"
},
{
"msg_contents": "Hannu Krosing wrote:\n> \n> On Thu, 2002-03-07 at 20:15, mlw wrote:\n> > I was just toying around with things, and you know, running vacuum in the\n> > background doesn't work. It slows things down too much.\n> >\n> > The worst case scenario is when one does this:\n> >\n> > update accounts set abalance = abalance + 1 ;\n> >\n> > This takes forever to run and doubles the size of the table.\n> \n> How is this related to running vacuum in background ?\n> \n> Does it run fast when vacuum is not running ?\n\nThe problem is that it doubles the size of a table. This invariably means that\nyou have more I/O. If there were a way to reuse old tuples, while they are\nstill in the buffer cache, then PostgreSQL could handle this query faster.\n\nIt was, however, pointed out that (obviously) you can't do reclamation during\na transaction because if it fails or someone issues \"rollback\" you have broken\nthe database. \n\nSo, I guess I'm saying ignore that part.\n\n> \n> > Is there a way that a separate thread managing the freelist can perform a \"per\n> > row\" vacuum concurrently? Maybe I am stating the problem incorrectly, but we\n> > need to be able to recover rows already in memory for performance.\n> \n> What could be possibly done (and is probably not very useful anyway) is\n> reusing the row modified _in_the_same_transaction_ so that\n> \n> begin;\n> abalance = abalance + 1 ;\n> abalance = abalance + 1 ;\n> abalance = abalance + 1 ;\n> end;\n> \n> would consume just 2x the tablespace and not 4x. But this does not\n> require a separate thread, just some changes in update logic.\n> \n> OTOH, this will probably interfere with some transaction modes that make\n> use of command ids.\n\nI haven't looked at the code, so I don't even know if it is doable. Could a\nsmall vacuum thread run in the background and monitor the buffer cache? 
When it\nfinds a buffer with an unreferenced tuple, do what vacuum does, but only to\nthat block?\n\nHere is my problem with vacuum. It scans the whole damn table and it takes a\nlong time. In many, dare I say most, SQL databases, the rows which are updated\nare likely a small percent.\n\nIf a small vacuum routine can be run against the blocks that are already in the\nbuffer, this will eliminate a block read, and focus more on blocks which are\nlikely to have been modified.\n",
"msg_date": "Wed, 13 Mar 2002 11:22:12 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: a vacuum thread is not the answer"
}
] |
[
{
"msg_contents": "I am building/testing the current cvs sources on a Red Hat Linux 7.2 \nmachine and today I am seeing several failures (it was OK yesterday).\nThe test that actually crashes the backend is create_function_1.out.\n\nBTW, whoever changed the \"type %s is not yet defined\" from NOTICE\nto WARNING forgot to update expected/create_function_1.out as well.\n\nHere is the data in case someone knows which of the changes made\nyesterday broke this:\n\n\n*** ./expected/create_function_1.out Wed Mar 6 18:53:13 2002\n--- ./results/create_function_1.out Thu Mar 7 07:50:43 2002\n***************\n*** 5,36 ****\n RETURNS widget\n AS '/home/fnasser/BUILD/pgsql/src/test/regress/regress.so'\n LANGUAGE 'c';\n! NOTICE: ProcedureCreate: type widget is not yet defined\n! CREATE FUNCTION widget_out(opaque)\n! RETURNS opaque\n! AS '/home/fnasser/BUILD/pgsql/src/test/regress/regress.so'\n! LANGUAGE 'c';\n(...)\n--- 5,12 ----\n RETURNS widget\n AS '/home/fnasser/BUILD/pgsql/src/test/regress/regress.so'\n LANGUAGE 'c';\n! WARNING: ProcedureCreate: type widget is not yet defined\n! server closed the connection unexpectedly\n! This probably means the server terminated abnormally\n! 
before or while processing the request.\n\n\n\n(gdb) bt\n#0 0x08068406 in ComputeDataSize (tupleDesc=0x8211358,\nvalue=0xbfffd0e0, \n nulls=0xbfffd0c0 ' ' <repeats 22 times>, \"\\024\\b\\004\\a\")\n at\n/home/fnasser/DEVO/pgsql/pgsql/src/backend/access/common/heaptuple.c:52\n#1 0x08068baa in heap_formtuple (tupleDescriptor=0x8211358,\nvalue=0xbfffd0e0, \n nulls=0xbfffd0c0 ' ' <repeats 22 times>, \"\\024\\b\\004\\a\")\n at\n/home/fnasser/DEVO/pgsql/pgsql/src/backend/access/common/heaptuple.c:604\n#2 0x08092610 in TypeShellMakeWithOpenRelation (pg_type_desc=0x8211248, \n typeName=0x8236eb8 \"widget\")\n at /home/fnasser/DEVO/pgsql/pgsql/src/backend/catalog/pg_type.c:194\n#3 0x08092696 in TypeShellMake (typeName=0x8236eb8 \"widget\")\n at /home/fnasser/DEVO/pgsql/pgsql/src/backend/catalog/pg_type.c:248\n#4 0x08091a7c in ProcedureCreate (procedureName=0x8236e48 \"widget_in\", \n replace=0 '\\000', returnsSet=0 '\\000', returnTypeName=0x8236eb8\n\"widget\", \n languageObjectId=13, prosrc=0x81884a2 \"-\", \n probin=0x8236ef8\n\"/home/fnasser/BUILD/pgsql-pgorg-nocheck/src/test/regress/regress.so\",\ntrusted=1 '\\001', canCache=0 '\\000', isStrict=0 '\\000', \n byte_pct=100, perbyte_cpu=0, percall_cpu=0, outin_ratio=100, \n argList=0x8236ea0)\n at /home/fnasser/DEVO/pgsql/pgsql/src/backend/catalog/pg_proc.c:171\n#5 0x080aea79 in CreateFunction (stmt=0x8236fc8)\n at /home/fnasser/DEVO/pgsql/pgsql/src/backend/commands/define.c:276\n#6 0x0810b270 in pg_exec_query_string (\n query_string=0x8236cb8 \"CREATE FUNCTION widget_in(opaque)\\n \nRETURNS widget\\n AS\n'/home/fnasser/BUILD/pgsql-pgorg-nocheck/src/test/regress/regress.so'\\n \nLANGUAGE 'c';\", dest=Remote, parse_context=0x8206e98)\n at /home/fnasser/DEVO/pgsql/pgsql/src/backend/tcop/postgres.c:768\n(...)\n\n(gdb) up 4\n#4 0x08091a7c in ProcedureCreate (procedureName=0x8236e48 \"widget_in\", \n replace=0 '\\000', returnsSet=0 '\\000', returnTypeName=0x8236eb8\n\"widget\", \n languageObjectId=13, prosrc=0x81884a2 \"-\", \n 
probin=0x8236ef8\n\"/home/fnasser/BUILD/pgsql-pgorg-nocheck/src/test/regress/regress.so\",\ntrusted=1 '\\001', canCache=0 '\\000', isStrict=0 '\\000', \n byte_pct=100, perbyte_cpu=0, percall_cpu=0, outin_ratio=100, \n argList=0x8236ea0)\n at /home/fnasser/DEVO/pgsql/pgsql/src/backend/catalog/pg_proc.c:171\n171\t\t\t\ttypeObjectId = TypeShellMake(returnTypeName);\n(gdb) list\n166\t\n167\t\t\tif (!OidIsValid(typeObjectId))\n168\t\t\t{\n169\t\t\t\telog(WARNING, \"ProcedureCreate: type %s is not yet defined\",\n170\t\t\t\t\t returnTypeName);\n171\t\t\t\ttypeObjectId = TypeShellMake(returnTypeName);\n172\t\t\t\tif (!OidIsValid(typeObjectId))\n173\t\t\t\t\telog(ERROR, \"could not create type %s\",\n174\t\t\t\t\t\t returnTypeName);\n175\t\t\t}\n(gdb) \n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Thu, 07 Mar 2002 10:24:45 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": true,
"msg_subject": "Current cvs source regression: create_function_1.out"
},
{
"msg_contents": "I found and am working on fixing the problem.\n\nShell types aren't being created properly by\nTypeShellMakeWithOpenRelation()\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Fernando Nasser\" <fnasser@redhat.com>\nTo: <pgsql-hackers@postgresql.org>\nSent: Thursday, March 07, 2002 10:24 AM\nSubject: [HACKERS] Current cvs source regression:\ncreate_function_1.out\n\n\n> I am building/testing the current cvs sources on a Red Hat Linux 7.2\n> machine and today I am seeing several failures (it was OK\nyesterday).\n> The test that actually crashes the backend is create_function_1.out.\n>\n> BTW, whoever changed the \"type %s is not yet defined\" from NOTICE\n> to WARNING forgot to update expected/create_function_1.out as well.\n>\n> Here is the data in case someone know what of the changes made\n> yesterday broke this:\n>\n>\n> *** ./expected/create_function_1.out Wed Mar 6 18:53:13 2002\n> --- ./results/create_function_1.out Thu Mar 7 07:50:43 2002\n> ***************\n> *** 5,36 ****\n> RETURNS widget\n> AS '/home/fnasser/BUILD/pgsql/src/test/regress/regress.so'\n> LANGUAGE 'c';\n> ! NOTICE: ProcedureCreate: type widget is not yet defined\n> ! CREATE FUNCTION widget_out(opaque)\n> ! RETURNS opaque\n> ! AS '/home/fnasser/BUILD/pgsql/src/test/regress/regress.so'\n> ! LANGUAGE 'c';\n> (...)\n> --- 5,12 ----\n> RETURNS widget\n> AS '/home/fnasser/BUILD/pgsql/src/test/regress/regress.so'\n> LANGUAGE 'c';\n> ! WARNING: ProcedureCreate: type widget is not yet defined\n> ! server closed the connection unexpectedly\n> ! This probably means the server terminated abnormally\n> ! 
before or while processing the request.\n>\n>\n>\n> (gdb) bt\n> #0 0x08068406 in ComputeDataSize (tupleDesc=0x8211358,\n> value=0xbfffd0e0,\n> nulls=0xbfffd0c0 ' ' <repeats 22 times>, \"\\024\\b\\004\\a\")\n> at\n>\n/home/fnasser/DEVO/pgsql/pgsql/src/backend/access/common/heaptuple.c:5\n2\n> #1 0x08068baa in heap_formtuple (tupleDescriptor=0x8211358,\n> value=0xbfffd0e0,\n> nulls=0xbfffd0c0 ' ' <repeats 22 times>, \"\\024\\b\\004\\a\")\n> at\n>\n/home/fnasser/DEVO/pgsql/pgsql/src/backend/access/common/heaptuple.c:6\n04\n> #2 0x08092610 in TypeShellMakeWithOpenRelation\n(pg_type_desc=0x8211248,\n> typeName=0x8236eb8 \"widget\")\n> at\n/home/fnasser/DEVO/pgsql/pgsql/src/backend/catalog/pg_type.c:194\n> #3 0x08092696 in TypeShellMake (typeName=0x8236eb8 \"widget\")\n> at\n/home/fnasser/DEVO/pgsql/pgsql/src/backend/catalog/pg_type.c:248\n> #4 0x08091a7c in ProcedureCreate (procedureName=0x8236e48\n\"widget_in\",\n> replace=0 '\\000', returnsSet=0 '\\000', returnTypeName=0x8236eb8\n> \"widget\",\n> languageObjectId=13, prosrc=0x81884a2 \"-\",\n> probin=0x8236ef8\n>\n\"/home/fnasser/BUILD/pgsql-pgorg-nocheck/src/test/regress/regress.so\",\n> trusted=1 '\\001', canCache=0 '\\000', isStrict=0 '\\000',\n> byte_pct=100, perbyte_cpu=0, percall_cpu=0, outin_ratio=100,\n> argList=0x8236ea0)\n> at\n/home/fnasser/DEVO/pgsql/pgsql/src/backend/catalog/pg_proc.c:171\n> #5 0x080aea79 in CreateFunction (stmt=0x8236fc8)\n> at\n/home/fnasser/DEVO/pgsql/pgsql/src/backend/commands/define.c:276\n> #6 0x0810b270 in pg_exec_query_string (\n> query_string=0x8236cb8 \"CREATE FUNCTION widget_in(opaque)\\n\n> RETURNS widget\\n AS\n>\n'/home/fnasser/BUILD/pgsql-pgorg-nocheck/src/test/regress/regress.so'\\\nn\n> LANGUAGE 'c';\", dest=Remote, parse_context=0x8206e98)\n> at\n/home/fnasser/DEVO/pgsql/pgsql/src/backend/tcop/postgres.c:768\n> (...)\n>\n> (gdb) up 4\n> #4 0x08091a7c in ProcedureCreate (procedureName=0x8236e48\n\"widget_in\",\n> replace=0 '\\000', returnsSet=0 '\\000', 
returnTypeName=0x8236eb8\n> \"widget\",\n> languageObjectId=13, prosrc=0x81884a2 \"-\",\n> probin=0x8236ef8\n>\n\"/home/fnasser/BUILD/pgsql-pgorg-nocheck/src/test/regress/regress.so\",\n> trusted=1 '\\001', canCache=0 '\\000', isStrict=0 '\\000',\n> byte_pct=100, perbyte_cpu=0, percall_cpu=0, outin_ratio=100,\n> argList=0x8236ea0)\n> at\n/home/fnasser/DEVO/pgsql/pgsql/src/backend/catalog/pg_proc.c:171\n> 171 typeObjectId = TypeShellMake(returnTypeName);\n> (gdb) list\n> 166\n> 167 if (!OidIsValid(typeObjectId))\n> 168 {\n> 169 elog(WARNING, \"ProcedureCreate: type %s is not yet defined\",\n> 170 returnTypeName);\n> 171 typeObjectId = TypeShellMake(returnTypeName);\n> 172 if (!OidIsValid(typeObjectId))\n> 173 elog(ERROR, \"could not create type %s\",\n> 174 returnTypeName);\n> 175 }\n> (gdb)\n>\n> --\n> Fernando Nasser\n> Red Hat Canada Ltd. E-Mail: fnasser@redhat.com\n> 2323 Yonge Street, Suite #300\n> Toronto, Ontario M4P 2C9\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Thu, 7 Mar 2002 11:51:14 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Current cvs source regression: create_function_1.out"
},
{
"msg_contents": "\nDo a CVS update. I backed out the failing patch this morning.\n\n---------------------------------------------------------------------------\n\nFernando Nasser wrote:\n> I am building/testing the current cvs sources on a Red Hat Linux 7.2 \n> machine and today I am seeing several failures (it was OK yesterday).\n> The test that actually crashes the backend is create_function_1.out.\n> \n> BTW, whoever changed the \"type %s is not yet defined\" from NOTICE\n> to WARNING forgot to update expected/create_function_1.out as well.\n> \n> Here is the data in case someone know what of the changes made\n> yesterday broke this:\n> \n> \n> *** ./expected/create_function_1.out Wed Mar 6 18:53:13 2002\n> --- ./results/create_function_1.out Thu Mar 7 07:50:43 2002\n> ***************\n> *** 5,36 ****\n> RETURNS widget\n> AS '/home/fnasser/BUILD/pgsql/src/test/regress/regress.so'\n> LANGUAGE 'c';\n> ! NOTICE: ProcedureCreate: type widget is not yet defined\n> ! CREATE FUNCTION widget_out(opaque)\n> ! RETURNS opaque\n> ! AS '/home/fnasser/BUILD/pgsql/src/test/regress/regress.so'\n> ! LANGUAGE 'c';\n> (...)\n> --- 5,12 ----\n> RETURNS widget\n> AS '/home/fnasser/BUILD/pgsql/src/test/regress/regress.so'\n> LANGUAGE 'c';\n> ! WARNING: ProcedureCreate: type widget is not yet defined\n> ! server closed the connection unexpectedly\n> ! This probably means the server terminated abnormally\n> ! 
before or while processing the request.\n> \n> \n> \n> (gdb) bt\n> #0 0x08068406 in ComputeDataSize (tupleDesc=0x8211358,\n> value=0xbfffd0e0, \n> nulls=0xbfffd0c0 ' ' <repeats 22 times>, \"\\024\\b\\004\\a\")\n> at\n> /home/fnasser/DEVO/pgsql/pgsql/src/backend/access/common/heaptuple.c:52\n> #1 0x08068baa in heap_formtuple (tupleDescriptor=0x8211358,\n> value=0xbfffd0e0, \n> nulls=0xbfffd0c0 ' ' <repeats 22 times>, \"\\024\\b\\004\\a\")\n> at\n> /home/fnasser/DEVO/pgsql/pgsql/src/backend/access/common/heaptuple.c:604\n> #2 0x08092610 in TypeShellMakeWithOpenRelation (pg_type_desc=0x8211248, \n> typeName=0x8236eb8 \"widget\")\n> at /home/fnasser/DEVO/pgsql/pgsql/src/backend/catalog/pg_type.c:194\n> #3 0x08092696 in TypeShellMake (typeName=0x8236eb8 \"widget\")\n> at /home/fnasser/DEVO/pgsql/pgsql/src/backend/catalog/pg_type.c:248\n> #4 0x08091a7c in ProcedureCreate (procedureName=0x8236e48 \"widget_in\", \n> replace=0 '\\000', returnsSet=0 '\\000', returnTypeName=0x8236eb8\n> \"widget\", \n> languageObjectId=13, prosrc=0x81884a2 \"-\", \n> probin=0x8236ef8\n> \"/home/fnasser/BUILD/pgsql-pgorg-nocheck/src/test/regress/regress.so\",\n> trusted=1 '\\001', canCache=0 '\\000', isStrict=0 '\\000', \n> byte_pct=100, perbyte_cpu=0, percall_cpu=0, outin_ratio=100, \n> argList=0x8236ea0)\n> at /home/fnasser/DEVO/pgsql/pgsql/src/backend/catalog/pg_proc.c:171\n> #5 0x080aea79 in CreateFunction (stmt=0x8236fc8)\n> at /home/fnasser/DEVO/pgsql/pgsql/src/backend/commands/define.c:276\n> #6 0x0810b270 in pg_exec_query_string (\n> query_string=0x8236cb8 \"CREATE FUNCTION widget_in(opaque)\\n \n> RETURNS widget\\n AS\n> '/home/fnasser/BUILD/pgsql-pgorg-nocheck/src/test/regress/regress.so'\\n \n> LANGUAGE 'c';\", dest=Remote, parse_context=0x8206e98)\n> at /home/fnasser/DEVO/pgsql/pgsql/src/backend/tcop/postgres.c:768\n> (...)\n> \n> (gdb) up 4\n> #4 0x08091a7c in ProcedureCreate (procedureName=0x8236e48 \"widget_in\", \n> replace=0 '\\000', returnsSet=0 '\\000', 
returnTypeName=0x8236eb8\n> \"widget\", \n> languageObjectId=13, prosrc=0x81884a2 \"-\", \n> probin=0x8236ef8\n> \"/home/fnasser/BUILD/pgsql-pgorg-nocheck/src/test/regress/regress.so\",\n> trusted=1 '\\001', canCache=0 '\\000', isStrict=0 '\\000', \n> byte_pct=100, perbyte_cpu=0, percall_cpu=0, outin_ratio=100, \n> argList=0x8236ea0)\n> at /home/fnasser/DEVO/pgsql/pgsql/src/backend/catalog/pg_proc.c:171\n> 171\t\t\t\ttypeObjectId = TypeShellMake(returnTypeName);\n> (gdb) list\n> 166\t\n> 167\t\t\tif (!OidIsValid(typeObjectId))\n> 168\t\t\t{\n> 169\t\t\t\telog(WARNING, \"ProcedureCreate: type %s is not yet defined\",\n> 170\t\t\t\t\t returnTypeName);\n> 171\t\t\t\ttypeObjectId = TypeShellMake(returnTypeName);\n> 172\t\t\t\tif (!OidIsValid(typeObjectId))\n> 173\t\t\t\t\telog(ERROR, \"could not create type %s\",\n> 174\t\t\t\t\t\t returnTypeName);\n> 175\t\t\t}\n> (gdb) \n> \n> -- \n> Fernando Nasser\n> Red Hat Canada Ltd. E-Mail: fnasser@redhat.com\n> 2323 Yonge Street, Suite #300\n> Toronto, Ontario M4P 2C9\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 7 Mar 2002 13:24:31 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Current cvs source regression: create_function_1.out"
},
{
"msg_contents": "Fernando Nasser <fnasser@redhat.com> writes:\n> The test that actually crashes the backend is create_function_1.out.\n\nThat is the DOMAIN patch's fault (I believe the proximate cause was\nthat TypeShellMake wasn't taught about the new pg_type layout).\n\nBruce says he's backed out the patch, although the mail servers are\nsufficiently behind that I've not seen the committers-list notice yet.\n\n> BTW, whoever changed the \"type %s is not yet defined\" from NOTICE\n> to WARNING forgot to update expected/create_function_1.out as well.\n\nBruce, you forgot the regress/output/ files again.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Mar 2002 13:29:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Current cvs source regression: create_function_1.out "
},
{
"msg_contents": "Tom Lane wrote:\n> Fernando Nasser <fnasser@redhat.com> writes:\n> > The test that actually crashes the backend is create_function_1.out.\n> \n> That is the DOMAIN patch's fault (I believe the proximate cause was\n> that TypeShellMake wasn't taught about the new pg_type layout).\n> \n> Bruce says he's backed out the patch, although the mail servers are\n> sufficiently behind that I've not seen the committers-list notice yet.\n\nYep, just ran regression on current CVS and all is fine.\n\n> > BTW, whoever changed the \"type %s is not yet defined\" from NOTICE\n> > to WARNING forgot to update expected/create_function_1.out as well.\n> \n> Bruce, you forgot the regress/output/ files again.\n\nDone and committed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 7 Mar 2002 13:55:40 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Current cvs source regression: create_function_1.out"
},
{
"msg_contents": "Tom Lane wrote:\n> Fernando Nasser <fnasser@redhat.com> writes:\n> > The test that actually crashes the backend is create_function_1.out.\n> \n> That is the DOMAIN patch's fault (I believe the proximate cause was\n> that TypeShellMake wasn't taught about the new pg_type layout).\n> \n> Bruce says he's backed out the patch, although the mail servers are\n> sufficiently behind that I've not seen the committers-list notice yet.\n> \n> > BTW, whoever changed the \"type %s is not yet defined\" from NOTICE\n> > to WARNING forgot to update expected/create_function_1.out as well.\n> \n> Bruce, you forgot the regress/output/ files again.\n\nActually, I have not fixed it yet. It is related to elog tags. Working\non it now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 7 Mar 2002 14:07:31 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Current cvs source regression: create_function_1.out"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Tom Lane wrote:\n> > Fernando Nasser <fnasser@redhat.com> writes:\n> > > The test that actually crashes the backend is create_function_1.out.\n> > \n> > That is the DOMAIN patch's fault (I believe the proximate cause was\n> > that TypeShellMake wasn't taught about the new pg_type layout).\n> > \n> > Bruce says he's backed out the patch, although the mail servers are\n> > sufficiently behind that I've not seen the committers-list notice yet.\n> > \n> > > BTW, whoever changed the \"type %s is not yet defined\" from NOTICE\n> > > to WARNING forgot to update expected/create_function_1.out as well.\n> > \n> > Bruce, you forgot the regress/output/ files again.\n> \n> Actually, I have not fixed it yet. It is related to elog tags. Working\n> on it now.\n\nOK, fixed now and committed. It was the /output directory that I had\nmissed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 7 Mar 2002 15:20:24 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Current cvs source regression: create_function_1.out"
}
] |
[
{
"msg_contents": "I think you all should really buy the book 'Database Development for Dummies'.\nPostgresql is for sure the only database on this planet that cannot optimize a\nselect(max) using an index. Not even Microsoft has implemented such a design\ndeficiency yet and even MySQL which you like to talk so bad about uses an\nindex to optimize select max() queries. Some of you should really consider\nattending a programming course and all of you should consider to stop working\non this totally screwed up monster!\n\nTom\n\nNirvana: Zustand des Gluecks durch Ausloeschung des Selbst.\n-- \n T h o m a s Z e h e t b a u e r ( TZ251 )\n PGP encrypted mail preferred - KeyID 96FFCB89\n mail pgp-key-request@hostmaster.org\n",
"msg_date": "Thu, 7 Mar 2002 17:04:47 +0100",
"msg_from": "Thomas Zehetbauer <thomasz@hostmaster.org>",
"msg_from_op": true,
"msg_subject": "select max(column) not using index"
},
{
"msg_contents": "On Thu, 7 Mar 2002, Thomas Zehetbauer wrote:\n\n> I think you all should really buy the book 'Database Development for Dummies'.\n> Postgresql is for sure the only database on this planet that cannot optimize a\n> select(max) using an index. Not even Microsoft has implemented such a design\n> deficiency yet and even MySQL which you like to talk so bad about uses an\n> index to optimize select max() queries. Some of you should really consider\n> attending a programming course and all of you should consider to stop working\n> on this totally screwed up monster!\n\nI real man would proffer criticism in diff -u format.\n\nGavin\n\n",
"msg_date": "Thu, 14 Mar 2002 02:37:09 +1100 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: select max(column) not using index"
},
{
"msg_contents": "Thomas Zehetbauer wrote:\n> \n> I think you all should really buy the book 'Database Development for Dummies'.\n> Postgresql is for sure the only database on this planet that cannot optimize a\n> select(max) using an index. Not even Microsoft has implemented such a design\n> deficiency yet and even MySQL which you like to talk so bad about uses an\n> index to optimize select max() queries. Some of you should really consider\n> attending a programming course and all of you should consider to stop working\n> on this totally screwed up monster!\n> \n> Tom\n\nThe query:\n\nselect max from table order by max desc limit 1\n\nWill do it, but \"max()\" is by no means an easy to optimize function. Aggregates\nhave an assumption of a range scan, especially custom aggregates. What about\nthis:\n\nselect max(foo) from bar where x = 'y';\n\nHow is the index used in this query?\n\nThe only instance where an aggregate optimization would pay off is when there\nis no selection criteria and there is an index on the field. In this case, it\nis easy enough to create a function for the particular application.\n\nI hear and understand your frustration, yes PostgreSQL should be able to do\nthat, and maybe it would be worth the time and effort, that is not for me to\nsay, however there is a very viable work around for the problem you state and\nthe stated problem, while a common query, is a small subset of the actual\ncapability of the max() function.\n",
"msg_date": "Wed, 13 Mar 2002 11:03:31 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: select max(column) not using index"
},
{
"msg_contents": "On Thu, 2002-03-07 at 18:04, Thomas Zehetbauer wrote:\n> I think you all should really buy the book 'Database Development for Dummies'.\n> Postgresql is for sure the only database on this planet that cannot optimize a\n> select(max) using an index.\n\nPostgreSQL is extensible enough that luser can define max() to mean\nanything and thus you don't have a general way to optimise it without\nbreaking some cases.\n\nIf you know that max(x) means the biggest x there is and you have a\nb-tree index on x you can use:\n\nselect x from t order by x desc limit 1;\n\n> Not even Microsoft has implemented such a design deficiency yet and \n\nIt would be a very microsofty way to optimise in ways that sometimes\nproduce wrong results ;)\n\n> even MySQL which you like to talk so bad about uses an\n> index to optimize select max() queries. \n\nWhat do you need the superfast max() for ?\n\nIf you are trying to re-implement sequences you may yet find some\nsurprises.\n\n> Some of you should really consider\n> attending a programming course and all of you should consider to stop working\n> on this totally screwed up monster!\n \nDid you make yourself look bad by assuming that postgreSQL _does_ your\nsuggested optimisation ?\n \n> Nirvana: Zustand des Gluecks durch Ausloeschung des Selbst.\n\nHow is this related to above ??\n\n-------------\nHannu\n\n",
"msg_date": "13 Mar 2002 18:14:49 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: select max(column) not using index"
},
{
"msg_contents": "On Thu, 7 Mar 2002, Thomas Zehetbauer wrote:\n\n> I think you all should really buy the book 'Database Development for Dummies'.\n> Postgresql is for sure the only database on this planet that cannot optimize a\n> select(max) using an index. Not even Microsoft has implemented such a design\n> deficiency yet and even MySQL which you like to talk so bad about uses an\n> index to optimize select max() queries. Some of you should really consider\n> attending a programming course and all of you should consider to stop working\n> on this totally screwed up monster!\n\nI'm not sure why I'm bothering to respond, but...\n\nGiven that postgres allows user defined aggregates and I guess it'd be\npossible for a user to redefine max into some form where the optimization\nisn't valid (I'm not sure why mind you, but...) that'd mean that the\noptimization is not always available. Personally, I'd generally prefer\ncorrect and slow over incorrect and fast. I'm fairly sure that if you made\na patch that cleanly dealt with the issue without programming in special\nknowledge of min and max it'd be considered for inclusion.\n\n",
"msg_date": "Wed, 13 Mar 2002 08:35:00 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: select max(column) not using index"
}
] |
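The rewrite suggested in the thread above — replacing max() with an ORDER BY … DESC LIMIT 1 that a b-tree index can answer directly — is easy to sanity-check. A minimal sketch using Python's built-in sqlite3 module (the table and data here are illustrative, not from the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in (5, 42, 17, 3)])
conn.execute("CREATE INDEX t_x_idx ON t (x)")

# Aggregate form: a full scan in PostgreSQL 7.2, because max()
# is just another (user-redefinable) aggregate to the planner.
agg_max = conn.execute("SELECT max(x) FROM t").fetchone()[0]

# Rewritten form: can read a single tuple from the end of the index.
idx_max = conn.execute("SELECT x FROM t ORDER BY x DESC LIMIT 1").fetchone()[0]

assert agg_max == idx_max == 42
```

The two queries agree for the built-in max(); the thread's point is that PostgreSQL cannot assume this equivalence for an arbitrary user-defined aggregate, so only the second form is guaranteed to use the index in 7.2.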
[
{
"msg_contents": "Hi all,\n\nIs there a reason why the reltuples column of pg_class is stored as a\n\"real\", rather than one of the integer data types? Are there any\nsituations in which there will be a non-integer value stored in this\ncolumn?\n\nCheers,\n\nNeil\n\nP.S. I tried to search the archives, but archives.postgresql.org is so\nslow, it's basically unusable. So my apologies if this has already been\ndiscussed...\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "07 Mar 2002 17:02:21 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": true,
"msg_subject": "pg_class -> reltuples?"
},
{
"msg_contents": "Neil Conway wrote:\n> Hi all,\n> \n> Is there a reason why the reltuples column of pg_class is stored as a\n> \"real\", rather than one of the integer data types? Are there any\n> situations in which there will be a non-integer value stored in this\n> column?\n\nThat is an excellent question. I assume it is related to having > 4\nbillion rows, but we have int8 for that. The value is used mostly by\nthe optimizer, which does most of its calcultions using float8 (real),\nso that may be why.\n\n> P.S. I tried to search the archives, but archives.postgresql.org is so\n> slow, it's basically unusable. So my apologies if this has already been\n> discussed...\n\nYes, it is hampering me from researching some of these patches too, and\nfts is completely down. If I could just get a web page of all the\nthreads (forget searching), I would be happy. The archives site contents\nhasn't been updated since Feb 28.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 7 Mar 2002 17:11:29 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_class -> reltuples?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Neil Conway wrote:\n>> Is there a reason why the reltuples column of pg_class is stored as a\n>> \"real\", rather than one of the integer data types?\n\n> That is an excellent question. I assume it is related to having > 4\n> billion rows, but we have int8 for that.\n\n1. We support tables > 4G rows.\n\n2. int8 is not available on all platforms.\n\n3. The only use for reltuples is in the optimizer, which is perfectly\n content with approximate values.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Mar 2002 17:51:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_class -> reltuples? "
},
{
"msg_contents": "On Thursday 07 March 2002 23:11, Bruce Momjian wrote:\n> Neil Conway wrote:\n>\n> > P.S. I tried to search the archives, but archives.postgresql.org is so\n> > slow, it's basically unusable. So my apologies if this has already been\n> > discussed...\n>\n> Yes, it is hampering me from researching some of these patches too, and\n> fts is completely down. If I could just get a web page of all the\n> threads (forget searching), I would be happy. The archives site contents\n> hasn't been updated since Feb 28.\n\nmaybe google?\n\nhttp://groups.google.com/groups?hl=en&group=comp.databases.postgresql.hackers\n\nThough a cursory glance shows some mails which went over the\nlist aren't there, particularly the most recent threads are pretty patchy, pun\nunintended..\n\nIan Barwick\n",
"msg_date": "Fri, 8 Mar 2002 00:00:22 +0100",
"msg_from": "Ian Barwick <barwick@gmx.net>",
"msg_from_op": false,
"msg_subject": "Archive search (was: pg_class -> reltuples?)"
},
{
"msg_contents": "> maybe google?\n> \n> http://groups.google.com/groups?hl=en&group=comp.databases.postgresql.hackers\n> \n> Though a cursory glance shows some mails which went over the\n> list aren't there, particularly the most recent threads are pretty patchy, pun\n> unintended..\n\nThanks. That is a huge help. In fact, this lists all the groups:\n\n\thttp://groups.google.com/groups?hl=en&group=comp.databases.postgresql\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 7 Mar 2002 18:10:20 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Archive search (was: pg_class -> reltuples?)"
},
{
"msg_contents": "On Thu, 2002-03-07 at 17:51, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Neil Conway wrote:\n> >> Is there a reason why the reltuples column of pg_class is stored as a\n> >> \"real\", rather than one of the integer data types?\n> \n> > That is an excellent question. I assume it is related to having > 4\n> > billion rows, but we have int8 for that.\n> \n> 1. We support tables > 4G rows.\n\nI agree we should try to support very large tables -- so why waste space\non storing floating point? And am I missing something, or is a \"real\"\nonly 4 bytes?\n\n> 2. int8 is not available on all platforms.\n\nI have no problem making restrictions on data types for portability, but\nat least we should be consistent:\n\n% grep -rI 'long long' * | wc -l\n 37\n% grep -rI 'int64' * | wc -l\n 191\n\nOn all the platforms I tested (x86, SPARC, PPC, PA-RISC, Alpha), a 'long\nlong' is supported, and is 8 bytes. Which platforms don't have this, and\nare we actively supporting them?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "07 Mar 2002 19:44:55 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": true,
"msg_subject": "Re: pg_class -> reltuples?"
},
{
"msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> I have no problem making restrictions on data types for portability, but\n> at least we should be consistent:\n\nWe *are* consistent. int8 is not used in the system catalogs, and where\nit is used, the system will continue to function if it's implemented as\na 32-bit datatype. (At least, things still worked the last time I tried\nturning off HAVE_LONG_LONG_INT. If someone broke it since then, it\nneeds to be fixed.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Mar 2002 19:54:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_class -> reltuples? "
},
{
"msg_contents": "On Thu, 2002-03-07 at 19:54, Tom Lane wrote:\n> Neil Conway <nconway@klamath.dyndns.org> writes:\n> > I have no problem making restrictions on data types for portability, but\n> > at least we should be consistent:\n> \n> We *are* consistent. int8 is not used in the system catalogs, and where\n> it is used, the system will continue to function if it's implemented as\n> a 32-bit datatype. (At least, things still worked the last time I tried\n> turning off HAVE_LONG_LONG_INT. If someone broke it since then, it\n> needs to be fixed.)\n\n9 regression tests fail without HAVE_LONG_LONG_INT on a 32-bit machine\n(int8, constraints, select_implicit, select_having, subselect, union,\naggregates, misc, rules). It's pretty obvious that int8 should fail, but\nthe others look like bugs.\n\nAs for the original question, maybe I'm missing something obvious, but\nis there a reason why reltuples can't be an int8? (which is already\ntypedef'ed to a int4 on broken machines/compilers) This would mean that\non machines without a 64-bit int type, tables greater than 2^32 rows\ncan't be stored (or at least, reltuples breaks). But I'm inclined to\ndismiss those platforms as broken, anyway...\n\nIn any case, I think the current situation is the wrong way around:\nwe're using a workaround on _all_ platforms, just to avoid breaking a\nfew old systems. Wouldn't it make more sense to use an int8 by default,\nand fall back to a floating-point workaround if the default, optimal\nsolution isn't available?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "07 Mar 2002 22:22:23 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": true,
"msg_subject": "Re: pg_class -> reltuples?"
},
{
"msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> 9 regression tests fail without HAVE_LONG_LONG_INT on a 32-bit machine\n> (int8, constraints, select_implicit, select_having, subselect, union,\n> aggregates, misc, rules). It's pretty obvious that int8 should fail, but\n> the others look like bugs.\n\nI think int8_tbl may be used in some of the other tests, so diffs there\nare not necessarily a big deal. Did you examine the diffs closely?\n\n> As for the original question, maybe I'm missing something obvious, but\n> is there a reason why reltuples can't be an int8? (which is already\n> typedef'ed to a int4 on broken machines/compilers)\n\nYes: it won't work. If reltuples is construed to be 8 bytes by some\ncompilers and 4 bytes by others, then the struct definition will fail to\noverlay onto the storage as seen by the general-purpose tuple access\nroutines. (We could maybe fix that by having pg_type.h and some other\nplaces conditionally compile the declared size of type int8, but it\nain't worth the trouble.)\n\n> This would mean that\n> on machines without a 64-bit int type, tables greater than 2^32 rows\n> can't be stored (or at least, reltuples breaks). But I'm inclined to\n> dismiss those platforms as broken, anyway...\n\nSorry, but I have very little patience for arguments that \"if it works\non all the machines I use, it's good enough\". Especially for a case\nlike this, where there is zero advantage to using int8 anyway.\nUsing a float here is not a \"workaround\", it's the right thing to do.\n(The optimizer would only have to convert it to float anyway for its\ninternal calculations.)\n\n> Wouldn't it make more sense to use an int8 by default,\n> and fall back to a floating-point workaround if the default, optimal\n> solution isn't available?\n\nSo the user-visible column types of pg_class would vary depending on\nthis implementation detail? Not a good idea IMHO.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Mar 2002 23:02:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_class -> reltuples? "
}
] |
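Tom's argument above — that a 4-byte float is the right type for reltuples because the planner only needs an approximate count, while still representing tables beyond 2^32 rows — can be illustrated numerically. A small sketch, assuming reltuples is an IEEE-754 single-precision value (the row count used is invented for illustration):

```python
import struct

def as_float4(n):
    """Round-trip a value through a 4-byte IEEE-754 float, the
    storage format of pg_class.reltuples discussed in the thread."""
    return struct.unpack("<f", struct.pack("<f", float(n)))[0]

huge = 10_000_000_000            # 10 billion rows: overflows int4
approx = as_float4(huge)
rel_err = abs(approx - huge) / huge

assert approx > 2**32            # representable where int4 is not
assert rel_err < 1e-6            # well within what the planner needs
```

The float4 value is only approximate at that magnitude, but the relative error is tiny — and the optimizer converts the count to a float for its cost arithmetic anyway.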
[
{
"msg_contents": "I was just bitten by an issue in pg_dump where if the table it is to dump\ndoesn't exist, it doesn't return an error or anything! We only just\nrealised that a dump script we wrote had a typo in it, and all our 'backups'\nare invalid...\n\nThis is the current CVS behaviour:\n\nbash-2.05$ pg_dump -a -t badtable mydb\n--\n-- Selected TOC Entries:\n--\n\nWhere 'badtable' is the name of a table that doesn't exist.\n\nShould this beheaviour be at least modified to write something to stderr?\n\nChris\n\n",
"msg_date": "Fri, 8 Mar 2002 09:22:35 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "pg_dump doesn't report failure"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> This is the current CVS behaviour:\n\n> bash-2.05$ pg_dump -a -t badtable mydb\n> --\n> -- Selected TOC Entries:\n> --\n\n> Where 'badtable' is the name of a table that doesn't exist.\n\n> Should this beheaviour be at least modified to write something to stderr?\n\nI dunno. I think the direction we were planning to go in was that -t's\nargument should be treated as a pattern to match against table names\n(eg, a regexp). It's not clear that zero matches are an error when\nyou are thinking in those terms. On the other hand I can see your point\nabout not realizing your \"backup\" isn't.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Mar 2002 21:21:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump doesn't report failure "
},
{
"msg_contents": "On Thu, 2002-03-07 at 21:21, Tom Lane wrote:\n> I dunno. I think the direction we were planning to go in was that -t's\n> argument should be treated as a pattern to match against table names\n> (eg, a regexp). It's not clear that zero matches are an error when\n> you are thinking in those terms. On the other hand I can see your point\n> about not realizing your \"backup\" isn't.\n\nCan't the regex mode be made a different switch (say, -T or -E)? In\naddition to allowing better error reporting, I'm uncomfortable with\nsaying \"a table name must consist of non-regex special characters\",\nwhich is basically what this implies.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "07 Mar 2002 21:58:36 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump doesn't report failure"
}
] |
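For the pattern-matching direction Tom describes, the warning Christopher asks for is cheap to add alongside the match step. A hypothetical sketch of that behavior — the function name and catalog list are invented for illustration, and this is not pg_dump's actual code:

```python
import re
import sys

def select_tables(pattern, catalog):
    """Return catalog tables whose names match `pattern` (a regexp),
    warning on stderr when nothing matches -- the behavior requested
    in the thread. All names here are illustrative only."""
    rx = re.compile(pattern)
    matched = [t for t in catalog if rx.fullmatch(t)]
    if not matched:
        print(f"pg_dump: no tables match pattern {pattern!r}", file=sys.stderr)
    return matched

tables = ["users", "orders", "order_items"]
assert select_tables("order.*", tables) == ["orders", "order_items"]
assert select_tables("badtable", tables) == []   # warns instead of silence
```

Zero matches need not be a hard error in regexp mode, but emitting a diagnostic would have caught the typo that silently produced empty backups.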
[
{
"msg_contents": "Seems like it's about time to release 7.2.1. We have enough\npost-release patches in there that a dot-release is clearly needed\n(particularly for the pg_dump-related issues). And it seems like\nnot too much new stuff is coming in. Does anyone have any issues\noutstanding that need to be dealt with before 7.2.1?\n\n\t\t\tregards, tom lane\n\n\nPost-release patches in the 7.2 branch:\n\n2002-03-05 01:10 momjian\n\n\t* contrib/tsearch/dict/porter_english.dct (REL7_2_STABLE): Please,\n\tapply attached patch for contrib/tsearch to 7.2.1 and current CVS.\n\tIt fix english stemmer's problem with ending words like\n\t'technology'.\n\t\n\tWe have found one more bug in english stemmer. The bug is with\n\t'irregular' english words like 'skies' -> 'sky'. Please, apply\n\tattached cumulative patch to 7.2.1 and current CVS instead\n\tprevious one.\n\t\n\tThank to Thomas T. Thai <tom@minnesota.com> for hard testing. This\n\tkind of bug has significance only for dump/reload database and\n\tviewing, but searching/indexing works right.\n\t\n\tTeodor Sigaev\n\n2002-03-05 00:13 tgl\n\n\t* src/backend/optimizer/prep/prepunion.c (REL7_2_STABLE): Previous\n\tpatch to mark UNION outputs with common typmod (if any) breaks\n\tthree-or-more-way UNIONs, as per example from Josh Berkus. Cause\n\tis a fragile assumption that one tlist's entries will exactly match\n\tanother. Restructure code to make that assumption a little less\n\tfragile.\n\n2002-03-04 22:45 ishii\n\n\t* src/: backend/utils/adt/timestamp.c,\n\ttest/regress/expected/timestamp.out,\n\ttest/regress/expected/timestamptz.out (REL7_2_STABLE): A backport\n\tpatch:\n\t\n\tFix bug in extract/date_part for milliseconds/miscroseconds and\n\ttimestamp/timestamptz combo. Now extract/date_part returns\n\tseconds*1000 or 1000000 + fraction part as the manual stats. \n\tregression test are also fixed.\n\t\n\tSee the thread in pgsql-hackers:\n\t\n\tSubject: Re: [HACKERS] timestamp_part() bug? 
Date: Sat, 02 Mar 2002\n\t11:29:53 +0900\n\n2002-03-04 12:47 tgl\n\n\t* doc/FAQ_Solaris (REL7_2_STABLE): Update FAQ_Solaris with info\n\tabout gcc 2.95.1 problems and how to work around 64-bit vsnprintf\n\tbug.\n\n2002-02-27 18:17 tgl\n\n\t* src/backend/tcop/postgres.c (REL7_2_STABLE): Back-patch fix for\n\terrors reported at transaction end.\n\n2002-02-26 20:47 ishii\n\n\t* src/backend/commands/copy.c (REL7_2_STABLE): Back-patch fix for\n\tfollowings:\n\t\n\tFix bug in COPY FROM when DELIMITER is not in ASCII range. See\n\tpgsql-bugs/pgsql-hackers discussion \"COPY FROM is not 8bit clean\"\n\taround 2002/02/26 for more details -- Tatsuo Ishii\n\n2002-02-26 18:48 tgl\n\n\t* src/: backend/commands/command.c, backend/commands/explain.c,\n\tbackend/executor/functions.c, backend/executor/spi.c,\n\tbackend/nodes/copyfuncs.c, backend/nodes/equalfuncs.c,\n\tbackend/nodes/readfuncs.c, backend/parser/analyze.c,\n\tbackend/tcop/dest.c, backend/tcop/postgres.c,\n\tbackend/tcop/pquery.c, backend/tcop/utility.c,\n\tinclude/commands/command.h, include/nodes/parsenodes.h,\n\tinclude/tcop/dest.h, include/tcop/pquery.h, include/tcop/utility.h\n\t(REL7_2_STABLE): Back-patch fix for command completion report\n\thandling. This is primarily needed so that INSERTing a row still\n\treports the row's OID even when there are ON INSERT rules firing\n\tadditional queries.\n\n2002-02-25 16:37 tgl\n\n\t* src/bin/psql/command.c (REL7_2_STABLE): Tweak psql's \\connect\n\tcommand to not downcase unquoted database and user names. This is\n\ta temporary measure to allow backwards compatibility with 7.2 and\n\tearlier pg_dump. 7.2.1 and later pg_dump will double-quote mixed\n\tcase names in \\connect. 
Once we feel that older dumps are not a\n\tproblem anymore, we can revert this change and treat \\connect\n\targuments as normal SQL identifiers.\n\n2002-02-25 15:07 momjian\n\n\t* src/backend/libpq/auth.c (REL7_2_STABLE): Fix for PAM error\n\tmessage display:\n\t\n\t> and that the right fix is to make each of the subsequent calls be\n\tin\n\t> this same pattern, not to try to emulate their nonsensical style.\n\t\n\tDominic J. Eidson\n\n2002-02-25 11:22 thomas\n\n\t* src/backend/utils/adt/datetime.c (REL7_2_STABLE): Add a large\n\tnumber of time zones to the lookup table. Fix a few\n\tapparently-wrong TZ vs DTZ declarations. Same patch as added to\n\tHEAD.\n\n2002-02-22 10:40 momjian\n\n\t* src/include/libpq/pqsignal.h (REL7_2_STABLE): We had a problem\n\twith to compile pgsql-7.2 under SW-8.0. In the mailing lists I\n\tfound no informations.\tSee note for further informations.\n\t\n\tAdd missing AuthBlockSig.\n\t\n\tregards Heiko\n\n2002-02-22 08:02 momjian\n\n\t* doc/: FAQ_russian, src/FAQ/FAQ_russian.html (REL7_2_STABLE):\n\tBACKPATCH:\n\t\n\tAdd Russian FAQ to 7.2.1. Why not?\n\n2002-02-22 01:08 momjian\n\n\t* contrib/btree_gist/btree_gist.c (REL7_2_STABLE): BACKPATCH:\n\t\n\tPlease, apply attached patch of contrib/btree_gist to 7.2.1 and\n\tcurrent cvs. The patch fixes memory leak during creation GiST\n\tindex on timestamp column.\n\t\n\tThank you.\n\t\n\t-- Teodor Sigaev teodor@stack.net\n\n2002-02-19 17:19 tgl\n\n\t* src/backend/utils/adt/cash.c (REL7_2_STABLE): Avoid failures in\n\tcash_out and cash_words for INT_MIN. Also, 'fourty' -> 'forty'.\n\n2002-02-18 11:04 tgl\n\n\t* src/backend/commands/analyze.c (REL7_2_STABLE): Replace\n\tnumber-of-distinct-values estimator equation, per recent pghackers\n\tdiscussion.\n\n2002-02-17 23:12 ishii\n\n\t* src/bin/pgaccess/lib/tables.tcl (REL7_2_STABLE): Fix\n\tkanji-coversion key binding. 
This has been broken since 7.1 Per\n\tYoshinori Ariie's report.\n\n2002-02-17 08:29 momjian\n\n\t* doc/src/sgml/ref/alter_table.sgml: Fix SGML typo in previous\n\tpatch.\n\n2002-02-17 06:50 momjian\n\n\t* doc/src/sgml/ref/alter_table.sgml: I think it's important that\n\tit's actually documented that they can add primary keys after the\n\tfact!\n\t\n\tAlso, we need to add regression tests for alter table / add primary\n\tkey and alter table / drop constraint.\tThese shouldn't be added\n\tuntil 7.3 tho methinks...\n\t\n\tChris\n\n2002-02-16 18:45 momjian\n\n\t* doc/src/sgml/ref/alter_table.sgml: Clarify params to ALTER TABLE\n\tto clearly show single parameters.\n\t\n\te.g. table contraint definition -> table_constraint_definition.\n\n2002-02-15 12:46 petere\n\n\t* src/interfaces/ecpg/preproc/pgc.l: Remove warning about automatic\n\tinclusion of sqlca.\n\n2002-02-14 10:24 tgl\n\n\t* src/: backend/commands/command.c, backend/executor/spi.c,\n\tbackend/utils/mmgr/portalmem.c, include/utils/portal.h: Ensure that\n\ta cursor is scanned under the same scanCommandId it was originally\n\tcreated with, so that the set of visible tuples does not change as\n\ta result of other activity. This essentially makes PG cursors\n\tINSENSITIVE per the SQL92 definition. See bug report of 13-Feb-02.\n\n2002-02-13 14:32 tgl\n\n\t* doc/src/sgml/ref/createuser.sgml: Point out that --adduser\n\tactually makes the new user a superuser. This was mentioned on the\n\tman page for the underlying CREATE USER command, but it should be\n\texplained here too.\n\n2002-02-12 18:39 tgl\n\n\t* src/backend/port/dynloader/: README.dlfcn.aix, aix.h, bsdi.h,\n\tdgux.h, freebsd.h, irix5.h, linux.h, netbsd.h, openbsd.h, osf.h,\n\tsco.h, solaris.h, sunos4.h, svr4.h, univel.h, unixware.h, win.h:\n\tUse RTLD_NOW, not RTLD_LAZY, as binding mode for dlopen() on all\n\tplatforms. This restores the Linux behavior to what it was in PG\n\t7.0 and 7.1, and causes other platforms to agree. 
(Other\n\twell-tested platforms like HPUX were doing it this way already.) \n\tPer pghackers discussion over the past month or so.\n\n2002-02-12 17:35 tgl\n\n\t* doc/FAQ_Solaris: Add warning not to use /usr/ucb/cc on Solaris.\n\n2002-02-12 17:25 momjian\n\n\t* doc/src/sgml/advanced.sgml: Fix tutorial for references problem,\n\tfrom rainer.tammer@spg.schulergroup.com\n\n2002-02-12 16:25 tgl\n\n\t* doc/src/sgml/ref/copy.sgml, src/backend/commands/copy.c: Modify\n\tCOPY TO to emit carriage returns and newlines as backslash escapes\n\t(backslash-r, backslash-n) for protection against\n\tnewline-conversion munging. In future we will also tweak COPY\n\tFROM, but this part of the change should be backwards-compatible. \n\tPer pghackers discussion. Also, update COPY reference page to\n\tdescribe the backslash conversions more completely and accurately.\n\n2002-02-11 18:25 momjian\n\n\t* doc/src/sgml/wal.sgml: Update wal files computation\n\tdocumentation.\n\n2002-02-11 17:41 tgl\n\n\t* src/backend/access/gist/gist.c: Tweak GiST code to work correctly\n\ton machines where 8-byte alignment of pointers is required. Patch\n\tfrom Teodor Sigaev per pghackers discussion. It's an ugly kluge\n\tbut avoids forcing initdb; we'll put a better fix into 7.3 or\n\tlater.\n\n2002-02-11 16:38 petere\n\n\t* src/backend/port/dynloader/freebsd.h: Fix for old FreeBSD\n\tversions that don't have RTLD_GLOBAL\n\n2002-02-11 15:10 tgl\n\n\t* src/backend/executor/: nodeIndexscan.c, nodeTidscan.c: Repair\n\tproblems with EvalPlanQual where target table is scanned as inner\n\tindexscan (ie, one with runtime keys).\tExecIndexReScan must\n\tcompute or recompute runtime keys even if we are rescanning in the\n\tEPQ case. TidScan seems to have comparable problems. 
Per bug\n\tnoted by Barry Lind 11-Feb-02.\n\n2002-02-11 10:19 momjian\n\n\t* contrib/pg_upgrade/pg_upgrade: Fix flag handling of pg_upgrade.\n\n2002-02-10 19:18 tgl\n\n\t* src/bin/pg_dump/: common.c, pg_backup.h, pg_backup_archiver.c,\n\tpg_dump.c, pg_dump.h, pg_dumpall.sh: Be more wary about mixed-case\n\tdatabase names and user names.\tGet the CREATE DATABASE command\n\tright in pg_dump -C case.\n\n2002-02-10 19:14 tgl\n\n\t* doc/src/sgml/ref/: pg_dump.sgml, pg_restore.sgml: pg_dump and\n\tpg_restore man pages need to mention that one should restore into a\n\tvirgin database, ie, one created from template0, if there are any\n\tsite-local additions in template1.\n\n2002-02-10 17:56 tgl\n\n\t* src/backend/storage/file/fd.c: Don't Assert() that fsync() and\n\tclose() never fail; I have seen this crash on Solaris when over\n\tdisk quota. Instead, report such failures via elog(DEBUG).\n\n2002-02-08 11:30 momjian\n\n\t* src/backend/utils/init/findbe.c: Move sys/types.h to top, for\n\thiroyuki hanai/ FreeBSD.\n\n2002-02-08 09:47 momjian\n\n\t* contrib/mysql/my2pg.pl: Upgrade my2pg version 1.23.\n\n2002-02-07 17:20 tgl\n\n\t* src/backend/postmaster/pgstat.c: pgstat's truncation of query\n\tstring needs to be multibyte-aware. Patch from sugita@sra.co.jp.\n\n2002-02-07 17:11 tgl\n\n\t* contrib/: intarray/_int.c, tsearch/README.tsearch,\n\ttsearch/gistidx.c, tsearch/tsearch.sql.in: Repair some problems in\n\tGIST-index contrib modules. Patch from Teodor Sigaev\n\t<teodor@stack.net>.\n\n2002-02-06 19:27 inoue\n\n\t* src/backend/tcop/utility.c: Removed a check for REINDEX TABLE.\n\n2002-02-06 15:29 petere\n\n\t* doc/Makefile: Fix for parallel make\n\n2002-02-06 12:27 tgl\n\n\t* src/bin/pg_dump/: pg_backup_archiver.c, pg_dump.c: Fix failure to\n\treconnect as sequence's owner before issuing setval().\n",
"msg_date": "Thu, 07 Mar 2002 20:28:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Time for 7.2.1?"
},
{
"msg_contents": "The ecpg Patch I send correct a bug in the pre-processor that it would be\nnice to have in this release.\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: <pgsql-hackers@postgreSQL.org>\nSent: Friday, March 08, 2002 12:28 PM\nSubject: [HACKERS] Time for 7.2.1?\n\n\n> Seems like it's about time to release 7.2.1. We have enough\n> post-release patches in there that a dot-release is clearly needed\n> (particularly for the pg_dump-related issues). And it seems like\n> not too much new stuff is coming in. Does anyone have any issues\n> outstanding that need to be dealt with before 7.2.1?\n>\n> regards, tom lane\n>\n>\n> Post-release patches in the 7.2 branch:\n>\n> 2002-03-05 01:10 momjian\n>\n> * contrib/tsearch/dict/porter_english.dct (REL7_2_STABLE): Please,\n> apply attached patch for contrib/tsearch to 7.2.1 and current CVS.\n> It fix english stemmer's problem with ending words like\n> 'technology'.\n>\n> We have found one more bug in english stemmer. The bug is with\n> 'irregular' english words like 'skies' -> 'sky'. Please, apply\n> attached cumulative patch to 7.2.1 and current CVS instead\n> previous one.\n>\n> Thank to Thomas T. Thai <tom@minnesota.com> for hard testing. This\n> kind of bug has significance only for dump/reload database and\n> viewing, but searching/indexing works right.\n>\n> Teodor Sigaev\n>\n> 2002-03-05 00:13 tgl\n>\n> * src/backend/optimizer/prep/prepunion.c (REL7_2_STABLE): Previous\n> patch to mark UNION outputs with common typmod (if any) breaks\n> three-or-more-way UNIONs, as per example from Josh Berkus. Cause\n> is a fragile assumption that one tlist's entries will exactly match\n> another. 
Restructure code to make that assumption a little less\n> fragile.\n>\n> 2002-03-04 22:45 ishii\n>\n> * src/: backend/utils/adt/timestamp.c,\n> test/regress/expected/timestamp.out,\n> test/regress/expected/timestamptz.out (REL7_2_STABLE): A backport\n> patch:\n>\n> Fix bug in extract/date_part for milliseconds/miscroseconds and\n> timestamp/timestamptz combo. Now extract/date_part returns\n> seconds*1000 or 1000000 + fraction part as the manual stats.\n> regression test are also fixed.\n>\n> See the thread in pgsql-hackers:\n>\n> Subject: Re: [HACKERS] timestamp_part() bug? Date: Sat, 02 Mar 2002\n> 11:29:53 +0900\n>\n> 2002-03-04 12:47 tgl\n>\n> * doc/FAQ_Solaris (REL7_2_STABLE): Update FAQ_Solaris with info\n> about gcc 2.95.1 problems and how to work around 64-bit vsnprintf\n> bug.\n>\n> 2002-02-27 18:17 tgl\n>\n> * src/backend/tcop/postgres.c (REL7_2_STABLE): Back-patch fix for\n> errors reported at transaction end.\n>\n> 2002-02-26 20:47 ishii\n>\n> * src/backend/commands/copy.c (REL7_2_STABLE): Back-patch fix for\n> followings:\n>\n> Fix bug in COPY FROM when DELIMITER is not in ASCII range. See\n> pgsql-bugs/pgsql-hackers discussion \"COPY FROM is not 8bit clean\"\n> around 2002/02/26 for more details -- Tatsuo Ishii\n>\n> 2002-02-26 18:48 tgl\n>\n> * src/: backend/commands/command.c, backend/commands/explain.c,\n> backend/executor/functions.c, backend/executor/spi.c,\n> backend/nodes/copyfuncs.c, backend/nodes/equalfuncs.c,\n> backend/nodes/readfuncs.c, backend/parser/analyze.c,\n> backend/tcop/dest.c, backend/tcop/postgres.c,\n> backend/tcop/pquery.c, backend/tcop/utility.c,\n> include/commands/command.h, include/nodes/parsenodes.h,\n> include/tcop/dest.h, include/tcop/pquery.h, include/tcop/utility.h\n> (REL7_2_STABLE): Back-patch fix for command completion report\n> handling. 
This is primarily needed so that INSERTing a row still\n> reports the row's OID even when there are ON INSERT rules firing\n> additional queries.\n>\n> 2002-02-25 16:37 tgl\n>\n> * src/bin/psql/command.c (REL7_2_STABLE): Tweak psql's \\connect\n> command to not downcase unquoted database and user names. This is\n> a temporary measure to allow backwards compatibility with 7.2 and\n> earlier pg_dump. 7.2.1 and later pg_dump will double-quote mixed\n> case names in \\connect. Once we feel that older dumps are not a\n> problem anymore, we can revert this change and treat \\connect\n> arguments as normal SQL identifiers.\n>\n> 2002-02-25 15:07 momjian\n>\n> * src/backend/libpq/auth.c (REL7_2_STABLE): Fix for PAM error\n> message display:\n>\n> > and that the right fix is to make each of the subsequent calls be\n> in\n> > this same pattern, not to try to emulate their nonsensical style.\n>\n> Dominic J. Eidson\n>\n> 2002-02-25 11:22 thomas\n>\n> * src/backend/utils/adt/datetime.c (REL7_2_STABLE): Add a large\n> number of time zones to the lookup table. Fix a few\n> apparently-wrong TZ vs DTZ declarations. Same patch as added to\n> HEAD.\n>\n> 2002-02-22 10:40 momjian\n>\n> * src/include/libpq/pqsignal.h (REL7_2_STABLE): We had a problem\n> with to compile pgsql-7.2 under SW-8.0. In the mailing lists I\n> found no informations. See note for further informations.\n>\n> Add missing AuthBlockSig.\n>\n> regards Heiko\n>\n> 2002-02-22 08:02 momjian\n>\n> * doc/: FAQ_russian, src/FAQ/FAQ_russian.html (REL7_2_STABLE):\n> BACKPATCH:\n>\n> Add Russian FAQ to 7.2.1. Why not?\n>\n> 2002-02-22 01:08 momjian\n>\n> * contrib/btree_gist/btree_gist.c (REL7_2_STABLE): BACKPATCH:\n>\n> Please, apply attached patch of contrib/btree_gist to 7.2.1 and\n> current cvs. 
The patch fixes memory leak during creation GiST\n> index on timestamp column.\n>\n> Thank you.\n>\n> -- Teodor Sigaev teodor@stack.net\n>\n> 2002-02-19 17:19 tgl\n>\n> * src/backend/utils/adt/cash.c (REL7_2_STABLE): Avoid failures in\n> cash_out and cash_words for INT_MIN. Also, 'fourty' -> 'forty'.\n>\n> 2002-02-18 11:04 tgl\n>\n> * src/backend/commands/analyze.c (REL7_2_STABLE): Replace\n> number-of-distinct-values estimator equation, per recent pghackers\n> discussion.\n>\n> 2002-02-17 23:12 ishii\n>\n> * src/bin/pgaccess/lib/tables.tcl (REL7_2_STABLE): Fix\n> kanji-coversion key binding. This has been broken since 7.1 Per\n> Yoshinori Ariie's report.\n>\n> 2002-02-17 08:29 momjian\n>\n> * doc/src/sgml/ref/alter_table.sgml: Fix SGML typo in previous\n> patch.\n>\n> 2002-02-17 06:50 momjian\n>\n> * doc/src/sgml/ref/alter_table.sgml: I think it's important that\n> it's actually documented that they can add primary keys after the\n> fact!\n>\n> Also, we need to add regression tests for alter table / add primary\n> key and alter table / drop constraint. These shouldn't be added\n> until 7.3 tho methinks...\n>\n> Chris\n>\n> 2002-02-16 18:45 momjian\n>\n> * doc/src/sgml/ref/alter_table.sgml: Clarify params to ALTER TABLE\n> to clearly show single parameters.\n>\n> e.g. table contraint definition -> table_constraint_definition.\n>\n> 2002-02-15 12:46 petere\n>\n> * src/interfaces/ecpg/preproc/pgc.l: Remove warning about automatic\n> inclusion of sqlca.\n>\n> 2002-02-14 10:24 tgl\n>\n> * src/: backend/commands/command.c, backend/executor/spi.c,\n> backend/utils/mmgr/portalmem.c, include/utils/portal.h: Ensure that\n> a cursor is scanned under the same scanCommandId it was originally\n> created with, so that the set of visible tuples does not change as\n> a result of other activity. This essentially makes PG cursors\n> INSENSITIVE per the SQL92 definition. 
See bug report of 13-Feb-02.\n>\n> 2002-02-13 14:32 tgl\n>\n> * doc/src/sgml/ref/createuser.sgml: Point out that --adduser\n> actually makes the new user a superuser. This was mentioned on the\n> man page for the underlying CREATE USER command, but it should be\n> explained here too.\n>\n> 2002-02-12 18:39 tgl\n>\n> * src/backend/port/dynloader/: README.dlfcn.aix, aix.h, bsdi.h,\n> dgux.h, freebsd.h, irix5.h, linux.h, netbsd.h, openbsd.h, osf.h,\n> sco.h, solaris.h, sunos4.h, svr4.h, univel.h, unixware.h, win.h:\n> Use RTLD_NOW, not RTLD_LAZY, as binding mode for dlopen() on all\n> platforms. This restores the Linux behavior to what it was in PG\n> 7.0 and 7.1, and causes other platforms to agree. (Other\n> well-tested platforms like HPUX were doing it this way already.)\n> Per pghackers discussion over the past month or so.\n>\n> 2002-02-12 17:35 tgl\n>\n> * doc/FAQ_Solaris: Add warning not to use /usr/ucb/cc on Solaris.\n>\n> 2002-02-12 17:25 momjian\n>\n> * doc/src/sgml/advanced.sgml: Fix tutorial for references problem,\n> from rainer.tammer@spg.schulergroup.com\n>\n> 2002-02-12 16:25 tgl\n>\n> * doc/src/sgml/ref/copy.sgml, src/backend/commands/copy.c: Modify\n> COPY TO to emit carriage returns and newlines as backslash escapes\n> (backslash-r, backslash-n) for protection against\n> newline-conversion munging. In future we will also tweak COPY\n> FROM, but this part of the change should be backwards-compatible.\n> Per pghackers discussion. Also, update COPY reference page to\n> describe the backslash conversions more completely and accurately.\n>\n> 2002-02-11 18:25 momjian\n>\n> * doc/src/sgml/wal.sgml: Update wal files computation\n> documentation.\n>\n> 2002-02-11 17:41 tgl\n>\n> * src/backend/access/gist/gist.c: Tweak GiST code to work correctly\n> on machines where 8-byte alignment of pointers is required. Patch\n> from Teodor Sigaev per pghackers discussion. 
It's an ugly kluge\n> but avoids forcing initdb; we'll put a better fix into 7.3 or\n> later.\n>\n> 2002-02-11 16:38 petere\n>\n> * src/backend/port/dynloader/freebsd.h: Fix for old FreeBSD\n> versions that don't have RTLD_GLOBAL\n>\n> 2002-02-11 15:10 tgl\n>\n> * src/backend/executor/: nodeIndexscan.c, nodeTidscan.c: Repair\n> problems with EvalPlanQual where target table is scanned as inner\n> indexscan (ie, one with runtime keys). ExecIndexReScan must\n> compute or recompute runtime keys even if we are rescanning in the\n> EPQ case. TidScan seems to have comparable problems. Per bug\n> noted by Barry Lind 11-Feb-02.\n>\n> 2002-02-11 10:19 momjian\n>\n> * contrib/pg_upgrade/pg_upgrade: Fix flag handling of pg_upgrade.\n>\n> 2002-02-10 19:18 tgl\n>\n> * src/bin/pg_dump/: common.c, pg_backup.h, pg_backup_archiver.c,\n> pg_dump.c, pg_dump.h, pg_dumpall.sh: Be more wary about mixed-case\n> database names and user names. Get the CREATE DATABASE command\n> right in pg_dump -C case.\n>\n> 2002-02-10 19:14 tgl\n>\n> * doc/src/sgml/ref/: pg_dump.sgml, pg_restore.sgml: pg_dump and\n> pg_restore man pages need to mention that one should restore into a\n> virgin database, ie, one created from template0, if there are any\n> site-local additions in template1.\n>\n> 2002-02-10 17:56 tgl\n>\n> * src/backend/storage/file/fd.c: Don't Assert() that fsync() and\n> close() never fail; I have seen this crash on Solaris when over\n> disk quota. Instead, report such failures via elog(DEBUG).\n>\n> 2002-02-08 11:30 momjian\n>\n> * src/backend/utils/init/findbe.c: Move sys/types.h to top, for\n> hiroyuki hanai/ FreeBSD.\n>\n> 2002-02-08 09:47 momjian\n>\n> * contrib/mysql/my2pg.pl: Upgrade my2pg version 1.23.\n>\n> 2002-02-07 17:20 tgl\n>\n> * src/backend/postmaster/pgstat.c: pgstat's truncation of query\n> string needs to be multibyte-aware. 
Patch from sugita@sra.co.jp.\n>\n> 2002-02-07 17:11 tgl\n>\n> * contrib/: intarray/_int.c, tsearch/README.tsearch,\n> tsearch/gistidx.c, tsearch/tsearch.sql.in: Repair some problems in\n> GIST-index contrib modules. Patch from Teodor Sigaev\n> <teodor@stack.net>.\n>\n> 2002-02-06 19:27 inoue\n>\n> * src/backend/tcop/utility.c: Removed a check for REINDEX TABLE.\n>\n> 2002-02-06 15:29 petere\n>\n> * doc/Makefile: Fix for parallel make\n>\n> 2002-02-06 12:27 tgl\n>\n> * src/bin/pg_dump/: pg_backup_archiver.c, pg_dump.c: Fix failure to\n> reconnect as sequence's owner before issuing setval().\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n\n",
"msg_date": "Fri, 8 Mar 2002 14:05:24 +1100",
"msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Time for 7.2.1?"
},
{
"msg_contents": "\nI see the patch appeared on patches March 4, and on hackers March 6:\n\n\thttp://groups.google.com/groups?hl=en&q=BUG%23599&btnG=Google+Search&meta=group%3Dcomp.databases.postgresql.*\n\nI was hoping our ecpg guy could take a look at it before it was applied.\nI will apply it in a day or so. It is in the patches queue just added\ntodo:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\n---------------------------------------------------------------------------\n\nNicolas Bazin wrote:\n> The ecpg Patch I send correct a bug in the pre-processor that it would be\n> nice to have in this release.\n> ----- Original Message -----\n> From: \"Tom Lane\" <tgl@sss.pgh.pa.us>\n> To: <pgsql-hackers@postgreSQL.org>\n> Sent: Friday, March 08, 2002 12:28 PM\n> Subject: [HACKERS] Time for 7.2.1?\n> \n> \n> > Seems like it's about time to release 7.2.1. We have enough\n> > post-release patches in there that a dot-release is clearly needed\n> > (particularly for the pg_dump-related issues). And it seems like\n> > not too much new stuff is coming in. Does anyone have any issues\n> > outstanding that need to be dealt with before 7.2.1?\n> >\n> > regards, tom lane\n> >\n> >\n> > Post-release patches in the 7.2 branch:\n> >\n> > 2002-03-05 01:10 momjian\n> >\n> > * contrib/tsearch/dict/porter_english.dct (REL7_2_STABLE): Please,\n> > apply attached patch for contrib/tsearch to 7.2.1 and current CVS.\n> > It fix english stemmer's problem with ending words like\n> > 'technology'.\n> >\n> > We have found one more bug in english stemmer. The bug is with\n> > 'irregular' english words like 'skies' -> 'sky'. Please, apply\n> > attached cumulative patch to 7.2.1 and current CVS instead\n> > previous one.\n> >\n> > Thank to Thomas T. Thai <tom@minnesota.com> for hard testing. 
This\n> > kind of bug has significance only for dump/reload database and\n> > viewing, but searching/indexing works right.\n> >\n> > Teodor Sigaev\n> >\n> > 2002-03-05 00:13 tgl\n> >\n> > * src/backend/optimizer/prep/prepunion.c (REL7_2_STABLE): Previous\n> > patch to mark UNION outputs with common typmod (if any) breaks\n> > three-or-more-way UNIONs, as per example from Josh Berkus. Cause\n> > is a fragile assumption that one tlist's entries will exactly match\n> > another. Restructure code to make that assumption a little less\n> > fragile.\n> >\n> > 2002-03-04 22:45 ishii\n> >\n> > * src/: backend/utils/adt/timestamp.c,\n> > test/regress/expected/timestamp.out,\n> > test/regress/expected/timestamptz.out (REL7_2_STABLE): A backport\n> > patch:\n> >\n> > Fix bug in extract/date_part for milliseconds/miscroseconds and\n> > timestamp/timestamptz combo. Now extract/date_part returns\n> > seconds*1000 or 1000000 + fraction part as the manual stats.\n> > regression test are also fixed.\n> >\n> > See the thread in pgsql-hackers:\n> >\n> > Subject: Re: [HACKERS] timestamp_part() bug? Date: Sat, 02 Mar 2002\n> > 11:29:53 +0900\n> >\n> > 2002-03-04 12:47 tgl\n> >\n> > * doc/FAQ_Solaris (REL7_2_STABLE): Update FAQ_Solaris with info\n> > about gcc 2.95.1 problems and how to work around 64-bit vsnprintf\n> > bug.\n> >\n> > 2002-02-27 18:17 tgl\n> >\n> > * src/backend/tcop/postgres.c (REL7_2_STABLE): Back-patch fix for\n> > errors reported at transaction end.\n> >\n> > 2002-02-26 20:47 ishii\n> >\n> > * src/backend/commands/copy.c (REL7_2_STABLE): Back-patch fix for\n> > followings:\n> >\n> > Fix bug in COPY FROM when DELIMITER is not in ASCII range. 
See\n> > pgsql-bugs/pgsql-hackers discussion \"COPY FROM is not 8bit clean\"\n> > around 2002/02/26 for more details -- Tatsuo Ishii\n> >\n> > 2002-02-26 18:48 tgl\n> >\n> > * src/: backend/commands/command.c, backend/commands/explain.c,\n> > backend/executor/functions.c, backend/executor/spi.c,\n> > backend/nodes/copyfuncs.c, backend/nodes/equalfuncs.c,\n> > backend/nodes/readfuncs.c, backend/parser/analyze.c,\n> > backend/tcop/dest.c, backend/tcop/postgres.c,\n> > backend/tcop/pquery.c, backend/tcop/utility.c,\n> > include/commands/command.h, include/nodes/parsenodes.h,\n> > include/tcop/dest.h, include/tcop/pquery.h, include/tcop/utility.h\n> > (REL7_2_STABLE): Back-patch fix for command completion report\n> > handling. This is primarily needed so that INSERTing a row still\n> > reports the row's OID even when there are ON INSERT rules firing\n> > additional queries.\n> >\n> > 2002-02-25 16:37 tgl\n> >\n> > * src/bin/psql/command.c (REL7_2_STABLE): Tweak psql's \\connect\n> > command to not downcase unquoted database and user names. This is\n> > a temporary measure to allow backwards compatibility with 7.2 and\n> > earlier pg_dump. 7.2.1 and later pg_dump will double-quote mixed\n> > case names in \\connect. Once we feel that older dumps are not a\n> > problem anymore, we can revert this change and treat \\connect\n> > arguments as normal SQL identifiers.\n> >\n> > 2002-02-25 15:07 momjian\n> >\n> > * src/backend/libpq/auth.c (REL7_2_STABLE): Fix for PAM error\n> > message display:\n> >\n> > > and that the right fix is to make each of the subsequent calls be\n> > in\n> > > this same pattern, not to try to emulate their nonsensical style.\n> >\n> > Dominic J. Eidson\n> >\n> > 2002-02-25 11:22 thomas\n> >\n> > * src/backend/utils/adt/datetime.c (REL7_2_STABLE): Add a large\n> > number of time zones to the lookup table. Fix a few\n> > apparently-wrong TZ vs DTZ declarations. 
Same patch as added to\n> > HEAD.\n> >\n> > 2002-02-22 10:40 momjian\n> >\n> > * src/include/libpq/pqsignal.h (REL7_2_STABLE): We had a problem\n> > with to compile pgsql-7.2 under SW-8.0. In the mailing lists I\n> > found no informations. See note for further informations.\n> >\n> > Add missing AuthBlockSig.\n> >\n> > regards Heiko\n> >\n> > 2002-02-22 08:02 momjian\n> >\n> > * doc/: FAQ_russian, src/FAQ/FAQ_russian.html (REL7_2_STABLE):\n> > BACKPATCH:\n> >\n> > Add Russian FAQ to 7.2.1. Why not?\n> >\n> > 2002-02-22 01:08 momjian\n> >\n> > * contrib/btree_gist/btree_gist.c (REL7_2_STABLE): BACKPATCH:\n> >\n> > Please, apply attached patch of contrib/btree_gist to 7.2.1 and\n> > current cvs. The patch fixes memory leak during creation GiST\n> > index on timestamp column.\n> >\n> > Thank you.\n> >\n> > -- Teodor Sigaev teodor@stack.net\n> >\n> > 2002-02-19 17:19 tgl\n> >\n> > * src/backend/utils/adt/cash.c (REL7_2_STABLE): Avoid failures in\n> > cash_out and cash_words for INT_MIN. Also, 'fourty' -> 'forty'.\n> >\n> > 2002-02-18 11:04 tgl\n> >\n> > * src/backend/commands/analyze.c (REL7_2_STABLE): Replace\n> > number-of-distinct-values estimator equation, per recent pghackers\n> > discussion.\n> >\n> > 2002-02-17 23:12 ishii\n> >\n> > * src/bin/pgaccess/lib/tables.tcl (REL7_2_STABLE): Fix\n> > kanji-coversion key binding. This has been broken since 7.1 Per\n> > Yoshinori Ariie's report.\n> >\n> > 2002-02-17 08:29 momjian\n> >\n> > * doc/src/sgml/ref/alter_table.sgml: Fix SGML typo in previous\n> > patch.\n> >\n> > 2002-02-17 06:50 momjian\n> >\n> > * doc/src/sgml/ref/alter_table.sgml: I think it's important that\n> > it's actually documented that they can add primary keys after the\n> > fact!\n> >\n> > Also, we need to add regression tests for alter table / add primary\n> > key and alter table / drop constraint. 
These shouldn't be added\n> > until 7.3 tho methinks...\n> >\n> > Chris\n> >\n> > 2002-02-16 18:45 momjian\n> >\n> > * doc/src/sgml/ref/alter_table.sgml: Clarify params to ALTER TABLE\n> > to clearly show single parameters.\n> >\n> > e.g. table contraint definition -> table_constraint_definition.\n> >\n> > 2002-02-15 12:46 petere\n> >\n> > * src/interfaces/ecpg/preproc/pgc.l: Remove warning about automatic\n> > inclusion of sqlca.\n> >\n> > 2002-02-14 10:24 tgl\n> >\n> > * src/: backend/commands/command.c, backend/executor/spi.c,\n> > backend/utils/mmgr/portalmem.c, include/utils/portal.h: Ensure that\n> > a cursor is scanned under the same scanCommandId it was originally\n> > created with, so that the set of visible tuples does not change as\n> > a result of other activity. This essentially makes PG cursors\n> > INSENSITIVE per the SQL92 definition. See bug report of 13-Feb-02.\n> >\n> > 2002-02-13 14:32 tgl\n> >\n> > * doc/src/sgml/ref/createuser.sgml: Point out that --adduser\n> > actually makes the new user a superuser. This was mentioned on the\n> > man page for the underlying CREATE USER command, but it should be\n> > explained here too.\n> >\n> > 2002-02-12 18:39 tgl\n> >\n> > * src/backend/port/dynloader/: README.dlfcn.aix, aix.h, bsdi.h,\n> > dgux.h, freebsd.h, irix5.h, linux.h, netbsd.h, openbsd.h, osf.h,\n> > sco.h, solaris.h, sunos4.h, svr4.h, univel.h, unixware.h, win.h:\n> > Use RTLD_NOW, not RTLD_LAZY, as binding mode for dlopen() on all\n> > platforms. This restores the Linux behavior to what it was in PG\n> > 7.0 and 7.1, and causes other platforms to agree. 
(Other\n> > well-tested platforms like HPUX were doing it this way already.)\n> > Per pghackers discussion over the past month or so.\n> >\n> > 2002-02-12 17:35 tgl\n> >\n> > * doc/FAQ_Solaris: Add warning not to use /usr/ucb/cc on Solaris.\n> >\n> > 2002-02-12 17:25 momjian\n> >\n> > * doc/src/sgml/advanced.sgml: Fix tutorial for references problem,\n> > from rainer.tammer@spg.schulergroup.com\n> >\n> > 2002-02-12 16:25 tgl\n> >\n> > * doc/src/sgml/ref/copy.sgml, src/backend/commands/copy.c: Modify\n> > COPY TO to emit carriage returns and newlines as backslash escapes\n> > (backslash-r, backslash-n) for protection against\n> > newline-conversion munging. In future we will also tweak COPY\n> > FROM, but this part of the change should be backwards-compatible.\n> > Per pghackers discussion. Also, update COPY reference page to\n> > describe the backslash conversions more completely and accurately.\n> >\n> > 2002-02-11 18:25 momjian\n> >\n> > * doc/src/sgml/wal.sgml: Update wal files computation\n> > documentation.\n> >\n> > 2002-02-11 17:41 tgl\n> >\n> > * src/backend/access/gist/gist.c: Tweak GiST code to work correctly\n> > on machines where 8-byte alignment of pointers is required. Patch\n> > from Teodor Sigaev per pghackers discussion. It's an ugly kluge\n> > but avoids forcing initdb; we'll put a better fix into 7.3 or\n> > later.\n> >\n> > 2002-02-11 16:38 petere\n> >\n> > * src/backend/port/dynloader/freebsd.h: Fix for old FreeBSD\n> > versions that don't have RTLD_GLOBAL\n> >\n> > 2002-02-11 15:10 tgl\n> >\n> > * src/backend/executor/: nodeIndexscan.c, nodeTidscan.c: Repair\n> > problems with EvalPlanQual where target table is scanned as inner\n> > indexscan (ie, one with runtime keys). ExecIndexReScan must\n> > compute or recompute runtime keys even if we are rescanning in the\n> > EPQ case. TidScan seems to have comparable problems. 
Per bug\n> > noted by Barry Lind 11-Feb-02.\n> >\n> > 2002-02-11 10:19 momjian\n> >\n> > * contrib/pg_upgrade/pg_upgrade: Fix flag handling of pg_upgrade.\n> >\n> > 2002-02-10 19:18 tgl\n> >\n> > * src/bin/pg_dump/: common.c, pg_backup.h, pg_backup_archiver.c,\n> > pg_dump.c, pg_dump.h, pg_dumpall.sh: Be more wary about mixed-case\n> > database names and user names. Get the CREATE DATABASE command\n> > right in pg_dump -C case.\n> >\n> > 2002-02-10 19:14 tgl\n> >\n> > * doc/src/sgml/ref/: pg_dump.sgml, pg_restore.sgml: pg_dump and\n> > pg_restore man pages need to mention that one should restore into a\n> > virgin database, ie, one created from template0, if there are any\n> > site-local additions in template1.\n> >\n> > 2002-02-10 17:56 tgl\n> >\n> > * src/backend/storage/file/fd.c: Don't Assert() that fsync() and\n> > close() never fail; I have seen this crash on Solaris when over\n> > disk quota. Instead, report such failures via elog(DEBUG).\n> >\n> > 2002-02-08 11:30 momjian\n> >\n> > * src/backend/utils/init/findbe.c: Move sys/types.h to top, for\n> > hiroyuki hanai/ FreeBSD.\n> >\n> > 2002-02-08 09:47 momjian\n> >\n> > * contrib/mysql/my2pg.pl: Upgrade my2pg version 1.23.\n> >\n> > 2002-02-07 17:20 tgl\n> >\n> > * src/backend/postmaster/pgstat.c: pgstat's truncation of query\n> > string needs to be multibyte-aware. Patch from sugita@sra.co.jp.\n> >\n> > 2002-02-07 17:11 tgl\n> >\n> > * contrib/: intarray/_int.c, tsearch/README.tsearch,\n> > tsearch/gistidx.c, tsearch/tsearch.sql.in: Repair some problems in\n> > GIST-index contrib modules. 
Patch from Teodor Sigaev\n> > <teodor@stack.net>.\n> >\n> > 2002-02-06 19:27 inoue\n> >\n> > * src/backend/tcop/utility.c: Removed a check for REINDEX TABLE.\n> >\n> > 2002-02-06 15:29 petere\n> >\n> > * doc/Makefile: Fix for parallel make\n> >\n> > 2002-02-06 12:27 tgl\n> >\n> > * src/bin/pg_dump/: pg_backup_archiver.c, pg_dump.c: Fix failure to\n> > reconnect as sequence's owner before issuing setval().\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 7 Mar 2002 22:36:04 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Time for 7.2.1?"
},
{
"msg_contents": "\nSorry, should have been clearer. I don't think the ecpg patch qualifies\nas a major bug that should be fixed in 7.2.1. It will appear in 7.3.\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> \n> I see the patch appeared on patches March 4, and on hackers March 6:\n> \n> \thttp://groups.google.com/groups?hl=en&q=BUG%23599&btnG=Google+Search&meta=group%3Dcomp.databases.postgresql.*\n> \n> I was hoping our ecpg guy could take a look at it before it was applied.\n> I will apply it in a day or so. It is in the patches queue just added\n> todo:\n> \n> \thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 7 Mar 2002 22:41:44 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Time for 7.2.1?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Sorry, should have been clearer. I don't think the ecpg patch qualifies\n> as a major bug that should be fixed in 7.2.1.\n\nNot unless Meskes thinks so, anyway ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Mar 2002 23:12:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Time for 7.2.1? "
},
{
"msg_contents": "Sorry, one more patch for tsearch.\n\nPleas apply it for 7.2.1 and current CVS.\n\nPatch fixes using lc.lang instead of lc.lc_ctype.\nThank you.\n\n\nTom Lane wrote:\n> Seems like it's about time to release 7.2.1. We have enough\n> post-release patches in there that a dot-release is clearly needed\n> (particularly for the pg_dump-related issues). And it seems like\n> not too much new stuff is coming in. Does anyone have any issues\n> outstanding that need to be dealt with before 7.2.1?\n> \n> \t\t\tregards, tom lane\n> \n> \n> Post-release patches in the 7.2 branch:\n> \n> 2002-03-05 01:10 momjian\n> \n> \t* contrib/tsearch/dict/porter_english.dct (REL7_2_STABLE): Please,\n> \tapply attached patch for contrib/tsearch to 7.2.1 and current CVS.\n> \tIt fix english stemmer's problem with ending words like\n> \t'technology'.\n> \t\n> \tWe have found one more bug in english stemmer. The bug is with\n> \t'irregular' english words like 'skies' -> 'sky'. Please, apply\n> \tattached cumulative patch to 7.2.1 and current CVS instead\n> \tprevious one.\n> \t\n> \tThank to Thomas T. Thai <tom@minnesota.com> for hard testing. This\n> \tkind of bug has significance only for dump/reload database and\n> \tviewing, but searching/indexing works right.\n> \t\n> \tTeodor Sigaev\n> \n> 2002-03-05 00:13 tgl\n> \n> \t* src/backend/optimizer/prep/prepunion.c (REL7_2_STABLE): Previous\n> \tpatch to mark UNION outputs with common typmod (if any) breaks\n> \tthree-or-more-way UNIONs, as per example from Josh Berkus. Cause\n> \tis a fragile assumption that one tlist's entries will exactly match\n> \tanother. 
Restructure code to make that assumption a little less\n> \tfragile.\n> \n> 2002-03-04 22:45 ishii\n> \n> \t* src/: backend/utils/adt/timestamp.c,\n> \ttest/regress/expected/timestamp.out,\n> \ttest/regress/expected/timestamptz.out (REL7_2_STABLE): A backport\n> \tpatch:\n> \t\n> \tFix bug in extract/date_part for milliseconds/miscroseconds and\n> \ttimestamp/timestamptz combo. Now extract/date_part returns\n> \tseconds*1000 or 1000000 + fraction part as the manual stats. \n> \tregression test are also fixed.\n> \t\n> \tSee the thread in pgsql-hackers:\n> \t\n> \tSubject: Re: [HACKERS] timestamp_part() bug? Date: Sat, 02 Mar 2002\n> \t11:29:53 +0900\n> \n> 2002-03-04 12:47 tgl\n> \n> \t* doc/FAQ_Solaris (REL7_2_STABLE): Update FAQ_Solaris with info\n> \tabout gcc 2.95.1 problems and how to work around 64-bit vsnprintf\n> \tbug.\n> \n> 2002-02-27 18:17 tgl\n> \n> \t* src/backend/tcop/postgres.c (REL7_2_STABLE): Back-patch fix for\n> \terrors reported at transaction end.\n> \n> 2002-02-26 20:47 ishii\n> \n> \t* src/backend/commands/copy.c (REL7_2_STABLE): Back-patch fix for\n> \tfollowings:\n> \t\n> \tFix bug in COPY FROM when DELIMITER is not in ASCII range. See\n> \tpgsql-bugs/pgsql-hackers discussion \"COPY FROM is not 8bit clean\"\n> \taround 2002/02/26 for more details -- Tatsuo Ishii\n> \n> 2002-02-26 18:48 tgl\n> \n> \t* src/: backend/commands/command.c, backend/commands/explain.c,\n> \tbackend/executor/functions.c, backend/executor/spi.c,\n> \tbackend/nodes/copyfuncs.c, backend/nodes/equalfuncs.c,\n> \tbackend/nodes/readfuncs.c, backend/parser/analyze.c,\n> \tbackend/tcop/dest.c, backend/tcop/postgres.c,\n> \tbackend/tcop/pquery.c, backend/tcop/utility.c,\n> \tinclude/commands/command.h, include/nodes/parsenodes.h,\n> \tinclude/tcop/dest.h, include/tcop/pquery.h, include/tcop/utility.h\n> \t(REL7_2_STABLE): Back-patch fix for command completion report\n> \thandling. 
This is primarily needed so that INSERTing a row still\n> \treports the row's OID even when there are ON INSERT rules firing\n> \tadditional queries.\n> \n> 2002-02-25 16:37 tgl\n> \n> \t* src/bin/psql/command.c (REL7_2_STABLE): Tweak psql's \\connect\n> \tcommand to not downcase unquoted database and user names. This is\n> \ta temporary measure to allow backwards compatibility with 7.2 and\n> \tearlier pg_dump. 7.2.1 and later pg_dump will double-quote mixed\n> \tcase names in \\connect. Once we feel that older dumps are not a\n> \tproblem anymore, we can revert this change and treat \\connect\n> \targuments as normal SQL identifiers.\n> \n> 2002-02-25 15:07 momjian\n> \n> \t* src/backend/libpq/auth.c (REL7_2_STABLE): Fix for PAM error\n> \tmessage display:\n> \t\n> \t> and that the right fix is to make each of the subsequent calls be\n> \tin\n> \t> this same pattern, not to try to emulate their nonsensical style.\n> \t\n> \tDominic J. Eidson\n> \n> 2002-02-25 11:22 thomas\n> \n> \t* src/backend/utils/adt/datetime.c (REL7_2_STABLE): Add a large\n> \tnumber of time zones to the lookup table. Fix a few\n> \tapparently-wrong TZ vs DTZ declarations. Same patch as added to\n> \tHEAD.\n> \n> 2002-02-22 10:40 momjian\n> \n> \t* src/include/libpq/pqsignal.h (REL7_2_STABLE): We had a problem\n> \twith to compile pgsql-7.2 under SW-8.0. In the mailing lists I\n> \tfound no informations.\tSee note for further informations.\n> \t\n> \tAdd missing AuthBlockSig.\n> \t\n> \tregards Heiko\n> \n> 2002-02-22 08:02 momjian\n> \n> \t* doc/: FAQ_russian, src/FAQ/FAQ_russian.html (REL7_2_STABLE):\n> \tBACKPATCH:\n> \t\n> \tAdd Russian FAQ to 7.2.1. Why not?\n> \n> 2002-02-22 01:08 momjian\n> \n> \t* contrib/btree_gist/btree_gist.c (REL7_2_STABLE): BACKPATCH:\n> \t\n> \tPlease, apply attached patch of contrib/btree_gist to 7.2.1 and\n> \tcurrent cvs. 
The patch fixes memory leak during creation GiST\n> \tindex on timestamp column.\n> \t\n> \tThank you.\n> \t\n> \t-- Teodor Sigaev teodor@stack.net\n> \n> 2002-02-19 17:19 tgl\n> \n> \t* src/backend/utils/adt/cash.c (REL7_2_STABLE): Avoid failures in\n> \tcash_out and cash_words for INT_MIN. Also, 'fourty' -> 'forty'.\n> \n> 2002-02-18 11:04 tgl\n> \n> \t* src/backend/commands/analyze.c (REL7_2_STABLE): Replace\n> \tnumber-of-distinct-values estimator equation, per recent pghackers\n> \tdiscussion.\n> \n> 2002-02-17 23:12 ishii\n> \n> \t* src/bin/pgaccess/lib/tables.tcl (REL7_2_STABLE): Fix\n> \tkanji-coversion key binding. This has been broken since 7.1 Per\n> \tYoshinori Ariie's report.\n> \n> 2002-02-17 08:29 momjian\n> \n> \t* doc/src/sgml/ref/alter_table.sgml: Fix SGML typo in previous\n> \tpatch.\n> \n> 2002-02-17 06:50 momjian\n> \n> \t* doc/src/sgml/ref/alter_table.sgml: I think it's important that\n> \tit's actually documented that they can add primary keys after the\n> \tfact!\n> \t\n> \tAlso, we need to add regression tests for alter table / add primary\n> \tkey and alter table / drop constraint.\tThese shouldn't be added\n> \tuntil 7.3 tho methinks...\n> \t\n> \tChris\n> \n> 2002-02-16 18:45 momjian\n> \n> \t* doc/src/sgml/ref/alter_table.sgml: Clarify params to ALTER TABLE\n> \tto clearly show single parameters.\n> \t\n> \te.g. table contraint definition -> table_constraint_definition.\n> \n> 2002-02-15 12:46 petere\n> \n> \t* src/interfaces/ecpg/preproc/pgc.l: Remove warning about automatic\n> \tinclusion of sqlca.\n> \n> 2002-02-14 10:24 tgl\n> \n> \t* src/: backend/commands/command.c, backend/executor/spi.c,\n> \tbackend/utils/mmgr/portalmem.c, include/utils/portal.h: Ensure that\n> \ta cursor is scanned under the same scanCommandId it was originally\n> \tcreated with, so that the set of visible tuples does not change as\n> \ta result of other activity. This essentially makes PG cursors\n> \tINSENSITIVE per the SQL92 definition. 
See bug report of 13-Feb-02.\n> \n> 2002-02-13 14:32 tgl\n> \n> \t* doc/src/sgml/ref/createuser.sgml: Point out that --adduser\n> \tactually makes the new user a superuser. This was mentioned on the\n> \tman page for the underlying CREATE USER command, but it should be\n> \texplained here too.\n> \n> 2002-02-12 18:39 tgl\n> \n> \t* src/backend/port/dynloader/: README.dlfcn.aix, aix.h, bsdi.h,\n> \tdgux.h, freebsd.h, irix5.h, linux.h, netbsd.h, openbsd.h, osf.h,\n> \tsco.h, solaris.h, sunos4.h, svr4.h, univel.h, unixware.h, win.h:\n> \tUse RTLD_NOW, not RTLD_LAZY, as binding mode for dlopen() on all\n> \tplatforms. This restores the Linux behavior to what it was in PG\n> \t7.0 and 7.1, and causes other platforms to agree. (Other\n> \twell-tested platforms like HPUX were doing it this way already.) \n> \tPer pghackers discussion over the past month or so.\n> \n> 2002-02-12 17:35 tgl\n> \n> \t* doc/FAQ_Solaris: Add warning not to use /usr/ucb/cc on Solaris.\n> \n> 2002-02-12 17:25 momjian\n> \n> \t* doc/src/sgml/advanced.sgml: Fix tutorial for references problem,\n> \tfrom rainer.tammer@spg.schulergroup.com\n> \n> 2002-02-12 16:25 tgl\n> \n> \t* doc/src/sgml/ref/copy.sgml, src/backend/commands/copy.c: Modify\n> \tCOPY TO to emit carriage returns and newlines as backslash escapes\n> \t(backslash-r, backslash-n) for protection against\n> \tnewline-conversion munging. In future we will also tweak COPY\n> \tFROM, but this part of the change should be backwards-compatible. \n> \tPer pghackers discussion. Also, update COPY reference page to\n> \tdescribe the backslash conversions more completely and accurately.\n> \n> 2002-02-11 18:25 momjian\n> \n> \t* doc/src/sgml/wal.sgml: Update wal files computation\n> \tdocumentation.\n> \n> 2002-02-11 17:41 tgl\n> \n> \t* src/backend/access/gist/gist.c: Tweak GiST code to work correctly\n> \ton machines where 8-byte alignment of pointers is required. Patch\n> \tfrom Teodor Sigaev per pghackers discussion. 
It's an ugly kluge\n> \tbut avoids forcing initdb; we'll put a better fix into 7.3 or\n> \tlater.\n> \n> 2002-02-11 16:38 petere\n> \n> \t* src/backend/port/dynloader/freebsd.h: Fix for old FreeBSD\n> \tversions that don't have RTLD_GLOBAL\n> \n> 2002-02-11 15:10 tgl\n> \n> \t* src/backend/executor/: nodeIndexscan.c, nodeTidscan.c: Repair\n> \tproblems with EvalPlanQual where target table is scanned as inner\n> \tindexscan (ie, one with runtime keys).\tExecIndexReScan must\n> \tcompute or recompute runtime keys even if we are rescanning in the\n> \tEPQ case. TidScan seems to have comparable problems. Per bug\n> \tnoted by Barry Lind 11-Feb-02.\n> \n> 2002-02-11 10:19 momjian\n> \n> \t* contrib/pg_upgrade/pg_upgrade: Fix flag handling of pg_upgrade.\n> \n> 2002-02-10 19:18 tgl\n> \n> \t* src/bin/pg_dump/: common.c, pg_backup.h, pg_backup_archiver.c,\n> \tpg_dump.c, pg_dump.h, pg_dumpall.sh: Be more wary about mixed-case\n> \tdatabase names and user names.\tGet the CREATE DATABASE command\n> \tright in pg_dump -C case.\n> \n> 2002-02-10 19:14 tgl\n> \n> \t* doc/src/sgml/ref/: pg_dump.sgml, pg_restore.sgml: pg_dump and\n> \tpg_restore man pages need to mention that one should restore into a\n> \tvirgin database, ie, one created from template0, if there are any\n> \tsite-local additions in template1.\n> \n> 2002-02-10 17:56 tgl\n> \n> \t* src/backend/storage/file/fd.c: Don't Assert() that fsync() and\n> \tclose() never fail; I have seen this crash on Solaris when over\n> \tdisk quota. Instead, report such failures via elog(DEBUG).\n> \n> 2002-02-08 11:30 momjian\n> \n> \t* src/backend/utils/init/findbe.c: Move sys/types.h to top, for\n> \thiroyuki hanai/ FreeBSD.\n> \n> 2002-02-08 09:47 momjian\n> \n> \t* contrib/mysql/my2pg.pl: Upgrade my2pg version 1.23.\n> \n> 2002-02-07 17:20 tgl\n> \n> \t* src/backend/postmaster/pgstat.c: pgstat's truncation of query\n> \tstring needs to be multibyte-aware. 
Patch from sugita@sra.co.jp.\n> \n> 2002-02-07 17:11 tgl\n> \n> \t* contrib/: intarray/_int.c, tsearch/README.tsearch,\n> \ttsearch/gistidx.c, tsearch/tsearch.sql.in: Repair some problems in\n> \tGIST-index contrib modules. Patch from Teodor Sigaev\n> \t<teodor@stack.net>.\n> \n> 2002-02-06 19:27 inoue\n> \n> \t* src/backend/tcop/utility.c: Removed a check for REINDEX TABLE.\n> \n> 2002-02-06 15:29 petere\n> \n> \t* doc/Makefile: Fix for parallel make\n> \n> 2002-02-06 12:27 tgl\n> \n> \t* src/bin/pg_dump/: pg_backup_archiver.c, pg_dump.c: Fix failure to\n> \treconnect as sequence's owner before issuing setval().\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n> \n\n\n-- \nTeodor Sigaev\nteodor@stack.net",
"msg_date": "Mon, 11 Mar 2002 19:39:45 +0300",
"msg_from": "Teodor Sigaev <teodor@stack.net>",
"msg_from_op": false,
"msg_subject": "Re: Time for 7.2.1?"
},
{
"msg_contents": "\nApplied to current and 7.2.X. Thanks.\n\n(No delay for /contrib commits from maintainers.)\n\n---------------------------------------------------------------------------\n\nTeodor Sigaev wrote:\n> Sorry, one more patch for tsearch.\n> \n> Pleas apply it for 7.2.1 and current CVS.\n> \n> Patch fixes using lc.lang instead of lc.lc_ctype.\n> Thank you.\n> \n> \n> Tom Lane wrote:\n> > Seems like it's about time to release 7.2.1. We have enough\n> > post-release patches in there that a dot-release is clearly needed\n> > (particularly for the pg_dump-related issues). And it seems like\n> > not too much new stuff is coming in. Does anyone have any issues\n> > outstanding that need to be dealt with before 7.2.1?\n> > \n> > \t\t\tregards, tom lane\n> > \n> > [snip]\n> \n> \n> -- \n> Teodor Sigaev\n> teodor@stack.net\n> \n\n[ application/gzip is not supported, skipping... ]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Mar 2002 11:55:41 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Time for 7.2.1?"
},
{
"msg_contents": "is there any further word on 7.2.1, at this point? haven't seen mention \nof it on the list in a while? is it still waiting on something big?\n\n-tfo\n\nBruce Momjian wrote:\n> Applied to current and 7.2.X. Thanks.\n> \n> (No delay for /contrib commits from maintainers.)\n\n",
"msg_date": "Fri, 15 Mar 2002 11:22:35 -0600",
"msg_from": "\"Thomas F. O'Connell\" <tfo@monsterlabs.com>",
"msg_from_op": false,
"msg_subject": "Re: Time for 7.2.1?"
},
{
"msg_contents": "\"Thomas F. O'Connell\" <tfo@monsterlabs.com> writes:\n> is there any further word on 7.2.1, at this point? haven't seen mention \n> of it on the list in a while? is it still waiting on something big?\n\nWell, we were gonna release it last weekend, but now it's waiting on\nsequence fixes (currently being tested). And Lockhart may also wish to\nhold it up while he looks at the recently reported timestamp_part\nproblem. (Thomas, are you considering backpatching that?) One way\nor another I'd expect it next week sometime.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Mar 2002 12:57:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Time for 7.2.1? "
},
{
"msg_contents": "> Well, we were gonna release it last weekend, but now it's waiting on\n> sequence fixes (currently being tested). And Lockhart may also wish to\n> hold it up while he looks at the recently reported timestamp_part\n> problem. (Thomas, are you considering backpatching that?) One way\n> or another I'd expect it next week sometime.\n\nI'll consider backpatching once I have a chance to dive in. \n\nIt is somewhat complicated by the fact that my code tree is pretty\nmassively changed in this area as I implement an int64-based date/time\nstorage alternative to the float64 scheme we use now. The alternative\nwould be enabled with something like #ifdef HAVE_INT64_TIMESTAMP.\nBenefits would include having a predictable precision behavior for all\nallowed dates and times.\n\n - Thomas\n",
"msg_date": "Fri, 15 Mar 2002 10:06:57 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Time for 7.2.1?"
},
{
"msg_contents": "Thomas Lockhart <thomas@fourpalms.org> writes:\n> It is somewhat complicated by the fact that my code tree is pretty\n> massively changed in this area as I implement an int64-based date/time\n> storage alternative to the float64 scheme we use now. The alternative\n> would be enabled with something like #ifdef HAVE_INT64_TIMESTAMP.\n> Benefits would include having a predictable precision behavior for all\n> allowed dates and times.\n\nInteresting. But if this is just an #ifdef, I can see some serious\nproblems coming up the first time someone runs a backend compiled with\none set of timestamp code in a database created with the other. May\nI suggest that the timestamp representation be identified in a field\nadded to pg_control? That's how we deal with other options that\naffect database contents ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Mar 2002 14:44:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Time for 7.2.1? "
},
{
"msg_contents": "I believe we've now committed fixes for all the \"must fix\" items there\nwere for 7.2.1. Does anyone have any reasons to hold up 7.2.1 more,\nor are we ready to go?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 17 Mar 2002 15:14:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Time for 7.2.1? "
},
{
"msg_contents": "Tom Lane wrote:\n> I believe we've now committed fixes for all the \"must fix\" items there\n> were for 7.2.1. Does anyone have any reasons to hold up 7.2.1 more,\n> or are we ready to go?\n\nI need to brand 7.2.1 --- will do tomorrow.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 17 Mar 2002 17:41:29 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Time for 7.2.1?"
},
{
"msg_contents": "\nOK, I have branded 7.2.1 and updated HISTORY/release.sgml. Do we want\nany special text about the sequence bug fix, or just mention in the\nannouncement that all 7.2 people should upgrade?\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> Tom Lane wrote:\n> > I believe we've now committed fixes for all the \"must fix\" items there\n> > were for 7.2.1. Does anyone have any reasons to hold up 7.2.1 more,\n> > or are we ready to go?\n> \n> I need to brand 7.2.1 --- will do tomorrow.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 18 Mar 2002 18:12:50 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Time for 7.2.1?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, I have branded 7.2.1 and updated HISTORY/release.sgml. Do we want\n> any special text about the sequence bug fix, or just mention in the\n> announcement that all 7.2 people should upgrade?\n\nThe first change item should maybe be more explicit, say\n\n\tEnsure that sequence counters do not go backwards after a crash\n\nOtherwise I think it's fine. BTW, the bug exists in 7.1 as well.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Mar 2002 18:51:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Time for 7.2.1? "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, I have branded 7.2.1 and updated HISTORY/release.sgml. Do we want\n> > any special text about the sequence bug fix, or just mention in the\n> > announcement that all 7.2 people should upgrade?\n> \n> The first change item should maybe be more explicit, say\n> \n> \tEnsure that sequence counters do not go backwards after a crash\n> \n> Otherwise I think it's fine. BTW, the bug exists in 7.1 as well.\n\nDone.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 18 Mar 2002 19:11:46 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Time for 7.2.1?"
},
{
"msg_contents": "On March 18, 2002 06:12 pm, Bruce Momjian wrote:\n> OK, I have branded 7.2.1 and updated HISTORY/release.sgml. Do we want\n> any special text about the sequence bug fix, or just mention in the\n> announcement that all 7.2 people should upgrade?\n\nDoes this mean that I can start putting fixes and upgrades into PyGreSQL?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Tue, 19 Mar 2002 04:55:36 -0500",
"msg_from": "\"D'Arcy J.M. Cain\" <darcy@druid.net>",
"msg_from_op": false,
"msg_subject": "Re: Time for 7.2.1?"
},
{
"msg_contents": "Bruce,\n\nwe have something to add. It's quite important for users of our tsearch module.\nToo late ?\n\n\nOn Mon, 18 Mar 2002, Bruce Momjian wrote:\n\n>\n> OK, I have branded 7.2.1 and updated HISTORY/release.sgml. Do we want\n> any special text about the sequence bug fix, or just mention in the\n> announcement that all 7.2 people should upgrade?\n>\n> ---------------------------------------------------------------------------\n>\n> Bruce Momjian wrote:\n> > Tom Lane wrote:\n> > > I believe we've now committed fixes for all the \"must fix\" items there\n> > > were for 7.2.1. Does anyone have any reasons to hold up 7.2.1 more,\n> > > or are we ready to go?\n> >\n> > I need to brand 7.2.1 --- will do tomorrow.\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n> >\n>\n>\n\n",
"msg_date": "Tue, 19 Mar 2002 15:00:36 +0300 (MSK)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: Time for 7.2.1?"
},
{
"msg_contents": "D'Arcy J.M. Cain wrote:\n> On March 18, 2002 06:12 pm, Bruce Momjian wrote:\n> > OK, I have branded 7.2.1 and updated HISTORY/release.sgml. Do we want\n> > any special text about the sequence bug fix, or just mention in the\n> > announcement that all 7.2 people should upgrade?\n> \n> Does this mean that I can start putting fixes and upgrades into PyGreSQL?\n\nSure, main CVS is open for 7.3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 19 Mar 2002 07:11:36 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Time for 7.2.1?"
},
{
"msg_contents": "Oleg Bartunov wrote:\n> Bruce,\n> \n> we have something to add. It's quite important for users of our tsearch module.\n> Too late ?\n\nFor 7.2.1, I don't think it is too late but I don't think we can wait\ndays.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 19 Mar 2002 07:12:37 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Time for 7.2.1?"
},
{
"msg_contents": "On Tue, 19 Mar 2002, Bruce Momjian wrote:\n\n> Oleg Bartunov wrote:\n> > Bruce,\n> >\n> > we have something to add. It's quite important for users of our tsearch module.\n> > Too late ?\n>\n> For 7.2.1, I don't think it is too late but I don't think we can wait\n> days.\n\nI'll do a wrap on Friday, if Oleg wants to get his stuff in and tested?\n\n\n",
"msg_date": "Tue, 19 Mar 2002 09:10:40 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Time for 7.2.1?"
}
]
[
{
"msg_contents": "This seems strange:\n\nnconway=> create table ddd\\\\bar;\nInvalid command \\. Try \\? for help.\nnconway-> select 1;\nERROR: parser: parse error at or near \"select\"\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "07 Mar 2002 22:44:11 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": true,
"msg_subject": "bug in psql"
},
{
"msg_contents": "On 07 Mar 2002 22:44:11 -0500\nNeil Conway <nconway@klamath.dyndns.org> wrote:\n\n> This seems strange:\n> \n> nconway=> create table ddd\\\\bar;\n> Invalid command \\. Try \\? for help.\nOk, I get this error the same.\n\n> nconway-> select 1;\n> ERROR: parser: parse error at or near \"select\"\nThis error I DO NOT GET. Under 7.1.1, 7.2RC1 or 7.2\n \n> Cheers,\n> \n> Neil\n> \n> -- \n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n> \n\n\nGB\n\n-- \nGB Clark II | Roaming FreeBSD Admin\ngclarkii@VSServices.COM | General Geek \n CTHULU for President - Why choose the lesser of two evils?\n",
"msg_date": "Fri, 8 Mar 2002 14:38:24 -0600",
"msg_from": "GB Clark <postgres@vsservices.com>",
"msg_from_op": false,
"msg_subject": "Re: bug in psql"
},
{
"msg_contents": "> This seems strange:\n> \n> nconway=> create table ddd\\\\bar;\n> Invalid command \\. Try \\? for help.\n> nconway-> select 1;\n\nNotice ..^ secondary prompt here\nTry \\e between these two commands to see the\ncurrent query buffer. \n\n\n> ERROR: parser: parse error at or near \"select\"\n> \n\nBecause your query is actually:\nCREATE TABLE ddd SELECT 1;\n\n\n\nTry this:\n\nCREATE TABLE ddd \\d\n(i int)\n\\e\n\n\nOR\n\nCREATE TABLE ddd\\\\bar;\n\\e\n\n\nOr, are you saying you want to be able to create a table\nwith a backslash in the name by escaping the backslash?\n\nIf so, you can use:\n\nCREATE TABLE \"ddd\\bar\" (i int);\n\n\n",
"msg_date": "Sun, 10 Mar 2002 00:33:13 +0000 (UTC)",
"msg_from": "missive@frontiernet.net (Lee Harr)",
"msg_from_op": false,
"msg_subject": "Re: bug in psql"
}
]